diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
index 3ec01b5cb..3c88c4cb1 100644
--- a/.github/CONTRIBUTING.md
+++ b/.github/CONTRIBUTING.md
@@ -1,265 +1,270 @@
# Contributing to Terraform

-**First:** if you're unsure or afraid of _anything_, just ask
-or submit the issue or pull request anyways. You won't be yelled at for
-giving your best effort. The worst that can happen is that you'll be
-politely asked to change something. We appreciate any sort of contributions,
-and don't want a wall of rules to get in the way of that.
+---

-However, for those individuals who want a bit more guidance on the
-best way to contribute to the project, read on. This document will cover
-what we're looking for. By addressing all the points we're looking for,
-it raises the chances we can quickly merge or address your contributions.
+This repository contains only Terraform core, which includes the command line
+interface and the main graph engine. Providers are implemented as plugins that
+each have their own repository in
+[the `terraform-providers` organization](https://github.com/terraform-providers)
+on GitHub. Instructions for developing each provider are in the associated
+README file. For more information, see
+[the provider development overview](https://www.terraform.io/docs/plugins/provider.html).

-Specifically, we have provided checklists below for each type of issue and pull
-request that can happen on the project. These checklists represent everything
-we need to be able to review and respond quickly.
+---

-## HashiCorp, Official, and Community Providers
+Terraform is an open source project and we appreciate contributions of various
+kinds, including bug reports and fixes, enhancement proposals, documentation
+updates, and user experience feedback.

-We separate providers out into what we call "HashiCorp Providers", "Partner Providers" and "Community Providers".
+To record a bug report or enhancement proposal, or to give any other product
+feedback, please [open a GitHub issue](https://github.com/hashicorp/terraform/issues/new/choose)
+using the most appropriate issue template. Please do fill in all of the
+information the issue templates request; we've seen from experience that this
+maximizes the chance that we'll be able to act on your feedback.

-HashiCorp providers are providers that we dedicate full time engineers to
-improving, supporting the latest features, and fixing bugs. These are providers
-we understand deeply and are confident we have the resources to manage
-ourselves.
+Please note that we _don't_ use GitHub issues for usage questions. If you have
+a question about how to use Terraform in general or how to solve a specific
+problem with Terraform, please start a topic in
+[the Terraform community forum](https://discuss.hashicorp.com/c/terraform-core),
+where both Terraform team members and community members participate in
+discussions.

-Partner providers are providers where we depend on our partners to
-contribute fixes and enhancements to improve. HashiCorp will run automated
-tests and ensure these providers continue to work, but will not dedicate full
-time engineers to add new features to these providers. These providers are
-available in official Terraform releases, but the functionality is primarily
-contributed.
+**All communication on GitHub, the community forum, and other HashiCorp-provided +communication channels is subject to +[the HashiCorp community guidelines](https://www.hashicorp.com/community-guidelines).** -All HashiCorp and Partner providers can be found in the (terraform-providers github organization)[https://github.com/terraform-providers]. -Any provider issues should be opened in the provider's repository. +## Terraform CLI/Core Development Environment -Our testing standards are the same for both HashiCorp and Official providers, -and HashiCorp runs full acceptance test suites for every provider nightly to -ensure Terraform remains stable. +This repository contains the source code for Terraform CLI, which is the main +component of Terraform that contains the core Terraform engine. -Community Providers are providers that are neither maintained nor tested by -HashiCorp. We can make no promises that these providers will work with any given -version of Terraform. These providers are not automatically installed by -`terraform init` and instead require manual installation. +The HashiCorp-maintained Terraform providers are also open source but are not +in this repository; instead, they are each in their own repository in +[the `terraform-providers` organization](https://github.com/terraform-providers) +on GitHub. -We make the distinction between these types of providers to help -highlight the vast amounts of community effort that goes in to making Terraform -great, and to help contributors better understand the role HashiCorp employees -play in the various areas of the code base. +This repository also does not include the source code for some other parts of +the Terraform product including Terraform Cloud, Terraform Enterprise, and the +Terraform Registry. Those components are not open source, though if you have +feedback about them (including bug reports) please do feel free to +[open a GitHub issue on this repository](https://github.com/hashicorp/terraform/issues/new/choose). -## Issues +--- -### Issue Reporting Checklists +If you wish to work on the Terraform CLI source code, you'll first need to +install the [Go](https://golang.org/) compiler and the version control system +[Git](https://git-scm.com/). -We welcome feature requests and bug reports. Below you'll find checklists with -guidelines for well-formed issues of each type. +At this time the Terraform development environment is targeting only Linux and +Mac OS X systems. While Terraform itself is compatible with Windows, +unfortunately the unit test suite currently contains Unix-specific assumptions +around maximum path lengths, path separators, etc. -#### Bug Reports +Refer to the file [`.go-version`](.go-version) to see which version of Go +Terraform is currently built with. Other versions will often work, but if you +run into any build or testing problems please try with the specific Go version +indicated. You can optionally simplify the installation of multiple specific +versions of Go on your system by installing +[`goenv`](https://github.com/syndbg/goenv), which reads `.go-version` and +automatically selects the correct Go version. - - [ ] __Test against latest release__: Make sure you test against the latest - released version. It is possible we already fixed the bug you're experiencing. +Use Git to clone this repository into a location of your choice. Terraform is +using [Go Modules](https://blog.golang.org/using-go-modules), and so you +should _not_ clone it inside your `GOPATH`. 
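+
+For example, a minimal sketch of the clone step (using Git's default directory
+name for the repository):
+
+```
+# any directory outside your GOPATH works; with Go Modules, Go resolves
+# dependencies from go.mod rather than from the GOPATH directory layout
+git clone https://github.com/hashicorp/terraform.git
+```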
- - [ ] __Search for possible duplicate reports__: It's helpful to keep bug - reports consolidated to one thread, so do a quick search on existing bug - reports to check if anybody else has reported the same thing. You can scope - searches by the label "bug" to help narrow things down. +Switch into the root directory of the cloned repository and build Terraform +using the Go toolchain in the standard way: - - [ ] __Include steps to reproduce__: Provide steps to reproduce the issue, - along with your `.tf` files, with secrets removed, so we can try to - reproduce it. Without this, it makes it much harder to fix the issue. +``` +cd terraform +go install . +``` - - [ ] __For panics, include `crash.log`__: If you experienced a panic, please - create a [gist](https://gist.github.com) of the *entire* generated crash log - for us to look at. Double check no sensitive items were in the log. +The first time you run the `go install` command, the Go toolchain will download +any library dependencies that you don't already have in your Go modules cache. +Subsequent builds will be faster because these dependencies will already be +available on your local disk. -#### Feature Requests +Once the compilation process succeeds, you can find a `terraform` executable in +the Go executable directory. If you haven't overridden it with the `GOBIN` +environment variable, the executable directory is the `bin` directory inside +the directory returned by the following command: - - [ ] __Search for possible duplicate requests__: It's helpful to keep requests - consolidated to one thread, so do a quick search on existing requests to - check if anybody else has reported the same thing. You can scope searches by - the label "enhancement" to help narrow things down. +``` +go env GOPATH +``` - - [ ] __Include a use case description__: In addition to describing the - behavior of the feature you'd like to see added, it's helpful to also lay - out the reason why the feature would be important and how it would benefit - Terraform users. +If you are planning to make changes to the Terraform source code, you should +run the unit test suite before you start to make sure everything is initially +passing: -#### Questions +``` +go test ./... +``` -Please do not use GitHub to ask questions! Instead: +As you make your changes, you can re-run the above command to ensure that the +tests are _still_ passing. If you are working only on a specific Go package, +you can speed up your testing cycle by testing only that single package, or +packages under a particular package prefix: - * __Search for answers in Terraform documentation__ +``` +go test ./command/... +go test ./addrs +``` - * __Ask in the Community Forum__: Use [the community forum](https://discuss.hashicorp.com/c/terraform-core) for questions not answered by the documentation. +## Acceptance Tests: Testing interactions with external services - * __Request an update to the documentation__: If you find that the - documentation is confusing or incorrect, open an issue (or a pull request) and - let us know. +Terraform's unit test suite is self-contained, using mocks and local files +to help ensure that it can run offline and is unlikely to be broken by changes +to outside systems. -### Issue Lifecycle +However, several Terraform components interact with external services, such +as the automatic provider installation mechanism, the Terraform Registry, +Terraform Cloud, etc. -1. The issue is reported. 
+There are some optional tests in the Terraform CLI codebase that _do_ interact
+with external services, which we collectively refer to as "acceptance tests".
+You can enable these by setting the environment variable `TF_ACC=1` when
+running the tests. We recommend focusing only on the specific package you
+are working on when enabling acceptance tests, both because it helps the
+test run complete faster and because you are less likely to encounter
+failures due to drift in systems unrelated to your current goal:

-2. The issue is verified and categorized by a Terraform collaborator.
-   Categorization is done via GitHub labels. We generally use a two-label
-   system of (1) issue/PR type, and (2) section of the codebase. Type is
-   usually "bug", "enhancement", "documentation", or "question", and section
-   can be any of the providers or provisioners or "core".

+```
+TF_ACC=1 go test ./internal/initwd
+```

-3. Unless it is critical, the issue is left for a period of time (sometimes
-   many weeks), giving outside contributors a chance to address the issue.

+Because the acceptance tests depend on services outside of the Terraform
+codebase, and because they are usually run only when making changes to the
+systems they cover, it is common and expected that drift in those external
+systems will cause test failures. Because of this, before working on a system
+covered by acceptance tests, it's important to first run the existing tests
+for that system in an _unchanged_ work tree and address any pre-existing
+failures, to avoid misinterpreting them as bugs in your new changes.

-4. The issue is addressed in a pull request or commit. The issue will be
-   referenced in the commit message so that the code that fixes it is clearly
-   linked.

+## Generated Code

-5. The issue is closed. Sometimes, valid issues will be closed to keep
-   the issue tracker clean. The issue is still indexed and available for
-   future viewers, or can be re-opened if necessary.

+Some files in the Terraform CLI codebase are generated. In most cases, we
+update these using `go generate`, which is the standard way to encapsulate
+code generation steps in a Go codebase.

-## Pull Requests

+```
+go generate ./...
+```

-Thank you for contributing! Here you'll find information on what to include in
-your Pull Request to ensure it is accepted quickly.

+Use `git diff` afterwards to inspect the changes and ensure that they are what
+you expected.

- * Pull requests that don't follow the guidelines will be annotated with what
-   they're missing. A community or core team member may be able to swing around
-   and help finish up the work, but these PRs will generally hang out much
-   longer until they can be completed and merged.

+Terraform includes generated Go stub code for the Terraform provider plugin
+protocol, which is defined using Protocol Buffers. Because the Protocol Buffers
+tools are not written in Go and thus cannot be automatically installed using
+`go get`, we follow a different process for generating these, which requires
+that you've already installed a suitable version of `protoc`:

-### Pull Request Lifecycle

+```
+make protobuf
+```

-1. You are welcome to submit your pull request for commentary or review before
-   it is fully completed. Please prefix the title of your pull request with
-   "[WIP]" to indicate this. It's also a good idea to include specific
-   questions or items you'd like feedback on.

+## External Dependencies

-2.
Once you believe your pull request is ready to be merged, you can remove any
-   "[WIP]" prefix from the title and a core team member will review. Follow
-   [the checklists below](#checklists-for-contribution) to help ensure that
-   your contribution will be merged quickly.

+Terraform uses Go Modules for dependency management, but currently relies on
+"vendoring" to include copies of all of the external library dependencies
+in the Terraform repository to allow builds to complete even if third-party
+dependency sources are unavailable.

-3. One of Terraform's core team members will look over your contribution and
-   either provide comments letting you know if there is anything left to do. We
-   do our best to provide feedback in a timely manner, but it may take some
-   time for us to respond.

+Our dependency licensing policy for Terraform excludes proprietary licenses
+and "copyleft"-style licenses. We accept the common Mozilla Public License v2,
+MIT License, and BSD licenses. We will consider other open source licenses
+in similar spirit to those three, but if you plan to include such a dependency
+in a contribution, we'd recommend opening a GitHub issue first to discuss what
+you intend to implement and what dependencies it will require so that the
+Terraform team can review the relevant licenses to confirm whether they meet
+our licensing needs.

-4. Once all outstanding comments and checklist items have been addressed, your
-   contribution will be merged! Merged PRs will be included in the next
-   Terraform release. The core team takes care of updating the CHANGELOG as
-   they merge.

+If you need to add a new dependency to Terraform or update the selected version
+for an existing one, use `go get` from the root of the Terraform repository
+as follows:

-5. In rare cases, we might decide that a PR should be closed. We'll make sure
-   to provide clear reasoning when this happens.

+```
+go get github.com/hashicorp/hcl/v2@2.0.0
+```

-### Checklists for Contribution

+This command will download the requested version (2.0.0 in the above example)
+and record that version selection in the `go.mod` file. It will also record
+checksums for the module in the `go.sum` file.

-There are several different kinds of contribution, each of which has its own
-standards for a speedy review. The following sections describe guidelines for
-each type of contribution.

+To complete the dependency change, clean up any redundancy in the module
+metadata files and resynchronize the `vendor` directory with the new package
+selections by running the following commands:

-#### Documentation Update

+```
+go mod tidy
+go mod vendor
+```

-Because [Terraform's website][website] is in the same repo as the code, it's
-easy for anybody to help us improve our docs.

+To ensure that the vendoring has worked correctly, be sure to run the unit
+test suite at least once in _vendoring_ mode, where Go will use the vendored
+dependencies to build the test programs:

- - [ ] __Reasoning for docs update__: Including a quick explanation for why the
-   update needed is helpful for reviewers.
- - [ ] __Relevant Terraform version__: Is this update worth deploying to the
-   site immediately, or is it referencing an upcoming version of Terraform and
-   should get pushed out with the next release?

+```
+go test -mod=vendor ./...
+```

-#### New Provider

+Because dependency changes affect a shared, top-level file, they are more likely
+than some other change types to become conflicted with other proposed changes
+during the code review process.
For that reason, and to make dependency changes +more visible in the change history, we prefer to record dependency changes as +separate commits that include only the results of the above commands and the +minimal set of changes to Terraform's own code for compatibility with the +new version: -Implementing a new provider gives Terraform the ability to manage resources in -a whole new API. It's a larger undertaking, but brings major new functionality -into Terraform. +``` +git add go.mod go.sum vendor +git commit -m "vendor: go get github.com/hashicorp/hcl/v2@2.0.0" +``` -Terraform Providers are external plugins, not in the Terraform codebase. Please -see the [Provider Development Program](https://www.terraform.io/guides/terraform-provider-development-program.html) documentation if you are interested in -submitting a new provider. +You can then make use of the new or updated dependency in new code added in +subsequent commits. -#### Core Bugfix/Enhancement +## Proposing a Change -We are always happy when any developer is interested in diving into Terraform's -core to help out! Here's what we look for in smaller Core PRs. +If you'd like to contribute a code change to Terraform, we'd love to review +a GitHub pull request. - - [ ] __Unit tests__: Terraform's core is covered by hundreds of unit tests at - several different layers of abstraction. Generally the best place to start - is with a "Context Test". These are higher level test that interact - end-to-end with most of Terraform's core. They are divided into test files - for each major action (plan, apply, etc.). Getting a failing test is a great - way to prove out a bug report or a new enhancement. With a context test in - place, you can work on implementation and lower level unit tests. Lower - level tests are largely context dependent, but the Context Tests are almost - always part of core work. - - [ ] __Documentation updates__: If the core change involves anything that - needs to be reflected in our documentation, you can make those changes in - the same PR. The [Terraform website][website] source is in this repo and - includes instructions for getting a local copy of the site up and running if - you'd like to preview your changes. - - [ ] __Well-formed Code__: Do your best to follow existing conventions you - see in the codebase, and ensure your code is formatted with `go fmt`. (The - Travis CI build will fail if `go fmt` has not been run on incoming code.) - The PR reviewers can help out on this front, and may provide comments with - suggestions on how to improve the code. +In order to be respectful of the time of community contributors, we prefer to +discuss potential changes in GitHub issues prior to implementation. That will +allow us to give design feedback up front and set expectations about the scope +of the change, and, for larger changes, how best to approach the work such that +the Terraform team can review it and merge it along with other concurrent work. -#### Core Feature +If the bug you wish to fix or enhancement you wish to implement isn't already +covered by a GitHub issue that contains feedback from the Terraform team, +please do start a discussion (either in +[a new GitHub issue](https://github.com/hashicorp/terraform/issues/new/choose) +or an existing one, as appropriate) before you invest significant development +time. If you mention your intent to implement the change described in your +issue, the Terraform team can prioritize including implementation-related +feedback in the subsequent discussion. 
-If you're interested in taking on a larger core feature, it's a good idea to -get feedback early and often on the effort. +At this time, we do not have a formal process for reviewing outside proposals +that significantly change Terraform's workflow, its primary usage patterns, +and its language. While we do hope to put such a thing in place in the future, +we wish to be up front with potential contributors that unfortunately we are +unlikely to be able to give prompt feedback for large proposals that could +entail a significant design phase, though we are still interested to hear about +your use-cases so that we can consider ways to meet them as part of other +larger projects. - - [ ] __Early validation of idea and implementation plan__: Terraform's core - is complicated enough that there are often several ways to implement - something, each of which has different implications and tradeoffs. Working - through a plan of attack with the team before you dive into implementation - will help ensure that you're working in the right direction. Opening a GitHub - issue, or commenting on an existing issue, is a great way to get these - conversations started. - - [ ] __Unit tests__: Terraform's core is covered by hundreds of unit tests at - several different layers of abstraction. Generally the best place to start - is with a "Context Test". These are higher level test that interact - end-to-end with most of Terraform's core. They are divided into test files - for each major action (plan, apply, etc.). Getting a failing test is a great - way to prove out a bug report or a new enhancement. With a context test in - place, you can work on implementation and lower level unit tests. Lower - level tests are largely context dependent, but the Context Tests are almost - always part of core work. - - [ ] __Documentation updates__: If the core change involves anything that - needs to be reflected in our documentation, you can make those changes in - the same PR. The [Terraform website][website] source is in this repo and - includes instructions for getting a local copy of the site up and running if - you'd like to preview your changes. - - [ ] __Well-formed Code__: Do your best to follow existing conventions you - see in the codebase, and ensure your code is formatted with `go fmt`. (The - Travis CI build will fail if `go fmt` has not been run on incoming code.) - The PR reviewers can help out on this front, and may provide comments with - suggestions on how to improve the code. +Most changes will involve updates to the test suite, and changes to Terraform's +documentation. The Terraform team can advise on different testing strategies +for specific scenarios, and may ask you to revise the specific phrasing of +your proposed documentation prose to match better with the standard "voice" of +Terraform's documentation. -### Writing Acceptance Tests - -#### Acceptance Tests Often Cost Money to Run - -Because acceptance tests create real resources, they often cost money to run. -Because the resources only exist for a short period of time, the total amount -of money required is usually a relatively small. Nevertheless, we don't want -financial limitations to be a barrier to contribution, so if you are unable to -pay to run acceptance tests for your contribution, simply mention this in your -pull request. We will happily accept "best effort" implementations of -acceptance tests and run them for you on our side. 
This might mean that your PR
-takes a bit longer to merge, but it most definitely is not a blocker for
-contributions.
-
-#### Running an Acceptance Test
-
-Acceptance tests can be run using the `testacc` target in the Terraform
-`Makefile`. The individual tests to run can be controlled using a regular
-expression. Prior to running the tests provider configuration details such as
-access keys must be made available as environment variables.
-
-
-[website]: https://github.com/hashicorp/terraform/tree/master/website
-[acctests]: https://github.com/hashicorp/terraform#acceptance-tests
-[community forum]: https://discuss.hashicorp.com/c/terraform-core
-[ml]: https://groups.google.com/group/terraform-tool
+This repository is primarily maintained by a small team at HashiCorp alongside
+their other responsibilities, so unfortunately we cannot always respond
+promptly to pull requests, particularly if they do not relate to an existing
+GitHub issue where the Terraform team has already participated. We _are_
+grateful for all contributions, however, and will give feedback on pull
+requests as soon as we're able.
diff --git a/.github/ISSUE_TEMPLATE/config.yml b/.github/ISSUE_TEMPLATE/config.yml
new file mode 100644
index 000000000..3962f0fe9
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/config.yml
@@ -0,0 +1,11 @@
+blank_issues_enabled: false
+contact_links:
+  - name: Provider-related Feedback and Questions
+    url: https://github.com/terraform-providers
+    about: Each provider (e.g. AWS, Azure, GCP, Oracle, K8S, etc.) has its own repository; any provider-related issues or questions should be directed to the appropriate provider repository.
+  - name: Provider Development Feedback and Questions
+    url: https://github.com/hashicorp/terraform-plugin-sdk/issues/new/choose
+    about: The Plugin SDK has its own repository; any SDK and provider-development-related issues or questions should be directed there.
+  - name: Terraform Language or Workflow Questions
+    url: https://discuss.hashicorp.com/c/terraform-core
+    about: Please ask and answer language- or workflow-related questions through the Terraform Core Community Forum.
diff --git a/.github/ISSUE_TEMPLATE/provider_issue.md b/.github/ISSUE_TEMPLATE/provider_issue.md
deleted file mode 100644
index 2a9e52b3c..000000000
--- a/.github/ISSUE_TEMPLATE/provider_issue.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-name: Provider issue (AWS, Azure, GCP, Oracle, Kubernetes, etc.)
-about: Do you have a bug, feature request, or other issue with a provider (not Terraform core or the HCL language itself)?
----
-
-Hi there,
-
-Each provider has it's own repository, and issues should be opened there not on the main Terraform repository.
-
-Here are some of the most common:
-
-* [AWS](https://github.com/terraform-providers/terraform-provider-aws)
-* [Azure](https://github.com/terraform-providers/terraform-provider-azurerm)
-* [Google](https://github.com/terraform-providers/terraform-provider-google)
-* [Oracle](https://github.com/terraform-providers/terraform-provider-oci)
-* [Kubernetes](https://github.com/terraform-providers/terraform-provider-kubernetes)
-
-See the [terraform-providers](https://github.com/terraform-providers) GitHub organization for many others.
diff --git a/.github/SECURITY.md b/.github/SECURITY.md new file mode 100644 index 000000000..6cffe8c42 --- /dev/null +++ b/.github/SECURITY.md @@ -0,0 +1,4 @@ +# Vulnerability Reporting + +Please disclose security vulnerabilities responsibly by following the procedure +described at https://www.hashicorp.com/security#vulnerability-reporting diff --git a/.github/SUPPORT.md b/.github/SUPPORT.md index 1d41099a6..9ba846c38 100644 --- a/.github/SUPPORT.md +++ b/.github/SUPPORT.md @@ -1,5 +1,4 @@ # Support -Terraform is a mature project with a growing community. There are active, dedicated people willing to help you through various mediums. - -Take a look at those mediums listed at https://www.terraform.io/community.html +If you have questions about Terraform usage, please feel free to create a topic +on [the official community forum](https://discuss.hashicorp.com/c/terraform-core). diff --git a/.gitignore b/.gitignore index f095d09ce..6efe5a8cb 100644 --- a/.gitignore +++ b/.gitignore @@ -29,3 +29,6 @@ website/vendor # Test exclusions !command/testdata/**/*.tfstate !command/testdata/**/.terraform/ + +# Coverage +coverage.txt diff --git a/.go-version b/.go-version index 166a50ffa..434711004 100644 --- a/.go-version +++ b/.go-version @@ -1 +1 @@ -1.12.9 +1.12.13 diff --git a/.travis.yml b/.travis.yml index c3960ca49..a4a7914aa 100644 --- a/.travis.yml +++ b/.travis.yml @@ -4,7 +4,7 @@ services: - docker language: go go: -- "1.12.9" +- "1.12.13" # add TF_CONSUL_TEST=1 to run consul tests # they were causing timouts in travis @@ -33,17 +33,22 @@ before_script: - git config --global url.https://github.com/.insteadOf ssh://git@github.com/ script: -- make test +- make fmtcheck generate +- bash scripts/travis.sh - go mod verify - make e2etest - GOOS=windows go build -mod=vendor # website-test is temporarily disabled while we get the website build back in shape after the v0.12 reorganization #- make website-test +after_success: + - bash <(curl -s https://codecov.io/bash) + branches: only: - master - v0.11 + - v0.12 notifications: irc: channels: diff --git a/BUILDING.md b/BUILDING.md index 8b52b8341..bd3da0ff5 100644 --- a/BUILDING.md +++ b/BUILDING.md @@ -1,43 +1,65 @@ # Building Terraform -This document contains details about the process for building binaries for -Terraform. +This document contains details about the process for building release-style +binaries for Terraform. + +(If you are intending instead to make changes to Terraform and build binaries +only for your local testing, see +[the contributing guide](.github/CONTRIBUTING.md).) ## Versioning -As a pre-1.0 project, we use the MINOR and PATCH versions as follows: +Until Terraform v1.0, Terraform's versioning scheme is as follows: - * a `MINOR` version increment indicates a release that may contain backwards - incompatible changes - * a `PATCH` version increment indicates a release that may contain bugfixes as - well as additive (backwards compatible) features and enhancements +* Full version strings start with a zero in the initial position. +* The second position increments for _major_ releases, which may contain + backwards incompatible changes. +* The third and final position increments for _minor_ releases, which + we aim to keep backwards compatible with prior releases for the same major + version. + +Although the Terraform team takes care to preserve compatibility between +major releases, major release upgrades will often require a subset of users +to take specific upgrade actions. 
This will remain true while the product design is being refined in
+preparation for more specific backward-compatibility promises in the eventual
+Terraform 1.0 release.

## Process

-If only need to build binaries for the platform you're running (Windows, Linux,
-Mac OS X etc..), you can follow the instructions in the README for [Developing
-Terraform][1].
+Terraform release binaries are built via cross-compilation on a Linux
+system, using [gox](https://github.com/mitchellh/gox).

-The guide below outlines the steps HashiCorp takes to build the official release
-binaries for Terraform. This process will generate a set of binaries for each supported
-platform, using the [gox](https://github.com/mitchellh/gox) tool.
+The steps below are a subset of the steps HashiCorp uses to prepare the
+official distribution packages available from
+[the download page](https://www.terraform.io/downloads.html). This
+process will generate an executable for each of the supported target platforms.
+HashiCorp prepares release binaries on Linux amd64 systems. This build process
+may need to be adjusted for other host platforms.

```sh
# clone the repository if needed
git clone https://github.com/hashicorp/terraform.git
cd terraform

-# Verify unit tests pass
+# Verify that the unit tests are passing
make test

-# Build the release
-# This generates binaries for each platform and places them in the pkg folder
+# Run preparation steps and then build the executable for each target platform
+# in the subdirectory "pkg".
make bin
```

-After running these commands, you should have binaries for all supported
-platforms in the `pkg` folder.
+Official releases are then packaged, hashed, and signed before being uploaded
+to [the HashiCorp releases service](https://releases.hashicorp.com/terraform/).
+Those final packaging steps are not fully reproducible using the contents
+of this repository due to the use of HashiCorp's private signing key. However,
+you can place the generated executables in `.zip` archives to produce a
+similar result without the checksums and digital signature.

+## Release Bundles for use in Terraform Enterprise

-[1]: https://github.com/hashicorp/terraform#developing-terraform
+If you wish to build distribution archives that blend official Terraform
+release executables with a mixture of official and third-party provider builds,
+see [the `terraform-bundle` tool](tools/terraform-bundle).
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 287bdd237..5fbbb9edb 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,287 +1,22 @@
-## 0.12.11 (Unreleased)
+## 0.13.0 (Unreleased)

-BUG FIXES:
+BREAKING CHANGES:

-* config: Clean up orphan modules in the presence of -target [GH-21313]
-
-## 0.12.10 (October 07, 2019)
+* command/import: remove the deprecated `-provider` command line argument [GH-24090]
+  (#22862 fixed a bug where the `import` command was not properly attaching the configured provider for a resource to be imported, making the `-provider` command line argument unnecessary.)
+* config: Inside `provisioner` blocks that have `when = destroy` set, and inside any `connection` blocks that are used by such `provisioner` blocks, it is now an error to refer to any objects other than `self`, `count`, or `each` [GH-24083] +* config: The `merge` function now returns more precise type information, making it usable for values passed to `for_each` [GH-24032] ENHANCEMENTS: +* config: `templatefile` function will now return a helpful error message if a given variable has an invalid name, rather than relying on a syntax error in the template parsing itself. [GH-24184] -* `terraform plan` and `terraform apply` will now warn when the `-target` option is used, to draw attention to the fact that the result of applying the plan is likely to be incomplete, and to remind to re-run `terraform plan` with no targets afterwards to ensure that the configuration has converged. ([#22783](https://github.com/hashicorp/terraform/issues/22783)) -* config: New function `parseint` for parsing strings containing digits as integers in various bases. ([#22747](https://github.com/hashicorp/terraform/issues/22747)) -* config: New function `cidrsubnets`, which is a companion to the existing function `cidrsubnet` which can allocate multiple consecutive subnet prefixes (possibly of different prefix lengths) in a single call. ([#22858](https://github.com/hashicorp/terraform/issues/22858)) -* backend/google: The GCS backend now supports OAuth2 token authentication. ([#21772](https://github.com/hashicorp/terraform/issues/21772)) -* provisioner/habitat: Multiple updates and fixes, see PR for details ([#22705](https://github.com/hashicorp/terraform/issues/22705)) - -BUG FIXES: - -* backend/manta: fix panic when `insecure_skip_tls_verify` was not set ([#22918](https://github.com/hashicorp/terraform/issues/22918)) - -## 0.12.9 (September 17, 2019) - -NOTES: -* core: `ignore_changes` is now processed (in addition to existing behaviors) before the provider plan is run. This means that users may see fewer planned changes when using `ignore_changes`, as before this change, changes to ignored attributes were still being sent to CustomizeDiff in providers (which could mean cascading changes for some resources). This should be indicative that providers are no longer getting changes that were marked as ignored, but if unexpected plans are seen while using `ignore_changes`, investigate the settings in the `ignore_changes` block to ensure the appropriate attributes are set. ([#22520](https://github.com/hashicorp/terraform/issues/22520)) - -ENHANCEMENTS: -* provisioners/habitat: `accept_license` argument available to automate accepting the EULA, now required by this client ([#22745](https://github.com/hashicorp/terraform/issues/22745)) -* config: add source addressing to unknown value errors in `for_each` ([#22760](https://github.com/hashicorp/terraform/issues/22760)) - -BUG FIXES: -* command/console: support -var and -var-file flags ([#22145](https://github.com/hashicorp/terraform/issues/22145)) -* command/show: Fixed bug with wrong errors being returned or swallowed. ([#22772](https://github.com/hashicorp/terraform/issues/22772)) -* config: The `cidrhost`, `cidrsubnet`, and `cidrnetmask` functions now behave correctly with IPv6 prefixes that are short enough for the host portion to be greater than 64-bit or 32-bit (depending on the target architecture). 
([#22505](https://github.com/hashicorp/terraform/issues/22505)) -* config: Fixed bug on empty sets with `for_each` ([#22281](https://github.com/hashicorp/terraform/issues/22281)) - -## 0.12.8 (September 04, 2019) - -NEW FEATURES: -* lang/funcs: New `fileset` function, for finding static local files that match a glob pattern. ([#22523](https://github.com/hashicorp/terraform/issues/22523)) - -ENHANCEMENTS: -* remote-state/pg: add option to skip schema creation ([#21607](https://github.com/hashicorp/terraform/issues/21607)) - -BUG FIXES: -* command/console: use user-supplied `-plugin-dir` ([#22616](https://github.com/hashicorp/terraform/issues/22616)) -* config: ensure sets are appropriately known for `for_each` ([#22597](https://github.com/hashicorp/terraform/issues/22597)) - -## 0.12.7 (August 22, 2019) - -NEW FEATURES: -* New functions `regex` and `regexall` allow applying a regular expression pattern to a string and retrieving any matching substring(s) ([#22353](https://github.com/hashicorp/terraform/issues/22353)) - -ENHANCEMENTS: -* lang/funcs: `lookup()` can work with maps of lists, maps and objects ([#22269](https://github.com/hashicorp/terraform/issues/22269)) -* SDK: helper/acctest: Add function to return random IP address ([#22312](https://github.com/hashicorp/terraform/issues/22312)) -* SDK: httpclient: Introduce composable `UserAgent(version)` ([#22272](https://github.com/hashicorp/terraform/issues/22272)) -* connection/ssh: Support certificate authentication ([#22156](https://github.com/hashicorp/terraform/issues/22156)) - -BUG FIXES: -* config: reduce MinItems and MaxItems validation during decoding, to allow for use of dynamic blocks ([#22530](https://github.com/hashicorp/terraform/issues/22530)) -* config: don't validate MinItems and MaxItems in CoerceValue, allowing providers to set incomplete values ([#22478](https://github.com/hashicorp/terraform/issues/22478)) -* config: fix panic on tuples with `for_each` ([#22279](https://github.com/hashicorp/terraform/issues/22279)) -* config: fix references to `each` of `for_each` in -s ([#22289](https://github.com/hashicorp/terraform/issues/22289)) -* config: fix panic when using nested dynamic blocks ([#22314](https://github.com/hashicorp/terraform/issues/22314)) -* config: ensure consistent evaluation when moving between single resources and `for_each` in addressing ([#22454](https://github.com/hashicorp/terraform/issues/22454)) -* core: only start a single instance of each required provisioner ([#22553](https://github.com/hashicorp/terraform/issues/22553)) -* command: fix issue where commands occasionally exited before the error message printed ([#22373](https://github.com/hashicorp/terraform/issues/22373)) -* command/0.12upgrade: use user-supplied plugin-dir ([#22306](https://github.com/hashicorp/terraform/issues/22306)) -* command/hook_ui: Truncate the ID considering multibyte characters ([#18823](https://github.com/hashicorp/terraform/issues/18823)) -* command/fmt: Terraform fmt no longer inserts spaces after % ([#22356](https://github.com/hashicorp/terraform/issues/22356)) -* command/state: Allow moving resources to modules not yet in state ([#22299](https://github.com/hashicorp/terraform/issues/22299)) -* backend/google: Now using the OAuth2 token endpoint on `googleapis.com` instead of `google.com`. These endpoints are equivalent in functionality but `googleapis.com` hosts are resolvable from private Google Cloud Platform VPCs where other connectivity is restricted. 
([#22451](https://github.com/hashicorp/terraform/issues/22451)) - -## 0.12.6 (July 31, 2019) - -NOTES: - -* backend/s3: After this update, the AWS Go SDK will prefer credentials found via the `AWS_PROFILE` environment variable when both the `AWS_PROFILE` environment variable and the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables are statically defined. Previously the SDK would ignore the `AWS_PROFILE` environment variable, if static environment credentials were also specified. This is listed as a bug fix in the AWS Go SDK release notes. ([#22253](https://github.com/hashicorp/terraform/issues/22253)) - -NEW FEATURES: -* backend/oss: added support for assume role config ([#22186](https://github.com/hashicorp/terraform/issues/22186)) -* config: Resources can now use a for_each meta-argument ([#17179](https://github.com/hashicorp/terraform/issues/17179)) - -ENHANCEMENTS: -* backend/s3: Add support for assuming role via web identity token via the `AWS_WEB_IDENTITY_TOKEN_FILE` and `AWS_ROLE_ARN` environment variables ([#22253](https://github.com/hashicorp/terraform/issues/22253)) -* backend/s3: Support automatic region validation for `me-south-1`. For AWS operations to work in the new region, the region must be explicitly enabled as outlined in the [AWS Documentation](https://docs.aws.amazon.com/general/latest/gr/rande-manage.html#rande-manage-enable) ([#22253](https://github.com/hashicorp/terraform/issues/22253)) -* connection/ssh: Improve connection debug messages ([#22097](https://github.com/hashicorp/terraform/issues/22097)) - -BUG FIXES: -* backend/remote: remove misleading contents from error message ([#22148](https://github.com/hashicorp/terraform/issues/22148)) -* backend/s3: Load credentials via the `AWS_PROFILE` environment variable (if available) when `AWS_PROFILE` is defined along with `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` ([#22253](https://github.com/hashicorp/terraform/issues/22253)) -* config: Improve conditionals to returns the correct type when dynamic values are present but unevaluated ([#22137](https://github.com/hashicorp/terraform/issues/22137)) -* config: Fix panic when mistakingly using `dynamic` on an attribute ([#22169](https://github.com/hashicorp/terraform/issues/22169)) -* cli: Fix crash with reset connection during init ([#22146](https://github.com/hashicorp/terraform/issues/22146)) -* cli: show all deposed instances and prevent crash in `show` command ([#22149](https://github.com/hashicorp/terraform/issues/22149)) -* configs/configupgrade: Fix crash with nil hilNode ([#22181](https://github.com/hashicorp/terraform/issues/22181)) -* command/fmt: now formats correctly in presence of here-docs ([#21434](https://github.com/hashicorp/terraform/issues/21434)) -* helper/schema: don't skip deprecation check during validation when attribute value is unknown ([#22262](https://github.com/hashicorp/terraform/issues/22262)) -* plugin/sdk: allow MinItems > 1 when dynamic blocks ([#22221](https://github.com/hashicorp/terraform/issues/22221)) -* plugin/sdk: fix reflect panics in helper/schema validation ([#22236](https://github.com/hashicorp/terraform/issues/22236)) - -## 0.12.5 (July 18, 2019) - -ENHANCEMENTS: -* command/format: No longer show no-ops in `terraform show`, since nothing will change ([#21907](https://github.com/hashicorp/terraform/issues/21907)) -* backend/s3: Support for assuming role using credential process from the shared AWS configuration file (support profile containing both `credential_process` and `role_arn` configurations) 
([#21908](https://github.com/hashicorp/terraform/issues/21908)) -* connection/ssh: Abort ssh connections when the server is no longer responding ([#22037](https://github.com/hashicorp/terraform/issues/22037)) -* connection/ssh: Support ssh diffie-hellman-group-exchange-sha256 key exchange ([#22037](https://github.com/hashicorp/terraform/issues/22037)) - -BUG FIXES: -* backend/remote: fix conflict with normalized config dir and vcs root working directory ([#22096](https://github.com/hashicorp/terraform/issues/22096)) -* backend/remote: be transparent about what filesystem prefix Terraform is uploading to the remote system, and why it's doing that ([#22121](https://github.com/hashicorp/terraform/issues/22121)) -* configs: Ensure diagnostics are properly recorded from nested modules ([#22098](https://github.com/hashicorp/terraform/issues/22098)) -* core: Prevent inconsistent final plan error when using dynamic in a set-type block ([#22057](https://github.com/hashicorp/terraform/issues/22057)) -* lang/funcs: Allow null values in `compact` function ([#22044](https://github.com/hashicorp/terraform/issues/22044)) -* lang/funcs: Pass through empty list in `chunklist` ([#22119](https://github.com/hashicorp/terraform/issues/22119)) - -## 0.12.4 (July 11, 2019) - -NEW FEATURES: - -* lang/funcs: new `abspath` function returns the absolute path to a given file ([#21409](https://github.com/hashicorp/terraform/issues/21409)) -* backend/swift: support for user configured state object names in swift containers ([#17465](https://github.com/hashicorp/terraform/issues/17465)) - -BUG FIXES: - -* core: Prevent crash when a resource has no current valid instance ([#21979](https://github.com/hashicorp/terraform/issues/21979)) -* plugin/sdk: Prevent empty strings from being replaced with default values ([#21806](https://github.com/hashicorp/terraform/issues/21806)) -* plugin/sdk: Ensure resource timeouts are not lost when there is an empty plan ([#21814](https://github.com/hashicorp/terraform/issues/21814)) -* plugin/sdk: Don't add null elements to diagnostic paths when validating config ([#21884](https://github.com/hashicorp/terraform/issues/21884)) -* lang/funcs: Add missing map of bool support for `lookup` ([#21863](https://github.com/hashicorp/terraform/issues/21863)) -* config: Fix issue with downloading BitBucket modules from deprecated V1 API by updating go-getter dependency ([#21948](https://github.com/hashicorp/terraform/issues/21948)) -* config: Fix conditionals to evaluate to the correct type when using null ([#21957](https://github.com/hashicorp/terraform/issues/21957)) - -## 0.12.3 (June 24, 2019) - -ENHANCEMENTS: - -* config: add GCS source support for modules ([#21254](https://github.com/hashicorp/terraform/issues/21254)) -* command/format: Reduce extra whitespaces & new lines ([#21334](https://github.com/hashicorp/terraform/issues/21334)) -* backend/s3: Support for chaining assume IAM role from AWS shared configuration files ([#21815](https://github.com/hashicorp/terraform/issues/21815)) - -BUG FIXES: - -* configs: Can now use references like `tags["foo"]` in `ignore_changes` to ignore in-place updates to specific keys in a map ([#21788](https://github.com/hashicorp/terraform/issues/21788)) -* configs: Fix panic on missing value for `version` attribute in `provider` blocks. ([#21825](https://github.com/hashicorp/terraform/issues/21825)) -* lang/funcs: Fix `merge` panic on null values. 
Now will give an error if null used ([#21695](https://github.com/hashicorp/terraform/issues/21695)) -* backend/remote: Fix "Conflict" error if the first state snapshot written after a Terraform CLI upgrade has the same content as the prior state. ([#21811](https://github.com/hashicorp/terraform/issues/21811)) -* backend/s3: Fix AWS shared configuration file credential source not assuming a role with environment and ECS credentials ([#21815](https://github.com/hashicorp/terraform/issues/21815)) - -## 0.12.2 (June 12, 2019) - -NEW FEATURES: - -* provisioners: new provisioner: `puppet` ([#18851](https://github.com/hashicorp/terraform/issues/18851)) -* `range` function for generating a sequence of numbers as a list ([#21461](https://github.com/hashicorp/terraform/issues/21461)) -* `yamldecode` and *experimental* `yamlencode` functions for working with YAML-serialized data ([#21459](https://github.com/hashicorp/terraform/issues/21459)) -* `uuidv5` function for generating name-based (as opposed to pseudorandom) UUIDs ([#21244](https://github.com/hashicorp/terraform/issues/21244)) -* backend/oss: Add support for Alibaba OSS remote state ([#16927](https://github.com/hashicorp/terraform/issues/16927)) - -ENHANCEMENTS: - -* config: consider build metadata when interpreting module versions ([#21640](https://github.com/hashicorp/terraform/issues/21640)) -* backend/http: implement retries for the http backend ([#19702](https://github.com/hashicorp/terraform/issues/19702)) -* backend/swift: authentication mechanisms now more consistent with other OpenStack-compatible tools ([#18671](https://github.com/hashicorp/terraform/issues/18671)) -* backend/swift: add application credential support ([#20914](https://github.com/hashicorp/terraform/pull/20914)) - -BUG FIXES: - -* command/show: use the state snapshot included in the planfile when rendering a plan to json ([#21597](https://github.com/hashicorp/terraform/issues/21597)) -* config: Fix issue with empty dynamic blocks failing when usign ConfigModeAttr ([#21549](https://github.com/hashicorp/terraform/issues/21549)) -* core: Re-validate resource config during final plan ([#21555](https://github.com/hashicorp/terraform/issues/21555)) -* core: Fix missing resource timeouts during destroy ([#21611](https://github.com/hashicorp/terraform/issues/21611)) -* core: Don't panic when encountering an invalid `depends_on` ([#21590](https://github.com/hashicorp/terraform/issues/21590)) -* backend: Fix panic when upgrading from a state with a hash value greater than MaxInt ([#21484](https://github.com/hashicorp/terraform/issues/21484)) - -## 0.12.1 (June 3, 2019) - -BUG FIXES: - -* core: Always try to select a workspace after initialization ([#21234](https://github.com/hashicorp/terraform/issues/21234)) -* command/show: fix inconsistent json output causing a panic ([#21541](https://github.com/hashicorp/terraform/issues/21541)) -* config: `distinct` function no longer panics when given an empty list ([#21538](https://github.com/hashicorp/terraform/issues/21538)) -* config: Don't panic when a `version` constraint is added to a module that was previously initialized without one ([#21542](https://github.com/hashicorp/terraform/issues/21542)) -* config: `matchkeys` function argument type checking will no longer fail incorrectly during validation ([#21576](https://github.com/hashicorp/terraform/issues/21576)) -* backend/local: Don't panic if an instance in the state only has deposed instances, and no current instance 
([#21575](https://github.com/hashicorp/terraform/issues/21575)) - -## 0.12.0 (May 22, 2019) +BUG FIXES: +* cli: Fix `terraform state mv` to correctly set the resource each mode based on the target address [GH-24254] +* cli: The `terraform plan` command (and the implied plan run by `terraform apply` with no arguments) will now print any warnings that were generated even if there are no changes to be made. [GH-24095] +* core: Instances are now destroyed only using their stored state, removing many cycle errors [GH-24083] --- +For information on prior major releases, see their changelogs: -This is the aggregated summary of changes compared to v0.11.14. If you'd like to see the incremental changelog through each of the v0.12.0 prereleases, please refer to [the v0.12.0-rc1 changelog](https://github.com/hashicorp/terraform/blob/v0.12.0-rc1/CHANGELOG.md). - ---- - -The focus of v0.12.0 was on improvements to the Terraform language made in response to all of the feedback and experience gathered on prior versions. We hope that these language improvements will help to make configurations for more complex situations more readable, and improve the usability of re-usable modules. - -However, an overhaul of this kind inevitably means that 100% compatibility is not possible. The updated language is designed to be broadly compatible with the 0.11 language as documented, but some of the improvements required a slightly stricter parser and language model in order to resolve ambiguity or to give better feedback in error messages. - -If you are upgrading to v0.12.0, we strongly recommend reading [the upgrade guide](https://www.terraform.io/upgrade-guides/0-12.html) to learn the recommended upgrade process, which includes a tool to automatically upgrade many improved language constructs and to indicate situations where human intuition is required to complete the upgrade. - -### Incompatibilities and Notes - -* As noted above, the language overhaul means that several aspects of the language are now parsed or evaluated more strictly than before, so configurations that employ workarounds for prior version limitations or that followed conventions other than what was shown in documentation may require some updates. For more information, please refer to [the upgrade guide](https://www.terraform.io/upgrade-guides/0-12.html). - -* In order to give better feedback about mistakes, Terraform now validates that all variable names set via `-var` and `-var-file` options correspond to declared variables, generating errors or warnings if not. In situations where automation is providing a fixed set of variables to all configurations (whether they are using them or not), use [`TF_VAR_` environment variables](https://www.terraform.io/docs/commands/environment-variables.html#tf_var_name) instead, which are ignored if they do not correspond to a declared variable. - -* The wire protocol for provider and provisioner plugins has changed, so plugins built against prior versions of Terraform are not compatible with Terraform v0.12. The most commonly-downloaded providers already had v0.12-compatible releases at the time of v0.12.0 release, but some other providers (particularly those distributed independently of the `terraform init` installation mechanism) will need to make new releases before they can be used with Terraform v0.12 or later. 
- -* The index API for automatic provider installation in `terraform init` is now provided by the Terraform Registry at `registry.terraform.io`, rather than the indexes directly on `releases.hashicorp.com`. The "releases" server is still currently the distribution source for the release archives themselves at the time of writing, but that may change over time. - -* The serialization formats for persisted state snapshots and saved plans have changed. Third-party tools that parse these artifacts will need to be updated to support these new serialization formats. - - For most use-cases, we recommend instead using [`terraform show -json`](https://www.terraform.io/docs/commands/show.html#json-output) to read the content of state or plan, in a form that is less likely to see significant breaking changes in future releases. - -* [`terraform validate`](https://www.terraform.io/docs/commands/validate.html) now has a slightly smaller scope than before, focusing only on configuration syntax and type/value checking. This makes it safe to run in unattended scenarios, such as on save in a text editor. - -### New Features - -The full set of language improvements is too large to list them all out exhaustively, so the list below covers some highlights: - -* **First-class expressions:** Prior to v0.12, expressions could be used only via string interpolation, like `"${var.foo}"`. Expressions are now fully integrated into the language, allowing them to be used directly as argument values, like `ami = var.ami`. - -* **`for` expressions:** This new expression construct allows the construction of a list or map by transforming and filtering elements from another list or map. For more information, refer to [the _`for` expressions_ documentation](https://www.terraform.io/docs/configuration/expressions.html#for-expressions). - -* **Dynamic configuration blocks:** For nested configuration blocks accepted as part of a resource configuration, it is now possible to dynamically generate zero or more blocks corresponding to items in a list or map using the special new `dynamic` block construct. This is the official replacement for the common (but buggy) unofficial workaround of treating a block type name as if it were an attribute expecting a list of maps value, which worked sometimes before as a result of some unintended coincidences in the implementation. - -* **Generalised "splat" operator:** The `aws_instance.foo.*.id` syntax was previously a special case only for resources with `count` set. It is now an operator within the expression language that can be applied to any list value. There is also an optional new splat variant that allows both index and attribute access operations on each item in the list. For more information, refer to [the _Splat Expressions_ documentation](https://www.terraform.io/docs/configuration/expressions.html#splat-expressions). - -* **Nullable argument values:** It is now possible to use a conditional expression like `var.foo != "" ? var.foo : null` to conditionally leave an argument value unset, whereas before Terraform required the configuration author to provide a specific default value in this case. Assigning `null` to an argument is equivalent to omitting that argument entirely. - -* **Rich types in module inputs variables and output values:** Terraform v0.7 added support for returning flat lists and maps of strings, but this is now generalized to allow returning arbitrary nested data structures with mixed types. 
Module authors can specify an expected [type constraint](https://www.terraform.io/docs/configuration/types.html) for each input variable to allow early type checking of arguments.
-
-* **Resource and module object values:** An entire resource or module can now be treated as an object value within expressions, including passing them through input variables and output values to other modules, using an attribute-less reference syntax, like `aws_instance.foo`.
-
-* **Extended template syntax:** The simple interpolation syntax from prior versions is extended to become a simple template language, with support for conditional interpolations and repeated interpolations through iteration. For more information, see [the _String Templates_ documentation](https://www.terraform.io/docs/configuration/expressions.html#string-templates).
-
-* **`jsondecode` and `csvdecode` interpolation functions:** Due to the richer type system in the new configuration language implementation, we can now offer functions for decoding serialization formats. [`jsondecode`](https://www.terraform.io/docs/configuration/functions/jsondecode.html) is the opposite of [`jsonencode`](https://www.terraform.io/docs/configuration/functions/jsonencode.html), while [`csvdecode`](https://www.terraform.io/docs/configuration/functions/csvdecode.html) provides a way to load in lists of maps from a compact tabular representation.
-
-* **Revamped error messages:** Error messages relating to configuration now always include information about where in the configuration the problem was found, along with other contextual information. We have also revisited many of the most common error messages to reword them for clarity, consistency, and actionability.
-
-* **Structural plan output:** When Terraform renders the set of changes it plans to make, it will now use formatting designed to be similar to the input configuration language, including nested rendering of individual changes within multi-line strings, JSON strings, and nested collections.
-
-### Other Improvements
-
-* `terraform validate` now accepts an argument `-json` which produces machine-readable output. Please refer to the documentation for this command for details on the format and some caveats that consumers must consider when using this interface. ([#17539](https://github.com/hashicorp/terraform/issues/17539))
-
-* The JSON-based variant of the Terraform language now has a more tightly-specified and reliable mapping to the native syntax variant. In prior versions, certain Terraform configuration features did not function as expected or were not usable via the JSON-based forms. For more information, see [the _JSON Configuration Syntax_ documentation](https://www.terraform.io/docs/configuration/syntax-json.html).
-
-* The new built-in function [`templatefile`](https://www.terraform.io/docs/configuration/functions/templatefile.html) allows rendering a template from a file directly in the language, without installing the separate Template provider and using the `template_file` data source.
-
-* The new built-in function [`formatdate`](https://www.terraform.io/docs/configuration/functions/formatdate.html) is a specialized string formatting function for creating machine-oriented timestamp strings in various formats.
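The two functions just mentioned can be combined as in this minimal sketch (the template file `greeting.tmpl` and the output names are hypothetical, assumed for illustration):

```hcl
# Hypothetical usage of templatefile and formatdate.
output "greeting" {
  # greeting.tmpl is an assumed template file containing ${name} interpolations.
  value = templatefile("${path.module}/greeting.tmpl", { name = "world" })
}

output "build_date" {
  # Renders the current time as a machine-oriented date string, e.g. "2019-05-22".
  value = formatdate("YYYY-MM-DD", timestamp())
}
```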
-
-* The new built-in functions [`reverse`](https://www.terraform.io/docs/configuration/functions/reverse.html) and [`strrev`](https://www.terraform.io/docs/configuration/functions/strrev.html) reverse the order of items in a list and the order of Unicode characters in a string, respectively.
-
-* A new `pg` state storage backend allows storing state in a PostgreSQL database.
-
-* The `azurerm` state storage backend supports new authentication mechanisms, custom resource manager endpoints, and HTTP proxies.
-
-* The `s3` state storage backend now supports `credential_source` in AWS configuration files and the new AWS regions `eu-north-1` and `ap-east-1`, and incorporates several other improvements previously made in the `aws` provider.
-
-* The `swift` state storage backend now supports locking and workspaces.
-
-### Bug Fixes
-
-Quite a few bugs were fixed indirectly as a result of improvements to the underlying language engine, so a fully-comprehensive list of fixed bugs is not possible, but some of the more commonly-encountered bugs that are fixed in this release include:
-
-* config: The conditional operator `... ? ... : ...` now works with result values of any type and only returns evaluation errors for the chosen result expression, as those familiar with this operator in other languages might expect. (A short sketch follows at the end of this section.)
-
-* config: Accept and ignore UTF-8 byte-order mark for configuration files ([#19715](https://github.com/hashicorp/terraform/issues/19715))
-
-* config: When using a splat expression like `aws_instance.foo.*.id`, the addition of a new instance to the set (whose `id` is therefore not known until after apply) will no longer cause all of the other ids in the resulting list to appear unknown.
-
-* config: The `jsonencode` function now preserves the types of values passed to it, even inside nested structures, whereas before it had a tendency to convert primitive-typed values to string representations.
-
-* config: The `format` and `formatlist` functions now attempt automatic type conversions when the given values do not match the "verbs" in the format string, rather than producing a result with error placeholders in it.
-
-* config: Assigning a list containing one or more unknown values to an argument expecting a list no longer produces the incorrect error message "should be a list", because Terraform is now able to track the individual elements as being unknown rather than the list as a whole, and to track the type of each unknown value. (This also avoids any need to place seemingly-redundant list brackets around values that are already lists, which would now be interpreted as a list of lists.)
-
-* cli: When `create_before_destroy` is enabled for a resource, replacement actions are reflected correctly in rendered plans as `+/-` rather than `-/+`, and described as such in the UI messages.
-
-* core: Various root causes of the "diffs didn't match during apply" class of error are now checked at their source, allowing Terraform to either avoid the problem occurring altogether (ideally) or to provide a more actionable error message to help with reporting, finding, and fixing the bug.
-
----
-
-For information on v0.11 and prior releases, please see [the v0.11 branch changelog](https://github.com/hashicorp/terraform/blob/v0.11/CHANGELOG.md).
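As a minimal sketch of the conditional-operator fix referenced above (the resource type and variable names are hypothetical):

```hcl
# Hypothetical example: only the chosen branch of the conditional is
# evaluated for errors, and assigning null leaves the argument unset,
# exactly as if it had been omitted.
resource "aws_instance" "example" {
  ami           = var.ami
  instance_type = var.instance_type != "" ? var.instance_type : null
}
```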
+* [v0.12](https://github.com/hashicorp/terraform/blob/v0.12/CHANGELOG.md) +* [v0.11 and earlier](https://github.com/hashicorp/terraform/blob/v0.11/CHANGELOG.md) diff --git a/CODEOWNERS b/CODEOWNERS index 218352e86..524078797 100644 --- a/CODEOWNERS +++ b/CODEOWNERS @@ -1,4 +1,8 @@ -# remote-state backends. -/backend/remote-state/azure @terraform-azure -/backend/remote-state/gcs @terraform-google -/backend/remote-state/s3 @terraform-aws +# Each line is a file pattern followed by one or more owners. +# More on CODEOWNERS files: https://help.github.com/en/github/creating-cloning-and-archiving-repositories/about-code-owners + +# Remote-state backends +/backend/remote-state/azure @hashicorp/terraform-azure +/backend/remote-state/gcs @hashicorp/terraform-google +/backend/remote-state/s3 @hashicorp/terraform-aws +/backend/remote-state/s3 @likexian \ No newline at end of file diff --git a/Dockerfile b/Dockerfile index c5afc0d80..095c65f94 100644 --- a/Dockerfile +++ b/Dockerfile @@ -11,7 +11,7 @@ FROM golang:alpine LABEL maintainer="HashiCorp Terraform Team " -RUN apk add --update git bash openssh +RUN apk add --no-cache git bash openssh ENV TF_DEV=true ENV TF_RELEASE=1 diff --git a/Makefile b/Makefile index 368294409..143bc1ff7 100644 --- a/Makefile +++ b/Makefile @@ -5,11 +5,6 @@ WEBSITE_REPO=github.com/hashicorp/terraform-website default: test -tools: - GO111MODULE=off go get -u golang.org/x/tools/cmd/stringer - GO111MODULE=off go get -u golang.org/x/tools/cmd/cover - GO111MODULE=off go get -u github.com/golang/mock/mockgen - # bin generates the releaseable binaries for Terraform bin: fmtcheck generate @TF_RELEASE=1 sh -c "'$(CURDIR)/scripts/build.sh'" @@ -44,10 +39,12 @@ testacc: fmtcheck generate TF_ACC=1 go test $(TEST) -v $(TESTARGS) -mod=vendor -timeout 120m # e2etest runs the end-to-end tests against a generated Terraform binary +# and a generated terraform-bundle binary. # The TF_ACC here allows network access, but does not require any special -# credentials since the e2etests use local-only providers such as "null". +# credentials. e2etest: generate TF_ACC=1 go test -mod=vendor -v ./command/e2etest + TF_ACC=1 go test -mod=vendor -v ./tools/terraform-bundle/e2etest test-compile: fmtcheck generate @if [ "$(TEST)" = "./..." ]; then \ @@ -62,9 +59,6 @@ testrace: fmtcheck generate TF_ACC= go test -mod=vendor -race $(TEST) $(TESTARGS) cover: - @go tool cover 2>/dev/null; if [ $$? -eq 3 ]; then \ - go get -u golang.org/x/tools/cmd/cover; \ - fi go test $(TEST) -coverprofile=coverage.out go tool cover -html=coverage.out rm coverage.out @@ -72,7 +66,7 @@ cover: # generate runs `go generate` to build the dynamically generated # source files, except the protobuf stubs which are built instead with # "make protobuf". -generate: tools +generate: GOFLAGS=-mod=vendor go generate ./... # go fmt doesn't support -mod=vendor but it still wants to populate the # module cache with everything in go.mod even though formatting requires @@ -147,4 +141,4 @@ endif # under parallel conditions. 
.NOTPARALLEL: -.PHONY: bin cover default dev e2etest fmt fmtcheck generate protobuf plugin-dev quickdev test-compile test testacc testrace tools vendor-status website website-test +.PHONY: bin cover default dev e2etest fmt fmtcheck generate protobuf plugin-dev quickdev test-compile test testacc testrace vendor-status website website-test diff --git a/README.md b/README.md index bc808a63e..e4769e6ea 100644 --- a/README.md +++ b/README.md @@ -2,10 +2,9 @@ Terraform ========= - Website: https://www.terraform.io -- [![Gitter chat](https://badges.gitter.im/hashicorp-terraform/Lobby.png)](https://gitter.im/hashicorp-terraform/Lobby) -- Mailing list: [Google Groups](http://groups.google.com/group/terraform-tool) +- Forums: [HashiCorp Discuss](https://discuss.hashicorp.com/c/terraform-core) -Terraform +Terraform Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. @@ -34,134 +33,9 @@ All documentation is available on the [Terraform website](http://www.terraform.i Developing Terraform -------------------- -If you wish to work on Terraform itself or any of its built-in providers, you'll first need [Go](http://www.golang.org) installed on your machine (version 1.11+ is *required*). - This repository contains only Terraform core, which includes the command line interface and the main graph engine. Providers are implemented as plugins that each have their own repository in [the `terraform-providers` organization](https://github.com/terraform-providers) on GitHub. Instructions for developing each provider are in the associated README file. For more information, see [the provider development overview](https://www.terraform.io/docs/plugins/provider.html). -For local development of Terraform core, first make sure Go is properly installed and that a -[GOPATH](http://golang.org/doc/code.html#GOPATH) has been set. You will also need to add `$GOPATH/bin` to your `$PATH`. - -Next, using [Git](https://git-scm.com/), clone this repository into `$GOPATH/src/github.com/hashicorp/terraform`. - -You'll need to run `make tools` to install some required tools, then `make`. This will compile the code and then run the tests. If this exits with exit status 0, then everything is working! -You only need to run `make tools` once (or when the tools change). - -```sh -$ cd "$GOPATH/src/github.com/hashicorp/terraform" -$ make tools -$ make -``` - -To compile a development version of Terraform and the built-in plugins, run `make dev`. This will build everything using [gox](https://github.com/mitchellh/gox) and put Terraform binaries in the `bin` and `$GOPATH/bin` folders: - -```sh -$ make dev -... -$ bin/terraform -... -``` - -If you're developing a specific package, you can run tests for just that package by specifying the `TEST` variable. For example below, only `terraform` package tests will be run. - -```sh -$ make test TEST=./terraform -... -``` - -If you're working on a specific provider which has not been separated into an individual repository and only wish to rebuild that provider, you can use the `plugin-dev` target. For example, to build only the Test provider: - -```sh -$ make plugin-dev PLUGIN=provider-test -``` - -### Dependencies - -Terraform uses Go Modules for dependency management, but for the moment is -continuing to use Go 1.6-style vendoring for compatibility with tools that -have not yet been updated for full Go Modules support. 
- -If you're developing Terraform, there are a few tasks you might need to perform. - -#### Adding a dependency - -If you're adding a dependency, you'll need to vendor it in the same Pull Request as the code that depends on it. You should do this in a separate commit from your code, as makes PR review easier and Git history simpler to read in the future. - -To add a dependency: - -Assuming your work is on a branch called `my-feature-branch`, the steps look like this: - -1. Add an `import` statement to a suitable package in the Terraform code. - -2. Run `go mod vendor` to download the latest version of the module containing - the imported package into the `vendor/` directory, and update the `go.mod` - and `go.sum` files. - -3. Review the changes in git and commit them. - -#### Updating a dependency - -To update a dependency: - -1. Run `go get -u module-path@version-number`, such as `go get -u github.com/hashicorp/hcl@2.0.0` - -2. Run `go mod vendor` to update the vendored copy in the `vendor/` directory. - -3. Review the changes in git and commit them. - -### Acceptance Tests - -Terraform has a comprehensive [acceptance -test](http://en.wikipedia.org/wiki/Acceptance_testing) suite covering the -built-in providers. - -### Cross Compilation and Building for Distribution - -If you wish to cross-compile Terraform for another architecture, you can set the `XC_OS` and `XC_ARCH` environment variables to values representing the target operating system and architecture before calling `make`. The output is placed in the `pkg` subdirectory tree both expanded in a directory representing the OS/architecture combination and as a ZIP archive. - -For example, to compile 64-bit Linux binaries on Mac OS X, you can run: - -```sh -$ XC_OS=linux XC_ARCH=amd64 make bin -... -$ file pkg/linux_amd64/terraform -terraform: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped -``` - -`XC_OS` and `XC_ARCH` can be space separated lists representing different combinations of operating system and architecture. For example, to compile for both Linux and Mac OS X, targeting both 32- and 64-bit architectures, you can run: - -```sh -$ XC_OS="linux darwin" XC_ARCH="386 amd64" make bin -... -$ tree ./pkg/ -P "terraform|*.zip" -./pkg/ -├── darwin_386 -│   └── terraform -├── darwin_386.zip -├── darwin_amd64 -│   └── terraform -├── darwin_amd64.zip -├── linux_386 -│   └── terraform -├── linux_386.zip -├── linux_amd64 -│   └── terraform -└── linux_amd64.zip - -4 directories, 8 files -``` - -_Note: Cross-compilation uses [gox](https://github.com/mitchellh/gox), which requires toolchains to be built with versions of Go prior to 1.5. In order to successfully cross-compile with older versions of Go, you will need to run `gox -build-toolchain` before running the commands detailed above._ - -#### Docker - -When using docker you don't need to have any of the Go development tools installed and you can clone terraform to any location on disk (doesn't have to be in your $GOPATH). This is useful for users who want to build `master` or a specific branch for testing without setting up a proper Go environment. - -For example, run the following command to install the required tools and build terraform in a linux-based container for macOS. 
- -```sh -docker run --rm -v $(pwd):/go/src/github.com/hashicorp/terraform -w /go/src/github.com/hashicorp/terraform -e XC_OS=darwin -e XC_ARCH=amd64 golang:latest bash -c "apt-get update && apt-get install -y zip && make tools bin" -``` - +To learn more about compiling Terraform and contributing suggested changes, please refer to [the contributing guide](.github/CONTRIBUTING.md). ## License [Mozilla Public License v2.0](https://github.com/hashicorp/terraform/blob/master/LICENSE) diff --git a/addrs/input_variable.go b/addrs/input_variable.go index d2c046c11..975c72f1e 100644 --- a/addrs/input_variable.go +++ b/addrs/input_variable.go @@ -14,6 +14,15 @@ func (v InputVariable) String() string { return "var." + v.Name } +// Absolute converts the receiver into an absolute address within the given +// module instance. +func (v InputVariable) Absolute(m ModuleInstance) AbsInputVariableInstance { + return AbsInputVariableInstance{ + Module: m, + Variable: v, + } +} + // AbsInputVariableInstance is the address of an input variable within a // particular module instance. type AbsInputVariableInstance struct { @@ -34,7 +43,7 @@ func (m ModuleInstance) InputVariable(name string) AbsInputVariableInstance { func (v AbsInputVariableInstance) String() string { if len(v.Module) == 0 { - return v.String() + return v.Variable.String() } return fmt.Sprintf("%s.%s", v.Module.String(), v.Variable.String()) diff --git a/addrs/instance_key.go b/addrs/instance_key.go index cef8b2796..ff128be5b 100644 --- a/addrs/instance_key.go +++ b/addrs/instance_key.go @@ -18,6 +18,10 @@ import ( type InstanceKey interface { instanceKeySigil() String() string + + // Value returns the cty.Value of the appropriate type for the InstanceKey + // value. + Value() cty.Value } // ParseInstanceKey returns the instance key corresponding to the given value, @@ -56,6 +60,10 @@ func (k IntKey) String() string { return fmt.Sprintf("[%d]", int(k)) } +func (k IntKey) Value() cty.Value { + return cty.NumberIntVal(int64(k)) +} + // StringKey is the InstanceKey representation representing string indices, as // used when the "for_each" argument is specified with a map or object type. type StringKey string @@ -69,6 +77,10 @@ func (k StringKey) String() string { return fmt.Sprintf("[%q]", string(k)) } +func (k StringKey) Value() cty.Value { + return cty.StringVal(string(k)) +} + // InstanceKeyLess returns true if the first given instance key i should sort // before the second key j, and false otherwise. func InstanceKeyLess(i, j InstanceKey) bool { diff --git a/addrs/module.go b/addrs/module.go index 6420c6301..e2984662c 100644 --- a/addrs/module.go +++ b/addrs/module.go @@ -33,7 +33,11 @@ func (m Module) String() string { if len(m) == 0 { return "" } - return strings.Join([]string(m), ".") + var steps []string + for _, s := range m { + steps = append(steps, "module", s) + } + return strings.Join(steps, ".") } // Child returns the address of a child call in the receiver, identified by the diff --git a/addrs/module_instance.go b/addrs/module_instance.go index c81784e7a..b2b51717a 100644 --- a/addrs/module_instance.go +++ b/addrs/module_instance.go @@ -57,7 +57,7 @@ func ParseModuleInstance(traversal hcl.Traversal) (ModuleInstance, tfdiags.Diagn // If a reference string is coming from a source that should be identified in // error messages then the caller should instead parse it directly using a // suitable function from the HCL API and pass the traversal itself to -// ParseProviderConfigCompact. +// ParseModuleInstance. 
 //
 // Error diagnostics are returned if either the parsing fails or the analysis
 // of the traversal fails. There is no way for the caller to distinguish the
@@ -410,6 +410,26 @@ func (m ModuleInstance) TargetContains(other Targetable) bool {
 	}
 }
 
+// Module returns the address of the module that this instance is an instance
+// of.
+func (m ModuleInstance) Module() Module {
+	if len(m) == 0 {
+		return nil
+	}
+	ret := make(Module, len(m))
+	for i, step := range m {
+		ret[i] = step.Name
+	}
+	return ret
+}
+
 func (m ModuleInstance) targetableSigil() {
 	// ModuleInstance is targetable
 }
+
+func (s ModuleInstanceStep) String() string {
+	if s.InstanceKey != NoKey {
+		return s.Name + s.InstanceKey.String()
+	}
+	return s.Name
+}
diff --git a/addrs/provider.go b/addrs/provider.go
new file mode 100644
index 000000000..18aca141e
--- /dev/null
+++ b/addrs/provider.go
@@ -0,0 +1,302 @@
+package addrs
+
+import (
+	"fmt"
+	"strings"
+
+	"golang.org/x/net/idna"
+
+	"github.com/hashicorp/hcl/v2"
+	svchost "github.com/hashicorp/terraform-svchost"
+	"github.com/hashicorp/terraform/tfdiags"
+)
+
+// Provider encapsulates a single provider type, uniquely identified by its
+// registry hostname, namespace, and type name.
+type Provider struct {
+	Type      string
+	Namespace string
+	Hostname  svchost.Hostname
+}
+
+// DefaultRegistryHost is the hostname used for provider addresses that do
+// not have an explicit hostname.
+const DefaultRegistryHost = svchost.Hostname("registry.terraform.io")
+
+// LegacyProviderNamespace is the special string used in the Namespace field
+// of type Provider to mark a legacy provider address. This special namespace
+// value would normally be invalid, and can be used only when the hostname is
+// DefaultRegistryHost because that host owns the mapping from legacy name to
+// FQN.
+const LegacyProviderNamespace = "-"
+
+// String returns an FQN string, intended for use in output.
+func (pt Provider) String() string {
+	if pt.IsZero() {
+		panic("called String on zero-value addrs.Provider")
+	}
+	return pt.Hostname.ForDisplay() + "/" + pt.Namespace + "/" + pt.Type
+}
+
+// NewProvider constructs a provider address from its parts, and normalizes
+// the namespace and type parts to lowercase using unicode case folding rules
+// so that resulting addrs.Provider values can be compared using standard
+// Go equality rules (==).
+//
+// The hostname is given as a svchost.Hostname, which is required by the
+// contract of that type to have already been normalized for equality testing.
+//
+// This function will panic if the given namespace or type name are not valid.
+// When accepting namespace or type values from outside the program, use
+// ParseProviderPart first to check that the given value is valid.
+func NewProvider(hostname svchost.Hostname, namespace, typeName string) Provider {
+	if namespace == LegacyProviderNamespace {
+		// Legacy provider addresses must always be created via
+		// NewLegacyProvider so that we can use static analysis to find
+		// codepaths still working with those.
+		panic("attempt to create legacy provider address using NewProvider; use NewLegacyProvider instead")
+	}
+
+	return Provider{
+		Type:      MustParseProviderPart(typeName),
+		Namespace: MustParseProviderPart(namespace),
+		Hostname:  hostname,
+	}
+}
+
+// NewDefaultProvider returns the default address of a HashiCorp-maintained,
+// Registry-hosted provider.
+func NewDefaultProvider(name string) Provider { + return Provider{ + Type: MustParseProviderPart(name), + Namespace: "hashicorp", + Hostname: DefaultRegistryHost, + } +} + +// NewLegacyProvider returns a mock address for a provider. +// This will be removed when ProviderType is fully integrated. +func NewLegacyProvider(name string) Provider { + return Provider{ + // We intentionally don't normalize and validate the legacy names, + // because existing code expects legacy provider names to pass through + // verbatim, even if not compliant with our new naming rules. + Type: name, + Namespace: LegacyProviderNamespace, + Hostname: DefaultRegistryHost, + } +} + +// LegacyString returns the provider type, which is frequently used +// interchangeably with provider name. This function can and should be removed +// when provider type is fully integrated. As a safeguard for future +// refactoring, this function panics if the Provider is not a legacy provider. +func (pt Provider) LegacyString() string { + if pt.IsZero() { + panic("called LegacyString on zero-value addrs.Provider") + } + if pt.Namespace != LegacyProviderNamespace { + panic(pt.String() + " is not a legacy addrs.Provider") + } + return pt.Type +} + +// IsZero returns true if the receiver is the zero value of addrs.Provider. +// +// The zero value is not a valid addrs.Provider and calling other methods on +// such a value is likely to either panic or otherwise misbehave. +func (pt Provider) IsZero() bool { + return pt == Provider{} +} + +// LessThan returns true if the receiver should sort before the other given +// address in an ordered list of provider addresses. +// +// This ordering is an arbitrary one just to allow deterministic results from +// functions that would otherwise have no natural ordering. It's subject +// to change in future. +func (pt Provider) LessThan(other Provider) bool { + switch { + case pt.Hostname != other.Hostname: + return pt.Hostname < other.Hostname + case pt.Namespace != other.Namespace: + return pt.Namespace < other.Namespace + default: + return pt.Type < other.Type + } +} + +// ParseProviderSourceString parses the source attribute and returns a provider. +// This is intended primarily to parse the FQN-like strings returned by +// terraform-config-inspect. 
+//
+// The following are valid source string formats:
+//     name
+//     namespace/name
+//     hostname/namespace/name
+func ParseProviderSourceString(str string) (Provider, tfdiags.Diagnostics) {
+	var ret Provider
+	var diags tfdiags.Diagnostics
+
+	// split the source string into individual components
+	parts := strings.Split(str, "/")
+	if len(parts) == 0 || len(parts) > 3 {
+		diags = diags.Append(&hcl.Diagnostic{
+			Severity: hcl.DiagError,
+			Summary:  "Invalid provider source string",
+			Detail:   `The "source" attribute must be in the format "[hostname/][namespace/]name"`,
+		})
+		return ret, diags
+	}
+
+	// check for an invalid empty string in any part
+	for i := range parts {
+		if parts[i] == "" {
+			diags = diags.Append(&hcl.Diagnostic{
+				Severity: hcl.DiagError,
+				Summary:  "Invalid provider source string",
+				Detail:   `The "source" attribute must be in the format "[hostname/][namespace/]name"`,
+			})
+			return ret, diags
+		}
+	}
+
+	// check the 'name' portion, which is always the last part
+	givenName := parts[len(parts)-1]
+	name, err := ParseProviderPart(givenName)
+	if err != nil {
+		diags = diags.Append(&hcl.Diagnostic{
+			Severity: hcl.DiagError,
+			Summary:  "Invalid provider type",
+			Detail:   fmt.Sprintf(`Invalid provider type %q in source %q: %s`, givenName, str, err),
+		})
+		return ret, diags
+	}
+	ret.Type = name
+	ret.Hostname = DefaultRegistryHost
+
+	if len(parts) == 1 {
+		// FIXME: update this to NewDefaultProvider in the provider source release
+		return NewLegacyProvider(parts[0]), diags
+	}
+
+	if len(parts) >= 2 {
+		// the namespace is always the second-to-last part
+		givenNamespace := parts[len(parts)-2]
+		if givenNamespace == LegacyProviderNamespace {
+			// For now we're tolerating legacy provider addresses until we've
+			// finished updating the rest of the codebase to no longer use them,
+			// or else we'd get errors round-tripping through legacy subsystems.
+			ret.Namespace = LegacyProviderNamespace
+		} else {
+			namespace, err := ParseProviderPart(givenNamespace)
+			if err != nil {
+				diags = diags.Append(&hcl.Diagnostic{
+					Severity: hcl.DiagError,
+					Summary:  "Invalid provider namespace",
+					Detail:   fmt.Sprintf(`Invalid provider namespace %q in source %q: %s`, givenNamespace, str, err),
+				})
+				return Provider{}, diags
+			}
+			ret.Namespace = namespace
+		}
+	}
+
+	// Final Case: 3 parts
+	if len(parts) == 3 {
+		// the hostname is always the first part in a three-part source string
+		hn, err := svchost.ForComparison(parts[0])
+		if err != nil {
+			diags = diags.Append(&hcl.Diagnostic{
+				Severity: hcl.DiagError,
+				Summary:  "Invalid provider source hostname",
+				Detail:   fmt.Sprintf(`Invalid provider source hostname %q in source %q: %s`, parts[0], str, err),
+			})
+			return Provider{}, diags
+		}
+		ret.Hostname = hn
+	}
+
+	if ret.Namespace == LegacyProviderNamespace && ret.Hostname != DefaultRegistryHost {
+		// Legacy provider addresses must always be on the default registry
+		// host, because the default registry host decides what actual FQN
+		// each one maps to.
+		diags = diags.Append(&hcl.Diagnostic{
+			Severity: hcl.DiagError,
+			Summary:  "Invalid provider namespace",
+			Detail:   "The legacy provider namespace \"-\" can be used only with hostname " + DefaultRegistryHost.ForDisplay() + ".",
+		})
+		return Provider{}, diags
+	}
+
+	return ret, diags
+}
+
+// ParseProviderPart processes an addrs.Provider namespace or type string
+// provided by an end-user, producing a normalized version if possible or
+// an error if the string contains invalid characters.
+// +// A provider part is processed in the same way as an individual label in a DNS +// domain name: it is transformed to lowercase per the usual DNS case mapping +// and normalization rules and may contain only letters, digits, and dashes. +// Additionally, dashes may not appear at the start or end of the string. +// +// These restrictions are intended to allow these names to appear in fussy +// contexts such as directory/file names on case-insensitive filesystems, +// repository names on GitHub, etc. We're using the DNS rules in particular, +// rather than some similar rules defined locally, because the hostname part +// of an addrs.Provider is already a hostname and it's ideal to use exactly +// the same case folding and normalization rules for all of the parts. +// +// In practice a provider type string conventionally does not contain dashes +// either. Such names are permitted, but providers with such type names will be +// hard to use because their resource type names will not be able to contain +// the provider type name and thus each resource will need an explicit provider +// address specified. (A real-world example of such a provider is the +// "google-beta" variant of the GCP provider, which has resource types that +// start with the "google_" prefix instead.) +// +// It's valid to pass the result of this function as the argument to a +// subsequent call, in which case the result will be identical. +func ParseProviderPart(given string) (string, error) { + if len(given) == 0 { + return "", fmt.Errorf("must have at least one character") + } + + // We're going to process the given name using the same "IDNA" library we + // use for the hostname portion, since it already implements the case + // folding rules we want. + // + // The idna library doesn't expose individual label parsing directly, but + // once we've verified it doesn't contain any dots we can just treat it + // like a top-level domain for this library's purposes. + if strings.ContainsRune(given, '.') { + return "", fmt.Errorf("dots are not allowed") + } + + // We don't allow names containing multiple consecutive dashes, just as + // a matter of preference: they look weird, confusing, or incorrect. + // This also, as a side-effect, prevents the use of the "punycode" + // indicator prefix "xn--" that would cause the IDNA library to interpret + // the given name as punycode, because that would be weird and unexpected. + if strings.Contains(given, "--") { + return "", fmt.Errorf("cannot use multiple consecutive dashes") + } + + result, err := idna.Lookup.ToUnicode(given) + if err != nil { + return "", fmt.Errorf("must contain only letters, digits, and dashes, and may not use leading or trailing dashes") + } + + return result, nil +} + +// MustParseProviderPart is a wrapper around ParseProviderPart that panics if +// it returns an error. +func MustParseProviderPart(given string) string { + result, err := ParseProviderPart(given) + if err != nil { + panic(err.Error()) + } + return result +} diff --git a/addrs/provider_config.go b/addrs/provider_config.go index aaef1d3a4..aba6afceb 100644 --- a/addrs/provider_config.go +++ b/addrs/provider_config.go @@ -4,153 +4,104 @@ import ( "fmt" "github.com/hashicorp/terraform/tfdiags" + "github.com/zclconf/go-cty/cty" "github.com/hashicorp/hcl/v2" "github.com/hashicorp/hcl/v2/hclsyntax" ) -// ProviderConfig is the address of a provider configuration. 
-type ProviderConfig struct {
-	Type string
+// ProviderConfig is an interface type whose dynamic type can be either
+// LocalProviderConfig or AbsProviderConfig, in order to represent situations
+// where a value might either be module-local or absolute but the decision
+// cannot be made until runtime.
+//
+// Where possible, use either LocalProviderConfig or AbsProviderConfig directly
+// instead, to make intent more clear. ProviderConfig can be used only in
+// situations where the recipient of the value has some out-of-band way to
+// determine a "current module" to use if the value turns out to be
+// a LocalProviderConfig.
+//
+// Recipients of non-nil ProviderConfig values that actually need
+// AbsProviderConfig values should call ResolveAbsProviderAddr on the
+// *configs.Config value representing the root module configuration, which
+// handles the translation from local to fully-qualified using mapping tables
+// defined in the configuration.
+//
+// Recipients of a ProviderConfig value can assume it can contain only a
+// LocalProviderConfig value, an AbsProviderConfig value, or nil to represent
+// the absence of a provider config in situations where that is meaningful.
+type ProviderConfig interface {
+	providerConfig()
+}
+
+// LocalProviderConfig is the address of a provider configuration from the
+// perspective of references in a particular module.
+//
+// Finding the corresponding AbsProviderConfig will require looking up the
+// LocalName in the providers table in the module's configuration; there is
+// no syntax-only translation between these types.
+type LocalProviderConfig struct {
+	LocalName string
 
 	// If not empty, Alias identifies which non-default (aliased) provider
 	// configuration this address refers to.
 	Alias string
 }
 
-// NewDefaultProviderConfig returns the address of the default (un-aliased)
-// configuration for the provider with the given type name.
-func NewDefaultProviderConfig(typeName string) ProviderConfig {
-	return ProviderConfig{
-		Type: typeName,
+var _ ProviderConfig = LocalProviderConfig{}
+
+// NewDefaultLocalProviderConfig returns the address of the default (un-aliased)
+// configuration for the provider with the given local type name.
+func NewDefaultLocalProviderConfig(localName string) LocalProviderConfig {
+	return LocalProviderConfig{
+		LocalName: localName,
 	}
 }
 
-// ParseProviderConfigCompact parses the given absolute traversal as a relative
-// provider address in compact form. The following are examples of traversals
-// that can be successfully parsed as compact relative provider configuration
-// addresses:
-//
-//     aws
-//     aws.foo
-//
-// This function will panic if given a relative traversal.
-//
-// If the returned diagnostics contains errors then the result value is invalid
-// and must not be used.
-func ParseProviderConfigCompact(traversal hcl.Traversal) (ProviderConfig, tfdiags.Diagnostics) {
-	var diags tfdiags.Diagnostics
-	ret := ProviderConfig{
-		Type: traversal.RootName(),
-	}
+// providerConfig implements addrs.ProviderConfig.
+func (pc LocalProviderConfig) providerConfig() {}
 
-	if len(traversal) < 2 {
-		// Just a type name, then.
- return ret, diags - } - - aliasStep := traversal[1] - switch ts := aliasStep.(type) { - case hcl.TraverseAttr: - ret.Alias = ts.Name - return ret, diags - default: - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid provider configuration address", - Detail: "The provider type name must either stand alone or be followed by an alias name separated with a dot.", - Subject: aliasStep.SourceRange().Ptr(), - }) - } - - if len(traversal) > 2 { - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid provider configuration address", - Detail: "Extraneous extra operators after provider configuration address.", - Subject: traversal[2:].SourceRange().Ptr(), - }) - } - - return ret, diags -} - -// ParseProviderConfigCompactStr is a helper wrapper around ParseProviderConfigCompact -// that takes a string and parses it with the HCL native syntax traversal parser -// before interpreting it. -// -// This should be used only in specialized situations since it will cause the -// created references to not have any meaningful source location information. -// If a reference string is coming from a source that should be identified in -// error messages then the caller should instead parse it directly using a -// suitable function from the HCL API and pass the traversal itself to -// ParseProviderConfigCompact. -// -// Error diagnostics are returned if either the parsing fails or the analysis -// of the traversal fails. There is no way for the caller to distinguish the -// two kinds of diagnostics programmatically. If error diagnostics are returned -// then the returned address is invalid. -func ParseProviderConfigCompactStr(str string) (ProviderConfig, tfdiags.Diagnostics) { - var diags tfdiags.Diagnostics - - traversal, parseDiags := hclsyntax.ParseTraversalAbs([]byte(str), "", hcl.Pos{Line: 1, Column: 1}) - diags = diags.Append(parseDiags) - if parseDiags.HasErrors() { - return ProviderConfig{}, diags - } - - addr, addrDiags := ParseProviderConfigCompact(traversal) - diags = diags.Append(addrDiags) - return addr, diags -} - -// Absolute returns an AbsProviderConfig from the receiver and the given module -// instance address. -func (pc ProviderConfig) Absolute(module ModuleInstance) AbsProviderConfig { - return AbsProviderConfig{ - Module: module, - ProviderConfig: pc, - } -} - -func (pc ProviderConfig) String() string { - if pc.Type == "" { +func (pc LocalProviderConfig) String() string { + if pc.LocalName == "" { // Should never happen; always indicates a bug return "provider." } if pc.Alias != "" { - return fmt.Sprintf("provider.%s.%s", pc.Type, pc.Alias) + return fmt.Sprintf("provider.%s.%s", pc.LocalName, pc.Alias) } - return "provider." + pc.Type + return "provider." + pc.LocalName } // StringCompact is an alternative to String that returns the form that can // be parsed by ParseProviderConfigCompact, without the "provider." prefix. -func (pc ProviderConfig) StringCompact() string { +func (pc LocalProviderConfig) StringCompact() string { if pc.Alias != "" { - return fmt.Sprintf("%s.%s", pc.Type, pc.Alias) + return fmt.Sprintf("%s.%s", pc.LocalName, pc.Alias) } - return pc.Type + return pc.LocalName } // AbsProviderConfig is the absolute address of a provider configuration // within a particular module instance. 
type AbsProviderConfig struct { - Module ModuleInstance - ProviderConfig ProviderConfig + Module ModuleInstance + Provider Provider + Alias string } +var _ ProviderConfig = AbsProviderConfig{} + // ParseAbsProviderConfig parses the given traversal as an absolute provider // address. The following are examples of traversals that can be successfully // parsed as absolute provider configuration addresses: // -// provider.aws -// provider.aws.foo -// module.bar.provider.aws -// module.bar.module.baz.provider.aws.foo -// module.foo[1].provider.aws.foo +// provider["registry.terraform.io/hashicorp/aws"] +// provider["registry.terraform.io/hashicorp/aws"].foo +// module.bar.provider["registry.terraform.io/hashicorp/aws"] +// module.bar.module.baz.provider["registry.terraform.io/hashicorp/aws"].foo +// module.foo[1].provider["registry.terraform.io/hashicorp/aws"].foo // // This type of address is used, for example, to record the relationships // between resources and provider configurations in the state structure. @@ -180,8 +131,22 @@ func ParseAbsProviderConfig(traversal hcl.Traversal) (AbsProviderConfig, tfdiags return ret, diags } - if tt, ok := remain[1].(hcl.TraverseAttr); ok { - ret.ProviderConfig.Type = tt.Name + if tt, ok := remain[1].(hcl.TraverseIndex); ok { + if !tt.Key.Type().Equals(cty.String) { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Invalid provider configuration address", + Detail: "The prefix \"provider.\" must be followed by a provider type name.", + Subject: remain[1].SourceRange().Ptr(), + }) + return ret, diags + } + p, sourceDiags := ParseProviderSourceString(tt.Key.AsString()) + ret.Provider = p + if sourceDiags.HasErrors() { + diags = diags.Append(sourceDiags) + return ret, diags + } } else { diags = diags.Append(&hcl.Diagnostic{ Severity: hcl.DiagError, @@ -194,7 +159,7 @@ func ParseAbsProviderConfig(traversal hcl.Traversal) (AbsProviderConfig, tfdiags if len(remain) == 3 { if tt, ok := remain[2].(hcl.TraverseAttr); ok { - ret.ProviderConfig.Alias = tt.Name + ret.Alias = tt.Name } else { diags = diags.Append(&hcl.Diagnostic{ Severity: hcl.DiagError, @@ -226,6 +191,18 @@ func ParseAbsProviderConfig(traversal hcl.Traversal) (AbsProviderConfig, tfdiags // the returned address is invalid. func ParseAbsProviderConfigStr(str string) (AbsProviderConfig, tfdiags.Diagnostics) { var diags tfdiags.Diagnostics + traversal, parseDiags := hclsyntax.ParseTraversalAbs([]byte(str), "", hcl.Pos{Line: 1, Column: 1}) + diags = diags.Append(parseDiags) + if parseDiags.HasErrors() { + return AbsProviderConfig{}, diags + } + addr, addrDiags := ParseAbsProviderConfig(traversal) + diags = diags.Append(addrDiags) + return addr, diags +} + +func ParseLegacyAbsProviderConfigStr(str string) (AbsProviderConfig, tfdiags.Diagnostics) { + var diags tfdiags.Diagnostics traversal, parseDiags := hclsyntax.ParseTraversalAbs([]byte(str), "", hcl.Pos{Line: 1, Column: 1}) diags = diags.Append(parseDiags) @@ -233,34 +210,101 @@ func ParseAbsProviderConfigStr(str string) (AbsProviderConfig, tfdiags.Diagnosti return AbsProviderConfig{}, diags } - addr, addrDiags := ParseAbsProviderConfig(traversal) + addr, addrDiags := ParseLegacyAbsProviderConfig(traversal) diags = diags.Append(addrDiags) return addr, diags } -// ProviderConfigDefault returns the address of the default provider config -// of the given type inside the recieving module instance. 
-func (m ModuleInstance) ProviderConfigDefault(name string) AbsProviderConfig {
+// ParseLegacyAbsProviderConfig parses the given traversal as an absolute
+// provider address. The following are examples of traversals that can be
+// successfully parsed as legacy absolute provider configuration addresses:
+//
+//     provider.aws
+//     provider.aws.foo
+//     module.bar.provider.aws
+//     module.bar.module.baz.provider.aws.foo
+//     module.foo[1].provider.aws.foo
+//
+// This type of address is used in legacy state and may appear in state v4 if
+// the provider config addresses have not been normalized to include provider
+// FQN.
+func ParseLegacyAbsProviderConfig(traversal hcl.Traversal) (AbsProviderConfig, tfdiags.Diagnostics) {
+	modInst, remain, diags := parseModuleInstancePrefix(traversal)
+	ret := AbsProviderConfig{
+		Module: modInst,
+	}
+
+	if len(remain) < 2 || remain.RootName() != "provider" {
+		diags = diags.Append(&hcl.Diagnostic{
+			Severity: hcl.DiagError,
+			Summary:  "Invalid provider configuration address",
+			Detail:   "Provider address must begin with \"provider.\", followed by a provider type name.",
+			Subject:  remain.SourceRange().Ptr(),
+		})
+		return ret, diags
+	}
+	if len(remain) > 3 {
+		diags = diags.Append(&hcl.Diagnostic{
+			Severity: hcl.DiagError,
+			Summary:  "Invalid provider configuration address",
+			Detail:   "Extraneous operators after provider configuration alias.",
+			Subject:  hcl.Traversal(remain[3:]).SourceRange().Ptr(),
+		})
+		return ret, diags
+	}
+
+	// We always assume legacy-style providers in legacy state
+	if tt, ok := remain[1].(hcl.TraverseAttr); ok {
+		ret.Provider = NewLegacyProvider(tt.Name)
+	} else {
+		diags = diags.Append(&hcl.Diagnostic{
+			Severity: hcl.DiagError,
+			Summary:  "Invalid provider configuration address",
+			Detail:   "The prefix \"provider.\" must be followed by a provider type name.",
+			Subject:  remain[1].SourceRange().Ptr(),
+		})
+		return ret, diags
+	}
+
+	if len(remain) == 3 {
+		if tt, ok := remain[2].(hcl.TraverseAttr); ok {
+			ret.Alias = tt.Name
+		} else {
+			diags = diags.Append(&hcl.Diagnostic{
+				Severity: hcl.DiagError,
+				Summary:  "Invalid provider configuration address",
+				Detail:   "Provider type name must be followed by a configuration alias name.",
+				Subject:  remain[2].SourceRange().Ptr(),
+			})
+			return ret, diags
+		}
+	}
+
+	return ret, diags
+}
+
+// ProviderConfigDefault returns the address of the default provider config of
+// the given type inside the receiving module instance.
+func (m ModuleInstance) ProviderConfigDefault(provider Provider) AbsProviderConfig {
 	return AbsProviderConfig{
-		Module: m,
-		ProviderConfig: ProviderConfig{
-			Type: name,
-		},
+		Module:   m,
+		Provider: provider,
 	}
 }
 
-// ProviderConfigAliased returns the address of an aliased provider config
-// of with given type and alias inside the recieving module instance.
-func (m ModuleInstance) ProviderConfigAliased(name, alias string) AbsProviderConfig {
+// ProviderConfigAliased returns the address of an aliased provider config of
+// the given type and alias inside the receiving module instance.
+func (m ModuleInstance) ProviderConfigAliased(provider Provider, alias string) AbsProviderConfig {
 	return AbsProviderConfig{
-		Module: m,
-		ProviderConfig: ProviderConfig{
-			Type:  name,
-			Alias: alias,
-		},
+		Module:   m,
+		Provider: provider,
+		Alias:    alias,
 	}
 }
 
+// providerConfig implements addrs.ProviderConfig.
+func (pc AbsProviderConfig) providerConfig() {}
+
 // Inherited returns an address that the receiving configuration address might
The second bool return value indicates if // such inheritance is possible, and thus whether the returned address is valid. @@ -269,9 +313,9 @@ func (m ModuleInstance) ProviderConfigAliased(name, alias string) AbsProviderCon // other than the root module. Even if a valid address is returned, inheritence // may not be performed for other reasons, such as if the calling module // provided explicit provider configurations within the call for this module. -// The ProviderTransformer graph transform in the main terraform module has -// the authoritative logic for provider inheritance, and this method is here -// mainly just for its benefit. +// The ProviderTransformer graph transform in the main terraform module has the +// authoritative logic for provider inheritance, and this method is here mainly +// just for its benefit. func (pc AbsProviderConfig) Inherited() (AbsProviderConfig, bool) { // Can't inherit if we're already in the root. if len(pc.Module) == 0 { @@ -279,19 +323,52 @@ func (pc AbsProviderConfig) Inherited() (AbsProviderConfig, bool) { } // Can't inherit if we have an alias. - if pc.ProviderConfig.Alias != "" { + if pc.Alias != "" { return AbsProviderConfig{}, false } // Otherwise, we might inherit from a configuration with the same - // provider name in the parent module instance. + // provider type in the parent module instance. parentMod := pc.Module.Parent() - return pc.ProviderConfig.Absolute(parentMod), true + return AbsProviderConfig{ + Module: parentMod, + Provider: pc.Provider, + }, true + } -func (pc AbsProviderConfig) String() string { - if len(pc.Module) == 0 { - return pc.ProviderConfig.String() +// LegacyString() returns a legacy-style AbsProviderConfig string and should only be used for legacy state shimming. +func (pc AbsProviderConfig) LegacyString() string { + if pc.Alias != "" { + if len(pc.Module) == 0 { + return fmt.Sprintf("%s.%s.%s", "provider", pc.Provider.LegacyString(), pc.Alias) + } else { + return fmt.Sprintf("%s.%s.%s.%s", pc.Module.String(), "provider", pc.Provider.LegacyString(), pc.Alias) + } } - return fmt.Sprintf("%s.%s", pc.Module.String(), pc.ProviderConfig.String()) + if len(pc.Module) == 0 { + return fmt.Sprintf("%s.%s", "provider", pc.Provider.LegacyString()) + } + return fmt.Sprintf("%s.%s.%s", pc.Module.String(), "provider", pc.Provider.LegacyString()) +} + +// String() returns a string representation of an AbsProviderConfig in the following format: +// +// provider["example.com/namespace/name"] +// provider["example.com/namespace/name"].alias +// module.module-name.provider["example.com/namespace/name"] +// module.module-name.provider["example.com/namespace/name"].alias +func (pc AbsProviderConfig) String() string { + if pc.Alias != "" { + if len(pc.Module) == 0 { + return fmt.Sprintf("%s[%q].%s", "provider", pc.Provider.String(), pc.Alias) + } else { + return fmt.Sprintf("%s.%s[%q].%s", pc.Module.String(), "provider", pc.Provider.String(), pc.Alias) + } + } + if len(pc.Module) == 0 { + return fmt.Sprintf("%s[%q]", "provider", pc.Provider.String()) + } + + return fmt.Sprintf("%s.%s[%q]", pc.Module.String(), "provider", pc.Provider.String()) } diff --git a/addrs/provider_config_test.go b/addrs/provider_config_test.go index 1d92a8348..a48129e57 100644 --- a/addrs/provider_config_test.go +++ b/addrs/provider_config_test.go @@ -9,68 +9,6 @@ import ( "github.com/hashicorp/hcl/v2/hclsyntax" ) -func TestParseProviderConfigCompact(t *testing.T) { - tests := []struct { - Input string - Want ProviderConfig - WantDiag string - }{ - { - `aws`, - 
ProviderConfig{ - Type: "aws", - }, - ``, - }, - { - `aws.foo`, - ProviderConfig{ - Type: "aws", - Alias: "foo", - }, - ``, - }, - { - `aws["foo"]`, - ProviderConfig{}, - `The provider type name must either stand alone or be followed by an alias name separated with a dot.`, - }, - } - - for _, test := range tests { - t.Run(test.Input, func(t *testing.T) { - traversal, parseDiags := hclsyntax.ParseTraversalAbs([]byte(test.Input), "", hcl.Pos{}) - if len(parseDiags) != 0 { - t.Errorf("unexpected diagnostics during parse") - for _, diag := range parseDiags { - t.Logf("- %s", diag) - } - return - } - - got, diags := ParseProviderConfigCompact(traversal) - - if test.WantDiag != "" { - if len(diags) != 1 { - t.Fatalf("got %d diagnostics; want 1", len(diags)) - } - gotDetail := diags[0].Description().Detail - if gotDetail != test.WantDiag { - t.Fatalf("wrong diagnostic detail\ngot: %s\nwant: %s", gotDetail, test.WantDiag) - } - return - } else { - if len(diags) != 0 { - t.Fatalf("got %d diagnostics; want 0", len(diags)) - } - } - - for _, problem := range deep.Equal(got, test.Want) { - t.Error(problem) - } - }) - } -} func TestParseAbsProviderConfig(t *testing.T) { tests := []struct { Input string @@ -78,57 +16,65 @@ func TestParseAbsProviderConfig(t *testing.T) { WantDiag string }{ { - `provider.aws`, + `provider["registry.terraform.io/hashicorp/aws"]`, AbsProviderConfig{ Module: RootModuleInstance, - ProviderConfig: ProviderConfig{ - Type: "aws", + Provider: Provider{ + Type: "aws", + Namespace: "hashicorp", + Hostname: "registry.terraform.io", }, }, ``, }, { - `provider.aws.foo`, + `provider["registry.terraform.io/hashicorp/aws"].foo`, AbsProviderConfig{ Module: RootModuleInstance, - ProviderConfig: ProviderConfig{ - Type: "aws", - Alias: "foo", + Provider: Provider{ + Type: "aws", + Namespace: "hashicorp", + Hostname: "registry.terraform.io", }, + Alias: "foo", }, ``, }, { - `module.baz.provider.aws`, + `module.baz.provider["registry.terraform.io/hashicorp/aws"]`, AbsProviderConfig{ Module: ModuleInstance{ { Name: "baz", }, }, - ProviderConfig: ProviderConfig{ - Type: "aws", + Provider: Provider{ + Type: "aws", + Namespace: "hashicorp", + Hostname: "registry.terraform.io", }, }, ``, }, { - `module.baz.provider.aws.foo`, + `module.baz.provider["registry.terraform.io/hashicorp/aws"].foo`, AbsProviderConfig{ Module: ModuleInstance{ { Name: "baz", }, }, - ProviderConfig: ProviderConfig{ - Type: "aws", - Alias: "foo", + Provider: Provider{ + Type: "aws", + Namespace: "hashicorp", + Hostname: "registry.terraform.io", }, + Alias: "foo", }, ``, }, { - `module.baz["foo"].provider.aws`, + `module.baz["foo"].provider["registry.terraform.io/hashicorp/aws"]`, AbsProviderConfig{ Module: ModuleInstance{ { @@ -136,14 +82,16 @@ func TestParseAbsProviderConfig(t *testing.T) { InstanceKey: StringKey("foo"), }, }, - ProviderConfig: ProviderConfig{ - Type: "aws", + Provider: Provider{ + Type: "aws", + Namespace: "hashicorp", + Hostname: "registry.terraform.io", }, }, ``, }, { - `module.baz[1].provider.aws`, + `module.baz[1].provider["registry.terraform.io/hashicorp/aws"]`, AbsProviderConfig{ Module: ModuleInstance{ { @@ -151,14 +99,16 @@ func TestParseAbsProviderConfig(t *testing.T) { InstanceKey: IntKey(1), }, }, - ProviderConfig: ProviderConfig{ - Type: "aws", + Provider: Provider{ + Type: "aws", + Namespace: "hashicorp", + Hostname: "registry.terraform.io", }, }, ``, }, { - `module.baz[1].module.bar.provider.aws`, + `module.baz[1].module.bar.provider["registry.terraform.io/hashicorp/aws"]`, AbsProviderConfig{ 
Module: ModuleInstance{ { @@ -169,8 +119,10 @@ func TestParseAbsProviderConfig(t *testing.T) { Name: "bar", }, }, - ProviderConfig: ProviderConfig{ - Type: "aws", + Provider: Provider{ + Type: "aws", + Namespace: "hashicorp", + Hostname: "registry.terraform.io", }, }, ``, @@ -196,12 +148,7 @@ func TestParseAbsProviderConfig(t *testing.T) { `Extraneous operators after provider configuration alias.`, }, { - `provider["aws"]`, - AbsProviderConfig{}, - `The prefix "provider." must be followed by a provider type name.`, - }, - { - `provider.aws["foo"]`, + `provider["aws"]["foo"]`, AbsProviderConfig{}, `Provider type name must be followed by a configuration alias name.`, }, @@ -211,9 +158,9 @@ func TestParseAbsProviderConfig(t *testing.T) { `Provider address must begin with "provider.", followed by a provider type name.`, }, { - `module.foo["provider"]`, + `provider[0]`, AbsProviderConfig{}, - `Provider address must begin with "provider.", followed by a provider type name.`, + `The prefix "provider." must be followed by a provider type name.`, }, } @@ -251,3 +198,93 @@ func TestParseAbsProviderConfig(t *testing.T) { }) } } + +func TestAbsProviderConfigString(t *testing.T) { + tests := []struct { + Config AbsProviderConfig + Want string + }{ + { + AbsProviderConfig{ + Module: RootModuleInstance, + Provider: NewLegacyProvider("foo"), + }, + `provider["registry.terraform.io/-/foo"]`, + }, + { + AbsProviderConfig{ + Module: RootModuleInstance.Child("child_module", NoKey), + Provider: NewLegacyProvider("foo"), + }, + `module.child_module.provider["registry.terraform.io/-/foo"]`, + }, + { + AbsProviderConfig{ + Module: RootModuleInstance, + Alias: "bar", + Provider: NewLegacyProvider("foo"), + }, + `provider["registry.terraform.io/-/foo"].bar`, + }, + { + AbsProviderConfig{ + Module: RootModuleInstance.Child("child_module", NoKey), + Alias: "bar", + Provider: NewLegacyProvider("foo"), + }, + `module.child_module.provider["registry.terraform.io/-/foo"].bar`, + }, + } + + for _, test := range tests { + got := test.Config.String() + if got != test.Want { + t.Errorf("wrong result. Got %s, want %s\n", got, test.Want) + } + } +} + +func TestAbsProviderConfigLegacyString(t *testing.T) { + tests := []struct { + Config AbsProviderConfig + Want string + }{ + { + AbsProviderConfig{ + Module: RootModuleInstance, + Provider: NewLegacyProvider("foo"), + }, + `provider.foo`, + }, + { + AbsProviderConfig{ + Module: RootModuleInstance.Child("child_module", NoKey), + Provider: NewLegacyProvider("foo"), + }, + `module.child_module.provider.foo`, + }, + { + AbsProviderConfig{ + Module: RootModuleInstance, + Alias: "bar", + Provider: NewLegacyProvider("foo"), + }, + `provider.foo.bar`, + }, + { + AbsProviderConfig{ + Module: RootModuleInstance.Child("child_module", NoKey), + Alias: "bar", + Provider: NewLegacyProvider("foo"), + }, + `module.child_module.provider.foo.bar`, + }, + } + + for _, test := range tests { + got := test.Config.LegacyString() + if got != test.Want { + t.Errorf("wrong result. 
Got %s, want %s\n", got, test.Want) + } + } +} diff --git a/addrs/provider_test.go b/addrs/provider_test.go new file mode 100644 index 000000000..ab4b0864d --- /dev/null +++ b/addrs/provider_test.go @@ -0,0 +1,245 @@ +package addrs + +import ( + "testing" + + "github.com/go-test/deep" + svchost "github.com/hashicorp/terraform-svchost" +) + +func TestParseProviderSourceStr(t *testing.T) { + tests := map[string]struct { + Want Provider + Err bool + }{ + "registry.terraform.io/hashicorp/aws": { + Provider{ + Type: "aws", + Namespace: "hashicorp", + Hostname: DefaultRegistryHost, + }, + false, + }, + "registry.Terraform.io/HashiCorp/AWS": { + Provider{ + Type: "aws", + Namespace: "hashicorp", + Hostname: DefaultRegistryHost, + }, + false, + }, + "hashicorp/aws": { + Provider{ + Type: "aws", + Namespace: "hashicorp", + Hostname: DefaultRegistryHost, + }, + false, + }, + "HashiCorp/AWS": { + Provider{ + Type: "aws", + Namespace: "hashicorp", + Hostname: DefaultRegistryHost, + }, + false, + }, + "aws": { + Provider{ + Type: "aws", + Namespace: "-", + Hostname: DefaultRegistryHost, + }, + false, + }, + "AWS": { + Provider{ + // No case folding here because we're currently handling this + // as a legacy one. When this changes to be a _default_ + // address in future (registry.terraform.io/hashicorp/aws) + // then we should start applying case folding to it, making + // Type appear as "aws" here instead. + Type: "AWS", + Namespace: "-", + Hostname: DefaultRegistryHost, + }, + false, + }, + "example.com/foo-bar/baz-boop": { + Provider{ + Type: "baz-boop", + Namespace: "foo-bar", + Hostname: svchost.Hostname("example.com"), + }, + false, + }, + "foo-bar/baz-boop": { + Provider{ + Type: "baz-boop", + Namespace: "foo-bar", + Hostname: DefaultRegistryHost, + }, + false, + }, + "localhost:8080/foo/bar": { + Provider{ + Type: "bar", + Namespace: "foo", + Hostname: svchost.Hostname("localhost:8080"), + }, + false, + }, + "example.com/too/many/parts/here": { + Provider{}, + true, + }, + "/too///many//slashes": { + Provider{}, + true, + }, + "///": { + Provider{}, + true, + }, + "badhost!/hashicorp/aws": { + Provider{}, + true, + }, + "example.com/badnamespace!/aws": { + Provider{}, + true, + }, + "example.com/bad--namespace/aws": { + Provider{}, + true, + }, + "example.com/-badnamespace/aws": { + Provider{}, + true, + }, + "example.com/badnamespace-/aws": { + Provider{}, + true, + }, + "example.com/bad.namespace/aws": { + Provider{}, + true, + }, + "example.com/hashicorp/badtype!": { + Provider{}, + true, + }, + "example.com/hashicorp/bad--type": { + Provider{}, + true, + }, + "example.com/hashicorp/-badtype": { + Provider{}, + true, + }, + "example.com/hashicorp/badtype-": { + Provider{}, + true, + }, + "example.com/hashicorp/bad.type": { + Provider{}, + true, + }, + } + + for name, test := range tests { + got, diags := ParseProviderSourceString(name) + for _, problem := range deep.Equal(got, test.Want) { + t.Errorf(problem) + } + if len(diags) > 0 { + if test.Err == false { + t.Errorf("got error, expected success") + } + } else { + if test.Err { + t.Errorf("got success, expected error") + } + } + } +} + +func TestParseProviderPart(t *testing.T) { + tests := map[string]struct { + Want string + Error string + }{ + `foo`: { + `foo`, + ``, + }, + `FOO`: { + `foo`, + ``, + }, + `Foo`: { + `foo`, + ``, + }, + `abc-123`: { + `abc-123`, + ``, + }, + `Испытание`: { + `испытание`, + ``, + }, + `münchen`: { // this is a precomposed u with diaeresis + `münchen`, // this is a precomposed u with diaeresis + ``, + }, + 
+		`münchen`: { // this is a separate u and combining diaeresis
+			`münchen`, // this is a precomposed u with diaeresis
+			``,
+		},
+		`abc--123`: {
+			``,
+			`cannot use multiple consecutive dashes`,
+		},
+		`xn--80akhbyknj4f`: { // this is the punycode form of "испытание", but we don't accept punycode here
+			``,
+			`cannot use multiple consecutive dashes`,
+		},
+		`abc.123`: {
+			``,
+			`dots are not allowed`,
+		},
+		`-abc123`: {
+			``,
+			`must contain only letters, digits, and dashes, and may not use leading or trailing dashes`,
+		},
+		`abc123-`: {
+			``,
+			`must contain only letters, digits, and dashes, and may not use leading or trailing dashes`,
+		},
+		``: {
+			``,
+			`must have at least one character`,
+		},
+	}
+
+	for given, test := range tests {
+		t.Run(given, func(t *testing.T) {
+			got, err := ParseProviderPart(given)
+			if test.Error != "" {
+				if err == nil {
+					t.Errorf("unexpected success\ngot: %s\nwant: %s", err, test.Error)
+				} else if got := err.Error(); got != test.Error {
+					t.Errorf("wrong error\ngot: %s\nwant: %s", got, test.Error)
+				}
+			} else {
+				if err != nil {
+					t.Errorf("unexpected error\ngot: %s\nwant: ", err)
+				} else if got != test.Want {
+					t.Errorf("wrong result\ngot: %s\nwant: %s", got, test.Want)
+				}
+			}
+		})
+	}
+
+}
diff --git a/addrs/provider_type.go b/addrs/provider_type.go
deleted file mode 100644
index 64b8ac869..000000000
--- a/addrs/provider_type.go
+++ /dev/null
@@ -1,7 +0,0 @@
-package addrs
-
-// ProviderType encapsulates a single provider type. In the future this will be
-// extended to include additional fields including Namespace and SourceHost
-type ProviderType struct {
-	Name string
-}
diff --git a/addrs/resource.go b/addrs/resource.go
index 28667708e..94f3c3012 100644
--- a/addrs/resource.go
+++ b/addrs/resource.go
@@ -50,9 +50,9 @@ func (r Resource) Absolute(module ModuleInstance) AbsResource {
 	}
 }
 
-// DefaultProviderConfig returns the address of the provider configuration
-// that should be used for the resource identified by the reciever if it
-// does not have a provider configuration address explicitly set in
+// DefaultProvider returns the address of the provider whose default
+// configuration should be used for the resource identified by the receiver if
+// it does not have a provider configuration address explicitly set in
 // configuration.
 //
 // This method is not able to verify that such a configuration exists, nor
@@ -60,14 +60,18 @@ func (r Resource) Absolute(module ModuleInstance) AbsResource {
 // configurations from parent modules. It just does a static analysis of the
 // receiving address and returns an address to start from, relative to the
 // same module that contains the resource.
-func (r Resource) DefaultProviderConfig() ProviderConfig {
+func (r Resource) DefaultProvider() Provider {
 	typeName := r.Type
 	if under := strings.Index(typeName, "_"); under != -1 {
 		typeName = typeName[:under]
 	}
-	return ProviderConfig{
-		Type: typeName,
-	}
+
+	// TODO: For now we're returning a _legacy_ provider address here
+	// because the rest of Terraform isn't yet prepared to deal with
+	// non-legacy ones. Once we phase out legacy addresses this should
+	// switch to being a _default_ provider address, i.e. one in the
+	// registry.terraform.io/hashicorp/... namespace.
+	return NewLegacyProvider(typeName)
}
 
 // ResourceInstance is an address for a specific instance of a resource.
@@ -253,7 +257,7 @@ func (r AbsResourceInstance) Less(o AbsResourceInstance) bool {
 // resource lifecycle has a slightly different address format.
type ResourceMode rune -//go:generate stringer -type ResourceMode +//go:generate go run golang.org/x/tools/cmd/stringer -type ResourceMode const ( // InvalidResourceMode is the zero value of ResourceMode and is not diff --git a/backend/atlas/state_client_test.go b/backend/atlas/state_client_test.go index 6c370941a..0e4795c70 100644 --- a/backend/atlas/state_client_test.go +++ b/backend/atlas/state_client_test.go @@ -7,6 +7,7 @@ import ( "crypto/tls" "crypto/x509" "encoding/json" + "errors" "net/http" "net/http/httptest" "net/url" @@ -216,17 +217,20 @@ func TestStateClient_UnresolvableConflict(t *testing.T) { if err := terraform.WriteState(state, &stateJson); err != nil { t.Fatalf("err: %s", err) } - doneCh := make(chan struct{}) + errCh := make(chan error) go func() { - defer close(doneCh) + defer close(errCh) if err := client.Put(stateJson.Bytes()); err == nil { - t.Fatal("Expected error from state conflict, got none.") + errCh <- errors.New("expected error from state conflict, got none.") + return } }() select { - case <-doneCh: - // OK + case err := <-errCh: + if err != nil { + t.Fatalf("error from anonymous test goroutine: %s", err) + } case <-time.After(500 * time.Millisecond): t.Fatalf("Timed out after 500ms, probably because retrying infinitely.") } diff --git a/backend/backend.go b/backend/backend.go index 6f220b6b5..268b52f67 100644 --- a/backend/backend.go +++ b/backend/backend.go @@ -196,6 +196,16 @@ type Operation struct { Targets []addrs.Targetable Variables map[string]UnparsedVariableValue + // Some operations use root module variables only opportunistically or + // don't need them at all. If this flag is set, the backend must treat + // all variables as optional and provide an unknown value for any required + // variables that aren't set in order to allow partial evaluation against + // the resulting incomplete context. + // + // This flag is honored only if PlanFile isn't set. If PlanFile is set then + // the variables set in the plan are used instead, and they must be valid. + AllowUnsetVariables bool + // Input/output/control options. 
UIIn terraform.UIInput UIOut terraform.UIOutput diff --git a/backend/init/init.go b/backend/init/init.go index 8f70a1f81..d4a239791 100644 --- a/backend/init/init.go +++ b/backend/init/init.go @@ -5,8 +5,8 @@ package init import ( "sync" + "github.com/hashicorp/terraform-svchost/disco" "github.com/hashicorp/terraform/backend" - "github.com/hashicorp/terraform/svchost/disco" "github.com/hashicorp/terraform/tfdiags" "github.com/zclconf/go-cty/cty" @@ -16,6 +16,7 @@ import ( backendArtifactory "github.com/hashicorp/terraform/backend/remote-state/artifactory" backendAzure "github.com/hashicorp/terraform/backend/remote-state/azure" backendConsul "github.com/hashicorp/terraform/backend/remote-state/consul" + backendCos "github.com/hashicorp/terraform/backend/remote-state/cos" backendEtcdv2 "github.com/hashicorp/terraform/backend/remote-state/etcdv2" backendEtcdv3 "github.com/hashicorp/terraform/backend/remote-state/etcdv3" backendGCS "github.com/hashicorp/terraform/backend/remote-state/gcs" @@ -57,6 +58,7 @@ func Init(services *disco.Disco) { "atlas": func() backend.Backend { return backendAtlas.New() }, "azurerm": func() backend.Backend { return backendAzure.New() }, "consul": func() backend.Backend { return backendConsul.New() }, + "cos": func() backend.Backend { return backendCos.New() }, "etcd": func() backend.Backend { return backendEtcdv2.New() }, "etcdv3": func() backend.Backend { return backendEtcdv3.New() }, "gcs": func() backend.Backend { return backendGCS.New() }, diff --git a/backend/init/init_test.go b/backend/init/init_test.go index 74dbf53aa..2b7571b54 100644 --- a/backend/init/init_test.go +++ b/backend/init/init_test.go @@ -18,6 +18,7 @@ func TestInit_backend(t *testing.T) { {"atlas", "*atlas.Backend"}, {"azurerm", "*azure.Backend"}, {"consul", "*consul.Backend"}, + {"cos", "*cos.Backend"}, {"etcdv3", "*etcd.Backend"}, {"gcs", "*gcs.Backend"}, {"inmem", "*inmem.Backend"}, diff --git a/backend/local/backend.go b/backend/local/backend.go index 62c016c49..e73959499 100644 --- a/backend/local/backend.go +++ b/backend/local/backend.go @@ -14,7 +14,6 @@ import ( "github.com/hashicorp/terraform/backend" "github.com/hashicorp/terraform/command/clistate" "github.com/hashicorp/terraform/configs/configschema" - "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/states/statemgr" "github.com/hashicorp/terraform/terraform" "github.com/hashicorp/terraform/tfdiags" @@ -180,11 +179,6 @@ func (b *Local) Configure(obj cty.Value) tfdiags.Diagnostics { var diags tfdiags.Diagnostics - type Config struct { - Path string `hcl:"path,optional"` - WorkspaceDir string `hcl:"workspace_dir,optional"` - } - if val := obj.GetAttr("path"); !val.IsNull() { p := val.AsString() b.StatePath = p @@ -456,39 +450,6 @@ func (b *Local) Colorize() *colorstring.Colorize { } } -func (b *Local) schemaConfigure(ctx context.Context) error { - d := schema.FromContextBackendConfig(ctx) - - // Set the path if it is set - pathRaw, ok := d.GetOk("path") - if ok { - path := pathRaw.(string) - if path == "" { - return fmt.Errorf("configured path is empty") - } - - b.StatePath = path - b.StateOutPath = path - } - - if raw, ok := d.GetOk("workspace_dir"); ok { - path := raw.(string) - if path != "" { - b.StateWorkspaceDir = path - } - } - - // Legacy name, which ConflictsWith workspace_dir - if raw, ok := d.GetOk("environment_dir"); ok { - path := raw.(string) - if path != "" { - b.StateWorkspaceDir = path - } - } - - return nil -} - // StatePaths returns the StatePath, StateOutPath, and StateBackupPath as 
// configured from the CLI. func (b *Local) StatePaths(name string) (stateIn, stateOut, backupOut string) { diff --git a/backend/local/backend_apply.go b/backend/local/backend_apply.go index 8604d3ff9..8be67f472 100644 --- a/backend/local/backend_apply.go +++ b/backend/local/backend_apply.go @@ -65,9 +65,9 @@ func (b *Local) opApply( // If we're refreshing before apply, perform that if op.PlanRefresh { log.Printf("[INFO] backend/local: apply calling Refresh") - _, err := tfCtx.Refresh() - if err != nil { - diags = diags.Append(err) + _, refreshDiags := tfCtx.Refresh() + diags = diags.Append(refreshDiags) + if diags.HasErrors() { runningOp.Result = backend.OperationFailure b.ShowDiagnostics(diags) return diff --git a/backend/local/backend_apply_test.go b/backend/local/backend_apply_test.go index cbf674761..dccf6f791 100644 --- a/backend/local/backend_apply_test.go +++ b/backend/local/backend_apply_test.go @@ -59,7 +59,7 @@ func TestLocal_applyBasic(t *testing.T) { checkState(t, b.StateOutPath, ` test_instance.foo: ID = yes - provider = provider.test + provider = provider["registry.terraform.io/-/test"] ami = bar `) } @@ -176,7 +176,7 @@ func TestLocal_applyError(t *testing.T) { checkState(t, b.StateOutPath, ` test_instance.foo: ID = foo - provider = provider.test + provider = provider["registry.terraform.io/-/test"] ami = bar `) } @@ -226,7 +226,7 @@ func TestLocal_applyBackendFail(t *testing.T) { checkState(t, "errored.tfstate", ` test_instance.foo: ID = yes - provider = provider.test + provider = provider["registry.terraform.io/-/test"] ami = bar `) } @@ -261,26 +261,6 @@ func testOperationApply(t *testing.T, configDir string) (*backend.Operation, fun }, configCleanup } -// testApplyState is just a common state that we use for testing refresh. -func testApplyState() *terraform.State { - return &terraform.State{ - Version: 2, - Modules: []*terraform.ModuleState{ - &terraform.ModuleState{ - Path: []string{"root"}, - Resources: map[string]*terraform.ResourceState{ - "test_instance.foo": &terraform.ResourceState{ - Type: "test_instance", - Primary: &terraform.InstanceState{ - ID: "bar", - }, - }, - }, - }, - }, - } -} - // applyFixtureSchema returns a schema suitable for processing the // configuration in testdata/apply . This schema should be // assigned to a mock provider named "test". diff --git a/backend/local/backend_local.go b/backend/local/backend_local.go index 58bc4f5cb..4fffc5b66 100644 --- a/backend/local/backend_local.go +++ b/backend/local/backend_local.go @@ -4,10 +4,12 @@ import ( "context" "fmt" "log" + "sort" "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/backend" "github.com/hashicorp/terraform/command/clistate" + "github.com/hashicorp/terraform/configs" "github.com/hashicorp/terraform/configs/configload" "github.com/hashicorp/terraform/plans/planfile" "github.com/hashicorp/terraform/states/statemgr" @@ -103,8 +105,6 @@ func (b *Local) context(op *backend.Operation) (*terraform.Context, *configload. 
// If input asking is enabled, then do that if op.PlanFile == nil && b.OpInput { mode := terraform.InputModeProvider - mode |= terraform.InputModeVar - mode |= terraform.InputModeVarUnset log.Printf("[TRACE] backend/local: requesting interactive input, if necessary") inputDiags := tfCtx.Input(mode) @@ -136,14 +136,27 @@ func (b *Local) contextDirect(op *backend.Operation, opts terraform.ContextOpts) } opts.Config = config - variables, varDiags := backend.ParseVariableValues(op.Variables, config.Module.Variables) + var rawVariables map[string]backend.UnparsedVariableValue + if op.AllowUnsetVariables { + // Rather than prompting for input, we'll just stub out the required + // but unset variables with unknown values to represent that they are + // placeholders for values the user would need to provide for other + // operations. + rawVariables = b.stubUnsetRequiredVariables(op.Variables, config.Module.Variables) + } else { + // If interactive input is enabled, we might gather some more variable + // values through interactive prompts. + // TODO: Need to route the operation context through into here, so that + // the interactive prompts can be sensitive to its timeouts/etc. + rawVariables = b.interactiveCollectVariables(context.TODO(), op.Variables, config.Module.Variables, opts.UIInput) + } + + variables, varDiags := backend.ParseVariableValues(rawVariables, config.Module.Variables) diags = diags.Append(varDiags) if diags.HasErrors() { return nil, nil, diags } - if op.Variables != nil { - opts.Variables = variables - } + opts.Variables = variables tfCtx, ctxDiags := terraform.NewContext(&opts) diags = diags.Append(ctxDiags) @@ -245,10 +258,155 @@ func (b *Local) contextFromPlanFile(pf *planfile.Reader, opts terraform.ContextO return tfCtx, snap, diags } -const validateWarnHeader = ` -There are warnings related to your configuration. If no errors occurred, -Terraform will continue despite these warnings. It is a good idea to resolve -these warnings in the near future. +// interactiveCollectVariables attempts to complete the given existing +// map of variables by interactively prompting for any variables that are +// declared as required but not yet present. +// +// If interactive input is disabled for this backend instance then this is +// a no-op. If input is enabled but fails for some reason, the resulting +// map will be incomplete. For these reasons, the caller must still validate +// that the result is complete and valid. +// +// This function does not modify the map given in "existing", but may return +// it unchanged if no modifications are required. If modifications are required, +// the result is a new map with all of the elements from "existing" plus +// additional elements as appropriate. +// +// Interactive prompting is a "best effort" thing for first-time user UX and +// not something we expect folks to be relying on for routine use. Terraform +// is primarily a non-interactive tool and so we prefer to report in error +// messages that variables are not set rather than reporting that input failed: +// the primary resolution to missing variables is to provide them by some other +// means. 
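+//
+// As a rough sketch (illustrative only; this mirrors the call in
+// contextDirect above), the collected values flow into the normal
+// parsing path:
+//
+//	raw := b.interactiveCollectVariables(ctx, op.Variables, config.Module.Variables, opts.UIInput)
+//	variables, diags := backend.ParseVariableValues(raw, config.Module.Variables)
+//
+// so any variable still unset after prompting is reported by
+// ParseVariableValues as a missing variable rather than as an input failure.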
+func (b *Local) interactiveCollectVariables(ctx context.Context, existing map[string]backend.UnparsedVariableValue, vcs map[string]*configs.Variable, uiInput terraform.UIInput) map[string]backend.UnparsedVariableValue {
+	var needed []string
+	if b.OpInput && uiInput != nil {
+		for name, vc := range vcs {
+			if !vc.Required() {
+				continue // We only prompt for required variables
+			}
+			if _, exists := existing[name]; !exists {
+				needed = append(needed, name)
+			}
+		}
+	} else {
+		log.Print("[DEBUG] backend/local: Skipping interactive prompts for variables because input is disabled")
+	}
+	if len(needed) == 0 {
+		return existing
+	}
-
-Warnings:
-`
+	log.Printf("[DEBUG] backend/local: will prompt for input of unset required variables %s", needed)
+
+	// If we get here then we're planning to prompt for at least one additional
+	// variable's value.
+	sort.Strings(needed) // prompt in lexical order
+	ret := make(map[string]backend.UnparsedVariableValue, len(vcs))
+	for k, v := range existing {
+		ret[k] = v
+	}
+	for _, name := range needed {
+		vc := vcs[name]
+		rawValue, err := uiInput.Input(ctx, &terraform.InputOpts{
+			Id:          fmt.Sprintf("var.%s", name),
+			Query:       fmt.Sprintf("var.%s", name),
+			Description: vc.Description,
+		})
+		if err != nil {
+			// Since interactive prompts are best-effort, we'll just continue
+			// here and let subsequent validation report this as a variable
+			// not specified.
+			log.Printf("[WARN] backend/local: Failed to request user input for variable %q: %s", name, err)
+			continue
+		}
+		ret[name] = unparsedInteractiveVariableValue{Name: name, RawValue: rawValue}
+	}
+	return ret
+}
+
+// stubUnsetRequiredVariables ensures that all required variables defined in the
+// configuration exist in the resulting map, by adding new elements as necessary.
+//
+// The stubbed value of any additions will be an unknown variable conforming
+// to the variable's configured type constraint, meaning that no particular
+// value is known and that one must be provided by the user in order to get
+// a complete result.
+//
+// Unset optional variables (those with default values) will not be populated
+// by this function, under the assumption that a later step will handle those.
+// In this sense, stubUnsetRequiredVariables is essentially a non-interactive,
+// non-error-producing variant of interactiveCollectVariables that creates
+// placeholders for values the user would be prompted for interactively on
+// other operations.
+//
+// This function should be used only in situations where variable values
+// will not be directly used and the variables map is being constructed only
+// to produce a complete Terraform context for some ancillary functionality
+// like "terraform console", "terraform state ...", etc.
+//
+// This function is guaranteed not to modify the given map, but it may return
+// the given map unchanged if no additions are required. If additions are
+// required then the result will be a new map containing everything in the
+// given map plus additional elements.
+func (b *Local) stubUnsetRequiredVariables(existing map[string]backend.UnparsedVariableValue, vcs map[string]*configs.Variable) map[string]backend.UnparsedVariableValue {
+	var missing bool // Do we need to add anything?
+	for name, vc := range vcs {
+		if !vc.Required() {
+			continue // We only stub required variables
+		}
+		if _, exists := existing[name]; !exists {
+			missing = true
+		}
+	}
+	if !missing {
+		return existing
+	}
+
+	// If we get down here then there's at least one variable value to add.
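+	// Build a fresh map seeded with the existing values so the caller's map
+	// is never modified, then add an unknown placeholder for each required
+	// variable that is not already set.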
+ ret := make(map[string]backend.UnparsedVariableValue, len(vcs)) + for k, v := range existing { + ret[k] = v + } + for name, vc := range vcs { + if !vc.Required() { + continue + } + if _, exists := existing[name]; !exists { + ret[name] = unparsedUnknownVariableValue{Name: name, WantType: vc.Type} + } + } + return ret +} + +type unparsedInteractiveVariableValue struct { + Name, RawValue string +} + +var _ backend.UnparsedVariableValue = unparsedInteractiveVariableValue{} + +func (v unparsedInteractiveVariableValue) ParseVariableValue(mode configs.VariableParsingMode) (*terraform.InputValue, tfdiags.Diagnostics) { + var diags tfdiags.Diagnostics + val, valDiags := mode.Parse(v.Name, v.RawValue) + diags = diags.Append(valDiags) + if diags.HasErrors() { + return nil, diags + } + return &terraform.InputValue{ + Value: val, + SourceType: terraform.ValueFromInput, + }, diags +} + +type unparsedUnknownVariableValue struct { + Name string + WantType cty.Type +} + +var _ backend.UnparsedVariableValue = unparsedUnknownVariableValue{} + +func (v unparsedUnknownVariableValue) ParseVariableValue(mode configs.VariableParsingMode) (*terraform.InputValue, tfdiags.Diagnostics) { + return &terraform.InputValue{ + Value: cty.UnknownVal(v.WantType), + SourceType: terraform.ValueFromInput, + }, nil +} diff --git a/backend/local/backend_plan.go b/backend/local/backend_plan.go index 0a7f28dac..0d58eced6 100644 --- a/backend/local/backend_plan.go +++ b/backend/local/backend_plan.go @@ -8,6 +8,9 @@ import ( "sort" "strings" + "github.com/mitchellh/cli" + "github.com/mitchellh/colorstring" + "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/backend" "github.com/hashicorp/terraform/command/format" @@ -83,9 +86,9 @@ func (b *Local) opPlan( b.CLI.Output(b.Colorize().Color(strings.TrimSpace(planRefreshing) + "\n")) } - refreshedState, err := tfCtx.Refresh() - if err != nil { - diags = diags.Append(err) + refreshedState, refreshDiags := tfCtx.Refresh() + diags = diags.Append(refreshDiags) + if diags.HasErrors() { b.ReportResult(runningOp, diags) return } @@ -159,6 +162,8 @@ func (b *Local) opPlan( if plan.Changes.Empty() { b.CLI.Output("\n" + b.Colorize().Color(strings.TrimSpace(planNoChanges))) + // Even if there are no changes, there still could be some warnings + b.ShowDiagnostics(diags) return } @@ -190,6 +195,21 @@ func (b *Local) opPlan( } func (b *Local) renderPlan(plan *plans.Plan, state *states.State, schemas *terraform.Schemas) { + RenderPlan(plan, state, schemas, b.CLI, b.Colorize()) +} + +// RenderPlan renders the given plan to the given UI. +// +// This is exported only so that the "terraform show" command can re-use it. +// Ideally it would be somewhere outside of this backend code so that both +// can call into it, but we're leaving it here for now in order to avoid +// disruptive refactoring. +// +// If you find yourself wanting to call this function from a third callsite, +// please consider whether it's time to do the more disruptive refactoring +// so that something other than the local backend package is offering this +// functionality. 
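+//
+// A hypothetical call site (for illustration only, not part of this change)
+// would look roughly like:
+//
+//	local.RenderPlan(plan, state, schemas, ui, colorize)
+//
+// where ui is the command's cli.Ui and colorize its *colorstring.Colorize.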
+func RenderPlan(plan *plans.Plan, state *states.State, schemas *terraform.Schemas, ui cli.Ui, colorize *colorstring.Colorize) { counts := map[plans.Action]int{} var rChanges []*plans.ResourceInstanceChangeSrc for _, change := range plan.Changes.Resources { @@ -223,9 +243,9 @@ func (b *Local) renderPlan(plan *plans.Plan, state *states.State, schemas *terra fmt.Fprintf(headerBuf, "%s read (data resources)\n", format.DiffActionSymbol(plans.Read)) } - b.CLI.Output(b.Colorize().Color(headerBuf.String())) + ui.Output(colorize.Color(headerBuf.String())) - b.CLI.Output("Terraform will perform the following actions:\n") + ui.Output("Terraform will perform the following actions:\n") // Note: we're modifying the backing slice of this plan object in-place // here. The ordering of resource changes in a plan is not significant, @@ -244,16 +264,17 @@ func (b *Local) renderPlan(plan *plans.Plan, state *states.State, schemas *terra if rcs.Action == plans.NoOp { continue } - providerSchema := schemas.ProviderSchema(rcs.ProviderAddr.ProviderConfig.Type) + + providerSchema := schemas.ProviderSchema(rcs.ProviderAddr.Provider) if providerSchema == nil { // Should never happen - b.CLI.Output(fmt.Sprintf("(schema missing for %s)\n", rcs.ProviderAddr)) + ui.Output(fmt.Sprintf("(schema missing for %s)\n", rcs.ProviderAddr)) continue } rSchema, _ := providerSchema.SchemaForResourceAddr(rcs.Addr.Resource.Resource) if rSchema == nil { // Should never happen - b.CLI.Output(fmt.Sprintf("(schema missing for %s)\n", rcs.Addr)) + ui.Output(fmt.Sprintf("(schema missing for %s)\n", rcs.Addr)) continue } @@ -267,11 +288,11 @@ func (b *Local) renderPlan(plan *plans.Plan, state *states.State, schemas *terra } } - b.CLI.Output(format.ResourceChange( + ui.Output(format.ResourceChange( rcs, tainted, rSchema, - b.CLIColor, + colorize, )) } @@ -288,23 +309,13 @@ func (b *Local) renderPlan(plan *plans.Plan, state *states.State, schemas *terra stats[change.Action]++ } } - b.CLI.Output(b.Colorize().Color(fmt.Sprintf( + ui.Output(colorize.Color(fmt.Sprintf( "[reset][bold]Plan:[reset] "+ "%d to add, %d to change, %d to destroy.", stats[plans.Create], stats[plans.Update], stats[plans.Delete], ))) } -const planErrNoConfig = ` -No configuration files found! - -Plan requires configuration to be present. Planning without a configuration -would mark everything for destruction, which is normally not what is desired. -If you would like to destroy everything, please run plan with the "-destroy" -flag or create a single empty configuration file. Otherwise, please create -a Terraform configuration file in the path being executed and try again. -` - const planHeaderIntro = ` An execution plan has been generated and is shown below. 
Resource actions are indicated with the following symbols: diff --git a/backend/local/backend_plan_test.go b/backend/local/backend_plan_test.go index 1aa4183a9..9aa028653 100644 --- a/backend/local/backend_plan_test.go +++ b/backend/local/backend_plan_test.go @@ -215,9 +215,10 @@ func TestLocal_planDeposedOnly(t *testing.T) { }] }`), }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) })) b.CLI = cli.NewMockUi() @@ -658,9 +659,10 @@ func testPlanState() *states.State { }] }`), }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) return state } @@ -684,9 +686,10 @@ func testPlanState_withDataSource() *states.State { }] }`), }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) rootModule.SetResourceInstanceCurrent( addrs.Resource{ @@ -700,9 +703,10 @@ func testPlanState_withDataSource() *states.State { "filter": "foo" }`), }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) return state } @@ -726,9 +730,10 @@ func testPlanState_tainted() *states.State { }] }`), }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) return state } diff --git a/backend/local/backend_refresh_test.go b/backend/local/backend_refresh_test.go index 4f5e1500e..c6b79217e 100644 --- a/backend/local/backend_refresh_test.go +++ b/backend/local/backend_refresh_test.go @@ -42,7 +42,7 @@ func TestLocal_refresh(t *testing.T) { checkState(t, b.StateOutPath, ` test_instance.foo: ID = yes - provider = provider.test + provider = provider["registry.terraform.io/-/test"] `) } @@ -72,7 +72,7 @@ func TestLocal_refreshNoConfig(t *testing.T) { checkState(t, b.StateOutPath, ` test_instance.foo: ID = yes - provider = provider.test + provider = provider["registry.terraform.io/-/test"] `) } @@ -105,7 +105,7 @@ func TestLocal_refreshNilModuleWithInput(t *testing.T) { checkState(t, b.StateOutPath, ` test_instance.foo: ID = yes - provider = provider.test + provider = provider["registry.terraform.io/-/test"] `) } @@ -163,7 +163,7 @@ func TestLocal_refreshInput(t *testing.T) { checkState(t, b.StateOutPath, ` test_instance.foo: ID = yes - provider = provider.test + provider = provider["registry.terraform.io/-/test"] `) } @@ -196,7 +196,7 @@ func TestLocal_refreshValidate(t *testing.T) { checkState(t, b.StateOutPath, ` test_instance.foo: ID = yes - provider = provider.test + provider = provider["registry.terraform.io/-/test"] `) } diff --git a/backend/local/hook_count_action.go b/backend/local/hook_count_action.go index 9a28464c2..9adcd9047 100644 --- a/backend/local/hook_count_action.go +++ b/backend/local/hook_count_action.go @@ -1,6 +1,6 @@ package local -//go:generate stringer -type=countHookAction hook_count_action.go +//go:generate go run golang.org/x/tools/cmd/stringer -type=countHookAction hook_count_action.go type countHookAction byte diff --git a/backend/local/testing.go b/backend/local/testing.go index dccddb5da..fb73cce87 
100644 --- a/backend/local/testing.go +++ b/backend/local/testing.go @@ -113,8 +113,8 @@ func TestLocalProvider(t *testing.T, b *Local, name string, schema *terraform.Pr // Setup our provider b.ContextOpts.ProviderResolver = providers.ResolverFixed( - map[string]providers.Factory{ - name: providers.FactoryFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider(name): providers.FactoryFixed(p), }, ) diff --git a/backend/operation_type.go b/backend/operation_type.go index 1739dc7fc..f2c84de2c 100644 --- a/backend/operation_type.go +++ b/backend/operation_type.go @@ -1,6 +1,6 @@ package backend -//go:generate stringer -type=OperationType operation_type.go +//go:generate go run golang.org/x/tools/cmd/stringer -type=OperationType operation_type.go // OperationType is an enum used with Operation to specify the operation // type to perform for Terraform. diff --git a/backend/remote-state/artifactory/backend.go b/backend/remote-state/artifactory/backend.go index d085f21b5..775584a51 100644 --- a/backend/remote-state/artifactory/backend.go +++ b/backend/remote-state/artifactory/backend.go @@ -3,6 +3,7 @@ package artifactory import ( "context" + cleanhttp "github.com/hashicorp/go-cleanhttp" "github.com/hashicorp/terraform/backend" "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/state" @@ -65,9 +66,10 @@ func (b *Backend) configure(ctx context.Context) error { subpath := data.Get("subpath").(string) clientConf := &artifactory.ClientConfig{ - BaseURL: url, - Username: userName, - Password: password, + BaseURL: url, + Username: userName, + Password: password, + Transport: cleanhttp.DefaultPooledTransport(), } nativeClient := artifactory.NewClient(clientConf) diff --git a/backend/remote-state/azure/arm_client.go b/backend/remote-state/azure/arm_client.go index bcc511a44..7316b254c 100644 --- a/backend/remote-state/azure/arm_client.go +++ b/backend/remote-state/azure/arm_client.go @@ -13,9 +13,9 @@ import ( armStorage "github.com/Azure/azure-sdk-for-go/profiles/2017-03-09/storage/mgmt/storage" "github.com/Azure/azure-sdk-for-go/storage" "github.com/Azure/go-autorest/autorest" - "github.com/Azure/go-autorest/autorest/adal" "github.com/Azure/go-autorest/autorest/azure" "github.com/hashicorp/go-azure-helpers/authentication" + "github.com/hashicorp/go-azure-helpers/sender" "github.com/hashicorp/terraform/httpclient" ) @@ -75,12 +75,12 @@ func buildArmClient(config BackendConfig) (*ArmClient, error) { return nil, fmt.Errorf("Error building ARM Config: %+v", err) } - oauthConfig, err := adal.NewOAuthConfig(env.ActiveDirectoryEndpoint, armConfig.TenantID) + oauthConfig, err := armConfig.BuildOAuthConfig(env.ActiveDirectoryEndpoint) if err != nil { return nil, err } - auth, err := armConfig.GetAuthorizationToken(oauthConfig, env.TokenAudience) + auth, err := armConfig.GetAuthorizationToken(sender.BuildSender("backend/remote-state/azure"), oauthConfig, env.TokenAudience) if err != nil { return nil, err } diff --git a/backend/remote-state/azure/sender.go b/backend/remote-state/azure/sender.go index 90a2fb5bd..d2b432a65 100644 --- a/backend/remote-state/azure/sender.go +++ b/backend/remote-state/azure/sender.go @@ -21,7 +21,7 @@ func withRequestLogging() autorest.SendDecorator { return func(s autorest.Sender) autorest.Sender { return autorest.SenderFunc(func(r *http.Request) (*http.Response, error) { // only log if logging's enabled - logLevel := logging.LogLevel() + logLevel := logging.CurrentLogLevel() if logLevel == "" { return s.Do(r) } diff --git 
a/backend/remote-state/cos/backend.go b/backend/remote-state/cos/backend.go
new file mode 100644
index 000000000..ce502e5cc
--- /dev/null
+++ b/backend/remote-state/cos/backend.go
@@ -0,0 +1,169 @@
+package cos
+
+import (
+	"context"
+	"fmt"
+	"net/http"
+	"net/url"
+	"strings"
+	"time"
+
+	"github.com/hashicorp/terraform/backend"
+	"github.com/hashicorp/terraform/helper/schema"
+	"github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common"
+	"github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/profile"
+	tag "github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/tag/v20180813"
+	"github.com/tencentyun/cos-go-sdk-v5"
+)
+
+// Default values sourced from environment variables
+const (
+	PROVIDER_SECRET_ID  = "TENCENTCLOUD_SECRET_ID"
+	PROVIDER_SECRET_KEY = "TENCENTCLOUD_SECRET_KEY"
+	PROVIDER_REGION     = "TENCENTCLOUD_REGION"
+)
+
+// Backend implements "backend".Backend for TencentCloud COS
+type Backend struct {
+	*schema.Backend
+
+	cosContext context.Context
+	cosClient  *cos.Client
+	tagClient  *tag.Client
+
+	region  string
+	bucket  string
+	prefix  string
+	key     string
+	encrypt bool
+	acl     string
+}
+
+// New creates a new backend for TencentCloud COS remote state.
+func New() backend.Backend {
+	s := &schema.Backend{
+		Schema: map[string]*schema.Schema{
+			"secret_id": {
+				Type:        schema.TypeString,
+				Required:    true,
+				DefaultFunc: schema.EnvDefaultFunc(PROVIDER_SECRET_ID, nil),
+				Description: "Secret ID of Tencent Cloud",
+			},
+			"secret_key": {
+				Type:        schema.TypeString,
+				Required:    true,
+				DefaultFunc: schema.EnvDefaultFunc(PROVIDER_SECRET_KEY, nil),
+				Description: "Secret key of Tencent Cloud",
+				Sensitive:   true,
+			},
+			"region": {
+				Type:         schema.TypeString,
+				Required:     true,
+				DefaultFunc:  schema.EnvDefaultFunc(PROVIDER_REGION, nil),
+				Description:  "The region of the COS bucket",
+				InputDefault: "ap-guangzhou",
+			},
+			"bucket": {
+				Type:        schema.TypeString,
+				Required:    true,
+				Description: "The name of the COS bucket",
+			},
+			"prefix": {
+				Type:        schema.TypeString,
+				Optional:    true,
+				Description: "The directory for saving the state file in the bucket",
+				ValidateFunc: func(v interface{}, s string) ([]string, []error) {
+					prefix := v.(string)
+					if strings.HasPrefix(prefix, "/") || strings.HasPrefix(prefix, "./") {
+						return nil, []error{fmt.Errorf("prefix must not start with '/' or './'")}
+					}
+					return nil, nil
+				},
+			},
+			"key": {
+				Type:        schema.TypeString,
+				Optional:    true,
+				Description: "The path for saving the state file in the bucket",
+				Default:     "terraform.tfstate",
+				ValidateFunc: func(v interface{}, s string) ([]string, []error) {
+					if strings.HasPrefix(v.(string), "/") || strings.HasSuffix(v.(string), "/") {
+						return nil, []error{fmt.Errorf("key must not start or end with '/'")}
+					}
+					return nil, nil
+				},
+			},
+			"encrypt": {
+				Type:        schema.TypeBool,
+				Optional:    true,
+				Description: "Whether to enable server side encryption of the state file",
+				Default:     true,
+			},
+			"acl": {
+				Type:        schema.TypeString,
+				Optional:    true,
+				Description: "Object ACL to be applied to the state file",
+				Default:     "private",
+				ValidateFunc: func(v interface{}, s string) ([]string, []error) {
+					value := v.(string)
+					if value != "private" && value != "public-read" {
+						return nil, []error{fmt.Errorf(
+							"acl value invalid, expected %s or %s, got %s",
+							"private", "public-read", value)}
+					}
+					return nil, nil
+				},
+			},
+		},
+	}
+
+	result := &Backend{Backend: s}
+	result.Backend.ConfigureFunc = result.configure
+
+	return result
+}
+
+// configure initializes the cos client
+func (b *Backend) configure(ctx context.Context) error {
+	if b.cosClient != nil {
+		return nil
+	}
+
+	b.cosContext = ctx
+	data := schema.FromContextBackendConfig(b.cosContext)
+
+	b.region = data.Get("region").(string)
+	b.bucket = data.Get("bucket").(string)
+	b.prefix = data.Get("prefix").(string)
+	b.key = data.Get("key").(string)
+	b.encrypt = data.Get("encrypt").(bool)
+	b.acl = data.Get("acl").(string)
+
+	u, err := url.Parse(fmt.Sprintf("https://%s.cos.%s.myqcloud.com", b.bucket, b.region))
+	if err != nil {
+		return err
+	}
+
+	b.cosClient = cos.NewClient(
+		&cos.BaseURL{BucketURL: u},
+		&http.Client{
+			Timeout: 60 * time.Second,
+			Transport: &cos.AuthorizationTransport{
+				SecretID:  data.Get("secret_id").(string),
+				SecretKey: data.Get("secret_key").(string),
+			},
+		},
+	)
+
+	credential := common.NewCredential(
+		data.Get("secret_id").(string),
+		data.Get("secret_key").(string),
+	)
+
+	cpf := profile.NewClientProfile()
+	cpf.HttpProfile.ReqMethod = "POST"
+	cpf.HttpProfile.ReqTimeout = 300
+	cpf.Language = "en-US"
+	b.tagClient, err = tag.NewClient(credential, b.region, cpf)
+
+	return err
+}
diff --git a/backend/remote-state/cos/backend_state.go b/backend/remote-state/cos/backend_state.go
new file mode 100644
index 000000000..2bc3f2428
--- /dev/null
+++ b/backend/remote-state/cos/backend_state.go
@@ -0,0 +1,178 @@
+package cos
+
+import (
+	"fmt"
+	"log"
+	"path"
+	"sort"
+	"strings"
+
+	"github.com/hashicorp/terraform/backend"
+	"github.com/hashicorp/terraform/state"
+	"github.com/hashicorp/terraform/state/remote"
+	"github.com/hashicorp/terraform/states"
+	"github.com/likexian/gokit/assert"
+)
+
+// State and lock file suffixes
+const (
+	stateFileSuffix = ".tfstate"
+	lockFileSuffix  = ".tflock"
+)
+
+// Workspaces returns a list of names for the workspaces
+func (b *Backend) Workspaces() ([]string, error) {
+	c, err := b.client("tencentcloud")
+	if err != nil {
+		return nil, err
+	}
+
+	obs, err := c.getBucket(b.prefix)
+	log.Printf("[DEBUG] list all workspaces, objects: %v, error: %v", obs, err)
+	if err != nil {
+		return nil, err
+	}
+
+	ws := []string{backend.DefaultStateName}
+	for _, vv := range obs {
+		// consider only state files (.tfstate)
+		if !strings.HasSuffix(vv.Key, stateFileSuffix) {
+			continue
+		}
+		// skip the default workspace
+		if path.Join(b.prefix, b.key) == vv.Key {
+			continue
+		}
+		// keys look like <prefix>/<workspace>/<key>
+		prefix := strings.TrimRight(b.prefix, "/") + "/"
+		parts := strings.Split(strings.TrimPrefix(vv.Key, prefix), "/")
+		if len(parts) > 0 && parts[0] != "" {
+			ws = append(ws, parts[0])
+		}
+	}
+
+	sort.Strings(ws[1:])
+	log.Printf("[DEBUG] list all workspaces, workspaces: %v", ws)
+
+	return ws, nil
+}
+
+// DeleteWorkspace deletes the named workspace. The "default" workspace cannot be deleted.
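+//
+// For example (illustrative): with prefix "env" and key "terraform.tfstate",
+// DeleteWorkspace("dev") removes the object at "env/dev/terraform.tfstate",
+// while DeleteWorkspace("default") returns an error.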
+func (b *Backend) DeleteWorkspace(name string) error {
+	log.Printf("[DEBUG] delete workspace, workspace: %v", name)
+
+	if name == backend.DefaultStateName || name == "" {
+		return fmt.Errorf("default state is not allowed to be deleted")
+	}
+
+	c, err := b.client(name)
+	if err != nil {
+		return err
+	}
+
+	return c.Delete()
+}
+
+// StateMgr manages the state; if the named state does not exist, a new state file will be created
+func (b *Backend) StateMgr(name string) (state.State, error) {
+	log.Printf("[DEBUG] state manager, current workspace: %v", name)
+
+	c, err := b.client(name)
+	if err != nil {
+		return nil, err
+	}
+	stateMgr := &remote.State{Client: c}
+
+	ws, err := b.Workspaces()
+	if err != nil {
+		return nil, err
+	}
+
+	if !assert.IsContains(ws, name) {
+		log.Printf("[DEBUG] workspace %v does not exist", name)
+
+		// take a lock on this state while we write it
+		lockInfo := state.NewLockInfo()
+		lockInfo.Operation = "init"
+		lockId, err := c.Lock(lockInfo)
+		if err != nil {
+			return nil, fmt.Errorf("Failed to lock cos state: %s", err)
+		}
+
+		// Local helper function so we can call it multiple places
+		lockUnlock := func(e error) error {
+			if err := stateMgr.Unlock(lockId); err != nil {
+				return fmt.Errorf(unlockErrMsg, err, lockId)
+			}
+			return e
+		}
+
+		// Grab the value
+		if err := stateMgr.RefreshState(); err != nil {
+			err = lockUnlock(err)
+			return nil, err
+		}
+
+		// If we have no state, we have to create an empty state
+		if v := stateMgr.State(); v == nil {
+			if err := stateMgr.WriteState(states.NewState()); err != nil {
+				err = lockUnlock(err)
+				return nil, err
+			}
+			if err := stateMgr.PersistState(); err != nil {
+				err = lockUnlock(err)
+				return nil, err
+			}
+		}
+
+		// Unlock, the state should now be initialized
+		if err := lockUnlock(nil); err != nil {
+			return nil, err
+		}
+	}
+
+	return stateMgr, nil
+}
+
+// client returns a remoteClient for the named state.
+func (b *Backend) client(name string) (*remoteClient, error) {
+	if strings.TrimSpace(name) == "" {
+		return nil, fmt.Errorf("state name must not be empty")
+	}
+
+	return &remoteClient{
+		cosContext: b.cosContext,
+		cosClient:  b.cosClient,
+		tagClient:  b.tagClient,
+		bucket:     b.bucket,
+		stateFile:  b.stateFile(name),
+		lockFile:   b.lockFile(name),
+		encrypt:    b.encrypt,
+		acl:        b.acl,
+	}, nil
+}
+
+// stateFile returns the state file path for the given workspace name
+func (b *Backend) stateFile(name string) string {
+	if name == backend.DefaultStateName {
+		return path.Join(b.prefix, b.key)
+	}
+	return path.Join(b.prefix, name, b.key)
+}
+
+// lockFile returns the lock file path for the given workspace name
+func (b *Backend) lockFile(name string) string {
+	return b.stateFile(name) + lockFileSuffix
+}
+
+// unlockErrMsg is the error message shown when unlocking the state fails
+const unlockErrMsg = `
+Unlocking the state file on TencentCloud cos backend failed:
+
+Error message: %v
+Lock ID (gen): %s
+
+You may have to force-unlock this state in order to use it again.
+The TencentCloud backend acquires a lock during initialization
+to ensure the initial state file is created.
+` diff --git a/backend/remote-state/cos/backend_test.go b/backend/remote-state/cos/backend_test.go new file mode 100644 index 000000000..b372571af --- /dev/null +++ b/backend/remote-state/cos/backend_test.go @@ -0,0 +1,227 @@ +package cos + +import ( + "crypto/md5" + "fmt" + "os" + "testing" + "time" + + "github.com/hashicorp/terraform/backend" + "github.com/hashicorp/terraform/state/remote" + "github.com/likexian/gokit/assert" +) + +const ( + defaultPrefix = "" + defaultKey = "terraform.tfstate" +) + +// Testing Thanks to GCS + +func TestStateFile(t *testing.T) { + t.Parallel() + + cases := []struct { + prefix string + stateName string + key string + wantStateFile string + wantLockFile string + }{ + {"", "default", "default.tfstate", "default.tfstate", "default.tfstate.tflock"}, + {"", "default", "test.tfstate", "test.tfstate", "test.tfstate.tflock"}, + {"", "dev", "test.tfstate", "dev/test.tfstate", "dev/test.tfstate.tflock"}, + {"terraform/test", "default", "default.tfstate", "terraform/test/default.tfstate", "terraform/test/default.tfstate.tflock"}, + {"terraform/test", "default", "test.tfstate", "terraform/test/test.tfstate", "terraform/test/test.tfstate.tflock"}, + {"terraform/test", "dev", "test.tfstate", "terraform/test/dev/test.tfstate", "terraform/test/dev/test.tfstate.tflock"}, + } + + for _, c := range cases { + b := &Backend{ + prefix: c.prefix, + key: c.key, + } + assert.Equal(t, b.stateFile(c.stateName), c.wantStateFile) + assert.Equal(t, b.lockFile(c.stateName), c.wantLockFile) + } +} + +func TestRemoteClient(t *testing.T) { + t.Parallel() + + bucket := bucketName(t) + + be := setupBackend(t, bucket, defaultPrefix, defaultKey, false) + defer teardownBackend(t, be) + + ss, err := be.StateMgr(backend.DefaultStateName) + assert.Nil(t, err) + + rs, ok := ss.(*remote.State) + assert.True(t, ok) + + remote.TestClient(t, rs.Client) +} + +func TestRemoteClientWithPrefix(t *testing.T) { + t.Parallel() + + prefix := "prefix/test" + bucket := bucketName(t) + + be := setupBackend(t, bucket, prefix, defaultKey, false) + defer teardownBackend(t, be) + + ss, err := be.StateMgr(backend.DefaultStateName) + assert.Nil(t, err) + + rs, ok := ss.(*remote.State) + assert.True(t, ok) + + remote.TestClient(t, rs.Client) +} + +func TestRemoteClientWithEncryption(t *testing.T) { + t.Parallel() + + bucket := bucketName(t) + + be := setupBackend(t, bucket, defaultPrefix, defaultKey, true) + defer teardownBackend(t, be) + + ss, err := be.StateMgr(backend.DefaultStateName) + assert.Nil(t, err) + + rs, ok := ss.(*remote.State) + assert.True(t, ok) + + remote.TestClient(t, rs.Client) +} + +func TestRemoteLocks(t *testing.T) { + t.Parallel() + + bucket := bucketName(t) + + be := setupBackend(t, bucket, defaultPrefix, defaultKey, false) + defer teardownBackend(t, be) + + remoteClient := func() (remote.Client, error) { + ss, err := be.StateMgr(backend.DefaultStateName) + if err != nil { + return nil, err + } + + rs, ok := ss.(*remote.State) + if !ok { + return nil, fmt.Errorf("be.StateMgr(): got a %T, want a *remote.State", ss) + } + + return rs.Client, nil + } + + c0, err := remoteClient() + assert.Nil(t, err) + + c1, err := remoteClient() + assert.Nil(t, err) + + remote.TestRemoteLocks(t, c0, c1) +} + +func TestBackend(t *testing.T) { + t.Parallel() + + bucket := bucketName(t) + + be0 := setupBackend(t, bucket, defaultPrefix, defaultKey, false) + defer teardownBackend(t, be0) + + be1 := setupBackend(t, bucket, defaultPrefix, defaultKey, false) + defer teardownBackend(t, be1) + + 
backend.TestBackendStates(t, be0)
+	backend.TestBackendStateLocks(t, be0, be1)
+	backend.TestBackendStateForceUnlock(t, be0, be1)
+}
+
+func TestBackendWithPrefix(t *testing.T) {
+	t.Parallel()
+
+	prefix := "prefix/test"
+	bucket := bucketName(t)
+
+	be0 := setupBackend(t, bucket, prefix, defaultKey, false)
+	defer teardownBackend(t, be0)
+
+	be1 := setupBackend(t, bucket, prefix+"/", defaultKey, false)
+	defer teardownBackend(t, be1)
+
+	backend.TestBackendStates(t, be0)
+	backend.TestBackendStateLocks(t, be0, be1)
+}
+
+func TestBackendWithEncryption(t *testing.T) {
+	t.Parallel()
+
+	bucket := bucketName(t)
+
+	be0 := setupBackend(t, bucket, defaultPrefix, defaultKey, true)
+	defer teardownBackend(t, be0)
+
+	be1 := setupBackend(t, bucket, defaultPrefix, defaultKey, true)
+	defer teardownBackend(t, be1)
+
+	backend.TestBackendStates(t, be0)
+	backend.TestBackendStateLocks(t, be0, be1)
+}
+
+func setupBackend(t *testing.T, bucket, prefix, key string, encrypt bool) backend.Backend {
+	t.Helper()
+
+	skip := os.Getenv("TF_COS_APPID") == ""
+	if skip {
+		t.Skip("This test requires setting the TF_COS_APPID environment variable")
+	}
+
+	if os.Getenv(PROVIDER_REGION) == "" {
+		os.Setenv(PROVIDER_REGION, "ap-guangzhou")
+	}
+
+	appId := os.Getenv("TF_COS_APPID")
+	region := os.Getenv(PROVIDER_REGION)
+
+	config := map[string]interface{}{
+		"region": region,
+		"bucket": bucket + appId,
+		"prefix": prefix,
+		"key":    key,
+	}
+
+	b := backend.TestBackendConfig(t, New(), backend.TestWrapConfig(config))
+	be := b.(*Backend)
+
+	c, err := be.client("tencentcloud")
+	assert.Nil(t, err)
+
+	err = c.putBucket()
+	assert.Nil(t, err)
+
+	return b
+}
+
+func teardownBackend(t *testing.T, b backend.Backend) {
+	t.Helper()
+
+	c, err := b.(*Backend).client("tencentcloud")
+	assert.Nil(t, err)
+
+	err = c.deleteBucket(true)
+	assert.Nil(t, err)
+}
+
+func bucketName(t *testing.T) string {
+	unique := fmt.Sprintf("%s-%x", t.Name(), time.Now().UnixNano())
+	return fmt.Sprintf("terraform-test-%s-%s", fmt.Sprintf("%x", md5.Sum([]byte(unique)))[:10], "")
+}
diff --git a/backend/remote-state/cos/client.go b/backend/remote-state/cos/client.go
new file mode 100644
index 000000000..281fd2b7a
--- /dev/null
+++ b/backend/remote-state/cos/client.go
@@ -0,0 +1,403 @@
+package cos
+
+import (
+	"bytes"
+	"context"
+	"crypto/md5"
+	"encoding/json"
+	"fmt"
+	"io/ioutil"
+	"log"
+	"net/http"
+	"strings"
+	"time"
+
+	multierror "github.com/hashicorp/go-multierror"
+	"github.com/hashicorp/terraform/state"
+	"github.com/hashicorp/terraform/state/remote"
+	tag "github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/tag/v20180813"
+	"github.com/tencentyun/cos-go-sdk-v5"
+)
+
+const (
+	lockTagKey = "tencentcloud-terraform-lock"
+)
+
+// remoteClient implements the remote state client
+type remoteClient struct {
+	cosContext context.Context
+	cosClient  *cos.Client
+	tagClient  *tag.Client
+
+	bucket    string
+	stateFile string
+	lockFile  string
+	encrypt   bool
+	acl       string
+}
+
+// Get returns the remote state file
+func (c *remoteClient) Get() (*remote.Payload, error) {
+	log.Printf("[DEBUG] get remote state file %s", c.stateFile)
+
+	exists, data, checksum, err := c.getObject(c.stateFile)
+	if err != nil {
+		return nil, err
+	}
+
+	if !exists {
+		return nil, nil
+	}
+
+	payload := &remote.Payload{
+		Data: data,
+		MD5:  []byte(checksum),
+	}
+
+	return payload, nil
+}
+
+// Put puts the state file to the remote
+func (c *remoteClient) Put(data []byte) error {
+	log.Printf("[DEBUG] put remote state file %s", c.stateFile)
+
+	return c.putObject(c.stateFile, data)
+}
+
+// Delete deletes the remote state file
+func (c *remoteClient) Delete() error {
+	log.Printf("[DEBUG] delete remote state file %s", c.stateFile)
+
+	return c.deleteObject(c.stateFile)
+}
+
+// Lock locks the remote state file for writing
+func (c *remoteClient) Lock(info *state.LockInfo) (string, error) {
+	log.Printf("[DEBUG] lock remote state file %s", c.lockFile)
+
+	err := c.cosLock(c.bucket, c.lockFile)
+	if err != nil {
+		return "", c.lockError(err)
+	}
+	defer c.cosUnlock(c.bucket, c.lockFile)
+
+	exists, _, _, err := c.getObject(c.lockFile)
+	if err != nil {
+		return "", c.lockError(err)
+	}
+
+	if exists {
+		return "", c.lockError(fmt.Errorf("lock file %s exists", c.lockFile))
+	}
+
+	info.Path = c.lockFile
+	data, err := json.Marshal(info)
+	if err != nil {
+		return "", c.lockError(err)
+	}
+
+	check := fmt.Sprintf("%x", md5.Sum(data))
+	err = c.putObject(c.lockFile, data)
+	if err != nil {
+		return "", c.lockError(err)
+	}
+
+	return check, nil
+}
+
+// Unlock unlocks the remote state file
+func (c *remoteClient) Unlock(check string) error {
+	log.Printf("[DEBUG] unlock remote state file %s", c.lockFile)
+
+	info, err := c.lockInfo()
+	if err != nil {
+		return c.lockError(err)
+	}
+
+	if info.ID != check {
+		return c.lockError(fmt.Errorf("lock id mismatch, %v != %v", info.ID, check))
+	}
+
+	err = c.deleteObject(c.lockFile)
+	if err != nil {
+		return c.lockError(err)
+	}
+
+	return nil
+}
+
+// lockError wraps err into a state.LockError
+func (c *remoteClient) lockError(err error) *state.LockError {
+	log.Printf("[DEBUG] failed to lock or unlock %s: %v", c.lockFile, err)
+
+	lockErr := &state.LockError{
+		Err: err,
+	}
+
+	info, infoErr := c.lockInfo()
+	if infoErr != nil {
+		lockErr.Err = multierror.Append(lockErr.Err, infoErr)
+	} else {
+		lockErr.Info = info
+	}
+
+	return lockErr
+}
+
+// lockInfo reads the LockInfo from the lock file
+func (c *remoteClient) lockInfo() (*state.LockInfo, error) {
+	exists, data, checksum, err := c.getObject(c.lockFile)
+	if err != nil {
+		return nil, err
+	}
+
+	if !exists {
+		return nil, fmt.Errorf("lock file %s does not exist", c.lockFile)
+	}
+
+	info := &state.LockInfo{}
+	if err := json.Unmarshal(data, info); err != nil {
+		return nil, err
+	}
+
+	info.ID = checksum
+
+	return info, nil
+}
+
+// getObject gets a remote object
+func (c *remoteClient) getObject(cosFile string) (exists bool, data []byte, checksum string, err error) {
+	rsp, err := c.cosClient.Object.Get(c.cosContext, cosFile, nil)
+	if rsp == nil {
+		log.Printf("[DEBUG] getObject %s: error: %v", cosFile, err)
+		err = fmt.Errorf("failed to open file at %v: %v", cosFile, err)
+		return
+	}
+	defer rsp.Body.Close()
+
+	log.Printf("[DEBUG] getObject %s: code: %d, error: %v", cosFile, rsp.StatusCode, err)
+	if err != nil {
+		if rsp.StatusCode == 404 {
+			err = nil
+		} else {
+			err = fmt.Errorf("failed to open file at %v: %v", cosFile, err)
+		}
+		return
+	}
+
+	checksum = rsp.Header.Get("X-Cos-Meta-Md5")
+	log.Printf("[DEBUG] getObject %s: checksum: %s", cosFile, checksum)
+	if len(checksum) != 32 {
+		err = fmt.Errorf("failed to open file at %v: checksum %s invalid", cosFile, checksum)
+		return
+	}
+
+	exists = true
+	data, err = ioutil.ReadAll(rsp.Body)
+	log.Printf("[DEBUG] getObject %s: data length: %d", cosFile, len(data))
+	if err != nil {
+		err = fmt.Errorf("failed to open file at %v: %v", cosFile, err)
+		return
+	}
+
+	check := fmt.Sprintf("%x", md5.Sum(data))
+	log.Printf("[DEBUG] getObject %s: check: %s", cosFile, check)
+	if check != checksum {
+		err = fmt.Errorf("failed to open file at %v: checksum mismatch, %s != %s", cosFile, check, checksum)
+		return
+	}
+
+	return
+}
+
+// putObject puts an object to the remote
+func (c *remoteClient) putObject(cosFile string, data []byte) error {
+	opt := &cos.ObjectPutOptions{
+		ObjectPutHeaderOptions: &cos.ObjectPutHeaderOptions{
+			XCosMetaXXX: &http.Header{
+				"X-Cos-Meta-Md5": []string{fmt.Sprintf("%x", md5.Sum(data))},
+			},
+		},
+		ACLHeaderOptions: &cos.ACLHeaderOptions{
+			XCosACL: c.acl,
+		},
+	}
+
+	if c.encrypt {
+		opt.ObjectPutHeaderOptions.XCosServerSideEncryption = "AES256"
+	}
+
+	r := bytes.NewReader(data)
+	rsp, err := c.cosClient.Object.Put(c.cosContext, cosFile, r, opt)
+	if rsp == nil {
+		log.Printf("[DEBUG] putObject %s: error: %v", cosFile, err)
+		return fmt.Errorf("failed to save file to %v: %v", cosFile, err)
+	}
+	defer rsp.Body.Close()
+
+	log.Printf("[DEBUG] putObject %s: code: %d, error: %v", cosFile, rsp.StatusCode, err)
+	if err != nil {
+		return fmt.Errorf("failed to save file to %v: %v", cosFile, err)
+	}
+
+	return nil
+}
+
+// deleteObject deletes a remote object
+func (c *remoteClient) deleteObject(cosFile string) error {
+	rsp, err := c.cosClient.Object.Delete(c.cosContext, cosFile)
+	if rsp == nil {
+		log.Printf("[DEBUG] deleteObject %s: error: %v", cosFile, err)
+		return fmt.Errorf("failed to delete file %v: %v", cosFile, err)
+	}
+	defer rsp.Body.Close()
+
+	log.Printf("[DEBUG] deleteObject %s: code: %d, error: %v", cosFile, rsp.StatusCode, err)
+	if rsp.StatusCode == 404 {
+		return nil
+	}
+
+	if err != nil {
+		return fmt.Errorf("failed to delete file %v: %v", cosFile, err)
+	}
+
+	return nil
+}
+
+// getBucket lists the bucket's objects under the given prefix
+func (c *remoteClient) getBucket(prefix string) (obs []cos.Object, err error) {
+	fs, rsp, err := c.cosClient.Bucket.Get(c.cosContext, &cos.BucketGetOptions{Prefix: prefix})
+	if rsp == nil {
+		log.Printf("[DEBUG] getBucket %s/%s: error: %v", c.bucket, prefix, err)
+		err = fmt.Errorf("bucket %s does not exist", c.bucket)
+		return
+	}
+	defer rsp.Body.Close()
+
+	log.Printf("[DEBUG] getBucket %s/%s: code: %d, error: %v", c.bucket, prefix, rsp.StatusCode, err)
+	if rsp.StatusCode == 404 {
+		err = fmt.Errorf("bucket %s does not exist", c.bucket)
+		return
+	}
+
+	if err != nil {
+		return
+	}
+
+	return fs.Contents, nil
+}
+
+// putBucket creates the cos bucket
+func (c *remoteClient) putBucket() error {
+	rsp, err := c.cosClient.Bucket.Put(c.cosContext, nil)
+	if rsp == nil {
+		log.Printf("[DEBUG] putBucket %s: error: %v", c.bucket, err)
+		return fmt.Errorf("failed to create bucket %v: %v", c.bucket, err)
+	}
+	defer rsp.Body.Close()
+
+	log.Printf("[DEBUG] putBucket %s: code: %d, error: %v", c.bucket, rsp.StatusCode, err)
+	if rsp.StatusCode == 409 {
+		return nil
+	}
+
+	if err != nil {
+		return fmt.Errorf("failed to create bucket %v: %v", c.bucket, err)
+	}
+
+	return nil
+}
+
+// deleteBucket deletes the cos bucket, optionally emptying it first
+func (c *remoteClient) deleteBucket(recursive bool) error {
+	if recursive {
+		obs, err := c.getBucket("")
+		if err != nil {
+			if strings.Contains(err.Error(), "does not exist") {
+				return nil
+			}
+			log.Printf("[DEBUG] deleteBucket %s: empty bucket error: %v", c.bucket, err)
+			return fmt.Errorf("failed to empty bucket %v: %v", c.bucket, err)
+		}
+		for _, v := range obs {
+			c.deleteObject(v.Key)
+		}
+	}
+
+	rsp, err := c.cosClient.Bucket.Delete(c.cosContext)
+	if rsp == nil {
+		log.Printf("[DEBUG] deleteBucket %s: error: %v", c.bucket, err)
+		return fmt.Errorf("failed to delete bucket %v: %v", c.bucket, err)
+	}
+	defer rsp.Body.Close()
+
+	log.Printf("[DEBUG] deleteBucket %s: code: %d, error: %v", c.bucket, rsp.StatusCode, err)
+	if rsp.StatusCode == 404 {
+		return nil
+	}
+
+	if err != nil {
+		return fmt.Errorf("failed to delete bucket %v: %v", c.bucket, err)
+	}
+
+	return nil
+}
+
+// cosLock acquires the tag-based lock for the given cos file
+func (c *remoteClient) cosLock(bucket, cosFile string) error {
+	log.Printf("[DEBUG] lock cos file %s:%s", bucket, cosFile)
+
+	cosPath := fmt.Sprintf("%s:%s", bucket, cosFile)
+	lockTagValue := fmt.Sprintf("%x", md5.Sum([]byte(cosPath)))
+
+	return c.CreateTag(lockTagKey, lockTagValue)
+}
+
+// cosUnlock releases the tag-based lock for the given cos file
+func (c *remoteClient) cosUnlock(bucket, cosFile string) error {
+	log.Printf("[DEBUG] unlock cos file %s:%s", bucket, cosFile)
+
+	cosPath := fmt.Sprintf("%s:%s", bucket, cosFile)
+	lockTagValue := fmt.Sprintf("%x", md5.Sum([]byte(cosPath)))
+
+	var err error
+	for i := 0; i < 30; i++ {
+		err = c.DeleteTag(lockTagKey, lockTagValue)
+		if err == nil {
+			return nil
+		}
+		time.Sleep(1 * time.Second)
+	}
+
+	return err
+}
+
+// CreateTag creates a tag with the given key and value
+func (c *remoteClient) CreateTag(key, value string) error {
+	request := tag.NewCreateTagRequest()
+	request.TagKey = &key
+	request.TagValue = &value
+
+	_, err := c.tagClient.CreateTag(request)
+	log.Printf("[DEBUG] create tag %s:%s: error: %v", key, value, err)
+	if err != nil {
+		return fmt.Errorf("failed to create tag: %s -> %s: %s", key, value, err)
+	}
+
+	return nil
+}
+
+// DeleteTag deletes a tag with the given key and value
+func (c *remoteClient) DeleteTag(key, value string) error {
+	request := tag.NewDeleteTagRequest()
+	request.TagKey = &key
+	request.TagValue = &value
+
+	_, err := c.tagClient.DeleteTag(request)
+	log.Printf("[DEBUG] delete tag %s:%s: error: %v", key, value, err)
+	if err != nil {
+		return fmt.Errorf("failed to delete tag: %s -> %s: %s", key, value, err)
+	}
+
+	return nil
+}
diff --git a/backend/remote-state/gcs/backend.go b/backend/remote-state/gcs/backend.go
index 41521bf34..750be3f2f 100644
--- a/backend/remote-state/gcs/backend.go
+++ b/backend/remote-state/gcs/backend.go
@@ -136,6 +136,8 @@ func (b *Backend) configure(ctx context.Context) error {
 		})
 	} else if v, ok := data.GetOk("credentials"); ok {
 		creds = v.(string)
+	} else if v := os.Getenv("GOOGLE_BACKEND_CREDENTIALS"); v != "" {
+		creds = v
 	} else {
 		creds = os.Getenv("GOOGLE_CREDENTIALS")
 	}
diff --git a/backend/remote-state/http/client_test.go b/backend/remote-state/http/client_test.go
index 7b6f04e62..c05be2098 100644
--- a/backend/remote-state/http/client_test.go
+++ b/backend/remote-state/http/client_test.go
@@ -26,7 +26,7 @@ func TestHTTPClient(t *testing.T) {
 
 	url, err := url.Parse(ts.URL)
 	if err != nil {
-		t.Fatalf("err: %s", err)
+		t.Fatalf("Parse: %s", err)
 	}
 
 	// Test basic get/update
@@ -73,6 +73,10 @@ func TestHTTPClient(t *testing.T) {
 		UpdateMethod: "PUT",
 		Client:       retryablehttp.NewClient(),
 	}
+	if err != nil {
+		t.Fatalf("Parse: %s", err)
+	}
+
 	remote.TestClient(t, client) // first time through: 201
 	remote.TestClient(t, client) // second time, with identical data: 204
 
@@ -83,18 +87,13 @@ func TestHTTPClient(t *testing.T) {
 	defer ts.Close()
 
 	url, err = url.Parse(ts.URL)
+	if err != nil {
+		t.Fatalf("Parse: %s", err)
+	}
 	client = &httpClient{URL: url, Client: retryablehttp.NewClient()}
 	remote.TestClient(t, client)
 }
 
-func assertError(t *testing.T, err error, expected string) {
-	if err == nil {
-		t.Fatalf("Expected empty config to err")
-	} else if err.Error() != expected {
-		t.Fatalf("Expected err.Error() to be \"%s\", got \"%s\"", expected, err.Error())
-	}
-}
-
 type testHTTPHandler struct {
 	Data   []byte
 	Locked bool
diff --git a/backend/remote-state/oss/backend.go b/backend/remote-state/oss/backend.go
index 0b0a4bd71..8e868c176 100644
--- a/backend/remote-state/oss/backend.go
+++ b/backend/remote-state/oss/backend.go
@@ -5,11 +5,13 @@ import (
 	"encoding/json"
 	"fmt"
 	"github.com/aliyun/alibaba-cloud-sdk-go/sdk/requests"
+	"github.com/aliyun/alibaba-cloud-sdk-go/sdk/responses"
 	"github.com/aliyun/alibaba-cloud-sdk-go/services/sts"
 	"github.com/aliyun/aliyun-oss-go-sdk/oss"
 	"github.com/hashicorp/terraform/backend"
 	"github.com/hashicorp/terraform/helper/schema"
 	"github.com/hashicorp/terraform/helper/validation"
+	"github.com/jmespath/go-jmespath"
 	"io/ioutil"
 	"os"
 	"runtime"
@@ -21,6 +23,7 @@ import (
 	"github.com/aliyun/aliyun-tablestore-go-sdk/tablestore"
 	"github.com/hashicorp/go-cleanhttp"
 	"github.com/hashicorp/terraform/version"
+	"github.com/mitchellh/go-homedir"
 	"log"
 	"net/http"
 	"strconv"
@@ -52,6 +55,13 @@ func New() backend.Backend {
 				DefaultFunc: schema.EnvDefaultFunc("ALICLOUD_SECURITY_TOKEN", ""),
 			},
 
+			"ecs_role_name": {
+				Type:        schema.TypeString,
+				Optional:    true,
+				DefaultFunc: schema.EnvDefaultFunc("ALICLOUD_ECS_ROLE_NAME", ""),
+				Description: "The RAM role name attached to an ECS instance, used for API operations. You can retrieve this from the 'Access Control' section of the Alibaba Cloud console.",
+			},
+
 			"region": &schema.Schema{
 				Type:     schema.TypeString,
 				Optional: true,
@@ -140,6 +150,7 @@ func New() backend.Backend {
 			"shared_credentials_file": {
 				Type:        schema.TypeString,
 				Optional:    true,
+				DefaultFunc: schema.EnvDefaultFunc("ALICLOUD_SHARED_CREDENTIALS_FILE", ""),
 				Description: "This is the path to the shared credentials file. If this is not set and a profile is specified, `~/.aliyun/config.json` will be used.",
 			},
 			"profile": {
@@ -276,8 +287,17 @@ func (b *Backend) configure(ctx context.Context) error {
 		}
 	}
 
+	if accessKey == "" {
+		ecsRoleName := getBackendConfig(d.Get("ecs_role_name").(string), "ram_role_name")
+		subAccessKeyId, subAccessKeySecret, subSecurityToken, err := getAuthCredentialByEcsRoleName(ecsRoleName)
+		if err != nil {
+			return err
+		}
+		accessKey, secretKey, securityToken = subAccessKeyId, subAccessKeySecret, subSecurityToken
+	}
+
 	if roleArn != "" {
-		subAccessKeyId, subAccessKeySecret, subSecurityToken, err := getAssumeRoleAK(accessKey, secretKey, region, roleArn, sessionName, policy, sessionExpiration)
+		subAccessKeyId, subAccessKeySecret, subSecurityToken, err := getAssumeRoleAK(accessKey, secretKey, securityToken, region, roleArn, sessionName, policy, sessionExpiration)
 		if err != nil {
 			return err
 		}
@@ -347,7 +367,7 @@ func (b *Backend) getOSSEndpointByRegion(access_key, secret_key, security_token,
 	return endpointsResponse, nil
 }
 
-func getAssumeRoleAK(accessKey, secretKey, region, roleArn, sessionName, policy string, sessionExpiration int) (string, string, string, error) {
+func getAssumeRoleAK(accessKey, secretKey, stsToken, region, roleArn, sessionName, policy string, sessionExpiration int) (string, string, string, error) {
 	request := sts.CreateAssumeRoleRequest()
 	request.RoleArn = roleArn
 	request.RoleSessionName = sessionName
@@ -355,7 +375,13 @@ func getAssumeRoleAK(accessKey, secretKey, region, roleArn, sessionName, policy
 	request.Policy = policy
 	request.Scheme = "https"
 
-	client, err := sts.NewClientWithAccessKey(region, accessKey, secretKey)
+	var client *sts.Client
+	var err error
+	if stsToken == "" {
+		client, err = sts.NewClientWithAccessKey(region, accessKey, secretKey)
+	} else {
+		client, err = sts.NewClientWithStsToken(region, accessKey, secretKey, stsToken)
+	}
 	if err != nil {
 		return "", "", "", err
 	}
@@ -445,7 +471,11 @@ func getConfigFromProfile(d *schema.ResourceData, ProfileKey string) (interface{
 		return nil, nil
 	}
 	current := d.Get("profile").(string)
-	profilePath := d.Get("shared_credentials_file").(string)
+	// Set CredsFilename, expanding home directory
+	profilePath, err := homedir.Expand(d.Get("shared_credentials_file").(string))
+	if err != nil {
+		return nil, err
+	}
 	if profilePath == "" {
 		profilePath = fmt.Sprintf("%s/.aliyun/config.json", os.Getenv("HOME"))
 		if runtime.GOOS == "windows" {
@@ -453,7 +483,7 @@ func getConfigFromProfile(d *schema.ResourceData, ProfileKey string) (interface{
 		}
 	}
 	providerConfig = make(map[string]interface{})
-	_, err := os.Stat(profilePath)
+	_, err = os.Stat(profilePath)
 	if !os.IsNotExist(err) {
 		data, err := ioutil.ReadFile(profilePath)
 		if err != nil {
@@ -503,3 +533,78 @@ func getConfigFromProfile(d *schema.ResourceData, ProfileKey string) (interface{
 
 	return providerConfig[ProfileKey], nil
 }
+
+var securityCredURL = "http://100.100.100.200/latest/meta-data/ram/security-credentials/"
+
+// getAuthCredentialByEcsRoleName accesses the ECS instance metadata service to get an STS credential.
+// This job should really be done by the SDK, but currently not all resources and products support alibaba-cloud-sdk-go,
+// and their Go SDKs do not all support ECS role names.
+// This method is a temporary solution and should be removed once every Go SDK supports ECS role names.
+// The related PR: https://github.com/terraform-providers/terraform-provider-alicloud/pull/731
+func getAuthCredentialByEcsRoleName(ecsRoleName string) (accessKey, secretKey, token string, err error) {
+
+	if ecsRoleName == "" {
+		return
+	}
+	requestUrl := securityCredURL + ecsRoleName
+	httpRequest, err := http.NewRequest(requests.GET, requestUrl, strings.NewReader(""))
+	if err != nil {
+		err = fmt.Errorf("build sts request err: %s", err.Error())
+		return
+	}
+	httpClient := &http.Client{}
+	httpResponse, err := httpClient.Do(httpRequest)
+	if err != nil {
+		err = fmt.Errorf("get ECS sts token err: %s", err.Error())
+		return
+	}
+
+	response := responses.NewCommonResponse()
+	err = responses.Unmarshal(response, httpResponse, "")
+	if err != nil {
+		err = fmt.Errorf("unmarshal ECS sts token response err: %s", err.Error())
+		return
+	}
+
+	if response.GetHttpStatus() != http.StatusOK {
+		err = fmt.Errorf("get ECS sts token err, httpStatus: %d, message = %s", response.GetHttpStatus(), response.GetHttpContentString())
+		return
+	}
+	var data interface{}
+	err = json.Unmarshal(response.GetHttpContentBytes(), &data)
+	if err != nil {
+		err = fmt.Errorf("refresh ECS sts token err, json.Unmarshal fail: %s", err.Error())
+		return
+	}
+	code, err := jmespath.Search("Code", data)
+	if err != nil {
+		err = fmt.Errorf("refresh ECS sts token err, fail to get Code: %s", err.Error())
+		return
+	}
+	if code.(string) != "Success" {
+		err = fmt.Errorf("refresh ECS sts token err, Code is not Success")
+		return
+	}
+	accessKeyId, err := jmespath.Search("AccessKeyId", data)
+	if err != nil {
+		err = fmt.Errorf("refresh ECS sts token err, fail to get AccessKeyId: %s", err.Error())
+		return
+	}
+	accessKeySecret, err := jmespath.Search("AccessKeySecret", data)
+	if err != nil {
+		err = fmt.Errorf("refresh ECS sts token err, fail to get AccessKeySecret: %s", err.Error())
+		return
+	}
+	securityToken, err := jmespath.Search("SecurityToken", data)
+	if err != nil {
+		err = fmt.Errorf("refresh ECS sts token err, fail to get SecurityToken: %s", err.Error())
+		return
+	}
+
+	if accessKeyId == nil || accessKeySecret == nil || securityToken == nil {
+		err = fmt.Errorf("there is no available access key, secret key or security token for ECS role %s", ecsRoleName)
+		return
+	}
+
+	return accessKeyId.(string), accessKeySecret.(string), securityToken.(string), nil
+}
diff --git a/backend/remote-state/s3/backend_state.go b/backend/remote-state/s3/backend_state.go
index 646932476..861c284b4 100644
--- a/backend/remote-state/s3/backend_state.go
+++ b/backend/remote-state/s3/backend_state.go
@@ -18,6 +18,8 @@ import (
 )
 
 func (b *Backend) Workspaces() ([]string, error) {
+	const maxKeys = 1000
+
 	prefix := ""
 
 	if b.workspaceKeyPrefix != "" {
@@ -25,24 +27,28 @@ func (b *Backend) Workspaces() ([]string, error) {
 	}
 
 	params := &s3.ListObjectsInput{
-		Bucket: &b.bucketName,
-		Prefix: aws.String(prefix),
-	}
-
-	resp, err := b.s3Client.ListObjects(params)
-	if err != nil {
-		if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == s3.ErrCodeNoSuchBucket {
-			return nil, fmt.Errorf(errS3NoSuchBucket, err)
-		}
-		return nil, err
+		Bucket:  &b.bucketName,
+		Prefix:  aws.String(prefix),
+		MaxKeys: aws.Int64(maxKeys),
 	}
 
 	wss := []string{backend.DefaultStateName}
-	for _, obj := range resp.Contents {
-		ws := b.keyEnv(*obj.Key)
-		if ws != "" {
-			wss = append(wss, ws)
+	err := b.s3Client.ListObjectsPages(params, func(page *s3.ListObjectsOutput, lastPage bool) bool {
+		for _, obj := range page.Contents {
+			ws := b.keyEnv(*obj.Key)
+			if ws != "" {
+				wss = append(wss, ws)
+			}
 		}
+		return !lastPage
+	})
+
+	if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == s3.ErrCodeNoSuchBucket {
+		return nil, fmt.Errorf(errS3NoSuchBucket, err)
+	}
+	if err != nil {
+		return nil, err
 	}
 
 	sort.Strings(wss[1:])
diff --git a/backend/remote-state/s3/backend_test.go b/backend/remote-state/s3/backend_test.go
index 1ebe8450e..903cf237f 100644
--- a/backend/remote-state/s3/backend_test.go
+++ b/backend/remote-state/s3/backend_test.go
@@ -229,19 +229,24 @@ func TestBackendExtraPaths(t *testing.T) {
 		ddbTable:   b.ddbTable,
 	}
 
+	// Write the first state
 	stateMgr := &remote.State{Client: client}
 	stateMgr.WriteState(s1)
 	if err := stateMgr.PersistState(); err != nil {
 		t.Fatal(err)
 	}
 
+	// Write the second state.
+	// Note that this uses a new state manager; otherwise, because these
+	// states are equal, the manager would not Put the state to the remote client.
 	client.path = b.path("s2")
-	stateMgr.WriteState(s2)
-	if err := stateMgr.PersistState(); err != nil {
+	stateMgr2 := &remote.State{Client: client}
+	stateMgr2.WriteState(s2)
+	if err := stateMgr2.PersistState(); err != nil {
 		t.Fatal(err)
 	}
 
-	s2Lineage := stateMgr.StateSnapshotMeta().Lineage
+	s2Lineage := stateMgr2.StateSnapshotMeta().Lineage
 
 	if err := checkStateList(b, []string{"default", "s1", "s2"}); err != nil {
 		t.Fatal(err)
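The `Workspaces` change above matters because S3's ListObjects API returns at most 1,000 keys per call, so installations with more workspace state objects than that previously got a silently truncated workspace list; `ListObjectsPages` drives the continuation loop internally. A standalone sketch of the same pattern (bucket name and prefix here are hypothetical):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession())
	client := s3.New(sess)

	params := &s3.ListObjectsInput{
		Bucket:  aws.String("my-terraform-states"), // hypothetical bucket
		Prefix:  aws.String("env:/"),
		MaxKeys: aws.Int64(1000),
	}

	var keys []string
	err := client.ListObjectsPages(params, func(page *s3.ListObjectsOutput, lastPage bool) bool {
		for _, obj := range page.Contents {
			keys = append(keys, *obj.Key)
		}
		// Returning false would stop pagination early; the backend always
		// continues to the last page so no workspace is missed.
		return !lastPage
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("found %d keys\n", len(keys))
}
```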
"github.com/hashicorp/terraform/version" @@ -267,12 +267,17 @@ func (b *Remote) Configure(obj cty.Value) tfdiags.Diagnostics { // Return an error if we still don't have a token at this point. if token == "" { + loginCommand := "terraform login" + if b.hostname != defaultHostname { + loginCommand = loginCommand + " " + b.hostname + } diags = diags.Append(tfdiags.Sourceless( tfdiags.Error, "Required token could not be found", fmt.Sprintf( - "Make sure you configured a credentials block for %s in your CLI Config File.", + "Run the following command to generate a token for %s:\n %s", b.hostname, + loginCommand, ), )) return diags diff --git a/backend/remote/backend_apply.go b/backend/remote/backend_apply.go index 6e187fc34..ef5b1511c 100644 --- a/backend/remote/backend_apply.go +++ b/backend/remote/backend_apply.go @@ -75,10 +75,7 @@ func (b *Remote) opApply(stopCtx, cancelCtx context.Context, op *backend.Operati )) } - variables, parseDiags := b.parseVariableValues(op) - diags = diags.Append(parseDiags) - - if len(variables) > 0 { + if b.hasExplicitVariableValues(op) { diags = diags.Append(tfdiags.Sourceless( tfdiags.Error, "Run variables are currently not supported", diff --git a/backend/remote/backend_common.go b/backend/remote/backend_common.go index 930e01646..c2d6602dd 100644 --- a/backend/remote/backend_common.go +++ b/backend/remote/backend_common.go @@ -14,7 +14,6 @@ import ( tfe "github.com/hashicorp/go-tfe" "github.com/hashicorp/terraform/backend" "github.com/hashicorp/terraform/terraform" - "github.com/hashicorp/terraform/tfdiags" ) var ( @@ -201,32 +200,42 @@ func (b *Remote) waitForRun(stopCtx, cancelCtx context.Context, op *backend.Oper } } -func (b *Remote) parseVariableValues(op *backend.Operation) (terraform.InputValues, tfdiags.Diagnostics) { - var diags tfdiags.Diagnostics - result := make(terraform.InputValues) - +// hasExplicitVariableValues is a best-effort check to determine whether the +// user has provided -var or -var-file arguments to a remote operation. +// +// The results may be inaccurate if the configuration is invalid or if +// individual variable values are invalid. That's okay because we only use this +// result to hint the user to set variables a different way. It's always the +// remote system's responsibility to do final validation of the input. +func (b *Remote) hasExplicitVariableValues(op *backend.Operation) bool { // Load the configuration using the caller-provided configuration loader. config, _, configDiags := op.ConfigLoader.LoadConfigWithSnapshot(op.ConfigDir) - diags = diags.Append(configDiags) - if diags.HasErrors() { - return nil, diags + if configDiags.HasErrors() { + // If we can't load the configuration then we'll assume no explicit + // variable values just to let the remote operation start and let + // the remote system return the same set of configuration errors. + return false } - variables, varDiags := backend.ParseVariableValues(op.Variables, config.Module.Variables) - diags = diags.Append(varDiags) - if diags.HasErrors() { - return nil, diags - } + // We're intentionally ignoring the diagnostics here because validation + // of the variable values is the responsibilty of the remote system. Our + // goal here is just to make a best effort count of how many variable + // values are coming from -var or -var-file CLI arguments so that we can + // hint the user that those are not supported for remote operations. 
+ variables, _ := backend.ParseVariableValues(op.Variables, config.Module.Variables) - // Save only the explicitly defined variables. - for k, v := range variables { + // Check for explicitly-defined (-var and -var-file) variables, which the + // remote backend does not support. All other source types are okay, + // because they are implicit from the execution context anyway and so + // their final values will come from the _remote_ execution context. + for _, v := range variables { switch v.SourceType { case terraform.ValueFromCLIArg, terraform.ValueFromNamedFile: - result[k] = v + return true } } - return result, diags + return false } func (b *Remote) costEstimate(stopCtx, cancelCtx context.Context, op *backend.Operation, r *tfe.Run) error { diff --git a/backend/remote/backend_context.go b/backend/remote/backend_context.go index d548a4eec..9461cb54a 100644 --- a/backend/remote/backend_context.go +++ b/backend/remote/backend_context.go @@ -2,16 +2,21 @@ package remote import ( "context" + "fmt" "log" "strings" "github.com/hashicorp/errwrap" tfe "github.com/hashicorp/go-tfe" + "github.com/hashicorp/hcl/v2" + "github.com/hashicorp/hcl/v2/hclsyntax" "github.com/hashicorp/terraform/backend" "github.com/hashicorp/terraform/command/clistate" + "github.com/hashicorp/terraform/configs" "github.com/hashicorp/terraform/states/statemgr" "github.com/hashicorp/terraform/terraform" "github.com/hashicorp/terraform/tfdiags" + "github.com/zclconf/go-cty/cty" ) // Context implements backend.Enhanced. @@ -47,6 +52,17 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu return nil, nil, diags } + defer func() { + // If we're returning with errors, and thus not producing a valid + // context, we'll want to avoid leaving the remote workspace locked. + if diags.HasErrors() { + err := op.StateLocker.Unlock(nil) + if err != nil { + diags = diags.Append(errwrap.Wrapf("Error unlocking state: {{err}}", err)) + } + } + }() + log.Printf("[TRACE] backend/remote: reading remote state for workspace %q", workspace) if err := stateMgr.RefreshState(); err != nil { diags = diags.Append(errwrap.Wrapf("Error loading state: {{err}}", err)) @@ -88,28 +104,34 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu return nil, nil, diags } - if tfeVariables != nil { - if op.Variables == nil { - op.Variables = make(map[string]backend.UnparsedVariableValue) - } - for _, v := range tfeVariables.Items { - if v.Sensitive { - v.Value = "" + if op.AllowUnsetVariables { + // If we're not going to use the variables in an operation we'll be + // more lax about them, stubbing out any unset ones as unknown. 
+ // This gives us enough information to produce a consistent context, + // but not enough information to run a real operation (plan, apply, etc) + opts.Variables = stubAllVariables(op.Variables, config.Module.Variables) + } else { + if tfeVariables != nil { + if op.Variables == nil { + op.Variables = make(map[string]backend.UnparsedVariableValue) } - op.Variables[v.Key] = &unparsedVariableValue{ - value: v.Value, - source: terraform.ValueFromEnvVar, + for _, v := range tfeVariables.Items { + if v.Category == tfe.CategoryTerraform { + op.Variables[v.Key] = &remoteStoredVariableValue{ + definition: v, + } + } } } - } - if op.Variables != nil { - variables, varDiags := backend.ParseVariableValues(op.Variables, config.Module.Variables) - diags = diags.Append(varDiags) - if diags.HasErrors() { - return nil, nil, diags + if op.Variables != nil { + variables, varDiags := backend.ParseVariableValues(op.Variables, config.Module.Variables) + diags = diags.Append(varDiags) + if diags.HasErrors() { + return nil, nil, diags + } + opts.Variables = variables } - opts.Variables = variables } tfCtx, ctxDiags := terraform.NewContext(&opts) @@ -119,3 +141,114 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu return tfCtx, stateMgr, diags } + +func stubAllVariables(vv map[string]backend.UnparsedVariableValue, decls map[string]*configs.Variable) terraform.InputValues { + ret := make(terraform.InputValues, len(decls)) + + for name, cfg := range decls { + raw, exists := vv[name] + if !exists { + ret[name] = &terraform.InputValue{ + Value: cty.UnknownVal(cfg.Type), + SourceType: terraform.ValueFromConfig, + } + continue + } + + val, diags := raw.ParseVariableValue(cfg.ParsingMode) + if diags.HasErrors() { + ret[name] = &terraform.InputValue{ + Value: cty.UnknownVal(cfg.Type), + SourceType: terraform.ValueFromConfig, + } + continue + } + ret[name] = val + } + + return ret +} + +// remoteStoredVariableValue is a backend.UnparsedVariableValue implementation +// that translates from the go-tfe representation of stored variables into +// the Terraform Core backend representation of variables. +type remoteStoredVariableValue struct { + definition *tfe.Variable +} + +var _ backend.UnparsedVariableValue = (*remoteStoredVariableValue)(nil) + +func (v *remoteStoredVariableValue) ParseVariableValue(mode configs.VariableParsingMode) (*terraform.InputValue, tfdiags.Diagnostics) { + var diags tfdiags.Diagnostics + var val cty.Value + + switch { + case v.definition.Sensitive: + // If it's marked as sensitive then it's not available for use in + // local operations. We'll use an unknown value as a placeholder for + // it so that operations that don't need it might still work, but + // we'll also produce a warning about it to add context for any + // errors that might result here. + val = cty.DynamicVal + if !v.definition.HCL { + // If it's not marked as HCL then we at least know that the + // value must be a string, so we'll set that in case it allows + // us to do some more precise type checking. + val = cty.UnknownVal(cty.String) + } + + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Warning, + fmt.Sprintf("Value for var.%s unavailable", v.definition.Key), + fmt.Sprintf("The value of variable %q is marked as sensitive in the remote workspace. 
This operation always runs locally, so the value for that variable is not available.", v.definition.Key),
+		))
+
+	case v.definition.HCL:
+		// If the variable value is marked as being in HCL syntax, we need to
+		// parse it the same way as it would be interpreted in a .tfvars
+		// file because that is how it would get passed to Terraform CLI for
+		// a remote operation and we want to mimic that result as closely as
+		// possible.
+		var exprDiags hcl.Diagnostics
+		expr, exprDiags := hclsyntax.ParseExpression([]byte(v.definition.Value), "", hcl.Pos{Line: 1, Column: 1})
+		if expr != nil {
+			var moreDiags hcl.Diagnostics
+			val, moreDiags = expr.Value(nil)
+			exprDiags = append(exprDiags, moreDiags...)
+		} else {
+			// We'll have already put some errors in exprDiags above, so we'll
+			// just stub out the value here.
+			val = cty.DynamicVal
+		}
+
+		// We don't have sufficient context to return decent error messages
+		// for syntax errors in the remote values, so we'll just return a
+		// generic message instead for now.
+		// (More complete error messages will still result from true remote
+		// operations, because they'll run on the remote system where we've
+		// materialized the values into a tfvars file we can report from.)
+		if exprDiags.HasErrors() {
+			diags = diags.Append(tfdiags.Sourceless(
+				tfdiags.Error,
+				fmt.Sprintf("Invalid expression for var.%s", v.definition.Key),
+				fmt.Sprintf("The value of variable %q is marked in the remote workspace as being specified in HCL syntax, but the given value is not valid HCL. Stored variable values must be valid literal expressions and may not contain references to other variables or calls to functions.", v.definition.Key),
+			))
+		}
+
+	default:
+		// A variable value _not_ marked as HCL is always a string, given
+		// literally.
+		val = cty.StringVal(v.definition.Value)
+	}
+
+	return &terraform.InputValue{
+		Value: val,
+
+		// We mark these as "from input" with the rationale that entering
+		// variable values into the Terraform Cloud or Enterprise UI is,
+		// roughly speaking, a similar idea to entering variable values at
+		// the interactive CLI prompts. It's not a perfect correspondence,
+		// but it's closer than the other options.
+ SourceType: terraform.ValueFromInput, + }, diags +} diff --git a/backend/remote/backend_context_test.go b/backend/remote/backend_context_test.go new file mode 100644 index 000000000..03de0f427 --- /dev/null +++ b/backend/remote/backend_context_test.go @@ -0,0 +1,214 @@ +package remote + +import ( + "testing" + + tfe "github.com/hashicorp/go-tfe" + "github.com/hashicorp/terraform/backend" + "github.com/hashicorp/terraform/configs" + "github.com/hashicorp/terraform/internal/initwd" + "github.com/zclconf/go-cty/cty" +) + +func TestRemoteStoredVariableValue(t *testing.T) { + tests := map[string]struct { + Def *tfe.Variable + Want cty.Value + WantError string + }{ + "string literal": { + &tfe.Variable{ + Key: "test", + Value: "foo", + HCL: false, + Sensitive: false, + }, + cty.StringVal("foo"), + ``, + }, + "string HCL": { + &tfe.Variable{ + Key: "test", + Value: `"foo"`, + HCL: true, + Sensitive: false, + }, + cty.StringVal("foo"), + ``, + }, + "list HCL": { + &tfe.Variable{ + Key: "test", + Value: `[]`, + HCL: true, + Sensitive: false, + }, + cty.EmptyTupleVal, + ``, + }, + "null HCL": { + &tfe.Variable{ + Key: "test", + Value: `null`, + HCL: true, + Sensitive: false, + }, + cty.NullVal(cty.DynamicPseudoType), + ``, + }, + "literal sensitive": { + &tfe.Variable{ + Key: "test", + HCL: false, + Sensitive: true, + }, + cty.UnknownVal(cty.String), + ``, + }, + "HCL sensitive": { + &tfe.Variable{ + Key: "test", + HCL: true, + Sensitive: true, + }, + cty.DynamicVal, + ``, + }, + "HCL computation": { + // This (stored expressions containing computation) is not a case + // we intentionally supported, but it became possible for remote + // operations in Terraform 0.12 (due to Terraform Cloud/Enterprise + // just writing the HCL verbatim into generated `.tfvars` files). + // We support it here for consistency, and we continue to support + // it in both places for backward-compatibility. In practice, + // there's little reason to do computation in a stored variable + // value because references are not supported. + &tfe.Variable{ + Key: "test", + Value: `[for v in ["a"] : v]`, + HCL: true, + Sensitive: false, + }, + cty.TupleVal([]cty.Value{cty.StringVal("a")}), + ``, + }, + "HCL syntax error": { + &tfe.Variable{ + Key: "test", + Value: `[`, + HCL: true, + Sensitive: false, + }, + cty.DynamicVal, + `Invalid expression for var.test: The value of variable "test" is marked in the remote workspace as being specified in HCL syntax, but the given value is not valid HCL. Stored variable values must be valid literal expressions and may not contain references to other variables or calls to functions.`, + }, + "HCL with references": { + &tfe.Variable{ + Key: "test", + Value: `foo.bar`, + HCL: true, + Sensitive: false, + }, + cty.DynamicVal, + `Invalid expression for var.test: The value of variable "test" is marked in the remote workspace as being specified in HCL syntax, but the given value is not valid HCL. Stored variable values must be valid literal expressions and may not contain references to other variables or calls to functions.`, + }, + } + + for name, test := range tests { + t.Run(name, func(t *testing.T) { + v := &remoteStoredVariableValue{ + definition: test.Def, + } + // This ParseVariableValue implementation ignores the parsing mode, + // so we'll just always parse literal here. (The parsing mode is + // selected by the remote server, not by our local configuration.) 
+ gotIV, diags := v.ParseVariableValue(configs.VariableParseLiteral) + if test.WantError != "" { + if !diags.HasErrors() { + t.Fatalf("missing expected error\ngot: \nwant: %s", test.WantError) + } + errStr := diags.Err().Error() + if errStr != test.WantError { + t.Fatalf("wrong error\ngot: %s\nwant: %s", errStr, test.WantError) + } + } else { + if diags.HasErrors() { + t.Fatalf("unexpected error\ngot: %s\nwant: ", diags.Err().Error()) + } + got := gotIV.Value + if !test.Want.RawEquals(got) { + t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) + } + } + }) + } +} + +func TestRemoteContextWithVars(t *testing.T) { + catTerraform := tfe.CategoryTerraform + catEnv := tfe.CategoryEnv + + tests := map[string]struct { + Opts *tfe.VariableCreateOptions + WantError string + }{ + "Terraform variable": { + &tfe.VariableCreateOptions{ + Category: &catTerraform, + }, + `Value for undeclared variable: A variable named "key" was assigned a value, but the root module does not declare a variable of that name. To use this value, add a "variable" block to the configuration.`, + }, + "environment variable": { + &tfe.VariableCreateOptions{ + Category: &catEnv, + }, + ``, + }, + } + + for name, test := range tests { + t.Run(name, func(t *testing.T) { + configDir := "./testdata/empty" + + b, bCleanup := testBackendDefault(t) + defer bCleanup() + + _, configLoader, configCleanup := initwd.MustLoadConfigForTests(t, configDir) + defer configCleanup() + + op := &backend.Operation{ + ConfigDir: configDir, + ConfigLoader: configLoader, + Workspace: backend.DefaultStateName, + } + + v := test.Opts + if v.Key == nil { + key := "key" + v.Key = &key + } + if v.Workspace == nil { + v.Workspace = &tfe.Workspace{ + Name: b.workspace, + } + } + b.client.Variables.Create(nil, *v) + + _, _, diags := b.Context(op) + + if test.WantError != "" { + if !diags.HasErrors() { + t.Fatalf("missing expected error\ngot: \nwant: %s", test.WantError) + } + errStr := diags.Err().Error() + if errStr != test.WantError { + t.Fatalf("wrong error\ngot: %s\nwant: %s", errStr, test.WantError) + } + } else { + if diags.HasErrors() { + t.Fatalf("unexpected error\ngot: %s\nwant: ", diags.Err().Error()) + } + } + }) + } +} diff --git a/backend/remote/backend_mock.go b/backend/remote/backend_mock.go index d337d976e..42b51fd26 100644 --- a/backend/remote/backend_mock.go +++ b/backend/remote/backend_mock.go @@ -27,6 +27,7 @@ type mockClient struct { PolicyChecks *mockPolicyChecks Runs *mockRuns StateVersions *mockStateVersions + Variables *mockVariables Workspaces *mockWorkspaces } @@ -40,6 +41,7 @@ func newMockClient() *mockClient { c.PolicyChecks = newMockPolicyChecks(c) c.Runs = newMockRuns(c) c.StateVersions = newMockStateVersions(c) + c.Variables = newMockVariables(c) c.Workspaces = newMockWorkspaces(c) return c } @@ -945,6 +947,63 @@ func (m *mockStateVersions) Download(ctx context.Context, url string) ([]byte, e return state, nil } +type mockVariables struct { + client *mockClient + workspaces map[string]*tfe.VariableList +} + +func newMockVariables(client *mockClient) *mockVariables { + return &mockVariables{ + client: client, + workspaces: make(map[string]*tfe.VariableList), + } +} + +func (m *mockVariables) List(ctx context.Context, options tfe.VariableListOptions) (*tfe.VariableList, error) { + vl := m.workspaces[*options.Workspace] + return vl, nil +} + +func (m *mockVariables) Create(ctx context.Context, options tfe.VariableCreateOptions) (*tfe.Variable, error) { + v := &tfe.Variable{ + ID: generateID("var-"), + Key: *options.Key, + 
Category: *options.Category, + } + if options.Value != nil { + v.Value = *options.Value + } + if options.HCL != nil { + v.HCL = *options.HCL + } + if options.Sensitive != nil { + v.Sensitive = *options.Sensitive + } + + workspace := options.Workspace.Name + + if m.workspaces[workspace] == nil { + m.workspaces[workspace] = &tfe.VariableList{} + } + + vl := m.workspaces[workspace] + vl.Items = append(vl.Items, v) + + return v, nil +} + +func (m *mockVariables) Read(ctx context.Context, variableID string) (*tfe.Variable, error) { + panic("not implemented") +} + +func (m *mockVariables) Update(ctx context.Context, variableID string, options tfe.VariableUpdateOptions) (*tfe.Variable, error) { + panic("not implemented") +} + +func (m *mockVariables) Delete(ctx context.Context, variableID string) error { + panic("not implemented") +} + type mockWorkspaces struct { client *mockClient workspaceIDs map[string]*tfe.Workspace diff --git a/backend/remote/backend_plan.go b/backend/remote/backend_plan.go index 9563080e1..c73c63f18 100644 --- a/backend/remote/backend_plan.go +++ b/backend/remote/backend_plan.go @@ -78,10 +78,7 @@ func (b *Remote) opPlan(stopCtx, cancelCtx context.Context, op *backend.Operatio )) } - variables, parseDiags := b.parseVariableValues(op) - diags = diags.Append(parseDiags) - - if len(variables) > 0 { + if b.hasExplicitVariableValues(op) { diags = diags.Append(tfdiags.Sourceless( tfdiags.Error, "Run variables are currently not supported", @@ -164,10 +161,12 @@ func (b *Remote) plan(stopCtx, cancelCtx context.Context, op *backend.Operation, The remote workspace is configured to work with configuration at %s relative to the target repository. -Therefore Terraform will upload the full contents of the following directory -to capture the filesystem context the remote workspace expects: +Terraform will upload the contents of the following directory, +excluding files or directories as defined by a .terraformignore file +at %s/.terraformignore (if it is present), +in order to capture the filesystem context the remote workspace expects: %s -`), w.WorkingDirectory, configDir) + "\n") +`), w.WorkingDirectory, configDir, configDir) + "\n") } } diff --git a/backend/remote/backend_test.go b/backend/remote/backend_test.go index ac3fd01eb..053a1b35e 100644 --- a/backend/remote/backend_test.go +++ b/backend/remote/backend_test.go @@ -5,8 +5,8 @@ import ( "strings" "testing" + "github.com/hashicorp/terraform-svchost/disco" "github.com/hashicorp/terraform/backend" - "github.com/hashicorp/terraform/svchost/disco" "github.com/hashicorp/terraform/version" "github.com/zclconf/go-cty/cty" @@ -64,6 +64,19 @@ func TestRemote_config(t *testing.T) { }), confErr: "Failed to request discovery document", }, + // localhost advertises TFE services, but has no token in the credentials + "without_a_token": { + config: cty.ObjectVal(map[string]cty.Value{ + "hostname": cty.StringVal("localhost"), + "organization": cty.StringVal("hashicorp"), + "token": cty.NullVal(cty.String), + "workspaces": cty.ObjectVal(map[string]cty.Value{ + "name": cty.StringVal("prod"), + "prefix": cty.NullVal(cty.String), + }), + }), + confErr: "terraform login localhost", + }, "with_a_name": { config: cty.ObjectVal(map[string]cty.Value{ "hostname": cty.NullVal(cty.String), diff --git a/backend/remote/testing.go b/backend/remote/testing.go index 11197487a..e506f28f9 100644 --- a/backend/remote/testing.go +++ b/backend/remote/testing.go @@ -10,16 +10,18 @@ import ( "testing" tfe "github.com/hashicorp/go-tfe" + svchost 
"github.com/hashicorp/terraform-svchost" + "github.com/hashicorp/terraform-svchost/auth" + "github.com/hashicorp/terraform-svchost/disco" "github.com/hashicorp/terraform/backend" "github.com/hashicorp/terraform/configs" "github.com/hashicorp/terraform/configs/configschema" + "github.com/hashicorp/terraform/httpclient" "github.com/hashicorp/terraform/providers" "github.com/hashicorp/terraform/state/remote" - "github.com/hashicorp/terraform/svchost" - "github.com/hashicorp/terraform/svchost/auth" - "github.com/hashicorp/terraform/svchost/disco" "github.com/hashicorp/terraform/terraform" "github.com/hashicorp/terraform/tfdiags" + "github.com/hashicorp/terraform/version" "github.com/mitchellh/cli" "github.com/zclconf/go-cty/cty" @@ -121,6 +123,7 @@ func testBackend(t *testing.T, obj cty.Value) (*Remote, func()) { b.client.PolicyChecks = mc.PolicyChecks b.client.Runs = mc.Runs b.client.StateVersions = mc.StateVersions + b.client.Variables = mc.Variables b.client.Workspaces = mc.Workspaces b.ShowDiagnostics = func(vals ...interface{}) { @@ -268,6 +271,7 @@ func testDisco(s *httptest.Server) *disco.Disco { "versions.v1": fmt.Sprintf("%s/v1/versions/", s.URL), } d := disco.NewWithCredentialsSource(credsSrc) + d.SetUserAgent(httpclient.TerraformUserAgent(version.String())) d.ForceHostServices(svchost.Hostname(defaultHostname), services) d.ForceHostServices(svchost.Hostname("localhost"), services) diff --git a/backend/testing.go b/backend/testing.go index 1fc081db5..4521fd753 100644 --- a/backend/testing.go +++ b/backend/testing.go @@ -150,9 +150,10 @@ func TestBackendStates(t *testing.T, b Backend) { Status: states.ObjectReady, SchemaVersion: 0, }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) // write a distinct known state to bar diff --git a/backend/unparsed_value.go b/backend/unparsed_value.go index 1b60161f0..65a05c823 100644 --- a/backend/unparsed_value.go +++ b/backend/unparsed_value.go @@ -7,6 +7,7 @@ import ( "github.com/hashicorp/terraform/configs" "github.com/hashicorp/terraform/terraform" "github.com/hashicorp/terraform/tfdiags" + "github.com/zclconf/go-cty/cty" ) // UnparsedVariableValue represents a variable value provided by the caller @@ -24,6 +25,22 @@ type UnparsedVariableValue interface { ParseVariableValue(mode configs.VariableParsingMode) (*terraform.InputValue, tfdiags.Diagnostics) } +// ParseVariableValues processes a map of unparsed variable values by +// correlating each one with the given variable declarations which should +// be from a root module. +// +// The map of unparsed variable values should include variables from all +// possible root module declarations sources such that it is as complete as +// it can possibly be for the current operation. If any declared variables +// are not included in the map, ParseVariableValues will either substitute +// a configured default value or produce an error. +// +// If this function returns without any errors in the diagnostics, the +// resulting input values map is guaranteed to be valid and ready to pass +// to terraform.NewContext. If the diagnostics contains errors, the returned +// InputValues may be incomplete but will include the subset of variables +// that were successfully processed, allowing for careful analysis of the +// partial result. 
func ParseVariableValues(vv map[string]UnparsedVariableValue, decls map[string]*configs.Variable) (terraform.InputValues, tfdiags.Diagnostics) { var diags tfdiags.Diagnostics ret := make(terraform.InputValues, len(vv)) @@ -63,12 +80,11 @@ func ParseVariableValues(vv map[string]UnparsedVariableValue, decls map[string]* // should migrate to using environment variables instead before // this becomes an error in a future major release. if seenUndeclaredInFile < 3 { - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagWarning, - Summary: "Value for undeclared variable", - Detail: fmt.Sprintf("The root module does not declare a variable named %q. To use this value, add a \"variable\" block to the configuration.\n\nUsing a variables file to set an undeclared variable is deprecated and will become an error in a future release. If you wish to provide certain \"global\" settings to all configurations in your organization, use TF_VAR_... environment variables to set these instead.", name), - Subject: val.SourceRange.ToHCL().Ptr(), - }) + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Warning, + "Value for undeclared variable", + fmt.Sprintf("The root module does not declare a variable named %q but a value was found in file %q. To use this value, add a \"variable\" block to the configuration.\n\nUsing a variables file to set an undeclared variable is deprecated and will become an error in a future release. If you wish to provide certain \"global\" settings to all configurations in your organization, use TF_VAR_... environment variables to set these instead.", name, val.SourceRange.Filename), + )) } seenUndeclaredInFile++ @@ -107,5 +123,40 @@ func ParseVariableValues(vv map[string]UnparsedVariableValue, decls map[string]* }) } + // By this point we should've gathered all of the required root module + // variables from one of the many possible sources. We'll now populate + // any we haven't gathered as their defaults and fail if any of the + // missing ones are required. + for name, vc := range decls { + if _, defined := ret[name]; defined { + continue + } + + if vc.Required() { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "No value for required variable", + Detail: fmt.Sprintf("The root module input variable %q is not set, and has no default value. Use a -var or -var-file command line argument to provide a value for this variable.", name), + Subject: vc.DeclRange.Ptr(), + }) + + // We'll include a placeholder value anyway, just so that our + // result is complete for any calling code that wants to cautiously + // analyze it for diagnostic purposes. Since our diagnostics now + // includes an error, normal processing will ignore this result. 
+			ret[name] = &terraform.InputValue{
+				Value:       cty.DynamicVal,
+				SourceType:  terraform.ValueFromConfig,
+				SourceRange: tfdiags.SourceRangeFromHCL(vc.DeclRange),
+			}
+		} else {
+			ret[name] = &terraform.InputValue{
+				Value:       vc.Default,
+				SourceType:  terraform.ValueFromConfig,
+				SourceRange: tfdiags.SourceRangeFromHCL(vc.DeclRange),
+			}
+		}
+	}
+
 	return ret, diags
 }
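`ParseVariableValues` is driven entirely through the `UnparsedVariableValue` interface documented earlier in this file, and the test below exercises it with a small `testUnparsedVariableValue` helper whose method bodies fall outside this excerpt. A rough sketch of what such an implementation looks like (hypothetical names; it assumes the `configs.VariableParsingMode.Parse` helper, and the real test helper may differ):

```go
// literalVariableValue is a hypothetical UnparsedVariableValue holding a
// raw string, e.g. one taken from a tfvars file.
type literalVariableValue string

func (v literalVariableValue) ParseVariableValue(mode configs.VariableParsingMode) (*terraform.InputValue, tfdiags.Diagnostics) {
	var diags tfdiags.Diagnostics

	// The parsing mode decides whether the string is the value itself
	// (literal) or an HCL expression to evaluate (hcl).
	val, hclDiags := mode.Parse("example", string(v))
	diags = diags.Append(hclDiags)

	return &terraform.InputValue{
		Value:      val,
		SourceType: terraform.ValueFromNamedFile,
		SourceRange: tfdiags.SourceRange{
			Filename: "example.tfvars",
			Start:    tfdiags.SourcePos{Line: 1, Column: 1, Byte: 0},
			End:      tfdiags.SourcePos{Line: 1, Column: 1, Byte: 0},
		},
	}, diags
}
```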
diff --git a/backend/unparsed_value_test.go b/backend/unparsed_value_test.go
index 6112d7c71..27fba6257 100644
--- a/backend/unparsed_value_test.go
+++ b/backend/unparsed_value_test.go
@@ -3,6 +3,8 @@ package backend
 import (
 	"testing"
 
+	"github.com/google/go-cmp/cmp"
+	"github.com/hashicorp/hcl/v2"
 	"github.com/zclconf/go-cty/cty"
 
 	"github.com/hashicorp/terraform/configs"
@@ -17,19 +19,53 @@ func TestParseVariableValuesUndeclared(t *testing.T) {
 		"undeclared2": testUnparsedVariableValue("2"),
 		"undeclared3": testUnparsedVariableValue("3"),
 		"undeclared4": testUnparsedVariableValue("4"),
+		"declared1":   testUnparsedVariableValue("5"),
+	}
+	decls := map[string]*configs.Variable{
+		"declared1": {
+			Name:        "declared1",
+			Type:        cty.String,
+			ParsingMode: configs.VariableParseLiteral,
+			DeclRange: hcl.Range{
+				Filename: "fake.tf",
+				Start:    hcl.Pos{Line: 2, Column: 1, Byte: 0},
+				End:      hcl.Pos{Line: 2, Column: 1, Byte: 0},
+			},
+		},
+		"missing1": {
+			Name:        "missing1",
+			Type:        cty.String,
+			ParsingMode: configs.VariableParseLiteral,
+			DeclRange: hcl.Range{
+				Filename: "fake.tf",
+				Start:    hcl.Pos{Line: 3, Column: 1, Byte: 0},
+				End:      hcl.Pos{Line: 3, Column: 1, Byte: 0},
+			},
+		},
+		"missing2": {
+			Name:        "missing2",
+			Type:        cty.String,
+			ParsingMode: configs.VariableParseLiteral,
+			Default:     cty.StringVal("default for missing2"),
+			DeclRange: hcl.Range{
+				Filename: "fake.tf",
+				Start:    hcl.Pos{Line: 4, Column: 1, Byte: 0},
+				End:      hcl.Pos{Line: 4, Column: 1, Byte: 0},
+			},
+		},
 	}
 
-	decls := map[string]*configs.Variable{}
-
-	_, diags := ParseVariableValues(vv, decls)
+	gotVals, diags := ParseVariableValues(vv, decls)
 	for _, diag := range diags {
 		t.Logf("%s: %s", diag.Description().Summary, diag.Description().Detail)
 	}
-	if got, want := len(diags), 4; got != want {
+	if got, want := len(diags), 5; got != want {
 		t.Fatalf("wrong number of diagnostics %d; want %d", got, want)
 	}
 
 	const undeclSingular = `Value for undeclared variable`
 	const undeclPlural = `Values for undeclared variables`
+	const missingRequired = `No value for required variable`
 
 	if got, want := diags[0].Description().Summary, undeclSingular; got != want {
 		t.Errorf("wrong summary for diagnostic 0\ngot: %s\nwant: %s", got, want)
@@ -43,6 +79,42 @@ func TestParseVariableValuesUndeclared(t *testing.T) {
 	if got, want := diags[3].Description().Summary, undeclPlural; got != want {
 		t.Errorf("wrong summary for diagnostic 3\ngot: %s\nwant: %s", got, want)
 	}
+	if got, want := diags[4].Description().Summary, missingRequired; got != want {
+		t.Errorf("wrong summary for diagnostic 4\ngot: %s\nwant: %s", got, want)
+	}
+
+	wantVals := terraform.InputValues{
+		"declared1": {
+			Value:      cty.StringVal("5"),
+			SourceType: terraform.ValueFromNamedFile,
+			SourceRange: tfdiags.SourceRange{
+				Filename: "fake.tfvars",
+				Start:    tfdiags.SourcePos{Line: 1, Column: 1, Byte: 0},
+				End:      tfdiags.SourcePos{Line: 1, Column: 1, Byte: 0},
+			},
+		},
+		"missing1": {
+			Value:      cty.DynamicVal,
+			SourceType: terraform.ValueFromConfig,
+			SourceRange: tfdiags.SourceRange{
+				Filename: "fake.tf",
+				Start:    tfdiags.SourcePos{Line: 3, Column: 1, Byte: 0},
+				End:      tfdiags.SourcePos{Line: 3, Column: 1, Byte: 0},
+			},
+		},
+		"missing2": {
+			Value:      cty.StringVal("default for missing2"),
+			SourceType: terraform.ValueFromConfig,
+			SourceRange: tfdiags.SourceRange{
+				Filename: "fake.tf",
+				Start:    tfdiags.SourcePos{Line: 4, Column: 1, Byte: 0},
+				End:      tfdiags.SourcePos{Line: 4, Column: 1, Byte: 0},
+			},
+		},
+	}
+	if diff := cmp.Diff(wantVals, gotVals, cmp.Comparer(cty.Value.RawEquals)); diff != "" {
+		t.Errorf("wrong result\n%s", diff)
+	}
 }
 
 type testUnparsedVariableValue string
diff --git a/builtin/providers/test/resource_import_other_test.go b/builtin/providers/test/resource_import_other_test.go
index 1965d9e66..9ace0525e 100644
--- a/builtin/providers/test/resource_import_other_test.go
+++ b/builtin/providers/test/resource_import_other_test.go
@@ -23,7 +23,6 @@ resource "test_resource_import_other" "foo" {
 			{
 				ImportState:  true,
 				ResourceName: "test_resource_import_other.foo",
-
 				ImportStateCheck: func(iss []*terraform.InstanceState) error {
 					if got, want := len(iss), 2; got != want {
 						return fmt.Errorf("wrong number of resources %d; want %d", got, want)
diff --git a/builtin/provisioners/puppet/resource_provisioner.go b/builtin/provisioners/puppet/resource_provisioner.go
index 35c7dd2a3..767b352af 100644
--- a/builtin/provisioners/puppet/resource_provisioner.go
+++ b/builtin/provisioners/puppet/resource_provisioner.go
@@ -128,7 +128,7 @@ func applyFn(ctx context.Context) error {
 
 	if p.OSType == "" {
 		switch connType := state.Ephemeral.ConnInfo["type"]; connType {
-		case "ssh", "":
+		case "ssh", "": // The default connection type is ssh, so if the type is empty assume ssh
 			p.OSType = "linux"
 		case "winrm":
 			p.OSType = "windows"
@@ -259,16 +259,30 @@ func (p *provisioner) generateAutosignToken(certname string) (string, error) {
 }
 
 func (p *provisioner) installPuppetAgentOpenSource() error {
+	task := "puppet_agent::install"
+
+	connType := p.instanceState.Ephemeral.ConnInfo["type"]
+	if connType == "" {
+		connType = "ssh"
+	}
+
+	agentConnInfo := map[string]string{
+		"type":     connType,
+		"host":     p.instanceState.Ephemeral.ConnInfo["host"],
+		"user":     p.instanceState.Ephemeral.ConnInfo["user"],
+		"password": p.instanceState.Ephemeral.ConnInfo["password"], // Required on Windows only
+	}
+
 	result, err := bolt.Task(
-		p.instanceState.Ephemeral.ConnInfo,
+		agentConnInfo,
 		p.BoltTimeout,
 		p.UseSudo,
-		"puppet_agent::install",
+		task,
 		nil,
 	)
 
 	if err != nil || result.Items[0].Status != "success" {
-		return fmt.Errorf("puppet_agent::install failed: %s\n%+v", err, result)
+		return fmt.Errorf("%s failed: %s\n%+v", task, err, result)
 	}
 
 	return nil
diff --git a/checkpoint.go b/checkpoint.go
index 4837e4763..5885bb345 100644
--- a/checkpoint.go
+++ b/checkpoint.go
@@ -7,6 +7,7 @@ import (
 
 	"github.com/hashicorp/go-checkpoint"
 	"github.com/hashicorp/terraform/command"
+	"github.com/hashicorp/terraform/command/cliconfig"
 )
 
 func init() {
@@ -17,7 +18,7 @@ var checkpointResult chan *checkpoint.CheckResponse
 
 // runCheckpoint runs a HashiCorp Checkpoint request. You can read about
 // Checkpoint here: https://github.com/hashicorp/go-checkpoint.
-func runCheckpoint(c *Config) {
+func runCheckpoint(c *cliconfig.Config) {
 	// If the user doesn't want checkpoint at all, then return.
 	if c.DisableCheckpoint {
 		log.Printf("[INFO] Checkpoint disabled. 
Not running.") @@ -25,7 +26,7 @@ func runCheckpoint(c *Config) { return } - configDir, err := ConfigDir() + configDir, err := cliconfig.ConfigDir() if err != nil { log.Printf("[ERR] Checkpoint setup error: %s", err) checkpointResult <- nil diff --git a/command/apply.go b/command/apply.go index 28ddec774..76e500bf3 100644 --- a/command/apply.go +++ b/command/apply.go @@ -248,11 +248,15 @@ Usage: terraform apply [options] [DIR-OR-PLAN] Options: + -auto-approve Skip interactive approval of plan before applying. + -backup=path Path to backup the existing state file before modifying. Defaults to the "-state-out" path with ".backup" extension. Set to "-" to disable backup. - -auto-approve Skip interactive approval of plan before applying. + -compact-warnings If Terraform produces any warnings that are not + accompanied by errors, show them in a more compact + form that includes only the summary messages. -lock=true Lock the state file when locking is supported. diff --git a/command/apply_destroy_test.go b/command/apply_destroy_test.go index 3cdf998f7..bdcfa1a07 100644 --- a/command/apply_destroy_test.go +++ b/command/apply_destroy_test.go @@ -29,7 +29,10 @@ func TestApply_destroy(t *testing.T) { AttrsJSON: []byte(`{"id":"bar"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, originalState) @@ -122,7 +125,10 @@ func TestApply_destroyLockedState(t *testing.T) { AttrsJSON: []byte(`{"id":"bar"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, originalState) @@ -189,12 +195,15 @@ func TestApply_destroyTargeted(t *testing.T) { Mode: addrs.ManagedResourceMode, Type: "test_instance", Name: "foo", - }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), + }.Instance(addrs.IntKey(0)).Absolute(addrs.RootModuleInstance), &states.ResourceInstanceObjectSrc{ AttrsJSON: []byte(`{"id":"i-ab123"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) s.SetResourceInstanceCurrent( addrs.Resource{ @@ -203,10 +212,14 @@ func TestApply_destroyTargeted(t *testing.T) { Name: "foo", }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), &states.ResourceInstanceObjectSrc{ - AttrsJSON: []byte(`{"id":"i-abc123"}`), - Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"i-abc123"}`), + Dependencies: []addrs.AbsResource{mustResourceAddr("test_instance.foo")}, + Status: states.ObjectReady, + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), ) }) statePath := testStateFile(t, originalState) diff --git a/command/apply_test.go b/command/apply_test.go index b949b6af7..f411d138d 100644 --- a/command/apply_test.go +++ b/command/apply_test.go @@ -177,7 +177,7 @@ func TestApply_parallelism(t *testing.T) { // to ApplyResourceChange, we need to use a number of separate providers // here. They will all have the same mock implementation function assigned // but crucially they will each have their own mutex. 
- providerFactories := map[string]providers.Factory{} + providerFactories := map[addrs.Provider]providers.Factory{} for i := 0; i < 10; i++ { name := fmt.Sprintf("test%d", i) provider := &terraform.MockProvider{} @@ -203,7 +203,7 @@ func TestApply_parallelism(t *testing.T) { NewState: cty.EmptyObjectVal, } } - providerFactories[name] = providers.FactoryFixed(provider) + providerFactories[addrs.NewLegacyProvider(name)] = providers.FactoryFixed(provider) } testingOverrides := &testingOverrides{ ProviderResolver: providers.ResolverFixed(providerFactories), @@ -423,7 +423,11 @@ func TestApply_input(t *testing.T) { test = false defer func() { test = true }() - // Set some default reader/writers for the inputs + // The configuration for this test includes a declaration of variable + // "foo" with no default, and we don't set it on the command line below, + // so the apply command will produce an interactive prompt for the + // value of var.foo. We'll answer "foo" here, and we expect the output + // value "result" to echo that back to us below. defaultInputReader = bytes.NewBufferString("foo\n") defaultInputWriter = new(bytes.Buffer) @@ -829,7 +833,10 @@ func TestApply_refresh(t *testing.T) { AttrsJSON: []byte(`{"ami":"bar"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, originalState) @@ -983,7 +990,10 @@ func TestApply_state(t *testing.T) { AttrsJSON: []byte(`{"ami":"foo"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, originalState) @@ -1347,7 +1357,10 @@ func TestApply_backup(t *testing.T) { AttrsJSON: []byte("{\n \"id\": \"bar\"\n }"), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, originalState) @@ -1648,7 +1661,10 @@ func applyFixturePlanFile(t *testing.T) string { Type: "test_instance", Name: "foo", }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), - ProviderAddr: addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + ProviderAddr: addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ChangeSrc: plans.ChangeSrc{ Action: plans.Create, Before: priorValRaw, diff --git a/command/cliconfig/cliconfig.go b/command/cliconfig/cliconfig.go index 5925b40b3..ea0bf1e57 100644 --- a/command/cliconfig/cliconfig.go +++ b/command/cliconfig/cliconfig.go @@ -17,7 +17,7 @@ import ( "github.com/hashicorp/hcl" - "github.com/hashicorp/terraform/svchost" + "github.com/hashicorp/terraform-svchost" "github.com/hashicorp/terraform/tfdiags" ) diff --git a/command/cliconfig/credentials.go b/command/cliconfig/credentials.go index d23ede4b6..907978407 100644 --- a/command/cliconfig/credentials.go +++ b/command/cliconfig/credentials.go @@ -12,10 +12,10 @@ import ( "github.com/zclconf/go-cty/cty" ctyjson "github.com/zclconf/go-cty/cty/json" + "github.com/hashicorp/terraform-svchost" + svcauth "github.com/hashicorp/terraform-svchost/auth" "github.com/hashicorp/terraform/configs/hcl2shim" pluginDiscovery 
"github.com/hashicorp/terraform/plugin/discovery" - "github.com/hashicorp/terraform/svchost" - svcauth "github.com/hashicorp/terraform/svchost/auth" ) // credentialsConfigFile returns the path for the special configuration file diff --git a/command/cliconfig/credentials_test.go b/command/cliconfig/credentials_test.go index 3cb0212f0..22a9e3f83 100644 --- a/command/cliconfig/credentials_test.go +++ b/command/cliconfig/credentials_test.go @@ -10,8 +10,8 @@ import ( "github.com/google/go-cmp/cmp" "github.com/zclconf/go-cty/cty" - "github.com/hashicorp/terraform/svchost" - svcauth "github.com/hashicorp/terraform/svchost/auth" + "github.com/hashicorp/terraform-svchost" + svcauth "github.com/hashicorp/terraform-svchost/auth" ) func TestCredentialsForHost(t *testing.T) { diff --git a/command/command_test.go b/command/command_test.go index 58b817c7a..1bb715b97 100644 --- a/command/command_test.go +++ b/command/command_test.go @@ -100,6 +100,12 @@ func tempDir(t *testing.T) string { if err != nil { t.Fatalf("err: %s", err) } + + dir, err = filepath.EvalSymlinks(dir) + if err != nil { + t.Fatal(err) + } + if err := os.RemoveAll(dir); err != nil { t.Fatalf("err: %s", err) } @@ -114,8 +120,8 @@ func testFixturePath(name string) string { func metaOverridesForProvider(p providers.Interface) *testingOverrides { return &testingOverrides{ ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test": providers.FactoryFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): providers.FactoryFixed(p), }, ), } @@ -124,8 +130,8 @@ func metaOverridesForProvider(p providers.Interface) *testingOverrides { func metaOverridesForProviderAndProvisioner(p providers.Interface, pr provisioners.Interface) *testingOverrides { return &testingOverrides{ ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test": providers.FactoryFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): providers.FactoryFixed(p), }, ), Provisioners: map[string]provisioners.Factory{ @@ -260,12 +266,15 @@ func testState() *states.State { // The weird whitespace here is reflective of how this would // get written out in a real state file, due to the indentation // of all of the containing wrapping objects and arrays. 
-			AttrsJSON: []byte("{\n \"id\": \"bar\"\n }"),
-			Status:    states.ObjectReady,
+			AttrsJSON:    []byte("{\n \"id\": \"bar\"\n }"),
+			Status:       states.ObjectReady,
+			Dependencies: []addrs.AbsResource{},
+			DependsOn:    []addrs.Referenceable{},
+		},
+		addrs.AbsProviderConfig{
+			Provider: addrs.NewLegacyProvider("test"),
+			Module:   addrs.RootModuleInstance,
 		},
-		addrs.ProviderConfig{
-			Type: "test",
-		}.Absolute(addrs.RootModuleInstance),
 	)
 	// DeepCopy is used here to ensure our synthetic state matches exactly
 	// with a state that will have been copied during the command
@@ -484,6 +493,11 @@ func testTempDir(t *testing.T) string {
 		t.Fatalf("err: %s", err)
 	}
 
+	d, err = filepath.EvalSymlinks(d)
+	if err != nil {
+		t.Fatal(err)
+	}
+
 	return d
 }
 
@@ -866,3 +880,11 @@ func normalizeJSON(t *testing.T, src []byte) string {
 	}
 	return buf.String()
 }
+
+func mustResourceAddr(s string) addrs.AbsResource {
+	addr, diags := addrs.ParseAbsResourceStr(s)
+	if diags.HasErrors() {
+		panic(diags.Err())
+	}
+	return addr
+}
diff --git a/command/console.go b/command/console.go
index ea596c783..81ba3a8a9 100644
--- a/command/console.go
+++ b/command/console.go
@@ -77,6 +77,7 @@ func (c *ConsoleCommand) Run(args []string) int {
 	opReq := c.Operation(b)
 	opReq.ConfigDir = configPath
 	opReq.ConfigLoader, err = c.initConfigLoader()
+	opReq.AllowUnsetVariables = true // we'll just evaluate them as unknown
 	if err != nil {
 		diags = diags.Append(err)
 		c.showDiagnostics(diags)
@@ -95,12 +96,8 @@ func (c *ConsoleCommand) Run(args []string) int {
 
 	// Get the context
 	ctx, _, ctxDiags := local.Context(opReq)
-	diags = diags.Append(ctxDiags)
-	if ctxDiags.HasErrors() {
-		c.showDiagnostics(diags)
-		return 1
-	}
 
+	// Creating the context can result in a lock, so ensure we release it
 	defer func() {
 		err := opReq.StateLocker.Unlock(nil)
 		if err != nil {
@@ -108,6 +105,12 @@ func (c *ConsoleCommand) Run(args []string) int {
 		}
 	}()
 
+	diags = diags.Append(ctxDiags)
+	if ctxDiags.HasErrors() {
+		c.showDiagnostics(diags)
+		return 1
+	}
+
 	// Setup the UI so we can output directly to stdout
 	ui := &cli.BasicUi{
 		Writer: wrappedstreams.Stdout(),
diff --git a/command/console_test.go b/command/console_test.go
index c30ade9e3..d0b66701f 100644
--- a/command/console_test.go
+++ b/command/console_test.go
@@ -84,3 +84,52 @@ func TestConsole_tfvars(t *testing.T) {
 		t.Fatalf("bad: %q", actual)
 	}
 }
+
+func TestConsole_unsetRequiredVars(t *testing.T) {
+	// This test verifies that it's possible to run "terraform console"
+	// without providing values for all required variables, without
+	// "terraform console" producing an interactive prompt for those variables
+	// or producing errors. Instead, it should allow evaluation in that
+	// partial context but see the unset variable values as unknown.
+
+	tmp, cwd := testCwd(t)
+	defer testFixCwd(t, tmp, cwd)
+
+	p := testProvider()
+	ui := new(cli.MockUi)
+	c := &ConsoleCommand{
+		Meta: Meta{
+			testingOverrides: metaOverridesForProvider(p),
+			Ui:               ui,
+		},
+	}
+
+	var output bytes.Buffer
+	defer testStdinPipe(t, strings.NewReader("var.foo\n"))()
+	outCloser := testStdoutCapture(t, &output)
+
+	args := []string{
+		// This test fixture includes variable "foo" {}, which we are
+		// intentionally not setting here. 
+ testFixturePath("apply-vars"), + } + code := c.Run(args) + outCloser() + + // Because we're running "terraform console" in piped input mode, we're + // expecting it to return a nonzero exit status here but the message + // must be the one indicating that it did attempt to evaluate var.foo and + // got an unknown value in return, rather than an error about var.foo + // not being set or a failure to prompt for it. + if code == 0 { + t.Fatalf("unexpected success\n%s", ui.OutputWriter.String()) + } + + // The error message should be the one console produces when it encounters + // an unknown value. + got := ui.ErrorWriter.String() + want := `Error: Result depends on values that cannot be determined` + if !strings.Contains(got, want) { + t.Fatalf("wrong output\ngot:\n%s\n\nwant string containing %q", got, want) + } +} diff --git a/command/debug_json2dot.go b/command/debug_json2dot.go deleted file mode 100644 index 5b05e1773..000000000 --- a/command/debug_json2dot.go +++ /dev/null @@ -1,66 +0,0 @@ -package command - -import ( - "fmt" - "os" - "strings" - - "github.com/hashicorp/terraform/dag" - "github.com/mitchellh/cli" -) - -// DebugJSON2DotCommand is a Command implementation that translates a json -// graph debug log to Dot format. -type DebugJSON2DotCommand struct { - Meta -} - -func (c *DebugJSON2DotCommand) Run(args []string) int { - args, err := c.Meta.process(args, true) - if err != nil { - return 1 - } - cmdFlags := c.Meta.extendedFlagSet("debug json2dot") - - if err := cmdFlags.Parse(args); err != nil { - return cli.RunResultHelp - } - - fileName := cmdFlags.Arg(0) - if fileName == "" { - return cli.RunResultHelp - } - - f, err := os.Open(fileName) - if err != nil { - c.Ui.Error(fmt.Sprintf(errInvalidLog, err)) - return cli.RunResultHelp - } - - dot, err := dag.JSON2Dot(f) - if err != nil { - c.Ui.Error(fmt.Sprintf(errInvalidLog, err)) - return cli.RunResultHelp - } - - c.Ui.Output(string(dot)) - return 0 -} - -func (c *DebugJSON2DotCommand) Help() string { - helpText := ` -Usage: terraform debug json2dot input.json - - Translate a graph debug file to dot format. - - This command takes a single json graph log file and converts it to a single - dot graph written to stdout. 
-` - return strings.TrimSpace(helpText) -} - -func (c *DebugJSON2DotCommand) Synopsis() string { - return "Convert json graph log to dot" -} - -const errInvalidLog = `Error parsing log file: %[1]s` diff --git a/command/debug_json2dot_test.go b/command/debug_json2dot_test.go deleted file mode 100644 index 3e72048ae..000000000 --- a/command/debug_json2dot_test.go +++ /dev/null @@ -1,53 +0,0 @@ -package command - -import ( - "io/ioutil" - "os" - "strings" - "testing" - - "github.com/hashicorp/terraform/dag" - "github.com/mitchellh/cli" -) - -func TestDebugJSON2Dot(t *testing.T) { - // create the graph JSON output - logFile, err := ioutil.TempFile(testingDir, "tf") - if err != nil { - t.Fatal(err) - } - defer os.Remove(logFile.Name()) - - var g dag.Graph - g.SetDebugWriter(logFile) - - g.Add(1) - g.Add(2) - g.Add(3) - g.Connect(dag.BasicEdge(1, 2)) - g.Connect(dag.BasicEdge(2, 3)) - - ui := new(cli.MockUi) - c := &DebugJSON2DotCommand{ - Meta: Meta{ - testingOverrides: metaOverridesForProvider(testProvider()), - Ui: ui, - }, - } - - args := []string{ - logFile.Name(), - } - if code := c.Run(args); code != 0 { - t.Fatalf("bad: \n%s", ui.ErrorWriter.String()) - } - - output := ui.OutputWriter.String() - if !strings.HasPrefix(output, "digraph {") { - t.Fatalf("doesn't look like digraph: %s", output) - } - - if !strings.Contains(output, `subgraph "root" {`) { - t.Fatalf("doesn't contains root subgraph: %s", output) - } -} diff --git a/command/format/diagnostic.go b/command/format/diagnostic.go index ed34ddbb9..6b8232650 100644 --- a/command/format/diagnostic.go +++ b/command/format/diagnostic.go @@ -177,6 +177,51 @@ func Diagnostic(diag tfdiags.Diagnostic, sources map[string][]byte, color *color return buf.String() } +// DiagnosticWarningsCompact is an alternative to Diagnostic for when all of +// the given diagnostics are warnings and we want to show them compactly, +// with only two lines per warning and excluding all of the detail information. +// +// The caller may optionally pre-process the given diagnostics with +// ConsolidateWarnings, in which case this function will recognize consolidated +// messages and include an indication that they are consolidated. +// +// Do not pass non-warning diagnostics to this function, or the result will +// be nonsense. 
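> Review note: for context, here is how a caller might drive the new `DiagnosticWarningsCompact` function, mirroring the test that follows; the import path and the `ConsolidateWarnings` pre-processing step are taken from this diff rather than verified against a released API:

```go
package main

import (
	"fmt"

	"github.com/mitchellh/colorstring"

	"github.com/hashicorp/terraform/command/format"
	"github.com/hashicorp/terraform/tfdiags"
)

func main() {
	var diags tfdiags.Diagnostics
	diags = diags.Append(tfdiags.SimpleWarning("example warning"))
	diags = diags.Append(tfdiags.SimpleWarning("example warning"))

	// Optional pre-processing: group warnings that share a summary.
	diags = diags.ConsolidateWarnings(1)

	color := &colorstring.Colorize{Colors: colorstring.DefaultColors, Reset: true}
	fmt.Print(format.DiagnosticWarningsCompact(diags, color))
}
```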
+func DiagnosticWarningsCompact(diags tfdiags.Diagnostics, color *colorstring.Colorize) string { + var b strings.Builder + b.WriteString(color.Color("[bold][yellow]Warnings:[reset]\n\n")) + for _, diag := range diags { + sources := tfdiags.WarningGroupSourceRanges(diag) + b.WriteString(fmt.Sprintf("- %s\n", diag.Description().Summary)) + if len(sources) > 0 { + mainSource := sources[0] + if mainSource.Subject != nil { + if len(sources) > 1 { + b.WriteString(fmt.Sprintf( + " on %s line %d (and %d more)\n", + mainSource.Subject.Filename, + mainSource.Subject.Start.Line, + len(sources)-1, + )) + } else { + b.WriteString(fmt.Sprintf( + " on %s line %d\n", + mainSource.Subject.Filename, + mainSource.Subject.Start.Line, + )) + } + } else if len(sources) > 1 { + b.WriteString(fmt.Sprintf( + " (%d occurrences of this warning)\n", + len(sources), + )) + } + } + } + + return b.String() +} + func parseRange(src []byte, rng hcl.Range) (*hcl.File, int) { filename := rng.Filename offset := rng.Start.Byte diff --git a/command/format/diagnostic_test.go b/command/format/diagnostic_test.go new file mode 100644 index 000000000..18812089c --- /dev/null +++ b/command/format/diagnostic_test.go @@ -0,0 +1,73 @@ +package format + +import ( + "testing" + + "github.com/google/go-cmp/cmp" + "github.com/hashicorp/hcl/v2" + "github.com/mitchellh/colorstring" + + "github.com/hashicorp/terraform/tfdiags" +) + +func TestDiagnosticWarningsCompact(t *testing.T) { + var diags tfdiags.Diagnostics + diags = diags.Append(tfdiags.SimpleWarning("foo")) + diags = diags.Append(tfdiags.SimpleWarning("foo")) + diags = diags.Append(tfdiags.SimpleWarning("bar")) + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagWarning, + Summary: "source foo", + Detail: "...", + Subject: &hcl.Range{ + Filename: "source.tf", + Start: hcl.Pos{Line: 2, Column: 1, Byte: 5}, + End: hcl.Pos{Line: 2, Column: 1, Byte: 5}, + }, + }) + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagWarning, + Summary: "source foo", + Detail: "...", + Subject: &hcl.Range{ + Filename: "source.tf", + Start: hcl.Pos{Line: 3, Column: 1, Byte: 7}, + End: hcl.Pos{Line: 3, Column: 1, Byte: 7}, + }, + }) + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagWarning, + Summary: "source bar", + Detail: "...", + Subject: &hcl.Range{ + Filename: "source2.tf", + Start: hcl.Pos{Line: 1, Column: 1, Byte: 1}, + End: hcl.Pos{Line: 1, Column: 1, Byte: 1}, + }, + }) + + // ConsolidateWarnings groups together the ones + // that have source location information and that + // have the same summary text. + diags = diags.ConsolidateWarnings(1) + + // A zero-value Colorize just passes all the formatting + // codes back to us, so we can test them literally.
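> Review note: the zero-value `Colorize` trick noted above deserves a line of its own. With a nil color table, `colorstring` writes unrecognized `[...]` codes back verbatim, so tests can assert on them literally. A tiny sketch:

```go
package main

import (
	"fmt"

	"github.com/mitchellh/colorstring"
)

func main() {
	c := &colorstring.Colorize{} // zero value: nil color table, no reset
	// Codes are passed through rather than translated to VT100 sequences.
	fmt.Println(c.Color("[bold][yellow]Warnings:[reset]"))
}
```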
+ got := DiagnosticWarningsCompact(diags, &colorstring.Colorize{}) + want := `[bold][yellow]Warnings:[reset] + +- foo +- foo +- bar +- source foo + on source.tf line 2 (and 1 more) +- source bar + on source2.tf line 1 +` + if got != want { + t.Errorf( + "wrong result\ngot:\n%s\n\nwant:\n%s\n\ndiff:\n%s", + got, want, cmp.Diff(want, got), + ) + } +} diff --git a/command/format/diff.go b/command/format/diff.go index c726f0ede..2f2258de0 100644 --- a/command/format/diff.go +++ b/command/format/diff.go @@ -70,22 +70,7 @@ func ResourceChange( } buf.WriteString(color.Color("[reset]\n")) - switch change.Action { - case plans.Create: - buf.WriteString(color.Color("[green] +[reset] ")) - case plans.Read: - buf.WriteString(color.Color("[cyan] <=[reset] ")) - case plans.Update: - buf.WriteString(color.Color("[yellow] ~[reset] ")) - case plans.DeleteThenCreate: - buf.WriteString(color.Color("[red]-[reset]/[green]+[reset] ")) - case plans.CreateThenDelete: - buf.WriteString(color.Color("[green]+[reset]/[red]-[reset] ")) - case plans.Delete: - buf.WriteString(color.Color("[red] -[reset] ")) - default: - buf.WriteString(color.Color("??? ")) - } + buf.WriteString(color.Color(DiffActionSymbol(change.Action)) + " ") switch addr.Resource.Resource.Mode { case addrs.ManagedResourceMode: @@ -502,7 +487,7 @@ func (p *blockBodyDiffPrinter) writeValue(val cty.Value, action plans.Action, in ty, err := ctyjson.ImpliedType(src) // check for the special case of "null", which decodes to nil, // and just allow it to be printed out directly - if err == nil && !ty.IsPrimitiveType() && val.AsString() != "null" { + if err == nil && !ty.IsPrimitiveType() && strings.TrimSpace(val.AsString()) != "null" { jv, err := ctyjson.Unmarshal(src, ty) if err == nil { p.buf.WriteString("jsonencode(") @@ -520,6 +505,21 @@ func (p *blockBodyDiffPrinter) writeValue(val cty.Value, action plans.Action, in } } } + + if strings.Contains(val.AsString(), "\n") { + // It's a multi-line string, so we want to use the multi-line + // rendering so it'll be readable. Rather than re-implement + // that here, we'll just re-use the multi-line string diff + // printer with no changes, which ends up producing the + // result we want here. + // The path argument is nil because we don't track path + // information into strings and we know that a string can't + // have any indices or attributes that might need to be marked + // as (requires replacement), which is what that argument is for. 
+ p.writeValueDiff(val, val, indent, nil) + break + } + fmt.Fprintf(p.buf, "%q", val.AsString()) case cty.Bool: if val.True() { @@ -1014,8 +1014,9 @@ func (p *blockBodyDiffPrinter) writeActionSymbol(action plans.Action) { } func (p *blockBodyDiffPrinter) pathForcesNewResource(path cty.Path) bool { - if !p.action.IsReplace() { - // "requiredReplace" only applies when the instance is being replaced + if !p.action.IsReplace() || p.requiredReplace.Empty() { + // "requiredReplace" only applies when the instance is being replaced, + // and we should only inspect that set if it is not empty return false } return p.requiredReplace.Has(path) @@ -1071,8 +1072,8 @@ func ctySequenceDiff(old, new []cty.Value) []*plans.Change { var oldI, newI, lcsI int for oldI < len(old) || newI < len(new) || lcsI < len(lcs) { for oldI < len(old) && (lcsI >= len(lcs) || !old[oldI].RawEquals(lcs[lcsI])) { - isObjectDiff := old[oldI].Type().IsObjectType() && (newI >= len(new) || new[newI].Type().IsObjectType()) - if isObjectDiff && newI < len(new) { + isObjectDiff := old[oldI].Type().IsObjectType() && newI < len(new) && new[newI].Type().IsObjectType() && (lcsI >= len(lcs) || !new[newI].RawEquals(lcs[lcsI])) + if isObjectDiff { ret = append(ret, &plans.Change{ Action: plans.Update, Before: old[oldI], @@ -1190,3 +1191,26 @@ func ctyNullBlockSetAsEmpty(in cty.Value) cty.Value { // sets, so our result here is always a set. return cty.SetValEmpty(in.Type().ElementType()) } + +// DiffActionSymbol returns a string that, once passed through a +// colorstring.Colorize, will produce a result that can be written +// to a terminal to produce a symbol made of three printable +// characters, possibly interspersed with VT100 color codes. +func DiffActionSymbol(action plans.Action) string { + switch action { + case plans.DeleteThenCreate: + return "[red]-[reset]/[green]+[reset]" + case plans.CreateThenDelete: + return "[green]+[reset]/[red]-[reset]" + case plans.Create: + return " [green]+[reset]" + case plans.Delete: + return " [red]-[reset]" + case plans.Read: + return " [cyan]<=[reset]" + case plans.Update: + return " [yellow]~[reset]" + default: + return " ?" 
+ } +} diff --git a/command/format/diff_test.go b/command/format/diff_test.go index afbed3c8d..ef731d5b2 100644 --- a/command/format/diff_test.go +++ b/command/format/diff_test.go @@ -50,6 +50,26 @@ func TestResourceChange_primitiveTypes(t *testing.T) { + resource "test_instance" "example" { + string = "null" } +`, + }, + "creation (null string with extra whitespace)": { + Action: plans.Create, + Mode: addrs.ManagedResourceMode, + Before: cty.NullVal(cty.EmptyObject), + After: cty.ObjectVal(map[string]cty.Value{ + "string": cty.StringVal("null "), + }), + Schema: &configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "string": {Type: cty.String, Optional: true}, + }, + }, + RequiredReplace: cty.NewPathSet(), + Tainted: false, + ExpectedOutput: ` # test_instance.example will be created + + resource "test_instance" "example" { + + string = "null " + } `, }, "deletion": { @@ -207,6 +227,37 @@ new line + new line EOT } +`, + }, + "addition of multi-line string field": { + Action: plans.Update, + Mode: addrs.ManagedResourceMode, + Before: cty.ObjectVal(map[string]cty.Value{ + "id": cty.StringVal("i-02ae66f368e8518a9"), + "more_lines": cty.NullVal(cty.String), + }), + After: cty.ObjectVal(map[string]cty.Value{ + "id": cty.UnknownVal(cty.String), + "more_lines": cty.StringVal(`original +new line +`), + }), + Schema: &configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "id": {Type: cty.String, Optional: true, Computed: true}, + "more_lines": {Type: cty.String, Optional: true}, + }, + }, + RequiredReplace: cty.NewPathSet(), + Tainted: false, + ExpectedOutput: ` # test_instance.example will be updated in-place + ~ resource "test_instance" "example" { + ~ id = "i-02ae66f368e8518a9" -> (known after apply) + + more_lines = <<~EOT + original + new line + EOT + } `, }, "force-new update of multi-line string field": { @@ -857,11 +908,11 @@ func TestResourceChange_JSON(t *testing.T) { Mode: addrs.ManagedResourceMode, Before: cty.ObjectVal(map[string]cty.Value{ "id": cty.StringVal("i-02ae66f368e8518a9"), - "json_field": cty.StringVal(`[{"one": "111"}, {"two": "222"}]`), + "json_field": cty.StringVal(`[{"one": "111"}, {"two": "222"}, {"three": "333"}]`), }), After: cty.ObjectVal(map[string]cty.Value{ "id": cty.UnknownVal(cty.String), - "json_field": cty.StringVal(`[{"one": "111"}]`), + "json_field": cty.StringVal(`[{"one": "111"}, {"three": "333"}]`), }), Schema: &configschema.Block{ Attributes: map[string]*configschema.Attribute{ @@ -882,6 +933,9 @@ func TestResourceChange_JSON(t *testing.T) { - { - two = "222" }, + { + three = "333" + }, ] ) } @@ -3103,7 +3157,10 @@ func runTestCases(t *testing.T, testCases map[string]testCase) { Type: "test_instance", Name: "example", }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), - ProviderAddr: addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + ProviderAddr: addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ChangeSrc: plans.ChangeSrc{ Action: tc.Action, Before: before, diff --git a/command/format/plan.go b/command/format/plan.go deleted file mode 100644 index ef129a928..000000000 --- a/command/format/plan.go +++ /dev/null @@ -1,306 +0,0 @@ -package format - -import ( - "bytes" - "fmt" - "log" - "sort" - "strings" - - "github.com/mitchellh/colorstring" - - "github.com/hashicorp/terraform/addrs" - "github.com/hashicorp/terraform/plans" - "github.com/hashicorp/terraform/states" - "github.com/hashicorp/terraform/terraform" -) - -// Plan is a 
representation of a plan optimized for display to -// an end-user, as opposed to terraform.Plan which is for internal use. -// -// DisplayPlan excludes implementation details that may otherwise appear -// in the main plan, such as destroy actions on data sources (which are -// there only to clean up the state). -type Plan struct { - Resources []*InstanceDiff -} - -// InstanceDiff is a representation of an instance diff optimized -// for display, in conjunction with DisplayPlan. -type InstanceDiff struct { - Addr *terraform.ResourceAddress - Action plans.Action - - // Attributes describes changes to the attributes of the instance. - // - // For destroy diffs this is always nil. - Attributes []*AttributeDiff - - Tainted bool - Deposed bool -} - -// AttributeDiff is a representation of an attribute diff optimized -// for display, in conjunction with DisplayInstanceDiff. -type AttributeDiff struct { - // Path is a dot-delimited traversal through possibly many levels of list and map structure, - // intended for display purposes only. - Path string - - Action plans.Action - - OldValue string - NewValue string - - NewComputed bool - Sensitive bool - ForcesNew bool -} - -// PlanStats gives summary counts for a Plan. -type PlanStats struct { - ToAdd, ToChange, ToDestroy int -} - -// NewPlan produces a display-oriented Plan from a terraform.Plan. -func NewPlan(changes *plans.Changes) *Plan { - log.Printf("[TRACE] NewPlan for %#v", changes) - ret := &Plan{} - if changes == nil { - // Nothing to do! - return ret - } - - for _, rc := range changes.Resources { - addr := rc.Addr - log.Printf("[TRACE] NewPlan found %s (%s)", addr, rc.Action) - dataSource := addr.Resource.Resource.Mode == addrs.DataResourceMode - - // We create "delete" actions for data resources so we can clean - // up their entries in state, but this is an implementation detail - // that users shouldn't see. - if dataSource && rc.Action == plans.Delete { - continue - } - - if rc.Action == plans.NoOp { - continue - } - - // For now we'll shim this to work with our old types. - // TODO: Update for the new plan types, ideally also switching over to - // a structural diff renderer instead of a flat renderer. - did := &InstanceDiff{ - Addr: terraform.NewLegacyResourceInstanceAddress(addr), - Action: rc.Action, - } - - if rc.DeposedKey != states.NotDeposed { - did.Deposed = true - } - - // Since this is just a temporary stub implementation on the way - // to us replacing this with the structural diff renderer, we currently - // don't include any attributes here. - // FIXME: Implement the structural diff renderer to replace this - // codepath altogether. - - ret.Resources = append(ret.Resources, did) - } - - // Sort the instance diffs by their addresses for display. - sort.Slice(ret.Resources, func(i, j int) bool { - iAddr := ret.Resources[i].Addr - jAddr := ret.Resources[j].Addr - return iAddr.Less(jAddr) - }) - - return ret -} - -// Format produces and returns a text representation of the receiving plan -// intended for display in a terminal. -// -// If color is not nil, it is used to colorize the output. -func (p *Plan) Format(color *colorstring.Colorize) string { - if p.Empty() { - return "This plan does nothing." - } - - if color == nil { - color = &colorstring.Colorize{ - Colors: colorstring.DefaultColors, - Reset: false, - } - } - - // Find the longest path length of all the paths that are changing, - // so we can align them all. 
- keyLen := 0 - for _, r := range p.Resources { - for _, attr := range r.Attributes { - key := attr.Path - - if len(key) > keyLen { - keyLen = len(key) - } - } - } - - buf := new(bytes.Buffer) - for _, r := range p.Resources { - formatPlanInstanceDiff(buf, r, keyLen, color) - } - - return strings.TrimSpace(buf.String()) -} - -// Stats returns statistics about the plan -func (p *Plan) Stats() PlanStats { - var ret PlanStats - for _, r := range p.Resources { - switch r.Action { - case plans.Create: - ret.ToAdd++ - case plans.Update: - ret.ToChange++ - case plans.DeleteThenCreate, plans.CreateThenDelete: - ret.ToAdd++ - ret.ToDestroy++ - case plans.Delete: - ret.ToDestroy++ - } - } - return ret -} - -// ActionCounts returns the number of diffs for each action type -func (p *Plan) ActionCounts() map[plans.Action]int { - ret := map[plans.Action]int{} - for _, r := range p.Resources { - ret[r.Action]++ - } - return ret -} - -// Empty returns true if there is at least one resource diff in the receiving plan. -func (p *Plan) Empty() bool { - return len(p.Resources) == 0 -} - -// DiffActionSymbol returns a string that, once passed through a -// colorstring.Colorize, will produce a result that can be written -// to a terminal to produce a symbol made of three printable -// characters, possibly interspersed with VT100 color codes. -func DiffActionSymbol(action plans.Action) string { - switch action { - case plans.DeleteThenCreate: - return "[red]-[reset]/[green]+[reset]" - case plans.CreateThenDelete: - return "[green]+[reset]/[red]-[reset]" - case plans.Create: - return " [green]+[reset]" - case plans.Delete: - return " [red]-[reset]" - case plans.Read: - return " [cyan]<=[reset]" - case plans.Update: - return " [yellow]~[reset]" - default: - return " ?" - } -} - -// formatPlanInstanceDiff writes the text representation of the given instance diff -// to the given buffer, using the given colorizer. -func formatPlanInstanceDiff(buf *bytes.Buffer, r *InstanceDiff, keyLen int, colorizer *colorstring.Colorize) { - addrStr := r.Addr.String() - - // Determine the color for the text (green for adding, yellow - // for change, red for delete), and symbol, and output the - // resource header. 
- color := "yellow" - symbol := DiffActionSymbol(r.Action) - oldValues := true - switch r.Action { - case plans.DeleteThenCreate, plans.CreateThenDelete: - color = "yellow" - case plans.Create: - color = "green" - oldValues = false - case plans.Delete: - color = "red" - case plans.Read: - color = "cyan" - oldValues = false - } - - var extraStr string - if r.Tainted { - extraStr = extraStr + " (tainted)" - } - if r.Deposed { - extraStr = extraStr + " (deposed)" - } - if r.Action.IsReplace() { - extraStr = extraStr + colorizer.Color(" [red][bold](new resource required)") - } - - buf.WriteString( - colorizer.Color(fmt.Sprintf( - "[%s]%s [%s]%s%s\n", - color, symbol, color, addrStr, extraStr, - )), - ) - - for _, attr := range r.Attributes { - - v := attr.NewValue - var dispV string - switch { - case v == "" && attr.NewComputed: - dispV = "" - case attr.Sensitive: - dispV = "" - default: - dispV = fmt.Sprintf("%q", v) - } - - updateMsg := "" - switch { - case attr.ForcesNew && r.Action.IsReplace(): - updateMsg = colorizer.Color(" [red](forces new resource)") - case attr.Sensitive && oldValues: - updateMsg = colorizer.Color(" [yellow](attribute changed)") - } - - if oldValues { - u := attr.OldValue - var dispU string - switch { - case attr.Sensitive: - dispU = "" - default: - dispU = fmt.Sprintf("%q", u) - } - buf.WriteString(fmt.Sprintf( - " %s:%s %s => %s%s\n", - attr.Path, - strings.Repeat(" ", keyLen-len(attr.Path)), - dispU, dispV, - updateMsg, - )) - } else { - buf.WriteString(fmt.Sprintf( - " %s:%s %s%s\n", - attr.Path, - strings.Repeat(" ", keyLen-len(attr.Path)), - dispV, - updateMsg, - )) - } - } - - // Write the reset color so we don't bleed color into later text - buf.WriteString(colorizer.Color("[reset]\n")) -} diff --git a/command/format/state.go b/command/format/state.go index be1ea24de..31616c9cf 100644 --- a/command/format/state.go +++ b/command/format/state.go @@ -139,13 +139,14 @@ func formatStateModule(p blockBodyDiffPrinter, m *states.Module, schemas *terraf } var schema *configschema.Block - provider := m.Resources[key].ProviderConfig.ProviderConfig.StringCompact() + + provider := m.Resources[key].ProviderConfig.Provider if _, exists := schemas.Providers[provider]; !exists { // This should never happen in normal use because we should've // loaded all of the schemas and checked things prior to this // point. We can't return errors here, but since this is UI code // we will try to do _something_ reasonable. 
- p.buf.WriteString(fmt.Sprintf("# missing schema for provider %q\n\n", provider)) + p.buf.WriteString(fmt.Sprintf("# missing schema for provider %q\n\n", provider.LegacyString())) continue } diff --git a/command/format/state_test.go b/command/format/state_test.go index 2b00da9b8..40c97b124 100644 --- a/command/format/state_test.go +++ b/command/format/state_test.go @@ -138,8 +138,8 @@ func testProviderSchema() *terraform.ProviderSchema { func testSchemas() *terraform.Schemas { provider := testProvider() return &terraform.Schemas{ - Providers: map[string]*terraform.ProviderSchema{ - "test": provider.GetSchemaReturn, + Providers: map[addrs.Provider]*terraform.ProviderSchema{ + addrs.NewLegacyProvider("test"): provider.GetSchemaReturn, }, } } @@ -243,9 +243,10 @@ func basicState(t *testing.T) *states.State { SchemaVersion: 1, AttrsJSON: []byte(`{"woozles":"confuzles"}`), }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) rootModule.SetResourceInstanceCurrent( addrs.Resource{ @@ -258,9 +259,10 @@ func basicState(t *testing.T) *states.State { SchemaVersion: 1, AttrsJSON: []byte(`{"compute":"sure"}`), }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) return state } @@ -293,9 +295,10 @@ func stateWithMoreOutputs(t *testing.T) *states.State { SchemaVersion: 1, AttrsJSON: []byte(`{"woozles":"confuzles"}`), }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) return state } @@ -319,9 +322,10 @@ func nestedState(t *testing.T) *states.State { SchemaVersion: 1, AttrsJSON: []byte(`{"woozles":"confuzles","nested": [{"value": "42"}]}`), }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) return state } @@ -341,9 +345,10 @@ func deposedState(t *testing.T) *states.State { SchemaVersion: 1, AttrsJSON: []byte(`{"woozles":"confuzles","nested": [{"value": "42"}]}`), }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) return state } @@ -369,9 +374,10 @@ func onlyDeposedState(t *testing.T) *states.State { SchemaVersion: 1, AttrsJSON: []byte(`{"woozles":"confuzles","nested": [{"value": "42"}]}`), }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) rootModule.SetResourceInstanceDeposed( addrs.Resource{ @@ -385,9 +391,10 @@ func onlyDeposedState(t *testing.T) *states.State { SchemaVersion: 1, AttrsJSON: []byte(`{"woozles":"confuzles","nested": [{"value": "42"}]}`), }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) return state } diff --git a/command/graph.go b/command/graph.go index 113a41527..440dd74da 100644 --- a/command/graph.go +++ b/command/graph.go @@ -96,6 +96,7 @@ func (c *GraphCommand) Run(args 
[]string) int { opReq.ConfigDir = configPath opReq.ConfigLoader, err = c.initConfigLoader() opReq.PlanFile = planFile + opReq.AllowUnsetVariables = true if err != nil { diags = diags.Append(err) c.showDiagnostics(diags) @@ -110,13 +111,6 @@ func (c *GraphCommand) Run(args []string) int { return 1 } - defer func() { - err := opReq.StateLocker.Unlock(nil) - if err != nil { - c.Ui.Error(err.Error()) - } - }() - // Determine the graph type graphType := terraform.GraphTypePlan if plan != nil { @@ -190,12 +184,11 @@ Options: -draw-cycles Highlight any cycles in the graph with colored edges. This helps when diagnosing cycle errors. - -module-depth=n Specifies the depth of modules to show in the output. - By default this is -1, which will expand all. - -type=plan Type of graph to output. Can be: plan, plan-destroy, apply, validate, input, refresh. + -module-depth=n (deprecated) In prior versions of Terraform, specified the + depth of modules to show in the output. ` return strings.TrimSpace(helpText) } diff --git a/command/graph_test.go b/command/graph_test.go index b8712e603..a3b4e6a14 100644 --- a/command/graph_test.go +++ b/command/graph_test.go @@ -33,7 +33,7 @@ func TestGraph(t *testing.T) { } output := ui.OutputWriter.String() - if !strings.Contains(output, "provider.test") { + if !strings.Contains(output, `provider["registry.terraform.io/-/test"]`) { t.Fatalf("doesn't look like digraph: %s", output) } } @@ -80,7 +80,7 @@ func TestGraph_noArgs(t *testing.T) { } output := ui.OutputWriter.String() - if !strings.Contains(output, "provider.test") { + if !strings.Contains(output, `provider["registry.terraform.io/-/test"]`) { t.Fatalf("doesn't look like digraph: %s", output) } } @@ -125,7 +125,10 @@ func TestGraph_plan(t *testing.T) { Before: plans.DynamicValue(`{}`), After: plans.DynamicValue(`null`), }, - ProviderAddr: addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + ProviderAddr: addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, }) emptyConfig, err := plans.NewDynamicValue(cty.EmptyObjectVal, cty.EmptyObject) if err != nil { @@ -158,7 +161,7 @@ func TestGraph_plan(t *testing.T) { } output := ui.OutputWriter.String() - if !strings.Contains(output, "provider.test") { + if !strings.Contains(output, `provider["registry.terraform.io/-/test"]`) { t.Fatalf("doesn't look like digraph: %s", output) } } diff --git a/command/import.go b/command/import.go index 5cdf446af..5996040d8 100644 --- a/command/import.go +++ b/command/import.go @@ -43,7 +43,6 @@ func (c *ImportCommand) Run(args []string) int { cmdFlags.StringVar(&c.Meta.stateOutPath, "state-out", "", "path") cmdFlags.StringVar(&c.Meta.backupPath, "backup", "", "path") cmdFlags.StringVar(&configPath, "config", pwd, "path") - cmdFlags.StringVar(&c.Meta.provider, "provider", "", "provider") cmdFlags.BoolVar(&c.Meta.stateLock, "lock", true, "lock state") cmdFlags.DurationVar(&c.Meta.stateLockTimeout, "lock-timeout", 0, "lock timeout") cmdFlags.BoolVar(&c.Meta.allowMissingConfig, "allow-missing-config", false, "allow missing config") @@ -156,35 +155,6 @@ func (c *ImportCommand) Run(args []string) int { return 1 } - // Also parse the user-provided provider address, if any. 
- var providerAddr addrs.AbsProviderConfig - if c.Meta.provider != "" { - traversal, travDiags := hclsyntax.ParseTraversalAbs([]byte(c.Meta.provider), `-provider=...`, hcl.Pos{Line: 1, Column: 1}) - diags = diags.Append(travDiags) - if travDiags.HasErrors() { - c.showDiagnostics(diags) - c.Ui.Info(importCommandInvalidAddressReference) - return 1 - } - relAddr, addrDiags := addrs.ParseProviderConfigCompact(traversal) - diags = diags.Append(addrDiags) - if addrDiags.HasErrors() { - c.showDiagnostics(diags) - return 1 - } - providerAddr = relAddr.Absolute(addrs.RootModuleInstance) - } else { - // Use a default address inferred from the resource type. - // We assume the same module as the resource address here, which - // may get resolved to an inherited provider when we construct the - // import graph inside ctx.Import, called below. - if rc != nil && rc.ProviderConfigRef != nil { - providerAddr = rc.ProviderConfigAddr().Absolute(addr.Module) - } else { - providerAddr = resourceRelAddr.DefaultProviderConfig().Absolute(addr.Module) - } - } - // Check for user-supplied plugin path if c.pluginPath, err = c.loadPluginPath(); err != nil { c.Ui.Error(fmt.Sprintf("Error loading plugin path: %s", err)) @@ -233,13 +203,8 @@ func (c *ImportCommand) Run(args []string) int { // Get the context ctx, state, ctxDiags := local.Context(opReq) - diags = diags.Append(ctxDiags) - if ctxDiags.HasErrors() { - c.showDiagnostics(diags) - return 1 - } - // Make sure to unlock the state + // Creating the context can result in a lock, so ensure we release it defer func() { err := opReq.StateLocker.Unlock(nil) if err != nil { @@ -247,15 +212,20 @@ func (c *ImportCommand) Run(args []string) int { } }() + diags = diags.Append(ctxDiags) + if ctxDiags.HasErrors() { + c.showDiagnostics(diags) + return 1 + } + // Perform the import. Note that as you can see it is possible for this // API to import more than one resource at once. For now, we only allow // one while we stabilize this feature. newState, importDiags := ctx.Import(&terraform.ImportOpts{ Targets: []*terraform.ImportTarget{ &terraform.ImportTarget{ - Addr: addr, - ID: args[1], - ProviderAddr: providerAddr, + Addr: addr, + ID: args[1], }, }, }) @@ -340,11 +310,6 @@ Options: -no-color If specified, output won't contain any color. - -provider=provider Deprecated: Override the provider configuration to use - when importing the object. By default, Terraform uses the - provider specified in the configuration for the target - resource, and that is the best behavior in most cases. - -state=PATH Path to the source state file. 
Defaults to the configured backend, or "terraform.tfstate" diff --git a/command/import_test.go b/command/import_test.go index 504c1d3e9..3171091fc 100644 --- a/command/import_test.go +++ b/command/import_test.go @@ -258,6 +258,75 @@ func TestImport_remoteState(t *testing.T) { testStateOutput(t, statePath, testImportStr) } +// early failure on import should not leave stale lock +func TestImport_initializationErrorShouldUnlock(t *testing.T) { + td := tempDir(t) + copy.CopyDir(testFixturePath("import-provider-remote-state"), td) + defer os.RemoveAll(td) + defer testChdir(t, td)() + + statePath := "imported.tfstate" + + // init our backend + ui := cli.NewMockUi() + m := Meta{ + testingOverrides: metaOverridesForProvider(testProvider()), + Ui: ui, + } + + ic := &InitCommand{ + Meta: m, + providerInstaller: &mockProviderInstaller{ + Providers: map[string][]string{ + "test": []string{"1.2.3"}, + }, + + Dir: m.pluginDir(), + }, + } + + // (Using log here rather than t.Log so that these messages interleave with other trace logs) + log.Print("[TRACE] TestImport_initializationErrorShouldUnlock running: terraform init") + if code := ic.Run([]string{}); code != 0 { + t.Fatalf("init failed\n%s", ui.ErrorWriter) + } + + // overwrite the config with one including a resource from an invalid provider + copy.CopyFile(filepath.Join(testFixturePath("import-provider-invalid"), "main.tf"), filepath.Join(td, "main.tf")) + + p := testProvider() + ui = new(cli.MockUi) + c := &ImportCommand{ + Meta: Meta{ + testingOverrides: metaOverridesForProvider(p), + Ui: ui, + }, + } + + args := []string{ + "unknown_instance.baz", + "bar", + } + log.Printf("[TRACE] TestImport_initializationErrorShouldUnlock running: terraform import %s %s", args[0], args[1]) + + // this should fail + if code := c.Run(args); code != 1 { + fmt.Println(ui.OutputWriter) + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } + + // specifically, it should fail due to a missing provider + msg := ui.ErrorWriter.String() + if want := "Could not satisfy plugin requirements"; !strings.Contains(msg, want) { + t.Errorf("incorrect message\nwant substring: %s\ngot:\n%s", want, msg) + } + + // verify that the local state was unlocked after initialization error + if _, err := os.Stat(filepath.Join(td, fmt.Sprintf(".%s.lock.info", statePath))); !os.IsNotExist(err) { + t.Fatal("state left locked after import") + } +} + func TestImport_providerConfigWithVar(t *testing.T) { defer testChdir(t, testFixturePath("import-provider-var"))() @@ -332,6 +401,63 @@ func TestImport_providerConfigWithVar(t *testing.T) { testStateOutput(t, statePath, testImportStr) } +func TestImport_providerConfigWithDataSource(t *testing.T) { + defer testChdir(t, testFixturePath("import-provider-datasource"))() + + statePath := testTempFile(t) + + p := testProvider() + ui := new(cli.MockUi) + c := &ImportCommand{ + Meta: Meta{ + testingOverrides: metaOverridesForProvider(p), + Ui: ui, + }, + } + + p.ImportResourceStateFn = nil + p.ImportResourceStateResponse = providers.ImportResourceStateResponse{ + ImportedResources: []providers.ImportedResource{ + { + TypeName: "test_instance", + State: cty.ObjectVal(map[string]cty.Value{ + "id": cty.StringVal("yay"), + }), + }, + }, + } + p.GetSchemaReturn = &terraform.ProviderSchema{ + Provider: &configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "foo": {Type: cty.String, Optional: true}, + }, + }, + ResourceTypes: map[string]*configschema.Block{ + "test_instance": { + Attributes: map[string]*configschema.Attribute{ + "id": 
{Type: cty.String, Optional: true, Computed: true}, + }, + }, + }, + DataSources: map[string]*configschema.Block{ + "test_data": { + Attributes: map[string]*configschema.Attribute{ + "id": {Type: cty.String, Optional: true, Computed: true}, + }, + }, + }, + } + + args := []string{ + "-state", statePath, + "test_instance.foo", + "bar", + } + if code := c.Run(args); code != 1 { + t.Fatalf("bad, wanted error: %d\n\n%s", code, ui.ErrorWriter.String()) + } +} + func TestImport_providerConfigWithVarDefault(t *testing.T) { defer testChdir(t, testFixturePath("import-provider-var-default"))() @@ -479,156 +605,6 @@ func TestImport_providerConfigWithVarFile(t *testing.T) { testStateOutput(t, statePath, testImportStr) } -func TestImport_customProvider(t *testing.T) { - defer testChdir(t, testFixturePath("import-provider-aliased"))() - - statePath := testTempFile(t) - - p := testProvider() - ui := new(cli.MockUi) - c := &ImportCommand{ - Meta: Meta{ - testingOverrides: metaOverridesForProvider(p), - Ui: ui, - }, - } - - p.ImportResourceStateFn = nil - p.ImportResourceStateResponse = providers.ImportResourceStateResponse{ - ImportedResources: []providers.ImportedResource{ - { - TypeName: "test_instance", - State: cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("yay"), - }), - }, - }, - } - p.GetSchemaReturn = &terraform.ProviderSchema{ - Provider: &configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "foo": {Type: cty.String, Optional: true}, - }, - }, - ResourceTypes: map[string]*configschema.Block{ - "test_instance": { - Attributes: map[string]*configschema.Attribute{ - "id": {Type: cty.String, Optional: true, Computed: true}, - }, - }, - }, - } - - args := []string{ - "-provider", "test.alias", - "-state", statePath, - "test_instance.foo", - "bar", - } - if code := c.Run(args); code != 0 { - t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) - } - - if !p.ImportResourceStateCalled { - t.Fatal("ImportResourceState should be called") - } - - testStateOutput(t, statePath, testImportCustomProviderStr) -} - -// This tests behavior when the provider name does not match the implied -// provider name -func TestImport_providerNameMismatch(t *testing.T) { - defer testChdir(t, testFixturePath("import-provider-mismatch"))() - - statePath := testTempFile(t) - - p := testProvider() - ui := new(cli.MockUi) - c := &ImportCommand{ - Meta: Meta{ - testingOverrides: &testingOverrides{ - ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test-beta": providers.FactoryFixed(p), - }, - ), - }, - Ui: ui, - }, - } - - configured := false - p.ConfigureNewFn = func(req providers.ConfigureRequest) providers.ConfigureResponse { - configured = true - - cfg := req.Config - if !cfg.Type().HasAttribute("foo") { - return providers.ConfigureResponse{ - Diagnostics: tfdiags.Diagnostics{}.Append(fmt.Errorf("configuration has no foo argument")), - } - } - if got, want := cfg.GetAttr("foo"), cty.StringVal("baz"); !want.RawEquals(got) { - return providers.ConfigureResponse{ - Diagnostics: tfdiags.Diagnostics{}.Append(fmt.Errorf("foo argument is %#v, but want %#v", got, want)), - } - } - - return providers.ConfigureResponse{} - } - - p.ImportResourceStateFn = nil - p.ImportResourceStateResponse = providers.ImportResourceStateResponse{ - ImportedResources: []providers.ImportedResource{ - { - TypeName: "test_instance", - State: cty.ObjectVal(map[string]cty.Value{ - "id": cty.StringVal("yay"), - }), - }, - }, - } - p.GetSchemaReturn = &terraform.ProviderSchema{ - Provider: 
&configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "foo": {Type: cty.String, Optional: true}, - }, - }, - ResourceTypes: map[string]*configschema.Block{ - "test_instance": { - Attributes: map[string]*configschema.Attribute{ - "id": {Type: cty.String, Optional: true, Computed: true}, - }, - }, - }, - } - - args := []string{ - "-provider", "test-beta", - "-state", statePath, - "test_instance.foo", - "bar", - } - - if code := c.Run(args); code != 0 { - t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) - } - - // Verify that the test-beta provider was configured - if !configured { - t.Fatal("Configure should be called") - } - - if !p.ImportResourceStateCalled { - t.Fatal("ImportResourceState (provider 'test-beta') should be called") - } - - if !p.ReadResourceCalled { - t.Fatal("ReadResource (provider 'test-beta' should be called") - } - - testStateOutput(t, statePath, testImportProviderMismatchStr) -} func TestImport_allowMissingResourceConfig(t *testing.T) { defer testChdir(t, testFixturePath("import-missing-resource-config"))() @@ -890,17 +866,18 @@ func TestImport_pluginDir(t *testing.T) { // Now we need to go through some plugin init. // This discovers our fake plugin and writes the lock file. + initUi := new(cli.MockUi) initCmd := &InitCommand{ Meta: Meta{ pluginPath: []string{"./plugins"}, - Ui: cli.NewMockUi(), + Ui: initUi, }, providerInstaller: &discovery.ProviderInstaller{ PluginProtocolVersion: discovery.PluginInstallProtocolVersion, }, } if code := initCmd.Run(nil); code != 0 { - t.Fatal(initCmd.Meta.Ui.(*cli.MockUi).ErrorWriter.String()) + t.Fatal(initUi.ErrorWriter.String()) } args := []string{ @@ -930,17 +907,17 @@ func TestImport_pluginDir(t *testing.T) { const testImportStr = ` test_instance.foo: ID = yay - provider = provider.test + provider = provider["registry.terraform.io/-/test"] ` const testImportCustomProviderStr = ` test_instance.foo: ID = yay - provider = provider.test.alias + provider = provider["registry.terraform.io/-/test"].alias ` const testImportProviderMismatchStr = ` test_instance.foo: ID = yay - provider = provider.test-beta + provider = provider["registry.terraform.io/-/test-beta"] ` diff --git a/command/init.go b/command/init.go index 3f9408be4..3db440f10 100644 --- a/command/init.go +++ b/command/init.go @@ -25,6 +25,7 @@ import ( "github.com/hashicorp/terraform/states" "github.com/hashicorp/terraform/terraform" "github.com/hashicorp/terraform/tfdiags" + "github.com/hashicorp/terraform/version" ) // InitCommand is a Command implementation that takes a Terraform @@ -295,6 +296,15 @@ func (c *InitCommand) Run(args []string) int { } back = be } + } else { + // load the previously-stored backend config + be, backendDiags := c.Meta.backendFromState() + diags = diags.Append(backendDiags) + if backendDiags.HasErrors() { + c.showDiagnostics(diags) + return 1 + } + back = be } if back == nil { @@ -496,7 +506,7 @@ func (c *InitCommand) getProviders(earlyConfig *earlyconfig.Config, state *state configReqs := configDeps.AllPluginRequirements() // FIXME: This is weird because ConfigTreeDependencies was written before // we switched over to using earlyConfig as the main source of dependencies. - // In future we should clean this up to be a more reasoable API. + // In future we should clean this up to be a more reasonable API. 
stateReqs := terraform.ConfigTreeDependencies(nil, state).AllPluginRequirements() requirements := configReqs.Merge(stateReqs) @@ -517,7 +527,7 @@ func (c *InitCommand) getProviders(earlyConfig *earlyconfig.Config, state *state } for provider, reqd := range missing { - pty := addrs.ProviderType{Name: provider} + pty := addrs.NewLegacyProvider(provider) _, providerDiags, err := c.providerInstaller.Get(pty, reqd.Versions) diags = diags.Append(providerDiags) @@ -559,7 +569,7 @@ func (c *InitCommand) getProviders(earlyConfig *earlyconfig.Config, state *state // Generic version incompatible msg c.Ui.Error(fmt.Sprintf(errProviderIncompatible, provider, constraint)) case err == discovery.ErrorSignatureVerification: - c.Ui.Error(fmt.Sprintf(errSignatureVerification, provider)) + c.Ui.Error(fmt.Sprintf(errSignatureVerification, provider, version.SemVer)) case err == discovery.ErrorChecksumVerification, err == discovery.ErrorMissingChecksumVerification: c.Ui.Error(fmt.Sprintf(errChecksumVerification, provider)) @@ -597,7 +607,7 @@ func (c *InitCommand) getProviders(earlyConfig *earlyconfig.Config, state *state available = c.providerPluginSet() // re-discover to see newly-installed plugins // internal providers were already filtered out, since we don't need to get them. - chosen := choosePlugins(available, nil, requirements) + chosen := chooseProviders(available, nil, requirements) digests := map[string][]byte{} for name, meta := range chosen { @@ -1011,9 +1021,12 @@ were changed after this version was released to the Registry. ` const errSignatureVerification = ` -[reset][bold][red]Error verifying GPG signature for provider %[1]q[reset][red] -Terraform was unable to verify the GPG signature of the downloaded provider -files using the keys downloaded from the Terraform Registry. This may mean that -the publisher of the provider removed the key it was signed with, or that the -distributed files were changed after this version was released. +[reset][bold][red]Error:[reset][bold] Untrusted signing key for provider %[1]q[reset] + +This provider package is not signed with the HashiCorp signing key, and is +therefore incompatible with Terraform v%[2]s. + +A later version of Terraform may have introduced other signing keys that would +accept this provider. Alternatively, an earlier version of this provider may +be compatible with Terraform v%[2]s. 
` diff --git a/command/init_test.go b/command/init_test.go index 4dff16680..9cbdeb158 100644 --- a/command/init_test.go +++ b/command/init_test.go @@ -947,6 +947,56 @@ func TestInit_rcProviders(t *testing.T) { } } +func TestInit_providerSource(t *testing.T) { + // Create a temporary working directory that is empty + td := tempDir(t) + + configDirName := "init-required-providers" + copy.CopyDir(testFixturePath(configDirName), filepath.Join(td, configDirName)) + defer os.RemoveAll(td) + defer testChdir(t, td)() + + ui := new(cli.MockUi) + m := Meta{ + testingOverrides: metaOverridesForProvider(testProvider()), + Ui: ui, + } + + c := &InitCommand{ + Meta: m, + providerInstaller: &mockProviderInstaller{}, + } + + // make our plugin paths + if err := os.MkdirAll(c.pluginDir(), 0755); err != nil { + t.Fatal(err) + } + if err := os.MkdirAll(DefaultPluginVendorDir, 0755); err != nil { + t.Fatal(err) + } + + // add some dummy providers + // the auto plugin directory + testPath := filepath.Join(c.pluginDir(), "terraform-provider-test_v1.2.3_x4") + if err := ioutil.WriteFile(testPath, []byte("test bin"), 0755); err != nil { + t.Fatal(err) + } + // the vendor path + sourcePath := filepath.Join(DefaultPluginVendorDir, "terraform-provider-source_v1.2.3_x4") + if err := ioutil.WriteFile(sourcePath, []byte("test bin"), 0755); err != nil { + t.Fatal(err) + } + + args := []string{configDirName} + + if code := c.Run(args); code != 0 { + t.Fatalf("bad: \n%s", ui.ErrorWriter.String()) + } + if strings.Contains(ui.OutputWriter.String(), "Terraform has initialized, but configuration upgrades may be needed") { + t.Fatalf("unexpected \"configuration upgrade\" warning in output") + } +} + func TestInit_getUpgradePlugins(t *testing.T) { // Create a temporary working directory that is empty td := tempDir(t) @@ -1187,6 +1237,7 @@ func TestInit_providerLockFile(t *testing.T) { func TestInit_pluginDirReset(t *testing.T) { td := testTempDir(t) + defer os.RemoveAll(td) defer testChdir(t, td)() ui := new(cli.MockUi) diff --git a/command/internal_plugin.go b/command/internal_plugin.go index b26ba1df6..33de8569a 100644 --- a/command/internal_plugin.go +++ b/command/internal_plugin.go @@ -33,7 +33,24 @@ func BuildPluginCommandString(pluginType, pluginName string) (string, error) { return strings.Join(parts, TFSPACE), nil } +// Internal plugins do not support any CLI args, but we do receive flags that +// main.go:mergeEnvArgs has merged in from EnvCLI. Instead of making main.go +// aware of this exception, we strip all flags from our args. Flags are easily +// identified by the '-' prefix, ensured by the cli package used. +func StripArgFlags(args []string) []string { + argsNoFlags := []string{} + for i := range args { + if !strings.HasPrefix(args[i], "-") { + argsNoFlags = append(argsNoFlags, args[i]) + } + } + return argsNoFlags +} + func (c *InternalPluginCommand) Run(args []string) int { + // strip flags from args, only use subcommands. 
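> Review note: to make the behavior of `StripArgFlags` concrete, here is a standalone sketch (a local copy of the function, since the real one lives in package `command`), including one caveat a future reader may care about:

```go
package main

import (
	"fmt"
	"strings"
)

// stripArgFlags mirrors command.StripArgFlags: drop every token
// with a '-' prefix, keeping only subcommand words.
func stripArgFlags(args []string) []string {
	out := []string{}
	for _, a := range args {
		if !strings.HasPrefix(a, "-") {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	fmt.Println(stripArgFlags([]string{"provisioner", "remote-exec", "-var-file=my_vars.tfvars"}))
	// [provisioner remote-exec]

	// Caveat: a space-separated flag value would survive, since only
	// tokens that themselves start with '-' are dropped.
	fmt.Println(stripArgFlags([]string{"provisioner", "remote-exec", "-var-file", "my.tfvars"}))
	// [provisioner remote-exec my.tfvars]
}
```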
+ args = StripArgFlags(args) + if len(args) != 2 { log.Printf("Wrong number of args; expected: terraform internal-plugin pluginType pluginName") return 1 diff --git a/command/internal_plugin_test.go b/command/internal_plugin_test.go index f96b5feae..832ec6b15 100644 --- a/command/internal_plugin_test.go +++ b/command/internal_plugin_test.go @@ -1,13 +1,17 @@ package command -import "testing" +import ( + "testing" + + "github.com/hashicorp/terraform/addrs" +) func TestInternalPlugin_InternalProviders(t *testing.T) { m := new(Meta) providers := m.internalProviders() // terraform is the only provider moved back to internal for _, name := range []string{"terraform"} { - pf, ok := providers[name] + pf, ok := providers[addrs.NewLegacyProvider(name)] if !ok { t.Errorf("Expected to find %s in InternalProviders", name) } @@ -42,3 +46,12 @@ func TestInternalPlugin_BuildPluginCommandString(t *testing.T) { t.Errorf("Expected command to end with %s; got:\n%s\n", expected, actual) } } + +func TestInternalPlugin_StripArgFlags(t *testing.T) { + actual := StripArgFlags([]string{"provisioner", "remote-exec", "-var-file=my_vars.tfvars", "-flag"}) + expected := []string{"provisioner", "remote-exec"} + // Must be same length and order. + if len(actual) != len(expected) || expected[0] != actual[0] || expected[1] != actual[1] { + t.Fatalf("Expected args to be exactly '%s', got '%s'", expected, actual) + } +} diff --git a/command/jsonconfig/config.go b/command/jsonconfig/config.go index 0b27c7af4..21775db7d 100644 --- a/command/jsonconfig/config.go +++ b/command/jsonconfig/config.go @@ -139,7 +139,9 @@ func marshalProviderConfigs( } for k, pc := range c.Module.ProviderConfigs { - schema := schemas.ProviderConfig(pc.Name) + // FIXME: lookup providerFqn from config + providerFqn := addrs.NewLegacyProvider(pc.Name) + schema := schemas.ProviderConfig(providerFqn) p := providerConfig{ Name: pc.Name, Alias: pc.Alias, @@ -301,8 +303,10 @@ func marshalResources(resources map[string]*configs.Resource, schemas *terraform } } + // TODO: get actual providerFqn + providerFqn := addrs.NewLegacyProvider(v.ProviderConfigAddr().LocalName) schema, schemaVer := schemas.ResourceTypeConfig( - v.ProviderConfigAddr().Type, + providerFqn, v.Mode, v.Type, ) diff --git a/command/jsonplan/plan.go b/command/jsonplan/plan.go index f1563abef..063267275 100644 --- a/command/jsonplan/plan.go +++ b/command/jsonplan/plan.go @@ -178,7 +178,7 @@ func (p *plan) marshalResourceChanges(changes *plans.Changes, schemas *terraform } schema, _ := schemas.ResourceTypeConfig( - rc.ProviderAddr.ProviderConfig.Type, + rc.ProviderAddr.Provider, addr.Resource.Resource.Mode, addr.Resource.Resource.Type, ) @@ -252,7 +252,7 @@ func (p *plan) marshalResourceChanges(changes *plans.Changes, schemas *terraform r.ModuleAddress = addr.Module.String() r.Name = addr.Resource.Resource.Name r.Type = addr.Resource.Resource.Type - r.ProviderName = rc.ProviderAddr.ProviderConfig.StringCompact() + r.ProviderName = rc.ProviderAddr.Provider.LegacyString() p.ResourceChanges = append(p.ResourceChanges, r) diff --git a/command/jsonplan/values.go b/command/jsonplan/values.go index a6e96d1d8..263d2b5b9 100644 --- a/command/jsonplan/values.go +++ b/command/jsonplan/values.go @@ -26,7 +26,7 @@ type stateValues struct { type attributeValues map[string]interface{} func marshalAttributeValues(value cty.Value, schema *configschema.Block) attributeValues { - if value == cty.NilVal { + if value == cty.NilVal || value.IsNull() { return nil } ret := make(attributeValues) @@ -96,16 +96,36 @@ func
marshalPlannedValues(changes *plans.Changes, schemas *terraform.Schemas) (m containingModule := resource.Addr.Module.String() moduleResourceMap[containingModule] = append(moduleResourceMap[containingModule], resource.Addr) - // root has no parents. - if containingModule != "" { + // the root module has no parents + if !resource.Addr.Module.IsRoot() { parent := resource.Addr.Module.Parent().String() - // we likely will see multiple resources in one module, so we + // we expect to see multiple resources in one module, so we // only need to report the "parent" module for each child module // once. if !seenModules[containingModule] { moduleMap[parent] = append(moduleMap[parent], resource.Addr.Module) seenModules[containingModule] = true } + + // If any given parent module has no resources, it needs to be + // added to the moduleMap. This walks through the current + // resources' modules' ancestors, taking advantage of the fact + // that Ancestors() returns an ordered slice, and verifies that + // each one is in the map. + ancestors := resource.Addr.Module.Ancestors() + for i, ancestor := range ancestors[:len(ancestors)-1] { + aStr := ancestor.String() + + // childStr here is the immediate child of the current step + childStr := ancestors[i+1].String() + // we likely will see multiple resources in one module, so we + // only need to report the "parent" module for each child module + // once. + if !seenModules[childStr] { + moduleMap[aStr] = append(moduleMap[aStr], ancestors[i+1]) + seenModules[childStr] = true + } + } } } } @@ -144,7 +164,7 @@ func marshalPlanResources(changes *plans.Changes, ris []addrs.AbsResourceInstanc Address: r.Addr.String(), Type: r.Addr.Resource.Resource.Type, Name: r.Addr.Resource.Resource.Name, - ProviderName: r.ProviderAddr.ProviderConfig.StringCompact(), + ProviderName: r.ProviderAddr.Provider.LegacyString(), Index: r.Addr.Resource.Key, } @@ -161,7 +181,7 @@ func marshalPlanResources(changes *plans.Changes, ris []addrs.AbsResourceInstanc } schema, schemaVer := schemas.ResourceTypeConfig( - r.ProviderAddr.ProviderConfig.Type, + r.ProviderAddr.Provider, r.Addr.Resource.Resource.Mode, resource.Type, ) diff --git a/command/jsonplan/values_test.go b/command/jsonplan/values_test.go index 6b04dba02..9fe9043cb 100644 --- a/command/jsonplan/values_test.go +++ b/command/jsonplan/values_test.go @@ -30,6 +30,18 @@ func TestMarshalAttributeValues(t *testing.T) { }, nil, }, + { + cty.NullVal(cty.String), + &configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "foo": { + Type: cty.String, + Optional: true, + }, + }, + }, + nil, + }, { cty.ObjectVal(map[string]cty.Value{ "foo": cty.StringVal("bar"), @@ -246,7 +258,10 @@ func TestMarshalPlanResources(t *testing.T) { Type: "test_thing", Name: "example", }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), - ProviderAddr: addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + ProviderAddr: addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ChangeSrc: plans.ChangeSrc{ Action: test.Action, Before: before, @@ -278,8 +293,8 @@ func TestMarshalPlanResources(t *testing.T) { func testSchemas() *terraform.Schemas { return &terraform.Schemas{ - Providers: map[string]*terraform.ProviderSchema{ - "test": &terraform.ProviderSchema{ + Providers: map[addrs.Provider]*terraform.ProviderSchema{ + addrs.NewLegacyProvider("test"): &terraform.ProviderSchema{ ResourceTypes: map[string]*configschema.Block{ "test_thing": { Attributes: 
map[string]*configschema.Attribute{ diff --git a/command/jsonprovider/provider.go b/command/jsonprovider/provider.go index 5c45fd472..7f331e7e0 100644 --- a/command/jsonprovider/provider.go +++ b/command/jsonprovider/provider.go @@ -35,7 +35,7 @@ func Marshal(s *terraform.Schemas) ([]byte, error) { providers := newProviders() for k, v := range s.Providers { - providers.Schemas[k] = marshalProvider(v) + providers.Schemas[k.LegacyString()] = marshalProvider(v) } ret, err := json.Marshal(providers) diff --git a/command/jsonprovider/provider_test.go b/command/jsonprovider/provider_test.go index 64b21d746..120e4fa73 100644 --- a/command/jsonprovider/provider_test.go +++ b/command/jsonprovider/provider_test.go @@ -7,6 +7,7 @@ import ( "github.com/google/go-cmp/cmp" "github.com/zclconf/go-cty/cty" + "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/configs/configschema" "github.com/hashicorp/terraform/terraform" ) @@ -117,8 +118,8 @@ func TestMarshalProvider(t *testing.T) { func testProviders() *terraform.Schemas { return &terraform.Schemas{ - Providers: map[string]*terraform.ProviderSchema{ - "test": testProvider(), + Providers: map[addrs.Provider]*terraform.ProviderSchema{ + addrs.NewLegacyProvider("test"): testProvider(), }, } } diff --git a/command/jsonstate/state.go b/command/jsonstate/state.go index 68ed520a0..bcc12cd55 100644 --- a/command/jsonstate/state.go +++ b/command/jsonstate/state.go @@ -91,6 +91,9 @@ type resource struct { // Tainted is true if the resource is tainted in terraform state. Tainted bool `json:"tainted,omitempty"` + + // DeposedKey is set if the resource instance is deposed in terraform state. + DeposedKey string `json:"deposed_key,omitempty"` } // attributeValues is the JSON representation of the attribute values of the @@ -98,9 +101,10 @@ type resource struct { type attributeValues map[string]interface{} func marshalAttributeValues(value cty.Value, schema *configschema.Block) attributeValues { - if value == cty.NilVal { + if value == cty.NilVal || value.IsNull() { return nil } + ret := make(attributeValues) it := value.ElementIterator() @@ -246,18 +250,18 @@ func marshalResources(resources map[string]*states.Resource, schemas *terraform. for _, r := range resources { for k, ri := range r.Instances { - resource := resource{ + current := resource{ Address: r.Addr.String(), Type: r.Addr.Type, Name: r.Addr.Name, - ProviderName: r.ProviderConfig.ProviderConfig.StringCompact(), + ProviderName: r.ProviderConfig.Provider.LegacyString(), } switch r.Addr.Mode { case addrs.ManagedResourceMode: - resource.Mode = "managed" + current.Mode = "managed" case addrs.DataResourceMode: - resource.Mode = "data" + current.Mode = "data" default: return ret, fmt.Errorf("resource %s has an unsupported mode %s", r.Addr.String(), @@ -266,41 +270,76 @@
} if r.EachMode != states.NoEach { - resource.Index = k + current.Index = k } schema, _ := schemas.ResourceTypeConfig( - r.ProviderConfig.ProviderConfig.Type, + r.ProviderConfig.Provider, r.Addr.Mode, r.Addr.Type, ) - resource.SchemaVersion = ri.Current.SchemaVersion - if schema == nil { - return nil, fmt.Errorf("no schema found for %s", r.Addr.String()) - } - riObj, err := ri.Current.Decode(schema.ImpliedType()) - if err != nil { - return nil, err - } + // It is possible that the only instance is deposed + if ri.Current != nil { + current.SchemaVersion = ri.Current.SchemaVersion - resource.AttributeValues = marshalAttributeValues(riObj.Value, schema) - - if len(riObj.Dependencies) > 0 { - dependencies := make([]string, len(riObj.Dependencies)) - for i, v := range riObj.Dependencies { - dependencies[i] = v.String() + if schema == nil { + return nil, fmt.Errorf("no schema found for %s", r.Addr.String()) } - resource.DependsOn = dependencies + riObj, err := ri.Current.Decode(schema.ImpliedType()) + if err != nil { + return nil, err + } + + current.AttributeValues = marshalAttributeValues(riObj.Value, schema) + + if len(riObj.Dependencies) > 0 { + dependencies := make([]string, len(riObj.Dependencies)) + for i, v := range riObj.Dependencies { + dependencies[i] = v.String() + } + current.DependsOn = dependencies + } + + if riObj.Status == states.ObjectTainted { + current.Tainted = true + } + ret = append(ret, current) } - if riObj.Status == states.ObjectTainted { - resource.Tainted = true - } + for deposedKey, rios := range ri.Deposed { + // copy the base fields from the current instance + deposed := resource{ + Address: current.Address, + Type: current.Type, + Name: current.Name, + ProviderName: current.ProviderName, + Mode: current.Mode, + Index: current.Index, + } - ret = append(ret, resource) + riObj, err := rios.Decode(schema.ImpliedType()) + if err != nil { + return nil, err + } + + deposed.AttributeValues = marshalAttributeValues(riObj.Value, schema) + + if len(riObj.Dependencies) > 0 { + dependencies := make([]string, len(riObj.Dependencies)) + for i, v := range riObj.Dependencies { + dependencies[i] = v.String() + } + deposed.DependsOn = dependencies + } + + if riObj.Status == states.ObjectTainted { + deposed.Tainted = true + } + deposed.DeposedKey = deposedKey.String() + ret = append(ret, deposed) + } } - } sort.Slice(ret, func(i, j int) bool { diff --git a/command/jsonstate/state_test.go b/command/jsonstate/state_test.go index ee7410416..cacb11988 100644 --- a/command/jsonstate/state_test.go +++ b/command/jsonstate/state_test.go @@ -91,6 +91,18 @@ func TestMarshalAttributeValues(t *testing.T) { }, nil, }, + { + cty.NullVal(cty.String), + &configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "foo": { + Type: cty.String, + Optional: true, + }, + }, + }, + nil, + }, { cty.ObjectVal(map[string]cty.Value{ "foo": cty.StringVal("bar"), @@ -158,19 +170,20 @@ func TestMarshalAttributeValues(t *testing.T) { } func TestMarshalResources(t *testing.T) { - tests := []struct { + deposedKey := states.NewDeposedKey() + tests := map[string]struct { Resources map[string]*states.Resource Schemas *terraform.Schemas Want []resource Err bool }{ - { + "nil": { nil, nil, nil, false, }, - { + "single resource": { map[string]*states.Resource{ "test_thing.baz": { Addr: addrs.Resource{ @@ -188,9 +201,10 @@ func TestMarshalResources(t *testing.T) { }, }, }, - ProviderConfig: addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + ProviderConfig: addrs.AbsProviderConfig{ 
+ Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, }, }, testSchemas(), @@ -211,29 +225,137 @@ func TestMarshalResources(t *testing.T) { }, false, }, + "deposed resource": { + map[string]*states.Resource{ + "test_thing.baz": { + Addr: addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_thing", + Name: "bar", + }, + EachMode: states.EachList, + Instances: map[addrs.InstanceKey]*states.ResourceInstance{ + addrs.IntKey(0): { + Deposed: map[states.DeposedKey]*states.ResourceInstanceObjectSrc{ + states.DeposedKey(deposedKey): &states.ResourceInstanceObjectSrc{ + SchemaVersion: 1, + Status: states.ObjectReady, + AttrsJSON: []byte(`{"woozles":"confuzles"}`), + }, + }, + }, + }, + ProviderConfig: addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, + }, + }, + testSchemas(), + []resource{ + resource{ + Address: "test_thing.bar", + Mode: "managed", + Type: "test_thing", + Name: "bar", + Index: addrs.IntKey(0), + ProviderName: "test", + DeposedKey: deposedKey.String(), + AttributeValues: attributeValues{ + "foozles": json.RawMessage(`null`), + "woozles": json.RawMessage(`"confuzles"`), + }, + }, + }, + false, + }, + "deposed and current resource": { + map[string]*states.Resource{ + "test_thing.baz": { + Addr: addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_thing", + Name: "bar", + }, + EachMode: states.EachList, + Instances: map[addrs.InstanceKey]*states.ResourceInstance{ + addrs.IntKey(0): { + Deposed: map[states.DeposedKey]*states.ResourceInstanceObjectSrc{ + states.DeposedKey(deposedKey): &states.ResourceInstanceObjectSrc{ + SchemaVersion: 1, + Status: states.ObjectReady, + AttrsJSON: []byte(`{"woozles":"confuzles"}`), + }, + }, + Current: &states.ResourceInstanceObjectSrc{ + SchemaVersion: 1, + Status: states.ObjectReady, + AttrsJSON: []byte(`{"woozles":"confuzles"}`), + }, + }, + }, + ProviderConfig: addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, + }, + }, + testSchemas(), + []resource{ + resource{ + Address: "test_thing.bar", + Mode: "managed", + Type: "test_thing", + Name: "bar", + Index: addrs.IntKey(0), + ProviderName: "test", + SchemaVersion: 1, + AttributeValues: attributeValues{ + "foozles": json.RawMessage(`null`), + "woozles": json.RawMessage(`"confuzles"`), + }, + }, + resource{ + Address: "test_thing.bar", + Mode: "managed", + Type: "test_thing", + Name: "bar", + Index: addrs.IntKey(0), + ProviderName: "test", + DeposedKey: deposedKey.String(), + AttributeValues: attributeValues{ + "foozles": json.RawMessage(`null`), + "woozles": json.RawMessage(`"confuzles"`), + }, + }, + }, + false, + }, } - for _, test := range tests { - got, err := marshalResources(test.Resources, test.Schemas) - if test.Err { - if err == nil { - t.Fatal("succeeded; want error") + for name, test := range tests { + t.Run(name, func(t *testing.T) { + got, err := marshalResources(test.Resources, test.Schemas) + if test.Err { + if err == nil { + t.Fatal("succeeded; want error") + } + return + } else if err != nil { + t.Fatalf("unexpected error: %s", err) } - return - } else if err != nil { - t.Fatalf("unexpected error: %s", err) - } - eq := reflect.DeepEqual(got, test.Want) - if !eq { - t.Fatalf("wrong result:\nGot: %#v\nWant: %#v\n", got, test.Want) - } + eq := reflect.DeepEqual(got, test.Want) + if !eq { + t.Fatalf("wrong result:\nGot: %#v\nWant: %#v\n", got, test.Want) + } + }) } } func testSchemas() *terraform.Schemas { return 
&terraform.Schemas{ - Providers: map[string]*terraform.ProviderSchema{ - "test": &terraform.ProviderSchema{ + Providers: map[addrs.Provider]*terraform.ProviderSchema{ + addrs.NewLegacyProvider("test"): &terraform.ProviderSchema{ ResourceTypes: map[string]*configschema.Block{ "test_thing": { Attributes: map[string]*configschema.Attribute{ diff --git a/command/login.go b/command/login.go index 527d43643..be2d79c93 100644 --- a/command/login.go +++ b/command/login.go @@ -10,14 +10,16 @@ import ( "math/rand" "net" "net/http" + "net/url" "path/filepath" "strings" + tfe "github.com/hashicorp/go-tfe" + svchost "github.com/hashicorp/terraform-svchost" + svcauth "github.com/hashicorp/terraform-svchost/auth" + "github.com/hashicorp/terraform-svchost/disco" "github.com/hashicorp/terraform/command/cliconfig" "github.com/hashicorp/terraform/httpclient" - "github.com/hashicorp/terraform/svchost" - svcauth "github.com/hashicorp/terraform/svchost/auth" - "github.com/hashicorp/terraform/svchost/disco" "github.com/hashicorp/terraform/tfdiags" uuid "github.com/hashicorp/go-uuid" @@ -103,21 +105,12 @@ func (c *LoginCommand) Run(args []string) int { return 1 } - creds := c.Services.CredentialsSource() - - // In normal use (i.e. without test mocks/fakes) creds will be an instance - // of the command/cliconfig.CredentialsSource type, which has some extra - // methods we can use to give the user better feedback about what we're - // going to do. credsCtx will be nil if it's any other implementation, - // though. - var credsCtx *loginCredentialsContext - if c, ok := creds.(*cliconfig.CredentialsSource); ok { - filename, _ := c.CredentialsFilePath() - credsCtx = &loginCredentialsContext{ - Location: c.HostCredentialsLocation(hostname), - LocalFilename: filename, // empty in the very unlikely event that we can't select a config directory for this user - HelperType: c.CredentialsHelperType(), - } + creds := c.Services.CredentialsSource().(*cliconfig.CredentialsSource) + filename, _ := creds.CredentialsFilePath() + credsCtx := &loginCredentialsContext{ + Location: creds.HostCredentialsLocation(hostname), + LocalFilename: filename, // empty in the very unlikely event that we can't select a config directory for this user + HelperType: creds.CredentialsHelperType(), } clientConfig, err := host.ServiceOAuthClient("login.v1") @@ -125,25 +118,49 @@ func (c *LoginCommand) Run(args []string) int { case nil: // Great! No problem, then. case *disco.ErrServiceNotProvided: - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Host does not support Terraform login", - fmt.Sprintf("The given hostname %q does not allow creating Terraform authorization tokens.", dispHostname), - )) + // This is also fine! We'll try the manual token creation process. case *disco.ErrVersionNotSupported: diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, + tfdiags.Warning, "Host does not support Terraform login", fmt.Sprintf("The given hostname %q allows creating Terraform authorization tokens, but requires a newer version of Terraform CLI to do so.", dispHostname), )) default: diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, + tfdiags.Warning, "Host does not support Terraform login", fmt.Sprintf("The given hostname %q cannot support \"terraform login\": %s.", dispHostname, err), )) } + // If login service is unavailable, check for a TFE v2 API as fallback + var service *url.URL + if clientConfig == nil { + service, err = host.ServiceURL("tfe.v2") + switch err.(type) { + case nil: + // Success! 
+ case *disco.ErrServiceNotProvided: + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Host does not support Terraform tokens API", + fmt.Sprintf("The given hostname %q does not support creating Terraform authorization tokens.", dispHostname), + )) + case *disco.ErrVersionNotSupported: + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Host does not support Terraform tokens API", + fmt.Sprintf("The given hostname %q allows creating Terraform authorization tokens, but requires a newer version of Terraform CLI to do so.", dispHostname), + )) + default: + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Host does not support Terraform tokens API", + fmt.Sprintf("The given hostname %q cannot support \"terraform login\": %s.", dispHostname, err), + )) + } + } + if credsCtx.Location == cliconfig.CredentialsInOtherFile { diags = diags.Append(tfdiags.Sourceless( tfdiags.Error, @@ -157,37 +174,41 @@ func (c *LoginCommand) Run(args []string) int { return 1 } - var token *oauth2.Token - switch { - case clientConfig.SupportedGrantTypes.Has(disco.OAuthAuthzCodeGrant): - // We prefer an OAuth code grant if the server supports it. - var tokenDiags tfdiags.Diagnostics - token, tokenDiags = c.interactiveGetTokenByCode(hostname, credsCtx, clientConfig) - diags = diags.Append(tokenDiags) - if tokenDiags.HasErrors() { - c.showDiagnostics(diags) - return 1 + var token svcauth.HostCredentialsToken + var tokenDiags tfdiags.Diagnostics + + // Prefer Terraform login if available + if clientConfig != nil { + var oauthToken *oauth2.Token + + switch { + case clientConfig.SupportedGrantTypes.Has(disco.OAuthAuthzCodeGrant): + // We prefer an OAuth code grant if the server supports it. + oauthToken, tokenDiags = c.interactiveGetTokenByCode(hostname, credsCtx, clientConfig) + case clientConfig.SupportedGrantTypes.Has(disco.OAuthOwnerPasswordGrant) && hostname == svchost.Hostname("app.terraform.io"): + // The password grant type is allowed only for Terraform Cloud SaaS. + oauthToken, tokenDiags = c.interactiveGetTokenByPassword(hostname, credsCtx, clientConfig) + default: + tokenDiags = tokenDiags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Host does not support Terraform login", + fmt.Sprintf("The given hostname %q does not allow any OAuth grant types that are supported by this version of Terraform.", dispHostname), + )) } - case clientConfig.SupportedGrantTypes.Has(disco.OAuthOwnerPasswordGrant) && hostname == svchost.Hostname("app.terraform.io"): - // The password grant type is allowed only for Terraform Cloud SaaS. 
- var tokenDiags tfdiags.Diagnostics - token, tokenDiags = c.interactiveGetTokenByPassword(hostname, credsCtx, clientConfig) - diags = diags.Append(tokenDiags) - if tokenDiags.HasErrors() { - c.showDiagnostics(diags) - return 1 + if oauthToken != nil { + token = svcauth.HostCredentialsToken(oauthToken.AccessToken) } - default: - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Host does not support Terraform login", - fmt.Sprintf("The given hostname %q does not allow any OAuth grant types that are supported by this version of Terraform.", dispHostname), - )) + } else if service != nil { + token, tokenDiags = c.interactiveGetTokenByUI(hostname, credsCtx, service) + } + + diags = diags.Append(tokenDiags) + if diags.HasErrors() { c.showDiagnostics(diags) return 1 } - err = creds.StoreForHost(hostname, svcauth.HostCredentialsToken(token.AccessToken)) + err = creds.StoreForHost(hostname, token) if err != nil { diags = diags.Append(tfdiags.Sourceless( tfdiags.Error, @@ -468,10 +489,94 @@ func (c *LoginCommand) interactiveGetTokenByPassword(hostname svchost.Hostname, return token, diags } -func (c *LoginCommand) interactiveContextConsent(hostname svchost.Hostname, grantType disco.OAuthGrantType, credsCtx *loginCredentialsContext) (bool, tfdiags.Diagnostics) { +func (c *LoginCommand) interactiveGetTokenByUI(hostname svchost.Hostname, credsCtx *loginCredentialsContext, service *url.URL) (svcauth.HostCredentialsToken, tfdiags.Diagnostics) { var diags tfdiags.Diagnostics - c.Ui.Output(fmt.Sprintf("Terraform will request an API token for %s using OAuth.\n", hostname.ForDisplay())) + confirm, confirmDiags := c.interactiveContextConsent(hostname, disco.OAuthGrantType(""), credsCtx) + diags = diags.Append(confirmDiags) + if !confirm { + diags = diags.Append(errors.New("Login cancelled")) + return "", diags + } + + tokensURL := url.URL{ + Scheme: "https", + Host: service.Hostname(), + Path: "/app/settings/tokens", + RawQuery: "source=terraform-login", + } + + launchBrowserManually := false + if c.BrowserLauncher != nil { + err := c.BrowserLauncher.OpenURL(tokensURL.String()) + if err == nil { + c.Ui.Output(fmt.Sprintf("Terraform must now open a web browser to the tokens page for %s.\n", hostname.ForDisplay())) + c.Ui.Output(fmt.Sprintf("If a browser does not open this automatically, open the following URL to proceed:\n %s\n", tokensURL.String())) + } else { + // Assume we're on a platform where opening a browser isn't possible. + launchBrowserManually = true + } + } else { + launchBrowserManually = true + } + + if launchBrowserManually { + c.Ui.Output(fmt.Sprintf("Open the following URL to access the tokens page for %s:\n %s\n", hostname.ForDisplay(), tokensURL.String())) + } + + c.Ui.Output("\n---------------------------------------------------------------------------------\n") + c.Ui.Output("Generate a token using your browser, and copy-paste it into this prompt.\n") + + // credsCtx might not be set if we're using a mock credentials source + // in a test, but it should always be set in normal use. 
+ if credsCtx != nil { + switch credsCtx.Location { + case cliconfig.CredentialsViaHelper: + c.Ui.Output(fmt.Sprintf("Terraform will store the token in the configured %q credentials helper\nfor use by subsequent commands.\n", credsCtx.HelperType)) + case cliconfig.CredentialsInPrimaryFile, cliconfig.CredentialsNotAvailable: + c.Ui.Output(fmt.Sprintf("Terraform will store the token in plain text in the following file\nfor use by subsequent commands:\n %s\n", credsCtx.LocalFilename)) + } + } + + token, err := c.Ui.AskSecret(fmt.Sprintf(c.Colorize().Color("Token for [bold]%s[reset]:"), hostname.ForDisplay())) + if err != nil { + diags := diags.Append(fmt.Errorf("Failed to retrieve token: %s", err)) + return "", diags + } + + token = strings.TrimSpace(token) + cfg := &tfe.Config{ + Address: service.String(), + BasePath: service.Path, + Token: token, + Headers: make(http.Header), + } + client, err := tfe.NewClient(cfg) + if err != nil { + diags = diags.Append(fmt.Errorf("Failed to create API client: %s", err)) + return "", diags + } + user, err := client.Users.ReadCurrent(context.Background()) + if err == tfe.ErrUnauthorized { + diags = diags.Append(fmt.Errorf("Token is invalid: %s", err)) + return "", diags + } else if err != nil { + diags = diags.Append(fmt.Errorf("Failed to retrieve user account details: %s", err)) + return "", diags + } + c.Ui.Output(fmt.Sprintf(c.Colorize().Color("\nRetrieved token for user [bold]%s[reset]\n"), user.Username)) + + return svcauth.HostCredentialsToken(token), nil +} + +func (c *LoginCommand) interactiveContextConsent(hostname svchost.Hostname, grantType disco.OAuthGrantType, credsCtx *loginCredentialsContext) (bool, tfdiags.Diagnostics) { + var diags tfdiags.Diagnostics + mechanism := "OAuth" + if grantType == "" { + mechanism = "your browser" + } + + c.Ui.Output(fmt.Sprintf("Terraform will request an API token for %s using %s.\n", hostname.ForDisplay(), mechanism)) if grantType.UsesAuthorizationEndpoint() { c.Ui.Output( diff --git a/command/login_test.go b/command/login_test.go index 33d68cb5a..b03e49c82 100644 --- a/command/login_test.go +++ b/command/login_test.go @@ -12,11 +12,14 @@ import ( "github.com/mitchellh/cli" + svchost "github.com/hashicorp/terraform-svchost" + "github.com/hashicorp/terraform-svchost/disco" "github.com/hashicorp/terraform/command/cliconfig" oauthserver "github.com/hashicorp/terraform/command/testdata/login-oauth-server" + tfeserver "github.com/hashicorp/terraform/command/testdata/login-tfe-server" "github.com/hashicorp/terraform/command/webbrowser" - "github.com/hashicorp/terraform/svchost" - "github.com/hashicorp/terraform/svchost/disco" + "github.com/hashicorp/terraform/httpclient" + "github.com/hashicorp/terraform/version" ) func TestLogin(t *testing.T) { @@ -25,6 +28,12 @@ func TestLogin(t *testing.T) { s := httptest.NewServer(oauthserver.Handler) defer s.Close() + // tfeserver.Handler is a stub TFE API implementation which will respond + // to ping and current account requests, when requests are authenticated + // with token "good-token" + ts := httptest.NewServer(tfeserver.Handler) + defer ts.Close() + loginTestCase := func(test func(t *testing.T, c *LoginCommand, ui *cli.MockUi, inp func(string))) func(t *testing.T) { return func(t *testing.T) { t.Helper() @@ -43,6 +52,7 @@ func TestLogin(t *testing.T) { browserLauncher := webbrowser.NewMockLauncher(ctx) creds := cliconfig.EmptyCredentialsSourceForTests(filepath.Join(workDir, "credentials.tfrc.json")) svcs := disco.NewWithCredentialsSource(creds) + 
svcs.SetUserAgent(httpclient.TerraformUserAgent(version.String())) inputBuf := &bytes.Buffer{} ui.InputReader = inputBuf @@ -67,6 +77,13 @@ func TestLogin(t *testing.T) { "token": s.URL + "/token", }, }) + svcs.ForceHostServices(svchost.Hostname("tfe.acme.com"), map[string]interface{}{ + // This represents a Terraform Enterprise instance which does not + // yet support the login API, but does support the TFE tokens API. + "tfe.v2": ts.URL + "/api/v2", + "tfe.v2.1": ts.URL + "/api/v2", + "tfe.v2.2": ts.URL + "/api/v2", + }) svcs.ForceHostServices(svchost.Hostname("unsupported.example.net"), map[string]interface{}{ // This host intentionally left blank. }) @@ -122,13 +139,50 @@ func TestLogin(t *testing.T) { } })) - t.Run("host without login support", loginTestCase(func(t *testing.T, c *LoginCommand, ui *cli.MockUi, inp func(string)) { + t.Run("TFE host without login support", loginTestCase(func(t *testing.T, c *LoginCommand, ui *cli.MockUi, inp func(string)) { + // Enter "yes" at the consent prompt, then paste a token with some + // accidental whitespace. + inp("yes\n good-token \n") + status := c.Run([]string{"tfe.acme.com"}) + if status != 0 { + t.Fatalf("unexpected error code %d\nstderr:\n%s", status, ui.ErrorWriter.String()) + } + + credsSrc := c.Services.CredentialsSource() + creds, err := credsSrc.ForHost(svchost.Hostname("tfe.acme.com")) + if err != nil { + t.Errorf("failed to retrieve credentials: %s", err) + } + if got, want := creds.Token(), "good-token"; got != want { + t.Errorf("wrong token %q; want %q", got, want) + } + })) + + t.Run("TFE host without login support, incorrectly pasted token", loginTestCase(func(t *testing.T, c *LoginCommand, ui *cli.MockUi, inp func(string)) { + // Enter "yes" at the consent prompt, then paste an invalid token. + inp("yes\ngood-tok\n") + status := c.Run([]string{"tfe.acme.com"}) + if status != 1 { + t.Fatalf("unexpected error code %d\nstderr:\n%s", status, ui.ErrorWriter.String()) + } + + credsSrc := c.Services.CredentialsSource() + creds, err := credsSrc.ForHost(svchost.Hostname("tfe.acme.com")) + if err != nil { + t.Errorf("failed to retrieve credentials: %s", err) + } + if creds != nil { + t.Errorf("wrong token %q; should have no token", creds.Token()) + } + })) + + t.Run("host without login or TFE API support", loginTestCase(func(t *testing.T, c *LoginCommand, ui *cli.MockUi, inp func(string)) { status := c.Run([]string{"unsupported.example.net"}) if status == 0 { t.Fatalf("successful exit; want error") } - if got, want := ui.ErrorWriter.String(), "Error: Host does not support Terraform login"; !strings.Contains(got, want) { + if got, want := ui.ErrorWriter.String(), "Error: Host does not support Terraform tokens API"; !strings.Contains(got, want) { t.Fatalf("missing expected error message\nwant: %s\nfull output:\n%s", want, got) } })) diff --git a/command/logout.go b/command/logout.go new file mode 100644 index 000000000..1ecaf3b47 --- /dev/null +++ b/command/logout.go @@ -0,0 +1,162 @@ +package command + +import ( + "fmt" + "path/filepath" + "strings" + + svchost "github.com/hashicorp/terraform-svchost" + "github.com/hashicorp/terraform/command/cliconfig" + "github.com/hashicorp/terraform/tfdiags" +) + +// LogoutCommand is a Command implementation which removes stored credentials +// for a remote service host. +type LogoutCommand struct { + Meta +} + +// Run implements cli.Command. 
+func (c *LogoutCommand) Run(args []string) int { + args, err := c.Meta.process(args, false) + if err != nil { + return 1 + } + + cmdFlags := c.Meta.defaultFlagSet("logout") + cmdFlags.Usage = func() { c.Ui.Error(c.Help()) } + if err := cmdFlags.Parse(args); err != nil { + return 1 + } + + args = cmdFlags.Args() + if len(args) > 1 { + c.Ui.Error( + "The logout command expects at most one argument: the host to log out of.") + cmdFlags.Usage() + return 1 + } + + var diags tfdiags.Diagnostics + + givenHostname := "app.terraform.io" + if len(args) != 0 { + givenHostname = args[0] + } + + hostname, err := svchost.ForComparison(givenHostname) + if err != nil { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Invalid hostname", + fmt.Sprintf("The given hostname %q is not valid: %s.", givenHostname, err.Error()), + )) + c.showDiagnostics(diags) + return 1 + } + + // From now on, since we've validated the given hostname, we should use + // dispHostname in the UI to ensure we're presenting it in the canonical + // form, in case that helps users with debugging when things aren't + // working as expected. (Perhaps the normalization is part of the cause.) + dispHostname := hostname.ForDisplay() + + creds := c.Services.CredentialsSource().(*cliconfig.CredentialsSource) + filename, _ := creds.CredentialsFilePath() + credsCtx := &loginCredentialsContext{ + Location: creds.HostCredentialsLocation(hostname), + LocalFilename: filename, // empty in the very unlikely event that we can't select a config directory for this user + HelperType: creds.CredentialsHelperType(), + } + + if credsCtx.Location == cliconfig.CredentialsInOtherFile { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + fmt.Sprintf("Credentials for %s are manually configured", dispHostname), + "The \"terraform logout\" command cannot log out because credentials for this host are manually configured in a CLI configuration file.\n\nTo log out, revoke the existing credentials and remove that block from the CLI configuration.", + )) + } + + if diags.HasErrors() { + c.showDiagnostics(diags) + return 1 + } + + // credsCtx might not be set if we're using a mock credentials source + // in a test, but it should always be set in normal use. + if credsCtx != nil { + switch credsCtx.Location { + case cliconfig.CredentialsNotAvailable: + c.Ui.Output(fmt.Sprintf("No credentials for %s are stored.\n", dispHostname)) + return 0 + case cliconfig.CredentialsViaHelper: + c.Ui.Output(fmt.Sprintf("Removing the stored credentials for %s from the configured\n%q credentials helper.\n", dispHostname, credsCtx.HelperType)) + case cliconfig.CredentialsInPrimaryFile: + c.Ui.Output(fmt.Sprintf("Removing the stored credentials for %s from the following file:\n %s\n", dispHostname, credsCtx.LocalFilename)) + } + } + + err = creds.ForgetForHost(hostname) + if err != nil { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Failed to remove API token", + fmt.Sprintf("Unable to remove stored API token: %s", err), + )) + } + + c.showDiagnostics(diags) + if diags.HasErrors() { + return 1 + } + + c.Ui.Output( + fmt.Sprintf( + c.Colorize().Color(strings.TrimSpace(` +[green][bold]Success![reset] [bold]Terraform has removed the stored API token for %s.[reset] +`)), + dispHostname, + ) + "\n", + ) + + return 0 +} + +// Help implements cli.Command. 
+func (c *LogoutCommand) Help() string {
+	defaultFile := c.defaultOutputFile()
+	if defaultFile == "" {
+		// Because this is just for the help message and it's very unlikely
+		// that a user wouldn't have a functioning home directory anyway,
+		// we'll just use a placeholder here. The real command has some
+		// more complex behavior for this case. This result is not correct
+		// on all platforms, but given how unlikely we are to hit this case
+		// that seems okay.
+		defaultFile = "~/.terraform/credentials.tfrc.json"
+	}
+
+	helpText := `
+Usage: terraform logout [hostname]
+
+  Removes locally-stored credentials for the specified hostname.
+
+  Note: the API token is only removed from local storage, not destroyed on the
+  remote server, so it will remain valid until manually revoked.
+
+  If no hostname is provided, the default hostname is app.terraform.io.
+
+  By default, credentials are stored in %s.
+`
+	return strings.TrimSpace(fmt.Sprintf(helpText, defaultFile))
+}
+
+// Synopsis implements cli.Command.
+func (c *LogoutCommand) Synopsis() string {
+	return "Remove locally-stored credentials for a remote host"
+}
+
+func (c *LogoutCommand) defaultOutputFile() string {
+	if c.CLIConfigDir == "" {
+		return "" // no default available
+	}
+	return filepath.Join(c.CLIConfigDir, "credentials.tfrc.json")
+}
diff --git a/command/logout_test.go b/command/logout_test.go
new file mode 100644
index 000000000..a4c17cf17
--- /dev/null
+++ b/command/logout_test.go
@@ -0,0 +1,81 @@
+package command
+
+import (
+	"io/ioutil"
+	"os"
+	"path/filepath"
+	"testing"
+
+	"github.com/mitchellh/cli"
+
+	svchost "github.com/hashicorp/terraform-svchost"
+	svcauth "github.com/hashicorp/terraform-svchost/auth"
+	"github.com/hashicorp/terraform-svchost/disco"
+	"github.com/hashicorp/terraform/command/cliconfig"
+)
+
+func TestLogout(t *testing.T) {
+	workDir, err := ioutil.TempDir("", "terraform-test-command-logout")
+	if err != nil {
+		t.Fatalf("cannot create temporary directory: %s", err)
+	}
+	defer os.RemoveAll(workDir)
+
+	ui := cli.NewMockUi()
+	credsSrc := cliconfig.EmptyCredentialsSourceForTests(filepath.Join(workDir, "credentials.tfrc.json"))
+
+	c := &LogoutCommand{
+		Meta: Meta{
+			Ui:       ui,
+			Services: disco.NewWithCredentialsSource(credsSrc),
+		},
+	}
+
+	testCases := []struct {
+		// Hostname to associate a pre-stored token
+		hostname string
+		// Command-line arguments
+		args []string
+		// true iff the token at hostname should be removed by the command
+		shouldRemove bool
+	}{
+		// If no command-line arguments given, should remove app.terraform.io token
+		{"app.terraform.io", []string{}, true},
+
+		// Can still specify app.terraform.io explicitly
+		{"app.terraform.io", []string{"app.terraform.io"}, true},
+
+		// Can remove tokens for other hostnames
+		{"tfe.example.com", []string{"tfe.example.com"}, true},
+
+		// Logout does not remove tokens for other hostnames
+		{"tfe.example.com", []string{"other-tfe.acme.com"}, false},
+	}
+	for _, tc := range testCases {
+		host := svchost.Hostname(tc.hostname)
+		token := svcauth.HostCredentialsToken("some-token")
+		err = credsSrc.StoreForHost(host, token)
+		if err != nil {
+			t.Fatalf("unexpected error storing credentials: %s", err)
+		}
+
+		status := c.Run(tc.args)
+		if status != 0 {
+			t.Fatalf("unexpected error code %d\nstderr:\n%s", status, ui.ErrorWriter.String())
+		}
+
+		creds, err := credsSrc.ForHost(host)
+		if err != nil {
+			t.Errorf("failed to retrieve credentials: %s", err)
+		}
+		if tc.shouldRemove {
+			if creds != nil {
+				t.Errorf("wrong token %q; should have no token", creds.Token())
+			}
+		} else {
+			if got, want := creds.Token(), "some-token"; got != want {
+				t.Errorf("wrong token %q; want %q", got, want)
+			}
+		}
+	}
+}
diff --git a/command/meta.go b/command/meta.go
index d55c9f8a5..9b3faa8b3 100644
--- a/command/meta.go
+++ b/command/meta.go
@@ -14,6 +14,7 @@ import (
 	"strings"
 	"time"
 
+	"github.com/hashicorp/terraform-svchost/disco"
 	"github.com/hashicorp/terraform/addrs"
 	"github.com/hashicorp/terraform/backend"
 	"github.com/hashicorp/terraform/backend/local"
@@ -22,9 +23,9 @@ import (
 	"github.com/hashicorp/terraform/configs/configload"
 	"github.com/hashicorp/terraform/helper/experiment"
 	"github.com/hashicorp/terraform/helper/wrappedstreams"
+	"github.com/hashicorp/terraform/internal/getproviders"
 	"github.com/hashicorp/terraform/providers"
 	"github.com/hashicorp/terraform/provisioners"
-	"github.com/hashicorp/terraform/svchost/disco"
 	"github.com/hashicorp/terraform/terraform"
 	"github.com/hashicorp/terraform/tfdiags"
 	"github.com/mitchellh/cli"
@@ -74,6 +75,11 @@ type Meta struct {
 	// into the given directory.
 	PluginCacheDir string
 
+	// ProviderSource allows determining the available versions of a provider
+	// and determines where a distribution package for a particular
+	// provider version can be obtained.
+	ProviderSource getproviders.Source
+
 	// OverrideDataDir, if non-empty, overrides the return value of the
 	// DataDir method for situations where the local .terraform/ directory
 	// is not suitable, e.g. because of a read-only filesystem.
@@ -157,6 +163,9 @@ type Meta struct {
 	// init.
 	//
 	// reconfigure forces init to ignore any stored configuration.
+	//
+	// compactWarnings (-compact-warnings) selects a more compact presentation
+	// of warnings in the output when they are not accompanied by errors.
 	statePath    string
 	stateOutPath string
 	backupPath   string
@@ -166,6 +175,7 @@ type Meta struct {
 	stateLockTimeout time.Duration
 	forceInitCopy    bool
 	reconfigure      bool
+	compactWarnings  bool
 
 	// Used with the import command to allow import of state when no matching config exists.
 	allowMissingConfig bool
@@ -242,8 +252,6 @@ func (m *Meta) InputMode() terraform.InputMode {
 	var mode terraform.InputMode
 	mode |= terraform.InputModeProvider
-	mode |= terraform.InputModeVar
-	mode |= terraform.InputModeVarUnset
 
 	return mode
 }
@@ -379,6 +387,7 @@ func (m *Meta) extendedFlagSet(n string) *flag.FlagSet {
 	f.BoolVar(&m.input, "input", true, "input")
 	f.Var((*FlagTargetSlice)(&m.targets), "target", "resource to target")
+	f.BoolVar(&m.compactWarnings, "compact-warnings", false, "use compact warnings")
 
 	if m.variableArgs.items == nil {
 		m.variableArgs = newRawFlags("-var")
@@ -482,6 +491,34 @@ func (m *Meta) showDiagnostics(vals ...interface{}) {
 	diags = diags.Append(vals...)
 	diags.Sort()
 
+	if len(diags) == 0 {
+		return
+	}
+
+	diags = diags.ConsolidateWarnings(1)
+
+	// Since warning messages generally compete with other output for the user's attention, we optionally render them compactly.
+	if m.compactWarnings {
+		// If the user selected compact warnings and all of the diagnostics are
+		// warnings then we'll use a more compact representation of the warnings
+		// that only includes their summaries.
+		// We show full warnings if there are also errors, because a warning
+		// can sometimes serve as good context for a subsequent error.
+ useCompact := true + for _, diag := range diags { + if diag.Severity() != tfdiags.Warning { + useCompact = false + break + } + } + if useCompact { + msg := format.DiagnosticWarningsCompact(diags, m.Colorize()) + msg = "\n" + msg + "\nTo see the full warning notes, run Terraform without -compact-warnings.\n" + m.Ui.Warn(msg) + return + } + } + for _, diag := range diags { // TODO: Actually measure the terminal width and pass it here. // For now, we don't have easy access to the writer that diff --git a/command/meta_backend.go b/command/meta_backend.go index 3a0cc473e..650d4a26e 100644 --- a/command/meta_backend.go +++ b/command/meta_backend.go @@ -346,9 +346,10 @@ func (m *Meta) backendConfig(opts *BackendOpts) (*configs.Backend, int, tfdiags. if opts.Config == nil { // check if the config was missing, or just not required - conf, err := m.loadBackendConfig(".") - if err != nil { - return nil, 0, err + conf, moreDiags := m.loadBackendConfig(".") + diags = diags.Append(moreDiags) + if moreDiags.HasErrors() { + return nil, 0, diags } if conf == nil { @@ -561,6 +562,75 @@ func (m *Meta) backendFromConfig(opts *BackendOpts) (backend.Backend, tfdiags.Di } } +// backendFromState returns the initialized (not configured) backend directly +// from the state. This should be used only when a user runs `terraform init +// -backend=false`. This function returns a local backend if there is no state +// or no backend configured. +func (m *Meta) backendFromState() (backend.Backend, tfdiags.Diagnostics) { + var diags tfdiags.Diagnostics + // Get the path to where we store a local cache of backend configuration + // if we're using a remote backend. This may not yet exist which means + // we haven't used a non-local backend before. That is okay. + statePath := filepath.Join(m.DataDir(), DefaultStateFilename) + sMgr := &state.LocalState{Path: statePath} + if err := sMgr.RefreshState(); err != nil { + diags = diags.Append(fmt.Errorf("Failed to load state: %s", err)) + return nil, diags + } + s := sMgr.State() + if s == nil { + // no state, so return a local backend + log.Printf("[TRACE] Meta.Backend: backend has not previously been initialized in this working directory") + return backendLocal.New(), diags + } + if s.Backend == nil { + // s.Backend is nil, so return a local backend + log.Printf("[TRACE] Meta.Backend: working directory was previously initialized but has no backend (is using legacy remote state?)") + return backendLocal.New(), diags + } + log.Printf("[TRACE] Meta.Backend: working directory was previously initialized for %q backend", s.Backend.Type) + + //backend init function + if s.Backend.Type == "" { + return backendLocal.New(), diags + } + f := backendInit.Backend(s.Backend.Type) + if f == nil { + diags = diags.Append(fmt.Errorf(strings.TrimSpace(errBackendSavedUnknown), s.Backend.Type)) + return nil, diags + } + b := f() + + // The configuration saved in the working directory state file is used + // in this case, since it will contain any additional values that + // were provided via -backend-config arguments on terraform init. + schema := b.ConfigSchema() + configVal, err := s.Backend.Config(schema) + if err != nil { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Failed to decode current backend config", + fmt.Sprintf("The backend configuration created by the most recent run of \"terraform init\" could not be decoded: %s. The configuration may have been initialized by an earlier version that used an incompatible configuration structure. 
Run \"terraform init -reconfigure\" to force re-initialization of the backend.", err), + )) + return nil, diags + } + + // Validate the config and then configure the backend + newVal, validDiags := b.PrepareConfig(configVal) + diags = diags.Append(validDiags) + if validDiags.HasErrors() { + return nil, diags + } + + configDiags := b.Configure(newVal) + diags = diags.Append(configDiags) + if configDiags.HasErrors() { + return nil, diags + } + + return b, diags +} + //------------------------------------------------------------------- // Backend Config Scenarios // diff --git a/command/meta_backend_test.go b/command/meta_backend_test.go index 92a3ee258..0b099c3d3 100644 --- a/command/meta_backend_test.go +++ b/command/meta_backend_test.go @@ -22,6 +22,7 @@ import ( backendInit "github.com/hashicorp/terraform/backend/init" backendLocal "github.com/hashicorp/terraform/backend/local" + backendInmem "github.com/hashicorp/terraform/backend/remote-state/inmem" ) // Test empty directory with no config/state creates a local state. @@ -1771,7 +1772,7 @@ func TestMetaBackend_configureWithExtra(t *testing.T) { } } -// when confniguring a default local state, don't delete local state +// when configuring a default local state, don't delete local state func TestMetaBackend_localDoesNotDeleteLocal(t *testing.T) { // Create a temporary working directory that is empty td := tempDir(t) @@ -1860,6 +1861,30 @@ func TestMetaBackend_configToExtra(t *testing.T) { } } +// no config; return inmem backend stored in state +func TestBackendFromState(t *testing.T) { + td := tempDir(t) + copy.CopyDir(testFixturePath("backend-from-state"), td) + defer os.RemoveAll(td) + defer testChdir(t, td)() + + // Setup the meta + m := testMetaBackend(t, nil) + // terraform caches a small "state" file that stores the backend config. + // This test must override m.dataDir so it loads the "terraform.tfstate" file in the + // test directory as the backend config cache + m.OverrideDataDir = td + + stateBackend, diags := m.backendFromState() + if diags.HasErrors() { + t.Fatal(diags.Err()) + } + + if _, ok := stateBackend.(*backendInmem.Backend); !ok { + t.Fatal("did not get expected inmem backend") + } +} + func testMetaBackend(t *testing.T, args []string) *Meta { var m Meta m.Ui = new(cli.MockUi) diff --git a/command/meta_config.go b/command/meta_config.go index ec7b3ec16..2bb104a11 100644 --- a/command/meta_config.go +++ b/command/meta_config.go @@ -173,10 +173,13 @@ func (m *Meta) dirIsConfigPath(dir string) bool { // directory even if loadBackendConfig succeeded.) func (m *Meta) loadBackendConfig(rootDir string) (*configs.Backend, tfdiags.Diagnostics) { mod, diags := m.loadSingleModule(rootDir) + + // Only return error diagnostics at this point. Any warnings will be caught + // again later and duplicated in the output. if diags.HasErrors() { return nil, diags } - return mod.Backend, diags + return mod.Backend, nil } // loadValuesFile loads a file that defines a single map of key/value pairs. 
diff --git a/command/meta_test.go b/command/meta_test.go index 978dca8c7..10ea80406 100644 --- a/command/meta_test.go +++ b/command/meta_test.go @@ -78,7 +78,7 @@ func TestMetaInputMode(t *testing.T) { t.Fatalf("err: %s", err) } - if m.InputMode() != terraform.InputModeStd|terraform.InputModeVarUnset { + if m.InputMode() != terraform.InputModeStd { t.Fatalf("bad: %#v", m.InputMode()) } } @@ -98,7 +98,7 @@ func TestMetaInputMode_envVar(t *testing.T) { } off := terraform.InputMode(0) - on := terraform.InputModeStd | terraform.InputModeVarUnset + on := terraform.InputModeStd cases := []struct { EnvVar string Expected terraform.InputMode @@ -134,63 +134,6 @@ func TestMetaInputMode_disable(t *testing.T) { } } -func TestMetaInputMode_defaultVars(t *testing.T) { - test = false - defer func() { test = true }() - - // Create a temporary directory for our cwd - d := tempDir(t) - os.MkdirAll(d, 0755) - defer os.RemoveAll(d) - defer testChdir(t, d)() - - // Create the default vars file - err := ioutil.WriteFile( - filepath.Join(d, DefaultVarsFilename), - []byte(""), - 0644) - if err != nil { - t.Fatalf("err: %s", err) - } - - m := new(Meta) - args := []string{} - args, err = m.process(args, false) - if err != nil { - t.Fatalf("err: %s", err) - } - - fs := m.extendedFlagSet("foo") - if err := fs.Parse(args); err != nil { - t.Fatalf("err: %s", err) - } - - if m.InputMode()&terraform.InputModeVar == 0 { - t.Fatalf("bad: %#v", m.InputMode()) - } -} - -func TestMetaInputMode_vars(t *testing.T) { - test = false - defer func() { test = true }() - - m := new(Meta) - args := []string{"-var", "foo=bar"} - - fs := m.extendedFlagSet("foo") - if err := fs.Parse(args); err != nil { - t.Fatalf("err: %s", err) - } - - if m.InputMode()&terraform.InputModeVar == 0 { - t.Fatalf("bad: %#v", m.InputMode()) - } - - if m.InputMode()&terraform.InputModeVarUnset == 0 { - t.Fatalf("bad: %#v", m.InputMode()) - } -} - func TestMeta_initStatePaths(t *testing.T) { m := new(Meta) m.initStatePaths() diff --git a/command/output.go b/command/output.go index c2c0fd672..ac07764c7 100644 --- a/command/output.go +++ b/command/output.go @@ -120,7 +120,7 @@ func (c *OutputCommand) Run(args []string) int { "become available. If you are using interpolation, please verify\n" + "the interpolated value is not empty. 
You can use the \n" + "`terraform console` command to assist.") - return 1 + return 0 } if name == "" { diff --git a/command/output_test.go b/command/output_test.go index dd447c60f..cbb03ddf8 100644 --- a/command/output_test.go +++ b/command/output_test.go @@ -136,7 +136,7 @@ func TestOutput_emptyOutputsErr(t *testing.T) { args := []string{ "-state", statePath, } - if code := c.Run(args); code != 1 { + if code := c.Run(args); code != 0 { t.Fatalf("bad: \n%s", ui.ErrorWriter.String()) } } @@ -292,7 +292,7 @@ func TestOutput_noArgs(t *testing.T) { } args := []string{} - if code := c.Run(args); code != 1 { + if code := c.Run(args); code != 0 { t.Fatalf("bad: \n%s", ui.OutputWriter.String()) } } @@ -313,7 +313,7 @@ func TestOutput_noState(t *testing.T) { "-state", statePath, "foo", } - if code := c.Run(args); code != 1 { + if code := c.Run(args); code != 0 { t.Fatalf("bad: \n%s", ui.ErrorWriter.String()) } } @@ -335,7 +335,7 @@ func TestOutput_noVars(t *testing.T) { "-state", statePath, "bar", } - if code := c.Run(args); code != 1 { + if code := c.Run(args); code != 0 { t.Fatalf("bad: \n%s", ui.ErrorWriter.String()) } } diff --git a/command/plan.go b/command/plan.go index d3b159653..e3e181566 100644 --- a/command/plan.go +++ b/command/plan.go @@ -96,7 +96,6 @@ func (c *PlanCommand) Run(args []string) int { opReq := c.Operation(b) opReq.ConfigDir = configPath opReq.Destroy = destroy - opReq.PlanRefresh = refresh opReq.PlanOutPath = outPath opReq.PlanRefresh = refresh opReq.Type = backend.OperationTypePlan @@ -202,6 +201,10 @@ Usage: terraform plan [options] [DIR] Options: + -compact-warnings If Terraform produces any warnings that are not + accompanied by errors, show them in a more compact form + that includes only the summary messages. + -destroy If set, a plan will be generated to destroy all resources managed by the given configuration and state. diff --git a/command/plan_test.go b/command/plan_test.go index 2d384edf4..1891833e7 100644 --- a/command/plan_test.go +++ b/command/plan_test.go @@ -124,7 +124,10 @@ func TestPlan_destroy(t *testing.T) { AttrsJSON: []byte(`{"id":"bar"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) outPath := testTempFile(t) @@ -240,7 +243,10 @@ func TestPlan_outPathNoChange(t *testing.T) { AttrsJSON: []byte(`{"id":"bar","ami":"bar","network_interface":[{"description":"Main network interface","device_index":"0"}]}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, originalState) @@ -575,6 +581,10 @@ func TestPlan_varsUnset(t *testing.T) { test = false defer func() { test = true }() + // The plan command will prompt for interactive input of var.foo. + // We'll answer "bar" to that prompt, which should then allow this + // configuration to apply even though var.foo doesn't have a + // default value and there are no -var arguments on our command line. 
defaultInputReader = bytes.NewBufferString("bar\n") p := planVarsFixtureProvider() diff --git a/command/plugins.go b/command/plugins.go index bc0a02933..6a3a7f0ab 100644 --- a/command/plugins.go +++ b/command/plugins.go @@ -15,6 +15,7 @@ import ( plugin "github.com/hashicorp/go-plugin" "github.com/kardianos/osext" + "github.com/hashicorp/terraform/addrs" terraformProvider "github.com/hashicorp/terraform/builtin/providers/terraform" tfplugin "github.com/hashicorp/terraform/plugin" "github.com/hashicorp/terraform/plugin/discovery" @@ -35,16 +36,16 @@ type multiVersionProviderResolver struct { // (will produce an error if one is set). This should be used only in // exceptional circumstances since it forces the provider's release // schedule to be tied to that of Terraform Core. - Internal map[string]providers.Factory + Internal map[addrs.Provider]providers.Factory } -func choosePlugins(avail discovery.PluginMetaSet, internal map[string]providers.Factory, reqd discovery.PluginRequirements) map[string]discovery.PluginMeta { +func chooseProviders(avail discovery.PluginMetaSet, internal map[addrs.Provider]providers.Factory, reqd discovery.PluginRequirements) map[string]discovery.PluginMeta { candidates := avail.ConstrainVersions(reqd) ret := map[string]discovery.PluginMeta{} for name, metas := range candidates { // If the provider is in our internal map then we ignore any // discovered plugins for it since these are dealt with separately. - if _, isInternal := internal[name]; isInternal { + if _, isInternal := internal[addrs.NewLegacyProvider(name)]; isInternal { continue } @@ -58,18 +59,18 @@ func choosePlugins(avail discovery.PluginMetaSet, internal map[string]providers. func (r *multiVersionProviderResolver) ResolveProviders( reqd discovery.PluginRequirements, -) (map[string]providers.Factory, []error) { - factories := make(map[string]providers.Factory, len(reqd)) +) (map[addrs.Provider]providers.Factory, []error) { + factories := make(map[addrs.Provider]providers.Factory, len(reqd)) var errs []error - chosen := choosePlugins(r.Available, r.Internal, reqd) + chosen := chooseProviders(r.Available, r.Internal, reqd) for name, req := range reqd { - if factory, isInternal := r.Internal[name]; isInternal { + if factory, isInternal := r.Internal[addrs.NewLegacyProvider(name)]; isInternal { if !req.Versions.Unconstrained() { errs = append(errs, fmt.Errorf("provider.%s: this provider is built in to Terraform and so it does not support version constraints", name)) continue } - factories[name] = factory + factories[addrs.NewLegacyProvider(name)] = factory continue } @@ -84,7 +85,7 @@ func (r *multiVersionProviderResolver) ResolveProviders( continue } - factories[name] = providerFactory(newest) + factories[addrs.NewLegacyProvider(name)] = providerFactory(newest) } else { msg := fmt.Sprintf("provider.%s: no suitable version installed", name) @@ -280,9 +281,9 @@ func (m *Meta) providerResolver() providers.Resolver { } } -func (m *Meta) internalProviders() map[string]providers.Factory { - return map[string]providers.Factory{ - "terraform": func() (providers.Interface, error) { +func (m *Meta) internalProviders() map[addrs.Provider]providers.Factory { + return map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("terraform"): func() (providers.Interface, error) { return terraformProvider.NewProvider(), nil }, } @@ -297,7 +298,7 @@ func (m *Meta) missingPlugins(avail discovery.PluginMetaSet, reqd discovery.Plug for name, versionSet := range reqd { // internal providers can't be missing - if _, ok 
:= internal[name]; ok { + if _, ok := internal[addrs.NewLegacyProvider(name)]; ok { continue } diff --git a/command/plugins_test.go b/command/plugins_test.go index 1a2719c3a..19eaf5964 100644 --- a/command/plugins_test.go +++ b/command/plugins_test.go @@ -24,8 +24,8 @@ func TestMultiVersionProviderResolver(t *testing.T) { }) resolver := &multiVersionProviderResolver{ - Internal: map[string]providers.Factory{ - "internal": providers.FactoryFixed( + Internal: map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("internal"): providers.FactoryFixed( &terraform.MockProvider{ GetSchemaReturn: &terraform.ProviderSchema{ ResourceTypes: map[string]*configschema.Block{ @@ -51,7 +51,7 @@ func TestMultiVersionProviderResolver(t *testing.T) { if ct := len(got); ct != 1 { t.Errorf("wrong number of results %d; want 1", ct) } - if _, exists := got["plugin"]; !exists { + if _, exists := got[addrs.NewLegacyProvider("plugin")]; !exists { t.Errorf("provider \"plugin\" not in result") } }) @@ -79,7 +79,7 @@ func TestMultiVersionProviderResolver(t *testing.T) { if ct := len(got); ct != 1 { t.Errorf("wrong number of results %d; want 1", ct) } - if _, exists := got["internal"]; !exists { + if _, exists := got[addrs.NewLegacyProvider("internal")]; !exists { t.Errorf("provider \"internal\" not in result") } }) @@ -99,6 +99,7 @@ func TestMultiVersionProviderResolver(t *testing.T) { func TestPluginPath(t *testing.T) { td := testTempDir(t) + defer os.RemoveAll(td) defer testChdir(t, td)() pluginPath := []string{"a", "b", "c"} @@ -121,7 +122,7 @@ func TestPluginPath(t *testing.T) { func TestInternalProviders(t *testing.T) { m := Meta{} internal := m.internalProviders() - tfProvider, err := internal["terraform"]() + tfProvider, err := internal[addrs.NewLegacyProvider("terraform")]() if err != nil { t.Fatal(err) } @@ -148,10 +149,10 @@ func (i *mockProviderInstaller) FileName(provider, version string) string { return fmt.Sprintf("terraform-provider-%s_v%s_x4", provider, version) } -func (i *mockProviderInstaller) Get(provider addrs.ProviderType, req discovery.Constraints) (discovery.PluginMeta, tfdiags.Diagnostics, error) { +func (i *mockProviderInstaller) Get(provider addrs.Provider, req discovery.Constraints) (discovery.PluginMeta, tfdiags.Diagnostics, error) { var diags tfdiags.Diagnostics noMeta := discovery.PluginMeta{} - versions := i.Providers[provider.Name] + versions := i.Providers[provider.Type] if len(versions) == 0 { return noMeta, diags, fmt.Errorf("provider %q not found", provider) } @@ -169,7 +170,7 @@ func (i *mockProviderInstaller) Get(provider addrs.ProviderType, req discovery.C if req.Allows(version) { // provider filename - name := i.FileName(provider.Name, v) + name := i.FileName(provider.Type, v) path := filepath.Join(i.Dir, name) f, err := os.Create(path) if err != nil { @@ -177,7 +178,7 @@ func (i *mockProviderInstaller) Get(provider addrs.ProviderType, req discovery.C } f.Close() return discovery.PluginMeta{ - Name: provider.Name, + Name: provider.Type, Version: discovery.VersionStr(v), Path: path, }, diags, nil @@ -200,8 +201,8 @@ func (i *mockProviderInstaller) PurgeUnused(map[string]discovery.PluginMeta) (di type callbackPluginInstaller func(provider string, req discovery.Constraints) (discovery.PluginMeta, tfdiags.Diagnostics, error) -func (cb callbackPluginInstaller) Get(provider addrs.ProviderType, req discovery.Constraints) (discovery.PluginMeta, tfdiags.Diagnostics, error) { - return cb(provider.Name, req) +func (cb callbackPluginInstaller) Get(provider addrs.Provider, req 
discovery.Constraints) (discovery.PluginMeta, tfdiags.Diagnostics, error) { + return cb(provider.Type, req) } func (cb callbackPluginInstaller) PurgeUnused(map[string]discovery.PluginMeta) (discovery.PluginMetaSet, error) { diff --git a/command/providers.go b/command/providers.go index 368fd0e6b..958110afb 100644 --- a/command/providers.go +++ b/command/providers.go @@ -3,8 +3,8 @@ package command import ( "fmt" "path/filepath" - "sort" + "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/configs" "github.com/hashicorp/terraform/moduledeps" "github.com/hashicorp/terraform/terraform" @@ -118,14 +118,13 @@ func (c *ProvidersCommand) Run(args []string) int { } func providersCommandPopulateTreeNode(node treeprint.Tree, deps *moduledeps.Module) { - names := make([]string, 0, len(deps.Providers)) - for name := range deps.Providers { - names = append(names, string(name)) + fqns := make([]addrs.Provider, 0, len(deps.Providers)) + for fqn := range deps.Providers { + fqns = append(fqns, fqn) } - sort.Strings(names) - for _, name := range names { - dep := deps.Providers[moduledeps.ProviderInstance(name)] + for _, fqn := range fqns { + dep := deps.Providers[fqn] versionsStr := dep.Constraints.String() if versionsStr != "" { versionsStr = " " + versionsStr @@ -137,7 +136,7 @@ func providersCommandPopulateTreeNode(node treeprint.Tree, deps *moduledeps.Modu case moduledeps.ProviderDependencyFromState: reasonStr = " (from state)" } - node.AddNode(fmt.Sprintf("provider.%s%s%s", name, versionsStr, reasonStr)) + node.AddNode(fmt.Sprintf("provider.%s%s%s", fqn.LegacyString(), versionsStr, reasonStr)) } for _, child := range deps.Children { diff --git a/command/providers_schema.go b/command/providers_schema.go index f26a1c96f..2f1c5f941 100644 --- a/command/providers_schema.go +++ b/command/providers_schema.go @@ -81,6 +81,7 @@ func (c *ProvidersSchemaCommand) Run(args []string) int { opReq := c.Operation(b) opReq.ConfigDir = cwd opReq.ConfigLoader, err = c.initConfigLoader() + opReq.AllowUnsetVariables = true if err != nil { diags = diags.Append(err) c.showDiagnostics(diags) diff --git a/command/refresh.go b/command/refresh.go index db7ca9c43..dbb4ab61e 100644 --- a/command/refresh.go +++ b/command/refresh.go @@ -124,6 +124,10 @@ Options: modifying. Defaults to the "-state-out" path with ".backup" extension. Set to "-" to disable backup. + -compact-warnings If Terraform produces any warnings that are not + accompanied by errors, show them in a more compact form + that includes only the summary messages. + -input=true Ask for input for variables if not directly set. -lock=true Lock the state file when locking is supported. 
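[Editor's note] A recurring change throughout this diff (plugins.go, jsonplan, jsonstate, jsonprovider) is re-keying provider maps from bare strings to structured addrs.Provider values via addrs.NewLegacyProvider, with LegacyString() recovering the flat name where the JSON output formats still expect it. The sketch below illustrates that pattern in isolation; the Provider struct and both functions are simplified stand-ins written for this note, not the real addrs package.

package main

import "fmt"

// Provider is a structured provider address: a registry hostname, a
// namespace, and a type name, rather than a single flat string.
// (Simplified stand-in for the real addrs.Provider.)
type Provider struct {
	Hostname  string
	Namespace string
	Type      string
}

// NewLegacyProvider parks a bare legacy name such as "test" under the
// placeholder namespace "-" on the default registry host, which is the
// transitional scheme this diff relies on.
func NewLegacyProvider(name string) Provider {
	return Provider{
		Hostname:  "registry.terraform.io",
		Namespace: "-",
		Type:      name,
	}
}

// LegacyString recovers the flat name for output paths that still expect
// it, such as JSON schema keys.
func (p Provider) LegacyString() string {
	return p.Type
}

func main() {
	// Because the struct is comparable, it can key a map directly; every
	// lookup now goes through the same constructor instead of a raw string,
	// so callers cannot silently disagree about which provider a name means.
	factories := map[Provider]string{
		NewLegacyProvider("terraform"): "built-in terraform provider",
		NewLegacyProvider("test"):      "test provider plugin",
	}
	p := NewLegacyProvider("test")
	fmt.Printf("%s -> %s\n", p.LegacyString(), factories[p])
}

This is also why strings like "registry.terraform.io/-/test" appear in the updated test expectations below: they are the display form of the same structured address.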
diff --git a/command/refresh_test.go b/command/refresh_test.go index aec83d334..c0a94fc24 100644 --- a/command/refresh_test.go +++ b/command/refresh_test.go @@ -27,6 +27,8 @@ import ( "github.com/hashicorp/terraform/terraform" ) +var equateEmpty = cmpopts.EquateEmpty() + func TestRefresh(t *testing.T) { state := testState() statePath := testStateFile(t, state) @@ -275,7 +277,8 @@ func TestRefresh_defaultState(t *testing.T) { expected := &states.ResourceInstanceObjectSrc{ Status: states.ObjectReady, AttrsJSON: []byte("{\n \"ami\": null,\n \"id\": \"yes\"\n }"), - Dependencies: []addrs.Referenceable{}, + Dependencies: []addrs.AbsResource{}, + DependsOn: []addrs.Referenceable{}, } if !reflect.DeepEqual(actual, expected) { t.Fatalf("wrong new object\ngot: %swant: %s", spew.Sdump(actual), spew.Sdump(expected)) @@ -339,7 +342,8 @@ func TestRefresh_outPath(t *testing.T) { expected := &states.ResourceInstanceObjectSrc{ Status: states.ObjectReady, AttrsJSON: []byte("{\n \"ami\": null,\n \"id\": \"yes\"\n }"), - Dependencies: []addrs.Referenceable{}, + Dependencies: []addrs.AbsResource{}, + DependsOn: []addrs.Referenceable{}, } if !reflect.DeepEqual(actual, expected) { t.Fatalf("wrong new object\ngot: %swant: %s", spew.Sdump(actual), spew.Sdump(expected)) @@ -568,7 +572,8 @@ func TestRefresh_backup(t *testing.T) { expected := &states.ResourceInstanceObjectSrc{ Status: states.ObjectReady, AttrsJSON: []byte("{\n \"ami\": null,\n \"id\": \"changed\"\n }"), - Dependencies: []addrs.Referenceable{}, + Dependencies: []addrs.AbsResource{}, + DependsOn: []addrs.Referenceable{}, } if !reflect.DeepEqual(actual, expected) { t.Fatalf("wrong new object\ngot: %swant: %s", spew.Sdump(actual), spew.Sdump(expected)) @@ -623,8 +628,10 @@ func TestRefresh_disableBackup(t *testing.T) { } newState := testStateRead(t, statePath) - if !reflect.DeepEqual(newState, state) { - t.Fatalf("bad: %#v", newState) + if !cmp.Equal(state, newState, equateEmpty) { + spew.Config.DisableMethods = true + fmt.Println(cmp.Diff(state, newState, equateEmpty)) + t.Fatalf("bad: %s", newState) } newState = testStateRead(t, outPath) @@ -632,7 +639,8 @@ func TestRefresh_disableBackup(t *testing.T) { expected := &states.ResourceInstanceObjectSrc{ Status: states.ObjectReady, AttrsJSON: []byte("{\n \"ami\": null,\n \"id\": \"yes\"\n }"), - Dependencies: []addrs.Referenceable{}, + Dependencies: []addrs.AbsResource{}, + DependsOn: []addrs.Referenceable{}, } if !reflect.DeepEqual(actual, expected) { t.Fatalf("wrong new object\ngot: %swant: %s", spew.Sdump(actual), spew.Sdump(expected)) @@ -749,10 +757,10 @@ foo = "bar" const testRefreshStr = ` test_instance.foo: ID = yes - provider = provider.test + provider = provider["registry.terraform.io/-/test"] ` const testRefreshCwdStr = ` test_instance.foo: ID = yes - provider = provider.test + provider = provider["registry.terraform.io/-/test"] ` diff --git a/command/show.go b/command/show.go index 20c918384..d7b5f7ef8 100644 --- a/command/show.go +++ b/command/show.go @@ -6,6 +6,7 @@ import ( "strings" "github.com/hashicorp/terraform/backend" + localBackend "github.com/hashicorp/terraform/backend/local" "github.com/hashicorp/terraform/command/format" "github.com/hashicorp/terraform/command/jsonplan" "github.com/hashicorp/terraform/command/jsonstate" @@ -91,6 +92,7 @@ func (c *ShowCommand) Run(args []string) int { opReq.ConfigDir = cwd opReq.PlanFile = planFile opReq.ConfigLoader, err = c.initConfigLoader() + opReq.AllowUnsetVariables = true if err != nil { diags = diags.Append(err) c.showDiagnostics(diags) @@ 
-151,8 +153,16 @@ func (c *ShowCommand) Run(args []string) int { c.Ui.Output(string(jsonPlan)) return 0 } - dispPlan := format.NewPlan(plan.Changes) - c.Ui.Output(dispPlan.Format(c.Colorize())) + + // FIXME: We currently call into the local backend for this, since + // the "terraform plan" logic lives there and our package call graph + // means we can't orient this dependency the other way around. In + // future we'll hopefully be able to refactor the backend architecture + // a little so that CLI UI rendering always happens in this "command" + // package rather than in the backends themselves, but for now we're + // accepting this oddity because "terraform show" is a less commonly + // used way to render a plan than "terraform plan" is. + localBackend.RenderPlan(plan, stateFile.State, schemas, c.Ui, c.Colorize()) return 0 } diff --git a/command/show_test.go b/command/show_test.go index b66a17168..86a4a790c 100644 --- a/command/show_test.go +++ b/command/show_test.go @@ -2,6 +2,7 @@ package command import ( "encoding/json" + "fmt" "io/ioutil" "os" "path/filepath" @@ -41,7 +42,9 @@ func TestShow(t *testing.T) { func TestShow_noArgs(t *testing.T) { // Create the default state statePath := testStateFile(t, testState()) - defer testChdir(t, filepath.Dir(statePath))() + stateDir := filepath.Dir(statePath) + defer os.RemoveAll(stateDir) + defer testChdir(t, stateDir)() ui := new(cli.MockUi) c := &ShowCommand{ @@ -51,16 +54,73 @@ func TestShow_noArgs(t *testing.T) { }, } - args := []string{} + // the statefile created by testStateFile is named state.tfstate + // so one arg is required + args := []string{"state.tfstate"} if code := c.Run(args); code != 0 { t.Fatalf("bad: \n%s", ui.OutputWriter.String()) } + + if !strings.Contains(ui.OutputWriter.String(), "# test_instance.foo:") { + t.Fatalf("bad: \n%s", ui.ErrorWriter.String()) + } +} + +// https://github.com/hashicorp/terraform/issues/21462 +func TestShow_aliasedProvider(t *testing.T) { + // Create the default state with aliased resource + testState := states.BuildState(func(s *states.SyncState) { + s.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_instance", + Name: "foo", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), + &states.ResourceInstanceObjectSrc{ + // The weird whitespace here is reflective of how this would + // get written out in a real state file, due to the indentation + // of all of the containing wrapping objects and arrays. 
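+            // Dependencies ([]addrs.AbsResource) and DependsOn + // ([]addrs.Referenceable) are both present but empty in this fixture.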
+ AttrsJSON: []byte("{\n \"id\": \"bar\"\n }"), + Status: states.ObjectReady, + Dependencies: []addrs.AbsResource{}, + DependsOn: []addrs.Referenceable{}, + }, + addrs.RootModuleInstance.ProviderConfigAliased(addrs.NewLegacyProvider("test"), "alias"), + ) + }) + + statePath := testStateFile(t, testState) + stateDir := filepath.Dir(statePath) + defer os.RemoveAll(stateDir) + defer testChdir(t, stateDir)() + + ui := new(cli.MockUi) + c := &ShowCommand{ + Meta: Meta{ + testingOverrides: metaOverridesForProvider(testProvider()), + Ui: ui, + }, + } + + fmt.Println(os.Getwd()) + + // the statefile created by testStateFile is named state.tfstate + args := []string{"state.tfstate"} + if code := c.Run(args); code != 0 { + t.Fatalf("bad exit code: \n%s", ui.OutputWriter.String()) + } + + if strings.Contains(ui.OutputWriter.String(), "# missing schema for provider \"test.alias\"") { + t.Fatalf("bad output: \n%s", ui.OutputWriter.String()) + } } func TestShow_noArgsNoState(t *testing.T) { // Create the default state statePath := testStateFile(t, testState()) - defer testChdir(t, filepath.Dir(statePath))() + stateDir := filepath.Dir(statePath) + defer os.RemoveAll(stateDir) + defer testChdir(t, stateDir)() ui := new(cli.MockUi) c := &ShowCommand{ @@ -70,7 +130,8 @@ func TestShow_noArgsNoState(t *testing.T) { }, } - args := []string{} + // the statefile created by testStateFile is named state.tfstate + args := []string{"state.tfstate"} if code := c.Run(args); code != 0 { t.Fatalf("bad: \n%s", ui.OutputWriter.String()) } @@ -79,7 +140,7 @@ func TestShow_noArgsNoState(t *testing.T) { func TestShow_plan(t *testing.T) { planPath := testPlanFileNoop(t) - ui := new(cli.MockUi) + ui := cli.NewMockUi() c := &ShowCommand{ Meta: Meta{ testingOverrides: metaOverridesForProvider(testProvider()), @@ -93,10 +154,42 @@ func TestShow_plan(t *testing.T) { if code := c.Run(args); code != 0 { t.Fatalf("bad: \n%s", ui.ErrorWriter.String()) } + + want := `Terraform will perform the following actions` + got := ui.OutputWriter.String() + if !strings.Contains(got, want) { + t.Errorf("missing expected output\nwant: %s\ngot:\n%s", want, got) + } +} + +func TestShow_planWithChanges(t *testing.T) { + planPathWithChanges := showFixturePlanFile(t, plans.DeleteThenCreate) + + ui := cli.NewMockUi() + c := &ShowCommand{ + Meta: Meta{ + testingOverrides: metaOverridesForProvider(showFixtureProvider()), + Ui: ui, + }, + } + + args := []string{ + planPathWithChanges, + } + + if code := c.Run(args); code != 0 { + t.Fatalf("bad: \n%s", ui.ErrorWriter.String()) + } + + want := `test_instance.foo must be replaced` + got := ui.OutputWriter.String() + if !strings.Contains(got, want) { + t.Errorf("missing expected output\nwant: %s\ngot:\n%s", want, got) + } } func TestShow_plan_json(t *testing.T) { - planPath := showFixturePlanFile(t) + planPath := showFixturePlanFile(t, plans.Create) ui := new(cli.MockUi) c := &ShowCommand{ @@ -118,6 +211,7 @@ func TestShow_plan_json(t *testing.T) { func TestShow_state(t *testing.T) { originalState := testState() statePath := testStateFile(t, originalState) + defer os.RemoveAll(filepath.Dir(statePath)) ui := new(cli.MockUi) c := &ShowCommand{ @@ -371,9 +465,10 @@ func showFixtureProvider() *terraform.MockProvider { } // showFixturePlanFile creates a plan file at a temporary location containing a -// single change to create the test_instance.foo that is included in the "show" +// single change to create or update the test_instance.foo that is included in the "show" // test fixture, returning the location of that 
plan file. -func showFixturePlanFile(t *testing.T) string { +// `action` is the planned change you would like to elicit. +func showFixturePlanFile(t *testing.T, action plans.Action) string { _, snap := testModuleWithSnapshot(t, "show") plannedVal := cty.ObjectVal(map[string]cty.Value{ "id": cty.UnknownVal(cty.String), @@ -394,9 +489,12 @@ func showFixturePlanFile(t *testing.T) string { Type: "test_instance", Name: "foo", }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), - ProviderAddr: addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + ProviderAddr: addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ChangeSrc: plans.ChangeSrc{ - Action: plans.Create, + Action: action, Before: priorValRaw, After: plannedValRaw, }, diff --git a/command/state_mv.go b/command/state_mv.go index 00b48306f..b4dc14321 100644 --- a/command/state_mv.go +++ b/command/state_mv.go @@ -278,36 +278,32 @@ func (c *StateMvCommand) Run(args []string) int { c.Ui.Output(fmt.Sprintf("%s %q to %q", prefix, addrFrom.String(), args[1])) if !dryRun { fromResourceAddr := addrFrom.ContainingResource() - fromProviderAddr := ssFrom.Resource(fromResourceAddr).ProviderConfig + fromResource := ssFrom.Resource(fromResourceAddr) + fromProviderAddr := fromResource.ProviderConfig ssFrom.ForgetResourceInstanceAll(addrFrom) ssFrom.RemoveResourceIfEmpty(fromResourceAddr) + // since this is moving an instance, we can infer the target + // mode from the address. + toEachMode := eachModeForInstanceKey(addrTo.Resource.Key) + rs := stateTo.Resource(addrTo.ContainingResource()) if rs == nil { // If we're moving to an address without an index then that // suggests the user's intent is to establish both the // resource and the instance at the same time (since the - // address covers both), but if there's an index in the - // target then the resource must already exist. - if addrTo.Resource.Key != addrs.NoKey { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - msgInvalidTarget, - fmt.Sprintf("Cannot move to %s: %s does not exist in the current state.", addrTo, addrTo.ContainingResource()), - )) - c.showDiagnostics(diags) - return 1 - } - + // address covers both). If there's an index in the + // target then allow creating the new instance here. resourceAddr := addrTo.ContainingResource() stateTo.SyncWrapper().SetResourceMeta( resourceAddr, - states.NoEach, + toEachMode, fromProviderAddr, // in this case, we bring the provider along as if we were moving the whole resource ) rs = stateTo.Resource(resourceAddr) } + rs.EachMode = toEachMode rs.Instances[addrTo.Resource.Key] = is } default: @@ -317,6 +313,28 @@ func (c *StateMvCommand) Run(args []string) int { fmt.Sprintf("Cannot move %s: Terraform doesn't know how to move this object.", rawAddrFrom), )) } + + // Look for any dependencies that may be affected and + // remove them to ensure they are recreated in full. + for _, mod := range stateTo.Modules { + for _, res := range mod.Resources { + for _, ins := range res.Instances { + if ins.Current == nil { + continue + } + + for _, dep := range ins.Current.Dependencies { + // check both directions here, since we may be moving + // an instance which is in a resource, or a module + // which can contain a resource. 
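+							// Dropping the entire list rather than pruning + // individual entries is the conservative option: the + // dependencies are recreated in full rather than left + // pointing at a stale address.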
+ if dep.TargetContains(rawAddrFrom) || rawAddrFrom.TargetContains(dep) { + ins.Current.Dependencies = nil + break + } + } + } + } + } } if dryRun { @@ -358,6 +376,20 @@ func (c *StateMvCommand) Run(args []string) int { return 0 } +func eachModeForInstanceKey(key addrs.InstanceKey) states.EachMode { + switch key.(type) { + case addrs.IntKey: + return states.EachList + case addrs.StringKey: + return states.EachMap + default: + if key == addrs.NoKey { + return states.NoEach + } + panic(fmt.Sprintf("don't know an each mode for instance key %#v", key)) + } +} + // sourceObjectAddrs takes a single source object address and expands it to // potentially multiple objects that need to be handled within it. // diff --git a/command/state_mv_test.go b/command/state_mv_test.go index 504c4fca1..d7968b5e6 100644 --- a/command/state_mv_test.go +++ b/command/state_mv_test.go @@ -27,7 +27,10 @@ func TestStateMv(t *testing.T) { AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) s.SetResourceInstanceCurrent( addrs.Resource{ @@ -36,10 +39,14 @@ func TestStateMv(t *testing.T) { Name: "baz", }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), &states.ResourceInstanceObjectSrc{ - AttrsJSON: []byte(`{"id":"foo","foo":"value","bar":"value"}`), - Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"foo","foo":"value","bar":"value"}`), + Status: states.ObjectReady, + Dependencies: []addrs.AbsResource{mustResourceAddr("test_instance.foo")}, + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), ) }) statePath := testStateFile(t, state) @@ -61,7 +68,7 @@ func TestStateMv(t *testing.T) { "test_instance.bar", } if code := c.Run(args); code != 0 { - t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + t.Fatalf("return code: %d\n\n%s", code, ui.ErrorWriter.String()) } // Test it is correct @@ -73,6 +80,70 @@ func TestStateMv(t *testing.T) { t.Fatalf("bad: %#v", backups) } testStateOutput(t, backups[0], testStateMvOutputOriginal) + + // Change the single instance to a counted instance + args = []string{ + "-state", statePath, + "test_instance.bar", + "test_instance.bar[0]", + } + if code := c.Run(args); code != 0 { + t.Fatalf("return code: %d\n\n%s", code, ui.ErrorWriter.String()) + } + + // extract the resource and verify the mode + s := testStateRead(t, statePath) + addr, diags := addrs.ParseAbsResourceStr("test_instance.bar") + if diags.HasErrors() { + t.Fatal(diags.Err()) + } + i := s.Resource(addr) + if i.EachMode != states.EachList { + t.Fatalf("expected each mode List, got %s", i.EachMode) + } + + // change from list to map + args = []string{ + "-state", statePath, + "test_instance.bar[0]", + "test_instance.bar[\"baz\"]", + } + if code := c.Run(args); code != 0 { + t.Fatalf("return code: %d\n\n%s", code, ui.ErrorWriter.String()) + } + + // extract the resource and verify the mode + s = testStateRead(t, statePath) + addr, diags = addrs.ParseAbsResourceStr("test_instance.bar") + if diags.HasErrors() { + t.Fatal(diags.Err()) + } + i = s.Resource(addr) + if i.EachMode != states.EachMap { + t.Fatalf("expected each mode Map, got %s", i.EachMode) + } + + // change from map back to single + args = []string{ + "-state", statePath, + 
"test_instance.bar[\"baz\"]", + "test_instance.bar", + } + if code := c.Run(args); code != 0 { + t.Fatalf("return code: %d\n\n%s", code, ui.ErrorWriter.String()) + } + + // extract the resource and verify the mode + s = testStateRead(t, statePath) + addr, diags = addrs.ParseAbsResourceStr("test_instance.bar") + if diags.HasErrors() { + t.Fatal(diags.Err()) + } + i = s.Resource(addr) + if i.EachMode != states.NoEach { + t.Fatalf("expected each mode NoEach, got %s", i.EachMode) + } + } func TestStateMv_resourceToInstance(t *testing.T) { @@ -87,7 +158,10 @@ func TestStateMv_resourceToInstance(t *testing.T) { AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) s.SetResourceInstanceCurrent( addrs.Resource{ @@ -96,10 +170,14 @@ func TestStateMv_resourceToInstance(t *testing.T) { Name: "baz", }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), &states.ResourceInstanceObjectSrc{ - AttrsJSON: []byte(`{"id":"foo","foo":"value","bar":"value"}`), - Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"foo","foo":"value","bar":"value"}`), + Status: states.ObjectReady, + Dependencies: []addrs.AbsResource{mustResourceAddr("test_instance.foo")}, + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), ) s.SetResourceMeta( addrs.Resource{ @@ -108,7 +186,10 @@ func TestStateMv_resourceToInstance(t *testing.T) { Name: "bar", }.Absolute(addrs.RootModuleInstance), states.EachList, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, state) @@ -137,12 +218,12 @@ func TestStateMv_resourceToInstance(t *testing.T) { testStateOutput(t, statePath, ` test_instance.bar.0: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.baz: ID = foo - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value `) @@ -167,7 +248,10 @@ func TestStateMv_instanceToResource(t *testing.T) { AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) s.SetResourceInstanceCurrent( addrs.Resource{ @@ -179,7 +263,10 @@ func TestStateMv_instanceToResource(t *testing.T) { AttrsJSON: []byte(`{"id":"foo","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, state) @@ -208,12 +295,12 @@ func TestStateMv_instanceToResource(t *testing.T) { testStateOutput(t, statePath, ` test_instance.bar: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.baz: ID = foo - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value `) @@ -226,17 +313,88 @@ 
test_instance.baz: testStateOutput(t, backups[0], ` test_instance.baz: ID = foo - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.foo.0: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value `) } +func TestStateMv_instanceToNewResource(t *testing.T) { + state := states.BuildState(func(s *states.SyncState) { + s.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_instance", + Name: "foo", + }.Instance(addrs.IntKey(0)).Absolute(addrs.RootModuleInstance), + &states.ResourceInstanceObjectSrc{ + AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), + Status: states.ObjectReady, + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, + ) + }) + statePath := testStateFile(t, state) + + p := testProvider() + ui := new(cli.MockUi) + c := &StateMvCommand{ + StateMeta{ + Meta: Meta{ + testingOverrides: metaOverridesForProvider(p), + Ui: ui, + }, + }, + } + + args := []string{ + "-state", statePath, + "test_instance.foo[0]", + "test_instance.bar[\"new\"]", + } + if code := c.Run(args); code != 0 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } + + // Test it is correct + testStateOutput(t, statePath, ` +test_instance.bar["new"]: + ID = bar + provider = provider["registry.terraform.io/-/test"] + bar = value + foo = value +`) + + // now move the instance to a new resource in a new module + args = []string{ + "-state", statePath, + "test_instance.bar[\"new\"]", + "module.test.test_instance.baz[\"new\"]", + } + if code := c.Run(args); code != 0 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } + + // Test it is correct + testStateOutput(t, statePath, ` + +module.test: + test_instance.baz["new"]: + ID = bar + provider = provider["registry.terraform.io/-/test"] + bar = value + foo = value +`) +} + func TestStateMv_differentResourceTypes(t *testing.T) { state := states.BuildState(func(s *states.SyncState) { s.SetResourceInstanceCurrent( @@ -249,7 +407,10 @@ func TestStateMv_differentResourceTypes(t *testing.T) { AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, state) @@ -299,7 +460,10 @@ func TestStateMv_explicitWithBackend(t *testing.T) { AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) s.SetResourceInstanceCurrent( addrs.Resource{ @@ -311,7 +475,10 @@ func TestStateMv_explicitWithBackend(t *testing.T) { AttrsJSON: []byte(`{"id":"foo","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, state) @@ -368,7 +535,10 @@ func TestStateMv_backupExplicit(t *testing.T) { AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: 
"test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) s.SetResourceInstanceCurrent( addrs.Resource{ @@ -377,10 +547,14 @@ func TestStateMv_backupExplicit(t *testing.T) { Name: "baz", }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), &states.ResourceInstanceObjectSrc{ - AttrsJSON: []byte(`{"id":"foo","foo":"value","bar":"value"}`), - Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"foo","foo":"value","bar":"value"}`), + Status: states.ObjectReady, + Dependencies: []addrs.AbsResource{mustResourceAddr("test_instance.foo")}, + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), ) }) statePath := testStateFile(t, state) @@ -426,7 +600,10 @@ func TestStateMv_stateOutNew(t *testing.T) { AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, state) @@ -477,7 +654,10 @@ func TestStateMv_stateOutExisting(t *testing.T) { AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, stateSrc) @@ -493,7 +673,10 @@ func TestStateMv_stateOutExisting(t *testing.T) { AttrsJSON: []byte(`{"id":"bar"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) stateOutPath := testStateFile(t, stateDst) @@ -570,7 +753,10 @@ func TestStateMv_stateOutNew_count(t *testing.T) { AttrsJSON: []byte(`{"id":"foo","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) s.SetResourceInstanceCurrent( addrs.Resource{ @@ -582,7 +768,10 @@ func TestStateMv_stateOutNew_count(t *testing.T) { AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) s.SetResourceInstanceCurrent( addrs.Resource{ @@ -594,7 +783,10 @@ func TestStateMv_stateOutNew_count(t *testing.T) { AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, state) @@ -649,7 +841,10 @@ func TestStateMv_stateOutNew_largeCount(t *testing.T) { AttrsJSON: []byte(fmt.Sprintf(`{"id":"foo%d","foo":"value","bar":"value"}`, i)), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: 
addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) } s.SetResourceInstanceCurrent( @@ -662,7 +857,10 @@ func TestStateMv_stateOutNew_largeCount(t *testing.T) { AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, state) @@ -713,7 +911,10 @@ func TestStateMv_stateOutNew_nestedModule(t *testing.T) { AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) s.SetResourceInstanceCurrent( addrs.Resource{ @@ -725,7 +926,10 @@ func TestStateMv_stateOutNew_nestedModule(t *testing.T) { AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) @@ -777,7 +981,10 @@ func TestStateMv_toNewModule(t *testing.T) { AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) @@ -829,42 +1036,46 @@ func TestStateMv_toNewModule(t *testing.T) { } testStateOutput(t, stateOutPath2, testStateMvModuleNewModule_stateOut) } + func TestStateMv_withinBackend(t *testing.T) { td := tempDir(t) copy.CopyDir(testFixturePath("backend-unchanged"), td) defer os.RemoveAll(td) defer testChdir(t, td)() - state := &terraform.State{ - Modules: []*terraform.ModuleState{ - &terraform.ModuleState{ - Path: []string{"root"}, - Resources: map[string]*terraform.ResourceState{ - "test_instance.foo": &terraform.ResourceState{ - Type: "test_instance", - Primary: &terraform.InstanceState{ - ID: "bar", - Attributes: map[string]string{ - "foo": "value", - "bar": "value", - }, - }, - }, - - "test_instance.baz": &terraform.ResourceState{ - Type: "test_instance", - Primary: &terraform.InstanceState{ - ID: "foo", - Attributes: map[string]string{ - "foo": "value", - "bar": "value", - }, - }, - }, - }, + state := states.BuildState(func(s *states.SyncState) { + s.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_instance", + Name: "foo", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), + &states.ResourceInstanceObjectSrc{ + AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), + Status: states.ObjectReady, }, - }, - } + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, + ) + s.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_instance", + Name: "baz", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), + &states.ResourceInstanceObjectSrc{ + AttrsJSON: []byte(`{"id":"foo","foo":"value","bar":"value"}`), + Status: states.ObjectReady, + Dependencies: []addrs.AbsResource{mustResourceAddr("test_instance.foo")}, + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, + ) 
+ }) // the local backend state file is "foo" statePath := "local-state.tfstate" @@ -876,7 +1087,7 @@ func TestStateMv_withinBackend(t *testing.T) { } defer f.Close() - if err := terraform.WriteState(state, f); err != nil { + if err := writeStateForTesting(state, f); err != nil { t.Fatal(err) } @@ -986,12 +1197,15 @@ func TestStateMv_fromBackendToLocal(t *testing.T) { const testStateMvOutputOriginal = ` test_instance.baz: ID = foo - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value + + Dependencies: + test_instance.foo test_instance.foo: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value ` @@ -999,12 +1213,12 @@ test_instance.foo: const testStateMvOutput = ` test_instance.bar: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.baz: ID = foo - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value ` @@ -1012,12 +1226,12 @@ test_instance.baz: const testStateMvCount_stateOut = ` test_instance.bar.0: ID = foo - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.bar.1: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value ` @@ -1025,7 +1239,7 @@ test_instance.bar.1: const testStateMvCount_stateOutSrc = ` test_instance.bar: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value ` @@ -1033,17 +1247,17 @@ test_instance.bar: const testStateMvCount_stateOutOriginal = ` test_instance.bar: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.foo.0: ID = foo - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.foo.1: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value ` @@ -1051,57 +1265,57 @@ test_instance.foo.1: const testStateMvLargeCount_stateOut = ` test_instance.bar.0: ID = foo0 - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.bar.1: ID = foo1 - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.bar.2: ID = foo2 - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.bar.3: ID = foo3 - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.bar.4: ID = foo4 - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.bar.5: ID = foo5 - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.bar.6: ID = foo6 - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.bar.7: ID = foo7 - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.bar.8: ID = foo8 - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.bar.9: ID = foo9 - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = 
value test_instance.bar.10: ID = foo10 - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value ` @@ -1109,7 +1323,7 @@ test_instance.bar.10: const testStateMvLargeCount_stateOutSrc = ` test_instance.bar: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value ` @@ -1117,62 +1331,62 @@ test_instance.bar: const testStateMvLargeCount_stateOutOriginal = ` test_instance.bar: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.foo.0: ID = foo0 - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.foo.1: ID = foo1 - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.foo.2: ID = foo2 - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.foo.3: ID = foo3 - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.foo.4: ID = foo4 - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.foo.5: ID = foo5 - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.foo.6: ID = foo6 - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.foo.7: ID = foo7 - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.foo.8: ID = foo8 - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.foo.9: ID = foo9 - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.foo.10: ID = foo10 - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value ` @@ -1182,13 +1396,13 @@ const testStateMvNestedModule_stateOut = ` module.bar.child1: test_instance.foo: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value module.bar.child2: test_instance.foo: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value ` @@ -1198,7 +1412,7 @@ const testStateMvNewModule_stateOut = ` module.bar: test_instance.bar: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value ` @@ -1208,7 +1422,7 @@ const testStateMvModuleNewModule_stateOut = ` module.foo: test_instance.bar: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value ` @@ -1216,7 +1430,7 @@ module.foo: const testStateMvNewModule_stateOutOriginal = ` test_instance.bar: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value ` @@ -1230,13 +1444,13 @@ const testStateMvNestedModule_stateOutOriginal = ` module.foo.child1: test_instance.foo: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value module.foo.child2: test_instance.foo: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value ` @@ -1244,7 +1458,7 @@ module.foo.child2: const 
testStateMvOutput_stateOut = ` test_instance.bar: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value ` @@ -1256,7 +1470,7 @@ const testStateMvOutput_stateOutSrc = ` const testStateMvOutput_stateOutOriginal = ` test_instance.foo: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value ` @@ -1268,18 +1482,18 @@ const testStateMvExisting_stateSrc = ` const testStateMvExisting_stateDst = ` test_instance.bar: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.qux: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] ` const testStateMvExisting_stateSrcOriginal = ` test_instance.foo: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value ` @@ -1287,13 +1501,13 @@ test_instance.foo: const testStateMvExisting_stateDstOriginal = ` test_instance.qux: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] ` const testStateMvOriginal_backend = ` test_instance.baz: ID = foo - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value ` diff --git a/command/state_rm_test.go b/command/state_rm_test.go index a35c247bb..02fea0adf 100644 --- a/command/state_rm_test.go +++ b/command/state_rm_test.go @@ -25,7 +25,10 @@ func TestStateRm(t *testing.T) { AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) s.SetResourceInstanceCurrent( addrs.Resource{ @@ -37,7 +40,10 @@ func TestStateRm(t *testing.T) { AttrsJSON: []byte(`{"id":"foo","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, state) @@ -84,7 +90,10 @@ func TestStateRmNotChildModule(t *testing.T) { AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) // This second instance has the same local address as the first but // is in a child module. 
Older versions of Terraform would incorrectly @@ -99,7 +108,10 @@ func TestStateRmNotChildModule(t *testing.T) { AttrsJSON: []byte(`{"id":"foo","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, state) @@ -129,7 +141,7 @@ func TestStateRmNotChildModule(t *testing.T) { module.child: test_instance.foo: ID = foo - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value `) @@ -142,14 +154,14 @@ module.child: testStateOutput(t, backups[0], ` test_instance.foo: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value module.child: test_instance.foo: ID = foo - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value `) @@ -167,7 +179,10 @@ func TestStateRmNoArgs(t *testing.T) { AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) s.SetResourceInstanceCurrent( addrs.Resource{ @@ -179,7 +194,10 @@ func TestStateRmNoArgs(t *testing.T) { AttrsJSON: []byte(`{"id":"foo","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, state) @@ -220,7 +238,10 @@ func TestStateRmNonExist(t *testing.T) { AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) s.SetResourceInstanceCurrent( addrs.Resource{ @@ -232,7 +253,10 @@ func TestStateRmNonExist(t *testing.T) { AttrsJSON: []byte(`{"id":"foo","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, state) @@ -274,7 +298,10 @@ func TestStateRm_backupExplicit(t *testing.T) { AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) s.SetResourceInstanceCurrent( addrs.Resource{ @@ -286,7 +313,10 @@ func TestStateRm_backupExplicit(t *testing.T) { AttrsJSON: []byte(`{"id":"foo","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, state) @@ -384,7 +414,10 @@ func TestStateRm_backendState(t *testing.T) { AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: 
"test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) s.SetResourceInstanceCurrent( addrs.Resource{ @@ -396,7 +429,10 @@ func TestStateRm_backendState(t *testing.T) { AttrsJSON: []byte(`{"id":"foo","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) @@ -443,12 +479,12 @@ func TestStateRm_backendState(t *testing.T) { const testStateRmOutputOriginal = ` test_instance.bar: ID = foo - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value test_instance.foo: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value ` @@ -456,7 +492,7 @@ test_instance.foo: const testStateRmOutput = ` test_instance.bar: ID = foo - provider = provider.test + provider = provider["registry.terraform.io/-/test"] bar = value foo = value ` diff --git a/command/state_show.go b/command/state_show.go index bd9d2c48b..f91f051b6 100644 --- a/command/state_show.go +++ b/command/state_show.go @@ -72,6 +72,7 @@ func (c *StateShowCommand) Run(args []string) int { // Build the operation (required to get the schemas) opReq := c.Operation(b) + opReq.AllowUnsetVariables = true opReq.ConfigDir = cwd opReq.ConfigLoader, err = c.initConfigLoader() @@ -114,11 +115,18 @@ func (c *StateShowCommand) Run(args []string) int { return 1 } + // check if the resource has a configured provider, otherwise this will use the default provider + rs := state.Resource(addr.ContainingResource()) + absPc := addrs.AbsProviderConfig{ + Provider: rs.ProviderConfig.Provider, + Alias: rs.ProviderConfig.Alias, + Module: addrs.RootModuleInstance, + } singleInstance := states.NewState() singleInstance.EnsureModule(addr.Module).SetResourceInstanceCurrent( addr.Resource, is.Current, - addr.Resource.Resource.DefaultProviderConfig().Absolute(addr.Module), + absPc, ) output := format.State(&format.StateOpts{ diff --git a/command/state_show_test.go b/command/state_show_test.go index e1c574e9a..e382bdf90 100644 --- a/command/state_show_test.go +++ b/command/state_show_test.go @@ -6,6 +6,7 @@ import ( "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/configs/configschema" + "github.com/hashicorp/terraform/providers" "github.com/hashicorp/terraform/states" "github.com/hashicorp/terraform/terraform" "github.com/mitchellh/cli" @@ -24,7 +25,10 @@ func TestStateShow(t *testing.T) { AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, state) @@ -79,7 +83,10 @@ func TestStateShow_multi(t *testing.T) { AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) s.SetResourceInstanceCurrent( addrs.Resource{ @@ -91,7 +98,10 @@ func TestStateShow_multi(t *testing.T) { AttrsJSON: []byte(`{"id":"foo","foo":"value","bar":"value"}`), Status: states.ObjectReady, }, - 
addrs.ProviderConfig{Type: "test"}.Absolute(submod), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: submod, + }, ) }) statePath := testStateFile(t, state) @@ -182,6 +192,69 @@ func TestStateShow_emptyState(t *testing.T) { } } +func TestStateShow_configured_provider(t *testing.T) { + state := states.BuildState(func(s *states.SyncState) { + s.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_instance", + Name: "foo", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), + &states.ResourceInstanceObjectSrc{ + AttrsJSON: []byte(`{"id":"bar","foo":"value","bar":"value"}`), + Status: states.ObjectReady, + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test-beta"), + Module: addrs.RootModuleInstance, + }, + ) + }) + statePath := testStateFile(t, state) + + p := testProvider() + p.GetSchemaReturn = &terraform.ProviderSchema{ + ResourceTypes: map[string]*configschema.Block{ + "test_instance": { + Attributes: map[string]*configschema.Attribute{ + "id": {Type: cty.String, Optional: true, Computed: true}, + "foo": {Type: cty.String, Optional: true}, + "bar": {Type: cty.String, Optional: true}, + }, + }, + }, + } + + ui := new(cli.MockUi) + c := &StateShowCommand{ + Meta: Meta{ + testingOverrides: &testingOverrides{ + ProviderResolver: providers.ResolverFixed( + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test-beta"): providers.FactoryFixed(p), + }, + ), + }, + Ui: ui, + }, + } + + args := []string{ + "-state", statePath, + "test_instance.foo", + } + if code := c.Run(args); code != 0 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } + + // Test that outputs were displayed + expected := strings.TrimSpace(testStateShowOutput) + "\n" + actual := ui.OutputWriter.String() + if actual != expected { + t.Fatalf("Expected:\n%q\n\nTo equal:\n%q", actual, expected) + } +} + const testStateShowOutput = ` # test_instance.foo: resource "test_instance" "foo" { diff --git a/command/taint_test.go b/command/taint_test.go index bcdb76a9c..a34365d4b 100644 --- a/command/taint_test.go +++ b/command/taint_test.go @@ -24,7 +24,10 @@ func TestTaint(t *testing.T) { AttrsJSON: []byte(`{"id":"bar"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, state) @@ -59,7 +62,10 @@ func TestTaint_lockedState(t *testing.T) { AttrsJSON: []byte(`{"id":"bar"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, state) @@ -245,7 +251,10 @@ func TestTaint_missing(t *testing.T) { AttrsJSON: []byte(`{"id":"bar"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, state) @@ -278,7 +287,10 @@ func TestTaint_missingAllow(t *testing.T) { AttrsJSON: []byte(`{"id":"bar"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := 
testStateFile(t, state) @@ -354,7 +366,10 @@ func TestTaint_module(t *testing.T) { AttrsJSON: []byte(`{"id":"bar"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) s.SetResourceInstanceCurrent( addrs.Resource{ @@ -366,7 +381,10 @@ func TestTaint_module(t *testing.T) { AttrsJSON: []byte(`{"id":"blah"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, state) @@ -392,22 +410,22 @@ func TestTaint_module(t *testing.T) { const testTaintStr = ` test_instance.foo: (tainted) ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] ` const testTaintDefaultStr = ` test_instance.foo: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] ` const testTaintModuleStr = ` test_instance.foo: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] module.child: test_instance.blah: (tainted) ID = blah - provider = provider.test + provider = provider["registry.terraform.io/-/test"] ` diff --git a/command/testdata/backend-from-state/terraform.tfstate b/command/testdata/backend-from-state/terraform.tfstate new file mode 100644 index 000000000..091ecc19a --- /dev/null +++ b/command/testdata/backend-from-state/terraform.tfstate @@ -0,0 +1,10 @@ +{ + "version": 3, + "terraform_version": "0.12.0", + "serial": 7, + "lineage": "configured", + "backend": { + "type": "inmem", + "config": {} + } +} diff --git a/command/testdata/import-provider-datasource/main.tf b/command/testdata/import-provider-datasource/main.tf new file mode 100644 index 000000000..3b9fa3a3e --- /dev/null +++ b/command/testdata/import-provider-datasource/main.tf @@ -0,0 +1,13 @@ +provider "test" { + foo = data.test_data.key.id +} + +provider "test" { + alias = "credentials" +} + +data "test_data" "key" { + provider = test.credentials +} + +resource "test_instance" "foo" {} diff --git a/command/testdata/import-provider-invalid/main.tf b/command/testdata/import-provider-invalid/main.tf new file mode 100644 index 000000000..c156850d3 --- /dev/null +++ b/command/testdata/import-provider-invalid/main.tf @@ -0,0 +1,15 @@ +terraform { + backend "local" { + path = "imported.tfstate" + } +} + +provider "test" { + foo = "bar" +} + +resource "test_instance" "foo" { +} + +resource "unknown_instance" "baz" { +} diff --git a/command/testdata/import-provider-mismatch/main.tf b/command/testdata/import-provider-mismatch/main.tf deleted file mode 100644 index 420e765c1..000000000 --- a/command/testdata/import-provider-mismatch/main.tf +++ /dev/null @@ -1,7 +0,0 @@ -provider "test-beta" { - foo = "baz" -} - -resource "test_instance" "foo" { - provider = "test-beta" -} diff --git a/command/testdata/init-required-providers/main.tf b/command/testdata/init-required-providers/main.tf new file mode 100644 index 000000000..d1f51a32c --- /dev/null +++ b/command/testdata/init-required-providers/main.tf @@ -0,0 +1,8 @@ +terraform { + required_providers { + test = "1.2.3" + source = { + version = "1.2.3" + } + } +} diff --git a/command/testdata/login-tfe-server/tfeserver.go b/command/testdata/login-tfe-server/tfeserver.go new file mode 100644 index 000000000..11d164abc --- /dev/null +++ 
b/command/testdata/login-tfe-server/tfeserver.go @@ -0,0 +1,54 @@ +// Package tfeserver is a test stub implementing a subset of the TFE API used +// only for the testing of the "terraform login" command. +package tfeserver + +import ( + "fmt" + "net/http" + "strings" +) + +const ( + goodToken = "good-token" + accountDetails = `{"data":{"id":"user-abc123","type":"users","attributes":{"username":"testuser","email":"testuser@example.com"}}}` +) + +// Handler is an implementation of net/http.Handler that provides a stub +// TFE API server implementation with the following endpoints: +// +// /ping - API existence endpoint +// /account/details - current user endpoint +var Handler http.Handler + +type handler struct{} + +func (h handler) ServeHTTP(resp http.ResponseWriter, req *http.Request) { + resp.Header().Set("Content-Type", "application/vnd.api+json") + switch req.URL.Path { + case "/api/v2/ping": + h.servePing(resp, req) + case "/api/v2/account/details": + h.serveAccountDetails(resp, req) + default: + fmt.Printf("404 when fetching %s\n", req.URL.String()) + http.Error(resp, `{"errors":[{"status":"404","title":"not found"}]}`, http.StatusNotFound) + } +} + +func (h handler) servePing(resp http.ResponseWriter, req *http.Request) { + resp.WriteHeader(http.StatusNoContent) +} + +func (h handler) serveAccountDetails(resp http.ResponseWriter, req *http.Request) { + if !strings.Contains(req.Header.Get("Authorization"), goodToken) { + http.Error(resp, `{"errors":[{"status":"401","title":"unauthorized"}]}`, http.StatusUnauthorized) + return + } + + resp.WriteHeader(http.StatusOK) + resp.Write([]byte(accountDetails)) +} + +func init() { + Handler = handler{} +} diff --git a/command/testdata/show-json/basic-delete/terraform.tfstate b/command/testdata/show-json/basic-delete/terraform.tfstate index db49d3e68..4a3b3612c 100644 --- a/command/testdata/show-json/basic-delete/terraform.tfstate +++ b/command/testdata/show-json/basic-delete/terraform.tfstate @@ -9,7 +9,7 @@ "mode": "managed", "type": "test_instance", "name": "test", - "provider": "provider.test", + "provider": "provider[\"registry.terraform.io/-/test\"]", "instances": [ { "schema_version": 0, @@ -24,7 +24,7 @@ "mode": "managed", "type": "test_instance", "name": "test-delete", - "provider": "provider.test", + "provider": "provider[\"registry.terraform.io/-/test\"]", "instances": [ { "schema_version": 0, @@ -36,4 +36,4 @@ ] } ] -} \ No newline at end of file +} diff --git a/command/testdata/show-json/basic-update/terraform.tfstate b/command/testdata/show-json/basic-update/terraform.tfstate index dfc796a88..f68865a9b 100644 --- a/command/testdata/show-json/basic-update/terraform.tfstate +++ b/command/testdata/show-json/basic-update/terraform.tfstate @@ -9,7 +9,7 @@ "mode": "managed", "type": "test_instance", "name": "test", - "provider": "provider.test", + "provider": "provider[\"registry.terraform.io/-/test\"]", "instances": [ { "schema_version": 0, @@ -21,4 +21,4 @@ ] } ] -} \ No newline at end of file +} diff --git a/command/testdata/show-json/modules/output.json b/command/testdata/show-json/modules/output.json index be1b6fdf9..13a8702ab 100644 --- a/command/testdata/show-json/modules/output.json +++ b/command/testdata/show-json/modules/output.json @@ -274,8 +274,8 @@ } }, "provider_config": { - "module_test_foo:test": { - "module_address": "module_test_foo", + "module.module_test_foo:test": { + "module_address": "module.module_test_foo", "name": "test" } } diff --git a/command/testdata/show-json/nested-modules/modules/more-modules/main.tf 
b/command/testdata/show-json/nested-modules/modules/more-modules/main.tf index 7e1ffafe1..2e5273a57 100644 --- a/command/testdata/show-json/nested-modules/modules/more-modules/main.tf +++ b/command/testdata/show-json/nested-modules/modules/more-modules/main.tf @@ -1,4 +1,7 @@ -variable "ok" { - default = "something" - description = "description" +variable "test_var" { + default = "bar-var" +} + +resource "test_instance" "test" { + ami = var.test_var } diff --git a/command/testdata/show-json/nested-modules/output.json b/command/testdata/show-json/nested-modules/output.json index 2dcc54773..d4304dff5 100644 --- a/command/testdata/show-json/nested-modules/output.json +++ b/command/testdata/show-json/nested-modules/output.json @@ -1,9 +1,54 @@ { "format_version": "0.1", - "terraform_version": "0.12.1-dev", "planned_values": { - "root_module": {} + "root_module": { + "child_modules": [ + { + "address": "module.my_module", + "child_modules": [ + { + "resources": [ + { + "address": "module.my_module.module.more.test_instance.test", + "mode": "managed", + "type": "test_instance", + "name": "test", + "provider_name": "test", + "schema_version": 0, + "values": { + "ami": "bar-var" + } + } + ], + "address": "module.my_module.module.more" + } + ] + } + ] + } }, + "resource_changes": [ + { + "address": "module.my_module.module.more.test_instance.test", + "module_address": "module.my_module.module.more", + "mode": "managed", + "type": "test_instance", + "name": "test", + "provider_name": "test", + "change": { + "actions": [ + "create" + ], + "before": null, + "after": { + "ami": "bar-var" + }, + "after_unknown": { + "id": true + } + } + } + ], "configuration": { "root_module": { "module_calls": { @@ -12,15 +57,31 @@ "module": { "module_calls": { "more": { + "source": "./more-modules", "module": { + "resources": [ + { + "address": "test_instance.test", + "mode": "managed", + "type": "test_instance", + "name": "test", + "provider_config_key": "more:test", + "expressions": { + "ami": { + "references": [ + "var.test_var" + ] + } + }, + "schema_version": 0 + } + ], "variables": { - "ok": { - "default": "something", - "description": "description" + "test_var": { + "default": "bar-var" } } - }, - "source": "./more-modules" + } } } } @@ -28,4 +89,4 @@ } } } -} +} \ No newline at end of file diff --git a/command/testdata/state-list-backend-custom/local-state.tfstate b/command/testdata/state-list-backend-custom/local-state.tfstate index db3d0b7c7..f357c3012 100644 --- a/command/testdata/state-list-backend-custom/local-state.tfstate +++ b/command/testdata/state-list-backend-custom/local-state.tfstate @@ -9,7 +9,7 @@ "mode": "managed", "type": "null_resource", "name": "a", - "provider": "provider.null", + "provider": "provider[\"registry.terraform.io/-/null\"]", "instances": [ { "schema_version": 0, diff --git a/command/testdata/state-list-backend-default/terraform.tfstate b/command/testdata/state-list-backend-default/terraform.tfstate index db3d0b7c7..f357c3012 100644 --- a/command/testdata/state-list-backend-default/terraform.tfstate +++ b/command/testdata/state-list-backend-default/terraform.tfstate @@ -9,7 +9,7 @@ "mode": "managed", "type": "null_resource", "name": "a", - "provider": "provider.null", + "provider": "provider[\"registry.terraform.io/-/null\"]", "instances": [ { "schema_version": 0, diff --git a/command/testdata/state-push-serial-newer/local-state.tfstate b/command/testdata/state-push-serial-newer/local-state.tfstate index 5d4c977bb..012c8857a 100644 --- 
a/command/testdata/state-push-serial-newer/local-state.tfstate +++ b/command/testdata/state-push-serial-newer/local-state.tfstate @@ -8,7 +8,7 @@ "mode": "managed", "type": "null_resource", "name": "a", - "provider": "provider.null", + "provider": "provider[\"registry.terraform.io/-/null\"]", "instances": [ { "schema_version": 0, diff --git a/command/testdata/state-push-serial-newer/replace.tfstate b/command/testdata/state-push-serial-newer/replace.tfstate index a5789c5fe..ad94a1f6e 100644 --- a/command/testdata/state-push-serial-newer/replace.tfstate +++ b/command/testdata/state-push-serial-newer/replace.tfstate @@ -8,7 +8,7 @@ "mode": "managed", "type": "null_resource", "name": "b", - "provider": "provider.null", + "provider": "provider[\"registry.terraform.io/-/null\"]", "instances": [ { "schema_version": 0, diff --git a/command/untaint_test.go b/command/untaint_test.go index c5f7275f1..9584a7654 100644 --- a/command/untaint_test.go +++ b/command/untaint_test.go @@ -23,7 +23,10 @@ func TestUntaint(t *testing.T) { AttrsJSON: []byte(`{"id":"bar"}`), Status: states.ObjectTainted, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, state) @@ -46,7 +49,7 @@ func TestUntaint(t *testing.T) { expected := strings.TrimSpace(` test_instance.foo: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] `) testStateOutput(t, statePath, expected) } @@ -63,7 +66,10 @@ func TestUntaint_lockedState(t *testing.T) { AttrsJSON: []byte(`{"id":"bar"}`), Status: states.ObjectTainted, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, state) @@ -136,14 +142,14 @@ func TestUntaint_backup(t *testing.T) { testStateOutput(t, path+".backup", strings.TrimSpace(` test_instance.foo: (tainted) ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] `)) // State is untainted testStateOutput(t, path, strings.TrimSpace(` test_instance.foo: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] `)) } @@ -193,7 +199,7 @@ func TestUntaint_backupDisable(t *testing.T) { testStateOutput(t, path, strings.TrimSpace(` test_instance.foo: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] `)) } @@ -255,7 +261,7 @@ func TestUntaint_defaultState(t *testing.T) { testStateOutput(t, path, strings.TrimSpace(` test_instance.foo: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] `)) } @@ -271,7 +277,10 @@ func TestUntaint_missing(t *testing.T) { AttrsJSON: []byte(`{"id":"bar"}`), Status: states.ObjectTainted, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, state) @@ -304,7 +313,10 @@ func TestUntaint_missingAllow(t *testing.T) { AttrsJSON: []byte(`{"id":"bar"}`), Status: states.ObjectTainted, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, state) @@ -368,12 +380,12 @@ func 
TestUntaint_stateOut(t *testing.T) { testStateOutput(t, path, strings.TrimSpace(` test_instance.foo: (tainted) ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] `)) testStateOutput(t, "foo", strings.TrimSpace(` test_instance.foo: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] `)) } @@ -389,7 +401,10 @@ func TestUntaint_module(t *testing.T) { AttrsJSON: []byte(`{"id":"bar"}`), Status: states.ObjectTainted, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) s.SetResourceInstanceCurrent( addrs.Resource{ @@ -401,7 +416,10 @@ func TestUntaint_module(t *testing.T) { AttrsJSON: []byte(`{"id":"bar"}`), Status: states.ObjectTainted, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) statePath := testStateFile(t, state) @@ -424,11 +442,11 @@ func TestUntaint_module(t *testing.T) { testStateOutput(t, statePath, strings.TrimSpace(` test_instance.foo: (tainted) ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] module.child: test_instance.blah: ID = bar - provider = provider.test + provider = provider["registry.terraform.io/-/test"] `)) } diff --git a/command/validate.go b/command/validate.go index 5d84cb0ea..76155df19 100644 --- a/command/validate.go +++ b/command/validate.go @@ -25,6 +25,8 @@ func (c *ValidateCommand) Run(args []string) int { return 1 } + // TODO: The `var` and `var-file` options are not actually used, and should + // be removed in the next major release. if c.Meta.variableArgs.items == nil { c.Meta.variableArgs = newRawFlags("-var") } @@ -42,12 +44,22 @@ func (c *ValidateCommand) Run(args []string) int { return 1 } + var diags tfdiags.Diagnostics + + // If set, output a warning indicating that these values are not used. + if !varValues.Empty() || !varFiles.Empty() { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Warning, + "The -var and -var-file flags are not used in validate. Setting them has no effect.", + "These flags will be removed in a future version of Terraform.", + )) + } + // After this point, we must only produce JSON output if JSON mode is // enabled, so all errors should be accumulated into diags and we'll // print out a suitable result at the end, depending on the format // selection. All returns from this point on must be tail-calls into // c.showResults in order to produce the expected output. - var diags tfdiags.Diagnostics args = cmdFlags.Args() var dirPath string diff --git a/command/version.go b/command/version.go index 9c9e30727..7d05fd9c8 100644 --- a/command/version.go +++ b/command/version.go @@ -108,7 +108,7 @@ func (c *VersionCommand) Run(args []string) int { if info.Outdated { c.Ui.Output(fmt.Sprintf( "\nYour version of Terraform is out of date! The latest version\n"+ - "is %s. You can update by downloading from www.terraform.io/downloads.html", + "is %s. 
You can update by downloading from https://www.terraform.io/downloads.html", info.Latest)) } } diff --git a/command/webbrowser/native.go b/command/webbrowser/native.go index 4e8281ce1..77d503a2c 100644 --- a/command/webbrowser/native.go +++ b/command/webbrowser/native.go @@ -2,6 +2,8 @@ package webbrowser import ( "github.com/pkg/browser" + "os/exec" + "strings" ) // NewNativeLauncher creates and returns a Launcher that will attempt to interact @@ -13,6 +15,18 @@ func NewNativeLauncher() Launcher { type nativeLauncher struct{} +func hasProgram(name string) bool { + _, err := exec.LookPath(name) + return err == nil +} + func (l nativeLauncher) OpenURL(url string) error { + // Windows Subsystem for Linux (bash for Windows) doesn't have xdg-open available + // but you can execute cmd.exe from there; try to identify it + if !hasProgram("xdg-open") && hasProgram("cmd.exe") { + r := strings.NewReplacer("&", "^&") + return exec.Command("cmd.exe", "/c", "start", r.Replace(url)).Run() + } + return browser.OpenURL(url) } diff --git a/command/workspace_command.go b/command/workspace_command.go index 3be8f522a..406e8f8ca 100644 --- a/command/workspace_command.go +++ b/command/workspace_command.go @@ -33,7 +33,7 @@ func (c *WorkspaceCommand) Help() string { helpText := ` Usage: terraform workspace - New, list, show, select and delete Terraform workspaces. + new, list, show, select and delete Terraform workspaces. ` return strings.TrimSpace(helpText) diff --git a/command/workspace_command_test.go b/command/workspace_command_test.go index ea645e006..4d37da610 100644 --- a/command/workspace_command_test.go +++ b/command/workspace_command_test.go @@ -241,7 +241,10 @@ func TestWorkspace_createWithState(t *testing.T) { AttrsJSON: []byte(`{"id":"bar"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) diff --git a/command/workspace_delete.go b/command/workspace_delete.go index a8639b3ae..e57fbe32b 100644 --- a/command/workspace_delete.go +++ b/command/workspace_delete.go @@ -119,6 +119,8 @@ func (c *WorkspaceDeleteCommand) Run(args []string) int { } if err := stateMgr.RefreshState(); err != nil { + // We need to release the lock before exit + stateLocker.Unlock(nil) c.Ui.Error(err.Error()) return 1 } @@ -126,6 +128,8 @@ func (c *WorkspaceDeleteCommand) Run(args []string) int { hasResources := stateMgr.State().HasResources() if hasResources && !force { + // We need to release the lock before exit + stateLocker.Unlock(nil) c.Ui.Error(fmt.Sprintf(strings.TrimSpace(envNotEmpty), workspace)) return 1 } diff --git a/commands.go b/commands.go index fffbb3639..7ea882c89 100644 --- a/commands.go +++ b/commands.go @@ -6,13 +6,14 @@ import ( "github.com/mitchellh/cli" + svchost "github.com/hashicorp/terraform-svchost" + "github.com/hashicorp/terraform-svchost/auth" + "github.com/hashicorp/terraform-svchost/disco" "github.com/hashicorp/terraform/command" "github.com/hashicorp/terraform/command/cliconfig" "github.com/hashicorp/terraform/command/webbrowser" + "github.com/hashicorp/terraform/internal/getproviders" pluginDiscovery "github.com/hashicorp/terraform/plugin/discovery" - "github.com/hashicorp/terraform/svchost" - "github.com/hashicorp/terraform/svchost/auth" - "github.com/hashicorp/terraform/svchost/disco" ) // runningInAutomationEnvName gives the name of an environment variable that @@ -37,7 +38,7 @@ const ( OutputPrefix = "o:" ) -func
initCommands(config *Config, services *disco.Disco) { +func initCommands(config *cliconfig.Config, services *disco.Disco, providerSrc getproviders.Source) { var inAutomation bool if v := os.Getenv(runningInAutomationEnvName); v != "" { inAutomation = true @@ -67,6 +68,7 @@ func initCommands(config *Config, services *disco.Disco) { Ui: Ui, Services: services, + ProviderSource: providerSrc, BrowserLauncher: webbrowser.NewNativeLauncher(), RunningInAutomation: inAutomation, @@ -182,15 +184,17 @@ func initCommands(config *Config, services *disco.Disco) { }, nil }, - // "terraform login" is disabled until Terraform Cloud is ready to - // support it. - /* - "login": func() (cli.Command, error) { - return &command.LoginCommand{ - Meta: meta, - }, nil - }, - */ + "login": func() (cli.Command, error) { + return &command.LoginCommand{ + Meta: meta, + }, nil + }, + + "logout": func() (cli.Command, error) { + return &command.LogoutCommand{ + Meta: meta, + }, nil + }, "output": func() (cli.Command, error) { return &command.OutputCommand{ @@ -314,12 +318,6 @@ func initCommands(config *Config, services *disco.Disco) { }, nil }, - "debug json2dot": func() (cli.Command, error) { - return &command.DebugJSON2DotCommand{ - Meta: meta, - }, nil - }, - "force-unlock": func() (cli.Command, error) { return &command.UnlockCommand{ Meta: meta, @@ -390,7 +388,7 @@ func makeShutdownCh() <-chan struct{} { return resultCh } -func credentialsSource(config *Config) (auth.CredentialsSource, error) { +func credentialsSource(config *cliconfig.Config) (auth.CredentialsSource, error) { helperPlugins := pluginDiscovery.FindPlugins("credentials", globalPluginDirs()) return config.CredentialsSource(helperPlugins) } diff --git a/communicator/ssh/communicator.go b/communicator/ssh/communicator.go index 349d67937..f39d70898 100644 --- a/communicator/ssh/communicator.go +++ b/communicator/ssh/communicator.go @@ -206,7 +206,7 @@ func (c *Communicator) Connect(o terraform.UIOutput) (err error) { } log.Printf("[DEBUG] Setting up a session to request agent forwarding") - session, err := c.newSession() + session, err := c.client.NewSession() if err != nil { return err } diff --git a/config.go b/config.go deleted file mode 100644 index fcb48ffb9..000000000 --- a/config.go +++ /dev/null @@ -1,62 +0,0 @@ -package main - -// This file has some compatibility aliases/wrappers for functionality that -// has now moved into command/cliconfig . -// -// Don't add anything new here! If new functionality is needed, better to just -// add it in command/cliconfig and then call there directly. - -import ( - "github.com/hashicorp/terraform/command/cliconfig" - "github.com/hashicorp/terraform/tfdiags" -) - -//go:generate go run ./scripts/generate-plugins.go - -// Config is the structure of the configuration for the Terraform CLI. -// -// This is not the configuration for Terraform itself. That is in the -// "configs" package. -type Config = cliconfig.Config - -// ConfigHost is the structure of the "host" nested block within the CLI -// configuration, which can be used to override the default service host -// discovery behavior for a particular hostname. -type ConfigHost = cliconfig.ConfigHost - -// ConfigCredentialsHelper is the structure of the "credentials_helper" -// nested block within the CLI configuration. -type ConfigCredentialsHelper = cliconfig.ConfigCredentialsHelper - -// BuiltinConfig is the built-in defaults for the configuration. These -// can be overridden by user configurations. 
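Aside: the compatibility file being deleted here worked through Go type aliases (`type Config = cliconfig.Config`), so callers of the old names saw the identical types after the code moved into `command/cliconfig`. A self-contained sketch of that mechanism, with invented names standing in for the two packages:

```go
package main

import "fmt"

type newConfig struct {
	Path string
}

// Config is a type alias (note the '='), not a distinct named type:
// Config and newConfig are the same type to the compiler. This is what
// let the deleted config.go keep old identifiers working after the
// implementation moved elsewhere.
type Config = newConfig

func main() {
	var c Config = newConfig{Path: ".terraformrc"}
	fmt.Println(c.Path)
}
```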
-var BuiltinConfig = cliconfig.BuiltinConfig - -// ConfigFile returns the default path to the configuration file. -// -// On Unix-like systems this is the ".terraformrc" file in the home directory. -// On Windows, this is the "terraform.rc" file in the application data -// directory. -func ConfigFile() (string, error) { - return cliconfig.ConfigFile() -} - -// ConfigDir returns the configuration directory for Terraform. -func ConfigDir() (string, error) { - return cliconfig.ConfigDir() -} - -// LoadConfig reads the CLI configuration from the various filesystem locations -// and from the environment, returning a merged configuration along with any -// diagnostics (errors and warnings) encountered along the way. -func LoadConfig() (*Config, tfdiags.Diagnostics) { - return cliconfig.LoadConfig() -} - -// EnvConfig returns a Config populated from environment variables. -// -// Any values specified in this config should override those set in the -// configuration file. -func EnvConfig() *Config { - return cliconfig.EnvConfig() -} diff --git a/config/config_test.go b/config/config_test.go index 93ac8ece6..926c1d9b4 100644 --- a/config/config_test.go +++ b/config/config_test.go @@ -282,13 +282,6 @@ func TestConfigValidate_countInt(t *testing.T) { } } -func TestConfigValidate_countInt_HCL2(t *testing.T) { - c := testConfigHCL2(t, "validate-count-int") - if err := c.Validate(); err != nil { - t.Fatalf("unexpected error: %s", err) - } -} - func TestConfigValidate_countBadContext(t *testing.T) { c := testConfig(t, "validate-count-bad-context") @@ -320,25 +313,6 @@ func TestConfigValidate_countNotInt(t *testing.T) { } } -func TestConfigValidate_countNotInt_HCL2(t *testing.T) { - c := testConfigHCL2(t, "validate-count-not-int-const") - if err := c.Validate(); err == nil { - t.Fatal("should not be valid") - } -} - -func TestConfigValidate_countNotIntUnknown_HCL2(t *testing.T) { - c := testConfigHCL2(t, "validate-count-not-int") - // In HCL2 this is not an error because the unknown variable interpolates - // to produce an unknown string, which we assume (incorrectly, it turns out) - // will become a string containing only digits. This is okay because - // the config validation is only a "best effort" and we'll get a definitive - // result during the validation graph walk. - if err := c.Validate(); err != nil { - t.Fatalf("unexpected error: %s", err) - } -} - func TestConfigValidate_countUserVar(t *testing.T) { c := testConfig(t, "validate-count-user-var") if err := c.Validate(); err != nil { @@ -346,13 +320,6 @@ func TestConfigValidate_countUserVar(t *testing.T) { } } -func TestConfigValidate_countUserVar_HCL2(t *testing.T) { - c := testConfigHCL2(t, "validate-count-user-var") - if err := c.Validate(); err != nil { - t.Fatalf("err: %s", err) - } -} - func TestConfigValidate_countLocalValue(t *testing.T) { c := testConfig(t, "validate-local-value-count") if err := c.Validate(); err != nil { @@ -740,23 +707,6 @@ func testConfig(t *testing.T, name string) *Config { return c } -// testConfigHCL loads a config, forcing it to be processed with the HCL2 -// loader even if it doesn't explicitly opt in to the HCL2 experiment. 
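Aside: for readers unfamiliar with what the HCL2 loader did for the now-deleted `testConfigHCL2` helper below, parsing and decoding an HCL2 body looks roughly like this sketch against the public `hclparse`/`gohcl` packages (the fixture struct and field names are invented; Terraform's internal `globalHCL2Loader` adds more on top):

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/hcl/v2/gohcl"
	"github.com/hashicorp/hcl/v2/hclparse"
)

type fixture struct {
	Name  string `hcl:"name"`
	Count int    `hcl:"count,optional"`
}

func main() {
	src := []byte(`
name  = "example"
count = 3
`)
	p := hclparse.NewParser()
	f, diags := p.ParseHCL(src, "fixture.hcl")
	if diags.HasErrors() {
		log.Fatal(diags)
	}

	var out fixture
	if diags := gohcl.DecodeBody(f.Body, nil, &out); diags.HasErrors() {
		log.Fatal(diags)
	}
	fmt.Printf("%+v\n", out) // {Name:example Count:3}
}
```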
-func testConfigHCL2(t *testing.T, name string) *Config { - t.Helper() - cer, _, err := globalHCL2Loader.loadFile(filepath.Join(fixtureDir, name, "main.tf")) - if err != nil { - t.Fatalf("failed to load %s: %s", name, err) - } - - cfg, err := cer.Config() - if err != nil { - t.Fatalf("failed to decode %s: %s", name, err) - } - - return cfg -} - func TestConfigDataCount(t *testing.T) { c := testConfig(t, "data-count") actual, err := c.Resources[0].Count() diff --git a/config/hcl2_shim_util.go b/config/hcl2_shim_util.go deleted file mode 100644 index b6550749a..000000000 --- a/config/hcl2_shim_util.go +++ /dev/null @@ -1,134 +0,0 @@ -package config - -import ( - "fmt" - - "github.com/zclconf/go-cty/cty/function/stdlib" - - "github.com/hashicorp/hil/ast" - "github.com/hashicorp/terraform/configs/hcl2shim" - - hcl2 "github.com/hashicorp/hcl/v2" - "github.com/zclconf/go-cty/cty" - "github.com/zclconf/go-cty/cty/convert" - "github.com/zclconf/go-cty/cty/function" -) - -// --------------------------------------------------------------------------- -// This file contains some helper functions that are used to shim between -// HCL2 concepts and HCL/HIL concepts, to help us mostly preserve the existing -// public API that was built around HCL/HIL-oriented approaches. -// --------------------------------------------------------------------------- - -func hcl2InterpolationFuncs() map[string]function.Function { - hcl2Funcs := map[string]function.Function{} - - for name, hilFunc := range Funcs() { - hcl2Funcs[name] = hcl2InterpolationFuncShim(hilFunc) - } - - // Some functions in the old world are dealt with inside langEvalConfig - // due to their legacy reliance on direct access to the symbol table. - // Since 0.7 they don't actually need it anymore and just ignore it, - // so we're cheating a bit here and exploiting that detail by passing nil. - hcl2Funcs["lookup"] = hcl2InterpolationFuncShim(interpolationFuncLookup(nil)) - hcl2Funcs["keys"] = hcl2InterpolationFuncShim(interpolationFuncKeys(nil)) - hcl2Funcs["values"] = hcl2InterpolationFuncShim(interpolationFuncValues(nil)) - - // As a bonus, we'll provide the JSON-handling functions from the cty - // function library since its "jsonencode" is more complete (doesn't force - // weird type conversions) and HIL's type system can't represent - // "jsondecode" at all. The result of jsondecode will eventually be forced - // to conform to the HIL type system on exit into the rest of Terraform due - // to our shimming right now, but it should be usable for decoding _within_ - // an expression. 
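Aside: to make the shim above concrete, a cty function is assembled from `Params`, a return `Type`, and an `Impl`, which are exactly the three pieces `hcl2InterpolationFuncShim` derives from the wrapped HIL function's signature. A minimal stand-alone example of defining and calling one with the `go-cty` API (not Terraform code):

```go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function"
)

func main() {
	upper := function.New(&function.Spec{
		// Params plays the role of the HIL function's ArgTypes.
		Params: []function.Parameter{
			{Name: "str", Type: cty.String},
		},
		// Type plays the role of the HIL ReturnType.
		Type: function.StaticReturnType(cty.String),
		// Impl plays the role of the HIL Callback.
		Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
			return cty.StringVal(strings.ToUpper(args[0].AsString())), nil
		},
	})

	got, err := upper.Call([]cty.Value{cty.StringVal("hello")})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(got.AsString()) // HELLO
}
```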
- hcl2Funcs["jsonencode"] = stdlib.JSONEncodeFunc - hcl2Funcs["jsondecode"] = stdlib.JSONDecodeFunc - - return hcl2Funcs -} - -func hcl2InterpolationFuncShim(hilFunc ast.Function) function.Function { - spec := &function.Spec{} - - for i, hilArgType := range hilFunc.ArgTypes { - spec.Params = append(spec.Params, function.Parameter{ - Type: hcl2shim.HCL2TypeForHILType(hilArgType), - Name: fmt.Sprintf("arg%d", i+1), // HIL args don't have names, so we'll fudge it - }) - } - - if hilFunc.Variadic { - spec.VarParam = &function.Parameter{ - Type: hcl2shim.HCL2TypeForHILType(hilFunc.VariadicType), - Name: "varargs", // HIL args don't have names, so we'll fudge it - } - } - - spec.Type = func(args []cty.Value) (cty.Type, error) { - return hcl2shim.HCL2TypeForHILType(hilFunc.ReturnType), nil - } - spec.Impl = func(args []cty.Value, retType cty.Type) (cty.Value, error) { - hilArgs := make([]interface{}, len(args)) - for i, arg := range args { - hilV := hcl2shim.HILVariableFromHCL2Value(arg) - - // Although the cty function system does automatic type conversions - // to match the argument types, cty doesn't distinguish int and - // float and so we may need to adjust here to ensure that the - // wrapped function gets exactly the Go type it was expecting. - var wantType ast.Type - if i < len(hilFunc.ArgTypes) { - wantType = hilFunc.ArgTypes[i] - } else { - wantType = hilFunc.VariadicType - } - switch { - case hilV.Type == ast.TypeInt && wantType == ast.TypeFloat: - hilV.Type = wantType - hilV.Value = float64(hilV.Value.(int)) - case hilV.Type == ast.TypeFloat && wantType == ast.TypeInt: - hilV.Type = wantType - hilV.Value = int(hilV.Value.(float64)) - } - - // HIL functions actually expect to have the outermost variable - // "peeled" but any nested values (in lists or maps) will - // still have their ast.Variable wrapping. - hilArgs[i] = hilV.Value - } - - hilResult, err := hilFunc.Callback(hilArgs) - if err != nil { - return cty.DynamicVal, err - } - - // Just as on the way in, we get back a partially-peeled ast.Variable - // which we need to re-wrap in order to convert it back into what - // we're calling a "config value". - rv := hcl2shim.HCL2ValueFromHILVariable(ast.Variable{ - Type: hilFunc.ReturnType, - Value: hilResult, - }) - - return convert.Convert(rv, retType) // if result is unknown we'll force the correct type here - } - return function.New(spec) -} - -func hcl2EvalWithUnknownVars(expr hcl2.Expression) (cty.Value, hcl2.Diagnostics) { - trs := expr.Variables() - vars := map[string]cty.Value{} - val := cty.DynamicVal - - for _, tr := range trs { - name := tr.RootName() - vars[name] = val - } - - ctx := &hcl2.EvalContext{ - Variables: vars, - Functions: hcl2InterpolationFuncs(), - } - return expr.Value(ctx) -} diff --git a/config/hcl2_shim_util_test.go b/config/hcl2_shim_util_test.go deleted file mode 100644 index 82052f33a..000000000 --- a/config/hcl2_shim_util_test.go +++ /dev/null @@ -1,176 +0,0 @@ -package config - -import ( - "testing" - - hcl2 "github.com/hashicorp/hcl/v2" - hcl2syntax "github.com/hashicorp/hcl/v2/hclsyntax" - "github.com/zclconf/go-cty/cty" -) - -func TestHCL2InterpolationFuncs(t *testing.T) { - // This is not a comprehensive test of all the functions (they are tested - // in interpolation_funcs_test.go already) but rather just calling a - // representative set via the HCL2 API to verify that the HCL2-to-HIL - // function shim is working as expected. 
- tests := []struct { - Expr string - Want cty.Value - Err bool - }{ - { - `upper("hello")`, - cty.StringVal("HELLO"), - false, - }, - { - `abs(-2)`, - cty.NumberIntVal(2), - false, - }, - { - `abs(-2.5)`, - cty.NumberFloatVal(2.5), - false, - }, - { - `cidrsubnet("")`, - cty.DynamicVal, - true, // not enough arguments - }, - { - `cidrsubnet("10.1.0.0/16", 8, 2)`, - cty.StringVal("10.1.2.0/24"), - false, - }, - { - `concat([])`, - // Since HIL doesn't maintain element type information for list - // types, HCL2 can't either without elements to sniff. - cty.ListValEmpty(cty.DynamicPseudoType), - false, - }, - { - `concat([], [])`, - cty.ListValEmpty(cty.DynamicPseudoType), - false, - }, - { - `concat(["a"], ["b", "c"])`, - cty.ListVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - cty.StringVal("c"), - }), - false, - }, - { - `list()`, - cty.ListValEmpty(cty.DynamicPseudoType), - false, - }, - { - `list("a", "b", "c")`, - cty.ListVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - cty.StringVal("c"), - }), - false, - }, - { - `list(list("a"), list("b"), list("c"))`, - // The types emerge here in a bit of a strange tangle because of - // the guesswork we do when trying to recover lost information from - // HIL, but the rest of the language doesn't really care whether - // we use lists or tuples here as long as we are consistent with - // the type system invariants. - cty.ListVal([]cty.Value{ - cty.TupleVal([]cty.Value{cty.StringVal("a")}), - cty.TupleVal([]cty.Value{cty.StringVal("b")}), - cty.TupleVal([]cty.Value{cty.StringVal("c")}), - }), - false, - }, - { - `list(list("a"), "b")`, - cty.DynamicVal, - true, // inconsistent types - }, - { - `length([])`, - cty.NumberIntVal(0), - false, - }, - { - `length([2])`, - cty.NumberIntVal(1), - false, - }, - { - `jsonencode(2)`, - cty.StringVal(`2`), - false, - }, - { - `jsonencode(true)`, - cty.StringVal(`true`), - false, - }, - { - `jsonencode("foo")`, - cty.StringVal(`"foo"`), - false, - }, - { - `jsonencode({})`, - cty.StringVal(`{}`), - false, - }, - { - `jsonencode([1])`, - cty.StringVal(`[1]`), - false, - }, - { - `jsondecode("{}")`, - cty.EmptyObjectVal, - false, - }, - { - `jsondecode("[5, true]")[0]`, - cty.NumberIntVal(5), - false, - }, - } - - for _, test := range tests { - t.Run(test.Expr, func(t *testing.T) { - expr, diags := hcl2syntax.ParseExpression([]byte(test.Expr), "", hcl2.Pos{Line: 1, Column: 1}) - if len(diags) != 0 { - for _, diag := range diags { - t.Logf("- %s", diag) - } - t.Fatalf("unexpected diagnostics while parsing expression") - } - - got, diags := expr.Value(&hcl2.EvalContext{ - Functions: hcl2InterpolationFuncs(), - }) - gotErr := diags.HasErrors() - if gotErr != test.Err { - if test.Err { - t.Errorf("expected errors but got none") - } else { - t.Errorf("unexpected errors") - for _, diag := range diags { - t.Logf("- %s", diag) - } - } - } - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\nexpr: %s\ngot: %#v\nwant: %#v", test.Expr, got, test.Want) - } - }) - } -} diff --git a/config/interpolate.go b/config/interpolate.go index 599e5ecdb..4cc9e9757 100644 --- a/config/interpolate.go +++ b/config/interpolate.go @@ -29,10 +29,6 @@ func (r varRange) SourceRange() tfdiags.SourceRange { return r.rng } -func makeVarRange(rng tfdiags.SourceRange) varRange { - return varRange{rng} -} - // CountVariable is a variable for referencing information about // the count. 
type CountVariable struct { diff --git a/config/interpolate_funcs.go b/config/interpolate_funcs.go deleted file mode 100644 index 6a2050c91..000000000 --- a/config/interpolate_funcs.go +++ /dev/null @@ -1,1761 +0,0 @@ -package config - -import ( - "bytes" - "compress/gzip" - "crypto/md5" - "crypto/rsa" - "crypto/sha1" - "crypto/sha256" - "crypto/sha512" - "crypto/x509" - "encoding/base64" - "encoding/hex" - "encoding/json" - "encoding/pem" - "fmt" - "io/ioutil" - "math" - "net" - "net/url" - "path/filepath" - "regexp" - "sort" - "strconv" - "strings" - "time" - - "github.com/apparentlymart/go-cidr/cidr" - "github.com/hashicorp/go-uuid" - "github.com/hashicorp/hil" - "github.com/hashicorp/hil/ast" - "github.com/mitchellh/go-homedir" - "golang.org/x/crypto/bcrypt" -) - -// stringSliceToVariableValue converts a string slice into the value -// required to be returned from interpolation functions which return -// TypeList. -func stringSliceToVariableValue(values []string) []ast.Variable { - output := make([]ast.Variable, len(values)) - for index, value := range values { - output[index] = ast.Variable{ - Type: ast.TypeString, - Value: value, - } - } - return output -} - -// listVariableSliceToVariableValue converts a list of lists into the value -// required to be returned from interpolation functions which return TypeList. -func listVariableSliceToVariableValue(values [][]ast.Variable) []ast.Variable { - output := make([]ast.Variable, len(values)) - - for index, value := range values { - output[index] = ast.Variable{ - Type: ast.TypeList, - Value: value, - } - } - return output -} - -func listVariableValueToStringSlice(values []ast.Variable) ([]string, error) { - output := make([]string, len(values)) - for index, value := range values { - if value.Type != ast.TypeString { - return []string{}, fmt.Errorf("list has non-string element (%T)", value.Type.String()) - } - output[index] = value.Value.(string) - } - return output, nil -} - -// Funcs is the mapping of built-in functions for configuration. 
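Aside: a registry like the `Funcs()` map deleted below was consumed by handing it to HIL's evaluator as a function scope. This sketch shows that wiring with a single stand-in `upper` function rather than Terraform's real function map:

```go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/hashicorp/hil"
	"github.com/hashicorp/hil/ast"
)

func main() {
	// A tiny stand-in for the Funcs() registry.
	funcs := map[string]ast.Function{
		"upper": {
			ArgTypes:   []ast.Type{ast.TypeString},
			ReturnType: ast.TypeString,
			Callback: func(args []interface{}) (interface{}, error) {
				return strings.ToUpper(args[0].(string)), nil
			},
		},
	}

	tree, err := hil.Parse(`${upper("hello")}`)
	if err != nil {
		log.Fatal(err)
	}

	result, err := hil.Eval(tree, &hil.EvalConfig{
		GlobalScope: &ast.BasicScope{FuncMap: funcs},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result.Value) // HELLO
}
```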
-func Funcs() map[string]ast.Function { - return map[string]ast.Function{ - "abs": interpolationFuncAbs(), - "basename": interpolationFuncBasename(), - "base64decode": interpolationFuncBase64Decode(), - "base64encode": interpolationFuncBase64Encode(), - "base64gzip": interpolationFuncBase64Gzip(), - "base64sha256": interpolationFuncBase64Sha256(), - "base64sha512": interpolationFuncBase64Sha512(), - "bcrypt": interpolationFuncBcrypt(), - "ceil": interpolationFuncCeil(), - "chomp": interpolationFuncChomp(), - "cidrhost": interpolationFuncCidrHost(), - "cidrnetmask": interpolationFuncCidrNetmask(), - "cidrsubnet": interpolationFuncCidrSubnet(), - "coalesce": interpolationFuncCoalesce(), - "coalescelist": interpolationFuncCoalesceList(), - "compact": interpolationFuncCompact(), - "concat": interpolationFuncConcat(), - "contains": interpolationFuncContains(), - "dirname": interpolationFuncDirname(), - "distinct": interpolationFuncDistinct(), - "element": interpolationFuncElement(), - "chunklist": interpolationFuncChunklist(), - "file": interpolationFuncFile(), - "matchkeys": interpolationFuncMatchKeys(), - "flatten": interpolationFuncFlatten(), - "floor": interpolationFuncFloor(), - "format": interpolationFuncFormat(), - "formatlist": interpolationFuncFormatList(), - "indent": interpolationFuncIndent(), - "index": interpolationFuncIndex(), - "join": interpolationFuncJoin(), - "jsonencode": interpolationFuncJSONEncode(), - "length": interpolationFuncLength(), - "list": interpolationFuncList(), - "log": interpolationFuncLog(), - "lower": interpolationFuncLower(), - "map": interpolationFuncMap(), - "max": interpolationFuncMax(), - "md5": interpolationFuncMd5(), - "merge": interpolationFuncMerge(), - "min": interpolationFuncMin(), - "pathexpand": interpolationFuncPathExpand(), - "pow": interpolationFuncPow(), - "uuid": interpolationFuncUUID(), - "replace": interpolationFuncReplace(), - "reverse": interpolationFuncReverse(), - "rsadecrypt": interpolationFuncRsaDecrypt(), - "sha1": interpolationFuncSha1(), - "sha256": interpolationFuncSha256(), - "sha512": interpolationFuncSha512(), - "signum": interpolationFuncSignum(), - "slice": interpolationFuncSlice(), - "sort": interpolationFuncSort(), - "split": interpolationFuncSplit(), - "substr": interpolationFuncSubstr(), - "timestamp": interpolationFuncTimestamp(), - "timeadd": interpolationFuncTimeAdd(), - "title": interpolationFuncTitle(), - "transpose": interpolationFuncTranspose(), - "trimspace": interpolationFuncTrimSpace(), - "upper": interpolationFuncUpper(), - "urlencode": interpolationFuncURLEncode(), - "zipmap": interpolationFuncZipMap(), - } -} - -// interpolationFuncList creates a list from the parameters passed -// to it. 
-func interpolationFuncList() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{}, - ReturnType: ast.TypeList, - Variadic: true, - VariadicType: ast.TypeAny, - Callback: func(args []interface{}) (interface{}, error) { - var outputList []ast.Variable - - for i, val := range args { - switch v := val.(type) { - case string: - outputList = append(outputList, ast.Variable{Type: ast.TypeString, Value: v}) - case []ast.Variable: - outputList = append(outputList, ast.Variable{Type: ast.TypeList, Value: v}) - case map[string]ast.Variable: - outputList = append(outputList, ast.Variable{Type: ast.TypeMap, Value: v}) - default: - return nil, fmt.Errorf("unexpected type %T for argument %d in list", v, i) - } - } - - // we don't support heterogeneous types, so make sure all types match the first - if len(outputList) > 0 { - firstType := outputList[0].Type - for i, v := range outputList[1:] { - if v.Type != firstType { - return nil, fmt.Errorf("unexpected type %s for argument %d in list", v.Type, i+1) - } - } - } - - return outputList, nil - }, - } -} - -// interpolationFuncMap creates a map from the parameters passed -// to it. -func interpolationFuncMap() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{}, - ReturnType: ast.TypeMap, - Variadic: true, - VariadicType: ast.TypeAny, - Callback: func(args []interface{}) (interface{}, error) { - outputMap := make(map[string]ast.Variable) - - if len(args)%2 != 0 { - return nil, fmt.Errorf("requires an even number of arguments, got %d", len(args)) - } - - var firstType *ast.Type - for i := 0; i < len(args); i += 2 { - key, ok := args[i].(string) - if !ok { - return nil, fmt.Errorf("argument %d represents a key, so it must be a string", i+1) - } - val := args[i+1] - variable, err := hil.InterfaceToVariable(val) - if err != nil { - return nil, err - } - // Enforce map type homogeneity - if firstType == nil { - firstType = &variable.Type - } else if variable.Type != *firstType { - return nil, fmt.Errorf("all map values must have the same type, got %s then %s", firstType.Printable(), variable.Type.Printable()) - } - // Check for duplicate keys - if _, ok := outputMap[key]; ok { - return nil, fmt.Errorf("argument %d is a duplicate key: %q", i+1, key) - } - outputMap[key] = variable - } - - return outputMap, nil - }, - } -} - -// interpolationFuncCompact strips a list of multi-variable values -// (e.g. as returned by "split") of any empty strings. 
-func interpolationFuncCompact() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeList}, - ReturnType: ast.TypeList, - Variadic: false, - Callback: func(args []interface{}) (interface{}, error) { - inputList := args[0].([]ast.Variable) - - var outputList []string - for _, val := range inputList { - strVal, ok := val.Value.(string) - if !ok { - return nil, fmt.Errorf( - "compact() may only be used with flat lists, this list contains elements of %s", - val.Type.Printable()) - } - if strVal == "" { - continue - } - - outputList = append(outputList, strVal) - } - return stringSliceToVariableValue(outputList), nil - }, - } -} - -// interpolationFuncCidrHost implements the "cidrhost" function that -// fills in the host part of a CIDR range address to create a single -// host address -func interpolationFuncCidrHost() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ - ast.TypeString, // starting CIDR mask - ast.TypeInt, // host number to insert - }, - ReturnType: ast.TypeString, - Variadic: false, - Callback: func(args []interface{}) (interface{}, error) { - hostNum := args[1].(int) - _, network, err := net.ParseCIDR(args[0].(string)) - if err != nil { - return nil, fmt.Errorf("invalid CIDR expression: %s", err) - } - - ip, err := cidr.Host(network, hostNum) - if err != nil { - return nil, err - } - - return ip.String(), nil - }, - } -} - -// interpolationFuncCidrNetmask implements the "cidrnetmask" function -// that returns the subnet mask in IP address notation. -func interpolationFuncCidrNetmask() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ - ast.TypeString, // CIDR mask - }, - ReturnType: ast.TypeString, - Variadic: false, - Callback: func(args []interface{}) (interface{}, error) { - _, network, err := net.ParseCIDR(args[0].(string)) - if err != nil { - return nil, fmt.Errorf("invalid CIDR expression: %s", err) - } - - return net.IP(network.Mask).String(), nil - }, - } -} - -// interpolationFuncCidrSubnet implements the "cidrsubnet" function that -// adds an additional subnet of the given length onto an existing -// IP block expressed in CIDR notation. -func interpolationFuncCidrSubnet() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ - ast.TypeString, // starting CIDR mask - ast.TypeInt, // number of bits to extend the prefix - ast.TypeInt, // network number to append to the prefix - }, - ReturnType: ast.TypeString, - Variadic: false, - Callback: func(args []interface{}) (interface{}, error) { - extraBits := args[1].(int) - subnetNum := args[2].(int) - _, network, err := net.ParseCIDR(args[0].(string)) - if err != nil { - return nil, fmt.Errorf("invalid CIDR expression: %s", err) - } - - // For portability with 32-bit systems where the subnet number - // will be a 32-bit int, we only allow extension of 32 bits in - // one call even if we're running on a 64-bit machine. - // (Of course, this is significant only for IPv6.) 
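Aside: the `cidrhost`/`cidrsubnet` implementations here (including the 32-bit portability note above and the `extraBits` check that follows it) are thin wrappers around `github.com/apparentlymart/go-cidr/cidr`. A small sketch calling that library directly; the example values match the expectation in the deleted shim tests (`cidrsubnet("10.1.0.0/16", 8, 2)` yields `10.1.2.0/24`):

```go
package main

import (
	"fmt"
	"log"
	"net"

	"github.com/apparentlymart/go-cidr/cidr"
)

func main() {
	_, network, err := net.ParseCIDR("10.1.0.0/16")
	if err != nil {
		log.Fatal(err)
	}

	// cidrsubnet("10.1.0.0/16", 8, 2): extend the prefix by 8 bits and
	// select network number 2 within the extended space.
	subnet, err := cidr.Subnet(network, 8, 2)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(subnet) // 10.1.2.0/24

	// cidrhost("10.1.2.0/24", 5): host number 5 within that subnet.
	host, err := cidr.Host(subnet, 5)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(host) // 10.1.2.5
}
```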
- if extraBits > 32 { - return nil, fmt.Errorf("may not extend prefix by more than 32 bits") - } - - newNetwork, err := cidr.Subnet(network, extraBits, subnetNum) - if err != nil { - return nil, err - } - - return newNetwork.String(), nil - }, - } -} - -// interpolationFuncCoalesce implements the "coalesce" function that -// returns the first non null / empty string from the provided input -func interpolationFuncCoalesce() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeString}, - ReturnType: ast.TypeString, - Variadic: true, - VariadicType: ast.TypeString, - Callback: func(args []interface{}) (interface{}, error) { - if len(args) < 2 { - return nil, fmt.Errorf("must provide at least two arguments") - } - for _, arg := range args { - argument := arg.(string) - - if argument != "" { - return argument, nil - } - } - return "", nil - }, - } -} - -// interpolationFuncCoalesceList implements the "coalescelist" function that -// returns the first non empty list from the provided input -func interpolationFuncCoalesceList() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeList}, - ReturnType: ast.TypeList, - Variadic: true, - VariadicType: ast.TypeList, - Callback: func(args []interface{}) (interface{}, error) { - if len(args) < 2 { - return nil, fmt.Errorf("must provide at least two arguments") - } - for _, arg := range args { - argument := arg.([]ast.Variable) - - if len(argument) > 0 { - return argument, nil - } - } - return make([]ast.Variable, 0), nil - }, - } -} - -// interpolationFuncContains returns true if an element is in the list -// and return false otherwise -func interpolationFuncContains() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeList, ast.TypeString}, - ReturnType: ast.TypeBool, - Callback: func(args []interface{}) (interface{}, error) { - _, err := interpolationFuncIndex().Callback(args) - if err != nil { - return false, nil - } - return true, nil - }, - } -} - -// interpolationFuncConcat implements the "concat" function that concatenates -// multiple lists. -func interpolationFuncConcat() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeList}, - ReturnType: ast.TypeList, - Variadic: true, - VariadicType: ast.TypeList, - Callback: func(args []interface{}) (interface{}, error) { - var outputList []ast.Variable - - for _, arg := range args { - for _, v := range arg.([]ast.Variable) { - switch v.Type { - case ast.TypeString: - outputList = append(outputList, v) - case ast.TypeList: - outputList = append(outputList, v) - case ast.TypeMap: - outputList = append(outputList, v) - default: - return nil, fmt.Errorf("concat() does not support lists of %s", v.Type.Printable()) - } - } - } - - // we don't support heterogeneous types, so make sure all types match the first - if len(outputList) > 0 { - firstType := outputList[0].Type - for _, v := range outputList[1:] { - if v.Type != firstType { - return nil, fmt.Errorf("unexpected %s in list of %s", v.Type.Printable(), firstType.Printable()) - } - } - } - - return outputList, nil - }, - } -} - -// interpolationFuncPow returns base x exponential of y. -func interpolationFuncPow() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeFloat, ast.TypeFloat}, - ReturnType: ast.TypeFloat, - Callback: func(args []interface{}) (interface{}, error) { - return math.Pow(args[0].(float64), args[1].(float64)), nil - }, - } -} - -// interpolationFuncFile implements the "file" function that allows -// loading contents from a file. 
-func interpolationFuncFile() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeString}, - ReturnType: ast.TypeString, - Callback: func(args []interface{}) (interface{}, error) { - path, err := homedir.Expand(args[0].(string)) - if err != nil { - return "", err - } - data, err := ioutil.ReadFile(path) - if err != nil { - return "", err - } - - return string(data), nil - }, - } -} - -// interpolationFuncFormat implements the "format" function that does -// string formatting. -func interpolationFuncFormat() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeString}, - Variadic: true, - VariadicType: ast.TypeAny, - ReturnType: ast.TypeString, - Callback: func(args []interface{}) (interface{}, error) { - format := args[0].(string) - return fmt.Sprintf(format, args[1:]...), nil - }, - } -} - -// interpolationFuncMax returns the maximum of the numeric arguments -func interpolationFuncMax() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeFloat}, - ReturnType: ast.TypeFloat, - Variadic: true, - VariadicType: ast.TypeFloat, - Callback: func(args []interface{}) (interface{}, error) { - max := args[0].(float64) - - for i := 1; i < len(args); i++ { - max = math.Max(max, args[i].(float64)) - } - - return max, nil - }, - } -} - -// interpolationFuncMin returns the minimum of the numeric arguments -func interpolationFuncMin() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeFloat}, - ReturnType: ast.TypeFloat, - Variadic: true, - VariadicType: ast.TypeFloat, - Callback: func(args []interface{}) (interface{}, error) { - min := args[0].(float64) - - for i := 1; i < len(args); i++ { - min = math.Min(min, args[i].(float64)) - } - - return min, nil - }, - } -} - -// interpolationFuncPathExpand will expand any `~`'s found with the full file path -func interpolationFuncPathExpand() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeString}, - ReturnType: ast.TypeString, - Callback: func(args []interface{}) (interface{}, error) { - return homedir.Expand(args[0].(string)) - }, - } -} - -// interpolationFuncCeil returns the the least integer value greater than or equal to the argument -func interpolationFuncCeil() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeFloat}, - ReturnType: ast.TypeInt, - Callback: func(args []interface{}) (interface{}, error) { - return int(math.Ceil(args[0].(float64))), nil - }, - } -} - -// interpolationFuncLog returns the logarithnm. 
-func interpolationFuncLog() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeFloat, ast.TypeFloat}, - ReturnType: ast.TypeFloat, - Callback: func(args []interface{}) (interface{}, error) { - return math.Log(args[0].(float64)) / math.Log(args[1].(float64)), nil - }, - } -} - -// interpolationFuncChomp removes trailing newlines from the given string -func interpolationFuncChomp() ast.Function { - newlines := regexp.MustCompile(`(?:\r\n?|\n)*\z`) - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeString}, - ReturnType: ast.TypeString, - Callback: func(args []interface{}) (interface{}, error) { - return newlines.ReplaceAllString(args[0].(string), ""), nil - }, - } -} - -// interpolationFuncFloorreturns returns the greatest integer value less than or equal to the argument -func interpolationFuncFloor() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeFloat}, - ReturnType: ast.TypeInt, - Callback: func(args []interface{}) (interface{}, error) { - return int(math.Floor(args[0].(float64))), nil - }, - } -} - -func interpolationFuncZipMap() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ - ast.TypeList, // Keys - ast.TypeList, // Values - }, - ReturnType: ast.TypeMap, - Callback: func(args []interface{}) (interface{}, error) { - keys := args[0].([]ast.Variable) - values := args[1].([]ast.Variable) - - if len(keys) != len(values) { - return nil, fmt.Errorf("count of keys (%d) does not match count of values (%d)", - len(keys), len(values)) - } - - for i, val := range keys { - if val.Type != ast.TypeString { - return nil, fmt.Errorf("keys must be strings. value at position %d is %s", - i, val.Type.Printable()) - } - } - - result := map[string]ast.Variable{} - for i := 0; i < len(keys); i++ { - result[keys[i].Value.(string)] = values[i] - } - - return result, nil - }, - } -} - -// interpolationFuncFormatList implements the "formatlist" function that does -// string formatting on lists. -func interpolationFuncFormatList() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeAny}, - Variadic: true, - VariadicType: ast.TypeAny, - ReturnType: ast.TypeList, - Callback: func(args []interface{}) (interface{}, error) { - // Make a copy of the variadic part of args - // to avoid modifying the original. - varargs := make([]interface{}, len(args)-1) - copy(varargs, args[1:]) - - // Verify we have some arguments - if len(varargs) == 0 { - return nil, fmt.Errorf("no arguments to formatlist") - } - - // Convert arguments that are lists into slices. - // Confirm along the way that all lists have the same length (n). - var n int - listSeen := false - for i := 1; i < len(args); i++ { - s, ok := args[i].([]ast.Variable) - if !ok { - continue - } - - // Mark that we've seen at least one list - listSeen = true - - // Convert the ast.Variable to a slice of strings - parts, err := listVariableValueToStringSlice(s) - if err != nil { - return nil, err - } - - // otherwise the list is sent down to be indexed - varargs[i-1] = parts - - // Check length - if n == 0 { - // first list we've seen - n = len(parts) - continue - } - if n != len(parts) { - return nil, fmt.Errorf("format: mismatched list lengths: %d != %d", n, len(parts)) - } - } - - // If we didn't see a list this is an error because we - // can't determine the return value length. - if !listSeen { - return nil, fmt.Errorf( - "formatlist requires at least one list argument") - } - - // Do the formatting. - format := args[0].(string) - - // Generate a list of formatted strings. 
- list := make([]string, n) - fmtargs := make([]interface{}, len(varargs)) - for i := 0; i < n; i++ { - for j, arg := range varargs { - switch arg := arg.(type) { - default: - fmtargs[j] = arg - case []string: - fmtargs[j] = arg[i] - } - } - list[i] = fmt.Sprintf(format, fmtargs...) - } - return stringSliceToVariableValue(list), nil - }, - } -} - -// interpolationFuncIndent indents a multi-line string with the -// specified number of spaces -func interpolationFuncIndent() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeInt, ast.TypeString}, - ReturnType: ast.TypeString, - Callback: func(args []interface{}) (interface{}, error) { - spaces := args[0].(int) - data := args[1].(string) - pad := strings.Repeat(" ", spaces) - return strings.Replace(data, "\n", "\n"+pad, -1), nil - }, - } -} - -// interpolationFuncIndex implements the "index" function that allows one to -// find the index of a specific element in a list -func interpolationFuncIndex() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeList, ast.TypeString}, - ReturnType: ast.TypeInt, - Callback: func(args []interface{}) (interface{}, error) { - haystack := args[0].([]ast.Variable) - needle := args[1].(string) - for index, element := range haystack { - if needle == element.Value { - return index, nil - } - } - return nil, fmt.Errorf("Could not find '%s' in '%s'", needle, haystack) - }, - } -} - -// interpolationFuncBasename implements the "dirname" function. -func interpolationFuncDirname() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeString}, - ReturnType: ast.TypeString, - Callback: func(args []interface{}) (interface{}, error) { - return filepath.Dir(args[0].(string)), nil - }, - } -} - -// interpolationFuncDistinct implements the "distinct" function that -// removes duplicate elements from a list. -func interpolationFuncDistinct() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeList}, - ReturnType: ast.TypeList, - Variadic: true, - VariadicType: ast.TypeList, - Callback: func(args []interface{}) (interface{}, error) { - var list []string - - if len(args) != 1 { - return nil, fmt.Errorf("accepts only one argument.") - } - - if argument, ok := args[0].([]ast.Variable); ok { - for _, element := range argument { - if element.Type != ast.TypeString { - return nil, fmt.Errorf( - "only works for flat lists, this list contains elements of %s", - element.Type.Printable()) - } - list = appendIfMissing(list, element.Value.(string)) - } - } - - return stringSliceToVariableValue(list), nil - }, - } -} - -// helper function to add an element to a list, if it does not already exsit -func appendIfMissing(slice []string, element string) []string { - for _, ele := range slice { - if ele == element { - return slice - } - } - return append(slice, element) -} - -// for two lists `keys` and `values` of equal length, returns all elements -// from `values` where the corresponding element from `keys` is in `searchset`. 
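Aside: the `matchkeys` contract described above is easy to restate in plain Go. The sketch below mirrors the documented behavior with a hypothetical `matchKeys` helper over string slices (the real function operated on `ast.Variable` values and compared them via `compareSimpleVariables`):

```go
package main

import "fmt"

// matchKeys returns values[i] for every i where keys[i] appears in
// searchset, preserving the order of values.
func matchKeys(values, keys, searchset []string) []string {
	set := make(map[string]bool, len(searchset))
	for _, s := range searchset {
		set[s] = true
	}
	var out []string
	for i, k := range keys {
		if set[k] {
			out = append(out, values[i])
		}
	}
	return out
}

func main() {
	values := []string{"i-1", "i-2", "i-3"}
	keys := []string{"a", "b", "a"}
	fmt.Println(matchKeys(values, keys, []string{"a"})) // [i-1 i-3]
}
```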
-func interpolationFuncMatchKeys() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeList, ast.TypeList, ast.TypeList}, - ReturnType: ast.TypeList, - Callback: func(args []interface{}) (interface{}, error) { - output := make([]ast.Variable, 0) - - values, _ := args[0].([]ast.Variable) - keys, _ := args[1].([]ast.Variable) - searchset, _ := args[2].([]ast.Variable) - - if len(keys) != len(values) { - return nil, fmt.Errorf("length of keys and values should be equal") - } - - for i, key := range keys { - for _, search := range searchset { - if res, err := compareSimpleVariables(key, search); err != nil { - return nil, err - } else if res == true { - output = append(output, values[i]) - break - } - } - } - // if searchset is empty, then output is an empty list as well. - // if we haven't matched any key, then output is an empty list. - return output, nil - }, - } -} - -// compare two variables of the same type, i.e. non complex one, such as TypeList or TypeMap -func compareSimpleVariables(a, b ast.Variable) (bool, error) { - if a.Type != b.Type { - return false, fmt.Errorf( - "won't compare items of different types %s and %s", - a.Type.Printable(), b.Type.Printable()) - } - switch a.Type { - case ast.TypeString: - return a.Value.(string) == b.Value.(string), nil - default: - return false, fmt.Errorf( - "can't compare items of type %s", - a.Type.Printable()) - } -} - -// interpolationFuncJoin implements the "join" function that allows -// multi-variable values to be joined by some character. -func interpolationFuncJoin() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeString}, - Variadic: true, - VariadicType: ast.TypeList, - ReturnType: ast.TypeString, - Callback: func(args []interface{}) (interface{}, error) { - var list []string - - if len(args) < 2 { - return nil, fmt.Errorf("not enough arguments to join()") - } - - for _, arg := range args[1:] { - for _, part := range arg.([]ast.Variable) { - if part.Type != ast.TypeString { - return nil, fmt.Errorf( - "only works on flat lists, this list contains elements of %s", - part.Type.Printable()) - } - list = append(list, part.Value.(string)) - } - } - - return strings.Join(list, args[0].(string)), nil - }, - } -} - -// interpolationFuncJSONEncode implements the "jsonencode" function that encodes -// a string, list, or map as its JSON representation. 
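Aside: as the deleted shim file noted earlier, cty's `jsonencode` is the more complete implementation, and it survives this deletion as `stdlib.JSONEncodeFunc`. A minimal example of calling it directly (the value is arbitrary; cty marshals object attributes in lexical name order, so the output below should be stable):

```go
package main

import (
	"fmt"
	"log"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function/stdlib"
)

func main() {
	v, err := stdlib.JSONEncodeFunc.Call([]cty.Value{
		cty.ObjectVal(map[string]cty.Value{
			"enabled": cty.True,
			"count":   cty.NumberIntVal(2),
		}),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(v.AsString()) // {"count":2,"enabled":true}
}
```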
-func interpolationFuncJSONEncode() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeAny}, - ReturnType: ast.TypeString, - Callback: func(args []interface{}) (interface{}, error) { - var toEncode interface{} - - switch typedArg := args[0].(type) { - case string: - toEncode = typedArg - - case []ast.Variable: - strings := make([]string, len(typedArg)) - - for i, v := range typedArg { - if v.Type != ast.TypeString { - variable, _ := hil.InterfaceToVariable(typedArg) - toEncode, _ = hil.VariableToInterface(variable) - - jEnc, err := json.Marshal(toEncode) - if err != nil { - return "", fmt.Errorf("failed to encode JSON data '%s'", toEncode) - } - return string(jEnc), nil - - } - strings[i] = v.Value.(string) - } - toEncode = strings - - case map[string]ast.Variable: - stringMap := make(map[string]string) - for k, v := range typedArg { - if v.Type != ast.TypeString { - variable, _ := hil.InterfaceToVariable(typedArg) - toEncode, _ = hil.VariableToInterface(variable) - - jEnc, err := json.Marshal(toEncode) - if err != nil { - return "", fmt.Errorf("failed to encode JSON data '%s'", toEncode) - } - return string(jEnc), nil - } - stringMap[k] = v.Value.(string) - } - toEncode = stringMap - - default: - return "", fmt.Errorf("unknown type for JSON encoding: %T", args[0]) - } - - jEnc, err := json.Marshal(toEncode) - if err != nil { - return "", fmt.Errorf("failed to encode JSON data '%s'", toEncode) - } - return string(jEnc), nil - }, - } -} - -// interpolationFuncReplace implements the "replace" function that does -// string replacement. -func interpolationFuncReplace() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeString, ast.TypeString, ast.TypeString}, - ReturnType: ast.TypeString, - Callback: func(args []interface{}) (interface{}, error) { - s := args[0].(string) - search := args[1].(string) - replace := args[2].(string) - - // We search/replace using a regexp if the string is surrounded - // in forward slashes. 
- if len(search) > 1 && search[0] == '/' && search[len(search)-1] == '/' { - re, err := regexp.Compile(search[1 : len(search)-1]) - if err != nil { - return nil, err - } - - return re.ReplaceAllString(s, replace), nil - } - - return strings.Replace(s, search, replace, -1), nil - }, - } -} - -// interpolationFuncReverse implements the "reverse" function that does list reversal -func interpolationFuncReverse() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeList}, - ReturnType: ast.TypeList, - Variadic: false, - Callback: func(args []interface{}) (interface{}, error) { - inputList := args[0].([]ast.Variable) - - reversedList := make([]ast.Variable, len(inputList)) - for idx := range inputList { - reversedList[len(inputList)-1-idx] = inputList[idx] - } - - return reversedList, nil - }, - } -} - -func interpolationFuncLength() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeAny}, - ReturnType: ast.TypeInt, - Variadic: false, - Callback: func(args []interface{}) (interface{}, error) { - subject := args[0] - - switch typedSubject := subject.(type) { - case string: - return len(typedSubject), nil - case []ast.Variable: - return len(typedSubject), nil - case map[string]ast.Variable: - return len(typedSubject), nil - } - - return 0, fmt.Errorf("arguments to length() must be a string, list, or map") - }, - } -} - -func interpolationFuncSignum() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeInt}, - ReturnType: ast.TypeInt, - Variadic: false, - Callback: func(args []interface{}) (interface{}, error) { - num := args[0].(int) - switch { - case num < 0: - return -1, nil - case num > 0: - return +1, nil - default: - return 0, nil - } - }, - } -} - -// interpolationFuncSlice returns a portion of the input list between from, inclusive and to, exclusive. 
-func interpolationFuncSlice() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ - ast.TypeList, // inputList - ast.TypeInt, // from - ast.TypeInt, // to - }, - ReturnType: ast.TypeList, - Variadic: false, - Callback: func(args []interface{}) (interface{}, error) { - inputList := args[0].([]ast.Variable) - from := args[1].(int) - to := args[2].(int) - - if from < 0 { - return nil, fmt.Errorf("from index must be >= 0") - } - if to > len(inputList) { - return nil, fmt.Errorf("to index must be <= length of the input list") - } - if from > to { - return nil, fmt.Errorf("from index must be <= to index") - } - - var outputList []ast.Variable - for i, val := range inputList { - if i >= from && i < to { - outputList = append(outputList, val) - } - } - return outputList, nil - }, - } -} - -// interpolationFuncSort sorts a list of a strings lexographically -func interpolationFuncSort() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeList}, - ReturnType: ast.TypeList, - Variadic: false, - Callback: func(args []interface{}) (interface{}, error) { - inputList := args[0].([]ast.Variable) - - // Ensure that all the list members are strings and - // create a string slice from them - members := make([]string, len(inputList)) - for i, val := range inputList { - if val.Type != ast.TypeString { - return nil, fmt.Errorf( - "sort() may only be used with lists of strings - %s at index %d", - val.Type.String(), i) - } - - members[i] = val.Value.(string) - } - - sort.Strings(members) - return stringSliceToVariableValue(members), nil - }, - } -} - -// interpolationFuncSplit implements the "split" function that allows -// strings to split into multi-variable values -func interpolationFuncSplit() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeString, ast.TypeString}, - ReturnType: ast.TypeList, - Callback: func(args []interface{}) (interface{}, error) { - sep := args[0].(string) - s := args[1].(string) - elements := strings.Split(s, sep) - return stringSliceToVariableValue(elements), nil - }, - } -} - -// interpolationFuncLookup implements the "lookup" function that allows -// dynamic lookups of map types within a Terraform configuration. -func interpolationFuncLookup(vs map[string]ast.Variable) ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeMap, ast.TypeString}, - ReturnType: ast.TypeString, - Variadic: true, - VariadicType: ast.TypeString, - Callback: func(args []interface{}) (interface{}, error) { - defaultValue := "" - defaultValueSet := false - if len(args) > 2 { - defaultValue = args[2].(string) - defaultValueSet = true - } - if len(args) > 3 { - return "", fmt.Errorf("lookup() takes no more than three arguments") - } - index := args[1].(string) - mapVar := args[0].(map[string]ast.Variable) - - v, ok := mapVar[index] - if !ok { - if defaultValueSet { - return defaultValue, nil - } else { - return "", fmt.Errorf( - "lookup failed to find '%s'", - args[1].(string)) - } - } - if v.Type != ast.TypeString { - return nil, fmt.Errorf( - "lookup() may only be used with flat maps, this map contains elements of %s", - v.Type.Printable()) - } - - return v.Value.(string), nil - }, - } -} - -// interpolationFuncElement implements the "element" function that allows -// a specific index to be looked up in a multi-variable value. Note that this will -// wrap if the index is larger than the number of elements in the multi-variable value. 
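Aside: the wrap-around rule documented for `element()` above is just modulo indexing. A tiny stand-alone illustration (the `pick` helper name is invented):

```go
package main

import "fmt"

// pick mirrors element()'s documented rule: for a non-negative index,
// the effective position is index modulo the list length.
func pick(list []string, index int) string {
	return list[index%len(list)]
}

func main() {
	azs := []string{"us-east-1a", "us-east-1b"}
	for i := 0; i < 5; i++ {
		fmt.Println(pick(azs, i)) // a, b, a, b, a
	}
}
```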
-
-// interpolationFuncElement implements the "element" function that allows
-// a specific index to be looked up in a multi-variable value. Note that this will
-// wrap if the index is larger than the number of elements in the multi-variable value.
-func interpolationFuncElement() ast.Function {
-	return ast.Function{
-		ArgTypes:   []ast.Type{ast.TypeList, ast.TypeString},
-		ReturnType: ast.TypeString,
-		Callback: func(args []interface{}) (interface{}, error) {
-			list := args[0].([]ast.Variable)
-			if len(list) == 0 {
-				return nil, fmt.Errorf("element() may not be used with an empty list")
-			}
-
-			index, err := strconv.Atoi(args[1].(string))
-			if err != nil || index < 0 {
-				return "", fmt.Errorf(
-					"invalid number for index, got %s", args[1])
-			}
-
-			resolvedIndex := index % len(list)
-
-			v := list[resolvedIndex]
-			if v.Type != ast.TypeString {
-				return nil, fmt.Errorf(
-					"element() may only be used with flat lists, this list contains elements of %s",
-					v.Type.Printable())
-			}
-			return v.Value, nil
-		},
-	}
-}
-
-// returns the `list` items chunked by `size`.
-func interpolationFuncChunklist() ast.Function {
-	return ast.Function{
-		ArgTypes: []ast.Type{
-			ast.TypeList, // inputList
-			ast.TypeInt,  // size
-		},
-		ReturnType: ast.TypeList,
-		Callback: func(args []interface{}) (interface{}, error) {
-			output := make([]ast.Variable, 0)
-
-			values, _ := args[0].([]ast.Variable)
-			size, _ := args[1].(int)
-
-			// errors if size is negative
-			if size < 0 {
-				return nil, fmt.Errorf("the size argument must not be negative")
-			}
-
-			// if size is 0, returns a list made of the initial list
-			if size == 0 {
-				output = append(output, ast.Variable{
-					Type:  ast.TypeList,
-					Value: values,
-				})
-				return output, nil
-			}
-
-			variables := make([]ast.Variable, 0)
-			chunk := ast.Variable{
-				Type:  ast.TypeList,
-				Value: variables,
-			}
-			l := len(values)
-			for i, v := range values {
-				variables = append(variables, v)
-
-				// Emit a chunk every `size` items, or when reaching the end of the values
-				if (i+1)%size == 0 || (i+1) == l {
-					chunk.Value = variables
-					output = append(output, chunk)
-					variables = make([]ast.Variable, 0)
-				}
-			}
-
-			return output, nil
-		},
-	}
-}
-
-// interpolationFuncKeys implements the "keys" function that yields a list of
-// keys of map types within a Terraform configuration.
-func interpolationFuncKeys(vs map[string]ast.Variable) ast.Function {
-	return ast.Function{
-		ArgTypes:   []ast.Type{ast.TypeMap},
-		ReturnType: ast.TypeList,
-		Callback: func(args []interface{}) (interface{}, error) {
-			mapVar := args[0].(map[string]ast.Variable)
-			keys := make([]string, 0)
-
-			for k := range mapVar {
-				keys = append(keys, k)
-			}
-
-			sort.Strings(keys)
-
-			// Keys are guaranteed to be strings
-			return stringSliceToVariableValue(keys), nil
-		},
-	}
-}
-
-// interpolationFuncValues implements the "values" function that yields a list of
-// values of map types within a Terraform configuration.
-func interpolationFuncValues(vs map[string]ast.Variable) ast.Function {
-	return ast.Function{
-		ArgTypes:   []ast.Type{ast.TypeMap},
-		ReturnType: ast.TypeList,
-		Callback: func(args []interface{}) (interface{}, error) {
-			mapVar := args[0].(map[string]ast.Variable)
-			keys := make([]string, 0)
-
-			for k := range mapVar {
-				keys = append(keys, k)
-			}
-
-			sort.Strings(keys)
-
-			values := make([]string, len(keys))
-			for index, key := range keys {
-				if value, ok := mapVar[key].Value.(string); ok {
-					values[index] = value
-				} else {
-					return "", fmt.Errorf("values(): %q has element with bad type %s",
-						key, mapVar[key].Type)
-				}
-			}
-
-			variable, err := hil.InterfaceToVariable(values)
-			if err != nil {
-				return nil, err
-			}
-
-			return variable.Value, nil
-		},
-	}
-}
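The modulo in `resolvedIndex := index % len(list)` is what gives `element` its wrap-around behavior for out-of-range indexes (the tests below exercise this with `element(var.a_list, "2")` on a two-item list). A standalone sketch of just that arithmetic:

```go
package main

import "fmt"

func main() {
	// Mirrors the two-item list used by the element() tests below.
	list := []string{"foo", "baz"}

	// element() resolves the index modulo the list length, so index 2
	// wraps back around to list[0] instead of failing out of range.
	for _, index := range []int{0, 1, 2, 3} {
		fmt.Printf("element(list, %d) => %s\n", index, list[index%len(list)])
	}
}
```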
-
-// interpolationFuncBasename implements the "basename" function.
-func interpolationFuncBasename() ast.Function {
-	return ast.Function{
-		ArgTypes:   []ast.Type{ast.TypeString},
-		ReturnType: ast.TypeString,
-		Callback: func(args []interface{}) (interface{}, error) {
-			return filepath.Base(args[0].(string)), nil
-		},
-	}
-}
-
-// interpolationFuncBase64Encode implements the "base64encode" function that
-// allows Base64 encoding.
-func interpolationFuncBase64Encode() ast.Function {
-	return ast.Function{
-		ArgTypes:   []ast.Type{ast.TypeString},
-		ReturnType: ast.TypeString,
-		Callback: func(args []interface{}) (interface{}, error) {
-			s := args[0].(string)
-			return base64.StdEncoding.EncodeToString([]byte(s)), nil
-		},
-	}
-}
-
-// interpolationFuncBase64Decode implements the "base64decode" function that
-// allows Base64 decoding.
-func interpolationFuncBase64Decode() ast.Function {
-	return ast.Function{
-		ArgTypes:   []ast.Type{ast.TypeString},
-		ReturnType: ast.TypeString,
-		Callback: func(args []interface{}) (interface{}, error) {
-			s := args[0].(string)
-			sDec, err := base64.StdEncoding.DecodeString(s)
-			if err != nil {
-				return "", fmt.Errorf("failed to decode base64 data '%s'", s)
-			}
-			return string(sDec), nil
-		},
-	}
-}
-
-// interpolationFuncBase64Gzip implements the "base64gzip" function that
-// gzip-compresses the input and encodes the result using Base64.
-func interpolationFuncBase64Gzip() ast.Function {
-	return ast.Function{
-		ArgTypes:   []ast.Type{ast.TypeString},
-		ReturnType: ast.TypeString,
-		Callback: func(args []interface{}) (interface{}, error) {
-			s := args[0].(string)
-
-			var b bytes.Buffer
-			gz := gzip.NewWriter(&b)
-			if _, err := gz.Write([]byte(s)); err != nil {
-				return "", fmt.Errorf("failed to write gzip raw data: '%s'", s)
-			}
-			if err := gz.Flush(); err != nil {
-				return "", fmt.Errorf("failed to flush gzip writer: '%s'", s)
-			}
-			if err := gz.Close(); err != nil {
-				return "", fmt.Errorf("failed to close gzip writer: '%s'", s)
-			}
-
-			return base64.StdEncoding.EncodeToString(b.Bytes()), nil
-		},
-	}
-}
-
-// interpolationFuncLower implements the "lower" function that does
-// string lower casing.
-func interpolationFuncLower() ast.Function {
-	return ast.Function{
-		ArgTypes:   []ast.Type{ast.TypeString},
-		ReturnType: ast.TypeString,
-		Callback: func(args []interface{}) (interface{}, error) {
-			toLower := args[0].(string)
-			return strings.ToLower(toLower), nil
-		},
-	}
-}
-
-func interpolationFuncMd5() ast.Function {
-	return ast.Function{
-		ArgTypes:   []ast.Type{ast.TypeString},
-		ReturnType: ast.TypeString,
-		Callback: func(args []interface{}) (interface{}, error) {
-			s := args[0].(string)
-			h := md5.New()
-			h.Write([]byte(s))
-			hash := hex.EncodeToString(h.Sum(nil))
-			return hash, nil
-		},
-	}
-}
-
-func interpolationFuncMerge() ast.Function {
-	return ast.Function{
-		ArgTypes:     []ast.Type{ast.TypeMap},
-		ReturnType:   ast.TypeMap,
-		Variadic:     true,
-		VariadicType: ast.TypeMap,
-		Callback: func(args []interface{}) (interface{}, error) {
-			outputMap := make(map[string]ast.Variable)
-
-			for _, arg := range args {
-				for k, v := range arg.(map[string]ast.Variable) {
-					outputMap[k] = v
-				}
-			}
-
-			return outputMap, nil
-		},
-	}
-}
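For reference, the `base64gzip` callback above is this pipeline: gzip-compress the input, then Base64-encode the compressed bytes. A standalone sketch (note that the explicit `Flush()` before `Close()` emits a deflate sync block, so it affects the exact bytes produced):

```go
package main

import (
	"bytes"
	"compress/gzip"
	"encoding/base64"
	"fmt"
	"log"
)

// base64Gzip mirrors the callback above: gzip, flush, close, then
// Base64-encode whatever ended up in the buffer.
func base64Gzip(s string) (string, error) {
	var b bytes.Buffer
	gz := gzip.NewWriter(&b)
	if _, err := gz.Write([]byte(s)); err != nil {
		return "", err
	}
	if err := gz.Flush(); err != nil {
		return "", err
	}
	if err := gz.Close(); err != nil {
		return "", err
	}
	return base64.StdEncoding.EncodeToString(b.Bytes()), nil
}

func main() {
	out, err := base64Gzip("test")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```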
-
-// interpolationFuncUpper implements the "upper" function that does
-// string upper casing.
-func interpolationFuncUpper() ast.Function {
-	return ast.Function{
-		ArgTypes:   []ast.Type{ast.TypeString},
-		ReturnType: ast.TypeString,
-		Callback: func(args []interface{}) (interface{}, error) {
-			toUpper := args[0].(string)
-			return strings.ToUpper(toUpper), nil
-		},
-	}
-}
-
-func interpolationFuncSha1() ast.Function {
-	return ast.Function{
-		ArgTypes:   []ast.Type{ast.TypeString},
-		ReturnType: ast.TypeString,
-		Callback: func(args []interface{}) (interface{}, error) {
-			s := args[0].(string)
-			h := sha1.New()
-			h.Write([]byte(s))
-			hash := hex.EncodeToString(h.Sum(nil))
-			return hash, nil
-		},
-	}
-}
-
-// interpolationFuncSha256 returns the hexadecimal representation of the
-// SHA-256 sum of its argument.
-func interpolationFuncSha256() ast.Function {
-	return ast.Function{
-		ArgTypes:   []ast.Type{ast.TypeString},
-		ReturnType: ast.TypeString,
-		Callback: func(args []interface{}) (interface{}, error) {
-			s := args[0].(string)
-			h := sha256.New()
-			h.Write([]byte(s))
-			hash := hex.EncodeToString(h.Sum(nil))
-			return hash, nil
-		},
-	}
-}
-
-func interpolationFuncSha512() ast.Function {
-	return ast.Function{
-		ArgTypes:   []ast.Type{ast.TypeString},
-		ReturnType: ast.TypeString,
-		Callback: func(args []interface{}) (interface{}, error) {
-			s := args[0].(string)
-			h := sha512.New()
-			h.Write([]byte(s))
-			hash := hex.EncodeToString(h.Sum(nil))
-			return hash, nil
-		},
-	}
-}
-
-func interpolationFuncTrimSpace() ast.Function {
-	return ast.Function{
-		ArgTypes:   []ast.Type{ast.TypeString},
-		ReturnType: ast.TypeString,
-		Callback: func(args []interface{}) (interface{}, error) {
-			trimSpace := args[0].(string)
-			return strings.TrimSpace(trimSpace), nil
-		},
-	}
-}
-
-func interpolationFuncBase64Sha256() ast.Function {
-	return ast.Function{
-		ArgTypes:   []ast.Type{ast.TypeString},
-		ReturnType: ast.TypeString,
-		Callback: func(args []interface{}) (interface{}, error) {
-			s := args[0].(string)
-			h := sha256.New()
-			h.Write([]byte(s))
-			shaSum := h.Sum(nil)
-			encoded := base64.StdEncoding.EncodeToString(shaSum[:])
-			return encoded, nil
-		},
-	}
-}
-
-func interpolationFuncBase64Sha512() ast.Function {
-	return ast.Function{
-		ArgTypes:   []ast.Type{ast.TypeString},
-		ReturnType: ast.TypeString,
-		Callback: func(args []interface{}) (interface{}, error) {
-			s := args[0].(string)
-			h := sha512.New()
-			h.Write([]byte(s))
-			shaSum := h.Sum(nil)
-			encoded := base64.StdEncoding.EncodeToString(shaSum[:])
-			return encoded, nil
-		},
-	}
-}
-
-func interpolationFuncBcrypt() ast.Function {
-	return ast.Function{
-		ArgTypes:     []ast.Type{ast.TypeString},
-		Variadic:     true,
-		VariadicType: ast.TypeString,
-		ReturnType:   ast.TypeString,
-		Callback: func(args []interface{}) (interface{}, error) {
-			defaultCost := 10
-
-			if len(args) > 1 {
-				costStr := args[1].(string)
-				cost, err := strconv.Atoi(costStr)
-				if err != nil {
-					return "", err
-				}
-
-				defaultCost = cost
-			}
-
-			if len(args) > 2 {
-				return "", fmt.Errorf("bcrypt() takes no more than two arguments")
-			}
-
-			input := args[0].(string)
-			out, err := bcrypt.GenerateFromPassword([]byte(input), defaultCost)
-			if err != nil {
-				return "", fmt.Errorf("error occurred generating password %s", err.Error())
-			}
-
-			return string(out), nil
-		},
-	}
-}
-
-func interpolationFuncUUID() ast.Function {
-	return ast.Function{
-		ArgTypes:   []ast.Type{},
-		ReturnType: ast.TypeString,
-		Callback: func(args []interface{}) (interface{}, error) {
-			return uuid.GenerateUUID()
-		},
-	}
-}
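Note the difference between `base64sha256` above, which Base64-encodes the raw digest bytes, and the composition `base64encode(sha256(...))`, which Base64-encodes the digest's hex string; the tests for these functions call out the distinction explicitly. A quick demonstration:

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"encoding/hex"
	"fmt"
)

func main() {
	sum := sha256.Sum256([]byte("test"))

	// base64sha256("test"): Base64 of the raw 32-byte digest.
	fmt.Println(base64.StdEncoding.EncodeToString(sum[:]))
	// n4bQgYhMfWWaL+qgxVrQFaO/TxsrC4Is0V1sFbDwCgg=

	// base64encode(sha256("test")): Base64 of the 64-character hex string.
	hexDigest := hex.EncodeToString(sum[:])
	fmt.Println(base64.StdEncoding.EncodeToString([]byte(hexDigest)))
	// OWY4NmQwODE4ODRjN2Q2NTlhMmZlYWEwYzU1YWQwMTVhM2JmNGYxYjJiMGI4MjJjZDE1ZDZjMTViMGYwMGEwOA==
}
```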
-
-// interpolationFuncTimestamp implements the "timestamp" function that returns
-// the current UTC time in RFC 3339 format.
-func interpolationFuncTimestamp() ast.Function {
-	return ast.Function{
-		ArgTypes:   []ast.Type{},
-		ReturnType: ast.TypeString,
-		Callback: func(args []interface{}) (interface{}, error) {
-			return time.Now().UTC().Format(time.RFC3339), nil
-		},
-	}
-}
-
-func interpolationFuncTimeAdd() ast.Function {
-	return ast.Function{
-		ArgTypes: []ast.Type{
-			ast.TypeString, // input timestamp string in RFC3339 format
-			ast.TypeString, // duration to add to input timestamp that should be parsable by time.ParseDuration
-		},
-		ReturnType: ast.TypeString,
-		Callback: func(args []interface{}) (interface{}, error) {
-
-			ts, err := time.Parse(time.RFC3339, args[0].(string))
-			if err != nil {
-				return nil, err
-			}
-			duration, err := time.ParseDuration(args[1].(string))
-			if err != nil {
-				return nil, err
-			}
-
-			return ts.Add(duration).Format(time.RFC3339), nil
-		},
-	}
-}
-
-// interpolationFuncTitle implements the "title" function that returns a copy of the
-// string in which the first characters of all the words are capitalized.
-func interpolationFuncTitle() ast.Function {
-	return ast.Function{
-		ArgTypes:   []ast.Type{ast.TypeString},
-		ReturnType: ast.TypeString,
-		Callback: func(args []interface{}) (interface{}, error) {
-			toTitle := args[0].(string)
-			return strings.Title(toTitle), nil
-		},
-	}
-}
-
-// interpolationFuncSubstr implements the "substr" function that allows strings
-// to be truncated.
-func interpolationFuncSubstr() ast.Function {
-	return ast.Function{
-		ArgTypes: []ast.Type{
-			ast.TypeString, // input string
-			ast.TypeInt,    // offset
-			ast.TypeInt,    // length
-		},
-		ReturnType: ast.TypeString,
-		Callback: func(args []interface{}) (interface{}, error) {
-			str := args[0].(string)
-			offset := args[1].(int)
-			length := args[2].(int)
-
-			// Interpret a negative offset as being equivalent to a positive
-			// offset taken from the end of the string.
-			if offset < 0 {
-				offset += len(str)
-			}
-
-			// Interpret a length of `-1` as indicating that the substring
-			// should start at `offset` and continue until the end of the
-			// string. Any other negative length (other than `-1`) is invalid.
- if length == -1 { - length = len(str) - } else if length >= 0 { - length += offset - } else { - return nil, fmt.Errorf("length should be a non-negative integer") - } - - if offset > len(str) || offset < 0 { - return nil, fmt.Errorf("offset cannot be larger than the length of the string") - } - - if length > len(str) { - return nil, fmt.Errorf("'offset + length' cannot be larger than the length of the string") - } - - return str[offset:length], nil - }, - } -} - -// Flatten until it's not ast.TypeList -func flattener(finalList []ast.Variable, flattenList []ast.Variable) []ast.Variable { - for _, val := range flattenList { - if val.Type == ast.TypeList { - finalList = flattener(finalList, val.Value.([]ast.Variable)) - } else { - finalList = append(finalList, val) - } - } - return finalList -} - -// Flatten to single list -func interpolationFuncFlatten() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeList}, - ReturnType: ast.TypeList, - Variadic: false, - Callback: func(args []interface{}) (interface{}, error) { - inputList := args[0].([]ast.Variable) - - var outputList []ast.Variable - return flattener(outputList, inputList), nil - }, - } -} - -func interpolationFuncURLEncode() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeString}, - ReturnType: ast.TypeString, - Callback: func(args []interface{}) (interface{}, error) { - s := args[0].(string) - return url.QueryEscape(s), nil - }, - } -} - -// interpolationFuncTranspose implements the "transpose" function -// that converts a map (string,list) to a map (string,list) where -// the unique values of the original lists become the keys of the -// new map and the keys of the original map become values for the -// corresponding new keys. -func interpolationFuncTranspose() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeMap}, - ReturnType: ast.TypeMap, - Callback: func(args []interface{}) (interface{}, error) { - - inputMap := args[0].(map[string]ast.Variable) - outputMap := make(map[string]ast.Variable) - tmpMap := make(map[string][]string) - - for inKey, inVal := range inputMap { - if inVal.Type != ast.TypeList { - return nil, fmt.Errorf("transpose requires a map of lists of strings") - } - values := inVal.Value.([]ast.Variable) - for _, listVal := range values { - if listVal.Type != ast.TypeString { - return nil, fmt.Errorf("transpose requires the given map values to be lists of strings") - } - outKey := listVal.Value.(string) - if _, ok := tmpMap[outKey]; !ok { - tmpMap[outKey] = make([]string, 0) - } - outVal := tmpMap[outKey] - outVal = append(outVal, inKey) - sort.Strings(outVal) - tmpMap[outKey] = outVal - } - } - - for outKey, outVal := range tmpMap { - values := make([]ast.Variable, 0) - for _, v := range outVal { - values = append(values, ast.Variable{Type: ast.TypeString, Value: v}) - } - outputMap[outKey] = ast.Variable{Type: ast.TypeList, Value: values} - } - return outputMap, nil - }, - } -} - -// interpolationFuncAbs returns the absolute value of a given float. -func interpolationFuncAbs() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeFloat}, - ReturnType: ast.TypeFloat, - Callback: func(args []interface{}) (interface{}, error) { - return math.Abs(args[0].(float64)), nil - }, - } -} - -// interpolationFuncRsaDecrypt implements the "rsadecrypt" function that does -// RSA decryption. 
-func interpolationFuncRsaDecrypt() ast.Function { - return ast.Function{ - ArgTypes: []ast.Type{ast.TypeString, ast.TypeString}, - ReturnType: ast.TypeString, - Callback: func(args []interface{}) (interface{}, error) { - s := args[0].(string) - key := args[1].(string) - - b, err := base64.StdEncoding.DecodeString(s) - if err != nil { - return "", fmt.Errorf("Failed to decode input %q: cipher text must be base64-encoded", s) - } - - block, _ := pem.Decode([]byte(key)) - if block == nil { - return "", fmt.Errorf("Failed to read key %q: no key found", key) - } - if block.Headers["Proc-Type"] == "4,ENCRYPTED" { - return "", fmt.Errorf( - "Failed to read key %q: password protected keys are\n"+ - "not supported. Please decrypt the key prior to use.", key) - } - - x509Key, err := x509.ParsePKCS1PrivateKey(block.Bytes) - if err != nil { - return "", err - } - - out, err := rsa.DecryptPKCS1v15(nil, x509Key, b) - if err != nil { - return "", err - } - - return string(out), nil - }, - } -} diff --git a/config/interpolate_funcs_test.go b/config/interpolate_funcs_test.go deleted file mode 100644 index e883429e9..000000000 --- a/config/interpolate_funcs_test.go +++ /dev/null @@ -1,2995 +0,0 @@ -package config - -import ( - "fmt" - "io/ioutil" - "os" - "reflect" - "testing" - "time" - - "path/filepath" - - "github.com/hashicorp/hil" - "github.com/hashicorp/hil/ast" - "github.com/mitchellh/go-homedir" - "golang.org/x/crypto/bcrypt" -) - -func TestInterpolateFuncZipMap(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - `${zipmap(var.list, var.list2)}`, - map[string]interface{}{ - "Hello": "bar", - "World": "baz", - }, - false, - }, - { - `${zipmap(var.list, var.nonstrings)}`, - map[string]interface{}{ - "Hello": []interface{}{"bar", "baz"}, - "World": []interface{}{"boo", "foo"}, - }, - false, - }, - { - `${zipmap(var.nonstrings, var.list2)}`, - nil, - true, - }, - { - `${zipmap(var.list, var.differentlengthlist)}`, - nil, - true, - }, - }, - Vars: map[string]ast.Variable{ - "var.list": { - Type: ast.TypeList, - Value: []ast.Variable{ - { - Type: ast.TypeString, - Value: "Hello", - }, - { - Type: ast.TypeString, - Value: "World", - }, - }, - }, - "var.list2": { - Type: ast.TypeList, - Value: []ast.Variable{ - { - Type: ast.TypeString, - Value: "bar", - }, - { - Type: ast.TypeString, - Value: "baz", - }, - }, - }, - "var.differentlengthlist": { - Type: ast.TypeList, - Value: []ast.Variable{ - { - Type: ast.TypeString, - Value: "bar", - }, - { - Type: ast.TypeString, - Value: "baz", - }, - { - Type: ast.TypeString, - Value: "boo", - }, - }, - }, - "var.nonstrings": { - Type: ast.TypeList, - Value: []ast.Variable{ - { - Type: ast.TypeList, - Value: []ast.Variable{ - { - Type: ast.TypeString, - Value: "bar", - }, - { - Type: ast.TypeString, - Value: "baz", - }, - }, - }, - { - Type: ast.TypeList, - Value: []ast.Variable{ - { - Type: ast.TypeString, - Value: "boo", - }, - { - Type: ast.TypeString, - Value: "foo", - }, - }, - }, - }, - }, - }, - }) -} - -func TestInterpolateFuncList(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - // empty input returns empty list - { - `${list()}`, - []interface{}{}, - false, - }, - - // single input returns list of length 1 - { - `${list("hello")}`, - []interface{}{"hello"}, - false, - }, - - // two inputs returns list of length 2 - { - `${list("hello", "world")}`, - []interface{}{"hello", "world"}, - false, - }, - - // not a string input gives error - { - `${list("hello", 42)}`, - nil, - true, - }, - - 
// list of lists - { - `${list("${var.list}", "${var.list2}")}`, - []interface{}{[]interface{}{"Hello", "World"}, []interface{}{"bar", "baz"}}, - false, - }, - - // list of maps - { - `${list("${var.map}", "${var.map2}")}`, - []interface{}{map[string]interface{}{"key": "bar"}, map[string]interface{}{"key2": "baz"}}, - false, - }, - - // error on a heterogeneous list - { - `${list("first", "${var.list}")}`, - nil, - true, - }, - }, - Vars: map[string]ast.Variable{ - "var.list": { - Type: ast.TypeList, - Value: []ast.Variable{ - { - Type: ast.TypeString, - Value: "Hello", - }, - { - Type: ast.TypeString, - Value: "World", - }, - }, - }, - "var.list2": { - Type: ast.TypeList, - Value: []ast.Variable{ - { - Type: ast.TypeString, - Value: "bar", - }, - { - Type: ast.TypeString, - Value: "baz", - }, - }, - }, - - "var.map": { - Type: ast.TypeMap, - Value: map[string]ast.Variable{ - "key": { - Type: ast.TypeString, - Value: "bar", - }, - }, - }, - "var.map2": { - Type: ast.TypeMap, - Value: map[string]ast.Variable{ - "key2": { - Type: ast.TypeString, - Value: "baz", - }, - }, - }, - }, - }) -} - -func TestInterpolateFuncMax(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - `${max()}`, - nil, - true, - }, - - { - `${max("")}`, - nil, - true, - }, - - { - `${max(-1, 0, 1)}`, - "1", - false, - }, - - { - `${max(1, 0, -1)}`, - "1", - false, - }, - - { - `${max(-1, -2)}`, - "-1", - false, - }, - - { - `${max(-1)}`, - "-1", - false, - }, - }, - }) -} - -func TestInterpolateFuncMin(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - `${min()}`, - nil, - true, - }, - - { - `${min("")}`, - nil, - true, - }, - - { - `${min(-1, 0, 1)}`, - "-1", - false, - }, - - { - `${min(1, 0, -1)}`, - "-1", - false, - }, - - { - `${min(-1, -2)}`, - "-2", - false, - }, - - { - `${min(-1)}`, - "-1", - false, - }, - }, - }) -} - -func TestInterpolateFuncPow(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - `${pow(1, 0)}`, - "1", - false, - }, - { - `${pow(1, 1)}`, - "1", - false, - }, - - { - `${pow(2, 0)}`, - "1", - false, - }, - { - `${pow(2, 1)}`, - "2", - false, - }, - { - `${pow(3, 2)}`, - "9", - false, - }, - { - `${pow(-3, 2)}`, - "9", - false, - }, - { - `${pow(2, -2)}`, - "0.25", - false, - }, - { - `${pow(0, 2)}`, - "0", - false, - }, - { - `${pow("invalid-input", 2)}`, - nil, - true, - }, - { - `${pow(2, "invalid-input")}`, - nil, - true, - }, - { - `${pow(2)}`, - nil, - true, - }, - }, - }) -} - -func TestInterpolateFuncFloor(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - `${floor()}`, - nil, - true, - }, - - { - `${floor("")}`, - nil, - true, - }, - - { - `${floor("-1.3")}`, // there appears to be a AST bug where the parsed argument ends up being -1 without the "s - "-2", - false, - }, - - { - `${floor(1.7)}`, - "1", - false, - }, - }, - }) -} - -func TestInterpolateFuncCeil(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - `${ceil()}`, - nil, - true, - }, - - { - `${ceil("")}`, - nil, - true, - }, - - { - `${ceil(-1.8)}`, - "-1", - false, - }, - - { - `${ceil(1.2)}`, - "2", - false, - }, - }, - }) -} - -func TestInterpolateFuncLog(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - `${log(1, 10)}`, - "0", - false, - }, - { - `${log(10, 10)}`, - "1", - false, - }, - - { - `${log(0, 10)}`, - "-Inf", - false, - }, - { - `${log(10, 0)}`, - "-0", - false, - }, - }, - }) -} - -func 
TestInterpolateFuncChomp(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - `${chomp()}`, - nil, - true, - }, - - { - `${chomp("hello world")}`, - "hello world", - false, - }, - - { - fmt.Sprintf(`${chomp("%s")}`, "goodbye\ncruel\nworld"), - "goodbye\ncruel\nworld", - false, - }, - - { - fmt.Sprintf(`${chomp("%s")}`, "goodbye\r\nwindows\r\nworld"), - "goodbye\r\nwindows\r\nworld", - false, - }, - - { - fmt.Sprintf(`${chomp("%s")}`, "goodbye\ncruel\nworld\n"), - "goodbye\ncruel\nworld", - false, - }, - - { - fmt.Sprintf(`${chomp("%s")}`, "goodbye\ncruel\nworld\n\n\n\n"), - "goodbye\ncruel\nworld", - false, - }, - - { - fmt.Sprintf(`${chomp("%s")}`, "goodbye\r\nwindows\r\nworld\r\n"), - "goodbye\r\nwindows\r\nworld", - false, - }, - - { - fmt.Sprintf(`${chomp("%s")}`, "goodbye\r\nwindows\r\nworld\r\n\r\n\r\n\r\n"), - "goodbye\r\nwindows\r\nworld", - false, - }, - }, - }) -} - -func TestInterpolateFuncMap(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - // empty input returns empty map - { - `${map()}`, - map[string]interface{}{}, - false, - }, - - // odd args is error - { - `${map("odd")}`, - nil, - true, - }, - - // two args returns map w/ one k/v - { - `${map("hello", "world")}`, - map[string]interface{}{"hello": "world"}, - false, - }, - - // four args get two k/v - { - `${map("hello", "world", "what's", "up?")}`, - map[string]interface{}{"hello": "world", "what's": "up?"}, - false, - }, - - // map of lists is okay - { - `${map("hello", list("world"), "what's", list("up?"))}`, - map[string]interface{}{ - "hello": []interface{}{"world"}, - "what's": []interface{}{"up?"}, - }, - false, - }, - - // map of maps is okay - { - `${map("hello", map("there", "world"), "what's", map("really", "up?"))}`, - map[string]interface{}{ - "hello": map[string]interface{}{"there": "world"}, - "what's": map[string]interface{}{"really": "up?"}, - }, - false, - }, - - // keys have to be strings - { - `${map(list("listkey"), "val")}`, - nil, - true, - }, - - // types have to match - { - `${map("some", "strings", "also", list("lists"))}`, - nil, - true, - }, - - // duplicate keys are an error - { - `${map("key", "val", "key", "again")}`, - nil, - true, - }, - }, - }) -} - -func TestInterpolateFuncCompact(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - // empty string within array - { - `${compact(split(",", "a,,b"))}`, - []interface{}{"a", "b"}, - false, - }, - - // empty string at the end of array - { - `${compact(split(",", "a,b,"))}`, - []interface{}{"a", "b"}, - false, - }, - - // single empty string - { - `${compact(split(",", ""))}`, - []interface{}{}, - false, - }, - - // errrors on list of lists - { - `${compact(list(list("a"), list("b")))}`, - nil, - true, - }, - }, - }) -} - -func TestInterpolateFuncCidrHost(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - `${cidrhost("192.168.1.0/24", 5)}`, - "192.168.1.5", - false, - }, - { - `${cidrhost("192.168.1.0/24", -5)}`, - "192.168.1.251", - false, - }, - { - `${cidrhost("192.168.1.0/24", -256)}`, - "192.168.1.0", - false, - }, - { - `${cidrhost("192.168.1.0/30", 255)}`, - nil, - true, // 255 doesn't fit in two bits - }, - { - `${cidrhost("192.168.1.0/30", -255)}`, - nil, - true, // 255 doesn't fit in two bits - }, - { - `${cidrhost("not-a-cidr", 6)}`, - nil, - true, // not a valid CIDR mask - }, - { - `${cidrhost("10.256.0.0/8", 6)}`, - nil, - true, // can't have an octet >255 - }, - }, - }) -} - -func 
TestInterpolateFuncCidrNetmask(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - `${cidrnetmask("192.168.1.0/24")}`, - "255.255.255.0", - false, - }, - { - `${cidrnetmask("192.168.1.0/32")}`, - "255.255.255.255", - false, - }, - { - `${cidrnetmask("0.0.0.0/0")}`, - "0.0.0.0", - false, - }, - { - // This doesn't really make sense for IPv6 networks - // but it ought to do something sensible anyway. - `${cidrnetmask("1::/64")}`, - "ffff:ffff:ffff:ffff::", - false, - }, - { - `${cidrnetmask("not-a-cidr")}`, - nil, - true, // not a valid CIDR mask - }, - { - `${cidrnetmask("10.256.0.0/8")}`, - nil, - true, // can't have an octet >255 - }, - }, - }) -} - -func TestInterpolateFuncCidrSubnet(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - `${cidrsubnet("192.168.2.0/20", 4, 6)}`, - "192.168.6.0/24", - false, - }, - { - `${cidrsubnet("fe80::/48", 16, 6)}`, - "fe80:0:0:6::/64", - false, - }, - { - // IPv4 address encoded in IPv6 syntax gets normalized - `${cidrsubnet("::ffff:192.168.0.0/112", 8, 6)}`, - "192.168.6.0/24", - false, - }, - { - `${cidrsubnet("192.168.0.0/30", 4, 6)}`, - nil, - true, // not enough bits left - }, - { - `${cidrsubnet("192.168.0.0/16", 2, 16)}`, - nil, - true, // can't encode 16 in 2 bits - }, - { - `${cidrsubnet("not-a-cidr", 4, 6)}`, - nil, - true, // not a valid CIDR mask - }, - { - `${cidrsubnet("10.256.0.0/8", 4, 6)}`, - nil, - true, // can't have an octet >255 - }, - }, - }) -} - -func TestInterpolateFuncCoalesce(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - `${coalesce("first", "second", "third")}`, - "first", - false, - }, - { - `${coalesce("", "second", "third")}`, - "second", - false, - }, - { - `${coalesce("", "", "")}`, - "", - false, - }, - { - `${coalesce("foo")}`, - nil, - true, - }, - }, - }) -} - -func TestInterpolateFuncCoalesceList(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - `${coalescelist(list("first"), list("second"), list("third"))}`, - []interface{}{"first"}, - false, - }, - { - `${coalescelist(list(), list("second"), list("third"))}`, - []interface{}{"second"}, - false, - }, - { - `${coalescelist(list(), list(), list())}`, - []interface{}{}, - false, - }, - { - `${coalescelist(list("foo"))}`, - nil, - true, - }, - }, - }) -} - -func TestInterpolateFuncConcat(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - // String + list - // no longer supported, now returns an error - { - `${concat("a", split(",", "b,c"))}`, - nil, - true, - }, - - // List + string - // no longer supported, now returns an error - { - `${concat(split(",", "a,b"), "c")}`, - nil, - true, - }, - - // Single list - { - `${concat(split(",", ",foo,"))}`, - []interface{}{"", "foo", ""}, - false, - }, - { - `${concat(split(",", "a,b,c"))}`, - []interface{}{"a", "b", "c"}, - false, - }, - - // Two lists - { - `${concat(split(",", "a,b,c"), split(",", "d,e"))}`, - []interface{}{"a", "b", "c", "d", "e"}, - false, - }, - // Two lists with different separators - { - `${concat(split(",", "a,b,c"), split(" ", "d e"))}`, - []interface{}{"a", "b", "c", "d", "e"}, - false, - }, - - // More lists - { - `${concat(split(",", "a,b"), split(",", "c,d"), split(",", "e,f"), split(",", "0,1"))}`, - []interface{}{"a", "b", "c", "d", "e", "f", "0", "1"}, - false, - }, - - // list vars - { - `${concat("${var.list}", "${var.list}")}`, - []interface{}{"a", "b", "a", "b"}, - false, - }, - // lists of lists - { - 
`${concat("${var.lists}", "${var.lists}")}`, - []interface{}{[]interface{}{"c", "d"}, []interface{}{"c", "d"}}, - false, - }, - - // lists of maps - { - `${concat("${var.maps}", "${var.maps}")}`, - []interface{}{map[string]interface{}{"key1": "a", "key2": "b"}, map[string]interface{}{"key1": "a", "key2": "b"}}, - false, - }, - - // multiple strings - // no longer supported, now returns an error - { - `${concat("string1", "string2")}`, - nil, - true, - }, - - // mismatched types - { - `${concat("${var.lists}", "${var.maps}")}`, - nil, - true, - }, - }, - Vars: map[string]ast.Variable{ - "var.list": { - Type: ast.TypeList, - Value: []ast.Variable{ - { - Type: ast.TypeString, - Value: "a", - }, - { - Type: ast.TypeString, - Value: "b", - }, - }, - }, - "var.lists": { - Type: ast.TypeList, - Value: []ast.Variable{ - { - Type: ast.TypeList, - Value: []ast.Variable{ - { - Type: ast.TypeString, - Value: "c", - }, - { - Type: ast.TypeString, - Value: "d", - }, - }, - }, - }, - }, - "var.maps": { - Type: ast.TypeList, - Value: []ast.Variable{ - { - Type: ast.TypeMap, - Value: map[string]ast.Variable{ - "key1": { - Type: ast.TypeString, - Value: "a", - }, - "key2": { - Type: ast.TypeString, - Value: "b", - }, - }, - }, - }, - }, - }, - }) -} - -func TestInterpolateFuncContains(t *testing.T) { - testFunction(t, testFunctionConfig{ - Vars: map[string]ast.Variable{ - "var.listOfStrings": interfaceToVariableSwallowError([]string{"notfoo", "stillnotfoo", "bar"}), - "var.listOfInts": interfaceToVariableSwallowError([]int{1, 2, 3}), - }, - Cases: []testFunctionCase{ - { - `${contains(var.listOfStrings, "bar")}`, - "true", - false, - }, - - { - `${contains(var.listOfStrings, "foo")}`, - "false", - false, - }, - { - `${contains(var.listOfInts, 1)}`, - "true", - false, - }, - { - `${contains(var.listOfInts, 10)}`, - "false", - false, - }, - { - `${contains(var.listOfInts, "2")}`, - "true", - false, - }, - }, - }) -} - -func TestInterpolateFuncMerge(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - // basic merge - { - `${merge(map("a", "b"), map("c", "d"))}`, - map[string]interface{}{"a": "b", "c": "d"}, - false, - }, - - // merge with conflicts is ok, last in wins. 
- { - `${merge(map("a", "b", "c", "X"), map("c", "d"))}`, - map[string]interface{}{"a": "b", "c": "d"}, - false, - }, - - // merge variadic - { - `${merge(map("a", "b"), map("c", "d"), map("e", "f"))}`, - map[string]interface{}{"a": "b", "c": "d", "e": "f"}, - false, - }, - - // merge with variables - { - `${merge(var.maps[0], map("c", "d"))}`, - map[string]interface{}{"key1": "a", "key2": "b", "c": "d"}, - false, - }, - - // only accept maps - { - `${merge(map("a", "b"), list("c", "d"))}`, - nil, - true, - }, - - // merge maps of maps - { - `${merge(map("a", var.maps[0]), map("b", var.maps[1]))}`, - map[string]interface{}{ - "b": map[string]interface{}{"key3": "d", "key4": "c"}, - "a": map[string]interface{}{"key1": "a", "key2": "b"}, - }, - false, - }, - // merge maps of lists - { - `${merge(map("a", list("b")), map("c", list("d", "e")))}`, - map[string]interface{}{"a": []interface{}{"b"}, "c": []interface{}{"d", "e"}}, - false, - }, - // merge map of various kinds - { - `${merge(map("a", var.maps[0]), map("b", list("c", "d")))}`, - map[string]interface{}{"a": map[string]interface{}{"key1": "a", "key2": "b"}, "b": []interface{}{"c", "d"}}, - false, - }, - }, - Vars: map[string]ast.Variable{ - "var.maps": { - Type: ast.TypeList, - Value: []ast.Variable{ - { - Type: ast.TypeMap, - Value: map[string]ast.Variable{ - "key1": { - Type: ast.TypeString, - Value: "a", - }, - "key2": { - Type: ast.TypeString, - Value: "b", - }, - }, - }, - { - Type: ast.TypeMap, - Value: map[string]ast.Variable{ - "key3": { - Type: ast.TypeString, - Value: "d", - }, - "key4": { - Type: ast.TypeString, - Value: "c", - }, - }, - }, - }, - }, - }, - }) - -} - -func TestInterpolateFuncDirname(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - `${dirname("/foo/bar/baz")}`, - "/foo/bar", - false, - }, - }, - }) -} - -func TestInterpolateFuncDistinct(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - // 3 duplicates - { - `${distinct(concat(split(",", "user1,user2,user3"), split(",", "user1,user2,user3")))}`, - []interface{}{"user1", "user2", "user3"}, - false, - }, - // 1 duplicate - { - `${distinct(concat(split(",", "user1,user2,user3"), split(",", "user1,user4")))}`, - []interface{}{"user1", "user2", "user3", "user4"}, - false, - }, - // too many args - { - `${distinct(concat(split(",", "user1,user2,user3"), split(",", "user1,user4")), "foo")}`, - nil, - true, - }, - // non-flat list is an error - { - `${distinct(list(list("a"), list("a")))}`, - nil, - true, - }, - }, - }) -} - -func TestInterpolateFuncMatchKeys(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - // normal usage - { - `${matchkeys(list("a", "b", "c"), list("ref1", "ref2", "ref3"), list("ref2"))}`, - []interface{}{"b"}, - false, - }, - // normal usage 2, check the order - { - `${matchkeys(list("a", "b", "c"), list("ref1", "ref2", "ref3"), list("ref2", "ref1"))}`, - []interface{}{"a", "b"}, - false, - }, - // duplicate item in searchset - { - `${matchkeys(list("a", "b", "c"), list("ref1", "ref2", "ref3"), list("ref2", "ref2"))}`, - []interface{}{"b"}, - false, - }, - // no matches - { - `${matchkeys(list("a", "b", "c"), list("ref1", "ref2", "ref3"), list("ref4"))}`, - []interface{}{}, - false, - }, - // no matches 2 - { - `${matchkeys(list("a", "b", "c"), list("ref1", "ref2", "ref3"), list())}`, - []interface{}{}, - false, - }, - // zero case - { - `${matchkeys(list(), list(), list("nope"))}`, - []interface{}{}, - false, - }, - // complex values - { 
- `${matchkeys(list(list("a", "a")), list("a"), list("a"))}`, - []interface{}{[]interface{}{"a", "a"}}, - false, - }, - // errors - // different types - { - `${matchkeys(list("a"), list(1), list("a"))}`, - nil, - true, - }, - // different types - { - `${matchkeys(list("a"), list(list("a"), list("a")), list("a"))}`, - nil, - true, - }, - // lists of different length is an error - { - `${matchkeys(list("a"), list("a", "b"), list("a"))}`, - nil, - true, - }, - }, - }) -} - -func TestInterpolateFuncFile(t *testing.T) { - tf, err := ioutil.TempFile("", "tf") - if err != nil { - t.Fatalf("err: %s", err) - } - path := tf.Name() - tf.Write([]byte("foo")) - tf.Close() - defer os.Remove(path) - - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - fmt.Sprintf(`${file("%s")}`, path), - "foo", - false, - }, - - // Invalid path - { - `${file("/i/dont/exist")}`, - nil, - true, - }, - - // Too many args - { - `${file("foo", "bar")}`, - nil, - true, - }, - }, - }) -} - -func TestInterpolateFuncFormat(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - `${format("hello")}`, - "hello", - false, - }, - - { - `${format("hello %s", "world")}`, - "hello world", - false, - }, - - { - `${format("hello %d", 42)}`, - "hello 42", - false, - }, - - { - `${format("hello %05d", 42)}`, - "hello 00042", - false, - }, - - { - `${format("hello %05d", 12345)}`, - "hello 12345", - false, - }, - }, - }) -} - -func TestInterpolateFuncFormatList(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - // formatlist requires at least one list - { - `${formatlist("hello")}`, - nil, - true, - }, - { - `${formatlist("hello %s", "world")}`, - nil, - true, - }, - // formatlist applies to each list element in turn - { - `${formatlist("<%s>", split(",", "A,B"))}`, - []interface{}{"", ""}, - false, - }, - // formatlist repeats scalar elements - { - `${join(", ", formatlist("%s=%s", "x", split(",", "A,B,C")))}`, - "x=A, x=B, x=C", - false, - }, - // Multiple lists are walked in parallel - { - `${join(", ", formatlist("%s=%s", split(",", "A,B,C"), split(",", "1,2,3")))}`, - "A=1, B=2, C=3", - false, - }, - // Mismatched list lengths generate an error - { - `${formatlist("%s=%2s", split(",", "A,B,C,D"), split(",", "1,2,3"))}`, - nil, - true, - }, - // Works with lists of length 1 [GH-2240] - { - `${formatlist("%s.id", split(",", "demo-rest-elb"))}`, - []interface{}{"demo-rest-elb.id"}, - false, - }, - // Works with empty lists [GH-7607] - { - `${formatlist("%s", var.emptylist)}`, - []interface{}{}, - false, - }, - }, - Vars: map[string]ast.Variable{ - "var.emptylist": { - Type: ast.TypeList, - Value: []ast.Variable{}, - }, - }, - }) -} - -func TestInterpolateFuncIndex(t *testing.T) { - testFunction(t, testFunctionConfig{ - Vars: map[string]ast.Variable{ - "var.list1": interfaceToVariableSwallowError([]string{"notfoo", "stillnotfoo", "bar"}), - "var.list2": interfaceToVariableSwallowError([]string{"foo"}), - "var.list3": interfaceToVariableSwallowError([]string{"foo", "spam", "bar", "eggs"}), - }, - Cases: []testFunctionCase{ - { - `${index("test", "")}`, - nil, - true, - }, - - { - `${index(var.list1, "foo")}`, - nil, - true, - }, - - { - `${index(var.list2, "foo")}`, - "0", - false, - }, - - { - `${index(var.list3, "bar")}`, - "2", - false, - }, - }, - }) -} - -func TestInterpolateFuncIndent(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - `${indent(4, "Fleas: -Adam -Had'em - -E.E. 
Cummings")}`, - "Fleas:\n Adam\n Had'em\n \n E.E. Cummings", - false, - }, - { - `${indent(4, "oneliner")}`, - "oneliner", - false, - }, - { - `${indent(4, "#!/usr/bin/env bash -date -pwd")}`, - "#!/usr/bin/env bash\n date\n pwd", - false, - }, - }, - }) -} - -func TestInterpolateFuncJoin(t *testing.T) { - testFunction(t, testFunctionConfig{ - Vars: map[string]ast.Variable{ - "var.a_list": interfaceToVariableSwallowError([]string{"foo"}), - "var.a_longer_list": interfaceToVariableSwallowError([]string{"foo", "bar", "baz"}), - "var.list_of_lists": interfaceToVariableSwallowError([]interface{}{[]string{"foo"}, []string{"bar"}, []string{"baz"}}), - }, - Cases: []testFunctionCase{ - { - `${join(",")}`, - nil, - true, - }, - - { - `${join(",", var.a_list)}`, - "foo", - false, - }, - - { - `${join(".", var.a_longer_list)}`, - "foo.bar.baz", - false, - }, - - { - `${join(".", var.list_of_lists)}`, - nil, - true, - }, - { - `${join(".", list(list("nested")))}`, - nil, - true, - }, - }, - }) -} - -func TestInterpolateFuncJSONEncode(t *testing.T) { - testFunction(t, testFunctionConfig{ - Vars: map[string]ast.Variable{ - "easy": ast.Variable{ - Value: "test", - Type: ast.TypeString, - }, - "hard": ast.Variable{ - Value: " foo \\ \n \t \" bar ", - Type: ast.TypeString, - }, - "list": interfaceToVariableSwallowError([]string{"foo", "bar\tbaz"}), - "emptylist": ast.Variable{ - Value: []ast.Variable{}, - Type: ast.TypeList, - }, - "map": interfaceToVariableSwallowError(map[string]string{ - "foo": "bar", - "ba \n z": "q\\x", - }), - "emptymap": interfaceToVariableSwallowError(map[string]string{}), - "nestedlist": interfaceToVariableSwallowError([][]string{{"foo"}}), - "nestedmap": interfaceToVariableSwallowError(map[string][]string{"foo": {"bar"}}), - }, - Cases: []testFunctionCase{ - { - `${jsonencode("test")}`, - `"test"`, - false, - }, - { - `${jsonencode(easy)}`, - `"test"`, - false, - }, - { - `${jsonencode(hard)}`, - `" foo \\ \n \t \" bar "`, - false, - }, - { - `${jsonencode("")}`, - `""`, - false, - }, - { - `${jsonencode()}`, - nil, - true, - }, - { - `${jsonencode(list)}`, - `["foo","bar\tbaz"]`, - false, - }, - { - `${jsonencode(emptylist)}`, - `[]`, - false, - }, - { - `${jsonencode(map)}`, - `{"ba \n z":"q\\x","foo":"bar"}`, - false, - }, - { - `${jsonencode(emptymap)}`, - `{}`, - false, - }, - { - `${jsonencode(nestedlist)}`, - `[["foo"]]`, - false, - }, - { - `${jsonencode(nestedmap)}`, - `{"foo":["bar"]}`, - false, - }, - }, - }) -} - -func TestInterpolateFuncReplace(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - // Regular search and replace - { - `${replace("hello", "hel", "bel")}`, - "bello", - false, - }, - - // Search string doesn't match - { - `${replace("hello", "nope", "bel")}`, - "hello", - false, - }, - - // Regular expression - { - `${replace("hello", "/l/", "L")}`, - "heLLo", - false, - }, - - { - `${replace("helo", "/(l)/", "$1$1")}`, - "hello", - false, - }, - - // Bad regexp - { - `${replace("helo", "/(l/", "$1$1")}`, - nil, - true, - }, - }, - }) -} - -func TestInterpolateFuncReverse(t *testing.T) { - testFunction(t, testFunctionConfig{ - Vars: map[string]ast.Variable{ - "var.inputlist": { - Type: ast.TypeList, - Value: []ast.Variable{ - {Type: ast.TypeString, Value: "a"}, - {Type: ast.TypeString, Value: "b"}, - {Type: ast.TypeString, Value: "1"}, - {Type: ast.TypeString, Value: "d"}, - }, - }, - "var.emptylist": { - Type: ast.TypeList, - // Intentionally 0-lengthed list - Value: []ast.Variable{}, - }, - }, - Cases: []testFunctionCase{ 
- { - `${reverse(var.inputlist)}`, - []interface{}{"d", "1", "b", "a"}, - false, - }, - { - `${reverse(var.emptylist)}`, - []interface{}{}, - false, - }, - }, - }) -} - -func TestInterpolateFuncLength(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - // Raw strings - { - `${length("")}`, - "0", - false, - }, - { - `${length("a")}`, - "1", - false, - }, - { - `${length(" ")}`, - "1", - false, - }, - { - `${length(" a ,")}`, - "4", - false, - }, - { - `${length("aaa")}`, - "3", - false, - }, - - // Lists - { - `${length(split(",", "a"))}`, - "1", - false, - }, - { - `${length(split(",", "foo,"))}`, - "2", - false, - }, - { - `${length(split(",", ",foo,"))}`, - "3", - false, - }, - { - `${length(split(",", "foo,bar"))}`, - "2", - false, - }, - { - `${length(split(".", "one.two.three.four.five"))}`, - "5", - false, - }, - // Want length 0 if we split an empty string then compact - { - `${length(compact(split(",", "")))}`, - "0", - false, - }, - // Works for maps - { - `${length(map("k", "v"))}`, - "1", - false, - }, - { - `${length(map("k1", "v1", "k2", "v2"))}`, - "2", - false, - }, - }, - }) -} - -func TestInterpolateFuncSignum(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - `${signum()}`, - nil, - true, - }, - - { - `${signum("")}`, - nil, - true, - }, - - { - `${signum(0)}`, - "0", - false, - }, - - { - `${signum(15)}`, - "1", - false, - }, - - { - `${signum(-29)}`, - "-1", - false, - }, - }, - }) -} - -func TestInterpolateFuncSlice(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - // Negative from index - { - `${slice(list("a"), -1, 0)}`, - nil, - true, - }, - // From index > to index - { - `${slice(list("a", "b", "c"), 2, 1)}`, - nil, - true, - }, - // To index too large - { - `${slice(var.list_of_strings, 1, 4)}`, - nil, - true, - }, - // Empty slice - { - `${slice(var.list_of_strings, 1, 1)}`, - []interface{}{}, - false, - }, - { - `${slice(var.list_of_strings, 1, 2)}`, - []interface{}{"b"}, - false, - }, - { - `${slice(var.list_of_strings, 0, length(var.list_of_strings) - 1)}`, - []interface{}{"a", "b"}, - false, - }, - }, - Vars: map[string]ast.Variable{ - "var.list_of_strings": { - Type: ast.TypeList, - Value: []ast.Variable{ - { - Type: ast.TypeString, - Value: "a", - }, - { - Type: ast.TypeString, - Value: "b", - }, - { - Type: ast.TypeString, - Value: "c", - }, - }, - }, - }, - }) -} - -func TestInterpolateFuncSort(t *testing.T) { - testFunction(t, testFunctionConfig{ - Vars: map[string]ast.Variable{ - "var.strings": ast.Variable{ - Type: ast.TypeList, - Value: []ast.Variable{ - {Type: ast.TypeString, Value: "c"}, - {Type: ast.TypeString, Value: "a"}, - {Type: ast.TypeString, Value: "b"}, - }, - }, - "var.notstrings": ast.Variable{ - Type: ast.TypeList, - Value: []ast.Variable{ - {Type: ast.TypeList, Value: []ast.Variable{}}, - {Type: ast.TypeString, Value: "b"}, - }, - }, - }, - Cases: []testFunctionCase{ - { - `${sort(var.strings)}`, - []interface{}{"a", "b", "c"}, - false, - }, - { - `${sort(var.notstrings)}`, - nil, - true, - }, - }, - }) -} - -func TestInterpolateFuncSplit(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - `${split(",")}`, - nil, - true, - }, - - { - `${split(",", "")}`, - []interface{}{""}, - false, - }, - - { - `${split(",", "foo")}`, - []interface{}{"foo"}, - false, - }, - - { - `${split(",", ",,,")}`, - []interface{}{"", "", "", ""}, - false, - }, - - { - `${split(",", "foo,")}`, - []interface{}{"foo", ""}, 
- false, - }, - - { - `${split(",", ",foo,")}`, - []interface{}{"", "foo", ""}, - false, - }, - - { - `${split(".", "foo.bar.baz")}`, - []interface{}{"foo", "bar", "baz"}, - false, - }, - }, - }) -} - -func TestInterpolateFuncLookup(t *testing.T) { - testFunction(t, testFunctionConfig{ - Vars: map[string]ast.Variable{ - "var.foo": { - Type: ast.TypeMap, - Value: map[string]ast.Variable{ - "bar": { - Type: ast.TypeString, - Value: "baz", - }, - }, - }, - "var.map_of_lists": ast.Variable{ - Type: ast.TypeMap, - Value: map[string]ast.Variable{ - "bar": { - Type: ast.TypeList, - Value: []ast.Variable{ - { - Type: ast.TypeString, - Value: "baz", - }, - }, - }, - }, - }, - }, - Cases: []testFunctionCase{ - { - `${lookup(var.foo, "bar")}`, - "baz", - false, - }, - - // Invalid key - { - `${lookup(var.foo, "baz")}`, - nil, - true, - }, - - // Supplied default with valid key - { - `${lookup(var.foo, "bar", "")}`, - "baz", - false, - }, - - // Supplied default with invalid key - { - `${lookup(var.foo, "zip", "")}`, - "", - false, - }, - - // Too many args - { - `${lookup(var.foo, "bar", "", "abc")}`, - nil, - true, - }, - - // Cannot lookup into map of lists - { - `${lookup(var.map_of_lists, "bar")}`, - nil, - true, - }, - - // Non-empty default - { - `${lookup(var.foo, "zap", "xyz")}`, - "xyz", - false, - }, - }, - }) -} - -func TestInterpolateFuncKeys(t *testing.T) { - testFunction(t, testFunctionConfig{ - Vars: map[string]ast.Variable{ - "var.foo": ast.Variable{ - Type: ast.TypeMap, - Value: map[string]ast.Variable{ - "bar": ast.Variable{ - Value: "baz", - Type: ast.TypeString, - }, - "qux": ast.Variable{ - Value: "quack", - Type: ast.TypeString, - }, - }, - }, - "var.str": ast.Variable{ - Value: "astring", - Type: ast.TypeString, - }, - }, - Cases: []testFunctionCase{ - { - `${keys(var.foo)}`, - []interface{}{"bar", "qux"}, - false, - }, - - // Invalid key - { - `${keys(var.not)}`, - nil, - true, - }, - - // Too many args - { - `${keys(var.foo, "bar")}`, - nil, - true, - }, - - // Not a map - { - `${keys(var.str)}`, - nil, - true, - }, - }, - }) -} - -// Confirm that keys return in sorted order, and values return in the order of -// their sorted keys. 
-func TestInterpolateFuncKeyValOrder(t *testing.T) { - testFunction(t, testFunctionConfig{ - Vars: map[string]ast.Variable{ - "var.foo": ast.Variable{ - Type: ast.TypeMap, - Value: map[string]ast.Variable{ - "D": ast.Variable{ - Value: "2", - Type: ast.TypeString, - }, - "C": ast.Variable{ - Value: "Y", - Type: ast.TypeString, - }, - "A": ast.Variable{ - Value: "X", - Type: ast.TypeString, - }, - "10": ast.Variable{ - Value: "Z", - Type: ast.TypeString, - }, - "1": ast.Variable{ - Value: "4", - Type: ast.TypeString, - }, - "3": ast.Variable{ - Value: "W", - Type: ast.TypeString, - }, - }, - }, - }, - Cases: []testFunctionCase{ - { - `${keys(var.foo)}`, - []interface{}{"1", "10", "3", "A", "C", "D"}, - false, - }, - - { - `${values(var.foo)}`, - []interface{}{"4", "Z", "W", "X", "Y", "2"}, - false, - }, - }, - }) -} - -func TestInterpolateFuncValues(t *testing.T) { - testFunction(t, testFunctionConfig{ - Vars: map[string]ast.Variable{ - "var.foo": ast.Variable{ - Type: ast.TypeMap, - Value: map[string]ast.Variable{ - "bar": ast.Variable{ - Value: "quack", - Type: ast.TypeString, - }, - "qux": ast.Variable{ - Value: "baz", - Type: ast.TypeString, - }, - }, - }, - "var.str": ast.Variable{ - Value: "astring", - Type: ast.TypeString, - }, - }, - Cases: []testFunctionCase{ - { - `${values(var.foo)}`, - []interface{}{"quack", "baz"}, - false, - }, - - // Invalid key - { - `${values(var.not)}`, - nil, - true, - }, - - // Too many args - { - `${values(var.foo, "bar")}`, - nil, - true, - }, - - // Not a map - { - `${values(var.str)}`, - nil, - true, - }, - - // Map of lists - { - `${values(map("one", list()))}`, - nil, - true, - }, - }, - }) -} - -func interfaceToVariableSwallowError(input interface{}) ast.Variable { - variable, _ := hil.InterfaceToVariable(input) - return variable -} - -func TestInterpolateFuncElement(t *testing.T) { - testFunction(t, testFunctionConfig{ - Vars: map[string]ast.Variable{ - "var.a_list": interfaceToVariableSwallowError([]string{"foo", "baz"}), - "var.a_short_list": interfaceToVariableSwallowError([]string{"foo"}), - "var.empty_list": interfaceToVariableSwallowError([]interface{}{}), - "var.a_nested_list": interfaceToVariableSwallowError([]interface{}{[]string{"foo"}, []string{"baz"}}), - }, - Cases: []testFunctionCase{ - { - `${element(var.a_list, "1")}`, - "baz", - false, - }, - - { - `${element(var.a_short_list, "0")}`, - "foo", - false, - }, - - // Invalid index should wrap vs. 
out-of-bounds
-			{
-				`${element(var.a_list, "2")}`,
-				"foo",
-				false,
-			},
-
-			// Negative number should fail
-			{
-				`${element(var.a_short_list, "-1")}`,
-				nil,
-				true,
-			},
-
-			// Empty list should fail
-			{
-				`${element(var.empty_list, 0)}`,
-				nil,
-				true,
-			},
-
-			// Too many args
-			{
-				`${element(var.a_list, "0", "2")}`,
-				nil,
-				true,
-			},
-
-			// Only works on single-level lists
-			{
-				`${element(var.a_nested_list, "0")}`,
-				nil,
-				true,
-			},
-		},
-	})
-}
-
-func TestInterpolateFuncChunklist(t *testing.T) {
-	testFunction(t, testFunctionConfig{
-		Cases: []testFunctionCase{
-			// normal usage
-			{
-				`${chunklist(list("a", "b", "c"), 1)}`,
-				[]interface{}{
-					[]interface{}{"a"},
-					[]interface{}{"b"},
-					[]interface{}{"c"},
-				},
-				false,
-			},
-			// `size` is even and the list has an odd number of items
-			{
-				`${chunklist(list("a", "b", "c"), 2)}`,
-				[]interface{}{
-					[]interface{}{"a", "b"},
-					[]interface{}{"c"},
-				},
-				false,
-			},
-			// list made of the same list, since size is 0
-			{
-				`${chunklist(list("a", "b", "c"), 0)}`,
-				[]interface{}{[]interface{}{"a", "b", "c"}},
-				false,
-			},
-			// negative size of chunks
-			{
-				`${chunklist(list("a", "b", "c"), -1)}`,
-				nil,
-				true,
-			},
-		},
-	})
-}
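The chunking rule these cases exercise (emit a chunk every `size` items, and flush whatever remains at the end) is easiest to see in isolation; a small sketch:

```go
package main

import "fmt"

// chunk groups values the same way interpolationFuncChunklist does:
// a chunk closes every `size` items, or at the end of the input.
// Callers must pass size > 0; the zero and negative cases are handled
// separately in the real implementation.
func chunk(values []string, size int) [][]string {
	var out [][]string
	var cur []string
	for i, v := range values {
		cur = append(cur, v)
		if (i+1)%size == 0 || i+1 == len(values) {
			out = append(out, cur)
			cur = nil
		}
	}
	return out
}

func main() {
	fmt.Println(chunk([]string{"a", "b", "c"}, 2)) // [[a b] [c]]
	fmt.Println(chunk([]string{"a", "b", "c"}, 1)) // [[a] [b] [c]]
}
```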
-
-func TestInterpolateFuncBasename(t *testing.T) {
-	testFunction(t, testFunctionConfig{
-		Cases: []testFunctionCase{
-			{
-				`${basename("/foo/bar/baz")}`,
-				"baz",
-				false,
-			},
-		},
-	})
-}
-
-func TestInterpolateFuncBase64Encode(t *testing.T) {
-	testFunction(t, testFunctionConfig{
-		Cases: []testFunctionCase{
-			// Regular base64 encoding
-			{
-				`${base64encode("abc123!?$*&()'-=@~")}`,
-				"YWJjMTIzIT8kKiYoKSctPUB+",
-				false,
-			},
-		},
-	})
-}
-
-func TestInterpolateFuncBase64Decode(t *testing.T) {
-	testFunction(t, testFunctionConfig{
-		Cases: []testFunctionCase{
-			// Regular base64 decoding
-			{
-				`${base64decode("YWJjMTIzIT8kKiYoKSctPUB+")}`,
-				"abc123!?$*&()'-=@~",
-				false,
-			},
-
-			// Invalid base64 data decoding
-			{
-				`${base64decode("this-is-an-invalid-base64-data")}`,
-				nil,
-				true,
-			},
-		},
-	})
-}
-
-func TestInterpolateFuncLower(t *testing.T) {
-	testFunction(t, testFunctionConfig{
-		Cases: []testFunctionCase{
-			{
-				`${lower("HELLO")}`,
-				"hello",
-				false,
-			},
-
-			{
-				`${lower("")}`,
-				"",
-				false,
-			},
-
-			{
-				`${lower()}`,
-				nil,
-				true,
-			},
-		},
-	})
-}
-
-func TestInterpolateFuncUpper(t *testing.T) {
-	testFunction(t, testFunctionConfig{
-		Cases: []testFunctionCase{
-			{
-				`${upper("hello")}`,
-				"HELLO",
-				false,
-			},
-
-			{
-				`${upper("")}`,
-				"",
-				false,
-			},
-
-			{
-				`${upper()}`,
-				nil,
-				true,
-			},
-		},
-	})
-}
-
-func TestInterpolateFuncSha1(t *testing.T) {
-	testFunction(t, testFunctionConfig{
-		Cases: []testFunctionCase{
-			{
-				`${sha1("test")}`,
-				"a94a8fe5ccb19ba61c4c0873d391e987982fbbd3",
-				false,
-			},
-		},
-	})
-}
-
-func TestInterpolateFuncSha256(t *testing.T) {
-	testFunction(t, testFunctionConfig{
-		Cases: []testFunctionCase{
-			{ // hexadecimal representation of sha256 sum
-				`${sha256("test")}`,
-				"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
-				false,
-			},
-		},
-	})
-}
-
-func TestInterpolateFuncSha512(t *testing.T) {
-	testFunction(t, testFunctionConfig{
-		Cases: []testFunctionCase{
-			{
-				`${sha512("test")}`,
-				"ee26b0dd4af7e749aa1a8ee3c10ae9923f618980772e473f8819a5d4940e0db27ac185f8a0e1d5f84f88bc887fd67b143732c304cc5fa9ad8e6f57f50028a8ff",
-				false,
-			},
-		},
-	})
-}
-
-func TestInterpolateFuncTitle(t *testing.T) {
-	testFunction(t, testFunctionConfig{
-		Cases: []testFunctionCase{
-			{
-				`${title("hello")}`,
-				"Hello",
-				false,
-			},
-
-			{
-				`${title("hello world")}`,
-				"Hello World",
-				false,
-			},
-
-			{
-				`${title("")}`,
-				"",
-				false,
-			},
-
-			{
-				`${title()}`,
-				nil,
-				true,
-			},
-		},
-	})
-}
-
-func TestInterpolateFuncTrimSpace(t *testing.T) {
-	testFunction(t, testFunctionConfig{
-		Cases: []testFunctionCase{
-			{
-				`${trimspace(" test ")}`,
-				"test",
-				false,
-			},
-		},
-	})
-}
-
-func TestInterpolateFuncBase64Gzip(t *testing.T) {
-	testFunction(t, testFunctionConfig{
-		Cases: []testFunctionCase{
-			{
-				`${base64gzip("test")}`,
-				"H4sIAAAAAAAA/ypJLS4BAAAA//8BAAD//wx+f9gEAAAA",
-				false,
-			},
-		},
-	})
-}
-
-func TestInterpolateFuncBase64Sha256(t *testing.T) {
-	testFunction(t, testFunctionConfig{
-		Cases: []testFunctionCase{
-			{
-				`${base64sha256("test")}`,
-				"n4bQgYhMfWWaL+qgxVrQFaO/TxsrC4Is0V1sFbDwCgg=",
-				false,
-			},
-			{ // This will differ because we're base64-encoding the hex representation, not raw bytes
-				`${base64encode(sha256("test"))}`,
-				"OWY4NmQwODE4ODRjN2Q2NTlhMmZlYWEwYzU1YWQwMTVhM2JmNGYxYjJiMGI4MjJjZDE1ZDZjMTViMGYwMGEwOA==",
-				false,
-			},
-		},
-	})
-}
-
-func TestInterpolateFuncBase64Sha512(t *testing.T) {
-	testFunction(t, testFunctionConfig{
-		Cases: []testFunctionCase{
-			{
-				`${base64sha512("test")}`,
-				"7iaw3Ur350mqGo7jwQrpkj9hiYB3Lkc/iBml1JQODbJ6wYX4oOHV+E+IvIh/1nsUNzLDBMxfqa2Ob1f1ACio/w==",
-				false,
-			},
-			{ // This will differ because we're base64-encoding the hex representation, not raw bytes
-				`${base64encode(sha512("test"))}`,
-				"ZWUyNmIwZGQ0YWY3ZTc0OWFhMWE4ZWUzYzEwYWU5OTIzZjYxODk4MDc3MmU0NzNmODgxOWE1ZDQ5NDBlMGRiMjdhYzE4NWY4YTBlMWQ1Zjg0Zjg4YmM4ODdmZDY3YjE0MzczMmMzMDRjYzVmYTlhZDhlNmY1N2Y1MDAyOGE4ZmY=",
-				false,
-			},
-		},
-	})
-}
-
-func TestInterpolateFuncMd5(t *testing.T) {
-	testFunction(t, testFunctionConfig{
-		Cases: []testFunctionCase{
-			{
-				`${md5("tada")}`,
-				"ce47d07243bb6eaf5e1322c81baf9bbf",
-				false,
-			},
-			{ // Confirm that we're not trimming any whitespace
-				`${md5(" tada ")}`,
-				"aadf191a583e53062de2d02c008141c4",
-				false,
-			},
-			{ // We accept empty string too
-				`${md5("")}`,
-				"d41d8cd98f00b204e9800998ecf8427e",
-				false,
-			},
-		},
-	})
-}
-
-func TestInterpolateFuncUUID(t *testing.T) {
-	results := make(map[string]bool)
-
-	for i := 0; i < 100; i++ {
-		ast, err := hil.Parse("${uuid()}")
-		if err != nil {
-			t.Fatalf("err: %s", err)
-		}
-
-		result, err := hil.Eval(ast, langEvalConfig(nil))
-		if err != nil {
-			t.Fatalf("err: %s", err)
-		}
-
-		if results[result.Value.(string)] {
-			t.Fatalf("Got unexpected duplicate uuid: %s", result.Value)
-		}
-
-		results[result.Value.(string)] = true
-	}
-}
-
-func TestInterpolateFuncTimestamp(t *testing.T) {
-	currentTime := time.Now().UTC()
-	ast, err := hil.Parse("${timestamp()}")
-	if err != nil {
-		t.Fatalf("err: %s", err)
-	}
-
-	result, err := hil.Eval(ast, langEvalConfig(nil))
-	if err != nil {
-		t.Fatalf("err: %s", err)
-	}
-	resultTime, err := time.Parse(time.RFC3339, result.Value.(string))
-	if err != nil {
-		t.Fatalf("Error parsing timestamp: %s", err)
-	}
-
-	if resultTime.Sub(currentTime).Seconds() > 10.0 {
-		t.Fatalf("Timestamp Diff too large. 
Expected: %s\nReceived: %s", currentTime.Format(time.RFC3339), result.Value.(string)) - } -} - -func TestInterpolateFuncTimeAdd(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - `${timeadd("2017-11-22T00:00:00Z", "1s")}`, - "2017-11-22T00:00:01Z", - false, - }, - { - `${timeadd("2017-11-22T00:00:00Z", "10m1s")}`, - "2017-11-22T00:10:01Z", - false, - }, - { // also support subtraction - `${timeadd("2017-11-22T00:00:00Z", "-1h")}`, - "2017-11-21T23:00:00Z", - false, - }, - { // Invalid format timestamp - `${timeadd("2017-11-22", "-1h")}`, - nil, - true, - }, - { // Invalid format duration (day is not supported by ParseDuration) - `${timeadd("2017-11-22T00:00:00Z", "1d")}`, - nil, - true, - }, - }, - }) -} - -type testFunctionConfig struct { - Cases []testFunctionCase - Vars map[string]ast.Variable -} - -type testFunctionCase struct { - Input string - Result interface{} - Error bool -} - -func testFunction(t *testing.T, config testFunctionConfig) { - t.Helper() - for _, tc := range config.Cases { - t.Run(tc.Input, func(t *testing.T) { - ast, err := hil.Parse(tc.Input) - if err != nil { - t.Fatalf("unexpected parse error: %s", err) - } - - result, err := hil.Eval(ast, langEvalConfig(config.Vars)) - if err != nil != tc.Error { - t.Fatalf("unexpected eval error: %s", err) - } - - if !reflect.DeepEqual(result.Value, tc.Result) { - t.Errorf("wrong result\ngiven: %s\ngot: %#v\nwant: %#v", tc.Input, result.Value, tc.Result) - } - }) - } -} - -func TestInterpolateFuncPathExpand(t *testing.T) { - homePath, err := homedir.Dir() - if err != nil { - t.Fatalf("Error getting home directory: %v", err) - } - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - `${pathexpand("~/test-file")}`, - filepath.Join(homePath, "test-file"), - false, - }, - { - `${pathexpand("~/another/test/file")}`, - filepath.Join(homePath, "another/test/file"), - false, - }, - { - `${pathexpand("/root/file")}`, - "/root/file", - false, - }, - { - `${pathexpand("/")}`, - "/", - false, - }, - { - `${pathexpand()}`, - nil, - true, - }, - }, - }) -} - -func TestInterpolateFuncSubstr(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - `${substr("foobar", 0, 0)}`, - "", - false, - }, - { - `${substr("foobar", 0, -1)}`, - "foobar", - false, - }, - { - `${substr("foobar", 0, 3)}`, - "foo", - false, - }, - { - `${substr("foobar", 3, 3)}`, - "bar", - false, - }, - { - `${substr("foobar", -3, 3)}`, - "bar", - false, - }, - - // empty string - { - `${substr("", 0, 0)}`, - "", - false, - }, - - // invalid offset - { - `${substr("", 1, 0)}`, - nil, - true, - }, - { - `${substr("foo", -4, -1)}`, - nil, - true, - }, - - // invalid length - { - `${substr("", 0, 1)}`, - nil, - true, - }, - { - `${substr("", 0, -2)}`, - nil, - true, - }, - }, - }) -} - -func TestInterpolateFuncBcrypt(t *testing.T) { - node, err := hil.Parse(`${bcrypt("test")}`) - if err != nil { - t.Fatalf("err: %s", err) - } - - result, err := hil.Eval(node, langEvalConfig(nil)) - if err != nil { - t.Fatalf("err: %s", err) - } - err = bcrypt.CompareHashAndPassword([]byte(result.Value.(string)), []byte("test")) - - if err != nil { - t.Fatalf("Error comparing hash and password: %s", err) - } - - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - //Negative test for more than two parameters - { - `${bcrypt("test", 15, 12)}`, - nil, - true, - }, - }, - }) -} - -func TestInterpolateFuncFlatten(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - // 
empty string within array - { - `${flatten(split(",", "a,,b"))}`, - []interface{}{"a", "", "b"}, - false, - }, - - // typical array - { - `${flatten(split(",", "a,b,c"))}`, - []interface{}{"a", "b", "c"}, - false, - }, - - // empty array - { - `${flatten(split(",", ""))}`, - []interface{}{""}, - false, - }, - - // list of lists - { - `${flatten(list(list("a"), list("b")))}`, - []interface{}{"a", "b"}, - false, - }, - // list of lists of lists - { - `${flatten(list(list("a"), list(list("b","c"))))}`, - []interface{}{"a", "b", "c"}, - false, - }, - // list of strings - { - `${flatten(list("a", "b", "c"))}`, - []interface{}{"a", "b", "c"}, - false, - }, - }, - }) -} - -func TestInterpolateFuncURLEncode(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - `${urlencode("abc123-_")}`, - "abc123-_", - false, - }, - { - `${urlencode("foo:bar@localhost?foo=bar&bar=baz")}`, - "foo%3Abar%40localhost%3Ffoo%3Dbar%26bar%3Dbaz", - false, - }, - { - `${urlencode("mailto:email?subject=this+is+my+subject")}`, - "mailto%3Aemail%3Fsubject%3Dthis%2Bis%2Bmy%2Bsubject", - false, - }, - { - `${urlencode("foo/bar")}`, - "foo%2Fbar", - false, - }, - }, - }) -} - -func TestInterpolateFuncTranspose(t *testing.T) { - testFunction(t, testFunctionConfig{ - Vars: map[string]ast.Variable{ - "var.map": ast.Variable{ - Type: ast.TypeMap, - Value: map[string]ast.Variable{ - "key1": ast.Variable{ - Type: ast.TypeList, - Value: []ast.Variable{ - {Type: ast.TypeString, Value: "a"}, - {Type: ast.TypeString, Value: "b"}, - }, - }, - "key2": ast.Variable{ - Type: ast.TypeList, - Value: []ast.Variable{ - {Type: ast.TypeString, Value: "a"}, - {Type: ast.TypeString, Value: "b"}, - {Type: ast.TypeString, Value: "c"}, - }, - }, - "key3": ast.Variable{ - Type: ast.TypeList, - Value: []ast.Variable{ - {Type: ast.TypeString, Value: "c"}, - }, - }, - "key4": ast.Variable{ - Type: ast.TypeList, - Value: []ast.Variable{}, - }, - }}, - "var.badmap": ast.Variable{ - Type: ast.TypeMap, - Value: map[string]ast.Variable{ - "key1": ast.Variable{ - Type: ast.TypeList, - Value: []ast.Variable{ - {Type: ast.TypeList, Value: []ast.Variable{}}, - {Type: ast.TypeList, Value: []ast.Variable{}}, - }, - }, - }}, - "var.worsemap": ast.Variable{ - Type: ast.TypeMap, - Value: map[string]ast.Variable{ - "key1": ast.Variable{ - Type: ast.TypeString, - Value: "not-a-list", - }, - }}, - }, - Cases: []testFunctionCase{ - { - `${transpose(var.map)}`, - map[string]interface{}{ - "a": []interface{}{"key1", "key2"}, - "b": []interface{}{"key1", "key2"}, - "c": []interface{}{"key2", "key3"}, - }, - false, - }, - { - `${transpose(var.badmap)}`, - nil, - true, - }, - { - `${transpose(var.worsemap)}`, - nil, - true, - }, - }, - }) -} - -func TestInterpolateFuncAbs(t *testing.T) { - testFunction(t, testFunctionConfig{ - Cases: []testFunctionCase{ - { - `${abs()}`, - nil, - true, - }, - { - `${abs("")}`, - nil, - true, - }, - { - `${abs(0)}`, - "0", - false, - }, - { - `${abs(1)}`, - "1", - false, - }, - { - `${abs(-1)}`, - "1", - false, - }, - { - `${abs(1.0)}`, - "1", - false, - }, - { - `${abs(-1.0)}`, - "1", - false, - }, - { - `${abs(-3.14)}`, - "3.14", - false, - }, - { - `${abs(-42.001)}`, - "42.001", - false, - }, - }, - }) -} - -func TestInterpolateFuncRsaDecrypt(t *testing.T) { - testFunction(t, testFunctionConfig{ - Vars: map[string]ast.Variable{ - "var.cipher_base64": ast.Variable{ - Type: ast.TypeString, - Value: 
"eczGaDhXDbOFRZGhjx2etVzWbRqWDlmq0bvNt284JHVbwCgObiuyX9uV0LSAMY707IEgMkExJqXmsB4OWKxvB7epRB9G/3+F+pcrQpODlDuL9oDUAsa65zEpYF0Wbn7Oh7nrMQncyUPpyr9WUlALl0gRWytOA23S+y5joa4M34KFpawFgoqTu/2EEH4Xl1zo+0fy73fEto+nfkUY+meuyGZ1nUx/+DljP7ZqxHBFSlLODmtuTMdswUbHbXbWneW51D7Jm7xB8nSdiA2JQNK5+Sg5x8aNfgvFTt/m2w2+qpsyFa5Wjeu6fZmXSl840CA07aXbk9vN4I81WmJyblD/ZA==", - }, - "var.private_key": ast.Variable{ - Type: ast.TypeString, - Value: ` ------BEGIN RSA PRIVATE KEY----- -MIIEowIBAAKCAQEAgUElV5mwqkloIrM8ZNZ72gSCcnSJt7+/Usa5G+D15YQUAdf9 -c1zEekTfHgDP+04nw/uFNFaE5v1RbHaPxhZYVg5ZErNCa/hzn+x10xzcepeS3KPV -Xcxae4MR0BEegvqZqJzN9loXsNL/c3H/B+2Gle3hTxjlWFb3F5qLgR+4Mf4ruhER -1v6eHQa/nchi03MBpT4UeJ7MrL92hTJYLdpSyCqmr8yjxkKJDVC2uRrr+sTSxfh7 -r6v24u/vp/QTmBIAlNPgadVAZw17iNNb7vjV7Gwl/5gHXonCUKURaV++dBNLrHIZ -pqcAM8wHRph8mD1EfL9hsz77pHewxolBATV+7QIDAQABAoIBAC1rK+kFW3vrAYm3 -+8/fQnQQw5nec4o6+crng6JVQXLeH32qXShNf8kLLG/Jj0vaYcTPPDZw9JCKkTMQ -0mKj9XR/5DLbBMsV6eNXXuvJJ3x4iKW5eD9WkLD4FKlNarBRyO7j8sfPTqXW7uat -NxWdFH7YsSRvNh/9pyQHLWA5OituidMrYbc3EUx8B1GPNyJ9W8Q8znNYLfwYOjU4 -Wv1SLE6qGQQH9Q0WzA2WUf8jklCYyMYTIywAjGb8kbAJlKhmj2t2Igjmqtwt1PYc -pGlqbtQBDUiWXt5S4YX/1maIQ/49yeNUajjpbJiH3DbhJbHwFTzP3pZ9P9GHOzlG -kYR+wSECgYEAw/Xida8kSv8n86V3qSY/I+fYQ5V+jDtXIE+JhRnS8xzbOzz3v0WS -Oo5H+o4nJx5eL3Ghb3Gcm0Jn46dHrxinHbm+3RjXv/X6tlbxIYjRSQfHOTSMCTvd -qcliF5vC6RCLXuc7R+IWR1Ky6eDEZGtrvt3DyeYABsp9fRUFR/6NluUCgYEAqNsw -1aSl7WJa27F0DoJdlU9LWerpXcazlJcIdOz/S9QDmSK3RDQTdqfTxRmrxiYI9LEs -mkOkvzlnnOBMpnZ3ZOU5qIRfprecRIi37KDAOHWGnlC0EWGgl46YLb7/jXiWf0AG -Y+DfJJNd9i6TbIDWu8254/erAS6bKMhW/3q7f2kCgYAZ7Id/BiKJAWRpqTRBXlvw -BhXoKvjI2HjYP21z/EyZ+PFPzur/lNaZhIUlMnUfibbwE9pFggQzzf8scM7c7Sf+ -mLoVSdoQ/Rujz7CqvQzi2nKSsM7t0curUIb3lJWee5/UeEaxZcmIufoNUrzohAWH -BJOIPDM4ssUTLRq7wYM9uQKBgHCBau5OP8gE6mjKuXsZXWUoahpFLKwwwmJUp2vQ -pOFPJ/6WZOlqkTVT6QPAcPUbTohKrF80hsZqZyDdSfT3peFx4ZLocBrS56m6NmHR -UYHMvJ8rQm76T1fryHVidz85g3zRmfBeWg8yqT5oFg4LYgfLsPm1gRjOhs8LfPvI -OLlRAoGBAIZ5Uv4Z3s8O7WKXXUe/lq6j7vfiVkR1NW/Z/WLKXZpnmvJ7FgxN4e56 -RXT7GwNQHIY8eDjDnsHxzrxd+raOxOZeKcMHj3XyjCX3NHfTscnsBPAGYpY/Wxzh -T8UYnFu6RzkixElTf2rseEav7rkdKkI3LAeIZy7B0HulKKsmqVQ7 ------END RSA PRIVATE KEY----- -`, - }, - "var.wrong_private_key": ast.Variable{ - Type: ast.TypeString, - Value: ` ------BEGIN RSA PRIVATE KEY----- -MIIEowIBAAKCAQEAlrCgnEVgmNKCq7KPc+zUU5IrxPu1ClMNJS7RTsTPEkbwe5SB -p+6V6WtCbD/X/lDRRGbOENChh1Phulb7lViqgrdpHydgsrKoS5ah3DfSIxLFLE00 -9Yo4TCYwgw6+s59j16ZAFVinaQ9l6Kmrb2ll136hMrz8QKh+qw+onOLd38WFgm+W -ZtUqSXf2LANzfzzy4OWFNyFqKaCAolSkPdTS9Nz+svtScvp002DQp8OdP1AgPO+l -o5N3M38Fftapwg0pCtJ5Zq0NRWIXEonXiTEMA6zy3gEZVOmDxoIFUWnmrqlMJLFy -5S6LDrHSdqJhCxDK6WRZj43X9j8spktk3eGhMwIDAQABAoIBAAem8ID/BOi9x+Tw -LFi2rhGQWqimH4tmrEQ3HGnjlKBY+d1MrUjZ1MMFr1nP5CgF8pqGnfA8p/c3Sz8r -K5tp5T6+EZiDZ2WrrOApxg5ox0MAsQKO6SGO40z6o3wEQ6rbbTaGOrraxaWQIpyu -AQanU4Sd6ZGqByVBaS1GnklZO+shCHqw73b7g1cpLEmFzcYnKHYHlUUIsstMe8E1 -BaCY0CH7JbWBjcbiTnBVwIRZuu+EjGiQuhTilYL2OWqoMVg1WU0L2IFpR8lkf/2W -SBx5J6xhwbBGASOpM+qidiN580GdPzGhWYSqKGroHEzBm6xPSmV1tadNA26WFG4p -pthLiAECgYEA5BsPRpNYJAQLu5B0N7mj9eEp0HABVEgL/MpwiImjaKdAwp78HM64 -IuPvJxs7r+xESiIz4JyjR8zrQjYOCKJsARYkmNlEuAz0SkHabCw1BdEBwUhjUGVB -efoERK6GxfAoNqmSDwsOvHFOtsmDIlbHmg7G2rUxNVpeou415BSB0B8CgYEAqR4J -YHKk2Ibr9rU+rBU33TcdTGw0aAkFNAVeqM9j0haWuFXmV3RArgoy09lH+2Ha6z/g -fTX2xSDAWV7QUlLOlBRIhurPAo2jO2yCrGHPZcWiugstrR2hTTInigaSnCmK3i7F -6sYmL3S7K01IcVNxSlWvGijtClT92Cl2WUCTfG0CgYAiEjyk4QtQTd5mxLvnOu5X -oqs5PBGmwiAwQRiv/EcRMbJFn7Oupd3xMDSflbzDmTnWDOfMy/jDl8MoH6TW+1PA -kcsjnYhbKWwvz0hN0giVdtOZSDO1ZXpzOrn6fEsbM7T9/TQY1SD9WrtUKCNTNL0Z 
-sM1ZC6lu+7GZCpW4HKwLJwKBgQCRT0yxQXBg1/UxwuO5ynV4rx2Oh76z0WRWIXMH -S0MyxdP1SWGkrS/SGtM3cg/GcHtA/V6vV0nUcWK0p6IJyjrTw2XZ/zGluPuTWJYi -9dvVT26Vunshrz7kbH7KuwEICy3V4IyQQHeY+QzFlR70uMS0IVFWAepCoWqHbIDT -CYhwNQKBgGPcLXmjpGtkZvggl0aZr9LsvCTckllSCFSI861kivL/rijdNoCHGxZv -dfDkLTLcz9Gk41rD9Gxn/3sqodnTAc3Z2PxFnzg1Q/u3+x6YAgBwI/g/jE2xutGW -H7CurtMwALQ/n/6LUKFmjRZjqbKX9SO2QSaC3grd6sY9Tu+bZjLe ------END RSA PRIVATE KEY----- -`, - }, - }, - Cases: []testFunctionCase{ - // Base-64 encoded cipher decrypts correctly - { - `${rsadecrypt(var.cipher_base64, var.private_key)}`, - "message", - false, - }, - // Raw cipher - { - `${rsadecrypt(base64decode(var.cipher_base64), var.private_key)}`, - nil, - true, - }, - // Wrong key - { - `${rsadecrypt(var.cipher_base64, var.wrong_private_key)}`, - nil, - true, - }, - // Bad key - { - `${rsadecrypt(var.cipher_base64, "bad key")}`, - nil, - true, - }, - // Empty key - { - `${rsadecrypt(var.cipher_base64, "")}`, - nil, - true, - }, - // Bad cipher - { - `${rsadecrypt("bad cipher", var.private_key)}`, - nil, - true, - }, - // Bad base64-encoded cipher - { - `${rsadecrypt(base64encode("bad cipher"), var.private_key)}`, - nil, - true, - }, - // Empty cipher - { - `${rsadecrypt("", var.private_key)}`, - nil, - true, - }, - // Too many arguments - { - `${rsadecrypt("", "", "")}`, - nil, - true, - }, - // One argument - { - `${rsadecrypt("")}`, - nil, - true, - }, - // No arguments - { - `${rsadecrypt()}`, - nil, - true, - }, - }, - }) -} diff --git a/config/lang.go b/config/lang.go deleted file mode 100644 index 890d30beb..000000000 --- a/config/lang.go +++ /dev/null @@ -1,11 +0,0 @@ -package config - -import ( - "github.com/hashicorp/hil/ast" -) - -type noopNode struct{} - -func (n *noopNode) Accept(ast.Visitor) ast.Node { return n } -func (n *noopNode) Pos() ast.Pos { return ast.Pos{} } -func (n *noopNode) Type(ast.Scope) (ast.Type, error) { return ast.TypeString, nil } diff --git a/config/raw_config.go b/config/raw_config.go index a7d4d595e..32d38114c 100644 --- a/config/raw_config.go +++ b/config/raw_config.go @@ -7,9 +7,6 @@ import ( "strconv" "sync" - "github.com/zclconf/go-cty/cty" - "github.com/zclconf/go-cty/cty/convert" - hcl2 "github.com/hashicorp/hcl/v2" "github.com/hashicorp/hil" "github.com/hashicorp/hil/ast" @@ -165,7 +162,12 @@ func (r *RawConfig) Interpolate(vs map[string]ast.Variable) error { r.lock.Lock() defer r.lock.Unlock() - config := langEvalConfig(vs) + // Create the evaluation configuration we use to execute + config := &hil.EvalConfig{ + GlobalScope: &ast.BasicScope{ + VarMap: vs, + }, + } return r.interpolate(func(root ast.Node) (interface{}, error) { // None of the variables we need are computed, meaning we should // be able to properly evaluate. @@ -349,37 +351,9 @@ func (r *RawConfig) couldBeInteger() bool { _, err := strconv.ParseInt(r.Value().(string), 0, 0) return err == nil } else { - // HCL2 experiment path: using the HCL2 API via shims - // - // This path catches fewer situations because we have to assume all - // variables are entirely unknown in HCL2, rather than the assumption - // above that all variables can be numbers because names like "var.foo" - // are considered a single variable rather than an attribute access. - // This is fine in practice, because we get a definitive answer - // during the graph walk when we have real values to work with. - attrs, diags := r.Body.JustAttributes() - if diags.HasErrors() { - // This body is not just a single attribute with a value, so - // this can't be a number. 
- return false - } - attr, hasAttr := attrs[r.Key] - if !hasAttr { - return false - } - result, diags := hcl2EvalWithUnknownVars(attr.Expr) - if diags.HasErrors() { - // We'll conservatively assume that this error is a result of - // us not being ready to fully-populate the scope, and catch - // any further problems during the main graph walk. - return true - } - - // If the result is convertable to number then we'll allow it. - // We do this because an unknown string is optimistically convertable - // to number (might be "5") but a _known_ string "hello" is not. - _, err := convert.Convert(result, cty.Number) - return err == nil + // We briefly tried to gradually implement HCL2 support by adding a + // branch here, but that experiment was not successful. + panic("HCL2 experimental path no longer supported") } } @@ -430,21 +404,3 @@ type gobRawConfig struct { Key string Raw map[string]interface{} } - -// langEvalConfig returns the evaluation configuration we use to execute. -func langEvalConfig(vs map[string]ast.Variable) *hil.EvalConfig { - funcMap := make(map[string]ast.Function) - for k, v := range Funcs() { - funcMap[k] = v - } - funcMap["lookup"] = interpolationFuncLookup(vs) - funcMap["keys"] = interpolationFuncKeys(vs) - funcMap["values"] = interpolationFuncValues(vs) - - return &hil.EvalConfig{ - GlobalScope: &ast.BasicScope{ - VarMap: vs, - FuncMap: funcMap, - }, - } -} diff --git a/config/resource_mode.go b/config/resource_mode.go index 877c6e848..dd915217c 100644 --- a/config/resource_mode.go +++ b/config/resource_mode.go @@ -1,6 +1,6 @@ package config -//go:generate stringer -type=ResourceMode -output=resource_mode_string.go resource_mode.go +//go:generate go run golang.org/x/tools/cmd/stringer -type=ResourceMode -output=resource_mode_string.go resource_mode.go type ResourceMode int const ( diff --git a/configs/compat_shim.go b/configs/compat_shim.go index e594ebd40..b645ac890 100644 --- a/configs/compat_shim.go +++ b/configs/compat_shim.go @@ -69,28 +69,21 @@ func shimTraversalInString(expr hcl.Expression, wantKeyword bool) (hcl.Expressio ) diags = append(diags, tDiags...) - // For initial release our deprecation warnings are disabled to allow - // a period where modules can be compatible with both old and new - // conventions. - // FIXME: Re-enable these deprecation warnings in a release prior to - // Terraform 0.13 and then remove the shims altogether for 0.13. - /* - if wantKeyword { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagWarning, - Summary: "Quoted keywords are deprecated", - Detail: "In this context, keywords are expected literally rather than in quotes. Previous versions of Terraform required quotes, but that usage is now deprecated. Remove the quotes surrounding this keyword to silence this warning.", - Subject: &srcRange, - }) - } else { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagWarning, - Summary: "Quoted references are deprecated", - Detail: "In this context, references are expected literally rather than in quotes. Previous versions of Terraform required quotes, but that usage is now deprecated. Remove the quotes surrounding this reference to silence this warning.", - Subject: &srcRange, - }) - } - */ + if wantKeyword { + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagWarning, + Summary: "Quoted keywords are deprecated", + Detail: "In this context, keywords are expected literally rather than in quotes. 
Terraform 0.11 and earlier required quotes, but quoted keywords are now deprecated and will be removed in a future version of Terraform. Remove the quotes surrounding this keyword to silence this warning.", + Subject: &srcRange, + }) + } else { + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagWarning, + Summary: "Quoted references are deprecated", + Detail: "In this context, references are expected literally rather than in quotes. Terraform 0.11 and earlier required quotes, but quoted references are now deprecated and will be removed in a future version of Terraform. Remove the quotes surrounding this reference to silence this warning.", + Subject: &srcRange, + }) + } return &hclsyntax.ScopeTraversalExpr{ Traversal: traversal, @@ -114,3 +107,58 @@ func shimIsIgnoreChangesStar(expr hcl.Expression) bool { } return val.AsString() == "*" } + +// warnForDeprecatedInterpolationsInBody returns warning diagnostics if the given +// body can be proven to contain attributes whose expressions are native +// syntax expressions consisting entirely of a single template interpolation, +// which is a deprecated way to include a non-literal value in configuration. +// +// This is a best-effort sort of thing which relies on the physical HCL native +// syntax AST, so it might not catch everything. The main goal is to catch the +// "obvious" cases in order to help spread awareness that this old form is +// deprecated, when folks copy it from older examples they've found on the +// internet that were written for Terraform 0.11 or earlier. +func warnForDeprecatedInterpolationsInBody(body hcl.Body) hcl.Diagnostics { + var diags hcl.Diagnostics + + nativeBody, ok := body.(*hclsyntax.Body) + if !ok { + // If it's not native syntax then we've nothing to do here. + return diags + } + + for _, attr := range nativeBody.Attributes { + moreDiags := warnForDeprecatedInterpolationsInExpr(attr.Expr) + diags = append(diags, moreDiags...) + } + + for _, block := range nativeBody.Blocks { + // We'll also go hunting in nested blocks + moreDiags := warnForDeprecatedInterpolationsInBody(block.Body) + diags = append(diags, moreDiags...) + } + + return diags +} + +func warnForDeprecatedInterpolationsInExpr(expr hcl.Expression) hcl.Diagnostics { + var diags hcl.Diagnostics + + if _, ok := expr.(*hclsyntax.TemplateWrapExpr); !ok { + // We're only interested in TemplateWrapExpr, because that's how + // the HCL native syntax parser represents the case of a template + // that consists entirely of a single interpolation expression, which + // is therefore subject to the special case of passing through the + // inner value without conversion to string. + return diags + } + + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagWarning, + Summary: "Interpolation-only expressions are deprecated", + Detail: "Terraform 0.11 and earlier required all non-constant expressions to be provided via interpolation syntax, but this pattern is now deprecated. To silence this warning, remove the \"${ sequence from the start and the }\" sequence from the end of this expression, leaving just the inner expression.\n\nTemplate interpolation syntax is still used to construct strings from expressions when the template includes multiple interpolation sequences or a mixture of literal strings and interpolations. 
This deprecation applies only to templates that consist entirely of a single interpolation sequence.", + Subject: expr.Range().Ptr(), + }) + + return diags +} diff --git a/configs/config.go b/configs/config.go index cc10fb9c4..5c5b3eb75 100644 --- a/configs/config.go +++ b/configs/config.go @@ -1,6 +1,7 @@ package configs import ( + "fmt" "sort" version "github.com/hashicorp/go-version" @@ -162,7 +163,7 @@ func (c *Config) DescendentForInstance(path addrs.ModuleInstance) *Config { return current } -// ProviderTypes returns the names of each distinct provider type referenced +// ProviderTypes returns the FQNs of each distinct provider type referenced // in the receiving configuration. // // This is a helper for easily determining which provider types are required @@ -170,32 +171,39 @@ func (c *Config) DescendentForInstance(path addrs.ModuleInstance) *Config { // information and so callers are expected to have already dealt with // provider version selection in an earlier step and have identified suitable // versions for each provider. -func (c *Config) ProviderTypes() []string { - m := make(map[string]struct{}) +func (c *Config) ProviderTypes() []addrs.Provider { + m := make(map[addrs.Provider]struct{}) c.gatherProviderTypes(m) - ret := make([]string, 0, len(m)) + ret := make([]addrs.Provider, 0, len(m)) for k := range m { ret = append(ret, k) } - sort.Strings(ret) + sort.Slice(ret, func(i, j int) bool { + return ret[i].String() < ret[j].String() + }) return ret } -func (c *Config) gatherProviderTypes(m map[string]struct{}) { + +func (c *Config) gatherProviderTypes(m map[addrs.Provider]struct{}) { if c == nil { return } + // FIXME: These are currently all assuming legacy provider addresses. + // As part of phasing those out we'll need to change this to look up + // the true provider addresses via the local-to-FQN mapping table + // stored inside c.Module. for _, pc := range c.Module.ProviderConfigs { - m[pc.Name] = struct{}{} + m[addrs.NewLegacyProvider(pc.Name)] = struct{}{} } for _, rc := range c.Module.ManagedResources { providerAddr := rc.ProviderConfigAddr() - m[providerAddr.Type] = struct{}{} + m[addrs.NewLegacyProvider(providerAddr.LocalName)] = struct{}{} } for _, rc := range c.Module.DataResources { providerAddr := rc.ProviderConfigAddr() - m[providerAddr.Type] = struct{}{} + m[addrs.NewLegacyProvider(providerAddr.LocalName)] = struct{}{} } // Must also visit our child modules, recursively. @@ -203,3 +211,57 @@ func (c *Config) gatherProviderTypes(m map[string]struct{}) { cc.gatherProviderTypes(m) } } + +// ResolveAbsProviderAddr returns the AbsProviderConfig represented by the given +// ProviderConfig address, which must not be nil or this method will panic. +// +// If the given address is already an AbsProviderConfig then this method returns +// it verbatim, and will always succeed. If it's a LocalProviderConfig then +// it will consult the local-to-FQN mapping table for the given module +// to find the absolute address corresponding to the given local one. +// +// The module address to resolve local addresses in must be given in the second +// argument, and must refer to a module that exists under the receiver or +// else this method will panic. +func (c *Config) ResolveAbsProviderAddr(addr addrs.ProviderConfig, inModule addrs.ModuleInstance) addrs.AbsProviderConfig { + switch addr := addr.(type) { + + case addrs.AbsProviderConfig: + return addr + + case addrs.LocalProviderConfig: + // Find the descendent Config that contains the module that this + // local config belongs to. 
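+		// A sketch of the intended resolution (assuming the legacy
+		// fallback below): in the root module, the local address
+		//
+		//     addrs.LocalProviderConfig{LocalName: "aws", Alias: "west"}
+		//
+		// would become an AbsProviderConfig carrying
+		// addrs.NewLegacyProvider("aws") and the alias "west".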
+ mc := c.DescendentForInstance(inModule) + if mc == nil { + panic(fmt.Sprintf("ResolveAbsProviderAddr with non-existent module %s", inModule.String())) + } + + var provider addrs.Provider + if providerReq, exists := c.Module.ProviderRequirements[addr.LocalName]; exists { + provider = providerReq.Type + } else { + // FIXME: For now we're returning a _legacy_ address as fallback here, + // but once we remove legacy addresses this should actually be a + // _default_ provider address. + provider = addrs.NewLegacyProvider(addr.LocalName) + } + + return addrs.AbsProviderConfig{ + Module: inModule, + Provider: provider, + Alias: addr.Alias, + } + + default: + panic(fmt.Sprintf("cannot ResolveAbsProviderAddr(%v, ...)", addr)) + } + +} + +// ProviderForConfigAddr returns the FQN for a given addrs.ProviderConfig, first +// by checking for the provider in module.ProviderRequirements and falling +// back to addrs.NewLegacyProvider if it is not found. +func (c *Config) ProviderForConfigAddr(addr addrs.LocalProviderConfig) addrs.Provider { + return c.ResolveAbsProviderAddr(addr, addrs.RootModuleInstance).Provider +} diff --git a/configs/config_test.go b/configs/config_test.go index 7e9f9e130..012e44fc6 100644 --- a/configs/config_test.go +++ b/configs/config_test.go @@ -4,6 +4,8 @@ import ( "testing" "github.com/go-test/deep" + + "github.com/hashicorp/terraform/addrs" ) func TestConfigProviderTypes(t *testing.T) { @@ -18,12 +20,78 @@ func TestConfigProviderTypes(t *testing.T) { } got := cfg.ProviderTypes() - want := []string{ - "aws", - "null", - "template", + want := []addrs.Provider{ + addrs.NewLegacyProvider("aws"), + addrs.NewLegacyProvider("null"), + addrs.NewLegacyProvider("template"), } for _, problem := range deep.Equal(got, want) { t.Error(problem) } } + +func TestConfigResolveAbsProviderAddr(t *testing.T) { + mod, diags := testModuleFromDir("testdata/providers-explicit-fqn") + if diags.HasErrors() { + t.Fatal(diags.Error()) + } + + cfg, diags := BuildConfig(mod, nil) + if diags.HasErrors() { + t.Fatal(diags.Error()) + } + + t.Run("already absolute", func(t *testing.T) { + addr := addrs.AbsProviderConfig{ + Module: addrs.RootModuleInstance, + Provider: addrs.NewLegacyProvider("test"), + Alias: "boop", + } + got := cfg.ResolveAbsProviderAddr(addr, addrs.RootModuleInstance) + if got, want := got.String(), addr.String(); got != want { + t.Errorf("wrong result\ngot: %s\nwant: %s", got, want) + } + }) + t.Run("local, implied mapping", func(t *testing.T) { + addr := addrs.LocalProviderConfig{ + LocalName: "implied", + Alias: "boop", + } + got := cfg.ResolveAbsProviderAddr(addr, addrs.RootModuleInstance) + want := addrs.AbsProviderConfig{ + Module: addrs.RootModuleInstance, + // FIXME: At the time of writing we still have LocalProviderConfig + // nested inside AbsProviderConfig, but a future change will + // stop this embedding and just have an addrs.Provider and an alias + // string here, at which point the correct result will be: + // Provider as the addrs repr of "registry.terraform.io/hashicorp/implied" + // Alias as "boop". 
+ Provider: addrs.NewLegacyProvider("implied"), + Alias: "boop", + } + if got, want := got.String(), want.String(); got != want { + t.Errorf("wrong result\ngot: %s\nwant: %s", got, want) + } + }) + t.Run("local, explicit mapping", func(t *testing.T) { + addr := addrs.LocalProviderConfig{ + LocalName: "foo-test", // this is explicitly set in the config + Alias: "boop", + } + got := cfg.ResolveAbsProviderAddr(addr, addrs.RootModuleInstance) + want := addrs.AbsProviderConfig{ + Module: addrs.RootModuleInstance, + // FIXME: At the time of writing we're not actually supporting + // the explicit mapping to FQNs because we're still in + // legacy-only mode, so this is temporarily correct. However, + // once we are fully supporting this we should expect to see + // the "registry.terraform.io/foo/test" FQN here, while still + // preserving the "boop" alias. + Provider: addrs.NewLegacyProvider("foo-test"), + Alias: "boop", + } + if got, want := got.String(), want.String(); got != want { + t.Errorf("wrong result\ngot: %s\nwant: %s", got, want) + } + }) +} diff --git a/configs/configload/copy_dir.go b/configs/configload/copy_dir.go index ebbeb3b62..840a7aa97 100644 --- a/configs/configload/copy_dir.go +++ b/configs/configload/copy_dir.go @@ -4,7 +4,6 @@ import ( "io" "os" "path/filepath" - "strings" ) // copyDir copies the src directory contents into dst. Both directories @@ -24,15 +23,6 @@ func copyDir(dst, src string) error { return nil } - if strings.HasPrefix(filepath.Base(path), ".") { - // Skip any dot files - if info.IsDir() { - return filepath.SkipDir - } else { - return nil - } - } - // The "path" has the src prefixed to it. We need to join our // destination with the path without the src on it. dstPath := filepath.Join(dst, path[len(src):]) diff --git a/configs/configload/getter.go b/configs/configload/getter.go index 75c7ef1f4..146f04b38 100644 --- a/configs/configload/getter.go +++ b/configs/configload/getter.go @@ -19,6 +19,7 @@ import ( var goGetterDetectors = []getter.Detector{ new(getter.GitHubDetector), + new(getter.GitDetector), new(getter.BitBucketDetector), new(getter.GCSDetector), new(getter.S3Detector), @@ -84,7 +85,7 @@ func (g reusingGetter) getWithGoGetter(instPath, addr string) (string, error) { log.Printf("[DEBUG] will download %q to %s", packageAddr, instPath) - realAddr, err := getter.Detect(packageAddr, instPath, getter.Detectors) + realAddr, err := getter.Detect(packageAddr, instPath, goGetterDetectors) if err != nil { return "", err } diff --git a/configs/configload/loader.go b/configs/configload/loader.go index 416b48fc8..a09b80c8c 100644 --- a/configs/configload/loader.go +++ b/configs/configload/loader.go @@ -4,9 +4,9 @@ import ( "fmt" "path/filepath" + "github.com/hashicorp/terraform-svchost/disco" "github.com/hashicorp/terraform/configs" "github.com/hashicorp/terraform/registry" - "github.com/hashicorp/terraform/svchost/disco" "github.com/spf13/afero" ) diff --git a/configs/configload/module_mgr.go b/configs/configload/module_mgr.go index 3c410eeb7..16871e310 100644 --- a/configs/configload/module_mgr.go +++ b/configs/configload/module_mgr.go @@ -4,9 +4,9 @@ import ( "os" "path/filepath" + "github.com/hashicorp/terraform-svchost/disco" "github.com/hashicorp/terraform/internal/modsdir" "github.com/hashicorp/terraform/registry" - "github.com/hashicorp/terraform/svchost/disco" "github.com/spf13/afero" ) diff --git a/configs/configschema/schema.go b/configs/configschema/schema.go index 5a67334d4..f4702d369 100644 --- a/configs/configschema/schema.go +++ 
b/configs/configschema/schema.go @@ -83,7 +83,7 @@ type NestedBlock struct { // blocks. type NestingMode int -//go:generate stringer -type=NestingMode +//go:generate go run golang.org/x/tools/cmd/stringer -type=NestingMode const ( nestingModeInvalid NestingMode = iota diff --git a/configs/configupgrade/analysis.go b/configs/configupgrade/analysis.go index 93209fcde..b998c87e1 100644 --- a/configs/configupgrade/analysis.go +++ b/configs/configupgrade/analysis.go @@ -109,12 +109,9 @@ func (u *Upgrader) analyze(ms ModuleSources) (*analysis, error) { } } - inst := moduledeps.ProviderInstance(name) - if alias != "" { - inst = moduledeps.ProviderInstance(name + "." + alias) - } - log.Printf("[TRACE] Provider block requires provider %q", inst) - m.Providers[inst] = moduledeps.ProviderDependency{ + fqn := addrs.NewLegacyProvider(name) + log.Printf("[TRACE] Provider block requires provider %q", fqn.LegacyString()) + m.Providers[fqn] = moduledeps.ProviderDependency{ Constraints: constraints, Reason: moduledeps.ProviderDependencyExplicit, } @@ -178,18 +175,23 @@ func (u *Upgrader) analyze(ms ModuleSources) (*analysis, error) { } } + var fqn addrs.Provider if providerKey == "" { - providerKey = rAddr.DefaultProviderConfig().StringCompact() + fqn = rAddr.DefaultProvider() + } else { + // ProviderDependencies only need to know the provider FQN + // strip any alias from the providerKey + parts := strings.Split(providerKey, ".") + fqn = addrs.NewLegacyProvider(parts[0]) } - inst := moduledeps.ProviderInstance(providerKey) - log.Printf("[TRACE] Resource block for %s requires provider %q", rAddr, inst) - if _, exists := m.Providers[inst]; !exists { - m.Providers[inst] = moduledeps.ProviderDependency{ + log.Printf("[TRACE] Resource block for %s requires provider %q", rAddr, fqn) + if _, exists := m.Providers[fqn]; !exists { + m.Providers[fqn] = moduledeps.ProviderDependency{ Reason: moduledeps.ProviderDependencyImplicit, } } - ret.ResourceProviderType[rAddr] = inst.Type() + ret.ResourceProviderType[rAddr] = fqn.Type } } @@ -241,11 +243,11 @@ func (u *Upgrader) analyze(ms ModuleSources) (*analysis, error) { return nil, fmt.Errorf("error resolving providers:\n%s", errorsMsg) } - for name, fn := range providerFactories { - log.Printf("[TRACE] Fetching schema from provider %q", name) + for fqn, fn := range providerFactories { + log.Printf("[TRACE] Fetching schema from provider %q", fqn.LegacyString()) provider, err := fn() if err != nil { - return nil, fmt.Errorf("failed to load provider %q: %s", name, err) + return nil, fmt.Errorf("failed to load provider %q: %s", fqn.LegacyString(), err) } resp := provider.GetSchema() @@ -264,7 +266,7 @@ func (u *Upgrader) analyze(ms ModuleSources) (*analysis, error) { for t, s := range resp.DataSources { schema.DataSources[t] = s.Block } - ret.ProviderSchemas[name] = schema + ret.ProviderSchemas[fqn.LegacyString()] = schema } for name, fn := range u.Provisioners { diff --git a/configs/configupgrade/upgrade_expr.go b/configs/configupgrade/upgrade_expr.go index a99aaf5f3..a38ee284d 100644 --- a/configs/configupgrade/upgrade_expr.go +++ b/configs/configupgrade/upgrade_expr.go @@ -38,9 +38,9 @@ Value: return upgradeExpr(tv.Token, filename, interp, an) case hcl1token.Token: - litVal := tv.Value() switch tv.Type { case hcl1token.STRING: + litVal := tv.Value() if !interp { // Easy case, then. 
printQuotedString(&buf, litVal.(string)) @@ -141,6 +141,7 @@ Value: buf.WriteString(marker) case hcl1token.BOOL: + litVal := tv.Value() if litVal.(bool) { buf.WriteString("true") } else { @@ -148,12 +149,28 @@ Value: } case hcl1token.NUMBER: - num := tv.Value() - buf.WriteString(strconv.FormatInt(num.(int64), 10)) + num, err := strconv.ParseInt(tv.Text, 0, 64) + if err != nil { + diags = diags.Append(&hcl2.Diagnostic{ + Severity: hcl2.DiagError, + Summary: "Invalid number value", + Detail: fmt.Sprintf("Parsing failed: %s", err), + Subject: hcl1PosRange(filename, tv.Pos).Ptr(), + }) + } + buf.WriteString(strconv.FormatInt(num, 10)) case hcl1token.FLOAT: - num := tv.Value() - buf.WriteString(strconv.FormatFloat(num.(float64), 'f', -1, 64)) + num, err := strconv.ParseFloat(tv.Text, 64) + if err != nil { + diags = diags.Append(&hcl2.Diagnostic{ + Severity: hcl2.DiagError, + Summary: "Invalid float value", + Detail: fmt.Sprintf("Parsing failed: %s", err), + Subject: hcl1PosRange(filename, tv.Pos).Ptr(), + }) + } + buf.WriteString(strconv.FormatFloat(num, 'f', -1, 64)) default: // For everything else we'll just pass through the given bytes verbatim, diff --git a/configs/configupgrade/upgrade_test.go b/configs/configupgrade/upgrade_test.go index f4940ebee..db15d850f 100644 --- a/configs/configupgrade/upgrade_test.go +++ b/configs/configupgrade/upgrade_test.go @@ -14,6 +14,7 @@ import ( "github.com/davecgh/go-spew/spew" "github.com/zclconf/go-cty/cty" + "github.com/hashicorp/terraform/addrs" backendinit "github.com/hashicorp/terraform/backend/init" "github.com/hashicorp/terraform/configs/configschema" "github.com/hashicorp/terraform/helper/logging" @@ -186,8 +187,8 @@ func diffSourceFilesFallback(got, want []byte) []byte { return buf.Bytes() } -var testProviders = map[string]providers.Factory{ - "test": providers.Factory(func() (providers.Interface, error) { +var testProviders = map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): providers.Factory(func() (providers.Interface, error) { p := &terraform.MockProvider{} p.GetSchemaReturn = &terraform.ProviderSchema{ ResourceTypes: map[string]*configschema.Block{ @@ -236,7 +237,7 @@ var testProviders = map[string]providers.Factory{ } return p, nil }), - "terraform": providers.Factory(func() (providers.Interface, error) { + addrs.NewLegacyProvider("terraform"): providers.Factory(func() (providers.Interface, error) { p := &terraform.MockProvider{} p.GetSchemaReturn = &terraform.ProviderSchema{ DataSources: map[string]*configschema.Block{ @@ -252,7 +253,7 @@ var testProviders = map[string]providers.Factory{ } return p, nil }), - "aws": providers.Factory(func() (providers.Interface, error) { + addrs.NewLegacyProvider("aws"): providers.Factory(func() (providers.Interface, error) { // This is here only so we can test the provisioner connection info // migration behavior, which is resource-type specific. Do not use // it in any other tests. diff --git a/configs/experiments.go b/configs/experiments.go new file mode 100644 index 000000000..435bac11d --- /dev/null +++ b/configs/experiments.go @@ -0,0 +1,156 @@ +package configs + +import ( + "fmt" + + "github.com/hashicorp/hcl/v2" + "github.com/hashicorp/terraform/experiments" +) + +// sniffActiveExperiments does minimal parsing of the given body for +// "terraform" blocks with "experiments" attributes, returning the +// experiments found. 
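+//
+// A sketch of the intended call pattern, assuming a parsed file body:
+//
+//     exps, expDiags := sniffActiveExperiments(body)
+//     diags = append(diags, expDiags...)
+//     if exps.Has(experiments.VariableValidation) {
+//         // decoding may accept the features this experiment gates
+//     }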
+// +// This is separate from other processing so that we can be sure that all of +// the experiments are known before we process the result of the module config, +// and thus we can take into account which experiments are active when deciding +// how to decode. +func sniffActiveExperiments(body hcl.Body) (experiments.Set, hcl.Diagnostics) { + rootContent, _, diags := body.PartialContent(configFileTerraformBlockSniffRootSchema) + + ret := experiments.NewSet() + + for _, block := range rootContent.Blocks { + content, _, blockDiags := block.Body.PartialContent(configFileExperimentsSniffBlockSchema) + diags = append(diags, blockDiags...) + + attr, exists := content.Attributes["experiments"] + if !exists { + continue + } + + exps, expDiags := decodeExperimentsAttr(attr) + diags = append(diags, expDiags...) + if !expDiags.HasErrors() { + ret = experiments.SetUnion(ret, exps) + } + } + + return ret, diags +} + +func decodeExperimentsAttr(attr *hcl.Attribute) (experiments.Set, hcl.Diagnostics) { + var diags hcl.Diagnostics + + exprs, moreDiags := hcl.ExprList(attr.Expr) + diags = append(diags, moreDiags...) + if moreDiags.HasErrors() { + return nil, diags + } + + var ret = experiments.NewSet() + for _, expr := range exprs { + kw := hcl.ExprAsKeyword(expr) + if kw == "" { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Invalid experiment keyword", + Detail: "Elements of \"experiments\" must all be keywords representing active experiments.", + Subject: expr.Range().Ptr(), + }) + continue + } + + exp, err := experiments.GetCurrent(kw) + switch err := err.(type) { + case experiments.UnavailableError: + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Unknown experiment keyword", + Detail: fmt.Sprintf("There is no current experiment with the keyword %q.", kw), + Subject: expr.Range().Ptr(), + }) + case experiments.ConcludedError: + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Experiment has concluded", + Detail: fmt.Sprintf("Experiment %q is no longer available. %s", kw, err.Message), + Subject: expr.Range().Ptr(), + }) + case nil: + // No error at all means it's valid and current. + ret.Add(exp) + + // However, experimental features are subject to breaking changes + // in future releases, so we'll warn about them to help make sure + // folks aren't inadvertently using them in places where that'd be + // inappropriate, particularly if the experiment is active in a + // shared module they depend on. + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagWarning, + Summary: fmt.Sprintf("Experimental feature %q is active", exp.Keyword()), + Detail: "Experimental features are subject to breaking changes in future minor or patch releases, based on feedback.\n\nIf you have feedback on the design of this feature, please open a GitHub issue to discuss it.", + Subject: expr.Range().Ptr(), + }) + + default: + // This should never happen, because GetCurrent is not documented + // to return any other error type, but we'll handle it to be robust. 
+ diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Invalid experiment keyword", + Detail: fmt.Sprintf("Could not parse %q as an experiment keyword: %s.", kw, err.Error()), + Subject: expr.Range().Ptr(), + }) + } + } + return ret, diags +} + +func checkModuleExperiments(m *Module) hcl.Diagnostics { + var diags hcl.Diagnostics + + // When we have current experiments, this is a good place to check that + // the features in question can only be used when the experiments are + // active. Return error diagnostics if a feature is being used without + // opting in to the feature. For example: + /* + if !m.ActiveExperiments.Has(experiments.ResourceForEach) { + for _, rc := range m.ManagedResources { + if rc.ForEach != nil { + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Resource for_each is experimental", + Detail: "This feature is currently an opt-in experiment, subject to change in future releases based on feedback.\n\nActivate the feature for this module by adding resource_for_each to the list of active experiments.", + Subject: rc.ForEach.Range().Ptr(), + }) + } + } + for _, rc := range m.DataResources { + if rc.ForEach != nil { + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Resource for_each is experimental", + Detail: "This feature is currently an opt-in experiment, subject to change in future releases based on feedback.\n\nActivate the feature for this module by adding resource_for_each to the list of active experiments.", + Subject: rc.ForEach.Range().Ptr(), + }) + } + } + } + */ + + if !m.ActiveExperiments.Has(experiments.VariableValidation) { + for _, vc := range m.Variables { + if len(vc.Validations) != 0 { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Custom variable validation is experimental", + Detail: "This feature is currently an opt-in experiment, subject to change in future releases based on feedback.\n\nActivate the feature for this module by adding variable_validation to the list of active experiments.", + Subject: vc.Validations[0].DeclRange.Ptr(), + }) + } + } + } + + return diags +} diff --git a/configs/experiments_test.go b/configs/experiments_test.go new file mode 100644 index 000000000..20c8bac68 --- /dev/null +++ b/configs/experiments_test.go @@ -0,0 +1,113 @@ +package configs + +import ( + "testing" + + "github.com/google/go-cmp/cmp" + "github.com/hashicorp/hcl/v2" + + "github.com/hashicorp/terraform/experiments" +) + +func TestExperimentsConfig(t *testing.T) { + // The experiment registrations are global, so we need to do some special + // patching in order to get a predictable set for our tests. 
+ current := experiments.Experiment("current") + concluded := experiments.Experiment("concluded") + currentExperiments := experiments.NewSet(current) + concludedExperiments := map[experiments.Experiment]string{ + concluded: "Reticulate your splines.", + } + defer experiments.OverrideForTesting(t, currentExperiments, concludedExperiments)() + + t.Run("current", func(t *testing.T) { + parser := NewParser(nil) + mod, diags := parser.LoadConfigDir("testdata/experiments/current") + if got, want := len(diags), 1; got != want { + t.Fatalf("wrong number of diagnostics %d; want %d", got, want) + } + got := diags[0] + want := &hcl.Diagnostic{ + Severity: hcl.DiagWarning, + Summary: `Experimental feature "current" is active`, + Detail: "Experimental features are subject to breaking changes in future minor or patch releases, based on feedback.\n\nIf you have feedback on the design of this feature, please open a GitHub issue to discuss it.", + Subject: &hcl.Range{ + Filename: "testdata/experiments/current/current_experiment.tf", + Start: hcl.Pos{Line: 2, Column: 18, Byte: 29}, + End: hcl.Pos{Line: 2, Column: 25, Byte: 36}, + }, + } + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("wrong warning\n%s", diff) + } + if got, want := len(mod.ActiveExperiments), 1; got != want { + t.Errorf("wrong number of experiments %d; want %d", got, want) + } + if !mod.ActiveExperiments.Has(current) { + t.Errorf("module does not indicate current experiment as active") + } + }) + t.Run("concluded", func(t *testing.T) { + parser := NewParser(nil) + _, diags := parser.LoadConfigDir("testdata/experiments/concluded") + if got, want := len(diags), 1; got != want { + t.Fatalf("wrong number of diagnostics %d; want %d", got, want) + } + got := diags[0] + want := &hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: `Experiment has concluded`, + Detail: `Experiment "concluded" is no longer available. 
Reticulate your splines.`, + Subject: &hcl.Range{ + Filename: "testdata/experiments/concluded/concluded_experiment.tf", + Start: hcl.Pos{Line: 2, Column: 18, Byte: 29}, + End: hcl.Pos{Line: 2, Column: 27, Byte: 38}, + }, + } + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("wrong error\n%s", diff) + } + }) + t.Run("unknown", func(t *testing.T) { + parser := NewParser(nil) + _, diags := parser.LoadConfigDir("testdata/experiments/unknown") + if got, want := len(diags), 1; got != want { + t.Fatalf("wrong number of diagnostics %d; want %d", got, want) + } + got := diags[0] + want := &hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: `Unknown experiment keyword`, + Detail: `There is no current experiment with the keyword "unknown".`, + Subject: &hcl.Range{ + Filename: "testdata/experiments/unknown/unknown_experiment.tf", + Start: hcl.Pos{Line: 2, Column: 18, Byte: 29}, + End: hcl.Pos{Line: 2, Column: 25, Byte: 36}, + }, + } + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("wrong error\n%s", diff) + } + }) + t.Run("invalid", func(t *testing.T) { + parser := NewParser(nil) + _, diags := parser.LoadConfigDir("testdata/experiments/invalid") + if got, want := len(diags), 1; got != want { + t.Fatalf("wrong number of diagnostics %d; want %d", got, want) + } + got := diags[0] + want := &hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: `Invalid expression`, + Detail: `A static list expression is required.`, + Subject: &hcl.Range{ + Filename: "testdata/experiments/invalid/invalid_experiments.tf", + Start: hcl.Pos{Line: 2, Column: 17, Byte: 28}, + End: hcl.Pos{Line: 2, Column: 24, Byte: 35}, + }, + } + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("wrong error\n%s", diff) + } + }) +} diff --git a/configs/module.go b/configs/module.go index bd4182a5c..1aba88172 100644 --- a/configs/module.go +++ b/configs/module.go @@ -6,6 +6,7 @@ import ( "github.com/hashicorp/hcl/v2" "github.com/hashicorp/terraform/addrs" + "github.com/hashicorp/terraform/experiments" ) // Module is a container for a set of configuration constructs that are @@ -25,9 +26,12 @@ type Module struct { CoreVersionConstraints []VersionConstraint + ActiveExperiments experiments.Set + Backend *Backend ProviderConfigs map[string]*Provider - ProviderRequirements map[string][]VersionConstraint + ProviderRequirements map[string]ProviderRequirements + ProviderLocalNames map[addrs.Provider]string Variables map[string]*Variable Locals map[string]*Local @@ -53,9 +57,11 @@ type Module struct { type File struct { CoreVersionConstraints []VersionConstraint - Backends []*Backend - ProviderConfigs []*Provider - ProviderRequirements []*ProviderRequirement + ActiveExperiments experiments.Set + + Backends []*Backend + ProviderConfigs []*Provider + RequiredProviders []*RequiredProvider Variables []*Variable Locals []*Local @@ -79,7 +85,8 @@ func NewModule(primaryFiles, overrideFiles []*File) (*Module, hcl.Diagnostics) { var diags hcl.Diagnostics mod := &Module{ ProviderConfigs: map[string]*Provider{}, - ProviderRequirements: map[string][]VersionConstraint{}, + ProviderRequirements: map[string]ProviderRequirements{}, + ProviderLocalNames: map[addrs.Provider]string{}, Variables: map[string]*Variable{}, Locals: map[string]*Local{}, Outputs: map[string]*Output{}, @@ -98,6 +105,11 @@ func NewModule(primaryFiles, overrideFiles []*File) (*Module, hcl.Diagnostics) { diags = append(diags, fileDiags...) } + diags = append(diags, checkModuleExperiments(mod)...) 
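+	// A usage sketch for callers (assuming two parsed primary files f1 and
+	// f2 from the same module directory):
+	//
+	//     mod, diags := NewModule([]*File{f1, f2}, nil)
+	//     if !diags.HasErrors() {
+	//         _ = mod.ActiveExperiments.Has(experiments.VariableValidation)
+	//     }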
+ + // Generate the FQN -> LocalProviderName map + mod.gatherProviderLocalNames() + return mod, diags } @@ -124,6 +136,8 @@ func (m *Module) appendFile(file *File) hcl.Diagnostics { m.CoreVersionConstraints = append(m.CoreVersionConstraints, constraint) } + m.ActiveExperiments = experiments.SetUnion(m.ActiveExperiments, file.ActiveExperiments) + for _, b := range file.Backends { if m.Backend != nil { diags = append(diags, &hcl.Diagnostic{ @@ -160,8 +174,25 @@ func (m *Module) appendFile(file *File) hcl.Diagnostics { m.ProviderConfigs[key] = pc } - for _, reqd := range file.ProviderRequirements { - m.ProviderRequirements[reqd.Name] = append(m.ProviderRequirements[reqd.Name], reqd.Requirement) + for _, reqd := range file.RequiredProviders { + // As an interim *testing* step, we will accept a source argument + // but assume that the source is a legacy provider. This allows us to + // exercise the provider local names -> fqn logic without changing + // terraform's behavior. + if reqd.Source != "" { + // FIXME: once the rest of the provider source logic is implemented, + // update this to get the addrs.Provider by using + // addrs.ParseProviderSourceString() + } + fqn := addrs.NewLegacyProvider(reqd.Name) + if existing, exists := m.ProviderRequirements[reqd.Name]; exists { + if existing.Type != fqn { + panic("provider fqn mismatch") + } + existing.VersionConstraints = append(existing.VersionConstraints, reqd.Requirement) + // Write the updated copy back, since the map stores struct values. + m.ProviderRequirements[reqd.Name] = existing + } else { + m.ProviderRequirements[reqd.Name] = ProviderRequirements{Type: fqn, VersionConstraints: []VersionConstraint{reqd.Requirement}} + } } for _, v := range file.Variables { @@ -304,8 +335,8 @@ func (m *Module) mergeFile(file *File) hcl.Diagnostics { } } - if len(file.ProviderRequirements) != 0 { - mergeProviderVersionConstraints(m.ProviderRequirements, file.ProviderRequirements) + if len(file.RequiredProviders) != 0 { + mergeProviderVersionConstraints(m.ProviderRequirements, file.RequiredProviders) } for _, v := range file.Variables { @@ -402,3 +433,35 @@ func (m *Module) mergeFile(file *File) hcl.Diagnostics { return diags } + +// gatherProviderLocalNames is a helper function that populates a map of +// provider FQNs -> provider local names. This information is useful for +// user-facing output, which should include both the FQN and LocalName. It must +// only be populated after the module has been parsed. +func (m *Module) gatherProviderLocalNames() { + providers := make(map[addrs.Provider]string) + for k, v := range m.ProviderRequirements { + providers[v.Type] = k + } + m.ProviderLocalNames = providers +} + +// LocalNameForProvider returns the module-specific user-supplied local name for +// a given provider FQN, or the default local name if none was supplied. 
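+//
+// A sketch of the intended lookup, assuming a module that declares a
+// provider under the local name "foo-test":
+//
+//     p := addrs.NewLegacyProvider("foo-test")
+//     name := m.LocalNameForProvider(p) // "foo-test", via ProviderLocalNames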
+func (m *Module) LocalNameForProvider(p addrs.Provider) string { + if existing, exists := m.ProviderLocalNames[p]; exists { + return existing + } else { + // If there isn't a map entry, fall back to the default: + // Type = LocalName + return p.Type + } +} + +// ProviderForLocalConfig returns the provider FQN for a given LocalProviderConfig +func (m *Module) ProviderForLocalConfig(pc addrs.LocalProviderConfig) addrs.Provider { + if provider, exists := m.ProviderRequirements[pc.String()]; exists { + return provider.Type + } + return addrs.NewLegacyProvider(pc.LocalName) +} diff --git a/configs/module_merge.go b/configs/module_merge.go index 401b1c0a8..6ab19c451 100644 --- a/configs/module_merge.go +++ b/configs/module_merge.go @@ -35,7 +35,7 @@ func (p *Provider) merge(op *Provider) hcl.Diagnostics { return diags } -func mergeProviderVersionConstraints(recv map[string][]VersionConstraint, ovrd []*ProviderRequirement) { +func mergeProviderVersionConstraints(recv map[string]ProviderRequirements, ovrd []*RequiredProvider) { // Any provider name that's mentioned in the override gets nilled out in // our map so that we'll rebuild it below. Any provider not mentioned is // left unchanged. @@ -43,7 +43,8 @@ func mergeProviderVersionConstraints(recv map[string][]VersionConstraint, ovrd [ delete(recv, reqd.Name) } for _, reqd := range ovrd { - recv[reqd.Name] = append(recv[reqd.Name], reqd.Requirement) + fqn := addrs.NewLegacyProvider(reqd.Name) + recv[reqd.Name] = ProviderRequirements{Type: fqn, VersionConstraints: []VersionConstraint{reqd.Requirement}} } } diff --git a/configs/module_merge_test.go b/configs/module_merge_test.go index 6d3b4a2c4..4575339d3 100644 --- a/configs/module_merge_test.go +++ b/configs/module_merge_test.go @@ -3,8 +3,10 @@ package configs import ( "testing" + version "github.com/hashicorp/go-version" "github.com/hashicorp/hcl/v2" "github.com/hashicorp/hcl/v2/gohcl" + "github.com/hashicorp/terraform/addrs" "github.com/zclconf/go-cty/cty" ) @@ -199,3 +201,77 @@ func TestModuleOverrideDynamic(t *testing.T) { } }) } + +func TestMergeProviderVersionConstraints(t *testing.T) { + v1, _ := version.NewConstraint("1.0.0") + vc1 := VersionConstraint{ + Required: v1, + } + v2, _ := version.NewConstraint("2.0.0") + vc2 := VersionConstraint{ + Required: v2, + } + + tests := map[string]struct { + Input map[string]ProviderRequirements + Override []*RequiredProvider + Want map[string]ProviderRequirements + }{ + "basic merge": { + map[string]ProviderRequirements{ + "random": ProviderRequirements{ + Type: addrs.Provider{Type: "random"}, + VersionConstraints: []VersionConstraint{}, + }, + }, + []*RequiredProvider{ + &RequiredProvider{ + Name: "null", + Requirement: VersionConstraint{}, + }, + }, + map[string]ProviderRequirements{ + "random": ProviderRequirements{ + Type: addrs.Provider{Type: "random"}, + VersionConstraints: []VersionConstraint{}, + }, + "null": ProviderRequirements{ + Type: addrs.NewLegacyProvider("null"), + VersionConstraints: []VersionConstraint{ + VersionConstraint{ + Required: version.Constraints(nil), + DeclRange: hcl.Range{}, + }, + }, + }, + }, + }, + "override version constraint": { + map[string]ProviderRequirements{ + "random": ProviderRequirements{ + Type: addrs.Provider{Type: "random"}, + VersionConstraints: []VersionConstraint{vc1}, + }, + }, + []*RequiredProvider{ + &RequiredProvider{ + Name: "random", + Requirement: vc2, + }, + }, + map[string]ProviderRequirements{ + "random": ProviderRequirements{ + Type: addrs.NewLegacyProvider("random"), + VersionConstraints: 
[]VersionConstraint{vc2}, + }, + }, + }, + } + + for name, test := range tests { + t.Run(name, func(t *testing.T) { + mergeProviderVersionConstraints(test.Input, test.Override) + assertResultDeepEqual(t, test.Input, test.Want) + }) + } +} diff --git a/configs/module_test.go b/configs/module_test.go new file mode 100644 index 000000000..1898bdfe2 --- /dev/null +++ b/configs/module_test.go @@ -0,0 +1,34 @@ +package configs + +import ( + "testing" + + "github.com/hashicorp/terraform/addrs" +) + +// TestNewModule_provider_local_name exercises module.gatherProviderLocalNames() +func TestNewModule_provider_local_name(t *testing.T) { + mod, diags := testModuleFromDir("testdata/providers-explicit-fqn") + if diags.HasErrors() { + t.Fatal(diags.Error()) + } + + // FIXME: while the provider source is set to "foo/test", terraform + // currently assumes everything is a legacy provider and the localname and + // type match. This test will be updated when provider source is fully + // implemented. + p := addrs.NewLegacyProvider("foo-test") + if name, exists := mod.ProviderLocalNames[p]; !exists { + t.Fatal("provider FQN foo/test not found") + } else { + if name != "foo-test" { + t.Fatalf("provider localname mismatch: got %s, want foo-test", name) + } + } + + // ensure the reverse lookup (fqn to local name) works as well + localName := mod.LocalNameForProvider(p) + if localName != "foo-test" { + t.Fatal("provider local name not found") + } +} diff --git a/configs/named_values.go b/configs/named_values.go index 81f6093e3..128bd2787 100644 --- a/configs/named_values.go +++ b/configs/named_values.go @@ -2,6 +2,7 @@ package configs import ( "fmt" + "unicode" "github.com/hashicorp/hcl/v2" "github.com/hashicorp/hcl/v2/ext/typeexpr" @@ -14,7 +15,7 @@ import ( ) // A consistent detail message for all "not a valid identifier" diagnostics. -const badIdentifierDetail = "A name must start with a letter and may contain only letters, digits, underscores, and dashes." +const badIdentifierDetail = "A name must start with a letter or underscore and may contain only letters, digits, underscores, and dashes." // Variable represents a "variable" block in a module or file. type Variable struct { @@ -23,6 +24,7 @@ type Variable struct { Default cty.Value Type cty.Type ParsingMode VariableParsingMode + Validations []*VariableValidation DescriptionSet bool @@ -119,6 +121,21 @@ func decodeVariableBlock(block *hcl.Block, override bool) (*Variable, hcl.Diagno v.Default = val } + for _, block := range content.Blocks { + switch block.Type { + + case "validation": + vv, moreDiags := decodeVariableValidationBlock(v.Name, block, override) + diags = append(diags, moreDiags...) + v.Validations = append(v.Validations, vv) + + default: + // The above cases should be exhaustive for all block types + // defined in variableBlockSchema + panic(fmt.Sprintf("unhandled block type %q", block.Type)) + } + } + return v, diags } @@ -138,10 +155,28 @@ func decodeVariableType(expr hcl.Expression) (cty.Type, VariableParsingMode, hcl str := val.AsString() switch str { case "string": + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagWarning, + Summary: "Quoted type constraints are deprecated", + Detail: "Terraform 0.11 and earlier required type constraints to be given in quotes, but that form is now deprecated and will be removed in a future version of Terraform. 
To silence this warning, remove the quotes around \"string\".", + Subject: expr.Range().Ptr(), + }) return cty.String, VariableParseLiteral, diags case "list": + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagWarning, + Summary: "Quoted type constraints are deprecated", + Detail: "Terraform 0.11 and earlier required type constraints to be given in quotes, but that form is now deprecated and will be removed in a future version of Terraform. To silence this warning, remove the quotes around \"list\" and write list(string) instead to explicitly indicate that the list elements are strings.", + Subject: expr.Range().Ptr(), + }) return cty.List(cty.DynamicPseudoType), VariableParseHCL, diags case "map": + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagWarning, + Summary: "Quoted type constraints are deprecated", + Detail: "Terraform 0.11 and earlier required type constraints to be given in quotes, but that form is now deprecated and will be removed in a future version of Terraform. To silence this warning, remove the quotes around \"map\" and write map(string) instead to explicitly indicate that the map elements are strings.", + Subject: expr.Range().Ptr(), + }) return cty.Map(cty.DynamicPseudoType), VariableParseHCL, diags default: return cty.DynamicPseudoType, VariableParseHCL, hcl.Diagnostics{{ @@ -179,6 +214,12 @@ func decodeVariableType(expr hcl.Expression) (cty.Type, VariableParsingMode, hcl } } +// Required returns true if this variable is required to be set by the caller, +// or false if there is a default value that will be used when it isn't set. +func (v *Variable) Required() bool { + return v.Default == cty.NilVal +} + // VariableParsingMode defines how values of a particular variable given by // text-only mechanisms (command line arguments and environment variables) // should be parsed to produce the final value. @@ -226,6 +267,157 @@ func (m VariableParsingMode) Parse(name, value string) (cty.Value, hcl.Diagnosti } } +// VariableValidation represents a configuration-defined validation rule +// for a particular input variable, given as a "validation" block inside +// a "variable" block. +type VariableValidation struct { + // Condition is an expression that refers to the variable being tested + // and contains no other references. The expression must return true + // to indicate that the value is valid or false to indicate that it is + // invalid. If the expression produces an error, that's considered a bug + // in the module defining the validation rule, not an error in the caller. + Condition hcl.Expression + + // ErrorMessage is one or more full sentences, which would need to be in + // English for consistency with the rest of the error message output but + // can in practice be in any language as long as it ends with a period. + // The message should describe what is required for the condition to return + // true in a way that would make sense to a caller of the module. + ErrorMessage string + + DeclRange hcl.Range +} + +func decodeVariableValidationBlock(varName string, block *hcl.Block, override bool) (*VariableValidation, hcl.Diagnostics) { + var diags hcl.Diagnostics + vv := &VariableValidation{ + DeclRange: block.DefRange, + } + + if override { + // For now we'll just forbid overriding validation blocks, to simplify + // the initial design. If we can find a clear use-case for overriding + // validations in override files and there's a way to define it that + // isn't confusing then we could relax this. 
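+		// (In practice this means validation rules can be added or changed
+		// only in a variable's primary declaration, never from an override
+		// file.)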
+		diags = diags.Append(&hcl.Diagnostic{
+			Severity: hcl.DiagError,
+			Summary:  "Can't override variable validation rules",
+			Detail:   "Variable \"validation\" blocks cannot be used in override files.",
+			Subject:  vv.DeclRange.Ptr(),
+		})
+		return vv, diags
+	}
+
+	content, moreDiags := block.Body.Content(variableValidationBlockSchema)
+	diags = append(diags, moreDiags...)
+
+	if attr, exists := content.Attributes["condition"]; exists {
+		vv.Condition = attr.Expr
+
+		// The validation condition can only refer to the variable itself,
+		// to ensure that the variable declaration can't create additional
+		// edges in the dependency graph.
+		goodRefs := 0
+		for _, traversal := range vv.Condition.Variables() {
+			ref, moreDiags := addrs.ParseRef(traversal)
+			if !moreDiags.HasErrors() {
+				if addr, ok := ref.Subject.(addrs.InputVariable); ok {
+					if addr.Name == varName {
+						goodRefs++
+						continue // Reference is valid
+					}
+				}
+			}
+			// If we fall out here then the reference is invalid.
+			diags = diags.Append(&hcl.Diagnostic{
+				Severity: hcl.DiagError,
+				Summary:  "Invalid reference in variable validation",
+				Detail:   fmt.Sprintf("The condition for variable %q can only refer to the variable itself, using var.%s.", varName, varName),
+				Subject:  traversal.SourceRange().Ptr(),
+			})
+		}
+		if goodRefs < 1 {
+			diags = diags.Append(&hcl.Diagnostic{
+				Severity: hcl.DiagError,
+				Summary:  "Invalid variable validation condition",
+				Detail:   fmt.Sprintf("The condition for variable %q must refer to var.%s in order to test incoming values.", varName, varName),
+				Subject:  attr.Expr.Range().Ptr(),
+			})
+		}
+	}
+
+	if attr, exists := content.Attributes["error_message"]; exists {
+		moreDiags := gohcl.DecodeExpression(attr.Expr, nil, &vv.ErrorMessage)
+		diags = append(diags, moreDiags...)
+		if !moreDiags.HasErrors() {
+			const errSummary = "Invalid validation error message"
+			switch {
+			case vv.ErrorMessage == "":
+				diags = diags.Append(&hcl.Diagnostic{
+					Severity: hcl.DiagError,
+					Summary:  errSummary,
+					Detail:   "An empty string is not a valid nor useful error message.",
+					Subject:  attr.Expr.Range().Ptr(),
+				})
+			case !looksLikeSentences(vv.ErrorMessage):
+				// Because we're going to include this string verbatim as part
+				// of a bigger error message written in our usual style in
+				// English, we'll require the given error message to conform
+				// to that. We might relax this in future if e.g. we start
+				// presenting these error messages in a different way, or if
+				// Terraform starts supporting producing error messages in
+				// other human languages, etc.
+				// For pragmatism we also allow sentences ending with
+				// exclamation points, but we don't mention it explicitly here
+				// because that's not really consistent with the Terraform UI
+				// writing style.
+				diags = diags.Append(&hcl.Diagnostic{
+					Severity: hcl.DiagError,
+					Summary:  errSummary,
+					Detail:   "Validation error message must be at least one full English sentence starting with an uppercase letter and ending with a period or question mark.",
+					Subject:  attr.Expr.Range().Ptr(),
+				})
+			}
+		}
+	}
+
+	return vv, diags
+}
+
+// looksLikeSentences is a simple heuristic that encourages writing error
+// messages that will be presentable when included as part of a larger
+// Terraform error diagnostic whose other text is written in the Terraform
+// UI writing style.
+//
+// This is intentionally not a very strong validation since we're assuming
+// that module authors want to write good messages and might just need a nudge
+// about Terraform's specific style, rather than that they are going to try
+// to work around these rules to write a lower-quality message.
func looksLikeSentences(s string) bool {
+	if len(s) < 1 {
+		return false
+	}
+	runes := []rune(s) // HCL guarantees that all strings are valid UTF-8
+	first := runes[0]
+	last := runes[len(runes)-1]
+
+	// If the first rune is a letter then it must be an uppercase letter.
+	// (This will only see the first rune in a multi-rune combining sequence,
+	// but the first rune is generally the letter if any are, and if not then
+	// we'll just ignore it because we're primarily expecting English messages
+	// right now anyway, for consistency with all of Terraform's other output.)
+	if unicode.IsLetter(first) && !unicode.IsUpper(first) {
+		return false
+	}
+
+	// The string must be at least one full sentence, which implies having
+	// sentence-ending punctuation.
+	// (This assumes that if a sentence ends with quotes then the period
+	// will be outside the quotes, which is consistent with Terraform's UI
+	// writing style.)
+	return last == '.' || last == '?' || last == '!'
+}
+
 // Output represents an "output" block in a module or file.
 type Output struct {
 	Name        string
@@ -343,6 +535,24 @@ var variableBlockSchema = &hcl.BodySchema{
 			Name: "type",
 		},
 	},
+	Blocks: []hcl.BlockHeaderSchema{
+		{
+			Type: "validation",
+		},
+	},
+}
+
+var variableValidationBlockSchema = &hcl.BodySchema{
+	Attributes: []hcl.AttributeSchema{
+		{
+			Name:     "condition",
+			Required: true,
+		},
+		{
+			Name:     "error_message",
+			Required: true,
+		},
+	},
 }
 
 var outputBlockSchema = &hcl.BodySchema{
diff --git a/configs/parser_config.go b/configs/parser_config.go
index d4cbc945c..e8f94721d 100644
--- a/configs/parser_config.go
+++ b/configs/parser_config.go
@@ -42,6 +42,12 @@ func (p *Parser) loadConfigFile(path string, override bool) (*File, hcl.Diagnost
 	file.CoreVersionConstraints, reqDiags = sniffCoreVersionRequirements(body)
 	diags = append(diags, reqDiags...)
 
+	// We'll load the experiments first because other decoding logic in the
+	// loop below might depend on these experiments.
+	var expDiags hcl.Diagnostics
+	file.ActiveExperiments, expDiags = sniffActiveExperiments(body)
+	diags = append(diags, expDiags...)
+
 	content, contentDiags := body.Content(configFileSchema)
 	diags = append(diags, contentDiags...)
 
@@ -52,8 +58,9 @@ func (p *Parser) loadConfigFile(path string, override bool) (*File, hcl.Diagnost
 			content, contentDiags := block.Body.Content(terraformBlockSchema)
 			diags = append(diags, contentDiags...)
 
-			// We ignore the "terraform_version" attribute here because
-			// sniffCoreVersionRequirements already dealt with that above.
+			// We ignore the "terraform_version" and "experiments" attributes
+			// here because sniffCoreVersionRequirements and
+			// sniffActiveExperiments already dealt with those above.
 			for _, innerBlock := range content.Blocks {
 				switch innerBlock.Type {
 
@@ -68,7 +75,7 @@ func (p *Parser) loadConfigFile(path string, override bool) (*File, hcl.Diagnost
 				case "required_providers":
 					reqs, reqsDiags := decodeRequiredProvidersBlock(innerBlock)
 					diags = append(diags, reqsDiags...)
-					file.ProviderRequirements = append(file.ProviderRequirements, reqs...)
+					file.RequiredProviders = append(file.RequiredProviders, reqs...)
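+					// (A single file may contain multiple required_providers
+					// blocks, so entries accumulate across them here.)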
default: // Should never happen because the above cases should be exhaustive @@ -148,7 +155,7 @@ func (p *Parser) loadConfigFile(path string, override bool) (*File, hcl.Diagnost // able to find, but may return no constraints at all if the given body is // so invalid that it cannot be decoded at all. func sniffCoreVersionRequirements(body hcl.Body) ([]VersionConstraint, hcl.Diagnostics) { - rootContent, _, diags := body.PartialContent(configFileVersionSniffRootSchema) + rootContent, _, diags := body.PartialContent(configFileTerraformBlockSniffRootSchema) var constraints []VersionConstraint @@ -213,9 +220,8 @@ var configFileSchema = &hcl.BodySchema{ // a configuration file. var terraformBlockSchema = &hcl.BodySchema{ Attributes: []hcl.AttributeSchema{ - { - Name: "required_version", - }, + {Name: "required_version"}, + {Name: "experiments"}, }, Blocks: []hcl.BlockHeaderSchema{ { @@ -228,8 +234,9 @@ var terraformBlockSchema = &hcl.BodySchema{ }, } -// configFileVersionSniffRootSchema is a schema for sniffCoreVersionRequirements -var configFileVersionSniffRootSchema = &hcl.BodySchema{ +// configFileTerraformBlockSniffRootSchema is a schema for +// sniffCoreVersionRequirements and sniffActiveExperiments. +var configFileTerraformBlockSniffRootSchema = &hcl.BodySchema{ Blocks: []hcl.BlockHeaderSchema{ { Type: "terraform", @@ -245,3 +252,13 @@ var configFileVersionSniffBlockSchema = &hcl.BodySchema{ }, }, } + +// configFileExperimentsSniffBlockSchema is a schema for sniffActiveExperiments, +// to decode a single attribute from inside a "terraform" block. +var configFileExperimentsSniffBlockSchema = &hcl.BodySchema{ + Attributes: []hcl.AttributeSchema{ + { + Name: "experiments", + }, + }, +} diff --git a/configs/parser_config_dir.go b/configs/parser_config_dir.go index afdd69833..2923af93a 100644 --- a/configs/parser_config_dir.go +++ b/configs/parser_config_dir.go @@ -154,9 +154,9 @@ func IsEmptyDir(path string) (bool, error) { } p := NewParser(nil) - fs, os, err := p.dirFiles(path) - if err != nil { - return false, err + fs, os, diags := p.dirFiles(path) + if diags.HasErrors() { + return false, diags } return len(fs) == 0 && len(os) == 0, nil diff --git a/configs/parser_config_test.go b/configs/parser_config_test.go index 22272cf07..7832914c9 100644 --- a/configs/parser_config_test.go +++ b/configs/parser_config_test.go @@ -1,10 +1,15 @@ package configs import ( + "bufio" + "bytes" "io/ioutil" "path/filepath" + "strings" "testing" + "github.com/google/go-cmp/cmp" + "github.com/hashicorp/hcl/v2" ) @@ -34,8 +39,8 @@ func TestParserLoadConfigFileSuccess(t *testing.T) { }) _, diags := parser.LoadConfigFile(name) - if diags.HasErrors() { - t.Errorf("unexpected error diagnostics") + if len(diags) != 0 { + t.Errorf("unexpected diagnostics") for _, diag := range diags { t.Logf("- %s", diag) } @@ -124,16 +129,6 @@ func TestParserLoadConfigFileFailureMessages(t *testing.T) { hcl.DiagError, "Unsuitable value type", }, - { - "valid-files/resources-ignorechanges-all-legacy.tf", - hcl.DiagWarning, - "Deprecated ignore_changes wildcard", - }, - { - "valid-files/resources-ignorechanges-all-legacy.tf.json", - hcl.DiagWarning, - "Deprecated ignore_changes wildcard", - }, } for _, test := range tests { @@ -164,3 +159,127 @@ func TestParserLoadConfigFileFailureMessages(t *testing.T) { }) } } + +// TestParseLoadConfigFileWarning is a test that verifies files from +// testdata/warning-files produce particular warnings. 
+//
+// This test does not verify that reading these files produces the correct
+// file element contents in spite of those warnings. More detailed assertions
+// may be made on some subset of these configuration files in other tests.
+func TestParserLoadConfigFileWarning(t *testing.T) {
+	files, err := ioutil.ReadDir("testdata/warning-files")
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	for _, info := range files {
+		name := info.Name()
+		t.Run(name, func(t *testing.T) {
+			src, err := ioutil.ReadFile(filepath.Join("testdata/warning-files", name))
+			if err != nil {
+				t.Fatal(err)
+			}
+
+			// First we'll scan the file to see what warnings are expected.
+			// That's declared inside the files themselves by using the
+			// string "WARNING: " somewhere on each line that is expected
+			// to produce a warning, followed by the expected warning summary
+			// text. A single-line comment (with #) is the main way to do that.
+			const marker = "WARNING: "
+			sc := bufio.NewScanner(bytes.NewReader(src))
+			wantWarnings := make(map[int]string)
+			lineNum := 1
+			for sc.Scan() {
+				lineText := sc.Text()
+				if idx := strings.Index(lineText, marker); idx != -1 {
+					summaryText := lineText[idx+len(marker):]
+					wantWarnings[lineNum] = summaryText
+				}
+				lineNum++
+			}
+
+			parser := testParser(map[string]string{
+				name: string(src),
+			})
+
+			_, diags := parser.LoadConfigFile(name)
+			if diags.HasErrors() {
+				t.Errorf("unexpected error diagnostics")
+				for _, diag := range diags {
+					t.Logf("- %s", diag)
+				}
+			}
+
+			gotWarnings := make(map[int]string)
+			for _, diag := range diags {
+				if diag.Severity != hcl.DiagWarning || diag.Subject == nil {
+					continue
+				}
+				gotWarnings[diag.Subject.Start.Line] = diag.Summary
+			}
+
+			if diff := cmp.Diff(wantWarnings, gotWarnings); diff != "" {
+				t.Errorf("wrong warnings\n%s", diff)
+			}
+		})
+	}
+}
+
+// TestParseLoadConfigFileError is a test that verifies files from
+// testdata/error-files produce particular errors.
+//
+// This test does not verify that reading these files produces the correct
+// file element contents in spite of those errors. More detailed assertions
+// may be made on some subset of these configuration files in other tests.
+func TestParserLoadConfigFileError(t *testing.T) {
+	files, err := ioutil.ReadDir("testdata/error-files")
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	for _, info := range files {
+		name := info.Name()
+		t.Run(name, func(t *testing.T) {
+			src, err := ioutil.ReadFile(filepath.Join("testdata/error-files", name))
+			if err != nil {
+				t.Fatal(err)
+			}
+
+			// First we'll scan the file to see what errors are expected.
+			// That's declared inside the files themselves by using the
+			// string "ERROR: " somewhere on each line that is expected
+			// to produce an error, followed by the expected error summary
+			// text. A single-line comment (with #) is the main way to do that.
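+			// For example, a fixture line such as
+			//     user = local.user # ERROR: Invalid reference from destroy provisioner
+			// records that summary in wantErrors under the line's 1-based number.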
+ const marker = "ERROR: " + sc := bufio.NewScanner(bytes.NewReader(src)) + wantErrors := make(map[int]string) + lineNum := 1 + for sc.Scan() { + lineText := sc.Text() + if idx := strings.Index(lineText, marker); idx != -1 { + summaryText := lineText[idx+len(marker):] + wantErrors[lineNum] = summaryText + } + lineNum++ + } + + parser := testParser(map[string]string{ + name: string(src), + }) + + _, diags := parser.LoadConfigFile(name) + + gotErrors := make(map[int]string) + for _, diag := range diags { + if diag.Severity != hcl.DiagError || diag.Subject == nil { + continue + } + gotErrors[diag.Subject.Start.Line] = diag.Summary + } + + if diff := cmp.Diff(wantErrors, gotErrors); diff != "" { + t.Errorf("wrong errors\n%s", diff) + } + }) + } +} diff --git a/configs/provider.go b/configs/provider.go index 30a062940..0dd187045 100644 --- a/configs/provider.go +++ b/configs/provider.go @@ -8,6 +8,7 @@ import ( "github.com/hashicorp/hcl/v2/hclsyntax" "github.com/hashicorp/terraform/addrs" + "github.com/hashicorp/terraform/tfdiags" ) // Provider represents a "provider" block in a module or file. A provider @@ -27,7 +28,17 @@ type Provider struct { } func decodeProviderBlock(block *hcl.Block) (*Provider, hcl.Diagnostics) { - content, config, diags := block.Body.PartialContent(providerBlockSchema) + var diags hcl.Diagnostics + + // Produce deprecation messages for any pre-0.12-style + // single-interpolation-only expressions. We do this up front here because + // then we can also catch instances inside special blocks like "connection", + // before PartialContent extracts them. + moreDiags := warnForDeprecatedInterpolationsInBody(block.Body) + diags = append(diags, moreDiags...) + + content, config, moreDiags := block.Body.PartialContent(providerBlockSchema) + diags = append(diags, moreDiags...) provider := &Provider{ Name: block.Labels[0], @@ -83,10 +94,10 @@ func decodeProviderBlock(block *hcl.Block) (*Provider, hcl.Diagnostics) { // Addr returns the address of the receiving provider configuration, relative // to its containing module. -func (p *Provider) Addr() addrs.ProviderConfig { - return addrs.ProviderConfig{ - Type: p.Name, - Alias: p.Alias, +func (p *Provider) Addr() addrs.LocalProviderConfig { + return addrs.LocalProviderConfig{ + LocalName: p.Name, + Alias: p.Alias, } } @@ -97,28 +108,82 @@ func (p *Provider) moduleUniqueKey() string { return p.Name } -// ProviderRequirement represents a declaration of a dependency on a particular -// provider version without actually configuring that provider. This is used in -// child modules that expect a provider to be passed in from their parent. -type ProviderRequirement struct { - Name string - Requirement VersionConstraint +// ParseProviderConfigCompact parses the given absolute traversal as a relative +// provider address in compact form. The following are examples of traversals +// that can be successfully parsed as compact relative provider configuration +// addresses: +// +// aws +// aws.foo +// +// This function will panic if given a relative traversal. +// +// If the returned diagnostics contains errors then the result value is invalid +// and must not be used. +func ParseProviderConfigCompact(traversal hcl.Traversal) (addrs.LocalProviderConfig, tfdiags.Diagnostics) { + var diags tfdiags.Diagnostics + ret := addrs.LocalProviderConfig{ + LocalName: traversal.RootName(), + } + + if len(traversal) < 2 { + // Just a type name, then. 
+		return ret, diags
+	}
+
+	aliasStep := traversal[1]
+	switch ts := aliasStep.(type) {
+	case hcl.TraverseAttr:
+		ret.Alias = ts.Name
+		return ret, diags
+	default:
+		diags = diags.Append(&hcl.Diagnostic{
+			Severity: hcl.DiagError,
+			Summary:  "Invalid provider configuration address",
+			Detail:   "The provider type name must either stand alone or be followed by an alias name separated with a dot.",
+			Subject:  aliasStep.SourceRange().Ptr(),
+		})
+	}
+
+	if len(traversal) > 2 {
+		diags = diags.Append(&hcl.Diagnostic{
+			Severity: hcl.DiagError,
+			Summary:  "Invalid provider configuration address",
+			Detail:   "Extraneous operators after provider configuration address.",
+			Subject:  traversal[2:].SourceRange().Ptr(),
+		})
+	}
+
+	return ret, diags
 }
 
-func decodeRequiredProvidersBlock(block *hcl.Block) ([]*ProviderRequirement, hcl.Diagnostics) {
-	attrs, diags := block.Body.JustAttributes()
-	var reqs []*ProviderRequirement
-	for name, attr := range attrs {
-		req, reqDiags := decodeVersionConstraint(attr)
-		diags = append(diags, reqDiags...)
-		if !diags.HasErrors() {
-			reqs = append(reqs, &ProviderRequirement{
-				Name:        name,
-				Requirement: req,
-			})
-		}
+// ParseProviderConfigCompactStr is a helper wrapper around ParseProviderConfigCompact
+// that takes a string and parses it with the HCL native syntax traversal parser
+// before interpreting it.
+//
+// This should be used only in specialized situations since it will cause the
+// created references to not have any meaningful source location information.
+// If a reference string is coming from a source that should be identified in
+// error messages then the caller should instead parse it directly using a
+// suitable function from the HCL API and pass the traversal itself to
+// ParseProviderConfigCompact.
+//
+// Error diagnostics are returned if either the parsing fails or the analysis
+// of the traversal fails. There is no way for the caller to distinguish the
+// two kinds of diagnostics programmatically. If error diagnostics are returned
+// then the returned address is invalid.
+func ParseProviderConfigCompactStr(str string) (addrs.LocalProviderConfig, tfdiags.Diagnostics) {
+	var diags tfdiags.Diagnostics
+
+	traversal, parseDiags := hclsyntax.ParseTraversalAbs([]byte(str), "", hcl.Pos{Line: 1, Column: 1})
+	diags = diags.Append(parseDiags)
+	if parseDiags.HasErrors() {
+		return addrs.LocalProviderConfig{}, diags
 	}
-	return reqs, diags
+
+	addr, addrDiags := ParseProviderConfigCompact(traversal)
+	diags = diags.Append(addrDiags)
+	return addr, diags
 }
 
 var providerBlockSchema = &hcl.BodySchema{
diff --git a/configs/provider_requirements.go b/configs/provider_requirements.go
new file mode 100644
index 000000000..7dbcf6ba3
--- /dev/null
+++ b/configs/provider_requirements.go
@@ -0,0 +1,82 @@
+package configs
+
+import (
+	version "github.com/hashicorp/go-version"
+	"github.com/hashicorp/hcl/v2"
+	"github.com/hashicorp/terraform/addrs"
+)
+
+// RequiredProvider represents a declaration of a dependency on a particular
+// provider version without actually configuring that provider. This is used in
+// child modules that expect a provider to be passed in from their parent.
+type RequiredProvider struct {
+	Name        string
+	Source      string // TODO
+	Requirement VersionConstraint
+}
+
+// ProviderRequirements represents merged provider version constraints.
+// VersionConstraints come from terraform.required_providers blocks and provider
+// blocks.
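The decoder below distinguishes the two accepted shapes of a `required_providers` entry by its cty type. As a standalone sketch of that dispatch, using only cty calls that appear in this change (the literal values here are made up):

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// Legacy form: the entry is just a version constraint string.
	legacy := cty.StringVal("~> 1.0")
	fmt.Println(legacy.Type().IsPrimitiveType()) // true

	// Newer form: an object with optional "source" and "version" attributes.
	obj := cty.ObjectVal(map[string]cty.Value{
		"source":  cty.StringVal("mycloud/test"),
		"version": cty.StringVal("2.0.0"),
	})
	fmt.Println(obj.Type().IsObjectType())         // true
	fmt.Println(obj.Type().HasAttribute("source")) // true
	fmt.Println(obj.GetAttr("source").AsString())  // mycloud/test
}
```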
+type ProviderRequirements struct {
+	Type               addrs.Provider
+	VersionConstraints []VersionConstraint
+}
+
+func decodeRequiredProvidersBlock(block *hcl.Block) ([]*RequiredProvider, hcl.Diagnostics) {
+	attrs, diags := block.Body.JustAttributes()
+	var reqs []*RequiredProvider
+	for name, attr := range attrs {
+		expr, err := attr.Expr.Value(nil)
+		if err != nil {
+			diags = append(diags, err...)
+		}
+
+		switch {
+		case expr.Type().IsPrimitiveType():
+			vc, reqDiags := decodeVersionConstraint(attr)
+			diags = append(diags, reqDiags...)
+			reqs = append(reqs, &RequiredProvider{
+				Name:        name,
+				Requirement: vc,
+			})
+		case expr.Type().IsObjectType():
+			ret := &RequiredProvider{Name: name}
+			if expr.Type().HasAttribute("version") {
+				vc := VersionConstraint{
+					DeclRange: attr.Range,
+				}
+				constraintStr := expr.GetAttr("version").AsString()
+				constraints, err := version.NewConstraint(constraintStr)
+				if err != nil {
+					// NewConstraint doesn't return user-friendly errors, so we'll just
+					// ignore the provided error and produce our own generic one.
+					diags = append(diags, &hcl.Diagnostic{
+						Severity: hcl.DiagError,
+						Summary:  "Invalid version constraint",
+						Detail:   "This string does not use correct version constraint syntax.",
+						Subject:  attr.Expr.Range().Ptr(),
+					})
+				} else {
+					vc.Required = constraints
+					ret.Requirement = vc
+				}
+			}
+			if expr.Type().HasAttribute("source") {
+				ret.Source = expr.GetAttr("source").AsString()
+			}
+			reqs = append(reqs, ret)
+		default:
+			// should not happen
+			diags = append(diags, &hcl.Diagnostic{
+				Severity: hcl.DiagError,
+				Summary:  "Invalid required_providers syntax",
+				Detail:   "required_providers entries must be strings or objects.",
+				Subject:  attr.Expr.Range().Ptr(),
+			})
+			reqs = append(reqs, &RequiredProvider{Name: name})
+			return reqs, diags
+		}
+	}
+	return reqs, diags
+}
diff --git a/configs/provider_requirements_test.go b/configs/provider_requirements_test.go
new file mode 100644
index 000000000..000cb0078
--- /dev/null
+++ b/configs/provider_requirements_test.go
@@ -0,0 +1,191 @@
+package configs
+
+import (
+	"sort"
+	"testing"
+
+	"github.com/google/go-cmp/cmp"
+	"github.com/google/go-cmp/cmp/cmpopts"
+	version "github.com/hashicorp/go-version"
+	"github.com/hashicorp/hcl/v2"
+	"github.com/hashicorp/hcl/v2/hcltest"
+	"github.com/zclconf/go-cty/cty"
+)
+
+var (
+	ignoreUnexported = cmpopts.IgnoreUnexported(version.Constraint{})
+	comparer         = cmp.Comparer(func(x, y RequiredProvider) bool {
+		if x.Name != y.Name {
+			return false
+		}
+		if x.Source != y.Source {
+			return false
+		}
+		if x.Requirement.Required.String() != y.Requirement.Required.String() {
+			return false
+		}
+		return true
+	})
+)
+
+func TestDecodeRequiredProvidersBlock_legacy(t *testing.T) {
+	block := &hcl.Block{
+		Type: "required_providers",
+		Body: hcltest.MockBody(&hcl.BodyContent{
+			Attributes: hcl.Attributes{
+				"default": {
+					Name: "default",
+					Expr: hcltest.MockExprLiteral(cty.StringVal("1.0.0")),
+				},
+			},
+		}),
+	}
+
+	want := &RequiredProvider{
+		Name:        "default",
+		Requirement: testVC("1.0.0"),
+	}
+
+	got, diags := decodeRequiredProvidersBlock(block)
+	if diags.HasErrors() {
+		t.Fatalf("unexpected error")
+	}
+	if len(got) != 1 {
+		t.Fatalf("wrong number of results, got %d, wanted 1", len(got))
+	}
+	if !cmp.Equal(got[0], want, ignoreUnexported, comparer) {
+		t.Fatalf("wrong result:\n %s", cmp.Diff(got[0], want, ignoreUnexported, comparer))
+	}
+}
+
+func TestDecodeRequiredProvidersBlock_provider_source(t *testing.T) {
+	block := &hcl.Block{
+		Type: "required_providers",
+		
Body: hcltest.MockBody(&hcl.BodyContent{
+			Attributes: hcl.Attributes{
+				"my_test": {
+					Name: "my_test",
+					Expr: hcltest.MockExprLiteral(cty.ObjectVal(map[string]cty.Value{
+						"source":  cty.StringVal("mycloud/test"),
+						"version": cty.StringVal("2.0.0"),
+					})),
+				},
+			},
+		}),
+	}
+
+	want := &RequiredProvider{
+		Name:        "my_test",
+		Source:      "mycloud/test",
+		Requirement: testVC("2.0.0"),
+	}
+	got, diags := decodeRequiredProvidersBlock(block)
+	if diags.HasErrors() {
+		t.Fatalf("unexpected error")
+	}
+	if len(got) != 1 {
+		t.Fatalf("wrong number of results, got %d, wanted 1", len(got))
+	}
+	if !cmp.Equal(got[0], want, ignoreUnexported, comparer) {
+		t.Fatalf("wrong result:\n %s", cmp.Diff(got[0], want, ignoreUnexported, comparer))
+	}
+}
+
+func TestDecodeRequiredProvidersBlock_mixed(t *testing.T) {
+	block := &hcl.Block{
+		Type: "required_providers",
+		Body: hcltest.MockBody(&hcl.BodyContent{
+			Attributes: hcl.Attributes{
+				"legacy": {
+					Name: "legacy",
+					Expr: hcltest.MockExprLiteral(cty.StringVal("1.0.0")),
+				},
+				"my_test": {
+					Name: "my_test",
+					Expr: hcltest.MockExprLiteral(cty.ObjectVal(map[string]cty.Value{
+						"source":  cty.StringVal("mycloud/test"),
+						"version": cty.StringVal("2.0.0"),
+					})),
+				},
+			},
+		}),
+	}
+
+	want := []*RequiredProvider{
+		{
+			Name:        "legacy",
+			Requirement: testVC("1.0.0"),
+		},
+		{
+			Name:        "my_test",
+			Source:      "mycloud/test",
+			Requirement: testVC("2.0.0"),
+		},
+	}
+
+	got, diags := decodeRequiredProvidersBlock(block)
+
+	sort.SliceStable(got, func(i, j int) bool {
+		return got[i].Name < got[j].Name
+	})
+
+	if diags.HasErrors() {
+		t.Fatalf("unexpected error")
+	}
+	if len(got) != 2 {
+		t.Fatalf("wrong number of results, got %d, wanted 2", len(got))
+	}
+	for i, rp := range want {
+		if !cmp.Equal(got[i], rp, ignoreUnexported, comparer) {
+			t.Fatalf("wrong result:\n %s", cmp.Diff(got[i], rp, ignoreUnexported, comparer))
+		}
+	}
+}
+
+func TestDecodeRequiredProvidersBlock_version_error(t *testing.T) {
+	block := &hcl.Block{
+		Type: "required_providers",
+		Body: hcltest.MockBody(&hcl.BodyContent{
+			Attributes: hcl.Attributes{
+				"my_test": {
+					Name: "my_test",
+					Expr: hcltest.MockExprLiteral(cty.ObjectVal(map[string]cty.Value{
+						"source":  cty.StringVal("mycloud/test"),
+						"version": cty.StringVal("invalid"),
+					})),
+				},
+			},
+		}),
+	}
+
+	want := []*RequiredProvider{
+		{
+			Name:   "my_test",
+			Source: "mycloud/test",
+		},
+	}
+
+	got, diags := decodeRequiredProvidersBlock(block)
+	if !diags.HasErrors() {
+		t.Fatalf("expected error, got success")
+	} else {
+		t.Log(diags[0].Summary)
+	}
+	if len(got) != 1 {
+		t.Fatalf("wrong number of results, got %d, wanted 1", len(got))
+	}
+	for i, rp := range want {
+		if !cmp.Equal(got[i], rp, ignoreUnexported, comparer) {
+			t.Fatalf("wrong result:\n %s", cmp.Diff(got[i], rp, ignoreUnexported, comparer))
+		}
+	}
+}
+
+func testVC(ver string) VersionConstraint {
+	constraint, _ := version.NewConstraint(ver)
+	return VersionConstraint{
+		Required:  constraint,
+		DeclRange: hcl.Range{},
+	}
+}
diff --git a/configs/provider_test.go b/configs/provider_test.go
index 625108759..7aefd8f4e 100644
--- a/configs/provider_test.go
+++ b/configs/provider_test.go
@@ -3,6 +3,11 @@ package configs
 import (
 	"io/ioutil"
 	"testing"
+
+	"github.com/go-test/deep"
+	"github.com/hashicorp/hcl/v2"
+	"github.com/hashicorp/hcl/v2/hclsyntax"
+	"github.com/hashicorp/terraform/addrs"
 )
 
 func TestProviderReservedNames(t *testing.T) {
@@ -24,3 +29,66 @@ func TestProviderReservedNames(t *testing.T) {
 		`config.tf:13,3-9: Reserved argument name in provider 
block; The provider argument name "source" is reserved for use by Terraform in a future version.`, }) } + +func TestParseProviderConfigCompact(t *testing.T) { + tests := []struct { + Input string + Want addrs.LocalProviderConfig + WantDiag string + }{ + { + `aws`, + addrs.LocalProviderConfig{ + LocalName: "aws", + }, + ``, + }, + { + `aws.foo`, + addrs.LocalProviderConfig{ + LocalName: "aws", + Alias: "foo", + }, + ``, + }, + { + `aws["foo"]`, + addrs.LocalProviderConfig{}, + `The provider type name must either stand alone or be followed by an alias name separated with a dot.`, + }, + } + + for _, test := range tests { + t.Run(test.Input, func(t *testing.T) { + traversal, parseDiags := hclsyntax.ParseTraversalAbs([]byte(test.Input), "", hcl.Pos{}) + if len(parseDiags) != 0 { + t.Errorf("unexpected diagnostics during parse") + for _, diag := range parseDiags { + t.Logf("- %s", diag) + } + return + } + + got, diags := ParseProviderConfigCompact(traversal) + + if test.WantDiag != "" { + if len(diags) != 1 { + t.Fatalf("got %d diagnostics; want 1", len(diags)) + } + gotDetail := diags[0].Description().Detail + if gotDetail != test.WantDiag { + t.Fatalf("wrong diagnostic detail\ngot: %s\nwant: %s", gotDetail, test.WantDiag) + } + return + } else { + if len(diags) != 0 { + t.Fatalf("got %d diagnostics; want 0", len(diags)) + } + } + + for _, problem := range deep.Equal(got, test.Want) { + t.Error(problem) + } + }) + } +} diff --git a/configs/provisioner.go b/configs/provisioner.go index 057587a46..769382513 100644 --- a/configs/provisioner.go +++ b/configs/provisioner.go @@ -50,6 +50,11 @@ func decodeProvisionerBlock(block *hcl.Block) (*Provisioner, hcl.Diagnostics) { } } + // destroy provisioners can only refer to self + if pv.When == ProvisionerWhenDestroy { + diags = append(diags, onlySelfRefs(config)...) + } + if attr, exists := content.Attributes["on_failure"]; exists { expr, shimDiags := shimTraversalInString(attr.Expr, true) diags = append(diags, shimDiags...) @@ -85,8 +90,11 @@ func decodeProvisionerBlock(block *hcl.Block) (*Provisioner, hcl.Diagnostics) { } seenConnection = block - //conn, connDiags := decodeConnectionBlock(block) - //diags = append(diags, connDiags...) + // destroy provisioners can only refer to self + if pv.When == ProvisionerWhenDestroy { + diags = append(diags, onlySelfRefs(block.Body)...) + } + pv.Connection = &Connection{ Config: block.Body, DeclRange: block.DefRange, @@ -107,6 +115,52 @@ func decodeProvisionerBlock(block *hcl.Block) (*Provisioner, hcl.Diagnostics) { return pv, diags } +func onlySelfRefs(body hcl.Body) hcl.Diagnostics { + var diags hcl.Diagnostics + + // Provisioners currently do not use any blocks in their configuration. + // Blocks are likely to remain solely for meta parameters, but in the case + // that blocks are supported for provisioners, we will want to extend this + // to find variables in nested blocks. 
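+	// For example, self.private_ip, count.index, each.key, path.module, and
+	// terraform.workspace all pass these checks, while var.foo or
+	// aws_instance.web.id would be reported as invalid.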
+ attrs, _ := body.JustAttributes() + for _, attr := range attrs { + for _, v := range attr.Expr.Variables() { + valid := false + switch v.RootName() { + case "self", "path", "terraform": + valid = true + case "count": + // count must use "index" + if len(v) == 2 { + if t, ok := v[1].(hcl.TraverseAttr); ok && t.Name == "index" { + valid = true + } + } + + case "each": + if len(v) == 2 { + if t, ok := v[1].(hcl.TraverseAttr); ok && t.Name == "key" { + valid = true + } + } + } + + if !valid { + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Invalid reference from destroy provisioner", + Detail: "Destroy-time provisioners and their connection configurations may only " + + "reference attributes of the related resource, via 'self', 'count.index', " + + "or 'each.key'.\n\nReferences to other resources during the destroy phase " + + "can cause dependency cycles and interact poorly with create_before_destroy.", + Subject: attr.Expr.Range().Ptr(), + }) + } + } + } + return diags +} + // Connection represents a "connection" block when used within either a // "resource" or "provisioner" block in a module or file. type Connection struct { @@ -118,7 +172,7 @@ type Connection struct { // ProvisionerWhen is an enum for valid values for when to run provisioners. type ProvisionerWhen int -//go:generate stringer -type ProvisionerWhen +//go:generate go run golang.org/x/tools/cmd/stringer -type ProvisionerWhen const ( ProvisionerWhenInvalid ProvisionerWhen = iota @@ -130,7 +184,7 @@ const ( // for provisioners. type ProvisionerOnFailure int -//go:generate stringer -type ProvisionerOnFailure +//go:generate go run golang.org/x/tools/cmd/stringer -type ProvisionerOnFailure const ( ProvisionerOnFailureInvalid ProvisionerOnFailure = iota diff --git a/configs/resource.go b/configs/resource.go index d63fff4a4..ce7a41f87 100644 --- a/configs/resource.go +++ b/configs/resource.go @@ -64,18 +64,29 @@ func (r *Resource) Addr() addrs.Resource { // that should be used for this resource. This function implements the // default behavior of extracting the type from the resource type name if // an explicit "provider" argument was not provided. -func (r *Resource) ProviderConfigAddr() addrs.ProviderConfig { +func (r *Resource) ProviderConfigAddr() addrs.LocalProviderConfig { if r.ProviderConfigRef == nil { - return r.Addr().DefaultProviderConfig() + // TODO: This will become incorrect once we move away from legacy + // provider addresses, and we'll need to refactor here so that + // this lookup is on the Module type rather than the Resource + // type and can thus look at the local-to-FQN mapping table + // to find a suitable local name to use here. + fqn := r.Addr().DefaultProvider() + return addrs.LocalProviderConfig{ + // This will panic once non-legacy addresses are in play. 
+ // See the TODO comment above ^^ + LocalName: fqn.LegacyString(), + } } - return addrs.ProviderConfig{ - Type: r.ProviderConfigRef.Name, - Alias: r.ProviderConfigRef.Alias, + return addrs.LocalProviderConfig{ + LocalName: r.ProviderConfigRef.Name, + Alias: r.ProviderConfigRef.Alias, } } func decodeResourceBlock(block *hcl.Block) (*Resource, hcl.Diagnostics) { + var diags hcl.Diagnostics r := &Resource{ Mode: addrs.ManagedResourceMode, Type: block.Labels[0], @@ -85,7 +96,15 @@ func decodeResourceBlock(block *hcl.Block) (*Resource, hcl.Diagnostics) { Managed: &ManagedResource{}, } - content, remain, diags := block.Body.PartialContent(resourceBlockSchema) + // Produce deprecation messages for any pre-0.12-style + // single-interpolation-only expressions. We do this up front here because + // then we can also catch instances inside special blocks like "connection", + // before PartialContent extracts them. + moreDiags := warnForDeprecatedInterpolationsInBody(block.Body) + diags = append(diags, moreDiags...) + + content, remain, moreDiags := block.Body.PartialContent(resourceBlockSchema) + diags = append(diags, moreDiags...) r.Config = remain if !hclsyntax.ValidIdentifier(r.Type) { @@ -264,6 +283,17 @@ func decodeResourceBlock(block *hcl.Block) (*Resource, hcl.Diagnostics) { } } + // Now we can validate the connection block references if there are any destroy provisioners. + // TODO: should we eliminate standalone connection blocks? + if r.Managed.Connection != nil { + for _, p := range r.Managed.Provisioners { + if p.When == ProvisionerWhenDestroy { + diags = append(diags, onlySelfRefs(r.Managed.Connection.Config)...) + break + } + } + } + return r, diags } @@ -425,10 +455,10 @@ func decodeProviderConfigRef(expr hcl.Expression, argName string) (*ProviderConf // // This is a trivial conversion, essentially just discarding the source // location information and keeping just the addressing information. -func (r *ProviderConfigRef) Addr() addrs.ProviderConfig { - return addrs.ProviderConfig{ - Type: r.Name, - Alias: r.Alias, +func (r *ProviderConfigRef) Addr() addrs.LocalProviderConfig { + return addrs.LocalProviderConfig{ + LocalName: r.Name, + Alias: r.Alias, } } diff --git a/configs/testdata/error-files/destroy-provisioners.tf b/configs/testdata/error-files/destroy-provisioners.tf new file mode 100644 index 000000000..4831b5302 --- /dev/null +++ b/configs/testdata/error-files/destroy-provisioners.tf @@ -0,0 +1,44 @@ +locals { + user = "name" +} + +resource "null_resource" "a" { + connection { + host = self.hostname + user = local.user # ERROR: Invalid reference from destroy provisioner + } + + provisioner "remote-exec" { + when = destroy + index = count.index + key = each.key + + // path and terraform values are static, and do not create any + // dependencies. 
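+    // (They are therefore accepted here even though when = destroy
+    // otherwise limits references to self, count.index, and each.key.)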
+    dir       = path.module
+    workspace = terraform.workspace
+  }
+}
+
+resource "null_resource" "b" {
+  connection {
+    host = self.hostname
+    # this is OK since there is no destroy provisioner
+    user = local.user
+  }
+
+  provisioner "remote-exec" {
+  }
+}
+
+resource "null_resource" "c" {
+  provisioner "remote-exec" {
+    when = destroy
+    connection {
+      host = self.hostname
+      user = local.user # ERROR: Invalid reference from destroy provisioner
+    }
+
+    command = "echo ${local.name}" # ERROR: Invalid reference from destroy provisioner
+  }
+}
diff --git a/configs/testdata/experiments/concluded/concluded_experiment.tf b/configs/testdata/experiments/concluded/concluded_experiment.tf
new file mode 100644
index 000000000..331063881
--- /dev/null
+++ b/configs/testdata/experiments/concluded/concluded_experiment.tf
@@ -0,0 +1,3 @@
+terraform {
+  experiments = [concluded]
+}
diff --git a/configs/testdata/experiments/current/current_experiment.tf b/configs/testdata/experiments/current/current_experiment.tf
new file mode 100644
index 000000000..d21b66093
--- /dev/null
+++ b/configs/testdata/experiments/current/current_experiment.tf
@@ -0,0 +1,3 @@
+terraform {
+  experiments = [current]
+}
diff --git a/configs/testdata/experiments/invalid/invalid_experiments.tf b/configs/testdata/experiments/invalid/invalid_experiments.tf
new file mode 100644
index 000000000..d3b242b52
--- /dev/null
+++ b/configs/testdata/experiments/invalid/invalid_experiments.tf
@@ -0,0 +1,3 @@
+terraform {
+  experiments = invalid
+}
diff --git a/configs/testdata/experiments/unknown/unknown_experiment.tf b/configs/testdata/experiments/unknown/unknown_experiment.tf
new file mode 100644
index 000000000..bbe36edf4
--- /dev/null
+++ b/configs/testdata/experiments/unknown/unknown_experiment.tf
@@ -0,0 +1,3 @@
+terraform {
+  experiments = [unknown]
+}
diff --git a/configs/testdata/invalid-files/variable-validation-bad-msg.tf b/configs/testdata/invalid-files/variable-validation-bad-msg.tf
new file mode 100644
index 000000000..37ec496c0
--- /dev/null
+++ b/configs/testdata/invalid-files/variable-validation-bad-msg.tf
@@ -0,0 +1,11 @@
+
+terraform {
+  experiments = [variable_validation]
+}
+
+variable "validation" {
+  validation {
+    condition     = var.validation != 4
+    error_message = "not four" # ERROR: Invalid validation error message
+  }
+}
diff --git a/configs/testdata/invalid-files/variable-validation-condition-badref.tf b/configs/testdata/invalid-files/variable-validation-condition-badref.tf
new file mode 100644
index 000000000..de88ced7a
--- /dev/null
+++ b/configs/testdata/invalid-files/variable-validation-condition-badref.tf
@@ -0,0 +1,15 @@
+
+terraform {
+  experiments = [variable_validation]
+}
+
+locals {
+  foo = 1
+}
+
+variable "validation" {
+  validation {
+    condition     = local.foo == var.validation # ERROR: Invalid reference in variable validation
+    error_message = "Must be five."
+  }
+}
diff --git a/configs/testdata/invalid-files/variable-validation-condition-noref.tf b/configs/testdata/invalid-files/variable-validation-condition-noref.tf
new file mode 100644
index 000000000..e6b217b14
--- /dev/null
+++ b/configs/testdata/invalid-files/variable-validation-condition-noref.tf
@@ -0,0 +1,11 @@
+
+terraform {
+  experiments = [variable_validation]
+}
+
+variable "validation" {
+  validation {
+    condition     = true # ERROR: Invalid variable validation condition
+    error_message = "Must be true."
+ } +} diff --git a/configs/testdata/invalid-modules/variable-validation-without-optin/variable-validation-without-optin.tf b/configs/testdata/invalid-modules/variable-validation-without-optin/variable-validation-without-optin.tf new file mode 100644 index 000000000..655f2cd4e --- /dev/null +++ b/configs/testdata/invalid-modules/variable-validation-without-optin/variable-validation-without-optin.tf @@ -0,0 +1,7 @@ + +variable "validation_without_optin" { + validation { # ERROR: Custom variable validation is experimental + condition = var.validation_without_optin != 4 + error_message = "Must not be four." + } +} diff --git a/configs/testdata/providers-explicit-fqn/root.tf b/configs/testdata/providers-explicit-fqn/root.tf new file mode 100644 index 000000000..153222ce3 --- /dev/null +++ b/configs/testdata/providers-explicit-fqn/root.tf @@ -0,0 +1,8 @@ + +terraform { + required_providers { + foo-test = { + source = "foo/test" + } + } +} diff --git a/configs/testdata/valid-files/references.tf.json b/configs/testdata/valid-files/references.tf.json new file mode 100644 index 000000000..3fe7e0aff --- /dev/null +++ b/configs/testdata/valid-files/references.tf.json @@ -0,0 +1,11 @@ +{ + "//": "The purpose of this test file is to show that we can use template syntax unwrapping to provide complex expressions without generating the deprecation warnings we'd expect for native syntax.", + "resource": { + "null_resource": { + "baz": { + "//": "This particular use of template syntax is redundant, but we permit it because this is the documented way to use more complex expressions in JSON.", + "triggers": "${ {} }" + } + } + } +} diff --git a/configs/testdata/valid-files/resources-dependson-quoted.tf b/configs/testdata/valid-files/resources-dependson-quoted.tf deleted file mode 100644 index 3bf188f19..000000000 --- a/configs/testdata/valid-files/resources-dependson-quoted.tf +++ /dev/null @@ -1,8 +0,0 @@ -resource "aws_security_group" "firewall" { -} - -resource "aws_instance" "web" { - depends_on = [ - "aws_security_group.firewall", - ] -} diff --git a/configs/testdata/valid-files/resources-ignorechanges-all-legacy.tf b/configs/testdata/valid-files/resources-ignorechanges-all-legacy.tf deleted file mode 100644 index 6b5e61a9c..000000000 --- a/configs/testdata/valid-files/resources-ignorechanges-all-legacy.tf +++ /dev/null @@ -1,5 +0,0 @@ -resource "aws_instance" "web" { - lifecycle { - ignore_changes = ["*"] - } -} diff --git a/configs/testdata/valid-files/resources-ignorechanges-all-legacy.tf.json b/configs/testdata/valid-files/resources-ignorechanges-all-legacy.tf.json deleted file mode 100644 index 5502dcd50..000000000 --- a/configs/testdata/valid-files/resources-ignorechanges-all-legacy.tf.json +++ /dev/null @@ -1,11 +0,0 @@ -{ - "resource": { - "aws_instance": { - "web": { - "lifecycle": { - "ignore_changes": ["*"] - } - } - } - } -} diff --git a/configs/testdata/valid-files/resources-ignorechanges-quoted.tf b/configs/testdata/valid-files/resources-ignorechanges-quoted.tf deleted file mode 100644 index cba5be59d..000000000 --- a/configs/testdata/valid-files/resources-ignorechanges-quoted.tf +++ /dev/null @@ -1,7 +0,0 @@ -resource "aws_instance" "web" { - lifecycle { - ignore_changes = [ - "ami", - ] - } -} diff --git a/configs/testdata/valid-files/resources-provisioner-onfailure-quoted.tf b/configs/testdata/valid-files/resources-provisioner-onfailure-quoted.tf deleted file mode 100644 index dcec1eb08..000000000 --- a/configs/testdata/valid-files/resources-provisioner-onfailure-quoted.tf +++ /dev/null 
@@ -1,6 +0,0 @@ -resource "aws_security_group" "firewall" { - provisioner "local-exec" { - command = "echo hello" - on_failure = "continue" - } -} diff --git a/configs/testdata/valid-files/resources-provisioner-when-quoted.tf b/configs/testdata/valid-files/resources-provisioner-when-quoted.tf deleted file mode 100644 index 6a66b085f..000000000 --- a/configs/testdata/valid-files/resources-provisioner-when-quoted.tf +++ /dev/null @@ -1,6 +0,0 @@ -resource "aws_security_group" "firewall" { - provisioner "local-exec" { - command = "echo hello" - when = "destroy" - } -} diff --git a/configs/testdata/valid-files/variable-type-quoted.tf b/configs/testdata/valid-files/variable-type-quoted.tf deleted file mode 100644 index 15db803f2..000000000 --- a/configs/testdata/valid-files/variable-type-quoted.tf +++ /dev/null @@ -1,3 +0,0 @@ -variable "bad_type" { - type = "string" -} diff --git a/configs/testdata/warning-files/depends_on.tf b/configs/testdata/warning-files/depends_on.tf new file mode 100644 index 000000000..17e1bf34a --- /dev/null +++ b/configs/testdata/warning-files/depends_on.tf @@ -0,0 +1,6 @@ +resource "null_resource" "a" { +} + +resource "null_resource" "b" { + depends_on = ["null_resource.a"] # WARNING: Quoted references are deprecated +} diff --git a/configs/testdata/warning-files/ignore_changes.tf b/configs/testdata/warning-files/ignore_changes.tf new file mode 100644 index 000000000..5678fe7bd --- /dev/null +++ b/configs/testdata/warning-files/ignore_changes.tf @@ -0,0 +1,11 @@ +resource "null_resource" "one" { + lifecycle { + ignore_changes = ["triggers"] # WARNING: Quoted references are deprecated + } +} + +resource "null_resource" "all" { + lifecycle { + ignore_changes = ["*"] # WARNING: Deprecated ignore_changes wildcard + } +} diff --git a/configs/testdata/warning-files/provider_ref.tf b/configs/testdata/warning-files/provider_ref.tf new file mode 100644 index 000000000..6f5525ed7 --- /dev/null +++ b/configs/testdata/warning-files/provider_ref.tf @@ -0,0 +1,7 @@ +provider "null" { + alias = "foo" +} + +resource "null_resource" "test" { + provider = "null.foo" # WARNING: Quoted references are deprecated +} diff --git a/configs/testdata/warning-files/provisioner_keyword.tf b/configs/testdata/warning-files/provisioner_keyword.tf new file mode 100644 index 000000000..61fe72bdd --- /dev/null +++ b/configs/testdata/warning-files/provisioner_keyword.tf @@ -0,0 +1,6 @@ +resource "null_resource" "a" { + provisioner "local-exec" { + when = "create" # WARNING: Quoted keywords are deprecated + on_failure = "fail" # WARNING: Quoted keywords are deprecated + } +} diff --git a/configs/testdata/warning-files/redundant_interp.tf b/configs/testdata/warning-files/redundant_interp.tf new file mode 100644 index 000000000..07db23b8c --- /dev/null +++ b/configs/testdata/warning-files/redundant_interp.tf @@ -0,0 +1,36 @@ +# It's redundant to write an expression that is just a single template +# interpolation with another expression inside, like "${foo}", but it +# was required before Terraform v0.12 and so there are lots of existing +# examples out there using that style. +# +# We are generating warnings for that situation in order to guide those +# who are following old examples toward the new idiom. 
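+#
+# Concretely, an attribute like foo = "${var.bar}" can now be written as
+# foo = var.bar; templates mixing literal text with interpolations are not
+# affected by this warning.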
+ +variable "triggers" { + type = "map" # WARNING: Quoted type constraints are deprecated +} + +provider "null" { + foo = "${var.triggers["foo"]}" # WARNING: Interpolation-only expressions are deprecated +} + +resource "null_resource" "a" { + triggers = "${var.triggers}" # WARNING: Interpolation-only expressions are deprecated + + connection { + type = "ssh" + host = "${var.triggers["host"]}" # WARNING: Interpolation-only expressions are deprecated + } + + provisioner "local-exec" { + single = "${var.triggers["greeting"]}" # WARNING: Interpolation-only expressions are deprecated + + # No warning for this one, because there's more than just one interpolation + # in the template. + template = " ${var.triggers["greeting"]} " + + # No warning for this one, because it's embedded inside a more complex + # expression and our check is only for direct assignment to attributes. + wrapped = ["${var.triggers["greeting"]}"] + } +} diff --git a/configs/testdata/warning-files/variable_type_quoted.tf b/configs/testdata/warning-files/variable_type_quoted.tf new file mode 100644 index 000000000..9201ba62e --- /dev/null +++ b/configs/testdata/warning-files/variable_type_quoted.tf @@ -0,0 +1,11 @@ +variable "bad_string" { + type = "string" # WARNING: Quoted type constraints are deprecated +} + +variable "bad_map" { + type = "map" # WARNING: Quoted type constraints are deprecated +} + +variable "bad_list" { + type = "list" # WARNING: Quoted type constraints are deprecated +} diff --git a/configs/testdata/warning-files/variable_validation_experiment.tf b/configs/testdata/warning-files/variable_validation_experiment.tf new file mode 100644 index 000000000..28c285ad2 --- /dev/null +++ b/configs/testdata/warning-files/variable_validation_experiment.tf @@ -0,0 +1,10 @@ +terraform { + experiments = [variable_validation] # WARNING: Experimental feature "variable_validation" is active +} + +variable "validation" { + validation { + condition = var.validation == 5 + error_message = "Must be five." + } +} diff --git a/configs/variable_type_hint.go b/configs/variable_type_hint.go index 458e75e14..c02ad4b55 100644 --- a/configs/variable_type_hint.go +++ b/configs/variable_type_hint.go @@ -19,7 +19,7 @@ package configs // TypeHintMap requires a type that could be converted to an object type VariableTypeHint rune -//go:generate stringer -type VariableTypeHint +//go:generate go run golang.org/x/tools/cmd/stringer -type VariableTypeHint // TypeHintNone indicates the absence of a type hint. Values specified in // ambiguous contexts will be treated as literal strings, as if TypeHintString diff --git a/contrib/api-coverage/aws_api_coverage.rb b/contrib/api-coverage/aws_api_coverage.rb deleted file mode 100644 index 43ff6206b..000000000 --- a/contrib/api-coverage/aws_api_coverage.rb +++ /dev/null @@ -1,49 +0,0 @@ -# -# This script generates CSV output reporting on the API Coverage of Terraform's -# AWS Provider. -# -# In addition to Ruby, it depends on a properly configured Go development -# environment with both terraform and aws-sdk-go present. 
-# - -require 'csv' -require 'json' -require 'pathname' - -module APIs - module Terraform - def self.path - @path ||= Pathname(`go list -f '{{.Dir}}' github.com/hashicorp/terraform`.chomp) - end - - def self.called?(api, op) - `git -C "#{path}" grep "#{api}.*#{op}" -- builtin/providers/aws | wc -l`.chomp.to_i > 0 - end - end - - module AWS - def self.path - @path ||= Pathname(`go list -f '{{.Dir}}' github.com/aws/aws-sdk-go/aws`.chomp).parent - end - - def self.api_json_files - Pathname.glob(path.join('**', '*.normal.json')) - end - - def self.each - api_json_files.each do |api_json_file| - json = JSON.parse(api_json_file.read) - api = api_json_file.dirname.basename - json["operations"].keys.each do |op| - yield api, op - end - end - end - end -end - -csv = CSV.new($stdout) -csv << ["API", "Operation", "Called in Terraform?"] -APIs::AWS.each do |api, op| - csv << [api, op, APIs::Terraform.called?(api, op)] -end diff --git a/dag/dag.go b/dag/dag.go index 77c67eff9..8ca4e910e 100644 --- a/dag/dag.go +++ b/dag/dag.go @@ -29,15 +29,14 @@ func (g *AcyclicGraph) DirectedGraph() Grapher { // Returns a Set that includes every Vertex yielded by walking down from the // provided starting Vertex v. -func (g *AcyclicGraph) Ancestors(v Vertex) (*Set, error) { - s := new(Set) - start := AsVertexList(g.DownEdges(v)) +func (g *AcyclicGraph) Ancestors(v Vertex) (Set, error) { + s := make(Set) memoFunc := func(v Vertex, d int) error { s.Add(v) return nil } - if err := g.DepthFirstWalk(start, memoFunc); err != nil { + if err := g.DepthFirstWalk(g.DownEdges(v), memoFunc); err != nil { return nil, err } @@ -46,15 +45,14 @@ func (g *AcyclicGraph) Ancestors(v Vertex) (*Set, error) { // Returns a Set that includes every Vertex yielded by walking up from the // provided starting Vertex v. -func (g *AcyclicGraph) Descendents(v Vertex) (*Set, error) { - s := new(Set) - start := AsVertexList(g.UpEdges(v)) +func (g *AcyclicGraph) Descendents(v Vertex) (Set, error) { + s := make(Set) memoFunc := func(v Vertex, d int) error { s.Add(v) return nil } - if err := g.ReverseDepthFirstWalk(start, memoFunc); err != nil { + if err := g.ReverseDepthFirstWalk(g.UpEdges(v), memoFunc); err != nil { return nil, err } @@ -102,15 +100,12 @@ func (g *AcyclicGraph) TransitiveReduction() { // v such that the edge (u,v) exists (v is a direct descendant of u). // // For each v-prime reachable from v, remove the edge (u, v-prime). - defer g.debug.BeginOperation("TransitiveReduction", "").End("") - for _, u := range g.Vertices() { uTargets := g.DownEdges(u) - vs := AsVertexList(g.DownEdges(u)) - g.depthFirstWalk(vs, false, func(v Vertex, d int) error { + g.DepthFirstWalk(g.DownEdges(u), func(v Vertex, d int) error { shared := uTargets.Intersection(g.DownEdges(v)) - for _, vPrime := range AsVertexList(shared) { + for _, vPrime := range shared { g.RemoveEdge(BasicEdge(u, vPrime)) } @@ -166,19 +161,16 @@ func (g *AcyclicGraph) Cycles() [][]Vertex { // This will walk nodes in parallel if it can. The resulting diagnostics // contains problems from all graphs visited, in no particular order. 
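For orientation, a minimal usage sketch of the walk API after this change (the string vertices are hypothetical; the WalkFunc signature is the one this package already uses):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/dag"
	"github.com/hashicorp/terraform/tfdiags"
)

func main() {
	var g dag.AcyclicGraph
	g.Add("root")
	g.Add("leaf")
	g.Connect(dag.BasicEdge("root", "leaf"))

	// Walk visits each vertex once its dependencies have been visited,
	// collecting diagnostics from every callback.
	diags := g.Walk(func(v dag.Vertex) tfdiags.Diagnostics {
		fmt.Println("visited:", v)
		return nil
	})
	if diags.HasErrors() {
		fmt.Println(diags.Err())
	}
}
```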
func (g *AcyclicGraph) Walk(cb WalkFunc) tfdiags.Diagnostics { - defer g.debug.BeginOperation(typeWalk, "").End("") - w := &Walker{Callback: cb, Reverse: true} w.Update(g) return w.Wait() } // simple convenience helper for converting a dag.Set to a []Vertex -func AsVertexList(s *Set) []Vertex { - rawList := s.List() - vertexList := make([]Vertex, len(rawList)) - for i, raw := range rawList { - vertexList[i] = raw.(Vertex) +func AsVertexList(s Set) []Vertex { + vertexList := make([]Vertex, 0, len(s)) + for _, raw := range s { + vertexList = append(vertexList, raw.(Vertex)) } return vertexList } @@ -188,21 +180,48 @@ type vertexAtDepth struct { Depth int } -// depthFirstWalk does a depth-first walk of the graph starting from +// DepthFirstWalk does a depth-first walk of the graph starting from // the vertices in start. -func (g *AcyclicGraph) DepthFirstWalk(start []Vertex, f DepthWalkFunc) error { - return g.depthFirstWalk(start, true, f) +func (g *AcyclicGraph) DepthFirstWalk(start Set, f DepthWalkFunc) error { + seen := make(map[Vertex]struct{}) + frontier := make([]*vertexAtDepth, 0, len(start)) + for _, v := range start { + frontier = append(frontier, &vertexAtDepth{ + Vertex: v, + Depth: 0, + }) + } + for len(frontier) > 0 { + // Pop the current vertex + n := len(frontier) + current := frontier[n-1] + frontier = frontier[:n-1] + + // Check if we've seen this already and return... + if _, ok := seen[current.Vertex]; ok { + continue + } + seen[current.Vertex] = struct{}{} + + // Visit the current node + if err := f(current.Vertex, current.Depth); err != nil { + return err + } + + for _, v := range g.DownEdges(current.Vertex) { + frontier = append(frontier, &vertexAtDepth{ + Vertex: v, + Depth: current.Depth + 1, + }) + } + } + + return nil } -// This internal method provides the option of not sorting the vertices during -// the walk, which we use for the Transitive reduction. -// Some configurations can lead to fully-connected subgraphs, which makes our -// transitive reduction algorithm O(n^3). This is still passable for the size -// of our graphs, but the additional n^2 sort operations would make this -// uncomputable in a reasonable amount of time. -func (g *AcyclicGraph) depthFirstWalk(start []Vertex, sorted bool, f DepthWalkFunc) error { - defer g.debug.BeginOperation(typeDepthFirstWalk, "").End("") - +// SortedDepthFirstWalk does a depth-first walk of the graph starting from +// the vertices in start, always iterating the nodes in a consistent order. +func (g *AcyclicGraph) SortedDepthFirstWalk(start []Vertex, f DepthWalkFunc) error { seen := make(map[Vertex]struct{}) frontier := make([]*vertexAtDepth, len(start)) for i, v := range start { @@ -230,10 +249,7 @@ func (g *AcyclicGraph) depthFirstWalk(start []Vertex, sorted bool, f DepthWalkFu // Visit targets of this in a consistent order. targets := AsVertexList(g.DownEdges(current.Vertex)) - - if sorted { - sort.Sort(byVertexName(targets)) - } + sort.Sort(byVertexName(targets)) for _, t := range targets { frontier = append(frontier, &vertexAtDepth{ @@ -246,11 +262,48 @@ func (g *AcyclicGraph) depthFirstWalk(start []Vertex, sorted bool, f DepthWalkFu return nil } -// reverseDepthFirstWalk does a depth-first walk _up_ the graph starting from +// ReverseDepthFirstWalk does a depth-first walk _up_ the graph starting from // the vertices in start. 
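+// Because start is a Set (a map), visit order here is nondeterministic;
+// callers that need a stable order should use SortedReverseDepthFirstWalk
+// below.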
-func (g *AcyclicGraph) ReverseDepthFirstWalk(start []Vertex, f DepthWalkFunc) error {
-	defer g.debug.BeginOperation(typeReverseDepthFirstWalk, "").End("")
+func (g *AcyclicGraph) ReverseDepthFirstWalk(start Set, f DepthWalkFunc) error {
+	seen := make(map[Vertex]struct{})
+	frontier := make([]*vertexAtDepth, 0, len(start))
+	for _, v := range start {
+		frontier = append(frontier, &vertexAtDepth{
+			Vertex: v,
+			Depth:  0,
+		})
+	}
+	for len(frontier) > 0 {
+		// Pop the current vertex
+		n := len(frontier)
+		current := frontier[n-1]
+		frontier = frontier[:n-1]
+		// Check if we've seen this already and return...
+		if _, ok := seen[current.Vertex]; ok {
+			continue
+		}
+		seen[current.Vertex] = struct{}{}
+
+		for _, t := range g.UpEdges(current.Vertex) {
+			frontier = append(frontier, &vertexAtDepth{
+				Vertex: t,
+				Depth:  current.Depth + 1,
+			})
+		}
+
+		// Visit the current node
+		if err := f(current.Vertex, current.Depth); err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+// SortedReverseDepthFirstWalk does a depth-first walk _up_ the graph starting from
+// the vertices in start, always iterating the nodes in a consistent order.
+func (g *AcyclicGraph) SortedReverseDepthFirstWalk(start []Vertex, f DepthWalkFunc) error {
 	seen := make(map[Vertex]struct{})
 	frontier := make([]*vertexAtDepth, len(start))
 	for i, v := range start {
diff --git a/dag/dag_test.go b/dag/dag_test.go
index 222df257e..ebf5537b6 100644
--- a/dag/dag_test.go
+++ b/dag/dag_test.go
@@ -335,6 +335,63 @@ func TestAcyclicGraphWalk_error(t *testing.T) {
 
 }
 
+func BenchmarkDAG(b *testing.B) {
+	for i := 0; i < b.N; i++ {
+		count := 150
+		b.StopTimer()
+		g := &AcyclicGraph{}
+
+		// create 4 layers of fully connected nodes
+		// layer A
+		for i := 0; i < count; i++ {
+			g.Add(fmt.Sprintf("A%d", i))
+		}
+
+		// layer B
+		for i := 0; i < count; i++ {
+			B := fmt.Sprintf("B%d", i)
+			g.Add(B)
+			for j := 0; j < count; j++ {
+				g.Connect(BasicEdge(B, fmt.Sprintf("A%d", j)))
+			}
+		}
+
+		// layer C
+		for i := 0; i < count; i++ {
+			c := fmt.Sprintf("C%d", i)
+			g.Add(c)
+			for j := 0; j < count; j++ {
+				// connect them to previous layers so we have something that requires reduction
+				g.Connect(BasicEdge(c, fmt.Sprintf("A%d", j)))
+				g.Connect(BasicEdge(c, fmt.Sprintf("B%d", j)))
+			}
+		}
+
+		// layer D
+		for i := 0; i < count; i++ {
+			d := fmt.Sprintf("D%d", i)
+			g.Add(d)
+			for j := 0; j < count; j++ {
+				g.Connect(BasicEdge(d, fmt.Sprintf("A%d", j)))
+				g.Connect(BasicEdge(d, fmt.Sprintf("B%d", j)))
+				g.Connect(BasicEdge(d, fmt.Sprintf("C%d", j)))
+			}
+		}
+
+		b.StartTimer()
+		// Find dependencies for every node
+		for _, v := range g.Vertices() {
+			_, err := g.Ancestors(v)
+			if err != nil {
+				b.Fatal(err)
+			}
+		}
+
+		// reduce the final graph
+		g.TransitiveReduction()
+	}
+}
+
 func TestAcyclicGraph_ReverseDepthFirstWalk_WithRemoval(t *testing.T) {
 	var g AcyclicGraph
 	g.Add(1)
@@ -345,7 +402,7 @@ func TestAcyclicGraph_ReverseDepthFirstWalk_WithRemoval(t *testing.T) {
 
 	var visits []Vertex
 	var lock sync.Mutex
-	err := g.ReverseDepthFirstWalk([]Vertex{1}, func(v Vertex, d int) error {
+	err := g.SortedReverseDepthFirstWalk([]Vertex{1}, func(v Vertex, d int) error {
 		lock.Lock()
 		defer lock.Unlock()
 		visits = append(visits, v)
diff --git a/dag/graph.go b/dag/graph.go
index e7517a206..4ce0dbccb 100644
--- a/dag/graph.go
+++ b/dag/graph.go
@@ -2,21 +2,16 @@ package dag
 
 import (
 	"bytes"
-	"encoding/json"
 	"fmt"
-	"io"
 	"sort"
 )
 
 // Graph is used to represent a dependency graph.
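+//
+// A minimal sketch of building one (hypothetical string vertices):
+//
+//    var g Graph
+//    g.Add("a")
+//    g.Add("b")
+//    g.Connect(BasicEdge("a", "b"))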
type Graph struct { - vertices *Set - edges *Set - downEdges map[interface{}]*Set - upEdges map[interface{}]*Set - - // JSON encoder for recording debug information - debug *encoder + vertices Set + edges Set + downEdges map[interface{}]Set + upEdges map[interface{}]Set } // Subgrapher allows a Vertex to be a Graph itself, by returning a Grapher. @@ -47,10 +42,9 @@ func (g *Graph) DirectedGraph() Grapher { // Vertices returns the list of all the vertices in the graph. func (g *Graph) Vertices() []Vertex { - list := g.vertices.List() - result := make([]Vertex, len(list)) - for i, v := range list { - result[i] = v.(Vertex) + result := make([]Vertex, 0, len(g.vertices)) + for _, v := range g.vertices { + result = append(result, v.(Vertex)) } return result @@ -58,10 +52,9 @@ func (g *Graph) Vertices() []Vertex { // Edges returns the list of all the edges in the graph. func (g *Graph) Edges() []Edge { - list := g.edges.List() - result := make([]Edge, len(list)) - for i, v := range list { - result[i] = v.(Edge) + result := make([]Edge, 0, len(g.edges)) + for _, v := range g.edges { + result = append(result, v.(Edge)) } return result @@ -108,7 +101,6 @@ func (g *Graph) HasEdge(e Edge) bool { func (g *Graph) Add(v Vertex) Vertex { g.init() g.vertices.Add(v) - g.debug.Add(v) return v } @@ -117,13 +109,12 @@ func (g *Graph) Add(v Vertex) Vertex { func (g *Graph) Remove(v Vertex) Vertex { // Delete the vertex itself g.vertices.Delete(v) - g.debug.Remove(v) // Delete the edges to non-existent things - for _, target := range g.DownEdges(v).List() { + for _, target := range g.DownEdges(v) { g.RemoveEdge(BasicEdge(v, target)) } - for _, source := range g.UpEdges(v).List() { + for _, source := range g.UpEdges(v) { g.RemoveEdge(BasicEdge(source, v)) } @@ -139,8 +130,6 @@ func (g *Graph) Replace(original, replacement Vertex) bool { return false } - defer g.debug.BeginOperation("Replace", "").End("") - // If they're the same, then don't do anything if original == replacement { return true @@ -148,10 +137,10 @@ func (g *Graph) Replace(original, replacement Vertex) bool { // Add our new vertex, then copy all the edges g.Add(replacement) - for _, target := range g.DownEdges(original).List() { + for _, target := range g.DownEdges(original) { g.Connect(BasicEdge(replacement, target)) } - for _, source := range g.UpEdges(original).List() { + for _, source := range g.UpEdges(original) { g.Connect(BasicEdge(source, replacement)) } @@ -164,7 +153,6 @@ func (g *Graph) Replace(original, replacement Vertex) bool { // RemoveEdge removes an edge from the graph. func (g *Graph) RemoveEdge(edge Edge) { g.init() - g.debug.RemoveEdge(edge) // Delete the edge from the set g.edges.Delete(edge) @@ -179,13 +167,13 @@ func (g *Graph) RemoveEdge(edge Edge) { } // DownEdges returns the outward edges from the source Vertex v. -func (g *Graph) DownEdges(v Vertex) *Set { +func (g *Graph) DownEdges(v Vertex) Set { g.init() return g.downEdges[hashcode(v)] } // UpEdges returns the inward edges to the destination Vertex v. -func (g *Graph) UpEdges(v Vertex) *Set { +func (g *Graph) UpEdges(v Vertex) Set { g.init() return g.upEdges[hashcode(v)] } @@ -196,7 +184,6 @@ func (g *Graph) UpEdges(v Vertex) *Set { // value of the edge itself. 
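+//
+// A small sketch (hypothetical int vertices): after
+//
+//    g.Add(1)
+//    g.Add(3)
+//    g.Connect(BasicEdge(1, 3))
+//
+// vertex 3 appears in g.DownEdges(1) and vertex 1 appears in g.UpEdges(3).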
func (g *Graph) Connect(edge Edge) { g.init() - g.debug.Connect(edge) source := edge.Source() target := edge.Target() @@ -214,7 +201,7 @@ func (g *Graph) Connect(edge Edge) { // Add the down edge s, ok := g.downEdges[sourceCode] if !ok { - s = new(Set) + s = make(Set) g.downEdges[sourceCode] = s } s.Add(target) @@ -222,7 +209,7 @@ func (g *Graph) Connect(edge Edge) { // Add the up edge s, ok = g.upEdges[targetCode] if !ok { - s = new(Set) + s = make(Set) g.upEdges[targetCode] = s } s.Add(source) @@ -254,7 +241,7 @@ func (g *Graph) StringWithNodeTypes() string { // Alphabetize dependencies deps := make([]string, 0, targets.Len()) targetNodes := make(map[string]Vertex) - for _, target := range targets.List() { + for _, target := range targets { dep := VertexName(target) deps = append(deps, dep) targetNodes[dep] = target @@ -295,7 +282,7 @@ func (g *Graph) String() string { // Alphabetize dependencies deps := make([]string, 0, targets.Len()) - for _, target := range targets.List() { + for _, target := range targets { deps = append(deps, VertexName(target)) } sort.Strings(deps) @@ -311,16 +298,16 @@ func (g *Graph) String() string { func (g *Graph) init() { if g.vertices == nil { - g.vertices = new(Set) + g.vertices = make(Set) } if g.edges == nil { - g.edges = new(Set) + g.edges = make(Set) } if g.downEdges == nil { - g.downEdges = make(map[interface{}]*Set) + g.downEdges = make(map[interface{}]Set) } if g.upEdges == nil { - g.upEdges = make(map[interface{}]*Set) + g.upEdges = make(map[interface{}]Set) } } @@ -329,55 +316,6 @@ func (g *Graph) Dot(opts *DotOpts) []byte { return newMarshalGraph("", g).Dot(opts) } -// MarshalJSON returns a JSON representation of the entire Graph. -func (g *Graph) MarshalJSON() ([]byte, error) { - dg := newMarshalGraph("root", g) - return json.MarshalIndent(dg, "", " ") -} - -// SetDebugWriter sets the io.Writer where the Graph will record debug -// information. After this is set, the graph will immediately encode itself to -// the stream, and continue to record all subsequent operations. -func (g *Graph) SetDebugWriter(w io.Writer) { - g.debug = &encoder{w: w} - g.debug.Encode(newMarshalGraph("root", g)) -} - -// DebugVertexInfo encodes arbitrary information about a vertex in the graph -// debug logs. -func (g *Graph) DebugVertexInfo(v Vertex, info string) { - va := newVertexInfo(typeVertexInfo, v, info) - g.debug.Encode(va) -} - -// DebugEdgeInfo encodes arbitrary information about an edge in the graph debug -// logs. -func (g *Graph) DebugEdgeInfo(e Edge, info string) { - ea := newEdgeInfo(typeEdgeInfo, e, info) - g.debug.Encode(ea) -} - -// DebugVisitInfo records a visit to a Vertex during a walk operation. -func (g *Graph) DebugVisitInfo(v Vertex, info string) { - vi := newVertexInfo(typeVisitInfo, v, info) - g.debug.Encode(vi) -} - -// DebugOperation marks the start of a set of graph transformations in -// the debug log, and returns a DebugOperationEnd func, which marks the end of -// the operation in the log. Additional information can be added to the log via -// the info parameter. -// -// The returned func's End method allows this method to be called from a single -// defer statement: -// defer g.DebugOperationBegin("OpName", "operating").End("") -// -// The returned function must be called to properly close the logical operation -// in the logs. -func (g *Graph) DebugOperation(operation string, info string) DebugOperationEnd { - return g.debug.BeginOperation(operation, info) -} - // VertexName returns the name of a vertex. 
func VertexName(raw Vertex) string { switch v := raw.(type) { diff --git a/dag/graph_test.go b/dag/graph_test.go index 02c4debd5..297974431 100644 --- a/dag/graph_test.go +++ b/dag/graph_test.go @@ -134,15 +134,15 @@ func TestGraphEdgesFrom(t *testing.T) { edges := g.EdgesFrom(1) - var expected Set + expected := make(Set) expected.Add(BasicEdge(1, 3)) - var s Set + s := make(Set) for _, e := range edges { s.Add(e) } - if s.Intersection(&expected).Len() != expected.Len() { + if s.Intersection(expected).Len() != expected.Len() { t.Fatalf("bad: %#v", edges) } } @@ -157,15 +157,15 @@ func TestGraphEdgesTo(t *testing.T) { edges := g.EdgesTo(3) - var expected Set + expected := make(Set) expected.Add(BasicEdge(1, 3)) - var s Set + s := make(Set) for _, e := range edges { s.Add(e) } - if s.Intersection(&expected).Len() != expected.Len() { + if s.Intersection(expected).Len() != expected.Len() { t.Fatalf("bad: %#v", edges) } } diff --git a/dag/marshal.go b/dag/marshal.go index c567d2719..ebb8a0a63 100644 --- a/dag/marshal.go +++ b/dag/marshal.go @@ -1,14 +1,10 @@ package dag import ( - "encoding/json" "fmt" - "io" - "log" "reflect" "sort" "strconv" - "sync" ) const ( @@ -234,241 +230,3 @@ func marshalSubgrapher(v Vertex) (*Graph, bool) { return nil, false } - -// The DebugOperationEnd func type provides a way to call an End function via a -// method call, allowing for the chaining of methods in a defer statement. -type DebugOperationEnd func(string) - -// End calls function e with the info parameter, marking the end of this -// operation in the logs. -func (e DebugOperationEnd) End(info string) { e(info) } - -// encoder provides methods to write debug data to an io.Writer, and is a noop -// when no writer is present -type encoder struct { - sync.Mutex - w io.Writer -} - -// Encode is analogous to json.Encoder.Encode -func (e *encoder) Encode(i interface{}) { - if e == nil || e.w == nil { - return - } - e.Lock() - defer e.Unlock() - - js, err := json.Marshal(i) - if err != nil { - log.Println("[ERROR] dag:", err) - return - } - js = append(js, '\n') - - _, err = e.w.Write(js) - if err != nil { - log.Println("[ERROR] dag:", err) - return - } -} - -func (e *encoder) Add(v Vertex) { - if e == nil { - return - } - e.Encode(marshalTransform{ - Type: typeTransform, - AddVertex: newMarshalVertex(v), - }) -} - -// Remove records the removal of Vertex v. -func (e *encoder) Remove(v Vertex) { - if e == nil { - return - } - e.Encode(marshalTransform{ - Type: typeTransform, - RemoveVertex: newMarshalVertex(v), - }) -} - -func (e *encoder) Connect(edge Edge) { - if e == nil { - return - } - e.Encode(marshalTransform{ - Type: typeTransform, - AddEdge: newMarshalEdge(edge), - }) -} - -func (e *encoder) RemoveEdge(edge Edge) { - if e == nil { - return - } - e.Encode(marshalTransform{ - Type: typeTransform, - RemoveEdge: newMarshalEdge(edge), - }) -} - -// BeginOperation marks the start of set of graph transformations, and returns -// an EndDebugOperation func to be called once the opration is complete. 
-func (e *encoder) BeginOperation(op string, info string) DebugOperationEnd { - if e == nil { - return func(string) {} - } - - e.Encode(marshalOperation{ - Type: typeOperation, - Begin: op, - Info: info, - }) - - return func(info string) { - e.Encode(marshalOperation{ - Type: typeOperation, - End: op, - Info: info, - }) - } -} - -// structure for recording graph transformations -type marshalTransform struct { - // Type: "Transform" - Type string - AddEdge *marshalEdge `json:",omitempty"` - RemoveEdge *marshalEdge `json:",omitempty"` - AddVertex *marshalVertex `json:",omitempty"` - RemoveVertex *marshalVertex `json:",omitempty"` -} - -func (t marshalTransform) Transform(g *marshalGraph) { - switch { - case t.AddEdge != nil: - g.connect(t.AddEdge) - case t.RemoveEdge != nil: - g.removeEdge(t.RemoveEdge) - case t.AddVertex != nil: - g.add(t.AddVertex) - case t.RemoveVertex != nil: - g.remove(t.RemoveVertex) - } -} - -// this structure allows us to decode any object in the json stream for -// inspection, then re-decode it into a proper struct if needed. -type streamDecode struct { - Type string - Map map[string]interface{} - JSON []byte -} - -func (s *streamDecode) UnmarshalJSON(d []byte) error { - s.JSON = d - err := json.Unmarshal(d, &s.Map) - if err != nil { - return err - } - - if t, ok := s.Map["Type"]; ok { - s.Type, _ = t.(string) - } - return nil -} - -// structure for recording the beginning and end of any multi-step -// transformations. These are informational, and not required to reproduce the -// graph state. -type marshalOperation struct { - Type string - Begin string `json:",omitempty"` - End string `json:",omitempty"` - Info string `json:",omitempty"` -} - -// decodeGraph decodes a marshalGraph from an encoded graph stream. -func decodeGraph(r io.Reader) (*marshalGraph, error) { - dec := json.NewDecoder(r) - - // a stream should always start with a graph - g := &marshalGraph{} - - err := dec.Decode(g) - if err != nil { - return nil, err - } - - // now replay any operations that occurred on the original graph - for dec.More() { - s := &streamDecode{} - err := dec.Decode(s) - if err != nil { - return g, err - } - - // the only Type we're concerned with here is Transform to complete the - // Graph - if s.Type != typeTransform { - continue - } - - t := &marshalTransform{} - err = json.Unmarshal(s.JSON, t) - if err != nil { - return g, err - } - t.Transform(g) - } - return g, nil -} - -// marshalVertexInfo allows encoding arbitrary information about the a single -// Vertex in the logs. These are accumulated for informational display while -// rebuilding the graph. -type marshalVertexInfo struct { - Type string - Vertex *marshalVertex - Info string -} - -func newVertexInfo(infoType string, v Vertex, info string) *marshalVertexInfo { - return &marshalVertexInfo{ - Type: infoType, - Vertex: newMarshalVertex(v), - Info: info, - } -} - -// marshalEdgeInfo allows encoding arbitrary information about the a single -// Edge in the logs. These are accumulated for informational display while -// rebuilding the graph. -type marshalEdgeInfo struct { - Type string - Edge *marshalEdge - Info string -} - -func newEdgeInfo(infoType string, e Edge, info string) *marshalEdgeInfo { - return &marshalEdgeInfo{ - Type: infoType, - Edge: newMarshalEdge(e), - Info: info, - } -} - -// JSON2Dot reads a Graph debug log from and io.Reader, and converts the final -// graph dot format. -// -// TODO: Allow returning the output at a certain point during decode. 
-// Encode extra information from the json log into the Dot. -func JSON2Dot(r io.Reader) ([]byte, error) { - g, err := decodeGraph(r) - if err != nil { - return nil, err - } - - return g.Dot(nil), nil -} diff --git a/dag/marshal_test.go b/dag/marshal_test.go index c2f52a936..a7e468dc1 100644 --- a/dag/marshal_test.go +++ b/dag/marshal_test.go @@ -1,12 +1,8 @@ package dag import ( - "bytes" - "encoding/json" "strings" "testing" - - "github.com/hashicorp/terraform/tfdiags" ) func TestGraphDot_empty(t *testing.T) { @@ -80,331 +76,3 @@ const testGraphDotAttrsStr = `digraph { "[root] foo" [foo = "bar"] } }` - -func TestGraphJSON_empty(t *testing.T) { - var g Graph - g.Add(1) - g.Add(2) - g.Add(3) - - js, err := g.MarshalJSON() - if err != nil { - t.Fatal(err) - } - - actual := strings.TrimSpace(string(js)) - expected := strings.TrimSpace(testGraphJSONEmptyStr) - if actual != expected { - t.Fatalf("bad: %s", actual) - } -} - -func TestGraphJSON_basic(t *testing.T) { - var g Graph - g.Add(1) - g.Add(2) - g.Add(3) - g.Connect(BasicEdge(1, 3)) - - js, err := g.MarshalJSON() - if err != nil { - t.Fatal(err) - } - actual := strings.TrimSpace(string(js)) - expected := strings.TrimSpace(testGraphJSONBasicStr) - if actual != expected { - t.Fatalf("bad: %s", actual) - } -} - -// record some graph transformations, and make sure we get the same graph when -// they're replayed -func TestGraphJSON_basicRecord(t *testing.T) { - var g Graph - var buf bytes.Buffer - g.SetDebugWriter(&buf) - - g.Add(1) - g.Add(2) - g.Add(3) - g.Connect(BasicEdge(1, 2)) - g.Connect(BasicEdge(1, 3)) - g.Connect(BasicEdge(2, 3)) - (&AcyclicGraph{g}).TransitiveReduction() - - recorded := buf.Bytes() - // the Walk doesn't happen in a determined order, so just count operations - // for now to make sure we wrote stuff out. 
- if len(bytes.Split(recorded, []byte{'\n'})) != 17 { - t.Fatalf("bad: %s", recorded) - } - - original, err := g.MarshalJSON() - if err != nil { - t.Fatal(err) - } - - // replay the logs, and marshal the graph back out again - m, err := decodeGraph(bytes.NewReader(buf.Bytes())) - if err != nil { - t.Fatal(err) - } - - replayed, err := json.MarshalIndent(m, "", " ") - if err != nil { - t.Fatal(err) - } - - if !bytes.Equal(original, replayed) { - t.Fatalf("\noriginal: %s\nreplayed: %s", original, replayed) - } -} - -// Verify that Vertex and Edge annotations appear in the debug output -func TestGraphJSON_debugInfo(t *testing.T) { - var g Graph - var buf bytes.Buffer - g.SetDebugWriter(&buf) - - g.Add(1) - g.Add(2) - g.Add(3) - g.Connect(BasicEdge(1, 2)) - - g.DebugVertexInfo(2, "2") - g.DebugVertexInfo(3, "3") - g.DebugEdgeInfo(BasicEdge(1, 2), "1|2") - - dec := json.NewDecoder(bytes.NewReader(buf.Bytes())) - - var found2, found3, foundEdge bool - for dec.More() { - var d streamDecode - - err := dec.Decode(&d) - if err != nil { - t.Fatal(err) - } - - switch d.Type { - case typeVertexInfo: - va := &marshalVertexInfo{} - err := json.Unmarshal(d.JSON, va) - if err != nil { - t.Fatal(err) - } - - switch va.Info { - case "2": - if va.Vertex.Name != "2" { - t.Fatalf("wrong vertex annotated 2: %#v", va) - } - found2 = true - case "3": - if va.Vertex.Name != "3" { - t.Fatalf("wrong vertex annotated 3: %#v", va) - } - found3 = true - default: - t.Fatalf("unexpected annotation: %#v", va) - } - case typeEdgeInfo: - ea := &marshalEdgeInfo{} - err := json.Unmarshal(d.JSON, ea) - if err != nil { - t.Fatal(err) - } - - switch ea.Info { - case "1|2": - if ea.Edge.Name != "1|2" { - t.Fatalf("incorrect edge annotation: %#v\n", ea) - } - foundEdge = true - default: - t.Fatalf("unexpected edge Info: %#v", ea) - } - } - } - - if !found2 { - t.Fatal("annotation 2 not found") - } - if !found3 { - t.Fatal("annotation 3 not found") - } - if !foundEdge { - t.Fatal("edge annotation not found") - } -} - -// Verify that debug operations appear in the debug output -func TestGraphJSON_debugOperations(t *testing.T) { - var g Graph - var buf bytes.Buffer - g.SetDebugWriter(&buf) - - debugOp := g.DebugOperation("AddOne", "adding node 1") - g.Add(1) - debugOp.End("done adding node 1") - - // use an immediate closure to test defers - func() { - defer g.DebugOperation("AddTwo", "adding nodes 2 and 3").End("done adding 2 and 3") - g.Add(2) - defer g.DebugOperation("NestedAddThree", "second defer").End("done adding node 3") - g.Add(3) - }() - - g.Connect(BasicEdge(1, 2)) - - dec := json.NewDecoder(bytes.NewReader(buf.Bytes())) - - var ops []string - for dec.More() { - var d streamDecode - - err := dec.Decode(&d) - if err != nil { - t.Fatal(err) - } - - if d.Type != typeOperation { - continue - } - - o := &marshalOperation{} - err = json.Unmarshal(d.JSON, o) - if err != nil { - t.Fatal(err) - } - - switch { - case o.Begin == "AddOne": - ops = append(ops, "BeginAddOne") - case o.End == "AddOne": - ops = append(ops, "EndAddOne") - case o.Begin == "AddTwo": - ops = append(ops, "BeginAddTwo") - case o.End == "AddTwo": - ops = append(ops, "EndAddTwo") - case o.Begin == "NestedAddThree": - ops = append(ops, "BeginAddThree") - case o.End == "NestedAddThree": - ops = append(ops, "EndAddThree") - } - } - - expectedOps := []string{ - "BeginAddOne", - "EndAddOne", - "BeginAddTwo", - "BeginAddThree", - "EndAddThree", - "EndAddTwo", - } - - if strings.Join(ops, ",") != strings.Join(expectedOps, ",") { - t.Fatalf("incorrect order of operations: 
%v", ops) - } -} - -// Verify that we can replay visiting each vertex in order -func TestGraphJSON_debugVisits(t *testing.T) { - var g Graph - var buf bytes.Buffer - g.SetDebugWriter(&buf) - - g.Add(1) - g.Add(2) - g.Add(3) - g.Add(4) - - g.Connect(BasicEdge(2, 1)) - g.Connect(BasicEdge(4, 2)) - g.Connect(BasicEdge(3, 4)) - - err := (&AcyclicGraph{g}).Walk(func(v Vertex) tfdiags.Diagnostics { - g.DebugVisitInfo(v, "basic walk") - return nil - }) - - if err != nil { - t.Fatal(err) - } - - var visited []string - - dec := json.NewDecoder(bytes.NewReader(buf.Bytes())) - for dec.More() { - var d streamDecode - - err := dec.Decode(&d) - if err != nil { - t.Fatal(err) - } - - if d.Type != typeVisitInfo { - continue - } - - o := &marshalVertexInfo{} - err = json.Unmarshal(d.JSON, o) - if err != nil { - t.Fatal(err) - } - - visited = append(visited, o.Vertex.ID) - } - - expected := []string{"1", "2", "4", "3"} - - if strings.Join(visited, "-") != strings.Join(expected, "-") { - t.Fatalf("incorrect order of operations: %v", visited) - } -} - -const testGraphJSONEmptyStr = `{ - "Type": "Graph", - "Name": "root", - "Vertices": [ - { - "ID": "1", - "Name": "1" - }, - { - "ID": "2", - "Name": "2" - }, - { - "ID": "3", - "Name": "3" - } - ] -}` - -const testGraphJSONBasicStr = `{ - "Type": "Graph", - "Name": "root", - "Vertices": [ - { - "ID": "1", - "Name": "1" - }, - { - "ID": "2", - "Name": "2" - }, - { - "ID": "3", - "Name": "3" - } - ], - "Edges": [ - { - "Name": "1|3", - "Source": "1", - "Target": "3" - } - ] -}` diff --git a/dag/set.go b/dag/set.go index 92b42151d..f3fd704ba 100644 --- a/dag/set.go +++ b/dag/set.go @@ -1,14 +1,7 @@ package dag -import ( - "sync" -) - // Set is a set data structure. -type Set struct { - m map[interface{}]interface{} - once sync.Once -} +type Set map[interface{}]interface{} // Hashable is the interface used by set to get the hash code of a value. // If this isn't given, then the value of the item being added to the set @@ -27,32 +20,29 @@ func hashcode(v interface{}) interface{} { } // Add adds an item to the set -func (s *Set) Add(v interface{}) { - s.once.Do(s.init) - s.m[hashcode(v)] = v +func (s Set) Add(v interface{}) { + s[hashcode(v)] = v } // Delete removes an item from the set. -func (s *Set) Delete(v interface{}) { - s.once.Do(s.init) - delete(s.m, hashcode(v)) +func (s Set) Delete(v interface{}) { + delete(s, hashcode(v)) } // Include returns true/false of whether a value is in the set. -func (s *Set) Include(v interface{}) bool { - s.once.Do(s.init) - _, ok := s.m[hashcode(v)] +func (s Set) Include(v interface{}) bool { + _, ok := s[hashcode(v)] return ok } // Intersection computes the set intersection with other. -func (s *Set) Intersection(other *Set) *Set { - result := new(Set) +func (s Set) Intersection(other Set) Set { + result := make(Set) if s == nil { return result } if other != nil { - for _, v := range s.m { + for _, v := range s { if other.Include(v) { result.Add(v) } @@ -64,13 +54,13 @@ func (s *Set) Intersection(other *Set) *Set { // Difference returns a set with the elements that s has but // other doesn't. 
-func (s *Set) Difference(other *Set) *Set { - result := new(Set) +func (s Set) Difference(other Set) Set { + result := make(Set) if s != nil { - for k, v := range s.m { + for k, v := range s { var ok bool if other != nil { - _, ok = other.m[k] + _, ok = other[k] } if !ok { result.Add(v) @@ -83,10 +73,10 @@ func (s *Set) Difference(other *Set) *Set { // Filter returns a set that contains the elements from the receiver // where the given callback returns true. -func (s *Set) Filter(cb func(interface{}) bool) *Set { - result := new(Set) +func (s Set) Filter(cb func(interface{}) bool) Set { + result := make(Set) - for _, v := range s.m { + for _, v := range s { if cb(v) { result.Add(v) } @@ -96,28 +86,20 @@ func (s *Set) Filter(cb func(interface{}) bool) *Set { } // Len is the number of items in the set. -func (s *Set) Len() int { - if s == nil { - return 0 - } - - return len(s.m) +func (s Set) Len() int { + return len(s) } // List returns the list of set elements. -func (s *Set) List() []interface{} { +func (s Set) List() []interface{} { if s == nil { return nil } - r := make([]interface{}, 0, len(s.m)) - for _, v := range s.m { + r := make([]interface{}, 0, len(s)) + for _, v := range s { r = append(r, v) } return r } - -func (s *Set) init() { - s.m = make(map[interface{}]interface{}) -} diff --git a/dag/set_test.go b/dag/set_test.go index c70da475e..63b72e323 100644 --- a/dag/set_test.go +++ b/dag/set_test.go @@ -35,7 +35,9 @@ func TestSetDifference(t *testing.T) { for i, tc := range cases { t.Run(fmt.Sprintf("%d-%s", i, tc.Name), func(t *testing.T) { - var one, two, expected Set + one := make(Set) + two := make(Set) + expected := make(Set) for _, v := range tc.A { one.Add(v) } @@ -46,8 +48,8 @@ func TestSetDifference(t *testing.T) { expected.Add(v) } - actual := one.Difference(&two) - match := actual.Intersection(&expected) + actual := one.Difference(two) + match := actual.Intersection(expected) if match.Len() != expected.Len() { t.Fatalf("bad: %#v", actual.List()) } @@ -78,7 +80,8 @@ func TestSetFilter(t *testing.T) { for i, tc := range cases { t.Run(fmt.Sprintf("%d-%#v", i, tc.Input), func(t *testing.T) { - var input, expected Set + input := make(Set) + expected := make(Set) for _, v := range tc.Input { input.Add(v) } @@ -89,7 +92,7 @@ func TestSetFilter(t *testing.T) { actual := input.Filter(func(v interface{}) bool { return v.(int) < 5 }) - match := actual.Intersection(&expected) + match := actual.Intersection(expected) if match.Len() != expected.Len() { t.Fatalf("bad: %#v", actual.List()) } diff --git a/dag/tarjan.go b/dag/tarjan.go index 9d8b25ce2..330abd589 100644 --- a/dag/tarjan.go +++ b/dag/tarjan.go @@ -24,7 +24,7 @@ func stronglyConnected(acct *sccAcct, g *Graph, v Vertex) int { index := acct.visit(v) minIdx := index - for _, raw := range g.DownEdges(v).List() { + for _, raw := range g.DownEdges(v) { target := raw.(Vertex) targetIdx := acct.VertexIndex[target] diff --git a/dag/walk.go b/dag/walk.go index 1c926c2c2..8fb60f23a 100644 --- a/dag/walk.go +++ b/dag/walk.go @@ -15,7 +15,7 @@ import ( // been walked. If two vertices can be walked at the same time, they will be. // // Update can be called to update the graph. This can be called even during -// a walk, cahnging vertices/edges mid-walk. This should be done carefully. +// a walk, changing vertices/edges mid-walk. This should be done carefully. // If a vertex is removed but has already been executed, the result of that // execution (any error) is still returned by Wait. 
Changing or re-adding // a vertex that has already executed has no effect. Changing edges of @@ -64,6 +64,15 @@ type Walker struct { diagsLock sync.Mutex } +func (w *Walker) init() { + if w.vertices == nil { + w.vertices = make(Set) + } + if w.edges == nil { + w.edges = make(Set) + } +} + type walkerVertex struct { // These should only be set once on initialization and never written again. // They are not protected by a lock since they don't need to be since @@ -140,7 +149,9 @@ func (w *Walker) Wait() tfdiags.Diagnostics { // time during a walk. func (w *Walker) Update(g *AcyclicGraph) { log.Print("[TRACE] dag/walk: updating graph") - var v, e *Set + w.init() + v := make(Set) + e := make(Set) if g != nil { v, e = g.vertices, g.edges } @@ -157,13 +168,13 @@ func (w *Walker) Update(g *AcyclicGraph) { } // Calculate all our sets - newEdges := e.Difference(&w.edges) + newEdges := e.Difference(w.edges) oldEdges := w.edges.Difference(e) - newVerts := v.Difference(&w.vertices) + newVerts := v.Difference(w.vertices) oldVerts := w.vertices.Difference(v) // Add the new vertices - for _, raw := range newVerts.List() { + for _, raw := range newVerts { v := raw.(Vertex) // Add to the waitgroup so our walk is not done until everything finishes @@ -185,7 +196,7 @@ func (w *Walker) Update(g *AcyclicGraph) { } // Remove the old vertices - for _, raw := range oldVerts.List() { + for _, raw := range oldVerts { v := raw.(Vertex) // Get the vertex info so we can cancel it @@ -207,8 +218,8 @@ func (w *Walker) Update(g *AcyclicGraph) { } // Add the new edges - var changedDeps Set - for _, raw := range newEdges.List() { + changedDeps := make(Set) + for _, raw := range newEdges { edge := raw.(Edge) waiter, dep := w.edgeParts(edge) @@ -238,8 +249,8 @@ func (w *Walker) Update(g *AcyclicGraph) { w.edges.Add(raw) } - // Process reoved edges - for _, raw := range oldEdges.List() { + // Process removed edges + for _, raw := range oldEdges { edge := raw.(Edge) waiter, dep := w.edgeParts(edge) @@ -264,7 +275,7 @@ func (w *Walker) Update(g *AcyclicGraph) { // For each vertex with changed dependencies, we need to kick off // a new waiter and notify the vertex of the changes. - for _, raw := range changedDeps.List() { + for _, raw := range changedDeps { v := raw.(Vertex) info, ok := w.vertexMap[v] if !ok { @@ -309,7 +320,7 @@ func (w *Walker) Update(g *AcyclicGraph) { // Start all the new vertices. We do this at the end so that all // the edge waiters and changes are setup above. - for _, raw := range newVerts.List() { + for _, raw := range newVerts { v := raw.(Vertex) go w.walkVertex(v, w.vertexMap[v]) } diff --git a/experiments/doc.go b/experiments/doc.go new file mode 100644 index 000000000..5538d739c --- /dev/null +++ b/experiments/doc.go @@ -0,0 +1,9 @@ +// Package experiments contains the models and logic for opt-in experiments +// that can be activated for a particular Terraform module. +// +// We use experiments to get feedback on new configuration language features +// in a way that permits breaking changes without waiting for a future minor +// release. Any feature behind an experiment flag is subject to change in any +// way in even a patch release, until we have enough confidence about the +// design of the feature to make compatibility commitments about it. 
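+//
+// As a sketch, a module opts in from its configuration via the experiments
+// setting (the exact syntax is owned by the configuration language rather
+// than by this package):
+//
+//    terraform {
+//      experiments = [variable_validation]
+//    }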
+package experiments
diff --git a/experiments/errors.go b/experiments/errors.go
new file mode 100644
index 000000000..a1fdc6f5c
--- /dev/null
+++ b/experiments/errors.go
@@ -0,0 +1,26 @@
+package experiments
+
+import (
+	"fmt"
+)
+
+// UnavailableError is the error type returned by GetCurrent when the requested
+// experiment is not recognized at all.
+type UnavailableError struct {
+	ExperimentName string
+}
+
+func (e UnavailableError) Error() string {
+	return fmt.Sprintf("no current experiment is named %q", e.ExperimentName)
+}
+
+// ConcludedError is the error type returned by GetCurrent when the requested
+// experiment is recognized as concluded.
+type ConcludedError struct {
+	ExperimentName string
+	Message        string
+}
+
+func (e ConcludedError) Error() string {
+	return fmt.Sprintf("experiment %q has concluded: %s", e.ExperimentName, e.Message)
+}
diff --git a/experiments/experiment.go b/experiments/experiment.go
new file mode 100644
index 000000000..037b34e73
--- /dev/null
+++ b/experiments/experiment.go
@@ -0,0 +1,93 @@
+package experiments
+
+// Experiment represents a particular experiment, which can be activated
+// independently of all other experiments.
+type Experiment string
+
+// All active and defunct experiments must be represented by constants whose
+// internal string values are unique.
+//
+// Each of these declared constants must also be registered as either a
+// current or a defunct experiment in the init() function below.
+//
+// Each experiment is represented by a string that must be a valid HCL
+// identifier so that it can be specified in configuration.
+const (
+	VariableValidation = Experiment("variable_validation")
+)
+
+func init() {
+	// Each experiment constant defined above must be registered here as either
+	// a current or a concluded experiment.
+	registerCurrentExperiment(VariableValidation)
+}
+
+// GetCurrent takes an experiment name and returns the experiment value
+// representing that experiment if and only if it is a current experiment.
+//
+// If the selected experiment is concluded, GetCurrent will return an
+// error of type ConcludedError whose message hopefully includes some guidance
+// for users of the experiment on how to migrate to a stable feature that
+// succeeded it.
+//
+// If the selected experiment is not known at all, GetCurrent will return an
+// error of type UnavailableError.
+func GetCurrent(name string) (Experiment, error) {
+	exp := Experiment(name)
+	if currentExperiments.Has(exp) {
+		return exp, nil
+	}
+
+	if msg, concluded := concludedExperiments[exp]; concluded {
+		return Experiment(""), ConcludedError{ExperimentName: name, Message: msg}
+	}
+
+	return Experiment(""), UnavailableError{ExperimentName: name}
+}
+
+// Keyword returns the keyword that would be used to activate this experiment
+// in the configuration.
+func (e Experiment) Keyword() string {
+	return string(e)
+}
+
+// IsCurrent returns true if the receiver is considered a currently-selectable
+// experiment.
+func (e Experiment) IsCurrent() bool {
+	return currentExperiments.Has(e)
+}
+
+// IsConcluded returns true if the receiver is a concluded experiment.
+func (e Experiment) IsConcluded() bool {
+	_, exists := concludedExperiments[e]
+	return exists
+}
+
+// currentExperiments are those which are available to activate in the current
+// version of Terraform.
+//
+// Members of this set are registered in the init function above.
+var currentExperiments = make(Set)
+
+// concludedExperiments are those which were available to activate in an earlier
+// version of Terraform but are no longer available, either because the feature
+// in question has been implemented or because the experiment failed and the
+// feature was abandoned. Each experiment maps to a message describing the
+// outcome, so we can give users feedback about what they might do in modules
+// using concluded experiments.
+//
+// After an experiment has been concluded for a whole major release span it can
+// be removed, since we expect users to perform upgrades one major release at
+// a time without skipping and thus they will see the concludedness error
+// message as they upgrade through a prior major version.
+//
+// Members of this map are registered in the init function above.
+var concludedExperiments = make(map[Experiment]string)
+
+func registerCurrentExperiment(exp Experiment) {
+	currentExperiments.Add(exp)
+}
+
+func registerConcludedExperiment(exp Experiment, message string) {
+	concludedExperiments[exp] = message
+}
diff --git a/experiments/set.go b/experiments/set.go
new file mode 100644
index 000000000..8247e212b
--- /dev/null
+++ b/experiments/set.go
@@ -0,0 +1,46 @@
+package experiments
+
+// Set is a collection of experiments where every experiment is either a member
+// or not.
+type Set map[Experiment]struct{}
+
+// NewSet constructs a new Set with the given experiments as its initial members.
+func NewSet(exps ...Experiment) Set {
+	ret := make(Set)
+	for _, exp := range exps {
+		ret.Add(exp)
+	}
+	return ret
+}
+
+// SetUnion constructs a new Set containing the members of all of the given
+// sets.
+func SetUnion(sets ...Set) Set {
+	ret := make(Set)
+	for _, set := range sets {
+		for exp := range set {
+			ret.Add(exp)
+		}
+	}
+	return ret
+}
+
+// Add inserts the given experiment into the set.
+//
+// If the given experiment is already present then this is a no-op.
+func (s Set) Add(exp Experiment) {
+	s[exp] = struct{}{}
+}
+
+// Remove takes the given experiment out of the set.
+//
+// If the given experiment is not already present then this is a no-op.
+func (s Set) Remove(exp Experiment) {
+	delete(s, exp)
+}
+
+// Has tests whether the given experiment is in the receiving set.
+func (s Set) Has(exp Experiment) bool {
+	_, ok := s[exp]
+	return ok
+}
diff --git a/experiments/testing.go b/experiments/testing.go
new file mode 100644
index 000000000..54ff2dfde
--- /dev/null
+++ b/experiments/testing.go
@@ -0,0 +1,33 @@
+package experiments
+
+import (
+	"testing"
+)
+
+// OverrideForTesting temporarily overrides the global tables
+// of experiments in order to allow for a predictable set when unit testing
+// the experiments infrastructure code.
+//
+// The correct way to use this function is to defer a call to its result so
+// that the original tables can be restored at the conclusion of the calling
+// test:
+//
+//     defer experiments.OverrideForTesting(t, current, concluded)()
+//
+// This function modifies global variables that are normally fixed throughout
+// our execution, so this function must not be called from non-test code and
+// any test using it cannot safely run concurrently with other tests.
+func OverrideForTesting(t *testing.T, current Set, concluded map[Experiment]string) func() {
+	// We're not currently using the given *testing.T in here, but we're
+	// requiring it anyway in case we might need it in future, and because
+	// it hopefully reinforces that only test code should be calling this.
+ + realCurrents := currentExperiments + realConcludeds := concludedExperiments + currentExperiments = current + concludedExperiments = concluded + return func() { + currentExperiments = realCurrents + concludedExperiments = realConcludeds + } +} diff --git a/go.mod b/go.mod index 67e34ca44..cf78aac1a 100644 --- a/go.mod +++ b/go.mod @@ -2,8 +2,8 @@ module github.com/hashicorp/terraform require ( cloud.google.com/go v0.45.1 - github.com/Azure/azure-sdk-for-go v21.3.0+incompatible - github.com/Azure/go-autorest v10.15.4+incompatible + github.com/Azure/azure-sdk-for-go v36.2.0+incompatible + github.com/Azure/go-autorest/autorest v0.9.2 github.com/Unknwon/com v0.0.0-20151008135407-28b053d5a292 // indirect github.com/abdullin/seq v0.0.0-20160510034733-d5467c17e7af // indirect github.com/agext/levenshtein v1.2.2 @@ -13,10 +13,11 @@ require ( github.com/aliyun/aliyun-tablestore-go-sdk v4.1.2+incompatible github.com/apparentlymart/go-cidr v1.0.1 github.com/apparentlymart/go-dump v0.0.0-20190214190832-042adf3cf4a0 + github.com/apparentlymart/go-versions v0.0.2-0.20180815153302-64b99f7cb171 github.com/armon/circbuf v0.0.0-20190214190532-5111143e8da2 github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da // indirect github.com/armon/go-radix v1.0.0 // indirect - github.com/aws/aws-sdk-go v1.22.0 + github.com/aws/aws-sdk-go v1.25.3 github.com/baiyubin/aliyun-sts-go-sdk v0.0.0-20180326062324-cfa1a18b161f // indirect github.com/blang/semver v3.5.1+incompatible github.com/bmatcuk/doublestar v1.1.5 @@ -40,7 +41,7 @@ require ( github.com/golang/protobuf v1.3.2 github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db // indirect github.com/google/btree v1.0.0 // indirect - github.com/google/go-cmp v0.3.0 + github.com/google/go-cmp v0.3.1 github.com/google/uuid v1.1.1 github.com/gophercloud/gophercloud v0.0.0-20190208042652-bc37892e1968 github.com/gophercloud/utils v0.0.0-20190128072930-fbb6ab446f01 // indirect @@ -49,13 +50,13 @@ require ( github.com/grpc-ecosystem/go-grpc-middleware v1.0.0 // indirect github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 // indirect github.com/grpc-ecosystem/grpc-gateway v1.8.5 // indirect - github.com/hashicorp/aws-sdk-go-base v0.3.0 + github.com/hashicorp/aws-sdk-go-base v0.4.0 github.com/hashicorp/consul v0.0.0-20171026175957-610f3c86a089 github.com/hashicorp/errwrap v1.0.0 - github.com/hashicorp/go-azure-helpers v0.0.0-20190129193224-166dfd221bb2 + github.com/hashicorp/go-azure-helpers v0.10.0 github.com/hashicorp/go-checkpoint v0.5.0 - github.com/hashicorp/go-cleanhttp v0.5.0 - github.com/hashicorp/go-getter v1.4.0 + github.com/hashicorp/go-cleanhttp v0.5.1 + github.com/hashicorp/go-getter v1.4.2-0.20200106182914-9813cbd4eb02 github.com/hashicorp/go-hclog v0.0.0-20181001195459-61d530d6c27f github.com/hashicorp/go-immutable-radix v0.0.0-20180129170900-7f3cd4390caa // indirect github.com/hashicorp/go-msgpack v0.5.4 // indirect @@ -64,17 +65,18 @@ require ( github.com/hashicorp/go-retryablehttp v0.5.2 github.com/hashicorp/go-rootcerts v1.0.0 github.com/hashicorp/go-sockaddr v0.0.0-20180320115054-6d291a969b86 // indirect - github.com/hashicorp/go-tfe v0.3.23 + github.com/hashicorp/go-tfe v0.3.27 github.com/hashicorp/go-uuid v1.0.1 - github.com/hashicorp/go-version v1.1.0 + github.com/hashicorp/go-version v1.2.0 github.com/hashicorp/hcl v0.0.0-20170504190234-a4b07c25de5f - github.com/hashicorp/hcl/v2 v2.0.0 + github.com/hashicorp/hcl/v2 v2.3.0 github.com/hashicorp/hil v0.0.0-20190212112733-ab17b08d6590 - github.com/hashicorp/logutils v1.0.0 
github.com/hashicorp/memberlist v0.1.0 // indirect github.com/hashicorp/serf v0.0.0-20160124182025-e4ec8cc423bb // indirect - github.com/hashicorp/terraform-config-inspect v0.0.0-20190821133035-82a99dc22ef4 + github.com/hashicorp/terraform-config-inspect v0.0.0-20191212124732-c6ae6269b9d7 + github.com/hashicorp/terraform-svchost v0.0.0-20191011084731-65d371908596 github.com/hashicorp/vault v0.10.4 + github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af github.com/jonboulle/clockwork v0.1.0 // indirect github.com/joyent/triton-go v0.0.0-20180313100802-d8f9c0314926 github.com/json-iterator/go v1.1.5 // indirect @@ -82,8 +84,8 @@ require ( github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0 github.com/keybase/go-crypto v0.0.0-20161004153544-93f5b35093ba // indirect github.com/lib/pq v1.0.0 + github.com/likexian/gokit v0.20.15 github.com/lusis/go-artifactory v0.0.0-20160115162124-7e4ce345df82 - github.com/marstr/guid v1.1.0 // indirect github.com/masterzen/winrm v0.0.0-20190223112901-5e5c9a7fe54b github.com/mattn/go-colorable v0.1.1 github.com/mattn/go-shellwords v1.0.4 @@ -96,7 +98,7 @@ require ( github.com/mitchellh/go-wordwrap v1.0.0 github.com/mitchellh/hashstructure v1.0.0 github.com/mitchellh/mapstructure v1.1.2 - github.com/mitchellh/panicwrap v0.0.0-20190213213626-17011010aaa4 + github.com/mitchellh/panicwrap v1.0.0 github.com/mitchellh/prefixedio v0.0.0-20190213213902-5733675afd51 github.com/mitchellh/reflectwalk v1.0.0 github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect @@ -112,6 +114,8 @@ require ( github.com/smartystreets/goconvey v0.0.0-20180222194500-ef6db91d284a // indirect github.com/soheilhy/cmux v0.1.4 // indirect github.com/spf13/afero v1.2.1 + github.com/tencentcloud/tencentcloud-sdk-go v3.0.82+incompatible + github.com/tencentyun/cos-go-sdk-v5 v0.0.0-20190808065407-f07404cefc8c github.com/terraform-providers/terraform-provider-openstack v1.15.0 github.com/tmc/grpc-websocket-proxy v0.0.0-20171017195756-830351dc03c6 // indirect github.com/ugorji/go v0.0.0-20180813092308-00b869d2f4a5 // indirect @@ -119,17 +123,20 @@ require ( github.com/xanzy/ssh-agent v0.2.1 github.com/xiang90/probing v0.0.0-20160813154853-07dd2e8dfe18 // indirect github.com/xlab/treeprint v0.0.0-20161029104018-1d6e34225557 - github.com/zclconf/go-cty v1.1.0 + github.com/zclconf/go-cty v1.3.1 github.com/zclconf/go-cty-yaml v1.0.1 go.uber.org/atomic v1.3.2 // indirect go.uber.org/multierr v1.1.0 // indirect go.uber.org/zap v1.9.1 // indirect golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4 - golang.org/x/net v0.0.0-20190620200207-3b0461eec859 + golang.org/x/net v0.0.0-20191009170851-d66e71096ffb golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45 golang.org/x/sys v0.0.0-20190804053845-51ab0e2deafa + golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0 google.golang.org/api v0.9.0 google.golang.org/grpc v1.21.1 gopkg.in/ini.v1 v1.42.0 // indirect gopkg.in/yaml.v2 v2.2.2 ) + +go 1.12 diff --git a/go.sum b/go.sum index 6911518b8..c95ddf0eb 100644 --- a/go.sum +++ b/go.sum @@ -7,16 +7,41 @@ cloud.google.com/go v0.45.1 h1:lRi0CHyU+ytlvylOlFKKq0af6JncuyoRh1J+QJBqQx0= cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc= cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o= cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE= -github.com/Azure/azure-sdk-for-go v21.3.0+incompatible h1:YFvAka2WKAl2xnJkYV1e1b7E2z88AgFszDzWU18ejMY= 
-github.com/Azure/azure-sdk-for-go v21.3.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc= -github.com/Azure/go-autorest v10.15.4+incompatible h1:q+DRrRdbCnkY7f2WxQBx58TwCGkEdMAK/hkZ10g0Pzk= -github.com/Azure/go-autorest v10.15.4+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24= +github.com/Azure/azure-sdk-for-go v35.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc= +github.com/Azure/azure-sdk-for-go v36.2.0+incompatible h1:09cv2WoH0g6jl6m2iT+R9qcIPZKhXEL0sbmLhxP895s= +github.com/Azure/azure-sdk-for-go v36.2.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc= +github.com/Azure/go-autorest/autorest v0.9.0/go.mod h1:xyHB1BMZT0cuDHU7I0+g046+BFDTQ8rEZB0s4Yfa6bI= +github.com/Azure/go-autorest/autorest v0.9.2 h1:6AWuh3uWrsZJcNoCHrCF/+g4aKPCU39kaMO6/qrnK/4= +github.com/Azure/go-autorest/autorest v0.9.2/go.mod h1:xyHB1BMZT0cuDHU7I0+g046+BFDTQ8rEZB0s4Yfa6bI= +github.com/Azure/go-autorest/autorest/adal v0.5.0 h1:q2gDruN08/guU9vAjuPWff0+QIrpH6ediguzdAzXAUU= +github.com/Azure/go-autorest/autorest/adal v0.5.0/go.mod h1:8Z9fGy2MpX0PvDjB1pEgQTmVqjGhiHBW7RJJEciWzS0= +github.com/Azure/go-autorest/autorest/adal v0.8.1-0.20191028180845-3492b2aff503 h1:Hxqlh1uAA8aGpa1dFhDNhll7U/rkWtG8ZItFvRMr7l0= +github.com/Azure/go-autorest/autorest/adal v0.8.1-0.20191028180845-3492b2aff503/go.mod h1:Z6vX6WXXuyieHAXwMj0S6HY6e6wcHn37qQMBQlvY3lc= +github.com/Azure/go-autorest/autorest/azure/cli v0.2.0 h1:pSwNMF0qotgehbQNllUWwJ4V3vnrLKOzHrwDLEZK904= +github.com/Azure/go-autorest/autorest/azure/cli v0.2.0/go.mod h1:WWTbGPvkAg3I4ms2j2s+Zr5xCGwGqTQh+6M2ZqOczkE= +github.com/Azure/go-autorest/autorest/date v0.1.0 h1:YGrhWfrgtFs84+h0o46rJrlmsZtyZRg470CqAXTZaGM= +github.com/Azure/go-autorest/autorest/date v0.1.0/go.mod h1:plvfp3oPSKwf2DNjlBjWF/7vwR+cUD/ELuzDCXwHUVA= +github.com/Azure/go-autorest/autorest/date v0.2.0 h1:yW+Zlqf26583pE43KhfnhFcdmSWlm5Ew6bxipnr/tbM= +github.com/Azure/go-autorest/autorest/date v0.2.0/go.mod h1:vcORJHLJEh643/Ioh9+vPmf1Ij9AEBM5FuBIXLmIy0g= +github.com/Azure/go-autorest/autorest/mocks v0.1.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0= +github.com/Azure/go-autorest/autorest/mocks v0.2.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0= +github.com/Azure/go-autorest/autorest/mocks v0.3.0 h1:qJumjCaCudz+OcqE9/XtEPfvtOjOmKaui4EOpFI6zZc= +github.com/Azure/go-autorest/autorest/mocks v0.3.0/go.mod h1:a8FDP3DYzQ4RYfVAxAN3SVSiiO77gL2j2ronKKP0syM= +github.com/Azure/go-autorest/autorest/to v0.3.0 h1:zebkZaadz7+wIQYgC7GXaz3Wb28yKYfVkkBKwc38VF8= +github.com/Azure/go-autorest/autorest/to v0.3.0/go.mod h1:MgwOyqaIuKdG4TL/2ywSsIWKAfJfgHDo8ObuUk3t5sA= +github.com/Azure/go-autorest/autorest/validation v0.2.0 h1:15vMO4y76dehZSq7pAaOLQxC6dZYsSrj2GQpflyM/L4= +github.com/Azure/go-autorest/autorest/validation v0.2.0/go.mod h1:3EEqHnBxQGHXRYq3HT1WyXAvT7LLY3tl70hw6tQIbjI= +github.com/Azure/go-autorest/logger v0.1.0 h1:ruG4BSDXONFRrZZJ2GUXDiUyVpayPmb1GnWeHDdaNKY= +github.com/Azure/go-autorest/logger v0.1.0/go.mod h1:oExouG+K6PryycPJfVSxi/koC6LSNgds39diKLz7Vrc= +github.com/Azure/go-autorest/tracing v0.5.0 h1:TRn4WjSnkcSy5AEG3pnbtFSwNtwzjr4VYyQflFE619k= +github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk= github.com/Azure/go-ntlmssp v0.0.0-20180810175552-4a21cbd618b4 h1:pSm8mp0T2OH2CPmPDPtwHPr3VAQaOwVF/JbllOPP4xA= github.com/Azure/go-ntlmssp v0.0.0-20180810175552-4a21cbd618b4/go.mod h1:chxPXzSsl7ZWRAuOIE23GDNzjWuZquvFlgA8xmpunjU= github.com/BurntSushi/toml v0.3.1/go.mod 
h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo= github.com/ChrisTrenkamp/goxpath v0.0.0-20170922090931-c385f95c6022 h1:y8Gs8CzNfDF5AZvjr+5UyGQvQEBL7pwo+v+wX6q9JI8= github.com/ChrisTrenkamp/goxpath v0.0.0-20170922090931-c385f95c6022/go.mod h1:nuWgzSkT5PnyOd+272uUmV0dnAnAn42Mk7PiQC5VzN4= +github.com/QcloudApi/qcloud_sign_golang v0.0.0-20141224014652-e4130a326409/go.mod h1:1pk82RBxDY/JZnPQrtqHlUFfCctgdorsd9M06fMynOM= github.com/Unknwon/com v0.0.0-20151008135407-28b053d5a292 h1:tuQ7w+my8a8mkwN7x2TSd7OzTjkZ7rAeSyH4xncuAMI= github.com/Unknwon/com v0.0.0-20151008135407-28b053d5a292/go.mod h1:KYCjqMOeHpNuTOiFQU6WEcTG7poCJrUs0YgyHNtn1no= github.com/abdullin/seq v0.0.0-20160510034733-d5467c17e7af h1:DBNMBMuMiWYu0b+8KMJuWmfCkcxl09JwdlqwDZZ6U14= @@ -45,6 +70,8 @@ github.com/apparentlymart/go-dump v0.0.0-20190214190832-042adf3cf4a0 h1:MzVXffFU github.com/apparentlymart/go-dump v0.0.0-20190214190832-042adf3cf4a0/go.mod h1:oL81AME2rN47vu18xqj1S1jPIPuN7afo62yKTNn3XMM= github.com/apparentlymart/go-textseg v1.0.0 h1:rRmlIsPEEhUTIKQb7T++Nz/A5Q6C9IuX2wFoYVvnCs0= github.com/apparentlymart/go-textseg v1.0.0/go.mod h1:z96Txxhf3xSFMPmb5X/1W05FF/Nj9VFpLOpjS5yuumk= +github.com/apparentlymart/go-versions v0.0.2-0.20180815153302-64b99f7cb171 h1:19Seu/H5gq3Ugtx+CGenwF89SDG3S1REX5i6PJj3RK4= +github.com/apparentlymart/go-versions v0.0.2-0.20180815153302-64b99f7cb171/go.mod h1:JXY95WvQrPJQtudvNARshgWajS7jNNlM90altXIPNyI= github.com/armon/circbuf v0.0.0-20190214190532-5111143e8da2 h1:7Ip0wMmLHLRJdrloDxZfhMm0xrLXZS8+COSu2bXmEQs= github.com/armon/circbuf v0.0.0-20190214190532-5111143e8da2/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o= github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da h1:8GUt8eRujhVEGZFFEjBj46YV4rDjvGrNxb0KMWYkL2I= @@ -53,9 +80,8 @@ github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj github.com/armon/go-radix v1.0.0 h1:F4z6KzEeeQIMeLFa97iZU6vupzoecKdU5TX24SNppXI= github.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8= github.com/aws/aws-sdk-go v1.15.78/go.mod h1:E3/ieXAlvM0XWO57iftYVDLLvQ824smPP3ATZkfNZeM= -github.com/aws/aws-sdk-go v1.16.36/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo= -github.com/aws/aws-sdk-go v1.22.0 h1:e88V6+dSEyBibUy0ekOydtTfNWzqG3hrtCR8SF6UqqY= -github.com/aws/aws-sdk-go v1.22.0/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo= +github.com/aws/aws-sdk-go v1.25.3 h1:uM16hIw9BotjZKMZlX05SN2EFtaWfi/NonPKIARiBLQ= +github.com/aws/aws-sdk-go v1.25.3/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo= github.com/baiyubin/aliyun-sts-go-sdk v0.0.0-20180326062324-cfa1a18b161f h1:ZNv7On9kyUzm7fvRZumSyy/IUiSC7AzL0I1jKKtwooA= github.com/baiyubin/aliyun-sts-go-sdk v0.0.0-20180326062324-cfa1a18b161f/go.mod h1:AuiFmCCPBSrqvVMvuqFuk0qogytodnVFVSN5CeJB8Gc= github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973 h1:xJ4a3vCFaGF/jqvzLMYoU8P317H5OQ+Via4RmuPwCS0= @@ -70,7 +96,6 @@ github.com/bmatcuk/doublestar v1.1.5 h1:2bNwBOmhyFEFcoB3tGvTD5xanq+4kyOZlB8wFYbM github.com/bmatcuk/doublestar v1.1.5/go.mod h1:wiQtGV+rzVYxB7WIlirSN++5HPtPlXEo9MEoZQC/PmE= github.com/boltdb/bolt v1.3.1 h1:JQmyP4ZBrce+ZQu0dY660FMfatumYDLun9hBCUVIkF4= github.com/boltdb/bolt v1.3.1/go.mod h1:clJnj/oiGkjum5o1McbSZDSLxVThjynRyGBgiAx27Ps= -github.com/bsm/go-vlq v0.0.0-20150828105119-ec6e8d4f5f4e/go.mod h1:N+BjUcTjSxc2mtRGSCPsat1kze3CUtvJN3/jTXlp29k= github.com/cheggaaa/pb v1.0.27/go.mod 
h1:pQciLPpbU0oxA0h+VJYYLxO+XeDQb5pZijXscXHm81s= github.com/chzyer/logex v1.1.10 h1:Swpa1K6QvQznwJRcfTfQJmTE72DqScAa40E+fbHEXEE= github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI= @@ -94,8 +119,8 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM= github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ= -github.com/dimchansky/utfbom v1.0.0 h1:fGC2kkf4qOoKqZ4q7iIh+Vef4ubC1c38UDsEyZynZPc= -github.com/dimchansky/utfbom v1.0.0/go.mod h1:rO41eb7gLfo8SF1jd9F8HplJm1Fewwi4mQvIirEdv+8= +github.com/dimchansky/utfbom v1.1.0 h1:FcM3g+nofKgUteL8dm/UpdRXNC9KmADgTpLKsu0TRo4= +github.com/dimchansky/utfbom v1.1.0/go.mod h1:rO41eb7gLfo8SF1jd9F8HplJm1Fewwi4mQvIirEdv+8= github.com/dnaeon/go-vcr v0.0.0-20180920040454-5637cf3d8a31 h1:Dzuw9GtbmllUqEcoHfScT9YpKFUssSiZ5PgZkIGf/YQ= github.com/dnaeon/go-vcr v0.0.0-20180920040454-5637cf3d8a31/go.mod h1:aBB1+wY4s93YsC3HHjMBMrwTj2R9FHDzUr9KyGc8n1E= github.com/dylanmei/iso8601 v0.1.0 h1:812NGQDBcqquTfH5Yeo7lwR0nzx/cKdsmf3qMjPURUI= @@ -104,12 +129,12 @@ github.com/dylanmei/winrmtest v0.0.0-20190225150635-99b7fe2fddf1 h1:r1oACdS2XYiA github.com/dylanmei/winrmtest v0.0.0-20190225150635-99b7fe2fddf1/go.mod h1:lcy9/2gH1jn/VCLouHA6tOEwLoNVd4GW6zhuKLmHC2Y= github.com/fatih/color v1.7.0 h1:DkWD4oS2D8LGGgTQ6IvwJJXSL5Vp2ffcQg58nFV38Ys= github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4= -github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk= github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE= github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= +github.com/go-test/deep v1.0.1/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA= github.com/go-test/deep v1.0.3 h1:ZrJSEWsXzPOxaZnFteGEfooLba+ju3FYIbOrS+rQd68= github.com/go-test/deep v1.0.3/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA= github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ= @@ -136,6 +161,8 @@ github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY= github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= +github.com/google/go-cmp v0.3.1 h1:Xye71clBPdm5HgqGwUkwhbynsUJZhDbS20FvLhQ2izg= +github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= github.com/google/go-querystring v1.0.0 h1:Xkwi/a1rcvNg1PPYe5vI8GbeBY/jrVuDX5ASuANWTrk= github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck= github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no= @@ -161,21 +188,22 @@ github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 h1:Ovs26xHkKqVztRpIrF/92Bcuy github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk= github.com/grpc-ecosystem/grpc-gateway 
v1.8.5 h1:2+KSC78XiO6Qy0hIjfc1OD9H+hsaJdJlb8Kqsd41CTE= github.com/grpc-ecosystem/grpc-gateway v1.8.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY= -github.com/hashicorp/aws-sdk-go-base v0.3.0 h1:CPWKWCuOwpIFNsy8FUI9IT2QI7mGwgVPc4hrXW9I4L4= -github.com/hashicorp/aws-sdk-go-base v0.3.0/go.mod h1:ZIWACGGi0N7a4DZbf15yuE1JQORmWLtBcVM6F5SXNFU= +github.com/hashicorp/aws-sdk-go-base v0.4.0 h1:zH9hNUdsS+2G0zJaU85ul8D59BGnZBaKM+KMNPAHGwk= +github.com/hashicorp/aws-sdk-go-base v0.4.0/go.mod h1:eRhlz3c4nhqxFZJAahJEFL7gh6Jyj5rQmQc7F9eHFyQ= github.com/hashicorp/consul v0.0.0-20171026175957-610f3c86a089 h1:1eDpXAxTh0iPv+1kc9/gfSI2pxRERDsTk/lNGolwHn8= github.com/hashicorp/consul v0.0.0-20171026175957-610f3c86a089/go.mod h1:mFrjN1mfidgJfYP1xrJCF+AfRhr6Eaqhb2+sfyn/OOI= -github.com/hashicorp/errwrap v0.0.0-20180715044906-d6c0cd880357/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4= github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA= github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4= -github.com/hashicorp/go-azure-helpers v0.0.0-20190129193224-166dfd221bb2 h1:VBRx+yPYUZaobnn5ANBcOUf4hhWpTHSQgftG4TcDkhI= -github.com/hashicorp/go-azure-helpers v0.0.0-20190129193224-166dfd221bb2/go.mod h1:lu62V//auUow6k0IykxLK2DCNW8qTmpm8KqhYVWattA= +github.com/hashicorp/go-azure-helpers v0.10.0 h1:KhjDnQhCqEMKlt4yH00MCevJQPJ6LkHFdSveXINO6vE= +github.com/hashicorp/go-azure-helpers v0.10.0/go.mod h1:YuAtHxm2v74s+IjQwUG88dHBJPd5jL+cXr5BGVzSKhE= github.com/hashicorp/go-checkpoint v0.5.0 h1:MFYpPZCnQqQTE18jFwSII6eUQrD/oxMFp3mlgcqk5mU= github.com/hashicorp/go-checkpoint v0.5.0/go.mod h1:7nfLNL10NsxqO4iWuW6tWW0HjZuDrwkBuEQsVcpCOgg= github.com/hashicorp/go-cleanhttp v0.5.0 h1:wvCrVc9TjDls6+YGAF2hAifE1E5U1+b4tH6KdvN3Gig= github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80= -github.com/hashicorp/go-getter v1.4.0 h1:ENHNi8494porjD0ZhIrjlAHnveSFhY7hvOJrV/fsKkw= -github.com/hashicorp/go-getter v1.4.0/go.mod h1:7qxyCd8rBfcShwsvxgIguu4KbS3l8bUCwg2Umn7RjeY= +github.com/hashicorp/go-cleanhttp v0.5.1 h1:dH3aiDG9Jvb5r5+bYHsikaOUIpcM0xvgMXVoDkXMzJM= +github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80= +github.com/hashicorp/go-getter v1.4.2-0.20200106182914-9813cbd4eb02 h1:l1KB3bHVdvegcIf5upQ5mjcHjs2qsWnKh4Yr9xgIuu8= +github.com/hashicorp/go-getter v1.4.2-0.20200106182914-9813cbd4eb02/go.mod h1:7qxyCd8rBfcShwsvxgIguu4KbS3l8bUCwg2Umn7RjeY= github.com/hashicorp/go-hclog v0.0.0-20180709165350-ff2cf002a8dd/go.mod h1:9bjs9uLqI8l75knNv3lV1kA55veR+WUPSiKIWcQHudI= github.com/hashicorp/go-hclog v0.0.0-20181001195459-61d530d6c27f h1:Yv9YzBlAETjy6AOX9eLBZ3nshNVRREgerT/3nvxlGho= github.com/hashicorp/go-hclog v0.0.0-20181001195459-61d530d6c27f/go.mod h1:5CU+agLiy3J7N7QjHK5d05KxGsuXiQLrjA0H7acj2lQ= @@ -183,7 +211,6 @@ github.com/hashicorp/go-immutable-radix v0.0.0-20180129170900-7f3cd4390caa h1:0n github.com/hashicorp/go-immutable-radix v0.0.0-20180129170900-7f3cd4390caa/go.mod h1:6ij3Z20p+OhOkCSrA0gImAWoHYQRGbnlcuk6XYTiaRw= github.com/hashicorp/go-msgpack v0.5.4 h1:SFT72YqIkOcLdWJUYcriVX7hbrZpwc/f7h8aW2NUqrA= github.com/hashicorp/go-msgpack v0.5.4/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM= -github.com/hashicorp/go-multierror v0.0.0-20180717150148-3d5d8f294aa0/go.mod h1:JMRHfdO9jKNzS/+BTlxCjKNQHg/jZAft8U7LloJvN7I= github.com/hashicorp/go-multierror v1.0.0 h1:iVjPR7a6H0tWELX5NxNe7bYopibicUzc7uPribsnS6o= github.com/hashicorp/go-multierror v1.0.0/go.mod 
h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk= github.com/hashicorp/go-plugin v1.0.1-0.20190610192547-a1bc61569a26 h1:hRho44SAoNu1CBtn5r8Q9J3rCs4ZverWZ4R+UeeNuWM= @@ -194,42 +221,41 @@ github.com/hashicorp/go-rootcerts v1.0.0 h1:Rqb66Oo1X/eSV1x66xbDccZjhJigjg0+e82k github.com/hashicorp/go-rootcerts v1.0.0/go.mod h1:K6zTfqpRlCUIjkwsN4Z+hiSfzSTQa6eBIzfwKfwNnHU= github.com/hashicorp/go-safetemp v1.0.0 h1:2HR189eFNrjHQyENnQMMpCiBAsRxzbTMIgBhEyExpmo= github.com/hashicorp/go-safetemp v1.0.0/go.mod h1:oaerMy3BhqiTbVye6QuFhFtIceqFoDHxNAB65b+Rj1I= -github.com/hashicorp/go-slug v0.3.0 h1:L0c+AvH/J64iMNF4VqRaRku2DMTEuHioPVS7kMjWIU8= -github.com/hashicorp/go-slug v0.3.0/go.mod h1:I5tq5Lv0E2xcNXNkmx7BSfzi1PsJ2cNjs3cC3LwyhK8= +github.com/hashicorp/go-slug v0.4.1 h1:/jAo8dNuLgSImoLXaX7Od7QB4TfYCVPam+OpAt5bZqc= +github.com/hashicorp/go-slug v0.4.1/go.mod h1:I5tq5Lv0E2xcNXNkmx7BSfzi1PsJ2cNjs3cC3LwyhK8= github.com/hashicorp/go-sockaddr v0.0.0-20180320115054-6d291a969b86 h1:7YOlAIO2YWnJZkQp7B5eFykaIY7C9JndqAFQyVV5BhM= github.com/hashicorp/go-sockaddr v0.0.0-20180320115054-6d291a969b86/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU= -github.com/hashicorp/go-tfe v0.3.23 h1:kd9hlFQvGubNF/CpF7T5AP/xU8uLUq8ANbI5xRDVSms= -github.com/hashicorp/go-tfe v0.3.23/go.mod h1:SuPHR+OcxvzBZNye7nGPfwZTEyd3rWPfLVbCgyZPezM= +github.com/hashicorp/go-tfe v0.3.27 h1:7XZ/ZoPyYoeuNXaWWW0mJOq016y0qb7I4Q0P/cagyu8= +github.com/hashicorp/go-tfe v0.3.27/go.mod h1:DVPSW2ogH+M9W1/i50ASgMht8cHP7NxxK0nrY9aFikQ= github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= github.com/hashicorp/go-uuid v1.0.1 h1:fv1ep09latC32wFoVwnqcnKJGnMSdBanPczbHAYm1BE= github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= github.com/hashicorp/go-version v1.1.0 h1:bPIoEKD27tNdebFGGxxYwcL4nepeY4j1QP23PFRGzg0= github.com/hashicorp/go-version v1.1.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/go-version v1.2.0 h1:3vNe/fWF5CBgRIguda1meWhsZHy3m8gCJ5wx+dIzX/E= +github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8= github.com/hashicorp/golang-lru v0.5.1 h1:0hERBMJE1eitiLkihrMvRVBYAkpHzc/J3QdDN+dAcgU= github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8= github.com/hashicorp/hcl v0.0.0-20170504190234-a4b07c25de5f h1:UdxlrJz4JOnY8W+DbLISwf2B8WXEolNRA8BGCwI9jws= github.com/hashicorp/hcl v0.0.0-20170504190234-a4b07c25de5f/go.mod h1:oZtUIOe8dh44I2q6ScRibXws4Ajl+d+nod3AaR9vL5w= -github.com/hashicorp/hcl/v2 v2.0.0 h1:efQznTz+ydmQXq3BOnRa3AXzvCeTq1P4dKj/z5GLlY8= github.com/hashicorp/hcl/v2 v2.0.0/go.mod h1:oVVDG71tEinNGYCxinCYadcmKU9bglqW9pV3txagJ90= -github.com/hashicorp/hcl2 v0.0.0-20190821123243-0c888d1241f6 h1:JImQpEeUQ+0DPFMaWzLA0GdUNPaUlCXLpfiqkSZBUfc= -github.com/hashicorp/hcl2 v0.0.0-20190821123243-0c888d1241f6/go.mod h1:Cxv+IJLuBiEhQ7pBYGEuORa0nr4U994pE8mYLuFd7v0= +github.com/hashicorp/hcl/v2 v2.3.0 h1:iRly8YaMwTBAKhn1Ybk7VSdzbnopghktCD031P8ggUE= +github.com/hashicorp/hcl/v2 v2.3.0/go.mod h1:d+FwDBbOLvpAM3Z6J7gPj/VoAGkNe/gm352ZhjJ/Zv8= github.com/hashicorp/hil v0.0.0-20190212112733-ab17b08d6590 h1:2yzhWGdgQUWZUCNK+AoO35V+HTsgEmcM4J9IkArh7PI= github.com/hashicorp/hil v0.0.0-20190212112733-ab17b08d6590/go.mod h1:n2TSygSNwsLJ76m8qFXTSc7beTb+auJxYdqrnoqwZWE= -github.com/hashicorp/logutils v1.0.0 h1:dLEQVugN8vlakKOUE3ihGLTZJRB4j+M2cdTm/ORI65Y= -github.com/hashicorp/logutils v1.0.0/go.mod 
h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64= github.com/hashicorp/memberlist v0.1.0 h1:qSsCiC0WYD39lbSitKNt40e30uorm2Ss/d4JGU1hzH8= github.com/hashicorp/memberlist v0.1.0/go.mod h1:ncdBp14cuox2iFOq3kDiquKU6fqsTBc3W6JvZwjxxsE= github.com/hashicorp/serf v0.0.0-20160124182025-e4ec8cc423bb h1:ZbgmOQt8DOg796figP87/EFCVx2v2h9yRvwHF/zceX4= github.com/hashicorp/serf v0.0.0-20160124182025-e4ec8cc423bb/go.mod h1:h/Ru6tmZazX7WO/GDmwdpS975F019L4t5ng5IgwbNrE= -github.com/hashicorp/terraform-config-inspect v0.0.0-20190821133035-82a99dc22ef4 h1:fTkL0YwjohGyN7AqsDhz6bwcGBpT+xBqi3Qhpw58Juw= -github.com/hashicorp/terraform-config-inspect v0.0.0-20190821133035-82a99dc22ef4/go.mod h1:JDmizlhaP5P0rYTTZB0reDMefAiJyfWPEtugV4in1oI= +github.com/hashicorp/terraform-config-inspect v0.0.0-20191212124732-c6ae6269b9d7 h1:Pc5TCv9mbxFN6UVX0LH6CpQrdTM5YjbVI2w15237Pjk= +github.com/hashicorp/terraform-config-inspect v0.0.0-20191212124732-c6ae6269b9d7/go.mod h1:p+ivJws3dpqbp1iP84+npOyAmTTOLMgCzrXd3GSdn/A= +github.com/hashicorp/terraform-svchost v0.0.0-20191011084731-65d371908596 h1:hjyO2JsNZUKT1ym+FAdlBEkGPevazYsmVgIMw7dVELg= +github.com/hashicorp/terraform-svchost v0.0.0-20191011084731-65d371908596/go.mod h1:kNDNcF7sN4DocDLBkQYz73HGKwN1ANB1blq4lIYLYvg= github.com/hashicorp/vault v0.10.4 h1:4x0lHxui/ZRp/B3E0Auv1QNBJpzETqHR2kQD3mHSBJU= github.com/hashicorp/vault v0.10.4/go.mod h1:KfSyffbKxoVyspOdlaGVjIuwLobi07qD1bAbosPMpP0= github.com/hashicorp/yamux v0.0.0-20180604194846-3520598351bb h1:b5rjCoWHc7eqmAS4/qyk21ZsHyb6Mxv/jykxvNTkU4M= github.com/hashicorp/yamux v0.0.0-20180604194846-3520598351bb/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM= -github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU= -github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI= github.com/jmespath/go-jmespath v0.0.0-20160202185014-0b12d6b521d8/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k= github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af h1:pmfjZENx5imkbgOkpRUYLnmbU7UEFbjtDA2hxJ1ichM= github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k= @@ -243,7 +269,6 @@ github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1 github.com/jtolds/gls v4.2.1+incompatible h1:fSuqC+Gmlu6l/ZYAoZzx2pyucC8Xza35fpRVWLVmUEE= github.com/jtolds/gls v4.2.1+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU= github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w= -github.com/kardianos/osext v0.0.0-20170510131534-ae77be60afb1/go.mod h1:1NbS8ALrpOvjt0rHPNLyCIeMtbizbir8U//inJ+zuB8= github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0 h1:iQTw/8FWTuc7uiaSepXwyf3o52HaUYcV+Tu66S3F5GA= github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0/go.mod h1:1NbS8ALrpOvjt0rHPNLyCIeMtbizbir8U//inJ+zuB8= github.com/keybase/go-crypto v0.0.0-20161004153544-93f5b35093ba h1:NARVGAAgEXvoMeNPHhPFt1SBt1VMznA3Gnz9d0qj+co= @@ -261,10 +286,16 @@ github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0 github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw= github.com/lib/pq v1.0.0 h1:X5PMW56eZitiTeO7tKzZxFCSpbFZJtkMMooicw2us9A= github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo= +github.com/likexian/gokit v0.0.0-20190309162924-0a377eecf7aa/go.mod h1:QdfYv6y6qPA9pbBA2qXtoT8BMKha6UyNbxWGWl/9Jfk= +github.com/likexian/gokit 
v0.0.0-20190418170008-ace88ad0983b/go.mod h1:KKqSnk/VVSW8kEyO2vVCXoanzEutKdlBAPohmGXkxCk= +github.com/likexian/gokit v0.0.0-20190501133040-e77ea8b19cdc/go.mod h1:3kvONayqCaj+UgrRZGpgfXzHdMYCAO0KAt4/8n0L57Y= +github.com/likexian/gokit v0.20.15 h1:DgtIqqTRFqtbiLJFzuRESwVrxWxfs8OlY6hnPYBa3BM= +github.com/likexian/gokit v0.20.15/go.mod h1:kn+nTv3tqh6yhor9BC4Lfiu58SmH8NmQ2PmEl+uM6nU= +github.com/likexian/simplejson-go v0.0.0-20190409170913-40473a74d76d/go.mod h1:Typ1BfnATYtZ/+/shXfFYLrovhFyuKvzwrdOnIDHlmg= +github.com/likexian/simplejson-go v0.0.0-20190419151922-c1f9f0b4f084/go.mod h1:U4O1vIJvIKwbMZKUJ62lppfdvkCdVd2nfMimHK81eec= +github.com/likexian/simplejson-go v0.0.0-20190502021454-d8787b4bfa0b/go.mod h1:3BWwtmKP9cXWwYCr5bkoVDEfLywacOv0s06OBEDpyt8= github.com/lusis/go-artifactory v0.0.0-20160115162124-7e4ce345df82 h1:wnfcqULT+N2seWf6y4yHzmi7GD2kNx4Ute0qArktD48= github.com/lusis/go-artifactory v0.0.0-20160115162124-7e4ce345df82/go.mod h1:y54tfGmO3NKssKveTEFFzH8C/akrSOy/iW9qEAUDV84= -github.com/marstr/guid v1.1.0 h1:/M4H/1G4avsieL6BbUwCOBzulmoeKVP5ux/3mQNnbyI= -github.com/marstr/guid v1.1.0/go.mod h1:74gB1z2wpxxInTG6yaqA7KrtM0NZ+RbrcqDvYHefzho= github.com/masterzen/simplexml v0.0.0-20160608183007-4572e39b1ab9 h1:SmVbOZFWAlyQshuMfOkiAx1f5oUTsOGG5IXplAEYeeM= github.com/masterzen/simplexml v0.0.0-20160608183007-4572e39b1ab9/go.mod h1:kCEbxUJlNDEBNbdQMkPSp6yaKcRXVI6f4ddk8Riv4bc= github.com/masterzen/winrm v0.0.0-20190223112901-5e5c9a7fe54b h1:/1RFh2SLCJ+tEnT73+Fh5R2AO89sQqs8ba7o+hx1G0Y= @@ -304,8 +335,8 @@ github.com/mitchellh/hashstructure v1.0.0 h1:ZkRJX1CyOoTkar7p/mLS5TZU4nJ1Rn/F8u9 github.com/mitchellh/hashstructure v1.0.0/go.mod h1:QjSHrPWS+BGUVBYkbTZWEnOh3G1DutKwClXU/ABz6AQ= github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE= github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= -github.com/mitchellh/panicwrap v0.0.0-20190213213626-17011010aaa4 h1:jw9tsdJ1FQmUkyTXdIF/nByTX+mMnnp16glnvGZMsC4= -github.com/mitchellh/panicwrap v0.0.0-20190213213626-17011010aaa4/go.mod h1:YYMf4xtQnR8LRC0vKi3afvQ5QwRPQ17zjcpkBCufb+I= +github.com/mitchellh/panicwrap v1.0.0 h1:67zIyVakCIvcs69A0FGfZjBdPleaonSgGlXRSRlb6fE= +github.com/mitchellh/panicwrap v1.0.0/go.mod h1:pKvZHwWrZowLUzftuFq7coarnxbBXU4aQh3N0BJOeeA= github.com/mitchellh/prefixedio v0.0.0-20190213213902-5733675afd51 h1:eD92Am0Qf3rqhsOeA1zwBHSfRkoHrt4o6uORamdmJP8= github.com/mitchellh/prefixedio v0.0.0-20190213213902-5733675afd51/go.mod h1:kB1naBgV9ORnkiTVeyJOI1DavaJkG4oNIq0Af6ZVKUo= github.com/mitchellh/reflectwalk v1.0.0 h1:9D+8oIskB4VJBN5SFlmc27fSlIBZaov1Wpk/IfikLNY= @@ -314,14 +345,13 @@ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= github.com/modern-go/reflect2 v1.0.1 h1:9f412s+6RmYXLWZSEzVVgPGK7C2PphHj5RJrvfx9AWI= github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= +github.com/mozillazg/go-httpheader v0.2.1 h1:geV7TrjbL8KXSyvghnFm+NyTux/hxwueTSrwhe88TQQ= +github.com/mozillazg/go-httpheader v0.2.1/go.mod h1:jJ8xECTlalr6ValeXYdOF8fFUISeBAdw6E61aqQma60= github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= github.com/nu7hatch/gouuid v0.0.0-20131221200532-179d4d0c4d8d h1:VhgPp6v9qf9Agr/56bj7Y/xa04UccTW04VP0Qed4vnQ= github.com/nu7hatch/gouuid v0.0.0-20131221200532-179d4d0c4d8d/go.mod 
h1:YUTz3bUH2ZwIWBy3CJBeOBEugqcmXREj14T+iG/4k4U= github.com/oklog/run v1.0.0 h1:Ru7dDtJNOyC66gQ5dQmaCa0qIsAUFY3sFpK1Xk8igrw= github.com/oklog/run v1.0.0/go.mod h1:dlhp/R75TPv97u0XWUtDeV/lRKWPKSdTuV0TZvrmrQA= -github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= -github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= -github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= github.com/packer-community/winrmcp v0.0.0-20180102160824-81144009af58 h1:m3CEgv3ah1Rhy82L+c0QG/U3VyY1UsvsIdkh0/rU97Y= github.com/packer-community/winrmcp v0.0.0-20180102160824-81144009af58/go.mod h1:f6Izs6JvFTdnRbziASagjZ2vmf55NSIkC/weStxCHqk= github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c h1:Lgl0gzECD8GnQ5QCWA8o6BtfL6mDH5rQgM4/fX3avOs= @@ -364,6 +394,7 @@ github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4k github.com/spf13/afero v1.2.1 h1:qgMbHoJbPbw579P+1zVY+6n4nIFuIchaIjzZ/I/Yq8M= github.com/spf13/afero v1.2.1/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk= github.com/spf13/pflag v1.0.2/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= +github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg= github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= @@ -372,6 +403,10 @@ github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0 github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= github.com/svanharmelen/jsonapi v0.0.0-20180618144545-0c0828c3f16d h1:Z4EH+5EffvBEhh37F0C0DnpklTMh00JOkjW5zK3ofBI= github.com/svanharmelen/jsonapi v0.0.0-20180618144545-0c0828c3f16d/go.mod h1:BSTlc8jOjh0niykqEGVXOLXdi9o0r0kR8tCYiMvjFgw= +github.com/tencentcloud/tencentcloud-sdk-go v3.0.82+incompatible h1:5Td2b0yfaOvw9M9nZ5Oav6Li9bxUNxt4DgxMfIPpsa0= +github.com/tencentcloud/tencentcloud-sdk-go v3.0.82+incompatible/go.mod h1:0PfYow01SHPMhKY31xa+EFz2RStxIqj6JFAJS+IkCi4= +github.com/tencentyun/cos-go-sdk-v5 v0.0.0-20190808065407-f07404cefc8c h1:iRD1CqtWUjgEVEmjwTMbP1DMzz1HRytOsgx/rlw/vNs= +github.com/tencentyun/cos-go-sdk-v5 v0.0.0-20190808065407-f07404cefc8c/go.mod h1:wk2XFUg6egk4tSDNZtXeKfe2G6690UVyt163PuUxBZk= github.com/terraform-providers/terraform-provider-openstack v1.15.0 h1:adpjqej+F8BAX9dHmuPF47sUIkgifeqBu6p7iCsyj0Y= github.com/terraform-providers/terraform-provider-openstack v1.15.0/go.mod h1:2aQ6n/BtChAl1y2S60vebhyJyZXBsuAI5G4+lHrT1Ew= github.com/tmc/grpc-websocket-proxy v0.0.0-20171017195756-830351dc03c6 h1:lYIiVDtZnyTWlNwiAxLj0bbpTcx1BWCFhXjfsvmPdNc= @@ -392,6 +427,12 @@ github.com/xlab/treeprint v0.0.0-20161029104018-1d6e34225557/go.mod h1:ce1O1j6Ut github.com/zclconf/go-cty v1.0.0/go.mod h1:xnAOWiHeOqg2nWS62VtQ7pbOu17FtxJNW8RLEih+O3s= github.com/zclconf/go-cty v1.1.0 h1:uJwc9HiBOCpoKIObTQaLR+tsEXx1HBHnOsOOpcdhZgw= github.com/zclconf/go-cty v1.1.0/go.mod h1:xnAOWiHeOqg2nWS62VtQ7pbOu17FtxJNW8RLEih+O3s= +github.com/zclconf/go-cty v1.2.0 h1:sPHsy7ADcIZQP3vILvTjrh74ZA175TFP5vqiNK1UmlI= +github.com/zclconf/go-cty v1.2.0/go.mod h1:hOPWgoHbaTUnI5k4D2ld+GRpFJSCe6bCM7m1q/N4PQ8= +github.com/zclconf/go-cty v1.3.0 h1:ig1G6+rJHX6jZDRjw4LUD3J8q7SBAagcmbM7bQ8ijmI= +github.com/zclconf/go-cty v1.3.0/go.mod h1:YO23e2L18AG+ZYQfSobnY4G65nvwvprPCxBHkufUH1k= +github.com/zclconf/go-cty v1.3.1 
h1:QIOZl+CKKdkv4l2w3lG23nNzXgLoxsWLSEdg1MlX4p0= +github.com/zclconf/go-cty v1.3.1/go.mod h1:YO23e2L18AG+ZYQfSobnY4G65nvwvprPCxBHkufUH1k= github.com/zclconf/go-cty-yaml v1.0.1 h1:up11wlgAaDvlAGENcFDnZgkn0qUJurso7k6EpURKNF8= github.com/zclconf/go-cty-yaml v1.0.1/go.mod h1:IP3Ylp0wQpYm50IHK8OZWKMu6sPJIUgKa8XhiVHura0= go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU= @@ -404,7 +445,6 @@ go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/ go.uber.org/zap v1.9.1 h1:XCJQEf3W6eZaVwhRBof6ImoYGJSITeKWsyeh3HFu/5o= go.uber.org/zap v1.9.1/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q= golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= -golang.org/x/crypto v0.0.0-20181112202954-3d3f9f413869/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20190219172222-a4c6cb3142f2/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20190222235706-ffb98f73852f/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= @@ -424,7 +464,6 @@ golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180811021610-c39426892332/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= @@ -433,11 +472,12 @@ golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73r golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190502183928-7f726cade0ab/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= golang.org/x/net v0.0.0-20190620200207-3b0461eec859 h1:R/3boaszxrf1GEUWTVDzSKVwLmSJpwZ1yqXm8j0v2QI= golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20191009170851-d66e71096ffb h1:TR699M2v0qoKTOHxeLgp6zPqaQNs74f01a/ob9W0qko= +golang.org/x/net v0.0.0-20191009170851-d66e71096ffb/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45 h1:SVwTIAaPC2U/AvvLNZ2a7OVsmBpC8L5BlwK1whH3hm0= @@ -450,7 +490,6 @@ 
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJ golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190129075346-302c3dd5f1cc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= @@ -482,6 +521,7 @@ golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3 golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0 h1:Dh6fw+p6FyRl5x/FvNswO1ji0lIGzm3KP8Y9VkS9PTE= golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE= google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M= @@ -511,11 +551,9 @@ gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8 gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY= gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/cheggaaa/pb.v1 v1.0.27/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw= -gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys= gopkg.in/ini.v1 v1.42.0 h1:7N3gPTt50s8GuLortA00n8AqRTk75qOP98+mTPpgzRk= gopkg.in/ini.v1 v1.42.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k= gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo= -gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw= gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74= gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw= @@ -523,5 +561,4 @@ gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= -howett.net/plist v0.0.0-20181124034731-591f970eefbb/go.mod h1:vMygbs4qMhSZSc4lCUl2OEE+rDiIIJAIdR4m7MiMcm0= rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8= diff --git a/helper/logging/indent.go b/helper/logging/indent.go new file mode 100644 index 000000000..e0da0d7c7 --- /dev/null +++ b/helper/logging/indent.go @@ -0,0 +1,23 @@ +package logging + +import ( + "strings" +) 
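+
+// Example (editor's illustration, not part of the original change): callers
+// are expected to pass multi-line output through Indent before logging it,
+// so that every line gains a leading two-space prefix and the level filter
+// in level.go treats the later lines as continuations of the first:
+//
+//	log.Printf("[DEBUG] raw response:\n%s", logging.Indent(body))
+//
+// Here "body" stands for any hypothetical multi-line string.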
+
+// Indent adds two spaces to the beginning of each line of the given string,
+// with the goal of making the log level filter understand it as a line
+// continuation rather than possibly as new log lines.
+func Indent(s string) string {
+	var b strings.Builder
+	for len(s) > 0 {
+		end := strings.IndexByte(s, '\n')
+		if end == -1 {
+			end = len(s) - 1
+		}
+		var l string
+		l, s = s[:end+1], s[end+1:]
+		b.WriteString("  ")
+		b.WriteString(l)
+	}
+	return b.String()
+}
diff --git a/helper/logging/indent_test.go b/helper/logging/indent_test.go
new file mode 100644
index 000000000..46b12a42c
--- /dev/null
+++ b/helper/logging/indent_test.go
@@ -0,0 +1,15 @@
+package logging
+
+import (
+	"testing"
+)
+
+func TestIndent(t *testing.T) {
+	s := "hello\n world\ngoodbye\n moon"
+	got := Indent(s)
+	want := "  hello\n   world\n  goodbye\n   moon"
+
+	if got != want {
+		t.Errorf("wrong result\ngot:\n%s\n\nwant:\n%s", got, want)
+	}
+}
diff --git a/helper/logging/level.go b/helper/logging/level.go
new file mode 100644
index 000000000..0dc4dfe8d
--- /dev/null
+++ b/helper/logging/level.go
@@ -0,0 +1,159 @@
+package logging
+
+import (
+	"bytes"
+	"io"
+	"sync"
+)
+
+// LogLevel is a special string, conventionally written all in uppercase, that
+// can be used to mark a log line for filtering and to specify filtering
+// levels in the LevelFilter type.
+type LogLevel string
+
+// LevelFilter is an io.Writer that can be used with a logger that
+// will attempt to filter out log messages that aren't at least a certain
+// level.
+//
+// This filtering is HEURISTIC-BASED, and so will not be 100% reliable. The
+// assumptions it makes are:
+//
+// - Individual log messages are never split across multiple calls to the
+//   Write method.
+//
+// - Messages that carry levels are marked by a sequence starting with "[",
+//   then the level name string, and then "]". Any message without a sequence
+//   like this is an un-levelled message, and is not subject to filtering.
+//
+// - Each \n-delimited line in a write is a separate log message, unless a
+//   line starts with at least one space in which case it is interpreted
+//   as a continuation of the previous line.
+//
+// - If a log line starts with a non-whitespace character that isn't a digit
+//   then it's recognized as a degenerate continuation, because "real" log
+//   lines should start with a date/time and thus always have a leading
+//   digit. (This also cleans up after some situations where the assumption
+//   that messages arrive atomically isn't met, which is sadly sometimes
+//   true for longer messages that trip over some buffering behavior in
+//   panicwrap.)
+//
+// Because logging is a cross-cutting concern and not fully under the control
+// of Terraform itself, there will certainly be cases where the above
+// heuristics will fail. For example, it is likely that LevelFilter will
+// occasionally misinterpret a continuation line as a new message because the
+// code generating it doesn't know about our indentation convention.
+//
+// Our goal here is just to make a best effort to reduce the log volume,
+// accepting that the results will not be 100% correct.
+//
+// Logging calls within Terraform Core should follow the above conventions so
+// that the log output is broadly correct, however.
+//
+// Once the filter is in use somewhere, it is not safe to modify
+// the structure.
+type LevelFilter struct {
+	// Levels is the list of log levels, in increasing order of
+	// severity. Example might be: {"DEBUG", "WARN", "ERROR"}.
+	Levels []LogLevel
+
+	// MinLevel is the minimum level allowed through
+	MinLevel LogLevel
+
+	// The underlying io.Writer where log messages that pass the filter
+	// will be sent.
+	Writer io.Writer
+
+	badLevels map[LogLevel]struct{}
+	show      bool
+	once      sync.Once
+}
+
+// Check reports whether a given line would be included by the level
+// filter.
+func (f *LevelFilter) Check(line []byte) bool {
+	f.once.Do(f.init)
+
+	// Check for a log level
+	var level LogLevel
+	x := bytes.IndexByte(line, '[')
+	if x >= 0 {
+		y := bytes.IndexByte(line[x:], ']')
+		if y >= 0 {
+			level = LogLevel(line[x+1 : x+y])
+		}
+	}
+
+	_, ok := f.badLevels[level]
+	return !ok
+}
+
+// Write is a specialized implementation of io.Writer suitable for being
+// the output of a logger from the "log" package.
+//
+// This Writer implementation assumes that it will only receive byte slices
+// containing one or more entire lines of log output, each one terminated by
+// a newline. This is compatible with the behavior of the "log" package
+// directly, and is also tolerant of intermediaries that might buffer multiple
+// separate writes together, as long as no individual log line is ever
+// split into multiple slices.
+//
+// Behavior is undefined if any log line is split across multiple writes or
+// written without a trailing '\n' delimiter.
+func (f *LevelFilter) Write(p []byte) (n int, err error) {
+	for len(p) > 0 {
+		// Split at the first \n, inclusive
+		idx := bytes.IndexByte(p, '\n')
+		if idx == -1 {
+			// Invalid, undelimited write. We'll tolerate it assuming that
+			// our assumptions are being violated, but the results may be
+			// non-ideal.
+			idx = len(p) - 1
+			break
+		}
+		var l []byte
+		l, p = p[:idx+1], p[idx+1:]
+		// Lines starting with characters other than decimal digits (including
+		// whitespace) are assumed to be continuation lines. This is an
+		// imprecise heuristic, but experimentally it seems to generate
+		// "good enough" results from Terraform Core's own logging. Its mileage
+		// may vary with output from other systems.
+		if l[0] >= '0' && l[0] <= '9' {
+			f.show = f.Check(l)
+		}
+		if f.show {
+			_, err = f.Writer.Write(l)
+			if err != nil {
+				// Technically it's not correct to say we've written the whole
+				// buffer, but for our purposes here it's good enough as we're
+				// only implementing io.Writer enough to satisfy logging
+				// use-cases.
+				return len(p), err
+			}
+		}
+	}
+
+	// We always behave as if we wrote the whole of the buffer, even if
+	// we actually skipped some lines. We're only implementing io.Writer
+	// enough to satisfy logging use-cases.
+ return len(p), nil +} + +// SetMinLevel is used to update the minimum log level +func (f *LevelFilter) SetMinLevel(min LogLevel) { + f.MinLevel = min + f.init() +} + +func (f *LevelFilter) init() { + badLevels := make(map[LogLevel]struct{}) + for _, level := range f.Levels { + if level == f.MinLevel { + break + } + badLevels[level] = struct{}{} + } + f.badLevels = badLevels + f.show = true +} diff --git a/helper/logging/level_test.go b/helper/logging/level_test.go new file mode 100644 index 000000000..baa94748e --- /dev/null +++ b/helper/logging/level_test.go @@ -0,0 +1,93 @@ +package logging + +import ( + "bytes" + "io" + "log" + "testing" +) + +func TestLevelFilter_impl(t *testing.T) { + var _ io.Writer = new(LevelFilter) +} + +func TestLevelFilter(t *testing.T) { + buf := new(bytes.Buffer) + filter := &LevelFilter{ + Levels: []LogLevel{"DEBUG", "WARN", "ERROR"}, + MinLevel: "WARN", + Writer: buf, + } + + logger := log.New(filter, "", 0) + logger.Print("2019/01/01 00:00:00 [WARN] foo") + logger.Println("2019/01/01 00:00:00 [ERROR] bar\n2019/01/01 00:00:00 [DEBUG] buzz") + logger.Println("2019/01/01 00:00:00 [DEBUG] baz\n continuation\n2019/01/01 00:00:00 [WARN] buzz\n more\n2019/01/01 00:00:00 [DEBUG] fizz") + + result := buf.String() + expected := "2019/01/01 00:00:00 [WARN] foo\n2019/01/01 00:00:00 [ERROR] bar\n2019/01/01 00:00:00 [WARN] buzz\n more\n" + if result != expected { + t.Fatalf("wrong result\ngot:\n%s\nwant:\n%s", result, expected) + } +} + +func TestLevelFilterCheck(t *testing.T) { + filter := &LevelFilter{ + Levels: []LogLevel{"DEBUG", "WARN", "ERROR"}, + MinLevel: "WARN", + Writer: nil, + } + + testCases := []struct { + line string + check bool + }{ + {"[WARN] foo\n", true}, + {"[ERROR] bar\n", true}, + {"[DEBUG] baz\n", false}, + {"[WARN] buzz\n", true}, + } + + for _, testCase := range testCases { + result := filter.Check([]byte(testCase.line)) + if result != testCase.check { + t.Errorf("Fail: %s", testCase.line) + } + } +} + +func TestLevelFilter_SetMinLevel(t *testing.T) { + filter := &LevelFilter{ + Levels: []LogLevel{"DEBUG", "WARN", "ERROR"}, + MinLevel: "ERROR", + Writer: nil, + } + + testCases := []struct { + line string + checkBefore bool + checkAfter bool + }{ + {"[WARN] foo\n", false, true}, + {"[ERROR] bar\n", true, true}, + {"[DEBUG] baz\n", false, false}, + {"[WARN] buzz\n", false, true}, + } + + for _, testCase := range testCases { + result := filter.Check([]byte(testCase.line)) + if result != testCase.checkBefore { + t.Errorf("Fail: %s", testCase.line) + } + } + + // Update the minimum level to WARN + filter.SetMinLevel("WARN") + + for _, testCase := range testCases { + result := filter.Check([]byte(testCase.line)) + if result != testCase.checkAfter { + t.Errorf("Fail: %s", testCase.line) + } + } +} diff --git a/helper/logging/logging.go b/helper/logging/logging.go index 6bd92f777..75627cf02 100644 --- a/helper/logging/logging.go +++ b/helper/logging/logging.go @@ -7,8 +7,6 @@ import ( "os" "strings" "syscall" - - "github.com/hashicorp/logutils" ) // These are the environmental variables that determine if we log, and if @@ -18,13 +16,14 @@ const ( EnvLogFile = "TF_LOG_PATH" // Set to a file ) -var ValidLevels = []logutils.LogLevel{"TRACE", "DEBUG", "INFO", "WARN", "ERROR"} +// ValidLevels are the log level names that Terraform recognizes. +var ValidLevels = []LogLevel{"TRACE", "DEBUG", "INFO", "WARN", "ERROR"} // LogOutput determines where we should send logs (if anywhere) and the log level. 
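+// (Editor's note, an illustrative sketch rather than part of this change:
+// the returned writer is typically installed as the standard logger's
+// destination, mirroring what SetOutput in this file does, after which
+// level-tagged lines are kept or dropped according to TF_LOG.)
+//
+//	out, err := logging.LogOutput()
+//	if err == nil {
+//		log.SetOutput(out)
+//	}
+//	log.Println("[INFO] kept only if TF_LOG permits INFO")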
 func LogOutput() (logOutput io.Writer, err error) {
 	logOutput = ioutil.Discard
 
-	logLevel := LogLevel()
+	logLevel := CurrentLogLevel()
 	if logLevel == "" {
 		return
 	}
@@ -38,14 +37,21 @@ func LogOutput() (logOutput io.Writer, err error) {
 		}
 	}
 
-	// This was the default since the beginning
-	logOutput = &logutils.LevelFilter{
+	if logLevel == "TRACE" {
+		// Just pass through logs directly then, without any level filtering at all.
+		return logOutput, nil
+	}
+
+	// Otherwise we'll use our level filter, which is a heuristic-based
+	// best effort thing that is not totally reliable but helps to reduce
+	// the volume of logs in some cases.
+	logOutput = &LevelFilter{
 		Levels:   ValidLevels,
-		MinLevel: logutils.LogLevel(logLevel),
+		MinLevel: LogLevel(logLevel),
 		Writer:   logOutput,
 	}
 
-	return
+	return logOutput, nil
 }
 
 // SetOutput checks for a log destination with LogOutput, and calls
@@ -64,8 +70,8 @@ func SetOutput() {
 	log.SetOutput(out)
 }
 
-// LogLevel returns the current log level string based the environment vars
-func LogLevel() string {
+// CurrentLogLevel returns the current log level string based on the environment vars
+func CurrentLogLevel() string {
 	envLevel := os.Getenv(EnvLog)
 	if envLevel == "" {
 		return ""
@@ -79,13 +85,16 @@ func CurrentLogLevel() string {
 		log.Printf("[WARN] Invalid log level: %q. Defaulting to level: TRACE. Valid levels are: %+v", envLevel, ValidLevels)
 	}
+	if logLevel != "TRACE" {
+		log.Printf("[WARN] Log levels other than TRACE are currently unreliable, and are supported only for backward compatibility.\n  Use TF_LOG=TRACE to see Terraform's internal logs.\n  ----")
+	}
 
 	return logLevel
 }
 
 // IsDebugOrHigher returns whether or not the current log level is debug or trace
 func IsDebugOrHigher() bool {
-	level := string(LogLevel())
+	level := string(CurrentLogLevel())
 	return level == "DEBUG" || level == "TRACE"
 }
diff --git a/helper/plugin/grpc_provisioner.go b/helper/plugin/grpc_provisioner.go
index 14494e462..088e94e4a 100644
--- a/helper/plugin/grpc_provisioner.go
+++ b/helper/plugin/grpc_provisioner.go
@@ -2,6 +2,8 @@ package plugin
 
 import (
 	"log"
+	"strings"
+	"unicode/utf8"
 
 	"github.com/hashicorp/terraform/helper/schema"
 	proto "github.com/hashicorp/terraform/internal/tfplugin5"
@@ -90,7 +92,7 @@ type uiOutput struct {
 
 func (o uiOutput) Output(s string) {
 	err := o.srv.Send(&proto.ProvisionResource_Response{
-		Output: s,
+		Output: toValidUTF8(s, string(utf8.RuneError)),
 	})
 	if err != nil {
 		log.Printf("[ERROR] %s", err)
@@ -145,3 +147,55 @@ func (s *GRPCProvisionerServer) Stop(_ context.Context, req *proto.Stop_Request)
 
 	return resp, nil
 }
+
+// FIXME: backported from go1.13 strings package, remove once terraform is
+// using go >= 1.13
+// ToValidUTF8 returns a copy of the string s with each run of invalid UTF-8 byte sequences
+// replaced by the replacement string, which may be empty.
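+//
+// For example (editor's illustration):
+//
+//	toValidUTF8("foo\xc3\x28bar", string(utf8.RuneError))
+//
+// yields "foo\uFFFD(bar": the lone 0xc3 byte is an invalid sequence and is
+// replaced, while the valid '(' (0x28) and the rest pass through unchanged.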
+func toValidUTF8(s, replacement string) string {
+	var b strings.Builder
+
+	for i, c := range s {
+		if c != utf8.RuneError {
+			continue
+		}
+
+		_, wid := utf8.DecodeRuneInString(s[i:])
+		if wid == 1 {
+			b.Grow(len(s) + len(replacement))
+			b.WriteString(s[:i])
+			s = s[i:]
+			break
+		}
+	}
+
+	// Fast path for unchanged input
+	if b.Cap() == 0 { // didn't call b.Grow above
+		return s
+	}
+
+	invalid := false // previous byte was from an invalid UTF-8 sequence
+	for i := 0; i < len(s); {
+		c := s[i]
+		if c < utf8.RuneSelf {
+			i++
+			invalid = false
+			b.WriteByte(c)
+			continue
+		}
+		_, wid := utf8.DecodeRuneInString(s[i:])
+		if wid == 1 {
+			i++
+			if !invalid {
+				invalid = true
+				b.WriteString(replacement)
+			}
+			continue
+		}
+		invalid = false
+		b.WriteString(s[i : i+wid])
+		i += wid
+	}
+
+	return b.String()
+}
diff --git a/helper/plugin/grpc_provisioner_test.go b/helper/plugin/grpc_provisioner_test.go
index c64045ab4..9b38daf4a 100644
--- a/helper/plugin/grpc_provisioner_test.go
+++ b/helper/plugin/grpc_provisioner_test.go
@@ -1,5 +1,82 @@
 package plugin
 
-import proto "github.com/hashicorp/terraform/internal/tfplugin5"
+import (
+	"testing"
+	"unicode/utf8"
+
+	"github.com/golang/mock/gomock"
+	"github.com/hashicorp/terraform/helper/schema"
+	proto "github.com/hashicorp/terraform/internal/tfplugin5"
+	mockproto "github.com/hashicorp/terraform/plugin/mock_proto"
+	"github.com/hashicorp/terraform/terraform"
+	context "golang.org/x/net/context"
+)
 
 var _ proto.ProvisionerServer = (*GRPCProvisionerServer)(nil)
+
+type validUTF8Matcher string
+
+func (m validUTF8Matcher) Matches(x interface{}) bool {
+	resp := x.(*proto.ProvisionResource_Response)
+	return utf8.Valid([]byte(resp.Output))
+}
+
+func (m validUTF8Matcher) String() string {
+	return string(m)
+}
+
+func mockProvisionerServer(t *testing.T, c *gomock.Controller) *mockproto.MockProvisioner_ProvisionResourceServer {
+	server := mockproto.NewMockProvisioner_ProvisionResourceServer(c)
+
+	server.EXPECT().Send(
+		validUTF8Matcher("check for valid utf8"),
+	).Return(nil)
+
+	return server
+}
+
+// ensure that a provisioner cannot return invalid UTF-8, which isn't allowed
+// in the grpc protocol.
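+// (Editor's note: the msgpack literals below are hand-encoded fixtures.
+// "\x81\xa3foo\x01" decodes as the one-entry map {"foo": 1}: 0x81 is a
+// fixmap of length one, 0xa3 introduces the three-byte string "foo", and
+// 0x01 is the integer 1. "\x81\xa3foo\xa4host" likewise decodes as
+// {"foo": "host"}.)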
+func TestProvisionerInvalidUTF8(t *testing.T) { + p := &schema.Provisioner{ + ConnSchema: map[string]*schema.Schema{ + "foo": { + Type: schema.TypeString, + Optional: true, + }, + }, + + Schema: map[string]*schema.Schema{ + "foo": { + Type: schema.TypeInt, + Optional: true, + }, + }, + + ApplyFunc: func(ctx context.Context) error { + out := ctx.Value(schema.ProvOutputKey).(terraform.UIOutput) + out.Output("invalid \xc3\x28\n") + return nil + }, + } + + ctrl := gomock.NewController(t) + defer ctrl.Finish() + + srv := mockProvisionerServer(t, ctrl) + cfg := &proto.DynamicValue{ + Msgpack: []byte("\x81\xa3foo\x01"), + } + conn := &proto.DynamicValue{ + Msgpack: []byte("\x81\xa3foo\xa4host"), + } + provisionerServer := NewGRPCProvisionerServerShim(p) + req := &proto.ProvisionResource_Request{ + Config: cfg, + Connection: conn, + } + + if err := provisionerServer.ProvisionResource(req, srv); err != nil { + t.Fatal(err) + } +} diff --git a/helper/resource/state_shim.go b/helper/resource/state_shim.go index 5ddd02020..fb7ed4ad0 100644 --- a/helper/resource/state_shim.go +++ b/helper/resource/state_shim.go @@ -48,14 +48,14 @@ func shimNewState(newState *states.State, providers map[string]terraform.Resourc for _, res := range newMod.Resources { resType := res.Addr.Type - providerType := res.ProviderConfig.ProviderConfig.Type + providerType := res.ProviderConfig.Provider.Type resource := getResource(providers, providerType, res.Addr) for key, i := range res.Instances { resState := &terraform.ResourceState{ Type: resType, - Provider: res.ProviderConfig.String(), + Provider: res.ProviderConfig.LegacyString(), } // We should always have a Current instance here, but be safe about checking. @@ -87,7 +87,7 @@ func shimNewState(newState *states.State, providers map[string]terraform.Resourc resState.Primary.Meta["schema_version"] = i.Current.SchemaVersion } - for _, dep := range i.Current.Dependencies { + for _, dep := range i.Current.DependsOn { resState.Dependencies = append(resState.Dependencies, dep.String()) } diff --git a/helper/resource/state_shim_test.go b/helper/resource/state_shim_test.go index e87a87e9e..3894c7392 100644 --- a/helper/resource/state_shim_test.go +++ b/helper/resource/state_shim_test.go @@ -31,7 +31,7 @@ func TestStateShim(t *testing.T) { Status: states.ObjectReady, AttrsFlat: map[string]string{"id": "foo", "bazzle": "dazzle"}, SchemaVersion: 7, - Dependencies: []addrs.Referenceable{ + DependsOn: []addrs.Referenceable{ addrs.ResourceInstance{ Resource: addrs.Resource{ Mode: 'M', @@ -41,9 +41,10 @@ func TestStateShim(t *testing.T) { }, }, }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) rootModule.SetResourceInstanceCurrent( addrs.Resource{ @@ -52,13 +53,14 @@ func TestStateShim(t *testing.T) { Name: "baz", }.Instance(addrs.NoKey), &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsFlat: map[string]string{"id": "baz", "bazzle": "dazzle"}, - Dependencies: []addrs.Referenceable{}, + Status: states.ObjectReady, + AttrsFlat: map[string]string{"id": "baz", "bazzle": "dazzle"}, + DependsOn: []addrs.Referenceable{}, + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), ) childInstance := addrs.RootModuleInstance.Child("child", addrs.NoKey) @@ -70,13 +72,14 @@ func TestStateShim(t 
*testing.T) { Name: "foo", }.Instance(addrs.NoKey), &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{"id": "bar", "fuzzle":"wuzzle"}`), - Dependencies: []addrs.Referenceable{}, + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id": "bar", "fuzzle":"wuzzle"}`), + DependsOn: []addrs.Referenceable{}, + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: childInstance, }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(childInstance), ) childModule.SetResourceInstanceCurrent( addrs.Resource{ @@ -87,7 +90,7 @@ func TestStateShim(t *testing.T) { &states.ResourceInstanceObjectSrc{ Status: states.ObjectReady, AttrsJSON: []byte(`{"id": "bar", "fizzle":"wizzle"}`), - Dependencies: []addrs.Referenceable{ + DependsOn: []addrs.Referenceable{ addrs.ResourceInstance{ Resource: addrs.Resource{ Mode: 'D', @@ -97,9 +100,10 @@ func TestStateShim(t *testing.T) { }, }, }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(childInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: childInstance, + }, ) childModule.SetResourceInstanceDeposed( @@ -112,7 +116,7 @@ func TestStateShim(t *testing.T) { &states.ResourceInstanceObjectSrc{ Status: states.ObjectReady, AttrsFlat: map[string]string{"id": "old", "fizzle": "wizzle"}, - Dependencies: []addrs.Referenceable{ + DependsOn: []addrs.Referenceable{ addrs.ResourceInstance{ Resource: addrs.Resource{ Mode: 'D', @@ -122,9 +126,10 @@ func TestStateShim(t *testing.T) { }, }, }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(childInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: childInstance, + }, ) childModule.SetResourceInstanceCurrent( @@ -134,13 +139,14 @@ func TestStateShim(t *testing.T) { Name: "lots", }.Instance(addrs.IntKey(0)), &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsFlat: map[string]string{"id": "0", "bazzle": "dazzle"}, - Dependencies: []addrs.Referenceable{}, + Status: states.ObjectReady, + AttrsFlat: map[string]string{"id": "0", "bazzle": "dazzle"}, + DependsOn: []addrs.Referenceable{}, + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: childInstance, }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(childInstance), ) childModule.SetResourceInstanceCurrent( addrs.Resource{ @@ -149,13 +155,14 @@ func TestStateShim(t *testing.T) { Name: "lots", }.Instance(addrs.IntKey(1)), &states.ResourceInstanceObjectSrc{ - Status: states.ObjectTainted, - AttrsFlat: map[string]string{"id": "1", "bazzle": "dazzle"}, - Dependencies: []addrs.Referenceable{}, + Status: states.ObjectTainted, + AttrsFlat: map[string]string{"id": "1", "bazzle": "dazzle"}, + DependsOn: []addrs.Referenceable{}, + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: childInstance, }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(childInstance), ) childModule.SetResourceInstanceCurrent( @@ -165,13 +172,14 @@ func TestStateShim(t *testing.T) { Name: "single_count", }.Instance(addrs.IntKey(0)), &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{"id": "single", "bazzle":"dazzle"}`), - Dependencies: []addrs.Referenceable{}, + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id": "single", "bazzle":"dazzle"}`), + DependsOn: []addrs.Referenceable{}, + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: childInstance, }, - addrs.ProviderConfig{ - Type: "test", - 
}.Absolute(childInstance), ) expected := &terraform.State{ @@ -321,6 +329,6 @@ func TestStateShim(t *testing.T) { } if !expected.Equal(shimmed) { - t.Fatalf("wrong result state\ngot:\n%s\n\nwant:\n%s", expected, shimmed) + t.Fatalf("wrong result state\ngot:\n%s\n\nwant:\n%s", shimmed, expected) } } diff --git a/helper/resource/testing.go b/helper/resource/testing.go index 576ef31f3..853241a9e 100644 --- a/helper/resource/testing.go +++ b/helper/resource/testing.go @@ -18,7 +18,6 @@ import ( "github.com/davecgh/go-spew/spew" "github.com/hashicorp/errwrap" "github.com/hashicorp/go-multierror" - "github.com/hashicorp/logutils" "github.com/mitchellh/colorstring" "github.com/hashicorp/terraform/addrs" @@ -396,7 +395,7 @@ const EnvLogPathMask = "TF_LOG_PATH_MASK" func LogOutput(t TestT) (logOutput io.Writer, err error) { logOutput = ioutil.Discard - logLevel := logging.LogLevel() + logLevel := logging.CurrentLogLevel() if logLevel == "" { return } @@ -424,9 +423,9 @@ func LogOutput(t TestT) (logOutput io.Writer, err error) { } // This was the default since the beginning - logOutput = &logutils.LevelFilter{ + logOutput = &logging.LevelFilter{ Levels: logging.ValidLevels, - MinLevel: logutils.LogLevel(logLevel), + MinLevel: logging.LogLevel(logLevel), Writer: logOutput, } @@ -677,11 +676,11 @@ func testProviderResolver(c TestCase) (providers.Resolver, error) { // wrap the old provider factories in the test grpc server so they can be // called from terraform. - newProviders := make(map[string]providers.Factory) + newProviders := make(map[addrs.Provider]providers.Factory) for k, pf := range ctxProviders { factory := pf // must copy to ensure each closure sees its own value - newProviders[k] = func() (providers.Interface, error) { + newProviders[addrs.NewLegacyProvider(k)] = func() (providers.Interface, error) { p, err := factory() if err != nil { return nil, err @@ -728,7 +727,10 @@ func testIDOnlyRefresh(c TestCase, opts terraform.ContextOpts, step TestStep, r AttrsFlat: r.Primary.Attributes, Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "placeholder"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewDefaultProvider("placeholder"), + Module: addrs.RootModuleInstance, + }, ) // Create the config module. We use the full config because Refresh diff --git a/helper/resource/testing_import_state.go b/helper/resource/testing_import_state.go index 3473f8e52..998c4cbfb 100644 --- a/helper/resource/testing_import_state.go +++ b/helper/resource/testing_import_state.go @@ -16,7 +16,7 @@ import ( "github.com/hashicorp/terraform/terraform" ) -// testStepImportState runs an imort state test step +// testStepImportState runs an import state test step func testStepImportState( opts terraform.ContextOpts, state *terraform.State, @@ -96,7 +96,6 @@ func testStepImportState( if err != nil { return nil, err } - // Go through the new state and verify if step.ImportStateCheck != nil { var states []*terraform.InstanceState @@ -138,7 +137,8 @@ func testStepImportState( // this shouldn't happen in any reasonable case. 
var rsrcSchema *schema.Resource if providerAddr, diags := addrs.ParseAbsProviderConfigStr(r.Provider); !diags.HasErrors() { - providerType := providerAddr.ProviderConfig.Type + // FIXME + providerType := providerAddr.Provider.Type if provider, ok := step.providers[providerType]; ok { if provider, ok := provider.(*schema.Provider); ok { rsrcSchema = provider.ResourcesMap[r.Type] diff --git a/helper/resource/testing_test.go b/helper/resource/testing_test.go index 1c358f663..96e098edf 100644 --- a/helper/resource/testing_test.go +++ b/helper/resource/testing_test.go @@ -14,6 +14,7 @@ import ( "testing" "github.com/hashicorp/go-multierror" + "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/plugin/discovery" "github.com/hashicorp/terraform/terraform" @@ -1080,7 +1081,7 @@ func TestTestProviderResolver(t *testing.T) { for name := range reqd { t.Run(name, func(t *testing.T) { - pf, ok := factories[name] + pf, ok := factories[addrs.NewLegacyProvider(name)] if !ok { t.Fatalf("no factory for %q", name) } diff --git a/helper/schema/resource_data_get_source.go b/helper/schema/resource_data_get_source.go index 7dd655de3..8bfb079be 100644 --- a/helper/schema/resource_data_get_source.go +++ b/helper/schema/resource_data_get_source.go @@ -1,6 +1,6 @@ package schema -//go:generate stringer -type=getSource resource_data_get_source.go +//go:generate go run golang.org/x/tools/cmd/stringer -type=getSource resource_data_get_source.go // getSource represents the level we want to get for a value (internally). // Any source less than or equal to the level will be loaded (whichever diff --git a/helper/schema/valuetype.go b/helper/schema/valuetype.go index 9286987d5..0f65d692f 100644 --- a/helper/schema/valuetype.go +++ b/helper/schema/valuetype.go @@ -1,6 +1,6 @@ package schema -//go:generate stringer -type=ValueType valuetype.go +//go:generate go run golang.org/x/tools/cmd/stringer -type=ValueType valuetype.go // ValueType is an enum of the type that can be represented by a schema. type ValueType int diff --git a/instances/expander.go b/instances/expander.go new file mode 100644 index 000000000..357d0f9fd --- /dev/null +++ b/instances/expander.go @@ -0,0 +1,319 @@ +package instances + +import ( + "fmt" + "sort" + "sync" + + "github.com/hashicorp/terraform/addrs" + "github.com/zclconf/go-cty/cty" +) + +// Expander instances serve as a coordination point for gathering object +// repetition values (count and for_each in configuration) and then later +// making use of them to fully enumerate all of the instances of an object. +// +// The two repeatable object types in Terraform are modules and resources. +// Because resources belong to modules and modules can nest inside other +// modules, module expansion in particular has a recursive effect that can +// cause deep objects to expand exponentially. Expander assumes that all +// instances of a module have the same static objects inside, and that they +// differ only in the repetition count for some of those objects. +// +// Expander is a synchronized object whose methods can be safely called +// from concurrent threads of execution. However, it does expect a certain +// sequence of operations which is normally obtained by the caller traversing +// a dependency graph: each object must have its repetition mode set exactly +// once, and this must be done before any calls that depend on the repetition +// mode. 
In other words, the count or for_each expression value for a module +// must be provided before any object nested directly or indirectly inside +// that module can be expanded. If this ordering is violated, the methods +// will panic to enforce internal consistency. +// +// The Expand* methods of Expander only work directly with modules and with +// resources. Addresses for other objects that nest within modules but +// do not themselves support repetition can be obtained by calling ExpandModule +// with the containing module path and then producing one absolute instance +// address per module instance address returned. +type Expander struct { + mu sync.RWMutex + exps *expanderModule +} + +// NewExpander initializes and returns a new Expander, empty and ready to use. +func NewExpander() *Expander { + return &Expander{ + exps: newExpanderModule(), + } +} + +// SetModuleSingle records that the given module call inside the given parent +// module does not use any repetition arguments and is therefore a singleton. +func (e *Expander) SetModuleSingle(parentAddr addrs.ModuleInstance, callAddr addrs.ModuleCall) { + e.setModuleExpansion(parentAddr, callAddr, expansionSingleVal) +} + +// SetModuleCount records that the given module call inside the given parent +// module instance uses the "count" repetition argument, with the given value. +func (e *Expander) SetModuleCount(parentAddr addrs.ModuleInstance, callAddr addrs.ModuleCall, count int) { + e.setModuleExpansion(parentAddr, callAddr, expansionCount(count)) +} + +// SetModuleForEach records that the given module call inside the given parent +// module instance uses the "for_each" repetition argument, with the given +// map value. +// +// In the configuration language the for_each argument can also accept a set. +// It's the caller's responsibility to convert that into an identity map before +// calling this method. +func (e *Expander) SetModuleForEach(parentAddr addrs.ModuleInstance, callAddr addrs.ModuleCall, mapping map[string]cty.Value) { + e.setModuleExpansion(parentAddr, callAddr, expansionForEach(mapping)) +} + +// SetResourceSingle records that the given module inside the given parent +// module does not use any repetition arguments and is therefore a singleton. +func (e *Expander) SetResourceSingle(parentAddr addrs.ModuleInstance, resourceAddr addrs.Resource) { + e.setResourceExpansion(parentAddr, resourceAddr, expansionSingleVal) +} + +// SetResourceCount records that the given module inside the given parent +// module uses the "count" repetition argument, with the given value. +func (e *Expander) SetResourceCount(parentAddr addrs.ModuleInstance, resourceAddr addrs.Resource, count int) { + e.setResourceExpansion(parentAddr, resourceAddr, expansionCount(count)) +} + +// SetResourceForEach records that the given module inside the given parent +// module uses the "for_each" repetition argument, with the given map value. +// +// In the configuration language the for_each argument can also accept a set. +// It's the caller's responsibility to convert that into an identity map before +// calling this method. +func (e *Expander) SetResourceForEach(parentAddr addrs.ModuleInstance, resourceAddr addrs.Resource, mapping map[string]cty.Value) { + e.setResourceExpansion(parentAddr, resourceAddr, expansionForEach(mapping)) +} + +// ExpandModule finds the exhaustive set of module instances resulting from +// the expansion of the given module and all of its ancestor modules. 
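+//
+// For example (editor's illustration, mirroring the test cases in
+// expander_test.go below): if module "count2" has count = 2 and itself
+// contains a module "count2" with count = 2, then ExpandModule on the
+// module path count2.count2 returns four instance addresses:
+//
+//	module.count2[0].module.count2[0]
+//	module.count2[0].module.count2[1]
+//	module.count2[1].module.count2[0]
+//	module.count2[1].module.count2[1]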
+// +// All of the modules on the path to the identified module must already have +// had their expansion registered using one of the SetModule* methods before +// calling, or this method will panic. +func (e *Expander) ExpandModule(addr addrs.Module) []addrs.ModuleInstance { + if len(addr) == 0 { + // Root module is always a singleton. + return singletonRootModule + } + + e.mu.RLock() + defer e.mu.RUnlock() + + // We're going to be dynamically growing ModuleInstance addresses, so + // we'll preallocate some space to do it so that for typical shallow + // module trees we won't need to reallocate this. + // (moduleInstances does plenty of allocations itself, so the benefit of + // pre-allocating this is marginal but it's not hard to do.) + parentAddr := make(addrs.ModuleInstance, 0, 4) + ret := e.exps.moduleInstances(addr, parentAddr) + sort.SliceStable(ret, func(i, j int) bool { + return ret[i].Less(ret[j]) + }) + return ret +} + +// ExpandResource finds the exhaustive set of resource instances resulting from +// the expansion of the given resource and all of its containing modules. +// +// All of the modules on the path to the identified resource and the resource +// itself must already have had their expansion registered using one of the +// SetModule*/SetResource* methods before calling, or this method will panic. +func (e *Expander) ExpandResource(parentAddr addrs.Module, resourceAddr addrs.Resource) []addrs.AbsResourceInstance { + e.mu.RLock() + defer e.mu.RUnlock() + + // We're going to be dynamically growing ModuleInstance addresses, so + // we'll preallocate some space to do it so that for typical shallow + // module trees we won't need to reallocate this. + // (moduleInstances does plenty of allocations itself, so the benefit of + // pre-allocating this is marginal but it's not hard to do.) + moduleInstanceAddr := make(addrs.ModuleInstance, 0, 4) + ret := e.exps.resourceInstances(parentAddr, resourceAddr, moduleInstanceAddr) + sort.SliceStable(ret, func(i, j int) bool { + return ret[i].Less(ret[j]) + }) + return ret +} + +// GetModuleInstanceRepetitionData returns an object describing the values +// that should be available for each.key, each.value, and count.index within +// the call block for the given module instance. +func (e *Expander) GetModuleInstanceRepetitionData(addr addrs.ModuleInstance) RepetitionData { + if len(addr) == 0 { + // The root module is always a singleton, so it has no repetition data. + return RepetitionData{} + } + + e.mu.RLock() + defer e.mu.RUnlock() + + parentMod := e.findModule(addr[:len(addr)-1]) + lastStep := addr[len(addr)-1] + exp, ok := parentMod.moduleCalls[addrs.ModuleCall{Name: lastStep.Name}] + if !ok { + panic(fmt.Sprintf("no expansion has been registered for %s", addr)) + } + return exp.repetitionData(lastStep.InstanceKey) +} + +// GetResourceInstanceRepetitionData returns an object describing the values +// that should be available for each.key, each.value, and count.index within +// the definition block for the given resource instance. 
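+//
+// (Editor's note: RepetitionData is assumed here to carry CountIndex,
+// EachKey, and EachValue fields.) For a count instance such as
+// test.count2[1] the result would carry CountIndex = cty.NumberIntVal(1),
+// while for test.for_each["a"] with for_each = { a = 1 } it would carry
+// EachKey = cty.StringVal("a") and EachValue = cty.NumberIntVal(1).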
+func (e *Expander) GetResourceInstanceRepetitionData(addr addrs.AbsResourceInstance) RepetitionData { + e.mu.RLock() + defer e.mu.RUnlock() + + parentMod := e.findModule(addr.Module) + exp, ok := parentMod.resources[addr.Resource.Resource] + if !ok { + panic(fmt.Sprintf("no expansion has been registered for %s", addr.ContainingResource())) + } + return exp.repetitionData(addr.Resource.Key) +} + +func (e *Expander) findModule(moduleInstAddr addrs.ModuleInstance) *expanderModule { + // We expect that all of the modules on the path to our module instance + // should already have expansions registered. + mod := e.exps + for i, step := range moduleInstAddr { + next, ok := mod.childInstances[step] + if !ok { + // Top-down ordering of registration is part of the contract of + // Expander, so this is always indicative of a bug in the caller. + panic(fmt.Sprintf("no expansion has been registered for ancestor module %s", moduleInstAddr[:i+1])) + } + mod = next + } + return mod +} + +func (e *Expander) setModuleExpansion(parentAddr addrs.ModuleInstance, callAddr addrs.ModuleCall, exp expansion) { + e.mu.Lock() + defer e.mu.Unlock() + + mod := e.findModule(parentAddr) + if _, exists := mod.moduleCalls[callAddr]; exists { + panic(fmt.Sprintf("expansion already registered for %s", parentAddr.Child(callAddr.Name, addrs.NoKey))) + } + // We'll also pre-register the child instances so that later calls can + // populate them as the caller traverses the configuration tree. + for _, key := range exp.instanceKeys() { + step := addrs.ModuleInstanceStep{Name: callAddr.Name, InstanceKey: key} + mod.childInstances[step] = newExpanderModule() + } + mod.moduleCalls[callAddr] = exp +} + +func (e *Expander) setResourceExpansion(parentAddr addrs.ModuleInstance, resourceAddr addrs.Resource, exp expansion) { + e.mu.Lock() + defer e.mu.Unlock() + + mod := e.findModule(parentAddr) + if _, exists := mod.resources[resourceAddr]; exists { + panic(fmt.Sprintf("expansion already registered for %s", resourceAddr.Absolute(parentAddr))) + } + mod.resources[resourceAddr] = exp +} + +type expanderModule struct { + moduleCalls map[addrs.ModuleCall]expansion + resources map[addrs.Resource]expansion + childInstances map[addrs.ModuleInstanceStep]*expanderModule +} + +func newExpanderModule() *expanderModule { + return &expanderModule{ + moduleCalls: make(map[addrs.ModuleCall]expansion), + resources: make(map[addrs.Resource]expansion), + childInstances: make(map[addrs.ModuleInstanceStep]*expanderModule), + } +} + +var singletonRootModule = []addrs.ModuleInstance{addrs.RootModuleInstance} + +func (m *expanderModule) moduleInstances(addr addrs.Module, parentAddr addrs.ModuleInstance) []addrs.ModuleInstance { + callName := addr[0] + exp, ok := m.moduleCalls[addrs.ModuleCall{Name: callName}] + if !ok { + // This is a bug in the caller, because it should always register + // expansions for an object and all of its ancestors before requesting + // expansion of it. + panic(fmt.Sprintf("no expansion has been registered for %s", parentAddr.Child(callName, addrs.NoKey))) + } + + var ret []addrs.ModuleInstance + + // If there's more than one step remaining then we need to traverse deeper. + if len(addr) > 1 { + for step, inst := range m.childInstances { + if step.Name != callName { + continue + } + instAddr := append(parentAddr, step) + ret = append(ret, inst.moduleInstances(addr[1:], instAddr)...) + } + return ret + } + + // Otherwise, we'll use the expansion from the final step to produce + // a sequence of addresses under this prefix. 
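+	// (Editor's note: instanceKeys is assumed to yield addrs.NoKey for a
+	// singleton, addrs.IntKey values for count, and addrs.StringKey values
+	// for for_each, so this one loop covers all three repetition modes.)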
+ for _, k := range exp.instanceKeys() { + // We're reusing the buffer under parentAddr as we recurse through + // the structure, so we need to copy it here to produce a final + // immutable slice to return. + full := make(addrs.ModuleInstance, 0, len(parentAddr)+1) + full = append(full, parentAddr...) + full = full.Child(callName, k) + ret = append(ret, full) + } + return ret +} + +func (m *expanderModule) resourceInstances(moduleAddr addrs.Module, resourceAddr addrs.Resource, parentAddr addrs.ModuleInstance) []addrs.AbsResourceInstance { + var ret []addrs.AbsResourceInstance + + if len(moduleAddr) > 0 { + // We need to traverse through the module levels first, so we can + // then iterate resource expansions in the context of each module + // path leading to them. + callName := moduleAddr[0] + if _, ok := m.moduleCalls[addrs.ModuleCall{Name: callName}]; !ok { + // This is a bug in the caller, because it should always register + // expansions for an object and all of its ancestors before requesting + // expansion of it. + panic(fmt.Sprintf("no expansion has been registered for %s", parentAddr.Child(callName, addrs.NoKey))) + } + + for step, inst := range m.childInstances { + if step.Name != callName { + continue + } + moduleInstAddr := append(parentAddr, step) + ret = append(ret, inst.resourceInstances(moduleAddr[1:], resourceAddr, moduleInstAddr)...) + } + return ret + } + + exp, ok := m.resources[resourceAddr] + if !ok { + panic(fmt.Sprintf("no expansion has been registered for %s", resourceAddr.Absolute(parentAddr))) + } + + for _, k := range exp.instanceKeys() { + // We're reusing the buffer under parentAddr as we recurse through + // the structure, so we need to copy it here to produce a final + // immutable slice to return. + moduleAddr := make(addrs.ModuleInstance, len(parentAddr)) + copy(moduleAddr, parentAddr) + ret = append(ret, resourceAddr.Instance(k).Absolute(moduleAddr)) + } + return ret +} diff --git a/instances/expander_test.go b/instances/expander_test.go new file mode 100644 index 000000000..63dd35113 --- /dev/null +++ b/instances/expander_test.go @@ -0,0 +1,458 @@ +package instances + +import ( + "fmt" + "strings" + "testing" + + "github.com/google/go-cmp/cmp" + "github.com/zclconf/go-cty/cty" + + "github.com/hashicorp/terraform/addrs" +) + +func TestExpander(t *testing.T) { + // Some module and resource addresses and values we'll use repeatedly below. + singleModuleAddr := addrs.ModuleCall{Name: "single"} + count2ModuleAddr := addrs.ModuleCall{Name: "count2"} + count0ModuleAddr := addrs.ModuleCall{Name: "count0"} + forEachModuleAddr := addrs.ModuleCall{Name: "for_each"} + singleResourceAddr := addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test", + Name: "single", + } + count2ResourceAddr := addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test", + Name: "count2", + } + count0ResourceAddr := addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test", + Name: "count0", + } + forEachResourceAddr := addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test", + Name: "for_each", + } + eachMap := map[string]cty.Value{ + "a": cty.NumberIntVal(1), + "b": cty.NumberIntVal(2), + } + + // In normal use, Expander would be called in the context of a graph + // traversal to ensure that information is registered/requested in the + // correct sequence, but to keep this test self-contained we'll just + // manually write out the steps here. 
+ // + // The steps below are assuming a configuration tree like the following: + // - root module + // - resource test.single with no count or for_each + // - resource test.count2 with count = 2 + // - resource test.count0 with count = 0 + // - resource test.for_each with for_each = { a = 1, b = 2 } + // - child module "single" with no count or for_each + // - resource test.single with no count or for_each + // - resource test.count2 with count = 2 + // - child module "count2" with count = 2 + // - resource test.single with no count or for_each + // - resource test.count2 with count = 2 + // - child module "count2" with count = 2 + // - resource test.count2 with count = 2 + // - child module "count0" with count = 0 + // - resource test.single with no count or for_each + // - child module for_each with for_each = { a = 1, b = 2 } + // - resource test.single with no count or for_each + // - resource test.count2 with count = 2 + + ex := NewExpander() + + // We don't register the root module, because it's always implied to exist. + // + // Below we're going to use braces and indentation just to help visually + // reflect the tree structure from the tree in the above comment, in the + // hope that the following is easier to follow. + // + // The Expander API requires that we register containing modules before + // registering anything inside them, so we'll work through the above + // in a depth-first order in the registration steps that follow. + { + ex.SetResourceSingle(addrs.RootModuleInstance, singleResourceAddr) + ex.SetResourceCount(addrs.RootModuleInstance, count2ResourceAddr, 2) + ex.SetResourceCount(addrs.RootModuleInstance, count0ResourceAddr, 0) + ex.SetResourceForEach(addrs.RootModuleInstance, forEachResourceAddr, eachMap) + + ex.SetModuleSingle(addrs.RootModuleInstance, singleModuleAddr) + { + // The single instance of the module + moduleInstanceAddr := addrs.RootModuleInstance.Child("single", addrs.NoKey) + ex.SetResourceSingle(moduleInstanceAddr, singleResourceAddr) + ex.SetResourceCount(moduleInstanceAddr, count2ResourceAddr, 2) + } + + ex.SetModuleCount(addrs.RootModuleInstance, count2ModuleAddr, 2) + for i1 := 0; i1 < 2; i1++ { + moduleInstanceAddr := addrs.RootModuleInstance.Child("count2", addrs.IntKey(i1)) + ex.SetResourceSingle(moduleInstanceAddr, singleResourceAddr) + ex.SetResourceCount(moduleInstanceAddr, count2ResourceAddr, 2) + ex.SetModuleCount(moduleInstanceAddr, count2ModuleAddr, 2) + for i2 := 0; i2 < 2; i2++ { + moduleInstanceAddr := moduleInstanceAddr.Child("count2", addrs.IntKey(i2)) + ex.SetResourceCount(moduleInstanceAddr, count2ResourceAddr, 2) + } + } + + ex.SetModuleCount(addrs.RootModuleInstance, count0ModuleAddr, 0) + { + // There are no instances of module "count0", so our nested module + // would never actually get registered here: the expansion node + // for the resource would see that its containing module has no + // instances and so do nothing. + } + + ex.SetModuleForEach(addrs.RootModuleInstance, forEachModuleAddr, eachMap) + for k := range eachMap { + moduleInstanceAddr := addrs.RootModuleInstance.Child("for_each", addrs.StringKey(k)) + ex.SetResourceSingle(moduleInstanceAddr, singleResourceAddr) + ex.SetResourceCount(moduleInstanceAddr, count2ResourceAddr, 2) + } + } + + t.Run("root module", func(t *testing.T) { + // Requesting expansion of the root module doesn't really mean anything + // since it's always a singleton, but for consistency it should work. 
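+ // (This exercises the singletonRootModule fast path in ExpandModule.)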
+ got := ex.ExpandModule(addrs.RootModule) + want := []addrs.ModuleInstance{addrs.RootModuleInstance} + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("wrong result\n%s", diff) + } + }) + t.Run("resource single", func(t *testing.T) { + got := ex.ExpandResource( + addrs.RootModule, + singleResourceAddr, + ) + want := []addrs.AbsResourceInstance{ + mustAbsResourceInstanceAddr(`test.single`), + } + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("wrong result\n%s", diff) + } + }) + t.Run("resource count2", func(t *testing.T) { + got := ex.ExpandResource( + addrs.RootModule, + count2ResourceAddr, + ) + want := []addrs.AbsResourceInstance{ + mustAbsResourceInstanceAddr(`test.count2[0]`), + mustAbsResourceInstanceAddr(`test.count2[1]`), + } + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("wrong result\n%s", diff) + } + }) + t.Run("resource count0", func(t *testing.T) { + got := ex.ExpandResource( + addrs.RootModule, + count0ResourceAddr, + ) + want := []addrs.AbsResourceInstance(nil) + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("wrong result\n%s", diff) + } + }) + t.Run("resource for_each", func(t *testing.T) { + got := ex.ExpandResource( + addrs.RootModule, + forEachResourceAddr, + ) + want := []addrs.AbsResourceInstance{ + mustAbsResourceInstanceAddr(`test.for_each["a"]`), + mustAbsResourceInstanceAddr(`test.for_each["b"]`), + } + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("wrong result\n%s", diff) + } + }) + t.Run("module single", func(t *testing.T) { + got := ex.ExpandModule(addrs.RootModule.Child("single")) + want := []addrs.ModuleInstance{ + mustModuleInstanceAddr(`module.single`), + } + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("wrong result\n%s", diff) + } + }) + t.Run("module single resource single", func(t *testing.T) { + got := ex.ExpandResource( + mustModuleAddr("single"), + singleResourceAddr, + ) + want := []addrs.AbsResourceInstance{ + mustAbsResourceInstanceAddr("module.single.test.single"), + } + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("wrong result\n%s", diff) + } + }) + t.Run("module single resource count2", func(t *testing.T) { + got := ex.ExpandResource( + mustModuleAddr(`single`), + count2ResourceAddr, + ) + want := []addrs.AbsResourceInstance{ + mustAbsResourceInstanceAddr(`module.single.test.count2[0]`), + mustAbsResourceInstanceAddr(`module.single.test.count2[1]`), + } + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("wrong result\n%s", diff) + } + }) + t.Run("module count2", func(t *testing.T) { + got := ex.ExpandModule(mustModuleAddr(`count2`)) + want := []addrs.ModuleInstance{ + mustModuleInstanceAddr(`module.count2[0]`), + mustModuleInstanceAddr(`module.count2[1]`), + } + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("wrong result\n%s", diff) + } + }) + t.Run("module count2 resource single", func(t *testing.T) { + got := ex.ExpandResource( + mustModuleAddr(`count2`), + singleResourceAddr, + ) + want := []addrs.AbsResourceInstance{ + mustAbsResourceInstanceAddr(`module.count2[0].test.single`), + mustAbsResourceInstanceAddr(`module.count2[1].test.single`), + } + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("wrong result\n%s", diff) + } + }) + t.Run("module count2 resource count2", func(t *testing.T) { + got := ex.ExpandResource( + mustModuleAddr(`count2`), + count2ResourceAddr, + ) + want := []addrs.AbsResourceInstance{ + mustAbsResourceInstanceAddr(`module.count2[0].test.count2[0]`), + mustAbsResourceInstanceAddr(`module.count2[0].test.count2[1]`), + 
mustAbsResourceInstanceAddr(`module.count2[1].test.count2[0]`),
+ mustAbsResourceInstanceAddr(`module.count2[1].test.count2[1]`),
+ }
+ if diff := cmp.Diff(want, got); diff != "" {
+ t.Errorf("wrong result\n%s", diff)
+ }
+ })
+ t.Run("module count2 module count2", func(t *testing.T) {
+ got := ex.ExpandModule(mustModuleAddr(`count2.count2`))
+ want := []addrs.ModuleInstance{
+ mustModuleInstanceAddr(`module.count2[0].module.count2[0]`),
+ mustModuleInstanceAddr(`module.count2[0].module.count2[1]`),
+ mustModuleInstanceAddr(`module.count2[1].module.count2[0]`),
+ mustModuleInstanceAddr(`module.count2[1].module.count2[1]`),
+ }
+ if diff := cmp.Diff(want, got); diff != "" {
+ t.Errorf("wrong result\n%s", diff)
+ }
+ })
+ t.Run("module count2 module count2 resource count2", func(t *testing.T) {
+ got := ex.ExpandResource(
+ mustModuleAddr(`count2.count2`),
+ count2ResourceAddr,
+ )
+ want := []addrs.AbsResourceInstance{
+ mustAbsResourceInstanceAddr(`module.count2[0].module.count2[0].test.count2[0]`),
+ mustAbsResourceInstanceAddr(`module.count2[0].module.count2[0].test.count2[1]`),
+ mustAbsResourceInstanceAddr(`module.count2[0].module.count2[1].test.count2[0]`),
+ mustAbsResourceInstanceAddr(`module.count2[0].module.count2[1].test.count2[1]`),
+ mustAbsResourceInstanceAddr(`module.count2[1].module.count2[0].test.count2[0]`),
+ mustAbsResourceInstanceAddr(`module.count2[1].module.count2[0].test.count2[1]`),
+ mustAbsResourceInstanceAddr(`module.count2[1].module.count2[1].test.count2[0]`),
+ mustAbsResourceInstanceAddr(`module.count2[1].module.count2[1].test.count2[1]`),
+ }
+ if diff := cmp.Diff(want, got); diff != "" {
+ t.Errorf("wrong result\n%s", diff)
+ }
+ })
+ t.Run("module count0", func(t *testing.T) {
+ got := ex.ExpandModule(mustModuleAddr(`count0`))
+ want := []addrs.ModuleInstance(nil)
+ if diff := cmp.Diff(want, got); diff != "" {
+ t.Errorf("wrong result\n%s", diff)
+ }
+ })
+ t.Run("module count0 resource single", func(t *testing.T) {
+ got := ex.ExpandResource(
+ mustModuleAddr(`count0`),
+ singleResourceAddr,
+ )
+ // The containing module has zero instances, so there are zero
+ // instances of this resource even though it doesn't have
+ // count = 0 set itself.
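+ // In general the number of instances is the product of the expansion
+ // sizes along the whole module path, so a zero anywhere on the path
+ // yields zero instances overall.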
+ want := []addrs.AbsResourceInstance(nil) + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("wrong result\n%s", diff) + } + }) + t.Run("module for_each", func(t *testing.T) { + got := ex.ExpandModule(mustModuleAddr(`for_each`)) + want := []addrs.ModuleInstance{ + mustModuleInstanceAddr(`module.for_each["a"]`), + mustModuleInstanceAddr(`module.for_each["b"]`), + } + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("wrong result\n%s", diff) + } + }) + t.Run("module for_each resource single", func(t *testing.T) { + got := ex.ExpandResource( + mustModuleAddr(`for_each`), + singleResourceAddr, + ) + want := []addrs.AbsResourceInstance{ + mustAbsResourceInstanceAddr(`module.for_each["a"].test.single`), + mustAbsResourceInstanceAddr(`module.for_each["b"].test.single`), + } + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("wrong result\n%s", diff) + } + }) + t.Run("module for_each resource count2", func(t *testing.T) { + got := ex.ExpandResource( + mustModuleAddr(`for_each`), + count2ResourceAddr, + ) + want := []addrs.AbsResourceInstance{ + mustAbsResourceInstanceAddr(`module.for_each["a"].test.count2[0]`), + mustAbsResourceInstanceAddr(`module.for_each["a"].test.count2[1]`), + mustAbsResourceInstanceAddr(`module.for_each["b"].test.count2[0]`), + mustAbsResourceInstanceAddr(`module.for_each["b"].test.count2[1]`), + } + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("wrong result\n%s", diff) + } + }) + + t.Run(`module.for_each["b"] repetitiondata`, func(t *testing.T) { + got := ex.GetModuleInstanceRepetitionData( + mustModuleInstanceAddr(`module.for_each["b"]`), + ) + want := RepetitionData{ + EachKey: cty.StringVal("b"), + EachValue: cty.NumberIntVal(2), + } + if diff := cmp.Diff(want, got, cmp.Comparer(valueEquals)); diff != "" { + t.Errorf("wrong result\n%s", diff) + } + }) + t.Run(`module.count2[0].module.count2[1] repetitiondata`, func(t *testing.T) { + got := ex.GetModuleInstanceRepetitionData( + mustModuleInstanceAddr(`module.count2[0].module.count2[1]`), + ) + want := RepetitionData{ + CountIndex: cty.NumberIntVal(1), + } + if diff := cmp.Diff(want, got, cmp.Comparer(valueEquals)); diff != "" { + t.Errorf("wrong result\n%s", diff) + } + }) + t.Run(`module.for_each["a"] repetitiondata`, func(t *testing.T) { + got := ex.GetModuleInstanceRepetitionData( + mustModuleInstanceAddr(`module.for_each["a"]`), + ) + want := RepetitionData{ + EachKey: cty.StringVal("a"), + EachValue: cty.NumberIntVal(1), + } + if diff := cmp.Diff(want, got, cmp.Comparer(valueEquals)); diff != "" { + t.Errorf("wrong result\n%s", diff) + } + }) + + t.Run(`test.for_each["a"] repetitiondata`, func(t *testing.T) { + got := ex.GetResourceInstanceRepetitionData( + mustAbsResourceInstanceAddr(`test.for_each["a"]`), + ) + want := RepetitionData{ + EachKey: cty.StringVal("a"), + EachValue: cty.NumberIntVal(1), + } + if diff := cmp.Diff(want, got, cmp.Comparer(valueEquals)); diff != "" { + t.Errorf("wrong result\n%s", diff) + } + }) + t.Run(`module.for_each["a"].test.single repetitiondata`, func(t *testing.T) { + got := ex.GetResourceInstanceRepetitionData( + mustAbsResourceInstanceAddr(`module.for_each["a"].test.single`), + ) + want := RepetitionData{} + if diff := cmp.Diff(want, got, cmp.Comparer(valueEquals)); diff != "" { + t.Errorf("wrong result\n%s", diff) + } + }) + t.Run(`module.for_each["a"].test.count2[1] repetitiondata`, func(t *testing.T) { + got := ex.GetResourceInstanceRepetitionData( + mustAbsResourceInstanceAddr(`module.for_each["a"].test.count2[1]`), + ) + want := 
RepetitionData{ + CountIndex: cty.NumberIntVal(1), + } + if diff := cmp.Diff(want, got, cmp.Comparer(valueEquals)); diff != "" { + t.Errorf("wrong result\n%s", diff) + } + }) +} + +func mustResourceAddr(str string) addrs.Resource { + addr, diags := addrs.ParseAbsResourceStr(str) + if diags.HasErrors() { + panic(fmt.Sprintf("invalid resource address: %s", diags.Err())) + } + if !addr.Module.IsRoot() { + panic("invalid resource address: includes module path") + } + return addr.Resource +} + +func mustAbsResourceInstanceAddr(str string) addrs.AbsResourceInstance { + addr, diags := addrs.ParseAbsResourceInstanceStr(str) + if diags.HasErrors() { + panic(fmt.Sprintf("invalid absolute resource instance address: %s", diags.Err())) + } + return addr +} + +func mustModuleAddr(str string) addrs.Module { + if len(str) == 0 { + return addrs.RootModule + } + // We don't have a real parser for these because they don't appear in the + // language anywhere, but this interpretation mimics the format we + // produce from the String method on addrs.Module. + parts := strings.Split(str, ".") + return addrs.Module(parts) +} + +func mustModuleInstanceAddr(str string) addrs.ModuleInstance { + if len(str) == 0 { + return addrs.RootModuleInstance + } + addr, diags := addrs.ParseModuleInstanceStr(str) + if diags.HasErrors() { + panic(fmt.Sprintf("invalid module instance address: %s", diags.Err())) + } + return addr +} + +func valueEquals(a, b cty.Value) bool { + if a == cty.NilVal || b == cty.NilVal { + return a == b + } + return a.RawEquals(b) +} diff --git a/instances/expansion_mode.go b/instances/expansion_mode.go new file mode 100644 index 000000000..be3393432 --- /dev/null +++ b/instances/expansion_mode.go @@ -0,0 +1,85 @@ +package instances + +import ( + "fmt" + "sort" + + "github.com/zclconf/go-cty/cty" + + "github.com/hashicorp/terraform/addrs" +) + +// expansion is an internal interface used to represent the different +// ways expansion can operate depending on how repetition is configured for +// an object. +type expansion interface { + instanceKeys() []addrs.InstanceKey + repetitionData(addrs.InstanceKey) RepetitionData +} + +// expansionSingle is the expansion corresponding to no repetition arguments +// at all, producing a single object with no key. +// +// expansionSingleVal is the only valid value of this type. +type expansionSingle uintptr + +var singleKeys = []addrs.InstanceKey{addrs.NoKey} +var expansionSingleVal expansionSingle + +func (e expansionSingle) instanceKeys() []addrs.InstanceKey { + return singleKeys +} + +func (e expansionSingle) repetitionData(key addrs.InstanceKey) RepetitionData { + if key != addrs.NoKey { + panic("cannot use instance key with non-repeating object") + } + return RepetitionData{} +} + +// expansionCount is the expansion corresponding to the "count" argument. +type expansionCount int + +func (e expansionCount) instanceKeys() []addrs.InstanceKey { + ret := make([]addrs.InstanceKey, int(e)) + for i := range ret { + ret[i] = addrs.IntKey(i) + } + return ret +} + +func (e expansionCount) repetitionData(key addrs.InstanceKey) RepetitionData { + i := int(key.(addrs.IntKey)) + if i < 0 || i >= int(e) { + panic(fmt.Sprintf("instance key %d out of range for count %d", i, e)) + } + return RepetitionData{ + CountIndex: cty.NumberIntVal(int64(i)), + } +} + +// expansionForEach is the expansion corresponding to the "for_each" argument. 
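+//
+// For example (illustrative only), for_each = { a = 1, b = 2 } yields the
+// instance keys ["a", "b"] in lexical order, and repetitionData("a")
+// reports each.key = "a" and each.value = 1.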
+type expansionForEach map[string]cty.Value
+
+func (e expansionForEach) instanceKeys() []addrs.InstanceKey {
+ ret := make([]addrs.InstanceKey, 0, len(e))
+ for k := range e {
+ ret = append(ret, addrs.StringKey(k))
+ }
+ sort.Slice(ret, func(i, j int) bool {
+ return ret[i].(addrs.StringKey) < ret[j].(addrs.StringKey)
+ })
+ return ret
+}
+
+func (e expansionForEach) repetitionData(key addrs.InstanceKey) RepetitionData {
+ k := string(key.(addrs.StringKey))
+ v, ok := e[k]
+ if !ok {
+ panic(fmt.Sprintf("instance key %q does not match any instance", k))
+ }
+ return RepetitionData{
+ EachKey: cty.StringVal(k),
+ EachValue: v,
+ }
+}
diff --git a/instances/instance_key_data.go b/instances/instance_key_data.go
new file mode 100644
index 000000000..9ada5253c
--- /dev/null
+++ b/instances/instance_key_data.go
@@ -0,0 +1,28 @@
+package instances
+
+import (
+ "github.com/zclconf/go-cty/cty"
+)
+
+// RepetitionData represents the values available to identify individual
+// repetitions of a particular object.
+//
+// This corresponds to the each.key, each.value, and count.index symbols in
+// the configuration language.
+type RepetitionData struct {
+ // CountIndex is the value for count.index, or cty.NilVal if evaluating
+ // in a context where the "count" argument is not active.
+ //
+ // For correct operation, this should always be of type cty.Number if not
+ // nil.
+ CountIndex cty.Value
+
+ // EachKey and EachValue are the values for each.key and each.value
+ // respectively, or cty.NilVal if evaluating in a context where the
+ // "for_each" argument is not active. These must either both be set
+ // or neither set.
+ //
+ // For correct operation, EachKey must always be either of type cty.String
+ // or cty.Number if not nil.
+ EachKey, EachValue cty.Value
+}
diff --git a/internal/earlyconfig/config.go b/internal/earlyconfig/config.go
index a9b8f9883..c0aa58fe2 100644
--- a/internal/earlyconfig/config.go
+++ b/internal/earlyconfig/config.go
@@ -84,23 +84,38 @@ func (c *Config) ProviderDependencies() (*moduledeps.Module, tfdiags.Diagnostics
 providers := make(moduledeps.Providers)
 for name, reqs := range c.Module.RequiredProviders {
- inst := moduledeps.ProviderInstance(name)
+ var fqn addrs.Provider
+ if source := reqs.Source; source != "" {
+ addr, moreDiags := addrs.ParseProviderSourceString(source)
+ if moreDiags.HasErrors() {
+ diags = diags.Append(wrapDiagnostic(tfconfig.Diagnostic{
+ Severity: tfconfig.DiagError,
+ Summary: "Invalid provider source",
+ Detail: fmt.Sprintf("Invalid source %q for provider %s.", source, name),
+ }))
+ continue
+ }
+ fqn = addr
+ }
+ if fqn.IsZero() {
+ fqn = addrs.NewLegacyProvider(name)
+ }
 var constraints version.Constraints
- for _, reqStr := range reqs {
+ for _, reqStr := range reqs.VersionConstraints {
 if reqStr != "" {
 constraint, err := version.NewConstraint(reqStr)
 if err != nil {
 diags = diags.Append(wrapDiagnostic(tfconfig.Diagnostic{
 Severity: tfconfig.DiagError,
 Summary: "Invalid provider version constraint",
- Detail: fmt.Sprintf("Invalid version constraint %q for provider %s.", reqStr, name),
+ Detail: fmt.Sprintf("Invalid version constraint %q for provider %s.", reqStr, fqn.LegacyString()),
 }))
 continue
 }
 constraints = append(constraints, constraint...)
 }
 }
- providers[inst] = moduledeps.ProviderDependency{
+ providers[fqn] = moduledeps.ProviderDependency{
 Constraints: discovery.NewConstraints(constraints),
 Reason: moduledeps.ProviderDependencyExplicit,
 }
diff --git a/internal/getproviders/doc.go b/internal/getproviders/doc.go
new file mode 100644
index 000000000..a39aa1dda
--- /dev/null
+++ b/internal/getproviders/doc.go
@@ -0,0 +1,11 @@
+// Package getproviders is the lowest-level provider automatic installation
+// functionality. It can answer questions about what providers and provider
+// versions are available in a registry, and it can retrieve the URL for
+// the distribution archive for a specific version of a specific provider
+// targeting a particular platform.
+//
+// This package is not responsible for choosing the best version to install
+// from a set of available versions, or for any signature verification of the
+// archives it fetches. Callers will use this package in conjunction with other
+// logic elsewhere in order to construct a full provider installer.
+package getproviders
diff --git a/internal/getproviders/errors.go b/internal/getproviders/errors.go
new file mode 100644
index 000000000..5710e5e05
--- /dev/null
+++ b/internal/getproviders/errors.go
@@ -0,0 +1,147 @@
+package getproviders
+
+import (
+ "fmt"
+
+ svchost "github.com/hashicorp/terraform-svchost"
+ "github.com/hashicorp/terraform/addrs"
+)
+
+// ErrHostNoProviders is an error type used to indicate that a hostname given
+// in a provider address does not support the provider registry protocol.
+type ErrHostNoProviders struct {
+ Hostname svchost.Hostname
+
+ // HasOtherVersion is set to true if the discovery process detected
+ // declarations of services named "providers" whose version numbers did not
+ // match any version supported by the current version of Terraform.
+ //
+ // If this is set, it's helpful to hint to the user in an error message
+ // that the provider host may be expecting an older or a newer version
+ // of Terraform, rather than that it isn't a provider registry host at all.
+ HasOtherVersion bool
+}
+
+func (err ErrHostNoProviders) Error() string {
+ switch {
+ case err.HasOtherVersion:
+ return fmt.Sprintf("host %s does not support the provider registry protocol required by this Terraform version, but may be compatible with a different Terraform version", err.Hostname.ForDisplay())
+ default:
+ return fmt.Sprintf("host %s does not offer a Terraform provider registry", err.Hostname.ForDisplay())
+ }
+}
+
+// ErrHostUnreachable is an error type used to indicate that a hostname
+// given in a provider address did not resolve in DNS, did not respond to an
+// HTTPS request for service discovery, or otherwise failed to correctly speak
+// the service discovery protocol.
+type ErrHostUnreachable struct {
+ Hostname svchost.Hostname
+ Wrapped error
+}
+
+func (err ErrHostUnreachable) Error() string {
+ return fmt.Sprintf("could not connect to %s: %s", err.Hostname.ForDisplay(), err.Wrapped.Error())
+}
+
+// Unwrap returns the underlying error that occurred when trying to reach the
+// indicated host.
+func (err ErrHostUnreachable) Unwrap() error {
+ return err.Wrapped
+}
+
+// ErrUnauthorized is an error type used to indicate that a hostname
+// given in a provider address returned a "401 Unauthorized" or "403 Forbidden"
+// error response when we tried to access it.
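+//
+// A hypothetical caller could branch on this type to decide whether to
+// prompt for credentials, e.g.:
+//
+//     var errU ErrUnauthorized
+//     if errors.As(err, &errU) && !errU.HaveCredentials {
+//         // ask the user to configure credentials for errU.Hostname
+//     }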
+type ErrUnauthorized struct {
+ Hostname svchost.Hostname
+
+ // HaveCredentials is true when the request that failed included some
+ // credentials, and thus it seems that those credentials were invalid.
+ // Conversely, HaveCredentials is false if the request did not include
+ // credentials at all, in which case it seems that credentials must be
+ // provided.
+ HaveCredentials bool
+}
+
+func (err ErrUnauthorized) Error() string {
+ switch {
+ case err.HaveCredentials:
+ return fmt.Sprintf("host %s rejected the given authentication credentials", err.Hostname)
+ default:
+ return fmt.Sprintf("host %s requires authentication credentials", err.Hostname)
+ }
+}
+
+// ErrProviderNotKnown is an error type used to indicate that the hostname
+// given in a provider address does appear to be a provider registry but that
+// registry does not know about the given provider namespace or type.
+//
+// A caller serving requests from an end-user should recognize this error type
+// and use it to produce user-friendly hints for common errors such as failing
+// to specify an explicit source for a provider not in the default namespace
+// (one not under registry.terraform.io/hashicorp/). The default error message
+// for this type is a direct description of the problem with no such hints,
+// because we expect that the caller will have better context to decide what
+// hints are appropriate, e.g. by looking at the configuration given by the
+// user.
+type ErrProviderNotKnown struct {
+ Provider addrs.Provider
+}
+
+func (err ErrProviderNotKnown) Error() string {
+ return fmt.Sprintf(
+ "provider registry %s does not have a provider named %s",
+ err.Provider.Hostname.ForDisplay(),
+ err.Provider,
+ )
+}
+
+// ErrPlatformNotSupported is an error type used to indicate that a particular
+// version of a provider isn't available for a particular target platform.
+//
+// This is returned when DownloadLocation encounters a 404 Not Found response
+// from the underlying registry, because it presumes that a caller will only
+// ask for the DownloadLocation for a version it already found the existence
+// of via AvailableVersions.
+type ErrPlatformNotSupported struct {
+ Provider addrs.Provider
+ Version Version
+ Platform Platform
+}
+
+func (err ErrPlatformNotSupported) Error() string {
+ return fmt.Sprintf(
+ "provider %s %s is not available for %s",
+ err.Provider,
+ err.Version,
+ err.Platform,
+ )
+}
+
+// ErrQueryFailed is an error type used to indicate that the hostname given
+// in a provider address does appear to be a provider registry but that when
+// we queried it for metadata for the given provider the server returned an
+// unexpected error.
+//
+// This is used for any error responses other than "Not Found", which would
+// indicate the absence of a provider and is thus reported using
+// ErrProviderNotKnown instead.
+type ErrQueryFailed struct {
+ Provider addrs.Provider
+ Wrapped error
+}
+
+func (err ErrQueryFailed) Error() string {
+ return fmt.Sprintf(
+ "could not query provider registry for %s: %s",
+ err.Provider.String(),
+ err.Wrapped.Error(),
+ )
+}
+
+// Unwrap returns the underlying error that occurred when trying to query the
+// registry for the indicated provider.
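+//
+// Because Unwrap is implemented, the standard errors.Is and errors.As
+// helpers can reach the wrapped error, e.g. (hypothetical caller code):
+//
+//     var qf ErrQueryFailed
+//     if errors.As(err, &qf) {
+//         log.Printf("query for %s failed: %s", qf.Provider, errors.Unwrap(qf))
+//     }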
+func (err ErrQueryFailed) Unwrap() error { + return err.Wrapped +} diff --git a/internal/getproviders/filesystem_mirror_source.go b/internal/getproviders/filesystem_mirror_source.go new file mode 100644 index 000000000..e3c5789a8 --- /dev/null +++ b/internal/getproviders/filesystem_mirror_source.go @@ -0,0 +1,299 @@ +package getproviders + +import ( + "fmt" + "log" + "os" + "path/filepath" + "strings" + + svchost "github.com/hashicorp/terraform-svchost" + "github.com/hashicorp/terraform/addrs" +) + +// FilesystemMirrorSource is a source that reads providers and their metadata +// from a directory prefix in the local filesystem. +type FilesystemMirrorSource struct { + baseDir string + + // allPackages caches the result of scanning the baseDir for all available + // packages on the first call that needs package availability information, + // to avoid re-scanning the filesystem on subsequent operations. + allPackages map[addrs.Provider]PackageMetaList +} + +var _ Source = (*FilesystemMirrorSource)(nil) + +// NewFilesystemMirrorSource constructs and returns a new filesystem-based +// mirror source with the given base directory. +func NewFilesystemMirrorSource(baseDir string) *FilesystemMirrorSource { + return &FilesystemMirrorSource{ + baseDir: baseDir, + } +} + +// AvailableVersions scans the directory structure under the source's base +// directory for locally-mirrored packages for the given provider, returning +// a list of version numbers for the providers it found. +func (s *FilesystemMirrorSource) AvailableVersions(provider addrs.Provider) (VersionList, error) { + // s.allPackages is populated if scanAllVersions succeeds + err := s.scanAllVersions() + if err != nil { + return nil, err + } + + // There might be multiple packages for a given version in the filesystem, + // but the contract here is to return distinct versions so we'll dedupe + // them first, then sort them, and then return them. + versionsMap := make(map[Version]struct{}) + for _, m := range s.allPackages[provider] { + versionsMap[m.Version] = struct{}{} + } + ret := make(VersionList, 0, len(versionsMap)) + for v := range versionsMap { + ret = append(ret, v) + } + ret.Sort() + return ret, nil +} + +// PackageMeta checks to see if the source's base directory contains a +// local copy of the distribution package for the given provider version on +// the given target, and returns the metadata about it if so. +func (s *FilesystemMirrorSource) PackageMeta(provider addrs.Provider, version Version, target Platform) (PackageMeta, error) { + // s.allPackages is populated if scanAllVersions succeeds + err := s.scanAllVersions() + if err != nil { + return PackageMeta{}, err + } + + relevantPkgs := s.allPackages[provider].FilterProviderPlatformExactVersion(provider, target, version) + if len(relevantPkgs) == 0 { + // This is the local equivalent of a "404 Not Found" when retrieving + // a particular version from a registry or network mirror. Because + // the caller should've selected a version already found by + // AvailableVersions, the only discriminator that should fail here + // is the target platform, and so our error result assumes that, + // causing the caller to return an error like "This provider version is + // not compatible with aros_riscv". + return PackageMeta{}, ErrPlatformNotSupported{ + Provider: provider, + Version: version, + Platform: target, + } + } + + // It's possible that there could be multiple copies of the same package + // available in the filesystem, if e.g. 
there's both a packed and an + // unpacked variant. For now we assume that the decision between them + // is arbitrary and just take the first one in the result. + return relevantPkgs[0], nil +} + +// AllAvailablePackages scans the directory structure under the source's base +// directory for locally-mirrored packages for all providers, returning a map +// of the discovered packages with the fully-qualified provider names as +// keys. +// +// This is not an operation generally supported by all Source implementations, +// but the filesystem implementation offers it because we also use the +// filesystem mirror source directly to scan our auto-install plugin directory +// and in other automatic discovery situations. +func (s *FilesystemMirrorSource) AllAvailablePackages() (map[addrs.Provider]PackageMetaList, error) { + // s.allPackages is populated if scanAllVersions succeeds + err := s.scanAllVersions() + return s.allPackages, err +} + +func (s *FilesystemMirrorSource) scanAllVersions() error { + if s.allPackages != nil { + // we're distinguishing nil-ness from emptiness here so we can + // recognize when we've scanned the directory without errors, even + // if we found nothing during the scan. + return nil + } + ret := make(map[addrs.Provider]PackageMetaList) + err := filepath.Walk(s.baseDir, func(fullPath string, info os.FileInfo, err error) error { + if err != nil { + return fmt.Errorf("cannot search %s: %s", fullPath, err) + } + + // There are two valid directory structures that we support here... + // Unpacked: registry.terraform.io/hashicorp/aws/2.0.0/linux_amd64 (a directory) + // Packed: registry.terraform.io/hashicorp/aws/terraform-provider-aws_2.0.0_linux_amd64.zip (a file) + // + // Both of these give us enough information to identify the package + // metadata. + fsPath, err := filepath.Rel(s.baseDir, fullPath) + if err != nil { + // This should never happen because the filepath.Walk contract is + // for the paths to include the base path. + log.Printf("[TRACE] FilesystemMirrorSource: ignoring malformed path %q during walk: %s", fullPath, err) + return nil + } + relPath := filepath.ToSlash(fsPath) + parts := strings.Split(relPath, "/") + + if len(parts) < 3 { + // Likely a prefix of a valid path, so we'll ignore it and visit + // the full valid path on a later call. 
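+ // (For example, "registry.terraform.io/hashicorp" alone is only a
+ // namespace prefix, not a complete package path.)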
+ return nil
+ }
+
+ hostnameGiven := parts[0]
+ namespace := parts[1]
+ typeName := parts[2]
+
+ hostname, err := svchost.ForComparison(hostnameGiven)
+ if err != nil {
+ log.Printf("[WARN] local provider path %q contains invalid hostname %q; ignoring", fullPath, hostnameGiven)
+ return nil
+ }
+ var providerAddr addrs.Provider
+ if namespace == addrs.LegacyProviderNamespace {
+ if hostname != addrs.DefaultRegistryHost {
+ log.Printf("[WARN] local provider path %q indicates a legacy provider not on the default registry host; ignoring", fullPath)
+ return nil
+ }
+ providerAddr = addrs.NewLegacyProvider(typeName)
+ } else {
+ providerAddr = addrs.NewProvider(hostname, namespace, typeName)
+ }
+
+ switch len(parts) {
+ case 5: // Might be unpacked layout
+ if !info.IsDir() {
+ return nil // unpacked layout requires a directory
+ }
+
+ versionStr := parts[3]
+ version, err := ParseVersion(versionStr)
+ if err != nil {
+ log.Printf("[WARN] ignoring local provider path %q with invalid version %q: %s", fullPath, versionStr, err)
+ return nil
+ }
+
+ platformStr := parts[4]
+ platform, err := ParsePlatform(platformStr)
+ if err != nil {
+ log.Printf("[WARN] ignoring local provider path %q with invalid platform %q: %s", fullPath, platformStr, err)
+ return nil
+ }
+
+ log.Printf("[TRACE] FilesystemMirrorSource: found %s v%s for %s at %s", providerAddr, version, platform, fullPath)
+
+ meta := PackageMeta{
+ Provider: providerAddr,
+ Version: version,
+
+ // FIXME: How do we populate this?
+ ProtocolVersions: nil,
+ TargetPlatform: platform,
+
+ // Because this is already unpacked, the filename is synthetic
+ // based on the standard naming scheme.
+ Filename: fmt.Sprintf("terraform-provider-%s_%s_%s.zip", providerAddr.Type, version, platform),
+ Location: PackageLocalDir(fullPath),
+
+ // FIXME: What about the SHA256Sum field? As currently specified
+ // it's a hash of the zip file, but this thing is already
+ // unpacked and so we don't have the zip file to hash.
+ }
+ ret[providerAddr] = append(ret[providerAddr], meta)
+
+ case 4: // Might be packed layout
+ if info.IsDir() {
+ return nil // packed layout requires a file
+ }
+
+ filename := filepath.Base(fsPath)
+ // the filename components are matched case-insensitively, and
+ // the normalized form of them is in lowercase so we'll convert
+ // to lowercase for comparison here. (This normalizes only for case,
+ // because that is the primary constraint affecting compatibility
+ // between filesystem implementations on different platforms;
+ // filenames are expected to be pre-normalized and valid in other
+ // regards.)
+ normFilename := strings.ToLower(filename)
+
+ // In the packed layout, the version number and target platform
+ // are derived from the package filename, but only if the
+ // filename has the expected prefix identifying it as a package
+ // for the provider in question, and the suffix identifying it
+ // as a zip file.
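+ // For example, terraform-provider-aws_2.0.0_linux_amd64.zip yields
+ // version 2.0.0 for the linux_amd64 platform.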
+ prefix := "terraform-provider-" + providerAddr.Type + "_"
+ const suffix = ".zip"
+ if !strings.HasPrefix(normFilename, prefix) {
+ log.Printf("[WARN] ignoring file %q as possible package for %s: lacks expected prefix %q", filename, providerAddr, prefix)
+ return nil
+ }
+ if !strings.HasSuffix(normFilename, suffix) {
+ log.Printf("[WARN] ignoring file %q as possible package for %s: lacks expected suffix %q", filename, providerAddr, suffix)
+ return nil
+ }
+
+ // Extract the version and target part of the filename, which
+ // will look like "2.1.0_linux_amd64"
+ infoSlice := normFilename[len(prefix) : len(normFilename)-len(suffix)]
+ infoParts := strings.Split(infoSlice, "_")
+ if len(infoParts) < 3 {
+ log.Printf("[WARN] ignoring file %q as possible package for %s: filename does not include version number, target OS, and target architecture", filename, providerAddr)
+ return nil
+ }
+
+ versionStr := infoParts[0]
+ version, err := ParseVersion(versionStr)
+ if err != nil {
+ log.Printf("[WARN] ignoring local provider path %q with invalid version %q: %s", fullPath, versionStr, err)
+ return nil
+ }
+
+ // We'll reassemble this back into a single string just so we can
+ // easily re-use our existing parser and its normalization rules.
+ platformStr := infoParts[1] + "_" + infoParts[2]
+ platform, err := ParsePlatform(platformStr)
+ if err != nil {
+ log.Printf("[WARN] ignoring local provider path %q with invalid platform %q: %s", fullPath, platformStr, err)
+ return nil
+ }
+
+ log.Printf("[TRACE] FilesystemMirrorSource: found %s v%s for %s at %s", providerAddr, version, platform, fullPath)
+
+ meta := PackageMeta{
+ Provider: providerAddr,
+ Version: version,
+
+ // FIXME: How do we populate this?
+ ProtocolVersions: nil,
+ TargetPlatform: platform,
+
+ // Unlike the unpacked case above, this package is still an archive
+ // on disk, so the filename here is real rather than synthetic.
+ Filename: normFilename, // normalized filename, because this field says what it _should_ be called, not what it _is_ called
+ Location: PackageLocalArchive(fullPath), // non-normalized here, because this is the actual physical location
+
+ // TODO: Also populate the SHA256Sum field. Skipping that
+ // for now because our initial uses of this result --
+ // scanning already-installed providers in local directories,
+ // rather than explicit filesystem mirrors -- doesn't do
+ // any hash verification anyway, and this is consistent with
+ // the FIXME in the unpacked case above even though technically
+ // we _could_ populate SHA256Sum here right now.
+ }
+ ret[providerAddr] = append(ret[providerAddr], meta)
+
+ }
+
+ return nil
+ })
+ if err != nil {
+ return err
+ }
+ // Sort the results to be deterministic (aside from semver build metadata)
+ // and consistent with ordering from other functions.
+ for _, l := range ret { + l.Sort() + } + s.allPackages = ret + return nil +} diff --git a/internal/getproviders/filesystem_mirror_source_test.go b/internal/getproviders/filesystem_mirror_source_test.go new file mode 100644 index 000000000..5757bf97d --- /dev/null +++ b/internal/getproviders/filesystem_mirror_source_test.go @@ -0,0 +1,166 @@ +package getproviders + +import ( + "testing" + + "github.com/apparentlymart/go-versions/versions" + "github.com/google/go-cmp/cmp" + + svchost "github.com/hashicorp/terraform-svchost" + "github.com/hashicorp/terraform/addrs" +) + +func TestFilesystemMirrorSourceAllAvailablePackages(t *testing.T) { + source := NewFilesystemMirrorSource("testdata/filesystem-mirror") + got, err := source.AllAvailablePackages() + if err != nil { + t.Fatal(err) + } + + want := map[addrs.Provider]PackageMetaList{ + nullProvider: { + { + Provider: nullProvider, + Version: versions.MustParseVersion("2.0.0"), + TargetPlatform: Platform{"darwin", "amd64"}, + Filename: "terraform-provider-null_2.0.0_darwin_amd64.zip", + Location: PackageLocalDir("testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/2.0.0/darwin_amd64"), + }, + { + Provider: nullProvider, + Version: versions.MustParseVersion("2.0.0"), + TargetPlatform: Platform{"linux", "amd64"}, + Filename: "terraform-provider-null_2.0.0_linux_amd64.zip", + Location: PackageLocalDir("testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/2.0.0/linux_amd64"), + }, + { + Provider: nullProvider, + Version: versions.MustParseVersion("2.1.0"), + TargetPlatform: Platform{"linux", "amd64"}, + Filename: "terraform-provider-null_2.1.0_linux_amd64.zip", + Location: PackageLocalArchive("testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/terraform-provider-null_2.1.0_linux_amd64.zip"), + }, + { + Provider: nullProvider, + Version: versions.MustParseVersion("2.0.0"), + TargetPlatform: Platform{"windows", "amd64"}, + Filename: "terraform-provider-null_2.0.0_windows_amd64.zip", + Location: PackageLocalDir("testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/2.0.0/windows_amd64"), + }, + }, + randomProvider: { + { + Provider: randomProvider, + Version: versions.MustParseVersion("1.2.0"), + TargetPlatform: Platform{"linux", "amd64"}, + Filename: "terraform-provider-random_1.2.0_linux_amd64.zip", + Location: PackageLocalDir("testdata/filesystem-mirror/registry.terraform.io/hashicorp/random/1.2.0/linux_amd64"), + }, + }, + happycloudProvider: { + { + Provider: happycloudProvider, + Version: versions.MustParseVersion("0.1.0-alpha.2"), + TargetPlatform: Platform{"darwin", "amd64"}, + Filename: "terraform-provider-happycloud_0.1.0-alpha.2_darwin_amd64.zip", + Location: PackageLocalDir("testdata/filesystem-mirror/tfe.example.com/AwesomeCorp/happycloud/0.1.0-alpha.2/darwin_amd64"), + }, + }, + legacyProvider: { + { + Provider: legacyProvider, + Version: versions.MustParseVersion("1.0.0"), + TargetPlatform: Platform{"linux", "amd64"}, + Filename: "terraform-provider-legacy_1.0.0_linux_amd64.zip", + Location: PackageLocalDir("testdata/filesystem-mirror/registry.terraform.io/-/legacy/1.0.0/linux_amd64"), + }, + }, + } + + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("incorrect result\n%s", diff) + } +} + +func TestFilesystemMirrorSourceAvailableVersions(t *testing.T) { + source := NewFilesystemMirrorSource("testdata/filesystem-mirror") + got, err := source.AvailableVersions(nullProvider) + if err != nil { + t.Fatal(err) + } + + want := VersionList{ + versions.MustParseVersion("2.0.0"), + 
versions.MustParseVersion("2.1.0"), + } + + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("incorrect result\n%s", diff) + } +} + +func TestFilesystemMirrorSourcePackageMeta(t *testing.T) { + t.Run("available platform", func(t *testing.T) { + source := NewFilesystemMirrorSource("testdata/filesystem-mirror") + got, err := source.PackageMeta( + nullProvider, versions.MustParseVersion("2.0.0"), Platform{"linux", "amd64"}, + ) + if err != nil { + t.Fatal(err) + } + + want := PackageMeta{ + Provider: nullProvider, + Version: versions.MustParseVersion("2.0.0"), + TargetPlatform: Platform{"linux", "amd64"}, + Filename: "terraform-provider-null_2.0.0_linux_amd64.zip", + Location: PackageLocalDir("testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/2.0.0/linux_amd64"), + } + + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("incorrect result\n%s", diff) + } + }) + t.Run("unavailable platform", func(t *testing.T) { + source := NewFilesystemMirrorSource("testdata/filesystem-mirror") + // We'll request a version that does exist in the fixture directory, + // but for a platform that isn't supported. + _, err := source.PackageMeta( + nullProvider, versions.MustParseVersion("2.0.0"), Platform{"nonexist", "nonexist"}, + ) + + if err == nil { + t.Fatalf("succeeded; want error") + } + + // This specific error type is important so callers can use it to + // generate an actionable error message e.g. by checking to see if + // _any_ versions of this provider support the given platform, or + // similar helpful hints. + wantErr := ErrPlatformNotSupported{ + Provider: nullProvider, + Version: versions.MustParseVersion("2.0.0"), + Platform: Platform{"nonexist", "nonexist"}, + } + if diff := cmp.Diff(wantErr, err); diff != "" { + t.Errorf("incorrect error\n%s", diff) + } + }) +} + +var nullProvider = addrs.Provider{ + Hostname: svchost.Hostname("registry.terraform.io"), + Namespace: "hashicorp", + Type: "null", +} +var randomProvider = addrs.Provider{ + Hostname: svchost.Hostname("registry.terraform.io"), + Namespace: "hashicorp", + Type: "random", +} +var happycloudProvider = addrs.Provider{ + Hostname: svchost.Hostname("tfe.example.com"), + Namespace: "awesomecorp", + Type: "happycloud", +} +var legacyProvider = addrs.NewLegacyProvider("legacy") diff --git a/internal/getproviders/http_mirror_source.go b/internal/getproviders/http_mirror_source.go new file mode 100644 index 000000000..5b5bb80f1 --- /dev/null +++ b/internal/getproviders/http_mirror_source.go @@ -0,0 +1,38 @@ +package getproviders + +import ( + "net/url" + + "github.com/hashicorp/terraform/addrs" +) + +// HTTPMirrorSource is a source that reads provider metadata from a provider +// mirror that is accessible over the HTTP provider mirror protocol. +type HTTPMirrorSource struct { + baseURL *url.URL +} + +var _ Source = (*HTTPMirrorSource)(nil) + +// NewHTTPMirrorSource constructs and returns a new network mirror source with +// the given base URL. The relative URL offsets defined by the HTTP mirror +// protocol will be resolve relative to the given URL. +func NewHTTPMirrorSource(baseURL *url.URL) *HTTPMirrorSource { + return &HTTPMirrorSource{ + baseURL: baseURL, + } +} + +// AvailableVersions retrieves the available versions for the given provider +// from the object's underlying HTTP mirror service. 
+func (s *HTTPMirrorSource) AvailableVersions(provider addrs.Provider) (VersionList, error) { + // TODO: Implement + panic("HTTPMirrorSource.AvailableVersions not yet implemented") +} + +// PackageMeta retrieves metadata for the requested provider package +// from the object's underlying HTTP mirror service. +func (s *HTTPMirrorSource) PackageMeta(provider addrs.Provider, version Version, target Platform) (PackageMeta, error) { + // TODO: Implement + panic("HTTPMirrorSource.PackageMeta not yet implemented") +} diff --git a/internal/getproviders/legacy_lookup.go b/internal/getproviders/legacy_lookup.go new file mode 100644 index 000000000..96901abab --- /dev/null +++ b/internal/getproviders/legacy_lookup.go @@ -0,0 +1,121 @@ +package getproviders + +import ( + "fmt" + + svchost "github.com/hashicorp/terraform-svchost" + + "github.com/hashicorp/terraform/addrs" +) + +// LookupLegacyProvider attempts to resolve a legacy provider address (whose +// registry host and namespace are implied, rather than explicit) into a +// fully-qualified provider address, by asking the main Terraform registry +// to resolve it. +// +// If the given address is not a legacy provider address then it will just be +// returned verbatim without making any outgoing requests. +// +// Legacy provider lookup is possible only if the given source is either a +// *RegistrySource directly or if it is a MultiSource containing a +// *RegistrySource whose selector matching patterns include the +// public registry hostname registry.terraform.io. +// +// This is a backward-compatibility mechanism for compatibility with existing +// configurations that don't include explicit provider source addresses. New +// configurations should not rely on it, and this fallback mechanism is +// likely to be removed altogether in a future Terraform version. +func LookupLegacyProvider(addr addrs.Provider, source Source) (addrs.Provider, error) { + if addr.Namespace != "-" { + return addr, nil + } + if addr.Hostname != defaultRegistryHost { // condition above assures namespace is also "-" + // Legacy providers must always belong to the default registry host. + return addrs.Provider{}, fmt.Errorf("invalid provider type %q: legacy provider addresses must always belong to %s", addr, defaultRegistryHost) + } + + // Now we need to derive a suitable *RegistrySource from the given source, + // either directly or indirectly. This will not be possible if the user + // has configured Terraform to disable direct installation from + // registry.terraform.io; in that case, fully-qualified provider addresses + // are always required. + regSource := findLegacyProviderLookupSource(addr.Hostname, source) + if regSource == nil { + // This error message is assuming that the given Source was produced + // based on the CLI configuration, which isn't necessarily true but + // is true in all cases where this error message will ultimately be + // presented to an end-user, so good enough for now. 
+ return addrs.Provider{}, fmt.Errorf("unqualified provider type %q cannot be resolved because direct installation from %s is disabled in the CLI configuration; declare an explicit provider namespace for this provider", addr.Type, addr.Hostname) + } + + defaultNamespace, err := regSource.LookupLegacyProviderNamespace(addr.Hostname, addr.Type) + if err != nil { + return addrs.Provider{}, err + } + + return addrs.Provider{ + Hostname: addr.Hostname, + Namespace: defaultNamespace, + Type: addr.Type, + }, nil +} + +// findLegacyProviderLookupSource tries to find a *RegistrySource that can talk +// to the given registry host in the given Source. It might be given directly, +// or it might be given indirectly via a MultiSource where the selector +// includes a wildcard for registry.terraform.io. +// +// Returns nil if the given source does not have any configured way to talk +// directly to the given host. +// +// If the given source contains multiple sources that can talk to the given +// host directly, the first one in the sequence takes preference. In practice +// it's pointless to have two direct installation sources that match the same +// hostname anyway, so this shouldn't arise in normal use. +func findLegacyProviderLookupSource(host svchost.Hostname, source Source) *RegistrySource { + switch source := source.(type) { + + case *RegistrySource: + // Easy case: the source is a registry source directly, and so we'll + // just use it. + return source + + case MultiSource: + // Trickier case: if it's a multisource then we need to scan over + // its selectors until we find one that is a *RegistrySource _and_ + // that is configured to accept arbitrary providers from the + // given hostname. + + // For our matching purposes we'll use an address that would not be + // valid as a real provider FQN and thus can only match a selector + // that has no filters at all or a selector that wildcards everything + // except the hostname, like "registry.terraform.io/*/*" + matchAddr := addrs.Provider{ + Hostname: host, + // Other fields are intentionally left empty, to make this invalid + // as a specific provider address. + } + + for _, selector := range source { + // If this source has suitable matching patterns to install from + // the given hostname then we'll recursively search inside it + // for *RegistrySource objects. + if selector.CanHandleProvider(matchAddr) { + ret := findLegacyProviderLookupSource(host, selector.Source) + if ret != nil { + return ret + } + } + } + + // If we get here then there were no selectors that are both configured + // to handle modules from the given hostname and that are registry + // sources, so we fail. + return nil + + default: + // This source cannot be and cannot contain a *RegistrySource, so + // we fail. 
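+ // (For example, a FilesystemMirrorSource or HTTPMirrorSource can
+ // never service a legacy namespace lookup.)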
+ return nil
+ }
+}
diff --git a/internal/getproviders/legacy_lookup_test.go b/internal/getproviders/legacy_lookup_test.go
new file mode 100644
index 000000000..eac853b63
--- /dev/null
+++ b/internal/getproviders/legacy_lookup_test.go
@@ -0,0 +1,29 @@
+package getproviders
+
+import (
+ "testing"
+
+ "github.com/hashicorp/terraform/addrs"
+)
+
+func TestLookupLegacyProvider(t *testing.T) {
+ source, _, close := testRegistrySource(t)
+ defer close()
+
+ got, err := LookupLegacyProvider(
+ addrs.NewLegacyProvider("legacy"),
+ source,
+ )
+ if err != nil {
+ t.Fatalf("unexpected error: %s", err)
+ }
+
+ want := addrs.Provider{
+ Hostname: defaultRegistryHost,
+ Namespace: "legacycorp",
+ Type: "legacy",
+ }
+ if got != want {
+ t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, want)
+ }
+}
diff --git a/internal/getproviders/memoize_source.go b/internal/getproviders/memoize_source.go
new file mode 100644
index 000000000..19e08d58c
--- /dev/null
+++ b/internal/getproviders/memoize_source.go
@@ -0,0 +1,96 @@
+package getproviders
+
+import (
+ "sync"
+
+ "github.com/hashicorp/terraform/addrs"
+)
+
+// MemoizeSource is a Source that wraps another Source and remembers its
+// results so that they can be returned more quickly on future calls to the
+// same object.
+//
+// Each MemoizeSource maintains a cache of the responses it has previously
+// seen. All responses are retained for the remaining lifetime of the object.
+// Errors from the underlying source are also cached, and so subsequent calls
+// with the same arguments will always produce the same errors.
+//
+// A MemoizeSource can be called concurrently, with incoming requests processed
+// sequentially.
+type MemoizeSource struct {
+ underlying Source
+ availableVersions map[addrs.Provider]memoizeAvailableVersionsRet
+ packageMetas map[memoizePackageMetaCall]memoizePackageMetaRet
+ mu sync.Mutex
+}
+
+type memoizeAvailableVersionsRet struct {
+ VersionList VersionList
+ Err error
+}
+
+type memoizePackageMetaCall struct {
+ Provider addrs.Provider
+ Version Version
+ Target Platform
+}
+
+type memoizePackageMetaRet struct {
+ PackageMeta PackageMeta
+ Err error
+}
+
+var _ Source = (*MemoizeSource)(nil)
+
+// NewMemoizeSource constructs and returns a new MemoizeSource that wraps
+// the given underlying source and memoizes its results.
+func NewMemoizeSource(underlying Source) *MemoizeSource {
+ return &MemoizeSource{
+ underlying: underlying,
+ availableVersions: make(map[addrs.Provider]memoizeAvailableVersionsRet),
+ packageMetas: make(map[memoizePackageMetaCall]memoizePackageMetaRet),
+ }
+}
+
+// AvailableVersions requests the available versions from the underlying source
+// and caches them before returning them, or on subsequent calls returns the
+// result directly from the cache.
+func (s *MemoizeSource) AvailableVersions(provider addrs.Provider) (VersionList, error) {
+ s.mu.Lock()
+ defer s.mu.Unlock()
+
+ if existing, exists := s.availableVersions[provider]; exists {
+ return existing.VersionList, existing.Err
+ }
+
+ ret, err := s.underlying.AvailableVersions(provider)
+ s.availableVersions[provider] = memoizeAvailableVersionsRet{
+ VersionList: ret,
+ Err: err,
+ }
+ return ret, err
+}
+
+// PackageMeta requests package metadata from the underlying source and caches
+// the result before returning it, or on subsequent calls returns the result
+// directly from the cache.
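+//
+// As an illustrative sketch, two identical calls consult the underlying
+// source only once:
+//
+//     meta1, err1 := s.PackageMeta(provider, version, platform) // queries the underlying source
+//     meta2, err2 := s.PackageMeta(provider, version, platform) // answered from the cache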
+func (s *MemoizeSource) PackageMeta(provider addrs.Provider, version Version, target Platform) (PackageMeta, error) {
+	s.mu.Lock()
+	defer s.mu.Unlock()
+
+	key := memoizePackageMetaCall{
+		Provider: provider,
+		Version:  version,
+		Target:   target,
+	}
+	if existing, exists := s.packageMetas[key]; exists {
+		return existing.PackageMeta, existing.Err
+	}
+
+	ret, err := s.underlying.PackageMeta(provider, version, target)
+	s.packageMetas[key] = memoizePackageMetaRet{
+		PackageMeta: ret,
+		Err:         err,
+	}
+	return ret, err
+}
diff --git a/internal/getproviders/multi_source.go b/internal/getproviders/multi_source.go
new file mode 100644
index 000000000..c3bd29e77
--- /dev/null
+++ b/internal/getproviders/multi_source.go
@@ -0,0 +1,162 @@
+package getproviders
+
+import (
+	"fmt"
+	"regexp"
+	"strings"
+
+	svchost "github.com/hashicorp/terraform-svchost"
+
+	"github.com/hashicorp/terraform/addrs"
+)
+
+// MultiSource is a Source that wraps a series of other sources and combines
+// their sets of available providers and provider versions.
+//
+// A MultiSource consists of a sequence of selectors that each specify an
+// underlying source to query and a set of matching patterns to decide which
+// providers can be retrieved from which sources. If multiple selectors find
+// a given provider version then the earliest one in the sequence takes
+// priority for deciding the package metadata for the provider.
+//
+// For underlying sources that make network requests, consider wrapping each
+// one in a MemoizeSource so that availability information retrieved in
+// AvailableVersions can be reused in PackageMeta.
type MultiSource []MultiSourceSelector
+
+var _ Source = MultiSource(nil)
+
+// AvailableVersions retrieves all of the versions of the given provider
+// that are available across all of the underlying selectors, while respecting
+// each selector's matching patterns.
+func (s MultiSource) AvailableVersions(provider addrs.Provider) (VersionList, error) {
+	// TODO: Implement
+	panic("MultiSource.AvailableVersions not yet implemented")
+}
+
+// PackageMeta retrieves the package metadata for the given provider from the
+// first selector that indicates support for it.
+func (s MultiSource) PackageMeta(provider addrs.Provider, version Version, target Platform) (PackageMeta, error) {
+	// TODO: Implement
+	panic("MultiSource.PackageMeta not yet implemented")
+}
+
+// MultiSourceSelector is an element of the source selection configuration on
+// MultiSource. A MultiSource has zero or more of these to configure which
+// underlying sources it should consult for a given provider.
+type MultiSourceSelector struct {
+	// Source is the underlying source that this selector applies to.
+	Source Source
+
+	// Include and Exclude are sets of provider matching patterns that
+	// together define which providers are eligible to be potentially
+	// installed from the corresponding Source.
+	Include, Exclude MultiSourceMatchingPatterns
+}
+
+// MultiSourceMatchingPatterns is a set of patterns that together define a
+// set of providers by matching on the segments of the provider FQNs.
+//
+// The Provider address values in a MultiSourceMatchingPatterns are special in
+// that any of Hostname, Namespace, or Type can be getproviders.Wildcard
+// to indicate that any concrete value is permitted for that segment.
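+//
+// For example (an illustrative pattern, not one defined in this changeset),
+// a pattern like "registry.terraform.io/hashicorp/*" would match every
+// provider in the hashicorp namespace on the public registry.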
+type MultiSourceMatchingPatterns []addrs.Provider
+
+// ParseMultiSourceMatchingPatterns parses a slice of strings containing the
+// string form of provider matching patterns and, if all the given strings
+// are valid, returns the corresponding MultiSourceMatchingPatterns value.
+func ParseMultiSourceMatchingPatterns(strs []string) (MultiSourceMatchingPatterns, error) {
+	if len(strs) == 0 {
+		return nil, nil
+	}
+
+	ret := make(MultiSourceMatchingPatterns, len(strs))
+	for i, str := range strs {
+		parts := strings.Split(str, "/")
+		if len(parts) < 2 || len(parts) > 3 {
+			return nil, fmt.Errorf("invalid provider matching pattern %q: must have either two or three slash-separated segments", str)
+		}
+		host := defaultRegistryHost
+		explicitHost := len(parts) == 3
+		if explicitHost {
+			givenHost := parts[0]
+			if givenHost == "*" {
+				host = svchost.Hostname(Wildcard)
+			} else {
+				normalHost, err := svchost.ForComparison(givenHost)
+				if err != nil {
+					return nil, fmt.Errorf("invalid hostname in provider matching pattern %q: %s", str, err)
+				}
+				host = normalHost
+			}
+
+			// The remaining code below deals only with the namespace/type
+			// portions, so we discard the hostname segment.
+			parts = parts[1:]
+		}
+
+		if !validProviderNamePattern.MatchString(parts[0]) {
+			return nil, fmt.Errorf("invalid registry namespace %q in provider matching pattern %q: must either be the wildcard * or a literal namespace", parts[0], str)
+		}
+		if !validProviderNamePattern.MatchString(parts[1]) {
+			return nil, fmt.Errorf("invalid provider type %q in provider matching pattern %q: must either be the wildcard * or a provider type name", parts[1], str)
+		}
+
+		ret[i] = addrs.Provider{
+			Hostname:  host,
+			Namespace: parts[0],
+			Type:      parts[1],
+		}
+
+		if ret[i].Hostname == svchost.Hostname(Wildcard) && !(ret[i].Namespace == Wildcard && ret[i].Type == Wildcard) {
+			return nil, fmt.Errorf("invalid provider matching pattern %q: hostname can be a wildcard only if both namespace and provider type are also wildcards", str)
+		}
+		if ret[i].Namespace == Wildcard && ret[i].Type != Wildcard {
+			return nil, fmt.Errorf("invalid provider matching pattern %q: namespace can be a wildcard only if the provider type is also a wildcard", str)
+		}
+	}
+	return ret, nil
+}
+
+// CanHandleProvider returns true if and only if the given provider address
+// is both included by the selector's include patterns and _not_ excluded
+// by its exclude patterns.
+//
+// The absence of any include patterns is treated the same as a pattern
+// that matches all addresses. Exclusions take priority over inclusions.
+func (s MultiSourceSelector) CanHandleProvider(addr addrs.Provider) bool {
+	switch {
+	case s.Exclude.MatchesProvider(addr):
+		return false
+	case len(s.Include) > 0:
+		return s.Include.MatchesProvider(addr)
+	default:
+		return true
+	}
+}
+
+// MatchesProvider tests whether the receiving matching patterns match with
+// the given concrete provider address.
+func (ps MultiSourceMatchingPatterns) MatchesProvider(addr addrs.Provider) bool {
+	for _, pattern := range ps {
+		hostMatch := (pattern.Hostname == svchost.Hostname(Wildcard) || pattern.Hostname == addr.Hostname)
+		namespaceMatch := (pattern.Namespace == Wildcard || pattern.Namespace == addr.Namespace)
+		typeMatch := (pattern.Type == Wildcard || pattern.Type == addr.Type)
+		if hostMatch && namespaceMatch && typeMatch {
+			return true
+		}
+	}
+	return false
+}
+
+// Wildcard is a string value representing a wildcard element in the Include
+// and Exclude patterns used with MultiSource.
+// It is not valid to use Wildcard anywhere else.
+const Wildcard string = "*"
+
+// We'll read the default registry host from over in the addrs package, to
+// avoid duplicating it. A "default" provider uses the default registry host
+// by definition.
+var defaultRegistryHost = addrs.NewDefaultProvider("placeholder").Hostname
+
+var validProviderNamePattern = regexp.MustCompile("^([a-zA-Z0-9_-]+|\\*)$")
diff --git a/internal/getproviders/registry_client.go b/internal/getproviders/registry_client.go
new file mode 100644
index 000000000..a1d1532d7
--- /dev/null
+++ b/internal/getproviders/registry_client.go
@@ -0,0 +1,320 @@
+package getproviders
+
+import (
+	"encoding/hex"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"net/http"
+	"net/url"
+	"path"
+	"time"
+
+	"github.com/apparentlymart/go-versions/versions"
+	svchost "github.com/hashicorp/terraform-svchost"
+	svcauth "github.com/hashicorp/terraform-svchost/auth"
+
+	"github.com/hashicorp/terraform/addrs"
+	"github.com/hashicorp/terraform/httpclient"
+	"github.com/hashicorp/terraform/version"
+)
+
+const terraformVersionHeader = "X-Terraform-Version"
+
+// registryClient is a client for the provider registry protocol that is
+// specialized only for the needs of this package. It's not intended as a
+// general registry API client.
+type registryClient struct {
+	baseURL *url.URL
+	creds   svcauth.HostCredentials
+
+	httpClient *http.Client
+}
+
+func newRegistryClient(baseURL *url.URL, creds svcauth.HostCredentials) *registryClient {
+	httpClient := httpclient.New()
+	httpClient.Timeout = 10 * time.Second
+
+	return &registryClient{
+		baseURL:    baseURL,
+		creds:      creds,
+		httpClient: httpClient,
+	}
+}
+
+// ProviderVersions returns the raw version strings produced by the registry
+// for the given provider.
+//
+// The returned error will be ErrProviderNotKnown if the registry responds
+// with 404 Not Found to indicate that the namespace or provider type are
+// not known, ErrUnauthorized if the registry responds with 401 or 403 status
+// codes, or ErrQueryFailed for any other protocol or operational problem.
+func (c *registryClient) ProviderVersions(addr addrs.Provider) ([]string, error) {
+	endpointPath, err := url.Parse(path.Join(addr.Namespace, addr.Type, "versions"))
+	if err != nil {
+		// Should never happen because we're constructing this from
+		// already-validated components.
+		return nil, err
+	}
+	endpointURL := c.baseURL.ResolveReference(endpointPath)
+
+	req, err := http.NewRequest("GET", endpointURL.String(), nil)
+	if err != nil {
+		return nil, err
+	}
+	c.addHeadersToRequest(req)
+
+	resp, err := c.httpClient.Do(req)
+	if err != nil {
+		return nil, c.errQueryFailed(addr, err)
+	}
+	defer resp.Body.Close()
+
+	switch resp.StatusCode {
+	case http.StatusOK:
+		// Great!
+	case http.StatusNotFound:
+		return nil, ErrProviderNotKnown{
+			Provider: addr,
+		}
+	case http.StatusUnauthorized, http.StatusForbidden:
+		return nil, c.errUnauthorized(addr.Hostname)
+	default:
+		return nil, c.errQueryFailed(addr, errors.New(resp.Status))
+	}
+
+	// We ignore everything except the version numbers here because our goal
+	// is to find out which versions are available _at all_. Which ones are
+	// compatible with the current Terraform becomes relevant only once we've
+	// selected one, at which point we'll return an error if the selected one
+	// is incompatible.
+	//
+	// We intentionally produce an error on incompatibility, rather than
+	// silently ignoring an incompatible version, in order to give the user
+	// explicit feedback about why their selection wasn't valid and allow them
+	// to decide whether to fix that by changing the selection or by some other
+	// action such as upgrading Terraform, using a different OS to run
+	// Terraform, etc. Changes that affect compatibility are considered
+	// breaking changes from a provider API standpoint, so provider teams
+	// should change compatibility only in new major versions.
+	type ResponseBody struct {
+		Versions []struct {
+			Version string `json:"version"`
+		} `json:"versions"`
+	}
+	var body ResponseBody
+
+	dec := json.NewDecoder(resp.Body)
+	if err := dec.Decode(&body); err != nil {
+		return nil, c.errQueryFailed(addr, err)
+	}
+
+	if len(body.Versions) == 0 {
+		return nil, nil
+	}
+
+	ret := make([]string, len(body.Versions))
+	for i, v := range body.Versions {
+		ret[i] = v.Version
+	}
+	return ret, nil
+}
+
+// PackageMeta returns metadata about a distribution package for a
+// provider.
+//
+// The returned error will be ErrPlatformNotSupported if the registry responds
+// with 404 Not Found, under the assumption that the caller previously checked
+// that the provider and version are valid. It will return ErrUnauthorized if
+// the registry responds with 401 or 403 status codes, or ErrQueryFailed for
+// any other protocol or operational problem.
+func (c *registryClient) PackageMeta(provider addrs.Provider, version Version, target Platform) (PackageMeta, error) {
+	endpointPath, err := url.Parse(path.Join(
+		provider.Namespace,
+		provider.Type,
+		version.String(),
+		"download",
+		target.OS,
+		target.Arch,
+	))
+	if err != nil {
+		// Should never happen because we're constructing this from
+		// already-validated components.
+		return PackageMeta{}, err
+	}
+	endpointURL := c.baseURL.ResolveReference(endpointPath)
+
+	req, err := http.NewRequest("GET", endpointURL.String(), nil)
+	if err != nil {
+		return PackageMeta{}, err
+	}
+	c.addHeadersToRequest(req)
+
+	resp, err := c.httpClient.Do(req)
+	if err != nil {
+		return PackageMeta{}, c.errQueryFailed(provider, err)
+	}
+	defer resp.Body.Close()
+
+	switch resp.StatusCode {
+	case http.StatusOK:
+		// Great!
+	case http.StatusNotFound:
+		return PackageMeta{}, ErrPlatformNotSupported{
+			Provider: provider,
+			Version:  version,
+			Platform: target,
+		}
+	case http.StatusUnauthorized, http.StatusForbidden:
+		return PackageMeta{}, c.errUnauthorized(provider.Hostname)
+	default:
+		return PackageMeta{}, c.errQueryFailed(provider, errors.New(resp.Status))
+	}
+
+	type ResponseBody struct {
+		Protocols   []string `json:"protocols"`
+		OS          string   `json:"os"`
+		Arch        string   `json:"arch"`
+		Filename    string   `json:"filename"`
+		DownloadURL string   `json:"download_url"`
+		SHA256Sum   string   `json:"shasum"`
+
+		// TODO: Other metadata for signature checking
+	}
+	var body ResponseBody
+
+	dec := json.NewDecoder(resp.Body)
+	if err := dec.Decode(&body); err != nil {
+		return PackageMeta{}, c.errQueryFailed(provider, err)
+	}
+
+	var protoVersions VersionList
+	for _, versionStr := range body.Protocols {
+		v, err := versions.ParseVersion(versionStr)
+		if err != nil {
+			return PackageMeta{}, c.errQueryFailed(
+				provider,
+				fmt.Errorf("registry response includes invalid version string %q: %s", versionStr, err),
+			)
+		}
+		protoVersions = append(protoVersions, v)
+	}
+	protoVersions.Sort()
+
+	downloadURL, err := url.Parse(body.DownloadURL)
+	if err != nil {
+		return PackageMeta{}, fmt.Errorf("registry response includes invalid download URL: %s", err)
+	}
+	downloadURL = resp.Request.URL.ResolveReference(downloadURL)
+	if downloadURL.Scheme != "http" && downloadURL.Scheme != "https" {
+		return PackageMeta{}, fmt.Errorf("registry response includes invalid download URL: must use http or https scheme")
+	}
+
+	ret := PackageMeta{
+		Provider:         provider,
+		Version:          version,
+		ProtocolVersions: protoVersions,
+		TargetPlatform: Platform{
+			OS:   body.OS,
+			Arch: body.Arch,
+		},
+		Filename: body.Filename,
+		Location: PackageHTTPURL(downloadURL.String()),
+		// SHA256Sum is populated below
+	}
+
+	if len(body.SHA256Sum) != len(ret.SHA256Sum)*2 {
+		return PackageMeta{}, c.errQueryFailed(
+			provider,
+			fmt.Errorf("registry response includes invalid SHA256 hash %q: wrong length", body.SHA256Sum),
+		)
+	}
+	_, err = hex.Decode(ret.SHA256Sum[:], []byte(body.SHA256Sum))
+	if err != nil {
+		return PackageMeta{}, c.errQueryFailed(
+			provider,
+			fmt.Errorf("registry response includes invalid SHA256 hash %q: %s", body.SHA256Sum, err),
+		)
+	}
+
+	return ret, nil
+}
+
+// LegacyProviderDefaultNamespace returns the default namespace produced by
+// the registry when asked about the given unqualified provider type name.
+// The returned namespace string is taken verbatim from the registry's
+// response.
+//
+// This method exists only to allow compatibility with unqualified names
+// in older configurations. New configurations should be written so as not to
+// depend on it.
+func (c *registryClient) LegacyProviderDefaultNamespace(typeName string) (string, error) {
+	endpointPath, err := url.Parse(path.Join("-", typeName))
+	if err != nil {
+		// Should never happen because we're constructing this from
+		// already-validated components.
+		return "", err
+	}
+	endpointURL := c.baseURL.ResolveReference(endpointPath)
+
+	req, err := http.NewRequest("GET", endpointURL.String(), nil)
+	if err != nil {
+		return "", err
+	}
+	c.addHeadersToRequest(req)
+
+	// This is just to give us something to return in error messages. It's
+	// not a proper provider address.
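+	// (addrs.NewLegacyProvider is assumed to fill in the default registry
+	// host and the legacy placeholder namespace for the given type; the
+	// result is used only as a label in error values.)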
+	placeholderProviderAddr := addrs.NewLegacyProvider(typeName)
+
+	resp, err := c.httpClient.Do(req)
+	if err != nil {
+		return "", c.errQueryFailed(placeholderProviderAddr, err)
+	}
+	defer resp.Body.Close()
+
+	switch resp.StatusCode {
+	case http.StatusOK:
+		// Great!
+	case http.StatusNotFound:
+		return "", ErrProviderNotKnown{
+			Provider: placeholderProviderAddr,
+		}
+	case http.StatusUnauthorized, http.StatusForbidden:
+		return "", c.errUnauthorized(placeholderProviderAddr.Hostname)
+	default:
+		return "", c.errQueryFailed(placeholderProviderAddr, errors.New(resp.Status))
+	}
+
+	type ResponseBody struct {
+		Namespace string
+	}
+	var body ResponseBody
+
+	dec := json.NewDecoder(resp.Body)
+	if err := dec.Decode(&body); err != nil {
+		return "", c.errQueryFailed(placeholderProviderAddr, err)
+	}
+
+	return body.Namespace, nil
+}
+
+func (c *registryClient) addHeadersToRequest(req *http.Request) {
+	if c.creds != nil {
+		c.creds.PrepareRequest(req)
+	}
+	req.Header.Set(terraformVersionHeader, version.String())
+}
+
+func (c *registryClient) errQueryFailed(provider addrs.Provider, err error) error {
+	return ErrQueryFailed{
+		Provider: provider,
+		Wrapped:  err,
+	}
+}
+
+func (c *registryClient) errUnauthorized(hostname svchost.Hostname) error {
+	return ErrUnauthorized{
+		Hostname:        hostname,
+		HaveCredentials: c.creds != nil,
+	}
+}
diff --git a/internal/getproviders/registry_client_test.go b/internal/getproviders/registry_client_test.go
new file mode 100644
index 000000000..2848652af
--- /dev/null
+++ b/internal/getproviders/registry_client_test.go
@@ -0,0 +1,179 @@
+package getproviders
+
+import (
+	"log"
+	"net/http"
+	"net/http/httptest"
+	"strings"
+	"testing"
+
+	svchost "github.com/hashicorp/terraform-svchost"
+	disco "github.com/hashicorp/terraform-svchost/disco"
+)
+
+// testServices starts up a local HTTP server running a fake provider registry
+// service and returns a service discovery object pre-configured to consider
+// the host "example.com" to be served by the fake registry service.
+//
+// The returned discovery object also knows the hostname "not.example.com",
+// which does not have a provider registry at all, and "too-new.example.com",
+// which has a "providers.v99" service that is inoperable but can be useful
+// for testing the error reporting for an unsupported protocol version. It
+// also knows "fails.example.com", which refers to an endpoint that doesn't
+// correctly speak HTTP, to simulate a protocol error.
+//
+// The returned cleanup function must be called at the end of a test function
+// to shut down the test server. After you call that function, the discovery
+// object becomes useless.
+func testServices(t *testing.T) (services *disco.Disco, baseURL string, cleanup func()) {
+	server := httptest.NewServer(http.HandlerFunc(fakeRegistryHandler))
+
+	services = disco.New()
+	services.ForceHostServices(svchost.Hostname("example.com"), map[string]interface{}{
+		"providers.v1": server.URL + "/providers/v1/",
+	})
+	services.ForceHostServices(svchost.Hostname("not.example.com"), map[string]interface{}{})
+	services.ForceHostServices(svchost.Hostname("too-new.example.com"), map[string]interface{}{
+		// This service doesn't actually work; it's here only to be
+		// detected as "too new" by the discovery logic.
+ "providers.v99": server.URL + "/providers/v99/", + }) + services.ForceHostServices(svchost.Hostname("fails.example.com"), map[string]interface{}{ + "providers.v1": server.URL + "/fails-immediately/", + }) + + // We'll also permit registry.terraform.io here just because it's our + // default and has some unique features that are not allowed on any other + // hostname. It behaves the same as example.com, which should be preferred + // if you're not testing something specific to the default registry in order + // to ensure that most things are hostname-agnostic. + services.ForceHostServices(svchost.Hostname("registry.terraform.io"), map[string]interface{}{ + "providers.v1": server.URL + "/providers/v1/", + }) + + return services, server.URL, func() { + server.Close() + } +} + +// testRegistrySource is a wrapper around testServices that uses the created +// discovery object to produce a Source instance that is ready to use with the +// fake registry services. +// +// As with testServices, the second return value is a function to call at the end +// of your test in order to shut down the test server. +func testRegistrySource(t *testing.T) (source *RegistrySource, baseURL string, cleanup func()) { + services, baseURL, close := testServices(t) + source = NewRegistrySource(services) + return source, baseURL, close +} + +func fakeRegistryHandler(resp http.ResponseWriter, req *http.Request) { + path := req.URL.EscapedPath() + if strings.HasPrefix(path, "/fails-immediately/") { + // Here we take over the socket and just close it immediately, to + // simulate one possible way a server might not be an HTTP server. + hijacker, ok := resp.(http.Hijacker) + if !ok { + // Not hijackable, so we'll just fail normally. + // If this happens, tests relying on this will fail. + resp.WriteHeader(500) + resp.Write([]byte(`cannot hijack`)) + return + } + conn, _, err := hijacker.Hijack() + if err != nil { + resp.WriteHeader(500) + resp.Write([]byte(`hijack failed`)) + return + } + conn.Close() + return + } + + if !strings.HasPrefix(path, "/providers/v1/") { + resp.WriteHeader(404) + resp.Write([]byte(`not a provider registry endpoint`)) + return + } + + pathParts := strings.Split(path, "/")[3:] + if len(pathParts) < 2 { + resp.WriteHeader(404) + resp.Write([]byte(`unexpected number of path parts`)) + return + } + log.Printf("[TRACE] fake provider registry request for %#v", pathParts) + if len(pathParts) == 2 { + switch pathParts[0] + "/" + pathParts[1] { + + case "-/legacy": + // NOTE: This legacy lookup endpoint is specific to + // registry.terraform.io and not expected to work on any other + // registry host. + resp.Header().Set("Content-Type", "application/json") + resp.WriteHeader(200) + resp.Write([]byte(`{"namespace":"legacycorp"}`)) + + default: + resp.WriteHeader(404) + resp.Write([]byte(`unknown namespace or provider type for direct lookup`)) + } + } + + if len(pathParts) < 3 { + resp.WriteHeader(404) + resp.Write([]byte(`unexpected number of path parts`)) + return + } + + if pathParts[2] == "versions" { + if len(pathParts) != 3 { + resp.WriteHeader(404) + resp.Write([]byte(`extraneous path parts`)) + return + } + + switch pathParts[0] + "/" + pathParts[1] { + case "awesomesauce/happycloud": + resp.Header().Set("Content-Type", "application/json") + resp.WriteHeader(200) + // Note that these version numbers are intentionally misordered + // so we can test that the client-side code places them in the + // correct order (lowest precedence first). 
+			resp.Write([]byte(`{"versions":[{"version":"1.2.0"}, {"version":"1.0.0"}]}`))
+		case "weaksauce/no-versions":
+			resp.Header().Set("Content-Type", "application/json")
+			resp.WriteHeader(200)
+			resp.Write([]byte(`{"versions":[]}`))
+		default:
+			resp.WriteHeader(404)
+			resp.Write([]byte(`unknown namespace or provider type`))
+		}
+		return
+	}
+
+	if len(pathParts) == 6 && pathParts[3] == "download" {
+		switch pathParts[0] + "/" + pathParts[1] {
+		case "awesomesauce/happycloud":
+			if pathParts[4] == "nonexist" {
+				resp.WriteHeader(404)
+				resp.Write([]byte(`unsupported OS`))
+				return
+			}
+			resp.Header().Set("Content-Type", "application/json")
+			resp.WriteHeader(200)
+			resp.Write([]byte(`{"protocols":["5.0"],"os":"` + pathParts[4] + `","arch":"` + pathParts[5] + `","filename":"happycloud_` + pathParts[2] + `.zip","download_url":"/pkg/happycloud_` + pathParts[2] + `.zip","shasum":"000000000000000000000000000000000000000000000000000000000000f00d"}`))
+		default:
+			resp.WriteHeader(404)
+			resp.Write([]byte(`unknown namespace/provider/version/architecture`))
+		}
+		return
+	}
+
+	resp.WriteHeader(404)
+	resp.Write([]byte(`unrecognized path scheme`))
+}
diff --git a/internal/getproviders/registry_source.go b/internal/getproviders/registry_source.go
new file mode 100644
index 000000000..301f431a1
--- /dev/null
+++ b/internal/getproviders/registry_source.go
@@ -0,0 +1,155 @@
+package getproviders
+
+import (
+	"fmt"
+
+	svchost "github.com/hashicorp/terraform-svchost"
+	disco "github.com/hashicorp/terraform-svchost/disco"
+
+	"github.com/hashicorp/terraform/addrs"
+)
+
+// RegistrySource is a Source that knows how to find and install providers from
+// their originating provider registries.
+type RegistrySource struct {
+	services *disco.Disco
+}
+
+var _ Source = (*RegistrySource)(nil)
+
+// NewRegistrySource creates and returns a new source that will install
+// providers from their originating provider registries.
+func NewRegistrySource(services *disco.Disco) *RegistrySource {
+	return &RegistrySource{
+		services: services,
+	}
+}
+
+// AvailableVersions returns all of the versions available for the provider
+// with the given address, or an error if that result cannot be determined.
+//
+// If the request fails, the returned error might be a value of
+// ErrHostNoProviders, ErrHostUnreachable, ErrUnauthenticated,
+// ErrProviderNotKnown, or ErrQueryFailed. Callers must be defensive and
+// expect errors of other types too, to allow for future expansion.
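+//
+// The returned list is sorted with lowest version precedence first, so
+// callers that want the newest acceptable version can scan from the end.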
+func (s *RegistrySource) AvailableVersions(provider addrs.Provider) (VersionList, error) {
+	client, err := s.registryClient(provider.Hostname)
+	if err != nil {
+		return nil, err
+	}
+
+	versionStrs, err := client.ProviderVersions(provider)
+	if err != nil {
+		return nil, err
+	}
+
+	if len(versionStrs) == 0 {
+		return nil, nil
+	}
+
+	ret := make(VersionList, len(versionStrs))
+	for i, str := range versionStrs {
+		v, err := ParseVersion(str)
+		if err != nil {
+			return nil, ErrQueryFailed{
+				Provider: provider,
+				Wrapped:  fmt.Errorf("registry response includes invalid version string %q: %s", str, err),
+			}
+		}
+		ret[i] = v
+	}
+	ret.Sort() // lowest precedence first, preserving order when equal precedence
+	return ret, nil
+}
+
+// PackageMeta returns metadata about the location and capabilities of
+// a distribution package for a particular provider at a particular version
+// targeting a particular platform.
+//
+// Callers of PackageMeta should first call AvailableVersions and pass
+// one of the resulting versions to this function. This function cannot
+// distinguish between a version that is not available and an unsupported
+// target platform, so if it encounters either case it will return an error
+// suggesting that the target platform isn't supported under the assumption
+// that the caller already checked that the version is available at all.
+//
+// To find a package suitable for the platform where the provider installation
+// process is running, set the "target" argument to
+// getproviders.CurrentPlatform.
+//
+// If the request fails, the returned error might be a value of
+// ErrHostNoProviders, ErrHostUnreachable, ErrUnauthenticated,
+// ErrPlatformNotSupported, or ErrQueryFailed. Callers must be defensive and
+// expect errors of other types too, to allow for future expansion.
+func (s *RegistrySource) PackageMeta(provider addrs.Provider, version Version, target Platform) (PackageMeta, error) {
+	client, err := s.registryClient(provider.Hostname)
+	if err != nil {
+		return PackageMeta{}, err
+	}
+
+	return client.PackageMeta(provider, version, target)
+}
+
+// LookupLegacyProviderNamespace is a special method available only on
+// RegistrySource which can deal with legacy provider addresses that contain
+// only a type and leave the namespace implied.
+//
+// It asks the registry at the given hostname to provide a default namespace
+// for the given provider type, which can be combined with the given hostname
+// and type name to produce a fully-qualified provider address.
+//
+// Not all unqualified type names can be resolved to a default namespace. If
+// the request fails, this method returns an error describing the failure.
+//
+// This method exists only to allow compatibility with unqualified names
+// in older configurations. New configurations should be written so as not to
+// depend on it, and this fallback mechanism will likely be removed altogether
+// in a future Terraform version.
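+//
+// For example, the fake registry used by this package's tests maps the
+// unqualified type "legacy" to the namespace "legacycorp"; a real registry
+// returns whatever mapping it has on record.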
+func (s *RegistrySource) LookupLegacyProviderNamespace(hostname svchost.Hostname, typeName string) (string, error) {
+	client, err := s.registryClient(hostname)
+	if err != nil {
+		return "", err
+	}
+	return client.LegacyProviderDefaultNamespace(typeName)
+}
+
+func (s *RegistrySource) registryClient(hostname svchost.Hostname) (*registryClient, error) {
+	host, err := s.services.Discover(hostname)
+	if err != nil {
+		return nil, ErrHostUnreachable{
+			Hostname: hostname,
+			Wrapped:  err,
+		}
+	}
+
+	url, err := host.ServiceURL("providers.v1")
+	switch err := err.(type) {
+	case nil:
+		// okay! We'll fall through and return below.
+	case *disco.ErrServiceNotProvided:
+		return nil, ErrHostNoProviders{
+			Hostname: hostname,
+		}
+	case *disco.ErrVersionNotSupported:
+		return nil, ErrHostNoProviders{
+			Hostname:        hostname,
+			HasOtherVersion: true,
+		}
+	default:
+		return nil, ErrHostUnreachable{
+			Hostname: hostname,
+			Wrapped:  err,
+		}
+	}
+
+	// Check if we have credentials configured for this hostname.
+	creds, err := s.services.CredentialsForHost(hostname)
+	if err != nil {
+		// This indicates that a credentials helper failed, which means we
+		// can't do anything better than just pass through the helper's
+		// own error message.
+		return nil, fmt.Errorf("failed to retrieve credentials for %s: %s", hostname, err)
+	}
+
+	return newRegistryClient(url, creds), nil
+}
diff --git a/internal/getproviders/registry_source_test.go b/internal/getproviders/registry_source_test.go
new file mode 100644
index 000000000..5de161e7b
--- /dev/null
+++ b/internal/getproviders/registry_source_test.go
@@ -0,0 +1,205 @@
+package getproviders
+
+import (
+	"fmt"
+	"regexp"
+	"strings"
+	"testing"
+
+	"github.com/apparentlymart/go-versions/versions"
+	"github.com/google/go-cmp/cmp"
+	svchost "github.com/hashicorp/terraform-svchost"
+
+	"github.com/hashicorp/terraform/addrs"
+)
+
+func TestSourceAvailableVersions(t *testing.T) {
+	source, baseURL, close := testRegistrySource(t)
+	defer close()
+
+	tests := []struct {
+		provider     string
+		wantVersions []string
+		wantErr      string
+	}{
+		// These test cases are relying on behaviors of the fake provider
+		// registry server implemented in registry_client_test.go.
+		{
+			"example.com/awesomesauce/happycloud",
+			[]string{"1.0.0", "1.2.0"},
+			``,
+		},
+		{
+			"example.com/weaksauce/no-versions",
+			nil,
+			``, // having no versions is not an error, it's just odd
+		},
+		{
+			"example.com/nonexist/nonexist",
+			nil,
+			`provider registry example.com does not have a provider named example.com/nonexist/nonexist`,
+		},
+		{
+			"not.example.com/foo/bar",
+			nil,
+			`host not.example.com does not offer a Terraform provider registry`,
+		},
+		{
+			"too-new.example.com/foo/bar",
+			nil,
+			`host too-new.example.com does not support the provider registry protocol required by this Terraform version, but may be compatible with a different Terraform version`,
+		},
+		{
+			"fails.example.com/foo/bar",
+			nil,
+			`could not query provider registry for fails.example.com/foo/bar: Get ` + baseURL + `/fails-immediately/foo/bar/versions: EOF`,
+		},
+	}
+
+	for _, test := range tests {
+		t.Run(test.provider, func(t *testing.T) {
+			// TEMP: We don't yet have a function for parsing provider
+			// source addresses, so we'll just fake it in here for now.
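+			// (This assumes each test address is a fully-qualified
+			// host/namespace/type triple, which holds for the table above.)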
+			parts := strings.Split(test.provider, "/")
+			providerAddr := addrs.Provider{
+				Hostname:  svchost.Hostname(parts[0]),
+				Namespace: parts[1],
+				Type:      parts[2],
+			}
+
+			gotVersions, err := source.AvailableVersions(providerAddr)
+
+			if err != nil {
+				if test.wantErr == "" {
+					t.Fatalf("wrong error\ngot: %s\nwant: <nil>", err.Error())
+				}
+				if got, want := err.Error(), test.wantErr; got != want {
+					t.Fatalf("wrong error\ngot: %s\nwant: %s", got, want)
+				}
+				return
+			}
+
+			if test.wantErr != "" {
+				t.Fatalf("wrong error\ngot: <nil>\nwant: %s", test.wantErr)
+			}
+
+			var gotVersionsStr []string
+			if gotVersions != nil {
+				gotVersionsStr = make([]string, len(gotVersions))
+				for i, v := range gotVersions {
+					gotVersionsStr[i] = v.String()
+				}
+			}
+
+			if diff := cmp.Diff(test.wantVersions, gotVersionsStr); diff != "" {
+				t.Errorf("wrong result\n%s", diff)
+			}
+		})
+	}
+
+}
+
+func TestSourcePackageMeta(t *testing.T) {
+	source, baseURL, close := testRegistrySource(t)
+	defer close()
+
+	tests := []struct {
+		provider string
+		version  string
+		os, arch string
+		want     PackageMeta
+		wantErr  string
+	}{
+		// These test cases are relying on behaviors of the fake provider
+		// registry server implemented in registry_client_test.go.
+		{
+			"example.com/awesomesauce/happycloud",
+			"1.2.0",
+			"linux", "amd64",
+			PackageMeta{
+				Provider: addrs.NewProvider(
+					svchost.Hostname("example.com"), "awesomesauce", "happycloud",
+				),
+				Version:          versions.MustParseVersion("1.2.0"),
+				ProtocolVersions: VersionList{versions.MustParseVersion("5.0.0")},
+				TargetPlatform:   Platform{"linux", "amd64"},
+				Filename:         "happycloud_1.2.0.zip",
+				Location:         PackageHTTPURL(baseURL + "/pkg/happycloud_1.2.0.zip"),
+				SHA256Sum:        [32]uint8{30: 0xf0, 31: 0x0d}, // fake registry uses a memorable sum
+			},
+			``,
+		},
+		{
+			"example.com/awesomesauce/happycloud",
+			"1.2.0",
+			"nonexist", "amd64",
+			PackageMeta{},
+			`provider example.com/awesomesauce/happycloud 1.2.0 is not available for nonexist_amd64`,
+		},
+		{
+			"not.example.com/awesomesauce/happycloud",
+			"1.2.0",
+			"linux", "amd64",
+			PackageMeta{},
+			`host not.example.com does not offer a Terraform provider registry`,
+		},
+		{
+			"too-new.example.com/awesomesauce/happycloud",
+			"1.2.0",
+			"linux", "amd64",
+			PackageMeta{},
+			`host too-new.example.com does not support the provider registry protocol required by this Terraform version, but may be compatible with a different Terraform version`,
+		},
+		{
+			"fails.example.com/awesomesauce/happycloud",
+			"1.2.0",
+			"linux", "amd64",
+			PackageMeta{},
+			`could not query provider registry for fails.example.com/awesomesauce/happycloud: Get http://placeholder-origin/fails-immediately/awesomesauce/happycloud/1.2.0/download/linux/amd64: EOF`,
+		},
+	}
+
+	// Sometimes error messages contain specific HTTP endpoint URLs, but
+	// since our test server is on a random port we'd not be able to
+	// consistently match those. Instead, we'll normalize the URLs.
+	urlPattern := regexp.MustCompile(`http://[^/]+/`)
+
+	cmpOpts := cmp.Comparer(Version.Same)
+
+	for _, test := range tests {
+		t.Run(fmt.Sprintf("%s for %s_%s", test.provider, test.os, test.arch), func(t *testing.T) {
+			// TEMP: We don't yet have a function for parsing provider
+			// source addresses, so we'll just fake it in here for now.
+			// (This assumes each test address is a fully-qualified
+			// host/namespace/type triple, which holds for the table above.)
+			parts := strings.Split(test.provider, "/")
+			providerAddr := addrs.Provider{
+				Hostname:  svchost.Hostname(parts[0]),
+				Namespace: parts[1],
+				Type:      parts[2],
+			}
+
+			version := versions.MustParseVersion(test.version)
+
+			got, err := source.PackageMeta(providerAddr, version, Platform{test.os, test.arch})
+
+			if err != nil {
+				if test.wantErr == "" {
+					t.Fatalf("wrong error\ngot: %s\nwant: <nil>", err.Error())
+				}
+				gotErr := urlPattern.ReplaceAllLiteralString(err.Error(), "http://placeholder-origin/")
+				if got, want := gotErr, test.wantErr; got != want {
+					t.Fatalf("wrong error\ngot: %s\nwant: %s", got, want)
+				}
+				return
+			}
+
+			if test.wantErr != "" {
+				t.Fatalf("wrong error\ngot: <nil>\nwant: %s", test.wantErr)
+			}
+
+			if diff := cmp.Diff(test.want, got, cmpOpts); diff != "" {
+				t.Errorf("wrong result\n%s", diff)
+			}
+		})
+	}
+
+}
diff --git a/internal/getproviders/source.go b/internal/getproviders/source.go
new file mode 100644
index 000000000..2921e2d76
--- /dev/null
+++ b/internal/getproviders/source.go
@@ -0,0 +1,12 @@
+package getproviders
+
+import (
+	"github.com/hashicorp/terraform/addrs"
+)
+
+// A Source can query a particular source for information about providers
+// that are available to install.
+type Source interface {
+	AvailableVersions(provider addrs.Provider) (VersionList, error)
+	PackageMeta(provider addrs.Provider, version Version, target Platform) (PackageMeta, error)
+}
diff --git a/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/-/legacy/1.0.0/linux_amd64/terraform-provider-legacy b/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/-/legacy/1.0.0/linux_amd64/terraform-provider-legacy
new file mode 100644
index 000000000..daa9e3509
--- /dev/null
+++ b/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/-/legacy/1.0.0/linux_amd64/terraform-provider-legacy
@@ -0,0 +1 @@
+# This is just a placeholder file for discovery testing, not a real provider plugin.
diff --git a/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/2.0.0/darwin_amd64/terraform-provider-null b/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/2.0.0/darwin_amd64/terraform-provider-null
new file mode 100644
index 000000000..daa9e3509
--- /dev/null
+++ b/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/2.0.0/darwin_amd64/terraform-provider-null
@@ -0,0 +1 @@
+# This is just a placeholder file for discovery testing, not a real provider plugin.
diff --git a/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/2.0.0/linux_amd64/terraform-provider-null b/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/2.0.0/linux_amd64/terraform-provider-null
new file mode 100644
index 000000000..daa9e3509
--- /dev/null
+++ b/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/2.0.0/linux_amd64/terraform-provider-null
@@ -0,0 +1 @@
+# This is just a placeholder file for discovery testing, not a real provider plugin.
diff --git a/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/2.0.0/windows_amd64/terraform-provider-null.exe b/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/2.0.0/windows_amd64/terraform-provider-null.exe
new file mode 100644
index 000000000..daa9e3509
--- /dev/null
+++ b/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/2.0.0/windows_amd64/terraform-provider-null.exe
@@ -0,0 +1 @@
+# This is just a placeholder file for discovery testing, not a real provider plugin.
diff --git a/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/invalid b/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/invalid
new file mode 100644
index 000000000..289663a2a
--- /dev/null
+++ b/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/invalid
@@ -0,0 +1 @@
+This should be ignored because it doesn't follow the provider package naming scheme.
diff --git a/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/terraform-provider-null_2.1.0_linux_amd64.zip b/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/terraform-provider-null_2.1.0_linux_amd64.zip
new file mode 100644
index 000000000..68a550271
--- /dev/null
+++ b/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/terraform-provider-null_2.1.0_linux_amd64.zip
@@ -0,0 +1,5 @@
+This is just a placeholder file for discovery testing, not a real provider package.
+
+This file is what we'd find for mirrors using the "packed" mirror layout,
+where the mirror maintainer can just download the packages from upstream and
+have Terraform unpack them automatically when installing.
diff --git a/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/terraform-provider-null_invalid.zip b/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/terraform-provider-null_invalid.zip
new file mode 100644
index 000000000..289663a2a
--- /dev/null
+++ b/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/terraform-provider-null_invalid.zip
@@ -0,0 +1 @@
+This should be ignored because it doesn't follow the provider package naming scheme.
diff --git a/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/terraform-provider-null_invalid_invalid_invalid.zip b/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/terraform-provider-null_invalid_invalid_invalid.zip
new file mode 100644
index 000000000..289663a2a
--- /dev/null
+++ b/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/null/terraform-provider-null_invalid_invalid_invalid.zip
@@ -0,0 +1 @@
+This should be ignored because it doesn't follow the provider package naming scheme.
diff --git a/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/random/1.2.0/linux_amd64/terraform-provider-random b/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/random/1.2.0/linux_amd64/terraform-provider-random
new file mode 100644
index 000000000..daa9e3509
--- /dev/null
+++ b/internal/getproviders/testdata/filesystem-mirror/registry.terraform.io/hashicorp/random/1.2.0/linux_amd64/terraform-provider-random
@@ -0,0 +1 @@
+# This is just a placeholder file for discovery testing, not a real provider plugin.
diff --git a/internal/getproviders/testdata/filesystem-mirror/tfe.example.com/AwesomeCorp/happycloud/0.1.0-alpha.2/darwin_amd64/extra-data.txt b/internal/getproviders/testdata/filesystem-mirror/tfe.example.com/AwesomeCorp/happycloud/0.1.0-alpha.2/darwin_amd64/extra-data.txt
new file mode 100644
index 000000000..8a1c7c327
--- /dev/null
+++ b/internal/getproviders/testdata/filesystem-mirror/tfe.example.com/AwesomeCorp/happycloud/0.1.0-alpha.2/darwin_amd64/extra-data.txt
@@ -0,0 +1,6 @@
+Provider plugin packages are allowed to include other files such as any static
+data they need to operate, or possibly source files if the provider is written
+in an interpreted programming language.
+
+This extra file is here just to make sure that extra files don't cause any
+misbehavior during local discovery.
diff --git a/internal/getproviders/testdata/filesystem-mirror/tfe.example.com/AwesomeCorp/happycloud/0.1.0-alpha.2/darwin_amd64/terraform-provider-happycloud b/internal/getproviders/testdata/filesystem-mirror/tfe.example.com/AwesomeCorp/happycloud/0.1.0-alpha.2/darwin_amd64/terraform-provider-happycloud
new file mode 100644
index 000000000..daa9e3509
--- /dev/null
+++ b/internal/getproviders/testdata/filesystem-mirror/tfe.example.com/AwesomeCorp/happycloud/0.1.0-alpha.2/darwin_amd64/terraform-provider-happycloud
@@ -0,0 +1 @@
+# This is just a placeholder file for discovery testing, not a real provider plugin.
diff --git a/internal/getproviders/types.go b/internal/getproviders/types.go
new file mode 100644
index 000000000..ac41b20e1
--- /dev/null
+++ b/internal/getproviders/types.go
@@ -0,0 +1,232 @@
+package getproviders
+
+import (
+	"crypto/sha256"
+	"fmt"
+	"runtime"
+	"sort"
+	"strings"
+
+	"github.com/apparentlymart/go-versions/versions"
+	"github.com/hashicorp/terraform/addrs"
+)
+
+// Version represents a particular single version of a provider.
+type Version = versions.Version
+
+// VersionList represents a list of versions. It is a []Version with some
+// extra methods for convenient filtering.
+type VersionList = versions.List
+
+// ParseVersion parses a "semver"-style version string into a Version value,
+// which is the version syntax we use for provider versions.
+func ParseVersion(str string) (Version, error) {
+	return versions.ParseVersion(str)
+}
+
+// Platform represents a target platform that a provider is or might be
+// available for.
+type Platform struct {
+	OS, Arch string
+}
+
+func (p Platform) String() string {
+	return p.OS + "_" + p.Arch
+}
+
+// LessThan returns true if the receiver should sort before the other given
+// Platform in an ordered list of platforms.
+//
+// The ordering is lexical first by OS and then by Architecture.
+// This ordering is primarily just to ensure that results of
+// functions in this package will be deterministic. The ordering is not
+// intended to have any semantic meaning and is subject to change in future.
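+//
+// For example, "darwin_amd64" sorts before "linux_386" because the OS
+// component is compared first.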
+func (p Platform) LessThan(other Platform) bool {
+	switch {
+	case p.OS != other.OS:
+		return p.OS < other.OS
+	default:
+		return p.Arch < other.Arch
+	}
+}
+
+// ParsePlatform parses a string representation of a platform, like
+// "linux_amd64", or returns an error if the string is not valid.
+func ParsePlatform(str string) (Platform, error) {
+	underPos := strings.Index(str, "_")
+	if underPos < 1 || underPos >= len(str)-2 {
+		return Platform{}, fmt.Errorf("must be two words separated by an underscore")
+	}
+
+	os, arch := str[:underPos], str[underPos+1:]
+	if strings.ContainsAny(os, " \t\n\r") {
+		return Platform{}, fmt.Errorf("OS portion must not contain whitespace")
+	}
+	if strings.ContainsAny(arch, " \t\n\r") {
+		return Platform{}, fmt.Errorf("architecture portion must not contain whitespace")
+	}
+
+	return Platform{
+		OS:   os,
+		Arch: arch,
+	}, nil
+}
+
+// CurrentPlatform is the platform where the current program is running.
+//
+// If attempting to install providers for use on the same system where the
+// installation process is running, this is the right platform to use.
+var CurrentPlatform = Platform{
+	OS:   runtime.GOOS,
+	Arch: runtime.GOARCH,
+}
+
+// PackageMeta represents the metadata related to a particular downloadable
+// provider package targeting a single platform.
+//
+// Package getproviders does no signature verification or protocol version
+// compatibility checking of its own. A caller receiving a PackageMeta must
+// verify that it has a correct signature and supports a protocol version
+// accepted by the current version of Terraform before trying to use the
+// described package.
+type PackageMeta struct {
+	Provider addrs.Provider
+	Version  Version
+
+	ProtocolVersions VersionList
+	TargetPlatform   Platform
+
+	Filename string
+	Location PackageLocation
+
+	// FIXME: Our current hashing scheme only works for sources that have
+	// access to the original distribution archives, so this isn't always
+	// populated. Need to figure out a different approach where we can
+	// consistently hash both from an archive file and from an extracted
+	// archive to detect inconsistencies.
+	SHA256Sum [sha256.Size]byte
+
+	// TODO: Extra metadata for signature verification
+}
+
+// LessThan returns true if the receiver should sort before the given other
+// PackageMeta in a sorted list of PackageMeta.
+//
+// Sorting preference is given first to the provider address, then to the
+// target platform, and then to the version number (using semver precedence).
+// Packages that differ only in semver build metadata have no defined
+// precedence and so will always return false.
+//
+// This ordering is primarily just to maximize the chance that results of
+// functions in this package will be deterministic. The ordering is not
+// intended to have any semantic meaning and is subject to change in future.
+func (m PackageMeta) LessThan(other PackageMeta) bool {
+	switch {
+	case m.Provider != other.Provider:
+		return m.Provider.LessThan(other.Provider)
+	case m.TargetPlatform != other.TargetPlatform:
+		return m.TargetPlatform.LessThan(other.TargetPlatform)
+	case m.Version != other.Version:
+		return m.Version.LessThan(other.Version)
+	default:
+		return false
+	}
+}
+
+// PackageLocation represents a location where a provider distribution package
+// can be obtained. A value of this type contains one of the following
+// concrete types: PackageLocalArchive, PackageLocalDir, or PackageHTTPURL.
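+//
+// Callers are expected to type-switch over the concrete location type to
+// decide how to retrieve the package.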
+type PackageLocation interface {
+	packageLocation()
+}
+
+// PackageLocalArchive is the location of a provider distribution archive file
+// in the local filesystem. Its value is a local filesystem path using the
+// syntax understood by Go's standard path/filepath package on the operating
+// system where Terraform is running.
+type PackageLocalArchive string
+
+func (p PackageLocalArchive) packageLocation() {}
+
+// PackageLocalDir is the location of a directory containing an unpacked
+// provider distribution archive in the local filesystem. Its value is a local
+// filesystem path using the syntax understood by Go's standard path/filepath
+// package on the operating system where Terraform is running.
+type PackageLocalDir string
+
+func (p PackageLocalDir) packageLocation() {}
+
+// PackageHTTPURL is a provider package location accessible via HTTP.
+// Its value is a URL string using either the http: scheme or the https:
+// scheme.
+type PackageHTTPURL string
+
+func (p PackageHTTPURL) packageLocation() {}
+
+// PackageMetaList is a list of PackageMeta. It's just []PackageMeta with
+// some methods for convenient sorting and filtering.
+type PackageMetaList []PackageMeta
+
+func (l PackageMetaList) Len() int {
+	return len(l)
+}
+
+func (l PackageMetaList) Less(i, j int) bool {
+	return l[i].LessThan(l[j])
+}
+
+func (l PackageMetaList) Swap(i, j int) {
+	l[i], l[j] = l[j], l[i]
+}
+
+// Sort performs an in-place, stable sort on the contents of the list, using
+// the ordering given by method Less. This ordering is primarily to help
+// encourage deterministic results from functions and does not have any
+// semantic meaning.
+func (l PackageMetaList) Sort() {
+	sort.Stable(l)
+}
+
+// FilterPlatform constructs a new PackageMetaList that contains only the
+// elements of the receiver that are for the given target platform.
+//
+// Pass CurrentPlatform to filter only for packages targeting the platform
+// where this code is running.
+func (l PackageMetaList) FilterPlatform(target Platform) PackageMetaList {
+	var ret PackageMetaList
+	for _, m := range l {
+		if m.TargetPlatform == target {
+			ret = append(ret, m)
+		}
+	}
+	return ret
+}
+
+// FilterProviderExactVersion constructs a new PackageMetaList that contains
+// only the elements of the receiver that relate to the given provider address
+// and exact version.
+//
+// The version matching for this function is exact, including matching on
+// semver build metadata, because it's intended for handling a single exact
+// version selected by the caller from a set of available versions.
+func (l PackageMetaList) FilterProviderExactVersion(provider addrs.Provider, version Version) PackageMetaList {
+	var ret PackageMetaList
+	for _, m := range l {
+		if m.Provider == provider && m.Version == version {
+			ret = append(ret, m)
+		}
+	}
+	return ret
+}
+
+// FilterProviderPlatformExactVersion is a combination of both
+// FilterPlatform and FilterProviderExactVersion that filters by all three
+// criteria at once.
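+//
+// It is equivalent to chaining FilterPlatform and FilterProviderExactVersion,
+// but makes only a single pass over the list.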
+func (l PackageMetaList) FilterProviderPlatformExactVersion(provider addrs.Provider, platform Platform, version Version) PackageMetaList {
+	var ret PackageMetaList
+	for _, m := range l {
+		if m.Provider == provider && m.Version == version && m.TargetPlatform == platform {
+			ret = append(ret, m)
+		}
+	}
+	return ret
+}
diff --git a/internal/initwd/getter.go b/internal/initwd/getter.go
index 2f306be73..85574f7ca 100644
--- a/internal/initwd/getter.go
+++ b/internal/initwd/getter.go
@@ -21,6 +21,7 @@ import (
 
 var goGetterDetectors = []getter.Detector{
 	new(getter.GitHubDetector),
+	new(getter.GitDetector),
 	new(getter.BitBucketDetector),
 	new(getter.GCSDetector),
 	new(getter.S3Detector),
@@ -86,7 +87,7 @@ func (g reusingGetter) getWithGoGetter(instPath, addr string) (string, error) {
 
 	log.Printf("[DEBUG] will download %q to %s", packageAddr, instPath)
 
-	realAddr, err := getter.Detect(packageAddr, instPath, getter.Detectors)
+	realAddr, err := getter.Detect(packageAddr, instPath, goGetterDetectors)
 	if err != nil {
 		return "", err
 	}
diff --git a/internal/initwd/module_install.go b/internal/initwd/module_install.go
index 531310ab8..cbfd8e98c 100644
--- a/internal/initwd/module_install.go
+++ b/internal/initwd/module_install.go
@@ -14,18 +14,34 @@ import (
 	"github.com/hashicorp/terraform/internal/modsdir"
 	"github.com/hashicorp/terraform/registry"
 	"github.com/hashicorp/terraform/registry/regsrc"
+	"github.com/hashicorp/terraform/registry/response"
 	"github.com/hashicorp/terraform/tfdiags"
 )
 
 type ModuleInstaller struct {
 	modsDir string
 	reg     *registry.Client
+
+	// The keys in moduleVersions are resolved and trimmed registry source
+	// addresses and the values are the registry response.
+	moduleVersions map[string]*response.ModuleVersions
+
+	// The keys in moduleVersionsUrl are module/version pairs (see the
+	// moduleVersion struct below) and the values are the download URLs.
+	moduleVersionsUrl map[moduleVersion]string
+}
+
+type moduleVersion struct {
+	module  string
+	version string
 }
 
 func NewModuleInstaller(modsDir string, reg *registry.Client) *ModuleInstaller {
 	return &ModuleInstaller{
-		modsDir: modsDir,
-		reg:     reg,
+		modsDir:           modsDir,
+		reg:               reg,
+		moduleVersions:    make(map[string]*response.ModuleVersions),
+		moduleVersionsUrl: make(map[moduleVersion]string),
 	}
 }
 
@@ -309,24 +325,32 @@ func (i *ModuleInstaller) installRegistryModule(req *earlyconfig.ModuleRequest,
 	}
 
 	reg := i.reg
+	var resp *response.ModuleVersions
+	var exists bool
 
-	log.Printf("[DEBUG] %s listing available versions of %s at %s", key, addr, hostname)
-	resp, err := reg.ModuleVersions(addr)
-	if err != nil {
-		if registry.IsModuleNotFound(err) {
-			diags = diags.Append(tfdiags.Sourceless(
-				tfdiags.Error,
-				"Module not found",
-				fmt.Sprintf("Module %q (from %s:%d) cannot be found in the module registry at %s.", req.Name, req.CallPos.Filename, req.CallPos.Line, hostname),
-			))
-		} else {
-			diags = diags.Append(tfdiags.Sourceless(
-				tfdiags.Error,
-				"Error accessing remote module registry",
-				fmt.Sprintf("Failed to retrieve available versions for module %q (%s:%d) from %s: %s.", req.Name, req.CallPos.Filename, req.CallPos.Line, hostname, err),
-			))
+	// check if we've already looked up this module from the registry
+	if resp, exists = i.moduleVersions[addr.String()]; exists {
+		log.Printf("[TRACE] %s using already found available versions of %s at %s", key, addr, hostname)
+	} else {
+		log.Printf("[DEBUG] %s listing available versions of %s at %s", key, addr, hostname)
+		resp, err = reg.ModuleVersions(addr)
+		if err != nil {
+			if registry.IsModuleNotFound(err) {
+				diags = diags.Append(tfdiags.Sourceless(
+					tfdiags.Error,
+					"Module not found",
+					fmt.Sprintf("Module %q (from %s:%d) cannot be found in the module registry at %s.", req.Name, req.CallPos.Filename, req.CallPos.Line, hostname),
+				))
+			} else {
+				diags = diags.Append(tfdiags.Sourceless(
+					tfdiags.Error,
+					"Error accessing remote module registry",
+					fmt.Sprintf("Failed to retrieve available versions for module %q (%s:%d) from %s: %s.", req.Name, req.CallPos.Filename, req.CallPos.Line, hostname, err),
+				))
+			}
+			return nil, nil, diags
 		}
-		return nil, nil, diags
+		i.moduleVersions[addr.String()] = resp
 	}
 
 	// The response might contain information about dependencies to allow us
@@ -405,17 +429,25 @@ func (i *ModuleInstaller) installRegistryModule(req *earlyconfig.ModuleRequest,
 	// If we manage to get down here then we've found a suitable version to
 	// install, so we need to ask the registry where we should download it from.
 	// The response to this is a go-getter-style address string.
-	dlAddr, err := reg.ModuleLocation(addr, latestMatch.String())
-	if err != nil {
-		log.Printf("[ERROR] %s from %s %s: %s", key, addr, latestMatch, err)
-		diags = diags.Append(tfdiags.Sourceless(
-			tfdiags.Error,
-			"Invalid response from remote module registry",
-			fmt.Sprintf("The remote registry at %s failed to return a download URL for %s %s.", hostname, addr, latestMatch),
-		))
-		return nil, nil, diags
+
+	// first check the cache for the download URL
+	moduleAddr := moduleVersion{module: addr.String(), version: latestMatch.String()}
+	if _, exists := i.moduleVersionsUrl[moduleAddr]; !exists {
+		url, err := reg.ModuleLocation(addr, latestMatch.String())
+		if err != nil {
+			log.Printf("[ERROR] %s from %s %s: %s", key, addr, latestMatch, err)
+			diags = diags.Append(tfdiags.Sourceless(
+				tfdiags.Error,
+				"Invalid response from remote module registry",
+				fmt.Sprintf("The remote registry at %s failed to return a download URL for %s %s.", hostname, addr, latestMatch),
+			))
+			return nil, nil, diags
+		}
+		i.moduleVersionsUrl[moduleVersion{module: addr.String(), version: latestMatch.String()}] = url
 	}
 
+	dlAddr := i.moduleVersionsUrl[moduleAddr]
+
 	log.Printf("[TRACE] ModuleInstaller: %s %s %s is available at %q", key, addr, latestMatch, dlAddr)
 
 	modDir, err := getter.getWithGoGetter(instPath, dlAddr)
diff --git a/internal/initwd/module_install_test.go b/internal/initwd/module_install_test.go
index 444968169..239014990 100644
--- a/internal/initwd/module_install_test.go
+++ b/internal/initwd/module_install_test.go
@@ -327,6 +327,14 @@ func TestLoaderInstallModules_registry(t *testing.T) {
 		return
 	}
 
+	// check that the registry responses were cached
+	if _, ok := inst.moduleVersions["hashicorp/module-installer-acctest/aws"]; !ok {
+		t.Fatal("module versions cache was not populated")
+	}
+	if _, ok := inst.moduleVersionsUrl[moduleVersion{module: "hashicorp/module-installer-acctest/aws", version: "0.0.1"}]; !ok {
+		t.Fatal("module download url cache was not populated")
+	}
+
 	loader, err := configload.NewLoader(&configload.Config{
 		ModulesDir: modulesDir,
 	})
diff --git a/internal/modsdir/manifest.go b/internal/modsdir/manifest.go
index 36f6c033f..56332523d 100644
--- a/internal/modsdir/manifest.go
+++ b/internal/modsdir/manifest.go
@@ -8,6 +8,7 @@ import (
 	"log"
 	"os"
 	"path/filepath"
+	"strings"
 
 	version "github.com/hashicorp/go-version"
 
@@ -48,7 +49,11 @@ type Record struct {
 type Manifest map[string]Record
 
 func (m Manifest) ModuleKey(path addrs.Module) string {
-	return path.String()
+	if len(path) == 0 {
+		return ""
+	}
+	return strings.Join([]string(path), ".")
+
 }
 
 // manifestSnapshotFile is an internal struct used only to assist in our JSON
@@ -81,6 +86,11 @@ func ReadManifestSnapshot(r io.Reader) (Manifest, error) {
 				return nil, fmt.Errorf("invalid version %q for %s: %s", record.VersionStr, record.Key, err)
 			}
 		}
+
+		// Ensure Windows is using the proper modules path format after
+		// reading the modules manifest Dir records
+		record.Dir = filepath.FromSlash(record.Dir)
+
 		if _, exists := new[record.Key]; exists {
 			// This should never happen in any valid file, so we'll catch it
 			// and report it to avoid confusing/undefined behavior if the
@@ -115,6 +125,10 @@ func (m Manifest) WriteSnapshot(w io.Writer) error {
 		} else {
 			record.VersionStr = ""
 		}
+
+		// Ensure Dir is written in a format that can be read by Linux and
+		// Windows nodes for remote and apply compatibility
+		record.Dir = filepath.ToSlash(record.Dir)
 		write.Records = append(write.Records, record)
 	}
 
diff --git a/lang/eval.go b/lang/eval.go
diff --git a/lang/eval.go b/lang/eval.go
index 989105f12..bfacd671a 100644
--- a/lang/eval.go
+++ b/lang/eval.go
@@ -240,15 +240,19 @@ func (s *Scope) evalContext(refs []*addrs.Reference, selfAddr addrs.Referenceabl
 			// Self is an exception in that it must always resolve to a
 			// particular instance. We will still insert the full resource into
 			// the context below.
+			var hclDiags hcl.Diagnostics
+			// We should always have a valid self index by this point, but in
+			// the case of an error, self may end up as cty.DynamicVal.
 			switch k := subj.Key.(type) {
 			case addrs.IntKey:
-				self = val.Index(cty.NumberIntVal(int64(k)))
+				self, hclDiags = hcl.Index(val, cty.NumberIntVal(int64(k)), ref.SourceRange.ToHCL().Ptr())
+				diags = diags.Append(hclDiags)
 			case addrs.StringKey:
-				self = val.Index(cty.StringVal(string(k)))
+				self, hclDiags = hcl.Index(val, cty.StringVal(string(k)), ref.SourceRange.ToHCL().Ptr())
+				diags = diags.Append(hclDiags)
 			default:
 				self = val
 			}
-
 			continue
 		}
diff --git a/lang/funcs/collection.go b/lang/funcs/collection.go
index 50b52b3cf..a6eb16fba 100644
--- a/lang/funcs/collection.go
+++ b/lang/funcs/collection.go
@@ -12,70 +12,6 @@ import (
 	"github.com/zclconf/go-cty/cty/gocty"
 )

-var ElementFunc = function.New(&function.Spec{
-	Params: []function.Parameter{
-		{
-			Name: "list",
-			Type: cty.DynamicPseudoType,
-		},
-		{
-			Name: "index",
-			Type: cty.Number,
-		},
-	},
-	Type: func(args []cty.Value) (cty.Type, error) {
-		list := args[0]
-		listTy := list.Type()
-		switch {
-		case listTy.IsListType():
-			return listTy.ElementType(), nil
-		case listTy.IsTupleType():
-			if !args[1].IsKnown() {
-				// If the index isn't known yet then we can't predict the
-				// result type since each tuple element can have its own type.
-				return cty.DynamicPseudoType, nil
-			}
-
-			etys := listTy.TupleElementTypes()
-			var index int
-			err := gocty.FromCtyValue(args[1], &index)
-			if err != nil {
-				// e.g. fractional number where whole number is required
-				return cty.DynamicPseudoType, fmt.Errorf("invalid index: %s", err)
-			}
-			if len(etys) == 0 {
-				return cty.DynamicPseudoType, errors.New("cannot use element function with an empty list")
-			}
-			index = index % len(etys)
-			return etys[index], nil
-		default:
-			return cty.DynamicPseudoType, fmt.Errorf("cannot read elements from %s", listTy.FriendlyName())
-		}
-	},
-	Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
-		var index int
-		err := gocty.FromCtyValue(args[1], &index)
-		if err != nil {
-			// can't happen because we checked this in the Type function above
-			return cty.DynamicVal, fmt.Errorf("invalid index: %s", err)
-		}
-
-		if !args[0].IsKnown() {
-			return cty.UnknownVal(retType), nil
-		}
-
-		l := args[0].LengthInt()
-		if l == 0 {
-			return cty.DynamicVal, errors.New("cannot use element function with an empty list")
-		}
-		index = index % l
-
-		// We did all the necessary type checks in the type function above,
-		// so this is guaranteed not to fail.
-		return args[0].Index(cty.NumberIntVal(int64(index))), nil
-	},
-})
-
 var LengthFunc = function.New(&function.Spec{
 	Params: []function.Parameter{
 		{
@@ -164,133 +100,6 @@ var CoalesceFunc = function.New(&function.Spec{
 	},
 })

-// CoalesceListFunc constructs a function that takes any number of list arguments
-// and returns the first one that isn't empty.
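Restating those semantics compactly before the implementation that follows (a hedged sketch over known values only; the real spec below also propagates unknowns and enforces that every argument is a list or tuple; assumes the errors and github.com/zclconf/go-cty/cty imports):

	// firstNonEmpty scans the arguments in order and returns the first list
	// or tuple that has at least one element.
	func firstNonEmpty(args ...cty.Value) (cty.Value, error) {
		for _, arg := range args {
			if arg.LengthInt() > 0 {
				return arg, nil
			}
		}
		return cty.NilVal, errors.New("no non-empty list or tuple argument")
	}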
-var CoalesceListFunc = function.New(&function.Spec{ - Params: []function.Parameter{}, - VarParam: &function.Parameter{ - Name: "vals", - Type: cty.DynamicPseudoType, - AllowUnknown: true, - AllowDynamicType: true, - AllowNull: true, - }, - Type: func(args []cty.Value) (ret cty.Type, err error) { - if len(args) == 0 { - return cty.NilType, errors.New("at least one argument is required") - } - - argTypes := make([]cty.Type, len(args)) - - for i, arg := range args { - // if any argument is unknown, we can't be certain know which type we will return - if !arg.IsKnown() { - return cty.DynamicPseudoType, nil - } - ty := arg.Type() - - if !ty.IsListType() && !ty.IsTupleType() { - return cty.NilType, errors.New("coalescelist arguments must be lists or tuples") - } - - argTypes[i] = arg.Type() - } - - last := argTypes[0] - // If there are mixed types, we have to return a dynamic type. - for _, next := range argTypes[1:] { - if !next.Equals(last) { - return cty.DynamicPseudoType, nil - } - } - - return last, nil - }, - Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { - for _, arg := range args { - if !arg.IsKnown() { - // If we run into an unknown list at some point, we can't - // predict the final result yet. (If there's a known, non-empty - // arg before this then we won't get here.) - return cty.UnknownVal(retType), nil - } - - if arg.LengthInt() > 0 { - return arg, nil - } - } - - return cty.NilVal, errors.New("no non-null arguments") - }, -}) - -// CompactFunc constructs a function that takes a list of strings and returns a new list -// with any empty string elements removed. -var CompactFunc = function.New(&function.Spec{ - Params: []function.Parameter{ - { - Name: "list", - Type: cty.List(cty.String), - }, - }, - Type: function.StaticReturnType(cty.List(cty.String)), - Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { - listVal := args[0] - if !listVal.IsWhollyKnown() { - // If some of the element values aren't known yet then we - // can't yet return a compacted list - return cty.UnknownVal(retType), nil - } - - var outputList []cty.Value - - for it := listVal.ElementIterator(); it.Next(); { - _, v := it.Element() - if v.IsNull() || v.AsString() == "" { - continue - } - outputList = append(outputList, v) - } - - if len(outputList) == 0 { - return cty.ListValEmpty(cty.String), nil - } - - return cty.ListVal(outputList), nil - }, -}) - -// ContainsFunc constructs a function that determines whether a given list or -// set contains a given single value as one of its elements. -var ContainsFunc = function.New(&function.Spec{ - Params: []function.Parameter{ - { - Name: "list", - Type: cty.DynamicPseudoType, - }, - { - Name: "value", - Type: cty.DynamicPseudoType, - }, - }, - Type: function.StaticReturnType(cty.Bool), - Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { - arg := args[0] - ty := arg.Type() - - if !ty.IsListType() && !ty.IsTupleType() && !ty.IsSetType() { - return cty.NilVal, errors.New("argument must be list, tuple, or set") - } - - _, err = Index(cty.TupleVal(arg.AsValueSlice()), args[1]) - if err != nil { - return cty.False, nil - } - - return cty.True, nil - }, -}) - // IndexFunc constructs a function that finds the element index for a given value in a list. 
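That contract, in miniature (a hedged sketch assuming known values, the errors and cty imports, and RawEquals as a stand-in for the spec's equality handling):

	// indexOf returns the position of value within list, or an error when
	// the value is absent, which is the behavior the spec below implements.
	func indexOf(list, value cty.Value) (cty.Value, error) {
		for i, v := range list.AsValueSlice() {
			if v.RawEquals(value) {
				return cty.NumberIntVal(int64(i)), nil
			}
		}
		return cty.NilVal, errors.New("item not found")
	}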
var IndexFunc = function.New(&function.Spec{ Params: []function.Parameter{ @@ -335,151 +144,6 @@ var IndexFunc = function.New(&function.Spec{ }, }) -// DistinctFunc constructs a function that takes a list and returns a new list -// with any duplicate elements removed. -var DistinctFunc = function.New(&function.Spec{ - Params: []function.Parameter{ - { - Name: "list", - Type: cty.List(cty.DynamicPseudoType), - }, - }, - Type: func(args []cty.Value) (cty.Type, error) { - return args[0].Type(), nil - }, - Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { - listVal := args[0] - - if !listVal.IsWhollyKnown() { - return cty.UnknownVal(retType), nil - } - var list []cty.Value - - for it := listVal.ElementIterator(); it.Next(); { - _, v := it.Element() - list, err = appendIfMissing(list, v) - if err != nil { - return cty.NilVal, err - } - } - - if len(list) == 0 { - return cty.ListValEmpty(retType.ElementType()), nil - } - return cty.ListVal(list), nil - }, -}) - -// ChunklistFunc constructs a function that splits a single list into fixed-size chunks, -// returning a list of lists. -var ChunklistFunc = function.New(&function.Spec{ - Params: []function.Parameter{ - { - Name: "list", - Type: cty.List(cty.DynamicPseudoType), - }, - { - Name: "size", - Type: cty.Number, - }, - }, - Type: func(args []cty.Value) (cty.Type, error) { - return cty.List(args[0].Type()), nil - }, - Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { - listVal := args[0] - if !listVal.IsKnown() { - return cty.UnknownVal(retType), nil - } - - if listVal.LengthInt() == 0 { - return cty.ListValEmpty(listVal.Type()), nil - } - - var size int - err = gocty.FromCtyValue(args[1], &size) - if err != nil { - return cty.NilVal, fmt.Errorf("invalid index: %s", err) - } - - if size < 0 { - return cty.NilVal, errors.New("the size argument must be positive") - } - - output := make([]cty.Value, 0) - - // if size is 0, returns a list made of the initial list - if size == 0 { - output = append(output, listVal) - return cty.ListVal(output), nil - } - - chunk := make([]cty.Value, 0) - - l := args[0].LengthInt() - i := 0 - - for it := listVal.ElementIterator(); it.Next(); { - _, v := it.Element() - chunk = append(chunk, v) - - // Chunk when index isn't 0, or when reaching the values's length - if (i+1)%size == 0 || (i+1) == l { - output = append(output, cty.ListVal(chunk)) - chunk = make([]cty.Value, 0) - } - i++ - } - - return cty.ListVal(output), nil - }, -}) - -// FlattenFunc constructs a function that takes a list and replaces any elements -// that are lists with a flattened sequence of the list contents. 
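The recursion being removed here is preserved by the flattener helper kept later in this file; for reference, its shape over wholly known values (a hedged sketch, cty import assumed):

	// flattenValues expands any element that is itself a list, set, or tuple,
	// depth-first, leaving other values in place.
	func flattenValues(vals []cty.Value) []cty.Value {
		var out []cty.Value
		for _, v := range vals {
			ty := v.Type()
			if ty.IsListType() || ty.IsSetType() || ty.IsTupleType() {
				out = append(out, flattenValues(v.AsValueSlice())...)
				continue
			}
			out = append(out, v)
		}
		return out
	}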
-var FlattenFunc = function.New(&function.Spec{ - Params: []function.Parameter{ - { - Name: "list", - Type: cty.DynamicPseudoType, - }, - }, - Type: func(args []cty.Value) (cty.Type, error) { - if !args[0].IsWhollyKnown() { - return cty.DynamicPseudoType, nil - } - - argTy := args[0].Type() - if !argTy.IsListType() && !argTy.IsSetType() && !argTy.IsTupleType() { - return cty.NilType, errors.New("can only flatten lists, sets and tuples") - } - - retVal, known := flattener(args[0]) - if !known { - return cty.DynamicPseudoType, nil - } - - tys := make([]cty.Type, len(retVal)) - for i, ty := range retVal { - tys[i] = ty.Type() - } - return cty.Tuple(tys), nil - }, - Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { - inputList := args[0] - if inputList.LengthInt() == 0 { - return cty.EmptyTupleVal, nil - } - - out, known := flattener(inputList) - if !known { - return cty.UnknownVal(retType), nil - } - - return cty.TupleVal(out), nil - }, -}) - // Flatten until it's not a cty.List, and return whether the value is known. // We can flatten lists with unknown values, as long as they are not // lists themselves. @@ -504,76 +168,6 @@ func flattener(flattenList cty.Value) ([]cty.Value, bool) { return out, true } -// KeysFunc constructs a function that takes a map and returns a sorted list of the map keys. -var KeysFunc = function.New(&function.Spec{ - Params: []function.Parameter{ - { - Name: "inputMap", - Type: cty.DynamicPseudoType, - AllowUnknown: true, - }, - }, - Type: func(args []cty.Value) (cty.Type, error) { - ty := args[0].Type() - switch { - case ty.IsMapType(): - return cty.List(cty.String), nil - case ty.IsObjectType(): - atys := ty.AttributeTypes() - if len(atys) == 0 { - return cty.EmptyTuple, nil - } - // All of our result elements will be strings, and atys just - // decides how many there are. - etys := make([]cty.Type, len(atys)) - for i := range etys { - etys[i] = cty.String - } - return cty.Tuple(etys), nil - default: - return cty.DynamicPseudoType, function.NewArgErrorf(0, "must have map or object type") - } - }, - Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) { - m := args[0] - var keys []cty.Value - - switch { - case m.Type().IsObjectType(): - // In this case we allow unknown values so we must work only with - // the attribute _types_, not with the value itself. - var names []string - for name := range m.Type().AttributeTypes() { - names = append(names, name) - } - sort.Strings(names) // same ordering guaranteed by cty's ElementIterator - if len(names) == 0 { - return cty.EmptyTupleVal, nil - } - keys = make([]cty.Value, len(names)) - for i, name := range names { - keys[i] = cty.StringVal(name) - } - return cty.TupleVal(keys), nil - default: - if !m.IsKnown() { - return cty.UnknownVal(retType), nil - } - - // cty guarantees that ElementIterator will iterate in lexicographical - // order by key. - for it := args[0].ElementIterator(); it.Next(); { - k, _ := it.Element() - keys = append(keys, k) - } - if len(keys) == 0 { - return cty.ListValEmpty(cty.String), nil - } - return cty.ListVal(keys), nil - } - }, -}) - // ListFunc constructs a function that takes an arbitrary number of arguments // and returns a list containing those values in the same order. // @@ -865,321 +459,6 @@ var MatchkeysFunc = function.New(&function.Spec{ }, }) -// MergeFunc constructs a function that takes an arbitrary number of maps and -// returns a single map that contains a merged set of elements from all of the maps. 
-// -// If more than one given map defines the same key then the one that is later in -// the argument sequence takes precedence. -var MergeFunc = function.New(&function.Spec{ - Params: []function.Parameter{}, - VarParam: &function.Parameter{ - Name: "maps", - Type: cty.DynamicPseudoType, - AllowDynamicType: true, - }, - Type: function.StaticReturnType(cty.DynamicPseudoType), - Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { - outputMap := make(map[string]cty.Value) - - for _, arg := range args { - if !arg.IsWhollyKnown() { - return cty.UnknownVal(retType), nil - } - if !arg.Type().IsObjectType() && !arg.Type().IsMapType() { - return cty.NilVal, fmt.Errorf("arguments must be maps or objects, got %#v", arg.Type().FriendlyName()) - } - for it := arg.ElementIterator(); it.Next(); { - k, v := it.Element() - outputMap[k.AsString()] = v - } - } - return cty.ObjectVal(outputMap), nil - }, -}) - -// ReverseFunc takes a sequence and produces a new sequence of the same length -// with all of the same elements as the given sequence but in reverse order. -var ReverseFunc = function.New(&function.Spec{ - Params: []function.Parameter{ - { - Name: "list", - Type: cty.DynamicPseudoType, - }, - }, - Type: func(args []cty.Value) (cty.Type, error) { - argTy := args[0].Type() - switch { - case argTy.IsTupleType(): - argTys := argTy.TupleElementTypes() - retTys := make([]cty.Type, len(argTys)) - for i, ty := range argTys { - retTys[len(retTys)-i-1] = ty - } - return cty.Tuple(retTys), nil - case argTy.IsListType(), argTy.IsSetType(): // We accept sets here to mimic the usual behavior of auto-converting to list - return cty.List(argTy.ElementType()), nil - default: - return cty.NilType, function.NewArgErrorf(0, "can only reverse list or tuple values, not %s", argTy.FriendlyName()) - } - }, - Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { - in := args[0].AsValueSlice() - outVals := make([]cty.Value, len(in)) - for i, v := range in { - outVals[len(outVals)-i-1] = v - } - switch { - case retType.IsTupleType(): - return cty.TupleVal(outVals), nil - default: - if len(outVals) == 0 { - return cty.ListValEmpty(retType.ElementType()), nil - } - return cty.ListVal(outVals), nil - } - }, -}) - -// SetProductFunc calculates the Cartesian product of two or more sets or -// sequences. If the arguments are all lists then the result is a list of tuples, -// preserving the ordering of all of the input lists. Otherwise the result is a -// set of tuples. -var SetProductFunc = function.New(&function.Spec{ - Params: []function.Parameter{}, - VarParam: &function.Parameter{ - Name: "sets", - Type: cty.DynamicPseudoType, - }, - Type: func(args []cty.Value) (retType cty.Type, err error) { - if len(args) < 2 { - return cty.NilType, errors.New("at least two arguments are required") - } - - listCount := 0 - elemTys := make([]cty.Type, len(args)) - for i, arg := range args { - aty := arg.Type() - switch { - case aty.IsSetType(): - elemTys[i] = aty.ElementType() - case aty.IsListType(): - elemTys[i] = aty.ElementType() - listCount++ - case aty.IsTupleType(): - // We can accept a tuple type only if there's some common type - // that all of its elements can be converted to. 
- allEtys := aty.TupleElementTypes() - if len(allEtys) == 0 { - elemTys[i] = cty.DynamicPseudoType - listCount++ - break - } - ety, _ := convert.UnifyUnsafe(allEtys) - if ety == cty.NilType { - return cty.NilType, function.NewArgErrorf(i, "all elements must be of the same type") - } - elemTys[i] = ety - listCount++ - default: - return cty.NilType, function.NewArgErrorf(i, "a set or a list is required") - } - } - - if listCount == len(args) { - return cty.List(cty.Tuple(elemTys)), nil - } - return cty.Set(cty.Tuple(elemTys)), nil - }, - Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { - ety := retType.ElementType() - - total := 1 - for _, arg := range args { - // Because of our type checking function, we are guaranteed that - // all of the arguments are known, non-null values of types that - // support LengthInt. - total *= arg.LengthInt() - } - - if total == 0 { - // If any of the arguments was an empty collection then our result - // is also an empty collection, which we'll short-circuit here. - if retType.IsListType() { - return cty.ListValEmpty(ety), nil - } - return cty.SetValEmpty(ety), nil - } - - subEtys := ety.TupleElementTypes() - product := make([][]cty.Value, total) - - b := make([]cty.Value, total*len(args)) - n := make([]int, len(args)) - s := 0 - argVals := make([][]cty.Value, len(args)) - for i, arg := range args { - argVals[i] = arg.AsValueSlice() - } - - for i := range product { - e := s + len(args) - pi := b[s:e] - product[i] = pi - s = e - - for j, n := range n { - val := argVals[j][n] - ty := subEtys[j] - if !val.Type().Equals(ty) { - var err error - val, err = convert.Convert(val, ty) - if err != nil { - // Should never happen since we checked this in our - // type-checking function. - return cty.NilVal, fmt.Errorf("failed to convert argVals[%d][%d] to %s; this is a bug in Terraform", j, n, ty.FriendlyName()) - } - } - pi[j] = val - } - - for j := len(n) - 1; j >= 0; j-- { - n[j]++ - if n[j] < len(argVals[j]) { - break - } - n[j] = 0 - } - } - - productVals := make([]cty.Value, total) - for i, vals := range product { - productVals[i] = cty.TupleVal(vals) - } - - if retType.IsListType() { - return cty.ListVal(productVals), nil - } - return cty.SetVal(productVals), nil - }, -}) - -// SliceFunc constructs a function that extracts some consecutive elements -// from within a list. -var SliceFunc = function.New(&function.Spec{ - Params: []function.Parameter{ - { - Name: "list", - Type: cty.DynamicPseudoType, - }, - { - Name: "start_index", - Type: cty.Number, - }, - { - Name: "end_index", - Type: cty.Number, - }, - }, - Type: func(args []cty.Value) (cty.Type, error) { - arg := args[0] - argTy := arg.Type() - - if argTy.IsSetType() { - return cty.NilType, function.NewArgErrorf(0, "cannot slice a set, because its elements do not have indices; use the tolist function to force conversion to list if the ordering of the result is not important") - } - if !argTy.IsListType() && !argTy.IsTupleType() { - return cty.NilType, function.NewArgErrorf(0, "must be a list or tuple value") - } - - startIndex, endIndex, idxsKnown, err := sliceIndexes(args) - if err != nil { - return cty.NilType, err - } - - if argTy.IsListType() { - return argTy, nil - } - - if !idxsKnown { - // If we don't know our start/end indices then we can't predict - // the result type if we're planning to return a tuple. 
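To make the preceding comment concrete (an illustrative aside, not part of the patch): with known indices the tuple result type can be computed directly, and without them there is no single answer.

	tupleTy := cty.Tuple([]cty.Type{cty.String, cty.Bool, cty.Number})
	// slice(t, 1, 3) has a fully determined type when both indices are known:
	resTy := cty.Tuple(tupleTy.TupleElementTypes()[1:3]) // Tuple(Bool, Number)
	// With either index unknown, the only safe answer is cty.DynamicPseudoType.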
- return cty.DynamicPseudoType, nil - } - return cty.Tuple(argTy.TupleElementTypes()[startIndex:endIndex]), nil - }, - Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { - inputList := args[0] - - if retType == cty.DynamicPseudoType { - return cty.DynamicVal, nil - } - - // we ignore idxsKnown return value here because the indices are always - // known here, or else the call would've short-circuited. - startIndex, endIndex, _, err := sliceIndexes(args) - if err != nil { - return cty.NilVal, err - } - - if endIndex-startIndex == 0 { - if retType.IsTupleType() { - return cty.EmptyTupleVal, nil - } - return cty.ListValEmpty(retType.ElementType()), nil - } - - outputList := inputList.AsValueSlice()[startIndex:endIndex] - - if retType.IsTupleType() { - return cty.TupleVal(outputList), nil - } - - return cty.ListVal(outputList), nil - }, -}) - -func sliceIndexes(args []cty.Value) (int, int, bool, error) { - var startIndex, endIndex, length int - var startKnown, endKnown, lengthKnown bool - - if args[0].Type().IsTupleType() || args[0].IsKnown() { // if it's a tuple then we always know the length by the type, but lists must be known - length = args[0].LengthInt() - lengthKnown = true - } - - if args[1].IsKnown() { - if err := gocty.FromCtyValue(args[1], &startIndex); err != nil { - return 0, 0, false, function.NewArgErrorf(1, "invalid start index: %s", err) - } - if startIndex < 0 { - return 0, 0, false, function.NewArgErrorf(1, "start index must not be less than zero") - } - if lengthKnown && startIndex > length { - return 0, 0, false, function.NewArgErrorf(1, "start index must not be greater than the length of the list") - } - startKnown = true - } - if args[2].IsKnown() { - if err := gocty.FromCtyValue(args[2], &endIndex); err != nil { - return 0, 0, false, function.NewArgErrorf(2, "invalid end index: %s", err) - } - if endIndex < 0 { - return 0, 0, false, function.NewArgErrorf(2, "end index must not be less than zero") - } - if lengthKnown && endIndex > length { - return 0, 0, false, function.NewArgErrorf(2, "end index must not be greater than the length of the list") - } - endKnown = true - } - if startKnown && endKnown { - if startIndex > endIndex { - return 0, 0, false, function.NewArgErrorf(1, "start index must not be greater than end index") - } - } - return startIndex, endIndex, startKnown && endKnown, nil -} - -// TransposeFunc contructs a function that takes a map of lists of strings and // TransposeFunc constructs a function that takes a map of lists of strings and // swaps the keys and values to produce a new map of lists of strings. var TransposeFunc = function.New(&function.Spec{ @@ -1226,156 +505,14 @@ var TransposeFunc = function.New(&function.Spec{ outputMap[outKey] = cty.ListVal(values) } + if len(outputMap) == 0 { + return cty.MapValEmpty(cty.List(cty.String)), nil + } + return cty.MapVal(outputMap), nil }, }) -// ValuesFunc constructs a function that returns a list of the map values, -// in the order of the sorted keys. -var ValuesFunc = function.New(&function.Spec{ - Params: []function.Parameter{ - { - Name: "values", - Type: cty.DynamicPseudoType, - }, - }, - Type: func(args []cty.Value) (ret cty.Type, err error) { - ty := args[0].Type() - if ty.IsMapType() { - return cty.List(ty.ElementType()), nil - } else if ty.IsObjectType() { - // The result is a tuple type with all of the same types as our - // object type's attributes, sorted in lexicographical order by the - // keys. 
(This matches the sort order guaranteed by ElementIterator - // on a cty object value.) - atys := ty.AttributeTypes() - if len(atys) == 0 { - return cty.EmptyTuple, nil - } - attrNames := make([]string, 0, len(atys)) - for name := range atys { - attrNames = append(attrNames, name) - } - sort.Strings(attrNames) - - tys := make([]cty.Type, len(attrNames)) - for i, name := range attrNames { - tys[i] = atys[name] - } - return cty.Tuple(tys), nil - } - return cty.NilType, errors.New("values() requires a map as the first argument") - }, - Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { - mapVar := args[0] - - // We can just iterate the map/object value here because cty guarantees - // that these types always iterate in key lexicographical order. - var values []cty.Value - for it := mapVar.ElementIterator(); it.Next(); { - _, val := it.Element() - values = append(values, val) - } - - if retType.IsTupleType() { - return cty.TupleVal(values), nil - } - if len(values) == 0 { - return cty.ListValEmpty(retType.ElementType()), nil - } - return cty.ListVal(values), nil - }, -}) - -// ZipmapFunc constructs a function that constructs a map from a list of keys -// and a corresponding list of values. -var ZipmapFunc = function.New(&function.Spec{ - Params: []function.Parameter{ - { - Name: "keys", - Type: cty.List(cty.String), - }, - { - Name: "values", - Type: cty.DynamicPseudoType, - }, - }, - Type: func(args []cty.Value) (ret cty.Type, err error) { - keys := args[0] - values := args[1] - valuesTy := values.Type() - - switch { - case valuesTy.IsListType(): - return cty.Map(values.Type().ElementType()), nil - case valuesTy.IsTupleType(): - if !keys.IsWhollyKnown() { - // Since zipmap with a tuple produces an object, we need to know - // all of the key names before we can predict our result type. - return cty.DynamicPseudoType, nil - } - - keysRaw := keys.AsValueSlice() - valueTypesRaw := valuesTy.TupleElementTypes() - if len(keysRaw) != len(valueTypesRaw) { - return cty.NilType, fmt.Errorf("number of keys (%d) does not match number of values (%d)", len(keysRaw), len(valueTypesRaw)) - } - atys := make(map[string]cty.Type, len(valueTypesRaw)) - for i, keyVal := range keysRaw { - if keyVal.IsNull() { - return cty.NilType, fmt.Errorf("keys list has null value at index %d", i) - } - key := keyVal.AsString() - atys[key] = valueTypesRaw[i] - } - return cty.Object(atys), nil - - default: - return cty.NilType, errors.New("values argument must be a list or tuple value") - } - }, - Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { - keys := args[0] - values := args[1] - - if !keys.IsWhollyKnown() { - // Unknown map keys and object attributes are not supported, so - // our entire result must be unknown in this case. - return cty.UnknownVal(retType), nil - } - - // both keys and values are guaranteed to be shallowly-known here, - // because our declared params above don't allow unknown or null values. 
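The pairing performed below, in miniature (a hedged sketch over known, equal-length inputs, cty import assumed):

	keys := cty.ListVal([]cty.Value{cty.StringVal("a"), cty.StringVal("b")})
	vals := cty.ListVal([]cty.Value{cty.NumberIntVal(1), cty.NumberIntVal(2)})

	out := make(map[string]cty.Value)
	for i, k := range keys.AsValueSlice() {
		// each key is paired with the value at the same position
		out[k.AsString()] = vals.Index(cty.NumberIntVal(int64(i)))
	}
	// cty.MapVal(out) is {"a" = 1, "b" = 2}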
- if keys.LengthInt() != values.LengthInt() { - return cty.NilVal, fmt.Errorf("number of keys (%d) does not match number of values (%d)", keys.LengthInt(), values.LengthInt()) - } - - output := make(map[string]cty.Value) - - i := 0 - for it := keys.ElementIterator(); it.Next(); { - _, v := it.Element() - val := values.Index(cty.NumberIntVal(int64(i))) - output[v.AsString()] = val - i++ - } - - switch { - case retType.IsMapType(): - if len(output) == 0 { - return cty.MapValEmpty(retType.ElementType()), nil - } - return cty.MapVal(output), nil - case retType.IsObjectType(): - return cty.ObjectVal(output), nil - default: - // Should never happen because the type-check function should've - // caught any other case. - return cty.NilVal, fmt.Errorf("internally selected incorrect result type %s (this is a bug)", retType.FriendlyName()) - } - }, -}) - // helper function to add an element to a list, if it does not already exist func appendIfMissing(slice []cty.Value, element cty.Value) ([]cty.Value, error) { for _, ele := range slice { @@ -1390,13 +527,6 @@ func appendIfMissing(slice []cty.Value, element cty.Value) ([]cty.Value, error) return append(slice, element), nil } -// Element returns a single element from a given list at the given index. If -// index is greater than the length of the list then it is wrapped modulo -// the list length. -func Element(list, index cty.Value) (cty.Value, error) { - return ElementFunc.Call([]cty.Value{list, index}) -} - // Length returns the number of elements in the given collection or number of // Unicode characters in the given string. func Length(collection cty.Value) (cty.Value, error) { @@ -1408,49 +538,11 @@ func Coalesce(args ...cty.Value) (cty.Value, error) { return CoalesceFunc.Call(args) } -// CoalesceList takes any number of list arguments and returns the first one that isn't empty. -func CoalesceList(args ...cty.Value) (cty.Value, error) { - return CoalesceListFunc.Call(args) -} - -// Compact takes a list of strings and returns a new list -// with any empty string elements removed. -func Compact(list cty.Value) (cty.Value, error) { - return CompactFunc.Call([]cty.Value{list}) -} - -// Contains determines whether a given list contains a given single value -// as one of its elements. -func Contains(list, value cty.Value) (cty.Value, error) { - return ContainsFunc.Call([]cty.Value{list, value}) -} - // Index finds the element index for a given value in a list. func Index(list, value cty.Value) (cty.Value, error) { return IndexFunc.Call([]cty.Value{list, value}) } -// Distinct takes a list and returns a new list with any duplicate elements removed. -func Distinct(list cty.Value) (cty.Value, error) { - return DistinctFunc.Call([]cty.Value{list}) -} - -// Chunklist splits a single list into fixed-size chunks, returning a list of lists. -func Chunklist(list, size cty.Value) (cty.Value, error) { - return ChunklistFunc.Call([]cty.Value{list, size}) -} - -// Flatten takes a list and replaces any elements that are lists with a flattened -// sequence of the list contents. -func Flatten(list cty.Value) (cty.Value, error) { - return FlattenFunc.Call([]cty.Value{list}) -} - -// Keys takes a map and returns a sorted list of the map keys. -func Keys(inputMap cty.Value) (cty.Value, error) { - return KeysFunc.Call([]cty.Value{inputMap}) -} - // List takes any number of list arguments and returns a list containing those // values in the same order. 
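A quick usage note for the wrapper that follows (an illustrative call with arbitrary values):

	l, err := List(cty.StringVal("a"), cty.StringVal("b"))
	// err is nil and l is cty.ListVal([]cty.Value{cty.StringVal("a"),
	// cty.StringVal("b")}): the argument order is preserved, as the
	// comment above states.
	_, _ = l, err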
func List(args ...cty.Value) (cty.Value, error) { @@ -1476,44 +568,8 @@ func Matchkeys(values, keys, searchset cty.Value) (cty.Value, error) { return MatchkeysFunc.Call([]cty.Value{values, keys, searchset}) } -// Merge takes an arbitrary number of maps and returns a single map that contains -// a merged set of elements from all of the maps. -// -// If more than one given map defines the same key then the one that is later in -// the argument sequence takes precedence. -func Merge(maps ...cty.Value) (cty.Value, error) { - return MergeFunc.Call(maps) -} - -// Reverse takes a sequence and produces a new sequence of the same length -// with all of the same elements as the given sequence but in reverse order. -func Reverse(list cty.Value) (cty.Value, error) { - return ReverseFunc.Call([]cty.Value{list}) -} - -// SetProduct computes the Cartesian product of sets or sequences. -func SetProduct(sets ...cty.Value) (cty.Value, error) { - return SetProductFunc.Call(sets) -} - -// Slice extracts some consecutive elements from within a list. -func Slice(list, start, end cty.Value) (cty.Value, error) { - return SliceFunc.Call([]cty.Value{list, start, end}) -} - // Transpose takes a map of lists of strings and swaps the keys and values to // produce a new map of lists of strings. func Transpose(values cty.Value) (cty.Value, error) { return TransposeFunc.Call([]cty.Value{values}) } - -// Values returns a list of the map values, in the order of the sorted keys. -// This function only works on flat maps. -func Values(values cty.Value) (cty.Value, error) { - return ValuesFunc.Call([]cty.Value{values}) -} - -// Zipmap constructs a map from a list of keys and a corresponding list of values. -func Zipmap(keys, values cty.Value) (cty.Value, error) { - return ZipmapFunc.Call([]cty.Value{keys, values}) -} diff --git a/lang/funcs/collection_test.go b/lang/funcs/collection_test.go index cec18c049..76dd1dbf0 100644 --- a/lang/funcs/collection_test.go +++ b/lang/funcs/collection_test.go @@ -7,125 +7,6 @@ import ( "github.com/zclconf/go-cty/cty" ) -func TestElement(t *testing.T) { - tests := []struct { - List cty.Value - Index cty.Value - Want cty.Value - }{ - { - cty.ListVal([]cty.Value{ - cty.StringVal("hello"), - }), - cty.NumberIntVal(0), - cty.StringVal("hello"), - }, - { - cty.ListVal([]cty.Value{ - cty.StringVal("hello"), - }), - cty.NumberIntVal(1), - cty.StringVal("hello"), - }, - { - cty.ListVal([]cty.Value{ - cty.StringVal("hello"), - cty.StringVal("bonjour"), - }), - cty.NumberIntVal(0), - cty.StringVal("hello"), - }, - { - cty.ListVal([]cty.Value{ - cty.StringVal("hello"), - cty.StringVal("bonjour"), - }), - cty.NumberIntVal(1), - cty.StringVal("bonjour"), - }, - { - cty.ListVal([]cty.Value{ - cty.StringVal("hello"), - cty.StringVal("bonjour"), - }), - cty.NumberIntVal(2), - cty.StringVal("hello"), - }, - - { - cty.TupleVal([]cty.Value{ - cty.StringVal("hello"), - }), - cty.NumberIntVal(0), - cty.StringVal("hello"), - }, - { - cty.TupleVal([]cty.Value{ - cty.StringVal("hello"), - }), - cty.NumberIntVal(1), - cty.StringVal("hello"), - }, - { - cty.TupleVal([]cty.Value{ - cty.StringVal("hello"), - cty.StringVal("bonjour"), - }), - cty.NumberIntVal(0), - cty.StringVal("hello"), - }, - { - cty.TupleVal([]cty.Value{ - cty.StringVal("hello"), - cty.StringVal("bonjour"), - }), - cty.NumberIntVal(1), - cty.StringVal("bonjour"), - }, - { - cty.TupleVal([]cty.Value{ - cty.StringVal("hello"), - cty.StringVal("bonjour"), - }), - cty.NumberIntVal(2), - cty.StringVal("hello"), - }, - { - cty.TupleVal([]cty.Value{ - 
cty.StringVal("hello"), - cty.StringVal("bonjour"), - }), - cty.UnknownVal(cty.Number), - cty.DynamicVal, - }, - { - cty.UnknownVal(cty.Tuple([]cty.Type{cty.String, cty.Bool})), - cty.NumberIntVal(1), - cty.UnknownVal(cty.Bool), - }, - { - cty.UnknownVal(cty.Tuple([]cty.Type{cty.String, cty.String})), - cty.UnknownVal(cty.Number), - cty.DynamicVal, - }, - } - - for _, test := range tests { - t.Run(fmt.Sprintf("Element(%#v, %#v)", test.List, test.Index), func(t *testing.T) { - got, err := Element(test.List, test.Index) - - if err != nil { - t.Fatalf("unexpected error: %s", err) - } - - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) - } - }) - } - -} - func TestLength(t *testing.T) { tests := []struct { Value cty.Value @@ -345,434 +226,6 @@ func TestCoalesce(t *testing.T) { } } -func TestCoalesceList(t *testing.T) { - tests := []struct { - Values []cty.Value - Want cty.Value - Err bool - }{ - { - []cty.Value{ - cty.ListVal([]cty.Value{ - cty.StringVal("first"), cty.StringVal("second"), - }), - cty.ListVal([]cty.Value{ - cty.StringVal("third"), cty.StringVal("fourth"), - }), - }, - cty.ListVal([]cty.Value{ - cty.StringVal("first"), cty.StringVal("second"), - }), - false, - }, - { - []cty.Value{ - cty.ListValEmpty(cty.String), - cty.ListVal([]cty.Value{ - cty.StringVal("third"), cty.StringVal("fourth"), - }), - }, - cty.ListVal([]cty.Value{ - cty.StringVal("third"), cty.StringVal("fourth"), - }), - false, - }, - { - []cty.Value{ - cty.ListValEmpty(cty.Number), - cty.ListVal([]cty.Value{ - cty.NumberIntVal(1), - cty.NumberIntVal(2), - }), - }, - cty.ListVal([]cty.Value{ - cty.NumberIntVal(1), - cty.NumberIntVal(2), - }), - false, - }, - { // lists with mixed types - []cty.Value{ - cty.ListVal([]cty.Value{ - cty.StringVal("first"), cty.StringVal("second"), - }), - cty.ListVal([]cty.Value{ - cty.NumberIntVal(1), - cty.NumberIntVal(2), - }), - }, - cty.ListVal([]cty.Value{ - cty.StringVal("first"), cty.StringVal("second"), - }), - false, - }, - { // lists with mixed types - []cty.Value{ - cty.ListVal([]cty.Value{ - cty.NumberIntVal(1), - cty.NumberIntVal(2), - }), - cty.ListVal([]cty.Value{ - cty.StringVal("first"), cty.StringVal("second"), - }), - }, - cty.ListVal([]cty.Value{ - cty.NumberIntVal(1), cty.NumberIntVal(2), - }), - false, - }, - { // list with unknown values - []cty.Value{ - cty.ListVal([]cty.Value{ - cty.StringVal("first"), cty.StringVal("second"), - }), - cty.ListVal([]cty.Value{ - cty.UnknownVal(cty.String), - }), - }, - cty.ListVal([]cty.Value{ - cty.StringVal("first"), cty.StringVal("second"), - }), - false, - }, - { // list with unknown values - []cty.Value{ - cty.ListVal([]cty.Value{ - cty.UnknownVal(cty.String), - }), - cty.ListVal([]cty.Value{ - cty.StringVal("third"), cty.StringVal("fourth"), - }), - }, - cty.ListVal([]cty.Value{ - cty.UnknownVal(cty.String), - }), - false, - }, - { - []cty.Value{ - cty.MapValEmpty(cty.DynamicPseudoType), - cty.ListVal([]cty.Value{ - cty.StringVal("third"), cty.StringVal("fourth"), - }), - }, - cty.NilVal, - true, - }, - { // unknown list - []cty.Value{ - cty.ListVal([]cty.Value{ - cty.StringVal("third"), cty.StringVal("fourth"), - }), - cty.UnknownVal(cty.List(cty.String)), - }, - cty.ListVal([]cty.Value{ - cty.StringVal("third"), cty.StringVal("fourth"), - }), - false, - }, - { // unknown list - []cty.Value{ - cty.ListValEmpty(cty.String), - cty.UnknownVal(cty.List(cty.String)), - }, - cty.DynamicVal, - false, - }, - { // unknown list - []cty.Value{ - cty.UnknownVal(cty.List(cty.String)), - 
cty.ListVal([]cty.Value{ - cty.StringVal("third"), cty.StringVal("fourth"), - }), - }, - cty.DynamicVal, - false, - }, - { // unknown tuple - []cty.Value{ - cty.UnknownVal(cty.Tuple([]cty.Type{cty.String})), - cty.ListVal([]cty.Value{ - cty.StringVal("third"), cty.StringVal("fourth"), - }), - }, - cty.DynamicVal, - false, - }, - { // empty tuple - []cty.Value{ - cty.EmptyTupleVal, - cty.ListVal([]cty.Value{ - cty.StringVal("third"), cty.StringVal("fourth"), - }), - }, - cty.ListVal([]cty.Value{ - cty.StringVal("third"), cty.StringVal("fourth"), - }), - false, - }, - { // tuple value - []cty.Value{ - cty.TupleVal([]cty.Value{ - cty.StringVal("a"), - cty.NumberIntVal(2), - }), - cty.ListVal([]cty.Value{ - cty.StringVal("third"), cty.StringVal("fourth"), - }), - }, - cty.TupleVal([]cty.Value{ - cty.StringVal("a"), - cty.NumberIntVal(2), - }), - false, - }, - { // reject set value - []cty.Value{ - cty.SetVal([]cty.Value{ - cty.StringVal("a"), - }), - cty.ListVal([]cty.Value{ - cty.StringVal("third"), cty.StringVal("fourth"), - }), - }, - cty.NilVal, - true, - }, - } - - for i, test := range tests { - t.Run(fmt.Sprintf("%d-coalescelist(%#v)", i, test.Values), func(t *testing.T) { - got, err := CoalesceList(test.Values...) - - if test.Err { - if err == nil { - t.Fatal("succeeded; want error") - } - return - } else if err != nil { - t.Fatalf("unexpected error: %s", err) - } - - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) - } - }) - } -} - -func TestCompact(t *testing.T) { - tests := []struct { - List cty.Value - Want cty.Value - Err bool - }{ - { - cty.ListVal([]cty.Value{ - cty.StringVal("test"), - cty.StringVal(""), - cty.StringVal("test"), - cty.NullVal(cty.String), - }), - cty.ListVal([]cty.Value{ - cty.StringVal("test"), - cty.StringVal("test"), - }), - false, - }, - { - cty.ListVal([]cty.Value{ - cty.StringVal(""), - cty.StringVal(""), - cty.StringVal(""), - }), - cty.ListValEmpty(cty.String), - false, - }, - { - cty.ListVal([]cty.Value{ - cty.NullVal(cty.String), - cty.NullVal(cty.String), - }), - cty.ListValEmpty(cty.String), - false, - }, - { - cty.ListValEmpty(cty.String), - cty.ListValEmpty(cty.String), - false, - }, - { - cty.ListVal([]cty.Value{ - cty.StringVal("test"), - cty.StringVal("test"), - cty.StringVal(""), - }), - cty.ListVal([]cty.Value{ - cty.StringVal("test"), - cty.StringVal("test"), - }), - false, - }, - { - cty.ListVal([]cty.Value{ - cty.StringVal("test"), - cty.UnknownVal(cty.String), - cty.StringVal(""), - cty.NullVal(cty.String), - }), - cty.UnknownVal(cty.List(cty.String)), - false, - }, - { // errors on list of lists - cty.ListVal([]cty.Value{ - cty.ListVal([]cty.Value{ - cty.StringVal("test"), - }), - cty.ListVal([]cty.Value{ - cty.StringVal(""), - }), - }), - cty.NilVal, - true, - }, - } - - for _, test := range tests { - t.Run(fmt.Sprintf("compact(%#v)", test.List), func(t *testing.T) { - got, err := Compact(test.List) - - if test.Err { - if err == nil { - t.Fatal("succeeded; want error") - } - return - } else if err != nil { - t.Fatalf("unexpected error: %s", err) - } - - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) - } - }) - } -} - -func TestContains(t *testing.T) { - listOfStrings := cty.ListVal([]cty.Value{ - cty.StringVal("the"), - cty.StringVal("quick"), - cty.StringVal("brown"), - cty.StringVal("fox"), - }) - listOfInts := cty.ListVal([]cty.Value{ - cty.NumberIntVal(1), - cty.NumberIntVal(2), - cty.NumberIntVal(3), - cty.NumberIntVal(4), - }) - 
listWithUnknown := cty.ListVal([]cty.Value{ - cty.StringVal("the"), - cty.StringVal("quick"), - cty.StringVal("brown"), - cty.UnknownVal(cty.String), - }) - - tests := []struct { - List cty.Value - Value cty.Value - Want cty.Value - Err bool - }{ - { - listOfStrings, - cty.StringVal("the"), - cty.BoolVal(true), - false, - }, - { - listWithUnknown, - cty.StringVal("the"), - cty.BoolVal(true), - false, - }, - { - listOfStrings, - cty.StringVal("penguin"), - cty.BoolVal(false), - false, - }, - { - listOfInts, - cty.NumberIntVal(1), - cty.BoolVal(true), - false, - }, - { - listOfInts, - cty.NumberIntVal(42), - cty.BoolVal(false), - false, - }, - { // And now we mix and match - listOfInts, - cty.StringVal("1"), - cty.BoolVal(false), - false, - }, - { // Check a list with an unknown value - cty.ListVal([]cty.Value{ - cty.UnknownVal(cty.String), - cty.StringVal("quick"), - cty.StringVal("brown"), - cty.StringVal("fox"), - }), - cty.StringVal("quick"), - cty.BoolVal(true), - false, - }, - { // set val - cty.SetVal([]cty.Value{ - cty.StringVal("quick"), - cty.StringVal("brown"), - cty.StringVal("fox"), - }), - cty.StringVal("quick"), - cty.BoolVal(true), - false, - }, - { // tuple val - cty.TupleVal([]cty.Value{ - cty.StringVal("quick"), - cty.StringVal("brown"), - cty.NumberIntVal(3), - }), - cty.NumberIntVal(3), - cty.BoolVal(true), - false, - }, - } - - for _, test := range tests { - t.Run(fmt.Sprintf("contains(%#v, %#v)", test.List, test.Value), func(t *testing.T) { - got, err := Contains(test.List, test.Value) - - if test.Err { - if err == nil { - t.Fatal("succeeded; want error") - } - return - } else if err != nil { - t.Fatalf("unexpected error: %s", err) - } - - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) - } - }) - } -} - func TestIndex(t *testing.T) { tests := []struct { List cty.Value @@ -892,493 +345,6 @@ func TestIndex(t *testing.T) { } } -func TestDistinct(t *testing.T) { - tests := []struct { - List cty.Value - Want cty.Value - Err bool - }{ - { - cty.ListVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - cty.StringVal("a"), - cty.StringVal("b"), - }), - cty.ListVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - }), - false, - }, - { - cty.ListValEmpty(cty.String), - cty.ListValEmpty(cty.String), - false, - }, - { - cty.ListVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - cty.StringVal("a"), - cty.UnknownVal(cty.String), - }), - cty.UnknownVal(cty.List(cty.String)), - false, - }, - { - cty.ListVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - cty.StringVal("c"), - cty.StringVal("d"), - }), - cty.ListVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - cty.StringVal("c"), - cty.StringVal("d"), - }), - false, - }, - { - cty.ListVal([]cty.Value{ - cty.NumberIntVal(1), - cty.NumberIntVal(2), - cty.NumberIntVal(1), - cty.NumberIntVal(2), - }), - cty.ListVal([]cty.Value{ - cty.NumberIntVal(1), - cty.NumberIntVal(2), - }), - false, - }, - { - cty.ListVal([]cty.Value{ - cty.ListVal([]cty.Value{ - cty.NumberIntVal(1), - cty.NumberIntVal(2), - }), - cty.ListVal([]cty.Value{ - cty.NumberIntVal(1), - cty.NumberIntVal(2), - }), - }), - cty.ListVal([]cty.Value{ - cty.ListVal([]cty.Value{ - cty.NumberIntVal(1), - cty.NumberIntVal(2), - }), - }), - false, - }, - { - cty.ListVal([]cty.Value{ - cty.ListVal([]cty.Value{ - cty.NumberIntVal(1), - cty.NumberIntVal(2), - }), - cty.ListVal([]cty.Value{ - cty.NumberIntVal(3), - cty.NumberIntVal(4), - }), - }), - cty.ListVal([]cty.Value{ - 
cty.ListVal([]cty.Value{ - cty.NumberIntVal(1), - cty.NumberIntVal(2), - }), - cty.ListVal([]cty.Value{ - cty.NumberIntVal(3), - cty.NumberIntVal(4), - }), - }), - false, - }, - } - - for _, test := range tests { - t.Run(fmt.Sprintf("distinct(%#v)", test.List), func(t *testing.T) { - got, err := Distinct(test.List) - - if test.Err { - if err == nil { - t.Fatal("succeeded; want error") - } - return - } else if err != nil { - t.Fatalf("unexpected error: %s", err) - } - - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) - } - }) - } -} - -func TestChunklist(t *testing.T) { - tests := []struct { - List cty.Value - Size cty.Value - Want cty.Value - Err bool - }{ - { - cty.ListVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - cty.StringVal("c"), - }), - cty.NumberIntVal(1), - cty.ListVal([]cty.Value{ - cty.ListVal([]cty.Value{ - cty.StringVal("a"), - }), - cty.ListVal([]cty.Value{ - cty.StringVal("b"), - }), - cty.ListVal([]cty.Value{ - cty.StringVal("c"), - }), - }), - false, - }, - { - cty.ListVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - cty.StringVal("c"), - }), - cty.NumberIntVal(-1), - cty.NilVal, - true, - }, - { - cty.ListVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - cty.StringVal("c"), - }), - cty.NumberIntVal(0), - cty.ListVal([]cty.Value{ - cty.ListVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - cty.StringVal("c"), - }), - }), - false, - }, - { - cty.ListVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - cty.UnknownVal(cty.String), - }), - cty.NumberIntVal(1), - cty.ListVal([]cty.Value{ - cty.ListVal([]cty.Value{ - cty.StringVal("a"), - }), - cty.ListVal([]cty.Value{ - cty.StringVal("b"), - }), - cty.ListVal([]cty.Value{ - cty.UnknownVal(cty.String), - }), - }), - false, - }, - { - cty.UnknownVal(cty.List(cty.String)), - cty.NumberIntVal(1), - cty.UnknownVal(cty.List(cty.List(cty.String))), - false, - }, - { - cty.ListValEmpty(cty.String), - cty.NumberIntVal(3), - cty.ListValEmpty(cty.List(cty.String)), - false, - }, - } - - for i, test := range tests { - t.Run(fmt.Sprintf("%d-chunklist(%#v, %#v)", i, test.List, test.Size), func(t *testing.T) { - got, err := Chunklist(test.List, test.Size) - - if test.Err { - if err == nil { - t.Fatal("succeeded; want error") - } - return - } else if err != nil { - t.Fatalf("unexpected error: %s", err) - } - - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) - } - }) - } -} - -func TestFlatten(t *testing.T) { - tests := []struct { - List cty.Value - Want cty.Value - Err bool - }{ - { - cty.ListVal([]cty.Value{ - cty.ListVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - }), - cty.ListVal([]cty.Value{ - cty.StringVal("c"), - cty.StringVal("d"), - }), - }), - cty.TupleVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - cty.StringVal("c"), - cty.StringVal("d"), - }), - false, - }, - // handle single elements as arguments - { - cty.TupleVal([]cty.Value{ - cty.ListVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - }), - cty.StringVal("c"), - }), - cty.TupleVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - cty.StringVal("c"), - }), false, - }, - // handle single elements and mixed primitive types as arguments - { - cty.TupleVal([]cty.Value{ - cty.ListVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - }), - cty.StringVal("c"), - cty.TupleVal([]cty.Value{ - cty.StringVal("x"), - cty.NumberIntVal(1), - }), - }), - 
cty.TupleVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - cty.StringVal("c"), - cty.StringVal("x"), - cty.NumberIntVal(1), - }), - false, - }, - // Primitive unknowns should still be flattened to a tuple - { - cty.ListVal([]cty.Value{ - cty.ListVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - }), - cty.ListVal([]cty.Value{ - cty.UnknownVal(cty.String), - cty.StringVal("d"), - }), - }), - cty.TupleVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - cty.UnknownVal(cty.String), - cty.StringVal("d"), - }), false, - }, - // An unknown series should return an unknown dynamic value - { - cty.TupleVal([]cty.Value{ - cty.ListVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - }), - cty.TupleVal([]cty.Value{ - cty.UnknownVal(cty.List(cty.String)), - cty.StringVal("d"), - }), - }), - cty.UnknownVal(cty.DynamicPseudoType), false, - }, - { - cty.ListValEmpty(cty.String), - cty.EmptyTupleVal, - false, - }, - { - cty.SetVal([]cty.Value{ - cty.SetVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - }), - cty.SetVal([]cty.Value{ - cty.StringVal("c"), - cty.StringVal("d"), - }), - }), - cty.TupleVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - cty.StringVal("c"), - cty.StringVal("d"), - }), - false, - }, - { - cty.TupleVal([]cty.Value{ - cty.SetVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - }), - cty.ListVal([]cty.Value{ - cty.StringVal("c"), - cty.StringVal("d"), - }), - }), - cty.TupleVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - cty.StringVal("c"), - cty.StringVal("d"), - }), - false, - }, - } - - for i, test := range tests { - t.Run(fmt.Sprintf("%d-flatten(%#v)", i, test.List), func(t *testing.T) { - got, err := Flatten(test.List) - - if test.Err { - if err == nil { - t.Fatal("succeeded; want error") - } - return - } else if err != nil { - t.Fatalf("unexpected error: %s", err) - } - - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) - } - }) - } -} - -func TestKeys(t *testing.T) { - tests := []struct { - Map cty.Value - Want cty.Value - Err bool - }{ - { - cty.MapVal(map[string]cty.Value{ - "hello": cty.NumberIntVal(1), - "goodbye": cty.NumberIntVal(42), - }), - cty.ListVal([]cty.Value{ - cty.StringVal("goodbye"), - cty.StringVal("hello"), - }), - false, - }, - { // same as above, but an object type - cty.ObjectVal(map[string]cty.Value{ - "hello": cty.NumberIntVal(1), - "goodbye": cty.StringVal("adieu"), - }), - cty.TupleVal([]cty.Value{ - cty.StringVal("goodbye"), - cty.StringVal("hello"), - }), - false, - }, - { // for an unknown object we can still return the keys, since they are part of the type - cty.UnknownVal(cty.Object(map[string]cty.Type{ - "hello": cty.Number, - "goodbye": cty.String, - })), - cty.TupleVal([]cty.Value{ - cty.StringVal("goodbye"), - cty.StringVal("hello"), - }), - false, - }, - { // an empty object has no keys - cty.EmptyObjectVal, - cty.EmptyTupleVal, - false, - }, - { // an empty map has no keys, but the result should still be properly typed - cty.MapValEmpty(cty.Number), - cty.ListValEmpty(cty.String), - false, - }, - { // Unknown map has unknown keys - cty.UnknownVal(cty.Map(cty.String)), - cty.UnknownVal(cty.List(cty.String)), - false, - }, - { // Not a map at all, so invalid - cty.StringVal("foo"), - cty.NilVal, - true, - }, - { // Can't get keys from a null object - cty.NullVal(cty.Object(map[string]cty.Type{ - "hello": cty.Number, - "goodbye": cty.String, - })), - cty.NilVal, - true, - }, - { // Can't get keys from a 
null map - cty.NullVal(cty.Map(cty.Number)), - cty.NilVal, - true, - }, - } - - for _, test := range tests { - t.Run(fmt.Sprintf("keys(%#v)", test.Map), func(t *testing.T) { - got, err := Keys(test.Map) - - if test.Err { - if err == nil { - t.Fatal("succeeded; want error") - } - return - } else if err != nil { - t.Fatalf("unexpected error: %s", err) - } - - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) - } - }) - } -} - func TestList(t *testing.T) { tests := []struct { Values []cty.Value @@ -2064,796 +1030,6 @@ func TestMatchkeys(t *testing.T) { } } -func TestMerge(t *testing.T) { - tests := []struct { - Values []cty.Value - Want cty.Value - Err bool - }{ - { - []cty.Value{ - cty.MapVal(map[string]cty.Value{ - "a": cty.StringVal("b"), - }), - cty.MapVal(map[string]cty.Value{ - "c": cty.StringVal("d"), - }), - }, - cty.ObjectVal(map[string]cty.Value{ - "a": cty.StringVal("b"), - "c": cty.StringVal("d"), - }), - false, - }, - { // handle unknowns - []cty.Value{ - cty.MapVal(map[string]cty.Value{ - "a": cty.UnknownVal(cty.String), - }), - cty.MapVal(map[string]cty.Value{ - "c": cty.StringVal("d"), - }), - }, - cty.DynamicVal, - false, - }, - { // merge with conflicts is ok, last in wins - []cty.Value{ - cty.MapVal(map[string]cty.Value{ - "a": cty.StringVal("b"), - "c": cty.StringVal("d"), - }), - cty.MapVal(map[string]cty.Value{ - "a": cty.StringVal("x"), - }), - }, - cty.ObjectVal(map[string]cty.Value{ - "a": cty.StringVal("x"), - "c": cty.StringVal("d"), - }), - false, - }, - { // only accept maps - []cty.Value{ - cty.MapVal(map[string]cty.Value{ - "a": cty.StringVal("b"), - "c": cty.StringVal("d"), - }), - cty.ListVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("x"), - }), - }, - cty.NilVal, - true, - }, - - { // argument error, for a null type - []cty.Value{ - cty.MapVal(map[string]cty.Value{ - "a": cty.StringVal("b"), - }), - cty.NullVal(cty.String), - }, - cty.NilVal, - true, - }, - { // merge maps of maps - []cty.Value{ - cty.MapVal(map[string]cty.Value{ - "a": cty.MapVal(map[string]cty.Value{ - "b": cty.StringVal("c"), - }), - }), - cty.MapVal(map[string]cty.Value{ - "d": cty.MapVal(map[string]cty.Value{ - "e": cty.StringVal("f"), - }), - }), - }, - cty.ObjectVal(map[string]cty.Value{ - "a": cty.MapVal(map[string]cty.Value{ - "b": cty.StringVal("c"), - }), - "d": cty.MapVal(map[string]cty.Value{ - "e": cty.StringVal("f"), - }), - }), - false, - }, - { // map of lists - []cty.Value{ - cty.MapVal(map[string]cty.Value{ - "a": cty.ListVal([]cty.Value{ - cty.StringVal("b"), - cty.StringVal("c"), - }), - }), - cty.MapVal(map[string]cty.Value{ - "d": cty.ListVal([]cty.Value{ - cty.StringVal("e"), - cty.StringVal("f"), - }), - }), - }, - cty.ObjectVal(map[string]cty.Value{ - "a": cty.ListVal([]cty.Value{ - cty.StringVal("b"), - cty.StringVal("c"), - }), - "d": cty.ListVal([]cty.Value{ - cty.StringVal("e"), - cty.StringVal("f"), - }), - }), - false, - }, - { // merge map of various kinds - []cty.Value{ - cty.MapVal(map[string]cty.Value{ - "a": cty.ListVal([]cty.Value{ - cty.StringVal("b"), - cty.StringVal("c"), - }), - }), - cty.MapVal(map[string]cty.Value{ - "d": cty.MapVal(map[string]cty.Value{ - "e": cty.StringVal("f"), - }), - }), - }, - cty.ObjectVal(map[string]cty.Value{ - "a": cty.ListVal([]cty.Value{ - cty.StringVal("b"), - cty.StringVal("c"), - }), - "d": cty.MapVal(map[string]cty.Value{ - "e": cty.StringVal("f"), - }), - }), - false, - }, - { // argument error: non map type - []cty.Value{ - cty.MapVal(map[string]cty.Value{ - 
"a": cty.ListVal([]cty.Value{ - cty.StringVal("b"), - cty.StringVal("c"), - }), - }), - cty.ListVal([]cty.Value{ - cty.StringVal("d"), - cty.StringVal("e"), - }), - }, - cty.NilVal, - true, - }, - } - - for _, test := range tests { - t.Run(fmt.Sprintf("merge(%#v)", test.Values), func(t *testing.T) { - got, err := Merge(test.Values...) - - if test.Err { - if err == nil { - t.Fatal("succeeded; want error") - } - return - } else if err != nil { - t.Fatalf("unexpected error: %s", err) - } - - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) - } - }) - } -} - -func TestReverse(t *testing.T) { - tests := []struct { - List cty.Value - Want cty.Value - Err string - }{ - { - cty.ListValEmpty(cty.String), - cty.ListValEmpty(cty.String), - "", - }, - { - cty.ListVal([]cty.Value{cty.StringVal("a")}), - cty.ListVal([]cty.Value{cty.StringVal("a")}), - "", - }, - { - cty.ListVal([]cty.Value{cty.StringVal("a"), cty.StringVal("b")}), - cty.ListVal([]cty.Value{cty.StringVal("b"), cty.StringVal("a")}), - "", - }, - { - cty.ListVal([]cty.Value{cty.StringVal("a"), cty.StringVal("b"), cty.StringVal("c")}), - cty.ListVal([]cty.Value{cty.StringVal("c"), cty.StringVal("b"), cty.StringVal("a")}), - "", - }, - { - cty.ListVal([]cty.Value{cty.UnknownVal(cty.String), cty.StringVal("b"), cty.StringVal("c")}), - cty.ListVal([]cty.Value{cty.StringVal("c"), cty.StringVal("b"), cty.UnknownVal(cty.String)}), - "", - }, - { - cty.EmptyTupleVal, - cty.EmptyTupleVal, - "", - }, - { - cty.TupleVal([]cty.Value{cty.StringVal("a")}), - cty.TupleVal([]cty.Value{cty.StringVal("a")}), - "", - }, - { - cty.TupleVal([]cty.Value{cty.StringVal("a"), cty.True}), - cty.TupleVal([]cty.Value{cty.True, cty.StringVal("a")}), - "", - }, - { - cty.TupleVal([]cty.Value{cty.StringVal("a"), cty.True, cty.Zero}), - cty.TupleVal([]cty.Value{cty.Zero, cty.True, cty.StringVal("a")}), - "", - }, - { - cty.SetValEmpty(cty.String), - cty.ListValEmpty(cty.String), - "", - }, - { - cty.SetVal([]cty.Value{cty.StringVal("a")}), - cty.ListVal([]cty.Value{cty.StringVal("a")}), - "", - }, - { - cty.SetVal([]cty.Value{cty.StringVal("a"), cty.StringVal("b")}), - cty.ListVal([]cty.Value{cty.StringVal("b"), cty.StringVal("a")}), // set-of-string iterates in lexicographical order - "", - }, - { - cty.SetVal([]cty.Value{cty.StringVal("b"), cty.StringVal("a"), cty.StringVal("c")}), - cty.ListVal([]cty.Value{cty.StringVal("c"), cty.StringVal("b"), cty.StringVal("a")}), // set-of-string iterates in lexicographical order - "", - }, - { - cty.StringVal("no"), - cty.NilVal, - "can only reverse list or tuple values, not string", - }, - { - cty.True, - cty.NilVal, - "can only reverse list or tuple values, not bool", - }, - { - cty.MapValEmpty(cty.String), - cty.NilVal, - "can only reverse list or tuple values, not map of string", - }, - { - cty.NullVal(cty.List(cty.String)), - cty.NilVal, - "argument must not be null", - }, - { - cty.UnknownVal(cty.List(cty.String)), - cty.UnknownVal(cty.List(cty.String)), - "", - }, - } - - for _, test := range tests { - t.Run(fmt.Sprintf("reverse(%#v)", test.List), func(t *testing.T) { - got, err := Reverse(test.List) - - if test.Err != "" { - if err == nil { - t.Fatal("succeeded; want error") - } - if got, want := err.Error(), test.Err; got != want { - t.Fatalf("wrong error\ngot: %s\nwant: %s", got, want) - } - return - } else if err != nil { - t.Fatalf("unexpected error: %s", err) - } - - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) - } - }) - 
} - -} - -func TestSetProduct(t *testing.T) { - tests := []struct { - Sets []cty.Value - Want cty.Value - Err string - }{ - { - nil, - cty.DynamicVal, - "at least two arguments are required", - }, - { - []cty.Value{ - cty.SetValEmpty(cty.String), - }, - cty.DynamicVal, - "at least two arguments are required", - }, - { - []cty.Value{ - cty.SetValEmpty(cty.String), - cty.StringVal("hello"), - }, - cty.DynamicVal, - "a set or a list is required", // this is an ArgError, so is presented against the second argument in particular - }, - { - []cty.Value{ - cty.SetValEmpty(cty.String), - cty.SetValEmpty(cty.String), - }, - cty.SetValEmpty(cty.Tuple([]cty.Type{cty.String, cty.String})), - "", - }, - { - []cty.Value{ - cty.SetVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("stg"), cty.StringVal("prd")}), - cty.SetVal([]cty.Value{cty.StringVal("foo"), cty.StringVal("bar")}), - }, - cty.SetVal([]cty.Value{ - cty.TupleVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("foo")}), - cty.TupleVal([]cty.Value{cty.StringVal("stg"), cty.StringVal("foo")}), - cty.TupleVal([]cty.Value{cty.StringVal("prd"), cty.StringVal("foo")}), - cty.TupleVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("bar")}), - cty.TupleVal([]cty.Value{cty.StringVal("stg"), cty.StringVal("bar")}), - cty.TupleVal([]cty.Value{cty.StringVal("prd"), cty.StringVal("bar")}), - }), - "", - }, - { - []cty.Value{ - cty.ListVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("stg"), cty.StringVal("prd")}), - cty.SetVal([]cty.Value{cty.StringVal("foo"), cty.StringVal("bar")}), - }, - cty.SetVal([]cty.Value{ - cty.TupleVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("foo")}), - cty.TupleVal([]cty.Value{cty.StringVal("stg"), cty.StringVal("foo")}), - cty.TupleVal([]cty.Value{cty.StringVal("prd"), cty.StringVal("foo")}), - cty.TupleVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("bar")}), - cty.TupleVal([]cty.Value{cty.StringVal("stg"), cty.StringVal("bar")}), - cty.TupleVal([]cty.Value{cty.StringVal("prd"), cty.StringVal("bar")}), - }), - "", - }, - { - []cty.Value{ - cty.TupleVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("stg"), cty.StringVal("prd")}), - cty.SetVal([]cty.Value{cty.StringVal("foo"), cty.StringVal("bar")}), - }, - cty.SetVal([]cty.Value{ - cty.TupleVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("foo")}), - cty.TupleVal([]cty.Value{cty.StringVal("stg"), cty.StringVal("foo")}), - cty.TupleVal([]cty.Value{cty.StringVal("prd"), cty.StringVal("foo")}), - cty.TupleVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("bar")}), - cty.TupleVal([]cty.Value{cty.StringVal("stg"), cty.StringVal("bar")}), - cty.TupleVal([]cty.Value{cty.StringVal("prd"), cty.StringVal("bar")}), - }), - "", - }, - { - []cty.Value{ - cty.ListVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("stg"), cty.StringVal("prd")}), - cty.ListVal([]cty.Value{cty.StringVal("foo"), cty.StringVal("bar")}), - }, - cty.ListVal([]cty.Value{ - cty.TupleVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("foo")}), - cty.TupleVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("bar")}), - cty.TupleVal([]cty.Value{cty.StringVal("stg"), cty.StringVal("foo")}), - cty.TupleVal([]cty.Value{cty.StringVal("stg"), cty.StringVal("bar")}), - cty.TupleVal([]cty.Value{cty.StringVal("prd"), cty.StringVal("foo")}), - cty.TupleVal([]cty.Value{cty.StringVal("prd"), cty.StringVal("bar")}), - }), - "", - }, - { - []cty.Value{ - cty.ListVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("stg"), cty.StringVal("prd")}), - cty.TupleVal([]cty.Value{cty.StringVal("foo"), 
cty.StringVal("bar")}), - }, - cty.ListVal([]cty.Value{ - cty.TupleVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("foo")}), - cty.TupleVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("bar")}), - cty.TupleVal([]cty.Value{cty.StringVal("stg"), cty.StringVal("foo")}), - cty.TupleVal([]cty.Value{cty.StringVal("stg"), cty.StringVal("bar")}), - cty.TupleVal([]cty.Value{cty.StringVal("prd"), cty.StringVal("foo")}), - cty.TupleVal([]cty.Value{cty.StringVal("prd"), cty.StringVal("bar")}), - }), - "", - }, - { - []cty.Value{ - cty.ListVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("stg"), cty.StringVal("prd")}), - cty.TupleVal([]cty.Value{cty.StringVal("foo"), cty.True}), - }, - cty.ListVal([]cty.Value{ - cty.TupleVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("foo")}), - cty.TupleVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("true")}), - cty.TupleVal([]cty.Value{cty.StringVal("stg"), cty.StringVal("foo")}), - cty.TupleVal([]cty.Value{cty.StringVal("stg"), cty.StringVal("true")}), - cty.TupleVal([]cty.Value{cty.StringVal("prd"), cty.StringVal("foo")}), - cty.TupleVal([]cty.Value{cty.StringVal("prd"), cty.StringVal("true")}), - }), - "", - }, - { - []cty.Value{ - cty.ListVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("stg"), cty.StringVal("prd")}), - cty.EmptyTupleVal, - }, - cty.ListValEmpty(cty.Tuple([]cty.Type{cty.String, cty.DynamicPseudoType})), - "", - }, - { - []cty.Value{ - cty.ListVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("stg"), cty.StringVal("prd")}), - cty.TupleVal([]cty.Value{cty.StringVal("foo"), cty.EmptyObjectVal}), - }, - cty.DynamicVal, - "all elements must be of the same type", // this is an ArgError for the second argument - }, - { - []cty.Value{ - cty.SetVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("stg"), cty.StringVal("prd")}), - cty.SetVal([]cty.Value{cty.StringVal("foo"), cty.StringVal("bar")}), - cty.SetVal([]cty.Value{cty.StringVal("baz")}), - }, - cty.SetVal([]cty.Value{ - cty.TupleVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("foo"), cty.StringVal("baz")}), - cty.TupleVal([]cty.Value{cty.StringVal("stg"), cty.StringVal("foo"), cty.StringVal("baz")}), - cty.TupleVal([]cty.Value{cty.StringVal("prd"), cty.StringVal("foo"), cty.StringVal("baz")}), - cty.TupleVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("bar"), cty.StringVal("baz")}), - cty.TupleVal([]cty.Value{cty.StringVal("stg"), cty.StringVal("bar"), cty.StringVal("baz")}), - cty.TupleVal([]cty.Value{cty.StringVal("prd"), cty.StringVal("bar"), cty.StringVal("baz")}), - }), - "", - }, - { - []cty.Value{ - cty.SetVal([]cty.Value{cty.StringVal("dev"), cty.StringVal("stg"), cty.StringVal("prd")}), - cty.SetValEmpty(cty.String), - }, - cty.SetValEmpty(cty.Tuple([]cty.Type{cty.String, cty.String})), - "", - }, - { - []cty.Value{ - cty.SetVal([]cty.Value{cty.StringVal("foo")}), - cty.SetVal([]cty.Value{cty.StringVal("bar")}), - }, - cty.SetVal([]cty.Value{ - cty.TupleVal([]cty.Value{cty.StringVal("foo"), cty.StringVal("bar")}), - }), - "", - }, - { - []cty.Value{ - cty.TupleVal([]cty.Value{cty.StringVal("foo")}), - cty.TupleVal([]cty.Value{cty.StringVal("bar")}), - }, - cty.ListVal([]cty.Value{ - cty.TupleVal([]cty.Value{cty.StringVal("foo"), cty.StringVal("bar")}), - }), - "", - }, - { - []cty.Value{ - cty.SetVal([]cty.Value{cty.StringVal("foo")}), - cty.SetVal([]cty.Value{cty.DynamicVal}), - }, - cty.SetVal([]cty.Value{ - cty.TupleVal([]cty.Value{cty.StringVal("foo"), cty.DynamicVal}), - }), - "", - }, - { - []cty.Value{ - cty.SetVal([]cty.Value{cty.StringVal("foo")}), 
- cty.SetVal([]cty.Value{cty.True, cty.DynamicVal}), - }, - cty.SetVal([]cty.Value{ - cty.TupleVal([]cty.Value{cty.StringVal("foo"), cty.True}), - cty.TupleVal([]cty.Value{cty.StringVal("foo"), cty.UnknownVal(cty.Bool)}), - }), - "", - }, - { - []cty.Value{ - cty.UnknownVal(cty.Set(cty.String)), - cty.SetVal([]cty.Value{cty.True, cty.False}), - }, - cty.UnknownVal(cty.Set(cty.Tuple([]cty.Type{cty.String, cty.Bool}))), - "", - }, - } - - for _, test := range tests { - t.Run(fmt.Sprintf("setproduct(%#v)", test.Sets), func(t *testing.T) { - got, err := SetProduct(test.Sets...) - - if test.Err != "" { - if err == nil { - t.Fatal("succeeded; want error") - } - if got, want := err.Error(), test.Err; got != want { - t.Fatalf("wrong error\ngot: %s\nwant: %s", got, want) - } - return - } else if err != nil { - t.Fatalf("unexpected error: %s", err) - } - - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) - } - }) - } - -} - -func TestSlice(t *testing.T) { - listOfStrings := cty.ListVal([]cty.Value{ - cty.StringVal("a"), - cty.StringVal("b"), - }) - listOfInts := cty.ListVal([]cty.Value{ - cty.NumberIntVal(1), - cty.NumberIntVal(2), - }) - listWithUnknowns := cty.ListVal([]cty.Value{ - cty.StringVal("a"), - cty.UnknownVal(cty.String), - }) - tuple := cty.TupleVal([]cty.Value{ - cty.StringVal("a"), - cty.NumberIntVal(1), - cty.UnknownVal(cty.List(cty.String)), - }) - tests := []struct { - List cty.Value - StartIndex cty.Value - EndIndex cty.Value - Want cty.Value - Err bool - }{ - { // normal usage - listOfStrings, - cty.NumberIntVal(1), - cty.NumberIntVal(2), - cty.ListVal([]cty.Value{ - cty.StringVal("b"), - }), - false, - }, - { // slice only an unknown value - listWithUnknowns, - cty.NumberIntVal(1), - cty.NumberIntVal(2), - cty.ListVal([]cty.Value{cty.UnknownVal(cty.String)}), - false, - }, - { // slice multiple values, which contain an unknown - listWithUnknowns, - cty.NumberIntVal(0), - cty.NumberIntVal(2), - listWithUnknowns, - false, - }, - { // an unknown list should be slicable, returning an unknown list - cty.UnknownVal(cty.List(cty.String)), - cty.NumberIntVal(0), - cty.NumberIntVal(2), - cty.UnknownVal(cty.List(cty.String)), - false, - }, - { // normal usage - listOfInts, - cty.NumberIntVal(1), - cty.NumberIntVal(2), - cty.ListVal([]cty.Value{ - cty.NumberIntVal(2), - }), - false, - }, - { // empty result - listOfStrings, - cty.NumberIntVal(1), - cty.NumberIntVal(1), - cty.ListValEmpty(cty.String), - false, - }, - { // index out of bounds - listOfStrings, - cty.NumberIntVal(1), - cty.NumberIntVal(4), - cty.NilVal, - true, - }, - { // StartIndex index > EndIndex - listOfStrings, - cty.NumberIntVal(2), - cty.NumberIntVal(1), - cty.NilVal, - true, - }, - { // negative StartIndex - listOfStrings, - cty.NumberIntVal(-1), - cty.NumberIntVal(0), - cty.NilVal, - true, - }, - { // sets are not slice-able - cty.SetVal([]cty.Value{ - cty.StringVal("x"), - cty.StringVal("y"), - }), - cty.NumberIntVal(0), - cty.NumberIntVal(0), - cty.NilVal, - true, - }, - { // tuple slice - tuple, - cty.NumberIntVal(1), - cty.NumberIntVal(3), - cty.TupleVal([]cty.Value{ - cty.NumberIntVal(1), - cty.UnknownVal(cty.List(cty.String)), - }), - false, - }, - { // unknown tuple slice - cty.UnknownVal(tuple.Type()), - cty.NumberIntVal(1), - cty.NumberIntVal(3), - cty.UnknownVal(cty.Tuple([]cty.Type{ - cty.Number, - cty.List(cty.String), - })), - false, - }, - { // empty list slice - listOfStrings, - cty.NumberIntVal(2), - cty.NumberIntVal(2), - cty.ListValEmpty(cty.String), - 
false, - }, - { // empty tuple slice - tuple, - cty.NumberIntVal(3), - cty.NumberIntVal(3), - cty.EmptyTupleVal, - false, - }, - { // list with unknown start offset - listOfStrings, - cty.UnknownVal(cty.Number), - cty.NumberIntVal(2), - cty.UnknownVal(cty.List(cty.String)), - false, - }, - { // list with unknown start offset but end out of bounds - listOfStrings, - cty.UnknownVal(cty.Number), - cty.NumberIntVal(200), - cty.UnknownVal(cty.List(cty.String)), - true, - }, - { // list with unknown start offset but end < 0 - listOfStrings, - cty.UnknownVal(cty.Number), - cty.NumberIntVal(-4), - cty.UnknownVal(cty.List(cty.String)), - true, - }, - { // list with unknown end offset - listOfStrings, - cty.UnknownVal(cty.Number), - cty.NumberIntVal(0), - cty.UnknownVal(cty.List(cty.String)), - false, - }, - { // list with unknown end offset but start out of bounds - listOfStrings, - cty.UnknownVal(cty.Number), - cty.NumberIntVal(200), - cty.UnknownVal(cty.List(cty.String)), - true, - }, - { // list with unknown end offset but start < 0 - listOfStrings, - cty.UnknownVal(cty.Number), - cty.NumberIntVal(-3), - cty.UnknownVal(cty.List(cty.String)), - true, - }, - { // tuple slice with unknown start offset - tuple, - cty.UnknownVal(cty.Number), - cty.NumberIntVal(3), - cty.DynamicVal, - false, - }, - { // tuple slice with unknown start offset but end out of bounds - tuple, - cty.UnknownVal(cty.Number), - cty.NumberIntVal(200), - cty.DynamicVal, - true, - }, - { // tuple slice with unknown start offset but end < 0 - tuple, - cty.UnknownVal(cty.Number), - cty.NumberIntVal(-20), - cty.DynamicVal, - true, - }, - { // tuple slice with unknown end offset - tuple, - cty.NumberIntVal(0), - cty.UnknownVal(cty.Number), - cty.DynamicVal, - false, - }, - { // tuple slice with unknown end offset but start < 0 - tuple, - cty.NumberIntVal(-2), - cty.UnknownVal(cty.Number), - cty.DynamicVal, - true, - }, - { // tuple slice with unknown end offset but start out of bounds - tuple, - cty.NumberIntVal(200), - cty.UnknownVal(cty.Number), - cty.DynamicVal, - true, - }, - } - - for i, test := range tests { - t.Run(fmt.Sprintf("%d-slice(%#v, %#v, %#v)", i, test.List, test.StartIndex, test.EndIndex), func(t *testing.T) { - got, err := Slice(test.List, test.StartIndex, test.EndIndex) - - if test.Err { - if err == nil { - t.Fatal("succeeded; want error") - } - return - } else if err != nil { - t.Fatalf("unexpected error: %s", err) - } - - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) - } - }) - } -} - func TestTranspose(t *testing.T) { tests := []struct { Values cty.Value @@ -2903,8 +1079,8 @@ func TestTranspose(t *testing.T) { cty.MapVal(map[string]cty.Value{ "key1": cty.ListValEmpty(cty.String), }), - cty.NilVal, - true, + cty.MapValEmpty(cty.List(cty.String)), + false, }, { // bad map - value not a list cty.MapVal(map[string]cty.Value{ @@ -2934,286 +1110,3 @@ func TestTranspose(t *testing.T) { }) } } - -func TestValues(t *testing.T) { - tests := []struct { - Values cty.Value - Want cty.Value - Err bool - }{ - { - cty.MapVal(map[string]cty.Value{ - "hello": cty.StringVal("world"), - "what's": cty.StringVal("up"), - }), - cty.ListVal([]cty.Value{ - cty.StringVal("world"), - cty.StringVal("up"), - }), - false, - }, - { - cty.ObjectVal(map[string]cty.Value{ - "what's": cty.StringVal("up"), - "hello": cty.StringVal("world"), - }), - cty.TupleVal([]cty.Value{ - cty.StringVal("world"), - cty.StringVal("up"), - }), - false, - }, - { // empty object - cty.EmptyObjectVal, - 
cty.EmptyTupleVal, - false, - }, - { - cty.UnknownVal(cty.Object(map[string]cty.Type{ - "what's": cty.String, - "hello": cty.Bool, - })), - cty.UnknownVal(cty.Tuple([]cty.Type{ - cty.Bool, - cty.String, - })), - false, - }, - { // note ordering: keys are sorted first - cty.MapVal(map[string]cty.Value{ - "hello": cty.NumberIntVal(1), - "goodbye": cty.NumberIntVal(42), - }), - cty.ListVal([]cty.Value{ - cty.NumberIntVal(42), - cty.NumberIntVal(1), - }), - false, - }, - { // map of lists - cty.MapVal(map[string]cty.Value{ - "hello": cty.ListVal([]cty.Value{cty.StringVal("world")}), - "what's": cty.ListVal([]cty.Value{cty.StringVal("up")}), - }), - cty.ListVal([]cty.Value{ - cty.ListVal([]cty.Value{cty.StringVal("world")}), - cty.ListVal([]cty.Value{cty.StringVal("up")}), - }), - false, - }, - { // map with unknowns - cty.MapVal(map[string]cty.Value{ - "hello": cty.ListVal([]cty.Value{cty.StringVal("world")}), - "what's": cty.UnknownVal(cty.List(cty.String)), - }), - cty.ListVal([]cty.Value{ - cty.ListVal([]cty.Value{cty.StringVal("world")}), - cty.UnknownVal(cty.List(cty.String)), - }), - false, - }, - { // empty m - cty.MapValEmpty(cty.DynamicPseudoType), - cty.ListValEmpty(cty.DynamicPseudoType), - false, - }, - { // unknown m - cty.UnknownVal(cty.Map(cty.String)), - cty.UnknownVal(cty.List(cty.String)), - false, - }, - } - - for _, test := range tests { - t.Run(fmt.Sprintf("values(%#v)", test.Values), func(t *testing.T) { - got, err := Values(test.Values) - - if test.Err { - if err == nil { - t.Fatal("succeeded; want error") - } - return - } else if err != nil { - t.Fatalf("unexpected error: %s", err) - } - - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) - } - }) - } -} - -func TestZipmap(t *testing.T) { - list1 := cty.ListVal([]cty.Value{ - cty.StringVal("hello"), - cty.StringVal("world"), - }) - list2 := cty.ListVal([]cty.Value{ - cty.StringVal("bar"), - cty.StringVal("baz"), - }) - list3 := cty.ListVal([]cty.Value{ - cty.StringVal("hello"), - cty.StringVal("there"), - cty.StringVal("world"), - }) - list4 := cty.ListVal([]cty.Value{ - cty.NumberIntVal(1), - cty.NumberIntVal(42), - }) - list5 := cty.ListVal([]cty.Value{ - cty.ListVal([]cty.Value{ - cty.StringVal("bar"), - }), - cty.ListVal([]cty.Value{ - cty.StringVal("baz"), - }), - }) - tests := []struct { - Keys cty.Value - Values cty.Value - Want cty.Value - Err bool - }{ - { - list1, - list2, - cty.MapVal(map[string]cty.Value{ - "hello": cty.StringVal("bar"), - "world": cty.StringVal("baz"), - }), - false, - }, - { - list1, - list4, - cty.MapVal(map[string]cty.Value{ - "hello": cty.NumberIntVal(1), - "world": cty.NumberIntVal(42), - }), - false, - }, - { // length mismatch - list1, - list3, - cty.NilVal, - true, - }, - { // map of lists - list1, - list5, - cty.MapVal(map[string]cty.Value{ - "hello": cty.ListVal([]cty.Value{cty.StringVal("bar")}), - "world": cty.ListVal([]cty.Value{cty.StringVal("baz")}), - }), - false, - }, - { // tuple values produce object - cty.ListVal([]cty.Value{ - cty.StringVal("hello"), - cty.StringVal("world"), - }), - cty.TupleVal([]cty.Value{ - cty.StringVal("bar"), - cty.UnknownVal(cty.Bool), - }), - cty.ObjectVal(map[string]cty.Value{ - "hello": cty.StringVal("bar"), - "world": cty.UnknownVal(cty.Bool), - }), - false, - }, - { // empty tuple produces empty object - cty.ListValEmpty(cty.String), - cty.EmptyTupleVal, - cty.EmptyObjectVal, - false, - }, - { // tuple with any unknown keys produces DynamicVal - cty.ListVal([]cty.Value{ - cty.StringVal("hello"), 
- cty.UnknownVal(cty.String), - }), - cty.TupleVal([]cty.Value{ - cty.StringVal("bar"), - cty.True, - }), - cty.DynamicVal, - false, - }, - { // tuple with all keys unknown produces DynamicVal - cty.UnknownVal(cty.List(cty.String)), - cty.TupleVal([]cty.Value{ - cty.StringVal("bar"), - cty.True, - }), - cty.DynamicVal, - false, - }, - { // list with all keys unknown produces correctly-typed unknown map - cty.UnknownVal(cty.List(cty.String)), - cty.ListVal([]cty.Value{ - cty.StringVal("bar"), - cty.StringVal("baz"), - }), - cty.UnknownVal(cty.Map(cty.String)), - false, - }, - { // unknown tuple as values produces correctly-typed unknown object - cty.ListVal([]cty.Value{ - cty.StringVal("hello"), - cty.StringVal("world"), - }), - cty.UnknownVal(cty.Tuple([]cty.Type{ - cty.String, - cty.Bool, - })), - cty.UnknownVal(cty.Object(map[string]cty.Type{ - "hello": cty.String, - "world": cty.Bool, - })), - false, - }, - { // unknown list as values produces correctly-typed unknown map - cty.ListVal([]cty.Value{ - cty.StringVal("hello"), - cty.StringVal("world"), - }), - cty.UnknownVal(cty.List(cty.String)), - cty.UnknownVal(cty.Map(cty.String)), - false, - }, - { // empty input returns an empty map - cty.ListValEmpty(cty.String), - cty.ListValEmpty(cty.String), - cty.MapValEmpty(cty.String), - false, - }, - { // keys cannot be a list of lists - list5, - list1, - cty.NilVal, - true, - }, - } - - for _, test := range tests { - t.Run(fmt.Sprintf("zipmap(%#v, %#v)", test.Keys, test.Values), func(t *testing.T) { - got, err := Zipmap(test.Keys, test.Values) - - if test.Err { - if err == nil { - t.Fatal("succeeded; want error") - } - return - } else if err != nil { - t.Fatalf("unexpected error: %s", err) - } - - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\n\nkeys: %#v\nvalues: %#v\ngot: %#v\nwant: %#v", test.Keys, test.Values, got, test.Want) - } - }) - } -} diff --git a/lang/funcs/filesystem.go b/lang/funcs/filesystem.go index 4b899cbc4..eb4921de1 100644 --- a/lang/funcs/filesystem.go +++ b/lang/funcs/filesystem.go @@ -100,6 +100,20 @@ func MakeTemplateFileFunc(baseDir string, funcsCb func() map[string]function.Fun Variables: varsVal.AsValueMap(), } + // We require all of the variables to be valid HCL identifiers, because + // otherwise there would be no way to refer to them in the template + // anyway. Rejecting this here gives better feedback to the user + // than a syntax error somewhere in the template itself. + for n := range ctx.Variables { + if !hclsyntax.ValidIdentifier(n) { + // This error message intentionally doesn't describe _all_ of + // the different permutations that are technically valid as an + // HCL identifier, but rather focuses on what we might + // consider to be an "idiomatic" variable name. + return cty.DynamicVal, function.NewArgErrorf(1, "invalid template variable name %q: must start with a letter, followed by zero or more letters, digits, and underscores", n) + } + } + // We'll pre-check references in the template here so we can give a // more specialized error message than HCL would by default, so it's // clearer that this problem is coming from a templatefile call. 
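The guard added above relies on `hclsyntax.ValidIdentifier` from `github.com/hashicorp/hcl/v2/hclsyntax`, the same predicate the hunk calls. As a minimal standalone sketch of how that check classifies names (the candidate names here are hypothetical examples, not taken from the patch):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2/hclsyntax"
)

func main() {
	// "name!" mirrors the invalid case exercised by the new
	// filesystem_test.go test below; the other two are valid identifiers.
	for _, n := range []string{"name", "instance_count", "name!"} {
		if hclsyntax.ValidIdentifier(n) {
			fmt.Printf("%q: valid template variable name\n", n)
		} else {
			fmt.Printf("%q: invalid; must start with a letter, followed by letters, digits, and underscores\n", n)
		}
	}
}
```

Because the names are validated up front, a bad key in the `vars` map surfaces as an argument error on the `templatefile` call itself rather than as a confusing failure somewhere inside the template, which is exactly what the new test case in the following hunk asserts.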
diff --git a/lang/funcs/filesystem_test.go b/lang/funcs/filesystem_test.go index 73428e896..15839f228 100644 --- a/lang/funcs/filesystem_test.go +++ b/lang/funcs/filesystem_test.go @@ -8,6 +8,7 @@ import ( homedir "github.com/mitchellh/go-homedir" "github.com/zclconf/go-cty/cty" "github.com/zclconf/go-cty/cty/function" + "github.com/zclconf/go-cty/cty/function/stdlib" ) func TestFile(t *testing.T) { @@ -58,25 +59,25 @@ func TestTemplateFile(t *testing.T) { Path cty.Value Vars cty.Value Want cty.Value - Err bool + Err string }{ { cty.StringVal("testdata/hello.txt"), cty.EmptyObjectVal, cty.StringVal("Hello World"), - false, + ``, }, { cty.StringVal("testdata/icon.png"), cty.EmptyObjectVal, cty.NilVal, - true, // Not valid UTF-8 + `contents of testdata/icon.png are not valid UTF-8; use the filebase64 function to obtain the Base64 encoded contents or the other file functions (e.g. filemd5, filesha256) to obtain file hashing results instead`, }, { cty.StringVal("testdata/missing"), cty.EmptyObjectVal, cty.NilVal, - true, // no file exists + `no file exists at testdata/missing`, }, { cty.StringVal("testdata/hello.tmpl"), @@ -84,7 +85,15 @@ func TestTemplateFile(t *testing.T) { "name": cty.StringVal("Jodie"), }), cty.StringVal("Hello, Jodie!"), - false, + ``, + }, + { + cty.StringVal("testdata/hello.tmpl"), + cty.MapVal(map[string]cty.Value{ + "name!": cty.StringVal("Jodie"), + }), + cty.NilVal, + `invalid template variable name "name!": must start with a letter, followed by zero or more letters, digits, and underscores`, }, { cty.StringVal("testdata/hello.tmpl"), @@ -92,13 +101,13 @@ func TestTemplateFile(t *testing.T) { "name": cty.StringVal("Jimbo"), }), cty.StringVal("Hello, Jimbo!"), - false, + ``, }, { cty.StringVal("testdata/hello.tmpl"), cty.EmptyObjectVal, cty.NilVal, - true, // "name" is missing from the vars map + `vars map does not contain key "name", referenced at testdata/hello.tmpl:1,10-14`, }, { cty.StringVal("testdata/func.tmpl"), @@ -110,13 +119,13 @@ func TestTemplateFile(t *testing.T) { }), }), cty.StringVal("The items are a, b, c"), - false, + ``, }, { cty.StringVal("testdata/recursive.tmpl"), cty.MapValEmpty(cty.String), cty.NilVal, - true, // recursive templatefile call not allowed + `testdata/recursive.tmpl:1,3-16: Error in function call; Call to function "templatefile" failed: cannot recursively call templatefile from inside templatefile call.`, }, { cty.StringVal("testdata/list.tmpl"), @@ -128,7 +137,7 @@ func TestTemplateFile(t *testing.T) { }), }), cty.StringVal("- a\n- b\n- c\n"), - false, + ``, }, { cty.StringVal("testdata/list.tmpl"), @@ -136,7 +145,7 @@ func TestTemplateFile(t *testing.T) { "list": cty.True, }), cty.NilVal, - true, // iteration over non-iterable value + `testdata/list.tmpl:1,13-17: Iteration over non-iterable value; A value of type bool cannot be used as the collection in a 'for' expression.`, }, { cty.StringVal("testdata/bare.tmpl"), @@ -144,13 +153,13 @@ func TestTemplateFile(t *testing.T) { "val": cty.True, }), cty.True, // since this template contains only an interpolation, its true value shines through - false, + ``, }, } templateFileFn := MakeTemplateFileFunc(".", func() map[string]function.Function { return map[string]function.Function{ - "join": JoinFunc, + "join": stdlib.JoinFunc, "templatefile": MakeFileFunc(".", false), // just a placeholder, since templatefile itself overrides this } }) @@ -165,10 +174,13 @@ func TestTemplateFile(t *testing.T) { } } - if test.Err { + if test.Err != "" { if err == nil { t.Fatal("succeeded; want error") 
} + if got, want := err.Error(), test.Err; got != want { + t.Errorf("wrong error\ngot: %s\nwant: %s", got, want) + } return } else if err != nil { t.Fatalf("unexpected error: %s", err) diff --git a/lang/funcs/number.go b/lang/funcs/number.go index c813f47bf..43effec12 100644 --- a/lang/funcs/number.go +++ b/lang/funcs/number.go @@ -9,44 +9,6 @@ import ( "github.com/zclconf/go-cty/cty/gocty" ) -// CeilFunc contructs a function that returns the closest whole number greater -// than or equal to the given value. -var CeilFunc = function.New(&function.Spec{ - Params: []function.Parameter{ - { - Name: "num", - Type: cty.Number, - }, - }, - Type: function.StaticReturnType(cty.Number), - Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { - var val float64 - if err := gocty.FromCtyValue(args[0], &val); err != nil { - return cty.UnknownVal(cty.String), err - } - return cty.NumberIntVal(int64(math.Ceil(val))), nil - }, -}) - -// FloorFunc contructs a function that returns the closest whole number lesser -// than or equal to the given value. -var FloorFunc = function.New(&function.Spec{ - Params: []function.Parameter{ - { - Name: "num", - Type: cty.Number, - }, - }, - Type: function.StaticReturnType(cty.Number), - Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { - var val float64 - if err := gocty.FromCtyValue(args[0], &val); err != nil { - return cty.UnknownVal(cty.String), err - } - return cty.NumberIntVal(int64(math.Floor(val))), nil - }, -}) - // LogFunc contructs a function that returns the logarithm of a given number in a given base. var LogFunc = function.New(&function.Spec{ Params: []function.Parameter{ @@ -185,16 +147,6 @@ var ParseIntFunc = function.New(&function.Spec{ }, }) -// Ceil returns the closest whole number greater than or equal to the given value. -func Ceil(num cty.Value) (cty.Value, error) { - return CeilFunc.Call([]cty.Value{num}) -} - -// Floor returns the closest whole number lesser than or equal to the given value. -func Floor(num cty.Value) (cty.Value, error) { - return FloorFunc.Call([]cty.Value{num}) -} - // Log returns returns the logarithm of a given number in a given base. 
func Log(num, base cty.Value) (cty.Value, error) { return LogFunc.Call([]cty.Value{num, base}) diff --git a/lang/funcs/number_test.go b/lang/funcs/number_test.go index 97ec70a75..b467a429f 100644 --- a/lang/funcs/number_test.go +++ b/lang/funcs/number_test.go @@ -7,82 +7,6 @@ import ( "github.com/zclconf/go-cty/cty" ) -func TestCeil(t *testing.T) { - tests := []struct { - Num cty.Value - Want cty.Value - Err bool - }{ - { - cty.NumberFloatVal(-1.8), - cty.NumberFloatVal(-1), - false, - }, - { - cty.NumberFloatVal(1.2), - cty.NumberFloatVal(2), - false, - }, - } - - for _, test := range tests { - t.Run(fmt.Sprintf("ceil(%#v)", test.Num), func(t *testing.T) { - got, err := Ceil(test.Num) - - if test.Err { - if err == nil { - t.Fatal("succeeded; want error") - } - return - } else if err != nil { - t.Fatalf("unexpected error: %s", err) - } - - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) - } - }) - } -} - -func TestFloor(t *testing.T) { - tests := []struct { - Num cty.Value - Want cty.Value - Err bool - }{ - { - cty.NumberFloatVal(-1.8), - cty.NumberFloatVal(-2), - false, - }, - { - cty.NumberFloatVal(1.2), - cty.NumberFloatVal(1), - false, - }, - } - - for _, test := range tests { - t.Run(fmt.Sprintf("floor(%#v)", test.Num), func(t *testing.T) { - got, err := Floor(test.Num) - - if test.Err { - if err == nil { - t.Fatal("succeeded; want error") - } - return - } else if err != nil { - t.Fatalf("unexpected error: %s", err) - } - - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) - } - }) - } -} - func TestLog(t *testing.T) { tests := []struct { Num cty.Value diff --git a/lang/funcs/string.go b/lang/funcs/string.go index 1505600a6..ab6da7277 100644 --- a/lang/funcs/string.go +++ b/lang/funcs/string.go @@ -1,169 +1,14 @@ package funcs import ( - "fmt" "regexp" - "sort" "strings" "github.com/zclconf/go-cty/cty" "github.com/zclconf/go-cty/cty/function" - "github.com/zclconf/go-cty/cty/gocty" ) -var JoinFunc = function.New(&function.Spec{ - Params: []function.Parameter{ - { - Name: "separator", - Type: cty.String, - }, - }, - VarParam: &function.Parameter{ - Name: "lists", - Type: cty.List(cty.String), - }, - Type: function.StaticReturnType(cty.String), - Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) { - sep := args[0].AsString() - listVals := args[1:] - if len(listVals) < 1 { - return cty.UnknownVal(cty.String), fmt.Errorf("at least one list is required") - } - - l := 0 - for _, list := range listVals { - if !list.IsWhollyKnown() { - return cty.UnknownVal(cty.String), nil - } - l += list.LengthInt() - } - - items := make([]string, 0, l) - for ai, list := range listVals { - ei := 0 - for it := list.ElementIterator(); it.Next(); { - _, val := it.Element() - if val.IsNull() { - if len(listVals) > 1 { - return cty.UnknownVal(cty.String), function.NewArgErrorf(ai+1, "element %d of list %d is null; cannot concatenate null values", ei, ai+1) - } - return cty.UnknownVal(cty.String), function.NewArgErrorf(ai+1, "element %d is null; cannot concatenate null values", ei) - } - items = append(items, val.AsString()) - ei++ - } - } - - return cty.StringVal(strings.Join(items, sep)), nil - }, -}) - -var SortFunc = function.New(&function.Spec{ - Params: []function.Parameter{ - { - Name: "list", - Type: cty.List(cty.String), - }, - }, - Type: function.StaticReturnType(cty.List(cty.String)), - Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) { - listVal := args[0] - - if 
!listVal.IsWhollyKnown() { - // If some of the element values aren't known yet then we - // can't yet predict the order of the result. - return cty.UnknownVal(retType), nil - } - if listVal.LengthInt() == 0 { // Easy path - return listVal, nil - } - - list := make([]string, 0, listVal.LengthInt()) - for it := listVal.ElementIterator(); it.Next(); { - iv, v := it.Element() - if v.IsNull() { - return cty.UnknownVal(retType), fmt.Errorf("given list element %s is null; a null string cannot be sorted", iv.AsBigFloat().String()) - } - list = append(list, v.AsString()) - } - - sort.Strings(list) - retVals := make([]cty.Value, len(list)) - for i, s := range list { - retVals[i] = cty.StringVal(s) - } - return cty.ListVal(retVals), nil - }, -}) - -var SplitFunc = function.New(&function.Spec{ - Params: []function.Parameter{ - { - Name: "separator", - Type: cty.String, - }, - { - Name: "str", - Type: cty.String, - }, - }, - Type: function.StaticReturnType(cty.List(cty.String)), - Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) { - sep := args[0].AsString() - str := args[1].AsString() - elems := strings.Split(str, sep) - elemVals := make([]cty.Value, len(elems)) - for i, s := range elems { - elemVals[i] = cty.StringVal(s) - } - if len(elemVals) == 0 { - return cty.ListValEmpty(cty.String), nil - } - return cty.ListVal(elemVals), nil - }, -}) - -// ChompFunc constructions a function that removes newline characters at the end of a string. -var ChompFunc = function.New(&function.Spec{ - Params: []function.Parameter{ - { - Name: "str", - Type: cty.String, - }, - }, - Type: function.StaticReturnType(cty.String), - Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { - newlines := regexp.MustCompile(`(?:\r\n?|\n)*\z`) - return cty.StringVal(newlines.ReplaceAllString(args[0].AsString(), "")), nil - }, -}) - -// IndentFunc constructions a function that adds a given number of spaces to the -// beginnings of all but the first line in a given multi-line string. -var IndentFunc = function.New(&function.Spec{ - Params: []function.Parameter{ - { - Name: "spaces", - Type: cty.Number, - }, - { - Name: "str", - Type: cty.String, - }, - }, - Type: function.StaticReturnType(cty.String), - Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { - var spaces int - if err := gocty.FromCtyValue(args[0], &spaces); err != nil { - return cty.UnknownVal(cty.String), err - } - data := args[1].AsString() - pad := strings.Repeat(" ", spaces) - return cty.StringVal(strings.Replace(data, "\n", "\n"+pad, -1)), nil - }, -}) - -// ReplaceFunc constructions a function that searches a given string for another +// ReplaceFunc constructs a function that searches a given string for another // given substring, and replaces each occurence with a given replacement string. var ReplaceFunc = function.New(&function.Spec{ Params: []function.Parameter{ @@ -201,80 +46,8 @@ var ReplaceFunc = function.New(&function.Spec{ }, }) -// TitleFunc constructions a function that converts the first letter of each word -// in the given string to uppercase. -var TitleFunc = function.New(&function.Spec{ - Params: []function.Parameter{ - { - Name: "str", - Type: cty.String, - }, - }, - Type: function.StaticReturnType(cty.String), - Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { - return cty.StringVal(strings.Title(args[0].AsString())), nil - }, -}) - -// TrimSpaceFunc constructions a function that removes any space characters from -// the start and end of the given string. 
-var TrimSpaceFunc = function.New(&function.Spec{ - Params: []function.Parameter{ - { - Name: "str", - Type: cty.String, - }, - }, - Type: function.StaticReturnType(cty.String), - Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { - return cty.StringVal(strings.TrimSpace(args[0].AsString())), nil - }, -}) - -// Join concatenates together the string elements of one or more lists with a -// given separator. -func Join(sep cty.Value, lists ...cty.Value) (cty.Value, error) { - args := make([]cty.Value, len(lists)+1) - args[0] = sep - copy(args[1:], lists) - return JoinFunc.Call(args) -} - -// Sort re-orders the elements of a given list of strings so that they are -// in ascending lexicographical order. -func Sort(list cty.Value) (cty.Value, error) { - return SortFunc.Call([]cty.Value{list}) -} - -// Split divides a given string by a given separator, returning a list of -// strings containing the characters between the separator sequences. -func Split(sep, str cty.Value) (cty.Value, error) { - return SplitFunc.Call([]cty.Value{sep, str}) -} - -// Chomp removes newline characters at the end of a string. -func Chomp(str cty.Value) (cty.Value, error) { - return ChompFunc.Call([]cty.Value{str}) -} - -// Indent adds a given number of spaces to the beginnings of all but the first -// line in a given multi-line string. -func Indent(spaces, str cty.Value) (cty.Value, error) { - return IndentFunc.Call([]cty.Value{spaces, str}) -} - // Replace searches a given string for another given substring, // and replaces all occurences with a given replacement string. func Replace(str, substr, replace cty.Value) (cty.Value, error) { return ReplaceFunc.Call([]cty.Value{str, substr, replace}) } - -// Title converts the first letter of each word in the given string to uppercase. -func Title(str cty.Value) (cty.Value, error) { - return TitleFunc.Call([]cty.Value{str}) -} - -// TrimSpace removes any space characters from the start and end of the given string. 
-func TrimSpace(str cty.Value) (cty.Value, error) { - return TrimSpaceFunc.Call([]cty.Value{str}) -} diff --git a/lang/funcs/string_test.go b/lang/funcs/string_test.go index cddbc3744..7b44a2762 100644 --- a/lang/funcs/string_test.go +++ b/lang/funcs/string_test.go @@ -7,360 +7,6 @@ import ( "github.com/zclconf/go-cty/cty" ) -func TestJoin(t *testing.T) { - tests := []struct { - Sep cty.Value - Lists []cty.Value - Want cty.Value - }{ - { - cty.StringVal(" "), - []cty.Value{ - cty.ListVal([]cty.Value{ - cty.StringVal("Hello"), - cty.StringVal("World"), - }), - }, - cty.StringVal("Hello World"), - }, - { - cty.StringVal(" "), - []cty.Value{ - cty.ListVal([]cty.Value{ - cty.StringVal("Hello"), - cty.StringVal("World"), - }), - cty.ListVal([]cty.Value{ - cty.StringVal("Foo"), - cty.StringVal("Bar"), - }), - }, - cty.StringVal("Hello World Foo Bar"), - }, - { - cty.StringVal(" "), - []cty.Value{ - cty.ListValEmpty(cty.String), - }, - cty.StringVal(""), - }, - { - cty.StringVal(" "), - []cty.Value{ - cty.ListValEmpty(cty.String), - cty.ListValEmpty(cty.String), - cty.ListValEmpty(cty.String), - }, - cty.StringVal(""), - }, - { - cty.StringVal(" "), - []cty.Value{ - cty.ListValEmpty(cty.String), - cty.ListVal([]cty.Value{ - cty.StringVal("Foo"), - cty.StringVal("Bar"), - }), - }, - cty.StringVal("Foo Bar"), - }, - { - cty.UnknownVal(cty.String), - []cty.Value{ - cty.ListVal([]cty.Value{ - cty.StringVal("Hello"), - cty.StringVal("World"), - }), - }, - cty.UnknownVal(cty.String), - }, - { - cty.StringVal(" "), - []cty.Value{ - cty.ListVal([]cty.Value{ - cty.StringVal("Hello"), - cty.UnknownVal(cty.String), - }), - }, - cty.UnknownVal(cty.String), - }, - { - cty.StringVal(" "), - []cty.Value{ - cty.UnknownVal(cty.List(cty.String)), - }, - cty.UnknownVal(cty.String), - }, - } - - for _, test := range tests { - t.Run(fmt.Sprintf("Join(%#v, %#v...)", test.Sep, test.Lists), func(t *testing.T) { - got, err := Join(test.Sep, test.Lists...) 
- - if err != nil { - t.Fatalf("unexpected error: %s", err) - } - - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) - } - }) - } -} - -func TestSort(t *testing.T) { - tests := []struct { - List cty.Value - Want cty.Value - }{ - { - cty.ListValEmpty(cty.String), - cty.ListValEmpty(cty.String), - }, - { - cty.ListVal([]cty.Value{ - cty.StringVal("banana"), - }), - cty.ListVal([]cty.Value{ - cty.StringVal("banana"), - }), - }, - { - cty.ListVal([]cty.Value{ - cty.StringVal("banana"), - cty.StringVal("apple"), - }), - cty.ListVal([]cty.Value{ - cty.StringVal("apple"), - cty.StringVal("banana"), - }), - }, - { - cty.ListVal([]cty.Value{ - cty.StringVal("8"), - cty.StringVal("9"), - cty.StringVal("10"), - }), - cty.ListVal([]cty.Value{ - cty.StringVal("10"), // lexicographical sort, not numeric sort - cty.StringVal("8"), - cty.StringVal("9"), - }), - }, - { - cty.UnknownVal(cty.List(cty.String)), - cty.UnknownVal(cty.List(cty.String)), - }, - { - cty.ListVal([]cty.Value{ - cty.UnknownVal(cty.String), - }), - cty.UnknownVal(cty.List(cty.String)), - }, - { - cty.ListVal([]cty.Value{ - cty.UnknownVal(cty.String), - cty.StringVal("banana"), - }), - cty.UnknownVal(cty.List(cty.String)), - }, - } - - for _, test := range tests { - t.Run(fmt.Sprintf("Sort(%#v)", test.List), func(t *testing.T) { - got, err := Sort(test.List) - - if err != nil { - t.Fatalf("unexpected error: %s", err) - } - - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) - } - }) - } -} -func TestSplit(t *testing.T) { - tests := []struct { - Sep cty.Value - Str cty.Value - Want cty.Value - }{ - { - cty.StringVal(" "), - cty.StringVal("Hello World"), - cty.ListVal([]cty.Value{ - cty.StringVal("Hello"), - cty.StringVal("World"), - }), - }, - { - cty.StringVal(" "), - cty.StringVal("Hello"), - cty.ListVal([]cty.Value{ - cty.StringVal("Hello"), - }), - }, - { - cty.StringVal(" "), - cty.StringVal(""), - cty.ListVal([]cty.Value{ - cty.StringVal(""), - }), - }, - { - cty.StringVal(""), - cty.StringVal(""), - cty.ListValEmpty(cty.String), - }, - { - cty.UnknownVal(cty.String), - cty.StringVal("Hello World"), - cty.UnknownVal(cty.List(cty.String)), - }, - { - cty.StringVal(" "), - cty.UnknownVal(cty.String), - cty.UnknownVal(cty.List(cty.String)), - }, - { - cty.UnknownVal(cty.String), - cty.UnknownVal(cty.String), - cty.UnknownVal(cty.List(cty.String)), - }, - } - - for _, test := range tests { - t.Run(fmt.Sprintf("Split(%#v, %#v)", test.Sep, test.Str), func(t *testing.T) { - got, err := Split(test.Sep, test.Str) - - if err != nil { - t.Fatalf("unexpected error: %s", err) - } - - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) - } - }) - } -} - -func TestChomp(t *testing.T) { - tests := []struct { - String cty.Value - Want cty.Value - Err bool - }{ - { - cty.StringVal("hello world"), - cty.StringVal("hello world"), - false, - }, - { - cty.StringVal("goodbye\ncruel\nworld"), - cty.StringVal("goodbye\ncruel\nworld"), - false, - }, - { - cty.StringVal("goodbye\r\nwindows\r\nworld"), - cty.StringVal("goodbye\r\nwindows\r\nworld"), - false, - }, - { - cty.StringVal("goodbye\ncruel\nworld\n"), - cty.StringVal("goodbye\ncruel\nworld"), - false, - }, - { - cty.StringVal("goodbye\ncruel\nworld\n\n\n\n"), - cty.StringVal("goodbye\ncruel\nworld"), - false, - }, - { - cty.StringVal("goodbye\r\nwindows\r\nworld\r\n"), - cty.StringVal("goodbye\r\nwindows\r\nworld"), - false, - }, - { - 
cty.StringVal("goodbye\r\nwindows\r\nworld\r\n\r\n\r\n\r\n"), - cty.StringVal("goodbye\r\nwindows\r\nworld"), - false, - }, - } - - for _, test := range tests { - t.Run(fmt.Sprintf("chomp(%#v)", test.String), func(t *testing.T) { - got, err := Chomp(test.String) - - if test.Err { - if err == nil { - t.Fatal("succeeded; want error") - } - return - } else if err != nil { - t.Fatalf("unexpected error: %s", err) - } - - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) - } - }) - } -} - -func TestIndent(t *testing.T) { - tests := []struct { - String cty.Value - Spaces cty.Value - Want cty.Value - Err bool - }{ - { - cty.StringVal(`Fleas: -Adam -Had'em - -E.E. Cummings`), - cty.NumberIntVal(4), - cty.StringVal("Fleas:\n Adam\n Had'em\n \n E.E. Cummings"), - false, - }, - { - cty.StringVal("oneliner"), - cty.NumberIntVal(4), - cty.StringVal("oneliner"), - false, - }, - { - cty.StringVal(`#!/usr/bin/env bash -date -pwd`), - cty.NumberIntVal(4), - cty.StringVal("#!/usr/bin/env bash\n date\n pwd"), - false, - }, - } - - for _, test := range tests { - t.Run(fmt.Sprintf("indent(%#v, %#v)", test.Spaces, test.String), func(t *testing.T) { - got, err := Indent(test.Spaces, test.String) - - if test.Err { - if err == nil { - t.Fatal("succeeded; want error") - } - return - } else if err != nil { - t.Fatalf("unexpected error: %s", err) - } - - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) - } - }) - } -} - func TestReplace(t *testing.T) { tests := []struct { String cty.Value @@ -425,84 +71,3 @@ func TestReplace(t *testing.T) { }) } } - -func TestTitle(t *testing.T) { - tests := []struct { - String cty.Value - Want cty.Value - Err bool - }{ - { - cty.StringVal("hello"), - cty.StringVal("Hello"), - false, - }, - { - cty.StringVal("hello world"), - cty.StringVal("Hello World"), - false, - }, - { - cty.StringVal(""), - cty.StringVal(""), - false, - }, - } - - for _, test := range tests { - t.Run(fmt.Sprintf("title(%#v)", test.String), func(t *testing.T) { - got, err := Title(test.String) - - if test.Err { - if err == nil { - t.Fatal("succeeded; want error") - } - return - } else if err != nil { - t.Fatalf("unexpected error: %s", err) - } - - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) - } - }) - } -} - -func TestTrimSpace(t *testing.T) { - tests := []struct { - String cty.Value - Want cty.Value - Err bool - }{ - { - cty.StringVal(" hello "), - cty.StringVal("hello"), - false, - }, - { - cty.StringVal(""), - cty.StringVal(""), - false, - }, - } - - for _, test := range tests { - t.Run(fmt.Sprintf("trimspace(%#v)", test.String), func(t *testing.T) { - got, err := TrimSpace(test.String) - - if test.Err { - if err == nil { - t.Fatal("succeeded; want error") - } - return - } else if err != nil { - t.Fatalf("unexpected error: %s", err) - } - - if !got.RawEquals(test.Want) { - t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) - } - }) - } -} diff --git a/lang/functions.go b/lang/functions.go index 8c089a552..d02b9fa88 100644 --- a/lang/functions.go +++ b/lang/functions.go @@ -3,6 +3,7 @@ package lang import ( "fmt" + "github.com/hashicorp/hcl/v2/ext/tryfunc" ctyyaml "github.com/zclconf/go-cty-yaml" "github.com/zclconf/go-cty/cty" "github.com/zclconf/go-cty/cty/function" @@ -39,22 +40,23 @@ func (s *Scope) Functions() map[string]function.Function { "base64sha256": funcs.Base64Sha256Func, "base64sha512": funcs.Base64Sha512Func, "bcrypt": funcs.BcryptFunc, - "ceil": 
funcs.CeilFunc, - "chomp": funcs.ChompFunc, + "can": tryfunc.CanFunc, + "ceil": stdlib.CeilFunc, + "chomp": stdlib.ChompFunc, "cidrhost": funcs.CidrHostFunc, "cidrnetmask": funcs.CidrNetmaskFunc, "cidrsubnet": funcs.CidrSubnetFunc, "cidrsubnets": funcs.CidrSubnetsFunc, "coalesce": funcs.CoalesceFunc, - "coalescelist": funcs.CoalesceListFunc, - "compact": funcs.CompactFunc, + "coalescelist": stdlib.CoalesceListFunc, + "compact": stdlib.CompactFunc, "concat": stdlib.ConcatFunc, - "contains": funcs.ContainsFunc, + "contains": stdlib.ContainsFunc, "csvdecode": stdlib.CSVDecodeFunc, "dirname": funcs.DirnameFunc, - "distinct": funcs.DistinctFunc, - "element": funcs.ElementFunc, - "chunklist": funcs.ChunklistFunc, + "distinct": stdlib.DistinctFunc, + "element": stdlib.ElementFunc, + "chunklist": stdlib.ChunklistFunc, "file": funcs.MakeFileFunc(s.BaseDir, false), "fileexists": funcs.MakeFileExistsFunc(s.BaseDir), "fileset": funcs.MakeFileSetFunc(s.BaseDir), @@ -65,52 +67,53 @@ func (s *Scope) Functions() map[string]function.Function { "filesha1": funcs.MakeFileSha1Func(s.BaseDir), "filesha256": funcs.MakeFileSha256Func(s.BaseDir), "filesha512": funcs.MakeFileSha512Func(s.BaseDir), - "flatten": funcs.FlattenFunc, - "floor": funcs.FloorFunc, + "flatten": stdlib.FlattenFunc, + "floor": stdlib.FloorFunc, "format": stdlib.FormatFunc, "formatdate": stdlib.FormatDateFunc, "formatlist": stdlib.FormatListFunc, - "indent": funcs.IndentFunc, - "index": funcs.IndexFunc, - "join": funcs.JoinFunc, + "indent": stdlib.IndentFunc, + "index": funcs.IndexFunc, // stdlib.IndexFunc is not compatible + "join": stdlib.JoinFunc, "jsondecode": stdlib.JSONDecodeFunc, "jsonencode": stdlib.JSONEncodeFunc, - "keys": funcs.KeysFunc, + "keys": stdlib.KeysFunc, "length": funcs.LengthFunc, "list": funcs.ListFunc, - "log": funcs.LogFunc, + "log": stdlib.LogFunc, "lookup": funcs.LookupFunc, "lower": stdlib.LowerFunc, "map": funcs.MapFunc, "matchkeys": funcs.MatchkeysFunc, "max": stdlib.MaxFunc, "md5": funcs.Md5Func, - "merge": funcs.MergeFunc, + "merge": stdlib.MergeFunc, "min": stdlib.MinFunc, - "parseint": funcs.ParseIntFunc, + "parseint": stdlib.ParseIntFunc, "pathexpand": funcs.PathExpandFunc, - "pow": funcs.PowFunc, + "pow": stdlib.PowFunc, "range": stdlib.RangeFunc, "regex": stdlib.RegexFunc, "regexall": stdlib.RegexAllFunc, "replace": funcs.ReplaceFunc, - "reverse": funcs.ReverseFunc, + "reverse": stdlib.ReverseListFunc, "rsadecrypt": funcs.RsaDecryptFunc, "setintersection": stdlib.SetIntersectionFunc, - "setproduct": funcs.SetProductFunc, + "setproduct": stdlib.SetProductFunc, + "setsubtract": stdlib.SetSubtractFunc, "setunion": stdlib.SetUnionFunc, "sha1": funcs.Sha1Func, "sha256": funcs.Sha256Func, "sha512": funcs.Sha512Func, - "signum": funcs.SignumFunc, - "slice": funcs.SliceFunc, - "sort": funcs.SortFunc, - "split": funcs.SplitFunc, + "signum": stdlib.SignumFunc, + "slice": stdlib.SliceFunc, + "sort": stdlib.SortFunc, + "split": stdlib.SplitFunc, "strrev": stdlib.ReverseFunc, "substr": stdlib.SubstrFunc, "timestamp": funcs.TimestampFunc, - "timeadd": funcs.TimeAddFunc, - "title": funcs.TitleFunc, + "timeadd": stdlib.TimeAddFunc, + "title": stdlib.TitleFunc, "tostring": funcs.MakeToFunc(cty.String), "tonumber": funcs.MakeToFunc(cty.Number), "tobool": funcs.MakeToFunc(cty.Bool), @@ -118,15 +121,19 @@ func (s *Scope) Functions() map[string]function.Function { "tolist": funcs.MakeToFunc(cty.List(cty.DynamicPseudoType)), "tomap": funcs.MakeToFunc(cty.Map(cty.DynamicPseudoType)), "transpose": funcs.TransposeFunc, - 
"trimspace": funcs.TrimSpaceFunc, + "trim": stdlib.TrimFunc, + "trimprefix": stdlib.TrimPrefixFunc, + "trimspace": stdlib.TrimSpaceFunc, + "trimsuffix": stdlib.TrimSuffixFunc, + "try": tryfunc.TryFunc, "upper": stdlib.UpperFunc, "urlencode": funcs.URLEncodeFunc, "uuid": funcs.UUIDFunc, "uuidv5": funcs.UUIDV5Func, - "values": funcs.ValuesFunc, + "values": stdlib.ValuesFunc, "yamldecode": ctyyaml.YAMLDecodeFunc, "yamlencode": ctyyaml.YAMLEncodeFunc, - "zipmap": funcs.ZipmapFunc, + "zipmap": stdlib.ZipmapFunc, } s.funcs["templatefile"] = funcs.MakeTemplateFileFunc(s.BaseDir, func() map[string]function.Function { diff --git a/lang/functions_test.go b/lang/functions_test.go index 13c19c655..029cc0c96 100644 --- a/lang/functions_test.go +++ b/lang/functions_test.go @@ -109,6 +109,27 @@ func TestFunctions(t *testing.T) { }, }, + "can": { + { + `can(true)`, + cty.True, + }, + { + // Note: "can" only works with expressions that pass static + // validation, because it only gets an opportunity to run in + // that case. The following "works" (captures the error) because + // Terraform understands it as a reference to an attribute + // that does not exist during dynamic evaluation. + // + // "can" doesn't work with references that could never possibly + // be valid and are thus caught during static validation, such + // as an expression like "foo" alone which would be understood + // as an invalid resource reference. + `can({}.baz)`, + cty.False, + }, + }, + "ceil": { { `ceil(1.2)`, @@ -666,6 +687,15 @@ func TestFunctions(t *testing.T) { }, }, + "setsubtract": { + { + `setsubtract(["a", "b", "c"], ["a", "c"])`, + cty.SetVal([]cty.Value{ + cty.StringVal("b"), + }), + }, + }, + "setunion": { { `setunion(["a", "b"], ["b", "c"], ["d"])`, @@ -837,6 +867,20 @@ func TestFunctions(t *testing.T) { }, }, + "trim": { + { + `trim("?!hello?!", "!?")`, + cty.StringVal("hello"), + }, + }, + + "trimprefix": { + { + `trimprefix("helloworld", "hello")`, + cty.StringVal("world"), + }, + }, + "trimspace": { { `trimspace(" hello ")`, @@ -844,6 +888,37 @@ func TestFunctions(t *testing.T) { }, }, + "trimsuffix": { + { + `trimsuffix("helloworld", "world")`, + cty.StringVal("hello"), + }, + }, + + "try": { + { + // Note: "try" only works with expressions that pass static + // validation, because it only gets an opportunity to run in + // that case. The following "works" (captures the error) because + // Terraform understands it as a reference to an attribute + // that does not exist during dynamic evaluation. + // + // "try" doesn't work with references that could never possibly + // be valid and are thus caught during static validation, such + // as an expression like "foo" alone which would be understood + // as an invalid resource reference. That's okay because this + // function exists primarily to ease access to dynamically-typed + // structures that Terraform can't statically validate by + // definition. 
+ `try({}.baz, "fallback")`, + cty.StringVal("fallback"), + }, + { + `try("fallback")`, + cty.StringVal("fallback"), + }, + }, + "upper": { { `upper("hello")`, diff --git a/main.go b/main.go index 3cc867801..762d566a5 100644 --- a/main.go +++ b/main.go @@ -12,9 +12,13 @@ import ( "sync" "github.com/hashicorp/go-plugin" + "github.com/hashicorp/terraform-svchost/disco" + "github.com/hashicorp/terraform/command/cliconfig" "github.com/hashicorp/terraform/command/format" "github.com/hashicorp/terraform/helper/logging" - "github.com/hashicorp/terraform/svchost/disco" + "github.com/hashicorp/terraform/httpclient" + "github.com/hashicorp/terraform/internal/getproviders" + "github.com/hashicorp/terraform/version" "github.com/mattn/go-colorable" "github.com/mattn/go-shellwords" "github.com/mitchellh/cli" @@ -123,7 +127,7 @@ func wrappedMain() int { log.Printf("[INFO] Go runtime version: %s", runtime.Version()) log.Printf("[INFO] CLI args: %#v", os.Args) - config, diags := LoadConfig() + config, diags := cliconfig.LoadConfig() if len(diags) > 0 { // Since we haven't instantiated a command.Meta yet, we need to do // some things manually here and use some "safe" defaults for things @@ -159,13 +163,20 @@ func wrappedMain() int { // object checks that and just acts as though no credentials are present. } services := disco.NewWithCredentialsSource(credsSrc) + services.SetUserAgent(httpclient.TerraformUserAgent(version.String())) + + // For the moment, we just always use the registry source to install + // direct from a registry. In future there should be a mechanism to + // configure providers sources from the CLI config, which will then + // change how we construct this object. + providerSrc := getproviders.NewRegistrySource(services) // Initialize the backends. backendInit.Init(services) // In tests, Commands may already be set to provide mock commands if Commands == nil { - initCommands(config, services) + initCommands(config, services, providerSrc) } // Run checkpoint diff --git a/moduledeps/dependencies.go b/moduledeps/dependencies.go index 87c8431ea..dd21a0a25 100644 --- a/moduledeps/dependencies.go +++ b/moduledeps/dependencies.go @@ -1,13 +1,14 @@ package moduledeps import ( + "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/plugin/discovery" ) // Providers describes a set of provider dependencies for a given module. // // Each named provider instance can have one version constraint. -type Providers map[ProviderInstance]ProviderDependency +type Providers map[addrs.Provider]ProviderDependency // ProviderDependency describes the dependency for a particular provider // instance, including both the set of allowed versions and the reason for diff --git a/moduledeps/module.go b/moduledeps/module.go index d6cbaf5c5..52b7c2f6e 100644 --- a/moduledeps/module.go +++ b/moduledeps/module.go @@ -109,17 +109,14 @@ func (s sortModules) Swap(i, j int) { // and apply no particular SHA256 hash constraint. func (m *Module) PluginRequirements() discovery.PluginRequirements { ret := make(discovery.PluginRequirements) - for inst, dep := range m.Providers { - // m.Providers is keyed on provider names, such as "aws.foo". - // a PluginRequirements wants keys to be provider *types*, such - // as "aws". If there are multiple aliases for the same - // provider then we will flatten them into a single requirement - // by combining their constraint sets. 
- pty := inst.Type() - if existing, exists := ret[pty]; exists { - ret[pty].Versions = existing.Versions.Append(dep.Constraints) + for pFqn, dep := range m.Providers { + // TODO: discovery.PluginRequirements should be refactored and use + // addrs.Provider as the map keys + provider := pFqn.LegacyString() + if existing, exists := ret[provider]; exists { + ret[provider].Versions = existing.Versions.Append(dep.Constraints) } else { - ret[pty] = &discovery.PluginConstraints{ + ret[provider] = &discovery.PluginConstraints{ Versions: dep.Constraints, } } diff --git a/moduledeps/module_test.go b/moduledeps/module_test.go index 4cbe0103c..7ec143b1c 100644 --- a/moduledeps/module_test.go +++ b/moduledeps/module_test.go @@ -5,6 +5,7 @@ import ( "reflect" "testing" + "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/plugin/discovery" ) @@ -191,13 +192,10 @@ func TestModulePluginRequirements(t *testing.T) { m := &Module{ Name: "root", Providers: Providers{ - "foo": ProviderDependency{ + addrs.NewLegacyProvider("foo"): ProviderDependency{ Constraints: discovery.ConstraintStr(">=1.0.0").MustParse(), }, - "foo.bar": ProviderDependency{ - Constraints: discovery.ConstraintStr(">=2.0.0").MustParse(), - }, - "baz": ProviderDependency{ + addrs.NewLegacyProvider("baz"): ProviderDependency{ Constraints: discovery.ConstraintStr(">=3.0.0").MustParse(), }, }, @@ -207,7 +205,7 @@ func TestModulePluginRequirements(t *testing.T) { if len(reqd) != 2 { t.Errorf("wrong number of elements in %#v; want 2", reqd) } - if got, want := reqd["foo"].Versions.String(), ">=1.0.0,>=2.0.0"; got != want { + if got, want := reqd["foo"].Versions.String(), ">=1.0.0"; got != want { t.Errorf("wrong combination of versions for 'foo' %q; want %q", got, want) } if got, want := reqd["baz"].Versions.String(), ">=3.0.0"; got != want { diff --git a/moduledeps/provider.go b/moduledeps/provider.go deleted file mode 100644 index 89ceefb2c..000000000 --- a/moduledeps/provider.go +++ /dev/null @@ -1,30 +0,0 @@ -package moduledeps - -import ( - "strings" -) - -// ProviderInstance describes a particular provider instance by its full name, -// like "null" or "aws.foo". -type ProviderInstance string - -// Type returns the provider type of this instance. For example, for an instance -// named "aws.foo" the type is "aws". -func (p ProviderInstance) Type() string { - t := string(p) - if dotPos := strings.Index(t, "."); dotPos != -1 { - t = t[:dotPos] - } - return t -} - -// Alias returns the alias of this provider, if any. An instance named "aws.foo" -// has the alias "foo", while an instance named just "docker" has no alias, -// so the empty string would be returned. 
-func (p ProviderInstance) Alias() string { - t := string(p) - if dotPos := strings.Index(t, "."); dotPos != -1 { - return t[dotPos+1:] - } - return "" -} diff --git a/moduledeps/provider_test.go b/moduledeps/provider_test.go deleted file mode 100644 index c18a0d0a9..000000000 --- a/moduledeps/provider_test.go +++ /dev/null @@ -1,36 +0,0 @@ -package moduledeps - -import ( - "testing" -) - -func TestProviderInstance(t *testing.T) { - tests := []struct { - Name string - WantType string - WantAlias string - }{ - { - Name: "aws", - WantType: "aws", - WantAlias: "", - }, - { - Name: "aws.foo", - WantType: "aws", - WantAlias: "foo", - }, - } - - for _, test := range tests { - t.Run(test.Name, func(t *testing.T) { - inst := ProviderInstance(test.Name) - if got, want := inst.Type(), test.WantType; got != want { - t.Errorf("got type %q; want %q", got, want) - } - if got, want := inst.Alias(), test.WantAlias; got != want { - t.Errorf("got alias %q; want %q", got, want) - } - }) - } -} diff --git a/panic.go b/panic.go index 7ea4da713..6690a93e9 100644 --- a/panic.go +++ b/panic.go @@ -23,6 +23,10 @@ When reporting bugs, please include your terraform version. That information is available on the first line of crash.log. You can also get it by running 'terraform --version' on the command line. +SECURITY WARNING: the "crash.log" file that was created may contain +sensitive information that must be redacted before it is safe to share +on the issue tracker. + [1]: https://github.com/hashicorp/terraform/issues !!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!! diff --git a/plans/action.go b/plans/action.go index c3e6a32ae..c653b106b 100644 --- a/plans/action.go +++ b/plans/action.go @@ -12,7 +12,7 @@ const ( Delete Action = '-' ) -//go:generate stringer -type Action +//go:generate go run golang.org/x/tools/cmd/stringer -type Action // IsReplace returns true if the action is one of the two actions that // represents replacing an existing object with a new object: diff --git a/plans/plan_test.go b/plans/plan_test.go index f68357072..03501b619 100644 --- a/plans/plan_test.go +++ b/plans/plan_test.go @@ -20,9 +20,10 @@ func TestProviderAddrs(t *testing.T) { Type: "test_thing", Name: "woot", }.Instance(addrs.IntKey(0)).Absolute(addrs.RootModuleInstance), - ProviderAddr: addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + ProviderAddr: addrs.AbsProviderConfig{ + Module: addrs.RootModuleInstance, + Provider: addrs.NewLegacyProvider("test"), + }, }, { Addr: addrs.Resource{ @@ -31,9 +32,10 @@ func TestProviderAddrs(t *testing.T) { Name: "woot", }.Instance(addrs.IntKey(0)).Absolute(addrs.RootModuleInstance), DeposedKey: "foodface", - ProviderAddr: addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + ProviderAddr: addrs.AbsProviderConfig{ + Module: addrs.RootModuleInstance, + Provider: addrs.NewLegacyProvider("test"), + }, }, { Addr: addrs.Resource{ @@ -41,9 +43,10 @@ func TestProviderAddrs(t *testing.T) { Type: "test_thing", Name: "what", }.Instance(addrs.IntKey(0)).Absolute(addrs.RootModuleInstance), - ProviderAddr: addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance.Child("foo", addrs.NoKey)), + ProviderAddr: addrs.AbsProviderConfig{ + Module: addrs.RootModuleInstance.Child("foo", addrs.NoKey), + Provider: addrs.NewLegacyProvider("test"), + }, }, }, }, @@ -51,12 +54,14 @@ func TestProviderAddrs(t *testing.T) { got := plan.ProviderAddrs() want := []addrs.AbsProviderConfig{ - addrs.ProviderConfig{ - Type: "test", - 
}.Absolute(addrs.RootModuleInstance.Child("foo", addrs.NoKey)), - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Module: addrs.RootModuleInstance.Child("foo", addrs.NoKey), + Provider: addrs.NewLegacyProvider("test"), + }, + addrs.AbsProviderConfig{ + Module: addrs.RootModuleInstance, + Provider: addrs.NewLegacyProvider("test"), + }, } for _, problem := range deep.Equal(got, want) { diff --git a/plans/planfile/tfplan_test.go b/plans/planfile/tfplan_test.go index 2b49553ac..3da15289c 100644 --- a/plans/planfile/tfplan_test.go +++ b/plans/planfile/tfplan_test.go @@ -56,9 +56,10 @@ func TestTFPlanRoundTrip(t *testing.T) { Type: "test_thing", Name: "woot", }.Instance(addrs.IntKey(0)).Absolute(addrs.RootModuleInstance), - ProviderAddr: addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + ProviderAddr: addrs.AbsProviderConfig{ + Provider: addrs.NewDefaultProvider("test"), + Module: addrs.RootModuleInstance, + }, ChangeSrc: plans.ChangeSrc{ Action: plans.DeleteThenCreate, Before: mustNewDynamicValue(cty.ObjectVal(map[string]cty.Value{ @@ -76,9 +77,10 @@ func TestTFPlanRoundTrip(t *testing.T) { Name: "woot", }.Instance(addrs.IntKey(0)).Absolute(addrs.RootModuleInstance), DeposedKey: "foodface", - ProviderAddr: addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + ProviderAddr: addrs.AbsProviderConfig{ + Provider: addrs.NewDefaultProvider("test"), + Module: addrs.RootModuleInstance, + }, ChangeSrc: plans.ChangeSrc{ Action: plans.Delete, Before: mustNewDynamicValue(cty.ObjectVal(map[string]cty.Value{ @@ -194,9 +196,10 @@ func TestTFPlanRoundTripDestroy(t *testing.T) { Type: "test_thing", Name: "woot", }.Instance(addrs.IntKey(0)).Absolute(addrs.RootModuleInstance), - ProviderAddr: addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + ProviderAddr: addrs.AbsProviderConfig{ + Provider: addrs.NewDefaultProvider("test"), + Module: addrs.RootModuleInstance, + }, ChangeSrc: plans.ChangeSrc{ Action: plans.Delete, Before: mustNewDynamicValue(cty.ObjectVal(map[string]cty.Value{ diff --git a/plugin/client.go b/plugin/client.go index 0eab5385b..f20f913f8 100644 --- a/plugin/client.go +++ b/plugin/client.go @@ -9,6 +9,11 @@ import ( "github.com/hashicorp/terraform/plugin/discovery" ) +// The TF_DISABLE_PLUGIN_TLS environment variable is intended only for use by +// the plugin SDK test framework. We do not recommend Terraform CLI end-users +// set this variable. +var enableAutoMTLS = os.Getenv("TF_DISABLE_PLUGIN_TLS") == "" + // ClientConfig returns a configuration object that can be used to instantiate // a client for the plugin described by the given metadata. 
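The new `enableAutoMTLS` flag above is computed once at package initialization, so flipping the environment variable later in the same process has no effect. A self-contained sketch of the same gate:

```go
package main

import (
	"fmt"
	"os"
)

// Mirrors the gate added in plugin/client.go: automatic mutual TLS stays
// enabled unless TF_DISABLE_PLUGIN_TLS is set to any non-empty value.
// Because this is evaluated at package init, changing the environment
// afterwards within the same process has no effect.
var enableAutoMTLS = os.Getenv("TF_DISABLE_PLUGIN_TLS") == ""

func main() {
	fmt.Println("AutoMTLS enabled:", enableAutoMTLS)
}
```

Running this with `TF_DISABLE_PLUGIN_TLS=1` prints `false`; with the variable unset it prints `true`.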
func ClientConfig(m discovery.PluginMeta) *plugin.ClientConfig { @@ -25,7 +30,7 @@ func ClientConfig(m discovery.PluginMeta) *plugin.ClientConfig { Managed: true, Logger: logger, AllowedProtocols: []plugin.Protocol{plugin.ProtocolGRPC}, - AutoMTLS: true, + AutoMTLS: enableAutoMTLS, } } diff --git a/plugin/discovery/get.go b/plugin/discovery/get.go index 7724cc54b..7c63d4c84 100644 --- a/plugin/discovery/get.go +++ b/plugin/discovery/get.go @@ -16,12 +16,12 @@ import ( "github.com/hashicorp/errwrap" getter "github.com/hashicorp/go-getter" multierror "github.com/hashicorp/go-multierror" + "github.com/hashicorp/terraform-svchost/disco" "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/httpclient" "github.com/hashicorp/terraform/registry" "github.com/hashicorp/terraform/registry/regsrc" "github.com/hashicorp/terraform/registry/response" - "github.com/hashicorp/terraform/svchost/disco" "github.com/hashicorp/terraform/tfdiags" tfversion "github.com/hashicorp/terraform/version" "github.com/mitchellh/cli" @@ -50,7 +50,7 @@ func init() { // An Installer maintains a local cache of plugins by downloading plugins // from an online repository. type Installer interface { - Get(provider addrs.ProviderType, req Constraints) (PluginMeta, tfdiags.Diagnostics, error) + Get(provider addrs.Provider, req Constraints) (PluginMeta, tfdiags.Diagnostics, error) PurgeUnused(used map[string]PluginMeta) (removed PluginMetaSet, err error) } @@ -107,7 +107,7 @@ type ProviderInstaller struct { // are produced under the assumption that if presented to the user they will // be presented alongside context about what is being installed, and thus the // error messages do not redundantly include such information. -func (i *ProviderInstaller) Get(provider addrs.ProviderType, req Constraints) (PluginMeta, tfdiags.Diagnostics, error) { +func (i *ProviderInstaller) Get(provider addrs.Provider, req Constraints) (PluginMeta, tfdiags.Diagnostics, error) { var diags tfdiags.Diagnostics // a little bit of initialization. @@ -200,7 +200,7 @@ func (i *ProviderInstaller) Get(provider addrs.ProviderType, req Constraints) (P } return PluginMeta{}, diags, errwrap.Wrap(ErrorVersionIncompatible, fmt.Errorf(fmt.Sprintf( - errMsg, provider, v.String(), tfversion.String(), + errMsg, provider.LegacyString(), v.String(), tfversion.String(), closestVersion.String(), closestVersion.MinorUpgradeConstraintStr(), constraintStr))) } @@ -232,7 +232,7 @@ func (i *ProviderInstaller) Get(provider addrs.ProviderType, req Constraints) (P } } - printedProviderName := fmt.Sprintf("%q (%s)", provider.Name, providerSource) + printedProviderName := fmt.Sprintf("%q (%s)", provider.LegacyString(), providerSource) i.Ui.Info(fmt.Sprintf("- Downloading plugin for provider %s %s...", printedProviderName, versionMeta.Version)) log.Printf("[DEBUG] getting provider %s version %q", printedProviderName, versionMeta.Version) err = i.install(provider, v, providerURL) @@ -244,11 +244,11 @@ func (i *ProviderInstaller) Get(provider addrs.ProviderType, req Constraints) (P // (This is weird, because go-getter doesn't directly return // information about what was extracted, and we just extracted // the archive directly into a shared dir here.) 
- log.Printf("[DEBUG] looking for the %s %s plugin we just installed", provider.Name, versionMeta.Version) + log.Printf("[DEBUG] looking for the %s %s plugin we just installed", provider.LegacyString(), versionMeta.Version) metas := FindPlugins("provider", []string{i.Dir}) log.Printf("[DEBUG] all plugins found %#v", metas) metas, _ = metas.ValidateVersions() - metas = metas.WithName(provider.Name).WithVersion(v) + metas = metas.WithName(provider.Type).WithVersion(v) log.Printf("[DEBUG] filtered plugins %#v", metas) if metas.Count() == 0 { // This should never happen. Suggests that the release archive @@ -276,18 +276,18 @@ func (i *ProviderInstaller) Get(provider addrs.ProviderType, req Constraints) (P return metas.Newest(), diags, nil } -func (i *ProviderInstaller) install(provider addrs.ProviderType, version Version, url string) error { +func (i *ProviderInstaller) install(provider addrs.Provider, version Version, url string) error { if i.Cache != nil { - log.Printf("[DEBUG] looking for provider %s %s in plugin cache", provider.Name, version) - cached := i.Cache.CachedPluginPath("provider", provider.Name, version) + log.Printf("[DEBUG] looking for provider %s %s in plugin cache", provider.LegacyString(), version) + cached := i.Cache.CachedPluginPath("provider", provider.Type, version) if cached == "" { - log.Printf("[DEBUG] %s %s not yet in cache, so downloading %s", provider.Name, version, url) + log.Printf("[DEBUG] %s %s not yet in cache, so downloading %s", provider.LegacyString(), version, url) err := getter.Get(i.Cache.InstallDir(), url) if err != nil { return err } // should now be in cache - cached = i.Cache.CachedPluginPath("provider", provider.Name, version) + cached = i.Cache.CachedPluginPath("provider", provider.Type, version) if cached == "" { // should never happen if the getter is behaving properly // and the plugins are packaged properly. @@ -308,7 +308,7 @@ func (i *ProviderInstaller) install(provider addrs.ProviderType, version Version return err } - log.Printf("[DEBUG] installing %s %s to %s from local cache %s", provider.Name, version, targetPath, cached) + log.Printf("[DEBUG] installing %s %s to %s from local cache %s", provider.LegacyString(), version, targetPath, cached) // Delete if we can. If there's nothing there already then no harm done. // This is important because we can't create a link if there's @@ -366,7 +366,7 @@ func (i *ProviderInstaller) install(provider addrs.ProviderType, version Version // One way or another, by the time we get here we should have either // a link or a copy of the cached plugin within i.Dir, as expected. 
} else { - log.Printf("[DEBUG] plugin cache is disabled, so downloading %s %s from %s", provider.Name, version, url) + log.Printf("[DEBUG] plugin cache is disabled, so downloading %s %s from %s", provider.LegacyString(), version, url) err := getter.Get(i.Dir, url) if err != nil { return err @@ -472,8 +472,8 @@ func (i *ProviderInstaller) hostname() (string, error) { } // list all versions available for the named provider -func (i *ProviderInstaller) listProviderVersions(provider addrs.ProviderType) (*response.TerraformProviderVersions, error) { - req := regsrc.NewTerraformProvider(provider.Name, i.OS, i.Arch) +func (i *ProviderInstaller) listProviderVersions(provider addrs.Provider) (*response.TerraformProviderVersions, error) { + req := regsrc.NewTerraformProvider(provider.Type, i.OS, i.Arch) versions, err := i.registry.TerraformProviderVersions(req) return versions, err } diff --git a/plugin/discovery/get_test.go b/plugin/discovery/get_test.go index b14de9f74..231551d63 100644 --- a/plugin/discovery/get_test.go +++ b/plugin/discovery/get_test.go @@ -17,11 +17,13 @@ import ( "strings" "testing" + svchost "github.com/hashicorp/terraform-svchost" + "github.com/hashicorp/terraform-svchost/disco" "github.com/hashicorp/terraform/addrs" + "github.com/hashicorp/terraform/httpclient" "github.com/hashicorp/terraform/registry" "github.com/hashicorp/terraform/registry/response" - "github.com/hashicorp/terraform/svchost" - "github.com/hashicorp/terraform/svchost/disco" + "github.com/hashicorp/terraform/version" "github.com/mitchellh/cli" ) @@ -150,7 +152,7 @@ func TestVersionListing(t *testing.T) { i := newProviderInstaller(server) - allVersions, err := i.listProviderVersions(addrs.ProviderType{Name: "test"}) + allVersions, err := i.listProviderVersions(addrs.Provider{Type: "test"}) if err != nil { t.Fatal(err) @@ -417,7 +419,7 @@ func TestProviderInstallerGet(t *testing.T) { registry: registry.NewClient(Disco(server), nil), } - _, _, err = i.Get(addrs.ProviderType{Name: "test"}, AllVersions) + _, _, err = i.Get(addrs.NewLegacyProvider("test"), AllVersions) if err != ErrorNoVersionCompatibleWithPlatform { t.Fatal("want error for incompatible version") @@ -434,21 +436,21 @@ func TestProviderInstallerGet(t *testing.T) { } { - _, _, err := i.Get(addrs.ProviderType{Name: "test"}, ConstraintStr(">9.0.0").MustParse()) + _, _, err := i.Get(addrs.NewLegacyProvider("test"), ConstraintStr(">9.0.0").MustParse()) if err != ErrorNoSuitableVersion { t.Fatal("want error for mismatching constraints") } } { - provider := addrs.ProviderType{Name: "nonexist"} + provider := addrs.NewLegacyProvider("nonexist") _, _, err := i.Get(provider, AllVersions) if err != ErrorNoSuchProvider { t.Fatal("want error for no such provider") } } - gotMeta, _, err := i.Get(addrs.ProviderType{Name: "test"}, AllVersions) + gotMeta, _, err := i.Get(addrs.NewLegacyProvider("test"), AllVersions) if err != nil { t.Fatal(err) } @@ -506,7 +508,7 @@ func TestProviderInstallerGet_cache(t *testing.T) { Arch: "mockarch", } - gotMeta, _, err := i.Get(addrs.ProviderType{Name: "test"}, AllVersions) + gotMeta, _, err := i.Get(addrs.NewLegacyProvider("test"), AllVersions) if err != nil { t.Fatal(err) } @@ -741,6 +743,7 @@ func Disco(s *httptest.Server) *disco.Disco { "providers.v1": fmt.Sprintf("%s/v1/providers", s.URL), } d := disco.New() + d.SetUserAgent(httpclient.TerraformUserAgent(version.String())) d.ForceHostServices(svchost.Hostname("registry.terraform.io"), services) d.ForceHostServices(svchost.Hostname("localhost"), services) diff --git 
a/plugin/mock_proto/generate.go b/plugin/mock_proto/generate.go index 357700996..6f004ffd3 100644 --- a/plugin/mock_proto/generate.go +++ b/plugin/mock_proto/generate.go @@ -1,3 +1,3 @@ -//go:generate bash ./generate.sh +//go:generate go run github.com/golang/mock/mockgen -destination mock.go github.com/hashicorp/terraform/internal/tfplugin5 ProviderClient,ProvisionerClient,Provisioner_ProvisionResourceClient,Provisioner_ProvisionResourceServer package mock_tfplugin5 diff --git a/plugin/mock_proto/generate.sh b/plugin/mock_proto/generate.sh deleted file mode 100644 index 2a375f18c..000000000 --- a/plugin/mock_proto/generate.sh +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash - -# mockgen is particularly sensitive about what mode we run it in -export GOFLAGS="" -export GO111MODULE=on - -mockgen -destination mock.go github.com/hashicorp/terraform/internal/tfplugin5 ProviderClient,ProvisionerClient,Provisioner_ProvisionResourceClient,Provisioner_ProvisionResourceServer diff --git a/plugins.go b/plugins.go index cf2d54253..47ae2e4f6 100644 --- a/plugins.go +++ b/plugins.go @@ -5,6 +5,8 @@ import ( "log" "path/filepath" "runtime" + + "github.com/hashicorp/terraform/command/cliconfig" ) // globalPluginDirs returns directories that should be searched for @@ -16,7 +18,7 @@ import ( func globalPluginDirs() []string { var ret []string // Look in ~/.terraform.d/plugins/ , or its equivalent on non-UNIX - dir, err := ConfigDir() + dir, err := cliconfig.ConfigDir() if err != nil { log.Printf("[ERROR] Error finding global config directory: %s", err) } else { diff --git a/providers/addressed_types.go b/providers/addressed_types.go index 7ed523f15..85ff4c962 100644 --- a/providers/addressed_types.go +++ b/providers/addressed_types.go @@ -6,35 +6,15 @@ import ( "github.com/hashicorp/terraform/addrs" ) -// AddressedTypes is a helper that extracts all of the distinct provider -// types from the given list of relative provider configuration addresses. -func AddressedTypes(providerAddrs []addrs.ProviderConfig) []string { - if len(providerAddrs) == 0 { - return nil - } - m := map[string]struct{}{} - for _, addr := range providerAddrs { - m[addr.Type] = struct{}{} - } - - names := make([]string, 0, len(m)) - for typeName := range m { - names = append(names, typeName) - } - - sort.Strings(names) // Stable result for tests - return names -} - // AddressedTypesAbs is a helper that extracts all of the distinct provider // types from the given list of absolute provider configuration addresses. 
-func AddressedTypesAbs(providerAddrs []addrs.AbsProviderConfig) []string { +func AddressedTypesAbs(providerAddrs []addrs.AbsProviderConfig) []addrs.Provider { if len(providerAddrs) == 0 { return nil } - m := map[string]struct{}{} + m := map[string]addrs.Provider{} for _, addr := range providerAddrs { - m[addr.ProviderConfig.Type] = struct{}{} + m[addr.Provider.String()] = addr.Provider } names := make([]string, 0, len(m)) @@ -43,5 +23,11 @@ func AddressedTypesAbs(providerAddrs []addrs.AbsProviderConfig) []string { } sort.Strings(names) // Stable result for tests - return names + + ret := make([]addrs.Provider, len(names)) + for i, name := range names { + ret[i] = m[name] + } + + return ret } diff --git a/providers/addressed_types_test.go b/providers/addressed_types_test.go index 80915e3e6..0bf555bba 100644 --- a/providers/addressed_types_test.go +++ b/providers/addressed_types_test.go @@ -8,40 +8,36 @@ import ( "github.com/hashicorp/terraform/addrs" ) -func TestAddressedTypes(t *testing.T) { - providerAddrs := []addrs.ProviderConfig{ - {Type: "aws"}, - {Type: "aws", Alias: "foo"}, - {Type: "azure"}, - {Type: "null"}, - {Type: "null"}, - } - - got := AddressedTypes(providerAddrs) - want := []string{ - "aws", - "azure", - "null", - } - for _, problem := range deep.Equal(got, want) { - t.Error(problem) - } -} - func TestAddressedTypesAbs(t *testing.T) { providerAddrs := []addrs.AbsProviderConfig{ - addrs.ProviderConfig{Type: "aws"}.Absolute(addrs.RootModuleInstance), - addrs.ProviderConfig{Type: "aws", Alias: "foo"}.Absolute(addrs.RootModuleInstance), - addrs.ProviderConfig{Type: "azure"}.Absolute(addrs.RootModuleInstance), - addrs.ProviderConfig{Type: "null"}.Absolute(addrs.RootModuleInstance), - addrs.ProviderConfig{Type: "null"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Module: addrs.RootModuleInstance, + Provider: addrs.NewLegacyProvider("aws"), + }, + addrs.AbsProviderConfig{ + Module: addrs.RootModuleInstance, + Provider: addrs.NewLegacyProvider("aws"), + Alias: "foo", + }, + addrs.AbsProviderConfig{ + Module: addrs.RootModuleInstance, + Provider: addrs.NewLegacyProvider("azure"), + }, + addrs.AbsProviderConfig{ + Module: addrs.RootModuleInstance, + Provider: addrs.NewLegacyProvider("null"), + }, + addrs.AbsProviderConfig{ + Module: addrs.RootModuleInstance, + Provider: addrs.NewLegacyProvider("null"), + }, } got := AddressedTypesAbs(providerAddrs) - want := []string{ - "aws", - "azure", - "null", + want := []addrs.Provider{ + addrs.NewLegacyProvider("aws"), + addrs.NewLegacyProvider("azure"), + addrs.NewLegacyProvider("null"), } for _, problem := range deep.Equal(got, want) { t.Error(problem) diff --git a/providers/resolver.go b/providers/resolver.go index 4de8e0acd..2ef387e46 100644 --- a/providers/resolver.go +++ b/providers/resolver.go @@ -3,6 +3,7 @@ package providers import ( "fmt" + "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/plugin/discovery" ) @@ -13,17 +14,17 @@ type Resolver interface { // Given a constraint map, return a Factory for each requested provider. // If some or all of the constraints cannot be satisfied, return a non-nil // slice of errors describing the problems. 
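Callers of `ResolverFixed` now key their factories by `addrs.Provider` rather than by bare name strings, as the updated REPL test further below also shows. A minimal sketch of the new call pattern; `fixedResolver` is an illustrative helper and `p` stands in for any `providers.Interface` implementation, such as a mock provider in a test:

```go
package main

import (
	"github.com/hashicorp/terraform/addrs"
	"github.com/hashicorp/terraform/providers"
)

// fixedResolver shows the call pattern after this change: fixed factories
// are keyed by addrs.Provider, with NewLegacyProvider bridging the
// old-style names that still appear in plugin requirements.
func fixedResolver(p providers.Interface) providers.Resolver {
	return providers.ResolverFixed(map[addrs.Provider]providers.Factory{
		addrs.NewLegacyProvider("test"): providers.FactoryFixed(p),
	})
}

func main() {
	_ = fixedResolver(nil) // nil stands in for a provider under test
}
```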
- ResolveProviders(reqd discovery.PluginRequirements) (map[string]Factory, []error) + ResolveProviders(reqd discovery.PluginRequirements) (map[addrs.Provider]Factory, []error) } // ResolverFunc wraps a callback function and turns it into a Resolver // implementation, for convenience in situations where a function and its // associated closure are sufficient as a resolver implementation. -type ResolverFunc func(reqd discovery.PluginRequirements) (map[string]Factory, []error) +type ResolverFunc func(reqd discovery.PluginRequirements) (map[addrs.Provider]Factory, []error) // ResolveProviders implements Resolver by calling the // wrapped function. -func (f ResolverFunc) ResolveProviders(reqd discovery.PluginRequirements) (map[string]Factory, []error) { +func (f ResolverFunc) ResolveProviders(reqd discovery.PluginRequirements) (map[addrs.Provider]Factory, []error) { return f(reqd) } @@ -34,13 +35,14 @@ func (f ResolverFunc) ResolveProviders(reqd discovery.PluginRequirements) (map[s // // This function is primarily used in tests, to provide mock providers or // in-process providers under test. -func ResolverFixed(factories map[string]Factory) Resolver { - return ResolverFunc(func(reqd discovery.PluginRequirements) (map[string]Factory, []error) { - ret := make(map[string]Factory, len(reqd)) +func ResolverFixed(factories map[addrs.Provider]Factory) Resolver { + return ResolverFunc(func(reqd discovery.PluginRequirements) (map[addrs.Provider]Factory, []error) { + ret := make(map[addrs.Provider]Factory, len(reqd)) var errs []error for name := range reqd { - if factory, exists := factories[name]; exists { - ret[name] = factory + fqn := addrs.NewLegacyProvider(name) + if factory, exists := factories[fqn]; exists { + ret[fqn] = factory } else { errs = append(errs, fmt.Errorf("provider %q is not available", name)) } diff --git a/registry/client.go b/registry/client.go index 93424d176..e8f7ac111 100644 --- a/registry/client.go +++ b/registry/client.go @@ -11,11 +11,11 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-svchost" + "github.com/hashicorp/terraform-svchost/disco" "github.com/hashicorp/terraform/httpclient" "github.com/hashicorp/terraform/registry/regsrc" "github.com/hashicorp/terraform/registry/response" - "github.com/hashicorp/terraform/svchost" - "github.com/hashicorp/terraform/svchost/disco" "github.com/hashicorp/terraform/version" ) @@ -52,6 +52,8 @@ func NewClient(services *disco.Disco, client *http.Client) *Client { services.Transport = client.Transport + services.SetUserAgent(httpclient.TerraformUserAgent(version.String())) + return &Client{ client: client, services: services, diff --git a/registry/client_test.go b/registry/client_test.go index fd39f9f75..105205f94 100644 --- a/registry/client_test.go +++ b/registry/client_test.go @@ -7,9 +7,11 @@ import ( "testing" version "github.com/hashicorp/go-version" + "github.com/hashicorp/terraform-svchost/disco" + "github.com/hashicorp/terraform/httpclient" "github.com/hashicorp/terraform/registry/regsrc" "github.com/hashicorp/terraform/registry/test" - "github.com/hashicorp/terraform/svchost/disco" + tfversion "github.com/hashicorp/terraform/version" ) func TestLookupModuleVersions(t *testing.T) { @@ -136,6 +138,7 @@ func TestAccLookupModuleVersions(t *testing.T) { t.Skip() } regDisco := disco.New() + regDisco.SetUserAgent(httpclient.TerraformUserAgent(tfversion.String())) // test with and without a hostname for _, src := range []string{ diff --git a/registry/errors.go b/registry/errors.go index 5a6a31b08..3b99b34d8 100644 --- 
a/registry/errors.go +++ b/registry/errors.go @@ -3,8 +3,8 @@ package registry import ( "fmt" + "github.com/hashicorp/terraform-svchost/disco" "github.com/hashicorp/terraform/registry/regsrc" - "github.com/hashicorp/terraform/svchost/disco" ) type errModuleNotFound struct { diff --git a/registry/regsrc/friendly_host.go b/registry/regsrc/friendly_host.go index 14b4dce9c..c9bc40bee 100644 --- a/registry/regsrc/friendly_host.go +++ b/registry/regsrc/friendly_host.go @@ -4,7 +4,7 @@ import ( "regexp" "strings" - "github.com/hashicorp/terraform/svchost" + "github.com/hashicorp/terraform-svchost" ) var ( diff --git a/registry/regsrc/module.go b/registry/regsrc/module.go index 325706ec2..c3edd7d87 100644 --- a/registry/regsrc/module.go +++ b/registry/regsrc/module.go @@ -6,7 +6,7 @@ import ( "regexp" "strings" - "github.com/hashicorp/terraform/svchost" + "github.com/hashicorp/terraform-svchost" ) var ( diff --git a/registry/regsrc/terraform_provider.go b/registry/regsrc/terraform_provider.go index 58dedee5e..7205d03b8 100644 --- a/registry/regsrc/terraform_provider.go +++ b/registry/regsrc/terraform_provider.go @@ -5,7 +5,7 @@ import ( "runtime" "strings" - "github.com/hashicorp/terraform/svchost" + "github.com/hashicorp/terraform-svchost" ) var ( diff --git a/registry/test/mock_registry.go b/registry/test/mock_registry.go index f89cfc016..e1b6249e3 100644 --- a/registry/test/mock_registry.go +++ b/registry/test/mock_registry.go @@ -12,11 +12,13 @@ import ( "strings" version "github.com/hashicorp/go-version" + "github.com/hashicorp/terraform-svchost" + "github.com/hashicorp/terraform-svchost/auth" + "github.com/hashicorp/terraform-svchost/disco" + "github.com/hashicorp/terraform/httpclient" "github.com/hashicorp/terraform/registry/regsrc" "github.com/hashicorp/terraform/registry/response" - "github.com/hashicorp/terraform/svchost" - "github.com/hashicorp/terraform/svchost/auth" - "github.com/hashicorp/terraform/svchost/disco" + tfversion "github.com/hashicorp/terraform/version" ) // Disco return a *disco.Disco mapping registry.terraform.io, localhost, @@ -29,6 +31,7 @@ func Disco(s *httptest.Server) *disco.Disco { "providers.v1": fmt.Sprintf("%s/v1/providers", s.URL), } d := disco.NewWithCredentialsSource(credsSrc) + d.SetUserAgent(httpclient.TerraformUserAgent(tfversion.String())) d.ForceHostServices(svchost.Hostname("registry.terraform.io"), services) d.ForceHostServices(svchost.Hostname("localhost"), services) diff --git a/repl/session_test.go b/repl/session_test.go index d79ba33a8..4ef4c8b78 100644 --- a/repl/session_test.go +++ b/repl/session_test.go @@ -45,9 +45,10 @@ func TestSession_basicState(t *testing.T) { Status: states.ObjectReady, AttrsJSON: []byte(`{"id":"bar"}`), }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) s.SetResourceInstanceCurrent( addrs.Resource{ @@ -59,9 +60,10 @@ func TestSession_basicState(t *testing.T) { Status: states.ObjectReady, AttrsJSON: []byte(`{"id":"bar"}`), }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) @@ -211,8 +213,8 @@ func testSession(t *testing.T, test testSessionTest) { // Build the TF context ctx, diags := terraform.NewContext(&terraform.ContextOpts{ State: test.State, - ProviderResolver: providers.ResolverFixed(map[string]providers.Factory{ - "test": 
providers.FactoryFixed(p),
+		ProviderResolver: providers.ResolverFixed(map[addrs.Provider]providers.Factory{
+			addrs.NewLegacyProvider("test"): providers.FactoryFixed(p),
 		}),
 		Config: config,
 	})
diff --git a/scripts/travis.sh b/scripts/travis.sh
index 90e5cb9ff..dba7d036b 100755
--- a/scripts/travis.sh
+++ b/scripts/travis.sh
@@ -1,14 +1,12 @@
 #!/bin/bash
+set -e
+echo "" > coverage.txt
-
-# Consistent output so travis does not think we're dead during long running
-# tests.
-export PING_SLEEP=30
-bash -c "while true; do echo \$(date) - building ...; sleep $PING_SLEEP; done" &
-PING_LOOP_PID=$!
-
-make testacc
-TEST_OUTPUT=$?
-
-kill $PING_LOOP_PID
-exit $TEST_OUTPUT
+for d in $(go list ./... | grep -v vendor); do
+  go test -mod=vendor -timeout=2m -parallel=4 -coverprofile=profile.out -covermode=atomic $d
+  if [ -f profile.out ]; then
+    cat profile.out >> coverage.txt
+    rm profile.out
+  fi
+done
diff --git a/states/instance_object.go b/states/instance_object.go
index 1374c59d3..78e1dda93 100644
--- a/states/instance_object.go
+++ b/states/instance_object.go
@@ -29,18 +29,23 @@ type ResourceInstanceObject struct {
 	// it was updated.
 	Status ObjectStatus
 
-	// Dependencies is a set of other addresses in the same module which
-	// this instance depended on when the given attributes were evaluated.
-	// This is used to construct the dependency relationships for an object
-	// whose configuration is no longer available, such as if it has been
-	// removed from configuration altogether, or is now deposed.
-	Dependencies []addrs.Referenceable
+	// Dependencies is a set of absolute addresses of other resources this
+	// instance depended on when it was applied. This is used to construct
+	// the dependency relationships for an object whose configuration is no
+	// longer available, such as if it has been removed from configuration
+	// altogether, or is now deposed.
+	Dependencies []addrs.AbsResource
+
+	// DependsOn corresponds to the deprecated `depends_on` field in the state.
+	// This field contained the configuration `depends_on` values, and some of
+	// the references from within a single module.
+	DependsOn []addrs.Referenceable
 }
 
 // ObjectStatus represents the status of a RemoteObject.
 type ObjectStatus rune
 
-//go:generate stringer -type ObjectStatus
+//go:generate go run golang.org/x/tools/cmd/stringer -type ObjectStatus
 
 const (
 	// ObjectReady is an object status for an object that is ready to use.
diff --git a/states/instance_object_src.go b/states/instance_object_src.go
index 62907ab76..a18cf313c 100644
--- a/states/instance_object_src.go
+++ b/states/instance_object_src.go
@@ -53,7 +53,9 @@ type ResourceInstanceObjectSrc struct {
 	// ResourceInstanceObject.
 	Private []byte
 	Status ObjectStatus
-	Dependencies []addrs.Referenceable
+	Dependencies []addrs.AbsResource
+	// deprecated
+	DependsOn []addrs.Referenceable
 }
 
 // Decode unmarshals the raw representation of the object attributes. Pass the
@@ -86,6 +88,7 @@ func (os *ResourceInstanceObjectSrc) Decode(ty cty.Type) (*ResourceInstanceObjec
 		Value:        val,
 		Status:       os.Status,
 		Dependencies: os.Dependencies,
+		DependsOn:    os.DependsOn,
 		Private:      os.Private,
 	}, nil
 }
diff --git a/states/module.go b/states/module.go
index d89e7878d..ec177edf3 100644
--- a/states/module.go
+++ b/states/module.go
@@ -76,7 +76,7 @@ func (ms *Module) RemoveResource(addr addrs.Resource) {
 }
 
 // SetResourceInstanceCurrent saves the given instance object as the current
-// generation of the resource instance with the given address, simulataneously
+// generation of the resource instance with the given address, simultaneously
 // updating the recorded provider configuration address, dependencies, and
 // resource EachMode.
 //
@@ -88,23 +88,58 @@ func (ms *Module) RemoveResource(addr addrs.Resource) {
 // are updated for all other instances of the same resource as a side-effect of
 // this call.
 func (ms *Module) SetResourceInstanceCurrent(addr addrs.ResourceInstance, obj *ResourceInstanceObjectSrc, provider addrs.AbsProviderConfig) {
-	ms.SetResourceMeta(addr.Resource, eachModeForInstanceKey(addr.Key), provider)
-
 	rs := ms.Resource(addr.Resource)
-	is := rs.EnsureInstance(addr.Key)
-
+	// if the resource is nil and the object is nil, there is nothing to do
+	if obj == nil && rs == nil {
+		return
+	}
+	if obj == nil && rs != nil {
+		// does the resource have any other objects?
+		// if not then delete the whole resource.
+		// When deleting the resource, ensure that its EachMode is NoEach,
+		// as a resource with EachList or EachMap can have 0 instances and be valid
+		if rs.EachMode == NoEach && len(rs.Instances) == 0 {
+			delete(ms.Resources, addr.Resource.String())
+			return
+		}
+		// check for an existing instance, now that we've ensured that rs.Instances is non-empty
+		is := rs.Instance(addr.Key)
+		if is == nil {
+			// if there is no instance on the resource with this address and obj is nil, return and change nothing
+			return
+		}
+		// if we have an instance, update the current
+		is.Current = obj
+		if !is.HasObjects() {
+			// If we have no objects at all then we'll clean up.
+			delete(rs.Instances, addr.Key)
+			// Delete the resource if it has no instances, but only if NoEach
+			if rs.EachMode == NoEach && len(rs.Instances) == 0 {
+				delete(ms.Resources, addr.Resource.String())
+				return
+			}
+		}
+		// Nothing more to do here, so return.
+		return
+	}
+	if rs == nil && obj != nil {
+		// We don't have a resource, so make one, which is a side effect of SetResourceMeta
+		ms.SetResourceMeta(addr.Resource, eachModeForInstanceKey(addr.Key), provider)
+		// now we have a resource, so update the rs value to point to it
+		rs = ms.Resource(addr.Resource)
+	}
+	// Get our instance from the resource; it could be there or not at this point
+	is := rs.Instance(addr.Key)
+	if is == nil {
+		// if we don't have an instance, create one and add it to the resource's instances
+		is = rs.CreateInstance(addr.Key)
+		// update the resource meta because we have a new instance, so EachMode may have changed
+		ms.SetResourceMeta(addr.Resource, eachModeForInstanceKey(addr.Key), provider)
+	}
+	// Update the resource's ProviderConfig, in case the provider has updated
+	rs.ProviderConfig = provider
 	is.Current = obj
-
-	if !is.HasObjects() {
-		// If we have no objects at all then we'll clean up.
-		delete(rs.Instances, addr.Key)
-	}
-	if rs.EachMode == NoEach && len(rs.Instances) == 0 {
-		// Also clean up if we only expect to have one instance anyway
-		// and there are none.
We leave the resource behind if an each mode - // is active because an empty list or map of instances is a valid state. - delete(ms.Resources, addr.Resource.String()) - } } // SetResourceInstanceDeposed saves the given instance object as a deposed diff --git a/states/resource.go b/states/resource.go index e2a2b8588..da883ddab 100644 --- a/states/resource.go +++ b/states/resource.go @@ -39,6 +39,13 @@ func (rs *Resource) Instance(key addrs.InstanceKey) *ResourceInstance { return rs.Instances[key] } +// CreateInstance creates an instance and adds it to the resource +func (rs *Resource) CreateInstance(key addrs.InstanceKey) *ResourceInstance { + is := NewResourceInstance() + rs.Instances[key] = is + return is +} + // EnsureInstance returns the state for the instance with the given key, // creating a new empty state for it if one doesn't already exist. // @@ -175,7 +182,7 @@ const ( EachMap EachMode = 'M' ) -//go:generate stringer -type EachMode +//go:generate go run golang.org/x/tools/cmd/stringer -type EachMode func eachModeForInstanceKey(key addrs.InstanceKey) EachMode { switch key.(type) { diff --git a/states/state_deepcopy.go b/states/state_deepcopy.go index 8664f3bea..7d7a7ef10 100644 --- a/states/state_deepcopy.go +++ b/states/state_deepcopy.go @@ -153,8 +153,17 @@ func (obj *ResourceInstanceObjectSrc) DeepCopy() *ResourceInstanceObjectSrc { // Some addrs.Referencable implementations are technically mutable, but // we treat them as immutable by convention and so we don't deep-copy here. - dependencies := make([]addrs.Referenceable, len(obj.Dependencies)) - copy(dependencies, obj.Dependencies) + var dependencies []addrs.AbsResource + if obj.Dependencies != nil { + dependencies = make([]addrs.AbsResource, len(obj.Dependencies)) + copy(dependencies, obj.Dependencies) + } + + var dependsOn []addrs.Referenceable + if obj.DependsOn != nil { + dependsOn = make([]addrs.Referenceable, len(obj.DependsOn)) + copy(dependsOn, obj.DependsOn) + } return &ResourceInstanceObjectSrc{ Status: obj.Status, @@ -163,6 +172,7 @@ func (obj *ResourceInstanceObjectSrc) DeepCopy() *ResourceInstanceObjectSrc { AttrsFlat: attrsFlat, AttrsJSON: attrsJSON, Dependencies: dependencies, + DependsOn: dependsOn, } } @@ -187,9 +197,9 @@ func (obj *ResourceInstanceObject) DeepCopy() *ResourceInstanceObject { // Some addrs.Referenceable implementations are technically mutable, but // we treat them as immutable by convention and so we don't deep-copy here. 
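The rewritten `SetResourceInstanceCurrent` above makes writing a nil object an explicit prune operation. A small sketch of the resulting behavior, using the `states` and `addrs` APIs that appear throughout this diff (`states.NewState` is the package's standard constructor, assumed here):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/addrs"
	"github.com/hashicorp/terraform/states"
)

func main() {
	state := states.NewState()
	root := state.EnsureModule(addrs.RootModuleInstance)
	provider := addrs.AbsProviderConfig{
		Module:   addrs.RootModuleInstance,
		Provider: addrs.NewLegacyProvider("test"),
	}
	instAddr := addrs.Resource{
		Mode: addrs.ManagedResourceMode,
		Type: "test_thing",
		Name: "woot",
	}.Instance(addrs.NoKey)

	// Writing an object creates the resource and the instance on demand.
	root.SetResourceInstanceCurrent(instAddr, &states.ResourceInstanceObjectSrc{
		Status:    states.ObjectReady,
		AttrsJSON: []byte(`{"id":"bar"}`),
	}, provider)
	fmt.Println("resources after write:", len(root.Resources))

	// Writing nil prunes the instance, and then the resource itself once
	// it has no instances left (for a resource without count/for_each).
	root.SetResourceInstanceCurrent(instAddr, nil, provider)
	fmt.Println("resources after prune:", len(root.Resources))
}
```

After the second call the module no longer tracks `test_thing.woot` at all, matching the NoEach cleanup branch in the new code.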
- var dependencies []addrs.Referenceable + var dependencies []addrs.AbsResource if obj.Dependencies != nil { - dependencies = make([]addrs.Referenceable, len(obj.Dependencies)) + dependencies = make([]addrs.AbsResource, len(obj.Dependencies)) copy(dependencies, obj.Dependencies) } diff --git a/states/state_test.go b/states/state_test.go index 618cfaafb..ef481a6e6 100644 --- a/states/state_test.go +++ b/states/state_test.go @@ -35,9 +35,10 @@ func TestState(t *testing.T) { SchemaVersion: 1, AttrsJSON: []byte(`{"woozles":"confuzles"}`), }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewDefaultProvider("test"), + Module: addrs.RootModuleInstance, + }, ) childModule := state.EnsureModule(addrs.RootModuleInstance.Child("child", addrs.NoKey)) @@ -78,9 +79,10 @@ func TestState(t *testing.T) { Deposed: map[DeposedKey]*ResourceInstanceObjectSrc{}, }, }, - ProviderConfig: addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + ProviderConfig: addrs.AbsProviderConfig{ + Provider: addrs.NewDefaultProvider("test"), + Module: addrs.RootModuleInstance, + }, }, }, }, @@ -138,11 +140,12 @@ func TestStateDeepCopy(t *testing.T) { SchemaVersion: 1, AttrsJSON: []byte(`{"woozles":"confuzles"}`), Private: []byte("private data"), - Dependencies: []addrs.Referenceable{}, + Dependencies: []addrs.AbsResource{}, + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewDefaultProvider("test"), + Module: addrs.RootModuleInstance, }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), ) rootModule.SetResourceInstanceCurrent( addrs.Resource{ @@ -155,15 +158,21 @@ func TestStateDeepCopy(t *testing.T) { SchemaVersion: 1, AttrsJSON: []byte(`{"woozles":"confuzles"}`), Private: []byte("private data"), - Dependencies: []addrs.Referenceable{addrs.Resource{ - Mode: addrs.ManagedResourceMode, - Type: "test_thing", - Name: "baz", - }}, + Dependencies: []addrs.AbsResource{ + { + Module: addrs.RootModuleInstance, + Resource: addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_thing", + Name: "baz", + }, + }, + }, + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewDefaultProvider("test"), + Module: addrs.RootModuleInstance, }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), ) childModule := state.EnsureModule(addrs.RootModuleInstance.Child("child", addrs.NoKey)) diff --git a/states/statefile/roundtrip_test.go b/states/statefile/roundtrip_test.go index 81158aa3f..693a9be96 100644 --- a/states/statefile/roundtrip_test.go +++ b/states/statefile/roundtrip_test.go @@ -2,7 +2,6 @@ package statefile import ( "bytes" - "encoding/json" "io/ioutil" "os" "path/filepath" @@ -11,8 +10,6 @@ import ( "testing" "github.com/go-test/deep" - - tfversion "github.com/hashicorp/terraform/version" ) func TestRoundtrip(t *testing.T) { @@ -22,8 +19,6 @@ func TestRoundtrip(t *testing.T) { t.Fatal(err) } - currentVersion := tfversion.Version - for _, info := range entries { const inSuffix = ".in.tfstate" const outSuffix = ".out.tfstate" @@ -39,14 +34,20 @@ func TestRoundtrip(t *testing.T) { outName := name + outSuffix t.Run(name, func(t *testing.T) { - ir, err := os.Open(filepath.Join(dir, inName)) - if err != nil { - t.Fatal(err) - } oSrcWant, err := ioutil.ReadFile(filepath.Join(dir, outName)) if err != nil { t.Fatal(err) } + oWant, diags := readStateV4(oSrcWant) + if diags.HasErrors() { + t.Fatal(diags.Err()) + } + + ir, err := os.Open(filepath.Join(dir, inName)) + if err != nil { + 
t.Fatal(err) + } + defer ir.Close() f, err := Read(ir) if err != nil { @@ -58,20 +59,12 @@ func TestRoundtrip(t *testing.T) { if err != nil { t.Fatal(err) } - oSrcGot := buf.Bytes() + oSrcWritten := buf.Bytes() - var oGot, oWant map[string]interface{} - err = json.Unmarshal(oSrcGot, &oGot) - if err != nil { - t.Fatalf("result isn't JSON: %s", err) + oGot, diags := readStateV4(oSrcWritten) + if diags.HasErrors() { + t.Fatal(diags.Err()) } - err = json.Unmarshal(oSrcWant, &oWant) - if err != nil { - t.Fatalf("wanted result isn't JSON: %s", err) - } - - // A newly written state should always reflect the current terraform version. - oWant["terraform_version"] = currentVersion problems := deep.Equal(oGot, oWant) sort.Strings(problems) diff --git a/states/statefile/testdata/roundtrip/v1-simple.out.tfstate b/states/statefile/testdata/roundtrip/v1-simple.out.tfstate index eb9d68db0..7b9c18023 100644 --- a/states/statefile/testdata/roundtrip/v1-simple.out.tfstate +++ b/states/statefile/testdata/roundtrip/v1-simple.out.tfstate @@ -14,7 +14,7 @@ "mode": "managed", "type": "null_resource", "name": "bar", - "provider": "provider.null", + "provider": "provider[\"registry.terraform.io/-/null\"]", "instances": [ { "schema_version": 0, @@ -23,7 +23,9 @@ "triggers.%": "1", "triggers.whaaat": "0,1" }, - "depends_on": ["null_resource.foo"] + "depends_on": [ + "null_resource.foo" + ] } ] }, @@ -31,7 +33,7 @@ "mode": "managed", "type": "null_resource", "name": "foo", - "provider": "provider.null", + "provider": "provider[\"registry.terraform.io/-/null\"]", "each": "list", "instances": [ { diff --git a/states/statefile/testdata/roundtrip/v3-bigint.out.tfstate b/states/statefile/testdata/roundtrip/v3-bigint.out.tfstate index 7a342b8d6..cac5cd019 100644 --- a/states/statefile/testdata/roundtrip/v3-bigint.out.tfstate +++ b/states/statefile/testdata/roundtrip/v3-bigint.out.tfstate @@ -28,7 +28,7 @@ "type": "null_resource", "name": "bar", "each": "list", - "provider": "provider.null", + "provider": "provider[\"registry.terraform.io/-/null\"]", "instances": [ { "attributes_flat": { @@ -36,7 +36,9 @@ "triggers.%": "1", "triggers.index": "0" }, - "depends_on": ["null_resource.baz"], + "depends_on": [ + "null_resource.baz" + ], "index_key": 0, "schema_version": 1 }, @@ -46,7 +48,9 @@ "triggers.%": "1", "triggers.index": "1" }, - "depends_on": ["null_resource.baz"], + "depends_on": [ + "null_resource.baz" + ], "index_key": 1, "schema_version": 0 } @@ -56,7 +60,7 @@ "mode": "managed", "type": "null_resource", "name": "baz", - "provider": "provider.null", + "provider": "provider[\"registry.terraform.io/-/null\"]", "instances": [ { "attributes_flat": { @@ -82,7 +86,7 @@ "mode": "managed", "type": "null_resource", "name": "foo", - "provider": "provider.null", + "provider": "provider[\"registry.terraform.io/-/null\"]", "instances": [ { "attributes_flat": { diff --git a/states/statefile/testdata/roundtrip/v3-grabbag.out.tfstate b/states/statefile/testdata/roundtrip/v3-grabbag.out.tfstate index 7a342b8d6..cac5cd019 100644 --- a/states/statefile/testdata/roundtrip/v3-grabbag.out.tfstate +++ b/states/statefile/testdata/roundtrip/v3-grabbag.out.tfstate @@ -28,7 +28,7 @@ "type": "null_resource", "name": "bar", "each": "list", - "provider": "provider.null", + "provider": "provider[\"registry.terraform.io/-/null\"]", "instances": [ { "attributes_flat": { @@ -36,7 +36,9 @@ "triggers.%": "1", "triggers.index": "0" }, - "depends_on": ["null_resource.baz"], + "depends_on": [ + "null_resource.baz" + ], "index_key": 0, "schema_version": 1 
}, @@ -46,7 +48,9 @@ "triggers.%": "1", "triggers.index": "1" }, - "depends_on": ["null_resource.baz"], + "depends_on": [ + "null_resource.baz" + ], "index_key": 1, "schema_version": 0 } @@ -56,7 +60,7 @@ "mode": "managed", "type": "null_resource", "name": "baz", - "provider": "provider.null", + "provider": "provider[\"registry.terraform.io/-/null\"]", "instances": [ { "attributes_flat": { @@ -82,7 +86,7 @@ "mode": "managed", "type": "null_resource", "name": "foo", - "provider": "provider.null", + "provider": "provider[\"registry.terraform.io/-/null\"]", "instances": [ { "attributes_flat": { diff --git a/states/statefile/testdata/roundtrip/v3-invalid-depends.in.tfstate b/states/statefile/testdata/roundtrip/v3-invalid-depends.in.tfstate new file mode 100644 index 000000000..6943b6139 --- /dev/null +++ b/states/statefile/testdata/roundtrip/v3-invalid-depends.in.tfstate @@ -0,0 +1,42 @@ +{ + "version": 3, + "terraform_version": "0.7.13", + "serial": 0, + "lineage": "f2968801-fa14-41ab-a044-224f3a4adf04", + "modules": [ + { + "path": [ + "root" + ], + "outputs": { + "numbers": { + "sensitive": false, + "type": "string", + "value": "0,1" + } + }, + "resources": { + "null_resource.bar": { + "type": "null_resource", + "depends_on": [ + "null_resource.valid", + "null_resource.1invalid" + ], + "primary": { + "id": "5388490630832483079", + "attributes": { + "id": "5388490630832483079", + "triggers.%": "1", + "triggers.whaaat": "0,1" + }, + "meta": {}, + "tainted": false + }, + "deposed": [], + "provider": "" + } + }, + "depends_on": [] + } + ] +} \ No newline at end of file diff --git a/states/statefile/testdata/roundtrip/v3-invalid-depends.out.tfstate b/states/statefile/testdata/roundtrip/v3-invalid-depends.out.tfstate new file mode 100644 index 000000000..2afd8d787 --- /dev/null +++ b/states/statefile/testdata/roundtrip/v3-invalid-depends.out.tfstate @@ -0,0 +1,33 @@ +{ + "version": 4, + "serial": 0, + "lineage": "f2968801-fa14-41ab-a044-224f3a4adf04", + "terraform_version": "0.7.13", + "outputs": { + "numbers": { + "type": "string", + "value": "0,1" + } + }, + "resources": [ + { + "mode": "managed", + "type": "null_resource", + "name": "bar", + "provider": "provider[\"registry.terraform.io/-/null\"]", + "instances": [ + { + "schema_version": 0, + "attributes_flat": { + "id": "5388490630832483079", + "triggers.%": "1", + "triggers.whaaat": "0,1" + }, + "depends_on": [ + "null_resource.valid" + ] + } + ] + } + ] +} diff --git a/states/statefile/testdata/roundtrip/v3-simple.out.tfstate b/states/statefile/testdata/roundtrip/v3-simple.out.tfstate index bcc6bcc83..2cae1a5ef 100644 --- a/states/statefile/testdata/roundtrip/v3-simple.out.tfstate +++ b/states/statefile/testdata/roundtrip/v3-simple.out.tfstate @@ -14,7 +14,7 @@ "mode": "managed", "type": "null_resource", "name": "bar", - "provider": "provider.null", + "provider": "provider[\"registry.terraform.io/-/null\"]", "instances": [ { "schema_version": 0, @@ -35,7 +35,7 @@ "mode": "managed", "type": "null_resource", "name": "foo", - "provider": "provider.null", + "provider": "provider[\"registry.terraform.io/-/null\"]", "each": "list", "instances": [ { @@ -62,7 +62,7 @@ "mode": "managed", "type": "null_resource", "name": "foobar", - "provider": "provider.null", + "provider": "provider[\"registry.terraform.io/-/null\"]", "instances": [ { "schema_version": 0, @@ -75,4 +75,4 @@ ] } ] -} \ No newline at end of file +} diff --git a/states/statefile/testdata/roundtrip/v4-foreach.in.tfstate b/states/statefile/testdata/roundtrip/v4-foreach.in.tfstate new 
file mode 100644 index 000000000..dbca333e1 --- /dev/null +++ b/states/statefile/testdata/roundtrip/v4-foreach.in.tfstate @@ -0,0 +1,36 @@ +{ + "version": 4, + "serial": 0, + "lineage": "f2968801-fa14-41ab-a044-224f3a4adf04", + "terraform_version": "0.12.0", + "outputs": { + "numbers": { + "type": "string", + "value": "0,1" + } + }, + "resources": [ + { + "module": "module.modA", + "mode": "managed", + "type": "null_resource", + "name": "resource", + "provider": "provider[\"registry.terraform.io/-/null\"]", + "instances": [ + { + "schema_version": 0, + "attributes": { + "id": "4639265839606265182", + "triggers": { + "input": "test" + } + }, + "private": "bnVsbA==", + "depends_on": [ + "var.input" + ] + } + ] + } + ] +} diff --git a/states/statefile/testdata/roundtrip/v4-foreach.out.tfstate b/states/statefile/testdata/roundtrip/v4-foreach.out.tfstate new file mode 120000 index 000000000..d35986e2e --- /dev/null +++ b/states/statefile/testdata/roundtrip/v4-foreach.out.tfstate @@ -0,0 +1 @@ +v4-foreach.in.tfstate \ No newline at end of file diff --git a/states/statefile/testdata/roundtrip/v4-legacy-foreach.in.tfstate b/states/statefile/testdata/roundtrip/v4-legacy-foreach.in.tfstate new file mode 100644 index 000000000..0b5085f9a --- /dev/null +++ b/states/statefile/testdata/roundtrip/v4-legacy-foreach.in.tfstate @@ -0,0 +1,36 @@ +{ + "version": 4, + "serial": 0, + "lineage": "f2968801-fa14-41ab-a044-224f3a4adf04", + "terraform_version": "0.12.0", + "outputs": { + "numbers": { + "type": "string", + "value": "0,1" + } + }, + "resources": [ + { + "module": "module.modA", + "mode": "managed", + "type": "null_resource", + "name": "resource", + "provider": "provider.null", + "instances": [ + { + "schema_version": 0, + "attributes": { + "id": "4639265839606265182", + "triggers": { + "input": "test" + } + }, + "private": "bnVsbA==", + "depends_on": [ + "var.input" + ] + } + ] + } + ] +} diff --git a/states/statefile/testdata/roundtrip/v4-legacy-foreach.out.tfstate b/states/statefile/testdata/roundtrip/v4-legacy-foreach.out.tfstate new file mode 100644 index 000000000..dbca333e1 --- /dev/null +++ b/states/statefile/testdata/roundtrip/v4-legacy-foreach.out.tfstate @@ -0,0 +1,36 @@ +{ + "version": 4, + "serial": 0, + "lineage": "f2968801-fa14-41ab-a044-224f3a4adf04", + "terraform_version": "0.12.0", + "outputs": { + "numbers": { + "type": "string", + "value": "0,1" + } + }, + "resources": [ + { + "module": "module.modA", + "mode": "managed", + "type": "null_resource", + "name": "resource", + "provider": "provider[\"registry.terraform.io/-/null\"]", + "instances": [ + { + "schema_version": 0, + "attributes": { + "id": "4639265839606265182", + "triggers": { + "input": "test" + } + }, + "private": "bnVsbA==", + "depends_on": [ + "var.input" + ] + } + ] + } + ] +} diff --git a/states/statefile/testdata/roundtrip/v4-legacy-modules.in.tfstate b/states/statefile/testdata/roundtrip/v4-legacy-modules.in.tfstate new file mode 100644 index 000000000..0e892ef55 --- /dev/null +++ b/states/statefile/testdata/roundtrip/v4-legacy-modules.in.tfstate @@ -0,0 +1,88 @@ +{ + "version": 4, + "terraform_version": "0.12.0", + "serial": 0, + "lineage": "f2968801-fa14-41ab-a044-224f3a4adf04", + "outputs": { + "numbers": { + "value": "0,1", + "type": "string" + } + }, + "resources": [ + { + "mode": "managed", + "type": "null_resource", + "name": "bar", + "provider": "provider.null", + "instances": [ + { + "schema_version": 0, + "attributes_flat": { + "id": "5388490630832483079", + "triggers.%": "1", + "triggers.whaaat": "0,1" + 
},
+          "depends_on": [
+            "null_resource.foo"
+          ]
+        }
+      ]
+    },
+    {
+      "module": "module.modB",
+      "mode": "managed",
+      "type": "null_resource",
+      "name": "bar",
+      "each": "map",
+      "provider": "provider.null",
+      "instances": [
+        {
+          "index_key": "a",
+          "schema_version": 0,
+          "attributes_flat": {
+            "id": "8212585058302700791"
+          },
+          "dependencies": [
+            "module.modA.null_resource.resource"
+          ]
+        },
+        {
+          "index_key": "b",
+          "schema_version": 0,
+          "attributes_flat": {
+            "id": "1523897709610803586"
+          },
+          "dependencies": [
+            "module.modA.null_resource.resource"
+          ]
+        }
+      ]
+    },
+    {
+      "module": "module.modA",
+      "mode": "managed",
+      "type": "null_resource",
+      "name": "resource",
+      "provider": "provider.null",
+      "instances": [
+        {
+          "schema_version": 0,
+          "attributes": {
+            "id": "4639265839606265182",
+            "triggers": {
+              "input": "test"
+            }
+          },
+          "private": "bnVsbA==",
+          "dependencies": [
+            "null_resource.bar"
+          ],
+          "depends_on": [
+            "var.input"
+          ]
+        }
+      ]
+    }
+  ]
+}
diff --git a/states/statefile/testdata/roundtrip/v4-legacy-modules.out.tfstate b/states/statefile/testdata/roundtrip/v4-legacy-modules.out.tfstate
new file mode 100644
index 000000000..b9ccd7cf7
--- /dev/null
+++ b/states/statefile/testdata/roundtrip/v4-legacy-modules.out.tfstate
@@ -0,0 +1,88 @@
+{
+  "version": 4,
+  "terraform_version": "0.12.0",
+  "serial": 0,
+  "lineage": "f2968801-fa14-41ab-a044-224f3a4adf04",
+  "outputs": {
+    "numbers": {
+      "value": "0,1",
+      "type": "string"
+    }
+  },
+  "resources": [
+    {
+      "mode": "managed",
+      "type": "null_resource",
+      "name": "bar",
+      "provider": "provider[\"registry.terraform.io/-/null\"]",
+      "instances": [
+        {
+          "schema_version": 0,
+          "attributes_flat": {
+            "id": "5388490630832483079",
+            "triggers.%": "1",
+            "triggers.whaaat": "0,1"
+          },
+          "depends_on": [
+            "null_resource.foo"
+          ]
+        }
+      ]
+    },
+    {
+      "module": "module.modB",
+      "mode": "managed",
+      "type": "null_resource",
+      "name": "bar",
+      "each": "map",
+      "provider": "provider[\"registry.terraform.io/-/null\"]",
+      "instances": [
+        {
+          "index_key": "a",
+          "schema_version": 0,
+          "attributes_flat": {
+            "id": "8212585058302700791"
+          },
+          "dependencies": [
+            "module.modA.null_resource.resource"
+          ]
+        },
+        {
+          "index_key": "b",
+          "schema_version": 0,
+          "attributes_flat": {
+            "id": "1523897709610803586"
+          },
+          "dependencies": [
+            "module.modA.null_resource.resource"
+          ]
+        }
+      ]
+    },
+    {
+      "module": "module.modA",
+      "mode": "managed",
+      "type": "null_resource",
+      "name": "resource",
+      "provider": "provider[\"registry.terraform.io/-/null\"]",
+      "instances": [
+        {
+          "schema_version": 0,
+          "attributes": {
+            "id": "4639265839606265182",
+            "triggers": {
+              "input": "test"
+            }
+          },
+          "private": "bnVsbA==",
+          "dependencies": [
+            "null_resource.bar"
+          ],
+          "depends_on": [
+            "var.input"
+          ]
+        }
+      ]
+    }
+  ]
+}
diff --git a/states/statefile/testdata/roundtrip/v4-legacy-simple.in.tfstate b/states/statefile/testdata/roundtrip/v4-legacy-simple.in.tfstate
new file mode 100644
index 000000000..2924215a9
--- /dev/null
+++ b/states/statefile/testdata/roundtrip/v4-legacy-simple.in.tfstate
@@ -0,0 +1,60 @@
+{
+  "version": 4,
+  "serial": 0,
+  "lineage": "f2968801-fa14-41ab-a044-224f3a4adf04",
+  "terraform_version": "0.12.0",
+  "outputs": {
+    "numbers": {
+      "type": "string",
+      "value": "0,1"
+    }
+  },
+  "resources": [
+    {
+      "mode": "managed",
+      "type": "null_resource",
+      "name": "bar",
+      "provider": "provider.null",
+      "instances": [
+        {
+          "schema_version": 0,
+          "attributes_flat": {
+            "id": "5388490630832483079",
+            "triggers.%": "1",
+            "triggers.whaaat": "0,1"
+          },
+
"depends_on": [ + "null_resource.foo" + ] + } + ] + }, + { + "mode": "managed", + "type": "null_resource", + "name": "foo", + "provider": "provider.null", + "each": "list", + "instances": [ + { + "index_key": 0, + "schema_version": 0, + "attributes_flat": { + "id": "8212585058302700791", + "triggers.%": "1", + "triggers.what": "0" + } + }, + { + "index_key": 1, + "schema_version": 0, + "attributes_flat": { + "id": "1523897709610803586", + "triggers.%": "1", + "triggers.what": "0" + } + } + ] + } + ] +} diff --git a/states/statefile/testdata/roundtrip/v4-legacy-simple.out.tfstate b/states/statefile/testdata/roundtrip/v4-legacy-simple.out.tfstate new file mode 100644 index 000000000..5d3c0af9f --- /dev/null +++ b/states/statefile/testdata/roundtrip/v4-legacy-simple.out.tfstate @@ -0,0 +1,60 @@ +{ + "version": 4, + "serial": 0, + "lineage": "f2968801-fa14-41ab-a044-224f3a4adf04", + "terraform_version": "0.12.0", + "outputs": { + "numbers": { + "type": "string", + "value": "0,1" + } + }, + "resources": [ + { + "mode": "managed", + "type": "null_resource", + "name": "bar", + "provider": "provider[\"registry.terraform.io/-/null\"]", + "instances": [ + { + "schema_version": 0, + "attributes_flat": { + "id": "5388490630832483079", + "triggers.%": "1", + "triggers.whaaat": "0,1" + }, + "depends_on": [ + "null_resource.foo" + ] + } + ] + }, + { + "mode": "managed", + "type": "null_resource", + "name": "foo", + "provider": "provider.null", + "each": "list", + "instances": [ + { + "index_key": 0, + "schema_version": 0, + "attributes_flat": { + "id": "8212585058302700791", + "triggers.%": "1", + "triggers.what": "0" + } + }, + { + "index_key": 1, + "schema_version": 0, + "attributes_flat": { + "id": "1523897709610803586", + "triggers.%": "1", + "triggers.what": "0" + } + } + ] + } + ] +} diff --git a/states/statefile/testdata/roundtrip/v4-modules.in.tfstate b/states/statefile/testdata/roundtrip/v4-modules.in.tfstate new file mode 100644 index 000000000..b9ccd7cf7 --- /dev/null +++ b/states/statefile/testdata/roundtrip/v4-modules.in.tfstate @@ -0,0 +1,88 @@ +{ + "version": 4, + "terraform_version": "0.12.0", + "serial": 0, + "lineage": "f2968801-fa14-41ab-a044-224f3a4adf04", + "outputs": { + "numbers": { + "value": "0,1", + "type": "string" + } + }, + "resources": [ + { + "mode": "managed", + "type": "null_resource", + "name": "bar", + "provider": "provider[\"registry.terraform.io/-/null\"]", + "instances": [ + { + "schema_version": 0, + "attributes_flat": { + "id": "5388490630832483079", + "triggers.%": "1", + "triggers.whaaat": "0,1" + }, + "depends_on": [ + "null_resource.foo" + ] + } + ] + }, + { + "module": "module.modB", + "mode": "managed", + "type": "null_resource", + "name": "bar", + "each": "map", + "provider": "provider[\"registry.terraform.io/-/null\"]", + "instances": [ + { + "index_key": "a", + "schema_version": 0, + "attributes_flat": { + "id": "8212585058302700791" + }, + "dependencies": [ + "module.modA.null_resource.resource" + ] + }, + { + "index_key": "b", + "schema_version": 0, + "attributes_flat": { + "id": "1523897709610803586" + }, + "dependencies": [ + "module.modA.null_resource.resource" + ] + } + ] + }, + { + "module": "module.modA", + "mode": "managed", + "type": "null_resource", + "name": "resource", + "provider": "provider[\"registry.terraform.io/-/null\"]", + "instances": [ + { + "schema_version": 0, + "attributes": { + "id": "4639265839606265182", + "triggers": { + "input": "test" + } + }, + "private": "bnVsbA==", + "dependencies": [ + "null_resource.bar" + ], + 
"depends_on": [ + "var.input" + ] + } + ] + } + ] +} diff --git a/states/statefile/testdata/roundtrip/v4-modules.out.tfstate b/states/statefile/testdata/roundtrip/v4-modules.out.tfstate new file mode 120000 index 000000000..009f759ed --- /dev/null +++ b/states/statefile/testdata/roundtrip/v4-modules.out.tfstate @@ -0,0 +1 @@ +v4-modules.in.tfstate \ No newline at end of file diff --git a/states/statefile/testdata/roundtrip/v4-simple.in.tfstate b/states/statefile/testdata/roundtrip/v4-simple.in.tfstate index 5c61e645d..5d3c0af9f 100644 --- a/states/statefile/testdata/roundtrip/v4-simple.in.tfstate +++ b/states/statefile/testdata/roundtrip/v4-simple.in.tfstate @@ -14,7 +14,7 @@ "mode": "managed", "type": "null_resource", "name": "bar", - "provider": "provider.null", + "provider": "provider[\"registry.terraform.io/-/null\"]", "instances": [ { "schema_version": 0, @@ -23,7 +23,9 @@ "triggers.%": "1", "triggers.whaaat": "0,1" }, - "depends_on": ["null_resource.foo"] + "depends_on": [ + "null_resource.foo" + ] } ] }, diff --git a/states/statefile/version3_upgrade.go b/states/statefile/version3_upgrade.go index fbec5477c..36153faae 100644 --- a/states/statefile/version3_upgrade.go +++ b/states/statefile/version3_upgrade.go @@ -3,13 +3,16 @@ package statefile import ( "encoding/json" "fmt" + "log" "strconv" "strings" + "github.com/hashicorp/hcl/v2/hclsyntax" "github.com/zclconf/go-cty/cty" ctyjson "github.com/zclconf/go-cty/cty/json" "github.com/hashicorp/terraform/addrs" + "github.com/hashicorp/terraform/configs" "github.com/hashicorp/terraform/states" "github.com/hashicorp/terraform/tfdiags" ) @@ -50,6 +53,13 @@ func upgradeStateV3ToV4(old *stateV3) (*stateV4, error) { // all of the modules are unkeyed. moduleAddr := make(addrs.ModuleInstance, len(msOld.Path)-1) for i, name := range msOld.Path[1:] { + if !hclsyntax.ValidIdentifier(name) { + // If we don't fail here then we'll produce an invalid state + // version 4 which subsequent operations will reject, so we'll + // fail early here for safety to make sure we can never + // inadvertently commit an invalid snapshot to a backend. + return nil, fmt.Errorf("state contains invalid module path %#v: %q is not a valid identifier; rename it in Terraform 0.11 before upgrading to Terraform 0.12", msOld.Path, name) + } moduleAddr[i] = addrs.ModuleInstanceStep{ Name: name, InstanceKey: addrs.NoKey, @@ -96,8 +106,15 @@ func upgradeStateV3ToV4(old *stateV3) (*stateV4, error) { if strings.Contains(oldProviderAddr, "provider.") { // Smells like a new-style provider address, but we'll test it. var diags tfdiags.Diagnostics - providerAddr, diags = addrs.ParseAbsProviderConfigStr(oldProviderAddr) + providerAddr, diags = addrs.ParseLegacyAbsProviderConfigStr(oldProviderAddr) if diags.HasErrors() { + if strings.Contains(oldProviderAddr, "${") { + // There seems to be a common misconception that + // interpolation was valid in provider aliases + // in 0.11, so we'll use a specialized error + // message for that case. + return nil, fmt.Errorf("invalid provider config reference %q for %s: this alias seems to contain a template interpolation sequence, which was not supported but also not error-checked in Terraform 0.11. 
To proceed, rename the associated provider alias to a valid identifier and apply the change with Terraform 0.11 before upgrading to Terraform 0.12", oldProviderAddr, instAddr)
+				}
 				return nil, fmt.Errorf("invalid provider config reference %q for %s: %s", oldProviderAddr, instAddr, diags.Err())
 			}
 		} else {
@@ -107,13 +124,27 @@
 			// incorrect but it'll get fixed up next time any updates
 			// are made to an instance.
 			if oldProviderAddr != "" {
-				localAddr, diags := addrs.ParseProviderConfigCompactStr(oldProviderAddr)
+				localAddr, diags := configs.ParseProviderConfigCompactStr(oldProviderAddr)
 				if diags.HasErrors() {
+					if strings.Contains(oldProviderAddr, "${") {
+						// There seems to be a common misconception that
+						// interpolation was valid in provider aliases
+						// in 0.11, so we'll use a specialized error
+						// message for that case.
+						return nil, fmt.Errorf("invalid legacy provider config reference %q for %s: this alias seems to contain a template interpolation sequence, which was not supported but also not error-checked in Terraform 0.11. To proceed, rename the associated provider alias to a valid identifier and apply the change with Terraform 0.11 before upgrading to Terraform 0.12", oldProviderAddr, instAddr)
+					}
 					return nil, fmt.Errorf("invalid legacy provider config reference %q for %s: %s", oldProviderAddr, instAddr, diags.Err())
 				}
-				providerAddr = localAddr.Absolute(moduleAddr)
+				providerAddr = addrs.AbsProviderConfig{
+					Module:   moduleAddr,
+					Provider: addrs.NewLegacyProvider(localAddr.LocalName),
+					Alias:    localAddr.Alias,
+				}
 			} else {
-				providerAddr = resAddr.DefaultProviderConfig().Absolute(moduleAddr)
+				providerAddr = addrs.AbsProviderConfig{
+					Module:   moduleAddr,
+					Provider: resAddr.DefaultProvider(),
+				}
 			}
 		}
@@ -299,13 +330,33 @@
 		}
 	}
-	dependencies := make([]string, len(rsOld.Dependencies))
-	for i, v := range rsOld.Dependencies {
+	dependencies := make([]string, 0, len(rsOld.Dependencies))
+	for _, v := range rsOld.Dependencies {
 		depStr, err := parseLegacyDependency(v)
 		if err != nil {
-			return nil, fmt.Errorf("invalid dependency reference %q: %s", v, err)
+			// We just drop invalid dependencies on the floor here, because
+			// they tend to get left behind in Terraform 0.11 when resources
+			// are renamed or moved between modules and there's no automatic
+			// way to fix them here. In practice it shouldn't hurt to miss
+			// a few dependency edges in the state because a subsequent plan
+			// will run a refresh walk first and re-synchronize the
+			// dependencies with the configuration.
+			//
+			// There is one rough edge where this can cause an incorrect
+			// result, though: If the first command the user runs after
+			// upgrading to Terraform 0.12 uses -refresh=false and thus
+			// prevents the dependency reorganization from occurring _and_
+			// that initial plan discovered "orphaned" resources (not present
+			// in configuration any longer) then when the plan is applied the
+			// destroy ordering will be incorrect for the instances of those
+			// resources. We expect that is a rare enough situation that it
+			// isn't a big deal, and even when it _does_ occur it's common for
+			// the apply to succeed anyway unless many separate resources with
+			// complex inter-dependencies are all orphaned at once.
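+			//
+			// As a hypothetical illustration of this policy (the addresses
+			// below are invented for this example, not taken from any real
+			// state): if rsOld.Dependencies were
+			//
+			//     ["null_resource.ok", "not a valid address!"]
+			//
+			// then the second entry would fail parseLegacyDependency, be
+			// logged just below, and the upgraded instance would record
+			// only "null_resource.ok" in its dependency list.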
+ log.Printf("statefile: ignoring invalid dependency address %q while upgrading from state version 3 to version 4: %s", v, err) + continue } - dependencies[i] = depStr + dependencies = append(dependencies, depStr) } return &instanceObjectStateV4{ @@ -313,7 +364,7 @@ func upgradeInstanceObjectV3ToV4(rsOld *resourceStateV2, isOld *instanceStateV2, Status: status, Deposed: string(deposedKey), AttributesFlat: attributes, - Dependencies: dependencies, + DependsOn: dependencies, SchemaVersion: schemaVersion, PrivateRaw: privateJSON, }, nil diff --git a/states/statefile/version4.go b/states/statefile/version4.go index ee8b65236..adde804f1 100644 --- a/states/statefile/version4.go +++ b/states/statefile/version4.go @@ -84,7 +84,15 @@ func prepareStateV4(sV4 *stateV4) (*File, tfdiags.Diagnostics) { providerAddr, addrDiags := addrs.ParseAbsProviderConfigStr(rsV4.ProviderConfig) diags.Append(addrDiags) if addrDiags.HasErrors() { - continue + // If ParseAbsProviderConfigStr returns an error, the state may have + // been written before Provider FQNs were introduced and the + // AbsProviderConfig string format will need normalization. If so, + // we assume it is a default (hashicorp) provider. + var legacyAddrDiags tfdiags.Diagnostics + providerAddr, legacyAddrDiags = addrs.ParseLegacyAbsProviderConfigStr(rsV4.ProviderConfig) + if legacyAddrDiags.HasErrors() { + continue + } } var eachMode states.EachMode @@ -181,7 +189,10 @@ func prepareStateV4(sV4 *stateV4) (*File, tfdiags.Diagnostics) { } { - depsRaw := isV4.Dependencies + // Allow both the deprecated `depends_on` and new + // `dependencies` to coexist for now so resources can be + // upgraded as they are refreshed. + depsRaw := isV4.DependsOn deps := make([]addrs.Referenceable, 0, len(depsRaw)) for _, depRaw := range depsRaw { ref, refDiags := addrs.ParseRefStr(depRaw) @@ -202,6 +213,20 @@ func prepareStateV4(sV4 *stateV4) (*File, tfdiags.Diagnostics) { } deps = append(deps, ref.Subject) } + obj.DependsOn = deps + } + + { + depsRaw := isV4.Dependencies + deps := make([]addrs.AbsResource, 0, len(depsRaw)) + for _, depRaw := range depsRaw { + addr, addrDiags := addrs.ParseAbsResourceStr(depRaw) + diags = diags.Append(addrDiags) + if addrDiags.HasErrors() { + continue + } + deps = append(deps, addr) + } obj.Dependencies = deps } @@ -466,6 +491,11 @@ func appendInstanceObjectStateV4(rs *states.Resource, is *states.ResourceInstanc deps[i] = depAddr.String() } + depOn := make([]string, len(obj.DependsOn)) + for i, depAddr := range obj.DependsOn { + depOn[i] = depAddr.String() + } + var rawKey interface{} switch tk := key.(type) { case addrs.IntKey: @@ -491,6 +521,7 @@ func appendInstanceObjectStateV4(rs *states.Resource, is *states.ResourceInstanc AttributesRaw: obj.AttrsJSON, PrivateRaw: privateRaw, Dependencies: deps, + DependsOn: depOn, }), diags } @@ -540,7 +571,8 @@ type instanceObjectStateV4 struct { PrivateRaw []byte `json:"private,omitempty"` - Dependencies []string `json:"depends_on,omitempty"` + Dependencies []string `json:"dependencies,omitempty"` + DependsOn []string `json:"depends_on,omitempty"` } // stateVersionV4 is a weird special type we use to produce our hard-coded diff --git a/states/statemgr/filesystem.go b/states/statemgr/filesystem.go index 8338e5741..541108dde 100644 --- a/states/statemgr/filesystem.go +++ b/states/statemgr/filesystem.go @@ -336,7 +336,7 @@ func (s *Filesystem) Unlock(id string) error { idErr := fmt.Errorf("invalid lock id: %q. 
current id: %q", id, s.lockID) info, err := s.lockInfo() if err != nil { - err = multierror.Append(idErr, err) + idErr = multierror.Append(idErr, err) } return &LockError{ diff --git a/states/statemgr/migrate.go b/states/statemgr/migrate.go index 8f0d6799e..9b55fe9a7 100644 --- a/states/statemgr/migrate.go +++ b/states/statemgr/migrate.go @@ -129,7 +129,7 @@ func Export(mgr Reader) *statefile.File { // is the receiver of that method and the "second" is the given argument. type SnapshotMetaRel rune -//go:generate stringer -type=SnapshotMetaRel +//go:generate go run golang.org/x/tools/cmd/stringer -type=SnapshotMetaRel const ( // SnapshotOlder indicates that two snapshots have a common lineage and diff --git a/states/statemgr/testing.go b/states/statemgr/testing.go index 8b2f2cbb3..2ded9ac7a 100644 --- a/states/statemgr/testing.go +++ b/states/statemgr/testing.go @@ -5,13 +5,11 @@ import ( "testing" "github.com/davecgh/go-spew/spew" - - "github.com/hashicorp/terraform/states/statefile" - - "github.com/hashicorp/terraform/addrs" "github.com/zclconf/go-cty/cty" + "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/states" + "github.com/hashicorp/terraform/states/statefile" ) // TestFull is a helper for testing full state manager implementations. It @@ -152,6 +150,10 @@ func TestFullInitialState() *states.State { Type: "null_resource", Name: "foo", } - childMod.SetResourceMeta(rAddr, states.EachList, rAddr.DefaultProviderConfig().Absolute(addrs.RootModuleInstance)) + providerAddr := addrs.AbsProviderConfig{ + Provider: rAddr.DefaultProvider(), + Module: addrs.RootModuleInstance, + } + childMod.SetResourceMeta(rAddr, states.EachList, providerAddr) return state } diff --git a/svchost/auth/helper_program_test.go b/svchost/auth/helper_program_test.go deleted file mode 100644 index f28a59903..000000000 --- a/svchost/auth/helper_program_test.go +++ /dev/null @@ -1,83 +0,0 @@ -package auth - -import ( - "os" - "path/filepath" - "testing" - - "github.com/hashicorp/terraform/svchost" -) - -func TestHelperProgramCredentialsSource(t *testing.T) { - wd, err := os.Getwd() - if err != nil { - t.Fatal(err) - } - - program := filepath.Join(wd, "testdata/test-helper") - t.Logf("testing with helper at %s", program) - - src := HelperProgramCredentialsSource(program) - - t.Run("happy path", func(t *testing.T) { - creds, err := src.ForHost(svchost.Hostname("example.com")) - if err != nil { - t.Fatal(err) - } - if tokCreds, isTok := creds.(HostCredentialsToken); isTok { - if got, want := string(tokCreds), "example-token"; got != want { - t.Errorf("wrong token %q; want %q", got, want) - } - } else { - t.Errorf("wrong type of credentials %T", creds) - } - }) - t.Run("no credentials", func(t *testing.T) { - creds, err := src.ForHost(svchost.Hostname("nothing.example.com")) - if err != nil { - t.Fatal(err) - } - if creds != nil { - t.Errorf("got credentials; want nil") - } - }) - t.Run("unsupported credentials type", func(t *testing.T) { - creds, err := src.ForHost(svchost.Hostname("other-cred-type.example.com")) - if err != nil { - t.Fatal(err) - } - if creds != nil { - t.Errorf("got credentials; want nil") - } - }) - t.Run("lookup error", func(t *testing.T) { - _, err := src.ForHost(svchost.Hostname("fail.example.com")) - if err == nil { - t.Error("completed successfully; want error") - } - }) - t.Run("store happy path", func(t *testing.T) { - err := src.StoreForHost(svchost.Hostname("example.com"), HostCredentialsToken("example-token")) - if err != nil { - t.Fatal(err) - } - }) - t.Run("store 
error", func(t *testing.T) { - err := src.StoreForHost(svchost.Hostname("fail.example.com"), HostCredentialsToken("example-token")) - if err == nil { - t.Error("completed successfully; want error") - } - }) - t.Run("forget happy path", func(t *testing.T) { - err := src.ForgetForHost(svchost.Hostname("example.com")) - if err != nil { - t.Fatal(err) - } - }) - t.Run("forget error", func(t *testing.T) { - err := src.ForgetForHost(svchost.Hostname("fail.example.com")) - if err == nil { - t.Error("completed successfully; want error") - } - }) -} diff --git a/svchost/auth/static_test.go b/svchost/auth/static_test.go deleted file mode 100644 index a24a888ea..000000000 --- a/svchost/auth/static_test.go +++ /dev/null @@ -1,38 +0,0 @@ -package auth - -import ( - "testing" - - "github.com/hashicorp/terraform/svchost" -) - -func TestStaticCredentialsSource(t *testing.T) { - src := StaticCredentialsSource(map[svchost.Hostname]map[string]interface{}{ - svchost.Hostname("example.com"): map[string]interface{}{ - "token": "abc123", - }, - }) - - t.Run("exists", func(t *testing.T) { - creds, err := src.ForHost(svchost.Hostname("example.com")) - if err != nil { - t.Fatal(err) - } - if tokCreds, isToken := creds.(HostCredentialsToken); isToken { - if got, want := string(tokCreds), "abc123"; got != want { - t.Errorf("wrong token %q; want %q", got, want) - } - } else { - t.Errorf("creds is %#v; want HostCredentialsToken", creds) - } - }) - t.Run("does not exist", func(t *testing.T) { - creds, err := src.ForHost(svchost.Hostname("example.net")) - if err != nil { - t.Fatal(err) - } - if creds != nil { - t.Errorf("creds is %#v; want nil", creds) - } - }) -} diff --git a/svchost/auth/testdata/.gitignore b/svchost/auth/testdata/.gitignore deleted file mode 100644 index ba2906d06..000000000 --- a/svchost/auth/testdata/.gitignore +++ /dev/null @@ -1 +0,0 @@ -main diff --git a/svchost/auth/testdata/main.go b/svchost/auth/testdata/main.go deleted file mode 100644 index 1abd5fc43..000000000 --- a/svchost/auth/testdata/main.go +++ /dev/null @@ -1,64 +0,0 @@ -package main - -import ( - "encoding/json" - "fmt" - "io/ioutil" - "os" -) - -// This is a simple program that implements the "helper program" protocol -// for the svchost/auth package for unit testing purposes. - -func main() { - args := os.Args - - if len(args) < 3 { - die("not enough arguments\n") - } - - host := args[2] - switch args[1] { - case "get": - switch host { - case "example.com": - fmt.Print(`{"token":"example-token"}`) - case "other-cred-type.example.com": - fmt.Print(`{"username":"alfred"}`) // unrecognized by main program - case "fail.example.com": - die("failing because you told me to fail\n") - default: - fmt.Print("{}") // no credentials available - } - case "store": - dataSrc, err := ioutil.ReadAll(os.Stdin) - if err != nil { - die("invalid input: %s", err) - } - var data map[string]interface{} - err = json.Unmarshal(dataSrc, &data) - - switch host { - case "example.com": - if data["token"] != "example-token" { - die("incorrect token value to store") - } - default: - die("can't store credentials for %s", host) - } - case "forget": - switch host { - case "example.com": - // okay! 
- default: - die("can't forget credentials for %s", host) - } - default: - die("unknown subcommand %q\n", args[1]) - } -} - -func die(f string, args ...interface{}) { - fmt.Fprintf(os.Stderr, fmt.Sprintf(f, args...)) - os.Exit(1) -} diff --git a/svchost/auth/testdata/test-helper b/svchost/auth/testdata/test-helper deleted file mode 100755 index 0ed3396c5..000000000 --- a/svchost/auth/testdata/test-helper +++ /dev/null @@ -1,7 +0,0 @@ -#!/usr/bin/env bash - -set -eu - -cd "$( dirname "${BASH_SOURCE[0]}" )" -[ -x main ] || go build -o main . -exec ./main "$@" diff --git a/svchost/auth/token_credentials_test.go b/svchost/auth/token_credentials_test.go deleted file mode 100644 index 61f2c9bd4..000000000 --- a/svchost/auth/token_credentials_test.go +++ /dev/null @@ -1,31 +0,0 @@ -package auth - -import ( - "net/http" - "testing" - - "github.com/zclconf/go-cty/cty" -) - -func TestHostCredentialsToken(t *testing.T) { - creds := HostCredentialsToken("foo-bar") - - { - req := &http.Request{} - creds.PrepareRequest(req) - authStr := req.Header.Get("authorization") - if got, want := authStr, "Bearer foo-bar"; got != want { - t.Errorf("wrong Authorization header value %q; want %q", got, want) - } - } - - { - got := creds.ToStore() - want := cty.ObjectVal(map[string]cty.Value{ - "token": cty.StringVal("foo-bar"), - }) - if !want.RawEquals(got) { - t.Errorf("wrong storable object value\ngot: %#v\nwant: %#v", got, want) - } - } -} diff --git a/svchost/disco/disco_test.go b/svchost/disco/disco_test.go deleted file mode 100644 index d5826e835..000000000 --- a/svchost/disco/disco_test.go +++ /dev/null @@ -1,357 +0,0 @@ -package disco - -import ( - "crypto/tls" - "net/http" - "net/http/httptest" - "net/url" - "os" - "strconv" - "testing" - - "github.com/hashicorp/terraform/svchost" - "github.com/hashicorp/terraform/svchost/auth" -) - -func TestMain(m *testing.M) { - // During all tests we override the HTTP transport we use for discovery - // so it'll tolerate the locally-generated TLS certificates we use - // for test URLs. 
- httpTransport = &http.Transport{ - TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, - } - - os.Exit(m.Run()) -} - -func TestDiscover(t *testing.T) { - t.Run("happy path", func(t *testing.T) { - portStr, close := testServer(func(w http.ResponseWriter, r *http.Request) { - resp := []byte(` -{ -"thingy.v1": "http://example.com/foo", -"wotsit.v2": "http://example.net/bar" -} -`) - w.Header().Add("Content-Type", "application/json") - w.Header().Add("Content-Length", strconv.Itoa(len(resp))) - w.Write(resp) - }) - defer close() - - givenHost := "localhost" + portStr - host, err := svchost.ForComparison(givenHost) - if err != nil { - t.Fatalf("test server hostname is invalid: %s", err) - } - - d := New() - discovered, err := d.Discover(host) - if err != nil { - t.Fatalf("unexpected discovery error: %s", err) - } - - gotURL, err := discovered.ServiceURL("thingy.v1") - if err != nil { - t.Fatalf("unexpected service URL error: %s", err) - } - if gotURL == nil { - t.Fatalf("found no URL for thingy.v1") - } - if got, want := gotURL.String(), "http://example.com/foo"; got != want { - t.Fatalf("wrong result %q; want %q", got, want) - } - }) - t.Run("chunked encoding", func(t *testing.T) { - portStr, close := testServer(func(w http.ResponseWriter, r *http.Request) { - resp := []byte(` -{ -"thingy.v1": "http://example.com/foo", -"wotsit.v2": "http://example.net/bar" -} -`) - w.Header().Add("Content-Type", "application/json") - // We're going to force chunked encoding here -- and thus prevent - // the server from predicting the length -- so we can make sure - // our client is tolerant of servers using this encoding. - w.Write(resp[:5]) - w.(http.Flusher).Flush() - w.Write(resp[5:]) - w.(http.Flusher).Flush() - }) - defer close() - - givenHost := "localhost" + portStr - host, err := svchost.ForComparison(givenHost) - if err != nil { - t.Fatalf("test server hostname is invalid: %s", err) - } - - d := New() - discovered, err := d.Discover(host) - if err != nil { - t.Fatalf("unexpected discovery error: %s", err) - } - - gotURL, err := discovered.ServiceURL("wotsit.v2") - if err != nil { - t.Fatalf("unexpected service URL error: %s", err) - } - if gotURL == nil { - t.Fatalf("found no URL for wotsit.v2") - } - if got, want := gotURL.String(), "http://example.net/bar"; got != want { - t.Fatalf("wrong result %q; want %q", got, want) - } - }) - t.Run("with credentials", func(t *testing.T) { - var authHeaderText string - portStr, close := testServer(func(w http.ResponseWriter, r *http.Request) { - resp := []byte(`{}`) - authHeaderText = r.Header.Get("Authorization") - w.Header().Add("Content-Type", "application/json") - w.Header().Add("Content-Length", strconv.Itoa(len(resp))) - w.Write(resp) - }) - defer close() - - givenHost := "localhost" + portStr - host, err := svchost.ForComparison(givenHost) - if err != nil { - t.Fatalf("test server hostname is invalid: %s", err) - } - - d := New() - d.SetCredentialsSource(auth.StaticCredentialsSource(map[svchost.Hostname]map[string]interface{}{ - host: map[string]interface{}{ - "token": "abc123", - }, - })) - d.Discover(host) - if got, want := authHeaderText, "Bearer abc123"; got != want { - t.Fatalf("wrong Authorization header\ngot: %s\nwant: %s", got, want) - } - }) - t.Run("forced services override", func(t *testing.T) { - forced := map[string]interface{}{ - "thingy.v1": "http://example.net/foo", - "wotsit.v2": "/foo", - } - - d := New() - d.ForceHostServices(svchost.Hostname("example.com"), forced) - - givenHost := "example.com" - host, err := 
svchost.ForComparison(givenHost) - if err != nil { - t.Fatalf("test server hostname is invalid: %s", err) - } - - discovered, err := d.Discover(host) - if err != nil { - t.Fatalf("unexpected discovery error: %s", err) - } - { - gotURL, err := discovered.ServiceURL("thingy.v1") - if err != nil { - t.Fatalf("unexpected service URL error: %s", err) - } - if gotURL == nil { - t.Fatalf("found no URL for thingy.v1") - } - if got, want := gotURL.String(), "http://example.net/foo"; got != want { - t.Fatalf("wrong result %q; want %q", got, want) - } - } - { - gotURL, err := discovered.ServiceURL("wotsit.v2") - if err != nil { - t.Fatalf("unexpected service URL error: %s", err) - } - if gotURL == nil { - t.Fatalf("found no URL for wotsit.v2") - } - if got, want := gotURL.String(), "https://example.com/foo"; got != want { - t.Fatalf("wrong result %q; want %q", got, want) - } - } - }) - t.Run("not JSON", func(t *testing.T) { - portStr, close := testServer(func(w http.ResponseWriter, r *http.Request) { - resp := []byte(`{"thingy.v1": "http://example.com/foo"}`) - w.Header().Add("Content-Type", "application/octet-stream") - w.Write(resp) - }) - defer close() - - givenHost := "localhost" + portStr - host, err := svchost.ForComparison(givenHost) - if err != nil { - t.Fatalf("test server hostname is invalid: %s", err) - } - - d := New() - discovered, err := d.Discover(host) - if err == nil { - t.Fatalf("expected a discovery error") - } - - // Returned discovered should be nil. - if discovered != nil { - t.Errorf("discovered not nil; should be") - } - }) - t.Run("malformed JSON", func(t *testing.T) { - portStr, close := testServer(func(w http.ResponseWriter, r *http.Request) { - resp := []byte(`{"thingy.v1": "htt`) // truncated, for example... - w.Header().Add("Content-Type", "application/json") - w.Write(resp) - }) - defer close() - - givenHost := "localhost" + portStr - host, err := svchost.ForComparison(givenHost) - if err != nil { - t.Fatalf("test server hostname is invalid: %s", err) - } - - d := New() - discovered, err := d.Discover(host) - if err == nil { - t.Fatalf("expected a discovery error") - } - - // Returned discovered should be nil. - if discovered != nil { - t.Errorf("discovered not nil; should be") - } - }) - t.Run("JSON with redundant charset", func(t *testing.T) { - // The JSON RFC defines no parameters for the application/json - // MIME type, but some servers have a weird tendency to just add - // "charset" to everything, so we'll make sure we ignore it successfully. - // (JSON uses content sniffing for encoding detection, not media type params.) 
- portStr, close := testServer(func(w http.ResponseWriter, r *http.Request) { - resp := []byte(`{"thingy.v1": "http://example.com/foo"}`) - w.Header().Add("Content-Type", "application/json; charset=latin-1") - w.Write(resp) - }) - defer close() - - givenHost := "localhost" + portStr - host, err := svchost.ForComparison(givenHost) - if err != nil { - t.Fatalf("test server hostname is invalid: %s", err) - } - - d := New() - discovered, err := d.Discover(host) - if err != nil { - t.Fatalf("unexpected discovery error: %s", err) - } - - if discovered.services == nil { - t.Errorf("response is empty; shouldn't be") - } - }) - t.Run("no discovery doc", func(t *testing.T) { - portStr, close := testServer(func(w http.ResponseWriter, r *http.Request) { - w.WriteHeader(404) - }) - defer close() - - givenHost := "localhost" + portStr - host, err := svchost.ForComparison(givenHost) - if err != nil { - t.Fatalf("test server hostname is invalid: %s", err) - } - - d := New() - discovered, err := d.Discover(host) - if err != nil { - t.Fatalf("unexpected discovery error: %s", err) - } - - // Returned discovered.services should be nil (empty). - if discovered.services != nil { - t.Errorf("discovered.services not nil (empty); should be") - } - }) - t.Run("redirect", func(t *testing.T) { - // For this test, we have two servers and one redirects to the other - portStr1, close1 := testServer(func(w http.ResponseWriter, r *http.Request) { - // This server is the one that returns a real response. - resp := []byte(`{"thingy.v1": "http://example.com/foo"}`) - w.Header().Add("Content-Type", "application/json") - w.Header().Add("Content-Length", strconv.Itoa(len(resp))) - w.Write(resp) - }) - portStr2, close2 := testServer(func(w http.ResponseWriter, r *http.Request) { - // This server is the one that redirects. - http.Redirect(w, r, "https://127.0.0.1"+portStr1+"/.well-known/terraform.json", 302) - }) - defer close1() - defer close2() - - givenHost := "localhost" + portStr2 - host, err := svchost.ForComparison(givenHost) - if err != nil { - t.Fatalf("test server hostname is invalid: %s", err) - } - - d := New() - discovered, err := d.Discover(host) - if err != nil { - t.Fatalf("unexpected discovery error: %s", err) - } - - gotURL, err := discovered.ServiceURL("thingy.v1") - if err != nil { - t.Fatalf("unexpected service URL error: %s", err) - } - if gotURL == nil { - t.Fatalf("found no URL for thingy.v1") - } - if got, want := gotURL.String(), "http://example.com/foo"; got != want { - t.Fatalf("wrong result %q; want %q", got, want) - } - - // The base URL for the host object should be the URL we redirected to, - // rather than the we redirected _from_. 
- gotBaseURL := discovered.discoURL.String() - wantBaseURL := "https://127.0.0.1" + portStr1 + "/.well-known/terraform.json" - if gotBaseURL != wantBaseURL { - t.Errorf("incorrect base url %s; want %s", gotBaseURL, wantBaseURL) - } - - }) -} - -func testServer(h func(w http.ResponseWriter, r *http.Request)) (portStr string, close func()) { - server := httptest.NewTLSServer(http.HandlerFunc( - func(w http.ResponseWriter, r *http.Request) { - // Test server always returns 404 if the URL isn't what we expect - if r.URL.Path != "/.well-known/terraform.json" { - w.WriteHeader(404) - w.Write([]byte("not found")) - return - } - - // If the URL is correct then the given hander decides the response - h(w, r) - }, - )) - - serverURL, _ := url.Parse(server.URL) - - portStr = serverURL.Port() - if portStr != "" { - portStr = ":" + portStr - } - - close = func() { - server.Close() - } - - return portStr, close -} diff --git a/svchost/disco/host_test.go b/svchost/disco/host_test.go deleted file mode 100644 index 91f7861ec..000000000 --- a/svchost/disco/host_test.go +++ /dev/null @@ -1,528 +0,0 @@ -package disco - -import ( - "fmt" - "net/http" - "net/http/httptest" - "net/url" - "os" - "path" - "reflect" - "strconv" - "strings" - "testing" - - "github.com/google/go-cmp/cmp" -) - -func TestHostServiceURL(t *testing.T) { - baseURL, _ := url.Parse("https://example.com/disco/foo.json") - host := Host{ - discoURL: baseURL, - hostname: "test-server", - services: map[string]interface{}{ - "absolute.v1": "http://example.net/foo/bar", - "absolutewithport.v1": "http://example.net:8080/foo/bar", - "relative.v1": "./stu/", - "rootrelative.v1": "/baz", - "protorelative.v1": "//example.net/", - "withfragment.v1": "http://example.org/#foo", - "querystring.v1": "https://example.net/baz?foo=bar", - "nothttp.v1": "ftp://127.0.0.1/pub/", - "invalid.v1": "***not A URL at all!:/<@@@@>***", - }, - } - - tests := []struct { - ID string - want string - err string - }{ - {"absolute.v1", "http://example.net/foo/bar", ""}, - {"absolutewithport.v1", "http://example.net:8080/foo/bar", ""}, - {"relative.v1", "https://example.com/disco/stu/", ""}, - {"rootrelative.v1", "https://example.com/baz", ""}, - {"protorelative.v1", "https://example.net/", ""}, - {"withfragment.v1", "http://example.org/", ""}, - {"querystring.v1", "https://example.net/baz?foo=bar", ""}, - {"nothttp.v1", "", "unsupported scheme"}, - {"invalid.v1", "", "Failed to parse service URL"}, - } - - for _, test := range tests { - t.Run(test.ID, func(t *testing.T) { - url, err := host.ServiceURL(test.ID) - if (err != nil || test.err != "") && - (err == nil || !strings.Contains(err.Error(), test.err)) { - t.Fatalf("unexpected service URL error: %s", err) - } - - var got string - if url != nil { - got = url.String() - } else { - got = "" - } - - if got != test.want { - t.Errorf("wrong result\ngot: %s\nwant: %s", got, test.want) - } - }) - } -} - -func TestHostServiceOAuthClient(t *testing.T) { - baseURL, _ := url.Parse("https://example.com/disco/foo.json") - host := Host{ - discoURL: baseURL, - hostname: "test-server", - services: map[string]interface{}{ - "explicitgranttype.v1": map[string]interface{}{ - "client": "explicitgranttype", - "authz": "./authz", - "token": "./token", - "grant_types": []interface{}{"authz_code", "password", "tbd"}, - }, - "customports.v1": map[string]interface{}{ - "client": "customports", - "authz": "./authz", - "token": "./token", - "ports": []interface{}{1025, 1026}, - }, - "invalidports.v1": map[string]interface{}{ - "client": 
"invalidports", - "authz": "./authz", - "token": "./token", - "ports": []interface{}{1, 65535}, - }, - "missingauthz.v1": map[string]interface{}{ - "client": "missingauthz", - "token": "./token", - }, - "missingtoken.v1": map[string]interface{}{ - "client": "missingtoken", - "authz": "./authz", - }, - "passwordmissingauthz.v1": map[string]interface{}{ - "client": "passwordmissingauthz", - "token": "./token", - "grant_types": []interface{}{"password"}, - }, - "absolute.v1": map[string]interface{}{ - "client": "absolute", - "authz": "http://example.net/foo/authz", - "token": "http://example.net/foo/token", - }, - "absolutewithport.v1": map[string]interface{}{ - "client": "absolutewithport", - "authz": "http://example.net:8000/foo/authz", - "token": "http://example.net:8000/foo/token", - }, - "relative.v1": map[string]interface{}{ - "client": "relative", - "authz": "./authz", - "token": "./token", - }, - "rootrelative.v1": map[string]interface{}{ - "client": "rootrelative", - "authz": "/authz", - "token": "/token", - }, - "protorelative.v1": map[string]interface{}{ - "client": "protorelative", - "authz": "//example.net/authz", - "token": "//example.net/token", - }, - "nothttp.v1": map[string]interface{}{ - "client": "nothttp", - "authz": "ftp://127.0.0.1/pub/authz", - "token": "ftp://127.0.0.1/pub/token", - }, - "invalidauthz.v1": map[string]interface{}{ - "client": "invalidauthz", - "authz": "***not A URL at all!:/<@@@@>***", - "token": "/foo", - }, - "invalidtoken.v1": map[string]interface{}{ - "client": "invalidauthz", - "authz": "/foo", - "token": "***not A URL at all!:/<@@@@>***", - }, - }, - } - - mustURL := func(t *testing.T, s string) *url.URL { - t.Helper() - u, err := url.Parse(s) - if err != nil { - t.Fatalf("invalid wanted URL %s in test case: %s", s, err) - } - return u - } - - tests := []struct { - ID string - want *OAuthClient - err string - }{ - { - "explicitgranttype.v1", - &OAuthClient{ - ID: "explicitgranttype", - AuthorizationURL: mustURL(t, "https://example.com/disco/authz"), - TokenURL: mustURL(t, "https://example.com/disco/token"), - MinPort: 1024, - MaxPort: 65535, - SupportedGrantTypes: NewOAuthGrantTypeSet("authz_code", "password", "tbd"), - }, - "", - }, - { - "customports.v1", - &OAuthClient{ - ID: "customports", - AuthorizationURL: mustURL(t, "https://example.com/disco/authz"), - TokenURL: mustURL(t, "https://example.com/disco/token"), - MinPort: 1025, - MaxPort: 1026, - SupportedGrantTypes: NewOAuthGrantTypeSet("authz_code"), - }, - "", - }, - { - "invalidports.v1", - nil, - `Invalid "ports" definition for service invalidports.v1: both ports must be whole numbers between 1024 and 65535`, - }, - { - "missingauthz.v1", - nil, - `Service missingauthz.v1 definition is missing required property "authz"`, - }, - { - "missingtoken.v1", - nil, - `Service missingtoken.v1 definition is missing required property "token"`, - }, - { - "passwordmissingauthz.v1", - &OAuthClient{ - ID: "passwordmissingauthz", - TokenURL: mustURL(t, "https://example.com/disco/token"), - MinPort: 1024, - MaxPort: 65535, - SupportedGrantTypes: NewOAuthGrantTypeSet("password"), - }, - "", - }, - { - "absolute.v1", - &OAuthClient{ - ID: "absolute", - AuthorizationURL: mustURL(t, "http://example.net/foo/authz"), - TokenURL: mustURL(t, "http://example.net/foo/token"), - MinPort: 1024, - MaxPort: 65535, - SupportedGrantTypes: NewOAuthGrantTypeSet("authz_code"), - }, - "", - }, - { - "absolutewithport.v1", - &OAuthClient{ - ID: "absolutewithport", - AuthorizationURL: mustURL(t, 
"http://example.net:8000/foo/authz"), - TokenURL: mustURL(t, "http://example.net:8000/foo/token"), - MinPort: 1024, - MaxPort: 65535, - SupportedGrantTypes: NewOAuthGrantTypeSet("authz_code"), - }, - "", - }, - { - "relative.v1", - &OAuthClient{ - ID: "relative", - AuthorizationURL: mustURL(t, "https://example.com/disco/authz"), - TokenURL: mustURL(t, "https://example.com/disco/token"), - MinPort: 1024, - MaxPort: 65535, - SupportedGrantTypes: NewOAuthGrantTypeSet("authz_code"), - }, - "", - }, - { - "rootrelative.v1", - &OAuthClient{ - ID: "rootrelative", - AuthorizationURL: mustURL(t, "https://example.com/authz"), - TokenURL: mustURL(t, "https://example.com/token"), - MinPort: 1024, - MaxPort: 65535, - SupportedGrantTypes: NewOAuthGrantTypeSet("authz_code"), - }, - "", - }, - { - "protorelative.v1", - &OAuthClient{ - ID: "protorelative", - AuthorizationURL: mustURL(t, "https://example.net/authz"), - TokenURL: mustURL(t, "https://example.net/token"), - MinPort: 1024, - MaxPort: 65535, - SupportedGrantTypes: NewOAuthGrantTypeSet("authz_code"), - }, - "", - }, - { - "nothttp.v1", - nil, - "Failed to parse authorization URL: unsupported scheme ftp", - }, - { - "invalidauthz.v1", - nil, - "Failed to parse authorization URL: parse ***not A URL at all!:/<@@@@>***: first path segment in URL cannot contain colon", - }, - { - "invalidtoken.v1", - nil, - "Failed to parse token URL: parse ***not A URL at all!:/<@@@@>***: first path segment in URL cannot contain colon", - }, - } - - for _, test := range tests { - t.Run(test.ID, func(t *testing.T) { - got, err := host.ServiceOAuthClient(test.ID) - if (err != nil || test.err != "") && - (err == nil || !strings.Contains(err.Error(), test.err)) { - t.Fatalf("unexpected service URL error: %s", err) - } - - if diff := cmp.Diff(test.want, got); diff != "" { - t.Errorf("wrong result\n%s", diff) - } - }) - } -} - -func TestVersionConstrains(t *testing.T) { - baseURL, _ := url.Parse("https://example.com/disco/foo.json") - - t.Run("exact service version is provided", func(t *testing.T) { - portStr, close := testVersionsServer(func(w http.ResponseWriter, r *http.Request) { - resp := []byte(` -{ - "service": "%s", - "product": "%s", - "minimum": "0.11.8", - "maximum": "0.12.0" -}`) - // Add the requested service and product to the response. 
- service := path.Base(r.URL.Path) - product := r.URL.Query().Get("product") - resp = []byte(fmt.Sprintf(string(resp), service, product)) - - w.Header().Add("Content-Type", "application/json") - w.Header().Add("Content-Length", strconv.Itoa(len(resp))) - w.Write(resp) - }) - defer close() - - host := Host{ - discoURL: baseURL, - hostname: "test-server", - transport: httpTransport, - services: map[string]interface{}{ - "thingy.v1": "/api/v1/", - "thingy.v2": "/api/v2/", - "versions.v1": "https://localhost" + portStr + "/v1/versions/", - }, - } - - expected := &Constraints{ - Service: "thingy.v1", - Product: "terraform", - Minimum: "0.11.8", - Maximum: "0.12.0", - } - - actual, err := host.VersionConstraints("thingy.v1", "terraform") - if err != nil { - t.Fatalf("unexpected version constraints error: %s", err) - } - - if !reflect.DeepEqual(actual, expected) { - t.Fatalf("expected %#v, got: %#v", expected, actual) - } - }) - - t.Run("service provided with different versions", func(t *testing.T) { - portStr, close := testVersionsServer(func(w http.ResponseWriter, r *http.Request) { - resp := []byte(` -{ - "service": "%s", - "product": "%s", - "minimum": "0.11.8", - "maximum": "0.12.0" -}`) - // Add the requested service and product to the response. - service := path.Base(r.URL.Path) - product := r.URL.Query().Get("product") - resp = []byte(fmt.Sprintf(string(resp), service, product)) - - w.Header().Add("Content-Type", "application/json") - w.Header().Add("Content-Length", strconv.Itoa(len(resp))) - w.Write(resp) - }) - defer close() - - host := Host{ - discoURL: baseURL, - hostname: "test-server", - transport: httpTransport, - services: map[string]interface{}{ - "thingy.v2": "/api/v2/", - "thingy.v3": "/api/v3/", - "versions.v1": "https://localhost" + portStr + "/v1/versions/", - }, - } - - expected := &Constraints{ - Service: "thingy.v3", - Product: "terraform", - Minimum: "0.11.8", - Maximum: "0.12.0", - } - - actual, err := host.VersionConstraints("thingy.v1", "terraform") - if err != nil { - t.Fatalf("unexpected version constraints error: %s", err) - } - - if !reflect.DeepEqual(actual, expected) { - t.Fatalf("expected %#v, got: %#v", expected, actual) - } - }) - - t.Run("service not provided", func(t *testing.T) { - host := Host{ - discoURL: baseURL, - hostname: "test-server", - transport: httpTransport, - services: map[string]interface{}{ - "versions.v1": "https://localhost/v1/versions/", - }, - } - - _, err := host.VersionConstraints("thingy.v1", "terraform") - if _, ok := err.(*ErrServiceNotProvided); !ok { - t.Fatalf("expected service not provided error, got: %v", err) - } - }) - - t.Run("versions service returns a 404", func(t *testing.T) { - portStr, close := testVersionsServer(nil) - defer close() - - host := Host{ - discoURL: baseURL, - hostname: "test-server", - transport: httpTransport, - services: map[string]interface{}{ - "thingy.v1": "/api/v1/", - "versions.v1": "https://localhost" + portStr + "/v1/non-existent/", - }, - } - - _, err := host.VersionConstraints("thingy.v1", "terraform") - if _, ok := err.(*ErrNoVersionConstraints); !ok { - t.Fatalf("expected service not provided error, got: %v", err) - } - }) - - t.Run("checkpoint is disabled", func(t *testing.T) { - if err := os.Setenv("CHECKPOINT_DISABLE", "1"); err != nil { - t.Fatalf("unexpected error: %v", err) - } - defer os.Unsetenv("CHECKPOINT_DISABLE") - - host := Host{ - discoURL: baseURL, - hostname: "test-server", - transport: httpTransport, - services: map[string]interface{}{ - "thingy.v1": "/api/v1/", - 
"versions.v1": "https://localhost/v1/versions/", - }, - } - - _, err := host.VersionConstraints("thingy.v1", "terraform") - if _, ok := err.(*ErrNoVersionConstraints); !ok { - t.Fatalf("expected service not provided error, got: %v", err) - } - }) - - t.Run("versions service not discovered", func(t *testing.T) { - host := Host{ - discoURL: baseURL, - hostname: "test-server", - transport: httpTransport, - services: map[string]interface{}{ - "thingy.v1": "/api/v1/", - }, - } - - _, err := host.VersionConstraints("thingy.v1", "terraform") - if _, ok := err.(*ErrServiceNotProvided); !ok { - t.Fatalf("expected service not provided error, got: %v", err) - } - }) - - t.Run("versions service version not discovered", func(t *testing.T) { - host := Host{ - discoURL: baseURL, - hostname: "test-server", - transport: httpTransport, - services: map[string]interface{}{ - "thingy.v1": "/api/v1/", - "versions.v2": "https://localhost/v2/versions/", - }, - } - - _, err := host.VersionConstraints("thingy.v1", "terraform") - if _, ok := err.(*ErrVersionNotSupported); !ok { - t.Fatalf("expected service not provided error, got: %v", err) - } - }) -} - -func testVersionsServer(h func(w http.ResponseWriter, r *http.Request)) (portStr string, close func()) { - server := httptest.NewTLSServer(http.HandlerFunc( - func(w http.ResponseWriter, r *http.Request) { - // Test server always returns 404 if the URL isn't what we expect - if !strings.HasPrefix(r.URL.Path, "/v1/versions/") { - w.WriteHeader(404) - w.Write([]byte("not found")) - return - } - - // If the URL is correct then the given hander decides the response - h(w, r) - }, - )) - - serverURL, _ := url.Parse(server.URL) - - portStr = serverURL.Port() - if portStr != "" { - portStr = ":" + portStr - } - - close = func() { - server.Close() - } - - return portStr, close -} diff --git a/svchost/svchost_test.go b/svchost/svchost_test.go deleted file mode 100644 index 2eda3acbe..000000000 --- a/svchost/svchost_test.go +++ /dev/null @@ -1,218 +0,0 @@ -package svchost - -import "testing" - -func TestForDisplay(t *testing.T) { - tests := []struct { - Input string - Want string - }{ - { - "", - "", - }, - { - "example.com", - "example.com", - }, - { - "invalid", - "invalid", - }, - { - "localhost", - "localhost", - }, - { - "localhost:1211", - "localhost:1211", - }, - { - "HashiCorp.com", - "hashicorp.com", - }, - { - "Испытание.com", - "испытание.com", - }, - { - "münchen.de", // this is a precomposed u with diaeresis - "münchen.de", // this is a precomposed u with diaeresis - }, - { - "münchen.de", // this is a separate u and combining diaeresis - "münchen.de", // this is a precomposed u with diaeresis - }, - { - "example.com:443", - "example.com", - }, - { - "example.com:81", - "example.com:81", - }, - { - "example.com:boo", - "example.com:boo", // invalid, but tolerated for display purposes - }, - { - "example.com:boo:boo", - "example.com:boo:boo", // invalid, but tolerated for display purposes - }, - { - "example.com:081", - "example.com:81", - }, - } - - for _, test := range tests { - t.Run(test.Input, func(t *testing.T) { - got := ForDisplay(test.Input) - if got != test.Want { - t.Errorf("wrong result\ninput: %s\ngot: %s\nwant: %s", test.Input, got, test.Want) - } - }) - } -} - -func TestForComparison(t *testing.T) { - tests := []struct { - Input string - Want string - Err bool - }{ - { - "", - "", - true, - }, - { - "example.com", - "example.com", - false, - }, - { - "example.com:443", - "example.com", - false, - }, - { - "example.com:81", - "example.com:81", - 
false, - }, - { - "example.com:081", - "example.com:81", - false, - }, - { - "invalid", - "invalid", - false, // the "invalid" TLD is, confusingly, a valid hostname syntactically - }, - { - "localhost", // supported for local testing only - "localhost", - false, - }, - { - "localhost:1211", // supported for local testing only - "localhost:1211", - false, - }, - { - "HashiCorp.com", - "hashicorp.com", - false, - }, - { - "1example.com", - "1example.com", - false, - }, - { - "Испытание.com", - "xn--80akhbyknj4f.com", - false, - }, - { - "münchen.de", // this is a precomposed u with diaeresis - "xn--mnchen-3ya.de", - false, - }, - { - "münchen.de", // this is a separate u and combining diaeresis - "xn--mnchen-3ya.de", - false, - }, - { - "blah..blah", - "", - true, - }, - { - "example.com:boo", - "", - true, - }, - { - "example.com:80:boo", - "", - true, - }, - } - - for _, test := range tests { - t.Run(test.Input, func(t *testing.T) { - got, err := ForComparison(test.Input) - if (err != nil) != test.Err { - if test.Err { - t.Error("unexpected success; want error") - } else { - t.Errorf("unexpected error; want success\nerror: %s", err) - } - } - if string(got) != test.Want { - t.Errorf("wrong result\ninput: %s\ngot: %s\nwant: %s", test.Input, got, test.Want) - } - }) - } -} - -func TestHostnameForDisplay(t *testing.T) { - tests := []struct { - Input string - Want string - }{ - { - "example.com", - "example.com", - }, - { - "example.com:81", - "example.com:81", - }, - { - "xn--80akhbyknj4f.com", - "испытание.com", - }, - { - "xn--80akhbyknj4f.com:8080", - "испытание.com:8080", - }, - { - "xn--mnchen-3ya.de", - "münchen.de", // this is a precomposed u with diaeresis - }, - } - - for _, test := range tests { - t.Run(test.Input, func(t *testing.T) { - got := Hostname(test.Input).ForDisplay() - if got != test.Want { - t.Errorf("wrong result\ninput: %s\ngot: %s\nwant: %s", test.Input, got, test.Want) - } - }) - } -} diff --git a/terraform/context.go b/terraform/context.go index 76be6dfc5..1817aa541 100644 --- a/terraform/context.go +++ b/terraform/context.go @@ -9,6 +9,7 @@ import ( "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/configs" + "github.com/hashicorp/terraform/instances" "github.com/hashicorp/terraform/lang" "github.com/hashicorp/terraform/plans" "github.com/hashicorp/terraform/providers" @@ -24,19 +25,12 @@ import ( type InputMode byte const ( - // InputModeVar asks for all variables - InputModeVar InputMode = 1 << iota - - // InputModeVarUnset asks for variables which are not set yet. - // InputModeVar must be set for this to have an effect. - InputModeVarUnset - // InputModeProvider asks for provider variables - InputModeProvider + InputModeProvider InputMode = 1 << iota // InputModeStd is the standard operating mode and asks for both variables // and providers. - InputModeStd = InputModeVar | InputModeProvider + InputModeStd = InputModeProvider ) var ( @@ -145,7 +139,17 @@ func NewContext(opts *ContextOpts) (*Context, tfdiags.Diagnostics) { // Determine parallelism, default to 10. We do this both to limit // CPU pressure but also to have an extra guard against rate throttling // from providers. + // We throw an error in case of negative parallelism par := opts.Parallelism + if par < 0 { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Invalid parallelism value", + fmt.Sprintf("The parallelism must be a positive value. 
Not %d.", par), + )) + return nil, diags + } + if par == 0 { par = 10 } @@ -166,7 +170,7 @@ func NewContext(opts *ContextOpts) (*Context, tfdiags.Diagnostics) { variables = variables.Override(opts.Variables) // Bind available provider plugins to the constraints in config - var providerFactories map[string]providers.Factory + var providerFactories map[addrs.Provider]providers.Factory if opts.ProviderResolver != nil { deps := ConfigTreeDependencies(opts.Config, state) reqd := deps.AllPluginRequirements() @@ -174,7 +178,6 @@ func NewContext(opts *ContextOpts) (*Context, tfdiags.Diagnostics) { reqd.LockExecutables(opts.ProviderSHA256s) } log.Printf("[TRACE] terraform.NewContext: resolving provider version selections") - var providerDiags tfdiags.Diagnostics providerFactories, providerDiags = resourceProviderFactories(opts.ProviderResolver, reqd) diags = diags.Append(providerDiags) @@ -183,7 +186,7 @@ func NewContext(opts *ContextOpts) (*Context, tfdiags.Diagnostics) { return nil, diags } } else { - providerFactories = make(map[string]providers.Factory) + providerFactories = make(map[addrs.Provider]providers.Factory) } components := &basicComponentFactory{ @@ -210,6 +213,18 @@ func NewContext(opts *ContextOpts) (*Context, tfdiags.Diagnostics) { log.Printf("[TRACE] terraform.NewContext: complete") + // By the time we get here, we should have values defined for all of + // the root module variables, even if some of them are "unknown". It's the + // caller's responsibility to have already handled the decoding of these + // from the various ways the CLI allows them to be set and to produce + // user-friendly error messages if they are not all present, and so + // the error message from checkInputVariables should never be seen and + // includes language asking the user to report a bug. + if config != nil { + varDiags := checkInputVariables(config.Module.Variables, variables) + diags = diags.Append(varDiags) + } + return &Context{ components: components, schemas: schemas, @@ -227,7 +242,7 @@ func NewContext(opts *ContextOpts) (*Context, tfdiags.Diagnostics) { providerInputConfig: make(map[string]map[string]cty.Value), providerSHA256s: opts.ProviderSHA256s, sh: sh, - }, nil + }, diags } func (c *Context) Schemas() *Schemas { @@ -658,14 +673,6 @@ func (c *Context) Validate() tfdiags.Diagnostics { var diags tfdiags.Diagnostics - // Validate input variables. We do this only for the values supplied - // by the root module, since child module calls are validated when we - // visit their graph nodes. - if c.config != nil { - varDiags := checkInputVariables(c.config.Module.Variables, c.variables) - diags = diags.Append(varDiags) - } - // If we have errors at this point then we probably won't be able to // construct a graph without producing redundant errors, so we'll halt early. 
if diags.HasErrors() { @@ -782,6 +789,7 @@ func (c *Context) graphWalker(operation walkOperation) *ContextGraphWalker { Context: c, State: c.state.SyncWrapper(), Changes: c.changes.SyncWrapper(), + InstanceExpander: instances.NewExpander(), Operation: operation, StopContext: c.runContext, RootVariableValues: c.variables, diff --git a/terraform/context_apply_test.go b/terraform/context_apply_test.go index 808e62bbc..fc26f446b 100644 --- a/terraform/context_apply_test.go +++ b/terraform/context_apply_test.go @@ -26,6 +26,7 @@ import ( "github.com/hashicorp/terraform/providers" "github.com/hashicorp/terraform/provisioners" "github.com/hashicorp/terraform/states" + "github.com/hashicorp/terraform/states/statefile" "github.com/hashicorp/terraform/tfdiags" "github.com/zclconf/go-cty/cty" ) @@ -38,8 +39,8 @@ func TestContext2Apply_basic(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -81,8 +82,8 @@ func TestContext2Apply_unstable(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), }, ), }) @@ -137,8 +138,8 @@ func TestContext2Apply_escape(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -155,7 +156,7 @@ func TestContext2Apply_escape(t *testing.T) { checkStateString(t, state, ` aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = "bar" type = aws_instance `) @@ -169,8 +170,8 @@ func TestContext2Apply_resourceCountOneList(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "null": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("null"): testProviderFuncFixed(p), }, ), }) @@ -185,7 +186,7 @@ func TestContext2Apply_resourceCountOneList(t *testing.T) { got := strings.TrimSpace(state.String()) want := strings.TrimSpace(`null_resource.foo.0: ID = foo - provider = provider.null + provider = provider["registry.terraform.io/-/null"] Outputs: @@ -202,8 +203,8 @@ func TestContext2Apply_resourceCountZeroList(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "null": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("null"): testProviderFuncFixed(p), }, ), }) @@ -260,8 +261,8 @@ func TestContext2Apply_resourceDependsOnModule(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -289,35 +290,26 @@ func TestContext2Apply_resourceDependsOnModuleStateOnly(t *testing.T) { p := testProvider("aws") p.DiffFn = testDiffFn - state := MustShimLegacyState(&State{ - Modules: []*ModuleState{ - 
&ModuleState{ - Path: rootModulePath, - Resources: map[string]*ResourceState{ - "aws_instance.a": &ResourceState{ - Type: "aws_instance", - Primary: &InstanceState{ - ID: "parent", - }, - Dependencies: []string{"module.child"}, - Provider: "provider.aws", - }, - }, - }, - &ModuleState{ - Path: []string{"root", "child"}, - Resources: map[string]*ResourceState{ - "aws_instance.child": &ResourceState{ - Type: "aws_instance", - Primary: &InstanceState{ - ID: "child", - }, - Provider: "provider.aws", - }, - }, - }, + state := states.NewState() + root := state.EnsureModule(addrs.RootModuleInstance) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("aws_instance.a").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"parent"}`), + Dependencies: []addrs.AbsResource{mustResourceAddr("module.child.aws_instance.child")}, }, - }) + mustProviderConfig(`provider["registry.terraform.io/-/aws"]`), + ) + child := state.EnsureModule(addrs.RootModuleInstance.Child("child", addrs.NoKey)) + child.SetResourceInstanceCurrent( + mustResourceInstanceAddr("aws_instance.child").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"child"}`), + }, + mustProviderConfig(`provider["registry.terraform.io/-/aws"]`), + ) { // verify the apply happens in the correct order @@ -348,8 +340,8 @@ func TestContext2Apply_resourceDependsOnModuleStateOnly(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -381,8 +373,8 @@ func TestContext2Apply_resourceDependsOnModuleDestroy(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -428,8 +420,8 @@ func TestContext2Apply_resourceDependsOnModuleDestroy(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: globalState, @@ -487,8 +479,8 @@ func TestContext2Apply_resourceDependsOnModuleGrandchild(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -540,8 +532,8 @@ func TestContext2Apply_resourceDependsOnModuleInModule(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -571,8 +563,8 @@ func TestContext2Apply_mapVarBetweenModules(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "null": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("null"): testProviderFuncFixed(p), }, ), }) @@ -595,7 
+587,7 @@ amis_from_module = {eu-west-1:ami-789012 eu-west-2:ami-989484 us-west-1:ami-1234 module.test: null_resource.noop: ID = foo - provider = provider.null + provider = provider["registry.terraform.io/-/null"] Outputs: @@ -613,8 +605,8 @@ func TestContext2Apply_refCount(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -648,8 +640,8 @@ func TestContext2Apply_providerAlias(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -686,8 +678,8 @@ func TestContext2Apply_providerAliasConfigure(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "another": testProviderFuncFixed(p2), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("another"): testProviderFuncFixed(p2), }, ), }) @@ -744,8 +736,8 @@ func TestContext2Apply_providerWarning(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -763,7 +755,7 @@ func TestContext2Apply_providerWarning(t *testing.T) { expected := strings.TrimSpace(` aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] `) if actual != expected { t.Fatalf("got: \n%s\n\nexpected:\n%s", actual, expected) @@ -775,6 +767,7 @@ aws_instance.foo: } func TestContext2Apply_emptyModule(t *testing.T) { + // A module with only outputs (no resources) m := testModule(t, "apply-empty-module") p := testProvider("aws") p.ApplyFn = testApplyFn @@ -782,8 +775,8 @@ func TestContext2Apply_emptyModule(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -831,8 +824,8 @@ func TestContext2Apply_createBeforeDestroy(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -888,8 +881,8 @@ func TestContext2Apply_createBeforeDestroyUpdate(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -956,8 +949,8 @@ func TestContext2Apply_createBeforeDestroy_dependsNonCBD(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, 
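The mechanical change running through all of these hunks is the re-keying of the provider factory maps: providers.ResolverFixed now takes a map[addrs.Provider]providers.Factory, so every bare type name gets wrapped in addrs.NewLegacyProvider. A minimal sketch of the new shape, reusing this file's existing test helpers:

    // Factories are now keyed by a fully-qualified provider address rather
    // than a bare type name (previously map[string]providers.Factory).
    // NewLegacyProvider fills in the placeholder "-" namespace under
    // registry.terraform.io for providers that predate explicit source
    // addresses, which is why the expected state strings change in step
    // from provider.aws to provider["registry.terraform.io/-/aws"].
    ctx := testContext2(t, &ContextOpts{
        Config: m,
        ProviderResolver: providers.ResolverFixed(
            map[addrs.Provider]providers.Factory{
                addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p),
            },
        ),
    })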
@@ -977,7 +970,7 @@ func TestContext2Apply_createBeforeDestroy_dependsNonCBD(t *testing.T) { checkStateString(t, state, ` aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] require_new = yes type = aws_instance value = foo @@ -986,7 +979,7 @@ aws_instance.bar: aws_instance.foo aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] require_new = yes type = aws_instance `) @@ -1032,8 +1025,8 @@ func TestContext2Apply_createBeforeDestroy_hook(t *testing.T) { Config: m, Hooks: []Hook{h}, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -1110,8 +1103,8 @@ func TestContext2Apply_createBeforeDestroy_deposedCount(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -1131,12 +1124,12 @@ func TestContext2Apply_createBeforeDestroy_deposedCount(t *testing.T) { checkStateString(t, state, ` aws_instance.bar.0: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar type = aws_instance aws_instance.bar.1: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar type = aws_instance `) @@ -1176,8 +1169,8 @@ func TestContext2Apply_createBeforeDestroy_deposedOnly(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -1197,7 +1190,7 @@ func TestContext2Apply_createBeforeDestroy_deposedOnly(t *testing.T) { checkStateString(t, state, ` aws_instance.bar: ID = bar - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] `) } @@ -1228,8 +1221,8 @@ func TestContext2Apply_destroyComputed(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -1263,30 +1256,26 @@ func testContext2Apply_destroyDependsOn(t *testing.T) { p := testProvider("aws") p.ApplyFn = testApplyFn p.DiffFn = testDiffFn - state := MustShimLegacyState(&State{ - Modules: []*ModuleState{ - &ModuleState{ - Path: rootModulePath, - Resources: map[string]*ResourceState{ - "aws_instance.foo": &ResourceState{ - Type: "aws_instance", - Primary: &InstanceState{ - ID: "foo", - Attributes: map[string]string{}, - }, - }, - "aws_instance.bar": &ResourceState{ - Type: "aws_instance", - Primary: &InstanceState{ - ID: "bar", - Attributes: map[string]string{}, - }, - }, - }, - }, + state := states.NewState() + root := state.EnsureModule(addrs.RootModuleInstance) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("aws_instance.bar").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"bar"}`), }, - }) + mustProviderConfig(`provider["registry.terraform.io/-/aws"]`), + ) + root.SetResourceInstanceCurrent( + 
mustResourceInstanceAddr("aws_instance.foo").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"foo"}`), + Dependencies: []addrs.AbsResource{mustResourceAddr("aws_instance.bar")}, + }, + mustProviderConfig(`provider["registry.terraform.io/-/aws"]`), + ) // Record the order we see Apply var actual []string @@ -1302,8 +1291,8 @@ func testContext2Apply_destroyDependsOn(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -1328,46 +1317,64 @@ func testContext2Apply_destroyDependsOn(t *testing.T) { // Test that destroy ordering is correct with dependencies only // in the state. func TestContext2Apply_destroyDependsOnStateOnly(t *testing.T) { + newState := states.NewState() + root := newState.EnsureModule(addrs.RootModuleInstance) + root.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "aws_instance", + Name: "foo", + }.Instance(addrs.NoKey), + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"foo"}`), + Dependencies: []addrs.AbsResource{}, + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, + ) + root.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "aws_instance", + Name: "bar", + }.Instance(addrs.NoKey), + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"bar"}`), + Dependencies: []addrs.AbsResource{ + addrs.AbsResource{ + Resource: addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "aws_instance", + Name: "foo", + }, + Module: root.Addr, + }, + }, + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, + ) + // It is possible for this to be racy, so we loop a number of times // just to check. 
for i := 0; i < 10; i++ { - testContext2Apply_destroyDependsOnStateOnly(t) + t.Run("new", func(t *testing.T) { + testContext2Apply_destroyDependsOnStateOnly(t, newState) + }) } } -func testContext2Apply_destroyDependsOnStateOnly(t *testing.T) { +func testContext2Apply_destroyDependsOnStateOnly(t *testing.T, state *states.State) { m := testModule(t, "empty") p := testProvider("aws") p.ApplyFn = testApplyFn p.DiffFn = testDiffFn - state := MustShimLegacyState(&State{ - Modules: []*ModuleState{ - &ModuleState{ - Path: rootModulePath, - Resources: map[string]*ResourceState{ - "aws_instance.foo": &ResourceState{ - Type: "aws_instance", - Primary: &InstanceState{ - ID: "foo", - Attributes: map[string]string{}, - }, - Provider: "provider.aws", - }, - - "aws_instance.bar": &ResourceState{ - Type: "aws_instance", - Primary: &InstanceState{ - ID: "bar", - Attributes: map[string]string{}, - }, - Dependencies: []string{"aws_instance.foo"}, - Provider: "provider.aws", - }, - }, - }, - }, - }) - // Record the order we see Apply var actual []string var actualLock sync.Mutex @@ -1382,8 +1389,8 @@ func testContext2Apply_destroyDependsOnStateOnly(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -1408,45 +1415,64 @@ func testContext2Apply_destroyDependsOnStateOnly(t *testing.T) { // Test that destroy ordering is correct with dependencies only // in the state within a module (GH-11749) func TestContext2Apply_destroyDependsOnStateOnlyModule(t *testing.T) { + newState := states.NewState() + child := newState.EnsureModule(addrs.RootModuleInstance.Child("child", addrs.NoKey)) + child.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "aws_instance", + Name: "foo", + }.Instance(addrs.NoKey), + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"foo"}`), + Dependencies: []addrs.AbsResource{}, + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, + ) + child.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "aws_instance", + Name: "bar", + }.Instance(addrs.NoKey), + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"bar"}`), + Dependencies: []addrs.AbsResource{ + addrs.AbsResource{ + Resource: addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "aws_instance", + Name: "foo", + }, + Module: child.Addr, + }, + }, + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, + ) + // It is possible for this to be racy, so we loop a number of times // just to check. 
for i := 0; i < 10; i++ { - testContext2Apply_destroyDependsOnStateOnlyModule(t) + t.Run("new", func(t *testing.T) { + testContext2Apply_destroyDependsOnStateOnlyModule(t, newState) + }) } } -func testContext2Apply_destroyDependsOnStateOnlyModule(t *testing.T) { +func testContext2Apply_destroyDependsOnStateOnlyModule(t *testing.T, state *states.State) { m := testModule(t, "empty") p := testProvider("aws") p.ApplyFn = testApplyFn p.DiffFn = testDiffFn - state := MustShimLegacyState(&State{ - Modules: []*ModuleState{ - &ModuleState{ - Path: []string{"root", "child"}, - Resources: map[string]*ResourceState{ - "aws_instance.foo": &ResourceState{ - Type: "aws_instance", - Primary: &InstanceState{ - ID: "foo", - Attributes: map[string]string{}, - }, - Provider: "provider.aws", - }, - - "aws_instance.bar": &ResourceState{ - Type: "aws_instance", - Primary: &InstanceState{ - ID: "bar", - Attributes: map[string]string{}, - }, - Dependencies: []string{"aws_instance.foo"}, - Provider: "provider.aws", - }, - }, - }, - }, - }) // Record the order we see Apply var actual []string @@ -1462,8 +1488,8 @@ func testContext2Apply_destroyDependsOnStateOnlyModule(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -1500,8 +1526,8 @@ func TestContext2Apply_dataBasic(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "null": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("null"): testProviderFuncFixed(p), }, ), }) @@ -1550,8 +1576,8 @@ func TestContext2Apply_destroyData(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "null": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("null"): testProviderFuncFixed(p), }, ), State: state, @@ -1621,8 +1647,8 @@ func TestContext2Apply_destroySkipsCBD(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -1663,8 +1689,8 @@ func TestContext2Apply_destroyModuleVarProviderConfig(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -1709,8 +1735,8 @@ func TestContext2Apply_destroyCrossProviders(t *testing.T) { }, } - providers := map[string]providers.Factory{ - "aws": testProviderFuncFixed(p_aws), + providers := map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p_aws), } // Bug only appears from time to time, @@ -1733,7 +1759,7 @@ func TestContext2Apply_destroyCrossProviders(t *testing.T) { } } -func getContextForApply_destroyCrossProviders(t *testing.T, m *configs.Config, providerFactories map[string]providers.Factory) *Context { +func getContextForApply_destroyCrossProviders(t *testing.T, m *configs.Config, providerFactories 
map[addrs.Provider]providers.Factory) *Context { state := MustShimLegacyState(&State{ Modules: []*ModuleState{ &ModuleState{ @@ -1786,8 +1812,8 @@ func TestContext2Apply_minimal(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -1816,8 +1842,8 @@ func TestContext2Apply_badDiff(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -1851,8 +1877,8 @@ func TestContext2Apply_cancel(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -1930,8 +1956,8 @@ func TestContext2Apply_cancelBlock(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -2013,77 +2039,11 @@ func TestContext2Apply_cancelBlock(t *testing.T) { checkStateString(t, state, ` aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] num = 2 `) } -// for_each values cannot be used in the provisioner during destroy. -// There may be a way to handle this, but for now make sure we print an error -// rather than crashing with an invalid config. 
-func TestContext2Apply_provisionerDestroyForEach(t *testing.T) { - m := testModule(t, "apply-provisioner-each") - p := testProvider("aws") - pr := testProvisioner() - p.DiffFn = testDiffFn - p.ApplyFn = testApplyFn - - s := &states.State{ - Modules: map[string]*states.Module{ - "": &states.Module{ - Resources: map[string]*states.Resource{ - "aws_instance.bar": &states.Resource{ - Addr: addrs.Resource{Mode: 77, Type: "aws_instance", Name: "bar"}, - EachMode: states.EachMap, - Instances: map[addrs.InstanceKey]*states.ResourceInstance{ - addrs.StringKey("a"): &states.ResourceInstance{ - Current: &states.ResourceInstanceObjectSrc{ - AttrsJSON: []byte(`{"foo":"bar","id":"foo"}`), - }, - }, - addrs.StringKey("b"): &states.ResourceInstance{ - Current: &states.ResourceInstanceObjectSrc{ - AttrsJSON: []byte(`{"foo":"bar","id":"foo"}`), - }, - }, - }, - ProviderConfig: addrs.AbsProviderConfig{ - Module: addrs.ModuleInstance(nil), - ProviderConfig: addrs.ProviderConfig{Type: "aws", Alias: ""}, - }, - }, - }, - }, - }, - } - - ctx := testContext2(t, &ContextOpts{ - Config: m, - ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), - }, - ), - Provisioners: map[string]ProvisionerFactory{ - "shell": testProvisionerFuncFixed(pr), - }, - State: s, - Destroy: true, - }) - - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - _, diags := ctx.Apply() - if diags == nil { - t.Fatal("should error") - } - if !strings.Contains(diags.Err().Error(), `Reference to "each" in context without for_each`) { - t.Fatal("unexpected error:", diags.Err()) - } -} - func TestContext2Apply_cancelProvisioner(t *testing.T) { m := testModule(t, "apply-cancel-provisioner") p := testProvider("aws") @@ -2105,8 +2065,8 @@ func TestContext2Apply_cancelProvisioner(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -2148,7 +2108,7 @@ func TestContext2Apply_cancelProvisioner(t *testing.T) { checkStateString(t, state, ` aws_instance.foo: (tainted) ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] num = 2 type = aws_instance `) @@ -2203,8 +2163,8 @@ func TestContext2Apply_compute(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -2279,8 +2239,8 @@ func TestContext2Apply_countDecrease(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -2340,8 +2300,8 @@ func TestContext2Apply_countDecreaseToOneX(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -2404,8 +2364,8 @@ func TestContext2Apply_countDecreaseToOneCorrupted(t 
*testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -2461,8 +2421,8 @@ func TestContext2Apply_countTainted(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -2494,12 +2454,12 @@ CREATE: aws_instance.foo[1] want := strings.TrimSpace(` aws_instance.foo.0: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = foo type = aws_instance aws_instance.foo.1: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = foo type = aws_instance `) @@ -2516,8 +2476,8 @@ func TestContext2Apply_countVariable(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -2546,8 +2506,8 @@ func TestContext2Apply_countVariableRef(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -2582,8 +2542,8 @@ func TestContext2Apply_provisionerInterpCount(t *testing.T) { pr := testProvisioner() providerResolver := providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ) provisioners := map[string]ProvisionerFactory{ @@ -2636,8 +2596,8 @@ func TestContext2Apply_foreachVariable(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Variables: InputValues{ @@ -2671,8 +2631,8 @@ func TestContext2Apply_moduleBasic(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -2731,36 +2691,32 @@ func TestContext2Apply_moduleDestroyOrder(t *testing.T) { }, } - state := MustShimLegacyState(&State{ - Modules: []*ModuleState{ - &ModuleState{ - Path: rootModulePath, - Resources: map[string]*ResourceState{ - "aws_instance.b": resourceState("aws_instance", "b"), - }, - }, - - &ModuleState{ - Path: []string{"root", "child"}, - Resources: map[string]*ResourceState{ - "aws_instance.a": resourceState("aws_instance", "a"), - }, - Outputs: map[string]*OutputState{ - "a_output": &OutputState{ - Type: "string", - Sensitive: false, - Value: "a", - }, - }, - }, + state := states.NewState() + child := state.EnsureModule(addrs.RootModuleInstance.Child("child", addrs.NoKey)) + child.SetResourceInstanceCurrent( + mustResourceInstanceAddr("aws_instance.a").Resource, + 
&states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"a"}`), }, - }) + mustProviderConfig(`provider["registry.terraform.io/-/aws"]`), + ) + root := state.EnsureModule(addrs.RootModuleInstance) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("aws_instance.b").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"b"}`), + Dependencies: []addrs.AbsResource{mustResourceAddr("module.child.aws_instance.a")}, + }, + mustProviderConfig(`provider["registry.terraform.io/-/aws"]`), + ) ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -2811,8 +2767,8 @@ func TestContext2Apply_moduleInheritAlias(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -2831,7 +2787,7 @@ func TestContext2Apply_moduleInheritAlias(t *testing.T) { module.child: aws_instance.foo: ID = foo - provider = provider.aws.eu + provider = provider["registry.terraform.io/-/aws"].eu `) } @@ -2856,8 +2812,8 @@ func TestContext2Apply_orphanResource(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), }, ), }) @@ -2869,9 +2825,10 @@ func TestContext2Apply_orphanResource(t *testing.T) { // At this point both resources should be recorded in the state, along // with the single instance associated with test_thing.one. 
want := states.BuildState(func(s *states.SyncState) { - providerAddr := addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance) + providerAddr := addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + } zeroAddr := addrs.Resource{ Mode: addrs.ManagedResourceMode, Type: "test_thing", @@ -2889,7 +2846,9 @@ func TestContext2Apply_orphanResource(t *testing.T) { AttrsJSON: []byte(`{}`), }, providerAddr) }) - if !cmp.Equal(state, want) { + + // compare the marshaled form to easily remove empty and nil slices + if !statefile.StatesMarshalEqual(state, want) { t.Fatalf("wrong state after step 1\n%s", cmp.Diff(want, state)) } @@ -2899,8 +2858,8 @@ func TestContext2Apply_orphanResource(t *testing.T) { Config: m, State: state, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), }, ), }) @@ -2961,8 +2920,8 @@ func TestContext2Apply_moduleOrphanInheritAlias(t *testing.T) { Config: m, State: state, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -3019,8 +2978,8 @@ func TestContext2Apply_moduleOrphanProvider(t *testing.T) { Config: m, State: state, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -3070,8 +3029,8 @@ func TestContext2Apply_moduleOrphanGrandchildProvider(t *testing.T) { Config: m, State: state, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -3107,8 +3066,8 @@ func TestContext2Apply_moduleGrandchildProvider(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -3144,9 +3103,9 @@ func TestContext2Apply_moduleOnlyProvider(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), - "test": testProviderFuncFixed(pTest), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), + addrs.NewLegacyProvider("test"): testProviderFuncFixed(pTest), }, ), }) @@ -3175,8 +3134,8 @@ func TestContext2Apply_moduleProviderAlias(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -3205,8 +3164,8 @@ func TestContext2Apply_moduleProviderAliasTargets(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Targets: []addrs.Targetable{ @@ 
-3247,8 +3206,8 @@ func TestContext2Apply_moduleProviderCloseNested(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: MustShimLegacyState(&State{ @@ -3312,8 +3271,8 @@ func TestContext2Apply_moduleVarRefExisting(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -3343,8 +3302,8 @@ func TestContext2Apply_moduleVarResourceCount(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Variables: InputValues{ @@ -3367,8 +3326,8 @@ func TestContext2Apply_moduleVarResourceCount(t *testing.T) { ctx = testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Variables: InputValues{ @@ -3397,8 +3356,8 @@ func TestContext2Apply_moduleBool(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -3429,8 +3388,8 @@ func TestContext2Apply_moduleTarget(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Targets: []addrs.Targetable{ @@ -3452,7 +3411,7 @@ func TestContext2Apply_moduleTarget(t *testing.T) { module.A: aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar type = aws_instance @@ -3462,9 +3421,12 @@ module.A: module.B: aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = foo type = aws_instance + + Dependencies: + module.A.aws_instance.foo `) } @@ -3481,9 +3443,9 @@ func TestContext2Apply_multiProvider(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), - "do": testProviderFuncFixed(pDO), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), + addrs.NewLegacyProvider("do"): testProviderFuncFixed(pDO), }, ), }) @@ -3549,9 +3511,9 @@ func TestContext2Apply_multiProviderDestroy(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), - "vault": testProviderFuncFixed(p2), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), + addrs.NewLegacyProvider("vault"): testProviderFuncFixed(p2), }, ), }) @@ -3606,9 +3568,9 @@ 
func TestContext2Apply_multiProviderDestroy(t *testing.T) { State: state, Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), - "vault": testProviderFuncFixed(p2), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), + addrs.NewLegacyProvider("vault"): testProviderFuncFixed(p2), }, ), }) @@ -3676,9 +3638,9 @@ func TestContext2Apply_multiProviderDestroyChild(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), - "vault": testProviderFuncFixed(p2), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), + addrs.NewLegacyProvider("vault"): testProviderFuncFixed(p2), }, ), }) @@ -3733,9 +3695,9 @@ func TestContext2Apply_multiProviderDestroyChild(t *testing.T) { State: state, Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), - "vault": testProviderFuncFixed(p2), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), + addrs.NewLegacyProvider("vault"): testProviderFuncFixed(p2), }, ), }) @@ -3771,8 +3733,8 @@ func TestContext2Apply_multiVar(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Variables: InputValues{ @@ -3806,8 +3768,8 @@ func TestContext2Apply_multiVar(t *testing.T) { Config: m, State: state, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Variables: InputValues{ @@ -3914,8 +3876,8 @@ func TestContext2Apply_multiVarComprehensive(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), }, ), Variables: InputValues{ @@ -4067,8 +4029,8 @@ func TestContext2Apply_multiVarOrder(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -4103,8 +4065,8 @@ func TestContext2Apply_multiVarOrderInterp(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -4141,8 +4103,8 @@ func TestContext2Apply_multiVarCountDec(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Variables: InputValues{ @@ -4209,8 +4171,8 @@ func TestContext2Apply_multiVarCountDec(t *testing.T) { State: s, Config: m, ProviderResolver: 
providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Variables: InputValues{ @@ -4269,8 +4231,8 @@ func TestContext2Apply_multiVarMissingState(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), }, ), }) @@ -4295,8 +4257,8 @@ func TestContext2Apply_nilDiff(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -4337,8 +4299,8 @@ func TestContext2Apply_outputDependsOn(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -4362,8 +4324,8 @@ func TestContext2Apply_outputDependsOn(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -4380,7 +4342,7 @@ func TestContext2Apply_outputDependsOn(t *testing.T) { checkStateString(t, state, ` aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] Outputs: @@ -4418,8 +4380,8 @@ func TestContext2Apply_outputOrphan(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -4468,8 +4430,8 @@ func TestContext2Apply_outputOrphanModule(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state.DeepCopy(), @@ -4495,8 +4457,8 @@ func TestContext2Apply_outputOrphanModule(t *testing.T) { ctx = testContext2(t, &ContextOpts{ Config: configs.NewEmptyConfig(), ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state.DeepCopy(), @@ -4529,9 +4491,9 @@ func TestContext2Apply_providerComputedVar(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), - "test": testProviderFuncFixed(pTest), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), + addrs.NewLegacyProvider("test"): testProviderFuncFixed(pTest), }, ), }) @@ -4581,8 +4543,8 @@ func TestContext2Apply_providerConfigureDisabled(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: 
providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -4619,8 +4581,8 @@ func TestContext2Apply_provisionerModule(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -4666,8 +4628,8 @@ func TestContext2Apply_Provisioner_compute(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -4716,8 +4678,8 @@ func TestContext2Apply_provisionerCreateFail(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -4757,8 +4719,8 @@ func TestContext2Apply_provisionerCreateFailNoId(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -4796,19 +4758,13 @@ func TestContext2Apply_provisionerFail(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ "shell": testProvisionerFuncFixed(pr), }, - Variables: InputValues{ - "value": &InputValue{ - Value: cty.NumberIntVal(1), - SourceType: ValueFromCaller, - }, - }, }) if _, diags := ctx.Plan(); diags.HasErrors() { @@ -4858,8 +4814,8 @@ func TestContext2Apply_provisionerFail_createBeforeDestroy(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -4908,14 +4864,14 @@ func TestContext2Apply_error_createBeforeDestroy(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, }) p.ApplyFn = func(info *InstanceInfo, is *InstanceState, id *InstanceDiff) (*InstanceState, error) { - return nil, fmt.Errorf("error") + return nil, fmt.Errorf("placeholder error from ApplyFn") } p.DiffFn = testDiffFn @@ -4927,6 +4883,11 @@ func TestContext2Apply_error_createBeforeDestroy(t *testing.T) { if diags == nil { t.Fatal("should have error") } + if got, want := diags.Err().Error(), "placeholder 
error from ApplyFn"; got != want { + // We're looking for our artificial error from ApplyFn above, whose + // message is literally "placeholder error from ApplyFn". + t.Fatalf("wrong error\ngot: %s\nwant: %s", got, want) + } actual := strings.TrimSpace(state.String()) expected := strings.TrimSpace(testTerraformApplyErrorCreateBeforeDestroyStr) @@ -4959,8 +4920,8 @@ func TestContext2Apply_errorDestroy_createBeforeDestroy(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -5012,7 +4973,7 @@ func TestContext2Apply_multiDepose_createBeforeDestroy(t *testing.T) { }, }, } - ps := map[string]providers.Factory{"aws": testProviderFuncFixed(p)} + ps := map[addrs.Provider]providers.Factory{addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p)} state := MustShimLegacyState(&State{ Modules: []*ModuleState{ &ModuleState{ @@ -5092,7 +5053,7 @@ func TestContext2Apply_multiDepose_createBeforeDestroy(t *testing.T) { checkStateString(t, state, ` aws_instance.web: (1 deposed) ID = bar - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] require_new = yes Deposed ID 1 = foo `) @@ -5175,7 +5136,7 @@ aws_instance.web: (1 deposed) checkStateString(t, state, ` aws_instance.web: (1 deposed) ID = qux - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] require_new = yes Deposed ID 1 = bar `) @@ -5203,7 +5164,7 @@ aws_instance.web: (1 deposed) checkStateString(t, state, ` aws_instance.web: ID = quux - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] require_new = yes `) } @@ -5224,8 +5185,8 @@ func TestContext2Apply_provisionerFailContinue(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -5245,7 +5206,7 @@ func TestContext2Apply_provisionerFailContinue(t *testing.T) { checkStateString(t, state, ` aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar type = aws_instance `) @@ -5273,8 +5234,8 @@ func TestContext2Apply_provisionerFailContinueHook(t *testing.T) { Config: m, Hooks: []Hook{h}, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -5306,36 +5267,31 @@ func TestContext2Apply_provisionerDestroy(t *testing.T) { p.DiffFn = testDiffFn pr.ApplyFn = func(rs *InstanceState, c *ResourceConfig) error { val, ok := c.Config["command"] - if !ok || val != "destroy" { + if !ok || val != "destroy a" { t.Fatalf("bad value for foo: %v %#v", val, c) } return nil } - state := MustShimLegacyState(&State{ - Modules: []*ModuleState{ - &ModuleState{ - Path: rootModulePath, - Resources: map[string]*ResourceState{ - "aws_instance.foo": &ResourceState{ - Type: "aws_instance", - Primary: &InstanceState{ - ID: "bar", - }, - }, - }, - }, + state := states.NewState() + root := state.RootModule() + root.SetResourceInstanceCurrent( + 
mustResourceInstanceAddr(`aws_instance.foo["a"]`).Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"bar"}`), }, - }) + mustProviderConfig(`provider["registry.terraform.io/-/aws"]`), + ) ctx := testContext2(t, &ContextOpts{ Config: m, State: state, Destroy: true, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -5371,29 +5327,24 @@ func TestContext2Apply_provisionerDestroyFail(t *testing.T) { return fmt.Errorf("provisioner error") } - state := MustShimLegacyState(&State{ - Modules: []*ModuleState{ - &ModuleState{ - Path: rootModulePath, - Resources: map[string]*ResourceState{ - "aws_instance.foo": &ResourceState{ - Type: "aws_instance", - Primary: &InstanceState{ - ID: "bar", - }, - }, - }, - }, + state := states.NewState() + root := state.RootModule() + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr(`aws_instance.foo["a"]`).Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"bar"}`), }, - }) + mustProviderConfig(`provider["registry.terraform.io/-/aws"]`), + ) ctx := testContext2(t, &ContextOpts{ Config: m, State: state, Destroy: true, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -5411,9 +5362,9 @@ func TestContext2Apply_provisionerDestroyFail(t *testing.T) { } checkStateString(t, state, ` -aws_instance.foo: +aws_instance.foo["a"]: ID = bar - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] `) // Verify apply was invoked @@ -5445,29 +5396,24 @@ func TestContext2Apply_provisionerDestroyFailContinue(t *testing.T) { return fmt.Errorf("provisioner error") } - state := MustShimLegacyState(&State{ - Modules: []*ModuleState{ - &ModuleState{ - Path: rootModulePath, - Resources: map[string]*ResourceState{ - "aws_instance.foo": &ResourceState{ - Type: "aws_instance", - Primary: &InstanceState{ - ID: "bar", - }, - }, - }, - }, + state := states.NewState() + root := state.RootModule() + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr(`aws_instance.foo["a"]`).Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"bar"}`), }, - }) + mustProviderConfig(`provider["registry.terraform.io/-/aws"]`), + ) ctx := testContext2(t, &ContextOpts{ Config: m, State: state, Destroy: true, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -5542,8 +5488,8 @@ func TestContext2Apply_provisionerDestroyFailContinueFail(t *testing.T) { State: state, Destroy: true, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -5563,7 +5509,7 @@ func TestContext2Apply_provisionerDestroyFailContinueFail(t *testing.T) { checkStateString(t, state, ` aws_instance.foo: ID = bar - provider = 
provider.aws + provider = provider["registry.terraform.io/-/aws"] `) // Verify apply was invoked @@ -5587,7 +5533,7 @@ func TestContext2Apply_provisionerDestroyTainted(t *testing.T) { destroyCalled := false pr.ApplyFn = func(rs *InstanceState, c *ResourceConfig) error { - expected := "create" + expected := "create a b" if rs.ID == "bar" { destroyCalled = true return nil @@ -5601,34 +5547,36 @@ func TestContext2Apply_provisionerDestroyTainted(t *testing.T) { return nil } - state := MustShimLegacyState(&State{ - Modules: []*ModuleState{ - &ModuleState{ - Path: rootModulePath, - Resources: map[string]*ResourceState{ - "aws_instance.foo": &ResourceState{ - Type: "aws_instance", - Primary: &InstanceState{ - ID: "bar", - Tainted: true, - }, - }, - }, - }, + state := states.NewState() + root := state.RootModule() + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr(`aws_instance.foo["a"]`).Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectTainted, + AttrsJSON: []byte(`{"id":"bar"}`), }, - }) + mustProviderConfig(`provider["registry.terraform.io/-/aws"]`), + ) ctx := testContext2(t, &ContextOpts{ Config: m, State: state, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ "shell": testProvisionerFuncFixed(pr), }, + Variables: InputValues{ + "input": &InputValue{ + Value: cty.MapVal(map[string]cty.Value{ + "a": cty.StringVal("b"), + }), + SourceType: ValueFromInput, + }, + }, }) if _, diags := ctx.Plan(); diags.HasErrors() { @@ -5641,9 +5589,9 @@ func TestContext2Apply_provisionerDestroyTainted(t *testing.T) { } checkStateString(t, state, ` -aws_instance.foo: +aws_instance.foo["a"]: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar type = aws_instance `) @@ -5658,196 +5606,6 @@ aws_instance.foo: } } -func TestContext2Apply_provisionerDestroyModule(t *testing.T) { - m := testModule(t, "apply-provisioner-destroy-module") - p := testProvider("aws") - pr := testProvisioner() - p.ApplyFn = testApplyFn - p.DiffFn = testDiffFn - pr.ApplyFn = func(rs *InstanceState, c *ResourceConfig) error { - val, ok := c.Config["command"] - if !ok || val != "value" { - t.Fatalf("bad value for foo: %v %#v", val, c) - } - - return nil - } - - state := MustShimLegacyState(&State{ - Modules: []*ModuleState{ - &ModuleState{ - Path: []string{"root", "child"}, - Resources: map[string]*ResourceState{ - "aws_instance.foo": &ResourceState{ - Type: "aws_instance", - Primary: &InstanceState{ - ID: "bar", - }, - }, - }, - }, - }, - }) - - ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, - Destroy: true, - ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), - }, - ), - Provisioners: map[string]ProvisionerFactory{ - "shell": testProvisionerFuncFixed(pr), - }, - }) - - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, diags := ctx.Apply() - if diags.HasErrors() { - t.Fatalf("diags: %s", diags.Err()) - } - - checkStateString(t, state, ``) - - // Verify apply was invoked - if !pr.ProvisionResourceCalled { - t.Fatalf("provisioner not invoked") - } -} - -func TestContext2Apply_provisionerDestroyRef(t *testing.T) { - m := testModule(t, "apply-provisioner-destroy-ref") - p := testProvider("aws") - pr := testProvisioner() - p.ApplyFn 
= testApplyFn - p.DiffFn = testDiffFn - pr.ApplyFn = func(rs *InstanceState, c *ResourceConfig) error { - val, ok := c.Config["command"] - if !ok || val != "hello" { - return fmt.Errorf("bad value for command: %v %#v", val, c) - } - - return nil - } - - state := MustShimLegacyState(&State{ - Modules: []*ModuleState{ - &ModuleState{ - Path: rootModulePath, - Resources: map[string]*ResourceState{ - "aws_instance.bar": &ResourceState{ - Type: "aws_instance", - Primary: &InstanceState{ - ID: "bar", - Attributes: map[string]string{ - "value": "hello", - }, - }, - Provider: "provider.aws", - }, - - "aws_instance.foo": &ResourceState{ - Type: "aws_instance", - Primary: &InstanceState{ - ID: "bar", - }, - Provider: "provider.aws", - }, - }, - }, - }, - }) - - ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, - Destroy: true, - ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), - }, - ), - Provisioners: map[string]ProvisionerFactory{ - "shell": testProvisionerFuncFixed(pr), - }, - }) - - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, diags := ctx.Apply() - if diags.HasErrors() { - t.Fatalf("diags: %s", diags.Err()) - } - - checkStateString(t, state, ``) - - // Verify apply was invoked - if !pr.ProvisionResourceCalled { - t.Fatalf("provisioner not invoked") - } -} - -// Test that a destroy provisioner referencing an invalid key errors. -func TestContext2Apply_provisionerDestroyRefInvalid(t *testing.T) { - m := testModule(t, "apply-provisioner-destroy-ref-invalid") - p := testProvider("aws") - pr := testProvisioner() - p.ApplyFn = testApplyFn - p.DiffFn = testDiffFn - pr.ApplyFn = func(rs *InstanceState, c *ResourceConfig) error { - return nil - } - - state := MustShimLegacyState(&State{ - Modules: []*ModuleState{ - &ModuleState{ - Path: rootModulePath, - Resources: map[string]*ResourceState{ - "aws_instance.bar": &ResourceState{ - Type: "aws_instance", - Primary: &InstanceState{ - ID: "bar", - }, - }, - - "aws_instance.foo": &ResourceState{ - Type: "aws_instance", - Primary: &InstanceState{ - ID: "bar", - }, - }, - }, - }, - }, - }) - - ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, - Destroy: true, - ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), - }, - ), - Provisioners: map[string]ProvisionerFactory{ - "shell": testProvisionerFuncFixed(pr), - }, - }) - - // this was an apply test, but this is now caught in Validation - if diags := ctx.Validate(); !diags.HasErrors() { - t.Fatal("expected error") - } -} - func TestContext2Apply_provisionerResourceRef(t *testing.T) { m := testModule(t, "apply-provisioner-resource-ref") p := testProvider("aws") @@ -5867,8 +5625,8 @@ func TestContext2Apply_provisionerResourceRef(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -5915,8 +5673,8 @@ func TestContext2Apply_provisionerSelfRef(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: 
map[string]ProvisionerFactory{ @@ -5970,8 +5728,8 @@ func TestContext2Apply_provisionerMultiSelfRef(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -6032,8 +5790,8 @@ func TestContext2Apply_provisionerMultiSelfRefSingle(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -6089,8 +5847,8 @@ func TestContext2Apply_provisionerExplicitSelfRef(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -6120,8 +5878,8 @@ func TestContext2Apply_provisionerExplicitSelfRef(t *testing.T) { Destroy: true, State: state, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -6143,6 +5901,44 @@ func TestContext2Apply_provisionerExplicitSelfRef(t *testing.T) { } } +func TestContext2Apply_provisionerForEachSelfRef(t *testing.T) { + m := testModule(t, "apply-provisioner-for-each-self") + p := testProvider("aws") + pr := testProvisioner() + p.ApplyFn = testApplyFn + p.DiffFn = testDiffFn + + pr.ApplyFn = func(rs *InstanceState, c *ResourceConfig) error { + val, ok := c.Config["command"] + if !ok { + t.Fatalf("bad value for command: %v %#v", val, c) + } + + return nil + } + + ctx := testContext2(t, &ContextOpts{ + Config: m, + ProviderResolver: providers.ResolverFixed( + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), + }, + ), + Provisioners: map[string]ProvisionerFactory{ + "shell": testProvisionerFuncFixed(pr), + }, + }) + + if _, diags := ctx.Plan(); diags.HasErrors() { + t.Fatalf("plan errors: %s", diags.Err()) + } + + _, diags := ctx.Apply() + if diags.HasErrors() { + t.Fatalf("diags: %s", diags.Err()) + } +} + // Provisioner should NOT run on a diff, only create func TestContext2Apply_Provisioner_Diff(t *testing.T) { m := testModule(t, "apply-provisioner-diff") @@ -6156,8 +5952,8 @@ func TestContext2Apply_Provisioner_Diff(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -6206,8 +6002,8 @@ func TestContext2Apply_Provisioner_Diff(t *testing.T) { ctx = testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -6259,8 +6055,8 
@@ func TestContext2Apply_outputDiffVars(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -6322,8 +6118,8 @@ func TestContext2Apply_destroyX(t *testing.T) { Config: m, Hooks: []Hook{h}, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -6346,8 +6142,8 @@ func TestContext2Apply_destroyX(t *testing.T) { Config: m, Hooks: []Hook{h}, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -6386,8 +6182,8 @@ func TestContext2Apply_destroyOrder(t *testing.T) { Config: m, Hooks: []Hook{h}, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -6412,8 +6208,8 @@ func TestContext2Apply_destroyOrder(t *testing.T) { Config: m, Hooks: []Hook{h}, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -6453,8 +6249,8 @@ func TestContext2Apply_destroyModulePrefix(t *testing.T) { Config: m, Hooks: []Hook{h}, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -6482,8 +6278,8 @@ func TestContext2Apply_destroyModulePrefix(t *testing.T) { Config: m, Hooks: []Hook{h}, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -6529,8 +6325,8 @@ func TestContext2Apply_destroyNestedModule(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -6579,8 +6375,8 @@ func TestContext2Apply_destroyDeeplyNestedModule(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -6614,8 +6410,8 @@ func TestContext2Apply_destroyModuleWithAttrsReferencingResource(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -6646,16 +6442,10 @@ func TestContext2Apply_destroyModuleWithAttrsReferencingResource(t *testing.T) { State: state, Hooks: []Hook{h}, ProviderResolver: providers.ResolverFixed( - 
map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), - Variables: InputValues{ - "key_name": &InputValue{ - Value: cty.StringVal("foobarkey"), - SourceType: ValueFromCaller, - }, - }, }) // First plan and apply a create operation @@ -6672,8 +6462,8 @@ func TestContext2Apply_destroyModuleWithAttrsReferencingResource(t *testing.T) { } ctxOpts.ProviderResolver = providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ) ctx, diags = NewContext(ctxOpts) @@ -6709,8 +6499,8 @@ func TestContext2Apply_destroyWithModuleVariableAndCount(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -6736,8 +6526,8 @@ func TestContext2Apply_destroyWithModuleVariableAndCount(t *testing.T) { State: state, Hooks: []Hook{h}, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -6754,8 +6544,8 @@ func TestContext2Apply_destroyWithModuleVariableAndCount(t *testing.T) { } ctxOpts.ProviderResolver = providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ) ctx, diags = NewContext(ctxOpts) @@ -6792,8 +6582,8 @@ func TestContext2Apply_destroyTargetWithModuleVariableAndCount(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -6815,8 +6605,8 @@ func TestContext2Apply_destroyTargetWithModuleVariableAndCount(t *testing.T) { Config: m, State: state, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Targets: []addrs.Targetable{ @@ -6875,8 +6665,8 @@ func TestContext2Apply_destroyWithModuleVariableAndCountNested(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -6902,8 +6692,8 @@ func TestContext2Apply_destroyWithModuleVariableAndCountNested(t *testing.T) { State: state, Hooks: []Hook{h}, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -6920,8 +6710,8 @@ func TestContext2Apply_destroyWithModuleVariableAndCountNested(t *testing.T) { } ctxOpts.ProviderResolver = providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), 
}, ) ctx, diags = NewContext(ctxOpts) @@ -6954,8 +6744,8 @@ func TestContext2Apply_destroyOutputs(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -6977,8 +6767,8 @@ func TestContext2Apply_destroyOutputs(t *testing.T) { State: state, Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -7003,8 +6793,8 @@ func TestContext2Apply_destroyOutputs(t *testing.T) { State: state, Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -7038,8 +6828,8 @@ func TestContext2Apply_destroyOrphan(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -7122,8 +6912,8 @@ func TestContext2Apply_destroyTaintedProvisioner(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -7161,8 +6951,8 @@ func TestContext2Apply_error(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -7258,14 +7048,15 @@ func TestContext2Apply_errorDestroy(t *testing.T) { Status: states.ObjectReady, AttrsJSON: []byte(`{"id":"baz"}`), }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }), ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), }, ), }) @@ -7283,7 +7074,7 @@ func TestContext2Apply_errorDestroy(t *testing.T) { expected := strings.TrimSpace(` test_thing.foo: ID = baz - provider = provider.test + provider = provider["registry.terraform.io/-/test"] `) // test_thing.foo is still here, even though provider returned no new state along with its error if actual != expected { t.Fatalf("expected:\n%s\n\ngot:\n%s", expected, actual) @@ -7325,8 +7116,8 @@ func TestContext2Apply_errorCreateInvalidNew(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -7397,14 +7188,15 @@ func TestContext2Apply_errorUpdateNullNew(t *testing.T) { Status: states.ObjectReady, AttrsJSON: []byte(`{"value":"old"}`), }, 
- addrs.ProviderConfig{ - Type: "aws", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, ) }), ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -7466,8 +7258,8 @@ func TestContext2Apply_errorPartial(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -7534,8 +7326,8 @@ func TestContext2Apply_hook(t *testing.T) { Config: m, Hooks: []Hook{h}, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -7588,8 +7380,8 @@ func TestContext2Apply_hookOrphan(t *testing.T) { State: state, Hooks: []Hook{h}, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -7619,8 +7411,8 @@ func TestContext2Apply_idAttr(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -7677,8 +7469,8 @@ func TestContext2Apply_outputBasic(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -7707,8 +7499,8 @@ func TestContext2Apply_outputAdd(t *testing.T) { ctx1 := testContext2(t, &ContextOpts{ Config: m1, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p1), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p1), }, ), }) @@ -7729,8 +7521,8 @@ func TestContext2Apply_outputAdd(t *testing.T) { ctx2 := testContext2(t, &ContextOpts{ Config: m2, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p2), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p2), }, ), State: state1, @@ -7760,8 +7552,8 @@ func TestContext2Apply_outputList(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -7790,8 +7582,8 @@ func TestContext2Apply_outputMulti(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -7820,8 +7612,8 @@ func TestContext2Apply_outputMultiIndex(t *testing.T) { ctx := testContext2(t, 
&ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -7884,8 +7676,8 @@ func TestContext2Apply_taintX(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -7952,8 +7744,8 @@ func TestContext2Apply_taintDep(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -8016,8 +7808,8 @@ func TestContext2Apply_taintDepRequiresNew(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -8049,8 +7841,8 @@ func TestContext2Apply_targeted(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Targets: []addrs.Targetable{ @@ -8077,7 +7869,7 @@ func TestContext2Apply_targeted(t *testing.T) { checkStateString(t, state, ` aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] num = 2 type = aws_instance `) @@ -8091,8 +7883,8 @@ func TestContext2Apply_targetedCount(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Targets: []addrs.Targetable{ @@ -8114,13 +7906,13 @@ func TestContext2Apply_targetedCount(t *testing.T) { checkStateString(t, state, ` aws_instance.foo.0: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] aws_instance.foo.1: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] aws_instance.foo.2: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] `) } @@ -8132,8 +7924,8 @@ func TestContext2Apply_targetedCountIndex(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Targets: []addrs.Targetable{ @@ -8155,7 +7947,7 @@ func TestContext2Apply_targetedCountIndex(t *testing.T) { checkStateString(t, state, ` aws_instance.foo.1: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] `) } @@ -8167,8 +7959,8 @@ func TestContext2Apply_targetedDestroy(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + 
map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: MustShimLegacyState(&State{ @@ -8207,281 +7999,44 @@ func TestContext2Apply_targetedDestroy(t *testing.T) { checkStateString(t, state, ` aws_instance.bar: ID = i-abc123 - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] `) } -func TestContext2Apply_destroyProvisionerWithLocals(t *testing.T) { - m := testModule(t, "apply-provisioner-destroy-locals") - p := testProvider("aws") - p.ApplyFn = testApplyFn - p.DiffFn = testDiffFn - - pr := testProvisioner() - pr.ApplyFn = func(_ *InstanceState, rc *ResourceConfig) error { - cmd, ok := rc.Get("command") - if !ok || cmd != "local" { - return fmt.Errorf("provisioner got %v:%s", ok, cmd) - } - return nil - } - pr.GetSchemaResponse = provisioners.GetSchemaResponse{ - Provisioner: &configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "command": { - Type: cty.String, - Required: true, - }, - "when": { - Type: cty.String, - Optional: true, - }, - }, - }, - } - - ctx := testContext2(t, &ContextOpts{ - Config: m, - ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), - }, - ), - Provisioners: map[string]ProvisionerFactory{ - "shell": testProvisionerFuncFixed(pr), - }, - State: MustShimLegacyState(&State{ - Modules: []*ModuleState{ - &ModuleState{ - Path: []string{"root"}, - Resources: map[string]*ResourceState{ - "aws_instance.foo": resourceState("aws_instance", "1234"), - }, - }, - }, - }), - Destroy: true, - // the test works without targeting, but this also tests that the local - // node isn't inadvertently pruned because of the wrong evaluation - // order. - Targets: []addrs.Targetable{ - addrs.RootModuleInstance.Resource( - addrs.ManagedResourceMode, "aws_instance", "foo", - ), - }, - }) - - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatal(diags.Err()) - } - - if _, diags := ctx.Apply(); diags.HasErrors() { - t.Fatal(diags.Err()) - } - - if !pr.ProvisionResourceCalled { - t.Fatal("provisioner not called") - } -} - -// this also tests a local value in the config referencing a resource that -// wasn't in the state during destroy. 
-func TestContext2Apply_destroyProvisionerWithMultipleLocals(t *testing.T) { - m := testModule(t, "apply-provisioner-destroy-multiple-locals") - p := testProvider("aws") - p.ApplyFn = testApplyFn - p.DiffFn = testDiffFn - - pr := testProvisioner() - pr.GetSchemaResponse = provisioners.GetSchemaResponse{ - Provisioner: &configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "id": { - Type: cty.String, - Required: true, - }, - "command": { - Type: cty.String, - Required: true, - }, - "when": { - Type: cty.String, - Optional: true, - }, - }, - }, - } - - pr.ApplyFn = func(is *InstanceState, rc *ResourceConfig) error { - cmd, ok := rc.Get("command") - if !ok { - return errors.New("no command in provisioner") - } - id, ok := rc.Get("id") - if !ok { - return errors.New("no id in provisioner") - } - - switch id { - case "1234": - if cmd != "local" { - return fmt.Errorf("provisioner %q got:%q", is.ID, cmd) - } - case "3456": - if cmd != "1234" { - return fmt.Errorf("provisioner %q got:%q", is.ID, cmd) - } - default: - t.Fatal("unknown instance") - } - return nil - } - - ctx := testContext2(t, &ContextOpts{ - Config: m, - ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), - }, - ), - Provisioners: map[string]ProvisionerFactory{ - "shell": testProvisionerFuncFixed(pr), - }, - State: MustShimLegacyState(&State{ - Modules: []*ModuleState{ - &ModuleState{ - Path: []string{"root"}, - Resources: map[string]*ResourceState{ - "aws_instance.foo": resourceState("aws_instance", "1234"), - "aws_instance.bar": resourceState("aws_instance", "3456"), - }, - }, - }, - }), - Destroy: true, - }) - - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatal(diags.Err()) - } - - if _, diags := ctx.Apply(); diags.HasErrors() { - t.Fatal(diags.Err()) - } - - if !pr.ProvisionResourceCalled { - t.Fatal("provisioner not called") - } -} - -func TestContext2Apply_destroyProvisionerWithOutput(t *testing.T) { - m := testModule(t, "apply-provisioner-destroy-outputs") - p := testProvider("aws") - p.ApplyFn = testApplyFn - p.DiffFn = testDiffFn - - pr := testProvisioner() - pr.ApplyFn = func(is *InstanceState, rc *ResourceConfig) error { - cmd, ok := rc.Get("command") - if !ok || cmd != "3" { - return fmt.Errorf("provisioner for %s got %v:%s", is.ID, ok, cmd) - } - return nil - } - ctx := testContext2(t, &ContextOpts{ - Config: m, - ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), - }, - ), - Provisioners: map[string]ProvisionerFactory{ - "shell": testProvisionerFuncFixed(pr), - }, - State: MustShimLegacyState(&State{ - Modules: []*ModuleState{ - &ModuleState{ - Path: []string{"root"}, - Resources: map[string]*ResourceState{ - "aws_instance.foo": resourceState("aws_instance", "1"), - }, - Outputs: map[string]*OutputState{ - "value": { - Type: "string", - Value: "3", - }, - }, - }, - &ModuleState{ - Path: []string{"root", "mod"}, - Resources: map[string]*ResourceState{ - "aws_instance.baz": resourceState("aws_instance", "3"), - }, - // state needs to be properly initialized - Outputs: map[string]*OutputState{}, - }, - &ModuleState{ - Path: []string{"root", "mod2"}, - Resources: map[string]*ResourceState{ - "aws_instance.bar": resourceState("aws_instance", "2"), - }, - }, - }, - }), - Destroy: true, - - // targeting the source of the value used by all resources should still - // destroy them all. 
- Targets: []addrs.Targetable{ - addrs.RootModuleInstance.Child("mod", addrs.NoKey).Resource( - addrs.ManagedResourceMode, "aws_instance", "baz", - ), - }, - }) - - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatal(diags.Err()) - } - - state, diags := ctx.Apply() - if diags.HasErrors() { - t.Fatal(diags.Err()) - } - if !pr.ProvisionResourceCalled { - t.Fatal("provisioner not called") - } - - // confirm all outputs were removed too - for _, mod := range state.Modules { - if len(mod.OutputValues) > 0 { - t.Fatalf("output left in module state: %#v\n", mod) - } - } -} - func TestContext2Apply_targetedDestroyCountDeps(t *testing.T) { m := testModule(t, "apply-destroy-targeted-count") p := testProvider("aws") p.ApplyFn = testApplyFn p.DiffFn = testDiffFn + + state := states.NewState() + root := state.EnsureModule(addrs.RootModuleInstance) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("aws_instance.foo").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"i-bcd345"}`), + }, + mustProviderConfig(`provider["registry.terraform.io/-/aws"]`), + ) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("aws_instance.bar").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"i-abc123"}`), + Dependencies: []addrs.AbsResource{mustResourceAddr("aws_instance.foo")}, + }, + mustProviderConfig(`provider["registry.terraform.io/-/aws"]`), + ) + ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), - State: MustShimLegacyState(&State{ - Modules: []*ModuleState{ - &ModuleState{ - Path: rootModulePath, - Resources: map[string]*ResourceState{ - "aws_instance.foo": resourceState("aws_instance", "i-bcd345"), - "aws_instance.bar": resourceState("aws_instance", "i-abc123"), - }, - }, - }, - }), + State: state, Targets: []addrs.Targetable{ addrs.RootModuleInstance.Resource( addrs.ManagedResourceMode, "aws_instance", "foo", @@ -8511,8 +8066,8 @@ func TestContext2Apply_targetedDestroyModule(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: MustShimLegacyState(&State{ @@ -8553,15 +8108,15 @@ func TestContext2Apply_targetedDestroyModule(t *testing.T) { checkStateString(t, state, ` aws_instance.bar: ID = i-abc123 - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] aws_instance.foo: ID = i-bcd345 - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] module.child: aws_instance.bar: ID = i-abc123 - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] `) } @@ -8573,8 +8128,8 @@ func TestContext2Apply_targetedDestroyCountIndex(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: MustShimLegacyState(&State{ @@ -8615,16 +8170,16 @@ func TestContext2Apply_targetedDestroyCountIndex(t *testing.T) { checkStateString(t, state, ` aws_instance.bar.0: ID = i-abc123 
- provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] aws_instance.bar.2: ID = i-abc123 - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] aws_instance.foo.0: ID = i-bcd345 - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] aws_instance.foo.1: ID = i-bcd345 - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] `) } @@ -8636,8 +8191,8 @@ func TestContext2Apply_targetedModule(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Targets: []addrs.Targetable{ @@ -8667,12 +8222,12 @@ func TestContext2Apply_targetedModule(t *testing.T) { module.child: aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] num = 2 type = aws_instance aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] num = 2 type = aws_instance `) @@ -8687,8 +8242,8 @@ func TestContext2Apply_targetedModuleDep(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Targets: []addrs.Targetable{ @@ -8712,17 +8267,17 @@ func TestContext2Apply_targetedModuleDep(t *testing.T) { checkStateString(t, state, ` aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = foo type = aws_instance Dependencies: - module.child + module.child.aws_instance.mod module.child: aws_instance.mod: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] Outputs: @@ -8740,8 +8295,8 @@ func TestContext2Apply_targetedModuleUnrelatedOutputs(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Targets: []addrs.Targetable{ @@ -8795,7 +8350,7 @@ child2_id = foo module.child2: aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] Outputs: @@ -8811,8 +8366,8 @@ func TestContext2Apply_targetedModuleResource(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Targets: []addrs.Targetable{ @@ -8841,7 +8396,7 @@ func TestContext2Apply_targetedModuleResource(t *testing.T) { module.child: aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] num = 2 type = aws_instance `) @@ -8872,8 +8427,8 @@ func TestContext2Apply_targetedResourceOrphanModule(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -8901,8 +8456,8 @@ func 
TestContext2Apply_unknownAttribute(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -8931,8 +8486,8 @@ func TestContext2Apply_unknownAttributeInterpolate(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -9048,39 +8603,56 @@ func TestContext2Apply_createBefore_depends(t *testing.T) { p := testProvider("aws") p.ApplyFn = testApplyFn p.DiffFn = testDiffFn - state := MustShimLegacyState(&State{ - Modules: []*ModuleState{ - &ModuleState{ - Path: rootModulePath, - Resources: map[string]*ResourceState{ - "aws_instance.web": &ResourceState{ + state := states.NewState() + root := state.EnsureModule(addrs.RootModuleInstance) + root.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "aws_instance", + Name: "web", + }.Instance(addrs.NoKey), + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"bar","require_new":"ami-old"}`), + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, + ) + + root.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "aws_instance", + Name: "lb", + }.Instance(addrs.NoKey), + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"baz","instance":"bar"}`), + Dependencies: []addrs.AbsResource{ + addrs.AbsResource{ + Resource: addrs.Resource{ + Mode: addrs.ManagedResourceMode, Type: "aws_instance", - Primary: &InstanceState{ - ID: "bar", - Attributes: map[string]string{ - "require_new": "ami-old", - }, - }, - }, - "aws_instance.lb": &ResourceState{ - Type: "aws_instance", - Primary: &InstanceState{ - ID: "baz", - Attributes: map[string]string{ - "instance": "bar", - }, - }, + Name: "web", }, + Module: addrs.RootModuleInstance, }, }, }, - }) + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, + ) + ctx := testContext2(t, &ContextOpts{ Config: m, Hooks: []Hook{h}, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -9114,17 +8686,18 @@ func TestContext2Apply_createBefore_depends(t *testing.T) { // Test that things were managed _in the right order_ order := h.States + diffs := h.Diffs if !order[0].IsNull() || diffs[0].Action == plans.Delete { t.Fatalf("should create new instance first: %#v", order) } if order[1].GetAttr("id").AsString() != "baz" { - t.Fatalf("update must happen after create: %#v", order) + t.Fatalf("update must happen after create: %#v", order[1]) } if order[2].GetAttr("id").AsString() != "bar" || diffs[2].Action != plans.Delete { - t.Fatalf("destroy must happen after update: %#v", order) + t.Fatalf("destroy must happen after update: %#v", order[2]) } } @@ -9163,39 +8736,56 @@ func TestContext2Apply_singleDestroy(t *testing.T) { return testApplyFn(info, s, d) } p.DiffFn = testDiffFn - state := MustShimLegacyState(&State{ - Modules: []*ModuleState{ - 
&ModuleState{ - Path: rootModulePath, - Resources: map[string]*ResourceState{ - "aws_instance.web": &ResourceState{ + state := states.NewState() + root := state.EnsureModule(addrs.RootModuleInstance) + root.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "aws_instance", + Name: "web", + }.Instance(addrs.NoKey), + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"bar","require_new":"ami-old"}`), + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, + ) + + root.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "aws_instance", + Name: "lb", + }.Instance(addrs.NoKey), + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"baz","instance":"bar"}`), + Dependencies: []addrs.AbsResource{ + addrs.AbsResource{ + Resource: addrs.Resource{ + Mode: addrs.ManagedResourceMode, Type: "aws_instance", - Primary: &InstanceState{ - ID: "bar", - Attributes: map[string]string{ - "require_new": "ami-old", - }, - }, - }, - "aws_instance.lb": &ResourceState{ - Type: "aws_instance", - Primary: &InstanceState{ - ID: "baz", - Attributes: map[string]string{ - "instance": "bar", - }, - }, + Name: "web", }, + Module: addrs.RootModuleInstance, }, }, }, - }) + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, + ) + ctx := testContext2(t, &ContextOpts{ Config: m, Hooks: []Hook{h}, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -9238,8 +8828,8 @@ func TestContext2Apply_issue7824(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "template": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("template"): testProviderFuncFixed(p), }, ), }) @@ -9256,8 +8846,8 @@ func TestContext2Apply_issue7824(t *testing.T) { } ctxOpts.ProviderResolver = providers.ResolverFixed( - map[string]providers.Factory{ - "template": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("template"): testProviderFuncFixed(p), }, ) ctx, diags = NewContext(ctxOpts) @@ -9296,8 +8886,8 @@ func TestContext2Apply_issue5254(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: testModule(t, "issue-5254/step-0"), ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "template": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("template"): testProviderFuncFixed(p), }, ), }) @@ -9319,8 +8909,8 @@ func TestContext2Apply_issue5254(t *testing.T) { Config: m, State: state, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "template": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("template"): testProviderFuncFixed(p), }, ), }) @@ -9337,8 +8927,8 @@ func TestContext2Apply_issue5254(t *testing.T) { } ctxOpts.ProviderResolver = providers.ResolverFixed( - map[string]providers.Factory{ - "template": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("template"): testProviderFuncFixed(p), }, ) ctx, diags = NewContext(ctxOpts) @@ -9355,7 +8945,7 @@ func 
TestContext2Apply_issue5254(t *testing.T) { expected := strings.TrimSpace(` template_file.child: ID = foo - provider = provider.template + provider = provider["registry.terraform.io/-/template"] __template_requires_new = true template = Hi type = template_file @@ -9364,7 +8954,7 @@ template_file.child: template_file.parent template_file.parent.0: ID = foo - provider = provider.template + provider = provider["registry.terraform.io/-/template"] template = Hi type = template_file `) @@ -9381,8 +8971,8 @@ func TestContext2Apply_targetedWithTaintedInState(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Targets: []addrs.Targetable{ @@ -9420,8 +9010,8 @@ func TestContext2Apply_targetedWithTaintedInState(t *testing.T) { } ctxOpts.ProviderResolver = providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ) ctx, diags = NewContext(ctxOpts) @@ -9438,10 +9028,10 @@ func TestContext2Apply_targetedWithTaintedInState(t *testing.T) { expected := strings.TrimSpace(` aws_instance.iambeingadded: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] aws_instance.ifailedprovisioners: (tainted) ID = ifailedprovisioners - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] `) if actual != expected { t.Fatalf("expected state: \n%s\ngot: \n%s", expected, actual) @@ -9465,8 +9055,8 @@ func TestContext2Apply_ignoreChangesCreate(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -9492,7 +9082,7 @@ func TestContext2Apply_ignoreChangesCreate(t *testing.T) { expected := strings.TrimSpace(` aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] required_field = set type = aws_instance `) @@ -9578,8 +9168,8 @@ func TestContext2Apply_ignoreChangesWithDep(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -9613,8 +9203,8 @@ func TestContext2Apply_ignoreChangesWildcard(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -9639,7 +9229,7 @@ func TestContext2Apply_ignoreChangesWildcard(t *testing.T) { expected := strings.TrimSpace(` aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] required_field = set type = aws_instance `) @@ -9661,8 +9251,8 @@ func TestContext2Apply_destroyNestedModuleWithAttrsReferencingResource(t *testin ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "null": testProviderFuncFixed(p), + 
map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("null"): testProviderFuncFixed(p), }, ), }) @@ -9684,8 +9274,8 @@ func TestContext2Apply_destroyNestedModuleWithAttrsReferencingResource(t *testin Config: m, State: state, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "null": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("null"): testProviderFuncFixed(p), }, ), }) @@ -9701,8 +9291,8 @@ func TestContext2Apply_destroyNestedModuleWithAttrsReferencingResource(t *testin } ctxOpts.ProviderResolver = providers.ResolverFixed( - map[string]providers.Factory{ - "null": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("null"): testProviderFuncFixed(p), }, ) ctx, diags = NewContext(ctxOpts) @@ -9730,8 +9320,8 @@ func TestContext2Apply_dataDependsOn(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "null": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("null"): testProviderFuncFixed(p), }, ), }) @@ -9797,8 +9387,8 @@ func TestContext2Apply_terraformWorkspace(t *testing.T) { Meta: &ContextMeta{Env: "foo"}, Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -9828,8 +9418,8 @@ func TestContext2Apply_multiRef(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -9857,8 +9447,8 @@ func TestContext2Apply_targetedModuleRecursive(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Targets: []addrs.Targetable{ @@ -9890,7 +9480,7 @@ func TestContext2Apply_targetedModuleRecursive(t *testing.T) { module.child.subchild: aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] num = 2 type = aws_instance `) @@ -9901,7 +9491,7 @@ func TestContext2Apply_localVal(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{}, + map[addrs.Provider]providers.Factory{}, ), }) @@ -9971,8 +9561,8 @@ func TestContext2Apply_destroyWithLocals(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -10012,8 +9602,8 @@ func TestContext2Apply_providerWithLocals(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -10030,8 +9620,8 @@ func TestContext2Apply_providerWithLocals(t *testing.T) { ctx = testContext2(t, &ContextOpts{ Config: m, 
ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -10062,38 +9652,25 @@ func TestContext2Apply_destroyWithProviders(t *testing.T) { p.ApplyFn = testApplyFn p.DiffFn = testDiffFn - s := MustShimLegacyState(&State{ - Modules: []*ModuleState{ - &ModuleState{ - Path: rootModulePath, - }, - &ModuleState{ - Path: []string{"root", "child"}, - }, - &ModuleState{ - Path: []string{"root", "mod", "removed"}, - Resources: map[string]*ResourceState{ - "aws_instance.child": &ResourceState{ - Type: "aws_instance", - Primary: &InstanceState{ - ID: "bar", - }, - // this provider doesn't exist - Provider: "provider.aws.baz", - }, - }, - }, + state := states.NewState() + removed := state.EnsureModule(addrs.RootModuleInstance.Child("mod", addrs.NoKey).Child("removed", addrs.NoKey)) + removed.SetResourceInstanceCurrent( + mustResourceInstanceAddr("aws_instance.child").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"bar"}`), }, - }) + mustProviderConfig(`provider["registry.terraform.io/-/aws"].baz`), + ) ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), - State: s, + State: state, Destroy: true, }) @@ -10103,10 +9680,11 @@ func TestContext2Apply_destroyWithProviders(t *testing.T) { } // correct the state - s.Modules["module.mod.module.removed"].Resources["aws_instance.child"].ProviderConfig = addrs.ProviderConfig{ - Type: "aws", - Alias: "bar", - }.Absolute(addrs.RootModuleInstance) + state.Modules["module.mod.module.removed"].Resources["aws_instance.child"].ProviderConfig = addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Alias: "bar", + Module: addrs.RootModuleInstance, + } if _, diags := ctx.Plan(); diags.HasErrors() { t.Fatal(diags.Err()) @@ -10207,8 +9785,8 @@ func TestContext2Apply_providersFromState(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: tc.state, @@ -10245,8 +9823,8 @@ func TestContext2Apply_plannedInterpolatedCount(t *testing.T) { p.DiffFn = testDiffFn providerResolver := providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ) @@ -10307,8 +9885,8 @@ func TestContext2Apply_plannedDestroyInterpolatedCount(t *testing.T) { p.DiffFn = testDiffFn providerResolver := providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ) @@ -10363,7 +9941,6 @@ func TestContext2Apply_plannedDestroyInterpolatedCount(t *testing.T) { } ctxOpts.ProviderResolver = providerResolver - ctxOpts.Destroy = true ctx, diags = NewContext(ctxOpts) if diags.HasErrors() { t.Fatalf("err: %s", diags.Err()) @@ -10384,8 +9961,8 @@ func TestContext2Apply_scaleInMultivarRef(t *testing.T) { p.DiffFn = testDiffFn providerResolver := providers.ResolverFixed( - 
map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ) @@ -10467,8 +10044,8 @@ func TestContext2Apply_inconsistentWithPlan(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), }, ), }) @@ -10528,14 +10105,15 @@ func TestContext2Apply_issue19908(t *testing.T) { AttrsJSON: []byte(`{"baz":"old"}`), Status: states.ObjectReady, }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }), ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), }, ), }) @@ -10591,8 +10169,8 @@ func TestContext2Apply_invalidIndexRef(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), }, ), }) @@ -10614,3 +10192,552 @@ func TestContext2Apply_invalidIndexRef(t *testing.T) { t.Fatalf("missing expected error\ngot: %s\n\nwant: error containing %q", gotErr, wantErr) } } + +func TestContext2Apply_moduleReplaceCycle(t *testing.T) { + for _, mode := range []string{"normal", "cbd"} { + var m *configs.Config + + switch mode { + case "normal": + m = testModule(t, "apply-module-replace-cycle") + case "cbd": + m = testModule(t, "apply-module-replace-cycle-cbd") + } + + p := testProvider("aws") + p.DiffFn = testDiffFn + p.ApplyFn = testApplyFn + + instanceSchema := &configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "id": {Type: cty.String, Computed: true}, + "require_new": {Type: cty.String, Optional: true}, + }, + } + + p.GetSchemaReturn = &ProviderSchema{ + ResourceTypes: map[string]*configschema.Block{ + "aws_instance": instanceSchema, + }, + } + + state := states.NewState() + modA := state.EnsureModule(addrs.RootModuleInstance.Child("a", addrs.NoKey)) + modA.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "aws_instance", + Name: "a", + }.Instance(addrs.NoKey), + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"a","require_new":"old"}`), + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, + ) + + modB := state.EnsureModule(addrs.RootModuleInstance.Child("b", addrs.NoKey)) + modB.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "aws_instance", + Name: "b", + }.Instance(addrs.IntKey(0)), + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"b","require_new":"old"}`), + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, + ) + + aBefore, _ := plans.NewDynamicValue( + cty.ObjectVal(map[string]cty.Value{ + "id": cty.StringVal("a"), + "require_new": cty.StringVal("old"), + }), instanceSchema.ImpliedType()) + aAfter, _ := plans.NewDynamicValue( + cty.ObjectVal(map[string]cty.Value{ 
+ "id": cty.UnknownVal(cty.String), + "require_new": cty.StringVal("new"), + }), instanceSchema.ImpliedType()) + bBefore, _ := plans.NewDynamicValue( + cty.ObjectVal(map[string]cty.Value{ + "id": cty.StringVal("b"), + "require_new": cty.StringVal("old"), + }), instanceSchema.ImpliedType()) + bAfter, _ := plans.NewDynamicValue( + cty.ObjectVal(map[string]cty.Value{ + "id": cty.UnknownVal(cty.String), + "require_new": cty.UnknownVal(cty.String), + }), instanceSchema.ImpliedType()) + + var aAction plans.Action + switch mode { + case "normal": + aAction = plans.DeleteThenCreate + case "cbd": + aAction = plans.CreateThenDelete + } + + changes := &plans.Changes{ + Resources: []*plans.ResourceInstanceChangeSrc{ + { + Addr: addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "aws_instance", + Name: "a", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance.Child("a", addrs.NoKey)), + ProviderAddr: addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, + ChangeSrc: plans.ChangeSrc{ + Action: aAction, + Before: aBefore, + After: aAfter, + }, + }, + { + Addr: addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "aws_instance", + Name: "b", + }.Instance(addrs.IntKey(0)).Absolute(addrs.RootModuleInstance.Child("b", addrs.NoKey)), + ProviderAddr: addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, + ChangeSrc: plans.ChangeSrc{ + Action: plans.DeleteThenCreate, + Before: bBefore, + After: bAfter, + }, + }, + }, + } + + ctx := testContext2(t, &ContextOpts{ + Config: m, + ProviderResolver: providers.ResolverFixed( + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), + }, + ), + State: state, + Changes: changes, + }) + + t.Run(mode, func(t *testing.T) { + _, diags := ctx.Apply() + if diags.HasErrors() { + t.Fatal(diags.Err()) + } + }) + } +} + +func TestContext2Apply_destroyDataCycle(t *testing.T) { + m, snap := testModuleWithSnapshot(t, "apply-destroy-data-cycle") + p := testProvider("null") + p.ApplyFn = testApplyFn + p.DiffFn = testDiffFn + + state := states.NewState() + root := state.EnsureModule(addrs.RootModuleInstance) + root.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "null_resource", + Name: "a", + }.Instance(addrs.IntKey(0)), + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"a"}`), + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("null"), + Module: addrs.RootModuleInstance, + }, + ) + root.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.DataResourceMode, + Type: "null_data_source", + Name: "d", + }.Instance(addrs.NoKey), + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"data"}`), + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("null"), + Module: addrs.RootModuleInstance, + }, + ) + + providerResolver := providers.ResolverFixed( + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("null"): testProviderFuncFixed(p), + }, + ) + + hook := &testHook{} + ctx := testContext2(t, &ContextOpts{ + Config: m, + ProviderResolver: providerResolver, + State: state, + Destroy: true, + Hooks: []Hook{hook}, + }) + + plan, diags := ctx.Plan() + diags.HasErrors() + if diags.HasErrors() { + t.Fatalf("diags: %s", diags.Err()) + } + + // We'll marshal and unmarshal the plan here, to ensure that we have + // a clean new context as would be created if 
+	// terraform plan -out=tfplan && terraform apply tfplan
+	ctxOpts, err := contextOptsForPlanViaFile(snap, state, plan)
+	if err != nil {
+		t.Fatal(err)
+	}
+	ctxOpts.ProviderResolver = providerResolver
+	ctx, diags = NewContext(ctxOpts)
+	if diags.HasErrors() {
+		t.Fatalf("failed to create context for plan: %s", diags.Err())
+	}
+
+	_, diags = ctx.Apply()
+	if diags.HasErrors() {
+		t.Fatalf("diags: %s", diags.Err())
+	}
+}
+
+func TestContext2Apply_taintedDestroyFailure(t *testing.T) {
+	m := testModule(t, "apply-destroy-tainted")
+	p := testProvider("test")
+	p.DiffFn = testDiffFn
+	p.ApplyFn = func(info *InstanceInfo, s *InstanceState, d *InstanceDiff) (*InstanceState, error) {
+		// All destroys fail.
+		// c will also fail to create, meaning the existing tainted instance
+		// becomes deposed, and is then promoted back to current.
+		// Only c's diff sets the foo attribute to "c".
+		attr := d.Attributes["foo"]
+		if d.Destroy || (attr != nil && attr.New == "c") {
+			return nil, errors.New("failure")
+		}
+
+		return testApplyFn(info, s, d)
+	}
+
+	state := states.NewState()
+	root := state.EnsureModule(addrs.RootModuleInstance)
+	root.SetResourceInstanceCurrent(
+		addrs.Resource{
+			Mode: addrs.ManagedResourceMode,
+			Type: "test_instance",
+			Name: "a",
+		}.Instance(addrs.NoKey),
+		&states.ResourceInstanceObjectSrc{
+			Status:    states.ObjectTainted,
+			AttrsJSON: []byte(`{"id":"a","foo":"a"}`),
+		},
+		addrs.AbsProviderConfig{
+			Provider: addrs.NewLegacyProvider("test"),
+			Module:   addrs.RootModuleInstance,
+		},
+	)
+	root.SetResourceInstanceCurrent(
+		addrs.Resource{
+			Mode: addrs.ManagedResourceMode,
+			Type: "test_instance",
+			Name: "b",
+		}.Instance(addrs.NoKey),
+		&states.ResourceInstanceObjectSrc{
+			Status:    states.ObjectTainted,
+			AttrsJSON: []byte(`{"id":"b","foo":"b"}`),
+		},
+		addrs.AbsProviderConfig{
+			Provider: addrs.NewLegacyProvider("test"),
+			Module:   addrs.RootModuleInstance,
+		},
+	)
+	root.SetResourceInstanceCurrent(
+		addrs.Resource{
+			Mode: addrs.ManagedResourceMode,
+			Type: "test_instance",
+			Name: "c",
+		}.Instance(addrs.NoKey),
+		&states.ResourceInstanceObjectSrc{
+			Status:    states.ObjectTainted,
+			AttrsJSON: []byte(`{"id":"c","foo":"old"}`),
+		},
+		addrs.AbsProviderConfig{
+			Provider: addrs.NewLegacyProvider("test"),
+			Module:   addrs.RootModuleInstance,
+		},
+	)
+
+	providerResolver := providers.ResolverFixed(
+		map[addrs.Provider]providers.Factory{
+			addrs.NewLegacyProvider("test"): testProviderFuncFixed(p),
+		},
+	)
+
+	ctx := testContext2(t, &ContextOpts{
+		Config:           m,
+		ProviderResolver: providerResolver,
+		State:            state,
+		Hooks:            []Hook{&testHook{}},
+	})
+
+	_, diags := ctx.Plan()
+	if diags.HasErrors() {
+		t.Fatalf("diags: %s", diags.Err())
+	}
+
+	state, diags = ctx.Apply()
+	if !diags.HasErrors() {
+		t.Fatal("expected error")
+	}
+
+	root = state.Module(addrs.RootModuleInstance)
+
+	// the instance that failed to destroy should remain tainted
+	a := root.ResourceInstance(addrs.Resource{
+		Mode: addrs.ManagedResourceMode,
+		Type: "test_instance",
+		Name: "a",
+	}.Instance(addrs.NoKey))
+
+	if a.Current.Status != states.ObjectTainted {
+		t.Fatal("test_instance.a should be tainted")
+	}
+
+	// b is create_before_destroy, and the destroy failed, so there should be
+	// one deposed instance.
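+	// (The replacement object created before the failed destroy remains the
+	// current object, so b should be Ready with exactly one deposed object.)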
+	b := root.ResourceInstance(addrs.Resource{
+		Mode: addrs.ManagedResourceMode,
+		Type: "test_instance",
+		Name: "b",
+	}.Instance(addrs.NoKey))
+
+	if b.Current.Status != states.ObjectReady {
+		t.Fatal("test_instance.b should be Ready")
+	}
+
+	if len(b.Deposed) != 1 {
+		t.Fatal("test_instance.b failed to keep deposed instance")
+	}
+
+	// the deposed c instance should be promoted back to current, and remain
+	// tainted
+	c := root.ResourceInstance(addrs.Resource{
+		Mode: addrs.ManagedResourceMode,
+		Type: "test_instance",
+		Name: "c",
+	}.Instance(addrs.NoKey))
+
+	if c.Current == nil {
+		t.Fatal("test_instance.c has no current instance, but it should")
+	}
+
+	if c.Current.Status != states.ObjectTainted {
+		t.Fatal("test_instance.c should be tainted")
+	}
+
+	if len(c.Deposed) != 0 {
+		t.Fatal("test_instance.c should have no deposed instances")
+	}
+
+	if string(c.Current.AttrsJSON) != `{"id":"c","foo":"old"}` {
+		t.Fatalf("unexpected attrs for c: %q\n", c.Current.AttrsJSON)
+	}
+}
+
+func TestContext2Apply_plannedConnectionRefs(t *testing.T) {
+	m := testModule(t, "apply-plan-connection-refs")
+	p := testProvider("test")
+	p.DiffFn = testDiffFn
+	p.ApplyResourceChangeFn = func(req providers.ApplyResourceChangeRequest) (resp providers.ApplyResourceChangeResponse) {
+		s := req.PlannedState.AsValueMap()
+		// delay "a" slightly, so if the reference edge is missing the "b"
+		// provisioner will see an unknown value.
+		if s["foo"].AsString() == "a" {
+			time.Sleep(500 * time.Millisecond)
+		}
+
+		s["id"] = cty.StringVal("ID")
+		resp.NewState = cty.ObjectVal(s)
+		return resp
+	}
+
+	pr := testProvisioner()
+	pr.ProvisionResourceFn = func(req provisioners.ProvisionResourceRequest) (resp provisioners.ProvisionResourceResponse) {
+		host := req.Connection.GetAttr("host")
+		if host.IsNull() || !host.IsKnown() {
+			resp.Diagnostics = resp.Diagnostics.Append(fmt.Errorf("invalid host value: %#v", host))
+		}
+
+		return resp
+	}
+
+	providerResolver := providers.ResolverFixed(
+		map[addrs.Provider]providers.Factory{
+			addrs.NewLegacyProvider("test"): testProviderFuncFixed(p),
+		},
+	)
+
+	provisioners := map[string]ProvisionerFactory{
+		"shell": testProvisionerFuncFixed(pr),
+	}
+
+	hook := &testHook{}
+	ctx := testContext2(t, &ContextOpts{
+		Config:           m,
+		ProviderResolver: providerResolver,
+		Provisioners:     provisioners,
+		Hooks:            []Hook{hook},
+	})
+
+	_, diags := ctx.Plan()
+	if diags.HasErrors() {
+		t.Fatalf("diags: %s", diags.Err())
+	}
+
+	_, diags = ctx.Apply()
+	if diags.HasErrors() {
+		t.Fatalf("diags: %s", diags.Err())
+	}
+}
+
+func TestContext2Apply_cbdCycle(t *testing.T) {
+	m, snap := testModuleWithSnapshot(t, "apply-cbd-cycle")
+	p := testProvider("test")
+	p.ApplyFn = testApplyFn
+	p.DiffFn = testDiffFn
+
+	state := states.NewState()
+	root := state.EnsureModule(addrs.RootModuleInstance)
+	root.SetResourceInstanceCurrent(
+		addrs.Resource{
+			Mode: addrs.ManagedResourceMode,
+			Type: "test_instance",
+			Name: "a",
+		}.Instance(addrs.NoKey),
+		&states.ResourceInstanceObjectSrc{
+			Status:    states.ObjectReady,
+			AttrsJSON: []byte(`{"id":"a","require_new":"old","foo":"b"}`),
+			Dependencies: []addrs.AbsResource{
+				addrs.AbsResource{
+					Resource: addrs.Resource{
+						Mode: addrs.ManagedResourceMode,
+						Type: "test_instance",
+						Name: "b",
+					},
+					Module: addrs.RootModuleInstance,
+				},
+				addrs.AbsResource{
+					Resource: addrs.Resource{
+						Mode: addrs.ManagedResourceMode,
+						Type: "test_instance",
+						Name: "c",
+					},
+					Module: addrs.RootModuleInstance,
+				},
+			},
+		},
+		addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, + ) + root.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_instance", + Name: "b", + }.Instance(addrs.NoKey), + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"b","require_new":"old","foo":"c"}`), + Dependencies: []addrs.AbsResource{ + addrs.AbsResource{ + Resource: addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_instance", + Name: "c", + }, + Module: addrs.RootModuleInstance, + }, + }, + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, + ) + root.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_instance", + Name: "c", + }.Instance(addrs.NoKey), + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"c","require_new":"old"}`), + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, + ) + + providerResolver := providers.ResolverFixed( + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), + }, + ) + + hook := &testHook{} + ctx := testContext2(t, &ContextOpts{ + Config: m, + ProviderResolver: providerResolver, + State: state, + Hooks: []Hook{hook}, + }) + + plan, diags := ctx.Plan() + diags.HasErrors() + if diags.HasErrors() { + t.Fatalf("diags: %s", diags.Err()) + } + + // We'll marshal and unmarshal the plan here, to ensure that we have + // a clean new context as would be created if we separately ran + // terraform plan -out=tfplan && terraform apply tfplan + ctxOpts, err := contextOptsForPlanViaFile(snap, state, plan) + if err != nil { + t.Fatal(err) + } + ctxOpts.ProviderResolver = providerResolver + ctx, diags = NewContext(ctxOpts) + if diags.HasErrors() { + t.Fatalf("failed to create context for plan: %s", diags.Err()) + } + + _, diags = ctx.Apply() + if diags.HasErrors() { + t.Fatalf("diags: %s", diags.Err()) + } +} diff --git a/terraform/context_components.go b/terraform/context_components.go index 26ec99595..f99c36695 100644 --- a/terraform/context_components.go +++ b/terraform/context_components.go @@ -3,6 +3,7 @@ package terraform import ( "fmt" + "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/providers" "github.com/hashicorp/terraform/provisioners" ) @@ -12,36 +13,32 @@ import ( // This factory gets more information than the raw maps using to initialize // a Context. This information is used for debugging. type contextComponentFactory interface { - // ResourceProvider creates a new ResourceProvider with the given - // type. The "uid" is a unique identifier for this provider being - // initialized that can be used for internal tracking. - ResourceProvider(typ, uid string) (providers.Interface, error) + // ResourceProvider creates a new ResourceProvider with the given type. + ResourceProvider(typ addrs.Provider) (providers.Interface, error) ResourceProviders() []string - // ResourceProvisioner creates a new ResourceProvisioner with the - // given type. The "uid" is a unique identifier for this provisioner - // being initialized that can be used for internal tracking. - ResourceProvisioner(typ, uid string) (provisioners.Interface, error) + // ResourceProvisioner creates a new ResourceProvisioner with the given + // type. 
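+	// The type is the provisioner's name as written in configuration,
+	// for example "local-exec".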
+ ResourceProvisioner(typ string) (provisioners.Interface, error) ResourceProvisioners() []string } // basicComponentFactory just calls a factory from a map directly. type basicComponentFactory struct { - providers map[string]providers.Factory + providers map[addrs.Provider]providers.Factory provisioners map[string]ProvisionerFactory } func (c *basicComponentFactory) ResourceProviders() []string { - result := make([]string, len(c.providers)) + var result []string for k := range c.providers { - result = append(result, k) + result = append(result, k.LegacyString()) } - return result } func (c *basicComponentFactory) ResourceProvisioners() []string { - result := make([]string, len(c.provisioners)) + var result []string for k := range c.provisioners { result = append(result, k) } @@ -49,16 +46,16 @@ func (c *basicComponentFactory) ResourceProvisioners() []string { return result } -func (c *basicComponentFactory) ResourceProvider(typ, uid string) (providers.Interface, error) { +func (c *basicComponentFactory) ResourceProvider(typ addrs.Provider) (providers.Interface, error) { f, ok := c.providers[typ] if !ok { - return nil, fmt.Errorf("unknown provider %q", typ) + return nil, fmt.Errorf("unknown provider %q", typ.LegacyString()) } return f() } -func (c *basicComponentFactory) ResourceProvisioner(typ, uid string) (provisioners.Interface, error) { +func (c *basicComponentFactory) ResourceProvisioner(typ string) (provisioners.Interface, error) { f, ok := c.provisioners[typ] if !ok { return nil, fmt.Errorf("unknown provisioner %q", typ) diff --git a/terraform/context_components_test.go b/terraform/context_components_test.go index cdd4da076..edca24257 100644 --- a/terraform/context_components_test.go +++ b/terraform/context_components_test.go @@ -3,6 +3,7 @@ package terraform import ( "github.com/zclconf/go-cty/cty" + "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/configs/configschema" "github.com/hashicorp/terraform/providers" "github.com/hashicorp/terraform/provisioners" @@ -26,8 +27,8 @@ func simpleMockComponentFactory() *basicComponentFactory { provider := simpleMockProvider() provisioner := simpleMockProvisioner() return &basicComponentFactory{ - providers: map[string]providers.Factory{ - "test": func() (providers.Interface, error) { + providers: map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): func() (providers.Interface, error) { return provider, nil }, }, diff --git a/terraform/context_fixtures_test.go b/terraform/context_fixtures_test.go index acc4fd589..96cdb13d5 100644 --- a/terraform/context_fixtures_test.go +++ b/terraform/context_fixtures_test.go @@ -3,6 +3,7 @@ package terraform import ( "testing" + "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/configs" "github.com/hashicorp/terraform/configs/configschema" "github.com/hashicorp/terraform/providers" @@ -52,8 +53,8 @@ func contextFixtureApplyVars(t *testing.T) *contextTestFixture { return &contextTestFixture{ Config: c, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), } @@ -80,8 +81,8 @@ func contextFixtureApplyVarsEnv(t *testing.T) *contextTestFixture { return &contextTestFixture{ Config: c, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, 
), } diff --git a/terraform/context_graph_type.go b/terraform/context_graph_type.go index 0a424a01d..4448d8706 100644 --- a/terraform/context_graph_type.go +++ b/terraform/context_graph_type.go @@ -1,6 +1,6 @@ package terraform -//go:generate stringer -type=GraphType context_graph_type.go +//go:generate go run golang.org/x/tools/cmd/stringer -type=GraphType context_graph_type.go // GraphType is an enum of the type of graph to create with a Context. // The values of the constants may change so they shouldn't be depended on; diff --git a/terraform/context_import_test.go b/terraform/context_import_test.go index 619ae0c33..d16a36371 100644 --- a/terraform/context_import_test.go +++ b/terraform/context_import_test.go @@ -18,8 +18,8 @@ func TestContextImport_basic(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -57,8 +57,8 @@ func TestContextImport_countIndex(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -97,8 +97,8 @@ func TestContextImport_collision(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), @@ -115,7 +115,10 @@ func TestContextImport_collision(t *testing.T) { }, Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "aws"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, ) }), }) @@ -144,7 +147,7 @@ func TestContextImport_collision(t *testing.T) { actual := strings.TrimSpace(state.String()) expected := `aws_instance.foo: ID = bar - provider = provider.aws` + provider = provider["registry.terraform.io/-/aws"]` if actual != expected { t.Fatalf("bad: \n%s", actual) @@ -164,8 +167,8 @@ func TestContextImport_missingType(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -216,8 +219,8 @@ func TestContextImport_moduleProvider(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -255,8 +258,8 @@ func TestContextImport_providerModule(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -307,8 +310,8 @@ func TestContextImport_providerVarConfig(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": 
testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Variables: InputValues{ @@ -369,8 +372,8 @@ func TestContextImport_providerNonVarConfig(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -389,7 +392,7 @@ func TestContextImport_providerNonVarConfig(t *testing.T) { addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), ID: "bar", - ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault("aws"), + ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault(addrs.NewLegacyProvider("aws")), }, }, }) @@ -404,8 +407,8 @@ func TestContextImport_refresh(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -433,7 +436,7 @@ func TestContextImport_refresh(t *testing.T) { addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), ID: "bar", - ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault("aws"), + ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault(addrs.NewLegacyProvider("aws")), }, }, }) @@ -454,8 +457,8 @@ func TestContextImport_refreshNil(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -480,7 +483,7 @@ func TestContextImport_refreshNil(t *testing.T) { addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), ID: "bar", - ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault("aws"), + ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault(addrs.NewLegacyProvider("aws")), }, }, }) @@ -501,8 +504,8 @@ func TestContextImport_module(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -521,7 +524,7 @@ func TestContextImport_module(t *testing.T) { addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), ID: "bar", - ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault("aws"), + ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault(addrs.NewLegacyProvider("aws")), }, }, }) @@ -542,8 +545,8 @@ func TestContextImport_moduleDepth2(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -562,7 +565,7 @@ func TestContextImport_moduleDepth2(t *testing.T) { addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), ID: "bar", - ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault("aws"), + ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault(addrs.NewLegacyProvider("aws")), }, }, }) @@ -583,8 +586,8 @@ func TestContextImport_moduleDiff(t *testing.T) { ctx := 
testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), @@ -601,7 +604,10 @@ func TestContextImport_moduleDiff(t *testing.T) { }, Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "aws"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, ) }), }) @@ -620,7 +626,7 @@ func TestContextImport_moduleDiff(t *testing.T) { addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), ID: "bar", - ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault("aws"), + ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault(addrs.NewLegacyProvider("aws")), }, }, }) @@ -641,8 +647,8 @@ func TestContextImport_moduleExisting(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), @@ -659,7 +665,10 @@ func TestContextImport_moduleExisting(t *testing.T) { }, Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "aws"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, ) }), }) @@ -678,7 +687,7 @@ func TestContextImport_moduleExisting(t *testing.T) { addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), ID: "bar", - ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault("aws"), + ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault(addrs.NewLegacyProvider("aws")), }, }, }) @@ -731,8 +740,8 @@ func TestContextImport_multiState(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -744,7 +753,7 @@ func TestContextImport_multiState(t *testing.T) { addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), ID: "bar", - ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault("aws"), + ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault(addrs.NewLegacyProvider("aws")), }, }, }) @@ -801,8 +810,8 @@ func TestContextImport_multiStateSame(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -814,7 +823,7 @@ func TestContextImport_multiStateSame(t *testing.T) { addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), ID: "bar", - ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault("aws"), + ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault(addrs.NewLegacyProvider("aws")), }, }, }) @@ -836,8 +845,8 @@ func TestContextImport_customProviderMissing(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -856,7 +865,7 @@ func 
TestContextImport_customProviderMissing(t *testing.T) { addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), ID: "bar", - ProviderAddr: addrs.RootModuleInstance.ProviderConfigAliased("aws", "alias"), + ProviderAddr: addrs.RootModuleInstance.ProviderConfigAliased(addrs.NewLegacyProvider("aws"), "alias"), }, }, }) @@ -871,8 +880,8 @@ func TestContextImport_customProvider(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -891,7 +900,7 @@ func TestContextImport_customProvider(t *testing.T) { addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), ID: "bar", - ProviderAddr: addrs.RootModuleInstance.ProviderConfigAliased("aws", "alias"), + ProviderAddr: addrs.RootModuleInstance.ProviderConfigAliased(addrs.NewLegacyProvider("aws"), "alias"), }, }, }) @@ -909,13 +918,13 @@ func TestContextImport_customProvider(t *testing.T) { const testImportStr = ` aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] ` const testImportCountIndexStr = ` aws_instance.foo.0: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] ` const testImportModuleStr = ` @@ -923,7 +932,7 @@ const testImportModuleStr = ` module.foo: aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] ` const testImportModuleDepth2Str = ` @@ -931,7 +940,7 @@ const testImportModuleDepth2Str = ` module.a.b: aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] ` const testImportModuleDiffStr = ` @@ -939,11 +948,11 @@ const testImportModuleDiffStr = ` module.bar: aws_instance.bar: ID = bar - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] module.foo: aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] ` const testImportModuleExistingStr = ` @@ -951,42 +960,42 @@ const testImportModuleExistingStr = ` module.foo: aws_instance.bar: ID = bar - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] ` const testImportMultiStr = ` aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] aws_instance_thing.foo: ID = bar - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] ` const testImportMultiSameStr = ` aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] aws_instance_thing.foo: ID = bar - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] aws_instance_thing.foo-1: ID = qux - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] ` const testImportRefreshStr = ` aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar ` const testImportCustomProviderStr = ` aws_instance.foo: ID = foo - provider = provider.aws.alias + provider = provider["registry.terraform.io/-/aws"].alias ` diff --git a/terraform/context_input.go b/terraform/context_input.go index 4ad227143..b3a2b5f9d 100644 --- a/terraform/context_input.go +++ b/terraform/context_input.go @@ -2,7 +2,6 @@ package terraform 
import ( "context" - "fmt" "log" "sort" @@ -15,10 +14,22 @@ import ( "github.com/hashicorp/terraform/tfdiags" ) -// Input asks for input to fill variables and provider configurations. +// Input asks for input to fill unset required arguments in provider +// configurations. +// // This modifies the configuration in-place, so asking for Input twice // may result in different UI output showing different current values. func (c *Context) Input(mode InputMode) tfdiags.Diagnostics { + // This function used to be responsible for more than it is now, so its + // interface is more general than its current functionality requires. + // It now exists only to handle interactive prompts for provider + // configurations, with other prompts the responsibility of the CLI + // layer prior to calling in to this package. + // + // (Hopefully in future the remaining functionality here can move to the + // CLI layer too in order to avoid this odd situation where core code + // produces UI input prompts.) + var diags tfdiags.Diagnostics defer c.acquireRun("input")() @@ -29,85 +40,6 @@ func (c *Context) Input(mode InputMode) tfdiags.Diagnostics { ctx := context.Background() - if mode&InputModeVar != 0 { - log.Printf("[TRACE] Context.Input: Prompting for variables") - - // Walk the variables first for the root module. We walk them in - // alphabetical order for UX reasons. - configs := c.config.Module.Variables - names := make([]string, 0, len(configs)) - for name := range configs { - names = append(names, name) - } - sort.Strings(names) - Variables: - for _, n := range names { - v := configs[n] - - // If we only care about unset variables, then we should set any - // variable that is already set. - if mode&InputModeVarUnset != 0 { - if _, isSet := c.variables[n]; isSet { - continue - } - } - - // this should only happen during tests - if c.uiInput == nil { - log.Println("[WARN] Context.uiInput is nil during input walk") - continue - } - - // Ask the user for a value for this variable - var rawValue string - retry := 0 - for { - var err error - rawValue, err = c.uiInput.Input(ctx, &InputOpts{ - Id: fmt.Sprintf("var.%s", n), - Query: fmt.Sprintf("var.%s", n), - Description: v.Description, - }) - if err != nil { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Failed to request interactive input", - fmt.Sprintf("Terraform attempted to request a value for var.%s interactively, but encountered an error: %s.", n, err), - )) - return diags - } - - if rawValue == "" && v.Default == cty.NilVal { - // Redo if it is required, but abort if we keep getting - // blank entries - if retry > 2 { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Required variable not assigned", - fmt.Sprintf("The variable %q is required, so Terraform cannot proceed without a defined value for it.", n), - )) - continue Variables - } - retry++ - continue - } - - break - } - - val, valDiags := v.ParsingMode.Parse(n, rawValue) - diags = diags.Append(valDiags) - if diags.HasErrors() { - continue - } - - c.variables[n] = &InputValue{ - Value: val, - SourceType: ValueFromInput, - } - } - } - if mode&InputModeProvider != 0 { log.Printf("[TRACE] Context.Input: Prompting for provider arguments") @@ -121,7 +53,7 @@ func (c *Context) Input(mode InputMode) tfdiags.Diagnostics { // us to keep this relatively simple without significant hardship. 
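+	// Index the provider configurations by provider config address (in
+	// string form) so that each configuration block (pcs) can be correlated
+	// with its address (pas).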
pcs := make(map[string]*configs.Provider) - pas := make(map[string]addrs.ProviderConfig) + pas := make(map[string]addrs.LocalProviderConfig) for _, pc := range c.config.Module.ProviderConfigs { addr := pc.Addr() pcs[addr.String()] = pc @@ -164,12 +96,13 @@ func (c *Context) Input(mode InputMode) tfdiags.Diagnostics { UIInput: c.uiInput, } - schema := c.schemas.ProviderConfig(pa.Type) + providerFqn := c.config.Module.ProviderForLocalConfig(pa) + schema := c.schemas.ProviderConfig(providerFqn) if schema == nil { // Could either be an incorrect config or just an incomplete // mock in tests. We'll let a later pass decide, and just // ignore this for the purposes of gathering input. - log.Printf("[TRACE] Context.Input: No schema available for provider type %q", pa.Type) + log.Printf("[TRACE] Context.Input: No schema available for provider type %q", pa.LocalName) continue } @@ -224,7 +157,13 @@ func (c *Context) Input(mode InputMode) tfdiags.Diagnostics { vals[key] = cty.StringVal(rawVal) } - c.providerInputConfig[pk] = vals + absConfigAddr := addrs.AbsProviderConfig{ + Provider: providerFqn, + Alias: pa.Alias, + Module: c.Config().Path.UnkeyedInstanceShim(), + } + c.providerInputConfig[absConfigAddr.String()] = vals + log.Printf("[TRACE] Context.Input: Input for %s: %#v", pk, vals) } } diff --git a/terraform/context_input_test.go b/terraform/context_input_test.go index f6b5a4f36..74a9f6e73 100644 --- a/terraform/context_input_test.go +++ b/terraform/context_input_test.go @@ -14,92 +14,6 @@ import ( "github.com/hashicorp/terraform/states" ) -func TestContext2Input(t *testing.T) { - input := new(MockUIInput) - m := testModule(t, "input-vars") - p := testProvider("aws") - p.ApplyFn = testApplyFn - p.DiffFn = testDiffFn - ctx := testContext2(t, &ContextOpts{ - Config: m, - ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), - }, - ), - Variables: InputValues{ - "amis": &InputValue{ - Value: cty.MapVal(map[string]cty.Value{ - "us-east-1": cty.StringVal("override"), - }), - SourceType: ValueFromCaller, - }, - }, - UIInput: input, - }) - - input.InputReturnMap = map[string]string{ - "var.foo": "us-east-1", - } - - if diags := ctx.Input(InputModeStd | InputModeVarUnset); diags.HasErrors() { - t.Fatalf("input errors: %s", diags.Err()) - } - - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, diags := ctx.Apply() - if diags.HasErrors() { - t.Fatalf("apply errors: %s", diags.Err()) - } - - actual := strings.TrimSpace(state.String()) - expected := strings.TrimSpace(testTerraformInputVarsStr) - if actual != expected { - t.Fatalf("expected:\n%s\ngot:\n%s", expected, actual) - } -} - -func TestContext2Input_moduleComputedOutputElement(t *testing.T) { - m := testModule(t, "input-module-computed-output-element") - p := testProvider("aws") - p.ApplyFn = testApplyFn - p.DiffFn = testDiffFn - ctx := testContext2(t, &ContextOpts{ - Config: m, - ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), - }, - ), - }) - - if diags := ctx.Input(InputModeStd); diags.HasErrors() { - t.Fatalf("input errors: %s", diags.Err()) - } -} - -func TestContext2Input_badVarDefault(t *testing.T) { - m := testModule(t, "input-bad-var-default") - p := testProvider("aws") - p.ApplyFn = testApplyFn - p.DiffFn = testDiffFn - ctx := testContext2(t, &ContextOpts{ - Config: m, - ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": 
testProviderFuncFixed(p), - }, - ), - }) - - if diags := ctx.Input(InputModeStd); diags.HasErrors() { - t.Fatalf("input errors: %s", diags.Err()) - } -} - func TestContext2Input_provider(t *testing.T) { m := testModule(t, "input-provider") p := testProvider("aws") @@ -129,8 +43,8 @@ func TestContext2Input_provider(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), UIInput: inp, @@ -200,8 +114,8 @@ func TestContext2Input_providerMulti(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), UIInput: inp, @@ -245,8 +159,8 @@ func TestContext2Input_providerOnce(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -301,8 +215,8 @@ func TestContext2Input_providerId(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), UIInput: input, @@ -366,8 +280,8 @@ func TestContext2Input_providerOnly(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Variables: InputValues{ @@ -422,8 +336,8 @@ func TestContext2Input_providerVars(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Variables: InputValues{ @@ -475,8 +389,8 @@ func TestContext2Input_providerVarsModuleInherit(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), UIInput: input, @@ -497,304 +411,6 @@ func TestContext2Input_providerVarsModuleInherit(t *testing.T) { } } -func TestContext2Input_varOnly(t *testing.T) { - input := new(MockUIInput) - m := testModule(t, "input-provider-vars") - p := testProvider("aws") - p.ApplyFn = testApplyFn - p.DiffFn = testDiffFn - ctx := testContext2(t, &ContextOpts{ - Config: m, - ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), - }, - ), - Variables: InputValues{ - "foo": &InputValue{ - Value: cty.StringVal("us-west-2"), - SourceType: ValueFromCaller, - }, - }, - UIInput: input, - }) - - input.InputReturnMap = map[string]string{ - "var.foo": "us-east-1", - } - - var actual interface{} - /*p.InputFn = func(i UIInput, c *ResourceConfig) (*ResourceConfig, error) { - c.Raw["foo"] = "bar" - 
return c, nil - }*/ - p.ConfigureFn = func(c *ResourceConfig) error { - actual = c.Raw["foo"] - return nil - } - - if err := ctx.Input(InputModeVar); err != nil { - t.Fatalf("err: %s", err) - } - - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, err := ctx.Apply() - if err != nil { - t.Fatalf("err: %s", err) - } - - if reflect.DeepEqual(actual, "bar") { - t.Fatalf("bad: %#v", actual) - } - - actualStr := strings.TrimSpace(state.String()) - expectedStr := strings.TrimSpace(testTerraformInputVarOnlyStr) - if actualStr != expectedStr { - t.Fatalf("wrong result\n\ngot:\n%s\n\nwant:\n%s", actualStr, expectedStr) - } -} - -func TestContext2Input_varOnlyUnset(t *testing.T) { - input := new(MockUIInput) - m := testModule(t, "input-vars-unset") - p := testProvider("aws") - p.ApplyFn = testApplyFn - p.DiffFn = testDiffFn - ctx := testContext2(t, &ContextOpts{ - Config: m, - ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), - }, - ), - Variables: InputValues{ - "foo": &InputValue{ - Value: cty.StringVal("foovalue"), - SourceType: ValueFromCaller, - }, - }, - UIInput: input, - }) - - input.InputReturnMap = map[string]string{ - "var.foo": "nope", - "var.bar": "baz", - } - - if err := ctx.Input(InputModeVar | InputModeVarUnset); err != nil { - t.Fatalf("err: %s", err) - } - - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, err := ctx.Apply() - if err != nil { - t.Fatalf("err: %s", err) - } - - actualStr := strings.TrimSpace(state.String()) - expectedStr := strings.TrimSpace(testTerraformInputVarOnlyUnsetStr) - if actualStr != expectedStr { - t.Fatalf("wrong result\n\ngot:\n%s\n\nwant:\n%s", actualStr, expectedStr) - } -} - -func TestContext2Input_varWithDefault(t *testing.T) { - input := new(MockUIInput) - m := testModule(t, "input-var-default") - p := testProvider("aws") - p.ApplyFn = testApplyFn - p.DiffFn = testDiffFn - ctx := testContext2(t, &ContextOpts{ - Config: m, - ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), - }, - ), - Variables: InputValues{}, - UIInput: input, - }) - - input.InputFn = func(opts *InputOpts) (string, error) { - t.Fatalf( - "Input should never be called because variable has a default: %#v", opts) - return "", nil - } - - if err := ctx.Input(InputModeVar | InputModeVarUnset); err != nil { - t.Fatalf("err: %s", err) - } - - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, err := ctx.Apply() - if err != nil { - t.Fatalf("err: %s", err) - } - - actualStr := strings.TrimSpace(state.String()) - expectedStr := strings.TrimSpace(` -aws_instance.foo: - ID = foo - provider = provider.aws - foo = 123 - type = aws_instance - `) - if actualStr != expectedStr { - t.Fatalf("expected: \n%s\ngot: \n%s\n", expectedStr, actualStr) - } -} - -func TestContext2Input_varPartiallyComputed(t *testing.T) { - input := new(MockUIInput) - m := testModule(t, "input-var-partially-computed") - p := testProvider("aws") - p.ApplyFn = testApplyFn - p.DiffFn = testDiffFn - ctx := testContext2(t, &ContextOpts{ - Config: m, - ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), - }, - ), - Variables: InputValues{ - "foo": &InputValue{ - Value: cty.StringVal("foovalue"), - SourceType: ValueFromCaller, - }, - }, - UIInput: input, - State: states.BuildState(func(s 
*states.SyncState) { - s.SetResourceInstanceCurrent( - addrs.Resource{ - Mode: addrs.ManagedResourceMode, - Type: "aws_instance", - Name: "foo", - }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), - &states.ResourceInstanceObjectSrc{ - AttrsFlat: map[string]string{ - "id": "i-abc123", - }, - Status: states.ObjectReady, - }, - addrs.ProviderConfig{Type: "aws"}.Absolute(addrs.RootModuleInstance), - ) - s.SetResourceInstanceCurrent( - addrs.Resource{ - Mode: addrs.ManagedResourceMode, - Type: "aws_instance", - Name: "mode", - }.Instance(addrs.NoKey).Absolute(addrs.Module{"child"}.UnkeyedInstanceShim()), - &states.ResourceInstanceObjectSrc{ - AttrsFlat: map[string]string{ - "id": "i-bcd345", - "value": "one,i-abc123", - }, - Status: states.ObjectReady, - }, - addrs.ProviderConfig{Type: "aws"}.Absolute(addrs.RootModuleInstance), - ) - }), - }) - - if diags := ctx.Input(InputModeStd); diags.HasErrors() { - t.Fatalf("input errors: %s", diags.Err()) - } - - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } -} - -// Module variables weren't being interpolated during the Input walk. -// https://github.com/hashicorp/terraform/issues/5322 -func TestContext2Input_interpolateVar(t *testing.T) { - input := new(MockUIInput) - - m := testModule(t, "input-interpolate-var") - p := testProvider("null") - p.ApplyFn = testApplyFn - p.DiffFn = testDiffFn - - ctx := testContext2(t, &ContextOpts{ - Config: m, - ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "template": testProviderFuncFixed(p), - }, - ), - UIInput: input, - }) - - if diags := ctx.Input(InputModeStd); diags.HasErrors() { - t.Fatalf("input errors: %s", diags.Err()) - } -} - -func TestContext2Input_hcl(t *testing.T) { - input := new(MockUIInput) - m := testModule(t, "input-hcl") - p := testProvider("hcl") - p.ApplyFn = testApplyFn - p.DiffFn = testDiffFn - p.GetSchemaReturn = &ProviderSchema{ - ResourceTypes: map[string]*configschema.Block{ - "hcl_instance": { - Attributes: map[string]*configschema.Attribute{ - "foo": {Type: cty.List(cty.String), Optional: true}, - "bar": {Type: cty.Map(cty.String), Optional: true}, - "id": {Type: cty.String, Computed: true}, - "type": {Type: cty.String, Computed: true}, - }, - }, - }, - } - ctx := testContext2(t, &ContextOpts{ - Config: m, - ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "hcl": testProviderFuncFixed(p), - }, - ), - Variables: InputValues{}, - UIInput: input, - }) - - input.InputReturnMap = map[string]string{ - "var.listed": `["a", "b"]`, - "var.mapped": `{x = "y", w = "z"}`, - } - - if err := ctx.Input(InputModeVar | InputModeVarUnset); err != nil { - t.Fatalf("err: %s", err) - } - - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, err := ctx.Apply() - if err != nil { - t.Fatalf("err: %s", err) - } - - actualStr := strings.TrimSpace(state.String()) - expectedStr := strings.TrimSpace(testTerraformInputHCL) - if actualStr != expectedStr { - t.Logf("expected: \n%s", expectedStr) - t.Fatalf("bad: \n%s", actualStr) - } -} - // adding a list interpolation in fails to interpolate the count variable func TestContext2Input_submoduleTriggersInvalidCount(t *testing.T) { input := new(MockUIInput) @@ -805,8 +421,8 @@ func TestContext2Input_submoduleTriggersInvalidCount(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + 
map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), UIInput: input, @@ -862,15 +478,18 @@ func TestContext2Input_dataSourceRequiresRefresh(t *testing.T) { }, Status: states.ObjectReady, }, - addrs.ProviderConfig{Type: "null"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("null"), + Module: addrs.RootModuleInstance, + }, ) }) ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "null": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("null"): testProviderFuncFixed(p), }, ), State: state, diff --git a/terraform/context_plan_test.go b/terraform/context_plan_test.go index edb382860..8288d5121 100644 --- a/terraform/context_plan_test.go +++ b/terraform/context_plan_test.go @@ -31,8 +31,8 @@ func TestContext2Plan_basic(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), ProviderSHA256s: map[string][]byte{ @@ -117,8 +117,8 @@ func TestContext2Plan_createBefore_deposed(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -133,7 +133,7 @@ func TestContext2Plan_createBefore_deposed(t *testing.T) { expectedState := strings.TrimSpace(` aws_instance.foo: (1 deposed) ID = baz - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] Deposed ID 1 = foo`) if ctx.State().String() != expectedState { @@ -209,16 +209,10 @@ func TestContext2Plan_createBefore_maintainRoot(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), - Variables: InputValues{ - "in": &InputValue{ - Value: cty.StringVal("a,b,c"), - SourceType: ValueFromCaller, - }, - }, }) plan, diags := ctx.Plan() @@ -255,8 +249,8 @@ func TestContext2Plan_emptyDiff(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -296,8 +290,8 @@ func TestContext2Plan_escapedVar(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -340,8 +334,8 @@ func TestContext2Plan_minimal(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -381,8 +375,8 @@ func TestContext2Plan_modules(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: 
providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -435,6 +429,81 @@ func TestContext2Plan_modules(t *testing.T) { checkVals(t, expected, ric.After) } } +func TestContext2Plan_moduleCount(t *testing.T) { + // This test is skipped with count disabled. + t.Skip() + //FIXME: add for_each and single modules to this test + + m := testModule(t, "plan-modules-count") + p := testProvider("aws") + p.DiffFn = testDiffFn + ctx := testContext2(t, &ContextOpts{ + Config: m, + ProviderResolver: providers.ResolverFixed( + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), + }, + ), + }) + + plan, diags := ctx.Plan() + if diags.HasErrors() { + t.Fatalf("unexpected errors: %s", diags.Err()) + } + + if len(plan.Changes.Resources) != 6 { + t.Error("expected 6 resource in plan, got", len(plan.Changes.Resources)) + } + + schema := p.GetSchemaReturn.ResourceTypes["aws_instance"] + ty := schema.ImpliedType() + + expectFoo := objectVal(t, schema, map[string]cty.Value{ + "id": cty.UnknownVal(cty.String), + "foo": cty.StringVal("2"), + "type": cty.StringVal("aws_instance")}, + ) + + expectNum := objectVal(t, schema, map[string]cty.Value{ + "id": cty.UnknownVal(cty.String), + "num": cty.NumberIntVal(2), + "type": cty.StringVal("aws_instance"), + }) + + expectExpansion := objectVal(t, schema, map[string]cty.Value{ + "bar": cty.StringVal("baz"), + "id": cty.UnknownVal(cty.String), + "num": cty.NumberIntVal(2), + "type": cty.StringVal("aws_instance"), + }) + + for _, res := range plan.Changes.Resources { + if res.Action != plans.Create { + t.Fatalf("expected resource creation, got %s", res.Action) + } + ric, err := res.Decode(ty) + if err != nil { + t.Fatal(err) + } + + var expected cty.Value + switch i := ric.Addr.String(); i { + case "aws_instance.bar": + expected = expectFoo + case "aws_instance.foo": + expected = expectNum + case "module.child[0].aws_instance.foo[0]", + "module.child[0].aws_instance.foo[1]", + "module.child[1].aws_instance.foo[0]", + "module.child[1].aws_instance.foo[1]": + expected = expectExpansion + default: + t.Fatal("unknown instance:", i) + } + + checkVals(t, expected, ric.After) + } +} // GH-1475 func TestContext2Plan_moduleCycle(t *testing.T) { @@ -456,8 +525,8 @@ func TestContext2Plan_moduleCycle(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -512,8 +581,8 @@ func TestContext2Plan_moduleDeadlock(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -558,8 +627,8 @@ func TestContext2Plan_moduleInput(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -615,8 +684,8 @@ func TestContext2Plan_moduleInputComputed(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: 
providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -669,8 +738,8 @@ func TestContext2Plan_moduleInputFromVar(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Variables: InputValues{ @@ -740,8 +809,8 @@ func TestContext2Plan_moduleMultiVar(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -821,8 +890,8 @@ func TestContext2Plan_moduleOrphans(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -869,7 +938,7 @@ func TestContext2Plan_moduleOrphans(t *testing.T) { module.child: aws_instance.foo: ID = baz - provider = provider.aws` + provider = provider["registry.terraform.io/-/aws"]` if ctx.State().String() != expectedState { t.Fatalf("\nexpected state: %q\n\ngot: %q", expectedState, ctx.State().String()) @@ -924,8 +993,8 @@ func TestContext2Plan_moduleOrphansWithProvisioner(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -973,16 +1042,16 @@ func TestContext2Plan_moduleOrphansWithProvisioner(t *testing.T) { expectedState := `aws_instance.top: ID = top - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] module.parent.childone: aws_instance.foo: ID = baz - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] module.parent.childtwo: aws_instance.foo: ID = baz - provider = provider.aws` + provider = provider["registry.terraform.io/-/aws"]` if expectedState != ctx.State().String() { t.Fatalf("\nexpect state: %q\ngot state: %q\n", expectedState, ctx.State().String()) @@ -997,8 +1066,8 @@ func TestContext2Plan_moduleProviderInherit(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": func() (providers.Interface, error) { + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): func() (providers.Interface, error) { l.Lock() defer l.Unlock() @@ -1063,8 +1132,8 @@ func TestContext2Plan_moduleProviderInheritDeep(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": func() (providers.Interface, error) { + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): func() (providers.Interface, error) { l.Lock() defer l.Unlock() @@ -1124,8 +1193,8 @@ func TestContext2Plan_moduleProviderDefaultsVar(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - 
map[string]providers.Factory{ - "aws": func() (providers.Interface, error) { + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): func() (providers.Interface, error) { l.Lock() defer l.Unlock() @@ -1209,8 +1278,8 @@ func TestContext2Plan_moduleProviderVar(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -1254,8 +1323,8 @@ func TestContext2Plan_moduleVar(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -1311,8 +1380,8 @@ func TestContext2Plan_moduleVarWrongTypeBasic(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -1330,8 +1399,8 @@ func TestContext2Plan_moduleVarWrongTypeNested(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "null": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("null"): testProviderFuncFixed(p), }, ), }) @@ -1349,8 +1418,8 @@ func TestContext2Plan_moduleVarWithDefaultValue(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "null": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("null"): testProviderFuncFixed(p), }, ), }) @@ -1368,8 +1437,8 @@ func TestContext2Plan_moduleVarComputed(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -1421,8 +1490,8 @@ func TestContext2Plan_preventDestroy_bad(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: MustShimLegacyState(&State{ @@ -1460,8 +1529,8 @@ func TestContext2Plan_preventDestroy_good(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: MustShimLegacyState(&State{ @@ -1498,8 +1567,8 @@ func TestContext2Plan_preventDestroy_countBad(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: MustShimLegacyState(&State{ @@ -1553,8 +1622,8 @@ func TestContext2Plan_preventDestroy_countGood(t *testing.T) { ctx := testContext2(t, 
&ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: MustShimLegacyState(&State{ @@ -1607,8 +1676,8 @@ func TestContext2Plan_preventDestroy_countGoodNoChange(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: MustShimLegacyState(&State{ @@ -1649,8 +1718,8 @@ func TestContext2Plan_preventDestroy_destroyPlan(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: MustShimLegacyState(&State{ @@ -1690,8 +1759,8 @@ func TestContext2Plan_provisionerCycle(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -1712,8 +1781,8 @@ func TestContext2Plan_computed(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -1787,8 +1856,8 @@ func TestContext2Plan_blockNestingGroup(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), }, ), }) @@ -1860,8 +1929,8 @@ func TestContext2Plan_computedDataResource(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -1929,8 +1998,8 @@ func TestContext2Plan_computedInFunction(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -1983,8 +2052,8 @@ func TestContext2Plan_computedDataCountResource(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -2015,8 +2084,8 @@ func TestContext2Plan_localValueCount(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), }, ), }) @@ -2085,8 +2154,8 @@ func 
TestContext2Plan_dataResourceBecomesComputed(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: MustShimLegacyState(&State{ @@ -2215,8 +2284,8 @@ func TestContext2Plan_computedList(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -2327,8 +2396,8 @@ func TestContext2Plan_computedMultiIndex(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -2385,8 +2454,8 @@ func TestContext2Plan_count(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -2462,8 +2531,8 @@ func TestContext2Plan_countComputed(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -2481,8 +2550,8 @@ func TestContext2Plan_countComputedModule(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -2503,8 +2572,8 @@ func TestContext2Plan_countModuleStatic(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -2556,8 +2625,8 @@ func TestContext2Plan_countModuleStaticGrandchild(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -2609,8 +2678,8 @@ func TestContext2Plan_countIndex(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -2662,8 +2731,8 @@ func TestContext2Plan_countVar(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Variables: InputValues{ @@ -2748,8 +2817,8 @@ func TestContext2Plan_countZero(t *testing.T) { ctx := testContext2(t, 
&ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -2791,8 +2860,8 @@ func TestContext2Plan_countOneIndex(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -2875,8 +2944,8 @@ func TestContext2Plan_countDecreaseToOne(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -2929,15 +2998,15 @@ func TestContext2Plan_countDecreaseToOne(t *testing.T) { expectedState := `aws_instance.foo.0: ID = bar - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = foo type = aws_instance aws_instance.foo.1: ID = bar - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] aws_instance.foo.2: ID = bar - provider = provider.aws` + provider = provider["registry.terraform.io/-/aws"]` if ctx.State().String() != expectedState { t.Fatalf("epected state:\n%q\n\ngot state:\n%q\n", expectedState, ctx.State().String()) @@ -2970,8 +3039,8 @@ func TestContext2Plan_countIncreaseFromNotSet(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -3059,8 +3128,8 @@ func TestContext2Plan_countIncreaseFromOne(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -3163,8 +3232,8 @@ func TestContext2Plan_countIncreaseFromOneCorrupted(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -3301,8 +3370,8 @@ func TestContext2Plan_countIncreaseWithSplatReference(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -3358,8 +3427,8 @@ func TestContext2Plan_forEach(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -3388,18 +3457,24 @@ func TestContext2Plan_forEach(t *testing.T) { } func TestContext2Plan_forEachUnknownValue(t *testing.T) { - // This module has a variable defined, but it is not provided - // in the context below and we expect the plan to error, 
but not panic +	// This module has a variable defined, but its value is unknown. We +	// expect this to produce an error, but not to panic. m := testModule(t, "plan-for-each-unknown-value") p := testProvider("aws") p.DiffFn = testDiffFn ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), + Variables: InputValues{ + "foo": { + Value: cty.UnknownVal(cty.String), + SourceType: ValueFromCLIArg, + }, + }, }) _, diags := ctx.Plan() @@ -3444,8 +3519,8 @@ func TestContext2Plan_destroy(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -3515,8 +3590,8 @@ func TestContext2Plan_moduleDestroy(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -3586,8 +3661,8 @@ func TestContext2Plan_moduleDestroyCycle(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -3656,8 +3731,8 @@ func TestContext2Plan_moduleDestroyMultivar(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -3718,8 +3793,8 @@ func TestContext2Plan_pathVar(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -3782,8 +3857,8 @@ func TestContext2Plan_diffVar(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -3862,8 +3937,8 @@ func TestContext2Plan_hook(t *testing.T) { Config: m, Hooks: []Hook{h}, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -3891,8 +3966,8 @@ func TestContext2Plan_closeProvider(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -3929,8 +4004,8 @@ func TestContext2Plan_orphan(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( -
map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -3983,8 +4058,8 @@ func TestContext2Plan_shadowUuid(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -4017,8 +4092,8 @@ func TestContext2Plan_state(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -4105,8 +4180,8 @@ func TestContext2Plan_taint(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -4190,8 +4265,8 @@ func TestContext2Plan_taintIgnoreChanges(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -4270,8 +4345,8 @@ func TestContext2Plan_taintDestroyInterpolatedCountRace(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -4324,8 +4399,8 @@ func TestContext2Plan_targeted(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Targets: []addrs.Targetable{ @@ -4377,8 +4452,8 @@ func TestContext2Plan_targetedCrossModule(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Targets: []addrs.Targetable{ @@ -4444,8 +4519,8 @@ func TestContext2Plan_targetedModuleWithProvider(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "null": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("null"): testProviderFuncFixed(p), }, ), Targets: []addrs.Targetable{ @@ -4483,8 +4558,8 @@ func TestContext2Plan_targetedOrphan(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: MustShimLegacyState(&State{ @@ -4553,8 +4628,8 @@ func TestContext2Plan_targetedModuleOrphan(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, 
ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: MustShimLegacyState(&State{ @@ -4621,8 +4696,8 @@ func TestContext2Plan_targetedModuleUntargetedVariable(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Targets: []addrs.Targetable{ @@ -4679,8 +4754,8 @@ func TestContext2Plan_outputContainsTargetedResource(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Targets: []addrs.Targetable{ @@ -4726,8 +4801,8 @@ func TestContext2Plan_targetedOverTen(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: MustShimLegacyState(&State{ @@ -4778,8 +4853,8 @@ func TestContext2Plan_provider(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Variables: InputValues{ @@ -4805,8 +4880,8 @@ func TestContext2Plan_varListErr(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -4843,8 +4918,8 @@ func TestContext2Plan_ignoreChanges(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Variables: InputValues{ @@ -4915,8 +4990,8 @@ func TestContext2Plan_ignoreChangesWildcard(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Variables: InputValues{ @@ -4975,9 +5050,10 @@ func TestContext2Plan_ignoreChangesInMap(t *testing.T) { Status: states.ObjectReady, AttrsJSON: []byte(`{"tags":{"ignored":"from state","other":"from state"}}`), }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) m := testModule(t, "plan-ignore-changes-in-map") @@ -4985,8 +5061,8 @@ func TestContext2Plan_ignoreChangesInMap(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + 
addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), }, ), State: s, @@ -5065,8 +5141,8 @@ func TestContext2Plan_moduleMapLiteral(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -5112,8 +5188,8 @@ func TestContext2Plan_computedValueInMap(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -5171,8 +5247,8 @@ func TestContext2Plan_moduleVariableFromSplat(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -5254,8 +5330,8 @@ func TestContext2Plan_createBeforeDestroy_depends_datasource(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -5367,8 +5443,8 @@ func TestContext2Plan_listOrder(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -5448,8 +5524,8 @@ func TestContext2Plan_ignoreChangesWithFlatmaps(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -5584,8 +5660,8 @@ func TestContext2Plan_resourceNestedCount(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -5666,8 +5742,8 @@ func TestContext2Plan_computedAttrRefTypeMismatch(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -5699,8 +5775,8 @@ func TestContext2Plan_selfRef(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -5738,8 +5814,8 @@ func TestContext2Plan_selfRefMulti(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, 
), }) @@ -5777,8 +5853,8 @@ func TestContext2Plan_selfRefMultiAll(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -5818,8 +5894,8 @@ output "out" { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -5860,8 +5936,8 @@ resource "aws_instance" "foo" { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -5909,8 +5985,8 @@ resource "aws_instance" "foo" { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -5959,8 +6035,8 @@ func TestContext2Plan_requiredModuleOutput(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), }, ), }) @@ -6026,8 +6102,8 @@ func TestContext2Plan_requiredModuleObject(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), }, ), }) diff --git a/terraform/context_refresh_test.go b/terraform/context_refresh_test.go index dbd57eb53..7b84af45c 100644 --- a/terraform/context_refresh_test.go +++ b/terraform/context_refresh_test.go @@ -45,8 +45,8 @@ func TestContext2Refresh(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: startingState, @@ -103,9 +103,10 @@ func TestContext2Refresh_dynamicAttr(t *testing.T) { Status: states.ObjectReady, AttrsJSON: []byte(`{"dynamic":{"type":"string","value":"hello"}}`), }, - addrs.ProviderConfig{ - Type: "test", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) @@ -132,8 +133,8 @@ func TestContext2Refresh_dynamicAttr(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), }, ), State: startingState, @@ -168,8 +169,8 @@ func TestContext2Refresh_dataComputedModuleVar(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + 
addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -259,8 +260,8 @@ func TestContext2Refresh_targeted(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: MustShimLegacyState(&State{ @@ -342,8 +343,8 @@ func TestContext2Refresh_targetedCount(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: MustShimLegacyState(&State{ @@ -435,8 +436,8 @@ func TestContext2Refresh_targetedCountIndex(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: MustShimLegacyState(&State{ @@ -504,8 +505,8 @@ func TestContext2Refresh_moduleComputedVar(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -523,8 +524,8 @@ func TestContext2Refresh_delete(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: MustShimLegacyState(&State{ @@ -566,8 +567,8 @@ func TestContext2Refresh_ignoreUncreated(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: nil, @@ -597,8 +598,8 @@ func TestContext2Refresh_hook(t *testing.T) { Config: m, Hooks: []Hook{h}, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: MustShimLegacyState(&State{ @@ -663,8 +664,8 @@ func TestContext2Refresh_modules(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -725,8 +726,8 @@ func TestContext2Refresh_moduleInputComputedOutput(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -743,8 +744,8 @@ func TestContext2Refresh_moduleVarModule(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + 
map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -761,8 +762,8 @@ func TestContext2Refresh_noState(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -803,8 +804,8 @@ func TestContext2Refresh_output(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: MustShimLegacyState(&State{ @@ -879,8 +880,8 @@ func TestContext2Refresh_outputPartial(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: MustShimLegacyState(&State{ @@ -934,8 +935,8 @@ func TestContext2Refresh_stateBasic(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -1009,8 +1010,8 @@ func TestContext2Refresh_dataCount(t *testing.T) { ctx := testContext2(t, &ContextOpts{ ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), }, ), Config: m, @@ -1055,8 +1056,8 @@ func TestContext2Refresh_dataOrphan(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "null": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("null"): testProviderFuncFixed(p), }, ), State: state, @@ -1105,8 +1106,8 @@ func TestContext2Refresh_dataState(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "null": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("null"): testProviderFuncFixed(p), }, ), State: state, @@ -1212,8 +1213,8 @@ func TestContext2Refresh_dataStateRefData(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "null": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("null"): testProviderFuncFixed(p), }, ), State: state, @@ -1263,8 +1264,8 @@ func TestContext2Refresh_tainted(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -1307,7 +1308,7 @@ func TestContext2Refresh_unknownProvider(t *testing.T) { _, diags := NewContext(&ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{}, + map[addrs.Provider]providers.Factory{}, ), State: 
MustShimLegacyState(&State{ Modules: []*ModuleState{ @@ -1360,8 +1361,8 @@ func TestContext2Refresh_vars(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: MustShimLegacyState(&State{ @@ -1522,8 +1523,8 @@ func TestContext2Refresh_orphanModule(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -1567,8 +1568,8 @@ func TestContext2Validate(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -1630,8 +1631,8 @@ func TestContext2Refresh_noDiffHookOnScaleOut(t *testing.T) { Config: m, Hooks: []Hook{h}, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -1678,8 +1679,8 @@ func TestContext2Refresh_updateProviderInState(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: s, @@ -1688,7 +1689,7 @@ func TestContext2Refresh_updateProviderInState(t *testing.T) { expected := strings.TrimSpace(` aws_instance.bar: ID = foo - provider = provider.aws.foo`) + provider = provider["registry.terraform.io/-/aws"].foo`) state, diags := ctx.Refresh() if diags.HasErrors() { @@ -1739,15 +1740,18 @@ func TestContext2Refresh_schemaUpgradeFlatmap(t *testing.T) { "id": "foo", }, }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), }, ), State: s, @@ -1777,7 +1781,7 @@ func TestContext2Refresh_schemaUpgradeFlatmap(t *testing.T) { want := strings.TrimSpace(` test_thing.bar: ID = - provider = provider.test + provider = provider["registry.terraform.io/-/test"] name = foo `) if got != want { @@ -1822,15 +1826,18 @@ func TestContext2Refresh_schemaUpgradeJSON(t *testing.T) { SchemaVersion: 3, AttrsJSON: []byte(`{"id":"foo"}`), }, - addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, ) }) ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), }, ), State: s, @@ -1858,7 +1865,7 @@ func 
TestContext2Refresh_schemaUpgradeJSON(t *testing.T) { want := strings.TrimSpace(` test_thing.bar: ID = - provider = provider.test + provider = provider["registry.terraform.io/-/test"] name = foo `) if got != want { @@ -1889,8 +1896,8 @@ data "aws_data_source" "foo" { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -1952,8 +1959,8 @@ func TestContext2Refresh_dataResourceDependsOn(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), }, ), State: s, @@ -1964,3 +1971,97 @@ func TestContext2Refresh_dataResourceDependsOn(t *testing.T) { t.Fatalf("unexpected errors: %s", diags.Err()) } } + +// verify that dependencies are updated in the state during refresh +func TestRefresh_updateDependencies(t *testing.T) { + state := states.NewState() + root := state.EnsureModule(addrs.RootModuleInstance) + root.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "aws_instance", + Name: "foo", + }.Instance(addrs.NoKey), + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"foo"}`), + Dependencies: []addrs.AbsResource{ + // Existing dependencies should not be removed during refresh + { + Module: addrs.RootModuleInstance, + Resource: addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "aws_instance", + Name: "baz", + }, + }, + }, + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, + ) + root.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "aws_instance", + Name: "bar", + }.Instance(addrs.NoKey), + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"bar","foo":"foo"}`), + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, + ) + + m := testModuleInline(t, map[string]string{ + "main.tf": ` +resource "aws_instance" "bar" { + foo = aws_instance.foo.id +} + +resource "aws_instance" "foo" { +}`, + }) + + p := testProvider("aws") + p.ApplyFn = testApplyFn + p.DiffFn = testDiffFn + + ctx := testContext2(t, &ContextOpts{ + Config: m, + ProviderResolver: providers.ResolverFixed( + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), + }, + ), + State: state, + }) + + result, diags := ctx.Refresh() + if diags.HasErrors() { + t.Fatalf("plan errors: %s", diags.Err()) + } + + expect := strings.TrimSpace(` +aws_instance.bar: + ID = bar + provider = provider["registry.terraform.io/-/aws"] + foo = foo + + Dependencies: + aws_instance.foo +aws_instance.foo: + ID = foo + provider = provider["registry.terraform.io/-/aws"] + + Dependencies: + aws_instance.baz +`) + + checkStateString(t, result, expect) +} diff --git a/terraform/context_test.go b/terraform/context_test.go index 5b7579051..3407fc0bd 100644 --- a/terraform/context_test.go +++ b/terraform/context_test.go @@ -1086,18 +1086,18 @@ root const testContextRefreshModuleStr = ` aws_instance.web: (tainted) ID = bar - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] module.child: 
aws_instance.web: ID = new - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] ` const testContextRefreshOutputStr = ` aws_instance.web: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar Outputs: @@ -1112,5 +1112,5 @@ const testContextRefreshOutputPartialStr = ` const testContextRefreshTaintedStr = ` aws_instance.web: (tainted) ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] ` diff --git a/terraform/context_validate_test.go b/terraform/context_validate_test.go index aae8da090..6d2718462 100644 --- a/terraform/context_validate_test.go +++ b/terraform/context_validate_test.go @@ -30,8 +30,8 @@ func TestContext2Validate_badCount(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -59,8 +59,8 @@ func TestContext2Validate_badVar(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -87,35 +87,28 @@ func TestContext2Validate_varMapOverrideOld(t *testing.T) { }, } - c := testContext2(t, &ContextOpts{ + _, diags := NewContext(&ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), - Variables: InputValues{ - "foo.foo": &InputValue{ - Value: cty.StringVal("bar"), - SourceType: ValueFromCaller, - }, - }, + Variables: InputValues{}, }) - - diags := c.Validate() if !diags.HasErrors() { + // Error should be: The input variable "provider_var" has not been assigned a value. t.Fatalf("succeeded; want error") } } func TestContext2Validate_varNoDefaultExplicitType(t *testing.T) { m := testModule(t, "validate-var-no-default-explicit-type") - c := testContext2(t, &ContextOpts{ + _, diags := NewContext(&ContextOpts{ Config: m, }) - - diags := c.Validate() if !diags.HasErrors() { + // Error should be: The input variable "maybe_a_map" has not been assigned a value. 
t.Fatalf("succeeded; want error") } } @@ -149,9 +142,9 @@ func TestContext2Validate_computedVar(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), - "test": testProviderFuncFixed(pt), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), + addrs.NewLegacyProvider("test"): testProviderFuncFixed(pt), }, ), }) @@ -198,8 +191,8 @@ func TestContext2Validate_computedInFunction(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -235,8 +228,8 @@ func TestContext2Validate_countComputed(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -261,8 +254,8 @@ func TestContext2Validate_countNegative(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -289,8 +282,8 @@ func TestContext2Validate_countVariable(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -314,17 +307,16 @@ func TestContext2Validate_countVariableNoDefault(t *testing.T) { }, } - c := testContext2(t, &ContextOpts{ + _, diags := NewContext(&ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) - - diags := c.Validate() if !diags.HasErrors() { + // Error should be: The input variable "foo" has not been assigned a value. 
t.Fatalf("succeeded; want error") } } @@ -345,8 +337,8 @@ func TestContext2Validate_moduleBadOutput(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -373,8 +365,8 @@ func TestContext2Validate_moduleGood(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -399,8 +391,8 @@ func TestContext2Validate_moduleBadResource(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -431,8 +423,8 @@ func TestContext2Validate_moduleDepsShouldNotCycle(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -464,8 +456,8 @@ func TestContext2Validate_moduleProviderVar(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Variables: InputValues{ @@ -507,8 +499,8 @@ func TestContext2Validate_moduleProviderInheritUnused(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -555,8 +547,8 @@ func TestContext2Validate_orphans(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -597,8 +589,8 @@ func TestContext2Validate_providerConfig_bad(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -635,8 +627,8 @@ func TestContext2Validate_providerConfig_badEmpty(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -670,8 +662,8 @@ func TestContext2Validate_providerConfig_good(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -700,8 +692,8 @@ func 
TestContext2Validate_provisionerConfig_bad(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -737,8 +729,8 @@ func TestContext2Validate_badResourceConnection(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -771,8 +763,8 @@ func TestContext2Validate_badProvisionerConnection(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -819,8 +811,8 @@ func TestContext2Validate_provisionerConfig_good(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -847,17 +839,16 @@ func TestContext2Validate_requiredVar(t *testing.T) { }, } - c := testContext2(t, &ContextOpts{ + _, diags := NewContext(&ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) - - diags := c.Validate() if !diags.HasErrors() { + // Error should be: The input variable "foo" has not been assigned a value. 
t.Fatalf("succeeded; want error") } } @@ -878,8 +869,8 @@ func TestContext2Validate_resourceConfig_bad(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -910,8 +901,8 @@ func TestContext2Validate_resourceConfig_good(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -955,8 +946,8 @@ func TestContext2Validate_tainted(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), State: state, @@ -998,8 +989,8 @@ func TestContext2Validate_targetedDestroy(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Provisioners: map[string]ProvisionerFactory{ @@ -1045,8 +1036,8 @@ func TestContext2Validate_varRefUnknown(t *testing.T) { c := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), Variables: InputValues{ @@ -1095,8 +1086,8 @@ func TestContext2Validate_interpolateVar(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "template": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("template"): testProviderFuncFixed(p), }, ), UIInput: input, @@ -1130,8 +1121,8 @@ func TestContext2Validate_interpolateComputedModuleVarDef(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), UIInput: input, @@ -1155,8 +1146,8 @@ func TestContext2Validate_interpolateMap(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "template": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("template"): testProviderFuncFixed(p), }, ), UIInput: input, @@ -1235,8 +1226,8 @@ output "out" { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -1274,8 +1265,8 @@ resource "aws_instance" "foo" { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): 
testProviderFuncFixed(p), }, ), }) @@ -1305,8 +1296,8 @@ output "out" { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -1338,8 +1329,8 @@ output "out" { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -1371,8 +1362,8 @@ output "out" { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "aws": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): testProviderFuncFixed(p), }, ), }) @@ -1403,8 +1394,8 @@ resource "test_instance" "bar" { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), }, ), }) @@ -1438,8 +1429,8 @@ resource "test_instance" "bar" { ctx := testContext2(t, &ContextOpts{ Config: m, ProviderResolver: providers.ResolverFixed( - map[string]providers.Factory{ - "test": testProviderFuncFixed(p), + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), }, ), }) @@ -1454,3 +1445,76 @@ resource "test_instance" "bar" { t.Fatalf("wrong error:\ngot: %s\nwant: message containing %q", got, want) } } + +func TestContext2Validate_variableCustomValidationsFail(t *testing.T) { + // This test is for custom validation rules associated with root module + // variables, and specifically that we handle the situation where the + // given value is invalid in a child module. + m := testModule(t, "validate-variable-custom-validations-child") + + p := testProvider("test") + ctx := testContext2(t, &ContextOpts{ + Config: m, + ProviderResolver: providers.ResolverFixed( + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), + }, + ), + }) + + diags := ctx.Validate() + if !diags.HasErrors() { + t.Fatal("succeeded; want errors") + } + if got, want := diags.Err().Error(), `Invalid value for variable: Value must not be "nope".`; strings.Index(got, want) == -1 { + t.Fatalf("wrong error:\ngot: %s\nwant: message containing %q", got, want) + } +} + +func TestContext2Validate_variableCustomValidationsRoot(t *testing.T) { + // This test is for custom validation rules associated with root module + // variables, and specifically that we handle the situation where their + // values are unknown during validation, skipping the validation check + // altogether. (Root module variables are never known during validation.) + m := testModuleInline(t, map[string]string{ + "main.tf": ` +# This feature is currently experimental. +# (If you're currently cleaning up after concluding the experiment, +# remember to also clean up similar references in the configs package +# under "invalid-files" and "invalid-modules".) +terraform { + experiments = [variable_validation] +} + +variable "test" { + type = string + + validation { + condition = var.test != "nope" + error_message = "Value must not be \"nope\"." 
+ } +} +`, + }) + + p := testProvider("test") + ctx := testContext2(t, &ContextOpts{ + Config: m, + ProviderResolver: providers.ResolverFixed( + map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): testProviderFuncFixed(p), + }, + ), + Variables: InputValues{ + "test": &InputValue{ + Value: cty.UnknownVal(cty.String), + SourceType: ValueFromCLIArg, + }, + }, + }) + + diags := ctx.Validate() + if diags.HasErrors() { + t.Fatalf("unexpected error\ngot: %s", diags.Err().Error()) + } +} diff --git a/terraform/edge_destroy.go b/terraform/edge_destroy.go deleted file mode 100644 index bc9d638aa..000000000 --- a/terraform/edge_destroy.go +++ /dev/null @@ -1,17 +0,0 @@ -package terraform - -import ( - "fmt" - - "github.com/hashicorp/terraform/dag" -) - -// DestroyEdge is an edge that represents a standard "destroy" relationship: -// Target depends on Source because Source is destroying. -type DestroyEdge struct { - S, T dag.Vertex -} - -func (e *DestroyEdge) Hashcode() interface{} { return fmt.Sprintf("%p-%p", e.S, e.T) } -func (e *DestroyEdge) Source() dag.Vertex { return e.S } -func (e *DestroyEdge) Target() dag.Vertex { return e.T } diff --git a/terraform/eval_apply.go b/terraform/eval_apply.go index 6b839fcaf..0755c6b9f 100644 --- a/terraform/eval_apply.go +++ b/terraform/eval_apply.go @@ -24,7 +24,6 @@ import ( type EvalApply struct { Addr addrs.ResourceInstance Config *configs.Resource - Dependencies []addrs.Referenceable State **states.ResourceInstanceObject Change **plans.ResourceInstanceChange ProviderAddr addrs.AbsProviderConfig @@ -129,7 +128,7 @@ func (n *EvalApply) Eval(ctx EvalContext) (interface{}, error) { "Provider produced invalid object", fmt.Sprintf( "Provider %q produced an invalid value after apply for %s. The result cannot be saved in the Terraform state.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.", - n.ProviderAddr.ProviderConfig.Type, tfdiags.FormatErrorPrefixed(err, absAddr.String()), + n.ProviderAddr.Provider.LegacyString(), tfdiags.FormatErrorPrefixed(err, absAddr.String()), ), )) } @@ -199,7 +198,7 @@ func (n *EvalApply) Eval(ctx EvalContext) (interface{}, error) { // to notice in the logs if an inconsistency beyond the type system // leads to a downstream provider failure.
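// [Editor's note: illustrative aside, not part of this change.] The test
// updates earlier in this diff all apply one mechanical migration: provider
// factory maps are now keyed by addrs.Provider values rather than bare
// type-name strings, with addrs.NewLegacyProvider bridging old-style names
// until full source addresses are in use. A minimal before/after sketch,
// assuming some providers.Interface value p:
//
//	// before
//	factories := map[string]providers.Factory{"aws": providers.FactoryFixed(p)}
//	// after: the key carries a full provider identity
//	factories := map[addrs.Provider]providers.Factory{
//		addrs.NewLegacyProvider("aws"): providers.FactoryFixed(p),
//	}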
var buf strings.Builder - fmt.Fprintf(&buf, "[WARN] Provider %q produced an unexpected new value for %s, but we are tolerating it because it is using the legacy plugin SDK.\n The following problems may be the cause of any confusing errors from downstream operations:", n.ProviderAddr.ProviderConfig.Type, absAddr) + fmt.Fprintf(&buf, "[WARN] Provider %q produced an unexpected new value for %s, but we are tolerating it because it is using the legacy plugin SDK.\n The following problems may be the cause of any confusing errors from downstream operations:", n.ProviderAddr.Provider.LegacyString(), absAddr) for _, err := range errs { fmt.Fprintf(&buf, "\n - %s", tfdiags.FormatError(err)) } @@ -219,7 +218,7 @@ func (n *EvalApply) Eval(ctx EvalContext) (interface{}, error) { "Provider produced inconsistent result after apply", fmt.Sprintf( "When applying changes to %s, provider %q produced an unexpected new value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.", - absAddr, n.ProviderAddr.ProviderConfig.Type, tfdiags.FormatError(err), + absAddr, n.ProviderAddr.Provider.LegacyString(), tfdiags.FormatError(err), ), )) } @@ -254,6 +253,8 @@ func (n *EvalApply) Eval(ctx EvalContext) (interface{}, error) { } } + newStatus := states.ObjectReady + + // Sometimes providers return a null value when an operation fails for some // reason, but we'd rather keep the prior state so that the error can be // corrected on a subsequent run. We must only do this for null new value @@ -266,15 +267,20 @@ func (n *EvalApply) Eval(ctx EvalContext) (interface{}, error) { // If change.Action is Create then change.Before will also be null, // which is fine. newVal = change.Before + + // If we're recovering the previous state, we also want to restore + // the tainted status of the object. + if state.Status == states.ObjectTainted { + newStatus = states.ObjectTainted + } } var newState *states.ResourceInstanceObject if !newVal.IsNull() { // null value indicates that the object is deleted, so we won't set a new state in that case newState = &states.ResourceInstanceObject{ - Status: states.ObjectReady, - Value: newVal, - Private: resp.Private, - Dependencies: n.Dependencies, // Should be populated by the caller from the StateDependencies method on the resource instance node + Status: newStatus, + Value: newVal, + Private: resp.Private, } } @@ -378,40 +384,39 @@ type EvalMaybeTainted struct { Change **plans.ResourceInstanceChange State **states.ResourceInstanceObject Error *error - - // If StateOutput is not nil, its referent will be assigned either the same - // pointer as State or a new object with its status set as Tainted, - // depending on whether an error is given and if this was a create action.
- StateOutput **states.ResourceInstanceObject } -// TODO: test func (n *EvalMaybeTainted) Eval(ctx EvalContext) (interface{}, error) { + if n.State == nil || n.Change == nil || n.Error == nil { + return nil, nil + } + state := *n.State change := *n.Change err := *n.Error + // nothing to do if everything went as planned + if err == nil { + return nil, nil + } + if state != nil && state.Status == states.ObjectTainted { log.Printf("[TRACE] EvalMaybeTainted: %s was already tainted, so nothing to do", n.Addr.Absolute(ctx.Path())) return nil, nil } - if n.StateOutput != nil { - if err != nil && change.Action == plans.Create { - // If there are errors during a _create_ then the object is - // in an undefined state, and so we'll mark it as tainted so - // we can try again on the next run. - // - // We don't do this for other change actions because errors - // during updates will often not change the remote object at all. - // If there _were_ changes prior to the error, it's the provider's - // responsibility to record the effect of those changes in the - // object value it returned. - log.Printf("[TRACE] EvalMaybeTainted: %s encountered an error during creation, so it is now marked as tainted", n.Addr.Absolute(ctx.Path())) - *n.StateOutput = state.AsTainted() - } else { - *n.StateOutput = state - } + if change.Action == plans.Create { + // If there are errors during a _create_ then the object is + // in an undefined state, and so we'll mark it as tainted so + // we can try again on the next run. + // + // We don't do this for other change actions because errors + // during updates will often not change the remote object at all. + // If there _were_ changes prior to the error, it's the provider's + // responsibility to record the effect of those changes in the + // object value it returned. + log.Printf("[TRACE] EvalMaybeTainted: %s encountered an error during creation, so it is now marked as tainted", n.Addr.Absolute(ctx.Path())) + *n.State = state.AsTainted() } return nil, nil @@ -556,8 +561,18 @@ func (n *EvalApplyProvisioners) apply(ctx EvalContext, provs []*configs.Provisio provisioner := ctx.Provisioner(prov.Type) schema := ctx.ProvisionerSchema(prov.Type) - forEach, forEachDiags := evaluateResourceForEachExpression(n.ResourceConfig.ForEach, ctx) - diags = diags.Append(forEachDiags) + var forEach map[string]cty.Value + + // For a destroy-time provisioner forEach is intentionally nil here, + // which EvalDataForInstanceKey responds to by not populating EachValue + // in its result. That's okay because each.value is prohibited for + // destroy-time provisioners. + if n.When != configs.ProvisionerWhenDestroy { + m, forEachDiags := evaluateResourceForEachExpression(n.ResourceConfig.ForEach, ctx) + diags = diags.Append(forEachDiags) + forEach = m + } + keyData := EvalDataForInstanceKey(instanceAddr.Key, forEach) // Evaluate the main provisioner configuration. diff --git a/terraform/eval_context.go b/terraform/eval_context.go index e36805e90..a682b3d6c 100644 --- a/terraform/eval_context.go +++ b/terraform/eval_context.go @@ -4,6 +4,7 @@ import ( "github.com/hashicorp/hcl/v2" "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/configs/configschema" + "github.com/hashicorp/terraform/instances" "github.com/hashicorp/terraform/lang" "github.com/hashicorp/terraform/plans" "github.com/hashicorp/terraform/providers" @@ -29,11 +30,13 @@ type EvalContext interface { // Input is the UIInput object for interacting with the UI. 
Input() UIInput - // InitProvider initializes the provider with the given type and address, and - // returns the implementation of the resource provider or an error. + // InitProvider initializes the provider with the given address, and returns + // the implementation of the resource provider or an error. // - // It is an error to initialize the same provider more than once. - InitProvider(typ string, addr addrs.ProviderConfig) (providers.Interface, error) + // It is an error to initialize the same provider more than once. This + // method will panic if the module instance address of the given provider + // configuration does not match the Path() of the EvalContext. + InitProvider(addr addrs.AbsProviderConfig) (providers.Interface, error) // Provider gets the provider instance with the given address (already // initialized) or returns nil if the provider isn't initialized. @@ -52,18 +55,27 @@ type EvalContext interface { ProviderSchema(addrs.AbsProviderConfig) *ProviderSchema // CloseProvider closes provider connections that aren't needed anymore. - CloseProvider(addrs.ProviderConfig) error + // + // This method will panic if the module instance address of the given + // provider configuration does not match the Path() of the EvalContext. + CloseProvider(addrs.AbsProviderConfig) error // ConfigureProvider configures the provider with the given // configuration. This is a separate context call because this call // is used to store the provider configuration for inheritance lookups // with ParentProviderConfig(). - ConfigureProvider(addrs.ProviderConfig, cty.Value) tfdiags.Diagnostics + // + // This method will panic if the module instance address of the given + // provider configuration does not match the Path() of the EvalContext. + ConfigureProvider(addrs.AbsProviderConfig, cty.Value) tfdiags.Diagnostics // ProviderInput and SetProviderInput are used to configure providers // from user input. - ProviderInput(addrs.ProviderConfig) map[string]cty.Value - SetProviderInput(addrs.ProviderConfig, map[string]cty.Value) + // + // These methods will panic if the module instance address of the given + // provider configuration does not match the Path() of the EvalContext. + ProviderInput(addrs.AbsProviderConfig) map[string]cty.Value + SetProviderInput(addrs.AbsProviderConfig, map[string]cty.Value) // InitProvisioner initializes the provisioner with the given name and // returns the implementation of the resource provisioner or an error. @@ -123,6 +135,17 @@ type EvalContext interface { // previously-set keys that are not present in the new map. SetModuleCallArguments(addrs.ModuleCallInstance, map[string]cty.Value) + // GetVariableValue returns the value provided for the input variable with + // the given address, or cty.DynamicVal if the variable hasn't been assigned + // a value yet. + // + // Most callers should deal with variable values only indirectly via + // EvaluationScope and the other expression evaluation functions, but + // this is provided because variables tend to be evaluated outside of + // the context of the module they belong to and so we sometimes need to + // override the normal expression evaluation behavior. + GetVariableValue(addr addrs.AbsInputVariableInstance) cty.Value + // Changes returns the writer object that can be used to write new proposed // changes into the global changes set. Changes() *plans.ChangesSync @@ -130,4 +153,12 @@ type EvalContext interface { // State returns a wrapper object that provides safe concurrent access to // the global state. 
State() *states.SyncState + + // InstanceExpander returns a helper object for tracking the expansion of + // graph nodes during the plan phase in response to "count" and "for_each" + // arguments. + // + // The InstanceExpander is a global object that is shared across all of the + // EvalContext objects for a given configuration. + InstanceExpander() *instances.Expander } diff --git a/terraform/eval_context_builtin.go b/terraform/eval_context_builtin.go index f6531848f..15f1f08ab 100644 --- a/terraform/eval_context_builtin.go +++ b/terraform/eval_context_builtin.go @@ -6,6 +6,7 @@ import ( "log" "sync" + "github.com/hashicorp/terraform/instances" "github.com/hashicorp/terraform/plans" "github.com/hashicorp/terraform/providers" "github.com/hashicorp/terraform/provisioners" @@ -53,16 +54,17 @@ type BuiltinEvalContext struct { VariableValues map[string]map[string]cty.Value VariableValuesLock *sync.Mutex - Components contextComponentFactory - Hooks []Hook - InputValue UIInput - ProviderCache map[string]providers.Interface - ProviderInputConfig map[string]map[string]cty.Value - ProviderLock *sync.Mutex - ProvisionerCache map[string]provisioners.Interface - ProvisionerLock *sync.Mutex - ChangesValue *plans.ChangesSync - StateValue *states.SyncState + Components contextComponentFactory + Hooks []Hook + InputValue UIInput + ProviderCache map[string]providers.Interface + ProviderInputConfig map[string]map[string]cty.Value + ProviderLock *sync.Mutex + ProvisionerCache map[string]provisioners.Interface + ProvisionerLock *sync.Mutex + ChangesValue *plans.ChangesSync + StateValue *states.SyncState + InstanceExpanderValue *instances.Expander once sync.Once } @@ -103,9 +105,14 @@ func (ctx *BuiltinEvalContext) Input() UIInput { return ctx.InputValue } -func (ctx *BuiltinEvalContext) InitProvider(typeName string, addr addrs.ProviderConfig) (providers.Interface, error) { +func (ctx *BuiltinEvalContext) InitProvider(addr addrs.AbsProviderConfig) (providers.Interface, error) { ctx.once.Do(ctx.init) - absAddr := addr.Absolute(ctx.Path()) + absAddr := addr + if !absAddr.Module.Equal(ctx.Path()) { + // This indicates incorrect use of InitProvider: it should be used + // only from the module that the provider configuration belongs to. + panic(fmt.Sprintf("%s initialized by wrong module %s", absAddr, ctx.Path())) + } // If we already initialized, it is an error if p := ctx.Provider(absAddr); p != nil { @@ -119,12 +126,12 @@ func (ctx *BuiltinEvalContext) InitProvider(typeName string, addr addrs.Provider key := absAddr.String() - p, err := ctx.Components.ResourceProvider(typeName, key) + p, err := ctx.Components.ResourceProvider(addr.Provider) if err != nil { return nil, err } - log.Printf("[TRACE] BuiltinEvalContext: Initialized %q provider for %s", typeName, absAddr) + log.Printf("[TRACE] BuiltinEvalContext: Initialized %q provider for %s", addr.LegacyString(), absAddr) ctx.ProviderCache[key] = p return p, nil @@ -141,17 +148,21 @@ func (ctx *BuiltinEvalContext) Provider(addr addrs.AbsProviderConfig) providers. 
func (ctx *BuiltinEvalContext) ProviderSchema(addr addrs.AbsProviderConfig) *ProviderSchema { ctx.once.Do(ctx.init) - - return ctx.Schemas.ProviderSchema(addr.ProviderConfig.Type) + return ctx.Schemas.ProviderSchema(addr.Provider) } -func (ctx *BuiltinEvalContext) CloseProvider(addr addrs.ProviderConfig) error { +func (ctx *BuiltinEvalContext) CloseProvider(addr addrs.AbsProviderConfig) error { ctx.once.Do(ctx.init) + if !addr.Module.Equal(ctx.Path()) { + // This indicates incorrect use of CloseProvider: it should be used + // only from the module that the provider configuration belongs to. + panic(fmt.Sprintf("%s closed by wrong module %s", addr, ctx.Path())) + } ctx.ProviderLock.Lock() defer ctx.ProviderLock.Unlock() - key := addr.Absolute(ctx.Path()).String() + key := addr.String() provider := ctx.ProviderCache[key] if provider != nil { delete(ctx.ProviderCache, key) @@ -161,9 +172,15 @@ func (ctx *BuiltinEvalContext) CloseProvider(addr addrs.ProviderConfig) error { return nil } -func (ctx *BuiltinEvalContext) ConfigureProvider(addr addrs.ProviderConfig, cfg cty.Value) tfdiags.Diagnostics { +func (ctx *BuiltinEvalContext) ConfigureProvider(addr addrs.AbsProviderConfig, cfg cty.Value) tfdiags.Diagnostics { var diags tfdiags.Diagnostics - absAddr := addr.Absolute(ctx.Path()) + absAddr := addr + if !absAddr.Module.Equal(ctx.Path()) { + // This indicates incorrect use of ConfigureProvider: it should be used + // only from the module that the provider configuration belongs to. + panic(fmt.Sprintf("%s configured by wrong module %s", absAddr, ctx.Path())) + } + p := ctx.Provider(absAddr) if p == nil { diags = diags.Append(fmt.Errorf("%s not initialized", addr)) @@ -185,10 +202,16 @@ func (ctx *BuiltinEvalContext) ConfigureProvider(addr addrs.ProviderConfig, cfg return resp.Diagnostics } -func (ctx *BuiltinEvalContext) ProviderInput(pc addrs.ProviderConfig) map[string]cty.Value { +func (ctx *BuiltinEvalContext) ProviderInput(pc addrs.AbsProviderConfig) map[string]cty.Value { ctx.ProviderLock.Lock() defer ctx.ProviderLock.Unlock() + if !pc.Module.Equal(ctx.Path()) { + // This indicates incorrect use of ProviderInput: it should be used + // only from the module that the provider configuration belongs to. + panic(fmt.Sprintf("%s read by wrong module %s", pc, ctx.Path())) + } + if !ctx.Path().IsRoot() { // Only root module provider configurations can have input. return nil @@ -197,8 +220,13 @@ func (ctx *BuiltinEvalContext) ProviderInput(pc addrs.ProviderConfig) map[string return ctx.ProviderInputConfig[pc.String()] } -func (ctx *BuiltinEvalContext) SetProviderInput(pc addrs.ProviderConfig, c map[string]cty.Value) { - absProvider := pc.Absolute(ctx.Path()) +func (ctx *BuiltinEvalContext) SetProviderInput(pc addrs.AbsProviderConfig, c map[string]cty.Value) { + absProvider := pc + if !absProvider.Module.Equal(ctx.Path()) { + // This indicates incorrect use of SetProviderInput: it should be used + // only from the module that the provider configuration belongs to. + panic(fmt.Sprintf("%s set by wrong module %s", absProvider, ctx.Path())) + } if !ctx.Path().IsRoot() { // Only root module provider configurations can have input.
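// [Editor's sketch, not part of this change.] All of the provider lifecycle
// methods above (InitProvider, CloseProvider, ConfigureProvider,
// ProviderInput, SetProviderInput) now take a fully-qualified
// addrs.AbsProviderConfig and panic when its module path disagrees with the
// EvalContext's own Path(). A hypothetical caller in the root module would
// construct the address like this (the helper name is invented for
// illustration):
func initRootFooProvider(evalCtx EvalContext) (providers.Interface, error) {
	addr := addrs.AbsProviderConfig{
		Module:   addrs.RootModuleInstance,
		Provider: addrs.NewLegacyProvider("foo"),
	}
	// Safe only when evalCtx.Path() is the root module instance; any other
	// module's context would trip the Module mismatch panic shown above.
	return evalCtx.InitProvider(addr)
}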
@@ -225,7 +253,7 @@ func (ctx *BuiltinEvalContext) InitProvisioner(n string) (provisioners.Interface ctx.ProvisionerLock.Lock() defer ctx.ProvisionerLock.Unlock() - p, err := ctx.Components.ResourceProvisioner(n, "") + p, err := ctx.Components.ResourceProvisioner(n) if err != nil { return nil, err } @@ -312,6 +340,16 @@ func (ctx *BuiltinEvalContext) SetModuleCallArguments(n addrs.ModuleCallInstance } } +func (ctx *BuiltinEvalContext) GetVariableValue(addr addrs.AbsInputVariableInstance) cty.Value { + modKey := addr.Module.String() + modVars := ctx.VariableValues[modKey] + val, ok := modVars[addr.Variable.Name] + if !ok { + return cty.DynamicVal + } + return val +} + func (ctx *BuiltinEvalContext) Changes() *plans.ChangesSync { return ctx.ChangesValue } @@ -320,5 +358,9 @@ func (ctx *BuiltinEvalContext) State() *states.SyncState { return ctx.StateValue } +func (ctx *BuiltinEvalContext) InstanceExpander() *instances.Expander { + return ctx.InstanceExpanderValue +} + func (ctx *BuiltinEvalContext) init() { } diff --git a/terraform/eval_context_builtin_test.go b/terraform/eval_context_builtin_test.go index a0d7bed94..45343c0f3 100644 --- a/terraform/eval_context_builtin_test.go +++ b/terraform/eval_context_builtin_test.go @@ -24,16 +24,23 @@ func TestBuiltinEvalContextProviderInput(t *testing.T) { ctx2.ProviderInputConfig = cache ctx2.ProviderLock = &lock - providerAddr := addrs.ProviderConfig{Type: "foo"} + providerAddr1 := addrs.AbsProviderConfig{ + Module: addrs.RootModuleInstance, + Provider: addrs.NewLegacyProvider("foo"), + } + providerAddr2 := addrs.AbsProviderConfig{ + Module: addrs.RootModuleInstance.Child("child", addrs.NoKey), + Provider: addrs.NewLegacyProvider("foo"), + } expected1 := map[string]cty.Value{"value": cty.StringVal("foo")} - ctx1.SetProviderInput(providerAddr, expected1) + ctx1.SetProviderInput(providerAddr1, expected1) try2 := map[string]cty.Value{"value": cty.StringVal("bar")} - ctx2.SetProviderInput(providerAddr, try2) // ignored because not a root module + ctx2.SetProviderInput(providerAddr2, try2) // ignored because not a root module - actual1 := ctx1.ProviderInput(providerAddr) - actual2 := ctx2.ProviderInput(providerAddr) + actual1 := ctx1.ProviderInput(providerAddr1) + actual2 := ctx2.ProviderInput(providerAddr2) if !reflect.DeepEqual(actual1, expected1) { t.Errorf("wrong result 1\ngot: %#v\nwant: %#v", actual1, expected1) @@ -52,19 +59,26 @@ func TestBuildingEvalContextInitProvider(t *testing.T) { ctx.ProviderLock = &lock ctx.ProviderCache = make(map[string]providers.Interface) ctx.Components = &basicComponentFactory{ - providers: map[string]providers.Factory{ - "test": providers.FactoryFixed(testP), + providers: map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): providers.FactoryFixed(testP), }, } - providerAddrDefault := addrs.ProviderConfig{Type: "test"} - providerAddrAlias := addrs.ProviderConfig{Type: "test", Alias: "foo"} + providerAddrDefault := addrs.AbsProviderConfig{ + Module: addrs.RootModuleInstance, + Provider: addrs.NewLegacyProvider("test"), + } + providerAddrAlias := addrs.AbsProviderConfig{ + Module: addrs.RootModuleInstance, + Provider: addrs.NewLegacyProvider("test"), + Alias: "foo", + } - _, err := ctx.InitProvider("test", providerAddrDefault) + _, err := ctx.InitProvider(providerAddrDefault) if err != nil { t.Fatalf("error initializing provider test: %s", err) } - _, err = ctx.InitProvider("test", providerAddrAlias) + _, err = ctx.InitProvider(providerAddrAlias) if err != nil { t.Fatalf("error initializing provider 
test.foo: %s", err) } diff --git a/terraform/eval_context_mock.go b/terraform/eval_context_mock.go index 26ed4be1f..210a40d80 100644 --- a/terraform/eval_context_mock.go +++ b/terraform/eval_context_mock.go @@ -5,6 +5,7 @@ import ( "github.com/hashicorp/hcl/v2/hcldec" "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/configs/configschema" + "github.com/hashicorp/terraform/instances" "github.com/hashicorp/terraform/lang" "github.com/hashicorp/terraform/plans" "github.com/hashicorp/terraform/providers" @@ -30,7 +31,7 @@ type MockEvalContext struct { InitProviderCalled bool InitProviderType string - InitProviderAddr addrs.ProviderConfig + InitProviderAddr addrs.AbsProviderConfig InitProviderProvider providers.Interface InitProviderError error @@ -43,19 +44,19 @@ type MockEvalContext struct { ProviderSchemaSchema *ProviderSchema CloseProviderCalled bool - CloseProviderAddr addrs.ProviderConfig + CloseProviderAddr addrs.AbsProviderConfig CloseProviderProvider providers.Interface ProviderInputCalled bool - ProviderInputAddr addrs.ProviderConfig + ProviderInputAddr addrs.AbsProviderConfig ProviderInputValues map[string]cty.Value SetProviderInputCalled bool - SetProviderInputAddr addrs.ProviderConfig + SetProviderInputAddr addrs.AbsProviderConfig SetProviderInputValues map[string]cty.Value ConfigureProviderCalled bool - ConfigureProviderAddr addrs.ProviderConfig + ConfigureProviderAddr addrs.AbsProviderConfig ConfigureProviderConfig cty.Value ConfigureProviderDiags tfdiags.Diagnostics @@ -115,11 +116,18 @@ type MockEvalContext struct { SetModuleCallArgumentsModule addrs.ModuleCallInstance SetModuleCallArgumentsValues map[string]cty.Value + GetVariableValueCalled bool + GetVariableValueAddr addrs.AbsInputVariableInstance + GetVariableValueValue cty.Value + ChangesCalled bool ChangesChanges *plans.ChangesSync StateCalled bool StateState *states.SyncState + + InstanceExpanderCalled bool + InstanceExpanderExpander *instances.Expander } // MockEvalContext implements EvalContext @@ -146,9 +154,9 @@ func (c *MockEvalContext) Input() UIInput { return c.InputInput } -func (c *MockEvalContext) InitProvider(t string, addr addrs.ProviderConfig) (providers.Interface, error) { +func (c *MockEvalContext) InitProvider(addr addrs.AbsProviderConfig) (providers.Interface, error) { c.InitProviderCalled = true - c.InitProviderType = t + c.InitProviderType = addr.LegacyString() c.InitProviderAddr = addr return c.InitProviderProvider, c.InitProviderError } @@ -165,26 +173,26 @@ func (c *MockEvalContext) ProviderSchema(addr addrs.AbsProviderConfig) *Provider return c.ProviderSchemaSchema } -func (c *MockEvalContext) CloseProvider(addr addrs.ProviderConfig) error { +func (c *MockEvalContext) CloseProvider(addr addrs.AbsProviderConfig) error { c.CloseProviderCalled = true c.CloseProviderAddr = addr return nil } -func (c *MockEvalContext) ConfigureProvider(addr addrs.ProviderConfig, cfg cty.Value) tfdiags.Diagnostics { +func (c *MockEvalContext) ConfigureProvider(addr addrs.AbsProviderConfig, cfg cty.Value) tfdiags.Diagnostics { c.ConfigureProviderCalled = true c.ConfigureProviderAddr = addr c.ConfigureProviderConfig = cfg return c.ConfigureProviderDiags } -func (c *MockEvalContext) ProviderInput(addr addrs.ProviderConfig) map[string]cty.Value { +func (c *MockEvalContext) ProviderInput(addr addrs.AbsProviderConfig) map[string]cty.Value { c.ProviderInputCalled = true c.ProviderInputAddr = addr return c.ProviderInputValues } -func (c *MockEvalContext) SetProviderInput(addr addrs.ProviderConfig, vals 
map[string]cty.Value) { +func (c *MockEvalContext) SetProviderInput(addr addrs.AbsProviderConfig, vals map[string]cty.Value) { c.SetProviderInputCalled = true c.SetProviderInputAddr = addr c.SetProviderInputValues = vals @@ -308,6 +316,12 @@ func (c *MockEvalContext) SetModuleCallArguments(n addrs.ModuleCallInstance, val c.SetModuleCallArgumentsValues = values } +func (c *MockEvalContext) GetVariableValue(addr addrs.AbsInputVariableInstance) cty.Value { + c.GetVariableValueCalled = true + c.GetVariableValueAddr = addr + return c.GetVariableValueValue +} + func (c *MockEvalContext) Changes() *plans.ChangesSync { c.ChangesCalled = true return c.ChangesChanges @@ -317,3 +331,8 @@ func (c *MockEvalContext) State() *states.SyncState { c.StateCalled = true return c.StateState } + +func (c *MockEvalContext) InstanceExpander() *instances.Expander { + c.InstanceExpanderCalled = true + return c.InstanceExpanderExpander +} diff --git a/terraform/eval_diff.go b/terraform/eval_diff.go index 3ce4adbee..39aa288b5 100644 --- a/terraform/eval_diff.go +++ b/terraform/eval_diff.go @@ -1,7 +1,6 @@ package terraform import ( - "bytes" "fmt" "log" "strings" @@ -66,7 +65,7 @@ func (n *EvalCheckPlannedChange) Eval(ctx EvalContext) (interface{}, error) { "Provider produced inconsistent final plan", fmt.Sprintf( "When expanding the plan for %s to include new values learned so far during apply, provider %q changed the planned action from %s to %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.", - absAddr, n.ProviderAddr.ProviderConfig.Type, + absAddr, n.ProviderAddr.Provider.LegacyString(), plannedChange.Action, actualChange.Action, ), )) @@ -80,7 +79,7 @@ func (n *EvalCheckPlannedChange) Eval(ctx EvalContext) (interface{}, error) { "Provider produced inconsistent final plan", fmt.Sprintf( "When expanding the plan for %s to include new values learned so far during apply, provider %q produced an invalid new value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.", - absAddr, n.ProviderAddr.ProviderConfig.Type, tfdiags.FormatError(err), + absAddr, n.ProviderAddr.Provider.LegacyString(), tfdiags.FormatError(err), ), )) } @@ -121,7 +120,7 @@ func (n *EvalDiff) Eval(ctx EvalContext) (interface{}, error) { if providerSchema == nil { return nil, fmt.Errorf("provider schema is unavailable for %s", n.Addr) } - if n.ProviderAddr.ProviderConfig.Type == "" { + if n.ProviderAddr.Provider.Type == "" { panic(fmt.Sprintf("EvalDiff for %s does not have ProviderAddr set", n.Addr.Absolute(ctx.Path()))) } @@ -231,7 +230,7 @@ func (n *EvalDiff) Eval(ctx EvalContext) (interface{}, error) { "Provider produced invalid plan", fmt.Sprintf( "Provider %q planned an invalid value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.", - n.ProviderAddr.ProviderConfig.Type, tfdiags.FormatErrorPrefixed(err, absAddr.String()), + n.ProviderAddr.Provider.LegacyString(), tfdiags.FormatErrorPrefixed(err, absAddr.String()), ), )) } @@ -247,7 +246,10 @@ func (n *EvalDiff) Eval(ctx EvalContext) (interface{}, error) { // to notice in the logs if an inconsistency beyond the type system // leads to a downstream provider failure. 
var buf strings.Builder - fmt.Fprintf(&buf, "[WARN] Provider %q produced an invalid plan for %s, but we are tolerating it because it is using the legacy plugin SDK.\n The following problems may be the cause of any confusing errors from downstream operations:", n.ProviderAddr.ProviderConfig.Type, absAddr) + fmt.Fprintf(&buf, + "[WARN] Provider %q produced an invalid plan for %s, but we are tolerating it because it is using the legacy plugin SDK.\n The following problems may be the cause of any confusing errors from downstream operations:", + n.ProviderAddr.Provider.LegacyString(), absAddr, + ) for _, err := range errs { fmt.Fprintf(&buf, "\n - %s", tfdiags.FormatError(err)) } @@ -259,7 +261,7 @@ func (n *EvalDiff) Eval(ctx EvalContext) (interface{}, error) { "Provider produced invalid plan", fmt.Sprintf( "Provider %q planned an invalid value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.", - n.ProviderAddr.ProviderConfig.Type, tfdiags.FormatErrorPrefixed(err, absAddr.String()), + n.ProviderAddr.Provider.LegacyString(), tfdiags.FormatErrorPrefixed(err, absAddr.String()), ), )) } @@ -302,7 +304,7 @@ func (n *EvalDiff) Eval(ctx EvalContext) (interface{}, error) { "Provider produced invalid plan", fmt.Sprintf( "Provider %q has indicated \"requires replacement\" on %s for a non-existent attribute path %#v.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.", - n.ProviderAddr.ProviderConfig.Type, absAddr, path, + n.ProviderAddr.Provider.LegacyString(), absAddr, path, ), )) continue @@ -398,7 +400,7 @@ func (n *EvalDiff) Eval(ctx EvalContext) (interface{}, error) { "Provider produced invalid plan", fmt.Sprintf( "Provider %q planned an invalid value for %s%s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.", - n.ProviderAddr.ProviderConfig.Type, absAddr, tfdiags.FormatError(err), + n.ProviderAddr.Provider.LegacyString(), absAddr, tfdiags.FormatError(err), ), )) } @@ -567,49 +569,6 @@ func processIgnoreChangesIndividual(prior, proposed cty.Value, ignoreChanges []h return ret, diags } -// legacyFlagmapKeyForTraversal constructs a key string compatible with what -// the flatmap package would generate for an attribute addressable by the given -// traversal. -// -// This is used only to shim references to attributes within the diff and -// state structures, which have not (at the time of writing) yet been updated -// to use the newer HCL-based representations. -func legacyFlatmapKeyForTraversal(traversal hcl.Traversal) string { - var buf bytes.Buffer - first := true - for _, step := range traversal { - if !first { - buf.WriteByte('.') - } - switch ts := step.(type) { - case hcl.TraverseRoot: - buf.WriteString(ts.Name) - case hcl.TraverseAttr: - buf.WriteString(ts.Name) - case hcl.TraverseIndex: - val := ts.Key - switch val.Type() { - case cty.Number: - bf := val.AsBigFloat() - buf.WriteString(bf.String()) - case cty.String: - s := val.AsString() - buf.WriteString(s) - default: - // should never happen, since no other types appear in - // traversals in practice. - buf.WriteByte('?') - } - default: - // should never happen, since we've covered all of the types - // that show up in parsed traversals in practice. 
- buf.WriteByte('?') - } - first = false - } - return buf.String() -} - // a group of key-*ResourceAttrDiff pairs from the same flatmapped container type flatAttrDiff map[string]*ResourceAttrDiff @@ -630,33 +589,6 @@ func (f flatAttrDiff) keepDiff(ignoreChanges map[string]bool) bool { return false } -// sets, lists and maps need to be compared for diff inclusion as a whole, so -// group the flatmapped keys together for easier comparison. -func groupContainers(d *InstanceDiff) map[string]flatAttrDiff { - isIndex := multiVal.MatchString - containers := map[string]flatAttrDiff{} - attrs := d.CopyAttributes() - // we need to loop once to find the index key - for k := range attrs { - if isIndex(k) { - // add the key, always including the final dot to fully qualify it - containers[k[:len(k)-1]] = flatAttrDiff{} - } - } - - // loop again to find all the sub keys - for prefix, values := range containers { - for k, attrDiff := range attrs { - // we include the index value as well, since it could be part of the diff - if strings.HasPrefix(k, prefix) { - values[k] = attrDiff - } - } - } - - return containers -} - // EvalDiffDestroy is an EvalNode implementation that returns a plain // destroy diff. type EvalDiffDestroy struct { @@ -674,7 +606,7 @@ func (n *EvalDiffDestroy) Eval(ctx EvalContext) (interface{}, error) { absAddr := n.Addr.Absolute(ctx.Path()) state := *n.State - if n.ProviderAddr.ProviderConfig.Type == "" { + if n.ProviderAddr.Provider.Type == "" { if n.DeposedKey == "" { panic(fmt.Sprintf("EvalDiffDestroy for %s does not have ProviderAddr set", absAddr)) } else { diff --git a/terraform/eval_for_each.go b/terraform/eval_for_each.go index efe0dd919..599995728 100644 --- a/terraform/eval_for_each.go +++ b/terraform/eval_for_each.go @@ -89,6 +89,22 @@ func evaluateResourceForEachExpressionKnown(expr hcl.Expression, ctx EvalContext if !forEachVal.IsWhollyKnown() { return map[string]cty.Value{}, false, diags } + + // A set of strings may contain null, which makes it impossible to + // convert to a map, so we must return an error + it := forEachVal.ElementIterator() + for it.Next() { + item, _ := it.Element() + if item.IsNull() { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Invalid for_each set argument", + Detail: fmt.Sprintf(`The given "for_each" argument value is unsuitable: "for_each" sets must not contain null values.`), + Subject: expr.Range().Ptr(), + }) + return nil, true, diags + } + } } return forEachVal.AsValueMap(), true, nil diff --git a/terraform/eval_for_each_test.go b/terraform/eval_for_each_test.go new file mode 100644 index 000000000..5f0cabcf0 --- /dev/null +++ b/terraform/eval_for_each_test.go @@ -0,0 +1,181 @@ +package terraform + +import ( + "reflect" + "strings" + "testing" + + "github.com/davecgh/go-spew/spew" + "github.com/hashicorp/hcl/v2" + "github.com/hashicorp/hcl/v2/hcltest" + "github.com/hashicorp/terraform/tfdiags" + "github.com/zclconf/go-cty/cty" +) + +func TestEvaluateResourceForEachExpression_valid(t *testing.T) { + tests := map[string]struct { + Expr hcl.Expression + ForEachMap map[string]cty.Value + }{ + "empty set": { + hcltest.MockExprLiteral(cty.SetValEmpty(cty.String)), + map[string]cty.Value{}, + }, + "multi-value string set": { + hcltest.MockExprLiteral(cty.SetVal([]cty.Value{cty.StringVal("a"), cty.StringVal("b")})), + map[string]cty.Value{ + "a": cty.StringVal("a"), + "b": cty.StringVal("b"), + }, + }, + "empty map": { + hcltest.MockExprLiteral(cty.MapValEmpty(cty.Bool)), + map[string]cty.Value{}, + }, + "map": { + 
hcltest.MockExprLiteral(cty.MapVal(map[string]cty.Value{ + "a": cty.BoolVal(true), + "b": cty.BoolVal(false), + })), + map[string]cty.Value{ + "a": cty.BoolVal(true), + "b": cty.BoolVal(false), + }, + }, + "map containing unknown values": { + hcltest.MockExprLiteral(cty.MapVal(map[string]cty.Value{ + "a": cty.UnknownVal(cty.Bool), + "b": cty.UnknownVal(cty.Bool), + })), + map[string]cty.Value{ + "a": cty.UnknownVal(cty.Bool), + "b": cty.UnknownVal(cty.Bool), + }, + }, + } + + for name, test := range tests { + t.Run(name, func(t *testing.T) { + ctx := &MockEvalContext{} + ctx.installSimpleEval() + forEachMap, diags := evaluateResourceForEachExpression(test.Expr, ctx) + + if len(diags) != 0 { + t.Errorf("unexpected diagnostics %s", spew.Sdump(diags)) + } + + if !reflect.DeepEqual(forEachMap, test.ForEachMap) { + t.Errorf( + "wrong map value\ngot: %swant: %s", + spew.Sdump(forEachMap), spew.Sdump(test.ForEachMap), + ) + } + + }) + } +} + +func TestEvaluateResourceForEachExpression_errors(t *testing.T) { + tests := map[string]struct { + Expr hcl.Expression + Summary, DetailSubstring string + }{ + "null set": { + hcltest.MockExprLiteral(cty.NullVal(cty.Set(cty.String))), + "Invalid for_each argument", + `the given "for_each" argument value is null`, + }, + "string": { + hcltest.MockExprLiteral(cty.StringVal("i am definitely a set")), + "Invalid for_each argument", + "must be a map, or set of strings, and you have provided a value of type string", + }, + "list": { + hcltest.MockExprLiteral(cty.ListVal([]cty.Value{cty.StringVal("a"), cty.StringVal("a")})), + "Invalid for_each argument", + "must be a map, or set of strings, and you have provided a value of type list", + }, + "tuple": { + hcltest.MockExprLiteral(cty.TupleVal([]cty.Value{cty.StringVal("a"), cty.StringVal("b")})), + "Invalid for_each argument", + "must be a map, or set of strings, and you have provided a value of type tuple", + }, + "unknown string set": { + hcltest.MockExprLiteral(cty.UnknownVal(cty.Set(cty.String))), + "Invalid for_each argument", + "depends on resource attributes that cannot be determined until apply", + }, + "unknown map": { + hcltest.MockExprLiteral(cty.UnknownVal(cty.Map(cty.Bool))), + "Invalid for_each argument", + "depends on resource attributes that cannot be determined until apply", + }, + "set containing booleans": { + hcltest.MockExprLiteral(cty.SetVal([]cty.Value{cty.BoolVal(true)})), + "Invalid for_each set argument", + "supports maps and sets of strings, but you have provided a set containing type bool", + }, + "set containing null": { + hcltest.MockExprLiteral(cty.SetVal([]cty.Value{cty.NullVal(cty.String)})), + "Invalid for_each set argument", + "must not contain null values", + }, + "set containing unknown value": { + hcltest.MockExprLiteral(cty.SetVal([]cty.Value{cty.UnknownVal(cty.String)})), + "Invalid for_each argument", + "depends on resource attributes that cannot be determined until apply", + }, + } + + for name, test := range tests { + t.Run(name, func(t *testing.T) { + ctx := &MockEvalContext{} + ctx.installSimpleEval() + _, diags := evaluateResourceForEachExpression(test.Expr, ctx) + + if len(diags) != 1 { + t.Fatalf("got %d diagnostics; want 1", len(diags)) + } + if got, want := diags[0].Severity(), tfdiags.Error; got != want { + t.Errorf("wrong diagnostic severity %#v; want %#v", got, want) + } + if got, want := diags[0].Description().Summary, test.Summary; got != want { + t.Errorf("wrong diagnostic summary %#v; want %#v", got, want) + } + if got, want := diags[0].Description().Detail,
test.DetailSubstring; !strings.Contains(got, want) { + t.Errorf("wrong diagnostic detail %#v; want %#v", got, want) + } + }) + } +} + +func TestEvaluateResourceForEachExpressionKnown(t *testing.T) { + tests := map[string]hcl.Expression{ + "unknown string set": hcltest.MockExprLiteral(cty.UnknownVal(cty.Set(cty.String))), + "unknown map": hcltest.MockExprLiteral(cty.UnknownVal(cty.Map(cty.Bool))), + } + + for name, expr := range tests { + t.Run(name, func(t *testing.T) { + ctx := &MockEvalContext{} + ctx.installSimpleEval() + forEachMap, known, diags := evaluateResourceForEachExpressionKnown(expr, ctx) + + if len(diags) != 0 { + t.Errorf("unexpected diagnostics %s", spew.Sdump(diags)) + } + + if known { + t.Errorf("got %v known, want false", known) + } + + if len(forEachMap) != 0 { + t.Errorf( + "expected empty map\ngot: %s", + spew.Sdump(forEachMap), + ) + } + + }) + } +} diff --git a/terraform/eval_provider.go b/terraform/eval_provider.go index 1b12b3cc8..3b802d4ed 100644 --- a/terraform/eval_provider.go +++ b/terraform/eval_provider.go @@ -12,7 +12,7 @@ import ( "github.com/hashicorp/terraform/tfdiags" ) -func buildProviderConfig(ctx EvalContext, addr addrs.ProviderConfig, config *configs.Provider) hcl.Body { +func buildProviderConfig(ctx EvalContext, addr addrs.AbsProviderConfig, config *configs.Provider) hcl.Body { var configBody hcl.Body if config != nil { configBody = config.Config @@ -49,7 +49,7 @@ func buildProviderConfig(ctx EvalContext, addr addrs.ProviderConfig, config *con // EvalConfigProvider is an EvalNode implementation that configures // a provider that is already initialized and retrieved. type EvalConfigProvider struct { - Addr addrs.ProviderConfig + Addr addrs.AbsProviderConfig Provider *providers.Interface Config *configs.Provider } @@ -88,18 +88,17 @@ func (n *EvalConfigProvider) Eval(ctx EvalContext) (interface{}, error) { // and returns nothing. The provider can be retrieved again with the // EvalGetProvider node. type EvalInitProvider struct { - TypeName string - Addr addrs.ProviderConfig + Addr addrs.AbsProviderConfig } func (n *EvalInitProvider) Eval(ctx EvalContext) (interface{}, error) { - return ctx.InitProvider(n.TypeName, n.Addr) + return ctx.InitProvider(n.Addr) } // EvalCloseProvider is an EvalNode implementation that closes provider // connections that aren't needed anymore. 
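// [Editor's note: illustrative aside, not part of this change.] The new null
// check in evaluateResourceForEachExpressionKnown, exercised by the "set
// containing null" case above, exists because a for_each set is converted to
// a map keyed by its element values, and a null element has no string to act
// as a key. Under the cty package used throughout this diff:
//
//	s := cty.SetVal([]cty.Value{
//		cty.StringVal("a"),
//		cty.NullVal(cty.String), // now rejected with an explicit diagnostic
//	})
//
// Converting s to map[string]cty.Value would have to call AsString on the
// null element, which would panic, so the value is rejected up front instead.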
type EvalCloseProvider struct { - Addr addrs.ProviderConfig + Addr addrs.AbsProviderConfig } func (n *EvalCloseProvider) Eval(ctx EvalContext) (interface{}, error) { @@ -125,7 +124,7 @@ type EvalGetProvider struct { } func (n *EvalGetProvider) Eval(ctx EvalContext) (interface{}, error) { - if n.Addr.ProviderConfig.Type == "" { + if n.Addr.Provider.Type == "" { // Should never happen panic("EvalGetProvider used with uninitialized provider configuration address") } diff --git a/terraform/eval_provider_test.go b/terraform/eval_provider_test.go index f71688d37..23b94c537 100644 --- a/terraform/eval_provider_test.go +++ b/terraform/eval_provider_test.go @@ -16,8 +16,9 @@ func TestBuildProviderConfig(t *testing.T) { configBody := configs.SynthBody("", map[string]cty.Value{ "set_in_config": cty.StringVal("config"), }) - providerAddr := addrs.ProviderConfig{ - Type: "foo", + providerAddr := addrs.AbsProviderConfig{ + Module: addrs.RootModuleInstance, + Provider: addrs.NewLegacyProvider("foo"), } ctx := &MockEvalContext{ @@ -67,8 +68,12 @@ func TestEvalConfigProvider(t *testing.T) { } provider := mockProviderWithConfigSchema(simpleTestSchema()) rp := providers.Interface(provider) + providerAddr := addrs.AbsProviderConfig{ + Module: addrs.RootModuleInstance, + Provider: addrs.NewLegacyProvider("foo"), + } n := &EvalConfigProvider{ - Addr: addrs.ProviderConfig{Type: "foo"}, + Addr: providerAddr, Config: config, Provider: &rp, } @@ -97,8 +102,12 @@ func TestEvalInitProvider_impl(t *testing.T) { } func TestEvalInitProvider(t *testing.T) { + providerAddr := addrs.AbsProviderConfig{ + Module: addrs.RootModuleInstance, + Provider: addrs.NewLegacyProvider("foo"), + } n := &EvalInitProvider{ - Addr: addrs.ProviderConfig{Type: "foo"}, + Addr: providerAddr, } provider := &MockProvider{} ctx := &MockEvalContext{InitProviderProvider: provider} @@ -109,14 +118,18 @@ func TestEvalInitProvider(t *testing.T) { if !ctx.InitProviderCalled { t.Fatal("should be called") } - if ctx.InitProviderAddr.String() != "provider.foo" { + if ctx.InitProviderAddr.String() != `provider["registry.terraform.io/-/foo"]` { t.Fatalf("wrong provider address %s", ctx.InitProviderAddr) } } func TestEvalCloseProvider(t *testing.T) { + providerAddr := addrs.AbsProviderConfig{ + Module: addrs.RootModuleInstance, + Provider: addrs.NewLegacyProvider("foo"), + } n := &EvalCloseProvider{ - Addr: addrs.ProviderConfig{Type: "foo"}, + Addr: providerAddr, } provider := &MockProvider{} ctx := &MockEvalContext{CloseProviderProvider: provider} @@ -127,7 +140,7 @@ func TestEvalCloseProvider(t *testing.T) { if !ctx.CloseProviderCalled { t.Fatal("should be called") } - if ctx.CloseProviderAddr.String() != "provider.foo" { + if ctx.CloseProviderAddr.String() != `provider["registry.terraform.io/-/foo"]` { t.Fatalf("wrong provider address %s", ctx.CloseProviderAddr) } } @@ -139,7 +152,7 @@ func TestEvalGetProvider_impl(t *testing.T) { func TestEvalGetProvider(t *testing.T) { var actual providers.Interface n := &EvalGetProvider{ - Addr: addrs.RootModuleInstance.ProviderConfigDefault("foo"), + Addr: addrs.RootModuleInstance.ProviderConfigDefault(addrs.NewLegacyProvider("foo")), Output: &actual, } provider := &MockProvider{} @@ -154,7 +167,7 @@ func TestEvalGetProvider(t *testing.T) { if !ctx.ProviderCalled { t.Fatal("should be called") } - if ctx.ProviderAddr.String() != "provider.foo" { + if ctx.ProviderAddr.String() != `provider["registry.terraform.io/-/foo"]` { t.Fatalf("wrong provider address %s", ctx.ProviderAddr) } } diff --git 
a/terraform/eval_read_data.go b/terraform/eval_read_data.go index 4999480f5..f869b44b9 100644 --- a/terraform/eval_read_data.go +++ b/terraform/eval_read_data.go @@ -21,7 +21,6 @@ import ( type EvalReadData struct { Addr addrs.ResourceInstance Config *configs.Resource - Dependencies []addrs.Referenceable Provider *providers.Interface ProviderAddr addrs.AbsProviderConfig ProviderSchema **ProviderSchema @@ -86,7 +85,7 @@ func (n *EvalReadData) Eval(ctx EvalContext) (interface{}, error) { schema, _ := providerSchema.SchemaForResourceAddr(n.Addr.ContainingResource()) if schema == nil { // Should be caught during validation, so we don't bother with a pretty error here - return nil, fmt.Errorf("provider %q does not support data source %q", n.ProviderAddr.ProviderConfig.Type, n.Addr.Resource.Type) + return nil, fmt.Errorf("provider %q does not support data source %q", n.ProviderAddr.Provider.LegacyString(), n.Addr.Resource.Type) } // We'll always start by evaluating the configuration. What we do after @@ -161,9 +160,8 @@ func (n *EvalReadData) Eval(ctx EvalContext) (interface{}, error) { } if n.OutputState != nil { state := &states.ResourceInstanceObject{ - Value: change.After, - Status: states.ObjectPlanned, // because the partial value in the plan must be used for now - Dependencies: n.Dependencies, + Value: change.After, + Status: states.ObjectPlanned, // because the partial value in the plan must be used for now } *n.OutputState = state } @@ -225,7 +223,7 @@ func (n *EvalReadData) Eval(ctx EvalContext) (interface{}, error) { "Provider produced invalid object", fmt.Sprintf( "Provider %q produced an invalid value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.", - n.ProviderAddr.ProviderConfig.Type, tfdiags.FormatErrorPrefixed(err, absAddr.String()), + n.ProviderAddr.Provider.LegacyString(), tfdiags.FormatErrorPrefixed(err, absAddr.String()), ), )) } @@ -239,7 +237,7 @@ func (n *EvalReadData) Eval(ctx EvalContext) (interface{}, error) { "Provider produced null object", fmt.Sprintf( "Provider %q produced a null value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.", - n.ProviderAddr.ProviderConfig.Type, absAddr, + n.ProviderAddr.Provider.LegacyString(), absAddr, ), )) } @@ -249,7 +247,7 @@ func (n *EvalReadData) Eval(ctx EvalContext) (interface{}, error) { "Provider produced invalid object", fmt.Sprintf( "Provider %q produced a value for %s that is not wholly known.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.", - n.ProviderAddr.ProviderConfig.Type, absAddr, + n.ProviderAddr.Provider.LegacyString(), absAddr, ), )) @@ -275,9 +273,8 @@ func (n *EvalReadData) Eval(ctx EvalContext) (interface{}, error) { }, } state := &states.ResourceInstanceObject{ - Value: change.After, - Status: states.ObjectReady, // because we completed the read from the provider - Dependencies: n.Dependencies, + Value: change.After, + Status: states.ObjectReady, // because we completed the read from the provider } err = ctx.Hook(func(h Hook) (HookAction, error) { @@ -306,14 +303,13 @@ func (n *EvalReadData) Eval(ctx EvalContext) (interface{}, error) { // EvalReadDataApply is an EvalNode implementation that executes a data // resource's ReadDataApply method to read data from the data source. 
type EvalReadDataApply struct { - Addr addrs.ResourceInstance - Provider *providers.Interface - ProviderAddr addrs.AbsProviderConfig - ProviderSchema **ProviderSchema - Output **states.ResourceInstanceObject - Config *configs.Resource - Change **plans.ResourceInstanceChange - StateReferences []addrs.Referenceable + Addr addrs.ResourceInstance + Provider *providers.Interface + ProviderAddr addrs.AbsProviderConfig + ProviderSchema **ProviderSchema + Output **states.ResourceInstanceObject + Config *configs.Resource + Change **plans.ResourceInstanceChange } func (n *EvalReadDataApply) Eval(ctx EvalContext) (interface{}, error) { @@ -368,7 +364,7 @@ func (n *EvalReadDataApply) Eval(ctx EvalContext) (interface{}, error) { "Provider produced invalid object", fmt.Sprintf( "Provider %q planned an invalid value for %s. The result could not be saved.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.", - n.ProviderAddr.ProviderConfig.Type, tfdiags.FormatErrorPrefixed(err, absAddr.String()), + n.ProviderAddr.Provider.LegacyString(), tfdiags.FormatErrorPrefixed(err, absAddr.String()), ), )) } @@ -385,9 +381,8 @@ func (n *EvalReadDataApply) Eval(ctx EvalContext) (interface{}, error) { if n.Output != nil { *n.Output = &states.ResourceInstanceObject{ - Value: newVal, - Status: states.ObjectReady, - Dependencies: n.StateReferences, + Value: newVal, + Status: states.ObjectReady, } } diff --git a/terraform/eval_refresh.go b/terraform/eval_refresh.go index 4dfb5b4e9..d3bfffaf8 100644 --- a/terraform/eval_refresh.go +++ b/terraform/eval_refresh.go @@ -78,7 +78,7 @@ func (n *EvalRefresh) Eval(ctx EvalContext) (interface{}, error) { "Provider produced invalid object", fmt.Sprintf( "Provider %q planned an invalid value for %s during refresh: %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.", - n.ProviderAddr.ProviderConfig.Type, absAddr, tfdiags.FormatError(err), + n.ProviderAddr.Provider.LegacyString(), absAddr, tfdiags.FormatError(err), ), )) } @@ -89,6 +89,7 @@ func (n *EvalRefresh) Eval(ctx EvalContext) (interface{}, error) { newState := state.DeepCopy() newState.Value = resp.NewState newState.Private = resp.Private + newState.Dependencies = state.Dependencies // Call post-refresh hook err = ctx.Hook(func(h Hook) (HookAction, error) { diff --git a/terraform/eval_state.go b/terraform/eval_state.go index b611113e3..ab55835e2 100644 --- a/terraform/eval_state.go +++ b/terraform/eval_state.go @@ -3,9 +3,11 @@ package terraform import ( "fmt" "log" + "sort" "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/configs" + "github.com/hashicorp/terraform/plans" "github.com/hashicorp/terraform/providers" "github.com/hashicorp/terraform/states" "github.com/hashicorp/terraform/tfdiags" @@ -200,6 +202,10 @@ type EvalWriteState struct { // ProviderAddr is the address of the provider configuration that // produced the given object. ProviderAddr addrs.AbsProviderConfig + + // Dependencies are the inter-resource dependencies to be stored in the + // state. 
+ Dependencies *[]addrs.AbsResource } func (n *EvalWriteState) Eval(ctx EvalContext) (interface{}, error) { @@ -212,10 +218,9 @@ func (n *EvalWriteState) Eval(ctx EvalContext) (interface{}, error) { absAddr := n.Addr.Absolute(ctx.Path()) state := ctx.State() - if n.ProviderAddr.ProviderConfig.Type == "" { - return nil, fmt.Errorf("failed to write state for %s, missing provider type", absAddr) + if n.ProviderAddr.Provider.Type == "" { + return nil, fmt.Errorf("failed to write state for %s: missing provider type", absAddr) } - obj := *n.State if obj == nil || obj.Value.IsNull() { // No need to encode anything: we'll just write it directly. log.Printf("[TRACE] EvalWriteState: removing state object for %s", absAddr) return nil, nil } + + // store the new deps in the state + if n.Dependencies != nil { + log.Printf("[TRACE] EvalWriteState: recording %d dependencies for %s", len(*n.Dependencies), absAddr) + obj.Dependencies = *n.Dependencies + } + if n.ProviderSchema == nil || *n.ProviderSchema == nil { // Should never happen, unless our state object is nil panic("EvalWriteState used with pointer to nil ProviderSchema object") @@ -377,6 +389,12 @@ func (n *EvalDeposeState) Eval(ctx EvalContext) (interface{}, error) { type EvalMaybeRestoreDeposedObject struct { Addr addrs.ResourceInstance + // PlannedChange might be the action we're performing that includes + // the possibility of restoring a deposed object. However, it might also + // be nil. It's here only for use in error messages and must not be + // used for business logic. + PlannedChange **plans.ResourceInstanceChange + // Key is a pointer to the deposed object key that should be forgotten + // from the state, which must be non-nil. Key *states.DeposedKey @@ -388,6 +406,33 @@ func (n *EvalMaybeRestoreDeposedObject) Eval(ctx EvalContext) (interface{}, erro dk := *n.Key state := ctx.State() + if dk == states.NotDeposed { + // This should never happen, and so it always indicates a bug. + // We should evaluate this node only if we've previously deposed + // an object as part of the same operation. + var diags tfdiags.Diagnostics + if n.PlannedChange != nil && *n.PlannedChange != nil { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Attempt to restore non-existent deposed object", + fmt.Sprintf( + "Terraform has encountered a bug where it would need to restore a deposed object for %s without knowing a deposed object key for that object. This occurred during a %s action. This is a bug in Terraform; please report it!", + absAddr, (*n.PlannedChange).Action, + ), + )) + } else { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Attempt to restore non-existent deposed object", + fmt.Sprintf( + "Terraform has encountered a bug where it would need to restore a deposed object for %s without knowing a deposed object key for that object. This is a bug in Terraform; please report it!", + absAddr, + ), + )) + } + return nil, diags.Err() + } + restored := state.MaybeRestoreResourceInstanceDeposed(absAddr, dk) if restored { log.Printf("[TRACE] EvalMaybeRestoreDeposedObject: %s deposed object %s was restored as the current object", absAddr, dk) @@ -443,6 +488,22 @@ func (n *EvalWriteResourceState) Eval(ctx EvalContext) (interface{}, error) { // while ensuring that any existing instances are preserved, etc.
state.SetResourceMeta(absAddr, eachMode, n.ProviderAddr) + // We'll record our expansion decision in the shared "expander" object + // so that later operations (i.e. DynamicExpand and expression evaluation) + // can refer to it. Since this node represents the abstract module, we need + // to expand the module here to create all resources. + expander := ctx.InstanceExpander() + for _, module := range expander.ExpandModule(ctx.Path().Module()) { + switch eachMode { + case states.EachList: + expander.SetResourceCount(module, n.Addr, count) + case states.EachMap: + expander.SetResourceForEach(module, n.Addr, forEach) + default: + expander.SetResourceSingle(module, n.Addr) + } + } + return nil, nil } @@ -473,3 +534,49 @@ func (n *EvalForgetResourceState) Eval(ctx EvalContext) (interface{}, error) { return nil, nil } + +// EvalRefreshDependencies is an EvalNode implementation that appends any newly +// found dependencies to those saved in the state. The existing dependencies +// are retained, as they may be missing from the config, and will be required +// for the updates and destroys during the next apply. +type EvalRefreshDependencies struct { + // Prior State + State **states.ResourceInstanceObject + // Dependencies to write to the new state + Dependencies *[]addrs.AbsResource +} + +func (n *EvalRefreshDependencies) Eval(ctx EvalContext) (interface{}, error) { + state := *n.State + if state == nil { + // no existing state to append + return nil, nil + } + + depMap := make(map[string]addrs.AbsResource) + for _, d := range *n.Dependencies { + depMap[d.String()] = d + } + + // We already have dependencies in state, so we need to trust those for + // refresh. We can't write out new dependencies until apply time in case + // the configuration has been changed in a manner that conflicts with the + // stored dependencies.
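	// [Editor's note: worked example, not part of this change.] If the prior
	// state already records dependencies, say ["aws_instance.a"], refresh
	// returns that stored list unchanged even when the configuration now also
	// implies "aws_instance.b"; the merged, de-duplicated, sorted list is only
	// written out at apply time, once the new configuration is known to be the
	// one actually being applied.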
+ if len(state.Dependencies) > 0 { + *n.Dependencies = state.Dependencies + return nil, nil + } + + deps := make([]addrs.AbsResource, 0, len(depMap)) + for _, d := range depMap { + deps = append(deps, d) + } + + sort.Slice(deps, func(i, j int) bool { + return deps[i].String() < deps[j].String() + }) + + *n.Dependencies = deps + + return nil, nil +} diff --git a/terraform/eval_state_test.go b/terraform/eval_state_test.go index 56097528f..45dc4b13c 100644 --- a/terraform/eval_state_test.go +++ b/terraform/eval_state_test.go @@ -214,7 +214,7 @@ func TestEvalWriteState(t *testing.T) { State: &obj, ProviderSchema: &providerSchema, - ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault("aws"), + ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault(addrs.NewLegacyProvider("aws")), } _, err := node.Eval(ctx) if err != nil { @@ -224,7 +224,7 @@ func TestEvalWriteState(t *testing.T) { checkStateString(t, state, ` aws_instance.foo: ID = i-abc123 - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] `) } @@ -261,7 +261,7 @@ func TestEvalWriteStateDeposed(t *testing.T) { State: &obj, ProviderSchema: &providerSchema, - ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault("aws"), + ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault(addrs.NewLegacyProvider("aws")), } _, err := node.Eval(ctx) if err != nil { @@ -271,7 +271,7 @@ func TestEvalWriteStateDeposed(t *testing.T) { checkStateString(t, state, ` aws_instance.foo: (1 deposed) ID = - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] Deposed ID 1 = i-abc123 `) } diff --git a/terraform/eval_state_upgrade.go b/terraform/eval_state_upgrade.go index e1940005e..c468f1ec4 100644 --- a/terraform/eval_state_upgrade.go +++ b/terraform/eval_state_upgrade.go @@ -26,7 +26,8 @@ func UpgradeResourceState(addr addrs.AbsResourceInstance, provider providers.Int stateIsFlatmap := len(src.AttrsJSON) == 0 - providerType := addr.Resource.Resource.DefaultProviderConfig().Type + // TODO: This should eventually use a proper FQN. + providerType := addr.Resource.Resource.DefaultProvider().LegacyString() if src.SchemaVersion > currentVersion { log.Printf("[TRACE] UpgradeResourceState: can't downgrade state for %s from version %d to %d", addr, src.SchemaVersion, currentVersion) var diags tfdiags.Diagnostics diff --git a/terraform/eval_validate.go b/terraform/eval_validate.go index 5b2146a58..d42b49b88 100644 --- a/terraform/eval_validate.go +++ b/terraform/eval_validate.go @@ -67,7 +67,7 @@ RETURN: // EvalValidateProvider is an EvalNode implementation that validates // a provider configuration. type EvalValidateProvider struct { - Addr addrs.ProviderConfig + Addr addrs.AbsProviderConfig Provider *providers.Interface Config *configs.Provider } diff --git a/terraform/eval_variable.go b/terraform/eval_variable.go index 7f6651c4c..e8a88a14d 100644 --- a/terraform/eval_variable.go +++ b/terraform/eval_variable.go @@ -8,6 +8,7 @@ import ( "github.com/hashicorp/hcl/v2" "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/configs" + "github.com/hashicorp/terraform/tfdiags" "github.com/zclconf/go-cty/cty" "github.com/zclconf/go-cty/cty/convert" ) @@ -96,6 +97,117 @@ func (n *EvalModuleCallArgument) Eval(ctx EvalContext) (interface{}, error) { return nil, diags.ErrWithWarnings() } +// evalVariableValidations is an EvalNode implementation that ensures that +// all of the configured custom validations for a variable are passing. 
+// +// This must be used only after any side-effects that make the value of the +// variable available for use in expression evaluation, such as +// EvalModuleCallArgument for variables in descendent modules. +type evalVariableValidations struct { + Addr addrs.AbsInputVariableInstance + Config *configs.Variable + + // Expr is the expression that provided the value for the variable, if any. + // This will be nil for root module variables, because their values come + // from outside the configuration. + Expr hcl.Expression + + // If this flag is set, this node becomes a no-op. + // This is here for consistency with EvalModuleCallArgument so that it + // can be populated with the same value, where needed. + IgnoreDiagnostics bool +} + +func (n *evalVariableValidations) Eval(ctx EvalContext) (interface{}, error) { + if n.Config == nil || n.IgnoreDiagnostics || len(n.Config.Validations) == 0 { + log.Printf("[TRACE] evalVariableValidations: not active for %s, so skipping", n.Addr) + return nil, nil + } + + var diags tfdiags.Diagnostics + + // Variable nodes evaluate in the parent module to where they were declared + // because the value expression (n.Expr, if set) comes from the calling + // "module" block in the parent module. + // + // Validation expressions are statically validated (during configuration + // loading) to refer only to the variable being validated, so we can + // bypass our usual evaluation machinery here and just produce a minimal + // evaluation context containing just the required value, and thus avoid + // the problem that ctx's evaluation functions refer to the wrong module. + val := ctx.GetVariableValue(n.Addr) + hclCtx := &hcl.EvalContext{ + Variables: map[string]cty.Value{ + "var": cty.ObjectVal(map[string]cty.Value{ + n.Config.Name: val, + }), + }, + Functions: ctx.EvaluationScope(nil, EvalDataForNoInstanceKey).Functions(), + } + + for _, validation := range n.Config.Validations { + const errInvalidCondition = "Invalid variable validation result" + const errInvalidValue = "Invalid value for variable" + + result, moreDiags := validation.Condition.Value(hclCtx) + diags = diags.Append(moreDiags) + if moreDiags.HasErrors() { + log.Printf("[TRACE] evalVariableValidations: %s rule %s condition expression failed: %s", n.Addr, validation.DeclRange, diags.Err().Error()) + } + if !result.IsKnown() { + log.Printf("[TRACE] evalVariableValidations: %s rule %s condition value is unknown, so skipping validation for now", n.Addr, validation.DeclRange) + continue // We'll wait until we've learned more, then. 
+ } + if result.IsNull() { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: errInvalidCondition, + Detail: "Validation condition expression must return either true or false, not null.", + Subject: validation.Condition.Range().Ptr(), + Expression: validation.Condition, + EvalContext: hclCtx, + }) + continue + } + var err error + result, err = convert.Convert(result, cty.Bool) + if err != nil { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: errInvalidCondition, + Detail: fmt.Sprintf("Invalid validation condition result value: %s.", tfdiags.FormatError(err)), + Subject: validation.Condition.Range().Ptr(), + Expression: validation.Condition, + EvalContext: hclCtx, + }) + continue + } + + if result.False() { + if n.Expr != nil { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: errInvalidValue, + Detail: fmt.Sprintf("%s\n\nThis was checked by the validation rule at %s.", validation.ErrorMessage, validation.DeclRange.String()), + Subject: n.Expr.Range().Ptr(), + }) + } else { + // Since we don't have a source expression for a root module + // variable, we'll just report the error from the perspective + // of the variable declaration itself. + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: errInvalidValue, + Detail: fmt.Sprintf("%s\n\nThis was checked by the validation rule at %s.", validation.ErrorMessage, validation.DeclRange.String()), + Subject: n.Config.DeclRange.Ptr(), + }) + } + } + } + + return nil, diags.ErrWithWarnings() +} + // hclTypeName returns the name of the type that would represent this value in // a config file, or falls back to the Go type name if there's no corresponding // HCL type. This is used for formatted output, not for comparing types. diff --git a/terraform/evaltree_provider.go b/terraform/evaltree_provider.go index 6b4df67aa..d4aa94d3d 100644 --- a/terraform/evaltree_provider.go +++ b/terraform/evaltree_provider.go @@ -12,12 +12,10 @@ func ProviderEvalTree(n *NodeApplyableProvider, config *configs.Provider) EvalNo var provider providers.Interface addr := n.Addr - relAddr := addr.ProviderConfig seq := make([]EvalNode, 0, 5) seq = append(seq, &EvalInitProvider{ - TypeName: relAddr.Type, - Addr: addr.ProviderConfig, + Addr: addr, }) // Input stuff @@ -42,7 +40,7 @@ func ProviderEvalTree(n *NodeApplyableProvider, config *configs.Provider) EvalNo Output: &provider, }, &EvalValidateProvider{ - Addr: relAddr, + Addr: addr, Provider: &provider, Config: config, }, @@ -70,7 +68,7 @@ func ProviderEvalTree(n *NodeApplyableProvider, config *configs.Provider) EvalNo Node: &EvalSequence{ Nodes: []EvalNode{ &EvalConfigProvider{ - Addr: relAddr, + Addr: addr, Provider: &provider, Config: config, }, @@ -84,5 +82,5 @@ func ProviderEvalTree(n *NodeApplyableProvider, config *configs.Provider) EvalNo // CloseProviderEvalTree returns the evaluation tree for closing // provider connections that aren't needed anymore. 
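// (Like the other eval nodes in this change, the close node now receives
// the full AbsProviderConfig rather than a module-relative address.)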
func CloseProviderEvalTree(addr addrs.AbsProviderConfig) EvalNode {
-	return &EvalCloseProvider{Addr: addr.ProviderConfig}
+	return &EvalCloseProvider{Addr: addr}
 }
diff --git a/terraform/evaluate.go b/terraform/evaluate.go
index 6681f8ddc..96a981070 100644
--- a/terraform/evaluate.go
+++ b/terraform/evaluate.go
@@ -15,6 +15,7 @@ import (
 	"github.com/hashicorp/terraform/addrs"
 	"github.com/hashicorp/terraform/configs"
 	"github.com/hashicorp/terraform/configs/configschema"
+	"github.com/hashicorp/terraform/instances"
 	"github.com/hashicorp/terraform/lang"
 	"github.com/hashicorp/terraform/plans"
 	"github.com/hashicorp/terraform/states"
@@ -97,47 +98,32 @@ type evaluationStateData struct {
 	Operation walkOperation
 }
 
-// InstanceKeyEvalData is used during evaluation to specify which values,
-// if any, should be produced for count.index, each.key, and each.value.
-type InstanceKeyEvalData struct {
-	// CountIndex is the value for count.index, or cty.NilVal if evaluating
-	// in a context where the "count" argument is not active.
-	//
-	// For correct operation, this should always be of type cty.Number if not
-	// nil.
-	CountIndex cty.Value
-
-	// EachKey and EachValue are the values for each.key and each.value
-	// respectively, or cty.NilVal if evaluating in a context where the
-	// "for_each" argument is not active. These must either both be set
-	// or neither set.
-	//
-	// For correct operation, EachKey must always be either of type cty.String
-	// or cty.Number if not nil.
-	EachKey, EachValue cty.Value
-}
+// InstanceKeyEvalData is the old name for instances.RepetitionData, aliased
+// here for compatibility. In new code, use instances.RepetitionData instead.
+type InstanceKeyEvalData = instances.RepetitionData
 
 // EvalDataForInstanceKey constructs a suitable InstanceKeyEvalData for
 // evaluating in a context that has the given instance key.
+//
+// The forEachMap argument can be nil when preparing for evaluation
+// in a context where each.value is prohibited, such as a destroy-time
+// provisioner. In that case, the returned EachValue will always be
+// cty.NilVal.
 func EvalDataForInstanceKey(key addrs.InstanceKey, forEachMap map[string]cty.Value) InstanceKeyEvalData {
-	var countIdx cty.Value
-	var eachKey cty.Value
-	var eachVal cty.Value
-
-	if intKey, ok := key.(addrs.IntKey); ok {
-		countIdx = cty.NumberIntVal(int64(intKey))
+	var evalData InstanceKeyEvalData
+	if key == nil {
+		return evalData
 	}
-	if stringKey, ok := key.(addrs.StringKey); ok {
-		eachKey = cty.StringVal(string(stringKey))
-		eachVal = forEachMap[string(stringKey)]
-	}
-
-	return InstanceKeyEvalData{
-		CountIndex: countIdx,
-		EachKey:    eachKey,
-		EachValue:  eachVal,
+	keyValue := key.Value()
+	switch keyValue.Type() {
+	case cty.String:
+		evalData.EachKey = keyValue
+		evalData.EachValue = forEachMap[keyValue.AsString()]
+	case cty.Number:
+		evalData.CountIndex = keyValue
 	}
+	return evalData
 }
 
 // EvalDataForNoInstanceKey is a value of InstanceKeyEvalData that sets no instance
@@ -185,6 +171,16 @@ func (d *evaluationStateData) GetForEachAttr(addr addrs.ForEachAttr, rng tfdiags
 		returnVal = d.InstanceKeyData.EachKey
 	case "value":
 		returnVal = d.InstanceKeyData.EachValue
+
+		if returnVal == cty.NilVal {
+			diags = diags.Append(&hcl.Diagnostic{
+				Severity: hcl.DiagError,
+				Summary:  `each.value cannot be used in this context`,
+				Detail:   fmt.Sprintf(`A reference to "each.value" has been used in a context in which it is unavailable, such as when the configuration no longer contains the value in its "for_each" expression.
Remove this reference to each.value in your configuration to work around this error.`), + Subject: rng.ToHCL().Ptr(), + }) + return cty.UnknownVal(cty.DynamicPseudoType), diags + } default: diags = diags.Append(&hcl.Diagnostic{ Severity: hcl.DiagError, @@ -638,60 +634,59 @@ func (d *evaluationStateData) getResourceInstancesAll(addr addrs.Resource, rng t for i := 0; i < length; i++ { ty := schema.ImpliedType() key := addrs.IntKey(i) - is, exists := rs.Instances[key] - if exists && is.Current != nil { - instAddr := addr.Instance(key).Absolute(d.ModulePath) - - // Prefer pending value in plan if present. See getResourceInstanceSingle - // comment for the rationale. - if is.Current.Status == states.ObjectPlanned { - if change := d.Evaluator.Changes.GetResourceInstanceChange(instAddr, states.CurrentGen); change != nil { - val, err := change.After.Decode(ty) - if err != nil { - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid resource instance data in plan", - Detail: fmt.Sprintf("Instance %s data could not be decoded from the plan: %s.", instAddr, err), - Subject: &config.DeclRange, - }) - continue - } - vals[i] = val - continue - } else { - // If the object is in planned status then we should not - // get here, since we should've found a pending value - // in the plan above instead. - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Missing pending object in plan", - Detail: fmt.Sprintf("Instance %s is marked as having a change pending but that change is not recorded in the plan. This is a bug in Terraform; please report it.", instAddr), - Subject: &config.DeclRange, - }) - continue - } - } - - ios, err := is.Current.Decode(ty) - if err != nil { - // This shouldn't happen, since by the time we get here - // we should've upgraded the state data already. - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid resource instance data in state", - Detail: fmt.Sprintf("Instance %s data could not be decoded from the state: %s.", instAddr, err), - Subject: &config.DeclRange, - }) - continue - } - vals[i] = ios.Value - } else { + is := rs.Instances[key] + if is == nil || is.Current == nil { // There shouldn't normally be "gaps" in our list but we'll // allow it under the assumption that we're in a weird situation // where e.g. someone has run "terraform state mv" to reorder // a list and left a hole behind. vals[i] = cty.UnknownVal(schema.ImpliedType()) + continue } + + instAddr := addr.Instance(key).Absolute(d.ModulePath) + + if is.Current.Status == states.ObjectPlanned { + if change := d.Evaluator.Changes.GetResourceInstanceChange(instAddr, states.CurrentGen); change != nil { + val, err := change.After.Decode(ty) + if err != nil { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Invalid resource instance data in plan", + Detail: fmt.Sprintf("Instance %s data could not be decoded from the plan: %s.", instAddr, err), + Subject: &config.DeclRange, + }) + continue + } + vals[i] = val + continue + } else { + // If the object is in planned status then we should not + // get here, since we should've found a pending value + // in the plan above instead. + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Missing pending object in plan", + Detail: fmt.Sprintf("Instance %s is marked as having a change pending but that change is not recorded in the plan. 
This is a bug in Terraform; please report it.", instAddr), + Subject: &config.DeclRange, + }) + continue + } + } + + ios, err := is.Current.Decode(ty) + if err != nil { + // This shouldn't happen, since by the time we get here + // we should've upgraded the state data already. + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Invalid resource instance data in state", + Detail: fmt.Sprintf("Instance %s data could not be decoded from the state: %s.", instAddr, err), + Subject: &config.DeclRange, + }) + continue + } + vals[i] = ios.Value } // We use a tuple rather than a list here because resource schemas may @@ -705,12 +700,14 @@ func (d *evaluationStateData) getResourceInstancesAll(addr addrs.Resource, rng t vals := make(map[string]cty.Value, len(rs.Instances)) for k, is := range rs.Instances { if sk, ok := k.(addrs.StringKey); ok { + if is == nil || is.Current == nil { + // Assume we're dealing with an instance that hasn't been created yet. + vals[string(sk)] = cty.UnknownVal(schema.ImpliedType()) + continue + } + instAddr := addr.Instance(k).Absolute(d.ModulePath) - // Prefer pending value in plan if present. See getResourceInstanceSingle - // comment for the rationale. - // Prefer pending value in plan if present. See getResourceInstanceSingle - // comment for the rationale. if is.Current.Status == states.ObjectPlanned { if change := d.Evaluator.Changes.GetResourceInstanceChange(instAddr, states.CurrentGen); change != nil { val, err := change.After.Decode(ty) @@ -768,9 +765,8 @@ func (d *evaluationStateData) getResourceInstancesAll(addr addrs.Resource, rng t } func (d *evaluationStateData) getResourceSchema(addr addrs.Resource, providerAddr addrs.AbsProviderConfig) *configschema.Block { - providerType := providerAddr.ProviderConfig.Type schemas := d.Evaluator.Schemas - schema, _ := schemas.ResourceTypeConfig(providerType, addr.Mode, addr.Type) + schema, _ := schemas.ResourceTypeConfig(providerAddr.Provider, addr.Mode, addr.Type) return schema } diff --git a/terraform/evaluate_valid.go b/terraform/evaluate_valid.go index 9e55b2f99..18b8c4118 100644 --- a/terraform/evaluate_valid.go +++ b/terraform/evaluate_valid.go @@ -212,11 +212,8 @@ func (d *evaluationStateData) staticValidateResourceReference(modCfg *configs.Co return diags } - // Normally accessing this directly is wrong because it doesn't take into - // account provider inheritance, etc but it's okay here because we're only - // paying attention to the type anyway. 
- providerType := cfg.ProviderConfigAddr().Type - schema, _ := d.Evaluator.Schemas.ResourceTypeConfig(providerType, addr.Mode, addr.Type) + providerFqn := modCfg.Module.ProviderForLocalConfig(cfg.ProviderConfigAddr()) + schema, _ := d.Evaluator.Schemas.ResourceTypeConfig(providerFqn, addr.Mode, addr.Type) if schema == nil { // Prior validation should've taken care of a resource block with an @@ -225,7 +222,7 @@ func (d *evaluationStateData) staticValidateResourceReference(modCfg *configs.Co diags = diags.Append(&hcl.Diagnostic{ Severity: hcl.DiagError, Summary: `Invalid resource type`, - Detail: fmt.Sprintf(`A %s resource type %q is not supported by provider %q.`, modeAdjective, addr.Type, providerType), + Detail: fmt.Sprintf(`A %s resource type %q is not supported by provider %q.`, modeAdjective, addr.Type, providerFqn.LegacyString()), Subject: rng.ToHCL().Ptr(), }) return diags diff --git a/terraform/evaluate_valid_test.go b/terraform/evaluate_valid_test.go index 2f8a50cae..7ba0c604e 100644 --- a/terraform/evaluate_valid_test.go +++ b/terraform/evaluate_valid_test.go @@ -6,6 +6,7 @@ import ( "github.com/hashicorp/hcl/v2" "github.com/hashicorp/hcl/v2/hclsyntax" + "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/configs/configschema" "github.com/hashicorp/terraform/lang" ) @@ -55,8 +56,8 @@ For example, to correlate with indices of a referring resource, use: evaluator := &Evaluator{ Config: cfg, Schemas: &Schemas{ - Providers: map[string]*ProviderSchema{ - "aws": { + Providers: map[addrs.Provider]*ProviderSchema{ + addrs.NewLegacyProvider("aws"): { ResourceTypes: map[string]*configschema.Block{ "aws_instance": {}, }, diff --git a/terraform/graph.go b/terraform/graph.go index 58d45a7b6..1ac8dac8b 100644 --- a/terraform/graph.go +++ b/terraform/graph.go @@ -20,11 +20,6 @@ type Graph struct { // Path is the path in the module tree that this Graph represents. Path addrs.ModuleInstance - - // debugName is a name for reference in the debug output. This is usually - // to indicate what topmost builder was, and if this graph is a shadow or - // not. - debugName string } func (g *Graph) DirectedGraph() dag.Grapher { @@ -43,19 +38,10 @@ func (g *Graph) walk(walker GraphWalker) tfdiags.Diagnostics { ctx := walker.EnterPath(g.Path) defer walker.ExitPath(g.Path) - // Get the path for logs - path := ctx.Path().String() - - debugName := "walk-graph.json" - if g.debugName != "" { - debugName = g.debugName + "-" + debugName - } - // Walk the graph. var walkFn dag.WalkFunc walkFn = func(v dag.Vertex) (diags tfdiags.Diagnostics) { log.Printf("[TRACE] vertex %q: starting visit (%T)", dag.VertexName(v), v) - g.DebugVisitInfo(v, g.debugName) defer func() { log.Printf("[TRACE] vertex %q: visit complete", dag.VertexName(v)) @@ -84,8 +70,6 @@ func (g *Graph) walk(walker GraphWalker) tfdiags.Diagnostics { // then callback with the output. 
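		// (EnterEvalTree and ExitEvalTree below let the walker wrap each
		// vertex's evaluation, e.g. to record errors and fire hooks.)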
log.Printf("[TRACE] vertex %q: evaluating", dag.VertexName(v)) - g.DebugVertexInfo(v, fmt.Sprintf("evaluating %T(%s)", v, path)) - tree = walker.EnterEvalTree(v, tree) output, err := Eval(tree, vertexCtx) diags = diags.Append(walker.ExitEvalTree(v, output, err)) @@ -98,8 +82,6 @@ func (g *Graph) walk(walker GraphWalker) tfdiags.Diagnostics { if ev, ok := v.(GraphNodeDynamicExpandable); ok { log.Printf("[TRACE] vertex %q: expanding dynamic subgraph", dag.VertexName(v)) - g.DebugVertexInfo(v, fmt.Sprintf("expanding %T(%s)", v, path)) - g, err := ev.DynamicExpand(vertexCtx) if err != nil { diags = diags.Append(err) @@ -124,8 +106,6 @@ func (g *Graph) walk(walker GraphWalker) tfdiags.Diagnostics { if sn, ok := v.(GraphNodeSubgraph); ok { log.Printf("[TRACE] vertex %q: entering static subgraph", dag.VertexName(v)) - g.DebugVertexInfo(v, fmt.Sprintf("subgraph: %T(%s)", v, path)) - subDiags := sn.Subgraph().(*Graph).walk(walker) if subDiags.HasErrors() { log.Printf("[TRACE] vertex %q: static subgraph encountered errors", dag.VertexName(v)) diff --git a/terraform/graph_builder.go b/terraform/graph_builder.go index 66b21f300..f631f83b5 100644 --- a/terraform/graph_builder.go +++ b/terraform/graph_builder.go @@ -5,9 +5,9 @@ import ( "log" "strings" - "github.com/hashicorp/terraform/tfdiags" - "github.com/hashicorp/terraform/addrs" + "github.com/hashicorp/terraform/helper/logging" + "github.com/hashicorp/terraform/tfdiags" ) // GraphBuilder is an interface that can be implemented and used with @@ -46,17 +46,9 @@ func (b *BasicGraphBuilder) Build(path addrs.ModuleInstance) (*Graph, tfdiags.Di stepName = stepName[dot+1:] } - debugOp := g.DebugOperation(stepName, "") err := step.Transform(g) - - errMsg := "" - if err != nil { - errMsg = err.Error() - } - debugOp.End(errMsg) - if thisStepStr := g.StringWithNodeTypes(); thisStepStr != lastStepStr { - log.Printf("[TRACE] Completed graph transform %T with new graph:\n%s------", step, thisStepStr) + log.Printf("[TRACE] Completed graph transform %T with new graph:\n%s ------", step, logging.Indent(thisStepStr)) lastStepStr = thisStepStr } else { log.Printf("[TRACE] Completed graph transform %T (no changes)", step) diff --git a/terraform/graph_builder_apply.go b/terraform/graph_builder_apply.go index 7182dd7db..4640898c4 100644 --- a/terraform/graph_builder_apply.go +++ b/terraform/graph_builder_apply.go @@ -127,21 +127,6 @@ func (b *ApplyGraphBuilder) Steps() []GraphTransformer { // Attach the state &AttachStateTransformer{State: b.State}, - // Destruction ordering - &DestroyEdgeTransformer{ - Config: b.Config, - State: b.State, - Schemas: b.Schemas, - }, - GraphTransformIf( - func() bool { return !b.Destroy }, - &CBDEdgeTransformer{ - Config: b.Config, - State: b.State, - Schemas: b.Schemas, - }, - ), - // Provisioner-related transformations &MissingProvisionerTransformer{Provisioners: b.Components.ResourceProvisioners()}, &ProvisionerTransformer{}, @@ -168,23 +153,36 @@ func (b *ApplyGraphBuilder) Steps() []GraphTransformer { // analyze the configuration to find references. &AttachSchemaTransformer{Schemas: b.Schemas}, + // Create expansion nodes for all of the module calls. This must + // come after all other transformers that create nodes representing + // objects that can belong to modules. 
+ &ModuleExpansionTransformer{Config: b.Config}, + // Connect references so ordering is correct &ReferenceTransformer{}, + &AttachDependenciesTransformer{}, + + // Destruction ordering + &DestroyEdgeTransformer{ + Config: b.Config, + State: b.State, + Schemas: b.Schemas, + }, + + &CBDEdgeTransformer{ + Config: b.Config, + State: b.State, + Schemas: b.Schemas, + }, - // Handle destroy time transformations for output and local values. - // Reverse the edges from outputs and locals, so that - // interpolations don't fail during destroy. // Create a destroy node for outputs to remove them from the state. + &DestroyOutputTransformer{Destroy: b.Destroy}, + // Prune unreferenced values, which may have interpolations that can't // be resolved. - GraphTransformIf( - func() bool { return b.Destroy }, - GraphTransformMulti( - &DestroyValueReferenceTransformer{}, - &DestroyOutputTransformer{}, - &PruneUnusedValuesTransformer{}, - ), - ), + &PruneUnusedValuesTransformer{ + Destroy: b.Destroy, + }, // Add the node to fix the state count boundaries &CountBoundaryTransformer{ diff --git a/terraform/graph_builder_apply_test.go b/terraform/graph_builder_apply_test.go index ac65751e6..6e221e026 100644 --- a/terraform/graph_builder_apply_test.go +++ b/terraform/graph_builder_apply_test.go @@ -8,6 +8,7 @@ import ( "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/plans" "github.com/hashicorp/terraform/states" + "github.com/zclconf/go-cty/cty" ) func TestApplyGraphBuilder_impl(t *testing.T) { @@ -88,11 +89,32 @@ func TestApplyGraphBuilder_depCbd(t *testing.T) { }, } + state := states.NewState() + root := state.EnsureModule(addrs.RootModuleInstance) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.A").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"A"}`), + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.B").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"B","test_list":["x"]}`), + Dependencies: []addrs.AbsResource{mustResourceAddr("test_object.A")}, + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + b := &ApplyGraphBuilder{ Config: testModule(t, "graph-builder-apply-dep-cbd"), Changes: changes, Components: simpleMockComponentFactory(), Schemas: simpleTestSchemas(), + State: state, } g, err := b.Build(addrs.RootModuleInstance) @@ -213,12 +235,6 @@ func TestApplyGraphBuilder_doubleCBD(t *testing.T) { "test_object.B", destroyB, ) - - // actual := strings.TrimSpace(g.String()) - // expected := strings.TrimSpace(testApplyGraphBuilderDoubleCBDStr) - // if actual != expected { - // t.Fatalf("wrong result\n\ngot:\n%s\n\nwant:\n%s", actual, expected) - // } } // This tests the ordering of two resources being destroyed that depend @@ -241,33 +257,26 @@ func TestApplyGraphBuilder_destroyStateOnly(t *testing.T) { }, } - state := MustShimLegacyState(&State{ - Modules: []*ModuleState{ - &ModuleState{ - Path: []string{"root", "child"}, - Resources: map[string]*ResourceState{ - "test_object.A": &ResourceState{ - Type: "test_object", - Primary: &InstanceState{ - ID: "foo", - Attributes: map[string]string{}, - }, - Provider: "provider.test", - }, - - "test_object.B": &ResourceState{ - Type: "test_object", - Primary: &InstanceState{ - ID: "bar", - Attributes: map[string]string{}, - }, - Dependencies: []string{"test_object.A"}, - Provider: 
"provider.test", - }, - }, - }, + state := states.NewState() + root := state.EnsureModule(addrs.RootModuleInstance) + child := state.EnsureModule(addrs.RootModuleInstance.Child("child", addrs.NoKey)) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.A").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"foo"}`), }, - }) + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + child.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.B").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"bar"}`), + Dependencies: []addrs.AbsResource{mustResourceAddr("module.child.test_object.A")}, + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) b := &ApplyGraphBuilder{ Config: testModule(t, "empty"), @@ -282,7 +291,6 @@ func TestApplyGraphBuilder_destroyStateOnly(t *testing.T) { if diags.HasErrors() { t.Fatalf("err: %s", diags.Err()) } - t.Logf("Graph:\n%s", g.String()) if g.Path.String() != addrs.RootModuleInstance.String() { t.Fatalf("wrong path %q", g.Path.String()) @@ -354,11 +362,33 @@ func TestApplyGraphBuilder_moduleDestroy(t *testing.T) { }, } + state := states.NewState() + modA := state.EnsureModule(addrs.RootModuleInstance.Child("A", addrs.NoKey)) + modA.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.foo").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"foo"}`), + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + modB := state.EnsureModule(addrs.RootModuleInstance.Child("B", addrs.NoKey)) + modB.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.foo").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"foo","value":"foo"}`), + Dependencies: []addrs.AbsResource{mustResourceAddr("module.A.test_object.foo")}, + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + b := &ApplyGraphBuilder{ Config: testModule(t, "graph-builder-apply-module-destroy"), Changes: changes, Components: simpleMockComponentFactory(), Schemas: simpleTestSchemas(), + State: state, } g, err := b.Build(addrs.RootModuleInstance) @@ -474,22 +504,173 @@ func TestApplyGraphBuilder_targetModule(t *testing.T) { testGraphNotContains(t, g, "module.child1.output.instance_id") } +// Ensure that an update resulting from the removal of a resource happens after +// that resource is destroyed. 
+func TestApplyGraphBuilder_updateFromOrphan(t *testing.T) { + schemas := simpleTestSchemas() + instanceSchema := schemas.Providers[addrs.NewLegacyProvider("test")].ResourceTypes["test_object"] + + bBefore, _ := plans.NewDynamicValue( + cty.ObjectVal(map[string]cty.Value{ + "id": cty.StringVal("b_id"), + "test_string": cty.StringVal("a_id"), + }), instanceSchema.ImpliedType()) + bAfter, _ := plans.NewDynamicValue( + cty.ObjectVal(map[string]cty.Value{ + "id": cty.StringVal("b_id"), + "test_string": cty.StringVal("changed"), + }), instanceSchema.ImpliedType()) + + changes := &plans.Changes{ + Resources: []*plans.ResourceInstanceChangeSrc{ + { + Addr: mustResourceInstanceAddr("test_object.a"), + ChangeSrc: plans.ChangeSrc{ + Action: plans.Delete, + }, + }, + { + Addr: mustResourceInstanceAddr("test_object.b"), + ChangeSrc: plans.ChangeSrc{ + Action: plans.Update, + Before: bBefore, + After: bAfter, + }, + }, + }, + } + + state := states.NewState() + root := state.EnsureModule(addrs.RootModuleInstance) + root.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_object", + Name: "a", + }.Instance(addrs.NoKey), + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"a_id"}`), + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, + ) + root.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_object", + Name: "b", + }.Instance(addrs.NoKey), + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"b_id","test_string":"a_id"}`), + Dependencies: []addrs.AbsResource{ + addrs.AbsResource{ + Resource: addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_object", + Name: "a", + }, + Module: root.Addr, + }, + }, + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("test"), + Module: addrs.RootModuleInstance, + }, + ) + + b := &ApplyGraphBuilder{ + Config: testModule(t, "graph-builder-apply-orphan-update"), + Changes: changes, + Components: simpleMockComponentFactory(), + Schemas: schemas, + State: state, + } + + g, err := b.Build(addrs.RootModuleInstance) + if err != nil { + t.Fatalf("err: %s", err) + } + + expected := strings.TrimSpace(` +test_object.a (destroy) +test_object.b + test_object.a (destroy) +`) + + instanceGraph := filterInstances(g) + got := strings.TrimSpace(instanceGraph.String()) + + if got != expected { + t.Fatalf("expected:\n%s\ngot:\n%s", expected, got) + } +} + +// The orphan clean up node should not be connected to a provider +func TestApplyGraphBuilder_orphanedWithProvider(t *testing.T) { + changes := &plans.Changes{ + Resources: []*plans.ResourceInstanceChangeSrc{ + { + Addr: mustResourceInstanceAddr("test_object.A"), + ChangeSrc: plans.ChangeSrc{ + Action: plans.Delete, + }, + }, + }, + } + + state := states.NewState() + root := state.EnsureModule(addrs.RootModuleInstance) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.A").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"A"}`), + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"].foo`), + ) + + b := &ApplyGraphBuilder{ + Config: testModule(t, "graph-builder-orphan-alias"), + Changes: changes, + Components: simpleMockComponentFactory(), + Schemas: simpleTestSchemas(), + State: state, + } + + g, err := b.Build(addrs.RootModuleInstance) + if err != nil { + t.Fatal(err) + } + + // The 
cleanup node has no state or config of its own, so would create a + // default provider which we don't want. + testGraphNotContains(t, g, "provider.test") +} + const testApplyGraphBuilderStr = ` meta.count-boundary (EachMode fixup) module.child.test_object.other test_object.other +module.child module.child.test_object.create module.child.test_object.create (prepare state) module.child.test_object.create (prepare state) - provider.test + module.child + provider["registry.terraform.io/-/test"] provisioner.test module.child.test_object.other module.child.test_object.create module.child.test_object.other (prepare state) module.child.test_object.other (prepare state) - provider.test -provider.test -provider.test (close) + module.child + provider["registry.terraform.io/-/test"] +provider["registry.terraform.io/-/test"] +provider["registry.terraform.io/-/test"] (close) module.child.test_object.other test_object.other provisioner.test @@ -497,35 +678,36 @@ provisioner.test (close) module.child.test_object.create root meta.count-boundary (EachMode fixup) - provider.test (close) + provider["registry.terraform.io/-/test"] (close) provisioner.test (close) test_object.create test_object.create (prepare state) test_object.create (prepare state) - provider.test + provider["registry.terraform.io/-/test"] test_object.other test_object.create test_object.other (prepare state) test_object.other (prepare state) - provider.test + provider["registry.terraform.io/-/test"] ` const testApplyGraphBuilderDestroyCountStr = ` meta.count-boundary (EachMode fixup) test_object.B -provider.test -provider.test (close) +provider["registry.terraform.io/-/test"] +provider["registry.terraform.io/-/test"] (close) test_object.B root meta.count-boundary (EachMode fixup) - provider.test (close) + provider["registry.terraform.io/-/test"] (close) test_object.A (prepare state) - provider.test + provider["registry.terraform.io/-/test"] test_object.A[1] (destroy) - test_object.A (prepare state) + provider["registry.terraform.io/-/test"] test_object.B + test_object.A (prepare state) test_object.A[1] (destroy) test_object.B (prepare state) test_object.B (prepare state) - provider.test + provider["registry.terraform.io/-/test"] ` diff --git a/terraform/graph_builder_destroy_plan.go b/terraform/graph_builder_destroy_plan.go index a6047a9b4..08ee4e63e 100644 --- a/terraform/graph_builder_destroy_plan.go +++ b/terraform/graph_builder_destroy_plan.go @@ -72,6 +72,9 @@ func (b *DestroyPlanGraphBuilder) Steps() []GraphTransformer { State: b.State, }, + // Attach the state + &AttachStateTransformer{State: b.State}, + // Attach the configuration to any resources &AttachResourceConfigTransformer{Config: b.Config}, diff --git a/terraform/graph_builder_import.go b/terraform/graph_builder_import.go index 49879e4eb..d5ad5f262 100644 --- a/terraform/graph_builder_import.go +++ b/terraform/graph_builder_import.go @@ -66,9 +66,6 @@ func (b *ImportGraphBuilder) Steps() []GraphTransformer { TransformProviders(b.Components.ResourceProviders(), concreteProvider, config), - // This validates that the providers only depend on variables - &ImportProviderValidateTransformer{}, - // Add the local values &LocalTransformer{Config: b.Config}, @@ -86,6 +83,9 @@ func (b *ImportGraphBuilder) Steps() []GraphTransformer { // have to connect again later for providers and so on. 
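		// (The import provider validation below depends on these reference
		// edges, which is why it now runs after this transformer.)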
&ReferenceTransformer{}, + // This validates that the providers only depend on variables + &ImportProviderValidateTransformer{}, + // Close opened plugin connections &CloseProviderTransformer{}, diff --git a/terraform/graph_builder_plan.go b/terraform/graph_builder_plan.go index 17adfd279..1e52c6470 100644 --- a/terraform/graph_builder_plan.go +++ b/terraform/graph_builder_plan.go @@ -137,6 +137,11 @@ func (b *PlanGraphBuilder) Steps() []GraphTransformer { // analyze the configuration to find references. &AttachSchemaTransformer{Schemas: b.Schemas}, + // Create expansion nodes for all of the module calls. This must + // come after all other transformers that create nodes representing + // objects that can belong to modules. + &ModuleExpansionTransformer{Config: b.Config}, + // Connect so that the references are ready for targeting. We'll // have to connect again later for providers and so on. &ReferenceTransformer{}, diff --git a/terraform/graph_builder_plan_test.go b/terraform/graph_builder_plan_test.go index 9b81cb87c..f30e7165b 100644 --- a/terraform/graph_builder_plan_test.go +++ b/terraform/graph_builder_plan_test.go @@ -34,9 +34,9 @@ func TestPlanGraphBuilder(t *testing.T) { }, } components := &basicComponentFactory{ - providers: map[string]providers.Factory{ - "aws": providers.FactoryFixed(awsProvider), - "openstack": providers.FactoryFixed(openstackProvider), + providers: map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): providers.FactoryFixed(awsProvider), + addrs.NewLegacyProvider("openstack"): providers.FactoryFixed(openstackProvider), }, } @@ -44,9 +44,9 @@ func TestPlanGraphBuilder(t *testing.T) { Config: testModule(t, "graph-builder-plan-basic"), Components: components, Schemas: &Schemas{ - Providers: map[string]*ProviderSchema{ - "aws": awsProvider.GetSchemaReturn, - "openstack": openstackProvider.GetSchemaReturn, + Providers: map[addrs.Provider]*ProviderSchema{ + addrs.NewLegacyProvider("aws"): awsProvider.GetSchemaReturn, + addrs.NewLegacyProvider("openstack"): openstackProvider.GetSchemaReturn, }, }, DisableReduce: true, @@ -92,8 +92,8 @@ func TestPlanGraphBuilder_dynamicBlock(t *testing.T) { }, } components := &basicComponentFactory{ - providers: map[string]providers.Factory{ - "test": providers.FactoryFixed(provider), + providers: map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): providers.FactoryFixed(provider), }, } @@ -101,8 +101,8 @@ func TestPlanGraphBuilder_dynamicBlock(t *testing.T) { Config: testModule(t, "graph-builder-plan-dynblock"), Components: components, Schemas: &Schemas{ - Providers: map[string]*ProviderSchema{ - "test": provider.GetSchemaReturn, + Providers: map[addrs.Provider]*ProviderSchema{ + addrs.NewLegacyProvider("test"): provider.GetSchemaReturn, }, }, DisableReduce: true, @@ -125,25 +125,25 @@ func TestPlanGraphBuilder_dynamicBlock(t *testing.T) { actual := strings.TrimSpace(g.String()) expected := strings.TrimSpace(` meta.count-boundary (EachMode fixup) - provider.test + provider["registry.terraform.io/-/test"] test_thing.a test_thing.b test_thing.c -provider.test -provider.test (close) - provider.test +provider["registry.terraform.io/-/test"] +provider["registry.terraform.io/-/test"] (close) + provider["registry.terraform.io/-/test"] test_thing.a test_thing.b test_thing.c root meta.count-boundary (EachMode fixup) - provider.test (close) + provider["registry.terraform.io/-/test"] (close) test_thing.a - provider.test + provider["registry.terraform.io/-/test"] test_thing.b - provider.test + 
provider["registry.terraform.io/-/test"] test_thing.c - provider.test + provider["registry.terraform.io/-/test"] test_thing.a test_thing.b `) @@ -171,8 +171,8 @@ func TestPlanGraphBuilder_attrAsBlocks(t *testing.T) { }, } components := &basicComponentFactory{ - providers: map[string]providers.Factory{ - "test": providers.FactoryFixed(provider), + providers: map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("test"): providers.FactoryFixed(provider), }, } @@ -180,8 +180,8 @@ func TestPlanGraphBuilder_attrAsBlocks(t *testing.T) { Config: testModule(t, "graph-builder-plan-attr-as-blocks"), Components: components, Schemas: &Schemas{ - Providers: map[string]*ProviderSchema{ - "test": provider.GetSchemaReturn, + Providers: map[addrs.Provider]*ProviderSchema{ + addrs.NewLegacyProvider("test"): provider.GetSchemaReturn, }, }, DisableReduce: true, @@ -204,21 +204,21 @@ func TestPlanGraphBuilder_attrAsBlocks(t *testing.T) { actual := strings.TrimSpace(g.String()) expected := strings.TrimSpace(` meta.count-boundary (EachMode fixup) - provider.test + provider["registry.terraform.io/-/test"] test_thing.a test_thing.b -provider.test -provider.test (close) - provider.test +provider["registry.terraform.io/-/test"] +provider["registry.terraform.io/-/test"] (close) + provider["registry.terraform.io/-/test"] test_thing.a test_thing.b root meta.count-boundary (EachMode fixup) - provider.test (close) + provider["registry.terraform.io/-/test"] (close) test_thing.a - provider.test + provider["registry.terraform.io/-/test"] test_thing.b - provider.test + provider["registry.terraform.io/-/test"] test_thing.a `) if actual != expected { @@ -243,7 +243,7 @@ func TestPlanGraphBuilder_targetModule(t *testing.T) { t.Logf("Graph: %s", g.String()) - testGraphNotContains(t, g, "module.child1.provider.test") + testGraphNotContains(t, g, `module.child1.provider["registry.terraform.io/-/test"]`) testGraphNotContains(t, g, "module.child1.test_object.foo") } @@ -258,8 +258,8 @@ func TestPlanGraphBuilder_forEach(t *testing.T) { } components := &basicComponentFactory{ - providers: map[string]providers.Factory{ - "aws": providers.FactoryFixed(awsProvider), + providers: map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider("aws"): providers.FactoryFixed(awsProvider), }, } @@ -267,8 +267,8 @@ func TestPlanGraphBuilder_forEach(t *testing.T) { Config: testModule(t, "plan-for-each"), Components: components, Schemas: &Schemas{ - Providers: map[string]*ProviderSchema{ - "aws": awsProvider.GetSchemaReturn, + Providers: map[addrs.Provider]*ProviderSchema{ + addrs.NewLegacyProvider("aws"): awsProvider.GetSchemaReturn, }, }, DisableReduce: true, @@ -295,13 +295,13 @@ func TestPlanGraphBuilder_forEach(t *testing.T) { const testPlanGraphBuilderStr = ` aws_instance.web aws_security_group.firewall - provider.aws + provider["registry.terraform.io/-/aws"] var.foo aws_load_balancer.weblb aws_instance.web - provider.aws + provider["registry.terraform.io/-/aws"] aws_security_group.firewall - provider.aws + provider["registry.terraform.io/-/aws"] local.instance_id aws_instance.web meta.count-boundary (EachMode fixup) @@ -311,44 +311,44 @@ meta.count-boundary (EachMode fixup) local.instance_id openstack_floating_ip.random output.instance_id - provider.aws - provider.openstack + provider["registry.terraform.io/-/aws"] + provider["registry.terraform.io/-/openstack"] var.foo openstack_floating_ip.random - provider.openstack + provider["registry.terraform.io/-/openstack"] output.instance_id local.instance_id -provider.aws 
+provider["registry.terraform.io/-/aws"] openstack_floating_ip.random -provider.aws (close) +provider["registry.terraform.io/-/aws"] (close) aws_instance.web aws_load_balancer.weblb aws_security_group.firewall - provider.aws -provider.openstack -provider.openstack (close) + provider["registry.terraform.io/-/aws"] +provider["registry.terraform.io/-/openstack"] +provider["registry.terraform.io/-/openstack"] (close) openstack_floating_ip.random - provider.openstack + provider["registry.terraform.io/-/openstack"] root meta.count-boundary (EachMode fixup) - provider.aws (close) - provider.openstack (close) + provider["registry.terraform.io/-/aws"] (close) + provider["registry.terraform.io/-/openstack"] (close) var.foo ` const testPlanGraphBuilderForEachStr = ` aws_instance.bar - provider.aws + provider["registry.terraform.io/-/aws"] aws_instance.bar2 - provider.aws + provider["registry.terraform.io/-/aws"] aws_instance.bat aws_instance.boo - provider.aws + provider["registry.terraform.io/-/aws"] aws_instance.baz - provider.aws + provider["registry.terraform.io/-/aws"] aws_instance.boo - provider.aws + provider["registry.terraform.io/-/aws"] aws_instance.foo - provider.aws + provider["registry.terraform.io/-/aws"] meta.count-boundary (EachMode fixup) aws_instance.bar aws_instance.bar2 @@ -356,17 +356,17 @@ meta.count-boundary (EachMode fixup) aws_instance.baz aws_instance.boo aws_instance.foo - provider.aws -provider.aws -provider.aws (close) + provider["registry.terraform.io/-/aws"] +provider["registry.terraform.io/-/aws"] +provider["registry.terraform.io/-/aws"] (close) aws_instance.bar aws_instance.bar2 aws_instance.bat aws_instance.baz aws_instance.boo aws_instance.foo - provider.aws + provider["registry.terraform.io/-/aws"] root meta.count-boundary (EachMode fixup) - provider.aws (close) + provider["registry.terraform.io/-/aws"] (close) ` diff --git a/terraform/graph_builder_refresh.go b/terraform/graph_builder_refresh.go index 0342cdbe8..4839a17f8 100644 --- a/terraform/graph_builder_refresh.go +++ b/terraform/graph_builder_refresh.go @@ -162,9 +162,15 @@ func (b *RefreshGraphBuilder) Steps() []GraphTransformer { // analyze the configuration to find references. &AttachSchemaTransformer{Schemas: b.Schemas}, + // Create expansion nodes for all of the module calls. This must + // come after all other transformers that create nodes representing + // objects that can belong to modules. + &ModuleExpansionTransformer{Config: b.Config}, + // Connect so that the references are ready for targeting. We'll // have to connect again later for providers and so on. 
&ReferenceTransformer{}, + &AttachDependenciesTransformer{}, // Target &TargetsTransformer{ diff --git a/terraform/graph_builder_refresh_test.go b/terraform/graph_builder_refresh_test.go index 50f5b468a..35068c841 100644 --- a/terraform/graph_builder_refresh_test.go +++ b/terraform/graph_builder_refresh_test.go @@ -83,19 +83,19 @@ func TestRefreshGraphBuilder_configOrphans(t *testing.T) { actual := strings.TrimSpace(g.StringWithNodeTypes()) expected := strings.TrimSpace(` data.test_object.foo[0] - *terraform.NodeRefreshableManagedResourceInstance - provider.test - *terraform.NodeApplyableProvider + provider["registry.terraform.io/-/test"] - *terraform.NodeApplyableProvider data.test_object.foo[0] (deposed 00000001) - *terraform.NodePlanDeposedResourceInstanceObject - provider.test - *terraform.NodeApplyableProvider + provider["registry.terraform.io/-/test"] - *terraform.NodeApplyableProvider data.test_object.foo[1] - *terraform.NodeRefreshableManagedResourceInstance - provider.test - *terraform.NodeApplyableProvider + provider["registry.terraform.io/-/test"] - *terraform.NodeApplyableProvider data.test_object.foo[1] (deposed 00000001) - *terraform.NodePlanDeposedResourceInstanceObject - provider.test - *terraform.NodeApplyableProvider + provider["registry.terraform.io/-/test"] - *terraform.NodeApplyableProvider data.test_object.foo[2] - *terraform.NodeRefreshableManagedResourceInstance - provider.test - *terraform.NodeApplyableProvider + provider["registry.terraform.io/-/test"] - *terraform.NodeApplyableProvider data.test_object.foo[2] (deposed 00000001) - *terraform.NodePlanDeposedResourceInstanceObject - provider.test - *terraform.NodeApplyableProvider -provider.test - *terraform.NodeApplyableProvider -provider.test (close) - *terraform.graphNodeCloseProvider + provider["registry.terraform.io/-/test"] - *terraform.NodeApplyableProvider +provider["registry.terraform.io/-/test"] - *terraform.NodeApplyableProvider +provider["registry.terraform.io/-/test"] (close) - *terraform.graphNodeCloseProvider data.test_object.foo[0] - *terraform.NodeRefreshableManagedResourceInstance data.test_object.foo[0] (deposed 00000001) - *terraform.NodePlanDeposedResourceInstanceObject data.test_object.foo[1] - *terraform.NodeRefreshableManagedResourceInstance @@ -107,13 +107,13 @@ provider.test (close) - *terraform.graphNodeCloseProvider test_object.foo[1] (deposed 00000001) - *terraform.NodePlanDeposedResourceInstanceObject test_object.foo[2] (deposed 00000001) - *terraform.NodePlanDeposedResourceInstanceObject test_object.foo - *terraform.NodeRefreshableManagedResource - provider.test - *terraform.NodeApplyableProvider + provider["registry.terraform.io/-/test"] - *terraform.NodeApplyableProvider test_object.foo[0] (deposed 00000001) - *terraform.NodePlanDeposedResourceInstanceObject - provider.test - *terraform.NodeApplyableProvider + provider["registry.terraform.io/-/test"] - *terraform.NodeApplyableProvider test_object.foo[1] (deposed 00000001) - *terraform.NodePlanDeposedResourceInstanceObject - provider.test - *terraform.NodeApplyableProvider + provider["registry.terraform.io/-/test"] - *terraform.NodeApplyableProvider test_object.foo[2] (deposed 00000001) - *terraform.NodePlanDeposedResourceInstanceObject - provider.test - *terraform.NodeApplyableProvider + provider["registry.terraform.io/-/test"] - *terraform.NodeApplyableProvider `) if expected != actual { t.Fatalf("wrong result\n\ngot:\n%s\n\nwant:\n%s", actual, expected) diff --git a/terraform/graph_walk_context.go b/terraform/graph_walk_context.go 
index 03c192a86..d53ebe97a 100644 --- a/terraform/graph_walk_context.go +++ b/terraform/graph_walk_context.go @@ -10,6 +10,7 @@ import ( "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/configs/configschema" "github.com/hashicorp/terraform/dag" + "github.com/hashicorp/terraform/instances" "github.com/hashicorp/terraform/plans" "github.com/hashicorp/terraform/providers" "github.com/hashicorp/terraform/provisioners" @@ -24,8 +25,9 @@ type ContextGraphWalker struct { // Configurable values Context *Context - State *states.SyncState // Used for safe concurrent access to state - Changes *plans.ChangesSync // Used for safe concurrent writes to changes + State *states.SyncState // Used for safe concurrent access to state + Changes *plans.ChangesSync // Used for safe concurrent writes to changes + InstanceExpander *instances.Expander // Tracks our gradual expansion of module and resource instances Operation walkOperation StopContext context.Context RootVariableValues InputValues @@ -75,22 +77,23 @@ func (w *ContextGraphWalker) EnterPath(path addrs.ModuleInstance) EvalContext { } ctx := &BuiltinEvalContext{ - StopContext: w.StopContext, - PathValue: path, - Hooks: w.Context.hooks, - InputValue: w.Context.uiInput, - Components: w.Context.components, - Schemas: w.Context.schemas, - ProviderCache: w.providerCache, - ProviderInputConfig: w.Context.providerInputConfig, - ProviderLock: &w.providerLock, - ProvisionerCache: w.provisionerCache, - ProvisionerLock: &w.provisionerLock, - ChangesValue: w.Changes, - StateValue: w.State, - Evaluator: evaluator, - VariableValues: w.variableValues, - VariableValuesLock: &w.variableValuesLock, + StopContext: w.StopContext, + PathValue: path, + Hooks: w.Context.hooks, + InputValue: w.Context.uiInput, + InstanceExpanderValue: w.InstanceExpander, + Components: w.Context.components, + Schemas: w.Context.schemas, + ProviderCache: w.providerCache, + ProviderInputConfig: w.Context.providerInputConfig, + ProviderLock: &w.providerLock, + ProvisionerCache: w.provisionerCache, + ProvisionerLock: &w.provisionerLock, + ChangesValue: w.Changes, + StateValue: w.State, + Evaluator: evaluator, + VariableValues: w.variableValues, + VariableValuesLock: &w.variableValuesLock, } w.contexts[key] = ctx diff --git a/terraform/graph_walk_operation.go b/terraform/graph_walk_operation.go index a3756e764..859f6fb12 100644 --- a/terraform/graph_walk_operation.go +++ b/terraform/graph_walk_operation.go @@ -1,6 +1,6 @@ package terraform -//go:generate stringer -type=walkOperation graph_walk_operation.go +//go:generate go run golang.org/x/tools/cmd/stringer -type=walkOperation graph_walk_operation.go // walkOperation is an enum which tells the walkContext what to do. 
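// (As with the other generate directives in this change, running stringer
// via "go run" pins the generator to the version selected in go.mod rather
// than whatever happens to be installed on the PATH.)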
type walkOperation byte
diff --git a/terraform/instancetype.go b/terraform/instancetype.go
index 08959717b..375a8638a 100644
--- a/terraform/instancetype.go
+++ b/terraform/instancetype.go
@@ -1,6 +1,6 @@
 package terraform
 
-//go:generate stringer -type=InstanceType instancetype.go
+//go:generate go run golang.org/x/tools/cmd/stringer -type=InstanceType instancetype.go
 
 // InstanceType is an enum of the various types of instances stored in the State
 type InstanceType int
diff --git a/terraform/module_dependencies.go b/terraform/module_dependencies.go
index 66a68c7de..e10a6f1d1 100644
--- a/terraform/module_dependencies.go
+++ b/terraform/module_dependencies.go
@@ -58,9 +58,7 @@ func configTreeConfigDependencies(root *configs.Config, inheritProviders map[str
 	// The main way to declare a provider dependency is explicitly inside
 	// the "terraform" block, which allows declaring a requirement without
 	// also creating a configuration.
-	for fullName, constraints := range module.ProviderRequirements {
-		inst := moduledeps.ProviderInstance(fullName)
-
+	for localName, req := range module.ProviderRequirements {
 		// The handling here is a bit fiddly because the moduledeps package
 		// was designed around the legacy (pre-0.12) configuration model
 		// and hasn't yet been revised to handle the new model. As a result,
 		// we need to do some translation here.
 		// FIXME: Eventually we should adjust the underlying model so we
 		// can also retain the source location of each constraint, for
 		// more informative output from the "terraform providers" command.
 		var rawConstraints version.Constraints
 		for _, constraint := range req.VersionConstraints {
 			rawConstraints = append(rawConstraints, constraint.Required...)
 		}
 		discoConstraints := discovery.NewConstraints(rawConstraints)
+		fqn := req.Type
+		if fqn.IsZero() {
+			fqn = addrs.NewLegacyProvider(localName)
+		}
 
-		providers[inst] = moduledeps.ProviderDependency{
+		providers[fqn] = moduledeps.ProviderDependency{
 			Constraints: discoConstraints,
 			Reason:      moduledeps.ProviderDependencyExplicit,
 		}
 	}
 
 	// Provider configurations can also include version constraints,
 	// allowing for more terse declaration in situations where both a
 	// configuration and a constraint are defined in the same module.
-	for fullName, pCfg := range module.ProviderConfigs {
-		inst := moduledeps.ProviderInstance(fullName)
+	for _, pCfg := range module.ProviderConfigs {
+		fqn := module.ProviderForLocalConfig(pCfg.Addr())
+
 		discoConstraints := discovery.AllVersions
 		if pCfg.Version.Required != nil {
 			discoConstraints = discovery.NewConstraints(pCfg.Version.Required)
 		}
-		if existing, exists := providers[inst]; exists {
-			existing.Constraints = existing.Constraints.Append(discoConstraints)
+		if existing, exists := providers[fqn]; exists {
+			constraints := existing.Constraints.Append(discoConstraints)
+			providers[fqn] = moduledeps.ProviderDependency{
+				Constraints: constraints,
+				Reason:      moduledeps.ProviderDependencyExplicit,
+			}
 		} else {
-			providers[inst] = moduledeps.ProviderDependency{
+			providers[fqn] = moduledeps.ProviderDependency{
 				Constraints: discoConstraints,
 				Reason:      moduledeps.ProviderDependencyExplicit,
 			}
@@ -104,8 +111,9 @@ func configTreeConfigDependencies(root *configs.Config, inheritProviders map[str
 	// an explicit dependency on the same provider.
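 	// For example, a resource of type "foo_bar" implies a dependency on the
 	// provider with local name "foo", even with no explicit requirement.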
for _, rc := range module.ManagedResources { addr := rc.ProviderConfigAddr() - inst := moduledeps.ProviderInstance(addr.StringCompact()) - if _, exists := providers[inst]; exists { + fqn := module.ProviderForLocalConfig(addr) + + if _, exists := providers[fqn]; exists { // Explicit dependency already present continue } @@ -115,15 +123,16 @@ func configTreeConfigDependencies(root *configs.Config, inheritProviders map[str reason = moduledeps.ProviderDependencyInherited } - providers[inst] = moduledeps.ProviderDependency{ + providers[fqn] = moduledeps.ProviderDependency{ Constraints: discovery.AllVersions, Reason: reason, } } for _, rc := range module.DataResources { addr := rc.ProviderConfigAddr() - inst := moduledeps.ProviderInstance(addr.StringCompact()) - if _, exists := providers[inst]; exists { + fqn := module.ProviderForLocalConfig(addr) + + if _, exists := providers[fqn]; exists { // Explicit dependency already present continue } @@ -133,7 +142,7 @@ func configTreeConfigDependencies(root *configs.Config, inheritProviders map[str reason = moduledeps.ProviderDependencyInherited } - providers[inst] = moduledeps.ProviderDependency{ + providers[fqn] = moduledeps.ProviderDependency{ Constraints: discovery.AllVersions, Reason: reason, } @@ -190,9 +199,9 @@ func configTreeMergeStateDependencies(root *moduledeps.Module, state *states.Sta module := findModule(ms.Addr) for _, rs := range ms.Resources { - inst := moduledeps.ProviderInstance(rs.ProviderConfig.ProviderConfig.StringCompact()) - if _, exists := module.Providers[inst]; !exists { - module.Providers[inst] = moduledeps.ProviderDependency{ + fqn := rs.ProviderConfig.Provider + if _, exists := module.Providers[fqn]; !exists { + module.Providers[fqn] = moduledeps.ProviderDependency{ Constraints: discovery.AllVersions, Reason: moduledeps.ProviderDependencyFromState, } diff --git a/terraform/module_dependencies_test.go b/terraform/module_dependencies_test.go index 64a8edbbb..9834d3c7c 100644 --- a/terraform/module_dependencies_test.go +++ b/terraform/module_dependencies_test.go @@ -3,8 +3,9 @@ package terraform import ( "testing" - "github.com/go-test/deep" + "github.com/google/go-cmp/cmp" + "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/configs" "github.com/hashicorp/terraform/moduledeps" "github.com/hashicorp/terraform/plugin/discovery" @@ -40,12 +41,22 @@ func TestModuleTreeDependencies(t *testing.T) { &moduledeps.Module{ Name: "root", Providers: moduledeps.Providers{ - "foo": moduledeps.ProviderDependency{ - Constraints: discovery.ConstraintStr(">=1.0.0").MustParse(), + addrs.NewLegacyProvider("foo"): moduledeps.ProviderDependency{ + Constraints: discovery.ConstraintStr(">=1.0.0,>=2.0.0").MustParse(), Reason: moduledeps.ProviderDependencyExplicit, }, - "foo.bar": moduledeps.ProviderDependency{ - Constraints: discovery.ConstraintStr(">=2.0.0").MustParse(), + }, + Children: nil, + }, + }, + "required_providers block": { + "module-deps-required-providers", + nil, + &moduledeps.Module{ + Name: "root", + Providers: moduledeps.Providers{ + addrs.NewLegacyProvider("foo"): moduledeps.ProviderDependency{ + Constraints: discovery.ConstraintStr(">=1.0.0").MustParse(), Reason: moduledeps.ProviderDependencyExplicit, }, }, @@ -58,7 +69,7 @@ func TestModuleTreeDependencies(t *testing.T) { &moduledeps.Module{ Name: "root", Providers: moduledeps.Providers{ - "foo": moduledeps.ProviderDependency{ + addrs.NewLegacyProvider("foo"): moduledeps.ProviderDependency{ Constraints: discovery.AllVersions, Reason: 
moduledeps.ProviderDependencyExplicit, }, @@ -72,11 +83,7 @@ func TestModuleTreeDependencies(t *testing.T) { &moduledeps.Module{ Name: "root", Providers: moduledeps.Providers{ - "foo": moduledeps.ProviderDependency{ - Constraints: discovery.AllVersions, - Reason: moduledeps.ProviderDependencyImplicit, - }, - "foo.baz": moduledeps.ProviderDependency{ + addrs.NewLegacyProvider("foo"): moduledeps.ProviderDependency{ Constraints: discovery.AllVersions, Reason: moduledeps.ProviderDependencyImplicit, }, @@ -90,7 +97,7 @@ func TestModuleTreeDependencies(t *testing.T) { &moduledeps.Module{ Name: "root", Providers: moduledeps.Providers{ - "foo": moduledeps.ProviderDependency{ + addrs.NewLegacyProvider("foo"): moduledeps.ProviderDependency{ Constraints: discovery.ConstraintStr(">=1.0.0").MustParse(), Reason: moduledeps.ProviderDependencyExplicit, }, @@ -104,11 +111,11 @@ func TestModuleTreeDependencies(t *testing.T) { &moduledeps.Module{ Name: "root", Providers: moduledeps.Providers{ - "foo": moduledeps.ProviderDependency{ + addrs.NewLegacyProvider("foo"): moduledeps.ProviderDependency{ Constraints: discovery.AllVersions, Reason: moduledeps.ProviderDependencyExplicit, }, - "bar": moduledeps.ProviderDependency{ + addrs.NewLegacyProvider("bar"): moduledeps.ProviderDependency{ Constraints: discovery.AllVersions, Reason: moduledeps.ProviderDependencyExplicit, }, @@ -117,11 +124,11 @@ func TestModuleTreeDependencies(t *testing.T) { { Name: "child", Providers: moduledeps.Providers{ - "foo": moduledeps.ProviderDependency{ + addrs.NewLegacyProvider("foo"): moduledeps.ProviderDependency{ Constraints: discovery.AllVersions, Reason: moduledeps.ProviderDependencyInherited, }, - "baz": moduledeps.ProviderDependency{ + addrs.NewLegacyProvider("baz"): moduledeps.ProviderDependency{ Constraints: discovery.AllVersions, Reason: moduledeps.ProviderDependencyImplicit, }, @@ -130,11 +137,11 @@ func TestModuleTreeDependencies(t *testing.T) { { Name: "grandchild", Providers: moduledeps.Providers{ - "bar": moduledeps.ProviderDependency{ + addrs.NewLegacyProvider("bar"): moduledeps.ProviderDependency{ Constraints: discovery.AllVersions, Reason: moduledeps.ProviderDependencyInherited, }, - "foo": moduledeps.ProviderDependency{ + addrs.NewLegacyProvider("foo"): moduledeps.ProviderDependency{ Constraints: discovery.AllVersions, Reason: moduledeps.ProviderDependencyExplicit, }, @@ -163,7 +170,7 @@ func TestModuleTreeDependencies(t *testing.T) { &moduledeps.Module{ Name: "root", Providers: moduledeps.Providers{ - "foo": moduledeps.ProviderDependency{ + addrs.NewLegacyProvider("foo"): moduledeps.ProviderDependency{ Constraints: discovery.AllVersions, Reason: moduledeps.ProviderDependencyFromState, }, @@ -182,10 +189,6 @@ func TestModuleTreeDependencies(t *testing.T) { Type: "foo_bar", Provider: "", }, - "foo_bar.test2": { - Type: "foo_bar", - Provider: "foo.bar", - }, "baz_bar.test": { Type: "baz_bar", Provider: "", @@ -209,15 +212,12 @@ func TestModuleTreeDependencies(t *testing.T) { &moduledeps.Module{ Name: "root", Providers: moduledeps.Providers{ - "foo": moduledeps.ProviderDependency{ - Constraints: discovery.ConstraintStr(">=1.0.0").MustParse(), + addrs.NewLegacyProvider("foo"): moduledeps.ProviderDependency{ + Constraints: discovery.ConstraintStr(">=1.0.0,>=2.0.0").MustParse(), Reason: moduledeps.ProviderDependencyExplicit, }, - "foo.bar": moduledeps.ProviderDependency{ - Constraints: discovery.ConstraintStr(">=2.0.0").MustParse(), - Reason: moduledeps.ProviderDependencyExplicit, - }, - "baz": 
moduledeps.ProviderDependency{ + + addrs.NewLegacyProvider("baz"): moduledeps.ProviderDependency{ Constraints: discovery.AllVersions, Reason: moduledeps.ProviderDependencyFromState, }, @@ -230,7 +230,7 @@ func TestModuleTreeDependencies(t *testing.T) { { Name: "grandchild", Providers: moduledeps.Providers{ - "banana": moduledeps.ProviderDependency{ + addrs.NewLegacyProvider("banana"): moduledeps.ProviderDependency{ Constraints: discovery.AllVersions, Reason: moduledeps.ProviderDependencyFromState, }, @@ -251,8 +251,8 @@ func TestModuleTreeDependencies(t *testing.T) { } got := ConfigTreeDependencies(root, MustShimLegacyState(test.State)) - for _, problem := range deep.Equal(got, test.Want) { - t.Error(problem) + if !cmp.Equal(got, test.Want) { + t.Error(cmp.Diff(got, test.Want)) } }) } diff --git a/terraform/node_data_refresh.go b/terraform/node_data_refresh.go index 2a4327f91..f3d3f262a 100644 --- a/terraform/node_data_refresh.go +++ b/terraform/node_data_refresh.go @@ -53,6 +53,19 @@ func (n *NodeRefreshableDataResource) DynamicExpand(ctx EvalContext) (*Graph, er // if we're transitioning whether "count" is set at all. fixResourceCountSetTransition(ctx, n.ResourceAddr(), count != -1) + // Inform our instance expander about our expansion results above, + // and then use it to calculate the instance addresses we'll expand for. + expander := ctx.InstanceExpander() + switch { + case count >= 0: + expander.SetResourceCount(ctx.Path(), n.ResourceAddr().Resource, count) + case forEachMap != nil: + expander.SetResourceForEach(ctx.Path(), n.ResourceAddr().Resource, forEachMap) + default: + expander.SetResourceSingle(ctx.Path(), n.ResourceAddr().Resource) + } + instanceAddrs := expander.ExpandResource(ctx.Path().Module(), n.ResourceAddr().Resource) + // Our graph transformers require access to the full state, so we'll // temporarily lock it while we work on this. state := ctx.State().Lock() @@ -85,21 +98,19 @@ func (n *NodeRefreshableDataResource) DynamicExpand(ctx EvalContext) (*Graph, er steps := []GraphTransformer{ // Expand the count. &ResourceCountTransformer{ - Concrete: concreteResource, - Schema: n.Schema, - Count: count, - ForEach: forEachMap, - Addr: n.ResourceAddr(), + Concrete: concreteResource, + Schema: n.Schema, + Addr: n.ResourceAddr(), + InstanceAddrs: instanceAddrs, }, // Add the count orphans. As these are orphaned refresh nodes, we add them // directly as NodeDestroyableDataResource. 
&OrphanResourceCountTransformer{
- Concrete: concreteResourceDestroyable,
- Count: count,
- ForEach: forEachMap,
- Addr: n.ResourceAddr(),
- State: state,
+ Concrete: concreteResourceDestroyable,
+ Addr: n.ResourceAddr(),
+ InstanceAddrs: instanceAddrs,
+ State: state,
 },
 // Attach the state
@@ -169,7 +180,6 @@ func (n *NodeRefreshableDataResourceInstance) EvalTree() EvalNode {
 &EvalReadData{
 Addr: addr.Resource,
 Config: n.Config,
- Dependencies: n.StateReferences(),
 Provider: &provider,
 ProviderAddr: n.ResolvedProvider,
 ProviderSchema: &providerSchema,
diff --git a/terraform/node_data_refresh_test.go b/terraform/node_data_refresh_test.go
index 6b6059fa2..232f104b1 100644
--- a/terraform/node_data_refresh_test.go
+++ b/terraform/node_data_refresh_test.go
@@ -6,6 +6,7 @@ import (
 "github.com/zclconf/go-cty/cty"
 "github.com/hashicorp/terraform/addrs"
+ "github.com/hashicorp/terraform/instances"
 )
 func TestNodeRefreshableDataResourceDynamicExpand_scaleOut(t *testing.T) {
@@ -49,8 +50,9 @@ func TestNodeRefreshableDataResourceDynamicExpand_scaleOut(t *testing.T) {
 }
 g, err := n.DynamicExpand(&MockEvalContext{
- PathPath: addrs.RootModuleInstance,
- StateState: state.SyncWrapper(),
+ PathPath: addrs.RootModuleInstance,
+ StateState: state.SyncWrapper(),
+ InstanceExpanderExpander: instances.NewExpander(),
 // DynamicExpand will call EvaluateExpr to evaluate the "count"
 // expression, which is just a literal number 3 in the fixture config
@@ -129,16 +131,16 @@ func TestNodeRefreshableDataResourceDynamicExpand_scaleIn(t *testing.T) {
 ),
 Config: m.Module.DataResources["data.aws_instance.foo"],
 ResolvedProvider: addrs.AbsProviderConfig{
- ProviderConfig: addrs.ProviderConfig{
- Type: "aws",
- },
+ Provider: addrs.NewLegacyProvider("aws"),
+ Module: addrs.RootModuleInstance,
 },
 },
 }
 g, err := n.DynamicExpand(&MockEvalContext{
- PathPath: addrs.RootModuleInstance,
- StateState: state.SyncWrapper(),
+ PathPath: addrs.RootModuleInstance,
+ StateState: state.SyncWrapper(),
+ InstanceExpanderExpander: instances.NewExpander(),
 // DynamicExpand will call EvaluateExpr to evaluate the "count"
 // expression, which is just a literal number 3 in the fixture config
@@ -174,7 +176,7 @@ root - terraform.graphNodeRoot
 t.Fatal("failed to find a destroyableDataResource")
 }
- if destroyableDataResource.ResolvedProvider.ProviderConfig.Type == "" {
+ if destroyableDataResource.ResolvedProvider.Provider.Type == "" {
 t.Fatal("NodeDestroyableDataResourceInstance missing provider config")
 }
 }
diff --git a/terraform/node_module_expand.go b/terraform/node_module_expand.go
new file mode 100644
index 000000000..71d8a177f
--- /dev/null
+++ b/terraform/node_module_expand.go
@@ -0,0 +1,135 @@
+package terraform
+
+import (
+ "log"
+
+ "github.com/hashicorp/terraform/addrs"
+ "github.com/hashicorp/terraform/configs"
+ "github.com/hashicorp/terraform/lang"
+ "github.com/hashicorp/terraform/states"
+)
+
+// nodeExpandModule represents a module call in the configuration that
+// might expand into multiple module instances depending on how it is
+// configured.
+type nodeExpandModule struct {
+ CallerAddr addrs.ModuleInstance
+ Addr addrs.Module
+ Call addrs.ModuleCall
+ Config *configs.Module
+ ModuleCall *configs.ModuleCall
+}
+
+var (
+ _ GraphNodeSubPath = (*nodeExpandModule)(nil)
+ _ RemovableIfNotTargeted = (*nodeExpandModule)(nil)
+ _ GraphNodeEvalable = (*nodeExpandModule)(nil)
+ _ GraphNodeReferencer = (*nodeExpandModule)(nil)
+)
+
+func (n *nodeExpandModule) Name() string {
+ return n.CallerAddr.Child(n.Call.Name, addrs.NoKey).String()
+}
+
+// GraphNodeSubPath implementation
+func (n *nodeExpandModule) Path() addrs.ModuleInstance {
+ // This node represents the module call within a module,
+ // so return the CallerAddr as the path, since the module
+ // call may expand into multiple child instances.
+ return n.CallerAddr
+}
+
+// GraphNodeReferencer implementation
+func (n *nodeExpandModule) References() []*addrs.Reference {
+ var refs []*addrs.Reference
+
+ if n.ModuleCall == nil {
+ return nil
+ }
+
+ // Expansion only uses the count and for_each expressions, so this
+ // particular graph node only refers to those.
+ // Individual variable values in the module call definition might also
+ // refer to other objects, but that's handled by
+ // NodeApplyableModuleVariable.
+ //
+ // Because our Path method returns the module instance that contains
+ // our call, these references will be correctly interpreted as being
+ // in the calling module's namespace, not the namespaces of any of the
+ // child module instances we might expand to during our evaluation.
+
+ if n.ModuleCall.Count != nil {
+ refs, _ = lang.ReferencesInExpr(n.ModuleCall.Count)
+ }
+ if n.ModuleCall.ForEach != nil {
+ refs, _ = lang.ReferencesInExpr(n.ModuleCall.ForEach)
+ }
+ return appendResourceDestroyReferences(refs)
+}
+
+// RemovableIfNotTargeted implementation
+func (n *nodeExpandModule) RemoveIfNotTargeted() bool {
+ // We need to add this so that this node will be removed if
+ // it isn't targeted or a dependency of a target.
+ return true
+}
+
+// GraphNodeEvalable
+func (n *nodeExpandModule) EvalTree() EvalNode {
+ return &evalPrepareModuleExpansion{
+ CallerAddr: n.CallerAddr,
+ Call: n.Call,
+ Config: n.Config,
+ ModuleCall: n.ModuleCall,
+ }
+}
+
+// evalPrepareModuleExpansion is an EvalNode implementation
+// that sets the count or for_each on the instance expander
+type evalPrepareModuleExpansion struct {
+ CallerAddr addrs.ModuleInstance
+ Call addrs.ModuleCall
+ Config *configs.Module
+ ModuleCall *configs.ModuleCall
+}
+
+func (n *evalPrepareModuleExpansion) Eval(ctx EvalContext) (interface{}, error) {
+ eachMode := states.NoEach
+ expander := ctx.InstanceExpander()
+
+ if n.ModuleCall == nil {
+ // FIXME: should we have gotten here with no module call?
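The body of Eval that continues below reduces to a three-way decision: count is set, for_each is set, or the call is a single instance. Here is an illustrative reduction of that decision with plain values standing in for the evaluated expressions; the -1 sentinel for "count not set" follows the convention the real evaluator uses:

package main

import "fmt"

type eachMode int

const (
	noEach eachMode = iota
	eachList
	eachMap
)

// decideEachMode condenses the count/for_each branching shown below into
// a pure function. It is a sketch, not the real Eval implementation.
func decideEachMode(count int, forEach map[string]string) eachMode {
	switch {
	case count >= 0: // -1 signals "count not set"
		return eachList
	case forEach != nil:
		return eachMap
	default:
		return noEach
	}
}

func main() {
	fmt.Println(decideEachMode(3, nil))                          // 1 (eachList)
	fmt.Println(decideEachMode(-1, map[string]string{"a": "x"})) // 2 (eachMap)
	fmt.Println(decideEachMode(-1, nil))                         // 0 (single instance)
}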
+ log.Printf("[TRACE] evalPrepareModuleExpansion: %s is a singleton", n.CallerAddr.Child(n.Call.Name, addrs.NoKey))
+ expander.SetModuleSingle(n.CallerAddr, n.Call)
+ return nil, nil
+ }
+
+ count, countDiags := evaluateResourceCountExpression(n.ModuleCall.Count, ctx)
+ if countDiags.HasErrors() {
+ return nil, countDiags.Err()
+ }
+
+ if count >= 0 { // -1 signals "count not set"
+ eachMode = states.EachList
+ }
+
+ forEach, forEachDiags := evaluateResourceForEachExpression(n.ModuleCall.ForEach, ctx)
+ if forEachDiags.HasErrors() {
+ return nil, forEachDiags.Err()
+ }
+
+ if forEach != nil {
+ eachMode = states.EachMap
+ }
+
+ switch eachMode {
+ case states.EachList:
+ expander.SetModuleCount(ctx.Path(), n.Call, count)
+ case states.EachMap:
+ expander.SetModuleForEach(ctx.Path(), n.Call, forEach)
+ default:
+ expander.SetModuleSingle(n.CallerAddr, n.Call)
+ }
+
+ return nil, nil
+}
diff --git a/terraform/node_module_removed.go b/terraform/node_module_removed.go
index 99e440903..43d9bcd4b 100644
--- a/terraform/node_module_removed.go
+++ b/terraform/node_module_removed.go
@@ -39,12 +39,12 @@ func (n *NodeModuleRemoved) EvalTree() EvalNode {
 }
 }
-func (n *NodeModuleRemoved) ReferenceOutside() (selfPath, referencePath addrs.ModuleInstance) {
+func (n *NodeModuleRemoved) ReferenceOutside() (selfPath, referencePath addrs.Module) {
 // Our "References" implementation indicates that this node depends on
 // the call to the module it represents, which implicitly depends on
 // everything inside the module. That reference must therefore be
 // interpreted in terms of our parent module.
- return n.Addr, n.Addr.Parent()
+ return n.Addr.Module(), n.Addr.Parent().Module()
 }
 func (n *NodeModuleRemoved) References() []*addrs.Reference {
diff --git a/terraform/node_module_variable.go b/terraform/node_module_variable.go
index 6b675e570..25954edbc 100644
--- a/terraform/node_module_variable.go
+++ b/terraform/node_module_variable.go
@@ -1,6 +1,8 @@
 package terraform
 import (
+ "fmt"
+
 "github.com/hashicorp/hcl/v2"
 "github.com/hashicorp/terraform/addrs"
 "github.com/hashicorp/terraform/configs"
@@ -9,6 +11,101 @@ import (
 "github.com/zclconf/go-cty/cty"
 )
+// NodePlannableModuleVariable is the placeholder for a variable that has not yet had
+// its module path expanded.
+type NodePlannableModuleVariable struct { + Addr addrs.InputVariable + Module addrs.Module + Config *configs.Variable + Expr hcl.Expression +} + +var ( + _ GraphNodeDynamicExpandable = (*NodePlannableModuleVariable)(nil) + _ GraphNodeReferenceOutside = (*NodePlannableModuleVariable)(nil) + _ GraphNodeReferenceable = (*NodePlannableModuleVariable)(nil) + _ GraphNodeReferencer = (*NodePlannableModuleVariable)(nil) + _ GraphNodeSubPath = (*NodePlannableModuleVariable)(nil) + _ RemovableIfNotTargeted = (*NodePlannableModuleVariable)(nil) +) + +func (n *NodePlannableModuleVariable) DynamicExpand(ctx EvalContext) (*Graph, error) { + var g Graph + expander := ctx.InstanceExpander() + for _, module := range expander.ExpandModule(ctx.Path().Module()) { + o := &NodeApplyableModuleVariable{ + Addr: n.Addr.Absolute(module), + Config: n.Config, + Expr: n.Expr, + } + g.Add(o) + } + return &g, nil +} + +func (n *NodePlannableModuleVariable) Name() string { + return fmt.Sprintf("%s.%s", n.Module, n.Addr.String()) +} + +// GraphNodeSubPath +func (n *NodePlannableModuleVariable) Path() addrs.ModuleInstance { + // Return an UnkeyedInstanceShim as our placeholder, + // given that modules will be unexpanded at this point in the walk + return n.Module.UnkeyedInstanceShim() +} + +// GraphNodeReferencer +func (n *NodePlannableModuleVariable) References() []*addrs.Reference { + + // If we have no value expression, we cannot depend on anything. + if n.Expr == nil { + return nil + } + + // Variables in the root don't depend on anything, because their values + // are gathered prior to the graph walk and recorded in the context. + if len(n.Module) == 0 { + return nil + } + + // Otherwise, we depend on anything referenced by our value expression. + // We ignore diagnostics here under the assumption that we'll re-eval + // all these things later and catch them then; for our purposes here, + // we only care about valid references. + // + // Due to our GraphNodeReferenceOutside implementation, the addresses + // returned by this function are interpreted in the _parent_ module from + // where our associated variable was declared, which is correct because + // our value expression is assigned within a "module" block in the parent + // module. + refs, _ := lang.ReferencesInExpr(n.Expr) + return refs +} + +// GraphNodeReferenceOutside implementation +func (n *NodePlannableModuleVariable) ReferenceOutside() (selfPath, referencePath addrs.Module) { + return n.Module, n.Module.Parent() +} + +// GraphNodeReferenceable +func (n *NodePlannableModuleVariable) ReferenceableAddrs() []addrs.Referenceable { + // FIXME: References for module variables probably need to be thought out a bit more + // Otherwise, we can reference the output via the address itself, or the + // module call + _, call := n.Module.Call() + return []addrs.Referenceable{n.Addr, call} +} + +// RemovableIfNotTargeted +func (n *NodePlannableModuleVariable) RemoveIfNotTargeted() bool { + return true +} + +// GraphNodeTargetDownstream +func (n *NodePlannableModuleVariable) TargetDownstream(targetedDeps, untargetedDeps dag.Set) bool { + return true +} + // NodeApplyableModuleVariable represents a module variable input during // the apply step. 
type NodeApplyableModuleVariable struct { @@ -48,15 +145,15 @@ func (n *NodeApplyableModuleVariable) RemoveIfNotTargeted() bool { } // GraphNodeReferenceOutside implementation -func (n *NodeApplyableModuleVariable) ReferenceOutside() (selfPath, referencePath addrs.ModuleInstance) { +func (n *NodeApplyableModuleVariable) ReferenceOutside() (selfPath, referencePath addrs.Module) { // Module input variables have their value expressions defined in the // context of their calling (parent) module, and so references from // a node of this type should be resolved in the parent module instance. - referencePath = n.Addr.Module.Parent() + referencePath = n.Addr.Module.Parent().Module() // Input variables are _referenced_ from their own module, though. - selfPath = n.Addr.Module + selfPath = n.Addr.Module.Module() return // uses named return values } @@ -126,6 +223,14 @@ func (n *NodeApplyableModuleVariable) EvalTree() EvalNode { Module: call, Values: vals, }, + + &evalVariableValidations{ + Addr: n.Addr, + Config: n.Config, + Expr: n.Expr, + + IgnoreDiagnostics: false, + }, }, } } diff --git a/terraform/node_output.go b/terraform/node_output.go index bb3d06531..063611916 100644 --- a/terraform/node_output.go +++ b/terraform/node_output.go @@ -9,6 +9,98 @@ import ( "github.com/hashicorp/terraform/lang" ) +// NodePlannableOutput is the placeholder for an output that has not yet had +// its module path expanded. +type NodePlannableOutput struct { + Addr addrs.OutputValue + Module addrs.Module + Config *configs.Output +} + +var ( + _ GraphNodeSubPath = (*NodePlannableOutput)(nil) + _ RemovableIfNotTargeted = (*NodePlannableOutput)(nil) + _ GraphNodeReferenceable = (*NodePlannableOutput)(nil) + //_ GraphNodeEvalable = (*NodePlannableOutput)(nil) + _ GraphNodeReferencer = (*NodePlannableOutput)(nil) + _ GraphNodeDynamicExpandable = (*NodePlannableOutput)(nil) +) + +func (n *NodePlannableOutput) DynamicExpand(ctx EvalContext) (*Graph, error) { + var g Graph + expander := ctx.InstanceExpander() + for _, module := range expander.ExpandModule(ctx.Path().Module()) { + o := &NodeApplyableOutput{ + Addr: n.Addr.Absolute(module), + Config: n.Config, + } + // log.Printf("[TRACE] Expanding output: adding %s as %T", o.Addr.String(), o) + g.Add(o) + } + return &g, nil +} + +func (n *NodePlannableOutput) Name() string { + return n.Addr.Absolute(n.Module.UnkeyedInstanceShim()).String() +} + +// GraphNodeSubPath +func (n *NodePlannableOutput) Path() addrs.ModuleInstance { + // Return an UnkeyedInstanceShim as our placeholder, + // given that modules will be unexpanded at this point in the walk + return n.Module.UnkeyedInstanceShim() +} + +// GraphNodeReferenceable +func (n *NodePlannableOutput) ReferenceableAddrs() []addrs.Referenceable { + // An output in the root module can't be referenced at all. + if n.Module.IsRoot() { + return nil + } + + // the output is referenced through the module call, and via the + // module itself. 
+ _, call := n.Module.Call() + + // FIXME: make something like ModuleCallOutput for this type of reference + // that doesn't need an instance shim + callOutput := addrs.ModuleCallOutput{ + Call: call.Instance(addrs.NoKey), + Name: n.Addr.Name, + } + + // Otherwise, we can reference the output via the + // module call itself + return []addrs.Referenceable{call, callOutput} +} + +// GraphNodeReferenceOutside implementation +func (n *NodePlannableOutput) ReferenceOutside() (selfPath, referencePath addrs.Module) { + // Output values have their expressions resolved in the context of the + // module where they are defined. + referencePath = n.Module + + // ...but they are referenced in the context of their calling module. + selfPath = referencePath.Parent() + + return // uses named return values +} + +// GraphNodeReferencer +func (n *NodePlannableOutput) References() []*addrs.Reference { + return appendResourceDestroyReferences(referencesForOutput(n.Config)) +} + +// RemovableIfNotTargeted +func (n *NodePlannableOutput) RemoveIfNotTargeted() bool { + return true +} + +// GraphNodeTargetDownstream +func (n *NodePlannableOutput) TargetDownstream(targetedDeps, untargetedDeps dag.Set) bool { + return true +} + // NodeApplyableOutput represents an output that is "applyable": // it is ready to be applied. type NodeApplyableOutput struct { @@ -44,28 +136,26 @@ func (n *NodeApplyableOutput) RemoveIfNotTargeted() bool { } // GraphNodeTargetDownstream -func (n *NodeApplyableOutput) TargetDownstream(targetedDeps, untargetedDeps *dag.Set) bool { +func (n *NodeApplyableOutput) TargetDownstream(targetedDeps, untargetedDeps dag.Set) bool { // If any of the direct dependencies of an output are targeted then // the output must always be targeted as well, so its value will always // be up-to-date at the completion of an apply walk. return true } -func referenceOutsideForOutput(addr addrs.AbsOutputValue) (selfPath, referencePath addrs.ModuleInstance) { - +func referenceOutsideForOutput(addr addrs.AbsOutputValue) (selfPath, referencePath addrs.Module) { // Output values have their expressions resolved in the context of the // module where they are defined. - referencePath = addr.Module + referencePath = addr.Module.Module() // ...but they are referenced in the context of their calling module. - selfPath = addr.Module.Parent() + selfPath = addr.Module.Parent().Module() return // uses named return values - } // GraphNodeReferenceOutside implementation -func (n *NodeApplyableOutput) ReferenceOutside() (selfPath, referencePath addrs.ModuleInstance) { +func (n *NodeApplyableOutput) ReferenceOutside() (selfPath, referencePath addrs.Module) { return referenceOutsideForOutput(n.Addr) } @@ -83,8 +173,8 @@ func referenceableAddrsForOutput(addr addrs.AbsOutputValue) []addrs.Referenceabl // was declared. _, outp := addr.ModuleCallOutput() _, call := addr.Module.CallInstance() - return []addrs.Referenceable{outp, call} + return []addrs.Referenceable{outp, call} } // GraphNodeReferenceable @@ -141,7 +231,8 @@ func (n *NodeApplyableOutput) DotNode(name string, opts *dag.DotOpts) *dag.DotNo // NodeDestroyableOutput represents an output that is "destroybale": // its application will remove the output from the state. 
type NodeDestroyableOutput struct { - Addr addrs.AbsOutputValue + Addr addrs.OutputValue + Module addrs.Module Config *configs.Output // Config is the output in the config } @@ -160,7 +251,7 @@ func (n *NodeDestroyableOutput) Name() string { // GraphNodeSubPath func (n *NodeDestroyableOutput) Path() addrs.ModuleInstance { - return n.Addr.Module + return n.Module.UnkeyedInstanceShim() } // RemovableIfNotTargeted @@ -172,7 +263,7 @@ func (n *NodeDestroyableOutput) RemoveIfNotTargeted() bool { // This will keep the destroy node in the graph if its corresponding output // node is also in the destroy graph. -func (n *NodeDestroyableOutput) TargetDownstream(targetedDeps, untargetedDeps *dag.Set) bool { +func (n *NodeDestroyableOutput) TargetDownstream(targetedDeps, untargetedDeps dag.Set) bool { return true } @@ -184,7 +275,7 @@ func (n *NodeDestroyableOutput) References() []*addrs.Reference { // GraphNodeEvalable func (n *NodeDestroyableOutput) EvalTree() EvalNode { return &EvalDeleteOutput{ - Addr: n.Addr.OutputValue, + Addr: n.Addr, } } diff --git a/terraform/node_output_orphan.go b/terraform/node_output_orphan.go index 518b8aa09..f8f7124c6 100644 --- a/terraform/node_output_orphan.go +++ b/terraform/node_output_orphan.go @@ -23,7 +23,7 @@ func (n *NodeOutputOrphan) Name() string { } // GraphNodeReferenceOutside implementation -func (n *NodeOutputOrphan) ReferenceOutside() (selfPath, referencePath addrs.ModuleInstance) { +func (n *NodeOutputOrphan) ReferenceOutside() (selfPath, referencePath addrs.Module) { return referenceOutsideForOutput(n.Addr) } diff --git a/terraform/node_provider_eval.go b/terraform/node_provider_eval.go index 580e60cb7..4814d1fae 100644 --- a/terraform/node_provider_eval.go +++ b/terraform/node_provider_eval.go @@ -11,10 +11,8 @@ type NodeEvalableProvider struct { // GraphNodeEvalable func (n *NodeEvalableProvider) EvalTree() EvalNode { addr := n.Addr - relAddr := addr.ProviderConfig return &EvalInitProvider{ - TypeName: relAddr.Type, - Addr: addr.ProviderConfig, + Addr: addr, } } diff --git a/terraform/node_resource_abstract.go b/terraform/node_resource_abstract.go index d147b42e4..09ca14abd 100644 --- a/terraform/node_resource_abstract.go +++ b/terraform/node_resource_abstract.go @@ -3,7 +3,6 @@ package terraform import ( "fmt" "log" - "sort" "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/configs" @@ -11,7 +10,6 @@ import ( "github.com/hashicorp/terraform/dag" "github.com/hashicorp/terraform/lang" "github.com/hashicorp/terraform/states" - "github.com/hashicorp/terraform/tfdiags" ) // ConcreteResourceNodeFunc is a callback type used to convert an @@ -35,12 +33,20 @@ type ConcreteResourceInstanceNodeFunc func(*NodeAbstractResourceInstance) dag.Ve // configuration. type GraphNodeResourceInstance interface { ResourceInstanceAddr() addrs.AbsResourceInstance + + // StateDependencies returns any inter-resource dependencies that are + // stored in the state. + StateDependencies() []addrs.AbsResource } // NodeAbstractResource represents a resource that has no associated // operations. It registers all the interfaces for a resource that common // across multiple operation types. type NodeAbstractResource struct { + //FIXME: AbstractResources are no longer absolute, because modules are not expanded. 
+ // Addr addrs.Resource + // Module addrs.Module + Addr addrs.AbsResource // Addr is the address for this resource // The fields below will be automatically set using the Attach @@ -93,8 +99,8 @@ type NodeAbstractResourceInstance struct { // The fields below will be automatically set using the Attach // interfaces if you're running those transforms, but also be explicitly // set if you already have that information. - ResourceState *states.Resource + Dependencies []addrs.AbsResource } var ( @@ -167,12 +173,12 @@ func (n *NodeAbstractResource) References() []*addrs.Reference { var result []*addrs.Reference for _, traversal := range c.DependsOn { - ref, err := addrs.ParseRef(traversal) - if err != nil { + ref, diags := addrs.ParseRef(traversal) + if diags.HasErrors() { // We ignore this here, because this isn't a suitable place to return // errors. This situation should be caught and rejected during // validation. - log.Printf("[ERROR] Can't parse %#v from depends_on as reference: %s", traversal, err) + log.Printf("[ERROR] Can't parse %#v from depends_on as reference: %s", traversal, diags.Err()) continue } @@ -192,6 +198,11 @@ func (n *NodeAbstractResource) References() []*addrs.Reference { refs, _ = lang.ReferencesInBlock(c.Config, n.Schema) result = append(result, refs...) if c.Managed != nil { + if c.Managed.Connection != nil { + refs, _ = lang.ReferencesInBlock(c.Managed.Connection.Config, connectionBlockSupersetSchema) + result = append(result, refs...) + } + for _, p := range c.Managed.Provisioners { if p.When != configs.ProvisionerWhenCreate { continue @@ -220,7 +231,8 @@ func (n *NodeAbstractResource) References() []*addrs.Reference { func (n *NodeAbstractResourceInstance) References() []*addrs.Reference { // If we have a configuration attached then we'll delegate to our // embedded abstract resource, which knows how to extract dependencies - // from configuration. + // from configuration. If there is no config, then the dependencies will + // be connected during destroy from those stored in the state. if n.Config != nil { if n.Schema == nil { // We'll produce a log message about this out here so that @@ -232,44 +244,6 @@ func (n *NodeAbstractResourceInstance) References() []*addrs.Reference { return n.NodeAbstractResource.References() } - // Otherwise, if we have state then we'll use the values stored in state - // as a fallback. - if rs := n.ResourceState; rs != nil { - if s := rs.Instance(n.InstanceKey); s != nil { - // State is still storing dependencies as old-style strings, so we'll - // need to do a little work here to massage this to the form we now - // want. - var result []*addrs.Reference - - // It is (apparently) possible for s.Current to be nil. This proved - // difficult to reproduce, so we will fix the symptom here and hope - // to find the root cause another time. - // - // https://github.com/hashicorp/terraform/issues/21407 - if s.Current == nil { - log.Printf("[WARN] no current state found for %s", n.Name()) - } else { - for _, addr := range s.Current.Dependencies { - if addr == nil { - // Should never happen; indicates a bug in the state loader - panic(fmt.Sprintf("dependencies for current object on %s contains nil address", n.ResourceInstanceAddr())) - } - - // This is a little weird: we need to manufacture an addrs.Reference - // with a fake range here because the state isn't something we can - // make source references into. 
- result = append(result, &addrs.Reference{ - Subject: addr, - SourceRange: tfdiags.SourceRange{ - Filename: "(state file)", - }, - }) - } - } - return result - } - } - // If we have neither config nor state then we have no references. return nil } @@ -288,67 +262,17 @@ func dottedInstanceAddr(tr addrs.ResourceInstance) string { return tr.Resource.String() + suffix } -// StateReferences returns the dependencies to put into the state for -// this resource. -func (n *NodeAbstractResourceInstance) StateReferences() []addrs.Referenceable { - selfAddrs := n.ReferenceableAddrs() - - // Since we don't include the source location references in our - // results from this method, we'll also filter out duplicates: - // there's no point in listing the same object twice without - // that additional context. - seen := map[string]struct{}{} - - // Pretend that we've already "seen" all of our own addresses so that we - // won't record self-references in the state. This can arise if, for - // example, a provisioner for a resource refers to the resource itself, - // which is valid (since provisioners always run after apply) but should - // not create an explicit dependency edge. - for _, selfAddr := range selfAddrs { - seen[selfAddr.String()] = struct{}{} - if riAddr, ok := selfAddr.(addrs.ResourceInstance); ok { - seen[riAddr.ContainingResource().String()] = struct{}{} +// StateDependencies returns the dependencies saved in the state. +func (n *NodeAbstractResourceInstance) StateDependencies() []addrs.AbsResource { + if rs := n.ResourceState; rs != nil { + if s := rs.Instance(n.InstanceKey); s != nil { + if s.Current != nil { + return s.Current.Dependencies + } } } - depsRaw := n.References() - deps := make([]addrs.Referenceable, 0, len(depsRaw)) - for _, d := range depsRaw { - subj := d.Subject - if mco, isOutput := subj.(addrs.ModuleCallOutput); isOutput { - // For state dependencies, we simplify outputs to just refer - // to the module as a whole. It's not really clear why we do this, - // but this logic is preserved from before the 0.12 rewrite of - // this function. - subj = mco.Call - } - - k := subj.String() - if _, exists := seen[k]; exists { - continue - } - seen[k] = struct{}{} - switch tr := subj.(type) { - case addrs.ResourceInstance: - deps = append(deps, tr) - case addrs.Resource: - deps = append(deps, tr) - case addrs.ModuleCallInstance: - deps = append(deps, tr) - default: - // No other reference types are recorded in the state. - } - } - - // We'll also sort them, since that'll avoid creating changes in the - // serialized state that make no semantic difference. - sort.Slice(deps, func(i, j int) bool { - // Simple string-based sort because we just care about consistency, - // not user-friendliness. 
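The net effect of this deletion is that dependency tracking no longer rebuilds, deduplicates, and sorts references at write time; StateDependencies simply returns whatever the current state object recorded. A standalone sketch of the new accessor's shape, with simplified stand-ins for the state types:

package main

import "fmt"

// AbsResource stands in for addrs.AbsResource.
type AbsResource struct{ Module, Type, Name string }

type instanceObject struct{ Dependencies []AbsResource }

type resourceState struct{ current *instanceObject }

// stateDependencies mirrors the nil-guarded lookup above: dependencies
// come straight from the current object in state, or nothing at all.
func stateDependencies(rs *resourceState) []AbsResource {
	if rs != nil && rs.current != nil {
		return rs.current.Dependencies
	}
	return nil
}

func main() {
	rs := &resourceState{current: &instanceObject{
		Dependencies: []AbsResource{{"module.child", "aws_vpc", "main"}},
	}}
	fmt.Println(stateDependencies(rs))  // [{module.child aws_vpc main}]
	fmt.Println(stateDependencies(nil)) // [] (no state at all)
}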
- return deps[i].String() < deps[j].String() - }) - - return deps + return nil } func (n *NodeAbstractResource) SetProvider(p addrs.AbsProviderConfig) { @@ -360,11 +284,26 @@ func (n *NodeAbstractResource) ProvidedBy() (addrs.AbsProviderConfig, bool) { // If we have a config we prefer that above all else if n.Config != nil { relAddr := n.Config.ProviderConfigAddr() - return relAddr.Absolute(n.Path()), false + // FIXME: this will need to lookup the provider and see if there's an + // FQN associated with the local config + fqn := addrs.NewLegacyProvider(relAddr.LocalName) + return addrs.AbsProviderConfig{ + Provider: fqn, + Module: n.Path(), + Alias: relAddr.Alias, + }, false } - // Use our type and containing module path to guess a provider configuration address - return n.Addr.Resource.DefaultProviderConfig().Absolute(n.Addr.Module), false + // Use our type and containing module path to guess a provider configuration address. + // FIXME: This is relying on the FQN-to-local matching true only of legacy + // addresses, so this will need to switch to using an addrs.LocalProviderConfig + // with the local name here, once we've done the work elsewhere to make + // that possible. + defaultFQN := n.Addr.Resource.DefaultProvider() + return addrs.AbsProviderConfig{ + Provider: defaultFQN, + Module: n.Addr.Module, + }, false } // GraphNodeProviderConsumer @@ -372,7 +311,16 @@ func (n *NodeAbstractResourceInstance) ProvidedBy() (addrs.AbsProviderConfig, bo // If we have a config we prefer that above all else if n.Config != nil { relAddr := n.Config.ProviderConfigAddr() - return relAddr.Absolute(n.Path()), false + // Use our type and containing module path to guess a provider configuration address. + // FIXME: This is relying on the FQN-to-local matching true only of legacy + // addresses. + fqn := addrs.NewLegacyProvider(relAddr.LocalName) + + return addrs.AbsProviderConfig{ + Provider: fqn, + Module: n.Path(), + Alias: relAddr.Alias, + }, false } // If we have state, then we will use the provider from there @@ -384,7 +332,15 @@ func (n *NodeAbstractResourceInstance) ProvidedBy() (addrs.AbsProviderConfig, bo } // Use our type and containing module path to guess a provider configuration address - return n.Addr.Resource.DefaultProviderConfig().Absolute(n.Path()), false + // FIXME: This is relying on the FQN-to-local matching true only of legacy + // addresses, so this will need to switch to using an addrs.LocalProviderConfig + // with the local name here, once we've done the work elsewhere to make + // that possible. 
+ defaultFQN := n.Addr.Resource.DefaultProvider() + return addrs.AbsProviderConfig{ + Provider: defaultFQN, + Module: n.Addr.Module, + }, false } // GraphNodeProvisionerConsumer diff --git a/terraform/node_resource_apply_instance.go b/terraform/node_resource_apply_instance.go index d79532467..217a06b7d 100644 --- a/terraform/node_resource_apply_instance.go +++ b/terraform/node_resource_apply_instance.go @@ -28,12 +28,13 @@ type NodeApplyableResourceInstance struct { } var ( - _ GraphNodeResource = (*NodeApplyableResourceInstance)(nil) - _ GraphNodeResourceInstance = (*NodeApplyableResourceInstance)(nil) - _ GraphNodeCreator = (*NodeApplyableResourceInstance)(nil) - _ GraphNodeReferencer = (*NodeApplyableResourceInstance)(nil) - _ GraphNodeDeposer = (*NodeApplyableResourceInstance)(nil) - _ GraphNodeEvalable = (*NodeApplyableResourceInstance)(nil) + _ GraphNodeResource = (*NodeApplyableResourceInstance)(nil) + _ GraphNodeResourceInstance = (*NodeApplyableResourceInstance)(nil) + _ GraphNodeCreator = (*NodeApplyableResourceInstance)(nil) + _ GraphNodeReferencer = (*NodeApplyableResourceInstance)(nil) + _ GraphNodeDeposer = (*NodeApplyableResourceInstance)(nil) + _ GraphNodeEvalable = (*NodeApplyableResourceInstance)(nil) + _ GraphNodeAttachDependencies = (*NodeApplyableResourceInstance)(nil) ) // GraphNodeAttachDestroyer @@ -97,6 +98,11 @@ func (n *NodeApplyableResourceInstance) References() []*addrs.Reference { return ret } +// GraphNodeAttachDependencies +func (n *NodeApplyableResourceInstance) AttachDependencies(deps []addrs.AbsResource) { + n.Dependencies = deps +} + // GraphNodeEvalable func (n *NodeApplyableResourceInstance) EvalTree() EvalNode { addr := n.ResourceInstanceAddr() @@ -171,7 +177,6 @@ func (n *NodeApplyableResourceInstance) evalTreeDataResource(addr addrs.AbsResou &EvalReadData{ Addr: addr.Resource, Config: n.Config, - Dependencies: n.StateReferences(), Planned: &change, // setting this indicates that the result must be complete Provider: &provider, ProviderAddr: n.ResolvedProvider, @@ -341,7 +346,6 @@ func (n *NodeApplyableResourceInstance) evalTreeManagedResource(addr addrs.AbsRe &EvalApply{ Addr: addr.Resource, Config: n.Config, - Dependencies: n.StateReferences(), State: &state, Change: &diffApply, Provider: &provider, @@ -352,17 +356,17 @@ func (n *NodeApplyableResourceInstance) evalTreeManagedResource(addr addrs.AbsRe CreateNew: &createNew, }, &EvalMaybeTainted{ - Addr: addr.Resource, - State: &state, - Change: &diffApply, - Error: &err, - StateOutput: &state, + Addr: addr.Resource, + State: &state, + Change: &diffApply, + Error: &err, }, &EvalWriteState{ Addr: addr.Resource, ProviderAddr: n.ResolvedProvider, ProviderSchema: &providerSchema, State: &state, + Dependencies: &n.Dependencies, }, &EvalApplyProvisioners{ Addr: addr.Resource, @@ -373,25 +377,26 @@ func (n *NodeApplyableResourceInstance) evalTreeManagedResource(addr addrs.AbsRe When: configs.ProvisionerWhenCreate, }, &EvalMaybeTainted{ - Addr: addr.Resource, - State: &state, - Change: &diffApply, - Error: &err, - StateOutput: &state, + Addr: addr.Resource, + State: &state, + Change: &diffApply, + Error: &err, }, &EvalWriteState{ Addr: addr.Resource, ProviderAddr: n.ResolvedProvider, ProviderSchema: &providerSchema, State: &state, + Dependencies: &n.Dependencies, }, &EvalIf{ If: func(ctx EvalContext) (bool, error) { return createBeforeDestroyEnabled && err != nil, nil }, Then: &EvalMaybeRestoreDeposedObject{ - Addr: addr.Resource, - Key: &deposedKey, + Addr: addr.Resource, + PlannedChange: &diffApply, + Key: 
&deposedKey, }, }, diff --git a/terraform/node_resource_destroy.go b/terraform/node_resource_destroy.go index ca2267e47..0374d83dd 100644 --- a/terraform/node_resource_destroy.go +++ b/terraform/node_resource_destroy.go @@ -285,6 +285,13 @@ var ( _ GraphNodeReferenceable = (*NodeDestroyResource)(nil) _ GraphNodeReferencer = (*NodeDestroyResource)(nil) _ GraphNodeEvalable = (*NodeDestroyResource)(nil) + + // FIXME: this is here to document that this node is both + // GraphNodeProviderConsumer by virtue of the embedded + // NodeAbstractResource, but that behavior is not desired and we skip it by + // checking for GraphNodeNoProvider. + _ GraphNodeProviderConsumer = (*NodeDestroyResource)(nil) + _ GraphNodeNoProvider = (*NodeDestroyResource)(nil) ) func (n *NodeDestroyResource) Name() string { @@ -319,3 +326,23 @@ func (n *NodeDestroyResource) EvalTree() EvalNode { Addr: n.ResourceAddr().Resource, } } + +// GraphNodeResource +func (n *NodeDestroyResource) ResourceAddr() addrs.AbsResource { + return n.NodeAbstractResource.ResourceAddr() +} + +// GraphNodeSubpath +func (n *NodeDestroyResource) Path() addrs.ModuleInstance { + return n.NodeAbstractResource.Path() +} + +// GraphNodeNoProvider +// FIXME: this should be removed once the node can be separated from the +// Internal NodeAbstractResource behavior. +func (n *NodeDestroyResource) NoProvider() { +} + +type GraphNodeNoProvider interface { + NoProvider() +} diff --git a/terraform/node_resource_destroy_deposed.go b/terraform/node_resource_destroy_deposed.go index 67c46913f..e0d5db836 100644 --- a/terraform/node_resource_destroy_deposed.go +++ b/terraform/node_resource_destroy_deposed.go @@ -178,7 +178,7 @@ var ( ) func (n *NodeDestroyDeposedResourceInstanceObject) Name() string { - return fmt.Sprintf("%s (destroy deposed %s)", n.Addr.String(), n.DeposedKey) + return fmt.Sprintf("%s (destroy deposed %s)", n.ResourceInstanceAddr(), n.DeposedKey) } func (n *NodeDestroyDeposedResourceInstanceObject) DeposedInstanceObjectKey() states.DeposedKey { diff --git a/terraform/node_resource_plan.go b/terraform/node_resource_plan.go index ec4aa9322..192d11c1e 100644 --- a/terraform/node_resource_plan.go +++ b/terraform/node_resource_plan.go @@ -71,19 +71,25 @@ func (n *NodePlannableResource) ModifyCreateBeforeDestroy(v bool) error { func (n *NodePlannableResource) DynamicExpand(ctx EvalContext) (*Graph, error) { var diags tfdiags.Diagnostics + // Our instance expander should already have been informed about the + // expansion of this resource and of all of its containing modules, so + // it can tell us which instance addresses we need to process. + module := ctx.Path().Module() + expander := ctx.InstanceExpander() + instanceAddrs := expander.ExpandResource(module, n.ResourceAddr().Resource) + + // We need to potentially rename an instance address in the state + // if we're transitioning whether "count" is set at all. + // + // FIXME: We're re-evaluating count here, even though the InstanceExpander + // has already dealt with our expansion above, because we need it to + // call fixResourceCountSetTransition; the expander API and that function + // are not compatible yet. 
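fixResourceCountSetTransition, mentioned in the FIXME above, exists to handle configurations that flip count on or off between runs: the no-key instance address and the zero-index address refer to the same object and must be renamed in state. A toy illustration of that rename rule follows; addresses are plain strings here and fixCountTransition is a hypothetical helper, not the real one:

package main

import (
	"fmt"
	"strings"
)

// fixCountTransition renames the zero instance when "count" is newly set
// or newly unset, mimicking what the real transition fix arranges in state.
func fixCountTransition(addr string, countSet bool) string {
	switch {
	case countSet && !strings.HasSuffix(addr, "]"):
		return addr + "[0]" // count added: foo becomes foo[0]
	case !countSet && strings.HasSuffix(addr, "[0]"):
		return strings.TrimSuffix(addr, "[0]") // count removed: foo[0] becomes foo
	default:
		return addr
	}
}

func main() {
	fmt.Println(fixCountTransition("aws_instance.foo", true))     // aws_instance.foo[0]
	fmt.Println(fixCountTransition("aws_instance.foo[0]", false)) // aws_instance.foo
}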
count, countDiags := evaluateResourceCountExpression(n.Config.Count, ctx) diags = diags.Append(countDiags) if countDiags.HasErrors() { return nil, diags.Err() } - - forEachMap, forEachDiags := evaluateResourceForEachExpression(n.Config.ForEach, ctx) - if forEachDiags.HasErrors() { - return nil, diags.Err() - } - - // Next we need to potentially rename an instance address in the state - // if we're transitioning whether "count" is set at all. fixResourceCountSetTransition(ctx, n.ResourceAddr(), count != -1) // Our graph transformers require access to the full state, so we'll @@ -126,20 +132,18 @@ func (n *NodePlannableResource) DynamicExpand(ctx EvalContext) (*Graph, error) { steps := []GraphTransformer{ // Expand the count or for_each (if present) &ResourceCountTransformer{ - Concrete: concreteResource, - Schema: n.Schema, - Count: count, - ForEach: forEachMap, - Addr: n.ResourceAddr(), + Concrete: concreteResource, + Schema: n.Schema, + Addr: n.ResourceAddr(), + InstanceAddrs: instanceAddrs, }, // Add the count/for_each orphans &OrphanResourceCountTransformer{ - Concrete: concreteResourceOrphan, - Count: count, - ForEach: forEachMap, - Addr: n.ResourceAddr(), - State: state, + Concrete: concreteResourceOrphan, + Addr: n.ResourceAddr(), + InstanceAddrs: instanceAddrs, + State: state, }, // Attach the state diff --git a/terraform/node_resource_plan_destroy.go b/terraform/node_resource_plan_destroy.go index 38746f0d3..d0f63a561 100644 --- a/terraform/node_resource_plan_destroy.go +++ b/terraform/node_resource_plan_destroy.go @@ -47,7 +47,7 @@ func (n *NodePlanDestroyableResourceInstance) EvalTree() EvalNode { var change *plans.ResourceInstanceChange var state *states.ResourceInstanceObject - if n.ResolvedProvider.ProviderConfig.Type == "" { + if n.ResolvedProvider.Provider.Type == "" { // Should never happen; indicates that the graph was not constructed // correctly since we didn't get our provider attached. panic(fmt.Sprintf("%T %q was not assigned a resolved provider", n, dag.VertexName(n))) diff --git a/terraform/node_resource_plan_instance.go b/terraform/node_resource_plan_instance.go index 0f74bbe61..05ccefc34 100644 --- a/terraform/node_resource_plan_instance.go +++ b/terraform/node_resource_plan_instance.go @@ -78,8 +78,8 @@ func (n *NodePlannableResourceInstance) evalTreeDataResource(addr addrs.AbsResou // Check and see if any of our dependencies have changes. changes := ctx.Changes() - for _, d := range n.StateReferences() { - ri, ok := d.(addrs.ResourceInstance) + for _, d := range n.References() { + ri, ok := d.Subject.(addrs.ResourceInstance) if !ok { continue } @@ -114,7 +114,6 @@ func (n *NodePlannableResourceInstance) evalTreeDataResource(addr addrs.AbsResou &EvalReadData{ Addr: addr.Resource, Config: n.Config, - Dependencies: n.StateReferences(), Provider: &provider, ProviderAddr: n.ResolvedProvider, ProviderSchema: &providerSchema, diff --git a/terraform/node_resource_refresh.go b/terraform/node_resource_refresh.go index 9daeabfa6..fa1590093 100644 --- a/terraform/node_resource_refresh.go +++ b/terraform/node_resource_refresh.go @@ -14,10 +14,14 @@ import ( "github.com/hashicorp/terraform/tfdiags" ) -// NodeRefreshableManagedResource represents a resource that is expanabled into +// NodeRefreshableManagedResource represents a resource that is expandable into // NodeRefreshableManagedResourceInstance. Resource count orphans are also added. 
type NodeRefreshableManagedResource struct { *NodeAbstractResource + + // We attach dependencies to the Resource during refresh, since the + // instances are instantiated during DynamicExpand. + Dependencies []addrs.AbsResource } var ( @@ -27,8 +31,14 @@ var ( _ GraphNodeReferencer = (*NodeRefreshableManagedResource)(nil) _ GraphNodeResource = (*NodeRefreshableManagedResource)(nil) _ GraphNodeAttachResourceConfig = (*NodeRefreshableManagedResource)(nil) + _ GraphNodeAttachDependencies = (*NodeRefreshableManagedResource)(nil) ) +// GraphNodeAttachDependencies +func (n *NodeRefreshableManagedResource) AttachDependencies(deps []addrs.AbsResource) { + n.Dependencies = deps +} + // GraphNodeDynamicExpandable func (n *NodeRefreshableManagedResource) DynamicExpand(ctx EvalContext) (*Graph, error) { var diags tfdiags.Diagnostics @@ -48,6 +58,22 @@ func (n *NodeRefreshableManagedResource) DynamicExpand(ctx EvalContext) (*Graph, // if we're transitioning whether "count" is set at all. fixResourceCountSetTransition(ctx, n.ResourceAddr(), count != -1) + // Inform our instance expander about our expansion results above, + // and then use it to calculate the instance addresses we'll expand for. + expander := ctx.InstanceExpander() + + for _, module := range expander.ExpandModule(ctx.Path().Module()) { + switch { + case count >= 0: + expander.SetResourceCount(module, n.ResourceAddr().Resource, count) + case forEachMap != nil: + expander.SetResourceForEach(module, n.ResourceAddr().Resource, forEachMap) + default: + expander.SetResourceSingle(module, n.ResourceAddr().Resource) + } + } + instanceAddrs := expander.ExpandResource(ctx.Path().Module(), n.ResourceAddr().Resource) + // Our graph transformers require access to the full state, so we'll // temporarily lock it while we work on this. state := ctx.State().Lock() @@ -58,6 +84,7 @@ func (n *NodeRefreshableManagedResource) DynamicExpand(ctx EvalContext) (*Graph, // Add the config and state since we don't do that via transforms a.Config = n.Config a.ResolvedProvider = n.ResolvedProvider + a.Dependencies = n.Dependencies return &NodeRefreshableManagedResourceInstance{ NodeAbstractResourceInstance: a, @@ -68,21 +95,19 @@ func (n *NodeRefreshableManagedResource) DynamicExpand(ctx EvalContext) (*Graph, steps := []GraphTransformer{ // Expand the count. &ResourceCountTransformer{ - Concrete: concreteResource, - Schema: n.Schema, - Count: count, - ForEach: forEachMap, - Addr: n.ResourceAddr(), + Concrete: concreteResource, + Schema: n.Schema, + Addr: n.ResourceAddr(), + InstanceAddrs: instanceAddrs, }, // Add the count orphans to make sure these resources are accounted for // during a scale in. 
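The refresh path above seeds the expander once per expanded module instance and then asks it for the full set of resource instance addresses. A toy model of that interplay is below; the real instances.Expander works in terms of addrs types and tracks modules and resources separately, while strings here keep the sketch self-contained:

package main

import "fmt"

type expander struct {
	counts map[string]int // module instance + resource -> count
}

// SetResourceCount records the expansion decided for one module instance.
func (e *expander) SetResourceCount(module, resource string, count int) {
	e.counts[module+"."+resource] = count
}

// ExpandResource yields the instance addresses implied by the recorded count.
func (e *expander) ExpandResource(module, resource string) []string {
	var out []string
	for i := 0; i < e.counts[module+"."+resource]; i++ {
		out = append(out, fmt.Sprintf("%s.%s[%d]", module, resource, i))
	}
	return out
}

func main() {
	e := &expander{counts: map[string]int{}}
	e.SetResourceCount("module.a", "aws_instance.foo", 2)
	fmt.Println(e.ExpandResource("module.a", "aws_instance.foo"))
	// [module.a.aws_instance.foo[0] module.a.aws_instance.foo[1]]
}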
&OrphanResourceCountTransformer{ - Concrete: concreteResource, - Count: count, - ForEach: forEachMap, - Addr: n.ResourceAddr(), - State: state, + Concrete: concreteResource, + Addr: n.ResourceAddr(), + InstanceAddrs: instanceAddrs, + State: state, }, // Attach the state @@ -203,6 +228,11 @@ func (n *NodeRefreshableManagedResourceInstance) evalTreeManagedResource() EvalN Output: &state, }, + &EvalRefreshDependencies{ + State: &state, + Dependencies: &n.Dependencies, + }, + &EvalRefresh{ Addr: addr.Resource, ProviderAddr: n.ResolvedProvider, @@ -217,6 +247,7 @@ func (n *NodeRefreshableManagedResourceInstance) evalTreeManagedResource() EvalN ProviderAddr: n.ResolvedProvider, ProviderSchema: &providerSchema, State: &state, + Dependencies: &n.Dependencies, }, }, } @@ -276,6 +307,7 @@ func (n *NodeRefreshableManagedResourceInstance) evalTreeManagedResourceNoState( ProviderAddr: n.ResolvedProvider, ProviderSchema: &providerSchema, State: &state, + Dependencies: &n.Dependencies, }, // We must also save the planned change, so that expressions in diff --git a/terraform/node_resource_refresh_test.go b/terraform/node_resource_refresh_test.go index c2682ded7..4fec35f9e 100644 --- a/terraform/node_resource_refresh_test.go +++ b/terraform/node_resource_refresh_test.go @@ -8,6 +8,7 @@ import ( "github.com/zclconf/go-cty/cty" "github.com/hashicorp/terraform/addrs" + "github.com/hashicorp/terraform/instances" ) func TestNodeRefreshableManagedResourceDynamicExpand_scaleOut(t *testing.T) { @@ -49,8 +50,9 @@ func TestNodeRefreshableManagedResourceDynamicExpand_scaleOut(t *testing.T) { } g, err := n.DynamicExpand(&MockEvalContext{ - PathPath: addrs.RootModuleInstance, - StateState: state, + PathPath: addrs.RootModuleInstance, + StateState: state, + InstanceExpanderExpander: instances.NewExpander(), // DynamicExpand will call EvaluateExpr to evaluate the "count" // expression, which is just a literal number 3 in the fixture config @@ -130,8 +132,9 @@ func TestNodeRefreshableManagedResourceDynamicExpand_scaleIn(t *testing.T) { } g, err := n.DynamicExpand(&MockEvalContext{ - PathPath: addrs.RootModuleInstance, - StateState: state, + PathPath: addrs.RootModuleInstance, + StateState: state, + InstanceExpanderExpander: instances.NewExpander(), // DynamicExpand will call EvaluateExpr to evaluate the "count" // expression, which is just a literal number 3 in the fixture config diff --git a/terraform/node_root_variable.go b/terraform/node_root_variable.go index 1c302903d..e3aee6fc8 100644 --- a/terraform/node_root_variable.go +++ b/terraform/node_root_variable.go @@ -32,6 +32,26 @@ func (n *NodeRootVariable) ReferenceableAddrs() []addrs.Referenceable { return []addrs.Referenceable{n.Addr} } +// GraphNodeEvalable +func (n *NodeRootVariable) EvalTree() EvalNode { + // We don't actually need to _evaluate_ a root module variable, because + // its value is always constant and already stashed away in our EvalContext. + // However, we might need to run some user-defined validation rules against + // the value. + + if n.Config == nil || len(n.Config.Validations) == 0 { + return &EvalSequence{} // nothing to do + } + + return &evalVariableValidations{ + Addr: addrs.RootModuleInstance.InputVariable(n.Addr.Name), + Config: n.Config, + Expr: nil, // not set for root module variables + + IgnoreDiagnostics: false, + } +} + // dag.GraphNodeDotter impl. 
func (n *NodeRootVariable) DotNode(name string, opts *dag.DotOpts) *dag.DotNode { return &dag.DotNode{ diff --git a/terraform/provisioner_mock.go b/terraform/provisioner_mock.go index f59589164..d476e4ea7 100644 --- a/terraform/provisioner_mock.go +++ b/terraform/provisioner_mock.go @@ -120,7 +120,6 @@ func (p *MockProvisioner) ProvisionResource(r provisioners.ProvisionResourceRequ } if p.ProvisionResourceFn != nil { fn := p.ProvisionResourceFn - p.Unlock() return fn(r) } diff --git a/terraform/resource_mode.go b/terraform/resource_mode.go index 70d441df6..c83643a65 100644 --- a/terraform/resource_mode.go +++ b/terraform/resource_mode.go @@ -1,6 +1,6 @@ package terraform -//go:generate stringer -type=ResourceMode -output=resource_mode_string.go resource_mode.go +//go:generate go run golang.org/x/tools/cmd/stringer -type=ResourceMode -output=resource_mode_string.go resource_mode.go // ResourceMode is deprecated, use addrs.ResourceMode instead. // It has been preserved for backwards compatibility. diff --git a/terraform/resource_provider.go b/terraform/resource_provider.go index 3455ad88c..a085e4fdc 100644 --- a/terraform/resource_provider.go +++ b/terraform/resource_provider.go @@ -3,6 +3,7 @@ package terraform import ( "fmt" + "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/tfdiags" "github.com/hashicorp/terraform/plugin/discovery" @@ -209,18 +210,18 @@ type ResourceProviderResolver interface { // Given a constraint map, return a ResourceProviderFactory for each // requested provider. If some or all of the constraints cannot be // satisfied, return a non-nil slice of errors describing the problems. - ResolveProviders(reqd discovery.PluginRequirements) (map[string]ResourceProviderFactory, []error) + ResolveProviders(reqd discovery.PluginRequirements) (map[addrs.Provider]ResourceProviderFactory, []error) } // ResourceProviderResolverFunc wraps a callback function and turns it into // a ResourceProviderResolver implementation, for convenience in situations // where a function and its associated closure are sufficient as a resolver // implementation. -type ResourceProviderResolverFunc func(reqd discovery.PluginRequirements) (map[string]ResourceProviderFactory, []error) +type ResourceProviderResolverFunc func(reqd discovery.PluginRequirements) (map[addrs.Provider]ResourceProviderFactory, []error) // ResolveProviders implements ResourceProviderResolver by calling the // wrapped function. -func (f ResourceProviderResolverFunc) ResolveProviders(reqd discovery.PluginRequirements) (map[string]ResourceProviderFactory, []error) { +func (f ResourceProviderResolverFunc) ResolveProviders(reqd discovery.PluginRequirements) (map[addrs.Provider]ResourceProviderFactory, []error) { return f(reqd) } @@ -231,13 +232,16 @@ func (f ResourceProviderResolverFunc) ResolveProviders(reqd discovery.PluginRequ // // This function is primarily used in tests, to provide mock providers or // in-process providers under test. 
-func ResourceProviderResolverFixed(factories map[string]ResourceProviderFactory) ResourceProviderResolver { - return ResourceProviderResolverFunc(func(reqd discovery.PluginRequirements) (map[string]ResourceProviderFactory, []error) { - ret := make(map[string]ResourceProviderFactory, len(reqd)) +func ResourceProviderResolverFixed(factories map[addrs.Provider]ResourceProviderFactory) ResourceProviderResolver { + return ResourceProviderResolverFunc(func(reqd discovery.PluginRequirements) (map[addrs.Provider]ResourceProviderFactory, []error) { + ret := make(map[addrs.Provider]ResourceProviderFactory, len(reqd)) var errs []error for name := range reqd { - if factory, exists := factories[name]; exists { - ret[name] = factory + // FIXME: discovery.PluginRequirements should use addrs.Provider as + // the map keys instead of a string + fqn := addrs.NewLegacyProvider(name) + if factory, exists := factories[fqn]; exists { + ret[fqn] = factory } else { errs = append(errs, fmt.Errorf("provider %q is not available", name)) } @@ -285,7 +289,7 @@ func ProviderHasDataSource(p ResourceProvider, n string) bool { // This should be called only with configurations that have passed calls // to config.Validate(), which ensures that all of the given version // constraints are valid. It will panic if any invalid constraints are present. -func resourceProviderFactories(resolver providers.Resolver, reqd discovery.PluginRequirements) (map[string]providers.Factory, tfdiags.Diagnostics) { +func resourceProviderFactories(resolver providers.Resolver, reqd discovery.PluginRequirements) (map[addrs.Provider]providers.Factory, tfdiags.Diagnostics) { var diags tfdiags.Diagnostics ret, errs := resolver.ResolveProviders(reqd) if errs != nil { diff --git a/terraform/resource_provider_mock_test.go b/terraform/resource_provider_mock_test.go index ed6d2ba0e..ee6570917 100644 --- a/terraform/resource_provider_mock_test.go +++ b/terraform/resource_provider_mock_test.go @@ -3,6 +3,7 @@ package terraform import ( "testing" + "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/configs/configschema" "github.com/hashicorp/terraform/providers" "github.com/zclconf/go-cty/cty" @@ -17,8 +18,8 @@ func TestMockResourceProvider_impl(t *testing.T) { // a single given. func testProviderComponentFactory(name string, provider providers.Interface) *basicComponentFactory { return &basicComponentFactory{ - providers: map[string]providers.Factory{ - name: providers.FactoryFixed(provider), + providers: map[addrs.Provider]providers.Factory{ + addrs.NewLegacyProvider(name): providers.FactoryFixed(provider), }, } } diff --git a/terraform/schemas.go b/terraform/schemas.go index 62991c82d..9158382d8 100644 --- a/terraform/schemas.go +++ b/terraform/schemas.go @@ -15,7 +15,7 @@ import ( // Schemas is a container for various kinds of schema that Terraform needs // during processing. type Schemas struct { - Providers map[string]*ProviderSchema + Providers map[addrs.Provider]*ProviderSchema Provisioners map[string]*configschema.Block } @@ -24,17 +24,17 @@ type Schemas struct { // // It's usually better to go use the more precise methods offered by type // Schemas to handle this detail automatically. 
-func (ss *Schemas) ProviderSchema(typeName string) *ProviderSchema { +func (ss *Schemas) ProviderSchema(provider addrs.Provider) *ProviderSchema { if ss.Providers == nil { return nil } - return ss.Providers[typeName] + return ss.Providers[provider] } // ProviderConfig returns the schema for the provider configuration of the // given provider type, or nil if no such schema is available. -func (ss *Schemas) ProviderConfig(typeName string) *configschema.Block { - ps := ss.ProviderSchema(typeName) +func (ss *Schemas) ProviderConfig(provider addrs.Provider) *configschema.Block { + ps := ss.ProviderSchema(provider) if ps == nil { return nil } @@ -50,8 +50,8 @@ func (ss *Schemas) ProviderConfig(typeName string) *configschema.Block { // a resource using the "provider" meta-argument. Therefore it's important to // always pass the correct provider name, even though it many cases it feels // redundant. -func (ss *Schemas) ResourceTypeConfig(providerType string, resourceMode addrs.ResourceMode, resourceType string) (block *configschema.Block, schemaVersion uint64) { - ps := ss.ProviderSchema(providerType) +func (ss *Schemas) ResourceTypeConfig(provider addrs.Provider, resourceMode addrs.ResourceMode, resourceType string) (block *configschema.Block, schemaVersion uint64) { + ps := ss.ProviderSchema(provider) if ps == nil || ps.ResourceTypes == nil { return nil, 0 } @@ -76,7 +76,7 @@ func (ss *Schemas) ProvisionerConfig(name string) *configschema.Block { // still valid but may be incomplete. func LoadSchemas(config *configs.Config, state *states.State, components contextComponentFactory) (*Schemas, error) { schemas := &Schemas{ - Providers: map[string]*ProviderSchema{}, + Providers: map[addrs.Provider]*ProviderSchema{}, Provisioners: map[string]*configschema.Block{}, } var diags tfdiags.Diagnostics @@ -89,20 +89,23 @@ func LoadSchemas(config *configs.Config, state *states.State, components context return schemas, diags.Err() } -func loadProviderSchemas(schemas map[string]*ProviderSchema, config *configs.Config, state *states.State, components contextComponentFactory) tfdiags.Diagnostics { +func loadProviderSchemas(schemas map[addrs.Provider]*ProviderSchema, config *configs.Config, state *states.State, components contextComponentFactory) tfdiags.Diagnostics { var diags tfdiags.Diagnostics - ensure := func(typeName string) { - if _, exists := schemas[typeName]; exists { + ensure := func(fqn addrs.Provider) { + // TODO: LegacyString() will be removed in an upcoming release + typeName := fqn.LegacyString() + + if _, exists := schemas[fqn]; exists { return } - log.Printf("[TRACE] LoadSchemas: retrieving schema for provider type %q", typeName) - provider, err := components.ResourceProvider(typeName, "early/"+typeName) + log.Printf("[TRACE] LoadSchemas: retrieving schema for provider type %q", fqn.LegacyString()) + provider, err := components.ResourceProvider(fqn) if err != nil { // We'll put a stub in the map so we won't re-attempt this on // future calls. - schemas[typeName] = &ProviderSchema{} + schemas[fqn] = &ProviderSchema{} diags = diags.Append( fmt.Errorf("Failed to instantiate provider %q to obtain schema: %s", typeName, err), ) @@ -116,7 +119,7 @@ func loadProviderSchemas(schemas map[string]*ProviderSchema, config *configs.Con if resp.Diagnostics.HasErrors() { // We'll put a stub in the map so we won't re-attempt this on // future calls. 
- schemas[typeName] = &ProviderSchema{} + schemas[fqn] = &ProviderSchema{} diags = diags.Append( fmt.Errorf("Failed to retrieve schema from provider %q: %s", typeName, resp.Diagnostics.Err()), ) @@ -160,19 +163,19 @@ func loadProviderSchemas(schemas map[string]*ProviderSchema, config *configs.Con } } - schemas[typeName] = s + schemas[fqn] = s } if config != nil { - for _, typeName := range config.ProviderTypes() { - ensure(typeName) + for _, fqn := range config.ProviderTypes() { + ensure(fqn) } } if state != nil { needed := providers.AddressedTypesAbs(state.ProviderAddrs()) - for _, typeName := range needed { - ensure(typeName) + for _, typeAddr := range needed { + ensure(typeAddr) } } @@ -188,7 +191,7 @@ func loadProvisionerSchemas(schemas map[string]*configschema.Block, config *conf } log.Printf("[TRACE] LoadSchemas: retrieving schema for provisioner %q", name) - provisioner, err := components.ResourceProvisioner(name, "early/"+name) + provisioner, err := components.ResourceProvisioner(name) if err != nil { // We'll put a stub in the map so we won't re-attempt this on // future calls. diff --git a/terraform/schemas_test.go b/terraform/schemas_test.go index 3f79981ca..c34c55bcf 100644 --- a/terraform/schemas_test.go +++ b/terraform/schemas_test.go @@ -1,6 +1,7 @@ package terraform import ( + "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/configs/configschema" ) @@ -8,8 +9,8 @@ func simpleTestSchemas() *Schemas { provider := simpleMockProvider() provisioner := simpleMockProvisioner() return &Schemas{ - Providers: map[string]*ProviderSchema{ - "test": provider.GetSchemaReturn, + Providers: map[addrs.Provider]*ProviderSchema{ + addrs.NewLegacyProvider("test"): provider.GetSchemaReturn, }, Provisioners: map[string]*configschema.Block{ "test": provisioner.GetSchemaResponse.Provisioner, diff --git a/terraform/terraform_test.go b/terraform/terraform_test.go index 0d8cc0900..edc251b12 100644 --- a/terraform/terraform_test.go +++ b/terraform/terraform_test.go @@ -210,6 +210,22 @@ func mustResourceInstanceAddr(s string) addrs.AbsResourceInstance { return addr } +func mustResourceAddr(s string) addrs.AbsResource { + addr, diags := addrs.ParseAbsResourceStr(s) + if diags.HasErrors() { + panic(diags.Err()) + } + return addr +} + +func mustProviderConfig(s string) addrs.AbsProviderConfig { + p, diags := addrs.ParseAbsProviderConfigStr(s) + if diags.HasErrors() { + panic(diags.Err()) + } + return p +} + func instanceObjectIdForTests(obj *states.ResourceInstanceObject) string { v := obj.Value if v.IsNull() || !v.IsKnown() { @@ -267,13 +283,13 @@ func (h *HookRecordApplyOrder) PreApply(addr addrs.AbsResourceInstance, gen stat const testTerraformInputProviderStr = ` aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] bar = override foo = us-east-1 type = aws_instance aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] bar = baz num = 2 type = aws_instance @@ -282,7 +298,7 @@ aws_instance.foo: const testTerraformInputProviderOnlyStr = ` aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = us-west-2 type = aws_instance ` @@ -290,7 +306,7 @@ aws_instance.foo: const testTerraformInputVarOnlyStr = ` aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = us-east-1 type = aws_instance ` @@ -298,7 +314,7 @@ aws_instance.foo: const testTerraformInputVarOnlyUnsetStr = ` 
aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] bar = baz foo = foovalue type = aws_instance @@ -307,13 +323,13 @@ aws_instance.foo: const testTerraformInputVarsStr = ` aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] bar = override foo = us-east-1 type = aws_instance aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] bar = baz num = 2 type = aws_instance @@ -322,12 +338,12 @@ aws_instance.foo: const testTerraformApplyStr = ` aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar type = aws_instance aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] num = 2 type = aws_instance ` @@ -335,13 +351,13 @@ aws_instance.foo: const testTerraformApplyDataBasicStr = ` data.null_data_source.testing: ID = yo - provider = provider.null + provider = provider["registry.terraform.io/-/null"] ` const testTerraformApplyRefCountStr = ` aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = 3 type = aws_instance @@ -349,24 +365,24 @@ aws_instance.bar: aws_instance.foo aws_instance.foo.0: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] aws_instance.foo.1: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] aws_instance.foo.2: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] ` const testTerraformApplyProviderAliasStr = ` aws_instance.bar: ID = foo - provider = provider.aws.bar + provider = provider["registry.terraform.io/-/aws"].bar foo = bar type = aws_instance aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] num = 2 type = aws_instance ` @@ -374,10 +390,10 @@ aws_instance.foo: const testTerraformApplyProviderAliasConfigStr = ` another_instance.bar: ID = foo - provider = provider.another.two + provider = provider["registry.terraform.io/-/another"].two another_instance.foo: ID = foo - provider = provider.another + provider = provider["registry.terraform.io/-/another"] ` const testTerraformApplyEmptyModuleStr = ` @@ -390,15 +406,13 @@ module.child: Outputs: -aws_access_key = YYYYY aws_route53_zone_id = XXXX -aws_secret_key = ZZZZ ` const testTerraformApplyDependsCreateBeforeStr = ` aws_instance.lb: ID = baz - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] instance = foo type = aws_instance @@ -406,7 +420,7 @@ aws_instance.lb: aws_instance.web aws_instance.web: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] require_new = ami-new type = aws_instance ` @@ -414,7 +428,7 @@ aws_instance.web: const testTerraformApplyCreateBeforeStr = ` aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] require_new = xyz type = aws_instance ` @@ -422,7 +436,7 @@ aws_instance.bar: const testTerraformApplyCreateBeforeUpdateStr = ` aws_instance.bar: ID = bar - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = baz type = aws_instance ` @@ -430,14 +444,14 @@ aws_instance.bar: const testTerraformApplyCancelStr = ` aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] value = 2 ` const testTerraformApplyComputeStr = ` 
aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = computed_value type = aws_instance @@ -445,7 +459,7 @@ aws_instance.bar: aws_instance.foo aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] compute = value compute_value = 1 num = 2 @@ -456,17 +470,17 @@ aws_instance.foo: const testTerraformApplyCountDecStr = ` aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar type = aws_instance aws_instance.foo.0: ID = bar - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = foo type = aws_instance aws_instance.foo.1: ID = bar - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = foo type = aws_instance ` @@ -474,7 +488,7 @@ aws_instance.foo.1: const testTerraformApplyCountDecToOneStr = ` aws_instance.foo: ID = bar - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = foo type = aws_instance ` @@ -482,7 +496,7 @@ aws_instance.foo: const testTerraformApplyCountDecToOneCorruptedStr = ` aws_instance.foo: ID = bar - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = foo type = aws_instance ` @@ -500,24 +514,24 @@ STATE: aws_instance.foo: ID = bar - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = foo type = aws_instance aws_instance.foo.0: ID = baz - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] type = aws_instance ` const testTerraformApplyCountVariableStr = ` aws_instance.foo.0: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = foo type = aws_instance aws_instance.foo.1: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = foo type = aws_instance ` @@ -525,7 +539,7 @@ aws_instance.foo.1: const testTerraformApplyCountVariableRefStr = ` aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = 2 type = aws_instance @@ -533,70 +547,70 @@ aws_instance.bar: aws_instance.foo aws_instance.foo.0: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] aws_instance.foo.1: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] ` const testTerraformApplyForEachVariableStr = ` aws_instance.foo["b15c6d616d6143248c575900dff57325eb1de498"]: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = foo type = aws_instance aws_instance.foo["c3de47d34b0a9f13918dd705c141d579dd6555fd"]: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = foo type = aws_instance aws_instance.foo["e30a7edcc42a846684f2a4eea5f3cd261d33c46d"]: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = foo type = aws_instance aws_instance.one["a"]: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] aws_instance.one["b"]: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] aws_instance.two["a"]: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] Dependencies: aws_instance.one aws_instance.two["b"]: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] Dependencies: aws_instance.one` const testTerraformApplyMinimalStr = ` 
aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] ` const testTerraformApplyModuleStr = ` aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar type = aws_instance aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] num = 2 type = aws_instance module.child: aws_instance.baz: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar type = aws_instance ` @@ -604,13 +618,10 @@ module.child: const testTerraformApplyModuleBoolStr = ` aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = true type = aws_instance - Dependencies: - module.child - module.child: Outputs: @@ -625,12 +636,12 @@ const testTerraformApplyModuleDestroyOrderStr = ` const testTerraformApplyMultiProviderStr = ` aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar type = aws_instance do_instance.foo: ID = foo - provider = provider.do + provider = provider["registry.terraform.io/-/do"] num = 2 type = do_instance ` @@ -640,10 +651,10 @@ const testTerraformApplyModuleOnlyProviderStr = ` module.child: aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] test_instance.foo: ID = foo - provider = provider.test + provider = provider["registry.terraform.io/-/test"] ` const testTerraformApplyModuleProviderAliasStr = ` @@ -651,21 +662,24 @@ const testTerraformApplyModuleProviderAliasStr = ` module.child: aws_instance.foo: ID = foo - provider = module.child.provider.aws.eu + provider = module.child.provider["registry.terraform.io/-/aws"].eu ` const testTerraformApplyModuleVarRefExistingStr = ` aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar module.child: aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] type = aws_instance value = bar + + Dependencies: + aws_instance.foo ` const testTerraformApplyOutputOrphanStr = ` @@ -677,23 +691,18 @@ foo = bar const testTerraformApplyOutputOrphanModuleStr = ` -module.child: - - Outputs: - - foo = bar ` const testTerraformApplyProvisionerStr = ` aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] Dependencies: aws_instance.foo aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] compute = value compute_value = 1 num = 2 @@ -706,16 +715,16 @@ const testTerraformApplyProvisionerModuleStr = ` module.child: aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] ` const testTerraformApplyProvisionerFailStr = ` aws_instance.bar: (tainted) ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] num = 2 type = aws_instance ` @@ -723,7 +732,7 @@ aws_instance.foo: const testTerraformApplyProvisionerFailCreateStr = ` aws_instance.bar: (tainted) ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] ` const testTerraformApplyProvisionerFailCreateNoIdStr = ` @@ -733,7 +742,7 @@ const 
testTerraformApplyProvisionerFailCreateNoIdStr = ` const testTerraformApplyProvisionerFailCreateBeforeDestroyStr = ` aws_instance.bar: (tainted) (1 deposed) ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] require_new = xyz type = aws_instance Deposed ID 1 = bar @@ -742,7 +751,7 @@ aws_instance.bar: (tainted) (1 deposed) const testTerraformApplyProvisionerResourceRefStr = ` aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] num = 2 type = aws_instance ` @@ -750,7 +759,7 @@ aws_instance.bar: const testTerraformApplyProvisionerSelfRefStr = ` aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar type = aws_instance ` @@ -758,17 +767,17 @@ aws_instance.foo: const testTerraformApplyProvisionerMultiSelfRefStr = ` aws_instance.foo.0: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = number 0 type = aws_instance aws_instance.foo.1: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = number 1 type = aws_instance aws_instance.foo.2: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = number 2 type = aws_instance ` @@ -776,31 +785,25 @@ aws_instance.foo.2: const testTerraformApplyProvisionerMultiSelfRefSingleStr = ` aws_instance.foo.0: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = number 0 type = aws_instance aws_instance.foo.1: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = number 1 type = aws_instance - - Dependencies: - aws_instance.foo[0] aws_instance.foo.2: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = number 2 type = aws_instance - - Dependencies: - aws_instance.foo[0] ` const testTerraformApplyProvisionerDiffStr = ` aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar type = aws_instance ` @@ -812,27 +815,27 @@ const testTerraformApplyDestroyStr = ` const testTerraformApplyErrorStr = ` aws_instance.bar: (tainted) ID = bar - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] Dependencies: aws_instance.foo aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] value = 2 ` const testTerraformApplyErrorCreateBeforeDestroyStr = ` aws_instance.bar: ID = bar - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] require_new = abc ` const testTerraformApplyErrorDestroyCreateBeforeDestroyStr = ` aws_instance.bar: (1 deposed) ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] require_new = xyz type = aws_instance Deposed ID 1 = bar @@ -841,30 +844,30 @@ aws_instance.bar: (1 deposed) const testTerraformApplyErrorPartialStr = ` aws_instance.bar: ID = bar - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] Dependencies: aws_instance.foo aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] value = 2 ` const testTerraformApplyResourceDependsOnModuleStr = ` aws_instance.a: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] ami = parent type = aws_instance Dependencies: - module.child + module.child.aws_instance.child module.child: aws_instance.child: ID = foo - provider = 
provider.aws + provider = provider["registry.terraform.io/-/aws"] ami = child type = aws_instance ` @@ -872,17 +875,17 @@ module.child: const testTerraformApplyResourceDependsOnModuleDeepStr = ` aws_instance.a: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] ami = parent type = aws_instance Dependencies: - module.child + module.child.module.grandchild.aws_instance.c module.child.grandchild: aws_instance.c: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] ami = grandchild type = aws_instance ` @@ -892,16 +895,16 @@ const testTerraformApplyResourceDependsOnModuleInModuleStr = ` module.child: aws_instance.b: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] ami = child type = aws_instance Dependencies: - module.grandchild + module.child.module.grandchild.aws_instance.c module.child.grandchild: aws_instance.c: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] ami = grandchild type = aws_instance ` @@ -909,7 +912,7 @@ module.child.grandchild: const testTerraformApplyTaintStr = ` aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] num = 2 type = aws_instance ` @@ -917,7 +920,7 @@ aws_instance.bar: const testTerraformApplyTaintDepStr = ` aws_instance.bar: ID = bar - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = foo num = 2 type = aws_instance @@ -926,7 +929,7 @@ aws_instance.bar: aws_instance.foo aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] num = 2 type = aws_instance ` @@ -934,7 +937,7 @@ aws_instance.foo: const testTerraformApplyTaintDepRequireNewStr = ` aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = foo require_new = yes type = aws_instance @@ -943,7 +946,7 @@ aws_instance.bar: aws_instance.foo aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] num = 2 type = aws_instance ` @@ -951,12 +954,12 @@ aws_instance.foo: const testTerraformApplyOutputStr = ` aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar type = aws_instance aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] num = 2 type = aws_instance @@ -968,12 +971,12 @@ foo_num = 2 const testTerraformApplyOutputAddStr = ` aws_instance.test.0: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = foo0 type = aws_instance aws_instance.test.1: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = foo1 type = aws_instance @@ -986,22 +989,22 @@ secondOutput = foo1 const testTerraformApplyOutputListStr = ` aws_instance.bar.0: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar type = aws_instance aws_instance.bar.1: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar type = aws_instance aws_instance.bar.2: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar type = aws_instance aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] num = 2 type = aws_instance @@ -1013,22 +1016,22 @@ foo_num = [bar,bar,bar] const testTerraformApplyOutputMultiStr = ` 
aws_instance.bar.0: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar type = aws_instance aws_instance.bar.1: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar type = aws_instance aws_instance.bar.2: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar type = aws_instance aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] num = 2 type = aws_instance @@ -1040,22 +1043,22 @@ foo_num = bar,bar,bar const testTerraformApplyOutputMultiIndexStr = ` aws_instance.bar.0: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar type = aws_instance aws_instance.bar.1: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar type = aws_instance aws_instance.bar.2: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] foo = bar type = aws_instance aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] num = 2 type = aws_instance @@ -1067,7 +1070,7 @@ foo_num = bar const testTerraformApplyUnknownAttrStr = ` aws_instance.foo: (tainted) ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] compute = unknown num = 2 type = aws_instance @@ -1076,13 +1079,13 @@ aws_instance.foo: (tainted) const testTerraformApplyVarsStr = ` aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] bar = override baz = override foo = us-east-1 aws_instance.foo: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] bar = baz list.# = 2 list.0 = Hello @@ -1096,7 +1099,7 @@ aws_instance.foo: const testTerraformApplyVarsEnvStr = ` aws_instance.bar: ID = foo - provider = provider.aws + provider = provider["registry.terraform.io/-/aws"] list.# = 2 list.0 = Hello list.1 = World @@ -1291,7 +1294,7 @@ STATE: const testTerraformInputHCL = ` hcl_instance.hcltest: ID = foo - provider = provider.hcl + provider = provider["registry.terraform.io/-/hcl"] bar.w = z bar.x = y foo.# = 2 @@ -1303,13 +1306,10 @@ hcl_instance.hcltest: const testTerraformRefreshDataRefDataStr = ` data.null_data_source.bar: ID = foo - provider = provider.null + provider = provider["registry.terraform.io/-/null"] bar = yes - - Dependencies: - data.null_data_source.foo data.null_data_source.foo: ID = foo - provider = provider.null + provider = provider["registry.terraform.io/-/null"] foo = yes ` diff --git a/terraform/testdata/apply-cbd-cycle/main.tf b/terraform/testdata/apply-cbd-cycle/main.tf new file mode 100644 index 000000000..5ac53107e --- /dev/null +++ b/terraform/testdata/apply-cbd-cycle/main.tf @@ -0,0 +1,19 @@ +resource "test_instance" "a" { + foo = test_instance.b.id + require_new = "changed" + + lifecycle { + create_before_destroy = true + } +} + +resource "test_instance" "b" { + foo = test_instance.c.id + require_new = "changed" +} + + +resource "test_instance" "c" { + require_new = "changed" +} + diff --git a/terraform/testdata/apply-destroy-data-cycle/main.tf b/terraform/testdata/apply-destroy-data-cycle/main.tf new file mode 100644 index 000000000..bd72a47e3 --- /dev/null +++ b/terraform/testdata/apply-destroy-data-cycle/main.tf @@ -0,0 +1,10 @@ +locals { + l = data.null_data_source.d.id +} + +data "null_data_source" "d" { +} + +resource "null_resource" "a" { + count = local.l == 
"NONE" ? 1 : 0 +} diff --git a/terraform/testdata/apply-destroy-tainted/main.tf b/terraform/testdata/apply-destroy-tainted/main.tf new file mode 100644 index 000000000..48f4f1378 --- /dev/null +++ b/terraform/testdata/apply-destroy-tainted/main.tf @@ -0,0 +1,17 @@ +resource "test_instance" "a" { + foo = "a" +} + +resource "test_instance" "b" { + foo = "b" + lifecycle { + create_before_destroy = true + } +} + +resource "test_instance" "c" { + foo = "c" + lifecycle { + create_before_destroy = true + } +} diff --git a/terraform/testdata/apply-module-replace-cycle-cbd/main.tf b/terraform/testdata/apply-module-replace-cycle-cbd/main.tf new file mode 100644 index 000000000..6393231d6 --- /dev/null +++ b/terraform/testdata/apply-module-replace-cycle-cbd/main.tf @@ -0,0 +1,8 @@ +module "a" { + source = "./mod1" +} + +module "b" { + source = "./mod2" + ids = module.a.ids +} diff --git a/terraform/testdata/apply-module-replace-cycle-cbd/mod1/main.tf b/terraform/testdata/apply-module-replace-cycle-cbd/mod1/main.tf new file mode 100644 index 000000000..2ade442bf --- /dev/null +++ b/terraform/testdata/apply-module-replace-cycle-cbd/mod1/main.tf @@ -0,0 +1,10 @@ +resource "aws_instance" "a" { + require_new = "new" + lifecycle { + create_before_destroy = true + } +} + +output "ids" { + value = [aws_instance.a.id] +} diff --git a/terraform/testdata/apply-module-replace-cycle-cbd/mod2/main.tf b/terraform/testdata/apply-module-replace-cycle-cbd/mod2/main.tf new file mode 100644 index 000000000..83fb1dcd4 --- /dev/null +++ b/terraform/testdata/apply-module-replace-cycle-cbd/mod2/main.tf @@ -0,0 +1,8 @@ +resource "aws_instance" "b" { + count = length(var.ids) + require_new = var.ids[count.index] +} + +variable "ids" { + type = list(string) +} diff --git a/terraform/testdata/apply-module-replace-cycle/main.tf b/terraform/testdata/apply-module-replace-cycle/main.tf new file mode 100644 index 000000000..6393231d6 --- /dev/null +++ b/terraform/testdata/apply-module-replace-cycle/main.tf @@ -0,0 +1,8 @@ +module "a" { + source = "./mod1" +} + +module "b" { + source = "./mod2" + ids = module.a.ids +} diff --git a/terraform/testdata/apply-module-replace-cycle/mod1/main.tf b/terraform/testdata/apply-module-replace-cycle/mod1/main.tf new file mode 100644 index 000000000..2ade442bf --- /dev/null +++ b/terraform/testdata/apply-module-replace-cycle/mod1/main.tf @@ -0,0 +1,10 @@ +resource "aws_instance" "a" { + require_new = "new" + lifecycle { + create_before_destroy = true + } +} + +output "ids" { + value = [aws_instance.a.id] +} diff --git a/terraform/testdata/apply-module-replace-cycle/mod2/main.tf b/terraform/testdata/apply-module-replace-cycle/mod2/main.tf new file mode 100644 index 000000000..83fb1dcd4 --- /dev/null +++ b/terraform/testdata/apply-module-replace-cycle/mod2/main.tf @@ -0,0 +1,8 @@ +resource "aws_instance" "b" { + count = length(var.ids) + require_new = var.ids[count.index] +} + +variable "ids" { + type = list(string) +} diff --git a/terraform/testdata/apply-plan-connection-refs/main.tf b/terraform/testdata/apply-plan-connection-refs/main.tf new file mode 100644 index 000000000..d20191f33 --- /dev/null +++ b/terraform/testdata/apply-plan-connection-refs/main.tf @@ -0,0 +1,18 @@ +variable "msg" { + default = "ok" +} + +resource "test_instance" "a" { + foo = "a" +} + + +resource "test_instance" "b" { + foo = "b" + provisioner "shell" { + command = "echo ${var.msg}" + } + connection { + host = test_instance.a.id + } +} diff --git a/terraform/testdata/apply-provisioner-destroy-locals/main.tf 
b/terraform/testdata/apply-provisioner-destroy-locals/main.tf deleted file mode 100644 index 5818e7c7d..000000000 --- a/terraform/testdata/apply-provisioner-destroy-locals/main.tf +++ /dev/null @@ -1,14 +0,0 @@ -locals { - value = "local" -} - -resource "aws_instance" "foo" { - provisioner "shell" { - command = "${local.value}" - when = "create" - } - provisioner "shell" { - command = "${local.value}" - when = "destroy" - } -} diff --git a/terraform/testdata/apply-provisioner-destroy-module/child/main.tf b/terraform/testdata/apply-provisioner-destroy-module/child/main.tf deleted file mode 100644 index a8b8d123c..000000000 --- a/terraform/testdata/apply-provisioner-destroy-module/child/main.tf +++ /dev/null @@ -1,10 +0,0 @@ -variable "key" {} - -resource "aws_instance" "foo" { - foo = "bar" - - provisioner "shell" { - command = "${var.key}" - when = "destroy" - } -} diff --git a/terraform/testdata/apply-provisioner-destroy-module/main.tf b/terraform/testdata/apply-provisioner-destroy-module/main.tf deleted file mode 100644 index 817ae043d..000000000 --- a/terraform/testdata/apply-provisioner-destroy-module/main.tf +++ /dev/null @@ -1,4 +0,0 @@ -module "child" { - source = "./child" - key = "value" -} diff --git a/terraform/testdata/apply-provisioner-destroy-multiple-locals/main.tf b/terraform/testdata/apply-provisioner-destroy-multiple-locals/main.tf deleted file mode 100644 index 2050b19a1..000000000 --- a/terraform/testdata/apply-provisioner-destroy-multiple-locals/main.tf +++ /dev/null @@ -1,26 +0,0 @@ -locals { - value = "local" - foo_id = aws_instance.foo.id - - // baz is not in the state during destroy, but this is a valid config that - // should not fail. - baz_id = aws_instance.baz.id -} - -resource "aws_instance" "baz" {} - -resource "aws_instance" "foo" { - provisioner "shell" { - id = self.id - command = local.value - when = "destroy" - } -} - -resource "aws_instance" "bar" { - provisioner "shell" { - id = self.id - command = local.foo_id - when = "destroy" - } -} diff --git a/terraform/testdata/apply-provisioner-destroy-outputs/main.tf b/terraform/testdata/apply-provisioner-destroy-outputs/main.tf deleted file mode 100644 index 9a5843aa3..000000000 --- a/terraform/testdata/apply-provisioner-destroy-outputs/main.tf +++ /dev/null @@ -1,23 +0,0 @@ -module "mod" { - source = "./mod" -} - -locals { - value = "${module.mod.value}" -} - -resource "aws_instance" "foo" { - provisioner "shell" { - command = "${local.value}" - when = "destroy" - } -} - -module "mod2" { - source = "./mod2" - value = "${module.mod.value}" -} - -output "value" { - value = "${local.value}" -} diff --git a/terraform/testdata/apply-provisioner-destroy-outputs/mod/main.tf b/terraform/testdata/apply-provisioner-destroy-outputs/mod/main.tf deleted file mode 100644 index 1f4ad09c5..000000000 --- a/terraform/testdata/apply-provisioner-destroy-outputs/mod/main.tf +++ /dev/null @@ -1,5 +0,0 @@ -output "value" { - value = "${aws_instance.baz.id}" -} - -resource "aws_instance" "baz" {} diff --git a/terraform/testdata/apply-provisioner-destroy-outputs/mod2/main.tf b/terraform/testdata/apply-provisioner-destroy-outputs/mod2/main.tf deleted file mode 100644 index a476bb920..000000000 --- a/terraform/testdata/apply-provisioner-destroy-outputs/mod2/main.tf +++ /dev/null @@ -1,10 +0,0 @@ -variable "value" { -} - -resource "aws_instance" "bar" { - provisioner "shell" { - command = "${var.value}" - when = "destroy" - } -} - diff --git a/terraform/testdata/apply-provisioner-destroy-ref-invalid/main.tf 
b/terraform/testdata/apply-provisioner-destroy-ref-invalid/main.tf deleted file mode 100644 index fb4a96b10..000000000 --- a/terraform/testdata/apply-provisioner-destroy-ref-invalid/main.tf +++ /dev/null @@ -1,12 +0,0 @@ -resource "aws_instance" "bar" { - value = "hello" -} - -resource "aws_instance" "foo" { - foo = "bar" - - provisioner "shell" { - command = aws_instance.bar.does_not_exist - when = "destroy" - } -} diff --git a/terraform/testdata/apply-provisioner-destroy-ref/main.tf b/terraform/testdata/apply-provisioner-destroy-ref/main.tf deleted file mode 100644 index b36916df1..000000000 --- a/terraform/testdata/apply-provisioner-destroy-ref/main.tf +++ /dev/null @@ -1,12 +0,0 @@ -resource "aws_instance" "bar" { - value = "hello" -} - -resource "aws_instance" "foo" { - foo = "bar" - - provisioner "shell" { - command = "${aws_instance.bar.value}" - when = "destroy" - } -} diff --git a/terraform/testdata/apply-provisioner-destroy/main.tf b/terraform/testdata/apply-provisioner-destroy/main.tf index 38410ccc0..d5fc54e12 100644 --- a/terraform/testdata/apply-provisioner-destroy/main.tf +++ b/terraform/testdata/apply-provisioner-destroy/main.tf @@ -1,12 +1,18 @@ resource "aws_instance" "foo" { + for_each = var.input foo = "bar" provisioner "shell" { - command = "create" + command = "create ${each.key} ${each.value}" } provisioner "shell" { - command = "destroy" when = "destroy" + command = "destroy ${each.key}" } } + +variable "input" { + type = map(string) + default = {} +} diff --git a/terraform/testdata/apply-provisioner-each/main.tf b/terraform/testdata/apply-provisioner-each/main.tf index 8d29a5c16..29be7206e 100644 --- a/terraform/testdata/apply-provisioner-each/main.tf +++ b/terraform/testdata/apply-provisioner-each/main.tf @@ -2,6 +2,6 @@ resource "aws_instance" "bar" { for_each = toset(["a"]) provisioner "shell" { when = "destroy" - command = "echo ${each.value}" + command = "echo ${each.key}" } } diff --git a/terraform/testdata/apply-provisioner-for-each-self/main.tf b/terraform/testdata/apply-provisioner-for-each-self/main.tf new file mode 100644 index 000000000..f3e1d58df --- /dev/null +++ b/terraform/testdata/apply-provisioner-for-each-self/main.tf @@ -0,0 +1,8 @@ +resource "aws_instance" "foo" { + for_each = toset(["a", "b", "c"]) + foo = "number ${each.value}" + + provisioner "shell" { + command = "${self.foo}" + } +} diff --git a/terraform/testdata/graph-builder-apply-orphan-update/main.tf b/terraform/testdata/graph-builder-apply-orphan-update/main.tf new file mode 100644 index 000000000..22e7ae0f1 --- /dev/null +++ b/terraform/testdata/graph-builder-apply-orphan-update/main.tf @@ -0,0 +1,3 @@ +resource "test_object" "b" { + test_string = "changed" +} diff --git a/terraform/testdata/graph-builder-orphan-alias/main.tf b/terraform/testdata/graph-builder-orphan-alias/main.tf new file mode 100644 index 000000000..039881847 --- /dev/null +++ b/terraform/testdata/graph-builder-orphan-alias/main.tf @@ -0,0 +1,3 @@ +provider "test" { + alias = "foo" +} diff --git a/terraform/testdata/module-deps-required-providers/main.tf b/terraform/testdata/module-deps-required-providers/main.tf new file mode 100644 index 000000000..e39cc897b --- /dev/null +++ b/terraform/testdata/module-deps-required-providers/main.tf @@ -0,0 +1,7 @@ +terraform { + required_providers { + foo = { + version = ">=1.0.0" + } + } +} diff --git a/terraform/testdata/plan-destroy-interpolated-count/main.tf b/terraform/testdata/plan-destroy-interpolated-count/main.tf index b4ef77aba..ac0dadbf8 100644 --- 
a/terraform/testdata/plan-destroy-interpolated-count/main.tf +++ b/terraform/testdata/plan-destroy-interpolated-count/main.tf @@ -3,9 +3,18 @@ variable "list" { } resource "aws_instance" "a" { - count = "${length(var.list)}" + count = length(var.list) +} + +locals { + ids = aws_instance.a[*].id +} + +module "empty" { + source = "./mod" + input = zipmap(var.list, local.ids) } output "out" { - value = "${aws_instance.a.*.id}" + value = aws_instance.a[*].id } diff --git a/terraform/testdata/plan-destroy-interpolated-count/mod/main.tf b/terraform/testdata/plan-destroy-interpolated-count/mod/main.tf new file mode 100644 index 000000000..682e0f0db --- /dev/null +++ b/terraform/testdata/plan-destroy-interpolated-count/mod/main.tf @@ -0,0 +1,2 @@ +variable "input" { +} diff --git a/terraform/testdata/plan-modules-count/child/main.tf b/terraform/testdata/plan-modules-count/child/main.tf new file mode 100644 index 000000000..612478f79 --- /dev/null +++ b/terraform/testdata/plan-modules-count/child/main.tf @@ -0,0 +1,12 @@ +variable "foo" {} +variable "bar" {} + +resource "aws_instance" "foo" { + count = 2 + num = var.foo + bar = "baz" #var.bar +} + +output "out" { + value = aws_instance.foo[0].id +} diff --git a/terraform/testdata/plan-modules-count/main.tf b/terraform/testdata/plan-modules-count/main.tf new file mode 100644 index 000000000..eeb5fa001 --- /dev/null +++ b/terraform/testdata/plan-modules-count/main.tf @@ -0,0 +1,28 @@ +locals { + val = 2 + bar = "baz" +} + +variable "myvar" { + default = "baz" +} + + +module "child" { + count = local.val + foo = 2 + bar = var.myvar + source = "./child" +} + +output "out" { + value = module.child[*].out +} + +resource "aws_instance" "foo" { + num = 2 +} + +resource "aws_instance" "bar" { + foo = "${aws_instance.foo.num}" +} diff --git a/terraform/testdata/transform-cbd-destroy-edge-both-count/main.tf b/terraform/testdata/transform-cbd-destroy-edge-both-count/main.tf new file mode 100644 index 000000000..c19e78eaa --- /dev/null +++ b/terraform/testdata/transform-cbd-destroy-edge-both-count/main.tf @@ -0,0 +1,11 @@ +resource "test_object" "A" { + count = 2 + lifecycle { + create_before_destroy = true + } +} + +resource "test_object" "B" { + count = 2 + test_string = test_object.A[*].test_string[count.index] +} diff --git a/terraform/testdata/transform-cbd-destroy-edge-count/main.tf b/terraform/testdata/transform-cbd-destroy-edge-count/main.tf new file mode 100644 index 000000000..775900fcd --- /dev/null +++ b/terraform/testdata/transform-cbd-destroy-edge-count/main.tf @@ -0,0 +1,10 @@ +resource "test_object" "A" { + lifecycle { + create_before_destroy = true + } +} + +resource "test_object" "B" { + count = 2 + test_string = test_object.A.test_string +} diff --git a/terraform/testdata/transform-destroy-cbd-edge-basic/main.tf b/terraform/testdata/transform-destroy-cbd-edge-basic/main.tf new file mode 100644 index 000000000..a17d8b4e3 --- /dev/null +++ b/terraform/testdata/transform-destroy-cbd-edge-basic/main.tf @@ -0,0 +1,9 @@ +resource "test_object" "A" { + lifecycle { + create_before_destroy = true + } +} + +resource "test_object" "B" { + test_string = "${test_object.A.id}" +} diff --git a/terraform/testdata/transform-destroy-cbd-edge-multi/main.tf b/terraform/testdata/transform-destroy-cbd-edge-multi/main.tf new file mode 100644 index 000000000..964bc44cf --- /dev/null +++ b/terraform/testdata/transform-destroy-cbd-edge-multi/main.tf @@ -0,0 +1,15 @@ +resource "test_object" "A" { + lifecycle { + create_before_destroy = true + } +} + +resource 
"test_object" "B" { + lifecycle { + create_before_destroy = true + } +} + +resource "test_object" "C" { + test_string = "${test_object.A.id}-${test_object.B.id}" +} diff --git a/terraform/testdata/validate-variable-custom-validations-child/child/child.tf b/terraform/testdata/validate-variable-custom-validations-child/child/child.tf new file mode 100644 index 000000000..88598e042 --- /dev/null +++ b/terraform/testdata/validate-variable-custom-validations-child/child/child.tf @@ -0,0 +1,16 @@ +# This feature is currently experimental. +# (If you're currently cleaning up after concluding the experiment, +# remember to also clean up similar references in the configs package +# under "invalid-files" and "invalid-modules".) +terraform { + experiments = [variable_validation] +} + +variable "test" { + type = string + + validation { + condition = var.test != "nope" + error_message = "Value must not be \"nope\"." + } +} diff --git a/terraform/testdata/validate-variable-custom-validations-child/validate-variable-custom-validations.tf b/terraform/testdata/validate-variable-custom-validations-child/validate-variable-custom-validations.tf new file mode 100644 index 000000000..8b8111e67 --- /dev/null +++ b/terraform/testdata/validate-variable-custom-validations-child/validate-variable-custom-validations.tf @@ -0,0 +1,5 @@ +module "child" { + source = "./child" + + test = "nope" +} diff --git a/terraform/transform.go b/terraform/transform.go index fd3f5c7da..d587c89e4 100644 --- a/terraform/transform.go +++ b/terraform/transform.go @@ -4,6 +4,7 @@ import ( "log" "github.com/hashicorp/terraform/dag" + "github.com/hashicorp/terraform/helper/logging" ) // GraphTransformer is the interface that transformers implement. This @@ -45,7 +46,7 @@ func (t *graphTransformerMulti) Transform(g *Graph) error { return err } if thisStepStr := g.StringWithNodeTypes(); thisStepStr != lastStepStr { - log.Printf("[TRACE] (graphTransformerMulti) Completed graph transform %T with new graph:\n%s------", t, thisStepStr) + log.Printf("[TRACE] (graphTransformerMulti) Completed graph transform %T with new graph:\n%s ------", t, logging.Indent(thisStepStr)) lastStepStr = thisStepStr } else { log.Printf("[TRACE] (graphTransformerMulti) Completed graph transform %T (no changes)", t) diff --git a/terraform/transform_attach_schema.go b/terraform/transform_attach_schema.go index c7695dd4e..10d1e4641 100644 --- a/terraform/transform_attach_schema.go +++ b/terraform/transform_attach_schema.go @@ -59,9 +59,8 @@ func (t *AttachSchemaTransformer) Transform(g *Graph) error { mode := addr.Resource.Mode typeName := addr.Resource.Type providerAddr, _ := tv.ProvidedBy() - providerType := providerAddr.ProviderConfig.Type - schema, version := t.Schemas.ResourceTypeConfig(providerType, mode, typeName) + schema, version := t.Schemas.ResourceTypeConfig(providerAddr.Provider, mode, typeName) if schema == nil { log.Printf("[ERROR] AttachSchemaTransformer: No resource schema available for %s", addr) continue @@ -72,7 +71,8 @@ func (t *AttachSchemaTransformer) Transform(g *Graph) error { if tv, ok := v.(GraphNodeAttachProviderConfigSchema); ok { providerAddr := tv.ProviderAddr() - schema := t.Schemas.ProviderConfig(providerAddr.ProviderConfig.Type) + schema := t.Schemas.ProviderConfig(providerAddr.Provider) + if schema == nil { log.Printf("[ERROR] AttachSchemaTransformer: No provider config schema available for %s", providerAddr) continue diff --git a/terraform/transform_config_flat.go b/terraform/transform_config_flat.go deleted file mode 100644 index 
866c91759..000000000 --- a/terraform/transform_config_flat.go +++ /dev/null @@ -1,71 +0,0 @@ -package terraform - -import ( - "github.com/hashicorp/terraform/configs" - "github.com/hashicorp/terraform/dag" -) - -// FlatConfigTransformer is a GraphTransformer that adds the configuration -// to the graph. The module used to configure this transformer must be -// the root module. -// -// This transform adds the nodes but doesn't connect any of the references. -// The ReferenceTransformer should be used for that. -// -// NOTE: In relation to ConfigTransformer: this is a newer generation config -// transformer. It puts the _entire_ config into the graph (there is no -// "flattening" step as before). -type FlatConfigTransformer struct { - Concrete ConcreteResourceNodeFunc // What to turn resources into - - Config *configs.Config -} - -func (t *FlatConfigTransformer) Transform(g *Graph) error { - // We have nothing to do if there is no configuration. - if t.Config == nil { - return nil - } - - return t.transform(g, t.Config) -} - -func (t *FlatConfigTransformer) transform(g *Graph, config *configs.Config) error { - // If we have no configuration then there's nothing to do. - if config == nil { - return nil - } - - // Transform all the children. - for _, c := range config.Children { - if err := t.transform(g, c); err != nil { - return err - } - } - - module := config.Module - // For now we assume that each module call produces only one module - // instance with no key, since we don't yet support "count" and "for_each" - // on modules. - // FIXME: As part of supporting "count" and "for_each" on modules, rework - // this so that we'll "expand" the module call first and then create graph - // nodes for each module instance separately. - instPath := config.Path.UnkeyedInstanceShim() - - for _, r := range module.ManagedResources { - addr := r.Addr().Absolute(instPath) - abstract := &NodeAbstractResource{ - Addr: addr, - Config: r, - } - // Grab the address for this resource - var node dag.Vertex = abstract - if f := t.Concrete; f != nil { - node = f(abstract) - } - - g.Add(node) - } - - return nil -} diff --git a/terraform/transform_config_flat_test.go b/terraform/transform_config_flat_test.go deleted file mode 100644 index 79fc09367..000000000 --- a/terraform/transform_config_flat_test.go +++ /dev/null @@ -1,42 +0,0 @@ -package terraform - -import ( - "strings" - "testing" - - "github.com/hashicorp/terraform/addrs" -) - -func TestFlatConfigTransformer_nilModule(t *testing.T) { - g := Graph{Path: addrs.RootModuleInstance} - tf := &FlatConfigTransformer{} - if err := tf.Transform(&g); err != nil { - t.Fatalf("err: %s", err) - } - - if len(g.Vertices()) > 0 { - t.Fatal("graph should be empty") - } -} - -func TestFlatConfigTransformer(t *testing.T) { - g := Graph{Path: addrs.RootModuleInstance} - tf := &FlatConfigTransformer{ - Config: testModule(t, "transform-flat-config-basic"), - } - if err := tf.Transform(&g); err != nil { - t.Fatalf("err: %s", err) - } - - actual := strings.TrimSpace(g.String()) - expected := strings.TrimSpace(testTransformFlatConfigBasicStr) - if actual != expected { - t.Fatalf("bad:\n\n%s", actual) - } -} - -const testTransformFlatConfigBasicStr = ` -aws_instance.bar -aws_instance.foo -module.child.aws_instance.baz -` diff --git a/terraform/transform_destroy_cbd.go b/terraform/transform_destroy_cbd.go index 2f4d5edeb..948cf0e67 100644 --- a/terraform/transform_destroy_cbd.go +++ b/terraform/transform_destroy_cbd.go @@ -90,7 +90,7 @@ func (t *ForcedCBDTransformer) hasCBDDescendent(g 
*Graph, v dag.Vertex) bool { return true } - for _, ov := range s.List() { + for _, ov := range s { dn, ok := ov.(GraphNodeDestroyerCBD) if !ok { continue @@ -138,14 +138,12 @@ type CBDEdgeTransformer struct { func (t *CBDEdgeTransformer) Transform(g *Graph) error { // Go through and reverse any destroy edges - destroyMap := make(map[string][]dag.Vertex) for _, v := range g.Vertices() { dn, ok := v.(GraphNodeDestroyerCBD) if !ok { continue } - dern, ok := v.(GraphNodeDestroyer) - if !ok { + if _, ok = v.(GraphNodeDestroyer); !ok { continue } @@ -153,156 +151,19 @@ func (t *CBDEdgeTransformer) Transform(g *Graph) error { continue } - // Find the destroy edge. There should only be one. + // Find the resource edges for _, e := range g.EdgesTo(v) { - // Not a destroy edge, ignore it - de, ok := e.(*DestroyEdge) - if !ok { - continue + src := e.Source() + + // If source is a create node, invert the edge. + // This covers both the node's own creator, as well as reversing + // any dependants' edges. + if _, ok := src.(GraphNodeCreator); ok { + log.Printf("[TRACE] CBDEdgeTransformer: reversing edge %s -> %s", dag.VertexName(src), dag.VertexName(v)) + g.RemoveEdge(e) + g.Connect(dag.BasicEdge(v, src)) } - - log.Printf("[TRACE] CBDEdgeTransformer: inverting edge: %s => %s", - dag.VertexName(de.Source()), dag.VertexName(de.Target())) - - // Found it! Invert. - g.RemoveEdge(de) - applyNode := de.Source() - destroyNode := de.Target() - g.Connect(&DestroyEdge{S: destroyNode, T: applyNode}) - } - - // If the address has an index, we strip that. Our depMap creation - // graph doesn't expand counts so we don't currently get _exact_ - // dependencies. One day when we limit dependencies more exactly - // this will have to change. We have a test case covering this - // (depNonCBDCountBoth) so it'll be caught. - addr := dern.DestroyAddr() - key := addr.ContainingResource().String() - - // Add this to the list of nodes that we need to fix up - // the edges for (step 2 above in the docs). - destroyMap[key] = append(destroyMap[key], v) - } - - // If we have no CBD nodes, then our work here is done - if len(destroyMap) == 0 { - return nil - } - - // We have CBD nodes. We now have to move on to the much more difficult - // task of connecting dependencies of the creation side of the destroy - // to the destruction node. The easiest way to explain this is an example: - // - // Given a pre-destroy dependence of: A => B - // And A has CBD set. - // - // The resulting graph should be: A => B => A_d - // - // They key here is that B happens before A is destroyed. This is to - // facilitate the primary purpose for CBD: making sure that downstreams - // are properly updated to avoid downtime before the resource is destroyed. - // - // We can't trust that the resource being destroyed or anything that - // depends on it is actually in our current graph so we make a new - // graph in order to determine those dependencies and add them in. - log.Printf("[TRACE] CBDEdgeTransformer: building graph to find dependencies...") - depMap, err := t.depMap(destroyMap) - if err != nil { - return err - } - - // We now have the mapping of resource addresses to the destroy - // nodes they need to depend on. We now go through our own vertices to - // find any matching these addresses and make the connection. - for _, v := range g.Vertices() { - // We're looking for creators - rn, ok := v.(GraphNodeCreator) - if !ok { - continue - } - - // Get the address - addr := rn.CreateAddr() - - // If the address has an index, we strip that. 
Our depMap creation - // graph doesn't expand counts so we don't currently get _exact_ - // dependencies. One day when we limit dependencies more exactly - // this will have to change. We have a test case covering this - // (depNonCBDCount) so it'll be caught. - key := addr.ContainingResource().String() - - // If there is nothing this resource should depend on, ignore it - dns, ok := depMap[key] - if !ok { - continue - } - - // We have nodes! Make the connection - for _, dn := range dns { - log.Printf("[TRACE] CBDEdgeTransformer: destroy depends on dependence: %s => %s", - dag.VertexName(dn), dag.VertexName(v)) - g.Connect(dag.BasicEdge(dn, v)) } } - return nil } - -func (t *CBDEdgeTransformer) depMap(destroyMap map[string][]dag.Vertex) (map[string][]dag.Vertex, error) { - // Build the graph of our config, this ensures that all resources - // are present in the graph. - g, diags := (&BasicGraphBuilder{ - Steps: []GraphTransformer{ - &FlatConfigTransformer{Config: t.Config}, - &AttachResourceConfigTransformer{Config: t.Config}, - &AttachStateTransformer{State: t.State}, - &AttachSchemaTransformer{Schemas: t.Schemas}, - &ReferenceTransformer{}, - }, - Name: "CBDEdgeTransformer", - }).Build(nil) - if diags.HasErrors() { - return nil, diags.Err() - } - - // Using this graph, build the list of destroy nodes that each resource - // address should depend on. For example, when we find B, we map the - // address of B to A_d in the "depMap" variable below. - depMap := make(map[string][]dag.Vertex) - for _, v := range g.Vertices() { - // We're looking for resources. - rn, ok := v.(GraphNodeResource) - if !ok { - continue - } - - // Get the address - addr := rn.ResourceAddr() - key := addr.String() - - // Get the destroy nodes that are destroying this resource. - // If there aren't any, then we don't need to worry about - // any connections. - dns, ok := destroyMap[key] - if !ok { - continue - } - - // Get the nodes that depend on this on. In the example above: - // finding B in A => B. - for _, v := range g.UpEdges(v).List() { - // We're looking for resources. - rn, ok := v.(GraphNodeResource) - if !ok { - continue - } - - // Keep track of the destroy nodes that this address - // needs to depend on. - key := rn.ResourceAddr().String() - depMap[key] = append(depMap[key], dns...) 
- } - } - - return depMap, nil -} diff --git a/terraform/transform_destroy_cbd_test.go b/terraform/transform_destroy_cbd_test.go index 665d81647..0831d7ad9 100644 --- a/terraform/transform_destroy_cbd_test.go +++ b/terraform/transform_destroy_cbd_test.go @@ -1,198 +1,362 @@ package terraform import ( + "regexp" "strings" "testing" "github.com/hashicorp/terraform/addrs" + "github.com/hashicorp/terraform/plans" + "github.com/hashicorp/terraform/states" ) +func cbdTestGraph(t *testing.T, mod string, changes *plans.Changes, state *states.State) *Graph { + module := testModule(t, mod) + + applyBuilder := &ApplyGraphBuilder{ + Config: module, + Changes: changes, + Components: simpleMockComponentFactory(), + Schemas: simpleTestSchemas(), + State: state, + } + g, err := (&BasicGraphBuilder{ + Steps: cbdTestSteps(applyBuilder.Steps()), + Name: "ApplyGraphBuilder", + }).Build(addrs.RootModuleInstance) + if err != nil { + t.Fatalf("err: %s", err) + } + + return filterInstances(g) +} + +// override the apply graph builder to halt the process after CBD +func cbdTestSteps(steps []GraphTransformer) []GraphTransformer { + found := false + var i int + var t GraphTransformer + for i, t = range steps { + if _, ok := t.(*CBDEdgeTransformer); ok { + found = true + break + } + } + + if !found { + panic("CBDEdgeTransformer not found") + } + + return steps[:i+1] +} + +// remove extra nodes for easier test comparisons +func filterInstances(g *Graph) *Graph { + for _, v := range g.Vertices() { + if _, ok := v.(GraphNodeResourceInstance); !ok { + g.Remove(v) + } + + } + return g +} + func TestCBDEdgeTransformer(t *testing.T) { - g := Graph{Path: addrs.RootModuleInstance} - g.Add(&graphNodeCreatorTest{AddrString: "test_object.A"}) - g.Add(&graphNodeCreatorTest{AddrString: "test_object.B"}) - g.Add(&graphNodeDestroyerTest{AddrString: "test_object.A", CBD: true}) - - module := testModule(t, "transform-destroy-edge-basic") - - { - tf := &DestroyEdgeTransformer{ - Config: module, - Schemas: simpleTestSchemas(), - } - if err := tf.Transform(&g); err != nil { - t.Fatalf("err: %s", err) - } + changes := &plans.Changes{ + Resources: []*plans.ResourceInstanceChangeSrc{ + { + Addr: mustResourceInstanceAddr("test_object.A"), + ChangeSrc: plans.ChangeSrc{ + Action: plans.CreateThenDelete, + }, + }, + { + Addr: mustResourceInstanceAddr("test_object.B"), + ChangeSrc: plans.ChangeSrc{ + Action: plans.Update, + }, + }, + }, } - { - tf := &CBDEdgeTransformer{ - Config: module, - Schemas: simpleTestSchemas(), - } - if err := tf.Transform(&g); err != nil { - t.Fatalf("err: %s", err) - } - } + state := states.NewState() + root := state.EnsureModule(addrs.RootModuleInstance) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.A").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"A"}`), + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.B").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"B","test_list":["x"]}`), + Dependencies: []addrs.AbsResource{mustResourceAddr("test_object.A")}, + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + + g := cbdTestGraph(t, "transform-destroy-cbd-edge-basic", changes, state) + g = filterInstances(g) actual := strings.TrimSpace(g.String()) - expected := strings.TrimSpace(testTransformCBDEdgeBasicStr) - if actual != expected { + expected := 
regexp.MustCompile(strings.TrimSpace(` +(?m)test_object.A +test_object.A \(destroy deposed \w+\) + test_object.A + test_object.B +test_object.B + test_object.A +`)) + + if !expected.MatchString(actual) { t.Fatalf("wrong result\n\ngot:\n%s\n\nwant:\n%s", actual, expected) } } -// FIXME: see if there is a worthwhile test to create from this. -// CBD is marked on created nodes during the plan phase now, and the -// CBDEdgeTransformer only takes care of the final edge reversal. -/* -func TestCBDEdgeTransformer_depNonCBD(t *testing.T) { - g := Graph{Path: addrs.RootModuleInstance} - g.Add(&graphNodeCreatorTest{AddrString: "test_object.A"}) - g.Add(&graphNodeCreatorTest{AddrString: "test_object.B"}) - g.Add(&graphNodeDestroyerTest{AddrString: "test_object.A"}) - g.Add(&graphNodeDestroyerTest{AddrString: "test_object.B", CBD: true}) - - module := testModule(t, "transform-destroy-edge-basic") - - { - tf := &DestroyEdgeTransformer{ - Config: module, - Schemas: simpleTestSchemas(), - } - if err := tf.Transform(&g); err != nil { - t.Fatalf("err: %s", err) - } +func TestCBDEdgeTransformerMulti(t *testing.T) { + changes := &plans.Changes{ + Resources: []*plans.ResourceInstanceChangeSrc{ + { + Addr: mustResourceInstanceAddr("test_object.A"), + ChangeSrc: plans.ChangeSrc{ + Action: plans.CreateThenDelete, + }, + }, + { + Addr: mustResourceInstanceAddr("test_object.B"), + ChangeSrc: plans.ChangeSrc{ + Action: plans.CreateThenDelete, + }, + }, + { + Addr: mustResourceInstanceAddr("test_object.C"), + ChangeSrc: plans.ChangeSrc{ + Action: plans.Update, + }, + }, + }, } - { - tf := &CBDEdgeTransformer{ - Config: module, - Schemas: simpleTestSchemas(), - } - if err := tf.Transform(&g); err != nil { - t.Fatalf("err: %s", err) - } - } + state := states.NewState() + root := state.EnsureModule(addrs.RootModuleInstance) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.A").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"A"}`), + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.B").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"B"}`), + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.C").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"C","test_list":["x"]}`), + Dependencies: []addrs.AbsResource{ + mustResourceAddr("test_object.A"), + mustResourceAddr("test_object.B"), + }, + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + + g := cbdTestGraph(t, "transform-destroy-cbd-edge-multi", changes, state) + g = filterInstances(g) actual := strings.TrimSpace(g.String()) - expected := strings.TrimSpace(testTransformCBDEdgeDepNonCBDStr) - if actual != expected { + expected := regexp.MustCompile(strings.TrimSpace(` +(?m)test_object.A +test_object.A \(destroy deposed \w+\) + test_object.A + test_object.C +test_object.B +test_object.B \(destroy deposed \w+\) + test_object.B + test_object.C +test_object.C + test_object.A + test_object.B +`)) + + if !expected.MatchString(actual) { t.Fatalf("wrong result\n\ngot:\n%s\n\nwant:\n%s", actual, expected) } } -*/ func TestCBDEdgeTransformer_depNonCBDCount(t *testing.T) { - g := Graph{Path: addrs.RootModuleInstance} - g.Add(&graphNodeCreatorTest{AddrString: "test_object.A"}) - 
g.Add(&graphNodeCreatorTest{AddrString: "test_object.B[0]"}) - g.Add(&graphNodeCreatorTest{AddrString: "test_object.B[1]"}) - g.Add(&graphNodeDestroyerTest{AddrString: "test_object.A", CBD: true}) - - module := testModule(t, "transform-destroy-edge-splat") - - { - tf := &DestroyEdgeTransformer{ - Config: module, - Schemas: simpleTestSchemas(), - } - if err := tf.Transform(&g); err != nil { - t.Fatalf("err: %s", err) - } + changes := &plans.Changes{ + Resources: []*plans.ResourceInstanceChangeSrc{ + { + Addr: mustResourceInstanceAddr("test_object.A"), + ChangeSrc: plans.ChangeSrc{ + Action: plans.CreateThenDelete, + }, + }, + { + Addr: mustResourceInstanceAddr("test_object.B[0]"), + ChangeSrc: plans.ChangeSrc{ + Action: plans.Update, + }, + }, + { + Addr: mustResourceInstanceAddr("test_object.B[1]"), + ChangeSrc: plans.ChangeSrc{ + Action: plans.Update, + }, + }, + }, } - { - tf := &CBDEdgeTransformer{ - Config: module, - Schemas: simpleTestSchemas(), - } - if err := tf.Transform(&g); err != nil { - t.Fatalf("err: %s", err) - } - } + state := states.NewState() + root := state.EnsureModule(addrs.RootModuleInstance) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.A").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"A"}`), + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.B[0]").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"B","test_list":["x"]}`), + Dependencies: []addrs.AbsResource{mustResourceAddr("test_object.A")}, + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.B[1]").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"B","test_list":["x"]}`), + Dependencies: []addrs.AbsResource{mustResourceAddr("test_object.A")}, + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + + g := cbdTestGraph(t, "transform-cbd-destroy-edge-count", changes, state) actual := strings.TrimSpace(g.String()) - expected := strings.TrimSpace(` -test_object.A -test_object.A (destroy) + expected := regexp.MustCompile(strings.TrimSpace(` +(?m)test_object.A +test_object.A \(destroy deposed \w+\) test_object.A - test_object.B[0] - test_object.B[1] -test_object.B[0] -test_object.B[1] - `) - if actual != expected { + test_object.B\[0\] + test_object.B\[1\] +test_object.B\[0\] + test_object.A +test_object.B\[1\] + test_object.A`)) + + if !expected.MatchString(actual) { t.Fatalf("wrong result\n\ngot:\n%s\n\nwant:\n%s", actual, expected) } } func TestCBDEdgeTransformer_depNonCBDCountBoth(t *testing.T) { - g := Graph{Path: addrs.RootModuleInstance} - g.Add(&graphNodeCreatorTest{AddrString: "test_object.A[0]"}) - g.Add(&graphNodeCreatorTest{AddrString: "test_object.A[1]"}) - g.Add(&graphNodeCreatorTest{AddrString: "test_object.B[0]"}) - g.Add(&graphNodeCreatorTest{AddrString: "test_object.B[1]"}) - g.Add(&graphNodeDestroyerTest{AddrString: "test_object.A[0]", CBD: true}) - g.Add(&graphNodeDestroyerTest{AddrString: "test_object.A[1]", CBD: true}) - - module := testModule(t, "transform-destroy-edge-splat") - - { - tf := &DestroyEdgeTransformer{ - Config: module, - Schemas: simpleTestSchemas(), - } - if err := tf.Transform(&g); err != nil { - t.Fatalf("err: %s", err) - } + changes := &plans.Changes{ + Resources: 
[]*plans.ResourceInstanceChangeSrc{ + { + Addr: mustResourceInstanceAddr("test_object.A[0]"), + ChangeSrc: plans.ChangeSrc{ + Action: plans.CreateThenDelete, + }, + }, + { + Addr: mustResourceInstanceAddr("test_object.A[1]"), + ChangeSrc: plans.ChangeSrc{ + Action: plans.CreateThenDelete, + }, + }, + { + Addr: mustResourceInstanceAddr("test_object.B[0]"), + ChangeSrc: plans.ChangeSrc{ + Action: plans.Update, + }, + }, + { + Addr: mustResourceInstanceAddr("test_object.B[1]"), + ChangeSrc: plans.ChangeSrc{ + Action: plans.Update, + }, + }, + }, } - { - tf := &CBDEdgeTransformer{ - Config: module, - Schemas: simpleTestSchemas(), - } - if err := tf.Transform(&g); err != nil { - t.Fatalf("err: %s", err) - } - } + state := states.NewState() + root := state.EnsureModule(addrs.RootModuleInstance) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.A[0]").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"A"}`), + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.A[1]").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"A"}`), + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.B[0]").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"B","test_list":["x"]}`), + Dependencies: []addrs.AbsResource{mustResourceAddr("test_object.A")}, + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.B[1]").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"B","test_list":["x"]}`), + Dependencies: []addrs.AbsResource{mustResourceAddr("test_object.A")}, + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + + g := cbdTestGraph(t, "transform-cbd-destroy-edge-both-count", changes, state) actual := strings.TrimSpace(g.String()) - expected := strings.TrimSpace(` -test_object.A[0] -test_object.A[0] (destroy) - test_object.A[0] - test_object.B[0] - test_object.B[1] -test_object.A[1] -test_object.A[1] (destroy) - test_object.A[1] - test_object.B[0] - test_object.B[1] -test_object.B[0] -test_object.B[1] - `) - if actual != expected { + expected := regexp.MustCompile(strings.TrimSpace(` +test_object.A\[0\] +test_object.A\[0\] \(destroy deposed \w+\) + test_object.A\[0\] + test_object.B\[0\] + test_object.B\[1\] +test_object.A\[1\] +test_object.A\[1\] \(destroy deposed \w+\) + test_object.A\[1\] + test_object.B\[0\] + test_object.B\[1\] +test_object.B\[0\] + test_object.A\[0\] + test_object.A\[1\] +test_object.B\[1\] + test_object.A\[0\] + test_object.A\[1\] +`)) + + if !expected.MatchString(actual) { t.Fatalf("wrong result\n\ngot:\n%s\n\nwant:\n%s", actual, expected) } } - -const testTransformCBDEdgeBasicStr = ` -test_object.A -test_object.A (destroy) - test_object.A - test_object.B -test_object.B -` - -const testTransformCBDEdgeDepNonCBDStr = ` -test_object.A -test_object.A (destroy) (modified) - test_object.A - test_object.B - test_object.B (destroy) -test_object.B -test_object.B (destroy) - test_object.B -` diff --git a/terraform/transform_destroy_edge.go b/terraform/transform_destroy_edge.go index 7fb415bdf..e3aedab56 100644 --- a/terraform/transform_destroy_edge.go +++ 
b/terraform/transform_destroy_edge.go
@@ -54,26 +54,37 @@ type DestroyEdgeTransformer struct {
 func (t *DestroyEdgeTransformer) Transform(g *Graph) error {
 	// Build a map of what is being destroyed (by address string) to
-	// the list of destroyers. Usually there will be at most one destroyer
-	// per node, but we allow multiple if present for completeness.
+	// the list of destroyers.
 	destroyers := make(map[string][]GraphNodeDestroyer)
-	destroyerAddrs := make(map[string]addrs.AbsResourceInstance)
+
+	// Record the creators, which will need to depend on the destroyers if they
+	// are only being updated.
+	creators := make(map[string]GraphNodeCreator)
+
+	// destroyersByResource records each destroyer by its AbsResource address.
+	// We use this because dependencies are only referenced as resources, but we
+	// will want to connect all the individual instances for correct ordering.
+	destroyersByResource := make(map[string][]GraphNodeDestroyer)
 	for _, v := range g.Vertices() {
-		dn, ok := v.(GraphNodeDestroyer)
-		if !ok {
-			continue
-		}
+		switch n := v.(type) {
+		case GraphNodeDestroyer:
+			addrP := n.DestroyAddr()
+			if addrP == nil {
+				log.Printf("[WARN] DestroyEdgeTransformer: %q (%T) has no destroy address", dag.VertexName(n), v)
+				continue
+			}
+			addr := *addrP

-		addrP := dn.DestroyAddr()
-		if addrP == nil {
-			continue
-		}
-		addr := *addrP
+			key := addr.String()
+			log.Printf("[TRACE] DestroyEdgeTransformer: %q (%T) destroys %s", dag.VertexName(n), v, key)
+			destroyers[key] = append(destroyers[key], n)

-		key := addr.String()
-		log.Printf("[TRACE] DestroyEdgeTransformer: %q (%T) destroys %s", dag.VertexName(dn), v, key)
-		destroyers[key] = append(destroyers[key], dn)
-		destroyerAddrs[key] = addr
+			resAddr := addr.Resource.Resource.Absolute(addr.Module).String()
+			destroyersByResource[resAddr] = append(destroyersByResource[resAddr], n)
+		case GraphNodeCreator:
+			addr := n.CreateAddr()
+			creators[addr.String()] = n
+		}
 	}

 	// If we aren't destroying anything, there will be no edges to make
@@ -82,6 +93,40 @@ func (t *DestroyEdgeTransformer) Transform(g *Graph) error {
 		return nil
 	}

+	// Connect destroy dependencies as stored in the state
+	for _, ds := range destroyers {
+		for _, des := range ds {
+			ri, ok := des.(GraphNodeResourceInstance)
+			if !ok {
+				continue
+			}
+
+			for _, resAddr := range ri.StateDependencies() {
+				for _, desDep := range destroyersByResource[resAddr.String()] {
+					log.Printf("[TRACE] DestroyEdgeTransformer: %s has stored dependency of %s\n", dag.VertexName(desDep), dag.VertexName(des))
+					g.Connect(dag.BasicEdge(desDep, des))
+
+				}
+			}
+		}
+	}
+
+	// Connect creators to any destroyers on which they may depend
+	for _, c := range creators {
+		ri, ok := c.(GraphNodeResourceInstance)
+		if !ok {
+			continue
+		}
+
+		for _, resAddr := range ri.StateDependencies() {
+			for _, desDep := range destroyersByResource[resAddr.String()] {
+				log.Printf("[TRACE] DestroyEdgeTransformer: %s has stored dependency of %s\n", dag.VertexName(c), dag.VertexName(desDep))
+				g.Connect(dag.BasicEdge(c, desDep))
+
+			}
+		}
+	}
+
 	// Go through and connect creators to destroyers.
Going along with // our example, this makes: A_d => A for _, v := range g.Vertices() { @@ -95,13 +140,7 @@ func (t *DestroyEdgeTransformer) Transform(g *Graph) error { continue } - key := addr.String() - ds := destroyers[key] - if len(ds) == 0 { - continue - } - - for _, d := range ds { + for _, d := range destroyers[addr.String()] { // For illustrating our example a_d := d.(dag.Vertex) a := v @@ -110,7 +149,7 @@ func (t *DestroyEdgeTransformer) Transform(g *Graph) error { "[TRACE] DestroyEdgeTransformer: connecting creator %q with destroyer %q", dag.VertexName(a), dag.VertexName(a_d)) - g.Connect(&DestroyEdge{S: a, T: a_d}) + g.Connect(dag.BasicEdge(a, a_d)) // Attach the destroy node to the creator // There really shouldn't be more than one destroyer, but even if @@ -124,158 +163,46 @@ func (t *DestroyEdgeTransformer) Transform(g *Graph) error { } } - // This is strange but is the easiest way to get the dependencies - // of a node that is being destroyed. We use another graph to make sure - // the resource is in the graph and ask for references. We have to do this - // because the node that is being destroyed may NOT be in the graph. - // - // Example: resource A is force new, then destroy A AND create A are - // in the graph. BUT if resource A is just pure destroy, then only - // destroy A is in the graph, and create A is not. - providerFn := func(a *NodeAbstractProvider) dag.Vertex { - return &NodeApplyableProvider{NodeAbstractProvider: a} - } - steps := []GraphTransformer{ - // Add the local values - &LocalTransformer{Config: t.Config}, + return t.pruneResources(g) +} - // Add outputs and metadata - &OutputTransformer{Config: t.Config}, - &AttachResourceConfigTransformer{Config: t.Config}, - &AttachStateTransformer{State: t.State}, - - // Add all the variables. We can depend on resources through - // variables due to module parameters, and we need to properly - // determine that. - &RootVariableTransformer{Config: t.Config}, - &ModuleVariableTransformer{Config: t.Config}, - - TransformProviders(nil, providerFn, t.Config), - - // Must attach schemas before ReferenceTransformer so that we can - // analyze the configuration to find references. - &AttachSchemaTransformer{Schemas: t.Schemas}, - - &ReferenceTransformer{}, - } - - // Go through all the nodes being destroyed and create a graph. - // The resulting graph is only of things being CREATED. For example, - // following our example, the resulting graph would be: - // - // A, B (with no edges) - // - var tempG Graph - var tempDestroyed []dag.Vertex - for d := range destroyers { - // d is the string key for the resource being destroyed. We actually - // want the address value, which we stashed earlier. - addr := destroyerAddrs[d] - - // This part is a little bit weird but is the best way to - // find the dependencies we need to: build a graph and use the - // attach config and state transformers then ask for references. - abstract := NewNodeAbstractResourceInstance(addr) - tempG.Add(abstract) - tempDestroyed = append(tempDestroyed, abstract) - - // We also add the destroy version here since the destroy can - // depend on things that the creation doesn't (destroy provisioners). - destroy := &NodeDestroyResourceInstance{NodeAbstractResourceInstance: abstract} - tempG.Add(destroy) - tempDestroyed = append(tempDestroyed, destroy) - } - - // Run the graph transforms so we have the information we need to - // build references. 
- log.Printf("[TRACE] DestroyEdgeTransformer: constructing temporary graph for analysis of references, starting from:\n%s", tempG.StringWithNodeTypes()) - for _, s := range steps { - log.Printf("[TRACE] DestroyEdgeTransformer: running %T on temporary graph", s) - if err := s.Transform(&tempG); err != nil { - log.Printf("[TRACE] DestroyEdgeTransformer: %T failed: %s", s, err) - return err +// If there are only destroy instances for a particular resource, there's no +// reason for the resource node to prepare the state. Remove Resource nodes so +// that they don't fail by trying to evaluate a resource that is only being +// destroyed along with its dependencies. +func (t *DestroyEdgeTransformer) pruneResources(g *Graph) error { + for _, v := range g.Vertices() { + n, ok := v.(*NodeApplyableResource) + if !ok { + continue } - } - log.Printf("[TRACE] DestroyEdgeTransformer: temporary reference graph:\n%s", tempG.String()) - // Go through all the nodes in the graph and determine what they - // depend on. - for _, v := range tempDestroyed { - // Find all ancestors of this to determine the edges we'll depend on - vs, err := tempG.Ancestors(v) + // if there are only destroy dependencies, we don't need this node + descendents, err := g.Descendents(n) if err != nil { return err } - refs := make([]dag.Vertex, 0, vs.Len()) - for _, raw := range vs.List() { - refs = append(refs, raw.(dag.Vertex)) + nonDestroyInstanceFound := false + for _, v := range descendents { + if _, ok := v.(*NodeApplyableResourceInstance); ok { + nonDestroyInstanceFound = true + break + } } - refNames := make([]string, len(refs)) - for i, ref := range refs { - refNames[i] = dag.VertexName(ref) - } - log.Printf( - "[TRACE] DestroyEdgeTransformer: creation node %q references %s", - dag.VertexName(v), refNames) - - // If we have no references, then we won't need to do anything - if len(refs) == 0 { + if nonDestroyInstanceFound { continue } - // Get the destroy node for this. In the example of our struct, - // we are currently at B and we're looking for B_d. - rn, ok := v.(GraphNodeResourceInstance) - if !ok { - log.Printf("[TRACE] DestroyEdgeTransformer: skipping %s, since it's not a resource", dag.VertexName(v)) - continue - } - - addr := rn.ResourceInstanceAddr() - dns := destroyers[addr.String()] - - // We have dependencies, check if any are being destroyed - // to build the list of things that we must depend on! - // - // In the example of the struct, if we have: - // - // B_d => A_d => A => B - // - // Then at this point in the algorithm we started with B_d, - // we built B (to get dependencies), and we found A. We're now looking - // to see if A_d exists. - var depDestroyers []dag.Vertex - for _, v := range refs { - rn, ok := v.(GraphNodeResourceInstance) - if !ok { - continue - } - - addr := rn.ResourceInstanceAddr() - key := addr.String() - if ds, ok := destroyers[key]; ok { - for _, d := range ds { - depDestroyers = append(depDestroyers, d.(dag.Vertex)) - log.Printf( - "[TRACE] DestroyEdgeTransformer: destruction of %q depends on %s", - key, dag.VertexName(d)) - } - } - } - - // Go through and make the connections. Use the variable - // names "a_d" and "b_d" to reference our example. 
-		for _, a_d := range dns {
-			for _, b_d := range depDestroyers {
-				if b_d != a_d {
-					log.Printf("[TRACE] DestroyEdgeTransformer: %q depends on %q", dag.VertexName(b_d), dag.VertexName(a_d))
-					g.Connect(dag.BasicEdge(b_d, a_d))
-				}
+		// connect all the through-edges, then delete the node
+		for _, d := range g.DownEdges(n) {
+			for _, u := range g.UpEdges(n) {
+				g.Connect(dag.BasicEdge(u, d))
 			}
 		}
+		log.Printf("[TRACE] DestroyEdgeTransformer: pruning unused resource node %s", dag.VertexName(n))
+		g.Remove(n)
 	}
-
 	return nil
 }
diff --git a/terraform/transform_destroy_edge_test.go b/terraform/transform_destroy_edge_test.go
index 01059db02..19a141b52 100644
--- a/terraform/transform_destroy_edge_test.go
+++ b/terraform/transform_destroy_edge_test.go
@@ -5,12 +5,37 @@ import (
 	"testing"

 	"github.com/hashicorp/terraform/addrs"
+	"github.com/hashicorp/terraform/states"
 )

 func TestDestroyEdgeTransformer_basic(t *testing.T) {
 	g := Graph{Path: addrs.RootModuleInstance}
-	g.Add(&graphNodeDestroyerTest{AddrString: "test_object.A"})
-	g.Add(&graphNodeDestroyerTest{AddrString: "test_object.B"})
+	g.Add(testDestroyNode("test_object.A"))
+	g.Add(testDestroyNode("test_object.B"))
+
+	state := states.NewState()
+	root := state.EnsureModule(addrs.RootModuleInstance)
+	root.SetResourceInstanceCurrent(
+		mustResourceInstanceAddr("test_object.A").Resource,
+		&states.ResourceInstanceObjectSrc{
+			Status:    states.ObjectReady,
+			AttrsJSON: []byte(`{"id":"A"}`),
+		},
+		mustProviderConfig(`provider["registry.terraform.io/-/test"]`),
+	)
+	root.SetResourceInstanceCurrent(
+		mustResourceInstanceAddr("test_object.B").Resource,
+		&states.ResourceInstanceObjectSrc{
+			Status:       states.ObjectReady,
+			AttrsJSON:    []byte(`{"id":"B","test_string":"x"}`),
+			Dependencies: []addrs.AbsResource{mustResourceAddr("test_object.A")},
+		},
+		mustProviderConfig(`provider["registry.terraform.io/-/test"]`),
+	)
+
+	if err := (&AttachStateTransformer{State: state}).Transform(&g); err != nil {
+		t.Fatal(err)
+	}
+
 	tf := &DestroyEdgeTransformer{
 		Config:  testModule(t, "transform-destroy-edge-basic"),
 		Schemas: simpleTestSchemas(),
@@ -26,31 +51,48 @@ func TestDestroyEdgeTransformer_basic(t *testing.T) {
 	}
 }

-func TestDestroyEdgeTransformer_create(t *testing.T) {
-	g := Graph{Path: addrs.RootModuleInstance}
-	g.Add(&graphNodeDestroyerTest{AddrString: "test_object.A"})
-	g.Add(&graphNodeDestroyerTest{AddrString: "test_object.B"})
-	g.Add(&graphNodeCreatorTest{AddrString: "test_object.A"})
-	tf := &DestroyEdgeTransformer{
-		Config:  testModule(t, "transform-destroy-edge-basic"),
-		Schemas: simpleTestSchemas(),
-	}
-	if err := tf.Transform(&g); err != nil {
-		t.Fatalf("err: %s", err)
-	}
-
-	actual := strings.TrimSpace(g.String())
-	expected := strings.TrimSpace(testTransformDestroyEdgeCreatorStr)
-	if actual != expected {
-		t.Fatalf("wrong result\n\ngot:\n%s\n\nwant:\n%s", actual, expected)
-	}
-}
-
 func TestDestroyEdgeTransformer_multi(t *testing.T) {
 	g := Graph{Path: addrs.RootModuleInstance}
-	g.Add(&graphNodeDestroyerTest{AddrString: "test_object.A"})
-	g.Add(&graphNodeDestroyerTest{AddrString: "test_object.B"})
-	g.Add(&graphNodeDestroyerTest{AddrString: "test_object.C"})
+	g.Add(testDestroyNode("test_object.A"))
+	g.Add(testDestroyNode("test_object.B"))
+	g.Add(testDestroyNode("test_object.C"))
+
+	state := states.NewState()
+	root := state.EnsureModule(addrs.RootModuleInstance)
+	root.SetResourceInstanceCurrent(
+		mustResourceInstanceAddr("test_object.A").Resource,
+		&states.ResourceInstanceObjectSrc{
+			Status:    states.ObjectReady,
+			AttrsJSON:
[]byte(`{"id":"A"}`), + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.B").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"B","test_string":"x"}`), + Dependencies: []addrs.AbsResource{mustResourceAddr("test_object.A")}, + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.C").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"C","test_string":"x"}`), + Dependencies: []addrs.AbsResource{ + mustResourceAddr("test_object.A"), + mustResourceAddr("test_object.B"), + }, + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + + if err := (&AttachStateTransformer{State: state}).Transform(&g); err != nil { + t.Fatal(err) + } + tf := &DestroyEdgeTransformer{ Config: testModule(t, "transform-destroy-edge-multi"), Schemas: simpleTestSchemas(), @@ -68,7 +110,7 @@ func TestDestroyEdgeTransformer_multi(t *testing.T) { func TestDestroyEdgeTransformer_selfRef(t *testing.T) { g := Graph{Path: addrs.RootModuleInstance} - g.Add(&graphNodeDestroyerTest{AddrString: "test_object.A"}) + g.Add(testDestroyNode("test_object.A")) tf := &DestroyEdgeTransformer{ Config: testModule(t, "transform-destroy-edge-self-ref"), Schemas: simpleTestSchemas(), @@ -86,8 +128,33 @@ func TestDestroyEdgeTransformer_selfRef(t *testing.T) { func TestDestroyEdgeTransformer_module(t *testing.T) { g := Graph{Path: addrs.RootModuleInstance} - g.Add(&graphNodeDestroyerTest{AddrString: "module.child.test_object.b"}) - g.Add(&graphNodeDestroyerTest{AddrString: "test_object.a"}) + g.Add(testDestroyNode("module.child.test_object.b")) + g.Add(testDestroyNode("test_object.a")) + state := states.NewState() + root := state.EnsureModule(addrs.RootModuleInstance) + child := state.EnsureModule(addrs.RootModuleInstance.Child("child", addrs.NoKey)) + root.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.a").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"a"}`), + Dependencies: []addrs.AbsResource{mustResourceAddr("module.child.test_object.b")}, + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + child.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.b").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"b","test_string":"x"}`), + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + + if err := (&AttachStateTransformer{State: state}).Transform(&g); err != nil { + t.Fatal(err) + } + tf := &DestroyEdgeTransformer{ Config: testModule(t, "transform-destroy-edge-module"), Schemas: simpleTestSchemas(), @@ -105,9 +172,46 @@ func TestDestroyEdgeTransformer_module(t *testing.T) { func TestDestroyEdgeTransformer_moduleOnly(t *testing.T) { g := Graph{Path: addrs.RootModuleInstance} - g.Add(&graphNodeDestroyerTest{AddrString: "module.child.test_object.a"}) - g.Add(&graphNodeDestroyerTest{AddrString: "module.child.test_object.b"}) - g.Add(&graphNodeDestroyerTest{AddrString: "module.child.test_object.c"}) + g.Add(testDestroyNode("module.child.test_object.a")) + g.Add(testDestroyNode("module.child.test_object.b")) + g.Add(testDestroyNode("module.child.test_object.c")) + + state := states.NewState() + child := 
state.EnsureModule(addrs.RootModuleInstance.Child("child", addrs.NoKey)) + child.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.a").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"a"}`), + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + child.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.b").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"b","test_string":"x"}`), + Dependencies: []addrs.AbsResource{mustResourceAddr("module.child.test_object.a")}, + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + child.SetResourceInstanceCurrent( + mustResourceInstanceAddr("test_object.c").Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"c","test_string":"x"}`), + Dependencies: []addrs.AbsResource{ + mustResourceAddr("module.child.test_object.a"), + mustResourceAddr("module.child.test_object.b"), + }, + }, + mustProviderConfig(`provider["registry.terraform.io/-/test"]`), + ) + + if err := (&AttachStateTransformer{State: state}).Transform(&g); err != nil { + t.Fatal(err) + } + tf := &DestroyEdgeTransformer{ Config: testModule(t, "transform-destroy-edge-module-only"), Schemas: simpleTestSchemas(), @@ -130,86 +234,17 @@ module.child.test_object.c (destroy) } } -type graphNodeCreatorTest struct { - AddrString string - Refs []string -} +func testDestroyNode(addrString string) GraphNodeDestroyer { + instAddr := mustResourceInstanceAddr(addrString) -var ( - _ GraphNodeCreator = (*graphNodeCreatorTest)(nil) - _ GraphNodeReferencer = (*graphNodeCreatorTest)(nil) -) + abs := NewNodeAbstractResource(instAddr.ContainingResource()) -func (n *graphNodeCreatorTest) Name() string { - return n.CreateAddr().String() -} - -func (n *graphNodeCreatorTest) mustAddr() addrs.AbsResourceInstance { - addr, diags := addrs.ParseAbsResourceInstanceStr(n.AddrString) - if diags.HasErrors() { - panic(diags.Err()) - } - return addr -} - -func (n *graphNodeCreatorTest) Path() addrs.ModuleInstance { - return n.mustAddr().Module -} - -func (n *graphNodeCreatorTest) CreateAddr() *addrs.AbsResourceInstance { - addr := n.mustAddr() - return &addr -} - -func (n *graphNodeCreatorTest) References() []*addrs.Reference { - ret := make([]*addrs.Reference, len(n.Refs)) - for i, str := range n.Refs { - ref, diags := addrs.ParseRefStr(str) - if diags.HasErrors() { - panic(diags.Err()) - } - ret[i] = ref - } - return ret -} - -type graphNodeDestroyerTest struct { - AddrString string - CBD bool - Modified bool -} - -var _ GraphNodeDestroyer = (*graphNodeDestroyerTest)(nil) - -func (n *graphNodeDestroyerTest) Name() string { - result := n.DestroyAddr().String() + " (destroy)" - if n.Modified { - result += " (modified)" + inst := &NodeAbstractResourceInstance{ + NodeAbstractResource: *abs, + InstanceKey: instAddr.Resource.Key, } - return result -} - -func (n *graphNodeDestroyerTest) mustAddr() addrs.AbsResourceInstance { - addr, diags := addrs.ParseAbsResourceInstanceStr(n.AddrString) - if diags.HasErrors() { - panic(diags.Err()) - } - return addr -} - -func (n *graphNodeDestroyerTest) CreateBeforeDestroy() bool { - return n.CBD -} - -func (n *graphNodeDestroyerTest) ModifyCreateBeforeDestroy(v bool) error { - n.Modified = true - return nil -} - -func (n *graphNodeDestroyerTest) DestroyAddr() *addrs.AbsResourceInstance { - addr := n.mustAddr() - return &addr + return 
&NodeDestroyResourceInstance{NodeAbstractResourceInstance: inst} } const testTransformDestroyEdgeBasicStr = ` diff --git a/terraform/transform_diff.go b/terraform/transform_diff.go index 6fb915f87..23b6e2a75 100644 --- a/terraform/transform_diff.go +++ b/terraform/transform_diff.go @@ -174,14 +174,6 @@ func (t *DiffTransformer) Transform(g *Graph) error { log.Printf("[TRACE] DiffTransformer: %s deposed object %s will be represented for destruction by %s", addr, dk, dag.VertexName(node)) } g.Add(node) - rsrcAddr := addr.ContainingResource().String() - for _, rsrcNode := range resourceNodes[rsrcAddr] { - // We connect this edge "forwards" (even though destroy dependencies - // are often inverted) because evaluating the resource node - // after the destroy node could cause an unnecessary husk of - // a resource state to be re-added. - g.Connect(dag.BasicEdge(node, rsrcNode)) - } } } diff --git a/terraform/transform_diff_test.go b/terraform/transform_diff_test.go index a83a5d75f..58ae5e8fb 100644 --- a/terraform/transform_diff_test.go +++ b/terraform/transform_diff_test.go @@ -43,9 +43,10 @@ func TestDiffTransformer(t *testing.T) { Type: "aws_instance", Name: "foo", }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), - ProviderAddr: addrs.ProviderConfig{ - Type: "aws", - }.Absolute(addrs.RootModuleInstance), + ProviderAddr: addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, ChangeSrc: plans.ChangeSrc{ Action: plans.Update, Before: beforeVal, diff --git a/terraform/transform_import_state.go b/terraform/transform_import_state.go index ab0ecae0a..5743c0645 100644 --- a/terraform/transform_import_state.go +++ b/terraform/transform_import_state.go @@ -20,8 +20,12 @@ func (t *ImportStateTransformer) Transform(g *Graph) error { // This will be populated if the targets come from the cli, but tests // may not specify implied provider addresses. providerAddr := target.ProviderAddr - if providerAddr.ProviderConfig.Type == "" { - providerAddr = target.Addr.Resource.Resource.DefaultProviderConfig().Absolute(target.Addr.Module) + if providerAddr.Provider.Type == "" { + defaultFQN := target.Addr.Resource.Resource.DefaultProvider() + providerAddr = addrs.AbsProviderConfig{ + Provider: defaultFQN, + Module: target.Addr.Module, + } } node := &graphNodeImportState{ diff --git a/terraform/transform_module_expansion.go b/terraform/transform_module_expansion.go new file mode 100644 index 000000000..e8d2c4a32 --- /dev/null +++ b/terraform/transform_module_expansion.go @@ -0,0 +1,84 @@ +package terraform + +import ( + "log" + + "github.com/hashicorp/terraform/configs" + "github.com/hashicorp/terraform/dag" +) + +// ModuleExpansionTransformer is a GraphTransformer that adds graph nodes +// representing the possible expansion of each module call in the configuration, +// and ensures that any nodes representing objects declared within a module +// are dependent on the expansion node so that they will be visited only +// after the module expansion has been decided. +// +// This transform must be applied only after all nodes representing objects +// that can be contained within modules have already been added. +type ModuleExpansionTransformer struct { + Config *configs.Config +} + +func (t *ModuleExpansionTransformer) Transform(g *Graph) error { + // The root module is always a singleton and so does not need expansion + // processing, but any descendent modules do. We'll process them + // recursively using t.transform. 
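+	// As a sketch (names and node labels assumed, using the indented
+	// dependency notation from the test graph strings): a root module
+	// calling module "a", which in turn calls module "b", yields
+	//
+	//	module.a (expand)
+	//	module.a.module.b (expand)
+	//	  module.a (expand)
+	//
+	// and each node whose path is inside module.a.module.b is then
+	// connected to module.a.module.b's expansion node by t.transform.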
+	for _, cfg := range t.Config.Children {
+		err := t.transform(g, cfg, nil)
+		if err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
+func (t *ModuleExpansionTransformer) transform(g *Graph, c *configs.Config, parentNode dag.Vertex) error {
+	// FIXME: We're using addrs.ModuleInstance to represent the paths here
+	// because the rest of Terraform Core is expecting that, but in practice
+	// this is representing a path through the static module instances (not
+	// expanded yet), and so as we weave in support for repetition of module
+	// calls we'll need to make the plan processing actually use addrs.Module
+	// to reflect that our graph nodes actually represent unexpanded static
+	// configuration objects, not instances.
+	fullAddr := c.Path.UnkeyedInstanceShim()
+	callerAddr, callAddr := fullAddr.Call()
+
+	// Look up the module call by its name, which is the last step of the
+	// static module path.
+	modulecall := c.Parent.Module.ModuleCalls[c.Path[len(c.Path)-1]]
+	v := &nodeExpandModule{
+		CallerAddr: callerAddr,
+		Call:       callAddr,
+		Config:     c.Module,
+		ModuleCall: modulecall,
+	}
+	g.Add(v)
+	log.Printf("[TRACE] ModuleExpansionTransformer: Added %s as %T", fullAddr, v)
+
+	if parentNode != nil {
+		log.Printf("[TRACE] ModuleExpansionTransformer: %s must wait for expansion of %s", dag.VertexName(v), dag.VertexName(parentNode))
+		g.Connect(dag.BasicEdge(v, parentNode))
+	}
+
+	// Connect any node that reports this module as its Path to ensure that
+	// the module expansion will be handled before that node.
+	// FIXME: Again, there is some Module vs. ModuleInstance muddling here
+	// for legacy reasons, which we'll need to clean up as part of further
+	// work to properly support "count" and "for_each" for modules. Nodes
+	// in the plan graph actually belong to modules, not to module instances.
+	for _, childV := range g.Vertices() {
+		pather, ok := childV.(GraphNodeSubPath)
+		if !ok {
+			continue
+		}
+		if pather.Path().Equal(fullAddr) {
+			log.Printf("[TRACE] ModuleExpansionTransformer: %s must wait for expansion of %s", dag.VertexName(childV), fullAddr)
+			g.Connect(dag.BasicEdge(childV, v))
+		}
+	}
+
+	// Also visit child modules, recursively.
+	for _, cc := range c.Children {
+		if err := t.transform(g, cc, v); err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
diff --git a/terraform/transform_module_variable.go b/terraform/transform_module_variable.go
index 18e0b2d1f..2cbb5cd11 100644
--- a/terraform/transform_module_variable.go
+++ b/terraform/transform_module_variable.go
@@ -4,6 +4,7 @@ import (
 	"fmt"

 	"github.com/hashicorp/hcl/v2/hclsyntax"
+	"github.com/hashicorp/terraform/addrs"
 	"github.com/hashicorp/terraform/tfdiags"
 	"github.com/zclconf/go-cty/cty"

@@ -110,15 +111,17 @@ func (t *ModuleVariableTransformer) transformSingle(g *Graph, parent, c *configs
 		}
 	}

-	// For now we treat all module variables as "applyable", even though
-	// such nodes are valid to use on other walks too. We may specialize
-	// this in future if we find reasons to employ different behaviors
-	// in different scenarios.
- node := &NodeApplyableModuleVariable{ - Addr: path.InputVariable(v.Name), + // Add a plannable node, as the variable may expand + // during module expansion + node := &NodePlannableModuleVariable{ + Addr: addrs.InputVariable{ + Name: v.Name, + }, + Module: c.Path, Config: v, Expr: expr, } + g.Add(node) } diff --git a/terraform/transform_orphan_count.go b/terraform/transform_orphan_count.go index 40163cf91..59c15c198 100644 --- a/terraform/transform_orphan_count.go +++ b/terraform/transform_orphan_count.go @@ -6,7 +6,6 @@ import ( "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/dag" "github.com/hashicorp/terraform/states" - "github.com/zclconf/go-cty/cty" ) // OrphanResourceCountTransformer is a GraphTransformer that adds orphans @@ -19,155 +18,43 @@ import ( type OrphanResourceCountTransformer struct { Concrete ConcreteResourceInstanceNodeFunc - Count int // Actual count of the resource, or -1 if count is not set at all - ForEach map[string]cty.Value // The ForEach map on the resource - Addr addrs.AbsResource // Addr of the resource to look for orphans - State *states.State // Full global state + Addr addrs.AbsResource // Addr of the resource to look for orphans + InstanceAddrs []addrs.AbsResourceInstance // Addresses that currently exist in config + State *states.State // Full global state } func (t *OrphanResourceCountTransformer) Transform(g *Graph) error { + // FIXME: This is currently assuming that all of the instances of + // this resource belong to a single module instance, which is true + // at the time of writing this because Terraform Core doesn't support + // repetition of module calls yet, but this will need to be corrected + // in order to support count and for_each on module calls, where + // our t.InstanceAddrs may contain resource instances from many different + // module instances. rs := t.State.Resource(t.Addr) if rs == nil { return nil // Resource doesn't exist in state, so nothing to do! } - haveKeys := make(map[addrs.InstanceKey]struct{}) + // This is an O(n*m) analysis, which we accept for now because the + // number of instances of a single resource ought to always be small in any + // reasonable Terraform configuration. +Have: for key := range rs.Instances { - haveKeys[key] = struct{}{} - } - - // if for_each is set, use that transformer - if t.ForEach != nil { - return t.transformForEach(haveKeys, g) - } - if t.Count < 0 { - return t.transformNoCount(haveKeys, g) - } - if t.Count == 0 { - return t.transformZeroCount(haveKeys, g) - } - return t.transformCount(haveKeys, g) -} - -func (t *OrphanResourceCountTransformer) transformForEach(haveKeys map[addrs.InstanceKey]struct{}, g *Graph) error { - // If there is a NoKey node, add this to the graph first, - // so that we can create edges to it in subsequent (StringKey) nodes. 
- // This is because the last item determines the resource mode for the whole resource, - // (see SetResourceInstanceCurrent for more information) and we need to evaluate - // an orphaned (NoKey) resource before the in-memory state is updated - // to deal with a new for_each resource - _, hasNoKeyNode := haveKeys[addrs.NoKey] - var noKeyNode dag.Vertex - if hasNoKeyNode { - abstract := NewNodeAbstractResourceInstance(t.Addr.Instance(addrs.NoKey)) - noKeyNode = abstract - if f := t.Concrete; f != nil { - noKeyNode = f(abstract) + thisAddr := t.Addr.Instance(key) + for _, wantAddr := range t.InstanceAddrs { + if wantAddr.Equal(thisAddr) { + continue Have + } } - g.Add(noKeyNode) - } + // If thisAddr is not in t.InstanceAddrs then we've found an "orphan" - for key := range haveKeys { - // If the key is no-key, we have already added it, so skip - if key == addrs.NoKey { - continue - } - - s, _ := key.(addrs.StringKey) - // If the key is present in our current for_each, carry on - if _, ok := t.ForEach[string(s)]; ok { - continue - } - - abstract := NewNodeAbstractResourceInstance(t.Addr.Instance(key)) + abstract := NewNodeAbstractResourceInstance(thisAddr) var node dag.Vertex = abstract if f := t.Concrete; f != nil { node = f(abstract) } - log.Printf("[TRACE] OrphanResourceCount(non-zero): adding %s as %T", t.Addr, node) - g.Add(node) - - // Add edge to noKeyNode if it exists - if hasNoKeyNode { - g.Connect(dag.BasicEdge(node, noKeyNode)) - } - } - return nil -} - -func (t *OrphanResourceCountTransformer) transformCount(haveKeys map[addrs.InstanceKey]struct{}, g *Graph) error { - // Due to the logic in Transform, we only get in here if our count is - // at least one. - - _, have0Key := haveKeys[addrs.IntKey(0)] - - for key := range haveKeys { - if key == addrs.NoKey && !have0Key { - // If we have no 0-key then we will accept a no-key instance - // as an alias for it. - continue - } - - i, isInt := key.(addrs.IntKey) - if isInt && int(i) < t.Count { - continue - } - - abstract := NewNodeAbstractResourceInstance(t.Addr.Instance(key)) - var node dag.Vertex = abstract - if f := t.Concrete; f != nil { - node = f(abstract) - } - log.Printf("[TRACE] OrphanResourceCount(non-zero): adding %s as %T", t.Addr, node) - g.Add(node) - } - - return nil -} - -func (t *OrphanResourceCountTransformer) transformZeroCount(haveKeys map[addrs.InstanceKey]struct{}, g *Graph) error { - // This case is easy: we need to orphan any keys we have at all. - - for key := range haveKeys { - abstract := NewNodeAbstractResourceInstance(t.Addr.Instance(key)) - var node dag.Vertex = abstract - if f := t.Concrete; f != nil { - node = f(abstract) - } - log.Printf("[TRACE] OrphanResourceCount(zero): adding %s as %T", t.Addr, node) - g.Add(node) - } - - return nil -} - -func (t *OrphanResourceCountTransformer) transformNoCount(haveKeys map[addrs.InstanceKey]struct{}, g *Graph) error { - // Negative count indicates that count is not set at all, in which - // case we expect to have a single instance with no key set at all. - // However, we'll also accept an instance with key 0 set as an alias - // for it, in case the user has just deleted the "count" argument and - // so wants to keep the first instance in the set. - - _, haveNoKey := haveKeys[addrs.NoKey] - _, have0Key := haveKeys[addrs.IntKey(0)] - keepKey := addrs.NoKey - if have0Key && !haveNoKey { - // If we don't have a no-key instance then we can use the 0-key instance - // instead. 
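// A worked example of the single comparison that replaces these
// per-case transforms (addresses assumed): if state holds
// test_object.foo[0], foo[1], and foo[2] while InstanceAddrs lists only
// test_object.foo[0] and foo[1], the Have loop above adds an orphan
// node for test_object.foo[2] alone.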
- keepKey = addrs.IntKey(0) - } - - for key := range haveKeys { - if key == keepKey { - continue - } - - abstract := NewNodeAbstractResourceInstance(t.Addr.Instance(key)) - var node dag.Vertex = abstract - if f := t.Concrete; f != nil { - node = f(abstract) - } - log.Printf("[TRACE] OrphanResourceCount(no-count): adding %s as %T", t.Addr, node) + log.Printf("[TRACE] OrphanResourceCountTransformer: adding %s as %T", thisAddr, node) g.Add(node) } diff --git a/terraform/transform_orphan_count_test.go b/terraform/transform_orphan_count_test.go index 4853ce831..0c1b31895 100644 --- a/terraform/transform_orphan_count_test.go +++ b/terraform/transform_orphan_count_test.go @@ -1,5 +1,10 @@ package terraform +// FIXME: Update these tests for the new OrphanResourceCountTransformer +// interface that expects to be given a list of instance addresses that +// exist in config. + +/* import ( "strings" "testing" @@ -352,9 +357,10 @@ func TestOrphanResourceCountTransformer_ForEachEdgesAdded(t *testing.T) { }, Status: states.ObjectReady, }, - addrs.ProviderConfig{ - Type: "aws", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, ) // NoKey'd resource @@ -370,9 +376,10 @@ func TestOrphanResourceCountTransformer_ForEachEdgesAdded(t *testing.T) { }, Status: states.ObjectReady, }, - addrs.ProviderConfig{ - Type: "aws", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, ) }) @@ -431,3 +438,4 @@ aws_instance.foo (orphan) aws_instance.foo["bar"] (orphan) aws_instance.foo (orphan) ` +*/ diff --git a/terraform/transform_orphan_resource_test.go b/terraform/transform_orphan_resource_test.go index fed351cc1..182d14bca 100644 --- a/terraform/transform_orphan_resource_test.go +++ b/terraform/transform_orphan_resource_test.go @@ -26,9 +26,10 @@ func TestOrphanResourceInstanceTransformer(t *testing.T) { }, Status: states.ObjectReady, }, - addrs.ProviderConfig{ - Type: "aws", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, ) // The orphan @@ -44,9 +45,10 @@ func TestOrphanResourceInstanceTransformer(t *testing.T) { }, Status: states.ObjectReady, }, - addrs.ProviderConfig{ - Type: "aws", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, ) }) @@ -92,9 +94,10 @@ func TestOrphanResourceInstanceTransformer_countGood(t *testing.T) { }, Status: states.ObjectReady, }, - addrs.ProviderConfig{ - Type: "aws", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, ) s.SetResourceInstanceCurrent( addrs.Resource{ @@ -108,9 +111,10 @@ func TestOrphanResourceInstanceTransformer_countGood(t *testing.T) { }, Status: states.ObjectReady, }, - addrs.ProviderConfig{ - Type: "aws", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, ) }) @@ -155,9 +159,10 @@ func TestOrphanResourceInstanceTransformer_countBad(t *testing.T) { }, Status: states.ObjectReady, }, - addrs.ProviderConfig{ - Type: "aws", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, ) 
s.SetResourceInstanceCurrent( addrs.Resource{ @@ -171,9 +176,10 @@ func TestOrphanResourceInstanceTransformer_countBad(t *testing.T) { }, Status: states.ObjectReady, }, - addrs.ProviderConfig{ - Type: "aws", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, ) }) @@ -218,9 +224,10 @@ func TestOrphanResourceInstanceTransformer_modules(t *testing.T) { }, Status: states.ObjectReady, }, - addrs.ProviderConfig{ - Type: "aws", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, ) s.SetResourceInstanceCurrent( addrs.Resource{ @@ -234,9 +241,10 @@ func TestOrphanResourceInstanceTransformer_modules(t *testing.T) { }, Status: states.ObjectReady, }, - addrs.ProviderConfig{ - Type: "aws", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, ) }) diff --git a/terraform/transform_output.go b/terraform/transform_output.go index ed93cdb87..8e19000af 100644 --- a/terraform/transform_output.go +++ b/terraform/transform_output.go @@ -3,6 +3,7 @@ package terraform import ( "log" + "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/configs" "github.com/hashicorp/terraform/dag" ) @@ -36,20 +37,16 @@ func (t *OutputTransformer) transform(g *Graph, c *configs.Config) error { } } - // Our addressing system distinguishes between modules and module instances, - // but we're not yet ready to make that distinction here (since we don't - // support "count"/"for_each" on modules) and so we just do a naive - // transform of the module path into a module instance path, assuming that - // no keys are in use. This should be removed when "count" and "for_each" - // are implemented for modules. - path := c.Path.UnkeyedInstanceShim() - + // Add plannable outputs to the graph, which will be dynamically expanded + // into NodeApplyableOutputs to reflect possible expansion + // through the presence of "count" or "for_each" on the modules. for _, o := range c.Module.Outputs { - addr := path.OutputValue(o.Name) - node := &NodeApplyableOutput{ - Addr: addr, + node := &NodePlannableOutput{ + Addr: addrs.OutputValue{Name: o.Name}, + Module: c.Path, Config: o, } + log.Printf("[TRACE] OutputTransformer: adding %s as %T", o.Name, node) g.Add(node) } @@ -60,11 +57,17 @@ func (t *OutputTransformer) transform(g *Graph, c *configs.Config) error { // outputs during destroy. We need to do this to ensure that no stale outputs // are ever left in the state. 
type DestroyOutputTransformer struct { + Destroy bool } func (t *DestroyOutputTransformer) Transform(g *Graph) error { + // Only clean root outputs on a full destroy + if !t.Destroy { + return nil + } + for _, v := range g.Vertices() { - output, ok := v.(*NodeApplyableOutput) + output, ok := v.(*NodePlannableOutput) if !ok { continue } @@ -72,6 +75,7 @@ func (t *DestroyOutputTransformer) Transform(g *Graph) error { // create the destroy node for this output node := &NodeDestroyableOutput{ Addr: output.Addr, + Module: output.Module, Config: output.Config, } @@ -86,7 +90,7 @@ func (t *DestroyOutputTransformer) Transform(g *Graph) error { // the destroy node must depend on the eval node deps.Add(v) - for _, d := range deps.List() { + for _, d := range deps { log.Printf("[TRACE] %s depends on %s", node.Name(), dag.VertexName(d)) g.Connect(dag.BasicEdge(node, d)) } diff --git a/terraform/transform_provider.go b/terraform/transform_provider.go index bc86d295f..ed9ccb63a 100644 --- a/terraform/transform_provider.go +++ b/terraform/transform_provider.go @@ -99,6 +99,13 @@ func (t *ProviderTransformer) Transform(g *Graph) error { needConfigured := map[string]addrs.AbsProviderConfig{} for _, v := range g.Vertices() { + // FIXME: fix the type that implements this, so it's not a + // GraphNodeProviderConsumer. + // check if we want to skip connecting this to a provider + if _, ok := v.(GraphNodeNoProvider); ok { + continue + } + // Does the vertex _directly_ use a provider? if pv, ok := v.(GraphNodeProviderConsumer); ok { requested[v] = make(map[string]ProviderRequest) @@ -158,7 +165,10 @@ func (t *ProviderTransformer) Transform(g *Graph) error { // stub it out with an init-only provider node, which will just // start up the provider and fetch its schema. if _, exists := needConfigured[key]; target == nil && !exists { - stubAddr := p.ProviderConfig.Absolute(addrs.RootModuleInstance) + stubAddr := addrs.AbsProviderConfig{ + Module: addrs.RootModuleInstance, + Provider: p.Provider, + } stub := &NodeEvalableProvider{ &NodeAbstractProvider{ Addr: stubAddr, @@ -232,7 +242,7 @@ func (t *CloseProviderTransformer) Transform(g *Graph) error { g.Connect(dag.BasicEdge(closer, p)) // connect all the provider's resources to the close node - for _, s := range g.UpEdges(p).List() { + for _, s := range g.UpEdges(p) { if _, ok := s.(GraphNodeProviderConsumer); ok { g.Connect(dag.BasicEdge(closer, s)) } @@ -275,6 +285,13 @@ func (t *MissingProviderTransformer) Transform(g *Graph) error { var err error m := providerVertexMap(g) for _, v := range g.Vertices() { + // FIXME: fix the type that implements this, so it's not a + // GraphNodeProviderConsumer. + // check if we want to skip connecting this to a provider + if _, ok := v.(GraphNodeNoProvider); ok { + continue + } + pv, ok := v.(GraphNodeProviderConsumer) if !ok { continue @@ -286,7 +303,7 @@ func (t *MissingProviderTransformer) Transform(g *Graph) error { // the later proper resolution of provider inheritance done by // ProviderTransformer. p, _ := pv.ProvidedBy() - if p.ProviderConfig.Alias != "" { + if p.Alias != "" { // We do not create default aliased configurations. log.Println("[TRACE] MissingProviderTransformer: skipping implication of aliased config", p) continue @@ -295,7 +312,7 @@ func (t *MissingProviderTransformer) Transform(g *Graph) error { // We're going to create an implicit _default_ configuration for the // referenced provider type in the _root_ module, ignoring all other // aspects of the resource's declared provider address. 
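 	// For example (illustrative): a resource anywhere in the tree that
 	// requires provider "aws" without a matching configuration in scope
 	// resolves here to the root module default, rendered in the tests
 	// below as provider["registry.terraform.io/-/aws"].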
- defaultAddr := addrs.RootModuleInstance.ProviderConfigDefault(p.ProviderConfig.Type) + defaultAddr := addrs.RootModuleInstance.ProviderConfigDefault(p.Provider) key := defaultAddr.String() provider := m[key] @@ -570,8 +587,12 @@ func (t *ProviderConfigTransformer) transformSingle(g *Graph, c *configs.Config) // add all providers from the configuration for _, p := range mod.ProviderConfigs { - relAddr := p.Addr() - addr := relAddr.Absolute(path) + fqn := mod.ProviderForLocalConfig(p.Addr()) + addr := addrs.AbsProviderConfig{ + Provider: fqn, + Alias: p.Alias, + Module: path, + } abstract := &NodeAbstractProvider{ Addr: addr, @@ -647,8 +668,19 @@ func (t *ProviderConfigTransformer) addProxyProviders(g *Graph, c *configs.Confi // Go through all the providers the parent is passing in, and add proxies to // the parent provider nodes. for _, pair := range parentCfg.Providers { - fullAddr := pair.InChild.Addr().Absolute(instPath) - fullParentAddr := pair.InParent.Addr().Absolute(parentInstPath) + fqn := c.Module.ProviderForLocalConfig(pair.InChild.Addr()) + fullAddr := addrs.AbsProviderConfig{ + Provider: fqn, + Module: instPath, + Alias: pair.InChild.Addr().Alias, + } + + fullParentAddr := addrs.AbsProviderConfig{ + Provider: fqn, + Module: parentInstPath, + Alias: pair.InParent.Addr().Alias, + } + fullName := fullAddr.String() fullParentName := fullParentAddr.String() @@ -673,7 +705,7 @@ func (t *ProviderConfigTransformer) addProxyProviders(g *Graph, c *configs.Confi } // aliased configurations can't be implicitly passed in - if fullAddr.ProviderConfig.Alias != "" { + if fullAddr.Alias != "" { continue } @@ -705,7 +737,7 @@ func (t *ProviderConfigTransformer) attachProviderConfigs(g *Graph) error { // Go through the provider configs to find the matching config for _, p := range mc.Module.ProviderConfigs { - if p.Name == addr.ProviderConfig.Type && p.Alias == addr.ProviderConfig.Alias { + if p.Name == addr.Provider.Type && p.Alias == addr.Alias { log.Printf("[TRACE] ProviderConfigTransformer: attaching to %q provider configuration from %s", dag.VertexName(v), p.DeclRange) apn.AttachProvider(p) break diff --git a/terraform/transform_provider_test.go b/terraform/transform_provider_test.go index 2becac3b2..197e51bcb 100644 --- a/terraform/transform_provider_test.go +++ b/terraform/transform_provider_test.go @@ -62,7 +62,7 @@ func TestProviderTransformer_moduleChild(t *testing.T) { ), ProviderAddr: addrs.RootModuleInstance. Child("moo", addrs.NoKey). - ProviderConfigDefault("foo"), + ProviderConfigDefault(addrs.NewLegacyProvider("foo")), ID: "bar", }, }, @@ -279,7 +279,7 @@ func TestMissingProviderTransformer_moduleChild(t *testing.T) { ), ProviderAddr: addrs.RootModuleInstance. Child("moo", addrs.NoKey). - ProviderConfigDefault("foo"), + ProviderConfigDefault(addrs.NewLegacyProvider("foo")), ID: "bar", }, }, @@ -324,7 +324,7 @@ func TestMissingProviderTransformer_moduleGrandchild(t *testing.T) { ), ProviderAddr: addrs.RootModuleInstance. Child("moo", addrs.NoKey). - ProviderConfigDefault("foo"), + ProviderConfigDefault(addrs.NewLegacyProvider("foo")), ID: "bar", }, }, @@ -366,7 +366,7 @@ func TestParentProviderTransformer(t *testing.T) { ), ProviderAddr: addrs.RootModuleInstance. Child("moo", addrs.NoKey). - ProviderConfigDefault("foo"), + ProviderConfigDefault(addrs.NewLegacyProvider("foo")), ID: "bar", }, }, @@ -420,7 +420,7 @@ func TestParentProviderTransformer_moduleGrandchild(t *testing.T) { ), ProviderAddr: addrs.RootModuleInstance. Child("moo", addrs.NoKey). 
- ProviderConfigDefault("foo"), + ProviderConfigDefault(addrs.NewLegacyProvider("foo")), ID: "bar", }, }, @@ -598,8 +598,8 @@ func TestProviderConfigTransformer_implicitModule(t *testing.T) { actual := strings.TrimSpace(g.String()) expected := strings.TrimSpace(`module.mod.aws_instance.bar - provider.aws.foo -provider.aws.foo`) + provider["registry.terraform.io/-/aws"].foo +provider["registry.terraform.io/-/aws"].foo`) if actual != expected { t.Fatalf("wrong result\n\nexpected:\n%s\n\ngot:\n%s", expected, actual) } @@ -629,118 +629,118 @@ func TestProviderConfigTransformer_invalidProvider(t *testing.T) { if err == nil { t.Fatal("expected missing provider error") } - if !strings.Contains(err.Error(), "provider.aws.foo") { + if !strings.Contains(err.Error(), `provider["registry.terraform.io/-/aws"].foo`) { t.Fatalf("error should reference missing provider, got: %s", err) } } const testTransformProviderBasicStr = ` aws_instance.web - provider.aws -provider.aws + provider["registry.terraform.io/-/aws"] +provider["registry.terraform.io/-/aws"] ` const testTransformCloseProviderBasicStr = ` aws_instance.web - provider.aws -provider.aws -provider.aws (close) + provider["registry.terraform.io/-/aws"] +provider["registry.terraform.io/-/aws"] +provider["registry.terraform.io/-/aws"] (close) aws_instance.web - provider.aws + provider["registry.terraform.io/-/aws"] ` const testTransformMissingProviderBasicStr = ` aws_instance.web - provider.aws + provider["registry.terraform.io/-/aws"] foo_instance.web - provider.foo -provider.aws -provider.aws (close) + provider["registry.terraform.io/-/foo"] +provider["registry.terraform.io/-/aws"] +provider["registry.terraform.io/-/aws"] (close) aws_instance.web - provider.aws -provider.foo -provider.foo (close) + provider["registry.terraform.io/-/aws"] +provider["registry.terraform.io/-/foo"] +provider["registry.terraform.io/-/foo"] (close) foo_instance.web - provider.foo + provider["registry.terraform.io/-/foo"] ` const testTransformMissingGrandchildProviderStr = ` module.sub.module.subsub.bar_instance.two - provider.bar + provider["registry.terraform.io/-/bar"] module.sub.module.subsub.foo_instance.one - module.sub.provider.foo -module.sub.provider.foo -provider.bar + module.sub.provider["registry.terraform.io/-/foo"] +module.sub.provider["registry.terraform.io/-/foo"] +provider["registry.terraform.io/-/bar"] ` const testTransformMissingProviderModuleChildStr = ` module.moo.foo_instance.qux (import id "bar") -provider.foo +provider["registry.terraform.io/-/foo"] ` const testTransformMissingProviderModuleGrandchildStr = ` module.a.module.b.foo_instance.qux (import id "bar") -provider.foo +provider["registry.terraform.io/-/foo"] ` const testTransformParentProviderStr = ` module.moo.foo_instance.qux (import id "bar") -provider.foo +provider["registry.terraform.io/-/foo"] ` const testTransformParentProviderModuleGrandchildStr = ` module.a.module.b.foo_instance.qux (import id "bar") -provider.foo +provider["registry.terraform.io/-/foo"] ` const testTransformProviderModuleChildStr = ` module.moo.foo_instance.qux (import id "bar") - provider.foo -provider.foo + provider["registry.terraform.io/-/foo"] +provider["registry.terraform.io/-/foo"] ` const testTransformPruneProviderBasicStr = ` foo_instance.web - provider.foo -provider.foo -provider.foo (close) + provider["registry.terraform.io/-/foo"] +provider["registry.terraform.io/-/foo"] +provider["registry.terraform.io/-/foo"] (close) foo_instance.web - provider.foo + provider["registry.terraform.io/-/foo"] ` const 
testTransformDisableProviderBasicStr = ` module.child - provider.aws (disabled) + provider["registry.terraform.io/-/aws"] (disabled) var.foo -provider.aws (close) +provider["registry.terraform.io/-/aws"] (close) module.child - provider.aws (disabled) -provider.aws (disabled) + provider["registry.terraform.io/-/aws"] (disabled) +provider["registry.terraform.io/-/aws"] (disabled) var.foo ` const testTransformDisableProviderKeepStr = ` aws_instance.foo - provider.aws + provider["registry.terraform.io/-/aws"] module.child - provider.aws + provider["registry.terraform.io/-/aws"] var.foo -provider.aws -provider.aws (close) +provider["registry.terraform.io/-/aws"] +provider["registry.terraform.io/-/aws"] (close) aws_instance.foo module.child - provider.aws + provider["registry.terraform.io/-/aws"] var.foo ` const testTransformModuleProviderConfigStr = ` module.child.aws_instance.thing - provider.aws.foo -provider.aws.foo + provider["registry.terraform.io/-/aws"].foo +provider["registry.terraform.io/-/aws"].foo ` const testTransformModuleProviderGrandparentStr = ` module.child.module.grandchild.aws_instance.baz - provider.aws.foo -provider.aws.foo + provider["registry.terraform.io/-/aws"].foo +provider["registry.terraform.io/-/aws"].foo ` diff --git a/terraform/transform_provisioner_test.go b/terraform/transform_provisioner_test.go index eecd67788..0b25b1d25 100644 --- a/terraform/transform_provisioner_test.go +++ b/terraform/transform_provisioner_test.go @@ -70,9 +70,10 @@ func TestMissingProvisionerTransformer_module(t *testing.T) { }, Status: states.ObjectReady, }, - addrs.ProviderConfig{ - Type: "aws", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, ) s.SetResourceInstanceCurrent( addrs.Resource{ @@ -86,9 +87,10 @@ func TestMissingProvisionerTransformer_module(t *testing.T) { }, Status: states.ObjectReady, }, - addrs.ProviderConfig{ - Type: "aws", - }.Absolute(addrs.RootModuleInstance), + addrs.AbsProviderConfig{ + Provider: addrs.NewLegacyProvider("aws"), + Module: addrs.RootModuleInstance, + }, ) }) diff --git a/terraform/transform_reference.go b/terraform/transform_reference.go index f9199be0e..9b397ccd4 100644 --- a/terraform/transform_reference.go +++ b/terraform/transform_reference.go @@ -3,12 +3,15 @@ package terraform import ( "fmt" "log" + "sort" "github.com/hashicorp/hcl/v2" "github.com/hashicorp/terraform/addrs" + "github.com/hashicorp/terraform/configs" "github.com/hashicorp/terraform/configs/configschema" "github.com/hashicorp/terraform/dag" "github.com/hashicorp/terraform/lang" + "github.com/hashicorp/terraform/states" ) // GraphNodeReferenceable must be implemented by any node that represents @@ -37,6 +40,11 @@ type GraphNodeReferencer interface { References() []*addrs.Reference } +type GraphNodeAttachDependencies interface { + GraphNodeResource + AttachDependencies([]addrs.AbsResource) +} + // GraphNodeReferenceOutside is an interface that can optionally be implemented. // A node that implements it can specify that its own referenceable addresses // and/or the addresses it references are in a different module than the @@ -58,7 +66,7 @@ type GraphNodeReferencer interface { type GraphNodeReferenceOutside interface { // ReferenceOutside returns a path in which any references from this node // are resolved. 
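Aside: the expected graph strings above change from `provider.aws.foo` to `provider["registry.terraform.io/-/aws"].foo` because provider addresses are now rendered as fully-qualified hostname/namespace/type triples, with `-` standing in for the namespace of a legacy provider. A small sketch of how that rendered form is put together; the helper function is hypothetical, only the string layout is taken from the expectations above:

```go
package main

import "fmt"

// legacyProviderString mirrors the rendered form seen in the expected graph
// strings above; it is a throwaway helper for illustration, not the real
// addrs API.
func legacyProviderString(typeName, alias string) string {
	// hostname / namespace / type, with "-" as the legacy namespace
	fqn := fmt.Sprintf("registry.terraform.io/-/%s", typeName)
	if alias == "" {
		return fmt.Sprintf("provider[%q]", fqn)
	}
	return fmt.Sprintf("provider[%q].%s", fqn, alias)
}

func main() {
	fmt.Println(legacyProviderString("aws", ""))    // provider["registry.terraform.io/-/aws"]
	fmt.Println(legacyProviderString("aws", "foo")) // provider["registry.terraform.io/-/aws"].foo
}
```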
-	ReferenceOutside() (selfPath, referencePath addrs.ModuleInstance)
+	ReferenceOutside() (selfPath, referencePath addrs.Module)
 }
 
 // ReferenceTransformer is a GraphTransformer that connects all the
@@ -72,7 +80,12 @@ func (t *ReferenceTransformer) Transform(g *Graph) error {
 	// Find the things that reference things and connect them
 	for _, v := range vs {
-		parents, _ := m.References(v)
+		if _, ok := v.(GraphNodeDestroyer); ok {
+			// destroy node references are not connected, since they can only
+			// use their own state.
+			continue
+		}
+		parents := m.References(v)
 		parentsDbg := make([]string, len(parents))
 		for i, v := range parents {
 			parentsDbg[i] = dag.VertexName(v)
@@ -84,62 +97,113 @@ func (t *ReferenceTransformer) Transform(g *Graph) error {
 		for _, parent := range parents {
 			g.Connect(dag.BasicEdge(v, parent))
 		}
+
+		if len(parents) > 0 {
+			continue
+		}
 	}
 
 	return nil
 }
 
-// DestroyReferenceTransformer is a GraphTransformer that reverses the edges
-// for locals and outputs that depend on other nodes which will be
-// removed during destroy. If a destroy node is evaluated before the local or
-// output value, it will be removed from the state, and the later interpolation
-// will fail.
-type DestroyValueReferenceTransformer struct{}
+// AttachDependenciesTransformer records all resource dependencies for each
+// instance, and attaches the addresses to the node itself. Managed resources
+// will record these in the state for proper ordering of destroy operations.
+type AttachDependenciesTransformer struct {
+	Config  *configs.Config
+	State   *states.State
+	Schemas *Schemas
+}
 
-func (t *DestroyValueReferenceTransformer) Transform(g *Graph) error {
-	vs := g.Vertices()
-	for _, v := range vs {
-		switch v.(type) {
-		case *NodeApplyableOutput, *NodeLocal:
-			// OK
-		default:
+func (t AttachDependenciesTransformer) Transform(g *Graph) error {
+	for _, v := range g.Vertices() {
+		attacher, ok := v.(GraphNodeAttachDependencies)
+		if !ok {
+			continue
+		}
+		selfAddr := attacher.ResourceAddr()
+
+		// Data sources don't need to track destroy dependencies
+		if selfAddr.Resource.Mode == addrs.DataResourceMode {
 			continue
 		}
 
-		// reverse any outgoing edges so that the value is evaluated first.
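Aside, before the transformer body continues below: the `GraphNodeAttachDependencies` hook introduced above follows the same opt-in shape as the other node interfaces here; the transformer walks the graph and hands each implementing node its dependency list, which the node later records in state. A minimal sketch with stand-in types (plain strings instead of `addrs.AbsResource`):

```go
package main

import "fmt"

// attachDependencies is a stand-in for GraphNodeAttachDependencies: a node
// opts in to receiving its dependency list by implementing it.
type attachDependencies interface {
	AttachDependencies(deps []string)
}

type resourceNode struct {
	addr string
	deps []string
}

func (n *resourceNode) AttachDependencies(deps []string) { n.deps = deps }

func main() {
	web := &resourceNode{addr: "aws_instance.web"}
	vertices := []interface{}{web, "not a resource node"}
	deps := []string{"aws_security_group.fw"}

	for _, v := range vertices {
		attacher, ok := v.(attachDependencies)
		if !ok {
			continue // only nodes that opt in get dependencies recorded
		}
		attacher.AttachDependencies(deps)
	}
	fmt.Println(web.addr, "depends on", web.deps)
}
```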
-		for _, e := range g.EdgesFrom(v) {
-			target := e.Target()
+		ans, err := g.Ancestors(v)
+		if err != nil {
+			return err
+		}
 
-			// only destroy nodes will be evaluated in reverse
-			if _, ok := target.(GraphNodeDestroyer); !ok {
+		// dedupe addrs when there are multiple instances involved, or
+		// multiple paths in the un-reduced graph
+		depMap := map[string]addrs.AbsResource{}
+		for _, d := range ans {
+			var addr addrs.AbsResource
+
+			switch d := d.(type) {
+			case GraphNodeResourceInstance:
+				instAddr := d.ResourceInstanceAddr()
+				addr = instAddr.Resource.Resource.Absolute(instAddr.Module)
+			case GraphNodeResource:
+				addr = d.ResourceAddr()
+			default:
 				continue
 			}
 
-			log.Printf("[TRACE] output dep: %s", dag.VertexName(target))
+			// Data sources don't need to track destroy dependencies
+			if addr.Resource.Mode == addrs.DataResourceMode {
+				continue
+			}
 
-			g.RemoveEdge(e)
-			g.Connect(&DestroyEdge{S: target, T: v})
+			if addr.Equal(selfAddr) {
+				continue
+			}
+			depMap[addr.String()] = addr
 		}
+
+		deps := make([]addrs.AbsResource, 0, len(depMap))
+		for _, d := range depMap {
+			deps = append(deps, d)
+		}
+		sort.Slice(deps, func(i, j int) bool {
+			return deps[i].String() < deps[j].String()
+		})
+
+		log.Printf("[TRACE] AttachDependenciesTransformer: %s depends on %s", attacher.ResourceAddr(), deps)
+		attacher.AttachDependencies(deps)
 	}
 
 	return nil
}
 
-// PruneUnusedValuesTransformer is s GraphTransformer that removes local and
-// output values which are not referenced in the graph. Since outputs and
-// locals always need to be evaluated, if they reference a resource that is not
-// available in the state the interpolation could fail.
-type PruneUnusedValuesTransformer struct{}
+// PruneUnusedValuesTransformer is a GraphTransformer that removes local,
+// variable, and output values which are not referenced in the graph. If these
+// values reference a resource that is no longer in the state the interpolation
+// could fail.
+type PruneUnusedValuesTransformer struct {
+	Destroy bool
+}
 
 func (t *PruneUnusedValuesTransformer) Transform(g *Graph) error {
-	// this might need multiple runs in order to ensure that pruning a value
-	// doesn't effect a previously checked value.
+	// Pruning a value can affect previously checked edges, so loop until there
+	// are no more changes.
 	for removed := 0; ; removed = 0 {
 		for _, v := range g.Vertices() {
-			switch v.(type) {
-			case *NodeApplyableOutput, *NodeLocal:
+			switch v := v.(type) {
+			case *NodeApplyableOutput:
+				// If we're not certain this is a full destroy, we need to keep any
+				// root module outputs
+				if v.Addr.Module.IsRoot() && !t.Destroy {
+					continue
+				}
+			case *NodePlannableOutput:
+				// Have similar guardrails for plannable outputs as applyable above
+				if v.Module.IsRoot() && !t.Destroy {
+					continue
+				}
+			case *NodeLocal, *NodeApplyableModuleVariable, *NodePlannableModuleVariable:
 				// OK
 			default:
+				// We're only concerned with variables, locals and outputs
 				continue
 			}
 
@@ -148,6 +212,7 @@ func (t *PruneUnusedValuesTransformer) Transform(g *Graph) error {
 			switch dependants.Len() {
 			case 0:
 				// nothing at all depends on this
+				log.Printf("[TRACE] PruneUnusedValuesTransformer: removing unused value %s", dag.VertexName(v))
 				g.Remove(v)
 				removed++
 			case 1:
@@ -155,6 +220,7 @@ func (t *PruneUnusedValuesTransformer) Transform(g *Graph) error {
 				// we need to check for the case of a single destroy node.
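Aside, before the pruning loop continues below: the dedupe-and-sort step in `AttachDependenciesTransformer` above collapses duplicate ancestors (several instances of one resource, or several paths in the un-reduced graph) through a map and then sorts them so the recorded dependency list is deterministic. The same logic with plain strings standing in for `addrs.AbsResource`:

```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	// The same resource can appear more than once among a node's
	// ancestors; all occurrences normalize to one dependency entry.
	ancestors := []string{
		"module.a.aws_instance.web",
		"aws_security_group.fw",
		"module.a.aws_instance.web", // duplicate via another graph path
	}

	// Dedupe through a map keyed by the string form, as the transformer does.
	depMap := map[string]struct{}{}
	for _, a := range ancestors {
		depMap[a] = struct{}{}
	}

	// Flatten and sort so the stored dependency list is deterministic.
	deps := make([]string, 0, len(depMap))
	for d := range depMap {
		deps = append(deps, d)
	}
	sort.Strings(deps)

	fmt.Println(deps) // [aws_security_group.fw module.a.aws_instance.web]
}
```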
d := dependants.List()[0] if _, ok := d.(*NodeDestroyableOutput); ok { + log.Printf("[TRACE] PruneUnusedValuesTransformer: removing unused value %s", dag.VertexName(v)) g.Remove(v) removed++ } @@ -177,27 +243,20 @@ type ReferenceMap struct { // A particular reference key might actually identify multiple vertices, // e.g. in situations where one object is contained inside another. vertices map[string][]dag.Vertex - - // edges is a map whose keys are a subset of the internal reference keys - // from "vertices", and whose values are the nodes that refer to each - // key. The values in this map are the referrers, while values in - // "verticies" are the referents. The keys in both cases are referents. - edges map[string][]dag.Vertex } // References returns the set of vertices that the given vertex refers to, // and any referenced addresses that do not have corresponding vertices. -func (m *ReferenceMap) References(v dag.Vertex) ([]dag.Vertex, []addrs.Referenceable) { +func (m *ReferenceMap) References(v dag.Vertex) []dag.Vertex { rn, ok := v.(GraphNodeReferencer) if !ok { - return nil, nil + return nil } if _, ok := v.(GraphNodeSubPath); !ok { - return nil, nil + return nil } var matches []dag.Vertex - var missing []addrs.Referenceable for _, ref := range rn.References() { subject := ref.Subject @@ -216,7 +275,6 @@ func (m *ReferenceMap) References(v dag.Vertex) ([]dag.Vertex, []addrs.Reference } key = m.referenceMapKey(v, subject) } - vertices := m.vertices[key] for _, rv := range vertices { // don't include self-references @@ -225,47 +283,6 @@ func (m *ReferenceMap) References(v dag.Vertex) ([]dag.Vertex, []addrs.Reference } matches = append(matches, rv) } - if len(vertices) == 0 { - missing = append(missing, ref.Subject) - } - } - - return matches, missing -} - -// Referrers returns the set of vertices that refer to the given vertex. -func (m *ReferenceMap) Referrers(v dag.Vertex) []dag.Vertex { - rn, ok := v.(GraphNodeReferenceable) - if !ok { - return nil - } - sp, ok := v.(GraphNodeSubPath) - if !ok { - return nil - } - - var matches []dag.Vertex - for _, addr := range rn.ReferenceableAddrs() { - key := m.mapKey(sp.Path(), addr) - referrers, ok := m.edges[key] - if !ok { - continue - } - - // If the referrer set includes our own given vertex then we skip, - // since we don't want to return self-references. - selfRef := false - for _, p := range referrers { - if p == v { - selfRef = true - break - } - } - if selfRef { - continue - } - - matches = append(matches, referrers...) } return matches @@ -292,7 +309,7 @@ func (m *ReferenceMap) vertexReferenceablePath(v dag.Vertex) addrs.ModuleInstanc // Vertex is referenced from a different module than where it was // declared. path, _ := outside.ReferenceOutside() - return path + return path.UnkeyedInstanceShim() } // Vertex is referenced from the same module as where it was declared. @@ -311,12 +328,11 @@ func vertexReferencePath(referrer dag.Vertex) addrs.ModuleInstance { panic(fmt.Errorf("vertexReferencePath on vertex type %T which doesn't implement GraphNodeSubPath", sp)) } - var path addrs.ModuleInstance if outside, ok := referrer.(GraphNodeReferenceOutside); ok { // Vertex makes references to objects in a different module than where // it was declared. 
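Aside: `References` now returns only the matched vertices, dropping the separate missing-address list, and it still excludes self-references during lookup. A compact sketch of that lookup shape with stand-in types:

```go
package main

import "fmt"

type vertex struct{ name string }

func main() {
	a, b := &vertex{"a"}, &vertex{"b"}

	// vertices maps a reference key to every node that can satisfy it,
	// like the internal ReferenceMap.vertices table above.
	vertices := map[string][]*vertex{
		"local.x": {a, b},
	}

	// Resolving "local.x" from a must not include a itself.
	var matches []*vertex
	for _, rv := range vertices["local.x"] {
		if rv == a { // skip self-references, as References does
			continue
		}
		matches = append(matches, rv)
	}
	fmt.Println(matches[0].name) // b
}
```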
- _, path = outside.ReferenceOutside() - return path + _, path := outside.ReferenceOutside() + return path.UnkeyedInstanceShim() } // Vertex makes references to objects in the same module as where it @@ -381,34 +397,7 @@ func NewReferenceMap(vs []dag.Vertex) *ReferenceMap { } } - // Build the lookup table for referenced by - edges := make(map[string][]dag.Vertex) - for _, v := range vs { - _, ok := v.(GraphNodeSubPath) - if !ok { - // Only nodes with paths can participate in a reference map. - continue - } - - rn, ok := v.(GraphNodeReferencer) - if !ok { - // We're only looking for referenceable nodes - continue - } - - // Go through and cache them - for _, ref := range rn.References() { - if ref.Subject == nil { - // Should never happen - panic(fmt.Sprintf("%T.References returned reference with nil subject", rn)) - } - key := m.referenceMapKey(v, ref.Subject) - edges[key] = append(edges[key], v) - } - } - m.vertices = vertices - m.edges = edges return &m } @@ -441,6 +430,8 @@ func appendResourceDestroyReferences(refs []*addrs.Reference) []*addrs.Reference newRef.Subject = tr.Phase(addrs.ResourceInstancePhaseDestroy) refs = append(refs, &newRef) } + // FIXME: Using this method in module expansion references, + // May want to refactor this method beyond resources } return refs } diff --git a/terraform/transform_reference_test.go b/terraform/transform_reference_test.go index ad32b1376..004dbde48 100644 --- a/terraform/transform_reference_test.go +++ b/terraform/transform_reference_test.go @@ -113,55 +113,7 @@ func TestReferenceMapReferences(t *testing.T) { for tn, tc := range cases { t.Run(tn, func(t *testing.T) { rm := NewReferenceMap(tc.Nodes) - result, _ := rm.References(tc.Check) - - var resultStr []string - for _, v := range result { - resultStr = append(resultStr, dag.VertexName(v)) - } - - sort.Strings(resultStr) - sort.Strings(tc.Result) - if !reflect.DeepEqual(resultStr, tc.Result) { - t.Fatalf("bad: %#v", resultStr) - } - }) - } -} - -func TestReferenceMapReferencedBy(t *testing.T) { - cases := map[string]struct { - Nodes []dag.Vertex - Check dag.Vertex - Result []string - }{ - "simple": { - Nodes: []dag.Vertex{ - &graphNodeRefChildTest{ - NameValue: "A", - Refs: []string{"A"}, - }, - &graphNodeRefChildTest{ - NameValue: "B", - Refs: []string{"A"}, - }, - &graphNodeRefChildTest{ - NameValue: "C", - Refs: []string{"B"}, - }, - }, - Check: &graphNodeRefParentTest{ - NameValue: "foo", - Names: []string{"A"}, - }, - Result: []string{"A", "B"}, - }, - } - - for tn, tc := range cases { - t.Run(tn, func(t *testing.T) { - rm := NewReferenceMap(tc.Nodes) - result := rm.Referrers(tc.Check) + result := rm.References(tc.Check) var resultStr []string for _, v := range result { diff --git a/terraform/transform_resource_count.go b/terraform/transform_resource_count.go index c70a3c144..439cd9255 100644 --- a/terraform/transform_resource_count.go +++ b/terraform/transform_resource_count.go @@ -1,10 +1,11 @@ package terraform import ( + "log" + "github.com/hashicorp/terraform/addrs" "github.com/hashicorp/terraform/configs/configschema" "github.com/hashicorp/terraform/dag" - "github.com/zclconf/go-cty/cty" ) // ResourceCountTransformer is a GraphTransformer that expands the count @@ -15,33 +16,12 @@ type ResourceCountTransformer struct { Concrete ConcreteResourceInstanceNodeFunc Schema *configschema.Block - // Count is either the number of indexed instances to create, or -1 to - // indicate that count is not set at all and thus a no-key instance should - // be created. 
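Aside: both reference-path helpers above now shim an `addrs.Module` (module names only) into an `addrs.ModuleInstance` by giving every step `NoKey`. A self-contained sketch of that idea with simplified stand-in types; the real `UnkeyedInstanceShim` lives in the addrs package and handles instance keys properly:

```go
package main

import (
	"fmt"
	"strings"
)

// Module stands in for addrs.Module: just the module names, no keys.
type Module []string

// step and ModuleInstance stand in for addrs.ModuleInstance; an empty key
// plays the role of addrs.NoKey.
type step struct{ name, key string }
type ModuleInstance []step

// unkeyedInstanceShim mirrors the idea of UnkeyedInstanceShim: every module
// step becomes an instance step with NoKey.
func (m Module) unkeyedInstanceShim() ModuleInstance {
	ret := make(ModuleInstance, len(m))
	for i, name := range m {
		ret[i] = step{name: name} // key left empty, i.e. NoKey
	}
	return ret
}

func (mi ModuleInstance) String() string {
	parts := make([]string, len(mi))
	for i, s := range mi {
		parts[i] = "module." + s.name
	}
	return strings.Join(parts, ".")
}

func main() {
	fmt.Println(Module{"a", "b"}.unkeyedInstanceShim()) // module.a.module.b
}
```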
- Count int - ForEach map[string]cty.Value - Addr addrs.AbsResource + Addr addrs.AbsResource + InstanceAddrs []addrs.AbsResourceInstance } func (t *ResourceCountTransformer) Transform(g *Graph) error { - if t.Count < 0 && t.ForEach == nil { - // Negative count indicates that count is not set at all. - addr := t.Addr.Instance(addrs.NoKey) - - abstract := NewNodeAbstractResourceInstance(addr) - abstract.Schema = t.Schema - var node dag.Vertex = abstract - if f := t.Concrete; f != nil { - node = f(abstract) - } - - g.Add(node) - return nil - } - - // Add nodes related to the for_each expression - for key := range t.ForEach { - addr := t.Addr.Instance(addrs.StringKey(key)) + for _, addr := range t.InstanceAddrs { abstract := NewNodeAbstractResourceInstance(addr) abstract.Schema = t.Schema var node dag.Vertex = abstract @@ -49,23 +29,8 @@ func (t *ResourceCountTransformer) Transform(g *Graph) error { node = f(abstract) } + log.Printf("[TRACE] ResourceCountTransformer: adding %s as %T", addr, node) g.Add(node) } - - // For each count, build and add the node - for i := 0; i < t.Count; i++ { - key := addrs.IntKey(i) - addr := t.Addr.Instance(key) - - abstract := NewNodeAbstractResourceInstance(addr) - abstract.Schema = t.Schema - var node dag.Vertex = abstract - if f := t.Concrete; f != nil { - node = f(abstract) - } - - g.Add(node) - } - return nil } diff --git a/terraform/transform_root_test.go b/terraform/transform_root_test.go index 1ed628d4a..2cce2b927 100644 --- a/terraform/transform_root_test.go +++ b/terraform/transform_root_test.go @@ -58,11 +58,11 @@ func TestRootTransformer(t *testing.T) { const testTransformRootBasicStr = ` aws_instance.foo - provider.aws + provider["registry.terraform.io/-/aws"] do_droplet.bar - provider.do -provider.aws -provider.do + provider["registry.terraform.io/-/do"] +provider["registry.terraform.io/-/aws"] +provider["registry.terraform.io/-/do"] root aws_instance.foo do_droplet.bar diff --git a/terraform/transform_targets.go b/terraform/transform_targets.go index d25274e68..80cff5364 100644 --- a/terraform/transform_targets.go +++ b/terraform/transform_targets.go @@ -28,7 +28,7 @@ type GraphNodeTargetable interface { // they must get updated if any of their dependent resources get updated, // which would not normally be true if one of their dependencies were targeted. type GraphNodeTargetDownstream interface { - TargetDownstream(targeted, untargeted *dag.Set) bool + TargetDownstream(targeted, untargeted dag.Set) bool } // TargetsTransformer is a GraphTransformer that, when the user specifies a @@ -79,8 +79,8 @@ func (t *TargetsTransformer) Transform(g *Graph) error { // Returns a set of targeted nodes. A targeted node is either addressed // directly, address indirectly via its container, or it's a dependency of a // targeted node. Destroy mode keeps dependents instead of dependencies. 
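Aside: the rewritten `ResourceCountTransformer` above no longer expands `count`/`for_each` itself; the caller precomputes every instance address and the transformer reduces to a single loop. A sketch of that split, using hypothetical string addresses in place of `addrs.AbsResourceInstance`:

```go
package main

import "fmt"

func main() {
	// Hypothetical expansion step that now happens in the caller:
	// count = 2 on one resource, for_each = {a, b} on another.
	var instanceAddrs []string
	for i := 0; i < 2; i++ {
		instanceAddrs = append(instanceAddrs, fmt.Sprintf("aws_instance.web[%d]", i))
	}
	for _, k := range []string{"a", "b"} {
		instanceAddrs = append(instanceAddrs, fmt.Sprintf("aws_instance.db[%q]", k))
	}

	// The transformer's whole job then reduces to one loop over the
	// precomputed addresses.
	for _, addr := range instanceAddrs {
		fmt.Println("adding node for", addr)
	}
}
```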
-func (t *TargetsTransformer) selectTargetedNodes(g *Graph, addrs []addrs.Targetable) (*dag.Set, error) { - targetedNodes := new(dag.Set) +func (t *TargetsTransformer) selectTargetedNodes(g *Graph, addrs []addrs.Targetable) (dag.Set, error) { + targetedNodes := make(dag.Set) vertices := g.Vertices() @@ -95,7 +95,7 @@ func (t *TargetsTransformer) selectTargetedNodes(g *Graph, addrs []addrs.Targeta tn.SetTargets(addrs) } - var deps *dag.Set + var deps dag.Set var err error if t.Destroy { deps, err = g.Descendents(v) @@ -106,7 +106,7 @@ func (t *TargetsTransformer) selectTargetedNodes(g *Graph, addrs []addrs.Targeta return nil, err } - for _, d := range deps.List() { + for _, d := range deps { targetedNodes.Add(d) } } @@ -114,7 +114,7 @@ func (t *TargetsTransformer) selectTargetedNodes(g *Graph, addrs []addrs.Targeta return t.addDependencies(targetedNodes, g) } -func (t *TargetsTransformer) addDependencies(targetedNodes *dag.Set, g *Graph) (*dag.Set, error) { +func (t *TargetsTransformer) addDependencies(targetedNodes dag.Set, g *Graph) (dag.Set, error) { // Handle nodes that need to be included if their dependencies are included. // This requires multiple passes since we need to catch transitive // dependencies if and only if they are via other nodes that also @@ -150,7 +150,7 @@ func (t *TargetsTransformer) addDependencies(targetedNodes *dag.Set, g *Graph) ( continue } - for _, dv := range dependers.List() { + for _, dv := range dependers { if targetedNodes.Include(dv) { // Already present, so nothing to do continue @@ -186,14 +186,14 @@ func (t *TargetsTransformer) addDependencies(targetedNodes *dag.Set, g *Graph) ( // This essentially maintains the previous behavior where interpolation in // outputs would fail silently, but can now surface errors where the output // is required. -func filterPartialOutputs(v interface{}, targetedNodes *dag.Set, g *Graph) bool { +func filterPartialOutputs(v interface{}, targetedNodes dag.Set, g *Graph) bool { // should this just be done with TargetDownstream? 
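Aside, before the function body continues: `dag.Set` changes throughout this file from a pointer type with a `List()` accessor to a plain map that callers range over directly. The loops here take the range value, which implies the set maps a key to the vertex itself; a minimal sketch under that assumption:

```go
package main

import "fmt"

// Set assumes the shape implied by the loops above: a map whose values are
// the vertices, so `for _, v := range set` yields each member directly.
type Set map[interface{}]interface{}

func (s Set) Add(v interface{})          { s[v] = v }
func (s Set) Include(v interface{}) bool { _, ok := s[v]; return ok }

func main() {
	targeted := make(Set)
	targeted.Add("aws_instance.web")

	// No intermediate List() slice needed any more.
	for _, v := range targeted {
		fmt.Println("targeted:", v)
	}
	fmt.Println(targeted.Include("aws_instance.db")) // false
}
```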
 	if _, ok := v.(*NodeApplyableOutput); !ok {
 		return true
 	}
 
 	dependers := g.UpEdges(v)
-	for _, d := range dependers.List() {
+	for _, d := range dependers {
 		if _, ok := d.(*NodeCountBoundary); ok {
 			continue
 		}
@@ -210,7 +210,7 @@ func filterPartialOutputs(v interface{}, targetedNodes dag.Set, g *Graph) bool
 
 	depends := g.DownEdges(v)
 
-	for _, d := range depends.List() {
+	for _, d := range depends {
 		if !targetedNodes.Include(d) {
 			log.Printf("[WARN] %s missing targeted dependency %s, removing from the graph",
 				dag.VertexName(v), dag.VertexName(d))
diff --git a/terraform/transform_transitive_reduction_test.go b/terraform/transform_transitive_reduction_test.go
index 250978701..3f4cdb0dc 100644
--- a/terraform/transform_transitive_reduction_test.go
+++ b/terraform/transform_transitive_reduction_test.go
@@ -31,8 +31,8 @@ func TestTransitiveReductionTransformer(t *testing.T) {
 	{
 		transform := &AttachSchemaTransformer{
 			Schemas: &Schemas{
-				Providers: map[string]*ProviderSchema{
-					"aws": {
+				Providers: map[addrs.Provider]*ProviderSchema{
+					addrs.NewLegacyProvider("aws"): {
 						ResourceTypes: map[string]*configschema.Block{
 							"aws_instance": &configschema.Block{
 								Attributes: map[string]*configschema.Attribute{
diff --git a/terraform/util.go b/terraform/util.go
index 5428cd5a0..7966b58dd 100644
--- a/terraform/util.go
+++ b/terraform/util.go
@@ -12,8 +12,8 @@ type Semaphore chan struct{}
 // NewSemaphore creates a semaphore that allows up
 // to a given limit of simultaneous acquisitions
 func NewSemaphore(n int) Semaphore {
-	if n == 0 {
-		panic("semaphore with limit 0")
+	if n <= 0 {
+		panic("semaphore with limit <=0")
 	}
 	ch := make(chan struct{}, n)
 	return Semaphore(ch)
 }
diff --git a/terraform/variables.go b/terraform/variables.go
index 60408b88c..14f6a3ccf 100644
--- a/terraform/variables.go
+++ b/terraform/variables.go
@@ -78,7 +78,7 @@ func (v ValueSourceType) GoString() string {
 	return fmt.Sprintf("terraform.%s", v)
 }
 
-//go:generate stringer -type ValueSourceType
+//go:generate go run golang.org/x/tools/cmd/stringer -type ValueSourceType
 
 // InputValues is a map of InputValue instances.
 type InputValues map[string]*InputValue
diff --git a/tfdiags/consolidate_warnings.go b/tfdiags/consolidate_warnings.go
new file mode 100644
index 000000000..06f3d52cc
--- /dev/null
+++ b/tfdiags/consolidate_warnings.go
@@ -0,0 +1,146 @@
+package tfdiags
+
+import "fmt"
+
+// ConsolidateWarnings checks if there is an unreasonable number of warnings
+// with the same summary in the receiver and, if so, returns a new diagnostics
+// with some of those warnings consolidated into a single warning in order
+// to reduce the verbosity of the output.
+//
+// This mechanism is here primarily for diagnostics printed out at the CLI. In
+// other contexts it is likely better to just return the warnings directly,
+// particularly if they are going to be interpreted by software rather than
+// by a human reader.
+//
+// The returned slice always has a separate backing array from the receiver,
+// but some diagnostic values themselves might be shared.
+//
+// The definition of "unreasonable" is given as the threshold argument. At most
+// that many warnings with the same summary will be shown.
+func (diags Diagnostics) ConsolidateWarnings(threshold int) Diagnostics {
+	if len(diags) == 0 {
+		return nil
+	}
+
+	newDiags := make(Diagnostics, 0, len(diags))
+
+	// We'll track how many times we've seen each warning summary so we can
+	// decide when to start consolidating. Once we _have_ started consolidating,
+	// we'll also track the object representing the consolidated warning
+	// so we can continue appending to it.
+	warningStats := make(map[string]int)
+	warningGroups := make(map[string]*warningGroup)
+
+	for _, diag := range diags {
+		severity := diag.Severity()
+		if severity != Warning || diag.Source().Subject == nil {
+			// Only warnings can get special treatment, and we only
+			// consolidate warnings that have source locations because
+			// our primary goal here is to deal with the situation where
+			// some configuration language feature is producing a warning
+			// each time it's used across a potentially-large config.
+			newDiags = newDiags.Append(diag)
+			continue
+		}
+
+		desc := diag.Description()
+		summary := desc.Summary
+		if g, ok := warningGroups[summary]; ok {
+			// We're already grouping this one, so we'll just continue it.
+			g.Append(diag)
+			continue
+		}
+
+		warningStats[summary]++
+		if warningStats[summary] == threshold {
+			// Initially creating the group doesn't really change anything
+			// visibly in the result, since a group with only one warning
+			// is just a passthrough anyway, but once we do this any additional
+			// warnings with the same summary will get appended to this group.
+			g := &warningGroup{}
+			newDiags = newDiags.Append(g)
+			warningGroups[summary] = g
+			g.Append(diag)
+			continue
+		}
+
+		// If this warning is not consolidating yet then we'll just append
+		// it directly.
+		newDiags = newDiags.Append(diag)
+	}
+
+	return newDiags
+}
+
+// A warningGroup is one or more warning diagnostics grouped together for
+// UI consolidation purposes.
+//
+// A warningGroup with only one diagnostic in it is just a passthrough for
+// that one diagnostic. If it has more than one then it will behave mostly
+// like the first one but its detail message will include an additional
+// sentence mentioning the consolidation. A warningGroup with no diagnostics
+// at all is invalid and will panic when used.
+type warningGroup struct {
+	Warnings Diagnostics
+}
+
+var _ Diagnostic = (*warningGroup)(nil)
+
+func (wg *warningGroup) Severity() Severity {
+	return wg.Warnings[0].Severity()
+}
+
+func (wg *warningGroup) Description() Description {
+	desc := wg.Warnings[0].Description()
+	if len(wg.Warnings) < 2 {
+		return desc
+	}
+	extraCount := len(wg.Warnings) - 1
+	var msg string
+	switch extraCount {
+	case 1:
+		msg = "(and one more similar warning elsewhere)"
+	default:
+		msg = fmt.Sprintf("(and %d more similar warnings elsewhere)", extraCount)
+	}
+	if desc.Detail != "" {
+		desc.Detail = desc.Detail + "\n\n" + msg
+	} else {
+		desc.Detail = msg
+	}
+	return desc
+}
+
+func (wg *warningGroup) Source() Source {
+	return wg.Warnings[0].Source()
+}
+
+func (wg *warningGroup) FromExpr() *FromExpr {
+	return wg.Warnings[0].FromExpr()
+}
+
+func (wg *warningGroup) Append(diag Diagnostic) {
+	if diag.Severity() != Warning {
+		panic("can't append a non-warning diagnostic to a warningGroup")
+	}
+	wg.Warnings = append(wg.Warnings, diag)
+}
+
+// WarningGroupSourceRanges can be used in conjunction with
+// Diagnostics.ConsolidateWarnings to recover the full set of original source
+// locations from a consolidated warning.
+//
+// For convenience, this function accepts any diagnostic and will just return
+// the single Source value from any diagnostic that isn't a warning group.
+func WarningGroupSourceRanges(diag Diagnostic) []Source { + wg, ok := diag.(*warningGroup) + if !ok { + return []Source{diag.Source()} + } + + ret := make([]Source, len(wg.Warnings)) + for i, wrappedDiag := range wg.Warnings { + ret[i] = wrappedDiag.Source() + } + return ret +} diff --git a/tfdiags/consolidate_warnings_test.go b/tfdiags/consolidate_warnings_test.go new file mode 100644 index 000000000..df94d4af8 --- /dev/null +++ b/tfdiags/consolidate_warnings_test.go @@ -0,0 +1,179 @@ +package tfdiags + +import ( + "fmt" + "testing" + + "github.com/google/go-cmp/cmp" + "github.com/hashicorp/hcl/v2" +) + +func TestConsolidateWarnings(t *testing.T) { + var diags Diagnostics + + for i := 0; i < 4; i++ { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagWarning, + Summary: "Warning 1", + Detail: fmt.Sprintf("This one has a subject %d", i), + Subject: &hcl.Range{ + Filename: "foo.tf", + Start: hcl.Pos{Line: 1, Column: 1, Byte: 0}, + End: hcl.Pos{Line: 1, Column: 1, Byte: 0}, + }, + }) + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Error 1", + Detail: fmt.Sprintf("This one has a subject %d", i), + Subject: &hcl.Range{ + Filename: "foo.tf", + Start: hcl.Pos{Line: 1, Column: 1, Byte: 0}, + End: hcl.Pos{Line: 1, Column: 1, Byte: 0}, + }, + }) + diags = diags.Append(Sourceless( + Warning, + "Warning 2", + fmt.Sprintf("This one is sourceless %d", i), + )) + diags = diags.Append(SimpleWarning("Warning 3")) + } + + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagWarning, + Summary: "Warning 4", + Detail: "Only one of this one", + Subject: &hcl.Range{ + Filename: "foo.tf", + Start: hcl.Pos{Line: 1, Column: 1, Byte: 0}, + End: hcl.Pos{Line: 1, Column: 1, Byte: 0}, + }, + }) + + // We're using ForRPC here to force the diagnostics to be of a consistent + // type that we can easily assert against below. 
+ got := diags.ConsolidateWarnings(2).ForRPC() + want := Diagnostics{ + // First set + &rpcFriendlyDiag{ + Severity_: Warning, + Summary_: "Warning 1", + Detail_: "This one has a subject 0", + Subject_: &SourceRange{ + Filename: "foo.tf", + Start: SourcePos{Line: 1, Column: 1, Byte: 0}, + End: SourcePos{Line: 1, Column: 1, Byte: 0}, + }, + }, + &rpcFriendlyDiag{ + Severity_: Error, + Summary_: "Error 1", + Detail_: "This one has a subject 0", + Subject_: &SourceRange{ + Filename: "foo.tf", + Start: SourcePos{Line: 1, Column: 1, Byte: 0}, + End: SourcePos{Line: 1, Column: 1, Byte: 0}, + }, + }, + &rpcFriendlyDiag{ + Severity_: Warning, + Summary_: "Warning 2", + Detail_: "This one is sourceless 0", + }, + &rpcFriendlyDiag{ + Severity_: Warning, + Summary_: "Warning 3", + }, + + // Second set (consolidation begins; note additional paragraph in Warning 1 detail) + &rpcFriendlyDiag{ + Severity_: Warning, + Summary_: "Warning 1", + Detail_: "This one has a subject 1\n\n(and 2 more similar warnings elsewhere)", + Subject_: &SourceRange{ + Filename: "foo.tf", + Start: SourcePos{Line: 1, Column: 1, Byte: 0}, + End: SourcePos{Line: 1, Column: 1, Byte: 0}, + }, + }, + &rpcFriendlyDiag{ + Severity_: Error, + Summary_: "Error 1", + Detail_: "This one has a subject 1", + Subject_: &SourceRange{ + Filename: "foo.tf", + Start: SourcePos{Line: 1, Column: 1, Byte: 0}, + End: SourcePos{Line: 1, Column: 1, Byte: 0}, + }, + }, + &rpcFriendlyDiag{ + Severity_: Warning, + Summary_: "Warning 2", + Detail_: "This one is sourceless 1", + }, + &rpcFriendlyDiag{ + Severity_: Warning, + Summary_: "Warning 3", + }, + + // Third set (no more Warning 1, because it's consolidated) + &rpcFriendlyDiag{ + Severity_: Error, + Summary_: "Error 1", + Detail_: "This one has a subject 2", + Subject_: &SourceRange{ + Filename: "foo.tf", + Start: SourcePos{Line: 1, Column: 1, Byte: 0}, + End: SourcePos{Line: 1, Column: 1, Byte: 0}, + }, + }, + &rpcFriendlyDiag{ + Severity_: Warning, + Summary_: "Warning 2", + Detail_: "This one is sourceless 2", + }, + &rpcFriendlyDiag{ + Severity_: Warning, + Summary_: "Warning 3", + }, + + // Fourth set (still no warning 1) + &rpcFriendlyDiag{ + Severity_: Error, + Summary_: "Error 1", + Detail_: "This one has a subject 3", + Subject_: &SourceRange{ + Filename: "foo.tf", + Start: SourcePos{Line: 1, Column: 1, Byte: 0}, + End: SourcePos{Line: 1, Column: 1, Byte: 0}, + }, + }, + &rpcFriendlyDiag{ + Severity_: Warning, + Summary_: "Warning 2", + Detail_: "This one is sourceless 3", + }, + &rpcFriendlyDiag{ + Severity_: Warning, + Summary_: "Warning 3", + }, + + // Special straggler warning gets to show up unconsolidated, because + // there is only one of it. 
+ &rpcFriendlyDiag{ + Severity_: Warning, + Summary_: "Warning 4", + Detail_: "Only one of this one", + Subject_: &SourceRange{ + Filename: "foo.tf", + Start: SourcePos{Line: 1, Column: 1, Byte: 0}, + End: SourcePos{Line: 1, Column: 1, Byte: 0}, + }, + }, + } + + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("wrong result\n%s", diff) + } +} diff --git a/tfdiags/contextual.go b/tfdiags/contextual.go index 59c06b70b..d55bc2f0c 100644 --- a/tfdiags/contextual.go +++ b/tfdiags/contextual.go @@ -308,8 +308,8 @@ func hclRangeFromIndexStepAndAttribute(idxStep cty.IndexStep, attr *hcl.Attribut } stepKey := idxStep.Key.AsString() for _, kvPair := range pairs { - key, err := kvPair.Key.Value(nil) - if err != nil { + key, diags := kvPair.Key.Value(nil) + if diags.HasErrors() { return attr.Expr.Range() } if key.AsString() == stepKey { diff --git a/tfdiags/diagnostic.go b/tfdiags/diagnostic.go index d39f24de4..a7699cf01 100644 --- a/tfdiags/diagnostic.go +++ b/tfdiags/diagnostic.go @@ -17,7 +17,7 @@ type Diagnostic interface { type Severity rune -//go:generate stringer -type=Severity +//go:generate go run golang.org/x/tools/cmd/stringer -type=Severity const ( Error Severity = 'E' diff --git a/tools/terraform-bundle/package.go b/tools/terraform-bundle/package.go index 154a79388..9ad41c74e 100644 --- a/tools/terraform-bundle/package.go +++ b/tools/terraform-bundle/package.go @@ -182,7 +182,7 @@ func (c *PackageCommand) Run(args []string) int { } else { //attempt to get from the public registry if not found locally c.ui.Output(fmt.Sprintf("- Checking for provider plugin on %s...", releaseHost)) - _, _, err := installer.Get(addrs.ProviderType{Name: name}, constraint) + _, _, err := installer.Get(addrs.NewLegacyProvider(name), constraint) if err != nil { c.ui.Error(fmt.Sprintf("- Failed to resolve %s provider %s: %s", name, constraint, err)) return 1 diff --git a/tools/tools.go b/tools/tools.go new file mode 100644 index 000000000..48135a7ed --- /dev/null +++ b/tools/tools.go @@ -0,0 +1,9 @@ +// +build tools + +package tools + +import ( + _ "github.com/golang/mock/mockgen" + _ "golang.org/x/tools/cmd/cover" + _ "golang.org/x/tools/cmd/stringer" +) diff --git a/vendor/github.com/Azure/azure-sdk-for-go/profiles/2017-03-09/resources/mgmt/resources/models.go b/vendor/github.com/Azure/azure-sdk-for-go/profiles/2017-03-09/resources/mgmt/resources/models.go index c1e080fbd..94a1a8b8e 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/profiles/2017-03-09/resources/mgmt/resources/models.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/profiles/2017-03-09/resources/mgmt/resources/models.go @@ -1,6 +1,6 @@ // +build go1.9 -// Copyright 2018 Microsoft Corporation +// Copyright 2019 Microsoft Corporation // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. 
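Aside, before more vendored code: the `tfdiags.ConsolidateWarnings` helper added earlier is used roughly as below. A usage sketch assuming the `github.com/hashicorp/terraform/tfdiags` import path as vendored in this tree; with a threshold of 1, every repeat of a sourced warning folds into its first occurrence:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/terraform/tfdiags"
)

func main() {
	var diags tfdiags.Diagnostics

	// Emit the same sourced warning several times, as a config language
	// feature used throughout a large configuration would.
	for i := 0; i < 5; i++ {
		diags = diags.Append(&hcl.Diagnostic{
			Severity: hcl.DiagWarning,
			Summary:  "Deprecated syntax",
			Detail:   fmt.Sprintf("Occurrence %d", i),
			Subject:  &hcl.Range{Filename: "main.tf"},
		})
	}

	// Keep at most one of each warning summary; the survivor's detail
	// gains an "(and N more similar warnings elsewhere)" note.
	consolidated := diags.ConsolidateWarnings(1)
	for _, d := range consolidated {
		desc := d.Description()
		fmt.Printf("%s: %s\n", desc.Summary, desc.Detail)
	}
}
```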
@@ -19,16 +19,16 @@ package resources -import original "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources" +import ( + "context" + + original "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources" +) const ( DefaultBaseURI = original.DefaultBaseURI ) -type BaseClient = original.BaseClient -type DeploymentOperationsClient = original.DeploymentOperationsClient -type DeploymentsClient = original.DeploymentsClient -type GroupsClient = original.GroupsClient type DeploymentMode = original.DeploymentMode const ( @@ -44,7 +44,10 @@ const ( type AliasPathType = original.AliasPathType type AliasType = original.AliasType +type BaseClient = original.BaseClient type BasicDependency = original.BasicDependency +type Client = original.Client +type CloudError = original.CloudError type DebugSetting = original.DebugSetting type Dependency = original.Dependency type Deployment = original.Deployment @@ -56,14 +59,18 @@ type DeploymentListResultIterator = original.DeploymentListResultIterator type DeploymentListResultPage = original.DeploymentListResultPage type DeploymentOperation = original.DeploymentOperation type DeploymentOperationProperties = original.DeploymentOperationProperties +type DeploymentOperationsClient = original.DeploymentOperationsClient type DeploymentOperationsListResult = original.DeploymentOperationsListResult type DeploymentOperationsListResultIterator = original.DeploymentOperationsListResultIterator type DeploymentOperationsListResultPage = original.DeploymentOperationsListResultPage type DeploymentProperties = original.DeploymentProperties type DeploymentPropertiesExtended = original.DeploymentPropertiesExtended +type DeploymentValidateResult = original.DeploymentValidateResult +type DeploymentsClient = original.DeploymentsClient type DeploymentsCreateOrUpdateFuture = original.DeploymentsCreateOrUpdateFuture type DeploymentsDeleteFuture = original.DeploymentsDeleteFuture -type DeploymentValidateResult = original.DeploymentValidateResult +type ErrorAdditionalInfo = original.ErrorAdditionalInfo +type ErrorResponse = original.ErrorResponse type ExportTemplateRequest = original.ExportTemplateRequest type GenericResource = original.GenericResource type GenericResourceFilter = original.GenericResourceFilter @@ -74,6 +81,7 @@ type GroupListResult = original.GroupListResult type GroupListResultIterator = original.GroupListResultIterator type GroupListResultPage = original.GroupListResultPage type GroupProperties = original.GroupProperties +type GroupsClient = original.GroupsClient type GroupsDeleteFuture = original.GroupsDeleteFuture type HTTPMessage = original.HTTPMessage type Identity = original.Identity @@ -91,70 +99,106 @@ type ProviderListResultIterator = original.ProviderListResultIterator type ProviderListResultPage = original.ProviderListResultPage type ProviderOperationDisplayProperties = original.ProviderOperationDisplayProperties type ProviderResourceType = original.ProviderResourceType +type ProvidersClient = original.ProvidersClient type Resource = original.Resource type Sku = original.Sku type SubResource = original.SubResource type TagCount = original.TagCount type TagDetails = original.TagDetails +type TagValue = original.TagValue +type TagsClient = original.TagsClient type TagsListResult = original.TagsListResult type TagsListResultIterator = original.TagsListResultIterator type TagsListResultPage = original.TagsListResultPage -type TagValue = original.TagValue type TargetResource = original.TargetResource +type 
TemplateHashResult = original.TemplateHashResult type TemplateLink = original.TemplateLink type UpdateFuture = original.UpdateFuture -type ProvidersClient = original.ProvidersClient -type Client = original.Client -type TagsClient = original.TagsClient func New(subscriptionID string) BaseClient { return original.New(subscriptionID) } -func NewWithBaseURI(baseURI string, subscriptionID string) BaseClient { - return original.NewWithBaseURI(baseURI, subscriptionID) -} -func NewDeploymentOperationsClient(subscriptionID string) DeploymentOperationsClient { - return original.NewDeploymentOperationsClient(subscriptionID) -} -func NewDeploymentOperationsClientWithBaseURI(baseURI string, subscriptionID string) DeploymentOperationsClient { - return original.NewDeploymentOperationsClientWithBaseURI(baseURI, subscriptionID) -} -func NewDeploymentsClient(subscriptionID string) DeploymentsClient { - return original.NewDeploymentsClient(subscriptionID) -} -func NewDeploymentsClientWithBaseURI(baseURI string, subscriptionID string) DeploymentsClient { - return original.NewDeploymentsClientWithBaseURI(baseURI, subscriptionID) -} -func NewGroupsClient(subscriptionID string) GroupsClient { - return original.NewGroupsClient(subscriptionID) -} -func NewGroupsClientWithBaseURI(baseURI string, subscriptionID string) GroupsClient { - return original.NewGroupsClientWithBaseURI(baseURI, subscriptionID) -} -func PossibleDeploymentModeValues() []DeploymentMode { - return original.PossibleDeploymentModeValues() -} -func PossibleResourceIdentityTypeValues() []ResourceIdentityType { - return original.PossibleResourceIdentityTypeValues() -} -func NewProvidersClient(subscriptionID string) ProvidersClient { - return original.NewProvidersClient(subscriptionID) -} -func NewProvidersClientWithBaseURI(baseURI string, subscriptionID string) ProvidersClient { - return original.NewProvidersClientWithBaseURI(baseURI, subscriptionID) -} func NewClient(subscriptionID string) Client { return original.NewClient(subscriptionID) } func NewClientWithBaseURI(baseURI string, subscriptionID string) Client { return original.NewClientWithBaseURI(baseURI, subscriptionID) } +func NewDeploymentListResultIterator(page DeploymentListResultPage) DeploymentListResultIterator { + return original.NewDeploymentListResultIterator(page) +} +func NewDeploymentListResultPage(getNextPage func(context.Context, DeploymentListResult) (DeploymentListResult, error)) DeploymentListResultPage { + return original.NewDeploymentListResultPage(getNextPage) +} +func NewDeploymentOperationsClient(subscriptionID string) DeploymentOperationsClient { + return original.NewDeploymentOperationsClient(subscriptionID) +} +func NewDeploymentOperationsClientWithBaseURI(baseURI string, subscriptionID string) DeploymentOperationsClient { + return original.NewDeploymentOperationsClientWithBaseURI(baseURI, subscriptionID) +} +func NewDeploymentOperationsListResultIterator(page DeploymentOperationsListResultPage) DeploymentOperationsListResultIterator { + return original.NewDeploymentOperationsListResultIterator(page) +} +func NewDeploymentOperationsListResultPage(getNextPage func(context.Context, DeploymentOperationsListResult) (DeploymentOperationsListResult, error)) DeploymentOperationsListResultPage { + return original.NewDeploymentOperationsListResultPage(getNextPage) +} +func NewDeploymentsClient(subscriptionID string) DeploymentsClient { + return original.NewDeploymentsClient(subscriptionID) +} +func NewDeploymentsClientWithBaseURI(baseURI string, subscriptionID string) 
DeploymentsClient { + return original.NewDeploymentsClientWithBaseURI(baseURI, subscriptionID) +} +func NewGroupListResultIterator(page GroupListResultPage) GroupListResultIterator { + return original.NewGroupListResultIterator(page) +} +func NewGroupListResultPage(getNextPage func(context.Context, GroupListResult) (GroupListResult, error)) GroupListResultPage { + return original.NewGroupListResultPage(getNextPage) +} +func NewGroupsClient(subscriptionID string) GroupsClient { + return original.NewGroupsClient(subscriptionID) +} +func NewGroupsClientWithBaseURI(baseURI string, subscriptionID string) GroupsClient { + return original.NewGroupsClientWithBaseURI(baseURI, subscriptionID) +} +func NewListResultIterator(page ListResultPage) ListResultIterator { + return original.NewListResultIterator(page) +} +func NewListResultPage(getNextPage func(context.Context, ListResult) (ListResult, error)) ListResultPage { + return original.NewListResultPage(getNextPage) +} +func NewProviderListResultIterator(page ProviderListResultPage) ProviderListResultIterator { + return original.NewProviderListResultIterator(page) +} +func NewProviderListResultPage(getNextPage func(context.Context, ProviderListResult) (ProviderListResult, error)) ProviderListResultPage { + return original.NewProviderListResultPage(getNextPage) +} +func NewProvidersClient(subscriptionID string) ProvidersClient { + return original.NewProvidersClient(subscriptionID) +} +func NewProvidersClientWithBaseURI(baseURI string, subscriptionID string) ProvidersClient { + return original.NewProvidersClientWithBaseURI(baseURI, subscriptionID) +} func NewTagsClient(subscriptionID string) TagsClient { return original.NewTagsClient(subscriptionID) } func NewTagsClientWithBaseURI(baseURI string, subscriptionID string) TagsClient { return original.NewTagsClientWithBaseURI(baseURI, subscriptionID) } +func NewTagsListResultIterator(page TagsListResultPage) TagsListResultIterator { + return original.NewTagsListResultIterator(page) +} +func NewTagsListResultPage(getNextPage func(context.Context, TagsListResult) (TagsListResult, error)) TagsListResultPage { + return original.NewTagsListResultPage(getNextPage) +} +func NewWithBaseURI(baseURI string, subscriptionID string) BaseClient { + return original.NewWithBaseURI(baseURI, subscriptionID) +} +func PossibleDeploymentModeValues() []DeploymentMode { + return original.PossibleDeploymentModeValues() +} +func PossibleResourceIdentityTypeValues() []ResourceIdentityType { + return original.PossibleResourceIdentityTypeValues() +} func UserAgent() string { return original.UserAgent() + " profiles/2017-03-09" } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/profiles/2017-03-09/storage/mgmt/storage/models.go b/vendor/github.com/Azure/azure-sdk-for-go/profiles/2017-03-09/storage/mgmt/storage/models.go index d638c2a81..1af1d6eb0 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/profiles/2017-03-09/storage/mgmt/storage/models.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/profiles/2017-03-09/storage/mgmt/storage/models.go @@ -1,6 +1,6 @@ // +build go1.9 -// Copyright 2018 Microsoft Corporation +// Copyright 2019 Microsoft Corporation // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. 
@@ -21,13 +21,10 @@ package storage import original "github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2016-01-01/storage" -type AccountsClient = original.AccountsClient - const ( DefaultBaseURI = original.DefaultBaseURI ) -type BaseClient = original.BaseClient type AccessTier = original.AccessTier const ( @@ -109,8 +106,10 @@ type AccountProperties = original.AccountProperties type AccountPropertiesCreateParameters = original.AccountPropertiesCreateParameters type AccountPropertiesUpdateParameters = original.AccountPropertiesUpdateParameters type AccountRegenerateKeyParameters = original.AccountRegenerateKeyParameters -type AccountsCreateFuture = original.AccountsCreateFuture type AccountUpdateParameters = original.AccountUpdateParameters +type AccountsClient = original.AccountsClient +type AccountsCreateFuture = original.AccountsCreateFuture +type BaseClient = original.BaseClient type CheckNameAvailabilityResult = original.CheckNameAvailabilityResult type CustomDomain = original.CustomDomain type Encryption = original.Encryption @@ -120,18 +119,24 @@ type Endpoints = original.Endpoints type Resource = original.Resource type Sku = original.Sku type Usage = original.Usage +type UsageClient = original.UsageClient type UsageListResult = original.UsageListResult type UsageName = original.UsageName -type UsageClient = original.UsageClient +func New(subscriptionID string) BaseClient { + return original.New(subscriptionID) +} func NewAccountsClient(subscriptionID string) AccountsClient { return original.NewAccountsClient(subscriptionID) } func NewAccountsClientWithBaseURI(baseURI string, subscriptionID string) AccountsClient { return original.NewAccountsClientWithBaseURI(baseURI, subscriptionID) } -func New(subscriptionID string) BaseClient { - return original.New(subscriptionID) +func NewUsageClient(subscriptionID string) UsageClient { + return original.NewUsageClient(subscriptionID) +} +func NewUsageClientWithBaseURI(baseURI string, subscriptionID string) UsageClient { + return original.NewUsageClientWithBaseURI(baseURI, subscriptionID) } func NewWithBaseURI(baseURI string, subscriptionID string) BaseClient { return original.NewWithBaseURI(baseURI, subscriptionID) @@ -163,12 +168,6 @@ func PossibleSkuTierValues() []SkuTier { func PossibleUsageUnitValues() []UsageUnit { return original.PossibleUsageUnitValues() } -func NewUsageClient(subscriptionID string) UsageClient { - return original.NewUsageClient(subscriptionID) -} -func NewUsageClientWithBaseURI(baseURI string, subscriptionID string) UsageClient { - return original.NewUsageClientWithBaseURI(baseURI, subscriptionID) -} func UserAgent() string { return original.UserAgent() + " profiles/2017-03-09" } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/applications.go b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/applications.go new file mode 100644 index 000000000..fab7676fa --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/applications.go @@ -0,0 +1,1177 @@ +package graphrbac + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/autorest/to" + "github.com/Azure/go-autorest/autorest/validation" + "github.com/Azure/go-autorest/tracing" + "net/http" +) + +// ApplicationsClient is the the Graph RBAC Management Client +type ApplicationsClient struct { + BaseClient +} + +// NewApplicationsClient creates an instance of the ApplicationsClient client. +func NewApplicationsClient(tenantID string) ApplicationsClient { + return NewApplicationsClientWithBaseURI(DefaultBaseURI, tenantID) +} + +// NewApplicationsClientWithBaseURI creates an instance of the ApplicationsClient client. +func NewApplicationsClientWithBaseURI(baseURI string, tenantID string) ApplicationsClient { + return ApplicationsClient{NewWithBaseURI(baseURI, tenantID)} +} + +// AddOwner add an owner to an application. +// Parameters: +// applicationObjectID - the object ID of the application to which to add the owner. +// parameters - the URL of the owner object, such as +// https://graph.windows.net/0b1f9851-1bf0-433f-aec3-cb9272f093dc/directoryObjects/f260bbc4-c254-447b-94cf-293b5ec434dd. +func (client ApplicationsClient) AddOwner(ctx context.Context, applicationObjectID string, parameters AddOwnerParameters) (result autorest.Response, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ApplicationsClient.AddOwner") + defer func() { + sc := -1 + if result.Response != nil { + sc = result.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.URL", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { + return result, validation.NewError("graphrbac.ApplicationsClient", "AddOwner", err.Error()) + } + + req, err := client.AddOwnerPreparer(ctx, applicationObjectID, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "AddOwner", nil, "Failure preparing request") + return + } + + resp, err := client.AddOwnerSender(req) + if err != nil { + result.Response = resp + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "AddOwner", resp, "Failure sending request") + return + } + + result, err = client.AddOwnerResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "AddOwner", resp, "Failure responding to request") + } + + return +} + +// AddOwnerPreparer prepares the AddOwner request. 
+func (client ApplicationsClient) AddOwnerPreparer(ctx context.Context, applicationObjectID string, parameters AddOwnerParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "applicationObjectId": autorest.Encode("path", applicationObjectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/applications/{applicationObjectId}/$links/owners", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// AddOwnerSender sends the AddOwner request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationsClient) AddOwnerSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// AddOwnerResponder handles the response to the AddOwner request. The method always +// closes the http.Response Body. +func (client ApplicationsClient) AddOwnerResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// Create create a new application. +// Parameters: +// parameters - the parameters for creating an application. +func (client ApplicationsClient) Create(ctx context.Context, parameters ApplicationCreateParameters) (result Application, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ApplicationsClient.Create") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.DisplayName", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { + return result, validation.NewError("graphrbac.ApplicationsClient", "Create", err.Error()) + } + + req, err := client.CreatePreparer(ctx, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "Create", nil, "Failure preparing request") + return + } + + resp, err := client.CreateSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "Create", resp, "Failure sending request") + return + } + + result, err = client.CreateResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "Create", resp, "Failure responding to request") + } + + return +} + +// CreatePreparer prepares the Create request. 
+func (client ApplicationsClient) CreatePreparer(ctx context.Context, parameters ApplicationCreateParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/applications", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CreateSender sends the Create request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationsClient) CreateSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// CreateResponder handles the response to the Create request. The method always +// closes the http.Response Body. +func (client ApplicationsClient) CreateResponder(resp *http.Response) (result Application, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Delete delete an application. +// Parameters: +// applicationObjectID - application object ID. +func (client ApplicationsClient) Delete(ctx context.Context, applicationObjectID string) (result autorest.Response, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ApplicationsClient.Delete") + defer func() { + sc := -1 + if result.Response != nil { + sc = result.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.DeletePreparer(ctx, applicationObjectID) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "Delete", nil, "Failure preparing request") + return + } + + resp, err := client.DeleteSender(req) + if err != nil { + result.Response = resp + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "Delete", resp, "Failure sending request") + return + } + + result, err = client.DeleteResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "Delete", resp, "Failure responding to request") + } + + return +} + +// DeletePreparer prepares the Delete request. +func (client ApplicationsClient) DeletePreparer(ctx context.Context, applicationObjectID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "applicationObjectId": autorest.Encode("path", applicationObjectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/applications/{applicationObjectId}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteSender sends the Delete request. 
The method will close the +// http.Response Body if it receives an error. +func (client ApplicationsClient) DeleteSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// DeleteResponder handles the response to the Delete request. The method always +// closes the http.Response Body. +func (client ApplicationsClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// Get get an application by object ID. +// Parameters: +// applicationObjectID - application object ID. +func (client ApplicationsClient) Get(ctx context.Context, applicationObjectID string) (result Application, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ApplicationsClient.Get") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.GetPreparer(ctx, applicationObjectID) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. +func (client ApplicationsClient) GetPreparer(ctx context.Context, applicationObjectID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "applicationObjectId": autorest.Encode("path", applicationObjectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/applications/{applicationObjectId}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationsClient) GetSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. 
+func (client ApplicationsClient) GetResponder(resp *http.Response) (result Application, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetServicePrincipalsIDByAppID gets an object id for a given application id from the current tenant. +// Parameters: +// applicationID - the application ID. +func (client ApplicationsClient) GetServicePrincipalsIDByAppID(ctx context.Context, applicationID string) (result ServicePrincipalObjectResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ApplicationsClient.GetServicePrincipalsIDByAppID") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.GetServicePrincipalsIDByAppIDPreparer(ctx, applicationID) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "GetServicePrincipalsIDByAppID", nil, "Failure preparing request") + return + } + + resp, err := client.GetServicePrincipalsIDByAppIDSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "GetServicePrincipalsIDByAppID", resp, "Failure sending request") + return + } + + result, err = client.GetServicePrincipalsIDByAppIDResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "GetServicePrincipalsIDByAppID", resp, "Failure responding to request") + } + + return +} + +// GetServicePrincipalsIDByAppIDPreparer prepares the GetServicePrincipalsIDByAppID request. +func (client ApplicationsClient) GetServicePrincipalsIDByAppIDPreparer(ctx context.Context, applicationID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "applicationID": autorest.Encode("path", applicationID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/servicePrincipalsByAppId/{applicationID}/objectId", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetServicePrincipalsIDByAppIDSender sends the GetServicePrincipalsIDByAppID request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationsClient) GetServicePrincipalsIDByAppIDSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// GetServicePrincipalsIDByAppIDResponder handles the response to the GetServicePrincipalsIDByAppID request. The method always +// closes the http.Response Body. 
+func (client ApplicationsClient) GetServicePrincipalsIDByAppIDResponder(resp *http.Response) (result ServicePrincipalObjectResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List lists applications by filter parameters. +// Parameters: +// filter - the filters to apply to the operation. +func (client ApplicationsClient) List(ctx context.Context, filter string) (result ApplicationListResultPage, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ApplicationsClient.List") + defer func() { + sc := -1 + if result.alr.Response.Response != nil { + sc = result.alr.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + result.fn = func(ctx context.Context, lastResult ApplicationListResult) (ApplicationListResult, error) { + if lastResult.OdataNextLink == nil || len(to.String(lastResult.OdataNextLink)) < 1 { + return ApplicationListResult{}, nil + } + return client.ListNext(ctx, *lastResult.OdataNextLink) + } + req, err := client.ListPreparer(ctx, filter) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.alr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "List", resp, "Failure sending request") + return + } + + result.alr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client ApplicationsClient) ListPreparer(ctx context.Context, filter string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(filter) > 0 { + queryParameters["$filter"] = autorest.Encode("query", filter) + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/applications", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationsClient) ListSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client ApplicationsClient) ListResponder(resp *http.Response) (result ApplicationListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. 
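+// An editorial sketch of the iterator pattern this enables; NotDone, Next
+// and Value are the generated ApplicationListResultIterator methods:
+//
+//	it, err := client.ListComplete(ctx, "")
+//	for err == nil && it.NotDone() {
+//		app := it.Value() // graphrbac.Application
+//		_ = app           // use *app.DisplayName, *app.ObjectID, ...
+//		err = it.Next()
+//	}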
+func (client ApplicationsClient) ListComplete(ctx context.Context, filter string) (result ApplicationListResultIterator, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ApplicationsClient.List") + defer func() { + sc := -1 + if result.Response().Response.Response != nil { + sc = result.page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + result.page, err = client.List(ctx, filter) + return +} + +// ListKeyCredentials get the keyCredentials associated with an application. +// Parameters: +// applicationObjectID - application object ID. +func (client ApplicationsClient) ListKeyCredentials(ctx context.Context, applicationObjectID string) (result KeyCredentialListResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ApplicationsClient.ListKeyCredentials") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.ListKeyCredentialsPreparer(ctx, applicationObjectID) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "ListKeyCredentials", nil, "Failure preparing request") + return + } + + resp, err := client.ListKeyCredentialsSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "ListKeyCredentials", resp, "Failure sending request") + return + } + + result, err = client.ListKeyCredentialsResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "ListKeyCredentials", resp, "Failure responding to request") + } + + return +} + +// ListKeyCredentialsPreparer prepares the ListKeyCredentials request. +func (client ApplicationsClient) ListKeyCredentialsPreparer(ctx context.Context, applicationObjectID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "applicationObjectId": autorest.Encode("path", applicationObjectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/applications/{applicationObjectId}/keyCredentials", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListKeyCredentialsSender sends the ListKeyCredentials request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationsClient) ListKeyCredentialsSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// ListKeyCredentialsResponder handles the response to the ListKeyCredentials request. The method always +// closes the http.Response Body. 
+func (client ApplicationsClient) ListKeyCredentialsResponder(resp *http.Response) (result KeyCredentialListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// ListNext gets a list of applications from the current tenant. +// Parameters: +// nextLink - next link for the list operation. +func (client ApplicationsClient) ListNext(ctx context.Context, nextLink string) (result ApplicationListResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ApplicationsClient.ListNext") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.ListNextPreparer(ctx, nextLink) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "ListNext", nil, "Failure preparing request") + return + } + + resp, err := client.ListNextSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "ListNext", resp, "Failure sending request") + return + } + + result, err = client.ListNextResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "ListNext", resp, "Failure responding to request") + } + + return +} + +// ListNextPreparer prepares the ListNext request. +func (client ApplicationsClient) ListNextPreparer(ctx context.Context, nextLink string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "nextLink": nextLink, + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/{nextLink}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListNextSender sends the ListNext request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationsClient) ListNextSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// ListNextResponder handles the response to the ListNext request. The method always +// closes the http.Response Body. +func (client ApplicationsClient) ListNextResponder(resp *http.Response) (result ApplicationListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// ListOwners the owners are a set of non-admin users who are allowed to modify this object. +// Parameters: +// applicationObjectID - the object ID of the application for which to get owners. 
+func (client ApplicationsClient) ListOwners(ctx context.Context, applicationObjectID string) (result DirectoryObjectListResultPage, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ApplicationsClient.ListOwners") + defer func() { + sc := -1 + if result.dolr.Response.Response != nil { + sc = result.dolr.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + result.fn = client.listOwnersNextResults + req, err := client.ListOwnersPreparer(ctx, applicationObjectID) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "ListOwners", nil, "Failure preparing request") + return + } + + resp, err := client.ListOwnersSender(req) + if err != nil { + result.dolr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "ListOwners", resp, "Failure sending request") + return + } + + result.dolr, err = client.ListOwnersResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "ListOwners", resp, "Failure responding to request") + } + + return +} + +// ListOwnersPreparer prepares the ListOwners request. +func (client ApplicationsClient) ListOwnersPreparer(ctx context.Context, applicationObjectID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "applicationObjectId": autorest.Encode("path", applicationObjectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/applications/{applicationObjectId}/owners", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListOwnersSender sends the ListOwners request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationsClient) ListOwnersSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// ListOwnersResponder handles the response to the ListOwners request. The method always +// closes the http.Response Body. +func (client ApplicationsClient) ListOwnersResponder(resp *http.Response) (result DirectoryObjectListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listOwnersNextResults retrieves the next set of results, if any. 
+func (client ApplicationsClient) listOwnersNextResults(ctx context.Context, lastResults DirectoryObjectListResult) (result DirectoryObjectListResult, err error) { + req, err := lastResults.directoryObjectListResultPreparer(ctx) + if err != nil { + return result, autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "listOwnersNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListOwnersSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "listOwnersNextResults", resp, "Failure sending next results request") + } + result, err = client.ListOwnersResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "listOwnersNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListOwnersComplete enumerates all values, automatically crossing page boundaries as required. +func (client ApplicationsClient) ListOwnersComplete(ctx context.Context, applicationObjectID string) (result DirectoryObjectListResultIterator, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ApplicationsClient.ListOwners") + defer func() { + sc := -1 + if result.Response().Response.Response != nil { + sc = result.page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + result.page, err = client.ListOwners(ctx, applicationObjectID) + return +} + +// ListPasswordCredentials get the passwordCredentials associated with an application. +// Parameters: +// applicationObjectID - application object ID. +func (client ApplicationsClient) ListPasswordCredentials(ctx context.Context, applicationObjectID string) (result PasswordCredentialListResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ApplicationsClient.ListPasswordCredentials") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.ListPasswordCredentialsPreparer(ctx, applicationObjectID) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "ListPasswordCredentials", nil, "Failure preparing request") + return + } + + resp, err := client.ListPasswordCredentialsSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "ListPasswordCredentials", resp, "Failure sending request") + return + } + + result, err = client.ListPasswordCredentialsResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "ListPasswordCredentials", resp, "Failure responding to request") + } + + return +} + +// ListPasswordCredentialsPreparer prepares the ListPasswordCredentials request. 
+func (client ApplicationsClient) ListPasswordCredentialsPreparer(ctx context.Context, applicationObjectID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "applicationObjectId": autorest.Encode("path", applicationObjectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/applications/{applicationObjectId}/passwordCredentials", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListPasswordCredentialsSender sends the ListPasswordCredentials request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationsClient) ListPasswordCredentialsSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// ListPasswordCredentialsResponder handles the response to the ListPasswordCredentials request. The method always +// closes the http.Response Body. +func (client ApplicationsClient) ListPasswordCredentialsResponder(resp *http.Response) (result PasswordCredentialListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Patch update an existing application. +// Parameters: +// applicationObjectID - application object ID. +// parameters - parameters to update an existing application. +func (client ApplicationsClient) Patch(ctx context.Context, applicationObjectID string, parameters ApplicationUpdateParameters) (result autorest.Response, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ApplicationsClient.Patch") + defer func() { + sc := -1 + if result.Response != nil { + sc = result.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.PatchPreparer(ctx, applicationObjectID, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "Patch", nil, "Failure preparing request") + return + } + + resp, err := client.PatchSender(req) + if err != nil { + result.Response = resp + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "Patch", resp, "Failure sending request") + return + } + + result, err = client.PatchResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "Patch", resp, "Failure responding to request") + } + + return +} + +// PatchPreparer prepares the Patch request. 
+func (client ApplicationsClient) PatchPreparer(ctx context.Context, applicationObjectID string, parameters ApplicationUpdateParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "applicationObjectId": autorest.Encode("path", applicationObjectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/applications/{applicationObjectId}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// PatchSender sends the Patch request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationsClient) PatchSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// PatchResponder handles the response to the Patch request. The method always +// closes the http.Response Body. +func (client ApplicationsClient) PatchResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// RemoveOwner remove a member from owners. +// Parameters: +// applicationObjectID - the object ID of the application from which to remove the owner. +// ownerObjectID - owner object id +func (client ApplicationsClient) RemoveOwner(ctx context.Context, applicationObjectID string, ownerObjectID string) (result autorest.Response, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ApplicationsClient.RemoveOwner") + defer func() { + sc := -1 + if result.Response != nil { + sc = result.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.RemoveOwnerPreparer(ctx, applicationObjectID, ownerObjectID) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "RemoveOwner", nil, "Failure preparing request") + return + } + + resp, err := client.RemoveOwnerSender(req) + if err != nil { + result.Response = resp + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "RemoveOwner", resp, "Failure sending request") + return + } + + result, err = client.RemoveOwnerResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "RemoveOwner", resp, "Failure responding to request") + } + + return +} + +// RemoveOwnerPreparer prepares the RemoveOwner request. 
+func (client ApplicationsClient) RemoveOwnerPreparer(ctx context.Context, applicationObjectID string, ownerObjectID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "applicationObjectId": autorest.Encode("path", applicationObjectID), + "ownerObjectId": autorest.Encode("path", ownerObjectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/applications/{applicationObjectId}/$links/owners/{ownerObjectId}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// RemoveOwnerSender sends the RemoveOwner request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationsClient) RemoveOwnerSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// RemoveOwnerResponder handles the response to the RemoveOwner request. The method always +// closes the http.Response Body. +func (client ApplicationsClient) RemoveOwnerResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// UpdateKeyCredentials update the keyCredentials associated with an application. +// Parameters: +// applicationObjectID - application object ID. +// parameters - parameters to update the keyCredentials of an existing application. +func (client ApplicationsClient) UpdateKeyCredentials(ctx context.Context, applicationObjectID string, parameters KeyCredentialsUpdateParameters) (result autorest.Response, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ApplicationsClient.UpdateKeyCredentials") + defer func() { + sc := -1 + if result.Response != nil { + sc = result.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.UpdateKeyCredentialsPreparer(ctx, applicationObjectID, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "UpdateKeyCredentials", nil, "Failure preparing request") + return + } + + resp, err := client.UpdateKeyCredentialsSender(req) + if err != nil { + result.Response = resp + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "UpdateKeyCredentials", resp, "Failure sending request") + return + } + + result, err = client.UpdateKeyCredentialsResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "UpdateKeyCredentials", resp, "Failure responding to request") + } + + return +} + +// UpdateKeyCredentialsPreparer prepares the UpdateKeyCredentials request. 
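+// Editorial note: the PATCH issued here replaces the application's whole
+// keyCredentials collection, so a caller adding a key typically reads the
+// current list first and sends it back plus the new entry (newCred below is
+// a placeholder graphrbac.KeyCredential):
+//
+//	existing, _ := client.ListKeyCredentials(ctx, appObjID)
+//	creds := append(*existing.Value, newCred)
+//	_, err := client.UpdateKeyCredentials(ctx, appObjID, graphrbac.KeyCredentialsUpdateParameters{Value: &creds})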
+func (client ApplicationsClient) UpdateKeyCredentialsPreparer(ctx context.Context, applicationObjectID string, parameters KeyCredentialsUpdateParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "applicationObjectId": autorest.Encode("path", applicationObjectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/applications/{applicationObjectId}/keyCredentials", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateKeyCredentialsSender sends the UpdateKeyCredentials request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationsClient) UpdateKeyCredentialsSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// UpdateKeyCredentialsResponder handles the response to the UpdateKeyCredentials request. The method always +// closes the http.Response Body. +func (client ApplicationsClient) UpdateKeyCredentialsResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// UpdatePasswordCredentials update passwordCredentials associated with an application. +// Parameters: +// applicationObjectID - application object ID. +// parameters - parameters to update passwordCredentials of an existing application. +func (client ApplicationsClient) UpdatePasswordCredentials(ctx context.Context, applicationObjectID string, parameters PasswordCredentialsUpdateParameters) (result autorest.Response, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ApplicationsClient.UpdatePasswordCredentials") + defer func() { + sc := -1 + if result.Response != nil { + sc = result.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.UpdatePasswordCredentialsPreparer(ctx, applicationObjectID, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "UpdatePasswordCredentials", nil, "Failure preparing request") + return + } + + resp, err := client.UpdatePasswordCredentialsSender(req) + if err != nil { + result.Response = resp + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "UpdatePasswordCredentials", resp, "Failure sending request") + return + } + + result, err = client.UpdatePasswordCredentialsResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ApplicationsClient", "UpdatePasswordCredentials", resp, "Failure responding to request") + } + + return +} + +// UpdatePasswordCredentialsPreparer prepares the UpdatePasswordCredentials request. 
+func (client ApplicationsClient) UpdatePasswordCredentialsPreparer(ctx context.Context, applicationObjectID string, parameters PasswordCredentialsUpdateParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "applicationObjectId": autorest.Encode("path", applicationObjectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/applications/{applicationObjectId}/passwordCredentials", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdatePasswordCredentialsSender sends the UpdatePasswordCredentials request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationsClient) UpdatePasswordCredentialsSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// UpdatePasswordCredentialsResponder handles the response to the UpdatePasswordCredentials request. The method always +// closes the http.Response Body. +func (client ApplicationsClient) UpdatePasswordCredentialsResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/client.go b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/client.go new file mode 100644 index 000000000..6f46fe18b --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/client.go @@ -0,0 +1,51 @@ +// Package graphrbac implements the Azure ARM Graphrbac service API version 1.6. +// +// The Graph RBAC Management Client +package graphrbac + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "github.com/Azure/go-autorest/autorest" +) + +const ( + // DefaultBaseURI is the default URI used for the service Graphrbac + DefaultBaseURI = "https://graph.windows.net" +) + +// BaseClient is the base client for Graphrbac. +type BaseClient struct { + autorest.Client + BaseURI string + TenantID string +} + +// New creates an instance of the BaseClient client. 
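+// The returned client carries no credentials; set an Authorizer on it (or
+// on a service client that embeds it) before issuing requests. Editorial
+// sketch, assuming the github.com/Azure/go-autorest/autorest/azure/auth
+// helper:
+//
+//	base := graphrbac.New(tenantID)
+//	authorizer, err := auth.NewAuthorizerFromEnvironmentWithResource("https://graph.windows.net")
+//	if err == nil {
+//		base.Authorizer = authorizer
+//	}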
+func New(tenantID string) BaseClient {
+	return NewWithBaseURI(DefaultBaseURI, tenantID)
+}
+
+// NewWithBaseURI creates an instance of the BaseClient client.
+func NewWithBaseURI(baseURI string, tenantID string) BaseClient {
+	return BaseClient{
+		Client:   autorest.NewClientWithUserAgent(UserAgent()),
+		BaseURI:  baseURI,
+		TenantID: tenantID,
+	}
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/deletedapplications.go b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/deletedapplications.go
new file mode 100644
index 000000000..fe3b2da5d
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/deletedapplications.go
@@ -0,0 +1,365 @@
+package graphrbac
+
+// Copyright (c) Microsoft and contributors. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+	"context"
+	"github.com/Azure/go-autorest/autorest"
+	"github.com/Azure/go-autorest/autorest/azure"
+	"github.com/Azure/go-autorest/autorest/to"
+	"github.com/Azure/go-autorest/tracing"
+	"net/http"
+)
+
+// DeletedApplicationsClient is the Graph RBAC Management Client
+type DeletedApplicationsClient struct {
+	BaseClient
+}
+
+// NewDeletedApplicationsClient creates an instance of the DeletedApplicationsClient client.
+func NewDeletedApplicationsClient(tenantID string) DeletedApplicationsClient {
+	return NewDeletedApplicationsClientWithBaseURI(DefaultBaseURI, tenantID)
+}
+
+// NewDeletedApplicationsClientWithBaseURI creates an instance of the DeletedApplicationsClient client.
+func NewDeletedApplicationsClientWithBaseURI(baseURI string, tenantID string) DeletedApplicationsClient {
+	return DeletedApplicationsClient{NewWithBaseURI(baseURI, tenantID)}
+}
+
+// HardDelete hard-delete an application.
+// Parameters:
+// applicationObjectID - application object ID.
+func (client DeletedApplicationsClient) HardDelete(ctx context.Context, applicationObjectID string) (result autorest.Response, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DeletedApplicationsClient.HardDelete") + defer func() { + sc := -1 + if result.Response != nil { + sc = result.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.HardDeletePreparer(ctx, applicationObjectID) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.DeletedApplicationsClient", "HardDelete", nil, "Failure preparing request") + return + } + + resp, err := client.HardDeleteSender(req) + if err != nil { + result.Response = resp + err = autorest.NewErrorWithError(err, "graphrbac.DeletedApplicationsClient", "HardDelete", resp, "Failure sending request") + return + } + + result, err = client.HardDeleteResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.DeletedApplicationsClient", "HardDelete", resp, "Failure responding to request") + } + + return +} + +// HardDeletePreparer prepares the HardDelete request. +func (client DeletedApplicationsClient) HardDeletePreparer(ctx context.Context, applicationObjectID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "applicationObjectId": autorest.Encode("path", applicationObjectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/deletedApplications/{applicationObjectId}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// HardDeleteSender sends the HardDelete request. The method will close the +// http.Response Body if it receives an error. +func (client DeletedApplicationsClient) HardDeleteSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// HardDeleteResponder handles the response to the HardDelete request. The method always +// closes the http.Response Body. +func (client DeletedApplicationsClient) HardDeleteResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// List gets a list of deleted applications in the directory. +// Parameters: +// filter - the filter to apply to the operation. 
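+// When non-empty, the filter is sent as the OData $filter query parameter,
+// e.g. (editorial example):
+//
+//	page, err := client.List(ctx, "startswith(displayName,'example')")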
+func (client DeletedApplicationsClient) List(ctx context.Context, filter string) (result ApplicationListResultPage, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DeletedApplicationsClient.List") + defer func() { + sc := -1 + if result.alr.Response.Response != nil { + sc = result.alr.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + result.fn = func(ctx context.Context, lastResult ApplicationListResult) (ApplicationListResult, error) { + if lastResult.OdataNextLink == nil || len(to.String(lastResult.OdataNextLink)) < 1 { + return ApplicationListResult{}, nil + } + return client.ListNext(ctx, *lastResult.OdataNextLink) + } + req, err := client.ListPreparer(ctx, filter) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.DeletedApplicationsClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.alr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.DeletedApplicationsClient", "List", resp, "Failure sending request") + return + } + + result.alr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.DeletedApplicationsClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client DeletedApplicationsClient) ListPreparer(ctx context.Context, filter string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(filter) > 0 { + queryParameters["$filter"] = autorest.Encode("query", filter) + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/deletedApplications", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client DeletedApplicationsClient) ListSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client DeletedApplicationsClient) ListResponder(resp *http.Response) (result ApplicationListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. 
+func (client DeletedApplicationsClient) ListComplete(ctx context.Context, filter string) (result ApplicationListResultIterator, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DeletedApplicationsClient.List") + defer func() { + sc := -1 + if result.Response().Response.Response != nil { + sc = result.page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + result.page, err = client.List(ctx, filter) + return +} + +// ListNext gets a list of deleted applications in the directory. +// Parameters: +// nextLink - next link for the list operation. +func (client DeletedApplicationsClient) ListNext(ctx context.Context, nextLink string) (result ApplicationListResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DeletedApplicationsClient.ListNext") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.ListNextPreparer(ctx, nextLink) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.DeletedApplicationsClient", "ListNext", nil, "Failure preparing request") + return + } + + resp, err := client.ListNextSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.DeletedApplicationsClient", "ListNext", resp, "Failure sending request") + return + } + + result, err = client.ListNextResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.DeletedApplicationsClient", "ListNext", resp, "Failure responding to request") + } + + return +} + +// ListNextPreparer prepares the ListNext request. +func (client DeletedApplicationsClient) ListNextPreparer(ctx context.Context, nextLink string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "nextLink": nextLink, + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/{nextLink}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListNextSender sends the ListNext request. The method will close the +// http.Response Body if it receives an error. +func (client DeletedApplicationsClient) ListNextSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// ListNextResponder handles the response to the ListNext request. The method always +// closes the http.Response Body. +func (client DeletedApplicationsClient) ListNextResponder(resp *http.Response) (result ApplicationListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Restore restores the deleted application in the directory. +// Parameters: +// objectID - application object ID. 
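+// Together with HardDelete this covers both exits from the soft-deleted
+// state: an application is either restored here or removed permanently.
+// Editorial sketch (objectID is a placeholder):
+//
+//	app, err := client.Restore(ctx, objectID) // undelete
+//	// or: _, err = client.HardDelete(ctx, objectID)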
+func (client DeletedApplicationsClient) Restore(ctx context.Context, objectID string) (result Application, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DeletedApplicationsClient.Restore") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.RestorePreparer(ctx, objectID) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.DeletedApplicationsClient", "Restore", nil, "Failure preparing request") + return + } + + resp, err := client.RestoreSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.DeletedApplicationsClient", "Restore", resp, "Failure sending request") + return + } + + result, err = client.RestoreResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.DeletedApplicationsClient", "Restore", resp, "Failure responding to request") + } + + return +} + +// RestorePreparer prepares the Restore request. +func (client DeletedApplicationsClient) RestorePreparer(ctx context.Context, objectID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "objectId": autorest.Encode("path", objectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/deletedApplications/{objectId}/restore", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// RestoreSender sends the Restore request. The method will close the +// http.Response Body if it receives an error. +func (client DeletedApplicationsClient) RestoreSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// RestoreResponder handles the response to the Restore request. The method always +// closes the http.Response Body. +func (client DeletedApplicationsClient) RestoreResponder(resp *http.Response) (result Application, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/domains.go b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/domains.go new file mode 100644 index 000000000..93c6cac1e --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/domains.go @@ -0,0 +1,193 @@ +package graphrbac + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+	"context"
+	"github.com/Azure/go-autorest/autorest"
+	"github.com/Azure/go-autorest/autorest/azure"
+	"github.com/Azure/go-autorest/tracing"
+	"net/http"
+)
+
+// DomainsClient is the Graph RBAC Management Client
+type DomainsClient struct {
+	BaseClient
+}
+
+// NewDomainsClient creates an instance of the DomainsClient client.
+func NewDomainsClient(tenantID string) DomainsClient {
+	return NewDomainsClientWithBaseURI(DefaultBaseURI, tenantID)
+}
+
+// NewDomainsClientWithBaseURI creates an instance of the DomainsClient client.
+func NewDomainsClientWithBaseURI(baseURI string, tenantID string) DomainsClient {
+	return DomainsClient{NewWithBaseURI(baseURI, tenantID)}
+}
+
+// Get gets a specific domain in the current tenant.
+// Parameters:
+// domainName - name of the domain.
+func (client DomainsClient) Get(ctx context.Context, domainName string) (result Domain, err error) {
+	if tracing.IsEnabled() {
+		ctx = tracing.StartSpan(ctx, fqdn+"/DomainsClient.Get")
+		defer func() {
+			sc := -1
+			if result.Response.Response != nil {
+				sc = result.Response.Response.StatusCode
+			}
+			tracing.EndSpan(ctx, sc, err)
+		}()
+	}
+	req, err := client.GetPreparer(ctx, domainName)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.DomainsClient", "Get", nil, "Failure preparing request")
+		return
+	}
+
+	resp, err := client.GetSender(req)
+	if err != nil {
+		result.Response = autorest.Response{Response: resp}
+		err = autorest.NewErrorWithError(err, "graphrbac.DomainsClient", "Get", resp, "Failure sending request")
+		return
+	}
+
+	result, err = client.GetResponder(resp)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.DomainsClient", "Get", resp, "Failure responding to request")
+	}
+
+	return
+}
+
+// GetPreparer prepares the Get request.
+func (client DomainsClient) GetPreparer(ctx context.Context, domainName string) (*http.Request, error) {
+	pathParameters := map[string]interface{}{
+		"domainName": autorest.Encode("path", domainName),
+		"tenantID":   autorest.Encode("path", client.TenantID),
+	}
+
+	const APIVersion = "1.6"
+	queryParameters := map[string]interface{}{
+		"api-version": APIVersion,
+	}
+
+	preparer := autorest.CreatePreparer(
+		autorest.AsGet(),
+		autorest.WithBaseURL(client.BaseURI),
+		autorest.WithPathParameters("/{tenantID}/domains/{domainName}", pathParameters),
+		autorest.WithQueryParameters(queryParameters))
+	return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// GetSender sends the Get request. The method will close the
+// http.Response Body if it receives an error.
+func (client DomainsClient) GetSender(req *http.Request) (*http.Response, error) {
+	sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+	return autorest.SendWithSender(client, req, sd...)
+}
+
+// GetResponder handles the response to the Get request.
The method always +// closes the http.Response Body. +func (client DomainsClient) GetResponder(resp *http.Response) (result Domain, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List gets a list of domains for the current tenant. +// Parameters: +// filter - the filter to apply to the operation. +func (client DomainsClient) List(ctx context.Context, filter string) (result DomainListResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DomainsClient.List") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.ListPreparer(ctx, filter) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.DomainsClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.DomainsClient", "List", resp, "Failure sending request") + return + } + + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.DomainsClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client DomainsClient) ListPreparer(ctx context.Context, filter string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(filter) > 0 { + queryParameters["$filter"] = autorest.Encode("query", filter) + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/domains", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client DomainsClient) ListSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client DomainsClient) ListResponder(resp *http.Response) (result DomainListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/groups.go b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/groups.go new file mode 100644 index 000000000..4ed37e94f --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/groups.go @@ -0,0 +1,1224 @@ +package graphrbac + +// Copyright (c) Microsoft and contributors. All rights reserved. 
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+	"context"
+	"github.com/Azure/go-autorest/autorest"
+	"github.com/Azure/go-autorest/autorest/azure"
+	"github.com/Azure/go-autorest/autorest/to"
+	"github.com/Azure/go-autorest/autorest/validation"
+	"github.com/Azure/go-autorest/tracing"
+	"net/http"
+)
+
+// GroupsClient is the Graph RBAC Management Client
+type GroupsClient struct {
+	BaseClient
+}
+
+// NewGroupsClient creates an instance of the GroupsClient client.
+func NewGroupsClient(tenantID string) GroupsClient {
+	return NewGroupsClientWithBaseURI(DefaultBaseURI, tenantID)
+}
+
+// NewGroupsClientWithBaseURI creates an instance of the GroupsClient client.
+func NewGroupsClientWithBaseURI(baseURI string, tenantID string) GroupsClient {
+	return GroupsClient{NewWithBaseURI(baseURI, tenantID)}
+}
+
+// AddMember add a member to a group.
+// Parameters:
+// groupObjectID - the object ID of the group to which to add the member.
+// parameters - the URL of the member object, such as
+// https://graph.windows.net/0b1f9851-1bf0-433f-aec3-cb9272f093dc/directoryObjects/f260bbc4-c254-447b-94cf-293b5ec434dd.
+func (client GroupsClient) AddMember(ctx context.Context, groupObjectID string, parameters GroupAddMemberParameters) (result autorest.Response, err error) {
+	if tracing.IsEnabled() {
+		ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.AddMember")
+		defer func() {
+			sc := -1
+			if result.Response != nil {
+				sc = result.Response.StatusCode
+			}
+			tracing.EndSpan(ctx, sc, err)
+		}()
+	}
+	if err := validation.Validate([]validation.Validation{
+		{TargetValue: parameters,
+			Constraints: []validation.Constraint{{Target: "parameters.URL", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil {
+		return result, validation.NewError("graphrbac.GroupsClient", "AddMember", err.Error())
+	}
+
+	req, err := client.AddMemberPreparer(ctx, groupObjectID, parameters)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "AddMember", nil, "Failure preparing request")
+		return
+	}
+
+	resp, err := client.AddMemberSender(req)
+	if err != nil {
+		result.Response = resp
+		err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "AddMember", resp, "Failure sending request")
+		return
+	}
+
+	result, err = client.AddMemberResponder(resp)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "AddMember", resp, "Failure responding to request")
+	}
+
+	return
+}
+
+// AddMemberPreparer prepares the AddMember request.
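+//
+// The request body carries the member URL. A usage sketch for the AddMember
+// operation this preparer serves (tenantID, groupObjectID, memberObjectID and
+// authorizer are assumed placeholders):
+//
+//	client := graphrbac.NewGroupsClient(tenantID)
+//	client.Authorizer = authorizer
+//	memberURL := fmt.Sprintf("https://graph.windows.net/%s/directoryObjects/%s", tenantID, memberObjectID)
+//	_, err := client.AddMember(ctx, groupObjectID, graphrbac.GroupAddMemberParameters{URL: &memberURL})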
+func (client GroupsClient) AddMemberPreparer(ctx context.Context, groupObjectID string, parameters GroupAddMemberParameters) (*http.Request, error) {
+	pathParameters := map[string]interface{}{
+		"groupObjectId": autorest.Encode("path", groupObjectID),
+		"tenantID":      autorest.Encode("path", client.TenantID),
+	}
+
+	const APIVersion = "1.6"
+	queryParameters := map[string]interface{}{
+		"api-version": APIVersion,
+	}
+
+	preparer := autorest.CreatePreparer(
+		autorest.AsContentType("application/json; charset=utf-8"),
+		autorest.AsPost(),
+		autorest.WithBaseURL(client.BaseURI),
+		autorest.WithPathParameters("/{tenantID}/groups/{groupObjectId}/$links/members", pathParameters),
+		autorest.WithJSON(parameters),
+		autorest.WithQueryParameters(queryParameters))
+	return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// AddMemberSender sends the AddMember request. The method will close the
+// http.Response Body if it receives an error.
+func (client GroupsClient) AddMemberSender(req *http.Request) (*http.Response, error) {
+	sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+	return autorest.SendWithSender(client, req, sd...)
+}
+
+// AddMemberResponder handles the response to the AddMember request. The method always
+// closes the http.Response Body.
+func (client GroupsClient) AddMemberResponder(resp *http.Response) (result autorest.Response, err error) {
+	err = autorest.Respond(
+		resp,
+		client.ByInspecting(),
+		azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent),
+		autorest.ByClosing())
+	result.Response = resp
+	return
+}
+
+// AddOwner add an owner to a group.
+// Parameters:
+// objectID - the object ID of the group to which to add the owner.
+// parameters - the URL of the owner object, such as
+// https://graph.windows.net/0b1f9851-1bf0-433f-aec3-cb9272f093dc/directoryObjects/f260bbc4-c254-447b-94cf-293b5ec434dd.
+func (client GroupsClient) AddOwner(ctx context.Context, objectID string, parameters AddOwnerParameters) (result autorest.Response, err error) {
+	if tracing.IsEnabled() {
+		ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.AddOwner")
+		defer func() {
+			sc := -1
+			if result.Response != nil {
+				sc = result.Response.StatusCode
+			}
+			tracing.EndSpan(ctx, sc, err)
+		}()
+	}
+	if err := validation.Validate([]validation.Validation{
+		{TargetValue: parameters,
+			Constraints: []validation.Constraint{{Target: "parameters.URL", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil {
+		return result, validation.NewError("graphrbac.GroupsClient", "AddOwner", err.Error())
+	}
+
+	req, err := client.AddOwnerPreparer(ctx, objectID, parameters)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "AddOwner", nil, "Failure preparing request")
+		return
+	}
+
+	resp, err := client.AddOwnerSender(req)
+	if err != nil {
+		result.Response = resp
+		err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "AddOwner", resp, "Failure sending request")
+		return
+	}
+
+	result, err = client.AddOwnerResponder(resp)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "AddOwner", resp, "Failure responding to request")
+	}
+
+	return
+}
+
+// AddOwnerPreparer prepares the AddOwner request.
+func (client GroupsClient) AddOwnerPreparer(ctx context.Context, objectID string, parameters AddOwnerParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "objectId": autorest.Encode("path", objectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/groups/{objectId}/$links/owners", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// AddOwnerSender sends the AddOwner request. The method will close the +// http.Response Body if it receives an error. +func (client GroupsClient) AddOwnerSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// AddOwnerResponder handles the response to the AddOwner request. The method always +// closes the http.Response Body. +func (client GroupsClient) AddOwnerResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// Create create a group in the directory. +// Parameters: +// parameters - the parameters for the group to create. +func (client GroupsClient) Create(ctx context.Context, parameters GroupCreateParameters) (result ADGroup, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.Create") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.DisplayName", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.MailEnabled", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.MailNickname", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.SecurityEnabled", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { + return result, validation.NewError("graphrbac.GroupsClient", "Create", err.Error()) + } + + req, err := client.CreatePreparer(ctx, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "Create", nil, "Failure preparing request") + return + } + + resp, err := client.CreateSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "Create", resp, "Failure sending request") + return + } + + result, err = client.CreateResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "Create", resp, "Failure responding to request") + } + + return +} + +// CreatePreparer prepares the Create request. 
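+//
+// A sketch of the parameters Create validates as required above (values are
+// placeholders; to.StringPtr and to.BoolPtr are the go-autorest pointer
+// helpers already imported by this file):
+//
+//	params := graphrbac.GroupCreateParameters{
+//		DisplayName:     to.StringPtr("example-group"),
+//		MailEnabled:     to.BoolPtr(false),
+//		MailNickname:    to.StringPtr("example-group"),
+//		SecurityEnabled: to.BoolPtr(true),
+//	}
+//	group, err := client.Create(ctx, params)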
+func (client GroupsClient) CreatePreparer(ctx context.Context, parameters GroupCreateParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/groups", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CreateSender sends the Create request. The method will close the +// http.Response Body if it receives an error. +func (client GroupsClient) CreateSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// CreateResponder handles the response to the Create request. The method always +// closes the http.Response Body. +func (client GroupsClient) CreateResponder(resp *http.Response) (result ADGroup, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Delete delete a group from the directory. +// Parameters: +// objectID - the object ID of the group to delete. +func (client GroupsClient) Delete(ctx context.Context, objectID string) (result autorest.Response, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.Delete") + defer func() { + sc := -1 + if result.Response != nil { + sc = result.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.DeletePreparer(ctx, objectID) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "Delete", nil, "Failure preparing request") + return + } + + resp, err := client.DeleteSender(req) + if err != nil { + result.Response = resp + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "Delete", resp, "Failure sending request") + return + } + + result, err = client.DeleteResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "Delete", resp, "Failure responding to request") + } + + return +} + +// DeletePreparer prepares the Delete request. +func (client GroupsClient) DeletePreparer(ctx context.Context, objectID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "objectId": autorest.Encode("path", objectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/groups/{objectId}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteSender sends the Delete request. The method will close the +// http.Response Body if it receives an error. 
+func (client GroupsClient) DeleteSender(req *http.Request) (*http.Response, error) {
+	sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+	return autorest.SendWithSender(client, req, sd...)
+}
+
+// DeleteResponder handles the response to the Delete request. The method always
+// closes the http.Response Body.
+func (client GroupsClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) {
+	err = autorest.Respond(
+		resp,
+		client.ByInspecting(),
+		azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent),
+		autorest.ByClosing())
+	result.Response = resp
+	return
+}
+
+// Get gets group information from the directory.
+// Parameters:
+// objectID - the object ID of the group for which to get information.
+func (client GroupsClient) Get(ctx context.Context, objectID string) (result ADGroup, err error) {
+	if tracing.IsEnabled() {
+		ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.Get")
+		defer func() {
+			sc := -1
+			if result.Response.Response != nil {
+				sc = result.Response.Response.StatusCode
+			}
+			tracing.EndSpan(ctx, sc, err)
+		}()
+	}
+	req, err := client.GetPreparer(ctx, objectID)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "Get", nil, "Failure preparing request")
+		return
+	}
+
+	resp, err := client.GetSender(req)
+	if err != nil {
+		result.Response = autorest.Response{Response: resp}
+		err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "Get", resp, "Failure sending request")
+		return
+	}
+
+	result, err = client.GetResponder(resp)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "Get", resp, "Failure responding to request")
+	}
+
+	return
+}
+
+// GetPreparer prepares the Get request.
+func (client GroupsClient) GetPreparer(ctx context.Context, objectID string) (*http.Request, error) {
+	pathParameters := map[string]interface{}{
+		"objectId": autorest.Encode("path", objectID),
+		"tenantID": autorest.Encode("path", client.TenantID),
+	}
+
+	const APIVersion = "1.6"
+	queryParameters := map[string]interface{}{
+		"api-version": APIVersion,
+	}
+
+	preparer := autorest.CreatePreparer(
+		autorest.AsGet(),
+		autorest.WithBaseURL(client.BaseURI),
+		autorest.WithPathParameters("/{tenantID}/groups/{objectId}", pathParameters),
+		autorest.WithQueryParameters(queryParameters))
+	return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// GetSender sends the Get request. The method will close the
+// http.Response Body if it receives an error.
+func (client GroupsClient) GetSender(req *http.Request) (*http.Response, error) {
+	sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+	return autorest.SendWithSender(client, req, sd...)
+}
+
+// GetResponder handles the response to the Get request. The method always
+// closes the http.Response Body.
+func (client GroupsClient) GetResponder(resp *http.Response) (result ADGroup, err error) {
+	err = autorest.Respond(
+		resp,
+		client.ByInspecting(),
+		azure.WithErrorUnlessStatusCode(http.StatusOK),
+		autorest.ByUnmarshallingJSON(&result),
+		autorest.ByClosing())
+	result.Response = autorest.Response{Response: resp}
+	return
+}
+
+// GetGroupMembers gets the members of a group.
+// Parameters:
+// objectID - the object ID of the group whose members should be retrieved.
+func (client GroupsClient) GetGroupMembers(ctx context.Context, objectID string) (result DirectoryObjectListResultPage, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.GetGroupMembers") + defer func() { + sc := -1 + if result.dolr.Response.Response != nil { + sc = result.dolr.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + result.fn = func(ctx context.Context, lastResult DirectoryObjectListResult) (DirectoryObjectListResult, error) { + if lastResult.OdataNextLink == nil || len(to.String(lastResult.OdataNextLink)) < 1 { + return DirectoryObjectListResult{}, nil + } + return client.GetGroupMembersNext(ctx, *lastResult.OdataNextLink) + } + req, err := client.GetGroupMembersPreparer(ctx, objectID) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "GetGroupMembers", nil, "Failure preparing request") + return + } + + resp, err := client.GetGroupMembersSender(req) + if err != nil { + result.dolr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "GetGroupMembers", resp, "Failure sending request") + return + } + + result.dolr, err = client.GetGroupMembersResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "GetGroupMembers", resp, "Failure responding to request") + } + + return +} + +// GetGroupMembersPreparer prepares the GetGroupMembers request. +func (client GroupsClient) GetGroupMembersPreparer(ctx context.Context, objectID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "objectId": autorest.Encode("path", objectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/groups/{objectId}/members", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetGroupMembersSender sends the GetGroupMembers request. The method will close the +// http.Response Body if it receives an error. +func (client GroupsClient) GetGroupMembersSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// GetGroupMembersResponder handles the response to the GetGroupMembers request. The method always +// closes the http.Response Body. +func (client GroupsClient) GetGroupMembersResponder(resp *http.Response) (result DirectoryObjectListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetGroupMembersComplete enumerates all values, automatically crossing page boundaries as required. 
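+//
+// An iteration sketch over the returned iterator (objectID is a placeholder;
+// the NotDone/Value/NextWithContext methods follow the generated iterator
+// pattern):
+//
+//	it, err := client.GetGroupMembersComplete(ctx, objectID)
+//	for err == nil && it.NotDone() {
+//		member := it.Value() // a BasicDirectoryObject
+//		_ = member
+//		err = it.NextWithContext(ctx)
+//	}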
+func (client GroupsClient) GetGroupMembersComplete(ctx context.Context, objectID string) (result DirectoryObjectListResultIterator, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.GetGroupMembers") + defer func() { + sc := -1 + if result.Response().Response.Response != nil { + sc = result.page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + result.page, err = client.GetGroupMembers(ctx, objectID) + return +} + +// GetGroupMembersNext gets the members of a group. +// Parameters: +// nextLink - next link for the list operation. +func (client GroupsClient) GetGroupMembersNext(ctx context.Context, nextLink string) (result DirectoryObjectListResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.GetGroupMembersNext") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.GetGroupMembersNextPreparer(ctx, nextLink) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "GetGroupMembersNext", nil, "Failure preparing request") + return + } + + resp, err := client.GetGroupMembersNextSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "GetGroupMembersNext", resp, "Failure sending request") + return + } + + result, err = client.GetGroupMembersNextResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "GetGroupMembersNext", resp, "Failure responding to request") + } + + return +} + +// GetGroupMembersNextPreparer prepares the GetGroupMembersNext request. +func (client GroupsClient) GetGroupMembersNextPreparer(ctx context.Context, nextLink string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "nextLink": nextLink, + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/{nextLink}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetGroupMembersNextSender sends the GetGroupMembersNext request. The method will close the +// http.Response Body if it receives an error. +func (client GroupsClient) GetGroupMembersNextSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// GetGroupMembersNextResponder handles the response to the GetGroupMembersNext request. The method always +// closes the http.Response Body. +func (client GroupsClient) GetGroupMembersNextResponder(resp *http.Response) (result DirectoryObjectListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetMemberGroups gets a collection of object IDs of groups of which the specified group is a member. 
+// Parameters: +// objectID - the object ID of the group for which to get group membership. +// parameters - group filtering parameters. +func (client GroupsClient) GetMemberGroups(ctx context.Context, objectID string, parameters GroupGetMemberGroupsParameters) (result GroupGetMemberGroupsResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.GetMemberGroups") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.SecurityEnabledOnly", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { + return result, validation.NewError("graphrbac.GroupsClient", "GetMemberGroups", err.Error()) + } + + req, err := client.GetMemberGroupsPreparer(ctx, objectID, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "GetMemberGroups", nil, "Failure preparing request") + return + } + + resp, err := client.GetMemberGroupsSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "GetMemberGroups", resp, "Failure sending request") + return + } + + result, err = client.GetMemberGroupsResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "GetMemberGroups", resp, "Failure responding to request") + } + + return +} + +// GetMemberGroupsPreparer prepares the GetMemberGroups request. +func (client GroupsClient) GetMemberGroupsPreparer(ctx context.Context, objectID string, parameters GroupGetMemberGroupsParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "objectId": autorest.Encode("path", objectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/groups/{objectId}/getMemberGroups", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetMemberGroupsSender sends the GetMemberGroups request. The method will close the +// http.Response Body if it receives an error. +func (client GroupsClient) GetMemberGroupsSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// GetMemberGroupsResponder handles the response to the GetMemberGroups request. The method always +// closes the http.Response Body. 
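+//
+// A sketch of the GetMemberGroups call whose response this method unmarshals
+// (SecurityEnabledOnly is required by the validation above; objectID is a
+// placeholder):
+//
+//	params := graphrbac.GroupGetMemberGroupsParameters{SecurityEnabledOnly: to.BoolPtr(false)}
+//	groupIDs, err := client.GetMemberGroups(ctx, objectID, params)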
+func (client GroupsClient) GetMemberGroupsResponder(resp *http.Response) (result GroupGetMemberGroupsResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// IsMemberOf checks whether the specified user, group, contact, or service principal is a direct or transitive member +// of the specified group. +// Parameters: +// parameters - the check group membership parameters. +func (client GroupsClient) IsMemberOf(ctx context.Context, parameters CheckGroupMembershipParameters) (result CheckGroupMembershipResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.IsMemberOf") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.GroupID", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.MemberID", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { + return result, validation.NewError("graphrbac.GroupsClient", "IsMemberOf", err.Error()) + } + + req, err := client.IsMemberOfPreparer(ctx, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "IsMemberOf", nil, "Failure preparing request") + return + } + + resp, err := client.IsMemberOfSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "IsMemberOf", resp, "Failure sending request") + return + } + + result, err = client.IsMemberOfResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "IsMemberOf", resp, "Failure responding to request") + } + + return +} + +// IsMemberOfPreparer prepares the IsMemberOf request. +func (client GroupsClient) IsMemberOfPreparer(ctx context.Context, parameters CheckGroupMembershipParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/isMemberOf", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// IsMemberOfSender sends the IsMemberOf request. The method will close the +// http.Response Body if it receives an error. +func (client GroupsClient) IsMemberOfSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// IsMemberOfResponder handles the response to the IsMemberOf request. The method always +// closes the http.Response Body. 
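+//
+// A sketch of the corresponding IsMemberOf call (both IDs are required by its
+// validation above; the values are placeholders):
+//
+//	params := graphrbac.CheckGroupMembershipParameters{
+//		GroupID:  to.StringPtr(groupObjectID),
+//		MemberID: to.StringPtr(memberObjectID),
+//	}
+//	res, err := client.IsMemberOf(ctx, params)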
+func (client GroupsClient) IsMemberOfResponder(resp *http.Response) (result CheckGroupMembershipResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List gets list of groups for the current tenant. +// Parameters: +// filter - the filter to apply to the operation. +func (client GroupsClient) List(ctx context.Context, filter string) (result GroupListResultPage, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.List") + defer func() { + sc := -1 + if result.glr.Response.Response != nil { + sc = result.glr.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + result.fn = func(ctx context.Context, lastResult GroupListResult) (GroupListResult, error) { + if lastResult.OdataNextLink == nil || len(to.String(lastResult.OdataNextLink)) < 1 { + return GroupListResult{}, nil + } + return client.ListNext(ctx, *lastResult.OdataNextLink) + } + req, err := client.ListPreparer(ctx, filter) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.glr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "List", resp, "Failure sending request") + return + } + + result.glr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client GroupsClient) ListPreparer(ctx context.Context, filter string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(filter) > 0 { + queryParameters["$filter"] = autorest.Encode("query", filter) + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/groups", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client GroupsClient) ListSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client GroupsClient) ListResponder(resp *http.Response) (result GroupListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. 
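+//
+// The filter argument takes an OData expression, for example (illustrative
+// value only):
+//
+//	it, err := client.ListComplete(ctx, "startswith(displayName,'team')")
+//
+// Iteration then proceeds as with the other *Complete iterators above.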
+func (client GroupsClient) ListComplete(ctx context.Context, filter string) (result GroupListResultIterator, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.List") + defer func() { + sc := -1 + if result.Response().Response.Response != nil { + sc = result.page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + result.page, err = client.List(ctx, filter) + return +} + +// ListNext gets a list of groups for the current tenant. +// Parameters: +// nextLink - next link for the list operation. +func (client GroupsClient) ListNext(ctx context.Context, nextLink string) (result GroupListResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.ListNext") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.ListNextPreparer(ctx, nextLink) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "ListNext", nil, "Failure preparing request") + return + } + + resp, err := client.ListNextSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "ListNext", resp, "Failure sending request") + return + } + + result, err = client.ListNextResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "ListNext", resp, "Failure responding to request") + } + + return +} + +// ListNextPreparer prepares the ListNext request. +func (client GroupsClient) ListNextPreparer(ctx context.Context, nextLink string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "nextLink": nextLink, + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/{nextLink}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListNextSender sends the ListNext request. The method will close the +// http.Response Body if it receives an error. +func (client GroupsClient) ListNextSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// ListNextResponder handles the response to the ListNext request. The method always +// closes the http.Response Body. +func (client GroupsClient) ListNextResponder(resp *http.Response) (result GroupListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// ListOwners the owners are a set of non-admin users who are allowed to modify this object. +// Parameters: +// objectID - the object ID of the group for which to get owners. 
+func (client GroupsClient) ListOwners(ctx context.Context, objectID string) (result DirectoryObjectListResultPage, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.ListOwners") + defer func() { + sc := -1 + if result.dolr.Response.Response != nil { + sc = result.dolr.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + result.fn = client.listOwnersNextResults + req, err := client.ListOwnersPreparer(ctx, objectID) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "ListOwners", nil, "Failure preparing request") + return + } + + resp, err := client.ListOwnersSender(req) + if err != nil { + result.dolr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "ListOwners", resp, "Failure sending request") + return + } + + result.dolr, err = client.ListOwnersResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "ListOwners", resp, "Failure responding to request") + } + + return +} + +// ListOwnersPreparer prepares the ListOwners request. +func (client GroupsClient) ListOwnersPreparer(ctx context.Context, objectID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "objectId": autorest.Encode("path", objectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/groups/{objectId}/owners", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListOwnersSender sends the ListOwners request. The method will close the +// http.Response Body if it receives an error. +func (client GroupsClient) ListOwnersSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// ListOwnersResponder handles the response to the ListOwners request. The method always +// closes the http.Response Body. +func (client GroupsClient) ListOwnersResponder(resp *http.Response) (result DirectoryObjectListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listOwnersNextResults retrieves the next set of results, if any. 
+func (client GroupsClient) listOwnersNextResults(ctx context.Context, lastResults DirectoryObjectListResult) (result DirectoryObjectListResult, err error) { + req, err := lastResults.directoryObjectListResultPreparer(ctx) + if err != nil { + return result, autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "listOwnersNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListOwnersSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "listOwnersNextResults", resp, "Failure sending next results request") + } + result, err = client.ListOwnersResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "listOwnersNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListOwnersComplete enumerates all values, automatically crossing page boundaries as required. +func (client GroupsClient) ListOwnersComplete(ctx context.Context, objectID string) (result DirectoryObjectListResultIterator, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.ListOwners") + defer func() { + sc := -1 + if result.Response().Response.Response != nil { + sc = result.page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + result.page, err = client.ListOwners(ctx, objectID) + return +} + +// RemoveMember remove a member from a group. +// Parameters: +// groupObjectID - the object ID of the group from which to remove the member. +// memberObjectID - member object id +func (client GroupsClient) RemoveMember(ctx context.Context, groupObjectID string, memberObjectID string) (result autorest.Response, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.RemoveMember") + defer func() { + sc := -1 + if result.Response != nil { + sc = result.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.RemoveMemberPreparer(ctx, groupObjectID, memberObjectID) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "RemoveMember", nil, "Failure preparing request") + return + } + + resp, err := client.RemoveMemberSender(req) + if err != nil { + result.Response = resp + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "RemoveMember", resp, "Failure sending request") + return + } + + result, err = client.RemoveMemberResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "RemoveMember", resp, "Failure responding to request") + } + + return +} + +// RemoveMemberPreparer prepares the RemoveMember request. 
+func (client GroupsClient) RemoveMemberPreparer(ctx context.Context, groupObjectID string, memberObjectID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "groupObjectId": autorest.Encode("path", groupObjectID), + "memberObjectId": autorest.Encode("path", memberObjectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/groups/{groupObjectId}/$links/members/{memberObjectId}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// RemoveMemberSender sends the RemoveMember request. The method will close the +// http.Response Body if it receives an error. +func (client GroupsClient) RemoveMemberSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// RemoveMemberResponder handles the response to the RemoveMember request. The method always +// closes the http.Response Body. +func (client GroupsClient) RemoveMemberResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// RemoveOwner remove a member from owners. +// Parameters: +// objectID - the object ID of the group from which to remove the owner. +// ownerObjectID - owner object id +func (client GroupsClient) RemoveOwner(ctx context.Context, objectID string, ownerObjectID string) (result autorest.Response, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.RemoveOwner") + defer func() { + sc := -1 + if result.Response != nil { + sc = result.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.RemoveOwnerPreparer(ctx, objectID, ownerObjectID) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "RemoveOwner", nil, "Failure preparing request") + return + } + + resp, err := client.RemoveOwnerSender(req) + if err != nil { + result.Response = resp + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "RemoveOwner", resp, "Failure sending request") + return + } + + result, err = client.RemoveOwnerResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.GroupsClient", "RemoveOwner", resp, "Failure responding to request") + } + + return +} + +// RemoveOwnerPreparer prepares the RemoveOwner request. 
+func (client GroupsClient) RemoveOwnerPreparer(ctx context.Context, objectID string, ownerObjectID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "objectId": autorest.Encode("path", objectID), + "ownerObjectId": autorest.Encode("path", ownerObjectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/groups/{objectId}/$links/owners/{ownerObjectId}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// RemoveOwnerSender sends the RemoveOwner request. The method will close the +// http.Response Body if it receives an error. +func (client GroupsClient) RemoveOwnerSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// RemoveOwnerResponder handles the response to the RemoveOwner request. The method always +// closes the http.Response Body. +func (client GroupsClient) RemoveOwnerResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/models.go b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/models.go new file mode 100644 index 000000000..e6b583d96 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/models.go @@ -0,0 +1,4662 @@ +package graphrbac + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "encoding/json" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/date" + "github.com/Azure/go-autorest/autorest/to" + "github.com/Azure/go-autorest/tracing" + "net/http" +) + +// The package's fully qualified name. +const fqdn = "github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac" + +// ConsentType enumerates the values for consent type. +type ConsentType string + +const ( + // AllPrincipals ... + AllPrincipals ConsentType = "AllPrincipals" + // Principal ... + Principal ConsentType = "Principal" +) + +// PossibleConsentTypeValues returns an array of possible values for the ConsentType const type. 
+func PossibleConsentTypeValues() []ConsentType {
+	return []ConsentType{AllPrincipals, Principal}
+}
+
+// ObjectType enumerates the values for object type.
+type ObjectType string
+
+const (
+	// ObjectTypeApplication ...
+	ObjectTypeApplication ObjectType = "Application"
+	// ObjectTypeDirectoryObject ...
+	ObjectTypeDirectoryObject ObjectType = "DirectoryObject"
+	// ObjectTypeGroup ...
+	ObjectTypeGroup ObjectType = "Group"
+	// ObjectTypeServicePrincipal ...
+	ObjectTypeServicePrincipal ObjectType = "ServicePrincipal"
+	// ObjectTypeUser ...
+	ObjectTypeUser ObjectType = "User"
+)
+
+// PossibleObjectTypeValues returns an array of possible values for the ObjectType const type.
+func PossibleObjectTypeValues() []ObjectType {
+	return []ObjectType{ObjectTypeApplication, ObjectTypeDirectoryObject, ObjectTypeGroup, ObjectTypeServicePrincipal, ObjectTypeUser}
+}
+
+// UserType enumerates the values for user type.
+type UserType string
+
+const (
+	// Guest ...
+	Guest UserType = "Guest"
+	// Member ...
+	Member UserType = "Member"
+)
+
+// PossibleUserTypeValues returns an array of possible values for the UserType const type.
+func PossibleUserTypeValues() []UserType {
+	return []UserType{Guest, Member}
+}
+
+// AddOwnerParameters request parameters for adding an owner to an application.
+type AddOwnerParameters struct {
+	// AdditionalProperties - Unmatched properties from the message are deserialized into this collection
+	AdditionalProperties map[string]interface{} `json:""`
+	// URL - An owner object URL, such as "https://graph.windows.net/0b1f9851-1bf0-433f-aec3-cb9272f093dc/directoryObjects/f260bbc4-c254-447b-94cf-293b5ec434dd", where "0b1f9851-1bf0-433f-aec3-cb9272f093dc" is the tenantId and "f260bbc4-c254-447b-94cf-293b5ec434dd" is the objectId of the owner (user, application, servicePrincipal, group) to be added.
+	URL *string `json:"url,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for AddOwnerParameters.
+func (aop AddOwnerParameters) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]interface{})
+	if aop.URL != nil {
+		objectMap["url"] = aop.URL
+	}
+	for k, v := range aop.AdditionalProperties {
+		objectMap[k] = v
+	}
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for AddOwnerParameters struct.
+func (aop *AddOwnerParameters) UnmarshalJSON(body []byte) error {
+	var m map[string]*json.RawMessage
+	err := json.Unmarshal(body, &m)
+	if err != nil {
+		return err
+	}
+	for k, v := range m {
+		switch k {
+		default:
+			if v != nil {
+				var additionalProperties interface{}
+				err = json.Unmarshal(*v, &additionalProperties)
+				if err != nil {
+					return err
+				}
+				if aop.AdditionalProperties == nil {
+					aop.AdditionalProperties = make(map[string]interface{})
+				}
+				aop.AdditionalProperties[k] = additionalProperties
+			}
+		case "url":
+			if v != nil {
+				var URL string
+				err = json.Unmarshal(*v, &URL)
+				if err != nil {
+					return err
+				}
+				aop.URL = &URL
+			}
+		}
+	}
+
+	return nil
+}
+
+// ADGroup active Directory group information.
+type ADGroup struct {
+	autorest.Response `json:"-"`
+	// DisplayName - The display name of the group.
+	DisplayName *string `json:"displayName,omitempty"`
+	// MailEnabled - Whether the group is mail-enabled. Must be false. This is because only pure security groups can be created using the Graph API.
+	MailEnabled *bool `json:"mailEnabled,omitempty"`
+	// MailNickname - The mail alias for the group.
+	MailNickname *string `json:"mailNickname,omitempty"`
+	// SecurityEnabled - Whether the group is security-enabled.
+ SecurityEnabled *bool `json:"securityEnabled,omitempty"` + // Mail - The primary email address of the group. + Mail *string `json:"mail,omitempty"` + // AdditionalProperties - Unmatched properties from the message are deserialized this collection + AdditionalProperties map[string]interface{} `json:""` + // ObjectID - READ-ONLY; The object ID. + ObjectID *string `json:"objectId,omitempty"` + // DeletionTimestamp - READ-ONLY; The time at which the directory object was deleted. + DeletionTimestamp *date.Time `json:"deletionTimestamp,omitempty"` + // ObjectType - Possible values include: 'ObjectTypeDirectoryObject', 'ObjectTypeApplication', 'ObjectTypeGroup', 'ObjectTypeServicePrincipal', 'ObjectTypeUser' + ObjectType ObjectType `json:"objectType,omitempty"` +} + +// MarshalJSON is the custom marshaler for ADGroup. +func (ag ADGroup) MarshalJSON() ([]byte, error) { + ag.ObjectType = ObjectTypeGroup + objectMap := make(map[string]interface{}) + if ag.DisplayName != nil { + objectMap["displayName"] = ag.DisplayName + } + if ag.MailEnabled != nil { + objectMap["mailEnabled"] = ag.MailEnabled + } + if ag.MailNickname != nil { + objectMap["mailNickname"] = ag.MailNickname + } + if ag.SecurityEnabled != nil { + objectMap["securityEnabled"] = ag.SecurityEnabled + } + if ag.Mail != nil { + objectMap["mail"] = ag.Mail + } + if ag.ObjectType != "" { + objectMap["objectType"] = ag.ObjectType + } + for k, v := range ag.AdditionalProperties { + objectMap[k] = v + } + return json.Marshal(objectMap) +} + +// AsApplication is the BasicDirectoryObject implementation for ADGroup. +func (ag ADGroup) AsApplication() (*Application, bool) { + return nil, false +} + +// AsADGroup is the BasicDirectoryObject implementation for ADGroup. +func (ag ADGroup) AsADGroup() (*ADGroup, bool) { + return &ag, true +} + +// AsServicePrincipal is the BasicDirectoryObject implementation for ADGroup. +func (ag ADGroup) AsServicePrincipal() (*ServicePrincipal, bool) { + return nil, false +} + +// AsUser is the BasicDirectoryObject implementation for ADGroup. +func (ag ADGroup) AsUser() (*User, bool) { + return nil, false +} + +// AsDirectoryObject is the BasicDirectoryObject implementation for ADGroup. +func (ag ADGroup) AsDirectoryObject() (*DirectoryObject, bool) { + return nil, false +} + +// AsBasicDirectoryObject is the BasicDirectoryObject implementation for ADGroup. +func (ag ADGroup) AsBasicDirectoryObject() (BasicDirectoryObject, bool) { + return &ag, true +} + +// UnmarshalJSON is the custom unmarshaler for ADGroup struct. 
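+// A minimal usage sketch (illustrative only; the payload here is assumed):
+//
+//	var g ADGroup
+//	raw := []byte(`{"objectType":"Group","displayName":"ops","securityEnabled":true}`)
+//	if err := json.Unmarshal(raw, &g); err != nil {
+//		// handle error
+//	}
+//	// Keys without a matching case below land in g.AdditionalProperties.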
+func (ag *ADGroup) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "displayName": + if v != nil { + var displayName string + err = json.Unmarshal(*v, &displayName) + if err != nil { + return err + } + ag.DisplayName = &displayName + } + case "mailEnabled": + if v != nil { + var mailEnabled bool + err = json.Unmarshal(*v, &mailEnabled) + if err != nil { + return err + } + ag.MailEnabled = &mailEnabled + } + case "mailNickname": + if v != nil { + var mailNickname string + err = json.Unmarshal(*v, &mailNickname) + if err != nil { + return err + } + ag.MailNickname = &mailNickname + } + case "securityEnabled": + if v != nil { + var securityEnabled bool + err = json.Unmarshal(*v, &securityEnabled) + if err != nil { + return err + } + ag.SecurityEnabled = &securityEnabled + } + case "mail": + if v != nil { + var mailVar string + err = json.Unmarshal(*v, &mailVar) + if err != nil { + return err + } + ag.Mail = &mailVar + } + default: + if v != nil { + var additionalProperties interface{} + err = json.Unmarshal(*v, &additionalProperties) + if err != nil { + return err + } + if ag.AdditionalProperties == nil { + ag.AdditionalProperties = make(map[string]interface{}) + } + ag.AdditionalProperties[k] = additionalProperties + } + case "objectId": + if v != nil { + var objectID string + err = json.Unmarshal(*v, &objectID) + if err != nil { + return err + } + ag.ObjectID = &objectID + } + case "deletionTimestamp": + if v != nil { + var deletionTimestamp date.Time + err = json.Unmarshal(*v, &deletionTimestamp) + if err != nil { + return err + } + ag.DeletionTimestamp = &deletionTimestamp + } + case "objectType": + if v != nil { + var objectType ObjectType + err = json.Unmarshal(*v, &objectType) + if err != nil { + return err + } + ag.ObjectType = objectType + } + } + } + + return nil +} + +// Application active Directory application information. +type Application struct { + autorest.Response `json:"-"` + // AppID - The application ID. + AppID *string `json:"appId,omitempty"` + // AllowGuestsSignIn - A property on the application to indicate if the application accepts other IDPs or not or partially accepts. + AllowGuestsSignIn *bool `json:"allowGuestsSignIn,omitempty"` + // AllowPassthroughUsers - Indicates that the application supports pass through users who have no presence in the resource tenant. + AllowPassthroughUsers *bool `json:"allowPassthroughUsers,omitempty"` + // AppLogoURL - The url for the application logo image stored in a CDN. + AppLogoURL *string `json:"appLogoUrl,omitempty"` + // AppRoles - The collection of application roles that an application may declare. These roles can be assigned to users, groups or service principals. + AppRoles *[]AppRole `json:"appRoles,omitempty"` + // AppPermissions - The application permissions. + AppPermissions *[]string `json:"appPermissions,omitempty"` + // AvailableToOtherTenants - Whether the application is available to other tenants. + AvailableToOtherTenants *bool `json:"availableToOtherTenants,omitempty"` + // DisplayName - The display name of the application. + DisplayName *string `json:"displayName,omitempty"` + // ErrorURL - A URL provided by the author of the application to report errors when using the application. + ErrorURL *string `json:"errorUrl,omitempty"` + // GroupMembershipClaims - Configures the groups claim issued in a user or OAuth 2.0 access token that the app expects. 
+ GroupMembershipClaims interface{} `json:"groupMembershipClaims,omitempty"` + // Homepage - The home page of the application. + Homepage *string `json:"homepage,omitempty"` + // IdentifierUris - A collection of URIs for the application. + IdentifierUris *[]string `json:"identifierUris,omitempty"` + // InformationalUrls - URLs with more information about the application. + InformationalUrls *InformationalURL `json:"informationalUrls,omitempty"` + // IsDeviceOnlyAuthSupported - Specifies whether this application supports device authentication without a user. The default is false. + IsDeviceOnlyAuthSupported *bool `json:"isDeviceOnlyAuthSupported,omitempty"` + // KeyCredentials - A collection of KeyCredential objects. + KeyCredentials *[]KeyCredential `json:"keyCredentials,omitempty"` + // KnownClientApplications - Client applications that are tied to this resource application. Consent to any of the known client applications will result in implicit consent to the resource application through a combined consent dialog (showing the OAuth permission scopes required by the client and the resource). + KnownClientApplications *[]string `json:"knownClientApplications,omitempty"` + // LogoutURL - the url of the logout page + LogoutURL *string `json:"logoutUrl,omitempty"` + // Oauth2AllowImplicitFlow - Whether to allow implicit grant flow for OAuth2 + Oauth2AllowImplicitFlow *bool `json:"oauth2AllowImplicitFlow,omitempty"` + // Oauth2AllowURLPathMatching - Specifies whether during a token Request Azure AD will allow path matching of the redirect URI against the applications collection of replyURLs. The default is false. + Oauth2AllowURLPathMatching *bool `json:"oauth2AllowUrlPathMatching,omitempty"` + // Oauth2Permissions - The collection of OAuth 2.0 permission scopes that the web API (resource) application exposes to client applications. These permission scopes may be granted to client applications during consent. + Oauth2Permissions *[]OAuth2Permission `json:"oauth2Permissions,omitempty"` + // Oauth2RequirePostResponse - Specifies whether, as part of OAuth 2.0 token requests, Azure AD will allow POST requests, as opposed to GET requests. The default is false, which specifies that only GET requests will be allowed. + Oauth2RequirePostResponse *bool `json:"oauth2RequirePostResponse,omitempty"` + // OrgRestrictions - A list of tenants allowed to access application. + OrgRestrictions *[]string `json:"orgRestrictions,omitempty"` + OptionalClaims *OptionalClaims `json:"optionalClaims,omitempty"` + // PasswordCredentials - A collection of PasswordCredential objects + PasswordCredentials *[]PasswordCredential `json:"passwordCredentials,omitempty"` + // PreAuthorizedApplications - list of pre-authorized applications. + PreAuthorizedApplications *[]PreAuthorizedApplication `json:"preAuthorizedApplications,omitempty"` + // PublicClient - Specifies whether this application is a public client (such as an installed application running on a mobile device). Default is false. + PublicClient *bool `json:"publicClient,omitempty"` + // PublisherDomain - Reliable domain which can be used to identify an application. + PublisherDomain *string `json:"publisherDomain,omitempty"` + // ReplyUrls - A collection of reply URLs for the application. + ReplyUrls *[]string `json:"replyUrls,omitempty"` + // RequiredResourceAccess - Specifies resources that this application requires access to and the set of OAuth permission scopes and application roles that it needs under each of those resources. 
This pre-configuration of required resource access drives the consent experience. + RequiredResourceAccess *[]RequiredResourceAccess `json:"requiredResourceAccess,omitempty"` + // SamlMetadataURL - The URL to the SAML metadata for the application. + SamlMetadataURL *string `json:"samlMetadataUrl,omitempty"` + // SignInAudience - Audience for signing in to the application (AzureADMyOrganization, AzureADAllOrganizations, AzureADAndMicrosoftAccounts). + SignInAudience *string `json:"signInAudience,omitempty"` + // WwwHomepage - The primary Web page. + WwwHomepage *string `json:"wwwHomepage,omitempty"` + // AdditionalProperties - Unmatched properties from the message are deserialized this collection + AdditionalProperties map[string]interface{} `json:""` + // ObjectID - READ-ONLY; The object ID. + ObjectID *string `json:"objectId,omitempty"` + // DeletionTimestamp - READ-ONLY; The time at which the directory object was deleted. + DeletionTimestamp *date.Time `json:"deletionTimestamp,omitempty"` + // ObjectType - Possible values include: 'ObjectTypeDirectoryObject', 'ObjectTypeApplication', 'ObjectTypeGroup', 'ObjectTypeServicePrincipal', 'ObjectTypeUser' + ObjectType ObjectType `json:"objectType,omitempty"` +} + +// MarshalJSON is the custom marshaler for Application. +func (a Application) MarshalJSON() ([]byte, error) { + a.ObjectType = ObjectTypeApplication + objectMap := make(map[string]interface{}) + if a.AppID != nil { + objectMap["appId"] = a.AppID + } + if a.AllowGuestsSignIn != nil { + objectMap["allowGuestsSignIn"] = a.AllowGuestsSignIn + } + if a.AllowPassthroughUsers != nil { + objectMap["allowPassthroughUsers"] = a.AllowPassthroughUsers + } + if a.AppLogoURL != nil { + objectMap["appLogoUrl"] = a.AppLogoURL + } + if a.AppRoles != nil { + objectMap["appRoles"] = a.AppRoles + } + if a.AppPermissions != nil { + objectMap["appPermissions"] = a.AppPermissions + } + if a.AvailableToOtherTenants != nil { + objectMap["availableToOtherTenants"] = a.AvailableToOtherTenants + } + if a.DisplayName != nil { + objectMap["displayName"] = a.DisplayName + } + if a.ErrorURL != nil { + objectMap["errorUrl"] = a.ErrorURL + } + if a.GroupMembershipClaims != nil { + objectMap["groupMembershipClaims"] = a.GroupMembershipClaims + } + if a.Homepage != nil { + objectMap["homepage"] = a.Homepage + } + if a.IdentifierUris != nil { + objectMap["identifierUris"] = a.IdentifierUris + } + if a.InformationalUrls != nil { + objectMap["informationalUrls"] = a.InformationalUrls + } + if a.IsDeviceOnlyAuthSupported != nil { + objectMap["isDeviceOnlyAuthSupported"] = a.IsDeviceOnlyAuthSupported + } + if a.KeyCredentials != nil { + objectMap["keyCredentials"] = a.KeyCredentials + } + if a.KnownClientApplications != nil { + objectMap["knownClientApplications"] = a.KnownClientApplications + } + if a.LogoutURL != nil { + objectMap["logoutUrl"] = a.LogoutURL + } + if a.Oauth2AllowImplicitFlow != nil { + objectMap["oauth2AllowImplicitFlow"] = a.Oauth2AllowImplicitFlow + } + if a.Oauth2AllowURLPathMatching != nil { + objectMap["oauth2AllowUrlPathMatching"] = a.Oauth2AllowURLPathMatching + } + if a.Oauth2Permissions != nil { + objectMap["oauth2Permissions"] = a.Oauth2Permissions + } + if a.Oauth2RequirePostResponse != nil { + objectMap["oauth2RequirePostResponse"] = a.Oauth2RequirePostResponse + } + if a.OrgRestrictions != nil { + objectMap["orgRestrictions"] = a.OrgRestrictions + } + if a.OptionalClaims != nil { + objectMap["optionalClaims"] = a.OptionalClaims + } + if a.PasswordCredentials != nil { + 
objectMap["passwordCredentials"] = a.PasswordCredentials + } + if a.PreAuthorizedApplications != nil { + objectMap["preAuthorizedApplications"] = a.PreAuthorizedApplications + } + if a.PublicClient != nil { + objectMap["publicClient"] = a.PublicClient + } + if a.PublisherDomain != nil { + objectMap["publisherDomain"] = a.PublisherDomain + } + if a.ReplyUrls != nil { + objectMap["replyUrls"] = a.ReplyUrls + } + if a.RequiredResourceAccess != nil { + objectMap["requiredResourceAccess"] = a.RequiredResourceAccess + } + if a.SamlMetadataURL != nil { + objectMap["samlMetadataUrl"] = a.SamlMetadataURL + } + if a.SignInAudience != nil { + objectMap["signInAudience"] = a.SignInAudience + } + if a.WwwHomepage != nil { + objectMap["wwwHomepage"] = a.WwwHomepage + } + if a.ObjectType != "" { + objectMap["objectType"] = a.ObjectType + } + for k, v := range a.AdditionalProperties { + objectMap[k] = v + } + return json.Marshal(objectMap) +} + +// AsApplication is the BasicDirectoryObject implementation for Application. +func (a Application) AsApplication() (*Application, bool) { + return &a, true +} + +// AsADGroup is the BasicDirectoryObject implementation for Application. +func (a Application) AsADGroup() (*ADGroup, bool) { + return nil, false +} + +// AsServicePrincipal is the BasicDirectoryObject implementation for Application. +func (a Application) AsServicePrincipal() (*ServicePrincipal, bool) { + return nil, false +} + +// AsUser is the BasicDirectoryObject implementation for Application. +func (a Application) AsUser() (*User, bool) { + return nil, false +} + +// AsDirectoryObject is the BasicDirectoryObject implementation for Application. +func (a Application) AsDirectoryObject() (*DirectoryObject, bool) { + return nil, false +} + +// AsBasicDirectoryObject is the BasicDirectoryObject implementation for Application. +func (a Application) AsBasicDirectoryObject() (BasicDirectoryObject, bool) { + return &a, true +} + +// UnmarshalJSON is the custom unmarshaler for Application struct. 
+func (a *Application) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "appId": + if v != nil { + var appID string + err = json.Unmarshal(*v, &appID) + if err != nil { + return err + } + a.AppID = &appID + } + case "allowGuestsSignIn": + if v != nil { + var allowGuestsSignIn bool + err = json.Unmarshal(*v, &allowGuestsSignIn) + if err != nil { + return err + } + a.AllowGuestsSignIn = &allowGuestsSignIn + } + case "allowPassthroughUsers": + if v != nil { + var allowPassthroughUsers bool + err = json.Unmarshal(*v, &allowPassthroughUsers) + if err != nil { + return err + } + a.AllowPassthroughUsers = &allowPassthroughUsers + } + case "appLogoUrl": + if v != nil { + var appLogoURL string + err = json.Unmarshal(*v, &appLogoURL) + if err != nil { + return err + } + a.AppLogoURL = &appLogoURL + } + case "appRoles": + if v != nil { + var appRoles []AppRole + err = json.Unmarshal(*v, &appRoles) + if err != nil { + return err + } + a.AppRoles = &appRoles + } + case "appPermissions": + if v != nil { + var appPermissions []string + err = json.Unmarshal(*v, &appPermissions) + if err != nil { + return err + } + a.AppPermissions = &appPermissions + } + case "availableToOtherTenants": + if v != nil { + var availableToOtherTenants bool + err = json.Unmarshal(*v, &availableToOtherTenants) + if err != nil { + return err + } + a.AvailableToOtherTenants = &availableToOtherTenants + } + case "displayName": + if v != nil { + var displayName string + err = json.Unmarshal(*v, &displayName) + if err != nil { + return err + } + a.DisplayName = &displayName + } + case "errorUrl": + if v != nil { + var errorURL string + err = json.Unmarshal(*v, &errorURL) + if err != nil { + return err + } + a.ErrorURL = &errorURL + } + case "groupMembershipClaims": + if v != nil { + var groupMembershipClaims interface{} + err = json.Unmarshal(*v, &groupMembershipClaims) + if err != nil { + return err + } + a.GroupMembershipClaims = groupMembershipClaims + } + case "homepage": + if v != nil { + var homepage string + err = json.Unmarshal(*v, &homepage) + if err != nil { + return err + } + a.Homepage = &homepage + } + case "identifierUris": + if v != nil { + var identifierUris []string + err = json.Unmarshal(*v, &identifierUris) + if err != nil { + return err + } + a.IdentifierUris = &identifierUris + } + case "informationalUrls": + if v != nil { + var informationalUrls InformationalURL + err = json.Unmarshal(*v, &informationalUrls) + if err != nil { + return err + } + a.InformationalUrls = &informationalUrls + } + case "isDeviceOnlyAuthSupported": + if v != nil { + var isDeviceOnlyAuthSupported bool + err = json.Unmarshal(*v, &isDeviceOnlyAuthSupported) + if err != nil { + return err + } + a.IsDeviceOnlyAuthSupported = &isDeviceOnlyAuthSupported + } + case "keyCredentials": + if v != nil { + var keyCredentials []KeyCredential + err = json.Unmarshal(*v, &keyCredentials) + if err != nil { + return err + } + a.KeyCredentials = &keyCredentials + } + case "knownClientApplications": + if v != nil { + var knownClientApplications []string + err = json.Unmarshal(*v, &knownClientApplications) + if err != nil { + return err + } + a.KnownClientApplications = &knownClientApplications + } + case "logoutUrl": + if v != nil { + var logoutURL string + err = json.Unmarshal(*v, &logoutURL) + if err != nil { + return err + } + a.LogoutURL = &logoutURL + } + case "oauth2AllowImplicitFlow": + if v != nil { + var 
oauth2AllowImplicitFlow bool + err = json.Unmarshal(*v, &oauth2AllowImplicitFlow) + if err != nil { + return err + } + a.Oauth2AllowImplicitFlow = &oauth2AllowImplicitFlow + } + case "oauth2AllowUrlPathMatching": + if v != nil { + var oauth2AllowURLPathMatching bool + err = json.Unmarshal(*v, &oauth2AllowURLPathMatching) + if err != nil { + return err + } + a.Oauth2AllowURLPathMatching = &oauth2AllowURLPathMatching + } + case "oauth2Permissions": + if v != nil { + var oauth2Permissions []OAuth2Permission + err = json.Unmarshal(*v, &oauth2Permissions) + if err != nil { + return err + } + a.Oauth2Permissions = &oauth2Permissions + } + case "oauth2RequirePostResponse": + if v != nil { + var oauth2RequirePostResponse bool + err = json.Unmarshal(*v, &oauth2RequirePostResponse) + if err != nil { + return err + } + a.Oauth2RequirePostResponse = &oauth2RequirePostResponse + } + case "orgRestrictions": + if v != nil { + var orgRestrictions []string + err = json.Unmarshal(*v, &orgRestrictions) + if err != nil { + return err + } + a.OrgRestrictions = &orgRestrictions + } + case "optionalClaims": + if v != nil { + var optionalClaims OptionalClaims + err = json.Unmarshal(*v, &optionalClaims) + if err != nil { + return err + } + a.OptionalClaims = &optionalClaims + } + case "passwordCredentials": + if v != nil { + var passwordCredentials []PasswordCredential + err = json.Unmarshal(*v, &passwordCredentials) + if err != nil { + return err + } + a.PasswordCredentials = &passwordCredentials + } + case "preAuthorizedApplications": + if v != nil { + var preAuthorizedApplications []PreAuthorizedApplication + err = json.Unmarshal(*v, &preAuthorizedApplications) + if err != nil { + return err + } + a.PreAuthorizedApplications = &preAuthorizedApplications + } + case "publicClient": + if v != nil { + var publicClient bool + err = json.Unmarshal(*v, &publicClient) + if err != nil { + return err + } + a.PublicClient = &publicClient + } + case "publisherDomain": + if v != nil { + var publisherDomain string + err = json.Unmarshal(*v, &publisherDomain) + if err != nil { + return err + } + a.PublisherDomain = &publisherDomain + } + case "replyUrls": + if v != nil { + var replyUrls []string + err = json.Unmarshal(*v, &replyUrls) + if err != nil { + return err + } + a.ReplyUrls = &replyUrls + } + case "requiredResourceAccess": + if v != nil { + var requiredResourceAccess []RequiredResourceAccess + err = json.Unmarshal(*v, &requiredResourceAccess) + if err != nil { + return err + } + a.RequiredResourceAccess = &requiredResourceAccess + } + case "samlMetadataUrl": + if v != nil { + var samlMetadataURL string + err = json.Unmarshal(*v, &samlMetadataURL) + if err != nil { + return err + } + a.SamlMetadataURL = &samlMetadataURL + } + case "signInAudience": + if v != nil { + var signInAudience string + err = json.Unmarshal(*v, &signInAudience) + if err != nil { + return err + } + a.SignInAudience = &signInAudience + } + case "wwwHomepage": + if v != nil { + var wwwHomepage string + err = json.Unmarshal(*v, &wwwHomepage) + if err != nil { + return err + } + a.WwwHomepage = &wwwHomepage + } + default: + if v != nil { + var additionalProperties interface{} + err = json.Unmarshal(*v, &additionalProperties) + if err != nil { + return err + } + if a.AdditionalProperties == nil { + a.AdditionalProperties = make(map[string]interface{}) + } + a.AdditionalProperties[k] = additionalProperties + } + case "objectId": + if v != nil { + var objectID string + err = json.Unmarshal(*v, &objectID) + if err != nil { + return err + } + a.ObjectID = 
&objectID
+			}
+		case "deletionTimestamp":
+			if v != nil {
+				var deletionTimestamp date.Time
+				err = json.Unmarshal(*v, &deletionTimestamp)
+				if err != nil {
+					return err
+				}
+				a.DeletionTimestamp = &deletionTimestamp
+			}
+		case "objectType":
+			if v != nil {
+				var objectType ObjectType
+				err = json.Unmarshal(*v, &objectType)
+				if err != nil {
+					return err
+				}
+				a.ObjectType = objectType
+			}
+		}
+	}
+
+	return nil
+}
+
+// ApplicationBase active Directory Application common properties shared among GET, POST and PATCH
+type ApplicationBase struct {
+	// AllowGuestsSignIn - A property on the application to indicate if the application accepts other IDPs or not or partially accepts.
+	AllowGuestsSignIn *bool `json:"allowGuestsSignIn,omitempty"`
+	// AllowPassthroughUsers - Indicates that the application supports pass through users who have no presence in the resource tenant.
+	AllowPassthroughUsers *bool `json:"allowPassthroughUsers,omitempty"`
+	// AppLogoURL - The url for the application logo image stored in a CDN.
+	AppLogoURL *string `json:"appLogoUrl,omitempty"`
+	// AppRoles - The collection of application roles that an application may declare. These roles can be assigned to users, groups or service principals.
+	AppRoles *[]AppRole `json:"appRoles,omitempty"`
+	// AppPermissions - The application permissions.
+	AppPermissions *[]string `json:"appPermissions,omitempty"`
+	// AvailableToOtherTenants - Whether the application is available to other tenants.
+	AvailableToOtherTenants *bool `json:"availableToOtherTenants,omitempty"`
+	// ErrorURL - A URL provided by the author of the application to report errors when using the application.
+	ErrorURL *string `json:"errorUrl,omitempty"`
+	// GroupMembershipClaims - Configures the groups claim issued in a user or OAuth 2.0 access token that the app expects.
+	GroupMembershipClaims interface{} `json:"groupMembershipClaims,omitempty"`
+	// Homepage - The home page of the application.
+	Homepage *string `json:"homepage,omitempty"`
+	// InformationalUrls - URLs with more information about the application.
+	InformationalUrls *InformationalURL `json:"informationalUrls,omitempty"`
+	// IsDeviceOnlyAuthSupported - Specifies whether this application supports device authentication without a user. The default is false.
+	IsDeviceOnlyAuthSupported *bool `json:"isDeviceOnlyAuthSupported,omitempty"`
+	// KeyCredentials - A collection of KeyCredential objects.
+	KeyCredentials *[]KeyCredential `json:"keyCredentials,omitempty"`
+	// KnownClientApplications - Client applications that are tied to this resource application. Consent to any of the known client applications will result in implicit consent to the resource application through a combined consent dialog (showing the OAuth permission scopes required by the client and the resource).
+	KnownClientApplications *[]string `json:"knownClientApplications,omitempty"`
+	// LogoutURL - the url of the logout page
+	LogoutURL *string `json:"logoutUrl,omitempty"`
+	// Oauth2AllowImplicitFlow - Whether to allow implicit grant flow for OAuth2
+	Oauth2AllowImplicitFlow *bool `json:"oauth2AllowImplicitFlow,omitempty"`
+	// Oauth2AllowURLPathMatching - Specifies whether during a token Request Azure AD will allow path matching of the redirect URI against the applications collection of replyURLs. The default is false.
+ Oauth2AllowURLPathMatching *bool `json:"oauth2AllowUrlPathMatching,omitempty"` + // Oauth2Permissions - The collection of OAuth 2.0 permission scopes that the web API (resource) application exposes to client applications. These permission scopes may be granted to client applications during consent. + Oauth2Permissions *[]OAuth2Permission `json:"oauth2Permissions,omitempty"` + // Oauth2RequirePostResponse - Specifies whether, as part of OAuth 2.0 token requests, Azure AD will allow POST requests, as opposed to GET requests. The default is false, which specifies that only GET requests will be allowed. + Oauth2RequirePostResponse *bool `json:"oauth2RequirePostResponse,omitempty"` + // OrgRestrictions - A list of tenants allowed to access application. + OrgRestrictions *[]string `json:"orgRestrictions,omitempty"` + OptionalClaims *OptionalClaims `json:"optionalClaims,omitempty"` + // PasswordCredentials - A collection of PasswordCredential objects + PasswordCredentials *[]PasswordCredential `json:"passwordCredentials,omitempty"` + // PreAuthorizedApplications - list of pre-authorized applications. + PreAuthorizedApplications *[]PreAuthorizedApplication `json:"preAuthorizedApplications,omitempty"` + // PublicClient - Specifies whether this application is a public client (such as an installed application running on a mobile device). Default is false. + PublicClient *bool `json:"publicClient,omitempty"` + // PublisherDomain - Reliable domain which can be used to identify an application. + PublisherDomain *string `json:"publisherDomain,omitempty"` + // ReplyUrls - A collection of reply URLs for the application. + ReplyUrls *[]string `json:"replyUrls,omitempty"` + // RequiredResourceAccess - Specifies resources that this application requires access to and the set of OAuth permission scopes and application roles that it needs under each of those resources. This pre-configuration of required resource access drives the consent experience. + RequiredResourceAccess *[]RequiredResourceAccess `json:"requiredResourceAccess,omitempty"` + // SamlMetadataURL - The URL to the SAML metadata for the application. + SamlMetadataURL *string `json:"samlMetadataUrl,omitempty"` + // SignInAudience - Audience for signing in to the application (AzureADMyOrganization, AzureADAllOrganizations, AzureADAndMicrosoftAccounts). + SignInAudience *string `json:"signInAudience,omitempty"` + // WwwHomepage - The primary Web page. + WwwHomepage *string `json:"wwwHomepage,omitempty"` +} + +// ApplicationCreateParameters request parameters for creating a new application. +type ApplicationCreateParameters struct { + // DisplayName - The display name of the application. + DisplayName *string `json:"displayName,omitempty"` + // IdentifierUris - A collection of URIs for the application. + IdentifierUris *[]string `json:"identifierUris,omitempty"` + // AllowGuestsSignIn - A property on the application to indicate if the application accepts other IDPs or not or partially accepts. + AllowGuestsSignIn *bool `json:"allowGuestsSignIn,omitempty"` + // AllowPassthroughUsers - Indicates that the application supports pass through users who have no presence in the resource tenant. + AllowPassthroughUsers *bool `json:"allowPassthroughUsers,omitempty"` + // AppLogoURL - The url for the application logo image stored in a CDN. + AppLogoURL *string `json:"appLogoUrl,omitempty"` + // AppRoles - The collection of application roles that an application may declare. These roles can be assigned to users, groups or service principals. 
+ AppRoles *[]AppRole `json:"appRoles,omitempty"` + // AppPermissions - The application permissions. + AppPermissions *[]string `json:"appPermissions,omitempty"` + // AvailableToOtherTenants - Whether the application is available to other tenants. + AvailableToOtherTenants *bool `json:"availableToOtherTenants,omitempty"` + // ErrorURL - A URL provided by the author of the application to report errors when using the application. + ErrorURL *string `json:"errorUrl,omitempty"` + // GroupMembershipClaims - Configures the groups claim issued in a user or OAuth 2.0 access token that the app expects. + GroupMembershipClaims interface{} `json:"groupMembershipClaims,omitempty"` + // Homepage - The home page of the application. + Homepage *string `json:"homepage,omitempty"` + // InformationalUrls - URLs with more information about the application. + InformationalUrls *InformationalURL `json:"informationalUrls,omitempty"` + // IsDeviceOnlyAuthSupported - Specifies whether this application supports device authentication without a user. The default is false. + IsDeviceOnlyAuthSupported *bool `json:"isDeviceOnlyAuthSupported,omitempty"` + // KeyCredentials - A collection of KeyCredential objects. + KeyCredentials *[]KeyCredential `json:"keyCredentials,omitempty"` + // KnownClientApplications - Client applications that are tied to this resource application. Consent to any of the known client applications will result in implicit consent to the resource application through a combined consent dialog (showing the OAuth permission scopes required by the client and the resource). + KnownClientApplications *[]string `json:"knownClientApplications,omitempty"` + // LogoutURL - the url of the logout page + LogoutURL *string `json:"logoutUrl,omitempty"` + // Oauth2AllowImplicitFlow - Whether to allow implicit grant flow for OAuth2 + Oauth2AllowImplicitFlow *bool `json:"oauth2AllowImplicitFlow,omitempty"` + // Oauth2AllowURLPathMatching - Specifies whether during a token Request Azure AD will allow path matching of the redirect URI against the applications collection of replyURLs. The default is false. + Oauth2AllowURLPathMatching *bool `json:"oauth2AllowUrlPathMatching,omitempty"` + // Oauth2Permissions - The collection of OAuth 2.0 permission scopes that the web API (resource) application exposes to client applications. These permission scopes may be granted to client applications during consent. + Oauth2Permissions *[]OAuth2Permission `json:"oauth2Permissions,omitempty"` + // Oauth2RequirePostResponse - Specifies whether, as part of OAuth 2.0 token requests, Azure AD will allow POST requests, as opposed to GET requests. The default is false, which specifies that only GET requests will be allowed. + Oauth2RequirePostResponse *bool `json:"oauth2RequirePostResponse,omitempty"` + // OrgRestrictions - A list of tenants allowed to access application. + OrgRestrictions *[]string `json:"orgRestrictions,omitempty"` + OptionalClaims *OptionalClaims `json:"optionalClaims,omitempty"` + // PasswordCredentials - A collection of PasswordCredential objects + PasswordCredentials *[]PasswordCredential `json:"passwordCredentials,omitempty"` + // PreAuthorizedApplications - list of pre-authorized applications. + PreAuthorizedApplications *[]PreAuthorizedApplication `json:"preAuthorizedApplications,omitempty"` + // PublicClient - Specifies whether this application is a public client (such as an installed application running on a mobile device). Default is false. 
+ PublicClient *bool `json:"publicClient,omitempty"` + // PublisherDomain - Reliable domain which can be used to identify an application. + PublisherDomain *string `json:"publisherDomain,omitempty"` + // ReplyUrls - A collection of reply URLs for the application. + ReplyUrls *[]string `json:"replyUrls,omitempty"` + // RequiredResourceAccess - Specifies resources that this application requires access to and the set of OAuth permission scopes and application roles that it needs under each of those resources. This pre-configuration of required resource access drives the consent experience. + RequiredResourceAccess *[]RequiredResourceAccess `json:"requiredResourceAccess,omitempty"` + // SamlMetadataURL - The URL to the SAML metadata for the application. + SamlMetadataURL *string `json:"samlMetadataUrl,omitempty"` + // SignInAudience - Audience for signing in to the application (AzureADMyOrganization, AzureADAllOrganizations, AzureADAndMicrosoftAccounts). + SignInAudience *string `json:"signInAudience,omitempty"` + // WwwHomepage - The primary Web page. + WwwHomepage *string `json:"wwwHomepage,omitempty"` +} + +// ApplicationListResult application list operation result. +type ApplicationListResult struct { + autorest.Response `json:"-"` + // Value - A collection of applications. + Value *[]Application `json:"value,omitempty"` + // OdataNextLink - The URL to get the next set of results. + OdataNextLink *string `json:"odata.nextLink,omitempty"` +} + +// ApplicationListResultIterator provides access to a complete listing of Application values. +type ApplicationListResultIterator struct { + i int + page ApplicationListResultPage +} + +// NextWithContext advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *ApplicationListResultIterator) NextWithContext(ctx context.Context) (err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ApplicationListResultIterator.NextWithContext") + defer func() { + sc := -1 + if iter.Response().Response.Response != nil { + sc = iter.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err = iter.page.NextWithContext(ctx) + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +// Deprecated: Use NextWithContext() instead. +func (iter *ApplicationListResultIterator) Next() error { + return iter.NextWithContext(context.Background()) +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter ApplicationListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter ApplicationListResultIterator) Response() ApplicationListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter ApplicationListResultIterator) Value() Application { + if !iter.page.NotDone() { + return Application{} + } + return iter.page.Values()[iter.i] +} + +// Creates a new instance of the ApplicationListResultIterator type. 
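+// An illustrative iteration sketch (not generated code): iter is assumed to be
+// built via NewApplicationListResultIterator below, and ctx supplied by the caller:
+//
+//	for iter.NotDone() {
+//		app := iter.Value()
+//		// ... use app ...
+//		if err := iter.NextWithContext(ctx); err != nil {
+//			break
+//		}
+//	}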
+func NewApplicationListResultIterator(page ApplicationListResultPage) ApplicationListResultIterator {
+	return ApplicationListResultIterator{page: page}
+}
+
+// IsEmpty returns true if the ListResult contains no values.
+func (alr ApplicationListResult) IsEmpty() bool {
+	return alr.Value == nil || len(*alr.Value) == 0
+}
+
+// ApplicationListResultPage contains a page of Application values.
+type ApplicationListResultPage struct {
+	fn  func(context.Context, ApplicationListResult) (ApplicationListResult, error)
+	alr ApplicationListResult
+}
+
+// NextWithContext advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+func (page *ApplicationListResultPage) NextWithContext(ctx context.Context) (err error) {
+	if tracing.IsEnabled() {
+		ctx = tracing.StartSpan(ctx, fqdn+"/ApplicationListResultPage.NextWithContext")
+		defer func() {
+			sc := -1
+			if page.Response().Response.Response != nil {
+				sc = page.Response().Response.Response.StatusCode
+			}
+			tracing.EndSpan(ctx, sc, err)
+		}()
+	}
+	next, err := page.fn(ctx, page.alr)
+	if err != nil {
+		return err
+	}
+	page.alr = next
+	return nil
+}
+
+// Next advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (page *ApplicationListResultPage) Next() error {
+	return page.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the page enumeration should be started or is not yet complete.
+func (page ApplicationListResultPage) NotDone() bool {
+	return !page.alr.IsEmpty()
+}
+
+// Response returns the raw server response from the last page request.
+func (page ApplicationListResultPage) Response() ApplicationListResult {
+	return page.alr
+}
+
+// Values returns the slice of values for the current page or nil if there are no values.
+func (page ApplicationListResultPage) Values() []Application {
+	if page.alr.IsEmpty() {
+		return nil
+	}
+	return *page.alr.Value
+}
+
+// Creates a new instance of the ApplicationListResultPage type.
+func NewApplicationListResultPage(getNextPage func(context.Context, ApplicationListResult) (ApplicationListResult, error)) ApplicationListResultPage {
+	return ApplicationListResultPage{fn: getNextPage}
+}
+
+// ApplicationUpdateParameters request parameters for updating an existing application.
+type ApplicationUpdateParameters struct {
+	// DisplayName - The display name of the application.
+	DisplayName *string `json:"displayName,omitempty"`
+	// IdentifierUris - A collection of URIs for the application.
+	IdentifierUris *[]string `json:"identifierUris,omitempty"`
+	// AllowGuestsSignIn - A property on the application to indicate if the application accepts other IDPs or not or partially accepts.
+	AllowGuestsSignIn *bool `json:"allowGuestsSignIn,omitempty"`
+	// AllowPassthroughUsers - Indicates that the application supports pass through users who have no presence in the resource tenant.
+	AllowPassthroughUsers *bool `json:"allowPassthroughUsers,omitempty"`
+	// AppLogoURL - The url for the application logo image stored in a CDN.
+	AppLogoURL *string `json:"appLogoUrl,omitempty"`
+	// AppRoles - The collection of application roles that an application may declare. These roles can be assigned to users, groups or service principals.
+	AppRoles *[]AppRole `json:"appRoles,omitempty"`
+	// AppPermissions - The application permissions.
+ AppPermissions *[]string `json:"appPermissions,omitempty"` + // AvailableToOtherTenants - Whether the application is available to other tenants. + AvailableToOtherTenants *bool `json:"availableToOtherTenants,omitempty"` + // ErrorURL - A URL provided by the author of the application to report errors when using the application. + ErrorURL *string `json:"errorUrl,omitempty"` + // GroupMembershipClaims - Configures the groups claim issued in a user or OAuth 2.0 access token that the app expects. + GroupMembershipClaims interface{} `json:"groupMembershipClaims,omitempty"` + // Homepage - The home page of the application. + Homepage *string `json:"homepage,omitempty"` + // InformationalUrls - URLs with more information about the application. + InformationalUrls *InformationalURL `json:"informationalUrls,omitempty"` + // IsDeviceOnlyAuthSupported - Specifies whether this application supports device authentication without a user. The default is false. + IsDeviceOnlyAuthSupported *bool `json:"isDeviceOnlyAuthSupported,omitempty"` + // KeyCredentials - A collection of KeyCredential objects. + KeyCredentials *[]KeyCredential `json:"keyCredentials,omitempty"` + // KnownClientApplications - Client applications that are tied to this resource application. Consent to any of the known client applications will result in implicit consent to the resource application through a combined consent dialog (showing the OAuth permission scopes required by the client and the resource). + KnownClientApplications *[]string `json:"knownClientApplications,omitempty"` + // LogoutURL - the url of the logout page + LogoutURL *string `json:"logoutUrl,omitempty"` + // Oauth2AllowImplicitFlow - Whether to allow implicit grant flow for OAuth2 + Oauth2AllowImplicitFlow *bool `json:"oauth2AllowImplicitFlow,omitempty"` + // Oauth2AllowURLPathMatching - Specifies whether during a token Request Azure AD will allow path matching of the redirect URI against the applications collection of replyURLs. The default is false. + Oauth2AllowURLPathMatching *bool `json:"oauth2AllowUrlPathMatching,omitempty"` + // Oauth2Permissions - The collection of OAuth 2.0 permission scopes that the web API (resource) application exposes to client applications. These permission scopes may be granted to client applications during consent. + Oauth2Permissions *[]OAuth2Permission `json:"oauth2Permissions,omitempty"` + // Oauth2RequirePostResponse - Specifies whether, as part of OAuth 2.0 token requests, Azure AD will allow POST requests, as opposed to GET requests. The default is false, which specifies that only GET requests will be allowed. + Oauth2RequirePostResponse *bool `json:"oauth2RequirePostResponse,omitempty"` + // OrgRestrictions - A list of tenants allowed to access application. + OrgRestrictions *[]string `json:"orgRestrictions,omitempty"` + OptionalClaims *OptionalClaims `json:"optionalClaims,omitempty"` + // PasswordCredentials - A collection of PasswordCredential objects + PasswordCredentials *[]PasswordCredential `json:"passwordCredentials,omitempty"` + // PreAuthorizedApplications - list of pre-authorized applications. + PreAuthorizedApplications *[]PreAuthorizedApplication `json:"preAuthorizedApplications,omitempty"` + // PublicClient - Specifies whether this application is a public client (such as an installed application running on a mobile device). Default is false. + PublicClient *bool `json:"publicClient,omitempty"` + // PublisherDomain - Reliable domain which can be used to identify an application. 
+ PublisherDomain *string `json:"publisherDomain,omitempty"` + // ReplyUrls - A collection of reply URLs for the application. + ReplyUrls *[]string `json:"replyUrls,omitempty"` + // RequiredResourceAccess - Specifies resources that this application requires access to and the set of OAuth permission scopes and application roles that it needs under each of those resources. This pre-configuration of required resource access drives the consent experience. + RequiredResourceAccess *[]RequiredResourceAccess `json:"requiredResourceAccess,omitempty"` + // SamlMetadataURL - The URL to the SAML metadata for the application. + SamlMetadataURL *string `json:"samlMetadataUrl,omitempty"` + // SignInAudience - Audience for signing in to the application (AzureADMyOrganization, AzureADAllOrganizations, AzureADAndMicrosoftAccounts). + SignInAudience *string `json:"signInAudience,omitempty"` + // WwwHomepage - The primary Web page. + WwwHomepage *string `json:"wwwHomepage,omitempty"` +} + +// AppRole ... +type AppRole struct { + // ID - Unique role identifier inside the appRoles collection. + ID *string `json:"id,omitempty"` + // AllowedMemberTypes - Specifies whether this app role definition can be assigned to users and groups by setting to 'User', or to other applications (that are accessing this application in daemon service scenarios) by setting to 'Application', or to both. + AllowedMemberTypes *[]string `json:"allowedMemberTypes,omitempty"` + // Description - Permission help text that appears in the admin app assignment and consent experiences. + Description *string `json:"description,omitempty"` + // DisplayName - Display name for the permission that appears in the admin consent and app assignment experiences. + DisplayName *string `json:"displayName,omitempty"` + // IsEnabled - When creating or updating a role definition, this must be set to true (which is the default). To delete a role, this must first be set to false. At that point, in a subsequent call, this role may be removed. + IsEnabled *bool `json:"isEnabled,omitempty"` + // Value - Specifies the value of the roles claim that the application should expect in the authentication and access tokens. + Value *string `json:"value,omitempty"` +} + +// CheckGroupMembershipParameters request parameters for IsMemberOf API call. +type CheckGroupMembershipParameters struct { + // AdditionalProperties - Unmatched properties from the message are deserialized this collection + AdditionalProperties map[string]interface{} `json:""` + // GroupID - The object ID of the group to check. + GroupID *string `json:"groupId,omitempty"` + // MemberID - The object ID of the contact, group, user, or service principal to check for membership in the specified group. + MemberID *string `json:"memberId,omitempty"` +} + +// MarshalJSON is the custom marshaler for CheckGroupMembershipParameters. +func (cgmp CheckGroupMembershipParameters) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if cgmp.GroupID != nil { + objectMap["groupId"] = cgmp.GroupID + } + if cgmp.MemberID != nil { + objectMap["memberId"] = cgmp.MemberID + } + for k, v := range cgmp.AdditionalProperties { + objectMap[k] = v + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for CheckGroupMembershipParameters struct. 
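+// A minimal construction sketch (illustrative; the IDs are placeholders), using
+// the to helpers this file already imports:
+//
+//	params := CheckGroupMembershipParameters{
+//		GroupID:  to.StringPtr("<group objectId>"),
+//		MemberID: to.StringPtr("<member objectId>"),
+//	}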
+func (cgmp *CheckGroupMembershipParameters) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + default: + if v != nil { + var additionalProperties interface{} + err = json.Unmarshal(*v, &additionalProperties) + if err != nil { + return err + } + if cgmp.AdditionalProperties == nil { + cgmp.AdditionalProperties = make(map[string]interface{}) + } + cgmp.AdditionalProperties[k] = additionalProperties + } + case "groupId": + if v != nil { + var groupID string + err = json.Unmarshal(*v, &groupID) + if err != nil { + return err + } + cgmp.GroupID = &groupID + } + case "memberId": + if v != nil { + var memberID string + err = json.Unmarshal(*v, &memberID) + if err != nil { + return err + } + cgmp.MemberID = &memberID + } + } + } + + return nil +} + +// CheckGroupMembershipResult server response for IsMemberOf API call +type CheckGroupMembershipResult struct { + autorest.Response `json:"-"` + // AdditionalProperties - Unmatched properties from the message are deserialized this collection + AdditionalProperties map[string]interface{} `json:""` + // Value - True if the specified user, group, contact, or service principal has either direct or transitive membership in the specified group; otherwise, false. + Value *bool `json:"value,omitempty"` +} + +// MarshalJSON is the custom marshaler for CheckGroupMembershipResult. +func (cgmr CheckGroupMembershipResult) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if cgmr.Value != nil { + objectMap["value"] = cgmr.Value + } + for k, v := range cgmr.AdditionalProperties { + objectMap[k] = v + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for CheckGroupMembershipResult struct. +func (cgmr *CheckGroupMembershipResult) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + default: + if v != nil { + var additionalProperties interface{} + err = json.Unmarshal(*v, &additionalProperties) + if err != nil { + return err + } + if cgmr.AdditionalProperties == nil { + cgmr.AdditionalProperties = make(map[string]interface{}) + } + cgmr.AdditionalProperties[k] = additionalProperties + } + case "value": + if v != nil { + var value bool + err = json.Unmarshal(*v, &value) + if err != nil { + return err + } + cgmr.Value = &value + } + } + } + + return nil +} + +// BasicDirectoryObject represents an Azure Active Directory object. +type BasicDirectoryObject interface { + AsApplication() (*Application, bool) + AsADGroup() (*ADGroup, bool) + AsServicePrincipal() (*ServicePrincipal, bool) + AsUser() (*User, bool) + AsDirectoryObject() (*DirectoryObject, bool) +} + +// DirectoryObject represents an Azure Active Directory object. +type DirectoryObject struct { + // AdditionalProperties - Unmatched properties from the message are deserialized this collection + AdditionalProperties map[string]interface{} `json:""` + // ObjectID - READ-ONLY; The object ID. + ObjectID *string `json:"objectId,omitempty"` + // DeletionTimestamp - READ-ONLY; The time at which the directory object was deleted. 
+ DeletionTimestamp *date.Time `json:"deletionTimestamp,omitempty"` + // ObjectType - Possible values include: 'ObjectTypeDirectoryObject', 'ObjectTypeApplication', 'ObjectTypeGroup', 'ObjectTypeServicePrincipal', 'ObjectTypeUser' + ObjectType ObjectType `json:"objectType,omitempty"` +} + +func unmarshalBasicDirectoryObject(body []byte) (BasicDirectoryObject, error) { + var m map[string]interface{} + err := json.Unmarshal(body, &m) + if err != nil { + return nil, err + } + + switch m["objectType"] { + case string(ObjectTypeApplication): + var a Application + err := json.Unmarshal(body, &a) + return a, err + case string(ObjectTypeGroup): + var ag ADGroup + err := json.Unmarshal(body, &ag) + return ag, err + case string(ObjectTypeServicePrincipal): + var sp ServicePrincipal + err := json.Unmarshal(body, &sp) + return sp, err + case string(ObjectTypeUser): + var u User + err := json.Unmarshal(body, &u) + return u, err + default: + var do DirectoryObject + err := json.Unmarshal(body, &do) + return do, err + } +} +func unmarshalBasicDirectoryObjectArray(body []byte) ([]BasicDirectoryObject, error) { + var rawMessages []*json.RawMessage + err := json.Unmarshal(body, &rawMessages) + if err != nil { + return nil, err + } + + doArray := make([]BasicDirectoryObject, len(rawMessages)) + + for index, rawMessage := range rawMessages { + do, err := unmarshalBasicDirectoryObject(*rawMessage) + if err != nil { + return nil, err + } + doArray[index] = do + } + return doArray, nil +} + +// MarshalJSON is the custom marshaler for DirectoryObject. +func (do DirectoryObject) MarshalJSON() ([]byte, error) { + do.ObjectType = ObjectTypeDirectoryObject + objectMap := make(map[string]interface{}) + if do.ObjectType != "" { + objectMap["objectType"] = do.ObjectType + } + for k, v := range do.AdditionalProperties { + objectMap[k] = v + } + return json.Marshal(objectMap) +} + +// AsApplication is the BasicDirectoryObject implementation for DirectoryObject. +func (do DirectoryObject) AsApplication() (*Application, bool) { + return nil, false +} + +// AsADGroup is the BasicDirectoryObject implementation for DirectoryObject. +func (do DirectoryObject) AsADGroup() (*ADGroup, bool) { + return nil, false +} + +// AsServicePrincipal is the BasicDirectoryObject implementation for DirectoryObject. +func (do DirectoryObject) AsServicePrincipal() (*ServicePrincipal, bool) { + return nil, false +} + +// AsUser is the BasicDirectoryObject implementation for DirectoryObject. +func (do DirectoryObject) AsUser() (*User, bool) { + return nil, false +} + +// AsDirectoryObject is the BasicDirectoryObject implementation for DirectoryObject. +func (do DirectoryObject) AsDirectoryObject() (*DirectoryObject, bool) { + return &do, true +} + +// AsBasicDirectoryObject is the BasicDirectoryObject implementation for DirectoryObject. +func (do DirectoryObject) AsBasicDirectoryObject() (BasicDirectoryObject, bool) { + return &do, true +} + +// UnmarshalJSON is the custom unmarshaler for DirectoryObject struct. 
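+// unmarshalBasicDirectoryObject above dispatches on the "objectType"
+// discriminator; an illustrative sketch (payload values are assumed):
+//
+//	obj, _ := unmarshalBasicDirectoryObject([]byte(`{"objectType":"User"}`))
+//	u, ok := obj.AsUser() // ok == true, u holds the decoded *User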
+func (do *DirectoryObject) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + default: + if v != nil { + var additionalProperties interface{} + err = json.Unmarshal(*v, &additionalProperties) + if err != nil { + return err + } + if do.AdditionalProperties == nil { + do.AdditionalProperties = make(map[string]interface{}) + } + do.AdditionalProperties[k] = additionalProperties + } + case "objectId": + if v != nil { + var objectID string + err = json.Unmarshal(*v, &objectID) + if err != nil { + return err + } + do.ObjectID = &objectID + } + case "deletionTimestamp": + if v != nil { + var deletionTimestamp date.Time + err = json.Unmarshal(*v, &deletionTimestamp) + if err != nil { + return err + } + do.DeletionTimestamp = &deletionTimestamp + } + case "objectType": + if v != nil { + var objectType ObjectType + err = json.Unmarshal(*v, &objectType) + if err != nil { + return err + } + do.ObjectType = objectType + } + } + } + + return nil +} + +// DirectoryObjectListResult directoryObject list operation result. +type DirectoryObjectListResult struct { + autorest.Response `json:"-"` + // Value - A collection of DirectoryObject. + Value *[]BasicDirectoryObject `json:"value,omitempty"` + // OdataNextLink - The URL to get the next set of results. + OdataNextLink *string `json:"odata.nextLink,omitempty"` +} + +// UnmarshalJSON is the custom unmarshaler for DirectoryObjectListResult struct. +func (dolr *DirectoryObjectListResult) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "value": + if v != nil { + value, err := unmarshalBasicDirectoryObjectArray(*v) + if err != nil { + return err + } + dolr.Value = &value + } + case "odata.nextLink": + if v != nil { + var odataNextLink string + err = json.Unmarshal(*v, &odataNextLink) + if err != nil { + return err + } + dolr.OdataNextLink = &odataNextLink + } + } + } + + return nil +} + +// DirectoryObjectListResultIterator provides access to a complete listing of DirectoryObject values. +type DirectoryObjectListResultIterator struct { + i int + page DirectoryObjectListResultPage +} + +// NextWithContext advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *DirectoryObjectListResultIterator) NextWithContext(ctx context.Context) (err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DirectoryObjectListResultIterator.NextWithContext") + defer func() { + sc := -1 + if iter.Response().Response.Response != nil { + sc = iter.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err = iter.page.NextWithContext(ctx) + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +// Deprecated: Use NextWithContext() instead. +func (iter *DirectoryObjectListResultIterator) Next() error { + return iter.NextWithContext(context.Background()) +} + +// NotDone returns true if the enumeration should be started or is not yet complete. 
+func (iter DirectoryObjectListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter DirectoryObjectListResultIterator) Response() DirectoryObjectListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter DirectoryObjectListResultIterator) Value() BasicDirectoryObject { + if !iter.page.NotDone() { + return DirectoryObject{} + } + return iter.page.Values()[iter.i] +} + +// Creates a new instance of the DirectoryObjectListResultIterator type. +func NewDirectoryObjectListResultIterator(page DirectoryObjectListResultPage) DirectoryObjectListResultIterator { + return DirectoryObjectListResultIterator{page: page} +} + +// IsEmpty returns true if the ListResult contains no values. +func (dolr DirectoryObjectListResult) IsEmpty() bool { + return dolr.Value == nil || len(*dolr.Value) == 0 +} + +// directoryObjectListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (dolr DirectoryObjectListResult) directoryObjectListResultPreparer(ctx context.Context) (*http.Request, error) { + if dolr.OdataNextLink == nil || len(to.String(dolr.OdataNextLink)) < 1 { + return nil, nil + } + return autorest.Prepare((&http.Request{}).WithContext(ctx), + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(dolr.OdataNextLink))) +} + +// DirectoryObjectListResultPage contains a page of BasicDirectoryObject values. +type DirectoryObjectListResultPage struct { + fn func(context.Context, DirectoryObjectListResult) (DirectoryObjectListResult, error) + dolr DirectoryObjectListResult +} + +// NextWithContext advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *DirectoryObjectListResultPage) NextWithContext(ctx context.Context) (err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DirectoryObjectListResultPage.NextWithContext") + defer func() { + sc := -1 + if page.Response().Response.Response != nil { + sc = page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + next, err := page.fn(ctx, page.dolr) + if err != nil { + return err + } + page.dolr = next + return nil +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +// Deprecated: Use NextWithContext() instead. +func (page *DirectoryObjectListResultPage) Next() error { + return page.NextWithContext(context.Background()) +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page DirectoryObjectListResultPage) NotDone() bool { + return !page.dolr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page DirectoryObjectListResultPage) Response() DirectoryObjectListResult { + return page.dolr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page DirectoryObjectListResultPage) Values() []BasicDirectoryObject { + if page.dolr.IsEmpty() { + return nil + } + return *page.dolr.Value +} + +// Creates a new instance of the DirectoryObjectListResultPage type. 
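+// An illustrative page-by-page loop (not generated code): page is assumed to
+// come from a List call or NewDirectoryObjectListResultPage below, ctx from the caller:
+//
+//	for page.NotDone() {
+//		for _, obj := range page.Values() {
+//			// ... inspect obj via its As* helpers ...
+//		}
+//		if err := page.NextWithContext(ctx); err != nil {
+//			break
+//		}
+//	}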
+func NewDirectoryObjectListResultPage(getNextPage func(context.Context, DirectoryObjectListResult) (DirectoryObjectListResult, error)) DirectoryObjectListResultPage { + return DirectoryObjectListResultPage{fn: getNextPage} +} + +// Domain active Directory Domain information. +type Domain struct { + autorest.Response `json:"-"` + // AdditionalProperties - Unmatched properties from the message are deserialized this collection + AdditionalProperties map[string]interface{} `json:""` + // AuthenticationType - READ-ONLY; the type of the authentication into the domain. + AuthenticationType *string `json:"authenticationType,omitempty"` + // IsDefault - READ-ONLY; if this is the default domain in the tenant. + IsDefault *bool `json:"isDefault,omitempty"` + // IsVerified - READ-ONLY; if this domain's ownership is verified. + IsVerified *bool `json:"isVerified,omitempty"` + // Name - the domain name. + Name *string `json:"name,omitempty"` +} + +// MarshalJSON is the custom marshaler for Domain. +func (d Domain) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if d.Name != nil { + objectMap["name"] = d.Name + } + for k, v := range d.AdditionalProperties { + objectMap[k] = v + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for Domain struct. +func (d *Domain) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + default: + if v != nil { + var additionalProperties interface{} + err = json.Unmarshal(*v, &additionalProperties) + if err != nil { + return err + } + if d.AdditionalProperties == nil { + d.AdditionalProperties = make(map[string]interface{}) + } + d.AdditionalProperties[k] = additionalProperties + } + case "authenticationType": + if v != nil { + var authenticationType string + err = json.Unmarshal(*v, &authenticationType) + if err != nil { + return err + } + d.AuthenticationType = &authenticationType + } + case "isDefault": + if v != nil { + var isDefault bool + err = json.Unmarshal(*v, &isDefault) + if err != nil { + return err + } + d.IsDefault = &isDefault + } + case "isVerified": + if v != nil { + var isVerified bool + err = json.Unmarshal(*v, &isVerified) + if err != nil { + return err + } + d.IsVerified = &isVerified + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + d.Name = &name + } + } + } + + return nil +} + +// DomainListResult server response for Get tenant domains API call. +type DomainListResult struct { + autorest.Response `json:"-"` + // Value - the list of domains. + Value *[]Domain `json:"value,omitempty"` +} + +// ErrorMessage active Directory error message. +type ErrorMessage struct { + // Message - Error message value. + Message *string `json:"value,omitempty"` +} + +// GetObjectsParameters request parameters for the GetObjectsByObjectIds API. +type GetObjectsParameters struct { + // AdditionalProperties - Unmatched properties from the message are deserialized this collection + AdditionalProperties map[string]interface{} `json:""` + // ObjectIds - The requested object IDs. + ObjectIds *[]string `json:"objectIds,omitempty"` + // Types - The requested object types. + Types *[]string `json:"types,omitempty"` + // IncludeDirectoryObjectReferences - If true, also searches for object IDs in the partner tenant. 
+ IncludeDirectoryObjectReferences *bool `json:"includeDirectoryObjectReferences,omitempty"` +} + +// MarshalJSON is the custom marshaler for GetObjectsParameters. +func (gop GetObjectsParameters) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if gop.ObjectIds != nil { + objectMap["objectIds"] = gop.ObjectIds + } + if gop.Types != nil { + objectMap["types"] = gop.Types + } + if gop.IncludeDirectoryObjectReferences != nil { + objectMap["includeDirectoryObjectReferences"] = gop.IncludeDirectoryObjectReferences + } + for k, v := range gop.AdditionalProperties { + objectMap[k] = v + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for GetObjectsParameters struct. +func (gop *GetObjectsParameters) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + default: + if v != nil { + var additionalProperties interface{} + err = json.Unmarshal(*v, &additionalProperties) + if err != nil { + return err + } + if gop.AdditionalProperties == nil { + gop.AdditionalProperties = make(map[string]interface{}) + } + gop.AdditionalProperties[k] = additionalProperties + } + case "objectIds": + if v != nil { + var objectIds []string + err = json.Unmarshal(*v, &objectIds) + if err != nil { + return err + } + gop.ObjectIds = &objectIds + } + case "types": + if v != nil { + var typesVar []string + err = json.Unmarshal(*v, &typesVar) + if err != nil { + return err + } + gop.Types = &typesVar + } + case "includeDirectoryObjectReferences": + if v != nil { + var includeDirectoryObjectReferences bool + err = json.Unmarshal(*v, &includeDirectoryObjectReferences) + if err != nil { + return err + } + gop.IncludeDirectoryObjectReferences = &includeDirectoryObjectReferences + } + } + } + + return nil +} + +// GraphError active Directory error information. +type GraphError struct { + // OdataError - A Graph API error. + *OdataError `json:"odata.error,omitempty"` +} + +// MarshalJSON is the custom marshaler for GraphError. +func (ge GraphError) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if ge.OdataError != nil { + objectMap["odata.error"] = ge.OdataError + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for GraphError struct. +func (ge *GraphError) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "odata.error": + if v != nil { + var odataError OdataError + err = json.Unmarshal(*v, &odataError) + if err != nil { + return err + } + ge.OdataError = &odataError + } + } + } + + return nil +} + +// GroupAddMemberParameters request parameters for adding a member to a group. +type GroupAddMemberParameters struct { + // AdditionalProperties - Unmatched properties from the message are deserialized this collection + AdditionalProperties map[string]interface{} `json:""` + // URL - A member object URL, such as "https://graph.windows.net/0b1f9851-1bf0-433f-aec3-cb9272f093dc/directoryObjects/f260bbc4-c254-447b-94cf-293b5ec434dd", where "0b1f9851-1bf0-433f-aec3-cb9272f093dc" is the tenantId and "f260bbc4-c254-447b-94cf-293b5ec434dd" is the objectId of the member (user, application, servicePrincipal, group) to be added. + URL *string `json:"url,omitempty"` +} + +// MarshalJSON is the custom marshaler for GroupAddMemberParameters. 
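+// For reference, building the member URL described above (editor's sketch;
+// tenantID and memberObjectID are placeholders supplied by the caller, and
+// fmt is assumed to be imported at the call site):
+//
+//    url := fmt.Sprintf("https://graph.windows.net/%s/directoryObjects/%s",
+//        tenantID, memberObjectID)
+//    params := GroupAddMemberParameters{URL: &url}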
+func (gamp GroupAddMemberParameters) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if gamp.URL != nil { + objectMap["url"] = gamp.URL + } + for k, v := range gamp.AdditionalProperties { + objectMap[k] = v + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for GroupAddMemberParameters struct. +func (gamp *GroupAddMemberParameters) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + default: + if v != nil { + var additionalProperties interface{} + err = json.Unmarshal(*v, &additionalProperties) + if err != nil { + return err + } + if gamp.AdditionalProperties == nil { + gamp.AdditionalProperties = make(map[string]interface{}) + } + gamp.AdditionalProperties[k] = additionalProperties + } + case "url": + if v != nil { + var URL string + err = json.Unmarshal(*v, &URL) + if err != nil { + return err + } + gamp.URL = &URL + } + } + } + + return nil +} + +// GroupCreateParameters request parameters for creating a new group. +type GroupCreateParameters struct { + // AdditionalProperties - Unmatched properties from the message are deserialized this collection + AdditionalProperties map[string]interface{} `json:""` + // DisplayName - Group display name + DisplayName *string `json:"displayName,omitempty"` + // MailEnabled - Whether the group is mail-enabled. Must be false. This is because only pure security groups can be created using the Graph API. + MailEnabled *bool `json:"mailEnabled,omitempty"` + // MailNickname - Mail nickname + MailNickname *string `json:"mailNickname,omitempty"` + // SecurityEnabled - Whether the group is a security group. Must be true. This is because only pure security groups can be created using the Graph API. + SecurityEnabled *bool `json:"securityEnabled,omitempty"` +} + +// MarshalJSON is the custom marshaler for GroupCreateParameters. +func (gcp GroupCreateParameters) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if gcp.DisplayName != nil { + objectMap["displayName"] = gcp.DisplayName + } + if gcp.MailEnabled != nil { + objectMap["mailEnabled"] = gcp.MailEnabled + } + if gcp.MailNickname != nil { + objectMap["mailNickname"] = gcp.MailNickname + } + if gcp.SecurityEnabled != nil { + objectMap["securityEnabled"] = gcp.SecurityEnabled + } + for k, v := range gcp.AdditionalProperties { + objectMap[k] = v + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for GroupCreateParameters struct. 
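+// Constructing creation parameters that satisfy the constraints above
+// (editor's sketch, using the to helper package already imported by this
+// file; the display name and nickname are placeholders):
+//
+//    params := GroupCreateParameters{
+//        DisplayName:     to.StringPtr("example group"),
+//        MailEnabled:     to.BoolPtr(false), // must be false
+//        MailNickname:    to.StringPtr("examplegroup"),
+//        SecurityEnabled: to.BoolPtr(true), // must be true
+//    }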
+func (gcp *GroupCreateParameters) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + default: + if v != nil { + var additionalProperties interface{} + err = json.Unmarshal(*v, &additionalProperties) + if err != nil { + return err + } + if gcp.AdditionalProperties == nil { + gcp.AdditionalProperties = make(map[string]interface{}) + } + gcp.AdditionalProperties[k] = additionalProperties + } + case "displayName": + if v != nil { + var displayName string + err = json.Unmarshal(*v, &displayName) + if err != nil { + return err + } + gcp.DisplayName = &displayName + } + case "mailEnabled": + if v != nil { + var mailEnabled bool + err = json.Unmarshal(*v, &mailEnabled) + if err != nil { + return err + } + gcp.MailEnabled = &mailEnabled + } + case "mailNickname": + if v != nil { + var mailNickname string + err = json.Unmarshal(*v, &mailNickname) + if err != nil { + return err + } + gcp.MailNickname = &mailNickname + } + case "securityEnabled": + if v != nil { + var securityEnabled bool + err = json.Unmarshal(*v, &securityEnabled) + if err != nil { + return err + } + gcp.SecurityEnabled = &securityEnabled + } + } + } + + return nil +} + +// GroupGetMemberGroupsParameters request parameters for GetMemberGroups API call. +type GroupGetMemberGroupsParameters struct { + // AdditionalProperties - Unmatched properties from the message are deserialized this collection + AdditionalProperties map[string]interface{} `json:""` + // SecurityEnabledOnly - If true, only membership in security-enabled groups should be checked. Otherwise, membership in all groups should be checked. + SecurityEnabledOnly *bool `json:"securityEnabledOnly,omitempty"` +} + +// MarshalJSON is the custom marshaler for GroupGetMemberGroupsParameters. +func (ggmgp GroupGetMemberGroupsParameters) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if ggmgp.SecurityEnabledOnly != nil { + objectMap["securityEnabledOnly"] = ggmgp.SecurityEnabledOnly + } + for k, v := range ggmgp.AdditionalProperties { + objectMap[k] = v + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for GroupGetMemberGroupsParameters struct. +func (ggmgp *GroupGetMemberGroupsParameters) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + default: + if v != nil { + var additionalProperties interface{} + err = json.Unmarshal(*v, &additionalProperties) + if err != nil { + return err + } + if ggmgp.AdditionalProperties == nil { + ggmgp.AdditionalProperties = make(map[string]interface{}) + } + ggmgp.AdditionalProperties[k] = additionalProperties + } + case "securityEnabledOnly": + if v != nil { + var securityEnabledOnly bool + err = json.Unmarshal(*v, &securityEnabledOnly) + if err != nil { + return err + } + ggmgp.SecurityEnabledOnly = &securityEnabledOnly + } + } + } + + return nil +} + +// GroupGetMemberGroupsResult server response for GetMemberGroups API call. +type GroupGetMemberGroupsResult struct { + autorest.Response `json:"-"` + // Value - A collection of group IDs of which the group is a member. + Value *[]string `json:"value,omitempty"` +} + +// GroupListResult server response for Get tenant groups API call +type GroupListResult struct { + autorest.Response `json:"-"` + // Value - A collection of Active Directory groups. 
+ Value *[]ADGroup `json:"value,omitempty"` + // OdataNextLink - The URL to get the next set of results. + OdataNextLink *string `json:"odata.nextLink,omitempty"` +} + +// GroupListResultIterator provides access to a complete listing of ADGroup values. +type GroupListResultIterator struct { + i int + page GroupListResultPage +} + +// NextWithContext advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *GroupListResultIterator) NextWithContext(ctx context.Context) (err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupListResultIterator.NextWithContext") + defer func() { + sc := -1 + if iter.Response().Response.Response != nil { + sc = iter.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err = iter.page.NextWithContext(ctx) + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +// Deprecated: Use NextWithContext() instead. +func (iter *GroupListResultIterator) Next() error { + return iter.NextWithContext(context.Background()) +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter GroupListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter GroupListResultIterator) Response() GroupListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter GroupListResultIterator) Value() ADGroup { + if !iter.page.NotDone() { + return ADGroup{} + } + return iter.page.Values()[iter.i] +} + +// Creates a new instance of the GroupListResultIterator type. +func NewGroupListResultIterator(page GroupListResultPage) GroupListResultIterator { + return GroupListResultIterator{page: page} +} + +// IsEmpty returns true if the ListResult contains no values. +func (glr GroupListResult) IsEmpty() bool { + return glr.Value == nil || len(*glr.Value) == 0 +} + +// GroupListResultPage contains a page of ADGroup values. +type GroupListResultPage struct { + fn func(context.Context, GroupListResult) (GroupListResult, error) + glr GroupListResult +} + +// NextWithContext advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *GroupListResultPage) NextWithContext(ctx context.Context) (err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupListResultPage.NextWithContext") + defer func() { + sc := -1 + if page.Response().Response.Response != nil { + sc = page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + next, err := page.fn(ctx, page.glr) + if err != nil { + return err + } + page.glr = next + return nil +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +// Deprecated: Use NextWithContext() instead. 
+func (page *GroupListResultPage) Next() error { + return page.NextWithContext(context.Background()) +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page GroupListResultPage) NotDone() bool { + return !page.glr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page GroupListResultPage) Response() GroupListResult { + return page.glr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page GroupListResultPage) Values() []ADGroup { + if page.glr.IsEmpty() { + return nil + } + return *page.glr.Value +} + +// Creates a new instance of the GroupListResultPage type. +func NewGroupListResultPage(getNextPage func(context.Context, GroupListResult) (GroupListResult, error)) GroupListResultPage { + return GroupListResultPage{fn: getNextPage} +} + +// InformationalURL represents a group of URIs that provide terms of service, marketing, support and +// privacy policy information about an application. The default value for each string is null. +type InformationalURL struct { + // TermsOfService - The terms of service URI + TermsOfService *string `json:"termsOfService,omitempty"` + // Marketing - The marketing URI + Marketing *string `json:"marketing,omitempty"` + // Privacy - The privacy policy URI + Privacy *string `json:"privacy,omitempty"` + // Support - The support URI + Support *string `json:"support,omitempty"` +} + +// KeyCredential active Directory Key Credential information. +type KeyCredential struct { + // AdditionalProperties - Unmatched properties from the message are deserialized this collection + AdditionalProperties map[string]interface{} `json:""` + // StartDate - Start date. + StartDate *date.Time `json:"startDate,omitempty"` + // EndDate - End date. + EndDate *date.Time `json:"endDate,omitempty"` + // Value - Key value. + Value *string `json:"value,omitempty"` + // KeyID - Key ID. + KeyID *string `json:"keyId,omitempty"` + // Usage - Usage. Acceptable values are 'Verify' and 'Sign'. + Usage *string `json:"usage,omitempty"` + // Type - Type. Acceptable values are 'AsymmetricX509Cert' and 'Symmetric'. + Type *string `json:"type,omitempty"` + // CustomKeyIdentifier - Custom Key Identifier + CustomKeyIdentifier *string `json:"customKeyIdentifier,omitempty"` +} + +// MarshalJSON is the custom marshaler for KeyCredential. +func (kc KeyCredential) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if kc.StartDate != nil { + objectMap["startDate"] = kc.StartDate + } + if kc.EndDate != nil { + objectMap["endDate"] = kc.EndDate + } + if kc.Value != nil { + objectMap["value"] = kc.Value + } + if kc.KeyID != nil { + objectMap["keyId"] = kc.KeyID + } + if kc.Usage != nil { + objectMap["usage"] = kc.Usage + } + if kc.Type != nil { + objectMap["type"] = kc.Type + } + if kc.CustomKeyIdentifier != nil { + objectMap["customKeyIdentifier"] = kc.CustomKeyIdentifier + } + for k, v := range kc.AdditionalProperties { + objectMap[k] = v + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for KeyCredential struct. 
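+// A verification credential using the acceptable values listed above looks
+// like this (editor's sketch; start, end and certValue are placeholders
+// supplied by the caller):
+//
+//    kc := KeyCredential{
+//        StartDate: &start,
+//        EndDate:   &end,
+//        Value:     &certValue,
+//        Usage:     to.StringPtr("Verify"),
+//        Type:      to.StringPtr("AsymmetricX509Cert"),
+//    }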
+func (kc *KeyCredential) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + default: + if v != nil { + var additionalProperties interface{} + err = json.Unmarshal(*v, &additionalProperties) + if err != nil { + return err + } + if kc.AdditionalProperties == nil { + kc.AdditionalProperties = make(map[string]interface{}) + } + kc.AdditionalProperties[k] = additionalProperties + } + case "startDate": + if v != nil { + var startDate date.Time + err = json.Unmarshal(*v, &startDate) + if err != nil { + return err + } + kc.StartDate = &startDate + } + case "endDate": + if v != nil { + var endDate date.Time + err = json.Unmarshal(*v, &endDate) + if err != nil { + return err + } + kc.EndDate = &endDate + } + case "value": + if v != nil { + var value string + err = json.Unmarshal(*v, &value) + if err != nil { + return err + } + kc.Value = &value + } + case "keyId": + if v != nil { + var keyID string + err = json.Unmarshal(*v, &keyID) + if err != nil { + return err + } + kc.KeyID = &keyID + } + case "usage": + if v != nil { + var usage string + err = json.Unmarshal(*v, &usage) + if err != nil { + return err + } + kc.Usage = &usage + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + kc.Type = &typeVar + } + case "customKeyIdentifier": + if v != nil { + var customKeyIdentifier string + err = json.Unmarshal(*v, &customKeyIdentifier) + if err != nil { + return err + } + kc.CustomKeyIdentifier = &customKeyIdentifier + } + } + } + + return nil +} + +// KeyCredentialListResult keyCredential list operation result. +type KeyCredentialListResult struct { + autorest.Response `json:"-"` + // Value - A collection of KeyCredentials. + Value *[]KeyCredential `json:"value,omitempty"` +} + +// KeyCredentialsUpdateParameters request parameters for a KeyCredentials update operation +type KeyCredentialsUpdateParameters struct { + // Value - A collection of KeyCredentials. + Value *[]KeyCredential `json:"value,omitempty"` +} + +// OAuth2Permission represents an OAuth 2.0 delegated permission scope. The specified OAuth 2.0 delegated +// permission scopes may be requested by client applications (through the requiredResourceAccess collection +// on the Application object) when calling a resource application. The oauth2Permissions property of the +// ServicePrincipal entity and of the Application entity is a collection of OAuth2Permission. +type OAuth2Permission struct { + // AdminConsentDescription - Permission help text that appears in the admin consent and app assignment experiences. + AdminConsentDescription *string `json:"adminConsentDescription,omitempty"` + // AdminConsentDisplayName - Display name for the permission that appears in the admin consent and app assignment experiences. + AdminConsentDisplayName *string `json:"adminConsentDisplayName,omitempty"` + // ID - Unique scope permission identifier inside the oauth2Permissions collection. + ID *string `json:"id,omitempty"` + // IsEnabled - When creating or updating a permission, this property must be set to true (which is the default). To delete a permission, this property must first be set to false. At that point, in a subsequent call, the permission may be removed. 
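+ // Put differently (editor's note): removal is a two-step flow. First send an
+ // update with isEnabled set to false, then send a second update that omits
+ // the permission from the collection entirely.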
+ IsEnabled *bool `json:"isEnabled,omitempty"`
+ // Type - Specifies whether this scope permission can be consented to by an end user, or whether it is a tenant-wide permission that must be consented to by a Company Administrator. Possible values are "User" or "Admin".
+ Type *string `json:"type,omitempty"`
+ // UserConsentDescription - Permission help text that appears in the end user consent experience.
+ UserConsentDescription *string `json:"userConsentDescription,omitempty"`
+ // UserConsentDisplayName - Display name for the permission that appears in the end user consent experience.
+ UserConsentDisplayName *string `json:"userConsentDisplayName,omitempty"`
+ // Value - The value of the scope claim that the resource application should expect in the OAuth 2.0 access token.
+ Value *string `json:"value,omitempty"`
+}
+
+// OAuth2PermissionGrant an OAuth 2.0 permission grant.
+type OAuth2PermissionGrant struct {
+ autorest.Response `json:"-"`
+ // OdataType - Microsoft.DirectoryServices.OAuth2PermissionGrant
+ OdataType *string `json:"odata.type,omitempty"`
+ // ClientID - The id of the resource's service principal granted consent to impersonate the user when accessing the resource (represented by the resourceId property).
+ ClientID *string `json:"clientId,omitempty"`
+ // ObjectID - The id of the permission grant.
+ ObjectID *string `json:"objectId,omitempty"`
+ // ConsentType - Indicates if consent was provided by the administrator (on behalf of the organization) or by an individual. Possible values include: 'AllPrincipals', 'Principal'
+ ConsentType ConsentType `json:"consentType,omitempty"`
+ // PrincipalID - When consent type is Principal, this property specifies the id of the user that granted consent and applies only for that user.
+ PrincipalID *string `json:"principalId,omitempty"`
+ // ResourceID - The object ID of the resource to which access is granted.
+ ResourceID *string `json:"resourceId,omitempty"`
+ // Scope - Specifies the value of the scope claim that the resource application should expect in the OAuth 2.0 access token. For example, User.Read.
+ Scope *string `json:"scope,omitempty"`
+ // StartTime - Start time for TTL.
+ StartTime *string `json:"startTime,omitempty"`
+ // ExpiryTime - Expiry time for TTL.
+ ExpiryTime *string `json:"expiryTime,omitempty"`
+}
+
+// OAuth2PermissionGrantListResult server response for the get OAuth 2.0 permission grants API call.
+type OAuth2PermissionGrantListResult struct {
+ autorest.Response `json:"-"`
+ // Value - The list of OAuth 2.0 permission grants.
+ Value *[]OAuth2PermissionGrant `json:"value,omitempty"`
+ // OdataNextLink - The URL to get the next set of results.
+ OdataNextLink *string `json:"odata.nextLink,omitempty"`
+}
+
+// OAuth2PermissionGrantListResultIterator provides access to a complete listing of OAuth2PermissionGrant
+// values.
+type OAuth2PermissionGrantListResultIterator struct {
+ i int
+ page OAuth2PermissionGrantListResultPage
+}
+
+// NextWithContext advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+func (iter *OAuth2PermissionGrantListResultIterator) NextWithContext(ctx context.Context) (err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/OAuth2PermissionGrantListResultIterator.NextWithContext") + defer func() { + sc := -1 + if iter.Response().Response.Response != nil { + sc = iter.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err = iter.page.NextWithContext(ctx) + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +// Deprecated: Use NextWithContext() instead. +func (iter *OAuth2PermissionGrantListResultIterator) Next() error { + return iter.NextWithContext(context.Background()) +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter OAuth2PermissionGrantListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter OAuth2PermissionGrantListResultIterator) Response() OAuth2PermissionGrantListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter OAuth2PermissionGrantListResultIterator) Value() OAuth2PermissionGrant { + if !iter.page.NotDone() { + return OAuth2PermissionGrant{} + } + return iter.page.Values()[iter.i] +} + +// Creates a new instance of the OAuth2PermissionGrantListResultIterator type. +func NewOAuth2PermissionGrantListResultIterator(page OAuth2PermissionGrantListResultPage) OAuth2PermissionGrantListResultIterator { + return OAuth2PermissionGrantListResultIterator{page: page} +} + +// IsEmpty returns true if the ListResult contains no values. +func (oa2pglr OAuth2PermissionGrantListResult) IsEmpty() bool { + return oa2pglr.Value == nil || len(*oa2pglr.Value) == 0 +} + +// OAuth2PermissionGrantListResultPage contains a page of OAuth2PermissionGrant values. +type OAuth2PermissionGrantListResultPage struct { + fn func(context.Context, OAuth2PermissionGrantListResult) (OAuth2PermissionGrantListResult, error) + oa2pglr OAuth2PermissionGrantListResult +} + +// NextWithContext advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *OAuth2PermissionGrantListResultPage) NextWithContext(ctx context.Context) (err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/OAuth2PermissionGrantListResultPage.NextWithContext") + defer func() { + sc := -1 + if page.Response().Response.Response != nil { + sc = page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + next, err := page.fn(ctx, page.oa2pglr) + if err != nil { + return err + } + page.oa2pglr = next + return nil +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +// Deprecated: Use NextWithContext() instead. +func (page *OAuth2PermissionGrantListResultPage) Next() error { + return page.NextWithContext(context.Background()) +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. 
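+// (Editor's note: as with the other list results in this file, the *Page type
+// yields whole pages via Values() while the *Iterator type yields one
+// OAuth2PermissionGrant at a time via Value().)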
+func (page OAuth2PermissionGrantListResultPage) NotDone() bool { + return !page.oa2pglr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page OAuth2PermissionGrantListResultPage) Response() OAuth2PermissionGrantListResult { + return page.oa2pglr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page OAuth2PermissionGrantListResultPage) Values() []OAuth2PermissionGrant { + if page.oa2pglr.IsEmpty() { + return nil + } + return *page.oa2pglr.Value +} + +// Creates a new instance of the OAuth2PermissionGrantListResultPage type. +func NewOAuth2PermissionGrantListResultPage(getNextPage func(context.Context, OAuth2PermissionGrantListResult) (OAuth2PermissionGrantListResult, error)) OAuth2PermissionGrantListResultPage { + return OAuth2PermissionGrantListResultPage{fn: getNextPage} +} + +// OdataError active Directory OData error information. +type OdataError struct { + // Code - Error code. + Code *string `json:"code,omitempty"` + // ErrorMessage - Error Message. + *ErrorMessage `json:"message,omitempty"` +} + +// MarshalJSON is the custom marshaler for OdataError. +func (oe OdataError) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if oe.Code != nil { + objectMap["code"] = oe.Code + } + if oe.ErrorMessage != nil { + objectMap["message"] = oe.ErrorMessage + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for OdataError struct. +func (oe *OdataError) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "code": + if v != nil { + var code string + err = json.Unmarshal(*v, &code) + if err != nil { + return err + } + oe.Code = &code + } + case "message": + if v != nil { + var errorMessage ErrorMessage + err = json.Unmarshal(*v, &errorMessage) + if err != nil { + return err + } + oe.ErrorMessage = &errorMessage + } + } + } + + return nil +} + +// OptionalClaim specifying the claims to be included in a token. +type OptionalClaim struct { + // Name - Claim name. + Name *string `json:"name,omitempty"` + // Source - Claim source. + Source *string `json:"source,omitempty"` + // Essential - Is this a required claim. + Essential *bool `json:"essential,omitempty"` + AdditionalProperties interface{} `json:"additionalProperties,omitempty"` +} + +// OptionalClaims specifying the claims to be included in the token. +type OptionalClaims struct { + // IDToken - Optional claims requested to be included in the id token. + IDToken *[]OptionalClaim `json:"idToken,omitempty"` + // AccessToken - Optional claims requested to be included in the access token. + AccessToken *[]OptionalClaim `json:"accessToken,omitempty"` + // SamlToken - Optional claims requested to be included in the saml token. + SamlToken *[]OptionalClaim `json:"samlToken,omitempty"` +} + +// PasswordCredential active Directory Password Credential information. +type PasswordCredential struct { + // AdditionalProperties - Unmatched properties from the message are deserialized this collection + AdditionalProperties map[string]interface{} `json:""` + // StartDate - Start date. + StartDate *date.Time `json:"startDate,omitempty"` + // EndDate - End date. + EndDate *date.Time `json:"endDate,omitempty"` + // KeyID - Key ID. + KeyID *string `json:"keyId,omitempty"` + // Value - Key value. 
+ Value *string `json:"value,omitempty"` + // CustomKeyIdentifier - Custom Key Identifier + CustomKeyIdentifier *[]byte `json:"customKeyIdentifier,omitempty"` +} + +// MarshalJSON is the custom marshaler for PasswordCredential. +func (pc PasswordCredential) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if pc.StartDate != nil { + objectMap["startDate"] = pc.StartDate + } + if pc.EndDate != nil { + objectMap["endDate"] = pc.EndDate + } + if pc.KeyID != nil { + objectMap["keyId"] = pc.KeyID + } + if pc.Value != nil { + objectMap["value"] = pc.Value + } + if pc.CustomKeyIdentifier != nil { + objectMap["customKeyIdentifier"] = pc.CustomKeyIdentifier + } + for k, v := range pc.AdditionalProperties { + objectMap[k] = v + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for PasswordCredential struct. +func (pc *PasswordCredential) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + default: + if v != nil { + var additionalProperties interface{} + err = json.Unmarshal(*v, &additionalProperties) + if err != nil { + return err + } + if pc.AdditionalProperties == nil { + pc.AdditionalProperties = make(map[string]interface{}) + } + pc.AdditionalProperties[k] = additionalProperties + } + case "startDate": + if v != nil { + var startDate date.Time + err = json.Unmarshal(*v, &startDate) + if err != nil { + return err + } + pc.StartDate = &startDate + } + case "endDate": + if v != nil { + var endDate date.Time + err = json.Unmarshal(*v, &endDate) + if err != nil { + return err + } + pc.EndDate = &endDate + } + case "keyId": + if v != nil { + var keyID string + err = json.Unmarshal(*v, &keyID) + if err != nil { + return err + } + pc.KeyID = &keyID + } + case "value": + if v != nil { + var value string + err = json.Unmarshal(*v, &value) + if err != nil { + return err + } + pc.Value = &value + } + case "customKeyIdentifier": + if v != nil { + var customKeyIdentifier []byte + err = json.Unmarshal(*v, &customKeyIdentifier) + if err != nil { + return err + } + pc.CustomKeyIdentifier = &customKeyIdentifier + } + } + } + + return nil +} + +// PasswordCredentialListResult passwordCredential list operation result. +type PasswordCredentialListResult struct { + autorest.Response `json:"-"` + // Value - A collection of PasswordCredentials. + Value *[]PasswordCredential `json:"value,omitempty"` +} + +// PasswordCredentialsUpdateParameters request parameters for a PasswordCredentials update operation. +type PasswordCredentialsUpdateParameters struct { + // Value - A collection of PasswordCredentials. + Value *[]PasswordCredential `json:"value,omitempty"` +} + +// PasswordProfile the password profile associated with a user. +type PasswordProfile struct { + // AdditionalProperties - Unmatched properties from the message are deserialized this collection + AdditionalProperties map[string]interface{} `json:""` + // Password - Password + Password *string `json:"password,omitempty"` + // ForceChangePasswordNextLogin - Whether to force a password change on next login. + ForceChangePasswordNextLogin *bool `json:"forceChangePasswordNextLogin,omitempty"` +} + +// MarshalJSON is the custom marshaler for PasswordProfile. 
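+// A profile that forces a reset on first sign-in (editor's sketch; the
+// password value is a placeholder):
+//
+//    profile := PasswordProfile{
+//        Password:                     to.StringPtr("<initial-password>"),
+//        ForceChangePasswordNextLogin: to.BoolPtr(true),
+//    }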
+func (pp PasswordProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if pp.Password != nil { + objectMap["password"] = pp.Password + } + if pp.ForceChangePasswordNextLogin != nil { + objectMap["forceChangePasswordNextLogin"] = pp.ForceChangePasswordNextLogin + } + for k, v := range pp.AdditionalProperties { + objectMap[k] = v + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for PasswordProfile struct. +func (pp *PasswordProfile) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + default: + if v != nil { + var additionalProperties interface{} + err = json.Unmarshal(*v, &additionalProperties) + if err != nil { + return err + } + if pp.AdditionalProperties == nil { + pp.AdditionalProperties = make(map[string]interface{}) + } + pp.AdditionalProperties[k] = additionalProperties + } + case "password": + if v != nil { + var password string + err = json.Unmarshal(*v, &password) + if err != nil { + return err + } + pp.Password = &password + } + case "forceChangePasswordNextLogin": + if v != nil { + var forceChangePasswordNextLogin bool + err = json.Unmarshal(*v, &forceChangePasswordNextLogin) + if err != nil { + return err + } + pp.ForceChangePasswordNextLogin = &forceChangePasswordNextLogin + } + } + } + + return nil +} + +// PreAuthorizedApplication contains information about pre authorized client application. +type PreAuthorizedApplication struct { + // AppID - Represents the application id. + AppID *string `json:"appId,omitempty"` + // Permissions - Collection of required app permissions/entitlements from the resource application. + Permissions *[]PreAuthorizedApplicationPermission `json:"permissions,omitempty"` + // Extensions - Collection of extensions from the resource application. + Extensions *[]PreAuthorizedApplicationExtension `json:"extensions,omitempty"` +} + +// PreAuthorizedApplicationExtension representation of an app PreAuthorizedApplicationExtension required by +// a pre authorized client app. +type PreAuthorizedApplicationExtension struct { + // Conditions - The extension's conditions. + Conditions *[]string `json:"conditions,omitempty"` +} + +// PreAuthorizedApplicationPermission contains information about the pre-authorized permissions. +type PreAuthorizedApplicationPermission struct { + // DirectAccessGrant - Indicates whether the permission set is DirectAccess or impersonation. + DirectAccessGrant *bool `json:"directAccessGrant,omitempty"` + // AccessGrants - The list of permissions. + AccessGrants *[]string `json:"accessGrants,omitempty"` +} + +// RequiredResourceAccess specifies the set of OAuth 2.0 permission scopes and app roles under the +// specified resource that an application requires access to. The specified OAuth 2.0 permission scopes may +// be requested by client applications (through the requiredResourceAccess collection) when calling a +// resource application. The requiredResourceAccess property of the Application entity is a collection of +// RequiredResourceAccess. +type RequiredResourceAccess struct { + // AdditionalProperties - Unmatched properties from the message are deserialized this collection + AdditionalProperties map[string]interface{} `json:""` + // ResourceAccess - The list of OAuth2.0 permission scopes and app roles that the application requires from the specified resource. 
+ ResourceAccess *[]ResourceAccess `json:"resourceAccess,omitempty"` + // ResourceAppID - The unique identifier for the resource that the application requires access to. This should be equal to the appId declared on the target resource application. + ResourceAppID *string `json:"resourceAppId,omitempty"` +} + +// MarshalJSON is the custom marshaler for RequiredResourceAccess. +func (rra RequiredResourceAccess) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if rra.ResourceAccess != nil { + objectMap["resourceAccess"] = rra.ResourceAccess + } + if rra.ResourceAppID != nil { + objectMap["resourceAppId"] = rra.ResourceAppID + } + for k, v := range rra.AdditionalProperties { + objectMap[k] = v + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for RequiredResourceAccess struct. +func (rra *RequiredResourceAccess) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + default: + if v != nil { + var additionalProperties interface{} + err = json.Unmarshal(*v, &additionalProperties) + if err != nil { + return err + } + if rra.AdditionalProperties == nil { + rra.AdditionalProperties = make(map[string]interface{}) + } + rra.AdditionalProperties[k] = additionalProperties + } + case "resourceAccess": + if v != nil { + var resourceAccess []ResourceAccess + err = json.Unmarshal(*v, &resourceAccess) + if err != nil { + return err + } + rra.ResourceAccess = &resourceAccess + } + case "resourceAppId": + if v != nil { + var resourceAppID string + err = json.Unmarshal(*v, &resourceAppID) + if err != nil { + return err + } + rra.ResourceAppID = &resourceAppID + } + } + } + + return nil +} + +// ResourceAccess specifies an OAuth 2.0 permission scope or an app role that an application requires. The +// resourceAccess property of the RequiredResourceAccess type is a collection of ResourceAccess. +type ResourceAccess struct { + // AdditionalProperties - Unmatched properties from the message are deserialized this collection + AdditionalProperties map[string]interface{} `json:""` + // ID - The unique identifier for one of the OAuth2Permission or AppRole instances that the resource application exposes. + ID *string `json:"id,omitempty"` + // Type - Specifies whether the id property references an OAuth2Permission or an AppRole. Possible values are "scope" or "role". + Type *string `json:"type,omitempty"` +} + +// MarshalJSON is the custom marshaler for ResourceAccess. +func (ra ResourceAccess) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if ra.ID != nil { + objectMap["id"] = ra.ID + } + if ra.Type != nil { + objectMap["type"] = ra.Type + } + for k, v := range ra.AdditionalProperties { + objectMap[k] = v + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ResourceAccess struct. 
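+// Declaring a required permission (editor's sketch; both identifiers are
+// placeholders, and the type string follows the field documentation above):
+//
+//    access := RequiredResourceAccess{
+//        ResourceAppID: to.StringPtr("<appId of the target resource>"),
+//        ResourceAccess: &[]ResourceAccess{{
+//            ID:   to.StringPtr("<OAuth2Permission or AppRole id>"),
+//            Type: to.StringPtr("scope"), // or "role" for an AppRole, per the field documentation
+//        }},
+//    }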
+func (ra *ResourceAccess) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + default: + if v != nil { + var additionalProperties interface{} + err = json.Unmarshal(*v, &additionalProperties) + if err != nil { + return err + } + if ra.AdditionalProperties == nil { + ra.AdditionalProperties = make(map[string]interface{}) + } + ra.AdditionalProperties[k] = additionalProperties + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + ra.ID = &ID + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + ra.Type = &typeVar + } + } + } + + return nil +} + +// ServicePrincipal active Directory service principal information. +type ServicePrincipal struct { + autorest.Response `json:"-"` + // AccountEnabled - whether or not the service principal account is enabled + AccountEnabled *bool `json:"accountEnabled,omitempty"` + // AlternativeNames - alternative names + AlternativeNames *[]string `json:"alternativeNames,omitempty"` + // AppDisplayName - READ-ONLY; The display name exposed by the associated application. + AppDisplayName *string `json:"appDisplayName,omitempty"` + // AppID - The application ID. + AppID *string `json:"appId,omitempty"` + // AppOwnerTenantID - READ-ONLY + AppOwnerTenantID *string `json:"appOwnerTenantId,omitempty"` + // AppRoleAssignmentRequired - Specifies whether an AppRoleAssignment to a user or group is required before Azure AD will issue a user or access token to the application. + AppRoleAssignmentRequired *bool `json:"appRoleAssignmentRequired,omitempty"` + // AppRoles - The collection of application roles that an application may declare. These roles can be assigned to users, groups or service principals. + AppRoles *[]AppRole `json:"appRoles,omitempty"` + // DisplayName - The display name of the service principal. + DisplayName *string `json:"displayName,omitempty"` + // ErrorURL - A URL provided by the author of the associated application to report errors when using the application. + ErrorURL *string `json:"errorUrl,omitempty"` + // Homepage - The URL to the homepage of the associated application. + Homepage *string `json:"homepage,omitempty"` + // KeyCredentials - The collection of key credentials associated with the service principal. + KeyCredentials *[]KeyCredential `json:"keyCredentials,omitempty"` + // LogoutURL - A URL provided by the author of the associated application to logout + LogoutURL *string `json:"logoutUrl,omitempty"` + // Oauth2Permissions - READ-ONLY; The OAuth 2.0 permissions exposed by the associated application. + Oauth2Permissions *[]OAuth2Permission `json:"oauth2Permissions,omitempty"` + // PasswordCredentials - The collection of password credentials associated with the service principal. + PasswordCredentials *[]PasswordCredential `json:"passwordCredentials,omitempty"` + // PreferredTokenSigningKeyThumbprint - The thumbprint of preferred certificate to sign the token + PreferredTokenSigningKeyThumbprint *string `json:"preferredTokenSigningKeyThumbprint,omitempty"` + // PublisherName - The publisher's name of the associated application + PublisherName *string `json:"publisherName,omitempty"` + // ReplyUrls - The URLs that user tokens are sent to for sign in with the associated application. 
The redirect URIs that the oAuth 2.0 authorization code and access tokens are sent to for the associated application. + ReplyUrls *[]string `json:"replyUrls,omitempty"` + // SamlMetadataURL - The URL to the SAML metadata of the associated application + SamlMetadataURL *string `json:"samlMetadataUrl,omitempty"` + // ServicePrincipalNames - A collection of service principal names. + ServicePrincipalNames *[]string `json:"servicePrincipalNames,omitempty"` + // ServicePrincipalType - the type of the service principal + ServicePrincipalType *string `json:"servicePrincipalType,omitempty"` + // Tags - Optional list of tags that you can apply to your service principals. Not nullable. + Tags *[]string `json:"tags,omitempty"` + // AdditionalProperties - Unmatched properties from the message are deserialized this collection + AdditionalProperties map[string]interface{} `json:""` + // ObjectID - READ-ONLY; The object ID. + ObjectID *string `json:"objectId,omitempty"` + // DeletionTimestamp - READ-ONLY; The time at which the directory object was deleted. + DeletionTimestamp *date.Time `json:"deletionTimestamp,omitempty"` + // ObjectType - Possible values include: 'ObjectTypeDirectoryObject', 'ObjectTypeApplication', 'ObjectTypeGroup', 'ObjectTypeServicePrincipal', 'ObjectTypeUser' + ObjectType ObjectType `json:"objectType,omitempty"` +} + +// MarshalJSON is the custom marshaler for ServicePrincipal. +func (sp ServicePrincipal) MarshalJSON() ([]byte, error) { + sp.ObjectType = ObjectTypeServicePrincipal + objectMap := make(map[string]interface{}) + if sp.AccountEnabled != nil { + objectMap["accountEnabled"] = sp.AccountEnabled + } + if sp.AlternativeNames != nil { + objectMap["alternativeNames"] = sp.AlternativeNames + } + if sp.AppID != nil { + objectMap["appId"] = sp.AppID + } + if sp.AppRoleAssignmentRequired != nil { + objectMap["appRoleAssignmentRequired"] = sp.AppRoleAssignmentRequired + } + if sp.AppRoles != nil { + objectMap["appRoles"] = sp.AppRoles + } + if sp.DisplayName != nil { + objectMap["displayName"] = sp.DisplayName + } + if sp.ErrorURL != nil { + objectMap["errorUrl"] = sp.ErrorURL + } + if sp.Homepage != nil { + objectMap["homepage"] = sp.Homepage + } + if sp.KeyCredentials != nil { + objectMap["keyCredentials"] = sp.KeyCredentials + } + if sp.LogoutURL != nil { + objectMap["logoutUrl"] = sp.LogoutURL + } + if sp.PasswordCredentials != nil { + objectMap["passwordCredentials"] = sp.PasswordCredentials + } + if sp.PreferredTokenSigningKeyThumbprint != nil { + objectMap["preferredTokenSigningKeyThumbprint"] = sp.PreferredTokenSigningKeyThumbprint + } + if sp.PublisherName != nil { + objectMap["publisherName"] = sp.PublisherName + } + if sp.ReplyUrls != nil { + objectMap["replyUrls"] = sp.ReplyUrls + } + if sp.SamlMetadataURL != nil { + objectMap["samlMetadataUrl"] = sp.SamlMetadataURL + } + if sp.ServicePrincipalNames != nil { + objectMap["servicePrincipalNames"] = sp.ServicePrincipalNames + } + if sp.ServicePrincipalType != nil { + objectMap["servicePrincipalType"] = sp.ServicePrincipalType + } + if sp.Tags != nil { + objectMap["tags"] = sp.Tags + } + if sp.ObjectType != "" { + objectMap["objectType"] = sp.ObjectType + } + for k, v := range sp.AdditionalProperties { + objectMap[k] = v + } + return json.Marshal(objectMap) +} + +// AsApplication is the BasicDirectoryObject implementation for ServicePrincipal. +func (sp ServicePrincipal) AsApplication() (*Application, bool) { + return nil, false +} + +// AsADGroup is the BasicDirectoryObject implementation for ServicePrincipal. 
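+// Together, the As* methods act as a type switch over BasicDirectoryObject
+// values (editor's sketch; obj is any BasicDirectoryObject):
+//
+//    if sp, ok := obj.AsServicePrincipal(); ok {
+//        _ = sp // obj is a *ServicePrincipal
+//    }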
+func (sp ServicePrincipal) AsADGroup() (*ADGroup, bool) { + return nil, false +} + +// AsServicePrincipal is the BasicDirectoryObject implementation for ServicePrincipal. +func (sp ServicePrincipal) AsServicePrincipal() (*ServicePrincipal, bool) { + return &sp, true +} + +// AsUser is the BasicDirectoryObject implementation for ServicePrincipal. +func (sp ServicePrincipal) AsUser() (*User, bool) { + return nil, false +} + +// AsDirectoryObject is the BasicDirectoryObject implementation for ServicePrincipal. +func (sp ServicePrincipal) AsDirectoryObject() (*DirectoryObject, bool) { + return nil, false +} + +// AsBasicDirectoryObject is the BasicDirectoryObject implementation for ServicePrincipal. +func (sp ServicePrincipal) AsBasicDirectoryObject() (BasicDirectoryObject, bool) { + return &sp, true +} + +// UnmarshalJSON is the custom unmarshaler for ServicePrincipal struct. +func (sp *ServicePrincipal) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "accountEnabled": + if v != nil { + var accountEnabled bool + err = json.Unmarshal(*v, &accountEnabled) + if err != nil { + return err + } + sp.AccountEnabled = &accountEnabled + } + case "alternativeNames": + if v != nil { + var alternativeNames []string + err = json.Unmarshal(*v, &alternativeNames) + if err != nil { + return err + } + sp.AlternativeNames = &alternativeNames + } + case "appDisplayName": + if v != nil { + var appDisplayName string + err = json.Unmarshal(*v, &appDisplayName) + if err != nil { + return err + } + sp.AppDisplayName = &appDisplayName + } + case "appId": + if v != nil { + var appID string + err = json.Unmarshal(*v, &appID) + if err != nil { + return err + } + sp.AppID = &appID + } + case "appOwnerTenantId": + if v != nil { + var appOwnerTenantID string + err = json.Unmarshal(*v, &appOwnerTenantID) + if err != nil { + return err + } + sp.AppOwnerTenantID = &appOwnerTenantID + } + case "appRoleAssignmentRequired": + if v != nil { + var appRoleAssignmentRequired bool + err = json.Unmarshal(*v, &appRoleAssignmentRequired) + if err != nil { + return err + } + sp.AppRoleAssignmentRequired = &appRoleAssignmentRequired + } + case "appRoles": + if v != nil { + var appRoles []AppRole + err = json.Unmarshal(*v, &appRoles) + if err != nil { + return err + } + sp.AppRoles = &appRoles + } + case "displayName": + if v != nil { + var displayName string + err = json.Unmarshal(*v, &displayName) + if err != nil { + return err + } + sp.DisplayName = &displayName + } + case "errorUrl": + if v != nil { + var errorURL string + err = json.Unmarshal(*v, &errorURL) + if err != nil { + return err + } + sp.ErrorURL = &errorURL + } + case "homepage": + if v != nil { + var homepage string + err = json.Unmarshal(*v, &homepage) + if err != nil { + return err + } + sp.Homepage = &homepage + } + case "keyCredentials": + if v != nil { + var keyCredentials []KeyCredential + err = json.Unmarshal(*v, &keyCredentials) + if err != nil { + return err + } + sp.KeyCredentials = &keyCredentials + } + case "logoutUrl": + if v != nil { + var logoutURL string + err = json.Unmarshal(*v, &logoutURL) + if err != nil { + return err + } + sp.LogoutURL = &logoutURL + } + case "oauth2Permissions": + if v != nil { + var oauth2Permissions []OAuth2Permission + err = json.Unmarshal(*v, &oauth2Permissions) + if err != nil { + return err + } + sp.Oauth2Permissions = &oauth2Permissions + } + case "passwordCredentials": + if v != nil { + var 
passwordCredentials []PasswordCredential + err = json.Unmarshal(*v, &passwordCredentials) + if err != nil { + return err + } + sp.PasswordCredentials = &passwordCredentials + } + case "preferredTokenSigningKeyThumbprint": + if v != nil { + var preferredTokenSigningKeyThumbprint string + err = json.Unmarshal(*v, &preferredTokenSigningKeyThumbprint) + if err != nil { + return err + } + sp.PreferredTokenSigningKeyThumbprint = &preferredTokenSigningKeyThumbprint + } + case "publisherName": + if v != nil { + var publisherName string + err = json.Unmarshal(*v, &publisherName) + if err != nil { + return err + } + sp.PublisherName = &publisherName + } + case "replyUrls": + if v != nil { + var replyUrls []string + err = json.Unmarshal(*v, &replyUrls) + if err != nil { + return err + } + sp.ReplyUrls = &replyUrls + } + case "samlMetadataUrl": + if v != nil { + var samlMetadataURL string + err = json.Unmarshal(*v, &samlMetadataURL) + if err != nil { + return err + } + sp.SamlMetadataURL = &samlMetadataURL + } + case "servicePrincipalNames": + if v != nil { + var servicePrincipalNames []string + err = json.Unmarshal(*v, &servicePrincipalNames) + if err != nil { + return err + } + sp.ServicePrincipalNames = &servicePrincipalNames + } + case "servicePrincipalType": + if v != nil { + var servicePrincipalType string + err = json.Unmarshal(*v, &servicePrincipalType) + if err != nil { + return err + } + sp.ServicePrincipalType = &servicePrincipalType + } + case "tags": + if v != nil { + var tags []string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + sp.Tags = &tags + } + default: + if v != nil { + var additionalProperties interface{} + err = json.Unmarshal(*v, &additionalProperties) + if err != nil { + return err + } + if sp.AdditionalProperties == nil { + sp.AdditionalProperties = make(map[string]interface{}) + } + sp.AdditionalProperties[k] = additionalProperties + } + case "objectId": + if v != nil { + var objectID string + err = json.Unmarshal(*v, &objectID) + if err != nil { + return err + } + sp.ObjectID = &objectID + } + case "deletionTimestamp": + if v != nil { + var deletionTimestamp date.Time + err = json.Unmarshal(*v, &deletionTimestamp) + if err != nil { + return err + } + sp.DeletionTimestamp = &deletionTimestamp + } + case "objectType": + if v != nil { + var objectType ObjectType + err = json.Unmarshal(*v, &objectType) + if err != nil { + return err + } + sp.ObjectType = objectType + } + } + } + + return nil +} + +// ServicePrincipalBase active Directory service principal common properties shared among GET, POST and +// PATCH +type ServicePrincipalBase struct { + // AccountEnabled - whether or not the service principal account is enabled + AccountEnabled *bool `json:"accountEnabled,omitempty"` + // AppRoleAssignmentRequired - Specifies whether an AppRoleAssignment to a user or group is required before Azure AD will issue a user or access token to the application. + AppRoleAssignmentRequired *bool `json:"appRoleAssignmentRequired,omitempty"` + // KeyCredentials - The collection of key credentials associated with the service principal. + KeyCredentials *[]KeyCredential `json:"keyCredentials,omitempty"` + // PasswordCredentials - The collection of password credentials associated with the service principal. 
+ PasswordCredentials *[]PasswordCredential `json:"passwordCredentials,omitempty"` + // ServicePrincipalType - the type of the service principal + ServicePrincipalType *string `json:"servicePrincipalType,omitempty"` + // Tags - Optional list of tags that you can apply to your service principals. Not nullable. + Tags *[]string `json:"tags,omitempty"` +} + +// ServicePrincipalCreateParameters request parameters for creating a new service principal. +type ServicePrincipalCreateParameters struct { + // AppID - The application ID. + AppID *string `json:"appId,omitempty"` + // AccountEnabled - whether or not the service principal account is enabled + AccountEnabled *bool `json:"accountEnabled,omitempty"` + // AppRoleAssignmentRequired - Specifies whether an AppRoleAssignment to a user or group is required before Azure AD will issue a user or access token to the application. + AppRoleAssignmentRequired *bool `json:"appRoleAssignmentRequired,omitempty"` + // KeyCredentials - The collection of key credentials associated with the service principal. + KeyCredentials *[]KeyCredential `json:"keyCredentials,omitempty"` + // PasswordCredentials - The collection of password credentials associated with the service principal. + PasswordCredentials *[]PasswordCredential `json:"passwordCredentials,omitempty"` + // ServicePrincipalType - the type of the service principal + ServicePrincipalType *string `json:"servicePrincipalType,omitempty"` + // Tags - Optional list of tags that you can apply to your service principals. Not nullable. + Tags *[]string `json:"tags,omitempty"` +} + +// ServicePrincipalListResult server response for get tenant service principals API call. +type ServicePrincipalListResult struct { + autorest.Response `json:"-"` + // Value - the list of service principals. + Value *[]ServicePrincipal `json:"value,omitempty"` + // OdataNextLink - the URL to get the next set of results. + OdataNextLink *string `json:"odata.nextLink,omitempty"` +} + +// ServicePrincipalListResultIterator provides access to a complete listing of ServicePrincipal values. +type ServicePrincipalListResultIterator struct { + i int + page ServicePrincipalListResultPage +} + +// NextWithContext advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *ServicePrincipalListResultIterator) NextWithContext(ctx context.Context) (err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ServicePrincipalListResultIterator.NextWithContext") + defer func() { + sc := -1 + if iter.Response().Response.Response != nil { + sc = iter.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err = iter.page.NextWithContext(ctx) + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +// Deprecated: Use NextWithContext() instead. +func (iter *ServicePrincipalListResultIterator) Next() error { + return iter.NextWithContext(context.Background()) +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter ServicePrincipalListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. 
+func (iter ServicePrincipalListResultIterator) Response() ServicePrincipalListResult {
+	return iter.page.Response()
+}
+
+// Value returns the current value or a zero-initialized value if the
+// iterator has advanced beyond the end of the collection.
+func (iter ServicePrincipalListResultIterator) Value() ServicePrincipal {
+	if !iter.page.NotDone() {
+		return ServicePrincipal{}
+	}
+	return iter.page.Values()[iter.i]
+}
+
+// Creates a new instance of the ServicePrincipalListResultIterator type.
+func NewServicePrincipalListResultIterator(page ServicePrincipalListResultPage) ServicePrincipalListResultIterator {
+	return ServicePrincipalListResultIterator{page: page}
+}
+
+// IsEmpty returns true if the ListResult contains no values.
+func (splr ServicePrincipalListResult) IsEmpty() bool {
+	return splr.Value == nil || len(*splr.Value) == 0
+}
+
+// ServicePrincipalListResultPage contains a page of ServicePrincipal values.
+type ServicePrincipalListResultPage struct {
+	fn   func(context.Context, ServicePrincipalListResult) (ServicePrincipalListResult, error)
+	splr ServicePrincipalListResult
+}
+
+// NextWithContext advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+func (page *ServicePrincipalListResultPage) NextWithContext(ctx context.Context) (err error) {
+	if tracing.IsEnabled() {
+		ctx = tracing.StartSpan(ctx, fqdn+"/ServicePrincipalListResultPage.NextWithContext")
+		defer func() {
+			sc := -1
+			if page.Response().Response.Response != nil {
+				sc = page.Response().Response.Response.StatusCode
+			}
+			tracing.EndSpan(ctx, sc, err)
+		}()
+	}
+	next, err := page.fn(ctx, page.splr)
+	if err != nil {
+		return err
+	}
+	page.splr = next
+	return nil
+}
+
+// Next advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (page *ServicePrincipalListResultPage) Next() error {
+	return page.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the page enumeration should be started or is not yet complete.
+func (page ServicePrincipalListResultPage) NotDone() bool {
+	return !page.splr.IsEmpty()
+}
+
+// Response returns the raw server response from the last page request.
+func (page ServicePrincipalListResultPage) Response() ServicePrincipalListResult {
+	return page.splr
+}
+
+// Values returns the slice of values for the current page or nil if there are no values.
+func (page ServicePrincipalListResultPage) Values() []ServicePrincipal {
+	if page.splr.IsEmpty() {
+		return nil
+	}
+	return *page.splr.Value
+}
+
+// Creates a new instance of the ServicePrincipalListResultPage type.
+func NewServicePrincipalListResultPage(getNextPage func(context.Context, ServicePrincipalListResult) (ServicePrincipalListResult, error)) ServicePrincipalListResultPage {
+	return ServicePrincipalListResultPage{fn: getNextPage}
+}
+
+// ServicePrincipalObjectResult service Principal Object Result.
+type ServicePrincipalObjectResult struct {
+	autorest.Response `json:"-"`
+	// Value - The Object ID of the service principal with the specified application ID.
+	Value *string `json:"value,omitempty"`
+	// OdataMetadata - The URL representing edm equivalent.
+	OdataMetadata *string `json:"odata.metadata,omitempty"`
+}
+
+// ServicePrincipalUpdateParameters request parameters for updating an existing service principal.
+type ServicePrincipalUpdateParameters struct {
+	// AccountEnabled - whether or not the service principal account is enabled
+	AccountEnabled *bool `json:"accountEnabled,omitempty"`
+	// AppRoleAssignmentRequired - Specifies whether an AppRoleAssignment to a user or group is required before Azure AD will issue a user or access token to the application.
+	AppRoleAssignmentRequired *bool `json:"appRoleAssignmentRequired,omitempty"`
+	// KeyCredentials - The collection of key credentials associated with the service principal.
+	KeyCredentials *[]KeyCredential `json:"keyCredentials,omitempty"`
+	// PasswordCredentials - The collection of password credentials associated with the service principal.
+	PasswordCredentials *[]PasswordCredential `json:"passwordCredentials,omitempty"`
+	// ServicePrincipalType - the type of the service principal
+	ServicePrincipalType *string `json:"servicePrincipalType,omitempty"`
+	// Tags - Optional list of tags that you can apply to your service principals. Not nullable.
+	Tags *[]string `json:"tags,omitempty"`
+}
+
+// SignInName contains information about a sign-in name of a local account user in an Azure Active
+// Directory B2C tenant.
+type SignInName struct {
+	// AdditionalProperties - Unmatched properties from the message are deserialized to this collection
+	AdditionalProperties map[string]interface{} `json:""`
+	// Type - A string value that can be used to classify user sign-in types in your directory, such as 'emailAddress' or 'userName'.
+	Type *string `json:"type,omitempty"`
+	// Value - The sign-in used by the local account. Must be unique across the company/tenant. For example, 'johnc@example.com'.
+	Value *string `json:"value,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for SignInName.
+func (sin SignInName) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]interface{})
+	if sin.Type != nil {
+		objectMap["type"] = sin.Type
+	}
+	if sin.Value != nil {
+		objectMap["value"] = sin.Value
+	}
+	for k, v := range sin.AdditionalProperties {
+		objectMap[k] = v
+	}
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for SignInName struct.
+func (sin *SignInName) UnmarshalJSON(body []byte) error {
+	var m map[string]*json.RawMessage
+	err := json.Unmarshal(body, &m)
+	if err != nil {
+		return err
+	}
+	for k, v := range m {
+		switch k {
+		default:
+			if v != nil {
+				var additionalProperties interface{}
+				err = json.Unmarshal(*v, &additionalProperties)
+				if err != nil {
+					return err
+				}
+				if sin.AdditionalProperties == nil {
+					sin.AdditionalProperties = make(map[string]interface{})
+				}
+				sin.AdditionalProperties[k] = additionalProperties
+			}
+		case "type":
+			if v != nil {
+				var typeVar string
+				err = json.Unmarshal(*v, &typeVar)
+				if err != nil {
+					return err
+				}
+				sin.Type = &typeVar
+			}
+		case "value":
+			if v != nil {
+				var value string
+				err = json.Unmarshal(*v, &value)
+				if err != nil {
+					return err
+				}
+				sin.Value = &value
+			}
+		}
+	}
+
+	return nil
+}
+
+// User active Directory user information.
+type User struct {
+	autorest.Response `json:"-"`
+	// ImmutableID - This must be specified if you are using a federated domain for the user's userPrincipalName (UPN) property when creating a new user account. It is used to associate an on-premises Active Directory user account with their Azure AD user object.
+	ImmutableID *string `json:"immutableId,omitempty"`
+	// UsageLocation - A two letter country code (ISO standard 3166). Required for users that will be assigned licenses due to legal requirement to check for availability of services in countries. Examples include: "US", "JP", and "GB".
+	UsageLocation *string `json:"usageLocation,omitempty"`
+	// GivenName - The given name for the user.
+	GivenName *string `json:"givenName,omitempty"`
+	// Surname - The user's surname (family name or last name).
+	Surname *string `json:"surname,omitempty"`
+	// UserType - A string value that can be used to classify user types in your directory, such as 'Member' and 'Guest'. Possible values include: 'Member', 'Guest'
+	UserType UserType `json:"userType,omitempty"`
+	// AccountEnabled - Whether the account is enabled.
+	AccountEnabled *bool `json:"accountEnabled,omitempty"`
+	// DisplayName - The display name of the user.
+	DisplayName *string `json:"displayName,omitempty"`
+	// UserPrincipalName - The principal name of the user.
+	UserPrincipalName *string `json:"userPrincipalName,omitempty"`
+	// MailNickname - The mail alias for the user.
+	MailNickname *string `json:"mailNickname,omitempty"`
+	// Mail - The primary email address of the user.
+	Mail *string `json:"mail,omitempty"`
+	// SignInNames - The sign-in names of the user.
+	SignInNames *[]SignInName `json:"signInNames,omitempty"`
+	// AdditionalProperties - Unmatched properties from the message are deserialized to this collection
+	AdditionalProperties map[string]interface{} `json:""`
+	// ObjectID - READ-ONLY; The object ID.
+	ObjectID *string `json:"objectId,omitempty"`
+	// DeletionTimestamp - READ-ONLY; The time at which the directory object was deleted.
+	DeletionTimestamp *date.Time `json:"deletionTimestamp,omitempty"`
+	// ObjectType - Possible values include: 'ObjectTypeDirectoryObject', 'ObjectTypeApplication', 'ObjectTypeGroup', 'ObjectTypeServicePrincipal', 'ObjectTypeUser'
+	ObjectType ObjectType `json:"objectType,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for User.
+func (u User) MarshalJSON() ([]byte, error) {
+	u.ObjectType = ObjectTypeUser
+	objectMap := make(map[string]interface{})
+	if u.ImmutableID != nil {
+		objectMap["immutableId"] = u.ImmutableID
+	}
+	if u.UsageLocation != nil {
+		objectMap["usageLocation"] = u.UsageLocation
+	}
+	if u.GivenName != nil {
+		objectMap["givenName"] = u.GivenName
+	}
+	if u.Surname != nil {
+		objectMap["surname"] = u.Surname
+	}
+	if u.UserType != "" {
+		objectMap["userType"] = u.UserType
+	}
+	if u.AccountEnabled != nil {
+		objectMap["accountEnabled"] = u.AccountEnabled
+	}
+	if u.DisplayName != nil {
+		objectMap["displayName"] = u.DisplayName
+	}
+	if u.UserPrincipalName != nil {
+		objectMap["userPrincipalName"] = u.UserPrincipalName
+	}
+	if u.MailNickname != nil {
+		objectMap["mailNickname"] = u.MailNickname
+	}
+	if u.Mail != nil {
+		objectMap["mail"] = u.Mail
+	}
+	if u.SignInNames != nil {
+		objectMap["signInNames"] = u.SignInNames
+	}
+	if u.ObjectType != "" {
+		objectMap["objectType"] = u.ObjectType
+	}
+	for k, v := range u.AdditionalProperties {
+		objectMap[k] = v
+	}
+	return json.Marshal(objectMap)
+}
+
+// AsApplication is the BasicDirectoryObject implementation for User.
+func (u User) AsApplication() (*Application, bool) {
+	return nil, false
+}
+
+// AsADGroup is the BasicDirectoryObject implementation for User.
+func (u User) AsADGroup() (*ADGroup, bool) {
+	return nil, false
+}
+
+// AsServicePrincipal is the BasicDirectoryObject implementation for User.
+func (u User) AsServicePrincipal() (*ServicePrincipal, bool) {
+	return nil, false
+}
+
+// AsUser is the BasicDirectoryObject implementation for User.
+func (u User) AsUser() (*User, bool) {
+	return &u, true
+}
+
+// AsDirectoryObject is the BasicDirectoryObject implementation for User.
+func (u User) AsDirectoryObject() (*DirectoryObject, bool) {
+	return nil, false
+}
+
+// AsBasicDirectoryObject is the BasicDirectoryObject implementation for User.
+func (u User) AsBasicDirectoryObject() (BasicDirectoryObject, bool) {
+	return &u, true
+}
+
+// UnmarshalJSON is the custom unmarshaler for User struct.
+func (u *User) UnmarshalJSON(body []byte) error {
+	var m map[string]*json.RawMessage
+	err := json.Unmarshal(body, &m)
+	if err != nil {
+		return err
+	}
+	for k, v := range m {
+		switch k {
+		case "immutableId":
+			if v != nil {
+				var immutableID string
+				err = json.Unmarshal(*v, &immutableID)
+				if err != nil {
+					return err
+				}
+				u.ImmutableID = &immutableID
+			}
+		case "usageLocation":
+			if v != nil {
+				var usageLocation string
+				err = json.Unmarshal(*v, &usageLocation)
+				if err != nil {
+					return err
+				}
+				u.UsageLocation = &usageLocation
+			}
+		case "givenName":
+			if v != nil {
+				var givenName string
+				err = json.Unmarshal(*v, &givenName)
+				if err != nil {
+					return err
+				}
+				u.GivenName = &givenName
+			}
+		case "surname":
+			if v != nil {
+				var surname string
+				err = json.Unmarshal(*v, &surname)
+				if err != nil {
+					return err
+				}
+				u.Surname = &surname
+			}
+		case "userType":
+			if v != nil {
+				var userType UserType
+				err = json.Unmarshal(*v, &userType)
+				if err != nil {
+					return err
+				}
+				u.UserType = userType
+			}
+		case "accountEnabled":
+			if v != nil {
+				var accountEnabled bool
+				err = json.Unmarshal(*v, &accountEnabled)
+				if err != nil {
+					return err
+				}
+				u.AccountEnabled = &accountEnabled
+			}
+		case "displayName":
+			if v != nil {
+				var displayName string
+				err = json.Unmarshal(*v, &displayName)
+				if err != nil {
+					return err
+				}
+				u.DisplayName = &displayName
+			}
+		case "userPrincipalName":
+			if v != nil {
+				var userPrincipalName string
+				err = json.Unmarshal(*v, &userPrincipalName)
+				if err != nil {
+					return err
+				}
+				u.UserPrincipalName = &userPrincipalName
+			}
+		case "mailNickname":
+			if v != nil {
+				var mailNickname string
+				err = json.Unmarshal(*v, &mailNickname)
+				if err != nil {
+					return err
+				}
+				u.MailNickname = &mailNickname
+			}
+		case "mail":
+			if v != nil {
+				var mailVar string
+				err = json.Unmarshal(*v, &mailVar)
+				if err != nil {
+					return err
+				}
+				u.Mail = &mailVar
+			}
+		case "signInNames":
+			if v != nil {
+				var signInNames []SignInName
+				err = json.Unmarshal(*v, &signInNames)
+				if err != nil {
+					return err
+				}
+				u.SignInNames = &signInNames
+			}
+		default:
+			if v != nil {
+				var additionalProperties interface{}
+				err = json.Unmarshal(*v, &additionalProperties)
+				if err != nil {
+					return err
+				}
+				if u.AdditionalProperties == nil {
+					u.AdditionalProperties = make(map[string]interface{})
+				}
+				u.AdditionalProperties[k] = additionalProperties
+			}
+		case "objectId":
+			if v != nil {
+				var objectID string
+				err = json.Unmarshal(*v, &objectID)
+				if err != nil {
+					return err
+				}
+				u.ObjectID = &objectID
+			}
+		case "deletionTimestamp":
+			if v != nil {
+				var deletionTimestamp date.Time
+				err = json.Unmarshal(*v, &deletionTimestamp)
+				if err != nil {
+					return err
+				}
+				u.DeletionTimestamp = &deletionTimestamp
+			}
+		case "objectType":
+			if v != nil {
+				var objectType ObjectType
+				err = json.Unmarshal(*v, &objectType)
+				if err != nil {
+					return err
+				}
+				u.ObjectType = objectType
+			}
+		}
+	}
+
+	return nil
+}
+
+// UserBase ...
+type UserBase struct {
+	// AdditionalProperties - Unmatched properties from the message are deserialized to this collection
+	AdditionalProperties map[string]interface{} `json:""`
+	// ImmutableID - This must be specified if you are using a federated domain for the user's userPrincipalName (UPN) property when creating a new user account. It is used to associate an on-premises Active Directory user account with their Azure AD user object.
+	ImmutableID *string `json:"immutableId,omitempty"`
+	// UsageLocation - A two letter country code (ISO standard 3166). Required for users that will be assigned licenses due to legal requirement to check for availability of services in countries. Examples include: "US", "JP", and "GB".
+	UsageLocation *string `json:"usageLocation,omitempty"`
+	// GivenName - The given name for the user.
+	GivenName *string `json:"givenName,omitempty"`
+	// Surname - The user's surname (family name or last name).
+	Surname *string `json:"surname,omitempty"`
+	// UserType - A string value that can be used to classify user types in your directory, such as 'Member' and 'Guest'. Possible values include: 'Member', 'Guest'
+	UserType UserType `json:"userType,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for UserBase.
+func (ub UserBase) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]interface{})
+	if ub.ImmutableID != nil {
+		objectMap["immutableId"] = ub.ImmutableID
+	}
+	if ub.UsageLocation != nil {
+		objectMap["usageLocation"] = ub.UsageLocation
+	}
+	if ub.GivenName != nil {
+		objectMap["givenName"] = ub.GivenName
+	}
+	if ub.Surname != nil {
+		objectMap["surname"] = ub.Surname
+	}
+	if ub.UserType != "" {
+		objectMap["userType"] = ub.UserType
+	}
+	for k, v := range ub.AdditionalProperties {
+		objectMap[k] = v
+	}
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for UserBase struct.
+func (ub *UserBase) UnmarshalJSON(body []byte) error {
+	var m map[string]*json.RawMessage
+	err := json.Unmarshal(body, &m)
+	if err != nil {
+		return err
+	}
+	for k, v := range m {
+		switch k {
+		default:
+			if v != nil {
+				var additionalProperties interface{}
+				err = json.Unmarshal(*v, &additionalProperties)
+				if err != nil {
+					return err
+				}
+				if ub.AdditionalProperties == nil {
+					ub.AdditionalProperties = make(map[string]interface{})
+				}
+				ub.AdditionalProperties[k] = additionalProperties
+			}
+		case "immutableId":
+			if v != nil {
+				var immutableID string
+				err = json.Unmarshal(*v, &immutableID)
+				if err != nil {
+					return err
+				}
+				ub.ImmutableID = &immutableID
+			}
+		case "usageLocation":
+			if v != nil {
+				var usageLocation string
+				err = json.Unmarshal(*v, &usageLocation)
+				if err != nil {
+					return err
+				}
+				ub.UsageLocation = &usageLocation
+			}
+		case "givenName":
+			if v != nil {
+				var givenName string
+				err = json.Unmarshal(*v, &givenName)
+				if err != nil {
+					return err
+				}
+				ub.GivenName = &givenName
+			}
+		case "surname":
+			if v != nil {
+				var surname string
+				err = json.Unmarshal(*v, &surname)
+				if err != nil {
+					return err
+				}
+				ub.Surname = &surname
+			}
+		case "userType":
+			if v != nil {
+				var userType UserType
+				err = json.Unmarshal(*v, &userType)
+				if err != nil {
+					return err
+				}
+				ub.UserType = userType
+			}
+		}
+	}
+
+	return nil
+}
+
+// UserCreateParameters request parameters for creating a new work or school account user.
+type UserCreateParameters struct {
+	// AccountEnabled - Whether the account is enabled.
+	AccountEnabled *bool `json:"accountEnabled,omitempty"`
+	// DisplayName - The display name of the user.
+	DisplayName *string `json:"displayName,omitempty"`
+	// PasswordProfile - Password Profile
+	PasswordProfile *PasswordProfile `json:"passwordProfile,omitempty"`
+	// UserPrincipalName - The user principal name (someuser@contoso.com). It must contain one of the verified domains for the tenant.
+	UserPrincipalName *string `json:"userPrincipalName,omitempty"`
+	// MailNickname - The mail alias for the user.
+	MailNickname *string `json:"mailNickname,omitempty"`
+	// Mail - The primary email address of the user.
+	Mail *string `json:"mail,omitempty"`
+	// AdditionalProperties - Unmatched properties from the message are deserialized to this collection
+	AdditionalProperties map[string]interface{} `json:""`
+	// ImmutableID - This must be specified if you are using a federated domain for the user's userPrincipalName (UPN) property when creating a new user account. It is used to associate an on-premises Active Directory user account with their Azure AD user object.
+	ImmutableID *string `json:"immutableId,omitempty"`
+	// UsageLocation - A two letter country code (ISO standard 3166). Required for users that will be assigned licenses due to legal requirement to check for availability of services in countries. Examples include: "US", "JP", and "GB".
+	UsageLocation *string `json:"usageLocation,omitempty"`
+	// GivenName - The given name for the user.
+	GivenName *string `json:"givenName,omitempty"`
+	// Surname - The user's surname (family name or last name).
+	Surname *string `json:"surname,omitempty"`
+	// UserType - A string value that can be used to classify user types in your directory, such as 'Member' and 'Guest'. Possible values include: 'Member', 'Guest'
+	UserType UserType `json:"userType,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for UserCreateParameters.
+func (ucp UserCreateParameters) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]interface{})
+	if ucp.AccountEnabled != nil {
+		objectMap["accountEnabled"] = ucp.AccountEnabled
+	}
+	if ucp.DisplayName != nil {
+		objectMap["displayName"] = ucp.DisplayName
+	}
+	if ucp.PasswordProfile != nil {
+		objectMap["passwordProfile"] = ucp.PasswordProfile
+	}
+	if ucp.UserPrincipalName != nil {
+		objectMap["userPrincipalName"] = ucp.UserPrincipalName
+	}
+	if ucp.MailNickname != nil {
+		objectMap["mailNickname"] = ucp.MailNickname
+	}
+	if ucp.Mail != nil {
+		objectMap["mail"] = ucp.Mail
+	}
+	if ucp.ImmutableID != nil {
+		objectMap["immutableId"] = ucp.ImmutableID
+	}
+	if ucp.UsageLocation != nil {
+		objectMap["usageLocation"] = ucp.UsageLocation
+	}
+	if ucp.GivenName != nil {
+		objectMap["givenName"] = ucp.GivenName
+	}
+	if ucp.Surname != nil {
+		objectMap["surname"] = ucp.Surname
+	}
+	if ucp.UserType != "" {
+		objectMap["userType"] = ucp.UserType
+	}
+	for k, v := range ucp.AdditionalProperties {
+		objectMap[k] = v
+	}
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for UserCreateParameters struct.
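+//
+// Illustrative behavior sketch (the JSON payload below is hypothetical): any keys
+// that do not match a declared field are captured in AdditionalProperties, so a
+// body such as
+//
+//	{"displayName": "Jane", "customExtension_abc": true}
+//
+// would set DisplayName and record "customExtension_abc" in AdditionalProperties.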
+func (ucp *UserCreateParameters) UnmarshalJSON(body []byte) error {
+	var m map[string]*json.RawMessage
+	err := json.Unmarshal(body, &m)
+	if err != nil {
+		return err
+	}
+	for k, v := range m {
+		switch k {
+		case "accountEnabled":
+			if v != nil {
+				var accountEnabled bool
+				err = json.Unmarshal(*v, &accountEnabled)
+				if err != nil {
+					return err
+				}
+				ucp.AccountEnabled = &accountEnabled
+			}
+		case "displayName":
+			if v != nil {
+				var displayName string
+				err = json.Unmarshal(*v, &displayName)
+				if err != nil {
+					return err
+				}
+				ucp.DisplayName = &displayName
+			}
+		case "passwordProfile":
+			if v != nil {
+				var passwordProfile PasswordProfile
+				err = json.Unmarshal(*v, &passwordProfile)
+				if err != nil {
+					return err
+				}
+				ucp.PasswordProfile = &passwordProfile
+			}
+		case "userPrincipalName":
+			if v != nil {
+				var userPrincipalName string
+				err = json.Unmarshal(*v, &userPrincipalName)
+				if err != nil {
+					return err
+				}
+				ucp.UserPrincipalName = &userPrincipalName
+			}
+		case "mailNickname":
+			if v != nil {
+				var mailNickname string
+				err = json.Unmarshal(*v, &mailNickname)
+				if err != nil {
+					return err
+				}
+				ucp.MailNickname = &mailNickname
+			}
+		case "mail":
+			if v != nil {
+				var mailVar string
+				err = json.Unmarshal(*v, &mailVar)
+				if err != nil {
+					return err
+				}
+				ucp.Mail = &mailVar
+			}
+		default:
+			if v != nil {
+				var additionalProperties interface{}
+				err = json.Unmarshal(*v, &additionalProperties)
+				if err != nil {
+					return err
+				}
+				if ucp.AdditionalProperties == nil {
+					ucp.AdditionalProperties = make(map[string]interface{})
+				}
+				ucp.AdditionalProperties[k] = additionalProperties
+			}
+		case "immutableId":
+			if v != nil {
+				var immutableID string
+				err = json.Unmarshal(*v, &immutableID)
+				if err != nil {
+					return err
+				}
+				ucp.ImmutableID = &immutableID
+			}
+		case "usageLocation":
+			if v != nil {
+				var usageLocation string
+				err = json.Unmarshal(*v, &usageLocation)
+				if err != nil {
+					return err
+				}
+				ucp.UsageLocation = &usageLocation
+			}
+		case "givenName":
+			if v != nil {
+				var givenName string
+				err = json.Unmarshal(*v, &givenName)
+				if err != nil {
+					return err
+				}
+				ucp.GivenName = &givenName
+			}
+		case "surname":
+			if v != nil {
+				var surname string
+				err = json.Unmarshal(*v, &surname)
+				if err != nil {
+					return err
+				}
+				ucp.Surname = &surname
+			}
+		case "userType":
+			if v != nil {
+				var userType UserType
+				err = json.Unmarshal(*v, &userType)
+				if err != nil {
+					return err
+				}
+				ucp.UserType = userType
+			}
+		}
+	}
+
+	return nil
+}
+
+// UserGetMemberGroupsParameters request parameters for GetMemberGroups API call.
+type UserGetMemberGroupsParameters struct {
+	// AdditionalProperties - Unmatched properties from the message are deserialized to this collection
+	AdditionalProperties map[string]interface{} `json:""`
+	// SecurityEnabledOnly - If true, only membership in security-enabled groups should be checked. Otherwise, membership in all groups should be checked.
+	SecurityEnabledOnly *bool `json:"securityEnabledOnly,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for UserGetMemberGroupsParameters.
+func (ugmgp UserGetMemberGroupsParameters) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]interface{})
+	if ugmgp.SecurityEnabledOnly != nil {
+		objectMap["securityEnabledOnly"] = ugmgp.SecurityEnabledOnly
+	}
+	for k, v := range ugmgp.AdditionalProperties {
+		objectMap[k] = v
+	}
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for UserGetMemberGroupsParameters struct.
+func (ugmgp *UserGetMemberGroupsParameters) UnmarshalJSON(body []byte) error {
+	var m map[string]*json.RawMessage
+	err := json.Unmarshal(body, &m)
+	if err != nil {
+		return err
+	}
+	for k, v := range m {
+		switch k {
+		default:
+			if v != nil {
+				var additionalProperties interface{}
+				err = json.Unmarshal(*v, &additionalProperties)
+				if err != nil {
+					return err
+				}
+				if ugmgp.AdditionalProperties == nil {
+					ugmgp.AdditionalProperties = make(map[string]interface{})
+				}
+				ugmgp.AdditionalProperties[k] = additionalProperties
+			}
+		case "securityEnabledOnly":
+			if v != nil {
+				var securityEnabledOnly bool
+				err = json.Unmarshal(*v, &securityEnabledOnly)
+				if err != nil {
+					return err
+				}
+				ugmgp.SecurityEnabledOnly = &securityEnabledOnly
+			}
+		}
+	}
+
+	return nil
+}
+
+// UserGetMemberGroupsResult server response for GetMemberGroups API call.
+type UserGetMemberGroupsResult struct {
+	autorest.Response `json:"-"`
+	// Value - A collection of group IDs of which the user is a member.
+	Value *[]string `json:"value,omitempty"`
+}
+
+// UserListResult server response for Get tenant users API call.
+type UserListResult struct {
+	autorest.Response `json:"-"`
+	// Value - the list of users.
+	Value *[]User `json:"value,omitempty"`
+	// OdataNextLink - The URL to get the next set of results.
+	OdataNextLink *string `json:"odata.nextLink,omitempty"`
+}
+
+// UserListResultIterator provides access to a complete listing of User values.
+type UserListResultIterator struct {
+	i    int
+	page UserListResultPage
+}
+
+// NextWithContext advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+func (iter *UserListResultIterator) NextWithContext(ctx context.Context) (err error) {
+	if tracing.IsEnabled() {
+		ctx = tracing.StartSpan(ctx, fqdn+"/UserListResultIterator.NextWithContext")
+		defer func() {
+			sc := -1
+			if iter.Response().Response.Response != nil {
+				sc = iter.Response().Response.Response.StatusCode
+			}
+			tracing.EndSpan(ctx, sc, err)
+		}()
+	}
+	iter.i++
+	if iter.i < len(iter.page.Values()) {
+		return nil
+	}
+	err = iter.page.NextWithContext(ctx)
+	if err != nil {
+		iter.i--
+		return err
+	}
+	iter.i = 0
+	return nil
+}
+
+// Next advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (iter *UserListResultIterator) Next() error {
+	return iter.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the enumeration should be started or is not yet complete.
+func (iter UserListResultIterator) NotDone() bool {
+	return iter.page.NotDone() && iter.i < len(iter.page.Values())
+}
+
+// Response returns the raw server response from the last page request.
+func (iter UserListResultIterator) Response() UserListResult {
+	return iter.page.Response()
+}
+
+// Value returns the current value or a zero-initialized value if the
+// iterator has advanced beyond the end of the collection.
+func (iter UserListResultIterator) Value() User {
+	if !iter.page.NotDone() {
+		return User{}
+	}
+	return iter.page.Values()[iter.i]
+}
+
+// Creates a new instance of the UserListResultIterator type.
+func NewUserListResultIterator(page UserListResultPage) UserListResultIterator {
+	return UserListResultIterator{page: page}
+}
+
+// IsEmpty returns true if the ListResult contains no values.
+func (ulr UserListResult) IsEmpty() bool {
+	return ulr.Value == nil || len(*ulr.Value) == 0
+}
+
+// UserListResultPage contains a page of User values.
+type UserListResultPage struct {
+	fn  func(context.Context, UserListResult) (UserListResult, error)
+	ulr UserListResult
+}
+
+// NextWithContext advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+func (page *UserListResultPage) NextWithContext(ctx context.Context) (err error) {
+	if tracing.IsEnabled() {
+		ctx = tracing.StartSpan(ctx, fqdn+"/UserListResultPage.NextWithContext")
+		defer func() {
+			sc := -1
+			if page.Response().Response.Response != nil {
+				sc = page.Response().Response.Response.StatusCode
+			}
+			tracing.EndSpan(ctx, sc, err)
+		}()
+	}
+	next, err := page.fn(ctx, page.ulr)
+	if err != nil {
+		return err
+	}
+	page.ulr = next
+	return nil
+}
+
+// Next advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (page *UserListResultPage) Next() error {
+	return page.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the page enumeration should be started or is not yet complete.
+func (page UserListResultPage) NotDone() bool {
+	return !page.ulr.IsEmpty()
+}
+
+// Response returns the raw server response from the last page request.
+func (page UserListResultPage) Response() UserListResult {
+	return page.ulr
+}
+
+// Values returns the slice of values for the current page or nil if there are no values.
+func (page UserListResultPage) Values() []User {
+	if page.ulr.IsEmpty() {
+		return nil
+	}
+	return *page.ulr.Value
+}
+
+// Creates a new instance of the UserListResultPage type.
+func NewUserListResultPage(getNextPage func(context.Context, UserListResult) (UserListResult, error)) UserListResultPage {
+	return UserListResultPage{fn: getNextPage}
+}
+
+// UserUpdateParameters request parameters for updating an existing work or school account user.
+type UserUpdateParameters struct {
+	// AccountEnabled - Whether the account is enabled.
+	AccountEnabled *bool `json:"accountEnabled,omitempty"`
+	// DisplayName - The display name of the user.
+	DisplayName *string `json:"displayName,omitempty"`
+	// PasswordProfile - The password profile of the user.
+	PasswordProfile *PasswordProfile `json:"passwordProfile,omitempty"`
+	// UserPrincipalName - The user principal name (someuser@contoso.com). It must contain one of the verified domains for the tenant.
+	UserPrincipalName *string `json:"userPrincipalName,omitempty"`
+	// MailNickname - The mail alias for the user.
+	MailNickname *string `json:"mailNickname,omitempty"`
+	// AdditionalProperties - Unmatched properties from the message are deserialized to this collection
+	AdditionalProperties map[string]interface{} `json:""`
+	// ImmutableID - This must be specified if you are using a federated domain for the user's userPrincipalName (UPN) property when creating a new user account. It is used to associate an on-premises Active Directory user account with their Azure AD user object.
+	ImmutableID *string `json:"immutableId,omitempty"`
+	// UsageLocation - A two letter country code (ISO standard 3166). Required for users that will be assigned licenses due to legal requirement to check for availability of services in countries. Examples include: "US", "JP", and "GB".
+	UsageLocation *string `json:"usageLocation,omitempty"`
+	// GivenName - The given name for the user.
+	GivenName *string `json:"givenName,omitempty"`
+	// Surname - The user's surname (family name or last name).
+	Surname *string `json:"surname,omitempty"`
+	// UserType - A string value that can be used to classify user types in your directory, such as 'Member' and 'Guest'. Possible values include: 'Member', 'Guest'
+	UserType UserType `json:"userType,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for UserUpdateParameters.
+func (uup UserUpdateParameters) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]interface{})
+	if uup.AccountEnabled != nil {
+		objectMap["accountEnabled"] = uup.AccountEnabled
+	}
+	if uup.DisplayName != nil {
+		objectMap["displayName"] = uup.DisplayName
+	}
+	if uup.PasswordProfile != nil {
+		objectMap["passwordProfile"] = uup.PasswordProfile
+	}
+	if uup.UserPrincipalName != nil {
+		objectMap["userPrincipalName"] = uup.UserPrincipalName
+	}
+	if uup.MailNickname != nil {
+		objectMap["mailNickname"] = uup.MailNickname
+	}
+	if uup.ImmutableID != nil {
+		objectMap["immutableId"] = uup.ImmutableID
+	}
+	if uup.UsageLocation != nil {
+		objectMap["usageLocation"] = uup.UsageLocation
+	}
+	if uup.GivenName != nil {
+		objectMap["givenName"] = uup.GivenName
+	}
+	if uup.Surname != nil {
+		objectMap["surname"] = uup.Surname
+	}
+	if uup.UserType != "" {
+		objectMap["userType"] = uup.UserType
+	}
+	for k, v := range uup.AdditionalProperties {
+		objectMap[k] = v
+	}
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for UserUpdateParameters struct.
+func (uup *UserUpdateParameters) UnmarshalJSON(body []byte) error {
+	var m map[string]*json.RawMessage
+	err := json.Unmarshal(body, &m)
+	if err != nil {
+		return err
+	}
+	for k, v := range m {
+		switch k {
+		case "accountEnabled":
+			if v != nil {
+				var accountEnabled bool
+				err = json.Unmarshal(*v, &accountEnabled)
+				if err != nil {
+					return err
+				}
+				uup.AccountEnabled = &accountEnabled
+			}
+		case "displayName":
+			if v != nil {
+				var displayName string
+				err = json.Unmarshal(*v, &displayName)
+				if err != nil {
+					return err
+				}
+				uup.DisplayName = &displayName
+			}
+		case "passwordProfile":
+			if v != nil {
+				var passwordProfile PasswordProfile
+				err = json.Unmarshal(*v, &passwordProfile)
+				if err != nil {
+					return err
+				}
+				uup.PasswordProfile = &passwordProfile
+			}
+		case "userPrincipalName":
+			if v != nil {
+				var userPrincipalName string
+				err = json.Unmarshal(*v, &userPrincipalName)
+				if err != nil {
+					return err
+				}
+				uup.UserPrincipalName = &userPrincipalName
+			}
+		case "mailNickname":
+			if v != nil {
+				var mailNickname string
+				err = json.Unmarshal(*v, &mailNickname)
+				if err != nil {
+					return err
+				}
+				uup.MailNickname = &mailNickname
+			}
+		default:
+			if v != nil {
+				var additionalProperties interface{}
+				err = json.Unmarshal(*v, &additionalProperties)
+				if err != nil {
+					return err
+				}
+				if uup.AdditionalProperties == nil {
+					uup.AdditionalProperties = make(map[string]interface{})
+				}
+				uup.AdditionalProperties[k] = additionalProperties
+			}
+		case "immutableId":
+			if v != nil {
+				var immutableID string
+				err = json.Unmarshal(*v, &immutableID)
+				if err != nil {
+					return err
+				}
+				uup.ImmutableID = &immutableID
+			}
+		case "usageLocation":
+			if v != nil {
+				var usageLocation string
+				err = json.Unmarshal(*v, &usageLocation)
+				if err != nil {
+					return err
+				}
+				uup.UsageLocation = &usageLocation
+			}
+		case "givenName":
+			if v != nil {
+				var givenName string
+				err = json.Unmarshal(*v, &givenName)
+				if err != nil {
+					return err
+				}
+				uup.GivenName = &givenName
+			}
+		case "surname":
+			if v != nil {
+				var surname string
+				err = json.Unmarshal(*v, &surname)
+				if err != nil {
+					return err
+				}
+				uup.Surname = &surname
+			}
+		case "userType":
+			if v != nil {
+				var userType UserType
+				err = json.Unmarshal(*v, &userType)
+				if err != nil {
+					return err
+				}
+				uup.UserType = userType
+			}
+		}
+	}
+
+	return nil
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/oauth2permissiongrant.go b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/oauth2permissiongrant.go
new file mode 100644
index 000000000..04ddbb3e8
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/oauth2permissiongrant.go
@@ -0,0 +1,369 @@
+package graphrbac
+
+// Copyright (c) Microsoft and contributors. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+	"context"
+	"github.com/Azure/go-autorest/autorest"
+	"github.com/Azure/go-autorest/autorest/azure"
+	"github.com/Azure/go-autorest/autorest/to"
+	"github.com/Azure/go-autorest/tracing"
+	"net/http"
+)
+
+// OAuth2PermissionGrantClient is the Graph RBAC Management Client
+type OAuth2PermissionGrantClient struct {
+	BaseClient
+}
+
+// NewOAuth2PermissionGrantClient creates an instance of the OAuth2PermissionGrantClient client.
+func NewOAuth2PermissionGrantClient(tenantID string) OAuth2PermissionGrantClient {
+	return NewOAuth2PermissionGrantClientWithBaseURI(DefaultBaseURI, tenantID)
+}
+
+// NewOAuth2PermissionGrantClientWithBaseURI creates an instance of the OAuth2PermissionGrantClient client.
+func NewOAuth2PermissionGrantClientWithBaseURI(baseURI string, tenantID string) OAuth2PermissionGrantClient {
+	return OAuth2PermissionGrantClient{NewWithBaseURI(baseURI, tenantID)}
+}
+
+// Create grants OAuth2 permissions for the relevant resource Ids of an app.
+// Parameters:
+// body - the relevant app Service Principal Object Id and the Service Principal Object Id you want to grant.
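+//
+// Illustrative usage sketch (the tenant domain is a hypothetical placeholder, and
+// the authorizer and body values are assumed to be prepared by the caller):
+//
+//	client := graphrbac.NewOAuth2PermissionGrantClient("contoso.onmicrosoft.com")
+//	client.Authorizer = authorizer
+//	grant, err := client.Create(ctx, body) // body is a *graphrbac.OAuth2PermissionGrant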
+func (client OAuth2PermissionGrantClient) Create(ctx context.Context, body *OAuth2PermissionGrant) (result OAuth2PermissionGrant, err error) {
+	if tracing.IsEnabled() {
+		ctx = tracing.StartSpan(ctx, fqdn+"/OAuth2PermissionGrantClient.Create")
+		defer func() {
+			sc := -1
+			if result.Response.Response != nil {
+				sc = result.Response.Response.StatusCode
+			}
+			tracing.EndSpan(ctx, sc, err)
+		}()
+	}
+	req, err := client.CreatePreparer(ctx, body)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.OAuth2PermissionGrantClient", "Create", nil, "Failure preparing request")
+		return
+	}
+
+	resp, err := client.CreateSender(req)
+	if err != nil {
+		result.Response = autorest.Response{Response: resp}
+		err = autorest.NewErrorWithError(err, "graphrbac.OAuth2PermissionGrantClient", "Create", resp, "Failure sending request")
+		return
+	}
+
+	result, err = client.CreateResponder(resp)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.OAuth2PermissionGrantClient", "Create", resp, "Failure responding to request")
+	}
+
+	return
+}
+
+// CreatePreparer prepares the Create request.
+func (client OAuth2PermissionGrantClient) CreatePreparer(ctx context.Context, body *OAuth2PermissionGrant) (*http.Request, error) {
+	pathParameters := map[string]interface{}{
+		"tenantID": autorest.Encode("path", client.TenantID),
+	}
+
+	const APIVersion = "1.6"
+	queryParameters := map[string]interface{}{
+		"api-version": APIVersion,
+	}
+
+	preparer := autorest.CreatePreparer(
+		autorest.AsContentType("application/json; charset=utf-8"),
+		autorest.AsPost(),
+		autorest.WithBaseURL(client.BaseURI),
+		autorest.WithPathParameters("/{tenantID}/oauth2PermissionGrants", pathParameters),
+		autorest.WithQueryParameters(queryParameters))
+	if body != nil {
+		preparer = autorest.DecoratePreparer(preparer,
+			autorest.WithJSON(body))
+	}
+	return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// CreateSender sends the Create request. The method will close the
+// http.Response Body if it receives an error.
+func (client OAuth2PermissionGrantClient) CreateSender(req *http.Request) (*http.Response, error) {
+	sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+	return autorest.SendWithSender(client, req, sd...)
+}
+
+// CreateResponder handles the response to the Create request. The method always
+// closes the http.Response Body.
+func (client OAuth2PermissionGrantClient) CreateResponder(resp *http.Response) (result OAuth2PermissionGrant, err error) {
+	err = autorest.Respond(
+		resp,
+		client.ByInspecting(),
+		azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated),
+		autorest.ByUnmarshallingJSON(&result),
+		autorest.ByClosing())
+	result.Response = autorest.Response{Response: resp}
+	return
+}
+
+// Delete deletes an OAuth2 permission grant for the relevant resource Ids of an app.
+// Parameters:
+// objectID - the object ID of a permission grant.
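+//
+// Illustrative usage sketch (the object ID below is a hypothetical placeholder):
+//
+//	_, err := client.Delete(ctx, "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee")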
+func (client OAuth2PermissionGrantClient) Delete(ctx context.Context, objectID string) (result autorest.Response, err error) {
+	if tracing.IsEnabled() {
+		ctx = tracing.StartSpan(ctx, fqdn+"/OAuth2PermissionGrantClient.Delete")
+		defer func() {
+			sc := -1
+			if result.Response != nil {
+				sc = result.Response.StatusCode
+			}
+			tracing.EndSpan(ctx, sc, err)
+		}()
+	}
+	req, err := client.DeletePreparer(ctx, objectID)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.OAuth2PermissionGrantClient", "Delete", nil, "Failure preparing request")
+		return
+	}
+
+	resp, err := client.DeleteSender(req)
+	if err != nil {
+		result.Response = resp
+		err = autorest.NewErrorWithError(err, "graphrbac.OAuth2PermissionGrantClient", "Delete", resp, "Failure sending request")
+		return
+	}
+
+	result, err = client.DeleteResponder(resp)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.OAuth2PermissionGrantClient", "Delete", resp, "Failure responding to request")
+	}
+
+	return
+}
+
+// DeletePreparer prepares the Delete request.
+func (client OAuth2PermissionGrantClient) DeletePreparer(ctx context.Context, objectID string) (*http.Request, error) {
+	pathParameters := map[string]interface{}{
+		"objectId": autorest.Encode("path", objectID),
+		"tenantID": autorest.Encode("path", client.TenantID),
+	}
+
+	const APIVersion = "1.6"
+	queryParameters := map[string]interface{}{
+		"api-version": APIVersion,
+	}
+
+	preparer := autorest.CreatePreparer(
+		autorest.AsDelete(),
+		autorest.WithBaseURL(client.BaseURI),
+		autorest.WithPathParameters("/{tenantID}/oauth2PermissionGrants/{objectId}", pathParameters),
+		autorest.WithQueryParameters(queryParameters))
+	return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// DeleteSender sends the Delete request. The method will close the
+// http.Response Body if it receives an error.
+func (client OAuth2PermissionGrantClient) DeleteSender(req *http.Request) (*http.Response, error) {
+	sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+	return autorest.SendWithSender(client, req, sd...)
+}
+
+// DeleteResponder handles the response to the Delete request. The method always
+// closes the http.Response Body.
+func (client OAuth2PermissionGrantClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) {
+	err = autorest.Respond(
+		resp,
+		client.ByInspecting(),
+		azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent),
+		autorest.ByClosing())
+	result.Response = resp
+	return
+}
+
+// List queries OAuth2 permission grants for the relevant SP ObjectId of an app.
+// Parameters:
+// filter - this is the Service Principal ObjectId associated with the app
+func (client OAuth2PermissionGrantClient) List(ctx context.Context, filter string) (result OAuth2PermissionGrantListResultPage, err error) {
+	if tracing.IsEnabled() {
+		ctx = tracing.StartSpan(ctx, fqdn+"/OAuth2PermissionGrantClient.List")
+		defer func() {
+			sc := -1
+			if result.oa2pglr.Response.Response != nil {
+				sc = result.oa2pglr.Response.Response.StatusCode
+			}
+			tracing.EndSpan(ctx, sc, err)
+		}()
+	}
+	result.fn = func(ctx context.Context, lastResult OAuth2PermissionGrantListResult) (OAuth2PermissionGrantListResult, error) {
+		if lastResult.OdataNextLink == nil || len(to.String(lastResult.OdataNextLink)) < 1 {
+			return OAuth2PermissionGrantListResult{}, nil
+		}
+		return client.ListNext(ctx, *lastResult.OdataNextLink)
+	}
+	req, err := client.ListPreparer(ctx, filter)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.OAuth2PermissionGrantClient", "List", nil, "Failure preparing request")
+		return
+	}
+
+	resp, err := client.ListSender(req)
+	if err != nil {
+		result.oa2pglr.Response = autorest.Response{Response: resp}
+		err = autorest.NewErrorWithError(err, "graphrbac.OAuth2PermissionGrantClient", "List", resp, "Failure sending request")
+		return
+	}
+
+	result.oa2pglr, err = client.ListResponder(resp)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.OAuth2PermissionGrantClient", "List", resp, "Failure responding to request")
+	}
+
+	return
+}
+
+// ListPreparer prepares the List request.
+func (client OAuth2PermissionGrantClient) ListPreparer(ctx context.Context, filter string) (*http.Request, error) {
+	pathParameters := map[string]interface{}{
+		"tenantID": autorest.Encode("path", client.TenantID),
+	}
+
+	const APIVersion = "1.6"
+	queryParameters := map[string]interface{}{
+		"api-version": APIVersion,
+	}
+	if len(filter) > 0 {
+		queryParameters["$filter"] = autorest.Encode("query", filter)
+	}
+
+	preparer := autorest.CreatePreparer(
+		autorest.AsGet(),
+		autorest.WithBaseURL(client.BaseURI),
+		autorest.WithPathParameters("/{tenantID}/oauth2PermissionGrants", pathParameters),
+		autorest.WithQueryParameters(queryParameters))
+	return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListSender sends the List request. The method will close the
+// http.Response Body if it receives an error.
+func (client OAuth2PermissionGrantClient) ListSender(req *http.Request) (*http.Response, error) {
+	sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+	return autorest.SendWithSender(client, req, sd...)
+}
+
+// ListResponder handles the response to the List request. The method always
+// closes the http.Response Body.
+func (client OAuth2PermissionGrantClient) ListResponder(resp *http.Response) (result OAuth2PermissionGrantListResult, err error) {
+	err = autorest.Respond(
+		resp,
+		client.ByInspecting(),
+		azure.WithErrorUnlessStatusCode(http.StatusOK),
+		autorest.ByUnmarshallingJSON(&result),
+		autorest.ByClosing())
+	result.Response = autorest.Response{Response: resp}
+	return
+}
+
+// ListComplete enumerates all values, automatically crossing page boundaries as required.
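+//
+// Illustrative iteration sketch (the filter value is a hypothetical placeholder):
+//
+//	it, err := client.ListComplete(ctx, "clientId eq 'sp-object-id'")
+//	for err == nil && it.NotDone() {
+//		grant := it.Value()
+//		_ = grant // process one OAuth2PermissionGrant
+//		err = it.NextWithContext(ctx)
+//	}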
+func (client OAuth2PermissionGrantClient) ListComplete(ctx context.Context, filter string) (result OAuth2PermissionGrantListResultIterator, err error) {
+	if tracing.IsEnabled() {
+		ctx = tracing.StartSpan(ctx, fqdn+"/OAuth2PermissionGrantClient.List")
+		defer func() {
+			sc := -1
+			if result.Response().Response.Response != nil {
+				sc = result.page.Response().Response.Response.StatusCode
+			}
+			tracing.EndSpan(ctx, sc, err)
+		}()
+	}
+	result.page, err = client.List(ctx, filter)
+	return
+}
+
+// ListNext gets the next page of OAuth2 permission grants
+// Parameters:
+// nextLink - next link for the list operation.
+func (client OAuth2PermissionGrantClient) ListNext(ctx context.Context, nextLink string) (result OAuth2PermissionGrantListResult, err error) {
+	if tracing.IsEnabled() {
+		ctx = tracing.StartSpan(ctx, fqdn+"/OAuth2PermissionGrantClient.ListNext")
+		defer func() {
+			sc := -1
+			if result.Response.Response != nil {
+				sc = result.Response.Response.StatusCode
+			}
+			tracing.EndSpan(ctx, sc, err)
+		}()
+	}
+	req, err := client.ListNextPreparer(ctx, nextLink)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.OAuth2PermissionGrantClient", "ListNext", nil, "Failure preparing request")
+		return
+	}
+
+	resp, err := client.ListNextSender(req)
+	if err != nil {
+		result.Response = autorest.Response{Response: resp}
+		err = autorest.NewErrorWithError(err, "graphrbac.OAuth2PermissionGrantClient", "ListNext", resp, "Failure sending request")
+		return
+	}
+
+	result, err = client.ListNextResponder(resp)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.OAuth2PermissionGrantClient", "ListNext", resp, "Failure responding to request")
+	}
+
+	return
+}
+
+// ListNextPreparer prepares the ListNext request.
+func (client OAuth2PermissionGrantClient) ListNextPreparer(ctx context.Context, nextLink string) (*http.Request, error) {
+	pathParameters := map[string]interface{}{
+		"nextLink": nextLink,
+		"tenantID": autorest.Encode("path", client.TenantID),
+	}
+
+	const APIVersion = "1.6"
+	queryParameters := map[string]interface{}{
+		"api-version": APIVersion,
+	}
+
+	preparer := autorest.CreatePreparer(
+		autorest.AsGet(),
+		autorest.WithBaseURL(client.BaseURI),
+		autorest.WithPathParameters("/{tenantID}/{nextLink}", pathParameters),
+		autorest.WithQueryParameters(queryParameters))
+	return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListNextSender sends the ListNext request. The method will close the
+// http.Response Body if it receives an error.
+func (client OAuth2PermissionGrantClient) ListNextSender(req *http.Request) (*http.Response, error) {
+	sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+	return autorest.SendWithSender(client, req, sd...)
+}
+
+// ListNextResponder handles the response to the ListNext request. The method always
+// closes the http.Response Body.
+func (client OAuth2PermissionGrantClient) ListNextResponder(resp *http.Response) (result OAuth2PermissionGrantListResult, err error) {
+	err = autorest.Respond(
+		resp,
+		client.ByInspecting(),
+		azure.WithErrorUnlessStatusCode(http.StatusOK),
+		autorest.ByUnmarshallingJSON(&result),
+		autorest.ByClosing())
+	result.Response = autorest.Response{Response: resp}
+	return
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/objects.go b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/objects.go
new file mode 100644
index 000000000..3f6ca5c4b
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/objects.go
@@ -0,0 +1,216 @@
+package graphrbac
+
+// Copyright (c) Microsoft and contributors. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+	"context"
+	"github.com/Azure/go-autorest/autorest"
+	"github.com/Azure/go-autorest/autorest/azure"
+	"github.com/Azure/go-autorest/autorest/to"
+	"github.com/Azure/go-autorest/tracing"
+	"net/http"
+)
+
+// ObjectsClient is the Graph RBAC Management Client
+type ObjectsClient struct {
+	BaseClient
+}
+
+// NewObjectsClient creates an instance of the ObjectsClient client.
+func NewObjectsClient(tenantID string) ObjectsClient {
+	return NewObjectsClientWithBaseURI(DefaultBaseURI, tenantID)
+}
+
+// NewObjectsClientWithBaseURI creates an instance of the ObjectsClient client.
+func NewObjectsClientWithBaseURI(baseURI string, tenantID string) ObjectsClient {
+	return ObjectsClient{NewWithBaseURI(baseURI, tenantID)}
+}
+
+// GetObjectsByObjectIds gets the directory objects specified in a list of object IDs. You can also specify which
+// resource collections (users, groups, etc.) should be searched by specifying the optional types parameter.
+// Parameters:
+// parameters - objects filtering parameters.
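+//
+// Illustrative usage sketch; the field names on GetObjectsParameters (defined
+// elsewhere in this package) and the IDs below are assumptions for the example:
+//
+//	params := graphrbac.GetObjectsParameters{
+//		ObjectIds: &[]string{"aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"},
+//		Types:     &[]string{"User", "Group"},
+//	}
+//	page, err := client.GetObjectsByObjectIds(ctx, params)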
+func (client ObjectsClient) GetObjectsByObjectIds(ctx context.Context, parameters GetObjectsParameters) (result DirectoryObjectListResultPage, err error) {
+	if tracing.IsEnabled() {
+		ctx = tracing.StartSpan(ctx, fqdn+"/ObjectsClient.GetObjectsByObjectIds")
+		defer func() {
+			sc := -1
+			if result.dolr.Response.Response != nil {
+				sc = result.dolr.Response.Response.StatusCode
+			}
+			tracing.EndSpan(ctx, sc, err)
+		}()
+	}
+	result.fn = func(ctx context.Context, lastResult DirectoryObjectListResult) (DirectoryObjectListResult, error) {
+		if lastResult.OdataNextLink == nil || len(to.String(lastResult.OdataNextLink)) < 1 {
+			return DirectoryObjectListResult{}, nil
+		}
+		return client.GetObjectsByObjectIdsNext(ctx, *lastResult.OdataNextLink)
+	}
+	req, err := client.GetObjectsByObjectIdsPreparer(ctx, parameters)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.ObjectsClient", "GetObjectsByObjectIds", nil, "Failure preparing request")
+		return
+	}
+
+	resp, err := client.GetObjectsByObjectIdsSender(req)
+	if err != nil {
+		result.dolr.Response = autorest.Response{Response: resp}
+		err = autorest.NewErrorWithError(err, "graphrbac.ObjectsClient", "GetObjectsByObjectIds", resp, "Failure sending request")
+		return
+	}
+
+	result.dolr, err = client.GetObjectsByObjectIdsResponder(resp)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.ObjectsClient", "GetObjectsByObjectIds", resp, "Failure responding to request")
+	}
+
+	return
+}
+
+// GetObjectsByObjectIdsPreparer prepares the GetObjectsByObjectIds request.
+func (client ObjectsClient) GetObjectsByObjectIdsPreparer(ctx context.Context, parameters GetObjectsParameters) (*http.Request, error) {
+	pathParameters := map[string]interface{}{
+		"tenantID": autorest.Encode("path", client.TenantID),
+	}
+
+	const APIVersion = "1.6"
+	queryParameters := map[string]interface{}{
+		"api-version": APIVersion,
+	}
+
+	preparer := autorest.CreatePreparer(
+		autorest.AsContentType("application/json; charset=utf-8"),
+		autorest.AsPost(),
+		autorest.WithBaseURL(client.BaseURI),
+		autorest.WithPathParameters("/{tenantID}/getObjectsByObjectIds", pathParameters),
+		autorest.WithJSON(parameters),
+		autorest.WithQueryParameters(queryParameters))
+	return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// GetObjectsByObjectIdsSender sends the GetObjectsByObjectIds request. The method will close the
+// http.Response Body if it receives an error.
+func (client ObjectsClient) GetObjectsByObjectIdsSender(req *http.Request) (*http.Response, error) {
+	sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+	return autorest.SendWithSender(client, req, sd...)
+}
+
+// GetObjectsByObjectIdsResponder handles the response to the GetObjectsByObjectIds request. The method always
+// closes the http.Response Body.
+func (client ObjectsClient) GetObjectsByObjectIdsResponder(resp *http.Response) (result DirectoryObjectListResult, err error) {
+	err = autorest.Respond(
+		resp,
+		client.ByInspecting(),
+		azure.WithErrorUnlessStatusCode(http.StatusOK),
+		autorest.ByUnmarshallingJSON(&result),
+		autorest.ByClosing())
+	result.Response = autorest.Response{Response: resp}
+	return
+}
+
+// GetObjectsByObjectIdsComplete enumerates all values, automatically crossing page boundaries as required.
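+//
+// Illustrative iteration sketch (params is a prepared GetObjectsParameters value):
+//
+//	it, err := client.GetObjectsByObjectIdsComplete(ctx, params)
+//	for err == nil && it.NotDone() {
+//		obj := it.Value()
+//		_ = obj // one DirectoryObject per iteration
+//		err = it.NextWithContext(ctx)
+//	}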
+func (client ObjectsClient) GetObjectsByObjectIdsComplete(ctx context.Context, parameters GetObjectsParameters) (result DirectoryObjectListResultIterator, err error) {
+	if tracing.IsEnabled() {
+		ctx = tracing.StartSpan(ctx, fqdn+"/ObjectsClient.GetObjectsByObjectIds")
+		defer func() {
+			sc := -1
+			if result.Response().Response.Response != nil {
+				sc = result.page.Response().Response.Response.StatusCode
+			}
+			tracing.EndSpan(ctx, sc, err)
+		}()
+	}
+	result.page, err = client.GetObjectsByObjectIds(ctx, parameters)
+	return
+}
+
+// GetObjectsByObjectIdsNext gets the next page of directory objects for a GetObjectsByObjectIds call.
+// Parameters:
+// nextLink - next link for the list operation.
+func (client ObjectsClient) GetObjectsByObjectIdsNext(ctx context.Context, nextLink string) (result DirectoryObjectListResult, err error) {
+	if tracing.IsEnabled() {
+		ctx = tracing.StartSpan(ctx, fqdn+"/ObjectsClient.GetObjectsByObjectIdsNext")
+		defer func() {
+			sc := -1
+			if result.Response.Response != nil {
+				sc = result.Response.Response.StatusCode
+			}
+			tracing.EndSpan(ctx, sc, err)
+		}()
+	}
+	req, err := client.GetObjectsByObjectIdsNextPreparer(ctx, nextLink)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.ObjectsClient", "GetObjectsByObjectIdsNext", nil, "Failure preparing request")
+		return
+	}
+
+	resp, err := client.GetObjectsByObjectIdsNextSender(req)
+	if err != nil {
+		result.Response = autorest.Response{Response: resp}
+		err = autorest.NewErrorWithError(err, "graphrbac.ObjectsClient", "GetObjectsByObjectIdsNext", resp, "Failure sending request")
+		return
+	}
+
+	result, err = client.GetObjectsByObjectIdsNextResponder(resp)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.ObjectsClient", "GetObjectsByObjectIdsNext", resp, "Failure responding to request")
+	}
+
+	return
+}
+
+// GetObjectsByObjectIdsNextPreparer prepares the GetObjectsByObjectIdsNext request.
+func (client ObjectsClient) GetObjectsByObjectIdsNextPreparer(ctx context.Context, nextLink string) (*http.Request, error) {
+	pathParameters := map[string]interface{}{
+		"nextLink": nextLink,
+		"tenantID": autorest.Encode("path", client.TenantID),
+	}
+
+	const APIVersion = "1.6"
+	queryParameters := map[string]interface{}{
+		"api-version": APIVersion,
+	}
+
+	preparer := autorest.CreatePreparer(
+		autorest.AsPost(),
+		autorest.WithBaseURL(client.BaseURI),
+		autorest.WithPathParameters("/{tenantID}/{nextLink}", pathParameters),
+		autorest.WithQueryParameters(queryParameters))
+	return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// GetObjectsByObjectIdsNextSender sends the GetObjectsByObjectIdsNext request. The method will close the
+// http.Response Body if it receives an error.
+func (client ObjectsClient) GetObjectsByObjectIdsNextSender(req *http.Request) (*http.Response, error) {
+	sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+	return autorest.SendWithSender(client, req, sd...)
+}
+
+// GetObjectsByObjectIdsNextResponder handles the response to the GetObjectsByObjectIdsNext request. The method always
+// closes the http.Response Body.
+func (client ObjectsClient) GetObjectsByObjectIdsNextResponder(resp *http.Response) (result DirectoryObjectListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/serviceprincipals.go b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/serviceprincipals.go new file mode 100644 index 000000000..45099830b --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/serviceprincipals.go @@ -0,0 +1,942 @@ +package graphrbac + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/autorest/to" + "github.com/Azure/go-autorest/autorest/validation" + "github.com/Azure/go-autorest/tracing" + "net/http" +) + +// ServicePrincipalsClient is the the Graph RBAC Management Client +type ServicePrincipalsClient struct { + BaseClient +} + +// NewServicePrincipalsClient creates an instance of the ServicePrincipalsClient client. +func NewServicePrincipalsClient(tenantID string) ServicePrincipalsClient { + return NewServicePrincipalsClientWithBaseURI(DefaultBaseURI, tenantID) +} + +// NewServicePrincipalsClientWithBaseURI creates an instance of the ServicePrincipalsClient client. +func NewServicePrincipalsClientWithBaseURI(baseURI string, tenantID string) ServicePrincipalsClient { + return ServicePrincipalsClient{NewWithBaseURI(baseURI, tenantID)} +} + +// Create creates a service principal in the directory. +// Parameters: +// parameters - parameters to create a service principal. 
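+//
+// Illustrative usage sketch (not part of the generated code; ctx, tenantID, appID and
+// authorizer are assumed to be in scope). AppID is the only field the client-side
+// validation marks as required:
+//
+//	client := graphrbac.NewServicePrincipalsClient(tenantID)
+//	client.Authorizer = authorizer
+//	sp, err := client.Create(ctx, graphrbac.ServicePrincipalCreateParameters{
+//		AppID: to.StringPtr(appID),
+//	})
+//	if err == nil && sp.ObjectID != nil {
+//		objectID := *sp.ObjectID // object ID of the new service principal
+//		_ = objectID
+//	}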
+func (client ServicePrincipalsClient) Create(ctx context.Context, parameters ServicePrincipalCreateParameters) (result ServicePrincipal, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ServicePrincipalsClient.Create") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.AppID", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { + return result, validation.NewError("graphrbac.ServicePrincipalsClient", "Create", err.Error()) + } + + req, err := client.CreatePreparer(ctx, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "Create", nil, "Failure preparing request") + return + } + + resp, err := client.CreateSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "Create", resp, "Failure sending request") + return + } + + result, err = client.CreateResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "Create", resp, "Failure responding to request") + } + + return +} + +// CreatePreparer prepares the Create request. +func (client ServicePrincipalsClient) CreatePreparer(ctx context.Context, parameters ServicePrincipalCreateParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/servicePrincipals", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CreateSender sends the Create request. The method will close the +// http.Response Body if it receives an error. +func (client ServicePrincipalsClient) CreateSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// CreateResponder handles the response to the Create request. The method always +// closes the http.Response Body. +func (client ServicePrincipalsClient) CreateResponder(resp *http.Response) (result ServicePrincipal, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Delete deletes a service principal from the directory. +// Parameters: +// objectID - the object ID of the service principal to delete. 
+func (client ServicePrincipalsClient) Delete(ctx context.Context, objectID string) (result autorest.Response, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ServicePrincipalsClient.Delete") + defer func() { + sc := -1 + if result.Response != nil { + sc = result.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.DeletePreparer(ctx, objectID) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "Delete", nil, "Failure preparing request") + return + } + + resp, err := client.DeleteSender(req) + if err != nil { + result.Response = resp + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "Delete", resp, "Failure sending request") + return + } + + result, err = client.DeleteResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "Delete", resp, "Failure responding to request") + } + + return +} + +// DeletePreparer prepares the Delete request. +func (client ServicePrincipalsClient) DeletePreparer(ctx context.Context, objectID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "objectId": autorest.Encode("path", objectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/servicePrincipals/{objectId}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteSender sends the Delete request. The method will close the +// http.Response Body if it receives an error. +func (client ServicePrincipalsClient) DeleteSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// DeleteResponder handles the response to the Delete request. The method always +// closes the http.Response Body. +func (client ServicePrincipalsClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// Get gets service principal information from the directory. Query by objectId or pass a filter to query by appId +// Parameters: +// objectID - the object ID of the service principal to get. 
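+//
+// Illustrative usage sketch (not part of the generated code; ctx, objectID and a
+// configured client are assumed). To query by appId instead, pass an OData filter
+// such as "appId eq '...'" to List rather than calling Get:
+//
+//	sp, err := client.Get(ctx, objectID)
+//	if err == nil && sp.DisplayName != nil {
+//		name := *sp.DisplayName
+//		_ = name
+//	}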
+func (client ServicePrincipalsClient) Get(ctx context.Context, objectID string) (result ServicePrincipal, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ServicePrincipalsClient.Get") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.GetPreparer(ctx, objectID) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. +func (client ServicePrincipalsClient) GetPreparer(ctx context.Context, objectID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "objectId": autorest.Encode("path", objectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/servicePrincipals/{objectId}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client ServicePrincipalsClient) GetSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client ServicePrincipalsClient) GetResponder(resp *http.Response) (result ServicePrincipal, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List gets a list of service principals from the current tenant. +// Parameters: +// filter - the filter to apply to the operation. 
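+//
+// Illustrative usage sketch (not part of the generated code; ctx and a configured
+// client are assumed; the filter value is an example OData expression):
+//
+//	page, err := client.List(ctx, "startswith(displayName, 'example')")
+//	for err == nil && page.NotDone() {
+//		for _, sp := range page.Values() {
+//			_ = sp // graphrbac.ServicePrincipal
+//		}
+//		err = page.NextWithContext(ctx)
+//	}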
+func (client ServicePrincipalsClient) List(ctx context.Context, filter string) (result ServicePrincipalListResultPage, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ServicePrincipalsClient.List") + defer func() { + sc := -1 + if result.splr.Response.Response != nil { + sc = result.splr.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + result.fn = func(ctx context.Context, lastResult ServicePrincipalListResult) (ServicePrincipalListResult, error) { + if lastResult.OdataNextLink == nil || len(to.String(lastResult.OdataNextLink)) < 1 { + return ServicePrincipalListResult{}, nil + } + return client.ListNext(ctx, *lastResult.OdataNextLink) + } + req, err := client.ListPreparer(ctx, filter) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.splr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "List", resp, "Failure sending request") + return + } + + result.splr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client ServicePrincipalsClient) ListPreparer(ctx context.Context, filter string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(filter) > 0 { + queryParameters["$filter"] = autorest.Encode("query", filter) + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/servicePrincipals", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client ServicePrincipalsClient) ListSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client ServicePrincipalsClient) ListResponder(resp *http.Response) (result ServicePrincipalListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. 
+func (client ServicePrincipalsClient) ListComplete(ctx context.Context, filter string) (result ServicePrincipalListResultIterator, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ServicePrincipalsClient.List") + defer func() { + sc := -1 + if result.Response().Response.Response != nil { + sc = result.page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + result.page, err = client.List(ctx, filter) + return +} + +// ListKeyCredentials get the keyCredentials associated with the specified service principal. +// Parameters: +// objectID - the object ID of the service principal for which to get keyCredentials. +func (client ServicePrincipalsClient) ListKeyCredentials(ctx context.Context, objectID string) (result KeyCredentialListResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ServicePrincipalsClient.ListKeyCredentials") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.ListKeyCredentialsPreparer(ctx, objectID) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "ListKeyCredentials", nil, "Failure preparing request") + return + } + + resp, err := client.ListKeyCredentialsSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "ListKeyCredentials", resp, "Failure sending request") + return + } + + result, err = client.ListKeyCredentialsResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "ListKeyCredentials", resp, "Failure responding to request") + } + + return +} + +// ListKeyCredentialsPreparer prepares the ListKeyCredentials request. +func (client ServicePrincipalsClient) ListKeyCredentialsPreparer(ctx context.Context, objectID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "objectId": autorest.Encode("path", objectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/servicePrincipals/{objectId}/keyCredentials", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListKeyCredentialsSender sends the ListKeyCredentials request. The method will close the +// http.Response Body if it receives an error. +func (client ServicePrincipalsClient) ListKeyCredentialsSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// ListKeyCredentialsResponder handles the response to the ListKeyCredentials request. The method always +// closes the http.Response Body. 
+func (client ServicePrincipalsClient) ListKeyCredentialsResponder(resp *http.Response) (result KeyCredentialListResult, err error) {
+	err = autorest.Respond(
+		resp,
+		client.ByInspecting(),
+		azure.WithErrorUnlessStatusCode(http.StatusOK),
+		autorest.ByUnmarshallingJSON(&result),
+		autorest.ByClosing())
+	result.Response = autorest.Response{Response: resp}
+	return
+}
+
+// ListNext gets a list of service principals from the current tenant.
+// Parameters:
+// nextLink - next link for the list operation.
+func (client ServicePrincipalsClient) ListNext(ctx context.Context, nextLink string) (result ServicePrincipalListResult, err error) {
+	if tracing.IsEnabled() {
+		ctx = tracing.StartSpan(ctx, fqdn+"/ServicePrincipalsClient.ListNext")
+		defer func() {
+			sc := -1
+			if result.Response.Response != nil {
+				sc = result.Response.Response.StatusCode
+			}
+			tracing.EndSpan(ctx, sc, err)
+		}()
+	}
+	req, err := client.ListNextPreparer(ctx, nextLink)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "ListNext", nil, "Failure preparing request")
+		return
+	}
+
+	resp, err := client.ListNextSender(req)
+	if err != nil {
+		result.Response = autorest.Response{Response: resp}
+		err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "ListNext", resp, "Failure sending request")
+		return
+	}
+
+	result, err = client.ListNextResponder(resp)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "ListNext", resp, "Failure responding to request")
+	}
+
+	return
+}
+
+// ListNextPreparer prepares the ListNext request.
+func (client ServicePrincipalsClient) ListNextPreparer(ctx context.Context, nextLink string) (*http.Request, error) {
+	pathParameters := map[string]interface{}{
+		"nextLink": nextLink,
+		"tenantID": autorest.Encode("path", client.TenantID),
+	}
+
+	const APIVersion = "1.6"
+	queryParameters := map[string]interface{}{
+		"api-version": APIVersion,
+	}
+
+	preparer := autorest.CreatePreparer(
+		autorest.AsGet(),
+		autorest.WithBaseURL(client.BaseURI),
+		autorest.WithPathParameters("/{tenantID}/{nextLink}", pathParameters),
+		autorest.WithQueryParameters(queryParameters))
+	return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListNextSender sends the ListNext request. The method will close the
+// http.Response Body if it receives an error.
+func (client ServicePrincipalsClient) ListNextSender(req *http.Request) (*http.Response, error) {
+	sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+	return autorest.SendWithSender(client, req, sd...)
+}
+
+// ListNextResponder handles the response to the ListNext request. The method always
+// closes the http.Response Body.
+func (client ServicePrincipalsClient) ListNextResponder(resp *http.Response) (result ServicePrincipalListResult, err error) {
+	err = autorest.Respond(
+		resp,
+		client.ByInspecting(),
+		azure.WithErrorUnlessStatusCode(http.StatusOK),
+		autorest.ByUnmarshallingJSON(&result),
+		autorest.ByClosing())
+	result.Response = autorest.Response{Response: resp}
+	return
+}
+
+// ListOwners lists the owners of the service principal. The owners are a set of non-admin users who are allowed to
+// modify this object.
+// Parameters:
+// objectID - the object ID of the service principal for which to get owners.
+func (client ServicePrincipalsClient) ListOwners(ctx context.Context, objectID string) (result DirectoryObjectListResultPage, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ServicePrincipalsClient.ListOwners") + defer func() { + sc := -1 + if result.dolr.Response.Response != nil { + sc = result.dolr.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + result.fn = client.listOwnersNextResults + req, err := client.ListOwnersPreparer(ctx, objectID) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "ListOwners", nil, "Failure preparing request") + return + } + + resp, err := client.ListOwnersSender(req) + if err != nil { + result.dolr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "ListOwners", resp, "Failure sending request") + return + } + + result.dolr, err = client.ListOwnersResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "ListOwners", resp, "Failure responding to request") + } + + return +} + +// ListOwnersPreparer prepares the ListOwners request. +func (client ServicePrincipalsClient) ListOwnersPreparer(ctx context.Context, objectID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "objectId": autorest.Encode("path", objectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/servicePrincipals/{objectId}/owners", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListOwnersSender sends the ListOwners request. The method will close the +// http.Response Body if it receives an error. +func (client ServicePrincipalsClient) ListOwnersSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// ListOwnersResponder handles the response to the ListOwners request. The method always +// closes the http.Response Body. +func (client ServicePrincipalsClient) ListOwnersResponder(resp *http.Response) (result DirectoryObjectListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listOwnersNextResults retrieves the next set of results, if any. 
+func (client ServicePrincipalsClient) listOwnersNextResults(ctx context.Context, lastResults DirectoryObjectListResult) (result DirectoryObjectListResult, err error) { + req, err := lastResults.directoryObjectListResultPreparer(ctx) + if err != nil { + return result, autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "listOwnersNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListOwnersSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "listOwnersNextResults", resp, "Failure sending next results request") + } + result, err = client.ListOwnersResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "listOwnersNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListOwnersComplete enumerates all values, automatically crossing page boundaries as required. +func (client ServicePrincipalsClient) ListOwnersComplete(ctx context.Context, objectID string) (result DirectoryObjectListResultIterator, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ServicePrincipalsClient.ListOwners") + defer func() { + sc := -1 + if result.Response().Response.Response != nil { + sc = result.page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + result.page, err = client.ListOwners(ctx, objectID) + return +} + +// ListPasswordCredentials gets the passwordCredentials associated with a service principal. +// Parameters: +// objectID - the object ID of the service principal. +func (client ServicePrincipalsClient) ListPasswordCredentials(ctx context.Context, objectID string) (result PasswordCredentialListResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ServicePrincipalsClient.ListPasswordCredentials") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.ListPasswordCredentialsPreparer(ctx, objectID) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "ListPasswordCredentials", nil, "Failure preparing request") + return + } + + resp, err := client.ListPasswordCredentialsSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "ListPasswordCredentials", resp, "Failure sending request") + return + } + + result, err = client.ListPasswordCredentialsResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "ListPasswordCredentials", resp, "Failure responding to request") + } + + return +} + +// ListPasswordCredentialsPreparer prepares the ListPasswordCredentials request. 
+func (client ServicePrincipalsClient) ListPasswordCredentialsPreparer(ctx context.Context, objectID string) (*http.Request, error) {
+	pathParameters := map[string]interface{}{
+		"objectId": autorest.Encode("path", objectID),
+		"tenantID": autorest.Encode("path", client.TenantID),
+	}
+
+	const APIVersion = "1.6"
+	queryParameters := map[string]interface{}{
+		"api-version": APIVersion,
+	}
+
+	preparer := autorest.CreatePreparer(
+		autorest.AsGet(),
+		autorest.WithBaseURL(client.BaseURI),
+		autorest.WithPathParameters("/{tenantID}/servicePrincipals/{objectId}/passwordCredentials", pathParameters),
+		autorest.WithQueryParameters(queryParameters))
+	return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListPasswordCredentialsSender sends the ListPasswordCredentials request. The method will close the
+// http.Response Body if it receives an error.
+func (client ServicePrincipalsClient) ListPasswordCredentialsSender(req *http.Request) (*http.Response, error) {
+	sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+	return autorest.SendWithSender(client, req, sd...)
+}
+
+// ListPasswordCredentialsResponder handles the response to the ListPasswordCredentials request. The method always
+// closes the http.Response Body.
+func (client ServicePrincipalsClient) ListPasswordCredentialsResponder(resp *http.Response) (result PasswordCredentialListResult, err error) {
+	err = autorest.Respond(
+		resp,
+		client.ByInspecting(),
+		azure.WithErrorUnlessStatusCode(http.StatusOK),
+		autorest.ByUnmarshallingJSON(&result),
+		autorest.ByClosing())
+	result.Response = autorest.Response{Response: resp}
+	return
+}
+
+// Update updates a service principal in the directory.
+// Parameters:
+// objectID - the object ID of the service principal to update.
+// parameters - parameters to update a service principal.
+func (client ServicePrincipalsClient) Update(ctx context.Context, objectID string, parameters ServicePrincipalUpdateParameters) (result autorest.Response, err error) {
+	if tracing.IsEnabled() {
+		ctx = tracing.StartSpan(ctx, fqdn+"/ServicePrincipalsClient.Update")
+		defer func() {
+			sc := -1
+			if result.Response != nil {
+				sc = result.Response.StatusCode
+			}
+			tracing.EndSpan(ctx, sc, err)
+		}()
+	}
+	req, err := client.UpdatePreparer(ctx, objectID, parameters)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "Update", nil, "Failure preparing request")
+		return
+	}
+
+	resp, err := client.UpdateSender(req)
+	if err != nil {
+		result.Response = resp
+		err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "Update", resp, "Failure sending request")
+		return
+	}
+
+	result, err = client.UpdateResponder(resp)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "Update", resp, "Failure responding to request")
+	}
+
+	return
+}
+
+// UpdatePreparer prepares the Update request.
+func (client ServicePrincipalsClient) UpdatePreparer(ctx context.Context, objectID string, parameters ServicePrincipalUpdateParameters) (*http.Request, error) {
+	pathParameters := map[string]interface{}{
+		"objectId": autorest.Encode("path", objectID),
+		"tenantID": autorest.Encode("path", client.TenantID),
+	}
+
+	const APIVersion = "1.6"
+	queryParameters := map[string]interface{}{
+		"api-version": APIVersion,
+	}
+
+	preparer := autorest.CreatePreparer(
+		autorest.AsContentType("application/json; charset=utf-8"),
+		autorest.AsPatch(),
+		autorest.WithBaseURL(client.BaseURI),
+		autorest.WithPathParameters("/{tenantID}/servicePrincipals/{objectId}", pathParameters),
+		autorest.WithJSON(parameters),
+		autorest.WithQueryParameters(queryParameters))
+	return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// UpdateSender sends the Update request. The method will close the
+// http.Response Body if it receives an error.
+func (client ServicePrincipalsClient) UpdateSender(req *http.Request) (*http.Response, error) {
+	sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+	return autorest.SendWithSender(client, req, sd...)
+}
+
+// UpdateResponder handles the response to the Update request. The method always
+// closes the http.Response Body.
+func (client ServicePrincipalsClient) UpdateResponder(resp *http.Response) (result autorest.Response, err error) {
+	err = autorest.Respond(
+		resp,
+		client.ByInspecting(),
+		azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent),
+		autorest.ByClosing())
+	result.Response = resp
+	return
+}
+
+// UpdateKeyCredentials updates the keyCredentials associated with a service principal.
+// Parameters:
+// objectID - the object ID of the service principal.
+// parameters - parameters to update the keyCredentials of an existing service principal.
+func (client ServicePrincipalsClient) UpdateKeyCredentials(ctx context.Context, objectID string, parameters KeyCredentialsUpdateParameters) (result autorest.Response, err error) {
+	if tracing.IsEnabled() {
+		ctx = tracing.StartSpan(ctx, fqdn+"/ServicePrincipalsClient.UpdateKeyCredentials")
+		defer func() {
+			sc := -1
+			if result.Response != nil {
+				sc = result.Response.StatusCode
+			}
+			tracing.EndSpan(ctx, sc, err)
+		}()
+	}
+	req, err := client.UpdateKeyCredentialsPreparer(ctx, objectID, parameters)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "UpdateKeyCredentials", nil, "Failure preparing request")
+		return
+	}
+
+	resp, err := client.UpdateKeyCredentialsSender(req)
+	if err != nil {
+		result.Response = resp
+		err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "UpdateKeyCredentials", resp, "Failure sending request")
+		return
+	}
+
+	result, err = client.UpdateKeyCredentialsResponder(resp)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "UpdateKeyCredentials", resp, "Failure responding to request")
+	}
+
+	return
+}
+
+// UpdateKeyCredentialsPreparer prepares the UpdateKeyCredentials request.
+func (client ServicePrincipalsClient) UpdateKeyCredentialsPreparer(ctx context.Context, objectID string, parameters KeyCredentialsUpdateParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "objectId": autorest.Encode("path", objectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/servicePrincipals/{objectId}/keyCredentials", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateKeyCredentialsSender sends the UpdateKeyCredentials request. The method will close the +// http.Response Body if it receives an error. +func (client ServicePrincipalsClient) UpdateKeyCredentialsSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// UpdateKeyCredentialsResponder handles the response to the UpdateKeyCredentials request. The method always +// closes the http.Response Body. +func (client ServicePrincipalsClient) UpdateKeyCredentialsResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// UpdatePasswordCredentials updates the passwordCredentials associated with a service principal. +// Parameters: +// objectID - the object ID of the service principal. +// parameters - parameters to update the passwordCredentials of an existing service principal. +func (client ServicePrincipalsClient) UpdatePasswordCredentials(ctx context.Context, objectID string, parameters PasswordCredentialsUpdateParameters) (result autorest.Response, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ServicePrincipalsClient.UpdatePasswordCredentials") + defer func() { + sc := -1 + if result.Response != nil { + sc = result.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.UpdatePasswordCredentialsPreparer(ctx, objectID, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "UpdatePasswordCredentials", nil, "Failure preparing request") + return + } + + resp, err := client.UpdatePasswordCredentialsSender(req) + if err != nil { + result.Response = resp + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "UpdatePasswordCredentials", resp, "Failure sending request") + return + } + + result, err = client.UpdatePasswordCredentialsResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.ServicePrincipalsClient", "UpdatePasswordCredentials", resp, "Failure responding to request") + } + + return +} + +// UpdatePasswordCredentialsPreparer prepares the UpdatePasswordCredentials request. 
+func (client ServicePrincipalsClient) UpdatePasswordCredentialsPreparer(ctx context.Context, objectID string, parameters PasswordCredentialsUpdateParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "objectId": autorest.Encode("path", objectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/servicePrincipals/{objectId}/passwordCredentials", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdatePasswordCredentialsSender sends the UpdatePasswordCredentials request. The method will close the +// http.Response Body if it receives an error. +func (client ServicePrincipalsClient) UpdatePasswordCredentialsSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// UpdatePasswordCredentialsResponder handles the response to the UpdatePasswordCredentials request. The method always +// closes the http.Response Body. +func (client ServicePrincipalsClient) UpdatePasswordCredentialsResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/signedinuser.go b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/signedinuser.go new file mode 100644 index 000000000..057658eef --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/signedinuser.go @@ -0,0 +1,283 @@ +package graphrbac + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/autorest/to" + "github.com/Azure/go-autorest/tracing" + "net/http" +) + +// SignedInUserClient is the the Graph RBAC Management Client +type SignedInUserClient struct { + BaseClient +} + +// NewSignedInUserClient creates an instance of the SignedInUserClient client. 
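+//
+// Illustrative usage sketch (not part of the generated code; tenantID, ctx and
+// authorizer are assumed to be in scope):
+//
+//	client := graphrbac.NewSignedInUserClient(tenantID)
+//	client.Authorizer = authorizer
+//	me, err := client.Get(ctx)
+//	if err == nil && me.UserPrincipalName != nil {
+//		upn := *me.UserPrincipalName
+//		_ = upn
+//	}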
+func NewSignedInUserClient(tenantID string) SignedInUserClient { + return NewSignedInUserClientWithBaseURI(DefaultBaseURI, tenantID) +} + +// NewSignedInUserClientWithBaseURI creates an instance of the SignedInUserClient client. +func NewSignedInUserClientWithBaseURI(baseURI string, tenantID string) SignedInUserClient { + return SignedInUserClient{NewWithBaseURI(baseURI, tenantID)} +} + +// Get gets the details for the currently logged-in user. +func (client SignedInUserClient) Get(ctx context.Context) (result User, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/SignedInUserClient.Get") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.GetPreparer(ctx) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.SignedInUserClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.SignedInUserClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.SignedInUserClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. +func (client SignedInUserClient) GetPreparer(ctx context.Context) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/me", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client SignedInUserClient) GetSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client SignedInUserClient) GetResponder(resp *http.Response) (result User, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// ListOwnedObjects get the list of directory objects that are owned by the user. 
+func (client SignedInUserClient) ListOwnedObjects(ctx context.Context) (result DirectoryObjectListResultPage, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/SignedInUserClient.ListOwnedObjects") + defer func() { + sc := -1 + if result.dolr.Response.Response != nil { + sc = result.dolr.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + result.fn = func(ctx context.Context, lastResult DirectoryObjectListResult) (DirectoryObjectListResult, error) { + if lastResult.OdataNextLink == nil || len(to.String(lastResult.OdataNextLink)) < 1 { + return DirectoryObjectListResult{}, nil + } + return client.ListOwnedObjectsNext(ctx, *lastResult.OdataNextLink) + } + req, err := client.ListOwnedObjectsPreparer(ctx) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.SignedInUserClient", "ListOwnedObjects", nil, "Failure preparing request") + return + } + + resp, err := client.ListOwnedObjectsSender(req) + if err != nil { + result.dolr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.SignedInUserClient", "ListOwnedObjects", resp, "Failure sending request") + return + } + + result.dolr, err = client.ListOwnedObjectsResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.SignedInUserClient", "ListOwnedObjects", resp, "Failure responding to request") + } + + return +} + +// ListOwnedObjectsPreparer prepares the ListOwnedObjects request. +func (client SignedInUserClient) ListOwnedObjectsPreparer(ctx context.Context) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/me/ownedObjects", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListOwnedObjectsSender sends the ListOwnedObjects request. The method will close the +// http.Response Body if it receives an error. +func (client SignedInUserClient) ListOwnedObjectsSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// ListOwnedObjectsResponder handles the response to the ListOwnedObjects request. The method always +// closes the http.Response Body. +func (client SignedInUserClient) ListOwnedObjectsResponder(resp *http.Response) (result DirectoryObjectListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// ListOwnedObjectsComplete enumerates all values, automatically crossing page boundaries as required. 
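+//
+// Illustrative usage sketch (not part of the generated code; ctx and a configured
+// client are assumed):
+//
+//	it, err := client.ListOwnedObjectsComplete(ctx)
+//	for err == nil && it.NotDone() {
+//		if app, ok := it.Value().(graphrbac.Application); ok {
+//			_ = app // an application owned by the signed-in user
+//		}
+//		err = it.NextWithContext(ctx)
+//	}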
+func (client SignedInUserClient) ListOwnedObjectsComplete(ctx context.Context) (result DirectoryObjectListResultIterator, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/SignedInUserClient.ListOwnedObjects") + defer func() { + sc := -1 + if result.Response().Response.Response != nil { + sc = result.page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + result.page, err = client.ListOwnedObjects(ctx) + return +} + +// ListOwnedObjectsNext get the list of directory objects that are owned by the user. +// Parameters: +// nextLink - next link for the list operation. +func (client SignedInUserClient) ListOwnedObjectsNext(ctx context.Context, nextLink string) (result DirectoryObjectListResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/SignedInUserClient.ListOwnedObjectsNext") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.ListOwnedObjectsNextPreparer(ctx, nextLink) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.SignedInUserClient", "ListOwnedObjectsNext", nil, "Failure preparing request") + return + } + + resp, err := client.ListOwnedObjectsNextSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.SignedInUserClient", "ListOwnedObjectsNext", resp, "Failure sending request") + return + } + + result, err = client.ListOwnedObjectsNextResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.SignedInUserClient", "ListOwnedObjectsNext", resp, "Failure responding to request") + } + + return +} + +// ListOwnedObjectsNextPreparer prepares the ListOwnedObjectsNext request. +func (client SignedInUserClient) ListOwnedObjectsNextPreparer(ctx context.Context, nextLink string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "nextLink": nextLink, + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/{nextLink}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListOwnedObjectsNextSender sends the ListOwnedObjectsNext request. The method will close the +// http.Response Body if it receives an error. +func (client SignedInUserClient) ListOwnedObjectsNextSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// ListOwnedObjectsNextResponder handles the response to the ListOwnedObjectsNext request. The method always +// closes the http.Response Body. 
+func (client SignedInUserClient) ListOwnedObjectsNextResponder(resp *http.Response) (result DirectoryObjectListResult, err error) {
+	err = autorest.Respond(
+		resp,
+		client.ByInspecting(),
+		azure.WithErrorUnlessStatusCode(http.StatusOK),
+		autorest.ByUnmarshallingJSON(&result),
+		autorest.ByClosing())
+	result.Response = autorest.Response{Response: resp}
+	return
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/users.go b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/users.go
new file mode 100644
index 000000000..3c688fe7b
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/users.go
@@ -0,0 +1,614 @@
+package graphrbac
+
+// Copyright (c) Microsoft and contributors. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+	"context"
+	"github.com/Azure/go-autorest/autorest"
+	"github.com/Azure/go-autorest/autorest/azure"
+	"github.com/Azure/go-autorest/autorest/to"
+	"github.com/Azure/go-autorest/autorest/validation"
+	"github.com/Azure/go-autorest/tracing"
+	"net/http"
+)
+
+// UsersClient is the Graph RBAC Management Client
+type UsersClient struct {
+	BaseClient
+}
+
+// NewUsersClient creates an instance of the UsersClient client.
+func NewUsersClient(tenantID string) UsersClient {
+	return NewUsersClientWithBaseURI(DefaultBaseURI, tenantID)
+}
+
+// NewUsersClientWithBaseURI creates an instance of the UsersClient client.
+func NewUsersClientWithBaseURI(baseURI string, tenantID string) UsersClient {
+	return UsersClient{NewWithBaseURI(baseURI, tenantID)}
+}
+
+// Create creates a new user.
+// Parameters:
+// parameters - parameters to create a user.
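+//
+// Illustrative usage sketch (not part of the generated code; tenantID, ctx, password and
+// authorizer are assumed to be in scope). The fields shown are the ones the client-side
+// validation marks as required:
+//
+//	client := graphrbac.NewUsersClient(tenantID)
+//	client.Authorizer = authorizer
+//	user, err := client.Create(ctx, graphrbac.UserCreateParameters{
+//		AccountEnabled:    to.BoolPtr(true),
+//		DisplayName:       to.StringPtr("Example User"),
+//		MailNickname:      to.StringPtr("exampleuser"),
+//		UserPrincipalName: to.StringPtr("exampleuser@example.onmicrosoft.com"),
+//		PasswordProfile: &graphrbac.PasswordProfile{
+//			Password: to.StringPtr(password),
+//		},
+//	})
+//	_ = user
+//	_ = err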
+func (client UsersClient) Create(ctx context.Context, parameters UserCreateParameters) (result User, err error) {
+	if tracing.IsEnabled() {
+		ctx = tracing.StartSpan(ctx, fqdn+"/UsersClient.Create")
+		defer func() {
+			sc := -1
+			if result.Response.Response != nil {
+				sc = result.Response.Response.StatusCode
+			}
+			tracing.EndSpan(ctx, sc, err)
+		}()
+	}
+	if err := validation.Validate([]validation.Validation{
+		{TargetValue: parameters,
+			Constraints: []validation.Constraint{{Target: "parameters.AccountEnabled", Name: validation.Null, Rule: true, Chain: nil},
+				{Target: "parameters.DisplayName", Name: validation.Null, Rule: true, Chain: nil},
+				{Target: "parameters.PasswordProfile", Name: validation.Null, Rule: true,
+					Chain: []validation.Constraint{{Target: "parameters.PasswordProfile.Password", Name: validation.Null, Rule: true, Chain: nil}}},
+				{Target: "parameters.UserPrincipalName", Name: validation.Null, Rule: true, Chain: nil},
+				{Target: "parameters.MailNickname", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil {
+		return result, validation.NewError("graphrbac.UsersClient", "Create", err.Error())
+	}
+
+	req, err := client.CreatePreparer(ctx, parameters)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.UsersClient", "Create", nil, "Failure preparing request")
+		return
+	}
+
+	resp, err := client.CreateSender(req)
+	if err != nil {
+		result.Response = autorest.Response{Response: resp}
+		err = autorest.NewErrorWithError(err, "graphrbac.UsersClient", "Create", resp, "Failure sending request")
+		return
+	}
+
+	result, err = client.CreateResponder(resp)
+	if err != nil {
+		err = autorest.NewErrorWithError(err, "graphrbac.UsersClient", "Create", resp, "Failure responding to request")
+	}
+
+	return
+}
+
+// CreatePreparer prepares the Create request.
+func (client UsersClient) CreatePreparer(ctx context.Context, parameters UserCreateParameters) (*http.Request, error) {
+	pathParameters := map[string]interface{}{
+		"tenantID": autorest.Encode("path", client.TenantID),
+	}
+
+	const APIVersion = "1.6"
+	queryParameters := map[string]interface{}{
+		"api-version": APIVersion,
+	}
+
+	preparer := autorest.CreatePreparer(
+		autorest.AsContentType("application/json; charset=utf-8"),
+		autorest.AsPost(),
+		autorest.WithBaseURL(client.BaseURI),
+		autorest.WithPathParameters("/{tenantID}/users", pathParameters),
+		autorest.WithJSON(parameters),
+		autorest.WithQueryParameters(queryParameters))
+	return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// CreateSender sends the Create request. The method will close the
+// http.Response Body if it receives an error.
+func (client UsersClient) CreateSender(req *http.Request) (*http.Response, error) {
+	sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+	return autorest.SendWithSender(client, req, sd...)
+}
+
+// CreateResponder handles the response to the Create request. The method always
+// closes the http.Response Body.
+func (client UsersClient) CreateResponder(resp *http.Response) (result User, err error) {
+	err = autorest.Respond(
+		resp,
+		client.ByInspecting(),
+		azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated),
+		autorest.ByUnmarshallingJSON(&result),
+		autorest.ByClosing())
+	result.Response = autorest.Response{Response: resp}
+	return
+}
+
+// Delete deletes a user.
+// Parameters:
+// upnOrObjectID - the object ID or principal name of the user to delete.
+func (client UsersClient) Delete(ctx context.Context, upnOrObjectID string) (result autorest.Response, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/UsersClient.Delete") + defer func() { + sc := -1 + if result.Response != nil { + sc = result.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.DeletePreparer(ctx, upnOrObjectID) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.UsersClient", "Delete", nil, "Failure preparing request") + return + } + + resp, err := client.DeleteSender(req) + if err != nil { + result.Response = resp + err = autorest.NewErrorWithError(err, "graphrbac.UsersClient", "Delete", resp, "Failure sending request") + return + } + + result, err = client.DeleteResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.UsersClient", "Delete", resp, "Failure responding to request") + } + + return +} + +// DeletePreparer prepares the Delete request. +func (client UsersClient) DeletePreparer(ctx context.Context, upnOrObjectID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "tenantID": autorest.Encode("path", client.TenantID), + "upnOrObjectId": autorest.Encode("path", upnOrObjectID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/users/{upnOrObjectId}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteSender sends the Delete request. The method will close the +// http.Response Body if it receives an error. +func (client UsersClient) DeleteSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// DeleteResponder handles the response to the Delete request. The method always +// closes the http.Response Body. +func (client UsersClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// Get gets user information from the directory. +// Parameters: +// upnOrObjectID - the object ID or principal name of the user for which to get information. 
+func (client UsersClient) Get(ctx context.Context, upnOrObjectID string) (result User, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/UsersClient.Get") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.GetPreparer(ctx, upnOrObjectID) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.UsersClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.UsersClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.UsersClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. +func (client UsersClient) GetPreparer(ctx context.Context, upnOrObjectID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "tenantID": autorest.Encode("path", client.TenantID), + "upnOrObjectId": autorest.Encode("path", upnOrObjectID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/users/{upnOrObjectId}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client UsersClient) GetSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client UsersClient) GetResponder(resp *http.Response) (result User, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetMemberGroups gets a collection that contains the object IDs of the groups of which the user is a member. +// Parameters: +// objectID - the object ID of the user for which to get group membership. +// parameters - user filtering parameters. 
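
Each generated operation decomposes into the three exported stages shown above, so Get is equivalent to composing them by hand. A sketch, with the UPN as a placeholder:

func getUser(ctx context.Context, client graphrbac.UsersClient, upn string) (graphrbac.User, error) {
	req, err := client.GetPreparer(ctx, upn) // builds GET /{tenantID}/users/{upnOrObjectId}
	if err != nil {
		return graphrbac.User{}, err
	}
	resp, err := client.GetSender(req) // applies the retry send decorators
	if err != nil {
		return graphrbac.User{}, err
	}
	return client.GetResponder(resp) // errors unless 200, then unmarshals the User
}
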
+func (client UsersClient) GetMemberGroups(ctx context.Context, objectID string, parameters UserGetMemberGroupsParameters) (result UserGetMemberGroupsResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/UsersClient.GetMemberGroups") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.SecurityEnabledOnly", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { + return result, validation.NewError("graphrbac.UsersClient", "GetMemberGroups", err.Error()) + } + + req, err := client.GetMemberGroupsPreparer(ctx, objectID, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.UsersClient", "GetMemberGroups", nil, "Failure preparing request") + return + } + + resp, err := client.GetMemberGroupsSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.UsersClient", "GetMemberGroups", resp, "Failure sending request") + return + } + + result, err = client.GetMemberGroupsResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.UsersClient", "GetMemberGroups", resp, "Failure responding to request") + } + + return +} + +// GetMemberGroupsPreparer prepares the GetMemberGroups request. +func (client UsersClient) GetMemberGroupsPreparer(ctx context.Context, objectID string, parameters UserGetMemberGroupsParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "objectId": autorest.Encode("path", objectID), + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/users/{objectId}/getMemberGroups", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetMemberGroupsSender sends the GetMemberGroups request. The method will close the +// http.Response Body if it receives an error. +func (client UsersClient) GetMemberGroupsSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// GetMemberGroupsResponder handles the response to the GetMemberGroups request. The method always +// closes the http.Response Body. +func (client UsersClient) GetMemberGroupsResponder(resp *http.Response) (result UserGetMemberGroupsResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List gets list of users for the current tenant. +// Parameters: +// filter - the filter to apply to the operation. 
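
A short usage sketch for GetMemberGroups above. SecurityEnabledOnly is the one required field, per the validation constraint; the object ID is a placeholder, and the Value field name is taken from the generated result type:

groups, err := client.GetMemberGroups(ctx, "<user object ID>", graphrbac.UserGetMemberGroupsParameters{
	SecurityEnabledOnly: to.BoolPtr(false), // required; false includes non-security groups
})
if err == nil && groups.Value != nil {
	for _, groupID := range *groups.Value {
		fmt.Println(groupID)
	}
}
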
+func (client UsersClient) List(ctx context.Context, filter string) (result UserListResultPage, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/UsersClient.List") + defer func() { + sc := -1 + if result.ulr.Response.Response != nil { + sc = result.ulr.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + result.fn = func(ctx context.Context, lastResult UserListResult) (UserListResult, error) { + if lastResult.OdataNextLink == nil || len(to.String(lastResult.OdataNextLink)) < 1 { + return UserListResult{}, nil + } + return client.ListNext(ctx, *lastResult.OdataNextLink) + } + req, err := client.ListPreparer(ctx, filter) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.UsersClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.ulr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.UsersClient", "List", resp, "Failure sending request") + return + } + + result.ulr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.UsersClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client UsersClient) ListPreparer(ctx context.Context, filter string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(filter) > 0 { + queryParameters["$filter"] = autorest.Encode("query", filter) + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/users", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client UsersClient) ListSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client UsersClient) ListResponder(resp *http.Response) (result UserListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client UsersClient) ListComplete(ctx context.Context, filter string) (result UserListResultIterator, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/UsersClient.List") + defer func() { + sc := -1 + if result.Response().Response.Response != nil { + sc = result.page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + result.page, err = client.List(ctx, filter) + return +} + +// ListNext gets a list of users for the current tenant. +// Parameters: +// nextLink - next link for the list operation. 
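
ListComplete above wraps List in an iterator that crosses page boundaries transparently. A usage sketch, where an empty filter lists all users:

iter, err := client.ListComplete(ctx, "")
for err == nil && iter.NotDone() {
	u := iter.Value()
	fmt.Println(to.String(u.UserPrincipalName))
	err = iter.NextWithContext(ctx) // follows OdataNextLink via ListNext as needed
}
if err != nil {
	log.Fatal(err)
}
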
+func (client UsersClient) ListNext(ctx context.Context, nextLink string) (result UserListResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/UsersClient.ListNext") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.ListNextPreparer(ctx, nextLink) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.UsersClient", "ListNext", nil, "Failure preparing request") + return + } + + resp, err := client.ListNextSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "graphrbac.UsersClient", "ListNext", resp, "Failure sending request") + return + } + + result, err = client.ListNextResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.UsersClient", "ListNext", resp, "Failure responding to request") + } + + return +} + +// ListNextPreparer prepares the ListNext request. +func (client UsersClient) ListNextPreparer(ctx context.Context, nextLink string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "nextLink": nextLink, + "tenantID": autorest.Encode("path", client.TenantID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/{nextLink}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListNextSender sends the ListNext request. The method will close the +// http.Response Body if it receives an error. +func (client UsersClient) ListNextSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// ListNextResponder handles the response to the ListNext request. The method always +// closes the http.Response Body. +func (client UsersClient) ListNextResponder(resp *http.Response) (result UserListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Update updates a user. +// Parameters: +// upnOrObjectID - the object ID or principal name of the user to update. +// parameters - parameters to update an existing user. 
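
ListNext above is rarely called directly: the page function wired up in List invokes it with the previous page's OdataNextLink until the link is exhausted. Callers who want page granularity rather than the flat iterator can walk pages themselves; a sketch, assuming the page type follows the same NextWithContext pattern as the other generated pages in this diff:

page, err := client.List(ctx, "")
for err == nil && page.NotDone() {
	for _, u := range page.Values() {
		_ = u // process each graphrbac.User in this page
	}
	err = page.NextWithContext(ctx) // fetches the next page, or stops when no link remains
}
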
+func (client UsersClient) Update(ctx context.Context, upnOrObjectID string, parameters UserUpdateParameters) (result autorest.Response, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/UsersClient.Update") + defer func() { + sc := -1 + if result.Response != nil { + sc = result.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.UpdatePreparer(ctx, upnOrObjectID, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.UsersClient", "Update", nil, "Failure preparing request") + return + } + + resp, err := client.UpdateSender(req) + if err != nil { + result.Response = resp + err = autorest.NewErrorWithError(err, "graphrbac.UsersClient", "Update", resp, "Failure sending request") + return + } + + result, err = client.UpdateResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "graphrbac.UsersClient", "Update", resp, "Failure responding to request") + } + + return +} + +// UpdatePreparer prepares the Update request. +func (client UsersClient) UpdatePreparer(ctx context.Context, upnOrObjectID string, parameters UserUpdateParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "tenantID": autorest.Encode("path", client.TenantID), + "upnOrObjectId": autorest.Encode("path", upnOrObjectID), + } + + const APIVersion = "1.6" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{tenantID}/users/{upnOrObjectId}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateSender sends the Update request. The method will close the +// http.Response Body if it receives an error. +func (client UsersClient) UpdateSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// UpdateResponder handles the response to the Update request. The method always +// closes the http.Response Body. +func (client UsersClient) UpdateResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/version.go b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/version.go new file mode 100644 index 000000000..b8397dc96 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac/version.go @@ -0,0 +1,30 @@ +package graphrbac + +import "github.com/Azure/azure-sdk-for-go/version" + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +// UserAgent returns the UserAgent string to use when sending http.Requests. +func UserAgent() string { + return "Azure-SDK-For-Go/" + version.Number + " graphrbac/1.6" +} + +// Version returns the semantic version (see http://semver.org) of the client. +func Version() string { + return version.Number +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/deploymentoperations.go b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/deploymentoperations.go index 215e1ffba..3047aa00b 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/deploymentoperations.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/deploymentoperations.go @@ -22,6 +22,7 @@ import ( "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "github.com/Azure/go-autorest/autorest/validation" + "github.com/Azure/go-autorest/tracing" "net/http" ) @@ -46,11 +47,21 @@ func NewDeploymentOperationsClientWithBaseURI(baseURI string, subscriptionID str // deploymentName - the name of the deployment. // operationID - operation Id. func (client DeploymentOperationsClient) Get(ctx context.Context, resourceGroupName string, deploymentName string, operationID string) (result DeploymentOperation, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DeploymentOperationsClient.Get") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\p{L}\._\(\)\w]+$`, Chain: nil}}}}); err != nil { return result, validation.NewError("resources.DeploymentOperationsClient", "Get", err.Error()) } @@ -100,8 +111,8 @@ func (client DeploymentOperationsClient) GetPreparer(ctx context.Context, resour // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client DeploymentOperationsClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // GetResponder handles the response to the Get request. 
The method always @@ -123,11 +134,21 @@ func (client DeploymentOperationsClient) GetResponder(resp *http.Response) (resu // deploymentName - the name of the deployment. // top - query parameters. func (client DeploymentOperationsClient) List(ctx context.Context, resourceGroupName string, deploymentName string, top *int32) (result DeploymentOperationsListResultPage, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DeploymentOperationsClient.List") + defer func() { + sc := -1 + if result.dolr.Response.Response != nil { + sc = result.dolr.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\p{L}\._\(\)\w]+$`, Chain: nil}}}}); err != nil { return result, validation.NewError("resources.DeploymentOperationsClient", "List", err.Error()) } @@ -180,8 +201,8 @@ func (client DeploymentOperationsClient) ListPreparer(ctx context.Context, resou // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client DeploymentOperationsClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // ListResponder handles the response to the List request. The method always @@ -198,8 +219,8 @@ func (client DeploymentOperationsClient) ListResponder(resp *http.Response) (res } // listNextResults retrieves the next set of results, if any. -func (client DeploymentOperationsClient) listNextResults(lastResults DeploymentOperationsListResult) (result DeploymentOperationsListResult, err error) { - req, err := lastResults.deploymentOperationsListResultPreparer() +func (client DeploymentOperationsClient) listNextResults(ctx context.Context, lastResults DeploymentOperationsListResult) (result DeploymentOperationsListResult, err error) { + req, err := lastResults.deploymentOperationsListResultPreparer(ctx) if err != nil { return result, autorest.NewErrorWithError(err, "resources.DeploymentOperationsClient", "listNextResults", nil, "Failure preparing next results request") } @@ -220,6 +241,16 @@ func (client DeploymentOperationsClient) listNextResults(lastResults DeploymentO // ListComplete enumerates all values, automatically crossing page boundaries as required. 
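
The Sender changes in this file replace the hard-coded decorator list with autorest.GetSendDecorators, which falls back to the listed defaults but lets a caller supply per-request send decorators through the context. A sketch, assuming go-autorest's WithSendDecorators helper and a DeploymentOperationsClient named opsClient; the retry policy shown is a hypothetical override:

ctx := autorest.WithSendDecorators(context.Background(), []autorest.SendDecorator{
	autorest.DoRetryForAttempts(1, time.Second), // hypothetical: a single attempt with 1s backoff
})
ops, err := opsClient.List(ctx, "my-resource-group", "my-deployment", nil) // nil top: no page-size cap
_ = ops
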
func (client DeploymentOperationsClient) ListComplete(ctx context.Context, resourceGroupName string, deploymentName string, top *int32) (result DeploymentOperationsListResultIterator, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DeploymentOperationsClient.List") + defer func() { + sc := -1 + if result.Response().Response.Response != nil { + sc = result.page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } result.page, err = client.List(ctx, resourceGroupName, deploymentName, top) return } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/deployments.go b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/deployments.go index db4c7ca6a..184c1fecc 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/deployments.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/deployments.go @@ -22,6 +22,7 @@ import ( "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "github.com/Azure/go-autorest/autorest/validation" + "github.com/Azure/go-autorest/tracing" "net/http" ) @@ -40,16 +41,98 @@ func NewDeploymentsClientWithBaseURI(baseURI string, subscriptionID string) Depl return DeploymentsClient{NewWithBaseURI(baseURI, subscriptionID)} } +// CalculateTemplateHash calculate the hash of the given template. +// Parameters: +// templateParameter - the template provided to calculate hash. +func (client DeploymentsClient) CalculateTemplateHash(ctx context.Context, templateParameter interface{}) (result TemplateHashResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DeploymentsClient.CalculateTemplateHash") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + req, err := client.CalculateTemplateHashPreparer(ctx, templateParameter) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "CalculateTemplateHash", nil, "Failure preparing request") + return + } + + resp, err := client.CalculateTemplateHashSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "CalculateTemplateHash", resp, "Failure sending request") + return + } + + result, err = client.CalculateTemplateHashResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "CalculateTemplateHash", resp, "Failure responding to request") + } + + return +} + +// CalculateTemplateHashPreparer prepares the CalculateTemplateHash request. +func (client DeploymentsClient) CalculateTemplateHashPreparer(ctx context.Context, templateParameter interface{}) (*http.Request, error) { + const APIVersion = "2016-02-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPath("/providers/Microsoft.Resources/calculateTemplateHash"), + autorest.WithJSON(templateParameter), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CalculateTemplateHashSender sends the CalculateTemplateHash request. 
The method will close the +// http.Response Body if it receives an error. +func (client DeploymentsClient) CalculateTemplateHashSender(req *http.Request) (*http.Response, error) { + sd := autorest.GetSendDecorators(req.Context(), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + return autorest.SendWithSender(client, req, sd...) +} + +// CalculateTemplateHashResponder handles the response to the CalculateTemplateHash request. The method always +// closes the http.Response Body. +func (client DeploymentsClient) CalculateTemplateHashResponder(resp *http.Response) (result TemplateHashResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + // Cancel cancel a currently running template deployment. // Parameters: // resourceGroupName - the name of the resource group. The name is case insensitive. // deploymentName - the name of the deployment. func (client DeploymentsClient) Cancel(ctx context.Context, resourceGroupName string, deploymentName string) (result autorest.Response, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DeploymentsClient.Cancel") + defer func() { + sc := -1 + if result.Response != nil { + sc = result.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\p{L}\._\(\)\w]+$`, Chain: nil}}}}); err != nil { return result, validation.NewError("resources.DeploymentsClient", "Cancel", err.Error()) } @@ -98,8 +181,8 @@ func (client DeploymentsClient) CancelPreparer(ctx context.Context, resourceGrou // CancelSender sends the Cancel request. The method will close the // http.Response Body if it receives an error. func (client DeploymentsClient) CancelSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // CancelResponder handles the response to the Cancel request. The method always @@ -119,11 +202,21 @@ func (client DeploymentsClient) CancelResponder(resp *http.Response) (result aut // resourceGroupName - the name of the resource group to check. The name is case insensitive. // deploymentName - the name of the deployment. 
func (client DeploymentsClient) CheckExistence(ctx context.Context, resourceGroupName string, deploymentName string) (result autorest.Response, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DeploymentsClient.CheckExistence") + defer func() { + sc := -1 + if result.Response != nil { + sc = result.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\p{L}\._\(\)\w]+$`, Chain: nil}}}}); err != nil { return result, validation.NewError("resources.DeploymentsClient", "CheckExistence", err.Error()) } @@ -172,8 +265,8 @@ func (client DeploymentsClient) CheckExistencePreparer(ctx context.Context, reso // CheckExistenceSender sends the CheckExistence request. The method will close the // http.Response Body if it receives an error. func (client DeploymentsClient) CheckExistenceSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // CheckExistenceResponder handles the response to the CheckExistence request. The method always @@ -194,11 +287,21 @@ func (client DeploymentsClient) CheckExistenceResponder(resp *http.Response) (re // deploymentName - the name of the deployment. // parameters - additional parameters supplied to the operation. func (client DeploymentsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, deploymentName string, parameters Deployment) (result DeploymentsCreateOrUpdateFuture, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DeploymentsClient.CreateOrUpdate") + defer func() { + sc := -1 + if result.Response() != nil { + sc = result.Response().StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}, + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\p{L}\._\(\)\w]+$`, Chain: nil}}}, {TargetValue: parameters, Constraints: []validation.Constraint{{Target: "parameters.Properties", Name: validation.Null, Rule: false, Chain: []validation.Constraint{{Target: "parameters.Properties.TemplateLink", Name: validation.Null, Rule: false, @@ -250,9 +353,9 @@ func (client DeploymentsClient) CreateOrUpdatePreparer(ctx context.Context, reso // CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the // http.Response Body if it receives an error. 
func (client DeploymentsClient) CreateOrUpdateSender(req *http.Request) (future DeploymentsCreateOrUpdateFuture, err error) { + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) var resp *http.Response - resp, err = autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + resp, err = autorest.SendWithSender(client, req, sd...) if err != nil { return } @@ -278,11 +381,21 @@ func (client DeploymentsClient) CreateOrUpdateResponder(resp *http.Response) (re // resourceGroupName - the name of the resource group. The name is case insensitive. // deploymentName - the name of the deployment to be deleted. func (client DeploymentsClient) Delete(ctx context.Context, resourceGroupName string, deploymentName string) (result DeploymentsDeleteFuture, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DeploymentsClient.Delete") + defer func() { + sc := -1 + if result.Response() != nil { + sc = result.Response().StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\p{L}\._\(\)\w]+$`, Chain: nil}}}}); err != nil { return result, validation.NewError("resources.DeploymentsClient", "Delete", err.Error()) } @@ -325,9 +438,9 @@ func (client DeploymentsClient) DeletePreparer(ctx context.Context, resourceGrou // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. func (client DeploymentsClient) DeleteSender(req *http.Request) (future DeploymentsDeleteFuture, err error) { + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) var resp *http.Response - resp, err = autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + resp, err = autorest.SendWithSender(client, req, sd...) if err != nil { return } @@ -352,11 +465,21 @@ func (client DeploymentsClient) DeleteResponder(resp *http.Response) (result aut // resourceGroupName - the name of the resource group. The name is case insensitive. // deploymentName - the name of the deployment. 
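
CreateOrUpdate and Delete above return futures rather than final results, because ARM template deployments are long-running. A caller-side sketch, assuming the generated future's WaitForCompletionRef and Result helpers; the deployment name is a placeholder:

func deployAndWait(ctx context.Context, client resources.DeploymentsClient, rg string, d resources.Deployment) (resources.DeploymentExtended, error) {
	future, err := client.CreateOrUpdate(ctx, rg, "my-deployment", d)
	if err != nil {
		return resources.DeploymentExtended{}, err
	}
	// Poll until the long-running deployment reaches a terminal state.
	if err := future.WaitForCompletionRef(ctx, client.Client); err != nil {
		return resources.DeploymentExtended{}, err
	}
	return future.Result(client) // the final DeploymentExtended
}
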
func (client DeploymentsClient) ExportTemplate(ctx context.Context, resourceGroupName string, deploymentName string) (result DeploymentExportResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DeploymentsClient.ExportTemplate") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\p{L}\._\(\)\w]+$`, Chain: nil}}}}); err != nil { return result, validation.NewError("resources.DeploymentsClient", "ExportTemplate", err.Error()) } @@ -405,8 +528,8 @@ func (client DeploymentsClient) ExportTemplatePreparer(ctx context.Context, reso // ExportTemplateSender sends the ExportTemplate request. The method will close the // http.Response Body if it receives an error. func (client DeploymentsClient) ExportTemplateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // ExportTemplateResponder handles the response to the ExportTemplate request. The method always @@ -427,11 +550,21 @@ func (client DeploymentsClient) ExportTemplateResponder(resp *http.Response) (re // resourceGroupName - the name of the resource group to get. The name is case insensitive. // deploymentName - the name of the deployment. func (client DeploymentsClient) Get(ctx context.Context, resourceGroupName string, deploymentName string) (result DeploymentExtended, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DeploymentsClient.Get") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\p{L}\._\(\)\w]+$`, Chain: nil}}}}); err != nil { return result, validation.NewError("resources.DeploymentsClient", "Get", err.Error()) } @@ -480,8 +613,8 @@ func (client DeploymentsClient) GetPreparer(ctx context.Context, resourceGroupNa // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client DeploymentsClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // GetResponder handles the response to the Get request. 
The method always @@ -503,11 +636,21 @@ func (client DeploymentsClient) GetResponder(resp *http.Response) (result Deploy // filter - the filter to apply on the operation. // top - query parameters. If null is passed returns all deployments. func (client DeploymentsClient) List(ctx context.Context, resourceGroupName string, filter string, top *int32) (result DeploymentListResultPage, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DeploymentsClient.List") + defer func() { + sc := -1 + if result.dlr.Response.Response != nil { + sc = result.dlr.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\p{L}\._\(\)\w]+$`, Chain: nil}}}}); err != nil { return result, validation.NewError("resources.DeploymentsClient", "List", err.Error()) } @@ -562,8 +705,8 @@ func (client DeploymentsClient) ListPreparer(ctx context.Context, resourceGroupN // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client DeploymentsClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // ListResponder handles the response to the List request. The method always @@ -580,8 +723,8 @@ func (client DeploymentsClient) ListResponder(resp *http.Response) (result Deplo } // listNextResults retrieves the next set of results, if any. -func (client DeploymentsClient) listNextResults(lastResults DeploymentListResult) (result DeploymentListResult, err error) { - req, err := lastResults.deploymentListResultPreparer() +func (client DeploymentsClient) listNextResults(ctx context.Context, lastResults DeploymentListResult) (result DeploymentListResult, err error) { + req, err := lastResults.deploymentListResultPreparer(ctx) if err != nil { return result, autorest.NewErrorWithError(err, "resources.DeploymentsClient", "listNextResults", nil, "Failure preparing next results request") } @@ -602,6 +745,16 @@ func (client DeploymentsClient) listNextResults(lastResults DeploymentListResult // ListComplete enumerates all values, automatically crossing page boundaries as required. func (client DeploymentsClient) ListComplete(ctx context.Context, resourceGroupName string, filter string, top *int32) (result DeploymentListResultIterator, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DeploymentsClient.List") + defer func() { + sc := -1 + if result.Response().Response.Response != nil { + sc = result.page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } result.page, err = client.List(ctx, resourceGroupName, filter, top) return } @@ -612,11 +765,21 @@ func (client DeploymentsClient) ListComplete(ctx context.Context, resourceGroupN // deploymentName - the name of the deployment. // parameters - deployment to validate. 
func (client DeploymentsClient) Validate(ctx context.Context, resourceGroupName string, deploymentName string, parameters Deployment) (result DeploymentValidateResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DeploymentsClient.Validate") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}, + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\p{L}\._\(\)\w]+$`, Chain: nil}}}, {TargetValue: parameters, Constraints: []validation.Constraint{{Target: "parameters.Properties", Name: validation.Null, Rule: false, Chain: []validation.Constraint{{Target: "parameters.Properties.TemplateLink", Name: validation.Null, Rule: false, @@ -674,8 +837,8 @@ func (client DeploymentsClient) ValidatePreparer(ctx context.Context, resourceGr // ValidateSender sends the Validate request. The method will close the // http.Response Body if it receives an error. func (client DeploymentsClient) ValidateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // ValidateResponder handles the response to the Validate request. The method always diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/groups.go b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/groups.go index f4d05a1b4..9858b639b 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/groups.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/groups.go @@ -22,6 +22,7 @@ import ( "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "github.com/Azure/go-autorest/autorest/validation" + "github.com/Azure/go-autorest/tracing" "net/http" ) @@ -44,11 +45,21 @@ func NewGroupsClientWithBaseURI(baseURI string, subscriptionID string) GroupsCli // Parameters: // resourceGroupName - the name of the resource group to check. The name is case insensitive. 
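
The recurring validation change in this diff widens the resourceGroupName pattern from `^[-\w\._\(\)]+$` to `^[-\p{L}\._\(\)\w]+$`. Go's \w class is ASCII-only, so the added \p{L} admits resource group names containing non-ASCII letters. A quick check of the new pattern:

re := regexp.MustCompile(`^[-\p{L}\._\(\)\w]+$`)
fmt.Println(re.MatchString("prod-rg-01"))      // true, as with the old pattern
fmt.Println(re.MatchString("ressources-café")) // true now: 'é' matches \p{L} but not \w
fmt.Println(re.MatchString("bad name"))        // false: spaces are still rejected
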
func (client GroupsClient) CheckExistence(ctx context.Context, resourceGroupName string) (result autorest.Response, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.CheckExistence") + defer func() { + sc := -1 + if result.Response != nil { + sc = result.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\p{L}\._\(\)\w]+$`, Chain: nil}}}}); err != nil { return result, validation.NewError("resources.GroupsClient", "CheckExistence", err.Error()) } @@ -96,8 +107,8 @@ func (client GroupsClient) CheckExistencePreparer(ctx context.Context, resourceG // CheckExistenceSender sends the CheckExistence request. The method will close the // http.Response Body if it receives an error. func (client GroupsClient) CheckExistenceSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // CheckExistenceResponder handles the response to the CheckExistence request. The method always @@ -117,11 +128,21 @@ func (client GroupsClient) CheckExistenceResponder(resp *http.Response) (result // resourceGroupName - the name of the resource group to be created or updated. // parameters - parameters supplied to the create or update resource group service operation. func (client GroupsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, parameters Group) (result Group, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.CreateOrUpdate") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}, + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\p{L}\._\(\)\w]+$`, Chain: nil}}}, {TargetValue: parameters, Constraints: []validation.Constraint{{Target: "parameters.Location", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { return result, validation.NewError("resources.GroupsClient", "CreateOrUpdate", err.Error()) @@ -160,6 +181,7 @@ func (client GroupsClient) CreateOrUpdatePreparer(ctx context.Context, resourceG "api-version": APIVersion, } + parameters.ID = nil preparer := autorest.CreatePreparer( autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPut(), @@ -173,8 +195,8 @@ func (client GroupsClient) CreateOrUpdatePreparer(ctx context.Context, resourceG // CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the // http.Response Body if it receives an error. 
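
The `parameters.ID = nil` line added to CreateOrUpdatePreparer above (and to PatchPreparer later in this file) clears the service-assigned ID before the body is marshaled, so a Group fetched from the service can be sent back without the read-only field in the payload. A sketch; the group name and tag are placeholders:

func tagGroup(ctx context.Context, client resources.GroupsClient) error {
	g, err := client.Get(ctx, "my-resource-group")
	if err != nil {
		return err
	}
	g.Tags = map[string]*string{"environment": to.StringPtr("dev")}
	// The fetched Group can be passed back as-is: the preparer now strips
	// the read-only ID from the request body before marshaling.
	_, err = client.CreateOrUpdate(ctx, "my-resource-group", g)
	return err
}
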
func (client GroupsClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always @@ -194,11 +216,21 @@ func (client GroupsClient) CreateOrUpdateResponder(resp *http.Response) (result // Parameters: // resourceGroupName - the name of the resource group to be deleted. The name is case insensitive. func (client GroupsClient) Delete(ctx context.Context, resourceGroupName string) (result GroupsDeleteFuture, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.Delete") + defer func() { + sc := -1 + if result.Response() != nil { + sc = result.Response().StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\p{L}\._\(\)\w]+$`, Chain: nil}}}}); err != nil { return result, validation.NewError("resources.GroupsClient", "Delete", err.Error()) } @@ -240,9 +272,9 @@ func (client GroupsClient) DeletePreparer(ctx context.Context, resourceGroupName // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. func (client GroupsClient) DeleteSender(req *http.Request) (future GroupsDeleteFuture, err error) { + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) var resp *http.Response - resp, err = autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + resp, err = autorest.SendWithSender(client, req, sd...) if err != nil { return } @@ -267,11 +299,21 @@ func (client GroupsClient) DeleteResponder(resp *http.Response) (result autorest // resourceGroupName - the name of the resource group to be created or updated. // parameters - parameters supplied to the export template resource group operation. 
func (client GroupsClient) ExportTemplate(ctx context.Context, resourceGroupName string, parameters ExportTemplateRequest) (result GroupExportResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.ExportTemplate") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\p{L}\._\(\)\w]+$`, Chain: nil}}}}); err != nil { return result, validation.NewError("resources.GroupsClient", "ExportTemplate", err.Error()) } @@ -321,8 +363,8 @@ func (client GroupsClient) ExportTemplatePreparer(ctx context.Context, resourceG // ExportTemplateSender sends the ExportTemplate request. The method will close the // http.Response Body if it receives an error. func (client GroupsClient) ExportTemplateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // ExportTemplateResponder handles the response to the ExportTemplate request. The method always @@ -342,11 +384,21 @@ func (client GroupsClient) ExportTemplateResponder(resp *http.Response) (result // Parameters: // resourceGroupName - the name of the resource group to get. The name is case insensitive. func (client GroupsClient) Get(ctx context.Context, resourceGroupName string) (result Group, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.Get") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\p{L}\._\(\)\w]+$`, Chain: nil}}}}); err != nil { return result, validation.NewError("resources.GroupsClient", "Get", err.Error()) } @@ -394,8 +446,8 @@ func (client GroupsClient) GetPreparer(ctx context.Context, resourceGroupName st // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client GroupsClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // GetResponder handles the response to the Get request. 
The method always @@ -416,6 +468,16 @@ func (client GroupsClient) GetResponder(resp *http.Response) (result Group, err // filter - the filter to apply on the operation. // top - query parameters. If null is passed returns all resource groups. func (client GroupsClient) List(ctx context.Context, filter string, top *int32) (result GroupListResultPage, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.List") + defer func() { + sc := -1 + if result.glr.Response.Response != nil { + sc = result.glr.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } result.fn = client.listNextResults req, err := client.ListPreparer(ctx, filter, top) if err != nil { @@ -466,8 +528,8 @@ func (client GroupsClient) ListPreparer(ctx context.Context, filter string, top // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client GroupsClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // ListResponder handles the response to the List request. The method always @@ -484,8 +546,8 @@ func (client GroupsClient) ListResponder(resp *http.Response) (result GroupListR } // listNextResults retrieves the next set of results, if any. -func (client GroupsClient) listNextResults(lastResults GroupListResult) (result GroupListResult, err error) { - req, err := lastResults.groupListResultPreparer() +func (client GroupsClient) listNextResults(ctx context.Context, lastResults GroupListResult) (result GroupListResult, err error) { + req, err := lastResults.groupListResultPreparer(ctx) if err != nil { return result, autorest.NewErrorWithError(err, "resources.GroupsClient", "listNextResults", nil, "Failure preparing next results request") } @@ -506,6 +568,16 @@ func (client GroupsClient) listNextResults(lastResults GroupListResult) (result // ListComplete enumerates all values, automatically crossing page boundaries as required. func (client GroupsClient) ListComplete(ctx context.Context, filter string, top *int32) (result GroupListResultIterator, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.List") + defer func() { + sc := -1 + if result.Response().Response.Response != nil { + sc = result.page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } result.page, err = client.List(ctx, filter, top) return } @@ -517,11 +589,21 @@ func (client GroupsClient) ListComplete(ctx context.Context, filter string, top // expand - the $expand query parameter // top - query parameters. If null is passed returns all resource groups. 
func (client GroupsClient) ListResources(ctx context.Context, resourceGroupName string, filter string, expand string, top *int32) (result ListResultPage, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.ListResources") + defer func() { + sc := -1 + if result.lr.Response.Response != nil { + sc = result.lr.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\p{L}\._\(\)\w]+$`, Chain: nil}}}}); err != nil { return result, validation.NewError("resources.GroupsClient", "ListResources", err.Error()) } @@ -579,8 +661,8 @@ func (client GroupsClient) ListResourcesPreparer(ctx context.Context, resourceGr // ListResourcesSender sends the ListResources request. The method will close the // http.Response Body if it receives an error. func (client GroupsClient) ListResourcesSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // ListResourcesResponder handles the response to the ListResources request. The method always @@ -597,8 +679,8 @@ func (client GroupsClient) ListResourcesResponder(resp *http.Response) (result L } // listResourcesNextResults retrieves the next set of results, if any. -func (client GroupsClient) listResourcesNextResults(lastResults ListResult) (result ListResult, err error) { - req, err := lastResults.listResultPreparer() +func (client GroupsClient) listResourcesNextResults(ctx context.Context, lastResults ListResult) (result ListResult, err error) { + req, err := lastResults.listResultPreparer(ctx) if err != nil { return result, autorest.NewErrorWithError(err, "resources.GroupsClient", "listResourcesNextResults", nil, "Failure preparing next results request") } @@ -619,6 +701,16 @@ func (client GroupsClient) listResourcesNextResults(lastResults ListResult) (res // ListResourcesComplete enumerates all values, automatically crossing page boundaries as required. func (client GroupsClient) ListResourcesComplete(ctx context.Context, resourceGroupName string, filter string, expand string, top *int32) (result ListResultIterator, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.ListResources") + defer func() { + sc := -1 + if result.Response().Response.Response != nil { + sc = result.page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } result.page, err = client.ListResources(ctx, resourceGroupName, filter, expand, top) return } @@ -630,11 +722,21 @@ func (client GroupsClient) ListResourcesComplete(ctx context.Context, resourceGr // resourceGroupName - the name of the resource group to be created or updated. The name is case insensitive. // parameters - parameters supplied to the update state resource group service operation. 
func (client GroupsClient) Patch(ctx context.Context, resourceGroupName string, parameters Group) (result Group, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupsClient.Patch") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\p{L}\._\(\)\w]+$`, Chain: nil}}}}); err != nil { return result, validation.NewError("resources.GroupsClient", "Patch", err.Error()) } @@ -671,6 +773,7 @@ func (client GroupsClient) PatchPreparer(ctx context.Context, resourceGroupName "api-version": APIVersion, } + parameters.ID = nil preparer := autorest.CreatePreparer( autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPatch(), @@ -684,8 +787,8 @@ func (client GroupsClient) PatchPreparer(ctx context.Context, resourceGroupName // PatchSender sends the Patch request. The method will close the // http.Response Body if it receives an error. func (client GroupsClient) PatchSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // PatchResponder handles the response to the Patch request. The method always diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/models.go b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/models.go index bf84a5d4c..bfd5e4acb 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/models.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/models.go @@ -18,14 +18,19 @@ package resources // Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "encoding/json" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "github.com/Azure/go-autorest/autorest/date" "github.com/Azure/go-autorest/autorest/to" + "github.com/Azure/go-autorest/tracing" "net/http" ) +// The package's fully qualified name. +const fqdn = "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources" + // DeploymentMode enumerates the values for deployment mode. type DeploymentMode string @@ -80,6 +85,11 @@ type BasicDependency struct { ResourceName *string `json:"resourceName,omitempty"` } +// CloudError an error response for a resource management request. +type CloudError struct { + Error *ErrorResponse `json:"error,omitempty"` +} + // DebugSetting ... type DebugSetting struct { // DetailLevel - The debug detail level. @@ -114,7 +124,7 @@ type DeploymentExportResult struct { // DeploymentExtended deployment information. type DeploymentExtended struct { autorest.Response `json:"-"` - // ID - The ID of the deployment. + // ID - READ-ONLY; The ID of the deployment. 
ID *string `json:"id,omitempty"` // Name - The name of the deployment. Name *string `json:"name,omitempty"` @@ -143,14 +153,24 @@ type DeploymentListResultIterator struct { page DeploymentListResultPage } -// Next advances to the next value. If there was an error making +// NextWithContext advances to the next value. If there was an error making // the request the iterator does not advance and the error is returned. -func (iter *DeploymentListResultIterator) Next() error { +func (iter *DeploymentListResultIterator) NextWithContext(ctx context.Context) (err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DeploymentListResultIterator.NextWithContext") + defer func() { + sc := -1 + if iter.Response().Response.Response != nil { + sc = iter.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } iter.i++ if iter.i < len(iter.page.Values()) { return nil } - err := iter.page.Next() + err = iter.page.NextWithContext(ctx) if err != nil { iter.i-- return err @@ -159,6 +179,13 @@ func (iter *DeploymentListResultIterator) Next() error { return nil } +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +// Deprecated: Use NextWithContext() instead. +func (iter *DeploymentListResultIterator) Next() error { + return iter.NextWithContext(context.Background()) +} + // NotDone returns true if the enumeration should be started or is not yet complete. func (iter DeploymentListResultIterator) NotDone() bool { return iter.page.NotDone() && iter.i < len(iter.page.Values()) @@ -178,6 +205,11 @@ func (iter DeploymentListResultIterator) Value() DeploymentExtended { return iter.page.Values()[iter.i] } +// Creates a new instance of the DeploymentListResultIterator type. +func NewDeploymentListResultIterator(page DeploymentListResultPage) DeploymentListResultIterator { + return DeploymentListResultIterator{page: page} +} + // IsEmpty returns true if the ListResult contains no values. func (dlr DeploymentListResult) IsEmpty() bool { return dlr.Value == nil || len(*dlr.Value) == 0 @@ -185,11 +217,11 @@ func (dlr DeploymentListResult) IsEmpty() bool { // deploymentListResultPreparer prepares a request to retrieve the next set of results. // It returns nil if no more results exist. -func (dlr DeploymentListResult) deploymentListResultPreparer() (*http.Request, error) { +func (dlr DeploymentListResult) deploymentListResultPreparer(ctx context.Context) (*http.Request, error) { if dlr.NextLink == nil || len(to.String(dlr.NextLink)) < 1 { return nil, nil } - return autorest.Prepare(&http.Request{}, + return autorest.Prepare((&http.Request{}).WithContext(ctx), autorest.AsJSON(), autorest.AsGet(), autorest.WithBaseURL(to.String(dlr.NextLink))) @@ -197,14 +229,24 @@ func (dlr DeploymentListResult) deploymentListResultPreparer() (*http.Request, e // DeploymentListResultPage contains a page of DeploymentExtended values. type DeploymentListResultPage struct { - fn func(DeploymentListResult) (DeploymentListResult, error) + fn func(context.Context, DeploymentListResult) (DeploymentListResult, error) dlr DeploymentListResult } -// Next advances to the next page of values. If there was an error making +// NextWithContext advances to the next page of values. If there was an error making // the request the page does not advance and the error is returned. 
-func (page *DeploymentListResultPage) Next() error { - next, err := page.fn(page.dlr) +func (page *DeploymentListResultPage) NextWithContext(ctx context.Context) (err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DeploymentListResultPage.NextWithContext") + defer func() { + sc := -1 + if page.Response().Response.Response != nil { + sc = page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + next, err := page.fn(ctx, page.dlr) if err != nil { return err } @@ -212,6 +254,13 @@ func (page *DeploymentListResultPage) Next() error { return nil } +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +// Deprecated: Use NextWithContext() instead. +func (page *DeploymentListResultPage) Next() error { + return page.NextWithContext(context.Background()) +} + // NotDone returns true if the page enumeration should be started or is not yet complete. func (page DeploymentListResultPage) NotDone() bool { return !page.dlr.IsEmpty() @@ -230,6 +279,11 @@ func (page DeploymentListResultPage) Values() []DeploymentExtended { return *page.dlr.Value } +// Creates a new instance of the DeploymentListResultPage type. +func NewDeploymentListResultPage(getNextPage func(context.Context, DeploymentListResult) (DeploymentListResult, error)) DeploymentListResultPage { + return DeploymentListResultPage{fn: getNextPage} +} + // DeploymentOperation deployment operation information. type DeploymentOperation struct { autorest.Response `json:"-"` @@ -270,20 +324,31 @@ type DeploymentOperationsListResult struct { NextLink *string `json:"nextLink,omitempty"` } -// DeploymentOperationsListResultIterator provides access to a complete listing of DeploymentOperation values. +// DeploymentOperationsListResultIterator provides access to a complete listing of DeploymentOperation +// values. type DeploymentOperationsListResultIterator struct { i int page DeploymentOperationsListResultPage } -// Next advances to the next value. If there was an error making +// NextWithContext advances to the next value. If there was an error making // the request the iterator does not advance and the error is returned. -func (iter *DeploymentOperationsListResultIterator) Next() error { +func (iter *DeploymentOperationsListResultIterator) NextWithContext(ctx context.Context) (err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DeploymentOperationsListResultIterator.NextWithContext") + defer func() { + sc := -1 + if iter.Response().Response.Response != nil { + sc = iter.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } iter.i++ if iter.i < len(iter.page.Values()) { return nil } - err := iter.page.Next() + err = iter.page.NextWithContext(ctx) if err != nil { iter.i-- return err @@ -292,6 +357,13 @@ func (iter *DeploymentOperationsListResultIterator) Next() error { return nil } +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +// Deprecated: Use NextWithContext() instead. +func (iter *DeploymentOperationsListResultIterator) Next() error { + return iter.NextWithContext(context.Background()) +} + // NotDone returns true if the enumeration should be started or is not yet complete. 
func (iter DeploymentOperationsListResultIterator) NotDone() bool { return iter.page.NotDone() && iter.i < len(iter.page.Values()) @@ -311,6 +383,11 @@ func (iter DeploymentOperationsListResultIterator) Value() DeploymentOperation { return iter.page.Values()[iter.i] } +// Creates a new instance of the DeploymentOperationsListResultIterator type. +func NewDeploymentOperationsListResultIterator(page DeploymentOperationsListResultPage) DeploymentOperationsListResultIterator { + return DeploymentOperationsListResultIterator{page: page} +} + // IsEmpty returns true if the ListResult contains no values. func (dolr DeploymentOperationsListResult) IsEmpty() bool { return dolr.Value == nil || len(*dolr.Value) == 0 @@ -318,11 +395,11 @@ func (dolr DeploymentOperationsListResult) IsEmpty() bool { // deploymentOperationsListResultPreparer prepares a request to retrieve the next set of results. // It returns nil if no more results exist. -func (dolr DeploymentOperationsListResult) deploymentOperationsListResultPreparer() (*http.Request, error) { +func (dolr DeploymentOperationsListResult) deploymentOperationsListResultPreparer(ctx context.Context) (*http.Request, error) { if dolr.NextLink == nil || len(to.String(dolr.NextLink)) < 1 { return nil, nil } - return autorest.Prepare(&http.Request{}, + return autorest.Prepare((&http.Request{}).WithContext(ctx), autorest.AsJSON(), autorest.AsGet(), autorest.WithBaseURL(to.String(dolr.NextLink))) @@ -330,14 +407,24 @@ func (dolr DeploymentOperationsListResult) deploymentOperationsListResultPrepare // DeploymentOperationsListResultPage contains a page of DeploymentOperation values. type DeploymentOperationsListResultPage struct { - fn func(DeploymentOperationsListResult) (DeploymentOperationsListResult, error) + fn func(context.Context, DeploymentOperationsListResult) (DeploymentOperationsListResult, error) dolr DeploymentOperationsListResult } -// Next advances to the next page of values. If there was an error making +// NextWithContext advances to the next page of values. If there was an error making // the request the page does not advance and the error is returned. -func (page *DeploymentOperationsListResultPage) Next() error { - next, err := page.fn(page.dolr) +func (page *DeploymentOperationsListResultPage) NextWithContext(ctx context.Context) (err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/DeploymentOperationsListResultPage.NextWithContext") + defer func() { + sc := -1 + if page.Response().Response.Response != nil { + sc = page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + next, err := page.fn(ctx, page.dolr) if err != nil { return err } @@ -345,6 +432,13 @@ func (page *DeploymentOperationsListResultPage) Next() error { return nil } +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +// Deprecated: Use NextWithContext() instead. +func (page *DeploymentOperationsListResultPage) Next() error { + return page.NextWithContext(context.Background()) +} + // NotDone returns true if the page enumeration should be started or is not yet complete. func (page DeploymentOperationsListResultPage) NotDone() bool { return !page.dolr.IsEmpty() @@ -363,6 +457,11 @@ func (page DeploymentOperationsListResultPage) Values() []DeploymentOperation { return *page.dolr.Value } +// Creates a new instance of the DeploymentOperationsListResultPage type. 
+func NewDeploymentOperationsListResultPage(getNextPage func(context.Context, DeploymentOperationsListResult) (DeploymentOperationsListResult, error)) DeploymentOperationsListResultPage { + return DeploymentOperationsListResultPage{fn: getNextPage} +} + // DeploymentProperties deployment properties. type DeploymentProperties struct { // Template - The template content. It can be a JObject or a well formed JSON string. Use only one of Template or TemplateLink. @@ -387,7 +486,7 @@ type DeploymentPropertiesExtended struct { CorrelationID *string `json:"correlationId,omitempty"` // Timestamp - The timestamp of the template deployment. Timestamp *date.Time `json:"timestamp,omitempty"` - // Outputs - Key/value pairs that represent deploymentoutput. + // Outputs - Key/value pairs that represent deployment output. Outputs interface{} `json:"outputs,omitempty"` // Providers - The list of resource providers needed for the deployment. Providers *[]Provider `json:"providers,omitempty"` @@ -407,8 +506,8 @@ type DeploymentPropertiesExtended struct { DebugSetting *DebugSetting `json:"debugSetting,omitempty"` } -// DeploymentsCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a long-running -// operation. +// DeploymentsCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. type DeploymentsCreateOrUpdateFuture struct { azure.Future } @@ -417,7 +516,7 @@ type DeploymentsCreateOrUpdateFuture struct { // If the operation has not completed it will return an error. func (future *DeploymentsCreateOrUpdateFuture) Result(client DeploymentsClient) (de DeploymentExtended, err error) { var done bool - done, err = future.Done(client) + done, err = future.DoneWithContext(context.Background(), client) if err != nil { err = autorest.NewErrorWithError(err, "resources.DeploymentsCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") return @@ -436,7 +535,8 @@ func (future *DeploymentsCreateOrUpdateFuture) Result(client DeploymentsClient) return } -// DeploymentsDeleteFuture an abstraction for monitoring and retrieving the results of a long-running operation. +// DeploymentsDeleteFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. type DeploymentsDeleteFuture struct { azure.Future } @@ -445,7 +545,7 @@ type DeploymentsDeleteFuture struct { // If the operation has not completed it will return an error. func (future *DeploymentsDeleteFuture) Result(client DeploymentsClient) (ar autorest.Response, err error) { var done bool - done, err = future.Done(client) + done, err = future.DoneWithContext(context.Background(), client) if err != nil { err = autorest.NewErrorWithError(err, "resources.DeploymentsDeleteFuture", "Result", future.Response(), "Polling failure") return @@ -467,11 +567,33 @@ type DeploymentValidateResult struct { Properties *DeploymentPropertiesExtended `json:"properties,omitempty"` } +// ErrorAdditionalInfo the resource management error additional info. +type ErrorAdditionalInfo struct { + // Type - READ-ONLY; The additional info type. + Type *string `json:"type,omitempty"` + // Info - READ-ONLY; The additional info. + Info interface{} `json:"info,omitempty"` +} + +// ErrorResponse the resource management error response. +type ErrorResponse struct { + // Code - READ-ONLY; The error code. + Code *string `json:"code,omitempty"` + // Message - READ-ONLY; The error message. + Message *string `json:"message,omitempty"` + // Target - READ-ONLY; The error target. 
+ Target *string `json:"target,omitempty"` + // Details - READ-ONLY; The error details. + Details *[]ErrorResponse `json:"details,omitempty"` + // AdditionalInfo - READ-ONLY; The error additional info. + AdditionalInfo *[]ErrorAdditionalInfo `json:"additionalInfo,omitempty"` +} + // ExportTemplateRequest export resource group template request parameters. type ExportTemplateRequest struct { - // ResourcesProperty - The ids of the resources. The only supported string currently is '*' (all resources). Future api updates will support exporting specific resources. + // ResourcesProperty - The IDs of the resources to filter the export by. To export all resources, supply an array with single entry '*'. ResourcesProperty *[]string `json:"resources,omitempty"` - // Options - The export template options. Supported values include 'IncludeParameterDefaultValue', 'IncludeComments' or 'IncludeParameterDefaultValue, IncludeComments + // Options - The export template options. A CSV-formatted list containing zero or more of the following: 'IncludeParameterDefaultValue', 'IncludeComments', 'SkipResourceNameParameterization', 'SkipAllParameterization' Options *string `json:"options,omitempty"` } @@ -490,11 +612,11 @@ type GenericResource struct { Sku *Sku `json:"sku,omitempty"` // Identity - The identity of the resource. Identity *Identity `json:"identity,omitempty"` - // ID - Resource Id + // ID - READ-ONLY; Resource Id ID *string `json:"id,omitempty"` - // Name - Resource name + // Name - READ-ONLY; Resource name Name *string `json:"name,omitempty"` - // Type - Resource type + // Type - READ-ONLY; Resource type Type *string `json:"type,omitempty"` // Location - Resource location Location *string `json:"location,omitempty"` @@ -508,7 +630,9 @@ func (gr GenericResource) MarshalJSON() ([]byte, error) { if gr.Plan != nil { objectMap["plan"] = gr.Plan } - objectMap["properties"] = gr.Properties + if gr.Properties != nil { + objectMap["properties"] = gr.Properties + } if gr.Kind != nil { objectMap["kind"] = gr.Kind } @@ -521,15 +645,6 @@ func (gr GenericResource) MarshalJSON() ([]byte, error) { if gr.Identity != nil { objectMap["identity"] = gr.Identity } - if gr.ID != nil { - objectMap["id"] = gr.ID - } - if gr.Name != nil { - objectMap["name"] = gr.Name - } - if gr.Type != nil { - objectMap["type"] = gr.Type - } if gr.Location != nil { objectMap["location"] = gr.Location } @@ -552,7 +667,7 @@ type GenericResourceFilter struct { // Group resource group information. type Group struct { autorest.Response `json:"-"` - // ID - The ID of the resource group. + // ID - READ-ONLY; The ID of the resource group. ID *string `json:"id,omitempty"` // Name - The Name of the resource group. Name *string `json:"name,omitempty"` @@ -566,9 +681,6 @@ type Group struct { // MarshalJSON is the custom marshaler for Group. func (g Group) MarshalJSON() ([]byte, error) { objectMap := make(map[string]interface{}) - if g.ID != nil { - objectMap["id"] = g.ID - } if g.Name != nil { objectMap["name"] = g.Name } @@ -616,14 +728,24 @@ type GroupListResultIterator struct { page GroupListResultPage } -// Next advances to the next value. If there was an error making +// NextWithContext advances to the next value. If there was an error making // the request the iterator does not advance and the error is returned. 
-func (iter *GroupListResultIterator) Next() error { +func (iter *GroupListResultIterator) NextWithContext(ctx context.Context) (err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupListResultIterator.NextWithContext") + defer func() { + sc := -1 + if iter.Response().Response.Response != nil { + sc = iter.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } iter.i++ if iter.i < len(iter.page.Values()) { return nil } - err := iter.page.Next() + err = iter.page.NextWithContext(ctx) if err != nil { iter.i-- return err @@ -632,6 +754,13 @@ func (iter *GroupListResultIterator) Next() error { return nil } +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +// Deprecated: Use NextWithContext() instead. +func (iter *GroupListResultIterator) Next() error { + return iter.NextWithContext(context.Background()) +} + // NotDone returns true if the enumeration should be started or is not yet complete. func (iter GroupListResultIterator) NotDone() bool { return iter.page.NotDone() && iter.i < len(iter.page.Values()) @@ -651,6 +780,11 @@ func (iter GroupListResultIterator) Value() Group { return iter.page.Values()[iter.i] } +// Creates a new instance of the GroupListResultIterator type. +func NewGroupListResultIterator(page GroupListResultPage) GroupListResultIterator { + return GroupListResultIterator{page: page} +} + // IsEmpty returns true if the ListResult contains no values. func (glr GroupListResult) IsEmpty() bool { return glr.Value == nil || len(*glr.Value) == 0 @@ -658,11 +792,11 @@ func (glr GroupListResult) IsEmpty() bool { // groupListResultPreparer prepares a request to retrieve the next set of results. // It returns nil if no more results exist. -func (glr GroupListResult) groupListResultPreparer() (*http.Request, error) { +func (glr GroupListResult) groupListResultPreparer(ctx context.Context) (*http.Request, error) { if glr.NextLink == nil || len(to.String(glr.NextLink)) < 1 { return nil, nil } - return autorest.Prepare(&http.Request{}, + return autorest.Prepare((&http.Request{}).WithContext(ctx), autorest.AsJSON(), autorest.AsGet(), autorest.WithBaseURL(to.String(glr.NextLink))) @@ -670,14 +804,24 @@ func (glr GroupListResult) groupListResultPreparer() (*http.Request, error) { // GroupListResultPage contains a page of Group values. type GroupListResultPage struct { - fn func(GroupListResult) (GroupListResult, error) + fn func(context.Context, GroupListResult) (GroupListResult, error) glr GroupListResult } -// Next advances to the next page of values. If there was an error making +// NextWithContext advances to the next page of values. If there was an error making // the request the page does not advance and the error is returned. -func (page *GroupListResultPage) Next() error { - next, err := page.fn(page.glr) +func (page *GroupListResultPage) NextWithContext(ctx context.Context) (err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/GroupListResultPage.NextWithContext") + defer func() { + sc := -1 + if page.Response().Response.Response != nil { + sc = page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + next, err := page.fn(ctx, page.glr) if err != nil { return err } @@ -685,6 +829,13 @@ func (page *GroupListResultPage) Next() error { return nil } +// Next advances to the next page of values. 
If there was an error making +// the request the page does not advance and the error is returned. +// Deprecated: Use NextWithContext() instead. +func (page *GroupListResultPage) Next() error { + return page.NextWithContext(context.Background()) +} + // NotDone returns true if the page enumeration should be started or is not yet complete. func (page GroupListResultPage) NotDone() bool { return !page.glr.IsEmpty() @@ -703,9 +854,14 @@ func (page GroupListResultPage) Values() []Group { return *page.glr.Value } +// Creates a new instance of the GroupListResultPage type. +func NewGroupListResultPage(getNextPage func(context.Context, GroupListResult) (GroupListResult, error)) GroupListResultPage { + return GroupListResultPage{fn: getNextPage} +} + // GroupProperties the resource group properties. type GroupProperties struct { - // ProvisioningState - The provisioning state. + // ProvisioningState - READ-ONLY; The provisioning state. ProvisioningState *string `json:"provisioningState,omitempty"` } @@ -718,7 +874,7 @@ type GroupsDeleteFuture struct { // If the operation has not completed it will return an error. func (future *GroupsDeleteFuture) Result(client GroupsClient) (ar autorest.Response, err error) { var done bool - done, err = future.Done(client) + done, err = future.DoneWithContext(context.Background(), client) if err != nil { err = autorest.NewErrorWithError(err, "resources.GroupsDeleteFuture", "Result", future.Response(), "Polling failure") return @@ -739,9 +895,9 @@ type HTTPMessage struct { // Identity identity for the resource. type Identity struct { - // PrincipalID - The principal id of resource identity. + // PrincipalID - READ-ONLY; The principal id of resource identity. PrincipalID *string `json:"principalId,omitempty"` - // TenantID - The tenant id of resource. + // TenantID - READ-ONLY; The tenant id of resource. TenantID *string `json:"tenantId,omitempty"` // Type - The identity type. Possible values include: 'SystemAssigned' Type ResourceIdentityType `json:"type,omitempty"` @@ -762,14 +918,24 @@ type ListResultIterator struct { page ListResultPage } -// Next advances to the next value. If there was an error making +// NextWithContext advances to the next value. If there was an error making // the request the iterator does not advance and the error is returned. -func (iter *ListResultIterator) Next() error { +func (iter *ListResultIterator) NextWithContext(ctx context.Context) (err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ListResultIterator.NextWithContext") + defer func() { + sc := -1 + if iter.Response().Response.Response != nil { + sc = iter.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } iter.i++ if iter.i < len(iter.page.Values()) { return nil } - err := iter.page.Next() + err = iter.page.NextWithContext(ctx) if err != nil { iter.i-- return err @@ -778,6 +944,13 @@ func (iter *ListResultIterator) Next() error { return nil } +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +// Deprecated: Use NextWithContext() instead. +func (iter *ListResultIterator) Next() error { + return iter.NextWithContext(context.Background()) +} + // NotDone returns true if the enumeration should be started or is not yet complete. 
func (iter ListResultIterator) NotDone() bool { return iter.page.NotDone() && iter.i < len(iter.page.Values()) @@ -797,6 +970,11 @@ func (iter ListResultIterator) Value() GenericResource { return iter.page.Values()[iter.i] } +// Creates a new instance of the ListResultIterator type. +func NewListResultIterator(page ListResultPage) ListResultIterator { + return ListResultIterator{page: page} +} + // IsEmpty returns true if the ListResult contains no values. func (lr ListResult) IsEmpty() bool { return lr.Value == nil || len(*lr.Value) == 0 @@ -804,11 +982,11 @@ func (lr ListResult) IsEmpty() bool { // listResultPreparer prepares a request to retrieve the next set of results. // It returns nil if no more results exist. -func (lr ListResult) listResultPreparer() (*http.Request, error) { +func (lr ListResult) listResultPreparer(ctx context.Context) (*http.Request, error) { if lr.NextLink == nil || len(to.String(lr.NextLink)) < 1 { return nil, nil } - return autorest.Prepare(&http.Request{}, + return autorest.Prepare((&http.Request{}).WithContext(ctx), autorest.AsJSON(), autorest.AsGet(), autorest.WithBaseURL(to.String(lr.NextLink))) @@ -816,14 +994,24 @@ func (lr ListResult) listResultPreparer() (*http.Request, error) { // ListResultPage contains a page of GenericResource values. type ListResultPage struct { - fn func(ListResult) (ListResult, error) + fn func(context.Context, ListResult) (ListResult, error) lr ListResult } -// Next advances to the next page of values. If there was an error making +// NextWithContext advances to the next page of values. If there was an error making // the request the page does not advance and the error is returned. -func (page *ListResultPage) Next() error { - next, err := page.fn(page.lr) +func (page *ListResultPage) NextWithContext(ctx context.Context) (err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ListResultPage.NextWithContext") + defer func() { + sc := -1 + if page.Response().Response.Response != nil { + sc = page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + next, err := page.fn(ctx, page.lr) if err != nil { return err } @@ -831,6 +1019,13 @@ func (page *ListResultPage) Next() error { return nil } +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +// Deprecated: Use NextWithContext() instead. +func (page *ListResultPage) Next() error { + return page.NextWithContext(context.Background()) +} + // NotDone returns true if the page enumeration should be started or is not yet complete. func (page ListResultPage) NotDone() bool { return !page.lr.IsEmpty() @@ -849,6 +1044,11 @@ func (page ListResultPage) Values() []GenericResource { return *page.lr.Value } +// Creates a new instance of the ListResultPage type. +func NewListResultPage(getNextPage func(context.Context, ListResult) (ListResult, error)) ListResultPage { + return ListResultPage{fn: getNextPage} +} + // ManagementErrorWithDetails ... type ManagementErrorWithDetails struct { // Code - The error code returned from the server. @@ -869,7 +1069,8 @@ type MoveInfo struct { TargetResourceGroup *string `json:"targetResourceGroup,omitempty"` } -// MoveResourcesFuture an abstraction for monitoring and retrieving the results of a long-running operation. +// MoveResourcesFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. 
type MoveResourcesFuture struct { azure.Future } @@ -878,7 +1079,7 @@ type MoveResourcesFuture struct { // If the operation has not completed it will return an error. func (future *MoveResourcesFuture) Result(client Client) (ar autorest.Response, err error) { var done bool - done, err = future.Done(client) + done, err = future.DoneWithContext(context.Background(), client) if err != nil { err = autorest.NewErrorWithError(err, "resources.MoveResourcesFuture", "Result", future.Response(), "Polling failure") return @@ -891,7 +1092,7 @@ func (future *MoveResourcesFuture) Result(client Client) (ar autorest.Response, return } -// ParametersLink entity representing the reference to the deployment paramaters. +// ParametersLink entity representing the reference to the deployment parameters. type ParametersLink struct { // URI - URI referencing the template. URI *string `json:"uri,omitempty"` @@ -939,14 +1140,24 @@ type ProviderListResultIterator struct { page ProviderListResultPage } -// Next advances to the next value. If there was an error making +// NextWithContext advances to the next value. If there was an error making // the request the iterator does not advance and the error is returned. -func (iter *ProviderListResultIterator) Next() error { +func (iter *ProviderListResultIterator) NextWithContext(ctx context.Context) (err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ProviderListResultIterator.NextWithContext") + defer func() { + sc := -1 + if iter.Response().Response.Response != nil { + sc = iter.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } iter.i++ if iter.i < len(iter.page.Values()) { return nil } - err := iter.page.Next() + err = iter.page.NextWithContext(ctx) if err != nil { iter.i-- return err @@ -955,6 +1166,13 @@ func (iter *ProviderListResultIterator) Next() error { return nil } +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +// Deprecated: Use NextWithContext() instead. +func (iter *ProviderListResultIterator) Next() error { + return iter.NextWithContext(context.Background()) +} + // NotDone returns true if the enumeration should be started or is not yet complete. func (iter ProviderListResultIterator) NotDone() bool { return iter.page.NotDone() && iter.i < len(iter.page.Values()) @@ -974,6 +1192,11 @@ func (iter ProviderListResultIterator) Value() Provider { return iter.page.Values()[iter.i] } +// Creates a new instance of the ProviderListResultIterator type. +func NewProviderListResultIterator(page ProviderListResultPage) ProviderListResultIterator { + return ProviderListResultIterator{page: page} +} + // IsEmpty returns true if the ListResult contains no values. func (plr ProviderListResult) IsEmpty() bool { return plr.Value == nil || len(*plr.Value) == 0 @@ -981,11 +1204,11 @@ func (plr ProviderListResult) IsEmpty() bool { // providerListResultPreparer prepares a request to retrieve the next set of results. // It returns nil if no more results exist. 
-func (plr ProviderListResult) providerListResultPreparer() (*http.Request, error) { +func (plr ProviderListResult) providerListResultPreparer(ctx context.Context) (*http.Request, error) { if plr.NextLink == nil || len(to.String(plr.NextLink)) < 1 { return nil, nil } - return autorest.Prepare(&http.Request{}, + return autorest.Prepare((&http.Request{}).WithContext(ctx), autorest.AsJSON(), autorest.AsGet(), autorest.WithBaseURL(to.String(plr.NextLink))) @@ -993,14 +1216,24 @@ func (plr ProviderListResult) providerListResultPreparer() (*http.Request, error // ProviderListResultPage contains a page of Provider values. type ProviderListResultPage struct { - fn func(ProviderListResult) (ProviderListResult, error) + fn func(context.Context, ProviderListResult) (ProviderListResult, error) plr ProviderListResult } -// Next advances to the next page of values. If there was an error making +// NextWithContext advances to the next page of values. If there was an error making // the request the page does not advance and the error is returned. -func (page *ProviderListResultPage) Next() error { - next, err := page.fn(page.plr) +func (page *ProviderListResultPage) NextWithContext(ctx context.Context) (err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ProviderListResultPage.NextWithContext") + defer func() { + sc := -1 + if page.Response().Response.Response != nil { + sc = page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + next, err := page.fn(ctx, page.plr) if err != nil { return err } @@ -1008,6 +1241,13 @@ func (page *ProviderListResultPage) Next() error { return nil } +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +// Deprecated: Use NextWithContext() instead. +func (page *ProviderListResultPage) Next() error { + return page.NextWithContext(context.Background()) +} + // NotDone returns true if the page enumeration should be started or is not yet complete. func (page ProviderListResultPage) NotDone() bool { return !page.plr.IsEmpty() @@ -1026,6 +1266,11 @@ func (page ProviderListResultPage) Values() []Provider { return *page.plr.Value } +// Creates a new instance of the ProviderListResultPage type. +func NewProviderListResultPage(getNextPage func(context.Context, ProviderListResult) (ProviderListResult, error)) ProviderListResultPage { + return ProviderListResultPage{fn: getNextPage} +} + // ProviderOperationDisplayProperties resource provider operation's display properties. type ProviderOperationDisplayProperties struct { // Publisher - Operation description. @@ -1077,11 +1322,11 @@ func (prt ProviderResourceType) MarshalJSON() ([]byte, error) { // Resource ... type Resource struct { - // ID - Resource Id + // ID - READ-ONLY; Resource Id ID *string `json:"id,omitempty"` - // Name - Resource name + // Name - READ-ONLY; Resource name Name *string `json:"name,omitempty"` - // Type - Resource type + // Type - READ-ONLY; Resource type Type *string `json:"type,omitempty"` // Location - Resource location Location *string `json:"location,omitempty"` @@ -1092,15 +1337,6 @@ type Resource struct { // MarshalJSON is the custom marshaler for Resource. 
func (r Resource) MarshalJSON() ([]byte, error) { objectMap := make(map[string]interface{}) - if r.ID != nil { - objectMap["id"] = r.ID - } - if r.Name != nil { - objectMap["name"] = r.Name - } - if r.Type != nil { - objectMap["type"] = r.Type - } if r.Location != nil { objectMap["location"] = r.Location } @@ -1143,7 +1379,7 @@ type TagCount struct { // TagDetails tag details. type TagDetails struct { autorest.Response `json:"-"` - // ID - The tag ID. + // ID - READ-ONLY; The tag ID. ID *string `json:"id,omitempty"` // TagName - The tag name. TagName *string `json:"tagName,omitempty"` @@ -1168,14 +1404,24 @@ type TagsListResultIterator struct { page TagsListResultPage } -// Next advances to the next value. If there was an error making +// NextWithContext advances to the next value. If there was an error making // the request the iterator does not advance and the error is returned. -func (iter *TagsListResultIterator) Next() error { +func (iter *TagsListResultIterator) NextWithContext(ctx context.Context) (err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/TagsListResultIterator.NextWithContext") + defer func() { + sc := -1 + if iter.Response().Response.Response != nil { + sc = iter.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } iter.i++ if iter.i < len(iter.page.Values()) { return nil } - err := iter.page.Next() + err = iter.page.NextWithContext(ctx) if err != nil { iter.i-- return err @@ -1184,6 +1430,13 @@ func (iter *TagsListResultIterator) Next() error { return nil } +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +// Deprecated: Use NextWithContext() instead. +func (iter *TagsListResultIterator) Next() error { + return iter.NextWithContext(context.Background()) +} + // NotDone returns true if the enumeration should be started or is not yet complete. func (iter TagsListResultIterator) NotDone() bool { return iter.page.NotDone() && iter.i < len(iter.page.Values()) @@ -1203,6 +1456,11 @@ func (iter TagsListResultIterator) Value() TagDetails { return iter.page.Values()[iter.i] } +// Creates a new instance of the TagsListResultIterator type. +func NewTagsListResultIterator(page TagsListResultPage) TagsListResultIterator { + return TagsListResultIterator{page: page} +} + // IsEmpty returns true if the ListResult contains no values. func (tlr TagsListResult) IsEmpty() bool { return tlr.Value == nil || len(*tlr.Value) == 0 @@ -1210,11 +1468,11 @@ func (tlr TagsListResult) IsEmpty() bool { // tagsListResultPreparer prepares a request to retrieve the next set of results. // It returns nil if no more results exist. -func (tlr TagsListResult) tagsListResultPreparer() (*http.Request, error) { +func (tlr TagsListResult) tagsListResultPreparer(ctx context.Context) (*http.Request, error) { if tlr.NextLink == nil || len(to.String(tlr.NextLink)) < 1 { return nil, nil } - return autorest.Prepare(&http.Request{}, + return autorest.Prepare((&http.Request{}).WithContext(ctx), autorest.AsJSON(), autorest.AsGet(), autorest.WithBaseURL(to.String(tlr.NextLink))) @@ -1222,14 +1480,24 @@ func (tlr TagsListResult) tagsListResultPreparer() (*http.Request, error) { // TagsListResultPage contains a page of TagDetails values. type TagsListResultPage struct { - fn func(TagsListResult) (TagsListResult, error) + fn func(context.Context, TagsListResult) (TagsListResult, error) tlr TagsListResult } -// Next advances to the next page of values. 
If there was an error making +// NextWithContext advances to the next page of values. If there was an error making // the request the page does not advance and the error is returned. -func (page *TagsListResultPage) Next() error { - next, err := page.fn(page.tlr) +func (page *TagsListResultPage) NextWithContext(ctx context.Context) (err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/TagsListResultPage.NextWithContext") + defer func() { + sc := -1 + if page.Response().Response.Response != nil { + sc = page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } + next, err := page.fn(ctx, page.tlr) if err != nil { return err } @@ -1237,6 +1505,13 @@ func (page *TagsListResultPage) Next() error { return nil } +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +// Deprecated: Use NextWithContext() instead. +func (page *TagsListResultPage) Next() error { + return page.NextWithContext(context.Background()) +} + // NotDone returns true if the page enumeration should be started or is not yet complete. func (page TagsListResultPage) NotDone() bool { return !page.tlr.IsEmpty() @@ -1255,10 +1530,15 @@ func (page TagsListResultPage) Values() []TagDetails { return *page.tlr.Value } +// Creates a new instance of the TagsListResultPage type. +func NewTagsListResultPage(getNextPage func(context.Context, TagsListResult) (TagsListResult, error)) TagsListResultPage { + return TagsListResultPage{fn: getNextPage} +} + // TagValue tag information. type TagValue struct { autorest.Response `json:"-"` - // ID - The tag ID. + // ID - READ-ONLY; The tag ID. ID *string `json:"id,omitempty"` // TagValue - The tag value. TagValue *string `json:"tagValue,omitempty"` @@ -1276,6 +1556,16 @@ type TargetResource struct { ResourceType *string `json:"resourceType,omitempty"` } +// TemplateHashResult result of the request to calculate template hash. It contains a string of minified +// template and its hash. +type TemplateHashResult struct { + autorest.Response `json:"-"` + // MinifiedTemplate - The minified template string. + MinifiedTemplate *string `json:"minifiedTemplate,omitempty"` + // TemplateHash - The template hash. + TemplateHash *string `json:"templateHash,omitempty"` +} + // TemplateLink entity representing the reference to the template. type TemplateLink struct { // URI - URI referencing the template. @@ -1293,7 +1583,7 @@ type UpdateFuture struct { // If the operation has not completed it will return an error. 
func (future *UpdateFuture) Result(client Client) (gr GenericResource, err error) { var done bool - done, err = future.Done(client) + done, err = future.DoneWithContext(context.Background(), client) if err != nil { err = autorest.NewErrorWithError(err, "resources.UpdateFuture", "Result", future.Response(), "Polling failure") return diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/providers.go b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/providers.go index 4993b1f43..7ba9bdfc8 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/providers.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/providers.go @@ -21,6 +21,7 @@ import ( "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/tracing" "net/http" ) @@ -45,6 +46,16 @@ func NewProvidersClientWithBaseURI(baseURI string, subscriptionID string) Provid // expand - the $expand query parameter. e.g. To include property aliases in response, use // $expand=resourceTypes/aliases. func (client ProvidersClient) Get(ctx context.Context, resourceProviderNamespace string, expand string) (result Provider, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ProvidersClient.Get") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } req, err := client.GetPreparer(ctx, resourceProviderNamespace, expand) if err != nil { err = autorest.NewErrorWithError(err, "resources.ProvidersClient", "Get", nil, "Failure preparing request") @@ -92,8 +103,8 @@ func (client ProvidersClient) GetPreparer(ctx context.Context, resourceProviderN // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client ProvidersClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // GetResponder handles the response to the Get request. The method always @@ -115,6 +126,16 @@ func (client ProvidersClient) GetResponder(resp *http.Response) (result Provider // expand - the $expand query parameter. e.g. To include property aliases in response, use // $expand=resourceTypes/aliases. func (client ProvidersClient) List(ctx context.Context, top *int32, expand string) (result ProviderListResultPage, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ProvidersClient.List") + defer func() { + sc := -1 + if result.plr.Response.Response != nil { + sc = result.plr.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } result.fn = client.listNextResults req, err := client.ListPreparer(ctx, top, expand) if err != nil { @@ -165,8 +186,8 @@ func (client ProvidersClient) ListPreparer(ctx context.Context, top *int32, expa // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. 
func (client ProvidersClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // ListResponder handles the response to the List request. The method always @@ -183,8 +204,8 @@ func (client ProvidersClient) ListResponder(resp *http.Response) (result Provide } // listNextResults retrieves the next set of results, if any. -func (client ProvidersClient) listNextResults(lastResults ProviderListResult) (result ProviderListResult, err error) { - req, err := lastResults.providerListResultPreparer() +func (client ProvidersClient) listNextResults(ctx context.Context, lastResults ProviderListResult) (result ProviderListResult, err error) { + req, err := lastResults.providerListResultPreparer(ctx) if err != nil { return result, autorest.NewErrorWithError(err, "resources.ProvidersClient", "listNextResults", nil, "Failure preparing next results request") } @@ -205,6 +226,16 @@ func (client ProvidersClient) listNextResults(lastResults ProviderListResult) (r // ListComplete enumerates all values, automatically crossing page boundaries as required. func (client ProvidersClient) ListComplete(ctx context.Context, top *int32, expand string) (result ProviderListResultIterator, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ProvidersClient.List") + defer func() { + sc := -1 + if result.Response().Response.Response != nil { + sc = result.page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } result.page, err = client.List(ctx, top, expand) return } @@ -213,6 +244,16 @@ func (client ProvidersClient) ListComplete(ctx context.Context, top *int32, expa // Parameters: // resourceProviderNamespace - namespace of the resource provider. func (client ProvidersClient) Register(ctx context.Context, resourceProviderNamespace string) (result Provider, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ProvidersClient.Register") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } req, err := client.RegisterPreparer(ctx, resourceProviderNamespace) if err != nil { err = autorest.NewErrorWithError(err, "resources.ProvidersClient", "Register", nil, "Failure preparing request") @@ -257,8 +298,8 @@ func (client ProvidersClient) RegisterPreparer(ctx context.Context, resourceProv // RegisterSender sends the Register request. The method will close the // http.Response Body if it receives an error. func (client ProvidersClient) RegisterSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // RegisterResponder handles the response to the Register request. The method always @@ -278,6 +319,16 @@ func (client ProvidersClient) RegisterResponder(resp *http.Response) (result Pro // Parameters: // resourceProviderNamespace - namespace of the resource provider. 
func (client ProvidersClient) Unregister(ctx context.Context, resourceProviderNamespace string) (result Provider, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/ProvidersClient.Unregister") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } req, err := client.UnregisterPreparer(ctx, resourceProviderNamespace) if err != nil { err = autorest.NewErrorWithError(err, "resources.ProvidersClient", "Unregister", nil, "Failure preparing request") @@ -322,8 +373,8 @@ func (client ProvidersClient) UnregisterPreparer(ctx context.Context, resourcePr // UnregisterSender sends the Unregister request. The method will close the // http.Response Body if it receives an error. func (client ProvidersClient) UnregisterSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // UnregisterResponder handles the response to the Unregister request. The method always diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/resources.go b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/resources.go index 301b9934e..d0c2d4c5f 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/resources.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/resources.go @@ -22,6 +22,7 @@ import ( "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "github.com/Azure/go-autorest/autorest/validation" + "github.com/Azure/go-autorest/tracing" "net/http" ) @@ -48,11 +49,21 @@ func NewClientWithBaseURI(baseURI string, subscriptionID string) Client { // resourceType - resource identity. // resourceName - resource identity. func (client Client) CheckExistence(ctx context.Context, resourceGroupName string, resourceProviderNamespace string, parentResourcePath string, resourceType string, resourceName string) (result autorest.Response, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/Client.CheckExistence") + defer func() { + sc := -1 + if result.Response != nil { + sc = result.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\p{L}\._\(\)\w]+$`, Chain: nil}}}}); err != nil { return result, validation.NewError("resources.Client", "CheckExistence", err.Error()) } @@ -104,8 +115,8 @@ func (client Client) CheckExistencePreparer(ctx context.Context, resourceGroupNa // CheckExistenceSender sends the CheckExistence request. The method will close the // http.Response Body if it receives an error. 
func (client Client) CheckExistenceSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // CheckExistenceResponder handles the response to the CheckExistence request. The method always @@ -129,11 +140,21 @@ func (client Client) CheckExistenceResponder(resp *http.Response) (result autore // resourceName - resource identity. // parameters - create or update resource parameters. func (client Client) CreateOrUpdate(ctx context.Context, resourceGroupName string, resourceProviderNamespace string, parentResourcePath string, resourceType string, resourceName string, parameters GenericResource) (result GenericResource, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/Client.CreateOrUpdate") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\p{L}\._\(\)\w]+$`, Chain: nil}}}}); err != nil { return result, validation.NewError("resources.Client", "CreateOrUpdate", err.Error()) } @@ -187,8 +208,8 @@ func (client Client) CreateOrUpdatePreparer(ctx context.Context, resourceGroupNa // CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the // http.Response Body if it receives an error. func (client Client) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always @@ -212,11 +233,21 @@ func (client Client) CreateOrUpdateResponder(resp *http.Response) (result Generi // resourceType - resource identity. // resourceName - resource identity. 
func (client Client) Delete(ctx context.Context, resourceGroupName string, resourceProviderNamespace string, parentResourcePath string, resourceType string, resourceName string) (result autorest.Response, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/Client.Delete") + defer func() { + sc := -1 + if result.Response != nil { + sc = result.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\p{L}\._\(\)\w]+$`, Chain: nil}}}}); err != nil { return result, validation.NewError("resources.Client", "Delete", err.Error()) } @@ -268,8 +299,8 @@ func (client Client) DeletePreparer(ctx context.Context, resourceGroupName strin // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. func (client Client) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // DeleteResponder handles the response to the Delete request. The method always @@ -292,11 +323,21 @@ func (client Client) DeleteResponder(resp *http.Response) (result autorest.Respo // resourceType - resource identity. // resourceName - resource identity. func (client Client) Get(ctx context.Context, resourceGroupName string, resourceProviderNamespace string, parentResourcePath string, resourceType string, resourceName string) (result GenericResource, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/Client.Get") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\p{L}\._\(\)\w]+$`, Chain: nil}}}}); err != nil { return result, validation.NewError("resources.Client", "Get", err.Error()) } @@ -348,8 +389,8 @@ func (client Client) GetPreparer(ctx context.Context, resourceGroupName string, // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client Client) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // GetResponder handles the response to the Get request. 
The method always @@ -371,6 +412,16 @@ func (client Client) GetResponder(resp *http.Response) (result GenericResource, // expand - the $expand query parameter. // top - query parameters. If null is passed returns all resource groups. func (client Client) List(ctx context.Context, filter string, expand string, top *int32) (result ListResultPage, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/Client.List") + defer func() { + sc := -1 + if result.lr.Response.Response != nil { + sc = result.lr.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } result.fn = client.listNextResults req, err := client.ListPreparer(ctx, filter, expand, top) if err != nil { @@ -424,8 +475,8 @@ func (client Client) ListPreparer(ctx context.Context, filter string, expand str // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client Client) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // ListResponder handles the response to the List request. The method always @@ -442,8 +493,8 @@ func (client Client) ListResponder(resp *http.Response) (result ListResult, err } // listNextResults retrieves the next set of results, if any. -func (client Client) listNextResults(lastResults ListResult) (result ListResult, err error) { - req, err := lastResults.listResultPreparer() +func (client Client) listNextResults(ctx context.Context, lastResults ListResult) (result ListResult, err error) { + req, err := lastResults.listResultPreparer(ctx) if err != nil { return result, autorest.NewErrorWithError(err, "resources.Client", "listNextResults", nil, "Failure preparing next results request") } @@ -464,6 +515,16 @@ func (client Client) listNextResults(lastResults ListResult) (result ListResult, // ListComplete enumerates all values, automatically crossing page boundaries as required. func (client Client) ListComplete(ctx context.Context, filter string, expand string, top *int32) (result ListResultIterator, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/Client.List") + defer func() { + sc := -1 + if result.Response().Response.Response != nil { + sc = result.page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } result.page, err = client.List(ctx, filter, expand, top) return } @@ -474,11 +535,21 @@ func (client Client) ListComplete(ctx context.Context, filter string, expand str // sourceResourceGroupName - source resource group name. // parameters - move resources' parameters. 
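Before MoveResources below, one note on the validation hunks: the resource-group name rule changes from `^[-\w\._\(\)]+$` to `^[-\p{L}\._\(\)\w]+$`, so names containing non-ASCII Unicode letters now validate. A quick sketch of the effect, assuming nothing beyond Go's standard `regexp` package (the length checks only approximate the MinLength/MaxLength rules):

```go
package demo

import "regexp"

// Old and new resourceGroupName rules from the hunks above; in Go's RE2 syntax
// \w matches only ASCII word characters, while \p{L} matches any Unicode letter.
var (
	oldPattern = regexp.MustCompile(`^[-\w\._\(\)]+$`)
	newPattern = regexp.MustCompile(`^[-\p{L}\._\(\)\w]+$`)
)

func demoPatterns() (bool, bool) {
	name := "ressourcen-gruppe-ü"
	return oldPattern.MatchString(name), newPattern.MatchString(name) // false, true
}
```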
func (client Client) MoveResources(ctx context.Context, sourceResourceGroupName string, parameters MoveInfo) (result MoveResourcesFuture, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/Client.MoveResources") + defer func() { + sc := -1 + if result.Response() != nil { + sc = result.Response().StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: sourceResourceGroupName, Constraints: []validation.Constraint{{Target: "sourceResourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "sourceResourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "sourceResourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + {Target: "sourceResourceGroupName", Name: validation.Pattern, Rule: `^[-\p{L}\._\(\)\w]+$`, Chain: nil}}}}); err != nil { return result, validation.NewError("resources.Client", "MoveResources", err.Error()) } @@ -522,9 +593,9 @@ func (client Client) MoveResourcesPreparer(ctx context.Context, sourceResourceGr // MoveResourcesSender sends the MoveResources request. The method will close the // http.Response Body if it receives an error. func (client Client) MoveResourcesSender(req *http.Request) (future MoveResourcesFuture, err error) { + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) var resp *http.Response - resp, err = autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + resp, err = autorest.SendWithSender(client, req, sd...) if err != nil { return } @@ -553,11 +624,21 @@ func (client Client) MoveResourcesResponder(resp *http.Response) (result autores // resourceName - the name of the resource to update. // parameters - parameters for updating the resource. func (client Client) Update(ctx context.Context, resourceGroupName string, resourceProviderNamespace string, parentResourcePath string, resourceType string, resourceName string, parameters GenericResource) (result UpdateFuture, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/Client.Update") + defer func() { + sc := -1 + if result.Response() != nil { + sc = result.Response().StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\p{L}\._\(\)\w]+$`, Chain: nil}}}}); err != nil { return result, validation.NewError("resources.Client", "Update", err.Error()) } @@ -605,9 +686,9 @@ func (client Client) UpdatePreparer(ctx context.Context, resourceGroupName strin // UpdateSender sends the Update request. The method will close the // http.Response Body if it receives an error. func (client Client) UpdateSender(req *http.Request) (future UpdateFuture, err error) { + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) var resp *http.Response - resp, err = autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + resp, err = autorest.SendWithSender(client, req, sd...) 
if err != nil { return } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/tags.go b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/tags.go index c38a38188..62f9d05b9 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/tags.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources/tags.go @@ -21,6 +21,7 @@ import ( "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/tracing" "net/http" ) @@ -43,6 +44,16 @@ func NewTagsClientWithBaseURI(baseURI string, subscriptionID string) TagsClient // Parameters: // tagName - the name of the tag. func (client TagsClient) CreateOrUpdate(ctx context.Context, tagName string) (result TagDetails, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/TagsClient.CreateOrUpdate") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } req, err := client.CreateOrUpdatePreparer(ctx, tagName) if err != nil { err = autorest.NewErrorWithError(err, "resources.TagsClient", "CreateOrUpdate", nil, "Failure preparing request") @@ -87,8 +98,8 @@ func (client TagsClient) CreateOrUpdatePreparer(ctx context.Context, tagName str // CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the // http.Response Body if it receives an error. func (client TagsClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always @@ -109,6 +120,16 @@ func (client TagsClient) CreateOrUpdateResponder(resp *http.Response) (result Ta // tagName - the name of the tag. // tagValue - the value of the tag. func (client TagsClient) CreateOrUpdateValue(ctx context.Context, tagName string, tagValue string) (result TagValue, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/TagsClient.CreateOrUpdateValue") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } req, err := client.CreateOrUpdateValuePreparer(ctx, tagName, tagValue) if err != nil { err = autorest.NewErrorWithError(err, "resources.TagsClient", "CreateOrUpdateValue", nil, "Failure preparing request") @@ -154,8 +175,8 @@ func (client TagsClient) CreateOrUpdateValuePreparer(ctx context.Context, tagNam // CreateOrUpdateValueSender sends the CreateOrUpdateValue request. The method will close the // http.Response Body if it receives an error. func (client TagsClient) CreateOrUpdateValueSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // CreateOrUpdateValueResponder handles the response to the CreateOrUpdateValue request. 
The method always @@ -175,6 +196,16 @@ func (client TagsClient) CreateOrUpdateValueResponder(resp *http.Response) (resu // Parameters: // tagName - the name of the tag. func (client TagsClient) Delete(ctx context.Context, tagName string) (result autorest.Response, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/TagsClient.Delete") + defer func() { + sc := -1 + if result.Response != nil { + sc = result.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } req, err := client.DeletePreparer(ctx, tagName) if err != nil { err = autorest.NewErrorWithError(err, "resources.TagsClient", "Delete", nil, "Failure preparing request") @@ -219,8 +250,8 @@ func (client TagsClient) DeletePreparer(ctx context.Context, tagName string) (*h // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. func (client TagsClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // DeleteResponder handles the response to the Delete request. The method always @@ -240,6 +271,16 @@ func (client TagsClient) DeleteResponder(resp *http.Response) (result autorest.R // tagName - the name of the tag. // tagValue - the value of the tag. func (client TagsClient) DeleteValue(ctx context.Context, tagName string, tagValue string) (result autorest.Response, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/TagsClient.DeleteValue") + defer func() { + sc := -1 + if result.Response != nil { + sc = result.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } req, err := client.DeleteValuePreparer(ctx, tagName, tagValue) if err != nil { err = autorest.NewErrorWithError(err, "resources.TagsClient", "DeleteValue", nil, "Failure preparing request") @@ -285,8 +326,8 @@ func (client TagsClient) DeleteValuePreparer(ctx context.Context, tagName string // DeleteValueSender sends the DeleteValue request. The method will close the // http.Response Body if it receives an error. func (client TagsClient) DeleteValueSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // DeleteValueResponder handles the response to the DeleteValue request. The method always @@ -303,6 +344,16 @@ func (client TagsClient) DeleteValueResponder(resp *http.Response) (result autor // List get a list of subscription resource tags. func (client TagsClient) List(ctx context.Context) (result TagsListResultPage, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/TagsClient.List") + defer func() { + sc := -1 + if result.tlr.Response.Response != nil { + sc = result.tlr.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } result.fn = client.listNextResults req, err := client.ListPreparer(ctx) if err != nil { @@ -347,8 +398,8 @@ func (client TagsClient) ListPreparer(ctx context.Context) (*http.Request, error // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. 
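A note on all of these `*Sender` rewrites before the next one: each sender now resolves its send decorators from the request context via `autorest.GetSendDecorators`, keeping `azure.DoRetryWithRegistration` as the default. A caller can therefore swap the retry policy per request with `autorest.WithSendDecorators`; a hedged sketch (the particular decorator choice is illustrative):

```go
package demo

import (
	"context"
	"net/http"
	"time"

	"github.com/Azure/go-autorest/autorest"
)

// withCustomRetry stashes replacement send decorators in the context; the
// generated *Sender methods pick these up via GetSendDecorators instead of
// the default azure.DoRetryWithRegistration decorator.
func withCustomRetry(ctx context.Context) context.Context {
	return autorest.WithSendDecorators(ctx, []autorest.SendDecorator{
		autorest.DoRetryForStatusCodes(3, 2*time.Second, http.StatusTooManyRequests),
	})
}
```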
func (client TagsClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // ListResponder handles the response to the List request. The method always @@ -365,8 +416,8 @@ func (client TagsClient) ListResponder(resp *http.Response) (result TagsListResu } // listNextResults retrieves the next set of results, if any. -func (client TagsClient) listNextResults(lastResults TagsListResult) (result TagsListResult, err error) { - req, err := lastResults.tagsListResultPreparer() +func (client TagsClient) listNextResults(ctx context.Context, lastResults TagsListResult) (result TagsListResult, err error) { + req, err := lastResults.tagsListResultPreparer(ctx) if err != nil { return result, autorest.NewErrorWithError(err, "resources.TagsClient", "listNextResults", nil, "Failure preparing next results request") } @@ -387,6 +438,16 @@ func (client TagsClient) listNextResults(lastResults TagsListResult) (result Tag // ListComplete enumerates all values, automatically crossing page boundaries as required. func (client TagsClient) ListComplete(ctx context.Context) (result TagsListResultIterator, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/TagsClient.List") + defer func() { + sc := -1 + if result.Response().Response.Response != nil { + sc = result.page.Response().Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } result.page, err = client.List(ctx) return } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2016-01-01/storage/accounts.go b/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2016-01-01/storage/accounts.go index b9d2cbd99..4d553ca22 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2016-01-01/storage/accounts.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2016-01-01/storage/accounts.go @@ -22,6 +22,7 @@ import ( "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "github.com/Azure/go-autorest/autorest/validation" + "github.com/Azure/go-autorest/tracing" "net/http" ) @@ -45,6 +46,16 @@ func NewAccountsClientWithBaseURI(baseURI string, subscriptionID string) Account // accountName - the name of the storage account within the specified resource group. Storage account names // must be between 3 and 24 characters in length and use numbers and lower-case letters only. func (client AccountsClient) CheckNameAvailability(ctx context.Context, accountName AccountCheckNameAvailabilityParameters) (result CheckNameAvailabilityResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/AccountsClient.CheckNameAvailability") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: accountName, Constraints: []validation.Constraint{{Target: "accountName.Name", Name: validation.Null, Rule: true, Chain: nil}, @@ -97,8 +108,8 @@ func (client AccountsClient) CheckNameAvailabilityPreparer(ctx context.Context, // CheckNameAvailabilitySender sends the CheckNameAvailability request. The method will close the // http.Response Body if it receives an error. 
func (client AccountsClient) CheckNameAvailabilitySender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // CheckNameAvailabilityResponder handles the response to the CheckNameAvailability request. The method always @@ -124,6 +135,16 @@ func (client AccountsClient) CheckNameAvailabilityResponder(resp *http.Response) // must be between 3 and 24 characters in length and use numbers and lower-case letters only. // parameters - the parameters to provide for the created account. func (client AccountsClient) Create(ctx context.Context, resourceGroupName string, accountName string, parameters AccountCreateParameters) (result AccountsCreateFuture, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/AccountsClient.Create") + defer func() { + sc := -1 + if result.Response() != nil { + sc = result.Response().StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: accountName, Constraints: []validation.Constraint{{Target: "accountName", Name: validation.MaxLength, Rule: 24, Chain: nil}, @@ -181,13 +202,9 @@ func (client AccountsClient) CreatePreparer(ctx context.Context, resourceGroupNa // CreateSender sends the Create request. The method will close the // http.Response Body if it receives an error. func (client AccountsClient) CreateSender(req *http.Request) (future AccountsCreateFuture, err error) { + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) var resp *http.Response - resp, err = autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) - if err != nil { - return - } - err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + resp, err = autorest.SendWithSender(client, req, sd...) if err != nil { return } @@ -214,6 +231,16 @@ func (client AccountsClient) CreateResponder(resp *http.Response) (result Accoun // accountName - the name of the storage account within the specified resource group. Storage account names // must be between 3 and 24 characters in length and use numbers and lower-case letters only. func (client AccountsClient) Delete(ctx context.Context, resourceGroupName string, accountName string) (result autorest.Response, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/AccountsClient.Delete") + defer func() { + sc := -1 + if result.Response != nil { + sc = result.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: accountName, Constraints: []validation.Constraint{{Target: "accountName", Name: validation.MaxLength, Rule: 24, Chain: nil}, @@ -266,8 +293,8 @@ func (client AccountsClient) DeletePreparer(ctx context.Context, resourceGroupNa // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. func (client AccountsClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) 
} // DeleteResponder handles the response to the Delete request. The method always @@ -289,6 +316,16 @@ func (client AccountsClient) DeleteResponder(resp *http.Response) (result autore // accountName - the name of the storage account within the specified resource group. Storage account names // must be between 3 and 24 characters in length and use numbers and lower-case letters only. func (client AccountsClient) GetProperties(ctx context.Context, resourceGroupName string, accountName string) (result Account, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/AccountsClient.GetProperties") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: accountName, Constraints: []validation.Constraint{{Target: "accountName", Name: validation.MaxLength, Rule: 24, Chain: nil}, @@ -341,8 +378,8 @@ func (client AccountsClient) GetPropertiesPreparer(ctx context.Context, resource // GetPropertiesSender sends the GetProperties request. The method will close the // http.Response Body if it receives an error. func (client AccountsClient) GetPropertiesSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // GetPropertiesResponder handles the response to the GetProperties request. The method always @@ -361,6 +398,16 @@ func (client AccountsClient) GetPropertiesResponder(resp *http.Response) (result // List lists all the storage accounts available under the subscription. Note that storage keys are not returned; use // the ListKeys operation for this. func (client AccountsClient) List(ctx context.Context) (result AccountListResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/AccountsClient.List") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } req, err := client.ListPreparer(ctx) if err != nil { err = autorest.NewErrorWithError(err, "storage.AccountsClient", "List", nil, "Failure preparing request") @@ -404,8 +451,8 @@ func (client AccountsClient) ListPreparer(ctx context.Context) (*http.Request, e // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client AccountsClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // ListResponder handles the response to the List request. The method always @@ -426,6 +473,16 @@ func (client AccountsClient) ListResponder(resp *http.Response) (result AccountL // Parameters: // resourceGroupName - the name of the resource group within the user's subscription. 
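Stepping back to the long-running Create above before ListByResourceGroup: the `CreateSender` change stops inspecting the initial response status, leaving completion entirely to the returned `AccountsCreateFuture`. A sketch of how a caller typically drives that future, assuming the `storage` management package from this vendor tree (parameter construction is left to the caller):

```go
package demo

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2016-01-01/storage"
)

// createAccount blocks until the long-running create finishes, then fetches
// the typed result.
func createAccount(ctx context.Context, client storage.AccountsClient, rg, name string, params storage.AccountCreateParameters) (storage.Account, error) {
	future, err := client.Create(ctx, rg, name, params)
	if err != nil {
		return storage.Account{}, err
	}
	// WaitForCompletionRef polls using DoneWithContext, matching the
	// future.Done -> future.DoneWithContext migration in models.go below.
	if err := future.WaitForCompletionRef(ctx, client.Client); err != nil {
		return storage.Account{}, err
	}
	return future.Result(client)
}
```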
func (client AccountsClient) ListByResourceGroup(ctx context.Context, resourceGroupName string) (result AccountListResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/AccountsClient.ListByResourceGroup") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } req, err := client.ListByResourceGroupPreparer(ctx, resourceGroupName) if err != nil { err = autorest.NewErrorWithError(err, "storage.AccountsClient", "ListByResourceGroup", nil, "Failure preparing request") @@ -470,8 +527,8 @@ func (client AccountsClient) ListByResourceGroupPreparer(ctx context.Context, re // ListByResourceGroupSender sends the ListByResourceGroup request. The method will close the // http.Response Body if it receives an error. func (client AccountsClient) ListByResourceGroupSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // ListByResourceGroupResponder handles the response to the ListByResourceGroup request. The method always @@ -493,6 +550,16 @@ func (client AccountsClient) ListByResourceGroupResponder(resp *http.Response) ( // accountName - the name of the storage account within the specified resource group. Storage account names // must be between 3 and 24 characters in length and use numbers and lower-case letters only. func (client AccountsClient) ListKeys(ctx context.Context, resourceGroupName string, accountName string) (result AccountListKeysResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/AccountsClient.ListKeys") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: accountName, Constraints: []validation.Constraint{{Target: "accountName", Name: validation.MaxLength, Rule: 24, Chain: nil}, @@ -545,8 +612,8 @@ func (client AccountsClient) ListKeysPreparer(ctx context.Context, resourceGroup // ListKeysSender sends the ListKeys request. The method will close the // http.Response Body if it receives an error. func (client AccountsClient) ListKeysSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // ListKeysResponder handles the response to the ListKeys request. The method always @@ -569,6 +636,16 @@ func (client AccountsClient) ListKeysResponder(resp *http.Response) (result Acco // must be between 3 and 24 characters in length and use numbers and lower-case letters only. // regenerateKey - specifies name of the key which should be regenerated -- key1 or key2. 
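The RegenerateKey operation that follows pairs naturally with ListKeys above for key rotation. A small usage sketch against the types defined in this hunk (the key name and output handling are illustrative):

```go
package demo

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2016-01-01/storage"
)

// rotateKey regenerates key1 and reads the fresh key list back; AccountKey
// fields are READ-ONLY, so they only ever flow service -> client.
func rotateKey(ctx context.Context, client storage.AccountsClient, rg, account string) error {
	keyName := "key1"
	keys, err := client.RegenerateKey(ctx, rg, account, storage.AccountRegenerateKeyParameters{KeyName: &keyName})
	if err != nil {
		return err
	}
	if keys.Keys != nil {
		for _, k := range *keys.Keys {
			fmt.Println(*k.KeyName)
		}
	}
	return nil
}
```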
func (client AccountsClient) RegenerateKey(ctx context.Context, resourceGroupName string, accountName string, regenerateKey AccountRegenerateKeyParameters) (result AccountListKeysResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/AccountsClient.RegenerateKey") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: accountName, Constraints: []validation.Constraint{{Target: "accountName", Name: validation.MaxLength, Rule: 24, Chain: nil}, @@ -625,8 +702,8 @@ func (client AccountsClient) RegenerateKeyPreparer(ctx context.Context, resource // RegenerateKeySender sends the RegenerateKey request. The method will close the // http.Response Body if it receives an error. func (client AccountsClient) RegenerateKeySender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // RegenerateKeyResponder handles the response to the RegenerateKey request. The method always @@ -654,6 +731,16 @@ func (client AccountsClient) RegenerateKeyResponder(resp *http.Response) (result // must be between 3 and 24 characters in length and use numbers and lower-case letters only. // parameters - the parameters to provide for the updated account. func (client AccountsClient) Update(ctx context.Context, resourceGroupName string, accountName string, parameters AccountUpdateParameters) (result Account, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/AccountsClient.Update") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } if err := validation.Validate([]validation.Validation{ {TargetValue: accountName, Constraints: []validation.Constraint{{Target: "accountName", Name: validation.MaxLength, Rule: 24, Chain: nil}, @@ -708,8 +795,8 @@ func (client AccountsClient) UpdatePreparer(ctx context.Context, resourceGroupNa // UpdateSender sends the Update request. The method will close the // http.Response Body if it receives an error. func (client AccountsClient) UpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // UpdateResponder handles the response to the Update request. The method always diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2016-01-01/storage/models.go b/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2016-01-01/storage/models.go index 4fdf467fa..f8dbdc078 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2016-01-01/storage/models.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2016-01-01/storage/models.go @@ -18,6 +18,7 @@ package storage // Changes may cause incorrect behavior and will be lost if the code is regenerated. 
import ( + "context" "encoding/json" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" @@ -25,6 +26,9 @@ import ( "net/http" ) +// The package's fully qualified name. +const fqdn = "github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2016-01-01/storage" + // AccessTier enumerates the values for access tier. type AccessTier string @@ -179,16 +183,16 @@ func PossibleUsageUnitValues() []UsageUnit { // Account the storage account. type Account struct { autorest.Response `json:"-"` - // Sku - Gets the SKU. + // Sku - READ-ONLY; Gets the SKU. Sku *Sku `json:"sku,omitempty"` - // Kind - Gets the Kind. Possible values include: 'Storage', 'BlobStorage' + // Kind - READ-ONLY; Gets the Kind. Possible values include: 'Storage', 'BlobStorage' Kind Kind `json:"kind,omitempty"` *AccountProperties `json:"properties,omitempty"` - // ID - Resource Id + // ID - READ-ONLY; Resource Id ID *string `json:"id,omitempty"` - // Name - Resource name + // Name - READ-ONLY; Resource name Name *string `json:"name,omitempty"` - // Type - Resource type + // Type - READ-ONLY; Resource type Type *string `json:"type,omitempty"` // Location - Resource location Location *string `json:"location,omitempty"` @@ -199,24 +203,9 @@ type Account struct { // MarshalJSON is the custom marshaler for Account. func (a Account) MarshalJSON() ([]byte, error) { objectMap := make(map[string]interface{}) - if a.Sku != nil { - objectMap["sku"] = a.Sku - } - if a.Kind != "" { - objectMap["kind"] = a.Kind - } if a.AccountProperties != nil { objectMap["properties"] = a.AccountProperties } - if a.ID != nil { - objectMap["id"] = a.ID - } - if a.Name != nil { - objectMap["name"] = a.Name - } - if a.Type != nil { - objectMap["type"] = a.Type - } if a.Location != nil { objectMap["location"] = a.Location } @@ -415,53 +404,53 @@ func (acp *AccountCreateParameters) UnmarshalJSON(body []byte) error { // AccountKey an access key for the storage account. type AccountKey struct { - // KeyName - Name of the key. + // KeyName - READ-ONLY; Name of the key. KeyName *string `json:"keyName,omitempty"` - // Value - Base 64-encoded value of the key. + // Value - READ-ONLY; Base 64-encoded value of the key. Value *string `json:"value,omitempty"` - // Permissions - Permissions for the key -- read-only or full permissions. Possible values include: 'READ', 'FULL' + // Permissions - READ-ONLY; Permissions for the key -- read-only or full permissions. Possible values include: 'READ', 'FULL' Permissions KeyPermission `json:"permissions,omitempty"` } // AccountListKeysResult the response from the ListKeys operation. type AccountListKeysResult struct { autorest.Response `json:"-"` - // Keys - Gets the list of storage account keys and their properties for the specified storage account. + // Keys - READ-ONLY; Gets the list of storage account keys and their properties for the specified storage account. Keys *[]AccountKey `json:"keys,omitempty"` } // AccountListResult the response from the List Storage Accounts operation. type AccountListResult struct { autorest.Response `json:"-"` - // Value - Gets the list of storage accounts and their properties. + // Value - READ-ONLY; Gets the list of storage accounts and their properties. Value *[]Account `json:"value,omitempty"` } // AccountProperties ... type AccountProperties struct { - // ProvisioningState - Gets the status of the storage account at the time the operation was called. 
Possible values include: 'Creating', 'ResolvingDNS', 'Succeeded' + // ProvisioningState - READ-ONLY; Gets the status of the storage account at the time the operation was called. Possible values include: 'Creating', 'ResolvingDNS', 'Succeeded' ProvisioningState ProvisioningState `json:"provisioningState,omitempty"` - // PrimaryEndpoints - Gets the URLs that are used to perform a retrieval of a public blob, queue, or table object. Note that Standard_ZRS and Premium_LRS accounts only return the blob endpoint. + // PrimaryEndpoints - READ-ONLY; Gets the URLs that are used to perform a retrieval of a public blob, queue, or table object. Note that Standard_ZRS and Premium_LRS accounts only return the blob endpoint. PrimaryEndpoints *Endpoints `json:"primaryEndpoints,omitempty"` - // PrimaryLocation - Gets the location of the primary data center for the storage account. + // PrimaryLocation - READ-ONLY; Gets the location of the primary data center for the storage account. PrimaryLocation *string `json:"primaryLocation,omitempty"` - // StatusOfPrimary - Gets the status indicating whether the primary location of the storage account is available or unavailable. Possible values include: 'Available', 'Unavailable' + // StatusOfPrimary - READ-ONLY; Gets the status indicating whether the primary location of the storage account is available or unavailable. Possible values include: 'Available', 'Unavailable' StatusOfPrimary AccountStatus `json:"statusOfPrimary,omitempty"` - // LastGeoFailoverTime - Gets the timestamp of the most recent instance of a failover to the secondary location. Only the most recent timestamp is retained. This element is not returned if there has never been a failover instance. Only available if the accountType is Standard_GRS or Standard_RAGRS. + // LastGeoFailoverTime - READ-ONLY; Gets the timestamp of the most recent instance of a failover to the secondary location. Only the most recent timestamp is retained. This element is not returned if there has never been a failover instance. Only available if the accountType is Standard_GRS or Standard_RAGRS. LastGeoFailoverTime *date.Time `json:"lastGeoFailoverTime,omitempty"` - // SecondaryLocation - Gets the location of the geo-replicated secondary for the storage account. Only available if the accountType is Standard_GRS or Standard_RAGRS. + // SecondaryLocation - READ-ONLY; Gets the location of the geo-replicated secondary for the storage account. Only available if the accountType is Standard_GRS or Standard_RAGRS. SecondaryLocation *string `json:"secondaryLocation,omitempty"` - // StatusOfSecondary - Gets the status indicating whether the secondary location of the storage account is available or unavailable. Only available if the SKU name is Standard_GRS or Standard_RAGRS. Possible values include: 'Available', 'Unavailable' + // StatusOfSecondary - READ-ONLY; Gets the status indicating whether the secondary location of the storage account is available or unavailable. Only available if the SKU name is Standard_GRS or Standard_RAGRS. Possible values include: 'Available', 'Unavailable' StatusOfSecondary AccountStatus `json:"statusOfSecondary,omitempty"` - // CreationTime - Gets the creation date and time of the storage account in UTC. + // CreationTime - READ-ONLY; Gets the creation date and time of the storage account in UTC. CreationTime *date.Time `json:"creationTime,omitempty"` - // CustomDomain - Gets the custom domain the user assigned to this storage account. 
+ // CustomDomain - READ-ONLY; Gets the custom domain the user assigned to this storage account. CustomDomain *CustomDomain `json:"customDomain,omitempty"` - // SecondaryEndpoints - Gets the URLs that are used to perform a retrieval of a public blob, queue, or table object from the secondary location of the storage account. Only available if the SKU name is Standard_RAGRS. + // SecondaryEndpoints - READ-ONLY; Gets the URLs that are used to perform a retrieval of a public blob, queue, or table object from the secondary location of the storage account. Only available if the SKU name is Standard_RAGRS. SecondaryEndpoints *Endpoints `json:"secondaryEndpoints,omitempty"` - // Encryption - Gets the encryption settings on the account. If unspecified, the account is unencrypted. + // Encryption - READ-ONLY; Gets the encryption settings on the account. If unspecified, the account is unencrypted. Encryption *Encryption `json:"encryption,omitempty"` - // AccessTier - Required for storage accounts where kind = BlobStorage. The access tier used for billing. Possible values include: 'Hot', 'Cool' + // AccessTier - READ-ONLY; Required for storage accounts where kind = BlobStorage. The access tier used for billing. Possible values include: 'Hot', 'Cool' AccessTier AccessTier `json:"accessTier,omitempty"` } @@ -490,7 +479,8 @@ type AccountRegenerateKeyParameters struct { KeyName *string `json:"keyName,omitempty"` } -// AccountsCreateFuture an abstraction for monitoring and retrieving the results of a long-running operation. +// AccountsCreateFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. type AccountsCreateFuture struct { azure.Future } @@ -499,7 +489,7 @@ type AccountsCreateFuture struct { // If the operation has not completed it will return an error. func (future *AccountsCreateFuture) Result(client AccountsClient) (a Account, err error) { var done bool - done, err = future.Done(client) + done, err = future.DoneWithContext(context.Background(), client) if err != nil { err = autorest.NewErrorWithError(err, "storage.AccountsCreateFuture", "Result", future.Response(), "Polling failure") return @@ -518,7 +508,8 @@ func (future *AccountsCreateFuture) Result(client AccountsClient) (a Account, er return } -// AccountUpdateParameters the parameters that can be provided when updating the storage account properties. +// AccountUpdateParameters the parameters that can be provided when updating the storage account +// properties. type AccountUpdateParameters struct { // Sku - Gets or sets the SKU name. Note that the SKU name cannot be updated to Standard_ZRS or Premium_LRS, nor can accounts of those sku names be updated to any other value. Sku *Sku `json:"sku,omitempty"` @@ -587,11 +578,11 @@ func (aup *AccountUpdateParameters) UnmarshalJSON(body []byte) error { // CheckNameAvailabilityResult the CheckNameAvailability operation response. type CheckNameAvailabilityResult struct { autorest.Response `json:"-"` - // NameAvailable - Gets a boolean value that indicates whether the name is available for you to use. If true, the name is available. If false, the name has already been taken or is invalid and cannot be used. + // NameAvailable - READ-ONLY; Gets a boolean value that indicates whether the name is available for you to use. If true, the name is available. If false, the name has already been taken or is invalid and cannot be used. NameAvailable *bool `json:"nameAvailable,omitempty"` - // Reason - Gets the reason that a storage account name could not be used. 
The Reason element is only returned if NameAvailable is false. Possible values include: 'AccountNameInvalid', 'AlreadyExists' + // Reason - READ-ONLY; Gets the reason that a storage account name could not be used. The Reason element is only returned if NameAvailable is false. Possible values include: 'AccountNameInvalid', 'AlreadyExists' Reason Reason `json:"reason,omitempty"` - // Message - Gets an error message explaining the Reason value in more detail. + // Message - READ-ONLY; Gets an error message explaining the Reason value in more detail. Message *string `json:"message,omitempty"` } @@ -599,8 +590,8 @@ type CheckNameAvailabilityResult struct { type CustomDomain struct { // Name - Gets or sets the custom domain name assigned to the storage account. Name is the CNAME source. Name *string `json:"name,omitempty"` - // UseSubDomain - Indicates whether indirect CName validation is enabled. Default value is false. This should only be set on updates. - UseSubDomain *bool `json:"useSubDomain,omitempty"` + // UseSubDomainName - Indicates whether indirect CName validation is enabled. Default value is false. This should only be set on updates. + UseSubDomainName *bool `json:"useSubDomainName,omitempty"` } // Encryption the encryption settings on the storage account. @@ -615,7 +606,7 @@ type Encryption struct { type EncryptionService struct { // Enabled - A boolean indicating whether or not the service encrypts the data as it is stored. Enabled *bool `json:"enabled,omitempty"` - // LastEnabledTime - Gets a rough estimate of the date/time when the encryption was last enabled by the user. Only returned when encryption is enabled. There might be some unencrypted blobs which were written after this time, as it is just a rough estimate. + // LastEnabledTime - READ-ONLY; Gets a rough estimate of the date/time when the encryption was last enabled by the user. Only returned when encryption is enabled. There might be some unencrypted blobs which were written after this time, as it is just a rough estimate. LastEnabledTime *date.Time `json:"lastEnabledTime,omitempty"` } @@ -627,23 +618,23 @@ type EncryptionServices struct { // Endpoints the URIs that are used to perform a retrieval of a public blob, queue, or table object. type Endpoints struct { - // Blob - Gets the blob endpoint. + // Blob - READ-ONLY; Gets the blob endpoint. Blob *string `json:"blob,omitempty"` - // Queue - Gets the queue endpoint. + // Queue - READ-ONLY; Gets the queue endpoint. Queue *string `json:"queue,omitempty"` - // Table - Gets the table endpoint. + // Table - READ-ONLY; Gets the table endpoint. Table *string `json:"table,omitempty"` - // File - Gets the file endpoint. + // File - READ-ONLY; Gets the file endpoint. File *string `json:"file,omitempty"` } // Resource ... type Resource struct { - // ID - Resource Id + // ID - READ-ONLY; Resource Id ID *string `json:"id,omitempty"` - // Name - Resource name + // Name - READ-ONLY; Resource name Name *string `json:"name,omitempty"` - // Type - Resource type + // Type - READ-ONLY; Resource type Type *string `json:"type,omitempty"` // Location - Resource location Location *string `json:"location,omitempty"` @@ -654,15 +645,6 @@ type Resource struct { // MarshalJSON is the custom marshaler for Resource. 
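Before the Resource marshaler below: the custom marshalers here now skip every field marked READ-ONLY, so service-owned values such as `id`, `name`, and `type` are never echoed back on create or update calls. A standalone sketch of the resulting behavior (`demoResource` is an illustrative stand-in, not the generated type):

```go
package demo

import (
	"encoding/json"
	"fmt"
)

// demoResource mirrors the pattern: read-only fields stay on the struct for
// reads but are omitted by the custom marshaler on writes.
type demoResource struct {
	ID       *string            // READ-ONLY; populated by the service
	Location *string            `json:"location,omitempty"`
	Tags     map[string]*string `json:"tags,omitempty"`
}

func (r demoResource) MarshalJSON() ([]byte, error) {
	objectMap := make(map[string]interface{})
	if r.Location != nil {
		objectMap["location"] = r.Location
	}
	if r.Tags != nil {
		objectMap["tags"] = r.Tags
	}
	return json.Marshal(objectMap)
}

func demoMarshal() {
	id, loc := "/subscriptions/...", "westeurope"
	b, _ := json.Marshal(demoResource{ID: &id, Location: &loc})
	fmt.Println(string(b)) // {"location":"westeurope"} -- the id never leaves the client
}
```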
func (r Resource) MarshalJSON() ([]byte, error) { objectMap := make(map[string]interface{}) - if r.ID != nil { - objectMap["id"] = r.ID - } - if r.Name != nil { - objectMap["name"] = r.Name - } - if r.Type != nil { - objectMap["type"] = r.Type - } if r.Location != nil { objectMap["location"] = r.Location } @@ -676,19 +658,19 @@ func (r Resource) MarshalJSON() ([]byte, error) { type Sku struct { // Name - Gets or sets the sku name. Required for account creation; optional for update. Note that in older versions, sku name was called accountType. Possible values include: 'StandardLRS', 'StandardGRS', 'StandardRAGRS', 'StandardZRS', 'PremiumLRS' Name SkuName `json:"name,omitempty"` - // Tier - Gets the sku tier. This is based on the SKU name. Possible values include: 'Standard', 'Premium' + // Tier - READ-ONLY; Gets the sku tier. This is based on the SKU name. Possible values include: 'Standard', 'Premium' Tier SkuTier `json:"tier,omitempty"` } // Usage describes Storage Resource Usage. type Usage struct { - // Unit - Gets the unit of measurement. Possible values include: 'Count', 'Bytes', 'Seconds', 'Percent', 'CountsPerSecond', 'BytesPerSecond' + // Unit - READ-ONLY; Gets the unit of measurement. Possible values include: 'Count', 'Bytes', 'Seconds', 'Percent', 'CountsPerSecond', 'BytesPerSecond' Unit UsageUnit `json:"unit,omitempty"` - // CurrentValue - Gets the current count of the allocated resources in the subscription. + // CurrentValue - READ-ONLY; Gets the current count of the allocated resources in the subscription. CurrentValue *int32 `json:"currentValue,omitempty"` - // Limit - Gets the maximum count of the resources that can be allocated in the subscription. + // Limit - READ-ONLY; Gets the maximum count of the resources that can be allocated in the subscription. Limit *int32 `json:"limit,omitempty"` - // Name - Gets the name of the type of usage. + // Name - READ-ONLY; Gets the name of the type of usage. Name *UsageName `json:"name,omitempty"` } @@ -701,8 +683,8 @@ type UsageListResult struct { // UsageName the usage names that can be used; currently limited to StorageAccount. type UsageName struct { - // Value - Gets a string describing the resource name. + // Value - READ-ONLY; Gets a string describing the resource name. Value *string `json:"value,omitempty"` - // LocalizedValue - Gets a localized string describing the resource name. + // LocalizedValue - READ-ONLY; Gets a localized string describing the resource name. LocalizedValue *string `json:"localizedValue,omitempty"` } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2016-01-01/storage/usage.go b/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2016-01-01/storage/usage.go index 1c136faf7..cc6ce3315 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2016-01-01/storage/usage.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2016-01-01/storage/usage.go @@ -21,6 +21,7 @@ import ( "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/tracing" "net/http" ) @@ -41,6 +42,16 @@ func NewUsageClientWithBaseURI(baseURI string, subscriptionID string) UsageClien // List gets the current usage count and the limit for the resources under the subscription. 
func (client UsageClient) List(ctx context.Context) (result UsageListResult, err error) { + if tracing.IsEnabled() { + ctx = tracing.StartSpan(ctx, fqdn+"/UsageClient.List") + defer func() { + sc := -1 + if result.Response.Response != nil { + sc = result.Response.Response.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + } req, err := client.ListPreparer(ctx) if err != nil { err = autorest.NewErrorWithError(err, "storage.UsageClient", "List", nil, "Failure preparing request") @@ -84,8 +95,8 @@ func (client UsageClient) ListPreparer(ctx context.Context) (*http.Request, erro // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client UsageClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req, - azure.DoRetryWithRegistration(client.Client)) + sd := autorest.GetSendDecorators(req.Context(), azure.DoRetryWithRegistration(client.Client)) + return autorest.SendWithSender(client, req, sd...) } // ListResponder handles the response to the List request. The method always diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/blobsasuri.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/blobsasuri.go index 31894dbfc..62e461a55 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/blobsasuri.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/blobsasuri.go @@ -107,7 +107,7 @@ func (c *Client) blobAndFileSASURI(options SASOptions, uri, permissions, canonic if options.UseHTTPS { protocols = "https" } - stringToSign, err := blobSASStringToSign(permissions, start, expiry, canonicalizedResource, options.Identifier, options.IP, protocols, c.apiVersion, headers) + stringToSign, err := blobSASStringToSign(permissions, start, expiry, canonicalizedResource, options.Identifier, options.IP, protocols, c.apiVersion, signedResource, "", headers) if err != nil { return "", err } @@ -149,7 +149,7 @@ func (c *Client) blobAndFileSASURI(options SASOptions, uri, permissions, canonic return sasURL.String(), nil } -func blobSASStringToSign(signedPermissions, signedStart, signedExpiry, canonicalizedResource, signedIdentifier, signedIP, protocols, signedVersion string, headers OverrideHeaders) (string, error) { +func blobSASStringToSign(signedPermissions, signedStart, signedExpiry, canonicalizedResource, signedIdentifier, signedIP, protocols, signedVersion, signedResource, signedSnapshotTime string, headers OverrideHeaders) (string, error) { rscc := headers.CacheControl rscd := headers.ContentDisposition rsce := headers.ContentEncoding @@ -160,6 +160,11 @@ func blobSASStringToSign(signedPermissions, signedStart, signedExpiry, canonical canonicalizedResource = "/blob" + canonicalizedResource } + // https://docs.microsoft.com/en-us/rest/api/storageservices/constructing-a-service-sas + if signedVersion >= "2018-11-09" { + return fmt.Sprintf("%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s", signedPermissions, signedStart, signedExpiry, canonicalizedResource, signedIdentifier, signedIP, protocols, signedVersion, signedResource, signedSnapshotTime, rscc, rscd, rsce, rscl, rsct), nil + } + // https://msdn.microsoft.com/en-us/library/azure/dn140255.aspx#Anchor_12 if signedVersion >= "2015-04-05" { return fmt.Sprintf("%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s", signedPermissions, signedStart, signedExpiry, canonicalizedResource, signedIdentifier, signedIP, protocols, signedVersion, rscc, rscd, rsce, rscl, rsct), nil diff --git 
a/vendor/github.com/Azure/azure-sdk-for-go/storage/blockblob.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/blockblob.go index c9c62d799..bd19eccc4 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/blockblob.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/blockblob.go @@ -197,6 +197,47 @@ func (b *Blob) PutBlockWithLength(blockID string, size uint64, blob io.Reader, o return b.respondCreation(resp, BlobTypeBlock) } +// PutBlockFromURLOptions includes the options for a put block from URL operation +type PutBlockFromURLOptions struct { + PutBlockOptions + + SourceContentMD5 string `header:"x-ms-source-content-md5"` + SourceContentCRC64 string `header:"x-ms-source-content-crc64"` +} + +// PutBlockFromURL copies data of exactly the specified size from the specified URL to +// the block blob with the given ID. It is an alternative to PutBlock where data +// comes from a remote URL and the offset and length are known in advance. +// +// The API rejects requests with size > 100 MiB (but this limit is not +// checked by the SDK). +// +// See https://docs.microsoft.com/en-us/rest/api/storageservices/put-block-from-url +func (b *Blob) PutBlockFromURL(blockID string, blobURL string, offset int64, size uint64, options *PutBlockFromURLOptions) error { + query := url.Values{ + "comp": {"block"}, + "blockid": {blockID}, + } + headers := b.Container.bsc.client.getStandardHeaders() + // The value of this header must be set to zero. + // When the length is not zero, the operation will fail with the status code 400 (Bad Request). + headers["Content-Length"] = "0" + headers["x-ms-copy-source"] = blobURL + headers["x-ms-source-range"] = fmt.Sprintf("bytes=%d-%d", offset, uint64(offset)+size-1) + + if options != nil { + query = addTimeout(query, options.Timeout) + headers = mergeHeaders(headers, headersFromStruct(*options)) + } + uri := b.Container.bsc.client.getEndpoint(blobServiceName, b.buildPath(), query) + + resp, err := b.Container.bsc.client.exec(http.MethodPut, uri, headers, nil, b.Container.bsc.auth) + if err != nil { + return err + } + return b.respondCreation(resp, BlobTypeBlock) +} + // PutBlockListOptions includes the options for a put block list operation type PutBlockListOptions struct { Timeout uint diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/client.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/client.go index 427558b5d..99702effe 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/client.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/client.go @@ -46,7 +46,7 @@ const ( // DefaultAPIVersion is the Azure Storage API version string used when a // basic client is created.
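The new `PutBlockFromURL` above stages a block server-side from a readable source URL (for example, one carrying a SAS token); the staged blocks still have to be committed with `PutBlockList`. A hedged usage sketch, assuming only types from this `storage` package (`PutBlockList`, `Block`, and `BlockStatusUncommitted` already exist there):

```go
package demo

import (
	"encoding/base64"

	"github.com/Azure/azure-sdk-for-go/storage"
)

// copyBlock stages one 4 MiB block from a source blob URL and commits it.
// Block IDs must be base64-encoded and of equal length within a blob.
func copyBlock(blob *storage.Blob, srcURL string) error {
	blockID := base64.StdEncoding.EncodeToString([]byte("block-000000"))
	if err := blob.PutBlockFromURL(blockID, srcURL, 0, 4*1024*1024, nil); err != nil {
		return err
	}
	blocks := []storage.Block{{ID: blockID, Status: storage.BlockStatusUncommitted}}
	return blob.PutBlockList(blocks, nil)
}
```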
- DefaultAPIVersion = "2016-05-31" + DefaultAPIVersion = "2018-03-28" defaultUseHTTPS = true defaultRetryAttempts = 5 @@ -367,11 +367,14 @@ func newSASClient(accountName, baseURL string, sasToken url.Values) Client { accountName: accountName, baseURL: baseURL, accountSASToken: sasToken, + useHTTPS: defaultUseHTTPS, } c.userAgent = c.getDefaultUserAgent() // Get API version and protocol from token c.apiVersion = sasToken.Get("sv") - c.useHTTPS = sasToken.Get("spr") == "https" + if spr := sasToken.Get("spr"); spr != "" { + c.useHTTPS = spr == "https" + } return c } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/entity.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/entity.go index fbbcb93ba..385253527 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/entity.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/entity.go @@ -27,7 +27,7 @@ import ( "strings" "time" - "github.com/satori/go.uuid" + uuid "github.com/satori/go.uuid" ) // Annotating as secure for gas scanning @@ -257,6 +257,9 @@ func (e *Entity) MarshalJSON() ([]byte, error) { case int64: completeMap[typeKey] = OdataInt64 completeMap[k] = fmt.Sprintf("%v", v) + case float32, float64: + completeMap[typeKey] = OdataDouble + completeMap[k] = fmt.Sprintf("%v", v) default: completeMap[k] = v } @@ -264,7 +267,8 @@ func (e *Entity) MarshalJSON() ([]byte, error) { if !(completeMap[k] == OdataBinary || completeMap[k] == OdataDateTime || completeMap[k] == OdataGUID || - completeMap[k] == OdataInt64) { + completeMap[k] == OdataInt64 || + completeMap[k] == OdataDouble) { return nil, fmt.Errorf("Odata.type annotation %v value is not valid", k) } valueKey := strings.TrimSuffix(k, OdataTypeSuffix) @@ -339,6 +343,12 @@ func (e *Entity) UnmarshalJSON(data []byte) error { return fmt.Errorf(errorTemplate, err) } props[valueKey] = i + case OdataDouble: + f, err := strconv.ParseFloat(str, 64) + if err != nil { + return fmt.Errorf(errorTemplate, err) + } + props[valueKey] = f default: return fmt.Errorf(errorTemplate, fmt.Sprintf("%v is not supported", v)) } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/file.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/file.go index 06bbe4ba0..6a480b12a 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/file.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/file.go @@ -29,7 +29,11 @@ const fourMB = uint64(4194304) const oneTB = uint64(1099511627776) // Export maximum range and file sizes + +// MaxRangeSize defines the maximum size in bytes for a file range. const MaxRangeSize = fourMB + +// MaxFileSize defines the maximum size in bytes for a file. const MaxFileSize = oneTB // File represents a file on a share. 
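With the `entity.go` change above, `float32`/`float64` properties are annotated as `Edm.Double` when marshaled and parsed back into `float64` on the way in, instead of falling through untyped. A sketch of storing one, assuming the table client helpers from this package (the table and key names are illustrative):

```go
package demo

import "github.com/Azure/azure-sdk-for-go/storage"

// insertReading stores a float64 property; with the change above it is
// annotated as Edm.Double on the wire and round-trips as a float64.
func insertReading(ts *storage.TableServiceClient, temp float64) error {
	table := ts.GetTableReference("readings")
	entity := table.GetEntityReference("sensor-1", "2019-01-01T00:00:00Z")
	entity.Properties = map[string]interface{}{"Temperature": temp}
	return entity.Insert(storage.FullMetadata, nil)
}
```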
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/odata.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/odata.go index 800adf129..0690e85ad 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/odata.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/odata.go @@ -26,6 +26,7 @@ const ( OdataBinary = "Edm.Binary" OdataDateTime = "Edm.DateTime" + OdataDouble = "Edm.Double" OdataGUID = "Edm.Guid" OdataInt64 = "Edm.Int64" diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/storageservice.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/storageservice.go index c338975ab..dc4199222 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/storageservice.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/storageservice.go @@ -22,10 +22,12 @@ import ( // ServiceProperties represents the storage account service properties type ServiceProperties struct { - Logging *Logging - HourMetrics *Metrics - MinuteMetrics *Metrics - Cors *Cors + Logging *Logging + HourMetrics *Metrics + MinuteMetrics *Metrics + Cors *Cors + DeleteRetentionPolicy *RetentionPolicy // blob storage only + StaticWebsite *StaticWebsite // blob storage only } // Logging represents the Azure Analytics Logging settings @@ -65,6 +67,16 @@ type CorsRule struct { AllowedHeaders string } +// StaticWebsite - The properties that enable an account to host a static website +type StaticWebsite struct { + // Enabled - Indicates whether this account is hosting a static website + Enabled bool + // IndexDocument - The default name of the index page under each directory + IndexDocument *string + // ErrorDocument404Path - The absolute path of the custom 404 page + ErrorDocument404Path *string +} + func (c Client) getServiceProperties(service string, auth authentication) (*ServiceProperties, error) { query := url.Values{ "restype": {"service"}, @@ -102,10 +114,12 @@ func (c Client) setServiceProperties(props ServiceProperties, service string, au // Ideally, StorageServiceProperties would be the output struct // This is to avoid golint stuttering, while generating the correct XML type StorageServiceProperties struct { - Logging *Logging - HourMetrics *Metrics - MinuteMetrics *Metrics - Cors *Cors + Logging *Logging + HourMetrics *Metrics + MinuteMetrics *Metrics + Cors *Cors + DeleteRetentionPolicy *RetentionPolicy + StaticWebsite *StaticWebsite } input := StorageServiceProperties{ Logging: props.Logging, @@ -113,6 +127,11 @@ func (c Client) setServiceProperties(props ServiceProperties, service string, au MinuteMetrics: props.MinuteMetrics, Cors: props.Cors, } + // only set these fields for blob storage else it's invalid XML + if service == blobServiceName { + input.DeleteRetentionPolicy = props.DeleteRetentionPolicy + input.StaticWebsite = props.StaticWebsite + } body, length, err := xmlMarshal(input) if err != nil { diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/table.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/table.go index 22d9b4f5c..0febf077f 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/table.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/table.go @@ -355,8 +355,12 @@ func (t *Table) queryEntities(uri string, headers map[string]string, ml Metadata return nil, err } v := originalURI.Query() - v.Set(nextPartitionKeyQueryParameter, contToken.NextPartitionKey) - v.Set(nextRowKeyQueryParameter, contToken.NextRowKey) + if contToken.NextPartitionKey != "" { + v.Set(nextPartitionKeyQueryParameter, contToken.NextPartitionKey) + } + if 
contToken.NextRowKey != "" { + v.Set(nextRowKeyQueryParameter, contToken.NextRowKey) + } newURI := t.tsc.client.getEndpoint(tableServiceName, t.buildPath(), v) entities.NextLink = &newURI entities.ml = ml @@ -371,7 +375,7 @@ func extractContinuationTokenFromHeaders(h http.Header) *continuationToken { NextRowKey: h.Get(headerNextRowKey), } - if ct.NextPartitionKey != "" && ct.NextRowKey != "" { + if ct.NextPartitionKey != "" || ct.NextRowKey != "" { return &ct } return nil diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/table_batch.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/table_batch.go index a2159e296..5b05e3e2a 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/table_batch.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/table_batch.go @@ -25,8 +25,6 @@ import ( "net/textproto" "sort" "strings" - - "github.com/marstr/guid" ) // Operation type. Insert, Delete, Replace etc. @@ -132,8 +130,7 @@ func (t *TableBatch) MergeEntity(entity *Entity) { // As per document https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/performing-entity-group-transactions func (t *TableBatch) ExecuteBatch() error { - // Using `github.com/marstr/guid` is in response to issue #947 (https://github.com/Azure/azure-sdk-for-go/issues/947). - id, err := guid.NewGUIDs(guid.CreationStrategyVersion1) + id, err := newUUID() if err != nil { return err } @@ -145,7 +142,7 @@ func (t *TableBatch) ExecuteBatch() error { return err } - id, err = guid.NewGUIDs(guid.CreationStrategyVersion1) + id, err = newUUID() if err != nil { return err } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/util.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/util.go index e8a5dcf8c..677394790 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/util.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/util.go @@ -17,6 +17,7 @@ package storage import ( "bytes" "crypto/hmac" + "crypto/rand" "crypto/sha256" "encoding/base64" "encoding/xml" @@ -29,6 +30,8 @@ import ( "strconv" "strings" "time" + + uuid "github.com/satori/go.uuid" ) var ( @@ -242,3 +245,16 @@ func getMetadataFromHeaders(header http.Header) map[string]string { return metadata } + +// newUUID returns a new uuid using RFC 4122 algorithm. +func newUUID() (uuid.UUID, error) { + u := [16]byte{} + // Set all bits to randomly (or pseudo-randomly) chosen values. + _, err := rand.Read(u[:]) + if err != nil { + return uuid.UUID{}, err + } + u[8] = (u[8]&(0xff>>2) | (0x02 << 6)) // u.setVariant(ReservedRFC4122) + u[6] = (u[6] & 0xF) | (uuid.V4 << 4) // u.setVersion(V4) + return uuid.FromBytes(u[:]) +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/version/version.go b/vendor/github.com/Azure/azure-sdk-for-go/version/version.go index 7f0f6f2b6..073281bb8 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/version/version.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/version/version.go @@ -18,4 +18,4 @@ package version // Changes may cause incorrect behavior and will be lost if the code is regenerated. // Number contains the semantic version of this SDK. 
-const Number = "v21.3.0" +const Number = "v36.2.0" diff --git a/vendor/github.com/Azure/go-autorest/LICENSE b/vendor/github.com/Azure/go-autorest/autorest/LICENSE similarity index 100% rename from vendor/github.com/Azure/go-autorest/LICENSE rename to vendor/github.com/Azure/go-autorest/autorest/LICENSE diff --git a/vendor/github.com/Azure/go-autorest/autorest/adal/LICENSE b/vendor/github.com/Azure/go-autorest/autorest/adal/LICENSE new file mode 100644 index 000000000..b9d6a27ea --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/adal/LICENSE @@ -0,0 +1,191 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ + END OF TERMS AND CONDITIONS + + Copyright 2015 Microsoft Corporation + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/vendor/github.com/Azure/go-autorest/autorest/adal/README.md b/vendor/github.com/Azure/go-autorest/autorest/adal/README.md index 7b0c4bc4d..fec416a9c 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/adal/README.md +++ b/vendor/github.com/Azure/go-autorest/autorest/adal/README.md @@ -135,7 +135,7 @@ resource := "https://management.core.windows.net/" applicationSecret := "APPLICATION_SECRET" spt, err := adal.NewServicePrincipalToken( - oauthConfig, + *oauthConfig, appliationID, applicationSecret, resource, @@ -170,7 +170,7 @@ if err != nil { } spt, err := adal.NewServicePrincipalTokenFromCertificate( - oauthConfig, + *oauthConfig, applicationID, certificate, rsaPrivateKey, @@ -195,7 +195,7 @@ oauthClient := &http.Client{} // Acquire the device code deviceCode, err := adal.InitiateDeviceAuth( oauthClient, - oauthConfig, + *oauthConfig, applicationID, resource) if err != nil { @@ -212,7 +212,7 @@ if err != nil { } spt, err := adal.NewServicePrincipalTokenFromManualToken( - oauthConfig, + *oauthConfig, applicationID, resource, *token, @@ -227,7 +227,7 @@ if (err == nil) { ```Go spt, err := adal.NewServicePrincipalTokenFromUsernamePassword( - oauthConfig, + *oauthConfig, applicationID, username, password, @@ -243,11 +243,11 @@ if (err == nil) { ``` Go spt, err := adal.NewServicePrincipalTokenFromAuthorizationCode( - oauthConfig, + *oauthConfig, applicationID, clientSecret, - authorizationCode, - redirectURI, + authorizationCode, + redirectURI, resource, callbacks...) diff --git a/vendor/github.com/Azure/go-autorest/autorest/adal/config.go b/vendor/github.com/Azure/go-autorest/autorest/adal/config.go index bee5e61dd..fa5964742 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/adal/config.go +++ b/vendor/github.com/Azure/go-autorest/autorest/adal/config.go @@ -15,12 +15,13 @@ package adal // limitations under the License. import ( + "errors" "fmt" "net/url" ) const ( - activeDirectoryAPIVersion = "1.0" + activeDirectoryEndpointTemplate = "%s/oauth2/%s%s" ) // OAuthConfig represents the endpoints needed @@ -46,11 +47,24 @@ func validateStringParam(param, name string) error { // NewOAuthConfig returns an OAuthConfig with tenant specific urls func NewOAuthConfig(activeDirectoryEndpoint, tenantID string) (*OAuthConfig, error) { + apiVer := "1.0" + return NewOAuthConfigWithAPIVersion(activeDirectoryEndpoint, tenantID, &apiVer) +} + +// NewOAuthConfigWithAPIVersion returns an OAuthConfig with tenant specific urls. +// If apiVersion is not nil the "api-version" query parameter will be appended to the endpoint URLs with the specified value. 
+func NewOAuthConfigWithAPIVersion(activeDirectoryEndpoint, tenantID string, apiVersion *string) (*OAuthConfig, error) { if err := validateStringParam(activeDirectoryEndpoint, "activeDirectoryEndpoint"); err != nil { return nil, err } + api := "" // it's legal for tenantID to be empty so don't validate it - const activeDirectoryEndpointTemplate = "%s/oauth2/%s?api-version=%s" + if apiVersion != nil { + if err := validateStringParam(*apiVersion, "apiVersion"); err != nil { + return nil, err + } + api = fmt.Sprintf("?api-version=%s", *apiVersion) + } u, err := url.Parse(activeDirectoryEndpoint) if err != nil { return nil, err } @@ -59,15 +73,15 @@ func NewOAuthConfig(activeDirectoryEndpoint, tenantID string) (*OAuthConfig, err if err != nil { return nil, err } - authorizeURL, err := u.Parse(fmt.Sprintf(activeDirectoryEndpointTemplate, tenantID, "authorize", activeDirectoryAPIVersion)) + authorizeURL, err := u.Parse(fmt.Sprintf(activeDirectoryEndpointTemplate, tenantID, "authorize", api)) if err != nil { return nil, err } - tokenURL, err := u.Parse(fmt.Sprintf(activeDirectoryEndpointTemplate, tenantID, "token", activeDirectoryAPIVersion)) + tokenURL, err := u.Parse(fmt.Sprintf(activeDirectoryEndpointTemplate, tenantID, "token", api)) if err != nil { return nil, err } - deviceCodeURL, err := u.Parse(fmt.Sprintf(activeDirectoryEndpointTemplate, tenantID, "devicecode", activeDirectoryAPIVersion)) + deviceCodeURL, err := u.Parse(fmt.Sprintf(activeDirectoryEndpointTemplate, tenantID, "devicecode", api)) if err != nil { return nil, err } @@ -79,3 +93,59 @@ func NewOAuthConfig(activeDirectoryEndpoint, tenantID string) (*OAuthConfig, err DeviceCodeEndpoint: *deviceCodeURL, }, nil } + +// MultiTenantOAuthConfig provides endpoints for primary and auxiliary tenant IDs. +type MultiTenantOAuthConfig interface { + PrimaryTenant() *OAuthConfig + AuxiliaryTenants() []*OAuthConfig +} + +// OAuthOptions contains optional OAuthConfig creation arguments. +type OAuthOptions struct { + APIVersion string +} + +func (c OAuthOptions) apiVersion() string { + if c.APIVersion != "" { + return c.APIVersion + } + return "1.0" +} + +// NewMultiTenantOAuthConfig creates an object that supports multi-tenant OAuth configuration. +// See https://docs.microsoft.com/en-us/azure/azure-resource-manager/authenticate-multi-tenant for more information.
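As context for the constructor defined next, here is a hedged sketch of how a caller might build a multi-tenant configuration. The endpoint and tenant IDs are placeholders, and the zero-value `OAuthOptions` falls back to api-version 1.0 per the helper above.

```go
package main

import (
	"fmt"
	"log"

	"github.com/Azure/go-autorest/autorest/adal"
)

func main() {
	// Placeholder endpoint and tenant IDs: one primary tenant plus
	// one to three auxiliary tenants.
	cfg, err := adal.NewMultiTenantOAuthConfig(
		"https://login.microsoftonline.com/",
		"primary-tenant-id",
		[]string{"aux-tenant-a", "aux-tenant-b"},
		adal.OAuthOptions{},
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(cfg.PrimaryTenant().TokenEndpoint.String())
	fmt.Println(len(cfg.AuxiliaryTenants())) // 2
}
```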
+func NewMultiTenantOAuthConfig(activeDirectoryEndpoint, primaryTenantID string, auxiliaryTenantIDs []string, options OAuthOptions) (MultiTenantOAuthConfig, error) { + if len(auxiliaryTenantIDs) == 0 || len(auxiliaryTenantIDs) > 3 { + return nil, errors.New("must specify one to three auxiliary tenants") + } + mtCfg := multiTenantOAuthConfig{ + cfgs: make([]*OAuthConfig, len(auxiliaryTenantIDs)+1), + } + apiVer := options.apiVersion() + pri, err := NewOAuthConfigWithAPIVersion(activeDirectoryEndpoint, primaryTenantID, &apiVer) + if err != nil { + return nil, fmt.Errorf("failed to create OAuthConfig for primary tenant: %v", err) + } + mtCfg.cfgs[0] = pri + for i := range auxiliaryTenantIDs { + aux, err := NewOAuthConfig(activeDirectoryEndpoint, auxiliaryTenantIDs[i]) + if err != nil { + return nil, fmt.Errorf("failed to create OAuthConfig for tenant '%s': %v", auxiliaryTenantIDs[i], err) + } + mtCfg.cfgs[i+1] = aux + } + return mtCfg, nil +} + +type multiTenantOAuthConfig struct { + // first config in the slice is the primary tenant + cfgs []*OAuthConfig +} + +func (m multiTenantOAuthConfig) PrimaryTenant() *OAuthConfig { + return m.cfgs[0] +} + +func (m multiTenantOAuthConfig) AuxiliaryTenants() []*OAuthConfig { + return m.cfgs[1:] +} diff --git a/vendor/github.com/Azure/go-autorest/autorest/adal/devicetoken.go b/vendor/github.com/Azure/go-autorest/autorest/adal/devicetoken.go index b38f4c245..914f8af5e 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/adal/devicetoken.go +++ b/vendor/github.com/Azure/go-autorest/autorest/adal/devicetoken.go @@ -24,6 +24,7 @@ package adal */ import ( + "context" "encoding/json" "fmt" "io/ioutil" @@ -101,7 +102,14 @@ type deviceToken struct { // InitiateDeviceAuth initiates a device auth flow. It returns a DeviceCode // that can be used with CheckForUserCompletion or WaitForUserCompletion. +// Deprecated: use InitiateDeviceAuthWithContext() instead. func InitiateDeviceAuth(sender Sender, oauthConfig OAuthConfig, clientID, resource string) (*DeviceCode, error) { + return InitiateDeviceAuthWithContext(context.Background(), sender, oauthConfig, clientID, resource) +} + +// InitiateDeviceAuthWithContext initiates a device auth flow. It returns a DeviceCode +// that can be used with CheckForUserCompletion or WaitForUserCompletion. +func InitiateDeviceAuthWithContext(ctx context.Context, sender Sender, oauthConfig OAuthConfig, clientID, resource string) (*DeviceCode, error) { v := url.Values{ "client_id": []string{clientID}, "resource": []string{resource}, @@ -117,7 +125,7 @@ func InitiateDeviceAuth(sender Sender, oauthConfig OAuthConfig, clientID, resour req.ContentLength = int64(len(s)) req.Header.Set(contentType, mimeTypeFormPost) - resp, err := sender.Do(req) + resp, err := sender.Do(req.WithContext(ctx)) if err != nil { return nil, fmt.Errorf("%s %s: %s", logPrefix, errCodeSendingFails, err.Error()) } @@ -151,7 +159,14 @@ func InitiateDeviceAuth(sender Sender, oauthConfig OAuthConfig, clientID, resour // CheckForUserCompletion takes a DeviceCode and checks with the Azure AD OAuth endpoint // to see if the device flow has: been completed, timed out, or otherwise failed +// Deprecated: use CheckForUserCompletionWithContext() instead. 
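The hunks above thread a `context.Context` through the device-code flow, leaving the old signatures as deprecated wrappers (the first of which follows). A hedged end-to-end sketch, assuming placeholder tenant, client ID, and resource values; note the config is passed by value, matching the README fixes earlier in this diff.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"
	"time"

	"github.com/Azure/go-autorest/autorest/adal"
)

func main() {
	// Placeholder tenant; a real AAD application is required for this to work.
	oauthCfg, err := adal.NewOAuthConfig("https://login.microsoftonline.com/", "my-tenant-id")
	if err != nil {
		log.Fatal(err)
	}

	// Bound the whole flow so an abandoned login cannot hang the process.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	code, err := adal.InitiateDeviceAuthWithContext(ctx, &http.Client{}, *oauthCfg, "my-client-id", "https://management.azure.com/")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(*code.Message) // tells the user where to enter the code

	token, err := adal.WaitForUserCompletionWithContext(ctx, &http.Client{}, code)
	if err != nil {
		log.Fatal(err) // returns ctx.Err() if the deadline elapses first
	}
	fmt.Println(token.Type)
}
```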
func CheckForUserCompletion(sender Sender, code *DeviceCode) (*Token, error) { + return CheckForUserCompletionWithContext(context.Background(), sender, code) +} + +// CheckForUserCompletionWithContext takes a DeviceCode and checks with the Azure AD OAuth endpoint +// to see if the device flow has: been completed, timed out, or otherwise failed +func CheckForUserCompletionWithContext(ctx context.Context, sender Sender, code *DeviceCode) (*Token, error) { v := url.Values{ "client_id": []string{code.ClientID}, "code": []string{*code.DeviceCode}, @@ -169,7 +184,7 @@ func CheckForUserCompletion(sender Sender, code *DeviceCode) (*Token, error) { req.ContentLength = int64(len(s)) req.Header.Set(contentType, mimeTypeFormPost) - resp, err := sender.Do(req) + resp, err := sender.Do(req.WithContext(ctx)) if err != nil { return nil, fmt.Errorf("%s %s: %s", logPrefix, errTokenSendingFails, err.Error()) } @@ -213,12 +228,19 @@ func CheckForUserCompletion(sender Sender, code *DeviceCode) (*Token, error) { // WaitForUserCompletion calls CheckForUserCompletion repeatedly until a token is granted or an error state occurs. // This prevents the user from looping and checking against 'ErrDeviceAuthorizationPending'. +// Deprecated: use WaitForUserCompletionWithContext() instead. func WaitForUserCompletion(sender Sender, code *DeviceCode) (*Token, error) { + return WaitForUserCompletionWithContext(context.Background(), sender, code) +} + +// WaitForUserCompletionWithContext calls CheckForUserCompletion repeatedly until a token is granted or an error +// state occurs. This prevents the user from looping and checking against 'ErrDeviceAuthorizationPending'. +func WaitForUserCompletionWithContext(ctx context.Context, sender Sender, code *DeviceCode) (*Token, error) { intervalDuration := time.Duration(*code.Interval) * time.Second waitDuration := intervalDuration for { - token, err := CheckForUserCompletion(sender, code) + token, err := CheckForUserCompletionWithContext(ctx, sender, code) if err == nil { return token, nil @@ -237,6 +259,11 @@ func WaitForUserCompletion(sender Sender, code *DeviceCode) (*Token, error) { return nil, fmt.Errorf("%s Error waiting for user to complete device flow. 
Server told us to slow_down too much", logPrefix) } - time.Sleep(waitDuration) + select { + case <-time.After(waitDuration): + // noop + case <-ctx.Done(): + return nil, ctx.Err() + } } } diff --git a/vendor/github.com/Azure/go-autorest/autorest/adal/go.mod b/vendor/github.com/Azure/go-autorest/autorest/adal/go.mod new file mode 100644 index 000000000..fdc5b90ca --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/adal/go.mod @@ -0,0 +1,12 @@ +module github.com/Azure/go-autorest/autorest/adal + +go 1.12 + +require ( + github.com/Azure/go-autorest/autorest v0.9.0 + github.com/Azure/go-autorest/autorest/date v0.2.0 + github.com/Azure/go-autorest/autorest/mocks v0.3.0 + github.com/Azure/go-autorest/tracing v0.5.0 + github.com/dgrijalva/jwt-go v3.2.0+incompatible + golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2 +) diff --git a/vendor/github.com/Azure/go-autorest/autorest/adal/go.sum b/vendor/github.com/Azure/go-autorest/autorest/adal/go.sum new file mode 100644 index 000000000..f0a018563 --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/adal/go.sum @@ -0,0 +1,23 @@ +github.com/Azure/go-autorest/autorest v0.9.0 h1:MRvx8gncNaXJqOoLmhNjUAKh33JJF8LyxPhomEtOsjs= +github.com/Azure/go-autorest/autorest v0.9.0/go.mod h1:xyHB1BMZT0cuDHU7I0+g046+BFDTQ8rEZB0s4Yfa6bI= +github.com/Azure/go-autorest/autorest/adal v0.5.0/go.mod h1:8Z9fGy2MpX0PvDjB1pEgQTmVqjGhiHBW7RJJEciWzS0= +github.com/Azure/go-autorest/autorest/date v0.1.0 h1:YGrhWfrgtFs84+h0o46rJrlmsZtyZRg470CqAXTZaGM= +github.com/Azure/go-autorest/autorest/date v0.1.0/go.mod h1:plvfp3oPSKwf2DNjlBjWF/7vwR+cUD/ELuzDCXwHUVA= +github.com/Azure/go-autorest/autorest/date v0.2.0 h1:yW+Zlqf26583pE43KhfnhFcdmSWlm5Ew6bxipnr/tbM= +github.com/Azure/go-autorest/autorest/date v0.2.0/go.mod h1:vcORJHLJEh643/Ioh9+vPmf1Ij9AEBM5FuBIXLmIy0g= +github.com/Azure/go-autorest/autorest/mocks v0.1.0 h1:Kx+AUU2Te+A3JIyYn6Dfs+cFgx5XorQKuIXrZGoq/SI= +github.com/Azure/go-autorest/autorest/mocks v0.1.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0= +github.com/Azure/go-autorest/autorest/mocks v0.2.0 h1:Ww5g4zThfD/6cLb4z6xxgeyDa7QDkizMkJKe0ysZXp0= +github.com/Azure/go-autorest/autorest/mocks v0.2.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0= +github.com/Azure/go-autorest/autorest/mocks v0.3.0 h1:qJumjCaCudz+OcqE9/XtEPfvtOjOmKaui4EOpFI6zZc= +github.com/Azure/go-autorest/autorest/mocks v0.3.0/go.mod h1:a8FDP3DYzQ4RYfVAxAN3SVSiiO77gL2j2ronKKP0syM= +github.com/Azure/go-autorest/logger v0.1.0 h1:ruG4BSDXONFRrZZJ2GUXDiUyVpayPmb1GnWeHDdaNKY= +github.com/Azure/go-autorest/logger v0.1.0/go.mod h1:oExouG+K6PryycPJfVSxi/koC6LSNgds39diKLz7Vrc= +github.com/Azure/go-autorest/tracing v0.5.0 h1:TRn4WjSnkcSy5AEG3pnbtFSwNtwzjr4VYyQflFE619k= +github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk= +github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM= +github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2 h1:VklqNMn3ovrHsnt90PveolxSbWFaJdECFbxSq0Mqo2M= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a h1:1BGLXjeY4akVXGgbC9HugT3Jv3hCI0z56oJR5vAMgBU= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= diff --git a/vendor/github.com/Azure/go-autorest/autorest/adal/go_mod_tidy_hack.go 
b/vendor/github.com/Azure/go-autorest/autorest/adal/go_mod_tidy_hack.go new file mode 100644 index 000000000..28a4bfc4c --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/adal/go_mod_tidy_hack.go @@ -0,0 +1,24 @@ +// +build modhack + +package adal + +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// This file, and the github.com/Azure/go-autorest/autorest import, won't actually become part of +// the resultant binary. + +// Necessary for safely adding multi-module repo. +// See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository +import _ "github.com/Azure/go-autorest/autorest" diff --git a/vendor/github.com/Azure/go-autorest/autorest/adal/sender.go b/vendor/github.com/Azure/go-autorest/autorest/adal/sender.go index 0e5ad14d3..d7e4372bb 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/adal/sender.go +++ b/vendor/github.com/Azure/go-autorest/autorest/adal/sender.go @@ -15,7 +15,12 @@ package adal // limitations under the License. import ( + "crypto/tls" "net/http" + "net/http/cookiejar" + "sync" + + "github.com/Azure/go-autorest/tracing" ) const ( @@ -23,6 +28,9 @@ const ( mimeTypeFormPost = "application/x-www-form-urlencoded" ) +var defaultSender Sender +var defaultSenderInit = &sync.Once{} + // Sender is the interface that wraps the Do method to send HTTP requests. // // The standard http.Client conforms to this interface. @@ -38,14 +46,14 @@ func (sf SenderFunc) Do(r *http.Request) (*http.Response, error) { return sf(r) } -// SendDecorator takes and possibily decorates, by wrapping, a Sender. Decorators may affect the +// SendDecorator takes and possibly decorates, by wrapping, a Sender. Decorators may affect the // http.Request and pass it along or, first, pass the http.Request along then react to the // http.Response result. type SendDecorator func(Sender) Sender // CreateSender creates, decorates, and returns, as a Sender, the default http.Client. func CreateSender(decorators ...SendDecorator) Sender { - return DecorateSender(&http.Client{}, decorators...) + return DecorateSender(sender(), decorators...) } // DecorateSender accepts a Sender and a, possibly empty, set of SendDecorators, which is applies to @@ -58,3 +66,30 @@ func DecorateSender(s Sender, decorators ...SendDecorator) Sender { } return s } + +func sender() Sender { + // note that we can't init defaultSender in init() since it will + // execute before calling code has had a chance to enable tracing + defaultSenderInit.Do(func() { + // Use behaviour compatible with DefaultTransport, but require TLS minimum version. 
+ defaultTransport := http.DefaultTransport.(*http.Transport) + transport := &http.Transport{ + Proxy: defaultTransport.Proxy, + DialContext: defaultTransport.DialContext, + MaxIdleConns: defaultTransport.MaxIdleConns, + IdleConnTimeout: defaultTransport.IdleConnTimeout, + TLSHandshakeTimeout: defaultTransport.TLSHandshakeTimeout, + ExpectContinueTimeout: defaultTransport.ExpectContinueTimeout, + TLSClientConfig: &tls.Config{ + MinVersion: tls.VersionTLS12, + }, + } + var roundTripper http.RoundTripper = transport + if tracing.IsEnabled() { + roundTripper = tracing.NewTransport(transport) + } + j, _ := cookiejar.New(nil) + defaultSender = &http.Client{Jar: j, Transport: roundTripper} + }) + return defaultSender +} diff --git a/vendor/github.com/Azure/go-autorest/autorest/adal/token.go b/vendor/github.com/Azure/go-autorest/autorest/adal/token.go index 32aea8389..33bbd6ea1 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/adal/token.go +++ b/vendor/github.com/Azure/go-autorest/autorest/adal/token.go @@ -26,16 +26,14 @@ import ( "fmt" "io/ioutil" "math" - "net" "net/http" "net/url" - "strconv" + "os" "strings" "sync" "time" "github.com/Azure/go-autorest/autorest/date" - "github.com/Azure/go-autorest/version" "github.com/dgrijalva/jwt-go" ) @@ -65,6 +63,12 @@ const ( // the default number of attempts to refresh an MSI authentication token defaultMaxMSIRefreshAttempts = 5 + + // asMSIEndpointEnv is the environment variable used to store the endpoint on App Service and Functions + asMSIEndpointEnv = "MSI_ENDPOINT" + + // asMSISecretEnv is the environment variable used to store the request secret on App Service and Functions + asMSISecretEnv = "MSI_SECRET" ) // OAuthTokenProvider is an interface which should be implemented by an access token retriever @@ -72,6 +76,12 @@ type OAuthTokenProvider interface { OAuthToken() string } +// MultitenantOAuthTokenProvider provides tokens used for multi-tenant authorization. +type MultitenantOAuthTokenProvider interface { + PrimaryOAuthToken() string + AuxiliaryOAuthTokens() []string +} + // TokenRefreshError is an interface used by errors returned during token refresh. type TokenRefreshError interface { error @@ -96,19 +106,31 @@ type RefresherWithContext interface { // a successful token refresh type TokenRefreshCallback func(Token) error +// TokenRefresh is a type representing a custom callback to refresh a token +type TokenRefresh func(ctx context.Context, resource string) (*Token, error) + // Token encapsulates the access token used to authorize Azure requests. +// https://docs.microsoft.com/en-us/azure/active-directory/develop/v1-oauth2-client-creds-grant-flow#service-to-service-access-token-response type Token struct { AccessToken string `json:"access_token"` RefreshToken string `json:"refresh_token"` - ExpiresIn string `json:"expires_in"` - ExpiresOn string `json:"expires_on"` - NotBefore string `json:"not_before"` + ExpiresIn json.Number `json:"expires_in"` + ExpiresOn json.Number `json:"expires_on"` + NotBefore json.Number `json:"not_before"` Resource string `json:"resource"` Type string `json:"token_type"` } +func newToken() Token { + return Token{ + ExpiresIn: "0", + ExpiresOn: "0", + NotBefore: "0", + } +} + // IsZero returns true if the token object is zero-initialized. func (t Token) IsZero() bool { return t == Token{} @@ -116,12 +138,12 @@ func (t Token) IsZero() bool { // Expires returns the time.Time when the Token expires. 
func (t Token) Expires() time.Time { - s, err := strconv.Atoi(t.ExpiresOn) + s, err := t.ExpiresOn.Float64() if err != nil { s = -3600 } - expiration := date.NewUnixTimeFromSeconds(float64(s)) + expiration := date.NewUnixTimeFromSeconds(s) return time.Time(expiration).UTC() } @@ -218,6 +240,8 @@ func (secret *ServicePrincipalCertificateSecret) SignJwt(spt *ServicePrincipalTo token := jwt.New(jwt.SigningMethodRS256) token.Header["x5t"] = thumbprint + x5c := []string{base64.StdEncoding.EncodeToString(secret.Certificate.Raw)} + token.Header["x5c"] = x5c token.Claims = jwt.MapClaims{ "aud": spt.inner.OauthConfig.TokenEndpoint.String(), "iss": spt.inner.ClientID, @@ -323,10 +347,11 @@ func (secret ServicePrincipalAuthorizationCodeSecret) MarshalJSON() ([]byte, err // ServicePrincipalToken encapsulates a Token created for a Service Principal. type ServicePrincipalToken struct { - inner servicePrincipalToken - refreshLock *sync.RWMutex - sender Sender - refreshCallbacks []TokenRefreshCallback + inner servicePrincipalToken + refreshLock *sync.RWMutex + sender Sender + customRefreshFunc TokenRefresh + refreshCallbacks []TokenRefreshCallback // MaxMSIRefreshAttempts is the maximum number of attempts to refresh an MSI token. MaxMSIRefreshAttempts int } @@ -341,6 +366,11 @@ func (spt *ServicePrincipalToken) SetRefreshCallbacks(callbacks []TokenRefreshCa spt.refreshCallbacks = callbacks } +// SetCustomRefreshFunc sets a custom refresh function used to refresh the token. +func (spt *ServicePrincipalToken) SetCustomRefreshFunc(customRefreshFunc TokenRefresh) { + spt.customRefreshFunc = customRefreshFunc +} + // MarshalJSON implements the json.Marshaler interface. func (spt ServicePrincipalToken) MarshalJSON() ([]byte, error) { return json.Marshal(spt.inner) @@ -375,8 +405,13 @@ func (spt *ServicePrincipalToken) UnmarshalJSON(data []byte) error { if err != nil { return err } - spt.refreshLock = &sync.RWMutex{} - spt.sender = &http.Client{} + // Don't override the refreshLock or the sender if those have been already set. 
+ if spt.refreshLock == nil { + spt.refreshLock = &sync.RWMutex{} + } + if spt.sender == nil { + spt.sender = sender() + } return nil } @@ -414,6 +449,7 @@ func NewServicePrincipalTokenWithSecret(oauthConfig OAuthConfig, id string, reso } spt := &ServicePrincipalToken{ inner: servicePrincipalToken{ + Token: newToken(), OauthConfig: oauthConfig, Secret: secret, ClientID: id, @@ -422,7 +458,7 @@ func NewServicePrincipalTokenWithSecret(oauthConfig OAuthConfig, id string, reso RefreshWithin: defaultRefresh, }, refreshLock: &sync.RWMutex{}, - sender: &http.Client{}, + sender: sender(), refreshCallbacks: callbacks, } return spt, nil } @@ -613,6 +649,31 @@ func GetMSIVMEndpoint() (string, error) { return msiEndpoint, nil } +func isAppService() bool { + _, asMSIEndpointEnvExists := os.LookupEnv(asMSIEndpointEnv) + _, asMSISecretEnvExists := os.LookupEnv(asMSISecretEnv) + + return asMSIEndpointEnvExists && asMSISecretEnvExists +} + +// GetMSIAppServiceEndpoint gets the MSI endpoint for App Service and Functions +func GetMSIAppServiceEndpoint() (string, error) { + asMSIEndpoint, asMSIEndpointEnvExists := os.LookupEnv(asMSIEndpointEnv) + + if asMSIEndpointEnvExists { + return asMSIEndpoint, nil + } + return "", errors.New("MSI endpoint not found") +} + +// GetMSIEndpoint gets the appropriate MSI endpoint depending on the runtime environment +func GetMSIEndpoint() (string, error) { + if isAppService() { + return GetMSIAppServiceEndpoint() + } + return GetMSIVMEndpoint() +} + // NewServicePrincipalTokenFromMSI creates a ServicePrincipalToken via the MSI VM Extension. // It will use the system assigned identity when creating the token. func NewServicePrincipalTokenFromMSI(msiEndpoint, resource string, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) { @@ -645,7 +706,12 @@ func newServicePrincipalTokenFromMSI(msiEndpoint, resource string, userAssignedI v := url.Values{} v.Set("resource", resource) - v.Set("api-version", "2018-02-01") + // App Service MSI currently only supports token API version 2017-09-01 + if isAppService() { + v.Set("api-version", "2017-09-01") + } else { + v.Set("api-version", "2018-02-01") + } if userAssignedID != nil { v.Set("client_id", *userAssignedID) } @@ -653,6 +719,7 @@ func newServicePrincipalTokenFromMSI(msiEndpoint, resource string, userAssignedI spt := &ServicePrincipalToken{ inner: servicePrincipalToken{ + Token: newToken(), OauthConfig: OAuthConfig{ TokenEndpoint: *msiEndpointURL, }, @@ -662,7 +729,7 @@ func newServicePrincipalTokenFromMSI(msiEndpoint, resource string, userAssignedI RefreshWithin: defaultRefresh, }, refreshLock: &sync.RWMutex{}, - sender: &http.Client{}, + sender: sender(), refreshCallbacks: callbacks, MaxMSIRefreshAttempts: defaultMaxMSIRefreshAttempts, } @@ -728,13 +795,13 @@ func (spt *ServicePrincipalToken) InvokeRefreshCallbacks(token Token) error { } // Refresh obtains a fresh token for the Service Principal. -// This method is not safe for concurrent use and should be syncrhonized. +// This method is safe for concurrent use. func (spt *ServicePrincipalToken) Refresh() error { return spt.RefreshWithContext(context.Background()) } // RefreshWithContext obtains a fresh token for the Service Principal. -// This method is not safe for concurrent use and should be syncrhonized. +// This method is safe for concurrent use.
func (spt *ServicePrincipalToken) RefreshWithContext(ctx context.Context) error { spt.refreshLock.Lock() defer spt.refreshLock.Unlock() @@ -742,13 +809,13 @@ func (spt *ServicePrincipalToken) RefreshWithContext(ctx context.Context) error } // RefreshExchange refreshes the token, but for a different resource. -// This method is not safe for concurrent use and should be syncrhonized. +// This method is safe for concurrent use. func (spt *ServicePrincipalToken) RefreshExchange(resource string) error { return spt.RefreshExchangeWithContext(context.Background(), resource) } // RefreshExchangeWithContext refreshes the token, but for a different resource. -// This method is not safe for concurrent use and should be syncrhonized. +// This method is safe for concurrent use. func (spt *ServicePrincipalToken) RefreshExchangeWithContext(ctx context.Context, resource string) error { spt.refreshLock.Lock() defer spt.refreshLock.Unlock() @@ -771,15 +838,29 @@ func isIMDS(u url.URL) bool { if err != nil { return false } - return u.Host == imds.Host && u.Path == imds.Path + return (u.Host == imds.Host && u.Path == imds.Path) || isAppService() } func (spt *ServicePrincipalToken) refreshInternal(ctx context.Context, resource string) error { + if spt.customRefreshFunc != nil { + token, err := spt.customRefreshFunc(ctx, resource) + if err != nil { + return err + } + spt.inner.Token = *token + return spt.InvokeRefreshCallbacks(spt.inner.Token) + } + req, err := http.NewRequest(http.MethodPost, spt.inner.OauthConfig.TokenEndpoint.String(), nil) if err != nil { return fmt.Errorf("adal: Failed to build the refresh request. Error = '%v'", err) } - req.Header.Add("User-Agent", version.UserAgent()) + req.Header.Add("User-Agent", UserAgent()) + // Add header when runtime is on App Service or Functions + if isAppService() { + asMSISecret, _ := os.LookupEnv(asMSISecretEnv) + req.Header.Add("Secret", asMSISecret) + } req = req.WithContext(ctx) if !isIMDS(spt.inner.OauthConfig.TokenEndpoint) { v := url.Values{} @@ -824,7 +905,8 @@ func (spt *ServicePrincipalToken) refreshInternal(ctx context.Context, resource resp, err = spt.sender.Do(req) } if err != nil { - return newTokenRefreshError(fmt.Sprintf("adal: Failed to execute the refresh request. Error = '%v'", err), nil) + // don't return a TokenRefreshError here; this will allow retry logic to apply + return fmt.Errorf("adal: Failed to execute the refresh request. Error = '%v'", err) } defer resp.Body.Close() @@ -891,10 +973,8 @@ func retryForIMDS(sender Sender, req *http.Request, maxAttempts int) (resp *http for attempt < maxAttempts { resp, err = sender.Do(req) - // retry on temporary network errors, e.g. transient network failures. - // if we don't receive a response then assume we can't connect to the - // endpoint so we're likely not running on an Azure VM so don't retry. - if (err != nil && !isTemporaryNetworkError(err)) || resp == nil || resp.StatusCode == http.StatusOK || !containsInt(retries, resp.StatusCode) { + // we want to retry if err is not nil or the status code is in the list of retry codes + if err == nil && !responseHasStatusCode(resp, retries...) { return } @@ -918,20 +998,12 @@ func retryForIMDS(sender Sender, req *http.Request, maxAttempts int) (resp *http return } -// returns true if the specified error is a temporary network error or false if it's not. -// if the error doesn't implement the net.Error interface the return value is true. 
-func isTemporaryNetworkError(err error) bool { - if netErr, ok := err.(net.Error); !ok || (ok && netErr.Temporary()) { - return true - } - return false -} - -// returns true if slice ints contains the value n -func containsInt(ints []int, n int) bool { - for _, i := range ints { - if i == n { - return true +func responseHasStatusCode(resp *http.Response, codes ...int) bool { + if resp != nil { + for _, i := range codes { + if i == resp.StatusCode { + return true + } } } return false @@ -966,3 +1038,93 @@ func (spt *ServicePrincipalToken) Token() Token { defer spt.refreshLock.RUnlock() return spt.inner.Token } + +// MultiTenantServicePrincipalToken contains tokens for multi-tenant authorization. +type MultiTenantServicePrincipalToken struct { + PrimaryToken *ServicePrincipalToken + AuxiliaryTokens []*ServicePrincipalToken +} + +// PrimaryOAuthToken returns the primary authorization token. +func (mt *MultiTenantServicePrincipalToken) PrimaryOAuthToken() string { + return mt.PrimaryToken.OAuthToken() +} + +// AuxiliaryOAuthTokens returns one to three auxiliary authorization tokens. +func (mt *MultiTenantServicePrincipalToken) AuxiliaryOAuthTokens() []string { + tokens := make([]string, len(mt.AuxiliaryTokens)) + for i := range mt.AuxiliaryTokens { + tokens[i] = mt.AuxiliaryTokens[i].OAuthToken() + } + return tokens +} + +// EnsureFreshWithContext will refresh the token if it will expire within the refresh window (as set by +// RefreshWithin) and autoRefresh flag is on. This method is safe for concurrent use. +func (mt *MultiTenantServicePrincipalToken) EnsureFreshWithContext(ctx context.Context) error { + if err := mt.PrimaryToken.EnsureFreshWithContext(ctx); err != nil { + return fmt.Errorf("failed to refresh primary token: %v", err) + } + for _, aux := range mt.AuxiliaryTokens { + if err := aux.EnsureFreshWithContext(ctx); err != nil { + return fmt.Errorf("failed to refresh auxiliary token: %v", err) + } + } + return nil +} + +// RefreshWithContext obtains a fresh token for the Service Principal. +func (mt *MultiTenantServicePrincipalToken) RefreshWithContext(ctx context.Context) error { + if err := mt.PrimaryToken.RefreshWithContext(ctx); err != nil { + return fmt.Errorf("failed to refresh primary token: %v", err) + } + for _, aux := range mt.AuxiliaryTokens { + if err := aux.RefreshWithContext(ctx); err != nil { + return fmt.Errorf("failed to refresh auxiliary token: %v", err) + } + } + return nil +} + +// RefreshExchangeWithContext refreshes the token, but for a different resource. +func (mt *MultiTenantServicePrincipalToken) RefreshExchangeWithContext(ctx context.Context, resource string) error { + if err := mt.PrimaryToken.RefreshExchangeWithContext(ctx, resource); err != nil { + return fmt.Errorf("failed to refresh primary token: %v", err) + } + for _, aux := range mt.AuxiliaryTokens { + if err := aux.RefreshExchangeWithContext(ctx, resource); err != nil { + return fmt.Errorf("failed to refresh auxiliary token: %v", err) + } + } + return nil +} + +// NewMultiTenantServicePrincipalToken creates a new MultiTenantServicePrincipalToken with the specified credentials and resource. 
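Bridging to the constructor defined next, a hedged sketch that wires `NewMultiTenantOAuthConfig` into `NewMultiTenantServicePrincipalToken` and refreshes every token in one call. All credentials and IDs are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/Azure/go-autorest/autorest/adal"
)

func main() {
	mtCfg, err := adal.NewMultiTenantOAuthConfig(
		"https://login.microsoftonline.com/",
		"primary-tenant-id",
		[]string{"aux-tenant-a"},
		adal.OAuthOptions{},
	)
	if err != nil {
		log.Fatal(err)
	}

	// Placeholder client credentials; real secret handling is elided.
	mtSPT, err := adal.NewMultiTenantServicePrincipalToken(mtCfg, "my-client-id", "my-client-secret", "https://management.azure.com/")
	if err != nil {
		log.Fatal(err)
	}

	// Refresh the primary and auxiliary tokens together, then read them.
	if err := mtSPT.EnsureFreshWithContext(context.Background()); err != nil {
		log.Fatal(err)
	}
	fmt.Println(mtSPT.PrimaryOAuthToken())
	fmt.Println(mtSPT.AuxiliaryOAuthTokens())
}
```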
+func NewMultiTenantServicePrincipalToken(multiTenantCfg MultiTenantOAuthConfig, clientID string, secret string, resource string) (*MultiTenantServicePrincipalToken, error) { + if err := validateStringParam(clientID, "clientID"); err != nil { + return nil, err + } + if err := validateStringParam(secret, "secret"); err != nil { + return nil, err + } + if err := validateStringParam(resource, "resource"); err != nil { + return nil, err + } + auxTenants := multiTenantCfg.AuxiliaryTenants() + m := MultiTenantServicePrincipalToken{ + AuxiliaryTokens: make([]*ServicePrincipalToken, len(auxTenants)), + } + primary, err := NewServicePrincipalToken(*multiTenantCfg.PrimaryTenant(), clientID, secret, resource) + if err != nil { + return nil, fmt.Errorf("failed to create SPT for primary tenant: %v", err) + } + m.PrimaryToken = primary + for i := range auxTenants { + aux, err := NewServicePrincipalToken(*auxTenants[i], clientID, secret, resource) + if err != nil { + return nil, fmt.Errorf("failed to create SPT for auxiliary tenant: %v", err) + } + m.AuxiliaryTokens[i] = aux + } + return &m, nil +} diff --git a/vendor/github.com/Azure/go-autorest/version/version.go b/vendor/github.com/Azure/go-autorest/autorest/adal/version.go similarity index 65% rename from vendor/github.com/Azure/go-autorest/version/version.go rename to vendor/github.com/Azure/go-autorest/autorest/adal/version.go index ad2d6099f..c867b3484 100644 --- a/vendor/github.com/Azure/go-autorest/version/version.go +++ b/vendor/github.com/Azure/go-autorest/autorest/adal/version.go @@ -1,4 +1,9 @@ -package version +package adal + +import ( + "fmt" + "runtime" +) // Copyright 2017 Microsoft Corporation // @@ -14,24 +19,27 @@ package version // See the License for the specific language governing permissions and // limitations under the License. -import ( - "fmt" - "runtime" -) - -// Number contains the semantic version of this SDK. -const Number = "v10.15.4" +const number = "v1.0.0" var ( - userAgent = fmt.Sprintf("Go/%s (%s-%s) go-autorest/%s", + ua = fmt.Sprintf("Go/%s (%s-%s) go-autorest/adal/%s", runtime.Version(), runtime.GOARCH, runtime.GOOS, - Number, + number, ) ) -// UserAgent returns a string containing the Go version, system archityecture and OS, and the go-autorest version. +// UserAgent returns a string containing the Go version, system architecture and OS, and the adal version. func UserAgent() string { - return userAgent + return ua +} + +// AddToUserAgent adds an extension to the current user agent +func AddToUserAgent(extension string) error { + if extension != "" { + ua = fmt.Sprintf("%s %s", ua, extension) + return nil + } + return fmt.Errorf("Extension was empty, User Agent remained as '%s'", ua) } diff --git a/vendor/github.com/Azure/go-autorest/autorest/authorization.go b/vendor/github.com/Azure/go-autorest/autorest/authorization.go index 77eff45bd..54e87b5b6 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/authorization.go +++ b/vendor/github.com/Azure/go-autorest/autorest/authorization.go @@ -15,6 +15,8 @@ package autorest // limitations under the License. 
import ( + "crypto/tls" + "encoding/base64" "fmt" "net/http" "net/url" @@ -30,6 +32,8 @@ const ( apiKeyAuthorizerHeader = "Ocp-Apim-Subscription-Key" bingAPISdkHeader = "X-BingApis-SDK-Client" golangBingAPISdkHeaderValue = "Go-SDK" + authorization = "Authorization" + basic = "Basic" ) // Authorizer is the interface that provides a PrepareDecorator used to supply request @@ -68,7 +72,7 @@ func NewAPIKeyAuthorizer(headers map[string]interface{}, queryParameters map[str return &APIKeyAuthorizer{headers: headers, queryParameters: queryParameters} } -// WithAuthorization returns a PrepareDecorator that adds an HTTP headers and Query Paramaters +// WithAuthorization returns a PrepareDecorator that adds HTTP headers and query parameters. func (aka *APIKeyAuthorizer) WithAuthorization() PrepareDecorator { return func(p Preparer) Preparer { return DecoratePreparer(p, WithHeaders(aka.headers), WithQueryParameters(aka.queryParameters)) @@ -145,11 +149,11 @@ type BearerAuthorizerCallback struct { // NewBearerAuthorizerCallback creates a bearer authorization callback. The callback // is invoked when the HTTP request is submitted. -func NewBearerAuthorizerCallback(sender Sender, callback BearerAuthorizerCallbackFunc) *BearerAuthorizerCallback { - if sender == nil { - sender = &http.Client{} +func NewBearerAuthorizerCallback(s Sender, callback BearerAuthorizerCallbackFunc) *BearerAuthorizerCallback { + if s == nil { + s = sender(tls.RenegotiateNever) } - return &BearerAuthorizerCallback{sender: sender, callback: callback} + return &BearerAuthorizerCallback{sender: s, callback: callback} } // WithAuthorization returns a PrepareDecorator that adds an HTTP Authorization header whose value @@ -257,3 +261,76 @@ func (egta EventGridKeyAuthorizer) WithAuthorization() PrepareDecorator { } return NewAPIKeyAuthorizerWithHeaders(headers).WithAuthorization() } + +// BasicAuthorizer implements basic HTTP authorization by adding the Authorization HTTP header +// with the value "Basic <TOKEN>" where <TOKEN> is a base64-encoded username:password tuple. +type BasicAuthorizer struct { + userName string + password string +} + +// NewBasicAuthorizer creates a new BasicAuthorizer with the specified username and password. +func NewBasicAuthorizer(userName, password string) *BasicAuthorizer { + return &BasicAuthorizer{ + userName: userName, + password: password, + } +} + +// WithAuthorization returns a PrepareDecorator that adds an HTTP Authorization header whose +// value is "Basic " followed by the base64-encoded username:password tuple. +func (ba *BasicAuthorizer) WithAuthorization() PrepareDecorator { + headers := make(map[string]interface{}) + headers[authorization] = basic + " " + base64.StdEncoding.EncodeToString([]byte(fmt.Sprintf("%s:%s", ba.userName, ba.password))) + + return NewAPIKeyAuthorizerWithHeaders(headers).WithAuthorization() +} + +// MultiTenantServicePrincipalTokenAuthorizer provides authentication across tenants.
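Before the multi-tenant authorizer declared next, a small hedged sketch of the new `BasicAuthorizer` inside a `Prepare` pipeline; the credentials and target URL are made up.

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/Azure/go-autorest/autorest"
)

func main() {
	// WithAuthorization base64-encodes "user:pass" into an
	// "Authorization: Basic ..." header via a PrepareDecorator.
	ba := autorest.NewBasicAuthorizer("user", "pass")

	req, err := autorest.Prepare(&http.Request{},
		autorest.WithBaseURL("https://example.com/"),
		ba.WithAuthorization(),
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(req.Header.Get("Authorization")) // Basic dXNlcjpwYXNz
}
```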
+type MultiTenantServicePrincipalTokenAuthorizer interface { + WithAuthorization() PrepareDecorator +} + +// NewMultiTenantServicePrincipalTokenAuthorizer creates a BearerAuthorizer using the given token provider. +func NewMultiTenantServicePrincipalTokenAuthorizer(tp adal.MultitenantOAuthTokenProvider) MultiTenantServicePrincipalTokenAuthorizer { + return &multiTenantSPTAuthorizer{tp: tp} +} + +type multiTenantSPTAuthorizer struct { + tp adal.MultitenantOAuthTokenProvider +} + +// WithAuthorization returns a PrepareDecorator that adds an HTTP Authorization header using the +// primary token along with the auxiliary authorization header using the auxiliary tokens. +// +// By default, the token will be automatically refreshed through the Refresher interface. +func (mt multiTenantSPTAuthorizer) WithAuthorization() PrepareDecorator { + return func(p Preparer) Preparer { + return PreparerFunc(func(r *http.Request) (*http.Request, error) { + r, err := p.Prepare(r) + if err != nil { + return r, err + } + if refresher, ok := mt.tp.(adal.RefresherWithContext); ok { + err = refresher.EnsureFreshWithContext(r.Context()) + if err != nil { + var resp *http.Response + if tokError, ok := err.(adal.TokenRefreshError); ok { + resp = tokError.Response() + } + return r, NewErrorWithError(err, "azure.multiTenantSPTAuthorizer", "WithAuthorization", resp, + "Failed to refresh one or more Tokens for request to %s", r.URL) + } + } + r, err = Prepare(r, WithHeader(headerAuthorization, fmt.Sprintf("Bearer %s", mt.tp.PrimaryOAuthToken()))) + if err != nil { + return r, err + } + auxTokens := mt.tp.AuxiliaryOAuthTokens() + for i := range auxTokens { + auxTokens[i] = fmt.Sprintf("Bearer %s", auxTokens[i]) + } + return Prepare(r, WithHeader(headerAuxAuthorization, strings.Join(auxTokens, "; "))) + }) + } +} diff --git a/vendor/github.com/Azure/go-autorest/autorest/azure/async.go b/vendor/github.com/Azure/go-autorest/autorest/azure/async.go index 9dd7a1d27..1cb41cbeb 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/azure/async.go +++ b/vendor/github.com/Azure/go-autorest/autorest/azure/async.go @@ -26,6 +26,7 @@ import ( "time" "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/tracing" ) const ( @@ -44,24 +45,14 @@ var pollingCodes = [...]int{http.StatusNoContent, http.StatusAccepted, http.Stat // Future provides a mechanism to access the status and results of an asynchronous request. // Since futures are stateful they should be passed by value to avoid race conditions. type Future struct { - req *http.Request // legacy - pt pollingTracker -} - -// NewFuture returns a new Future object initialized with the specified request. -// Deprecated: Please use NewFutureFromResponse instead. -func NewFuture(req *http.Request) Future { - return Future{req: req} + pt pollingTracker } // NewFutureFromResponse returns a new Future object initialized // with the initial response from an asynchronous operation. func NewFutureFromResponse(resp *http.Response) (Future, error) { pt, err := createPollingTracker(resp) - if err != nil { - return Future{}, err - } - return Future{pt: pt}, nil + return Future{pt: pt}, err } // Response returns the last HTTP response. @@ -88,29 +79,25 @@ func (f Future) PollingMethod() PollingMethodType { return f.pt.pollingMethod() } -// Done queries the service to see if the operation has completed.
-func (f *Future) Done(sender autorest.Sender) (bool, error) { - // support for legacy Future implementation - if f.req != nil { - resp, err := sender.Do(f.req) - if err != nil { - return false, err +// DoneWithContext queries the service to see if the operation has completed. +func (f *Future) DoneWithContext(ctx context.Context, sender autorest.Sender) (done bool, err error) { + ctx = tracing.StartSpan(ctx, "github.com/Azure/go-autorest/autorest/azure/async.DoneWithContext") + defer func() { + sc := -1 + resp := f.Response() + if resp != nil { + sc = resp.StatusCode } - pt, err := createPollingTracker(resp) - if err != nil { - return false, err - } - f.pt = pt - f.req = nil - } - // end legacy + tracing.EndSpan(ctx, sc, err) + }() + if f.pt == nil { return false, autorest.NewError("Future", "Done", "future is not initialized") } if f.pt.hasTerminated() { return true, f.pt.pollingError() } - if err := f.pt.pollForStatus(sender); err != nil { + if err := f.pt.pollForStatus(ctx, sender); err != nil { return false, err } if err := f.pt.checkForErrors(); err != nil { @@ -154,24 +141,35 @@ func (f Future) GetPollingDelay() (time.Duration, bool) { return d, true } -// WaitForCompletion will return when one of the following conditions is met: the long -// running operation has completed, the provided context is cancelled, or the client's -// polling duration has been exceeded. It will retry failed polling attempts based on -// the retry value defined in the client up to the maximum retry attempts. -// Deprecated: Please use WaitForCompletionRef() instead. -func (f Future) WaitForCompletion(ctx context.Context, client autorest.Client) error { - return f.WaitForCompletionRef(ctx, client) -} - // WaitForCompletionRef will return when one of the following conditions is met: the long // running operation has completed, the provided context is cancelled, or the client's // polling duration has been exceeded. It will retry failed polling attempts based on // the retry value defined in the client up to the maximum retry attempts. -func (f *Future) WaitForCompletionRef(ctx context.Context, client autorest.Client) error { - ctx, cancel := context.WithTimeout(ctx, client.PollingDuration) - defer cancel() - done, err := f.Done(client) - for attempts := 0; !done; done, err = f.Done(client) { +// If no deadline is specified in the context then the client.PollingDuration will be +// used to determine if a default deadline should be used. +// If PollingDuration is greater than zero the value will be used as the context's timeout. +// If PollingDuration is zero then no default deadline will be used. 
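To make the deadline rules just described concrete, a hedged helper sketch around the method defined next; `waitForLRO`, its package name, and the 15-minute timeout are all invented for illustration.

```go
// Package lro is a hypothetical home for this sketch.
package lro

import (
	"context"
	"net/http"
	"time"

	"github.com/Azure/go-autorest/autorest"
	"github.com/Azure/go-autorest/autorest/azure"
)

// waitForLRO polls a long-running operation to completion. resp is the
// service's initial 201/202 response; client carries the retry settings.
func waitForLRO(ctx context.Context, client autorest.Client, resp *http.Response) (*http.Response, error) {
	future, err := azure.NewFutureFromResponse(resp)
	if err != nil {
		return nil, err
	}
	// Give the wait an explicit deadline; per the rules above, the client's
	// PollingDuration is consulted only when ctx carries no deadline.
	ctx, cancel := context.WithTimeout(ctx, 15*time.Minute)
	defer cancel()
	if err := future.WaitForCompletionRef(ctx, client); err != nil {
		return nil, err
	}
	return future.Response(), nil
}
```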
+func (f *Future) WaitForCompletionRef(ctx context.Context, client autorest.Client) (err error) { + ctx = tracing.StartSpan(ctx, "github.com/Azure/go-autorest/autorest/azure/async.WaitForCompletionRef") + defer func() { + sc := -1 + resp := f.Response() + if resp != nil { + sc = resp.StatusCode + } + tracing.EndSpan(ctx, sc, err) + }() + cancelCtx := ctx + // if the provided context already has a deadline don't override it + _, hasDeadline := ctx.Deadline() + if d := client.PollingDuration; !hasDeadline && d != 0 { + var cancel context.CancelFunc + cancelCtx, cancel = context.WithTimeout(ctx, d) + defer cancel() + } + + done, err := f.DoneWithContext(ctx, client) + for attempts := 0; !done; done, err = f.DoneWithContext(ctx, client) { if attempts >= client.RetryAttempts { return autorest.NewErrorWithError(err, "Future", "WaitForCompletion", f.pt.latestResponse(), "the number of retries has been exceeded") } @@ -195,12 +193,12 @@ func (f *Future) WaitForCompletionRef(ctx context.Context, client autorest.Clien attempts++ } // wait until the delay elapses or the context is cancelled - delayElapsed := autorest.DelayForBackoff(delay, delayAttempt, ctx.Done()) + delayElapsed := autorest.DelayForBackoff(delay, delayAttempt, cancelCtx.Done()) if !delayElapsed { - return autorest.NewErrorWithError(ctx.Err(), "Future", "WaitForCompletion", f.pt.latestResponse(), "context has been cancelled") + return autorest.NewErrorWithError(cancelCtx.Err(), "Future", "WaitForCompletion", f.pt.latestResponse(), "context has been cancelled") } } - return err + return } // MarshalJSON implements the json.Marshaler interface. @@ -285,7 +283,7 @@ type pollingTracker interface { initializeState() error // makes an HTTP request to check the status of the LRO - pollForStatus(sender autorest.Sender) error + pollForStatus(ctx context.Context, sender autorest.Sender) error // updates internal tracker state, call this after each call to pollForStatus updatePollingState(provStateApl bool) error @@ -399,6 +397,10 @@ func (pt *pollingTrackerBase) updateRawBody() error { if err != nil { return autorest.NewErrorWithError(err, "pollingTrackerBase", "updateRawBody", nil, "failed to read response body") } + // observed in 204 responses over HTTP/2.0; the content length is -1 but body is empty + if len(b) == 0 { + return nil + } // put the body back so it's available to other callers pt.resp.Body = ioutil.NopCloser(bytes.NewReader(b)) if err = json.Unmarshal(b, &pt.rawBody); err != nil { @@ -408,14 +410,17 @@ func (pt *pollingTrackerBase) updateRawBody() error { return nil } -func (pt *pollingTrackerBase) pollForStatus(sender autorest.Sender) error { +func (pt *pollingTrackerBase) pollForStatus(ctx context.Context, sender autorest.Sender) error { req, err := http.NewRequest(http.MethodGet, pt.URI, nil) if err != nil { return autorest.NewErrorWithError(err, "pollingTrackerBase", "pollForStatus", nil, "failed to create HTTP request") } - // attach the context from the original request if available (it will be absent for deserialized futures) - if pt.resp != nil { - req = req.WithContext(pt.resp.Request.Context()) + + req = req.WithContext(ctx) + preparer := autorest.CreatePreparer(autorest.GetPrepareDecorators(ctx)...) 
+ req, err = preparer.Prepare(req) + if err != nil { + return autorest.NewErrorWithError(err, "pollingTrackerBase", "pollForStatus", nil, "failed preparing HTTP request") } pt.resp, err = sender.Do(req) if err != nil { @@ -445,7 +450,7 @@ func (pt *pollingTrackerBase) updateErrorFromResponse() { re := respErr{} defer pt.resp.Body.Close() var b []byte - if b, err = ioutil.ReadAll(pt.resp.Body); err != nil { + if b, err = ioutil.ReadAll(pt.resp.Body); err != nil || len(b) == 0 { goto Default } if err = json.Unmarshal(b, &re); err != nil { @@ -663,7 +668,7 @@ func (pt *pollingTrackerPatch) updatePollingMethod() error { } } // for 202 prefer the Azure-AsyncOperation header but fall back to Location if necessary - // note the absense of the "final GET" mechanism for PATCH + // note the absence of the "final GET" mechanism for PATCH if pt.resp.StatusCode == http.StatusAccepted { ao, err := getURLFromAsyncOpHeader(pt.resp) if err != nil { @@ -794,8 +799,6 @@ func (pt *pollingTrackerPut) updatePollingMethod() error { pt.URI = lh pt.Pm = PollingLocation } - // when both headers are returned we use the value in the Location header for the final GET - pt.FinalGetURI = lh } // make sure a polling URL was found if pt.URI == "" { @@ -885,43 +888,6 @@ func isValidURL(s string) bool { return err == nil && u.IsAbs() } -// DoPollForAsynchronous returns a SendDecorator that polls if the http.Response is for an Azure -// long-running operation. It will delay between requests for the duration specified in the -// RetryAfter header or, if the header is absent, the passed delay. Polling may be canceled via -// the context associated with the http.Request. -// Deprecated: Prefer using Futures to allow for non-blocking async operations. -func DoPollForAsynchronous(delay time.Duration) autorest.SendDecorator { - return func(s autorest.Sender) autorest.Sender { - return autorest.SenderFunc(func(r *http.Request) (*http.Response, error) { - resp, err := s.Do(r) - if err != nil { - return resp, err - } - if !autorest.ResponseHasStatusCode(resp, pollingCodes[:]...) { - return resp, nil - } - future, err := NewFutureFromResponse(resp) - if err != nil { - return resp, err - } - // retry until either the LRO completes or we receive an error - var done bool - for done, err = future.Done(s); !done && err == nil; done, err = future.Done(s) { - // check for Retry-After delay, if not present use the specified polling delay - if pd, ok := future.GetPollingDelay(); ok { - delay = pd - } - // wait until the delay elapses or the context is cancelled - if delayElapsed := autorest.DelayForBackoff(delay, 0, r.Context().Done()); !delayElapsed { - return future.Response(), - autorest.NewErrorWithError(r.Context().Err(), "azure", "DoPollForAsynchronous", future.Response(), "context has been cancelled") - } - } - return future.Response(), err - }) - } -} - // PollingMethodType defines a type used for enumerating polling mechanisms. type PollingMethodType string diff --git a/vendor/github.com/Azure/go-autorest/autorest/azure/cli/LICENSE b/vendor/github.com/Azure/go-autorest/autorest/azure/cli/LICENSE new file mode 100644 index 000000000..b9d6a27ea --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/azure/cli/LICENSE @@ -0,0 +1,191 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. 
+ + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + Copyright 2015 Microsoft Corporation + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
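To make the deadline semantics of the reworked `WaitForCompletionRef` (in `async.go` above) concrete: a deadline already present on the caller's context takes precedence, and `client.PollingDuration` is applied as a timeout only when the context has no deadline and the duration is non-zero. A minimal sketch of the two modes; the function names and durations here are illustrative, not part of the library:

```go
package example

import (
	"context"
	"time"

	"github.com/Azure/go-autorest/autorest"
	"github.com/Azure/go-autorest/autorest/azure"
)

// waitWithCallerDeadline supplies its own deadline, so client.PollingDuration
// is ignored entirely; cancellation comes from the caller's context.
func waitWithCallerDeadline(future *azure.Future, client autorest.Client) error {
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
	defer cancel()
	return future.WaitForCompletionRef(ctx, client)
}

// waitUnbounded disables the default deadline: with PollingDuration zero and
// no deadline on the context, polling continues until the operation terminates.
func waitUnbounded(future *azure.Future, client autorest.Client) error {
	client.PollingDuration = 0
	return future.WaitForCompletionRef(context.Background(), client)
}
```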
diff --git a/vendor/github.com/Azure/go-autorest/autorest/azure/cli/go.mod b/vendor/github.com/Azure/go-autorest/autorest/azure/cli/go.mod new file mode 100644 index 000000000..cef48f1ea --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/azure/cli/go.mod @@ -0,0 +1,10 @@ +module github.com/Azure/go-autorest/autorest/azure/cli + +go 1.12 + +require ( + github.com/Azure/go-autorest/autorest/adal v0.5.0 + github.com/Azure/go-autorest/autorest/date v0.1.0 + github.com/dimchansky/utfbom v1.1.0 + github.com/mitchellh/go-homedir v1.1.0 +) diff --git a/vendor/github.com/Azure/go-autorest/autorest/azure/cli/go.sum b/vendor/github.com/Azure/go-autorest/autorest/azure/cli/go.sum new file mode 100644 index 000000000..2d6636a33 --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/azure/cli/go.sum @@ -0,0 +1,17 @@ +github.com/Azure/go-autorest/autorest/adal v0.5.0 h1:q2gDruN08/guU9vAjuPWff0+QIrpH6ediguzdAzXAUU= +github.com/Azure/go-autorest/autorest/adal v0.5.0/go.mod h1:8Z9fGy2MpX0PvDjB1pEgQTmVqjGhiHBW7RJJEciWzS0= +github.com/Azure/go-autorest/autorest/date v0.1.0 h1:YGrhWfrgtFs84+h0o46rJrlmsZtyZRg470CqAXTZaGM= +github.com/Azure/go-autorest/autorest/date v0.1.0/go.mod h1:plvfp3oPSKwf2DNjlBjWF/7vwR+cUD/ELuzDCXwHUVA= +github.com/Azure/go-autorest/autorest/mocks v0.1.0 h1:Kx+AUU2Te+A3JIyYn6Dfs+cFgx5XorQKuIXrZGoq/SI= +github.com/Azure/go-autorest/autorest/mocks v0.1.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0= +github.com/Azure/go-autorest/tracing v0.5.0 h1:TRn4WjSnkcSy5AEG3pnbtFSwNtwzjr4VYyQflFE619k= +github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk= +github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM= +github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ= +github.com/dimchansky/utfbom v1.1.0 h1:FcM3g+nofKgUteL8dm/UpdRXNC9KmADgTpLKsu0TRo4= +github.com/dimchansky/utfbom v1.1.0/go.mod h1:rO41eb7gLfo8SF1jd9F8HplJm1Fewwi4mQvIirEdv+8= +github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y= +github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a h1:1BGLXjeY4akVXGgbC9HugT3Jv3hCI0z56oJR5vAMgBU= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= diff --git a/vendor/github.com/Azure/go-autorest/autorest/azure/cli/profile.go b/vendor/github.com/Azure/go-autorest/autorest/azure/cli/profile.go index b62bf03ba..a336b958d 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/azure/cli/profile.go +++ b/vendor/github.com/Azure/go-autorest/autorest/azure/cli/profile.go @@ -19,6 +19,8 @@ import ( "encoding/json" "fmt" "io/ioutil" + "os" + "path/filepath" "github.com/dimchansky/utfbom" "github.com/mitchellh/go-homedir" @@ -47,9 +49,14 @@ type User struct { Type string `json:"type"` } +const azureProfileJSON = "azureProfile.json" + // ProfilePath returns the path where the Azure Profile is stored from the Azure CLI func ProfilePath() (string, error) { - return homedir.Expand("~/.azure/azureProfile.json") + if cfgDir := os.Getenv("AZURE_CONFIG_DIR"); cfgDir != "" { + return filepath.Join(cfgDir, azureProfileJSON), nil + } + return homedir.Expand("~/.azure/" + azureProfileJSON) } // LoadProfile restores a Profile object from a file located at 'path'. 
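The `ProfilePath` change above makes the CLI profile location overridable: when `AZURE_CONFIG_DIR` is set, the profile is read from that directory; otherwise the historical `~/.azure/azureProfile.json` default applies. A short sketch of the lookup, assuming a profile file actually exists at the chosen location (the `/opt/azure` path is only an example value):

```go
package example

import (
	"fmt"
	"os"

	"github.com/Azure/go-autorest/autorest/azure/cli"
)

func printSubscriptions() error {
	// Point the lookup at a non-default config directory, mirroring how the
	// Azure CLI itself honors AZURE_CONFIG_DIR.
	os.Setenv("AZURE_CONFIG_DIR", "/opt/azure")

	path, err := cli.ProfilePath()
	if err != nil {
		return err
	}
	fmt.Println(path) // /opt/azure/azureProfile.json

	profile, err := cli.LoadProfile(path)
	if err != nil {
		return err // fails unless a CLI profile exists at that path
	}
	for _, sub := range profile.Subscriptions {
		fmt.Println(sub.ID, sub.Name)
	}
	return nil
}
```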
diff --git a/vendor/github.com/Azure/go-autorest/autorest/azure/cli/token.go b/vendor/github.com/Azure/go-autorest/autorest/azure/cli/token.go index 83b81c34b..810075ba6 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/azure/cli/token.go +++ b/vendor/github.com/Azure/go-autorest/autorest/azure/cli/token.go @@ -15,9 +15,13 @@ package cli // limitations under the License. import ( + "bytes" "encoding/json" "fmt" "os" + "os/exec" + "regexp" + "runtime" "strconv" "time" @@ -54,7 +58,7 @@ func (t Token) ToADALToken() (converted adal.Token, err error) { AccessToken: t.AccessToken, Type: t.TokenType, ExpiresIn: "3600", - ExpiresOn: strconv.Itoa(int(difference.Seconds())), + ExpiresOn: json.Number(strconv.Itoa(int(difference.Seconds()))), RefreshToken: t.RefreshToken, Resource: t.Resource, } @@ -112,3 +116,55 @@ func LoadTokens(path string) ([]Token, error) { return tokens, nil } + +// GetTokenFromCLI gets a token using Azure CLI 2.0 for local development scenarios. +func GetTokenFromCLI(resource string) (*Token, error) { + // This is the path that a developer can set to tell this class what the install path for Azure CLI is. + const azureCLIPath = "AzureCLIPath" + + // The default install paths are used to find Azure CLI. This is for security, so that any path in the calling program's Path environment is not used to execute Azure CLI. + azureCLIDefaultPathWindows := fmt.Sprintf("%s\\Microsoft SDKs\\Azure\\CLI2\\wbin; %s\\Microsoft SDKs\\Azure\\CLI2\\wbin", os.Getenv("ProgramFiles(x86)"), os.Getenv("ProgramFiles")) + + // Default path for non-Windows. + const azureCLIDefaultPath = "/bin:/sbin:/usr/bin:/usr/local/bin" + + // Validate resource, since it gets sent as a command line argument to Azure CLI + const invalidResourceErrorTemplate = "Resource %s is not in expected format. Only alphanumeric characters, [dot], [colon], [hyphen], and [forward slash] are allowed." 
+ match, err := regexp.MatchString("^[0-9a-zA-Z-.:/]+$", resource) + if err != nil { + return nil, err + } + if !match { + return nil, fmt.Errorf(invalidResourceErrorTemplate, resource) + } + + // Execute Azure CLI to get token + var cliCmd *exec.Cmd + if runtime.GOOS == "windows" { + cliCmd = exec.Command(fmt.Sprintf("%s\\system32\\cmd.exe", os.Getenv("windir"))) + cliCmd.Env = os.Environ() + cliCmd.Env = append(cliCmd.Env, fmt.Sprintf("PATH=%s;%s", os.Getenv(azureCLIPath), azureCLIDefaultPathWindows)) + cliCmd.Args = append(cliCmd.Args, "/c", "az") + } else { + cliCmd = exec.Command("az") + cliCmd.Env = os.Environ() + cliCmd.Env = append(cliCmd.Env, fmt.Sprintf("PATH=%s:%s", os.Getenv(azureCLIPath), azureCLIDefaultPath)) + } + cliCmd.Args = append(cliCmd.Args, "account", "get-access-token", "-o", "json", "--resource", resource) + + var stderr bytes.Buffer + cliCmd.Stderr = &stderr + + output, err := cliCmd.Output() + if err != nil { + return nil, fmt.Errorf("Invoking Azure CLI failed with the following error: %s", stderr.String()) + } + + tokenResponse := Token{} + err = json.Unmarshal(output, &tokenResponse) + if err != nil { + return nil, err + } + + return &tokenResponse, err +} diff --git a/vendor/github.com/Azure/go-autorest/autorest/azure/environments.go b/vendor/github.com/Azure/go-autorest/autorest/azure/environments.go index 7e41f7fd9..6c20b8179 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/azure/environments.go +++ b/vendor/github.com/Azure/go-autorest/autorest/azure/environments.go @@ -22,9 +22,14 @@ import ( "strings" ) -// EnvironmentFilepathName captures the name of the environment variable containing the path to the file -// to be used while populating the Azure Environment. -const EnvironmentFilepathName = "AZURE_ENVIRONMENT_FILEPATH" +const ( + // EnvironmentFilepathName captures the name of the environment variable containing the path to the file + // to be used while populating the Azure Environment. + EnvironmentFilepathName = "AZURE_ENVIRONMENT_FILEPATH" + + // NotAvailable is used for endpoints and resource IDs that are not available for a given cloud. + NotAvailable = "N/A" +) var environments = map[string]Environment{ "AZURECHINACLOUD": ChinaCloud, @@ -33,28 +38,40 @@ var environments = map[string]Environment{ "AZUREUSGOVERNMENTCLOUD": USGovernmentCloud, } +// ResourceIdentifier contains a set of Azure resource IDs. +type ResourceIdentifier struct { + Graph string `json:"graph"` + KeyVault string `json:"keyVault"` + Datalake string `json:"datalake"` + Batch string `json:"batch"` + OperationalInsights string `json:"operationalInsights"` + Storage string `json:"storage"` +} + // Environment represents a set of endpoints for each of Azure's Clouds. 
type Environment struct { - Name string `json:"name"` - ManagementPortalURL string `json:"managementPortalURL"` - PublishSettingsURL string `json:"publishSettingsURL"` - ServiceManagementEndpoint string `json:"serviceManagementEndpoint"` - ResourceManagerEndpoint string `json:"resourceManagerEndpoint"` - ActiveDirectoryEndpoint string `json:"activeDirectoryEndpoint"` - GalleryEndpoint string `json:"galleryEndpoint"` - KeyVaultEndpoint string `json:"keyVaultEndpoint"` - GraphEndpoint string `json:"graphEndpoint"` - ServiceBusEndpoint string `json:"serviceBusEndpoint"` - BatchManagementEndpoint string `json:"batchManagementEndpoint"` - StorageEndpointSuffix string `json:"storageEndpointSuffix"` - SQLDatabaseDNSSuffix string `json:"sqlDatabaseDNSSuffix"` - TrafficManagerDNSSuffix string `json:"trafficManagerDNSSuffix"` - KeyVaultDNSSuffix string `json:"keyVaultDNSSuffix"` - ServiceBusEndpointSuffix string `json:"serviceBusEndpointSuffix"` - ServiceManagementVMDNSSuffix string `json:"serviceManagementVMDNSSuffix"` - ResourceManagerVMDNSSuffix string `json:"resourceManagerVMDNSSuffix"` - ContainerRegistryDNSSuffix string `json:"containerRegistryDNSSuffix"` - TokenAudience string `json:"tokenAudience"` + Name string `json:"name"` + ManagementPortalURL string `json:"managementPortalURL"` + PublishSettingsURL string `json:"publishSettingsURL"` + ServiceManagementEndpoint string `json:"serviceManagementEndpoint"` + ResourceManagerEndpoint string `json:"resourceManagerEndpoint"` + ActiveDirectoryEndpoint string `json:"activeDirectoryEndpoint"` + GalleryEndpoint string `json:"galleryEndpoint"` + KeyVaultEndpoint string `json:"keyVaultEndpoint"` + GraphEndpoint string `json:"graphEndpoint"` + ServiceBusEndpoint string `json:"serviceBusEndpoint"` + BatchManagementEndpoint string `json:"batchManagementEndpoint"` + StorageEndpointSuffix string `json:"storageEndpointSuffix"` + SQLDatabaseDNSSuffix string `json:"sqlDatabaseDNSSuffix"` + TrafficManagerDNSSuffix string `json:"trafficManagerDNSSuffix"` + KeyVaultDNSSuffix string `json:"keyVaultDNSSuffix"` + ServiceBusEndpointSuffix string `json:"serviceBusEndpointSuffix"` + ServiceManagementVMDNSSuffix string `json:"serviceManagementVMDNSSuffix"` + ResourceManagerVMDNSSuffix string `json:"resourceManagerVMDNSSuffix"` + ContainerRegistryDNSSuffix string `json:"containerRegistryDNSSuffix"` + CosmosDBDNSSuffix string `json:"cosmosDBDNSSuffix"` + TokenAudience string `json:"tokenAudience"` + ResourceIdentifiers ResourceIdentifier `json:"resourceIdentifiers"` } var ( @@ -79,7 +96,16 @@ var ( ServiceManagementVMDNSSuffix: "cloudapp.net", ResourceManagerVMDNSSuffix: "cloudapp.azure.com", ContainerRegistryDNSSuffix: "azurecr.io", + CosmosDBDNSSuffix: "documents.azure.com", TokenAudience: "https://management.azure.com/", + ResourceIdentifiers: ResourceIdentifier{ + Graph: "https://graph.windows.net/", + KeyVault: "https://vault.azure.net", + Datalake: "https://datalake.azure.net/", + Batch: "https://batch.core.windows.net/", + OperationalInsights: "https://api.loganalytics.io", + Storage: "https://storage.azure.com/", + }, } // USGovernmentCloud is the cloud environment for the US Government @@ -102,8 +128,17 @@ var ( ServiceBusEndpointSuffix: "servicebus.usgovcloudapi.net", ServiceManagementVMDNSSuffix: "usgovcloudapp.net", ResourceManagerVMDNSSuffix: "cloudapp.windowsazure.us", - ContainerRegistryDNSSuffix: "azurecr.io", + ContainerRegistryDNSSuffix: "azurecr.us", + CosmosDBDNSSuffix: "documents.azure.us", TokenAudience: "https://management.usgovcloudapi.net/", + 
ResourceIdentifiers: ResourceIdentifier{ + Graph: "https://graph.windows.net/", + KeyVault: "https://vault.usgovcloudapi.net", + Datalake: NotAvailable, + Batch: "https://batch.core.usgovcloudapi.net/", + OperationalInsights: "https://api.loganalytics.us", + Storage: "https://storage.azure.com/", + }, } // ChinaCloud is the cloud environment operated in China @@ -126,8 +161,17 @@ var ( ServiceBusEndpointSuffix: "servicebus.chinacloudapi.cn", ServiceManagementVMDNSSuffix: "chinacloudapp.cn", ResourceManagerVMDNSSuffix: "cloudapp.azure.cn", - ContainerRegistryDNSSuffix: "azurecr.io", + ContainerRegistryDNSSuffix: "azurecr.cn", + CosmosDBDNSSuffix: "documents.azure.cn", TokenAudience: "https://management.chinacloudapi.cn/", + ResourceIdentifiers: ResourceIdentifier{ + Graph: "https://graph.chinacloudapi.cn/", + KeyVault: "https://vault.azure.cn", + Datalake: NotAvailable, + Batch: "https://batch.chinacloudapi.cn/", + OperationalInsights: NotAvailable, + Storage: "https://storage.azure.com/", + }, } // GermanCloud is the cloud environment operated in Germany @@ -150,8 +194,17 @@ var ( ServiceBusEndpointSuffix: "servicebus.cloudapi.de", ServiceManagementVMDNSSuffix: "azurecloudapp.de", ResourceManagerVMDNSSuffix: "cloudapp.microsoftazure.de", - ContainerRegistryDNSSuffix: "azurecr.io", + ContainerRegistryDNSSuffix: NotAvailable, + CosmosDBDNSSuffix: "documents.microsoftazure.de", TokenAudience: "https://management.microsoftazure.de/", + ResourceIdentifiers: ResourceIdentifier{ + Graph: "https://graph.cloudapi.de/", + KeyVault: "https://vault.microsoftazure.de", + Datalake: NotAvailable, + Batch: "https://batch.cloudapi.de/", + OperationalInsights: NotAvailable, + Storage: "https://storage.azure.com/", + }, } ) diff --git a/vendor/github.com/Azure/go-autorest/autorest/azure/rp.go b/vendor/github.com/Azure/go-autorest/autorest/azure/rp.go index bd34f0ed5..86ce9f2b5 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/azure/rp.go +++ b/vendor/github.com/Azure/go-autorest/autorest/azure/rp.go @@ -140,8 +140,8 @@ func register(client autorest.Client, originalReq *http.Request, re RequestError } // poll for registered provisioning state - now := time.Now() - for err == nil && time.Since(now) < client.PollingDuration { + registrationStartTime := time.Now() + for err == nil && (client.PollingDuration == 0 || (client.PollingDuration != 0 && time.Since(registrationStartTime) < client.PollingDuration)) { // taken from the resources SDK // https://github.com/Azure/azure-sdk-for-go/blob/9f366792afa3e0ddaecdc860e793ba9d75e76c27/arm/resources/resources/providers.go#L45 preparer := autorest.CreatePreparer( @@ -183,7 +183,7 @@ func register(client autorest.Client, originalReq *http.Request, re RequestError return originalReq.Context().Err() } } - if !(time.Since(now) < client.PollingDuration) { + if client.PollingDuration != 0 && !(time.Since(registrationStartTime) < client.PollingDuration) { return errors.New("polling for resource provider registration has exceeded the polling duration") } return err diff --git a/vendor/github.com/Azure/go-autorest/autorest/client.go b/vendor/github.com/Azure/go-autorest/autorest/client.go index 5c558c83a..1c6a0617a 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/client.go +++ b/vendor/github.com/Azure/go-autorest/autorest/client.go @@ -16,17 +16,16 @@ package autorest import ( "bytes" + "crypto/tls" "fmt" "io" "io/ioutil" "log" "net/http" - "net/http/cookiejar" "strings" "time" "github.com/Azure/go-autorest/logger" - "github.com/Azure/go-autorest/version" ) 
const (
@@ -72,6 +71,22 @@ type Response struct {
 	*http.Response `json:"-"`
 }
 
+// IsHTTPStatus returns true if the returned HTTP status code matches the provided status code.
+// If there was no response (i.e. the underlying http.Response is nil) the return value is false.
+func (r Response) IsHTTPStatus(statusCode int) bool {
+	if r.Response == nil {
+		return false
+	}
+	return r.Response.StatusCode == statusCode
+}
+
+// HasHTTPStatus returns true if the returned HTTP status code matches one of the provided status codes.
+// If there was no response (i.e. the underlying http.Response is nil) or no status codes are provided
+// the return value is false.
+func (r Response) HasHTTPStatus(statusCodes ...int) bool {
+	return ResponseHasStatusCode(r.Response, statusCodes...)
+}
+
 // LoggingInspector implements request and response inspectors that log the full request and
 // response to a supplied log.
 type LoggingInspector struct {
@@ -147,6 +162,7 @@ type Client struct {
 	PollingDelay time.Duration
 
 	// PollingDuration sets the maximum polling time after which an error is returned.
+	// Setting this to zero will use the provided context to control the duration.
 	PollingDuration time.Duration
 
 	// RetryAttempts sets the default number of retry attempts for client.
@@ -168,14 +184,32 @@ type Client struct {
 // NewClientWithUserAgent returns an instance of a Client with the UserAgent set to the passed
 // string.
 func NewClientWithUserAgent(ua string) Client {
+	return newClient(ua, tls.RenegotiateNever)
+}
+
+// ClientOptions contains various Client configuration options.
+type ClientOptions struct {
+	// UserAgent is an optional user-agent string to append to the default user agent.
+	UserAgent string
+
+	// Renegotiation is an optional setting to control client-side TLS renegotiation.
+	Renegotiation tls.RenegotiationSupport
+}
+
+// NewClientWithOptions returns an instance of a Client with the specified values.
+func NewClientWithOptions(options ClientOptions) Client {
+	return newClient(options.UserAgent, options.Renegotiation)
+}
+
+func newClient(ua string, renegotiation tls.RenegotiationSupport) Client {
 	c := Client{
 		PollingDelay:    DefaultPollingDelay,
 		PollingDuration: DefaultPollingDuration,
 		RetryAttempts:   DefaultRetryAttempts,
 		RetryDuration:   DefaultRetryDuration,
-		UserAgent:       version.UserAgent(),
+		UserAgent:       UserAgent(),
 	}
-	c.Sender = c.sender()
+	c.Sender = c.sender(renegotiation)
 	c.AddToUserAgent(ua)
 	return c
 }
@@ -219,17 +253,16 @@ func (c Client) Do(r *http.Request) (*http.Response, error) {
 			return true, v
 		},
 	})
-	resp, err := SendWithSender(c.sender(), r)
+	resp, err := SendWithSender(c.sender(tls.RenegotiateNever), r)
 	logger.Instance.WriteResponse(resp, logger.Filter{})
 	Respond(resp, c.ByInspecting())
 	return resp, err
 }
 
 // sender returns the Sender to which to send requests.
-func (c Client) sender() Sender {
+func (c Client) sender(renegotiation tls.RenegotiationSupport) Sender {
 	if c.Sender == nil {
-		j, _ := cookiejar.New(nil)
-		return &http.Client{Jar: j}
+		return sender(renegotiation)
 	}
 	return c.Sender
 }
diff --git a/vendor/github.com/Azure/go-autorest/autorest/date/LICENSE b/vendor/github.com/Azure/go-autorest/autorest/date/LICENSE
new file mode 100644
index 000000000..b9d6a27ea
--- /dev/null
+++ b/vendor/github.com/Azure/go-autorest/autorest/date/LICENSE
@@ -0,0 +1,191 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+ + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + Copyright 2015 Microsoft Corporation + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
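Two of the `client.go` additions earlier in this patch are easy to exercise in isolation: the nil-safe `Response.IsHTTPStatus`/`HasHTTPStatus` helpers and the `ClientOptions`-based constructor that opts in to client-side TLS renegotiation. A short, illustrative-only sketch; the user-agent string is a placeholder:

```go
package example

import (
	"crypto/tls"
	"fmt"
	"net/http"

	"github.com/Azure/go-autorest/autorest"
)

func demo() {
	// A client that permits one client-side TLS renegotiation for servers
	// that request it; the default elsewhere remains tls.RenegotiateNever.
	client := autorest.NewClientWithOptions(autorest.ClientOptions{
		UserAgent:     "example-agent",
		Renegotiation: tls.RenegotiateOnceAsClient,
	})
	_ = client

	// The status helpers are safe on a zero Response: with no underlying
	// *http.Response they report false instead of panicking.
	var r autorest.Response
	fmt.Println(r.IsHTTPStatus(http.StatusOK)) // false

	r.Response = &http.Response{StatusCode: http.StatusAccepted}
	fmt.Println(r.HasHTTPStatus(http.StatusOK, http.StatusAccepted)) // true
}
```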
diff --git a/vendor/github.com/Azure/go-autorest/autorest/date/go.mod b/vendor/github.com/Azure/go-autorest/autorest/date/go.mod new file mode 100644 index 000000000..3adc4804c --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/date/go.mod @@ -0,0 +1,5 @@ +module github.com/Azure/go-autorest/autorest/date + +go 1.12 + +require github.com/Azure/go-autorest/autorest v0.9.0 diff --git a/vendor/github.com/Azure/go-autorest/autorest/date/go.sum b/vendor/github.com/Azure/go-autorest/autorest/date/go.sum new file mode 100644 index 000000000..9e2ee7a94 --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/date/go.sum @@ -0,0 +1,16 @@ +github.com/Azure/go-autorest/autorest v0.9.0 h1:MRvx8gncNaXJqOoLmhNjUAKh33JJF8LyxPhomEtOsjs= +github.com/Azure/go-autorest/autorest v0.9.0/go.mod h1:xyHB1BMZT0cuDHU7I0+g046+BFDTQ8rEZB0s4Yfa6bI= +github.com/Azure/go-autorest/autorest/adal v0.5.0 h1:q2gDruN08/guU9vAjuPWff0+QIrpH6ediguzdAzXAUU= +github.com/Azure/go-autorest/autorest/adal v0.5.0/go.mod h1:8Z9fGy2MpX0PvDjB1pEgQTmVqjGhiHBW7RJJEciWzS0= +github.com/Azure/go-autorest/autorest/date v0.1.0/go.mod h1:plvfp3oPSKwf2DNjlBjWF/7vwR+cUD/ELuzDCXwHUVA= +github.com/Azure/go-autorest/autorest/mocks v0.1.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0= +github.com/Azure/go-autorest/autorest/mocks v0.2.0 h1:Ww5g4zThfD/6cLb4z6xxgeyDa7QDkizMkJKe0ysZXp0= +github.com/Azure/go-autorest/autorest/mocks v0.2.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0= +github.com/Azure/go-autorest/logger v0.1.0 h1:ruG4BSDXONFRrZZJ2GUXDiUyVpayPmb1GnWeHDdaNKY= +github.com/Azure/go-autorest/logger v0.1.0/go.mod h1:oExouG+K6PryycPJfVSxi/koC6LSNgds39diKLz7Vrc= +github.com/Azure/go-autorest/tracing v0.5.0 h1:TRn4WjSnkcSy5AEG3pnbtFSwNtwzjr4VYyQflFE619k= +github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk= +github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM= +github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= diff --git a/vendor/github.com/Azure/go-autorest/autorest/date/go_mod_tidy_hack.go b/vendor/github.com/Azure/go-autorest/autorest/date/go_mod_tidy_hack.go new file mode 100644 index 000000000..55adf930f --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/date/go_mod_tidy_hack.go @@ -0,0 +1,24 @@ +// +build modhack + +package date + +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// This file, and the github.com/Azure/go-autorest/autorest import, won't actually become part of +// the resultant binary. + +// Necessary for safely adding multi-module repo. 
+// See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository +import _ "github.com/Azure/go-autorest/autorest" diff --git a/vendor/github.com/Azure/go-autorest/autorest/go.mod b/vendor/github.com/Azure/go-autorest/autorest/go.mod new file mode 100644 index 000000000..ab2ae66ac --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/go.mod @@ -0,0 +1,11 @@ +module github.com/Azure/go-autorest/autorest + +go 1.12 + +require ( + github.com/Azure/go-autorest/autorest/adal v0.5.0 + github.com/Azure/go-autorest/autorest/mocks v0.2.0 + github.com/Azure/go-autorest/logger v0.1.0 + github.com/Azure/go-autorest/tracing v0.5.0 + golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2 +) diff --git a/vendor/github.com/Azure/go-autorest/autorest/go.sum b/vendor/github.com/Azure/go-autorest/autorest/go.sum new file mode 100644 index 000000000..729b99cd0 --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/go.sum @@ -0,0 +1,18 @@ +github.com/Azure/go-autorest/autorest/adal v0.5.0 h1:q2gDruN08/guU9vAjuPWff0+QIrpH6ediguzdAzXAUU= +github.com/Azure/go-autorest/autorest/adal v0.5.0/go.mod h1:8Z9fGy2MpX0PvDjB1pEgQTmVqjGhiHBW7RJJEciWzS0= +github.com/Azure/go-autorest/autorest/date v0.1.0 h1:YGrhWfrgtFs84+h0o46rJrlmsZtyZRg470CqAXTZaGM= +github.com/Azure/go-autorest/autorest/date v0.1.0/go.mod h1:plvfp3oPSKwf2DNjlBjWF/7vwR+cUD/ELuzDCXwHUVA= +github.com/Azure/go-autorest/autorest/mocks v0.1.0 h1:Kx+AUU2Te+A3JIyYn6Dfs+cFgx5XorQKuIXrZGoq/SI= +github.com/Azure/go-autorest/autorest/mocks v0.1.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0= +github.com/Azure/go-autorest/autorest/mocks v0.2.0 h1:Ww5g4zThfD/6cLb4z6xxgeyDa7QDkizMkJKe0ysZXp0= +github.com/Azure/go-autorest/autorest/mocks v0.2.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0= +github.com/Azure/go-autorest/logger v0.1.0 h1:ruG4BSDXONFRrZZJ2GUXDiUyVpayPmb1GnWeHDdaNKY= +github.com/Azure/go-autorest/logger v0.1.0/go.mod h1:oExouG+K6PryycPJfVSxi/koC6LSNgds39diKLz7Vrc= +github.com/Azure/go-autorest/tracing v0.5.0 h1:TRn4WjSnkcSy5AEG3pnbtFSwNtwzjr4VYyQflFE619k= +github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk= +github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM= +github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2 h1:VklqNMn3ovrHsnt90PveolxSbWFaJdECFbxSq0Mqo2M= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a h1:1BGLXjeY4akVXGgbC9HugT3Jv3hCI0z56oJR5vAMgBU= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= diff --git a/vendor/github.com/Azure/go-autorest/autorest/preparer.go b/vendor/github.com/Azure/go-autorest/autorest/preparer.go index 6d67bd733..6e8ed64eb 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/preparer.go +++ b/vendor/github.com/Azure/go-autorest/autorest/preparer.go @@ -16,7 +16,9 @@ package autorest import ( "bytes" + "context" "encoding/json" + "encoding/xml" "fmt" "io" "io/ioutil" @@ -31,11 +33,33 @@ const ( mimeTypeOctetStream = "application/octet-stream" mimeTypeFormPost = "application/x-www-form-urlencoded" - headerAuthorization = "Authorization" - headerContentType = "Content-Type" - headerUserAgent = "User-Agent" + headerAuthorization = "Authorization" + headerAuxAuthorization = 
"x-ms-authorization-auxiliary" + headerContentType = "Content-Type" + headerUserAgent = "User-Agent" ) +// used as a key type in context.WithValue() +type ctxPrepareDecorators struct{} + +// WithPrepareDecorators adds the specified PrepareDecorators to the provided context. +// If no PrepareDecorators are provided the context is unchanged. +func WithPrepareDecorators(ctx context.Context, prepareDecorator []PrepareDecorator) context.Context { + if len(prepareDecorator) == 0 { + return ctx + } + return context.WithValue(ctx, ctxPrepareDecorators{}, prepareDecorator) +} + +// GetPrepareDecorators returns the PrepareDecorators in the provided context or the provided default PrepareDecorators. +func GetPrepareDecorators(ctx context.Context, defaultPrepareDecorators ...PrepareDecorator) []PrepareDecorator { + inCtx := ctx.Value(ctxPrepareDecorators{}) + if pd, ok := inCtx.([]PrepareDecorator); ok { + return pd + } + return defaultPrepareDecorators +} + // Preparer is the interface that wraps the Prepare method. // // Prepare accepts and possibly modifies an http.Request (e.g., adding Headers). Implementations @@ -190,6 +214,9 @@ func AsGet() PrepareDecorator { return WithMethod("GET") } // AsHead returns a PrepareDecorator that sets the HTTP method to HEAD. func AsHead() PrepareDecorator { return WithMethod("HEAD") } +// AsMerge returns a PrepareDecorator that sets the HTTP method to MERGE. +func AsMerge() PrepareDecorator { return WithMethod("MERGE") } + // AsOptions returns a PrepareDecorator that sets the HTTP method to OPTIONS. func AsOptions() PrepareDecorator { return WithMethod("OPTIONS") } @@ -225,6 +252,25 @@ func WithBaseURL(baseURL string) PrepareDecorator { } } +// WithBytes returns a PrepareDecorator that takes a list of bytes +// which passes the bytes directly to the body +func WithBytes(input *[]byte) PrepareDecorator { + return func(p Preparer) Preparer { + return PreparerFunc(func(r *http.Request) (*http.Request, error) { + r, err := p.Prepare(r) + if err == nil { + if input == nil { + return r, fmt.Errorf("Input Bytes was nil") + } + + r.ContentLength = int64(len(*input)) + r.Body = ioutil.NopCloser(bytes.NewReader(*input)) + } + return r, err + }) + } +} + // WithCustomBaseURL returns a PrepareDecorator that replaces brace-enclosed keys within the // request base URL (i.e., http.Request.URL) with the corresponding values from the passed map. func WithCustomBaseURL(baseURL string, urlParameters map[string]interface{}) PrepareDecorator { @@ -377,6 +423,28 @@ func WithJSON(v interface{}) PrepareDecorator { } } +// WithXML returns a PrepareDecorator that encodes the data passed as XML into the body of the +// request and sets the Content-Length header. +func WithXML(v interface{}) PrepareDecorator { + return func(p Preparer) Preparer { + return PreparerFunc(func(r *http.Request) (*http.Request, error) { + r, err := p.Prepare(r) + if err == nil { + b, err := xml.Marshal(v) + if err == nil { + // we have to tack on an XML header + withHeader := xml.Header + string(b) + bytesWithHeader := []byte(withHeader) + + r.ContentLength = int64(len(bytesWithHeader)) + r.Body = ioutil.NopCloser(bytes.NewReader(bytesWithHeader)) + } + } + return r, err + }) + } +} + // WithPath returns a PrepareDecorator that adds the supplied path to the request URL. If the path // is absolute (that is, it begins with a "/"), it replaces the existing path. 
func WithPath(path string) PrepareDecorator { @@ -455,7 +523,7 @@ func parseURL(u *url.URL, path string) (*url.URL, error) { // WithQueryParameters returns a PrepareDecorators that encodes and applies the query parameters // given in the supplied map (i.e., key=value). func WithQueryParameters(queryParameters map[string]interface{}) PrepareDecorator { - parameters := ensureValueStrings(queryParameters) + parameters := MapToValues(queryParameters) return func(p Preparer) Preparer { return PreparerFunc(func(r *http.Request) (*http.Request, error) { r, err := p.Prepare(r) @@ -463,14 +531,16 @@ func WithQueryParameters(queryParameters map[string]interface{}) PrepareDecorato if r.URL == nil { return r, NewError("autorest", "WithQueryParameters", "Invoked with a nil URL") } - v := r.URL.Query() for key, value := range parameters { - d, err := url.QueryUnescape(value) - if err != nil { - return r, err + for i := range value { + d, err := url.QueryUnescape(value[i]) + if err != nil { + return r, err + } + value[i] = d } - v.Add(key, d) + v[key] = value } r.URL.RawQuery = v.Encode() } diff --git a/vendor/github.com/Azure/go-autorest/autorest/responder.go b/vendor/github.com/Azure/go-autorest/autorest/responder.go index a908a0adb..349e1963a 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/responder.go +++ b/vendor/github.com/Azure/go-autorest/autorest/responder.go @@ -153,6 +153,25 @@ func ByClosingIfError() RespondDecorator { } } +// ByUnmarshallingBytes returns a RespondDecorator that copies the Bytes returned in the +// response Body into the value pointed to by v. +func ByUnmarshallingBytes(v *[]byte) RespondDecorator { + return func(r Responder) Responder { + return ResponderFunc(func(resp *http.Response) error { + err := r.Respond(resp) + if err == nil { + bytes, errInner := ioutil.ReadAll(resp.Body) + if errInner != nil { + err = fmt.Errorf("Error occurred reading http.Response#Body - Error = '%v'", errInner) + } else { + *v = bytes + } + } + return err + }) + } +} + // ByUnmarshallingJSON returns a RespondDecorator that decodes a JSON document returned in the // response Body into the value pointed to by v. func ByUnmarshallingJSON(v interface{}) RespondDecorator { diff --git a/vendor/github.com/Azure/go-autorest/autorest/sender.go b/vendor/github.com/Azure/go-autorest/autorest/sender.go index cacbd8157..5e595d7b1 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/sender.go +++ b/vendor/github.com/Azure/go-autorest/autorest/sender.go @@ -15,14 +15,40 @@ package autorest // limitations under the License. import ( + "context" + "crypto/tls" "fmt" "log" "math" "net/http" + "net/http/cookiejar" "strconv" "time" + + "github.com/Azure/go-autorest/tracing" ) +// used as a key type in context.WithValue() +type ctxSendDecorators struct{} + +// WithSendDecorators adds the specified SendDecorators to the provided context. +// If no SendDecorators are provided the context is unchanged. +func WithSendDecorators(ctx context.Context, sendDecorator []SendDecorator) context.Context { + if len(sendDecorator) == 0 { + return ctx + } + return context.WithValue(ctx, ctxSendDecorators{}, sendDecorator) +} + +// GetSendDecorators returns the SendDecorators in the provided context or the provided default SendDecorators. 
+func GetSendDecorators(ctx context.Context, defaultSendDecorators ...SendDecorator) []SendDecorator {
+	inCtx := ctx.Value(ctxSendDecorators{})
+	if sd, ok := inCtx.([]SendDecorator); ok {
+		return sd
+	}
+	return defaultSendDecorators
+}
+
 // Sender is the interface that wraps the Do method to send HTTP requests.
 //
 // The standard http.Client conforms to this interface.
@@ -38,14 +64,14 @@ func (sf SenderFunc) Do(r *http.Request) (*http.Response, error) { return sf(r)
 }
 
-// SendDecorator takes and possibily decorates, by wrapping, a Sender. Decorators may affect the
+// SendDecorator takes and possibly decorates, by wrapping, a Sender. Decorators may affect the
 // http.Request and pass it along or, first, pass the http.Request along then react to the
 // http.Response result.
 type SendDecorator func(Sender) Sender
 
 // CreateSender creates, decorates, and returns, as a Sender, the default http.Client.
 func CreateSender(decorators ...SendDecorator) Sender {
-	return DecorateSender(&http.Client{}, decorators...)
+	return DecorateSender(sender(tls.RenegotiateNever), decorators...)
 }
 
 // DecorateSender accepts a Sender and a, possibly empty, set of SendDecorators, which it applies to
@@ -68,7 +94,7 @@ func DecorateSender(s Sender, decorators ...SendDecorator) Sender {
 //
 // Send will not poll or retry requests.
 func Send(r *http.Request, decorators ...SendDecorator) (*http.Response, error) {
-	return SendWithSender(&http.Client{}, r, decorators...)
+	return SendWithSender(sender(tls.RenegotiateNever), r, decorators...)
 }
 
 // SendWithSender sends the passed http.Request, through the provided Sender, returning the
@@ -80,6 +106,29 @@ func SendWithSender(s Sender, r *http.Request, decorators ...SendDecorator) (*ht
 	return DecorateSender(s, decorators...).Do(r)
 }
 
+func sender(renegotiation tls.RenegotiationSupport) Sender {
+	// Use behaviour compatible with DefaultTransport, but require TLS minimum version.
+	defaultTransport := http.DefaultTransport.(*http.Transport)
+	transport := &http.Transport{
+		Proxy:                 defaultTransport.Proxy,
+		DialContext:           defaultTransport.DialContext,
+		MaxIdleConns:          defaultTransport.MaxIdleConns,
+		IdleConnTimeout:       defaultTransport.IdleConnTimeout,
+		TLSHandshakeTimeout:   defaultTransport.TLSHandshakeTimeout,
+		ExpectContinueTimeout: defaultTransport.ExpectContinueTimeout,
+		TLSClientConfig: &tls.Config{
+			MinVersion:    tls.VersionTLS12,
+			Renegotiation: renegotiation,
+		},
+	}
+	var roundTripper http.RoundTripper = transport
+	if tracing.IsEnabled() {
+		roundTripper = tracing.NewTransport(transport)
+	}
+	j, _ := cookiejar.New(nil)
+	return &http.Client{Jar: j, Transport: roundTripper}
+}
+
 // AfterDelay returns a SendDecorator that delays for the passed time.Duration before
 // invoking the Sender. The delay may be terminated by closing the optional channel on the
 // http.Request. If canceled, no further Senders are invoked.
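As an illustrative aside, a hypothetical sketch wiring together the context-carried SendDecorators and the capped retry decorator that this change introduces; the endpoint URL is invented for the example:

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"

	"github.com/Azure/go-autorest/autorest"
)

func main() {
	// Stash retry behavior in a context; GetSendDecorators falls back to the
	// supplied default when the context carries no decorators.
	ctx := autorest.WithSendDecorators(context.Background(), []autorest.SendDecorator{
		// Retry 5xx responses up to 3 times, with exponential backoff capped at 30s.
		autorest.DoRetryForStatusCodesWithCap(3, 2*time.Second, 30*time.Second,
			http.StatusInternalServerError, http.StatusBadGateway, http.StatusServiceUnavailable),
	})

	req, _ := http.NewRequest(http.MethodGet, "https://example.com/health", nil) // hypothetical URL
	req = req.WithContext(ctx)

	decorators := autorest.GetSendDecorators(ctx, autorest.DoCloseIfError())
	resp, err := autorest.Send(req, decorators...)
	if err != nil {
		fmt.Println("send failed:", err)
		return
	}
	fmt.Println(resp.Status)
}

@@ -209,54 +258,73 @@ func DoRetryForAttempts(attempts int, backoff time.Duration) SendDecorator {
 
 // DoRetryForStatusCodes returns a SendDecorator that retries for specified statusCodes for up to the specified
 // number of attempts, exponentially backing off between requests using the supplied backoff
-// time.Duration (which may be zero). Retrying may be canceled by closing the optional channel on
-// the http.Request.
+// time.Duration (which may be zero). Retrying may be canceled by cancelling the context on the http.Request.
+// NOTE: Code http.StatusTooManyRequests (429) will *not* be counted against the number of attempts.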
func DoRetryForStatusCodes(attempts int, backoff time.Duration, codes ...int) SendDecorator {
 	return func(s Sender) Sender {
-		return SenderFunc(func(r *http.Request) (resp *http.Response, err error) {
-			rr := NewRetriableRequest(r)
-			// Increment to add the first call (attempts denotes number of retries)
-			attempts++
-			for attempt := 0; attempt < attempts; {
-				err = rr.Prepare()
-				if err != nil {
-					return resp, err
-				}
-				resp, err = s.Do(rr.Request())
-				// if the error isn't temporary don't bother retrying
-				if err != nil && !IsTemporaryNetworkError(err) {
-					return nil, err
-				}
-				// we want to retry if err is not nil (e.g. transient network failure). note that for failed authentication
-				// resp and err will both have a value, so in this case we don't want to retry as it will never succeed.
-				if err == nil && !ResponseHasStatusCode(resp, codes...) || IsTokenRefreshError(err) {
-					return resp, err
-				}
-				delayed := DelayWithRetryAfter(resp, r.Context().Done())
-				if !delayed && !DelayForBackoff(backoff, attempt, r.Context().Done()) {
-					return nil, r.Context().Err()
-				}
-				// don't count a 429 against the number of attempts
-				// so that we continue to retry until it succeeds
-				if resp == nil || resp.StatusCode != http.StatusTooManyRequests {
-					attempt++
-				}
-			}
-			return resp, err
+		return SenderFunc(func(r *http.Request) (*http.Response, error) {
+			return doRetryForStatusCodesImpl(s, r, false, attempts, backoff, 0, codes...)
 		})
 	}
 }
 
-// DelayWithRetryAfter invokes time.After for the duration specified in the "Retry-After" header in
-// responses with status code 429
+// DoRetryForStatusCodesWithCap returns a SendDecorator that retries for specified statusCodes for up to the
+// specified number of attempts, exponentially backing off between requests using the supplied backoff
+// time.Duration (which may be zero). To cap the maximum possible delay between iterations specify a value greater
+// than zero for cap. Retrying may be canceled by cancelling the context on the http.Request.
+func DoRetryForStatusCodesWithCap(attempts int, backoff, cap time.Duration, codes ...int) SendDecorator {
+	return func(s Sender) Sender {
+		return SenderFunc(func(r *http.Request) (*http.Response, error) {
+			return doRetryForStatusCodesImpl(s, r, true, attempts, backoff, cap, codes...)
+		})
+	}
+}
+
+func doRetryForStatusCodesImpl(s Sender, r *http.Request, count429 bool, attempts int, backoff, cap time.Duration, codes ...int) (resp *http.Response, err error) {
+	rr := NewRetriableRequest(r)
+	// the loop runs attempts+1 times so the initial call is included (attempts denotes the number of retries)
+	for attempt := 0; attempt < attempts+1; {
+		err = rr.Prepare()
+		if err != nil {
+			return
+		}
+		resp, err = s.Do(rr.Request())
+		// we want to retry if err is not nil (e.g. transient network failure). note that for failed authentication
+		// resp and err will both have a value, so in this case we don't want to retry as it will never succeed.
+		if err == nil && !ResponseHasStatusCode(resp, codes...) || IsTokenRefreshError(err) {
+			return resp, err
+		}
+		delayed := DelayWithRetryAfter(resp, r.Context().Done())
+		if !delayed && !DelayForBackoffWithCap(backoff, cap, attempt, r.Context().Done()) {
+			return resp, r.Context().Err()
+		}
+		// when count429 == false don't count a 429 against the number
+		// of attempts so that we continue to retry until it succeeds
+		if count429 || (resp == nil || resp.StatusCode != http.StatusTooManyRequests) {
+			attempt++
+		}
+	}
+	return resp, err
+}
+
+// DelayWithRetryAfter invokes time.After for the duration specified in the "Retry-After" header.
+// The value of Retry-After can be either the number of seconds or a date in RFC1123 format.
+// The function returns true after successfully waiting for the specified duration. If there is
+// no Retry-After header or the wait is cancelled, the return value is false.
 func DelayWithRetryAfter(resp *http.Response, cancel <-chan struct{}) bool {
 	if resp == nil {
 		return false
 	}
-	retryAfter, _ := strconv.Atoi(resp.Header.Get("Retry-After"))
-	if resp.StatusCode == http.StatusTooManyRequests && retryAfter > 0 {
+	var dur time.Duration
+	ra := resp.Header.Get("Retry-After")
+	if retryAfter, _ := strconv.Atoi(ra); retryAfter > 0 {
+		dur = time.Duration(retryAfter) * time.Second
+	} else if t, err := time.Parse(time.RFC1123, ra); err == nil {
+		dur = t.Sub(time.Now())
+	}
+	if dur > 0 {
 		select {
-		case <-time.After(time.Duration(retryAfter) * time.Second):
+		case <-time.After(dur):
 			return true
 		case <-cancel:
 			return false
@@ -316,8 +384,22 @@ func WithLogging(logger *log.Logger) SendDecorator {
 // Note: Passing attempt 1 will result in doubling "backoff" duration. Treat this as a zero-based attempt
 // count.
 func DelayForBackoff(backoff time.Duration, attempt int, cancel <-chan struct{}) bool {
+	return DelayForBackoffWithCap(backoff, 0, attempt, cancel)
+}
+
+// DelayForBackoffWithCap invokes time.After for the supplied backoff duration raised to the power of
+// the passed attempt (i.e., an exponential backoff delay). Backoff duration is in seconds and can be set
+// to zero for no delay. To cap the maximum possible delay specify a value greater than zero for cap.
+// The delay may be canceled by closing the passed channel. If terminated early, returns false.
+// Note: Passing attempt 1 will result in doubling "backoff" duration. Treat this as a zero-based attempt
+// count.
+func DelayForBackoffWithCap(backoff, cap time.Duration, attempt int, cancel <-chan struct{}) bool {
+	d := time.Duration(backoff.Seconds()*math.Pow(2, float64(attempt))) * time.Second
+	if cap > 0 && d > cap {
+		d = cap
+	}
 	select {
-	case <-time.After(time.Duration(backoff.Seconds()*math.Pow(2, float64(attempt))) * time.Second):
+	case <-time.After(d):
 		return true
 	case <-cancel:
 		return false
diff --git a/vendor/github.com/Azure/go-autorest/autorest/to/LICENSE b/vendor/github.com/Azure/go-autorest/autorest/to/LICENSE
new file mode 100644
index 000000000..b9d6a27ea
--- /dev/null
+++ b/vendor/github.com/Azure/go-autorest/autorest/to/LICENSE
@@ -0,0 +1,191 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+ + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + Copyright 2015 Microsoft Corporation + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/vendor/github.com/Azure/go-autorest/autorest/to/convert.go b/vendor/github.com/Azure/go-autorest/autorest/to/convert.go index fdda2ce1a..86694bd25 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/to/convert.go +++ b/vendor/github.com/Azure/go-autorest/autorest/to/convert.go @@ -145,3 +145,8 @@ func Float64(i *float64) float64 { func Float64Ptr(i float64) *float64 { return &i } + +// ByteSlicePtr returns a pointer to the passed byte slice. 
+func ByteSlicePtr(b []byte) *[]byte { + return &b +} diff --git a/vendor/github.com/Azure/go-autorest/autorest/to/go.mod b/vendor/github.com/Azure/go-autorest/autorest/to/go.mod new file mode 100644 index 000000000..48fd8c6e5 --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/to/go.mod @@ -0,0 +1,5 @@ +module github.com/Azure/go-autorest/autorest/to + +go 1.12 + +require github.com/Azure/go-autorest/autorest v0.9.0 diff --git a/vendor/github.com/Azure/go-autorest/autorest/to/go.sum b/vendor/github.com/Azure/go-autorest/autorest/to/go.sum new file mode 100644 index 000000000..d7ee6b462 --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/to/go.sum @@ -0,0 +1,17 @@ +github.com/Azure/go-autorest/autorest v0.9.0 h1:MRvx8gncNaXJqOoLmhNjUAKh33JJF8LyxPhomEtOsjs= +github.com/Azure/go-autorest/autorest v0.9.0/go.mod h1:xyHB1BMZT0cuDHU7I0+g046+BFDTQ8rEZB0s4Yfa6bI= +github.com/Azure/go-autorest/autorest/adal v0.5.0 h1:q2gDruN08/guU9vAjuPWff0+QIrpH6ediguzdAzXAUU= +github.com/Azure/go-autorest/autorest/adal v0.5.0/go.mod h1:8Z9fGy2MpX0PvDjB1pEgQTmVqjGhiHBW7RJJEciWzS0= +github.com/Azure/go-autorest/autorest/date v0.1.0 h1:YGrhWfrgtFs84+h0o46rJrlmsZtyZRg470CqAXTZaGM= +github.com/Azure/go-autorest/autorest/date v0.1.0/go.mod h1:plvfp3oPSKwf2DNjlBjWF/7vwR+cUD/ELuzDCXwHUVA= +github.com/Azure/go-autorest/autorest/mocks v0.1.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0= +github.com/Azure/go-autorest/autorest/mocks v0.2.0 h1:Ww5g4zThfD/6cLb4z6xxgeyDa7QDkizMkJKe0ysZXp0= +github.com/Azure/go-autorest/autorest/mocks v0.2.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0= +github.com/Azure/go-autorest/logger v0.1.0 h1:ruG4BSDXONFRrZZJ2GUXDiUyVpayPmb1GnWeHDdaNKY= +github.com/Azure/go-autorest/logger v0.1.0/go.mod h1:oExouG+K6PryycPJfVSxi/koC6LSNgds39diKLz7Vrc= +github.com/Azure/go-autorest/tracing v0.5.0 h1:TRn4WjSnkcSy5AEG3pnbtFSwNtwzjr4VYyQflFE619k= +github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk= +github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM= +github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= diff --git a/vendor/github.com/Azure/go-autorest/autorest/to/go_mod_tidy_hack.go b/vendor/github.com/Azure/go-autorest/autorest/to/go_mod_tidy_hack.go new file mode 100644 index 000000000..8e8292107 --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/to/go_mod_tidy_hack.go @@ -0,0 +1,24 @@ +// +build modhack + +package to + +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// This file, and the github.com/Azure/go-autorest/autorest import, won't actually become part of +// the resultant binary. + +// Necessary for safely adding multi-module repo. 
+// See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository +import _ "github.com/Azure/go-autorest/autorest" diff --git a/vendor/github.com/Azure/go-autorest/autorest/utility.go b/vendor/github.com/Azure/go-autorest/autorest/utility.go index bfddd90b5..08cf11c11 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/utility.go +++ b/vendor/github.com/Azure/go-autorest/autorest/utility.go @@ -157,7 +157,7 @@ func AsStringSlice(s interface{}) ([]string, error) { } // String method converts interface v to string. If interface is a list, it -// joins list elements using the seperator. Note that only sep[0] will be used for +// joins list elements using the separator. Note that only sep[0] will be used for // joining if any separator is specified. func String(v interface{}, sep ...string) string { if len(sep) == 0 { diff --git a/vendor/github.com/Azure/go-autorest/autorest/validation/LICENSE b/vendor/github.com/Azure/go-autorest/autorest/validation/LICENSE new file mode 100644 index 000000000..b9d6a27ea --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/validation/LICENSE @@ -0,0 +1,191 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. 
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + Copyright 2015 Microsoft Corporation + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
diff --git a/vendor/github.com/Azure/go-autorest/autorest/validation/go.mod b/vendor/github.com/Azure/go-autorest/autorest/validation/go.mod new file mode 100644 index 000000000..b3f9b6a09 --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/validation/go.mod @@ -0,0 +1,8 @@ +module github.com/Azure/go-autorest/autorest/validation + +go 1.12 + +require ( + github.com/Azure/go-autorest/autorest v0.9.0 + github.com/stretchr/testify v1.3.0 +) diff --git a/vendor/github.com/Azure/go-autorest/autorest/validation/go.sum b/vendor/github.com/Azure/go-autorest/autorest/validation/go.sum new file mode 100644 index 000000000..6b9010a73 --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/validation/go.sum @@ -0,0 +1,24 @@ +github.com/Azure/go-autorest/autorest v0.9.0 h1:MRvx8gncNaXJqOoLmhNjUAKh33JJF8LyxPhomEtOsjs= +github.com/Azure/go-autorest/autorest v0.9.0/go.mod h1:xyHB1BMZT0cuDHU7I0+g046+BFDTQ8rEZB0s4Yfa6bI= +github.com/Azure/go-autorest/autorest/adal v0.5.0 h1:q2gDruN08/guU9vAjuPWff0+QIrpH6ediguzdAzXAUU= +github.com/Azure/go-autorest/autorest/adal v0.5.0/go.mod h1:8Z9fGy2MpX0PvDjB1pEgQTmVqjGhiHBW7RJJEciWzS0= +github.com/Azure/go-autorest/autorest/date v0.1.0 h1:YGrhWfrgtFs84+h0o46rJrlmsZtyZRg470CqAXTZaGM= +github.com/Azure/go-autorest/autorest/date v0.1.0/go.mod h1:plvfp3oPSKwf2DNjlBjWF/7vwR+cUD/ELuzDCXwHUVA= +github.com/Azure/go-autorest/autorest/mocks v0.1.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0= +github.com/Azure/go-autorest/autorest/mocks v0.2.0 h1:Ww5g4zThfD/6cLb4z6xxgeyDa7QDkizMkJKe0ysZXp0= +github.com/Azure/go-autorest/autorest/mocks v0.2.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0= +github.com/Azure/go-autorest/logger v0.1.0 h1:ruG4BSDXONFRrZZJ2GUXDiUyVpayPmb1GnWeHDdaNKY= +github.com/Azure/go-autorest/logger v0.1.0/go.mod h1:oExouG+K6PryycPJfVSxi/koC6LSNgds39diKLz7Vrc= +github.com/Azure/go-autorest/tracing v0.5.0 h1:TRn4WjSnkcSy5AEG3pnbtFSwNtwzjr4VYyQflFE619k= +github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk= +github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM= +github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ= +github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= diff --git a/vendor/github.com/Azure/go-autorest/autorest/validation/go_mod_tidy_hack.go b/vendor/github.com/Azure/go-autorest/autorest/validation/go_mod_tidy_hack.go new file mode 100644 index 000000000..2b2668581 --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/validation/go_mod_tidy_hack.go @@ -0,0 +1,24 @@ +// +build modhack + +package validation + +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the 
"License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// This file, and the github.com/Azure/go-autorest/autorest import, won't actually become part of +// the resultant binary. + +// Necessary for safely adding multi-module repo. +// See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository +import _ "github.com/Azure/go-autorest/autorest" diff --git a/vendor/github.com/Azure/go-autorest/autorest/validation/validation.go b/vendor/github.com/Azure/go-autorest/autorest/validation/validation.go index ae987f8fa..65899b69b 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/validation/validation.go +++ b/vendor/github.com/Azure/go-autorest/autorest/validation/validation.go @@ -398,11 +398,3 @@ func toInt64(v interface{}) (int64, bool) { } return 0, false } - -// NewErrorWithValidationError appends package type and method name in -// validation error. -// -// Deprecated: Please use validation.NewError() instead. -func NewErrorWithValidationError(err error, packageType, method string) error { - return NewError(packageType, method, err.Error()) -} diff --git a/vendor/github.com/Azure/go-autorest/autorest/version.go b/vendor/github.com/Azure/go-autorest/autorest/version.go index 3c6451546..7a71089c9 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/version.go +++ b/vendor/github.com/Azure/go-autorest/autorest/version.go @@ -1,7 +1,5 @@ package autorest -import "github.com/Azure/go-autorest/version" - // Copyright 2017 Microsoft Corporation // // Licensed under the Apache License, Version 2.0 (the "License"); @@ -16,7 +14,28 @@ import "github.com/Azure/go-autorest/version" // See the License for the specific language governing permissions and // limitations under the License. +import ( + "fmt" + "runtime" +) + +const number = "v13.0.2" + +var ( + userAgent = fmt.Sprintf("Go/%s (%s-%s) go-autorest/%s", + runtime.Version(), + runtime.GOARCH, + runtime.GOOS, + number, + ) +) + +// UserAgent returns a string containing the Go version, system architecture and OS, and the go-autorest version. +func UserAgent() string { + return userAgent +} + // Version returns the semantic version (see http://semver.org). func Version() string { - return version.Number + return number } diff --git a/vendor/github.com/Azure/go-autorest/logger/LICENSE b/vendor/github.com/Azure/go-autorest/logger/LICENSE new file mode 100644 index 000000000..b9d6a27ea --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/logger/LICENSE @@ -0,0 +1,191 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. 
+ + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + Copyright 2015 Microsoft Corporation + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
diff --git a/vendor/github.com/Azure/go-autorest/logger/go.mod b/vendor/github.com/Azure/go-autorest/logger/go.mod new file mode 100644 index 000000000..f22ed56bc --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/logger/go.mod @@ -0,0 +1,3 @@ +module github.com/Azure/go-autorest/logger + +go 1.12 diff --git a/vendor/github.com/Azure/go-autorest/logger/logger.go b/vendor/github.com/Azure/go-autorest/logger/logger.go index 756fd80ca..da09f394c 100644 --- a/vendor/github.com/Azure/go-autorest/logger/logger.go +++ b/vendor/github.com/Azure/go-autorest/logger/logger.go @@ -162,7 +162,7 @@ type Writer interface { // WriteResponse writes the specified HTTP response to the logger if the log level is greater than // or equal to LogInfo. The response body, if set, is logged at level LogDebug or higher. // Custom filters can be specified to exclude URL, header, and/or body content from the log. - // By default no respone content is excluded. + // By default no response content is excluded. WriteResponse(resp *http.Response, filter Filter) } @@ -318,7 +318,7 @@ func (fl fileLogger) WriteResponse(resp *http.Response, filter Filter) { // returns true if the provided body should be included in the log func (fl fileLogger) shouldLogBody(header http.Header, body io.ReadCloser) bool { ct := header.Get("Content-Type") - return fl.logLevel >= LogDebug && body != nil && strings.Index(ct, "application/octet-stream") == -1 + return fl.logLevel >= LogDebug && body != nil && !strings.Contains(ct, "application/octet-stream") } // creates standard header for log entries, it contains a timestamp and the log level diff --git a/vendor/github.com/Azure/go-autorest/tracing/LICENSE b/vendor/github.com/Azure/go-autorest/tracing/LICENSE new file mode 100644 index 000000000..b9d6a27ea --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/tracing/LICENSE @@ -0,0 +1,191 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). 
+ + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + Copyright 2015 Microsoft Corporation + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/vendor/github.com/Azure/go-autorest/tracing/go.mod b/vendor/github.com/Azure/go-autorest/tracing/go.mod new file mode 100644 index 000000000..25c34c108 --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/tracing/go.mod @@ -0,0 +1,3 @@ +module github.com/Azure/go-autorest/tracing + +go 1.12 diff --git a/vendor/github.com/Azure/go-autorest/tracing/tracing.go b/vendor/github.com/Azure/go-autorest/tracing/tracing.go new file mode 100644 index 000000000..0e7a6e962 --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/tracing/tracing.go @@ -0,0 +1,67 @@ +package tracing + +// Copyright 2018 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +import ( + "context" + "net/http" +) + +// Tracer represents an HTTP tracing facility. 
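+// Implementations are registered process-wide via Register below. As a
+// minimal sketch (myTracer is a hypothetical implementation of Tracer, not
+// part of this package), a caller might wire one up like so:
+//
+//	tracing.Register(myTracer{})
+//	ctx := tracing.StartSpan(context.Background(), "operation")
+//	defer tracing.EndSpan(ctx, http.StatusOK, nil)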
+type Tracer interface { + NewTransport(base *http.Transport) http.RoundTripper + StartSpan(ctx context.Context, name string) context.Context + EndSpan(ctx context.Context, httpStatusCode int, err error) +} + +var ( + tracer Tracer +) + +// Register will register the provided Tracer. Pass nil to unregister a Tracer. +func Register(t Tracer) { + tracer = t +} + +// IsEnabled returns true if a Tracer has been registered. +func IsEnabled() bool { + return tracer != nil +} + +// NewTransport creates a new instrumenting http.RoundTripper for the +// registered Tracer. If no Tracer has been registered it returns nil. +func NewTransport(base *http.Transport) http.RoundTripper { + if tracer != nil { + return tracer.NewTransport(base) + } + return nil +} + +// StartSpan starts a trace span with the specified name, associating it with the +// provided context. Has no effect if a Tracer has not been registered. +func StartSpan(ctx context.Context, name string) context.Context { + if tracer != nil { + return tracer.StartSpan(ctx, name) + } + return ctx +} + +// EndSpan ends a previously started span stored in the context. +// Has no effect if a Tracer has not been registered. +func EndSpan(ctx context.Context, httpStatusCode int, err error) { + if tracer != nil { + tracer.EndSpan(ctx, httpStatusCode, err) + } +} diff --git a/vendor/github.com/apparentlymart/go-textseg/textseg/make_tables.go b/vendor/github.com/apparentlymart/go-textseg/textseg/make_tables.go deleted file mode 100644 index aad3d0506..000000000 --- a/vendor/github.com/apparentlymart/go-textseg/textseg/make_tables.go +++ /dev/null @@ -1,307 +0,0 @@ -// Copyright (c) 2014 Couchbase, Inc. -// Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file -// except in compliance with the License. You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// Unless required by applicable law or agreed to in writing, software distributed under the -// License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, -// either express or implied. See the License for the specific language governing permissions -// and limitations under the License. - -// Modified by Martin Atkins to serve the needs of package textseg. 
- -// +build ignore - -package main - -import ( - "bufio" - "flag" - "fmt" - "io" - "log" - "net/http" - "os" - "os/exec" - "sort" - "strconv" - "strings" - "unicode" -) - -var url = flag.String("url", - "http://www.unicode.org/Public/"+unicode.Version+"/ucd/auxiliary/", - "URL of Unicode database directory") -var verbose = flag.Bool("verbose", - false, - "write data to stdout as it is parsed") -var localFiles = flag.Bool("local", - false, - "data files have been copied to the current directory; for debugging only") -var outputFile = flag.String("output", - "", - "output file for generated tables; default stdout") - -var output *bufio.Writer - -func main() { - flag.Parse() - setupOutput() - - graphemePropertyRanges := make(map[string]*unicode.RangeTable) - loadUnicodeData("GraphemeBreakProperty.txt", graphemePropertyRanges) - wordPropertyRanges := make(map[string]*unicode.RangeTable) - loadUnicodeData("WordBreakProperty.txt", wordPropertyRanges) - sentencePropertyRanges := make(map[string]*unicode.RangeTable) - loadUnicodeData("SentenceBreakProperty.txt", sentencePropertyRanges) - - fmt.Fprintf(output, fileHeader, *url) - generateTables("Grapheme", graphemePropertyRanges) - generateTables("Word", wordPropertyRanges) - generateTables("Sentence", sentencePropertyRanges) - - flushOutput() -} - -// WordBreakProperty.txt has the form: -// 05F0..05F2 ; Hebrew_Letter # Lo [3] HEBREW LIGATURE YIDDISH DOUBLE VAV..HEBREW LIGATURE YIDDISH DOUBLE YOD -// FB1D ; Hebrew_Letter # Lo HEBREW LETTER YOD WITH HIRIQ -func openReader(file string) (input io.ReadCloser) { - if *localFiles { - f, err := os.Open(file) - if err != nil { - log.Fatal(err) - } - input = f - } else { - path := *url + file - resp, err := http.Get(path) - if err != nil { - log.Fatal(err) - } - if resp.StatusCode != 200 { - log.Fatal("bad GET status for "+file, resp.Status) - } - input = resp.Body - } - return -} - -func loadUnicodeData(filename string, propertyRanges map[string]*unicode.RangeTable) { - f := openReader(filename) - defer f.Close() - bufioReader := bufio.NewReader(f) - line, err := bufioReader.ReadString('\n') - for err == nil { - parseLine(line, propertyRanges) - line, err = bufioReader.ReadString('\n') - } - // if the err was EOF still need to process last value - if err == io.EOF { - parseLine(line, propertyRanges) - } -} - -const comment = "#" -const sep = ";" -const rnge = ".." 
- -func parseLine(line string, propertyRanges map[string]*unicode.RangeTable) { - if strings.HasPrefix(line, comment) { - return - } - line = strings.TrimSpace(line) - if len(line) == 0 { - return - } - commentStart := strings.Index(line, comment) - if commentStart > 0 { - line = line[0:commentStart] - } - pieces := strings.Split(line, sep) - if len(pieces) != 2 { - log.Printf("unexpected %d pieces in %s", len(pieces), line) - return - } - - propertyName := strings.TrimSpace(pieces[1]) - - rangeTable, ok := propertyRanges[propertyName] - if !ok { - rangeTable = &unicode.RangeTable{ - LatinOffset: 0, - } - propertyRanges[propertyName] = rangeTable - } - - codepointRange := strings.TrimSpace(pieces[0]) - rngeIndex := strings.Index(codepointRange, rnge) - - if rngeIndex < 0 { - // single codepoint, not range - codepointInt, err := strconv.ParseUint(codepointRange, 16, 64) - if err != nil { - log.Printf("error parsing int: %v", err) - return - } - if codepointInt < 0x10000 { - r16 := unicode.Range16{ - Lo: uint16(codepointInt), - Hi: uint16(codepointInt), - Stride: 1, - } - addR16ToTable(rangeTable, r16) - } else { - r32 := unicode.Range32{ - Lo: uint32(codepointInt), - Hi: uint32(codepointInt), - Stride: 1, - } - addR32ToTable(rangeTable, r32) - } - } else { - rngeStart := codepointRange[0:rngeIndex] - rngeEnd := codepointRange[rngeIndex+2:] - rngeStartInt, err := strconv.ParseUint(rngeStart, 16, 64) - if err != nil { - log.Printf("error parsing int: %v", err) - return - } - rngeEndInt, err := strconv.ParseUint(rngeEnd, 16, 64) - if err != nil { - log.Printf("error parsing int: %v", err) - return - } - if rngeStartInt < 0x10000 && rngeEndInt < 0x10000 { - r16 := unicode.Range16{ - Lo: uint16(rngeStartInt), - Hi: uint16(rngeEndInt), - Stride: 1, - } - addR16ToTable(rangeTable, r16) - } else if rngeStartInt >= 0x10000 && rngeEndInt >= 0x10000 { - r32 := unicode.Range32{ - Lo: uint32(rngeStartInt), - Hi: uint32(rngeEndInt), - Stride: 1, - } - addR32ToTable(rangeTable, r32) - } else { - log.Printf("unexpected range") - } - } -} - -func addR16ToTable(r *unicode.RangeTable, r16 unicode.Range16) { - if r.R16 == nil { - r.R16 = make([]unicode.Range16, 0, 1) - } - r.R16 = append(r.R16, r16) - if r16.Hi <= unicode.MaxLatin1 { - r.LatinOffset++ - } -} - -func addR32ToTable(r *unicode.RangeTable, r32 unicode.Range32) { - if r.R32 == nil { - r.R32 = make([]unicode.Range32, 0, 1) - } - r.R32 = append(r.R32, r32) -} - -func generateTables(prefix string, propertyRanges map[string]*unicode.RangeTable) { - prNames := make([]string, 0, len(propertyRanges)) - for k := range propertyRanges { - prNames = append(prNames, k) - } - sort.Strings(prNames) - for _, key := range prNames { - rt := propertyRanges[key] - fmt.Fprintf(output, "var _%s%s = %s\n", prefix, key, generateRangeTable(rt)) - } - fmt.Fprintf(output, "type _%sRuneRange unicode.RangeTable\n", prefix) - - fmt.Fprintf(output, "func _%sRuneType(r rune) *_%sRuneRange {\n", prefix, prefix) - fmt.Fprintf(output, "\tswitch {\n") - for _, key := range prNames { - fmt.Fprintf(output, "\tcase unicode.Is(_%s%s, r):\n\t\treturn (*_%sRuneRange)(_%s%s)\n", prefix, key, prefix, prefix, key) - } - fmt.Fprintf(output, "\tdefault:\n\t\treturn nil\n") - fmt.Fprintf(output, "\t}\n") - fmt.Fprintf(output, "}\n") - - fmt.Fprintf(output, "func (rng *_%sRuneRange) String() string {\n", prefix) - fmt.Fprintf(output, "\tswitch (*unicode.RangeTable)(rng) {\n") - for _, key := range prNames { - fmt.Fprintf(output, "\tcase _%s%s:\n\t\treturn %q\n", prefix, key, key) - } - 
fmt.Fprintf(output, "\tdefault:\n\t\treturn \"Other\"\n") - fmt.Fprintf(output, "\t}\n") - fmt.Fprintf(output, "}\n") -} - -func generateRangeTable(rt *unicode.RangeTable) string { - rv := "&unicode.RangeTable{\n" - if rt.R16 != nil { - rv += "\tR16: []unicode.Range16{\n" - for _, r16 := range rt.R16 { - rv += fmt.Sprintf("\t\t%#v,\n", r16) - } - rv += "\t},\n" - } - if rt.R32 != nil { - rv += "\tR32: []unicode.Range32{\n" - for _, r32 := range rt.R32 { - rv += fmt.Sprintf("\t\t%#v,\n", r32) - } - rv += "\t},\n" - } - rv += fmt.Sprintf("\t\tLatinOffset: %d,\n", rt.LatinOffset) - rv += "}\n" - return rv -} - -const fileHeader = `// Generated by running -// maketables --url=%s -// DO NOT EDIT - -package textseg - -import( - "unicode" -) -` - -func setupOutput() { - output = bufio.NewWriter(startGofmt()) -} - -// startGofmt connects output to a gofmt process if -output is set. -func startGofmt() io.Writer { - if *outputFile == "" { - return os.Stdout - } - stdout, err := os.Create(*outputFile) - if err != nil { - log.Fatal(err) - } - // Pipe output to gofmt. - gofmt := exec.Command("gofmt") - fd, err := gofmt.StdinPipe() - if err != nil { - log.Fatal(err) - } - gofmt.Stdout = stdout - gofmt.Stderr = os.Stderr - err = gofmt.Start() - if err != nil { - log.Fatal(err) - } - return fd -} - -func flushOutput() { - err := output.Flush() - if err != nil { - log.Fatal(err) - } -} diff --git a/vendor/github.com/apparentlymart/go-textseg/textseg/make_test_tables.go b/vendor/github.com/apparentlymart/go-textseg/textseg/make_test_tables.go deleted file mode 100644 index ac4200260..000000000 --- a/vendor/github.com/apparentlymart/go-textseg/textseg/make_test_tables.go +++ /dev/null @@ -1,212 +0,0 @@ -// Copyright (c) 2014 Couchbase, Inc. -// Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file -// except in compliance with the License. You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// Unless required by applicable law or agreed to in writing, software distributed under the -// License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, -// either express or implied. See the License for the specific language governing permissions -// and limitations under the License. 
- -// +build ignore - -package main - -import ( - "bufio" - "bytes" - "flag" - "fmt" - "io" - "log" - "net/http" - "os" - "os/exec" - "strconv" - "strings" - "unicode" -) - -var url = flag.String("url", - "http://www.unicode.org/Public/"+unicode.Version+"/ucd/auxiliary/", - "URL of Unicode database directory") -var verbose = flag.Bool("verbose", - false, - "write data to stdout as it is parsed") -var localFiles = flag.Bool("local", - false, - "data files have been copied to the current directory; for debugging only") - -var outputFile = flag.String("output", - "", - "output file for generated tables; default stdout") - -var output *bufio.Writer - -func main() { - flag.Parse() - setupOutput() - - graphemeTests := make([]test, 0) - graphemeTests = loadUnicodeData("GraphemeBreakTest.txt", graphemeTests) - wordTests := make([]test, 0) - wordTests = loadUnicodeData("WordBreakTest.txt", wordTests) - sentenceTests := make([]test, 0) - sentenceTests = loadUnicodeData("SentenceBreakTest.txt", sentenceTests) - - fmt.Fprintf(output, fileHeader, *url) - generateTestTables("Grapheme", graphemeTests) - generateTestTables("Word", wordTests) - generateTestTables("Sentence", sentenceTests) - - flushOutput() -} - -// WordBreakProperty.txt has the form: -// 05F0..05F2 ; Hebrew_Letter # Lo [3] HEBREW LIGATURE YIDDISH DOUBLE VAV..HEBREW LIGATURE YIDDISH DOUBLE YOD -// FB1D ; Hebrew_Letter # Lo HEBREW LETTER YOD WITH HIRIQ -func openReader(file string) (input io.ReadCloser) { - if *localFiles { - f, err := os.Open(file) - if err != nil { - log.Fatal(err) - } - input = f - } else { - path := *url + file - resp, err := http.Get(path) - if err != nil { - log.Fatal(err) - } - if resp.StatusCode != 200 { - log.Fatal("bad GET status for "+file, resp.Status) - } - input = resp.Body - } - return -} - -func loadUnicodeData(filename string, tests []test) []test { - f := openReader(filename) - defer f.Close() - bufioReader := bufio.NewReader(f) - line, err := bufioReader.ReadString('\n') - for err == nil { - tests = parseLine(line, tests) - line, err = bufioReader.ReadString('\n') - } - // if the err was EOF still need to process last value - if err == io.EOF { - tests = parseLine(line, tests) - } - return tests -} - -const comment = "#" -const brk = "÷" -const nbrk = "×" - -type test [][]byte - -func parseLine(line string, tests []test) []test { - if strings.HasPrefix(line, comment) { - return tests - } - line = strings.TrimSpace(line) - if len(line) == 0 { - return tests - } - commentStart := strings.Index(line, comment) - if commentStart > 0 { - line = line[0:commentStart] - } - pieces := strings.Split(line, brk) - t := make(test, 0) - for _, piece := range pieces { - piece = strings.TrimSpace(piece) - if len(piece) > 0 { - codePoints := strings.Split(piece, nbrk) - word := "" - for _, codePoint := range codePoints { - codePoint = strings.TrimSpace(codePoint) - r, err := strconv.ParseInt(codePoint, 16, 64) - if err != nil { - log.Printf("err: %v for '%s'", err, string(r)) - return tests - } - - word += string(r) - } - t = append(t, []byte(word)) - } - } - tests = append(tests, t) - return tests -} - -func generateTestTables(prefix string, tests []test) { - fmt.Fprintf(output, testHeader, prefix) - for _, t := range tests { - fmt.Fprintf(output, "\t\t{\n") - fmt.Fprintf(output, "\t\t\tinput: %#v,\n", bytes.Join(t, []byte{})) - fmt.Fprintf(output, "\t\t\toutput: %s,\n", generateTest(t)) - fmt.Fprintf(output, "\t\t},\n") - } - fmt.Fprintf(output, "}\n") -} - -func generateTest(t test) string { - rv := "[][]byte{" - for _, 
te := range t { - rv += fmt.Sprintf("%#v,", te) - } - rv += "}" - return rv -} - -const fileHeader = `// Generated by running -// maketesttables --url=%s -// DO NOT EDIT - -package textseg -` - -const testHeader = `var unicode%sTests = []struct { - input []byte - output [][]byte - }{ -` - -func setupOutput() { - output = bufio.NewWriter(startGofmt()) -} - -// startGofmt connects output to a gofmt process if -output is set. -func startGofmt() io.Writer { - if *outputFile == "" { - return os.Stdout - } - stdout, err := os.Create(*outputFile) - if err != nil { - log.Fatal(err) - } - // Pipe output to gofmt. - gofmt := exec.Command("gofmt") - fd, err := gofmt.StdinPipe() - if err != nil { - log.Fatal(err) - } - gofmt.Stdout = stdout - gofmt.Stderr = os.Stderr - err = gofmt.Start() - if err != nil { - log.Fatal(err) - } - return fd -} - -func flushOutput() { - err := output.Flush() - if err != nil { - log.Fatal(err) - } -} diff --git a/vendor/github.com/marstr/guid/LICENSE.txt b/vendor/github.com/apparentlymart/go-versions/LICENSE similarity index 95% rename from vendor/github.com/marstr/guid/LICENSE.txt rename to vendor/github.com/apparentlymart/go-versions/LICENSE index e18a0841a..83fe416ba 100644 --- a/vendor/github.com/marstr/guid/LICENSE.txt +++ b/vendor/github.com/apparentlymart/go-versions/LICENSE @@ -1,6 +1,6 @@ MIT License -Copyright (c) 2016 Martin Strobel +Copyright (c) 2018 Martin Atkins Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal @@ -18,4 +18,4 @@ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. \ No newline at end of file +SOFTWARE. diff --git a/vendor/github.com/apparentlymart/go-versions/versions/constraints/canon_style.go b/vendor/github.com/apparentlymart/go-versions/versions/constraints/canon_style.go new file mode 100644 index 000000000..aa5e87cb7 --- /dev/null +++ b/vendor/github.com/apparentlymart/go-versions/versions/constraints/canon_style.go @@ -0,0 +1,352 @@ +package constraints + +import ( + "fmt" + "strings" +) + +// Parse parses a constraint string using a syntax similar to that used by +// npm, Go "dep", Rust's "cargo", etc. Exact compatibility with any of these +// systems is not guaranteed, but instead we aim for familiarity in the choice +// of operators and their meanings. The syntax described here is considered the +// canonical syntax for this package, but a Ruby-style syntax is also offered +// via the function "ParseRubyStyle". +// +// A constraint string is a sequence of selection sets delimited by ||, with +// each selection set being a whitespace-delimited sequence of selections. +// Each selection is then the combination of a matching operator and a boundary +// version. 
The following is an example of a complex constraint string +// illustrating all of these features: +// +// >=1.0.0 <2.0.0 || 1.0.0-beta1 || =2.0.2 +// +// In practice constraint strings are usually simpler than this, but this +// complex example allows us to identify each of the parts by example: +// +// Selection Sets: ">=1.0.0 <2.0.0" +// "1.0.0-beta1" +// "=2.0.2" +// Selections: ">=1.0.0" +// "<2.0.0" +// "1.0.0-beta1" +// "=2.0.2" +// Matching Operators: ">=", "<", "=" are explicit operators +// "1.0.0-beta1" has an implicit "=" operator +// Boundary Versions: "1.0.0", "2.0.0", "1.0.0-beta1", "2.0.2" +// +// A constraint string describes the members of a version set by adding exact +// versions or ranges of versions to that set. A version is in the set if +// any one of the selection sets match that version. A selection set matches +// a version if all of its selections match that version. A selection matches +// a version if the version has the indicated relationship with the given +// boundary version. +// +// In the above example, the first selection set matches all released versions +// whose major segment is 1, since both selections must apply. However, the +// remaining two selection sets describe two specific versions outside of that +// range that are also admitted, in addition to those in the indicated range. +// +// The available matching operators are: +// +// < Less than +// <= Less than or equal +// > Greater than +// >= Greater than or equal +// = Equal +// ! Not equal +// ~ Greater than with implied upper limit (described below) +// ^ Greater than excluding new major releases (described below) +// +// If no operator is specified, the operator is implied to be "equal" for a +// full version specification, or a special additional "match" operator for +// a version containing wildcards as described below. +// +// The "~" matching operator is a shorthand for expressing both a lower and +// upper limit within a single expression. The effect of this operator depends +// on how many segments are specified in the boundary version: if only one +// segment is specified then new minor and patch versions are accepted, whereas +// if two or three segments are specified then only patch versions are accepted. +// For example: +// +// ~1 is equivalent to >=1.0.0 <2.0.0 +// ~1.0 is equivalent to >=1.0.0 <1.1.0 +// ~1.2 is equivalent to >=1.2.0 <1.3.0 +// ~1.2.0 is equivalent to >=1.2.0 <1.3.0 +// ~1.2.3 is equivalent to >=1.2.3 <1.3.0 +// +// The "^" matching operator is similar to "~" except that it always constrains +// only the major version number. It has an additional special behavior for +// when the major version number is zero: in that case, the minor release +// number is constrained, reflecting the common semver convention that initial +// development releases mark breaking changes by incrementing the minor version. +// For example: +// +// ^1 is equivalent to >=1.0.0 <2.0.0 +// ^1.2 is equivalent to >=1.2.0 <2.0.0 +// ^1.2.3 is equivalent to >=1.2.3 <2.0.0 +// ^0.1.0 is equivalent to >=0.1.0 <0.2.0 +// ^0.1.2 is equivalent to >=0.1.2 <0.2.0 +// +// The boundary version can contain wildcards for the major, minor or patch +// segments, which are specified using the markers "*", "x", or "X". 
When used +// in a selection with no explicit operator, these specify the implied "match" +// operator and define ranges with similar meaning to the "~" and "^" operators: +// +// 1.* is equivalent to >=1.0.0 <2.0.0 +// 1.*.* is equivalent to >=1.0.0 <2.0.0 +// 1.0.* is equivalent to >=1.0.0 <1.1.0 +// +// When wildcards are used, the first segment specified as a wildcard implies +// that all of the following segments are also wildcards. A version +// specification like "1.*.2" is invalid, because a wildcard minor version +// implies that the patch version must also be a wildcard. +// +// Wildcards have no special meaning when used with explicit operators, and so +// they are merely replaced with zeros in such cases. +// +// Explicit range syntax using a hyphen creates inclusive upper and lower +// bounds: +// +// 1.0.0 - 2.0.0 is equivalent to >=1.0.0 <=2.0.0 +// 1.2.3 - 2.3.4 is equivalent to >=1.2.3 <=2.3.4 +// +// Requests of exact pre-release versions with the equals operator have +// no special meaning to the constraint parser, but are interpreted as explicit +// requests for those versions when interpreted by the MeetingConstraints +// function (and related functions) in the "versions" package, in the parent +// directory. Pre-release versions that are not explicitly requested are +// excluded from selection so that e.g. "^1.0.0" will not match a version +// "2.0.0-beta.1". +// +// The result is always a UnionSpec, whose members are IntersectionSpecs +// each describing one selection set. In the common case where a string +// contains only one selection, both the UnionSpec and the IntersectionSpec +// will have only one element and can thus be effectively ignored by the +// caller. (Union and intersection of single sets are both no-op.) +// A valid string must contain at least one selection; if an empty selection +// is to be considered as either "no versions" or "all versions" then this +// special case must be handled by the caller prior to calling this function. +// +// If there are syntax errors or ambiguities in the provided string then an +// error is returned. All errors returned by this function are suitable for +// display to English-speaking end-users, and avoid any Go-specific +// terminology. +func Parse(str string) (UnionSpec, error) { + str = strings.TrimSpace(str) + + if str == "" { + return nil, fmt.Errorf("empty specification") + } + + // Most constraint strings contain only one selection, so we'll + // allocate under that assumption and re-allocate if needed. + uspec := make(UnionSpec, 0, 1) + ispec := make(IntersectionSpec, 0, 1) + + remain := str + for { + var selection SelectionSpec + var err error + selection, remain, err = parseSelection(remain) + if err != nil { + return nil, err + } + + remain = strings.TrimSpace(remain) + + if len(remain) > 0 && remain[0] == '-' { + // Looks like user wants to make a range expression, so we'll + // look for another selection. 
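+			// For example, "1.0.0 - 2.0.0" becomes the selections
+			// ">=1.0.0" and "<=2.0.0", per the range rules in the doc comment.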
+ remain = strings.TrimSpace(remain[1:]) + if remain == "" { + return nil, fmt.Errorf(`operator "-" must be followed by another version selection to specify the upper limit of the range`) + } + + var lower, upper SelectionSpec + lower = selection + upper, remain, err = parseSelection(remain) + remain = strings.TrimSpace(remain) + if err != nil { + return nil, err + } + + if lower.Operator != OpUnconstrained { + return nil, fmt.Errorf(`lower bound of range specified with "-" operator must be an exact version`) + } + if upper.Operator != OpUnconstrained { + return nil, fmt.Errorf(`upper bound of range specified with "-" operator must be an exact version`) + } + + lower.Operator = OpGreaterThanOrEqual + lower.Boundary = lower.Boundary.ConstrainToZero() + if upper.Boundary.IsExact() { + upper.Operator = OpLessThanOrEqual + } else { + upper.Operator = OpLessThan + upper.Boundary = upper.Boundary.ConstrainToUpperBound() + } + ispec = append(ispec, lower, upper) + } else { + if selection.Operator == OpUnconstrained { + // Select a default operator based on whether the version + // specification contains wildcards. + if selection.Boundary.IsExact() { + selection.Operator = OpEqual + } else { + selection.Operator = OpMatch + } + } + if selection.Operator != OpMatch { + switch selection.Operator { + case OpMatch: + // nothing to do + case OpLessThanOrEqual: + if !selection.Boundary.IsExact() { + selection.Operator = OpLessThan + selection.Boundary = selection.Boundary.ConstrainToUpperBound() + } + case OpGreaterThan: + if !selection.Boundary.IsExact() { + // If "greater than" has an imprecise boundary then we'll + // turn it into a "greater than or equal to" and use the + // upper bound of the boundary, so e.g.: + // >1.*.* means >=2.0.0, because that's greater than + // everything matched by 1.*.*. + selection.Operator = OpGreaterThanOrEqual + selection.Boundary = selection.Boundary.ConstrainToUpperBound() + } + default: + selection.Boundary = selection.Boundary.ConstrainToZero() + } + } + ispec = append(ispec, selection) + } + + if len(remain) == 0 { + // All done! + break + } + + if remain[0] == ',' { + return nil, fmt.Errorf(`commas are not needed to separate version selections; separate with spaces instead`) + } + + if remain[0] == '|' { + if !strings.HasPrefix(remain, "||") { + // User was probably trying for "||", so we'll produce a specialized error + return nil, fmt.Errorf(`single "|" is not a valid operator; did you mean "||" to specify an alternative?`) + } + remain = strings.TrimSpace(remain[2:]) + if remain == "" { + return nil, fmt.Errorf(`operator "||" must be followed by another version selection`) + } + + // Begin a new IntersectionSpec, added to our single UnionSpec + uspec = append(uspec, ispec) + ispec = make(IntersectionSpec, 0, 1) + } + } + + uspec = append(uspec, ispec) + + return uspec, nil +} + +// parseSelection parses one canon-style selection from the prefix of the +// given string, returning the result along with the remaining unconsumed +// string for the caller to use for further processing. +func parseSelection(str string) (SelectionSpec, string, error) { + raw, remain := scanConstraint(str) + var spec SelectionSpec + + if len(str) == len(remain) { + if len(remain) > 0 && remain[0] == 'v' { + // User seems to be trying to use a "v" prefix, like "v1.0.0" + return spec, remain, fmt.Errorf(`a "v" prefix should not be used when specifying versions`) + } + + // If we made no progress at all then the selection must be entirely invalid. 
+ return spec, remain, fmt.Errorf("the sequence %q is not valid", remain) + } + + switch raw.op { + case "": + // We'll deal with this situation in the caller + spec.Operator = OpUnconstrained + case "=": + spec.Operator = OpEqual + case "!": + spec.Operator = OpNotEqual + case ">": + spec.Operator = OpGreaterThan + case ">=": + spec.Operator = OpGreaterThanOrEqual + case "<": + spec.Operator = OpLessThan + case "<=": + spec.Operator = OpLessThanOrEqual + case "~": + if raw.numCt > 1 { + spec.Operator = OpGreaterThanOrEqualPatchOnly + } else { + spec.Operator = OpGreaterThanOrEqualMinorOnly + } + case "^": + if len(raw.nums[0]) > 0 && raw.nums[0][0] == '0' { + // Special case for major version 0, which is initial development: + // we treat the minor number as if it's the major number. + spec.Operator = OpGreaterThanOrEqualPatchOnly + } else { + spec.Operator = OpGreaterThanOrEqualMinorOnly + } + case "=<": + return spec, remain, fmt.Errorf("invalid constraint operator %q; did you mean \"<=\"?", raw.op) + case "=>": + return spec, remain, fmt.Errorf("invalid constraint operator %q; did you mean \">=\"?", raw.op) + default: + return spec, remain, fmt.Errorf("invalid constraint operator %q", raw.op) + } + + if raw.sep != "" { + return spec, remain, fmt.Errorf("no spaces allowed after operator %q", raw.op) + } + + if raw.numCt > 3 { + return spec, remain, fmt.Errorf("too many numbered portions; only three are allowed (major, minor, patch)") + } + + // Unspecified portions are either zero or wildcard depending on whether + // any explicit wildcards are present. + seenWild := false + for i, s := range raw.nums { + switch { + case isWildcardNum(s): + seenWild = true + case i >= raw.numCt: + if seenWild { + raw.nums[i] = "*" + } else { + raw.nums[i] = "0" + } + default: + // If we find a non-wildcard after we've already seen a wildcard + // then this specification is inconsistent, which is an error. + if seenWild { + return spec, remain, fmt.Errorf("can't use exact %s segment after a previous segment was wildcard", rawNumNames[i]) + } + } + } + + if seenWild { + if raw.pre != "" { + return spec, remain, fmt.Errorf(`can't use prerelease segment (introduced by "-") in a version with wildcards`) + } + if raw.meta != "" { + return spec, remain, fmt.Errorf(`can't use build metadata segment (introduced by "+") in a version with wildcards`) + } + } + + spec.Boundary = raw.VersionSpec() + + return spec, remain, nil +} diff --git a/vendor/github.com/apparentlymart/go-versions/versions/constraints/constraintdepth_string.go b/vendor/github.com/apparentlymart/go-versions/versions/constraints/constraintdepth_string.go new file mode 100644 index 000000000..0f808f912 --- /dev/null +++ b/vendor/github.com/apparentlymart/go-versions/versions/constraints/constraintdepth_string.go @@ -0,0 +1,16 @@ +// Code generated by "stringer -type ConstraintDepth"; DO NOT EDIT. 
+
+package constraints
+
+import "strconv"
+
+const _ConstraintDepth_name = "UnconstrainedConstrainedMajorConstrainedMinorConstrainedPatch"
+
+var _ConstraintDepth_index = [...]uint8{0, 13, 29, 45, 61}
+
+func (i ConstraintDepth) String() string {
+	if i < 0 || i >= ConstraintDepth(len(_ConstraintDepth_index)-1) {
+		return "ConstraintDepth(" + strconv.FormatInt(int64(i), 10) + ")"
+	}
+	return _ConstraintDepth_name[_ConstraintDepth_index[i]:_ConstraintDepth_index[i+1]]
+}
diff --git a/vendor/github.com/apparentlymart/go-versions/versions/constraints/doc.go b/vendor/github.com/apparentlymart/go-versions/versions/constraints/doc.go
new file mode 100644
index 000000000..17b8b90f2
--- /dev/null
+++ b/vendor/github.com/apparentlymart/go-versions/versions/constraints/doc.go
@@ -0,0 +1,13 @@
+// Package constraints contains a high-level representation of version
+// constraints that retains enough information for direct analysis and
+// serialization as a string.
+//
+// The package also contains parsers to produce that representation from
+// various compact constraint specification formats.
+//
+// The main "versions" package, available in the parent directory, can consume
+// the high-level constraint representation from this package to construct
+// a version set that contains all versions meeting the given constraints.
+// Package "constraints" does not contain any functionality for checking versions
+// against constraints since that is provided by package "versions".
+package constraints
diff --git a/vendor/github.com/apparentlymart/go-versions/versions/constraints/raw.go b/vendor/github.com/apparentlymart/go-versions/versions/constraints/raw.go
new file mode 100644
index 000000000..fdea80a35
--- /dev/null
+++ b/vendor/github.com/apparentlymart/go-versions/versions/constraints/raw.go
@@ -0,0 +1,74 @@
+package constraints
+
+import (
+	"strconv"
+)
+
+//go:generate ragel -G1 -Z raw_scan.rl
+//go:generate gofmt -w raw_scan.go
+
+// rawConstraint is a tokenization of a constraint string, used internally
+// as the first layer of parsing.
+type rawConstraint struct {
+	op    string
+	sep   string
+	nums  [3]string
+	numCt int
+	pre   string
+	meta  string
+}
+
+// VersionSpec turns the receiver into a VersionSpec in a reasonable
+// default way. This method assumes that the raw constraint was already
+// validated, and will panic or produce undefined results if it contains
+// anything invalid.
+//
+// In particular, numbers are automatically marked as unconstrained if they
+// are omitted or set to wildcards, so the caller must apply any additional
+// validation rules on the usage of unconstrained numbers before calling.
+func (raw rawConstraint) VersionSpec() VersionSpec {
+	return VersionSpec{
+		Major:      parseRawNumConstraint(raw.nums[0]),
+		Minor:      parseRawNumConstraint(raw.nums[1]),
+		Patch:      parseRawNumConstraint(raw.nums[2]),
+		Prerelease: raw.pre,
+		Metadata:   raw.meta,
+	}
+}
+
+var rawNumNames = [...]string{"major", "minor", "patch"}
+
+func isWildcardNum(s string) bool {
+	switch s {
+	case "*", "x", "X":
+		return true
+	default:
+		return false
+	}
+}
+
+// parseRawNum parses a raw number string which the caller has already
+// determined is non-empty and non-wildcard. If the string is not numeric
+// then this function will panic.
+func parseRawNum(s string) uint64 {
+	v, err := strconv.ParseUint(s, 10, 64)
+	if err != nil {
+		panic(err)
+	}
+	return v
+}
+
+// parseRawNumConstraint parses a raw number into a NumConstraint, setting it
+// to unconstrained if the value is empty or a wildcard.
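+// For example, "" and "*" each yield NumConstraint{Unconstrained: true},
+// while "2" yields NumConstraint{Num: 2}.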
+func parseRawNumConstraint(s string) NumConstraint { + switch { + case s == "" || isWildcardNum(s): + return NumConstraint{ + Unconstrained: true, + } + default: + return NumConstraint{ + Num: parseRawNum(s), + } + } +} diff --git a/vendor/github.com/apparentlymart/go-versions/versions/constraints/raw_scan.go b/vendor/github.com/apparentlymart/go-versions/versions/constraints/raw_scan.go new file mode 100644 index 000000000..2f7a81c47 --- /dev/null +++ b/vendor/github.com/apparentlymart/go-versions/versions/constraints/raw_scan.go @@ -0,0 +1,623 @@ +// line 1 "raw_scan.rl" +// This file is generated from raw_scan.rl. DO NOT EDIT. + +// line 5 "raw_scan.rl" + +package constraints + +// line 12 "raw_scan.go" +var _scan_eof_actions []byte = []byte{ + 0, 1, 1, 7, 9, 9, 9, 11, + 14, 15, 11, +} + +const scan_start int = 1 +const scan_first_final int = 7 +const scan_error int = 0 + +const scan_en_main int = 1 + +// line 11 "raw_scan.rl" + +func scanConstraint(data string) (rawConstraint, string) { + var constraint rawConstraint + var numIdx int + var extra string + + // Ragel state + p := 0 // "Pointer" into data + pe := len(data) // End-of-data "pointer" + cs := 0 // constraint state (will be initialized by ragel-generated code) + ts := 0 + te := 0 + eof := pe + + // Keep Go compiler happy even if generated code doesn't use these + _ = ts + _ = te + _ = eof + + // line 47 "raw_scan.go" + { + cs = scan_start + } + + // line 52 "raw_scan.go" + { + if p == pe { + goto _test_eof + } + if cs == 0 { + goto _out + } + _resume: + switch cs { + case 1: + switch data[p] { + case 32: + goto tr1 + case 42: + goto tr2 + case 46: + goto tr3 + case 88: + goto tr2 + case 120: + goto tr2 + } + switch { + case data[p] < 48: + if 9 <= data[p] && data[p] <= 13 { + goto tr1 + } + case data[p] > 57: + switch { + case data[p] > 90: + if 97 <= data[p] && data[p] <= 122 { + goto tr3 + } + case data[p] >= 65: + goto tr3 + } + default: + goto tr4 + } + goto tr0 + case 2: + switch data[p] { + case 32: + goto tr6 + case 42: + goto tr7 + case 46: + goto tr3 + case 88: + goto tr7 + case 120: + goto tr7 + } + switch { + case data[p] < 48: + if 9 <= data[p] && data[p] <= 13 { + goto tr6 + } + case data[p] > 57: + switch { + case data[p] > 90: + if 97 <= data[p] && data[p] <= 122 { + goto tr3 + } + case data[p] >= 65: + goto tr3 + } + default: + goto tr8 + } + goto tr5 + case 3: + switch data[p] { + case 32: + goto tr10 + case 42: + goto tr11 + case 88: + goto tr11 + case 120: + goto tr11 + } + switch { + case data[p] > 13: + if 48 <= data[p] && data[p] <= 57 { + goto tr12 + } + case data[p] >= 9: + goto tr10 + } + goto tr9 + case 0: + goto _out + case 7: + switch data[p] { + case 43: + goto tr19 + case 45: + goto tr20 + case 46: + goto tr21 + } + goto tr18 + case 4: + switch { + case data[p] < 48: + if 45 <= data[p] && data[p] <= 46 { + goto tr14 + } + case data[p] > 57: + switch { + case data[p] > 90: + if 97 <= data[p] && data[p] <= 122 { + goto tr14 + } + case data[p] >= 65: + goto tr14 + } + default: + goto tr14 + } + goto tr13 + case 8: + switch { + case data[p] < 48: + if 45 <= data[p] && data[p] <= 46 { + goto tr14 + } + case data[p] > 57: + switch { + case data[p] > 90: + if 97 <= data[p] && data[p] <= 122 { + goto tr14 + } + case data[p] >= 65: + goto tr14 + } + default: + goto tr14 + } + goto tr22 + case 5: + switch { + case data[p] < 48: + if 45 <= data[p] && data[p] <= 46 { + goto tr15 + } + case data[p] > 57: + switch { + case data[p] > 90: + if 97 <= data[p] && data[p] <= 122 { + goto tr15 + } + case data[p] >= 
65: + goto tr15 + } + default: + goto tr15 + } + goto tr13 + case 9: + if data[p] == 43 { + goto tr24 + } + switch { + case data[p] < 48: + if 45 <= data[p] && data[p] <= 46 { + goto tr15 + } + case data[p] > 57: + switch { + case data[p] > 90: + if 97 <= data[p] && data[p] <= 122 { + goto tr15 + } + case data[p] >= 65: + goto tr15 + } + default: + goto tr15 + } + goto tr23 + case 6: + switch data[p] { + case 42: + goto tr16 + case 88: + goto tr16 + case 120: + goto tr16 + } + if 48 <= data[p] && data[p] <= 57 { + goto tr17 + } + goto tr13 + case 10: + switch data[p] { + case 43: + goto tr19 + case 45: + goto tr20 + case 46: + goto tr21 + } + if 48 <= data[p] && data[p] <= 57 { + goto tr25 + } + goto tr18 + } + + tr3: + cs = 0 + goto f0 + tr9: + cs = 0 + goto f6 + tr13: + cs = 0 + goto f8 + tr18: + cs = 0 + goto f10 + tr22: + cs = 0 + goto f13 + tr23: + cs = 0 + goto f14 + tr5: + cs = 2 + goto _again + tr0: + cs = 2 + goto f1 + tr10: + cs = 3 + goto _again + tr1: + cs = 3 + goto f2 + tr6: + cs = 3 + goto f4 + tr19: + cs = 4 + goto f11 + tr24: + cs = 4 + goto f15 + tr20: + cs = 5 + goto f11 + tr21: + cs = 6 + goto f12 + tr2: + cs = 7 + goto f3 + tr7: + cs = 7 + goto f5 + tr11: + cs = 7 + goto f7 + tr16: + cs = 7 + goto f9 + tr14: + cs = 8 + goto _again + tr15: + cs = 9 + goto _again + tr25: + cs = 10 + goto _again + tr4: + cs = 10 + goto f3 + tr8: + cs = 10 + goto f5 + tr12: + cs = 10 + goto f7 + tr17: + cs = 10 + goto f9 + + f9: + // line 38 "raw_scan.rl" + + ts = p + + goto _again + f12: + // line 52 "raw_scan.rl" + + te = p + constraint.numCt++ + if numIdx < len(constraint.nums) { + constraint.nums[numIdx] = data[ts:p] + numIdx++ + } + + goto _again + f8: + // line 71 "raw_scan.rl" + + extra = data[p:] + + goto _again + f1: + // line 33 "raw_scan.rl" + + numIdx = 0 + constraint = rawConstraint{} + + // line 38 "raw_scan.rl" + + ts = p + + goto _again + f4: + // line 42 "raw_scan.rl" + + te = p + constraint.op = data[ts:p] + + // line 38 "raw_scan.rl" + + ts = p + + goto _again + f7: + // line 47 "raw_scan.rl" + + te = p + constraint.sep = data[ts:p] + + // line 38 "raw_scan.rl" + + ts = p + + goto _again + f6: + // line 47 "raw_scan.rl" + + te = p + constraint.sep = data[ts:p] + + // line 71 "raw_scan.rl" + + extra = data[p:] + + goto _again + f11: + // line 52 "raw_scan.rl" + + te = p + constraint.numCt++ + if numIdx < len(constraint.nums) { + constraint.nums[numIdx] = data[ts:p] + numIdx++ + } + + // line 38 "raw_scan.rl" + + ts = p + + goto _again + f10: + // line 52 "raw_scan.rl" + + te = p + constraint.numCt++ + if numIdx < len(constraint.nums) { + constraint.nums[numIdx] = data[ts:p] + numIdx++ + } + + // line 71 "raw_scan.rl" + + extra = data[p:] + + goto _again + f15: + // line 61 "raw_scan.rl" + + te = p + constraint.pre = data[ts+1 : p] + + // line 38 "raw_scan.rl" + + ts = p + + goto _again + f14: + // line 61 "raw_scan.rl" + + te = p + constraint.pre = data[ts+1 : p] + + // line 71 "raw_scan.rl" + + extra = data[p:] + + goto _again + f13: + // line 66 "raw_scan.rl" + + te = p + constraint.meta = data[ts+1 : p] + + // line 71 "raw_scan.rl" + + extra = data[p:] + + goto _again + f2: + // line 33 "raw_scan.rl" + + numIdx = 0 + constraint = rawConstraint{} + + // line 38 "raw_scan.rl" + + ts = p + + // line 42 "raw_scan.rl" + + te = p + constraint.op = data[ts:p] + + goto _again + f5: + // line 42 "raw_scan.rl" + + te = p + constraint.op = data[ts:p] + + // line 38 "raw_scan.rl" + + ts = p + + // line 47 "raw_scan.rl" + + te = p + constraint.sep = data[ts:p] + + goto _again + f0: 
+ // line 42 "raw_scan.rl" + + te = p + constraint.op = data[ts:p] + + // line 47 "raw_scan.rl" + + te = p + constraint.sep = data[ts:p] + + // line 71 "raw_scan.rl" + + extra = data[p:] + + goto _again + f3: + // line 33 "raw_scan.rl" + + numIdx = 0 + constraint = rawConstraint{} + + // line 38 "raw_scan.rl" + + ts = p + + // line 42 "raw_scan.rl" + + te = p + constraint.op = data[ts:p] + + // line 47 "raw_scan.rl" + + te = p + constraint.sep = data[ts:p] + + goto _again + + _again: + if cs == 0 { + goto _out + } + if p++; p != pe { + goto _resume + } + _test_eof: + { + } + if p == eof { + switch _scan_eof_actions[cs] { + case 9: + // line 71 "raw_scan.rl" + + extra = data[p:] + + case 7: + // line 47 "raw_scan.rl" + + te = p + constraint.sep = data[ts:p] + + // line 71 "raw_scan.rl" + + extra = data[p:] + + case 11: + // line 52 "raw_scan.rl" + + te = p + constraint.numCt++ + if numIdx < len(constraint.nums) { + constraint.nums[numIdx] = data[ts:p] + numIdx++ + } + + // line 71 "raw_scan.rl" + + extra = data[p:] + + case 15: + // line 61 "raw_scan.rl" + + te = p + constraint.pre = data[ts+1 : p] + + // line 71 "raw_scan.rl" + + extra = data[p:] + + case 14: + // line 66 "raw_scan.rl" + + te = p + constraint.meta = data[ts+1 : p] + + // line 71 "raw_scan.rl" + + extra = data[p:] + + case 1: + // line 42 "raw_scan.rl" + + te = p + constraint.op = data[ts:p] + + // line 47 "raw_scan.rl" + + te = p + constraint.sep = data[ts:p] + + // line 71 "raw_scan.rl" + + extra = data[p:] + + // line 610 "raw_scan.go" + } + } + + _out: + { + } + } + + // line 92 "raw_scan.rl" + + return constraint, extra +} diff --git a/vendor/github.com/apparentlymart/go-versions/versions/constraints/raw_scan.rl b/vendor/github.com/apparentlymart/go-versions/versions/constraints/raw_scan.rl new file mode 100644 index 000000000..da2151da9 --- /dev/null +++ b/vendor/github.com/apparentlymart/go-versions/versions/constraints/raw_scan.rl @@ -0,0 +1,95 @@ +// This file is generated from raw_scan.rl. DO NOT EDIT. +%%{ + # (except you are actually in raw_scan.rl here, so edit away!) + machine scan; +}%% + +package constraints + +%%{ + write data; +}%% + +func scanConstraint(data string) (rawConstraint, string) { + var constraint rawConstraint + var numIdx int + var extra string + + // Ragel state + p := 0 // "Pointer" into data + pe := len(data) // End-of-data "pointer" + cs := 0 // constraint state (will be initialized by ragel-generated code) + ts := 0 + te := 0 + eof := pe + + // Keep Go compiler happy even if generated code doesn't use these + _ = ts + _ = te + _ = eof + + %%{ + + action enterConstraint { + numIdx = 0 + constraint = rawConstraint{} + } + + action ts { + ts = p + } + + action finishOp { + te = p + constraint.op = data[ts:p] + } + + action finishSep { + te = p + constraint.sep = data[ts:p] + } + + action finishNum { + te = p + constraint.numCt++ + if numIdx < len(constraint.nums) { + constraint.nums[numIdx] = data[ts:p] + numIdx++ + } + } + + action finishPre { + te = p + constraint.pre = data[ts+1:p] + } + + action finishMeta { + te = p + constraint.meta = data[ts+1:p] + } + + action finishExtra { + extra = data[p:] + } + + num = (digit+ | '*' | 'x' | 'X') >ts %finishNum %err(finishNum) %eof(finishNum); + + op = ((any - (digit | space | alpha | '.' | '*'))**) >ts %finishOp %err(finishOp) %eof(finishOp); + likelyOp = ('^' | '>' | '<' | '-' | '~' | '!'); + sep = (space**) >ts %finishSep %err(finishSep) %eof(finishSep); + nums = (num ('.' num)*); + extraStr = (alnum | '.' 
| '-')+; + pre = ('-' extraStr) >ts %finishPre %err(finishPre) %eof(finishPre); + meta = ('+' extraStr) >ts %finishMeta %err(finishMeta) %eof(finishMeta); + + constraint = (op sep nums pre? meta?) >enterConstraint; + + main := (constraint) @/finishExtra %/finishExtra $!finishExtra; + + write init; + write exec; + + }%% + + return constraint, extra +} diff --git a/vendor/github.com/apparentlymart/go-versions/versions/constraints/ruby_style.go b/vendor/github.com/apparentlymart/go-versions/versions/constraints/ruby_style.go new file mode 100644 index 000000000..59b2ecd34 --- /dev/null +++ b/vendor/github.com/apparentlymart/go-versions/versions/constraints/ruby_style.go @@ -0,0 +1,181 @@ +package constraints + +import ( + "fmt" + "strings" +) + +// ParseRubyStyle parses a single selection constraint using a syntax similar +// to that used by rubygems and other Ruby tools. +// +// Exact compatibility with rubygems is not guaranteed; "ruby-style" here +// just means that users familiar with rubygems should find familiar the choice +// of operators and their meanings. +// +// ParseRubyStyle parses only a single specification, mimicking the usual +// rubygems approach of providing each selection as a separate string. +// The result can be combined with other results to create an IntersectionSpec +// that describes the effect of multiple such constraints. +func ParseRubyStyle(str string) (SelectionSpec, error) { + if strings.TrimSpace(str) == "" { + return SelectionSpec{}, fmt.Errorf("empty specification") + } + spec, remain, err := parseRubyStyle(str) + if err != nil { + return spec, err + } + if remain != "" { + remain = strings.TrimSpace(remain) + switch { + case remain == "": + return spec, fmt.Errorf("extraneous spaces at end of specification") + case strings.HasPrefix(remain, "v"): + // User seems to be trying to use a "v" prefix, like "v1.0.0" + return spec, fmt.Errorf(`a "v" prefix should not be used`) + case strings.HasPrefix(remain, "||") || strings.HasPrefix(remain, ","): + // User seems to be trying to specify multiple constraints + return spec, fmt.Errorf(`only one constraint may be specified`) + case strings.HasPrefix(remain, "-"): + // User seems to be trying to use npm-style range constraints + return spec, fmt.Errorf(`range constraints are not supported`) + default: + return spec, fmt.Errorf("invalid characters %q", remain) + } + } + + return spec, nil +} + +// ParseRubyStyleAll is a helper wrapper around ParseRubyStyle that accepts +// multiple selection strings and combines them together into a single +// IntersectionSpec. +func ParseRubyStyleAll(strs ...string) (IntersectionSpec, error) { + spec := make(IntersectionSpec, 0, len(strs)) + for _, str := range strs { + subSpec, err := ParseRubyStyle(str) + if err != nil { + return nil, fmt.Errorf("invalid specification %q: %s", str, err) + } + spec = append(spec, subSpec) + } + return spec, nil +} + +// ParseRubyStyleMulti is similar to ParseRubyStyle, but rather than parsing +// only a single selection specification it instead expects one or more +// comma-separated specifications, returning the result as an +// IntersectionSpec. 
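+//
+// For example, ParseRubyStyleMulti(">= 2.0, < 3.0") yields the same
+// IntersectionSpec as ParseRubyStyleAll(">= 2.0", "< 3.0").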
+func ParseRubyStyleMulti(str string) (IntersectionSpec, error) { + var spec IntersectionSpec + remain := strings.TrimSpace(str) + for remain != "" { + if strings.TrimSpace(remain) == "" { + break + } + + var subSpec SelectionSpec + var err error + var newRemain string + subSpec, newRemain, err = parseRubyStyle(remain) + consumed := remain[:len(remain)-len(newRemain)] + if err != nil { + return nil, fmt.Errorf("invalid specification %q: %s", consumed, err) + } + remain = strings.TrimSpace(newRemain) + + if remain != "" { + if !strings.HasPrefix(remain, ",") { + return nil, fmt.Errorf("missing comma after %q", consumed) + } + // Eat the separator comma + remain = strings.TrimSpace(remain[1:]) + } + + spec = append(spec, subSpec) + } + + return spec, nil +} + +// parseRubyStyle parses a ruby-style constraint from the prefix of the given +// string and returns the remaining unconsumed string for the caller to use +// for further processing. +func parseRubyStyle(str string) (SelectionSpec, string, error) { + raw, remain := scanConstraint(str) + var spec SelectionSpec + + switch raw.op { + case "=", "": + spec.Operator = OpEqual + case "!=": + spec.Operator = OpNotEqual + case ">": + spec.Operator = OpGreaterThan + case ">=": + spec.Operator = OpGreaterThanOrEqual + case "<": + spec.Operator = OpLessThan + case "<=": + spec.Operator = OpLessThanOrEqual + case "~>": + // Ruby-style pessimistic can be either a minor-only or patch-only + // constraint, depending on how many digits were given. + switch raw.numCt { + case 3: + spec.Operator = OpGreaterThanOrEqualPatchOnly + default: + spec.Operator = OpGreaterThanOrEqualMinorOnly + } + case "=<": + return spec, remain, fmt.Errorf("invalid constraint operator %q; did you mean \"<=\"?", raw.op) + case "=>": + return spec, remain, fmt.Errorf("invalid constraint operator %q; did you mean \">=\"?", raw.op) + default: + return spec, remain, fmt.Errorf("invalid constraint operator %q", raw.op) + } + + switch raw.sep { + case "": + if raw.op != "" { + return spec, remain, fmt.Errorf("a space separator is required after the operator %q", raw.op) + } + case " ": + if raw.op == "" { + return spec, remain, fmt.Errorf("extraneous spaces at start of specification") + } + default: + if raw.op == "" { + return spec, remain, fmt.Errorf("extraneous spaces at start of specification") + } else { + return spec, remain, fmt.Errorf("only one space is expected after the operator %q", raw.op) + } + } + + if raw.numCt > 3 { + return spec, remain, fmt.Errorf("too many numbered portions; only three are allowed (major, minor, patch)") + } + + // Ruby-style doesn't use explicit wildcards + for i, s := range raw.nums { + switch { + case isWildcardNum(s): + // Can't use wildcards in an exact specification + return spec, remain, fmt.Errorf("can't use wildcard for %s number; omit segments that should be unconstrained", rawNumNames[i]) + } + } + + if raw.pre != "" || raw.meta != "" { + // If either the prerelease or meta portions are set then any unconstrained + // segments are implied to be zero in order to guarantee constraint + // consistency. 
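+		// For example, ">= 2.0-beta1" is treated as ">= 2.0.0-beta1", so the
+		// boundary version is exactly 2.0.0-beta1.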
+ for i, s := range raw.nums { + if s == "" { + raw.nums[i] = "0" + } + } + } + + spec.Boundary = raw.VersionSpec() + + return spec, remain, nil +} diff --git a/vendor/github.com/apparentlymart/go-versions/versions/constraints/selectionop_string.go b/vendor/github.com/apparentlymart/go-versions/versions/constraints/selectionop_string.go new file mode 100644 index 000000000..e3c2b129c --- /dev/null +++ b/vendor/github.com/apparentlymart/go-versions/versions/constraints/selectionop_string.go @@ -0,0 +1,43 @@ +// Code generated by "stringer -type SelectionOp"; DO NOT EDIT. + +package constraints + +import "strconv" + +const ( + _SelectionOp_name_0 = "OpUnconstrained" + _SelectionOp_name_1 = "OpMatch" + _SelectionOp_name_2 = "OpLessThanOpEqualOpGreaterThan" + _SelectionOp_name_3 = "OpGreaterThanOrEqualMinorOnly" + _SelectionOp_name_4 = "OpGreaterThanOrEqualPatchOnly" + _SelectionOp_name_5 = "OpNotEqual" + _SelectionOp_name_6 = "OpLessThanOrEqualOpGreaterThanOrEqual" +) + +var ( + _SelectionOp_index_2 = [...]uint8{0, 10, 17, 30} + _SelectionOp_index_6 = [...]uint8{0, 17, 37} +) + +func (i SelectionOp) String() string { + switch { + case i == 0: + return _SelectionOp_name_0 + case i == 42: + return _SelectionOp_name_1 + case 60 <= i && i <= 62: + i -= 60 + return _SelectionOp_name_2[_SelectionOp_index_2[i]:_SelectionOp_index_2[i+1]] + case i == 94: + return _SelectionOp_name_3 + case i == 126: + return _SelectionOp_name_4 + case i == 8800: + return _SelectionOp_name_5 + case 8804 <= i && i <= 8805: + i -= 8804 + return _SelectionOp_name_6[_SelectionOp_index_6[i]:_SelectionOp_index_6[i+1]] + default: + return "SelectionOp(" + strconv.FormatInt(int64(i), 10) + ")" + } +} diff --git a/vendor/github.com/apparentlymart/go-versions/versions/constraints/spec.go b/vendor/github.com/apparentlymart/go-versions/versions/constraints/spec.go new file mode 100644 index 000000000..2cce04555 --- /dev/null +++ b/vendor/github.com/apparentlymart/go-versions/versions/constraints/spec.go @@ -0,0 +1,249 @@ +package constraints + +import ( + "bytes" + "fmt" + "strconv" +) + +// Spec is an interface type that UnionSpec, IntersectionSpec, SelectionSpec, +// and VersionSpec all belong to. +// +// It's provided to allow generic code to be written that accepts and operates +// on all specs, but such code must still handle each type separately using +// e.g. a type switch. This is a closed type that will not have any new +// implementations added in future. +type Spec interface { + isSpec() +} + +// UnionSpec represents an "or" operation on nested version constraints. +// +// This is not directly representable in all of our supported constraint +// syntaxes. +type UnionSpec []IntersectionSpec + +func (s UnionSpec) isSpec() {} + +// IntersectionSpec represents an "and" operation on nested version constraints. +type IntersectionSpec []SelectionSpec + +func (s IntersectionSpec) isSpec() {} + +// SelectionSpec represents applying a single operator to a particular +// "boundary" version. +type SelectionSpec struct { + Boundary VersionSpec + Operator SelectionOp +} + +func (s SelectionSpec) isSpec() {} + +// VersionSpec represents the boundary within a SelectionSpec. +type VersionSpec struct { + Major NumConstraint + Minor NumConstraint + Patch NumConstraint + Prerelease string + Metadata string +} + +func (s VersionSpec) isSpec() {} + +// IsExact returns bool if all of the version numbers in the receiver are +// fully-constrained. 
+// This is the same as s.ConstraintDepth() == ConstrainedPatch.
+func (s VersionSpec) IsExact() bool {
+	return s.ConstraintDepth() == ConstrainedPatch
+}
+
+// ConstraintDepth returns the constraint depth of the receiver, which is
+// the most specific version number segment that is exactly constrained.
+//
+// The constraints must be consistent, which means that if a given segment
+// is unconstrained then all of the deeper segments must also be unconstrained.
+// If not, this method will panic. Version specs produced by the parsers in
+// this package are guaranteed to be consistent.
+func (s VersionSpec) ConstraintDepth() ConstraintDepth {
+	if s == (VersionSpec{}) {
+		// zero value is a degenerate case meaning completely unconstrained
+		return Unconstrained
+	}
+
+	switch {
+	case s.Major.Unconstrained:
+		if !(s.Minor.Unconstrained && s.Patch.Unconstrained && s.Prerelease == "" && s.Metadata == "") {
+			panic("inconsistent constraint depth")
+		}
+		return Unconstrained
+	case s.Minor.Unconstrained:
+		if !(s.Patch.Unconstrained && s.Prerelease == "" && s.Metadata == "") {
+			panic("inconsistent constraint depth")
+		}
+		return ConstrainedMajor
+	case s.Patch.Unconstrained:
+		if s.Prerelease != "" || s.Metadata != "" {
+			panic(fmt.Errorf("inconsistent constraint depth: wildcard patch followed by prerelease %q and metadata %q", s.Prerelease, s.Metadata))
+		}
+		return ConstrainedMinor
+	default:
+		return ConstrainedPatch
+	}
+}
+
+// ConstraintBounds returns two exact VersionSpecs that represent the lower
+// and upper bounds of the possibly-inexact receiver. If the receiver
+// is already exact then the two bounds are identical and have operator
+// OpEqual. If they are different then the lower bound is OpGreaterThanOrEqual
+// and the upper bound is OpLessThan.
+//
+// As a special case, if the version spec is entirely unconstrained the
+// two bounds will be identical and the zero value of SelectionSpec. For
+// consistency, this result is also returned if the receiver is already
+// the zero value of VersionSpec, since a zero spec represents a lack of
+// constraint.
+//
+// The constraints must be consistent as defined by ConstraintDepth, or this
+// method will panic.
+func (s VersionSpec) ConstraintBounds() (SelectionSpec, SelectionSpec) {
+	switch s.ConstraintDepth() {
+	case Unconstrained:
+		return SelectionSpec{}, SelectionSpec{}
+	case ConstrainedMajor:
+		lowerBound := s.ConstrainToZero()
+		lowerBound.Metadata = ""
+		upperBound := lowerBound
+		upperBound.Major.Num++
+		upperBound.Minor.Num = 0
+		upperBound.Patch.Num = 0
+		upperBound.Prerelease = ""
+		upperBound.Metadata = ""
+		return SelectionSpec{
+			Operator: OpGreaterThanOrEqual,
+			Boundary: lowerBound,
+		}, SelectionSpec{
+			Operator: OpLessThan,
+			Boundary: upperBound,
+		}
+	case ConstrainedMinor:
+		lowerBound := s.ConstrainToZero()
+		lowerBound.Metadata = ""
+		upperBound := lowerBound
+		upperBound.Minor.Num++
+		upperBound.Patch.Num = 0
+		upperBound.Metadata = ""
+		return SelectionSpec{
+			Operator: OpGreaterThanOrEqual,
+			Boundary: lowerBound,
+		}, SelectionSpec{
+			Operator: OpLessThan,
+			Boundary: upperBound,
+		}
+	default:
+		eq := SelectionSpec{
+			Operator: OpEqual,
+			Boundary: s,
+		}
+		return eq, eq
+	}
+}
+
+// ConstrainToZero returns a copy of the receiver with all of its
+// unconstrained numeric segments constrained to zero.
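+//
+// For example, a spec with only the major number constrained to 1
+// (that is, "1" or "1.*.*") becomes the exact spec "1.0.0".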
+func (s VersionSpec) ConstrainToZero() VersionSpec { + switch s.ConstraintDepth() { + case Unconstrained: + s.Major = NumConstraint{Num: 0} + s.Minor = NumConstraint{Num: 0} + s.Patch = NumConstraint{Num: 0} + s.Prerelease = "" + s.Metadata = "" + case ConstrainedMajor: + s.Minor = NumConstraint{Num: 0} + s.Patch = NumConstraint{Num: 0} + s.Prerelease = "" + s.Metadata = "" + case ConstrainedMinor: + s.Patch = NumConstraint{Num: 0} + s.Prerelease = "" + s.Metadata = "" + } + return s +} + +// ConstrainToUpperBound returns a copy of the receiver with all of its +// unconstrained numeric segments constrained to zero and its last +// constrained segment increased by one. +// +// This operation is not meaningful for an entirely unconstrained VersionSpec, +// so will return the zero value of the type in that case. +func (s VersionSpec) ConstrainToUpperBound() VersionSpec { + switch s.ConstraintDepth() { + case Unconstrained: + return VersionSpec{} + case ConstrainedMajor: + s.Major.Num++ + s.Minor = NumConstraint{Num: 0} + s.Patch = NumConstraint{Num: 0} + s.Prerelease = "" + s.Metadata = "" + case ConstrainedMinor: + s.Minor.Num++ + s.Patch = NumConstraint{Num: 0} + s.Prerelease = "" + s.Metadata = "" + } + return s +} + +func (s VersionSpec) String() string { + var buf bytes.Buffer + fmt.Fprintf(&buf, "%s.%s.%s", s.Major, s.Minor, s.Patch) + if s.Prerelease != "" { + fmt.Fprintf(&buf, "-%s", s.Prerelease) + } + if s.Metadata != "" { + fmt.Fprintf(&buf, "+%s", s.Metadata) + } + return buf.String() +} + +type SelectionOp rune + +//go:generate stringer -type SelectionOp + +const ( + OpUnconstrained SelectionOp = 0 + OpGreaterThan SelectionOp = '>' + OpLessThan SelectionOp = '<' + OpGreaterThanOrEqual SelectionOp = '≥' + OpGreaterThanOrEqualPatchOnly SelectionOp = '~' + OpGreaterThanOrEqualMinorOnly SelectionOp = '^' + OpLessThanOrEqual SelectionOp = '≤' + OpEqual SelectionOp = '=' + OpNotEqual SelectionOp = '≠' + OpMatch SelectionOp = '*' +) + +type NumConstraint struct { + Num uint64 + Unconstrained bool +} + +func (c NumConstraint) String() string { + if c.Unconstrained { + return "*" + } else { + return strconv.FormatUint(c.Num, 10) + } +} + +type ConstraintDepth int + +//go:generate stringer -type ConstraintDepth + +const ( + Unconstrained ConstraintDepth = 0 + ConstrainedMajor ConstraintDepth = 1 + ConstrainedMinor ConstraintDepth = 2 + ConstrainedPatch ConstraintDepth = 3 +) diff --git a/vendor/github.com/apparentlymart/go-versions/versions/constraints/version.go b/vendor/github.com/apparentlymart/go-versions/versions/constraints/version.go new file mode 100644 index 000000000..9e6f24ea7 --- /dev/null +++ b/vendor/github.com/apparentlymart/go-versions/versions/constraints/version.go @@ -0,0 +1,81 @@ +package constraints + +import ( + "fmt" + "strings" +) + +// ParseExactVersion parses a string that must contain the specification of a +// single, exact version, and then returns it as a VersionSpec. +// +// This is primarily here to allow versions.ParseVersion to re-use the +// constraint grammar, and isn't very useful for direct use from calling +// applications. 
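+//
+// For example, ParseExactVersion("1.2.3") succeeds, while inputs such as
+// ">= 1.2.3", "v1.2.3", or "1.2.3, 1.2.4" are each rejected with an error
+// describing the problem.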
+func ParseExactVersion(vs string) (VersionSpec, error) { + spec := VersionSpec{} + + if strings.TrimSpace(vs) == "" { + return spec, fmt.Errorf("empty specification") + } + + raw, remain := scanConstraint(vs) + + switch strings.TrimSpace(raw.op) { + case ">", ">=", "<", "<=", "!", "!=", "~>", "^", "~": + // If it looks like the user was trying to write a constraint string + // then we'll help them out with a more specialized error. + return spec, fmt.Errorf("can't use constraint operator %q; an exact version is required", raw.op) + case "": + // Empty operator is okay as long as we don't also have separator spaces. + // (Caller can trim off spaces beforehand if they want to tolerate this.) + if raw.sep != "" { + return spec, fmt.Errorf("extraneous spaces at start of specification") + } + default: + return spec, fmt.Errorf("invalid sequence %q at start of specification", raw.op) + } + + if remain != "" { + remain = strings.TrimSpace(remain) + switch { + case remain == "": + return spec, fmt.Errorf("extraneous spaces at end of specification") + case strings.HasPrefix(vs, "v"): + // User seems to be trying to use a "v" prefix, like "v1.0.0" + return spec, fmt.Errorf(`a "v" prefix should not be used`) + case strings.HasPrefix(remain, ",") || strings.HasPrefix(remain, "|"): + // User seems to be trying to list/combine multiple versions + return spec, fmt.Errorf("can't specify multiple versions; a single exact version is required") + case strings.HasPrefix(remain, "-"): + // User seems to be trying to use the npm-style range operator + return spec, fmt.Errorf("can't specify version range; a single exact version is required") + case strings.HasPrefix(strings.TrimSpace(vs), remain): + // Whole string is invalid, then. + return spec, fmt.Errorf("invalid specification; required format is three positive integers separated by periods") + default: + return spec, fmt.Errorf("invalid characters %q", remain) + } + } + + if raw.numCt > 3 { + return spec, fmt.Errorf("too many numbered portions; only three are allowed (major, minor, patch)") + } + + for i := raw.numCt; i < len(raw.nums); i++ { + raw.nums[i] = "0" + } + + for i, s := range raw.nums { + switch { + case isWildcardNum(s): + // Can't use wildcards in an exact specification + return spec, fmt.Errorf("can't use wildcard for %s number; an exact version is required", rawNumNames[i]) + } + } + + // Since we eliminated all of the unconstrained cases above, either by normalizing + // or returning an error, we are guaranteed to get constrained numbers here. + spec = raw.VersionSpec() + + return spec, nil +} diff --git a/vendor/github.com/apparentlymart/go-versions/versions/doc.go b/vendor/github.com/apparentlymart/go-versions/versions/doc.go new file mode 100644 index 000000000..6e4ffe520 --- /dev/null +++ b/vendor/github.com/apparentlymart/go-versions/versions/doc.go @@ -0,0 +1,14 @@ +// Package versions is a library for wrangling version numbers in Go. +// +// There are many libraries offering some or all of this functionality. +// This package aims to distinguish itself by offering a more convenient and +// ergonomic API than seen in some other libraries. Code that is resolving +// versions and version constraints tends to be hairy and complex already, so +// an expressive API for talking about these concepts will hopefully help to +// make that code more readable. +// +// The version model is based on Semantic Versioning as defined at +// https://semver.org/ . 
Semantic Versioning does not include any specification
+// for constraints, so the constraint model is based on that used by rubygems,
+// allowing for upper and lower bounds as well as individual version exclusions.
+package versions
diff --git a/vendor/github.com/apparentlymart/go-versions/versions/list.go b/vendor/github.com/apparentlymart/go-versions/versions/list.go
new file mode 100644
index 000000000..083e68589
--- /dev/null
+++ b/vendor/github.com/apparentlymart/go-versions/versions/list.go
@@ -0,0 +1,149 @@
+package versions
+
+import (
+	"sort"
+)
+
+// List is a slice of Version that implements sort.Interface, and also includes
+// some other helper functions.
+type List []Version
+
+// Filter removes from the receiver any elements that are not in the given
+// set, moving retained elements to lower indices to close any gaps and
+// modifying the underlying array in-place. The return value is a slice
+// describing the new bounds within the existing backing array. The relative
+// ordering of the retained elements is preserved.
+//
+// The result must always be either the same length or shorter than the
+// initial value, so no allocation is required.
+//
+// As a special case, if the result would be a slice of length zero then a
+// nil slice is returned instead, leaving the backing array untouched.
+func (l List) Filter(set Set) List {
+	writeI := 0
+
+	for readI := range l {
+		if set.Has(l[readI]) {
+			l[writeI] = l[readI]
+			writeI++
+		}
+	}
+
+	if writeI == 0 {
+		return nil
+	}
+	return l[:writeI:len(l)]
+}
+
+// Newest returns the newest version in the list, or Unspecified if the list
+// is empty.
+//
+// Since build metadata does not participate in precedence, it is possible
+// that a given list may have multiple equally-new versions; in that case
+// Newest will return an arbitrary version from that subset.
+func (l List) Newest() Version {
+	ret := Unspecified
+	for i := len(l) - 1; i >= 0; i-- {
+		if l[i].GreaterThan(ret) {
+			ret = l[i]
+		}
+	}
+	return ret
+}
+
+// NewestInSet is like Filter followed by Newest, except that it does not
+// modify the underlying array. This is convenient for the common case of
+// selecting the newest version from a set derived from a user-supplied
+// constraint.
+//
+// Similar to Newest, the result is Unspecified if the list is empty or if
+// none of the items are in the given set. Also similar to Newest, if there
+// are multiple newest versions (possibly differentiated only by metadata)
+// then one is arbitrarily chosen.
+func (l List) NewestInSet(set Set) Version {
+	ret := Unspecified
+	for i := len(l) - 1; i >= 0; i-- {
+		if l[i].GreaterThan(ret) && set.Has(l[i]) {
+			ret = l[i]
+		}
+	}
+	return ret
+}
+
+// NewestList returns a List containing all of the list items that have the
+// highest precedence.
+//
+// For an already-sorted list, the returned slice is a sub-slice of the
+// receiver, sharing the same backing array. For an unsorted list, a new
+// array is allocated for the result. For an empty list, the result is always
+// nil.
+//
+// Relative ordering of elements in the receiver is preserved in the output.
+func (l List) NewestList() List {
+	if len(l) == 0 {
+		return nil
+	}
+
+	if l.IsSorted() {
+		// This is a happy path since we can just count off items from the
+		// end of our existing list until we find one that is not the same
+		// as the last.
+		var i int
+		n := len(l)
+		for i = n - 1; i >= 0; i-- {
+			if !l[i].Same(l[n-1]) {
+				break
+			}
+		}
+		// Either we broke out at the first non-matching item, or we ran
+		// off the start of the list with i == -1; in both cases the
+		// newest run begins at i+1.
+		return l[i+1:]
+	}
+
+	// For an unsorted list we'll allocate so that we can construct a new,
+	// filtered slice.
+	ret := make(List, 0, 1) // one item is the common case, in the absence of build metadata
+	example := l.Newest()
+	for _, v := range l {
+		if v.Same(example) {
+			ret = append(ret, v)
+		}
+	}
+	return ret
+}
+
+// Set returns a finite Set containing the versions in the receiver.
+//
+// Although it is possible to recover a list from the return value using
+// its List method, the result may be in a different order and will have
+// any duplicate elements from the receiving list consolidated.
+func (l List) Set() Set {
+	return Selection(l...)
+}
+
+func (l List) Len() int {
+	return len(l)
+}
+
+func (l List) Less(i, j int) bool {
+	return l[i].LessThan(l[j])
+}
+
+func (l List) Swap(i, j int) {
+	l[i], l[j] = l[j], l[i]
+}
+
+// Sort applies an in-place sort on the list, preserving the relative order of
+// any elements that differ only in build metadata. Earlier versions sort
+// first, so the newest versions will be at the highest indices in the list
+// once this method returns.
+func (l List) Sort() {
+	sort.Stable(l)
+}
+
+// IsSorted returns true if the list is already in ascending order by
+// version priority.
+func (l List) IsSorted() bool {
+	return sort.IsSorted(l)
+}
diff --git a/vendor/github.com/apparentlymart/go-versions/versions/parse.go b/vendor/github.com/apparentlymart/go-versions/versions/parse.go
new file mode 100644
index 000000000..66150e337
--- /dev/null
+++ b/vendor/github.com/apparentlymart/go-versions/versions/parse.go
@@ -0,0 +1,243 @@
+package versions
+
+import (
+	"fmt"
+
+	"github.com/apparentlymart/go-versions/versions/constraints"
+)
+
+// ParseVersion attempts to parse the given string as a semantic version
+// specification, and returns the result if successful.
+//
+// If the given string is not parseable then an error is returned that is
+// suitable for display directly to a hypothetical end-user that provided this
+// version string, as long as they can read English.
+func ParseVersion(s string) (Version, error) {
+	spec, err := constraints.ParseExactVersion(s)
+	if err != nil {
+		return Unspecified, err
+	}
+	return versionFromExactVersionSpec(spec), nil
+}
+
+// MustParseVersion is the same as ParseVersion except that it will panic
+// instead of returning an error.
+func MustParseVersion(s string) Version {
+	v, err := ParseVersion(s)
+	if err != nil {
+		panic(err)
+	}
+	return v
+}
+
+// MeetingConstraints returns a version set that contains all of the versions
+// that meet the given constraints, specified using the Spec type from the
+// constraints package.
+//
+// The resulting Set has all pre-release versions excluded, except any that
+// are explicitly mentioned as exact selections. For example, the constraint
+// "2.0.0-beta1 || >2" contains 2.0.0-beta1 but not 2.0.0-beta2 or 3.0.0-beta1.
+// This additional constraint on pre-releases can be avoided by calling
+// MeetingConstraintsExact instead, at which point the caller can apply other
+// logic to deal with prereleases.
+//
+// This function expects an internally-consistent Spec like what would be
+// generated by that package's constraint parsers. Behavior is undefined --
+// including the possibility of panics -- if specs are hand-created and the
+// expected invariants aren't met.
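+//
+// A minimal usage sketch, assuming a spec already parsed by the
+// constraints package:
+//
+//     set := versions.MeetingConstraints(spec)
+//     if set.Has(versions.MustParseVersion("2.0.0")) {
+//         // 2.0.0 satisfies the constraints
+//     }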
+func MeetingConstraints(spec constraints.Spec) Set { + exact := MeetingConstraintsExact(spec) + reqd := exact.AllRequested().List() + set := Intersection(Released, exact) + reqd = reqd.Filter(Prerelease) + if len(reqd) != 0 { + set = Union(Selection(reqd...), set) + } + return set +} + +// MeetingConstraintsExact is like MeetingConstraints except that it doesn't +// apply the extra rules to exclude pre-release versions that are not +// explicitly requested. +// +// This means that given a constraint ">=1.0.0 <2.0.0" a hypothetical version +// 2.0.0-beta1 _is_ in the returned set, because prerelease versions have +// lower precedence than their corresponding release. +// +// A caller can use this to implement its own specialized handling of +// pre-release versions by applying additional set operations to the result, +// such as intersecting it with the predefined set versions.Released to +// remove prerelease versions altogether. +func MeetingConstraintsExact(spec constraints.Spec) Set { + if spec == nil { + return All + } + + switch ts := spec.(type) { + + case constraints.VersionSpec: + lowerBound, upperBound := ts.ConstraintBounds() + switch lowerBound.Operator { + case constraints.OpUnconstrained: + return All + case constraints.OpEqual: + return Only(versionFromExactVersionSpec(lowerBound.Boundary)) + default: + return AtLeast( + versionFromExactVersionSpec(lowerBound.Boundary), + ).Intersection( + OlderThan(versionFromExactVersionSpec(upperBound.Boundary))) + } + + case constraints.SelectionSpec: + lower := ts.Boundary.ConstrainToZero() + if ts.Operator != constraints.OpEqual && ts.Operator != constraints.OpNotEqual { + lower.Metadata = "" // metadata is only considered for exact matches + } + + switch ts.Operator { + case constraints.OpUnconstrained: + // Degenerate case, but we'll allow it. + return All + case constraints.OpMatch: + // The match operator uses the constraints implied by the + // Boundary version spec as the specification. + // Note that we discard "lower" in this case, because we do want + // to match our metadata if it's specified. + return MeetingConstraintsExact(ts.Boundary) + case constraints.OpEqual, constraints.OpNotEqual: + set := Only(versionFromExactVersionSpec(lower)) + if ts.Operator == constraints.OpNotEqual { + // We want everything _except_ what's in our set, then. 
+				set = All.Subtract(set)
+			}
+			return set
+		case constraints.OpGreaterThan:
+			return NewerThan(versionFromExactVersionSpec(lower))
+		case constraints.OpGreaterThanOrEqual:
+			return AtLeast(versionFromExactVersionSpec(lower))
+		case constraints.OpLessThan:
+			return OlderThan(versionFromExactVersionSpec(lower))
+		case constraints.OpLessThanOrEqual:
+			return AtMost(versionFromExactVersionSpec(lower))
+		case constraints.OpGreaterThanOrEqualMinorOnly:
+			upper := lower
+			upper.Major.Num++
+			upper.Minor.Num = 0
+			upper.Patch.Num = 0
+			upper.Prerelease = ""
+			return AtLeast(
+				versionFromExactVersionSpec(lower),
+			).Intersection(
+				OlderThan(versionFromExactVersionSpec(upper)))
+		case constraints.OpGreaterThanOrEqualPatchOnly:
+			upper := lower
+			upper.Minor.Num++
+			upper.Patch.Num = 0
+			upper.Prerelease = ""
+			return AtLeast(
+				versionFromExactVersionSpec(lower),
+			).Intersection(
+				OlderThan(versionFromExactVersionSpec(upper)))
+		default:
+			panic(fmt.Errorf("unsupported constraints.SelectionOp %s", ts.Operator))
+		}
+
+	case constraints.UnionSpec:
+		if len(ts) == 0 {
+			return All
+		}
+		if len(ts) == 1 {
+			return MeetingConstraintsExact(ts[0])
+		}
+		union := make(setUnion, len(ts))
+		for i, subSpec := range ts {
+			union[i] = MeetingConstraintsExact(subSpec).setI
+		}
+		return Set{setI: union}
+
+	case constraints.IntersectionSpec:
+		if len(ts) == 0 {
+			return All
+		}
+		if len(ts) == 1 {
+			return MeetingConstraintsExact(ts[0])
+		}
+		intersection := make(setIntersection, len(ts))
+		for i, subSpec := range ts {
+			intersection[i] = MeetingConstraintsExact(subSpec).setI
+		}
+		return Set{setI: intersection}
+
+	default:
+		// should never happen because the above cases are exhaustive for
+		// all valid constraint implementations.
+		panic(fmt.Errorf("unsupported constraints.Spec implementation %T", spec))
+	}
+}
+
+// MeetingConstraintsString attempts to parse the given spec as a constraints
+// string in our canonical format, which is most similar to the syntax used by
+// npm, Go's "dep" tool, Rust's "cargo", etc.
+//
+// This is a convenience wrapper around calling constraints.Parse and then
+// passing the result to MeetingConstraints. Call into the constraints package
+// yourself for access to the constraint tree.
+//
+// If unsuccessful, the error from the underlying parser is returned verbatim.
+// Parser errors are suitable for showing to an end-user in situations where
+// the given spec came from user input.
+func MeetingConstraintsString(spec string) (Set, error) {
+	s, err := constraints.Parse(spec)
+	if err != nil {
+		return None, err
+	}
+	return MeetingConstraints(s), nil
+}
+
+// MeetingConstraintsStringRuby attempts to parse the given spec as a
+// "Ruby-style" version constraint string, and returns the set of versions
+// that match the constraint if successful.
+//
+// If unsuccessful, the error from the underlying parser is returned verbatim.
+// Parser errors are suitable for showing to an end-user in situations where
+// the given spec came from user input.
+//
+// "Ruby-style" here is not a promise of exact compatibility with rubygems
+// or any other Ruby tools. Rather, it refers to this parser using a syntax
+// that is intended to feel familiar to those who are familiar with rubygems
+// syntax.
+//
+// Constraints are parsed in "multi" mode, allowing multiple comma-separated
+// constraints that are combined with the Intersection operator. For more
+// control over the parsing process, use the constraints package API directly
+// and then call MeetingConstraints.
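+//
+// For example:
+//
+//     set, err := versions.MeetingConstraintsStringRuby(">= 1.2.0, < 2.0.0")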
+func MeetingConstraintsStringRuby(spec string) (Set, error) {
+	s, err := constraints.ParseRubyStyleMulti(spec)
+	if err != nil {
+		return None, err
+	}
+	return MeetingConstraints(s), nil
+}
+
+// MustMakeSet can be used to wrap any function that returns a set and an error
+// to make it panic if an error occurs and return the set otherwise.
+//
+// This is intended for tests and other situations where input is from
+// known-good constants.
+func MustMakeSet(set Set, err error) Set {
+	if err != nil {
+		panic(err)
+	}
+	return set
+}
+
+func versionFromExactVersionSpec(spec constraints.VersionSpec) Version {
+	return Version{
+		Major:      spec.Major.Num,
+		Minor:      spec.Minor.Num,
+		Patch:      spec.Patch.Num,
+		Prerelease: VersionExtra(spec.Prerelease),
+		Metadata:   VersionExtra(spec.Metadata),
+	}
+}
diff --git a/vendor/github.com/apparentlymart/go-versions/versions/set.go b/vendor/github.com/apparentlymart/go-versions/versions/set.go
new file mode 100644
index 000000000..ad4d5efe5
--- /dev/null
+++ b/vendor/github.com/apparentlymart/go-versions/versions/set.go
@@ -0,0 +1,89 @@
+package versions
+
+// Set is a set of versions, usually created by parsing a constraint string.
+type Set struct {
+	setI
+}
+
+// setI is the private interface implemented by our various constraint
+// operators.
+type setI interface {
+	Has(v Version) bool
+	AllRequested() Set
+	GoString() string
+}
+
+// Has returns true if the given version is a member of the receiving set.
+func (s Set) Has(v Version) bool {
+	// The special Unspecified version is excluded as soon as any sort of
+	// constraint is applied, and so the only set it is a member of is
+	// the special All set.
+	if v == Unspecified {
+		return s == All
+	}
+
+	return s.setI.Has(v)
+}
+
+// Requests returns true if the given version is specifically requested by
+// the receiving set.
+//
+// Requesting is a stronger form of set membership that represents an explicit
+// request for a particular version, as opposed to the version just happening
+// to match some criteria.
+//
+// The functions Only and Selection mark their arguments as requested in
+// their returned sets. Exact version constraints given in constraint strings
+// also mark their versions as requested.
+//
+// The concept of requesting is intended to help deal with pre-release versions
+// in a safe and convenient way. When given generic version constraints like
+// ">= 1.0.0" the user generally does not intend to match a pre-release version
+// like "2.0.0-beta1", but it is important to still be able to use that
+// version if explicitly requested using the constraint string "2.0.0-beta1".
+func (s Set) Requests(v Version) bool {
+	return s.AllRequested().Has(v)
+}
+
+// AllRequested returns a subset of the receiver containing only the requested
+// versions, as defined in the documentation for the method Requests.
+//
+// This can be used in conjunction with the predefined set "Released" to
+// include pre-release versions only by explicit request, which is supported
+// via the helper method WithoutUnrequestedPrereleases.
+//
+// The result of AllRequested is always a finite set.
+func (s Set) AllRequested() Set {
+	return s.setI.AllRequested()
+}
+
+// WithoutUnrequestedPrereleases returns a new set that includes all released
+// versions from the receiving set, plus any explicitly-requested pre-releases,
+// but does not include any unrequested pre-releases.
+//
+// "Requested" here is as defined in the documentation for the "Requests" method.
+//
+// This method is equivalent to the following set operations:
+//
+//     versions.Union(s.AllRequested(), s.Intersection(versions.Released))
+func (s Set) WithoutUnrequestedPrereleases() Set {
+	return Union(s.AllRequested(), Released.Intersection(s))
+}
+
+// UnmarshalText is an implementation of encoding.TextUnmarshaler, allowing
+// sets to be automatically unmarshalled from strings in text-based
+// serialization formats, including encoding/json.
+//
+// The format expected is what is accepted by MeetingConstraintsString. Any
+// parser errors are passed on verbatim to the caller.
+func (s *Set) UnmarshalText(text []byte) error {
+	str := string(text)
+	new, err := MeetingConstraintsString(str)
+	if err != nil {
+		return err
+	}
+	*s = new
+	return nil
+}
+
+// InitialDevelopment is a set containing all versions that precede 1.0.0,
+// which the semver spec sets aside for initial development.
+var InitialDevelopment Set = OlderThan(MustParseVersion("1.0.0"))
diff --git a/vendor/github.com/apparentlymart/go-versions/versions/set_bound.go b/vendor/github.com/apparentlymart/go-versions/versions/set_bound.go
new file mode 100644
index 000000000..2e2ba09ce
--- /dev/null
+++ b/vendor/github.com/apparentlymart/go-versions/versions/set_bound.go
@@ -0,0 +1,98 @@
+package versions
+
+import (
+	"fmt"
+)
+
+type setBound struct {
+	v  Version
+	op setBoundOp
+}
+
+func (s setBound) Has(v Version) bool {
+	switch s.op {
+	case setBoundGT:
+		return v.GreaterThan(s.v)
+	case setBoundGTE:
+		return v.GreaterThan(s.v) || v.Same(s.v)
+	case setBoundLT:
+		return v.LessThan(s.v)
+	case setBoundLTE:
+		return v.LessThan(s.v) || v.Same(s.v)
+	default:
+		// Should never happen because the above is exhaustive
+		panic("invalid setBound operator")
+	}
+}
+
+func (s setBound) AllRequested() Set {
+	// Inequalities request nothing.
+	return None
+}
+
+func (s setBound) GoString() string {
+	switch s.op {
+	case setBoundGT:
+		return fmt.Sprintf("versions.NewerThan(%#v)", s.v)
+	case setBoundGTE:
+		return fmt.Sprintf("versions.AtLeast(%#v)", s.v)
+	case setBoundLT:
+		return fmt.Sprintf("versions.OlderThan(%#v)", s.v)
+	case setBoundLTE:
+		return fmt.Sprintf("versions.AtMost(%#v)", s.v)
+	default:
+		// Should never happen because the above is exhaustive
+		return fmt.Sprintf("versions.Set{versions.setBound{v:%#v,op:%#v}}", s.v, s.op)
+	}
+}
+
+// NewerThan returns a set containing all versions greater than the given
+// version, non-inclusive.
+func NewerThan(v Version) Set {
+	return Set{
+		setI: setBound{
+			v:  v,
+			op: setBoundGT,
+		},
+	}
+}
+
+// OlderThan returns a set containing all versions lower than the given
+// version, non-inclusive.
+func OlderThan(v Version) Set {
+	return Set{
+		setI: setBound{
+			v:  v,
+			op: setBoundLT,
+		},
+	}
+}
+
+// AtLeast returns a set containing all versions greater than or equal to
+// the given version.
+func AtLeast(v Version) Set {
+	return Set{
+		setI: setBound{
+			v:  v,
+			op: setBoundGTE,
+		},
+	}
+}
+
+// AtMost returns a set containing all versions less than or equal to the
+// given version.
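+//
+// For example, AtMost(MustParseVersion("1.2.3")) contains 1.2.3 itself
+// as well as 1.0.0, but not 1.2.4.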
+func AtMost(v Version) Set { + return Set{ + setI: setBound{ + v: v, + op: setBoundLTE, + }, + } +} + +type setBoundOp rune + +const setBoundGT = '>' +const setBoundGTE = '≥' +const setBoundLT = '<' +const setBoundLTE = '≤' diff --git a/vendor/github.com/apparentlymart/go-versions/versions/set_exact.go b/vendor/github.com/apparentlymart/go-versions/versions/set_exact.go new file mode 100644 index 000000000..4b2fa7bc4 --- /dev/null +++ b/vendor/github.com/apparentlymart/go-versions/versions/set_exact.go @@ -0,0 +1,103 @@ +package versions + +import ( + "bytes" + "fmt" +) + +type setExact map[Version]struct{} + +func (s setExact) Has(v Version) bool { + _, has := s[v] + return has +} + +func (s setExact) AllRequested() Set { + // We just return the receiver verbatim here, because everything in it + // is explicitly requested. + return Set{setI: s} +} + +func (s setExact) GoString() string { + if len(s) == 0 { + // Degenerate case; caller should use None instead + return "versions.Set{setExact{}}" + } + + if len(s) == 1 { + var first Version + for v := range s { + first = v + break + } + return fmt.Sprintf("versions.Only(%#v)", first) + } + + var buf bytes.Buffer + fmt.Fprint(&buf, "versions.Selection(") + versions := s.listVersions() + versions.Sort() + for i, version := range versions { + if i == 0 { + fmt.Fprint(&buf, version.GoString()) + } else { + fmt.Fprintf(&buf, ", %#v", version) + } + } + fmt.Fprint(&buf, ")") + return buf.String() +} + +// Only returns a version set containing only the given version. +// +// This function is guaranteed to produce a finite set. +func Only(v Version) Set { + return Set{ + setI: setExact{v: struct{}{}}, + } +} + +// Selection returns a version set containing only the versions given +// as arguments. +// +// This function is guaranteed to produce a finite set. +func Selection(vs ...Version) Set { + if len(vs) == 0 { + return None + } + ret := make(setExact) + for _, v := range vs { + ret[v] = struct{}{} + } + return Set{setI: ret} +} + +// Exactly returns true if and only if the receiving set is finite and +// contains only a single version that is the same as the version given. +func (s Set) Exactly(v Version) bool { + if !s.IsFinite() { + return false + } + l := s.List() + if len(l) != 1 { + return false + } + return v.Same(l[0]) +} + +var _ setFinite = setExact(nil) + +func (s setExact) isFinite() bool { + return true +} + +func (s setExact) listVersions() List { + if len(s) == 0 { + return nil + } + ret := make(List, 0, len(s)) + for v := range s { + ret = append(ret, v) + } + return ret +} diff --git a/vendor/github.com/apparentlymart/go-versions/versions/set_extremes.go b/vendor/github.com/apparentlymart/go-versions/versions/set_extremes.go new file mode 100644 index 000000000..cae13f99b --- /dev/null +++ b/vendor/github.com/apparentlymart/go-versions/versions/set_extremes.go @@ -0,0 +1,49 @@ +package versions + +// All is an infinite set containing all possible versions. +var All Set + +// None is a finite set containing no versions. +var None Set + +type setExtreme bool + +func (s setExtreme) Has(v Version) bool { + return bool(s) +} + +func (s setExtreme) AllRequested() Set { + // The extreme sets request nothing. 
+	return None
+}
+
+func (s setExtreme) GoString() string {
+	switch bool(s) {
+	case true:
+		return "versions.All"
+	case false:
+		return "versions.None"
+	default:
+		panic("strange new boolean value")
+	}
+}
+
+var _ setFinite = setExtreme(false)
+
+func (s setExtreme) isFinite() bool {
+	// Only None is finite
+	return !bool(s)
+}
+
+func (s setExtreme) listVersions() List {
+	return nil
+}
+
+func init() {
+	All = Set{
+		setI: setExtreme(true),
+	}
+	None = Set{
+		setI: setExtreme(false),
+	}
+}
diff --git a/vendor/github.com/apparentlymart/go-versions/versions/set_finite.go b/vendor/github.com/apparentlymart/go-versions/versions/set_finite.go
new file mode 100644
index 000000000..eb1a5dc2f
--- /dev/null
+++ b/vendor/github.com/apparentlymart/go-versions/versions/set_finite.go
@@ -0,0 +1,34 @@
+package versions
+
+// setFinite is the interface implemented by set implementations that
+// represent a finite number of versions, and can thus list those versions.
+type setFinite interface {
+	isFinite() bool
+	listVersions() List
+}
+
+// IsFinite returns true if the set represents a finite number of versions,
+// and can thus support List without panicking.
+func (s Set) IsFinite() bool {
+	return isFinite(s.setI)
+}
+
+// List returns the specific versions represented by a finite list, in an
+// undefined order. If desired, the caller can sort the resulting list
+// using its Sort method.
+//
+// If the set is not finite, this method will panic. Use IsFinite to check
+// unless a finite set was guaranteed by whatever operation(s) constructed
+// the set.
+func (s Set) List() List {
+	finite, ok := s.setI.(setFinite)
+	if !ok || !finite.isFinite() {
+		panic("List called on infinite set")
+	}
+	return finite.listVersions()
+}
+
+func isFinite(s setI) bool {
+	finite, ok := s.(setFinite)
+	return ok && finite.isFinite()
+}
diff --git a/vendor/github.com/apparentlymart/go-versions/versions/set_intersection.go b/vendor/github.com/apparentlymart/go-versions/versions/set_intersection.go
new file mode 100644
index 000000000..4afd1b423
--- /dev/null
+++ b/vendor/github.com/apparentlymart/go-versions/versions/set_intersection.go
@@ -0,0 +1,132 @@
+package versions
+
+import (
+	"bytes"
+	"fmt"
+)
+
+type setIntersection []setI
+
+func (s setIntersection) Has(v Version) bool {
+	if len(s) == 0 {
+		// Weird to have an intersection with no elements, but we'll
+		// allow it and return something sensible.
+		return false
+	}
+	for _, ss := range s {
+		if !ss.Has(v) {
+			return false
+		}
+	}
+	return true
+}
+
+func (s setIntersection) AllRequested() Set {
+	// The requested set for an intersection is the union of all of its
+	// members' requested sets, intersected with the receiver. Therefore
+	// we'll borrow the same logic from setUnion's implementation here but
+	// then wrap it up in a setIntersection before we return.
+
+	asUnion := setUnion(s)
+	ar := asUnion.AllRequested()
+	si := make(setIntersection, len(s)+1)
+	si[0] = ar.setI
+	copy(si[1:], s)
+	return Set{setI: si}
+}
+
+func (s setIntersection) GoString() string {
+	var buf bytes.Buffer
+	fmt.Fprint(&buf, "versions.Intersection(")
+	for i, ss := range s {
+		if i == 0 {
+			fmt.Fprint(&buf, ss.GoString())
+		} else {
+			fmt.Fprintf(&buf, ", %#v", ss)
+		}
+	}
+	fmt.Fprint(&buf, ")")
+	return buf.String()
+}
+
+// Intersection creates a new set that contains the versions that all of the
+// given sets have in common.
+//
+// The result is finite if any of the given sets are finite.
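+//
+// A small sketch, assuming v1 and v2 are previously-parsed Versions:
+//
+//     rng := versions.Intersection(versions.AtLeast(v1), versions.OlderThan(v2))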
+func Intersection(sets ...Set) Set {
+	if len(sets) == 0 {
+		return None
+	}
+
+	r := make(setIntersection, 0, len(sets))
+	for _, set := range sets {
+		if set == All {
+			continue
+		}
+		if set == None {
+			return None
+		}
+		if su, ok := set.setI.(setIntersection); ok {
+			r = append(r, su...)
+		} else {
+			r = append(r, set.setI)
+		}
+	}
+	if len(r) == 1 {
+		return Set{setI: r[0]}
+	}
+	return Set{setI: r}
+}
+
+// Intersection returns a new set that contains all of the versions that
+// the receiver and the given sets have in common.
+//
+// The result is a finite set if the receiver or any of the given sets are
+// finite.
+func (s Set) Intersection(others ...Set) Set {
+	r := make(setIntersection, 1, len(others)+1)
+	r[0] = s.setI
+	for _, ss := range others {
+		if ss == All {
+			continue
+		}
+		if ss == None {
+			return None
+		}
+		if su, ok := ss.setI.(setIntersection); ok {
+			r = append(r, su...)
+		} else {
+			r = append(r, ss.setI)
+		}
+	}
+	if len(r) == 1 {
+		return Set{setI: r[0]}
+	}
+	return Set{setI: r}
+}
+
+var _ setFinite = setIntersection{}
+
+func (s setIntersection) isFinite() bool {
+	// intersection is finite if any of its members are, or if it is empty
+	if len(s) == 0 {
+		return true
+	}
+	for _, ss := range s {
+		if isFinite(ss) {
+			return true
+		}
+	}
+	return false
+}
+
+func (s setIntersection) listVersions() List {
+	var ret List
+	for _, ss := range s {
+		if isFinite(ss) {
+			ret = append(ret, ss.(setFinite).listVersions()...)
+		}
+	}
+	ret = ret.Filter(Set{setI: s})
+	return ret
+}
diff --git a/vendor/github.com/apparentlymart/go-versions/versions/set_released.go b/vendor/github.com/apparentlymart/go-versions/versions/set_released.go
new file mode 100644
index 000000000..dea240363
--- /dev/null
+++ b/vendor/github.com/apparentlymart/go-versions/versions/set_released.go
@@ -0,0 +1,30 @@
+package versions
+
+type setReleased struct{}
+
+func (s setReleased) Has(v Version) bool {
+	return v.Prerelease == ""
+}
+
+func (s setReleased) AllRequested() Set {
+	// The set of all released versions requests nothing.
+	return None
+}
+
+func (s setReleased) GoString() string {
+	return "versions.Released"
+}
+
+// Released is a set containing all versions that have an empty prerelease
+// string.
+var Released Set
+
+// Prerelease is a set containing all versions that have a prerelease marker.
+// This is the complement of Released, or in other words it is
+// All.Subtract(Released).
+var Prerelease Set
+
+func init() {
+	Released = Set{setI: setReleased{}}
+	Prerelease = All.Subtract(Released)
+}
diff --git a/vendor/github.com/apparentlymart/go-versions/versions/set_subtract.go b/vendor/github.com/apparentlymart/go-versions/versions/set_subtract.go
new file mode 100644
index 000000000..19a9c01e2
--- /dev/null
+++ b/vendor/github.com/apparentlymart/go-versions/versions/set_subtract.go
@@ -0,0 +1,56 @@
+package versions
+
+import "fmt"
+
+type setSubtract struct {
+	from setI
+	sub  setI
+}
+
+func (s setSubtract) Has(v Version) bool {
+	return s.from.Has(v) && !s.sub.Has(v)
+}
+
+func (s setSubtract) AllRequested() Set {
+	// Our set requests anything that is requested by "from", unless it'd
+	// be excluded by "sub". Notice that the whole of "sub" is used, rather
+	// than just the requested parts, because requesting is a positive
+	// action only.
+	return Set{setI: s.from}.AllRequested().Subtract(Set{setI: s.sub})
+}
+
+func (s setSubtract) GoString() string {
+	return fmt.Sprintf("(%#v).Subtract(%#v)", s.from, s.sub)
+}
+
+// Subtract returns a new set that has all of the versions from the receiver
+// except for any versions in the other given set.
+//
+// If the receiver is finite then the returned set is also finite.
+func (s Set) Subtract(other Set) Set {
+	if other == None || s == None {
+		return s
+	}
+	if other == All {
+		return None
+	}
+	return Set{
+		setI: setSubtract{
+			from: s.setI,
+			sub:  other.setI,
+		},
+	}
+}
+
+var _ setFinite = setSubtract{}
+
+func (s setSubtract) isFinite() bool {
+	// subtract is finite if its "from" is finite
+	return isFinite(s.from)
+}
+
+func (s setSubtract) listVersions() List {
+	ret := s.from.(setFinite).listVersions()
+	// Filter through the subtract set itself, which keeps only the
+	// versions that are in "from" but not in "sub".
+	ret = ret.Filter(Set{setI: s})
+	return ret
+}
diff --git a/vendor/github.com/apparentlymart/go-versions/versions/set_union.go b/vendor/github.com/apparentlymart/go-versions/versions/set_union.go
new file mode 100644
index 000000000..1482f690f
--- /dev/null
+++ b/vendor/github.com/apparentlymart/go-versions/versions/set_union.go
@@ -0,0 +1,121 @@
+package versions
+
+import (
+	"bytes"
+	"fmt"
+)
+
+type setUnion []setI
+
+func (s setUnion) Has(v Version) bool {
+	for _, ss := range s {
+		if ss.Has(v) {
+			return true
+		}
+	}
+	return false
+}
+
+func (s setUnion) AllRequested() Set {
+	// Since a union includes everything from its members, it includes all
+	// of the requested versions from its members too.
+	if len(s) == 0 {
+		return None
+	}
+	si := make(setUnion, 0, len(s))
+	for _, ss := range s {
+		ar := ss.AllRequested()
+		if ar == None {
+			continue
+		}
+		si = append(si, ar.setI)
+	}
+	if len(si) == 1 {
+		return Set{setI: si[0]}
+	}
+	return Set{setI: si}
+}
+
+func (s setUnion) GoString() string {
+	var buf bytes.Buffer
+	fmt.Fprint(&buf, "versions.Union(")
+	for i, ss := range s {
+		if i == 0 {
+			fmt.Fprint(&buf, ss.GoString())
+		} else {
+			fmt.Fprintf(&buf, ", %#v", ss)
+		}
+	}
+	fmt.Fprint(&buf, ")")
+	return buf.String()
+}
+
+// Union creates a new set that contains all of the versions from all of the
+// given sets.
+//
+// The result is finite only if all of the given sets are finite.
+func Union(sets ...Set) Set {
+	if len(sets) == 0 {
+		return None
+	}
+
+	r := make(setUnion, 0, len(sets))
+	for _, set := range sets {
+		if set == None {
+			continue
+		}
+		if su, ok := set.setI.(setUnion); ok {
+			r = append(r, su...)
+		} else {
+			r = append(r, set.setI)
+		}
+	}
+	if len(r) == 1 {
+		return Set{setI: r[0]}
+	}
+	return Set{setI: r}
+}
+
+// Union returns a new set that contains all of the versions from the
+// receiver and all of the versions from each of the other given sets.
+//
+// The result is finite only if the receiver and all of the other given sets
+// are finite.
+func (s Set) Union(others ...Set) Set {
+	r := make(setUnion, 1, len(others)+1)
+	r[0] = s.setI
+	for _, ss := range others {
+		if ss == None {
+			continue
+		}
+		if su, ok := ss.setI.(setUnion); ok {
+			r = append(r, su...)
+		} else {
+			r = append(r, ss.setI)
+		}
+	}
+	if len(r) == 1 {
+		return Set{setI: r[0]}
+	}
+	return Set{setI: r}
+}
+
+var _ setFinite = setUnion{}
+
+func (s setUnion) isFinite() bool {
+	// union is finite only if all of its members are finite
+	for _, ss := range s {
+		if !isFinite(ss) {
+			return false
+		}
+	}
+	return true
+}
+
+func (s setUnion) listVersions() List {
+	var ret List
+	for _, ss := range s {
+		ret = append(ret, ss.(setFinite).listVersions()...)
+	}
+	return ret
+}
diff --git a/vendor/github.com/apparentlymart/go-versions/versions/version.go b/vendor/github.com/apparentlymart/go-versions/versions/version.go
new file mode 100644
index 000000000..8cd0eb5a0
--- /dev/null
+++ b/vendor/github.com/apparentlymart/go-versions/versions/version.go
@@ -0,0 +1,222 @@
+package versions
+
+import (
+	"fmt"
+	"strings"
+)
+
+// Version represents a single version.
+type Version struct {
+	Major      uint64
+	Minor      uint64
+	Patch      uint64
+	Prerelease VersionExtra
+	Metadata   VersionExtra
+}
+
+// Unspecified is the zero value of Version and represents the absence of a
+// version number.
+//
+// Note that this is indistinguishable from the explicit version that
+// results from parsing the string "0.0.0".
+var Unspecified Version
+
+// Same returns true if the receiver has the same precedence as the other
+// given version. In other words, it has the same major, minor and patch
+// version number and an identical prerelease portion. The Metadata, if
+// any, is not considered.
+func (v Version) Same(other Version) bool {
+	return (v.Major == other.Major &&
+		v.Minor == other.Minor &&
+		v.Patch == other.Patch &&
+		v.Prerelease == other.Prerelease)
+}
+
+// Comparable returns a version that is the same as the receiver but its
+// metadata is the empty string. For Comparable versions, the standard
+// equality operator == is equivalent to method Same.
+func (v Version) Comparable() Version {
+	v.Metadata = ""
+	return v
+}
+
+// String is an implementation of fmt.Stringer that returns the receiver
+// in the canonical "semver" format.
+func (v Version) String() string {
+	s := fmt.Sprintf("%d.%d.%d", v.Major, v.Minor, v.Patch)
+	if v.Prerelease != "" {
+		s = fmt.Sprintf("%s-%s", s, v.Prerelease)
+	}
+	if v.Metadata != "" {
+		s = fmt.Sprintf("%s+%s", s, v.Metadata)
+	}
+	return s
+}
+
+func (v Version) GoString() string {
+	return fmt.Sprintf("versions.MustParseVersion(%q)", v.String())
+}
+
+// LessThan returns true if the receiver has a lower precedence than the
+// other given version, as defined by the semantic versioning specification.
+func (v Version) LessThan(other Version) bool {
+	switch {
+	case v.Major != other.Major:
+		return v.Major < other.Major
+	case v.Minor != other.Minor:
+		return v.Minor < other.Minor
+	case v.Patch != other.Patch:
+		return v.Patch < other.Patch
+	case v.Prerelease != other.Prerelease:
+		if v.Prerelease == "" {
+			return false
+		}
+		if other.Prerelease == "" {
+			return true
+		}
+		return v.Prerelease.LessThan(other.Prerelease)
+	default:
+		return false
+	}
+}
+
+// GreaterThan returns true if the receiver has a higher precedence than the
+// other given version, as defined by the semantic versioning specification.
+func (v Version) GreaterThan(other Version) bool {
+	switch {
+	case v.Major != other.Major:
+		return v.Major > other.Major
+	case v.Minor != other.Minor:
+		return v.Minor > other.Minor
+	case v.Patch != other.Patch:
+		return v.Patch > other.Patch
+	case v.Prerelease != other.Prerelease:
+		if v.Prerelease == "" {
+			return true
+		}
+		if other.Prerelease == "" {
+			return false
+		}
+		return !v.Prerelease.LessThan(other.Prerelease)
+	default:
+		return false
+	}
+}
+
+// MarshalText is an implementation of encoding.TextMarshaler, allowing versions
+// to be automatically marshalled for text-based serialization formats,
+// including encoding/json.
+//
+// The format used is that returned by String, which can be parsed using
+// ParseVersion.
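+//
+// For example, the version 1.2.3-beta1 marshals as the text
+// "1.2.3-beta1", so a Version field round-trips through encoding/json
+// as a JSON string.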
+func (v Version) MarshalText() (text []byte, err error) {
+	return []byte(v.String()), nil
+}
+
+// UnmarshalText is an implementation of encoding.TextUnmarshaler, allowing
+// versions to be automatically unmarshalled from strings in text-based
+// serialization formats, including encoding/json.
+//
+// The format expected is what is accepted by ParseVersion. Any parser errors
+// are passed on verbatim to the caller.
+func (v *Version) UnmarshalText(text []byte) error {
+	str := string(text)
+	new, err := ParseVersion(str)
+	if err != nil {
+		return err
+	}
+	*v = new
+	return nil
+}
+
+// VersionExtra represents a string containing dot-delimited tokens, as used
+// in the pre-release and build metadata portions of a Semantic Versioning
+// version expression.
+type VersionExtra string
+
+// Parts tokenizes the string into its separate parts by splitting on dots.
+//
+// The result is undefined if the receiver is not valid per the semver spec.
+func (e VersionExtra) Parts() []string {
+	return strings.Split(string(e), ".")
+}
+
+func (e VersionExtra) Raw() string {
+	return string(e)
+}
+
+// LessThan returns true if the receiver has lower precedence than the
+// other given VersionExtra string, per the rules defined in the semver
+// spec for pre-release versions.
+//
+// Build metadata has no defined precedence rules, so it is not meaningful
+// to call this method on a VersionExtra representing build metadata.
+func (e VersionExtra) LessThan(other VersionExtra) bool {
+	if e == other {
+		// Easy path
+		return false
+	}
+
+	s1 := string(e)
+	s2 := string(other)
+	for {
+		d1 := strings.IndexByte(s1, '.')
+		d2 := strings.IndexByte(s2, '.')
+
+		switch {
+		case d1 == -1 && d2 != -1:
+			// The final part of s1 lines up with a non-final part of s2:
+			// if the two differ then they decide the result, and otherwise
+			// s1 precedes s2 because it has fewer parts.
+			if s2s := s2[:d2]; s1 != s2s {
+				return lessThanStr(s1, s2s)
+			}
+			return true
+		case d2 == -1 && d1 != -1:
+			// The mirror image of the case above.
+			if s1s := s1[:d1]; s1s != s2 {
+				return lessThanStr(s1s, s2)
+			}
+			return false
+		case d1 == -1: // d2 must be -1 too, because of the above
+			// this is our last portion to compare
+			return lessThanStr(s1, s2)
+		default:
+			s1s := s1[:d1]
+			s2s := s2[:d2]
+			if s1s != s2s {
+				return lessThanStr(s1s, s2s)
+			}
+			s1 = s1[d1+1:]
+			s2 = s2[d2+1:]
+		}
+	}
+}
+
+func lessThanStr(s1, s2 string) bool {
+	// How we compare here depends on whether each string consists entirely of digits
+	s1Numeric := true
+	s2Numeric := true
+	for _, c := range s1 {
+		if c < '0' || c > '9' {
+			s1Numeric = false
+			break
+		}
+	}
+	for _, c := range s2 {
+		if c < '0' || c > '9' {
+			s2Numeric = false
+			break
+		}
+	}
+
+	switch {
+	case s1Numeric && !s2Numeric:
+		return true
+	case s2Numeric && !s1Numeric:
+		return false
+	case s1Numeric: // s2Numeric must also be true
+		switch {
+		case len(s1) < len(s2):
+			return true
+		case len(s2) < len(s1):
+			return false
+		default:
+			return s1 < s2
+		}
+	default:
+		return s1 < s2
+	}
+}
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/client/client.go b/vendor/github.com/aws/aws-sdk-go/aws/client/client.go
index 709605384..c022407f5 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/client/client.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/client/client.go
@@ -64,7 +64,7 @@ func New(cfg aws.Config, info metadata.ClientInfo, handlers request.Handlers, op
 	default:
 		maxRetries := aws.IntValue(cfg.MaxRetries)
 		if cfg.MaxRetries == nil || maxRetries == aws.UseServiceDefaultRetries {
-			maxRetries = 3
+			maxRetries = DefaultRetryerMaxNumRetries
 		}
 		svc.Retryer = DefaultRetryer{NumMaxRetries: maxRetries}
 	}
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/client/default_retryer.go b/vendor/github.com/aws/aws-sdk-go/aws/client/default_retryer.go
index a397b0d04..0fda42510 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/client/default_retryer.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/client/default_retryer.go
@@ -1,6 +1,7 @@
 package client
 
 import (
+	"math"
 	"strconv"
 	"time"
 
@@ -9,82 +10,142 @@ import (
 )
 
 // DefaultRetryer implements basic retry logic using exponential backoff for
-// most services. If you want to implement custom retry logic, implement the
-// request.Retryer interface or create a structure type that composes this
-// struct and override the specific methods. For example, to override only
-// the MaxRetries method:
+// most services. If you want to implement custom retry logic, you can implement the
+// request.Retryer interface.
+//
-// type retryer struct {
-// client.DefaultRetryer
-// }
-//
-// // This implementation always has 100 max retries
-// func (d retryer) MaxRetries() int { return 100 }
 type DefaultRetryer struct {
-	NumMaxRetries int
+	// NumMaxRetries is the maximum number of retries that will be performed.
+	// By default, this is zero.
+	NumMaxRetries int
+
+	// MinRetryDelay is the minimum retry delay after which retry will be performed.
+	// If not set, the value is 0ns.
+	MinRetryDelay time.Duration
+
+	// MinThrottleDelay is the minimum retry delay when the request is throttled.
+	// If not set, the value is 0ns.
+	MinThrottleDelay time.Duration
+
+	// MaxRetryDelay is the maximum retry delay.
+	// If not set, the value is 0ns.
+	MaxRetryDelay time.Duration
+
+	// MaxThrottleDelay is the maximum retry delay when throttled.
+	// If not set, the value is 0ns.
+	MaxThrottleDelay time.Duration
 }
 
+const (
+	// DefaultRetryerMaxNumRetries sets maximum number of retries
+	DefaultRetryerMaxNumRetries = 3
+
+	// DefaultRetryerMinRetryDelay sets minimum retry delay
+	DefaultRetryerMinRetryDelay = 30 * time.Millisecond
+
+	// DefaultRetryerMinThrottleDelay sets minimum delay when throttled
+	DefaultRetryerMinThrottleDelay = 500 * time.Millisecond
+
+	// DefaultRetryerMaxRetryDelay sets maximum retry delay
+	DefaultRetryerMaxRetryDelay = 300 * time.Second
+
+	// DefaultRetryerMaxThrottleDelay sets maximum delay when throttled
+	DefaultRetryerMaxThrottleDelay = 300 * time.Second
+)
+
 // MaxRetries returns the number of maximum returns the service will use to make
 // an individual API request.
 func (d DefaultRetryer) MaxRetries() int {
 	return d.NumMaxRetries
 }
 
+// setRetryerDefaults sets the default values of the retryer if not set
+func (d *DefaultRetryer) setRetryerDefaults() {
+	if d.MinRetryDelay == 0 {
+		d.MinRetryDelay = DefaultRetryerMinRetryDelay
+	}
+	if d.MaxRetryDelay == 0 {
+		d.MaxRetryDelay = DefaultRetryerMaxRetryDelay
+	}
+	if d.MinThrottleDelay == 0 {
+		d.MinThrottleDelay = DefaultRetryerMinThrottleDelay
+	}
+	if d.MaxThrottleDelay == 0 {
+		d.MaxThrottleDelay = DefaultRetryerMaxThrottleDelay
+	}
+}
+
 // RetryRules returns the delay duration before retrying this request again
 func (d DefaultRetryer) RetryRules(r *request.Request) time.Duration {
-	// Set the upper limit of delay in retrying at ~five minutes
-	minTime := 30
-	throttle := d.shouldThrottle(r)
-	if throttle {
-		if delay, ok := getRetryDelay(r); ok {
-			return delay
-		}
-		minTime = 500
+	// If the number of max retries is zero, no retries will be performed.
+	if d.NumMaxRetries == 0 {
+		return 0
+	}
+
+	// Sets default values for any retryer members that are not set
+	d.setRetryerDefaults()
+
+	// minDelay is the minimum retryer delay
+	minDelay := d.MinRetryDelay
+
+	var initialDelay time.Duration
+
+	isThrottle := r.IsErrorThrottle()
+	if isThrottle {
+		if delay, ok := getRetryAfterDelay(r); ok {
+			initialDelay = delay
+		}
+		minDelay = d.MinThrottleDelay
 	}
 
 	retryCount := r.RetryCount
-	if throttle && retryCount > 8 {
-		retryCount = 8
-	} else if retryCount > 13 {
-		retryCount = 13
+
+	// maxDelay is the maximum retryer delay
+	maxDelay := d.MaxRetryDelay
+
+	if isThrottle {
+		maxDelay = d.MaxThrottleDelay
 	}
 
-	delay := (1 << uint(retryCount)) * (sdkrand.SeededRand.Intn(minTime) + minTime)
-	return time.Duration(delay) * time.Millisecond
+	var delay time.Duration
+
+	// Logic to cap the retry count based on the minDelay provided
+	actualRetryCount := int(math.Log2(float64(minDelay))) + 1
+	if actualRetryCount < 63-retryCount {
+		delay = time.Duration(1<<uint64(retryCount)) * getJitterDelay(minDelay)
+		if delay > maxDelay {
+			delay = getJitterDelay(maxDelay / 2)
+		}
+	} else {
+		delay = getJitterDelay(maxDelay / 2)
+	}
+	return delay + initialDelay
+}
+
+// getJitterDelay returns a jittered delay for retry
+func getJitterDelay(duration time.Duration) time.Duration {
+	return time.Duration(sdkrand.SeededRand.Int63n(int64(duration)) + int64(duration))
+}
 
 // ShouldRetry returns true if the request should be retried.
 func (d DefaultRetryer) ShouldRetry(r *request.Request) bool {
+
+	// ShouldRetry returns false if the number of max retries is 0.
+	if d.NumMaxRetries == 0 {
+		return false
+	}
+
 	// If one of the other handlers already set the retry state
 	// we don't want to override it based on the service's state
 	if r.Retryable != nil {
 		return *r.Retryable
 	}
-
-	if r.HTTPResponse.StatusCode >= 500 && r.HTTPResponse.StatusCode != 501 {
-		return true
-	}
-	return r.IsErrorRetryable() || d.shouldThrottle(r)
-}
-
-// ShouldThrottle returns true if the request should be throttled.
-func (d DefaultRetryer) shouldThrottle(r *request.Request) bool {
-	switch r.HTTPResponse.StatusCode {
-	case 429:
-	case 502:
-	case 503:
-	case 504:
-	default:
-		return r.IsErrorThrottle()
-	}
-
-	return true
+	return r.IsErrorRetryable() || r.IsErrorThrottle()
 }
 
 // This will look in the Retry-After header, RFC 7231, for how long
 // it will wait before attempting another request
-func getRetryDelay(r *request.Request) (time.Duration, bool) {
+func getRetryAfterDelay(r *request.Request) (time.Duration, bool) {
 	if !canUseRetryAfterHeader(r) {
 		return 0, false
 	}
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/client/no_op_retryer.go b/vendor/github.com/aws/aws-sdk-go/aws/client/no_op_retryer.go
new file mode 100644
index 000000000..881d575f0
--- /dev/null
+++ b/vendor/github.com/aws/aws-sdk-go/aws/client/no_op_retryer.go
@@ -0,0 +1,28 @@
+package client
+
+import (
+	"time"
+
+	"github.com/aws/aws-sdk-go/aws/request"
+)
+
+// NoOpRetryer provides a retryer that performs no retries.
+// It should be used when we do not want retries to be performed.
+type NoOpRetryer struct{}
+
+// MaxRetries returns the maximum number of retries the service will use to
+// make an individual API request; for NoOpRetryer this is always zero.
+func (d NoOpRetryer) MaxRetries() int {
+	return 0
+}
+
+// ShouldRetry will always return false for NoOpRetryer, as it should never retry.
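+//
+// A usage sketch (assuming the usual session setup): retries can be
+// disabled client-wide with aws.Config{Retryer: client.NoOpRetryer{}}.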
+func (d NoOpRetryer) ShouldRetry(_ *request.Request) bool {
+	return false
+}
+
+// RetryRules returns the delay duration before retrying this request again;
+// since NoOpRetryer does not retry, RetryRules always returns 0.
+func (d NoOpRetryer) RetryRules(_ *request.Request) time.Duration {
+	return 0
+}
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/config.go b/vendor/github.com/aws/aws-sdk-go/aws/config.go
index 10634d173..fd1e240f6 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/config.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/config.go
@@ -20,7 +20,7 @@ type RequestRetryer interface{}
 // A Config provides service configuration for service clients. By default,
 // all clients will use the defaults.DefaultConfig structure.
 //
-// // Create Session with MaxRetry configuration to be shared by multiple
+// // Create Session with MaxRetries configuration to be shared by multiple
 // // service clients.
 // sess := session.Must(session.NewSession(&aws.Config{
 // MaxRetries: aws.Int(3),
@@ -251,7 +251,7 @@ type Config struct {
 // NewConfig returns a new Config pointer that can be chained with builder
 // methods to set multiple configuration values inline without using pointers.
 //
-// // Create Session with MaxRetry configuration to be shared by multiple
+// // Create Session with MaxRetries configuration to be shared by multiple
 // // service clients.
 // sess := session.Must(session.NewSession(aws.NewConfig().
 // WithMaxRetries(3),
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/convert_types.go b/vendor/github.com/aws/aws-sdk-go/aws/convert_types.go
index ff5d58e06..4e076c183 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/convert_types.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/convert_types.go
@@ -179,6 +179,242 @@ func IntValueMap(src map[string]*int) map[string]int {
 	return dst
 }
 
+// Uint returns a pointer to the uint value passed in.
+func Uint(v uint) *uint {
+	return &v
+}
+
+// UintValue returns the value of the uint pointer passed in or
+// 0 if the pointer is nil.
+func UintValue(v *uint) uint {
+	if v != nil {
+		return *v
+	}
+	return 0
+}
+
+// UintSlice converts a slice of uint values into a slice of
+// uint pointers
+func UintSlice(src []uint) []*uint {
+	dst := make([]*uint, len(src))
+	for i := 0; i < len(src); i++ {
+		dst[i] = &(src[i])
+	}
+	return dst
+}
+
+// UintValueSlice converts a slice of uint pointers into a slice of
+// uint values
+func UintValueSlice(src []*uint) []uint {
+	dst := make([]uint, len(src))
+	for i := 0; i < len(src); i++ {
+		if src[i] != nil {
+			dst[i] = *(src[i])
+		}
+	}
+	return dst
+}
+
+// UintMap converts a string map of uint values into a string
+// map of uint pointers
+func UintMap(src map[string]uint) map[string]*uint {
+	dst := make(map[string]*uint)
+	for k, val := range src {
+		v := val
+		dst[k] = &v
+	}
+	return dst
+}
+
+// UintValueMap converts a string map of uint pointers into a string
+// map of uint values
+func UintValueMap(src map[string]*uint) map[string]uint {
+	dst := make(map[string]uint)
+	for k, val := range src {
+		if val != nil {
+			dst[k] = *val
+		}
+	}
+	return dst
+}
+
+// Int8 returns a pointer to the int8 value passed in.
+func Int8(v int8) *int8 {
+	return &v
+}
+
+// Int8Value returns the value of the int8 pointer passed in or
+// 0 if the pointer is nil.
+func Int8Value(v *int8) int8 { + if v != nil { + return *v + } + return 0 +} + +// Int8Slice converts a slice of int8 values into a slice of +// int8 pointers +func Int8Slice(src []int8) []*int8 { + dst := make([]*int8, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// Int8ValueSlice converts a slice of int8 pointers into a slice of +// int8 values +func Int8ValueSlice(src []*int8) []int8 { + dst := make([]int8, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// Int8Map converts a string map of int8 values into a string +// map of int8 pointers +func Int8Map(src map[string]int8) map[string]*int8 { + dst := make(map[string]*int8) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// Int8ValueMap converts a string map of int8 pointers into a string +// map of int8 values +func Int8ValueMap(src map[string]*int8) map[string]int8 { + dst := make(map[string]int8) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + +// Int16 returns a pointer to the int16 value passed in. +func Int16(v int16) *int16 { + return &v +} + +// Int16Value returns the value of the int16 pointer passed in or +// 0 if the pointer is nil. +func Int16Value(v *int16) int16 { + if v != nil { + return *v + } + return 0 +} + +// Int16Slice converts a slice of int16 values into a slice of +// int16 pointers +func Int16Slice(src []int16) []*int16 { + dst := make([]*int16, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// Int16ValueSlice converts a slice of int16 pointers into a slice of +// int16 values +func Int16ValueSlice(src []*int16) []int16 { + dst := make([]int16, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// Int16Map converts a string map of int16 values into a string +// map of int16 pointers +func Int16Map(src map[string]int16) map[string]*int16 { + dst := make(map[string]*int16) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// Int16ValueMap converts a string map of int16 pointers into a string +// map of int16 values +func Int16ValueMap(src map[string]*int16) map[string]int16 { + dst := make(map[string]int16) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + +// Int32 returns a pointer to the int32 value passed in. +func Int32(v int32) *int32 { + return &v +} + +// Int32Value returns the value of the int32 pointer passed in or +// 0 if the pointer is nil. 
+func Int32Value(v *int32) int32 { + if v != nil { + return *v + } + return 0 +} + +// Int32Slice converts a slice of int32 values into a slice of +// int32 pointers +func Int32Slice(src []int32) []*int32 { + dst := make([]*int32, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// Int32ValueSlice converts a slice of int32 pointers into a slice of +// int32 values +func Int32ValueSlice(src []*int32) []int32 { + dst := make([]int32, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// Int32Map converts a string map of int32 values into a string +// map of int32 pointers +func Int32Map(src map[string]int32) map[string]*int32 { + dst := make(map[string]*int32) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// Int32ValueMap converts a string map of int32 pointers into a string +// map of int32 values +func Int32ValueMap(src map[string]*int32) map[string]int32 { + dst := make(map[string]int32) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + // Int64 returns a pointer to the int64 value passed in. func Int64(v int64) *int64 { return &v @@ -238,6 +474,301 @@ func Int64ValueMap(src map[string]*int64) map[string]int64 { return dst } +// Uint8 returns a pointer to the uint8 value passed in. +func Uint8(v uint8) *uint8 { + return &v +} + +// Uint8Value returns the value of the uint8 pointer passed in or +// 0 if the pointer is nil. +func Uint8Value(v *uint8) uint8 { + if v != nil { + return *v + } + return 0 +} + +// Uint8Slice converts a slice of uint8 values into a slice of +// uint8 pointers +func Uint8Slice(src []uint8) []*uint8 { + dst := make([]*uint8, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// Uint8ValueSlice converts a slice of uint8 pointers into a slice of +// uint8 values +func Uint8ValueSlice(src []*uint8) []uint8 { + dst := make([]uint8, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// Uint8Map converts a string map of uint8 values into a string +// map of uint8 pointers +func Uint8Map(src map[string]uint8) map[string]*uint8 { + dst := make(map[string]*uint8) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// Uint8ValueMap converts a string map of uint8 pointers into a string +// map of uint8 values +func Uint8ValueMap(src map[string]*uint8) map[string]uint8 { + dst := make(map[string]uint8) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + +// Uint16 returns a pointer to the uint16 value passed in. +func Uint16(v uint16) *uint16 { + return &v +} + +// Uint16Value returns the value of the uint16 pointer passed in or +// 0 if the pointer is nil. 
+func Uint16Value(v *uint16) uint16 { + if v != nil { + return *v + } + return 0 +} + +// Uint16Slice converts a slice of uint16 values into a slice of +// uint16 pointers +func Uint16Slice(src []uint16) []*uint16 { + dst := make([]*uint16, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// Uint16ValueSlice converts a slice of uint16 pointers into a slice of +// uint16 values +func Uint16ValueSlice(src []*uint16) []uint16 { + dst := make([]uint16, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// Uint16Map converts a string map of uint16 values into a string +// map of uint16 pointers +func Uint16Map(src map[string]uint16) map[string]*uint16 { + dst := make(map[string]*uint16) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// Uint16ValueMap converts a string map of uint16 pointers into a string +// map of uint16 values +func Uint16ValueMap(src map[string]*uint16) map[string]uint16 { + dst := make(map[string]uint16) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + +// Uint32 returns a pointer to the uint32 value passed in. +func Uint32(v uint32) *uint32 { + return &v +} + +// Uint32Value returns the value of the uint32 pointer passed in or +// 0 if the pointer is nil. +func Uint32Value(v *uint32) uint32 { + if v != nil { + return *v + } + return 0 +} + +// Uint32Slice converts a slice of uint32 values into a slice of +// uint32 pointers +func Uint32Slice(src []uint32) []*uint32 { + dst := make([]*uint32, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// Uint32ValueSlice converts a slice of uint32 pointers into a slice of +// uint32 values +func Uint32ValueSlice(src []*uint32) []uint32 { + dst := make([]uint32, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// Uint32Map converts a string map of uint32 values into a string +// map of uint32 pointers +func Uint32Map(src map[string]uint32) map[string]*uint32 { + dst := make(map[string]*uint32) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// Uint32ValueMap converts a string map of uint32 pointers into a string +// map of uint32 values +func Uint32ValueMap(src map[string]*uint32) map[string]uint32 { + dst := make(map[string]uint32) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + +// Uint64 returns a pointer to the uint64 value passed in. +func Uint64(v uint64) *uint64 { + return &v +} + +// Uint64Value returns the value of the uint64 pointer passed in or +// 0 if the pointer is nil. 
+func Uint64Value(v *uint64) uint64 { + if v != nil { + return *v + } + return 0 +} + +// Uint64Slice converts a slice of uint64 values into a slice of +// uint64 pointers +func Uint64Slice(src []uint64) []*uint64 { + dst := make([]*uint64, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// Uint64ValueSlice converts a slice of uint64 pointers into a slice of +// uint64 values +func Uint64ValueSlice(src []*uint64) []uint64 { + dst := make([]uint64, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// Uint64Map converts a string map of uint64 values into a string +// map of uint64 pointers +func Uint64Map(src map[string]uint64) map[string]*uint64 { + dst := make(map[string]*uint64) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// Uint64ValueMap converts a string map of uint64 pointers into a string +// map of uint64 values +func Uint64ValueMap(src map[string]*uint64) map[string]uint64 { + dst := make(map[string]uint64) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + +// Float32 returns a pointer to the float32 value passed in. +func Float32(v float32) *float32 { + return &v +} + +// Float32Value returns the value of the float32 pointer passed in or +// 0 if the pointer is nil. +func Float32Value(v *float32) float32 { + if v != nil { + return *v + } + return 0 +} + +// Float32Slice converts a slice of float32 values into a slice of +// float32 pointers +func Float32Slice(src []float32) []*float32 { + dst := make([]*float32, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// Float32ValueSlice converts a slice of float32 pointers into a slice of +// float32 values +func Float32ValueSlice(src []*float32) []float32 { + dst := make([]float32, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// Float32Map converts a string map of float32 values into a string +// map of float32 pointers +func Float32Map(src map[string]float32) map[string]*float32 { + dst := make(map[string]*float32) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// Float32ValueMap converts a string map of float32 pointers into a string +// map of float32 values +func Float32ValueMap(src map[string]*float32) map[string]float32 { + dst := make(map[string]float32) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + // Float64 returns a pointer to the float64 value passed in. func Float64(v float64) *float64 { return &v diff --git a/vendor/github.com/aws/aws-sdk-go/aws/corehandlers/handlers.go b/vendor/github.com/aws/aws-sdk-go/aws/corehandlers/handlers.go index f8853d78a..0c60e612e 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/corehandlers/handlers.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/corehandlers/handlers.go @@ -159,9 +159,9 @@ func handleSendError(r *request.Request, err error) { Body: ioutil.NopCloser(bytes.NewReader([]byte{})), } } - // Catch all other request errors. + // Catch all request errors, and let the default retrier determine + // if the error is retryable. r.Error = awserr.New("RequestError", "send request failed", err) - r.Retryable = aws.Bool(true) // network errors are retryable // Override the error with a context canceled error, if that was canceled. 
ctx := r.Context() @@ -184,37 +184,39 @@ var ValidateResponseHandler = request.NamedHandler{Name: "core.ValidateResponseH // AfterRetryHandler performs final checks to determine if the request should // be retried and how long to delay. -var AfterRetryHandler = request.NamedHandler{Name: "core.AfterRetryHandler", Fn: func(r *request.Request) { - // If one of the other handlers already set the retry state - // we don't want to override it based on the service's state - if r.Retryable == nil || aws.BoolValue(r.Config.EnforceShouldRetryCheck) { - r.Retryable = aws.Bool(r.ShouldRetry(r)) - } - - if r.WillRetry() { - r.RetryDelay = r.RetryRules(r) - - if sleepFn := r.Config.SleepDelay; sleepFn != nil { - // Support SleepDelay for backwards compatibility and testing - sleepFn(r.RetryDelay) - } else if err := aws.SleepWithContext(r.Context(), r.RetryDelay); err != nil { - r.Error = awserr.New(request.CanceledErrorCode, - "request context canceled", err) - r.Retryable = aws.Bool(false) - return +var AfterRetryHandler = request.NamedHandler{ + Name: "core.AfterRetryHandler", + Fn: func(r *request.Request) { + // If one of the other handlers already set the retry state + // we don't want to override it based on the service's state + if r.Retryable == nil || aws.BoolValue(r.Config.EnforceShouldRetryCheck) { + r.Retryable = aws.Bool(r.ShouldRetry(r)) } - // when the expired token exception occurs the credentials - // need to be expired locally so that the next request to - // get credentials will trigger a credentials refresh. - if r.IsErrorExpired() { - r.Config.Credentials.Expire() - } + if r.WillRetry() { + r.RetryDelay = r.RetryRules(r) - r.RetryCount++ - r.Error = nil - } -}} + if sleepFn := r.Config.SleepDelay; sleepFn != nil { + // Support SleepDelay for backwards compatibility and testing + sleepFn(r.RetryDelay) + } else if err := aws.SleepWithContext(r.Context(), r.RetryDelay); err != nil { + r.Error = awserr.New(request.CanceledErrorCode, + "request context canceled", err) + r.Retryable = aws.Bool(false) + return + } + + // when the expired token exception occurs the credentials + // need to be expired locally so that the next request to + // get credentials will trigger a credentials refresh. + if r.IsErrorExpired() { + r.Config.Credentials.Expire() + } + + r.RetryCount++ + r.Error = nil + } + }} // ValidateEndpointHandler is a request handler to validate a request had the // appropriate Region and Endpoint set. Will set r.Error if the endpoint or diff --git a/vendor/github.com/aws/aws-sdk-go/aws/credentials/endpointcreds/provider.go b/vendor/github.com/aws/aws-sdk-go/aws/credentials/endpointcreds/provider.go index c2b2c5d65..1a7af53a4 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/credentials/endpointcreds/provider.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/credentials/endpointcreds/provider.go @@ -98,8 +98,8 @@ func NewProviderClient(cfg aws.Config, handlers request.Handlers, endpoint strin return p } -// NewCredentialsClient returns a Credentials wrapper for retrieving credentials -// from an arbitrary endpoint concurrently. The client will request the +// NewCredentialsClient returns a pointer to a new Credentials object +// wrapping the endpoint credentials Provider. 
 func NewCredentialsClient(cfg aws.Config, handlers request.Handlers, endpoint string, options ...func(*Provider)) *credentials.Credentials {
 	return credentials.NewCredentials(NewProviderClient(cfg, handlers, endpoint, options...))
 }
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/credentials/stscreds/web_identity_provider.go b/vendor/github.com/aws/aws-sdk-go/aws/credentials/stscreds/web_identity_provider.go
index 20510d9ae..b20b63394 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/credentials/stscreds/web_identity_provider.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/credentials/stscreds/web_identity_provider.go
@@ -76,12 +76,15 @@ func (p *WebIdentityRoleProvider) Retrieve() (credentials.Value, error) {
 		// uses unix time in nanoseconds to uniquely identify sessions.
 		sessionName = strconv.FormatInt(now().UnixNano(), 10)
 	}
-	resp, err := p.client.AssumeRoleWithWebIdentity(&sts.AssumeRoleWithWebIdentityInput{
+	req, resp := p.client.AssumeRoleWithWebIdentityRequest(&sts.AssumeRoleWithWebIdentityInput{
 		RoleArn:          &p.roleARN,
 		RoleSessionName:  &sessionName,
 		WebIdentityToken: aws.String(string(b)),
 	})
-	if err != nil {
+	// InvalidIdentityToken error is a temporary error that can occur
+	// when assuming a role with a JWT web identity token.
+	req.RetryErrorCodes = append(req.RetryErrorCodes, sts.ErrCodeInvalidIdentityTokenException)
+	if err := req.Send(); err != nil {
 		return credentials.Value{}, awserr.New(ErrCodeWebIdentity, "failed to retrieve credentials", err)
 	}
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/csm/metric_chan.go b/vendor/github.com/aws/aws-sdk-go/aws/csm/metric_chan.go
index 514fc3739..82a3e345e 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/csm/metric_chan.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/csm/metric_chan.go
@@ -16,25 +16,26 @@ var (
 
 type metricChan struct {
 	ch     chan metric
-	paused int64
+	paused *int64
 }
 
 func newMetricChan(size int) metricChan {
 	return metricChan{
-		ch: make(chan metric, size),
+		ch:     make(chan metric, size),
+		paused: new(int64),
 	}
 }
 
 func (ch *metricChan) Pause() {
-	atomic.StoreInt64(&ch.paused, pausedEnum)
+	atomic.StoreInt64(ch.paused, pausedEnum)
 }
 
 func (ch *metricChan) Continue() {
-	atomic.StoreInt64(&ch.paused, runningEnum)
+	atomic.StoreInt64(ch.paused, runningEnum)
 }
 
 func (ch *metricChan) IsPaused() bool {
-	v := atomic.LoadInt64(&ch.paused)
+	v := atomic.LoadInt64(ch.paused)
 	return v == pausedEnum
 }
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/api.go b/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/api.go
index 2c8d5f56d..d126764ce 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/api.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/api.go
@@ -152,18 +152,19 @@ type EC2IAMInfo struct {
 // An EC2InstanceIdentityDocument provides the shape for unmarshaling
 // an instance identity document
 type EC2InstanceIdentityDocument struct {
-	DevpayProductCodes []string  `json:"devpayProductCodes"`
-	AvailabilityZone   string    `json:"availabilityZone"`
-	PrivateIP          string    `json:"privateIp"`
-	Version            string    `json:"version"`
-	Region             string    `json:"region"`
-	InstanceID         string    `json:"instanceId"`
-	BillingProducts    []string  `json:"billingProducts"`
-	InstanceType       string    `json:"instanceType"`
-	AccountID          string    `json:"accountId"`
-	PendingTime        time.Time `json:"pendingTime"`
-	ImageID            string    `json:"imageId"`
-	KernelID           string    `json:"kernelId"`
-	RamdiskID          string    `json:"ramdiskId"`
-	Architecture       string    `json:"architecture"`
+	DevpayProductCodes      []string `json:"devpayProductCodes"`
+	MarketplaceProductCodes
[]string `json:"marketplaceProductCodes"` + AvailabilityZone string `json:"availabilityZone"` + PrivateIP string `json:"privateIp"` + Version string `json:"version"` + Region string `json:"region"` + InstanceID string `json:"instanceId"` + BillingProducts []string `json:"billingProducts"` + InstanceType string `json:"instanceType"` + AccountID string `json:"accountId"` + PendingTime time.Time `json:"pendingTime"` + ImageID string `json:"imageId"` + KernelID string `json:"kernelId"` + RamdiskID string `json:"ramdiskId"` + Architecture string `json:"architecture"` } diff --git a/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/service.go b/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/service.go index f0c1d31e7..4c5636e35 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/service.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/service.go @@ -123,7 +123,7 @@ func unmarshalHandler(r *request.Request) { defer r.HTTPResponse.Body.Close() b := &bytes.Buffer{} if _, err := io.Copy(b, r.HTTPResponse.Body); err != nil { - r.Error = awserr.New(request.ErrCodeSerialization, "unable to unmarshal EC2 metadata respose", err) + r.Error = awserr.New(request.ErrCodeSerialization, "unable to unmarshal EC2 metadata response", err) return } @@ -136,7 +136,7 @@ func unmarshalError(r *request.Request) { defer r.HTTPResponse.Body.Close() b := &bytes.Buffer{} if _, err := io.Copy(b, r.HTTPResponse.Body); err != nil { - r.Error = awserr.New(request.ErrCodeSerialization, "unable to unmarshal EC2 metadata error respose", err) + r.Error = awserr.New(request.ErrCodeSerialization, "unable to unmarshal EC2 metadata error response", err) return } diff --git a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go index 50b6c7af1..452cefda6 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go @@ -11,6 +11,8 @@ const ( AwsPartitionID = "aws" // AWS Standard partition. AwsCnPartitionID = "aws-cn" // AWS China partition. AwsUsGovPartitionID = "aws-us-gov" // AWS GovCloud (US) partition. + AwsIsoPartitionID = "aws-iso" // AWS ISO (US) partition. + AwsIsoBPartitionID = "aws-iso-b" // AWS ISOB (US) partition. ) // AWS Standard partition's regions. @@ -47,8 +49,18 @@ const ( UsGovWest1RegionID = "us-gov-west-1" // AWS GovCloud (US). ) +// AWS ISO (US) partition's regions. +const ( + UsIsoEast1RegionID = "us-iso-east-1" // US ISO East. +) + +// AWS ISOB (US) partition's regions. +const ( + UsIsobEast1RegionID = "us-isob-east-1" // US ISOB East (Ohio). +) + // DefaultResolver returns an Endpoint resolver that will be able -// to resolve endpoints for: AWS Standard, AWS China, and AWS GovCloud (US). +// to resolve endpoints for: AWS Standard, AWS China, AWS GovCloud (US), AWS ISO (US), and AWS ISOB (US). // // Use DefaultPartitions() to get the list of the default partitions. func DefaultResolver() Resolver { @@ -56,7 +68,7 @@ func DefaultResolver() Resolver { } // DefaultPartitions returns a list of the partitions the SDK is bundled -// with. The available partitions are: AWS Standard, AWS China, and AWS GovCloud (US). +// with. The available partitions are: AWS Standard, AWS China, AWS GovCloud (US), AWS ISO (US), and AWS ISOB (US). 
// // partitions := endpoints.DefaultPartitions // for _, p := range partitions { @@ -70,6 +82,8 @@ var defaultPartitions = partitions{ awsPartition, awscnPartition, awsusgovPartition, + awsisoPartition, + awsisobPartition, } // AwsPartition returns the Resolver for AWS Standard. @@ -320,6 +334,7 @@ var awsPartition = partition{ "ap-northeast-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "us-east-1": endpoint{}, "us-west-2": endpoint{}, @@ -339,6 +354,7 @@ var awsPartition = partition{ "api.sagemaker": service{ Endpoints: endpoints{ + "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, @@ -346,8 +362,11 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-1-fips": endpoint{ Hostname: "api-fips.sagemaker.us-east-1.amazonaws.com", @@ -485,6 +504,7 @@ var awsPartition = partition{ "athena": service{ Endpoints: endpoints{ + "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, @@ -581,6 +601,7 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, + "me-south-1": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -728,6 +749,7 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, + "me-south-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, "us-west-1": endpoint{}, @@ -903,6 +925,7 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, @@ -1031,6 +1054,16 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, + "connect": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, + "us-east-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, "cur": service{ Endpoints: endpoints{ @@ -1069,10 +1102,35 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "datasync-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "datasync-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "datasync-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "datasync-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, "dax": service{ @@ -1100,6 +1158,7 @@ var awsPartition = partition{ "directconnect": service{ Endpoints: endpoints{ + "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": 
endpoint{}, @@ -1529,11 +1588,12 @@ var awsPartition = partition{ Region: "us-west-1", }, }, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, "events": service{ @@ -1562,6 +1622,7 @@ var awsPartition = partition{ "firehose": service{ Endpoints: endpoints{ + "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, @@ -1605,6 +1666,7 @@ var awsPartition = partition{ "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "us-east-1": endpoint{}, @@ -1672,6 +1734,7 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, + "me-south-1": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -1723,11 +1786,36 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, + "me-south-1": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "us-east-1-fips": endpoint{ + Hostname: "guardduty-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "us-east-2": endpoint{}, + "us-east-2-fips": endpoint{ + Hostname: "guardduty-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "us-west-1": endpoint{}, + "us-west-1-fips": endpoint{ + Hostname: "guardduty-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "us-west-2": endpoint{}, + "us-west-2-fips": endpoint{ + Hostname: "guardduty-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, }, }, "health": service{ @@ -1788,6 +1876,7 @@ var awsPartition = partition{ }, }, Endpoints: endpoints{ + "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, @@ -1799,6 +1888,7 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, + "me-south-1": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -1936,11 +2026,14 @@ var awsPartition = partition{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, "us-west-2": endpoint{}, @@ -1980,6 +2073,16 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, + "lakeformation": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, "lambda": service{ Endpoints: endpoints{ @@ -2018,6 +2121,7 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, + "me-south-1": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -2088,6 +2192,7 @@ var 
awsPartition = partition{ "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, @@ -2144,6 +2249,7 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, @@ -2441,6 +2547,16 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, + "qldb": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, "ram": service{ Endpoints: endpoints{ @@ -2541,12 +2657,36 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "resource-groups-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "resource-groups-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "resource-groups-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "resource-groups-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, "robomaker": service{ @@ -2616,6 +2756,7 @@ var awsPartition = partition{ "runtime.sagemaker": service{ Endpoints: endpoints{ + "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, @@ -2623,8 +2764,11 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-1-fips": endpoint{ Hostname: "runtime-fips.sagemaker.us-east-1.amazonaws.com", @@ -3056,6 +3200,7 @@ var awsPartition = partition{ "servicediscovery": service{ Endpoints: endpoints{ + "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, @@ -3063,9 +3208,11 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, + "me-south-1": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -3073,6 +3220,16 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, + "session.qldb": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, "shield": service{ IsRegionalized: boxedFalse, Defaults: endpoint{ @@ -3098,6 +3255,7 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, + "me-south-1": 
endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -3251,6 +3409,7 @@ var awsPartition = partition{ "storagegateway": service{ Endpoints: endpoints{ + "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, @@ -3471,9 +3630,11 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, "us-west-1": endpoint{}, @@ -3601,6 +3762,7 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, + "me-south-1": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -3895,7 +4057,8 @@ var awscnPartition = partition{ }, }, Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "kinesis": service{ @@ -4296,6 +4459,12 @@ var awsusgovPartition = partition{ "datasync": service{ Endpoints: endpoints{ + "fips-us-gov-west-1": endpoint{ + Hostname: "datasync-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, "us-gov-west-1": endpoint{}, }, }, @@ -4469,6 +4638,12 @@ var awsusgovPartition = partition{ "us-gov-west-1": endpoint{}, }, }, + "health": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, "iam": service{ PartitionEndpoint: "aws-us-gov-global", IsRegionalized: boxedFalse, @@ -4564,6 +4739,23 @@ var awsusgovPartition = partition{ "us-gov-west-1": endpoint{}, }, }, + "neptune": service{ + + Endpoints: endpoints{ + "us-gov-east-1": endpoint{ + Hostname: "rds.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "us-gov-west-1": endpoint{ + Hostname: "rds.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, + }, + }, "organizations": service{ PartitionEndpoint: "aws-us-gov-global", IsRegionalized: boxedFalse, @@ -4586,6 +4778,7 @@ var awsusgovPartition = partition{ "ram": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, }, @@ -4609,6 +4802,19 @@ var awsusgovPartition = partition{ "us-gov-west-1": endpoint{}, }, }, + "route53": service{ + PartitionEndpoint: "aws-us-gov-global", + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "aws-us-gov-global": endpoint{ + Hostname: "route53.us-gov.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, + }, + }, "runtime.sagemaker": service{ Endpoints: endpoints{ @@ -4689,11 +4895,26 @@ var awsusgovPartition = partition{ Protocols: []string{"https"}, }, Endpoints: endpoints{ + "us-gov-east-1": endpoint{ + Protocols: []string{"https"}, + }, "us-gov-west-1": endpoint{ Protocols: []string{"https"}, }, }, }, + "servicecatalog": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + "us-gov-west-1-fips": endpoint{ + Hostname: "servicecatalog-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, + }, + }, "sms": service{ Endpoints: endpoints{ @@ -4819,3 +5040,612 @@ var awsusgovPartition = partition{ }, }, } + +// AwsIsoPartition returns the Resolver for AWS ISO (US). 
+func AwsIsoPartition() Partition { + return awsisoPartition.Partition() +} + +var awsisoPartition = partition{ + ID: "aws-iso", + Name: "AWS ISO (US)", + DNSSuffix: "c2s.ic.gov", + RegionRegex: regionRegex{ + Regexp: func() *regexp.Regexp { + reg, _ := regexp.Compile("^us\\-iso\\-\\w+\\-\\d+$") + return reg + }(), + }, + Defaults: endpoint{ + Hostname: "{service}.{region}.{dnsSuffix}", + Protocols: []string{"https"}, + SignatureVersions: []string{"v4"}, + }, + Regions: regions{ + "us-iso-east-1": region{ + Description: "US ISO East", + }, + }, + Services: services{ + "api.ecr": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{ + Hostname: "api.ecr.us-iso-east-1.c2s.ic.gov", + CredentialScope: credentialScope{ + Region: "us-iso-east-1", + }, + }, + }, + }, + "application-autoscaling": service{ + Defaults: endpoint{ + Hostname: "autoscaling.{region}.amazonaws.com", + Protocols: []string{"http", "https"}, + CredentialScope: credentialScope{ + Service: "application-autoscaling", + }, + }, + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "autoscaling": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{ + Protocols: []string{"http", "https"}, + }, + }, + }, + "cloudformation": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "cloudtrail": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "codedeploy": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "config": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "datapipeline": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "directconnect": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "dms": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "ds": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "dynamodb": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{ + Protocols: []string{"http", "https"}, + }, + }, + }, + "ec2": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "ec2metadata": service{ + PartitionEndpoint: "aws-global", + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "aws-global": endpoint{ + Hostname: "169.254.169.254/latest", + Protocols: []string{"http"}, + }, + }, + }, + "ecs": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "elasticache": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "elasticloadbalancing": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{ + Protocols: []string{"http", "https"}, + }, + }, + }, + "elasticmapreduce": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{ + Protocols: []string{"https"}, + }, + }, + }, + "events": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "glacier": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{ + Protocols: []string{"http", "https"}, + }, + }, + }, + "health": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "iam": service{ + PartitionEndpoint: "aws-iso-global", + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "aws-iso-global": endpoint{ + Hostname: "iam.us-iso-east-1.c2s.ic.gov", + CredentialScope: credentialScope{ + Region: "us-iso-east-1", + }, + }, + }, + }, + "kinesis": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "kms": service{ + + Endpoints: endpoints{ 
+ "ProdFips": endpoint{ + Hostname: "kms-fips.us-iso-east-1.c2s.ic.gov", + CredentialScope: credentialScope{ + Region: "us-iso-east-1", + }, + }, + "us-iso-east-1": endpoint{}, + }, + }, + "lambda": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "logs": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "monitoring": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "rds": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "redshift": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "route53": service{ + PartitionEndpoint: "aws-iso-global", + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "aws-iso-global": endpoint{ + Hostname: "route53.c2s.ic.gov", + CredentialScope: credentialScope{ + Region: "us-iso-east-1", + }, + }, + }, + }, + "s3": service{ + Defaults: endpoint{ + SignatureVersions: []string{"s3v4"}, + }, + Endpoints: endpoints{ + "us-iso-east-1": endpoint{ + Protocols: []string{"http", "https"}, + SignatureVersions: []string{"s3v4"}, + }, + }, + }, + "snowball": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "sns": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{ + Protocols: []string{"http", "https"}, + }, + }, + }, + "sqs": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{ + Protocols: []string{"http", "https"}, + }, + }, + }, + "states": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "streams.dynamodb": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + CredentialScope: credentialScope{ + Service: "dynamodb", + }, + }, + Endpoints: endpoints{ + "us-iso-east-1": endpoint{ + Protocols: []string{"http", "https"}, + }, + }, + }, + "sts": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "support": service{ + PartitionEndpoint: "aws-iso-global", + + Endpoints: endpoints{ + "aws-iso-global": endpoint{ + Hostname: "support.us-iso-east-1.c2s.ic.gov", + CredentialScope: credentialScope{ + Region: "us-iso-east-1", + }, + }, + }, + }, + "swf": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + "workspaces": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, + }, +} + +// AwsIsoBPartition returns the Resolver for AWS ISOB (US). 
+func AwsIsoBPartition() Partition { + return awsisobPartition.Partition() +} + +var awsisobPartition = partition{ + ID: "aws-iso-b", + Name: "AWS ISOB (US)", + DNSSuffix: "sc2s.sgov.gov", + RegionRegex: regionRegex{ + Regexp: func() *regexp.Regexp { + reg, _ := regexp.Compile("^us\\-isob\\-\\w+\\-\\d+$") + return reg + }(), + }, + Defaults: endpoint{ + Hostname: "{service}.{region}.{dnsSuffix}", + Protocols: []string{"https"}, + SignatureVersions: []string{"v4"}, + }, + Regions: regions{ + "us-isob-east-1": region{ + Description: "US ISOB East (Ohio)", + }, + }, + Services: services{ + "application-autoscaling": service{ + Defaults: endpoint{ + Hostname: "autoscaling.{region}.amazonaws.com", + Protocols: []string{"http", "https"}, + CredentialScope: credentialScope{ + Service: "application-autoscaling", + }, + }, + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "autoscaling": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "cloudformation": service{ + + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "cloudtrail": service{ + + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "config": service{ + + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "directconnect": service{ + + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "dms": service{ + + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "dynamodb": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "ec2": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "ec2metadata": service{ + PartitionEndpoint: "aws-global", + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "aws-global": endpoint{ + Hostname: "169.254.169.254/latest", + Protocols: []string{"http"}, + }, + }, + }, + "elasticache": service{ + + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "elasticloadbalancing": service{ + + Endpoints: endpoints{ + "us-isob-east-1": endpoint{ + Protocols: []string{"https"}, + }, + }, + }, + "elasticmapreduce": service{ + + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "events": service{ + + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "glacier": service{ + + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "health": service{ + + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "iam": service{ + PartitionEndpoint: "aws-iso-b-global", + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "aws-iso-b-global": endpoint{ + Hostname: "iam.us-isob-east-1.sc2s.sgov.gov", + CredentialScope: credentialScope{ + Region: "us-isob-east-1", + }, + }, + }, + }, + "kinesis": service{ + + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "kms": service{ + + Endpoints: endpoints{ + "ProdFips": endpoint{ + Hostname: "kms-fips.us-isob-east-1.sc2s.sgov.gov", + CredentialScope: credentialScope{ + Region: "us-isob-east-1", + }, + }, + "us-isob-east-1": endpoint{}, + }, + }, + "logs": service{ + + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "monitoring": service{ + + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "rds": service{ + + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "redshift": service{ + + Endpoints: 
endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "s3": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + SignatureVersions: []string{"s3v4"}, + }, + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "snowball": service{ + + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "sns": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "sqs": service{ + Defaults: endpoint{ + SSLCommonName: "{region}.queue.{dnsSuffix}", + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "states": service{ + + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "streams.dynamodb": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + CredentialScope: credentialScope{ + Service: "dynamodb", + }, + }, + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "sts": service{ + + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + "support": service{ + PartitionEndpoint: "aws-iso-b-global", + + Endpoints: endpoints{ + "aws-iso-b-global": endpoint{ + Hostname: "support.us-isob-east-1.sc2s.sgov.gov", + CredentialScope: credentialScope{ + Region: "us-isob-east-1", + }, + }, + }, + }, + "swf": service{ + + Endpoints: endpoints{ + "us-isob-east-1": endpoint{}, + }, + }, + }, +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/handlers.go b/vendor/github.com/aws/aws-sdk-go/aws/request/handlers.go index 627ec722c..185b07318 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/request/handlers.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/handlers.go @@ -23,7 +23,7 @@ type Handlers struct { Complete HandlerList } -// Copy returns of this handler's lists. +// Copy returns a copy of this handler's lists. func (h *Handlers) Copy() Handlers { return Handlers{ Validate: h.Validate.copy(), @@ -42,7 +42,7 @@ func (h *Handlers) Copy() Handlers { } } -// Clear removes callback functions for all handlers +// Clear removes callback functions for all handlers. func (h *Handlers) Clear() { h.Validate.Clear() h.Build.Clear() diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/request.go b/vendor/github.com/aws/aws-sdk-go/aws/request/request.go index e7c9b2b61..8e332cce6 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/request/request.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/request.go @@ -4,7 +4,6 @@ import ( "bytes" "fmt" "io" - "net" "net/http" "net/url" "reflect" @@ -65,6 +64,15 @@ type Request struct { LastSignedAt time.Time DisableFollowRedirects bool + // Additional API error codes that should be retried. IsErrorRetryable + // will consider these codes in addition to its built in cases. + RetryErrorCodes []string + + // Additional API error codes that should be retried with throttle backoff + // delay. IsErrorThrottle will consider these codes in addition to its + // built in cases. + ThrottleErrorCodes []string + // A value greater than 0 instructs the request to be signed as Presigned URL // You should not set this field directly. Instead use Request's // Presign or PresignRequest methods. 
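The RetryErrorCodes and ThrottleErrorCodes fields added above are the extension point the rest of this patch builds on: callers append extra API error codes to a built-but-unsent request, and the retryer consults them via IsErrorRetryable and IsErrorThrottle, exactly as the web identity provider change earlier in this diff does. A hedged usage sketch; GetCallerIdentity is only a convenient operation here, and appending InvalidIdentityToken to it is purely illustrative:

    package main

    import (
        "fmt"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/sts"
    )

    func main() {
        sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
        svc := sts.New(sess)

        // Build the request without sending it, then opt one more error
        // code into the retryer's retryable set before Send.
        req, out := svc.GetCallerIdentityRequest(&sts.GetCallerIdentityInput{})
        req.RetryErrorCodes = append(req.RetryErrorCodes, sts.ErrCodeInvalidIdentityTokenException)

        if err := req.Send(); err != nil {
            fmt.Println("failed after retries:", err)
            return
        }
        fmt.Println("account:", aws.StringValue(out.Account))
    }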
@@ -498,21 +506,17 @@ func (r *Request) Send() error {
 
 		if err := r.sendRequest(); err == nil {
 			return nil
-		} else if !shouldRetryError(r.Error) {
+		}
+		r.Handlers.Retry.Run(r)
+		r.Handlers.AfterRetry.Run(r)
+
+		if r.Error != nil || !aws.BoolValue(r.Retryable) {
+			return r.Error
+		}
+
+		if err := r.prepareRetry(); err != nil {
+			r.Error = err
 			return err
-		} else {
-			r.Handlers.Retry.Run(r)
-			r.Handlers.AfterRetry.Run(r)
-
-			if r.Error != nil || !aws.BoolValue(r.Retryable) {
-				return r.Error
-			}
-
-			if err := r.prepareRetry(); err != nil {
-				r.Error = err
-				return err
-			}
-			continue
 		}
 	}
 }
@@ -596,51 +600,6 @@ func AddToUserAgent(r *Request, s string) {
 	r.HTTPRequest.Header.Set("User-Agent", s)
 }
 
-type temporary interface {
-	Temporary() bool
-}
-
-func shouldRetryError(origErr error) bool {
-	switch err := origErr.(type) {
-	case awserr.Error:
-		if err.Code() == CanceledErrorCode {
-			return false
-		}
-		return shouldRetryError(err.OrigErr())
-	case *url.Error:
-		if strings.Contains(err.Error(), "connection refused") {
-			// Refused connections should be retried as the service may not yet
-			// be running on the port. Go TCP dial considers refused
-			// connections as not temporary.
-			return true
-		}
-		// *url.Error only implements Temporary after golang 1.6 but since
-		// url.Error only wraps the error:
-		return shouldRetryError(err.Err)
-	case temporary:
-		if netErr, ok := err.(*net.OpError); ok && netErr.Op == "dial" {
-			return true
-		}
-		// If the error is temporary, we want to allow continuation of the
-		// retry process
-		return err.Temporary() || isErrConnectionReset(origErr)
-	case nil:
-		// `awserr.Error.OrigErr()` can be nil, meaning there was an error but
-		// because we don't know the cause, it is marked as retryable. See
-		// TestRequest4xxUnretryable for an example.
-		return true
-	default:
-		switch err.Error() {
-		case "net/http: request canceled",
-			"net/http: request canceled while waiting for connection":
-			// known 1.5 error case when an http request is cancelled
-			return false
-		}
-		// here we don't know the error; so we allow a retry.
-		return true
-	}
-}
-
 // SanitizeHostForHeader removes default port from host and updates request.Host
 func SanitizeHostForHeader(r *http.Request) {
 	host := getHost(r)
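With the Send loop simplified above, error classification moves out of the request package and into the retryer (next file), where shouldRetryError backs the exported IsErrorRetryable helper. A small sketch of the resulting semantics; the expected results in the comments follow directly from the new switch cases:

    package main

    import (
        "errors"
        "fmt"
        "net/url"

        "github.com/aws/aws-sdk-go/aws/awserr"
        "github.com/aws/aws-sdk-go/aws/request"
    )

    func main() {
        // Canceled requests are never retried.
        canceled := awserr.New(request.CanceledErrorCode, "request canceled", nil)
        fmt.Println(request.IsErrorRetryable(canceled)) // false

        // Refused connections are retried: the service may not be up yet.
        refused := &url.Error{Op: "Post", URL: "https://example.com", Err: errors.New("dial tcp: connection refused")}
        fmt.Println(request.IsErrorRetryable(refused)) // true

        // An unknown error stays retryable, since it cannot be proven permanent.
        fmt.Println(request.IsErrorRetryable(errors.New("mystery"))) // true

        // nil is never retryable.
        fmt.Println(request.IsErrorRetryable(nil)) // false
    }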
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/retryer.go b/vendor/github.com/aws/aws-sdk-go/aws/request/retryer.go
index d0aa54c6d..e84084da5 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/request/retryer.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/request/retryer.go
@@ -1,23 +1,41 @@
 package request
 
 import (
+	"net"
+	"net/url"
+	"strings"
 	"time"
 
 	"github.com/aws/aws-sdk-go/aws"
 	"github.com/aws/aws-sdk-go/aws/awserr"
 )
 
-// Retryer is an interface to control retry logic for a given service.
-// The default implementation used by most services is the client.DefaultRetryer
-// structure, which contains basic retry logic using exponential backoff.
+// Retryer provides the interface to drive the SDK's request retry behavior. The
+// Retryer implementation is responsible for implementing exponential backoff,
+// and determining if a request API error should be retried.
+//
+// client.DefaultRetryer is the SDK's default implementation of the Retryer. It
+// uses the Request.IsErrorRetryable and Request.IsErrorThrottle methods to
+// determine if the request is retried.
 type Retryer interface {
+	// RetryRules returns the retry delay that should be used by the SDK before
+	// making another request attempt for the failed request.
 	RetryRules(*Request) time.Duration
+
+	// ShouldRetry returns whether the failed request is retryable.
+	//
+	// Implementations may consider request attempt count when determining if a
+	// request is retryable, but the SDK will use MaxRetries to limit the
+	// number of attempts made for a request.
 	ShouldRetry(*Request) bool
+
+	// MaxRetries is the number of times a request may be retried before
+	// failing.
 	MaxRetries() int
 }
 
-// WithRetryer sets a config Retryer value to the given Config returning it
-// for chaining.
+// WithRetryer sets a Retryer value to the given Config returning the Config
+// value for chaining.
 func WithRetryer(cfg *aws.Config, retryer Retryer) *aws.Config {
 	cfg.Retryer = retryer
 	return cfg
@@ -76,10 +94,6 @@ var validParentCodes = map[string]struct{}{
 	ErrCodeRead: {},
 }
 
-type temporaryError interface {
-	Temporary() bool
-}
-
 func isNestedErrorRetryable(parentErr awserr.Error) bool {
 	if parentErr == nil {
 		return false
@@ -98,7 +112,7 @@ func isNestedErrorRetryable(parentErr awserr.Error) bool {
 		return isCodeRetryable(aerr.Code())
 	}
 
-	if t, ok := err.(temporaryError); ok {
+	if t, ok := err.(temporary); ok {
 		return t.Temporary() || isErrConnectionReset(err)
 	}
 
@@ -108,32 +122,90 @@ func isNestedErrorRetryable(parentErr awserr.Error) bool {
 // IsErrorRetryable returns whether the error is retryable, based on its Code.
 // Returns false if error is nil.
 func IsErrorRetryable(err error) bool {
-	if err != nil {
-		if aerr, ok := err.(awserr.Error); ok {
-			return isCodeRetryable(aerr.Code()) || isNestedErrorRetryable(aerr)
-		}
+	if err == nil {
+		return false
+	}
+	return shouldRetryError(err)
+}
+
+type temporary interface {
+	Temporary() bool
+}
+
+func shouldRetryError(origErr error) bool {
+	switch err := origErr.(type) {
+	case awserr.Error:
+		if err.Code() == CanceledErrorCode {
+			return false
+		}
+		if isNestedErrorRetryable(err) {
+			return true
+		}
+
+		origErr := err.OrigErr()
+		var shouldRetry bool
+		if origErr != nil {
+			shouldRetry = shouldRetryError(origErr)
+			if err.Code() == "RequestError" && !shouldRetry {
+				return false
+			}
+		}
+		if isCodeRetryable(err.Code()) {
+			return true
+		}
+		return shouldRetry
+
+	case *url.Error:
+		if strings.Contains(err.Error(), "connection refused") {
+			// Refused connections should be retried as the service may not yet
+			// be running on the port. Go TCP dial considers refused
+			// connections as not temporary.
+			return true
+		}
+		// *url.Error only implements Temporary after golang 1.6 but since
+		// url.Error only wraps the error:
+		return shouldRetryError(err.Err)
+
+	case temporary:
+		if netErr, ok := err.(*net.OpError); ok && netErr.Op == "dial" {
+			return true
+		}
+		// If the error is temporary, we want to allow continuation of the
+		// retry process
+		return err.Temporary() || isErrConnectionReset(origErr)
+
+	case nil:
+		// `awserr.Error.OrigErr()` can be nil, meaning there was an error but
+		// because we don't know the cause, it is marked as retryable. See
+		// TestRequest4xxUnretryable for an example.
+		return true
+
+	default:
+		switch err.Error() {
+		case "net/http: request canceled",
+			"net/http: request canceled while waiting for connection":
+			// known 1.5 error case when an http request is cancelled
+			return false
+		}
+		// here we don't know the error; so we allow a retry.
+		return true
 	}
-	return false
 }
 
 // IsErrorThrottle returns whether the error is to be throttled based on its code.
 // Returns false if error is nil.
func IsErrorThrottle(err error) bool { - if err != nil { - if aerr, ok := err.(awserr.Error); ok { - return isCodeThrottle(aerr.Code()) - } + if aerr, ok := err.(awserr.Error); ok && aerr != nil { + return isCodeThrottle(aerr.Code()) } return false } -// IsErrorExpiredCreds returns whether the error code is a credential expiry error. -// Returns false if error is nil. +// IsErrorExpiredCreds returns whether the error code is a credential expiry +// error. Returns false if error is nil. func IsErrorExpiredCreds(err error) bool { - if err != nil { - if aerr, ok := err.(awserr.Error); ok { - return isCodeExpiredCreds(aerr.Code()) - } + if aerr, ok := err.(awserr.Error); ok && aerr != nil { + return isCodeExpiredCreds(aerr.Code()) } return false } @@ -143,17 +215,58 @@ func IsErrorExpiredCreds(err error) bool { // // Alias for the utility function IsErrorRetryable func (r *Request) IsErrorRetryable() bool { + if isErrCode(r.Error, r.RetryErrorCodes) { + return true + } + + // HTTP response status code 501 should not be retried. + // 501 represents Not Implemented which means the request method is not + // supported by the server and cannot be handled. + if r.HTTPResponse != nil { + // HTTP response status code 500 represents internal server error and + // should be retried without any throttle. + if r.HTTPResponse.StatusCode == 500 { + return true + } + } return IsErrorRetryable(r.Error) } -// IsErrorThrottle returns whether the error is to be throttled based on its code. -// Returns false if the request has no Error set +// IsErrorThrottle returns whether the error is to be throttled based on its +// code. Returns false if the request has no Error set. // // Alias for the utility function IsErrorThrottle func (r *Request) IsErrorThrottle() bool { + if isErrCode(r.Error, r.ThrottleErrorCodes) { + return true + } + + if r.HTTPResponse != nil { + switch r.HTTPResponse.StatusCode { + case + 429, // error caused due to too many requests + 502, // Bad Gateway error should be throttled + 503, // caused when service is unavailable + 504: // error occurred due to gateway timeout + return true + } + } + return IsErrorThrottle(r.Error) } +func isErrCode(err error, codes []string) bool { + if aerr, ok := err.(awserr.Error); ok && aerr != nil { + for _, code := range codes { + if code == aerr.Code() { + return true + } + } + } + + return false +} + // IsErrorExpired returns whether the error code is a credential expiry error. // Returns false if the request has no Error set. // diff --git a/vendor/github.com/aws/aws-sdk-go/aws/session/env_config.go b/vendor/github.com/aws/aws-sdk-go/aws/session/env_config.go index 3a998d5bd..60a6f9ce2 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/session/env_config.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/session/env_config.go @@ -99,10 +99,10 @@ type envConfig struct { CustomCABundle string csmEnabled string - CSMEnabled bool + CSMEnabled *bool CSMPort string - CSMClientID string CSMHost string + CSMClientID string // Enables endpoint discovery via environment variables. 
// @@ -230,7 +230,11 @@ func envConfigLoad(enableSharedConfig bool) envConfig { setFromEnvVal(&cfg.CSMHost, csmHostEnvKey) setFromEnvVal(&cfg.CSMPort, csmPortEnvKey) setFromEnvVal(&cfg.CSMClientID, csmClientIDEnvKey) - cfg.CSMEnabled = len(cfg.csmEnabled) > 0 + + if len(cfg.csmEnabled) != 0 { + v, _ := strconv.ParseBool(cfg.csmEnabled) + cfg.CSMEnabled = &v + } regionKeys := regionEnvKeys profileKeys := profileEnvKeys diff --git a/vendor/github.com/aws/aws-sdk-go/aws/session/session.go b/vendor/github.com/aws/aws-sdk-go/aws/session/session.go index 1b4fcdb10..7b0a942e2 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/session/session.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/session/session.go @@ -104,9 +104,13 @@ func New(cfgs ...*aws.Config) *Session { } s := deprecatedNewSession(cfgs...) - if envCfg.CSMEnabled { - err := enableCSM(&s.Handlers, envCfg.CSMClientID, - envCfg.CSMHost, envCfg.CSMPort, s.Config.Logger) + + if csmCfg, err := loadCSMConfig(envCfg, []string{}); err != nil { + if l := s.Config.Logger; l != nil { + l.Log(fmt.Sprintf("ERROR: failed to load CSM configuration, %v", err)) + } + } else if csmCfg.Enabled { + err := enableCSM(&s.Handlers, csmCfg, s.Config.Logger) if err != nil { err = fmt.Errorf("failed to enable CSM, %v", err) s.Config.Logger.Log("ERROR:", err.Error()) @@ -132,7 +136,7 @@ func New(cfgs ...*aws.Config) *Session { // to be built with retrieving credentials with AssumeRole set in the config. // // See the NewSessionWithOptions func for information on how to override or -// control through code how the Session will be created. Such as specifying the +// control through code how the Session will be created, such as specifying the // config profile, and controlling if shared config is enabled or not. func NewSession(cfgs ...*aws.Config) (*Session, error) { opts := Options{} @@ -347,15 +351,12 @@ func deprecatedNewSession(cfgs ...*aws.Config) *Session { return s } -func enableCSM(handlers *request.Handlers, - clientID, host, port string, - logger aws.Logger, -) error { +func enableCSM(handlers *request.Handlers, cfg csmConfig, logger aws.Logger) error { if logger != nil { logger.Log("Enabling CSM") } - r, err := csm.Start(clientID, csm.AddressWithDefaults(host, port)) + r, err := csm.Start(cfg.ClientID, csm.AddressWithDefaults(cfg.Host, cfg.Port)) if err != nil { return err } @@ -395,7 +396,13 @@ func newSession(opts Options, envCfg envConfig, cfgs ...*aws.Config) (*Session, // Load additional config from file(s) sharedCfg, err := loadSharedConfig(envCfg.Profile, cfgFiles, envCfg.EnableSharedConfig) if err != nil { - if _, ok := err.(SharedConfigProfileNotExistsError); !ok { + if len(envCfg.Profile) == 0 && !envCfg.EnableSharedConfig && (envCfg.Creds.HasKeys() || userCfg.Credentials != nil) { + // Special case where the user has not explicitly specified an AWS_PROFILE, + // or session.Options.profile, shared config is not enabled, and the + // environment has credentials, allow the shared config file to fail to + // load since the user has already provided credentials, and nothing else + // is required to be read file. 
Github(aws/aws-sdk-go#2455) + } else if _, ok := err.(SharedConfigProfileNotExistsError); !ok { return nil, err } } @@ -410,9 +417,13 @@ func newSession(opts Options, envCfg envConfig, cfgs ...*aws.Config) (*Session, } initHandlers(s) - if envCfg.CSMEnabled { - err := enableCSM(&s.Handlers, envCfg.CSMClientID, - envCfg.CSMHost, envCfg.CSMPort, s.Config.Logger) + + if csmCfg, err := loadCSMConfig(envCfg, cfgFiles); err != nil { + if l := s.Config.Logger; l != nil { + l.Log(fmt.Sprintf("ERROR: failed to load CSM configuration, %v", err)) + } + } else if csmCfg.Enabled { + err = enableCSM(&s.Handlers, csmCfg, s.Config.Logger) if err != nil { return nil, err } @@ -428,6 +439,46 @@ func newSession(opts Options, envCfg envConfig, cfgs ...*aws.Config) (*Session, return s, nil } +type csmConfig struct { + Enabled bool + Host string + Port string + ClientID string +} + +var csmProfileName = "aws_csm" + +func loadCSMConfig(envCfg envConfig, cfgFiles []string) (csmConfig, error) { + if envCfg.CSMEnabled != nil { + if *envCfg.CSMEnabled { + return csmConfig{ + Enabled: true, + ClientID: envCfg.CSMClientID, + Host: envCfg.CSMHost, + Port: envCfg.CSMPort, + }, nil + } + return csmConfig{}, nil + } + + sharedCfg, err := loadSharedConfig(csmProfileName, cfgFiles, false) + if err != nil { + if _, ok := err.(SharedConfigProfileNotExistsError); !ok { + return csmConfig{}, err + } + } + if sharedCfg.CSMEnabled != nil && *sharedCfg.CSMEnabled == true { + return csmConfig{ + Enabled: true, + ClientID: sharedCfg.CSMClientID, + Host: sharedCfg.CSMHost, + Port: sharedCfg.CSMPort, + }, nil + } + + return csmConfig{}, nil +} + func loadCustomCABundle(s *Session, bundle io.Reader) error { var t *http.Transport switch v := s.Config.HTTPClient.Transport.(type) { @@ -520,7 +571,7 @@ func initHandlers(s *Session) { } } -// Copy creates and returns a copy of the current Session, coping the config +// Copy creates and returns a copy of the current Session, copying the config // and handlers. If any additional configs are provided they will be merged // on top of the Session's copied config. 
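When the environment flag is unset, `loadCSMConfig` above falls back to a dedicated `aws_csm` profile in the shared config files. A sketch of that fallback; the profile name and keys come from this diff, while the host/port/client-id values are illustrative only:

```go
package main

// Fallback read by loadCSMConfig when AWS_CSM_ENABLED is unset, placed in the
// usual shared config file (~/.aws/config):
//
//	[aws_csm]
//	csm_enabled = true
//	csm_host = 127.0.0.1
//	csm_port = 31000
//	csm_client_id = my-app

import "github.com/aws/aws-sdk-go/aws/session"

func main() {
	// With that profile present, newSession wires the CSM reporter into the
	// session's handlers via enableCSM.
	_ = session.Must(session.NewSession())
}
```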
// diff --git a/vendor/github.com/aws/aws-sdk-go/aws/session/shared_config.go b/vendor/github.com/aws/aws-sdk-go/aws/session/shared_config.go index 5170b4982..d91ac93a5 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/session/shared_config.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/session/shared_config.go @@ -22,6 +22,12 @@ const ( mfaSerialKey = `mfa_serial` // optional roleSessionNameKey = `role_session_name` // optional + // CSM options + csmEnabledKey = `csm_enabled` + csmHostKey = `csm_host` + csmPortKey = `csm_port` + csmClientIDKey = `csm_client_id` + // Additional Config fields regionKey = `region` @@ -76,6 +82,12 @@ type sharedConfig struct { // // endpoint_discovery_enabled = true EnableEndpointDiscovery *bool + + // CSM Options + CSMEnabled *bool + CSMHost string + CSMPort string + CSMClientID string } type sharedConfigFile struct { @@ -251,10 +263,13 @@ func (cfg *sharedConfig) setFromIniFile(profile string, file sharedConfigFile, e } // Endpoint discovery - if section.Has(enableEndpointDiscoveryKey) { - v := section.Bool(enableEndpointDiscoveryKey) - cfg.EnableEndpointDiscovery = &v - } + updateBoolPtr(&cfg.EnableEndpointDiscovery, section, enableEndpointDiscoveryKey) + + // CSM options + updateBoolPtr(&cfg.CSMEnabled, section, csmEnabledKey) + updateString(&cfg.CSMHost, section, csmHostKey) + updateString(&cfg.CSMPort, section, csmPortKey) + updateString(&cfg.CSMClientID, section, csmClientIDKey) return nil } @@ -348,6 +363,16 @@ func updateString(dst *string, section ini.Section, key string) { *dst = section.String(key) } +// updateBoolPtr will only update the dst with the value in the section key, +// key is present in the section. +func updateBoolPtr(dst **bool, section ini.Section, key string) { + if !section.Has(key) { + return + } + *dst = new(bool) + **dst = section.Bool(key) +} + // SharedConfigLoadError is an error for the shared config file failed to load. type SharedConfigLoadError struct { Filename string diff --git a/vendor/github.com/aws/aws-sdk-go/aws/version.go b/vendor/github.com/aws/aws-sdk-go/aws/version.go index 184ef5d07..d1548ebd8 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/version.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/version.go @@ -5,4 +5,4 @@ package aws const SDKName = "aws-sdk-go" // SDKVersion is the version of this SDK -const SDKVersion = "1.22.0" +const SDKVersion = "1.25.3" diff --git a/vendor/github.com/aws/aws-sdk-go/internal/sdkio/byte.go b/vendor/github.com/aws/aws-sdk-go/internal/sdkio/byte.go new file mode 100644 index 000000000..6c443988b --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/sdkio/byte.go @@ -0,0 +1,12 @@ +package sdkio + +const ( + // Byte is 8 bits + Byte int64 = 1 + // KibiByte (KiB) is 1024 Bytes + KibiByte = Byte * 1024 + // MebiByte (MiB) is 1024 KiB + MebiByte = KibiByte * 1024 + // GibiByte (GiB) is 1024 MiB + GibiByte = MebiByte * 1024 +) diff --git a/vendor/github.com/aws/aws-sdk-go/internal/sdkmath/floor.go b/vendor/github.com/aws/aws-sdk-go/internal/sdkmath/floor.go new file mode 100644 index 000000000..44898eed0 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/sdkmath/floor.go @@ -0,0 +1,15 @@ +// +build go1.10 + +package sdkmath + +import "math" + +// Round returns the nearest integer, rounding half away from zero. 
+// +// Special cases are: +// Round(±0) = ±0 +// Round(±Inf) = ±Inf +// Round(NaN) = NaN +func Round(x float64) float64 { + return math.Round(x) +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/sdkmath/floor_go1.9.go b/vendor/github.com/aws/aws-sdk-go/internal/sdkmath/floor_go1.9.go new file mode 100644 index 000000000..810ec7f08 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/sdkmath/floor_go1.9.go @@ -0,0 +1,56 @@ +// +build !go1.10 + +package sdkmath + +import "math" + +// Copied from the Go standard library's (Go 1.12) math/floor.go for use in +// Go version prior to Go 1.10. +const ( + uvone = 0x3FF0000000000000 + mask = 0x7FF + shift = 64 - 11 - 1 + bias = 1023 + signMask = 1 << 63 + fracMask = 1<<shift - 1 +) + +// Round returns the nearest integer, rounding half away from zero. +// +// Special cases are: +// Round(±0) = ±0 +// Round(±Inf) = ±Inf +// Round(NaN) = NaN +func Round(x float64) float64 { + // Round is a faster implementation of: + // + // func Round(x float64) float64 { + // t := Trunc(x) + // if Abs(x-t) >= 0.5 { + // return t + Copysign(1, x) + // } + // return t + // } + bits := math.Float64bits(x) + e := uint(bits>>shift) & mask + if e < bias { + // Round abs(x) < 1 including denormals. + bits &= signMask // +-0 + if e == bias-1 { + bits |= uvone // +-1 + } + } else if e < bias+shift { + // Round any abs(x) >= 1 containing a fractional component [0,1). + // + // Numbers with larger exponents are returned unchanged since they + // must be either an integer, infinity, or NaN. + const half = 1 << (shift - 1) + e -= bias + bits += half >> e + bits &^= fracMask >> e + } + return math.Float64frombits(bits) +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/sdkrand/read.go b/vendor/github.com/aws/aws-sdk-go/internal/sdkrand/read.go new file mode 100644 index 000000000..f4651da2d --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/sdkrand/read.go @@ -0,0 +1,11 @@ +// +build go1.6 + +package sdkrand + +import "math/rand" + +// Read provides the stub for math.Rand.Read method support for go version's +// 1.6 and greater. +func Read(r *rand.Rand, p []byte) (int, error) { + return r.Read(p) +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/sdkrand/read_1_5.go b/vendor/github.com/aws/aws-sdk-go/internal/sdkrand/read_1_5.go new file mode 100644 index 000000000..b1d93a33d --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/sdkrand/read_1_5.go @@ -0,0 +1,24 @@ +// +build !go1.6 + +package sdkrand + +import "math/rand" + +// Read backfills Go 1.6's math.Rand.Reader for Go 1.5 +func Read(r *rand.Rand, p []byte) (n int, err error) { + // Copy of Go standard libraries math package's read function not added to + // standard library until Go 1.6.
+ var pos int8 + var val int64 + for n = 0; n < len(p); n++ { + if pos == 0 { + val = r.Int63() + pos = 7 + } + p[n] = byte(val) + val >>= 8 + pos-- + } + + return n, err +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/unmarshal.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/unmarshal.go index de021367d..74e361e07 100644 --- a/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/unmarshal.go +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/unmarshal.go @@ -146,6 +146,9 @@ func unmarshalStatusCode(v reflect.Value, statusCode int) { } func unmarshalHeaderMap(r reflect.Value, headers http.Header, prefix string) error { + if len(headers) == 0 { + return nil + } switch r.Interface().(type) { case map[string]*string: // we only support string map value types out := map[string]*string{} @@ -155,19 +158,28 @@ func unmarshalHeaderMap(r reflect.Value, headers http.Header, prefix string) err out[k[len(prefix):]] = &v[0] } } - r.Set(reflect.ValueOf(out)) + if len(out) != 0 { + r.Set(reflect.ValueOf(out)) + } + } return nil } func unmarshalHeader(v reflect.Value, header string, tag reflect.StructTag) error { - isJSONValue := tag.Get("type") == "jsonvalue" - if isJSONValue { + switch tag.Get("type") { + case "jsonvalue": if len(header) == 0 { return nil } - } else if !v.IsValid() || (header == "" && v.Elem().Kind() != reflect.String) { - return nil + case "blob": + if len(header) == 0 { + return nil + } + default: + if !v.IsValid() || (header == "" && v.Elem().Kind() != reflect.String) { + return nil + } } switch v.Interface().(type) { @@ -178,7 +190,7 @@ func unmarshalHeader(v reflect.Value, header string, tag reflect.StructTag) erro if err != nil { return err } - v.Set(reflect.ValueOf(&b)) + v.Set(reflect.ValueOf(b)) case *bool: b, err := strconv.ParseBool(header) if err != nil { diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/restxml/restxml.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/restxml/restxml.go index cf569645d..07a6187ea 100644 --- a/vendor/github.com/aws/aws-sdk-go/private/protocol/restxml/restxml.go +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/restxml/restxml.go @@ -39,7 +39,7 @@ func Build(r *request.Request) { r.Error = awserr.NewRequestFailure( awserr.New(request.ErrCodeSerialization, "failed to encode rest XML request", err), - r.HTTPResponse.StatusCode, + 0, r.RequestID, ) return diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/timestamp.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/timestamp.go index b7ed6c6f8..05d4ff519 100644 --- a/vendor/github.com/aws/aws-sdk-go/private/protocol/timestamp.go +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/timestamp.go @@ -1,8 +1,11 @@ package protocol import ( + "math" "strconv" "time" + + "github.com/aws/aws-sdk-go/internal/sdkmath" ) // Names of time formats supported by the SDK @@ -13,12 +16,19 @@ const ( ) // Time formats supported by the SDK +// Output time is intended to not contain decimals const ( // RFC 7231#section-7.1.1.1 timetamp format. e.g Tue, 29 Apr 2014 18:30:38 GMT RFC822TimeFormat = "Mon, 2 Jan 2006 15:04:05 GMT" + // This format is used for output time without seconds precision + RFC822OutputTimeFormat = "Mon, 02 Jan 2006 15:04:05 GMT" + // RFC3339 a subset of the ISO8601 timestamp format. 
e.g 2014-04-29T18:30:38Z - ISO8601TimeFormat = "2006-01-02T15:04:05Z" + ISO8601TimeFormat = "2006-01-02T15:04:05.999999999Z" + + // This format is used for output time without seconds precision + ISO8601OutputTimeFormat = "2006-01-02T15:04:05Z" ) // IsKnownTimestampFormat returns if the timestamp format name @@ -42,9 +52,9 @@ func FormatTime(name string, t time.Time) string { switch name { case RFC822TimeFormatName: - return t.Format(RFC822TimeFormat) + return t.Format(RFC822OutputTimeFormat) case ISO8601TimeFormatName: - return t.Format(ISO8601TimeFormat) + return t.Format(ISO8601OutputTimeFormat) case UnixTimeFormatName: return strconv.FormatInt(t.Unix(), 10) default: @@ -62,10 +72,12 @@ func ParseTime(formatName, value string) (time.Time, error) { return time.Parse(ISO8601TimeFormat, value) case UnixTimeFormatName: v, err := strconv.ParseFloat(value, 64) + _, dec := math.Modf(v) + dec = sdkmath.Round(dec*1e3) / 1e3 //Rounds 0.1229999 to 0.123 if err != nil { return time.Time{}, err } - return time.Unix(int64(v), 0), nil + return time.Unix(int64(v), int64(dec*(1e9))), nil default: panic("unknown timestamp format name, " + formatName) } diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/sort.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/sort.go new file mode 100644 index 000000000..c1a511851 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/sort.go @@ -0,0 +1,32 @@ +package xmlutil + +import ( + "encoding/xml" + "strings" +) + +type xmlAttrSlice []xml.Attr + +func (x xmlAttrSlice) Len() int { + return len(x) +} + +func (x xmlAttrSlice) Less(i, j int) bool { + spaceI, spaceJ := x[i].Name.Space, x[j].Name.Space + localI, localJ := x[i].Name.Local, x[j].Name.Local + valueI, valueJ := x[i].Value, x[j].Value + + spaceCmp := strings.Compare(spaceI, spaceJ) + localCmp := strings.Compare(localI, localJ) + valueCmp := strings.Compare(valueI, valueJ) + + if spaceCmp == -1 || (spaceCmp == 0 && (localCmp == -1 || (localCmp == 0 && valueCmp == -1))) { + return true + } + + return false +} + +func (x xmlAttrSlice) Swap(i, j int) { + x[i], x[j] = x[j], x[i] +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/xml_to_struct.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/xml_to_struct.go index 515ce1521..42f71648e 100644 --- a/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/xml_to_struct.go +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/xml_to_struct.go @@ -119,7 +119,18 @@ func (n *XMLNode) findElem(name string) (string, bool) { // StructToXML writes an XMLNode to a xml.Encoder as tokens. 
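On the `timestamp.go` changes above: parsing now retains fractional seconds (rounded to milliseconds via `sdkmath.Round`), while the new `*OutputTimeFormat` constants deliberately emit none. A sketch of the round-trip; note that `private/protocol` is an internal SDK package, so this is illustrative rather than a supported import:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/private/protocol"
)

func main() {
	// Unix timestamps keep millisecond precision on parse; the rounding step
	// turns 0.1229999 into 0.123 exactly.
	t, err := protocol.ParseTime(protocol.UnixTimeFormatName, "1257894000.1229999")
	if err != nil {
		panic(err)
	}
	fmt.Println(t.UnixNano()) // 1257894000123000000

	// Formatting intentionally drops the fraction again.
	fmt.Println(protocol.FormatTime(protocol.ISO8601TimeFormatName, t)) // 2009-11-10T23:00:00Z
}
```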
func StructToXML(e *xml.Encoder, node *XMLNode, sorted bool) error { - e.EncodeToken(xml.StartElement{Name: node.Name, Attr: node.Attr}) + // Sort Attributes + attrs := node.Attr + if sorted { + sortedAttrs := make([]xml.Attr, len(attrs)) + for _, k := range node.Attr { + sortedAttrs = append(sortedAttrs, k) + } + sort.Sort(xmlAttrSlice(sortedAttrs)) + attrs = sortedAttrs + } + + e.EncodeToken(xml.StartElement{Name: node.Name, Attr: attrs}) if node.Text != "" { e.EncodeToken(xml.CharData([]byte(node.Text))) diff --git a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/customizations.go b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/customizations.go index 333e61bfc..c019e63df 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/customizations.go +++ b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/customizations.go @@ -5,7 +5,6 @@ import ( "hash/crc32" "io" "io/ioutil" - "math" "strconv" "time" @@ -15,15 +14,6 @@ import ( "github.com/aws/aws-sdk-go/aws/request" ) -type retryer struct { - client.DefaultRetryer -} - -func (d retryer) RetryRules(r *request.Request) time.Duration { - delay := time.Duration(math.Pow(2, float64(r.RetryCount))) * 50 - return delay * time.Millisecond -} - func init() { initClient = func(c *client.Client) { if c.Config.Retryer == nil { @@ -43,10 +33,9 @@ func setCustomRetryer(c *client.Client) { maxRetries = 10 } - c.Retryer = retryer{ - DefaultRetryer: client.DefaultRetryer{ - NumMaxRetries: maxRetries, - }, + c.Retryer = client.DefaultRetryer{ + NumMaxRetries: maxRetries, + MinRetryDelay: 50 * time.Millisecond, } } diff --git a/vendor/github.com/aws/aws-sdk-go/service/iam/api.go b/vendor/github.com/aws/aws-sdk-go/service/iam/api.go index 686836132..64f254326 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/iam/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/iam/api.go @@ -16287,7 +16287,7 @@ func (s ChangePasswordOutput) GoString() string { // evaluating the Condition elements of the input policies. // // This data type is used as an input parameter to SimulateCustomPolicy and -// SimulateCustomPolicy . +// SimulatePrincipalPolicy . type ContextEntry struct { _ struct{} `type:"structure"` @@ -17183,8 +17183,7 @@ type CreateRoleInput struct { // * The special characters tab (\u0009), line feed (\u000A), and carriage // return (\u000D) // - // Upon success, the response includes the same trust policy as a URL-encoded - // JSON string. + // Upon success, the response includes the same trust policy in JSON format. // // AssumeRolePolicyDocument is a required field AssumeRolePolicyDocument *string `min:"1" type:"string" required:"true"` diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/api.go b/vendor/github.com/aws/aws-sdk-go/service/s3/api.go index 139c27d14..b4a4e8c4a 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/s3/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/api.go @@ -7043,7 +7043,7 @@ func (s *AbortIncompleteMultipartUpload) SetDaysAfterInitiation(v int64) *AbortI } type AbortMultipartUploadInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"AbortMultipartUploadRequest" type:"structure"` // Name of the bucket to which the multipart upload was initiated. 
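One observation on the `StructToXML` change above: `make([]xml.Attr, len(attrs))` followed by `append` leaves `len(attrs)` zero-value attributes at the front of `sortedAttrs`; `make([]xml.Attr, 0, len(attrs))` was presumably intended. The output is still correct because `encoding/xml` skips attributes with an empty name, as this standalone check demonstrates:

```go
package main

import (
	"bytes"
	"encoding/xml"
	"fmt"
)

func main() {
	attrs := []xml.Attr{
		{Name: xml.Name{Local: "b"}, Value: "2"},
		{Name: xml.Name{Local: "a"}, Value: "1"},
	}
	sorted := make([]xml.Attr, len(attrs)) // two zero-value attrs up front...
	for _, k := range attrs {
		sorted = append(sorted, k) // ...and the real ones appended behind them
	}
	fmt.Println(len(sorted)) // 4, not 2

	var buf bytes.Buffer
	e := xml.NewEncoder(&buf)
	e.EncodeToken(xml.StartElement{Name: xml.Name{Local: "n"}, Attr: sorted})
	e.EncodeToken(xml.EndElement{Name: xml.Name{Local: "n"}})
	e.Flush()
	fmt.Println(buf.String()) // <n b="2" a="1"></n>: empty-name attrs are dropped
}
```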
// @@ -8084,7 +8084,7 @@ func (s *CommonPrefix) SetPrefix(v string) *CommonPrefix { } type CompleteMultipartUploadInput struct { - _ struct{} `type:"structure" payload:"MultipartUpload"` + _ struct{} `locationName:"CompleteMultipartUploadRequest" type:"structure" payload:"MultipartUpload"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -8404,7 +8404,7 @@ func (s *ContinuationEvent) UnmarshalEvent( } type CopyObjectInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"CopyObjectRequest" type:"structure"` // The canned ACL to apply to the object. ACL *string `location:"header" locationName:"x-amz-acl" type:"string" enum:"ObjectCannedACL"` @@ -9025,7 +9025,7 @@ func (s *CreateBucketConfiguration) SetLocationConstraint(v string) *CreateBucke } type CreateBucketInput struct { - _ struct{} `type:"structure" payload:"CreateBucketConfiguration"` + _ struct{} `locationName:"CreateBucketRequest" type:"structure" payload:"CreateBucketConfiguration"` // The canned ACL to apply to the bucket. ACL *string `location:"header" locationName:"x-amz-acl" type:"string" enum:"BucketCannedACL"` @@ -9166,7 +9166,7 @@ func (s *CreateBucketOutput) SetLocation(v string) *CreateBucketOutput { } type CreateMultipartUploadInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"CreateMultipartUploadRequest" type:"structure"` // The canned ACL to apply to the object. ACL *string `location:"header" locationName:"x-amz-acl" type:"string" enum:"ObjectCannedACL"` @@ -9708,7 +9708,7 @@ func (s *Delete) SetQuiet(v bool) *Delete { } type DeleteBucketAnalyticsConfigurationInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"DeleteBucketAnalyticsConfigurationRequest" type:"structure"` // The name of the bucket from which an analytics configuration is deleted. // @@ -9784,7 +9784,7 @@ func (s DeleteBucketAnalyticsConfigurationOutput) GoString() string { } type DeleteBucketCorsInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"DeleteBucketCorsRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -9844,7 +9844,7 @@ func (s DeleteBucketCorsOutput) GoString() string { } type DeleteBucketEncryptionInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"DeleteBucketEncryptionRequest" type:"structure"` // The name of the bucket containing the server-side encryption configuration // to delete. @@ -9907,7 +9907,7 @@ func (s DeleteBucketEncryptionOutput) GoString() string { } type DeleteBucketInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"DeleteBucketRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -9953,7 +9953,7 @@ func (s *DeleteBucketInput) getBucket() (v string) { } type DeleteBucketInventoryConfigurationInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"DeleteBucketInventoryConfigurationRequest" type:"structure"` // The name of the bucket containing the inventory configuration to delete. 
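The long run of changes through the rest of `s3/api.go` below follows one mechanical pattern: every operation input's marker field gains a `locationName` naming the request's root XML element explicitly (e.g. `DeleteObjectsRequest`), rather than leaving the REST-XML marshaler to derive it from the Go type name. Illustratively, in the shape the generated code takes (member tags unchanged):

```go
package s3sketch

// The marker-field pattern used by the generated S3 input structs: the blank
// field's tag names the root element when the structure itself is serialized.
type DeleteObjectsInput struct {
	_ struct{} `locationName:"DeleteObjectsRequest" type:"structure" payload:"Delete"`

	Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"`
}
```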
// @@ -10029,7 +10029,7 @@ func (s DeleteBucketInventoryConfigurationOutput) GoString() string { } type DeleteBucketLifecycleInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"DeleteBucketLifecycleRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -10089,7 +10089,7 @@ func (s DeleteBucketLifecycleOutput) GoString() string { } type DeleteBucketMetricsConfigurationInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"DeleteBucketMetricsConfigurationRequest" type:"structure"` // The name of the bucket containing the metrics configuration to delete. // @@ -10179,7 +10179,7 @@ func (s DeleteBucketOutput) GoString() string { } type DeleteBucketPolicyInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"DeleteBucketPolicyRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -10239,7 +10239,7 @@ func (s DeleteBucketPolicyOutput) GoString() string { } type DeleteBucketReplicationInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"DeleteBucketReplicationRequest" type:"structure"` // The bucket name. // @@ -10304,7 +10304,7 @@ func (s DeleteBucketReplicationOutput) GoString() string { } type DeleteBucketTaggingInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"DeleteBucketTaggingRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -10364,7 +10364,7 @@ func (s DeleteBucketTaggingOutput) GoString() string { } type DeleteBucketWebsiteInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"DeleteBucketWebsiteRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -10510,7 +10510,7 @@ func (s *DeleteMarkerReplication) SetStatus(v string) *DeleteMarkerReplication { } type DeleteObjectInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"DeleteObjectRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -10656,7 +10656,7 @@ func (s *DeleteObjectOutput) SetVersionId(v string) *DeleteObjectOutput { } type DeleteObjectTaggingInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"DeleteObjectTaggingRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -10749,7 +10749,7 @@ func (s *DeleteObjectTaggingOutput) SetVersionId(v string) *DeleteObjectTaggingO } type DeleteObjectsInput struct { - _ struct{} `type:"structure" payload:"Delete"` + _ struct{} `locationName:"DeleteObjectsRequest" type:"structure" payload:"Delete"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -10885,7 +10885,7 @@ func (s *DeleteObjectsOutput) SetRequestCharged(v string) *DeleteObjectsOutput { } type DeletePublicAccessBlockInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"DeletePublicAccessBlockRequest" type:"structure"` // The Amazon S3 bucket whose PublicAccessBlock configuration you want to delete. 
// @@ -11341,7 +11341,7 @@ func (s *FilterRule) SetValue(v string) *FilterRule { } type GetBucketAccelerateConfigurationInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetBucketAccelerateConfigurationRequest" type:"structure"` // Name of the bucket for which the accelerate configuration is retrieved. // @@ -11412,7 +11412,7 @@ func (s *GetBucketAccelerateConfigurationOutput) SetStatus(v string) *GetBucketA } type GetBucketAclInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetBucketAclRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -11489,7 +11489,7 @@ func (s *GetBucketAclOutput) SetOwner(v *Owner) *GetBucketAclOutput { } type GetBucketAnalyticsConfigurationInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetBucketAnalyticsConfigurationRequest" type:"structure"` // The name of the bucket from which an analytics configuration is retrieved. // @@ -11574,7 +11574,7 @@ func (s *GetBucketAnalyticsConfigurationOutput) SetAnalyticsConfiguration(v *Ana } type GetBucketCorsInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetBucketCorsRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -11642,7 +11642,7 @@ func (s *GetBucketCorsOutput) SetCORSRules(v []*CORSRule) *GetBucketCorsOutput { } type GetBucketEncryptionInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetBucketEncryptionRequest" type:"structure"` // The name of the bucket from which the server-side encryption configuration // is retrieved. @@ -11714,7 +11714,7 @@ func (s *GetBucketEncryptionOutput) SetServerSideEncryptionConfiguration(v *Serv } type GetBucketInventoryConfigurationInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetBucketInventoryConfigurationRequest" type:"structure"` // The name of the bucket containing the inventory configuration to retrieve. 
// @@ -11799,7 +11799,7 @@ func (s *GetBucketInventoryConfigurationOutput) SetInventoryConfiguration(v *Inv } type GetBucketLifecycleConfigurationInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetBucketLifecycleConfigurationRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -11867,7 +11867,7 @@ func (s *GetBucketLifecycleConfigurationOutput) SetRules(v []*LifecycleRule) *Ge } type GetBucketLifecycleInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetBucketLifecycleRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -11935,7 +11935,7 @@ func (s *GetBucketLifecycleOutput) SetRules(v []*Rule) *GetBucketLifecycleOutput } type GetBucketLocationInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetBucketLocationRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -12003,7 +12003,7 @@ func (s *GetBucketLocationOutput) SetLocationConstraint(v string) *GetBucketLoca } type GetBucketLoggingInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetBucketLoggingRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -12075,7 +12075,7 @@ func (s *GetBucketLoggingOutput) SetLoggingEnabled(v *LoggingEnabled) *GetBucket } type GetBucketMetricsConfigurationInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetBucketMetricsConfigurationRequest" type:"structure"` // The name of the bucket containing the metrics configuration to retrieve. // @@ -12160,7 +12160,7 @@ func (s *GetBucketMetricsConfigurationOutput) SetMetricsConfiguration(v *Metrics } type GetBucketNotificationConfigurationRequest struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetBucketNotificationConfigurationRequest" type:"structure"` // Name of the bucket to get the notification configuration for. // @@ -12208,7 +12208,7 @@ func (s *GetBucketNotificationConfigurationRequest) getBucket() (v string) { } type GetBucketPolicyInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetBucketPolicyRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -12277,7 +12277,7 @@ func (s *GetBucketPolicyOutput) SetPolicy(v string) *GetBucketPolicyOutput { } type GetBucketPolicyStatusInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetBucketPolicyStatusRequest" type:"structure"` // The name of the Amazon S3 bucket whose policy status you want to retrieve. 
// @@ -12348,7 +12348,7 @@ func (s *GetBucketPolicyStatusOutput) SetPolicyStatus(v *PolicyStatus) *GetBucke } type GetBucketReplicationInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetBucketReplicationRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -12418,7 +12418,7 @@ func (s *GetBucketReplicationOutput) SetReplicationConfiguration(v *ReplicationC } type GetBucketRequestPaymentInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetBucketRequestPaymentRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -12487,7 +12487,7 @@ func (s *GetBucketRequestPaymentOutput) SetPayer(v string) *GetBucketRequestPaym } type GetBucketTaggingInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetBucketTaggingRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -12556,7 +12556,7 @@ func (s *GetBucketTaggingOutput) SetTagSet(v []*Tag) *GetBucketTaggingOutput { } type GetBucketVersioningInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetBucketVersioningRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -12636,7 +12636,7 @@ func (s *GetBucketVersioningOutput) SetStatus(v string) *GetBucketVersioningOutp } type GetBucketWebsiteInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetBucketWebsiteRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -12730,7 +12730,7 @@ func (s *GetBucketWebsiteOutput) SetRoutingRules(v []*RoutingRule) *GetBucketWeb } type GetObjectAclInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetObjectAclRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -12853,7 +12853,7 @@ func (s *GetObjectAclOutput) SetRequestCharged(v string) *GetObjectAclOutput { } type GetObjectInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetObjectRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -13090,7 +13090,7 @@ func (s *GetObjectInput) SetVersionId(v string) *GetObjectInput { } type GetObjectLegalHoldInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetObjectLegalHoldRequest" type:"structure"` // The bucket containing the object whose Legal Hold status you want to retrieve. // @@ -13199,7 +13199,7 @@ func (s *GetObjectLegalHoldOutput) SetLegalHold(v *ObjectLockLegalHold) *GetObje } type GetObjectLockConfigurationInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetObjectLockConfigurationRequest" type:"structure"` // The bucket whose object lock configuration you want to retrieve. // @@ -13581,7 +13581,7 @@ func (s *GetObjectOutput) SetWebsiteRedirectLocation(v string) *GetObjectOutput } type GetObjectRetentionInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetObjectRetentionRequest" type:"structure"` // The bucket containing the object whose retention settings you want to retrieve. 
// @@ -13690,7 +13690,7 @@ func (s *GetObjectRetentionOutput) SetRetention(v *ObjectLockRetention) *GetObje } type GetObjectTaggingInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetObjectTaggingRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -13790,7 +13790,7 @@ func (s *GetObjectTaggingOutput) SetVersionId(v string) *GetObjectTaggingOutput } type GetObjectTorrentInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetObjectTorrentRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -13895,7 +13895,7 @@ func (s *GetObjectTorrentOutput) SetRequestCharged(v string) *GetObjectTorrentOu } type GetPublicAccessBlockInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"GetPublicAccessBlockRequest" type:"structure"` // The name of the Amazon S3 bucket whose PublicAccessBlock configuration you // want to retrieve. @@ -14126,7 +14126,7 @@ func (s *Grantee) SetURI(v string) *Grantee { } type HeadBucketInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"HeadBucketRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -14186,7 +14186,7 @@ func (s HeadBucketOutput) GoString() string { } type HeadObjectInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"HeadObjectRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -15661,7 +15661,7 @@ func (s *LifecycleRuleFilter) SetTag(v *Tag) *LifecycleRuleFilter { } type ListBucketAnalyticsConfigurationsInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"ListBucketAnalyticsConfigurationsRequest" type:"structure"` // The name of the bucket from which analytics configurations are retrieved. // @@ -15773,7 +15773,7 @@ func (s *ListBucketAnalyticsConfigurationsOutput) SetNextContinuationToken(v str } type ListBucketInventoryConfigurationsInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"ListBucketInventoryConfigurationsRequest" type:"structure"` // The name of the bucket containing the inventory configurations to retrieve. // @@ -15887,7 +15887,7 @@ func (s *ListBucketInventoryConfigurationsOutput) SetNextContinuationToken(v str } type ListBucketMetricsConfigurationsInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"ListBucketMetricsConfigurationsRequest" type:"structure"` // The name of the bucket containing the metrics configurations to retrieve. 
// @@ -16047,7 +16047,7 @@ func (s *ListBucketsOutput) SetOwner(v *Owner) *ListBucketsOutput { } type ListMultipartUploadsInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"ListMultipartUploadsRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -16291,7 +16291,7 @@ func (s *ListMultipartUploadsOutput) SetUploads(v []*MultipartUpload) *ListMulti } type ListObjectVersionsInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"ListObjectVersionsRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -16524,7 +16524,7 @@ func (s *ListObjectVersionsOutput) SetVersions(v []*ObjectVersion) *ListObjectVe } type ListObjectsInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"ListObjectsRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -16736,7 +16736,7 @@ func (s *ListObjectsOutput) SetPrefix(v string) *ListObjectsOutput { } type ListObjectsV2Input struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"ListObjectsV2Request" type:"structure"` // Name of the bucket to list. // @@ -16997,7 +16997,7 @@ func (s *ListObjectsV2Output) SetStartAfter(v string) *ListObjectsV2Output { } type ListPartsInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"ListPartsRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -18622,7 +18622,7 @@ func (s *PublicAccessBlockConfiguration) SetRestrictPublicBuckets(v bool) *Publi } type PutBucketAccelerateConfigurationInput struct { - _ struct{} `type:"structure" payload:"AccelerateConfiguration"` + _ struct{} `locationName:"PutBucketAccelerateConfigurationRequest" type:"structure" payload:"AccelerateConfiguration"` // Specifies the Accelerate Configuration you want to set for the bucket. // @@ -18698,7 +18698,7 @@ func (s PutBucketAccelerateConfigurationOutput) GoString() string { } type PutBucketAclInput struct { - _ struct{} `type:"structure" payload:"AccessControlPolicy"` + _ struct{} `locationName:"PutBucketAclRequest" type:"structure" payload:"AccessControlPolicy"` // The canned ACL to apply to the bucket. ACL *string `location:"header" locationName:"x-amz-acl" type:"string" enum:"BucketCannedACL"` @@ -18827,7 +18827,7 @@ func (s PutBucketAclOutput) GoString() string { } type PutBucketAnalyticsConfigurationInput struct { - _ struct{} `type:"structure" payload:"AnalyticsConfiguration"` + _ struct{} `locationName:"PutBucketAnalyticsConfigurationRequest" type:"structure" payload:"AnalyticsConfiguration"` // The configuration and any analyses for the analytics filter. 
// @@ -18922,7 +18922,7 @@ func (s PutBucketAnalyticsConfigurationOutput) GoString() string { } type PutBucketCorsInput struct { - _ struct{} `type:"structure" payload:"CORSConfiguration"` + _ struct{} `locationName:"PutBucketCorsRequest" type:"structure" payload:"CORSConfiguration"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -19004,7 +19004,7 @@ func (s PutBucketCorsOutput) GoString() string { } type PutBucketEncryptionInput struct { - _ struct{} `type:"structure" payload:"ServerSideEncryptionConfiguration"` + _ struct{} `locationName:"PutBucketEncryptionRequest" type:"structure" payload:"ServerSideEncryptionConfiguration"` // Specifies default encryption for a bucket using server-side encryption with // Amazon S3-managed keys (SSE-S3) or AWS KMS-managed keys (SSE-KMS). For information @@ -19089,7 +19089,7 @@ func (s PutBucketEncryptionOutput) GoString() string { } type PutBucketInventoryConfigurationInput struct { - _ struct{} `type:"structure" payload:"InventoryConfiguration"` + _ struct{} `locationName:"PutBucketInventoryConfigurationRequest" type:"structure" payload:"InventoryConfiguration"` // The name of the bucket where the inventory configuration will be stored. // @@ -19184,7 +19184,7 @@ func (s PutBucketInventoryConfigurationOutput) GoString() string { } type PutBucketLifecycleConfigurationInput struct { - _ struct{} `type:"structure" payload:"LifecycleConfiguration"` + _ struct{} `locationName:"PutBucketLifecycleConfigurationRequest" type:"structure" payload:"LifecycleConfiguration"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -19260,7 +19260,7 @@ func (s PutBucketLifecycleConfigurationOutput) GoString() string { } type PutBucketLifecycleInput struct { - _ struct{} `type:"structure" payload:"LifecycleConfiguration"` + _ struct{} `locationName:"PutBucketLifecycleRequest" type:"structure" payload:"LifecycleConfiguration"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -19333,7 +19333,7 @@ func (s PutBucketLifecycleOutput) GoString() string { } type PutBucketLoggingInput struct { - _ struct{} `type:"structure" payload:"BucketLoggingStatus"` + _ struct{} `locationName:"PutBucketLoggingRequest" type:"structure" payload:"BucketLoggingStatus"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -19410,7 +19410,7 @@ func (s PutBucketLoggingOutput) GoString() string { } type PutBucketMetricsConfigurationInput struct { - _ struct{} `type:"structure" payload:"MetricsConfiguration"` + _ struct{} `locationName:"PutBucketMetricsConfigurationRequest" type:"structure" payload:"MetricsConfiguration"` // The name of the bucket for which the metrics configuration is set. 
// @@ -19505,7 +19505,7 @@ func (s PutBucketMetricsConfigurationOutput) GoString() string { } type PutBucketNotificationConfigurationInput struct { - _ struct{} `type:"structure" payload:"NotificationConfiguration"` + _ struct{} `locationName:"PutBucketNotificationConfigurationRequest" type:"structure" payload:"NotificationConfiguration"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -19585,7 +19585,7 @@ func (s PutBucketNotificationConfigurationOutput) GoString() string { } type PutBucketNotificationInput struct { - _ struct{} `type:"structure" payload:"NotificationConfiguration"` + _ struct{} `locationName:"PutBucketNotificationRequest" type:"structure" payload:"NotificationConfiguration"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -19657,7 +19657,7 @@ func (s PutBucketNotificationOutput) GoString() string { } type PutBucketPolicyInput struct { - _ struct{} `type:"structure" payload:"Policy"` + _ struct{} `locationName:"PutBucketPolicyRequest" type:"structure" payload:"Policy"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -19741,7 +19741,7 @@ func (s PutBucketPolicyOutput) GoString() string { } type PutBucketReplicationInput struct { - _ struct{} `type:"structure" payload:"ReplicationConfiguration"` + _ struct{} `locationName:"PutBucketReplicationRequest" type:"structure" payload:"ReplicationConfiguration"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -19830,7 +19830,7 @@ func (s PutBucketReplicationOutput) GoString() string { } type PutBucketRequestPaymentInput struct { - _ struct{} `type:"structure" payload:"RequestPaymentConfiguration"` + _ struct{} `locationName:"PutBucketRequestPaymentRequest" type:"structure" payload:"RequestPaymentConfiguration"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -19907,7 +19907,7 @@ func (s PutBucketRequestPaymentOutput) GoString() string { } type PutBucketTaggingInput struct { - _ struct{} `type:"structure" payload:"Tagging"` + _ struct{} `locationName:"PutBucketTaggingRequest" type:"structure" payload:"Tagging"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -19984,7 +19984,7 @@ func (s PutBucketTaggingOutput) GoString() string { } type PutBucketVersioningInput struct { - _ struct{} `type:"structure" payload:"VersioningConfiguration"` + _ struct{} `locationName:"PutBucketVersioningRequest" type:"structure" payload:"VersioningConfiguration"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -20070,7 +20070,7 @@ func (s PutBucketVersioningOutput) GoString() string { } type PutBucketWebsiteInput struct { - _ struct{} `type:"structure" payload:"WebsiteConfiguration"` + _ struct{} `locationName:"PutBucketWebsiteRequest" type:"structure" payload:"WebsiteConfiguration"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -20149,7 +20149,7 @@ func (s PutBucketWebsiteOutput) GoString() string { } type PutObjectAclInput struct { - _ struct{} `type:"structure" payload:"AccessControlPolicy"` + _ struct{} `locationName:"PutObjectAclRequest" type:"structure" payload:"AccessControlPolicy"` // The canned ACL to 
apply to the object. ACL *string `location:"header" locationName:"x-amz-acl" type:"string" enum:"ObjectCannedACL"` @@ -20324,7 +20324,7 @@ func (s *PutObjectAclOutput) SetRequestCharged(v string) *PutObjectAclOutput { } type PutObjectInput struct { - _ struct{} `type:"structure" payload:"Body"` + _ struct{} `locationName:"PutObjectRequest" type:"structure" payload:"Body"` // The canned ACL to apply to the object. ACL *string `location:"header" locationName:"x-amz-acl" type:"string" enum:"ObjectCannedACL"` @@ -20671,7 +20671,7 @@ func (s *PutObjectInput) SetWebsiteRedirectLocation(v string) *PutObjectInput { } type PutObjectLegalHoldInput struct { - _ struct{} `type:"structure" payload:"LegalHold"` + _ struct{} `locationName:"PutObjectLegalHoldRequest" type:"structure" payload:"LegalHold"` // The bucket containing the object that you want to place a Legal Hold on. // @@ -20791,7 +20791,7 @@ func (s *PutObjectLegalHoldOutput) SetRequestCharged(v string) *PutObjectLegalHo } type PutObjectLockConfigurationInput struct { - _ struct{} `type:"structure" payload:"ObjectLockConfiguration"` + _ struct{} `locationName:"PutObjectLockConfigurationRequest" type:"structure" payload:"ObjectLockConfiguration"` // The bucket whose object lock configuration you want to create or replace. // @@ -20998,7 +20998,7 @@ func (s *PutObjectOutput) SetVersionId(v string) *PutObjectOutput { } type PutObjectRetentionInput struct { - _ struct{} `type:"structure" payload:"Retention"` + _ struct{} `locationName:"PutObjectRetentionRequest" type:"structure" payload:"Retention"` // The bucket that contains the object you want to apply this Object Retention // configuration to. @@ -21129,7 +21129,7 @@ func (s *PutObjectRetentionOutput) SetRequestCharged(v string) *PutObjectRetenti } type PutObjectTaggingInput struct { - _ struct{} `type:"structure" payload:"Tagging"` + _ struct{} `locationName:"PutObjectTaggingRequest" type:"structure" payload:"Tagging"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -21237,7 +21237,7 @@ func (s *PutObjectTaggingOutput) SetVersionId(v string) *PutObjectTaggingOutput } type PutPublicAccessBlockInput struct { - _ struct{} `type:"structure" payload:"PublicAccessBlockConfiguration"` + _ struct{} `locationName:"PutPublicAccessBlockRequest" type:"structure" payload:"PublicAccessBlockConfiguration"` // The name of the Amazon S3 bucket whose PublicAccessBlock configuration you // want to set. @@ -21999,7 +21999,7 @@ func (s *RequestProgress) SetEnabled(v bool) *RequestProgress { } type RestoreObjectInput struct { - _ struct{} `type:"structure" payload:"RestoreRequest"` + _ struct{} `locationName:"RestoreObjectRequest" type:"structure" payload:"RestoreRequest"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -23715,7 +23715,7 @@ func (s *Transition) SetStorageClass(v string) *Transition { } type UploadPartCopyInput struct { - _ struct{} `type:"structure"` + _ struct{} `locationName:"UploadPartCopyRequest" type:"structure"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` @@ -24045,7 +24045,7 @@ func (s *UploadPartCopyOutput) SetServerSideEncryption(v string) *UploadPartCopy } type UploadPartInput struct { - _ struct{} `type:"structure" payload:"Body"` + _ struct{} `locationName:"UploadPartRequest" type:"structure" payload:"Body"` // Object data. 
Body io.ReadSeeker `type:"blob"` diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/doc_custom.go b/vendor/github.com/aws/aws-sdk-go/service/s3/doc_custom.go index 39b912c26..4b65f7153 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/s3/doc_custom.go +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/doc_custom.go @@ -63,6 +63,20 @@ // See the s3manager package's Downloader type documentation for more information. // https://docs.aws.amazon.com/sdk-for-go/api/service/s3/s3manager/#Downloader // +// Automatic URI cleaning +// +// Interacting with objects whose keys contain adjacent slashes (e.g. bucketname/foo//bar/objectname) +// requires setting DisableRestProtocolURICleaning to true in the aws.Config struct +// used by the service client. +// +// svc := s3.New(sess, &aws.Config{ +// DisableRestProtocolURICleaning: aws.Bool(true), +// }) +// out, err := svc.GetObject(&s3.GetObjectInput { +// Bucket: aws.String("bucketname"), +// Key: aws.String("//foo//bar//moo"), +// }) +// // Get Bucket Region // // GetBucketRegion will attempt to get the region for a bucket using a region diff --git a/vendor/github.com/aws/aws-sdk-go/service/sts/customizations.go b/vendor/github.com/aws/aws-sdk-go/service/sts/customizations.go new file mode 100644 index 000000000..d5307fcaa --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/sts/customizations.go @@ -0,0 +1,11 @@ +package sts + +import "github.com/aws/aws-sdk-go/aws/request" + +func init() { + initRequest = customizeRequest +} + +func customizeRequest(r *request.Request) { + r.RetryErrorCodes = append(r.RetryErrorCodes, ErrCodeIDPCommunicationErrorException) +} diff --git a/vendor/github.com/dimchansky/utfbom/.travis.yml b/vendor/github.com/dimchansky/utfbom/.travis.yml index df88e37b2..3512c8519 100644 --- a/vendor/github.com/dimchansky/utfbom/.travis.yml +++ b/vendor/github.com/dimchansky/utfbom/.travis.yml @@ -1,8 +1,8 @@ language: go go: - - 1.7 - - tip + - '1.10' + - '1.11' # sudo=false makes the build run using a container sudo: false @@ -15,4 +15,4 @@ before_install: script: - gofiles=$(find ./ -name '*.go') && [ -z "$gofiles" ] || unformatted=$(goimports -l $gofiles) && [ -z "$unformatted" ] || (echo >&2 "Go files must be formatted with gofmt. Following files has problem:\n $unformatted" && false) - golint ./... 
# This won't break the build, just show warnings - - $HOME/gopath/bin/goveralls -service=travis-ci \ No newline at end of file + - $HOME/gopath/bin/goveralls -service=travis-ci diff --git a/vendor/github.com/dimchansky/utfbom/README.md b/vendor/github.com/dimchansky/utfbom/README.md index 2f06ecacd..8ece28008 100644 --- a/vendor/github.com/dimchansky/utfbom/README.md +++ b/vendor/github.com/dimchansky/utfbom/README.md @@ -37,22 +37,7 @@ func trySkip(byteData []byte) { // skip BOM and detect encoding sr, enc := utfbom.Skip(bytes.NewReader(byteData)) - var encStr string - switch enc { - case utfbom.UTF8: - encStr = "UTF8" - case utfbom.UTF16BigEndian: - encStr = "UTF16 big endian" - case utfbom.UTF16LittleEndian: - encStr = "UTF16 little endian" - case utfbom.UTF32BigEndian: - encStr = "UTF32 big endian" - case utfbom.UTF32LittleEndian: - encStr = "UTF32 little endian" - default: - encStr = "Unknown, no byte-order mark found" - } - fmt.Println("Detected encoding:", encStr) + fmt.Printf("Detected encoding: %s\n", enc) output, err = ioutil.ReadAll(sr) if err != nil { fmt.Println(err) @@ -74,7 +59,7 @@ ReadAll with BOM detection and skipping [104 101 108 108 111] Input: [104 101 108 108 111] ReadAll with BOM skipping [104 101 108 108 111] -Detected encoding: Unknown, no byte-order mark found +Detected encoding: Unknown ReadAll with BOM detection and skipping [104 101 108 108 111] ``` diff --git a/vendor/github.com/dimchansky/utfbom/utfbom.go b/vendor/github.com/dimchansky/utfbom/utfbom.go index 648184a12..77a303e56 100644 --- a/vendor/github.com/dimchansky/utfbom/utfbom.go +++ b/vendor/github.com/dimchansky/utfbom/utfbom.go @@ -32,6 +32,24 @@ const ( UTF32LittleEndian ) +// String returns a user-friendly string representation of the encoding. Satisfies fmt.Stringer interface. +func (e Encoding) String() string { + switch e { + case UTF8: + return "UTF8" + case UTF16BigEndian: + return "UTF16BigEndian" + case UTF16LittleEndian: + return "UTF16LittleEndian" + case UTF32BigEndian: + return "UTF32BigEndian" + case UTF32LittleEndian: + return "UTF32LittleEndian" + default: + return "Unknown" + } +} + const maxConsecutiveEmptyReads = 100 // Skip creates Reader which automatically detects BOM (Unicode Byte Order Mark) and removes it as necessary. diff --git a/vendor/github.com/golang/mock/mockgen/mockgen.go b/vendor/github.com/golang/mock/mockgen/mockgen.go new file mode 100644 index 000000000..a64bd2554 --- /dev/null +++ b/vendor/github.com/golang/mock/mockgen/mockgen.go @@ -0,0 +1,588 @@ +// Copyright 2010 Google Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// MockGen generates mock implementations of Go interfaces. +package main + +// TODO: This does not support recursive embedded interfaces. +// TODO: This does not support embedding package-local interfaces in a separate file. 
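Returning to the `utfbom` upgrade above: `Encoding` now satisfies `fmt.Stringer`, which is what lets the README drop its switch statement. A small usage sketch:

```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"

	"github.com/dimchansky/utfbom"
)

func main() {
	data := []byte("\xEF\xBB\xBFhello") // UTF-8 BOM followed by payload
	sr, enc := utfbom.Skip(bytes.NewReader(data))
	fmt.Printf("Detected encoding: %s\n", enc) // "UTF8", via the new String method

	out, err := ioutil.ReadAll(sr)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", out) // "hello", BOM removed
}
```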
+ +import ( + "bytes" + "flag" + "fmt" + "go/build" + "go/format" + "go/token" + "io" + "io/ioutil" + "log" + "os" + "path" + "path/filepath" + "sort" + "strconv" + "strings" + "unicode" + + "github.com/golang/mock/mockgen/model" +) + +const ( + gomockImportPath = "github.com/golang/mock/gomock" +) + +var ( + source = flag.String("source", "", "(source mode) Input Go source file; enables source mode.") + destination = flag.String("destination", "", "Output file; defaults to stdout.") + mockNames = flag.String("mock_names", "", "Comma-separated interfaceName=mockName pairs of explicit mock names to use. Mock names default to 'Mock'+ interfaceName suffix.") + packageOut = flag.String("package", "", "Package of the generated code; defaults to the package of the input with a 'mock_' prefix.") + selfPackage = flag.String("self_package", "", "The full package import path for the generated code. The purpose of this flag is to prevent import cycles in the generated code by trying to include its own package. This can happen if the mock's package is set to one of its inputs (usually the main one) and the output is stdio so mockgen cannot detect the final output package. Setting this flag will then tell mockgen which import to exclude.") + writePkgComment = flag.Bool("write_package_comment", true, "Writes package documentation comment (godoc) if true.") + copyrightFile = flag.String("copyright_file", "", "Copyright file used to add copyright header") + + debugParser = flag.Bool("debug_parser", false, "Print out parser results only.") +) + +func main() { + flag.Usage = usage + flag.Parse() + + var pkg *model.Package + var err error + if *source != "" { + pkg, err = parseFile(*source) + } else { + if flag.NArg() != 2 { + usage() + log.Fatal("Expected exactly two arguments") + } + pkg, err = reflect(flag.Arg(0), strings.Split(flag.Arg(1), ",")) + } + if err != nil { + log.Fatalf("Loading input failed: %v", err) + } + + if *debugParser { + pkg.Print(os.Stdout) + return + } + + dst := os.Stdout + if len(*destination) > 0 { + if err := os.MkdirAll(filepath.Dir(*destination), os.ModePerm); err != nil { + log.Fatalf("Unable to create directory: %v", err) + } + f, err := os.Create(*destination) + if err != nil { + log.Fatalf("Failed opening destination file: %v", err) + } + defer f.Close() + dst = f + } + + packageName := *packageOut + if packageName == "" { + // pkg.Name in reflect mode is the base name of the import path, + // which might have characters that are illegal to have in package names. + packageName = "mock_" + sanitize(pkg.Name) + } + + // outputPackagePath represents the fully qualified name of the package of + // the generated code. Its purposes are to prevent the module from importing + // itself and to prevent qualifying type names that come from its own + // package (i.e. if there is a type called X then we want to print "X" not + // "package.X" since "package" is this package). This can happen if the mock + // is output into an already existing package. 
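For reference, `parseMockNames` above turns the `-mock_names` flag into an interface-to-mock-name map, with `mockName` later defaulting to a `Mock` prefix. A standalone mirror of that parsing (tolerant instead of `log.Fatalf`, so it runs as-is):

```go
package main

import (
	"fmt"
	"strings"
)

// Mirrors parseMockNames: "Driver=FakeDriver,Conn=FakeConn" becomes
// map[Conn:FakeConn Driver:FakeDriver]; malformed pairs are skipped here
// rather than fatally rejected.
func parseMockNamesDemo(names string) map[string]string {
	m := make(map[string]string)
	for _, kv := range strings.Split(names, ",") {
		parts := strings.SplitN(kv, "=", 2)
		if len(parts) == 2 && parts[1] != "" {
			m[parts[0]] = parts[1]
		}
	}
	return m
}

func main() {
	fmt.Println(parseMockNamesDemo("Driver=FakeDriver,Conn=FakeConn"))
}
```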
+ outputPackagePath := *selfPackage + if len(outputPackagePath) == 0 && len(*destination) > 0 { + dst, _ := filepath.Abs(filepath.Dir(*destination)) + for _, prefix := range build.Default.SrcDirs() { + if strings.HasPrefix(dst, prefix) { + if rel, err := filepath.Rel(prefix, dst); err == nil { + outputPackagePath = rel + break + } + } + } + } + + g := new(generator) + if *source != "" { + g.filename = *source + } else { + g.srcPackage = flag.Arg(0) + g.srcInterfaces = flag.Arg(1) + } + + if *mockNames != "" { + g.mockNames = parseMockNames(*mockNames) + } + if *copyrightFile != "" { + header, err := ioutil.ReadFile(*copyrightFile) + if err != nil { + log.Fatalf("Failed reading copyright file: %v", err) + } + + g.copyrightHeader = string(header) + } + if err := g.Generate(pkg, packageName, outputPackagePath); err != nil { + log.Fatalf("Failed generating mock: %v", err) + } + if _, err := dst.Write(g.Output()); err != nil { + log.Fatalf("Failed writing to destination: %v", err) + } +} + +func parseMockNames(names string) map[string]string { + mocksMap := make(map[string]string) + for _, kv := range strings.Split(names, ",") { + parts := strings.SplitN(kv, "=", 2) + if len(parts) != 2 || parts[1] == "" { + log.Fatalf("bad mock names spec: %v", kv) + } + mocksMap[parts[0]] = parts[1] + } + return mocksMap +} + +func usage() { + io.WriteString(os.Stderr, usageText) + flag.PrintDefaults() +} + +const usageText = `mockgen has two modes of operation: source and reflect. + +Source mode generates mock interfaces from a source file. +It is enabled by using the -source flag. Other flags that +may be useful in this mode are -imports and -aux_files. +Example: + mockgen -source=foo.go [other options] + +Reflect mode generates mock interfaces by building a program +that uses reflection to understand interfaces. It is enabled +by passing two non-flag arguments: an import path, and a +comma-separated list of symbols. +Example: + mockgen database/sql/driver Conn,Driver + +` + +type generator struct { + buf bytes.Buffer + indent string + mockNames map[string]string // may be empty + filename string // may be empty + srcPackage, srcInterfaces string // may be empty + copyrightHeader string + + packageMap map[string]string // map from import path to package name +} + +func (g *generator) p(format string, args ...interface{}) { + fmt.Fprintf(&g.buf, g.indent+format+"\n", args...) +} + +func (g *generator) in() { + g.indent += "\t" +} + +func (g *generator) out() { + if len(g.indent) > 0 { + g.indent = g.indent[0 : len(g.indent)-1] + } +} + +func removeDot(s string) string { + if len(s) > 0 && s[len(s)-1] == '.' { + return s[0 : len(s)-1] + } + return s +} + +// sanitize cleans up a string to make a suitable package name. +func sanitize(s string) string { + t := "" + for _, r := range s { + if t == "" { + if unicode.IsLetter(r) || r == '_' { + t += string(r) + continue + } + } else { + if unicode.IsLetter(r) || unicode.IsDigit(r) || r == '_' { + t += string(r) + continue + } + } + t += "_" + } + if t == "_" { + t = "x" + } + return t +} + +func (g *generator) Generate(pkg *model.Package, pkgName string, outputPackagePath string) error { + if pkgName != pkg.Name { + outputPackagePath = "" + } + + if g.copyrightHeader != "" { + lines := strings.Split(g.copyrightHeader, "\n") + for _, line := range lines { + g.p("// %s", line) + } + g.p("") + } + + g.p("// Code generated by MockGen. 
DO NOT EDIT.") + if g.filename != "" { + g.p("// Source: %v", g.filename) + } else { + g.p("// Source: %v (interfaces: %v)", g.srcPackage, g.srcInterfaces) + } + g.p("") + + // Get all required imports, and generate unique names for them all. + im := pkg.Imports() + im[gomockImportPath] = true + + // Only import reflect if it's used. We only use reflect in mocked methods + // so only import if any of the mocked interfaces have methods. + for _, intf := range pkg.Interfaces { + if len(intf.Methods) > 0 { + im["reflect"] = true + break + } + } + + // Sort keys to make import alias generation predictable + sortedPaths := make([]string, len(im), len(im)) + x := 0 + for pth := range im { + sortedPaths[x] = pth + x++ + } + sort.Strings(sortedPaths) + + g.packageMap = make(map[string]string, len(im)) + localNames := make(map[string]bool, len(im)) + for _, pth := range sortedPaths { + base := sanitize(path.Base(pth)) + + // Local names for an imported package can usually be the basename of the import path. + // A couple of situations don't permit that, such as duplicate local names + // (e.g. importing "html/template" and "text/template"), or where the basename is + // a keyword (e.g. "foo/case"). + // try base0, base1, ... + pkgName := base + i := 0 + for localNames[pkgName] || token.Lookup(pkgName).IsKeyword() { + pkgName = base + strconv.Itoa(i) + i++ + } + + g.packageMap[pth] = pkgName + localNames[pkgName] = true + } + + if *writePkgComment { + g.p("// Package %v is a generated GoMock package.", pkgName) + } + g.p("package %v", pkgName) + g.p("") + g.p("import (") + g.in() + for path, pkg := range g.packageMap { + if path == outputPackagePath { + continue + } + g.p("%v %q", pkg, path) + } + for _, path := range pkg.DotImports { + g.p(". %q", path) + } + g.out() + g.p(")") + + for _, intf := range pkg.Interfaces { + if err := g.GenerateMockInterface(intf, outputPackagePath); err != nil { + return err + } + } + + return nil +} + +// The name of the mock type to use for the given interface identifier. +func (g *generator) mockName(typeName string) string { + if mockName, ok := g.mockNames[typeName]; ok { + return mockName + } + + return "Mock" + typeName +} + +func (g *generator) GenerateMockInterface(intf *model.Interface, outputPackagePath string) error { + mockType := g.mockName(intf.Name) + + g.p("") + g.p("// %v is a mock of %v interface", mockType, intf.Name) + g.p("type %v struct {", mockType) + g.in() + g.p("ctrl *gomock.Controller") + g.p("recorder *%vMockRecorder", mockType) + g.out() + g.p("}") + g.p("") + + g.p("// %vMockRecorder is the mock recorder for %v", mockType, mockType) + g.p("type %vMockRecorder struct {", mockType) + g.in() + g.p("mock *%v", mockType) + g.out() + g.p("}") + g.p("") + + // TODO: Re-enable this if we can import the interface reliably. + //g.p("// Verify that the mock satisfies the interface at compile time.") + //g.p("var _ %v = (*%v)(nil)", typeName, mockType) + //g.p("") + + g.p("// New%v creates a new mock instance", mockType) + g.p("func New%v(ctrl *gomock.Controller) *%v {", mockType, mockType) + g.in() + g.p("mock := &%v{ctrl: ctrl}", mockType) + g.p("mock.recorder = &%vMockRecorder{mock}", mockType) + g.p("return mock") + g.out() + g.p("}") + g.p("") + + // XXX: possible name collision here if someone has EXPECT in their interface. 
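+ // A typical call site for the generated mock (sketch; assumes an
+ // interface named Foo with a method Bar):
+ //
+ //	ctrl := gomock.NewController(t)
+ //	m := NewMockFoo(ctrl)
+ //	m.EXPECT().Bar().Return(nil)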
+ g.p("// EXPECT returns an object that allows the caller to indicate expected use") + g.p("func (m *%v) EXPECT() *%vMockRecorder {", mockType, mockType) + g.in() + g.p("return m.recorder") + g.out() + g.p("}") + + g.GenerateMockMethods(mockType, intf, outputPackagePath) + + return nil +} + +func (g *generator) GenerateMockMethods(mockType string, intf *model.Interface, pkgOverride string) { + for _, m := range intf.Methods { + g.p("") + g.GenerateMockMethod(mockType, m, pkgOverride) + g.p("") + g.GenerateMockRecorderMethod(mockType, m) + } +} + +func makeArgString(argNames, argTypes []string) string { + args := make([]string, len(argNames)) + for i, name := range argNames { + // specify the type only once for consecutive args of the same type + if i+1 < len(argTypes) && argTypes[i] == argTypes[i+1] { + args[i] = name + } else { + args[i] = name + " " + argTypes[i] + } + } + return strings.Join(args, ", ") +} + +// GenerateMockMethod generates a mock method implementation. +// If non-empty, pkgOverride is the package in which unqualified types reside. +func (g *generator) GenerateMockMethod(mockType string, m *model.Method, pkgOverride string) error { + argNames := g.getArgNames(m) + argTypes := g.getArgTypes(m, pkgOverride) + argString := makeArgString(argNames, argTypes) + + rets := make([]string, len(m.Out)) + for i, p := range m.Out { + rets[i] = p.Type.String(g.packageMap, pkgOverride) + } + retString := strings.Join(rets, ", ") + if len(rets) > 1 { + retString = "(" + retString + ")" + } + if retString != "" { + retString = " " + retString + } + + ia := newIdentifierAllocator(argNames) + idRecv := ia.allocateIdentifier("m") + + g.p("// %v mocks base method", m.Name) + g.p("func (%v *%v) %v(%v)%v {", idRecv, mockType, m.Name, argString, retString) + g.in() + g.p("%s.ctrl.T.Helper()", idRecv) + + var callArgs string + if m.Variadic == nil { + if len(argNames) > 0 { + callArgs = ", " + strings.Join(argNames, ", ") + } + } else { + // Non-trivial. The generated code must build a []interface{}, + // but the variadic argument may be any type. + idVarArgs := ia.allocateIdentifier("varargs") + idVArg := ia.allocateIdentifier("a") + g.p("%s := []interface{}{%s}", idVarArgs, strings.Join(argNames[:len(argNames)-1], ", ")) + g.p("for _, %s := range %s {", idVArg, argNames[len(argNames)-1]) + g.in() + g.p("%s = append(%s, %s)", idVarArgs, idVarArgs, idVArg) + g.out() + g.p("}") + callArgs = ", " + idVarArgs + "..." + } + if len(m.Out) == 0 { + g.p(`%v.ctrl.Call(%v, %q%v)`, idRecv, idRecv, m.Name, callArgs) + } else { + idRet := ia.allocateIdentifier("ret") + g.p(`%v := %v.ctrl.Call(%v, %q%v)`, idRet, idRecv, idRecv, m.Name, callArgs) + + // Go does not allow "naked" type assertions on nil values, so we use the two-value form here. + // The value of that is either (x.(T), true) or (Z, false), where Z is the zero value for T. + // Happily, this coincides with the semantics we want here. 
+ retNames := make([]string, len(rets)) + for i, t := range rets { + retNames[i] = ia.allocateIdentifier(fmt.Sprintf("ret%d", i)) + g.p("%s, _ := %s[%d].(%s)", retNames[i], idRet, i, t) + } + g.p("return " + strings.Join(retNames, ", ")) + } + + g.out() + g.p("}") + return nil +} + +func (g *generator) GenerateMockRecorderMethod(mockType string, m *model.Method) error { + argNames := g.getArgNames(m) + + var argString string + if m.Variadic == nil { + argString = strings.Join(argNames, ", ") + } else { + argString = strings.Join(argNames[:len(argNames)-1], ", ") + } + if argString != "" { + argString += " interface{}" + } + + if m.Variadic != nil { + if argString != "" { + argString += ", " + } + argString += fmt.Sprintf("%s ...interface{}", argNames[len(argNames)-1]) + } + + ia := newIdentifierAllocator(argNames) + idRecv := ia.allocateIdentifier("mr") + + g.p("// %v indicates an expected call of %v", m.Name, m.Name) + g.p("func (%s *%vMockRecorder) %v(%v) *gomock.Call {", idRecv, mockType, m.Name, argString) + g.in() + g.p("%s.mock.ctrl.T.Helper()", idRecv) + + var callArgs string + if m.Variadic == nil { + if len(argNames) > 0 { + callArgs = ", " + strings.Join(argNames, ", ") + } + } else { + if len(argNames) == 1 { + // Easy: just use ... to push the arguments through. + callArgs = ", " + argNames[0] + "..." + } else { + // Hard: create a temporary slice. + idVarArgs := ia.allocateIdentifier("varargs") + g.p("%s := append([]interface{}{%s}, %s...)", + idVarArgs, + strings.Join(argNames[:len(argNames)-1], ", "), + argNames[len(argNames)-1]) + callArgs = ", " + idVarArgs + "..." + } + } + g.p(`return %s.mock.ctrl.RecordCallWithMethodType(%s.mock, "%s", reflect.TypeOf((*%s)(nil).%s)%s)`, idRecv, idRecv, m.Name, mockType, m.Name, callArgs) + + g.out() + g.p("}") + return nil +} + +func (g *generator) getArgNames(m *model.Method) []string { + argNames := make([]string, len(m.In)) + for i, p := range m.In { + name := p.Name + if name == "" { + name = fmt.Sprintf("arg%d", i) + } + argNames[i] = name + } + if m.Variadic != nil { + name := m.Variadic.Name + if name == "" { + name = fmt.Sprintf("arg%d", len(m.In)) + } + argNames = append(argNames, name) + } + return argNames +} + +func (g *generator) getArgTypes(m *model.Method, pkgOverride string) []string { + argTypes := make([]string, len(m.In)) + for i, p := range m.In { + argTypes[i] = p.Type.String(g.packageMap, pkgOverride) + } + if m.Variadic != nil { + argTypes = append(argTypes, "..."+m.Variadic.Type.String(g.packageMap, pkgOverride)) + } + return argTypes +} + +type identifierAllocator map[string]struct{} + +func newIdentifierAllocator(taken []string) identifierAllocator { + a := make(identifierAllocator, len(taken)) + for _, s := range taken { + a[s] = struct{}{} + } + return a +} + +func (o identifierAllocator) allocateIdentifier(want string) string { + id := want + for i := 2; ; i++ { + if _, ok := o[id]; !ok { + o[id] = struct{}{} + return id + } + id = want + "_" + strconv.Itoa(i) + } +} + +// Output returns the generator's output, formatted in the standard Go style. 
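+// If the buffer is not valid Go source, Output aborts the process and dumps
+// the unformatted buffer so the offending generated code can be inspected.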
+func (g *generator) Output() []byte { + src, err := format.Source(g.buf.Bytes()) + if err != nil { + log.Fatalf("Failed to format generated source code: %s\n%s", err, g.buf.String()) + } + return src +} diff --git a/vendor/github.com/golang/mock/mockgen/model/model.go b/vendor/github.com/golang/mock/mockgen/model/model.go new file mode 100644 index 000000000..8113e3d39 --- /dev/null +++ b/vendor/github.com/golang/mock/mockgen/model/model.go @@ -0,0 +1,461 @@ +// Copyright 2012 Google Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Package model contains the data model necessary for generating mock implementations. +package model + +import ( + "encoding/gob" + "fmt" + "io" + "reflect" + "strings" +) + +// pkgPath is the importable path for package model +const pkgPath = "github.com/golang/mock/mockgen/model" + +// Package is a Go package. It may be a subset. +type Package struct { + Name string + Interfaces []*Interface + DotImports []string +} + +func (pkg *Package) Print(w io.Writer) { + fmt.Fprintf(w, "package %s\n", pkg.Name) + for _, intf := range pkg.Interfaces { + intf.Print(w) + } +} + +// Imports returns the imports needed by the Package as a set of import paths. +func (pkg *Package) Imports() map[string]bool { + im := make(map[string]bool) + for _, intf := range pkg.Interfaces { + intf.addImports(im) + } + return im +} + +// Interface is a Go interface. +type Interface struct { + Name string + Methods []*Method +} + +func (intf *Interface) Print(w io.Writer) { + fmt.Fprintf(w, "interface %s\n", intf.Name) + for _, m := range intf.Methods { + m.Print(w) + } +} + +func (intf *Interface) addImports(im map[string]bool) { + for _, m := range intf.Methods { + m.addImports(im) + } +} + +// Method is a single method of an interface. +type Method struct { + Name string + In, Out []*Parameter + Variadic *Parameter // may be nil +} + +func (m *Method) Print(w io.Writer) { + fmt.Fprintf(w, " - method %s\n", m.Name) + if len(m.In) > 0 { + fmt.Fprintf(w, " in:\n") + for _, p := range m.In { + p.Print(w) + } + } + if m.Variadic != nil { + fmt.Fprintf(w, " ...:\n") + m.Variadic.Print(w) + } + if len(m.Out) > 0 { + fmt.Fprintf(w, " out:\n") + for _, p := range m.Out { + p.Print(w) + } + } +} + +func (m *Method) addImports(im map[string]bool) { + for _, p := range m.In { + p.Type.addImports(im) + } + if m.Variadic != nil { + m.Variadic.Type.addImports(im) + } + for _, p := range m.Out { + p.Type.addImports(im) + } +} + +// Parameter is an argument or return parameter of a method. +type Parameter struct { + Name string // may be empty + Type Type +} + +func (p *Parameter) Print(w io.Writer) { + n := p.Name + if n == "" { + n = `""` + } + fmt.Fprintf(w, " - %v: %v\n", n, p.Type.String(nil, "")) +} + +// Type is a Go type. 
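+// A Type renders itself as Go source via String: pm maps import paths to
+// local package names, and pkgOverride names the output package so that its
+// own types are printed unqualified.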
+type Type interface { + String(pm map[string]string, pkgOverride string) string + addImports(im map[string]bool) +} + +func init() { + gob.Register(&ArrayType{}) + gob.Register(&ChanType{}) + gob.Register(&FuncType{}) + gob.Register(&MapType{}) + gob.Register(&NamedType{}) + gob.Register(&PointerType{}) + + // Call gob.RegisterName to make sure it has the consistent name registered + // for both gob decoder and encoder. + // + // For a non-pointer type, gob.Register will try to get package full path by + // calling rt.PkgPath() for a name to register. If your project has vendor + // directory, it is possible that PkgPath will get a path like this: + // ../../../vendor/github.com/golang/mock/mockgen/model + gob.RegisterName(pkgPath+".PredeclaredType", PredeclaredType("")) +} + +// ArrayType is an array or slice type. +type ArrayType struct { + Len int // -1 for slices, >= 0 for arrays + Type Type +} + +func (at *ArrayType) String(pm map[string]string, pkgOverride string) string { + s := "[]" + if at.Len > -1 { + s = fmt.Sprintf("[%d]", at.Len) + } + return s + at.Type.String(pm, pkgOverride) +} + +func (at *ArrayType) addImports(im map[string]bool) { at.Type.addImports(im) } + +// ChanType is a channel type. +type ChanType struct { + Dir ChanDir // 0, 1 or 2 + Type Type +} + +func (ct *ChanType) String(pm map[string]string, pkgOverride string) string { + s := ct.Type.String(pm, pkgOverride) + if ct.Dir == RecvDir { + return "<-chan " + s + } + if ct.Dir == SendDir { + return "chan<- " + s + } + return "chan " + s +} + +func (ct *ChanType) addImports(im map[string]bool) { ct.Type.addImports(im) } + +// ChanDir is a channel direction. +type ChanDir int + +const ( + RecvDir ChanDir = 1 + SendDir ChanDir = 2 +) + +// FuncType is a function type. +type FuncType struct { + In, Out []*Parameter + Variadic *Parameter // may be nil +} + +func (ft *FuncType) String(pm map[string]string, pkgOverride string) string { + args := make([]string, len(ft.In)) + for i, p := range ft.In { + args[i] = p.Type.String(pm, pkgOverride) + } + if ft.Variadic != nil { + args = append(args, "..."+ft.Variadic.Type.String(pm, pkgOverride)) + } + rets := make([]string, len(ft.Out)) + for i, p := range ft.Out { + rets[i] = p.Type.String(pm, pkgOverride) + } + retString := strings.Join(rets, ", ") + if nOut := len(ft.Out); nOut == 1 { + retString = " " + retString + } else if nOut > 1 { + retString = " (" + retString + ")" + } + return "func(" + strings.Join(args, ", ") + ")" + retString +} + +func (ft *FuncType) addImports(im map[string]bool) { + for _, p := range ft.In { + p.Type.addImports(im) + } + if ft.Variadic != nil { + ft.Variadic.Type.addImports(im) + } + for _, p := range ft.Out { + p.Type.addImports(im) + } +} + +// MapType is a map type. +type MapType struct { + Key, Value Type +} + +func (mt *MapType) String(pm map[string]string, pkgOverride string) string { + return "map[" + mt.Key.String(pm, pkgOverride) + "]" + mt.Value.String(pm, pkgOverride) +} + +func (mt *MapType) addImports(im map[string]bool) { + mt.Key.addImports(im) + mt.Value.addImports(im) +} + +// NamedType is an exported type in a package. +type NamedType struct { + Package string // may be empty + Type string // TODO: should this be typed Type? +} + +func (nt *NamedType) String(pm map[string]string, pkgOverride string) string { + // TODO: is this right? + if pkgOverride == nt.Package { + return nt.Type + } + prefix := pm[nt.Package] + if prefix != "" { + return prefix + "." 
+ nt.Type + } else { + return nt.Type + } +} +func (nt *NamedType) addImports(im map[string]bool) { + if nt.Package != "" { + im[nt.Package] = true + } +} + +// PointerType is a pointer to another type. +type PointerType struct { + Type Type +} + +func (pt *PointerType) String(pm map[string]string, pkgOverride string) string { + return "*" + pt.Type.String(pm, pkgOverride) +} +func (pt *PointerType) addImports(im map[string]bool) { pt.Type.addImports(im) } + +// PredeclaredType is a predeclared type such as "int". +type PredeclaredType string + +func (pt PredeclaredType) String(pm map[string]string, pkgOverride string) string { return string(pt) } +func (pt PredeclaredType) addImports(im map[string]bool) {} + +// The following code is intended to be called by the program generated by ../reflect.go. + +func InterfaceFromInterfaceType(it reflect.Type) (*Interface, error) { + if it.Kind() != reflect.Interface { + return nil, fmt.Errorf("%v is not an interface", it) + } + intf := &Interface{} + + for i := 0; i < it.NumMethod(); i++ { + mt := it.Method(i) + // TODO: need to skip unexported methods? or just raise an error? + m := &Method{ + Name: mt.Name, + } + + var err error + m.In, m.Variadic, m.Out, err = funcArgsFromType(mt.Type) + if err != nil { + return nil, err + } + + intf.Methods = append(intf.Methods, m) + } + + return intf, nil +} + +// t's Kind must be a reflect.Func. +func funcArgsFromType(t reflect.Type) (in []*Parameter, variadic *Parameter, out []*Parameter, err error) { + nin := t.NumIn() + if t.IsVariadic() { + nin-- + } + var p *Parameter + for i := 0; i < nin; i++ { + p, err = parameterFromType(t.In(i)) + if err != nil { + return + } + in = append(in, p) + } + if t.IsVariadic() { + p, err = parameterFromType(t.In(nin).Elem()) + if err != nil { + return + } + variadic = p + } + for i := 0; i < t.NumOut(); i++ { + p, err = parameterFromType(t.Out(i)) + if err != nil { + return + } + out = append(out, p) + } + return +} + +func parameterFromType(t reflect.Type) (*Parameter, error) { + tt, err := typeFromType(t) + if err != nil { + return nil, err + } + return &Parameter{Type: tt}, nil +} + +var errorType = reflect.TypeOf((*error)(nil)).Elem() + +var byteType = reflect.TypeOf(byte(0)) + +func typeFromType(t reflect.Type) (Type, error) { + // Hack workaround for https://golang.org/issue/3853. + // This explicit check should not be necessary. + if t == byteType { + return PredeclaredType("byte"), nil + } + + if imp := t.PkgPath(); imp != "" { + return &NamedType{ + Package: impPath(imp), + Type: t.Name(), + }, nil + } + + // only unnamed or predeclared types after here + + // Lots of types have element types. Let's do the parsing and error checking for all of them. 
+ var elemType Type + switch t.Kind() { + case reflect.Array, reflect.Chan, reflect.Map, reflect.Ptr, reflect.Slice: + var err error + elemType, err = typeFromType(t.Elem()) + if err != nil { + return nil, err + } + } + + switch t.Kind() { + case reflect.Array: + return &ArrayType{ + Len: t.Len(), + Type: elemType, + }, nil + case reflect.Bool, reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, + reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr, + reflect.Float32, reflect.Float64, reflect.Complex64, reflect.Complex128, reflect.String: + return PredeclaredType(t.Kind().String()), nil + case reflect.Chan: + var dir ChanDir + switch t.ChanDir() { + case reflect.RecvDir: + dir = RecvDir + case reflect.SendDir: + dir = SendDir + } + return &ChanType{ + Dir: dir, + Type: elemType, + }, nil + case reflect.Func: + in, variadic, out, err := funcArgsFromType(t) + if err != nil { + return nil, err + } + return &FuncType{ + In: in, + Out: out, + Variadic: variadic, + }, nil + case reflect.Interface: + // Two special interfaces. + if t.NumMethod() == 0 { + return PredeclaredType("interface{}"), nil + } + if t == errorType { + return PredeclaredType("error"), nil + } + case reflect.Map: + kt, err := typeFromType(t.Key()) + if err != nil { + return nil, err + } + return &MapType{ + Key: kt, + Value: elemType, + }, nil + case reflect.Ptr: + return &PointerType{ + Type: elemType, + }, nil + case reflect.Slice: + return &ArrayType{ + Len: -1, + Type: elemType, + }, nil + case reflect.Struct: + if t.NumField() == 0 { + return PredeclaredType("struct{}"), nil + } + } + + // TODO: Struct, UnsafePointer + return nil, fmt.Errorf("can't yet turn %v (%v) into a model.Type", t, t.Kind()) +} + +// impPath sanitizes the package path returned by `PkgPath` method of a reflect Type so that +// it is importable. PkgPath might return a path that includes "vendor". These paths do not +// compile, so we need to remove everything up to and including "/vendor/". +// See https://github.com/golang/go/issues/12019. +func impPath(imp string) string { + if strings.HasPrefix(imp, "vendor/") { + imp = "/" + imp + } + if i := strings.LastIndex(imp, "/vendor/"); i != -1 { + imp = imp[i+len("/vendor/"):] + } + return imp +} diff --git a/vendor/github.com/golang/mock/mockgen/parse.go b/vendor/github.com/golang/mock/mockgen/parse.go new file mode 100644 index 000000000..9c5475e93 --- /dev/null +++ b/vendor/github.com/golang/mock/mockgen/parse.go @@ -0,0 +1,523 @@ +// Copyright 2012 Google Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package main + +// This file contains the model construction by parsing source files. 
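+// It is used in source mode (-source=foo.go): main calls parseFile below to
+// build a model.Package from the parsed AST.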
+
+import (
+ "errors"
+ "flag"
+ "fmt"
+ "go/ast"
+ "go/build"
+ "go/parser"
+ "go/token"
+ "log"
+ "path"
+ "path/filepath"
+ "strconv"
+ "strings"
+
+ "github.com/golang/mock/mockgen/model"
+ "golang.org/x/tools/go/packages"
+)
+
+var (
+ imports = flag.String("imports", "", "(source mode) Comma-separated name=path pairs of explicit imports to use.")
+ auxFiles = flag.String("aux_files", "", "(source mode) Comma-separated pkg=path pairs of auxiliary Go source files.")
+)
+
+// TODO: simplify error reporting
+
+func parseFile(source string) (*model.Package, error) {
+ srcDir, err := filepath.Abs(filepath.Dir(source))
+ if err != nil {
+ return nil, fmt.Errorf("failed getting source directory: %v", err)
+ }
+
+ cfg := &packages.Config{Mode: packages.LoadSyntax, Tests: true}
+ pkgs, err := packages.Load(cfg, "file="+source)
+ if err != nil {
+ return nil, err
+ }
+ if packages.PrintErrors(pkgs) > 0 || len(pkgs) == 0 {
+ return nil, errors.New("loading package failed")
+ }
+
+ packageImport := pkgs[0].PkgPath
+
+ // It is illegal to import a _test package.
+ packageImport = strings.TrimSuffix(packageImport, "_test")
+
+ fs := token.NewFileSet()
+ file, err := parser.ParseFile(fs, source, nil, 0)
+ if err != nil {
+ return nil, fmt.Errorf("failed parsing source file %v: %v", source, err)
+ }
+
+ p := &fileParser{
+ fileSet: fs,
+ imports: make(map[string]string),
+ importedInterfaces: make(map[string]map[string]*ast.InterfaceType),
+ auxInterfaces: make(map[string]map[string]*ast.InterfaceType),
+ srcDir: srcDir,
+ }
+
+ // Handle -imports. Reject malformed entries instead of indexing past a
+ // missing "=".
+ dotImports := make(map[string]bool)
+ if *imports != "" {
+ for _, kv := range strings.Split(*imports, ",") {
+ parts := strings.SplitN(kv, "=", 2)
+ if len(parts) != 2 {
+ return nil, fmt.Errorf("bad value %q for -imports: expected name=path", kv)
+ }
+ k, v := parts[0], parts[1]
+ if k == "." {
+ // TODO: Catch dupes?
+ dotImports[v] = true
+ } else {
+ // TODO: Catch dupes?
+ p.imports[k] = v
+ }
+ }
+ }
+
+ // Handle -aux_files.
+ if err := p.parseAuxFiles(*auxFiles); err != nil {
+ return nil, err
+ }
+ p.addAuxInterfacesFromFile(packageImport, file) // this file
+
+ pkg, err := p.parseFile(packageImport, file)
+ if err != nil {
+ return nil, err
+ }
+ for path := range dotImports {
+ pkg.DotImports = append(pkg.DotImports, path)
+ }
+ return pkg, nil
+}
+
+type fileParser struct {
+ fileSet *token.FileSet
+ imports map[string]string // package name => import path
+ importedInterfaces map[string]map[string]*ast.InterfaceType // package (or "") => name => interface
+
+ auxFiles []*ast.File
+ auxInterfaces map[string]map[string]*ast.InterfaceType // package (or "") => name => interface
+
+ srcDir string
+}
+
+func (p *fileParser) errorf(pos token.Pos, format string, args ...interface{}) error {
+ ps := p.fileSet.Position(pos)
+ format = "%s:%d:%d: " + format
+ args = append([]interface{}{ps.Filename, ps.Line, ps.Column}, args...)
+ return fmt.Errorf(format, args...)
+} + +func (p *fileParser) parseAuxFiles(auxFiles string) error { + auxFiles = strings.TrimSpace(auxFiles) + if auxFiles == "" { + return nil + } + for _, kv := range strings.Split(auxFiles, ",") { + parts := strings.SplitN(kv, "=", 2) + if len(parts) != 2 { + return fmt.Errorf("bad aux file spec: %v", kv) + } + pkg, fpath := parts[0], parts[1] + + file, err := parser.ParseFile(p.fileSet, fpath, nil, 0) + if err != nil { + return err + } + p.auxFiles = append(p.auxFiles, file) + p.addAuxInterfacesFromFile(pkg, file) + } + return nil +} + +func (p *fileParser) addAuxInterfacesFromFile(pkg string, file *ast.File) { + if _, ok := p.auxInterfaces[pkg]; !ok { + p.auxInterfaces[pkg] = make(map[string]*ast.InterfaceType) + } + for ni := range iterInterfaces(file) { + p.auxInterfaces[pkg][ni.name.Name] = ni.it + } +} + +// parseFile loads all file imports and auxiliary files import into the +// fileParser, parses all file interfaces and returns package model. +func (p *fileParser) parseFile(importPath string, file *ast.File) (*model.Package, error) { + allImports, dotImports := importsOfFile(file) + // Don't stomp imports provided by -imports. Those should take precedence. + for pkg, path := range allImports { + if _, ok := p.imports[pkg]; !ok { + p.imports[pkg] = path + } + } + // Add imports from auxiliary files, which might be needed for embedded interfaces. + // Don't stomp any other imports. + for _, f := range p.auxFiles { + auxImports, _ := importsOfFile(f) + for pkg, path := range auxImports { + if _, ok := p.imports[pkg]; !ok { + p.imports[pkg] = path + } + } + } + + var is []*model.Interface + for ni := range iterInterfaces(file) { + i, err := p.parseInterface(ni.name.String(), importPath, ni.it) + if err != nil { + return nil, err + } + is = append(is, i) + } + return &model.Package{ + Name: file.Name.String(), + Interfaces: is, + DotImports: dotImports, + }, nil +} + +// parsePackage loads package specified by path, parses it and populates +// corresponding imports and importedInterfaces into the fileParser. +func (p *fileParser) parsePackage(path string) error { + var pkgs map[string]*ast.Package + if imp, err := build.Import(path, p.srcDir, build.FindOnly); err != nil { + return err + } else if pkgs, err = parser.ParseDir(p.fileSet, imp.Dir, nil, 0); err != nil { + return err + } + for _, pkg := range pkgs { + file := ast.MergePackageFiles(pkg, ast.FilterFuncDuplicates|ast.FilterUnassociatedComments|ast.FilterImportDuplicates) + if _, ok := p.importedInterfaces[path]; !ok { + p.importedInterfaces[path] = make(map[string]*ast.InterfaceType) + } + for ni := range iterInterfaces(file) { + p.importedInterfaces[path][ni.name.Name] = ni.it + } + imports, _ := importsOfFile(file) + for pkgName, pkgPath := range imports { + if _, ok := p.imports[pkgName]; !ok { + p.imports[pkgName] = pkgPath + } + } + } + return nil +} + +func (p *fileParser) parseInterface(name, pkg string, it *ast.InterfaceType) (*model.Interface, error) { + intf := &model.Interface{Name: name} + for _, field := range it.Methods.List { + switch v := field.Type.(type) { + case *ast.FuncType: + if nn := len(field.Names); nn != 1 { + return nil, fmt.Errorf("expected one name for interface %v, got %d", intf.Name, nn) + } + m := &model.Method{ + Name: field.Names[0].String(), + } + var err error + m.In, m.Variadic, m.Out, err = p.parseFunc(pkg, v) + if err != nil { + return nil, err + } + intf.Methods = append(intf.Methods, m) + case *ast.Ident: + // Embedded interface in this package. 
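+ // Look in the auxiliary interfaces (-aux_files) first, then in
+ // interfaces gathered from parsed packages; otherwise it is unknown.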
+ ei := p.auxInterfaces[pkg][v.String()] + if ei == nil { + if ei = p.importedInterfaces[pkg][v.String()]; ei == nil { + return nil, p.errorf(v.Pos(), "unknown embedded interface %s", v.String()) + } + } + eintf, err := p.parseInterface(v.String(), pkg, ei) + if err != nil { + return nil, err + } + // Copy the methods. + // TODO: apply shadowing rules. + for _, m := range eintf.Methods { + intf.Methods = append(intf.Methods, m) + } + case *ast.SelectorExpr: + // Embedded interface in another package. + fpkg, sel := v.X.(*ast.Ident).String(), v.Sel.String() + epkg, ok := p.imports[fpkg] + if !ok { + return nil, p.errorf(v.X.Pos(), "unknown package %s", fpkg) + } + ei := p.auxInterfaces[fpkg][sel] + if ei == nil { + fpkg = epkg + if _, ok = p.importedInterfaces[epkg]; !ok { + if err := p.parsePackage(epkg); err != nil { + return nil, p.errorf(v.Pos(), "could not parse package %s: %v", fpkg, err) + } + } + if ei = p.importedInterfaces[epkg][sel]; ei == nil { + return nil, p.errorf(v.Pos(), "unknown embedded interface %s.%s", fpkg, sel) + } + } + eintf, err := p.parseInterface(sel, fpkg, ei) + if err != nil { + return nil, err + } + // Copy the methods. + // TODO: apply shadowing rules. + for _, m := range eintf.Methods { + intf.Methods = append(intf.Methods, m) + } + default: + return nil, fmt.Errorf("don't know how to mock method of type %T", field.Type) + } + } + return intf, nil +} + +func (p *fileParser) parseFunc(pkg string, f *ast.FuncType) (in []*model.Parameter, variadic *model.Parameter, out []*model.Parameter, err error) { + if f.Params != nil { + regParams := f.Params.List + if isVariadic(f) { + n := len(regParams) + varParams := regParams[n-1:] + regParams = regParams[:n-1] + vp, err := p.parseFieldList(pkg, varParams) + if err != nil { + return nil, nil, nil, p.errorf(varParams[0].Pos(), "failed parsing variadic argument: %v", err) + } + variadic = vp[0] + } + in, err = p.parseFieldList(pkg, regParams) + if err != nil { + return nil, nil, nil, p.errorf(f.Pos(), "failed parsing arguments: %v", err) + } + } + if f.Results != nil { + out, err = p.parseFieldList(pkg, f.Results.List) + if err != nil { + return nil, nil, nil, p.errorf(f.Pos(), "failed parsing returns: %v", err) + } + } + return +} + +func (p *fileParser) parseFieldList(pkg string, fields []*ast.Field) ([]*model.Parameter, error) { + nf := 0 + for _, f := range fields { + nn := len(f.Names) + if nn == 0 { + nn = 1 // anonymous parameter + } + nf += nn + } + if nf == 0 { + return nil, nil + } + ps := make([]*model.Parameter, nf) + i := 0 // destination index + for _, f := range fields { + t, err := p.parseType(pkg, f.Type) + if err != nil { + return nil, err + } + + if len(f.Names) == 0 { + // anonymous arg + ps[i] = &model.Parameter{Type: t} + i++ + continue + } + for _, name := range f.Names { + ps[i] = &model.Parameter{Name: name.Name, Type: t} + i++ + } + } + return ps, nil +} + +func (p *fileParser) parseType(pkg string, typ ast.Expr) (model.Type, error) { + switch v := typ.(type) { + case *ast.ArrayType: + ln := -1 + if v.Len != nil { + x, err := strconv.Atoi(v.Len.(*ast.BasicLit).Value) + if err != nil { + return nil, p.errorf(v.Len.Pos(), "bad array size: %v", err) + } + ln = x + } + t, err := p.parseType(pkg, v.Elt) + if err != nil { + return nil, err + } + return &model.ArrayType{Len: ln, Type: t}, nil + case *ast.ChanType: + t, err := p.parseType(pkg, v.Value) + if err != nil { + return nil, err + } + var dir model.ChanDir + if v.Dir == ast.SEND { + dir = model.SendDir + } + if v.Dir == ast.RECV { + dir = 
model.RecvDir + } + return &model.ChanType{Dir: dir, Type: t}, nil + case *ast.Ellipsis: + // assume we're parsing a variadic argument + return p.parseType(pkg, v.Elt) + case *ast.FuncType: + in, variadic, out, err := p.parseFunc(pkg, v) + if err != nil { + return nil, err + } + return &model.FuncType{In: in, Out: out, Variadic: variadic}, nil + case *ast.Ident: + if v.IsExported() { + // `pkg` may be an aliased imported pkg + // if so, patch the import w/ the fully qualified import + maybeImportedPkg, ok := p.imports[pkg] + if ok { + pkg = maybeImportedPkg + } + // assume type in this package + return &model.NamedType{Package: pkg, Type: v.Name}, nil + } + + // assume predeclared type + return model.PredeclaredType(v.Name), nil + case *ast.InterfaceType: + if v.Methods != nil && len(v.Methods.List) > 0 { + return nil, p.errorf(v.Pos(), "can't handle non-empty unnamed interface types") + } + return model.PredeclaredType("interface{}"), nil + case *ast.MapType: + key, err := p.parseType(pkg, v.Key) + if err != nil { + return nil, err + } + value, err := p.parseType(pkg, v.Value) + if err != nil { + return nil, err + } + return &model.MapType{Key: key, Value: value}, nil + case *ast.SelectorExpr: + pkgName := v.X.(*ast.Ident).String() + pkg, ok := p.imports[pkgName] + if !ok { + return nil, p.errorf(v.Pos(), "unknown package %q", pkgName) + } + return &model.NamedType{Package: pkg, Type: v.Sel.String()}, nil + case *ast.StarExpr: + t, err := p.parseType(pkg, v.X) + if err != nil { + return nil, err + } + return &model.PointerType{Type: t}, nil + case *ast.StructType: + if v.Fields != nil && len(v.Fields.List) > 0 { + return nil, p.errorf(v.Pos(), "can't handle non-empty unnamed struct types") + } + return model.PredeclaredType("struct{}"), nil + } + + return nil, fmt.Errorf("don't know how to parse type %T", typ) +} + +// importsOfFile returns a map of package name to import path +// of the imports in file. +func importsOfFile(file *ast.File) (normalImports map[string]string, dotImports []string) { + normalImports = make(map[string]string) + dotImports = make([]string, 0) + for _, is := range file.Imports { + var pkgName string + importPath := is.Path.Value[1 : len(is.Path.Value)-1] // remove quotes + + if is.Name != nil { + // Named imports are always certain. + if is.Name.Name == "_" { + continue + } + pkgName = is.Name.Name + } else { + pkg, err := build.Import(importPath, "", 0) + if err != nil { + // Fallback to import path suffix. Note that this is uncertain. + _, last := path.Split(importPath) + // If the last path component has dots, the first dot-delimited + // field is used as the name. + pkgName = strings.SplitN(last, ".", 2)[0] + } else { + pkgName = pkg.Name + } + } + + if pkgName == "." { + dotImports = append(dotImports, importPath) + } else { + + if _, ok := normalImports[pkgName]; ok { + log.Fatalf("imported package collision: %q imported twice", pkgName) + } + normalImports[pkgName] = importPath + } + } + return +} + +type namedInterface struct { + name *ast.Ident + it *ast.InterfaceType +} + +// Create an iterator over all interfaces in file. 
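+// The returned channel is unbuffered and is closed after the last interface
+// is sent, so callers must drain it completely (as the range loops in this
+// package do) or the sending goroutine blocks forever.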
+func iterInterfaces(file *ast.File) <-chan namedInterface { + ch := make(chan namedInterface) + go func() { + for _, decl := range file.Decls { + gd, ok := decl.(*ast.GenDecl) + if !ok || gd.Tok != token.TYPE { + continue + } + for _, spec := range gd.Specs { + ts, ok := spec.(*ast.TypeSpec) + if !ok { + continue + } + it, ok := ts.Type.(*ast.InterfaceType) + if !ok { + continue + } + + ch <- namedInterface{ts.Name, it} + } + } + close(ch) + }() + return ch +} + +// isVariadic returns whether the function is variadic. +func isVariadic(f *ast.FuncType) bool { + nargs := len(f.Params.List) + if nargs == 0 { + return false + } + _, ok := f.Params.List[nargs-1].Type.(*ast.Ellipsis) + return ok +} diff --git a/vendor/github.com/golang/mock/mockgen/reflect.go b/vendor/github.com/golang/mock/mockgen/reflect.go new file mode 100644 index 000000000..d4c7b7fe2 --- /dev/null +++ b/vendor/github.com/golang/mock/mockgen/reflect.go @@ -0,0 +1,243 @@ +// Copyright 2012 Google Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package main + +// This file contains the model construction by reflection. + +import ( + "bytes" + "encoding/gob" + "flag" + "go/build" + "io/ioutil" + "log" + "os" + "os/exec" + "path/filepath" + "runtime" + "text/template" + + "github.com/golang/mock/mockgen/model" +) + +var ( + progOnly = flag.Bool("prog_only", false, "(reflect mode) Only generate the reflection program; write it to stdout and exit.") + execOnly = flag.String("exec_only", "", "(reflect mode) If set, execute this reflection program.") + buildFlags = flag.String("build_flags", "", "(reflect mode) Additional flags for go build.") +) + +func writeProgram(importPath string, symbols []string) ([]byte, error) { + var program bytes.Buffer + data := reflectData{ + ImportPath: importPath, + Symbols: symbols, + } + if err := reflectProgram.Execute(&program, &data); err != nil { + return nil, err + } + return program.Bytes(), nil +} + +// run the given program and parse the output as a model.Package. +func run(program string) (*model.Package, error) { + f, err := ioutil.TempFile("", "") + if err != nil { + return nil, err + } + + filename := f.Name() + defer os.Remove(filename) + if err := f.Close(); err != nil { + return nil, err + } + + // Run the program. + cmd := exec.Command(program, "-output", filename) + cmd.Stdout = os.Stdout + cmd.Stderr = os.Stderr + if err := cmd.Run(); err != nil { + return nil, err + } + + f, err = os.Open(filename) + if err != nil { + return nil, err + } + + // Process output. + var pkg model.Package + if err := gob.NewDecoder(f).Decode(&pkg); err != nil { + return nil, err + } + + if err := f.Close(); err != nil { + return nil, err + } + + return &pkg, nil +} + +// runInDir writes the given program into the given dir, runs it there, and +// parses the output as a model.Package. +func runInDir(program []byte, dir string) (*model.Package, error) { + // We use TempDir instead of TempFile so we can control the filename. 
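+ // Placing the temporary directory inside dir (the target package's
+ // directory when available) lets the generated program resolve the same
+ // imports as the package under reflection; see reflect below for the
+ // fallback order.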
+ tmpDir, err := ioutil.TempDir(dir, "gomock_reflect_") + if err != nil { + return nil, err + } + defer func() { + if err := os.RemoveAll(tmpDir); err != nil { + log.Printf("failed to remove temp directory: %s", err) + } + }() + const progSource = "prog.go" + var progBinary = "prog.bin" + if runtime.GOOS == "windows" { + // Windows won't execute a program unless it has a ".exe" suffix. + progBinary += ".exe" + } + + if err := ioutil.WriteFile(filepath.Join(tmpDir, progSource), program, 0600); err != nil { + return nil, err + } + + cmdArgs := []string{} + cmdArgs = append(cmdArgs, "build") + if *buildFlags != "" { + cmdArgs = append(cmdArgs, *buildFlags) + } + cmdArgs = append(cmdArgs, "-o", progBinary, progSource) + + // Build the program. + cmd := exec.Command("go", cmdArgs...) + cmd.Dir = tmpDir + cmd.Stdout = os.Stdout + cmd.Stderr = os.Stderr + if err := cmd.Run(); err != nil { + return nil, err + } + return run(filepath.Join(tmpDir, progBinary)) +} + +func reflect(importPath string, symbols []string) (*model.Package, error) { + // TODO: sanity check arguments + + if *execOnly != "" { + return run(*execOnly) + } + + program, err := writeProgram(importPath, symbols) + if err != nil { + return nil, err + } + + if *progOnly { + os.Stdout.Write(program) + os.Exit(0) + } + + wd, _ := os.Getwd() + + // Try to run the program in the same directory as the input package. + if p, err := build.Import(importPath, wd, build.FindOnly); err == nil { + dir := p.Dir + if p, err := runInDir(program, dir); err == nil { + return p, nil + } + } + + // Since that didn't work, try to run it in the current working directory. + if p, err := runInDir(program, wd); err == nil { + return p, nil + } + // Since that didn't work, try to run it in a standard temp directory. + return runInDir(program, "") +} + +type reflectData struct { + ImportPath string + Symbols []string +} + +// This program reflects on an interface value, and prints the +// gob encoding of a model.Package to standard output. +// JSON doesn't work because of the model.Type interface. +var reflectProgram = template.Must(template.New("program").Parse(` +package main + +import ( + "encoding/gob" + "flag" + "fmt" + "os" + "path" + "reflect" + + "github.com/golang/mock/mockgen/model" + + pkg_ {{printf "%q" .ImportPath}} +) + +var output = flag.String("output", "", "The output file name, or empty to use stdout.") + +func main() { + flag.Parse() + + its := []struct{ + sym string + typ reflect.Type + }{ + {{range .Symbols}} + { {{printf "%q" .}}, reflect.TypeOf((*pkg_.{{.}})(nil)).Elem()}, + {{end}} + } + pkg := &model.Package{ + // NOTE: This behaves contrary to documented behaviour if the + // package name is not the final component of the import path. + // The reflect package doesn't expose the package name, though. 
+ Name: path.Base({{printf "%q" .ImportPath}}),
+ }
+
+ for _, it := range its {
+ intf, err := model.InterfaceFromInterfaceType(it.typ)
+ if err != nil {
+ fmt.Fprintf(os.Stderr, "Reflection: %v\n", err)
+ os.Exit(1)
+ }
+ intf.Name = it.sym
+ pkg.Interfaces = append(pkg.Interfaces, intf)
+ }
+
+ outfile := os.Stdout
+ if len(*output) != 0 {
+ var err error
+ outfile, err = os.Create(*output)
+ if err != nil {
+ fmt.Fprintf(os.Stderr, "failed to open output file %q: %v\n", *output, err)
+ os.Exit(1)
+ }
+ defer func() {
+ if err := outfile.Close(); err != nil {
+ fmt.Fprintf(os.Stderr, "failed to close output file %q\n", *output)
+ os.Exit(1)
+ }
+ }()
+ }
+
+ if err := gob.NewEncoder(outfile).Encode(pkg); err != nil {
+ fmt.Fprintf(os.Stderr, "gob encode: %v\n", err)
+ os.Exit(1)
+ }
+}
+`)) diff --git a/vendor/github.com/google/go-cmp/cmp/internal/value/sort.go b/vendor/github.com/google/go-cmp/cmp/internal/value/sort.go index 938f646f0..24fbae6e3 100644 --- a/vendor/github.com/google/go-cmp/cmp/internal/value/sort.go +++ b/vendor/github.com/google/go-cmp/cmp/internal/value/sort.go @@ -19,7 +19,7 @@ func SortKeys(vs []reflect.Value) []reflect.Value { } // Sort the map keys. - sort.Slice(vs, func(i, j int) bool { return isLess(vs[i], vs[j]) }) + sort.SliceStable(vs, func(i, j int) bool { return isLess(vs[i], vs[j]) }) // Deduplicate keys (fails for NaNs). vs2 := vs[:1] @@ -42,6 +42,8 @@ func isLess(x, y reflect.Value) bool { case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: return x.Uint() < y.Uint() case reflect.Float32, reflect.Float64: + // NOTE: This does not sort -0 as less than +0 + // since Go maps treat -0 and +0 as equal keys. fx, fy := x.Float(), y.Float() return fx < fy || math.IsNaN(fx) && !math.IsNaN(fy) case reflect.Complex64, reflect.Complex128: diff --git a/vendor/github.com/google/go-cmp/cmp/internal/value/zero.go b/vendor/github.com/google/go-cmp/cmp/internal/value/zero.go index d13a12ccf..06a8ffd03 100644 --- a/vendor/github.com/google/go-cmp/cmp/internal/value/zero.go +++ b/vendor/github.com/google/go-cmp/cmp/internal/value/zero.go @@ -4,7 +4,10 @@ package value -import "reflect" +import ( + "math" + "reflect" +) // IsZero reports whether v is the zero value. // This does not rely on Interface and so can be used on unexported fields. 
@@ -17,9 +20,9 @@ func IsZero(v reflect.Value) bool { case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: return v.Uint() == 0 case reflect.Float32, reflect.Float64: - return v.Float() == 0 + return math.Float64bits(v.Float()) == 0 case reflect.Complex64, reflect.Complex128: - return v.Complex() == 0 + return math.Float64bits(real(v.Complex())) == 0 && math.Float64bits(imag(v.Complex())) == 0 case reflect.String: return v.String() == "" case reflect.UnsafePointer: diff --git a/vendor/github.com/google/go-cmp/cmp/report_compare.go b/vendor/github.com/google/go-cmp/cmp/report_compare.go index 05efb992c..17a05eede 100644 --- a/vendor/github.com/google/go-cmp/cmp/report_compare.go +++ b/vendor/github.com/google/go-cmp/cmp/report_compare.go @@ -168,7 +168,7 @@ func (opts formatOptions) formatDiffList(recs []reportRecord, k reflect.Kind) te var isZero bool switch opts.DiffMode { case diffIdentical: - isZero = value.IsZero(r.Value.ValueX) || value.IsZero(r.Value.ValueX) + isZero = value.IsZero(r.Value.ValueX) || value.IsZero(r.Value.ValueY) case diffRemoved: isZero = value.IsZero(r.Value.ValueX) case diffInserted: diff --git a/vendor/github.com/google/go-cmp/cmp/report_reflect.go b/vendor/github.com/google/go-cmp/cmp/report_reflect.go index 5521c604c..2761b6289 100644 --- a/vendor/github.com/google/go-cmp/cmp/report_reflect.go +++ b/vendor/github.com/google/go-cmp/cmp/report_reflect.go @@ -208,7 +208,6 @@ func (opts formatOptions) FormatValue(v reflect.Value, m visitedPointers) (out t func formatMapKey(v reflect.Value) string { var opts formatOptions opts.TypeMode = elideType - opts.AvoidStringer = true opts.ShallowPointers = true s := opts.FormatValue(v, visitedPointers{}).String() return strings.TrimSpace(s) diff --git a/vendor/github.com/google/go-cmp/cmp/report_slices.go b/vendor/github.com/google/go-cmp/cmp/report_slices.go index 8cb3265e7..eafcf2e4c 100644 --- a/vendor/github.com/google/go-cmp/cmp/report_slices.go +++ b/vendor/github.com/google/go-cmp/cmp/report_slices.go @@ -90,7 +90,7 @@ func (opts formatOptions) FormatDiffSlice(v *valueNode) textNode { } if r == '\n' { if maxLineLen < i-lastLineIdx { - lastLineIdx = i - lastLineIdx + maxLineLen = i - lastLineIdx } lastLineIdx = i + 1 numLines++ @@ -322,7 +322,7 @@ func coalesceInterveningIdentical(groups []diffStats, windowSize int) []diffStat hadX, hadY := prev.NumRemoved > 0, prev.NumInserted > 0 hasX, hasY := next.NumRemoved > 0, next.NumInserted > 0 if ((hadX || hasX) && (hadY || hasY)) && curr.NumIdentical <= windowSize { - *prev = (*prev).Append(*curr).Append(*next) + *prev = prev.Append(*curr).Append(*next) groups = groups[:len(groups)-1] // Truncate off equal group continue } diff --git a/vendor/github.com/google/go-cmp/cmp/report_text.go b/vendor/github.com/google/go-cmp/cmp/report_text.go index 80605d0e4..8b8fcab7b 100644 --- a/vendor/github.com/google/go-cmp/cmp/report_text.go +++ b/vendor/github.com/google/go-cmp/cmp/report_text.go @@ -19,6 +19,11 @@ var randBool = rand.New(rand.NewSource(time.Now().Unix())).Intn(2) == 0 type indentMode int func (n indentMode) appendIndent(b []byte, d diffMode) []byte { + // The output of Diff is documented as being unstable to provide future + // flexibility in changing the output for more humanly readable reports. + // This logic intentionally introduces instability to the exact output + // so that users can detect accidental reliance on stability early on, + // rather than much later when an actual change to the format occurs. 
if flags.Deterministic || randBool { // Use regular spaces (U+0020). switch d { @@ -360,7 +365,7 @@ func (s diffStats) String() string { // Pluralize the name (adjusting for some obscure English grammar rules). name := s.Name if sum > 1 { - name = name + "s" + name += "s" if strings.HasSuffix(name, "ys") { name = name[:len(name)-2] + "ies" // e.g., "entrys" => "entries" } diff --git a/vendor/github.com/hashicorp/aws-sdk-go-base/.travis.yml b/vendor/github.com/hashicorp/aws-sdk-go-base/.travis.yml index d29b47205..f1aaa658b 100644 --- a/vendor/github.com/hashicorp/aws-sdk-go-base/.travis.yml +++ b/vendor/github.com/hashicorp/aws-sdk-go-base/.travis.yml @@ -1,21 +1,20 @@ dist: xenial language: go go: -- "1.11.x" -env: - GOFLAGS=-mod=vendor +- "1.13.x" + +matrix: + fast_finish: true + allow_failures: + - go: tip install: - make tools script: - make lint -- make test +- go test -timeout=30s -parallel=4 -v ./... branches: only: - master -matrix: - fast_finish: true - allow_failures: - - go: tip diff --git a/vendor/github.com/hashicorp/aws-sdk-go-base/CHANGELOG.md b/vendor/github.com/hashicorp/aws-sdk-go-base/CHANGELOG.md index ed0d5878d..558e7e393 100644 --- a/vendor/github.com/hashicorp/aws-sdk-go-base/CHANGELOG.md +++ b/vendor/github.com/hashicorp/aws-sdk-go-base/CHANGELOG.md @@ -1,3 +1,9 @@ +# v0.4.0 (October 3, 2019) + +BUG FIXES + +* awsauth: fixed credentials retrieval, validation, and error handling + # v0.3.0 (February 26, 2019) BUG FIXES diff --git a/vendor/github.com/hashicorp/aws-sdk-go-base/GNUmakefile b/vendor/github.com/hashicorp/aws-sdk-go-base/GNUmakefile index ef101a111..3001f2ee4 100644 --- a/vendor/github.com/hashicorp/aws-sdk-go-base/GNUmakefile +++ b/vendor/github.com/hashicorp/aws-sdk-go-base/GNUmakefile @@ -1,5 +1,9 @@ default: test lint +fmt: + @echo "==> Fixing source code with gofmt..." + gofmt -s -w ./ + lint: @echo "==> Checking source code against linters..." @golangci-lint run ./... diff --git a/vendor/github.com/hashicorp/aws-sdk-go-base/README.md b/vendor/github.com/hashicorp/aws-sdk-go-base/README.md index dc410ebd4..a5be53987 100644 --- a/vendor/github.com/hashicorp/aws-sdk-go-base/README.md +++ b/vendor/github.com/hashicorp/aws-sdk-go-base/README.md @@ -6,7 +6,7 @@ An opinionated [AWS Go SDK](https://github.com/aws/aws-sdk-go) library for consi ## Requirements -- [Go](https://golang.org/doc/install) 1.11.4+ +- [Go](https://golang.org/doc/install) 1.12 ## Development diff --git a/vendor/github.com/hashicorp/aws-sdk-go-base/awsauth.go b/vendor/github.com/hashicorp/aws-sdk-go-base/awsauth.go index 17bc443d5..531162038 100644 --- a/vendor/github.com/hashicorp/aws-sdk-go-base/awsauth.go +++ b/vendor/github.com/hashicorp/aws-sdk-go-base/awsauth.go @@ -22,6 +22,21 @@ import ( "github.com/hashicorp/go-multierror" ) +const ( + // errMsgNoValidCredentialSources error getting credentials + errMsgNoValidCredentialSources = `No valid credential sources found for AWS Provider. + Please see https://terraform.io/docs/providers/aws/index.html for more information on + providing credentials for the AWS Provider` +) + +var ( + // ErrNoValidCredentialSources indicates that no credentials source could be found + ErrNoValidCredentialSources = errNoValidCredentialSources() +) + +func errNoValidCredentialSources() error { return errors.New(errMsgNoValidCredentialSources) } + +// GetAccountIDAndPartition gets the account ID and associated partition. 
func GetAccountIDAndPartition(iamconn *iam.IAM, stsconn *sts.STS, authProviderName string) (string, string, error) { var accountID, partition string var err, errors error @@ -51,6 +66,8 @@ func GetAccountIDAndPartition(iamconn *iam.IAM, stsconn *sts.STS, authProviderNa return accountID, partition, errors } +// GetAccountIDAndPartitionFromEC2Metadata gets the account ID and associated +// partition from EC2 metadata. func GetAccountIDAndPartitionFromEC2Metadata() (string, string, error) { log.Println("[DEBUG] Trying to get account information via EC2 Metadata") @@ -75,6 +92,8 @@ func GetAccountIDAndPartitionFromEC2Metadata() (string, string, error) { return parseAccountIDAndPartitionFromARN(info.InstanceProfileArn) } +// GetAccountIDAndPartitionFromIAMGetUser gets the account ID and associated +// partition from IAM. func GetAccountIDAndPartitionFromIAMGetUser(iamconn *iam.IAM) (string, string, error) { log.Println("[DEBUG] Trying to get account information via iam:GetUser") @@ -102,6 +121,8 @@ func GetAccountIDAndPartitionFromIAMGetUser(iamconn *iam.IAM) (string, string, e return parseAccountIDAndPartitionFromARN(aws.StringValue(output.User.Arn)) } +// GetAccountIDAndPartitionFromIAMListRoles gets the account ID and associated +// partition from listing IAM roles. func GetAccountIDAndPartitionFromIAMListRoles(iamconn *iam.IAM) (string, string, error) { log.Println("[DEBUG] Trying to get account information via iam:ListRoles") @@ -123,6 +144,8 @@ func GetAccountIDAndPartitionFromIAMListRoles(iamconn *iam.IAM) (string, string, return parseAccountIDAndPartitionFromARN(aws.StringValue(output.Roles[0].Arn)) } +// GetAccountIDAndPartitionFromSTSGetCallerIdentity gets the account ID and associated +// partition from STS caller identity. func GetAccountIDAndPartitionFromSTSGetCallerIdentity(stsconn *sts.STS) (string, string, error) { log.Println("[DEBUG] Trying to get account information via sts:GetCallerIdentity") @@ -148,9 +171,54 @@ func parseAccountIDAndPartitionFromARN(inputARN string) (string, string, error) return arn.AccountID, arn.Partition, nil } -// This function is responsible for reading credentials from the -// environment in the case that they're not explicitly specified -// in the Terraform configuration. +// GetCredentialsFromSession returns credentials derived from a session. A +// session uses the AWS SDK Go chain of providers so may use a provider (e.g., +// ProcessProvider) that is not part of the Terraform provider chain. 
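+//
+// It returns ErrNoValidCredentialSources when the session yields no usable
+// credentials, so callers can distinguish that case from other session
+// creation errors.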
+func GetCredentialsFromSession(c *Config) (*awsCredentials.Credentials, error) { + log.Printf("[INFO] Attempting to use session-derived credentials") + + var sess *session.Session + var err error + if c.Profile == "" { + sess, err = session.NewSession() + if err != nil { + return nil, ErrNoValidCredentialSources + } + } else { + options := &session.Options{ + Config: aws.Config{ + HTTPClient: cleanhttp.DefaultClient(), + MaxRetries: aws.Int(0), + Region: aws.String(c.Region), + }, + } + options.Profile = c.Profile + options.SharedConfigState = session.SharedConfigEnable + + sess, err = session.NewSessionWithOptions(*options) + if err != nil { + if IsAWSErr(err, "NoCredentialProviders", "") { + return nil, ErrNoValidCredentialSources + } + return nil, fmt.Errorf("Error creating AWS session: %s", err) + } + } + + creds := sess.Config.Credentials + cp, err := sess.Config.Credentials.Get() + if err != nil { + return nil, ErrNoValidCredentialSources + } + + log.Printf("[INFO] Successfully derived credentials from session") + log.Printf("[INFO] AWS Auth provider used: %q", cp.ProviderName) + return creds, nil +} + +// GetCredentials gets credentials from the environment, shared credentials, +// or the session (which may include a credential process). GetCredentials also +// validates the credentials and the ability to assume a role or will return an +// error if unsuccessful. func GetCredentials(c *Config) (*awsCredentials.Credentials, error) { // build a chain provider, lazy-evaluated by aws-sdk providers := []awsCredentials.Provider{ @@ -225,29 +293,31 @@ func GetCredentials(c *Config) (*awsCredentials.Credentials, error) { } } - // This is the "normal" flow (i.e. not assuming a role) - if c.AssumeRoleARN == "" { - return awsCredentials.NewChainCredentials(providers), nil - } - - // Otherwise we need to construct and STS client with the main credentials, and verify - // that we can assume the defined role. - log.Printf("[INFO] Attempting to AssumeRole %s (SessionName: %q, ExternalId: %q, Policy: %q)", - c.AssumeRoleARN, c.AssumeRoleSessionName, c.AssumeRoleExternalID, c.AssumeRolePolicy) - + // Validate the credentials before returning them creds := awsCredentials.NewChainCredentials(providers) cp, err := creds.Get() if err != nil { - if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "NoCredentialProviders" { - return nil, errors.New(`No valid credential sources found for AWS Provider. - Please see https://terraform.io/docs/providers/aws/index.html for more information on - providing credentials for the AWS Provider`) + if IsAWSErr(err, "NoCredentialProviders", "") { + creds, err = GetCredentialsFromSession(c) + if err != nil { + return nil, err + } + } else { + return nil, fmt.Errorf("Error loading credentials for AWS Provider: %s", err) } - - return nil, fmt.Errorf("Error loading credentials for AWS Provider: %s", err) + } else { + log.Printf("[INFO] AWS Auth provider used: %q", cp.ProviderName) } - log.Printf("[INFO] AWS Auth provider used: %q", cp.ProviderName) + // This is the "normal" flow (i.e. not assuming a role) + if c.AssumeRoleARN == "" { + return creds, nil + } + + // Otherwise we need to construct an STS client with the main credentials, and verify + // that we can assume the defined role. 
+ log.Printf("[INFO] Attempting to AssumeRole %s (SessionName: %q, ExternalId: %q, Policy: %q)", + c.AssumeRoleARN, c.AssumeRoleSessionName, c.AssumeRoleExternalID, c.AssumeRolePolicy) awsConfig := &aws.Config{ Credentials: creds, diff --git a/vendor/github.com/hashicorp/aws-sdk-go-base/go.mod b/vendor/github.com/hashicorp/aws-sdk-go-base/go.mod index 9df74cb33..1af445d2b 100644 --- a/vendor/github.com/hashicorp/aws-sdk-go-base/go.mod +++ b/vendor/github.com/hashicorp/aws-sdk-go-base/go.mod @@ -1,10 +1,12 @@ module github.com/hashicorp/aws-sdk-go-base require ( - github.com/aws/aws-sdk-go v1.16.36 + github.com/aws/aws-sdk-go v1.25.3 github.com/hashicorp/go-cleanhttp v0.5.0 github.com/hashicorp/go-multierror v1.0.0 github.com/stretchr/testify v1.3.0 // indirect golang.org/x/net v0.0.0-20190213061140-3a22650c66bd // indirect golang.org/x/text v0.3.0 // indirect ) + +go 1.13 diff --git a/vendor/github.com/hashicorp/aws-sdk-go-base/go.sum b/vendor/github.com/hashicorp/aws-sdk-go-base/go.sum index fe20b5e55..f06c2e910 100644 --- a/vendor/github.com/hashicorp/aws-sdk-go-base/go.sum +++ b/vendor/github.com/hashicorp/aws-sdk-go-base/go.sum @@ -1,5 +1,7 @@ github.com/aws/aws-sdk-go v1.16.36 h1:POeH34ZME++pr7GBGh+ZO6Y5kOwSMQpqp5BGUgooJ6k= github.com/aws/aws-sdk-go v1.16.36/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo= +github.com/aws/aws-sdk-go v1.25.3 h1:uM16hIw9BotjZKMZlX05SN2EFtaWfi/NonPKIARiBLQ= +github.com/aws/aws-sdk-go v1.25.3/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo= github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA= diff --git a/vendor/github.com/hashicorp/aws-sdk-go-base/mock.go b/vendor/github.com/hashicorp/aws-sdk-go-base/mock.go index 66d172564..e3c802660 100644 --- a/vendor/github.com/hashicorp/aws-sdk-go-base/mock.go +++ b/vendor/github.com/hashicorp/aws-sdk-go-base/mock.go @@ -6,6 +6,7 @@ import ( "log" "net/http" "net/http/httptest" + "os" "time" "github.com/aws/aws-sdk-go/aws" @@ -13,9 +14,8 @@ import ( "github.com/aws/aws-sdk-go/aws/session" ) -// GetMockedAwsApiSession establishes a httptest server to simulate behaviour -// of a real AWS API server -func GetMockedAwsApiSession(svcName string, endpoints []*MockEndpoint) (func(), *session.Session, error) { +// MockAwsApiServer establishes a httptest server to simulate behaviour of a real AWS API server +func MockAwsApiServer(svcName string, endpoints []*MockEndpoint) *httptest.Server { ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { buf := new(bytes.Buffer) if _, err := buf.ReadFrom(r.Body); err != nil { @@ -46,6 +46,13 @@ func GetMockedAwsApiSession(svcName string, endpoints []*MockEndpoint) (func(), w.WriteHeader(400) })) + return ts +} + +// GetMockedAwsApiSession establishes an AWS session to a simulated AWS API server for a given service and route endpoints. +func GetMockedAwsApiSession(svcName string, endpoints []*MockEndpoint) (func(), *session.Session, error) { + ts := MockAwsApiServer(svcName, endpoints) + sc := awsCredentials.NewStaticCredentials("accessKey", "secretKey", "") sess, err := session.NewSession(&aws.Config{ @@ -58,19 +65,166 @@ func GetMockedAwsApiSession(svcName string, endpoints []*MockEndpoint) (func(), return ts.Close, sess, err } +// awsMetadataApiMock establishes a httptest server to mock out the internal AWS Metadata +// service. 
IAM Credentials are retrieved by the EC2RoleProvider, which makes +// API calls to this internal URL. By replacing the server with a test server, +// we can simulate an AWS environment +func awsMetadataApiMock(responses []*MetadataResponse) func() { + ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.Header().Set("Content-Type", "text/plain") + w.Header().Add("Server", "MockEC2") + log.Printf("[DEBUG] Mocker server received request to %q", r.RequestURI) + for _, e := range responses { + if r.RequestURI == e.Uri { + fmt.Fprintln(w, e.Body) + return + } + } + w.WriteHeader(400) + })) + + os.Setenv("AWS_METADATA_URL", ts.URL+"/latest") + return ts.Close +} + +// MockEndpoint represents a basic request and response that can be used for creating simple httptest server routes. type MockEndpoint struct { Request *MockRequest Response *MockResponse } +// MockRequest represents a basic HTTP request type MockRequest struct { Method string Uri string Body string } +// MockResponse represents a basic HTTP response. type MockResponse struct { StatusCode int Body string ContentType string } + +// MetadataResponse represents a metadata server response URI and body +type MetadataResponse struct { + Uri string `json:"uri"` + Body string `json:"body"` +} + +var ec2metadata_instanceIdEndpoint = &MetadataResponse{ + Uri: "/latest/meta-data/instance-id", + Body: "mock-instance-id", +} + +var ec2metadata_securityCredentialsEndpoints = []*MetadataResponse{ + { + Uri: "/latest/meta-data/iam/security-credentials/", + Body: "test_role", + }, + { + Uri: "/latest/meta-data/iam/security-credentials/test_role", + Body: "{\"Code\":\"Success\",\"LastUpdated\":\"2015-12-11T17:17:25Z\",\"Type\":\"AWS-HMAC\",\"AccessKeyId\":\"somekey\",\"SecretAccessKey\":\"somesecret\",\"Token\":\"sometoken\"}", + }, +} + +var ec2metadata_iamInfoEndpoint = &MetadataResponse{ + Uri: "/latest/meta-data/iam/info", + Body: "{\"Code\": \"Success\",\"LastUpdated\": \"2016-03-17T12:27:32Z\",\"InstanceProfileArn\": \"arn:aws:iam::000000000000:instance-profile/my-instance-profile\",\"InstanceProfileId\": \"AIPAABCDEFGHIJKLMN123\"}", +} + +const ec2metadata_iamInfoEndpoint_expectedAccountID = `000000000000` +const ec2metadata_iamInfoEndpoint_expectedPartition = `aws` + +const iamResponse_GetUser_valid = ` + + + AIDACKCEVSQ6C2EXAMPLE + /division_abc/subdivision_xyz/ + Bob + arn:aws:iam::111111111111:user/division_abc/subdivision_xyz/Bob + 2013-10-02T17:01:44Z + 2014-10-10T14:37:51Z + + + + 7a62c49f-347e-4fc4-9331-6e8eEXAMPLE + +` + +const iamResponse_GetUser_valid_expectedAccountID = `111111111111` +const iamResponse_GetUser_valid_expectedPartition = `aws` + +const iamResponse_GetUser_unauthorized = ` + + Sender + AccessDenied + User: arn:aws:iam::123456789012:user/Bob is not authorized to perform: iam:GetUser on resource: arn:aws:iam::123456789012:user/Bob + + 7a62c49f-347e-4fc4-9331-6e8eEXAMPLE +` + +const stsResponse_GetCallerIdentity_valid = ` + + arn:aws:iam::222222222222:user/Alice + AKIAI44QH8DHBEXAMPLE + 222222222222 + + + 01234567-89ab-cdef-0123-456789abcdef + +` + +const stsResponse_GetCallerIdentity_valid_expectedAccountID = `222222222222` +const stsResponse_GetCallerIdentity_valid_expectedPartition = `aws` + +const stsResponse_GetCallerIdentity_unauthorized = ` + + Sender + AccessDenied + User: arn:aws:iam::123456789012:user/Bob is not authorized to perform: sts:GetCallerIdentity + + 01234567-89ab-cdef-0123-456789abcdef +` + +const iamResponse_GetUser_federatedFailure = ` + + Sender + 
ValidationError + Must specify userName when calling with non-User credentials + + 7a62c49f-347e-4fc4-9331-6e8eEXAMPLE +` + +const iamResponse_ListRoles_valid = ` + + true + AWceSSsKsazQ4IEplT9o4hURCzBs00iavlEvEXAMPLE + + + / + %7B%22Version%22%3A%222008-10-17%22%2C%22Statement%22%3A%5B%7B%22Sid%22%3A%22%22%2C%22Effect%22%3A%22Allow%22%2C%22Principal%22%3A%7B%22Service%22%3A%22ec2.amazonaws.com%22%7D%2C%22Action%22%3A%22sts%3AAssumeRole%22%7D%5D%7D + AROACKCEVSQ6C2EXAMPLE + elasticbeanstalk-role + arn:aws:iam::444444444444:role/elasticbeanstalk-role + 2013-10-02T17:01:44Z + + + + + 7a62c49f-347e-4fc4-9331-6e8eEXAMPLE + +` + +const iamResponse_ListRoles_valid_expectedAccountID = `444444444444` +const iamResponse_ListRoles_valid_expectedPartition = `aws` + +const iamResponse_ListRoles_unauthorized = ` + + Sender + AccessDenied + User: arn:aws:iam::123456789012:user/Bob is not authorized to perform: iam:ListRoles on resource: arn:aws:iam::123456789012:role/ + + 7a62c49f-347e-4fc4-9331-6e8eEXAMPLE +` diff --git a/vendor/github.com/hashicorp/aws-sdk-go-base/session.go b/vendor/github.com/hashicorp/aws-sdk-go-base/session.go index 35b991bbf..92671e131 100644 --- a/vendor/github.com/hashicorp/aws-sdk-go-base/session.go +++ b/vendor/github.com/hashicorp/aws-sdk-go-base/session.go @@ -2,7 +2,6 @@ package awsbase import ( "crypto/tls" - "errors" "fmt" "log" "net/http" @@ -28,45 +27,14 @@ func GetSessionOptions(c *Config) (*session.Options, error) { }, } + // get and validate credentials creds, err := GetCredentials(c) if err != nil { return nil, err } - // Call Get to check for credential provider. If nothing found, we'll get an - // error, and we can present it nicely to the user - cp, err := creds.Get() - if err != nil { - if IsAWSErr(err, "NoCredentialProviders", "") { - // If a profile wasn't specified, the session may still be able to resolve credentials from shared config. - if c.Profile == "" { - sess, err := session.NewSession() - if err != nil { - return nil, errors.New(`No valid credential sources found for AWS Provider. - Please see https://terraform.io/docs/providers/aws/index.html for more information on - providing credentials for the AWS Provider`) - } - _, err = sess.Config.Credentials.Get() - if err != nil { - return nil, errors.New(`No valid credential sources found for AWS Provider. - Please see https://terraform.io/docs/providers/aws/index.html for more information on - providing credentials for the AWS Provider`) - } - log.Printf("[INFO] Using session-derived AWS Auth") - options.Config.Credentials = sess.Config.Credentials - } else { - log.Printf("[INFO] AWS Auth using Profile: %q", c.Profile) - options.Profile = c.Profile - options.SharedConfigState = session.SharedConfigEnable - } - } else { - return nil, fmt.Errorf("Error loading credentials for AWS Provider: %s", err) - } - } else { - // add the validated credentials to the session options - log.Printf("[INFO] AWS Auth provider used: %q", cp.ProviderName) - options.Config.Credentials = creds - } + // add the validated credentials to the session options + options.Config.Credentials = creds if c.Insecure { transport := options.Config.HTTPClient.Transport.(*http.Transport) @@ -83,7 +51,7 @@ func GetSessionOptions(c *Config) (*session.Options, error) { return options, nil } -// GetSession attempts to return valid AWS Go SDK session +// GetSession attempts to return valid AWS Go SDK session. 
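The fixtures above are consumed by `awsMetadataApiMock`, which points `AWS_METADATA_URL` at a test server so the account-lookup helpers can be exercised without real AWS access. A sketch of such a test, assuming it lives in the same package as the helpers:

```go
func TestGetAccountIDAndPartitionFromEC2Metadata(t *testing.T) {
	endpoints := append([]*MetadataResponse{
		ec2metadata_instanceIdEndpoint,
		ec2metadata_iamInfoEndpoint,
	}, ec2metadata_securityCredentialsEndpoints...)

	closeFn := awsMetadataApiMock(endpoints)
	defer closeFn()

	accountID, partition, err := GetAccountIDAndPartitionFromEC2Metadata()
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if accountID != ec2metadata_iamInfoEndpoint_expectedAccountID {
		t.Errorf("unexpected account ID: %s", accountID)
	}
	if partition != ec2metadata_iamInfoEndpoint_expectedPartition {
		t.Errorf("unexpected partition: %s", partition)
	}
}
```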
func GetSession(c *Config) (*session.Session, error) { options, err := GetSessionOptions(c) @@ -94,9 +62,7 @@ func GetSession(c *Config) (*session.Session, error) { sess, err := session.NewSessionWithOptions(*options) if err != nil { if IsAWSErr(err, "NoCredentialProviders", "") { - return nil, errors.New(`No valid credential sources found for AWS Provider. - Please see https://terraform.io/docs/providers/aws/index.html for more information on - providing credentials for the AWS Provider`) + return nil, ErrNoValidCredentialSources } return nil, fmt.Errorf("Error creating AWS session: %s", err) } @@ -138,7 +104,7 @@ func GetSession(c *Config) (*session.Session, error) { if !c.SkipCredsValidation { stsClient := sts.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.StsEndpoint)})) if _, _, err := GetAccountIDAndPartitionFromSTSGetCallerIdentity(stsClient); err != nil { - return nil, fmt.Errorf("error validating provider credentials: %s", err) + return nil, fmt.Errorf("error using credentials to get account ID: %s", err) } } diff --git a/vendor/github.com/hashicorp/aws-sdk-go-base/validation.go b/vendor/github.com/hashicorp/aws-sdk-go-base/validation.go index bf320351f..0e97557c3 100644 --- a/vendor/github.com/hashicorp/aws-sdk-go-base/validation.go +++ b/vendor/github.com/hashicorp/aws-sdk-go-base/validation.go @@ -24,7 +24,7 @@ func ValidateAccountID(accountID string, allowedAccountIDs, forbiddenAccountIDs } } - return fmt.Errorf("AWS Account ID not allowed: %s)", accountID) + return fmt.Errorf("AWS Account ID not allowed: %s", accountID) } return nil diff --git a/vendor/github.com/hashicorp/go-azure-helpers/authentication/auth_method.go b/vendor/github.com/hashicorp/go-azure-helpers/authentication/auth_method.go index d599b2eee..d011ec53e 100644 --- a/vendor/github.com/hashicorp/go-azure-helpers/authentication/auth_method.go +++ b/vendor/github.com/hashicorp/go-azure-helpers/authentication/auth_method.go @@ -2,7 +2,6 @@ package authentication import ( "github.com/Azure/go-autorest/autorest" - "github.com/Azure/go-autorest/autorest/adal" ) type authMethod interface { @@ -10,7 +9,7 @@ type authMethod interface { isApplicable(b Builder) bool - getAuthorizationToken(oauthConfig *adal.OAuthConfig, endpoint string) (*autorest.BearerAuthorizer, error) + getAuthorizationToken(sender autorest.Sender, oauthConfig *OAuthConfig, endpoint string) (autorest.Authorizer, error) name() string diff --git a/vendor/github.com/hashicorp/go-azure-helpers/authentication/auth_method_azure_cli_token.go b/vendor/github.com/hashicorp/go-azure-helpers/authentication/auth_method_azure_cli_token.go index 8f0927527..73816d4bb 100644 --- a/vendor/github.com/hashicorp/go-azure-helpers/authentication/auth_method_azure_cli_token.go +++ b/vendor/github.com/hashicorp/go-azure-helpers/authentication/auth_method_azure_cli_token.go @@ -2,6 +2,7 @@ package authentication import ( "bytes" + "context" "encoding/json" "fmt" "os/exec" @@ -14,7 +15,8 @@ import ( ) type azureCliTokenAuth struct { - profile *azureCLIProfile + profile *azureCLIProfile + servicePrincipalAuthDocsLink string } func (a azureCliTokenAuth) build(b Builder) (authMethod, error) { @@ -25,6 +27,7 @@ func (a azureCliTokenAuth) build(b Builder) (authMethod, error) { subscriptionId: b.SubscriptionID, tenantId: b.TenantID, }, + servicePrincipalAuthDocsLink: b.ClientSecretDocsLink, } profilePath, err := cli.ProfilePath() if err != nil { @@ -38,6 +41,17 @@ func (a azureCliTokenAuth) build(b Builder) (authMethod, error) { auth.profile.profile = profile + // Authenticating 
as a Service Principal doesn't return all of the information we need for authentication purposes + // as such Service Principal authentication is supported using the specific auth method + if authenticatedAsAUser := auth.profile.verifyAuthenticatedAsAUser(); !authenticatedAsAUser { + return nil, fmt.Errorf(`Authenticating using the Azure CLI is only supported as a User (not a Service Principal). + +To authenticate to Azure using a Service Principal, you can use the separate 'Authenticate using a Service Principal' +auth method - instructions for which can be found here: %s + +Alternatively you can authenticate using the Azure CLI by using a User Account.`, auth.servicePrincipalAuthDocsLink) + } + err = auth.profile.populateFields() if err != nil { return nil, fmt.Errorf("Error retrieving the Profile from the Azure CLI: %s Please re-authenticate using `az login`.", err) @@ -55,7 +69,11 @@ func (a azureCliTokenAuth) isApplicable(b Builder) bool { return b.SupportsAzureCliToken } -func (a azureCliTokenAuth) getAuthorizationToken(oauthConfig *adal.OAuthConfig, endpoint string) (*autorest.BearerAuthorizer, error) { +func (a azureCliTokenAuth) getAuthorizationToken(sender autorest.Sender, oauth *OAuthConfig, endpoint string) (autorest.Authorizer, error) { + if oauth.OAuth == nil { + return nil, fmt.Errorf("Error getting Authorization Token for cli auth: an OAuth token wasn't configured correctly; please file a bug with more details") + } + // the Azure CLI appears to cache these, so to maintain compatibility with the interface this method is intentionally not on the pointer token, err := obtainAuthorizationToken(endpoint, a.profile.subscriptionId) if err != nil { @@ -67,11 +85,26 @@ func (a azureCliTokenAuth) getAuthorizationToken(oauthConfig *adal.OAuthConfig, return nil, fmt.Errorf("Error converting Authorization Token to an ADAL Token: %s", err) } - spt, err := adal.NewServicePrincipalTokenFromManualToken(*oauthConfig, a.profile.clientId, endpoint, adalToken) + spt, err := adal.NewServicePrincipalTokenFromManualToken(*oauth.OAuth, a.profile.clientId, endpoint, adalToken) if err != nil { return nil, err } + var refreshFunc adal.TokenRefresh = func(ctx context.Context, resource string) (*adal.Token, error) { + token, err := obtainAuthorizationToken(resource, a.profile.subscriptionId) + if err != nil { + return nil, err + } + + adalToken, err := token.ToADALToken() + if err != nil { + return nil, err + } + + return &adalToken, nil + } + spt.SetCustomRefreshFunc(refreshFunc) + auth := autorest.NewBearerAuthorizer(spt) return auth, nil } @@ -82,9 +115,19 @@ func (a azureCliTokenAuth) name() string { func (a azureCliTokenAuth) populateConfig(c *Config) error { c.ClientID = a.profile.clientId + c.TenantID = a.profile.tenantId c.Environment = a.profile.environment c.SubscriptionID = a.profile.subscriptionId - c.TenantID = a.profile.tenantId + + c.GetAuthenticatedObjectID = func(ctx context.Context) (string, error) { + objectId, err := obtainAuthenticatedObjectID() + if err != nil { + return "", err + } + + return objectId, nil + } + return nil } @@ -112,35 +155,56 @@ func (a azureCliTokenAuth) validate() error { return err.ErrorOrNil() } +func obtainAuthenticatedObjectID() (string, error) { + + var json struct { + ObjectId string `json:"objectId"` + } + + err := jsonUnmarshalAzCmd(&json, "ad", "signed-in-user", "show", "-o=json") + if err != nil { + return "", fmt.Errorf("Error parsing json result from the Azure CLI: %v", err) + } + + return json.ObjectId, nil +} + func 
obtainAuthorizationToken(endpoint string, subscriptionId string) (*cli.Token, error) { + var token cli.Token + err := jsonUnmarshalAzCmd(&token, "account", "get-access-token", "--resource", endpoint, "--subscription", subscriptionId, "-o=json") + if err != nil { + return nil, fmt.Errorf("Error parsing json result from the Azure CLI: %v", err) + } + + return &token, nil +} + +func jsonUnmarshalAzCmd(i interface{}, arg ...string) error { var stderr bytes.Buffer var stdout bytes.Buffer - cmd := exec.Command("az", "account", "get-access-token", "--resource", endpoint, "--subscription", subscriptionId, "-o=json") + cmd := exec.Command("az", arg...) cmd.Stderr = &stderr cmd.Stdout = &stdout if err := cmd.Start(); err != nil { - return nil, fmt.Errorf("Error launching Azure CLI: %+v", err) + return fmt.Errorf("Error launching Azure CLI: %+v", err) } if err := cmd.Wait(); err != nil { - return nil, fmt.Errorf("Error waiting for the Azure CLI: %+v", err) + return fmt.Errorf("Error waiting for the Azure CLI: %+v", err) } stdOutStr := stdout.String() stdErrStr := stderr.String() - if stdErrStr != "" { - return nil, fmt.Errorf("Error retrieving access token from Azure CLI: %s", strings.TrimSpace(stdErrStr)) + return fmt.Errorf("Error running Azure CLI: %s", strings.TrimSpace(stdErrStr)) } - var token *cli.Token - err := json.Unmarshal([]byte(stdOutStr), &token) - if err != nil { - return nil, fmt.Errorf("Error unmarshaling Access Token from the Azure CLI: %s", err) + if err := json.Unmarshal([]byte(stdOutStr), &i); err != nil { + return fmt.Errorf("Error unmarshaling the result of Azure CLI: %v", err) } - return token, nil + return nil } diff --git a/vendor/github.com/hashicorp/go-azure-helpers/authentication/auth_method_client_cert.go b/vendor/github.com/hashicorp/go-azure-helpers/authentication/auth_method_client_cert.go index 00a0d2794..14e455cc0 100644 --- a/vendor/github.com/hashicorp/go-azure-helpers/authentication/auth_method_client_cert.go +++ b/vendor/github.com/hashicorp/go-azure-helpers/authentication/auth_method_client_cert.go @@ -41,7 +41,11 @@ func (a servicePrincipalClientCertificateAuth) name() string { return "Service Principal / Client Certificate" } -func (a servicePrincipalClientCertificateAuth) getAuthorizationToken(oauthConfig *adal.OAuthConfig, endpoint string) (*autorest.BearerAuthorizer, error) { +func (a servicePrincipalClientCertificateAuth) getAuthorizationToken(sender autorest.Sender, oauth *OAuthConfig, endpoint string) (autorest.Authorizer, error) { + if oauth.OAuth == nil { + return nil, fmt.Errorf("Error getting Authorization Token for client cert: an OAuth token wasn't configured correctly; please file a bug with more details") + } + certificateData, err := ioutil.ReadFile(a.clientCertPath) if err != nil { return nil, fmt.Errorf("Error reading Client Certificate %q: %v", a.clientCertPath, err) @@ -53,11 +57,13 @@ return nil, fmt.Errorf("Error decoding pkcs12 certificate: %v", err) } - spt, err := adal.NewServicePrincipalTokenFromCertificate(*oauthConfig, a.clientId, certificate, rsaPrivateKey, endpoint) + spt, err := adal.NewServicePrincipalTokenFromCertificate(*oauth.OAuth, a.clientId, certificate, rsaPrivateKey, endpoint) if err != nil { return nil, err } + spt.SetSender(sender) + err = spt.Refresh() if err != nil { return nil, err @@ -69,6 +75,7 @@ func (a servicePrincipalClientCertificateAuth)
populateConfig(c *Config) error { c.AuthenticatedAsAServicePrincipal = true + c.GetAuthenticatedObjectID = buildServicePrincipalObjectIDFunc(c) return nil } diff --git a/vendor/github.com/hashicorp/go-azure-helpers/authentication/auth_method_client_secret.go b/vendor/github.com/hashicorp/go-azure-helpers/authentication/auth_method_client_secret.go index 4e41d5a93..36aaed352 100644 --- a/vendor/github.com/hashicorp/go-azure-helpers/authentication/auth_method_client_secret.go +++ b/vendor/github.com/hashicorp/go-azure-helpers/authentication/auth_method_client_secret.go @@ -33,18 +33,23 @@ func (a servicePrincipalClientSecretAuth) name() string { return "Service Principal / Client Secret" } -func (a servicePrincipalClientSecretAuth) getAuthorizationToken(oauthConfig *adal.OAuthConfig, endpoint string) (*autorest.BearerAuthorizer, error) { - spt, err := adal.NewServicePrincipalToken(*oauthConfig, a.clientId, a.clientSecret, endpoint) +func (a servicePrincipalClientSecretAuth) getAuthorizationToken(sender autorest.Sender, oauth *OAuthConfig, endpoint string) (autorest.Authorizer, error) { + if oauth.OAuth == nil { + return nil, fmt.Errorf("Error getting Authorization Token for client secret auth: an OAuth token wasn't configured correctly; please file a bug with more details") + } + + spt, err := adal.NewServicePrincipalToken(*oauth.OAuth, a.clientId, a.clientSecret, endpoint) if err != nil { return nil, err } + spt.SetSender(sender) - auth := autorest.NewBearerAuthorizer(spt) - return auth, nil + return autorest.NewBearerAuthorizer(spt), nil } func (a servicePrincipalClientSecretAuth) populateConfig(c *Config) error { c.AuthenticatedAsAServicePrincipal = true + c.GetAuthenticatedObjectID = buildServicePrincipalObjectIDFunc(c) return nil } diff --git a/vendor/github.com/hashicorp/go-azure-helpers/authentication/auth_method_client_secret_multi_tenant.go b/vendor/github.com/hashicorp/go-azure-helpers/authentication/auth_method_client_secret_multi_tenant.go new file mode 100644 index 000000000..c435a74f4 --- /dev/null +++ b/vendor/github.com/hashicorp/go-azure-helpers/authentication/auth_method_client_secret_multi_tenant.go @@ -0,0 +1,84 @@ +package authentication + +import ( + "fmt" + + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/adal" + "github.com/hashicorp/go-multierror" +) + +type servicePrincipalClientSecretMultiTenantAuth struct { + clientId string + clientSecret string + subscriptionId string + tenantId string + auxiliaryTenantIDs []string +} + +func (a servicePrincipalClientSecretMultiTenantAuth) build(b Builder) (authMethod, error) { + method := servicePrincipalClientSecretMultiTenantAuth{ + clientId: b.ClientID, + clientSecret: b.ClientSecret, + subscriptionId: b.SubscriptionID, + tenantId: b.TenantID, + auxiliaryTenantIDs: b.AuxiliaryTenantIDs, + } + return method, nil +} + +func (a servicePrincipalClientSecretMultiTenantAuth) isApplicable(b Builder) bool { + return b.SupportsClientSecretAuth && b.ClientSecret != "" && b.SupportsAuxiliaryTenants && (len(b.AuxiliaryTenantIDs) > 0) +} + +func (a servicePrincipalClientSecretMultiTenantAuth) name() string { + return "Multi Tenant Service Principal / Client Secret" +} + +func (a servicePrincipalClientSecretMultiTenantAuth) getAuthorizationToken(sender autorest.Sender, oauth *OAuthConfig, endpoint string) (autorest.Authorizer, error) { + if oauth.MultiTenantOauth == nil { + return nil, fmt.Errorf("Error getting Authorization Token for multi tenant client secret auth: a MultiTenantOauth token wasn't configured correctly;
please file a bug with more details") + } + + spt, err := adal.NewMultiTenantServicePrincipalToken(*oauth.MultiTenantOauth, a.clientId, a.clientSecret, endpoint) + if err != nil { + return nil, err + } + + spt.PrimaryToken.SetSender(sender) + for _, t := range spt.AuxiliaryTokens { + t.SetSender(sender) + } + + auth := autorest.NewMultiTenantServicePrincipalTokenAuthorizer(spt) + return auth, nil +} + +func (a servicePrincipalClientSecretMultiTenantAuth) populateConfig(c *Config) error { + c.AuthenticatedAsAServicePrincipal = true + c.GetAuthenticatedObjectID = buildServicePrincipalObjectIDFunc(c) + return nil +} + +func (a servicePrincipalClientSecretMultiTenantAuth) validate() error { + var err *multierror.Error + + fmtErrorMessage := "A %s must be configured when authenticating as a Service Principal using a Multi Tenant Client Secret." + + if a.subscriptionId == "" { + err = multierror.Append(err, fmt.Errorf(fmtErrorMessage, "Subscription ID")) + } + if a.clientId == "" { + err = multierror.Append(err, fmt.Errorf(fmtErrorMessage, "Client ID")) + } + if a.clientSecret == "" { + err = multierror.Append(err, fmt.Errorf(fmtErrorMessage, "Client Secret")) + } + if a.tenantId == "" { + err = multierror.Append(err, fmt.Errorf(fmtErrorMessage, "Tenant ID")) + } + if len(a.auxiliaryTenantIDs) == 0 { + err = multierror.Append(err, fmt.Errorf(fmtErrorMessage, "Auxiliary Tenant IDs")) + } + return err.ErrorOrNil() +} diff --git a/vendor/github.com/hashicorp/go-azure-helpers/authentication/auth_method_msi.go b/vendor/github.com/hashicorp/go-azure-helpers/authentication/auth_method_msi.go index 9b0de8f5d..7d68271d0 100644 --- a/vendor/github.com/hashicorp/go-azure-helpers/authentication/auth_method_msi.go +++ b/vendor/github.com/hashicorp/go-azure-helpers/authentication/auth_method_msi.go @@ -10,23 +10,25 @@ import ( ) type managedServiceIdentityAuth struct { - endpoint string + msiEndpoint string + clientID string } func (a managedServiceIdentityAuth) build(b Builder) (authMethod, error) { - endpoint := b.MsiEndpoint - if endpoint == "" { - msiEndpoint, err := adal.GetMSIVMEndpoint() + msiEndpoint := b.MsiEndpoint + if msiEndpoint == "" { + ep, err := adal.GetMSIVMEndpoint() if err != nil { return nil, fmt.Errorf("Error determining MSI Endpoint: ensure the VM has MSI enabled, or configure the MSI Endpoint. 
Error: %s", err) } - endpoint = msiEndpoint + msiEndpoint = ep } - log.Printf("[DEBUG] Using MSI endpoint %q", endpoint) + log.Printf("[DEBUG] Using MSI endpoint %q", msiEndpoint) auth := managedServiceIdentityAuth{ - endpoint: endpoint, + msiEndpoint: msiEndpoint, + clientID: b.ClientID, } return auth, nil } @@ -39,11 +41,28 @@ func (a managedServiceIdentityAuth) name() string { return "Managed Service Identity" } -func (a managedServiceIdentityAuth) getAuthorizationToken(oauthConfig *adal.OAuthConfig, endpoint string) (*autorest.BearerAuthorizer, error) { - spt, err := adal.NewServicePrincipalTokenFromMSI(a.endpoint, endpoint) - if err != nil { - return nil, err +func (a managedServiceIdentityAuth) getAuthorizationToken(sender autorest.Sender, oauth *OAuthConfig, endpoint string) (autorest.Authorizer, error) { + log.Printf("[DEBUG] getAuthorizationToken with MSI endpoint %q, ClientID %q for endpoint %q", a.msiEndpoint, a.clientID, endpoint) + + if oauth.OAuth == nil { + return nil, fmt.Errorf("Error getting Authorization Token for MSI auth: an OAuth token wasn't configured correctly; please file a bug with more details") } + + var spt *adal.ServicePrincipalToken + var err error + if a.clientID == "" { + spt, err = adal.NewServicePrincipalTokenFromMSI(a.msiEndpoint, endpoint) + if err != nil { + return nil, err + } + } else { + spt, err = adal.NewServicePrincipalTokenFromMSIWithUserAssignedID(a.msiEndpoint, endpoint, a.clientID) + if err != nil { + return nil, fmt.Errorf("failed to get an oauth token from MSI for user assigned identity from MSI endpoint %q with client ID %q for endpoint %q: %v", a.msiEndpoint, a.clientID, endpoint, err) + } + } + + spt.SetSender(sender) auth := autorest.NewBearerAuthorizer(spt) return auth, nil } @@ -56,7 +75,7 @@ func (a managedServiceIdentityAuth) populateConfig(c *Config) error { func (a managedServiceIdentityAuth) validate() error { var err *multierror.Error - if a.endpoint == "" { + if a.msiEndpoint == "" { err = multierror.Append(err, fmt.Errorf("An MSI Endpoint must be configured")) } diff --git a/vendor/github.com/hashicorp/go-azure-helpers/authentication/azure_cli_access_token.go b/vendor/github.com/hashicorp/go-azure-helpers/authentication/azure_cli_access_token.go index 822fb2d77..bc826d557 100644 --- a/vendor/github.com/hashicorp/go-azure-helpers/authentication/azure_cli_access_token.go +++ b/vendor/github.com/hashicorp/go-azure-helpers/authentication/azure_cli_access_token.go @@ -10,8 +10,8 @@ import ( ) type azureCliAccessToken struct { - ClientID string - AccessToken *adal.Token + ClientID string + AccessToken *adal.Token } func findValidAccessTokenForTenant(tokens []cli.Token, tenantId string) (*azureCliAccessToken, error) { @@ -32,8 +32,8 @@ } validAccessToken := azureCliAccessToken{ - ClientID: accessToken.ClientID, - AccessToken: &token, + ClientID: accessToken.ClientID, + AccessToken: &token, } return &validAccessToken, nil } diff --git a/vendor/github.com/hashicorp/go-azure-helpers/authentication/azure_cli_profile.go b/vendor/github.com/hashicorp/go-azure-helpers/authentication/azure_cli_profile.go index 39fb30ddd..1f6a0b6af 100644 --- a/vendor/github.com/hashicorp/go-azure-helpers/authentication/azure_cli_profile.go +++ b/vendor/github.com/hashicorp/go-azure-helpers/authentication/azure_cli_profile.go @@ -1,16 +1,18 @@ package authentication import ( + "strings" + "github.com/Azure/go-autorest/autorest/azure/cli" ) type azureCLIProfile struct {
profile cli.Profile - clientId string - environment string - subscriptionId string - tenantId string + clientId string + environment string + subscriptionId string + tenantId string } func (a *azureCLIProfile) populateFields() error { @@ -33,3 +35,18 @@ func (a *azureCLIProfile) populateFields() error { // always pull the environment from the Azure CLI, since the Access Token's associated with it return a.populateEnvironment() } + +func (a *azureCLIProfile) verifyAuthenticatedAsAUser() bool { + for _, subscription := range a.profile.Subscriptions { + if subscription.User == nil { + continue + } + + authenticatedAsAUser := strings.EqualFold(subscription.User.Type, "user") + if authenticatedAsAUser { + return true + } + } + + return false +} diff --git a/vendor/github.com/hashicorp/go-azure-helpers/authentication/azure_sp_objectid.go b/vendor/github.com/hashicorp/go-azure-helpers/authentication/azure_sp_objectid.go new file mode 100644 index 000000000..97c5bc874 --- /dev/null +++ b/vendor/github.com/hashicorp/go-azure-helpers/authentication/azure_sp_objectid.go @@ -0,0 +1,49 @@ +package authentication + +import ( + "context" + "fmt" + + "github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac" + "github.com/hashicorp/go-azure-helpers/sender" +) + +func buildServicePrincipalObjectIDFunc(c *Config) func(ctx context.Context) (string, error) { + return func(ctx context.Context) (string, error) { + env, err := DetermineEnvironment(c.Environment) + if err != nil { + return "", err + } + + s := sender.BuildSender("GoAzureHelpers") + + oauthConfig, err := c.BuildOAuthConfig(env.ActiveDirectoryEndpoint) + if err != nil { + return "", err + } + + // Graph Endpoints + graphEndpoint := env.GraphEndpoint + graphAuth, err := c.GetAuthorizationToken(s, oauthConfig, env.GraphEndpoint) + if err != nil { + return "", err + } + + client := graphrbac.NewServicePrincipalsClientWithBaseURI(graphEndpoint, c.TenantID) + client.Authorizer = graphAuth + client.Sender = s + + filter := fmt.Sprintf("appId eq '%s'", c.ClientID) + listResult, listErr := client.List(ctx, filter) + + if listErr != nil { + return "", fmt.Errorf("Error listing Service Principals: %#v", listErr) + } + + if listResult.Values() == nil || len(listResult.Values()) != 1 || listResult.Values()[0].ObjectID == nil { + return "", fmt.Errorf("Unexpected Service Principal query result: %#v", listResult.Values()) + } + + return *listResult.Values()[0].ObjectID, nil + } +} diff --git a/vendor/github.com/hashicorp/go-azure-helpers/authentication/builder.go b/vendor/github.com/hashicorp/go-azure-helpers/authentication/builder.go index e37e8b137..9b5c1a114 100644 --- a/vendor/github.com/hashicorp/go-azure-helpers/authentication/builder.go +++ b/vendor/github.com/hashicorp/go-azure-helpers/authentication/builder.go @@ -1,10 +1,15 @@ package authentication import ( + "context" "fmt" "log" ) +var ( + authenticatedObjectCache = "" +) + // Builder supports all of the possible Authentication values and feature toggles // required to build a working Config for Authentication purposes. type Builder struct { @@ -14,6 +19,10 @@ type Builder struct { TenantID string Environment string + // Auxiliary tenant IDs used for multi tenant auth + SupportsAuxiliaryTenants bool + AuxiliaryTenantIDs []string + // The custom Resource Manager Endpoint which should be used // only applicable for Azure Stack at this time. 
CustomResourceManagerEndpoint string @@ -33,6 +42,7 @@ type Builder struct { // Service Principal (Client Secret) Auth SupportsClientSecretAuth bool ClientSecret string + ClientSecretDocsLink string } // Build takes the configuration from the Builder and builds up a validated Config @@ -42,6 +52,7 @@ func (b Builder) Build() (*Config, error) { ClientID: b.ClientID, SubscriptionID: b.SubscriptionID, TenantID: b.TenantID, + AuxiliaryTenantIDs: b.AuxiliaryTenantIDs, Environment: b.Environment, CustomResourceManagerEndpoint: b.CustomResourceManagerEndpoint, } @@ -50,6 +61,7 @@ // since the Azure CLI Parsing should always be the last thing checked supportedAuthenticationMethods := []authMethod{ servicePrincipalClientCertificateAuth{}, + servicePrincipalClientSecretMultiTenantAuth{}, servicePrincipalClientSecretAuth{}, managedServiceIdentityAuth{}, azureCliTokenAuth{}, @@ -58,23 +70,44 @@ for _, method := range supportedAuthenticationMethods { name := method.name() log.Printf("Testing if %s is applicable for Authentication..", name) - if method.isApplicable(b) { - log.Printf("Using %s for Authentication", name) - auth, err := method.build(b) - if err != nil { - return nil, err - } - // populate authentication specific fields on the Config - // (e.g. is service principal, fields parsed from the azure cli) - err = auth.populateConfig(&config) - if err != nil { - return nil, err - } - - config.authMethod = auth - return config.validate() + // skip any methods which aren't applicable for this Builder configuration + if !method.isApplicable(b) { + continue } + + log.Printf("Using %s for Authentication", name) + auth, err := method.build(b) + if err != nil { + return nil, err + } + + // populate authentication specific fields on the Config + // (e.g. is service principal, fields parsed from the azure cli) + err = auth.populateConfig(&config) + if err != nil { + return nil, err + } + + config.authMethod = auth + + // Authenticated Object ID Cache + if config.GetAuthenticatedObjectID != nil { + uncachedFunction := config.GetAuthenticatedObjectID + config.GetAuthenticatedObjectID = func(ctx context.Context) (string, error) { + if authenticatedObjectCache == "" { + authenticatedObjectCache, err = uncachedFunction(ctx) + if err != nil { + return "", err + } + log.Printf("authenticated object ID cache miss, populating with: %q", authenticatedObjectCache) + } + + return authenticatedObjectCache, nil + } + } + + return &config, config.authMethod.validate() } return nil, fmt.Errorf("No supported authentication methods were found!") diff --git a/vendor/github.com/hashicorp/go-azure-helpers/authentication/config.go b/vendor/github.com/hashicorp/go-azure-helpers/authentication/config.go index c3068152f..20eb61adc 100644 --- a/vendor/github.com/hashicorp/go-azure-helpers/authentication/config.go +++ b/vendor/github.com/hashicorp/go-azure-helpers/authentication/config.go @@ -1,6 +1,11 @@ package authentication import ( + "context" + "fmt" + "log" + "strings" + "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/adal" ) @@ -8,10 +13,13 @@ // Config is the configuration structure used to instantiate a // new Azure management client.
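The Builder above selects an auth method and produces this Config. A hedged sketch of driving it end to end with placeholder values; only fields present in this diff are set, and the multi tenant client secret method is applicable only when both auxiliary-tenant fields are populated:

```go
package main

import (
	"log"

	"github.com/hashicorp/go-azure-helpers/authentication"
)

func main() {
	builder := &authentication.Builder{
		SubscriptionID:           "00000000-0000-0000-0000-000000000000", // placeholder
		ClientID:                 "placeholder-client-id",
		ClientSecret:             "placeholder-client-secret",
		TenantID:                 "placeholder-tenant-id",
		Environment:              "public",
		SupportsClientSecretAuth: true,

		// Both fields must be set for the multi tenant client secret
		// method to be applicable; otherwise the single tenant method wins.
		SupportsAuxiliaryTenants: true,
		AuxiliaryTenantIDs:       []string{"placeholder-second-tenant-id"},
	}

	config, err := builder.Build()
	if err != nil {
		log.Fatalf("building auth config: %v", err)
	}
	log.Printf("service principal auth: %t", config.AuthenticatedAsAServicePrincipal)
}
```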
type Config struct { - ClientID string - SubscriptionID string - TenantID string - Environment string + ClientID string + SubscriptionID string + TenantID string + AuxiliaryTenantIDs []string + Environment string + + GetAuthenticatedObjectID func(context.Context) (string, error) AuthenticatedAsAServicePrincipal bool // A Custom Resource Manager Endpoint @@ -21,16 +29,96 @@ type Config struct { authMethod authMethod } -// GetAuthorizationToken returns an authorization token for the authentication method defined in the Config -func (c Config) GetAuthorizationToken(oauthConfig *adal.OAuthConfig, endpoint string) (*autorest.BearerAuthorizer, error) { - return c.authMethod.getAuthorizationToken(oauthConfig, endpoint) +type OAuthConfig struct { + OAuth *adal.OAuthConfig + MultiTenantOauth *adal.MultiTenantOAuthConfig } -func (c Config) validate() (*Config, error) { - err := c.authMethod.validate() +// GetAuthorizationToken returns an authorization token for the authentication method defined in the Config +func (c Config) GetOAuthConfig(activeDirectoryEndpoint string) (*adal.OAuthConfig, error) { + log.Printf("Getting OAuth config for endpoint %s with tenant %s", activeDirectoryEndpoint, c.TenantID) + + // fix for ADFS environments, if the login endpoint ends in `/adfs` it's an adfs environment + // the login endpoint ends up residing in `ActiveDirectoryEndpoint` + oAuthTenant := c.TenantID + if strings.HasSuffix(strings.ToLower(activeDirectoryEndpoint), "/adfs") { + log.Printf("[DEBUG] ADFS environment detected - overriding Tenant ID to `adfs`!") + oAuthTenant = "adfs" + } + + oauth, err := adal.NewOAuthConfig(activeDirectoryEndpoint, oAuthTenant) if err != nil { return nil, err } - return &c, nil + // OAuthConfigForTenant returns a pointer, which can be nil. + if oauth == nil { + return nil, fmt.Errorf("Unable to configure OAuthConfig for tenant %s", c.TenantID) + } + + return oauth, nil +} + +// GetMultiTenantOAuthConfig returns a multi-tenant authorization token for the authentication method defined in the Config +func (c Config) GetMultiTenantOAuthConfig(activeDirectoryEndpoint string) (*adal.MultiTenantOAuthConfig, error) { + log.Printf("Getting multi OAuth config for endpoint %s with tenant %s (aux tenants: %v)", activeDirectoryEndpoint, c.TenantID, c.AuxiliaryTenantIDs) + oauth, err := adal.NewMultiTenantOAuthConfig(activeDirectoryEndpoint, c.TenantID, c.AuxiliaryTenantIDs, adal.OAuthOptions{}) + if err != nil { + return nil, err + } + + // OAuthConfigForTenant returns a pointer, which can be nil. 
+ if oauth == nil { + return nil, fmt.Errorf("Unable to configure OAuthConfig for tenant %s (auxiliary tenants %v)", c.TenantID, c.AuxiliaryTenantIDs) + } + + return &oauth, nil +} + +// BuildOAuthConfig builds the authorization configuration for the specified Active Directory Endpoint +func (c Config) BuildOAuthConfig(activeDirectoryEndpoint string) (*OAuthConfig, error) { + multiAuth := OAuthConfig{} + var err error + + multiAuth.OAuth, err = c.GetOAuthConfig(activeDirectoryEndpoint) + if err != nil { + return nil, err + } + + if len(c.AuxiliaryTenantIDs) > 0 { + multiAuth.MultiTenantOauth, err = c.GetMultiTenantOAuthConfig(activeDirectoryEndpoint) + if err != nil { + return nil, err + } + } + + return &multiAuth, nil +} + +// BearerAuthorizerCallback returns a BearerAuthorizer valid only for the Primary Tenant +// this signs a request using the AccessToken returned from the primary Resource Manager authorizer +func (c Config) BearerAuthorizerCallback(sender autorest.Sender, oauthConfig *OAuthConfig) *autorest.BearerAuthorizerCallback { + return autorest.NewBearerAuthorizerCallback(sender, func(tenantID, resource string) (*autorest.BearerAuthorizer, error) { + // a BearerAuthorizer is only valid for the primary tenant + newAuthConfig := &OAuthConfig{ + OAuth: oauthConfig.OAuth, + } + + storageSpt, err := c.GetAuthorizationToken(sender, newAuthConfig, resource) + if err != nil { + return nil, err + } + + cast, ok := storageSpt.(*autorest.BearerAuthorizer) + if !ok { + return nil, fmt.Errorf("Error converting %+v to a BearerAuthorizer", storageSpt) + } + + return cast, nil + }) +} + +// GetAuthorizationToken returns an authorization token for the authentication method defined in the Config +func (c Config) GetAuthorizationToken(sender autorest.Sender, oauth *OAuthConfig, endpoint string) (autorest.Authorizer, error) { + return c.authMethod.getAuthorizationToken(sender, oauth, endpoint) } diff --git a/vendor/github.com/hashicorp/go-azure-helpers/sender/sender.go b/vendor/github.com/hashicorp/go-azure-helpers/sender/sender.go new file mode 100644 index 000000000..b3301598b --- /dev/null +++ b/vendor/github.com/hashicorp/go-azure-helpers/sender/sender.go @@ -0,0 +1,57 @@ +package sender + +import ( + "log" + "net/http" + "net/http/httputil" + + "github.com/Azure/go-autorest/autorest" +) + +func BuildSender(providerName string) autorest.Sender { + return autorest.DecorateSender(&http.Client{ + Transport: &http.Transport{ + Proxy: http.ProxyFromEnvironment, + }, + }, withRequestLogging(providerName)) +} + +func withRequestLogging(providerName string) autorest.SendDecorator { + return func(s autorest.Sender) autorest.Sender { + return autorest.SenderFunc(func(r *http.Request) (*http.Response, error) { + // strip the authorization header prior to printing + authHeaderName := "Authorization" + auth := r.Header.Get(authHeaderName) + if auth != "" { + r.Header.Del(authHeaderName) + } + + // dump request to wire format + if dump, err := httputil.DumpRequestOut(r, true); err == nil { + log.Printf("[DEBUG] %s Request: \n%s\n", providerName, dump) + } else { + // fallback to basic message + log.Printf("[DEBUG] %s Request: %s to %s\n", providerName, r.Method, r.URL) + } + + // add the auth header back + if auth != "" { + r.Header.Add(authHeaderName, auth) + } + + resp, err := s.Do(r) + if resp != nil { + // dump response to wire format + if dump, err2 := httputil.DumpResponse(resp, true); err2 == nil { + log.Printf("[DEBUG] %s Response for %s: \n%s\n", providerName, r.URL, dump) + } else { + // fallback 
to basic message + log.Printf("[DEBUG] %s Response: %s for %s\n", providerName, resp.Status, r.URL) + } + } else { + log.Printf("[DEBUG] Request to %s completed with no response", r.URL) + } + return resp, err + }) + } +} diff --git a/vendor/github.com/hashicorp/go-azure-helpers/storage/sas_token.go b/vendor/github.com/hashicorp/go-azure-helpers/storage/sas_token.go index 205baaeee..80e228a77 100644 --- a/vendor/github.com/hashicorp/go-azure-helpers/storage/sas_token.go +++ b/vendor/github.com/hashicorp/go-azure-helpers/storage/sas_token.go @@ -7,11 +7,14 @@ import ( "fmt" "net/url" "strings" + + "github.com/Azure/go-autorest/autorest/azure" ) const ( - connStringAccountKeyKey = "AccountKey" - connStringAccountNameKey = "AccountName" + connStringAccountKeyKey = "AccountKey" + connStringAccountNameKey = "AccountName" + blobContainerSignedVersion = "2018-11-09" ) // ComputeAccountSASToken computes the SAS Token for a Storage Account based on the @@ -67,6 +70,117 @@ func ComputeAccountSASToken(accountName string, return sasToken, nil } +// ComputeAccountSASConnectionString computes the composed SAS Connection String for a Storage Account based on the +// sas token +func ComputeAccountSASConnectionString(env *azure.Environment, accountName string, sasToken string) string { + return fmt.Sprintf( + "BlobEndpoint=https://%[1]s.blob.%[2]s/;"+ + "FileEndpoint=https://%[1]s.file.%[2]s/;"+ + "QueueEndpoint=https://%[1]s.queue.%[2]s/;"+ + "TableEndpoint=https://%[1]s.table.%[2]s/;"+ + "SharedAccessSignature=%[3]s", accountName, env.StorageEndpointSuffix, sasToken[1:]) // need to cut the first character '?' from the sas token +} + +// ComputeAccountSASConnectionUrlForType computes the SAS Connection String for a Storage Account based on the +// sas token and the storage type +func ComputeAccountSASConnectionUrlForType(env *azure.Environment, accountName string, sasToken string, storageType string) (*string, error) { + if !strings.EqualFold(storageType, "blob") && !strings.EqualFold(storageType, "file") && !strings.EqualFold(storageType, "queue") && !strings.EqualFold(storageType, "table") { + return nil, fmt.Errorf("Unexpected storage type %s!", storageType) + } + + url := fmt.Sprintf("https://%s.%s.%s%s", accountName, strings.ToLower(storageType), env.StorageEndpointSuffix, sasToken) + return &url, nil +} + +func ComputeContainerSASToken(signedPermissions string, + signedStart string, + signedExpiry string, + accountName string, + accountKey string, + containerName string, + signedIdentifier string, + signedIp string, + signedProtocol string, + signedSnapshotTime string, + cacheControl string, + contentDisposition string, + contentEncoding string, + contentLanguage string, + contentType string, +) (string, error) { + + canonicalizedResource := "/blob/" + accountName + "/" + containerName + signedVersion := blobContainerSignedVersion + signedResource := "c" // c for container + + // UTF-8 by default... 
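The newline-delimited sequence assembled below must match, byte for byte, the string the Blob service reconstructs before comparing HMAC-SHA256 signatures, so empty fields still contribute their newline and the field order cannot change. A hedged usage sketch with placeholder values:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"log"

	"github.com/hashicorp/go-azure-helpers/storage"
)

func main() {
	// Placeholder key; a real one is the base64 account key from Azure.
	key := base64.StdEncoding.EncodeToString([]byte("hypothetical-key"))

	token, err := storage.ComputeContainerSASToken(
		"r",                    // signedPermissions: read only
		"2020-01-01T00:00:00Z", // signedStart
		"2020-01-02T00:00:00Z", // signedExpiry
		"myaccount",            // accountName (placeholder)
		key,
		"mycontainer", // containerName (placeholder)
		"",            // signedIdentifier
		"",            // signedIp
		"https",       // signedProtocol
		"",            // signedSnapshotTime
		"", "", "", "", "", // cache-control / content-* overrides
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(token) // ?sv=2018-11-09&sr=c&st=...&se=...&sp=r&spr=https&sig=...
}
```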
+ stringToSign := signedPermissions + "\n" + stringToSign += signedStart + "\n" + stringToSign += signedExpiry + "\n" + stringToSign += canonicalizedResource + "\n" + stringToSign += signedIdentifier + "\n" + stringToSign += signedIp + "\n" + stringToSign += signedProtocol + "\n" + stringToSign += signedVersion + "\n" + stringToSign += signedResource + "\n" + stringToSign += signedSnapshotTime + "\n" + stringToSign += cacheControl + "\n" + stringToSign += contentDisposition + "\n" + stringToSign += contentEncoding + "\n" + stringToSign += contentLanguage + "\n" + stringToSign += contentType + + binaryKey, err := base64.StdEncoding.DecodeString(accountKey) + if err != nil { + return "", err + } + hasher := hmac.New(sha256.New, binaryKey) + hasher.Write([]byte(stringToSign)) + signature := hasher.Sum(nil) + + sasToken := "?sv=" + signedVersion + sasToken += "&sr=" + signedResource + sasToken += "&st=" + url.QueryEscape(signedStart) + sasToken += "&se=" + url.QueryEscape(signedExpiry) + sasToken += "&sp=" + signedPermissions + + if len(signedIp) > 0 { + sasToken += "&sip=" + signedIp + } + + if len(signedProtocol) > 0 { + sasToken += "&spr=" + signedProtocol + } + + if len(signedIdentifier) > 0 { + sasToken += "&si=" + signedIdentifier + } + + if len(cacheControl) > 0 { + sasToken += "&rscc=" + url.QueryEscape(cacheControl) + } + + if len(contentDisposition) > 0 { + sasToken += "&rscd=" + url.QueryEscape(contentDisposition) + } + + if len(contentEncoding) > 0 { + sasToken += "&rsce=" + url.QueryEscape(contentEncoding) + } + + if len(contentLanguage) > 0 { + sasToken += "&rscl=" + url.QueryEscape(contentLanguage) + } + + if len(contentType) > 0 { + sasToken += "&rsct=" + url.QueryEscape(contentType) + } + + sasToken += "&sig=" + url.QueryEscape(base64.StdEncoding.EncodeToString(signature)) + + return sasToken, nil +} + // ParseAccountSASConnectionString parses the Connection String for a Storage Account func ParseAccountSASConnectionString(connString string) (map[string]string, error) { // This connection string was for a real storage account which has been deleted diff --git a/vendor/github.com/hashicorp/go-cleanhttp/handlers.go b/vendor/github.com/hashicorp/go-cleanhttp/handlers.go index 7eda3777f..3c845dc0d 100644 --- a/vendor/github.com/hashicorp/go-cleanhttp/handlers.go +++ b/vendor/github.com/hashicorp/go-cleanhttp/handlers.go @@ -27,17 +27,22 @@ func PrintablePathCheckHandler(next http.Handler, input *HandlerInput) http.Hand } return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { - // Check URL path for non-printable characters - idx := strings.IndexFunc(r.URL.Path, func(c rune) bool { - return !unicode.IsPrint(c) - }) + if r != nil { + // Check URL path for non-printable characters + idx := strings.IndexFunc(r.URL.Path, func(c rune) bool { + return !unicode.IsPrint(c) + }) - if idx != -1 { - w.WriteHeader(input.ErrStatus) - return + if idx != -1 { + w.WriteHeader(input.ErrStatus) + return + } + + if next != nil { + next.ServeHTTP(w, r) + } } - next.ServeHTTP(w, r) return }) } diff --git a/vendor/github.com/hashicorp/go-getter/.travis.yml b/vendor/github.com/hashicorp/go-getter/.travis.yml deleted file mode 100644 index 4fe9176aa..000000000 --- a/vendor/github.com/hashicorp/go-getter/.travis.yml +++ /dev/null @@ -1,24 +0,0 @@ -sudo: false - -addons: - apt: - sources: - - sourceline: 'ppa:git-core/ppa' - packages: - - git - -language: go - -os: - - linux - - osx - -go: - - "1.11.x" - -before_script: - - go build ./cmd/go-getter - -branches: - only: - - master diff --git 
a/vendor/github.com/hashicorp/go-getter/README.md b/vendor/github.com/hashicorp/go-getter/README.md index 3de23c709..bbcd15de9 100644 --- a/vendor/github.com/hashicorp/go-getter/README.md +++ b/vendor/github.com/hashicorp/go-getter/README.md @@ -1,10 +1,10 @@ # go-getter -[![Build Status](http://img.shields.io/travis/hashicorp/go-getter.svg?style=flat-square)][travis] +[![CircleCI](https://circleci.com/gh/hashicorp/go-getter/tree/master.svg?style=svg)][circleci] [![Build status](https://ci.appveyor.com/api/projects/status/ulq3qr43n62croyq/branch/master?svg=true)][appveyor] [![Go Documentation](http://img.shields.io/badge/go-documentation-blue.svg?style=flat-square)][godocs] -[travis]: http://travis-ci.org/hashicorp/go-getter +[circleci]: https://circleci.com/gh/hashicorp/go-getter/tree/master [godocs]: http://godoc.org/github.com/hashicorp/go-getter [appveyor]: https://ci.appveyor.com/project/hashicorp/go-getter/branch/master @@ -356,3 +356,7 @@ In order to access to GCS, authentication credentials should be provided. More i - gcs::https://www.googleapis.com/storage/v1/bucket - gcs::https://www.googleapis.com/storage/v1/bucket/foo.zip - www.googleapis.com/storage/v1/bucket/foo + +#### GCS Testing + +The tests for `get_gcs.go` require you to have GCP credentials set in your environment. These credentials can have any level of permissions to any project, they just need to exist. This means setting `GOOGLE_APPLICATION_CREDENTIALS="~/path/to/credentials.json"` or `GOOGLE_CREDENTIALS="{stringified-credentials-json}"`. Due to this configuration, `get_gcs_test.go` will fail for external contributors in CircleCI. diff --git a/vendor/github.com/hashicorp/go-getter/client.go b/vendor/github.com/hashicorp/go-getter/client.go index 007a78ba7..38fb43b8f 100644 --- a/vendor/github.com/hashicorp/go-getter/client.go +++ b/vendor/github.com/hashicorp/go-getter/client.go @@ -19,7 +19,7 @@ import ( // Using a client directly allows more fine-grained control over how downloading // is done, as well as customizing the protocols supported. type Client struct { - // Ctx for cancellation + // Ctx for cancellation Ctx context.Context // Src is the source URL to get. diff --git a/vendor/github.com/hashicorp/go-getter/get_git.go b/vendor/github.com/hashicorp/go-getter/get_git.go index bb1ec316d..1b9f4be81 100644 --- a/vendor/github.com/hashicorp/go-getter/get_git.go +++ b/vendor/github.com/hashicorp/go-getter/get_git.go @@ -1,6 +1,7 @@ package getter import ( + "bytes" "context" "encoding/base64" "fmt" @@ -9,6 +10,7 @@ import ( "os" "os/exec" "path/filepath" + "regexp" "runtime" "strconv" "strings" @@ -24,6 +26,8 @@ type GitGetter struct { getter } +var defaultBranchRegexp = regexp.MustCompile(`\s->\sorigin/(.*)`) + func (g *GitGetter) ClientMode(_ *url.URL) (ClientMode, error) { return ClientModeDir, nil } @@ -182,10 +186,10 @@ func (g *GitGetter) update(ctx context.Context, dst, sshKeyFile, ref string, dep cmd.Dir = dst if getRunCommand(cmd) != nil { - // Not a branch, switch to master. This will also catch non-existent - // branches, in which case we want to switch to master and then - // checkout the proper branch later. - ref = "master" + // Not a branch, switch to default branch. This will also catch + // non-existent branches, in which case we want to switch to default + // and then checkout the proper branch later. 
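`findDefaultBranch`, added below, shells out to `git branch -r --points-at refs/remotes/origin/HEAD` and extracts the branch name from output like `origin/HEAD -> origin/main` via `defaultBranchRegexp`. A standalone sketch of just the parse, assuming that output shape:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	out := "  origin/HEAD -> origin/main\n"
	// Same pattern as defaultBranchRegexp; `.` stops at the newline.
	re := regexp.MustCompile(`\s->\sorigin/(.*)`)
	if m := re.FindStringSubmatch(out); m != nil {
		fmt.Println(m[len(m)-1]) // main
	}
}
```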
+ ref = findDefaultBranch(dst) } // We have to be on a branch to pull @@ -216,6 +220,22 @@ func (g *GitGetter) fetchSubmodules(ctx context.Context, dst, sshKeyFile string, return getRunCommand(cmd) } +// findDefaultBranch checks the repo's origin remote for its default branch +// (generally "master"). "master" is returned if an origin default branch +// can't be determined. +func findDefaultBranch(dst string) string { + var stdoutbuf bytes.Buffer + cmd := exec.Command("git", "branch", "-r", "--points-at", "refs/remotes/origin/HEAD") + cmd.Dir = dst + cmd.Stdout = &stdoutbuf + err := cmd.Run() + matches := defaultBranchRegexp.FindStringSubmatch(stdoutbuf.String()) + if err != nil || matches == nil { + return "master" + } + return matches[len(matches)-1] +} + // setupGitEnv sets up the environment for the given command. This is used to // pass configuration data to git and ssh and enables advanced cloning methods. func setupGitEnv(cmd *exec.Cmd, sshKeyFile string) { diff --git a/vendor/github.com/hashicorp/go-getter/get_http.go b/vendor/github.com/hashicorp/go-getter/get_http.go index 7c4541c6e..9ffdba78a 100644 --- a/vendor/github.com/hashicorp/go-getter/get_http.go +++ b/vendor/github.com/hashicorp/go-getter/get_http.go @@ -9,7 +9,6 @@ import ( "net/url" "os" "path/filepath" - "strconv" "strings" safetemp "github.com/hashicorp/go-safetemp" @@ -88,7 +87,10 @@ func (g *HttpGetter) Get(dst string, u *url.URL) error { return err } - req.Header = g.Header + if g.Header != nil { + req.Header = g.Header + } + resp, err := g.Client.Do(req) if err != nil { return err @@ -128,6 +130,12 @@ func (g *HttpGetter) Get(dst string, u *url.URL) error { return g.getSubdir(ctx, dst, source, subDir) } +// GetFile fetches the file from src and stores it at dst. +// If the server supports Accept-Range, HttpGetter will attempt a range +// request. This means it is the caller's responsibility to ensure that an +// older version of the destination file does not exist, else it will be either +// falsely identified as being replaced, or corrupted with extra bytes +// appended. func (g *HttpGetter) GetFile(dst string, src *url.URL) error { ctx := g.Context() if g.Netrc { @@ -136,7 +144,6 @@ func (g *HttpGetter) GetFile(dst string, src *url.URL) error { return err } } - // Create all the parent directories if needed if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil { return err @@ -165,18 +172,17 @@ func (g *HttpGetter) GetFile(dst string, src *url.URL) error { req.Header = g.Header } headResp, err := g.Client.Do(req) - if err == nil && headResp != nil { + if err == nil { headResp.Body.Close() if headResp.StatusCode == 200 { // If the HEAD request succeeded, then attempt to set the range // query if we can. 
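The corrected range logic below sets the `Range` header from the current size of the local file and trusts the HEAD response's `ContentLength` field instead of re-parsing the header. The resume pattern in isolation, assuming a server that honors `Accept-Ranges: bytes`:

```go
package main

import (
	"fmt"
	"net/http"
)

// resumeFrom requests only the bytes past currentSize; a compliant server
// answers 206 Partial Content with the remainder of the file.
func resumeFrom(url string, currentSize int64) (*http.Response, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Range", fmt.Sprintf("bytes=%d-", currentSize))
	return http.DefaultClient.Do(req)
}

func main() {
	resp, err := resumeFrom("https://example.com/file.bin", 1024) // placeholder URL
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.StatusCode) // expect 206 when the range is honored
}
```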
- if headResp.Header.Get("Accept-Ranges") == "bytes" { + if headResp.Header.Get("Accept-Ranges") == "bytes" && headResp.ContentLength >= 0 { if fi, err := f.Stat(); err == nil { - if _, err = f.Seek(0, os.SEEK_END); err == nil { - req.Header.Set("Range", fmt.Sprintf("bytes=%d-", fi.Size())) + if _, err = f.Seek(0, io.SeekEnd); err == nil { currentFileSize = fi.Size() - totalFileSize, _ := strconv.ParseInt(headResp.Header.Get("Content-Length"), 10, 64) - if currentFileSize >= totalFileSize { + req.Header.Set("Range", fmt.Sprintf("bytes=%d-", currentFileSize)) + if currentFileSize >= headResp.ContentLength { // file already present return nil } diff --git a/vendor/github.com/hashicorp/go-slug/README.md b/vendor/github.com/hashicorp/go-slug/README.md index 978314f1b..5c9b7c584 100644 --- a/vendor/github.com/hashicorp/go-slug/README.md +++ b/vendor/github.com/hashicorp/go-slug/README.md @@ -31,7 +31,7 @@ package main import ( "bytes" - "ioutil" + "io/ioutil" "log" "os" @@ -40,11 +40,11 @@ import ( func main() { // First create a buffer for storing the slug. - slug := bytes.NewBuffer(nil) + buf := bytes.NewBuffer(nil) // Then call the Pack function with a directory path containing the // configuration files and an io.Writer to write the slug to. - if _, err := Pack("test-fixtures/archive-dir", slug); err != nil { + if _, err := slug.Pack("testdata/archive-dir", buf, false); err != nil { log.Fatal(err) } @@ -58,7 +58,7 @@ func main() { // Unpacking a slug is done by calling the Unpack function with an // io.Reader to read the slug from and a directory path of an existing // directory to store the unpacked configuration files. - if err := Unpack(slug, dst); err != nil { + if err := slug.Unpack(buf, dst); err != nil { log.Fatal(err) } } diff --git a/vendor/github.com/hashicorp/go-slug/slug.go b/vendor/github.com/hashicorp/go-slug/slug.go index 8dd407863..059d26773 100644 --- a/vendor/github.com/hashicorp/go-slug/slug.go +++ b/vendor/github.com/hashicorp/go-slug/slug.go @@ -33,11 +33,15 @@ func Pack(src string, w io.Writer, dereference bool) (*Meta, error) { // Tar the file contents. tarW := tar.NewWriter(gzipW) + // Load the ignore rule configuration, which will use + // defaults if no .terraformignore is configured + ignoreRules := parseIgnoreFile(src) + // Track the metadata details as we go. meta := &Meta{} // Walk the tree of files. - err := filepath.Walk(src, packWalkFn(src, src, src, tarW, meta, dereference)) + err := filepath.Walk(src, packWalkFn(src, src, src, tarW, meta, dereference, ignoreRules)) if err != nil { return nil, err } @@ -55,17 +59,12 @@ func Pack(src string, w io.Writer, dereference bool) (*Meta, error) { return meta, nil } -func packWalkFn(root, src, dst string, tarW *tar.Writer, meta *Meta, dereference bool) filepath.WalkFunc { +func packWalkFn(root, src, dst string, tarW *tar.Writer, meta *Meta, dereference bool, ignoreRules []rule) filepath.WalkFunc { return func(path string, info os.FileInfo, err error) error { if err != nil { return err } - // Skip the .git directory. - if info.IsDir() && info.Name() == ".git" { - return filepath.SkipDir - } - // Get the relative path from the current src directory. subpath, err := filepath.Rel(src, path) if err != nil { @@ -75,20 +74,16 @@ func packWalkFn(root, src, dst string, tarW *tar.Writer, meta *Meta, dereference return nil } - // Ignore the .terraform directory itself. 
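These hardcoded skips (continuing below) are superseded by the `.terraformignore` rules from `parseIgnoreFile`, added in `terraformignore.go` later in this diff. A hedged sketch of the matcher's gitignore-like behavior using hypothetical patterns; note that `readRules` prepends `defaultExclusions`, which this hunk does not show:

```go
package slug

import "strings"

// Hypothetical patterns: ignore all *.tfstate files, then re-include one.
func exampleIgnoreRules() (bool, bool) {
	rules := readRules(strings.NewReader("*.tfstate\n!important.tfstate\n"))

	ignored := matchIgnoreRule("dev.tfstate", rules)     // true: matches *.tfstate
	kept := !matchIgnoreRule("important.tfstate", rules) // true: re-included by the ! rule
	return ignored, kept
}
```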
- if info.IsDir() && info.Name() == ".terraform" { + if m := matchIgnoreRule(subpath, ignoreRules); m { return nil } - // Ignore any files in the .terraform directory. - if !info.IsDir() && filepath.Dir(subpath) == ".terraform" { - return nil - } - - // Skip .terraform subdirectories, except for the modules subdirectory. - if strings.HasPrefix(subpath, ".terraform"+string(filepath.Separator)) && - !strings.HasPrefix(subpath, filepath.Clean(".terraform/modules")) { - return filepath.SkipDir + // Catch directories so we don't end up with empty directories, + // the files are ignored correctly + if info.IsDir() { + if m := matchIgnoreRule(subpath+string(os.PathSeparator), ignoreRules); m { + return nil + } } // Get the relative path from the initial root directory. @@ -159,7 +154,7 @@ func packWalkFn(root, src, dst string, tarW *tar.Writer, meta *Meta, dereference // If the target is a directory we can recurse into the target // directory by calling the packWalkFn with updated arguments. if info.IsDir() { - return filepath.Walk(target, packWalkFn(root, target, path, tarW, meta, dereference)) + return filepath.Walk(target, packWalkFn(root, target, path, tarW, meta, dereference, ignoreRules)) } // Dereference this symlink by updating the header with the target file diff --git a/vendor/github.com/hashicorp/go-slug/terraformignore.go b/vendor/github.com/hashicorp/go-slug/terraformignore.go new file mode 100644 index 000000000..ac7486934 --- /dev/null +++ b/vendor/github.com/hashicorp/go-slug/terraformignore.go @@ -0,0 +1,225 @@ +package slug + +import ( + "bufio" + "fmt" + "io" + "os" + "path/filepath" + "regexp" + "strings" + "text/scanner" +) + +func parseIgnoreFile(rootPath string) []rule { + // Look for .terraformignore at our root path/src + file, err := os.Open(filepath.Join(rootPath, ".terraformignore")) + defer file.Close() + + // If there's any kind of file error, punt and use the default ignore patterns + if err != nil { + // Only show the error debug if an error *other* than IsNotExist + if !os.IsNotExist(err) { + fmt.Fprintf(os.Stderr, "Error reading .terraformignore, default exclusions will apply: %v \n", err) + } + return defaultExclusions + } + return readRules(file) +} + +func readRules(input io.Reader) []rule { + rules := defaultExclusions + scanner := bufio.NewScanner(input) + scanner.Split(bufio.ScanLines) + + for scanner.Scan() { + pattern := scanner.Text() + // Ignore blank lines + if len(pattern) == 0 { + continue + } + // Trim spaces + pattern = strings.TrimSpace(pattern) + // Ignore comments + if pattern[0] == '#' { + continue + } + // New rule structure + rule := rule{} + // Exclusions + if pattern[0] == '!' 
{ + rule.excluded = true + pattern = pattern[1:] + } + // If it is a directory, add ** so we catch descendants + if pattern[len(pattern)-1] == os.PathSeparator { + pattern = pattern + "**" + } + // If it starts with /, it is absolute + if pattern[0] == os.PathSeparator { + pattern = pattern[1:] + } else { + // Otherwise prepend **/ + pattern = "**" + string(os.PathSeparator) + pattern + } + rule.val = pattern + rule.dirs = strings.Split(pattern, string(os.PathSeparator)) + rules = append(rules, rule) + } + + if err := scanner.Err(); err != nil { + fmt.Fprintf(os.Stderr, "Error reading .terraformignore, default exclusions will apply: %v \n", err) + return defaultExclusions + } + return rules +} + +func matchIgnoreRule(path string, rules []rule) bool { + matched := false + path = filepath.FromSlash(path) + for _, rule := range rules { + match, _ := rule.match(path) + + if match { + matched = !rule.excluded + } + } + + if matched { + debug(true, path, "Skipping excluded path:", path) + } + + return matched +} + +type rule struct { + val string // the value of the rule itself + excluded bool // ! is present, an exclusion rule + dirs []string // directories of the rule + regex *regexp.Regexp // regular expression to match for the rule +} + +func (r *rule) match(path string) (bool, error) { + if r.regex == nil { + if err := r.compile(); err != nil { + return false, filepath.ErrBadPattern + } + } + + b := r.regex.MatchString(path) + debug(false, path, path, r.regex, b) + return b, nil +} + +func (r *rule) compile() error { + regStr := "^" + pattern := r.val + // Go through the pattern and convert it to a regexp. + // Use a scanner to support utf-8 chars. + var scan scanner.Scanner + scan.Init(strings.NewReader(pattern)) + + sl := string(os.PathSeparator) + escSL := sl + if sl == `\` { + escSL += `\` + } + + for scan.Peek() != scanner.EOF { + ch := scan.Next() + if ch == '*' { + if scan.Peek() == '*' { + // is some flavor of "**" + scan.Next() + + // Treat **/ as ** so eat the "/" + if string(scan.Peek()) == sl { + scan.Next() + } + + if scan.Peek() == scanner.EOF { + // is "**EOF" - to align with .gitignore just accept all + regStr += ".*" + } else { + // is "**" + // Note that this allows for any # of /'s (even 0) because + // the .* will eat everything, even /'s + regStr += "(.*" + escSL + ")?" + } + } else { + // is "*" so map it to anything but "/" + regStr += "[^" + escSL + "]*" + } + } else if ch == '?' { + // "?" is any char except "/" + regStr += "[^" + escSL + "]" + } else if ch == '.' || ch == '$' { + // Escape some regexp special chars that have no meaning + // in golang's filepath.Match + regStr += `\` + string(ch) + } else if ch == '\\' { + // escape next char. 
Note that a trailing \ in the pattern + // will be left alone (but need to escape it) + if sl == `\` { + // On windows map "\" to "\\", meaning an escaped backslash, + // and then just continue because filepath.Match on + // Windows doesn't allow escaping at all + regStr += escSL + continue + } + if scan.Peek() != scanner.EOF { + regStr += `\` + string(scan.Next()) + } else { + regStr += `\` + } + } else { + regStr += string(ch) + } + } + + regStr += "$" + re, err := regexp.Compile(regStr) + if err != nil { + return err + } + + r.regex = re + return nil +} + +/* + Default rules as they would appear in .terraformignore: + .git/ + .terraform/ + !.terraform/modules/ +*/ + +var defaultExclusions = []rule{ + { + val: strings.Join([]string{"**", ".git", "**"}, string(os.PathSeparator)), + excluded: false, + }, + { + val: strings.Join([]string{"**", ".terraform", "**"}, string(os.PathSeparator)), + excluded: false, + }, + { + val: strings.Join([]string{"**", ".terraform", "modules", "**"}, string(os.PathSeparator)), + excluded: true, + }, +} + +func debug(printAll bool, path string, message ...interface{}) { + logLevel := os.Getenv("TF_IGNORE") == "trace" + debugPath := os.Getenv("TF_IGNORE_DEBUG") + isPath := debugPath != "" + if isPath { + isPath = strings.Contains(path, debugPath) + } + + if logLevel { + if printAll || isPath { + fmt.Println(message...) + } + } +} diff --git a/vendor/github.com/hashicorp/go-tfe/README.md b/vendor/github.com/hashicorp/go-tfe/README.md index 83245e48c..3ebdb76b6 100644 --- a/vendor/github.com/hashicorp/go-tfe/README.md +++ b/vendor/github.com/hashicorp/go-tfe/README.md @@ -128,28 +128,109 @@ func main() { ## Running tests +### 1. (Optional) Create a policy sets repo + +If you are planning to run the full suite of tests or work on policy sets, you'll need to set up a policy set repository in GitHub. + +Your policy set repository will need the following: +1. A policy set stored in a subdirectory `policy-sets/foo` +1. A branch other than master named `policies` + +### 2. Set up environment variables + +##### Required: Tests are run against an actual backend so they require a valid backend address -and token. In addition it also needs a Github token for running the OAuth Client -tests: +and token. +1. `TFE_ADDRESS` - URL of a Terraform Cloud or Terraform Enterprise instance to be used for testing, including scheme. Example: `https://tfe.local` +1. `TFE_TOKEN` - A [user API token](https://www.terraform.io/docs/cloud/users-teams-organizations/users.html#api-tokens) for the Terraform Cloud or Terraform Enterprise instance being used for testing. +##### Optional: +1. `GITHUB_TOKEN` - [GitHub personal access token](https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line). Required for running OAuth client tests. +1. `GITHUB_POLICY_SET_IDENTIFIER` - GitHub policy set repository identifier in the format `username/repository`. Required for running policy set tests. + +You can set your environment variables up however you prefer. The following are instructions for setting up environment variables using [envchain](https://github.com/sorah/envchain). + 1. Make sure you have envchain installed. [Instructions for this can be found in the envchain README](https://github.com/sorah/envchain#installation). + 1. Pick a namespace for storing your environment variables. I suggest `go-tfe` or something similar. + 1. 
For each environment variable you need to set, run the following command:
+    ```sh
+    envchain --set YOUR_NAMESPACE_HERE ENVIRONMENT_VARIABLE_HERE
+    ```
+    **OR**
+
+    Set all of the environment variables at once with the following command:
+    ```sh
+    envchain --set YOUR_NAMESPACE_HERE TFE_ADDRESS TFE_TOKEN GITHUB_TOKEN GITHUB_POLICY_SET_IDENTIFIER
+    ```
+
+### 3. Make sure run queue settings are correct
+
+In order for the tests relating to queuing and capacity to pass, FRQ (fair run queuing) should be
+enabled with a limit of 2 concurrent runs per organization on the Terraform Cloud or Terraform Enterprise instance you are using for testing.
+
+### 4. Run the tests
+
+#### Running all the tests
+As running all of the tests takes about 20 minutes, make sure to add a timeout to your
+command (as the default timeout is 10m).
+
+##### With envchain:
 ```sh
-$ export TFE_ADDRESS=https://tfe.local
-$ export TFE_TOKEN=xxxxxxxxxxxxxxxxxxx
-$ export GITHUB_TOKEN=xxxxxxxxxxxxxxxx
-$ export GITHUB_IDENTIFIER=xxxxxxxxxxx
+$ envchain YOUR_NAMESPACE_HERE go test ./... -timeout=30m
 ```
 
-In order for the tests relating to queuing and capacity to pass, FRQ should be
-enabled with a limit of 2 concurrent runs per organization.
-
-As running the tests takes about ~10 minutes, make sure to add a timeout to your
-command (as the default timeout is 10m):
-
+##### Without envchain:
 ```sh
-$ go test ./... -timeout=15m
+$ go test ./... -timeout=30m
 ```
 
+#### Running specific tests
+
+The commands below use notification configurations as an example.
+
+##### With envchain:
+```sh
+$ envchain YOUR_NAMESPACE_HERE go test -run TestNotificationConfiguration -v ./...
+```
+
+##### Without envchain:
+```sh
+$ go test -run TestNotificationConfiguration -v ./...
+```
 
 ## Issues and Contributing
 
 If you find an issue with this package, please report an issue. If you'd like,
 we welcome any contributions. Fork this repository and submit a pull request.
+
+## Releases
+
+Documentation updates and test fixes that only touch test files don't require a release or tag. You can just merge these changes into master once they have been approved.
+
+### Creating a release
+1. Merge your approved branch into master.
+1. [Create a new release in GitHub](https://help.github.com/en/github/administering-a-repository/creating-releases).
+   - Click on "Releases" and then "Draft a new release"
+   - Set the `tag version` to a new tag, using [Semantic Versioning](https://semver.org/) as a guideline.
+   - Set the `target` as master.
+   - Set the `Release title` to the tag you created, `vX.Y.Z`
+   - Use the description section to describe why you're releasing and what changes you've made. You should include links to merged PRs.
+   - Consider using the following headers in the description of your release:
+      - BREAKING CHANGES: Use this for any changes that aren't backwards compatible. Include details on how to handle these changes.
+      - FEATURES: Use this for any large new features added.
+      - ENHANCEMENTS: Use this for smaller new features added.
+      - BUG FIXES: Use this for any bugs that were fixed.
+      - NOTES: Use this section if you need to include any additional notes on things like upgrading, upcoming deprecations, or any other information you might want to highlight.
+
+   Markdown example:
+
+   ```markdown
+   ENHANCEMENTS
+   * Add description of new small feature ([#3](link-to-pull-request))
+
+   BUG FIXES
+   * Fix description of a bug ([#2](link-to-pull-request))
+   * Fix description of another bug ([#1](link-to-pull-request))
+   ```
+
+   - Don't attach any binaries.
The zip and tar.gz assets are automatically created and attached after you publish your release. + - Click "Publish release" to save and publish your release. + \ No newline at end of file diff --git a/vendor/github.com/hashicorp/go-tfe/go.mod b/vendor/github.com/hashicorp/go-tfe/go.mod index f428ce27c..31ad255a1 100644 --- a/vendor/github.com/hashicorp/go-tfe/go.mod +++ b/vendor/github.com/hashicorp/go-tfe/go.mod @@ -5,7 +5,7 @@ require ( github.com/google/go-querystring v1.0.0 github.com/hashicorp/go-cleanhttp v0.5.0 github.com/hashicorp/go-retryablehttp v0.5.2 - github.com/hashicorp/go-slug v0.3.0 + github.com/hashicorp/go-slug v0.4.1 github.com/hashicorp/go-uuid v1.0.1 github.com/stretchr/testify v1.3.0 github.com/svanharmelen/jsonapi v0.0.0-20180618144545-0c0828c3f16d diff --git a/vendor/github.com/hashicorp/go-tfe/go.sum b/vendor/github.com/hashicorp/go-tfe/go.sum index cc8c7bbbb..4c0ab53b1 100644 --- a/vendor/github.com/hashicorp/go-tfe/go.sum +++ b/vendor/github.com/hashicorp/go-tfe/go.sum @@ -8,8 +8,12 @@ github.com/hashicorp/go-cleanhttp v0.5.0 h1:wvCrVc9TjDls6+YGAF2hAifE1E5U1+b4tH6K github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80= github.com/hashicorp/go-retryablehttp v0.5.2 h1:AoISa4P4IsW0/m4T6St8Yw38gTl5GtBAgfkhYh1xAz4= github.com/hashicorp/go-retryablehttp v0.5.2/go.mod h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs= -github.com/hashicorp/go-slug v0.3.0 h1:L0c+AvH/J64iMNF4VqRaRku2DMTEuHioPVS7kMjWIU8= -github.com/hashicorp/go-slug v0.3.0/go.mod h1:I5tq5Lv0E2xcNXNkmx7BSfzi1PsJ2cNjs3cC3LwyhK8= +github.com/hashicorp/go-slug v0.4.0 h1:YSz3afoEZZJVVB46NITf0+opd2cHpaYJ1XSojOyP0x8= +github.com/hashicorp/go-slug v0.4.0/go.mod h1:I5tq5Lv0E2xcNXNkmx7BSfzi1PsJ2cNjs3cC3LwyhK8= +github.com/hashicorp/go-slug v0.4.1-0.20191114211806-d9ee9eb3692a h1:EmBGX5Ja8JEKRHqTDG9+PYq0qL5qyOUmPZFQfH7VfXo= +github.com/hashicorp/go-slug v0.4.1-0.20191114211806-d9ee9eb3692a/go.mod h1:I5tq5Lv0E2xcNXNkmx7BSfzi1PsJ2cNjs3cC3LwyhK8= +github.com/hashicorp/go-slug v0.4.1 h1:/jAo8dNuLgSImoLXaX7Od7QB4TfYCVPam+OpAt5bZqc= +github.com/hashicorp/go-slug v0.4.1/go.mod h1:I5tq5Lv0E2xcNXNkmx7BSfzi1PsJ2cNjs3cC3LwyhK8= github.com/hashicorp/go-uuid v1.0.1 h1:fv1ep09latC32wFoVwnqcnKJGnMSdBanPczbHAYm1BE= github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= diff --git a/vendor/github.com/hashicorp/go-tfe/oauth_client.go b/vendor/github.com/hashicorp/go-tfe/oauth_client.go index b0b16bfb2..35174441b 100644 --- a/vendor/github.com/hashicorp/go-tfe/oauth_client.go +++ b/vendor/github.com/hashicorp/go-tfe/oauth_client.go @@ -40,13 +40,18 @@ type ServiceProviderType string // List of available VCS types. 
const ( - ServiceProviderBitbucket ServiceProviderType = "bitbucket_hosted" + ServiceProviderAzureDevOpsServer ServiceProviderType = "ado_server" + ServiceProviderAzureDevOpsServices ServiceProviderType = "ado_services" + ServiceProviderBitbucket ServiceProviderType = "bitbucket_hosted" + // Bitbucket Server v5.4.0 and above ServiceProviderBitbucketServer ServiceProviderType = "bitbucket_server" - ServiceProviderGithub ServiceProviderType = "github" - ServiceProviderGithubEE ServiceProviderType = "github_enterprise" - ServiceProviderGitlab ServiceProviderType = "gitlab_hosted" - ServiceProviderGitlabCE ServiceProviderType = "gitlab_community_edition" - ServiceProviderGitlabEE ServiceProviderType = "gitlab_enterprise_edition" + // Bitbucket Server v5.3.0 and below + ServiceProviderBitbucketServerLegacy ServiceProviderType = "bitbucket_server_legacy" + ServiceProviderGithub ServiceProviderType = "github" + ServiceProviderGithubEE ServiceProviderType = "github_enterprise" + ServiceProviderGitlab ServiceProviderType = "gitlab_hosted" + ServiceProviderGitlabCE ServiceProviderType = "gitlab_community_edition" + ServiceProviderGitlabEE ServiceProviderType = "gitlab_enterprise_edition" ) // OAuthClientList represents a list of OAuth clients. diff --git a/vendor/github.com/hashicorp/go-tfe/workspace.go b/vendor/github.com/hashicorp/go-tfe/workspace.go index e9c55d13b..648bcf1d6 100644 --- a/vendor/github.com/hashicorp/go-tfe/workspace.go +++ b/vendor/github.com/hashicorp/go-tfe/workspace.go @@ -179,6 +179,9 @@ type WorkspaceCreateOptions struct { // organization. Name *string `jsonapi:"attr,name"` + // Whether the workspace will use remote or local execution mode. + Operations *bool `jsonapi:"attr,operations,omitempty"` + // Whether to queue all runs. Unless this is set to true, runs triggered by // a webhook will not be queued until at least one run is manually queued. QueueAllRuns *bool `jsonapi:"attr,queue-all-runs,omitempty"` @@ -316,6 +319,9 @@ type WorkspaceUpdateOptions struct { // disabled, any push will trigger a run. FileTriggersEnabled *bool `jsonapi:"attr,file-triggers-enabled,omitempty"` + // Whether the workspace will use remote or local execution mode. + Operations *bool `jsonapi:"attr,operations,omitempty"` + // Whether to queue all runs. Unless this is set to true, runs triggered by // a webhook will not be queued until at least one run is manually queued. QueueAllRuns *bool `jsonapi:"attr,queue-all-runs,omitempty"` diff --git a/vendor/github.com/hashicorp/go-version/.travis.yml b/vendor/github.com/hashicorp/go-version/.travis.yml index 542ca8b7f..01c5dc219 100644 --- a/vendor/github.com/hashicorp/go-version/.travis.yml +++ b/vendor/github.com/hashicorp/go-version/.travis.yml @@ -1,13 +1,13 @@ language: go go: - - 1.0 - - 1.1 - 1.2 - 1.3 - 1.4 - 1.9 - "1.10" + - 1.11 + - 1.12 script: - go test diff --git a/vendor/github.com/hashicorp/go-version/version.go b/vendor/github.com/hashicorp/go-version/version.go index 186fd7cc1..1032c5606 100644 --- a/vendor/github.com/hashicorp/go-version/version.go +++ b/vendor/github.com/hashicorp/go-version/version.go @@ -112,7 +112,7 @@ func Must(v *Version, err error) *Version { // or larger than the other version, respectively. // // If you want boolean results, use the LessThan, Equal, -// or GreaterThan methods. +// GreaterThan, GreaterThanOrEqual or LessThanOrEqual methods. 
 func (v *Version) Compare(other *Version) int {
 	// A quick, efficient equality check
 	if v.String() == other.String() {
 		return 0
@@ -288,11 +288,21 @@ func (v *Version) GreaterThan(o *Version) bool {
 	return v.Compare(o) > 0
 }
 
+// GreaterThanOrEqual tests if this version is greater than or equal to another version.
+func (v *Version) GreaterThanOrEqual(o *Version) bool {
+	return v.Compare(o) >= 0
+}
+
 // LessThan tests if this version is less than another version.
 func (v *Version) LessThan(o *Version) bool {
 	return v.Compare(o) < 0
 }
 
+// LessThanOrEqual tests if this version is less than or equal to another version.
+func (v *Version) LessThanOrEqual(o *Version) bool {
+	return v.Compare(o) <= 0
+}
+
 // Metadata returns any metadata that was part of the version
 // string.
 //
diff --git a/vendor/github.com/hashicorp/hcl/v2/CHANGELOG.md b/vendor/github.com/hashicorp/hcl/v2/CHANGELOG.md
index ccb46bbd8..4c644fcfb 100644
--- a/vendor/github.com/hashicorp/hcl/v2/CHANGELOG.md
+++ b/vendor/github.com/hashicorp/hcl/v2/CHANGELOG.md
@@ -1,5 +1,38 @@
 # HCL Changelog
 
+## v2.3.0 (Jan 3, 2020)
+
+### Enhancements
+
+* ext/tryfunc: Optional functions `try` and `can` to include in your `hcl.EvalContext` when evaluating expressions, which allow users to make decisions based on the success of expressions. ([#330](https://github.com/hashicorp/hcl/pull/330))
+* ext/typeexpr: Now has an optional function `convert` which you can include in your `hcl.EvalContext` when evaluating expressions, allowing users to convert values to specific type constraints using the type constraint expression syntax. ([#330](https://github.com/hashicorp/hcl/pull/330))
+* ext/typeexpr: A new `cty` capsule type `typeexpr.TypeConstraintType` which, when used as either a type constraint for a function parameter or as a type constraint for a `hcldec` attribute specification will cause the given expression to be interpreted as a type constraint expression rather than a value expression. ([#330](https://github.com/hashicorp/hcl/pull/330))
+* ext/customdecode: An optional extension that allows overriding the static decoding behavior for expressions either in function arguments or `hcldec` attribute specifications. ([#330](https://github.com/hashicorp/hcl/pull/330))
+* ext/customdecode: New `cty` capsule types `customdecode.ExpressionType` and `customdecode.ExpressionClosureType` which, when used as either a type constraint for a function parameter or as a type constraint for a `hcldec` attribute specification will cause the given expression (and, for the closure type, also the `hcl.EvalContext` it was evaluated in) to be captured for later analysis, rather than immediately evaluated. ([#330](https://github.com/hashicorp/hcl/pull/330))
+
+## v2.2.0 (Dec 11, 2019)
+
+### Enhancements
+
+* hcldec: Attribute evaluation (as part of `AttrSpec` or `BlockAttrsSpec`) now captures expression evaluation metadata in any errors it produces during type conversions, allowing for better feedback in calling applications that are able to make use of this metadata when printing diagnostic messages. ([#329](https://github.com/hashicorp/hcl/pull/329))
+
+### Bugs Fixed
+
+* hclsyntax: `IndexExpr`, `SplatExpr`, and `RelativeTraversalExpr` will now report a source range that covers all of their child expression nodes. Previously they would report only the operator part, such as `["foo"]`, `[*]`, or `.foo`, which was problematic for callers using source ranges for code analysis. ([#328](https://github.com/hashicorp/hcl/pull/328))
+* hclwrite: Parser will no longer panic when the input includes index, splat, or relative traversal syntax. ([#328](https://github.com/hashicorp/hcl/pull/328))
+
+## v2.1.0 (Nov 19, 2019)
+
+### Enhancements
+
+* gohcl: When decoding into a struct value with some fields already populated, those values will be retained if not explicitly overwritten in the given HCL body, with similar overriding/merging behavior as `json.Unmarshal` in the Go standard library.
+* hclwrite: New interface to set the expression for an attribute to be a raw token sequence, with no special processing. This has some caveats, so if you intend to use it please refer to the godoc comments. ([#320](https://github.com/hashicorp/hcl/pull/320))
+
+### Bugs Fixed
+
+* hclwrite: The `Body.Blocks` method was returning the blocks in an undefined order, rather than preserving the order of declaration in the source input. ([#313](https://github.com/hashicorp/hcl/pull/313))
+* hclwrite: The `TokensForTraversal` function (and thus in turn the `Body.SetAttributeTraversal` method) was not correctly handling index steps in traversals, and thus producing invalid results. ([#319](https://github.com/hashicorp/hcl/pull/319))
+
 ## v2.0.0 (Oct 2, 2019)
 
 Initial release of HCL 2, which is a new implementation combining the HCL 1
diff --git a/vendor/github.com/hashicorp/hcl/v2/README.md b/vendor/github.com/hashicorp/hcl/v2/README.md
index d807a4245..3d0d509d5 100644
--- a/vendor/github.com/hashicorp/hcl/v2/README.md
+++ b/vendor/github.com/hashicorp/hcl/v2/README.md
@@ -8,7 +8,7 @@ towards devops tools, servers, etc.
 > **NOTE:** This is major version 2 of HCL, whose Go API is incompatible with
 > major version 1. Both versions are available for selection in Go Modules
 > projects. HCL 2 _cannot_ be imported from Go projects that are not using Go Modules. For more information, see
-> [our version selection guide](https://github.com/golang/go/wiki/Version-Selection).
+> [our version selection guide](https://github.com/hashicorp/hcl/wiki/Version-Selection).
 
 HCL has both a _native syntax_, intended to be pleasant to read and write for
 humans, and a JSON-based variant that is easier for machines to generate
@@ -51,7 +51,8 @@ func main() {
 ```
 
 A lower-level API is available for applications that need more control over
-the parsing, decoding, and evaluation of configuration.
+the parsing, decoding, and evaluation of configuration. For more information,
+see [the package documentation](https://pkg.go.dev/github.com/hashicorp/hcl/v2).
 
 ## Why?
 
@@ -156,9 +157,9 @@ syntax allows use of arbitrary expressions within JSON strings:
 
 For more information, see the detailed specifications:
 
-* [Syntax-agnostic Information Model](hcl/spec.md)
-* [HCL Native Syntax](hcl/hclsyntax/spec.md)
-* [JSON Representation](hcl/json/spec.md)
+* [Syntax-agnostic Information Model](spec.md)
+* [HCL Native Syntax](hclsyntax/spec.md)
+* [JSON Representation](json/spec.md)
 
 ## Changes in 2.0
diff --git a/vendor/github.com/hashicorp/hcl/v2/appveyor.yml b/vendor/github.com/hashicorp/hcl/v2/appveyor.yml
new file mode 100644
index 000000000..e382f8f57
--- /dev/null
+++ b/vendor/github.com/hashicorp/hcl/v2/appveyor.yml
@@ -0,0 +1,13 @@
+build: off
+
+clone_folder: c:\gopath\src\github.com\hashicorp\hcl
+
+environment:
+  GOPATH: c:\gopath
+  GO111MODULE: on
+  GOPROXY: https://goproxy.io
+
+stack: go 1.12
+
+test_script:
+  - go test ./...
diff --git a/vendor/github.com/hashicorp/hcl/v2/ext/customdecode/README.md b/vendor/github.com/hashicorp/hcl/v2/ext/customdecode/README.md new file mode 100644 index 000000000..1636f577a --- /dev/null +++ b/vendor/github.com/hashicorp/hcl/v2/ext/customdecode/README.md @@ -0,0 +1,209 @@ +# HCL Custom Static Decoding Extension + +This HCL extension provides a mechanism for defining arguments in an HCL-based +language whose values are derived using custom decoding rules against the +HCL expression syntax, overriding the usual behavior of normal expression +evaluation. + +"Arguments", for the purpose of this extension, currently includes the +following two contexts: + +* For applications using `hcldec` for dynamic decoding, a `hcldec.AttrSpec` + or `hcldec.BlockAttrsSpec` can be given a special type constraint that + opts in to custom decoding behavior for the attribute(s) that are selected + by that specification. + +* When working with the HCL native expression syntax, a function given in + the `hcl.EvalContext` during evaluation can have parameters with special + type constraints that opt in to custom decoding behavior for the argument + expression associated with that parameter in any call. + +The above use-cases are rather abstract, so we'll consider a motivating +real-world example: sometimes we (language designers) need to allow users +to specify type constraints directly in the language itself, such as in +[Terraform's Input Variables](https://www.terraform.io/docs/configuration/variables.html). +Terraform's `variable` blocks include an argument called `type` which takes +a type constraint given using HCL expression building-blocks as defined by +[the HCL `typeexpr` extension](../typeexpr/README.md). + +A "type constraint expression" of that sort is not an expression intended to +be evaluated in the usual way. Instead, the physical expression is +deconstructed using [the static analysis operations](../../spec.md#static-analysis) +to produce a `cty.Type` as the result, rather than a `cty.Value`. + +The purpose of this Custom Static Decoding Extension, then, is to provide a +bridge to allow that sort of custom decoding to be used via mechanisms that +normally deal in `cty.Value`, such as `hcldec` and native syntax function +calls as listed above. + +(Note: [`gohcl`](https://pkg.go.dev/github.com/hashicorp/hcl/v2/gohcl) has +its own mechanism to support this use case, exploiting the fact that it is +working directly with "normal" Go types. Decoding into a struct field of +type `hcl.Expression` obtains the expression directly without evaluating it +first. The Custom Static Decoding Extension is not necessary for that `gohcl` +technique. You can also implement custom decoding by working directly with +the lowest-level HCL API, which separates extraction of and evaluation of +expressions into two steps.) + +## Custom Decoding Types + +This extension relies on a convention implemented in terms of +[_Capsule Types_ in the underlying `cty` type system](https://github.com/zclconf/go-cty/blob/master/docs/types.md#capsule-types). `cty` allows a capsule type to carry arbitrary +extension metadata values as an aid to creating higher-level abstractions like +this extension. + +A custom argument decoding mode, then, is implemented by creating a new `cty` +capsule type that implements the `ExtensionData` custom operation to return +a decoding function when requested. 
For example:
+
+```go
+var keywordType cty.Type
+keywordType = cty.CapsuleWithOps("keyword", reflect.TypeOf(""), &cty.CapsuleOps{
+	ExtensionData: func(key interface{}) interface{} {
+		switch key {
+		case customdecode.CustomExpressionDecoder:
+			return customdecode.CustomExpressionDecoderFunc(
+				func(expr hcl.Expression, ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
+					var diags hcl.Diagnostics
+					kw := hcl.ExprAsKeyword(expr)
+					if kw == "" {
+						diags = append(diags, &hcl.Diagnostic{
+							Severity: hcl.DiagError,
+							Summary:  "Invalid keyword",
+							Detail:   "A keyword is required",
+							Subject:  expr.Range().Ptr(),
+						})
+						return cty.UnknownVal(keywordType), diags
+					}
+					return cty.CapsuleVal(keywordType, &kw), nil
+				},
+			)
+		default:
+			return nil
+		}
+	},
+})
+```
+
+The boilerplate here is a bit fussy, but the important part for our purposes
+is the `case customdecode.CustomExpressionDecoder:` clause, which uses
+a custom extension key type defined in this package to recognize when a
+component implementing this extension is checking to see if a target type
+has a custom decode implementation.
+
+In the above case we've defined a type that decodes expressions as static
+keywords, so a keyword like `foo` would decode as an encapsulated `"foo"`
+string, while any other sort of expression like `"baz"` or `1 + 1` would
+return an error.
+
+We could then use `keywordType` as a type constraint either for a function
+parameter or a `hcldec` attribute specification, which would require the
+argument for that function parameter or the expression for the matching
+attributes to be a static keyword, rather than an arbitrary expression.
+For example, in a `hcldec.AttrSpec`:
+
+```go
+keywordSpec := &hcldec.AttrSpec{
+	Name: "keyword",
+	Type: keywordType,
+}
+```
+
+The above would accept input like the following and would set its result to
+a `cty.Value` of `keywordType`, after decoding:
+
+```hcl
+keyword = foo
+```
+
+## The Expression and Expression Closure `cty` types
+
+Building on the above, this package also includes two capsule types that use
+the above mechanism to allow calling applications to capture expressions
+directly and thus defer analysis to a later step, after initial decoding.
+
+The `customdecode.ExpressionType` type encapsulates an `hcl.Expression` alone,
+for situations like our type constraint expression example above where it's
+the static structure of the expression we want to inspect, and thus any
+variables and functions defined in the evaluation context are irrelevant.
+
+The `customdecode.ExpressionClosureType` type encapsulates a
+`*customdecode.ExpressionClosure` value, which binds the given expression to
+the `hcl.EvalContext` it was asked to evaluate against and thus allows the
+receiver of that result to later perform normal evaluation of the expression
+with all the same variables and functions that would've been available to it
+naturally.
+
+Both of these types can be used as type constraints either for `hcldec`
+attribute specifications or for function arguments.
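+
+For a sense of how `ExpressionType` looks from the function side, here is a
+minimal sketch (not part of this package; the function name and behavior are
+purely illustrative) of a cty function that counts the variables an expression
+refers to, without ever evaluating it:
+
+```go
+var countVarsFunc = function.New(&function.Spec{
+	Params: []function.Parameter{
+		{
+			Name: "expr",
+			Type: customdecode.ExpressionType,
+		},
+	},
+	Type: function.StaticReturnType(cty.Number),
+	Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
+		// The argument arrives as an encapsulated hcl.Expression rather than
+		// an evaluated value, so we can inspect it statically.
+		expr := customdecode.ExpressionFromVal(args[0])
+		return cty.NumberIntVal(int64(len(expr.Variables()))), nil
+	},
+})
+```
+
+Because the expression is captured rather than evaluated, this works even if
+the variables it mentions are not defined in the evaluation context at all.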
Here's an example of +`ExpressionClosureType` to implement a function that can evaluate +an expression with some additional variables defined locally, which we'll +call the `with(...)` function: + +```go +var WithFunc = function.New(&function.Spec{ + Params: []function.Parameter{ + { + Name: "variables", + Type: cty.DynamicPseudoType, + }, + { + Name: "expression", + Type: customdecode.ExpressionClosureType, + }, + }, + Type: func(args []cty.Value) (cty.Type, error) { + varsVal := args[0] + exprVal := args[1] + if !varsVal.Type().IsObjectType() { + return cty.NilVal, function.NewArgErrorf(0, "must be an object defining local variables") + } + if !varsVal.IsKnown() { + // We can't predict our result type until the variables object + // is known. + return cty.DynamicPseudoType, nil + } + vars := varsVal.AsValueMap() + closure := customdecode.ExpressionClosureFromVal(exprVal) + result, err := evalWithLocals(vars, closure) + if err != nil { + return cty.NilVal, err + } + return result.Type(), nil + }, + Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) { + varsVal := args[0] + exprVal := args[1] + vars := varsVal.AsValueMap() + closure := customdecode.ExpressionClosureFromVal(exprVal) + return evalWithLocals(vars, closure) + }, +}) + +func evalWithLocals(locals map[string]cty.Value, closure *customdecode.ExpressionClosure) (cty.Value, error) { + childCtx := closure.EvalContext.NewChild() + childCtx.Variables = locals + val, diags := closure.Expression.Value(childCtx) + if diags.HasErrors() { + return cty.NilVal, function.NewArgErrorf(1, "couldn't evaluate expression: %s", diags.Error()) + } + return val, nil +} +``` + +If the above function were placed into an `hcl.EvalContext` as `with`, it +could be used in a native syntax call to that function as follows: + +```hcl + foo = with({name = "Cory"}, "${greeting}, ${name}!") +``` + +The above assumes a variable in the main context called `greeting`, to which +the `with` function adds `name` before evaluating the expression given in +its second argument. This makes that second argument context-sensitive -- it +would behave differently if the user wrote the same thing somewhere else -- so +this capability should be used with care to make sure it doesn't cause confusion +for the end-users of your language. + +There are some other examples of this capability to evaluate expressions in +unusual ways in the `tryfunc` directory that is a sibling of this one. diff --git a/vendor/github.com/hashicorp/hcl/v2/ext/customdecode/customdecode.go b/vendor/github.com/hashicorp/hcl/v2/ext/customdecode/customdecode.go new file mode 100644 index 000000000..c9d7a1efb --- /dev/null +++ b/vendor/github.com/hashicorp/hcl/v2/ext/customdecode/customdecode.go @@ -0,0 +1,56 @@ +// Package customdecode contains a HCL extension that allows, in certain +// contexts, expression evaluation to be overridden by custom static analysis. +// +// This mechanism is only supported in certain specific contexts where +// expressions are decoded with a specific target type in mind. For more +// information, see the documentation on CustomExpressionDecoder. +package customdecode + +import ( + "github.com/hashicorp/hcl/v2" + "github.com/zclconf/go-cty/cty" +) + +type customDecoderImpl int + +// CustomExpressionDecoder is a value intended to be used as a cty capsule +// type ExtensionData key for capsule types whose values are to be obtained +// by static analysis of an expression rather than normal evaluation of that +// expression. 
+// +// When a cooperating capsule type is asked for ExtensionData with this key, +// it must return a non-nil CustomExpressionDecoderFunc value. +// +// This mechanism is not universally supported; instead, it's handled in a few +// specific places where expressions are evaluated with the intent of producing +// a cty.Value of a type given by the calling application. +// +// Specifically, this currently works for type constraints given in +// hcldec.AttrSpec and hcldec.BlockAttrsSpec, and it works for arguments to +// function calls in the HCL native syntax. HCL extensions implemented outside +// of the main HCL module may also implement this; consult their own +// documentation for details. +const CustomExpressionDecoder = customDecoderImpl(1) + +// CustomExpressionDecoderFunc is the type of value that must be returned by +// a capsule type handling the key CustomExpressionDecoder in its ExtensionData +// implementation. +// +// If no error diagnostics are returned, the result value MUST be of the +// capsule type that the decoder function was derived from. If the returned +// error diagnostics prevent producing a value at all, return cty.NilVal. +type CustomExpressionDecoderFunc func(expr hcl.Expression, ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) + +// CustomExpressionDecoderForType takes any cty type and returns its +// custom expression decoder implementation if it has one. If it is not a +// capsule type or it does not implement a custom expression decoder, this +// function returns nil. +func CustomExpressionDecoderForType(ty cty.Type) CustomExpressionDecoderFunc { + if !ty.IsCapsuleType() { + return nil + } + if fn, ok := ty.CapsuleExtensionData(CustomExpressionDecoder).(CustomExpressionDecoderFunc); ok { + return fn + } + return nil +} diff --git a/vendor/github.com/hashicorp/hcl/v2/ext/customdecode/expression_type.go b/vendor/github.com/hashicorp/hcl/v2/ext/customdecode/expression_type.go new file mode 100644 index 000000000..af7c66c23 --- /dev/null +++ b/vendor/github.com/hashicorp/hcl/v2/ext/customdecode/expression_type.go @@ -0,0 +1,146 @@ +package customdecode + +import ( + "fmt" + "reflect" + + "github.com/hashicorp/hcl/v2" + "github.com/zclconf/go-cty/cty" +) + +// ExpressionType is a cty capsule type that carries hcl.Expression values. +// +// This type implements custom decoding in the most general way possible: it +// just captures whatever expression is given to it, with no further processing +// whatsoever. It could therefore be useful in situations where an application +// must defer processing of the expression content until a later step. +// +// ExpressionType only captures the expression, not the evaluation context it +// was destined to be evaluated in. That means this type can be fine for +// situations where the recipient of the value only intends to do static +// analysis, but ExpressionClosureType is more appropriate in situations where +// the recipient will eventually evaluate the given expression. +var ExpressionType cty.Type + +// ExpressionVal returns a new cty value of type ExpressionType, wrapping the +// given expression. +func ExpressionVal(expr hcl.Expression) cty.Value { + return cty.CapsuleVal(ExpressionType, &expr) +} + +// ExpressionFromVal returns the expression encapsulated in the given value, or +// panics if the value is not a known value of ExpressionType. 
+func ExpressionFromVal(v cty.Value) hcl.Expression { + if !v.Type().Equals(ExpressionType) { + panic("value is not of ExpressionType") + } + ptr := v.EncapsulatedValue().(*hcl.Expression) + return *ptr +} + +// ExpressionClosureType is a cty capsule type that carries hcl.Expression +// values along with their original evaluation contexts. +// +// This is similar to ExpressionType except that during custom decoding it +// also captures the hcl.EvalContext that was provided, allowing callers to +// evaluate the expression later in the same context where it would originally +// have been evaluated, or a context derived from that one. +var ExpressionClosureType cty.Type + +// ExpressionClosure is the type encapsulated in ExpressionClosureType +type ExpressionClosure struct { + Expression hcl.Expression + EvalContext *hcl.EvalContext +} + +// ExpressionClosureVal returns a new cty value of type ExpressionClosureType, +// wrapping the given expression closure. +func ExpressionClosureVal(closure *ExpressionClosure) cty.Value { + return cty.CapsuleVal(ExpressionClosureType, closure) +} + +// Value evaluates the closure's expression using the closure's EvalContext, +// returning the result. +func (c *ExpressionClosure) Value() (cty.Value, hcl.Diagnostics) { + return c.Expression.Value(c.EvalContext) +} + +// ExpressionClosureFromVal returns the expression closure encapsulated in the +// given value, or panics if the value is not a known value of +// ExpressionClosureType. +// +// The caller MUST NOT modify the returned closure or the EvalContext inside +// it. To derive a new EvalContext, either create a child context or make +// a copy. +func ExpressionClosureFromVal(v cty.Value) *ExpressionClosure { + if !v.Type().Equals(ExpressionClosureType) { + panic("value is not of ExpressionClosureType") + } + return v.EncapsulatedValue().(*ExpressionClosure) +} + +func init() { + // Getting hold of a reflect.Type for hcl.Expression is a bit tricky because + // it's an interface type, but we can do it with some indirection. 
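+	// (Specifically: reflect.TypeOf((*hcl.Expression)(nil)) produces the
+	// pointer type *hcl.Expression, and Elem then unwraps it to the
+	// hcl.Expression interface type itself.)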
+ goExpressionType := reflect.TypeOf((*hcl.Expression)(nil)).Elem() + + ExpressionType = cty.CapsuleWithOps("expression", goExpressionType, &cty.CapsuleOps{ + ExtensionData: func(key interface{}) interface{} { + switch key { + case CustomExpressionDecoder: + return CustomExpressionDecoderFunc( + func(expr hcl.Expression, ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) { + return ExpressionVal(expr), nil + }, + ) + default: + return nil + } + }, + TypeGoString: func(_ reflect.Type) string { + return "customdecode.ExpressionType" + }, + GoString: func(raw interface{}) string { + exprPtr := raw.(*hcl.Expression) + return fmt.Sprintf("customdecode.ExpressionVal(%#v)", *exprPtr) + }, + RawEquals: func(a, b interface{}) bool { + aPtr := a.(*hcl.Expression) + bPtr := b.(*hcl.Expression) + return reflect.DeepEqual(*aPtr, *bPtr) + }, + }) + ExpressionClosureType = cty.CapsuleWithOps("expression closure", reflect.TypeOf(ExpressionClosure{}), &cty.CapsuleOps{ + ExtensionData: func(key interface{}) interface{} { + switch key { + case CustomExpressionDecoder: + return CustomExpressionDecoderFunc( + func(expr hcl.Expression, ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) { + return ExpressionClosureVal(&ExpressionClosure{ + Expression: expr, + EvalContext: ctx, + }), nil + }, + ) + default: + return nil + } + }, + TypeGoString: func(_ reflect.Type) string { + return "customdecode.ExpressionClosureType" + }, + GoString: func(raw interface{}) string { + closure := raw.(*ExpressionClosure) + return fmt.Sprintf("customdecode.ExpressionClosureVal(%#v)", closure) + }, + RawEquals: func(a, b interface{}) bool { + closureA := a.(*ExpressionClosure) + closureB := b.(*ExpressionClosure) + // The expression itself compares by deep equality, but EvalContexts + // conventionally compare by pointer identity, so we'll comply + // with both conventions here by testing them separately. + return closureA.EvalContext == closureB.EvalContext && + reflect.DeepEqual(closureA.Expression, closureB.Expression) + }, + }) +} diff --git a/vendor/github.com/hashicorp/hcl/v2/ext/tryfunc/README.md b/vendor/github.com/hashicorp/hcl/v2/ext/tryfunc/README.md new file mode 100644 index 000000000..5d56eeca8 --- /dev/null +++ b/vendor/github.com/hashicorp/hcl/v2/ext/tryfunc/README.md @@ -0,0 +1,44 @@ +# "Try" and "can" functions + +This Go package contains two `cty` functions intended for use in an +`hcl.EvalContext` when evaluating HCL native syntax expressions. + +The first function `try` attempts to evaluate each of its argument expressions +in order until one produces a result without any errors. + +```hcl +try(non_existent_variable, 2) # returns 2 +``` + +If none of the expressions succeed, the function call fails with all of the +errors it encountered. + +The second function `can` is similar except that it ignores the result of +the given expression altogether and simply returns `true` if the expression +produced a successful result or `false` if it produced errors. + +Both of these are primarily intended for working with deep data structures +which might not have a dependable shape. For example, we can use `try` to +attempt to fetch a value from deep inside a data structure but produce a +default value if any step of the traversal fails: + +```hcl +result = try(foo.deep[0].lots.of["traversals"], null) +``` + +The final result to `try` should generally be some sort of constant value that +will always evaluate successfully. 
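+
+For example, a constant fallback might look like this (the attribute and
+variable names here are only illustrative):
+
+```hcl
+retries = try(doc.settings.retries, 3)
+```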
+
+## Using these functions
+
+Languages built on HCL can make `try` and `can` available to user code by
+exporting them in the `hcl.EvalContext` used for expression evaluation:
+
+```go
+ctx := &hcl.EvalContext{
+	Functions: map[string]function.Function{
+		"try": tryfunc.TryFunc,
+		"can": tryfunc.CanFunc,
+	},
+}
+```
diff --git a/vendor/github.com/hashicorp/hcl/v2/ext/tryfunc/tryfunc.go b/vendor/github.com/hashicorp/hcl/v2/ext/tryfunc/tryfunc.go
new file mode 100644
index 000000000..2f4862f4a
--- /dev/null
+++ b/vendor/github.com/hashicorp/hcl/v2/ext/tryfunc/tryfunc.go
@@ -0,0 +1,150 @@
+// Package tryfunc contains some optional functions that can be exposed in
+// HCL-based languages to allow authors to test whether a particular expression
+// can succeed and take dynamic action based on that result.
+//
+// These functions are implemented in terms of the customdecode extension from
+// the sibling directory "customdecode", and so they are only useful when
+// used within an HCL EvalContext. Other systems using cty functions are
+// unlikely to support the HCL-specific "customdecode" extension.
+package tryfunc
+
+import (
+	"errors"
+	"fmt"
+	"strings"
+
+	"github.com/hashicorp/hcl/v2"
+	"github.com/hashicorp/hcl/v2/ext/customdecode"
+	"github.com/zclconf/go-cty/cty"
+	"github.com/zclconf/go-cty/cty/function"
+)
+
+// TryFunc is a variadic function that tries to evaluate all of its arguments
+// in sequence until one succeeds, in which case it returns that result, or
+// returns an error if none of them succeed.
+var TryFunc function.Function
+
+// CanFunc tries to evaluate the expression given in its first argument.
+var CanFunc function.Function
+
+func init() {
+	TryFunc = function.New(&function.Spec{
+		VarParam: &function.Parameter{
+			Name: "expressions",
+			Type: customdecode.ExpressionClosureType,
+		},
+		Type: func(args []cty.Value) (cty.Type, error) {
+			v, err := try(args)
+			if err != nil {
+				return cty.NilType, err
+			}
+			return v.Type(), nil
+		},
+		Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
+			return try(args)
+		},
+	})
+	CanFunc = function.New(&function.Spec{
+		Params: []function.Parameter{
+			{
+				Name: "expression",
+				Type: customdecode.ExpressionClosureType,
+			},
+		},
+		Type: function.StaticReturnType(cty.Bool),
+		Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
+			return can(args[0])
+		},
+	})
+}
+
+func try(args []cty.Value) (cty.Value, error) {
+	if len(args) == 0 {
+		return cty.NilVal, errors.New("at least one argument is required")
+	}
+
+	// We'll collect up all of the diagnostics we encounter along the way
+	// and report them all if none of the expressions succeed, so that the
+	// user might get some hints on how to make at least one succeed.
+	var diags hcl.Diagnostics
+	for _, arg := range args {
+		closure := customdecode.ExpressionClosureFromVal(arg)
+		if dependsOnUnknowns(closure.Expression, closure.EvalContext) {
+			// We can't safely decide if this expression will succeed yet,
+			// and so our entire result must be unknown until we have
+			// more information.
+			return cty.DynamicVal, nil
+		}
+
+		v, moreDiags := closure.Value()
+		diags = append(diags, moreDiags...)
+ if moreDiags.HasErrors() { + continue // try the next one, if there is one to try + } + return v, nil // ignore any accumulated diagnostics if one succeeds + } + + // If we fall out here then none of the expressions succeeded, and so + // we must have at least one diagnostic and we'll return all of them + // so that the user can see the errors related to whichever one they + // were expecting to have succeeded in this case. + // + // Because our function must return a single error value rather than + // diagnostics, we'll construct a suitable error message string + // that will make sense in the context of the function call failure + // diagnostic HCL will eventually wrap this in. + var buf strings.Builder + buf.WriteString("no expression succeeded:\n") + for _, diag := range diags { + if diag.Subject != nil { + buf.WriteString(fmt.Sprintf("- %s (at %s)\n %s\n", diag.Summary, diag.Subject, diag.Detail)) + } else { + buf.WriteString(fmt.Sprintf("- %s\n %s\n", diag.Summary, diag.Detail)) + } + } + buf.WriteString("\nAt least one expression must produce a successful result") + return cty.NilVal, errors.New(buf.String()) +} + +func can(arg cty.Value) (cty.Value, error) { + closure := customdecode.ExpressionClosureFromVal(arg) + if dependsOnUnknowns(closure.Expression, closure.EvalContext) { + // Can't decide yet, then. + return cty.UnknownVal(cty.Bool), nil + } + + _, diags := closure.Value() + if diags.HasErrors() { + return cty.False, nil + } + return cty.True, nil +} + +// dependsOnUnknowns returns true if any of the variables that the given +// expression might access are unknown values or contain unknown values. +// +// This is a conservative result that prefers to return true if there's any +// chance that the expression might derive from an unknown value during its +// evaluation; it is likely to produce false-positives for more complex +// expressions involving deep data structures. +func dependsOnUnknowns(expr hcl.Expression, ctx *hcl.EvalContext) bool { + for _, traversal := range expr.Variables() { + val, diags := traversal.TraverseAbs(ctx) + if diags.HasErrors() { + // If the traversal returned a definitive error then it must + // not traverse through any unknowns. + continue + } + if !val.IsWhollyKnown() { + // The value will be unknown if either it refers directly to + // an unknown value or if the traversal moves through an unknown + // collection. We're using IsWhollyKnown, so this also catches + // situations where the traversal refers to a compound data + // structure that contains any unknown values. That's important, + // because during evaluation the expression might evaluate more + // deeply into this structure and encounter the unknowns. + return true + } + } + return false +} diff --git a/vendor/github.com/hashicorp/hcl/v2/ext/typeexpr/README.md b/vendor/github.com/hashicorp/hcl/v2/ext/typeexpr/README.md index ec7094702..058f1e3d8 100644 --- a/vendor/github.com/hashicorp/hcl/v2/ext/typeexpr/README.md +++ b/vendor/github.com/hashicorp/hcl/v2/ext/typeexpr/README.md @@ -65,3 +65,71 @@ type checking it will be one that has identifiers as its attributes; object types with weird attributes generally show up only from arbitrary object constructors in configuration files, which are usually treated either as maps or as the dynamic pseudo-type. 
+
+## Type Constraints as Values
+
+Along with defining a convention for writing down types using HCL expression
+constructs, this package also includes a mechanism for representing types as
+values that can be used as data within an HCL-based language.
+
+`typeexpr.TypeConstraintType` is a
+[`cty` capsule type](https://github.com/zclconf/go-cty/blob/master/docs/types.md#capsule-types)
+that encapsulates `cty.Type` values. You can construct such a value directly
+using the `TypeConstraintVal` function:
+
+```go
+tyVal := typeexpr.TypeConstraintVal(cty.String)
+
+// We can unpack the type from a value using TypeConstraintFromVal
+ty := typeexpr.TypeConstraintFromVal(tyVal)
+```
+
+However, the primary purpose of `typeexpr.TypeConstraintType` is to be
+specified as the type constraint for an argument, in which case it serves
+as a signal for HCL to treat the argument expression as a type constraint
+expression as defined above, rather than as a normal value expression.
+
+"An argument" in the above in practice means the following two locations:
+
+* As the type constraint for a parameter of a cty function that will be
+  used in an `hcl.EvalContext`. In that case, function calls in the HCL
+  native expression syntax will require the argument to be valid type constraint
+  expression syntax and the function implementation will receive a
+  `TypeConstraintType` value as the argument value for that parameter.
+
+* As the type constraint for a `hcldec.AttrSpec` or `hcldec.BlockAttrsSpec`
+  when decoding an HCL body using `hcldec`. In that case, the attributes
+  with that type constraint will be required to be valid type constraint
+  expression syntax and the result will be a `TypeConstraintType` value.
+
+Note that the special handling of these arguments means that an argument
+marked in this way must use the type constraint syntax directly. It is not
+valid to pass in a value of `TypeConstraintType` that has been obtained
+dynamically via some other expression result.
+
+`TypeConstraintType` is provided with the intent of using it internally within
+application code when incorporating type constraint expression syntax into
+an HCL-based language, not to be used for dynamic "programming with types". A
+calling application could support programming with types by defining its _own_
+capsule type, but that is not the purpose of `TypeConstraintType`.
+
+## The "convert" `cty` Function
+
+Building on the `TypeConstraintType` described in the previous section, this
+package also provides `typeexpr.ConvertFunc`, which is a cty function that
+can be placed into an `hcl.EvalContext` (conventionally named "convert") in
+order to provide a general type conversion function in an HCL-based language:
+
+```hcl
+  foo = convert("true", bool)
+```
+
+The second parameter uses the mechanism described in the previous section to
+require its argument to be a type constraint expression rather than a value
+expression. In doing so, it allows converting with any type constraint that
+can be expressed in this package's type constraint syntax. In the above example,
+the `foo` argument would receive a boolean true, or `cty.True` in `cty` terms.
+
+The target type constraint must always be provided statically using inline
+type constraint syntax. There is no way to _dynamically_ select a type
+constraint using this function.
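+
+To make `convert` available in an HCL-based language, export it in the
+`hcl.EvalContext` used for expression evaluation. A minimal sketch (the name
+"convert" is only the conventional suggestion):
+
+```go
+ctx := &hcl.EvalContext{
+	Functions: map[string]function.Function{
+		"convert": typeexpr.ConvertFunc,
+	},
+}
+```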
diff --git a/vendor/github.com/hashicorp/hcl/v2/ext/typeexpr/type_type.go b/vendor/github.com/hashicorp/hcl/v2/ext/typeexpr/type_type.go new file mode 100644 index 000000000..5462d82c3 --- /dev/null +++ b/vendor/github.com/hashicorp/hcl/v2/ext/typeexpr/type_type.go @@ -0,0 +1,118 @@ +package typeexpr + +import ( + "fmt" + "reflect" + + "github.com/hashicorp/hcl/v2" + "github.com/hashicorp/hcl/v2/ext/customdecode" + "github.com/zclconf/go-cty/cty" + "github.com/zclconf/go-cty/cty/convert" + "github.com/zclconf/go-cty/cty/function" +) + +// TypeConstraintType is a cty capsule type that allows cty type constraints to +// be used as values. +// +// If TypeConstraintType is used in a context supporting the +// customdecode.CustomExpressionDecoder extension then it will implement +// expression decoding using the TypeConstraint function, thus allowing +// type expressions to be used in contexts where value expressions might +// normally be expected, such as in arguments to function calls. +var TypeConstraintType cty.Type + +// TypeConstraintVal constructs a cty.Value whose type is +// TypeConstraintType. +func TypeConstraintVal(ty cty.Type) cty.Value { + return cty.CapsuleVal(TypeConstraintType, &ty) +} + +// TypeConstraintFromVal extracts the type from a cty.Value of +// TypeConstraintType that was previously constructed using TypeConstraintVal. +// +// If the given value isn't a known, non-null value of TypeConstraintType +// then this function will panic. +func TypeConstraintFromVal(v cty.Value) cty.Type { + if !v.Type().Equals(TypeConstraintType) { + panic("value is not of TypeConstraintType") + } + ptr := v.EncapsulatedValue().(*cty.Type) + return *ptr +} + +// ConvertFunc is a cty function that implements type conversions. +// +// Its signature is as follows: +// convert(value, type_constraint) +// +// ...where type_constraint is a type constraint expression as defined by +// typeexpr.TypeConstraint. +// +// It relies on HCL's customdecode extension and so it's not suitable for use +// in non-HCL contexts or if you are using a HCL syntax implementation that +// does not support customdecode for function arguments. However, it _is_ +// supported for function calls in the HCL native expression syntax. 
+var ConvertFunc function.Function + +func init() { + TypeConstraintType = cty.CapsuleWithOps("type constraint", reflect.TypeOf(cty.Type{}), &cty.CapsuleOps{ + ExtensionData: func(key interface{}) interface{} { + switch key { + case customdecode.CustomExpressionDecoder: + return customdecode.CustomExpressionDecoderFunc( + func(expr hcl.Expression, ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) { + ty, diags := TypeConstraint(expr) + if diags.HasErrors() { + return cty.NilVal, diags + } + return TypeConstraintVal(ty), nil + }, + ) + default: + return nil + } + }, + TypeGoString: func(_ reflect.Type) string { + return "typeexpr.TypeConstraintType" + }, + GoString: func(raw interface{}) string { + tyPtr := raw.(*cty.Type) + return fmt.Sprintf("typeexpr.TypeConstraintVal(%#v)", *tyPtr) + }, + RawEquals: func(a, b interface{}) bool { + aPtr := a.(*cty.Type) + bPtr := b.(*cty.Type) + return (*aPtr).Equals(*bPtr) + }, + }) + + ConvertFunc = function.New(&function.Spec{ + Params: []function.Parameter{ + { + Name: "value", + Type: cty.DynamicPseudoType, + AllowNull: true, + AllowDynamicType: true, + }, + { + Name: "type", + Type: TypeConstraintType, + }, + }, + Type: func(args []cty.Value) (cty.Type, error) { + wantTypePtr := args[1].EncapsulatedValue().(*cty.Type) + got, err := convert.Convert(args[0], *wantTypePtr) + if err != nil { + return cty.NilType, function.NewArgError(0, err) + } + return got.Type(), nil + }, + Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) { + v, err := convert.Convert(args[0], retType) + if err != nil { + return cty.NilVal, function.NewArgError(0, err) + } + return v, nil + }, + }) +} diff --git a/vendor/github.com/hashicorp/hcl/v2/go.mod b/vendor/github.com/hashicorp/hcl/v2/go.mod index c152e6016..d80c99d9b 100644 --- a/vendor/github.com/hashicorp/hcl/v2/go.mod +++ b/vendor/github.com/hashicorp/hcl/v2/go.mod @@ -6,7 +6,7 @@ require ( github.com/apparentlymart/go-textseg v1.0.0 github.com/davecgh/go-spew v1.1.1 github.com/go-test/deep v1.0.3 - github.com/google/go-cmp v0.2.0 + github.com/google/go-cmp v0.3.1 github.com/kr/pretty v0.1.0 github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348 github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7 @@ -14,7 +14,7 @@ require ( github.com/sergi/go-diff v1.0.0 github.com/spf13/pflag v1.0.2 github.com/stretchr/testify v1.2.2 // indirect - github.com/zclconf/go-cty v1.1.0 + github.com/zclconf/go-cty v1.2.0 golang.org/x/crypto v0.0.0-20190426145343-a29dc8fdc734 golang.org/x/sys v0.0.0-20190502175342-a43fa875dd82 // indirect golang.org/x/text v0.3.2 // indirect diff --git a/vendor/github.com/hashicorp/hcl/v2/go.sum b/vendor/github.com/hashicorp/hcl/v2/go.sum index b3b95415f..76b135fb4 100644 --- a/vendor/github.com/hashicorp/hcl/v2/go.sum +++ b/vendor/github.com/hashicorp/hcl/v2/go.sum @@ -9,8 +9,8 @@ github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs github.com/go-test/deep v1.0.3 h1:ZrJSEWsXzPOxaZnFteGEfooLba+ju3FYIbOrS+rQd68= github.com/go-test/deep v1.0.3/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA= github.com/golang/protobuf v1.1.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/google/go-cmp v0.2.0 h1:+dTQ8DZQJz0Mb/HjFlkptS1FeQ4cWSnN941F8aEG4SQ= -github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= +github.com/google/go-cmp v0.3.1 h1:Xye71clBPdm5HgqGwUkwhbynsUJZhDbS20FvLhQ2izg= +github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= github.com/kr/pretty 
v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI= github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= @@ -29,8 +29,8 @@ github.com/spf13/pflag v1.0.2/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnIn github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w= github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= github.com/vmihailenco/msgpack v3.3.3+incompatible/go.mod h1:fy3FlTQTDXWkZ7Bh6AcGMlsjHatGryHQYUTf1ShIgkk= -github.com/zclconf/go-cty v1.1.0 h1:uJwc9HiBOCpoKIObTQaLR+tsEXx1HBHnOsOOpcdhZgw= -github.com/zclconf/go-cty v1.1.0/go.mod h1:xnAOWiHeOqg2nWS62VtQ7pbOu17FtxJNW8RLEih+O3s= +github.com/zclconf/go-cty v1.2.0 h1:sPHsy7ADcIZQP3vILvTjrh74ZA175TFP5vqiNK1UmlI= +github.com/zclconf/go-cty v1.2.0/go.mod h1:hOPWgoHbaTUnI5k4D2ld+GRpFJSCe6bCM7m1q/N4PQ8= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20190426145343-a29dc8fdc734 h1:p/H982KKEjUnLJkM3tt/LemDnOc1GiZL5FCVlORJ5zo= golang.org/x/crypto v0.0.0-20190426145343-a29dc8fdc734/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= diff --git a/vendor/github.com/hashicorp/hcl/v2/gohcl/decode.go b/vendor/github.com/hashicorp/hcl/v2/gohcl/decode.go index 7ba08eee0..f0d589d77 100644 --- a/vendor/github.com/hashicorp/hcl/v2/gohcl/decode.go +++ b/vendor/github.com/hashicorp/hcl/v2/gohcl/decode.go @@ -147,7 +147,9 @@ func decodeBodyToStruct(body hcl.Body, ctx *hcl.EvalContext, val reflect.Value) if len(blocks) == 0 { if isSlice || isPtr { - val.Field(fieldIdx).Set(reflect.Zero(field.Type)) + if val.Field(fieldIdx).IsNil() { + val.Field(fieldIdx).Set(reflect.Zero(field.Type)) + } } else { diags = append(diags, &hcl.Diagnostic{ Severity: hcl.DiagError, @@ -166,11 +168,20 @@ func decodeBodyToStruct(body hcl.Body, ctx *hcl.EvalContext, val reflect.Value) if isPtr { elemType = reflect.PtrTo(ty) } - sli := reflect.MakeSlice(reflect.SliceOf(elemType), len(blocks), len(blocks)) + sli := val.Field(fieldIdx) + if sli.IsNil() { + sli = reflect.MakeSlice(reflect.SliceOf(elemType), len(blocks), len(blocks)) + } for i, block := range blocks { if isPtr { - v := reflect.New(ty) + if i >= sli.Len() { + sli = reflect.Append(sli, reflect.New(ty)) + } + v := sli.Index(i) + if v.IsNil() { + v = reflect.New(ty) + } diags = append(diags, decodeBlockToValue(block, ctx, v.Elem())...) sli.Index(i).Set(v) } else { @@ -178,12 +189,19 @@ func decodeBodyToStruct(body hcl.Body, ctx *hcl.EvalContext, val reflect.Value) } } + if sli.Len() > len(blocks) { + sli.SetLen(len(blocks)) + } + val.Field(fieldIdx).Set(sli) default: block := blocks[0] if isPtr { - v := reflect.New(ty) + v := val.Field(fieldIdx) + if v.IsNil() { + v = reflect.New(ty) + } diags = append(diags, decodeBlockToValue(block, ctx, v.Elem())...) 
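// A minimal usage sketch for the new typeexpr.ConvertFunc introduced in
// type_type.go above. This is not part of the diff itself; the function
// name "convert" and the example expression are illustrative assumptions.
// Thanks to the customdecode extension, the second argument may be a type
// expression such as set(string) rather than a value expression.
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/ext/typeexpr"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/zclconf/go-cty/cty/function"
)

func main() {
	src := []byte(`convert(["a", "b", "a"], set(string))`)
	expr, diags := hclsyntax.ParseExpression(src, "example.hcl", hcl.InitialPos)
	if diags.HasErrors() {
		panic(diags.Error())
	}
	ctx := &hcl.EvalContext{
		Functions: map[string]function.Function{
			"convert": typeexpr.ConvertFunc,
		},
	}
	val, moreDiags := expr.Value(ctx)
	if moreDiags.HasErrors() {
		panic(moreDiags.Error())
	}
	fmt.Printf("%#v\n", val) // a set(string) value with elements "a" and "b"
}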
val.Field(fieldIdx).Set(v) } else { diff --git a/vendor/github.com/hashicorp/hcl/v2/hcldec/spec.go b/vendor/github.com/hashicorp/hcl/v2/hcldec/spec.go index 6f2d9732c..a70818e1b 100644 --- a/vendor/github.com/hashicorp/hcl/v2/hcldec/spec.go +++ b/vendor/github.com/hashicorp/hcl/v2/hcldec/spec.go @@ -6,6 +6,7 @@ import ( "sort" "github.com/hashicorp/hcl/v2" + "github.com/hashicorp/hcl/v2/ext/customdecode" "github.com/zclconf/go-cty/cty" "github.com/zclconf/go-cty/cty/convert" "github.com/zclconf/go-cty/cty/function" @@ -193,6 +194,14 @@ func (s *AttrSpec) decode(content *hcl.BodyContent, blockLabels []blockLabel, ct return cty.NullVal(s.Type), nil } + if decodeFn := customdecode.CustomExpressionDecoderForType(s.Type); decodeFn != nil { + v, diags := decodeFn(attr.Expr, ctx) + if v == cty.NilVal { + v = cty.UnknownVal(s.Type) + } + return v, diags + } + val, diags := attr.Expr.Value(ctx) convVal, err := convert.Convert(val, s.Type) @@ -204,8 +213,10 @@ func (s *AttrSpec) decode(content *hcl.BodyContent, blockLabels []blockLabel, ct "Inappropriate value for attribute %q: %s.", s.Name, err.Error(), ), - Subject: attr.Expr.StartRange().Ptr(), - Context: hcl.RangeBetween(attr.NameRange, attr.Expr.StartRange()).Ptr(), + Subject: attr.Expr.Range().Ptr(), + Context: hcl.RangeBetween(attr.NameRange, attr.Expr.Range()).Ptr(), + Expression: attr.Expr, + EvalContext: ctx, }) // We'll return an unknown value of the _correct_ type so that the // incomplete result can still be used for some analysis use-cases. @@ -1221,16 +1232,29 @@ func (s *BlockAttrsSpec) decode(content *hcl.BodyContent, blockLabels []blockLab vals := make(map[string]cty.Value, len(attrs)) for name, attr := range attrs { + if decodeFn := customdecode.CustomExpressionDecoderForType(s.ElementType); decodeFn != nil { + attrVal, attrDiags := decodeFn(attr.Expr, ctx) + diags = append(diags, attrDiags...) + if attrVal == cty.NilVal { + attrVal = cty.UnknownVal(s.ElementType) + } + vals[name] = attrVal + continue + } + attrVal, attrDiags := attr.Expr.Value(ctx) diags = append(diags, attrDiags...) 
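// A sketch of what the gohcl/decode.go change above enables: pointer and
// slice block fields that are already populated are now decoded into in
// place rather than replaced, so defaults set before decoding survive.
// The struct and field names here are invented for illustration and are
// not part of this diff.
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/gohcl"
	"github.com/hashicorp/hcl/v2/hclsyntax"
)

type Service struct {
	Name string `hcl:"name,optional"`
	Port int    `hcl:"port,optional"`
}

type Config struct {
	Service *Service `hcl:"service,block"`
}

func main() {
	f, diags := hclsyntax.ParseConfig([]byte("service {\n  port = 8080\n}\n"), "example.hcl", hcl.InitialPos)
	if diags.HasErrors() {
		panic(diags.Error())
	}
	// Pre-populated defaults: the existing *Service pointer is reused,
	// so Name keeps its default while Port is overwritten by the config.
	cfg := Config{Service: &Service{Name: "default", Port: 80}}
	if diags := gohcl.DecodeBody(f.Body, nil, &cfg); diags.HasErrors() {
		panic(diags.Error())
	}
	fmt.Println(cfg.Service.Name, cfg.Service.Port) // default 8080
}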
attrVal, err := convert.Convert(attrVal, s.ElementType) if err != nil { diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid attribute value", - Detail: fmt.Sprintf("Invalid value for attribute of %q block: %s.", s.TypeName, err), - Subject: attr.Expr.Range().Ptr(), + Severity: hcl.DiagError, + Summary: "Invalid attribute value", + Detail: fmt.Sprintf("Invalid value for attribute of %q block: %s.", s.TypeName, err), + Subject: attr.Expr.Range().Ptr(), + Context: hcl.RangeBetween(attr.NameRange, attr.Expr.Range()).Ptr(), + Expression: attr.Expr, + EvalContext: ctx, }) attrVal = cty.UnknownVal(s.ElementType) } diff --git a/vendor/github.com/hashicorp/hcl/v2/hclsyntax/expression.go b/vendor/github.com/hashicorp/hcl/v2/hclsyntax/expression.go index 963ed7752..3fe84ddc3 100644 --- a/vendor/github.com/hashicorp/hcl/v2/hclsyntax/expression.go +++ b/vendor/github.com/hashicorp/hcl/v2/hclsyntax/expression.go @@ -5,6 +5,7 @@ import ( "sync" "github.com/hashicorp/hcl/v2" + "github.com/hashicorp/hcl/v2/ext/customdecode" "github.com/zclconf/go-cty/cty" "github.com/zclconf/go-cty/cty/convert" "github.com/zclconf/go-cty/cty/function" @@ -350,26 +351,38 @@ func (e *FunctionCallExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnosti param = varParam } - val, argDiags := argExpr.Value(ctx) - if len(argDiags) > 0 { + var val cty.Value + if decodeFn := customdecode.CustomExpressionDecoderForType(param.Type); decodeFn != nil { + var argDiags hcl.Diagnostics + val, argDiags = decodeFn(argExpr, ctx) diags = append(diags, argDiags...) - } + if val == cty.NilVal { + val = cty.UnknownVal(param.Type) + } + } else { + var argDiags hcl.Diagnostics + val, argDiags = argExpr.Value(ctx) + if len(argDiags) > 0 { + diags = append(diags, argDiags...) + } - // Try to convert our value to the parameter type - val, err := convert.Convert(val, param.Type) - if err != nil { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid function argument", - Detail: fmt.Sprintf( - "Invalid value for %q parameter: %s.", - param.Name, err, - ), - Subject: argExpr.StartRange().Ptr(), - Context: e.Range().Ptr(), - Expression: argExpr, - EvalContext: ctx, - }) + // Try to convert our value to the parameter type + var err error + val, err = convert.Convert(val, param.Type) + if err != nil { + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Invalid function argument", + Detail: fmt.Sprintf( + "Invalid value for %q parameter: %s.", + param.Name, err, + ), + Subject: argExpr.StartRange().Ptr(), + Context: e.Range().Ptr(), + Expression: argExpr, + EvalContext: ctx, + }) + } } argVals[i] = val @@ -615,8 +628,9 @@ type IndexExpr struct { Collection Expression Key Expression - SrcRange hcl.Range - OpenRange hcl.Range + SrcRange hcl.Range + OpenRange hcl.Range + BracketRange hcl.Range } func (e *IndexExpr) walkChildNodes(w internalWalkFunc) { @@ -631,7 +645,7 @@ func (e *IndexExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) { diags = append(diags, collDiags...) diags = append(diags, keyDiags...) - val, indexDiags := hcl.Index(coll, key, &e.SrcRange) + val, indexDiags := hcl.Index(coll, key, &e.BracketRange) setDiagEvalContext(indexDiags, e, ctx) diags = append(diags, indexDiags...) 
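// The hcldec changes above let a spec ask for TypeConstraintType directly,
// so an attribute can hold a type expression. A minimal sketch, assuming a
// configuration body containing only `type = map(string)`; not part of
// this diff.
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/ext/typeexpr"
	"github.com/hashicorp/hcl/v2/hcldec"
	"github.com/hashicorp/hcl/v2/hclsyntax"
)

func main() {
	f, diags := hclsyntax.ParseConfig([]byte("type = map(string)\n"), "example.hcl", hcl.InitialPos)
	if diags.HasErrors() {
		panic(diags.Error())
	}
	spec := &hcldec.AttrSpec{
		Name:     "type",
		Type:     typeexpr.TypeConstraintType, // triggers the customdecode path added above
		Required: true,
	}
	v, moreDiags := hcldec.Decode(f.Body, spec, nil)
	if moreDiags.HasErrors() {
		panic(moreDiags.Error())
	}
	fmt.Println(typeexpr.TypeConstraintFromVal(v).FriendlyName()) // map of string
}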
return val, diags diff --git a/vendor/github.com/hashicorp/hcl/v2/hclsyntax/expression_vars_gen.go b/vendor/github.com/hashicorp/hcl/v2/hclsyntax/expression_vars_gen.go deleted file mode 100644 index 6793771d4..000000000 --- a/vendor/github.com/hashicorp/hcl/v2/hclsyntax/expression_vars_gen.go +++ /dev/null @@ -1,99 +0,0 @@ -// This is a 'go generate'-oriented program for producing the "Variables" -// method on every Expression implementation found within this package. -// All expressions share the same implementation for this method, which -// just wraps the package-level function "Variables" and uses an AST walk -// to do its work. - -// +build ignore - -package main - -import ( - "fmt" - "go/ast" - "go/parser" - "go/token" - "os" - "sort" -) - -func main() { - fs := token.NewFileSet() - pkgs, err := parser.ParseDir(fs, ".", nil, 0) - if err != nil { - fmt.Fprintf(os.Stderr, "error while parsing: %s\n", err) - os.Exit(1) - } - pkg := pkgs["hclsyntax"] - - // Walk all the files and collect the receivers of any "Value" methods - // that look like they are trying to implement Expression. - var recvs []string - for _, f := range pkg.Files { - for _, decl := range f.Decls { - fd, ok := decl.(*ast.FuncDecl) - if !ok { - continue - } - if fd.Name.Name != "Value" { - continue - } - results := fd.Type.Results.List - if len(results) != 2 { - continue - } - valResult := fd.Type.Results.List[0].Type.(*ast.SelectorExpr).X.(*ast.Ident) - diagsResult := fd.Type.Results.List[1].Type.(*ast.SelectorExpr).X.(*ast.Ident) - - if valResult.Name != "cty" && diagsResult.Name != "hcl" { - continue - } - - // If we have a method called Value and it returns something in - // "cty" followed by something in "hcl" then that's specific enough - // for now, even though this is not 100% exact as a correct - // implementation of Value. - - recvTy := fd.Recv.List[0].Type - - switch rtt := recvTy.(type) { - case *ast.StarExpr: - name := rtt.X.(*ast.Ident).Name - recvs = append(recvs, fmt.Sprintf("*%s", name)) - default: - fmt.Fprintf(os.Stderr, "don't know what to do with a %T receiver\n", recvTy) - } - - } - } - - sort.Strings(recvs) - - of, err := os.OpenFile("expression_vars.go", os.O_WRONLY|os.O_CREATE|os.O_TRUNC, os.ModePerm) - if err != nil { - fmt.Fprintf(os.Stderr, "failed to open output file: %s\n", err) - os.Exit(1) - } - - fmt.Fprint(of, outputPreamble) - for _, recv := range recvs { - fmt.Fprintf(of, outputMethodFmt, recv) - } - fmt.Fprint(of, "\n") - -} - -const outputPreamble = `package hclsyntax - -// Generated by expression_vars_get.go. DO NOT EDIT. -// Run 'go generate' on this package to update the set of functions here. 
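// Because FunctionCallExpr now consults customdecode for each parameter
// type, any function (not only ConvertFunc) may declare a parameter of
// typeexpr.TypeConstraintType. A hypothetical nullval(type) function,
// sketched here under that assumption; it is not part of this diff.
package main

import (
	"github.com/hashicorp/hcl/v2/ext/typeexpr"
	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function"
)

// nullValFunc returns a null value of the type given as its argument,
// e.g. nullval(list(string)) in the native expression syntax.
var nullValFunc = function.New(&function.Spec{
	Params: []function.Parameter{
		{Name: "type", Type: typeexpr.TypeConstraintType},
	},
	Type: func(args []cty.Value) (cty.Type, error) {
		// The argument arrives as a TypeConstraintType capsule value,
		// decoded from a type expression by the customdecode extension.
		return typeexpr.TypeConstraintFromVal(args[0]), nil
	},
	Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
		return cty.NullVal(retType), nil
	},
})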
- -import ( - "github.com/hashicorp/hcl/v2" -)` - -const outputMethodFmt = ` - -func (e %s) Variables() []hcl.Traversal { - return Variables(e) -}` diff --git a/vendor/github.com/hashicorp/hcl/v2/hclsyntax/parser.go b/vendor/github.com/hashicorp/hcl/v2/hclsyntax/parser.go index 6fb284a8f..f67d989e5 100644 --- a/vendor/github.com/hashicorp/hcl/v2/hclsyntax/parser.go +++ b/vendor/github.com/hashicorp/hcl/v2/hclsyntax/parser.go @@ -760,7 +760,7 @@ Traversal: Each: travExpr, Item: itemExpr, - SrcRange: hcl.RangeBetween(dot.Range, lastRange), + SrcRange: hcl.RangeBetween(from.Range(), lastRange), MarkerRange: hcl.RangeBetween(dot.Range, marker.Range), } @@ -819,7 +819,7 @@ Traversal: Each: travExpr, Item: itemExpr, - SrcRange: hcl.RangeBetween(open.Range, travExpr.Range()), + SrcRange: hcl.RangeBetween(from.Range(), travExpr.Range()), MarkerRange: hcl.RangeBetween(open.Range, close.Range), } @@ -867,8 +867,9 @@ Traversal: Collection: ret, Key: keyExpr, - SrcRange: rng, - OpenRange: open.Range, + SrcRange: hcl.RangeBetween(from.Range(), rng), + OpenRange: open.Range, + BracketRange: rng, } } } @@ -899,7 +900,7 @@ func makeRelativeTraversal(expr Expression, next hcl.Traverser, rng hcl.Range) E return &RelativeTraversalExpr{ Source: expr, Traversal: hcl.Traversal{next}, - SrcRange: rng, + SrcRange: hcl.RangeBetween(expr.Range(), rng), } } } diff --git a/vendor/github.com/hashicorp/hcl/v2/hclwrite/ast_body.go b/vendor/github.com/hashicorp/hcl/v2/hclwrite/ast_body.go index c16d13e3a..119f53e62 100644 --- a/vendor/github.com/hashicorp/hcl/v2/hclwrite/ast_body.go +++ b/vendor/github.com/hashicorp/hcl/v2/hclwrite/ast_body.go @@ -60,7 +60,7 @@ func (b *Body) Attributes() map[string]*Attribute { // Blocks returns a new slice of all the blocks in the body. func (b *Body) Blocks() []*Block { ret := make([]*Block, 0, len(b.items)) - for n := range b.items { + for _, n := range b.items.List() { if block, isBlock := n.content.(*Block); isBlock { ret = append(ret, block) } @@ -134,6 +134,26 @@ func (b *Body) RemoveBlock(block *Block) bool { return false } +// SetAttributeRaw either replaces the expression of an existing attribute +// of the given name or adds a new attribute definition to the end of the block, +// using the given tokens verbatim as the expression. +// +// The same caveats apply to this function as for NewExpressionRaw on which +// it is based. If possible, prefer to use SetAttributeValue or +// SetAttributeTraversal. +func (b *Body) SetAttributeRaw(name string, tokens Tokens) *Attribute { + attr := b.GetAttribute(name) + expr := NewExpressionRaw(tokens) + if attr != nil { + attr.expr = attr.expr.ReplaceWith(expr) + } else { + // Assign to the outer attr (no :=) so the new attribute, + // not nil, is returned below. + attr = newAttribute() + attr.init(name, expr) + b.appendItem(attr) + } + return attr +} + // SetAttributeValue either replaces the expression of an existing attribute // of the given name or adds a new attribute definition to the end of the block. // diff --git a/vendor/github.com/hashicorp/hcl/v2/hclwrite/ast_expression.go b/vendor/github.com/hashicorp/hcl/v2/hclwrite/ast_expression.go index 854e71690..073c30871 100644 --- a/vendor/github.com/hashicorp/hcl/v2/hclwrite/ast_expression.go +++ b/vendor/github.com/hashicorp/hcl/v2/hclwrite/ast_expression.go @@ -21,6 +21,29 @@ func newExpression() *Expression { } } +// NewExpressionRaw constructs an expression containing the given raw tokens. +// +// There is no automatic validation that the given tokens produce a valid +// expression. Callers of this function must take care to produce valid +// expression tokens. 
Where possible, use the higher-level functions +// NewExpressionLiteral or NewExpressionAbsTraversal instead. +// +// Because NewExpressionRaw does not interpret the given tokens in any way, +// an expression created by NewExpressionRaw will produce an empty result +// for calls to its method Variables, even if the given token sequence +// contains a subslice that would normally be interpreted as a traversal under +// parsing. +func NewExpressionRaw(tokens Tokens) *Expression { + expr := newExpression() + // We copy the tokens here in order to make sure that later mutations + // by the caller don't inadvertently cause our expression to become + // invalid. + copyTokens := make(Tokens, len(tokens)) + copy(copyTokens, tokens) + expr.children.AppendUnstructuredTokens(copyTokens) + return expr +} + // NewExpressionLiteral constructs an an expression that represents the given // literal value. // diff --git a/vendor/github.com/hashicorp/hcl/v2/hclwrite/generate.go b/vendor/github.com/hashicorp/hcl/v2/hclwrite/generate.go index 289a30d68..4d439acd7 100644 --- a/vendor/github.com/hashicorp/hcl/v2/hclwrite/generate.go +++ b/vendor/github.com/hashicorp/hcl/v2/hclwrite/generate.go @@ -159,12 +159,12 @@ func appendTokensForValue(val cty.Value, toks Tokens) Tokens { func appendTokensForTraversal(traversal hcl.Traversal, toks Tokens) Tokens { for _, step := range traversal { - appendTokensForTraversalStep(step, toks) + toks = appendTokensForTraversalStep(step, toks) } return toks } -func appendTokensForTraversalStep(step hcl.Traverser, toks Tokens) { +func appendTokensForTraversalStep(step hcl.Traverser, toks Tokens) Tokens { switch ts := step.(type) { case hcl.TraverseRoot: toks = append(toks, &Token{ @@ -188,7 +188,7 @@ func appendTokensForTraversalStep(step hcl.Traverser, toks Tokens) { Type: hclsyntax.TokenOBrack, Bytes: []byte{'['}, }) - appendTokensForValue(ts.Key, toks) + toks = appendTokensForValue(ts.Key, toks) toks = append(toks, &Token{ Type: hclsyntax.TokenCBrack, Bytes: []byte{']'}, @@ -196,6 +196,8 @@ func appendTokensForTraversalStep(step hcl.Traverser, toks Tokens) { default: panic(fmt.Sprintf("unsupported traversal step type %T", step)) } + + return toks } func escapeQuotedStringLit(s string) []byte { diff --git a/vendor/github.com/hashicorp/hcl2/LICENSE b/vendor/github.com/hashicorp/hcl2/LICENSE deleted file mode 100644 index 82b4de97c..000000000 --- a/vendor/github.com/hashicorp/hcl2/LICENSE +++ /dev/null @@ -1,353 +0,0 @@ -Mozilla Public License, version 2.0 - -1. Definitions - -1.1. “Contributor” - - means each individual or legal entity that creates, contributes to the - creation of, or owns Covered Software. - -1.2. “Contributor Version” - - means the combination of the Contributions of others (if any) used by a - Contributor and that particular Contributor’s Contribution. - -1.3. “Contribution” - - means Covered Software of a particular Contributor. - -1.4. “Covered Software” - - means Source Code Form to which the initial Contributor has attached the - notice in Exhibit A, the Executable Form of such Source Code Form, and - Modifications of such Source Code Form, in each case including portions - thereof. - -1.5. “Incompatible With Secondary Licenses” - means - - a. that the initial Contributor has attached the notice described in - Exhibit B to the Covered Software; or - - b. that the Covered Software was made available under the terms of version - 1.1 or earlier of the License, but not also under the terms of a - Secondary License. - -1.6. 
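// A short sketch of the new hclwrite entry points added above,
// SetAttributeRaw and NewExpressionRaw. The attribute name and token
// sequence are illustrative assumptions, not part of this diff.
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/hashicorp/hcl/v2/hclwrite"
)

func main() {
	f := hclwrite.NewEmptyFile()
	// The tokens are emitted verbatim, so the caller must ensure they
	// form a valid expression.
	f.Body().SetAttributeRaw("created", hclwrite.Tokens{
		{Type: hclsyntax.TokenIdent, Bytes: []byte("timestamp")},
		{Type: hclsyntax.TokenOParen, Bytes: []byte("(")},
		{Type: hclsyntax.TokenCParen, Bytes: []byte(")")},
	})
	fmt.Printf("%s", f.Bytes()) // created = timestamp()
}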
“Executable Form” - - means any form of the work other than Source Code Form. - -1.7. “Larger Work” - - means a work that combines Covered Software with other material, in a separate - file or files, that is not Covered Software. - -1.8. “License” - - means this document. - -1.9. “Licensable” - - means having the right to grant, to the maximum extent possible, whether at the - time of the initial grant or subsequently, any and all of the rights conveyed by - this License. - -1.10. “Modifications” - - means any of the following: - - a. any file in Source Code Form that results from an addition to, deletion - from, or modification of the contents of Covered Software; or - - b. any new file in Source Code Form that contains any Covered Software. - -1.11. “Patent Claims” of a Contributor - - means any patent claim(s), including without limitation, method, process, - and apparatus claims, in any patent Licensable by such Contributor that - would be infringed, but for the grant of the License, by the making, - using, selling, offering for sale, having made, import, or transfer of - either its Contributions or its Contributor Version. - -1.12. “Secondary License” - - means either the GNU General Public License, Version 2.0, the GNU Lesser - General Public License, Version 2.1, the GNU Affero General Public - License, Version 3.0, or any later versions of those licenses. - -1.13. “Source Code Form” - - means the form of the work preferred for making modifications. - -1.14. “You” (or “Your”) - - means an individual or a legal entity exercising rights under this - License. For legal entities, “You” includes any entity that controls, is - controlled by, or is under common control with You. For purposes of this - definition, “control” means (a) the power, direct or indirect, to cause - the direction or management of such entity, whether by contract or - otherwise, or (b) ownership of more than fifty percent (50%) of the - outstanding shares or beneficial ownership of such entity. - - -2. License Grants and Conditions - -2.1. Grants - - Each Contributor hereby grants You a world-wide, royalty-free, - non-exclusive license: - - a. under intellectual property rights (other than patent or trademark) - Licensable by such Contributor to use, reproduce, make available, - modify, display, perform, distribute, and otherwise exploit its - Contributions, either on an unmodified basis, with Modifications, or as - part of a Larger Work; and - - b. under Patent Claims of such Contributor to make, use, sell, offer for - sale, have made, import, and otherwise transfer either its Contributions - or its Contributor Version. - -2.2. Effective Date - - The licenses granted in Section 2.1 with respect to any Contribution become - effective for each Contribution on the date the Contributor first distributes - such Contribution. - -2.3. Limitations on Grant Scope - - The licenses granted in this Section 2 are the only rights granted under this - License. No additional rights or licenses will be implied from the distribution - or licensing of Covered Software under this License. Notwithstanding Section - 2.1(b) above, no patent license is granted by a Contributor: - - a. for any code that a Contributor has removed from Covered Software; or - - b. for infringements caused by: (i) Your and any other third party’s - modifications of Covered Software, or (ii) the combination of its - Contributions with other software (except as part of its Contributor - Version); or - - c. 
under Patent Claims infringed by Covered Software in the absence of its - Contributions. - - This License does not grant any rights in the trademarks, service marks, or - logos of any Contributor (except as may be necessary to comply with the - notice requirements in Section 3.4). - -2.4. Subsequent Licenses - - No Contributor makes additional grants as a result of Your choice to - distribute the Covered Software under a subsequent version of this License - (see Section 10.2) or under the terms of a Secondary License (if permitted - under the terms of Section 3.3). - -2.5. Representation - - Each Contributor represents that the Contributor believes its Contributions - are its original creation(s) or it has sufficient rights to grant the - rights to its Contributions conveyed by this License. - -2.6. Fair Use - - This License is not intended to limit any rights You have under applicable - copyright doctrines of fair use, fair dealing, or other equivalents. - -2.7. Conditions - - Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in - Section 2.1. - - -3. Responsibilities - -3.1. Distribution of Source Form - - All distribution of Covered Software in Source Code Form, including any - Modifications that You create or to which You contribute, must be under the - terms of this License. You must inform recipients that the Source Code Form - of the Covered Software is governed by the terms of this License, and how - they can obtain a copy of this License. You may not attempt to alter or - restrict the recipients’ rights in the Source Code Form. - -3.2. Distribution of Executable Form - - If You distribute Covered Software in Executable Form then: - - a. such Covered Software must also be made available in Source Code Form, - as described in Section 3.1, and You must inform recipients of the - Executable Form how they can obtain a copy of such Source Code Form by - reasonable means in a timely manner, at a charge no more than the cost - of distribution to the recipient; and - - b. You may distribute such Executable Form under the terms of this License, - or sublicense it under different terms, provided that the license for - the Executable Form does not attempt to limit or alter the recipients’ - rights in the Source Code Form under this License. - -3.3. Distribution of a Larger Work - - You may create and distribute a Larger Work under terms of Your choice, - provided that You also comply with the requirements of this License for the - Covered Software. If the Larger Work is a combination of Covered Software - with a work governed by one or more Secondary Licenses, and the Covered - Software is not Incompatible With Secondary Licenses, this License permits - You to additionally distribute such Covered Software under the terms of - such Secondary License(s), so that the recipient of the Larger Work may, at - their option, further distribute the Covered Software under the terms of - either this License or such Secondary License(s). - -3.4. Notices - - You may not remove or alter the substance of any license notices (including - copyright notices, patent notices, disclaimers of warranty, or limitations - of liability) contained within the Source Code Form of the Covered - Software, except that You may alter any license notices to the extent - required to remedy known factual inaccuracies. - -3.5. 
Application of Additional Terms - - You may choose to offer, and to charge a fee for, warranty, support, - indemnity or liability obligations to one or more recipients of Covered - Software. However, You may do so only on Your own behalf, and not on behalf - of any Contributor. You must make it absolutely clear that any such - warranty, support, indemnity, or liability obligation is offered by You - alone, and You hereby agree to indemnify every Contributor for any - liability incurred by such Contributor as a result of warranty, support, - indemnity or liability terms You offer. You may include additional - disclaimers of warranty and limitations of liability specific to any - jurisdiction. - -4. Inability to Comply Due to Statute or Regulation - - If it is impossible for You to comply with any of the terms of this License - with respect to some or all of the Covered Software due to statute, judicial - order, or regulation then You must: (a) comply with the terms of this License - to the maximum extent possible; and (b) describe the limitations and the code - they affect. Such description must be placed in a text file included with all - distributions of the Covered Software under this License. Except to the - extent prohibited by statute or regulation, such description must be - sufficiently detailed for a recipient of ordinary skill to be able to - understand it. - -5. Termination - -5.1. The rights granted under this License will terminate automatically if You - fail to comply with any of its terms. However, if You become compliant, - then the rights granted under this License from a particular Contributor - are reinstated (a) provisionally, unless and until such Contributor - explicitly and finally terminates Your grants, and (b) on an ongoing basis, - if such Contributor fails to notify You of the non-compliance by some - reasonable means prior to 60 days after You have come back into compliance. - Moreover, Your grants from a particular Contributor are reinstated on an - ongoing basis if such Contributor notifies You of the non-compliance by - some reasonable means, this is the first time You have received notice of - non-compliance with this License from such Contributor, and You become - compliant prior to 30 days after Your receipt of the notice. - -5.2. If You initiate litigation against any entity by asserting a patent - infringement claim (excluding declaratory judgment actions, counter-claims, - and cross-claims) alleging that a Contributor Version directly or - indirectly infringes any patent, then the rights granted to You by any and - all Contributors for the Covered Software under Section 2.1 of this License - shall terminate. - -5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user - license agreements (excluding distributors and resellers) which have been - validly granted by You or Your distributors under this License prior to - termination shall survive termination. - -6. Disclaimer of Warranty - - Covered Software is provided under this License on an “as is” basis, without - warranty of any kind, either expressed, implied, or statutory, including, - without limitation, warranties that the Covered Software is free of defects, - merchantable, fit for a particular purpose or non-infringing. The entire - risk as to the quality and performance of the Covered Software is with You. - Should any Covered Software prove defective in any respect, You (not any - Contributor) assume the cost of any necessary servicing, repair, or - correction. 
This disclaimer of warranty constitutes an essential part of this - License. No use of any Covered Software is authorized under this License - except under this disclaimer. - -7. Limitation of Liability - - Under no circumstances and under no legal theory, whether tort (including - negligence), contract, or otherwise, shall any Contributor, or anyone who - distributes Covered Software as permitted above, be liable to You for any - direct, indirect, special, incidental, or consequential damages of any - character including, without limitation, damages for lost profits, loss of - goodwill, work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses, even if such party shall have been - informed of the possibility of such damages. This limitation of liability - shall not apply to liability for death or personal injury resulting from such - party’s negligence to the extent applicable law prohibits such limitation. - Some jurisdictions do not allow the exclusion or limitation of incidental or - consequential damages, so this exclusion and limitation may not apply to You. - -8. Litigation - - Any litigation relating to this License may be brought only in the courts of - a jurisdiction where the defendant maintains its principal place of business - and such litigation shall be governed by laws of that jurisdiction, without - reference to its conflict-of-law provisions. Nothing in this Section shall - prevent a party’s ability to bring cross-claims or counter-claims. - -9. Miscellaneous - - This License represents the complete agreement concerning the subject matter - hereof. If any provision of this License is held to be unenforceable, such - provision shall be reformed only to the extent necessary to make it - enforceable. Any law or regulation which provides that the language of a - contract shall be construed against the drafter shall not be used to construe - this License against a Contributor. - - -10. Versions of the License - -10.1. New Versions - - Mozilla Foundation is the license steward. Except as provided in Section - 10.3, no one other than the license steward has the right to modify or - publish new versions of this License. Each version will be given a - distinguishing version number. - -10.2. Effect of New Versions - - You may distribute the Covered Software under the terms of the version of - the License under which You originally received the Covered Software, or - under the terms of any subsequent version published by the license - steward. - -10.3. Modified Versions - - If you create software not governed by this License, and you want to - create a new license for such software, you may create and use a modified - version of this License if you rename the license and remove any - references to the name of the license steward (except to note that such - modified license differs from this License). - -10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses - If You choose to distribute Source Code Form that is Incompatible With - Secondary Licenses under the terms of this version of the License, the - notice described in Exhibit B of this License must be attached. - -Exhibit A - Source Code Form License Notice - - This Source Code Form is subject to the - terms of the Mozilla Public License, v. - 2.0. If a copy of the MPL was not - distributed with this file, You can - obtain one at - http://mozilla.org/MPL/2.0/. 
- -If it is not possible or desirable to put the notice in a particular file, then -You may include the notice in a location (such as a LICENSE file in a relevant -directory) where a recipient would be likely to look for such a notice. - -You may add additional accurate notices of copyright ownership. - -Exhibit B - “Incompatible With Secondary Licenses” Notice - - This Source Code Form is “Incompatible - With Secondary Licenses”, as defined by - the Mozilla Public License, v. 2.0. diff --git a/vendor/github.com/hashicorp/hcl2/gohcl/decode.go b/vendor/github.com/hashicorp/hcl2/gohcl/decode.go deleted file mode 100644 index 3a149a8c2..000000000 --- a/vendor/github.com/hashicorp/hcl2/gohcl/decode.go +++ /dev/null @@ -1,304 +0,0 @@ -package gohcl - -import ( - "fmt" - "reflect" - - "github.com/zclconf/go-cty/cty" - - "github.com/hashicorp/hcl2/hcl" - "github.com/zclconf/go-cty/cty/convert" - "github.com/zclconf/go-cty/cty/gocty" -) - -// DecodeBody extracts the configuration within the given body into the given -// value. This value must be a non-nil pointer to either a struct or -// a map, where in the former case the configuration will be decoded using -// struct tags and in the latter case only attributes are allowed and their -// values are decoded into the map. -// -// The given EvalContext is used to resolve any variables or functions in -// expressions encountered while decoding. This may be nil to require only -// constant values, for simple applications that do not support variables or -// functions. -// -// The returned diagnostics should be inspected with its HasErrors method to -// determine if the populated value is valid and complete. If error diagnostics -// are returned then the given value may have been partially-populated but -// may still be accessed by a careful caller for static analysis and editor -// integration use-cases. -func DecodeBody(body hcl.Body, ctx *hcl.EvalContext, val interface{}) hcl.Diagnostics { - rv := reflect.ValueOf(val) - if rv.Kind() != reflect.Ptr { - panic(fmt.Sprintf("target value must be a pointer, not %s", rv.Type().String())) - } - - return decodeBodyToValue(body, ctx, rv.Elem()) -} - -func decodeBodyToValue(body hcl.Body, ctx *hcl.EvalContext, val reflect.Value) hcl.Diagnostics { - et := val.Type() - switch et.Kind() { - case reflect.Struct: - return decodeBodyToStruct(body, ctx, val) - case reflect.Map: - return decodeBodyToMap(body, ctx, val) - default: - panic(fmt.Sprintf("target value must be pointer to struct or map, not %s", et.String())) - } -} - -func decodeBodyToStruct(body hcl.Body, ctx *hcl.EvalContext, val reflect.Value) hcl.Diagnostics { - schema, partial := ImpliedBodySchema(val.Interface()) - - var content *hcl.BodyContent - var leftovers hcl.Body - var diags hcl.Diagnostics - if partial { - content, leftovers, diags = body.PartialContent(schema) - } else { - content, diags = body.Content(schema) - } - if content == nil { - return diags - } - - tags := getFieldTags(val.Type()) - - if tags.Remain != nil { - fieldIdx := *tags.Remain - field := val.Type().Field(fieldIdx) - fieldV := val.Field(fieldIdx) - switch { - case bodyType.AssignableTo(field.Type): - fieldV.Set(reflect.ValueOf(leftovers)) - case attrsType.AssignableTo(field.Type): - attrs, attrsDiags := leftovers.JustAttributes() - if len(attrsDiags) > 0 { - diags = append(diags, attrsDiags...) - } - fieldV.Set(reflect.ValueOf(attrs)) - default: - diags = append(diags, decodeBodyToValue(leftovers, ctx, fieldV)...) 
- } - } - - for name, fieldIdx := range tags.Attributes { - attr := content.Attributes[name] - field := val.Type().Field(fieldIdx) - fieldV := val.Field(fieldIdx) - - if attr == nil { - if !exprType.AssignableTo(field.Type) { - continue - } - - // As a special case, if the target is of type hcl.Expression then - // we'll assign an actual expression that evalues to a cty null, - // so the caller can deal with it within the cty realm rather - // than within the Go realm. - synthExpr := hcl.StaticExpr(cty.NullVal(cty.DynamicPseudoType), body.MissingItemRange()) - fieldV.Set(reflect.ValueOf(synthExpr)) - continue - } - - switch { - case attrType.AssignableTo(field.Type): - fieldV.Set(reflect.ValueOf(attr)) - case exprType.AssignableTo(field.Type): - fieldV.Set(reflect.ValueOf(attr.Expr)) - default: - diags = append(diags, DecodeExpression( - attr.Expr, ctx, fieldV.Addr().Interface(), - )...) - } - } - - blocksByType := content.Blocks.ByType() - - for typeName, fieldIdx := range tags.Blocks { - blocks := blocksByType[typeName] - field := val.Type().Field(fieldIdx) - - ty := field.Type - isSlice := false - isPtr := false - if ty.Kind() == reflect.Slice { - isSlice = true - ty = ty.Elem() - } - if ty.Kind() == reflect.Ptr { - isPtr = true - ty = ty.Elem() - } - - if len(blocks) > 1 && !isSlice { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: fmt.Sprintf("Duplicate %s block", typeName), - Detail: fmt.Sprintf( - "Only one %s block is allowed. Another was defined at %s.", - typeName, blocks[0].DefRange.String(), - ), - Subject: &blocks[1].DefRange, - }) - continue - } - - if len(blocks) == 0 { - if isSlice || isPtr { - val.Field(fieldIdx).Set(reflect.Zero(field.Type)) - } else { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: fmt.Sprintf("Missing %s block", typeName), - Detail: fmt.Sprintf("A %s block is required.", typeName), - Subject: body.MissingItemRange().Ptr(), - }) - } - continue - } - - switch { - - case isSlice: - elemType := ty - if isPtr { - elemType = reflect.PtrTo(ty) - } - sli := reflect.MakeSlice(reflect.SliceOf(elemType), len(blocks), len(blocks)) - - for i, block := range blocks { - if isPtr { - v := reflect.New(ty) - diags = append(diags, decodeBlockToValue(block, ctx, v.Elem())...) - sli.Index(i).Set(v) - } else { - diags = append(diags, decodeBlockToValue(block, ctx, sli.Index(i))...) - } - } - - val.Field(fieldIdx).Set(sli) - - default: - block := blocks[0] - if isPtr { - v := reflect.New(ty) - diags = append(diags, decodeBlockToValue(block, ctx, v.Elem())...) - val.Field(fieldIdx).Set(v) - } else { - diags = append(diags, decodeBlockToValue(block, ctx, val.Field(fieldIdx))...) - } - - } - - } - - return diags -} - -func decodeBodyToMap(body hcl.Body, ctx *hcl.EvalContext, v reflect.Value) hcl.Diagnostics { - attrs, diags := body.JustAttributes() - if attrs == nil { - return diags - } - - mv := reflect.MakeMap(v.Type()) - - for k, attr := range attrs { - switch { - case attrType.AssignableTo(v.Type().Elem()): - mv.SetMapIndex(reflect.ValueOf(k), reflect.ValueOf(attr)) - case exprType.AssignableTo(v.Type().Elem()): - mv.SetMapIndex(reflect.ValueOf(k), reflect.ValueOf(attr.Expr)) - default: - ev := reflect.New(v.Type().Elem()) - diags = append(diags, DecodeExpression(attr.Expr, ctx, ev.Interface())...) 
- mv.SetMapIndex(reflect.ValueOf(k), ev.Elem()) - } - } - - v.Set(mv) - - return diags -} - -func decodeBlockToValue(block *hcl.Block, ctx *hcl.EvalContext, v reflect.Value) hcl.Diagnostics { - var diags hcl.Diagnostics - - ty := v.Type() - - switch { - case blockType.AssignableTo(ty): - v.Elem().Set(reflect.ValueOf(block)) - case bodyType.AssignableTo(ty): - v.Elem().Set(reflect.ValueOf(block.Body)) - case attrsType.AssignableTo(ty): - attrs, attrsDiags := block.Body.JustAttributes() - if len(attrsDiags) > 0 { - diags = append(diags, attrsDiags...) - } - v.Elem().Set(reflect.ValueOf(attrs)) - default: - diags = append(diags, decodeBodyToValue(block.Body, ctx, v)...) - - if len(block.Labels) > 0 { - blockTags := getFieldTags(ty) - for li, lv := range block.Labels { - lfieldIdx := blockTags.Labels[li].FieldIndex - v.Field(lfieldIdx).Set(reflect.ValueOf(lv)) - } - } - - } - - return diags -} - -// DecodeExpression extracts the value of the given expression into the given -// value. This value must be something that gocty is able to decode into, -// since the final decoding is delegated to that package. -// -// The given EvalContext is used to resolve any variables or functions in -// expressions encountered while decoding. This may be nil to require only -// constant values, for simple applications that do not support variables or -// functions. -// -// The returned diagnostics should be inspected with its HasErrors method to -// determine if the populated value is valid and complete. If error diagnostics -// are returned then the given value may have been partially-populated but -// may still be accessed by a careful caller for static analysis and editor -// integration use-cases. -func DecodeExpression(expr hcl.Expression, ctx *hcl.EvalContext, val interface{}) hcl.Diagnostics { - srcVal, diags := expr.Value(ctx) - - convTy, err := gocty.ImpliedType(val) - if err != nil { - panic(fmt.Sprintf("unsuitable DecodeExpression target: %s", err)) - } - - srcVal, err = convert.Convert(srcVal, convTy) - if err != nil { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Unsuitable value type", - Detail: fmt.Sprintf("Unsuitable value: %s", err.Error()), - Subject: expr.StartRange().Ptr(), - Context: expr.Range().Ptr(), - }) - return diags - } - - err = gocty.FromCtyValue(srcVal, val) - if err != nil { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Unsuitable value type", - Detail: fmt.Sprintf("Unsuitable value: %s", err.Error()), - Subject: expr.StartRange().Ptr(), - Context: expr.Range().Ptr(), - }) - } - - return diags -} diff --git a/vendor/github.com/hashicorp/hcl2/gohcl/doc.go b/vendor/github.com/hashicorp/hcl2/gohcl/doc.go deleted file mode 100644 index aa3c6ea9e..000000000 --- a/vendor/github.com/hashicorp/hcl2/gohcl/doc.go +++ /dev/null @@ -1,53 +0,0 @@ -// Package gohcl allows decoding HCL configurations into Go data structures. -// -// It provides a convenient and concise way of describing the schema for -// configuration and then accessing the resulting data via native Go -// types. -// -// A struct field tag scheme is used, similar to other decoding and -// unmarshalling libraries. The tags are formatted as in the following example: -// -// ThingType string `hcl:"thing_type,attr"` -// -// Within each tag there are two comma-separated tokens. The first is the -// name of the corresponding construct in configuration, while the second -// is a keyword giving the kind of construct expected. 
The following -// kind keywords are supported: -// -// attr (the default) indicates that the value is to be populated from an attribute -// block indicates that the value is to populated from a block -// label indicates that the value is to populated from a block label -// remain indicates that the value is to be populated from the remaining body after populating other fields -// -// "attr" fields may either be of type *hcl.Expression, in which case the raw -// expression is assigned, or of any type accepted by gocty, in which case -// gocty will be used to assign the value to a native Go type. -// -// "block" fields may be of type *hcl.Block or hcl.Body, in which case the -// corresponding raw value is assigned, or may be a struct that recursively -// uses the same tags. Block fields may also be slices of any of these types, -// in which case multiple blocks of the corresponding type are decoded into -// the slice. -// -// "label" fields are considered only in a struct used as the type of a field -// marked as "block", and are used sequentially to capture the labels of -// the blocks being decoded. In this case, the name token is used only as -// an identifier for the label in diagnostic messages. -// -// "remain" can be placed on a single field that may be either of type -// hcl.Body or hcl.Attributes, in which case any remaining body content is -// placed into this field for delayed processing. If no "remain" field is -// present then any attributes or blocks not matched by another valid tag -// will cause an error diagnostic. -// -// Only a subset of this tagging/typing vocabulary is supported for the -// "Encode" family of functions. See the EncodeIntoBody docs for full details -// on the constraints there. -// -// Broadly-speaking this package deals with two types of error. The first is -// errors in the configuration itself, which are returned as diagnostics -// written with the configuration author as the target audience. The second -// is bugs in the calling program, such as invalid struct tags, which are -// surfaced via panics since there can be no useful runtime handling of such -// errors and they should certainly not be returned to the user as diagnostics. -package gohcl diff --git a/vendor/github.com/hashicorp/hcl2/gohcl/encode.go b/vendor/github.com/hashicorp/hcl2/gohcl/encode.go deleted file mode 100644 index 3cbf7e48a..000000000 --- a/vendor/github.com/hashicorp/hcl2/gohcl/encode.go +++ /dev/null @@ -1,191 +0,0 @@ -package gohcl - -import ( - "fmt" - "reflect" - "sort" - - "github.com/hashicorp/hcl2/hclwrite" - "github.com/zclconf/go-cty/cty/gocty" -) - -// EncodeIntoBody replaces the contents of the given hclwrite Body with -// attributes and blocks derived from the given value, which must be a -// struct value or a pointer to a struct value with the struct tags defined -// in this package. -// -// This function can work only with fully-decoded data. It will ignore any -// fields tagged as "remain", any fields that decode attributes into either -// hcl.Attribute or hcl.Expression values, and any fields that decode blocks -// into hcl.Attributes values. This function does not have enough information -// to complete the decoding of these types. -// -// Any fields tagged as "label" are ignored by this function. Use EncodeAsBlock -// to produce a whole hclwrite.Block including block labels. -// -// As long as a suitable value is given to encode and the destination body -// is non-nil, this function will always complete. 
It will panic in case of -// any errors in the calling program, such as passing an inappropriate type -// or a nil body. -// -// The layout of the resulting HCL source is derived from the ordering of -// the struct fields, with blank lines around nested blocks of different types. -// Fields representing attributes should usually precede those representing -// blocks so that the attributes can group togather in the result. For more -// control, use the hclwrite API directly. -func EncodeIntoBody(val interface{}, dst *hclwrite.Body) { - rv := reflect.ValueOf(val) - ty := rv.Type() - if ty.Kind() == reflect.Ptr { - rv = rv.Elem() - ty = rv.Type() - } - if ty.Kind() != reflect.Struct { - panic(fmt.Sprintf("value is %s, not struct", ty.Kind())) - } - - tags := getFieldTags(ty) - populateBody(rv, ty, tags, dst) -} - -// EncodeAsBlock creates a new hclwrite.Block populated with the data from -// the given value, which must be a struct or pointer to struct with the -// struct tags defined in this package. -// -// If the given struct type has fields tagged with "label" tags then they -// will be used in order to annotate the created block with labels. -// -// This function has the same constraints as EncodeIntoBody and will panic -// if they are violated. -func EncodeAsBlock(val interface{}, blockType string) *hclwrite.Block { - rv := reflect.ValueOf(val) - ty := rv.Type() - if ty.Kind() == reflect.Ptr { - rv = rv.Elem() - ty = rv.Type() - } - if ty.Kind() != reflect.Struct { - panic(fmt.Sprintf("value is %s, not struct", ty.Kind())) - } - - tags := getFieldTags(ty) - labels := make([]string, len(tags.Labels)) - for i, lf := range tags.Labels { - lv := rv.Field(lf.FieldIndex) - // We just stringify whatever we find. It should always be a string - // but if not then we'll still do something reasonable. 
- labels[i] = fmt.Sprintf("%s", lv.Interface()) - } - - block := hclwrite.NewBlock(blockType, labels) - populateBody(rv, ty, tags, block.Body()) - return block -} - -func populateBody(rv reflect.Value, ty reflect.Type, tags *fieldTags, dst *hclwrite.Body) { - nameIdxs := make(map[string]int, len(tags.Attributes)+len(tags.Blocks)) - namesOrder := make([]string, 0, len(tags.Attributes)+len(tags.Blocks)) - for n, i := range tags.Attributes { - nameIdxs[n] = i - namesOrder = append(namesOrder, n) - } - for n, i := range tags.Blocks { - nameIdxs[n] = i - namesOrder = append(namesOrder, n) - } - sort.SliceStable(namesOrder, func(i, j int) bool { - ni, nj := namesOrder[i], namesOrder[j] - return nameIdxs[ni] < nameIdxs[nj] - }) - - dst.Clear() - - prevWasBlock := false - for _, name := range namesOrder { - fieldIdx := nameIdxs[name] - field := ty.Field(fieldIdx) - fieldTy := field.Type - fieldVal := rv.Field(fieldIdx) - - if fieldTy.Kind() == reflect.Ptr { - fieldTy = fieldTy.Elem() - fieldVal = fieldVal.Elem() - } - - if _, isAttr := tags.Attributes[name]; isAttr { - - if exprType.AssignableTo(fieldTy) || attrType.AssignableTo(fieldTy) { - continue // ignore undecoded fields - } - if !fieldVal.IsValid() { - continue // ignore (field value is nil pointer) - } - if fieldTy.Kind() == reflect.Ptr && fieldVal.IsNil() { - continue // ignore - } - if prevWasBlock { - dst.AppendNewline() - prevWasBlock = false - } - - valTy, err := gocty.ImpliedType(fieldVal.Interface()) - if err != nil { - panic(fmt.Sprintf("cannot encode %T as HCL expression: %s", fieldVal.Interface(), err)) - } - - val, err := gocty.ToCtyValue(fieldVal.Interface(), valTy) - if err != nil { - // This should never happen, since we should always be able - // to decode into the implied type. - panic(fmt.Sprintf("failed to encode %T as %#v: %s", fieldVal.Interface(), valTy, err)) - } - - dst.SetAttributeValue(name, val) - - } else { // must be a block, then - elemTy := fieldTy - isSeq := false - if elemTy.Kind() == reflect.Slice || elemTy.Kind() == reflect.Array { - isSeq = true - elemTy = elemTy.Elem() - } - - if bodyType.AssignableTo(elemTy) || attrsType.AssignableTo(elemTy) { - continue // ignore undecoded fields - } - prevWasBlock = false - - if isSeq { - l := fieldVal.Len() - for i := 0; i < l; i++ { - elemVal := fieldVal.Index(i) - if !elemVal.IsValid() { - continue // ignore (elem value is nil pointer) - } - if elemTy.Kind() == reflect.Ptr && elemVal.IsNil() { - continue // ignore - } - block := EncodeAsBlock(elemVal.Interface(), name) - if !prevWasBlock { - dst.AppendNewline() - prevWasBlock = true - } - dst.AppendBlock(block) - } - } else { - if !fieldVal.IsValid() { - continue // ignore (field value is nil pointer) - } - if elemTy.Kind() == reflect.Ptr && fieldVal.IsNil() { - continue // ignore - } - block := EncodeAsBlock(fieldVal.Interface(), name) - if !prevWasBlock { - dst.AppendNewline() - prevWasBlock = true - } - dst.AppendBlock(block) - } - } - } -} diff --git a/vendor/github.com/hashicorp/hcl2/gohcl/schema.go b/vendor/github.com/hashicorp/hcl2/gohcl/schema.go deleted file mode 100644 index 88164cb05..000000000 --- a/vendor/github.com/hashicorp/hcl2/gohcl/schema.go +++ /dev/null @@ -1,174 +0,0 @@ -package gohcl - -import ( - "fmt" - "reflect" - "sort" - "strings" - - "github.com/hashicorp/hcl2/hcl" -) - -// ImpliedBodySchema produces a hcl.BodySchema derived from the type of the -// given value, which must be a struct value or a pointer to one. If an -// inappropriate value is passed, this function will panic. 
-// -// The second return argument indicates whether the given struct includes -// a "remain" field, and thus the returned schema is non-exhaustive. -// -// This uses the tags on the fields of the struct to discover how each -// field's value should be expressed within configuration. If an invalid -// mapping is attempted, this function will panic. -func ImpliedBodySchema(val interface{}) (schema *hcl.BodySchema, partial bool) { - ty := reflect.TypeOf(val) - - if ty.Kind() == reflect.Ptr { - ty = ty.Elem() - } - - if ty.Kind() != reflect.Struct { - panic(fmt.Sprintf("given value must be struct, not %T", val)) - } - - var attrSchemas []hcl.AttributeSchema - var blockSchemas []hcl.BlockHeaderSchema - - tags := getFieldTags(ty) - - attrNames := make([]string, 0, len(tags.Attributes)) - for n := range tags.Attributes { - attrNames = append(attrNames, n) - } - sort.Strings(attrNames) - for _, n := range attrNames { - idx := tags.Attributes[n] - optional := tags.Optional[n] - field := ty.Field(idx) - - var required bool - - switch { - case field.Type.AssignableTo(exprType): - // If we're decoding to hcl.Expression then absense can be - // indicated via a null value, so we don't specify that - // the field is required during decoding. - required = false - case field.Type.Kind() != reflect.Ptr && !optional: - required = true - default: - required = false - } - - attrSchemas = append(attrSchemas, hcl.AttributeSchema{ - Name: n, - Required: required, - }) - } - - blockNames := make([]string, 0, len(tags.Blocks)) - for n := range tags.Blocks { - blockNames = append(blockNames, n) - } - sort.Strings(blockNames) - for _, n := range blockNames { - idx := tags.Blocks[n] - field := ty.Field(idx) - fty := field.Type - if fty.Kind() == reflect.Slice { - fty = fty.Elem() - } - if fty.Kind() == reflect.Ptr { - fty = fty.Elem() - } - if fty.Kind() != reflect.Struct { - panic(fmt.Sprintf( - "hcl 'block' tag kind cannot be applied to %s field %s: struct required", field.Type.String(), field.Name, - )) - } - ftags := getFieldTags(fty) - var labelNames []string - if len(ftags.Labels) > 0 { - labelNames = make([]string, len(ftags.Labels)) - for i, l := range ftags.Labels { - labelNames[i] = l.Name - } - } - - blockSchemas = append(blockSchemas, hcl.BlockHeaderSchema{ - Type: n, - LabelNames: labelNames, - }) - } - - partial = tags.Remain != nil - schema = &hcl.BodySchema{ - Attributes: attrSchemas, - Blocks: blockSchemas, - } - return schema, partial -} - -type fieldTags struct { - Attributes map[string]int - Blocks map[string]int - Labels []labelField - Remain *int - Optional map[string]bool -} - -type labelField struct { - FieldIndex int - Name string -} - -func getFieldTags(ty reflect.Type) *fieldTags { - ret := &fieldTags{ - Attributes: map[string]int{}, - Blocks: map[string]int{}, - Optional: map[string]bool{}, - } - - ct := ty.NumField() - for i := 0; i < ct; i++ { - field := ty.Field(i) - tag := field.Tag.Get("hcl") - if tag == "" { - continue - } - - comma := strings.Index(tag, ",") - var name, kind string - if comma != -1 { - name = tag[:comma] - kind = tag[comma+1:] - } else { - name = tag - kind = "attr" - } - - switch kind { - case "attr": - ret.Attributes[name] = i - case "block": - ret.Blocks[name] = i - case "label": - ret.Labels = append(ret.Labels, labelField{ - FieldIndex: i, - Name: name, - }) - case "remain": - if ret.Remain != nil { - panic("only one 'remain' tag is permitted") - } - idx := i // copy, because this loop will continue assigning to i - ret.Remain = &idx - case "optional": - 
ret.Attributes[name] = i - ret.Optional[name] = true - default: - panic(fmt.Sprintf("invalid hcl field tag kind %q on %s %q", kind, field.Type.String(), field.Name)) - } - } - - return ret -} diff --git a/vendor/github.com/hashicorp/hcl2/gohcl/types.go b/vendor/github.com/hashicorp/hcl2/gohcl/types.go deleted file mode 100644 index a94f275ad..000000000 --- a/vendor/github.com/hashicorp/hcl2/gohcl/types.go +++ /dev/null @@ -1,16 +0,0 @@ -package gohcl - -import ( - "reflect" - - "github.com/hashicorp/hcl2/hcl" -) - -var victimExpr hcl.Expression -var victimBody hcl.Body - -var exprType = reflect.TypeOf(&victimExpr).Elem() -var bodyType = reflect.TypeOf(&victimBody).Elem() -var blockType = reflect.TypeOf((*hcl.Block)(nil)) -var attrType = reflect.TypeOf((*hcl.Attribute)(nil)) -var attrsType = reflect.TypeOf(hcl.Attributes(nil)) diff --git a/vendor/github.com/hashicorp/hcl2/hcl/diagnostic.go b/vendor/github.com/hashicorp/hcl2/hcl/diagnostic.go deleted file mode 100644 index c320961e1..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/diagnostic.go +++ /dev/null @@ -1,143 +0,0 @@ -package hcl - -import ( - "fmt" -) - -// DiagnosticSeverity represents the severity of a diagnostic. -type DiagnosticSeverity int - -const ( - // DiagInvalid is the invalid zero value of DiagnosticSeverity - DiagInvalid DiagnosticSeverity = iota - - // DiagError indicates that the problem reported by a diagnostic prevents - // further progress in parsing and/or evaluating the subject. - DiagError - - // DiagWarning indicates that the problem reported by a diagnostic warrants - // user attention but does not prevent further progress. It is most - // commonly used for showing deprecation notices. - DiagWarning -) - -// Diagnostic represents information to be presented to a user about an -// error or anomoly in parsing or evaluating configuration. -type Diagnostic struct { - Severity DiagnosticSeverity - - // Summary and Detail contain the English-language description of the - // problem. Summary is a terse description of the general problem and - // detail is a more elaborate, often-multi-sentence description of - // the probem and what might be done to solve it. - Summary string - Detail string - - // Subject and Context are both source ranges relating to the diagnostic. - // - // Subject is a tight range referring to exactly the construct that - // is problematic, while Context is an optional broader range (which should - // fully contain Subject) that ought to be shown around Subject when - // generating isolated source-code snippets in diagnostic messages. - // If Context is nil, the Subject is also the Context. - // - // Some diagnostics have no source ranges at all. If Context is set then - // Subject should always also be set. - Subject *Range - Context *Range - - // For diagnostics that occur when evaluating an expression, Expression - // may refer to that expression and EvalContext may point to the - // EvalContext that was active when evaluating it. This may allow for the - // inclusion of additional useful information when rendering a diagnostic - // message to the user. - // - // It is not always possible to select a single EvalContext for a - // diagnostic, and so in some cases this field may be nil even when an - // expression causes a problem. - // - // EvalContexts form a tree, so the given EvalContext may refer to a parent - // which in turn refers to another parent, etc. 
For a full picture of all - // of the active variables and functions the caller must walk up this - // chain, preferring definitions that are "closer" to the expression in - // case of colliding names. - Expression Expression - EvalContext *EvalContext -} - -// Diagnostics is a list of Diagnostic instances. -type Diagnostics []*Diagnostic - -// error implementation, so that diagnostics can be returned via APIs -// that normally deal in vanilla Go errors. -// -// This presents only minimal context about the error, for compatibility -// with usual expectations about how errors will present as strings. -func (d *Diagnostic) Error() string { - return fmt.Sprintf("%s: %s; %s", d.Subject, d.Summary, d.Detail) -} - -// error implementation, so that sets of diagnostics can be returned via -// APIs that normally deal in vanilla Go errors. -func (d Diagnostics) Error() string { - count := len(d) - switch { - case count == 0: - return "no diagnostics" - case count == 1: - return d[0].Error() - default: - return fmt.Sprintf("%s, and %d other diagnostic(s)", d[0].Error(), count-1) - } -} - -// Append appends a new error to a Diagnostics and returns the whole Diagnostics. -// -// This is provided as a convenience for returning from a function that -// collects and then returns a set of diagnostics: -// -// return nil, diags.Append(&hcl.Diagnostic{ ... }) -// -// Note that this modifies the array underlying the diagnostics slice, so -// must be used carefully within a single codepath. It is incorrect (and rude) -// to extend a diagnostics created by a different subsystem. -func (d Diagnostics) Append(diag *Diagnostic) Diagnostics { - return append(d, diag) -} - -// Extend concatenates the given Diagnostics with the receiver and returns -// the whole new Diagnostics. -// -// This is similar to Append but accepts multiple diagnostics to add. It has -// all the same caveats and constraints. -func (d Diagnostics) Extend(diags Diagnostics) Diagnostics { - return append(d, diags...) -} - -// HasErrors returns true if the receiver contains any diagnostics of -// severity DiagError. -func (d Diagnostics) HasErrors() bool { - for _, diag := range d { - if diag.Severity == DiagError { - return true - } - } - return false -} - -func (d Diagnostics) Errs() []error { - var errs []error - for _, diag := range d { - if diag.Severity == DiagError { - errs = append(errs, diag) - } - } - - return errs -} - -// A DiagnosticWriter emits diagnostics somehow. -type DiagnosticWriter interface { - WriteDiagnostic(*Diagnostic) error - WriteDiagnostics(Diagnostics) error -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/diagnostic_text.go b/vendor/github.com/hashicorp/hcl2/hcl/diagnostic_text.go deleted file mode 100644 index 0b4a2629b..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/diagnostic_text.go +++ /dev/null @@ -1,311 +0,0 @@ -package hcl - -import ( - "bufio" - "bytes" - "errors" - "fmt" - "io" - "sort" - - wordwrap "github.com/mitchellh/go-wordwrap" - "github.com/zclconf/go-cty/cty" -) - -type diagnosticTextWriter struct { - files map[string]*File - wr io.Writer - width uint - color bool -} - -// NewDiagnosticTextWriter creates a DiagnosticWriter that writes diagnostics -// to the given writer as formatted text. -// -// It is designed to produce text appropriate to print in a monospaced font -// in a terminal of a particular width, or optionally with no width limit. -// -// The given width may be zero to disable word-wrapping of the detail text -// and truncation of source code snippets.
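The `Diagnostics` helpers removed here (`Append`, `Extend`, `HasErrors`) support a collect-and-return style. A minimal sketch of that pattern, again under the pre-v2 import paths; `checkName` is an invented example:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
)

func checkName(name string) hcl.Diagnostics {
	var diags hcl.Diagnostics
	if name == "" {
		// Append returns the extended set, matching the doc comment above.
		diags = diags.Append(&hcl.Diagnostic{
			Severity: hcl.DiagError,
			Summary:  "Missing name",
			Detail:   "A name must be provided.",
		})
	}
	return diags
}

func main() {
	var diags hcl.Diagnostics
	diags = diags.Extend(checkName(""))
	if diags.HasErrors() {
		// Diagnostics implements error, so it can cross plain-error APIs.
		fmt.Println(diags.Error())
	}
}
```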
-// -// If color is set to true, the output will include VT100 escape sequences to -// color-code the severity indicators. It is suggested to turn this off if -// the target writer is not a terminal. -func NewDiagnosticTextWriter(wr io.Writer, files map[string]*File, width uint, color bool) DiagnosticWriter { - return &diagnosticTextWriter{ - files: files, - wr: wr, - width: width, - color: color, - } -} - -func (w *diagnosticTextWriter) WriteDiagnostic(diag *Diagnostic) error { - if diag == nil { - return errors.New("nil diagnostic") - } - - var colorCode, highlightCode, resetCode string - if w.color { - switch diag.Severity { - case DiagError: - colorCode = "\x1b[31m" - case DiagWarning: - colorCode = "\x1b[33m" - } - resetCode = "\x1b[0m" - highlightCode = "\x1b[1;4m" - } - - var severityStr string - switch diag.Severity { - case DiagError: - severityStr = "Error" - case DiagWarning: - severityStr = "Warning" - default: - // should never happen - severityStr = "???????" - } - - fmt.Fprintf(w.wr, "%s%s%s: %s\n\n", colorCode, severityStr, resetCode, diag.Summary) - - if diag.Subject != nil { - snipRange := *diag.Subject - highlightRange := snipRange - if diag.Context != nil { - // Show enough of the source code to include both the subject - // and context ranges, which overlap in all reasonable - // situations. - snipRange = RangeOver(snipRange, *diag.Context) - } - // We can't illustrate an empty range, so we'll turn such ranges into - // single-character ranges, which might not be totally valid (may point - // off the end of a line, or off the end of the file) but are good - // enough for the bounds checks we do below. - if snipRange.Empty() { - snipRange.End.Byte++ - snipRange.End.Column++ - } - if highlightRange.Empty() { - highlightRange.End.Byte++ - highlightRange.End.Column++ - } - - file := w.files[diag.Subject.Filename] - if file == nil || file.Bytes == nil { - fmt.Fprintf(w.wr, " on %s line %d:\n (source code not available)\n\n", diag.Subject.Filename, diag.Subject.Start.Line) - } else { - - var contextLine string - if diag.Subject != nil { - contextLine = contextString(file, diag.Subject.Start.Byte) - if contextLine != "" { - contextLine = ", in " + contextLine - } - } - - fmt.Fprintf(w.wr, " on %s line %d%s:\n", diag.Subject.Filename, diag.Subject.Start.Line, contextLine) - - src := file.Bytes - sc := NewRangeScanner(src, diag.Subject.Filename, bufio.ScanLines) - - for sc.Scan() { - lineRange := sc.Range() - if !lineRange.Overlaps(snipRange) { - continue - } - - beforeRange, highlightedRange, afterRange := lineRange.PartitionAround(highlightRange) - if highlightedRange.Empty() { - fmt.Fprintf(w.wr, "%4d: %s\n", lineRange.Start.Line, sc.Bytes()) - } else { - before := beforeRange.SliceBytes(src) - highlighted := highlightedRange.SliceBytes(src) - after := afterRange.SliceBytes(src) - fmt.Fprintf( - w.wr, "%4d: %s%s%s%s%s\n", - lineRange.Start.Line, - before, - highlightCode, highlighted, resetCode, - after, - ) - } - - } - - w.wr.Write([]byte{'\n'}) - } - - if diag.Expression != nil && diag.EvalContext != nil { - // We will attempt to render the values for any variables - // referenced in the given expression as additional context, for - // situations where the same expression is evaluated multiple - // times in different scopes. 
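A sketch of how `NewDiagnosticTextWriter` is typically wired up, using `hclparse` so the writer can quote source snippets; the filename and the malformed input are illustrative:

```go
package main

import (
	"os"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hclparse"
)

func main() {
	parser := hclparse.NewParser()
	// The doubled "=" forces a parse diagnostic for demonstration.
	_, diags := parser.ParseHCL([]byte("bad = = 1\n"), "main.hcl")

	// 78-column wrapping, no color; the file map lets the writer show
	// the offending source lines under each message.
	wr := hcl.NewDiagnosticTextWriter(os.Stderr, parser.Files(), 78, false)
	wr.WriteDiagnostics(diags)
}
```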
- expr := diag.Expression - ctx := diag.EvalContext - - vars := expr.Variables() - stmts := make([]string, 0, len(vars)) - seen := make(map[string]struct{}, len(vars)) - for _, traversal := range vars { - val, diags := traversal.TraverseAbs(ctx) - if diags.HasErrors() { - // Skip anything that generates errors, since we probably - // already have the same error in our diagnostics set - // already. - continue - } - - traversalStr := w.traversalStr(traversal) - if _, exists := seen[traversalStr]; exists { - continue // don't show duplicates when the same variable is referenced multiple times - } - switch { - case !val.IsKnown(): - // Can't say anything about this yet, then. - continue - case val.IsNull(): - stmts = append(stmts, fmt.Sprintf("%s set to null", traversalStr)) - default: - stmts = append(stmts, fmt.Sprintf("%s as %s", traversalStr, w.valueStr(val))) - } - seen[traversalStr] = struct{}{} - } - - sort.Strings(stmts) // FIXME: Should maybe use a traversal-aware sort that can sort numeric indexes properly? - last := len(stmts) - 1 - - for i, stmt := range stmts { - switch i { - case 0: - w.wr.Write([]byte{'w', 'i', 't', 'h', ' '}) - default: - w.wr.Write([]byte{' ', ' ', ' ', ' ', ' '}) - } - w.wr.Write([]byte(stmt)) - switch i { - case last: - w.wr.Write([]byte{'.', '\n', '\n'}) - default: - w.wr.Write([]byte{',', '\n'}) - } - } - } - } - - if diag.Detail != "" { - detail := diag.Detail - if w.width != 0 { - detail = wordwrap.WrapString(detail, w.width) - } - fmt.Fprintf(w.wr, "%s\n\n", detail) - } - - return nil -} - -func (w *diagnosticTextWriter) WriteDiagnostics(diags Diagnostics) error { - for _, diag := range diags { - err := w.WriteDiagnostic(diag) - if err != nil { - return err - } - } - return nil -} - -func (w *diagnosticTextWriter) traversalStr(traversal Traversal) string { - // This is a specialized subset of traversal rendering tailored to - // producing helpful contextual messages in diagnostics. It is not - // comprehensive nor intended to be used for other purposes. - - var buf bytes.Buffer - for _, step := range traversal { - switch tStep := step.(type) { - case TraverseRoot: - buf.WriteString(tStep.Name) - case TraverseAttr: - buf.WriteByte('.') - buf.WriteString(tStep.Name) - case TraverseIndex: - buf.WriteByte('[') - if keyTy := tStep.Key.Type(); keyTy.IsPrimitiveType() { - buf.WriteString(w.valueStr(tStep.Key)) - } else { - // We'll just use a placeholder for more complex values, - // since otherwise our result could grow ridiculously long. - buf.WriteString("...") - } - buf.WriteByte(']') - } - } - return buf.String() -} - -func (w *diagnosticTextWriter) valueStr(val cty.Value) string { - // This is a specialized subset of value rendering tailored to producing - // helpful but concise messages in diagnostics. It is not comprehensive - // nor intended to be used for other purposes. - - ty := val.Type() - switch { - case val.IsNull(): - return "null" - case !val.IsKnown(): - // Should never happen here because we should filter before we get - // in here, but we'll do something reasonable rather than panic. - return "(not yet known)" - case ty == cty.Bool: - if val.True() { - return "true" - } - return "false" - case ty == cty.Number: - bf := val.AsBigFloat() - return bf.Text('g', 10) - case ty == cty.String: - // Go string syntax is not exactly the same as HCL native string syntax, - // but we'll accept the minor edge-cases where this is different here - // for now, just to get something reasonable here. 
- return fmt.Sprintf("%q", val.AsString()) - case ty.IsCollectionType() || ty.IsTupleType(): - l := val.LengthInt() - switch l { - case 0: - return "empty " + ty.FriendlyName() - case 1: - return ty.FriendlyName() + " with 1 element" - default: - return fmt.Sprintf("%s with %d elements", ty.FriendlyName(), l) - } - case ty.IsObjectType(): - atys := ty.AttributeTypes() - l := len(atys) - switch l { - case 0: - return "object with no attributes" - case 1: - var name string - for k := range atys { - name = k - } - return fmt.Sprintf("object with 1 attribute %q", name) - default: - return fmt.Sprintf("object with %d attributes", l) - } - default: - return ty.FriendlyName() - } -} - -func contextString(file *File, offset int) string { - type contextStringer interface { - ContextString(offset int) string - } - - if cser, ok := file.Nav.(contextStringer); ok { - return cser.ContextString(offset) - } - return "" -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/didyoumean.go b/vendor/github.com/hashicorp/hcl2/hcl/didyoumean.go deleted file mode 100644 index c12833440..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/didyoumean.go +++ /dev/null @@ -1,24 +0,0 @@ -package hcl - -import ( - "github.com/agext/levenshtein" -) - -// nameSuggestion tries to find a name from the given slice of suggested names -// that is close to the given name and returns it if found. If no suggestion -// is close enough, returns the empty string. -// -// The suggestions are tried in order, so earlier suggestions take precedence -// if the given string is similar to two or more suggestions. -// -// This function is intended to be used with a relatively-small number of -// suggestions. It's not optimized for hundreds or thousands of them. -func nameSuggestion(given string, suggestions []string) string { - for _, suggestion := range suggestions { - dist := levenshtein.Distance(given, suggestion, nil) - if dist < 3 { // threshold determined experimentally - return suggestion - } - } - return "" -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/doc.go b/vendor/github.com/hashicorp/hcl2/hcl/doc.go deleted file mode 100644 index 01318c96f..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/doc.go +++ /dev/null @@ -1 +0,0 @@ -package hcl diff --git a/vendor/github.com/hashicorp/hcl2/hcl/eval_context.go b/vendor/github.com/hashicorp/hcl2/hcl/eval_context.go deleted file mode 100644 index 915910ad8..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/eval_context.go +++ /dev/null @@ -1,25 +0,0 @@ -package hcl - -import ( - "github.com/zclconf/go-cty/cty" - "github.com/zclconf/go-cty/cty/function" -) - -// An EvalContext provides the variables and functions that should be used -// to evaluate an expression. -type EvalContext struct { - Variables map[string]cty.Value - Functions map[string]function.Function - parent *EvalContext -} - -// NewChild returns a new EvalContext that is a child of the receiver. -func (ctx *EvalContext) NewChild() *EvalContext { - return &EvalContext{parent: ctx} -} - -// Parent returns the parent of the receiver, or nil if the receiver has -// no parent. 
-func (ctx *EvalContext) Parent() *EvalContext { - return ctx.parent -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/expr_call.go b/vendor/github.com/hashicorp/hcl2/hcl/expr_call.go deleted file mode 100644 index 6963fbae3..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/expr_call.go +++ /dev/null @@ -1,46 +0,0 @@ -package hcl - -// ExprCall tests if the given expression is a function call and, -// if so, extracts the function name and the expressions that represent -// the arguments. If the given expression is not statically a function call, -// error diagnostics are returned. -// -// A particular Expression implementation can support this function by -// offering a method called ExprCall that takes no arguments and returns -// *StaticCall. This method should return nil if a static call cannot -// be extracted. Alternatively, an implementation can support -// UnwrapExpression to delegate handling of this function to a wrapped -// Expression object. -func ExprCall(expr Expression) (*StaticCall, Diagnostics) { - type exprCall interface { - ExprCall() *StaticCall - } - - physExpr := UnwrapExpressionUntil(expr, func(expr Expression) bool { - _, supported := expr.(exprCall) - return supported - }) - - if exC, supported := physExpr.(exprCall); supported { - if call := exC.ExprCall(); call != nil { - return call, nil - } - } - return nil, Diagnostics{ - &Diagnostic{ - Severity: DiagError, - Summary: "Invalid expression", - Detail: "A static function call is required.", - Subject: expr.StartRange().Ptr(), - }, - } -} - -// StaticCall represents a function call that was extracted statically from -// an expression using ExprCall. -type StaticCall struct { - Name string - NameRange Range - Arguments []Expression - ArgsRange Range -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/expr_list.go b/vendor/github.com/hashicorp/hcl2/hcl/expr_list.go deleted file mode 100644 index d05cca0b9..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/expr_list.go +++ /dev/null @@ -1,37 +0,0 @@ -package hcl - -// ExprList tests if the given expression is a static list construct and, -// if so, extracts the expressions that represent the list elements. -// If the given expression is not a static list, error diagnostics are -// returned. -// -// A particular Expression implementation can support this function by -// offering a method called ExprList that takes no arguments and returns -// []Expression. This method should return nil if a static list cannot -// be extracted. Alternatively, an implementation can support -// UnwrapExpression to delegate handling of this function to a wrapped -// Expression object. 
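The `ExprCall`/`StaticCall` pair above exists for static analysis of configuration, without evaluating anything. A small sketch (the `format(...)` source text is invented):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
)

func main() {
	expr, _ := hclsyntax.ParseExpression([]byte(`format("x=%d", x)`), "inline.hcl", hcl.Pos{Line: 1, Column: 1})

	call, diags := hcl.ExprCall(expr)
	if diags.HasErrors() {
		fmt.Println("not a static function call")
		return
	}
	fmt.Println(call.Name, len(call.Arguments)) // format 2
}
```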
-func ExprList(expr Expression) ([]Expression, Diagnostics) { - type exprList interface { - ExprList() []Expression - } - - physExpr := UnwrapExpressionUntil(expr, func(expr Expression) bool { - _, supported := expr.(exprList) - return supported - }) - - if exL, supported := physExpr.(exprList); supported { - if list := exL.ExprList(); list != nil { - return list, nil - } - } - return nil, Diagnostics{ - &Diagnostic{ - Severity: DiagError, - Summary: "Invalid expression", - Detail: "A static list expression is required.", - Subject: expr.StartRange().Ptr(), - }, - } -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/expr_map.go b/vendor/github.com/hashicorp/hcl2/hcl/expr_map.go deleted file mode 100644 index 96d1ce4bf..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/expr_map.go +++ /dev/null @@ -1,44 +0,0 @@ -package hcl - -// ExprMap tests if the given expression is a static map construct and, -// if so, extracts the expressions that represent the map elements. -// If the given expression is not a static map, error diagnostics are -// returned. -// -// A particular Expression implementation can support this function by -// offering a method called ExprMap that takes no arguments and returns -// []KeyValuePair. This method should return nil if a static map cannot -// be extracted. Alternatively, an implementation can support -// UnwrapExpression to delegate handling of this function to a wrapped -// Expression object. -func ExprMap(expr Expression) ([]KeyValuePair, Diagnostics) { - type exprMap interface { - ExprMap() []KeyValuePair - } - - physExpr := UnwrapExpressionUntil(expr, func(expr Expression) bool { - _, supported := expr.(exprMap) - return supported - }) - - if exM, supported := physExpr.(exprMap); supported { - if pairs := exM.ExprMap(); pairs != nil { - return pairs, nil - } - } - return nil, Diagnostics{ - &Diagnostic{ - Severity: DiagError, - Summary: "Invalid expression", - Detail: "A static map expression is required.", - Subject: expr.StartRange().Ptr(), - }, - } -} - -// KeyValuePair represents a pair of expressions that serve as a single item -// within a map or object definition construct. -type KeyValuePair struct { - Key Expression - Value Expression -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/expr_unwrap.go b/vendor/github.com/hashicorp/hcl2/hcl/expr_unwrap.go deleted file mode 100644 index 6d5d205c4..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/expr_unwrap.go +++ /dev/null @@ -1,68 +0,0 @@ -package hcl - -type unwrapExpression interface { - UnwrapExpression() Expression -} - -// UnwrapExpression removes any "wrapper" expressions from the given expression, -// to recover the representation of the physical expression given in source -// code. -// -// Sometimes wrapping expressions are used to modify expression behavior, e.g. -// in extensions that need to make some local variables available to certain -// sub-trees of the configuration. This can make it difficult to reliably -// type-assert on the physical AST types used by the underlying syntax. -// -// Unwrapping an expression may modify its behavior by stripping away any -// additional constraints or capabilities being applied to the Value and -// Variables methods, so this function should generally only be used prior -// to operations that concern themselves with the static syntax of the input -// configuration, and not with the effective value of the expression. 
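`ExprList` and `ExprMap` follow the same static-analysis pattern; a sketch with invented inputs:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
)

func main() {
	listExpr, _ := hclsyntax.ParseExpression([]byte(`[a, b, 3]`), "inline.hcl", hcl.Pos{Line: 1, Column: 1})
	if elems, diags := hcl.ExprList(listExpr); !diags.HasErrors() {
		fmt.Println("list elements:", len(elems)) // 3, without evaluating a or b
	}

	mapExpr, _ := hclsyntax.ParseExpression([]byte(`{x = 1, y = 2}`), "inline.hcl", hcl.Pos{Line: 1, Column: 1})
	if pairs, diags := hcl.ExprMap(mapExpr); !diags.HasErrors() {
		fmt.Println("map pairs:", len(pairs)) // 2
	}
}
```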
-// -// Wrapper expression types must support unwrapping by implementing a method -// called UnwrapExpression that takes no arguments and returns the embedded -// Expression. Implementations of this method should peel away only one level -// of wrapping, if multiple are present. This method may return nil to -// indicate _dynamically_ that no wrapped expression is available, for -// expression types that might only behave as wrappers in certain cases. -func UnwrapExpression(expr Expression) Expression { - for { - unwrap, wrapped := expr.(unwrapExpression) - if !wrapped { - return expr - } - innerExpr := unwrap.UnwrapExpression() - if innerExpr == nil { - return expr - } - expr = innerExpr - } -} - -// UnwrapExpressionUntil is similar to UnwrapExpression except it gives the -// caller an opportunity to test each level of unwrapping to see whether a -// particular expression is accepted. -// -// This could be used, for example, to unwrap until a particular other -// interface is satisfied, regardless of what wrapping level it is satisfied -// at. -// -// The given callback function must return false to continue unwrapping, or -// true to accept and return the proposed expression given. If the callback -// function rejects even the final, physical expression then the result of -// this function is nil. -func UnwrapExpressionUntil(expr Expression, until func(Expression) bool) Expression { - for { - if until(expr) { - return expr - } - unwrap, wrapped := expr.(unwrapExpression) - if !wrapped { - return nil - } - expr = unwrap.UnwrapExpression() - if expr == nil { - return nil - } - } -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/diagnostics.go b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/diagnostics.go deleted file mode 100644 index 94eaf5892..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/diagnostics.go +++ /dev/null @@ -1,23 +0,0 @@ -package hclsyntax - -import ( - "github.com/hashicorp/hcl2/hcl" -) - -// setDiagEvalContext is an internal helper that will impose a particular -// EvalContext on a set of diagnostics in-place, for any diagnostic that -// does not already have an EvalContext set. -// -// We generally expect diagnostics to be immutable, but this is safe to use -// on any Diagnostics where none of the contained Diagnostic objects have yet -// been seen by a caller. Its purpose is to apply additional context to a -// set of diagnostics produced by a "deeper" component as the stack unwinds -// during expression evaluation. -func setDiagEvalContext(diags hcl.Diagnostics, expr hcl.Expression, ctx *hcl.EvalContext) { - for _, diag := range diags { - if diag.Expression == nil { - diag.Expression = expr - diag.EvalContext = ctx - } - } -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/didyoumean.go b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/didyoumean.go deleted file mode 100644 index ccc1c0ae2..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/didyoumean.go +++ /dev/null @@ -1,24 +0,0 @@ -package hclsyntax - -import ( - "github.com/agext/levenshtein" -) - -// nameSuggestion tries to find a name from the given slice of suggested names -// that is close to the given name and returns it if found. If no suggestion -// is close enough, returns the empty string. -// -// The suggestions are tried in order, so earlier suggestions take precedence -// if the given string is similar to two or more suggestions. -// -// This function is intended to be used with a relatively-small number of -// suggestions.
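A sketch of a wrapper expression satisfying the unwrapping contract described above. The `exprWithLocals` type is invented for illustration: it injects extra variables the way some HCL extensions do, and `UnwrapExpression` peels one level of wrapping:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
	"github.com/zclconf/go-cty/cty"
)

// exprWithLocals wraps another expression and makes extra local variables
// visible to it during evaluation.
type exprWithLocals struct {
	hcl.Expression
	locals map[string]cty.Value
}

func (e exprWithLocals) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
	child := ctx.NewChild()
	child.Variables = e.locals
	return e.Expression.Value(child)
}

// UnwrapExpression peels away exactly one level of wrapping, per the
// contract in the comment above.
func (e exprWithLocals) UnwrapExpression() hcl.Expression {
	return e.Expression
}

func main() {
	inner, _ := hclsyntax.ParseExpression([]byte(`item * 2`), "inline.hcl", hcl.Pos{Line: 1, Column: 1})
	wrapped := exprWithLocals{
		Expression: inner,
		locals:     map[string]cty.Value{"item": cty.NumberIntVal(21)},
	}

	val, _ := wrapped.Value(&hcl.EvalContext{})
	fmt.Println(val.AsBigFloat()) // 42

	// Static analysis can recover the physical expression underneath.
	fmt.Printf("%T\n", hcl.UnwrapExpression(wrapped)) // the hclsyntax node, not exprWithLocals
}
```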
It's not optimized for hundreds or thousands of them. -func nameSuggestion(given string, suggestions []string) string { - for _, suggestion := range suggestions { - dist := levenshtein.Distance(given, suggestion, nil) - if dist < 3 { // threshold determined experimentally - return suggestion - } - } - return "" -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/doc.go b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/doc.go deleted file mode 100644 index 617bc29dc..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/doc.go +++ /dev/null @@ -1,7 +0,0 @@ -// Package hclsyntax contains the parser, AST, etc for HCL's native language, -// as opposed to the JSON variant. -// -// In normal use applications should rarely depend on this package directly, -// instead preferring the higher-level interface of the main hcl package and -// its companion package hclparse. -package hclsyntax diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/expression.go b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/expression.go deleted file mode 100644 index d3f7a74d3..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/expression.go +++ /dev/null @@ -1,1468 +0,0 @@ -package hclsyntax - -import ( - "fmt" - "sync" - - "github.com/hashicorp/hcl2/hcl" - "github.com/zclconf/go-cty/cty" - "github.com/zclconf/go-cty/cty/convert" - "github.com/zclconf/go-cty/cty/function" -) - -// Expression is the abstract type for nodes that behave as HCL expressions. -type Expression interface { - Node - - // The hcl.Expression methods are duplicated here, rather than simply - // embedded, because both Node and hcl.Expression have a Range method - // and so they conflict. - - Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) - Variables() []hcl.Traversal - StartRange() hcl.Range -} - -// Assert that Expression implements hcl.Expression -var assertExprImplExpr hcl.Expression = Expression(nil) - -// LiteralValueExpr is an expression that just always returns a given value. -type LiteralValueExpr struct { - Val cty.Value - SrcRange hcl.Range -} - -func (e *LiteralValueExpr) walkChildNodes(w internalWalkFunc) { - // Literal values have no child nodes -} - -func (e *LiteralValueExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) { - return e.Val, nil -} - -func (e *LiteralValueExpr) Range() hcl.Range { - return e.SrcRange -} - -func (e *LiteralValueExpr) StartRange() hcl.Range { - return e.SrcRange -} - -// Implementation for hcl.AbsTraversalForExpr. -func (e *LiteralValueExpr) AsTraversal() hcl.Traversal { - // This one's a little weird: the contract for AsTraversal is to interpret - // an expression as if it were traversal syntax, and traversal syntax - // doesn't have the special keywords "null", "true", and "false" so these - // are expected to be treated like variables in that case. - // Since our parser already turned them into LiteralValueExpr by the time - // we get here, we need to undo this and infer the name that would've - // originally led to our value. - // We don't do anything for any other values, since they don't overlap - // with traversal roots. - - if e.Val.IsNull() { - // In practice the parser only generates null values of the dynamic - // pseudo-type for literals, so we can safely assume that any null - // was originally the keyword "null".
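Tying the `AsTraversal` implementations in this file together: `hcl.AbsTraversalForExpr` is the public entry point for that static analysis. A small sketch (the traversal source is invented):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
)

func main() {
	expr, _ := hclsyntax.ParseExpression([]byte(`aws_instance.web.id`), "inline.hcl", hcl.Pos{Line: 1, Column: 1})

	// Interpret the expression as traversal syntax without evaluating it.
	trav, diags := hcl.AbsTraversalForExpr(expr)
	if diags.HasErrors() {
		return
	}
	fmt.Println(trav.RootName(), len(trav)) // aws_instance 3
}
```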
- return hcl.Traversal{ - hcl.TraverseRoot{ - Name: "null", - SrcRange: e.SrcRange, - }, - } - } - - switch e.Val { - case cty.True: - return hcl.Traversal{ - hcl.TraverseRoot{ - Name: "true", - SrcRange: e.SrcRange, - }, - } - case cty.False: - return hcl.Traversal{ - hcl.TraverseRoot{ - Name: "false", - SrcRange: e.SrcRange, - }, - } - default: - // No traversal is possible for any other value. - return nil - } -} - -// ScopeTraversalExpr is an Expression that retrieves a value from the scope -// using a traversal. -type ScopeTraversalExpr struct { - Traversal hcl.Traversal - SrcRange hcl.Range -} - -func (e *ScopeTraversalExpr) walkChildNodes(w internalWalkFunc) { - // Scope traversals have no child nodes -} - -func (e *ScopeTraversalExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) { - val, diags := e.Traversal.TraverseAbs(ctx) - setDiagEvalContext(diags, e, ctx) - return val, diags -} - -func (e *ScopeTraversalExpr) Range() hcl.Range { - return e.SrcRange -} - -func (e *ScopeTraversalExpr) StartRange() hcl.Range { - return e.SrcRange -} - -// Implementation for hcl.AbsTraversalForExpr. -func (e *ScopeTraversalExpr) AsTraversal() hcl.Traversal { - return e.Traversal -} - -// RelativeTraversalExpr is an Expression that retrieves a value from another -// value using a _relative_ traversal. -type RelativeTraversalExpr struct { - Source Expression - Traversal hcl.Traversal - SrcRange hcl.Range -} - -func (e *RelativeTraversalExpr) walkChildNodes(w internalWalkFunc) { - w(e.Source) -} - -func (e *RelativeTraversalExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) { - src, diags := e.Source.Value(ctx) - ret, travDiags := e.Traversal.TraverseRel(src) - setDiagEvalContext(travDiags, e, ctx) - diags = append(diags, travDiags...) - return ret, diags -} - -func (e *RelativeTraversalExpr) Range() hcl.Range { - return e.SrcRange -} - -func (e *RelativeTraversalExpr) StartRange() hcl.Range { - return e.SrcRange -} - -// Implementation for hcl.AbsTraversalForExpr. -func (e *RelativeTraversalExpr) AsTraversal() hcl.Traversal { - // We can produce a traversal only if our source can. - st, diags := hcl.AbsTraversalForExpr(e.Source) - if diags.HasErrors() { - return nil - } - - ret := make(hcl.Traversal, len(st)+len(e.Traversal)) - copy(ret, st) - copy(ret[len(st):], e.Traversal) - return ret -} - -// FunctionCallExpr is an Expression that calls a function from the EvalContext -// and returns its result. -type FunctionCallExpr struct { - Name string - Args []Expression - - // If true, the final argument should be a tuple, list or set which will - // expand to be one argument per element. 
- ExpandFinal bool - - NameRange hcl.Range - OpenParenRange hcl.Range - CloseParenRange hcl.Range -} - -func (e *FunctionCallExpr) walkChildNodes(w internalWalkFunc) { - for _, arg := range e.Args { - w(arg) - } -} - -func (e *FunctionCallExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) { - var diags hcl.Diagnostics - - var f function.Function - exists := false - hasNonNilMap := false - thisCtx := ctx - for thisCtx != nil { - if thisCtx.Functions == nil { - thisCtx = thisCtx.Parent() - continue - } - hasNonNilMap = true - f, exists = thisCtx.Functions[e.Name] - if exists { - break - } - thisCtx = thisCtx.Parent() - } - - if !exists { - if !hasNonNilMap { - return cty.DynamicVal, hcl.Diagnostics{ - { - Severity: hcl.DiagError, - Summary: "Function calls not allowed", - Detail: "Functions may not be called here.", - Subject: e.Range().Ptr(), - Expression: e, - EvalContext: ctx, - }, - } - } - - avail := make([]string, 0, len(ctx.Functions)) - for name := range ctx.Functions { - avail = append(avail, name) - } - suggestion := nameSuggestion(e.Name, avail) - if suggestion != "" { - suggestion = fmt.Sprintf(" Did you mean %q?", suggestion) - } - - return cty.DynamicVal, hcl.Diagnostics{ - { - Severity: hcl.DiagError, - Summary: "Call to unknown function", - Detail: fmt.Sprintf("There is no function named %q.%s", e.Name, suggestion), - Subject: &e.NameRange, - Context: e.Range().Ptr(), - Expression: e, - EvalContext: ctx, - }, - } - } - - params := f.Params() - varParam := f.VarParam() - - args := e.Args - if e.ExpandFinal { - if len(args) < 1 { - // should never happen if the parser is behaving - panic("ExpandFinal set on function call with no arguments") - } - expandExpr := args[len(args)-1] - expandVal, expandDiags := expandExpr.Value(ctx) - diags = append(diags, expandDiags...) - if expandDiags.HasErrors() { - return cty.DynamicVal, diags - } - - switch { - case expandVal.Type().IsTupleType() || expandVal.Type().IsListType() || expandVal.Type().IsSetType(): - if expandVal.IsNull() { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid expanding argument value", - Detail: "The expanding argument (indicated by ...) must not be null.", - Subject: expandExpr.Range().Ptr(), - Context: e.Range().Ptr(), - Expression: expandExpr, - EvalContext: ctx, - }) - return cty.DynamicVal, diags - } - if !expandVal.IsKnown() { - return cty.DynamicVal, diags - } - - newArgs := make([]Expression, 0, (len(args)-1)+expandVal.LengthInt()) - newArgs = append(newArgs, args[:len(args)-1]...) - it := expandVal.ElementIterator() - for it.Next() { - _, val := it.Element() - newArgs = append(newArgs, &LiteralValueExpr{ - Val: val, - SrcRange: expandExpr.Range(), - }) - } - args = newArgs - default: - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid expanding argument value", - Detail: "The expanding argument (indicated by ...) must be of a tuple, list, or set type.", - Subject: expandExpr.Range().Ptr(), - Context: e.Range().Ptr(), - Expression: expandExpr, - EvalContext: ctx, - }) - return cty.DynamicVal, diags - } - } - - if len(args) < len(params) { - missing := params[len(args)] - qual := "" - if varParam != nil { - qual = " at least" - } - return cty.DynamicVal, hcl.Diagnostics{ - { - Severity: hcl.DiagError, - Summary: "Not enough function arguments", - Detail: fmt.Sprintf( - "Function %q expects%s %d argument(s). 
Missing value for %q.", - e.Name, qual, len(params), missing.Name, - ), - Subject: &e.CloseParenRange, - Context: e.Range().Ptr(), - Expression: e, - EvalContext: ctx, - }, - } - } - - if varParam == nil && len(args) > len(params) { - return cty.DynamicVal, hcl.Diagnostics{ - { - Severity: hcl.DiagError, - Summary: "Too many function arguments", - Detail: fmt.Sprintf( - "Function %q expects only %d argument(s).", - e.Name, len(params), - ), - Subject: args[len(params)].StartRange().Ptr(), - Context: e.Range().Ptr(), - Expression: e, - EvalContext: ctx, - }, - } - } - - argVals := make([]cty.Value, len(args)) - - for i, argExpr := range args { - var param *function.Parameter - if i < len(params) { - param = ¶ms[i] - } else { - param = varParam - } - - val, argDiags := argExpr.Value(ctx) - if len(argDiags) > 0 { - diags = append(diags, argDiags...) - } - - // Try to convert our value to the parameter type - val, err := convert.Convert(val, param.Type) - if err != nil { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid function argument", - Detail: fmt.Sprintf( - "Invalid value for %q parameter: %s.", - param.Name, err, - ), - Subject: argExpr.StartRange().Ptr(), - Context: e.Range().Ptr(), - Expression: argExpr, - EvalContext: ctx, - }) - } - - argVals[i] = val - } - - if diags.HasErrors() { - // Don't try to execute the function if we already have errors with - // the arguments, because the result will probably be a confusing - // error message. - return cty.DynamicVal, diags - } - - resultVal, err := f.Call(argVals) - if err != nil { - switch terr := err.(type) { - case function.ArgError: - i := terr.Index - var param *function.Parameter - if i < len(params) { - param = ¶ms[i] - } else { - param = varParam - } - argExpr := e.Args[i] - - // TODO: we should also unpick a PathError here and show the - // path to the deep value where the error was detected. - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid function argument", - Detail: fmt.Sprintf( - "Invalid value for %q parameter: %s.", - param.Name, err, - ), - Subject: argExpr.StartRange().Ptr(), - Context: e.Range().Ptr(), - Expression: argExpr, - EvalContext: ctx, - }) - - default: - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Error in function call", - Detail: fmt.Sprintf( - "Call to function %q failed: %s.", - e.Name, err, - ), - Subject: e.StartRange().Ptr(), - Context: e.Range().Ptr(), - Expression: e, - EvalContext: ctx, - }) - } - - return cty.DynamicVal, diags - } - - return resultVal, diags -} - -func (e *FunctionCallExpr) Range() hcl.Range { - return hcl.RangeBetween(e.NameRange, e.CloseParenRange) -} - -func (e *FunctionCallExpr) StartRange() hcl.Range { - return hcl.RangeBetween(e.NameRange, e.OpenParenRange) -} - -// Implementation for hcl.ExprCall. -func (e *FunctionCallExpr) ExprCall() *hcl.StaticCall { - ret := &hcl.StaticCall{ - Name: e.Name, - NameRange: e.NameRange, - Arguments: make([]hcl.Expression, len(e.Args)), - ArgsRange: hcl.RangeBetween(e.OpenParenRange, e.CloseParenRange), - } - // Need to convert our own Expression objects into hcl.Expression. 
- for i, arg := range e.Args { - ret.Arguments[i] = arg - } - return ret -} - -type ConditionalExpr struct { - Condition Expression - TrueResult Expression - FalseResult Expression - - SrcRange hcl.Range -} - -func (e *ConditionalExpr) walkChildNodes(w internalWalkFunc) { - w(e.Condition) - w(e.TrueResult) - w(e.FalseResult) -} - -func (e *ConditionalExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) { - trueResult, trueDiags := e.TrueResult.Value(ctx) - falseResult, falseDiags := e.FalseResult.Value(ctx) - var diags hcl.Diagnostics - - resultType := cty.DynamicPseudoType - convs := make([]convert.Conversion, 2) - - switch { - // If either case is a dynamic null value (which would result from a - // literal null in the config), we know that it can convert to the expected - // type of the opposite case, and we don't need to speculatively reduce the - // final result type to DynamicPseudoType. - - // If we know that either Type is a DynamicPseudoType, we can be certain - // that the other value can convert since it's a pass-through, and we don't - // need to unify the types. If the final evaluation results in the dynamic - // value being returned, there's no conversion we can do, so we return the - // value directly. - case trueResult.RawEquals(cty.NullVal(cty.DynamicPseudoType)): - resultType = falseResult.Type() - convs[0] = convert.GetConversionUnsafe(cty.DynamicPseudoType, resultType) - case falseResult.RawEquals(cty.NullVal(cty.DynamicPseudoType)): - resultType = trueResult.Type() - convs[1] = convert.GetConversionUnsafe(cty.DynamicPseudoType, resultType) - case trueResult.Type() == cty.DynamicPseudoType, falseResult.Type() == cty.DynamicPseudoType: - // the final resultType type is still unknown - // we don't need to get the conversion, because both are a noop. - - default: - // Try to find a type that both results can be converted to. - resultType, convs = convert.UnifyUnsafe([]cty.Type{trueResult.Type(), falseResult.Type()}) - } - - if resultType == cty.NilType { - return cty.DynamicVal, hcl.Diagnostics{ - { - Severity: hcl.DiagError, - Summary: "Inconsistent conditional result types", - Detail: fmt.Sprintf( - // FIXME: Need a helper function for showing natural-language type diffs, - // since this will generate some useless messages in some cases, like - // "These expressions are object and object respectively" if the - // object types don't exactly match. - "The true and false result expressions must have consistent types. The given expressions are %s and %s, respectively.", - trueResult.Type().FriendlyName(), falseResult.Type().FriendlyName(), - ), - Subject: hcl.RangeBetween(e.TrueResult.Range(), e.FalseResult.Range()).Ptr(), - Context: &e.SrcRange, - Expression: e, - EvalContext: ctx, - }, - } - } - - condResult, condDiags := e.Condition.Value(ctx) - diags = append(diags, condDiags...) - if condResult.IsNull() { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Null condition", - Detail: "The condition value is null. 
Conditions must either be true or false.", - Subject: e.Condition.Range().Ptr(), - Context: &e.SrcRange, - Expression: e.Condition, - EvalContext: ctx, - }) - return cty.UnknownVal(resultType), diags - } - if !condResult.IsKnown() { - return cty.UnknownVal(resultType), diags - } - condResult, err := convert.Convert(condResult, cty.Bool) - if err != nil { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Incorrect condition type", - Detail: fmt.Sprintf("The condition expression must be of type bool."), - Subject: e.Condition.Range().Ptr(), - Context: &e.SrcRange, - Expression: e.Condition, - EvalContext: ctx, - }) - return cty.UnknownVal(resultType), diags - } - - if condResult.True() { - diags = append(diags, trueDiags...) - if convs[0] != nil { - var err error - trueResult, err = convs[0](trueResult) - if err != nil { - // Unsafe conversion failed with the concrete result value - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Inconsistent conditional result types", - Detail: fmt.Sprintf( - "The true result value has the wrong type: %s.", - err.Error(), - ), - Subject: e.TrueResult.Range().Ptr(), - Context: &e.SrcRange, - Expression: e.TrueResult, - EvalContext: ctx, - }) - trueResult = cty.UnknownVal(resultType) - } - } - return trueResult, diags - } else { - diags = append(diags, falseDiags...) - if convs[1] != nil { - var err error - falseResult, err = convs[1](falseResult) - if err != nil { - // Unsafe conversion failed with the concrete result value - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Inconsistent conditional result types", - Detail: fmt.Sprintf( - "The false result value has the wrong type: %s.", - err.Error(), - ), - Subject: e.FalseResult.Range().Ptr(), - Context: &e.SrcRange, - Expression: e.FalseResult, - EvalContext: ctx, - }) - falseResult = cty.UnknownVal(resultType) - } - } - return falseResult, diags - } -} - -func (e *ConditionalExpr) Range() hcl.Range { - return e.SrcRange -} - -func (e *ConditionalExpr) StartRange() hcl.Range { - return e.Condition.StartRange() -} - -type IndexExpr struct { - Collection Expression - Key Expression - - SrcRange hcl.Range - OpenRange hcl.Range -} - -func (e *IndexExpr) walkChildNodes(w internalWalkFunc) { - w(e.Collection) - w(e.Key) -} - -func (e *IndexExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) { - var diags hcl.Diagnostics - coll, collDiags := e.Collection.Value(ctx) - key, keyDiags := e.Key.Value(ctx) - diags = append(diags, collDiags...) - diags = append(diags, keyDiags...) - - val, indexDiags := hcl.Index(coll, key, &e.SrcRange) - setDiagEvalContext(indexDiags, e, ctx) - diags = append(diags, indexDiags...) - return val, diags -} - -func (e *IndexExpr) Range() hcl.Range { - return e.SrcRange -} - -func (e *IndexExpr) StartRange() hcl.Range { - return e.OpenRange -} - -type TupleConsExpr struct { - Exprs []Expression - - SrcRange hcl.Range - OpenRange hcl.Range -} - -func (e *TupleConsExpr) walkChildNodes(w internalWalkFunc) { - for _, expr := range e.Exprs { - w(expr) - } -} - -func (e *TupleConsExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) { - var vals []cty.Value - var diags hcl.Diagnostics - - vals = make([]cty.Value, len(e.Exprs)) - for i, expr := range e.Exprs { - val, valDiags := expr.Value(ctx) - vals[i] = val - diags = append(diags, valDiags...) 
- } - - return cty.TupleVal(vals), diags -} - -func (e *TupleConsExpr) Range() hcl.Range { - return e.SrcRange -} - -func (e *TupleConsExpr) StartRange() hcl.Range { - return e.OpenRange -} - -// Implementation for hcl.ExprList -func (e *TupleConsExpr) ExprList() []hcl.Expression { - ret := make([]hcl.Expression, len(e.Exprs)) - for i, expr := range e.Exprs { - ret[i] = expr - } - return ret -} - -type ObjectConsExpr struct { - Items []ObjectConsItem - - SrcRange hcl.Range - OpenRange hcl.Range -} - -type ObjectConsItem struct { - KeyExpr Expression - ValueExpr Expression -} - -func (e *ObjectConsExpr) walkChildNodes(w internalWalkFunc) { - for _, item := range e.Items { - w(item.KeyExpr) - w(item.ValueExpr) - } -} - -func (e *ObjectConsExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) { - var vals map[string]cty.Value - var diags hcl.Diagnostics - - // This will get set to false if we fail to produce any of our keys, - // either because they are actually unknown or if the evaluation produces - // errors. In all of these cases we must return DynamicPseudoType because - // we're unable to know the full set of keys our object has, and thus - // we can't produce a complete value of the intended type. - // - // We still evaluate all of the item keys and values to make sure that we - // get as complete as possible a set of diagnostics. - known := true - - vals = make(map[string]cty.Value, len(e.Items)) - for _, item := range e.Items { - key, keyDiags := item.KeyExpr.Value(ctx) - diags = append(diags, keyDiags...) - - val, valDiags := item.ValueExpr.Value(ctx) - diags = append(diags, valDiags...) - - if keyDiags.HasErrors() { - known = false - continue - } - - if key.IsNull() { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Null value as key", - Detail: "Can't use a null value as a key.", - Subject: item.ValueExpr.Range().Ptr(), - Expression: item.KeyExpr, - EvalContext: ctx, - }) - known = false - continue - } - - var err error - key, err = convert.Convert(key, cty.String) - if err != nil { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Incorrect key type", - Detail: fmt.Sprintf("Can't use this value as a key: %s.", err.Error()), - Subject: item.KeyExpr.Range().Ptr(), - Expression: item.KeyExpr, - EvalContext: ctx, - }) - known = false - continue - } - - if !key.IsKnown() { - known = false - continue - } - - keyStr := key.AsString() - - vals[keyStr] = val - } - - if !known { - return cty.DynamicVal, diags - } - - return cty.ObjectVal(vals), diags -} - -func (e *ObjectConsExpr) Range() hcl.Range { - return e.SrcRange -} - -func (e *ObjectConsExpr) StartRange() hcl.Range { - return e.OpenRange -} - -// Implementation for hcl.ExprMap -func (e *ObjectConsExpr) ExprMap() []hcl.KeyValuePair { - ret := make([]hcl.KeyValuePair, len(e.Items)) - for i, item := range e.Items { - ret[i] = hcl.KeyValuePair{ - Key: item.KeyExpr, - Value: item.ValueExpr, - } - } - return ret -} - -// ObjectConsKeyExpr is a special wrapper used only for ObjectConsExpr keys, -// which deals with the special case that a naked identifier in that position -// must be interpreted as a literal string rather than evaluated directly. -type ObjectConsKeyExpr struct { - Wrapped Expression -} - -func (e *ObjectConsKeyExpr) literalName() string { - // This is our logic for deciding whether to behave like a literal string.
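The key-handling rules described here are easiest to see by evaluating two small objects; a sketch with invented variable names:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	ctx := &hcl.EvalContext{
		Variables: map[string]cty.Value{"name": cty.StringVal("ignored")},
	}

	// A naked identifier key is a literal string, even though "name" is
	// also a variable in scope.
	obj, _ := hclsyntax.ParseExpression([]byte(`{name = 1}`), "inline.hcl", hcl.Pos{Line: 1, Column: 1})
	val, _ := obj.Value(ctx)
	for k := range val.Type().AttributeTypes() {
		fmt.Println(k) // name
	}

	// A naked multi-step traversal is rejected as ambiguous, per the
	// check in ObjectConsKeyExpr.Value below.
	amb, _ := hclsyntax.ParseExpression([]byte(`{a.b = 1}`), "inline.hcl", hcl.Pos{Line: 1, Column: 1})
	if _, diags := amb.Value(ctx); diags.HasErrors() {
		fmt.Println(diags[0].Summary) // Ambiguous attribute key
	}
}
```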
- // We lean on our AbsTraversalForExpr implementation here, which already - // deals with some awkward cases like the expression being the result - // of the keywords "null", "true" and "false" which we'd want to interpret - // as keys here too. - return hcl.ExprAsKeyword(e.Wrapped) -} - -func (e *ObjectConsKeyExpr) walkChildNodes(w internalWalkFunc) { - // We only treat our wrapped expression as a real expression if we're - // not going to interpret it as a literal. - if e.literalName() == "" { - w(e.Wrapped) - } -} - -func (e *ObjectConsKeyExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) { - // Because we accept a naked identifier as a literal key rather than a - // reference, it's confusing to accept a traversal containing periods - // here since we can't tell if the user intends to create a key with - // periods or actually reference something. To avoid confusing downstream - // errors we'll just prohibit a naked multi-step traversal here and - // require the user to state their intent more clearly. - // (This is handled at evaluation time rather than parse time because - // an application using static analysis _can_ accept a naked multi-step - // traversal here, if desired.) - if travExpr, isTraversal := e.Wrapped.(*ScopeTraversalExpr); isTraversal && len(travExpr.Traversal) > 1 { - var diags hcl.Diagnostics - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Ambiguous attribute key", - Detail: "If this expression is intended to be a reference, wrap it in parentheses. If it's instead intended as a literal name containing periods, wrap it in quotes to create a string literal.", - Subject: e.Range().Ptr(), - }) - return cty.DynamicVal, diags - } - - if ln := e.literalName(); ln != "" { - return cty.StringVal(ln), nil - } - return e.Wrapped.Value(ctx) -} - -func (e *ObjectConsKeyExpr) Range() hcl.Range { - return e.Wrapped.Range() -} - -func (e *ObjectConsKeyExpr) StartRange() hcl.Range { - return e.Wrapped.StartRange() -} - -// Implementation for hcl.AbsTraversalForExpr. -func (e *ObjectConsKeyExpr) AsTraversal() hcl.Traversal { - // We can produce a traversal only if our wrappee can. - st, diags := hcl.AbsTraversalForExpr(e.Wrapped) - if diags.HasErrors() { - return nil - } - - return st -} - -func (e *ObjectConsKeyExpr) UnwrapExpression() Expression { - return e.Wrapped -} - -// ForExpr represents iteration constructs: -// -// tuple = [for i, v in list: upper(v) if i > 2] -// object = {for k, v in map: k => upper(v)} -// object_of_tuples = {for v in list: v.key: v...} -type ForExpr struct { - KeyVar string // empty if ignoring the key - ValVar string - - CollExpr Expression - - KeyExpr Expression // nil when producing a tuple - ValExpr Expression - CondExpr Expression // null if no "if" clause is present - - Group bool // set if the ellipsis is used on the value in an object for - - SrcRange hcl.Range - OpenRange hcl.Range - CloseRange hcl.Range -} - -func (e *ForExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) { - var diags hcl.Diagnostics - - collVal, collDiags := e.CollExpr.Value(ctx) - diags = append(diags, collDiags...) 
- - if collVal.IsNull() { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Iteration over null value", - Detail: "A null value cannot be used as the collection in a 'for' expression.", - Subject: e.CollExpr.Range().Ptr(), - Context: &e.SrcRange, - Expression: e.CollExpr, - EvalContext: ctx, - }) - return cty.DynamicVal, diags - } - if collVal.Type() == cty.DynamicPseudoType { - return cty.DynamicVal, diags - } - if !collVal.CanIterateElements() { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Iteration over non-iterable value", - Detail: fmt.Sprintf( - "A value of type %s cannot be used as the collection in a 'for' expression.", - collVal.Type().FriendlyName(), - ), - Subject: e.CollExpr.Range().Ptr(), - Context: &e.SrcRange, - Expression: e.CollExpr, - EvalContext: ctx, - }) - return cty.DynamicVal, diags - } - if !collVal.IsKnown() { - return cty.DynamicVal, diags - } - - // Before we start we'll do an early check to see if any CondExpr we've - // been given is of the wrong type. This isn't 100% reliable (it may - // be DynamicVal until real values are given) but it should catch some - // straightforward cases and prevent a barrage of repeated errors. - if e.CondExpr != nil { - childCtx := ctx.NewChild() - childCtx.Variables = map[string]cty.Value{} - if e.KeyVar != "" { - childCtx.Variables[e.KeyVar] = cty.DynamicVal - } - childCtx.Variables[e.ValVar] = cty.DynamicVal - - result, condDiags := e.CondExpr.Value(childCtx) - diags = append(diags, condDiags...) - if result.IsNull() { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Condition is null", - Detail: "The value of the 'if' clause must not be null.", - Subject: e.CondExpr.Range().Ptr(), - Context: &e.SrcRange, - Expression: e.CondExpr, - EvalContext: ctx, - }) - return cty.DynamicVal, diags - } - _, err := convert.Convert(result, cty.Bool) - if err != nil { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid 'for' condition", - Detail: fmt.Sprintf("The 'if' clause value is invalid: %s.", err.Error()), - Subject: e.CondExpr.Range().Ptr(), - Context: &e.SrcRange, - Expression: e.CondExpr, - EvalContext: ctx, - }) - return cty.DynamicVal, diags - } - if condDiags.HasErrors() { - return cty.DynamicVal, diags - } - } - - if e.KeyExpr != nil { - // Producing an object - var vals map[string]cty.Value - var groupVals map[string][]cty.Value - if e.Group { - groupVals = map[string][]cty.Value{} - } else { - vals = map[string]cty.Value{} - } - - it := collVal.ElementIterator() - - known := true - for it.Next() { - k, v := it.Element() - childCtx := ctx.NewChild() - childCtx.Variables = map[string]cty.Value{} - if e.KeyVar != "" { - childCtx.Variables[e.KeyVar] = k - } - childCtx.Variables[e.ValVar] = v - - if e.CondExpr != nil { - includeRaw, condDiags := e.CondExpr.Value(childCtx) - diags = append(diags, condDiags...) 
- if includeRaw.IsNull() { - if known { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid 'for' condition", - Detail: "The value of the 'if' clause must not be null.", - Subject: e.CondExpr.Range().Ptr(), - Context: &e.SrcRange, - Expression: e.CondExpr, - EvalContext: childCtx, - }) - } - known = false - continue - } - include, err := convert.Convert(includeRaw, cty.Bool) - if err != nil { - if known { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid 'for' condition", - Detail: fmt.Sprintf("The 'if' clause value is invalid: %s.", err.Error()), - Subject: e.CondExpr.Range().Ptr(), - Context: &e.SrcRange, - Expression: e.CondExpr, - EvalContext: childCtx, - }) - } - known = false - continue - } - if !include.IsKnown() { - known = false - continue - } - - if include.False() { - // Skip this element - continue - } - } - - keyRaw, keyDiags := e.KeyExpr.Value(childCtx) - diags = append(diags, keyDiags...) - if keyRaw.IsNull() { - if known { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid object key", - Detail: "Key expression in 'for' expression must not produce a null value.", - Subject: e.KeyExpr.Range().Ptr(), - Context: &e.SrcRange, - Expression: e.KeyExpr, - EvalContext: childCtx, - }) - } - known = false - continue - } - if !keyRaw.IsKnown() { - known = false - continue - } - - key, err := convert.Convert(keyRaw, cty.String) - if err != nil { - if known { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid object key", - Detail: fmt.Sprintf("The key expression produced an invalid result: %s.", err.Error()), - Subject: e.KeyExpr.Range().Ptr(), - Context: &e.SrcRange, - Expression: e.KeyExpr, - EvalContext: childCtx, - }) - } - known = false - continue - } - - val, valDiags := e.ValExpr.Value(childCtx) - diags = append(diags, valDiags...) - - if e.Group { - k := key.AsString() - groupVals[k] = append(groupVals[k], val) - } else { - k := key.AsString() - if _, exists := vals[k]; exists { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Duplicate object key", - Detail: fmt.Sprintf( - "Two different items produced the key %q in this 'for' expression. If duplicates are expected, use the ellipsis (...) after the value expression to enable grouping by key.", - k, - ), - Subject: e.KeyExpr.Range().Ptr(), - Context: &e.SrcRange, - Expression: e.KeyExpr, - EvalContext: childCtx, - }) - } else { - vals[key.AsString()] = val - } - } - } - - if !known { - return cty.DynamicVal, diags - } - - if e.Group { - vals = map[string]cty.Value{} - for k, gvs := range groupVals { - vals[k] = cty.TupleVal(gvs) - } - } - - return cty.ObjectVal(vals), diags - - } else { - // Producing a tuple - vals := []cty.Value{} - - it := collVal.ElementIterator() - - known := true - for it.Next() { - k, v := it.Element() - childCtx := ctx.NewChild() - childCtx.Variables = map[string]cty.Value{} - if e.KeyVar != "" { - childCtx.Variables[e.KeyVar] = k - } - childCtx.Variables[e.ValVar] = v - - if e.CondExpr != nil { - includeRaw, condDiags := e.CondExpr.Value(childCtx) - diags = append(diags, condDiags...) 
- if includeRaw.IsNull() { - if known { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid 'for' condition", - Detail: "The value of the 'if' clause must not be null.", - Subject: e.CondExpr.Range().Ptr(), - Context: &e.SrcRange, - Expression: e.CondExpr, - EvalContext: childCtx, - }) - } - known = false - continue - } - if !includeRaw.IsKnown() { - // We will eventually return DynamicVal, but we'll continue - // iterating in case there are other diagnostics to gather - // for later elements. - known = false - continue - } - - include, err := convert.Convert(includeRaw, cty.Bool) - if err != nil { - if known { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid 'for' condition", - Detail: fmt.Sprintf("The 'if' clause value is invalid: %s.", err.Error()), - Subject: e.CondExpr.Range().Ptr(), - Context: &e.SrcRange, - Expression: e.CondExpr, - EvalContext: childCtx, - }) - } - known = false - continue - } - - if include.False() { - // Skip this element - continue - } - } - - val, valDiags := e.ValExpr.Value(childCtx) - diags = append(diags, valDiags...) - vals = append(vals, val) - } - - if !known { - return cty.DynamicVal, diags - } - - return cty.TupleVal(vals), diags - } -} - -func (e *ForExpr) walkChildNodes(w internalWalkFunc) { - w(e.CollExpr) - - scopeNames := map[string]struct{}{} - if e.KeyVar != "" { - scopeNames[e.KeyVar] = struct{}{} - } - if e.ValVar != "" { - scopeNames[e.ValVar] = struct{}{} - } - - if e.KeyExpr != nil { - w(ChildScope{ - LocalNames: scopeNames, - Expr: e.KeyExpr, - }) - } - w(ChildScope{ - LocalNames: scopeNames, - Expr: e.ValExpr, - }) - if e.CondExpr != nil { - w(ChildScope{ - LocalNames: scopeNames, - Expr: e.CondExpr, - }) - } -} - -func (e *ForExpr) Range() hcl.Range { - return e.SrcRange -} - -func (e *ForExpr) StartRange() hcl.Range { - return e.OpenRange -} - -type SplatExpr struct { - Source Expression - Each Expression - Item *AnonSymbolExpr - - SrcRange hcl.Range - MarkerRange hcl.Range -} - -func (e *SplatExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) { - sourceVal, diags := e.Source.Value(ctx) - if diags.HasErrors() { - // We'll evaluate our "Each" expression here just to see if it - // produces any more diagnostics we can report. Since we're not - // assigning a value to our AnonSymbolExpr here it will return - // DynamicVal, which should short-circuit any use of it. - _, itemDiags := e.Item.Value(ctx) - diags = append(diags, itemDiags...) - return cty.DynamicVal, diags - } - - sourceTy := sourceVal.Type() - if sourceTy == cty.DynamicPseudoType { - // If we don't even know the _type_ of our source value yet then - // we'll need to defer all processing, since we can't decide our - // result type either. - return cty.DynamicVal, diags - } - - // A "special power" of splat expressions is that they can be applied - // both to tuples/lists and to other values, and in the latter case - // the value will be treated as an implicit single-item tuple, or as - // an empty tuple if the value is null. 
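A sketch of the `for` expression evaluation implemented above, producing a tuple with an `if` filter (pre-v2 import paths; the data is invented):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	ctx := &hcl.EvalContext{
		Variables: map[string]cty.Value{
			"list": cty.ListVal([]cty.Value{
				cty.StringVal("a"), cty.StringVal("b"), cty.StringVal("c"),
			}),
		},
	}

	// For a list, the key variable is the element index.
	src := `[for i, v in list: "${i}=${v}" if i != 1]`
	expr, _ := hclsyntax.ParseExpression([]byte(src), "inline.hcl", hcl.Pos{Line: 1, Column: 1})
	val, diags := expr.Value(ctx)
	if !diags.HasErrors() {
		for it := val.ElementIterator(); it.Next(); {
			_, v := it.Element()
			fmt.Println(v.AsString()) // 0=a then 2=c
		}
	}
}
```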
-	autoUpgrade := !(sourceTy.IsTupleType() || sourceTy.IsListType() || sourceTy.IsSetType())
-
-	if sourceVal.IsNull() {
-		if autoUpgrade {
-			return cty.EmptyTupleVal, diags
-		}
-		diags = append(diags, &hcl.Diagnostic{
-			Severity:    hcl.DiagError,
-			Summary:     "Splat of null value",
-			Detail:      "Splat expressions (with the * symbol) cannot be applied to null sequences.",
-			Subject:     e.Source.Range().Ptr(),
-			Context:     hcl.RangeBetween(e.Source.Range(), e.MarkerRange).Ptr(),
-			Expression:  e.Source,
-			EvalContext: ctx,
-		})
-		return cty.DynamicVal, diags
-	}
-
-	if autoUpgrade {
-		sourceVal = cty.TupleVal([]cty.Value{sourceVal})
-		sourceTy = sourceVal.Type()
-	}
-
-	// We'll compute our result type lazily if we need it. In the normal case
-	// it's inferred automatically from the value we construct.
-	resultTy := func() (cty.Type, hcl.Diagnostics) {
-		chiCtx := ctx.NewChild()
-		var diags hcl.Diagnostics
-		switch {
-		case sourceTy.IsListType() || sourceTy.IsSetType():
-			ety := sourceTy.ElementType()
-			e.Item.setValue(chiCtx, cty.UnknownVal(ety))
-			val, itemDiags := e.Each.Value(chiCtx)
-			diags = append(diags, itemDiags...)
-			e.Item.clearValue(chiCtx) // clean up our temporary value
-			return cty.List(val.Type()), diags
-		case sourceTy.IsTupleType():
-			etys := sourceTy.TupleElementTypes()
-			resultTys := make([]cty.Type, 0, len(etys))
-			for _, ety := range etys {
-				e.Item.setValue(chiCtx, cty.UnknownVal(ety))
-				val, itemDiags := e.Each.Value(chiCtx)
-				diags = append(diags, itemDiags...)
-				e.Item.clearValue(chiCtx) // clean up our temporary value
-				resultTys = append(resultTys, val.Type())
-			}
-			return cty.Tuple(resultTys), diags
-		default:
-			// Should never happen because of our promotion to list above.
-			return cty.DynamicPseudoType, diags
-		}
-	}
-
-	if !sourceVal.IsKnown() {
-		// We can't produce a known result in this case, but we'll still
-		// indicate what the result type would be, allowing any downstream type
-		// checking to proceed.
-		ty, tyDiags := resultTy()
-		diags = append(diags, tyDiags...)
-		return cty.UnknownVal(ty), diags
-	}
-
-	vals := make([]cty.Value, 0, sourceVal.LengthInt())
-	it := sourceVal.ElementIterator()
-	if ctx == nil {
-		// we need a context to use our AnonSymbolExpr, so we'll just
-		// make an empty one here to use as a placeholder.
-		ctx = ctx.NewChild()
-	}
-	isKnown := true
-	for it.Next() {
-		_, sourceItem := it.Element()
-		e.Item.setValue(ctx, sourceItem)
-		newItem, itemDiags := e.Each.Value(ctx)
-		diags = append(diags, itemDiags...)
-		if itemDiags.HasErrors() {
-			isKnown = false
-		}
-		vals = append(vals, newItem)
-	}
-	e.Item.clearValue(ctx) // clean up our temporary value
-
-	if !isKnown {
-		// We'll ignore the resultTy diagnostics in this case since they
-		// will just be the same errors we saw while iterating above.
-		ty, _ := resultTy()
-		return cty.UnknownVal(ty), diags
-	}
-
-	switch {
-	case sourceTy.IsListType() || sourceTy.IsSetType():
-		if len(vals) == 0 {
-			ty, tyDiags := resultTy()
-			diags = append(diags, tyDiags...)
-			return cty.ListValEmpty(ty.ElementType()), diags
-		}
-		return cty.ListVal(vals), diags
-	default:
-		return cty.TupleVal(vals), diags
-	}
-}
-
-func (e *SplatExpr) walkChildNodes(w internalWalkFunc) {
-	w(e.Source)
-	w(e.Each)
-}
-
-func (e *SplatExpr) Range() hcl.Range {
-	return e.SrcRange
-}
-
-func (e *SplatExpr) StartRange() hcl.Range {
-	return e.MarkerRange
-}
-
-// AnonSymbolExpr is used as a placeholder for a value in an expression that
-// can be applied dynamically to any value at runtime.
-//
-// This is a rather odd, synthetic expression. It is used as part of the
-// representation of splat expressions as a placeholder for the current item
-// being visited in the splat evaluation.
-//
-// AnonSymbolExpr cannot be evaluated in isolation. If its Value is called
-// directly then cty.DynamicVal will be returned. Instead, it is evaluated
-// in terms of another node (i.e. a splat expression) which temporarily
-// assigns it a value.
-type AnonSymbolExpr struct {
-	SrcRange hcl.Range
-
-	// values and its associated lock are used to isolate concurrent
-	// evaluations of a symbol from one another. It is the calling application's
-	// responsibility to ensure that the same splat expression is not evaluated
-	// concurrently within the _same_ EvalContext, but it is fine and safe to
-	// do concurrent evaluations with distinct EvalContexts.
-	values     map[*hcl.EvalContext]cty.Value
-	valuesLock sync.RWMutex
-}
-
-func (e *AnonSymbolExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
-	if ctx == nil {
-		return cty.DynamicVal, nil
-	}
-
-	e.valuesLock.RLock()
-	defer e.valuesLock.RUnlock()
-
-	val, exists := e.values[ctx]
-	if !exists {
-		return cty.DynamicVal, nil
-	}
-	return val, nil
-}
-
-// setValue sets a temporary local value for the expression when evaluated
-// in the given context, which must be non-nil.
-func (e *AnonSymbolExpr) setValue(ctx *hcl.EvalContext, val cty.Value) {
-	e.valuesLock.Lock()
-	defer e.valuesLock.Unlock()
-
-	if e.values == nil {
-		e.values = make(map[*hcl.EvalContext]cty.Value)
-	}
-	if ctx == nil {
-		panic("can't setValue for a nil EvalContext")
-	}
-	e.values[ctx] = val
-}
-
-func (e *AnonSymbolExpr) clearValue(ctx *hcl.EvalContext) {
-	e.valuesLock.Lock()
-	defer e.valuesLock.Unlock()
-
-	if e.values == nil {
-		return
-	}
-	if ctx == nil {
-		panic("can't clearValue for a nil EvalContext")
-	}
-	delete(e.values, ctx)
-}
-
-func (e *AnonSymbolExpr) walkChildNodes(w internalWalkFunc) {
-	// AnonSymbolExpr is a leaf node in the tree
-}
-
-func (e *AnonSymbolExpr) Range() hcl.Range {
-	return e.SrcRange
-}
-
-func (e *AnonSymbolExpr) StartRange() hcl.Range {
-	return e.SrcRange
-}
diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/expression_ops.go b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/expression_ops.go
deleted file mode 100644
index 7f59f1a27..000000000
--- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/expression_ops.go
+++ /dev/null
@@ -1,268 +0,0 @@
-package hclsyntax
-
-import (
-	"fmt"
-
-	"github.com/hashicorp/hcl2/hcl"
-	"github.com/zclconf/go-cty/cty"
-	"github.com/zclconf/go-cty/cty/convert"
-	"github.com/zclconf/go-cty/cty/function"
-	"github.com/zclconf/go-cty/cty/function/stdlib"
-)
-
-type Operation struct {
-	Impl function.Function
-	Type cty.Type
-}
-
-var (
-	OpLogicalOr = &Operation{
-		Impl: stdlib.OrFunc,
-		Type: cty.Bool,
-	}
-	OpLogicalAnd = &Operation{
-		Impl: stdlib.AndFunc,
-		Type: cty.Bool,
-	}
-	OpLogicalNot = &Operation{
-		Impl: stdlib.NotFunc,
-		Type: cty.Bool,
-	}
-
-	OpEqual = &Operation{
-		Impl: stdlib.EqualFunc,
-		Type: cty.Bool,
-	}
-	OpNotEqual = &Operation{
-		Impl: stdlib.NotEqualFunc,
-		Type: cty.Bool,
-	}
-
-	OpGreaterThan = &Operation{
-		Impl: stdlib.GreaterThanFunc,
-		Type: cty.Bool,
-	}
-	OpGreaterThanOrEqual = &Operation{
-		Impl: stdlib.GreaterThanOrEqualToFunc,
-		Type: cty.Bool,
-	}
-	OpLessThan = &Operation{
-		Impl: stdlib.LessThanFunc,
-		Type: cty.Bool,
-	}
-	OpLessThanOrEqual = &Operation{
-		Impl: stdlib.LessThanOrEqualToFunc,
-		Type: cty.Bool,
-	}
-
-	OpAdd = &Operation{
-		Impl: stdlib.AddFunc,
-		Type: cty.Number,
-	}
-	OpSubtract = &Operation{
-		Impl: stdlib.SubtractFunc,
-		Type: cty.Number,
-	}
-	OpMultiply = &Operation{
-		Impl: stdlib.MultiplyFunc,
-		Type: cty.Number,
-	}
-	OpDivide = &Operation{
-		Impl: stdlib.DivideFunc,
-		Type: cty.Number,
-	}
-	OpModulo = &Operation{
-		Impl: stdlib.ModuloFunc,
-		Type: cty.Number,
-	}
-	OpNegate = &Operation{
-		Impl: stdlib.NegateFunc,
-		Type: cty.Number,
-	}
-)
-
-var binaryOps []map[TokenType]*Operation
-
-func init() {
-	// This operation table maps from the operator's token type
-	// to the AST operation type. All expressions produced from
-	// binary operators are BinaryOp nodes.
-	//
-	// Binary operator groups are listed in order of precedence, with
-	// the *lowest* precedence first. Operators within the same group
-	// have left-to-right associativity.
-	binaryOps = []map[TokenType]*Operation{
-		{
-			TokenOr: OpLogicalOr,
-		},
-		{
-			TokenAnd: OpLogicalAnd,
-		},
-		{
-			TokenEqualOp:  OpEqual,
-			TokenNotEqual: OpNotEqual,
-		},
-		{
-			TokenGreaterThan:   OpGreaterThan,
-			TokenGreaterThanEq: OpGreaterThanOrEqual,
-			TokenLessThan:      OpLessThan,
-			TokenLessThanEq:    OpLessThanOrEqual,
-		},
-		{
-			TokenPlus:  OpAdd,
-			TokenMinus: OpSubtract,
-		},
-		{
-			TokenStar:    OpMultiply,
-			TokenSlash:   OpDivide,
-			TokenPercent: OpModulo,
-		},
-	}
-}
-
-type BinaryOpExpr struct {
-	LHS Expression
-	Op  *Operation
-	RHS Expression
-
-	SrcRange hcl.Range
-}
-
-func (e *BinaryOpExpr) walkChildNodes(w internalWalkFunc) {
-	w(e.LHS)
-	w(e.RHS)
-}
-
-func (e *BinaryOpExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
-	impl := e.Op.Impl // assumed to be a function taking exactly two arguments
-	params := impl.Params()
-	lhsParam := params[0]
-	rhsParam := params[1]
-
-	var diags hcl.Diagnostics
-
-	givenLHSVal, lhsDiags := e.LHS.Value(ctx)
-	givenRHSVal, rhsDiags := e.RHS.Value(ctx)
-	diags = append(diags, lhsDiags...)
-	diags = append(diags, rhsDiags...)
-
-	lhsVal, err := convert.Convert(givenLHSVal, lhsParam.Type)
-	if err != nil {
-		diags = append(diags, &hcl.Diagnostic{
-			Severity:    hcl.DiagError,
-			Summary:     "Invalid operand",
-			Detail:      fmt.Sprintf("Unsuitable value for left operand: %s.", err),
-			Subject:     e.LHS.Range().Ptr(),
-			Context:     &e.SrcRange,
-			Expression:  e.LHS,
-			EvalContext: ctx,
-		})
-	}
-	rhsVal, err := convert.Convert(givenRHSVal, rhsParam.Type)
-	if err != nil {
-		diags = append(diags, &hcl.Diagnostic{
-			Severity:    hcl.DiagError,
-			Summary:     "Invalid operand",
-			Detail:      fmt.Sprintf("Unsuitable value for right operand: %s.", err),
-			Subject:     e.RHS.Range().Ptr(),
-			Context:     &e.SrcRange,
-			Expression:  e.RHS,
-			EvalContext: ctx,
-		})
-	}
-
-	if diags.HasErrors() {
-		// Don't actually try the call if we have errors already, since
-		// this will probably just produce a confusing duplicative diagnostic.
-		return cty.UnknownVal(e.Op.Type), diags
-	}
-
-	args := []cty.Value{lhsVal, rhsVal}
-	result, err := impl.Call(args)
-	if err != nil {
-		diags = append(diags, &hcl.Diagnostic{
-			// FIXME: This diagnostic is useless.
-			Severity:    hcl.DiagError,
-			Summary:     "Operation failed",
-			Detail:      fmt.Sprintf("Error during operation: %s.", err),
-			Subject:     &e.SrcRange,
-			Expression:  e,
-			EvalContext: ctx,
-		})
-		return cty.UnknownVal(e.Op.Type), diags
-	}
-
-	return result, diags
-}
-
-func (e *BinaryOpExpr) Range() hcl.Range {
-	return e.SrcRange
-}
-
-func (e *BinaryOpExpr) StartRange() hcl.Range {
-	return e.LHS.StartRange()
-}
-
-type UnaryOpExpr struct {
-	Op  *Operation
-	Val Expression
-
-	SrcRange    hcl.Range
-	SymbolRange hcl.Range
-}
-
-func (e *UnaryOpExpr) walkChildNodes(w internalWalkFunc) {
-	w(e.Val)
-}
-
-func (e *UnaryOpExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
-	impl := e.Op.Impl // assumed to be a function taking exactly one argument
-	params := impl.Params()
-	param := params[0]
-
-	givenVal, diags := e.Val.Value(ctx)
-
-	val, err := convert.Convert(givenVal, param.Type)
-	if err != nil {
-		diags = append(diags, &hcl.Diagnostic{
-			Severity:    hcl.DiagError,
-			Summary:     "Invalid operand",
-			Detail:      fmt.Sprintf("Unsuitable value for unary operand: %s.", err),
-			Subject:     e.Val.Range().Ptr(),
-			Context:     &e.SrcRange,
-			Expression:  e.Val,
-			EvalContext: ctx,
-		})
-	}
-
-	if diags.HasErrors() {
-		// Don't actually try the call if we have errors already, since
-		// this will probably just produce a confusing duplicative diagnostic.
-		return cty.UnknownVal(e.Op.Type), diags
-	}
-
-	args := []cty.Value{val}
-	result, err := impl.Call(args)
-	if err != nil {
-		diags = append(diags, &hcl.Diagnostic{
-			// FIXME: This diagnostic is useless.
-			Severity:    hcl.DiagError,
-			Summary:     "Operation failed",
-			Detail:      fmt.Sprintf("Error during operation: %s.", err),
-			Subject:     &e.SrcRange,
-			Expression:  e,
-			EvalContext: ctx,
-		})
-		return cty.UnknownVal(e.Op.Type), diags
-	}
-
-	return result, diags
-}
-
-func (e *UnaryOpExpr) Range() hcl.Range {
-	return e.SrcRange
-}
-
-func (e *UnaryOpExpr) StartRange() hcl.Range {
-	return e.SymbolRange
-}
diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/expression_template.go b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/expression_template.go
deleted file mode 100644
index ca3dae189..000000000
--- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/expression_template.go
+++ /dev/null
@@ -1,220 +0,0 @@
-package hclsyntax
-
-import (
-	"bytes"
-	"fmt"
-
-	"github.com/hashicorp/hcl2/hcl"
-	"github.com/zclconf/go-cty/cty"
-	"github.com/zclconf/go-cty/cty/convert"
-)
-
-type TemplateExpr struct {
-	Parts []Expression
-
-	SrcRange hcl.Range
-}
-
-func (e *TemplateExpr) walkChildNodes(w internalWalkFunc) {
-	for _, part := range e.Parts {
-		w(part)
-	}
-}
-
-func (e *TemplateExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
-	buf := &bytes.Buffer{}
-	var diags hcl.Diagnostics
-	isKnown := true
-
-	for _, part := range e.Parts {
-		partVal, partDiags := part.Value(ctx)
-		diags = append(diags, partDiags...)
-
-		if partVal.IsNull() {
-			diags = append(diags, &hcl.Diagnostic{
-				Severity: hcl.DiagError,
-				Summary:  "Invalid template interpolation value",
-				Detail: fmt.Sprintf(
-					"The expression result is null. Cannot include a null value in a string template.",
-				),
-				Subject:     part.Range().Ptr(),
-				Context:     &e.SrcRange,
-				Expression:  part,
-				EvalContext: ctx,
-			})
-			continue
-		}
-
-		if !partVal.IsKnown() {
-			// If any part is unknown then the result as a whole must be
-			// unknown too. We'll keep on processing the rest of the parts
-			// anyway, because we want to still emit any diagnostics resulting
-			// from evaluating those.
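The unknown-value propagation described in the comment above can be demonstrated from calling code. A hedged sketch, assuming this vendored package; the template text and the `name` variable are invented:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	expr, diags := hclsyntax.ParseTemplate([]byte("Hello, ${name}!"), "demo.tmpl", hcl.Pos{Line: 1, Column: 1})
	if diags.HasErrors() {
		panic(diags.Error())
	}

	ctx := &hcl.EvalContext{Variables: map[string]cty.Value{
		"name": cty.StringVal("world"),
	}}
	val, _ := expr.Value(ctx)
	fmt.Println(val.AsString()) // Hello, world!

	// A single unknown part makes the whole result unknown, but the other
	// parts are still evaluated so their diagnostics aren't lost.
	ctx.Variables["name"] = cty.UnknownVal(cty.String)
	val, _ = expr.Value(ctx)
	fmt.Println(val.IsKnown()) // false
}
```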
-			isKnown = false
-			continue
-		}
-
-		strVal, err := convert.Convert(partVal, cty.String)
-		if err != nil {
-			diags = append(diags, &hcl.Diagnostic{
-				Severity: hcl.DiagError,
-				Summary:  "Invalid template interpolation value",
-				Detail: fmt.Sprintf(
-					"Cannot include the given value in a string template: %s.",
-					err.Error(),
-				),
-				Subject:     part.Range().Ptr(),
-				Context:     &e.SrcRange,
-				Expression:  part,
-				EvalContext: ctx,
-			})
-			continue
-		}
-
-		buf.WriteString(strVal.AsString())
-	}
-
-	if !isKnown {
-		return cty.UnknownVal(cty.String), diags
-	}
-
-	return cty.StringVal(buf.String()), diags
-}
-
-func (e *TemplateExpr) Range() hcl.Range {
-	return e.SrcRange
-}
-
-func (e *TemplateExpr) StartRange() hcl.Range {
-	return e.Parts[0].StartRange()
-}
-
-// IsStringLiteral returns true if and only if the template consists only of
-// single string literal, as would be created for a simple quoted string like
-// "foo".
-//
-// If this function returns true, then calling Value on the same expression
-// with a nil EvalContext will return the literal value.
-//
-// Note that "${"foo"}", "${1}", etc aren't considered literal values for the
-// purposes of this method, because the intent of this method is to identify
-// situations where the user seems to be explicitly intending literal string
-// interpretation, not situations that result in literals as a technicality
-// of the template expression unwrapping behavior.
-func (e *TemplateExpr) IsStringLiteral() bool {
-	if len(e.Parts) != 1 {
-		return false
-	}
-	_, ok := e.Parts[0].(*LiteralValueExpr)
-	return ok
-}
-
-// TemplateJoinExpr is used to convert tuples of strings produced by template
-// constructs (i.e. for loops) into flat strings, by converting the values
-// to strings and joining them. This AST node is not used directly; it's
-// produced as part of the AST of a "for" loop in a template.
-type TemplateJoinExpr struct {
-	Tuple Expression
-}
-
-func (e *TemplateJoinExpr) walkChildNodes(w internalWalkFunc) {
-	w(e.Tuple)
-}
-
-func (e *TemplateJoinExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
-	tuple, diags := e.Tuple.Value(ctx)
-
-	if tuple.IsNull() {
-		// This indicates a bug in the code that constructed the AST.
-		panic("TemplateJoinExpr got null tuple")
-	}
-	if tuple.Type() == cty.DynamicPseudoType {
-		return cty.UnknownVal(cty.String), diags
-	}
-	if !tuple.Type().IsTupleType() {
-		// This indicates a bug in the code that constructed the AST.
-		panic("TemplateJoinExpr got non-tuple tuple")
-	}
-	if !tuple.IsKnown() {
-		return cty.UnknownVal(cty.String), diags
-	}
-
-	buf := &bytes.Buffer{}
-	it := tuple.ElementIterator()
-	for it.Next() {
-		_, val := it.Element()
-
-		if val.IsNull() {
-			diags = append(diags, &hcl.Diagnostic{
-				Severity: hcl.DiagError,
-				Summary:  "Invalid template interpolation value",
-				Detail: fmt.Sprintf(
-					"An iteration result is null. Cannot include a null value in a string template.",
-				),
-				Subject:     e.Range().Ptr(),
-				Expression:  e,
-				EvalContext: ctx,
-			})
-			continue
-		}
-		if val.Type() == cty.DynamicPseudoType {
-			return cty.UnknownVal(cty.String), diags
-		}
-		strVal, err := convert.Convert(val, cty.String)
-		if err != nil {
-			diags = append(diags, &hcl.Diagnostic{
-				Severity: hcl.DiagError,
-				Summary:  "Invalid template interpolation value",
-				Detail: fmt.Sprintf(
-					"Cannot include one of the interpolation results into the string template: %s.",
-					err.Error(),
-				),
-				Subject:     e.Range().Ptr(),
-				Expression:  e,
-				EvalContext: ctx,
-			})
-			continue
-		}
-		if !val.IsKnown() {
-			return cty.UnknownVal(cty.String), diags
-		}
-
-		buf.WriteString(strVal.AsString())
-	}
-
-	return cty.StringVal(buf.String()), diags
-}
-
-func (e *TemplateJoinExpr) Range() hcl.Range {
-	return e.Tuple.Range()
-}
-
-func (e *TemplateJoinExpr) StartRange() hcl.Range {
-	return e.Tuple.StartRange()
-}
-
-// TemplateWrapExpr is used instead of a TemplateExpr when a template
-// consists _only_ of a single interpolation sequence. In that case, the
-// template's result is the single interpolation's result, verbatim with
-// no type conversions.
-type TemplateWrapExpr struct {
-	Wrapped Expression
-
-	SrcRange hcl.Range
-}
-
-func (e *TemplateWrapExpr) walkChildNodes(w internalWalkFunc) {
-	w(e.Wrapped)
-}
-
-func (e *TemplateWrapExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
-	return e.Wrapped.Value(ctx)
-}
-
-func (e *TemplateWrapExpr) Range() hcl.Range {
-	return e.SrcRange
-}
-
-func (e *TemplateWrapExpr) StartRange() hcl.Range {
-	return e.SrcRange
-}
diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/expression_vars.go b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/expression_vars.go
deleted file mode 100644
index 9177092ce..000000000
--- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/expression_vars.go
+++ /dev/null
@@ -1,76 +0,0 @@
-package hclsyntax
-
-// Generated by expression_vars_gen.go. DO NOT EDIT.
-// Run 'go generate' on this package to update the set of functions here.
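The generated methods in the file above all delegate to the package-level `Variables` function, which walks the AST to collect scope traversals. A short sketch of the caller-side effect; the expression text is invented:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
)

func main() {
	expr, diags := hclsyntax.ParseExpression([]byte("a + b[0] + upper(c)"), "demo.hcl", hcl.Pos{Line: 1, Column: 1})
	if diags.HasErrors() {
		panic(diags.Error())
	}
	// Variables is implemented identically for every expression type,
	// via the generated wrappers.
	for _, t := range expr.Variables() {
		fmt.Println(t.RootName()) // a, b, c
	}
}
```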
- -import ( - "github.com/hashicorp/hcl2/hcl" -) - -func (e *AnonSymbolExpr) Variables() []hcl.Traversal { - return Variables(e) -} - -func (e *BinaryOpExpr) Variables() []hcl.Traversal { - return Variables(e) -} - -func (e *ConditionalExpr) Variables() []hcl.Traversal { - return Variables(e) -} - -func (e *ForExpr) Variables() []hcl.Traversal { - return Variables(e) -} - -func (e *FunctionCallExpr) Variables() []hcl.Traversal { - return Variables(e) -} - -func (e *IndexExpr) Variables() []hcl.Traversal { - return Variables(e) -} - -func (e *LiteralValueExpr) Variables() []hcl.Traversal { - return Variables(e) -} - -func (e *ObjectConsExpr) Variables() []hcl.Traversal { - return Variables(e) -} - -func (e *ObjectConsKeyExpr) Variables() []hcl.Traversal { - return Variables(e) -} - -func (e *RelativeTraversalExpr) Variables() []hcl.Traversal { - return Variables(e) -} - -func (e *ScopeTraversalExpr) Variables() []hcl.Traversal { - return Variables(e) -} - -func (e *SplatExpr) Variables() []hcl.Traversal { - return Variables(e) -} - -func (e *TemplateExpr) Variables() []hcl.Traversal { - return Variables(e) -} - -func (e *TemplateJoinExpr) Variables() []hcl.Traversal { - return Variables(e) -} - -func (e *TemplateWrapExpr) Variables() []hcl.Traversal { - return Variables(e) -} - -func (e *TupleConsExpr) Variables() []hcl.Traversal { - return Variables(e) -} - -func (e *UnaryOpExpr) Variables() []hcl.Traversal { - return Variables(e) -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/expression_vars_gen.go b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/expression_vars_gen.go deleted file mode 100644 index 88f198009..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/expression_vars_gen.go +++ /dev/null @@ -1,99 +0,0 @@ -// This is a 'go generate'-oriented program for producing the "Variables" -// method on every Expression implementation found within this package. -// All expressions share the same implementation for this method, which -// just wraps the package-level function "Variables" and uses an AST walk -// to do its work. - -// +build ignore - -package main - -import ( - "fmt" - "go/ast" - "go/parser" - "go/token" - "os" - "sort" -) - -func main() { - fs := token.NewFileSet() - pkgs, err := parser.ParseDir(fs, ".", nil, 0) - if err != nil { - fmt.Fprintf(os.Stderr, "error while parsing: %s\n", err) - os.Exit(1) - } - pkg := pkgs["hclsyntax"] - - // Walk all the files and collect the receivers of any "Value" methods - // that look like they are trying to implement Expression. - var recvs []string - for _, f := range pkg.Files { - for _, decl := range f.Decls { - fd, ok := decl.(*ast.FuncDecl) - if !ok { - continue - } - if fd.Name.Name != "Value" { - continue - } - results := fd.Type.Results.List - if len(results) != 2 { - continue - } - valResult := fd.Type.Results.List[0].Type.(*ast.SelectorExpr).X.(*ast.Ident) - diagsResult := fd.Type.Results.List[1].Type.(*ast.SelectorExpr).X.(*ast.Ident) - - if valResult.Name != "cty" && diagsResult.Name != "hcl" { - continue - } - - // If we have a method called Value and it returns something in - // "cty" followed by something in "hcl" then that's specific enough - // for now, even though this is not 100% exact as a correct - // implementation of Value. 
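The detection heuristic described in the comment above can be reproduced in miniature with the standard library alone. A hedged sketch; the `FakeExpr` type and the inline source are invented, and a faithful version would also check the result types as the generator does:

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

const src = `package demo

type FakeExpr struct{}

func (e *FakeExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) { return cty.NilVal, nil }
`

func main() {
	fs := token.NewFileSet()
	f, err := parser.ParseFile(fs, "demo.go", src, 0)
	if err != nil {
		panic(err)
	}
	// A method named Value marks its receiver as a candidate Expression;
	// no type checking is needed, only the parsed syntax tree.
	for _, decl := range f.Decls {
		fd, ok := decl.(*ast.FuncDecl)
		if !ok || fd.Recv == nil || fd.Name.Name != "Value" {
			continue
		}
		if star, ok := fd.Recv.List[0].Type.(*ast.StarExpr); ok {
			fmt.Printf("*%s\n", star.X.(*ast.Ident).Name) // *FakeExpr
		}
	}
}
```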
-
-			recvTy := fd.Recv.List[0].Type
-
-			switch rtt := recvTy.(type) {
-			case *ast.StarExpr:
-				name := rtt.X.(*ast.Ident).Name
-				recvs = append(recvs, fmt.Sprintf("*%s", name))
-			default:
-				fmt.Fprintf(os.Stderr, "don't know what to do with a %T receiver\n", recvTy)
-			}
-
-		}
-	}
-
-	sort.Strings(recvs)
-
-	of, err := os.OpenFile("expression_vars.go", os.O_WRONLY|os.O_CREATE|os.O_TRUNC, os.ModePerm)
-	if err != nil {
-		fmt.Fprintf(os.Stderr, "failed to open output file: %s\n", err)
-		os.Exit(1)
-	}
-
-	fmt.Fprint(of, outputPreamble)
-	for _, recv := range recvs {
-		fmt.Fprintf(of, outputMethodFmt, recv)
-	}
-	fmt.Fprint(of, "\n")
-
-}
-
-const outputPreamble = `package hclsyntax
-
-// Generated by expression_vars_gen.go. DO NOT EDIT.
-// Run 'go generate' on this package to update the set of functions here.
-
-import (
-	"github.com/hashicorp/hcl2/hcl"
-)`
-
-const outputMethodFmt = `
-
-func (e %s) Variables() []hcl.Traversal {
-	return Variables(e)
-}`
diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/file.go b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/file.go
deleted file mode 100644
index 490c02556..000000000
--- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/file.go
+++ /dev/null
@@ -1,20 +0,0 @@
-package hclsyntax
-
-import (
-	"github.com/hashicorp/hcl2/hcl"
-)
-
-// File is the top-level object resulting from parsing a configuration file.
-type File struct {
-	Body  *Body
-	Bytes []byte
-}
-
-func (f *File) AsHCLFile() *hcl.File {
-	return &hcl.File{
-		Body:  f.Body,
-		Bytes: f.Bytes,
-
-		// TODO: The Nav object, once we have an implementation of it
-	}
-}
diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/generate.go b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/generate.go
deleted file mode 100644
index 841656a6a..000000000
--- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/generate.go
+++ /dev/null
@@ -1,9 +0,0 @@
-package hclsyntax
-
-//go:generate go run expression_vars_gen.go
-//go:generate ruby unicode2ragel.rb --url=http://www.unicode.org/Public/9.0.0/ucd/DerivedCoreProperties.txt -m UnicodeDerived -p ID_Start,ID_Continue -o unicode_derived.rl
-//go:generate ragel -Z scan_tokens.rl
-//go:generate gofmt -w scan_tokens.go
-//go:generate ragel -Z scan_string_lit.rl
-//go:generate gofmt -w scan_string_lit.go
-//go:generate stringer -type TokenType -output token_type_string.go
diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/keywords.go b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/keywords.go
deleted file mode 100644
index eef8b9626..000000000
--- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/keywords.go
+++ /dev/null
@@ -1,21 +0,0 @@
-package hclsyntax
-
-import (
-	"bytes"
-)
-
-type Keyword []byte
-
-var forKeyword = Keyword([]byte{'f', 'o', 'r'})
-var inKeyword = Keyword([]byte{'i', 'n'})
-var ifKeyword = Keyword([]byte{'i', 'f'})
-var elseKeyword = Keyword([]byte{'e', 'l', 's', 'e'})
-var endifKeyword = Keyword([]byte{'e', 'n', 'd', 'i', 'f'})
-var endforKeyword = Keyword([]byte{'e', 'n', 'd', 'f', 'o', 'r'})
-
-func (kw Keyword) TokenMatches(token Token) bool {
-	if token.Type != TokenIdent {
-		return false
-	}
-	return bytes.Equal([]byte(kw), token.Bytes)
-}
diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/navigation.go b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/navigation.go
deleted file mode 100644
index c8c97f37c..000000000
--- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/navigation.go
+++ /dev/null
@@ -1,59 +0,0 @@
-package hclsyntax
-
-import (
-	"bytes"
-	"fmt"
-
-	"github.com/hashicorp/hcl2/hcl"
-)
-
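The `navigation` helper defined next renders the header of whichever top-level block contains a given byte offset, for use by editor integrations. A rough caller-side equivalent, assuming `ParseConfig` from this package; the configuration text is invented:

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
)

func main() {
	src := []byte("resource \"aws_instance\" \"web\" {\n  count = 2\n}\n")
	f, diags := hclsyntax.ParseConfig(src, "main.tf", hcl.Pos{Line: 1, Column: 1})
	if diags.HasErrors() {
		panic(diags.Error())
	}

	offset := bytes.IndexByte(src, '=') // somewhere inside the block body

	// Same idea as navigation.ContextString: find the containing top-level
	// block and render its header, quoting each label.
	for _, block := range f.Body.(*hclsyntax.Body).Blocks {
		if block.Range().ContainsOffset(offset) {
			buf := &bytes.Buffer{}
			buf.WriteString(block.Type)
			for _, label := range block.Labels {
				fmt.Fprintf(buf, " %q", label)
			}
			fmt.Println(buf.String()) // resource "aws_instance" "web"
		}
	}
}
```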
-type navigation struct { - root *Body -} - -// Implementation of hcled.ContextString -func (n navigation) ContextString(offset int) string { - // We will walk our top-level blocks until we find one that contains - // the given offset, and then construct a representation of the header - // of the block. - - var block *Block - for _, candidate := range n.root.Blocks { - if candidate.Range().ContainsOffset(offset) { - block = candidate - break - } - } - - if block == nil { - return "" - } - - if len(block.Labels) == 0 { - // Easy case! - return block.Type - } - - buf := &bytes.Buffer{} - buf.WriteString(block.Type) - for _, label := range block.Labels { - fmt.Fprintf(buf, " %q", label) - } - return buf.String() -} - -func (n navigation) ContextDefRange(offset int) hcl.Range { - var block *Block - for _, candidate := range n.root.Blocks { - if candidate.Range().ContainsOffset(offset) { - block = candidate - break - } - } - - if block == nil { - return hcl.Range{} - } - - return block.DefRange() -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/node.go b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/node.go deleted file mode 100644 index 75812e63d..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/node.go +++ /dev/null @@ -1,22 +0,0 @@ -package hclsyntax - -import ( - "github.com/hashicorp/hcl2/hcl" -) - -// Node is the abstract type that every AST node implements. -// -// This is a closed interface, so it cannot be implemented from outside of -// this package. -type Node interface { - // This is the mechanism by which the public-facing walk functions - // are implemented. Implementations should call the given function - // for each child node and then replace that node with its return value. - // The return value might just be the same node, for non-transforming - // walks. - walkChildNodes(w internalWalkFunc) - - Range() hcl.Range -} - -type internalWalkFunc func(Node) diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/parser.go b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/parser.go deleted file mode 100644 index 772ebae2b..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/parser.go +++ /dev/null @@ -1,2044 +0,0 @@ -package hclsyntax - -import ( - "bytes" - "fmt" - "strconv" - "unicode/utf8" - - "github.com/apparentlymart/go-textseg/textseg" - "github.com/hashicorp/hcl2/hcl" - "github.com/zclconf/go-cty/cty" -) - -type parser struct { - *peeker - - // set to true if any recovery is attempted. The parser can use this - // to attempt to reduce error noise by suppressing "bad token" errors - // in recovery mode, assuming that the recovery heuristics have failed - // in this case and left the peeker in a wrong place. - recovery bool -} - -func (p *parser) ParseBody(end TokenType) (*Body, hcl.Diagnostics) { - attrs := Attributes{} - blocks := Blocks{} - var diags hcl.Diagnostics - - startRange := p.PrevRange() - var endRange hcl.Range - -Token: - for { - next := p.Peek() - if next.Type == end { - endRange = p.NextRange() - p.Read() - break Token - } - - switch next.Type { - case TokenNewline: - p.Read() - continue - case TokenIdent: - item, itemDiags := p.ParseBodyItem() - diags = append(diags, itemDiags...) - switch titem := item.(type) { - case *Block: - blocks = append(blocks, titem) - case *Attribute: - if existing, exists := attrs[titem.Name]; exists { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Attribute redefined", - Detail: fmt.Sprintf( - "The argument %q was already set at %s. 
Each argument may be set only once.", - titem.Name, existing.NameRange.String(), - ), - Subject: &titem.NameRange, - }) - } else { - attrs[titem.Name] = titem - } - default: - // This should never happen for valid input, but may if a - // syntax error was detected in ParseBodyItem that prevented - // it from even producing a partially-broken item. In that - // case, it would've left at least one error in the diagnostics - // slice we already dealt with above. - // - // We'll assume ParseBodyItem attempted recovery to leave - // us in a reasonable position to try parsing the next item. - continue - } - default: - bad := p.Read() - if !p.recovery { - if bad.Type == TokenOQuote { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid argument name", - Detail: "Argument names must not be quoted.", - Subject: &bad.Range, - }) - } else { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Argument or block definition required", - Detail: "An argument or block definition is required here.", - Subject: &bad.Range, - }) - } - } - endRange = p.PrevRange() // arbitrary, but somewhere inside the body means better diagnostics - - p.recover(end) // attempt to recover to the token after the end of this body - break Token - } - } - - return &Body{ - Attributes: attrs, - Blocks: blocks, - - SrcRange: hcl.RangeBetween(startRange, endRange), - EndRange: hcl.Range{ - Filename: endRange.Filename, - Start: endRange.End, - End: endRange.End, - }, - }, diags -} - -func (p *parser) ParseBodyItem() (Node, hcl.Diagnostics) { - ident := p.Read() - if ident.Type != TokenIdent { - p.recoverAfterBodyItem() - return nil, hcl.Diagnostics{ - { - Severity: hcl.DiagError, - Summary: "Argument or block definition required", - Detail: "An argument or block definition is required here.", - Subject: &ident.Range, - }, - } - } - - next := p.Peek() - - switch next.Type { - case TokenEqual: - return p.finishParsingBodyAttribute(ident, false) - case TokenOQuote, TokenOBrace, TokenIdent: - return p.finishParsingBodyBlock(ident) - default: - p.recoverAfterBodyItem() - return nil, hcl.Diagnostics{ - { - Severity: hcl.DiagError, - Summary: "Argument or block definition required", - Detail: "An argument or block definition is required here. To set an argument, use the equals sign \"=\" to introduce the argument value.", - Subject: &ident.Range, - }, - } - } - - return nil, nil -} - -// parseSingleAttrBody is a weird variant of ParseBody that deals with the -// body of a nested block containing only one attribute value all on a single -// line, like foo { bar = baz } . It expects to find a single attribute item -// immediately followed by the end token type with no intervening newlines. -func (p *parser) parseSingleAttrBody(end TokenType) (*Body, hcl.Diagnostics) { - ident := p.Read() - if ident.Type != TokenIdent { - p.recoverAfterBodyItem() - return nil, hcl.Diagnostics{ - { - Severity: hcl.DiagError, - Summary: "Argument or block definition required", - Detail: "An argument or block definition is required here.", - Subject: &ident.Range, - }, - } - } - - var attr *Attribute - var diags hcl.Diagnostics - - next := p.Peek() - - switch next.Type { - case TokenEqual: - node, attrDiags := p.finishParsingBodyAttribute(ident, true) - diags = append(diags, attrDiags...) 
-		attr = node.(*Attribute)
-	case TokenOQuote, TokenOBrace, TokenIdent:
-		p.recoverAfterBodyItem()
-		return nil, hcl.Diagnostics{
-			{
-				Severity: hcl.DiagError,
-				Summary:  "Argument definition required",
-				Detail:   fmt.Sprintf("A single-line block definition can contain only a single argument. If you meant to define argument %q, use an equals sign to assign it a value. To define a nested block, place it on a line of its own within its parent block.", ident.Bytes),
-				Subject:  hcl.RangeBetween(ident.Range, next.Range).Ptr(),
-			},
-		}
-	default:
-		p.recoverAfterBodyItem()
-		return nil, hcl.Diagnostics{
-			{
-				Severity: hcl.DiagError,
-				Summary:  "Argument or block definition required",
-				Detail:   "An argument or block definition is required here. To set an argument, use the equals sign \"=\" to introduce the argument value.",
-				Subject:  &ident.Range,
-			},
-		}
-	}
-
-	return &Body{
-		Attributes: Attributes{
-			string(ident.Bytes): attr,
-		},
-
-		SrcRange: attr.SrcRange,
-		EndRange: hcl.Range{
-			Filename: attr.SrcRange.Filename,
-			Start:    attr.SrcRange.End,
-			End:      attr.SrcRange.End,
-		},
-	}, diags
-
-}
-
-func (p *parser) finishParsingBodyAttribute(ident Token, singleLine bool) (Node, hcl.Diagnostics) {
-	eqTok := p.Read() // eat equals token
-	if eqTok.Type != TokenEqual {
-		// should never happen if caller behaves
-		panic("finishParsingBodyAttribute called with next not equals")
-	}
-
-	var endRange hcl.Range
-
-	expr, diags := p.ParseExpression()
-	if p.recovery && diags.HasErrors() {
-		// recovery within expressions tends to be tricky, so we've probably
-		// landed somewhere weird. We'll try to reset to the start of a body
-		// item so parsing can continue.
-		endRange = p.PrevRange()
-		p.recoverAfterBodyItem()
-	} else {
-		endRange = p.PrevRange()
-		if !singleLine {
-			end := p.Peek()
-			if end.Type != TokenNewline && end.Type != TokenEOF {
-				if !p.recovery {
-					summary := "Missing newline after argument"
-					detail := "An argument definition must end with a newline."
-
-					if end.Type == TokenComma {
-						summary = "Unexpected comma after argument"
-						detail = "Argument definitions must be separated by newlines, not commas. " + detail
-					}
-
-					diags = append(diags, &hcl.Diagnostic{
-						Severity: hcl.DiagError,
-						Summary:  summary,
-						Detail:   detail,
-						Subject:  &end.Range,
-						Context:  hcl.RangeBetween(ident.Range, end.Range).Ptr(),
-					})
-				}
-				endRange = p.PrevRange()
-				p.recoverAfterBodyItem()
-			} else {
-				endRange = p.PrevRange()
-				p.Read() // eat newline
-			}
-		}
-	}
-
-	return &Attribute{
-		Name: string(ident.Bytes),
-		Expr: expr,
-
-		SrcRange:    hcl.RangeBetween(ident.Range, endRange),
-		NameRange:   ident.Range,
-		EqualsRange: eqTok.Range,
-	}, diags
-}
-
-func (p *parser) finishParsingBodyBlock(ident Token) (Node, hcl.Diagnostics) {
-	var blockType = string(ident.Bytes)
-	var diags hcl.Diagnostics
-	var labels []string
-	var labelRanges []hcl.Range
-
-	var oBrace Token
-
-Token:
-	for {
-		tok := p.Peek()
-
-		switch tok.Type {
-
-		case TokenOBrace:
-			oBrace = p.Read()
-			break Token
-
-		case TokenOQuote:
-			label, labelRange, labelDiags := p.parseQuotedStringLiteral()
-			diags = append(diags, labelDiags...)
-			labels = append(labels, label)
-			labelRanges = append(labelRanges, labelRange)
-			// parseQuotedStringLiteral recovers up to the closing quote
-			// if it encounters problems, so we can continue looking for
-			// more labels and eventually the block body even.
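The label loop above accepts both quoted strings and, as the next case shows, naked identifiers. A small sketch of the two styles side by side; the block and label names are invented:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
)

func main() {
	src := []byte("dynamic tags {\n}\n\nresource \"aws_instance\" \"web\" {\n}\n")
	f, diags := hclsyntax.ParseConfig(src, "demo.hcl", hcl.Pos{Line: 1, Column: 1})
	if diags.HasErrors() {
		panic(diags.Error())
	}
	for _, block := range f.Body.(*hclsyntax.Body).Blocks {
		fmt.Println(block.Type, block.Labels)
		// dynamic [tags]
		// resource [aws_instance web]
	}
}
```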
- - case TokenIdent: - tok = p.Read() // eat token - label, labelRange := string(tok.Bytes), tok.Range - labels = append(labels, label) - labelRanges = append(labelRanges, labelRange) - - default: - switch tok.Type { - case TokenEqual: - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid block definition", - Detail: "The equals sign \"=\" indicates an argument definition, and must not be used when defining a block.", - Subject: &tok.Range, - Context: hcl.RangeBetween(ident.Range, tok.Range).Ptr(), - }) - case TokenNewline: - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid block definition", - Detail: "A block definition must have block content delimited by \"{\" and \"}\", starting on the same line as the block header.", - Subject: &tok.Range, - Context: hcl.RangeBetween(ident.Range, tok.Range).Ptr(), - }) - default: - if !p.recovery { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid block definition", - Detail: "Either a quoted string block label or an opening brace (\"{\") is expected here.", - Subject: &tok.Range, - Context: hcl.RangeBetween(ident.Range, tok.Range).Ptr(), - }) - } - } - - p.recoverAfterBodyItem() - - return &Block{ - Type: blockType, - Labels: labels, - Body: &Body{ - SrcRange: ident.Range, - EndRange: ident.Range, - }, - - TypeRange: ident.Range, - LabelRanges: labelRanges, - OpenBraceRange: ident.Range, // placeholder - CloseBraceRange: ident.Range, // placeholder - }, diags - } - } - - // Once we fall out here, the peeker is pointed just after our opening - // brace, so we can begin our nested body parsing. - var body *Body - var bodyDiags hcl.Diagnostics - switch p.Peek().Type { - case TokenNewline, TokenEOF, TokenCBrace: - body, bodyDiags = p.ParseBody(TokenCBrace) - default: - // Special one-line, single-attribute block parsing mode. - body, bodyDiags = p.parseSingleAttrBody(TokenCBrace) - switch p.Peek().Type { - case TokenCBrace: - p.Read() // the happy path - just consume the closing brace - case TokenComma: - // User seems to be trying to use the object-constructor - // comma-separated style, which isn't permitted for blocks. - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid single-argument block definition", - Detail: "Single-line block syntax can include only one argument definition. To define multiple arguments, use the multi-line block syntax with one argument definition per line.", - Subject: p.Peek().Range.Ptr(), - }) - p.recover(TokenCBrace) - case TokenNewline: - // We don't allow weird mixtures of single and multi-line syntax. - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid single-argument block definition", - Detail: "An argument definition on the same line as its containing block creates a single-line block definition, which must also be closed on the same line. Place the block's closing brace immediately after the argument definition.", - Subject: p.Peek().Range.Ptr(), - }) - p.recover(TokenCBrace) - default: - // Some other weird thing is going on. Since we can't guess a likely - // user intent for this one, we'll skip it if we're already in - // recovery mode. 
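The single-line parsing mode handled above can be exercised directly: one argument on the block's own line is accepted, while a comma-separated second argument triggers the diagnostic defined just before this point. A hedged sketch with invented inputs:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
)

func main() {
	// One argument on the same line parses via the single-attribute mode.
	_, diags := hclsyntax.ParseConfig([]byte("foo { bar = \"baz\" }\n"), "a.hcl", hcl.Pos{Line: 1, Column: 1})
	fmt.Println(diags.HasErrors()) // false

	// A comma-separated second argument is rejected.
	_, diags = hclsyntax.ParseConfig([]byte("foo { bar = 1, baz = 2 }\n"), "b.hcl", hcl.Pos{Line: 1, Column: 1})
	for _, d := range diags {
		fmt.Println(d.Summary) // Invalid single-argument block definition
	}
}
```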
- if !p.recovery { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid single-argument block definition", - Detail: "A single-line block definition must end with a closing brace immediately after its single argument definition.", - Subject: p.Peek().Range.Ptr(), - }) - } - p.recover(TokenCBrace) - } - } - diags = append(diags, bodyDiags...) - cBraceRange := p.PrevRange() - - eol := p.Peek() - if eol.Type == TokenNewline || eol.Type == TokenEOF { - p.Read() // eat newline - } else { - if !p.recovery { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Missing newline after block definition", - Detail: "A block definition must end with a newline.", - Subject: &eol.Range, - Context: hcl.RangeBetween(ident.Range, eol.Range).Ptr(), - }) - } - p.recoverAfterBodyItem() - } - - // We must never produce a nil body, since the caller may attempt to - // do analysis of a partial result when there's an error, so we'll - // insert a placeholder if we otherwise failed to produce a valid - // body due to one of the syntax error paths above. - if body == nil && diags.HasErrors() { - body = &Body{ - SrcRange: hcl.RangeBetween(oBrace.Range, cBraceRange), - EndRange: cBraceRange, - } - } - - return &Block{ - Type: blockType, - Labels: labels, - Body: body, - - TypeRange: ident.Range, - LabelRanges: labelRanges, - OpenBraceRange: oBrace.Range, - CloseBraceRange: cBraceRange, - }, diags -} - -func (p *parser) ParseExpression() (Expression, hcl.Diagnostics) { - return p.parseTernaryConditional() -} - -func (p *parser) parseTernaryConditional() (Expression, hcl.Diagnostics) { - // The ternary conditional operator (.. ? .. : ..) behaves somewhat - // like a binary operator except that the "symbol" is itself - // an expression enclosed in two punctuation characters. - // The middle expression is parsed as if the ? and : symbols - // were parentheses. The "rhs" (the "false expression") is then - // treated right-associatively so it behaves similarly to the - // middle in terms of precedence. - - startRange := p.NextRange() - var condExpr, trueExpr, falseExpr Expression - var diags hcl.Diagnostics - - condExpr, condDiags := p.parseBinaryOps(binaryOps) - diags = append(diags, condDiags...) - if p.recovery && condDiags.HasErrors() { - return condExpr, diags - } - - questionMark := p.Peek() - if questionMark.Type != TokenQuestion { - return condExpr, diags - } - - p.Read() // eat question mark - - trueExpr, trueDiags := p.ParseExpression() - diags = append(diags, trueDiags...) - if p.recovery && trueDiags.HasErrors() { - return condExpr, diags - } - - colon := p.Peek() - if colon.Type != TokenColon { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Missing false expression in conditional", - Detail: "The conditional operator (...?...:...) requires a false expression, delimited by a colon.", - Subject: &colon.Range, - Context: hcl.RangeBetween(startRange, colon.Range).Ptr(), - }) - return condExpr, diags - } - - p.Read() // eat colon - - falseExpr, falseDiags := p.ParseExpression() - diags = append(diags, falseDiags...) 
- if p.recovery && falseDiags.HasErrors() { - return condExpr, diags - } - - return &ConditionalExpr{ - Condition: condExpr, - TrueResult: trueExpr, - FalseResult: falseExpr, - - SrcRange: hcl.RangeBetween(startRange, falseExpr.Range()), - }, diags -} - -// parseBinaryOps calls itself recursively to work through all of the -// operator precedence groups, and then eventually calls parseExpressionTerm -// for each operand. -func (p *parser) parseBinaryOps(ops []map[TokenType]*Operation) (Expression, hcl.Diagnostics) { - if len(ops) == 0 { - // We've run out of operators, so now we'll just try to parse a term. - return p.parseExpressionWithTraversals() - } - - thisLevel := ops[0] - remaining := ops[1:] - - var lhs, rhs Expression - var operation *Operation - var diags hcl.Diagnostics - - // Parse a term that might be the first operand of a binary - // operation or it might just be a standalone term. - // We won't know until we've parsed it and can look ahead - // to see if there's an operator token for this level. - lhs, lhsDiags := p.parseBinaryOps(remaining) - diags = append(diags, lhsDiags...) - if p.recovery && lhsDiags.HasErrors() { - return lhs, diags - } - - // We'll keep eating up operators until we run out, so that operators - // with the same precedence will combine in a left-associative manner: - // a+b+c => (a+b)+c, not a+(b+c) - // - // Should we later want to have right-associative operators, a way - // to achieve that would be to call back up to ParseExpression here - // instead of iteratively parsing only the remaining operators. - for { - next := p.Peek() - var newOp *Operation - var ok bool - if newOp, ok = thisLevel[next.Type]; !ok { - break - } - - // Are we extending an expression started on the previous iteration? - if operation != nil { - lhs = &BinaryOpExpr{ - LHS: lhs, - Op: operation, - RHS: rhs, - - SrcRange: hcl.RangeBetween(lhs.Range(), rhs.Range()), - } - } - - operation = newOp - p.Read() // eat operator token - var rhsDiags hcl.Diagnostics - rhs, rhsDiags = p.parseBinaryOps(remaining) - diags = append(diags, rhsDiags...) - if p.recovery && rhsDiags.HasErrors() { - return lhs, diags - } - } - - if operation == nil { - return lhs, diags - } - - return &BinaryOpExpr{ - LHS: lhs, - Op: operation, - RHS: rhs, - - SrcRange: hcl.RangeBetween(lhs.Range(), rhs.Range()), - }, diags -} - -func (p *parser) parseExpressionWithTraversals() (Expression, hcl.Diagnostics) { - term, diags := p.parseExpressionTerm() - ret, moreDiags := p.parseExpressionTraversals(term) - diags = append(diags, moreDiags...) - return ret, diags -} - -func (p *parser) parseExpressionTraversals(from Expression) (Expression, hcl.Diagnostics) { - var diags hcl.Diagnostics - ret := from - -Traversal: - for { - next := p.Peek() - - switch next.Type { - case TokenDot: - // Attribute access or splat - dot := p.Read() - attrTok := p.Peek() - - switch attrTok.Type { - case TokenIdent: - attrTok = p.Read() // eat token - name := string(attrTok.Bytes) - rng := hcl.RangeBetween(dot.Range, attrTok.Range) - step := hcl.TraverseAttr{ - Name: name, - SrcRange: rng, - } - - ret = makeRelativeTraversal(ret, step, rng) - - case TokenNumberLit: - // This is a weird form we inherited from HIL, allowing numbers - // to be used as attributes as a weird way of writing [n]. - // This was never actually a first-class thing in HIL, but - // HIL tolerated sequences like .0. in its variable names and - // calling applications like Terraform exploited that to - // introduce indexing syntax where none existed. 
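The legacy form described in the comment above still evaluates like ordinary index syntax. A sketch, assuming this vendored package; the `foo` variable is invented:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	// foo.0.bar is HIL-era sugar for foo[0].bar.
	expr, diags := hclsyntax.ParseExpression([]byte("foo.0.bar"), "demo.hcl", hcl.Pos{Line: 1, Column: 1})
	if diags.HasErrors() {
		panic(diags.Error())
	}
	ctx := &hcl.EvalContext{Variables: map[string]cty.Value{
		"foo": cty.TupleVal([]cty.Value{
			cty.ObjectVal(map[string]cty.Value{"bar": cty.StringVal("ok")}),
		}),
	}}
	val, _ := expr.Value(ctx)
	fmt.Println(val.AsString()) // ok
}
```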
- numTok := p.Read() // eat token - attrTok = numTok - - // This syntax is ambiguous if multiple indices are used in - // succession, like foo.0.1.baz: that actually parses as - // a fractional number 0.1. Since we're only supporting this - // syntax for compatibility with legacy Terraform - // configurations, and Terraform does not tend to have lists - // of lists, we'll choose to reject that here with a helpful - // error message, rather than failing later because the index - // isn't a whole number. - if dotIdx := bytes.IndexByte(numTok.Bytes, '.'); dotIdx >= 0 { - first := numTok.Bytes[:dotIdx] - second := numTok.Bytes[dotIdx+1:] - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid legacy index syntax", - Detail: fmt.Sprintf("When using the legacy index syntax, chaining two indexes together is not permitted. Use the proper index syntax instead, like [%s][%s].", first, second), - Subject: &attrTok.Range, - }) - rng := hcl.RangeBetween(dot.Range, numTok.Range) - step := hcl.TraverseIndex{ - Key: cty.DynamicVal, - SrcRange: rng, - } - ret = makeRelativeTraversal(ret, step, rng) - break - } - - numVal, numDiags := p.numberLitValue(numTok) - diags = append(diags, numDiags...) - - rng := hcl.RangeBetween(dot.Range, numTok.Range) - step := hcl.TraverseIndex{ - Key: numVal, - SrcRange: rng, - } - - ret = makeRelativeTraversal(ret, step, rng) - - case TokenStar: - // "Attribute-only" splat expression. - // (This is a kinda weird construct inherited from HIL, which - // behaves a bit like a [*] splat except that it is only able - // to do attribute traversals into each of its elements, - // whereas foo[*] can support _any_ traversal. - marker := p.Read() // eat star - trav := make(hcl.Traversal, 0, 1) - var firstRange, lastRange hcl.Range - firstRange = p.NextRange() - for p.Peek().Type == TokenDot { - dot := p.Read() - - if p.Peek().Type == TokenNumberLit { - // Continuing the "weird stuff inherited from HIL" - // theme, we also allow numbers as attribute names - // inside splats and interpret them as indexing - // into a list, for expressions like: - // foo.bar.*.baz.0.foo - numTok := p.Read() - - // Weird special case if the user writes something - // like foo.bar.*.baz.0.0.foo, where 0.0 parses - // as a number. - if dotIdx := bytes.IndexByte(numTok.Bytes, '.'); dotIdx >= 0 { - first := numTok.Bytes[:dotIdx] - second := numTok.Bytes[dotIdx+1:] - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid legacy index syntax", - Detail: fmt.Sprintf("When using the legacy index syntax, chaining two indexes together is not permitted. Use the proper index syntax with a full splat expression [*] instead, like [%s][%s].", first, second), - Subject: &attrTok.Range, - }) - trav = append(trav, hcl.TraverseIndex{ - Key: cty.DynamicVal, - SrcRange: hcl.RangeBetween(dot.Range, numTok.Range), - }) - lastRange = numTok.Range - continue - } - - numVal, numDiags := p.numberLitValue(numTok) - diags = append(diags, numDiags...) 
- trav = append(trav, hcl.TraverseIndex{ - Key: numVal, - SrcRange: hcl.RangeBetween(dot.Range, numTok.Range), - }) - lastRange = numTok.Range - continue - } - - if p.Peek().Type != TokenIdent { - if !p.recovery { - if p.Peek().Type == TokenStar { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Nested splat expression not allowed", - Detail: "A splat expression (*) cannot be used inside another attribute-only splat expression.", - Subject: p.Peek().Range.Ptr(), - }) - } else { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid attribute name", - Detail: "An attribute name is required after a dot.", - Subject: &attrTok.Range, - }) - } - } - p.setRecovery() - continue Traversal - } - - attrTok := p.Read() - trav = append(trav, hcl.TraverseAttr{ - Name: string(attrTok.Bytes), - SrcRange: hcl.RangeBetween(dot.Range, attrTok.Range), - }) - lastRange = attrTok.Range - } - - itemExpr := &AnonSymbolExpr{ - SrcRange: hcl.RangeBetween(dot.Range, marker.Range), - } - var travExpr Expression - if len(trav) == 0 { - travExpr = itemExpr - } else { - travExpr = &RelativeTraversalExpr{ - Source: itemExpr, - Traversal: trav, - SrcRange: hcl.RangeBetween(firstRange, lastRange), - } - } - - ret = &SplatExpr{ - Source: ret, - Each: travExpr, - Item: itemExpr, - - SrcRange: hcl.RangeBetween(dot.Range, lastRange), - MarkerRange: hcl.RangeBetween(dot.Range, marker.Range), - } - - default: - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid attribute name", - Detail: "An attribute name is required after a dot.", - Subject: &attrTok.Range, - }) - // This leaves the peeker in a bad place, so following items - // will probably be misparsed until we hit something that - // allows us to re-sync. - // - // We will probably need to do something better here eventually - // in order to support autocomplete triggered by typing a - // period. - p.setRecovery() - } - - case TokenOBrack: - // Indexing of a collection. - // This may or may not be a hcl.Traverser, depending on whether - // the key value is something constant. - - open := p.Read() - switch p.Peek().Type { - case TokenStar: - // This is a full splat expression, like foo[*], which consumes - // the rest of the traversal steps after it using a recursive - // call to this function. - p.Read() // consume star - close := p.Read() - if close.Type != TokenCBrack && !p.recovery { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Missing close bracket on splat index", - Detail: "The star for a full splat operator must be immediately followed by a closing bracket (\"]\").", - Subject: &close.Range, - }) - close = p.recover(TokenCBrack) - } - // Splat expressions use a special "anonymous symbol" as a - // placeholder in an expression to be evaluated once for each - // item in the source expression. - itemExpr := &AnonSymbolExpr{ - SrcRange: hcl.RangeBetween(open.Range, close.Range), - } - // Now we'll recursively call this same function to eat any - // remaining traversal steps against the anonymous symbol. - travExpr, nestedDiags := p.parseExpressionTraversals(itemExpr) - diags = append(diags, nestedDiags...) 
- - ret = &SplatExpr{ - Source: ret, - Each: travExpr, - Item: itemExpr, - - SrcRange: hcl.RangeBetween(open.Range, travExpr.Range()), - MarkerRange: hcl.RangeBetween(open.Range, close.Range), - } - - default: - - var close Token - p.PushIncludeNewlines(false) // arbitrary newlines allowed in brackets - keyExpr, keyDiags := p.ParseExpression() - diags = append(diags, keyDiags...) - if p.recovery && keyDiags.HasErrors() { - close = p.recover(TokenCBrack) - } else { - close = p.Read() - if close.Type != TokenCBrack && !p.recovery { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Missing close bracket on index", - Detail: "The index operator must end with a closing bracket (\"]\").", - Subject: &close.Range, - }) - close = p.recover(TokenCBrack) - } - } - p.PopIncludeNewlines() - - if lit, isLit := keyExpr.(*LiteralValueExpr); isLit { - litKey, _ := lit.Value(nil) - rng := hcl.RangeBetween(open.Range, close.Range) - step := hcl.TraverseIndex{ - Key: litKey, - SrcRange: rng, - } - ret = makeRelativeTraversal(ret, step, rng) - } else if tmpl, isTmpl := keyExpr.(*TemplateExpr); isTmpl && tmpl.IsStringLiteral() { - litKey, _ := tmpl.Value(nil) - rng := hcl.RangeBetween(open.Range, close.Range) - step := hcl.TraverseIndex{ - Key: litKey, - SrcRange: rng, - } - ret = makeRelativeTraversal(ret, step, rng) - } else { - rng := hcl.RangeBetween(open.Range, close.Range) - ret = &IndexExpr{ - Collection: ret, - Key: keyExpr, - - SrcRange: rng, - OpenRange: open.Range, - } - } - } - - default: - break Traversal - } - } - - return ret, diags -} - -// makeRelativeTraversal takes an expression and a traverser and returns -// a traversal expression that combines the two. If the given expression -// is already a traversal, it is extended in place (mutating it) and -// returned. If it isn't, a new RelativeTraversalExpr is created and returned. -func makeRelativeTraversal(expr Expression, next hcl.Traverser, rng hcl.Range) Expression { - switch texpr := expr.(type) { - case *ScopeTraversalExpr: - texpr.Traversal = append(texpr.Traversal, next) - texpr.SrcRange = hcl.RangeBetween(texpr.SrcRange, rng) - return texpr - case *RelativeTraversalExpr: - texpr.Traversal = append(texpr.Traversal, next) - texpr.SrcRange = hcl.RangeBetween(texpr.SrcRange, rng) - return texpr - default: - return &RelativeTraversalExpr{ - Source: expr, - Traversal: hcl.Traversal{next}, - SrcRange: rng, - } - } -} - -func (p *parser) parseExpressionTerm() (Expression, hcl.Diagnostics) { - start := p.Peek() - - switch start.Type { - case TokenOParen: - p.Read() // eat open paren - - p.PushIncludeNewlines(false) - - expr, diags := p.ParseExpression() - if diags.HasErrors() { - // attempt to place the peeker after our closing paren - // before we return, so that the next parser has some - // chance of finding a valid expression. 
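As the index-handling code above shows, a constant bracket key folds into the traversal while a dynamic key produces a real `IndexExpr`; the difference is visible in the variables each form reports. A sketch with invented names:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
)

func roots(src string) []string {
	expr, diags := hclsyntax.ParseExpression([]byte(src), "demo.hcl", hcl.Pos{Line: 1, Column: 1})
	if diags.HasErrors() {
		panic(diags.Error())
	}
	var names []string
	for _, t := range expr.Variables() {
		names = append(names, t.RootName())
	}
	return names
}

func main() {
	fmt.Println(roots(`foo["bar"]`)) // [foo]     constant key folds into the traversal
	fmt.Println(roots(`foo[bar]`))   // [foo bar] dynamic key becomes an IndexExpr
}
```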
- p.recover(TokenCParen) - p.PopIncludeNewlines() - return expr, diags - } - - close := p.Peek() - if close.Type != TokenCParen { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Unbalanced parentheses", - Detail: "Expected a closing parenthesis to terminate the expression.", - Subject: &close.Range, - Context: hcl.RangeBetween(start.Range, close.Range).Ptr(), - }) - p.setRecovery() - } - - p.Read() // eat closing paren - p.PopIncludeNewlines() - - return expr, diags - - case TokenNumberLit: - tok := p.Read() // eat number token - - numVal, diags := p.numberLitValue(tok) - return &LiteralValueExpr{ - Val: numVal, - SrcRange: tok.Range, - }, diags - - case TokenIdent: - tok := p.Read() // eat identifier token - - if p.Peek().Type == TokenOParen { - return p.finishParsingFunctionCall(tok) - } - - name := string(tok.Bytes) - switch name { - case "true": - return &LiteralValueExpr{ - Val: cty.True, - SrcRange: tok.Range, - }, nil - case "false": - return &LiteralValueExpr{ - Val: cty.False, - SrcRange: tok.Range, - }, nil - case "null": - return &LiteralValueExpr{ - Val: cty.NullVal(cty.DynamicPseudoType), - SrcRange: tok.Range, - }, nil - default: - return &ScopeTraversalExpr{ - Traversal: hcl.Traversal{ - hcl.TraverseRoot{ - Name: name, - SrcRange: tok.Range, - }, - }, - SrcRange: tok.Range, - }, nil - } - - case TokenOQuote, TokenOHeredoc: - open := p.Read() // eat opening marker - closer := p.oppositeBracket(open.Type) - exprs, passthru, _, diags := p.parseTemplateInner(closer, tokenOpensFlushHeredoc(open)) - - closeRange := p.PrevRange() - - if passthru { - if len(exprs) != 1 { - panic("passthru set with len(exprs) != 1") - } - return &TemplateWrapExpr{ - Wrapped: exprs[0], - SrcRange: hcl.RangeBetween(open.Range, closeRange), - }, diags - } - - return &TemplateExpr{ - Parts: exprs, - SrcRange: hcl.RangeBetween(open.Range, closeRange), - }, diags - - case TokenMinus: - tok := p.Read() // eat minus token - - // Important to use parseExpressionWithTraversals rather than parseExpression - // here, otherwise we can capture a following binary expression into - // our negation. - // e.g. -46+5 should parse as (-46)+5, not -(46+5) - operand, diags := p.parseExpressionWithTraversals() - return &UnaryOpExpr{ - Op: OpNegate, - Val: operand, - - SrcRange: hcl.RangeBetween(tok.Range, operand.Range()), - SymbolRange: tok.Range, - }, diags - - case TokenBang: - tok := p.Read() // eat bang token - - // Important to use parseExpressionWithTraversals rather than parseExpression - // here, otherwise we can capture a following binary expression into - // our negation. - operand, diags := p.parseExpressionWithTraversals() - return &UnaryOpExpr{ - Op: OpLogicalNot, - Val: operand, - - SrcRange: hcl.RangeBetween(tok.Range, operand.Range()), - SymbolRange: tok.Range, - }, diags - - case TokenOBrack: - return p.parseTupleCons() - - case TokenOBrace: - return p.parseObjectCons() - - default: - var diags hcl.Diagnostics - if !p.recovery { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid expression", - Detail: "Expected the start of an expression, but found an invalid expression token.", - Subject: &start.Range, - }) - } - p.setRecovery() - - // Return a placeholder so that the AST is still structurally sound - // even in the presence of parse errors. 
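The keyword literals handled above, and the note that negation must not capture a following binary expression, are both easy to confirm. A hedged sketch with invented inputs; evaluation needs no EvalContext because everything is literal:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
	"github.com/zclconf/go-cty/cty"
)

func eval(src string) cty.Value {
	expr, diags := hclsyntax.ParseExpression([]byte(src), "demo.hcl", hcl.Pos{Line: 1, Column: 1})
	if diags.HasErrors() {
		panic(diags.Error())
	}
	val, valDiags := expr.Value(nil)
	if valDiags.HasErrors() {
		panic(valDiags.Error())
	}
	return val
}

func main() {
	fmt.Printf("%#v\n", eval("-46+5"))  // cty.NumberIntVal(-41): parsed as (-46)+5
	fmt.Printf("%#v\n", eval("!true"))  // cty.False
	fmt.Printf("%#v\n", eval("1 == 1")) // cty.True
}
```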
- return &LiteralValueExpr{
- Val: cty.DynamicVal,
- SrcRange: start.Range,
- }, diags
- }
-}
-
-func (p *parser) numberLitValue(tok Token) (cty.Value, hcl.Diagnostics) {
- // cty.ParseNumberVal has the same behavior as converting a
- // string to a number, ensuring we always interpret decimal numbers in
- // the same way.
- numVal, err := cty.ParseNumberVal(string(tok.Bytes))
- if err != nil {
- ret := cty.UnknownVal(cty.Number)
- return ret, hcl.Diagnostics{
- {
- Severity: hcl.DiagError,
- Summary: "Invalid number literal",
- // FIXME: not a very good error message, but convert only
- // gives us "a number is required", so not much help either.
- Detail: "Failed to recognize the value of this number literal.",
- Subject: &tok.Range,
- },
- }
- }
- return numVal, nil
-}
-
-// finishParsingFunctionCall parses a function call assuming that the function
-// name was already read, and so the peeker should be pointing at the opening
-// parenthesis after the name.
-func (p *parser) finishParsingFunctionCall(name Token) (Expression, hcl.Diagnostics) {
- openTok := p.Read()
- if openTok.Type != TokenOParen {
- // should never happen if callers behave
- panic("finishParsingFunctionCall called with non-parenthesis as next token")
- }
-
- var args []Expression
- var diags hcl.Diagnostics
- var expandFinal bool
- var closeTok Token
-
- // Arbitrary newlines are allowed inside the function call parentheses.
- p.PushIncludeNewlines(false)
-
-Token:
- for {
- tok := p.Peek()
-
- if tok.Type == TokenCParen {
- closeTok = p.Read() // eat closing paren
- break Token
- }
-
- arg, argDiags := p.ParseExpression()
- args = append(args, arg)
- diags = append(diags, argDiags...)
- if p.recovery && argDiags.HasErrors() {
- // if there was a parse error in the argument then we've
- // probably been left in a weird place in the token stream,
- // so we'll bail out with a partial argument list.
- p.recover(TokenCParen)
- break Token
- }
-
- sep := p.Read()
- if sep.Type == TokenCParen {
- closeTok = sep
- break Token
- }
-
- if sep.Type == TokenEllipsis {
- expandFinal = true
-
- if p.Peek().Type != TokenCParen {
- if !p.recovery {
- diags = append(diags, &hcl.Diagnostic{
- Severity: hcl.DiagError,
- Summary: "Missing closing parenthesis",
- Detail: "An expanded function argument (with ...) must be immediately followed by closing parentheses.",
- Subject: &sep.Range,
- Context: hcl.RangeBetween(name.Range, sep.Range).Ptr(),
- })
- }
- closeTok = p.recover(TokenCParen)
- } else {
- closeTok = p.Read() // eat closing paren
- }
- break Token
- }
-
- if sep.Type != TokenComma {
- diags = append(diags, &hcl.Diagnostic{
- Severity: hcl.DiagError,
- Summary: "Missing argument separator",
- Detail: "A comma is required to separate each function argument from the next.",
- Subject: &sep.Range,
- Context: hcl.RangeBetween(name.Range, sep.Range).Ptr(),
- })
- closeTok = p.recover(TokenCParen)
- break Token
- }
-
- if p.Peek().Type == TokenCParen {
- // A trailing comma after the last argument gets us in here.
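
The argument loop in finishParsingFunctionCall above allows a final `...` marker, recorded as ExpandFinal on the resulting call so that the last argument is expanded into individual arguments at evaluation time. A sketch under the assumption of a made-up call `min(nums...)`, using the public API declared later in this diff:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
)

func main() {
	src := []byte(`min(nums...)`)
	expr, _ := hclsyntax.ParseExpression(src, "example.hcl", hcl.Pos{Line: 1, Column: 1})
	if call, ok := expr.(*hclsyntax.FunctionCallExpr); ok {
		// ExpandFinal records that the single argument should be expanded
		// into individual arguments when the call is evaluated.
		fmt.Println(call.Name, len(call.Args), call.ExpandFinal) // min 1 true
	}
}
```
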
- closeTok = p.Read() // eat closing paren - break Token - } - - } - - p.PopIncludeNewlines() - - return &FunctionCallExpr{ - Name: string(name.Bytes), - Args: args, - - ExpandFinal: expandFinal, - - NameRange: name.Range, - OpenParenRange: openTok.Range, - CloseParenRange: closeTok.Range, - }, diags -} - -func (p *parser) parseTupleCons() (Expression, hcl.Diagnostics) { - open := p.Read() - if open.Type != TokenOBrack { - // Should never happen if callers are behaving - panic("parseTupleCons called without peeker pointing to open bracket") - } - - p.PushIncludeNewlines(false) - defer p.PopIncludeNewlines() - - if forKeyword.TokenMatches(p.Peek()) { - return p.finishParsingForExpr(open) - } - - var close Token - - var diags hcl.Diagnostics - var exprs []Expression - - for { - next := p.Peek() - if next.Type == TokenCBrack { - close = p.Read() // eat closer - break - } - - expr, exprDiags := p.ParseExpression() - exprs = append(exprs, expr) - diags = append(diags, exprDiags...) - - if p.recovery && exprDiags.HasErrors() { - // If expression parsing failed then we are probably in a strange - // place in the token stream, so we'll bail out and try to reset - // to after our closing bracket to allow parsing to continue. - close = p.recover(TokenCBrack) - break - } - - next = p.Peek() - if next.Type == TokenCBrack { - close = p.Read() // eat closer - break - } - - if next.Type != TokenComma { - if !p.recovery { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Missing item separator", - Detail: "Expected a comma to mark the beginning of the next item.", - Subject: &next.Range, - Context: hcl.RangeBetween(open.Range, next.Range).Ptr(), - }) - } - close = p.recover(TokenCBrack) - break - } - - p.Read() // eat comma - - } - - return &TupleConsExpr{ - Exprs: exprs, - - SrcRange: hcl.RangeBetween(open.Range, close.Range), - OpenRange: open.Range, - }, diags -} - -func (p *parser) parseObjectCons() (Expression, hcl.Diagnostics) { - open := p.Read() - if open.Type != TokenOBrace { - // Should never happen if callers are behaving - panic("parseObjectCons called without peeker pointing to open brace") - } - - // We must temporarily stop looking at newlines here while we check for - // a "for" keyword, since for expressions are _not_ newline-sensitive, - // even though object constructors are. - p.PushIncludeNewlines(false) - isFor := forKeyword.TokenMatches(p.Peek()) - p.PopIncludeNewlines() - if isFor { - return p.finishParsingForExpr(open) - } - - p.PushIncludeNewlines(true) - defer p.PopIncludeNewlines() - - var close Token - - var diags hcl.Diagnostics - var items []ObjectConsItem - - for { - next := p.Peek() - if next.Type == TokenNewline { - p.Read() // eat newline - continue - } - - if next.Type == TokenCBrace { - close = p.Read() // eat closer - break - } - - var key Expression - var keyDiags hcl.Diagnostics - key, keyDiags = p.ParseExpression() - diags = append(diags, keyDiags...) - - if p.recovery && keyDiags.HasErrors() { - // If expression parsing failed then we are probably in a strange - // place in the token stream, so we'll bail out and try to reset - // to after our closing brace to allow parsing to continue. - close = p.recover(TokenCBrace) - break - } - - // We wrap up the key expression in a special wrapper that deals - // with our special case that naked identifiers as object keys - // are interpreted as literal strings. 
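
parseTupleCons, shown in full above, tolerates a trailing comma before the closing bracket. A quick sketch with an illustrative input:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
)

func main() {
	// The trailing comma is consumed by the loop above without a diagnostic.
	expr, diags := hclsyntax.ParseExpression([]byte(`[1, 2, 3,]`), "example.hcl", hcl.Pos{Line: 1, Column: 1})
	if tuple, ok := expr.(*hclsyntax.TupleConsExpr); ok {
		fmt.Println(diags.HasErrors(), len(tuple.Exprs)) // false 3
	}
}
```
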
- key = &ObjectConsKeyExpr{Wrapped: key}
-
- next = p.Peek()
- if next.Type != TokenEqual && next.Type != TokenColon {
- if !p.recovery {
- switch next.Type {
- case TokenNewline, TokenComma:
- diags = append(diags, &hcl.Diagnostic{
- Severity: hcl.DiagError,
- Summary: "Missing attribute value",
- Detail: "Expected an attribute value, introduced by an equals sign (\"=\").",
- Subject: &next.Range,
- Context: hcl.RangeBetween(open.Range, next.Range).Ptr(),
- })
- case TokenIdent:
- // Although this might just be a plain old missing equals
- // sign before a reference, one way to get here is to try
- // to write an attribute name containing a period followed
- // by a digit, which was valid in HCL1, like this:
- // foo1.2_bar = "baz"
- // We can't know exactly what the user intended here, but
- // we'll augment our message with an extra hint in this case
- // in case it is helpful.
- diags = append(diags, &hcl.Diagnostic{
- Severity: hcl.DiagError,
- Summary: "Missing key/value separator",
- Detail: "Expected an equals sign (\"=\") to mark the beginning of the attribute value. If you intended to give an attribute name containing periods or spaces, write the name in quotes to create a string literal.",
- Subject: &next.Range,
- Context: hcl.RangeBetween(open.Range, next.Range).Ptr(),
- })
- default:
- diags = append(diags, &hcl.Diagnostic{
- Severity: hcl.DiagError,
- Summary: "Missing key/value separator",
- Detail: "Expected an equals sign (\"=\") to mark the beginning of the attribute value.",
- Subject: &next.Range,
- Context: hcl.RangeBetween(open.Range, next.Range).Ptr(),
- })
- }
- }
- close = p.recover(TokenCBrace)
- break
- }
-
- p.Read() // eat equals sign or colon
-
- value, valueDiags := p.ParseExpression()
- diags = append(diags, valueDiags...)
-
- if p.recovery && valueDiags.HasErrors() {
- // If expression parsing failed then we are probably in a strange
- // place in the token stream, so we'll bail out and try to reset
- // to after our closing brace to allow parsing to continue.
- close = p.recover(TokenCBrace) - break - } - - items = append(items, ObjectConsItem{ - KeyExpr: key, - ValueExpr: value, - }) - - next = p.Peek() - if next.Type == TokenCBrace { - close = p.Read() // eat closer - break - } - - if next.Type != TokenComma && next.Type != TokenNewline { - if !p.recovery { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Missing attribute separator", - Detail: "Expected a newline or comma to mark the beginning of the next attribute.", - Subject: &next.Range, - Context: hcl.RangeBetween(open.Range, next.Range).Ptr(), - }) - } - close = p.recover(TokenCBrace) - break - } - - p.Read() // eat comma or newline - - } - - return &ObjectConsExpr{ - Items: items, - - SrcRange: hcl.RangeBetween(open.Range, close.Range), - OpenRange: open.Range, - }, diags -} - -func (p *parser) finishParsingForExpr(open Token) (Expression, hcl.Diagnostics) { - p.PushIncludeNewlines(false) - defer p.PopIncludeNewlines() - introducer := p.Read() - if !forKeyword.TokenMatches(introducer) { - // Should never happen if callers are behaving - panic("finishParsingForExpr called without peeker pointing to 'for' identifier") - } - - var makeObj bool - var closeType TokenType - switch open.Type { - case TokenOBrace: - makeObj = true - closeType = TokenCBrace - case TokenOBrack: - makeObj = false // making a tuple - closeType = TokenCBrack - default: - // Should never happen if callers are behaving - panic("finishParsingForExpr called with invalid open token") - } - - var diags hcl.Diagnostics - var keyName, valName string - - if p.Peek().Type != TokenIdent { - if !p.recovery { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid 'for' expression", - Detail: "For expression requires variable name after 'for'.", - Subject: p.Peek().Range.Ptr(), - Context: hcl.RangeBetween(open.Range, p.Peek().Range).Ptr(), - }) - } - close := p.recover(closeType) - return &LiteralValueExpr{ - Val: cty.DynamicVal, - SrcRange: hcl.RangeBetween(open.Range, close.Range), - }, diags - } - - valName = string(p.Read().Bytes) - - if p.Peek().Type == TokenComma { - // What we just read was actually the key, then. - keyName = valName - p.Read() // eat comma - - if p.Peek().Type != TokenIdent { - if !p.recovery { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid 'for' expression", - Detail: "For expression requires value variable name after comma.", - Subject: p.Peek().Range.Ptr(), - Context: hcl.RangeBetween(open.Range, p.Peek().Range).Ptr(), - }) - } - close := p.recover(closeType) - return &LiteralValueExpr{ - Val: cty.DynamicVal, - SrcRange: hcl.RangeBetween(open.Range, close.Range), - }, diags - } - - valName = string(p.Read().Bytes) - } - - if !inKeyword.TokenMatches(p.Peek()) { - if !p.recovery { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid 'for' expression", - Detail: "For expression requires the 'in' keyword after its name declarations.", - Subject: p.Peek().Range.Ptr(), - Context: hcl.RangeBetween(open.Range, p.Peek().Range).Ptr(), - }) - } - close := p.recover(closeType) - return &LiteralValueExpr{ - Val: cty.DynamicVal, - SrcRange: hcl.RangeBetween(open.Range, close.Range), - }, diags - } - p.Read() // eat 'in' keyword - - collExpr, collDiags := p.ParseExpression() - diags = append(diags, collDiags...) 
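
As parsed above, finishParsingForExpr accepts either a single value name or a `key, value` pair before the `in` keyword. A sketch of the AST that an object-style `for` expression ultimately produces; the variable names and the `var.tags` reference are illustrative and nothing is evaluated:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
)

func main() {
	src := []byte(`{ for k, v in var.tags : k => v if v != "" }`)
	expr, _ := hclsyntax.ParseExpression(src, "example.hcl", hcl.Pos{Line: 1, Column: 1})
	if f, ok := expr.(*hclsyntax.ForExpr); ok {
		// KeyVar and ValVar hold the declared names; CondExpr holds the
		// optional "if" filter parsed further below.
		fmt.Println(f.KeyVar, f.ValVar, f.CondExpr != nil) // k v true
	}
}
```
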
- if p.recovery && collDiags.HasErrors() { - close := p.recover(closeType) - return &LiteralValueExpr{ - Val: cty.DynamicVal, - SrcRange: hcl.RangeBetween(open.Range, close.Range), - }, diags - } - - if p.Peek().Type != TokenColon { - if !p.recovery { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid 'for' expression", - Detail: "For expression requires a colon after the collection expression.", - Subject: p.Peek().Range.Ptr(), - Context: hcl.RangeBetween(open.Range, p.Peek().Range).Ptr(), - }) - } - close := p.recover(closeType) - return &LiteralValueExpr{ - Val: cty.DynamicVal, - SrcRange: hcl.RangeBetween(open.Range, close.Range), - }, diags - } - p.Read() // eat colon - - var keyExpr, valExpr Expression - var keyDiags, valDiags hcl.Diagnostics - valExpr, valDiags = p.ParseExpression() - if p.Peek().Type == TokenFatArrow { - // What we just parsed was actually keyExpr - p.Read() // eat the fat arrow - keyExpr, keyDiags = valExpr, valDiags - - valExpr, valDiags = p.ParseExpression() - } - diags = append(diags, keyDiags...) - diags = append(diags, valDiags...) - if p.recovery && (keyDiags.HasErrors() || valDiags.HasErrors()) { - close := p.recover(closeType) - return &LiteralValueExpr{ - Val: cty.DynamicVal, - SrcRange: hcl.RangeBetween(open.Range, close.Range), - }, diags - } - - group := false - var ellipsis Token - if p.Peek().Type == TokenEllipsis { - ellipsis = p.Read() - group = true - } - - var condExpr Expression - var condDiags hcl.Diagnostics - if ifKeyword.TokenMatches(p.Peek()) { - p.Read() // eat "if" - condExpr, condDiags = p.ParseExpression() - diags = append(diags, condDiags...) - if p.recovery && condDiags.HasErrors() { - close := p.recover(p.oppositeBracket(open.Type)) - return &LiteralValueExpr{ - Val: cty.DynamicVal, - SrcRange: hcl.RangeBetween(open.Range, close.Range), - }, diags - } - } - - var close Token - if p.Peek().Type == closeType { - close = p.Read() - } else { - if !p.recovery { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid 'for' expression", - Detail: "Extra characters after the end of the 'for' expression.", - Subject: p.Peek().Range.Ptr(), - Context: hcl.RangeBetween(open.Range, p.Peek().Range).Ptr(), - }) - } - close = p.recover(closeType) - } - - if !makeObj { - if keyExpr != nil { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid 'for' expression", - Detail: "Key expression is not valid when building a tuple.", - Subject: keyExpr.Range().Ptr(), - Context: hcl.RangeBetween(open.Range, close.Range).Ptr(), - }) - } - - if group { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid 'for' expression", - Detail: "Grouping ellipsis (...) 
cannot be used when building a tuple.", - Subject: &ellipsis.Range, - Context: hcl.RangeBetween(open.Range, close.Range).Ptr(), - }) - } - } else { - if keyExpr == nil { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid 'for' expression", - Detail: "Key expression is required when building an object.", - Subject: valExpr.Range().Ptr(), - Context: hcl.RangeBetween(open.Range, close.Range).Ptr(), - }) - } - } - - return &ForExpr{ - KeyVar: keyName, - ValVar: valName, - CollExpr: collExpr, - KeyExpr: keyExpr, - ValExpr: valExpr, - CondExpr: condExpr, - Group: group, - - SrcRange: hcl.RangeBetween(open.Range, close.Range), - OpenRange: open.Range, - CloseRange: close.Range, - }, diags -} - -// parseQuotedStringLiteral is a helper for parsing quoted strings that -// aren't allowed to contain any interpolations, such as block labels. -func (p *parser) parseQuotedStringLiteral() (string, hcl.Range, hcl.Diagnostics) { - oQuote := p.Read() - if oQuote.Type != TokenOQuote { - return "", oQuote.Range, hcl.Diagnostics{ - { - Severity: hcl.DiagError, - Summary: "Invalid string literal", - Detail: "A quoted string is required here.", - Subject: &oQuote.Range, - }, - } - } - - var diags hcl.Diagnostics - ret := &bytes.Buffer{} - var cQuote Token - -Token: - for { - tok := p.Read() - switch tok.Type { - - case TokenCQuote: - cQuote = tok - break Token - - case TokenQuotedLit: - s, sDiags := p.decodeStringLit(tok) - diags = append(diags, sDiags...) - ret.WriteString(s) - - case TokenTemplateControl, TokenTemplateInterp: - which := "$" - if tok.Type == TokenTemplateControl { - which = "%" - } - - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid string literal", - Detail: fmt.Sprintf( - "Template sequences are not allowed in this string. To include a literal %q, double it (as \"%s%s\") to escape it.", - which, which, which, - ), - Subject: &tok.Range, - Context: hcl.RangeBetween(oQuote.Range, tok.Range).Ptr(), - }) - - // Now that we're returning an error callers won't attempt to use - // the result for any real operations, but they might try to use - // the partial AST for other analyses, so we'll leave a marker - // to indicate that there was something invalid in the string to - // help avoid misinterpretation of the partial result - ret.WriteString(which) - ret.WriteString("{ ... }") - - p.recover(TokenTemplateSeqEnd) // we'll try to keep parsing after the sequence ends - - case TokenEOF: - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Unterminated string literal", - Detail: "Unable to find the closing quote mark before the end of the file.", - Subject: &tok.Range, - Context: hcl.RangeBetween(oQuote.Range, tok.Range).Ptr(), - }) - break Token - - default: - // Should never happen, as long as the scanner is behaving itself - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid string literal", - Detail: "This item is not valid in a string literal.", - Subject: &tok.Range, - Context: hcl.RangeBetween(oQuote.Range, tok.Range).Ptr(), - }) - p.recover(TokenCQuote) - break Token - - } - - } - - return ret.String(), hcl.RangeBetween(oQuote.Range, cQuote.Range), diags -} - -// decodeStringLit processes the given token, which must be either a -// TokenQuotedLit or a TokenStringLit, returning the string resulting from -// resolving any escape sequences. -// -// If any error diagnostics are returned, the returned string may be incomplete -// or otherwise invalid. 
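
Concretely, this decoding is what turns escape sequences inside a quoted literal into the characters they denote. A sketch of the end-to-end behavior through the public API, with an illustrative input:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
)

func main() {
	src := []byte(`"tab:\t unicode:\u00e9"`)
	expr, _ := hclsyntax.ParseExpression(src, "example.hcl", hcl.Pos{Line: 1, Column: 1})
	val, _ := expr.Value(nil)
	// decodeStringLit resolves \t and \u00e9 while scanning the literal.
	fmt.Printf("%q\n", val.AsString()) // "tab:\t unicode:é"
}
```
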
-func (p *parser) decodeStringLit(tok Token) (string, hcl.Diagnostics) {
- var quoted bool
- switch tok.Type {
- case TokenQuotedLit:
- quoted = true
- case TokenStringLit:
- quoted = false
- default:
- panic("decodeStringLit can only be used with TokenStringLit and TokenQuotedLit tokens")
- }
- var diags hcl.Diagnostics
-
- ret := make([]byte, 0, len(tok.Bytes))
- slices := scanStringLit(tok.Bytes, quoted)
-
- // We will mutate rng constantly as we walk through our token slices below.
- // Any diagnostics must take a copy of this rng rather than simply pointing
- // to it, e.g. by using rng.Ptr() rather than &rng.
- rng := tok.Range
- rng.End = rng.Start
-
-Slices:
- for _, slice := range slices {
- if len(slice) == 0 {
- continue
- }
-
- // Advance the start of our range to where the previous token ended
- rng.Start = rng.End
-
- // Advance the end of our range to after our token.
- b := slice
- for len(b) > 0 {
- adv, ch, _ := textseg.ScanGraphemeClusters(b, true)
- rng.End.Byte += adv
- switch ch[0] {
- case '\r', '\n':
- rng.End.Line++
- rng.End.Column = 1
- default:
- rng.End.Column++
- }
- b = b[adv:]
- }
-
- TokenType:
- switch slice[0] {
- case '\\':
- if !quoted {
- // If we're not in quoted mode then just treat this token as
- // normal. (Slices can still start with backslash even if we're
- // not specifically looking for backslash sequences.)
- break TokenType
- }
- if len(slice) < 2 {
- diags = append(diags, &hcl.Diagnostic{
- Severity: hcl.DiagError,
- Summary: "Invalid escape sequence",
- Detail: "Backslash must be followed by an escape sequence selector character.",
- Subject: rng.Ptr(),
- })
- break TokenType
- }
-
- switch slice[1] {
-
- case 'n':
- ret = append(ret, '\n')
- continue Slices
- case 'r':
- ret = append(ret, '\r')
- continue Slices
- case 't':
- ret = append(ret, '\t')
- continue Slices
- case '"':
- ret = append(ret, '"')
- continue Slices
- case '\\':
- ret = append(ret, '\\')
- continue Slices
- case 'u', 'U':
- if slice[1] == 'u' && len(slice) != 6 {
- diags = append(diags, &hcl.Diagnostic{
- Severity: hcl.DiagError,
- Summary: "Invalid escape sequence",
- Detail: "The \\u escape sequence must be followed by four hexadecimal digits.",
- Subject: rng.Ptr(),
- })
- break TokenType
- } else if slice[1] == 'U' && len(slice) != 10 {
- diags = append(diags, &hcl.Diagnostic{
- Severity: hcl.DiagError,
- Summary: "Invalid escape sequence",
- Detail: "The \\U escape sequence must be followed by eight hexadecimal digits.",
- Subject: rng.Ptr(),
- })
- break TokenType
- }
-
- numHex := string(slice[2:])
- num, err := strconv.ParseUint(numHex, 16, 32)
- if err != nil {
- // Should never happen because the scanner won't match
- // a sequence of digits that isn't valid.
- panic(err)
- }
-
- r := rune(num)
- l := utf8.RuneLen(r)
- if l == -1 {
- diags = append(diags, &hcl.Diagnostic{
- Severity: hcl.DiagError,
- Summary: "Invalid escape sequence",
- Detail: fmt.Sprintf("Cannot encode character U+%04x in UTF-8.", num),
- Subject: rng.Ptr(),
- })
- break TokenType
- }
- for i := 0; i < l; i++ {
- ret = append(ret, 0)
- }
- rb := ret[len(ret)-l:]
- utf8.EncodeRune(rb, r)
-
- continue Slices
-
- default:
- diags = append(diags, &hcl.Diagnostic{
- Severity: hcl.DiagError,
- Summary: "Invalid escape sequence",
- Detail: fmt.Sprintf("The symbol %q is not a valid escape sequence selector.", slice[1:]),
- Subject: rng.Ptr(),
- })
- ret = append(ret, slice[1:]...)
- continue Slices - } - - case '$', '%': - if len(slice) != 3 { - // Not long enough to be our escape sequence, so it's literal. - break TokenType - } - - if slice[1] == slice[0] && slice[2] == '{' { - ret = append(ret, slice[0]) - ret = append(ret, '{') - continue Slices - } - - break TokenType - } - - // If we fall out here or break out of here from the switch above - // then this slice is just a literal. - ret = append(ret, slice...) - } - - return string(ret), diags -} - -// setRecovery turns on recovery mode without actually doing any recovery. -// This can be used when a parser knowingly leaves the peeker in a useless -// place and wants to suppress errors that might result from that decision. -func (p *parser) setRecovery() { - p.recovery = true -} - -// recover seeks forward in the token stream until it finds TokenType "end", -// then returns with the peeker pointed at the following token. -// -// If the given token type is a bracketer, this function will additionally -// count nested instances of the brackets to try to leave the peeker at -// the end of the _current_ instance of that bracketer, skipping over any -// nested instances. This is a best-effort operation and may have -// unpredictable results on input with bad bracketer nesting. -func (p *parser) recover(end TokenType) Token { - start := p.oppositeBracket(end) - p.recovery = true - - nest := 0 - for { - tok := p.Read() - ty := tok.Type - if end == TokenTemplateSeqEnd && ty == TokenTemplateControl { - // normalize so that our matching behavior can work, since - // TokenTemplateControl/TokenTemplateInterp are asymmetrical - // with TokenTemplateSeqEnd and thus we need to count both - // openers if that's the closer we're looking for. - ty = TokenTemplateInterp - } - - switch ty { - case start: - nest++ - case end: - if nest < 1 { - return tok - } - - nest-- - case TokenEOF: - return tok - } - } -} - -// recoverOver seeks forward in the token stream until it finds a block -// starting with TokenType "start", then finds the corresponding end token, -// leaving the peeker pointed at the token after that end token. -// -// The given token type _must_ be a bracketer. For example, if the given -// start token is TokenOBrace then the parser will be left at the _end_ of -// the next brace-delimited block encountered, or at EOF if no such block -// is found or it is unclosed. -func (p *parser) recoverOver(start TokenType) { - end := p.oppositeBracket(start) - - // find the opening bracket first -Token: - for { - tok := p.Read() - switch tok.Type { - case start, TokenEOF: - break Token - } - } - - // Now use our existing recover function to locate the _end_ of the - // container we've found. 
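
Together, setRecovery, recover, and recoverOver (defined above and completed just below) give the parser its error tolerance: even invalid input yields error diagnostics plus a structurally sound placeholder AST. A quick sketch with an illustrative, deliberately unterminated input:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
)

func main() {
	// The missing "]" produces a diagnostic, but the returned expression is
	// still non-nil and safe to inspect.
	expr, diags := hclsyntax.ParseExpression([]byte(`[1, 2`), "example.hcl", hcl.Pos{Line: 1, Column: 1})
	fmt.Println(diags.HasErrors(), expr != nil) // true true
}
```
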
- p.recover(end)
-}
-
-func (p *parser) recoverAfterBodyItem() {
- p.recovery = true
- var open []TokenType
-
-Token:
- for {
- tok := p.Read()
-
- switch tok.Type {
-
- case TokenNewline:
- if len(open) == 0 {
- break Token
- }
-
- case TokenEOF:
- break Token
-
- case TokenOBrace, TokenOBrack, TokenOParen, TokenOQuote, TokenOHeredoc, TokenTemplateInterp, TokenTemplateControl:
- open = append(open, tok.Type)
-
- case TokenCBrace, TokenCBrack, TokenCParen, TokenCQuote, TokenCHeredoc:
- opener := p.oppositeBracket(tok.Type)
- for len(open) > 0 && open[len(open)-1] != opener {
- open = open[:len(open)-1]
- }
- if len(open) > 0 {
- open = open[:len(open)-1]
- }
-
- case TokenTemplateSeqEnd:
- for len(open) > 0 && open[len(open)-1] != TokenTemplateInterp && open[len(open)-1] != TokenTemplateControl {
- open = open[:len(open)-1]
- }
- if len(open) > 0 {
- open = open[:len(open)-1]
- }
-
- }
- }
-}
-
-// oppositeBracket finds the bracket that opposes the given bracketer, or
-// NilToken if the given token isn't a bracketer.
-//
-// "Bracketer", for the sake of this function, is one end of a matching
-// open/close set of tokens that establish a bracketing context.
-func (p *parser) oppositeBracket(ty TokenType) TokenType {
- switch ty {
-
- case TokenOBrace:
- return TokenCBrace
- case TokenOBrack:
- return TokenCBrack
- case TokenOParen:
- return TokenCParen
- case TokenOQuote:
- return TokenCQuote
- case TokenOHeredoc:
- return TokenCHeredoc
-
- case TokenCBrace:
- return TokenOBrace
- case TokenCBrack:
- return TokenOBrack
- case TokenCParen:
- return TokenOParen
- case TokenCQuote:
- return TokenOQuote
- case TokenCHeredoc:
- return TokenOHeredoc
-
- case TokenTemplateControl:
- return TokenTemplateSeqEnd
- case TokenTemplateInterp:
- return TokenTemplateSeqEnd
- case TokenTemplateSeqEnd:
- // This is ambiguous, but we return Interp here because that's
- // what's assumed by the "recover" method.
- return TokenTemplateInterp - - default: - return TokenNil - } -} - -func errPlaceholderExpr(rng hcl.Range) Expression { - return &LiteralValueExpr{ - Val: cty.DynamicVal, - SrcRange: rng, - } -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/parser_template.go b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/parser_template.go deleted file mode 100644 index a141626fe..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/parser_template.go +++ /dev/null @@ -1,799 +0,0 @@ -package hclsyntax - -import ( - "fmt" - "strings" - "unicode" - - "github.com/apparentlymart/go-textseg/textseg" - "github.com/hashicorp/hcl2/hcl" - "github.com/zclconf/go-cty/cty" -) - -func (p *parser) ParseTemplate() (Expression, hcl.Diagnostics) { - return p.parseTemplate(TokenEOF, false) -} - -func (p *parser) parseTemplate(end TokenType, flushHeredoc bool) (Expression, hcl.Diagnostics) { - exprs, passthru, rng, diags := p.parseTemplateInner(end, flushHeredoc) - - if passthru { - if len(exprs) != 1 { - panic("passthru set with len(exprs) != 1") - } - return &TemplateWrapExpr{ - Wrapped: exprs[0], - SrcRange: rng, - }, diags - } - - return &TemplateExpr{ - Parts: exprs, - SrcRange: rng, - }, diags -} - -func (p *parser) parseTemplateInner(end TokenType, flushHeredoc bool) ([]Expression, bool, hcl.Range, hcl.Diagnostics) { - parts, diags := p.parseTemplateParts(end) - if flushHeredoc { - flushHeredocTemplateParts(parts) // Trim off leading spaces on lines per the flush heredoc spec - } - tp := templateParser{ - Tokens: parts.Tokens, - SrcRange: parts.SrcRange, - } - exprs, exprsDiags := tp.parseRoot() - diags = append(diags, exprsDiags...) - - passthru := false - if len(parts.Tokens) == 2 { // one real token and one synthetic "end" token - if _, isInterp := parts.Tokens[0].(*templateInterpToken); isInterp { - passthru = true - } - } - - return exprs, passthru, parts.SrcRange, diags -} - -type templateParser struct { - Tokens []templateToken - SrcRange hcl.Range - - pos int -} - -func (p *templateParser) parseRoot() ([]Expression, hcl.Diagnostics) { - var exprs []Expression - var diags hcl.Diagnostics - - for { - next := p.Peek() - if _, isEnd := next.(*templateEndToken); isEnd { - break - } - - expr, exprDiags := p.parseExpr() - diags = append(diags, exprDiags...) - exprs = append(exprs, expr) - } - - return exprs, diags -} - -func (p *templateParser) parseExpr() (Expression, hcl.Diagnostics) { - next := p.Peek() - switch tok := next.(type) { - - case *templateLiteralToken: - p.Read() // eat literal - return &LiteralValueExpr{ - Val: cty.StringVal(tok.Val), - SrcRange: tok.SrcRange, - }, nil - - case *templateInterpToken: - p.Read() // eat interp - return tok.Expr, nil - - case *templateIfToken: - return p.parseIf() - - case *templateForToken: - return p.parseFor() - - case *templateEndToken: - p.Read() // eat erroneous token - return errPlaceholderExpr(tok.SrcRange), hcl.Diagnostics{ - { - // This is a particularly unhelpful diagnostic, so callers - // should attempt to pre-empt it and produce a more helpful - // diagnostic that is context-aware. 
- Severity: hcl.DiagError, - Summary: "Unexpected end of template", - Detail: "The control directives within this template are unbalanced.", - Subject: &tok.SrcRange, - }, - } - - case *templateEndCtrlToken: - p.Read() // eat erroneous token - return errPlaceholderExpr(tok.SrcRange), hcl.Diagnostics{ - { - Severity: hcl.DiagError, - Summary: fmt.Sprintf("Unexpected %s directive", tok.Name()), - Detail: "The control directives within this template are unbalanced.", - Subject: &tok.SrcRange, - }, - } - - default: - // should never happen, because above should be exhaustive - panic(fmt.Sprintf("unhandled template token type %T", next)) - } -} - -func (p *templateParser) parseIf() (Expression, hcl.Diagnostics) { - open := p.Read() - openIf, isIf := open.(*templateIfToken) - if !isIf { - // should never happen if caller is behaving - panic("parseIf called with peeker not pointing at if token") - } - - var ifExprs, elseExprs []Expression - var diags hcl.Diagnostics - var endifRange hcl.Range - - currentExprs := &ifExprs -Token: - for { - next := p.Peek() - if end, isEnd := next.(*templateEndToken); isEnd { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Unexpected end of template", - Detail: fmt.Sprintf( - "The if directive at %s is missing its corresponding endif directive.", - openIf.SrcRange, - ), - Subject: &end.SrcRange, - }) - return errPlaceholderExpr(end.SrcRange), diags - } - if end, isCtrlEnd := next.(*templateEndCtrlToken); isCtrlEnd { - p.Read() // eat end directive - - switch end.Type { - - case templateElse: - if currentExprs == &ifExprs { - currentExprs = &elseExprs - continue Token - } - - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Unexpected else directive", - Detail: fmt.Sprintf( - "Already in the else clause for the if started at %s.", - openIf.SrcRange, - ), - Subject: &end.SrcRange, - }) - - case templateEndIf: - endifRange = end.SrcRange - break Token - - default: - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: fmt.Sprintf("Unexpected %s directive", end.Name()), - Detail: fmt.Sprintf( - "Expecting an endif directive for the if started at %s.", - openIf.SrcRange, - ), - Subject: &end.SrcRange, - }) - } - - return errPlaceholderExpr(end.SrcRange), diags - } - - expr, exprDiags := p.parseExpr() - diags = append(diags, exprDiags...) 
- *currentExprs = append(*currentExprs, expr) - } - - if len(ifExprs) == 0 { - ifExprs = append(ifExprs, &LiteralValueExpr{ - Val: cty.StringVal(""), - SrcRange: hcl.Range{ - Filename: openIf.SrcRange.Filename, - Start: openIf.SrcRange.End, - End: openIf.SrcRange.End, - }, - }) - } - if len(elseExprs) == 0 { - elseExprs = append(elseExprs, &LiteralValueExpr{ - Val: cty.StringVal(""), - SrcRange: hcl.Range{ - Filename: endifRange.Filename, - Start: endifRange.Start, - End: endifRange.Start, - }, - }) - } - - trueExpr := &TemplateExpr{ - Parts: ifExprs, - SrcRange: hcl.RangeBetween(ifExprs[0].Range(), ifExprs[len(ifExprs)-1].Range()), - } - falseExpr := &TemplateExpr{ - Parts: elseExprs, - SrcRange: hcl.RangeBetween(elseExprs[0].Range(), elseExprs[len(elseExprs)-1].Range()), - } - - return &ConditionalExpr{ - Condition: openIf.CondExpr, - TrueResult: trueExpr, - FalseResult: falseExpr, - - SrcRange: hcl.RangeBetween(openIf.SrcRange, endifRange), - }, diags -} - -func (p *templateParser) parseFor() (Expression, hcl.Diagnostics) { - open := p.Read() - openFor, isFor := open.(*templateForToken) - if !isFor { - // should never happen if caller is behaving - panic("parseFor called with peeker not pointing at for token") - } - - var contentExprs []Expression - var diags hcl.Diagnostics - var endforRange hcl.Range - -Token: - for { - next := p.Peek() - if end, isEnd := next.(*templateEndToken); isEnd { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Unexpected end of template", - Detail: fmt.Sprintf( - "The for directive at %s is missing its corresponding endfor directive.", - openFor.SrcRange, - ), - Subject: &end.SrcRange, - }) - return errPlaceholderExpr(end.SrcRange), diags - } - if end, isCtrlEnd := next.(*templateEndCtrlToken); isCtrlEnd { - p.Read() // eat end directive - - switch end.Type { - - case templateElse: - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Unexpected else directive", - Detail: "An else clause is not expected for a for directive.", - Subject: &end.SrcRange, - }) - - case templateEndFor: - endforRange = end.SrcRange - break Token - - default: - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: fmt.Sprintf("Unexpected %s directive", end.Name()), - Detail: fmt.Sprintf( - "Expecting an endfor directive corresponding to the for directive at %s.", - openFor.SrcRange, - ), - Subject: &end.SrcRange, - }) - } - - return errPlaceholderExpr(end.SrcRange), diags - } - - expr, exprDiags := p.parseExpr() - diags = append(diags, exprDiags...) 
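
parseIf, completed above, lowers a `%{ if }`/`%{ else }`/`%{ endif }` directive into an ordinary ConditionalExpr whose branches are sub-templates, synthesizing an empty-string branch when `else` is omitted. A usage sketch with made-up variable names:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	src := []byte(`Hello, %{ if formal }Dr. %{ endif }${name}!`)
	tmpl, _ := hclsyntax.ParseTemplate(src, "greet.tmpl", hcl.Pos{Line: 1, Column: 1})
	val, _ := tmpl.Value(&hcl.EvalContext{Variables: map[string]cty.Value{
		"formal": cty.True,
		"name":   cty.StringVal("Who"),
	}})
	fmt.Println(val.AsString()) // Hello, Dr. Who!
}
```
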
- contentExprs = append(contentExprs, expr) - } - - if len(contentExprs) == 0 { - contentExprs = append(contentExprs, &LiteralValueExpr{ - Val: cty.StringVal(""), - SrcRange: hcl.Range{ - Filename: openFor.SrcRange.Filename, - Start: openFor.SrcRange.End, - End: openFor.SrcRange.End, - }, - }) - } - - contentExpr := &TemplateExpr{ - Parts: contentExprs, - SrcRange: hcl.RangeBetween(contentExprs[0].Range(), contentExprs[len(contentExprs)-1].Range()), - } - - forExpr := &ForExpr{ - KeyVar: openFor.KeyVar, - ValVar: openFor.ValVar, - - CollExpr: openFor.CollExpr, - ValExpr: contentExpr, - - SrcRange: hcl.RangeBetween(openFor.SrcRange, endforRange), - OpenRange: openFor.SrcRange, - CloseRange: endforRange, - } - - return &TemplateJoinExpr{ - Tuple: forExpr, - }, diags -} - -func (p *templateParser) Peek() templateToken { - return p.Tokens[p.pos] -} - -func (p *templateParser) Read() templateToken { - ret := p.Peek() - if _, end := ret.(*templateEndToken); !end { - p.pos++ - } - return ret -} - -// parseTemplateParts produces a flat sequence of "template tokens", which are -// either literal values (with any "trimming" already applied), interpolation -// sequences, or control flow markers. -// -// A further pass is required on the result to turn it into an AST. -func (p *parser) parseTemplateParts(end TokenType) (*templateParts, hcl.Diagnostics) { - var parts []templateToken - var diags hcl.Diagnostics - - startRange := p.NextRange() - ltrimNext := false - nextCanTrimPrev := false - var endRange hcl.Range - -Token: - for { - next := p.Read() - if next.Type == end { - // all done! - endRange = next.Range - break - } - - ltrim := ltrimNext - ltrimNext = false - canTrimPrev := nextCanTrimPrev - nextCanTrimPrev = false - - switch next.Type { - case TokenStringLit, TokenQuotedLit: - str, strDiags := p.decodeStringLit(next) - diags = append(diags, strDiags...) - - if ltrim { - str = strings.TrimLeftFunc(str, unicode.IsSpace) - } - - parts = append(parts, &templateLiteralToken{ - Val: str, - SrcRange: next.Range, - }) - nextCanTrimPrev = true - - case TokenTemplateInterp: - // if the opener is ${~ then we want to eat any trailing whitespace - // in the preceding literal token, assuming it is indeed a literal - // token. - if canTrimPrev && len(next.Bytes) == 3 && next.Bytes[2] == '~' && len(parts) > 0 { - prevExpr := parts[len(parts)-1] - if lexpr, ok := prevExpr.(*templateLiteralToken); ok { - lexpr.Val = strings.TrimRightFunc(lexpr.Val, unicode.IsSpace) - } - } - - p.PushIncludeNewlines(false) - expr, exprDiags := p.ParseExpression() - diags = append(diags, exprDiags...) - close := p.Peek() - if close.Type != TokenTemplateSeqEnd { - if !p.recovery { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Extra characters after interpolation expression", - Detail: "Expected a closing brace to end the interpolation expression, but found extra characters.", - Subject: &close.Range, - Context: hcl.RangeBetween(startRange, close.Range).Ptr(), - }) - } - p.recover(TokenTemplateSeqEnd) - } else { - p.Read() // eat closing brace - - // If the closer is ~} then we want to eat any leading - // whitespace on the next token, if it turns out to be a - // literal token. 
- if len(close.Bytes) == 2 && close.Bytes[0] == '~' { - ltrimNext = true - } - } - p.PopIncludeNewlines() - parts = append(parts, &templateInterpToken{ - Expr: expr, - SrcRange: hcl.RangeBetween(next.Range, close.Range), - }) - - case TokenTemplateControl: - // if the opener is %{~ then we want to eat any trailing whitespace - // in the preceding literal token, assuming it is indeed a literal - // token. - if canTrimPrev && len(next.Bytes) == 3 && next.Bytes[2] == '~' && len(parts) > 0 { - prevExpr := parts[len(parts)-1] - if lexpr, ok := prevExpr.(*templateLiteralToken); ok { - lexpr.Val = strings.TrimRightFunc(lexpr.Val, unicode.IsSpace) - } - } - p.PushIncludeNewlines(false) - - kw := p.Peek() - if kw.Type != TokenIdent { - if !p.recovery { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid template directive", - Detail: "A template directive keyword (\"if\", \"for\", etc) is expected at the beginning of a %{ sequence.", - Subject: &kw.Range, - Context: hcl.RangeBetween(next.Range, kw.Range).Ptr(), - }) - } - p.recover(TokenTemplateSeqEnd) - p.PopIncludeNewlines() - continue Token - } - p.Read() // eat keyword token - - switch { - - case ifKeyword.TokenMatches(kw): - condExpr, exprDiags := p.ParseExpression() - diags = append(diags, exprDiags...) - parts = append(parts, &templateIfToken{ - CondExpr: condExpr, - SrcRange: hcl.RangeBetween(next.Range, p.NextRange()), - }) - - case elseKeyword.TokenMatches(kw): - parts = append(parts, &templateEndCtrlToken{ - Type: templateElse, - SrcRange: hcl.RangeBetween(next.Range, p.NextRange()), - }) - - case endifKeyword.TokenMatches(kw): - parts = append(parts, &templateEndCtrlToken{ - Type: templateEndIf, - SrcRange: hcl.RangeBetween(next.Range, p.NextRange()), - }) - - case forKeyword.TokenMatches(kw): - var keyName, valName string - if p.Peek().Type != TokenIdent { - if !p.recovery { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid 'for' directive", - Detail: "For directive requires variable name after 'for'.", - Subject: p.Peek().Range.Ptr(), - }) - } - p.recover(TokenTemplateSeqEnd) - p.PopIncludeNewlines() - continue Token - } - - valName = string(p.Read().Bytes) - - if p.Peek().Type == TokenComma { - // What we just read was actually the key, then. - keyName = valName - p.Read() // eat comma - - if p.Peek().Type != TokenIdent { - if !p.recovery { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid 'for' directive", - Detail: "For directive requires value variable name after comma.", - Subject: p.Peek().Range.Ptr(), - }) - } - p.recover(TokenTemplateSeqEnd) - p.PopIncludeNewlines() - continue Token - } - - valName = string(p.Read().Bytes) - } - - if !inKeyword.TokenMatches(p.Peek()) { - if !p.recovery { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid 'for' directive", - Detail: "For directive requires 'in' keyword after names.", - Subject: p.Peek().Range.Ptr(), - }) - } - p.recover(TokenTemplateSeqEnd) - p.PopIncludeNewlines() - continue Token - } - p.Read() // eat 'in' keyword - - collExpr, collDiags := p.ParseExpression() - diags = append(diags, collDiags...) 
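
The keyword handling above only records a for token; the earlier parseFor pass then lowers the directive into a ForExpr wrapped in a TemplateJoinExpr, which concatenates the per-element renderings. A sketch with an illustrative template:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
)

func main() {
	src := []byte(`%{ for s in ["a", "b", "c"] }${s},%{ endfor }`)
	tmpl, _ := hclsyntax.ParseTemplate(src, "list.tmpl", hcl.Pos{Line: 1, Column: 1})
	// The for directive renders "${s}," once per element and the results
	// are joined into a single string.
	val, _ := tmpl.Value(&hcl.EvalContext{})
	fmt.Println(val.AsString()) // a,b,c,
}
```
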
- parts = append(parts, &templateForToken{ - KeyVar: keyName, - ValVar: valName, - CollExpr: collExpr, - - SrcRange: hcl.RangeBetween(next.Range, p.NextRange()), - }) - - case endforKeyword.TokenMatches(kw): - parts = append(parts, &templateEndCtrlToken{ - Type: templateEndFor, - SrcRange: hcl.RangeBetween(next.Range, p.NextRange()), - }) - - default: - if !p.recovery { - suggestions := []string{"if", "for", "else", "endif", "endfor"} - given := string(kw.Bytes) - suggestion := nameSuggestion(given, suggestions) - if suggestion != "" { - suggestion = fmt.Sprintf(" Did you mean %q?", suggestion) - } - - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid template control keyword", - Detail: fmt.Sprintf("%q is not a valid template control keyword.%s", given, suggestion), - Subject: &kw.Range, - Context: hcl.RangeBetween(next.Range, kw.Range).Ptr(), - }) - } - p.recover(TokenTemplateSeqEnd) - p.PopIncludeNewlines() - continue Token - - } - - close := p.Peek() - if close.Type != TokenTemplateSeqEnd { - if !p.recovery { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: fmt.Sprintf("Extra characters in %s marker", kw.Bytes), - Detail: "Expected a closing brace to end the sequence, but found extra characters.", - Subject: &close.Range, - Context: hcl.RangeBetween(startRange, close.Range).Ptr(), - }) - } - p.recover(TokenTemplateSeqEnd) - } else { - p.Read() // eat closing brace - - // If the closer is ~} then we want to eat any leading - // whitespace on the next token, if it turns out to be a - // literal token. - if len(close.Bytes) == 2 && close.Bytes[0] == '~' { - ltrimNext = true - } - } - p.PopIncludeNewlines() - - default: - if !p.recovery { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Unterminated template string", - Detail: "No closing marker was found for the string.", - Subject: &next.Range, - Context: hcl.RangeBetween(startRange, next.Range).Ptr(), - }) - } - final := p.recover(end) - endRange = final.Range - break Token - } - } - - if len(parts) == 0 { - // If a sequence has no content, we'll treat it as if it had an - // empty string in it because that's what the user probably means - // if they write "" in configuration. - parts = append(parts, &templateLiteralToken{ - Val: "", - SrcRange: hcl.Range{ - // Range is the zero-character span immediately after the - // opening quote. - Filename: startRange.Filename, - Start: startRange.End, - End: startRange.End, - }, - }) - } - - // Always end with an end token, so the parser can produce diagnostics - // about unclosed items with proper position information. - parts = append(parts, &templateEndToken{ - SrcRange: endRange, - }) - - ret := &templateParts{ - Tokens: parts, - SrcRange: hcl.RangeBetween(startRange, endRange), - } - - return ret, diags -} - -// flushHeredocTemplateParts modifies in-place the line-leading literal strings -// to apply the flush heredoc processing rule: find the line with the smallest -// number of whitespace characters as prefix and then trim that number of -// characters from all of the lines. -// -// This rule is applied to static tokens rather than to the rendered result, -// so interpolating a string with leading whitespace cannot affect the chosen -// prefix length. 
-func flushHeredocTemplateParts(parts *templateParts) { - if len(parts.Tokens) == 0 { - // Nothing to do - return - } - - const maxInt = int((^uint(0)) >> 1) - - minSpaces := maxInt - newline := true - var adjust []*templateLiteralToken - for _, ttok := range parts.Tokens { - if newline { - newline = false - var spaces int - if lit, ok := ttok.(*templateLiteralToken); ok { - orig := lit.Val - trimmed := strings.TrimLeftFunc(orig, unicode.IsSpace) - // If a token is entirely spaces and ends with a newline - // then it's a "blank line" and thus not considered for - // space-prefix-counting purposes. - if len(trimmed) == 0 && strings.HasSuffix(orig, "\n") { - spaces = maxInt - } else { - spaceBytes := len(lit.Val) - len(trimmed) - spaces, _ = textseg.TokenCount([]byte(orig[:spaceBytes]), textseg.ScanGraphemeClusters) - adjust = append(adjust, lit) - } - } else if _, ok := ttok.(*templateEndToken); ok { - break // don't process the end token since it never has spaces before it - } - if spaces < minSpaces { - minSpaces = spaces - } - } - if lit, ok := ttok.(*templateLiteralToken); ok { - if strings.HasSuffix(lit.Val, "\n") { - newline = true // The following token, if any, begins a new line - } - } - } - - for _, lit := range adjust { - // Since we want to count space _characters_ rather than space _bytes_, - // we can't just do a straightforward slice operation here and instead - // need to hunt for the split point with a scanner. - valBytes := []byte(lit.Val) - spaceByteCount := 0 - for i := 0; i < minSpaces; i++ { - adv, _, _ := textseg.ScanGraphemeClusters(valBytes, true) - spaceByteCount += adv - valBytes = valBytes[adv:] - } - lit.Val = lit.Val[spaceByteCount:] - lit.SrcRange.Start.Column += minSpaces - lit.SrcRange.Start.Byte += spaceByteCount - } -} - -type templateParts struct { - Tokens []templateToken - SrcRange hcl.Range -} - -// templateToken is a higher-level token that represents a single atom within -// the template language. Our template parsing first raises the raw token -// stream to a sequence of templateToken, and then transforms the result into -// an expression tree. 
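
flushHeredocTemplateParts above implements the `<<-` heredoc rule: the smallest leading-whitespace prefix found on any non-blank line is trimmed from every line. A sketch of the visible effect; the attribute name and content are illustrative:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
)

func main() {
	src := []byte("greeting = <<-EOT\n    hello\n      world\n    EOT\n")
	f, _ := hclsyntax.ParseConfig(src, "example.hcl", hcl.Pos{Line: 1, Column: 1})
	attrs, _ := f.Body.JustAttributes()
	val, _ := attrs["greeting"].Expr.Value(nil)
	// The common four-space prefix is removed; the two extra spaces on
	// "world" survive.
	fmt.Printf("%q\n", val.AsString()) // "hello\n  world\n"
}
```
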
-type templateToken interface { - templateToken() templateToken -} - -type templateLiteralToken struct { - Val string - SrcRange hcl.Range - isTemplateToken -} - -type templateInterpToken struct { - Expr Expression - SrcRange hcl.Range - isTemplateToken -} - -type templateIfToken struct { - CondExpr Expression - SrcRange hcl.Range - isTemplateToken -} - -type templateForToken struct { - KeyVar string // empty if ignoring key - ValVar string - CollExpr Expression - SrcRange hcl.Range - isTemplateToken -} - -type templateEndCtrlType int - -const ( - templateEndIf templateEndCtrlType = iota - templateElse - templateEndFor -) - -type templateEndCtrlToken struct { - Type templateEndCtrlType - SrcRange hcl.Range - isTemplateToken -} - -func (t *templateEndCtrlToken) Name() string { - switch t.Type { - case templateEndIf: - return "endif" - case templateElse: - return "else" - case templateEndFor: - return "endfor" - default: - // should never happen - panic("invalid templateEndCtrlType") - } -} - -type templateEndToken struct { - SrcRange hcl.Range - isTemplateToken -} - -type isTemplateToken [0]int - -func (t isTemplateToken) templateToken() templateToken { - return t -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/parser_traversal.go b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/parser_traversal.go deleted file mode 100644 index 2ff3ed6c1..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/parser_traversal.go +++ /dev/null @@ -1,159 +0,0 @@ -package hclsyntax - -import ( - "github.com/hashicorp/hcl2/hcl" - "github.com/zclconf/go-cty/cty" -) - -// ParseTraversalAbs parses an absolute traversal that is assumed to consume -// all of the remaining tokens in the peeker. The usual parser recovery -// behavior is not supported here because traversals are not expected to -// be parsed as part of a larger program. 
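
Before the implementation below, a usage sketch of the traversal parser via the public ParseTraversalAbs wrapper that appears later in this diff; the address string is made up and nothing is evaluated:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
)

func main() {
	src := []byte(`aws_instance.web[0].id`)
	trav, diags := hclsyntax.ParseTraversalAbs(src, "ref.hcl", hcl.Pos{Line: 1, Column: 1})
	// The result records each step (root, attribute, index, attribute)
	// without evaluating anything.
	fmt.Println(diags.HasErrors(), trav.RootName(), len(trav)) // false aws_instance 4
}
```
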
-func (p *parser) ParseTraversalAbs() (hcl.Traversal, hcl.Diagnostics) { - var ret hcl.Traversal - var diags hcl.Diagnostics - - // Absolute traversal must always begin with a variable name - varTok := p.Read() - if varTok.Type != TokenIdent { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Variable name required", - Detail: "Must begin with a variable name.", - Subject: &varTok.Range, - }) - return ret, diags - } - - varName := string(varTok.Bytes) - ret = append(ret, hcl.TraverseRoot{ - Name: varName, - SrcRange: varTok.Range, - }) - - for { - next := p.Peek() - - if next.Type == TokenEOF { - return ret, diags - } - - switch next.Type { - case TokenDot: - // Attribute access - dot := p.Read() // eat dot - nameTok := p.Read() - if nameTok.Type != TokenIdent { - if nameTok.Type == TokenStar { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Attribute name required", - Detail: "Splat expressions (.*) may not be used here.", - Subject: &nameTok.Range, - Context: hcl.RangeBetween(varTok.Range, nameTok.Range).Ptr(), - }) - } else { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Attribute name required", - Detail: "Dot must be followed by attribute name.", - Subject: &nameTok.Range, - Context: hcl.RangeBetween(varTok.Range, nameTok.Range).Ptr(), - }) - } - return ret, diags - } - - attrName := string(nameTok.Bytes) - ret = append(ret, hcl.TraverseAttr{ - Name: attrName, - SrcRange: hcl.RangeBetween(dot.Range, nameTok.Range), - }) - case TokenOBrack: - // Index - open := p.Read() // eat open bracket - next := p.Peek() - - switch next.Type { - case TokenNumberLit: - tok := p.Read() // eat number - numVal, numDiags := p.numberLitValue(tok) - diags = append(diags, numDiags...) - - close := p.Read() - if close.Type != TokenCBrack { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Unclosed index brackets", - Detail: "Index key must be followed by a closing bracket.", - Subject: &close.Range, - Context: hcl.RangeBetween(open.Range, close.Range).Ptr(), - }) - } - - ret = append(ret, hcl.TraverseIndex{ - Key: numVal, - SrcRange: hcl.RangeBetween(open.Range, close.Range), - }) - - if diags.HasErrors() { - return ret, diags - } - - case TokenOQuote: - str, _, strDiags := p.parseQuotedStringLiteral() - diags = append(diags, strDiags...) 
- - close := p.Read() - if close.Type != TokenCBrack { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Unclosed index brackets", - Detail: "Index key must be followed by a closing bracket.", - Subject: &close.Range, - Context: hcl.RangeBetween(open.Range, close.Range).Ptr(), - }) - } - - ret = append(ret, hcl.TraverseIndex{ - Key: cty.StringVal(str), - SrcRange: hcl.RangeBetween(open.Range, close.Range), - }) - - if diags.HasErrors() { - return ret, diags - } - - default: - if next.Type == TokenStar { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Attribute name required", - Detail: "Splat expressions ([*]) may not be used here.", - Subject: &next.Range, - Context: hcl.RangeBetween(varTok.Range, next.Range).Ptr(), - }) - } else { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Index value required", - Detail: "Index brackets must contain either a literal number or a literal string.", - Subject: &next.Range, - Context: hcl.RangeBetween(varTok.Range, next.Range).Ptr(), - }) - } - return ret, diags - } - - default: - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid character", - Detail: "Expected an attribute access or an index operator.", - Subject: &next.Range, - Context: hcl.RangeBetween(varTok.Range, next.Range).Ptr(), - }) - return ret, diags - } - } -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/peeker.go b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/peeker.go deleted file mode 100644 index 5a4b50e2f..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/peeker.go +++ /dev/null @@ -1,212 +0,0 @@ -package hclsyntax - -import ( - "bytes" - "fmt" - "path/filepath" - "runtime" - "strings" - - "github.com/hashicorp/hcl2/hcl" -) - -// This is set to true at init() time in tests, to enable more useful output -// if a stack discipline error is detected. It should not be enabled in -// normal mode since there is a performance penalty from accessing the -// runtime stack to produce the traces, but could be temporarily set to -// true for debugging if desired. -var tracePeekerNewlinesStack = false - -type peeker struct { - Tokens Tokens - NextIndex int - - IncludeComments bool - IncludeNewlinesStack []bool - - // used only when tracePeekerNewlinesStack is set - newlineStackChanges []peekerNewlineStackChange -} - -// for use in debugging the stack usage only -type peekerNewlineStackChange struct { - Pushing bool // if false, then popping - Frame runtime.Frame - Include bool -} - -func newPeeker(tokens Tokens, includeComments bool) *peeker { - return &peeker{ - Tokens: tokens, - IncludeComments: includeComments, - - IncludeNewlinesStack: []bool{true}, - } -} - -func (p *peeker) Peek() Token { - ret, _ := p.nextToken() - return ret -} - -func (p *peeker) Read() Token { - ret, nextIdx := p.nextToken() - p.NextIndex = nextIdx - return ret -} - -func (p *peeker) NextRange() hcl.Range { - return p.Peek().Range -} - -func (p *peeker) PrevRange() hcl.Range { - if p.NextIndex == 0 { - return p.NextRange() - } - - return p.Tokens[p.NextIndex-1].Range -} - -func (p *peeker) nextToken() (Token, int) { - for i := p.NextIndex; i < len(p.Tokens); i++ { - tok := p.Tokens[i] - switch tok.Type { - case TokenComment: - if !p.IncludeComments { - // Single-line comment tokens, starting with # or //, absorb - // the trailing newline that terminates them as part of their - // bytes. 
When we're filtering out comments, we must as a
- // special case transform these to newline tokens in order
- // to properly parse newline-terminated block items.
-
- if p.includingNewlines() {
- if len(tok.Bytes) > 0 && tok.Bytes[len(tok.Bytes)-1] == '\n' {
- fakeNewline := Token{
- Type: TokenNewline,
- Bytes: tok.Bytes[len(tok.Bytes)-1 : len(tok.Bytes)],
-
- // We use the whole token range as the newline
- // range, even though that's a little... weird,
- // because otherwise we'd need to go count
- // characters again in order to figure out the
- // column of the newline, and that complexity
- // isn't justified when ranges of newlines are
- // so rarely printed anyway.
- Range: tok.Range,
- }
- return fakeNewline, i + 1
- }
- }
-
- continue
- }
- case TokenNewline:
- if !p.includingNewlines() {
- continue
- }
- }
-
- return tok, i + 1
- }
-
- // if we fall out here then we'll return the EOF token, and leave
- // our index pointed off the end of the array so we'll keep
- // returning EOF in future too.
- return p.Tokens[len(p.Tokens)-1], len(p.Tokens)
-}
-
-func (p *peeker) includingNewlines() bool {
- return p.IncludeNewlinesStack[len(p.IncludeNewlinesStack)-1]
-}
-
-func (p *peeker) PushIncludeNewlines(include bool) {
- if tracePeekerNewlinesStack {
- // Record who called us so that we can more easily track down any
- // mismanagement of the stack in the parser.
- callers := []uintptr{0}
- runtime.Callers(2, callers)
- frames := runtime.CallersFrames(callers)
- frame, _ := frames.Next()
- p.newlineStackChanges = append(p.newlineStackChanges, peekerNewlineStackChange{
- true, frame, include,
- })
- }
-
- p.IncludeNewlinesStack = append(p.IncludeNewlinesStack, include)
-}
-
-func (p *peeker) PopIncludeNewlines() bool {
- stack := p.IncludeNewlinesStack
- remain, ret := stack[:len(stack)-1], stack[len(stack)-1]
- p.IncludeNewlinesStack = remain
-
- if tracePeekerNewlinesStack {
- // Record who called us so that we can more easily track down any
- // mismanagement of the stack in the parser.
- callers := []uintptr{0}
- runtime.Callers(2, callers)
- frames := runtime.CallersFrames(callers)
- frame, _ := frames.Next()
- p.newlineStackChanges = append(p.newlineStackChanges, peekerNewlineStackChange{
- false, frame, ret,
- })
- }
-
- return ret
-}
-
-// AssertEmptyIncludeNewlinesStack checks that the IncludeNewlinesStack has
-// been unwound back to its initial state, panicking if it has not. This can
-// be used to catch stack mismanagement that might otherwise just cause
-// confusing downstream errors.
-//
-// This function is a no-op if the stack is in its initial state when called.
-//
-// If newlines stack tracing is enabled by setting the global variable
-// tracePeekerNewlinesStack at init time, a full log of all of the push/pop
-// calls will be produced to help identify which caller in the parser is
-// misbehaving.
-func (p *peeker) AssertEmptyIncludeNewlinesStack() {
- if len(p.IncludeNewlinesStack) != 1 {
- // Should never happen; indicates mismanagement of the stack inside
- // the parser.
-		if p.newlineStackChanges != nil { // only if tracePeekerNewlinesStack is enabled above
-			panic(fmt.Errorf(
-				"non-empty IncludeNewlinesStack after parse with %d calls unaccounted for:\n%s",
-				len(p.IncludeNewlinesStack)-1,
-				formatPeekerNewlineStackChanges(p.newlineStackChanges),
-			))
-		} else {
-			panic(fmt.Errorf("non-empty IncludeNewlinesStack after parse: %#v", p.IncludeNewlinesStack))
-		}
-	}
-}
-
-func formatPeekerNewlineStackChanges(changes []peekerNewlineStackChange) string {
-	indent := 0
-	var buf bytes.Buffer
-	for _, change := range changes {
-		funcName := change.Frame.Function
-		if idx := strings.LastIndexByte(funcName, '.'); idx != -1 {
-			funcName = funcName[idx+1:]
-		}
-		filename := change.Frame.File
-		if idx := strings.LastIndexByte(filename, filepath.Separator); idx != -1 {
-			filename = filename[idx+1:]
-		}
-
-		switch change.Pushing {
-
-		case true:
-			buf.WriteString(strings.Repeat(" ", indent))
-			fmt.Fprintf(&buf, "PUSH %#v (%s at %s:%d)\n", change.Include, funcName, filename, change.Frame.Line)
-			indent++
-
-		case false:
-			indent--
-			buf.WriteString(strings.Repeat(" ", indent))
-			fmt.Fprintf(&buf, "POP %#v (%s at %s:%d)\n", change.Include, funcName, filename, change.Frame.Line)
-
-		}
-	}
-	return buf.String()
-}
diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/public.go b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/public.go
deleted file mode 100644
index cf0ee2976..000000000
--- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/public.go
+++ /dev/null
@@ -1,171 +0,0 @@
-package hclsyntax
-
-import (
-	"github.com/hashicorp/hcl2/hcl"
-)
-
-// ParseConfig parses the given buffer as a whole HCL config file, returning
-// a *hcl.File representing its contents. If HasErrors called on the returned
-// diagnostics returns true, the returned body is likely to be incomplete
-// and should therefore be used with care.
-//
-// The body in the returned file has dynamic type *hclsyntax.Body, so callers
-// may freely type-assert this to get access to the full hclsyntax API in
-// situations where detailed access is required. However, most common use-cases
-// should be served using the hcl.Body interface to ensure compatibility with
-// other configuration syntaxes, such as JSON.
-func ParseConfig(src []byte, filename string, start hcl.Pos) (*hcl.File, hcl.Diagnostics) {
-	tokens, diags := LexConfig(src, filename, start)
-	peeker := newPeeker(tokens, false)
-	parser := &parser{peeker: peeker}
-	body, parseDiags := parser.ParseBody(TokenEOF)
-	diags = append(diags, parseDiags...)
-
-	// Panic if the parser uses incorrect stack discipline with the peeker's
-	// newlines stack, since otherwise it will produce confusing downstream
-	// errors.
-	peeker.AssertEmptyIncludeNewlinesStack()
-
-	return &hcl.File{
-		Body:  body,
-		Bytes: src,
-
-		Nav: navigation{
-			root: body,
-		},
-	}, diags
-}
-
-// ParseExpression parses the given buffer as a standalone HCL expression,
-// returning it as an instance of Expression.
-func ParseExpression(src []byte, filename string, start hcl.Pos) (Expression, hcl.Diagnostics) {
-	tokens, diags := LexExpression(src, filename, start)
-	peeker := newPeeker(tokens, false)
-	parser := &parser{peeker: peeker}
-
-	// Bare expressions are always parsed in "ignore newlines" mode, as if
-	// they were wrapped in parentheses.
-	parser.PushIncludeNewlines(false)
-
-	expr, parseDiags := parser.ParseExpression()
-	diags = append(diags, parseDiags...)
-
-	next := parser.Peek()
-	if next.Type != TokenEOF && !parser.recovery {
-		diags = append(diags, &hcl.Diagnostic{
-			Severity: hcl.DiagError,
-			Summary:  "Extra characters after expression",
-			Detail:   "An expression was successfully parsed, but extra characters were found after it.",
-			Subject:  &next.Range,
-		})
-	}
-
-	parser.PopIncludeNewlines()
-
-	// Panic if the parser uses incorrect stack discipline with the peeker's
-	// newlines stack, since otherwise it will produce confusing downstream
-	// errors.
-	peeker.AssertEmptyIncludeNewlinesStack()
-
-	return expr, diags
-}
-
-// ParseTemplate parses the given buffer as a standalone HCL template,
-// returning it as an instance of Expression.
-func ParseTemplate(src []byte, filename string, start hcl.Pos) (Expression, hcl.Diagnostics) {
-	tokens, diags := LexTemplate(src, filename, start)
-	peeker := newPeeker(tokens, false)
-	parser := &parser{peeker: peeker}
-	expr, parseDiags := parser.ParseTemplate()
-	diags = append(diags, parseDiags...)
-
-	// Panic if the parser uses incorrect stack discipline with the peeker's
-	// newlines stack, since otherwise it will produce confusing downstream
-	// errors.
-	peeker.AssertEmptyIncludeNewlinesStack()
-
-	return expr, diags
-}
-
-// ParseTraversalAbs parses the given buffer as a standalone absolute traversal.
-//
-// Parsing as a traversal is more limited than parsing as an expression since
-// it allows only attribute and indexing operations on variables. Traversals
-// are useful as a syntax for referring to objects without necessarily
-// evaluating them.
-func ParseTraversalAbs(src []byte, filename string, start hcl.Pos) (hcl.Traversal, hcl.Diagnostics) {
-	tokens, diags := LexExpression(src, filename, start)
-	peeker := newPeeker(tokens, false)
-	parser := &parser{peeker: peeker}
-
-	// Bare traversals are always parsed in "ignore newlines" mode, as if
-	// they were wrapped in parentheses.
-	parser.PushIncludeNewlines(false)
-
-	expr, parseDiags := parser.ParseTraversalAbs()
-	diags = append(diags, parseDiags...)
-
-	parser.PopIncludeNewlines()
-
-	// Panic if the parser uses incorrect stack discipline with the peeker's
-	// newlines stack, since otherwise it will produce confusing downstream
-	// errors.
-	peeker.AssertEmptyIncludeNewlinesStack()
-
-	return expr, diags
-}
-
-// LexConfig performs lexical analysis on the given buffer, treating it as a
-// whole HCL config file, and returns the resulting tokens.
-//
-// Only minimal validation is done during lexical analysis, so the returned
-// diagnostics may include errors about lexical issues such as bad character
-// encodings or unrecognized characters, but full parsing is required to
-// detect _all_ syntax errors.
-func LexConfig(src []byte, filename string, start hcl.Pos) (Tokens, hcl.Diagnostics) {
-	tokens := scanTokens(src, filename, start, scanNormal)
-	diags := checkInvalidTokens(tokens)
-	return tokens, diags
-}
-
-// LexExpression performs lexical analysis on the given buffer, treating it as
-// a standalone HCL expression, and returns the resulting tokens.
-//
-// Only minimal validation is done during lexical analysis, so the returned
-// diagnostics may include errors about lexical issues such as bad character
-// encodings or unrecognized characters, but full parsing is required to
-// detect _all_ syntax errors.
-func LexExpression(src []byte, filename string, start hcl.Pos) (Tokens, hcl.Diagnostics) {
-	// This is actually just the same thing as LexConfig, since configs
-	// and expressions lex in the same way.
- tokens := scanTokens(src, filename, start, scanNormal) - diags := checkInvalidTokens(tokens) - return tokens, diags -} - -// LexTemplate performs lexical analysis on the given buffer, treating it as a -// standalone HCL template, and returns the resulting tokens. -// -// Only minimal validation is done during lexical analysis, so the returned -// diagnostics may include errors about lexical issues such as bad character -// encodings or unrecognized characters, but full parsing is required to -// detect _all_ syntax errors. -func LexTemplate(src []byte, filename string, start hcl.Pos) (Tokens, hcl.Diagnostics) { - tokens := scanTokens(src, filename, start, scanTemplate) - diags := checkInvalidTokens(tokens) - return tokens, diags -} - -// ValidIdentifier tests if the given string could be a valid identifier in -// a native syntax expression. -// -// This is useful when accepting names from the user that will be used as -// variable or attribute names in the scope, to ensure that any name chosen -// will be traversable using the variable or attribute traversal syntax. -func ValidIdentifier(s string) bool { - // This is a kinda-expensive way to do something pretty simple, but it - // is easiest to do with our existing scanner-related infrastructure here - // and nobody should be validating identifiers in a tight loop. - tokens := scanTokens([]byte(s), "", hcl.Pos{}, scanIdentOnly) - return len(tokens) == 2 && tokens[0].Type == TokenIdent && tokens[1].Type == TokenEOF -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/scan_string_lit.go b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/scan_string_lit.go deleted file mode 100644 index 2895ade75..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/scan_string_lit.go +++ /dev/null @@ -1,301 +0,0 @@ -//line scan_string_lit.rl:1 - -package hclsyntax - -// This file is generated from scan_string_lit.rl. DO NOT EDIT. 
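The public.go entry points deleted above all compose the same pipeline: lex the buffer, wrap the tokens in a peeker, parse, then assert that the peeker's newlines stack was balanced. A minimal usage sketch, assuming the vendored import paths shown in the diff headers; the file name and source text here are illustrative only:

package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
)

func main() {
	src := []byte(`io_mode = "async"`) // illustrative input

	// Ordinary syntax problems come back as diagnostics; the panic inside
	// ParseConfig fires only on parser-internal newlines-stack misuse.
	f, diags := hclsyntax.ParseConfig(src, "example.hcl", hcl.Pos{Line: 1, Column: 1, Byte: 0})
	if diags.HasErrors() {
		fmt.Println(diags.Error())
		return
	}

	// Per ParseConfig's doc comment, the body's dynamic type is *hclsyntax.Body.
	body := f.Body.(*hclsyntax.Body)
	fmt.Printf("parsed %d attribute(s)\n", len(body.Attributes))

	// ValidIdentifier pre-checks a name before it is used in a scope.
	fmt.Println(hclsyntax.ValidIdentifier("io_mode")) // true
}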
- -//line scan_string_lit.go:9 -var _hclstrtok_actions []byte = []byte{ - 0, 1, 0, 1, 1, 2, 1, 0, -} - -var _hclstrtok_key_offsets []byte = []byte{ - 0, 0, 2, 4, 6, 10, 14, 18, - 22, 27, 31, 36, 41, 46, 51, 57, - 62, 74, 85, 96, 107, 118, 129, 140, - 151, -} - -var _hclstrtok_trans_keys []byte = []byte{ - 128, 191, 128, 191, 128, 191, 10, 13, - 36, 37, 10, 13, 36, 37, 10, 13, - 36, 37, 10, 13, 36, 37, 10, 13, - 36, 37, 123, 10, 13, 36, 37, 10, - 13, 36, 37, 92, 10, 13, 36, 37, - 92, 10, 13, 36, 37, 92, 10, 13, - 36, 37, 92, 10, 13, 36, 37, 92, - 123, 10, 13, 36, 37, 92, 85, 117, - 128, 191, 192, 223, 224, 239, 240, 247, - 248, 255, 10, 13, 36, 37, 92, 48, - 57, 65, 70, 97, 102, 10, 13, 36, - 37, 92, 48, 57, 65, 70, 97, 102, - 10, 13, 36, 37, 92, 48, 57, 65, - 70, 97, 102, 10, 13, 36, 37, 92, - 48, 57, 65, 70, 97, 102, 10, 13, - 36, 37, 92, 48, 57, 65, 70, 97, - 102, 10, 13, 36, 37, 92, 48, 57, - 65, 70, 97, 102, 10, 13, 36, 37, - 92, 48, 57, 65, 70, 97, 102, 10, - 13, 36, 37, 92, 48, 57, 65, 70, - 97, 102, -} - -var _hclstrtok_single_lengths []byte = []byte{ - 0, 0, 0, 0, 4, 4, 4, 4, - 5, 4, 5, 5, 5, 5, 6, 5, - 2, 5, 5, 5, 5, 5, 5, 5, - 5, -} - -var _hclstrtok_range_lengths []byte = []byte{ - 0, 1, 1, 1, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 5, 3, 3, 3, 3, 3, 3, 3, - 3, -} - -var _hclstrtok_index_offsets []byte = []byte{ - 0, 0, 2, 4, 6, 11, 16, 21, - 26, 32, 37, 43, 49, 55, 61, 68, - 74, 82, 91, 100, 109, 118, 127, 136, - 145, -} - -var _hclstrtok_indicies []byte = []byte{ - 0, 1, 2, 1, 3, 1, 5, 6, - 7, 8, 4, 10, 11, 12, 13, 9, - 14, 11, 12, 13, 9, 10, 11, 15, - 13, 9, 10, 11, 12, 13, 14, 9, - 10, 11, 12, 15, 9, 17, 18, 19, - 20, 21, 16, 23, 24, 25, 26, 27, - 22, 0, 24, 25, 26, 27, 22, 23, - 24, 28, 26, 27, 22, 23, 24, 25, - 26, 27, 0, 22, 23, 24, 25, 28, - 27, 22, 29, 30, 22, 2, 3, 31, - 22, 0, 23, 24, 25, 26, 27, 32, - 32, 32, 22, 23, 24, 25, 26, 27, - 33, 33, 33, 22, 23, 24, 25, 26, - 27, 34, 34, 34, 22, 23, 24, 25, - 26, 27, 30, 30, 30, 22, 23, 24, - 25, 26, 27, 35, 35, 35, 22, 23, - 24, 25, 26, 27, 36, 36, 36, 22, - 23, 24, 25, 26, 27, 37, 37, 37, - 22, 23, 24, 25, 26, 27, 0, 0, - 0, 22, -} - -var _hclstrtok_trans_targs []byte = []byte{ - 11, 0, 1, 2, 4, 5, 6, 7, - 9, 4, 5, 6, 7, 9, 5, 8, - 10, 11, 12, 13, 15, 16, 10, 11, - 12, 13, 15, 16, 14, 17, 21, 3, - 18, 19, 20, 22, 23, 24, -} - -var _hclstrtok_trans_actions []byte = []byte{ - 0, 0, 0, 0, 0, 1, 1, 1, - 1, 3, 5, 5, 5, 5, 0, 0, - 0, 1, 1, 1, 1, 1, 3, 5, - 5, 5, 5, 5, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, -} - -var _hclstrtok_eof_actions []byte = []byte{ - 0, 0, 0, 0, 0, 3, 3, 3, - 3, 3, 0, 3, 3, 3, 3, 3, - 3, 3, 3, 3, 3, 3, 3, 3, - 3, -} - -const hclstrtok_start int = 4 -const hclstrtok_first_final int = 4 -const hclstrtok_error int = 0 - -const hclstrtok_en_quoted int = 10 -const hclstrtok_en_unquoted int = 4 - -//line scan_string_lit.rl:10 - -func scanStringLit(data []byte, quoted bool) [][]byte { - var ret [][]byte - -//line scan_string_lit.rl:61 - - // Ragel state - p := 0 // "Pointer" into data - pe := len(data) // End-of-data "pointer" - ts := 0 - te := 0 - eof := pe - - var cs int // current state - switch { - case quoted: - cs = hclstrtok_en_quoted - default: - cs = hclstrtok_en_unquoted - } - - // Make Go compiler happy - _ = ts - _ = eof - - /*token := func () { - ret = append(ret, data[ts:te]) - }*/ - -//line scan_string_lit.go:154 - { - } - -//line scan_string_lit.go:158 - { - var _klen int - var _trans int - var _acts int - var _nacts uint - var _keys int - if p == pe { - goto _test_eof - } - if cs == 0 { - goto _out - } 
- _resume: - _keys = int(_hclstrtok_key_offsets[cs]) - _trans = int(_hclstrtok_index_offsets[cs]) - - _klen = int(_hclstrtok_single_lengths[cs]) - if _klen > 0 { - _lower := int(_keys) - var _mid int - _upper := int(_keys + _klen - 1) - for { - if _upper < _lower { - break - } - - _mid = _lower + ((_upper - _lower) >> 1) - switch { - case data[p] < _hclstrtok_trans_keys[_mid]: - _upper = _mid - 1 - case data[p] > _hclstrtok_trans_keys[_mid]: - _lower = _mid + 1 - default: - _trans += int(_mid - int(_keys)) - goto _match - } - } - _keys += _klen - _trans += _klen - } - - _klen = int(_hclstrtok_range_lengths[cs]) - if _klen > 0 { - _lower := int(_keys) - var _mid int - _upper := int(_keys + (_klen << 1) - 2) - for { - if _upper < _lower { - break - } - - _mid = _lower + (((_upper - _lower) >> 1) & ^1) - switch { - case data[p] < _hclstrtok_trans_keys[_mid]: - _upper = _mid - 2 - case data[p] > _hclstrtok_trans_keys[_mid+1]: - _lower = _mid + 2 - default: - _trans += int((_mid - int(_keys)) >> 1) - goto _match - } - } - _trans += _klen - } - - _match: - _trans = int(_hclstrtok_indicies[_trans]) - cs = int(_hclstrtok_trans_targs[_trans]) - - if _hclstrtok_trans_actions[_trans] == 0 { - goto _again - } - - _acts = int(_hclstrtok_trans_actions[_trans]) - _nacts = uint(_hclstrtok_actions[_acts]) - _acts++ - for ; _nacts > 0; _nacts-- { - _acts++ - switch _hclstrtok_actions[_acts-1] { - case 0: -//line scan_string_lit.rl:40 - - // If te is behind p then we've skipped over some literal - // characters which we must now return. - if te < p { - ret = append(ret, data[te:p]) - } - ts = p - - case 1: -//line scan_string_lit.rl:48 - - te = p - ret = append(ret, data[ts:te]) - -//line scan_string_lit.go:253 - } - } - - _again: - if cs == 0 { - goto _out - } - p++ - if p != pe { - goto _resume - } - _test_eof: - { - } - if p == eof { - __acts := _hclstrtok_eof_actions[cs] - __nacts := uint(_hclstrtok_actions[__acts]) - __acts++ - for ; __nacts > 0; __nacts-- { - __acts++ - switch _hclstrtok_actions[__acts-1] { - case 1: -//line scan_string_lit.rl:48 - - te = p - ret = append(ret, data[ts:te]) - -//line scan_string_lit.go:278 - } - } - } - - _out: - { - } - } - -//line scan_string_lit.rl:89 - - if te < p { - // Collect any leftover literal characters at the end of the input - ret = append(ret, data[te:p]) - } - - // If we fall out here without being in a final state then we've - // encountered something that the scanner can't match, which should - // be impossible (the scanner matches all bytes _somehow_) but we'll - // tolerate it and let the caller deal with it. - if cs < hclstrtok_first_final { - ret = append(ret, data[p:len(data)]) - } - - return ret -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/scan_string_lit.rl b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/scan_string_lit.rl deleted file mode 100644 index f8ac11751..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/scan_string_lit.rl +++ /dev/null @@ -1,105 +0,0 @@ - -package hclsyntax - -// This file is generated from scan_string_lit.rl. DO NOT EDIT. -%%{ - # (except you are actually in scan_string_lit.rl here, so edit away!) - - machine hclstrtok; - write data; -}%% - -func scanStringLit(data []byte, quoted bool) [][]byte { - var ret [][]byte - - %%{ - include UnicodeDerived "unicode_derived.rl"; - - UTF8Cont = 0x80 .. 0xBF; - AnyUTF8 = ( - 0x00..0x7F | - 0xC0..0xDF . UTF8Cont | - 0xE0..0xEF . UTF8Cont . UTF8Cont | - 0xF0..0xF7 . UTF8Cont . UTF8Cont . 
UTF8Cont
-        );
-        BadUTF8 = any - AnyUTF8;
-
-        Hex = ('0'..'9' | 'a'..'f' | 'A'..'F');
-
-        # Our goal with these patterns is to capture user intent as best as
-        # possible, even if the input is invalid. The caller will then verify
-        # whether each token is valid and generate suitable error messages
-        # if not.
-        UnicodeEscapeShort = "\\u" . Hex{0,4};
-        UnicodeEscapeLong = "\\U" . Hex{0,8};
-        UnicodeEscape = (UnicodeEscapeShort | UnicodeEscapeLong);
-        SimpleEscape = "\\" . (AnyUTF8 - ('U'|'u'))?;
-        TemplateEscape = ("$" . ("$" . ("{"?))?) | ("%" . ("%" . ("{"?))?);
-        Newline = ("\r\n" | "\r" | "\n");
-
-        action Begin {
-            // If te is behind p then we've skipped over some literal
-            // characters which we must now return.
-            if te < p {
-                ret = append(ret, data[te:p])
-            }
-            ts = p;
-        }
-        action End {
-            te = p;
-            ret = append(ret, data[ts:te]);
-        }
-
-        QuotedToken = (UnicodeEscape | SimpleEscape | TemplateEscape | Newline) >Begin %End;
-        UnquotedToken = (TemplateEscape | Newline) >Begin %End;
-        QuotedLiteral = (any - ("\\" | "$" | "%" | "\r" | "\n"));
-        UnquotedLiteral = (any - ("$" | "%" | "\r" | "\n"));
-
-        quoted := (QuotedToken | QuotedLiteral)**;
-        unquoted := (UnquotedToken | UnquotedLiteral)**;
-
-    }%%
-
-    // Ragel state
-    p := 0  // "Pointer" into data
-    pe := len(data) // End-of-data "pointer"
-    ts := 0
-    te := 0
-    eof := pe
-
-    var cs int // current state
-    switch {
-    case quoted:
-        cs = hclstrtok_en_quoted
-    default:
-        cs = hclstrtok_en_unquoted
-    }
-
-    // Make Go compiler happy
-    _ = ts
-    _ = eof
-
-    /*token := func () {
-        ret = append(ret, data[ts:te])
-    }*/
-
-    %%{
-        write init nocs;
-        write exec;
-    }%%
-
-    if te < p {
-        // Collect any leftover literal characters at the end of the input
-        ret = append(ret, data[te:p])
-    }
-
-    // If we fall out here without being in a final state then we've
-    // encountered something that the scanner can't match, which should
-    // be impossible (the scanner matches all bytes _somehow_) but we'll
-    // tolerate it and let the caller deal with it.
-    if cs < hclstrtok_first_final {
-        ret = append(ret, data[p:len(data)])
-    }
-
-    return ret
-}
diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/scan_tokens.go b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/scan_tokens.go
deleted file mode 100644
index 581e35e00..000000000
--- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/scan_tokens.go
+++ /dev/null
@@ -1,5265 +0,0 @@
-//line scan_tokens.rl:1
-
-package hclsyntax
-
-import (
-	"bytes"
-
-	"github.com/hashicorp/hcl2/hcl"
-)
-
-// This file is generated from scan_tokens.rl. DO NOT EDIT.
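The scan_string_lit.rl grammar above splits a string literal's bytes into literal runs alternating with escape and template-introducer tokens, deferring validation to the caller. A hedged, in-package sketch of the segmentation that grammar appears to imply; debugDumpStringLit is a hypothetical helper (scanStringLit is unexported), and the expected output is inferred from the patterns rather than taken from a test:

package hclsyntax

import "fmt"

// debugDumpStringLit is a hypothetical helper illustrating scanStringLit's
// output for a quoted literal containing a simple escape and an escaped
// template introducer.
func debugDumpStringLit() {
	for _, seg := range scanStringLit([]byte(`a\nb$${c`), true) {
		fmt.Printf("%q\n", seg)
	}
	// Expected, approximately: "a", "\\n", "b", "$${", "c". Literal runs
	// alternate with a SimpleEscape token and a TemplateEscape token, and
	// the trailing literal is collected after the scan loop.
}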
- -//line scan_tokens.go:15 -var _hcltok_actions []byte = []byte{ - 0, 1, 0, 1, 1, 1, 3, 1, 4, - 1, 7, 1, 8, 1, 9, 1, 10, - 1, 11, 1, 12, 1, 13, 1, 14, - 1, 15, 1, 16, 1, 17, 1, 18, - 1, 19, 1, 20, 1, 23, 1, 24, - 1, 25, 1, 26, 1, 27, 1, 28, - 1, 29, 1, 30, 1, 31, 1, 32, - 1, 35, 1, 36, 1, 37, 1, 38, - 1, 39, 1, 40, 1, 41, 1, 42, - 1, 43, 1, 44, 1, 47, 1, 48, - 1, 49, 1, 50, 1, 51, 1, 52, - 1, 53, 1, 56, 1, 57, 1, 58, - 1, 59, 1, 60, 1, 61, 1, 62, - 1, 63, 1, 64, 1, 65, 1, 66, - 1, 67, 1, 68, 1, 69, 1, 70, - 1, 71, 1, 72, 1, 73, 1, 74, - 1, 75, 1, 76, 1, 77, 1, 78, - 1, 79, 1, 80, 1, 81, 1, 82, - 1, 83, 1, 84, 1, 85, 2, 0, - 14, 2, 0, 25, 2, 0, 29, 2, - 0, 37, 2, 0, 41, 2, 1, 2, - 2, 4, 5, 2, 4, 6, 2, 4, - 21, 2, 4, 22, 2, 4, 33, 2, - 4, 34, 2, 4, 45, 2, 4, 46, - 2, 4, 54, 2, 4, 55, -} - -var _hcltok_key_offsets []int16 = []int16{ - 0, 0, 1, 2, 4, 9, 13, 15, - 57, 98, 144, 145, 149, 155, 155, 157, - 159, 168, 174, 181, 182, 185, 186, 190, - 195, 204, 208, 212, 220, 222, 224, 226, - 229, 261, 263, 265, 269, 273, 276, 287, - 300, 319, 332, 348, 360, 376, 391, 412, - 422, 434, 445, 459, 474, 484, 496, 505, - 517, 519, 523, 544, 553, 563, 569, 575, - 576, 625, 627, 631, 633, 639, 646, 654, - 661, 664, 670, 674, 678, 680, 684, 688, - 692, 698, 706, 714, 720, 722, 726, 728, - 734, 738, 742, 746, 750, 755, 762, 768, - 770, 772, 776, 778, 784, 788, 792, 802, - 807, 821, 836, 838, 846, 848, 853, 867, - 872, 874, 878, 879, 883, 889, 895, 905, - 915, 926, 934, 937, 940, 944, 948, 950, - 953, 953, 956, 958, 988, 990, 992, 996, - 1001, 1005, 1010, 1012, 1014, 1016, 1025, 1029, - 1033, 1039, 1041, 1049, 1057, 1069, 1072, 1078, - 1082, 1084, 1088, 1108, 1110, 1112, 1123, 1129, - 1131, 1133, 1135, 1139, 1145, 1151, 1153, 1158, - 1162, 1164, 1172, 1190, 1230, 1240, 1244, 1246, - 1248, 1249, 1253, 1257, 1261, 1265, 1269, 1274, - 1278, 1282, 1286, 1288, 1290, 1294, 1304, 1308, - 1310, 1314, 1318, 1322, 1335, 1337, 1339, 1343, - 1345, 1349, 1351, 1353, 1383, 1387, 1391, 1395, - 1398, 1405, 1410, 1421, 1425, 1441, 1455, 1459, - 1464, 1468, 1472, 1478, 1480, 1486, 1488, 1492, - 1494, 1500, 1505, 1510, 1520, 1522, 1524, 1528, - 1532, 1534, 1547, 1549, 1553, 1557, 1565, 1567, - 1571, 1573, 1574, 1577, 1582, 1584, 1586, 1590, - 1592, 1596, 1602, 1622, 1628, 1634, 1636, 1637, - 1647, 1648, 1656, 1663, 1665, 1668, 1670, 1672, - 1674, 1679, 1683, 1687, 1692, 1702, 1712, 1716, - 1720, 1734, 1760, 1770, 1772, 1774, 1777, 1779, - 1782, 1784, 1788, 1790, 1791, 1795, 1797, 1800, - 1807, 1815, 1817, 1819, 1823, 1825, 1831, 1842, - 1845, 1847, 1851, 1856, 1886, 1891, 1893, 1896, - 1901, 1915, 1922, 1936, 1941, 1954, 1958, 1971, - 1976, 1994, 1995, 2004, 2008, 2020, 2025, 2032, - 2039, 2046, 2048, 2052, 2074, 2079, 2080, 2084, - 2086, 2136, 2139, 2150, 2154, 2156, 2162, 2168, - 2170, 2175, 2177, 2181, 2183, 2184, 2186, 2188, - 2194, 2196, 2198, 2202, 2208, 2221, 2223, 2229, - 2233, 2241, 2252, 2260, 2263, 2293, 2299, 2302, - 2307, 2309, 2313, 2317, 2321, 2323, 2330, 2332, - 2341, 2348, 2356, 2358, 2378, 2390, 2394, 2396, - 2414, 2453, 2455, 2459, 2461, 2468, 2472, 2500, - 2502, 2504, 2506, 2508, 2511, 2513, 2517, 2521, - 2523, 2526, 2528, 2530, 2533, 2535, 2537, 2538, - 2540, 2542, 2546, 2550, 2553, 2566, 2568, 2574, - 2578, 2580, 2584, 2588, 2602, 2605, 2614, 2616, - 2620, 2626, 2626, 2628, 2630, 2639, 2645, 2652, - 2653, 2656, 2657, 2661, 2666, 2675, 2679, 2683, - 2691, 2693, 2695, 2697, 2700, 2732, 2734, 2736, - 2740, 2744, 2747, 2758, 2771, 2790, 2803, 2819, - 2831, 2847, 2862, 2883, 2893, 2905, 2916, 2930, - 2945, 2955, 
2967, 2976, 2988, 2990, 2994, 3015, - 3024, 3034, 3040, 3046, 3047, 3096, 3098, 3102, - 3104, 3110, 3117, 3125, 3132, 3135, 3141, 3145, - 3149, 3151, 3155, 3159, 3163, 3169, 3177, 3185, - 3191, 3193, 3197, 3199, 3205, 3209, 3213, 3217, - 3221, 3226, 3233, 3239, 3241, 3243, 3247, 3249, - 3255, 3259, 3263, 3273, 3278, 3292, 3307, 3309, - 3317, 3319, 3324, 3338, 3343, 3345, 3349, 3350, - 3354, 3360, 3366, 3376, 3386, 3397, 3405, 3408, - 3411, 3415, 3419, 3421, 3424, 3424, 3427, 3429, - 3459, 3461, 3463, 3467, 3472, 3476, 3481, 3483, - 3485, 3487, 3496, 3500, 3504, 3510, 3512, 3520, - 3528, 3540, 3543, 3549, 3553, 3555, 3559, 3579, - 3581, 3583, 3594, 3600, 3602, 3604, 3606, 3610, - 3616, 3622, 3624, 3629, 3633, 3635, 3643, 3661, - 3701, 3711, 3715, 3717, 3719, 3720, 3724, 3728, - 3732, 3736, 3740, 3745, 3749, 3753, 3757, 3759, - 3761, 3765, 3775, 3779, 3781, 3785, 3789, 3793, - 3806, 3808, 3810, 3814, 3816, 3820, 3822, 3824, - 3854, 3858, 3862, 3866, 3869, 3876, 3881, 3892, - 3896, 3912, 3926, 3930, 3935, 3939, 3943, 3949, - 3951, 3957, 3959, 3963, 3965, 3971, 3976, 3981, - 3991, 3993, 3995, 3999, 4003, 4005, 4018, 4020, - 4024, 4028, 4036, 4038, 4042, 4044, 4045, 4048, - 4053, 4055, 4057, 4061, 4063, 4067, 4073, 4093, - 4099, 4105, 4107, 4108, 4118, 4119, 4127, 4134, - 4136, 4139, 4141, 4143, 4145, 4150, 4154, 4158, - 4163, 4173, 4183, 4187, 4191, 4205, 4231, 4241, - 4243, 4245, 4248, 4250, 4253, 4255, 4259, 4261, - 4262, 4266, 4268, 4270, 4277, 4281, 4288, 4295, - 4304, 4320, 4332, 4350, 4361, 4373, 4381, 4399, - 4407, 4437, 4440, 4450, 4460, 4472, 4483, 4492, - 4505, 4517, 4521, 4527, 4554, 4563, 4566, 4571, - 4577, 4582, 4603, 4607, 4613, 4613, 4620, 4629, - 4637, 4640, 4644, 4650, 4656, 4659, 4663, 4670, - 4676, 4685, 4694, 4698, 4702, 4706, 4710, 4717, - 4721, 4725, 4735, 4741, 4745, 4751, 4755, 4758, - 4764, 4770, 4782, 4786, 4790, 4800, 4804, 4815, - 4817, 4819, 4823, 4835, 4840, 4864, 4868, 4874, - 4896, 4905, 4909, 4912, 4913, 4921, 4929, 4935, - 4945, 4952, 4970, 4973, 4976, 4984, 4990, 4994, - 4998, 5002, 5008, 5016, 5021, 5027, 5031, 5039, - 5046, 5050, 5057, 5063, 5071, 5079, 5085, 5091, - 5102, 5106, 5118, 5127, 5144, 5161, 5164, 5168, - 5170, 5176, 5178, 5182, 5197, 5201, 5205, 5209, - 5213, 5217, 5219, 5225, 5230, 5234, 5240, 5247, - 5250, 5268, 5270, 5315, 5321, 5327, 5331, 5335, - 5341, 5345, 5351, 5357, 5364, 5366, 5372, 5378, - 5382, 5386, 5394, 5407, 5413, 5420, 5428, 5434, - 5443, 5449, 5453, 5458, 5462, 5470, 5474, 5478, - 5508, 5514, 5520, 5526, 5532, 5539, 5545, 5552, - 5557, 5567, 5571, 5578, 5584, 5588, 5595, 5599, - 5605, 5608, 5612, 5616, 5620, 5624, 5629, 5634, - 5638, 5649, 5653, 5657, 5663, 5671, 5675, 5692, - 5696, 5702, 5712, 5718, 5724, 5727, 5732, 5741, - 5745, 5749, 5755, 5759, 5765, 5773, 5791, 5792, - 5802, 5803, 5812, 5820, 5822, 5825, 5827, 5829, - 5831, 5836, 5849, 5853, 5868, 5897, 5908, 5910, - 5914, 5918, 5923, 5927, 5929, 5936, 5940, 5948, - 5952, 5964, 5966, 5968, 5970, 5972, 5974, 5975, - 5977, 5979, 5981, 5983, 5985, 5986, 5988, 5990, - 5992, 5994, 5996, 6000, 6006, 6006, 6008, 6010, - 6019, 6025, 6032, 6033, 6036, 6037, 6041, 6046, - 6055, 6059, 6063, 6071, 6073, 6075, 6077, 6080, - 6112, 6114, 6116, 6120, 6124, 6127, 6138, 6151, - 6170, 6183, 6199, 6211, 6227, 6242, 6263, 6273, - 6285, 6296, 6310, 6325, 6335, 6347, 6356, 6368, - 6370, 6374, 6395, 6404, 6414, 6420, 6426, 6427, - 6476, 6478, 6482, 6484, 6490, 6497, 6505, 6512, - 6515, 6521, 6525, 6529, 6531, 6535, 6539, 6543, - 6549, 6557, 6565, 6571, 6573, 6577, 6579, 6585, - 6589, 6593, 
6597, 6601, 6606, 6613, 6619, 6621, - 6623, 6627, 6629, 6635, 6639, 6643, 6653, 6658, - 6672, 6687, 6689, 6697, 6699, 6704, 6718, 6723, - 6725, 6729, 6730, 6734, 6740, 6746, 6756, 6766, - 6777, 6785, 6788, 6791, 6795, 6799, 6801, 6804, - 6804, 6807, 6809, 6839, 6841, 6843, 6847, 6852, - 6856, 6861, 6863, 6865, 6867, 6876, 6880, 6884, - 6890, 6892, 6900, 6908, 6920, 6923, 6929, 6933, - 6935, 6939, 6959, 6961, 6963, 6974, 6980, 6982, - 6984, 6986, 6990, 6996, 7002, 7004, 7009, 7013, - 7015, 7023, 7041, 7081, 7091, 7095, 7097, 7099, - 7100, 7104, 7108, 7112, 7116, 7120, 7125, 7129, - 7133, 7137, 7139, 7141, 7145, 7155, 7159, 7161, - 7165, 7169, 7173, 7186, 7188, 7190, 7194, 7196, - 7200, 7202, 7204, 7234, 7238, 7242, 7246, 7249, - 7256, 7261, 7272, 7276, 7292, 7306, 7310, 7315, - 7319, 7323, 7329, 7331, 7337, 7339, 7343, 7345, - 7351, 7356, 7361, 7371, 7373, 7375, 7379, 7383, - 7385, 7398, 7400, 7404, 7408, 7416, 7418, 7422, - 7424, 7425, 7428, 7433, 7435, 7437, 7441, 7443, - 7447, 7453, 7473, 7479, 7485, 7487, 7488, 7498, - 7499, 7507, 7514, 7516, 7519, 7521, 7523, 7525, - 7530, 7534, 7538, 7543, 7553, 7563, 7567, 7571, - 7585, 7611, 7621, 7623, 7625, 7628, 7630, 7633, - 7635, 7639, 7641, 7642, 7646, 7648, 7650, 7657, - 7661, 7668, 7675, 7684, 7700, 7712, 7730, 7741, - 7753, 7761, 7779, 7787, 7817, 7820, 7830, 7840, - 7852, 7863, 7872, 7885, 7897, 7901, 7907, 7934, - 7943, 7946, 7951, 7957, 7962, 7983, 7987, 7993, - 7993, 8000, 8009, 8017, 8020, 8024, 8030, 8036, - 8039, 8043, 8050, 8056, 8065, 8074, 8078, 8082, - 8086, 8090, 8097, 8101, 8105, 8115, 8121, 8125, - 8131, 8135, 8138, 8144, 8150, 8162, 8166, 8170, - 8180, 8184, 8195, 8197, 8199, 8203, 8215, 8220, - 8244, 8248, 8254, 8276, 8285, 8289, 8292, 8293, - 8301, 8309, 8315, 8325, 8332, 8350, 8353, 8356, - 8364, 8370, 8374, 8378, 8382, 8388, 8396, 8401, - 8407, 8411, 8419, 8426, 8430, 8437, 8443, 8451, - 8459, 8465, 8471, 8482, 8486, 8498, 8507, 8524, - 8541, 8544, 8548, 8550, 8556, 8558, 8562, 8577, - 8581, 8585, 8589, 8593, 8597, 8599, 8605, 8610, - 8614, 8620, 8627, 8630, 8648, 8650, 8695, 8701, - 8707, 8711, 8715, 8721, 8725, 8731, 8737, 8744, - 8746, 8752, 8758, 8762, 8766, 8774, 8787, 8793, - 8800, 8808, 8814, 8823, 8829, 8833, 8838, 8842, - 8850, 8854, 8858, 8888, 8894, 8900, 8906, 8912, - 8919, 8925, 8932, 8937, 8947, 8951, 8958, 8964, - 8968, 8975, 8979, 8985, 8988, 8992, 8996, 9000, - 9004, 9009, 9014, 9018, 9029, 9033, 9037, 9043, - 9051, 9055, 9072, 9076, 9082, 9092, 9098, 9104, - 9107, 9112, 9121, 9125, 9129, 9135, 9139, 9145, - 9153, 9171, 9172, 9182, 9183, 9192, 9200, 9202, - 9205, 9207, 9209, 9211, 9216, 9229, 9233, 9248, - 9277, 9288, 9290, 9294, 9298, 9303, 9307, 9309, - 9316, 9320, 9328, 9332, 9407, 9409, 9410, 9411, - 9412, 9413, 9414, 9416, 9421, 9423, 9425, 9426, - 9470, 9471, 9472, 9474, 9479, 9483, 9483, 9485, - 9487, 9498, 9508, 9516, 9517, 9519, 9520, 9524, - 9528, 9538, 9542, 9549, 9560, 9567, 9571, 9577, - 9588, 9620, 9669, 9684, 9699, 9704, 9706, 9711, - 9743, 9751, 9753, 9775, 9797, 9799, 9815, 9831, - 9833, 9835, 9835, 9836, 9837, 9838, 9840, 9841, - 9853, 9855, 9857, 9859, 9873, 9887, 9889, 9892, - 9895, 9897, 9898, 9899, 9901, 9903, 9905, 9919, - 9933, 9935, 9938, 9941, 9943, 9944, 9945, 9947, - 9949, 9951, 10000, 10044, 10046, 10051, 10055, 10055, - 10057, 10059, 10070, 10080, 10088, 10089, 10091, 10092, - 10096, 10100, 10110, 10114, 10121, 10132, 10139, 10143, - 10149, 10160, 10192, 10241, 10256, 10271, 10276, 10278, - 10283, 10315, 10323, 10325, 10347, 10369, -} - -var _hcltok_trans_keys []byte = 
[]byte{ - 46, 42, 42, 47, 46, 69, 101, 48, - 57, 43, 45, 48, 57, 48, 57, 45, - 95, 194, 195, 198, 199, 203, 205, 206, - 207, 210, 212, 213, 214, 215, 216, 217, - 219, 220, 221, 222, 223, 224, 225, 226, - 227, 228, 233, 234, 237, 239, 240, 65, - 90, 97, 122, 196, 202, 208, 218, 229, - 236, 95, 194, 195, 198, 199, 203, 205, - 206, 207, 210, 212, 213, 214, 215, 216, - 217, 219, 220, 221, 222, 223, 224, 225, - 226, 227, 228, 233, 234, 237, 239, 240, - 65, 90, 97, 122, 196, 202, 208, 218, - 229, 236, 10, 13, 45, 95, 194, 195, - 198, 199, 203, 204, 205, 206, 207, 210, - 212, 213, 214, 215, 216, 217, 219, 220, - 221, 222, 223, 224, 225, 226, 227, 228, - 233, 234, 237, 239, 240, 243, 48, 57, - 65, 90, 97, 122, 196, 218, 229, 236, - 10, 170, 181, 183, 186, 128, 150, 152, - 182, 184, 255, 192, 255, 128, 255, 173, - 130, 133, 146, 159, 165, 171, 175, 255, - 181, 190, 184, 185, 192, 255, 140, 134, - 138, 142, 161, 163, 255, 182, 130, 136, - 137, 176, 151, 152, 154, 160, 190, 136, - 144, 192, 255, 135, 129, 130, 132, 133, - 144, 170, 176, 178, 144, 154, 160, 191, - 128, 169, 174, 255, 148, 169, 157, 158, - 189, 190, 192, 255, 144, 255, 139, 140, - 178, 255, 186, 128, 181, 160, 161, 162, - 163, 164, 165, 166, 167, 168, 169, 170, - 171, 172, 173, 174, 175, 176, 177, 178, - 179, 180, 181, 182, 183, 184, 185, 186, - 187, 188, 189, 190, 191, 128, 173, 128, - 155, 160, 180, 182, 189, 148, 161, 163, - 255, 176, 164, 165, 132, 169, 177, 141, - 142, 145, 146, 179, 181, 186, 187, 158, - 133, 134, 137, 138, 143, 150, 152, 155, - 164, 165, 178, 255, 188, 129, 131, 133, - 138, 143, 144, 147, 168, 170, 176, 178, - 179, 181, 182, 184, 185, 190, 255, 157, - 131, 134, 137, 138, 142, 144, 146, 152, - 159, 165, 182, 255, 129, 131, 133, 141, - 143, 145, 147, 168, 170, 176, 178, 179, - 181, 185, 188, 255, 134, 138, 142, 143, - 145, 159, 164, 165, 176, 184, 186, 255, - 129, 131, 133, 140, 143, 144, 147, 168, - 170, 176, 178, 179, 181, 185, 188, 191, - 177, 128, 132, 135, 136, 139, 141, 150, - 151, 156, 157, 159, 163, 166, 175, 156, - 130, 131, 133, 138, 142, 144, 146, 149, - 153, 154, 158, 159, 163, 164, 168, 170, - 174, 185, 190, 191, 144, 151, 128, 130, - 134, 136, 138, 141, 166, 175, 128, 131, - 133, 140, 142, 144, 146, 168, 170, 185, - 189, 255, 133, 137, 151, 142, 148, 155, - 159, 164, 165, 176, 255, 128, 131, 133, - 140, 142, 144, 146, 168, 170, 179, 181, - 185, 188, 191, 158, 128, 132, 134, 136, - 138, 141, 149, 150, 160, 163, 166, 175, - 177, 178, 129, 131, 133, 140, 142, 144, - 146, 186, 189, 255, 133, 137, 143, 147, - 152, 158, 164, 165, 176, 185, 192, 255, - 189, 130, 131, 133, 150, 154, 177, 179, - 187, 138, 150, 128, 134, 143, 148, 152, - 159, 166, 175, 178, 179, 129, 186, 128, - 142, 144, 153, 132, 138, 141, 165, 167, - 129, 130, 135, 136, 148, 151, 153, 159, - 161, 163, 170, 171, 173, 185, 187, 189, - 134, 128, 132, 136, 141, 144, 153, 156, - 159, 128, 181, 183, 185, 152, 153, 160, - 169, 190, 191, 128, 135, 137, 172, 177, - 191, 128, 132, 134, 151, 153, 188, 134, - 128, 129, 130, 131, 137, 138, 139, 140, - 141, 142, 143, 144, 153, 154, 155, 156, - 157, 158, 159, 160, 161, 162, 163, 164, - 165, 166, 167, 168, 169, 170, 173, 175, - 176, 177, 178, 179, 181, 182, 183, 188, - 189, 190, 191, 132, 152, 172, 184, 185, - 187, 128, 191, 128, 137, 144, 255, 158, - 159, 134, 187, 136, 140, 142, 143, 137, - 151, 153, 142, 143, 158, 159, 137, 177, - 142, 143, 182, 183, 191, 255, 128, 130, - 133, 136, 150, 152, 255, 145, 150, 151, - 155, 156, 160, 168, 178, 255, 128, 143, - 160, 255, 182, 183, 190, 255, 129, 255, - 173, 
174, 192, 255, 129, 154, 160, 255, - 171, 173, 185, 255, 128, 140, 142, 148, - 160, 180, 128, 147, 160, 172, 174, 176, - 178, 179, 148, 150, 152, 155, 158, 159, - 170, 255, 139, 141, 144, 153, 160, 255, - 184, 255, 128, 170, 176, 255, 182, 255, - 128, 158, 160, 171, 176, 187, 134, 173, - 176, 180, 128, 171, 176, 255, 138, 143, - 155, 255, 128, 155, 160, 255, 159, 189, - 190, 192, 255, 167, 128, 137, 144, 153, - 176, 189, 140, 143, 154, 170, 180, 255, - 180, 255, 128, 183, 128, 137, 141, 189, - 128, 136, 144, 146, 148, 182, 184, 185, - 128, 181, 187, 191, 150, 151, 158, 159, - 152, 154, 156, 158, 134, 135, 142, 143, - 190, 255, 190, 128, 180, 182, 188, 130, - 132, 134, 140, 144, 147, 150, 155, 160, - 172, 178, 180, 182, 188, 128, 129, 130, - 131, 132, 133, 134, 176, 177, 178, 179, - 180, 181, 182, 183, 191, 255, 129, 147, - 149, 176, 178, 190, 192, 255, 144, 156, - 161, 144, 156, 165, 176, 130, 135, 149, - 164, 166, 168, 138, 147, 152, 157, 170, - 185, 188, 191, 142, 133, 137, 160, 255, - 137, 255, 128, 174, 176, 255, 159, 165, - 170, 180, 255, 167, 173, 128, 165, 176, - 255, 168, 174, 176, 190, 192, 255, 128, - 150, 160, 166, 168, 174, 176, 182, 184, - 190, 128, 134, 136, 142, 144, 150, 152, - 158, 160, 191, 128, 129, 130, 131, 132, - 133, 134, 135, 144, 145, 255, 133, 135, - 161, 175, 177, 181, 184, 188, 160, 151, - 152, 187, 192, 255, 133, 173, 177, 255, - 143, 159, 187, 255, 176, 191, 182, 183, - 184, 191, 192, 255, 150, 255, 128, 146, - 147, 148, 152, 153, 154, 155, 156, 158, - 159, 160, 161, 162, 163, 164, 165, 166, - 167, 168, 169, 170, 171, 172, 173, 174, - 175, 176, 129, 255, 141, 255, 144, 189, - 141, 143, 172, 255, 191, 128, 175, 180, - 189, 151, 159, 162, 255, 175, 137, 138, - 184, 255, 183, 255, 168, 255, 128, 179, - 188, 134, 143, 154, 159, 184, 186, 190, - 255, 128, 173, 176, 255, 148, 159, 189, - 255, 129, 142, 154, 159, 191, 255, 128, - 182, 128, 141, 144, 153, 160, 182, 186, - 255, 128, 130, 155, 157, 160, 175, 178, - 182, 129, 134, 137, 142, 145, 150, 160, - 166, 168, 174, 176, 255, 155, 166, 175, - 128, 170, 172, 173, 176, 185, 158, 159, - 160, 255, 164, 175, 135, 138, 188, 255, - 164, 169, 171, 172, 173, 174, 175, 180, - 181, 182, 183, 184, 185, 187, 188, 189, - 190, 191, 165, 186, 174, 175, 154, 255, - 190, 128, 134, 147, 151, 157, 168, 170, - 182, 184, 188, 128, 129, 131, 132, 134, - 255, 147, 255, 190, 255, 144, 145, 136, - 175, 188, 255, 128, 143, 160, 175, 179, - 180, 141, 143, 176, 180, 182, 255, 189, - 255, 191, 144, 153, 161, 186, 129, 154, - 166, 255, 191, 255, 130, 135, 138, 143, - 146, 151, 154, 156, 144, 145, 146, 147, - 148, 150, 151, 152, 155, 157, 158, 160, - 170, 171, 172, 175, 161, 169, 128, 129, - 130, 131, 133, 135, 138, 139, 140, 141, - 142, 143, 144, 145, 146, 147, 148, 149, - 152, 156, 157, 160, 161, 162, 163, 164, - 166, 168, 169, 170, 171, 172, 173, 174, - 176, 177, 153, 155, 178, 179, 128, 139, - 141, 166, 168, 186, 188, 189, 191, 255, - 142, 143, 158, 255, 187, 255, 128, 180, - 189, 128, 156, 160, 255, 145, 159, 161, - 255, 128, 159, 176, 255, 139, 143, 187, - 255, 128, 157, 160, 255, 144, 132, 135, - 150, 255, 158, 159, 170, 175, 148, 151, - 188, 255, 128, 167, 176, 255, 164, 255, - 183, 255, 128, 149, 160, 167, 136, 188, - 128, 133, 138, 181, 183, 184, 191, 255, - 150, 159, 183, 255, 128, 158, 160, 178, - 180, 181, 128, 149, 160, 185, 128, 183, - 190, 191, 191, 128, 131, 133, 134, 140, - 147, 149, 151, 153, 179, 184, 186, 160, - 188, 128, 156, 128, 135, 137, 166, 128, - 181, 128, 149, 160, 178, 128, 145, 128, - 178, 129, 130, 131, 132, 133, 
135, 136, - 138, 139, 140, 141, 144, 145, 146, 147, - 150, 151, 152, 153, 154, 155, 156, 162, - 163, 171, 176, 177, 178, 128, 134, 135, - 165, 176, 190, 144, 168, 176, 185, 128, - 180, 182, 191, 182, 144, 179, 155, 133, - 137, 141, 143, 157, 255, 190, 128, 145, - 147, 183, 136, 128, 134, 138, 141, 143, - 157, 159, 168, 176, 255, 171, 175, 186, - 255, 128, 131, 133, 140, 143, 144, 147, - 168, 170, 176, 178, 179, 181, 185, 188, - 191, 144, 151, 128, 132, 135, 136, 139, - 141, 157, 163, 166, 172, 176, 180, 128, - 138, 144, 153, 134, 136, 143, 154, 255, - 128, 181, 184, 255, 129, 151, 158, 255, - 129, 131, 133, 143, 154, 255, 128, 137, - 128, 153, 157, 171, 176, 185, 160, 255, - 170, 190, 192, 255, 128, 184, 128, 136, - 138, 182, 184, 191, 128, 144, 153, 178, - 255, 168, 144, 145, 183, 255, 128, 142, - 145, 149, 129, 141, 144, 146, 147, 148, - 175, 255, 132, 255, 128, 144, 129, 143, - 144, 153, 145, 152, 135, 255, 160, 168, - 169, 171, 172, 173, 174, 188, 189, 190, - 191, 161, 167, 185, 255, 128, 158, 160, - 169, 144, 173, 176, 180, 128, 131, 144, - 153, 163, 183, 189, 255, 144, 255, 133, - 143, 191, 255, 143, 159, 160, 128, 129, - 255, 159, 160, 171, 172, 255, 173, 255, - 179, 255, 128, 176, 177, 178, 128, 129, - 171, 175, 189, 255, 128, 136, 144, 153, - 157, 158, 133, 134, 137, 144, 145, 146, - 147, 148, 149, 154, 155, 156, 157, 158, - 159, 168, 169, 170, 150, 153, 165, 169, - 173, 178, 187, 255, 131, 132, 140, 169, - 174, 255, 130, 132, 149, 157, 173, 186, - 188, 160, 161, 163, 164, 167, 168, 132, - 134, 149, 157, 186, 139, 140, 191, 255, - 134, 128, 132, 138, 144, 146, 255, 166, - 167, 129, 155, 187, 149, 181, 143, 175, - 137, 169, 131, 140, 141, 192, 255, 128, - 182, 187, 255, 173, 180, 182, 255, 132, - 155, 159, 161, 175, 128, 160, 163, 164, - 165, 184, 185, 186, 161, 162, 128, 134, - 136, 152, 155, 161, 163, 164, 166, 170, - 133, 143, 151, 255, 139, 143, 154, 255, - 164, 167, 185, 187, 128, 131, 133, 159, - 161, 162, 169, 178, 180, 183, 130, 135, - 137, 139, 148, 151, 153, 155, 157, 159, - 164, 190, 141, 143, 145, 146, 161, 162, - 167, 170, 172, 178, 180, 183, 185, 188, - 128, 137, 139, 155, 161, 163, 165, 169, - 171, 187, 155, 156, 151, 255, 156, 157, - 160, 181, 255, 186, 187, 255, 162, 255, - 160, 168, 161, 167, 158, 255, 160, 132, - 135, 133, 134, 176, 255, 170, 181, 186, - 191, 176, 180, 182, 183, 186, 189, 134, - 140, 136, 138, 142, 161, 163, 255, 130, - 137, 136, 255, 144, 170, 176, 178, 160, - 191, 128, 138, 174, 175, 177, 255, 148, - 150, 164, 167, 173, 176, 185, 189, 190, - 192, 255, 144, 146, 175, 141, 255, 166, - 176, 178, 255, 186, 138, 170, 180, 181, - 160, 161, 162, 164, 165, 166, 167, 168, - 169, 170, 171, 172, 173, 174, 175, 176, - 177, 178, 179, 180, 181, 182, 184, 186, - 187, 188, 189, 190, 183, 185, 154, 164, - 168, 128, 149, 128, 152, 189, 132, 185, - 144, 152, 161, 177, 255, 169, 177, 129, - 132, 141, 142, 145, 146, 179, 181, 186, - 188, 190, 255, 142, 156, 157, 159, 161, - 176, 177, 133, 138, 143, 144, 147, 168, - 170, 176, 178, 179, 181, 182, 184, 185, - 158, 153, 156, 178, 180, 189, 133, 141, - 143, 145, 147, 168, 170, 176, 178, 179, - 181, 185, 144, 185, 160, 161, 189, 133, - 140, 143, 144, 147, 168, 170, 176, 178, - 179, 181, 185, 177, 156, 157, 159, 161, - 131, 156, 133, 138, 142, 144, 146, 149, - 153, 154, 158, 159, 163, 164, 168, 170, - 174, 185, 144, 189, 133, 140, 142, 144, - 146, 168, 170, 185, 152, 154, 160, 161, - 128, 189, 133, 140, 142, 144, 146, 168, - 170, 179, 181, 185, 158, 160, 161, 177, - 178, 189, 133, 140, 142, 144, 146, 186, - 142, 148, 150, 
159, 161, 186, 191, 189, - 133, 150, 154, 177, 179, 187, 128, 134, - 129, 176, 178, 179, 132, 138, 141, 165, - 167, 189, 129, 130, 135, 136, 148, 151, - 153, 159, 161, 163, 170, 171, 173, 176, - 178, 179, 134, 128, 132, 156, 159, 128, - 128, 135, 137, 172, 136, 140, 128, 129, - 130, 131, 137, 138, 139, 140, 141, 142, - 143, 144, 153, 154, 155, 156, 157, 158, - 159, 160, 161, 162, 163, 164, 165, 166, - 167, 168, 169, 170, 172, 173, 174, 175, - 176, 177, 178, 179, 180, 181, 182, 184, - 188, 189, 190, 191, 132, 152, 185, 187, - 191, 128, 170, 161, 144, 149, 154, 157, - 165, 166, 174, 176, 181, 255, 130, 141, - 143, 159, 155, 255, 128, 140, 142, 145, - 160, 177, 128, 145, 160, 172, 174, 176, - 151, 156, 170, 128, 168, 176, 255, 138, - 255, 128, 150, 160, 255, 149, 255, 167, - 133, 179, 133, 139, 131, 160, 174, 175, - 186, 255, 166, 255, 128, 163, 141, 143, - 154, 189, 169, 172, 174, 177, 181, 182, - 129, 130, 132, 133, 134, 176, 177, 178, - 179, 180, 181, 182, 183, 177, 191, 165, - 170, 175, 177, 180, 255, 168, 174, 176, - 255, 128, 134, 136, 142, 144, 150, 152, - 158, 128, 129, 130, 131, 132, 133, 134, - 135, 144, 145, 255, 133, 135, 161, 169, - 177, 181, 184, 188, 160, 151, 154, 128, - 146, 147, 148, 152, 153, 154, 155, 156, - 158, 159, 160, 161, 162, 163, 164, 165, - 166, 167, 168, 169, 170, 171, 172, 173, - 174, 175, 176, 129, 255, 141, 143, 160, - 169, 172, 255, 191, 128, 174, 130, 134, - 139, 163, 255, 130, 179, 187, 189, 178, - 183, 138, 165, 176, 255, 135, 159, 189, - 255, 132, 178, 143, 160, 164, 166, 175, - 186, 190, 128, 168, 186, 128, 130, 132, - 139, 160, 182, 190, 255, 176, 178, 180, - 183, 184, 190, 255, 128, 130, 155, 157, - 160, 170, 178, 180, 128, 162, 164, 169, - 171, 172, 173, 174, 175, 180, 181, 182, - 183, 185, 186, 187, 188, 189, 190, 191, - 165, 179, 157, 190, 128, 134, 147, 151, - 159, 168, 170, 182, 184, 188, 176, 180, - 182, 255, 161, 186, 144, 145, 146, 147, - 148, 150, 151, 152, 155, 157, 158, 160, - 170, 171, 172, 175, 161, 169, 128, 129, - 130, 131, 133, 138, 139, 140, 141, 142, - 143, 144, 145, 146, 147, 148, 149, 152, - 156, 157, 160, 161, 162, 163, 164, 166, - 168, 169, 170, 171, 172, 173, 174, 176, - 177, 153, 155, 178, 179, 145, 255, 139, - 143, 182, 255, 158, 175, 128, 144, 147, - 149, 151, 153, 179, 128, 135, 137, 164, - 128, 130, 131, 132, 133, 134, 135, 136, - 138, 139, 140, 141, 144, 145, 146, 147, - 150, 151, 152, 153, 154, 156, 162, 163, - 171, 176, 177, 178, 131, 183, 131, 175, - 144, 168, 131, 166, 182, 144, 178, 131, - 178, 154, 156, 129, 132, 128, 145, 147, - 171, 159, 255, 144, 157, 161, 135, 138, - 128, 175, 135, 132, 133, 128, 174, 152, - 155, 132, 128, 170, 128, 153, 160, 190, - 192, 255, 128, 136, 138, 174, 128, 178, - 255, 160, 168, 169, 171, 172, 173, 174, - 188, 189, 190, 191, 161, 167, 144, 173, - 128, 131, 163, 183, 189, 255, 133, 143, - 145, 255, 147, 159, 128, 176, 177, 178, - 128, 136, 144, 153, 144, 145, 146, 147, - 148, 149, 154, 155, 156, 157, 158, 159, - 150, 153, 131, 140, 255, 160, 163, 164, - 165, 184, 185, 186, 161, 162, 133, 255, - 170, 181, 183, 186, 128, 150, 152, 182, - 184, 255, 192, 255, 128, 255, 173, 130, - 133, 146, 159, 165, 171, 175, 255, 181, - 190, 184, 185, 192, 255, 140, 134, 138, - 142, 161, 163, 255, 182, 130, 136, 137, - 176, 151, 152, 154, 160, 190, 136, 144, - 192, 255, 135, 129, 130, 132, 133, 144, - 170, 176, 178, 144, 154, 160, 191, 128, - 169, 174, 255, 148, 169, 157, 158, 189, - 190, 192, 255, 144, 255, 139, 140, 178, - 255, 186, 128, 181, 160, 161, 162, 163, - 164, 165, 166, 167, 168, 169, 170, 171, - 
172, 173, 174, 175, 176, 177, 178, 179, - 180, 181, 182, 183, 184, 185, 186, 187, - 188, 189, 190, 191, 128, 173, 128, 155, - 160, 180, 182, 189, 148, 161, 163, 255, - 176, 164, 165, 132, 169, 177, 141, 142, - 145, 146, 179, 181, 186, 187, 158, 133, - 134, 137, 138, 143, 150, 152, 155, 164, - 165, 178, 255, 188, 129, 131, 133, 138, - 143, 144, 147, 168, 170, 176, 178, 179, - 181, 182, 184, 185, 190, 255, 157, 131, - 134, 137, 138, 142, 144, 146, 152, 159, - 165, 182, 255, 129, 131, 133, 141, 143, - 145, 147, 168, 170, 176, 178, 179, 181, - 185, 188, 255, 134, 138, 142, 143, 145, - 159, 164, 165, 176, 184, 186, 255, 129, - 131, 133, 140, 143, 144, 147, 168, 170, - 176, 178, 179, 181, 185, 188, 191, 177, - 128, 132, 135, 136, 139, 141, 150, 151, - 156, 157, 159, 163, 166, 175, 156, 130, - 131, 133, 138, 142, 144, 146, 149, 153, - 154, 158, 159, 163, 164, 168, 170, 174, - 185, 190, 191, 144, 151, 128, 130, 134, - 136, 138, 141, 166, 175, 128, 131, 133, - 140, 142, 144, 146, 168, 170, 185, 189, - 255, 133, 137, 151, 142, 148, 155, 159, - 164, 165, 176, 255, 128, 131, 133, 140, - 142, 144, 146, 168, 170, 179, 181, 185, - 188, 191, 158, 128, 132, 134, 136, 138, - 141, 149, 150, 160, 163, 166, 175, 177, - 178, 129, 131, 133, 140, 142, 144, 146, - 186, 189, 255, 133, 137, 143, 147, 152, - 158, 164, 165, 176, 185, 192, 255, 189, - 130, 131, 133, 150, 154, 177, 179, 187, - 138, 150, 128, 134, 143, 148, 152, 159, - 166, 175, 178, 179, 129, 186, 128, 142, - 144, 153, 132, 138, 141, 165, 167, 129, - 130, 135, 136, 148, 151, 153, 159, 161, - 163, 170, 171, 173, 185, 187, 189, 134, - 128, 132, 136, 141, 144, 153, 156, 159, - 128, 181, 183, 185, 152, 153, 160, 169, - 190, 191, 128, 135, 137, 172, 177, 191, - 128, 132, 134, 151, 153, 188, 134, 128, - 129, 130, 131, 137, 138, 139, 140, 141, - 142, 143, 144, 153, 154, 155, 156, 157, - 158, 159, 160, 161, 162, 163, 164, 165, - 166, 167, 168, 169, 170, 173, 175, 176, - 177, 178, 179, 181, 182, 183, 188, 189, - 190, 191, 132, 152, 172, 184, 185, 187, - 128, 191, 128, 137, 144, 255, 158, 159, - 134, 187, 136, 140, 142, 143, 137, 151, - 153, 142, 143, 158, 159, 137, 177, 142, - 143, 182, 183, 191, 255, 128, 130, 133, - 136, 150, 152, 255, 145, 150, 151, 155, - 156, 160, 168, 178, 255, 128, 143, 160, - 255, 182, 183, 190, 255, 129, 255, 173, - 174, 192, 255, 129, 154, 160, 255, 171, - 173, 185, 255, 128, 140, 142, 148, 160, - 180, 128, 147, 160, 172, 174, 176, 178, - 179, 148, 150, 152, 155, 158, 159, 170, - 255, 139, 141, 144, 153, 160, 255, 184, - 255, 128, 170, 176, 255, 182, 255, 128, - 158, 160, 171, 176, 187, 134, 173, 176, - 180, 128, 171, 176, 255, 138, 143, 155, - 255, 128, 155, 160, 255, 159, 189, 190, - 192, 255, 167, 128, 137, 144, 153, 176, - 189, 140, 143, 154, 170, 180, 255, 180, - 255, 128, 183, 128, 137, 141, 189, 128, - 136, 144, 146, 148, 182, 184, 185, 128, - 181, 187, 191, 150, 151, 158, 159, 152, - 154, 156, 158, 134, 135, 142, 143, 190, - 255, 190, 128, 180, 182, 188, 130, 132, - 134, 140, 144, 147, 150, 155, 160, 172, - 178, 180, 182, 188, 128, 129, 130, 131, - 132, 133, 134, 176, 177, 178, 179, 180, - 181, 182, 183, 191, 255, 129, 147, 149, - 176, 178, 190, 192, 255, 144, 156, 161, - 144, 156, 165, 176, 130, 135, 149, 164, - 166, 168, 138, 147, 152, 157, 170, 185, - 188, 191, 142, 133, 137, 160, 255, 137, - 255, 128, 174, 176, 255, 159, 165, 170, - 180, 255, 167, 173, 128, 165, 176, 255, - 168, 174, 176, 190, 192, 255, 128, 150, - 160, 166, 168, 174, 176, 182, 184, 190, - 128, 134, 136, 142, 144, 150, 152, 158, - 160, 191, 128, 129, 130, 
131, 132, 133, - 134, 135, 144, 145, 255, 133, 135, 161, - 175, 177, 181, 184, 188, 160, 151, 152, - 187, 192, 255, 133, 173, 177, 255, 143, - 159, 187, 255, 176, 191, 182, 183, 184, - 191, 192, 255, 150, 255, 128, 146, 147, - 148, 152, 153, 154, 155, 156, 158, 159, - 160, 161, 162, 163, 164, 165, 166, 167, - 168, 169, 170, 171, 172, 173, 174, 175, - 176, 129, 255, 141, 255, 144, 189, 141, - 143, 172, 255, 191, 128, 175, 180, 189, - 151, 159, 162, 255, 175, 137, 138, 184, - 255, 183, 255, 168, 255, 128, 179, 188, - 134, 143, 154, 159, 184, 186, 190, 255, - 128, 173, 176, 255, 148, 159, 189, 255, - 129, 142, 154, 159, 191, 255, 128, 182, - 128, 141, 144, 153, 160, 182, 186, 255, - 128, 130, 155, 157, 160, 175, 178, 182, - 129, 134, 137, 142, 145, 150, 160, 166, - 168, 174, 176, 255, 155, 166, 175, 128, - 170, 172, 173, 176, 185, 158, 159, 160, - 255, 164, 175, 135, 138, 188, 255, 164, - 169, 171, 172, 173, 174, 175, 180, 181, - 182, 183, 184, 185, 187, 188, 189, 190, - 191, 165, 186, 174, 175, 154, 255, 190, - 128, 134, 147, 151, 157, 168, 170, 182, - 184, 188, 128, 129, 131, 132, 134, 255, - 147, 255, 190, 255, 144, 145, 136, 175, - 188, 255, 128, 143, 160, 175, 179, 180, - 141, 143, 176, 180, 182, 255, 189, 255, - 191, 144, 153, 161, 186, 129, 154, 166, - 255, 191, 255, 130, 135, 138, 143, 146, - 151, 154, 156, 144, 145, 146, 147, 148, - 150, 151, 152, 155, 157, 158, 160, 170, - 171, 172, 175, 161, 169, 128, 129, 130, - 131, 133, 135, 138, 139, 140, 141, 142, - 143, 144, 145, 146, 147, 148, 149, 152, - 156, 157, 160, 161, 162, 163, 164, 166, - 168, 169, 170, 171, 172, 173, 174, 176, - 177, 153, 155, 178, 179, 128, 139, 141, - 166, 168, 186, 188, 189, 191, 255, 142, - 143, 158, 255, 187, 255, 128, 180, 189, - 128, 156, 160, 255, 145, 159, 161, 255, - 128, 159, 176, 255, 139, 143, 187, 255, - 128, 157, 160, 255, 144, 132, 135, 150, - 255, 158, 159, 170, 175, 148, 151, 188, - 255, 128, 167, 176, 255, 164, 255, 183, - 255, 128, 149, 160, 167, 136, 188, 128, - 133, 138, 181, 183, 184, 191, 255, 150, - 159, 183, 255, 128, 158, 160, 178, 180, - 181, 128, 149, 160, 185, 128, 183, 190, - 191, 191, 128, 131, 133, 134, 140, 147, - 149, 151, 153, 179, 184, 186, 160, 188, - 128, 156, 128, 135, 137, 166, 128, 181, - 128, 149, 160, 178, 128, 145, 128, 178, - 129, 130, 131, 132, 133, 135, 136, 138, - 139, 140, 141, 144, 145, 146, 147, 150, - 151, 152, 153, 154, 155, 156, 162, 163, - 171, 176, 177, 178, 128, 134, 135, 165, - 176, 190, 144, 168, 176, 185, 128, 180, - 182, 191, 182, 144, 179, 155, 133, 137, - 141, 143, 157, 255, 190, 128, 145, 147, - 183, 136, 128, 134, 138, 141, 143, 157, - 159, 168, 176, 255, 171, 175, 186, 255, - 128, 131, 133, 140, 143, 144, 147, 168, - 170, 176, 178, 179, 181, 185, 188, 191, - 144, 151, 128, 132, 135, 136, 139, 141, - 157, 163, 166, 172, 176, 180, 128, 138, - 144, 153, 134, 136, 143, 154, 255, 128, - 181, 184, 255, 129, 151, 158, 255, 129, - 131, 133, 143, 154, 255, 128, 137, 128, - 153, 157, 171, 176, 185, 160, 255, 170, - 190, 192, 255, 128, 184, 128, 136, 138, - 182, 184, 191, 128, 144, 153, 178, 255, - 168, 144, 145, 183, 255, 128, 142, 145, - 149, 129, 141, 144, 146, 147, 148, 175, - 255, 132, 255, 128, 144, 129, 143, 144, - 153, 145, 152, 135, 255, 160, 168, 169, - 171, 172, 173, 174, 188, 189, 190, 191, - 161, 167, 185, 255, 128, 158, 160, 169, - 144, 173, 176, 180, 128, 131, 144, 153, - 163, 183, 189, 255, 144, 255, 133, 143, - 191, 255, 143, 159, 160, 128, 129, 255, - 159, 160, 171, 172, 255, 173, 255, 179, - 255, 128, 176, 177, 178, 128, 129, 171, - 175, 189, 
[elided: machine-generated scanner tables removed by this diff. The span covers
the tail of a byte-range table, followed by `_hcltok_single_lengths`,
`_hcltok_range_lengths`, `_hcltok_index_offsets`, and the start of
`_hcltok_indicies`: thousands of byte/int16 values apparently emitted by the
Ragel state-machine compiler for the HCL token scanner, and not intended for
human review.]
670, 419, 670, 669, 419, - 670, 419, 670, 419, 669, 670, 670, 669, - 419, 670, 419, 670, 669, 419, 670, 669, - 674, 683, 425, 669, 741, 674, 742, 683, - 671, 669, 425, 670, 669, 419, 670, 669, - 419, 743, 674, 744, 745, 671, 669, 419, - 670, 669, 670, 670, 669, 419, 419, 670, - 419, 670, 669, 674, 746, 747, 748, 749, - 750, 751, 752, 753, 754, 755, 756, 671, - 683, 671, 669, 670, 419, 670, 670, 670, - 670, 670, 670, 670, 419, 670, 419, 670, - 670, 670, 670, 670, 670, 669, 419, 670, - 670, 419, 670, 419, 669, 670, 419, 670, - 670, 670, 419, 670, 670, 419, 670, 670, - 419, 670, 670, 419, 670, 670, 669, 419, - 674, 757, 674, 733, 758, 759, 760, 671, - 683, 671, 669, 670, 669, 419, 670, 670, - 670, 419, 670, 670, 670, 419, 670, 419, - 670, 669, 419, 419, 419, 419, 670, 670, - 419, 419, 419, 419, 419, 670, 670, 670, - 670, 670, 670, 670, 419, 670, 419, 670, - 419, 669, 670, 670, 670, 419, 670, 419, - 670, 669, 683, 425, 761, 674, 683, 425, - 670, 669, 419, 762, 674, 763, 683, 425, - 670, 669, 419, 670, 419, 764, 683, 671, - 669, 425, 670, 669, 419, 674, 765, 671, - 683, 671, 669, 670, 669, 419, 766, 766, - 766, 768, 769, 770, 766, 767, 767, 771, - 768, 771, 769, 771, 767, 772, 773, 772, - 775, 774, 776, 774, 777, 774, 779, 778, - 781, 782, 780, 781, 783, 780, 785, 784, - 786, 784, 787, 784, 789, 788, 791, 792, - 790, 791, 793, 790, 795, 795, 795, 795, - 794, 795, 795, 795, 794, 795, 794, 795, - 795, 794, 794, 794, 794, 794, 794, 795, - 794, 794, 794, 794, 795, 795, 795, 795, - 795, 794, 794, 795, 794, 794, 795, 794, - 795, 794, 794, 795, 794, 794, 794, 795, - 795, 795, 795, 795, 795, 794, 795, 795, - 794, 795, 795, 794, 794, 794, 794, 794, - 794, 795, 795, 794, 794, 795, 794, 795, - 795, 795, 794, 797, 798, 799, 800, 801, - 802, 803, 804, 805, 806, 807, 808, 809, - 810, 811, 812, 813, 814, 815, 816, 817, - 818, 819, 820, 821, 822, 823, 824, 825, - 826, 827, 828, 794, 795, 794, 795, 794, - 795, 795, 794, 795, 795, 794, 794, 794, - 795, 794, 794, 794, 794, 794, 794, 794, - 795, 794, 794, 794, 794, 794, 794, 794, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 794, 794, 794, 794, 794, - 794, 794, 794, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 794, 794, 794, 794, - 794, 794, 794, 794, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 794, 795, 795, - 795, 795, 795, 795, 795, 795, 794, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 794, 795, 795, 795, 795, 795, - 795, 794, 795, 795, 795, 795, 795, 795, - 794, 794, 794, 794, 794, 794, 794, 794, - 795, 795, 795, 795, 795, 795, 795, 795, - 794, 795, 795, 795, 795, 795, 795, 795, - 795, 794, 795, 795, 795, 795, 795, 794, - 794, 794, 794, 794, 794, 794, 794, 795, - 795, 795, 795, 795, 795, 794, 795, 795, - 795, 795, 795, 795, 795, 794, 795, 794, - 795, 795, 794, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 794, 795, 795, 795, 795, 795, 794, 795, - 795, 795, 795, 795, 795, 795, 794, 795, - 795, 795, 794, 795, 795, 795, 794, 795, - 794, 829, 830, 831, 832, 833, 834, 835, - 836, 837, 838, 839, 840, 841, 842, 843, - 844, 845, 846, 847, 848, 849, 850, 851, - 852, 853, 854, 855, 856, 857, 858, 859, - 860, 861, 862, 863, 864, 801, 865, 866, - 867, 868, 869, 870, 801, 846, 801, 794, - 795, 794, 795, 795, 794, 794, 795, 794, - 794, 794, 794, 795, 794, 794, 794, 794, - 794, 795, 794, 794, 794, 794, 794, 795, - 795, 795, 795, 795, 794, 794, 794, 795, - 794, 794, 794, 795, 795, 795, 794, 794, - 794, 795, 795, 794, 794, 794, 795, 795, - 795, 794, 794, 794, 795, 795, 795, 795, - 794, 795, 795, 795, 795, 794, 794, 794, - 
794, 794, 795, 795, 795, 795, 794, 794, - 795, 795, 795, 794, 794, 795, 795, 795, - 795, 794, 795, 795, 794, 795, 795, 794, - 794, 794, 795, 795, 795, 794, 794, 794, - 794, 795, 795, 795, 795, 795, 794, 794, - 794, 794, 795, 794, 795, 795, 794, 795, - 795, 794, 795, 794, 795, 795, 795, 794, - 795, 795, 794, 794, 794, 795, 794, 794, - 794, 794, 794, 794, 794, 795, 795, 795, - 795, 794, 795, 795, 795, 795, 795, 795, - 795, 794, 871, 872, 873, 874, 875, 876, - 877, 878, 879, 801, 880, 881, 882, 883, - 884, 794, 795, 794, 794, 794, 794, 794, - 795, 795, 794, 795, 795, 795, 794, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 794, 795, 795, 795, 794, 794, 795, - 795, 795, 794, 794, 795, 794, 794, 795, - 795, 795, 795, 795, 794, 794, 794, 794, - 795, 795, 795, 795, 795, 795, 794, 795, - 795, 795, 795, 795, 794, 885, 840, 886, - 887, 888, 801, 889, 890, 846, 801, 794, - 795, 795, 795, 795, 794, 794, 794, 795, - 794, 794, 795, 795, 795, 794, 794, 794, - 795, 795, 794, 851, 794, 846, 801, 801, - 891, 794, 801, 794, 795, 846, 892, 893, - 846, 894, 895, 846, 896, 897, 898, 899, - 900, 901, 846, 902, 903, 904, 846, 905, - 906, 907, 865, 908, 909, 910, 865, 911, - 846, 801, 794, 794, 795, 795, 794, 794, - 794, 795, 795, 795, 795, 794, 795, 795, - 794, 794, 794, 794, 795, 795, 794, 794, - 795, 795, 794, 794, 794, 794, 794, 794, - 795, 795, 795, 794, 794, 794, 795, 794, - 794, 794, 795, 795, 794, 795, 795, 795, - 795, 794, 795, 795, 795, 795, 794, 795, - 795, 795, 795, 795, 795, 794, 794, 794, - 795, 795, 795, 795, 794, 912, 913, 794, - 801, 794, 795, 794, 794, 795, 846, 914, - 915, 916, 917, 896, 918, 919, 920, 921, - 922, 923, 924, 925, 926, 927, 928, 929, - 801, 794, 794, 795, 794, 795, 795, 795, - 795, 795, 795, 795, 794, 795, 795, 795, - 794, 795, 794, 794, 795, 794, 795, 794, - 794, 795, 795, 795, 795, 794, 795, 795, - 795, 794, 794, 795, 795, 795, 795, 794, - 795, 795, 794, 794, 795, 795, 795, 795, - 795, 794, 930, 931, 932, 933, 934, 935, - 936, 937, 938, 939, 940, 936, 942, 943, - 944, 945, 941, 794, 946, 947, 846, 948, - 949, 950, 951, 952, 953, 954, 955, 956, - 846, 801, 957, 958, 959, 960, 846, 961, - 962, 963, 964, 965, 966, 967, 968, 969, - 970, 971, 972, 973, 974, 975, 846, 877, - 801, 976, 794, 795, 795, 795, 795, 795, - 794, 794, 794, 795, 794, 795, 795, 794, - 795, 794, 795, 795, 794, 794, 794, 795, - 795, 795, 794, 794, 794, 795, 795, 795, - 794, 794, 794, 794, 795, 794, 794, 795, - 794, 794, 795, 795, 795, 794, 794, 795, - 794, 795, 795, 795, 794, 795, 795, 795, - 795, 795, 795, 794, 794, 794, 795, 795, - 794, 795, 795, 794, 795, 795, 794, 795, - 795, 794, 795, 795, 795, 795, 795, 795, - 795, 794, 795, 794, 795, 794, 795, 795, - 794, 795, 794, 795, 795, 794, 795, 794, - 795, 794, 977, 948, 978, 979, 980, 981, - 982, 983, 984, 985, 986, 829, 987, 846, - 988, 989, 990, 846, 991, 861, 992, 993, - 994, 995, 996, 997, 998, 999, 846, 794, - 794, 794, 795, 795, 795, 794, 795, 795, - 794, 795, 795, 794, 794, 794, 794, 794, - 795, 795, 795, 795, 794, 795, 795, 795, - 795, 795, 795, 794, 794, 794, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 794, - 795, 795, 795, 795, 795, 795, 795, 795, - 794, 795, 795, 794, 794, 794, 794, 795, - 795, 795, 794, 794, 794, 795, 794, 794, - 794, 795, 795, 794, 795, 795, 795, 794, - 795, 794, 794, 794, 795, 795, 794, 795, - 795, 795, 794, 795, 795, 795, 794, 794, - 794, 794, 795, 846, 915, 1000, 1001, 801, - 846, 801, 794, 794, 795, 794, 795, 846, - 1000, 801, 794, 846, 1002, 801, 794, 794, - 795, 846, 1003, 1004, 1005, 906, 1006, 1007, - 846, 1008, 1009, 
1010, 801, 794, 794, 795, - 795, 795, 794, 795, 795, 794, 795, 795, - 795, 795, 794, 794, 795, 794, 794, 795, - 795, 794, 795, 794, 846, 801, 794, 1011, - 846, 1012, 794, 801, 794, 795, 794, 795, - 1013, 846, 1014, 1015, 794, 795, 794, 794, - 794, 795, 795, 795, 795, 794, 1016, 1017, - 1018, 846, 1019, 1020, 1021, 1022, 1023, 1024, - 1025, 1026, 1027, 1028, 1029, 1030, 1031, 1032, - 801, 794, 795, 795, 795, 794, 794, 794, - 794, 795, 795, 794, 794, 795, 794, 794, - 794, 794, 794, 794, 794, 795, 794, 795, - 794, 794, 794, 794, 794, 794, 795, 795, - 795, 795, 795, 794, 794, 795, 794, 794, - 794, 795, 794, 794, 795, 794, 794, 795, - 794, 794, 795, 794, 794, 794, 795, 795, - 795, 794, 794, 794, 795, 795, 795, 795, - 794, 1033, 846, 1034, 846, 1035, 1036, 1037, - 1038, 801, 794, 795, 795, 795, 795, 795, - 794, 794, 794, 795, 794, 794, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 794, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 794, 795, 795, 795, - 795, 795, 794, 1039, 846, 801, 794, 795, - 1040, 846, 831, 801, 794, 795, 1041, 794, - 801, 794, 795, 846, 1042, 801, 794, 794, - 795, 1043, 794, 846, 1044, 801, 794, 794, - 795, 1046, 1045, 795, 795, 795, 795, 1046, - 1045, 795, 1046, 1045, 1046, 1046, 795, 1046, - 1045, 795, 1046, 795, 1046, 1045, 795, 1046, - 795, 1046, 795, 1045, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1045, 795, 795, 1046, - 1046, 795, 1046, 795, 1046, 1045, 1046, 1046, - 1046, 1046, 1046, 795, 1046, 795, 1046, 795, - 1046, 1045, 1046, 1046, 795, 1046, 795, 1046, - 1045, 1046, 1046, 1046, 1046, 1046, 795, 1046, - 795, 1046, 1045, 795, 795, 1046, 795, 1046, - 1045, 1046, 1046, 1046, 795, 1046, 795, 1046, - 795, 1046, 795, 1046, 1045, 1046, 795, 1046, - 795, 1046, 1045, 795, 1046, 1046, 1046, 1046, - 795, 1046, 795, 1046, 795, 1046, 795, 1046, - 795, 1046, 795, 1046, 1045, 795, 1046, 1045, - 1046, 1046, 1046, 795, 1046, 795, 1046, 1045, - 1046, 795, 1046, 795, 1046, 1045, 795, 1046, - 1046, 1046, 1046, 795, 1046, 795, 1046, 1045, - 795, 1046, 795, 1046, 795, 1046, 1045, 1046, - 1046, 795, 1046, 795, 1046, 1045, 795, 1046, - 795, 1046, 795, 1046, 795, 1045, 1046, 1046, - 1046, 795, 1046, 795, 1046, 1045, 795, 1046, - 1045, 1046, 1046, 795, 1046, 1045, 1046, 1046, - 1046, 795, 1046, 1046, 1046, 1046, 1046, 1046, - 795, 795, 1046, 795, 1046, 795, 1046, 795, - 1046, 1045, 1046, 795, 1046, 795, 1046, 1045, - 795, 1046, 1045, 1046, 795, 1046, 1045, 1046, - 795, 1046, 1045, 795, 795, 1046, 1045, 795, - 1046, 795, 1046, 795, 1046, 795, 1046, 795, - 1046, 795, 1045, 1046, 1046, 795, 1046, 1046, - 1046, 1046, 795, 795, 1046, 1046, 1046, 1046, - 1046, 795, 1046, 1046, 1046, 1046, 1046, 1045, - 795, 1046, 1046, 795, 1046, 795, 1045, 1046, - 1046, 795, 1046, 1045, 795, 795, 1046, 795, - 1045, 1046, 1046, 1045, 795, 1046, 795, 1045, - 1046, 1045, 795, 1046, 795, 1046, 795, 1045, - 1046, 1046, 1045, 795, 1046, 795, 1046, 795, - 1046, 1045, 1046, 795, 1046, 795, 1046, 1045, - 795, 1046, 1045, 795, 795, 1046, 1045, 1046, - 795, 1045, 1046, 1045, 795, 1046, 795, 1046, - 795, 1045, 1046, 1045, 795, 795, 1046, 1045, - 1046, 795, 1046, 795, 1046, 1045, 795, 1046, - 795, 1045, 1046, 1045, 795, 795, 1046, 795, - 1045, 1046, 1045, 795, 795, 1046, 1045, 1046, - 795, 1046, 1045, 1046, 795, 1046, 1045, 1046, - 795, 1046, 795, 1046, 795, 1045, 1046, 1045, - 795, 795, 1046, 1045, 1046, 795, 1046, 795, - 1046, 1045, 795, 1046, 1045, 1046, 1046, 795, - 1046, 795, 1046, 1045, 1045, 795, 1045, 795, - 1046, 1046, 795, 1046, 1046, 1046, 1046, 
1046, - 1046, 1046, 1045, 795, 1046, 1046, 1046, 795, - 1045, 1046, 1046, 1046, 795, 1046, 795, 1046, - 795, 1046, 795, 1046, 795, 1046, 1045, 795, - 795, 1046, 1045, 1046, 795, 1046, 1045, 795, - 795, 1046, 795, 795, 795, 1046, 795, 1046, - 795, 1046, 795, 1046, 795, 1045, 795, 1046, - 795, 1046, 795, 1045, 1046, 1045, 795, 1046, - 795, 1045, 1046, 795, 1046, 1046, 1046, 1045, - 795, 1046, 795, 795, 1046, 795, 1045, 1046, - 1046, 1045, 795, 1046, 1046, 1046, 1046, 795, - 1046, 795, 1045, 1046, 1046, 1046, 795, 1046, - 1045, 1046, 795, 1046, 795, 1046, 795, 1046, - 795, 1046, 1045, 1046, 1046, 795, 1046, 1045, - 795, 1046, 795, 1046, 795, 1045, 1046, 1046, - 1045, 795, 1046, 795, 1045, 1046, 1045, 795, - 1046, 1045, 795, 1046, 795, 1046, 1045, 1046, - 1046, 1046, 1045, 795, 795, 795, 1046, 1045, - 795, 1046, 795, 1045, 1046, 1045, 795, 1046, - 795, 1046, 795, 1045, 1046, 1046, 1046, 1045, - 795, 1046, 795, 1045, 1046, 1046, 1046, 1046, - 1045, 795, 1046, 795, 1046, 1045, 795, 795, - 1046, 795, 1046, 1045, 1046, 795, 1046, 795, - 1045, 1046, 1046, 1045, 795, 1046, 795, 1046, - 1045, 795, 1046, 1046, 1046, 795, 1046, 795, - 1045, 795, 1046, 1045, 1046, 795, 795, 1046, - 795, 1046, 795, 1045, 1046, 1046, 1046, 1046, - 1045, 795, 1046, 795, 1046, 795, 1046, 795, - 1046, 795, 1046, 1045, 1046, 1046, 1046, 795, - 1046, 795, 1046, 795, 1046, 795, 1045, 1046, - 1046, 795, 795, 1046, 1045, 1046, 795, 1046, - 1046, 1045, 795, 1046, 795, 1046, 1045, 795, - 795, 1046, 1046, 1046, 1046, 795, 1046, 795, - 1046, 795, 1045, 1046, 1046, 795, 1045, 1046, - 1045, 795, 1046, 795, 1045, 1046, 1045, 795, - 1046, 795, 1045, 1046, 795, 1046, 1046, 1045, - 795, 1046, 1046, 795, 1045, 1046, 1045, 795, - 1046, 795, 1046, 1045, 1046, 795, 1046, 795, - 1045, 1046, 1045, 795, 1046, 795, 1046, 795, - 1046, 795, 1046, 795, 1046, 1045, 1047, 1045, - 1048, 1049, 1050, 1051, 1052, 1053, 1054, 1055, - 1056, 1057, 1058, 1050, 1059, 1060, 1061, 1062, - 1063, 1050, 1064, 1065, 1066, 1067, 1068, 1069, - 1070, 1071, 1072, 1073, 1074, 1075, 1076, 1077, - 1078, 1050, 1079, 1047, 1059, 1047, 1080, 1047, - 1045, 1046, 1046, 1046, 1046, 795, 1045, 1046, - 1046, 1045, 795, 1046, 1045, 795, 795, 1046, - 1045, 795, 1046, 795, 1045, 1046, 1045, 795, - 795, 1046, 795, 1045, 1046, 1046, 1045, 795, - 1046, 1046, 1046, 1045, 795, 1046, 795, 1046, - 1046, 1045, 795, 795, 1046, 795, 1045, 1046, - 1045, 795, 1046, 1045, 795, 795, 1046, 795, - 1046, 1045, 795, 1046, 795, 795, 1046, 795, - 1046, 795, 1045, 1046, 1046, 1045, 795, 1046, - 1046, 795, 1046, 1045, 795, 1046, 795, 1046, - 1045, 795, 1046, 795, 1045, 795, 1046, 1046, - 1046, 795, 1046, 1045, 1046, 795, 1046, 1045, - 795, 1046, 1045, 1046, 795, 1046, 1045, 795, - 1046, 1045, 795, 1046, 795, 1046, 1045, 795, - 1046, 1045, 795, 1046, 1045, 1081, 1082, 1083, - 1084, 1085, 1086, 1087, 1088, 1089, 1090, 1091, - 1092, 1052, 1093, 1094, 1095, 1096, 1097, 1094, - 1098, 1099, 1100, 1101, 1102, 1103, 1104, 1105, - 1106, 1047, 1045, 1046, 795, 1046, 1045, 1046, - 795, 1046, 1045, 1046, 795, 1046, 1045, 1046, - 795, 1046, 1045, 795, 1046, 795, 1046, 1045, - 1046, 795, 1046, 1045, 1046, 795, 795, 795, - 1046, 1045, 1046, 795, 1046, 1045, 1046, 1046, - 1046, 1046, 795, 1046, 795, 1045, 1046, 1045, - 795, 795, 1046, 795, 1046, 1045, 1046, 795, - 1046, 1045, 795, 1046, 1045, 1046, 1046, 795, - 1046, 1045, 795, 1046, 1045, 1046, 795, 1046, - 1045, 795, 1046, 1045, 795, 1046, 1045, 795, - 1046, 1045, 1046, 1045, 795, 795, 1046, 1045, - 1046, 795, 1046, 1045, 795, 1046, 795, 1045, - 1046, 1045, 795, 1050, 
1107, 1047, 1050, 1108, - 1050, 1109, 1059, 1047, 1045, 1046, 1045, 795, - 1046, 1045, 795, 1050, 1108, 1059, 1047, 1045, - 1050, 1110, 1047, 1059, 1047, 1045, 1046, 1045, - 795, 1050, 1111, 1068, 1112, 1094, 1113, 1106, - 1050, 1114, 1115, 1116, 1047, 1059, 1047, 1045, - 1046, 1045, 795, 1046, 795, 1046, 1045, 795, - 1046, 795, 1046, 795, 1045, 1046, 1046, 1045, - 795, 1046, 795, 1046, 1045, 795, 1046, 1045, - 1050, 1059, 801, 1045, 1117, 1050, 1118, 1059, - 1047, 1045, 801, 1046, 1045, 795, 1046, 1045, - 795, 1119, 1050, 1120, 1121, 1047, 1045, 795, - 1046, 1045, 1046, 1046, 1045, 795, 795, 1046, - 795, 1046, 1045, 1050, 1122, 1123, 1124, 1125, - 1126, 1127, 1128, 1129, 1130, 1131, 1132, 1047, - 1059, 1047, 1045, 1046, 795, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 795, 1046, 795, 1046, - 1046, 1046, 1046, 1046, 1046, 1045, 795, 1046, - 1046, 795, 1046, 795, 1045, 1046, 795, 1046, - 1046, 1046, 795, 1046, 1046, 795, 1046, 1046, - 795, 1046, 1046, 795, 1046, 1046, 1045, 795, - 1050, 1133, 1050, 1109, 1134, 1135, 1136, 1047, - 1059, 1047, 1045, 1046, 1045, 795, 1046, 1046, - 1046, 795, 1046, 1046, 1046, 795, 1046, 795, - 1046, 1045, 795, 795, 795, 795, 1046, 1046, - 795, 795, 795, 795, 795, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 795, 1046, 795, 1046, - 795, 1045, 1046, 1046, 1046, 795, 1046, 795, - 1046, 1045, 1059, 801, 1137, 1050, 1059, 801, - 1046, 1045, 795, 1138, 1050, 1139, 1059, 801, - 1046, 1045, 795, 1046, 795, 1140, 1059, 1047, - 1045, 801, 1046, 1045, 795, 1050, 1141, 1047, - 1059, 1047, 1045, 1046, 1045, 795, 1142, 1143, - 1144, 1142, 1145, 1146, 1147, 1149, 1150, 1151, - 1152, 1153, 1154, 670, 670, 419, 1155, 1156, - 1157, 1158, 670, 1161, 1162, 1164, 1165, 1166, - 1160, 1167, 1168, 1169, 1170, 1171, 1172, 1173, - 1174, 1175, 1176, 1177, 1178, 1179, 1180, 1181, - 1182, 1183, 1184, 1185, 1186, 1188, 1189, 1190, - 1191, 1192, 1193, 670, 1148, 7, 1148, 419, - 1148, 419, 1160, 1163, 1187, 1194, 1159, 1142, - 1142, 1195, 1143, 1196, 1198, 1197, 4, 1147, - 1200, 1197, 1201, 1197, 2, 1147, 1197, 6, - 8, 8, 7, 1202, 1203, 1204, 1197, 1205, - 1206, 1197, 1207, 1197, 419, 419, 1209, 1210, - 489, 470, 1211, 470, 1212, 1213, 1214, 1215, - 1216, 1217, 1218, 1219, 1220, 1221, 1222, 544, - 1223, 520, 1224, 1225, 1226, 1227, 1228, 1229, - 1230, 1231, 1232, 1233, 1234, 1235, 419, 419, - 419, 425, 565, 1208, 1236, 1197, 1237, 1197, - 670, 1238, 419, 419, 419, 670, 1238, 670, - 670, 419, 1238, 419, 1238, 419, 1238, 419, - 670, 670, 670, 670, 670, 1238, 419, 670, - 670, 670, 419, 670, 419, 1238, 419, 670, - 670, 670, 670, 419, 1238, 670, 419, 670, - 419, 670, 419, 670, 670, 419, 670, 1238, - 419, 670, 419, 670, 419, 670, 1238, 670, - 419, 1238, 670, 419, 670, 419, 1238, 670, - 670, 670, 670, 670, 1238, 419, 419, 670, - 419, 670, 1238, 670, 419, 1238, 670, 670, - 1238, 419, 419, 670, 419, 670, 419, 670, - 1238, 1239, 1240, 1241, 1242, 1243, 1244, 1245, - 1246, 1247, 1248, 1249, 715, 1250, 1251, 1252, - 1253, 1254, 1255, 1256, 1257, 1258, 1259, 1260, - 1261, 1260, 1262, 1263, 1264, 1265, 1266, 671, - 1238, 1267, 1268, 1269, 1270, 1271, 1272, 1273, - 1274, 1275, 1276, 1277, 1278, 1279, 1280, 1281, - 1282, 1283, 1284, 1285, 725, 1286, 1287, 1288, - 692, 1289, 1290, 1291, 1292, 1293, 1294, 671, - 1295, 1296, 1297, 1298, 1299, 1300, 1301, 1302, - 674, 1303, 671, 674, 1304, 1305, 1306, 1307, - 683, 1238, 1308, 1309, 1310, 1311, 703, 1312, - 1313, 683, 1314, 1315, 1316, 1317, 1318, 671, - 1238, 1319, 1278, 1320, 1321, 1322, 683, 1323, - 1324, 674, 671, 683, 425, 1238, 1288, 671, - 674, 683, 425, 
683, 425, 1325, 683, 1238, - 425, 674, 1326, 1327, 674, 1328, 1329, 681, - 1330, 1331, 1332, 1333, 1334, 1284, 1335, 1336, - 1337, 1338, 1339, 1340, 1341, 1342, 1343, 1344, - 1345, 1346, 1303, 1347, 674, 683, 425, 1238, - 1348, 1349, 683, 671, 1238, 425, 671, 1238, - 674, 1350, 731, 1351, 1352, 1353, 1354, 1355, - 1356, 1357, 1358, 671, 1359, 1360, 1361, 1362, - 1363, 1364, 671, 683, 1238, 1366, 1367, 1368, - 1369, 1370, 1371, 1372, 1373, 1374, 1375, 1376, - 1372, 1378, 1379, 1380, 1381, 1365, 1377, 1365, - 1238, 1365, 1238, 1382, 1382, 1383, 1384, 1385, - 1386, 1387, 1388, 1389, 1390, 1387, 767, 1391, - 1391, 1391, 1392, 1391, 1391, 768, 769, 770, - 1391, 767, 1382, 1382, 1393, 1396, 1397, 1395, - 1398, 1399, 1398, 1400, 1391, 1402, 1401, 1396, - 1403, 1395, 1405, 1404, 1394, 1394, 1394, 768, - 769, 770, 1394, 767, 767, 1406, 773, 1406, - 1407, 1406, 775, 1408, 1409, 1410, 1411, 1412, - 1413, 1414, 1411, 776, 775, 1408, 1415, 1415, - 777, 779, 1416, 1415, 776, 1418, 1419, 1417, - 1418, 1419, 1420, 1417, 775, 1408, 1421, 1415, - 775, 1408, 1415, 1423, 1422, 1425, 1424, 776, - 1426, 777, 1426, 779, 1426, 785, 1427, 1428, - 1429, 1430, 1431, 1432, 1433, 1430, 786, 785, - 1427, 1434, 1434, 787, 789, 1435, 1434, 786, - 1437, 1438, 1436, 1437, 1438, 1439, 1436, 785, - 1427, 1440, 1434, 785, 1427, 1434, 1442, 1441, - 1444, 1443, 786, 1445, 787, 1445, 789, 1445, - 795, 1448, 1449, 1451, 1452, 1453, 1447, 1454, - 1455, 1456, 1457, 1458, 1459, 1460, 1461, 1462, - 1463, 1464, 1465, 1466, 1467, 1468, 1469, 1470, - 1471, 1472, 1473, 1475, 1476, 1477, 1478, 1479, - 1480, 795, 795, 1446, 1447, 1450, 1474, 1481, - 1446, 1046, 795, 795, 1483, 1484, 865, 846, - 1485, 846, 1486, 1487, 1488, 1489, 1490, 1491, - 1492, 1493, 1494, 1495, 1496, 920, 1497, 896, - 1498, 1499, 1500, 1501, 1502, 1503, 1504, 1505, - 1506, 1507, 1508, 1509, 795, 795, 795, 801, - 941, 1482, 1046, 1510, 795, 795, 795, 1046, - 1510, 1046, 1046, 795, 1510, 795, 1510, 795, - 1510, 795, 1046, 1046, 1046, 1046, 1046, 1510, - 795, 1046, 1046, 1046, 795, 1046, 795, 1510, - 795, 1046, 1046, 1046, 1046, 795, 1510, 1046, - 795, 1046, 795, 1046, 795, 1046, 1046, 795, - 1046, 1510, 795, 1046, 795, 1046, 795, 1046, - 1510, 1046, 795, 1510, 1046, 795, 1046, 795, - 1510, 1046, 1046, 1046, 1046, 1046, 1510, 795, - 795, 1046, 795, 1046, 1510, 1046, 795, 1510, - 1046, 1046, 1510, 795, 795, 1046, 795, 1046, - 795, 1046, 1510, 1511, 1512, 1513, 1514, 1515, - 1516, 1517, 1518, 1519, 1520, 1521, 1091, 1522, - 1523, 1524, 1525, 1526, 1527, 1528, 1529, 1530, - 1531, 1532, 1533, 1532, 1534, 1535, 1536, 1537, - 1538, 1047, 1510, 1539, 1540, 1541, 1542, 1543, - 1544, 1545, 1546, 1547, 1548, 1549, 1550, 1551, - 1552, 1553, 1554, 1555, 1556, 1557, 1101, 1558, - 1559, 1560, 1068, 1561, 1562, 1563, 1564, 1565, - 1566, 1047, 1567, 1568, 1569, 1570, 1571, 1572, - 1573, 1574, 1050, 1575, 1047, 1050, 1576, 1577, - 1578, 1579, 1059, 1510, 1580, 1581, 1582, 1583, - 1079, 1584, 1585, 1059, 1586, 1587, 1588, 1589, - 1590, 1047, 1510, 1591, 1550, 1592, 1593, 1594, - 1059, 1595, 1596, 1050, 1047, 1059, 801, 1510, - 1560, 1047, 1050, 1059, 801, 1059, 801, 1597, - 1059, 1510, 801, 1050, 1598, 1599, 1050, 1600, - 1601, 1057, 1602, 1603, 1604, 1605, 1606, 1556, - 1607, 1608, 1609, 1610, 1611, 1612, 1613, 1614, - 1615, 1616, 1617, 1618, 1575, 1619, 1050, 1059, - 801, 1510, 1620, 1621, 1059, 1047, 1510, 801, - 1047, 1510, 1050, 1622, 1107, 1623, 1624, 1625, - 1626, 1627, 1628, 1629, 1630, 1047, 1631, 1632, - 1633, 1634, 1635, 1636, 1047, 1059, 1510, 1638, - 1639, 1640, 1641, 1642, 
1643, 1644, 1645, 1646, - 1647, 1648, 1644, 1650, 1651, 1652, 1653, 1637, - 1649, 1637, 1510, 1637, 1510, -} - -var _hcltok_trans_targs []int16 = []int16{ - 1459, 1459, 2, 3, 1459, 1459, 4, 1467, - 5, 6, 8, 9, 286, 12, 13, 14, - 15, 16, 287, 288, 19, 289, 21, 22, - 290, 291, 292, 293, 294, 295, 296, 297, - 298, 299, 328, 348, 353, 127, 128, 129, - 356, 151, 371, 375, 1459, 10, 11, 17, - 18, 20, 23, 24, 25, 26, 27, 28, - 29, 30, 31, 32, 64, 105, 120, 131, - 154, 170, 283, 33, 34, 35, 36, 37, - 38, 39, 40, 41, 42, 43, 44, 45, - 46, 47, 48, 49, 50, 51, 52, 53, - 54, 55, 56, 57, 58, 59, 60, 61, - 62, 63, 65, 66, 67, 68, 69, 70, - 71, 72, 73, 74, 75, 76, 77, 78, - 79, 80, 81, 82, 83, 84, 85, 86, - 87, 88, 89, 90, 91, 92, 93, 94, - 95, 96, 97, 98, 99, 100, 101, 102, - 103, 104, 106, 107, 108, 109, 110, 111, - 112, 113, 114, 115, 116, 117, 118, 119, - 121, 122, 123, 124, 125, 126, 130, 132, - 133, 134, 135, 136, 137, 138, 139, 140, - 141, 142, 143, 144, 145, 146, 147, 148, - 149, 150, 152, 153, 155, 156, 157, 158, - 159, 160, 161, 162, 163, 164, 165, 166, - 167, 168, 169, 171, 203, 227, 230, 231, - 233, 242, 243, 246, 250, 268, 275, 277, - 279, 281, 172, 173, 174, 175, 176, 177, - 178, 179, 180, 181, 182, 183, 184, 185, - 186, 187, 188, 189, 190, 191, 192, 193, - 194, 195, 196, 197, 198, 199, 200, 201, - 202, 204, 205, 206, 207, 208, 209, 210, - 211, 212, 213, 214, 215, 216, 217, 218, - 219, 220, 221, 222, 223, 224, 225, 226, - 228, 229, 232, 234, 235, 236, 237, 238, - 239, 240, 241, 244, 245, 247, 248, 249, - 251, 252, 253, 254, 255, 256, 257, 258, - 259, 260, 261, 262, 263, 264, 265, 266, - 267, 269, 270, 271, 272, 273, 274, 276, - 278, 280, 282, 284, 285, 300, 301, 302, - 303, 304, 305, 306, 307, 308, 309, 310, - 311, 312, 313, 314, 315, 316, 317, 318, - 319, 320, 321, 322, 323, 324, 325, 326, - 327, 329, 330, 331, 332, 333, 334, 335, - 336, 337, 338, 339, 340, 341, 342, 343, - 344, 345, 346, 347, 349, 350, 351, 352, - 354, 355, 357, 358, 359, 360, 361, 362, - 363, 364, 365, 366, 367, 368, 369, 370, - 372, 373, 374, 376, 382, 404, 409, 411, - 413, 377, 378, 379, 380, 381, 383, 384, - 385, 386, 387, 388, 389, 390, 391, 392, - 393, 394, 395, 396, 397, 398, 399, 400, - 401, 402, 403, 405, 406, 407, 408, 410, - 412, 414, 1459, 1471, 1459, 437, 438, 439, - 440, 417, 441, 442, 443, 444, 445, 446, - 447, 448, 449, 450, 451, 452, 453, 454, - 455, 456, 457, 458, 459, 460, 461, 462, - 463, 464, 465, 466, 467, 469, 470, 471, - 472, 473, 474, 475, 476, 477, 478, 479, - 480, 481, 482, 483, 484, 485, 419, 486, - 487, 488, 489, 490, 491, 492, 493, 494, - 495, 496, 497, 498, 499, 500, 501, 502, - 503, 418, 504, 505, 506, 507, 508, 510, - 511, 512, 513, 514, 515, 516, 517, 518, - 519, 520, 521, 522, 523, 525, 526, 527, - 528, 529, 530, 534, 536, 537, 538, 539, - 434, 540, 541, 542, 543, 544, 545, 546, - 547, 548, 549, 550, 551, 552, 553, 554, - 556, 557, 559, 560, 561, 562, 563, 564, - 432, 565, 566, 567, 568, 569, 570, 571, - 572, 573, 575, 607, 631, 634, 635, 637, - 646, 647, 650, 654, 672, 532, 679, 681, - 683, 685, 576, 577, 578, 579, 580, 581, - 582, 583, 584, 585, 586, 587, 588, 589, - 590, 591, 592, 593, 594, 595, 596, 597, - 598, 599, 600, 601, 602, 603, 604, 605, - 606, 608, 609, 610, 611, 612, 613, 614, - 615, 616, 617, 618, 619, 620, 621, 622, - 623, 624, 625, 626, 627, 628, 629, 630, - 632, 633, 636, 638, 639, 640, 641, 642, - 643, 644, 645, 648, 649, 651, 652, 653, - 655, 656, 657, 658, 659, 660, 661, 662, - 663, 664, 665, 666, 667, 668, 669, 670, - 671, 673, 674, 675, 676, 677, 678, 680, - 682, 
684, 686, 688, 689, 1459, 1459, 690, - 827, 828, 759, 829, 830, 831, 832, 833, - 834, 788, 835, 724, 836, 837, 838, 839, - 840, 841, 842, 843, 744, 844, 845, 846, - 847, 848, 849, 850, 851, 852, 853, 769, - 854, 856, 857, 858, 859, 860, 861, 862, - 863, 864, 865, 702, 866, 867, 868, 869, - 870, 871, 872, 873, 874, 740, 875, 876, - 877, 878, 879, 810, 881, 882, 885, 887, - 888, 889, 890, 891, 892, 895, 896, 898, - 899, 900, 902, 903, 904, 905, 906, 907, - 908, 909, 910, 911, 912, 914, 915, 916, - 917, 920, 922, 923, 925, 927, 1509, 1510, - 929, 930, 931, 1509, 1509, 932, 1523, 1523, - 1524, 935, 1523, 936, 1525, 1526, 1529, 1530, - 1534, 1534, 1535, 941, 1534, 942, 1536, 1537, - 1540, 1541, 1545, 1546, 1545, 968, 969, 970, - 971, 948, 972, 973, 974, 975, 976, 977, - 978, 979, 980, 981, 982, 983, 984, 985, - 986, 987, 988, 989, 990, 991, 992, 993, - 994, 995, 996, 997, 998, 1000, 1001, 1002, - 1003, 1004, 1005, 1006, 1007, 1008, 1009, 1010, - 1011, 1012, 1013, 1014, 1015, 1016, 950, 1017, - 1018, 1019, 1020, 1021, 1022, 1023, 1024, 1025, - 1026, 1027, 1028, 1029, 1030, 1031, 1032, 1033, - 1034, 949, 1035, 1036, 1037, 1038, 1039, 1041, - 1042, 1043, 1044, 1045, 1046, 1047, 1048, 1049, - 1050, 1051, 1052, 1053, 1054, 1056, 1057, 1058, - 1059, 1060, 1061, 1065, 1067, 1068, 1069, 1070, - 965, 1071, 1072, 1073, 1074, 1075, 1076, 1077, - 1078, 1079, 1080, 1081, 1082, 1083, 1084, 1085, - 1087, 1088, 1090, 1091, 1092, 1093, 1094, 1095, - 963, 1096, 1097, 1098, 1099, 1100, 1101, 1102, - 1103, 1104, 1106, 1138, 1162, 1165, 1166, 1168, - 1177, 1178, 1181, 1185, 1203, 1063, 1210, 1212, - 1214, 1216, 1107, 1108, 1109, 1110, 1111, 1112, - 1113, 1114, 1115, 1116, 1117, 1118, 1119, 1120, - 1121, 1122, 1123, 1124, 1125, 1126, 1127, 1128, - 1129, 1130, 1131, 1132, 1133, 1134, 1135, 1136, - 1137, 1139, 1140, 1141, 1142, 1143, 1144, 1145, - 1146, 1147, 1148, 1149, 1150, 1151, 1152, 1153, - 1154, 1155, 1156, 1157, 1158, 1159, 1160, 1161, - 1163, 1164, 1167, 1169, 1170, 1171, 1172, 1173, - 1174, 1175, 1176, 1179, 1180, 1182, 1183, 1184, - 1186, 1187, 1188, 1189, 1190, 1191, 1192, 1193, - 1194, 1195, 1196, 1197, 1198, 1199, 1200, 1201, - 1202, 1204, 1205, 1206, 1207, 1208, 1209, 1211, - 1213, 1215, 1217, 1219, 1220, 1545, 1545, 1221, - 1358, 1359, 1290, 1360, 1361, 1362, 1363, 1364, - 1365, 1319, 1366, 1255, 1367, 1368, 1369, 1370, - 1371, 1372, 1373, 1374, 1275, 1375, 1376, 1377, - 1378, 1379, 1380, 1381, 1382, 1383, 1384, 1300, - 1385, 1387, 1388, 1389, 1390, 1391, 1392, 1393, - 1394, 1395, 1396, 1233, 1397, 1398, 1399, 1400, - 1401, 1402, 1403, 1404, 1405, 1271, 1406, 1407, - 1408, 1409, 1410, 1341, 1412, 1413, 1416, 1418, - 1419, 1420, 1421, 1422, 1423, 1426, 1427, 1429, - 1430, 1431, 1433, 1434, 1435, 1436, 1437, 1438, - 1439, 1440, 1441, 1442, 1443, 1445, 1446, 1447, - 1448, 1451, 1453, 1454, 1456, 1458, 1460, 1459, - 1461, 1462, 1459, 1463, 1459, 1464, 1465, 1466, - 1468, 1469, 1470, 1459, 1472, 1459, 1473, 1459, - 1474, 1475, 1476, 1477, 1478, 1479, 1480, 1481, - 1482, 1483, 1484, 1485, 1486, 1487, 1488, 1489, - 1490, 1491, 1492, 1493, 1494, 1495, 1496, 1497, - 1498, 1499, 1500, 1501, 1502, 1503, 1504, 1505, - 1506, 1507, 1508, 1459, 1459, 1459, 1459, 1459, - 1459, 1, 1459, 7, 1459, 1459, 1459, 1459, - 1459, 415, 416, 420, 421, 422, 423, 424, - 425, 426, 427, 428, 429, 430, 431, 433, - 435, 436, 468, 509, 524, 531, 533, 535, - 555, 558, 574, 687, 1459, 1459, 1459, 691, - 692, 693, 694, 695, 696, 697, 698, 699, - 700, 701, 703, 704, 705, 706, 707, 708, - 709, 710, 711, 712, 713, 714, 715, 716, - 717, 718, 
719, 720, 721, 722, 723, 725, - 726, 727, 728, 729, 730, 731, 732, 733, - 734, 735, 736, 737, 738, 739, 741, 742, - 743, 745, 746, 747, 748, 749, 750, 751, - 752, 753, 754, 755, 756, 757, 758, 760, - 761, 762, 763, 764, 765, 766, 767, 768, - 770, 771, 772, 773, 774, 775, 776, 777, - 778, 779, 780, 781, 782, 783, 784, 785, - 786, 787, 789, 790, 791, 792, 793, 794, - 795, 796, 797, 798, 799, 800, 801, 802, - 803, 804, 805, 806, 807, 808, 809, 811, - 812, 813, 814, 815, 816, 817, 818, 819, - 820, 821, 822, 823, 824, 825, 826, 855, - 880, 883, 884, 886, 893, 894, 897, 901, - 913, 918, 919, 921, 924, 926, 1511, 1509, - 1512, 1517, 1519, 1509, 1520, 1521, 1522, 1509, - 928, 1509, 1509, 1513, 1514, 1516, 1509, 1515, - 1509, 1509, 1509, 1518, 1509, 1509, 1509, 933, - 934, 938, 939, 1523, 1531, 1532, 1533, 1523, - 937, 1523, 1523, 934, 1527, 1528, 1523, 1523, - 1523, 1523, 1523, 940, 944, 945, 1534, 1542, - 1543, 1544, 1534, 943, 1534, 1534, 940, 1538, - 1539, 1534, 1534, 1534, 1534, 1534, 1545, 1547, - 1548, 1549, 1550, 1551, 1552, 1553, 1554, 1555, - 1556, 1557, 1558, 1559, 1560, 1561, 1562, 1563, - 1564, 1565, 1566, 1567, 1568, 1569, 1570, 1571, - 1572, 1573, 1574, 1575, 1576, 1577, 1578, 1579, - 1580, 1581, 1545, 946, 947, 951, 952, 953, - 954, 955, 956, 957, 958, 959, 960, 961, - 962, 964, 966, 967, 999, 1040, 1055, 1062, - 1064, 1066, 1086, 1089, 1105, 1218, 1545, 1222, - 1223, 1224, 1225, 1226, 1227, 1228, 1229, 1230, - 1231, 1232, 1234, 1235, 1236, 1237, 1238, 1239, - 1240, 1241, 1242, 1243, 1244, 1245, 1246, 1247, - 1248, 1249, 1250, 1251, 1252, 1253, 1254, 1256, - 1257, 1258, 1259, 1260, 1261, 1262, 1263, 1264, - 1265, 1266, 1267, 1268, 1269, 1270, 1272, 1273, - 1274, 1276, 1277, 1278, 1279, 1280, 1281, 1282, - 1283, 1284, 1285, 1286, 1287, 1288, 1289, 1291, - 1292, 1293, 1294, 1295, 1296, 1297, 1298, 1299, - 1301, 1302, 1303, 1304, 1305, 1306, 1307, 1308, - 1309, 1310, 1311, 1312, 1313, 1314, 1315, 1316, - 1317, 1318, 1320, 1321, 1322, 1323, 1324, 1325, - 1326, 1327, 1328, 1329, 1330, 1331, 1332, 1333, - 1334, 1335, 1336, 1337, 1338, 1339, 1340, 1342, - 1343, 1344, 1345, 1346, 1347, 1348, 1349, 1350, - 1351, 1352, 1353, 1354, 1355, 1356, 1357, 1386, - 1411, 1414, 1415, 1417, 1424, 1425, 1428, 1432, - 1444, 1449, 1450, 1452, 1455, 1457, -} - -var _hcltok_trans_actions []byte = []byte{ - 145, 107, 0, 0, 91, 141, 0, 7, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 121, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 
0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 143, 193, 149, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 147, 125, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 31, 169, - 0, 0, 0, 35, 33, 0, 55, 41, - 175, 0, 53, 0, 175, 175, 0, 0, - 75, 61, 181, 0, 73, 0, 181, 181, - 0, 0, 85, 187, 89, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 87, 79, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 93, - 0, 0, 119, 0, 111, 0, 7, 7, - 7, 0, 0, 113, 0, 115, 0, 123, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 7, 7, - 7, 196, 196, 196, 196, 196, 196, 7, - 7, 196, 7, 127, 139, 135, 97, 133, - 103, 0, 129, 0, 101, 95, 109, 99, - 131, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 105, 117, 137, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 13, - 0, 0, 172, 17, 0, 7, 7, 23, - 0, 25, 27, 0, 0, 0, 151, 0, - 15, 19, 9, 0, 21, 11, 29, 0, - 0, 0, 0, 43, 0, 178, 178, 49, - 0, 157, 154, 1, 175, 175, 45, 37, - 47, 39, 51, 0, 0, 0, 
63, 0, - 184, 184, 69, 0, 163, 160, 1, 181, - 181, 65, 57, 67, 59, 71, 77, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 7, 7, 7, - 190, 190, 190, 190, 190, 190, 7, 7, - 190, 7, 81, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 83, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, -} - -var _hcltok_to_state_actions []byte = []byte{ - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 
0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 3, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 3, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 166, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 166, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 3, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, -} - -var _hcltok_from_state_actions []byte = []byte{ - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, 
- 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 
0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 5, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 5, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 5, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 5, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 5, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, -} - -var _hcltok_eof_trans []int16 = []int16{ - 0, 1, 1, 1, 6, 6, 6, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 419, - 419, 421, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 419, 419, 419, 419, 419, 419, - 419, 419, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 
670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 670, 670, 670, 670, 670, 670, 670, 670, - 767, 772, 772, 772, 773, 773, 775, 775, - 775, 779, 0, 0, 785, 785, 785, 789, - 0, 0, 795, 795, 797, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 795, 795, 795, - 795, 795, 795, 795, 795, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 
1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, - 1046, 1046, 1046, 0, 1196, 1197, 1198, 1200, - 1198, 1198, 1198, 1203, 1198, 1198, 1198, 1209, - 1198, 1198, 1239, 1239, 1239, 1239, 1239, 1239, - 1239, 1239, 1239, 1239, 1239, 1239, 1239, 1239, - 1239, 1239, 1239, 1239, 1239, 1239, 1239, 1239, - 1239, 1239, 1239, 1239, 1239, 1239, 1239, 1239, - 1239, 1239, 1239, 1239, 1239, 0, 1392, 1394, - 1395, 1399, 1399, 1392, 1402, 1395, 1405, 1395, - 1407, 1407, 1407, 0, 1416, 1418, 1418, 1416, - 1416, 1423, 1425, 1427, 1427, 1427, 0, 1435, - 1437, 1437, 1435, 1435, 1442, 1444, 1446, 1446, - 1446, 0, 1483, 1511, 1511, 1511, 1511, 1511, - 1511, 1511, 1511, 1511, 1511, 1511, 1511, 1511, - 1511, 1511, 1511, 1511, 1511, 1511, 1511, 1511, - 1511, 1511, 1511, 1511, 1511, 1511, 1511, 1511, - 1511, 1511, 1511, 1511, 1511, 1511, -} - -const hcltok_start int = 1459 -const hcltok_first_final int = 1459 -const hcltok_error int = 0 - -const hcltok_en_stringTemplate int = 1509 -const hcltok_en_heredocTemplate int = 1523 -const hcltok_en_bareTemplate int = 1534 -const hcltok_en_identOnly int = 1545 -const hcltok_en_main int = 1459 - -//line scan_tokens.rl:16 - -func scanTokens(data []byte, filename string, start hcl.Pos, mode scanMode) []Token { - stripData := stripUTF8BOM(data) - start.Byte += len(data) - len(stripData) - data = stripData - - f := &tokenAccum{ - Filename: filename, - Bytes: data, - Pos: start, - StartByte: start.Byte, - } - -//line scan_tokens.rl:305 - - // Ragel state - p := 0 // "Pointer" into data - pe := len(data) // End-of-data "pointer" - ts := 0 - te := 0 - act := 0 - eof := pe - var stack []int - var top int - - var cs int // current state - switch mode { - case scanNormal: - cs = hcltok_en_main - case scanTemplate: - cs = hcltok_en_bareTemplate - case scanIdentOnly: - cs = hcltok_en_identOnly - default: - panic("invalid scanMode") - } - - braces := 0 - var retBraces []int // stack of brace levels that cause us to use fret - var heredocs []heredocInProgress // stack of heredocs we're currently processing - -//line scan_tokens.rl:340 - - // Make Go compiler happy - _ = ts - _ = te - _ = act - _ = eof - - token := func(ty TokenType) { - f.emitToken(ty, ts, te) - } - selfToken := func() { - b := data[ts:te] - if len(b) != 1 { - // should never happen - panic("selfToken only works for single-character tokens") - } - f.emitToken(TokenType(b[0]), ts, te) - } - -//line scan_tokens.go:4289 - { - top = 0 - ts = 0 - te = 0 - act = 0 - } - -//line scan_tokens.go:4297 - { - var _klen int - var _trans int - var _acts int - var _nacts uint - var _keys int - if p == pe { - goto _test_eof - } - if cs == 0 { - goto _out - } - _resume: - _acts = int(_hcltok_from_state_actions[cs]) - _nacts = uint(_hcltok_actions[_acts]) - _acts++ - for ; _nacts > 0; _nacts-- { - _acts++ - switch _hcltok_actions[_acts-1] { - case 3: -//line NONE:1 - ts = p - -//line scan_tokens.go:4320 - } - } - - _keys = int(_hcltok_key_offsets[cs]) - _trans = int(_hcltok_index_offsets[cs]) - - _klen = int(_hcltok_single_lengths[cs]) - if _klen > 0 { - _lower := int(_keys) - var _mid int - _upper := int(_keys + _klen 
- 1) - for { - if _upper < _lower { - break - } - - _mid = _lower + ((_upper - _lower) >> 1) - switch { - case data[p] < _hcltok_trans_keys[_mid]: - _upper = _mid - 1 - case data[p] > _hcltok_trans_keys[_mid]: - _lower = _mid + 1 - default: - _trans += int(_mid - int(_keys)) - goto _match - } - } - _keys += _klen - _trans += _klen - } - - _klen = int(_hcltok_range_lengths[cs]) - if _klen > 0 { - _lower := int(_keys) - var _mid int - _upper := int(_keys + (_klen << 1) - 2) - for { - if _upper < _lower { - break - } - - _mid = _lower + (((_upper - _lower) >> 1) & ^1) - switch { - case data[p] < _hcltok_trans_keys[_mid]: - _upper = _mid - 2 - case data[p] > _hcltok_trans_keys[_mid+1]: - _lower = _mid + 2 - default: - _trans += int((_mid - int(_keys)) >> 1) - goto _match - } - } - _trans += _klen - } - - _match: - _trans = int(_hcltok_indicies[_trans]) - _eof_trans: - cs = int(_hcltok_trans_targs[_trans]) - - if _hcltok_trans_actions[_trans] == 0 { - goto _again - } - - _acts = int(_hcltok_trans_actions[_trans]) - _nacts = uint(_hcltok_actions[_acts]) - _acts++ - for ; _nacts > 0; _nacts-- { - _acts++ - switch _hcltok_actions[_acts-1] { - case 0: -//line scan_tokens.rl:224 - p-- - - case 4: -//line NONE:1 - te = p + 1 - - case 5: -//line scan_tokens.rl:248 - act = 4 - case 6: -//line scan_tokens.rl:250 - act = 6 - case 7: -//line scan_tokens.rl:160 - te = p + 1 - { - token(TokenTemplateInterp) - braces++ - retBraces = append(retBraces, braces) - if len(heredocs) > 0 { - heredocs[len(heredocs)-1].StartOfLine = false - } - { - stack = append(stack, 0) - stack[top] = cs - top++ - cs = 1459 - goto _again - } - } - case 8: -//line scan_tokens.rl:170 - te = p + 1 - { - token(TokenTemplateControl) - braces++ - retBraces = append(retBraces, braces) - if len(heredocs) > 0 { - heredocs[len(heredocs)-1].StartOfLine = false - } - { - stack = append(stack, 0) - stack[top] = cs - top++ - cs = 1459 - goto _again - } - } - case 9: -//line scan_tokens.rl:84 - te = p + 1 - { - token(TokenCQuote) - top-- - cs = stack[top] - { - stack = stack[:len(stack)-1] - } - goto _again - - } - case 10: -//line scan_tokens.rl:248 - te = p + 1 - { - token(TokenQuotedLit) - } - case 11: -//line scan_tokens.rl:251 - te = p + 1 - { - token(TokenBadUTF8) - } - case 12: -//line scan_tokens.rl:160 - te = p - p-- - { - token(TokenTemplateInterp) - braces++ - retBraces = append(retBraces, braces) - if len(heredocs) > 0 { - heredocs[len(heredocs)-1].StartOfLine = false - } - { - stack = append(stack, 0) - stack[top] = cs - top++ - cs = 1459 - goto _again - } - } - case 13: -//line scan_tokens.rl:170 - te = p - p-- - { - token(TokenTemplateControl) - braces++ - retBraces = append(retBraces, braces) - if len(heredocs) > 0 { - heredocs[len(heredocs)-1].StartOfLine = false - } - { - stack = append(stack, 0) - stack[top] = cs - top++ - cs = 1459 - goto _again - } - } - case 14: -//line scan_tokens.rl:248 - te = p - p-- - { - token(TokenQuotedLit) - } - case 15: -//line scan_tokens.rl:249 - te = p - p-- - { - token(TokenQuotedNewline) - } - case 16: -//line scan_tokens.rl:250 - te = p - p-- - { - token(TokenInvalid) - } - case 17: -//line scan_tokens.rl:251 - te = p - p-- - { - token(TokenBadUTF8) - } - case 18: -//line scan_tokens.rl:248 - p = (te) - 1 - { - token(TokenQuotedLit) - } - case 19: -//line scan_tokens.rl:251 - p = (te) - 1 - { - token(TokenBadUTF8) - } - case 20: -//line NONE:1 - switch act { - case 4: - { - p = (te) - 1 - token(TokenQuotedLit) - } - case 6: - { - p = (te) - 1 - token(TokenInvalid) - } - } - - case 21: -//line 
scan_tokens.rl:148 - act = 11 - case 22: -//line scan_tokens.rl:259 - act = 12 - case 23: -//line scan_tokens.rl:160 - te = p + 1 - { - token(TokenTemplateInterp) - braces++ - retBraces = append(retBraces, braces) - if len(heredocs) > 0 { - heredocs[len(heredocs)-1].StartOfLine = false - } - { - stack = append(stack, 0) - stack[top] = cs - top++ - cs = 1459 - goto _again - } - } - case 24: -//line scan_tokens.rl:170 - te = p + 1 - { - token(TokenTemplateControl) - braces++ - retBraces = append(retBraces, braces) - if len(heredocs) > 0 { - heredocs[len(heredocs)-1].StartOfLine = false - } - { - stack = append(stack, 0) - stack[top] = cs - top++ - cs = 1459 - goto _again - } - } - case 25: -//line scan_tokens.rl:111 - te = p + 1 - { - // This action is called specifically when a heredoc literal - // ends with a newline character. - - // This might actually be our end marker. - topdoc := &heredocs[len(heredocs)-1] - if topdoc.StartOfLine { - maybeMarker := bytes.TrimSpace(data[ts:te]) - if bytes.Equal(maybeMarker, topdoc.Marker) { - // We actually emit two tokens here: the end-of-heredoc - // marker first, and then separately the newline that - // follows it. This then avoids issues with the closing - // marker consuming a newline that would normally be used - // to mark the end of an attribute definition. - // We might have either a \n sequence or an \r\n sequence - // here, so we must handle both. - nls := te - 1 - nle := te - te-- - if data[te-1] == '\r' { - // back up one more byte - nls-- - te-- - } - token(TokenCHeredoc) - ts = nls - te = nle - token(TokenNewline) - heredocs = heredocs[:len(heredocs)-1] - top-- - cs = stack[top] - { - stack = stack[:len(stack)-1] - } - goto _again - - } - } - - topdoc.StartOfLine = true - token(TokenStringLit) - } - case 26: -//line scan_tokens.rl:259 - te = p + 1 - { - token(TokenBadUTF8) - } - case 27: -//line scan_tokens.rl:160 - te = p - p-- - { - token(TokenTemplateInterp) - braces++ - retBraces = append(retBraces, braces) - if len(heredocs) > 0 { - heredocs[len(heredocs)-1].StartOfLine = false - } - { - stack = append(stack, 0) - stack[top] = cs - top++ - cs = 1459 - goto _again - } - } - case 28: -//line scan_tokens.rl:170 - te = p - p-- - { - token(TokenTemplateControl) - braces++ - retBraces = append(retBraces, braces) - if len(heredocs) > 0 { - heredocs[len(heredocs)-1].StartOfLine = false - } - { - stack = append(stack, 0) - stack[top] = cs - top++ - cs = 1459 - goto _again - } - } - case 29: -//line scan_tokens.rl:148 - te = p - p-- - { - // This action is called when a heredoc literal _doesn't_ end - // with a newline character, e.g. because we're about to enter - // an interpolation sequence. - heredocs[len(heredocs)-1].StartOfLine = false - token(TokenStringLit) - } - case 30: -//line scan_tokens.rl:259 - te = p - p-- - { - token(TokenBadUTF8) - } - case 31: -//line scan_tokens.rl:148 - p = (te) - 1 - { - // This action is called when a heredoc literal _doesn't_ end - // with a newline character, e.g. because we're about to enter - // an interpolation sequence. - heredocs[len(heredocs)-1].StartOfLine = false - token(TokenStringLit) - } - case 32: -//line NONE:1 - switch act { - case 0: - { - cs = 0 - goto _again - } - case 11: - { - p = (te) - 1 - - // This action is called when a heredoc literal _doesn't_ end - // with a newline character, e.g. because we're about to enter - // an interpolation sequence.
- heredocs[len(heredocs)-1].StartOfLine = false - token(TokenStringLit) - } - case 12: - { - p = (te) - 1 - token(TokenBadUTF8) - } - } - - case 33: -//line scan_tokens.rl:156 - act = 15 - case 34: -//line scan_tokens.rl:266 - act = 16 - case 35: -//line scan_tokens.rl:160 - te = p + 1 - { - token(TokenTemplateInterp) - braces++ - retBraces = append(retBraces, braces) - if len(heredocs) > 0 { - heredocs[len(heredocs)-1].StartOfLine = false - } - { - stack = append(stack, 0) - stack[top] = cs - top++ - cs = 1459 - goto _again - } - } - case 36: -//line scan_tokens.rl:170 - te = p + 1 - { - token(TokenTemplateControl) - braces++ - retBraces = append(retBraces, braces) - if len(heredocs) > 0 { - heredocs[len(heredocs)-1].StartOfLine = false - } - { - stack = append(stack, 0) - stack[top] = cs - top++ - cs = 1459 - goto _again - } - } - case 37: -//line scan_tokens.rl:156 - te = p + 1 - { - token(TokenStringLit) - } - case 38: -//line scan_tokens.rl:266 - te = p + 1 - { - token(TokenBadUTF8) - } - case 39: -//line scan_tokens.rl:160 - te = p - p-- - { - token(TokenTemplateInterp) - braces++ - retBraces = append(retBraces, braces) - if len(heredocs) > 0 { - heredocs[len(heredocs)-1].StartOfLine = false - } - { - stack = append(stack, 0) - stack[top] = cs - top++ - cs = 1459 - goto _again - } - } - case 40: -//line scan_tokens.rl:170 - te = p - p-- - { - token(TokenTemplateControl) - braces++ - retBraces = append(retBraces, braces) - if len(heredocs) > 0 { - heredocs[len(heredocs)-1].StartOfLine = false - } - { - stack = append(stack, 0) - stack[top] = cs - top++ - cs = 1459 - goto _again - } - } - case 41: -//line scan_tokens.rl:156 - te = p - p-- - { - token(TokenStringLit) - } - case 42: -//line scan_tokens.rl:266 - te = p - p-- - { - token(TokenBadUTF8) - } - case 43: -//line scan_tokens.rl:156 - p = (te) - 1 - { - token(TokenStringLit) - } - case 44: -//line NONE:1 - switch act { - case 0: - { - cs = 0 - goto _again - } - case 15: - { - p = (te) - 1 - - token(TokenStringLit) - } - case 16: - { - p = (te) - 1 - token(TokenBadUTF8) - } - } - - case 45: -//line scan_tokens.rl:270 - act = 17 - case 46: -//line scan_tokens.rl:271 - act = 18 - case 47: -//line scan_tokens.rl:271 - te = p + 1 - { - token(TokenBadUTF8) - } - case 48: -//line scan_tokens.rl:272 - te = p + 1 - { - token(TokenInvalid) - } - case 49: -//line scan_tokens.rl:270 - te = p - p-- - { - token(TokenIdent) - } - case 50: -//line scan_tokens.rl:271 - te = p - p-- - { - token(TokenBadUTF8) - } - case 51: -//line scan_tokens.rl:270 - p = (te) - 1 - { - token(TokenIdent) - } - case 52: -//line scan_tokens.rl:271 - p = (te) - 1 - { - token(TokenBadUTF8) - } - case 53: -//line NONE:1 - switch act { - case 17: - { - p = (te) - 1 - token(TokenIdent) - } - case 18: - { - p = (te) - 1 - token(TokenBadUTF8) - } - } - - case 54: -//line scan_tokens.rl:278 - act = 22 - case 55: -//line scan_tokens.rl:301 - act = 39 - case 56: -//line scan_tokens.rl:280 - te = p + 1 - { - token(TokenComment) - } - case 57: -//line scan_tokens.rl:281 - te = p + 1 - { - token(TokenNewline) - } - case 58: -//line scan_tokens.rl:283 - te = p + 1 - { - token(TokenEqualOp) - } - case 59: -//line scan_tokens.rl:284 - te = p + 1 - { - token(TokenNotEqual) - } - case 60: -//line scan_tokens.rl:285 - te = p + 1 - { - token(TokenGreaterThanEq) - } - case 61: -//line scan_tokens.rl:286 - te = p + 1 - { - token(TokenLessThanEq) - } - case 62: -//line scan_tokens.rl:287 - te = p + 1 - { - token(TokenAnd) - } - case 63: -//line scan_tokens.rl:288 - te = p + 1 - { - 
token(TokenOr) - } - case 64: -//line scan_tokens.rl:289 - te = p + 1 - { - token(TokenEllipsis) - } - case 65: -//line scan_tokens.rl:290 - te = p + 1 - { - token(TokenFatArrow) - } - case 66: -//line scan_tokens.rl:291 - te = p + 1 - { - selfToken() - } - case 67: -//line scan_tokens.rl:180 - te = p + 1 - { - token(TokenOBrace) - braces++ - } - case 68: -//line scan_tokens.rl:185 - te = p + 1 - { - if len(retBraces) > 0 && retBraces[len(retBraces)-1] == braces { - token(TokenTemplateSeqEnd) - braces-- - retBraces = retBraces[0 : len(retBraces)-1] - top-- - cs = stack[top] - { - stack = stack[:len(stack)-1] - } - goto _again - - } else { - token(TokenCBrace) - braces-- - } - } - case 69: -//line scan_tokens.rl:197 - te = p + 1 - { - // Only consume from the retBraces stack and return if we are at - // a suitable brace nesting level, otherwise things will get - // confused. (Not entering this branch indicates a syntax error, - // which we will catch in the parser.) - if len(retBraces) > 0 && retBraces[len(retBraces)-1] == braces { - token(TokenTemplateSeqEnd) - braces-- - retBraces = retBraces[0 : len(retBraces)-1] - top-- - cs = stack[top] - { - stack = stack[:len(stack)-1] - } - goto _again - - } else { - // We intentionally generate a TokenTemplateSeqEnd here, - // even though the user apparently wanted a brace, because - // we want to allow the parser to catch the incorrect use - // of a ~} to balance a generic opening brace, rather than - // a template sequence. - token(TokenTemplateSeqEnd) - braces-- - } - } - case 70: -//line scan_tokens.rl:79 - te = p + 1 - { - token(TokenOQuote) - { - stack = append(stack, 0) - stack[top] = cs - top++ - cs = 1509 - goto _again - } - } - case 71: -//line scan_tokens.rl:89 - te = p + 1 - { - token(TokenOHeredoc) - // the token is currently the whole heredoc introducer, like - // < 0; _nacts-- { - _acts++ - switch _hcltok_actions[_acts-1] { - case 1: -//line NONE:1 - ts = 0 - - case 2: -//line NONE:1 - act = 0 - -//line scan_tokens.go:5073 - } - } - - if cs == 0 { - goto _out - } - p++ - if p != pe { - goto _resume - } - _test_eof: - { - } - if p == eof { - if _hcltok_eof_trans[cs] > 0 { - _trans = int(_hcltok_eof_trans[cs] - 1) - goto _eof_trans - } - } - - _out: - { - } - } - -//line scan_tokens.rl:363 - - // If we fall out here without being in a final state then we've - // encountered something that the scanner can't match, which we'll - // deal with as an invalid. - if cs < hcltok_first_final { - if mode == scanTemplate && len(stack) == 0 { - // If we're scanning a bare template then any straggling - // top-level stuff is actually literal string, rather than - // invalid. This handles the case where the template ends - // with a single "$" or "%", which trips us up because we - // want to see another character to decide if it's a sequence - // or an escape. - f.emitToken(TokenStringLit, ts, len(data)) - } else { - f.emitToken(TokenInvalid, ts, len(data)) - } - } - - // We always emit a synthetic EOF token at the end, since it gives the - // parser position information for an "unexpected EOF" diagnostic. 
- f.emitToken(TokenEOF, len(data), len(data)) - - return f.Tokens -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/scan_tokens.rl b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/scan_tokens.rl deleted file mode 100644 index 4443dc480..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/scan_tokens.rl +++ /dev/null @@ -1,395 +0,0 @@ - -package hclsyntax - -import ( - "bytes" - - "github.com/hashicorp/hcl2/hcl" -) - -// This file is generated from scan_tokens.rl. DO NOT EDIT. -%%{ - # (except when you are actually in scan_tokens.rl here, so edit away!) - - machine hcltok; - write data; -}%% - -func scanTokens(data []byte, filename string, start hcl.Pos, mode scanMode) []Token { - stripData := stripUTF8BOM(data) - start.Byte += len(data) - len(stripData) - data = stripData - - f := &tokenAccum{ - Filename: filename, - Bytes: data, - Pos: start, - StartByte: start.Byte, - } - - %%{ - include UnicodeDerived "unicode_derived.rl"; - - UTF8Cont = 0x80 .. 0xBF; - AnyUTF8 = ( - 0x00..0x7F | - 0xC0..0xDF . UTF8Cont | - 0xE0..0xEF . UTF8Cont . UTF8Cont | - 0xF0..0xF7 . UTF8Cont . UTF8Cont . UTF8Cont - ); - BrokenUTF8 = any - AnyUTF8; - - NumberLitContinue = (digit|'.'|('e'|'E') ('+'|'-')? digit); - NumberLit = digit ("" | (NumberLitContinue - '.') | (NumberLitContinue* (NumberLitContinue - '.'))); - Ident = (ID_Start | '_') (ID_Continue | '-')*; - - # Symbols that just represent themselves are handled as a single rule. - SelfToken = "[" | "]" | "(" | ")" | "." | "," | "*" | "/" | "%" | "+" | "-" | "=" | "<" | ">" | "!" | "?" | ":" | "\n" | "&" | "|" | "~" | "^" | ";" | "`" | "'"; - - EqualOp = "=="; - NotEqual = "!="; - GreaterThanEqual = ">="; - LessThanEqual = "<="; - LogicalAnd = "&&"; - LogicalOr = "||"; - - Ellipsis = "..."; - FatArrow = "=>"; - - Newline = '\r' ? '\n'; - EndOfLine = Newline; - - BeginStringTmpl = '"'; - BeginHeredocTmpl = '<<' ('-')? Ident Newline; - - Comment = ( - # The :>> operator in these is a "finish-guarded concatenation", - # which terminates the sequence on its left when it completes - # the sequence on its right. - # In the single-line comment cases this is allowing us to make - # the trailing EndOfLine optional while still having the overall - # pattern terminate. In the multi-line case it ensures that - # the first comment in the file ends at the first */, rather than - # gobbling up all of the "any*" until the _final_ */ in the file. - ("#" (any - EndOfLine)* :>> EndOfLine?) | - ("//" (any - EndOfLine)* :>> EndOfLine?) | - ("/*" any* :>> "*/") - ); - - # Note: hclwrite assumes that only ASCII spaces appear between tokens, - # and uses this assumption to recreate the spaces between tokens by - # looking at byte offset differences. This means it will produce - # incorrect results in the presence of tabs, but that's acceptable - # because the canonical style (which hclwrite itself can impose - # automatically) is to never use tabs.
- Spaces = (' ' | 0x09)+; - - action beginStringTemplate { - token(TokenOQuote); - fcall stringTemplate; - } - - action endStringTemplate { - token(TokenCQuote); - fret; - } - - action beginHeredocTemplate { - token(TokenOHeredoc); - // the token is currently the whole heredoc introducer, like - // < 0 { - heredocs[len(heredocs)-1].StartOfLine = false; - } - fcall main; - } - - action beginTemplateControl { - token(TokenTemplateControl); - braces++; - retBraces = append(retBraces, braces); - if len(heredocs) > 0 { - heredocs[len(heredocs)-1].StartOfLine = false; - } - fcall main; - } - - action openBrace { - token(TokenOBrace); - braces++; - } - - action closeBrace { - if len(retBraces) > 0 && retBraces[len(retBraces)-1] == braces { - token(TokenTemplateSeqEnd); - braces--; - retBraces = retBraces[0:len(retBraces)-1] - fret; - } else { - token(TokenCBrace); - braces--; - } - } - - action closeTemplateSeqEatWhitespace { - // Only consume from the retBraces stack and return if we are at - // a suitable brace nesting level, otherwise things will get - // confused. (Not entering this branch indicates a syntax error, - // which we will catch in the parser.) - if len(retBraces) > 0 && retBraces[len(retBraces)-1] == braces { - token(TokenTemplateSeqEnd); - braces--; - retBraces = retBraces[0:len(retBraces)-1] - fret; - } else { - // We intentionally generate a TokenTemplateSeqEnd here, - // even though the user apparently wanted a brace, because - // we want to allow the parser to catch the incorrect use - // of a ~} to balance a generic opening brace, rather than - // a template sequence. - token(TokenTemplateSeqEnd); - braces--; - } - } - - TemplateInterp = "${" ("~")?; - TemplateControl = "%{" ("~")?; - EndStringTmpl = '"'; - NewlineChars = ("\r"|"\n"); - NewlineCharsSeq = NewlineChars+; - StringLiteralChars = (AnyUTF8 - NewlineChars); - TemplateIgnoredNonBrace = (^'{' %{ fhold; }); - TemplateNotInterp = '$' (TemplateIgnoredNonBrace | TemplateInterp); - TemplateNotControl = '%' (TemplateIgnoredNonBrace | TemplateControl); - QuotedStringLiteralWithEsc = ('\\' StringLiteralChars) | (StringLiteralChars - ("$" | '%' | '"' | "\\")); - TemplateStringLiteral = ( - (TemplateNotInterp) | - (TemplateNotControl) | - (QuotedStringLiteralWithEsc)+ - ); - HeredocStringLiteral = ( - (TemplateNotInterp) | - (TemplateNotControl) | - (StringLiteralChars - ("$" | '%'))* - ); - BareStringLiteral = ( - (TemplateNotInterp) | - (TemplateNotControl) | - (StringLiteralChars - ("$" | '%'))* - ) Newline?; - - stringTemplate := |* - TemplateInterp => beginTemplateInterp; - TemplateControl => beginTemplateControl; - EndStringTmpl => endStringTemplate; - TemplateStringLiteral => { token(TokenQuotedLit); }; - NewlineCharsSeq => { token(TokenQuotedNewline); }; - AnyUTF8 => { token(TokenInvalid); }; - BrokenUTF8 => { token(TokenBadUTF8); }; - *|; - - heredocTemplate := |* - TemplateInterp => beginTemplateInterp; - TemplateControl => beginTemplateControl; - HeredocStringLiteral EndOfLine => heredocLiteralEOL; - HeredocStringLiteral => heredocLiteralMidline; - BrokenUTF8 => { token(TokenBadUTF8); }; - *|; - - bareTemplate := |* - TemplateInterp => beginTemplateInterp; - TemplateControl => beginTemplateControl; - BareStringLiteral => bareTemplateLiteral; - BrokenUTF8 => { token(TokenBadUTF8); }; - *|; - - identOnly := |* - Ident => { token(TokenIdent) }; - BrokenUTF8 => { token(TokenBadUTF8) }; - AnyUTF8 => { token(TokenInvalid) }; - *|; - - main := |* - Spaces => {}; - NumberLit => { token(TokenNumberLit) }; - Ident => { 
token(TokenIdent) }; - - Comment => { token(TokenComment) }; - Newline => { token(TokenNewline) }; - - EqualOp => { token(TokenEqualOp); }; - NotEqual => { token(TokenNotEqual); }; - GreaterThanEqual => { token(TokenGreaterThanEq); }; - LessThanEqual => { token(TokenLessThanEq); }; - LogicalAnd => { token(TokenAnd); }; - LogicalOr => { token(TokenOr); }; - Ellipsis => { token(TokenEllipsis); }; - FatArrow => { token(TokenFatArrow); }; - SelfToken => { selfToken() }; - - "{" => openBrace; - "}" => closeBrace; - - "~}" => closeTemplateSeqEatWhitespace; - - BeginStringTmpl => beginStringTemplate; - BeginHeredocTmpl => beginHeredocTemplate; - - BrokenUTF8 => { token(TokenBadUTF8) }; - AnyUTF8 => { token(TokenInvalid) }; - *|; - - }%% - - // Ragel state - p := 0 // "Pointer" into data - pe := len(data) // End-of-data "pointer" - ts := 0 - te := 0 - act := 0 - eof := pe - var stack []int - var top int - - var cs int // current state - switch mode { - case scanNormal: - cs = hcltok_en_main - case scanTemplate: - cs = hcltok_en_bareTemplate - case scanIdentOnly: - cs = hcltok_en_identOnly - default: - panic("invalid scanMode") - } - - braces := 0 - var retBraces []int // stack of brace levels that cause us to use fret - var heredocs []heredocInProgress // stack of heredocs we're currently processing - - %%{ - prepush { - stack = append(stack, 0); - } - postpop { - stack = stack[:len(stack)-1]; - } - }%% - - // Make Go compiler happy - _ = ts - _ = te - _ = act - _ = eof - - token := func (ty TokenType) { - f.emitToken(ty, ts, te) - } - selfToken := func () { - b := data[ts:te] - if len(b) != 1 { - // should never happen - panic("selfToken only works for single-character tokens") - } - f.emitToken(TokenType(b[0]), ts, te) - } - - %%{ - write init nocs; - write exec; - }%% - - // If we fall out here without being in a final state then we've - // encountered something that the scanner can't match, which we'll - // deal with as an invalid. - if cs < hcltok_first_final { - if mode == scanTemplate && len(stack) == 0 { - // If we're scanning a bare template then any straggling - // top-level stuff is actually literal string, rather than - // invalid. This handles the case where the template ends - // with a single "$" or "%", which trips us up because we - // want to see another character to decide if it's a sequence - // or an escape. - f.emitToken(TokenStringLit, ts, len(data)) - } else { - f.emitToken(TokenInvalid, ts, len(data)) - } - } - - // We always emit a synthetic EOF token at the end, since it gives the - // parser position information for an "unexpected EOF" diagnostic. - f.emitToken(TokenEOF, len(data), len(data)) - - return f.Tokens -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/spec.md b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/spec.md deleted file mode 100644 index d7faeedce..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/spec.md +++ /dev/null @@ -1,926 +0,0 @@ -# HCL Native Syntax Specification - -This is the specification of the syntax and semantics of the native syntax -for HCL. HCL is a system for defining configuration languages for applications. -The HCL information model is designed to support multiple concrete syntaxes -for configuration, but this native syntax is considered the primary format -and is optimized for human authoring and maintenance, as opposed to machine -generation of configuration. 
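For orientation before the formal sections, here is a small editorial example (not part of the original specification text) of the kind of configuration the native syntax describes, combining structural elements with an expression value:

```hcl
io_mode = "async"

service "http" "web_proxy" {
  listen_addr = "127.0.0.1:8080"

  process "main" {
    command = ["/usr/local/bin/awesome-app", "server"]
  }
}
```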
- -The language consists of three integrated sub-languages: - -- The _structural_ language defines the overall hierarchical configuration - structure, and is a serialization of HCL bodies, blocks and attributes. - -- The _expression_ language is used to express attribute values, either as - literals or as derivations of other values. - -- The _template_ language is used to compose values together into strings, - as one of several types of expression in the expression language. - -In normal use these three sub-languages are used together within configuration -files to describe an overall configuration, with the structural language -being used at the top level. The expression and template languages can also -be used in isolation, to implement features such as REPLs, debuggers, and -integration into more limited HCL syntaxes such as the JSON profile. - -## Syntax Notation - -Within this specification a semi-formal notation is used to illustrate the -details of syntax. This notation is intended for human consumption rather -than machine consumption, with the following conventions: - -- A naked name starting with an uppercase letter is a global production, - common to all of the syntax specifications in this document. -- A naked name starting with a lowercase letter is a local production, - meaningful only within the specification where it is defined. -- Double and single quotes (`"` and `'`) are used to mark literal character - sequences, which may be either punctuation markers or keywords. -- The default operator for combining items, which has no punctuation, - is concatenation. -- The symbol `|` indicates that any one of its left and right operands may - be present. -- The `*` symbol indicates zero or more repetitions of the item to its left. -- The `?` symbol indicates zero or one of the item to its left. -- Parentheses (`(` and `)`) are used to group items together to apply - the `|`, `*` and `?` operators to them collectively. - -The grammar notation does not fully describe the language. The prose may -augment or conflict with the illustrated grammar. In case of conflict, prose -has priority. - -## Source Code Representation - -Source code is Unicode text expressed in the UTF-8 encoding. The language -itself does not perform Unicode normalization, so syntax features such as -identifiers are sequences of Unicode code points and so e.g. a precombined -accented character is distinct from a letter associated with a combining -accent. (String literals have some special handling with regard to Unicode -normalization which will be covered later in the relevant section.) - -UTF-8 encoded Unicode byte order marks are not permitted. Invalid or -non-normalized UTF-8 encoding is always a parse error. - -## Lexical Elements - -### Comments and Whitespace - -Comments and whitespace are recognized as lexical elements but are ignored -except as described below. - -Whitespace is defined as a sequence of zero or more space characters -(U+0020). Newline sequences (either U+000A or U+000D followed by U+000A) -are _not_ considered whitespace but are ignored as such in certain contexts. - -Horizontal tab characters (U+0009) are not considered to be whitespace and -are not valid within HCL native syntax. - -Comments serve as program documentation and come in two forms: - -- _Line comments_ start with either the `//` or `#` sequences and end with - the next newline sequence. A line comment is considered equivalent to a - newline sequence.
- -- _Inline comments_ start with the `/*` sequence and end with the `*/` - sequence, and may have any characters within except the ending sequence. - An inline comment is considered equivalent to a whitespace sequence. - -Comments and whitespace cannot begin within other comments, or within -template literals except inside an interpolation sequence or template directive. - -### Identifiers - -Identifiers name entities such as blocks, attributes and expression variables. -Identifiers are interpreted as per [UAX #31][uax31] Section 2. Specifically, -their syntax is defined in terms of the `ID_Start` and `ID_Continue` -character properties as follows: - -```ebnf -Identifier = ID_Start (ID_Continue | '-')*; -``` - -The Unicode specification provides the normative requirements for identifier -parsing. Non-normatively, the spirit of this specification is that `ID_Start` -consists of Unicode letters and certain unambiguous punctuation tokens, while -`ID_Continue` augments that set with Unicode digits, combining marks, etc. - -The dash character `-` is additionally allowed in identifiers, even though -that is not part of the Unicode `ID_Continue` definition. This is to allow -attribute names and block type names to contain dashes, although underscores -as word separators are considered the idiomatic usage. - -[uax31]: http://unicode.org/reports/tr31/ "Unicode Identifier and Pattern Syntax" - -### Keywords - -There are no globally-reserved words, but in some contexts certain identifiers -are reserved to function as keywords. These are discussed further in the -relevant documentation sections that follow. In such situations, the -identifier's role as a keyword supersedes any other valid interpretation that -may be possible. Outside of these specific situations, the keywords have no -special meaning and are interpreted as regular identifiers. - -### Operators and Delimiters - -The following character sequences represent operators, delimiters, and other -special tokens: - -``` -+ && == < : { [ ( ${ -- || != > ? } ] ) %{ -* ! <= = . -/ >= => , -% ... -``` - -### Numeric Literals - -A numeric literal is a decimal representation of a -real number. It has an integer part, an optional fractional part, -and an optional exponent part. - -```ebnf -NumericLit = decimal+ ("." decimal+)? (expmark decimal+)?; -decimal = '0' .. '9'; -expmark = ('e' | 'E') ("+" | "-")?; -``` - -## Structural Elements - -The structural language consists of syntax representing the following -constructs: - -- _Attributes_, which assign a value to a specified name. -- _Blocks_, which create a child body annotated by a type and optional labels. -- _Body Content_, which consists of a collection of attributes and blocks. - -These constructs correspond to the similarly-named concepts in the -language-agnostic HCL information model. - -```ebnf -ConfigFile = Body; -Body = (Attribute | Block | OneLineBlock)*; -Attribute = Identifier "=" Expression Newline; -Block = Identifier (StringLit|Identifier)* "{" Newline Body "}" Newline; -OneLineBlock = Identifier (StringLit|Identifier)* "{" (Identifier "=" Expression)? "}" Newline; -``` - -### Configuration Files - -A _configuration file_ is a sequence of characters whose top-level is -interpreted as a Body. - -### Bodies - -A _body_ is a collection of associated attributes and blocks. The meaning of -this association is defined by the calling application. - -### Attribute Definitions - -An _attribute definition_ assigns a value to a particular attribute name within -a body, as in the sketch below.
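The following brief sketch (an editorial illustration, not part of the original specification text) shows each of the structural forms named above in HCL native syntax:

```hcl
# Attribute definition: assigns the result of an expression to a name.
count = 3

# Block with a type and one label, introducing a nested child body.
network "private" {
  cidr_block = "10.0.0.0/16"
}

# One-line block, which may contain at most one attribute definition.
logging { enabled = true }
```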
Each distinct attribute name may be defined no more than once within a -single body. - -The attribute value is given as an expression, which is retained literally -for later evaluation by the calling application. - -### Blocks - -A _block_ creates a child body that is annotated with a block _type_ and -zero or more block _labels_. Blocks create a structural hierarchy which can be -interpreted by the calling application. - -Block labels can either be quoted literal strings or naked identifiers. - -## Expressions - -The expression sub-language is used within attribute definitions to specify -values. - -```ebnf -Expression = ( - ExprTerm | - Operation | - Conditional -); -``` - -### Types - -The value types used within the expression language are those defined by the -syntax-agnostic HCL information model. An expression may return any valid -type, but only a subset of the available types have first-class syntax. -A calling application may make other types available via _variables_ and -_functions_. - -### Expression Terms - -Expression _terms_ are the operands for unary and binary expressions, as well -as acting as expressions in their own right. - -```ebnf -ExprTerm = ( - LiteralValue | - CollectionValue | - TemplateExpr | - VariableExpr | - FunctionCall | - ForExpr | - ExprTerm Index | - ExprTerm GetAttr | - ExprTerm Splat | - "(" Expression ")" -); -``` - -The productions for these different term types are given in their corresponding -sections. - -Between the `(` and `)` characters denoting a sub-expression, newline -characters are ignored as whitespace. - -### Literal Values - -A _literal value_ immediately represents a particular value of a primitive -type. - -```ebnf -LiteralValue = ( - NumericLit | - "true" | - "false" | - "null" -); -``` - -- Numeric literals represent values of type _number_. -- The `true` and `false` keywords represent values of type _bool_. -- The `null` keyword represents a null value of the dynamic pseudo-type. - -String literals are not directly available in the expression sub-language, but -are available via the template sub-language, which can in turn be incorporated -via _template expressions_. - -### Collection Values - -A _collection value_ combines zero or more other expressions to produce a -collection value. - -```ebnf -CollectionValue = tuple | object; -tuple = "[" ( - (Expression ("," Expression)* ","?)? -) "]"; -object = "{" ( - (objectelem ("," objectelem)* ","?)? -) "}"; -objectelem = (Identifier | Expression) "=" Expression; -``` - -Only tuple and object values can be directly constructed via native syntax. -Tuple and object values can in turn be converted to list, set and map values -with other operations, which behaves as defined by the syntax-agnostic HCL -information model. - -When specifying an object element, an identifier is interpreted as a literal -attribute name as opposed to a variable reference. To populate an item key -from a variable, use parentheses to disambiguate: - -- `{foo = "baz"}` is interpreted as an attribute literally named `foo`. -- `{(foo) = "baz"}` is interpreted as an attribute whose name is taken - from the variable named `foo`. - -Between the open and closing delimiters of these sequences, newline sequences -are ignored as whitespace. - -There is a syntax ambiguity between _for expressions_ and collection values -whose first element is a reference to a variable named `for`. 
The -_for expression_ interpretation has priority, so to produce a tuple whose -first element is the value of a variable named `for`, or an object with a -key named `for`, use parentheses to disambiguate: - -- `[for, foo, baz]` is a syntax error. -- `[(for), foo, baz]` is a tuple whose first element is the value of variable - `for`. -- `{for: 1, baz: 2}` is a syntax error. -- `{(for): 1, baz: 2}` is an object with an attribute literally named `for`. -- `{baz: 2, for: 1}` is equivalent to the previous example, and resolves the - ambiguity by reordering. - -### Template Expressions - -A _template expression_ embeds a program written in the template sub-language -as an expression. Template expressions come in two forms: - -- A _quoted_ template expression is delimited by quote characters (`"`) and - defines a template as a single-line expression with escape characters. -- A _heredoc_ template expression is introduced by a `<<` sequence and - defines a template via a multi-line sequence terminated by a user-chosen - delimiter. - -In both cases the template interpolation and directive syntax is available for -use within the delimiters, and any text outside of these special sequences is -interpreted as a literal string. - -In _quoted_ template expressions any literal string sequences within the -template behave in a special way: literal newline sequences are not permitted -and instead _escape sequences_ can be included, starting with the -backslash `\`: - -``` - \n Unicode newline control character - \r Unicode carriage return control character - \t Unicode tab control character - \" Literal quote mark, used to prevent interpretation as end of string - \\ Literal backslash, used to prevent interpretation as escape sequence - \uNNNN Unicode character from Basic Multilingual Plane (NNNN is four hexadecimal digits) - \UNNNNNNNN Unicode character from supplementary planes (NNNNNNNN is eight hexadecimal digits) -``` - -The _heredoc_ template expression type is introduced by either `<<` or `<<-`, -followed by an identifier. The template expression ends when the given -identifier subsequently appears again on a line of its own. - -If a heredoc template is introduced with the `<<-` symbol, any literal string -at the start of each line is analyzed to find the minimum number of leading -spaces, and then that number of prefix spaces is removed from all line-leading -literal strings. The final closing marker may also have an arbitrary number -of spaces preceding it on its line. - -```ebnf -TemplateExpr = quotedTemplate | heredocTemplate; -quotedTemplate = (as defined in prose above); -heredocTemplate = ( - ("<<" | "<<-") Identifier Newline - (content as defined in prose above) - Identifier Newline -); -``` - -A quoted template expression containing only a single literal string serves -as a syntax for defining literal string _expressions_. In certain contexts -the template syntax is restricted in this manner: - -```ebnf -StringLit = '"' (quoted literals as defined in prose above) '"'; -``` - -The `StringLit` production permits the escape sequences discussed for quoted -template expressions as above, but does _not_ permit template interpolation -or directive sequences. - -### Variables and Variable Expressions - -A _variable_ is a value that has been assigned a symbolic name. Variables are -made available for use in expressions by the calling application, by populating -the _global scope_ used for expression evaluation. 
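As a brief non-normative illustration, assuming a calling application that has populated the global scope with variables named `instance_type` and `region`:

```hcl
# A bare variable expression: the value of `instance_type` is used directly.
type = instance_type

# The same scope is consulted from within a template expression.
location = "${region}-primary"
```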
- -Variables can also be created by expressions themselves; doing so always creates -a _child scope_ that incorporates the variables from its parent scope but -(re-)defines zero or more names with new values. - -The value of a variable is accessed using a _variable expression_, which is -a standalone `Identifier` whose name corresponds to a defined variable: - -```ebnf -VariableExpr = Identifier; -``` - -Variables in a particular scope are immutable, but child scopes may _hide_ -a variable from an ancestor scope by defining a new variable of the same name. -When looking up variables, the most locally-defined variable of the given name -is used, and ancestor-scoped variables of the same name cannot be accessed. - -No direct syntax is provided for declaring or assigning variables, but other -expression constructs implicitly create child scopes and define variables as -part of their evaluation. - -### Functions and Function Calls - -A _function_ is an operation that has been assigned a symbolic name. Functions -are made available for use in expressions by the calling application, by -populating the _function table_ used for expression evaluation. - -The namespace of functions is distinct from the namespace of variables. A -function and a variable may share the same name with no implication that they -are in any way related. - -A function can be executed via a _function call_ expression: - -```ebnf -FunctionCall = Identifier "(" Arguments ")"; -Arguments = ( - () | - (Expression ("," Expression)* ("," | "...")?) -); -``` - -The definition of functions and the semantics of calling them are defined by -the language-agnostic HCL information model. The given arguments are mapped -onto the function's _parameters_ and the result of a function call expression -is the return value of the named function when given those arguments. - -If the final argument expression is followed by the ellipsis symbol (`...`), -the final argument expression must evaluate to either a list or tuple value. -The elements of the value are each mapped to a single parameter of the -named function, beginning at the first parameter remaining after all other -argument expressions have been mapped. - -Within the parentheses that delimit the function arguments, newline sequences -are ignored as whitespace. - -### For Expressions - -A _for expression_ constructs a new collection by projecting -the items from another collection. - -```ebnf -ForExpr = forTupleExpr | forObjectExpr; -forTupleExpr = "[" forIntro Expression forCond? "]"; -forObjectExpr = "{" forIntro Expression "=>" Expression "..."? forCond? "}"; -forIntro = "for" Identifier ("," Identifier)? "in" Expression ":"; -forCond = "if" Expression; -``` - -The punctuation used to delimit a for expression decides whether it will produce -a tuple value (`[` and `]`) or an object value (`{` and `}`). - -The "introduction" is equivalent in both cases: the keyword `for` followed by -either one or two identifiers separated by a comma which define the temporary -variable names used for iteration, followed by the keyword `in` and then -an expression that must evaluate to a value that can be iterated. The -introduction is then terminated by the colon (`:`) symbol. - -If only one identifier is provided, it is the name of a variable that will -be temporarily assigned the value of each element during iteration. If both -are provided, the first is the key and the second is the value. - -Tuple, object, list, map, and set types are iterable.
The type of collection -used defines how the key and value variables are populated: - -- For tuple and list types, the _key_ is the zero-based index into the - sequence for each element, and the _value_ is the element value. The - elements are visited in index order. -- For object and map types, the _key_ is the string attribute name or element - key, and the _value_ is the attribute or element value. The elements are - visited in the order defined by a lexicographic sort of the attribute names - or keys. -- For set types, the _key_ and _value_ are both the element value. The elements - are visited in an undefined but consistent order. - -The expression after the colon and (in the case of object `for`) the expression -after the `=>` are both evaluated once for each element of the source -collection, in a local scope that defines the key and value variable names -specified. - -The results of evaluating these expressions for each input element are used -to populate an element in the new collection. In the case of tuple `for`, the -single expression becomes an element, appending values to the tuple in visit -order. In the case of object `for`, the pair of expressions is used as an -attribute name and value respectively, creating an element in the resulting -object. - -In the case of object `for`, it is an error if two input elements produce -the same result from the attribute name expression, since duplicate -attributes are not possible. If the ellipsis symbol (`...`) appears -immediately after the value expression, this activates the grouping mode in -which each value in the resulting object is a _tuple_ of all of the values -that were produced against each distinct key. - -- `[for v in ["a", "b"]: v]` returns `["a", "b"]`. -- `[for i, v in ["a", "b"]: i]` returns `[0, 1]`. -- `{for i, v in ["a", "b"]: v => i}` returns `{a = 0, b = 1}`. -- `{for i, v in ["a", "a", "b"]: v => i}` produces an error, because attribute - `a` is defined twice. -- `{for i, v in ["a", "a", "b"]: v => i...}` returns `{a = [0, 1], b = [2]}`. - -If the `if` keyword is used after the element expression(s), it applies an -additional predicate that can be used to conditionally filter elements of -the source collection from consideration. The expression following `if` is -evaluated once for each source element, in the same scope used for the -element expression(s). It must evaluate to a boolean value; if `true`, the -element will be evaluated as normal, while if `false` the element will be -skipped. - -- `[for i, v in ["a", "b", "c"]: v if i < 2]` returns `["a", "b"]`. - -If the collection value, element expression(s) or condition expression return -unknown values that are otherwise type-valid, the result is a value of the -dynamic pseudo-type. - -### Index Operator - -The _index_ operator returns the value of a single element of a collection -value. It is a postfix operator and can be applied to any value that has -a tuple, object, map, or list type. - -```ebnf -Index = "[" Expression "]"; -``` - -The expression delimited by the brackets is the _key_ by which an element -will be looked up. - -If the index operator is applied to a value of tuple or list type, the -key expression must be a non-negative integer number representing the -zero-based element index to access. If applied to a value of object or map -type, the key expression must be a string representing the attribute name -or element key, as in the sketch below.
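A brief non-normative illustration of both forms, assuming variables `servers` (a tuple or list value) and `ports` (an object or map value):

```hcl
first_server = servers[0]    # element index, zero-based
http_port    = ports["http"] # attribute name / element key
```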
If the given key value is not of the appropriate type, a -conversion is attempted using the conversion rules from the HCL -syntax-agnostic information model. - -An error is produced if the given key expression does not correspond to -an element in the collection, either because it is of an unconvertible type, -because it is outside the range of elements for a tuple or list, or because -the given attribute or key does not exist. - -If either the collection or the key is an unknown value of an -otherwise-suitable type, the return value is an unknown value whose type -matches what type would be returned given known values, or a value of the -dynamic pseudo-type if type information alone cannot determine a suitable -return type. - -Within the brackets that delimit the index key, newline sequences are ignored -as whitespace. - -### Attribute Access Operator - -The _attribute access_ operator returns the value of a single attribute in -an object value. It is a postfix operator and can be applied to any value -that has an object type. - -```ebnf -GetAttr = "." Identifier; -``` - -The given identifier is interpreted as the name of the attribute to access. -An error is produced if the object to which the operator is applied does not -have an attribute with the given name. - -If the object is an unknown value of a type that has the attribute named, the -result is an unknown value of the attribute's type. - -### Splat Operators - -The _splat operators_ allow convenient access to attributes or elements of -elements in a tuple, list, or set value. - -There are two kinds of "splat" operator: - -- The _attribute-only_ splat operator supports only attribute lookups into - the elements from a list, but supports an arbitrary number of them. - -- The _full_ splat operator additionally supports indexing into the elements - from a list, and allows any combination of attribute access and index - operations. - -```ebnf -Splat = attrSplat | fullSplat; -attrSplat = "." "*" GetAttr*; -fullSplat = "[" "*" "]" (GetAttr | Index)*; -``` - -The splat operators can be thought of as shorthands for common operations that -could otherwise be performed using _for expressions_: - -- `tuple.*.foo.bar[0]` is approximately equivalent to - `[for v in tuple: v.foo.bar][0]`. -- `tuple[*].foo.bar[0]` is approximately equivalent to - `[for v in tuple: v.foo.bar[0]]`. - -Note the difference in how the trailing index operator is interpreted in -each case. This different interpretation is the key difference between the -_attribute-only_ and _full_ splat operators. - -Splat operators have one additional behavior compared to the equivalent -_for expressions_ shown above: if a splat operator is applied to a value that -is _not_ of tuple, list, or set type, the value is coerced automatically into -a single-value list of the value type: - -- `any_object.*.id` is equivalent to `[any_object.id]`, assuming that `any_object` - is a single object. -- `any_number.*` is equivalent to `[any_number]`, assuming that `any_number` - is a single number. - -If applied to a null value that is not tuple, list, or set, the result is always -an empty tuple, which allows conveniently converting a possibly-null scalar -value into a tuple of zero or one elements. It is illegal to apply a splat -operator to a null value of tuple, list, or set type. - -### Operations - -Operations apply a particular operator to either one or two expression terms.
- -```ebnf -Operation = unaryOp | binaryOp; -unaryOp = ("-" | "!") ExprTerm; -binaryOp = ExprTerm binaryOperator ExprTerm; -binaryOperator = compareOperator | arithmeticOperator | logicOperator; -compareOperator = "==" | "!=" | "<" | ">" | "<=" | ">="; -arithmeticOperator = "+" | "-" | "*" | "/" | "%"; -logicOperator = "&&" | "||" | "!"; -``` - -The unary operators have the highest precedence. - -The binary operators are grouped into the following precedence levels: - -``` -Level Operators - 6 * / % - 5 + - - 4 > >= < <= - 3 == != - 2 && - 1 || -``` - -Higher values of "level" bind tighter. Operators within the same precedence -level have left-to-right associativity. For example, `x / y * z` is equivalent -to `(x / y) * z`. - -### Comparison Operators - -Comparison operators always produce boolean values, as a result of testing -the relationship between two values. - -The two equality operators apply to values of any type: - -``` -a == b equal -a != b not equal -``` - -Two values are equal if they are of identical types and their values are -equal as defined in the HCL syntax-agnostic information model. The equality -operators are commutative and opposite, such that `(a == b) == !(a != b)` -and `(a == b) == (b == a)` for all values `a` and `b`. - -The four numeric comparison operators apply only to numbers: - -``` -a < b less than -a <= b less than or equal to -a > b greater than -a >= b greater than or equal to -``` - -If either operand of a comparison operator is a correctly-typed unknown value -or a value of the dynamic pseudo-type, the result is an unknown boolean. - -### Arithmetic Operators - -Arithmetic operators apply only to number values and always produce number -values as results. - -``` -a + b sum (addition) -a - b difference (subtraction) -a * b product (multiplication) -a / b quotient (division) -a % b remainder (modulo) --a negation -``` - -Arithmetic operations are considered to be performed in an arbitrary-precision -number space. - -If either operand of an arithmetic operator is an unknown number or a value -of the dynamic pseudo-type, the result is an unknown number. - -### Logic Operators - -Logic operators apply only to boolean values and always produce boolean values -as results. - -``` -a && b logical AND -a || b logical OR -!a logical NOT -``` - -If either operand of a logic operator is an unknown bool value or a value -of the dynamic pseudo-type, the result is an unknown bool value. - -### Conditional Operator - -The conditional operator allows selecting from one of two expressions based on -the outcome of a boolean expression. - -```ebnf -Conditional = Expression "?" Expression ":" Expression; -``` - -The first expression is the _predicate_, which is evaluated and must produce -a boolean result. If the predicate value is `true`, the result of the second -expression is the result of the conditional. If the predicate value is -`false`, the result of the third expression is the result of the conditional. - -The second and third expressions must be of the same type or must be able to -unify into a common type using the type unification rules defined in the -HCL syntax-agnostic information model. This unified type is the result type -of the conditional, with both expressions converted as necessary to the -unified type. - -If the predicate is an unknown boolean value or a value of the dynamic -pseudo-type then the result is an unknown value of the unified type of the -other two expressions.
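A small non-normative illustration of the conditional and its type unification, assuming a numeric variable `count`:

```hcl
# Both result expressions unify to string, so the conditional as a
# whole produces a string value.
label = count == 1 ? "1 item" : "${count} items"
```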
- -If either the second or third expressions produce errors when evaluated, -these errors are passed through only if the erroneous expression is selected. -This allows for expressions such as -`length(some_list) > 0 ? some_list[0] : default` (given some suitable `length` -function) without producing an error when the predicate is `false`. - -## Templates - -The template sub-language is used within template expressions to concisely -combine strings and other values to produce other strings. It can also be -used in isolation as a standalone template language. - -```ebnf -Template = ( - TemplateLiteral | - TemplateInterpolation | - TemplateDirective -)*; -TemplateDirective = TemplateIf | TemplateFor; -``` - -A template behaves like an expression that always returns a string value. -The different elements of the template are evaluated and combined into a -single string to return. If any of the elements produce an unknown string -or a value of the dynamic pseudo-type, the result is an unknown string. - -An important use-case for standalone templates is to enable the use of -expressions in alternative HCL syntaxes where a native expression grammar is -not available. For example, the HCL JSON profile treats the values of JSON -strings as standalone templates when attributes are evaluated in expression -mode. - -### Template Literals - -A template literal is a literal sequence of characters to include in the -resulting string. When the template sub-language is used standalone, a -template literal can contain any Unicode character, with the exception -of the sequences that introduce interpolations and directives, and of the -sequences that escape those introductions. - -The interpolation and directive introductions are escaped by doubling their -leading characters. The `${` sequence is escaped as `$${` and the `%{` -sequence is escaped as `%%{`. - -When the template sub-language is embedded in the expression language via -_template expressions_, additional constraints and transforms are applied to -template literals as described in the definition of template expressions. - -The value of a template literal can be modified by _strip markers_ in any -interpolations or directives that are adjacent to it. A strip marker is -a tilde (`~`) placed immediately after the opening `{` or before the closing -`}` of a template sequence: - -- `hello ${~ "world" }` produces `"helloworld"`. - `%{ if true ~} hello %{~ endif }` produces `"hello"`. - -When a strip marker is present, any spaces adjacent to it in the corresponding -string literal (if any) are removed before producing the final value. Space -characters are interpreted as per Unicode's definition. - -Stripping is done at syntax level rather than value level. Values returned -by interpolations or directives are not subject to stripping: - -- `${"hello" ~}${" world"}` produces `"hello world"`, and not `"helloworld"`, - because the space is not in a template literal directly adjacent to the - strip marker. - -### Template Interpolations - -An _interpolation sequence_ evaluates an expression (written in the -expression sub-language), converts the result to a string value, and -replaces itself with the resulting string. - -```ebnf -TemplateInterpolation = ("${" | "${~") Expression ("}" | "~}"); -``` - -If the expression result cannot be converted to a string, an error is -produced.
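A short non-normative illustration of interpolation combined with a strip marker, assuming a variable `name` whose value is `"world"`:

```hcl
a = "Hello, ${name}!"    # produces "Hello, world!"
b = "Hello, ${~ name }!" # the strip marker removes the adjacent
                         # literal space, producing "Hello,world!"
```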
-
-### Template If Directive
-
-The template `if` directive is the template equivalent of the
-_conditional expression_, allowing selection of one of two sub-templates based
-on the value of a predicate expression.
-
-```ebnf
-TemplateIf = (
-    ("%{" | "%{~") "if" Expression ("}" | "~}")
-    Template
-    (
-        ("%{" | "%{~") "else" ("}" | "~}")
-        Template
-    )?
-    ("%{" | "%{~") "endif" ("}" | "~}")
-);
-```
-
-The evaluation of the `if` directive is equivalent to the conditional
-expression, with the following exceptions:
-
-- The two sub-templates always produce strings, and thus the result value is
-  also always a string.
-- The `else` clause may be omitted, in which case the conditional's third
-  expression result is implied to be the empty string.
-
-### Template For Directive
-
-The template `for` directive is the template equivalent of the _for expression_,
-producing zero or more copies of its sub-template based on the elements of
-a collection.
-
-```ebnf
-TemplateFor = (
-    ("%{" | "%{~") "for" Identifier ("," Identifier)? "in" Expression ("}" | "~}")
-    Template
-    ("%{" | "%{~") "endfor" ("}" | "~}")
-);
-```
-
-The evaluation of the `for` directive is equivalent to the _for expression_
-when producing a tuple, with the following exceptions:
-
-- The sub-template always produces a string.
-- There is no equivalent of the "if" clause on the for expression.
-- The elements of the resulting tuple are all converted to strings and
-  concatenated to produce a flat string result.
-
-### Template Interpolation Unwrapping
-
-As a special case, a template that consists only of a single interpolation,
-with no surrounding literals, directives or other interpolations, is
-"unwrapped". In this case, the result of the interpolation expression is
-returned verbatim, without conversion to string.
-
-This special case exists primarily to enable the native template language
-to be used inside strings in alternative HCL syntaxes that lack a first-class
-template or expression syntax. Unwrapping allows arbitrary expressions to be
-used to populate attributes when strings in such languages are interpreted
-as templates.
-
-- `${true}` produces the boolean value `true`
-- `${"${true}"}` produces the boolean value `true`, because both the inner
-  and outer interpolations are subject to unwrapping.
-- `hello ${true}` produces the string `"hello true"`
-- `${""}${true}` produces the string `"true"` because there are two
-  interpolation sequences, even though one produces an empty result.
-- `%{ for v in [true] }${v}%{ endfor }` produces the string `true` because
-  the presence of the `for` directive circumvents the unwrapping even though
-  the final result is a single value.
-
-In some contexts this unwrapping behavior may be circumvented by the calling
-application, by converting the final template result to string. This is
-necessary, for example, if a standalone template is being used to produce
-the direct contents of a file, since the result in that case must always be a
-string.
-
-## Static Analysis
-
-The HCL static analysis operations are implemented for some expression types
-in the native syntax, as described in the following sections.
-
-A goal for static analysis of the native syntax is for the interpretation to
-be as consistent as possible with the dynamic evaluation interpretation of
-the given expression, though some deviations are intentionally made in order
-to maximize the potential for analysis.
-
-### Static List
-
-The tuple construction syntax can be interpreted as a static list.
All of -the expression elements given are returned as the static list elements, -with no further interpretation. - -### Static Map - -The object construction syntax can be interpreted as a static map. All of the -key/value pairs given are returned as the static pairs, with no further -interpretation. - -The usual requirement that an attribute name be interpretable as a string -does not apply to this static analysis, allowing callers to provide map-like -constructs with different key types by building on the map syntax. - -### Static Call - -The function call syntax can be interpreted as a static call. The called -function name is returned verbatim and the given argument expressions are -returned as the static arguments, with no further interpretation. - -### Static Traversal - -A variable expression and any attached attribute access operations and -constant index operations can be interpreted as a static traversal. - -The keywords `true`, `false` and `null` can also be interpreted as -static traversals, behaving as if they were references to variables of those -names, to allow callers to redefine the meaning of those keywords in certain -contexts. diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/structure.go b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/structure.go deleted file mode 100644 index 476025d1b..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/structure.go +++ /dev/null @@ -1,394 +0,0 @@ -package hclsyntax - -import ( - "fmt" - "strings" - - "github.com/hashicorp/hcl2/hcl" -) - -// AsHCLBlock returns the block data expressed as a *hcl.Block. -func (b *Block) AsHCLBlock() *hcl.Block { - if b == nil { - return nil - } - - lastHeaderRange := b.TypeRange - if len(b.LabelRanges) > 0 { - lastHeaderRange = b.LabelRanges[len(b.LabelRanges)-1] - } - - return &hcl.Block{ - Type: b.Type, - Labels: b.Labels, - Body: b.Body, - - DefRange: hcl.RangeBetween(b.TypeRange, lastHeaderRange), - TypeRange: b.TypeRange, - LabelRanges: b.LabelRanges, - } -} - -// Body is the implementation of hcl.Body for the HCL native syntax. -type Body struct { - Attributes Attributes - Blocks Blocks - - // These are used with PartialContent to produce a "remaining items" - // body to return. They are nil on all bodies fresh out of the parser. - hiddenAttrs map[string]struct{} - hiddenBlocks map[string]struct{} - - SrcRange hcl.Range - EndRange hcl.Range // Final token of the body, for reporting missing items -} - -// Assert that *Body implements hcl.Body -var assertBodyImplBody hcl.Body = &Body{} - -func (b *Body) walkChildNodes(w internalWalkFunc) { - w(b.Attributes) - w(b.Blocks) -} - -func (b *Body) Range() hcl.Range { - return b.SrcRange -} - -func (b *Body) Content(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Diagnostics) { - content, remainHCL, diags := b.PartialContent(schema) - - // No we'll see if anything actually remains, to produce errors about - // extraneous items. - remain := remainHCL.(*Body) - - for name, attr := range b.Attributes { - if _, hidden := remain.hiddenAttrs[name]; !hidden { - var suggestions []string - for _, attrS := range schema.Attributes { - if _, defined := content.Attributes[attrS.Name]; defined { - continue - } - suggestions = append(suggestions, attrS.Name) - } - suggestion := nameSuggestion(name, suggestions) - if suggestion != "" { - suggestion = fmt.Sprintf(" Did you mean %q?", suggestion) - } else { - // Is there a block of the same name? 
- for _, blockS := range schema.Blocks { - if blockS.Type == name { - suggestion = fmt.Sprintf(" Did you mean to define a block of type %q?", name) - break - } - } - } - - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Unsupported argument", - Detail: fmt.Sprintf("An argument named %q is not expected here.%s", name, suggestion), - Subject: &attr.NameRange, - }) - } - } - - for _, block := range b.Blocks { - blockTy := block.Type - if _, hidden := remain.hiddenBlocks[blockTy]; !hidden { - var suggestions []string - for _, blockS := range schema.Blocks { - suggestions = append(suggestions, blockS.Type) - } - suggestion := nameSuggestion(blockTy, suggestions) - if suggestion != "" { - suggestion = fmt.Sprintf(" Did you mean %q?", suggestion) - } else { - // Is there an attribute of the same name? - for _, attrS := range schema.Attributes { - if attrS.Name == blockTy { - suggestion = fmt.Sprintf(" Did you mean to define argument %q? If so, use the equals sign to assign it a value.", blockTy) - break - } - } - } - - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Unsupported block type", - Detail: fmt.Sprintf("Blocks of type %q are not expected here.%s", blockTy, suggestion), - Subject: &block.TypeRange, - }) - } - } - - return content, diags -} - -func (b *Body) PartialContent(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Body, hcl.Diagnostics) { - attrs := make(hcl.Attributes) - var blocks hcl.Blocks - var diags hcl.Diagnostics - hiddenAttrs := make(map[string]struct{}) - hiddenBlocks := make(map[string]struct{}) - - if b.hiddenAttrs != nil { - for k, v := range b.hiddenAttrs { - hiddenAttrs[k] = v - } - } - if b.hiddenBlocks != nil { - for k, v := range b.hiddenBlocks { - hiddenBlocks[k] = v - } - } - - for _, attrS := range schema.Attributes { - name := attrS.Name - attr, exists := b.Attributes[name] - _, hidden := hiddenAttrs[name] - if hidden || !exists { - if attrS.Required { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Missing required argument", - Detail: fmt.Sprintf("The argument %q is required, but no definition was found.", attrS.Name), - Subject: b.MissingItemRange().Ptr(), - }) - } - continue - } - - hiddenAttrs[name] = struct{}{} - attrs[name] = attr.AsHCLAttribute() - } - - blocksWanted := make(map[string]hcl.BlockHeaderSchema) - for _, blockS := range schema.Blocks { - blocksWanted[blockS.Type] = blockS - } - - for _, block := range b.Blocks { - if _, hidden := hiddenBlocks[block.Type]; hidden { - continue - } - blockS, wanted := blocksWanted[block.Type] - if !wanted { - continue - } - - if len(block.Labels) > len(blockS.LabelNames) { - name := block.Type - if len(blockS.LabelNames) == 0 { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: fmt.Sprintf("Extraneous label for %s", name), - Detail: fmt.Sprintf( - "No labels are expected for %s blocks.", name, - ), - Subject: block.LabelRanges[0].Ptr(), - Context: hcl.RangeBetween(block.TypeRange, block.OpenBraceRange).Ptr(), - }) - } else { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: fmt.Sprintf("Extraneous label for %s", name), - Detail: fmt.Sprintf( - "Only %d labels (%s) are expected for %s blocks.", - len(blockS.LabelNames), strings.Join(blockS.LabelNames, ", "), name, - ), - Subject: block.LabelRanges[len(blockS.LabelNames)].Ptr(), - Context: hcl.RangeBetween(block.TypeRange, block.OpenBraceRange).Ptr(), - }) - } - continue - } - - if len(block.Labels) < 
len(blockS.LabelNames) { - name := block.Type - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: fmt.Sprintf("Missing %s for %s", blockS.LabelNames[len(block.Labels)], name), - Detail: fmt.Sprintf( - "All %s blocks must have %d labels (%s).", - name, len(blockS.LabelNames), strings.Join(blockS.LabelNames, ", "), - ), - Subject: &block.OpenBraceRange, - Context: hcl.RangeBetween(block.TypeRange, block.OpenBraceRange).Ptr(), - }) - continue - } - - blocks = append(blocks, block.AsHCLBlock()) - } - - // We hide blocks only after we've processed all of them, since otherwise - // we can't process more than one of the same type. - for _, blockS := range schema.Blocks { - hiddenBlocks[blockS.Type] = struct{}{} - } - - remain := &Body{ - Attributes: b.Attributes, - Blocks: b.Blocks, - - hiddenAttrs: hiddenAttrs, - hiddenBlocks: hiddenBlocks, - - SrcRange: b.SrcRange, - EndRange: b.EndRange, - } - - return &hcl.BodyContent{ - Attributes: attrs, - Blocks: blocks, - - MissingItemRange: b.MissingItemRange(), - }, remain, diags -} - -func (b *Body) JustAttributes() (hcl.Attributes, hcl.Diagnostics) { - attrs := make(hcl.Attributes) - var diags hcl.Diagnostics - - if len(b.Blocks) > 0 { - example := b.Blocks[0] - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: fmt.Sprintf("Unexpected %q block", example.Type), - Detail: "Blocks are not allowed here.", - Subject: &example.TypeRange, - }) - // we will continue processing anyway, and return the attributes - // we are able to find so that certain analyses can still be done - // in the face of errors. - } - - if b.Attributes == nil { - return attrs, diags - } - - for name, attr := range b.Attributes { - if _, hidden := b.hiddenAttrs[name]; hidden { - continue - } - attrs[name] = attr.AsHCLAttribute() - } - - return attrs, diags -} - -func (b *Body) MissingItemRange() hcl.Range { - return hcl.Range{ - Filename: b.SrcRange.Filename, - Start: b.SrcRange.Start, - End: b.SrcRange.Start, - } -} - -// Attributes is the collection of attribute definitions within a body. -type Attributes map[string]*Attribute - -func (a Attributes) walkChildNodes(w internalWalkFunc) { - for _, attr := range a { - w(attr) - } -} - -// Range returns the range of some arbitrary point within the set of -// attributes, or an invalid range if there are no attributes. -// -// This is provided only to complete the Node interface, but has no practical -// use. -func (a Attributes) Range() hcl.Range { - // An attributes doesn't really have a useful range to report, since - // it's just a grouping construct. So we'll arbitrarily take the - // range of one of the attributes, or produce an invalid range if we have - // none. In practice, there's little reason to ask for the range of - // an Attributes. - for _, attr := range a { - return attr.Range() - } - return hcl.Range{ - Filename: "", - } -} - -// Attribute represents a single attribute definition within a body. -type Attribute struct { - Name string - Expr Expression - - SrcRange hcl.Range - NameRange hcl.Range - EqualsRange hcl.Range -} - -func (a *Attribute) walkChildNodes(w internalWalkFunc) { - w(a.Expr) -} - -func (a *Attribute) Range() hcl.Range { - return a.SrcRange -} - -// AsHCLAttribute returns the block data expressed as a *hcl.Attribute. 
-func (a *Attribute) AsHCLAttribute() *hcl.Attribute { - if a == nil { - return nil - } - return &hcl.Attribute{ - Name: a.Name, - Expr: a.Expr, - - Range: a.SrcRange, - NameRange: a.NameRange, - } -} - -// Blocks is the list of nested blocks within a body. -type Blocks []*Block - -func (bs Blocks) walkChildNodes(w internalWalkFunc) { - for _, block := range bs { - w(block) - } -} - -// Range returns the range of some arbitrary point within the list of -// blocks, or an invalid range if there are no blocks. -// -// This is provided only to complete the Node interface, but has no practical -// use. -func (bs Blocks) Range() hcl.Range { - if len(bs) > 0 { - return bs[0].Range() - } - return hcl.Range{ - Filename: "", - } -} - -// Block represents a nested block structure -type Block struct { - Type string - Labels []string - Body *Body - - TypeRange hcl.Range - LabelRanges []hcl.Range - OpenBraceRange hcl.Range - CloseBraceRange hcl.Range -} - -func (b *Block) walkChildNodes(w internalWalkFunc) { - w(b.Body) -} - -func (b *Block) Range() hcl.Range { - return hcl.RangeBetween(b.TypeRange, b.CloseBraceRange) -} - -func (b *Block) DefRange() hcl.Range { - return hcl.RangeBetween(b.TypeRange, b.OpenBraceRange) -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/structure_at_pos.go b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/structure_at_pos.go deleted file mode 100644 index d8f023ba0..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/structure_at_pos.go +++ /dev/null @@ -1,118 +0,0 @@ -package hclsyntax - -import ( - "github.com/hashicorp/hcl2/hcl" -) - -// ----------------------------------------------------------------------------- -// The methods in this file are all optional extension methods that serve to -// implement the methods of the same name on *hcl.File when its root body -// is provided by this package. -// ----------------------------------------------------------------------------- - -// BlocksAtPos implements the method of the same name for an *hcl.File that -// is backed by a *Body. -func (b *Body) BlocksAtPos(pos hcl.Pos) []*hcl.Block { - list, _ := b.blocksAtPos(pos, true) - return list -} - -// InnermostBlockAtPos implements the method of the same name for an *hcl.File -// that is backed by a *Body. -func (b *Body) InnermostBlockAtPos(pos hcl.Pos) *hcl.Block { - _, innermost := b.blocksAtPos(pos, false) - return innermost.AsHCLBlock() -} - -// OutermostBlockAtPos implements the method of the same name for an *hcl.File -// that is backed by a *Body. -func (b *Body) OutermostBlockAtPos(pos hcl.Pos) *hcl.Block { - return b.outermostBlockAtPos(pos).AsHCLBlock() -} - -// blocksAtPos is the internal engine of both BlocksAtPos and -// InnermostBlockAtPos, which both need to do the same logic but return a -// differently-shaped result. -// -// list is nil if makeList is false, avoiding an allocation. Innermost is -// always set, and if the returned list is non-nil it will always match the -// final element from that list. 
-func (b *Body) blocksAtPos(pos hcl.Pos, makeList bool) (list []*hcl.Block, innermost *Block) { - current := b - -Blocks: - for current != nil { - for _, block := range current.Blocks { - wholeRange := hcl.RangeBetween(block.TypeRange, block.CloseBraceRange) - if wholeRange.ContainsPos(pos) { - innermost = block - if makeList { - list = append(list, innermost.AsHCLBlock()) - } - current = block.Body - continue Blocks - } - } - - // If we fall out here then none of the current body's nested blocks - // contain the position we are looking for, and so we're done. - break - } - - return -} - -// outermostBlockAtPos is the internal version of OutermostBlockAtPos that -// returns a hclsyntax.Block rather than an hcl.Block, allowing for further -// analysis if necessary. -func (b *Body) outermostBlockAtPos(pos hcl.Pos) *Block { - // This is similar to blocksAtPos, but simpler because we know it only - // ever needs to search the first level of nested blocks. - - for _, block := range b.Blocks { - wholeRange := hcl.RangeBetween(block.TypeRange, block.CloseBraceRange) - if wholeRange.ContainsPos(pos) { - return block - } - } - - return nil -} - -// AttributeAtPos implements the method of the same name for an *hcl.File -// that is backed by a *Body. -func (b *Body) AttributeAtPos(pos hcl.Pos) *hcl.Attribute { - return b.attributeAtPos(pos).AsHCLAttribute() -} - -// attributeAtPos is the internal version of AttributeAtPos that returns a -// hclsyntax.Block rather than an hcl.Block, allowing for further analysis if -// necessary. -func (b *Body) attributeAtPos(pos hcl.Pos) *Attribute { - searchBody := b - _, block := b.blocksAtPos(pos, false) - if block != nil { - searchBody = block.Body - } - - for _, attr := range searchBody.Attributes { - if attr.SrcRange.ContainsPos(pos) { - return attr - } - } - - return nil -} - -// OutermostExprAtPos implements the method of the same name for an *hcl.File -// that is backed by a *Body. -func (b *Body) OutermostExprAtPos(pos hcl.Pos) hcl.Expression { - attr := b.attributeAtPos(pos) - if attr == nil { - return nil - } - if !attr.Expr.Range().ContainsPos(pos) { - return nil - } - return attr.Expr -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/token.go b/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/token.go deleted file mode 100644 index 3d898fd73..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/token.go +++ /dev/null @@ -1,320 +0,0 @@ -package hclsyntax - -import ( - "bytes" - "fmt" - - "github.com/apparentlymart/go-textseg/textseg" - "github.com/hashicorp/hcl2/hcl" -) - -// Token represents a sequence of bytes from some HCL code that has been -// tagged with a type and its range within the source file. -type Token struct { - Type TokenType - Bytes []byte - Range hcl.Range -} - -// Tokens is a slice of Token. -type Tokens []Token - -// TokenType is an enumeration used for the Type field on Token. -type TokenType rune - -const ( - // Single-character tokens are represented by their own character, for - // convenience in producing these within the scanner. However, the values - // are otherwise arbitrary and just intended to be mnemonic for humans - // who might see them in debug output. 
- - TokenOBrace TokenType = '{' - TokenCBrace TokenType = '}' - TokenOBrack TokenType = '[' - TokenCBrack TokenType = ']' - TokenOParen TokenType = '(' - TokenCParen TokenType = ')' - TokenOQuote TokenType = '«' - TokenCQuote TokenType = '»' - TokenOHeredoc TokenType = 'H' - TokenCHeredoc TokenType = 'h' - - TokenStar TokenType = '*' - TokenSlash TokenType = '/' - TokenPlus TokenType = '+' - TokenMinus TokenType = '-' - TokenPercent TokenType = '%' - - TokenEqual TokenType = '=' - TokenEqualOp TokenType = '≔' - TokenNotEqual TokenType = '≠' - TokenLessThan TokenType = '<' - TokenLessThanEq TokenType = '≤' - TokenGreaterThan TokenType = '>' - TokenGreaterThanEq TokenType = '≥' - - TokenAnd TokenType = '∧' - TokenOr TokenType = '∨' - TokenBang TokenType = '!' - - TokenDot TokenType = '.' - TokenComma TokenType = ',' - - TokenEllipsis TokenType = '…' - TokenFatArrow TokenType = '⇒' - - TokenQuestion TokenType = '?' - TokenColon TokenType = ':' - - TokenTemplateInterp TokenType = '∫' - TokenTemplateControl TokenType = 'λ' - TokenTemplateSeqEnd TokenType = '∎' - - TokenQuotedLit TokenType = 'Q' // might contain backslash escapes - TokenStringLit TokenType = 'S' // cannot contain backslash escapes - TokenNumberLit TokenType = 'N' - TokenIdent TokenType = 'I' - - TokenComment TokenType = 'C' - - TokenNewline TokenType = '\n' - TokenEOF TokenType = '␄' - - // The rest are not used in the language but recognized by the scanner so - // we can generate good diagnostics in the parser when users try to write - // things that might work in other languages they are familiar with, or - // simply make incorrect assumptions about the HCL language. - - TokenBitwiseAnd TokenType = '&' - TokenBitwiseOr TokenType = '|' - TokenBitwiseNot TokenType = '~' - TokenBitwiseXor TokenType = '^' - TokenStarStar TokenType = '➚' - TokenApostrophe TokenType = '\'' - TokenBacktick TokenType = '`' - TokenSemicolon TokenType = ';' - TokenTabs TokenType = '␉' - TokenInvalid TokenType = '�' - TokenBadUTF8 TokenType = '💩' - TokenQuotedNewline TokenType = '␤' - - // TokenNil is a placeholder for when a token is required but none is - // available, e.g. when reporting errors. The scanner will never produce - // this as part of a token stream. - TokenNil TokenType = '\x00' -) - -func (t TokenType) GoString() string { - return fmt.Sprintf("hclsyntax.%s", t.String()) -} - -type scanMode int - -const ( - scanNormal scanMode = iota - scanTemplate - scanIdentOnly -) - -type tokenAccum struct { - Filename string - Bytes []byte - Pos hcl.Pos - Tokens []Token - StartByte int -} - -func (f *tokenAccum) emitToken(ty TokenType, startOfs, endOfs int) { - // Walk through our buffer to figure out how much we need to adjust - // the start pos to get our end pos. 
-
-	start := f.Pos
-	start.Column += startOfs + f.StartByte - f.Pos.Byte // Safe because only ASCII spaces can be in the offset
-	start.Byte = startOfs + f.StartByte
-
-	end := start
-	end.Byte = endOfs + f.StartByte
-	b := f.Bytes[startOfs:endOfs]
-	for len(b) > 0 {
-		advance, seq, _ := textseg.ScanGraphemeClusters(b, true)
-		if (len(seq) == 1 && seq[0] == '\n') || (len(seq) == 2 && seq[0] == '\r' && seq[1] == '\n') {
-			end.Line++
-			end.Column = 1
-		} else {
-			end.Column++
-		}
-		b = b[advance:]
-	}
-
-	f.Pos = end
-
-	f.Tokens = append(f.Tokens, Token{
-		Type: ty,
-		Bytes: f.Bytes[startOfs:endOfs],
-		Range: hcl.Range{
-			Filename: f.Filename,
-			Start: start,
-			End: end,
-		},
-	})
-}
-
-type heredocInProgress struct {
-	Marker []byte
-	StartOfLine bool
-}
-
-func tokenOpensFlushHeredoc(tok Token) bool {
-	if tok.Type != TokenOHeredoc {
-		return false
-	}
-	return bytes.HasPrefix(tok.Bytes, []byte{'<', '<', '-'})
-}
-
-// checkInvalidTokens does a simple pass across the given tokens and generates
-// diagnostics for tokens that should _never_ appear in HCL source. This
-// is intended to avoid the need for the parser to have special support
-// for them all over.
-//
-// Returns a diagnostics with no errors if everything seems acceptable.
-// Otherwise, returns zero or more error diagnostics, though tries to limit
-// repetition of the same information.
-func checkInvalidTokens(tokens Tokens) hcl.Diagnostics {
-	var diags hcl.Diagnostics
-
-	toldBitwise := 0
-	toldExponent := 0
-	toldBacktick := 0
-	toldApostrophe := 0
-	toldSemicolon := 0
-	toldTabs := 0
-	toldBadUTF8 := 0
-
-	for _, tok := range tokens {
-		// copy token so it's safe to point to it
-		tok := tok
-
-		switch tok.Type {
-		case TokenBitwiseAnd, TokenBitwiseOr, TokenBitwiseXor, TokenBitwiseNot:
-			if toldBitwise < 4 {
-				var suggestion string
-				switch tok.Type {
-				case TokenBitwiseAnd:
-					suggestion = " Did you mean boolean AND (\"&&\")?"
-				case TokenBitwiseOr:
-					suggestion = " Did you mean boolean OR (\"||\")?"
-				case TokenBitwiseNot:
-					suggestion = " Did you mean boolean NOT (\"!\")?"
-				}
-
-				diags = append(diags, &hcl.Diagnostic{
-					Severity: hcl.DiagError,
-					Summary: "Unsupported operator",
-					Detail: fmt.Sprintf("Bitwise operators are not supported.%s", suggestion),
-					Subject: &tok.Range,
-				})
-				toldBitwise++
-			}
-		case TokenStarStar:
-			if toldExponent < 1 {
-				diags = append(diags, &hcl.Diagnostic{
-					Severity: hcl.DiagError,
-					Summary: "Unsupported operator",
-					Detail: "\"**\" is not a supported operator. Exponentiation is not supported as an operator.",
-					Subject: &tok.Range,
-				})
-
-				toldExponent++
-			}
-		case TokenBacktick:
-			// Only report for alternating (even) backticks, so we won't report both start and ends of the same
-			// backtick-quoted string.
-			if (toldBacktick % 2) == 0 {
-				diags = append(diags, &hcl.Diagnostic{
-					Severity: hcl.DiagError,
-					Summary: "Invalid character",
-					Detail: "The \"`\" character is not valid. To create a multi-line string, use the \"heredoc\" syntax, like \"<
-#
-# This script uses the unicode spec to generate a Ragel state machine
-# that recognizes unicode alphanumeric characters. It generates 5
-# character classes: uupper, ulower, ualpha, udigit, and ualnum.
-# Currently supported encodings are UTF-8 [default] and UCS-4.
-#
-# Usage: unicode2ragel.rb [options]
-#    -e, --encoding [ucs4 | utf8]     Data encoding
-#    -h, --help                       Show this message
-#
-# This script was originally written as part of the Ferret search
-# engine library.
-# -# Author: Rakan El-Khalil - -require 'optparse' -require 'open-uri' - -ENCODINGS = [ :utf8, :ucs4 ] -ALPHTYPES = { :utf8 => "byte", :ucs4 => "rune" } -DEFAULT_CHART_URL = "http://www.unicode.org/Public/5.1.0/ucd/DerivedCoreProperties.txt" -DEFAULT_MACHINE_NAME= "WChar" - -### -# Display vars & default option - -TOTAL_WIDTH = 80 -RANGE_WIDTH = 23 -@encoding = :utf8 -@chart_url = DEFAULT_CHART_URL -machine_name = DEFAULT_MACHINE_NAME -properties = [] -@output = $stdout - -### -# Option parsing - -cli_opts = OptionParser.new do |opts| - opts.on("-e", "--encoding [ucs4 | utf8]", "Data encoding") do |o| - @encoding = o.downcase.to_sym - end - opts.on("-h", "--help", "Show this message") do - puts opts - exit - end - opts.on("-u", "--url URL", "URL to process") do |o| - @chart_url = o - end - opts.on("-m", "--machine MACHINE_NAME", "Machine name") do |o| - machine_name = o - end - opts.on("-p", "--properties x,y,z", Array, "Properties to add to machine") do |o| - properties = o - end - opts.on("-o", "--output FILE", "output file") do |o| - @output = File.new(o, "w+") - end -end - -cli_opts.parse(ARGV) -unless ENCODINGS.member? @encoding - puts "Invalid encoding: #{@encoding}" - puts cli_opts - exit -end - -## -# Downloads the document at url and yields every alpha line's hex -# range and description. - -def each_alpha( url, property ) - open( url ) do |file| - file.each_line do |line| - next if line =~ /^#/; - next if line !~ /; #{property} #/; - - range, description = line.split(/;/) - range.strip! - description.gsub!(/.*#/, '').strip! - - if range =~ /\.\./ - start, stop = range.split '..' - else start = stop = range - end - - yield start.hex .. stop.hex, description - end - end -end - -### -# Formats to hex at minimum width - -def to_hex( n ) - r = "%0X" % n - r = "0#{r}" unless (r.length % 2).zero? - r -end - -### -# UCS4 is just a straight hex conversion of the unicode codepoint. - -def to_ucs4( range ) - rangestr = "0x" + to_hex(range.begin) - rangestr << "..0x" + to_hex(range.end) if range.begin != range.end - [ rangestr ] -end - -## -# 0x00 - 0x7f -> 0zzzzzzz[7] -# 0x80 - 0x7ff -> 110yyyyy[5] 10zzzzzz[6] -# 0x800 - 0xffff -> 1110xxxx[4] 10yyyyyy[6] 10zzzzzz[6] -# 0x010000 - 0x10ffff -> 11110www[3] 10xxxxxx[6] 10yyyyyy[6] 10zzzzzz[6] - -UTF8_BOUNDARIES = [0x7f, 0x7ff, 0xffff, 0x10ffff] - -def to_utf8_enc( n ) - r = 0 - if n <= 0x7f - r = n - elsif n <= 0x7ff - y = 0xc0 | (n >> 6) - z = 0x80 | (n & 0x3f) - r = y << 8 | z - elsif n <= 0xffff - x = 0xe0 | (n >> 12) - y = 0x80 | (n >> 6) & 0x3f - z = 0x80 | n & 0x3f - r = x << 16 | y << 8 | z - elsif n <= 0x10ffff - w = 0xf0 | (n >> 18) - x = 0x80 | (n >> 12) & 0x3f - y = 0x80 | (n >> 6) & 0x3f - z = 0x80 | n & 0x3f - r = w << 24 | x << 16 | y << 8 | z - end - - to_hex(r) -end - -def from_utf8_enc( n ) - n = n.hex - r = 0 - if n <= 0x7f - r = n - elsif n <= 0xdfff - y = (n >> 8) & 0x1f - z = n & 0x3f - r = y << 6 | z - elsif n <= 0xefffff - x = (n >> 16) & 0x0f - y = (n >> 8) & 0x3f - z = n & 0x3f - r = x << 10 | y << 6 | z - elsif n <= 0xf7ffffff - w = (n >> 24) & 0x07 - x = (n >> 16) & 0x3f - y = (n >> 8) & 0x3f - z = n & 0x3f - r = w << 18 | x << 12 | y << 6 | z - end - r -end - -### -# Given a range, splits it up into ranges that can be continuously -# encoded into utf8. Eg: 0x00 .. 0xff => [0x00..0x7f, 0x80..0xff] -# This is not strictly needed since the current [5.1] unicode standard -# doesn't have ranges that straddle utf8 boundaries. This is included -# for completeness as there is no telling if that will ever change. 
- -def utf8_ranges( range ) - ranges = [] - UTF8_BOUNDARIES.each do |max| - if range.begin <= max - if range.end <= max - ranges << range - return ranges - end - - ranges << (range.begin .. max) - range = (max + 1) .. range.end - end - end - ranges -end - -def build_range( start, stop ) - size = start.size/2 - left = size - 1 - return [""] if size < 1 - - a = start[0..1] - b = stop[0..1] - - ### - # Shared prefix - - if a == b - return build_range(start[2..-1], stop[2..-1]).map do |elt| - "0x#{a} " + elt - end - end - - ### - # Unshared prefix, end of run - - return ["0x#{a}..0x#{b} "] if left.zero? - - ### - # Unshared prefix, not end of run - # Range can be 0x123456..0x56789A - # Which is equivalent to: - # 0x123456 .. 0x12FFFF - # 0x130000 .. 0x55FFFF - # 0x560000 .. 0x56789A - - ret = [] - ret << build_range(start, a + "FF" * left) - - ### - # Only generate middle range if need be. - - if a.hex+1 != b.hex - max = to_hex(b.hex - 1) - max = "FF" if b == "FF" - ret << "0x#{to_hex(a.hex+1)}..0x#{max} " + "0x00..0xFF " * left - end - - ### - # Don't generate last range if it is covered by first range - - ret << build_range(b + "00" * left, stop) unless b == "FF" - ret.flatten! -end - -def to_utf8( range ) - utf8_ranges( range ).map do |r| - begin_enc = to_utf8_enc(r.begin) - end_enc = to_utf8_enc(r.end) - build_range begin_enc, end_enc - end.flatten! -end - -## -# Perform a 3-way comparison of the number of codepoints advertised by -# the unicode spec for the given range, the originally parsed range, -# and the resulting utf8 encoded range. - -def count_codepoints( code ) - code.split(' ').inject(1) do |acc, elt| - if elt =~ /0x(.+)\.\.0x(.+)/ - if @encoding == :utf8 - acc * (from_utf8_enc($2) - from_utf8_enc($1) + 1) - else - acc * ($2.hex - $1.hex + 1) - end - else - acc - end - end -end - -def is_valid?( range, desc, codes ) - spec_count = 1 - spec_count = $1.to_i if desc =~ /\[(\d+)\]/ - range_count = range.end - range.begin + 1 - - sum = codes.inject(0) { |acc, elt| acc + count_codepoints(elt) } - sum == spec_count and sum == range_count -end - -## -# Generate the state maching to stdout - -def generate_machine( name, property ) - pipe = " " - @output.puts " #{name} = " - each_alpha( @chart_url, property ) do |range, desc| - - codes = (@encoding == :ucs4) ? to_ucs4(range) : to_utf8(range) - - #raise "Invalid encoding of range #{range}: #{codes.inspect}" unless - # is_valid? range, desc, codes - - range_width = codes.map { |a| a.size }.max - range_width = RANGE_WIDTH if range_width < RANGE_WIDTH - - desc_width = TOTAL_WIDTH - RANGE_WIDTH - 11 - desc_width -= (range_width - RANGE_WIDTH) if range_width > RANGE_WIDTH - - if desc.size > desc_width - desc = desc[0..desc_width - 4] + "..." - end - - codes.each_with_index do |r, idx| - desc = "" unless idx.zero? - code = "%-#{range_width}s" % r - @output.puts " #{pipe} #{code} ##{desc}" - pipe = "|" - end - end - @output.puts " ;" - @output.puts "" -end - -@output.puts < 0 && ret[0] == '.' { - ret = ret[1:] - } - return ret -} - -func navigationStepsRev(v node, offset int) []string { - switch tv := v.(type) { - case *objectVal: - // Do any of our properties have an object that contains the target - // offset? - for _, attr := range tv.Attrs { - k := attr.Name - av := attr.Value - - switch av.(type) { - case *objectVal, *arrayVal: - // okay - default: - continue - } - - if av.Range().ContainsOffset(offset) { - return append(navigationStepsRev(av, offset), "."+k) - } - } - case *arrayVal: - // Do any of our elements contain the target offset? 
- for i, elem := range tv.Values { - - switch elem.(type) { - case *objectVal, *arrayVal: - // okay - default: - continue - } - - if elem.Range().ContainsOffset(offset) { - return append(navigationStepsRev(elem, offset), fmt.Sprintf("[%d]", i)) - } - } - } - - return nil -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/json/parser.go b/vendor/github.com/hashicorp/hcl2/hcl/json/parser.go deleted file mode 100644 index d368ea8fc..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/json/parser.go +++ /dev/null @@ -1,496 +0,0 @@ -package json - -import ( - "encoding/json" - "fmt" - - "github.com/hashicorp/hcl2/hcl" - "github.com/zclconf/go-cty/cty" -) - -func parseFileContent(buf []byte, filename string) (node, hcl.Diagnostics) { - tokens := scan(buf, pos{ - Filename: filename, - Pos: hcl.Pos{ - Byte: 0, - Line: 1, - Column: 1, - }, - }) - p := newPeeker(tokens) - node, diags := parseValue(p) - if len(diags) == 0 && p.Peek().Type != tokenEOF { - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Extraneous data after value", - Detail: "Extra characters appear after the JSON value.", - Subject: p.Peek().Range.Ptr(), - }) - } - return node, diags -} - -func parseValue(p *peeker) (node, hcl.Diagnostics) { - tok := p.Peek() - - wrapInvalid := func(n node, diags hcl.Diagnostics) (node, hcl.Diagnostics) { - if n != nil { - return n, diags - } - return invalidVal{tok.Range}, diags - } - - switch tok.Type { - case tokenBraceO: - return wrapInvalid(parseObject(p)) - case tokenBrackO: - return wrapInvalid(parseArray(p)) - case tokenNumber: - return wrapInvalid(parseNumber(p)) - case tokenString: - return wrapInvalid(parseString(p)) - case tokenKeyword: - return wrapInvalid(parseKeyword(p)) - case tokenBraceC: - return wrapInvalid(nil, hcl.Diagnostics{ - { - Severity: hcl.DiagError, - Summary: "Missing JSON value", - Detail: "A JSON value must start with a brace, a bracket, a number, a string, or a keyword.", - Subject: &tok.Range, - }, - }) - case tokenBrackC: - return wrapInvalid(nil, hcl.Diagnostics{ - { - Severity: hcl.DiagError, - Summary: "Missing array element value", - Detail: "A JSON value must start with a brace, a bracket, a number, a string, or a keyword.", - Subject: &tok.Range, - }, - }) - case tokenEOF: - return wrapInvalid(nil, hcl.Diagnostics{ - { - Severity: hcl.DiagError, - Summary: "Missing value", - Detail: "The JSON data ends prematurely.", - Subject: &tok.Range, - }, - }) - default: - return wrapInvalid(nil, hcl.Diagnostics{ - { - Severity: hcl.DiagError, - Summary: "Invalid start of value", - Detail: "A JSON value must start with a brace, a bracket, a number, a string, or a keyword.", - Subject: &tok.Range, - }, - }) - } -} - -func tokenCanStartValue(tok token) bool { - switch tok.Type { - case tokenBraceO, tokenBrackO, tokenNumber, tokenString, tokenKeyword: - return true - default: - return false - } -} - -func parseObject(p *peeker) (node, hcl.Diagnostics) { - var diags hcl.Diagnostics - - open := p.Read() - attrs := []*objectAttr{} - - // recover is used to shift the peeker to what seems to be the end of - // our object, so that when we encounter an error we leave the peeker - // at a reasonable point in the token stream to continue parsing. - recover := func(tok token) { - open := 1 - for { - switch tok.Type { - case tokenBraceO: - open++ - case tokenBraceC: - open-- - if open <= 1 { - return - } - case tokenEOF: - // Ran out of source before we were able to recover, - // so we'll bail here and let the caller deal with it. 
- return - } - tok = p.Read() - } - } - -Token: - for { - if p.Peek().Type == tokenBraceC { - break Token - } - - keyNode, keyDiags := parseValue(p) - diags = diags.Extend(keyDiags) - if keyNode == nil { - return nil, diags - } - - keyStrNode, ok := keyNode.(*stringVal) - if !ok { - return nil, diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid object property name", - Detail: "A JSON object property name must be a string", - Subject: keyNode.StartRange().Ptr(), - }) - } - - key := keyStrNode.Value - - colon := p.Read() - if colon.Type != tokenColon { - recover(colon) - - if colon.Type == tokenBraceC || colon.Type == tokenComma { - // Catch common mistake of using braces instead of brackets - // for an object. - return nil, diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Missing object value", - Detail: "A JSON object attribute must have a value, introduced by a colon.", - Subject: &colon.Range, - }) - } - - if colon.Type == tokenEquals { - // Possible confusion with native HCL syntax. - return nil, diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Missing property value colon", - Detail: "JSON uses a colon as its name/value delimiter, not an equals sign.", - Subject: &colon.Range, - }) - } - - return nil, diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Missing property value colon", - Detail: "A colon must appear between an object property's name and its value.", - Subject: &colon.Range, - }) - } - - valNode, valDiags := parseValue(p) - diags = diags.Extend(valDiags) - if valNode == nil { - return nil, diags - } - - attrs = append(attrs, &objectAttr{ - Name: key, - Value: valNode, - NameRange: keyStrNode.SrcRange, - }) - - switch p.Peek().Type { - case tokenComma: - comma := p.Read() - if p.Peek().Type == tokenBraceC { - // Special error message for this common mistake - return nil, diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Trailing comma in object", - Detail: "JSON does not permit a trailing comma after the final property in an object.", - Subject: &comma.Range, - }) - } - continue Token - case tokenEOF: - return nil, diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Unclosed object", - Detail: "No closing brace was found for this JSON object.", - Subject: &open.Range, - }) - case tokenBrackC: - // Consume the bracket anyway, so that we don't return with the peeker - // at a strange place. - p.Read() - return nil, diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Mismatched braces", - Detail: "A JSON object must be closed with a brace, not a bracket.", - Subject: p.Peek().Range.Ptr(), - }) - case tokenBraceC: - break Token - default: - recover(p.Read()) - return nil, diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Missing attribute seperator comma", - Detail: "A comma must appear between each property definition in an object.", - Subject: p.Peek().Range.Ptr(), - }) - } - - } - - close := p.Read() - return &objectVal{ - Attrs: attrs, - SrcRange: hcl.RangeBetween(open.Range, close.Range), - OpenRange: open.Range, - CloseRange: close.Range, - }, diags -} - -func parseArray(p *peeker) (node, hcl.Diagnostics) { - var diags hcl.Diagnostics - - open := p.Read() - vals := []node{} - - // recover is used to shift the peeker to what seems to be the end of - // our array, so that when we encounter an error we leave the peeker - // at a reasonable point in the token stream to continue parsing. 
- recover := func(tok token) { - open := 1 - for { - switch tok.Type { - case tokenBrackO: - open++ - case tokenBrackC: - open-- - if open <= 1 { - return - } - case tokenEOF: - // Ran out of source before we were able to recover, - // so we'll bail here and let the caller deal with it. - return - } - tok = p.Read() - } - } - -Token: - for { - if p.Peek().Type == tokenBrackC { - break Token - } - - valNode, valDiags := parseValue(p) - diags = diags.Extend(valDiags) - if valNode == nil { - return nil, diags - } - - vals = append(vals, valNode) - - switch p.Peek().Type { - case tokenComma: - comma := p.Read() - if p.Peek().Type == tokenBrackC { - // Special error message for this common mistake - return nil, diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Trailing comma in array", - Detail: "JSON does not permit a trailing comma after the final value in an array.", - Subject: &comma.Range, - }) - } - continue Token - case tokenColon: - recover(p.Read()) - return nil, diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid array value", - Detail: "A colon is not used to introduce values in a JSON array.", - Subject: p.Peek().Range.Ptr(), - }) - case tokenEOF: - recover(p.Read()) - return nil, diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Unclosed object", - Detail: "No closing bracket was found for this JSON array.", - Subject: &open.Range, - }) - case tokenBraceC: - recover(p.Read()) - return nil, diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Mismatched brackets", - Detail: "A JSON array must be closed with a bracket, not a brace.", - Subject: p.Peek().Range.Ptr(), - }) - case tokenBrackC: - break Token - default: - recover(p.Read()) - return nil, diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Missing attribute seperator comma", - Detail: "A comma must appear between each value in an array.", - Subject: p.Peek().Range.Ptr(), - }) - } - - } - - close := p.Read() - return &arrayVal{ - Values: vals, - SrcRange: hcl.RangeBetween(open.Range, close.Range), - OpenRange: open.Range, - }, diags -} - -func parseNumber(p *peeker) (node, hcl.Diagnostics) { - tok := p.Read() - - // Use encoding/json to validate the number syntax. - // TODO: Do this more directly to produce better diagnostics. - var num json.Number - err := json.Unmarshal(tok.Bytes, &num) - if err != nil { - return nil, hcl.Diagnostics{ - { - Severity: hcl.DiagError, - Summary: "Invalid JSON number", - Detail: fmt.Sprintf("There is a syntax error in the given JSON number."), - Subject: &tok.Range, - }, - } - } - - // We want to guarantee that we parse numbers the same way as cty (and thus - // native syntax HCL) would here, so we'll use the cty parser even though - // in most other cases we don't actually introduce cty concepts until - // decoding time. We'll unwrap the parsed float immediately afterwards, so - // the cty value is just a temporary helper. - nv, err := cty.ParseNumberVal(string(num)) - if err != nil { - // Should never happen if above passed, since JSON numbers are a subset - // of what cty can parse... 
- return nil, hcl.Diagnostics{ - { - Severity: hcl.DiagError, - Summary: "Invalid JSON number", - Detail: fmt.Sprintf("There is a syntax error in the given JSON number."), - Subject: &tok.Range, - }, - } - } - - return &numberVal{ - Value: nv.AsBigFloat(), - SrcRange: tok.Range, - }, nil -} - -func parseString(p *peeker) (node, hcl.Diagnostics) { - tok := p.Read() - var str string - err := json.Unmarshal(tok.Bytes, &str) - - if err != nil { - var errRange hcl.Range - if serr, ok := err.(*json.SyntaxError); ok { - errOfs := serr.Offset - errPos := tok.Range.Start - errPos.Byte += int(errOfs) - - // TODO: Use the byte offset to properly count unicode - // characters for the column, and mark the whole of the - // character that was wrong as part of our range. - errPos.Column += int(errOfs) - - errEndPos := errPos - errEndPos.Byte++ - errEndPos.Column++ - - errRange = hcl.Range{ - Filename: tok.Range.Filename, - Start: errPos, - End: errEndPos, - } - } else { - errRange = tok.Range - } - - var contextRange *hcl.Range - if errRange != tok.Range { - contextRange = &tok.Range - } - - // FIXME: Eventually we should parse strings directly here so - // we can produce a more useful error message in the face fo things - // such as invalid escapes, etc. - return nil, hcl.Diagnostics{ - { - Severity: hcl.DiagError, - Summary: "Invalid JSON string", - Detail: fmt.Sprintf("There is a syntax error in the given JSON string."), - Subject: &errRange, - Context: contextRange, - }, - } - } - - return &stringVal{ - Value: str, - SrcRange: tok.Range, - }, nil -} - -func parseKeyword(p *peeker) (node, hcl.Diagnostics) { - tok := p.Read() - s := string(tok.Bytes) - - switch s { - case "true": - return &booleanVal{ - Value: true, - SrcRange: tok.Range, - }, nil - case "false": - return &booleanVal{ - Value: false, - SrcRange: tok.Range, - }, nil - case "null": - return &nullVal{ - SrcRange: tok.Range, - }, nil - case "undefined", "NaN", "Infinity": - return nil, hcl.Diagnostics{ - { - Severity: hcl.DiagError, - Summary: "Invalid JSON keyword", - Detail: fmt.Sprintf("The JavaScript identifier %q cannot be used in JSON.", s), - Subject: &tok.Range, - }, - } - default: - var dym string - if suggest := keywordSuggestion(s); suggest != "" { - dym = fmt.Sprintf(" Did you mean %q?", suggest) - } - - return nil, hcl.Diagnostics{ - { - Severity: hcl.DiagError, - Summary: "Invalid JSON keyword", - Detail: fmt.Sprintf("%q is not a valid JSON keyword.%s", s, dym), - Subject: &tok.Range, - }, - } - } -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/json/peeker.go b/vendor/github.com/hashicorp/hcl2/hcl/json/peeker.go deleted file mode 100644 index fc7bbf582..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/json/peeker.go +++ /dev/null @@ -1,25 +0,0 @@ -package json - -type peeker struct { - tokens []token - pos int -} - -func newPeeker(tokens []token) *peeker { - return &peeker{ - tokens: tokens, - pos: 0, - } -} - -func (p *peeker) Peek() token { - return p.tokens[p.pos] -} - -func (p *peeker) Read() token { - ret := p.tokens[p.pos] - if ret.Type != tokenEOF { - p.pos++ - } - return ret -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/json/public.go b/vendor/github.com/hashicorp/hcl2/hcl/json/public.go deleted file mode 100644 index 2728aa130..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/json/public.go +++ /dev/null @@ -1,94 +0,0 @@ -package json - -import ( - "fmt" - "io/ioutil" - "os" - - "github.com/hashicorp/hcl2/hcl" -) - -// Parse attempts to parse the given buffer as JSON and, if successful, returns 
-// a hcl.File for the HCL configuration represented by it. -// -// This is not a generic JSON parser. Instead, it deals only with the profile -// of JSON used to express HCL configuration. -// -// The returned file is valid only if the returned diagnostics returns false -// from its HasErrors method. If HasErrors returns true, the file represents -// the subset of data that was able to be parsed, which may be none. -func Parse(src []byte, filename string) (*hcl.File, hcl.Diagnostics) { - rootNode, diags := parseFileContent(src, filename) - - switch rootNode.(type) { - case *objectVal, *arrayVal: - // okay - default: - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Root value must be object", - Detail: "The root value in a JSON-based configuration must be either a JSON object or a JSON array of objects.", - Subject: rootNode.StartRange().Ptr(), - }) - - // Since we've already produced an error message for this being - // invalid, we'll return an empty placeholder here so that trying to - // extract content from our root body won't produce a redundant - // error saying the same thing again in more general terms. - fakePos := hcl.Pos{ - Byte: 0, - Line: 1, - Column: 1, - } - fakeRange := hcl.Range{ - Filename: filename, - Start: fakePos, - End: fakePos, - } - rootNode = &objectVal{ - Attrs: []*objectAttr{}, - SrcRange: fakeRange, - OpenRange: fakeRange, - } - } - - file := &hcl.File{ - Body: &body{ - val: rootNode, - }, - Bytes: src, - Nav: navigation{rootNode}, - } - return file, diags -} - -// ParseFile is a convenience wrapper around Parse that first attempts to load -// data from the given filename, passing the result to Parse if successful. -// -// If the file cannot be read, an error diagnostic with nil context is returned. -func ParseFile(filename string) (*hcl.File, hcl.Diagnostics) { - f, err := os.Open(filename) - if err != nil { - return nil, hcl.Diagnostics{ - { - Severity: hcl.DiagError, - Summary: "Failed to open file", - Detail: fmt.Sprintf("The file %q could not be opened.", filename), - }, - } - } - defer f.Close() - - src, err := ioutil.ReadAll(f) - if err != nil { - return nil, hcl.Diagnostics{ - { - Severity: hcl.DiagError, - Summary: "Failed to read file", - Detail: fmt.Sprintf("The file %q was opened, but an error occured while reading it.", filename), - }, - } - } - - return Parse(src, filename) -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/json/scanner.go b/vendor/github.com/hashicorp/hcl2/hcl/json/scanner.go deleted file mode 100644 index da7288423..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/json/scanner.go +++ /dev/null @@ -1,297 +0,0 @@ -package json - -import ( - "fmt" - - "github.com/apparentlymart/go-textseg/textseg" - "github.com/hashicorp/hcl2/hcl" -) - -//go:generate stringer -type tokenType scanner.go -type tokenType rune - -const ( - tokenBraceO tokenType = '{' - tokenBraceC tokenType = '}' - tokenBrackO tokenType = '[' - tokenBrackC tokenType = ']' - tokenComma tokenType = ',' - tokenColon tokenType = ':' - tokenKeyword tokenType = 'K' - tokenString tokenType = 'S' - tokenNumber tokenType = 'N' - tokenEOF tokenType = '␄' - tokenInvalid tokenType = 0 - tokenEquals tokenType = '=' // used only for reminding the user of JSON syntax -) - -type token struct { - Type tokenType - Bytes []byte - Range hcl.Range -} - -// scan returns the primary tokens for the given JSON buffer in sequence. -// -// The responsibility of this pass is to just mark the slices of the buffer -// as being of various types. 
It is lax in how it interprets the multi-byte -// token types keyword, string and number, preferring to capture erroneous -// extra bytes that we presume the user intended to be part of the token -// so that we can generate more helpful diagnostics in the parser. -func scan(buf []byte, start pos) []token { - var tokens []token - p := start - for { - if len(buf) == 0 { - tokens = append(tokens, token{ - Type: tokenEOF, - Bytes: nil, - Range: posRange(p, p), - }) - return tokens - } - - buf, p = skipWhitespace(buf, p) - - if len(buf) == 0 { - tokens = append(tokens, token{ - Type: tokenEOF, - Bytes: nil, - Range: posRange(p, p), - }) - return tokens - } - - start = p - - first := buf[0] - switch { - case first == '{' || first == '}' || first == '[' || first == ']' || first == ',' || first == ':' || first == '=': - p.Pos.Column++ - p.Pos.Byte++ - tokens = append(tokens, token{ - Type: tokenType(first), - Bytes: buf[0:1], - Range: posRange(start, p), - }) - buf = buf[1:] - case first == '"': - var tokBuf []byte - tokBuf, buf, p = scanString(buf, p) - tokens = append(tokens, token{ - Type: tokenString, - Bytes: tokBuf, - Range: posRange(start, p), - }) - case byteCanStartNumber(first): - var tokBuf []byte - tokBuf, buf, p = scanNumber(buf, p) - tokens = append(tokens, token{ - Type: tokenNumber, - Bytes: tokBuf, - Range: posRange(start, p), - }) - case byteCanStartKeyword(first): - var tokBuf []byte - tokBuf, buf, p = scanKeyword(buf, p) - tokens = append(tokens, token{ - Type: tokenKeyword, - Bytes: tokBuf, - Range: posRange(start, p), - }) - default: - tokens = append(tokens, token{ - Type: tokenInvalid, - Bytes: buf[:1], - Range: start.Range(1, 1), - }) - // If we've encountered an invalid then we might as well stop - // scanning since the parser won't proceed beyond this point. - return tokens - } - } -} - -func byteCanStartNumber(b byte) bool { - switch b { - // We are slightly more tolerant than JSON requires here since we - // expect the parser will make a stricter interpretation of the - // number bytes, but we specifically don't allow 'e' or 'E' here - // since we want the scanner to treat that as the start of an - // invalid keyword instead, to produce more intelligible error messages. - case '-', '+', '.', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9': - return true - default: - return false - } -} - -func scanNumber(buf []byte, start pos) ([]byte, []byte, pos) { - // The scanner doesn't check that the sequence of digit-ish bytes is - // in a valid order. The parser must do this when decoding a number - // token. - var i int - p := start -Byte: - for i = 0; i < len(buf); i++ { - switch buf[i] { - case '-', '+', '.', 'e', 'E', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9': - p.Pos.Byte++ - p.Pos.Column++ - default: - break Byte - } - } - return buf[:i], buf[i:], p -} - -func byteCanStartKeyword(b byte) bool { - switch { - // We allow any sequence of alphabetical characters here, even though - // JSON is more constrained, so that we can collect what we presume - // the user intended to be a single keyword and then check its validity - // in the parser, where we can generate better diagnostics. - // So e.g. we want to be able to say: - // unrecognized keyword "True". Did you mean "true"? 
- case isAlphabetical(b): - return true - default: - return false - } -} - -func scanKeyword(buf []byte, start pos) ([]byte, []byte, pos) { - var i int - p := start -Byte: - for i = 0; i < len(buf); i++ { - b := buf[i] - switch { - case isAlphabetical(b) || b == '_': - p.Pos.Byte++ - p.Pos.Column++ - default: - break Byte - } - } - return buf[:i], buf[i:], p -} - -func scanString(buf []byte, start pos) ([]byte, []byte, pos) { - // The scanner doesn't validate correct use of escapes, etc. It pays - // attention to escapes only for the purpose of identifying the closing - // quote character. It's the parser's responsibility to do proper - // validation. - // - // The scanner also doesn't specifically detect unterminated string - // literals, though they can be identified in the parser by checking if - // the final byte in a string token is the double-quote character. - - // Skip the opening quote symbol - i := 1 - p := start - p.Pos.Byte++ - p.Pos.Column++ - escaping := false -Byte: - for i < len(buf) { - b := buf[i] - - switch { - case b == '\\': - escaping = !escaping - p.Pos.Byte++ - p.Pos.Column++ - i++ - case b == '"': - p.Pos.Byte++ - p.Pos.Column++ - i++ - if !escaping { - break Byte - } - escaping = false - case b < 32: - break Byte - default: - // Advance by one grapheme cluster, so that we consider each - // grapheme to be a "column". - // Ignoring error because this scanner cannot produce errors. - advance, _, _ := textseg.ScanGraphemeClusters(buf[i:], true) - - p.Pos.Byte += advance - p.Pos.Column++ - i += advance - - escaping = false - } - } - return buf[:i], buf[i:], p -} - -func skipWhitespace(buf []byte, start pos) ([]byte, pos) { - var i int - p := start -Byte: - for i = 0; i < len(buf); i++ { - switch buf[i] { - case ' ': - p.Pos.Byte++ - p.Pos.Column++ - case '\n': - p.Pos.Byte++ - p.Pos.Column = 1 - p.Pos.Line++ - case '\r': - // For the purpose of line/column counting we consider a - // carriage return to take up no space, assuming that it will - // be paired up with a newline (on Windows, for example) that - // will account for both of them. - p.Pos.Byte++ - case '\t': - // We arbitrarily count a tab as if it were two spaces, because - // we need to choose _some_ number here. This means any system - // that renders code on-screen with markers must itself treat - // tabs as a pair of spaces for rendering purposes, or instead - // use the byte offset and back into its own column position. - p.Pos.Byte++ - p.Pos.Column += 2 - default: - break Byte - } - } - return buf[i:], p -} - -type pos struct { - Filename string - Pos hcl.Pos -} - -func (p *pos) Range(byteLen, charLen int) hcl.Range { - start := p.Pos - end := p.Pos - end.Byte += byteLen - end.Column += charLen - return hcl.Range{ - Filename: p.Filename, - Start: start, - End: end, - } -} - -func posRange(start, end pos) hcl.Range { - return hcl.Range{ - Filename: start.Filename, - Start: start.Pos, - End: end.Pos, - } -} - -func (t token) GoString() string { - return fmt.Sprintf("json.token{json.%s, []byte(%q), %#v}", t.Type, t.Bytes, t.Range) -} - -func isAlphabetical(b byte) bool { - return (b >= 'a' && b <= 'z') || (b >= 'A' && b <= 'Z') -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/json/spec.md b/vendor/github.com/hashicorp/hcl2/hcl/json/spec.md deleted file mode 100644 index dac5729d4..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/json/spec.md +++ /dev/null @@ -1,405 +0,0 @@ -# HCL JSON Syntax Specification - -This is the specification for the JSON serialization for hcl. 
HCL is a system
-for defining configuration languages for applications. The HCL information
-model is designed to support multiple concrete syntaxes for configuration,
-and this JSON-based format complements [the native syntax](../hclsyntax/spec.md)
-by being easy to machine-generate, whereas the native syntax is oriented
-towards human authoring and maintenance.
-
-This syntax is defined in terms of JSON as defined in
-[RFC7159](https://tools.ietf.org/html/rfc7159). As such it inherits the JSON
-grammar as-is, and merely defines a specific methodology for interpreting
-JSON constructs into HCL structural elements and expressions.
-
-This mapping is defined such that valid JSON-serialized HCL input can be
-_produced_ using standard JSON implementations in various programming languages.
-_Parsing_ such JSON has some additional constraints beyond what is normally
-supported by JSON parsers, so a specialized parser may be required that
-is able to:
-
-- Preserve the relative ordering of properties defined in an object.
-- Preserve multiple definitions of the same property name.
-- Preserve numeric values to the precision required by the number type
-  in [the HCL syntax-agnostic information model](../spec.md).
-- Retain source location information for parsed tokens/constructs in order
-  to produce good error messages.
-
-## Structural Elements
-
-[The HCL syntax-agnostic information model](../spec.md) defines a _body_ as an
-abstract container for attribute definitions and child blocks. A body is
-represented in JSON as either a single JSON object or a JSON array of objects.
-
-Body processing is in terms of JSON object properties, visited in the order
-they appear in the input. Where a body is represented by a single JSON object,
-the properties of that object are visited in order. Where a body is
-represented by a JSON array, each of its elements is visited in order and
-each element has its properties visited in order. If any element of the array
-is not a JSON object then the input is erroneous.
-
-When a body is being processed in the _dynamic attributes_ mode, the allowance
-of a JSON array in the previous paragraph does not apply and instead a single
-JSON object is always required.
-
-As defined in the language-agnostic model, body processing is in terms
-of a schema which provides context for interpreting the body's content. For
-JSON bodies, the schema is crucial to allow differentiation of attribute
-definitions and block definitions, both of which are represented via object
-properties.
-
-The special property name `"//"`, when used in an object representing a HCL
-body, is parsed and ignored. A property with this name can be used to
-include human-readable comments. (This special property name is _not_
-processed in this way for any _other_ HCL constructs that are represented as
-JSON objects.)
-
-### Attributes
-
-Where the given schema describes an attribute with a given name, the object
-property with the matching name — if present — serves as the attribute's
-definition.
-
-When a body is being processed in the _dynamic attributes_ mode, each object
-property serves as an attribute definition for the attribute whose name
-matches the property name.
-
-The value of an attribute definition property is interpreted as an _expression_,
-as described in a later section.
- -Given a schema that calls for an attribute named "foo", a JSON object like -the following provides a definition for that attribute: - -```json -{ - "foo": "bar baz" -} -``` - -### Blocks - -Where the given schema describes a block with a given type name, each object -property with the matching name serves as a definition of zero or more blocks -of that type. - -Processing of child blocks is in terms of nested JSON objects and arrays. -If the schema defines one or more _labels_ for the block type, a nested JSON -object or JSON array of objects is required for each labelling level. These -are flattened to a single ordered sequence of object properties using the -same algorithm as for body content as defined above. Each object property -serves as a label value at the corresponding level. - -After any labelling levels, the next nested value is either a JSON object -representing a single block body, or a JSON array of JSON objects that each -represent a single block body. Use of an array accommodates the definition -of multiple blocks that have identical type and labels. - -Given a schema that calls for a block type named "foo" with no labels, the -following JSON objects are all valid definitions of zero or more blocks of this -type: - -```json -{ - "foo": { - "child_attr": "baz" - } -} -``` - -```json -{ - "foo": [ - { - "child_attr": "baz" - }, - { - "child_attr": "boz" - } - ] -} -``` - -```json -{ - "foo": [] -} -``` - -The first of these defines a single child block of type "foo". The second -defines _two_ such blocks. The final example shows a degenerate definition -of zero blocks, though generators should prefer to omit the property entirely -in this scenario. - -Given a schema that calls for a block type named "foo" with _two_ labels, the -extra label levels must be represented as objects or arrays of objects as in -the following examples: - -```json -{ - "foo": { - "bar": { - "baz": { - "child_attr": "baz" - }, - "boz": { - "child_attr": "baz" - } - }, - "boz": { - "baz": { - "child_attr": "baz" - } - } - } -} -``` - -```json -{ - "foo": { - "bar": { - "baz": { - "child_attr": "baz" - }, - "boz": { - "child_attr": "baz" - } - }, - "boz": { - "baz": [ - { - "child_attr": "baz" - }, - { - "child_attr": "boz" - } - ] - } - } -} -``` - -```json -{ - "foo": [ - { - "bar": { - "baz": { - "child_attr": "baz" - }, - "boz": { - "child_attr": "baz" - } - } - }, - { - "bar": { - "baz": [ - { - "child_attr": "baz" - }, - { - "child_attr": "boz" - } - ] - } - } - ] -} -``` - -```json -{ - "foo": { - "bar": { - "baz": { - "child_attr": "baz" - }, - "boz": { - "child_attr": "baz" - } - }, - "bar": { - "baz": [ - { - "child_attr": "baz" - }, - { - "child_attr": "boz" - } - ] - } - } -} -``` - -Arrays can be introduced at either the label definition or block body -definition levels to define multiple definitions of the same block type -or labels while preserving order. - -A JSON HCL parser _must_ support duplicate definitions of the same property -name within a single object, preserving all of them and the relative ordering -between them. The array-based forms are also required so that JSON HCL -configurations can be produced with JSON producing libraries that are not -able to preserve property definition order and multiple definitions of -the same property. - -## Expressions - -JSON lacks a native expression syntax, so the HCL JSON syntax instead defines -a mapping for each of the JSON value types, including a special mapping for -strings that allows optional use of arbitrary expressions. 
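
As a brief illustration of this mapping, the following is a minimal sketch of how a calling application might evaluate attributes parsed from JSON syntax, assuming the `json` and `hcl` packages vendored in this repository; the file name, attribute names, and variable values are hypothetical:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/json"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	// A JSON body whose property values exercise several of the
	// value-type mappings described below: string (template), number,
	// and boolean.
	src := []byte(`{"greeting": "Hello, ${name}!", "count": 2, "enabled": true}`)

	f, diags := json.Parse(src, "example.tf.json")
	if diags.HasErrors() {
		panic(diags.Error())
	}

	attrs, diags := f.Body.JustAttributes()
	if diags.HasErrors() {
		panic(diags.Error())
	}

	// Passing a non-nil EvalContext selects full expression mode, so the
	// string value is evaluated as a native syntax template; passing nil
	// would instead return the string verbatim (literal-only mode).
	ctx := &hcl.EvalContext{
		Variables: map[string]cty.Value{
			"name": cty.StringVal("world"),
		},
	}
	for name, attr := range attrs {
		val, _ := attr.Expr.Value(ctx)
		fmt.Printf("%s = %#v\n", name, val)
	}
}
```
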
- -### Objects - -When interpreted as an expression, a JSON object represents a value of a HCL -object type. - -Each property of the JSON object represents an attribute of the HCL object type. -The property name string given in the JSON input is interpreted as a string -expression as described below, and its result is converted to string as defined -by the syntax-agnostic information model. If such a conversion is not possible, -an error is produced and evaluation fails. - -An instance of the constructed object type is then created, whose values -are interpreted by again recursively applying the mapping rules defined in -this section to each of the property values. - -If any evaluated property name strings produce null values, an error is -produced and evaluation fails. If any produce _unknown_ values, the _entire -object's_ result is an unknown value of the dynamic pseudo-type, signalling -that the type of the object cannot be determined. - -It is an error to define the same property name multiple times within a single -JSON object interpreted as an expression. In full expression mode, this -constraint applies to the name expression results after conversion to string, -rather than the raw string that may contain interpolation expressions. - -### Arrays - -When interpreted as an expression, a JSON array represents a value of a HCL -tuple type. - -Each element of the JSON array represents an element of the HCL tuple type. -The tuple type is constructed by enumerating the JSON array elements, creating -for each an element whose type is the result of recursively applying the -expression mapping rules. Correspondence is preserved between the array element -indices and the tuple element indices. - -An instance of the constructed tuple type is then created, whose values are -interpreted by again recursively applying the mapping rules defined in this -section. - -### Numbers - -When interpreted as an expression, a JSON number represents a HCL number value. - -HCL numbers are arbitrary-precision decimal values, so a JSON HCL parser must -be able to translate exactly the value given to a number of corresponding -precision, within the constraints set by the HCL syntax-agnostic information -model. - -In practice, off-the-shelf JSON serializers often do not support customizing the -processing of numbers, and instead force processing as 32-bit or 64-bit -floating point values. - -A _producer_ of JSON HCL that uses such a serializer can provide numeric values -as JSON strings where they have precision too great for representation in the -serializer's chosen numeric type in situations where the result will be -converted to number (using the standard conversion rules) by a calling -application. - -Alternatively, for expressions that are evaluated in full expression mode an -embedded template interpolation can be used to faithfully represent a number, -such as `"${1e150}"`, which will then be evaluated by the underlying HCL native -syntax expression evaluator. - -### Boolean Values - -The JSON boolean values `true` and `false`, when interpreted as expressions, -represent the corresponding HCL boolean values. - -### The Null Value - -The JSON value `null`, when interpreted as an expression, represents a -HCL null value of the dynamic pseudo-type. - -### Strings - -When interpreted as an expression, a JSON string may be interpreted in one of -two ways depending on the evaluation mode. 
-
-If evaluating in literal-only mode (as defined by the syntax-agnostic
-information model) the literal string is interpreted directly as a HCL string
-value, using the exact sequence of unicode characters represented.
-Template interpolations and directives MUST NOT be processed in this mode,
-allowing any characters that appear as introduction sequences to pass through
-literally:
-
-```json
-"Hello world! Template sequences like ${ are not interpreted here."
-```
-
-When evaluating in full expression mode (again, as defined by the
-syntax-agnostic information model) the literal string is instead interpreted
-as a _standalone template_ in the HCL Native Syntax. The expression evaluation
-result is then the direct result of evaluating that template with the current
-variable scope and function table.
-
-```json
-"Hello, ${name}! Template sequences are interpreted in full expression mode."
-```
-
-In particular the _Template Interpolation Unwrapping_ requirement from the
-HCL native syntax specification must be implemented, allowing the use of
-single-interpolation templates to represent expressions that would not
-otherwise be representable in JSON, such as the following example where
-the result must be a number, rather than a string representation of a number:
-
-```json
-"${ a + b }"
-```
-
-## Static Analysis
-
-The HCL static analysis operations are implemented for JSON values that
-represent expressions, as described in the following sections.
-
-Due to the limited expressive power of the JSON syntax alone, the use of these
-static analysis functions rather than normal expression evaluation serves as
-additional context for how a JSON value is to be interpreted, which means
-that static analyses can result in a different interpretation of a given
-expression than normal evaluation.
-
-### Static List
-
-An expression interpreted as a static list must be a JSON array. Each of the
-values in the array is interpreted as an expression and returned.
-
-### Static Map
-
-An expression interpreted as a static map must be a JSON object. Each of the
-key/value pairs in the object is presented as a pair of expressions. Since
-object property names are always strings, evaluating the key expression with
-a non-`nil` evaluation context will evaluate any template sequences given
-in the property name.
-
-### Static Call
-
-An expression interpreted as a static call must be a string. The content of
-the string is interpreted as a native syntax expression (not a _template_,
-unlike normal evaluation) and then the static call analysis is delegated to
-that expression.
-
-If the original expression is not a string or its contents cannot be parsed
-as a native syntax expression then static call analysis is not supported.
-
-### Static Traversal
-
-An expression interpreted as a static traversal must be a string. The content
-of the string is interpreted as a native syntax expression (not a _template_,
-unlike normal evaluation) and then static traversal analysis is delegated
-to that expression.
-
-If the original expression is not a string or its contents cannot be parsed
-as a native syntax expression then static traversal analysis is not supported.
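
To make the static traversal behavior concrete, here is a minimal, hypothetical sketch of how an application might recover a traversal from a JSON-syntax expression via the generic `hcl.AbsTraversalForExpr` helper, assuming the vendored `hcl2` packages; the file name and attribute name are illustrative only:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/json"
)

func main() {
	// In JSON syntax a traversal is written as a string containing
	// native syntax traversal source, e.g. a reference to var.foo.
	src := []byte(`{"value": "var.foo"}`)

	f, diags := json.Parse(src, "example.tf.json")
	if diags.HasErrors() {
		panic(diags.Error())
	}

	attrs, diags := f.Body.JustAttributes()
	if diags.HasErrors() {
		panic(diags.Error())
	}

	// Static traversal analysis: the string content is parsed as a
	// native syntax expression rather than as a template.
	traversal, diags := hcl.AbsTraversalForExpr(attrs["value"].Expr)
	if diags.HasErrors() {
		panic(diags.Error())
	}
	fmt.Println(traversal.RootName()) // prints "var"
}
```
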
diff --git a/vendor/github.com/hashicorp/hcl2/hcl/json/structure.go b/vendor/github.com/hashicorp/hcl2/hcl/json/structure.go deleted file mode 100644 index 74847c79a..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/json/structure.go +++ /dev/null @@ -1,637 +0,0 @@ -package json - -import ( - "fmt" - - "github.com/hashicorp/hcl2/hcl" - "github.com/hashicorp/hcl2/hcl/hclsyntax" - "github.com/zclconf/go-cty/cty" - "github.com/zclconf/go-cty/cty/convert" -) - -// body is the implementation of "Body" used for files processed with the JSON -// parser. -type body struct { - val node - - // If non-nil, the keys of this map cause the corresponding attributes to - // be treated as non-existing. This is used when Body.PartialContent is - // called, to produce the "remaining content" Body. - hiddenAttrs map[string]struct{} -} - -// expression is the implementation of "Expression" used for files processed -// with the JSON parser. -type expression struct { - src node -} - -func (b *body) Content(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Diagnostics) { - content, newBody, diags := b.PartialContent(schema) - - hiddenAttrs := newBody.(*body).hiddenAttrs - - var nameSuggestions []string - for _, attrS := range schema.Attributes { - if _, ok := hiddenAttrs[attrS.Name]; !ok { - // Only suggest an attribute name if we didn't use it already. - nameSuggestions = append(nameSuggestions, attrS.Name) - } - } - for _, blockS := range schema.Blocks { - // Blocks can appear multiple times, so we'll suggest their type - // names regardless of whether they've already been used. - nameSuggestions = append(nameSuggestions, blockS.Type) - } - - jsonAttrs, attrDiags := b.collectDeepAttrs(b.val, nil) - diags = append(diags, attrDiags...) - - for _, attr := range jsonAttrs { - k := attr.Name - if k == "//" { - // Ignore "//" keys in objects representing bodies, to allow - // their use as comments. - continue - } - - if _, ok := hiddenAttrs[k]; !ok { - suggestion := nameSuggestion(k, nameSuggestions) - if suggestion != "" { - suggestion = fmt.Sprintf(" Did you mean %q?", suggestion) - } - - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Extraneous JSON object property", - Detail: fmt.Sprintf("No argument or block type is named %q.%s", k, suggestion), - Subject: &attr.NameRange, - Context: attr.Range().Ptr(), - }) - } - } - - return content, diags -} - -func (b *body) PartialContent(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Body, hcl.Diagnostics) { - var diags hcl.Diagnostics - - jsonAttrs, attrDiags := b.collectDeepAttrs(b.val, nil) - diags = append(diags, attrDiags...) - - usedNames := map[string]struct{}{} - if b.hiddenAttrs != nil { - for k := range b.hiddenAttrs { - usedNames[k] = struct{}{} - } - } - - content := &hcl.BodyContent{ - Attributes: map[string]*hcl.Attribute{}, - Blocks: nil, - - MissingItemRange: b.MissingItemRange(), - } - - // Create some more convenient data structures for our work below. 
- attrSchemas := map[string]hcl.AttributeSchema{} - blockSchemas := map[string]hcl.BlockHeaderSchema{} - for _, attrS := range schema.Attributes { - attrSchemas[attrS.Name] = attrS - } - for _, blockS := range schema.Blocks { - blockSchemas[blockS.Type] = blockS - } - - for _, jsonAttr := range jsonAttrs { - attrName := jsonAttr.Name - if _, used := b.hiddenAttrs[attrName]; used { - continue - } - - if attrS, defined := attrSchemas[attrName]; defined { - if existing, exists := content.Attributes[attrName]; exists { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Duplicate argument", - Detail: fmt.Sprintf("The argument %q was already set at %s.", attrName, existing.Range), - Subject: &jsonAttr.NameRange, - Context: jsonAttr.Range().Ptr(), - }) - continue - } - - content.Attributes[attrS.Name] = &hcl.Attribute{ - Name: attrS.Name, - Expr: &expression{src: jsonAttr.Value}, - Range: hcl.RangeBetween(jsonAttr.NameRange, jsonAttr.Value.Range()), - NameRange: jsonAttr.NameRange, - } - usedNames[attrName] = struct{}{} - - } else if blockS, defined := blockSchemas[attrName]; defined { - bv := jsonAttr.Value - blockDiags := b.unpackBlock(bv, blockS.Type, &jsonAttr.NameRange, blockS.LabelNames, nil, nil, &content.Blocks) - diags = append(diags, blockDiags...) - usedNames[attrName] = struct{}{} - } - - // We ignore anything that isn't defined because that's the - // PartialContent contract. The Content method will catch leftovers. - } - - // Make sure we got all the required attributes. - for _, attrS := range schema.Attributes { - if !attrS.Required { - continue - } - if _, defined := content.Attributes[attrS.Name]; !defined { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Missing required argument", - Detail: fmt.Sprintf("The argument %q is required, but no definition was found.", attrS.Name), - Subject: b.MissingItemRange().Ptr(), - }) - } - } - - unusedBody := &body{ - val: b.val, - hiddenAttrs: usedNames, - } - - return content, unusedBody, diags -} - -// JustAttributes for JSON bodies interprets all properties of the wrapped -// JSON object as attributes and returns them. -func (b *body) JustAttributes() (hcl.Attributes, hcl.Diagnostics) { - var diags hcl.Diagnostics - attrs := make(map[string]*hcl.Attribute) - - obj, ok := b.val.(*objectVal) - if !ok { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Incorrect JSON value type", - Detail: "A JSON object is required here, setting the arguments for this block.", - Subject: b.val.StartRange().Ptr(), - }) - return attrs, diags - } - - for _, jsonAttr := range obj.Attrs { - name := jsonAttr.Name - if name == "//" { - // Ignore "//" keys in objects representing bodies, to allow - // their use as comments. - continue - } - - if _, hidden := b.hiddenAttrs[name]; hidden { - continue - } - - if existing, exists := attrs[name]; exists { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Duplicate attribute definition", - Detail: fmt.Sprintf("The argument %q was already set at %s.", name, existing.Range), - Subject: &jsonAttr.NameRange, - }) - continue - } - - attrs[name] = &hcl.Attribute{ - Name: name, - Expr: &expression{src: jsonAttr.Value}, - Range: hcl.RangeBetween(jsonAttr.NameRange, jsonAttr.Value.Range()), - NameRange: jsonAttr.NameRange, - } - } - - // No diagnostics possible here, since the parser already took care of - // finding duplicates and every JSON value can be a valid attribute value. 
-	return attrs, diags
-}
-
-func (b *body) MissingItemRange() hcl.Range {
-	switch tv := b.val.(type) {
-	case *objectVal:
-		return tv.CloseRange
-	case *arrayVal:
-		return tv.OpenRange
-	default:
-		// Should not happen in correct operation, but might show up if the
-		// input is invalid and we are producing partial results.
-		return tv.StartRange()
-	}
-}
-
-func (b *body) unpackBlock(v node, typeName string, typeRange *hcl.Range, labelsLeft []string, labelsUsed []string, labelRanges []hcl.Range, blocks *hcl.Blocks) (diags hcl.Diagnostics) {
-	if len(labelsLeft) > 0 {
-		labelName := labelsLeft[0]
-		jsonAttrs, attrDiags := b.collectDeepAttrs(v, &labelName)
-		diags = append(diags, attrDiags...)
-
-		if len(jsonAttrs) == 0 {
-			diags = diags.Append(&hcl.Diagnostic{
-				Severity: hcl.DiagError,
-				Summary: "Missing block label",
-				Detail: fmt.Sprintf("At least one object property is required, whose name represents the %s block's %s.", typeName, labelName),
-				Subject: v.StartRange().Ptr(),
-			})
-			return
-		}
-		labelsUsed := append(labelsUsed, "")
-		labelRanges := append(labelRanges, hcl.Range{})
-		for _, p := range jsonAttrs {
-			pk := p.Name
-			labelsUsed[len(labelsUsed)-1] = pk
-			labelRanges[len(labelRanges)-1] = p.NameRange
-			diags = append(diags, b.unpackBlock(p.Value, typeName, typeRange, labelsLeft[1:], labelsUsed, labelRanges, blocks)...)
-		}
-		return
-	}
-
-	// By the time we get here, we've peeled off all the labels and we're ready
-	// to deal with the block's actual content.
-
-	// need to copy the label slices because their underlying arrays will
-	// continue to be mutated after we return.
-	labels := make([]string, len(labelsUsed))
-	copy(labels, labelsUsed)
-	labelR := make([]hcl.Range, len(labelRanges))
-	copy(labelR, labelRanges)
-
-	switch tv := v.(type) {
-	case *nullVal:
-		// There is no block content, e.g. the value is null.
-		return
-	case *objectVal:
-		// Single instance of the block
-		*blocks = append(*blocks, &hcl.Block{
-			Type: typeName,
-			Labels: labels,
-			Body: &body{
-				val: tv,
-			},
-
-			DefRange: tv.OpenRange,
-			TypeRange: *typeRange,
-			LabelRanges: labelR,
-		})
-	case *arrayVal:
-		// Multiple instances of the block
-		for _, av := range tv.Values {
-			*blocks = append(*blocks, &hcl.Block{
-				Type: typeName,
-				Labels: labels,
-				Body: &body{
-					val: av, // might be mistyped; we'll find out when content is requested for this body
-				},
-
-				DefRange: tv.OpenRange,
-				TypeRange: *typeRange,
-				LabelRanges: labelR,
-			})
-		}
-	default:
-		diags = diags.Append(&hcl.Diagnostic{
-			Severity: hcl.DiagError,
-			Summary: "Incorrect JSON value type",
-			Detail: fmt.Sprintf("Either a JSON object or a JSON array is required, representing the contents of one or more %q blocks.", typeName),
-			Subject: v.StartRange().Ptr(),
-		})
-	}
-	return
-}
-
-// collectDeepAttrs takes either a single object or an array of objects and
-// flattens it into a list of object attributes, collecting attributes from
-// all of the objects in a given array.
-//
-// Ordering is preserved, so a list of objects that each have one property
-// will result in those properties being returned in the same order as the
-// objects appeared in the array.
-//
-// This is appropriate for use only for objects representing bodies or labels
-// within a block.
-//
-// The labelName argument, if non-null, is used to tailor returned error
-// messages to refer to block labels rather than attributes and child blocks.
-// It has no other effect.
-func (b *body) collectDeepAttrs(v node, labelName *string) ([]*objectAttr, hcl.Diagnostics) { - var diags hcl.Diagnostics - var attrs []*objectAttr - - switch tv := v.(type) { - case *nullVal: - // If a value is null, then we don't return any attributes or return an error. - - case *objectVal: - attrs = append(attrs, tv.Attrs...) - - case *arrayVal: - for _, ev := range tv.Values { - switch tev := ev.(type) { - case *objectVal: - attrs = append(attrs, tev.Attrs...) - default: - if labelName != nil { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Incorrect JSON value type", - Detail: fmt.Sprintf("A JSON object is required here, to specify %s labels for this block.", *labelName), - Subject: ev.StartRange().Ptr(), - }) - } else { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Incorrect JSON value type", - Detail: "A JSON object is required here, to define arguments and child blocks.", - Subject: ev.StartRange().Ptr(), - }) - } - } - } - - default: - if labelName != nil { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Incorrect JSON value type", - Detail: fmt.Sprintf("Either a JSON object or JSON array of objects is required here, to specify %s labels for this block.", *labelName), - Subject: v.StartRange().Ptr(), - }) - } else { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Incorrect JSON value type", - Detail: "Either a JSON object or JSON array of objects is required here, to define arguments and child blocks.", - Subject: v.StartRange().Ptr(), - }) - } - } - - return attrs, diags -} - -func (e *expression) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) { - switch v := e.src.(type) { - case *stringVal: - if ctx != nil { - // Parse string contents as a HCL native language expression. - // We only do this if we have a context, so passing a nil context - // is how the caller specifies that interpolations are not allowed - // and that the string should just be returned verbatim. - templateSrc := v.Value - expr, diags := hclsyntax.ParseTemplate( - []byte(templateSrc), - v.SrcRange.Filename, - - // This won't produce _exactly_ the right result, since - // the hclsyntax parser can't "see" any escapes we removed - // while parsing JSON, but it's better than nothing. - hcl.Pos{ - Line: v.SrcRange.Start.Line, - - // skip over the opening quote mark - Byte: v.SrcRange.Start.Byte + 1, - Column: v.SrcRange.Start.Column + 1, - }, - ) - if diags.HasErrors() { - return cty.DynamicVal, diags - } - val, evalDiags := expr.Value(ctx) - diags = append(diags, evalDiags...) - return val, diags - } - - return cty.StringVal(v.Value), nil - case *numberVal: - return cty.NumberVal(v.Value), nil - case *booleanVal: - return cty.BoolVal(v.Value), nil - case *arrayVal: - var diags hcl.Diagnostics - vals := []cty.Value{} - for _, jsonVal := range v.Values { - val, valDiags := (&expression{src: jsonVal}).Value(ctx) - vals = append(vals, val) - diags = append(diags, valDiags...) - } - return cty.TupleVal(vals), diags - case *objectVal: - var diags hcl.Diagnostics - attrs := map[string]cty.Value{} - attrRanges := map[string]hcl.Range{} - known := true - for _, jsonAttr := range v.Attrs { - // In this one context we allow keys to contain interpolation - // expressions too, assuming we're evaluating in interpolation - // mode. This achieves parity with the native syntax where - // object expressions can have dynamic keys, while block contents - // may not. 
- name, nameDiags := (&expression{src: &stringVal{ - Value: jsonAttr.Name, - SrcRange: jsonAttr.NameRange, - }}).Value(ctx) - valExpr := &expression{src: jsonAttr.Value} - val, valDiags := valExpr.Value(ctx) - diags = append(diags, nameDiags...) - diags = append(diags, valDiags...) - - var err error - name, err = convert.Convert(name, cty.String) - if err != nil { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid object key expression", - Detail: fmt.Sprintf("Cannot use this expression as an object key: %s.", err), - Subject: &jsonAttr.NameRange, - Expression: valExpr, - EvalContext: ctx, - }) - continue - } - if name.IsNull() { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid object key expression", - Detail: "Cannot use null value as an object key.", - Subject: &jsonAttr.NameRange, - Expression: valExpr, - EvalContext: ctx, - }) - continue - } - if !name.IsKnown() { - // This is a bit of a weird case, since our usual rules require - // us to tolerate unknowns and just represent the result as - // best we can but if we don't know the key then we can't - // know the type of our object at all, and thus we must turn - // the whole thing into cty.DynamicVal. This is consistent with - // how this situation is handled in the native syntax. - // We'll keep iterating so we can collect other errors in - // subsequent attributes. - known = false - continue - } - nameStr := name.AsString() - if _, defined := attrs[nameStr]; defined { - diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Duplicate object attribute", - Detail: fmt.Sprintf("An attribute named %q was already defined at %s.", nameStr, attrRanges[nameStr]), - Subject: &jsonAttr.NameRange, - Expression: e, - EvalContext: ctx, - }) - continue - } - attrs[nameStr] = val - attrRanges[nameStr] = jsonAttr.NameRange - } - if !known { - // We encountered an unknown key somewhere along the way, so - // we can't know what our type will eventually be. - return cty.DynamicVal, diags - } - return cty.ObjectVal(attrs), diags - case *nullVal: - return cty.NullVal(cty.DynamicPseudoType), nil - default: - // Default to DynamicVal so that ASTs containing invalid nodes can - // still be partially-evaluated. - return cty.DynamicVal, nil - } -} - -func (e *expression) Variables() []hcl.Traversal { - var vars []hcl.Traversal - - switch v := e.src.(type) { - case *stringVal: - templateSrc := v.Value - expr, diags := hclsyntax.ParseTemplate( - []byte(templateSrc), - v.SrcRange.Filename, - - // This won't produce _exactly_ the right result, since - // the hclsyntax parser can't "see" any escapes we removed - // while parsing JSON, but it's better than nothing. - hcl.Pos{ - Line: v.SrcRange.Start.Line, - - // skip over the opening quote mark - Byte: v.SrcRange.Start.Byte + 1, - Column: v.SrcRange.Start.Column + 1, - }, - ) - if diags.HasErrors() { - return vars - } - return expr.Variables() - - case *arrayVal: - for _, jsonVal := range v.Values { - vars = append(vars, (&expression{src: jsonVal}).Variables()...) - } - case *objectVal: - for _, jsonAttr := range v.Attrs { - keyExpr := &stringVal{ // we're going to treat key as an expression in this context - Value: jsonAttr.Name, - SrcRange: jsonAttr.NameRange, - } - vars = append(vars, (&expression{src: keyExpr}).Variables()...) - vars = append(vars, (&expression{src: jsonAttr.Value}).Variables()...) 
- } - } - - return vars -} - -func (e *expression) Range() hcl.Range { - return e.src.Range() -} - -func (e *expression) StartRange() hcl.Range { - return e.src.StartRange() -} - -// Implementation for hcl.AbsTraversalForExpr. -func (e *expression) AsTraversal() hcl.Traversal { - // In JSON-based syntax a traversal is given as a string containing - // traversal syntax as defined by hclsyntax.ParseTraversalAbs. - - switch v := e.src.(type) { - case *stringVal: - traversal, diags := hclsyntax.ParseTraversalAbs([]byte(v.Value), v.SrcRange.Filename, v.SrcRange.Start) - if diags.HasErrors() { - return nil - } - return traversal - default: - return nil - } -} - -// Implementation for hcl.ExprCall. -func (e *expression) ExprCall() *hcl.StaticCall { - // In JSON-based syntax a static call is given as a string containing - // an expression in the native syntax that also supports ExprCall. - - switch v := e.src.(type) { - case *stringVal: - expr, diags := hclsyntax.ParseExpression([]byte(v.Value), v.SrcRange.Filename, v.SrcRange.Start) - if diags.HasErrors() { - return nil - } - - call, diags := hcl.ExprCall(expr) - if diags.HasErrors() { - return nil - } - - return call - default: - return nil - } -} - -// Implementation for hcl.ExprList. -func (e *expression) ExprList() []hcl.Expression { - switch v := e.src.(type) { - case *arrayVal: - ret := make([]hcl.Expression, len(v.Values)) - for i, node := range v.Values { - ret[i] = &expression{src: node} - } - return ret - default: - return nil - } -} - -// Implementation for hcl.ExprMap. -func (e *expression) ExprMap() []hcl.KeyValuePair { - switch v := e.src.(type) { - case *objectVal: - ret := make([]hcl.KeyValuePair, len(v.Attrs)) - for i, jsonAttr := range v.Attrs { - ret[i] = hcl.KeyValuePair{ - Key: &expression{src: &stringVal{ - Value: jsonAttr.Name, - SrcRange: jsonAttr.NameRange, - }}, - Value: &expression{src: jsonAttr.Value}, - } - } - return ret - default: - return nil - } -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/json/tokentype_string.go b/vendor/github.com/hashicorp/hcl2/hcl/json/tokentype_string.go deleted file mode 100644 index bbcce5b30..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/json/tokentype_string.go +++ /dev/null @@ -1,29 +0,0 @@ -// Code generated by "stringer -type tokenType scanner.go"; DO NOT EDIT. - -package json - -import "strconv" - -const _tokenType_name = "tokenInvalidtokenCommatokenColontokenEqualstokenKeywordtokenNumbertokenStringtokenBrackOtokenBrackCtokenBraceOtokenBraceCtokenEOF" - -var _tokenType_map = map[tokenType]string{ - 0: _tokenType_name[0:12], - 44: _tokenType_name[12:22], - 58: _tokenType_name[22:32], - 61: _tokenType_name[32:43], - 75: _tokenType_name[43:55], - 78: _tokenType_name[55:66], - 83: _tokenType_name[66:77], - 91: _tokenType_name[77:88], - 93: _tokenType_name[88:99], - 123: _tokenType_name[99:110], - 125: _tokenType_name[110:121], - 9220: _tokenType_name[121:129], -} - -func (i tokenType) String() string { - if str, ok := _tokenType_map[i]; ok { - return str - } - return "tokenType(" + strconv.FormatInt(int64(i), 10) + ")" -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/merged.go b/vendor/github.com/hashicorp/hcl2/hcl/merged.go deleted file mode 100644 index 96e62a58d..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/merged.go +++ /dev/null @@ -1,226 +0,0 @@ -package hcl - -import ( - "fmt" -) - -// MergeFiles combines the given files to produce a single body that contains -// configuration from all of the given files. 
-// -// The ordering of the given files decides the order in which contained -// elements will be returned. If any top-level attributes are defined with -// the same name across multiple files, a diagnostic will be produced from -// the Content and PartialContent methods describing this error in a -// user-friendly way. -func MergeFiles(files []*File) Body { - var bodies []Body - for _, file := range files { - bodies = append(bodies, file.Body) - } - return MergeBodies(bodies) -} - -// MergeBodies is like MergeFiles except it deals directly with bodies, rather -// than with entire files. -func MergeBodies(bodies []Body) Body { - if len(bodies) == 0 { - // Swap out for our singleton empty body, to reduce the number of - // empty slices we have hanging around. - return emptyBody - } - - // If any of the given bodies are already merged bodies, we'll unpack - // to flatten to a single mergedBodies, since that's conceptually simpler. - // This also, as a side-effect, eliminates any empty bodies, since - // empties are merged bodies with no inner bodies. - var newLen int - var flatten bool - for _, body := range bodies { - if children, merged := body.(mergedBodies); merged { - newLen += len(children) - flatten = true - } else { - newLen++ - } - } - - if !flatten { // not just newLen == len, because we might have mergedBodies with single bodies inside - return mergedBodies(bodies) - } - - if newLen == 0 { - // Don't allocate a new empty when we already have one - return emptyBody - } - - new := make([]Body, 0, newLen) - for _, body := range bodies { - if children, merged := body.(mergedBodies); merged { - new = append(new, children...) - } else { - new = append(new, body) - } - } - return mergedBodies(new) -} - -var emptyBody = mergedBodies([]Body{}) - -// EmptyBody returns a body with no content. This body can be used as a -// placeholder when a body is required but no body content is available. -func EmptyBody() Body { - return emptyBody -} - -type mergedBodies []Body - -// Content returns the content produced by applying the given schema to all -// of the merged bodies and merging the result. -// -// Although required attributes _are_ supported, they should be used sparingly -// with merged bodies since in this case there is no contextual information -// with which to return good diagnostics. Applications working with merged -// bodies may wish to mark all attributes as optional and then check for -// required attributes afterwards, to produce better diagnostics. -func (mb mergedBodies) Content(schema *BodySchema) (*BodyContent, Diagnostics) { - // the returned body will always be empty in this case, because mergedContent - // will only ever call Content on the child bodies. - content, _, diags := mb.mergedContent(schema, false) - return content, diags -} - -func (mb mergedBodies) PartialContent(schema *BodySchema) (*BodyContent, Body, Diagnostics) { - return mb.mergedContent(schema, true) -} - -func (mb mergedBodies) JustAttributes() (Attributes, Diagnostics) { - attrs := make(map[string]*Attribute) - var diags Diagnostics - - for _, body := range mb { - thisAttrs, thisDiags := body.JustAttributes() - - if len(thisDiags) != 0 { - diags = append(diags, thisDiags...) 
- } - - if thisAttrs != nil { - for name, attr := range thisAttrs { - if existing := attrs[name]; existing != nil { - diags = diags.Append(&Diagnostic{ - Severity: DiagError, - Summary: "Duplicate argument", - Detail: fmt.Sprintf( - "Argument %q was already set at %s", - name, existing.NameRange.String(), - ), - Subject: &attr.NameRange, - }) - continue - } - - attrs[name] = attr - } - } - } - - return attrs, diags -} - -func (mb mergedBodies) MissingItemRange() Range { - if len(mb) == 0 { - // Nothing useful to return here, so we'll return some garbage. - return Range{ - Filename: "", - } - } - - // arbitrarily use the first body's missing item range - return mb[0].MissingItemRange() -} - -func (mb mergedBodies) mergedContent(schema *BodySchema, partial bool) (*BodyContent, Body, Diagnostics) { - // We need to produce a new schema with none of the attributes marked as - // required, since _any one_ of our bodies can contribute an attribute value. - // We'll separately check that all required attributes are present at - // the end. - mergedSchema := &BodySchema{ - Blocks: schema.Blocks, - } - for _, attrS := range schema.Attributes { - mergedAttrS := attrS - mergedAttrS.Required = false - mergedSchema.Attributes = append(mergedSchema.Attributes, mergedAttrS) - } - - var mergedLeftovers []Body - content := &BodyContent{ - Attributes: map[string]*Attribute{}, - } - - var diags Diagnostics - for _, body := range mb { - var thisContent *BodyContent - var thisLeftovers Body - var thisDiags Diagnostics - - if partial { - thisContent, thisLeftovers, thisDiags = body.PartialContent(mergedSchema) - } else { - thisContent, thisDiags = body.Content(mergedSchema) - } - - if thisLeftovers != nil { - mergedLeftovers = append(mergedLeftovers, thisLeftovers) - } - if len(thisDiags) != 0 { - diags = append(diags, thisDiags...) - } - - if thisContent.Attributes != nil { - for name, attr := range thisContent.Attributes { - if existing := content.Attributes[name]; existing != nil { - diags = diags.Append(&Diagnostic{ - Severity: DiagError, - Summary: "Duplicate argument", - Detail: fmt.Sprintf( - "Argument %q was already set at %s", - name, existing.NameRange.String(), - ), - Subject: &attr.NameRange, - }) - continue - } - content.Attributes[name] = attr - } - } - - if len(thisContent.Blocks) != 0 { - content.Blocks = append(content.Blocks, thisContent.Blocks...) - } - } - - // Finally, we check for required attributes. - for _, attrS := range schema.Attributes { - if !attrS.Required { - continue - } - - if content.Attributes[attrS.Name] == nil { - // We don't have any context here to produce a good diagnostic, - // which is why we warn in the Content docstring to minimize the - // use of required attributes on merged bodies. - diags = diags.Append(&Diagnostic{ - Severity: DiagError, - Summary: "Missing required argument", - Detail: fmt.Sprintf( - "The argument %q is required, but was not set.", - attrS.Name, - ), - }) - } - } - - leftoverBody := MergeBodies(mergedLeftovers) - return content, leftoverBody, diags -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/ops.go b/vendor/github.com/hashicorp/hcl2/hcl/ops.go deleted file mode 100644 index 5d2910c13..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/ops.go +++ /dev/null @@ -1,288 +0,0 @@ -package hcl - -import ( - "fmt" - "math/big" - - "github.com/zclconf/go-cty/cty" - "github.com/zclconf/go-cty/cty/convert" -) - -// Index is a helper function that performs the same operation as the index -// operator in the HCL expression language. 
That is, the result is the -// same as it would be for collection[key] in a configuration expression. -// -// This is exported so that applications can perform indexing in a manner -// consistent with how the language does it, including handling of null and -// unknown values, etc. -// -// Diagnostics are produced if the given combination of values is not valid. -// Therefore a pointer to a source range must be provided to use in diagnostics, -// though nil can be provided if the calling application is going to -// ignore the subject of the returned diagnostics anyway. -func Index(collection, key cty.Value, srcRange *Range) (cty.Value, Diagnostics) { - if collection.IsNull() { - return cty.DynamicVal, Diagnostics{ - { - Severity: DiagError, - Summary: "Attempt to index null value", - Detail: "This value is null, so it does not have any indices.", - Subject: srcRange, - }, - } - } - if key.IsNull() { - return cty.DynamicVal, Diagnostics{ - { - Severity: DiagError, - Summary: "Invalid index", - Detail: "Can't use a null value as an indexing key.", - Subject: srcRange, - }, - } - } - ty := collection.Type() - kty := key.Type() - if kty == cty.DynamicPseudoType || ty == cty.DynamicPseudoType { - return cty.DynamicVal, nil - } - - switch { - - case ty.IsListType() || ty.IsTupleType() || ty.IsMapType(): - var wantType cty.Type - switch { - case ty.IsListType() || ty.IsTupleType(): - wantType = cty.Number - case ty.IsMapType(): - wantType = cty.String - default: - // should never happen - panic("don't know what key type we want") - } - - key, keyErr := convert.Convert(key, wantType) - if keyErr != nil { - return cty.DynamicVal, Diagnostics{ - { - Severity: DiagError, - Summary: "Invalid index", - Detail: fmt.Sprintf( - "The given key does not identify an element in this collection value: %s.", - keyErr.Error(), - ), - Subject: srcRange, - }, - } - } - - has := collection.HasIndex(key) - if !has.IsKnown() { - if ty.IsTupleType() { - return cty.DynamicVal, nil - } else { - return cty.UnknownVal(ty.ElementType()), nil - } - } - if has.False() { - // We have a more specialized error message for the situation of - // using a fractional number to index into a sequence, because - // that will tend to happen if the user is trying to use division - // to calculate an index and not realizing that HCL does float - // division rather than integer division. 
- if (ty.IsListType() || ty.IsTupleType()) && key.Type().Equals(cty.Number) { - if key.IsKnown() && !key.IsNull() { - bf := key.AsBigFloat() - if _, acc := bf.Int(nil); acc != big.Exact { - return cty.DynamicVal, Diagnostics{ - { - Severity: DiagError, - Summary: "Invalid index", - Detail: fmt.Sprintf("The given key does not identify an element in this collection value: indexing a sequence requires a whole number, but the given index (%g) has a fractional part.", bf), - Subject: srcRange, - }, - } - } - } - } - - return cty.DynamicVal, Diagnostics{ - { - Severity: DiagError, - Summary: "Invalid index", - Detail: "The given key does not identify an element in this collection value.", - Subject: srcRange, - }, - } - } - - return collection.Index(key), nil - - case ty.IsObjectType(): - key, keyErr := convert.Convert(key, cty.String) - if keyErr != nil { - return cty.DynamicVal, Diagnostics{ - { - Severity: DiagError, - Summary: "Invalid index", - Detail: fmt.Sprintf( - "The given key does not identify an element in this collection value: %s.", - keyErr.Error(), - ), - Subject: srcRange, - }, - } - } - if !collection.IsKnown() { - return cty.DynamicVal, nil - } - if !key.IsKnown() { - return cty.DynamicVal, nil - } - - attrName := key.AsString() - - if !ty.HasAttribute(attrName) { - return cty.DynamicVal, Diagnostics{ - { - Severity: DiagError, - Summary: "Invalid index", - Detail: "The given key does not identify an element in this collection value.", - Subject: srcRange, - }, - } - } - - return collection.GetAttr(attrName), nil - - default: - return cty.DynamicVal, Diagnostics{ - { - Severity: DiagError, - Summary: "Invalid index", - Detail: "This value does not have any indices.", - Subject: srcRange, - }, - } - } - -} - -// GetAttr is a helper function that performs the same operation as the -// attribute access in the HCL expression language. That is, the result is the -// same as it would be for obj.attr in a configuration expression. -// -// This is exported so that applications can access attributes in a manner -// consistent with how the language does it, including handling of null and -// unknown values, etc. -// -// Diagnostics are produced if the given combination of values is not valid. -// Therefore a pointer to a source range must be provided to use in diagnostics, -// though nil can be provided if the calling application is going to -// ignore the subject of the returned diagnostics anyway. 
-func GetAttr(obj cty.Value, attrName string, srcRange *Range) (cty.Value, Diagnostics) { - if obj.IsNull() { - return cty.DynamicVal, Diagnostics{ - { - Severity: DiagError, - Summary: "Attempt to get attribute from null value", - Detail: "This value is null, so it does not have any attributes.", - Subject: srcRange, - }, - } - } - - ty := obj.Type() - switch { - case ty.IsObjectType(): - if !ty.HasAttribute(attrName) { - return cty.DynamicVal, Diagnostics{ - { - Severity: DiagError, - Summary: "Unsupported attribute", - Detail: fmt.Sprintf("This object does not have an attribute named %q.", attrName), - Subject: srcRange, - }, - } - } - - if !obj.IsKnown() { - return cty.UnknownVal(ty.AttributeType(attrName)), nil - } - - return obj.GetAttr(attrName), nil - case ty.IsMapType(): - if !obj.IsKnown() { - return cty.UnknownVal(ty.ElementType()), nil - } - - idx := cty.StringVal(attrName) - if obj.HasIndex(idx).False() { - return cty.DynamicVal, Diagnostics{ - { - Severity: DiagError, - Summary: "Missing map element", - Detail: fmt.Sprintf("This map does not have an element with the key %q.", attrName), - Subject: srcRange, - }, - } - } - - return obj.Index(idx), nil - case ty == cty.DynamicPseudoType: - return cty.DynamicVal, nil - default: - return cty.DynamicVal, Diagnostics{ - { - Severity: DiagError, - Summary: "Unsupported attribute", - Detail: "This value does not have any attributes.", - Subject: srcRange, - }, - } - } - -} - -// ApplyPath is a helper function that applies a cty.Path to a value using the -// indexing and attribute access operations from HCL. -// -// This is similar to calling the path's own Apply method, but ApplyPath uses -// the more relaxed typing rules that apply to these operations in HCL, rather -// than cty's relatively-strict rules. ApplyPath is implemented in terms of -// Index and GetAttr, and so it has the same behavior for individual steps -// but will stop and return any errors returned by intermediate steps. -// -// Diagnostics are produced if the given path cannot be applied to the given -// value. Therefore a pointer to a source range must be provided to use in -// diagnostics, though nil can be provided if the calling application is going -// to ignore the subject of the returned diagnostics anyway. -func ApplyPath(val cty.Value, path cty.Path, srcRange *Range) (cty.Value, Diagnostics) { - var diags Diagnostics - - for _, step := range path { - var stepDiags Diagnostics - switch ts := step.(type) { - case cty.IndexStep: - val, stepDiags = Index(val, ts.Key, srcRange) - case cty.GetAttrStep: - val, stepDiags = GetAttr(val, ts.Name, srcRange) - default: - // Should never happen because the above are all of the step types. - diags = diags.Append(&Diagnostic{ - Severity: DiagError, - Summary: "Invalid path step", - Detail: fmt.Sprintf("Go type %T is not a valid path step. This is a bug in this program.", step), - Subject: srcRange, - }) - return cty.DynamicVal, diags - } - - diags = append(diags, stepDiags...) - if stepDiags.HasErrors() { - return cty.DynamicVal, diags - } - } - - return val, diags -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/pos.go b/vendor/github.com/hashicorp/hcl2/hcl/pos.go deleted file mode 100644 index 06db8bfbd..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/pos.go +++ /dev/null @@ -1,275 +0,0 @@ -package hcl - -import "fmt" - -// Pos represents a single position in a source file, by addressing the -// start byte of a unicode character encoded in UTF-8. 
-//
-// Pos is generally used only in the context of a Range, which then defines
-// which source file the position is within.
-type Pos struct {
-	// Line is the source code line where this position points. Lines are
-	// counted starting at 1 and incremented for each newline character
-	// encountered.
-	Line int
-
-	// Column is the source code column where this position points, in
-	// unicode characters, with counting starting at 1.
-	//
-	// Column counts characters as they appear visually, so for example a
-	// latin letter with a combining diacritic mark counts as one character.
-	// This is intended for rendering visual markers against source code in
-	// contexts where these diacritics would be rendered in a single character
-	// cell. Technically speaking, Column is counting grapheme clusters as
-	// used in unicode normalization.
-	Column int
-
-	// Byte is the byte offset into the file where the indicated character
-	// begins. This is a zero-based offset to the first byte of the first
-	// UTF-8 codepoint sequence in the character, and thus gives a position
-	// that can be resolved _without_ awareness of Unicode characters.
-	Byte int
-}
-
-// InitialPos is a suitable position to use to mark the start of a file.
-var InitialPos = Pos{Byte: 0, Line: 1, Column: 1}
-
-// Range represents a span of characters between two positions in a source
-// file.
-//
-// This struct is usually used by value in types that represent AST nodes,
-// but by pointer in types that refer to the positions of other objects,
-// such as in diagnostics.
-type Range struct {
-	// Filename is the name of the file into which this range's positions
-	// point.
-	Filename string
-
-	// Start and End represent the bounds of this range. Start is inclusive
-	// and End is exclusive.
-	Start, End Pos
-}
-
-// RangeBetween returns a new range that spans from the beginning of the
-// start range to the end of the end range.
-//
-// The result is meaningless if the two ranges do not belong to the same
-// source file or if the end range appears before the start range.
-func RangeBetween(start, end Range) Range {
-	return Range{
-		Filename: start.Filename,
-		Start: start.Start,
-		End: end.End,
-	}
-}
-
-// RangeOver returns a new range that covers both of the given ranges and
-// possibly additional content between them if the two ranges do not overlap.
-//
-// If either range is empty then it is ignored. The result is empty if both
-// given ranges are empty.
-//
-// The result is meaningless if the two ranges do not belong to the same
-// source file.
-func RangeOver(a, b Range) Range {
-	if a.Empty() {
-		return b
-	}
-	if b.Empty() {
-		return a
-	}
-
-	var start, end Pos
-	if a.Start.Byte < b.Start.Byte {
-		start = a.Start
-	} else {
-		start = b.Start
-	}
-	if a.End.Byte > b.End.Byte {
-		end = a.End
-	} else {
-		end = b.End
-	}
-	return Range{
-		Filename: a.Filename,
-		Start: start,
-		End: end,
-	}
-}
-
-// ContainsPos returns true if and only if the given position is contained within
-// the receiving range.
-//
-// In the unlikely case that the line/column information disagree with the byte
-// offset information in the given position or receiving range, the byte
-// offsets are given priority.
-func (r Range) ContainsPos(pos Pos) bool {
-	return r.ContainsOffset(pos.Byte)
-}
-
-// ContainsOffset returns true if and only if the given byte offset is within
-// the receiving Range.
-func (r Range) ContainsOffset(offset int) bool {
-	return offset >= r.Start.Byte && offset < r.End.Byte
-}
-
-// Ptr returns a pointer to a copy of the receiver. This is a convenience
-// for using ranges in places where pointers are required, such as in
-// Diagnostic, when the range in question is returned from a method. Go would
-// otherwise not allow one to take the address of a function call.
-func (r Range) Ptr() *Range {
-	return &r
-}
-
-// String returns a compact string representation of the receiver.
-// Callers should generally prefer to present a range more visually,
-// e.g. via markers directly on the relevant portion of source code.
-func (r Range) String() string {
-	if r.Start.Line == r.End.Line {
-		return fmt.Sprintf(
-			"%s:%d,%d-%d",
-			r.Filename,
-			r.Start.Line, r.Start.Column,
-			r.End.Column,
-		)
-	} else {
-		return fmt.Sprintf(
-			"%s:%d,%d-%d,%d",
-			r.Filename,
-			r.Start.Line, r.Start.Column,
-			r.End.Line, r.End.Column,
-		)
-	}
-}
-
-func (r Range) Empty() bool {
-	return r.Start.Byte == r.End.Byte
-}
-
-// CanSliceBytes returns true if SliceBytes could return an accurate
-// sub-slice of the given slice.
-//
-// This effectively tests whether the start and end offsets of the range
-// are within the bounds of the slice, and thus whether SliceBytes can be
-// trusted to produce an accurate start and end position within that slice.
-func (r Range) CanSliceBytes(b []byte) bool {
-	switch {
-	case r.Start.Byte < 0 || r.Start.Byte > len(b):
-		return false
-	case r.End.Byte < 0 || r.End.Byte > len(b):
-		return false
-	case r.End.Byte < r.Start.Byte:
-		return false
-	default:
-		return true
-	}
-}
-
-// SliceBytes returns a sub-slice of the given slice that is covered by the
-// receiving range, assuming that the given slice is the source code of the
-// file indicated by r.Filename.
-//
-// If the receiver refers to any byte offsets that are outside of the slice
-// then the result is constrained to the overlapping portion only, to avoid
-// a panic. Use CanSliceBytes to determine if the result is guaranteed to
-// be an accurate span of the requested range.
-func (r Range) SliceBytes(b []byte) []byte {
-	start := r.Start.Byte
-	end := r.End.Byte
-	if start < 0 {
-		start = 0
-	} else if start > len(b) {
-		start = len(b)
-	}
-	if end < 0 {
-		end = 0
-	} else if end > len(b) {
-		end = len(b)
-	}
-	if end < start {
-		end = start
-	}
-	return b[start:end]
-}
-
-// Overlaps returns true if the receiver and the other given range share any
-// characters in common.
-func (r Range) Overlaps(other Range) bool {
-	switch {
-	case r.Filename != other.Filename:
-		// If the ranges are in different files then they can't possibly overlap
-		return false
-	case r.Empty() || other.Empty():
-		// Empty ranges can never overlap
-		return false
-	case r.ContainsOffset(other.Start.Byte) || r.ContainsOffset(other.End.Byte):
-		return true
-	case other.ContainsOffset(r.Start.Byte) || other.ContainsOffset(r.End.Byte):
-		return true
-	default:
-		return false
-	}
-}
-
-// Overlap finds a range that is either identical to or a sub-range of both
-// the receiver and the other given range. It returns an empty range
-// within the receiver if there is no overlap between the two ranges.
-//
-// A non-empty result is either identical to or a subset of the receiver.
-func (r Range) Overlap(other Range) Range {
-	if !r.Overlaps(other) {
-		// Start == End indicates an empty range
-		return Range{
-			Filename: r.Filename,
-			Start: r.Start,
-			End: r.Start,
-		}
-	}
-
-	var start, end Pos
-	if r.Start.Byte > other.Start.Byte {
-		start = r.Start
-	} else {
-		start = other.Start
-	}
-	if r.End.Byte < other.End.Byte {
-		end = r.End
-	} else {
-		end = other.End
-	}
-
-	return Range{
-		Filename: r.Filename,
-		Start: start,
-		End: end,
-	}
-}
-
-// PartitionAround finds the portion of the given range that overlaps with
-// the receiver and returns three ranges: the portion of the receiver that
-// precedes the overlap, the overlap itself, and then the portion of the
-// receiver that comes after the overlap.
-//
-// If the two ranges do not overlap then all three returned ranges are empty.
-//
-// If the given range aligns with or extends beyond either extent of the
-// receiver then the corresponding outer range will be empty.
-func (r Range) PartitionAround(other Range) (before, overlap, after Range) {
-	overlap = r.Overlap(other)
-	if overlap.Empty() {
-		return overlap, overlap, overlap
-	}
-
-	before = Range{
-		Filename: r.Filename,
-		Start: r.Start,
-		End: overlap.Start,
-	}
-	after = Range{
-		Filename: r.Filename,
-		Start: overlap.End,
-		End: r.End,
-	}
-
-	return before, overlap, after
-}
diff --git a/vendor/github.com/hashicorp/hcl2/hcl/pos_scanner.go b/vendor/github.com/hashicorp/hcl2/hcl/pos_scanner.go
deleted file mode 100644
index 17c0d7c6b..000000000
--- a/vendor/github.com/hashicorp/hcl2/hcl/pos_scanner.go
+++ /dev/null
@@ -1,152 +0,0 @@
-package hcl
-
-import (
-	"bufio"
-	"bytes"
-
-	"github.com/apparentlymart/go-textseg/textseg"
-)
-
-// RangeScanner is a helper that will scan over a buffer using a bufio.SplitFunc
-// and visit a source range for each token matched.
-//
-// For example, this can be used with bufio.ScanLines to find the source range
-// for each line in the file, skipping over the actual newline characters, which
-// may be useful when printing source code snippets as part of diagnostic
-// messages.
-//
-// The line and column information in the returned ranges is produced by
-// counting newline characters and grapheme clusters respectively, which
-// mimics the behavior we expect from a parser when producing ranges.
-type RangeScanner struct {
-	filename string
-	b []byte
-	cb bufio.SplitFunc
-
-	pos Pos // position of next byte to process in b
-	cur Range // latest range
-	tok []byte // slice of b that is covered by cur
-	err error // error from last scan, if any
-}
-
-// NewRangeScanner creates a new RangeScanner for the given buffer, producing
-// ranges for the given filename.
-//
-// Since ranges have grapheme-cluster granularity rather than byte granularity,
-// the scanner will produce incorrect results if the given SplitFunc creates
-// tokens between grapheme cluster boundaries. In particular, it is incorrect
-// to use RangeScanner with bufio.ScanRunes because it will produce tokens
-// around individual UTF-8 sequences, which will split any multi-sequence
-// grapheme clusters.
-func NewRangeScanner(b []byte, filename string, cb bufio.SplitFunc) *RangeScanner {
-	return NewRangeScannerFragment(b, filename, InitialPos, cb)
-}
-
-// NewRangeScannerFragment is like NewRangeScanner but the ranges it produces
-// will be offset by the given starting position, which is appropriate for
-// sub-slices of a file, whereas NewRangeScanner assumes it is scanning an
-// entire file.
-func NewRangeScannerFragment(b []byte, filename string, start Pos, cb bufio.SplitFunc) *RangeScanner { - return &RangeScanner{ - filename: filename, - b: b, - cb: cb, - pos: start, - } -} - -func (sc *RangeScanner) Scan() bool { - if sc.pos.Byte >= len(sc.b) || sc.err != nil { - // All done - return false - } - - // Since we're operating on an in-memory buffer, we always pass the whole - // remainder of the buffer to our SplitFunc and set isEOF to let it know - // that it has the whole thing. - advance, token, err := sc.cb(sc.b[sc.pos.Byte:], true) - - // Since we are setting isEOF to true this should never happen, but - // if it does we will just abort and assume the SplitFunc is misbehaving. - if advance == 0 && token == nil && err == nil { - return false - } - - if err != nil { - sc.err = err - sc.cur = Range{ - Filename: sc.filename, - Start: sc.pos, - End: sc.pos, - } - sc.tok = nil - return false - } - - sc.tok = token - start := sc.pos - end := sc.pos - new := sc.pos - - // adv is similar to token but it also includes any subsequent characters - // we're being asked to skip over by the SplitFunc. - // adv is a slice covering any additional bytes we are skipping over, based - // on what the SplitFunc told us to do with advance. - adv := sc.b[sc.pos.Byte : sc.pos.Byte+advance] - - // We now need to scan over our token to count the grapheme clusters - // so we can correctly advance Column, and count the newlines so we - // can correctly advance Line. - advR := bytes.NewReader(adv) - gsc := bufio.NewScanner(advR) - advanced := 0 - gsc.Split(textseg.ScanGraphemeClusters) - for gsc.Scan() { - gr := gsc.Bytes() - new.Byte += len(gr) - new.Column++ - - // We rely here on the fact that \r\n is considered a grapheme cluster - // and so we don't need to worry about miscounting additional lines - // on files with Windows-style line endings. - if len(gr) != 0 && (gr[0] == '\r' || gr[0] == '\n') { - new.Column = 1 - new.Line++ - } - - if advanced < len(token) { - // If we've not yet found the end of our token then we'll - // also push our "end" marker along. - // (if advance > len(token) then we'll stop moving "end" early - // so that the caller only sees the range covered by token.) - end = new - } - advanced += len(gr) - } - - sc.cur = Range{ - Filename: sc.filename, - Start: start, - End: end, - } - sc.pos = new - return true -} - -// Range returns a range that covers the latest token obtained after a call -// to Scan returns true. -func (sc *RangeScanner) Range() Range { - return sc.cur -} - -// Bytes returns the slice of the input buffer that is covered by the range -// that would be returned by Range. -func (sc *RangeScanner) Bytes() []byte { - return sc.tok -} - -// Err can be called after Scan returns false to determine if the latest read -// resulted in an error, and obtain that error if so. -func (sc *RangeScanner) Err() error { - return sc.err -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/schema.go b/vendor/github.com/hashicorp/hcl2/hcl/schema.go deleted file mode 100644 index 891257acb..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/schema.go +++ /dev/null @@ -1,21 +0,0 @@ -package hcl - -// BlockHeaderSchema represents the shape of a block header, and is -// used for matching blocks within bodies. -type BlockHeaderSchema struct { - Type string - LabelNames []string -} - -// AttributeSchema represents the requirements for an attribute, and is used -// for matching attributes within bodies. 
-type AttributeSchema struct {
- Name string
- Required bool
-}
-
-// BodySchema represents the desired shallow structure of a body.
-type BodySchema struct {
- Attributes []AttributeSchema
- Blocks []BlockHeaderSchema
-}
diff --git a/vendor/github.com/hashicorp/hcl2/hcl/spec.md b/vendor/github.com/hashicorp/hcl2/hcl/spec.md
deleted file mode 100644
index 97ef61318..000000000
--- a/vendor/github.com/hashicorp/hcl2/hcl/spec.md
+++ /dev/null
@@ -1,691 +0,0 @@
-# HCL Syntax-Agnostic Information Model
-
-This is the specification for the general information model (abstract types and
-semantics) for HCL. HCL is a system for defining configuration languages for
-applications. The HCL information model is designed to support multiple
-concrete syntaxes for configuration, each with a mapping to the model defined
-in this specification.
-
-The two primary syntaxes intended for use in conjunction with this model are
-[the HCL native syntax](./hclsyntax/spec.md) and [the JSON syntax](./json/spec.md).
-In principle other syntaxes are possible as long as either their language model
-is sufficiently rich to express the concepts described in this specification
-or the language targets a well-defined subset of the specification.
-
-## Structural Elements
-
-The primary structural element is the _body_, which is a container representing
-a set of zero or more _attributes_ and a set of zero or more _blocks_.
-
-A _configuration file_ is the top-level object, and will usually be produced
-by reading a file from disk and parsing it as a particular syntax. A
-configuration file has its own _body_, representing the top-level attributes
-and blocks.
-
-An _attribute_ is a name and value pair associated with a body. Attribute names
-are unique within a given body. Attribute values are provided as _expressions_,
-which are discussed in detail in a later section.
-
-A _block_ is a nested structure that has a _type name_, zero or more string
-_labels_ (e.g. identifiers), and a nested body.
-
-Together the structural elements create a hierarchical data structure, with
-attributes intended to represent the direct properties of a particular object
-in the calling application, and blocks intended to represent child objects
-of a particular object.
-
-## Body Content
-
-To support the expression of the HCL concepts in languages whose information
-model is a subset of HCL's, such as JSON, a _body_ is an opaque container
-whose content can only be accessed by providing information on the expected
-structure of the content.
-
-The specification for each syntax must describe how its physical constructs
-are mapped on to body content given a schema. For syntaxes that have
-first-class syntax distinguishing attributes and blocks, this can be relatively
-straightforward, while more detailed mapping rules may be required in syntaxes
-where the representation of attributes vs. blocks is ambiguous.
-
-### Schema-driven Processing
-
-Schema-driven processing is the primary way to access body content.
-A _body schema_ is a description of what is expected within a particular body,
-which can then be used to extract the _body content_, which then provides
-access to the specific attributes and blocks requested.
-
-A _body schema_ consists of a list of _attribute schemata_ and
-_block header schemata_:
-
-- An _attribute schema_ provides the name of an attribute and whether its
- presence is required.
- -- A _block header schema_ provides a block type name and the semantic names - assigned to each of the labels of that block type, if any. - -Within a schema, it is an error to request the same attribute name twice or -to request a block type whose name is also an attribute name. While this can -in principle be supported in some syntaxes, in other syntaxes the attribute -and block namespaces are combined and so an attribute cannot coexist with -a block whose type name is identical to the attribute name. - -The result of applying a body schema to a body is _body content_, which -consists of an _attribute map_ and a _block sequence_: - -- The _attribute map_ is a map data structure whose keys are attribute names - and whose values are _expressions_ that represent the corresponding attribute - values. - -- The _block sequence_ is an ordered sequence of blocks, with each specifying - a block _type name_, the sequence of _labels_ specified for the block, - and the body object (not body _content_) representing the block's own body. - -After obtaining _body content_, the calling application may continue processing -by evaluating attribute expressions and/or recursively applying further -schema-driven processing to the child block bodies. - -**Note:** The _body schema_ is intentionally minimal, to reduce the set of -mapping rules that must be defined for each syntax. Higher-level utility -libraries may be provided to assist in the construction of a schema and -perform additional processing, such as automatically evaluating attribute -expressions and assigning their result values into a data structure, or -recursively applying a schema to child blocks. Such utilities are not part of -this core specification and will vary depending on the capabilities and idiom -of the implementation language. - -### _Dynamic Attributes_ Processing - -The _schema-driven_ processing model is useful when the expected structure -of a body is known a priori by the calling application. Some blocks are -instead more free-form, such as a user-provided set of arbitrary key/value -pairs. - -The alternative _dynamic attributes_ processing mode allows for this more -ad-hoc approach. Processing in this mode behaves as if a schema had been -constructed without any _block header schemata_ and with an attribute -schema for each distinct key provided within the physical representation -of the body. - -The means by which _distinct keys_ are identified is dependent on the -physical syntax; this processing mode assumes that the syntax has a way -to enumerate keys provided by the author and identify expressions that -correspond with those keys, but does not define the means by which this is -done. - -The result of _dynamic attributes_ processing is an _attribute map_ as -defined in the previous section. No _block sequence_ is produced in this -processing mode. - -### Partial Processing of Body Content - -Under _schema-driven processing_, by default the given schema is assumed -to be exhaustive, such that any attribute or block not matched by schema -elements is considered an error. This allows feedback about unsupported -attributes and blocks (such as typos) to be provided. - -An alternative is _partial processing_, where any additional elements within -the body are not considered an error. - -Under partial processing, the result is both body content as described -above _and_ a new body that represents any body elements that remain after -the schema has been processed. 
-
-Specifically:
-
-- Any attribute whose name is specified in the schema is returned in body
- content and elided from the new body.
-
-- Any block whose type is specified in the schema is returned in body content
- and elided from the new body.
-
-- Any attribute or block _not_ meeting the above conditions is placed into
- the new body, unmodified.
-
-The new body can then be recursively processed using any of the body
-processing models. This facility allows different subsets of body content
-to be processed by different parts of the calling application.
-
-Processing a body in two steps — first partial processing of a source body,
-then exhaustive processing of the returned body — is equivalent to single-step
-processing with a schema that is the union of the schemata used
-across the two steps.
-
-## Expressions
-
-Attribute values are represented by _expressions_. Depending on the concrete
-syntax in use, an expression may just be a literal value or it may describe
-a computation in terms of literal values, variables, and functions.
-
-Each syntax defines its own representation of expressions. For syntaxes based
-in languages that do not have any non-literal expression syntax, it is
-recommended to embed the template language from
-[the native syntax](./hclsyntax/spec.md) e.g. as a post-processing step on
-string literals.
-
-### Expression Evaluation
-
-In order to obtain a concrete value, each expression must be _evaluated_.
-Evaluation is performed in terms of an evaluation context, which
-consists of the following:
-
-- An _evaluation mode_, which is defined below.
-- A _variable scope_, which provides a set of named variables for use in
- expressions.
-- A _function table_, which provides a set of named functions for use in
- expressions.
-
-The _evaluation mode_ allows for two different interpretations of an
-expression:
-
-- In _literal-only mode_, variables and functions are not available and it
- is assumed that the calling application's intent is to treat the attribute
- value as a literal.
-
-- In _full expression mode_, variables and functions are defined and it is
- assumed that the calling application wishes to provide a full expression
- language for definition of the attribute value.
-
-The actual behavior of these two modes depends on the syntax in use. For
-languages with first-class expression syntax, these two modes may be considered
-equivalent, with _literal-only mode_ simply not defining any variables or
-functions. For languages that embed arbitrary expressions via string templates,
-_literal-only mode_ may disable such processing, allowing literal strings to
-pass through without interpretation as templates.
-
-Since literal-only mode does not support variables and functions, it is an
-error for the calling application to enable this mode and yet provide a
-variable scope and/or function table.
-
-## Values and Value Types
-
-The result of expression evaluation is a _value_. Each value has a _type_,
-which is dynamically determined during evaluation. The _variable scope_ in
-the evaluation context is a map from variable name to value, using the same
-definition of value.
-
-The type system for HCL values is intended to be at a level of abstraction
-suitable for configuration of various applications. A well-defined,
-implementation-language-agnostic type system is defined to allow for
-consistent processing of configuration across many implementation languages.
-Concrete implementations may provide additional functionality to lower
-HCL values and types to corresponding native language types, which may then
-impose additional constraints on the values outside of the scope of this
-specification.
-
-Two values are _equal_ if and only if they have identical types and their
-values are equal according to the rules of their shared type.
-
-### Primitive Types
-
-The primitive types are _string_, _bool_, and _number_.
-
-A _string_ is a sequence of Unicode characters. Two strings are equal if
-NFC normalization ([UAX#15](http://unicode.org/reports/tr15/))
-of each string produces two identical sequences of characters.
-NFC normalization ensures that, for example, a precomposed combination of a
-Latin letter and a diacritic compares equal with the letter followed by
-a combining diacritic.
-
-The _bool_ type has only two non-null values: _true_ and _false_. Two bool
-values are equal if and only if they are either both true or both false.
-
-A _number_ is an arbitrary-precision floating point value. An implementation
-_must_ make the full-precision values available to the calling application
-for interpretation into any suitable number representation. An implementation
-may in practice implement numbers with limited precision so long as the
-following constraints are met:
-
-- Integers are represented with at least 256 bits.
-- Non-integer numbers are represented as floating point values with a
- mantissa of at least 256 bits and a signed binary exponent of at least
- 16 bits.
-- An error is produced if an integer value given in source cannot be
- represented precisely.
-- An error is produced if a non-integer value cannot be represented due to
- overflow.
-- A non-integer number is rounded to the nearest possible value when a
- value is of too high a precision to be represented.
-
-The _number_ type also requires representation of both positive and negative
-infinity. A "not a number" (NaN) value is _not_ provided nor used.
-
-Two number values are equal if they are numerically equal to the precision
-associated with the number. Positive infinity and negative infinity are
-equal to themselves but not to each other. Positive infinity is greater than
-any other number value, and negative infinity is less than any other number
-value.
-
-Some syntaxes may be unable to represent numeric literals of arbitrary
-precision. This must be defined in the syntax specification as part of its
-description of mapping numeric literals to HCL values.
-
-### Structural Types
-
-_Structural types_ are types that are constructed by combining other types.
-Each distinct combination of other types is itself a distinct type. There
-are two structural type _kinds_:
-
-- _Object types_ are constructed of a set of named attributes, each of which
- has a type. Attribute names are always strings. (_Object_ attributes are a
- distinct idea from _body_ attributes, though calling applications
- may choose to blur the distinction by use of common naming schemes.)
-- _Tuple types_ are constructed of a sequence of elements, each of which
- has a type.
-
-Values of structural types are compared for equality in terms of their
-attributes or elements. A structural type value is equal to another if and
-only if all of the corresponding attributes or elements are equal.
-
-Two structural types are identical if they are of the same kind and
-have attributes or elements with identical types.
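[Editor's note: the structural-type identity and equality rules above can be observed directly in go-cty, the Go implementation of this type system that the vendored code in this diff imports. The following is an illustrative sketch only, with invented value and attribute names; it is not part of the original spec file.]

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// Two object values whose attributes have identical names and types
	// have identical object types, per the identity rules above.
	a := cty.ObjectVal(map[string]cty.Value{
		"name": cty.StringVal("example"),
		"port": cty.NumberIntVal(8080),
	})
	b := cty.ObjectVal(map[string]cty.Value{
		"name": cty.StringVal("example"),
		"port": cty.NumberIntVal(8080),
	})
	fmt.Println(a.Type().Equals(b.Type())) // true: same attribute names and types
	fmt.Println(a.RawEquals(b))            // true: all corresponding attributes are equal

	// A tuple type is a distinct type for each sequence of element types,
	// and is never identical to an object type.
	t := cty.TupleVal([]cty.Value{cty.StringVal("x"), cty.True})
	fmt.Println(t.Type().Equals(a.Type())) // false: different structural kind
}
```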
-
-### Collection Types
-
-_Collection types_ are types that combine an arbitrary number of
-values of some other single type. There are three collection type _kinds_:
-
-- _List types_ represent ordered sequences of values of their element type.
-- _Map types_ represent values of their element type accessed via string keys.
-- _Set types_ represent unordered sets of distinct values of their element type.
-
-For each of these kinds and each distinct element type there is a distinct
-collection type. For example, "list of string" is a distinct type from
-"set of string", and "list of number" is a distinct type from "list of string".
-
-Values of collection types are compared for equality in terms of their
-elements. A collection type value is equal to another if and only if both
-have the same number of elements and their corresponding elements are equal.
-
-Two collection types are identical if they are of the same kind and have
-the same element type.
-
-### Null Values
-
-Each type has a null value. The null value of a type represents the absence
-of a value, but with type information retained to allow for type checking.
-
-Null values are used primarily to represent the conditional absence of a
-body attribute. In a syntax with a conditional operator, one of the result
-values of that conditional may be null to indicate that the attribute should be
-considered not present in that case.
-
-Calling applications _should_ consider an attribute with a null value as
-equivalent to the value not being present at all.
-
-A null value of a particular type is equal to itself.
-
-### Unknown Values and the Dynamic Pseudo-type
-
-An _unknown value_ is a placeholder for a value that is not yet known.
-Operations on unknown values themselves return unknown values that have a
-type appropriate to the operation. For example, adding together two unknown
-numbers yields an unknown number, while comparing two unknown values of any
-type for equality yields an unknown bool.
-
-Each type has a distinct unknown value. For example, an unknown _number_ is
-a distinct value from an unknown _string_.
-
-_The dynamic pseudo-type_ is a placeholder for a type that is not yet known.
-The only values of this type are its null value and its unknown value. It is
-referred to as a _pseudo-type_ because it should not be considered a type in
-its own right, but rather as a placeholder for a type yet to be established.
-The unknown value of the dynamic pseudo-type is referred to as _the dynamic
-value_.
-
-Operations on values of the dynamic pseudo-type behave as if it is a value
-of the expected type, optimistically assuming that once the value and type
-are known they will be valid for the operation. For example, adding together
-a number and the dynamic value produces an unknown number.
-
-Unknown values and the dynamic pseudo-type can be used as a mechanism for
-partial type checking and semantic checking: by evaluating an expression with
-all variables set to an unknown value, the expression can be evaluated to
-produce an unknown value of a given type, or produce an error if any operation
-is provably invalid with only type information.
-
-Unknown values and the dynamic pseudo-type must never be returned from
-operations unless at least one operand is unknown or dynamic. Calling
-applications are guaranteed that unless the global scope includes unknown
-values, or the function table includes functions that return unknown values,
-no expression will evaluate to an unknown value.
The calling application is
-thus in total control over the use and meaning of unknown values.
-
-The dynamic pseudo-type is identical only to itself.
-
-### Capsule Types
-
-A _capsule type_ is a custom type defined by the calling application. A value
-of a capsule type is considered opaque to HCL, but may be accepted
-by functions provided by the calling application.
-
-A particular capsule type is identical only to itself. The equality of two
-values of the same capsule type is defined by the calling application. No
-other operations are supported for values of capsule types.
-
-Support for capsule types in an HCL implementation is optional. Capsule types
-are intended to allow calling applications to pass through values that are
-not part of the standard type system. For example, an application that
-deals with raw binary data may define a capsule type representing a byte
-array, and provide functions that produce or operate on byte arrays.
-
-### Type Specifications
-
-In certain situations it is necessary to define expectations about the expected
-type of a value. Whereas two _types_ have a commutative _identity_ relationship,
-a type has a non-commutative _matches_ relationship with a _type specification_.
-A type specification is, in practice, just a different interpretation of a
-type such that:
-
-- Any type _matches_ any type that it is identical to.
-
-- Any type _matches_ the dynamic pseudo-type.
-
-For example, given a type specification "list of dynamic pseudo-type", the
-concrete types "list of string" and "list of map" match, but the
-type "set of string" does not.
-
-## Functions and Function Calls
-
-The evaluation context used to evaluate an expression includes a function
-table, which represents an application-defined set of named functions
-available for use in expressions.
-
-Each syntax defines whether function calls are supported and how they are
-physically represented in source code, but the semantics of function calls are
-defined here to ensure consistent results across syntaxes and to allow
-applications to provide functions that are interoperable with all syntaxes.
-
-A _function_ is defined from the following elements:
-
-- Zero or more _positional parameters_, each with a name used for documentation,
- a type specification for expected argument values, and a flag for whether
- each of null values, unknown values, and values of the dynamic pseudo-type
- are accepted.
-
-- Zero or one _variadic parameter_, with the same structure as the _positional_
- parameters, which if present collects any additional arguments provided at
- the function call site.
-
-- A _result type definition_, which specifies the value type returned for each
- valid sequence of argument values.
-
-- A _result value definition_, which specifies the value returned for each
- valid sequence of argument values.
-
-A _function call_, regardless of source syntax, consists of a sequence of
-argument values. The argument values are each mapped to a corresponding
-parameter as follows:
-
-- For each of the function's positional parameters in sequence, take the next
- argument. If there are no more arguments, the call is erroneous.
-
-- If the function has a variadic parameter, take all remaining arguments that
- were not yet assigned to a positional parameter and collect them into
- a sequence of variadic arguments that each correspond to the variadic
- parameter.
-
-- If the function has _no_ variadic parameter, it is an error if any arguments
- remain after taking one argument for each positional parameter.
-
-After mapping each argument to a parameter, semantic checking proceeds
-for each argument:
-
-- If the argument value corresponding to a parameter does not match the
- parameter's type specification, the call is erroneous.
-
-- If the argument value corresponding to a parameter is null and the parameter
- is not specified as accepting nulls, the call is erroneous.
-
-- If the argument value corresponding to a parameter is the dynamic value
- and the parameter is not specified as accepting values of the dynamic
- pseudo-type, the call is valid but its _result type_ is forced to be the
- dynamic pseudo-type.
-
-- If neither of the above conditions holds for any argument, the call is
- valid and the function's result type definition is used to determine the
- call's _result type_. A function _may_ vary its result type depending on
- the argument _values_ as well as the argument _types_; for example, a
- function that decodes a JSON value will return a different result type
- depending on the data structure described by the given JSON source code.
-
-If semantic checking succeeds without error, the call is _executed_:
-
-- For each argument, if its value is unknown and its corresponding parameter
- is not specified as accepting unknowns, the _result value_ is forced to be an
- unknown value of the result type.
-
-- If the previous condition does not apply, the function's result value
- definition is used to determine the call's _result value_.
-
-The result of a function call expression is either an error, if one of the
-erroneous conditions above applies, or the _result value_.
-
-## Type Conversions and Unification
-
-Values given in configuration may not always match the expectations of the
-operations applied to them or to the calling application. In such situations,
-automatic type conversion is attempted as a convenience to the user.
-
-Along with conversions to a _specified_ type, it is sometimes necessary to
-ensure that a selection of values are all of the _same_ type, without any
-constraint on which type that is. This is the process of _type unification_,
-which attempts to find the most general type that all of the given types can
-be converted to.
-
-Both type conversions and unification are defined in the syntax-agnostic
-model to ensure consistency of behavior between syntaxes.
-
-Type conversions are broadly characterized into two categories: _safe_ and
-_unsafe_. A conversion is "safe" if any distinct value of the source type
-has a corresponding distinct value in the target type. A conversion is
-"unsafe" if either the target type values are _not_ distinct (information
-may be lost in conversion) or if some values of the source type do not have
-any corresponding value in the target type. An unsafe conversion may result
-in an error.
-
-A given type can always be converted to itself, which is a no-op.
-
-### Conversion of Null Values
-
-All null values are safely convertible to a null value of any other type,
-regardless of other type-specific rules specified in the sections below.
-
-### Conversion to and from the Dynamic Pseudo-type
-
-Conversion _from_ the dynamic pseudo-type _to_ any other type always succeeds,
-producing an unknown value of the target type.
-
-Conversion of any value _to_ the dynamic pseudo-type is a no-op. The result
-is the input value, verbatim.
This is the only situation where the conversion
-result value is not of the given target type.
-
-### Primitive Type Conversions
-
-Bidirectional conversions are available between the string and number types,
-and between the string and boolean types.
-
-The bool value true corresponds to the string containing the characters "true",
-while the bool value false corresponds to the string containing the characters
-"false". Conversion from bool to string is safe, while the converse is
-unsafe. The strings "1" and "0" are alternative string representations
-of true and false respectively. It is an error to convert a string other than
-the four in this paragraph to type bool.
-
-A number value is converted to string by translating its integer portion
-into a sequence of decimal digits (`0` through `9`), and then if it has a
-non-zero fractional part, a period `.` followed by a sequence of decimal
-digits representing its fractional part. No exponent portion is included.
-The number is converted at its full precision. Conversion from number to
-string is safe.
-
-A string is converted to a number value by reversing the above mapping.
-No exponent portion is allowed. Conversion from string to number is unsafe.
-It is an error to convert a string that does not comply with the expected
-syntax to type number.
-
-No direct conversion is available between the bool and number types.
-
-### Collection and Structural Type Conversions
-
-Conversion from set types to list types is _safe_, as long as their
-element types are safely convertible. If the element types are _unsafely_
-convertible, then the collection conversion is also unsafe. Each set element
-becomes a corresponding list element, in an undefined order. Although no
-particular ordering is required, implementations _should_ produce list
-elements in a consistent order for a given input set, as a convenience
-to calling applications.
-
-Conversion from list types to set types is _unsafe_, as long as their element
-types are convertible. Each distinct list item becomes a distinct set item.
-If two list items are equal, one of the two is lost in the conversion.
-
-Conversion from tuple types to list types is permitted if all of the
-tuple element types are convertible to the target list element type.
-The safety of the conversion depends on the safety of each of the element
-conversions. Each element in turn is converted to the list element type,
-producing a list of identical length.
-
-Conversion from tuple types to set types is permitted, behaving as if the
-tuple type was first converted to a list of the same element type and then
-that list converted to the target set type.
-
-Conversion from object types to map types is permitted if all of the object
-attribute types are convertible to the target map element type. The safety
-of the conversion depends on the safety of each of the attribute conversions.
-Each attribute in turn is converted to the map element type, and map element
-keys are set to the name of each corresponding object attribute.
-
-Conversion from list and set types to tuple types is permitted, following
-the opposite steps as the converse conversions. Such conversions are _unsafe_.
-It is an error to convert a list or set to a tuple type whose number of
-elements does not match the list or set length.
-
-Conversion from map types to object types is permitted if each map key
-corresponds to an attribute in the target object type.
It is an error to
-convert from a map value whose set of keys does not exactly match the target
-type's attributes. The conversion takes the opposite steps of the converse
-conversion.
-
-Conversion from one object type to another is permitted as long as the
-common attribute names have convertible types. Any attribute present in the
-target type but not in the source type is populated with a null value of
-the appropriate type.
-
-Conversion from one tuple type to another is permitted as long as the
-tuples have the same length and the elements have convertible types.
-
-### Type Unification
-
-Type unification is an operation that takes a list of types and attempts
-to find a single type to which they can all be converted. Since some
-type pairs have bidirectional conversions, preference is given to _safe_
-conversions. In technical terms, all possible types are arranged into
-a lattice, from which a most general supertype is selected where possible.
-
-The type resulting from type unification may be one of the input types, or
-it may be an entirely new type produced by combination of two or more
-input types.
-
-The following rules do not guarantee a valid result. In addition to these
-rules, unification fails if any of the given types are not convertible
-(per the above rules) to the selected result type.
-
-The following unification rules apply transitively. That is, if a rule is
-defined from A to B, and one from B to C, then A can unify to C.
-
-Number and bool types both unify with string by preferring string.
-
-Two collection types of the same kind unify according to the unification
-of their element types.
-
-List and set types unify by preferring the list type.
-
-Map and object types unify by preferring the object type.
-
-List, set and tuple types unify by preferring the tuple type.
-
-The dynamic pseudo-type unifies with any other type by selecting that other
-type. The dynamic pseudo-type is the result type only if _all_ input types
-are the dynamic pseudo-type.
-
-Two object types unify by constructing a new type whose attributes are
-the union of those of the two input types. Any common attributes themselves
-have their types unified.
-
-Two tuple types of the same length unify by constructing a new type of the
-same length whose elements are the unification of the corresponding elements
-in the two input types.
-
-## Static Analysis
-
-In most applications, full expression evaluation is sufficient for understanding
-the provided configuration. However, some specialized applications require more
-direct access to the physical structures in the expressions, which can for
-example allow the construction of new language constructs in terms of the
-existing syntax elements.
-
-Since static analysis analyses the physical structure of configuration, the
-details will vary depending on syntax. Each syntax must decide which of its
-physical structures corresponds to the following analyses, producing error
-diagnostics if they are applied to inappropriate expressions.
-
-The following are the required static analysis functions:
-
-- **Static List**: Require list/tuple construction syntax to be used and
- return a list of expressions for each of the elements given.
-
-- **Static Map**: Require map/object construction syntax to be used and
- return a list of key/value pairs -- both expressions -- for each of
- the elements given.
The usual constraint that a map key must be a string - must not apply to this analysis, thus allowing applications to interpret - arbitrary keys as they see fit. - -- **Static Call**: Require function call syntax to be used and return an - object describing the called function name and a list of expressions - representing each of the call arguments. - -- **Static Traversal**: Require a reference to a symbol in the variable - scope and return a description of the path from the root scope to the - accessed attribute or index. - -The intent of a calling application using these features is to require a more -rigid interpretation of the configuration than in expression evaluation. -Syntax implementations should make use of the extra contextual information -provided in order to make an intuitive mapping onto the constructs of the -underlying syntax, possibly interpreting the expression slightly differently -than it would be interpreted in normal evaluation. - -Each syntax must define which of its expression elements each of the analyses -above applies to, and how those analyses behave given those expression elements. - -## Implementation Considerations - -Implementations of this specification are free to adopt any strategy that -produces behavior consistent with the specification. This non-normative -section describes some possible implementation strategies that are consistent -with the goals of this specification. - -### Language-agnosticism - -The language-agnosticism of this specification assumes that certain behaviors -are implemented separately for each syntax: - -- Matching of a body schema with the physical elements of a body in the - source language, to determine correspondence between physical constructs - and schema elements. - -- Implementing the _dynamic attributes_ body processing mode by either - interpreting all physical constructs as attributes or producing an error - if non-attribute constructs are present. - -- Providing an evaluation function for all possible expressions that produces - a value given an evaluation context. - -- Providing the static analysis functionality described above in a manner that - makes sense within the convention of the syntax. - -The suggested implementation strategy is to use an implementation language's -closest concept to an _abstract type_, _virtual type_ or _interface type_ -to represent both Body and Expression. Each language-specific implementation -can then provide an implementation of each of these types wrapping AST nodes -or other physical constructs from the language parser. diff --git a/vendor/github.com/hashicorp/hcl2/hcl/static_expr.go b/vendor/github.com/hashicorp/hcl2/hcl/static_expr.go deleted file mode 100644 index 98ada87b6..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/static_expr.go +++ /dev/null @@ -1,40 +0,0 @@ -package hcl - -import ( - "github.com/zclconf/go-cty/cty" -) - -type staticExpr struct { - val cty.Value - rng Range -} - -// StaticExpr returns an Expression that always evaluates to the given value. -// -// This is useful to substitute default values for expressions that are -// not explicitly given in configuration and thus would otherwise have no -// Expression to return. -// -// Since expressions are expected to have a source range, the caller must -// provide one. Ideally this should be a real source range, but it can -// be a synthetic one (with an empty-string filename) if no suitable range -// is available. 
-func StaticExpr(val cty.Value, rng Range) Expression {
- return staticExpr{val, rng}
-}
-
-func (e staticExpr) Value(ctx *EvalContext) (cty.Value, Diagnostics) {
- return e.val, nil
-}
-
-func (e staticExpr) Variables() []Traversal {
- return nil
-}
-
-func (e staticExpr) Range() Range {
- return e.rng
-}
-
-func (e staticExpr) StartRange() Range {
- return e.rng
-}
diff --git a/vendor/github.com/hashicorp/hcl2/hcl/structure.go b/vendor/github.com/hashicorp/hcl2/hcl/structure.go
deleted file mode 100644
index aab09457d..000000000
--- a/vendor/github.com/hashicorp/hcl2/hcl/structure.go
+++ /dev/null
@@ -1,151 +0,0 @@
-package hcl
-
-import (
- "github.com/zclconf/go-cty/cty"
-)
-
-// File is the top-level node that results from parsing an HCL file.
-type File struct {
- Body Body
- Bytes []byte
-
- // Nav is used to integrate with the "hcled" editor integration package,
- // and with diagnostic information formatters. It is not for direct use
- // by a calling application.
- Nav interface{}
-}
-
-// Block represents a nested block within a Body.
-type Block struct {
- Type string
- Labels []string
- Body Body
-
- DefRange Range // Range that can be considered the "definition" for seeking in an editor
- TypeRange Range // Range for the block type declaration specifically.
- LabelRanges []Range // Ranges for the label values specifically.
-}
-
-// Blocks is a sequence of Block.
-type Blocks []*Block
-
-// Attributes is a set of attributes keyed by their names.
-type Attributes map[string]*Attribute
-
-// Body is a container for attributes and blocks. It serves as the primary
-// unit of hierarchical structure within configuration.
-//
-// The content of a body cannot be meaningfully interpreted without a schema,
-// so Body represents the raw body content and has methods that allow the
-// content to be extracted in terms of a given schema.
-type Body interface {
- // Content verifies that the entire body content conforms to the given
- // schema and then returns it, and/or returns diagnostics. The returned
- // body content is valid if non-nil, regardless of whether Diagnostics
- // are provided, but diagnostics should still be eventually shown to
- // the user.
- Content(schema *BodySchema) (*BodyContent, Diagnostics)
-
- // PartialContent is like Content except that it permits the configuration
- // to contain additional blocks or attributes not specified in the
- // schema. If any are present, the returned Body is non-nil and contains
- // the remaining items from the body that were not selected by the schema.
- PartialContent(schema *BodySchema) (*BodyContent, Body, Diagnostics)
-
- // JustAttributes attempts to interpret all of the contents of the body
- // as attributes, allowing for the contents to be accessed without a priori
- // knowledge of the structure.
- //
- // The behavior of this method depends on the body's source language.
- // Some languages, like JSON, can't distinguish between attributes and
- // blocks without schema hints, but for languages that _can_, error
- // diagnostics will be generated if any blocks are present in the body.
- //
- // Diagnostics may be produced for other reasons too, such as duplicate
- // declarations of the same attribute.
- JustAttributes() (Attributes, Diagnostics)
-
- // MissingItemRange returns a range that represents where a missing item
- // might hypothetically be inserted. This is used when producing
- // diagnostics about missing required attributes or blocks.
Not all bodies - // will have an obvious single insertion point, so the result here may - // be rather arbitrary. - MissingItemRange() Range -} - -// BodyContent is the result of applying a BodySchema to a Body. -type BodyContent struct { - Attributes Attributes - Blocks Blocks - - MissingItemRange Range -} - -// Attribute represents an attribute from within a body. -type Attribute struct { - Name string - Expr Expression - - Range Range - NameRange Range -} - -// Expression is a literal value or an expression provided in the -// configuration, which can be evaluated within a scope to produce a value. -type Expression interface { - // Value returns the value resulting from evaluating the expression - // in the given evaluation context. - // - // The context may be nil, in which case the expression may contain - // only constants and diagnostics will be produced for any non-constant - // sub-expressions. (The exact definition of this depends on the source - // language.) - // - // The context may instead be set but have either its Variables or - // Functions maps set to nil, in which case only use of these features - // will return diagnostics. - // - // Different diagnostics are provided depending on whether the given - // context maps are nil or empty. In the former case, the message - // tells the user that variables/functions are not permitted at all, - // while in the latter case usage will produce a "not found" error for - // the specific symbol in question. - Value(ctx *EvalContext) (cty.Value, Diagnostics) - - // Variables returns a list of variables referenced in the receiving - // expression. These are expressed as absolute Traversals, so may include - // additional information about how the variable is used, such as - // attribute lookups, which the calling application can potentially use - // to only selectively populate the scope. - Variables() []Traversal - - Range() Range - StartRange() Range -} - -// OfType filters the receiving block sequence by block type name, -// returning a new block sequence including only the blocks of the -// requested type. -func (els Blocks) OfType(typeName string) Blocks { - ret := make(Blocks, 0) - for _, el := range els { - if el.Type == typeName { - ret = append(ret, el) - } - } - return ret -} - -// ByType transforms the receiving block sequence into a map from type -// name to block sequences of only that type. -func (els Blocks) ByType() map[string]Blocks { - ret := make(map[string]Blocks) - for _, el := range els { - ty := el.Type - if ret[ty] == nil { - ret[ty] = make(Blocks, 0, 1) - } - ret[ty] = append(ret[ty], el) - } - return ret -} diff --git a/vendor/github.com/hashicorp/hcl2/hcl/structure_at_pos.go b/vendor/github.com/hashicorp/hcl2/hcl/structure_at_pos.go deleted file mode 100644 index 8521814e5..000000000 --- a/vendor/github.com/hashicorp/hcl2/hcl/structure_at_pos.go +++ /dev/null @@ -1,117 +0,0 @@ -package hcl - -// ----------------------------------------------------------------------------- -// The methods in this file all have the general pattern of making a best-effort -// to find one or more constructs that contain a given source position. -// -// These all operate by delegating to an optional method of the same name and -// signature on the file's root body, allowing each syntax to potentially -// provide its own implementations of these. For syntaxes that don't implement -// them, the result is always nil. 
-// ----------------------------------------------------------------------------- - -// BlocksAtPos attempts to find all of the blocks that contain the given -// position, ordered so that the outermost block is first and the innermost -// block is last. This is a best-effort method that may not be able to produce -// a complete result for all positions or for all HCL syntaxes. -// -// If the returned slice is non-empty, the first element is guaranteed to -// represent the same block as would be the result of OutermostBlockAtPos and -// the last element the result of InnermostBlockAtPos. However, the -// implementation may return two different objects describing the same block, -// so comparison by pointer identity is not possible. -// -// The result is nil if no blocks at all contain the given position. -func (f *File) BlocksAtPos(pos Pos) []*Block { - // The root body of the file must implement this interface in order - // to support BlocksAtPos. - type Interface interface { - BlocksAtPos(pos Pos) []*Block - } - - impl, ok := f.Body.(Interface) - if !ok { - return nil - } - return impl.BlocksAtPos(pos) -} - -// OutermostBlockAtPos attempts to find a top-level block in the receiving file -// that contains the given position. This is a best-effort method that may not -// be able to produce a result for all positions or for all HCL syntaxes. -// -// The result is nil if no single block could be selected for any reason. -func (f *File) OutermostBlockAtPos(pos Pos) *Block { - // The root body of the file must implement this interface in order - // to support OutermostBlockAtPos. - type Interface interface { - OutermostBlockAtPos(pos Pos) *Block - } - - impl, ok := f.Body.(Interface) - if !ok { - return nil - } - return impl.OutermostBlockAtPos(pos) -} - -// InnermostBlockAtPos attempts to find the most deeply-nested block in the -// receiving file that contains the given position. This is a best-effort -// method that may not be able to produce a result for all positions or for -// all HCL syntaxes. -// -// The result is nil if no single block could be selected for any reason. -func (f *File) InnermostBlockAtPos(pos Pos) *Block { - // The root body of the file must implement this interface in order - // to support InnermostBlockAtPos. - type Interface interface { - InnermostBlockAtPos(pos Pos) *Block - } - - impl, ok := f.Body.(Interface) - if !ok { - return nil - } - return impl.InnermostBlockAtPos(pos) -} - -// OutermostExprAtPos attempts to find an expression in the receiving file -// that contains the given position. This is a best-effort method that may not -// be able to produce a result for all positions or for all HCL syntaxes. -// -// Since expressions are often nested inside one another, this method returns -// the outermost "root" expression that is not contained by any other. -// -// The result is nil if no single expression could be selected for any reason. -func (f *File) OutermostExprAtPos(pos Pos) Expression { - // The root body of the file must implement this interface in order - // to support OutermostExprAtPos. - type Interface interface { - OutermostExprAtPos(pos Pos) Expression - } - - impl, ok := f.Body.(Interface) - if !ok { - return nil - } - return impl.OutermostExprAtPos(pos) -} - -// AttributeAtPos attempts to find an attribute definition in the receiving -// file that contains the given position. This is a best-effort method that may -// not be able to produce a result for all positions or for all HCL syntaxes. 
-//
-// The result is nil if no single attribute could be selected for any reason.
-func (f *File) AttributeAtPos(pos Pos) *Attribute {
- // The root body of the file must implement this interface in order
- // to support AttributeAtPos.
- type Interface interface {
- AttributeAtPos(pos Pos) *Attribute
- }
-
- impl, ok := f.Body.(Interface)
- if !ok {
- return nil
- }
- return impl.AttributeAtPos(pos)
-}
diff --git a/vendor/github.com/hashicorp/hcl2/hcl/traversal.go b/vendor/github.com/hashicorp/hcl2/hcl/traversal.go
deleted file mode 100644
index d71019700..000000000
--- a/vendor/github.com/hashicorp/hcl2/hcl/traversal.go
+++ /dev/null
@@ -1,293 +0,0 @@
-package hcl
-
-import (
- "fmt"
-
- "github.com/zclconf/go-cty/cty"
-)
-
-// A Traversal is a description of traversing through a value via a
-// series of operations such as attribute lookup, index lookup, etc.
-//
-// It is used to look up values in scopes, for example.
-//
-// The traversal operations are implementations of interface Traverser.
-// This is a closed set of implementations, so the interface cannot be
-// implemented from outside this package.
-//
-// A traversal can be absolute (its first value is a symbol name) or relative
-// (starts from an existing value).
-type Traversal []Traverser
-
-// TraversalJoin appends a relative traversal to an absolute traversal to
-// produce a new absolute traversal.
-func TraversalJoin(abs Traversal, rel Traversal) Traversal {
- if abs.IsRelative() {
- panic("first argument to TraversalJoin must be absolute")
- }
- if !rel.IsRelative() {
- panic("second argument to TraversalJoin must be relative")
- }
-
- ret := make(Traversal, len(abs)+len(rel))
- copy(ret, abs)
- copy(ret[len(abs):], rel)
- return ret
-}
-
-// TraverseRel applies the receiving traversal to the given value, returning
-// the resulting value. This is supported only for relative traversals,
-// and will panic if applied to an absolute traversal.
-func (t Traversal) TraverseRel(val cty.Value) (cty.Value, Diagnostics) {
- if !t.IsRelative() {
- panic("can't use TraverseRel on an absolute traversal")
- }
-
- current := val
- var diags Diagnostics
- for _, tr := range t {
- var newDiags Diagnostics
- current, newDiags = tr.TraversalStep(current)
- diags = append(diags, newDiags...)
- if newDiags.HasErrors() {
- return cty.DynamicVal, diags
- }
- }
- return current, diags
-}
-
-// TraverseAbs applies the receiving traversal to the given eval context,
-// returning the resulting value. This is supported only for absolute
-// traversals, and will panic if applied to a relative traversal.
-func (t Traversal) TraverseAbs(ctx *EvalContext) (cty.Value, Diagnostics) {
- if t.IsRelative() {
- panic("can't use TraverseAbs on a relative traversal")
- }
-
- split := t.SimpleSplit()
- root := split.Abs[0].(TraverseRoot)
- name := root.Name
-
- thisCtx := ctx
- hasNonNil := false
- for thisCtx != nil {
- if thisCtx.Variables == nil {
- thisCtx = thisCtx.parent
- continue
- }
- hasNonNil = true
- val, exists := thisCtx.Variables[name]
- if exists {
- return split.Rel.TraverseRel(val)
- }
- thisCtx = thisCtx.parent
- }
-
- if !hasNonNil {
- return cty.DynamicVal, Diagnostics{
- {
- Severity: DiagError,
- Summary: "Variables not allowed",
- Detail: "Variables may not be used here.",
- Subject: &root.SrcRange,
- },
- }
- }
-
- suggestions := make([]string, 0, len(ctx.Variables))
- thisCtx = ctx
- for thisCtx != nil {
- for k := range thisCtx.Variables {
- suggestions = append(suggestions, k)
- }
- thisCtx = thisCtx.parent
- }
- suggestion := nameSuggestion(name, suggestions)
- if suggestion != "" {
- suggestion = fmt.Sprintf(" Did you mean %q?", suggestion)
- }
-
- return cty.DynamicVal, Diagnostics{
- {
- Severity: DiagError,
- Summary: "Unknown variable",
- Detail: fmt.Sprintf("There is no variable named %q.%s", name, suggestion),
- Subject: &root.SrcRange,
- },
- }
-}
-
-// IsRelative returns true if the receiver is a relative traversal, or false
-// otherwise.
-func (t Traversal) IsRelative() bool {
- if len(t) == 0 {
- return true
- }
- if _, firstIsRoot := t[0].(TraverseRoot); firstIsRoot {
- return false
- }
- return true
-}
-
-// SimpleSplit returns a TraversalSplit where the name lookup is the absolute
-// part and the remainder is the relative part. Supported only for
-// absolute traversals, and will panic if applied to a relative traversal.
-//
-// This can be used by applications that have a relatively-simple variable
-// namespace where only the top-level is directly populated in the scope, with
-// everything else handled by relative lookups from those initial values.
-func (t Traversal) SimpleSplit() TraversalSplit {
- if t.IsRelative() {
- panic("can't use SimpleSplit on a relative traversal")
- }
- return TraversalSplit{
- Abs: t[0:1],
- Rel: t[1:],
- }
-}
-
-// RootName returns the root name for an absolute traversal. Will panic if
-// called on a relative traversal.
-func (t Traversal) RootName() string {
- if t.IsRelative() {
- panic("can't use RootName on a relative traversal")
- }
- return t[0].(TraverseRoot).Name
-}
-
-// SourceRange returns the source range for the traversal.
-func (t Traversal) SourceRange() Range {
- if len(t) == 0 {
- // Nothing useful to return here, but we'll return something
- // that's correctly-typed at least.
- return Range{}
- }
-
- return RangeBetween(t[0].SourceRange(), t[len(t)-1].SourceRange())
-}
-
-// TraversalSplit represents a pair of traversals, the first of which is
-// an absolute traversal and the second of which is relative to the first.
-//
-// This is used by calling applications that only populate prefixes of the
-// traversals in the scope, with Abs representing the part coming from the
-// scope and Rel representing the remaining steps once that part is
-// retrieved.
-type TraversalSplit struct {
- Abs Traversal
- Rel Traversal
-}
-
-// TraverseAbs traverses from a scope to the value resulting from the
-// absolute traversal.
-func (t TraversalSplit) TraverseAbs(ctx *EvalContext) (cty.Value, Diagnostics) {
- return t.Abs.TraverseAbs(ctx)
-}
-
-// TraverseRel traverses from a given value, assumed to be the result of
-// TraverseAbs on some scope, to a final result for the entire split traversal.
-func (t TraversalSplit) TraverseRel(val cty.Value) (cty.Value, Diagnostics) {
- return t.Rel.TraverseRel(val)
-}
-
-// Traverse is a convenience function to apply TraverseAbs followed by
-// TraverseRel.
-func (t TraversalSplit) Traverse(ctx *EvalContext) (cty.Value, Diagnostics) {
- v1, diags := t.TraverseAbs(ctx)
- if diags.HasErrors() {
- return cty.DynamicVal, diags
- }
- v2, newDiags := t.TraverseRel(v1)
- diags = append(diags, newDiags...)
- return v2, diags
-}
-
-// Join concatenates together the Abs and Rel parts to produce a single
-// absolute traversal.
-func (t TraversalSplit) Join() Traversal {
- return TraversalJoin(t.Abs, t.Rel)
-}
-
-// RootName returns the root name for the absolute part of the split.
-func (t TraversalSplit) RootName() string {
- return t.Abs.RootName()
-}
-
-// A Traverser is a step within a Traversal.
-type Traverser interface {
- TraversalStep(cty.Value) (cty.Value, Diagnostics)
- SourceRange() Range
- isTraverserSigil() isTraverser
-}
-
-// Embed this in a struct to declare it as a Traverser
-type isTraverser struct {
-}
-
-func (tr isTraverser) isTraverserSigil() isTraverser {
- return isTraverser{}
-}
-
-// TraverseRoot looks up a root name in a scope. It is used as the first step
-// of an absolute Traversal, and cannot itself be traversed directly.
-type TraverseRoot struct {
- isTraverser
- Name string
- SrcRange Range
-}
-
-// TraversalStep on a TraverseRoot immediately panics, because absolute
-// traversals cannot be directly traversed.
-func (tn TraverseRoot) TraversalStep(cty.Value) (cty.Value, Diagnostics) {
- panic("Cannot traverse an absolute traversal")
-}
-
-func (tn TraverseRoot) SourceRange() Range {
- return tn.SrcRange
-}
-
-// TraverseAttr looks up an attribute in its initial value.
-type TraverseAttr struct {
- isTraverser
- Name string
- SrcRange Range
-}
-
-func (tn TraverseAttr) TraversalStep(val cty.Value) (cty.Value, Diagnostics) {
- return GetAttr(val, tn.Name, &tn.SrcRange)
-}
-
-func (tn TraverseAttr) SourceRange() Range {
- return tn.SrcRange
-}
-
-// TraverseIndex applies the index operation to its initial value.
-type TraverseIndex struct {
- isTraverser
- Key cty.Value
- SrcRange Range
-}
-
-func (tn TraverseIndex) TraversalStep(val cty.Value) (cty.Value, Diagnostics) {
- return Index(val, tn.Key, &tn.SrcRange)
-}
-
-func (tn TraverseIndex) SourceRange() Range {
- return tn.SrcRange
-}
-
-// TraverseSplat applies the splat operation to its initial value.
-type TraverseSplat struct {
- isTraverser
- Each Traversal
- SrcRange Range
-}
-
-func (tn TraverseSplat) TraversalStep(val cty.Value) (cty.Value, Diagnostics) {
- panic("TraverseSplat not yet implemented")
-}
-
-func (tn TraverseSplat) SourceRange() Range {
- return tn.SrcRange
-}
diff --git a/vendor/github.com/hashicorp/hcl2/hcl/traversal_for_expr.go b/vendor/github.com/hashicorp/hcl2/hcl/traversal_for_expr.go
deleted file mode 100644
index f69d5fe9b..000000000
--- a/vendor/github.com/hashicorp/hcl2/hcl/traversal_for_expr.go
+++ /dev/null
@@ -1,124 +0,0 @@
-package hcl
-
-// AbsTraversalForExpr attempts to interpret the given expression as
-// an absolute traversal, or returns error diagnostic(s) if that is
-// not possible for the given expression.
-//
-// A particular Expression implementation can support this function by
-// offering a method called AsTraversal that takes no arguments and
-// returns either a valid absolute traversal or nil to indicate that
-// no traversal is possible. Alternatively, an implementation can support
-// UnwrapExpression to delegate handling of this function to a wrapped
-// Expression object.
-//
-// In most cases the calling application is interested in the value
-// that results from an expression, but in rarer cases the application
-// needs to see the name of the variable and subsequent
-// attributes/indexes itself, for example to allow users to give references
-// to the variables themselves rather than to their values. An implementer
-// of this function should at least support attribute and index steps.
-func AbsTraversalForExpr(expr Expression) (Traversal, Diagnostics) {
- type asTraversal interface {
- AsTraversal() Traversal
- }
-
- physExpr := UnwrapExpressionUntil(expr, func(expr Expression) bool {
- _, supported := expr.(asTraversal)
- return supported
- })
-
- if asT, supported := physExpr.(asTraversal); supported {
- if traversal := asT.AsTraversal(); traversal != nil {
- return traversal, nil
- }
- }
- return nil, Diagnostics{
- &Diagnostic{
- Severity: DiagError,
- Summary: "Invalid expression",
- Detail: "A single static variable reference is required: only attribute access and indexing with constant keys. No calculations, function calls, template expressions, etc are allowed here.",
- Subject: expr.Range().Ptr(),
- },
- }
-}
-
-// RelTraversalForExpr is similar to AbsTraversalForExpr but it returns
-// a relative traversal instead. Due to the nature of HCL expressions, the
-// first element of the returned traversal is always a TraverseAttr, and
-// then it will be followed by zero or more other steps.
-//
-// Any expression accepted by AbsTraversalForExpr is also accepted by
-// RelTraversalForExpr.
-func RelTraversalForExpr(expr Expression) (Traversal, Diagnostics) {
- traversal, diags := AbsTraversalForExpr(expr)
- if len(traversal) > 0 {
- ret := make(Traversal, len(traversal))
- copy(ret, traversal)
- root := traversal[0].(TraverseRoot)
- ret[0] = TraverseAttr{
- Name: root.Name,
- SrcRange: root.SrcRange,
- }
- return ret, diags
- }
- return traversal, diags
-}
-
-// ExprAsKeyword attempts to interpret the given expression as a static keyword,
-// returning the keyword string if possible, and the empty string if not.
-//
-// A static keyword, for the sake of this function, is a single identifier.
-// For example, the following attribute has an expression that would produce
-// the keyword "foo":
-//
-// example = foo
-//
-// This function is a variant of AbsTraversalForExpr, which uses the same
-// interface on the given expression. This helper constrains the result
-// further by requiring only a single root identifier.
-//
-// This function is intended to be used with the following idiom, to recognize
-// situations where one of a fixed set of keywords is required and arbitrary
-// expressions are not allowed:
-//
-// switch hcl.ExprAsKeyword(expr) {
-// case "allow":
-// // (take suitable action for keyword "allow")
-// case "deny":
-// // (take suitable action for keyword "deny")
-// default:
-// diags = append(diags, &hcl.Diagnostic{
-// // ... "invalid keyword" diagnostic message ...
-// })
-// }
-//
-// The above approach will generate the same message for both the use of an
-// unrecognized keyword and for not using a keyword at all, which is usually
-// reasonable if the message specifies that the given value must be a keyword
-// from that fixed list.
-//
-// Note that in the native syntax the keywords "true", "false", and "null" are
-// recognized as literal values during parsing and so these reserved words
-// cannot be accepted as keywords by this function.
-//
-// Since interpreting an expression as a keyword bypasses usual expression
-// evaluation, it should be used sparingly for situations where e.g. one of
-// a fixed set of keywords is used in a structural way in a special attribute
-// to affect the further processing of a block.
-func ExprAsKeyword(expr Expression) string {
- type asTraversal interface {
- AsTraversal() Traversal
- }
-
- physExpr := UnwrapExpressionUntil(expr, func(expr Expression) bool {
- _, supported := expr.(asTraversal)
- return supported
- })
-
- if asT, supported := physExpr.(asTraversal); supported {
- if traversal := asT.AsTraversal(); len(traversal) == 1 {
- return traversal.RootName()
- }
- }
- return ""
-}
diff --git a/vendor/github.com/hashicorp/hcl2/hclparse/parser.go b/vendor/github.com/hashicorp/hcl2/hclparse/parser.go
deleted file mode 100644
index 6d47f1268..000000000
--- a/vendor/github.com/hashicorp/hcl2/hclparse/parser.go
+++ /dev/null
@@ -1,123 +0,0 @@
-package hclparse
-
-import (
- "fmt"
- "io/ioutil"
-
- "github.com/hashicorp/hcl2/hcl"
- "github.com/hashicorp/hcl2/hcl/hclsyntax"
- "github.com/hashicorp/hcl2/hcl/json"
-)
-
-// NOTE: This is the public interface for parsing. The actual parsers are
-// in other packages alongside this one, with this package just wrapping them
-// to provide a unified interface for the caller across all supported formats.
-
-// Parser is the main interface for parsing configuration files. As well as
-// parsing files, a parser also retains a registry of all of the files it
-// has parsed so that multiple attempts to parse the same file will return
-// the same object and so the collected files can be used when printing
-// diagnostics.
-//
-// Any diagnostics for parsing a file are only returned once on the first
-// call to parse that file. Callers are expected to collect up diagnostics
-// and present them together, so returning diagnostics for the same file
-// multiple times would create a confusing result.
-type Parser struct {
- files map[string]*hcl.File
-}
-
-// NewParser creates a new parser, ready to parse configuration files.
-func NewParser() *Parser {
- return &Parser{
- files: map[string]*hcl.File{},
- }
-}
-
-// ParseHCL parses the given buffer (which is assumed to have been loaded from
-// the given filename) as a native-syntax configuration file and returns the
-// hcl.File object representing it.
-func (p *Parser) ParseHCL(src []byte, filename string) (*hcl.File, hcl.Diagnostics) {
- if existing := p.files[filename]; existing != nil {
- return existing, nil
- }
-
- file, diags := hclsyntax.ParseConfig(src, filename, hcl.Pos{Byte: 0, Line: 1, Column: 1})
- p.files[filename] = file
- return file, diags
-}
-
-// ParseHCLFile reads the given filename and parses it as a native-syntax HCL
-// configuration file. An error diagnostic is returned if the given file
-// cannot be read.
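//
// A minimal usage sketch (the filename "main.tf" is hypothetical):
//
//     p := hclparse.NewParser()
//     f, diags := p.ParseHCLFile("main.tf")
//     if diags.HasErrors() {
//         // Render the diagnostics, using p.Files() for source context.
//     }
//     _ = f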
-func (p *Parser) ParseHCLFile(filename string) (*hcl.File, hcl.Diagnostics) { - if existing := p.files[filename]; existing != nil { - return existing, nil - } - - src, err := ioutil.ReadFile(filename) - if err != nil { - return nil, hcl.Diagnostics{ - { - Severity: hcl.DiagError, - Summary: "Failed to read file", - Detail: fmt.Sprintf("The configuration file %q could not be read.", filename), - }, - } - } - - return p.ParseHCL(src, filename) -} - -// ParseJSON parses the given JSON buffer (which is assumed to have been loaded -// from the given filename) and returns the hcl.File object representing it. -func (p *Parser) ParseJSON(src []byte, filename string) (*hcl.File, hcl.Diagnostics) { - if existing := p.files[filename]; existing != nil { - return existing, nil - } - - file, diags := json.Parse(src, filename) - p.files[filename] = file - return file, diags -} - -// ParseJSONFile reads the given filename and parses it as JSON, similarly to -// ParseJSON. An error diagnostic is returned if the given file cannot be read. -func (p *Parser) ParseJSONFile(filename string) (*hcl.File, hcl.Diagnostics) { - if existing := p.files[filename]; existing != nil { - return existing, nil - } - - file, diags := json.ParseFile(filename) - p.files[filename] = file - return file, diags -} - -// AddFile allows a caller to record in a parser a file that was parsed some -// other way, thus allowing it to be included in the registry of sources. -func (p *Parser) AddFile(filename string, file *hcl.File) { - p.files[filename] = file -} - -// Sources returns a map from filenames to the raw source code that was -// read from them. This is intended to be used, for example, to print -// diagnostics with contextual information. -// -// The arrays underlying the returned slices should not be modified. -func (p *Parser) Sources() map[string][]byte { - ret := make(map[string][]byte) - for fn, f := range p.files { - ret[fn] = f.Bytes - } - return ret -} - -// Files returns a map from filenames to the File objects produced from them. -// This is intended to be used, for example, to print diagnostics with -// contextual information. -// -// The returned map and all of the objects it refers to directly or indirectly -// must not be modified. -func (p *Parser) Files() map[string]*hcl.File { - return p.files -} diff --git a/vendor/github.com/hashicorp/hcl2/hclwrite/ast.go b/vendor/github.com/hashicorp/hcl2/hclwrite/ast.go deleted file mode 100644 index 090416528..000000000 --- a/vendor/github.com/hashicorp/hcl2/hclwrite/ast.go +++ /dev/null @@ -1,121 +0,0 @@ -package hclwrite - -import ( - "bytes" - "io" -) - -type File struct { - inTree - - srcBytes []byte - body *node -} - -// NewEmptyFile constructs a new file with no content, ready to be mutated -// by other calls that append to its body. -func NewEmptyFile() *File { - f := &File{ - inTree: newInTree(), - } - body := newBody() - f.body = f.children.Append(body) - return f -} - -// Body returns the root body of the file, which contains the top-level -// attributes and blocks. -func (f *File) Body() *Body { - return f.body.content.(*Body) -} - -// WriteTo writes the tokens underlying the receiving file to the given writer. -// -// The tokens first have a simple formatting pass applied that adjusts only -// the spaces between them. 
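//
// A short generation sketch (the attribute name and value are illustrative):
//
//     f := hclwrite.NewEmptyFile()
//     f.Body().SetAttributeValue("region", cty.StringVal("us-west-2"))
//     f.WriteTo(os.Stdout) // prints: region = "us-west-2"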
-func (f *File) WriteTo(wr io.Writer) (int64, error) {
- tokens := f.inTree.children.BuildTokens(nil)
- format(tokens)
- return tokens.WriteTo(wr)
-}
-
-// Bytes returns a buffer containing the source code resulting from the
-// tokens underlying the receiving file. If any updates have been made via
-// the AST API, these will be reflected in the result.
-func (f *File) Bytes() []byte {
- buf := &bytes.Buffer{}
- f.WriteTo(buf)
- return buf.Bytes()
-}
-
-type comments struct {
- leafNode
-
- parent *node
- tokens Tokens
-}
-
-func newComments(tokens Tokens) *comments {
- return &comments{
- tokens: tokens,
- }
-}
-
-func (c *comments) BuildTokens(to Tokens) Tokens {
- return c.tokens.BuildTokens(to)
-}
-
-type identifier struct {
- leafNode
-
- parent *node
- token *Token
-}
-
-func newIdentifier(token *Token) *identifier {
- return &identifier{
- token: token,
- }
-}
-
-func (i *identifier) BuildTokens(to Tokens) Tokens {
- return append(to, i.token)
-}
-
-func (i *identifier) hasName(name string) bool {
- return name == string(i.token.Bytes)
-}
-
-type number struct {
- leafNode
-
- parent *node
- token *Token
-}
-
-func newNumber(token *Token) *number {
- return &number{
- token: token,
- }
-}
-
-func (n *number) BuildTokens(to Tokens) Tokens {
- return append(to, n.token)
-}
-
-type quoted struct {
- leafNode
-
- parent *node
- tokens Tokens
-}
-
-func newQuoted(tokens Tokens) *quoted {
- return &quoted{
- tokens: tokens,
- }
-}
-
-func (q *quoted) BuildTokens(to Tokens) Tokens {
- return q.tokens.BuildTokens(to)
-}
diff --git a/vendor/github.com/hashicorp/hcl2/hclwrite/ast_attribute.go b/vendor/github.com/hashicorp/hcl2/hclwrite/ast_attribute.go
deleted file mode 100644
index 975fa7428..000000000
--- a/vendor/github.com/hashicorp/hcl2/hclwrite/ast_attribute.go
+++ /dev/null
@@ -1,48 +0,0 @@
-package hclwrite
-
-import (
- "github.com/hashicorp/hcl2/hcl/hclsyntax"
-)
-
-type Attribute struct {
- inTree
-
- leadComments *node
- name *node
- expr *node
- lineComments *node
-}
-
-func newAttribute() *Attribute {
- return &Attribute{
- inTree: newInTree(),
- }
-}
-
-func (a *Attribute) init(name string, expr *Expression) {
- expr.assertUnattached()
-
- nameTok := newIdentToken(name)
- nameObj := newIdentifier(nameTok)
- a.leadComments = a.children.Append(newComments(nil))
- a.name = a.children.Append(nameObj)
- a.children.AppendUnstructuredTokens(Tokens{
- {
- Type: hclsyntax.TokenEqual,
- Bytes: []byte{'='},
- },
- })
- a.expr = a.children.Append(expr)
- a.expr.list = a.children
- a.lineComments = a.children.Append(newComments(nil))
- a.children.AppendUnstructuredTokens(Tokens{
- {
- Type: hclsyntax.TokenNewline,
- Bytes: []byte{'\n'},
- },
- })
-}
-
-func (a *Attribute) Expr() *Expression {
- return a.expr.content.(*Expression)
-}
diff --git a/vendor/github.com/hashicorp/hcl2/hclwrite/ast_block.go b/vendor/github.com/hashicorp/hcl2/hclwrite/ast_block.go
deleted file mode 100644
index d5fd32bd5..000000000
--- a/vendor/github.com/hashicorp/hcl2/hclwrite/ast_block.go
+++ /dev/null
@@ -1,74 +0,0 @@
-package hclwrite
-
-import (
- "github.com/hashicorp/hcl2/hcl/hclsyntax"
- "github.com/zclconf/go-cty/cty"
-)
-
-type Block struct {
- inTree
-
- leadComments *node
- typeName *node
- labels nodeSet
- open *node
- body *node
- close *node
-}
-
-func newBlock() *Block {
- return &Block{
- inTree: newInTree(),
- labels: newNodeSet(),
- }
-}
-
-// NewBlock constructs a new, empty block with the given type name and labels.
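//
// For example (sketch), NewBlock("resource", []string{"aws_instance", "web"})
// produces a block whose tokens render as:
//
//     resource "aws_instance" "web" {
//     }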
-func NewBlock(typeName string, labels []string) *Block {
- block := newBlock()
- block.init(typeName, labels)
- return block
-}
-
-func (b *Block) init(typeName string, labels []string) {
- nameTok := newIdentToken(typeName)
- nameObj := newIdentifier(nameTok)
- b.leadComments = b.children.Append(newComments(nil))
- b.typeName = b.children.Append(nameObj)
- for _, label := range labels {
- labelToks := TokensForValue(cty.StringVal(label))
- labelObj := newQuoted(labelToks)
- labelNode := b.children.Append(labelObj)
- b.labels.Add(labelNode)
- }
- b.open = b.children.AppendUnstructuredTokens(Tokens{
- {
- Type: hclsyntax.TokenOBrace,
- Bytes: []byte{'{'},
- },
- {
- Type: hclsyntax.TokenNewline,
- Bytes: []byte{'\n'},
- },
- })
- body := newBody() // initially totally empty; caller can append to it subsequently
- b.body = b.children.Append(body)
- b.close = b.children.AppendUnstructuredTokens(Tokens{
- {
- Type: hclsyntax.TokenCBrace,
- Bytes: []byte{'}'},
- },
- {
- Type: hclsyntax.TokenNewline,
- Bytes: []byte{'\n'},
- },
- })
-}
-
-// Body returns the body that represents the content of the receiving block.
-//
-// Appending to or otherwise modifying this body will make changes to the
-// tokens that are generated between the block's open and close braces.
-func (b *Block) Body() *Body {
- return b.body.content.(*Body)
-}
diff --git a/vendor/github.com/hashicorp/hcl2/hclwrite/ast_body.go b/vendor/github.com/hashicorp/hcl2/hclwrite/ast_body.go
deleted file mode 100644
index cf69fee21..000000000
--- a/vendor/github.com/hashicorp/hcl2/hclwrite/ast_body.go
+++ /dev/null
@@ -1,153 +0,0 @@
-package hclwrite
-
-import (
- "github.com/hashicorp/hcl2/hcl"
- "github.com/hashicorp/hcl2/hcl/hclsyntax"
- "github.com/zclconf/go-cty/cty"
-)
-
-type Body struct {
- inTree
-
- items nodeSet
-}
-
-func newBody() *Body {
- return &Body{
- inTree: newInTree(),
- items: newNodeSet(),
- }
-}
-
-func (b *Body) appendItem(c nodeContent) *node {
- nn := b.children.Append(c)
- b.items.Add(nn)
- return nn
-}
-
-func (b *Body) appendItemNode(nn *node) *node {
- nn.assertUnattached()
- b.children.AppendNode(nn)
- b.items.Add(nn)
- return nn
-}
-
-// Clear removes all of the items from the body, making it empty.
-func (b *Body) Clear() {
- b.children.Clear()
-}
-
-func (b *Body) AppendUnstructuredTokens(ts Tokens) {
- b.inTree.children.Append(ts)
-}
-
-// Attributes returns a new map of all of the attributes in the body, with
-// the attribute names as the keys.
-func (b *Body) Attributes() map[string]*Attribute {
- ret := make(map[string]*Attribute)
- for n := range b.items {
- if attr, isAttr := n.content.(*Attribute); isAttr {
- nameObj := attr.name.content.(*identifier)
- name := string(nameObj.token.Bytes)
- ret[name] = attr
- }
- }
- return ret
-}
-
-// Blocks returns a new slice of all the blocks in the body.
-func (b *Body) Blocks() []*Block {
- ret := make([]*Block, 0, len(b.items))
- for n := range b.items {
- if block, isBlock := n.content.(*Block); isBlock {
- ret = append(ret, block)
- }
- }
- return ret
-}
-
-// GetAttribute returns the attribute from the body that has the given name,
-// or returns nil if there is currently no matching attribute.
-func (b *Body) GetAttribute(name string) *Attribute {
- for n := range b.items {
- if attr, isAttr := n.content.(*Attribute); isAttr {
- nameObj := attr.name.content.(*identifier)
- if nameObj.hasName(name) {
- // We've found it!
- return attr
- }
- }
- }
-
- return nil
-}
-
-// SetAttributeValue either replaces the expression of an existing attribute
-// of the given name or adds a new attribute definition to the end of the body.
-//
-// The value is given as a cty.Value, and must therefore be a literal. To set
-// a variable reference or other traversal, use SetAttributeTraversal.
-//
-// The return value is the attribute that was either modified in-place or
-// created.
-func (b *Body) SetAttributeValue(name string, val cty.Value) *Attribute {
- attr := b.GetAttribute(name)
- expr := NewExpressionLiteral(val)
- if attr != nil {
- attr.expr = attr.expr.ReplaceWith(expr)
- } else {
- // Assign rather than redeclare here, so that the newly-created
- // attribute (not nil) is what we return below.
- attr = newAttribute()
- attr.init(name, expr)
- b.appendItem(attr)
- }
- return attr
-}
-
-// SetAttributeTraversal either replaces the expression of an existing attribute
-// of the given name or adds a new attribute definition to the end of the body.
-//
-// The new expression is given as a hcl.Traversal, which must be an absolute
-// traversal. To set a literal value, use SetAttributeValue.
-//
-// The return value is the attribute that was either modified in-place or
-// created.
-func (b *Body) SetAttributeTraversal(name string, traversal hcl.Traversal) *Attribute {
- attr := b.GetAttribute(name)
- expr := NewExpressionAbsTraversal(traversal)
- if attr != nil {
- attr.expr = attr.expr.ReplaceWith(expr)
- } else {
- // As above, assign rather than redeclare so the new attribute
- // is returned to the caller.
- attr = newAttribute()
- attr.init(name, expr)
- b.appendItem(attr)
- }
- return attr
-}
-
-// AppendBlock appends an existing block (which must not be already attached
-// to a body) to the end of the receiving body.
-func (b *Body) AppendBlock(block *Block) *Block {
- b.appendItem(block)
- return block
-}
-
-// AppendNewBlock appends a new nested block to the end of the receiving body
-// with the given type name and labels.
-func (b *Body) AppendNewBlock(typeName string, labels []string) *Block {
- block := newBlock()
- block.init(typeName, labels)
- b.appendItem(block)
- return block
-}
-
-// AppendNewline appends a newline token to the end of the receiving body,
-// which generally serves as a separator between different sets of body
-// contents.
-func (b *Body) AppendNewline() {
- b.AppendUnstructuredTokens(Tokens{
- {
- Type: hclsyntax.TokenNewline,
- Bytes: []byte{'\n'},
- },
- })
-}
diff --git a/vendor/github.com/hashicorp/hcl2/hclwrite/ast_expression.go b/vendor/github.com/hashicorp/hcl2/hclwrite/ast_expression.go
deleted file mode 100644
index 62d89fbef..000000000
--- a/vendor/github.com/hashicorp/hcl2/hclwrite/ast_expression.go
+++ /dev/null
@@ -1,201 +0,0 @@
-package hclwrite
-
-import (
- "fmt"
-
- "github.com/hashicorp/hcl2/hcl"
- "github.com/hashicorp/hcl2/hcl/hclsyntax"
- "github.com/zclconf/go-cty/cty"
-)
-
-type Expression struct {
- inTree
-
- absTraversals nodeSet
-}
-
-func newExpression() *Expression {
- return &Expression{
- inTree: newInTree(),
- absTraversals: newNodeSet(),
- }
-}
-
-// NewExpressionLiteral constructs an expression that represents the given
-// literal value.
-//
-// Since an unknown value cannot be represented in source code, this function
-// will panic if the given value is unknown or contains a nested unknown value.
-// Use val.IsWhollyKnown before calling to be sure.
-//
-// HCL native syntax does not directly represent lists, maps, and sets, and
-// instead relies on the automatic conversions to those collection types from
-// either list or tuple constructor syntax.
-// Therefore converting collection
-// values to source code and re-reading them will lose type information, and
-// the reader must provide a suitable type at decode time to recover the
-// original value.
-func NewExpressionLiteral(val cty.Value) *Expression {
- toks := TokensForValue(val)
- expr := newExpression()
- expr.children.AppendUnstructuredTokens(toks)
- return expr
-}
-
-// NewExpressionAbsTraversal constructs an expression that represents the
-// given traversal, which must be absolute or this function will panic.
-func NewExpressionAbsTraversal(traversal hcl.Traversal) *Expression {
- if traversal.IsRelative() {
- panic("can't construct expression from relative traversal")
- }
-
- physT := newTraversal()
- rootName := traversal.RootName()
- steps := traversal[1:]
-
- {
- tn := newTraverseName()
- tn.name = tn.children.Append(newIdentifier(&Token{
- Type: hclsyntax.TokenIdent,
- Bytes: []byte(rootName),
- }))
- physT.steps.Add(physT.children.Append(tn))
- }
-
- for _, step := range steps {
- switch ts := step.(type) {
- case hcl.TraverseAttr:
- tn := newTraverseName()
- tn.children.AppendUnstructuredTokens(Tokens{
- {
- Type: hclsyntax.TokenDot,
- Bytes: []byte{'.'},
- },
- })
- tn.name = tn.children.Append(newIdentifier(&Token{
- Type: hclsyntax.TokenIdent,
- Bytes: []byte(ts.Name),
- }))
- physT.steps.Add(physT.children.Append(tn))
- case hcl.TraverseIndex:
- ti := newTraverseIndex()
- ti.children.AppendUnstructuredTokens(Tokens{
- {
- Type: hclsyntax.TokenOBrack,
- Bytes: []byte{'['},
- },
- })
- indexExpr := NewExpressionLiteral(ts.Key)
- ti.key = ti.children.Append(indexExpr)
- ti.children.AppendUnstructuredTokens(Tokens{
- {
- Type: hclsyntax.TokenCBrack,
- Bytes: []byte{']'},
- },
- })
- physT.steps.Add(physT.children.Append(ti))
- }
- }
-
- expr := newExpression()
- expr.absTraversals.Add(expr.children.Append(physT))
- return expr
-}
-
-// Variables returns the absolute traversals that exist within the receiving
-// expression.
-func (e *Expression) Variables() []*Traversal {
- nodes := e.absTraversals.List()
- ret := make([]*Traversal, len(nodes))
- for i, node := range nodes {
- ret[i] = node.content.(*Traversal)
- }
- return ret
-}
-
-// RenameVariablePrefix examines each of the absolute traversals in the
-// receiving expression to see if they have the given sequence of names as
-// a prefix. If so, they are updated in place to have the given
-// replacement names instead of that prefix.
-//
-// This can be used to implement symbol renaming. The calling application can
-// visit all relevant expressions in its input and apply the same renaming
-// to implement a global symbol rename.
-//
-// The search and replacement traversals must be the same length, or this
-// method will panic. Only attribute access operations can be matched and
-// replaced. Index steps never match the prefix.
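//
// A hedged sketch of a rename (all names illustrative), rewriting references
// such as var.old_name.attr into var.new_name.attr while leaving var.other
// untouched:
//
//     expr.RenameVariablePrefix(
//         []string{"var", "old_name"},
//         []string{"var", "new_name"},
//     )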
-func (e *Expression) RenameVariablePrefix(search, replacement []string) { - if len(search) != len(replacement) { - panic(fmt.Sprintf("search and replacement length mismatch (%d and %d)", len(search), len(replacement))) - } -Traversals: - for node := range e.absTraversals { - traversal := node.content.(*Traversal) - if len(traversal.steps) < len(search) { - // If it's shorter then it can't have our prefix - continue - } - - stepNodes := traversal.steps.List() - for i, name := range search { - step, isName := stepNodes[i].content.(*TraverseName) - if !isName { - continue Traversals // only name nodes can match - } - foundNameBytes := step.name.content.(*identifier).token.Bytes - if len(foundNameBytes) != len(name) { - continue Traversals - } - if string(foundNameBytes) != name { - continue Traversals - } - } - - // If we get here then the prefix matched, so now we'll swap in - // the replacement strings. - for i, name := range replacement { - step := stepNodes[i].content.(*TraverseName) - token := step.name.content.(*identifier).token - token.Bytes = []byte(name) - } - } -} - -// Traversal represents a sequence of variable, attribute, and/or index -// operations. -type Traversal struct { - inTree - - steps nodeSet -} - -func newTraversal() *Traversal { - return &Traversal{ - inTree: newInTree(), - steps: newNodeSet(), - } -} - -type TraverseName struct { - inTree - - name *node -} - -func newTraverseName() *TraverseName { - return &TraverseName{ - inTree: newInTree(), - } -} - -type TraverseIndex struct { - inTree - - key *node -} - -func newTraverseIndex() *TraverseIndex { - return &TraverseIndex{ - inTree: newInTree(), - } -} diff --git a/vendor/github.com/hashicorp/hcl2/hclwrite/doc.go b/vendor/github.com/hashicorp/hcl2/hclwrite/doc.go deleted file mode 100644 index 56d5b7752..000000000 --- a/vendor/github.com/hashicorp/hcl2/hclwrite/doc.go +++ /dev/null @@ -1,11 +0,0 @@ -// Package hclwrite deals with the problem of generating HCL configuration -// and of making specific surgical changes to existing HCL configurations. -// -// It operates at a different level of abstraction than the main HCL parser -// and AST, since details such as the placement of comments and newlines -// are preserved when unchanged. -// -// The hclwrite API follows a similar principle to XML/HTML DOM, allowing nodes -// to be read out, created and inserted, etc. Nodes represent syntax constructs -// rather than semantic concepts. -package hclwrite diff --git a/vendor/github.com/hashicorp/hcl2/hclwrite/format.go b/vendor/github.com/hashicorp/hcl2/hclwrite/format.go deleted file mode 100644 index 7111ebde2..000000000 --- a/vendor/github.com/hashicorp/hcl2/hclwrite/format.go +++ /dev/null @@ -1,463 +0,0 @@ -package hclwrite - -import ( - "github.com/hashicorp/hcl2/hcl/hclsyntax" -) - -var inKeyword = hclsyntax.Keyword([]byte{'i', 'n'}) - -// placeholder token used when we don't have a token but we don't want -// to pass a real "nil" and complicate things with nil pointer checks -var nilToken = &Token{ - Type: hclsyntax.TokenNil, - Bytes: []byte{}, - SpacesBefore: 0, -} - -// format rewrites tokens within the given sequence, in-place, to adjust the -// whitespace around their content to achieve canonical formatting. -func format(tokens Tokens) { - // Formatting is a multi-pass process. 
- // More details on the passes below,
- // but this is the overview:
- // - adjust the leading space on each line to create appropriate
- // indentation
- // - adjust spaces between tokens in a single cell using a set of rules
- // - adjust the leading space in the "assign" and "comment" cells on each
- // line to vertically align with neighboring lines.
- // All of these steps operate in-place on the given tokens, so a caller
- // may collect a flat sequence of all of the tokens underlying an AST
- // and pass it here and we will then indirectly modify the AST itself.
- // Formatting must change only whitespace. Specifically, that means
- // changing the SpacesBefore attribute on a token while leaving the
- // other token attributes unchanged.
-
- lines := linesForFormat(tokens)
- formatIndent(lines)
- formatSpaces(lines)
- formatCells(lines)
-}
-
-func formatIndent(lines []formatLine) {
- // Our methodology for indents is to take the input one line at a time
- // and count the bracketing delimiters on each line. If a line has a net
- // increase in open brackets, we increase the indent level by one and
- // remember how many new openers we had. If the line has a net _decrease_,
- // we'll compare it to the most recent number of openers and decrease the
- // indent level by one each time we pass an indent level remembered
- // earlier.
- // The "indent stack" used here allows for us to recognize degenerate
- // input where brackets are not symmetrical within lines and avoid
- // pushing things too far left or right, creating confusion.
-
- // We'll start our indent stack at a reasonable capacity to minimize the
- // chance of us needing to grow it; 10 here means 10 levels of indent,
- // which should be more than enough for reasonable HCL uses.
- indents := make([]int, 0, 10)
-
- for i := range lines {
- line := &lines[i]
- if len(line.lead) == 0 {
- continue
- }
-
- if line.lead[0].Type == hclsyntax.TokenNewline {
- // Never place spaces before a newline
- line.lead[0].SpacesBefore = 0
- continue
- }
-
- netBrackets := 0
- for _, token := range line.lead {
- netBrackets += tokenBracketChange(token)
- if token.Type == hclsyntax.TokenOHeredoc {
- break
- }
- }
-
- for _, token := range line.assign {
- netBrackets += tokenBracketChange(token)
- }
-
- switch {
- case netBrackets > 0:
- line.lead[0].SpacesBefore = 2 * len(indents)
- indents = append(indents, netBrackets)
- case netBrackets < 0:
- closed := -netBrackets
- for closed > 0 && len(indents) > 0 {
- switch {
-
- case closed > indents[len(indents)-1]:
- closed -= indents[len(indents)-1]
- indents = indents[:len(indents)-1]
-
- case closed < indents[len(indents)-1]:
- indents[len(indents)-1] -= closed
- closed = 0
-
- default:
- indents = indents[:len(indents)-1]
- closed = 0
- }
- }
- line.lead[0].SpacesBefore = 2 * len(indents)
- default:
- line.lead[0].SpacesBefore = 2 * len(indents)
- }
- }
-}
-
-func formatSpaces(lines []formatLine) {
- for _, line := range lines {
- for i, token := range line.lead {
- var before, after *Token
- if i > 0 {
- before = line.lead[i-1]
- } else {
- before = nilToken
- }
- if i < (len(line.lead) - 1) {
- after = line.lead[i+1]
- } else {
- after = nilToken
- }
- if spaceAfterToken(token, before, after) {
- after.SpacesBefore = 1
- } else {
- after.SpacesBefore = 0
- }
- }
- for i, token := range line.assign {
- if i == 0 {
- // first token in "assign" always has one space before to
- // separate the equals sign from what it's assigning.
- token.SpacesBefore = 1 - } - - var before, after *Token - if i > 0 { - before = line.assign[i-1] - } else { - before = nilToken - } - if i < (len(line.assign) - 1) { - after = line.assign[i+1] - } else { - after = nilToken - } - if spaceAfterToken(token, before, after) { - after.SpacesBefore = 1 - } else { - after.SpacesBefore = 0 - } - } - - } -} - -func formatCells(lines []formatLine) { - - chainStart := -1 - maxColumns := 0 - - // We'll deal with the "assign" cell first, since moving that will - // also impact the "comment" cell. - closeAssignChain := func(i int) { - for _, chainLine := range lines[chainStart:i] { - columns := chainLine.lead.Columns() - spaces := (maxColumns - columns) + 1 - chainLine.assign[0].SpacesBefore = spaces - } - chainStart = -1 - maxColumns = 0 - } - for i, line := range lines { - if line.assign == nil { - if chainStart != -1 { - closeAssignChain(i) - } - } else { - if chainStart == -1 { - chainStart = i - } - columns := line.lead.Columns() - if columns > maxColumns { - maxColumns = columns - } - } - } - if chainStart != -1 { - closeAssignChain(len(lines)) - } - - // Now we'll deal with the comments - closeCommentChain := func(i int) { - for _, chainLine := range lines[chainStart:i] { - columns := chainLine.lead.Columns() + chainLine.assign.Columns() - spaces := (maxColumns - columns) + 1 - chainLine.comment[0].SpacesBefore = spaces - } - chainStart = -1 - maxColumns = 0 - } - for i, line := range lines { - if line.comment == nil { - if chainStart != -1 { - closeCommentChain(i) - } - } else { - if chainStart == -1 { - chainStart = i - } - columns := line.lead.Columns() + line.assign.Columns() - if columns > maxColumns { - maxColumns = columns - } - } - } - if chainStart != -1 { - closeCommentChain(len(lines)) - } - -} - -// spaceAfterToken decides whether a particular subject token should have a -// space after it when surrounded by the given before and after tokens. -// "before" can be TokenNil, if the subject token is at the start of a sequence. -func spaceAfterToken(subject, before, after *Token) bool { - switch { - - case after.Type == hclsyntax.TokenNewline || after.Type == hclsyntax.TokenNil: - // Never add spaces before a newline - return false - - case subject.Type == hclsyntax.TokenIdent && after.Type == hclsyntax.TokenOParen: - // Don't split a function name from open paren in a call - return false - - case subject.Type == hclsyntax.TokenDot || after.Type == hclsyntax.TokenDot: - // Don't use spaces around attribute access dots - return false - - case after.Type == hclsyntax.TokenComma || after.Type == hclsyntax.TokenEllipsis: - // No space right before a comma or ... in an argument list - return false - - case subject.Type == hclsyntax.TokenComma: - // Always a space after a comma - return true - - case subject.Type == hclsyntax.TokenQuotedLit || subject.Type == hclsyntax.TokenStringLit || subject.Type == hclsyntax.TokenOQuote || subject.Type == hclsyntax.TokenOHeredoc || after.Type == hclsyntax.TokenQuotedLit || after.Type == hclsyntax.TokenStringLit || after.Type == hclsyntax.TokenCQuote || after.Type == hclsyntax.TokenCHeredoc: - // No extra spaces within templates - return false - - case inKeyword.TokenMatches(subject.asHCLSyntax()) && before.Type == hclsyntax.TokenIdent: - // This is a special case for inside for expressions where a user - // might want to use a literal tuple constructor: - // [for x in [foo]: x] - // ... 
in that case, we would normally produce in[foo] thinking that - // in is a reference, but we'll recognize it as a keyword here instead - // to make the result less confusing. - return true - - case after.Type == hclsyntax.TokenOBrack && (subject.Type == hclsyntax.TokenIdent || subject.Type == hclsyntax.TokenNumberLit || tokenBracketChange(subject) < 0): - return false - - case subject.Type == hclsyntax.TokenMinus: - // Since a minus can either be subtraction or negation, and the latter - // should _not_ have a space after it, we need to use some heuristics - // to decide which case this is. - // We guess that we have a negation if the token before doesn't look - // like it could be the end of an expression. - - switch before.Type { - - case hclsyntax.TokenNil: - // Minus at the start of input must be a negation - return false - - case hclsyntax.TokenOParen, hclsyntax.TokenOBrace, hclsyntax.TokenOBrack, hclsyntax.TokenEqual, hclsyntax.TokenColon, hclsyntax.TokenComma, hclsyntax.TokenQuestion: - // Minus immediately after an opening bracket or separator must be a negation. - return false - - case hclsyntax.TokenPlus, hclsyntax.TokenStar, hclsyntax.TokenSlash, hclsyntax.TokenPercent, hclsyntax.TokenMinus: - // Minus immediately after another arithmetic operator must be negation. - return false - - case hclsyntax.TokenEqualOp, hclsyntax.TokenNotEqual, hclsyntax.TokenGreaterThan, hclsyntax.TokenGreaterThanEq, hclsyntax.TokenLessThan, hclsyntax.TokenLessThanEq: - // Minus immediately after another comparison operator must be negation. - return false - - case hclsyntax.TokenAnd, hclsyntax.TokenOr, hclsyntax.TokenBang: - // Minus immediately after logical operator doesn't make sense but probably intended as negation. - return false - - default: - return true - } - - case subject.Type == hclsyntax.TokenOBrace || after.Type == hclsyntax.TokenCBrace: - // Unlike other bracket types, braces have spaces on both sides of them, - // both in single-line nested blocks foo { bar = baz } and in object - // constructor expressions foo = { bar = baz }. - if subject.Type == hclsyntax.TokenOBrace && after.Type == hclsyntax.TokenCBrace { - // An open brace followed by a close brace is an exception, however. - // e.g. foo {} rather than foo { } - return false - } - return true - - // In the unlikely event that an interpolation expression is just - // a single object constructor, we'll put a space between the ${ and - // the following { to make this more obvious, and then the same - // thing for the two braces at the end. - case (subject.Type == hclsyntax.TokenTemplateInterp || subject.Type == hclsyntax.TokenTemplateControl) && after.Type == hclsyntax.TokenOBrace: - return true - case subject.Type == hclsyntax.TokenCBrace && after.Type == hclsyntax.TokenTemplateSeqEnd: - return true - - // Don't add spaces between interpolated items - case subject.Type == hclsyntax.TokenTemplateSeqEnd && (after.Type == hclsyntax.TokenTemplateInterp || after.Type == hclsyntax.TokenTemplateControl): - return false - - case tokenBracketChange(subject) > 0: - // No spaces after open brackets - return false - - case tokenBracketChange(after) < 0: - // No spaces before close brackets - return false - - default: - // Most tokens are space-separated - return true - - } -} - -func linesForFormat(tokens Tokens) []formatLine { - if len(tokens) == 0 { - return make([]formatLine, 0) - } - - // first we'll count our lines, so we can allocate the array for them in - // a single block. 
(We want to minimize memory pressure in this codepath, - // so it can be run somewhat-frequently by editor integrations.) - lineCount := 1 // if there are zero newlines then there is one line - for _, tok := range tokens { - if tokenIsNewline(tok) { - lineCount++ - } - } - - // To start, we'll just put everything in the "lead" cell on each line, - // and then do another pass over the lines afterwards to adjust. - lines := make([]formatLine, lineCount) - li := 0 - lineStart := 0 - for i, tok := range tokens { - if tok.Type == hclsyntax.TokenEOF { - // The EOF token doesn't belong to any line, and terminates the - // token sequence. - lines[li].lead = tokens[lineStart:i] - break - } - - if tokenIsNewline(tok) { - lines[li].lead = tokens[lineStart : i+1] - lineStart = i + 1 - li++ - } - } - - // If a set of tokens doesn't end in TokenEOF (e.g. because it's a - // fragment of tokens from the middle of a file) then we might fall - // out here with a line still pending. - if lineStart < len(tokens) { - lines[li].lead = tokens[lineStart:] - if lines[li].lead[len(lines[li].lead)-1].Type == hclsyntax.TokenEOF { - lines[li].lead = lines[li].lead[:len(lines[li].lead)-1] - } - } - - // Now we'll pick off any trailing comments and attribute assignments - // to shuffle off into the "comment" and "assign" cells. - for i := range lines { - line := &lines[i] - - if len(line.lead) == 0 { - // if the line is empty then there's nothing for us to do - // (this should happen only for the final line, because all other - // lines would have a newline token of some kind) - continue - } - - if len(line.lead) > 1 && line.lead[len(line.lead)-1].Type == hclsyntax.TokenComment { - line.comment = line.lead[len(line.lead)-1:] - line.lead = line.lead[:len(line.lead)-1] - } - - for i, tok := range line.lead { - if i > 0 && tok.Type == hclsyntax.TokenEqual { - // We only move the tokens into "assign" if the RHS seems to - // be a whole expression, which we determine by counting - // brackets. If there's a net positive number of brackets - // then that suggests we're introducing a multi-line expression. - netBrackets := 0 - for _, token := range line.lead[i:] { - netBrackets += tokenBracketChange(token) - } - - if netBrackets == 0 { - line.assign = line.lead[i:] - line.lead = line.lead[:i] - } - break - } - } - } - - return lines -} - -func tokenIsNewline(tok *Token) bool { - if tok.Type == hclsyntax.TokenNewline { - return true - } else if tok.Type == hclsyntax.TokenComment { - // Single line tokens (# and //) consume their terminating newline, - // so we need to treat them as newline tokens as well. - if len(tok.Bytes) > 0 && tok.Bytes[len(tok.Bytes)-1] == '\n' { - return true - } - } - return false -} - -func tokenBracketChange(tok *Token) int { - switch tok.Type { - case hclsyntax.TokenOBrace, hclsyntax.TokenOBrack, hclsyntax.TokenOParen, hclsyntax.TokenTemplateControl, hclsyntax.TokenTemplateInterp: - return 1 - case hclsyntax.TokenCBrace, hclsyntax.TokenCBrack, hclsyntax.TokenCParen, hclsyntax.TokenTemplateSeqEnd: - return -1 - default: - return 0 - } -} - -// formatLine represents a single line of source code for formatting purposes, -// splitting its tokens into up to three "cells": -// -// lead: always present, representing everything up to one of the others -// assign: if line contains an attribute assignment, represents the tokens -// starting at (and including) the equals symbol -// comment: if line contains any non-comment tokens and ends with a -// single-line comment token, represents the comment. 
-//
-// When formatting, the leading spaces of the first tokens in each of these
-// cells are adjusted to vertically align their occurrences on consecutive
-// rows.
-type formatLine struct {
- lead Tokens
- assign Tokens
- comment Tokens
-}
diff --git a/vendor/github.com/hashicorp/hcl2/hclwrite/generate.go b/vendor/github.com/hashicorp/hcl2/hclwrite/generate.go
deleted file mode 100644
index d249cfdf9..000000000
--- a/vendor/github.com/hashicorp/hcl2/hclwrite/generate.go
+++ /dev/null
@@ -1,250 +0,0 @@
-package hclwrite
-
-import (
- "fmt"
- "unicode"
- "unicode/utf8"
-
- "github.com/hashicorp/hcl2/hcl"
- "github.com/hashicorp/hcl2/hcl/hclsyntax"
- "github.com/zclconf/go-cty/cty"
-)
-
-// TokensForValue returns a sequence of tokens that represents the given
-// constant value.
-//
-// This function only supports types that are used by HCL. In particular, it
-// does not support capsule types and will panic if given one.
-//
-// It is not possible to express an unknown value in source code, so this
-// function will panic if the given value is unknown or contains any unknown
-// values. A caller can call the value's IsWhollyKnown method to verify that
-// no unknown values are present before calling TokensForValue.
-func TokensForValue(val cty.Value) Tokens {
- toks := appendTokensForValue(val, nil)
- format(toks) // fiddle with the SpacesBefore field to get canonical spacing
- return toks
-}
-
-// TokensForTraversal returns a sequence of tokens that represents the given
-// traversal.
-//
-// If the traversal is absolute then the result is a self-contained, valid
-// reference expression. If the traversal is relative then the returned tokens
-// could be appended to some other expression tokens to traverse into the
-// represented expression.
-func TokensForTraversal(traversal hcl.Traversal) Tokens {
- toks := appendTokensForTraversal(traversal, nil)
- format(toks) // fiddle with the SpacesBefore field to get canonical spacing
- return toks
-}
-
-func appendTokensForValue(val cty.Value, toks Tokens) Tokens {
- switch {
-
- case !val.IsKnown():
- panic("cannot produce tokens for unknown value")
-
- case val.IsNull():
- toks = append(toks, &Token{
- Type: hclsyntax.TokenIdent,
- Bytes: []byte(`null`),
- })
-
- case val.Type() == cty.Bool:
- var src []byte
- if val.True() {
- src = []byte(`true`)
- } else {
- src = []byte(`false`)
- }
- toks = append(toks, &Token{
- Type: hclsyntax.TokenIdent,
- Bytes: src,
- })
-
- case val.Type() == cty.Number:
- bf := val.AsBigFloat()
- srcStr := bf.Text('f', -1)
- toks = append(toks, &Token{
- Type: hclsyntax.TokenNumberLit,
- Bytes: []byte(srcStr),
- })
-
- case val.Type() == cty.String:
- // TODO: If it's a multi-line string ending in a newline, format
- // it as a HEREDOC instead.
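// For example (an illustrative sketch), cty.StringVal(`say "hi" ${x}`)
// escapes below to the quoted form "say \"hi\" $${x}", doubling the
// template introducer so it cannot be misread as an interpolation.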
- src := escapeQuotedStringLit(val.AsString())
- toks = append(toks, &Token{
- Type: hclsyntax.TokenOQuote,
- Bytes: []byte{'"'},
- })
- if len(src) > 0 {
- toks = append(toks, &Token{
- Type: hclsyntax.TokenQuotedLit,
- Bytes: src,
- })
- }
- toks = append(toks, &Token{
- Type: hclsyntax.TokenCQuote,
- Bytes: []byte{'"'},
- })
-
- case val.Type().IsListType() || val.Type().IsSetType() || val.Type().IsTupleType():
- toks = append(toks, &Token{
- Type: hclsyntax.TokenOBrack,
- Bytes: []byte{'['},
- })
-
- i := 0
- for it := val.ElementIterator(); it.Next(); {
- if i > 0 {
- toks = append(toks, &Token{
- Type: hclsyntax.TokenComma,
- Bytes: []byte{','},
- })
- }
- _, eVal := it.Element()
- toks = appendTokensForValue(eVal, toks)
- i++
- }
-
- toks = append(toks, &Token{
- Type: hclsyntax.TokenCBrack,
- Bytes: []byte{']'},
- })
-
- case val.Type().IsMapType() || val.Type().IsObjectType():
- toks = append(toks, &Token{
- Type: hclsyntax.TokenOBrace,
- Bytes: []byte{'{'},
- })
-
- i := 0
- for it := val.ElementIterator(); it.Next(); {
- if i > 0 {
- toks = append(toks, &Token{
- Type: hclsyntax.TokenComma,
- Bytes: []byte{','},
- })
- }
- eKey, eVal := it.Element()
- if hclsyntax.ValidIdentifier(eKey.AsString()) {
- toks = append(toks, &Token{
- Type: hclsyntax.TokenIdent,
- Bytes: []byte(eKey.AsString()),
- })
- } else {
- toks = appendTokensForValue(eKey, toks)
- }
- toks = append(toks, &Token{
- Type: hclsyntax.TokenEqual,
- Bytes: []byte{'='},
- })
- toks = appendTokensForValue(eVal, toks)
- i++
- }
-
- toks = append(toks, &Token{
- Type: hclsyntax.TokenCBrace,
- Bytes: []byte{'}'},
- })
-
- default:
- panic(fmt.Sprintf("cannot produce tokens for %#v", val))
- }
-
- return toks
-}
-
-func appendTokensForTraversal(traversal hcl.Traversal, toks Tokens) Tokens {
- for _, step := range traversal {
- // append may reallocate the underlying array, so we must keep
- // the returned slice rather than discard it.
- toks = appendTokensForTraversalStep(step, toks)
- }
- return toks
-}
-
-// appendTokensForTraversalStep returns the extended token sequence; callers
-// must use the returned slice, for the same reason as with append itself.
-func appendTokensForTraversalStep(step hcl.Traverser, toks Tokens) Tokens {
- switch ts := step.(type) {
- case hcl.TraverseRoot:
- toks = append(toks, &Token{
- Type: hclsyntax.TokenIdent,
- Bytes: []byte(ts.Name),
- })
- case hcl.TraverseAttr:
- toks = append(
- toks,
- &Token{
- Type: hclsyntax.TokenDot,
- Bytes: []byte{'.'},
- },
- &Token{
- Type: hclsyntax.TokenIdent,
- Bytes: []byte(ts.Name),
- },
- )
- case hcl.TraverseIndex:
- toks = append(toks, &Token{
- Type: hclsyntax.TokenOBrack,
- Bytes: []byte{'['},
- })
- toks = appendTokensForValue(ts.Key, toks)
- toks = append(toks, &Token{
- Type: hclsyntax.TokenCBrack,
- Bytes: []byte{']'},
- })
- default:
- panic(fmt.Sprintf("unsupported traversal step type %T", step))
- }
- return toks
-}
-
-func escapeQuotedStringLit(s string) []byte {
- if len(s) == 0 {
- return nil
- }
- buf := make([]byte, 0, len(s))
- for i, r := range s {
- switch r {
- case '\n':
- buf = append(buf, '\\', 'n')
- case '\r':
- buf = append(buf, '\\', 'r')
- case '\t':
- buf = append(buf, '\\', 't')
- case '"':
- buf = append(buf, '\\', '"')
- case '\\':
- buf = append(buf, '\\', '\\')
- case '$', '%':
- buf = appendRune(buf, r)
- remain := s[i+1:]
- if len(remain) > 0 && remain[0] == '{' {
- // Double up our template introducer symbol to escape it.
- buf = appendRune(buf, r)
- }
- default:
- if !unicode.IsPrint(r) {
- var fmted string
- if r < 65536 {
- fmted = fmt.Sprintf("\\u%04x", r)
- } else {
- fmted = fmt.Sprintf("\\U%08x", r)
- }
- buf = append(buf, fmted...)
- } else {
- buf = appendRune(buf, r)
- }
- }
- }
- return buf
-}
-
-func appendRune(b []byte, r rune) []byte {
- l := utf8.RuneLen(r)
- for i := 0; i < l; i++ {
- b = append(b, 0) // make room at the end of our buffer
- }
- ch := b[len(b)-l:]
- utf8.EncodeRune(ch, r)
- return b
-}
diff --git a/vendor/github.com/hashicorp/hcl2/hclwrite/native_node_sorter.go b/vendor/github.com/hashicorp/hcl2/hclwrite/native_node_sorter.go
deleted file mode 100644
index a13c0ec41..000000000
--- a/vendor/github.com/hashicorp/hcl2/hclwrite/native_node_sorter.go
+++ /dev/null
@@ -1,23 +0,0 @@
-package hclwrite
-
-import (
- "github.com/hashicorp/hcl2/hcl/hclsyntax"
-)
-
-type nativeNodeSorter struct {
- Nodes []hclsyntax.Node
-}
-
-func (s nativeNodeSorter) Len() int {
- return len(s.Nodes)
-}
-
-func (s nativeNodeSorter) Less(i, j int) bool {
- rangeI := s.Nodes[i].Range()
- rangeJ := s.Nodes[j].Range()
- return rangeI.Start.Byte < rangeJ.Start.Byte
-}
-
-func (s nativeNodeSorter) Swap(i, j int) {
- s.Nodes[i], s.Nodes[j] = s.Nodes[j], s.Nodes[i]
-}
diff --git a/vendor/github.com/hashicorp/hcl2/hclwrite/node.go b/vendor/github.com/hashicorp/hcl2/hclwrite/node.go
deleted file mode 100644
index 71fd00faf..000000000
--- a/vendor/github.com/hashicorp/hcl2/hclwrite/node.go
+++ /dev/null
@@ -1,236 +0,0 @@
-package hclwrite
-
-import (
- "fmt"
-
- "github.com/google/go-cmp/cmp"
-)
-
-// node represents a node in the AST.
-type node struct {
- content nodeContent
-
- list *nodes
- before, after *node
-}
-
-func newNode(c nodeContent) *node {
- return &node{
- content: c,
- }
-}
-
-func (n *node) Equal(other *node) bool {
- return cmp.Equal(n.content, other.content)
-}
-
-func (n *node) BuildTokens(to Tokens) Tokens {
- return n.content.BuildTokens(to)
-}
-
-// Detach removes the receiver from the list it currently belongs to. If the
-// node is not currently in a list, this is a no-op.
-func (n *node) Detach() {
- if n.list == nil {
- return
- }
- if n.before != nil {
- n.before.after = n.after
- }
- if n.after != nil {
- n.after.before = n.before
- }
- if n.list.first == n {
- n.list.first = n.after
- }
- if n.list.last == n {
- n.list.last = n.before
- }
- n.list = nil
- n.before = nil
- n.after = nil
-}
-
-// ReplaceWith removes the receiver from the list it currently belongs to and
-// inserts a new node with the given content in its place. If the node is not
-// currently in a list, this function will panic.
-//
-// The return value is the newly-constructed node, containing the given content.
-// After this function returns, the receiver is no longer attached to a list.
-func (n *node) ReplaceWith(c nodeContent) *node {
- if n.list == nil {
- panic("can't replace node that is not in a list")
- }
-
- before := n.before
- after := n.after
- list := n.list
- n.before, n.after, n.list = nil, nil, nil
-
- nn := newNode(c)
- nn.before = before
- nn.after = after
- nn.list = list
- if before != nil {
- before.after = nn
- }
- if after != nil {
- after.before = nn
- }
- if list.first == n {
- list.first = nn // n was the head of the list, so nn takes its place
- }
- if list.last == n {
- list.last = nn // likewise for the tail
- }
- return nn
-}
-
-func (n *node) assertUnattached() {
- if n.list != nil {
- panic(fmt.Sprintf("attempt to attach already-attached node %#v", n))
- }
-}
-
-// nodeContent is the interface type implemented by all AST content types.
-type nodeContent interface {
- walkChildNodes(w internalWalkFunc)
- BuildTokens(to Tokens) Tokens
-}
-
-// nodes is a doubly-linked list of nodes, supporting detachment and in-place
-// replacement of its members.
-type nodes struct { - first, last *node -} - -func (ns *nodes) BuildTokens(to Tokens) Tokens { - for n := ns.first; n != nil; n = n.after { - to = n.BuildTokens(to) - } - return to -} - -func (ns *nodes) Clear() { - ns.first = nil - ns.last = nil -} - -func (ns *nodes) Append(c nodeContent) *node { - n := &node{ - content: c, - } - ns.AppendNode(n) - n.list = ns - return n -} - -func (ns *nodes) AppendNode(n *node) { - if ns.last != nil { - n.before = ns.last - ns.last.after = n - } - n.list = ns - ns.last = n - if ns.first == nil { - ns.first = n - } -} - -func (ns *nodes) AppendUnstructuredTokens(tokens Tokens) *node { - if len(tokens) == 0 { - return nil - } - n := newNode(tokens) - ns.AppendNode(n) - n.list = ns - return n -} - -// nodeSet is an unordered set of nodes. It is used to describe a set of nodes -// that all belong to the same list that have some role or characteristic -// in common. -type nodeSet map[*node]struct{} - -func newNodeSet() nodeSet { - return make(nodeSet) -} - -func (ns nodeSet) Has(n *node) bool { - if ns == nil { - return false - } - _, exists := ns[n] - return exists -} - -func (ns nodeSet) Add(n *node) { - ns[n] = struct{}{} -} - -func (ns nodeSet) Remove(n *node) { - delete(ns, n) -} - -func (ns nodeSet) List() []*node { - if len(ns) == 0 { - return nil - } - - ret := make([]*node, 0, len(ns)) - - // Determine which list we are working with. We assume here that all of - // the nodes belong to the same list, since that is part of the contract - // for nodeSet. - var list *nodes - for n := range ns { - list = n.list - break - } - - // We recover the order by iterating over the whole list. This is not - // the most efficient way to do it, but our node lists should always be - // small so not worth making things more complex. - for n := list.first; n != nil; n = n.after { - if ns.Has(n) { - ret = append(ret, n) - } - } - return ret -} - -type internalWalkFunc func(*node) - -// inTree can be embedded into a content struct that has child nodes to get -// a standard implementation of the NodeContent interface and a record of -// a potential parent node. -type inTree struct { - parent *node - children *nodes -} - -func newInTree() inTree { - return inTree{ - children: &nodes{}, - } -} - -func (it *inTree) assertUnattached() { - if it.parent != nil { - panic(fmt.Sprintf("node is already attached to %T", it.parent.content)) - } -} - -func (it *inTree) walkChildNodes(w internalWalkFunc) { - for n := it.children.first; n != nil; n = n.after { - w(n) - } -} - -func (it *inTree) BuildTokens(to Tokens) Tokens { - for n := it.children.first; n != nil; n = n.after { - to = n.BuildTokens(to) - } - return to -} - -// leafNode can be embedded into a content struct to give it a do-nothing -// implementation of walkChildNodes -type leafNode struct { -} - -func (n *leafNode) walkChildNodes(w internalWalkFunc) { -} diff --git a/vendor/github.com/hashicorp/hcl2/hclwrite/parser.go b/vendor/github.com/hashicorp/hcl2/hclwrite/parser.go deleted file mode 100644 index 1876818fd..000000000 --- a/vendor/github.com/hashicorp/hcl2/hclwrite/parser.go +++ /dev/null @@ -1,594 +0,0 @@ -package hclwrite - -import ( - "fmt" - "sort" - - "github.com/hashicorp/hcl2/hcl" - "github.com/hashicorp/hcl2/hcl/hclsyntax" - "github.com/zclconf/go-cty/cty" -) - -// Our "parser" here is actually not doing any parsing of its own. 
-// Instead,
-// it leans on the native parser in hclsyntax, and then uses the source ranges
-// from the AST to partition the raw token sequence, matching the raw tokens
-// up to AST nodes.
-//
-// This strategy feels somewhat counter-intuitive, since most of the work the
-// parser does is thrown away here, but this strategy is chosen because the
-// normal parsing work done by hclsyntax is considered to be the "main case",
-// while modifying and re-printing source is more of an edge case, used only
-// in ancillary tools, and so it's good to keep all the main parsing logic
-// with the main case but keep all of the extra complexity of token wrangling
-// out of the main parser, which is already rather complex just serving the
-// use-cases it already serves.
-//
-// If the parsing step produces any errors, the returned File is nil because
-// we can't reliably extract tokens from the partial AST produced by an
-// erroneous parse.
-func parse(src []byte, filename string, start hcl.Pos) (*File, hcl.Diagnostics) {
- file, diags := hclsyntax.ParseConfig(src, filename, start)
- if diags.HasErrors() {
- return nil, diags
- }
-
- // To do our work here, we use the "native" tokens (those from hclsyntax)
- // to match against source ranges in the AST, but ultimately produce
- // slices from our sequence of "writer" tokens, which contain only
- // *relative* position information that is more appropriate for
- // transformation/writing use-cases.
- nativeTokens, diags := hclsyntax.LexConfig(src, filename, start)
- if diags.HasErrors() {
- // should never happen, since we would've caught these diags in
- // the first call above.
- return nil, diags
- }
- writerTokens := writerTokens(nativeTokens)
-
- from := inputTokens{
- nativeTokens: nativeTokens,
- writerTokens: writerTokens,
- }
-
- before, root, after := parseBody(file.Body.(*hclsyntax.Body), from)
- ret := &File{
- inTree: newInTree(),
-
- srcBytes: src,
- body: root,
- }
-
- nodes := ret.inTree.children
- nodes.Append(before.Tokens())
- nodes.AppendNode(root)
- nodes.Append(after.Tokens())
-
- return ret, diags
-}
-
-type inputTokens struct {
- nativeTokens hclsyntax.Tokens
- writerTokens Tokens
-}
-
-func (it inputTokens) Partition(rng hcl.Range) (before, within, after inputTokens) {
- start, end := partitionTokens(it.nativeTokens, rng)
- before = it.Slice(0, start)
- within = it.Slice(start, end)
- after = it.Slice(end, len(it.nativeTokens))
- return
-}
-
-func (it inputTokens) PartitionType(ty hclsyntax.TokenType) (before, within, after inputTokens) {
- for i, t := range it.writerTokens {
- if t.Type == ty {
- return it.Slice(0, i), it.Slice(i, i+1), it.Slice(i+1, len(it.nativeTokens))
- }
- }
- panic(fmt.Sprintf("didn't find any token of type %s", ty))
-}
-
-func (it inputTokens) PartitionTypeSingle(ty hclsyntax.TokenType) (before inputTokens, found *Token, after inputTokens) {
- before, within, after := it.PartitionType(ty)
- if within.Len() != 1 {
- panic("PartitionType found more than one token")
- }
- return before, within.Tokens()[0], after
-}
-
-// PartitionIncludingComments is like Partition except the returned "within"
-// range includes any lead and line comments associated with the range.
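//
// For example (sketch): given tokens for the source line
//
//     foo = 1 # note
//
// and the source range of just the foo attribute, the "within" partition
// is widened to take in any # lead comments on the preceding lines as well
// as the trailing "# note" comment up to and including its newline.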
-func (it inputTokens) PartitionIncludingComments(rng hcl.Range) (before, within, after inputTokens) { - start, end := partitionTokens(it.nativeTokens, rng) - start = partitionLeadCommentTokens(it.nativeTokens[:start]) - _, afterNewline := partitionLineEndTokens(it.nativeTokens[end:]) - end += afterNewline - - before = it.Slice(0, start) - within = it.Slice(start, end) - after = it.Slice(end, len(it.nativeTokens)) - return - -} - -// PartitionBlockItem is similar to PartitionIncludeComments but it returns -// the comments as separate token sequences so that they can be captured into -// AST attributes. It makes assumptions that apply only to block items, so -// should not be used for other constructs. -func (it inputTokens) PartitionBlockItem(rng hcl.Range) (before, leadComments, within, lineComments, newline, after inputTokens) { - before, within, after = it.Partition(rng) - before, leadComments = before.PartitionLeadComments() - lineComments, newline, after = after.PartitionLineEndTokens() - return -} - -func (it inputTokens) PartitionLeadComments() (before, within inputTokens) { - start := partitionLeadCommentTokens(it.nativeTokens) - before = it.Slice(0, start) - within = it.Slice(start, len(it.nativeTokens)) - return -} - -func (it inputTokens) PartitionLineEndTokens() (comments, newline, after inputTokens) { - afterComments, afterNewline := partitionLineEndTokens(it.nativeTokens) - comments = it.Slice(0, afterComments) - newline = it.Slice(afterComments, afterNewline) - after = it.Slice(afterNewline, len(it.nativeTokens)) - return -} - -func (it inputTokens) Slice(start, end int) inputTokens { - // When we slice, we create a new slice with no additional capacity because - // we expect that these slices will be mutated in order to insert - // new code into the AST, and we want to ensure that a new underlying - // array gets allocated in that case, rather than writing into some - // following slice and corrupting it. - return inputTokens{ - nativeTokens: it.nativeTokens[start:end:end], - writerTokens: it.writerTokens[start:end:end], - } -} - -func (it inputTokens) Len() int { - return len(it.nativeTokens) -} - -func (it inputTokens) Tokens() Tokens { - return it.writerTokens -} - -func (it inputTokens) Types() []hclsyntax.TokenType { - ret := make([]hclsyntax.TokenType, len(it.nativeTokens)) - for i, tok := range it.nativeTokens { - ret[i] = tok.Type - } - return ret -} - -// parseBody locates the given body within the given input tokens and returns -// the resulting *Body object as well as the tokens that appeared before and -// after it. -func parseBody(nativeBody *hclsyntax.Body, from inputTokens) (inputTokens, *node, inputTokens) { - before, within, after := from.PartitionIncludingComments(nativeBody.SrcRange) - - // The main AST doesn't retain the original source ordering of the - // body items, so we need to reconstruct that ordering by inspecting - // their source ranges. 
- nativeItems := make([]hclsyntax.Node, 0, len(nativeBody.Attributes)+len(nativeBody.Blocks)) - for _, nativeAttr := range nativeBody.Attributes { - nativeItems = append(nativeItems, nativeAttr) - } - for _, nativeBlock := range nativeBody.Blocks { - nativeItems = append(nativeItems, nativeBlock) - } - sort.Sort(nativeNodeSorter{nativeItems}) - - body := &Body{ - inTree: newInTree(), - items: newNodeSet(), - } - - remain := within - for _, nativeItem := range nativeItems { - beforeItem, item, afterItem := parseBodyItem(nativeItem, remain) - - if beforeItem.Len() > 0 { - body.AppendUnstructuredTokens(beforeItem.Tokens()) - } - body.appendItemNode(item) - - remain = afterItem - } - - if remain.Len() > 0 { - body.AppendUnstructuredTokens(remain.Tokens()) - } - - return before, newNode(body), after -} - -func parseBodyItem(nativeItem hclsyntax.Node, from inputTokens) (inputTokens, *node, inputTokens) { - before, leadComments, within, lineComments, newline, after := from.PartitionBlockItem(nativeItem.Range()) - - var item *node - - switch tItem := nativeItem.(type) { - case *hclsyntax.Attribute: - item = parseAttribute(tItem, within, leadComments, lineComments, newline) - case *hclsyntax.Block: - item = parseBlock(tItem, within, leadComments, lineComments, newline) - default: - // should never happen if caller is behaving - panic("unsupported native item type") - } - - return before, item, after -} - -func parseAttribute(nativeAttr *hclsyntax.Attribute, from, leadComments, lineComments, newline inputTokens) *node { - attr := &Attribute{ - inTree: newInTree(), - } - children := attr.inTree.children - - { - cn := newNode(newComments(leadComments.Tokens())) - attr.leadComments = cn - children.AppendNode(cn) - } - - before, nameTokens, from := from.Partition(nativeAttr.NameRange) - { - children.AppendUnstructuredTokens(before.Tokens()) - if nameTokens.Len() != 1 { - // Should never happen with valid input - panic("attribute name is not exactly one token") - } - token := nameTokens.Tokens()[0] - in := newNode(newIdentifier(token)) - attr.name = in - children.AppendNode(in) - } - - before, equalsTokens, from := from.Partition(nativeAttr.EqualsRange) - children.AppendUnstructuredTokens(before.Tokens()) - children.AppendUnstructuredTokens(equalsTokens.Tokens()) - - before, exprTokens, from := from.Partition(nativeAttr.Expr.Range()) - { - children.AppendUnstructuredTokens(before.Tokens()) - exprNode := parseExpression(nativeAttr.Expr, exprTokens) - attr.expr = exprNode - children.AppendNode(exprNode) - } - - { - cn := newNode(newComments(lineComments.Tokens())) - attr.lineComments = cn - children.AppendNode(cn) - } - - children.AppendUnstructuredTokens(newline.Tokens()) - - // Collect any stragglers, though there shouldn't be any - children.AppendUnstructuredTokens(from.Tokens()) - - return newNode(attr) -} - -func parseBlock(nativeBlock *hclsyntax.Block, from, leadComments, lineComments, newline inputTokens) *node { - block := &Block{ - inTree: newInTree(), - labels: newNodeSet(), - } - children := block.inTree.children - - { - cn := newNode(newComments(leadComments.Tokens())) - block.leadComments = cn - children.AppendNode(cn) - } - - before, typeTokens, from := from.Partition(nativeBlock.TypeRange) - { - children.AppendUnstructuredTokens(before.Tokens()) - if typeTokens.Len() != 1 { - // Should never happen with valid input - panic("block type name is not exactly one token") - } - token := typeTokens.Tokens()[0] - in := newNode(newIdentifier(token)) - block.typeName = in - children.AppendNode(in) - } 
- - for _, rng := range nativeBlock.LabelRanges { - var labelTokens inputTokens - before, labelTokens, from = from.Partition(rng) - children.AppendUnstructuredTokens(before.Tokens()) - tokens := labelTokens.Tokens() - ln := newNode(newQuoted(tokens)) - block.labels.Add(ln) - children.AppendNode(ln) - } - - before, oBrace, from := from.Partition(nativeBlock.OpenBraceRange) - children.AppendUnstructuredTokens(before.Tokens()) - children.AppendUnstructuredTokens(oBrace.Tokens()) - - // We go a bit out of order here: we go hunting for the closing brace - // so that we have a delimited body, but then we'll deal with the body - // before we actually append the closing brace and any straggling tokens - // that appear after it. - bodyTokens, cBrace, from := from.Partition(nativeBlock.CloseBraceRange) - before, body, after := parseBody(nativeBlock.Body, bodyTokens) - children.AppendUnstructuredTokens(before.Tokens()) - block.body = body - children.AppendNode(body) - children.AppendUnstructuredTokens(after.Tokens()) - - children.AppendUnstructuredTokens(cBrace.Tokens()) - - // stragglers - children.AppendUnstructuredTokens(from.Tokens()) - if lineComments.Len() > 0 { - // blocks don't actually have line comments, so we'll just treat - // them as extra stragglers - children.AppendUnstructuredTokens(lineComments.Tokens()) - } - children.AppendUnstructuredTokens(newline.Tokens()) - - return newNode(block) -} - -func parseExpression(nativeExpr hclsyntax.Expression, from inputTokens) *node { - expr := newExpression() - children := expr.inTree.children - - nativeVars := nativeExpr.Variables() - - for _, nativeTraversal := range nativeVars { - before, traversal, after := parseTraversal(nativeTraversal, from) - children.AppendUnstructuredTokens(before.Tokens()) - children.AppendNode(traversal) - expr.absTraversals.Add(traversal) - from = after - } - // Attach any stragglers that don't belong to a traversal to the expression - // itself. In an expression with no traversals at all, this is just the - // entirety of "from". 
- children.AppendUnstructuredTokens(from.Tokens()) - - return newNode(expr) -} - -func parseTraversal(nativeTraversal hcl.Traversal, from inputTokens) (before inputTokens, n *node, after inputTokens) { - traversal := newTraversal() - children := traversal.inTree.children - before, from, after = from.Partition(nativeTraversal.SourceRange()) - - stepAfter := from - for _, nativeStep := range nativeTraversal { - before, step, after := parseTraversalStep(nativeStep, stepAfter) - children.AppendUnstructuredTokens(before.Tokens()) - children.AppendNode(step) - traversal.steps.Add(step) - stepAfter = after - } - - return before, newNode(traversal), after -} - -func parseTraversalStep(nativeStep hcl.Traverser, from inputTokens) (before inputTokens, n *node, after inputTokens) { - var children *nodes - switch tNativeStep := nativeStep.(type) { - - case hcl.TraverseRoot, hcl.TraverseAttr: - step := newTraverseName() - children = step.inTree.children - before, from, after = from.Partition(nativeStep.SourceRange()) - inBefore, token, inAfter := from.PartitionTypeSingle(hclsyntax.TokenIdent) - name := newIdentifier(token) - children.AppendUnstructuredTokens(inBefore.Tokens()) - step.name = children.Append(name) - children.AppendUnstructuredTokens(inAfter.Tokens()) - return before, newNode(step), after - - case hcl.TraverseIndex: - step := newTraverseIndex() - children = step.inTree.children - before, from, after = from.Partition(nativeStep.SourceRange()) - - var inBefore, oBrack, keyTokens, cBrack inputTokens - inBefore, oBrack, from = from.PartitionType(hclsyntax.TokenOBrack) - children.AppendUnstructuredTokens(inBefore.Tokens()) - children.AppendUnstructuredTokens(oBrack.Tokens()) - keyTokens, cBrack, from = from.PartitionType(hclsyntax.TokenCBrack) - - keyVal := tNativeStep.Key - switch keyVal.Type() { - case cty.String: - key := newQuoted(keyTokens.Tokens()) - step.key = children.Append(key) - case cty.Number: - valBefore, valToken, valAfter := keyTokens.PartitionTypeSingle(hclsyntax.TokenNumberLit) - children.AppendUnstructuredTokens(valBefore.Tokens()) - key := newNumber(valToken) - step.key = children.Append(key) - children.AppendUnstructuredTokens(valAfter.Tokens()) - } - - children.AppendUnstructuredTokens(cBrack.Tokens()) - children.AppendUnstructuredTokens(from.Tokens()) - - return before, newNode(step), after - default: - panic(fmt.Sprintf("unsupported traversal step type %T", nativeStep)) - } - -} - -// writerTokens takes a sequence of tokens as produced by the main hclsyntax -// package and transforms it into an equivalent sequence of tokens using -// this package's own token model. -// -// The resulting list contains the same number of tokens and uses the same -// indices as the input, allowing the two sets of tokens to be correlated -// by index. -func writerTokens(nativeTokens hclsyntax.Tokens) Tokens { - // Ultimately we want a slice of token _pointers_, but since we can - // predict how much memory we're going to devote to tokens we'll allocate - // it all as a single flat buffer and thus give the GC less work to do. - tokBuf := make([]Token, len(nativeTokens)) - var lastByteOffset int - for i, mainToken := range nativeTokens { - // Create a copy of the bytes so that we can mutate without - // corrupting the original token stream. 
- bytes := make([]byte, len(mainToken.Bytes))
- copy(bytes, mainToken.Bytes)
-
- tokBuf[i] = Token{
- Type: mainToken.Type,
- Bytes: bytes,
-
- // We assume here that spaces are always ASCII spaces, since
- // that's what the scanner also assumes, and thus the number
- // of bytes skipped is also the number of space characters.
- SpacesBefore: mainToken.Range.Start.Byte - lastByteOffset,
- }
-
- lastByteOffset = mainToken.Range.End.Byte
- }
-
- // Now make a slice of pointers into the previous slice.
- ret := make(Tokens, len(tokBuf))
- for i := range ret {
- ret[i] = &tokBuf[i]
- }
-
- return ret
-}
-
-// partitionTokens takes a sequence of tokens and a hcl.Range and returns
-// two indices within the token sequence that correspond with the range
-// boundaries, such that the slice operator could be used to produce
-// three token sequences for before, within, and after respectively:
-//
-// start, end := partitionTokens(toks, rng)
-// before := toks[:start]
-// within := toks[start:end]
-// after := toks[end:]
-//
-// This works best when the range is aligned with token boundaries (e.g.
-// because it was produced in terms of the scanner's result) but if that isn't
-// true then it will make a best effort that may produce strange results at
-// the boundaries.
-//
-// Native hclsyntax tokens are used here, because they contain the necessary
-// absolute position information. However, since writerTokens produces a
-// correlatable sequence of writer tokens, the resulting indices can be
-// used also to index into its result, allowing the partitioning of writer
-// tokens to be driven by the partitioning of native tokens.
-//
-// The tokens are assumed to be in source order and non-overlapping, which
-// will be true if the token sequence from the scanner is used directly.
-func partitionTokens(toks hclsyntax.Tokens, rng hcl.Range) (start, end int) {
- // We use a linear search here because we assume that in most cases our
- // target range is close to the beginning of the sequence, and the sequences
- // are generally small for most reasonable files anyway.
- for i := 0; ; i++ {
- if i >= len(toks) {
- // No tokens for the given range at all!
- return len(toks), len(toks)
- }
-
- if toks[i].Range.Start.Byte >= rng.Start.Byte {
- start = i
- break
- }
- }
-
- for i := start; ; i++ {
- if i >= len(toks) {
- // The range "hangs off" the end of the token sequence
- return start, len(toks)
- }
-
- if toks[i].Range.Start.Byte >= rng.End.Byte {
- end = i // end marker is exclusive
- break
- }
- }
-
- return start, end
-}
-
-// partitionLeadCommentTokens takes a sequence of tokens that is assumed
-// to immediately precede a construct that can have lead comment tokens,
-// and returns the index into that sequence where the lead comments begin.
-//
-// Lead comments are defined as whole lines containing only comment tokens
-// with no blank lines between. If no such lines are found, the returned
-// index will be len(toks).
-func partitionLeadCommentTokens(toks hclsyntax.Tokens) int {
- // single-line comments (which is what we're interested in here)
- // consume their trailing newline, so we can just walk backwards
- // until we stop seeing comment tokens.
- for i := len(toks) - 1; i >= 0; i-- {
- if toks[i].Type != hclsyntax.TokenComment {
- return i + 1
- }
- }
- return 0
-}
-
-// partitionLineEndTokens takes a sequence of tokens that is assumed
-// to immediately follow a construct that can have a line comment, and
-// returns first the index where any line comments end and then second
-// the index immediately after the trailing newline.
-//
-// Line comments are defined as comments that appear immediately after
-// a construct on the same line where its significant tokens ended.
-//
-// Since single-line comment tokens (# and //) include the newline that
-// terminates them, in the presence of these the two returned indices
-// will be the same since the comment itself serves as the line end.
-func partitionLineEndTokens(toks hclsyntax.Tokens) (afterComment, afterNewline int) {
- for i := 0; i < len(toks); i++ {
- tok := toks[i]
- if tok.Type != hclsyntax.TokenComment {
- switch tok.Type {
- case hclsyntax.TokenNewline:
- return i, i + 1
- case hclsyntax.TokenEOF:
- // Although this is valid, we mustn't include the EOF
- // itself as our "newline" or else strange things will
- // happen when we try to append new items.
- return i, i
- default:
- // If we have well-formed input here then nothing else should be
- // possible. This path should never happen, because we only try
- // to extract tokens from the sequence if the parser succeeded,
- // and it should catch this problem itself.
- panic("malformed line trailers: expected only comments and newlines")
- }
- }
-
- if len(tok.Bytes) > 0 && tok.Bytes[len(tok.Bytes)-1] == '\n' {
- // Newline at the end of a single-line comment serves both as
- // the end of comments *and* the end of the line.
- return i + 1, i + 1
- }
- }
- return len(toks), len(toks)
-}
-
-// lexConfig uses the hclsyntax scanner to get a token stream and then
-// rewrites it into this package's token model.
-//
-// Any errors produced during scanning are ignored, so the results of this
-// function should be used with care.
-func lexConfig(src []byte) Tokens {
- mainTokens, _ := hclsyntax.LexConfig(src, "", hcl.Pos{Byte: 0, Line: 1, Column: 1})
- return writerTokens(mainTokens)
-}
diff --git a/vendor/github.com/hashicorp/hcl2/hclwrite/public.go b/vendor/github.com/hashicorp/hcl2/hclwrite/public.go
deleted file mode 100644
index 4d5ce2a6e..000000000
--- a/vendor/github.com/hashicorp/hcl2/hclwrite/public.go
+++ /dev/null
@@ -1,44 +0,0 @@
-package hclwrite
-
-import (
- "bytes"
-
- "github.com/hashicorp/hcl2/hcl"
-)
-
-// NewFile creates a new file object that is empty and ready to have constructs
-// added to it.
-func NewFile() *File {
- body := &Body{
- inTree: newInTree(),
- items: newNodeSet(),
- }
- file := &File{
- inTree: newInTree(),
- }
- file.body = file.inTree.children.Append(body)
- return file
-}
-
-// ParseConfig interprets the given source bytes into a *hclwrite.File. The
-// resulting AST can be used to perform surgical edits on the source code
-// before turning it back into bytes again.
-func ParseConfig(src []byte, filename string, start hcl.Pos) (*File, hcl.Diagnostics) {
- return parse(src, filename, start)
-}
-
-// Format takes source code and performs simple whitespace changes to transform
-// it to a canonical layout style.
-//
-// Format skips constructing an AST and works directly with tokens, so it
-// is less expensive than formatting via the AST for situations where no other
-// changes will be made.
It also ignores syntax errors and can thus be applied -// to partial source code, although the result in that case may not be -// desirable. -func Format(src []byte) []byte { - tokens := lexConfig(src) - format(tokens) - buf := &bytes.Buffer{} - tokens.WriteTo(buf) - return buf.Bytes() -} diff --git a/vendor/github.com/hashicorp/hcl2/hclwrite/tokens.go b/vendor/github.com/hashicorp/hcl2/hclwrite/tokens.go deleted file mode 100644 index d87f81853..000000000 --- a/vendor/github.com/hashicorp/hcl2/hclwrite/tokens.go +++ /dev/null @@ -1,122 +0,0 @@ -package hclwrite - -import ( - "bytes" - "io" - - "github.com/apparentlymart/go-textseg/textseg" - "github.com/hashicorp/hcl2/hcl" - "github.com/hashicorp/hcl2/hcl/hclsyntax" -) - -// Token is a single sequence of bytes annotated with a type. It is similar -// in purpose to hclsyntax.Token, but discards the source position information -// since that is not useful in code generation. -type Token struct { - Type hclsyntax.TokenType - Bytes []byte - - // We record the number of spaces before each token so that we can - // reproduce the exact layout of the original file when we're making - // surgical changes in-place. When _new_ code is created it will always - // be in the canonical style, but we preserve layout of existing code. - SpacesBefore int -} - -// asHCLSyntax returns the receiver expressed as an incomplete hclsyntax.Token. -// A complete token is not possible since we don't have source location -// information here, and so this method is unexported so we can be sure it will -// only be used for internal purposes where we know the range isn't important. -// -// This is primarily intended to allow us to re-use certain functionality from -// hclsyntax rather than re-implementing it against our own token type here. -func (t *Token) asHCLSyntax() hclsyntax.Token { - return hclsyntax.Token{ - Type: t.Type, - Bytes: t.Bytes, - Range: hcl.Range{ - Filename: "", - }, - } -} - -// Tokens is a flat list of tokens. -type Tokens []*Token - -func (ts Tokens) Bytes() []byte { - buf := &bytes.Buffer{} - ts.WriteTo(buf) - return buf.Bytes() -} - -func (ts Tokens) testValue() string { - return string(ts.Bytes()) -} - -// Columns returns the number of columns (grapheme clusters) the token sequence -// occupies. The result is not meaningful if there are newline or single-line -// comment tokens in the sequence. -func (ts Tokens) Columns() int { - ret := 0 - for _, token := range ts { - ret += token.SpacesBefore // spaces are always worth one column each - ct, _ := textseg.TokenCount(token.Bytes, textseg.ScanGraphemeClusters) - ret += ct - } - return ret -} - -// WriteTo takes an io.Writer and writes the bytes for each token to it, -// along with the spacing that separates each token. In other words, this -// allows serializing the tokens to a file or other such byte stream. -func (ts Tokens) WriteTo(wr io.Writer) (int64, error) { - // We know we're going to be writing a lot of small chunks of repeated - // space characters, so we'll prepare a buffer of these that we can - // easily pass to wr.Write without any further allocation. 
- spaces := make([]byte, 40) - for i := range spaces { - spaces[i] = ' ' - } - - var n int64 - var err error - for _, token := range ts { - if err != nil { - return n, err - } - - for spacesBefore := token.SpacesBefore; spacesBefore > 0; spacesBefore -= len(spaces) { - thisChunk := spacesBefore - if thisChunk > len(spaces) { - thisChunk = len(spaces) - } - var thisN int - thisN, err = wr.Write(spaces[:thisChunk]) - n += int64(thisN) - if err != nil { - return n, err - } - } - - var thisN int - thisN, err = wr.Write(token.Bytes) - n += int64(thisN) - } - - return n, err -} - -func (ts Tokens) walkChildNodes(w internalWalkFunc) { - // Unstructured tokens have no child nodes -} - -func (ts Tokens) BuildTokens(to Tokens) Tokens { - return append(to, ts...) -} - -func newIdentToken(name string) *Token { - return &Token{ - Type: hclsyntax.TokenIdent, - Bytes: []byte(name), - } -} diff --git a/vendor/github.com/hashicorp/logutils/LICENSE b/vendor/github.com/hashicorp/logutils/LICENSE deleted file mode 100644 index c33dcc7c9..000000000 --- a/vendor/github.com/hashicorp/logutils/LICENSE +++ /dev/null @@ -1,354 +0,0 @@ -Mozilla Public License, version 2.0 - -1. Definitions - -1.1. “Contributor” - - means each individual or legal entity that creates, contributes to the - creation of, or owns Covered Software. - -1.2. “Contributor Version” - - means the combination of the Contributions of others (if any) used by a - Contributor and that particular Contributor’s Contribution. - -1.3. “Contribution” - - means Covered Software of a particular Contributor. - -1.4. “Covered Software” - - means Source Code Form to which the initial Contributor has attached the - notice in Exhibit A, the Executable Form of such Source Code Form, and - Modifications of such Source Code Form, in each case including portions - thereof. - -1.5. “Incompatible With Secondary Licenses” - means - - a. that the initial Contributor has attached the notice described in - Exhibit B to the Covered Software; or - - b. that the Covered Software was made available under the terms of version - 1.1 or earlier of the License, but not also under the terms of a - Secondary License. - -1.6. “Executable Form” - - means any form of the work other than Source Code Form. - -1.7. “Larger Work” - - means a work that combines Covered Software with other material, in a separate - file or files, that is not Covered Software. - -1.8. “License” - - means this document. - -1.9. “Licensable” - - means having the right to grant, to the maximum extent possible, whether at the - time of the initial grant or subsequently, any and all of the rights conveyed by - this License. - -1.10. “Modifications” - - means any of the following: - - a. any file in Source Code Form that results from an addition to, deletion - from, or modification of the contents of Covered Software; or - - b. any new file in Source Code Form that contains any Covered Software. - -1.11. “Patent Claims” of a Contributor - - means any patent claim(s), including without limitation, method, process, - and apparatus claims, in any patent Licensable by such Contributor that - would be infringed, but for the grant of the License, by the making, - using, selling, offering for sale, having made, import, or transfer of - either its Contributions or its Contributor Version. - -1.12. 
“Secondary License” - - means either the GNU General Public License, Version 2.0, the GNU Lesser - General Public License, Version 2.1, the GNU Affero General Public - License, Version 3.0, or any later versions of those licenses. - -1.13. “Source Code Form” - - means the form of the work preferred for making modifications. - -1.14. “You” (or “Your”) - - means an individual or a legal entity exercising rights under this - License. For legal entities, “You” includes any entity that controls, is - controlled by, or is under common control with You. For purposes of this - definition, “control” means (a) the power, direct or indirect, to cause - the direction or management of such entity, whether by contract or - otherwise, or (b) ownership of more than fifty percent (50%) of the - outstanding shares or beneficial ownership of such entity. - - -2. License Grants and Conditions - -2.1. Grants - - Each Contributor hereby grants You a world-wide, royalty-free, - non-exclusive license: - - a. under intellectual property rights (other than patent or trademark) - Licensable by such Contributor to use, reproduce, make available, - modify, display, perform, distribute, and otherwise exploit its - Contributions, either on an unmodified basis, with Modifications, or as - part of a Larger Work; and - - b. under Patent Claims of such Contributor to make, use, sell, offer for - sale, have made, import, and otherwise transfer either its Contributions - or its Contributor Version. - -2.2. Effective Date - - The licenses granted in Section 2.1 with respect to any Contribution become - effective for each Contribution on the date the Contributor first distributes - such Contribution. - -2.3. Limitations on Grant Scope - - The licenses granted in this Section 2 are the only rights granted under this - License. No additional rights or licenses will be implied from the distribution - or licensing of Covered Software under this License. Notwithstanding Section - 2.1(b) above, no patent license is granted by a Contributor: - - a. for any code that a Contributor has removed from Covered Software; or - - b. for infringements caused by: (i) Your and any other third party’s - modifications of Covered Software, or (ii) the combination of its - Contributions with other software (except as part of its Contributor - Version); or - - c. under Patent Claims infringed by Covered Software in the absence of its - Contributions. - - This License does not grant any rights in the trademarks, service marks, or - logos of any Contributor (except as may be necessary to comply with the - notice requirements in Section 3.4). - -2.4. Subsequent Licenses - - No Contributor makes additional grants as a result of Your choice to - distribute the Covered Software under a subsequent version of this License - (see Section 10.2) or under the terms of a Secondary License (if permitted - under the terms of Section 3.3). - -2.5. Representation - - Each Contributor represents that the Contributor believes its Contributions - are its original creation(s) or it has sufficient rights to grant the - rights to its Contributions conveyed by this License. - -2.6. Fair Use - - This License is not intended to limit any rights You have under applicable - copyright doctrines of fair use, fair dealing, or other equivalents. - -2.7. Conditions - - Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in - Section 2.1. - - -3. Responsibilities - -3.1. 
Distribution of Source Form - - All distribution of Covered Software in Source Code Form, including any - Modifications that You create or to which You contribute, must be under the - terms of this License. You must inform recipients that the Source Code Form - of the Covered Software is governed by the terms of this License, and how - they can obtain a copy of this License. You may not attempt to alter or - restrict the recipients’ rights in the Source Code Form. - -3.2. Distribution of Executable Form - - If You distribute Covered Software in Executable Form then: - - a. such Covered Software must also be made available in Source Code Form, - as described in Section 3.1, and You must inform recipients of the - Executable Form how they can obtain a copy of such Source Code Form by - reasonable means in a timely manner, at a charge no more than the cost - of distribution to the recipient; and - - b. You may distribute such Executable Form under the terms of this License, - or sublicense it under different terms, provided that the license for - the Executable Form does not attempt to limit or alter the recipients’ - rights in the Source Code Form under this License. - -3.3. Distribution of a Larger Work - - You may create and distribute a Larger Work under terms of Your choice, - provided that You also comply with the requirements of this License for the - Covered Software. If the Larger Work is a combination of Covered Software - with a work governed by one or more Secondary Licenses, and the Covered - Software is not Incompatible With Secondary Licenses, this License permits - You to additionally distribute such Covered Software under the terms of - such Secondary License(s), so that the recipient of the Larger Work may, at - their option, further distribute the Covered Software under the terms of - either this License or such Secondary License(s). - -3.4. Notices - - You may not remove or alter the substance of any license notices (including - copyright notices, patent notices, disclaimers of warranty, or limitations - of liability) contained within the Source Code Form of the Covered - Software, except that You may alter any license notices to the extent - required to remedy known factual inaccuracies. - -3.5. Application of Additional Terms - - You may choose to offer, and to charge a fee for, warranty, support, - indemnity or liability obligations to one or more recipients of Covered - Software. However, You may do so only on Your own behalf, and not on behalf - of any Contributor. You must make it absolutely clear that any such - warranty, support, indemnity, or liability obligation is offered by You - alone, and You hereby agree to indemnify every Contributor for any - liability incurred by such Contributor as a result of warranty, support, - indemnity or liability terms You offer. You may include additional - disclaimers of warranty and limitations of liability specific to any - jurisdiction. - -4. Inability to Comply Due to Statute or Regulation - - If it is impossible for You to comply with any of the terms of this License - with respect to some or all of the Covered Software due to statute, judicial - order, or regulation then You must: (a) comply with the terms of this License - to the maximum extent possible; and (b) describe the limitations and the code - they affect. Such description must be placed in a text file included with all - distributions of the Covered Software under this License. 
Except to the - extent prohibited by statute or regulation, such description must be - sufficiently detailed for a recipient of ordinary skill to be able to - understand it. - -5. Termination - -5.1. The rights granted under this License will terminate automatically if You - fail to comply with any of its terms. However, if You become compliant, - then the rights granted under this License from a particular Contributor - are reinstated (a) provisionally, unless and until such Contributor - explicitly and finally terminates Your grants, and (b) on an ongoing basis, - if such Contributor fails to notify You of the non-compliance by some - reasonable means prior to 60 days after You have come back into compliance. - Moreover, Your grants from a particular Contributor are reinstated on an - ongoing basis if such Contributor notifies You of the non-compliance by - some reasonable means, this is the first time You have received notice of - non-compliance with this License from such Contributor, and You become - compliant prior to 30 days after Your receipt of the notice. - -5.2. If You initiate litigation against any entity by asserting a patent - infringement claim (excluding declaratory judgment actions, counter-claims, - and cross-claims) alleging that a Contributor Version directly or - indirectly infringes any patent, then the rights granted to You by any and - all Contributors for the Covered Software under Section 2.1 of this License - shall terminate. - -5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user - license agreements (excluding distributors and resellers) which have been - validly granted by You or Your distributors under this License prior to - termination shall survive termination. - -6. Disclaimer of Warranty - - Covered Software is provided under this License on an “as is” basis, without - warranty of any kind, either expressed, implied, or statutory, including, - without limitation, warranties that the Covered Software is free of defects, - merchantable, fit for a particular purpose or non-infringing. The entire - risk as to the quality and performance of the Covered Software is with You. - Should any Covered Software prove defective in any respect, You (not any - Contributor) assume the cost of any necessary servicing, repair, or - correction. This disclaimer of warranty constitutes an essential part of this - License. No use of any Covered Software is authorized under this License - except under this disclaimer. - -7. Limitation of Liability - - Under no circumstances and under no legal theory, whether tort (including - negligence), contract, or otherwise, shall any Contributor, or anyone who - distributes Covered Software as permitted above, be liable to You for any - direct, indirect, special, incidental, or consequential damages of any - character including, without limitation, damages for lost profits, loss of - goodwill, work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses, even if such party shall have been - informed of the possibility of such damages. This limitation of liability - shall not apply to liability for death or personal injury resulting from such - party’s negligence to the extent applicable law prohibits such limitation. - Some jurisdictions do not allow the exclusion or limitation of incidental or - consequential damages, so this exclusion and limitation may not apply to You. - -8. 
Litigation - - Any litigation relating to this License may be brought only in the courts of - a jurisdiction where the defendant maintains its principal place of business - and such litigation shall be governed by laws of that jurisdiction, without - reference to its conflict-of-law provisions. Nothing in this Section shall - prevent a party’s ability to bring cross-claims or counter-claims. - -9. Miscellaneous - - This License represents the complete agreement concerning the subject matter - hereof. If any provision of this License is held to be unenforceable, such - provision shall be reformed only to the extent necessary to make it - enforceable. Any law or regulation which provides that the language of a - contract shall be construed against the drafter shall not be used to construe - this License against a Contributor. - - -10. Versions of the License - -10.1. New Versions - - Mozilla Foundation is the license steward. Except as provided in Section - 10.3, no one other than the license steward has the right to modify or - publish new versions of this License. Each version will be given a - distinguishing version number. - -10.2. Effect of New Versions - - You may distribute the Covered Software under the terms of the version of - the License under which You originally received the Covered Software, or - under the terms of any subsequent version published by the license - steward. - -10.3. Modified Versions - - If you create software not governed by this License, and you want to - create a new license for such software, you may create and use a modified - version of this License if you rename the license and remove any - references to the name of the license steward (except to note that such - modified license differs from this License). - -10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses - If You choose to distribute Source Code Form that is Incompatible With - Secondary Licenses under the terms of this version of the License, the - notice described in Exhibit B of this License must be attached. - -Exhibit A - Source Code Form License Notice - - This Source Code Form is subject to the - terms of the Mozilla Public License, v. - 2.0. If a copy of the MPL was not - distributed with this file, You can - obtain one at - http://mozilla.org/MPL/2.0/. - -If it is not possible or desirable to put the notice in a particular file, then -You may include the notice in a location (such as a LICENSE file in a relevant -directory) where a recipient would be likely to look for such a notice. - -You may add additional accurate notices of copyright ownership. - -Exhibit B - “Incompatible With Secondary Licenses” Notice - - This Source Code Form is “Incompatible - With Secondary Licenses”, as defined by - the Mozilla Public License, v. 2.0. - diff --git a/vendor/github.com/hashicorp/logutils/README.md b/vendor/github.com/hashicorp/logutils/README.md deleted file mode 100644 index 49490eaeb..000000000 --- a/vendor/github.com/hashicorp/logutils/README.md +++ /dev/null @@ -1,36 +0,0 @@ -# logutils - -logutils is a Go package that augments the standard library "log" package -to make logging a bit more modern, without fragmenting the Go ecosystem -with new logging packages. - -## The simplest thing that could possibly work - -Presumably your application already uses the default `log` package. 
To switch, you'll want your code to look like the following:
-
-```go
-package main
-
-import (
- "log"
- "os"
-
- "github.com/hashicorp/logutils"
-)
-
-func main() {
- filter := &logutils.LevelFilter{
- Levels: []logutils.LogLevel{"DEBUG", "WARN", "ERROR"},
- MinLevel: logutils.LogLevel("WARN"),
- Writer: os.Stderr,
- }
- log.SetOutput(filter)
-
- log.Print("[DEBUG] Debugging") // this will not print
- log.Print("[WARN] Warning") // this will
- log.Print("[ERROR] Erring") // and so will this
- log.Print("Message I haven't updated") // and so will this
-}
-```
-
-This logs to standard error exactly like Go's standard logger. Any log messages you haven't converted to have a level will continue to print as before.
diff --git a/vendor/github.com/hashicorp/logutils/go.mod b/vendor/github.com/hashicorp/logutils/go.mod
deleted file mode 100644
index ba38a4576..000000000
--- a/vendor/github.com/hashicorp/logutils/go.mod
+++ /dev/null
@@ -1 +0,0 @@
-module github.com/hashicorp/logutils
diff --git a/vendor/github.com/hashicorp/logutils/level.go b/vendor/github.com/hashicorp/logutils/level.go
deleted file mode 100644
index 6381bf162..000000000
--- a/vendor/github.com/hashicorp/logutils/level.go
+++ /dev/null
@@ -1,81 +0,0 @@
-// Package logutils augments the standard log package with levels.
-package logutils
-
-import (
- "bytes"
- "io"
- "sync"
-)
-
-type LogLevel string
-
-// LevelFilter is an io.Writer that can be used with a logger that
-// will filter out log messages that aren't at least a certain level.
-//
-// Once the filter is in use somewhere, it is not safe to modify
-// the structure.
-type LevelFilter struct {
- // Levels is the list of log levels, in increasing order of
- // severity. Example might be: {"DEBUG", "WARN", "ERROR"}.
- Levels []LogLevel
-
- // MinLevel is the minimum level allowed through
- MinLevel LogLevel
-
- // The underlying io.Writer where log messages that pass the filter
- // will be sent.
- Writer io.Writer
-
- badLevels map[LogLevel]struct{}
- once sync.Once
-}
-
-// Check reports whether a given line would be included by the level
-// filter.
-func (f *LevelFilter) Check(line []byte) bool {
- f.once.Do(f.init)
-
- // Check for a log level
- var level LogLevel
- x := bytes.IndexByte(line, '[')
- if x >= 0 {
- y := bytes.IndexByte(line[x:], ']')
- if y >= 0 {
- level = LogLevel(line[x+1 : x+y])
- }
- }
-
- _, ok := f.badLevels[level]
- return !ok
-}
-
-func (f *LevelFilter) Write(p []byte) (n int, err error) {
- // Note in general that io.Writer can receive any byte sequence
- // to write, but the "log" package always guarantees that we only
- // get a single line. We use that as a slight optimization within
- // this method, assuming we're dealing with a single, complete line
- // of log data.
- - if !f.Check(p) { - return len(p), nil - } - - return f.Writer.Write(p) -} - -// SetMinLevel is used to update the minimum log level -func (f *LevelFilter) SetMinLevel(min LogLevel) { - f.MinLevel = min - f.init() -} - -func (f *LevelFilter) init() { - badLevels := make(map[LogLevel]struct{}) - for _, level := range f.Levels { - if level == f.MinLevel { - break - } - badLevels[level] = struct{}{} - } - f.badLevels = badLevels -} diff --git a/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/diagnostic.go b/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/diagnostic.go index 8d04ad4de..d9d276258 100644 --- a/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/diagnostic.go +++ b/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/diagnostic.go @@ -4,7 +4,7 @@ import ( "fmt" legacyhclparser "github.com/hashicorp/hcl/hcl/parser" - "github.com/hashicorp/hcl2/hcl" + "github.com/hashicorp/hcl/v2" ) // Diagnostic describes a problem (error or warning) encountered during diff --git a/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/load.go b/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/load.go index 2d13fe124..a070f76e0 100644 --- a/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/load.go +++ b/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/load.go @@ -6,7 +6,7 @@ import ( "path/filepath" "strings" - "github.com/hashicorp/hcl2/hcl" + "github.com/hashicorp/hcl/v2" ) // LoadModule reads the directory at the given path and attempts to interpret @@ -52,12 +52,12 @@ func (m *Module) init(diags Diagnostics) { // case so callers can easily recognize it. for _, r := range m.ManagedResources { if _, exists := m.RequiredProviders[r.Provider.Name]; !exists { - m.RequiredProviders[r.Provider.Name] = []string{} + m.RequiredProviders[r.Provider.Name] = &ProviderRequirement{} } } for _, r := range m.DataResources { if _, exists := m.RequiredProviders[r.Provider.Name]; !exists { - m.RequiredProviders[r.Provider.Name] = []string{} + m.RequiredProviders[r.Provider.Name] = &ProviderRequirement{} } } diff --git a/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/load_hcl.go b/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/load_hcl.go index 72b5d4af9..f83ac8726 100644 --- a/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/load_hcl.go +++ b/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/load_hcl.go @@ -5,11 +5,11 @@ import ( "fmt" "strings" - "github.com/hashicorp/hcl2/hcl/hclsyntax" + "github.com/hashicorp/hcl/v2/hclsyntax" - "github.com/hashicorp/hcl2/gohcl" - "github.com/hashicorp/hcl2/hcl" - "github.com/hashicorp/hcl2/hclparse" + "github.com/hashicorp/hcl/v2" + "github.com/hashicorp/hcl/v2/gohcl" + "github.com/hashicorp/hcl/v2/hclparse" ctyjson "github.com/zclconf/go-cty/cty/json" ) @@ -51,18 +51,17 @@ func loadModule(dir string) (*Module, Diagnostics) { } } - for _, block := range content.Blocks { - // Our schema only allows required_providers here, so we - // assume that we'll only get that block type. - attrs, attrDiags := block.Body.JustAttributes() - diags = append(diags, attrDiags...) - - for name, attr := range attrs { - var version string - valDiags := gohcl.DecodeExpression(attr.Expr, nil, &version) - diags = append(diags, valDiags...) 
- if !valDiags.HasErrors() { - mod.RequiredProviders[name] = append(mod.RequiredProviders[name], version) + for _, innerBlock := range content.Blocks { + switch innerBlock.Type { + case "required_providers": + reqs, reqsDiags := decodeRequiredProvidersBlock(innerBlock) + diags = append(diags, reqsDiags...) + for name, req := range reqs { + if _, exists := mod.RequiredProviders[name]; !exists { + mod.RequiredProviders[name] = req + } else { + mod.RequiredProviders[name].VersionConstraints = append(mod.RequiredProviders[name].VersionConstraints, req.VersionConstraints...) + } } } } @@ -178,22 +177,20 @@ func loadModule(dir string) (*Module, Diagnostics) { diags = append(diags, contentDiags...) name := block.Labels[0] - + // Even if there isn't an explicit version required, we still + // need an entry in our map to signal the unversioned dependency. + if _, exists := mod.RequiredProviders[name]; !exists { + mod.RequiredProviders[name] = &ProviderRequirement{} + } if attr, defined := content.Attributes["version"]; defined { var version string valDiags := gohcl.DecodeExpression(attr.Expr, nil, &version) diags = append(diags, valDiags...) if !valDiags.HasErrors() { - mod.RequiredProviders[name] = append(mod.RequiredProviders[name], version) + mod.RequiredProviders[name].VersionConstraints = append(mod.RequiredProviders[name].VersionConstraints, version) } } - // Even if there wasn't an explicit version required, we still - // need an entry in our map to signal the unversioned dependency. - if _, exists := mod.RequiredProviders[name]; !exists { - mod.RequiredProviders[name] = []string{} - } - case "resource", "data": content, _, contentDiags := block.Body.PartialContent(resourceSchema) diff --git a/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/load_legacy.go b/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/load_legacy.go index 86ffdf11d..c79b033b6 100644 --- a/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/load_legacy.go +++ b/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/load_legacy.go @@ -267,17 +267,15 @@ func loadModuleLegacyHCL(dir string) (*Module, Diagnostics) { if err != nil { return nil, diagnosticsErrorf("invalid provider block at %s: %s", item.Pos(), err) } - - if block.Version != "" { - mod.RequiredProviders[name] = append(mod.RequiredProviders[name], block.Version) - } - // Even if there wasn't an explicit version required, we still // need an entry in our map to signal the unversioned dependency. 
if _, exists := mod.RequiredProviders[name]; !exists { - mod.RequiredProviders[name] = []string{} + mod.RequiredProviders[name] = &ProviderRequirement{} } + if block.Version != "" { + mod.RequiredProviders[name].VersionConstraints = append(mod.RequiredProviders[name].VersionConstraints, block.Version) + } } } } diff --git a/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/module.go b/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/module.go index 65ddb2307..63027d184 100644 --- a/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/module.go +++ b/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/module.go @@ -9,8 +9,8 @@ type Module struct { Variables map[string]*Variable `json:"variables"` Outputs map[string]*Output `json:"outputs"` - RequiredCore []string `json:"required_core,omitempty"` - RequiredProviders map[string][]string `json:"required_providers"` + RequiredCore []string `json:"required_core,omitempty"` + RequiredProviders map[string]*ProviderRequirement `json:"required_providers"` ManagedResources map[string]*Resource `json:"managed_resources"` DataResources map[string]*Resource `json:"data_resources"` @@ -27,7 +27,7 @@ func newModule(path string) *Module { Path: path, Variables: make(map[string]*Variable), Outputs: make(map[string]*Output), - RequiredProviders: make(map[string][]string), + RequiredProviders: make(map[string]*ProviderRequirement), ManagedResources: make(map[string]*Resource), DataResources: make(map[string]*Resource), ModuleCalls: make(map[string]*ModuleCall), diff --git a/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/provider_ref.go b/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/provider_ref.go index d92483778..157c8c2c1 100644 --- a/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/provider_ref.go +++ b/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/provider_ref.go @@ -1,5 +1,11 @@ package tfconfig +import ( + "github.com/hashicorp/hcl/v2" + "github.com/hashicorp/hcl/v2/gohcl" + "github.com/zclconf/go-cty/cty/gocty" +) + // ProviderRef is a reference to a provider configuration within a module. // It represents the contents of a "provider" argument in a resource, or // a value in the "providers" map for a module call. @@ -7,3 +13,73 @@ type ProviderRef struct { Name string `json:"name"` Alias string `json:"alias,omitempty"` // Empty if the default provider configuration is referenced } + +type ProviderRequirement struct { + Source string `json:"source,omitempty"` + VersionConstraints []string `json:"version_constraints,omitempty"` +} + +func decodeRequiredProvidersBlock(block *hcl.Block) (map[string]*ProviderRequirement, hcl.Diagnostics) { + attrs, diags := block.Body.JustAttributes() + reqs := make(map[string]*ProviderRequirement) + for name, attr := range attrs { + expr, err := attr.Expr.Value(nil) + if err != nil { + diags = append(diags, err...) + } + + switch { + case expr.Type().IsPrimitiveType(): + var version string + valDiags := gohcl.DecodeExpression(attr.Expr, nil, &version) + diags = append(diags, valDiags...) 
+ if !valDiags.HasErrors() { + reqs[name] = &ProviderRequirement{ + VersionConstraints: []string{version}, + } + } + + case expr.Type().IsObjectType(): + var pr ProviderRequirement + if expr.Type().HasAttribute("version") { + var version string + err := gocty.FromCtyValue(expr.GetAttr("version"), &version) + if err == nil { + pr.VersionConstraints = append(pr.VersionConstraints, version) + } else { + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Unsuitable value type", + Detail: "Unsuitable value: string required", + Subject: attr.Expr.Range().Ptr(), + }) + } + } + if expr.Type().HasAttribute("source") { + var source string + err := gocty.FromCtyValue(expr.GetAttr("source"), &source) + if err == nil { + pr.Source = source + } else { + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Unsuitable value type", + Detail: "Unsuitable value: string required", + Subject: attr.Expr.Range().Ptr(), + }) + } + } + reqs[name] = &pr + + default: + diags = append(diags, &hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Unsuitable value type", + Detail: "Unsuitable value: string required", + Subject: attr.Expr.Range().Ptr(), + }) + } + } + + return reqs, diags +} diff --git a/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/schema.go b/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/schema.go index 3af742ff7..fd6ca9e70 100644 --- a/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/schema.go +++ b/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/schema.go @@ -1,7 +1,7 @@ package tfconfig import ( - "github.com/hashicorp/hcl2/hcl" + "github.com/hashicorp/hcl/v2" ) var rootSchema = &hcl.BodySchema{ diff --git a/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/source_pos.go b/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/source_pos.go index 883914eb7..548c9f9a3 100644 --- a/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/source_pos.go +++ b/vendor/github.com/hashicorp/terraform-config-inspect/tfconfig/source_pos.go @@ -2,7 +2,7 @@ package tfconfig import ( legacyhcltoken "github.com/hashicorp/hcl/hcl/token" - "github.com/hashicorp/hcl2/hcl" + "github.com/hashicorp/hcl/v2" ) // SourcePos is a pointer to a particular location in a source file. 
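The tfconfig changes above replace the old `map[string][]string` of provider version constraints with `map[string]*ProviderRequirement`, so the legacy string form and the newer object form with `source` and `version` both decode into a single shape. The following is a minimal sketch of reading that structure, not part of the diff: the `tfconfig.LoadModule` entry point, the `Diagnostics.HasErrors` helper, and the `Source`/`VersionConstraints` fields are taken from this vendored package, while the `./example` directory and its contents are hypothetical.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform-config-inspect/tfconfig"
)

func main() {
	// Hypothetical module directory containing, for example:
	//
	//   terraform {
	//     required_providers {
	//       random = ">= 2.1"                                         # legacy string form
	//       aws    = { source = "hashicorp/aws", version = "~> 2.0" } # object form
	//     }
	//   }
	mod, diags := tfconfig.LoadModule("./example")
	if diags.HasErrors() {
		fmt.Printf("%d problems while loading module\n", len(diags))
		return
	}

	// Both forms decode into the same map; a provider that is used but never
	// version-constrained still gets an empty *ProviderRequirement entry.
	for name, req := range mod.RequiredProviders {
		fmt.Printf("%s: source=%q constraints=%v\n", name, req.Source, req.VersionConstraints)
	}
}
```

Under these assumptions, the string form populates only `VersionConstraints`, the object form can also set `Source`, and repeated declarations for the same provider append their constraints rather than overwriting them, matching the merge logic in `load_hcl.go` above.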
diff --git a/svchost/auth/cache.go b/vendor/github.com/hashicorp/terraform-svchost/auth/cache.go similarity index 98% rename from svchost/auth/cache.go rename to vendor/github.com/hashicorp/terraform-svchost/auth/cache.go index 509d89fd4..0dae567db 100644 --- a/svchost/auth/cache.go +++ b/vendor/github.com/hashicorp/terraform-svchost/auth/cache.go @@ -1,7 +1,7 @@ package auth import ( - "github.com/hashicorp/terraform/svchost" + "github.com/hashicorp/terraform-svchost" ) // CachingCredentialsSource creates a new credentials source that wraps another diff --git a/svchost/auth/credentials.go b/vendor/github.com/hashicorp/terraform-svchost/auth/credentials.go similarity index 99% rename from svchost/auth/credentials.go rename to vendor/github.com/hashicorp/terraform-svchost/auth/credentials.go index d86492c54..36441cd11 100644 --- a/svchost/auth/credentials.go +++ b/vendor/github.com/hashicorp/terraform-svchost/auth/credentials.go @@ -8,7 +8,7 @@ import ( "github.com/zclconf/go-cty/cty" - "github.com/hashicorp/terraform/svchost" + "github.com/hashicorp/terraform-svchost" ) // Credentials is a list of CredentialsSource objects that can be tried in diff --git a/svchost/auth/from_map.go b/vendor/github.com/hashicorp/terraform-svchost/auth/from_map.go similarity index 100% rename from svchost/auth/from_map.go rename to vendor/github.com/hashicorp/terraform-svchost/auth/from_map.go diff --git a/svchost/auth/helper_program.go b/vendor/github.com/hashicorp/terraform-svchost/auth/helper_program.go similarity index 99% rename from svchost/auth/helper_program.go rename to vendor/github.com/hashicorp/terraform-svchost/auth/helper_program.go index 5b7281bd7..76505f209 100644 --- a/svchost/auth/helper_program.go +++ b/vendor/github.com/hashicorp/terraform-svchost/auth/helper_program.go @@ -9,7 +9,7 @@ import ( ctyjson "github.com/zclconf/go-cty/cty/json" - "github.com/hashicorp/terraform/svchost" + "github.com/hashicorp/terraform-svchost" ) type helperProgramCredentialsSource struct { diff --git a/svchost/auth/static.go b/vendor/github.com/hashicorp/terraform-svchost/auth/static.go similarity index 96% rename from svchost/auth/static.go rename to vendor/github.com/hashicorp/terraform-svchost/auth/static.go index ba5252a2c..f8b0b076e 100644 --- a/svchost/auth/static.go +++ b/vendor/github.com/hashicorp/terraform-svchost/auth/static.go @@ -3,7 +3,7 @@ package auth import ( "fmt" - "github.com/hashicorp/terraform/svchost" + "github.com/hashicorp/terraform-svchost" ) // StaticCredentialsSource is a credentials source that retrieves credentials diff --git a/svchost/auth/token_credentials.go b/vendor/github.com/hashicorp/terraform-svchost/auth/token_credentials.go similarity index 100% rename from svchost/auth/token_credentials.go rename to vendor/github.com/hashicorp/terraform-svchost/auth/token_credentials.go diff --git a/svchost/disco/disco.go b/vendor/github.com/hashicorp/terraform-svchost/disco/disco.go similarity index 96% rename from svchost/disco/disco.go rename to vendor/github.com/hashicorp/terraform-svchost/disco/disco.go index 5d8e46934..978313633 100644 --- a/svchost/disco/disco.go +++ b/vendor/github.com/hashicorp/terraform-svchost/disco/disco.go @@ -17,10 +17,8 @@ import ( "net/url" "time" - cleanhttp "github.com/hashicorp/go-cleanhttp" - "github.com/hashicorp/terraform/httpclient" - "github.com/hashicorp/terraform/svchost" - "github.com/hashicorp/terraform/svchost/auth" + "github.com/hashicorp/terraform-svchost" + "github.com/hashicorp/terraform-svchost/auth" ) const ( @@ -38,7 +36,7 @@ const ( 
) // httpTransport is overridden during tests, to skip TLS verification. -var httpTransport = cleanhttp.DefaultPooledTransport() +var httpTransport = defaultHttpTransport() // Disco is the main type in this package, which allows discovery on given // hostnames and caches the results by hostname to avoid repeated requests @@ -66,6 +64,13 @@ func NewWithCredentialsSource(credsSrc auth.CredentialsSource) *Disco { } } +func (d *Disco) SetUserAgent(uaString string) { + d.Transport = &userAgentRoundTripper{ + innerRt: d.Transport, + userAgent: uaString, + } +} + // SetCredentialsSource provides a credentials source that will be used to // add credentials to outgoing discovery requests, where available. // @@ -185,7 +190,6 @@ func (d *Disco) discover(hostname svchost.Hostname) (*Host, error) { URL: discoURL, } req.Header.Set("Accept", "application/json") - req.Header.Set("User-Agent", httpclient.UserAgentString()) creds, err := d.CredentialsForHost(hostname) if err != nil { diff --git a/svchost/disco/host.go b/vendor/github.com/hashicorp/terraform-svchost/disco/host.go similarity index 99% rename from svchost/disco/host.go rename to vendor/github.com/hashicorp/terraform-svchost/disco/host.go index 228eadeef..2d0fc9f12 100644 --- a/svchost/disco/host.go +++ b/vendor/github.com/hashicorp/terraform-svchost/disco/host.go @@ -12,7 +12,6 @@ import ( "time" "github.com/hashicorp/go-version" - "github.com/hashicorp/terraform/httpclient" ) const versionServiceID = "versions.v1" @@ -372,7 +371,6 @@ func (h *Host) VersionConstraints(id, product string) (*Constraints, error) { return nil, fmt.Errorf("Failed to create version constraints request: %v", err) } req.Header.Set("Accept", "application/json") - req.Header.Set("User-Agent", httpclient.UserAgentString()) log.Printf("[DEBUG] Retrieve version constraints for service %s and product %s", id, product) diff --git a/vendor/github.com/hashicorp/terraform-svchost/disco/http_transport.go b/vendor/github.com/hashicorp/terraform-svchost/disco/http_transport.go new file mode 100644 index 000000000..7e4a38567 --- /dev/null +++ b/vendor/github.com/hashicorp/terraform-svchost/disco/http_transport.go @@ -0,0 +1,30 @@ +package disco + +import ( + "net/http" + + "github.com/hashicorp/go-cleanhttp" +) + +const DefaultUserAgent = "terraform-svchost/1.0" + +func defaultHttpTransport() http.RoundTripper { + t := cleanhttp.DefaultPooledTransport() + return &userAgentRoundTripper{ + innerRt: t, + userAgent: DefaultUserAgent, + } +} + +type userAgentRoundTripper struct { + innerRt http.RoundTripper + userAgent string +} + +func (rt *userAgentRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) { + if _, ok := req.Header["User-Agent"]; !ok { + req.Header.Set("User-Agent", rt.userAgent) + } + + return rt.innerRt.RoundTrip(req) +} diff --git a/svchost/disco/oauth_client.go b/vendor/github.com/hashicorp/terraform-svchost/disco/oauth_client.go similarity index 100% rename from svchost/disco/oauth_client.go rename to vendor/github.com/hashicorp/terraform-svchost/disco/oauth_client.go diff --git a/vendor/github.com/hashicorp/terraform-svchost/go.mod b/vendor/github.com/hashicorp/terraform-svchost/go.mod new file mode 100644 index 000000000..8f29e4ac5 --- /dev/null +++ b/vendor/github.com/hashicorp/terraform-svchost/go.mod @@ -0,0 +1,12 @@ +module github.com/hashicorp/terraform-svchost + +go 1.12 + +require ( + github.com/google/go-cmp v0.3.1 + github.com/hashicorp/go-cleanhttp v0.5.1 + github.com/hashicorp/go-version v1.2.0 + github.com/zclconf/go-cty v1.1.0 + 
golang.org/x/net v0.0.0-20191009170851-d66e71096ffb + golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45 +) diff --git a/vendor/github.com/hashicorp/terraform-svchost/go.sum b/vendor/github.com/hashicorp/terraform-svchost/go.sum new file mode 100644 index 000000000..9ad1712f8 --- /dev/null +++ b/vendor/github.com/hashicorp/terraform-svchost/go.sum @@ -0,0 +1,36 @@ +cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= +github.com/apparentlymart/go-textseg v1.0.0/go.mod h1:z96Txxhf3xSFMPmb5X/1W05FF/Nj9VFpLOpjS5yuumk= +github.com/golang/protobuf v1.1.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM= +github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/google/go-cmp v0.3.1 h1:Xye71clBPdm5HgqGwUkwhbynsUJZhDbS20FvLhQ2izg= +github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= +github.com/hashicorp/go-cleanhttp v0.5.1 h1:dH3aiDG9Jvb5r5+bYHsikaOUIpcM0xvgMXVoDkXMzJM= +github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80= +github.com/hashicorp/go-version v1.2.0 h1:3vNe/fWF5CBgRIguda1meWhsZHy3m8gCJ5wx+dIzX/E= +github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= +github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= +github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= +github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348/go.mod h1:B69LEHPfb2qLo0BaaOLcbitczOKLWTsrBG9LczfCD4k= +github.com/vmihailenco/msgpack v3.3.3+incompatible/go.mod h1:fy3FlTQTDXWkZ7Bh6AcGMlsjHatGryHQYUTf1ShIgkk= +github.com/zclconf/go-cty v1.1.0 h1:uJwc9HiBOCpoKIObTQaLR+tsEXx1HBHnOsOOpcdhZgw= +github.com/zclconf/go-cty v1.1.0/go.mod h1:xnAOWiHeOqg2nWS62VtQ7pbOu17FtxJNW8RLEih+O3s= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= +golang.org/x/net v0.0.0-20180811021610-c39426892332/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= +golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= +golang.org/x/net v0.0.0-20191009170851-d66e71096ffb h1:TR699M2v0qoKTOHxeLgp6zPqaQNs74f01a/ob9W0qko= +golang.org/x/net v0.0.0-20191009170851-d66e71096ffb/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45 h1:SVwTIAaPC2U/AvvLNZ2a7OVsmBpC8L5BlwK1whH3hm0= +golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= +golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4 h1:YUO/7uOKsKeq9UokNS62b8FYywz3ker1l1vDZRCRefw= +golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg= +golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= 
+google.golang.org/appengine v1.4.0 h1:/wp5JvzpHIxhs/dumFmF7BXTf3Z+dd4uXta4kVyO508= +google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= +gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= diff --git a/svchost/label_iter.go b/vendor/github.com/hashicorp/terraform-svchost/label_iter.go similarity index 100% rename from svchost/label_iter.go rename to vendor/github.com/hashicorp/terraform-svchost/label_iter.go diff --git a/svchost/svchost.go b/vendor/github.com/hashicorp/terraform-svchost/svchost.go similarity index 100% rename from svchost/svchost.go rename to vendor/github.com/hashicorp/terraform-svchost/svchost.go diff --git a/vendor/github.com/lib/pq/oid/gen.go b/vendor/github.com/lib/pq/oid/gen.go deleted file mode 100644 index 7c634cdc5..000000000 --- a/vendor/github.com/lib/pq/oid/gen.go +++ /dev/null @@ -1,93 +0,0 @@ -// +build ignore - -// Generate the table of OID values -// Run with 'go run gen.go'. -package main - -import ( - "database/sql" - "fmt" - "log" - "os" - "os/exec" - "strings" - - _ "github.com/lib/pq" -) - -// OID represent a postgres Object Identifier Type. -type OID struct { - ID int - Type string -} - -// Name returns an upper case version of the oid type. -func (o OID) Name() string { - return strings.ToUpper(o.Type) -} - -func main() { - datname := os.Getenv("PGDATABASE") - sslmode := os.Getenv("PGSSLMODE") - - if datname == "" { - os.Setenv("PGDATABASE", "pqgotest") - } - - if sslmode == "" { - os.Setenv("PGSSLMODE", "disable") - } - - db, err := sql.Open("postgres", "") - if err != nil { - log.Fatal(err) - } - rows, err := db.Query(` - SELECT typname, oid - FROM pg_type WHERE oid < 10000 - ORDER BY oid; - `) - if err != nil { - log.Fatal(err) - } - oids := make([]*OID, 0) - for rows.Next() { - var oid OID - if err = rows.Scan(&oid.Type, &oid.ID); err != nil { - log.Fatal(err) - } - oids = append(oids, &oid) - } - if err = rows.Err(); err != nil { - log.Fatal(err) - } - cmd := exec.Command("gofmt") - cmd.Stderr = os.Stderr - w, err := cmd.StdinPipe() - if err != nil { - log.Fatal(err) - } - f, err := os.Create("types.go") - if err != nil { - log.Fatal(err) - } - cmd.Stdout = f - err = cmd.Start() - if err != nil { - log.Fatal(err) - } - fmt.Fprintln(w, "// Code generated by gen.go. DO NOT EDIT.") - fmt.Fprintln(w, "\npackage oid") - fmt.Fprintln(w, "const (") - for _, oid := range oids { - fmt.Fprintf(w, "T_%s Oid = %d\n", oid.Type, oid.ID) - } - fmt.Fprintln(w, ")") - fmt.Fprintln(w, "var TypeName = map[Oid]string{") - for _, oid := range oids { - fmt.Fprintf(w, "T_%s: \"%s\",\n", oid.Type, oid.Name()) - } - fmt.Fprintln(w, "}") - w.Close() - cmd.Wait() -} diff --git a/vendor/github.com/likexian/gokit/LICENSE b/vendor/github.com/likexian/gokit/LICENSE new file mode 100644 index 000000000..42d4ca144 --- /dev/null +++ b/vendor/github.com/likexian/gokit/LICENSE @@ -0,0 +1,205 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. 
For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright 2012-2019 Li Kexian + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+ + APPENDIX: Copyright 2012-2019 Li Kexian + + https://www.likexian.com/ diff --git a/vendor/github.com/likexian/gokit/assert/README.md b/vendor/github.com/likexian/gokit/assert/README.md new file mode 100644 index 000000000..8ecb799ce --- /dev/null +++ b/vendor/github.com/likexian/gokit/assert/README.md @@ -0,0 +1,98 @@ +# GoKit - assert + +Assert kits for Golang development. + +## Installation + + go get -u github.com/likexian/gokit + +## Importing + + import ( + "github.com/likexian/gokit/assert" + ) + +## Documentation + +Visit the docs on [GoDoc](https://godoc.org/github.com/likexian/gokit/assert) + +## Example + +### assert panic + +```go +func willItPanic() { + panic("failed") +} +assert.Panic(t, willItPanic) +``` + +### assert err is nil + +```go +fp, err := os.Open("/data/dev/gokit/LICENSE") +assert.Nil(t, err) +``` + +### assert equal + +```go +x := map[string]int{"a": 1, "b": 2} +y := map[string]int{"a": 1, "b": 2} +assert.Equal(t, x, y, "x shall equal to y") +``` + +### check string in array + +```go +ok := assert.IsContains([]string{"a", "b", "c"}, "b") +if ok { + fmt.Println("value in array") +} else { + fmt.Println("value not in array") +} +``` + +### check string in interface array + +```go +ok := assert.IsContains([]interface{}{0, "1", 2}, "1") +if ok { + fmt.Println("value in array") +} else { + fmt.Println("value not in array") +} +``` + +### check object in struct array + +```go +ok := assert.IsContains([]A{A{0, 1}, A{1, 2}, A{1, 3}}, A{1, 2}) +if ok { + fmt.Println("value in array") +} else { + fmt.Println("value not in array") +} +``` + +### a := c ? x : y + +```go +a := 1 +// b := a == 1 ? true : false +b := assert.If(a == 1, true, false) +``` + +## LICENSE + +Copyright 2012-2019 Li Kexian + +Licensed under the Apache License 2.0 + +## About + +- [Li Kexian](https://www.likexian.com/) + +## DONATE + +- [Help me make perfect](https://www.likexian.com/donate/) diff --git a/vendor/github.com/likexian/gokit/assert/assert.go b/vendor/github.com/likexian/gokit/assert/assert.go new file mode 100644 index 000000000..ddc4a6688 --- /dev/null +++ b/vendor/github.com/likexian/gokit/assert/assert.go @@ -0,0 +1,202 @@ +/* + * Copyright 2012-2019 Li Kexian + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + * A toolkit for Golang development + * https://www.likexian.com/ + */ + +package assert + +import ( + "fmt" + "reflect" + "runtime" + "testing" +) + +// Version returns package version +func Version() string { + return "0.10.1" +} + +// Author returns package author +func Author() string { + return "[Li Kexian](https://www.likexian.com/)" +} + +// License returns package license +func License() string { + return "Licensed under the Apache License 2.0" +} + +// Equal assert test value to be equal +func Equal(t *testing.T, got, exp interface{}, args ...interface{}) { + equal(t, got, exp, 1, args...) +} + +// NotEqual assert test value to be not equal +func NotEqual(t *testing.T, got, exp interface{}, args ...interface{}) { + notEqual(t, got, exp, 1, args...) 
+}
+
+// Nil assert test value to be nil
+func Nil(t *testing.T, got interface{}, args ...interface{}) {
+	equal(t, got, nil, 1, args...)
+}
+
+// NotNil assert test value to be not nil
+func NotNil(t *testing.T, got interface{}, args ...interface{}) {
+	notEqual(t, got, nil, 1, args...)
+}
+
+// True assert test value to be true
+func True(t *testing.T, got interface{}, args ...interface{}) {
+	equal(t, got, true, 1, args...)
+}
+
+// False assert test value to be false
+func False(t *testing.T, got interface{}, args ...interface{}) {
+	notEqual(t, got, true, 1, args...)
+}
+
+// Zero assert test value to be zero value
+func Zero(t *testing.T, got interface{}, args ...interface{}) {
+	equal(t, IsZero(got), true, 1, args...)
+}
+
+// NotZero assert test value to be not zero value
+func NotZero(t *testing.T, got interface{}, args ...interface{}) {
+	notEqual(t, IsZero(got), true, 1, args...)
+}
+
+// Len assert length of test value to be exp
+func Len(t *testing.T, got interface{}, exp int, args ...interface{}) {
+	equal(t, Length(got), exp, 1, args...)
+}
+
+// NotLen assert length of test value to be not exp
+func NotLen(t *testing.T, got interface{}, exp int, args ...interface{}) {
+	notEqual(t, Length(got), exp, 1, args...)
+}
+
+// Contains assert test value to contain exp
+func Contains(t *testing.T, got, exp interface{}, args ...interface{}) {
+	equal(t, IsContains(got, exp), true, 1, args...)
+}
+
+// NotContains assert test value to not contain exp
+func NotContains(t *testing.T, got, exp interface{}, args ...interface{}) {
+	notEqual(t, IsContains(got, exp), true, 1, args...)
+}
+
+// Match assert test value to match exp pattern
+func Match(t *testing.T, got, exp interface{}, args ...interface{}) {
+	equal(t, IsMatch(got, exp), true, 1, args...)
+}
+
+// NotMatch assert test value to not match exp pattern
+func NotMatch(t *testing.T, got, exp interface{}, args ...interface{}) {
+	notEqual(t, IsMatch(got, exp), true, 1, args...)
+}
+
+// Lt assert test value less than exp
+func Lt(t *testing.T, got, exp interface{}, args ...interface{}) {
+	equal(t, IsLt(got, exp), true, 1, args...)
+}
+
+// Le assert test value less than exp or equal
+func Le(t *testing.T, got, exp interface{}, args ...interface{}) {
+	equal(t, IsLe(got, exp), true, 1, args...)
+}
+
+// Gt assert test value greater than exp
+func Gt(t *testing.T, got, exp interface{}, args ...interface{}) {
+	equal(t, IsGt(got, exp), true, 1, args...)
+}
+
+// Ge assert test value greater than exp or equal
+func Ge(t *testing.T, got, exp interface{}, args ...interface{}) {
+	equal(t, IsGe(got, exp), true, 1, args...)
+}
+
+// Panic assert testing to be panic
+func Panic(t *testing.T, fn func(), args ...interface{}) {
+	defer func() {
+		ff := func() {
+			t.Error("! -", "assert expected to be panic")
+			if len(args) > 0 {
+				t.Error("! -", fmt.Sprint(args...))
+			}
+		}
+		ok := recover() != nil
+		assert(t, ok, ff, 2)
+	}()
+
+	fn()
+}
+
+// NotPanic assert testing to be not panic
+func NotPanic(t *testing.T, fn func(), args ...interface{}) {
+	defer func() {
+		ff := func() {
+			t.Error("! -", "assert expected to be not panic")
+			if len(args) > 0 {
+				t.Error("! -", fmt.Sprint(args...))
+			}
+		}
+		ok := recover() == nil
+		assert(t, ok, ff, 3)
+	}()
+
+	fn()
+}
+
+func equal(t *testing.T, got, exp interface{}, step int, args ...interface{}) {
+	fn := func() {
+		switch got.(type) {
+		case error:
+			t.Errorf("! unexpected error: \"%s\"", got)
+		default:
+			t.Errorf("! expected %#v, but got %#v", exp, got)
+		}
+		if len(args) > 0 {
+			t.Error("! -", fmt.Sprint(args...))
+		}
+	}
+	ok := reflect.DeepEqual(exp, got)
+	assert(t, ok, fn, step+1)
+}
+
+func notEqual(t *testing.T, got, exp interface{}, step int, args ...interface{}) {
+	fn := func() {
+		t.Errorf("! unexpected: %#v", got)
+		if len(args) > 0 {
+			t.Error("! -", fmt.Sprint(args...))
+		}
+	}
+	ok := !reflect.DeepEqual(exp, got)
+	assert(t, ok, fn, step+1)
+}
+
+func assert(t *testing.T, pass bool, fn func(), step int) {
+	if !pass {
+		_, file, line, ok := runtime.Caller(step + 1)
+		if ok {
+			t.Errorf("%s:%d", file, line)
+		}
+		fn()
+		t.FailNow()
+	}
+}
diff --git a/vendor/github.com/likexian/gokit/assert/values.go b/vendor/github.com/likexian/gokit/assert/values.go
new file mode 100644
index 000000000..02c303ab1
--- /dev/null
+++ b/vendor/github.com/likexian/gokit/assert/values.go
@@ -0,0 +1,334 @@
+/*
+ * Copyright 2012-2019 Li Kexian
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ * A toolkit for Golang development
+ * https://www.likexian.com/
+ */
+
+package assert
+
+import (
+	"errors"
+	"fmt"
+	"reflect"
+	"regexp"
+	"strconv"
+	"strings"
+)
+
+// ErrInvalid is the error for a value that is invalid for the operation
+var ErrInvalid = errors.New("value is invalid")
+
+// ErrLess is the error for a left value expected to be greater
+var ErrLess = errors.New("left is less than the right")
+
+// ErrGreater is the error for a left value expected to be less
+var ErrGreater = errors.New("left is greater than the right")
+
+// CMP defines the supported compare operations
+var CMP = struct {
+	LT string
+	LE string
+	GT string
+	GE string
+}{
+	"<",
+	"<=",
+	">",
+	">=",
+}
+
+// IsZero returns whether value is the zero value
+func IsZero(v interface{}) bool {
+	vv := reflect.ValueOf(v)
+	switch vv.Kind() {
+	case reflect.Invalid:
+		return true
+	case reflect.Bool:
+		return !vv.Bool()
+	case reflect.Ptr, reflect.Interface:
+		return vv.IsNil()
+	case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
+		return vv.Len() == 0
+	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+		return vv.Int() == 0
+	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+		return vv.Uint() == 0
+	case reflect.Float32, reflect.Float64:
+		return vv.Float() == 0
+	default:
+		return false
+	}
+}
+
+// IsContains returns whether value is within array
+func IsContains(array interface{}, value interface{}) bool {
+	vv := reflect.ValueOf(array)
+	if vv.Kind() == reflect.Ptr || vv.Kind() == reflect.Interface {
+		if vv.IsNil() {
+			return false
+		}
+		vv = vv.Elem()
+	}
+
+	switch vv.Kind() {
+	case reflect.Invalid:
+		return false
+	case reflect.Slice:
+		for i := 0; i < vv.Len(); i++ {
+			if reflect.DeepEqual(value, vv.Index(i).Interface()) {
+				return true
+			}
+		}
+		return false
+	case reflect.Map:
+		s := vv.MapKeys()
+		for i := 0; i < len(s); i++ {
+			if reflect.DeepEqual(value, s[i].Interface()) {
+				return true
+			}
+		}
+		return false
+	case reflect.String:
+		ss := reflect.ValueOf(value)
+		switch ss.Kind() {
+		case reflect.String:
+			return strings.Contains(vv.String(), ss.String())
+		}
+		return false
+	default:
+		return reflect.DeepEqual(array, value)
+	}
+}
+
+// IsMatch returns whether value v contains any match of pattern r
+// IsMatch(regexp.MustCompile("v\d+"), "v100")
+// IsMatch("v\d+", "v100")
+// IsMatch("\d+\.\d+", 100.1)
+func IsMatch(r interface{}, v interface{}) bool {
+	var re *regexp.Regexp
+
+	if v, ok := r.(*regexp.Regexp); ok {
+		re = v
+	} else {
+		re = regexp.MustCompile(fmt.Sprint(r))
+	}
+
+	return re.MatchString(fmt.Sprint(v))
+}
+
+// Length returns the length of value
+func Length(v interface{}) int {
+	vv := reflect.ValueOf(v)
+	if vv.Kind() == reflect.Ptr || vv.Kind() == reflect.Interface {
+		if vv.IsNil() {
+			return 0
+		}
+		vv = vv.Elem()
+	}
+
+	switch vv.Kind() {
+	case reflect.Invalid:
+		return 0
+	case reflect.Ptr, reflect.Interface:
+		return 0
+	case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
+		return vv.Len()
+	default:
+		return len(fmt.Sprintf("%#v", v))
+	}
+}
+
+// IsLt returns if x is less than y; an invalid value returns false
+func IsLt(x, y interface{}) bool {
+	return Compare(x, y, CMP.LT) == nil
+}
+
+// IsLe returns if x is less than or equal to y; an invalid value returns false
+func IsLe(x, y interface{}) bool {
+	return Compare(x, y, CMP.LE) == nil
+}
+
+// IsGt returns if x is greater than y; an invalid value returns false
+func IsGt(x, y interface{}) bool {
+	return Compare(x, y, CMP.GT) == nil
+}
+
+// IsGe returns if x is greater than or equal to y; an invalid value returns false
+func IsGe(x, y interface{}) bool {
+	return Compare(x, y, CMP.GE) == nil
+}
+
+// Compare compares x and y by the given operation
+// It returns nil for true, ErrInvalid for an invalid operation, and an error for false
+// Compare(1, 2, "<") // number compare -> true
+// Compare("a", "a", ">=") // string compare -> true
+// Compare([]string{"a", "b"}, []string{"a"}, "<") // slice len compare -> false
+func Compare(x, y interface{}, op string) error {
+	if !IsContains([]string{CMP.LT, CMP.LE, CMP.GT, CMP.GE}, op) {
+		return ErrInvalid
+	}
+
+	vv := reflect.ValueOf(x)
+	if vv.Kind() == reflect.Ptr || vv.Kind() == reflect.Interface {
+		if vv.IsNil() {
+			return ErrInvalid
+		}
+		vv = vv.Elem()
+	}
+
+	var c float64
+	switch vv.Kind() {
+	case reflect.Invalid:
+		return ErrInvalid
+	case reflect.String:
+		yy := reflect.ValueOf(y)
+		switch yy.Kind() {
+		case reflect.String:
+			c = float64(strings.Compare(vv.String(), yy.String()))
+		default:
+			return ErrInvalid
+		}
+	case reflect.Slice, reflect.Map, reflect.Array:
+		yy := reflect.ValueOf(y)
+		switch yy.Kind() {
+		case reflect.Slice, reflect.Map, reflect.Array:
+			c = float64(vv.Len() - yy.Len())
+		default:
+			return ErrInvalid
+		}
+	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+		yy, err := ToInt64(y)
+		if err != nil {
+			return ErrInvalid
+		}
+		c = float64(vv.Int() - yy)
+	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+		yy, err := ToUint64(y)
+		if err != nil {
+			return ErrInvalid
+		}
+		c = float64(vv.Uint()) - float64(yy)
+	case reflect.Float32, reflect.Float64:
+		yy, err := ToFloat64(y)
+		if err != nil {
+			return ErrInvalid
+		}
+		c = float64(vv.Float() - yy)
+	default:
+		return ErrInvalid
+	}
+
+	switch {
+	case c < 0:
+		switch op {
+		case CMP.LT, CMP.LE:
+			return nil
+		default:
+			return ErrLess
+		}
+	case c > 0:
+		switch op {
+		case CMP.GT, CMP.GE:
+			return nil
+		default:
+			return ErrGreater
+		}
+	default:
+		switch op {
+		case CMP.LT:
+			return ErrGreater
+		case CMP.GT:
+			return ErrLess
+		default:
+			return nil
+		}
+	}
+}
+
+// ToInt64 returns the int64 value for int or uint or
float +func ToInt64(v interface{}) (int64, error) { + vv := reflect.ValueOf(v) + switch vv.Kind() { + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + return int64(vv.Int()), nil + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + return int64(vv.Uint()), nil + case reflect.Float32, reflect.Float64: + return int64(vv.Float()), nil + case reflect.String: + r, err := strconv.ParseInt(vv.String(), 10, 64) + if err != nil { + return 0, ErrInvalid + } + return r, nil + default: + return 0, ErrInvalid + } +} + +// ToUint64 returns uint value for int or uint or float +func ToUint64(v interface{}) (uint64, error) { + vv := reflect.ValueOf(v) + switch vv.Kind() { + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + return uint64(vv.Int()), nil + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + return uint64(vv.Uint()), nil + case reflect.Float32, reflect.Float64: + return uint64(vv.Float()), nil + case reflect.String: + r, err := strconv.ParseUint(vv.String(), 10, 64) + if err != nil { + return 0, ErrInvalid + } + return r, nil + default: + return 0, ErrInvalid + } +} + +// ToFloat64 returns float64 value for int or uint or float +func ToFloat64(v interface{}) (float64, error) { + vv := reflect.ValueOf(v) + switch vv.Kind() { + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + return float64(vv.Int()), nil + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + return float64(vv.Uint()), nil + case reflect.Float32, reflect.Float64: + return float64(vv.Float()), nil + case reflect.String: + r, err := strconv.ParseFloat(vv.String(), 64) + if err != nil { + return 0, ErrInvalid + } + return r, nil + default: + return 0, ErrInvalid + } +} + +// If returns x if c is true, else y +// z = If(c, x, y) +// equal to: +// z = c ? x : y +func If(c bool, x, y interface{}) interface{} { + if c { + return x + } + + return y +} diff --git a/vendor/github.com/marstr/guid/.travis.yml b/vendor/github.com/marstr/guid/.travis.yml deleted file mode 100644 index 35158ec53..000000000 --- a/vendor/github.com/marstr/guid/.travis.yml +++ /dev/null @@ -1,18 +0,0 @@ -sudo: false - -language: go - -go: - - 1.7 - - 1.8 - -install: - - go get -u github.com/golang/lint/golint - - go get -u github.com/HewlettPackard/gas - -script: - - golint --set_exit_status - - go vet - - go test -v -cover -race - - go test -bench . - - gas ./... \ No newline at end of file diff --git a/vendor/github.com/marstr/guid/README.md b/vendor/github.com/marstr/guid/README.md deleted file mode 100644 index 355fad16d..000000000 --- a/vendor/github.com/marstr/guid/README.md +++ /dev/null @@ -1,27 +0,0 @@ -[![Build Status](https://travis-ci.org/marstr/guid.svg?branch=master)](https://travis-ci.org/marstr/guid) -[![GoDoc](https://godoc.org/github.com/marstr/guid?status.svg)](https://godoc.org/github.com/marstr/guid) -[![Go Report Card](https://goreportcard.com/badge/github.com/marstr/guid)](https://goreportcard.com/report/github.com/marstr/guid) - -# Guid -Globally unique identifiers offer a quick means of generating non-colliding values across a distributed system. For this implemenation, [RFC 4122](http://ietf.org/rfc/rfc4122.txt) governs the desired behavior. - -## What's in a name? 
-You have likely already noticed that RFC and some implementations refer to these structures as UUIDs (Universally Unique Identifiers), where as this project is annotated as GUIDs (Globally Unique Identifiers). The name Guid was selected to make clear this project's ties to the [.NET struct Guid.](https://msdn.microsoft.com/en-us/library/system.guid(v=vs.110).aspx) The most obvious relationship is the desire to have the same format specifiers available in this library's Format and Parse methods as .NET would have in its ToString and Parse methods. - -# Installation -- Ensure you have the [Go Programming Language](https://golang.org/) installed on your system. -- Run the command: `go get -u github.com/marstr/guid` - -# Contribution -Contributions are welcome! Feel free to send Pull Requests. Continuous Integration will ensure that you have conformed to Go conventions. Please remember to add tests for your changes. - -# Versioning -This library will adhere to the -[Semantic Versioning 2.0.0](http://semver.org/spec/v2.0.0.html) specification. It may be worth noting this should allow for tools like [glide](https://glide.readthedocs.io/en/latest/) to pull in this library with ease. - -The Release Notes portion of this file will be updated to reflect the most recent major/minor updates, with the option to tag particular bug-fixes as well. Updates to the Release Notes for patches should be addative, where as major/minor updates should replace the previous version. If one desires to see the release notes for an older version, checkout that version of code and open this file. - -# Release Notes 1.1.* - -## v1.1.0 -Adding support for JSON marshaling and unmarshaling. diff --git a/vendor/github.com/marstr/guid/guid.go b/vendor/github.com/marstr/guid/guid.go deleted file mode 100644 index 51b038b75..000000000 --- a/vendor/github.com/marstr/guid/guid.go +++ /dev/null @@ -1,301 +0,0 @@ -package guid - -import ( - "bytes" - "crypto/rand" - "errors" - "fmt" - "net" - "strings" - "sync" - "time" -) - -// GUID is a unique identifier designed to virtually guarantee non-conflict between values generated -// across a distributed system. -type GUID struct { - timeHighAndVersion uint16 - timeMid uint16 - timeLow uint32 - clockSeqHighAndReserved uint8 - clockSeqLow uint8 - node [6]byte -} - -// Format enumerates the values that are supported by Parse and Format -type Format string - -// These constants define the possible string formats available via this implementation of Guid. -const ( - FormatB Format = "B" // {00000000-0000-0000-0000-000000000000} - FormatD Format = "D" // 00000000-0000-0000-0000-000000000000 - FormatN Format = "N" // 00000000000000000000000000000000 - FormatP Format = "P" // (00000000-0000-0000-0000-000000000000) - FormatX Format = "X" // {0x00000000,0x0000,0x0000,{0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00}} - FormatDefault Format = FormatD -) - -// CreationStrategy enumerates the values that are supported for populating the bits of a new Guid. -type CreationStrategy string - -// These constants define the possible creation strategies available via this implementation of Guid. 
-const ( - CreationStrategyVersion1 CreationStrategy = "version1" - CreationStrategyVersion2 CreationStrategy = "version2" - CreationStrategyVersion3 CreationStrategy = "version3" - CreationStrategyVersion4 CreationStrategy = "version4" - CreationStrategyVersion5 CreationStrategy = "version5" -) - -var emptyGUID GUID - -// NewGUID generates and returns a new globally unique identifier -func NewGUID() GUID { - result, err := version4() - if err != nil { - panic(err) //Version 4 (pseudo-random GUID) doesn't use anything that could fail. - } - return result -} - -var knownStrategies = map[CreationStrategy]func() (GUID, error){ - CreationStrategyVersion1: version1, - CreationStrategyVersion4: version4, -} - -// NewGUIDs generates and returns a new globally unique identifier that conforms to the given strategy. -func NewGUIDs(strategy CreationStrategy) (GUID, error) { - if creator, present := knownStrategies[strategy]; present { - result, err := creator() - return result, err - } - return emptyGUID, errors.New("Unsupported CreationStrategy") -} - -// Empty returns a copy of the default and empty GUID. -func Empty() GUID { - return emptyGUID -} - -var knownFormats = map[Format]string{ - FormatN: "%08x%04x%04x%02x%02x%02x%02x%02x%02x%02x%02x", - FormatD: "%08x-%04x-%04x-%02x%02x-%02x%02x%02x%02x%02x%02x", - FormatB: "{%08x-%04x-%04x-%02x%02x-%02x%02x%02x%02x%02x%02x}", - FormatP: "(%08x-%04x-%04x-%02x%02x-%02x%02x%02x%02x%02x%02x)", - FormatX: "{0x%08x,0x%04x,0x%04x,{0x%02x,0x%02x,0x%02x,0x%02x,0x%02x,0x%02x,0x%02x,0x%02x}}", -} - -// MarshalJSON writes a GUID as a JSON string. -func (guid GUID) MarshalJSON() (marshaled []byte, err error) { - buf := bytes.Buffer{} - - _, err = buf.WriteRune('"') - buf.WriteString(guid.String()) - buf.WriteRune('"') - - marshaled = buf.Bytes() - return -} - -// Parse instantiates a GUID from a text representation of the same GUID. -// This is the inverse of function family String() -func Parse(value string) (GUID, error) { - var guid GUID - for _, fullFormat := range knownFormats { - parity, err := fmt.Sscanf( - value, - fullFormat, - &guid.timeLow, - &guid.timeMid, - &guid.timeHighAndVersion, - &guid.clockSeqHighAndReserved, - &guid.clockSeqLow, - &guid.node[0], - &guid.node[1], - &guid.node[2], - &guid.node[3], - &guid.node[4], - &guid.node[5]) - if parity == 11 && err == nil { - return guid, err - } - } - return emptyGUID, fmt.Errorf("\"%s\" is not in a recognized format", value) -} - -// String returns a text representation of a GUID in the default format. -func (guid GUID) String() string { - return guid.Stringf(FormatDefault) -} - -// Stringf returns a text representation of a GUID that conforms to the specified format. -// If an unrecognized format is provided, the empty string is returned. -func (guid GUID) Stringf(format Format) string { - if format == "" { - format = FormatDefault - } - fullFormat, present := knownFormats[format] - if !present { - return "" - } - return fmt.Sprintf( - fullFormat, - guid.timeLow, - guid.timeMid, - guid.timeHighAndVersion, - guid.clockSeqHighAndReserved, - guid.clockSeqLow, - guid.node[0], - guid.node[1], - guid.node[2], - guid.node[3], - guid.node[4], - guid.node[5]) -} - -// UnmarshalJSON parses a GUID from a JSON string token. 
-func (guid *GUID) UnmarshalJSON(marshaled []byte) (err error) { - if len(marshaled) < 2 { - err = errors.New("JSON GUID must be surrounded by quotes") - return - } - stripped := marshaled[1 : len(marshaled)-1] - *guid, err = Parse(string(stripped)) - return -} - -// Version reads a GUID to parse which mechanism of generating GUIDS was employed. -// Values returned here are documented in rfc4122.txt. -func (guid GUID) Version() uint { - return uint(guid.timeHighAndVersion >> 12) -} - -var unixToGregorianOffset = time.Date(1970, 01, 01, 0, 0, 00, 0, time.UTC).Sub(time.Date(1582, 10, 15, 0, 0, 0, 0, time.UTC)) - -// getRFC4122Time returns a 60-bit count of 100-nanosecond intervals since 00:00:00.00 October 15th, 1582 -func getRFC4122Time() int64 { - currentTime := time.Now().UTC().Add(unixToGregorianOffset).UnixNano() - currentTime /= 100 - return currentTime & 0x0FFFFFFFFFFFFFFF -} - -var clockSeqVal uint16 -var clockSeqKey sync.Mutex - -func getClockSequence() (uint16, error) { - clockSeqKey.Lock() - defer clockSeqKey.Unlock() - - if 0 == clockSeqVal { - var temp [2]byte - if parity, err := rand.Read(temp[:]); !(2 == parity && nil == err) { - return 0, err - } - clockSeqVal = uint16(temp[0])<<8 | uint16(temp[1]) - } - clockSeqVal++ - return clockSeqVal, nil -} - -func getMACAddress() (mac [6]byte, err error) { - var hostNICs []net.Interface - - hostNICs, err = net.Interfaces() - if err != nil { - return - } - - for _, nic := range hostNICs { - var parity int - - parity, err = fmt.Sscanf( - strings.ToLower(nic.HardwareAddr.String()), - "%02x:%02x:%02x:%02x:%02x:%02x", - &mac[0], - &mac[1], - &mac[2], - &mac[3], - &mac[4], - &mac[5]) - - if parity == len(mac) { - return - } - } - - err = fmt.Errorf("No suitable address found") - - return -} - -func version1() (result GUID, err error) { - var localMAC [6]byte - var clockSeq uint16 - - currentTime := getRFC4122Time() - - result.timeLow = uint32(currentTime) - result.timeMid = uint16(currentTime >> 32) - result.timeHighAndVersion = uint16(currentTime >> 48) - if err = result.setVersion(1); err != nil { - return emptyGUID, err - } - - if localMAC, err = getMACAddress(); nil != err { - if parity, err := rand.Read(localMAC[:]); !(len(localMAC) != parity && err == nil) { - return emptyGUID, err - } - localMAC[0] |= 0x1 - } - copy(result.node[:], localMAC[:]) - - if clockSeq, err = getClockSequence(); nil != err { - return emptyGUID, err - } - - result.clockSeqLow = uint8(clockSeq) - result.clockSeqHighAndReserved = uint8(clockSeq >> 8) - - result.setReservedBits() - - return -} - -func version4() (GUID, error) { - var retval GUID - var bits [10]byte - - if parity, err := rand.Read(bits[:]); !(len(bits) == parity && err == nil) { - return emptyGUID, err - } - retval.timeHighAndVersion |= uint16(bits[0]) | uint16(bits[1])<<8 - retval.timeMid |= uint16(bits[2]) | uint16(bits[3])<<8 - retval.timeLow |= uint32(bits[4]) | uint32(bits[5])<<8 | uint32(bits[6])<<16 | uint32(bits[7])<<24 - retval.clockSeqHighAndReserved = uint8(bits[8]) - retval.clockSeqLow = uint8(bits[9]) - - //Randomly set clock-sequence, reserved, and node - if written, err := rand.Read(retval.node[:]); !(nil == err && written == len(retval.node)) { - retval = emptyGUID - return retval, err - } - - if err := retval.setVersion(4); nil != err { - return emptyGUID, err - } - retval.setReservedBits() - - return retval, nil -} - -func (guid *GUID) setVersion(version uint16) error { - if version > 5 || version == 0 { - return fmt.Errorf("While setting GUID version, unsupported version: %d", 
version)
-	}
-	guid.timeHighAndVersion = (guid.timeHighAndVersion & 0x0fff) | version<<12
-	return nil
-}
-
-func (guid *GUID) setReservedBits() {
-	guid.clockSeqHighAndReserved = (guid.clockSeqHighAndReserved & 0x3f) | 0x80
-}
diff --git a/vendor/github.com/mitchellh/panicwrap/go.mod b/vendor/github.com/mitchellh/panicwrap/go.mod
index c35fabba8..40ccf8798 100644
--- a/vendor/github.com/mitchellh/panicwrap/go.mod
+++ b/vendor/github.com/mitchellh/panicwrap/go.mod
@@ -1,3 +1,3 @@
 module github.com/mitchellh/panicwrap
 
-require github.com/kardianos/osext v0.0.0-20170510131534-ae77be60afb1
+go 1.13
diff --git a/vendor/github.com/mitchellh/panicwrap/go.sum b/vendor/github.com/mitchellh/panicwrap/go.sum
index fd3e4dafa..e69de29bb 100644
--- a/vendor/github.com/mitchellh/panicwrap/go.sum
+++ b/vendor/github.com/mitchellh/panicwrap/go.sum
@@ -1,2 +0,0 @@
-github.com/kardianos/osext v0.0.0-20170510131534-ae77be60afb1 h1:PJPDf8OUfOK1bb/NeTKd4f1QXZItOX389VN3B6qC8ro=
-github.com/kardianos/osext v0.0.0-20170510131534-ae77be60afb1/go.mod h1:1NbS8ALrpOvjt0rHPNLyCIeMtbizbir8U//inJ+zuB8=
diff --git a/vendor/github.com/mitchellh/panicwrap/panicwrap.go b/vendor/github.com/mitchellh/panicwrap/panicwrap.go
index 028d69bfe..1478244bf 100644
--- a/vendor/github.com/mitchellh/panicwrap/panicwrap.go
+++ b/vendor/github.com/mitchellh/panicwrap/panicwrap.go
@@ -20,8 +20,6 @@ import (
 	"sync/atomic"
 	"syscall"
 	"time"
-
-	"github.com/kardianos/osext"
 )
 
 const (
@@ -118,7 +116,7 @@ func Wrap(c *WrapConfig) (int, error) {
 	}
 
 	// Get the path to our current executable
-	exePath, err := osext.Executable()
+	exePath, err := os.Executable()
 	if err != nil {
 		return -1, err
 	}
@@ -229,6 +227,11 @@ func Wrap(c *WrapConfig) (int, error) {
 // Wrapped checks if we're already wrapped according to the configuration
 // given.
 //
+// It must only be called once with a non-nil configuration, as it unsets
+// the environment variable it uses to check if we are already wrapped.
+// This prevents false positives if your program tries to execute itself
+// recursively.
+//
 // Wrapped is very cheap and can be used early to short-circuit some pre-wrap
// @@ -253,6 +256,9 @@ func Wrapped(c *WrapConfig) bool { // If the cookie key/value match our environment, then we are the // child, so just exit now and tell the caller that we're the child result := os.Getenv(c.CookieKey) == c.CookieValue + if result { + os.Unsetenv(c.CookieKey) + } wrapCache.Store(result) return result } diff --git a/vendor/github.com/mozillazg/go-httpheader/.bumpversion.cfg b/vendor/github.com/mozillazg/go-httpheader/.bumpversion.cfg new file mode 100644 index 000000000..18dad225a --- /dev/null +++ b/vendor/github.com/mozillazg/go-httpheader/.bumpversion.cfg @@ -0,0 +1,7 @@ +[bumpversion] +commit = True +tag = True +current_version = 0.2.1 + +[bumpversion:file:encode.go] + diff --git a/vendor/github.com/hashicorp/logutils/.gitignore b/vendor/github.com/mozillazg/go-httpheader/.gitignore similarity index 86% rename from vendor/github.com/hashicorp/logutils/.gitignore rename to vendor/github.com/mozillazg/go-httpheader/.gitignore index 00268614f..098e254bc 100644 --- a/vendor/github.com/hashicorp/logutils/.gitignore +++ b/vendor/github.com/mozillazg/go-httpheader/.gitignore @@ -20,3 +20,8 @@ _cgo_export.* _testmain.go *.exe +*.test +*.prof +dist/ +cover.html +cover.out diff --git a/vendor/github.com/mozillazg/go-httpheader/.travis.yml b/vendor/github.com/mozillazg/go-httpheader/.travis.yml new file mode 100644 index 000000000..fbb97928b --- /dev/null +++ b/vendor/github.com/mozillazg/go-httpheader/.travis.yml @@ -0,0 +1,25 @@ +language: go +go: + - 1.6 + - 1.7 + - 1.8 + - tip + +sudo: false + +before_install: + - go get github.com/mattn/goveralls + +install: + - go get + - go build + +script: + - make test + - $HOME/gopath/bin/goveralls -service=travis-ci -ignore=vendor/ + +matrix: + allow_failures: + - go: 1.6 + - go: 1.7 + - go: tip diff --git a/vendor/github.com/mozillazg/go-httpheader/CHANGELOG.md b/vendor/github.com/mozillazg/go-httpheader/CHANGELOG.md new file mode 100644 index 000000000..7b643fd64 --- /dev/null +++ b/vendor/github.com/mozillazg/go-httpheader/CHANGELOG.md @@ -0,0 +1,15 @@ +# Changelog + +## 0.2.1 (2018-11-03) + +* add go.mod file to identify as a module + + +## 0.2.0 (2017-06-24) + +* support http.Header field. + + +## 0.1.0 (2017-06-10) + +* Initial Release diff --git a/vendor/github.com/mozillazg/go-httpheader/LICENSE b/vendor/github.com/mozillazg/go-httpheader/LICENSE new file mode 100644 index 000000000..8ff7942e2 --- /dev/null +++ b/vendor/github.com/mozillazg/go-httpheader/LICENSE @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2017 mozillazg + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/vendor/github.com/mozillazg/go-httpheader/Makefile b/vendor/github.com/mozillazg/go-httpheader/Makefile new file mode 100644 index 000000000..aadfe8298 --- /dev/null +++ b/vendor/github.com/mozillazg/go-httpheader/Makefile @@ -0,0 +1,15 @@ +help: + @echo "test run test" + @echo "lint run lint" + +.PHONY: test +test: + go test -v -cover -coverprofile cover.out + go tool cover -html=cover.out -o cover.html + +.PHONY: lint +lint: + gofmt -s -w . + goimports -w . + golint . + go vet diff --git a/vendor/github.com/mozillazg/go-httpheader/README.md b/vendor/github.com/mozillazg/go-httpheader/README.md new file mode 100644 index 000000000..d73daf64c --- /dev/null +++ b/vendor/github.com/mozillazg/go-httpheader/README.md @@ -0,0 +1,63 @@ +# go-httpheader + +go-httpheader is a Go library for encoding structs into Header fields. + +[![Build Status](https://img.shields.io/travis/mozillazg/go-httpheader/master.svg)](https://travis-ci.org/mozillazg/go-httpheader) +[![Coverage Status](https://img.shields.io/coveralls/mozillazg/go-httpheader/master.svg)](https://coveralls.io/r/mozillazg/go-httpheader?branch=master) +[![Go Report Card](https://goreportcard.com/badge/github.com/mozillazg/go-httpheader)](https://goreportcard.com/report/github.com/mozillazg/go-httpheader) +[![GoDoc](https://godoc.org/github.com/mozillazg/go-httpheader?status.svg)](https://godoc.org/github.com/mozillazg/go-httpheader) + +## install + +`go get -u github.com/mozillazg/go-httpheader` + + +## usage + +```go +package main + +import ( + "fmt" + "net/http" + + "github.com/mozillazg/go-httpheader" +) + +type Options struct { + hide string + ContentType string `header:"Content-Type"` + Length int + XArray []string `header:"X-Array"` + TestHide string `header:"-"` + IgnoreEmpty string `header:"X-Empty,omitempty"` + IgnoreEmptyN string `header:"X-Empty-N,omitempty"` + CustomHeader http.Header +} + +func main() { + opt := Options{ + hide: "hide", + ContentType: "application/json", + Length: 2, + XArray: []string{"test1", "test2"}, + TestHide: "hide", + IgnoreEmptyN: "n", + CustomHeader: http.Header{ + "X-Test-1": []string{"233"}, + "X-Test-2": []string{"666"}, + }, + } + h, _ := httpheader.Header(opt) + fmt.Printf("%#v", h) + // h: + // http.Header{ + // "X-Test-1": []string{"233"}, + // "X-Test-2": []string{"666"}, + // "Content-Type": []string{"application/json"}, + // "Length": []string{"2"}, + // "X-Array": []string{"test1", "test2"}, + // "X-Empty-N": []string{"n"}, + //} +} +``` diff --git a/vendor/github.com/mozillazg/go-httpheader/encode.go b/vendor/github.com/mozillazg/go-httpheader/encode.go new file mode 100644 index 000000000..c122f2b88 --- /dev/null +++ b/vendor/github.com/mozillazg/go-httpheader/encode.go @@ -0,0 +1,290 @@ +// Package query implements encoding of structs into http.Header fields. +// +// As a simple example: +// +// type Options struct { +// ContentType string `header:"Content-Type"` +// Length int +// } +// +// opt := Options{"application/json", 2} +// h, _ := httpheader.Header(opt) +// fmt.Printf("%#v", h) +// // will output: +// // http.Header{"Content-Type":[]string{"application/json"},"Length":[]string{"2"}} +// +// The exact mapping between Go values and http.Header is described in the +// documentation for the Header() function. 
+package httpheader + +import ( + "fmt" + "net/http" + "reflect" + "strconv" + "strings" + "time" +) + +const tagName = "header" + +// Version ... +const Version = "0.2.1" + +var timeType = reflect.TypeOf(time.Time{}) +var headerType = reflect.TypeOf(http.Header{}) + +var encoderType = reflect.TypeOf(new(Encoder)).Elem() + +// Encoder is an interface implemented by any type that wishes to encode +// itself into Header fields in a non-standard way. +type Encoder interface { + EncodeHeader(key string, v *http.Header) error +} + +// Header returns the http.Header encoding of v. +// +// Header expects to be passed a struct, and traverses it recursively using the +// following encoding rules. +// +// Each exported struct field is encoded as a Header field unless +// +// - the field's tag is "-", or +// - the field is empty and its tag specifies the "omitempty" option +// +// The empty values are false, 0, any nil pointer or interface value, any array +// slice, map, or string of length zero, and any time.Time that returns true +// for IsZero(). +// +// The Header field name defaults to the struct field name but can be +// specified in the struct field's tag value. The "header" key in the struct +// field's tag value is the key name, followed by an optional comma and +// options. For example: +// +// // Field is ignored by this package. +// Field int `header:"-"` +// +// // Field appears as Header field "X-Name". +// Field int `header:"X-Name"` +// +// // Field appears as Header field "X-Name" and the field is omitted if +// // its value is empty +// Field int `header:"X-Name,omitempty"` +// +// // Field appears as Header field "Field" (the default), but the field +// // is skipped if empty. Note the leading comma. +// Field int `header:",omitempty"` +// +// For encoding individual field values, the following type-dependent rules +// apply: +// +// Boolean values default to encoding as the strings "true" or "false". +// Including the "int" option signals that the field should be encoded as the +// strings "1" or "0". +// +// time.Time values default to encoding as RFC1123("Mon, 02 Jan 2006 15:04:05 GMT") +// timestamps. Including the "unix" option signals that the field should be +// encoded as a Unix time (see time.Unix()) +// +// Slice and Array values default to encoding as multiple Header values of the +// same name. example: +// X-Name: []string{"Tom", "Jim"}, etc. +// +// http.Header values will be used to extend the Header fields. +// +// Anonymous struct fields are usually encoded as if their inner exported +// fields were fields in the outer struct, subject to the standard Go +// visibility rules. An anonymous struct field with a name given in its Header +// tag is treated as having that name, rather than being anonymous. +// +// Non-nil pointer values are encoded as the value pointed to. +// +// All other values are encoded using their default string representation. +// +// Multiple fields that encode to the same Header filed name will be included +// as multiple Header values of the same name. +func Header(v interface{}) (http.Header, error) { + h := make(http.Header) + val := reflect.ValueOf(v) + for val.Kind() == reflect.Ptr { + if val.IsNil() { + return h, nil + } + val = val.Elem() + } + + if v == nil { + return h, nil + } + + if val.Kind() != reflect.Struct { + return nil, fmt.Errorf("httpheader: Header() expects struct input. Got %v", val.Kind()) + } + + err := reflectValue(h, val) + return h, err +} + +// reflectValue populates the header fields from the struct fields in val. 
+
+// reflectValue populates the header fields from the struct fields in val.
+// Embedded structs are followed recursively (using the rules defined in the
+// Header function documentation) breadth-first.
+func reflectValue(header http.Header, val reflect.Value) error {
+	var embedded []reflect.Value
+
+	typ := val.Type()
+	for i := 0; i < typ.NumField(); i++ {
+		sf := typ.Field(i)
+		if sf.PkgPath != "" && !sf.Anonymous { // unexported
+			continue
+		}
+
+		sv := val.Field(i)
+		tag := sf.Tag.Get(tagName)
+		if tag == "-" {
+			continue
+		}
+		name, opts := parseTag(tag)
+		if name == "" {
+			if sf.Anonymous && sv.Kind() == reflect.Struct {
+				// save embedded struct for later processing
+				embedded = append(embedded, sv)
+				continue
+			}
+
+			name = sf.Name
+		}
+
+		if opts.Contains("omitempty") && isEmptyValue(sv) {
+			continue
+		}
+
+		if sv.Type().Implements(encoderType) {
+			if !reflect.Indirect(sv).IsValid() {
+				sv = reflect.New(sv.Type().Elem())
+			}
+
+			m := sv.Interface().(Encoder)
+			if err := m.EncodeHeader(name, &header); err != nil {
+				return err
+			}
+			continue
+		}
+
+		if sv.Kind() == reflect.Slice || sv.Kind() == reflect.Array {
+			for i := 0; i < sv.Len(); i++ {
+				header.Add(name, valueString(sv.Index(i), opts))
+			}
+			continue
+		}
+
+		for sv.Kind() == reflect.Ptr {
+			if sv.IsNil() {
+				break
+			}
+			sv = sv.Elem()
+		}
+
+		if sv.Type() == timeType {
+			header.Add(name, valueString(sv, opts))
+			continue
+		}
+		if sv.Type() == headerType {
+			h := sv.Interface().(http.Header)
+			for k, vs := range h {
+				for _, v := range vs {
+					header.Add(k, v)
+				}
+			}
+			continue
+		}
+
+		if sv.Kind() == reflect.Struct {
+			if err := reflectValue(header, sv); err != nil {
+				return err
+			}
+			continue
+		}
+
+		header.Add(name, valueString(sv, opts))
+	}
+
+	for _, f := range embedded {
+		if err := reflectValue(header, f); err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+// valueString returns the string representation of a value.
+func valueString(v reflect.Value, opts tagOptions) string {
+	for v.Kind() == reflect.Ptr {
+		if v.IsNil() {
+			return ""
+		}
+		v = v.Elem()
+	}
+
+	if v.Kind() == reflect.Bool && opts.Contains("int") {
+		if v.Bool() {
+			return "1"
+		}
+		return "0"
+	}
+
+	if v.Type() == timeType {
+		t := v.Interface().(time.Time)
+		if opts.Contains("unix") {
+			return strconv.FormatInt(t.Unix(), 10)
+		}
+		return t.Format(http.TimeFormat)
+	}
+
+	return fmt.Sprint(v.Interface())
+}
+
+// isEmptyValue checks if a value should be considered empty for the purposes
+// of omitting fields with the "omitempty" option.
+func isEmptyValue(v reflect.Value) bool {
+	switch v.Kind() {
+	case reflect.Array, reflect.Map, reflect.Slice, reflect.String:
+		return v.Len() == 0
+	case reflect.Bool:
+		return !v.Bool()
+	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+		return v.Int() == 0
+	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+		return v.Uint() == 0
+	case reflect.Float32, reflect.Float64:
+		return v.Float() == 0
+	case reflect.Interface, reflect.Ptr:
+		return v.IsNil()
+	}
+
+	if v.Type() == timeType {
+		return v.Interface().(time.Time).IsZero()
+	}
+
+	return false
+}
+
+// tagOptions is the string following a comma in a struct field's "header" tag, or
+// the empty string. It does not include the leading comma.
+type tagOptions []string
+
+// parseTag splits a struct field's header tag into its name and comma-separated
+// options.
+func parseTag(tag string) (string, tagOptions) {
+	s := strings.Split(tag, ",")
+	return s[0], s[1:]
+}
+
+// Contains checks whether the tagOptions contains the specified option.
+func (o tagOptions) Contains(option string) bool { + for _, s := range o { + if s == option { + return true + } + } + return false +} diff --git a/vendor/github.com/mozillazg/go-httpheader/go.mod b/vendor/github.com/mozillazg/go-httpheader/go.mod new file mode 100644 index 000000000..27af234eb --- /dev/null +++ b/vendor/github.com/mozillazg/go-httpheader/go.mod @@ -0,0 +1 @@ +module github.com/mozillazg/go-httpheader diff --git a/vendor/github.com/tencentcloud/tencentcloud-sdk-go/LICENSE b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/LICENSE new file mode 100644 index 000000000..efc75a225 --- /dev/null +++ b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!) The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright (c) 2017-2018 Tencent Ltd.
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
diff --git a/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/client.go b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/client.go
new file mode 100644
index 000000000..1927cb568
--- /dev/null
+++ b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/client.go
@@ -0,0 +1,261 @@
+package common
+
+import (
+	"encoding/hex"
+	"encoding/json"
+	"fmt"
+	"log"
+	"net/http"
+	"net/http/httputil"
+	"strconv"
+	"strings"
+	"time"
+
+	tchttp "github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/http"
+	"github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/profile"
+)
+
+type Client struct {
+	region          string
+	httpClient      *http.Client
+	httpProfile     *profile.HttpProfile
+	profile         *profile.ClientProfile
+	credential      *Credential
+	signMethod      string
+	unsignedPayload bool
+	debug           bool
+}
+
+func (c *Client) Send(request tchttp.Request, response tchttp.Response) (err error) {
+	if request.GetDomain() == "" {
+		domain := c.httpProfile.Endpoint
+		if domain == "" {
+			domain = tchttp.GetServiceDomain(request.GetService())
+		}
+		request.SetDomain(domain)
+	}
+
+	if request.GetHttpMethod() == "" {
+		request.SetHttpMethod(c.httpProfile.ReqMethod)
+	}
+
+	tchttp.CompleteCommonParams(request, c.GetRegion())
+
+	if c.signMethod == "HmacSHA1" || c.signMethod == "HmacSHA256" {
+		return c.sendWithSignatureV1(request, response)
+	}
+	return c.sendWithSignatureV3(request, response)
+}
+
+func (c *Client) sendWithSignatureV1(request tchttp.Request, response tchttp.Response) (err error) {
+	// TODO: this is not elegant; the language should be handled in the
+	// common params, but that will need a refactor eventually.
+	request.GetParams()["Language"] = c.profile.Language
+	err = tchttp.ConstructParams(request)
+	if err != nil {
+		return err
+	}
+	err = signRequest(request, c.credential, c.signMethod)
+	if err != nil {
+		return err
+	}
+	httpRequest, err := http.NewRequest(request.GetHttpMethod(), request.GetUrl(), request.GetBodyReader())
+	if err != nil {
+		return err
+	}
+	if request.GetHttpMethod() == "POST" {
+		httpRequest.Header["Content-Type"] = []string{"application/x-www-form-urlencoded"}
+	}
+	if c.debug {
+		outbytes, err := httputil.DumpRequest(httpRequest, true)
+		if err != nil {
+			log.Printf("[ERROR] dump request failed because %s", err)
+			return err
+		}
+		log.Printf("[DEBUG] http request = %s", outbytes)
+	}
+	httpResponse, err := c.httpClient.Do(httpRequest)
+	if err != nil {
+		return err
+	}
+	err = tchttp.ParseFromHttpResponse(httpResponse, response)
+	return err
+}
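+
+// sendWithSignatureV3 signs the request with the TC3-HMAC-SHA256 scheme.
+// Roughly, as a summary of the code below (not a normative description of
+// the protocol): a canonical request string is hashed, combined with the
+// timestamp and a date/service credential scope into a string to sign, and
+// then signed with a key derived by chaining HMAC-SHA256 over the date, the
+// service name, and the literal "tc3_request", starting from
+// "TC3"+SecretKey.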
+func (c *Client) sendWithSignatureV3(request tchttp.Request, response tchttp.Response) (err error) {
+	headers := map[string]string{
+		"Host":               request.GetDomain(),
+		"X-TC-Action":        request.GetAction(),
+		"X-TC-Version":       request.GetVersion(),
+		"X-TC-Timestamp":     request.GetParams()["Timestamp"],
+		"X-TC-RequestClient": request.GetParams()["RequestClient"],
+		"X-TC-Language":      c.profile.Language,
+	}
+	if c.region != "" {
+		headers["X-TC-Region"] = c.region
+	}
+	if c.credential.Token != "" {
+		headers["X-TC-Token"] = c.credential.Token
+	}
+	if request.GetHttpMethod() == "GET" {
+		headers["Content-Type"] = "application/x-www-form-urlencoded"
+	} else {
+		headers["Content-Type"] = "application/json"
+	}
+
+	// start signature v3 process
+
+	// build canonical request string
+	httpRequestMethod := request.GetHttpMethod()
+	canonicalURI := "/"
+	canonicalQueryString := ""
+	if httpRequestMethod == "GET" {
+		err = tchttp.ConstructParams(request)
+		if err != nil {
+			return err
+		}
+		params := make(map[string]string)
+		for key, value := range request.GetParams() {
+			params[key] = value
+		}
+		delete(params, "Action")
+		delete(params, "Version")
+		delete(params, "Nonce")
+		delete(params, "Region")
+		delete(params, "RequestClient")
+		delete(params, "Timestamp")
+		canonicalQueryString = tchttp.GetUrlQueriesEncoded(params)
+	}
+	canonicalHeaders := fmt.Sprintf("content-type:%s\nhost:%s\n", headers["Content-Type"], headers["Host"])
+	signedHeaders := "content-type;host"
+	requestPayload := ""
+	if httpRequestMethod == "POST" {
+		b, err := json.Marshal(request)
+		if err != nil {
+			return err
+		}
+		requestPayload = string(b)
+	}
+	hashedRequestPayload := ""
+	if c.unsignedPayload {
+		hashedRequestPayload = sha256hex("UNSIGNED-PAYLOAD")
+		headers["X-TC-Content-SHA256"] = "UNSIGNED-PAYLOAD"
+	} else {
+		hashedRequestPayload = sha256hex(requestPayload)
+	}
+	canonicalRequest := fmt.Sprintf("%s\n%s\n%s\n%s\n%s\n%s",
+		httpRequestMethod,
+		canonicalURI,
+		canonicalQueryString,
+		canonicalHeaders,
+		signedHeaders,
+		hashedRequestPayload)
+	//log.Println("canonicalRequest:", canonicalRequest)
+
+	// build string to sign
+	algorithm := "TC3-HMAC-SHA256"
+	requestTimestamp := headers["X-TC-Timestamp"]
+	timestamp, _ := strconv.ParseInt(requestTimestamp, 10, 64)
+	t := time.Unix(timestamp, 0).UTC()
+	// must be in the format 2006-01-02; refer to package time for more info
+	date := t.Format("2006-01-02")
+	credentialScope := fmt.Sprintf("%s/%s/tc3_request", date, request.GetService())
+	hashedCanonicalRequest := sha256hex(canonicalRequest)
+	string2sign := fmt.Sprintf("%s\n%s\n%s\n%s",
+		algorithm,
+		requestTimestamp,
+		credentialScope,
+		hashedCanonicalRequest)
+	//log.Println("string2sign", string2sign)
+
+	// sign string
+	secretDate := hmacsha256(date, "TC3"+c.credential.SecretKey)
+	secretService := hmacsha256(request.GetService(), secretDate)
+	secretKey := hmacsha256("tc3_request", secretService)
+	signature := hex.EncodeToString([]byte(hmacsha256(string2sign, secretKey)))
+	//log.Println("signature", signature)
+
+	// build authorization
+	authorization := fmt.Sprintf("%s Credential=%s/%s, SignedHeaders=%s, Signature=%s",
+		algorithm,
+		c.credential.SecretId,
+		credentialScope,
+		signedHeaders,
+		signature)
+	//log.Println("authorization", authorization)
+
+	headers["Authorization"] = authorization
+	url := "https://" + request.GetDomain() + request.GetPath()
+	if canonicalQueryString != "" {
+		url = url + "?" + canonicalQueryString
+	}
+	httpRequest, err := http.NewRequest(httpRequestMethod, url, strings.NewReader(requestPayload))
+	if err != nil {
+		return err
+	}
+	for k, v := range headers {
+		httpRequest.Header[k] = []string{v}
+	}
+	if c.debug {
+		outbytes, err := httputil.DumpRequest(httpRequest, true)
+		if err != nil {
+			log.Printf("[ERROR] dump request failed because %s", err)
+			return err
+		}
+		log.Printf("[DEBUG] http request = %s", outbytes)
+	}
+	httpResponse, err := c.httpClient.Do(httpRequest)
+	if err != nil {
+		return err
+	}
+	err = tchttp.ParseFromHttpResponse(httpResponse, response)
+	return err
+}
+
+func (c *Client) GetRegion() string {
+	return c.region
+}
+
+func (c *Client) Init(region string) *Client {
+	c.httpClient = &http.Client{}
+	c.region = region
+	c.signMethod = "TC3-HMAC-SHA256"
+	c.debug = false
+	log.SetFlags(log.LstdFlags | log.Lshortfile)
+	return c
+}
+
+func (c *Client) WithSecretId(secretId, secretKey string) *Client {
+	c.credential = NewCredential(secretId, secretKey)
+	return c
+}
+
+func (c *Client) WithCredential(cred *Credential) *Client {
+	c.credential = cred
+	return c
+}
+
+func (c *Client) WithProfile(clientProfile *profile.ClientProfile) *Client {
+	c.profile = clientProfile
+	c.signMethod = clientProfile.SignMethod
+	c.unsignedPayload = clientProfile.UnsignedPayload
+	c.httpProfile = clientProfile.HttpProfile
+	c.httpClient.Timeout = time.Duration(c.httpProfile.ReqTimeout) * time.Second
+	return c
+}
+
+func (c *Client) WithSignatureMethod(method string) *Client {
+	c.signMethod = method
+	return c
+}
+
+func (c *Client) WithHttpTransport(transport http.RoundTripper) *Client {
+	c.httpClient.Transport = transport
+	return c
+}
+
+func NewClientWithSecretId(secretId, secretKey, region string) (client *Client, err error) {
+	client = &Client{}
+	client.Init(region).WithSecretId(secretId, secretKey)
+	return
+}
diff --git a/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/credentials.go b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/credentials.go
new file mode 100644
index 000000000..b734c1373
--- /dev/null
+++ b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/credentials.go
@@ -0,0 +1,58 @@
+package common
+
+type Credential struct {
+	SecretId  string
+	SecretKey string
+	Token     string
+}
+
+func NewCredential(secretId, secretKey string) *Credential {
+	return &Credential{
+		SecretId:  secretId,
+		SecretKey: secretKey,
+	}
+}
+
+func NewTokenCredential(secretId, secretKey, token string) *Credential {
+	return &Credential{
+		SecretId:  secretId,
+		SecretKey: secretKey,
+		Token:     token,
+	}
+}
+
+func (c *Credential) GetCredentialParams() map[string]string {
+	p := map[string]string{
+		"SecretId": c.SecretId,
+	}
+	if c.Token != "" {
+		p["Token"] = c.Token
+	}
+	return p
+}
+
+// These structures were never used anywhere, and neither they nor their
+// underlying methods were well designed, which made it hard to refactor
+// them into interfaces. Hence they were removed and merged into Credential.
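+
+// For reference, both long-lived and temporary credentials now go through
+// the same type, as seen from a caller (the values below are placeholders):
+//
+//	cred := common.NewCredential("AKIDxxx", "secretxxx")
+//	tmpCred := common.NewTokenCredential("AKIDxxx", "secretxxx", "tokenxxx")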
+ +//type TokenCredential struct { +// SecretId string +// SecretKey string +// Token string +//} + +//func NewTokenCredential(secretId, secretKey, token string) *TokenCredential { +// return &TokenCredential{ +// SecretId: secretId, +// SecretKey: secretKey, +// Token: token, +// } +//} + +//func (c *TokenCredential) GetCredentialParams() map[string]string { +// return map[string]string{ +// "SecretId": c.SecretId, +// "Token": c.Token, +// } +//} diff --git a/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/errors/errors.go b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/errors/errors.go new file mode 100644 index 000000000..27589e59a --- /dev/null +++ b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/errors/errors.go @@ -0,0 +1,35 @@ +package errors + +import ( + "fmt" +) + +type TencentCloudSDKError struct { + Code string + Message string + RequestId string +} + +func (e *TencentCloudSDKError) Error() string { + return fmt.Sprintf("[TencentCloudSDKError] Code=%s, Message=%s, RequestId=%s", e.Code, e.Message, e.RequestId) +} + +func NewTencentCloudSDKError(code, message, requestId string) error { + return &TencentCloudSDKError{ + Code: code, + Message: message, + RequestId: requestId, + } +} + +func (e *TencentCloudSDKError) GetCode() string { + return e.Code +} + +func (e *TencentCloudSDKError) GetMessage() string { + return e.Message +} + +func (e *TencentCloudSDKError) GetRequestId() string { + return e.RequestId +} diff --git a/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/http/request.go b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/http/request.go new file mode 100644 index 000000000..c7912ad18 --- /dev/null +++ b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/http/request.go @@ -0,0 +1,233 @@ +package common + +import ( + "io" + //"log" + "math/rand" + "net/url" + "reflect" + "strconv" + "strings" + "time" +) + +const ( + POST = "POST" + GET = "GET" + + RootDomain = "tencentcloudapi.com" + Path = "/" +) + +type Request interface { + GetAction() string + GetBodyReader() io.Reader + GetDomain() string + GetHttpMethod() string + GetParams() map[string]string + GetPath() string + GetService() string + GetUrl() string + GetVersion() string + SetDomain(string) + SetHttpMethod(string) +} + +type BaseRequest struct { + httpMethod string + domain string + path string + params map[string]string + formParams map[string]string + + service string + version string + action string +} + +func (r *BaseRequest) GetAction() string { + return r.action +} + +func (r *BaseRequest) GetHttpMethod() string { + return r.httpMethod +} + +func (r *BaseRequest) GetParams() map[string]string { + return r.params +} + +func (r *BaseRequest) GetPath() string { + return r.path +} + +func (r *BaseRequest) GetDomain() string { + return r.domain +} + +func (r *BaseRequest) SetDomain(domain string) { + r.domain = domain +} + +func (r *BaseRequest) SetHttpMethod(method string) { + switch strings.ToUpper(method) { + case POST: + { + r.httpMethod = POST + } + case GET: + { + r.httpMethod = GET + } + default: + { + r.httpMethod = GET + } + } +} + +func (r *BaseRequest) GetService() string { + return r.service +} + +func (r *BaseRequest) GetUrl() string { + if r.httpMethod == GET { + return "https://" + r.domain + r.path + "?" 
+ GetUrlQueriesEncoded(r.params)
+	} else if r.httpMethod == POST {
+		return "https://" + r.domain + r.path
+	} else {
+		return ""
+	}
+}
+
+func (r *BaseRequest) GetVersion() string {
+	return r.version
+}
+
+func GetUrlQueriesEncoded(params map[string]string) string {
+	values := url.Values{}
+	for key, value := range params {
+		if value != "" {
+			values.Add(key, value)
+		}
+	}
+	return values.Encode()
+}
+
+func (r *BaseRequest) GetBodyReader() io.Reader {
+	if r.httpMethod == POST {
+		s := GetUrlQueriesEncoded(r.params)
+		return strings.NewReader(s)
+	}
+	return strings.NewReader("")
+}
+
+func (r *BaseRequest) Init() *BaseRequest {
+	r.domain = ""
+	r.path = Path
+	r.params = make(map[string]string)
+	r.formParams = make(map[string]string)
+	return r
+}
+
+func (r *BaseRequest) WithApiInfo(service, version, action string) *BaseRequest {
+	r.service = service
+	r.version = version
+	r.action = action
+	return r
+}
+
+func GetServiceDomain(service string) (domain string) {
+	domain = service + "." + RootDomain
+	return
+}
+
+func CompleteCommonParams(request Request, region string) {
+	params := request.GetParams()
+	params["Region"] = region
+	if request.GetVersion() != "" {
+		params["Version"] = request.GetVersion()
+	}
+	params["Action"] = request.GetAction()
+	params["Timestamp"] = strconv.FormatInt(time.Now().Unix(), 10)
+	params["Nonce"] = strconv.Itoa(rand.Int())
+	params["RequestClient"] = "SDK_GO_3.0.82"
+}
+
+func ConstructParams(req Request) (err error) {
+	value := reflect.ValueOf(req).Elem()
+	err = flatStructure(value, req, "")
+	//log.Printf("[DEBUG] params=%s", req.GetParams())
+	return
+}
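+
+// flatStructure walks the struct fields in value (following their "name"
+// tags) and flattens them into request params. Nested structs and slices are
+// encoded with dotted, zero-indexed keys; for example, a slice field tagged
+// `name:"Filters"` whose elements have a field tagged `name:"Name"` would
+// produce keys like "Filters.0.Name", "Filters.1.Name", and so on (these
+// Filters/Name tag names are only an illustration).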
+func flatStructure(value reflect.Value, request Request, prefix string) (err error) {
+	//log.Printf("[DEBUG] reflect value: %v", value.Type())
+	valueType := value.Type()
+	for i := 0; i < valueType.NumField(); i++ {
+		tag := valueType.Field(i).Tag
+		nameTag, hasNameTag := tag.Lookup("name")
+		if !hasNameTag {
+			continue
+		}
+		field := value.Field(i)
+		kind := field.Kind()
+		if kind == reflect.Ptr && field.IsNil() {
+			continue
+		}
+		if kind == reflect.Ptr {
+			field = field.Elem()
+			kind = field.Kind()
+		}
+		key := prefix + nameTag
+		if kind == reflect.String {
+			s := field.String()
+			if s != "" {
+				request.GetParams()[key] = s
+			}
+		} else if kind == reflect.Bool {
+			request.GetParams()[key] = strconv.FormatBool(field.Bool())
+		} else if kind == reflect.Int || kind == reflect.Int64 {
+			request.GetParams()[key] = strconv.FormatInt(field.Int(), 10)
+		} else if kind == reflect.Uint || kind == reflect.Uint64 {
+			request.GetParams()[key] = strconv.FormatUint(field.Uint(), 10)
+		} else if kind == reflect.Float64 {
+			request.GetParams()[key] = strconv.FormatFloat(field.Float(), 'f', -1, 64)
+		} else if kind == reflect.Slice {
+			list := value.Field(i)
+			for j := 0; j < list.Len(); j++ {
+				vj := list.Index(j)
+				key := prefix + nameTag + "." + strconv.Itoa(j)
+				kind = vj.Kind()
+				if kind == reflect.Ptr && vj.IsNil() {
+					continue
+				}
+				if kind == reflect.Ptr {
+					vj = vj.Elem()
+					kind = vj.Kind()
+				}
+				if kind == reflect.String {
+					request.GetParams()[key] = vj.String()
+				} else if kind == reflect.Bool {
+					request.GetParams()[key] = strconv.FormatBool(vj.Bool())
+				} else if kind == reflect.Int || kind == reflect.Int64 {
+					request.GetParams()[key] = strconv.FormatInt(vj.Int(), 10)
+				} else if kind == reflect.Uint || kind == reflect.Uint64 {
+					request.GetParams()[key] = strconv.FormatUint(vj.Uint(), 10)
+				} else if kind == reflect.Float64 {
+					request.GetParams()[key] = strconv.FormatFloat(vj.Float(), 'f', -1, 64)
+				} else {
+					if err = flatStructure(vj, request, key+"."); err != nil {
+						return
+					}
+				}
+			}
+		} else {
+			if err = flatStructure(reflect.ValueOf(field.Interface()), request, prefix+nameTag+"."); err != nil {
+				return
+			}
+		}
+	}
+	return
+}
diff --git a/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/http/response.go b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/http/response.go
new file mode 100644
index 000000000..288f21bdf
--- /dev/null
+++ b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/http/response.go
@@ -0,0 +1,81 @@
+package common
+
+import (
+	"encoding/json"
+	"fmt"
+	"io/ioutil"
+	//"log"
+	"net/http"
+
+	"github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/errors"
+)
+
+type Response interface {
+	ParseErrorFromHTTPResponse(body []byte) error
+}
+
+type BaseResponse struct {
+}
+
+type ErrorResponse struct {
+	Response struct {
+		Error struct {
+			Code    string `json:"Code"`
+			Message string `json:"Message"`
+		} `json:"Error,omitempty"`
+		RequestId string `json:"RequestId"`
+	} `json:"Response"`
+}
+
+type DeprecatedAPIErrorResponse struct {
+	Code     int    `json:"code"`
+	Message  string `json:"message"`
+	CodeDesc string `json:"codeDesc"`
+}
+
+func (r *BaseResponse) ParseErrorFromHTTPResponse(body []byte) (err error) {
+	resp := &ErrorResponse{}
+	err = json.Unmarshal(body, resp)
+	if err != nil {
+		msg := fmt.Sprintf("Failed to parse json content: %s, because: %s", body, err)
+		return errors.NewTencentCloudSDKError("ClientError.ParseJsonError", msg, "")
+	}
+	if resp.Response.Error.Code != "" {
+		return errors.NewTencentCloudSDKError(resp.Response.Error.Code, resp.Response.Error.Message, resp.Response.RequestId)
+	}
+
+	deprecated := &DeprecatedAPIErrorResponse{}
+	err = json.Unmarshal(body, deprecated)
+	if err != nil {
+		msg := fmt.Sprintf("Failed to parse json content: %s, because: %s", body, err)
+		return errors.NewTencentCloudSDKError("ClientError.ParseJsonError", msg, "")
+	}
+	if deprecated.Code != 0 {
+		return errors.NewTencentCloudSDKError(deprecated.CodeDesc, deprecated.Message, "")
+	}
+	return nil
+}
+
+func ParseFromHttpResponse(hr *http.Response, response Response) (err error) {
+	defer hr.Body.Close()
+	body, err := ioutil.ReadAll(hr.Body)
+	if err != nil {
+		msg := fmt.Sprintf("Failed to read response body because %s", err)
+		return errors.NewTencentCloudSDKError("ClientError.IOError", msg, "")
+	}
+	if hr.StatusCode != 200 {
+		msg := fmt.Sprintf("Request failed with http status code: %s, with body: %s", hr.Status, body)
+		return errors.NewTencentCloudSDKError("ClientError.HttpStatusCodeError", msg, "")
+	}
+	//log.Printf("[DEBUG] Response Body=%s", body)
+	err = response.ParseErrorFromHTTPResponse(body)
+	if err != nil {
+		return
+	}
+	err = json.Unmarshal(body, &response)
+	if err != nil {
+		msg := fmt.Sprintf("Failed to parse json content: %s, because: %s", body, err)
+		return errors.NewTencentCloudSDKError("ClientError.ParseJsonError", msg, "")
+	}
+	return
+}
diff --git a/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/profile/client_profile.go b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/profile/client_profile.go
new file mode 100644
index 000000000..21069ff99
--- /dev/null
+++ b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/profile/client_profile.go
@@ -0,0 +1,21 @@
+package profile
+
+type ClientProfile struct {
+	HttpProfile *HttpProfile
+	// Valid choices: HmacSHA1, HmacSHA256, TC3-HMAC-SHA256.
+	// Default value is TC3-HMAC-SHA256.
+	SignMethod      string
+	UnsignedPayload bool
+	// Valid choices: zh-CN, en-US.
+	// Default value is zh-CN.
+	Language string
+}
+
+func NewClientProfile() *ClientProfile {
+	return &ClientProfile{
+		HttpProfile:     NewHttpProfile(),
+		SignMethod:      "TC3-HMAC-SHA256",
+		UnsignedPayload: false,
+		Language:        "zh-CN",
+	}
+}
diff --git a/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/profile/http_profile.go b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/profile/http_profile.go
new file mode 100644
index 000000000..8d4bf8f57
--- /dev/null
+++ b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/profile/http_profile.go
@@ -0,0 +1,17 @@
+package profile
+
+type HttpProfile struct {
+	ReqMethod  string
+	ReqTimeout int
+	Endpoint   string
+	Protocol   string
+}
+
+func NewHttpProfile() *HttpProfile {
+	return &HttpProfile{
+		ReqMethod:  "POST",
+		ReqTimeout: 60,
+		Endpoint:   "",
+		Protocol:   "HTTPS",
+	}
+}
diff --git a/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/sign.go b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/sign.go
new file mode 100644
index 000000000..0aa7b7355
--- /dev/null
+++ b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/sign.go
@@ -0,0 +1,94 @@
+package common
+
+import (
+	"bytes"
+	"crypto/hmac"
+	"crypto/sha1"
+	"crypto/sha256"
+	"encoding/base64"
+	"encoding/hex"
+	"sort"
+
+	tchttp "github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/http"
+)
+
+const (
+	SHA256 = "HmacSHA256"
+	SHA1   = "HmacSHA1"
+)
+
+func Sign(s, secretKey, method string) string {
+	hashed := hmac.New(sha1.New, []byte(secretKey))
+	if method == SHA256 {
+		hashed = hmac.New(sha256.New, []byte(secretKey))
+	}
+	hashed.Write([]byte(s))
+
+	return base64.StdEncoding.EncodeToString(hashed.Sum(nil))
+}
+
+func sha256hex(s string) string {
+	b := sha256.Sum256([]byte(s))
+	return hex.EncodeToString(b[:])
+}
+
+func hmacsha256(s, key string) string {
+	hashed := hmac.New(sha256.New, []byte(key))
+	hashed.Write([]byte(s))
+	return string(hashed.Sum(nil))
+}
+
+func signRequest(request tchttp.Request, credential *Credential, method string) (err error) {
+	if method != SHA256 {
+		method = SHA1
+	}
+	checkAuthParams(request, credential, method)
+	s := getStringToSign(request)
+	signature := Sign(s, credential.SecretKey, method)
+	request.GetParams()["Signature"] = signature
+	return
+}
+
+func checkAuthParams(request tchttp.Request, credential *Credential, method string) {
+	params := request.GetParams()
+	credentialParams := credential.GetCredentialParams()
+	for key, value := range credentialParams {
+		params[key] = value
+	}
+	params["SignatureMethod"] = method
+	delete(params, "Signature")
+}
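+
+// getStringToSign builds the canonical string for the v1 (HmacSHA1 /
+// HmacSHA256) signature: the HTTP method, domain, and path, followed by "?"
+// and the request parameters sorted by key and joined as k=v pairs with "&".
+// Parameters with empty values are skipped.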
+func getStringToSign(request tchttp.Request) string {
+	method := request.GetHttpMethod()
+	domain := request.GetDomain()
+	path := request.GetPath()
+
+	var buf bytes.Buffer
+	buf.WriteString(method)
+	buf.WriteString(domain)
+	buf.WriteString(path)
+	buf.WriteString("?")
+
+	params := request.GetParams()
+	// sort params
+	keys := make([]string, 0, len(params))
+	for k := range params {
+		keys = append(keys, k)
+	}
+	sort.Strings(keys)
+
+	for i := range keys {
+		k := keys[i]
+		// TODO: check if server side allows empty value in url.
+		if params[k] == "" {
+			continue
+		}
+		buf.WriteString(k)
+		buf.WriteString("=")
+		buf.WriteString(params[k])
+		buf.WriteString("&")
+	}
+	buf.Truncate(buf.Len() - 1)
+	return buf.String()
+}
diff --git a/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/types.go b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/types.go
new file mode 100644
index 000000000..ec2c786db
--- /dev/null
+++ b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/types.go
@@ -0,0 +1,47 @@
+package common
+
+func IntPtr(v int) *int {
+	return &v
+}
+
+func Int64Ptr(v int64) *int64 {
+	return &v
+}
+
+func UintPtr(v uint) *uint {
+	return &v
+}
+
+func Uint64Ptr(v uint64) *uint64 {
+	return &v
+}
+
+func Float64Ptr(v float64) *float64 {
+	return &v
+}
+
+func StringPtr(v string) *string {
+	return &v
+}
+
+func StringValues(ptrs []*string) []string {
+	values := make([]string, len(ptrs))
+	for i := 0; i < len(ptrs); i++ {
+		if ptrs[i] != nil {
+			values[i] = *ptrs[i]
+		}
+	}
+	return values
+}
+
+func StringPtrs(vals []string) []*string {
+	ptrs := make([]*string, len(vals))
+	for i := 0; i < len(vals); i++ {
+		ptrs[i] = &vals[i]
+	}
+	return ptrs
+}
+
+func BoolPtr(v bool) *bool {
+	return &v
+}
diff --git a/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/tag/v20180813/client.go b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/tag/v20180813/client.go
new file mode 100644
index 000000000..4980f915d
--- /dev/null
+++ b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/tag/v20180813/client.go
@@ -0,0 +1,294 @@
+// Copyright (c) 2017-2018 THL A29 Limited, a Tencent company. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//    http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package v20180813
+
+import (
+	"github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common"
+	tchttp "github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/http"
+	"github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/profile"
+)
+
+const APIVersion = "2018-08-13"
+
+type Client struct {
+	common.Client
+}
+
+// Deprecated: use NewClient instead.
+func NewClientWithSecretId(secretId, secretKey, region string) (client *Client, err error) {
+	cpf := profile.NewClientProfile()
+	client = &Client{}
+	client.Init(region).WithSecretId(secretId, secretKey).WithProfile(cpf)
+	return
+}
+
+func NewClient(credential *common.Credential, region string, clientProfile *profile.ClientProfile) (client *Client, err error) {
+	client = &Client{}
+	client.Init(region).
+		WithCredential(credential).
+		WithProfile(clientProfile)
+	return
+}
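+
+// For reference, a typical caller-side construction looks like this (the
+// region and credential values are placeholders):
+//
+//	credential := common.NewCredential("secretId", "secretKey")
+//	cpf := profile.NewClientProfile()
+//	client, _ := NewClient(credential, "ap-guangzhou", cpf)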
+
+func NewAddResourceTagRequest() (request *AddResourceTagRequest) {
+	request = &AddResourceTagRequest{
+		BaseRequest: &tchttp.BaseRequest{},
+	}
+	request.Init().WithApiInfo("tag", APIVersion, "AddResourceTag")
+	return
+}
+
+func NewAddResourceTagResponse() (response *AddResourceTagResponse) {
+	response = &AddResourceTagResponse{
+		BaseResponse: &tchttp.BaseResponse{},
+	}
+	return
+}
+
+// This API is used to associate resources with a tag.
+func (c *Client) AddResourceTag(request *AddResourceTagRequest) (response *AddResourceTagResponse, err error) {
+	if request == nil {
+		request = NewAddResourceTagRequest()
+	}
+	response = NewAddResourceTagResponse()
+	err = c.Send(request, response)
+	return
+}
+
+func NewCreateTagRequest() (request *CreateTagRequest) {
+	request = &CreateTagRequest{
+		BaseRequest: &tchttp.BaseRequest{},
+	}
+	request.Init().WithApiInfo("tag", APIVersion, "CreateTag")
+	return
+}
+
+func NewCreateTagResponse() (response *CreateTagResponse) {
+	response = &CreateTagResponse{
+		BaseResponse: &tchttp.BaseResponse{},
+	}
+	return
+}
+
+// This API is used to create a tag key and tag value pair.
+func (c *Client) CreateTag(request *CreateTagRequest) (response *CreateTagResponse, err error) {
+	if request == nil {
+		request = NewCreateTagRequest()
+	}
+	response = NewCreateTagResponse()
+	err = c.Send(request, response)
+	return
+}
+
+func NewDeleteResourceTagRequest() (request *DeleteResourceTagRequest) {
+	request = &DeleteResourceTagRequest{
+		BaseRequest: &tchttp.BaseRequest{},
+	}
+	request.Init().WithApiInfo("tag", APIVersion, "DeleteResourceTag")
+	return
+}
+
+func NewDeleteResourceTagResponse() (response *DeleteResourceTagResponse) {
+	response = &DeleteResourceTagResponse{
+		BaseResponse: &tchttp.BaseResponse{},
+	}
+	return
+}
+
+// This API is used to disassociate a tag from a resource.
+func (c *Client) DeleteResourceTag(request *DeleteResourceTagRequest) (response *DeleteResourceTagResponse, err error) {
+	if request == nil {
+		request = NewDeleteResourceTagRequest()
+	}
+	response = NewDeleteResourceTagResponse()
+	err = c.Send(request, response)
+	return
+}
+
+func NewDeleteTagRequest() (request *DeleteTagRequest) {
+	request = &DeleteTagRequest{
+		BaseRequest: &tchttp.BaseRequest{},
+	}
+	request.Init().WithApiInfo("tag", APIVersion, "DeleteTag")
+	return
+}
+
+func NewDeleteTagResponse() (response *DeleteTagResponse) {
+	response = &DeleteTagResponse{
+		BaseResponse: &tchttp.BaseResponse{},
+	}
+	return
+}
+
+// This API is used to delete a tag key and tag value pair.
+func (c *Client) DeleteTag(request *DeleteTagRequest) (response *DeleteTagResponse, err error) {
+	if request == nil {
+		request = NewDeleteTagRequest()
+	}
+	response = NewDeleteTagResponse()
+	err = c.Send(request, response)
+	return
+}
+
+func NewDescribeResourceTagsByResourceIdsRequest() (request *DescribeResourceTagsByResourceIdsRequest) {
+	request = &DescribeResourceTagsByResourceIdsRequest{
+		BaseRequest: &tchttp.BaseRequest{},
+	}
+	request.Init().WithApiInfo("tag", APIVersion, "DescribeResourceTagsByResourceIds")
+	return
+}
+
+func NewDescribeResourceTagsByResourceIdsResponse() (response *DescribeResourceTagsByResourceIdsResponse) {
+	response = &DescribeResourceTagsByResourceIdsResponse{
+		BaseResponse: &tchttp.BaseResponse{},
+	}
+	return
+}
+
+// This API is used to query the tag key-value pairs of existing resources.
+func (c *Client) DescribeResourceTagsByResourceIds(request *DescribeResourceTagsByResourceIdsRequest) (response *DescribeResourceTagsByResourceIdsResponse, err error) {
+	if request == nil {
+		request = NewDescribeResourceTagsByResourceIdsRequest()
+	}
+	response = NewDescribeResourceTagsByResourceIdsResponse()
+	err = c.Send(request, response)
+	return
+}
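+
+// For reference, a request is typically populated through the pointer
+// helpers in the common package; a caller-side sketch (all values are
+// placeholders):
+//
+//	req := NewDescribeResourceTagsByResourceIdsRequest()
+//	req.ServiceType = common.StringPtr("cvm")
+//	req.ResourcePrefix = common.StringPtr("instance")
+//	req.ResourceIds = common.StringPtrs([]string{"ins-xxxxxxxx"})
+//	req.ResourceRegion = common.StringPtr("ap-guangzhou")
+//	resp, err := client.DescribeResourceTagsByResourceIds(req)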
+
+func NewDescribeTagKeysRequest() (request *DescribeTagKeysRequest) {
+	request = &DescribeTagKeysRequest{
+		BaseRequest: &tchttp.BaseRequest{},
+	}
+	request.Init().WithApiInfo("tag", APIVersion, "DescribeTagKeys")
+	return
+}
+
+func NewDescribeTagKeysResponse() (response *DescribeTagKeysResponse) {
+	response = &DescribeTagKeysResponse{
+		BaseResponse: &tchttp.BaseResponse{},
+	}
+	return
+}
+
+// This API is used to query tag keys in the list of created tags.
+func (c *Client) DescribeTagKeys(request *DescribeTagKeysRequest) (response *DescribeTagKeysResponse, err error) {
+	if request == nil {
+		request = NewDescribeTagKeysRequest()
+	}
+	response = NewDescribeTagKeysResponse()
+	err = c.Send(request, response)
+	return
+}
+
+func NewDescribeTagValuesRequest() (request *DescribeTagValuesRequest) {
+	request = &DescribeTagValuesRequest{
+		BaseRequest: &tchttp.BaseRequest{},
+	}
+	request.Init().WithApiInfo("tag", APIVersion, "DescribeTagValues")
+	return
+}
+
+func NewDescribeTagValuesResponse() (response *DescribeTagValuesResponse) {
+	response = &DescribeTagValuesResponse{
+		BaseResponse: &tchttp.BaseResponse{},
+	}
+	return
+}
+
+// This API is used to query tag values in the list of created tags.
+func (c *Client) DescribeTagValues(request *DescribeTagValuesRequest) (response *DescribeTagValuesResponse, err error) {
+	if request == nil {
+		request = NewDescribeTagValuesRequest()
+	}
+	response = NewDescribeTagValuesResponse()
+	err = c.Send(request, response)
+	return
+}
+
+func NewDescribeTagsRequest() (request *DescribeTagsRequest) {
+	request = &DescribeTagsRequest{
+		BaseRequest: &tchttp.BaseRequest{},
+	}
+	request.Init().WithApiInfo("tag", APIVersion, "DescribeTags")
+	return
+}
+
+func NewDescribeTagsResponse() (response *DescribeTagsResponse) {
+	response = &DescribeTagsResponse{
+		BaseResponse: &tchttp.BaseResponse{},
+	}
+	return
+}
+
+// This API is used to query the list of created tags.
+func (c *Client) DescribeTags(request *DescribeTagsRequest) (response *DescribeTagsResponse, err error) {
+	if request == nil {
+		request = NewDescribeTagsRequest()
+	}
+	response = NewDescribeTagsResponse()
+	err = c.Send(request, response)
+	return
+}
+
+func NewModifyResourceTagsRequest() (request *ModifyResourceTagsRequest) {
+	request = &ModifyResourceTagsRequest{
+		BaseRequest: &tchttp.BaseRequest{},
+	}
+	request.Init().WithApiInfo("tag", APIVersion, "ModifyResourceTags")
+	return
+}
+
+func NewModifyResourceTagsResponse() (response *ModifyResourceTagsResponse) {
+	response = &ModifyResourceTagsResponse{
+		BaseResponse: &tchttp.BaseResponse{},
+	}
+	return
+}
+
+// This API is used to modify all tags associated with a resource.
+func (c *Client) ModifyResourceTags(request *ModifyResourceTagsRequest) (response *ModifyResourceTagsResponse, err error) {
+	if request == nil {
+		request = NewModifyResourceTagsRequest()
+	}
+	response = NewModifyResourceTagsResponse()
+	err = c.Send(request, response)
+	return
+}
+
+func NewUpdateResourceTagValueRequest() (request *UpdateResourceTagValueRequest) {
+	request = &UpdateResourceTagValueRequest{
+		BaseRequest: &tchttp.BaseRequest{},
+	}
+	request.Init().WithApiInfo("tag", APIVersion, "UpdateResourceTagValue")
+	return
+}
+
+func NewUpdateResourceTagValueResponse() (response *UpdateResourceTagValueResponse) {
+	response = &UpdateResourceTagValueResponse{
+		BaseResponse: &tchttp.BaseResponse{},
+	}
+	return
+}
+
+// This API is used to modify the tag value of a tag associated with a resource (the tag key remains unchanged).
+func (c *Client) UpdateResourceTagValue(request *UpdateResourceTagValueRequest) (response *UpdateResourceTagValueResponse, err error) {
+	if request == nil {
+		request = NewUpdateResourceTagValueRequest()
+	}
+	response = NewUpdateResourceTagValueResponse()
+	err = c.Send(request, response)
+	return
+}
diff --git a/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/tag/v20180813/models.go b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/tag/v20180813/models.go
new file mode 100644
index 000000000..b326cddbe
--- /dev/null
+++ b/vendor/github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/tag/v20180813/models.go
@@ -0,0 +1,523 @@
+// Copyright (c) 2017-2018 THL A29 Limited, a Tencent company. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//    http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package v20180813
+
+import (
+	"encoding/json"
+
+	tchttp "github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/http"
+)
+
+type AddResourceTagRequest struct {
+	*tchttp.BaseRequest
+
+	// Tag key
+	TagKey *string `json:"TagKey,omitempty" name:"TagKey"`
+
+	// Tag value
+	TagValue *string `json:"TagValue,omitempty" name:"TagValue"`
+
+	// Six-segment resource description
+	Resource *string `json:"Resource,omitempty" name:"Resource"`
+}
+
+func (r *AddResourceTagRequest) ToJsonString() string {
+	b, _ := json.Marshal(r)
+	return string(b)
+}
+
+func (r *AddResourceTagRequest) FromJsonString(s string) error {
+	return json.Unmarshal([]byte(s), &r)
+}
+
+type AddResourceTagResponse struct {
+	*tchttp.BaseResponse
+	Response *struct {
+
+		// The unique request ID, which is returned for each request. RequestId is required for locating a problem.
+		RequestId *string `json:"RequestId,omitempty" name:"RequestId"`
+	} `json:"Response"`
+}
+
+func (r *AddResourceTagResponse) ToJsonString() string {
+	b, _ := json.Marshal(r)
+	return string(b)
+}
+
+func (r *AddResourceTagResponse) FromJsonString(s string) error {
+	return json.Unmarshal([]byte(s), &r)
+}
+
+type CreateTagRequest struct {
+	*tchttp.BaseRequest
+
+	// Tag key
+	TagKey *string `json:"TagKey,omitempty" name:"TagKey"`
+
+	// Tag value
+	TagValue *string `json:"TagValue,omitempty" name:"TagValue"`
+}
+
+func (r *CreateTagRequest) ToJsonString() string {
+	b, _ := json.Marshal(r)
+	return string(b)
+}
+
+func (r *CreateTagRequest) FromJsonString(s string) error {
+	return json.Unmarshal([]byte(s), &r)
+}
+
+type CreateTagResponse struct {
+	*tchttp.BaseResponse
+	Response *struct {
+
+		// The unique request ID, which is returned for each request. RequestId is required for locating a problem.
+		RequestId *string `json:"RequestId,omitempty" name:"RequestId"`
+	} `json:"Response"`
+}
+
+func (r *CreateTagResponse) ToJsonString() string {
+	b, _ := json.Marshal(r)
+	return string(b)
+}
+
+func (r *CreateTagResponse) FromJsonString(s string) error {
+	return json.Unmarshal([]byte(s), &r)
+}
+
+type DeleteResourceTagRequest struct {
+	*tchttp.BaseRequest
+
+	// Tag key
+	TagKey *string `json:"TagKey,omitempty" name:"TagKey"`
+
+	// Six-segment resource description
+	Resource *string `json:"Resource,omitempty" name:"Resource"`
+}
+
+func (r *DeleteResourceTagRequest) ToJsonString() string {
+	b, _ := json.Marshal(r)
+	return string(b)
+}
+
+func (r *DeleteResourceTagRequest) FromJsonString(s string) error {
+	return json.Unmarshal([]byte(s), &r)
+}
+
+type DeleteResourceTagResponse struct {
+	*tchttp.BaseResponse
+	Response *struct {
+
+		// The unique request ID, which is returned for each request. RequestId is required for locating a problem.
+		RequestId *string `json:"RequestId,omitempty" name:"RequestId"`
+	} `json:"Response"`
+}
+
+func (r *DeleteResourceTagResponse) ToJsonString() string {
+	b, _ := json.Marshal(r)
+	return string(b)
+}
+
+func (r *DeleteResourceTagResponse) FromJsonString(s string) error {
+	return json.Unmarshal([]byte(s), &r)
+}
+
+type DeleteTagRequest struct {
+	*tchttp.BaseRequest
+
+	// Tag key to be deleted
+	TagKey *string `json:"TagKey,omitempty" name:"TagKey"`
+
+	// Tag value to be deleted
+	TagValue *string `json:"TagValue,omitempty" name:"TagValue"`
+}
+
+func (r *DeleteTagRequest) ToJsonString() string {
+	b, _ := json.Marshal(r)
+	return string(b)
+}
+
+func (r *DeleteTagRequest) FromJsonString(s string) error {
+	return json.Unmarshal([]byte(s), &r)
+}
+
+type DeleteTagResponse struct {
+	*tchttp.BaseResponse
+	Response *struct {
+
+		// The unique request ID, which is returned for each request. RequestId is required for locating a problem.
+		RequestId *string `json:"RequestId,omitempty" name:"RequestId"`
+	} `json:"Response"`
+}
+
+func (r *DeleteTagResponse) ToJsonString() string {
+	b, _ := json.Marshal(r)
+	return string(b)
+}
+
+func (r *DeleteTagResponse) FromJsonString(s string) error {
+	return json.Unmarshal([]byte(s), &r)
+}
+
+type DescribeResourceTagsByResourceIdsRequest struct {
+	*tchttp.BaseRequest
+
+	// Service type
+	ServiceType *string `json:"ServiceType,omitempty" name:"ServiceType"`
+
+	// Resource prefix
+	ResourcePrefix *string `json:"ResourcePrefix,omitempty" name:"ResourcePrefix"`
+
+	// Unique resource IDs
+	ResourceIds []*string `json:"ResourceIds,omitempty" name:"ResourceIds" list`
+
+	// Region where the resources are located
+	ResourceRegion *string `json:"ResourceRegion,omitempty" name:"ResourceRegion"`
+
+	// Data offset. Defaults to 0; must be an integer multiple of the Limit parameter
+	Offset *uint64 `json:"Offset,omitempty" name:"Offset"`
+
+	// Page size. Defaults to 15
+	Limit *uint64 `json:"Limit,omitempty" name:"Limit"`
+}
+
+func (r *DescribeResourceTagsByResourceIdsRequest) ToJsonString() string {
+	b, _ := json.Marshal(r)
+	return string(b)
+}
+
+func (r *DescribeResourceTagsByResourceIdsRequest) FromJsonString(s string) error {
+	return json.Unmarshal([]byte(s), &r)
+}
+
+type DescribeResourceTagsByResourceIdsResponse struct {
+	*tchttp.BaseResponse
+	Response *struct {
+
+		// Total number of results
+		TotalCount *uint64 `json:"TotalCount,omitempty" name:"TotalCount"`
+
+		// Data offset
+		Offset *uint64 `json:"Offset,omitempty" name:"Offset"`
+
+		// Page size
+		Limit *uint64 `json:"Limit,omitempty" name:"Limit"`
+
+		// Tag list
+		Tags []*TagResource `json:"Tags,omitempty" name:"Tags" list`
+
+		// The unique request ID, which is returned for each request. RequestId is required for locating a problem.
+		RequestId *string `json:"RequestId,omitempty" name:"RequestId"`
+	} `json:"Response"`
+}
+
+func (r *DescribeResourceTagsByResourceIdsResponse) ToJsonString() string {
+	b, _ := json.Marshal(r)
+	return string(b)
+}
+
+func (r *DescribeResourceTagsByResourceIdsResponse) FromJsonString(s string) error {
+	return json.Unmarshal([]byte(s), &r)
+}
+
+type DescribeTagKeysRequest struct {
+	*tchttp.BaseRequest
+
+	// Creator user Uin. If not specified or empty, the requester's Uin is used as the query condition
+	CreateUin *uint64 `json:"CreateUin,omitempty" name:"CreateUin"`
+
+	// Data offset. Defaults to 0; must be an integer multiple of the Limit parameter
+	Offset *uint64 `json:"Offset,omitempty" name:"Offset"`
+
+	// Page size. Defaults to 15
+	Limit *uint64 `json:"Limit,omitempty" name:"Limit"`
+}
+
+func (r *DescribeTagKeysRequest) ToJsonString() string {
+	b, _ := json.Marshal(r)
+	return string(b)
+}
+
+func (r *DescribeTagKeysRequest) FromJsonString(s string) error {
+	return json.Unmarshal([]byte(s), &r)
+}
`json:"TotalCount,omitempty" name:"TotalCount"` + + // 数据位移偏量 + Offset *uint64 `json:"Offset,omitempty" name:"Offset"` + + // 每页大小 + Limit *uint64 `json:"Limit,omitempty" name:"Limit"` + + // 标签列表 + Tags []*string `json:"Tags,omitempty" name:"Tags" list` + + // 唯一请求 ID,每次请求都会返回。定位问题时需要提供该次请求的 RequestId。 + RequestId *string `json:"RequestId,omitempty" name:"RequestId"` + } `json:"Response"` +} + +func (r *DescribeTagKeysResponse) ToJsonString() string { + b, _ := json.Marshal(r) + return string(b) +} + +func (r *DescribeTagKeysResponse) FromJsonString(s string) error { + return json.Unmarshal([]byte(s), &r) +} + +type DescribeTagValuesRequest struct { + *tchttp.BaseRequest + + // 标签键列表 + TagKeys []*string `json:"TagKeys,omitempty" name:"TagKeys" list` + + // 创建者用户 Uin,不传或为空只将 Uin 作为条件查询 + CreateUin *uint64 `json:"CreateUin,omitempty" name:"CreateUin"` + + // 数据偏移量,默认为 0, 必须为Limit参数的整数倍 + Offset *uint64 `json:"Offset,omitempty" name:"Offset"` + + // 每页大小,默认为 15 + Limit *uint64 `json:"Limit,omitempty" name:"Limit"` +} + +func (r *DescribeTagValuesRequest) ToJsonString() string { + b, _ := json.Marshal(r) + return string(b) +} + +func (r *DescribeTagValuesRequest) FromJsonString(s string) error { + return json.Unmarshal([]byte(s), &r) +} + +type DescribeTagValuesResponse struct { + *tchttp.BaseResponse + Response *struct { + + // 结果总数 + TotalCount *uint64 `json:"TotalCount,omitempty" name:"TotalCount"` + + // 数据位移偏量 + Offset *uint64 `json:"Offset,omitempty" name:"Offset"` + + // 每页大小 + Limit *uint64 `json:"Limit,omitempty" name:"Limit"` + + // 标签列表 + Tags []*Tag `json:"Tags,omitempty" name:"Tags" list` + + // 唯一请求 ID,每次请求都会返回。定位问题时需要提供该次请求的 RequestId。 + RequestId *string `json:"RequestId,omitempty" name:"RequestId"` + } `json:"Response"` +} + +func (r *DescribeTagValuesResponse) ToJsonString() string { + b, _ := json.Marshal(r) + return string(b) +} + +func (r *DescribeTagValuesResponse) FromJsonString(s string) error { + return json.Unmarshal([]byte(s), &r) +} + +type DescribeTagsRequest struct { + *tchttp.BaseRequest + + // 标签键,与标签值同时存在或同时不存在,不存在时表示查询该用户所有标签 + TagKey *string `json:"TagKey,omitempty" name:"TagKey"` + + // 标签值,与标签键同时存在或同时不存在,不存在时表示查询该用户所有标签 + TagValue *string `json:"TagValue,omitempty" name:"TagValue"` + + // 数据偏移量,默认为 0, 必须为Limit参数的整数倍 + Offset *uint64 `json:"Offset,omitempty" name:"Offset"` + + // 每页大小,默认为 15 + Limit *uint64 `json:"Limit,omitempty" name:"Limit"` + + // 创建者用户 Uin,不传或为空只将 Uin 作为条件查询 + CreateUin *uint64 `json:"CreateUin,omitempty" name:"CreateUin"` +} + +func (r *DescribeTagsRequest) ToJsonString() string { + b, _ := json.Marshal(r) + return string(b) +} + +func (r *DescribeTagsRequest) FromJsonString(s string) error { + return json.Unmarshal([]byte(s), &r) +} + +type DescribeTagsResponse struct { + *tchttp.BaseResponse + Response *struct { + + // 结果总数 + TotalCount *uint64 `json:"TotalCount,omitempty" name:"TotalCount"` + + // 数据位移偏量 + Offset *uint64 `json:"Offset,omitempty" name:"Offset"` + + // 每页大小 + Limit *uint64 `json:"Limit,omitempty" name:"Limit"` + + // 标签列表 + Tags []*TagWithDelete `json:"Tags,omitempty" name:"Tags" list` + + // 唯一请求 ID,每次请求都会返回。定位问题时需要提供该次请求的 RequestId。 + RequestId *string `json:"RequestId,omitempty" name:"RequestId"` + } `json:"Response"` +} + +func (r *DescribeTagsResponse) ToJsonString() string { + b, _ := json.Marshal(r) + return string(b) +} + +func (r *DescribeTagsResponse) FromJsonString(s string) error { + return json.Unmarshal([]byte(s), &r) +} + +type ModifyResourceTagsRequest struct { + *tchttp.BaseRequest + + // 资源的六段式描述 + 
+	Resource *string `json:"Resource,omitempty" name:"Resource"`
+
+	// Set of tags to add or modify. If a tag key in the input is not yet bound to the resource described by Resource, it is bound; if it is already bound, the tag value bound to that key is updated to the input value. At least one of ReplaceTags and DeleteTags must be present in this API, and the two must not contain the same tag keys.
+	ReplaceTags []*Tag `json:"ReplaceTags,omitempty" name:"ReplaceTags" list`
+
+	// Set of tags to unbind. At least one of ReplaceTags and DeleteTags must be present in this API, and the two must not contain the same tag keys.
+	DeleteTags []*TagKeyObject `json:"DeleteTags,omitempty" name:"DeleteTags" list`
+}
+
+func (r *ModifyResourceTagsRequest) ToJsonString() string {
+	b, _ := json.Marshal(r)
+	return string(b)
+}
+
+func (r *ModifyResourceTagsRequest) FromJsonString(s string) error {
+	return json.Unmarshal([]byte(s), &r)
+}
+
+type ModifyResourceTagsResponse struct {
+	*tchttp.BaseResponse
+	Response *struct {
+
+		// Unique request ID, returned with every request. Provide the RequestId of a request when troubleshooting it.
+		RequestId *string `json:"RequestId,omitempty" name:"RequestId"`
+	} `json:"Response"`
+}
+
+func (r *ModifyResourceTagsResponse) ToJsonString() string {
+	b, _ := json.Marshal(r)
+	return string(b)
+}
+
+func (r *ModifyResourceTagsResponse) FromJsonString(s string) error {
+	return json.Unmarshal([]byte(s), &r)
+}
+
+type Tag struct {
+
+	// Tag key
+	TagKey *string `json:"TagKey,omitempty" name:"TagKey"`
+
+	// Tag value
+	TagValue *string `json:"TagValue,omitempty" name:"TagValue"`
+}
+
+type TagKeyObject struct {
+
+	// Tag key
+	TagKey *string `json:"TagKey,omitempty" name:"TagKey"`
+}
+
+type TagResource struct {
+
+	// Tag key
+	TagKey *string `json:"TagKey,omitempty" name:"TagKey"`
+
+	// Tag value
+	TagValue *string `json:"TagValue,omitempty" name:"TagValue"`
+
+	// Resource ID
+	ResourceId *string `json:"ResourceId,omitempty" name:"ResourceId"`
+
+	// MD5 of the tag key
+	TagKeyMd5 *string `json:"TagKeyMd5,omitempty" name:"TagKeyMd5"`
+
+	// MD5 of the tag value
+	TagValueMd5 *string `json:"TagValueMd5,omitempty" name:"TagValueMd5"`
+}
+
+type TagWithDelete struct {
+
+	// Tag key
+	TagKey *string `json:"TagKey,omitempty" name:"TagKey"`
+
+	// Tag value
+	TagValue *string `json:"TagValue,omitempty" name:"TagValue"`
+
+	// Whether the tag can be deleted
+	CanDelete *uint64 `json:"CanDelete,omitempty" name:"CanDelete"`
+}
+
+type UpdateResourceTagValueRequest struct {
+	*tchttp.BaseRequest
+
+	// Tag key bound to the resource
+	TagKey *string `json:"TagKey,omitempty" name:"TagKey"`
+
+	// New tag value
+	TagValue *string `json:"TagValue,omitempty" name:"TagValue"`
+
+	// Six-segment resource description
+	Resource *string `json:"Resource,omitempty" name:"Resource"`
+}
+
+func (r *UpdateResourceTagValueRequest) ToJsonString() string {
+	b, _ := json.Marshal(r)
+	return string(b)
+}
+
+func (r *UpdateResourceTagValueRequest) FromJsonString(s string) error {
+	return json.Unmarshal([]byte(s), &r)
+}
+
+type UpdateResourceTagValueResponse struct {
+	*tchttp.BaseResponse
+	Response *struct {
+
+		// Unique request ID, returned with every request. Provide the RequestId of a request when troubleshooting it.
+		RequestId *string `json:"RequestId,omitempty" name:"RequestId"`
+	} `json:"Response"`
+}
+
+func (r *UpdateResourceTagValueResponse) ToJsonString() string {
+	b, _ := json.Marshal(r)
+	return string(b)
+}
+
+func (r *UpdateResourceTagValueResponse) FromJsonString(s string) error {
+	return json.Unmarshal([]byte(s), &r)
+}
diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/.bumpversion.cfg b/vendor/github.com/tencentyun/cos-go-sdk-v5/.bumpversion.cfg
new file mode 100644
index 000000000..d2bac6df9
--- /dev/null
+++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/.bumpversion.cfg
@@ -0,0 +1,7 @@
+[bumpversion]
+commit = True
+tag = True
+current_version = 0.7.0
+
+[bumpversion:file:cos.go]
+
diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/.gitignore b/vendor/github.com/tencentyun/cos-go-sdk-v5/.gitignore
new file
mode 100644 index 000000000..7da68739a --- /dev/null +++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/.gitignore @@ -0,0 +1,29 @@ +# Compiled Object files, Static and Dynamic libs (Shared Objects) +*.o +*.a +*.so + +# Folders +_obj +_test + +# Architecture specific extensions/prefixes +*.[568vq] +[568vq].out + +*.cgo1.go +*.cgo2.c +_cgo_defun.c +_cgo_gotypes.go +_cgo_export.* + +_testmain.go + +*.exe +*.test +*.prof +dist/ +cover.html +cover.out +covprofile +coverage.html \ No newline at end of file diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/.travis.yml b/vendor/github.com/tencentyun/cos-go-sdk-v5/.travis.yml new file mode 100644 index 000000000..e940c8c27 --- /dev/null +++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/.travis.yml @@ -0,0 +1,38 @@ +language: go +go: +- '1.7' +- '1.8' +- '1.9' +- 1.10.x +- 1.11.x +- 1.12.x +- master +sudo: false +before_install: +- go get -u github.com/mattn/goveralls +- go get -u github.com/stretchr/testify +install: +- go get +- go build +- go build github.com/mattn/goveralls +script: +- if [[ ! -n "$COS_SECRETID" ]]; then exit 0 + ; fi +- make test +- make ci-test +- go test -coverprofile=cover.out github.com/toranger/cos-go-sdk-v5 +- "${TRAVIS_HOME}/gopath/bin/goveralls -service=travis-ci -coverprofile=cover.out" +matrix: + allow_failures: + - go: 1.7 + - go: master +env: + global: + - secure: XXB/cFVnJcAzhOZ2/zplwjhhhireQQGGRbNscPgQ0kpUQCyPZ6oIHvJMafuP4TVTJHEdMiaDxm0HNvgARuopXVaQNmK2UZj6xw40Ud7OT7ZUnw88xkQkXOI5GwG8oz9LqxIXUSItHegKXRLW0e1PoBdjZNv6lxGFAtuOcl9ekAg/q2lGIIQFefz6NK7gCmGYULKe+4J15VFldoYNM0JesxxxArTvtv8+k+U53oUwy9dex6z5oIA1zGIeKLcOD2xXgbjid/Ett3t0B2w3GfJWoM9rGV0eHgveOAUGe5tQkMKvl5LK1hj+93ZmU0MAG7x7t9jYKrFPqU/eDNJRMb4Ro6L7lIXVEKaBUkLx28PnwFQ5D043GBVtQGqYNcldZXIfbyYEHQZlD/BWFOt5YqTpGg+7Wm4NC3Yffqsurzk54juT7FftzVy0A8MFkqO+c5RHrOSUlm01pWXkGLHgZhUP5gEZEuUaoluSQTZksmAUJZ7F8DxwpE4SYBqfN27PZ87rWDNyOqNv1w1trzwx2IfdHHA+vfCZ7UM5e85gxFWUO2tJCUai2q21v3gBrcAgBOb6BwVzbWAorM2zY20f0l21XxOWMakA+r4JJA3s3EmcczcQeeL6pkFIAh+qKdFEPuyQTjH1mGpPzYFNbWtvPXijQo5PqyGrKL8W1t3ovwXMXoE= + - secure: bep0PPD/oYW5zY0QpeeC+WgFIya5DNRVmR92MO+e5BdFlSJPhstoG8bRh91EeftzC/Hyd3PUEIglPqTgZPxwysqW/81plsU95wV3qJi9gPi7+ZtYXH4xZTnaqgZsTr7jsKSVoKHSu7XqCtbSytW8YMN9wRWzG19/9hX2Z79Q6yNy5l9856Oyj1E2IXDjdZLPsWDhnZ8Vvk1wAVy2fc2esqKzHAZwm8n9vee2yR8vz7GXUszzpKvn4R43eNzdlFEHCmN0ANmxLJZmnYDpZHHfNf4slts+0S6I7awFXppuXUDaJPBRCia4XoFeSw+01IW1Vi0kAwvGLhxjJCWc4M/4ZU0byXDT11tDFvWa19NmnbYiizWiXNVecn1oNWYJqIKe7TTAMAtHSXAPmLX0rXuXKzwM09W6yrLFufCxyix9IOnenEbe9WwSdBbhmeLF3Wu/uVGkDog/FsXJM75sk956vV9UKh9zF4B9/NR8szJMF7shEs0Fbru5UUWheqg4AadPl3dhAWuj2+6NANa1LpH3JVD3II9dlXeMmMvsSwDvrYUaX/S8tf6JwZG0zCJK0TYp05rjxH+NIzWaMUTY7+HwYqqK3pOW3San0SlZiMq8N7GSnKUZ7WRQXYSB4gXHrg+mWyeVC7XnqiRtCwVi+LtPMu+YUbg7dwVi0vtKjYZYIUY= + - secure: Ob28vrOuHMKNKEtChkWbsaVv2SwLhcxXMnvGe4XN+y3mFvdhYnwpt6NdgThF8OCZ0761tvTRmvALfiZnO0uORjTtoHKkVPrnVIxlCcode0NVJZNHGn2fqjemdLKCnSeX7hm+9zeLpCnIvC+Sp3iZ3t2AH4AzgFx6nirWO3HwT5l9rNL9Q1CfwlOpNJJ36r9JTHwQnXmOfOmszUNoZ3rtiFXJ8dCi+BgY0lsiIRSiDkAH7KAPf86REM+ww81AaXG4/RuYx1Vj5zQCtZN7XEOViSXEbqqb8SrIFOccDu5FV12djg+4QS7FSjLVGrdIUcn4oI6pS24Et3oXf8xFx6JLYyGGhgZ2BsyJEx5vLQvkTWnMTrwZVRtCQ+g6lMUQpJhL2rBrmVBUqBFb5IH69O7corQm53n5qLM8IiosAQLfbOtML/1PyEpKCG2aOx1377Fx2yzxXW3ucP1PBqCzli0oCM2T52LfiNvZTzkIU6XJebBnzkZXepzOIFSur86kxgvQFElw9ro2X6XXPKU5S25xVaUSvaN1kmqLSkToJ9S1rmDYXnJR4aH0R2GcLw+EkMHFJJoAjnRHxrB4/1vOJbzmfS+qy6ShRhUMSD8gk4YJ6Y7o9h7oekuWOEn+XGhl29U9T5OApzHfoPEGZwLnpHxAiKJtQtv/TNhBIOFCjigsF7U= +notifications: + email: + recipients: + - wjielai@tencent.com + - fysntian@tencent.com diff --git 
a/vendor/github.com/tencentyun/cos-go-sdk-v5/LICENSE b/vendor/github.com/tencentyun/cos-go-sdk-v5/LICENSE new file mode 100644 index 000000000..8ff7942e2 --- /dev/null +++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/LICENSE @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2017 mozillazg + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/Makefile b/vendor/github.com/tencentyun/cos-go-sdk-v5/Makefile new file mode 100644 index 000000000..e3b7aa319 --- /dev/null +++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/Makefile @@ -0,0 +1,22 @@ +help: + @echo "test run test" + @echo "lint run lint" + @echo "example run examples" + +.PHONY: test +test: + go test -v -cover -coverprofile cover.out + go tool cover -html=cover.out -o cover.html + +.PHONY: lint +lint: + gofmt -s -w . + goimports -w . + golint . 
+	go vet
+
+.PHONY: example
+example:
+	cd example && sh test.sh
+ci-test:
+	cd costesting && go test -v
diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/README.md b/vendor/github.com/tencentyun/cos-go-sdk-v5/README.md
new file mode 100644
index 000000000..0316a137b
--- /dev/null
+++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/README.md
@@ -0,0 +1,95 @@
+# cos-go-sdk-v5
+
+Tencent Cloud Object Storage (COS) Go SDK (API version: the V5 XML API).
+
+## Install
+
+`go get -u github.com/tencentyun/cos-go-sdk-v5`
+
+
+## Usage
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+	"io/ioutil"
+	"net/http"
+	"net/url"
+	"os"
+	"time"
+
+	"github.com/tencentyun/cos-go-sdk-v5"
+)
+
+func main() {
+	//Replace these with your real information
+	//Bucket names follow the {name}-{appid} convention; the bucket name filled in here must use that format
+	u, _ := url.Parse("https://.cos..myqcloud.com")
+	b := &cos.BaseURL{BucketURL: u}
+	c := cos.NewClient(b, &http.Client{
+		//Set the request timeout
+		Timeout: 100 * time.Second,
+		Transport: &cos.AuthorizationTransport{
+			//Fill in your real account and key, or set them as environment variables
+			SecretID:  os.Getenv("COS_SECRETID"),
+			SecretKey: os.Getenv("COS_SECRETKEY"),
+		},
+	})
+
+	name := "test/hello.txt"
+	resp, err := c.Object.Get(context.Background(), name, nil)
+	if err != nil {
+		panic(err)
+	}
+	bs, _ := ioutil.ReadAll(resp.Body)
+	resp.Body.Close()
+	fmt.Printf("%s\n", string(bs))
+}
+```
+
+Every API has a corresponding usage example in the [example](./example/) directory.
+
+Service API:
+
+* [x] Get Service (example: [service/get.go](./example/service/get.go))
+
+Bucket API:
+
+* [x] Get Bucket (example: [bucket/get.go](./example/bucket/get.go))
+* [x] Get Bucket ACL (example: [bucket/getACL.go](./example/bucket/getACL.go))
+* [x] Get Bucket CORS (example: [bucket/getCORS.go](./example/bucket/getCORS.go))
+* [x] Get Bucket Location (example: [bucket/getLocation.go](./example/bucket/getLocation.go))
+* [x] Get Bucket Lifecycle (example: [bucket/getLifecycle.go](./example/bucket/getLifecycle.go))
+* [x] Get Bucket Tagging (example: [bucket/getTagging.go](./example/bucket/getTagging.go))
+* [x] Put Bucket (example: [bucket/put.go](./example/bucket/put.go))
+* [x] Put Bucket ACL (example: [bucket/putACL.go](./example/bucket/putACL.go))
+* [x] Put Bucket CORS (example: [bucket/putCORS.go](./example/bucket/putCORS.go))
+* [x] Put Bucket Lifecycle (example: [bucket/putLifecycle.go](./example/bucket/putLifecycle.go))
+* [x] Put Bucket Tagging (example: [bucket/putTagging.go](./example/bucket/putTagging.go))
+* [x] Delete Bucket (example: [bucket/delete.go](./example/bucket/delete.go))
+* [x] Delete Bucket CORS (example: [bucket/deleteCORS.go](./example/bucket/deleteCORS.go))
+* [x] Delete Bucket Lifecycle (example: [bucket/deleteLifecycle.go](./example/bucket/deleteLifecycle.go))
+* [x] Delete Bucket Tagging (example: [bucket/deleteTagging.go](./example/bucket/deleteTagging.go))
+* [x] Head Bucket (example: [bucket/head.go](./example/bucket/head.go))
+* [x] List Multipart Uploads (example: [bucket/listMultipartUploads.go](./example/bucket/listMultipartUploads.go))
+
+Object API:
+
+* [x] Get Object (example: [object/get.go](./example/object/get.go))
+* [x] Get Object ACL (example: [object/getACL.go](./example/object/getACL.go))
+* [x] Put Object (example: [object/put.go](./example/object/put.go))
+* [x] Put Object ACL (example: [object/putACL.go](./example/object/putACL.go))
+* [x] Put Object Copy (example: [object/copy.go](./example/object/copy.go))
+* [x] Delete Object (example: [object/delete.go](./example/object/delete.go))
+* [x] Delete Multiple Object (example: [object/deleteMultiple.go](./example/object/deleteMultiple.go))
+* [x] Head Object (example: [object/head.go](./example/object/head.go))
+* [x] Options Object (example: [object/options.go](./example/object/options.go))
+* [x] Initiate Multipart Upload (example: [object/initiateMultipartUpload.go](./example/object/initiateMultipartUpload.go))
+* [x] Upload Part (example: [object/uploadPart.go](./example/object/uploadPart.go))
+* [x] List Parts (example: [object/listParts.go](./example/object/listParts.go))
+* [x] Complete Multipart Upload (example: [object/completeMultipartUpload.go](./example/object/completeMultipartUpload.go))
+* [x] Abort Multipart Upload (example: [object/abortMultipartUpload.go](./example/object/abortMultipartUpload.go))
+* [x] Multipart Upload (example: [object/MutiUpload.go](./example/object/MutiUpload.go))
diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/auth.go b/vendor/github.com/tencentyun/cos-go-sdk-v5/auth.go
new file mode 100644
index 000000000..6b0cc554c
--- /dev/null
+++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/auth.go
@@ -0,0 +1,305 @@
+package cos
+
+import (
+	"crypto/hmac"
+	"crypto/sha1"
+	"fmt"
+	"hash"
+	"net/http"
+	"net/url"
+	"sort"
+	"strings"
+	"sync"
+	"time"
+)
+
+const sha1SignAlgorithm = "sha1"
+const privateHeaderPrefix = "x-cos-"
+const defaultAuthExpire = time.Hour
+
+// List of headers that are included in the signature
+var needSignHeaders = map[string]bool{
+	"host":                           true,
+	"range":                          true,
+	"x-cos-acl":                      true,
+	"x-cos-grant-read":               true,
+	"x-cos-grant-write":              true,
+	"x-cos-grant-full-control":       true,
+	"response-content-type":          true,
+	"response-content-language":      true,
+	"response-expires":               true,
+	"response-cache-control":         true,
+	"response-content-disposition":   true,
+	"response-content-encoding":      true,
+	"cache-control":                  true,
+	"content-disposition":            true,
+	"content-encoding":               true,
+	"content-type":                   true,
+	"content-length":                 true,
+	"content-md5":                    true,
+	"expect":                         true,
+	"expires":                        true,
+	"x-cos-content-sha1":             true,
+	"x-cos-storage-class":            true,
+	"if-modified-since":              true,
+	"origin":                         true,
+	"access-control-request-method":  true,
+	"access-control-request-headers": true,
+	"x-cos-object-type":              true,
+}
+
+func safeURLEncode(s string) string {
+	s = encodeURIComponent(s)
+	s = strings.Replace(s, "!", "%21", -1)
+	s = strings.Replace(s, "'", "%27", -1)
+	s = strings.Replace(s, "(", "%28", -1)
+	s = strings.Replace(s, ")", "%29", -1)
+	s = strings.Replace(s, "*", "%2A", -1)
+	return s
+}
+
+type valuesSignMap map[string][]string
+
+func (vs valuesSignMap) Add(key, value string) {
+	key = strings.ToLower(key)
+	vs[key] = append(vs[key], value)
+}
+
+func (vs valuesSignMap) Encode() string {
+	var keys []string
+	for k := range vs {
+		keys = append(keys, k)
+	}
+	sort.Strings(keys)
+
+	var pairs []string
+	for _, k := range keys {
+		items := vs[k]
+		sort.Strings(items)
+		for _, val := range items {
+			pairs = append(
+				pairs,
+				fmt.Sprintf("%s=%s", safeURLEncode(k), safeURLEncode(val)))
+		}
+	}
+	return strings.Join(pairs, "&")
+}
+
+// AuthTime holds the q-sign-time and q-key-time parameters needed to generate a signature
+type AuthTime struct {
+	SignStartTime time.Time
+	SignEndTime   time.Time
+	KeyStartTime  time.Time
+	KeyEndTime    time.Time
+}
+
+// NewAuthTime is a convenience constructor for AuthTime
+//
+// expire: how long from now until the signature expires.
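+//
+// A minimal usage sketch (the 15-minute expiry is an arbitrary illustration):
+//
+//	authTime := NewAuthTime(15 * time.Minute)
+//	AddAuthorizationHeader(os.Getenv("COS_SECRETID"), os.Getenv("COS_SECRETKEY"), "", req, authTime)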
+func NewAuthTime(expire time.Duration) *AuthTime { + signStartTime := time.Now() + keyStartTime := signStartTime + signEndTime := signStartTime.Add(expire) + keyEndTime := signEndTime + return &AuthTime{ + SignStartTime: signStartTime, + SignEndTime: signEndTime, + KeyStartTime: keyStartTime, + KeyEndTime: keyEndTime, + } +} + +// signString return q-sign-time string +func (a *AuthTime) signString() string { + return fmt.Sprintf("%d;%d", a.SignStartTime.Unix(), a.SignEndTime.Unix()) +} + +// keyString return q-key-time string +func (a *AuthTime) keyString() string { + return fmt.Sprintf("%d;%d", a.KeyStartTime.Unix(), a.KeyEndTime.Unix()) +} + +// newAuthorization 通过一系列步骤生成最终需要的 Authorization 字符串 +func newAuthorization(secretID, secretKey string, req *http.Request, authTime *AuthTime) string { + signTime := authTime.signString() + keyTime := authTime.keyString() + signKey := calSignKey(secretKey, keyTime) + + formatHeaders := *new(string) + signedHeaderList := *new([]string) + formatHeaders, signedHeaderList = genFormatHeaders(req.Header) + formatParameters, signedParameterList := genFormatParameters(req.URL.Query()) + formatString := genFormatString(req.Method, *req.URL, formatParameters, formatHeaders) + + stringToSign := calStringToSign(sha1SignAlgorithm, keyTime, formatString) + signature := calSignature(signKey, stringToSign) + + return genAuthorization( + secretID, signTime, keyTime, signature, signedHeaderList, + signedParameterList, + ) +} + +// AddAuthorizationHeader 给 req 增加签名信息 +func AddAuthorizationHeader(secretID, secretKey string, sessionToken string, req *http.Request, authTime *AuthTime) { + if secretID == "" { + return + } + + auth := newAuthorization(secretID, secretKey, req, + authTime, + ) + if len(sessionToken) > 0 { + req.Header.Set("x-cos-security-token", sessionToken) + } + req.Header.Set("Authorization", auth) +} + +// calSignKey 计算 SignKey +func calSignKey(secretKey, keyTime string) string { + digest := calHMACDigest(secretKey, keyTime, sha1SignAlgorithm) + return fmt.Sprintf("%x", digest) +} + +// calStringToSign 计算 StringToSign +func calStringToSign(signAlgorithm, signTime, formatString string) string { + h := sha1.New() + h.Write([]byte(formatString)) + return fmt.Sprintf("%s\n%s\n%x\n", signAlgorithm, signTime, h.Sum(nil)) +} + +// calSignature 计算 Signature +func calSignature(signKey, stringToSign string) string { + digest := calHMACDigest(signKey, stringToSign, sha1SignAlgorithm) + return fmt.Sprintf("%x", digest) +} + +// genAuthorization 生成 Authorization +func genAuthorization(secretID, signTime, keyTime, signature string, signedHeaderList, signedParameterList []string) string { + return strings.Join([]string{ + "q-sign-algorithm=" + sha1SignAlgorithm, + "q-ak=" + secretID, + "q-sign-time=" + signTime, + "q-key-time=" + keyTime, + "q-header-list=" + strings.Join(signedHeaderList, ";"), + "q-url-param-list=" + strings.Join(signedParameterList, ";"), + "q-signature=" + signature, + }, "&") +} + +// genFormatString 生成 FormatString +func genFormatString(method string, uri url.URL, formatParameters, formatHeaders string) string { + formatMethod := strings.ToLower(method) + formatURI := uri.Path + + return fmt.Sprintf("%s\n%s\n%s\n%s\n", formatMethod, formatURI, + formatParameters, formatHeaders, + ) +} + +// genFormatParameters 生成 FormatParameters 和 SignedParameterList +// instead of the url.Values{} +func genFormatParameters(parameters url.Values) (formatParameters string, signedParameterList []string) { + ps := valuesSignMap{} + for key, values := range 
parameters { + key = strings.ToLower(key) + for _, value := range values { + ps.Add(key, value) + signedParameterList = append(signedParameterList, key) + } + } + //formatParameters = strings.ToLower(ps.Encode()) + formatParameters = ps.Encode() + sort.Strings(signedParameterList) + return +} + +// genFormatHeaders 生成 FormatHeaders 和 SignedHeaderList +func genFormatHeaders(headers http.Header) (formatHeaders string, signedHeaderList []string) { + hs := valuesSignMap{} + for key, values := range headers { + key = strings.ToLower(key) + for _, value := range values { + if isSignHeader(key) { + hs.Add(key, value) + signedHeaderList = append(signedHeaderList, key) + } + } + } + formatHeaders = hs.Encode() + sort.Strings(signedHeaderList) + return +} + +// HMAC 签名 +func calHMACDigest(key, msg, signMethod string) []byte { + var hashFunc func() hash.Hash + switch signMethod { + case "sha1": + hashFunc = sha1.New + default: + hashFunc = sha1.New + } + h := hmac.New(hashFunc, []byte(key)) + h.Write([]byte(msg)) + return h.Sum(nil) +} + +func isSignHeader(key string) bool { + for k, v := range needSignHeaders { + if key == k && v { + return true + } + } + return strings.HasPrefix(key, privateHeaderPrefix) +} + +// AuthorizationTransport 给请求增加 Authorization header +type AuthorizationTransport struct { + SecretID string + SecretKey string + SessionToken string + rwLocker sync.RWMutex + // 签名多久过期 + Expire time.Duration + Transport http.RoundTripper +} + +// SetCredential update the SecretID(ak), SercretKey(sk), sessiontoken +func (t *AuthorizationTransport) SetCredential(ak, sk, token string) { + t.rwLocker.Lock() + defer t.rwLocker.Unlock() + t.SecretID = ak + t.SecretKey = sk + t.SessionToken = token +} + +// GetCredential get the ak, sk, token +func (t *AuthorizationTransport) GetCredential() (string, string, string) { + t.rwLocker.RLock() + defer t.rwLocker.RUnlock() + return t.SecretID, t.SecretKey, t.SessionToken +} + +// RoundTrip implements the RoundTripper interface. 
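+//
+// A typical wiring, sketched with placeholder credentials; every request sent through
+// this client is then signed in RoundTrip before it goes out:
+//
+//	client := &http.Client{
+//		Transport: &AuthorizationTransport{
+//			SecretID:  os.Getenv("COS_SECRETID"),
+//			SecretKey: os.Getenv("COS_SECRETKEY"),
+//		},
+//	}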
+func (t *AuthorizationTransport) RoundTrip(req *http.Request) (*http.Response, error) { + req = cloneRequest(req) // per RoundTrip contract + if t.Expire == time.Duration(0) { + t.Expire = defaultAuthExpire + } + + ak, sk, token := t.GetCredential() + // 增加 Authorization header + authTime := NewAuthTime(t.Expire) + AddAuthorizationHeader(ak, sk, token, req, authTime) + + resp, err := t.transport().RoundTrip(req) + return resp, err +} + +func (t *AuthorizationTransport) transport() http.RoundTripper { + if t.Transport != nil { + return t.Transport + } + return http.DefaultTransport +} diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket.go b/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket.go new file mode 100644 index 000000000..2e3f92c2f --- /dev/null +++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket.go @@ -0,0 +1,104 @@ +package cos + +import ( + "context" + "encoding/xml" + "net/http" +) + +// BucketService 相关 API +type BucketService service + +// BucketGetResult is the result of GetBucket +type BucketGetResult struct { + XMLName xml.Name `xml:"ListBucketResult"` + Name string + Prefix string `xml:"Prefix,omitempty"` + Marker string `xml:"Marker,omitempty"` + NextMarker string `xml:"NextMarker,omitempty"` + Delimiter string `xml:"Delimiter,omitempty"` + MaxKeys int + IsTruncated bool + Contents []Object `xml:"Contents,omitempty"` + CommonPrefixes []string `xml:"CommonPrefixes>Prefix,omitempty"` + EncodingType string `xml:"Encoding-Type,omitempty"` +} + +// BucketGetOptions is the option of GetBucket +type BucketGetOptions struct { + Prefix string `url:"prefix,omitempty"` + Delimiter string `url:"delimiter,omitempty"` + EncodingType string `url:"encoding-type,omitempty"` + Marker string `url:"marker,omitempty"` + MaxKeys int `url:"max-keys,omitempty"` +} + +// Get Bucket请求等同于 List Object请求,可以列出该Bucket下部分或者所有Object,发起该请求需要拥有Read权限。 +// +// https://www.qcloud.com/document/product/436/7734 +func (s *BucketService) Get(ctx context.Context, opt *BucketGetOptions) (*BucketGetResult, *Response, error) { + var res BucketGetResult + sendOpt := sendOptions{ + baseURL: s.client.BaseURL.BucketURL, + uri: "/", + method: http.MethodGet, + optQuery: opt, + result: &res, + } + resp, err := s.client.send(ctx, &sendOpt) + return &res, resp, err +} + +// BucketPutOptions is same to the ACLHeaderOptions +type BucketPutOptions ACLHeaderOptions + +// Put Bucket请求可以在指定账号下创建一个Bucket。 +// +// https://www.qcloud.com/document/product/436/7738 +func (s *BucketService) Put(ctx context.Context, opt *BucketPutOptions) (*Response, error) { + sendOpt := sendOptions{ + baseURL: s.client.BaseURL.BucketURL, + uri: "/", + method: http.MethodPut, + optHeader: opt, + } + resp, err := s.client.send(ctx, &sendOpt) + return resp, err +} + +// Delete Bucket请求可以在指定账号下删除Bucket,删除之前要求Bucket为空。 +// +// https://www.qcloud.com/document/product/436/7732 +func (s *BucketService) Delete(ctx context.Context) (*Response, error) { + sendOpt := sendOptions{ + baseURL: s.client.BaseURL.BucketURL, + uri: "/", + method: http.MethodDelete, + } + resp, err := s.client.send(ctx, &sendOpt) + return resp, err +} + +// Head Bucket请求可以确认是否存在该Bucket,是否有权限访问,Head的权限与Read一致。 +// +// 当其存在时,返回 HTTP 状态码200; +// 当无权限时,返回 HTTP 状态码403; +// 当不存在时,返回 HTTP 状态码404。 +// +// https://www.qcloud.com/document/product/436/7735 +func (s *BucketService) Head(ctx context.Context) (*Response, error) { + sendOpt := sendOptions{ + baseURL: s.client.BaseURL.BucketURL, + uri: "/", + method: http.MethodHead, + } + resp, err := s.client.send(ctx, &sendOpt) + 
	return resp, err
+}
+
+// Bucket is the meta info of Bucket
+type Bucket struct {
+	Name         string
+	Region       string `xml:"Location,omitempty"`
+	CreationDate string `xml:",omitempty"`
+}
diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_acl.go b/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_acl.go
new file mode 100644
index 000000000..285b9064b
--- /dev/null
+++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_acl.go
@@ -0,0 +1,62 @@
+package cos
+
+import (
+	"context"
+	"net/http"
+)
+
+// BucketGetACLResult is the same as ACLXml
+type BucketGetACLResult ACLXml
+
+// GetACL reads the ACL table of the bucket via the API; only the owner may do this.
+//
+// https://www.qcloud.com/document/product/436/7733
+func (s *BucketService) GetACL(ctx context.Context) (*BucketGetACLResult, *Response, error) {
+	var res BucketGetACLResult
+	sendOpt := sendOptions{
+		baseURL: s.client.BaseURL.BucketURL,
+		uri:     "/?acl",
+		method:  http.MethodGet,
+		result:  &res,
+	}
+	resp, err := s.client.send(ctx, &sendOpt)
+	return &res, resp, err
+}
+
+// BucketPutACLOptions is the option of PutBucketACL
+type BucketPutACLOptions struct {
+	Header *ACLHeaderOptions `url:"-" xml:"-"`
+	Body   *ACLXml           `url:"-" header:"-"`
+}
+
+// PutACL writes the bucket's ACL table via the API. ACL information can be passed either
+// through the headers "x-cos-acl", "x-cos-grant-read", "x-cos-grant-write" and
+// "x-cos-grant-full-control", or through the body as XML,
+//
+// but only one of header or body may be used; otherwise a conflict is returned.
+//
+// Put Bucket ACL is an overwrite operation: the new ACL replaces the existing one. Only the owner may do this.
+//
+// "x-cos-acl": one of public-read or private; public-read means the bucket is publicly readable and privately writable,
+// private means the bucket is privately readable and writable.
+//
+// "x-cos-grant-read": the granted users have read permission on the bucket
+// "x-cos-grant-write": the granted users have write permission on the bucket
+// "x-cos-grant-full-control": the granted users have read and write permission on the bucket
+//
+// https://www.qcloud.com/document/product/436/7737
+func (s *BucketService) PutACL(ctx context.Context, opt *BucketPutACLOptions) (*Response, error) {
+	header := opt.Header
+	body := opt.Body
+	if body != nil {
+		header = nil
+	}
+	sendOpt := sendOptions{
+		baseURL:   s.client.BaseURL.BucketURL,
+		uri:       "/?acl",
+		method:    http.MethodPut,
+		body:      body,
+		optHeader: header,
+	}
+	resp, err := s.client.send(ctx, &sendOpt)
+	return resp, err
+}
diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_cors.go b/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_cors.go
new file mode 100644
index 000000000..1d688c9b2
--- /dev/null
+++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_cors.go
@@ -0,0 +1,71 @@
+package cos
+
+import (
+	"context"
+	"encoding/xml"
+	"net/http"
+)
+
+// BucketCORSRule is the rule of BucketCORS
+type BucketCORSRule struct {
+	ID             string   `xml:"ID,omitempty"`
+	AllowedMethods []string `xml:"AllowedMethod"`
+	AllowedOrigins []string `xml:"AllowedOrigin"`
+	AllowedHeaders []string `xml:"AllowedHeader,omitempty"`
+	MaxAgeSeconds  int      `xml:"MaxAgeSeconds,omitempty"`
+	ExposeHeaders  []string `xml:"ExposeHeader,omitempty"`
+}
+
+// BucketGetCORSResult is the result of GetBucketCORS
+type BucketGetCORSResult struct {
+	XMLName xml.Name         `xml:"CORSConfiguration"`
+	Rules   []BucketCORSRule `xml:"CORSRule,omitempty"`
+}
+
+// GetCORS reads the bucket's cross-origin access configuration.
+//
+// https://www.qcloud.com/document/product/436/8274
+func (s *BucketService) GetCORS(ctx context.Context) (*BucketGetCORSResult, *Response, error) {
+	var res BucketGetCORSResult
+	sendOpt := sendOptions{
+		baseURL: s.client.BaseURL.BucketURL,
+		uri:     "/?cors",
+		method:  http.MethodGet,
+		result:  &res,
+	}
+	resp, err := s.client.send(ctx, &sendOpt)
+	return &res, resp, err
+}
+
+// BucketPutCORSOptions is the option of PutBucketCORS
+type
BucketPutCORSOptions struct { + XMLName xml.Name `xml:"CORSConfiguration"` + Rules []BucketCORSRule `xml:"CORSRule,omitempty"` +} + +// PutCORS 实现 Bucket 跨域访问设置,您可以通过传入XML格式的配置文件实现配置,文件大小限制为64 KB。 +// +// https://www.qcloud.com/document/product/436/8279 +func (s *BucketService) PutCORS(ctx context.Context, opt *BucketPutCORSOptions) (*Response, error) { + sendOpt := sendOptions{ + baseURL: s.client.BaseURL.BucketURL, + uri: "/?cors", + method: http.MethodPut, + body: opt, + } + resp, err := s.client.send(ctx, &sendOpt) + return resp, err +} + +// DeleteCORS 实现 Bucket 跨域访问配置删除。 +// +// https://www.qcloud.com/document/product/436/8283 +func (s *BucketService) DeleteCORS(ctx context.Context) (*Response, error) { + sendOpt := sendOptions{ + baseURL: s.client.BaseURL.BucketURL, + uri: "/?cors", + method: http.MethodDelete, + } + resp, err := s.client.send(ctx, &sendOpt) + return resp, err +} diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_inventory.go b/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_inventory.go new file mode 100644 index 000000000..17ed781e9 --- /dev/null +++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_inventory.go @@ -0,0 +1,134 @@ +package cos + +import ( + "context" + "encoding/xml" + "fmt" + "net/http" +) + +// Notice bucket_inventory only for test. can not use + +// BucketGetInventoryResult same struct to options +type BucketGetInventoryResult BucketPutInventoryOptions + +// BucketListInventoryConfiguartion same struct to options +type BucketListInventoryConfiguartion BucketPutInventoryOptions + +// BucketInventoryFilter ... +type BucketInventoryFilter struct { + Prefix string `xml:"Prefix,omitempty"` +} + +// BucketInventoryOptionalFields ... +type BucketInventoryOptionalFields struct { + XMLName xml.Name `xml:"OptionalFields,omitempty"` + BucketInventoryFields []string `xml:"Field,omitempty"` +} + +// BucketInventorySchedule ... +type BucketInventorySchedule struct { + Frequency string `xml:"Frequency"` +} + +// BucketInventoryEncryption ... +type BucketInventoryEncryption struct { + XMLName xml.Name `xml:"Encryption"` + SSECOS string `xml:"SSE-COS,omitempty"` +} + +// BucketInventoryDestinationContent ... +type BucketInventoryDestinationContent struct { + Bucket string `xml:"Bucket"` + AccountId string `xml:"AccountId,omitempty"` + Prefix string `xml:"Prefix,omitempty"` + Format string `xml:"Format"` + Encryption *BucketInventoryEncryption `xml:"Encryption,omitempty"` +} + +// BucketInventoryDestination ... +type BucketInventoryDestination struct { + XMLName xml.Name `xml:"Destination"` + BucketDestination *BucketInventoryDestinationContent `xml:"COSBucketDestination"` +} + +// BucketPutInventoryOptions ... 
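+// A sketch of how the nested options fit together (all values below are placeholders,
+// and note the comment near the top of this file: this inventory API is still marked test-only):
+//
+//	opt := &BucketPutInventoryOptions{
+//		ID:                     "inventory-id",
+//		IsEnabled:              "true",
+//		IncludedObjectVersions: "All",
+//		Schedule:               &BucketInventorySchedule{Frequency: "Daily"},
+//		Destination: &BucketInventoryDestination{
+//			BucketDestination: &BucketInventoryDestinationContent{
+//				Bucket: "qcs::cos:ap-guangzhou::examplebucket-1250000000",
+//				Format: "CSV",
+//			},
+//		},
+//	}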
+type BucketPutInventoryOptions struct { + XMLName xml.Name `xml:"InventoryConfiguration"` + ID string `xml:"Id"` + IsEnabled string `xml:"IsEnabled"` + IncludedObjectVersions string `xml:"IncludedObjectVersions"` + Filter *BucketInventoryFilter `xml:"Filter,omitempty"` + OptionalFields *BucketInventoryOptionalFields `xml:"OptionalFields,omitempty"` + Schedule *BucketInventorySchedule `xml:"Schedule"` + Destination *BucketInventoryDestination `xml:"Destination"` +} + +// ListBucketInventoryConfigResult result of ListBucketInventoryConfiguration +type ListBucketInventoryConfigResult struct { + XMLName xml.Name `xml:"ListInventoryConfigurationResult"` + InventoryConfigurations []BucketListInventoryConfiguartion `xml:"InventoryConfiguration,omitempty"` + IsTruncated bool `xml:"IsTruncated,omitempty"` + ContinuationToken string `xml:"ContinuationToken,omitempty"` + NextContinuationToken string `xml:"NextContinuationToken,omitempty"` +} + +// PutBucketInventory https://cloud.tencent.com/document/product/436/33707 +func (s *BucketService) PutBucketInventoryTest(ctx context.Context, id string, opt *BucketPutInventoryOptions) (*Response, error) { + u := fmt.Sprintf("/?inventory&id=%s", id) + sendOpt := sendOptions{ + baseURL: s.client.BaseURL.BucketURL, + uri: u, + method: http.MethodPut, + body: opt, + } + resp, err := s.client.send(ctx, &sendOpt) + return resp, err + +} + +// GetBucketInventory https://cloud.tencent.com/document/product/436/33705 +func (s *BucketService) GetBucketInventoryTest(ctx context.Context, id string) (*BucketGetInventoryResult, *Response, error) { + u := fmt.Sprintf("/?inventory&id=%s", id) + var res BucketGetInventoryResult + sendOpt := sendOptions{ + baseURL: s.client.BaseURL.BucketURL, + uri: u, + method: http.MethodGet, + result: &res, + } + resp, err := s.client.send(ctx, &sendOpt) + return &res, resp, err +} + +// DeleteBucketInventory https://cloud.tencent.com/document/product/436/33704 +func (s *BucketService) DeleteBucketInventoryTest(ctx context.Context, id string) (*Response, error) { + u := fmt.Sprintf("/?inventory&id=%s", id) + sendOpt := sendOptions{ + baseURL: s.client.BaseURL.BucketURL, + uri: u, + method: http.MethodDelete, + } + resp, err := s.client.send(ctx, &sendOpt) + return resp, err +} + +// ListBucketInventoryConfigurations https://cloud.tencent.com/document/product/436/33706 +func (s *BucketService) ListBucketInventoryConfigurationsTest(ctx context.Context, token string) (*ListBucketInventoryConfigResult, *Response, error) { + var res ListBucketInventoryConfigResult + var u string + if token == "" { + u = "/?inventory" + } else { + u = fmt.Sprintf("/?inventory&continuation-token=%s", encodeURIComponent(token)) + } + sendOpt := sendOptions{ + baseURL: s.client.BaseURL.BucketURL, + uri: u, + method: http.MethodGet, + result: &res, + } + resp, err := s.client.send(ctx, &sendOpt) + return &res, resp, err + +} diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_lifecycle.go b/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_lifecycle.go new file mode 100644 index 000000000..c9ca97f48 --- /dev/null +++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_lifecycle.go @@ -0,0 +1,92 @@ +package cos + +import ( + "context" + "encoding/xml" + "net/http" +) + +// BucketLifecycleFilter is the param of BucketLifecycleRule +type BucketLifecycleFilter struct { + Prefix string `xml:"Prefix,omitempty"` +} + +// BucketLifecycleExpiration is the param of BucketLifecycleRule +type BucketLifecycleExpiration struct { + Date string `xml:"Date,omitempty"` + 
	Days int    `xml:"Days,omitempty"`
+}
+
+// BucketLifecycleTransition is the param of BucketLifecycleRule
+type BucketLifecycleTransition struct {
+	Date         string `xml:"Date,omitempty"`
+	Days         int    `xml:"Days,omitempty"`
+	StorageClass string
+}
+
+// BucketLifecycleAbortIncompleteMultipartUpload is the param of BucketLifecycleRule
+type BucketLifecycleAbortIncompleteMultipartUpload struct {
+	DaysAfterInitiation string `xml:"DaysAfterInitiation,omitempty"`
+}
+
+// BucketLifecycleRule is the rule of BucketLifecycle
+type BucketLifecycleRule struct {
+	ID                             string `xml:"ID,omitempty"`
+	Status                         string
+	Filter                         *BucketLifecycleFilter                          `xml:"Filter,omitempty"`
+	Transition                     *BucketLifecycleTransition                      `xml:"Transition,omitempty"`
+	Expiration                     *BucketLifecycleExpiration                      `xml:"Expiration,omitempty"`
+	AbortIncompleteMultipartUpload *BucketLifecycleAbortIncompleteMultipartUpload `xml:"AbortIncompleteMultipartUpload,omitempty"`
+}
+
+// BucketGetLifecycleResult is the result of BucketGetLifecycle
+type BucketGetLifecycleResult struct {
+	XMLName xml.Name              `xml:"LifecycleConfiguration"`
+	Rules   []BucketLifecycleRule `xml:"Rule,omitempty"`
+}
+
+// GetLifecycle reads the lifecycle management configuration. Returns 404 Not Found when no configuration exists.
+// https://www.qcloud.com/document/product/436/8278
+func (s *BucketService) GetLifecycle(ctx context.Context) (*BucketGetLifecycleResult, *Response, error) {
+	var res BucketGetLifecycleResult
+	sendOpt := sendOptions{
+		baseURL: s.client.BaseURL.BucketURL,
+		uri:     "/?lifecycle",
+		method:  http.MethodGet,
+		result:  &res,
+	}
+	resp, err := s.client.send(ctx, &sendOpt)
+	return &res, resp, err
+}
+
+// BucketPutLifecycleOptions is the option of PutBucketLifecycle
+type BucketPutLifecycleOptions struct {
+	XMLName xml.Name              `xml:"LifecycleConfiguration"`
+	Rules   []BucketLifecycleRule `xml:"Rule,omitempty"`
+}
+
+// PutLifecycle sets the lifecycle management configuration, which can be used to configure data lifecycles and scheduled deletion.
+// This is an overwrite operation: uploading a new configuration replaces the previous one. Lifecycle management applies to both files and folders.
+// https://www.qcloud.com/document/product/436/8280
+func (s *BucketService) PutLifecycle(ctx context.Context, opt *BucketPutLifecycleOptions) (*Response, error) {
+	sendOpt := sendOptions{
+		baseURL: s.client.BaseURL.BucketURL,
+		uri:     "/?lifecycle",
+		method:  http.MethodPut,
+		body:    opt,
+	}
+	resp, err := s.client.send(ctx, &sendOpt)
+	return resp, err
+}
+
+// DeleteLifecycle deletes the lifecycle management configuration.
+// https://www.qcloud.com/document/product/436/8284
+func (s *BucketService) DeleteLifecycle(ctx context.Context) (*Response, error) {
+	sendOpt := sendOptions{
+		baseURL: s.client.BaseURL.BucketURL,
+		uri:     "/?lifecycle",
+		method:  http.MethodDelete,
+	}
+	resp, err := s.client.send(ctx, &sendOpt)
+	return resp, err
+}
diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_location.go b/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_location.go
new file mode 100644
index 000000000..dd4b5a55d
--- /dev/null
+++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_location.go
@@ -0,0 +1,28 @@
+package cos
+
+import (
+	"context"
+	"encoding/xml"
+	"net/http"
+)
+
+// BucketGetLocationResult is the result of BucketGetLocation
+type BucketGetLocationResult struct {
+	XMLName  xml.Name `xml:"LocationConstraint"`
+	Location string   `xml:",chardata"`
+}
+
+// GetLocation returns the region the bucket resides in; only the bucket owner may read it.
+//
+// https://www.qcloud.com/document/product/436/8275
+func (s *BucketService) GetLocation(ctx context.Context) (*BucketGetLocationResult, *Response, error) {
+	var res BucketGetLocationResult
+	sendOpt := sendOptions{
+		baseURL: s.client.BaseURL.BucketURL,
+		uri:     "/?location",
+		method:
http.MethodGet, + result: &res, + } + resp, err := s.client.send(ctx, &sendOpt) + return &res, resp, err +} diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_logging.go b/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_logging.go new file mode 100644 index 000000000..d4ea51300 --- /dev/null +++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_logging.go @@ -0,0 +1,53 @@ +package cos + +import ( + "context" + "encoding/xml" + "net/http" +) + +// Notice bucket logging function is testing, can not use. + +// BucketLoggingEnabled main struct of logging +type BucketLoggingEnabled struct { + TargetBucket string `xml:"TargetBucket"` + TargetPrefix string `xml:"TargetPrefix"` +} + +// BucketPutLoggingOptions is the options of PutBucketLogging +type BucketPutLoggingOptions struct { + XMLName xml.Name `xml:"BucketLoggingStatus"` + LoggingEnabled *BucketLoggingEnabled `xml:"LoggingEnabled"` +} + +// BucketGetLoggingResult is the result of GetBucketLogging +type BucketGetLoggingResult struct { + XMLName xml.Name `xml:"BucketLoggingStatus"` + LoggingEnabled *BucketLoggingEnabled `xml:"LoggingEnabled"` +} + +// PutBucketLogging https://cloud.tencent.com/document/product/436/17054 +func (s *BucketService) PutBucketLoggingTest(ctx context.Context, opt *BucketPutLoggingOptions) (*Response, error) { + sendOpt := sendOptions{ + baseURL: s.client.BaseURL.BucketURL, + uri: "/?logging", + method: http.MethodPut, + body: opt, + } + resp, err := s.client.send(ctx, &sendOpt) + return resp, err +} + +// GetBucketLogging https://cloud.tencent.com/document/product/436/17053 +func (s *BucketService) GetBucketLoggingTest(ctx context.Context) (*BucketGetLoggingResult, *Response, error) { + var res BucketGetLoggingResult + sendOpt := sendOptions{ + baseURL: s.client.BaseURL.BucketURL, + uri: "/?logging", + method: http.MethodGet, + result: &res, + } + resp, err := s.client.send(ctx, &sendOpt) + return &res, resp, err + +} diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_part.go b/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_part.go new file mode 100644 index 000000000..e77b4207a --- /dev/null +++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_part.go @@ -0,0 +1,57 @@ +package cos + +import ( + "context" + "encoding/xml" + "net/http" +) + +// ListMultipartUploadsResult is the result of ListMultipartUploads +type ListMultipartUploadsResult struct { + XMLName xml.Name `xml:"ListMultipartUploadsResult"` + Bucket string `xml:"Bucket"` + EncodingType string `xml:"Encoding-Type"` + KeyMarker string + UploadIDMarker string `xml:"UploadIdMarker"` + NextKeyMarker string + NextUploadIDMarker string `xml:"NextUploadIdMarker"` + MaxUploads int + IsTruncated bool + Uploads []struct { + Key string + UploadID string `xml:"UploadId"` + StorageClass string + Initiator *Initiator + Owner *Owner + Initiated string + } `xml:"Upload,omitempty"` + Prefix string + Delimiter string `xml:"delimiter,omitempty"` + CommonPrefixes []string `xml:"CommonPrefixs>Prefix,omitempty"` +} + +// ListMultipartUploadsOptions is the option of ListMultipartUploads +type ListMultipartUploadsOptions struct { + Delimiter string `url:"delimiter,omitempty"` + EncodingType string `url:"encoding-type,omitempty"` + Prefix string `url:"prefix,omitempty"` + MaxUploads int `url:"max-uploads,omitempty"` + KeyMarker string `url:"key-marker,omitempty"` + UploadIDMarker string `url:"upload-id-marker,omitempty"` +} + +// ListMultipartUploads 用来查询正在进行中的分块上传。单次最多列出1000个正在进行中的分块上传。 +// +// https://www.qcloud.com/document/product/436/7736 
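+//
+// For example, listing in-progress uploads under a prefix (the prefix and page size are
+// illustrative):
+//
+//	opt := &ListMultipartUploadsOptions{Prefix: "uploads/", MaxUploads: 100}
+//	res, _, err := client.Bucket.ListMultipartUploads(context.Background(), opt)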
+func (s *BucketService) ListMultipartUploads(ctx context.Context, opt *ListMultipartUploadsOptions) (*ListMultipartUploadsResult, *Response, error) { + var res ListMultipartUploadsResult + sendOpt := sendOptions{ + baseURL: s.client.BaseURL.BucketURL, + uri: "/?uploads", + method: http.MethodGet, + result: &res, + optQuery: opt, + } + resp, err := s.client.send(ctx, &sendOpt) + return &res, resp, err +} diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_replication.go b/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_replication.go new file mode 100644 index 000000000..d0a1a9af9 --- /dev/null +++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_replication.go @@ -0,0 +1,73 @@ +package cos + +import ( + "context" + "encoding/xml" + "net/http" +) + +// ReplicationDestination is the sub struct of BucketReplicationRule +type ReplicationDestination struct { + Bucket string `xml:"Bucket"` + StorageClass string `xml:"StorageClass,omitempty"` +} + +// BucketReplicationRule is the main param of replication +type BucketReplicationRule struct { + ID string `xml:"ID,omitempty"` + Status string `xml:"Status"` + Prefix string `xml:"Prefix"` + Destination *ReplicationDestination `xml:"Destination"` +} + +// PutBucketReplicationOptions is the options of PutBucketReplication +type PutBucketReplicationOptions struct { + XMLName xml.Name `xml:"ReplicationConfiguration"` + Role string `xml:"Role"` + Rule []BucketReplicationRule `xml:"Rule"` +} + +// GetBucketReplicationResult is the result of GetBucketReplication +type GetBucketReplicationResult struct { + XMLName xml.Name `xml:"ReplicationConfiguration"` + Role string `xml:"Role"` + Rule []BucketReplicationRule `xml:"Rule"` +} + +// PutBucketReplication https://cloud.tencent.com/document/product/436/19223 +func (s *BucketService) PutBucketReplication(ctx context.Context, opt *PutBucketReplicationOptions) (*Response, error) { + sendOpt := sendOptions{ + baseURL: s.client.BaseURL.BucketURL, + uri: "/?replication", + method: http.MethodPut, + body: opt, + } + resp, err := s.client.send(ctx, &sendOpt) + return resp, err + +} + +// GetBucketReplication https://cloud.tencent.com/document/product/436/19222 +func (s *BucketService) GetBucketReplication(ctx context.Context) (*GetBucketReplicationResult, *Response, error) { + var res GetBucketReplicationResult + sendOpt := sendOptions{ + baseURL: s.client.BaseURL.BucketURL, + uri: "/?replication", + method: http.MethodGet, + result: &res, + } + resp, err := s.client.send(ctx, &sendOpt) + return &res, resp, err + +} + +// DeleteBucketReplication https://cloud.tencent.com/document/product/436/19221 +func (s *BucketService) DeleteBucketReplication(ctx context.Context) (*Response, error) { + sendOpt := sendOptions{ + baseURL: s.client.BaseURL.BucketURL, + uri: "/?replication", + method: http.MethodDelete, + } + resp, err := s.client.send(ctx, &sendOpt) + return resp, err +} diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_tagging.go b/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_tagging.go new file mode 100644 index 000000000..1d8873175 --- /dev/null +++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_tagging.go @@ -0,0 +1,69 @@ +package cos + +import ( + "context" + "encoding/xml" + "net/http" +) + +// BucketTaggingTag is the tag of BucketTagging +type BucketTaggingTag struct { + Key string + Value string +} + +// BucketGetTaggingResult is the result of BucketGetTagging +type BucketGetTaggingResult struct { + XMLName xml.Name `xml:"Tagging"` + TagSet []BucketTaggingTag 
`xml:"TagSet>Tag,omitempty"` +} + +// GetTagging 接口实现获取指定Bucket的标签。 +// +// https://www.qcloud.com/document/product/436/8277 +func (s *BucketService) GetTagging(ctx context.Context) (*BucketGetTaggingResult, *Response, error) { + var res BucketGetTaggingResult + sendOpt := sendOptions{ + baseURL: s.client.BaseURL.BucketURL, + uri: "/?tagging", + method: http.MethodGet, + result: &res, + } + resp, err := s.client.send(ctx, &sendOpt) + return &res, resp, err +} + +// BucketPutTaggingOptions is the option of BucketPutTagging +type BucketPutTaggingOptions struct { + XMLName xml.Name `xml:"Tagging"` + TagSet []BucketTaggingTag `xml:"TagSet>Tag,omitempty"` +} + +// PutTagging 接口实现给用指定Bucket打标签。用来组织和管理相关Bucket。 +// +// 当该请求设置相同Key名称,不同Value时,会返回400。请求成功,则返回204。 +// +// https://www.qcloud.com/document/product/436/8281 +func (s *BucketService) PutTagging(ctx context.Context, opt *BucketPutTaggingOptions) (*Response, error) { + sendOpt := sendOptions{ + baseURL: s.client.BaseURL.BucketURL, + uri: "/?tagging", + method: http.MethodPut, + body: opt, + } + resp, err := s.client.send(ctx, &sendOpt) + return resp, err +} + +// DeleteTagging 接口实现删除指定Bucket的标签。 +// +// https://www.qcloud.com/document/product/436/8286 +func (s *BucketService) DeleteTagging(ctx context.Context) (*Response, error) { + sendOpt := sendOptions{ + baseURL: s.client.BaseURL.BucketURL, + uri: "/?tagging", + method: http.MethodDelete, + } + resp, err := s.client.send(ctx, &sendOpt) + return resp, err +} diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_version.go b/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_version.go new file mode 100644 index 000000000..b74527bd8 --- /dev/null +++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/bucket_version.go @@ -0,0 +1,45 @@ +package cos + +import ( + "context" + "encoding/xml" + "net/http" +) + +// BucketPutVersionOptions is the options of PutBucketVersioning +type BucketPutVersionOptions struct { + XMLName xml.Name `xml:"VersioningConfiguration"` + Status string `xml:"Status"` +} + +// BucketGetVersionResult is the result of GetBucketVersioning +type BucketGetVersionResult struct { + XMLName xml.Name `xml:"VersioningConfiguration"` + Status string `xml:"Status"` +} + +// PutVersion https://cloud.tencent.com/document/product/436/19889 +// Status has Suspended\Enabled +func (s *BucketService) PutVersioning(ctx context.Context, opt *BucketPutVersionOptions) (*Response, error) { + sendOpt := sendOptions{ + baseURL: s.client.BaseURL.BucketURL, + uri: "/?versioning", + method: http.MethodPut, + body: opt, + } + resp, err := s.client.send(ctx, &sendOpt) + return resp, err +} + +// GetVersion https://cloud.tencent.com/document/product/436/19888 +func (s *BucketService) GetVersioning(ctx context.Context) (*BucketGetVersionResult, *Response, error) { + var res BucketGetVersionResult + sendOpt := sendOptions{ + baseURL: s.client.BaseURL.BucketURL, + uri: "/?versioning", + method: http.MethodGet, + result: &res, + } + resp, err := s.client.send(ctx, &sendOpt) + return &res, resp, err +} diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/cos.go b/vendor/github.com/tencentyun/cos-go-sdk-v5/cos.go new file mode 100644 index 000000000..adc381faf --- /dev/null +++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/cos.go @@ -0,0 +1,346 @@ +package cos + +import ( + "bytes" + "context" + "encoding/base64" + "encoding/xml" + "fmt" + "io" + "io/ioutil" + "net/http" + "net/url" + "reflect" + "text/template" + + "strconv" + + "github.com/google/go-querystring/query" + 
"github.com/mozillazg/go-httpheader" +) + +const ( + // Version current go sdk version + Version = "0.7.3" + userAgent = "cos-go-sdk-v5/" + Version + contentTypeXML = "application/xml" + defaultServiceBaseURL = "http://service.cos.myqcloud.com" +) + +var bucketURLTemplate = template.Must( + template.New("bucketURLFormat").Parse( + "{{.Schema}}://{{.BucketName}}.cos.{{.Region}}.myqcloud.com", + ), +) + +// BaseURL 访问各 API 所需的基础 URL +type BaseURL struct { + // 访问 bucket, object 相关 API 的基础 URL(不包含 path 部分): http://example.com + BucketURL *url.URL + // 访问 service API 的基础 URL(不包含 path 部分): http://example.com + ServiceURL *url.URL +} + +// NewBucketURL 生成 BaseURL 所需的 BucketURL +// +// bucketName: bucket名称, bucket的命名规则为{name}-{appid} ,此处填写的存储桶名称必须为此格式 +// Region: 区域代码: ap-beijing-1,ap-beijing,ap-shanghai,ap-guangzhou... +// secure: 是否使用 https +func NewBucketURL(bucketName, region string, secure bool) *url.URL { + schema := "https" + if !secure { + schema = "http" + } + + w := bytes.NewBuffer(nil) + bucketURLTemplate.Execute(w, struct { + Schema string + BucketName string + Region string + }{ + schema, bucketName, region, + }) + + u, _ := url.Parse(w.String()) + return u +} + +// Client is a client manages communication with the COS API. +type Client struct { + client *http.Client + + UserAgent string + BaseURL *BaseURL + + common service + + Service *ServiceService + Bucket *BucketService + Object *ObjectService +} + +type service struct { + client *Client +} + +// NewClient returns a new COS API client. +func NewClient(uri *BaseURL, httpClient *http.Client) *Client { + if httpClient == nil { + httpClient = &http.Client{} + } + + baseURL := &BaseURL{} + if uri != nil { + baseURL.BucketURL = uri.BucketURL + baseURL.ServiceURL = uri.ServiceURL + } + if baseURL.ServiceURL == nil { + baseURL.ServiceURL, _ = url.Parse(defaultServiceBaseURL) + } + + c := &Client{ + client: httpClient, + UserAgent: userAgent, + BaseURL: baseURL, + } + c.common.client = c + c.Service = (*ServiceService)(&c.common) + c.Bucket = (*BucketService)(&c.common) + c.Object = (*ObjectService)(&c.common) + return c +} + +func (c *Client) newRequest(ctx context.Context, baseURL *url.URL, uri, method string, body interface{}, optQuery interface{}, optHeader interface{}) (req *http.Request, err error) { + uri, err = addURLOptions(uri, optQuery) + if err != nil { + return + } + u, _ := url.Parse(uri) + urlStr := baseURL.ResolveReference(u).String() + + var reader io.Reader + contentType := "" + contentMD5 := "" + if body != nil { + // 上传文件 + if r, ok := body.(io.Reader); ok { + reader = r + } else { + b, err := xml.Marshal(body) + if err != nil { + return nil, err + } + contentType = contentTypeXML + reader = bytes.NewReader(b) + contentMD5 = base64.StdEncoding.EncodeToString(calMD5Digest(b)) + } + } + + req, err = http.NewRequest(method, urlStr, reader) + if err != nil { + return + } + + req.Header, err = addHeaderOptions(req.Header, optHeader) + if err != nil { + return + } + if v := req.Header.Get("Content-Length"); req.ContentLength == 0 && v != "" && v != "0" { + req.ContentLength, _ = strconv.ParseInt(v, 10, 64) + } + + if contentMD5 != "" { + req.Header["Content-MD5"] = []string{contentMD5} + } + if c.UserAgent != "" { + req.Header.Set("User-Agent", c.UserAgent) + } + if req.Header.Get("Content-Type") == "" && contentType != "" { + req.Header.Set("Content-Type", contentType) + } + return +} + +func (c *Client) doAPI(ctx context.Context, req *http.Request, result interface{}, closeBody bool) (*Response, error) { + req = 
req.WithContext(ctx) + + resp, err := c.client.Do(req) + if err != nil { + // If we got an error, and the context has been canceled, + // the context's error is probably more useful. + select { + case <-ctx.Done(): + return nil, ctx.Err() + default: + } + return nil, err + } + + defer func() { + if closeBody { + // Close the body to let the Transport reuse the connection + io.Copy(ioutil.Discard, resp.Body) + resp.Body.Close() + } + }() + + response := newResponse(resp) + + err = checkResponse(resp) + if err != nil { + // even though there was an error, we still return the response + // in case the caller wants to inspect it further + return response, err + } + + if result != nil { + if w, ok := result.(io.Writer); ok { + io.Copy(w, resp.Body) + } else { + err = xml.NewDecoder(resp.Body).Decode(result) + if err == io.EOF { + err = nil // ignore EOF errors caused by empty response body + } + } + } + + return response, err +} + +type sendOptions struct { + // 基础 URL + baseURL *url.URL + // URL 中除基础 URL 外的剩余部分 + uri string + // 请求方法 + method string + + body interface{} + // url 查询参数 + optQuery interface{} + // http header 参数 + optHeader interface{} + // 用 result 反序列化 resp.Body + result interface{} + // 是否禁用自动调用 resp.Body.Close() + // 自动调用 Close() 是为了能够重用连接 + disableCloseBody bool +} + +func (c *Client) send(ctx context.Context, opt *sendOptions) (resp *Response, err error) { + req, err := c.newRequest(ctx, opt.baseURL, opt.uri, opt.method, opt.body, opt.optQuery, opt.optHeader) + if err != nil { + return + } + + resp, err = c.doAPI(ctx, req, opt.result, !opt.disableCloseBody) + if err != nil { + return + } + return +} + +// addURLOptions adds the parameters in opt as URL query parameters to s. opt +// must be a struct whose fields may contain "url" tags. +func addURLOptions(s string, opt interface{}) (string, error) { + v := reflect.ValueOf(opt) + if v.Kind() == reflect.Ptr && v.IsNil() { + return s, nil + } + + u, err := url.Parse(s) + if err != nil { + return s, err + } + + qs, err := query.Values(opt) + if err != nil { + return s, err + } + + // 保留原有的参数,并且放在前面。因为 cos 的 url 路由是以第一个参数作为路由的 + // e.g. /?uploads + q := u.RawQuery + rq := qs.Encode() + if q != "" { + if rq != "" { + u.RawQuery = fmt.Sprintf("%s&%s", q, qs.Encode()) + } + } else { + u.RawQuery = rq + } + return u.String(), nil +} + +// addHeaderOptions adds the parameters in opt as Header fields to req. opt +// must be a struct whose fields may contain "header" tags. 
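+//
+// For example, ObjectGetOptions declares fields such as
+//
+//	Range string `url:"-" header:"Range,omitempty"`
+//
+// and addHeaderOptions copies each tagged, non-empty field into the request headers.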
+func addHeaderOptions(header http.Header, opt interface{}) (http.Header, error) { + v := reflect.ValueOf(opt) + if v.Kind() == reflect.Ptr && v.IsNil() { + return header, nil + } + + h, err := httpheader.Header(opt) + if err != nil { + return nil, err + } + + for key, values := range h { + for _, value := range values { + header.Add(key, value) + } + } + return header, nil +} + +// Owner defines Bucket/Object's owner +type Owner struct { + UIN string `xml:"uin,omitempty"` + ID string `xml:",omitempty"` + DisplayName string `xml:",omitempty"` +} + +// Initiator same to the Owner struct +type Initiator Owner + +// Response API 响应 +type Response struct { + *http.Response +} + +func newResponse(resp *http.Response) *Response { + return &Response{ + Response: resp, + } +} + +// ACLHeaderOptions is the option of ACLHeader +type ACLHeaderOptions struct { + XCosACL string `header:"x-cos-acl,omitempty" url:"-" xml:"-"` + XCosGrantRead string `header:"x-cos-grant-read,omitempty" url:"-" xml:"-"` + XCosGrantWrite string `header:"x-cos-grant-write,omitempty" url:"-" xml:"-"` + XCosGrantFullControl string `header:"x-cos-grant-full-control,omitempty" url:"-" xml:"-"` +} + +// ACLGrantee is the param of ACLGrant +type ACLGrantee struct { + Type string `xml:"type,attr"` + UIN string `xml:"uin,omitempty"` + URI string `xml:"URI,omitempty"` + ID string `xml:",omitempty"` + DisplayName string `xml:",omitempty"` + SubAccount string `xml:"Subaccount,omitempty"` +} + +// ACLGrant is the param of ACLXml +type ACLGrant struct { + Grantee *ACLGrantee + Permission string +} + +// ACLXml is the ACL body struct +type ACLXml struct { + XMLName xml.Name `xml:"AccessControlPolicy"` + Owner *Owner + AccessControlList []ACLGrant `xml:"AccessControlList>Grant,omitempty"` +} diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/doc.go b/vendor/github.com/tencentyun/cos-go-sdk-v5/doc.go new file mode 100644 index 000000000..99005ab3a --- /dev/null +++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/doc.go @@ -0,0 +1,3 @@ +// Package cos is COS(Cloud Object Storage) Go SDK. The V5 version(XML API). +// There are examples of using each API in the project's 'example' directory. 
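+//
+// A minimal sketch of constructing a client (the bucket name, appid and region are
+// placeholders):
+//
+//	u, _ := url.Parse("https://examplebucket-1250000000.cos.ap-guangzhou.myqcloud.com")
+//	c := cos.NewClient(&cos.BaseURL{BucketURL: u}, &http.Client{
+//		Transport: &cos.AuthorizationTransport{
+//			SecretID:  os.Getenv("COS_SECRETID"),
+//			SecretKey: os.Getenv("COS_SECRETKEY"),
+//		},
+//	})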
+package cos diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/error.go b/vendor/github.com/tencentyun/cos-go-sdk-v5/error.go new file mode 100644 index 000000000..9d06194eb --- /dev/null +++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/error.go @@ -0,0 +1,49 @@ +package cos + +import ( + "encoding/xml" + "fmt" + "io/ioutil" + "net/http" +) + +// ErrorResponse 包含 API 返回的错误信息 +// +// https://www.qcloud.com/document/product/436/7730 +type ErrorResponse struct { + XMLName xml.Name `xml:"Error"` + Response *http.Response `xml:"-"` + Code string + Message string + Resource string + RequestID string `header:"x-cos-request-id,omitempty" url:"-" xml:"-"` + TraceID string `xml:"TraceId,omitempty"` +} + +// Error returns the error msg +func (r *ErrorResponse) Error() string { + RequestID := r.RequestID + if RequestID == "" { + RequestID = r.Response.Header.Get("X-Cos-Request-Id") + } + TraceID := r.TraceID + if TraceID == "" { + TraceID = r.Response.Header.Get("X-Cos-Trace-Id") + } + return fmt.Sprintf("%v %v: %d %v(Message: %v, RequestId: %v, TraceId: %v)", + r.Response.Request.Method, r.Response.Request.URL, + r.Response.StatusCode, r.Code, r.Message, RequestID, TraceID) +} + +// 检查 response 是否是出错时的返回的 response +func checkResponse(r *http.Response) error { + if c := r.StatusCode; 200 <= c && c <= 299 { + return nil + } + errorResponse := &ErrorResponse{Response: r} + data, err := ioutil.ReadAll(r.Body) + if err == nil && data != nil { + xml.Unmarshal(data, errorResponse) + } + return errorResponse +} diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/go.mod b/vendor/github.com/tencentyun/cos-go-sdk-v5/go.mod new file mode 100644 index 000000000..35afc5cbc --- /dev/null +++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/go.mod @@ -0,0 +1,10 @@ +module github.com/tencentyun/cos-go-sdk-v5 + +go 1.12 + +require ( + github.com/QcloudApi/qcloud_sign_golang v0.0.0-20141224014652-e4130a326409 + github.com/google/go-querystring v1.0.0 + github.com/mozillazg/go-httpheader v0.2.1 + github.com/stretchr/testify v1.3.0 +) diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/go.sum b/vendor/github.com/tencentyun/cos-go-sdk-v5/go.sum new file mode 100644 index 000000000..5ee5a1d23 --- /dev/null +++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/go.sum @@ -0,0 +1,13 @@ +github.com/QcloudApi/qcloud_sign_golang v0.0.0-20141224014652-e4130a326409 h1:DTQ/38ao/CfXsrK0cSAL+h4R/u0VVvfWLZEOlLwEROI= +github.com/QcloudApi/qcloud_sign_golang v0.0.0-20141224014652-e4130a326409/go.mod h1:1pk82RBxDY/JZnPQrtqHlUFfCctgdorsd9M06fMynOM= +github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/google/go-querystring v1.0.0 h1:Xkwi/a1rcvNg1PPYe5vI8GbeBY/jrVuDX5ASuANWTrk= +github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck= +github.com/mozillazg/go-httpheader v0.2.1 h1:geV7TrjbL8KXSyvghnFm+NyTux/hxwueTSrwhe88TQQ= +github.com/mozillazg/go-httpheader v0.2.1/go.mod h1:jJ8xECTlalr6ValeXYdOF8fFUISeBAdw6E61aqQma60= +github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= diff --git 
diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/helper.go b/vendor/github.com/tencentyun/cos-go-sdk-v5/helper.go
new file mode 100644
index 000000000..08c5a35df
--- /dev/null
+++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/helper.go
@@ -0,0 +1,85 @@
+package cos
+
+import (
+	"bytes"
+	"crypto/md5"
+	"crypto/sha1"
+	"fmt"
+	"net/http"
+)
+
+// block size used when computing md5 or sha1 digests
+const calDigestBlockSize = 1024 * 1024 * 10
+
+func calMD5Digest(msg []byte) []byte {
+	// TODO: compute the digest in blocks to reduce memory consumption
+	m := md5.New()
+	m.Write(msg)
+	return m.Sum(nil)
+}
+
+func calSHA1Digest(msg []byte) []byte {
+	// TODO: compute the digest in blocks to reduce memory consumption
+	m := sha1.New()
+	m.Write(msg)
+	return m.Sum(nil)
+}
+
+// cloneRequest returns a clone of the provided *http.Request. The clone is a
+// shallow copy of the struct and its Header map.
+func cloneRequest(r *http.Request) *http.Request {
+	// shallow copy of the struct
+	r2 := new(http.Request)
+	*r2 = *r
+	// deep copy of the Header
+	r2.Header = make(http.Header, len(r.Header))
+	for k, s := range r.Header {
+		r2.Header[k] = append([]string(nil), s...)
+	}
+	return r2
+}
+
+// encodeURIComponent behaves like the JavaScript function of the same name
+//
+// https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURIComponent
+//
+// http://www.ecma-international.org/ecma-262/6.0/#sec-uri-syntax-and-semantics
+func encodeURIComponent(s string) string {
+	var b bytes.Buffer
+	written := 0
+
+	for i, n := 0, len(s); i < n; i++ {
+		c := s[i]
+
+		switch c {
+		case '-', '_', '.', '!', '~', '*', '\'', '(', ')':
+			continue
+		default:
+			// Unreserved according to RFC 3986 sec 2.3
+			if 'a' <= c && c <= 'z' {
+				continue
+			}
+			if 'A' <= c && c <= 'Z' {
+				continue
+			}
+			if '0' <= c && c <= '9' {
+				continue
+			}
+		}
+
+		b.WriteString(s[written:i])
+		fmt.Fprintf(&b, "%%%02X", c)
+		written = i + 1
+	}
+
+	if written == 0 {
+		return s
+	}
+	b.WriteString(s[written:])
+	return b.String()
+}
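For reference, encodeURIComponent percent-encodes every byte outside the unreserved set of RFC 3986 section 2.3 plus !*'(), matching the JavaScript built-in. A minimal in-package test sketch of the expected behavior (hypothetical, since the function is unexported; not part of this diff):

package cos

import "testing"

// TestEncodeURIComponentSketch illustrates the expected escaping behavior.
func TestEncodeURIComponentSketch(t *testing.T) {
	// Space and slash are escaped; unreserved characters pass through.
	got := encodeURIComponent("object key/with spaces.txt")
	want := "object%20key%2Fwith%20spaces.txt"
	if got != want {
		t.Fatalf("encodeURIComponent: got %q, want %q", got, want)
	}
}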
diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/object.go b/vendor/github.com/tencentyun/cos-go-sdk-v5/object.go
new file mode 100644
index 000000000..c88dc02d5
--- /dev/null
+++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/object.go
@@ -0,0 +1,605 @@
+package cos
+
+import (
+	"context"
+	"encoding/xml"
+	"errors"
+	"fmt"
+	"io"
+	"net/http"
+	"net/url"
+	"os"
+	"sort"
+	"time"
+)
+
+// ObjectService handles the object-related APIs
+type ObjectService service
+
+// ObjectGetOptions is the option of GetObject
+type ObjectGetOptions struct {
+	ResponseContentType        string `url:"response-content-type,omitempty" header:"-"`
+	ResponseContentLanguage    string `url:"response-content-language,omitempty" header:"-"`
+	ResponseExpires            string `url:"response-expires,omitempty" header:"-"`
+	ResponseCacheControl       string `url:"response-cache-control,omitempty" header:"-"`
+	ResponseContentDisposition string `url:"response-content-disposition,omitempty" header:"-"`
+	ResponseContentEncoding    string `url:"response-content-encoding,omitempty" header:"-"`
+	Range                      string `url:"-" header:"Range,omitempty"`
+	IfModifiedSince            string `url:"-" header:"If-Modified-Since,omitempty"`
+}
+
+// presignedURLTestingOptions is the option used to test presigned URLs
+type presignedURLTestingOptions struct {
+	authTime *AuthTime
+}
+
+// Get downloads a file (Object) to the local machine.
+// The operation requires read permission on the target Object, or the target Object must be readable by everyone (public read).
+//
+// https://www.qcloud.com/document/product/436/7753
+func (s *ObjectService) Get(ctx context.Context, name string, opt *ObjectGetOptions, id ...string) (*Response, error) {
+	var u string
+	if len(id) == 1 {
+		u = fmt.Sprintf("/%s?versionId=%s", encodeURIComponent(name), id[0])
+	} else if len(id) == 0 {
+		u = "/" + encodeURIComponent(name)
+	} else {
+		return nil, errors.New("wrong params")
+	}
+
+	sendOpt := sendOptions{
+		baseURL:          s.client.BaseURL.BucketURL,
+		uri:              u,
+		method:           http.MethodGet,
+		optQuery:         opt,
+		optHeader:        opt,
+		disableCloseBody: true,
+	}
+	resp, err := s.client.send(ctx, &sendOpt)
+	return resp, err
+}
+
+// GetToFile downloads the object to a local file
+func (s *ObjectService) GetToFile(ctx context.Context, name, localpath string, opt *ObjectGetOptions, id ...string) (*Response, error) {
+	resp, err := s.Get(ctx, name, opt, id...)
+	if err != nil {
+		return resp, err
+	}
+	defer resp.Body.Close()
+
+	// If the file exists, overwrite it
+	fd, err := os.OpenFile(localpath, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0660)
+	if err != nil {
+		return resp, err
+	}
+
+	_, err = io.Copy(fd, resp.Body)
+	fd.Close()
+	if err != nil {
+		return resp, err
+	}
+
+	return resp, nil
+}
+
+// GetPresignedURL generates a presigned URL for downloading or uploading an object
+func (s *ObjectService) GetPresignedURL(ctx context.Context, httpMethod, name, ak, sk string, expired time.Duration, opt interface{}) (*url.URL, error) {
+	sendOpt := sendOptions{
+		baseURL:   s.client.BaseURL.BucketURL,
+		uri:       "/" + encodeURIComponent(name),
+		method:    httpMethod,
+		optQuery:  opt,
+		optHeader: opt,
+	}
+	req, err := s.client.newRequest(ctx, sendOpt.baseURL, sendOpt.uri, sendOpt.method, sendOpt.body, sendOpt.optQuery, sendOpt.optHeader)
+	if err != nil {
+		return nil, err
+	}
+
+	var authTime *AuthTime
+	if opt != nil {
+		if opt, ok := opt.(*presignedURLTestingOptions); ok {
+			authTime = opt.authTime
+		}
+	}
+	if authTime == nil {
+		authTime = NewAuthTime(expired)
+	}
+	authorization := newAuthorization(ak, sk, req, authTime)
+	sign := encodeURIComponent(authorization)
+
+	if req.URL.RawQuery == "" {
+		req.URL.RawQuery = fmt.Sprintf("sign=%s", sign)
+	} else {
+		req.URL.RawQuery = fmt.Sprintf("%s&sign=%s", req.URL.RawQuery, sign)
+	}
+	return req.URL, nil
+
+}
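A sketch of generating a presigned download URL with GetPresignedURL above, reusing the hypothetical client c from the earlier sketch; "SECRETID", "SECRETKEY" and the object key are placeholders:

// Hypothetical presigned-URL sketch (not part of this diff).
presigned, err := c.Object.GetPresignedURL(context.Background(),
	http.MethodGet, "exampleobject", "SECRETID", "SECRETKEY", time.Hour, nil)
if err != nil {
	// handle error
}
// Anyone holding this URL can download the object until it expires.
fmt.Println(presigned.String())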
+
+// ObjectPutHeaderOptions is the header options of put object
+type ObjectPutHeaderOptions struct {
+	CacheControl       string `header:"Cache-Control,omitempty" url:"-"`
+	ContentDisposition string `header:"Content-Disposition,omitempty" url:"-"`
+	ContentEncoding    string `header:"Content-Encoding,omitempty" url:"-"`
+	ContentType        string `header:"Content-Type,omitempty" url:"-"`
+	ContentMD5         string `header:"Content-MD5,omitempty" url:"-"`
+	ContentLength      int    `header:"Content-Length,omitempty" url:"-"`
+	Expect             string `header:"Expect,omitempty" url:"-"`
+	Expires            string `header:"Expires,omitempty" url:"-"`
+	XCosContentSHA1    string `header:"x-cos-content-sha1,omitempty" url:"-"`
+	// custom x-cos-meta-* headers
+	XCosMetaXXX      *http.Header `header:"x-cos-meta-*,omitempty" url:"-"`
+	XCosStorageClass string       `header:"x-cos-storage-class,omitempty" url:"-"`
+	// Optional values: Normal, Appendable
+	//XCosObjectType string `header:"x-cos-object-type,omitempty" url:"-"`
+	// Enable Server Side Encryption, Only supported: AES256
+	XCosServerSideEncryption string `header:"x-cos-server-side-encryption,omitempty" url:"-" xml:"-"`
+}
+
+// ObjectPutOptions is the options of put object
+type ObjectPutOptions struct {
+	*ACLHeaderOptions       `header:",omitempty" url:"-" xml:"-"`
+	*ObjectPutHeaderOptions `header:",omitempty" url:"-" xml:"-"`
+}
+
+// Put uploads a file (Object) to the specified bucket.
+//
+// When r is not a bytes.Buffer/bytes.Reader/strings.Reader, opt.ObjectPutHeaderOptions.ContentLength must be set
+//
+// https://www.qcloud.com/document/product/436/7749
+func (s *ObjectService) Put(ctx context.Context, name string, r io.Reader, opt *ObjectPutOptions) (*Response, error) {
+	sendOpt := sendOptions{
+		baseURL:   s.client.BaseURL.BucketURL,
+		uri:       "/" + encodeURIComponent(name),
+		method:    http.MethodPut,
+		body:      r,
+		optHeader: opt,
+	}
+	resp, err := s.client.send(ctx, &sendOpt)
+	return resp, err
+}
+
+// PutFromFile puts an object from a local file
+// Note: when uploading a large file with this method, disable request/response body debugging, otherwise the process may run out of memory
+func (s *ObjectService) PutFromFile(ctx context.Context, name string, filePath string, opt *ObjectPutOptions) (*Response, error) {
+	fd, err := os.Open(filePath)
+	if err != nil {
+		return nil, err
+	}
+	defer fd.Close()
+
+	return s.Put(ctx, name, fd, opt)
+}
+
+// ObjectCopyHeaderOptions is the header option of the Copy
+type ObjectCopyHeaderOptions struct {
+	// Used with the replace metadata directive to update metadata
+	CacheControl                    string `header:"Cache-Control,omitempty" url:"-"`
+	ContentDisposition              string `header:"Content-Disposition,omitempty" url:"-"`
+	ContentEncoding                 string `header:"Content-Encoding,omitempty" url:"-"`
+	ContentType                     string `header:"Content-Type,omitempty" url:"-"`
+	Expires                         string `header:"Expires,omitempty" url:"-"`
+	Expect                          string `header:"Expect,omitempty" url:"-"`
+	XCosMetadataDirective           string `header:"x-cos-metadata-directive,omitempty" url:"-" xml:"-"`
+	XCosCopySourceIfModifiedSince   string `header:"x-cos-copy-source-If-Modified-Since,omitempty" url:"-" xml:"-"`
+	XCosCopySourceIfUnmodifiedSince string `header:"x-cos-copy-source-If-Unmodified-Since,omitempty" url:"-" xml:"-"`
+	XCosCopySourceIfMatch           string `header:"x-cos-copy-source-If-Match,omitempty" url:"-" xml:"-"`
+	XCosCopySourceIfNoneMatch       string `header:"x-cos-copy-source-If-None-Match,omitempty" url:"-" xml:"-"`
+	XCosStorageClass                string `header:"x-cos-storage-class,omitempty" url:"-" xml:"-"`
+	// custom x-cos-meta-* headers
+	XCosMetaXXX              *http.Header `header:"x-cos-meta-*,omitempty" url:"-"`
+	XCosCopySource           string       `header:"x-cos-copy-source" url:"-" xml:"-"`
+	XCosServerSideEncryption string       `header:"x-cos-server-side-encryption,omitempty" url:"-" xml:"-"`
+}
+
+// ObjectCopyOptions is the option of Copy, choose header or body
+type ObjectCopyOptions struct {
+	*ObjectCopyHeaderOptions `header:",omitempty" url:"-" xml:"-"`
+	*ACLHeaderOptions        `header:",omitempty" url:"-" xml:"-"`
+}
+
+// ObjectCopyResult is the result of Copy
+type ObjectCopyResult struct {
+	XMLName      xml.Name `xml:"CopyObjectResult"`
+	ETag         string   `xml:"ETag,omitempty"`
+	LastModified string   `xml:"LastModified,omitempty"`
+}
+
+// Copy sends a PutObjectCopy request to copy a file from a source path to a destination path. The recommended file size is 1M to 5G;
+// for files over 5G, use multipart upload (Upload - Copy). During the copy, file metadata and the ACL can be modified.
+//
+// Users can use this API to move or rename a file, modify file attributes, or create a copy.
+//
+// Note: for cross-account copies, the source file must first be set to public read, or the destination account must be granted access; within the same account this is not needed.
+//
+// https://cloud.tencent.com/document/product/436/10881
+func (s *ObjectService) Copy(ctx context.Context, name, sourceURL string, opt *ObjectCopyOptions) (*ObjectCopyResult, *Response, error) {
+	var res ObjectCopyResult
+	if opt == nil {
+		opt = new(ObjectCopyOptions)
+	}
+	if opt.ObjectCopyHeaderOptions == nil {
+		opt.ObjectCopyHeaderOptions = new(ObjectCopyHeaderOptions)
+	}
+	opt.XCosCopySource = encodeURIComponent(sourceURL)
+
+	sendOpt := sendOptions{
+		baseURL:   s.client.BaseURL.BucketURL,
+		uri:       "/" + encodeURIComponent(name),
+		method:    http.MethodPut,
+		body:      nil,
+		optHeader: opt,
+		result:    &res,
+	}
+	resp, err := s.client.send(ctx, &sendOpt)
+	// If an error occurs during the copy operation, the error response is embedded in the 200 OK response. This means that a 200 OK response can contain either a success or an error.
+	if err == nil && resp.StatusCode == 200 {
+		if res.ETag == "" {
+			return &res, resp, errors.New("response 200 OK, but body contains an error")
+		}
+	}
+	return &res, resp, err
+}
+
+// Delete deletes a file (Object).
+//
+// https://www.qcloud.com/document/product/436/7743
+func (s *ObjectService) Delete(ctx context.Context, name string) (*Response, error) {
+	// An empty name would otherwise hit the delete bucket endpoint
+	if len(name) == 0 {
+		return nil, errors.New("empty object name")
+	}
+
+	sendOpt := sendOptions{
+		baseURL: s.client.BaseURL.BucketURL,
+		uri:     "/" + encodeURIComponent(name),
+		method:  http.MethodDelete,
+	}
+	resp, err := s.client.send(ctx, &sendOpt)
+	return resp, err
+}
+
+// ObjectHeadOptions is the option of HeadObject
+type ObjectHeadOptions struct {
+	IfModifiedSince string `url:"-" header:"If-Modified-Since,omitempty"`
+}
+
+// Head retrieves the metadata of the corresponding Object; Head requires the same permission as Get
+//
+// https://www.qcloud.com/document/product/436/7745
+func (s *ObjectService) Head(ctx context.Context, name string, opt *ObjectHeadOptions, id ...string) (*Response, error) {
+	var u string
+	if len(id) == 1 {
+		u = fmt.Sprintf("/%s?versionId=%s", encodeURIComponent(name), id[0])
+	} else if len(id) == 0 {
+		u = "/" + encodeURIComponent(name)
+	} else {
+		return nil, errors.New("wrong params")
+	}
+
+	sendOpt := sendOptions{
+		baseURL:   s.client.BaseURL.BucketURL,
+		uri:       u,
+		method:    http.MethodHead,
+		optHeader: opt,
+	}
+	resp, err := s.client.send(ctx, &sendOpt)
+	if resp != nil && resp.Header["X-Cos-Object-Type"] != nil && resp.Header["X-Cos-Object-Type"][0] == "appendable" {
+		resp.Header.Add("x-cos-next-append-position", resp.Header["Content-Length"][0])
+	}
+
+	return resp, err
+}
+
+// ObjectOptionsOptions is the option of object options
+type ObjectOptionsOptions struct {
+	Origin                      string `url:"-" header:"Origin"`
+	AccessControlRequestMethod  string `url:"-" header:"Access-Control-Request-Method"`
+	AccessControlRequestHeaders string `url:"-" header:"Access-Control-Request-Headers,omitempty"`
+}
+
+// Options implements the CORS preflight request for an Object: it sends an OPTIONS request to the server to confirm whether a cross-origin operation is allowed.
+//
+// When no CORS configuration exists, the request returns 403 Forbidden.
+//
+// https://www.qcloud.com/document/product/436/8288
+func (s *ObjectService) Options(ctx context.Context, name string, opt *ObjectOptionsOptions) (*Response, error) {
+	sendOpt := sendOptions{
+		baseURL:   s.client.BaseURL.BucketURL,
+		uri:       "/" + encodeURIComponent(name),
+		method:    http.MethodOptions,
+		optHeader: opt,
+	}
+	resp, err := s.client.send(ctx, &sendOpt)
+	return resp, err
+}
+
+// CASJobParameters supports three restore tiers: Standard (within 35 hours), Expedited (within 15 minutes), Bulk (within 5-12 hours)
+type CASJobParameters struct {
+	Tier string `xml:"Tier"`
+}
+
+// ObjectRestoreOptions is the option of object restore
+type ObjectRestoreOptions struct {
+	XMLName xml.Name          `xml:"RestoreRequest"`
+	Days    int               `xml:"Days"`
+	Tier    *CASJobParameters `xml:"CASJobParameters"`
+}
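A sketch of restoring an archived object with PostRestore (defined just below), again reusing the hypothetical client c; the day count, tier, and object key are illustrative:

// Hypothetical restore sketch (not part of this diff).
opt := &cos.ObjectRestoreOptions{
	Days: 3, // keep the restored copy available for three days
	Tier: &cos.CASJobParameters{Tier: "Expedited"}, // or "Standard" / "Bulk"
}
_, err := c.Object.PostRestore(context.Background(), "archived-object", opt)
if err != nil {
	// handle error
}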
+
+// PostRestore recovers an object archived in the COS ARCHIVE storage class.
+//
+// https://cloud.tencent.com/document/product/436/12633
+func (s *ObjectService) PostRestore(ctx context.Context, name string, opt *ObjectRestoreOptions) (*Response, error) {
+	u := fmt.Sprintf("/%s?restore", encodeURIComponent(name))
+	sendOpt := sendOptions{
+		baseURL: s.client.BaseURL.BucketURL,
+		uri:     u,
+		method:  http.MethodPost,
+		body:    opt,
+	}
+	resp, err := s.client.send(ctx, &sendOpt)
+
+	return resp, err
+}
+
+// TODO: the Append API is being optimized and is not yet available
+//
+// Append uploads a file (Object) to a bucket by appending chunks. A file used with Append Upload must be set to Appendable beforehand.
+// When a Put Object operation is performed on an Appendable file, the file is overwritten and its type changes to Normal.
+//
+// The file type can be queried with a Head Object operation: the response carries the custom header x-cos-object-type, which has only two values, Normal or Appendable.
+//
+// The recommended file size for append upload is 1M - 5G. If position does not match the current length of the Object, COS returns a 409 error.
+// Appending to a Normal Object returns 409 ObjectNotAppendable.
+//
+// Appendable files cannot be copied and do not participate in versioning, lifecycle management, or cross-region replication.
+//
+// When r is not a bytes.Buffer/bytes.Reader/strings.Reader, opt.ObjectPutHeaderOptions.ContentLength must be set
+//
+// https://www.qcloud.com/document/product/436/7741
+// func (s *ObjectService) Append(ctx context.Context, name string, position int, r io.Reader, opt *ObjectPutOptions) (*Response, error) {
+// 	u := fmt.Sprintf("/%s?append&position=%d", encodeURIComponent(name), position)
+// 	if position != 0{
+// 		opt = nil
+// 	}
+// 	sendOpt := sendOptions{
+// 		baseURL:   s.client.BaseURL.BucketURL,
+// 		uri:       u,
+// 		method:    http.MethodPost,
+// 		optHeader: opt,
+// 		body:      r,
+// 	}
+// 	resp, err := s.client.send(ctx, &sendOpt)
+// 	return resp, err
+// }
+
+// ObjectDeleteMultiOptions is the option of DeleteMulti
+type ObjectDeleteMultiOptions struct {
+	XMLName xml.Name `xml:"Delete" header:"-"`
+	Quiet   bool     `xml:"Quiet" header:"-"`
+	Objects []Object `xml:"Object" header:"-"`
+	//XCosSha1 string `xml:"-" header:"x-cos-sha1"`
+}
+
+// ObjectDeleteMultiResult is the result of DeleteMulti
+type ObjectDeleteMultiResult struct {
+	XMLName        xml.Name `xml:"DeleteResult"`
+	DeletedObjects []Object `xml:"Deleted,omitempty"`
+	Errors         []struct {
+		Key     string
+		Code    string
+		Message string
+	} `xml:"Error,omitempty"`
+}
+
+// DeleteMulti deletes files in batch; at most 1000 files can be deleted in a single request.
+// For the results, COS provides Verbose and Quiet modes: Verbose returns the deletion result of every Object;
+// Quiet only returns information about the Objects whose deletion failed.
+// https://www.qcloud.com/document/product/436/8289
+func (s *ObjectService) DeleteMulti(ctx context.Context, opt *ObjectDeleteMultiOptions) (*ObjectDeleteMultiResult, *Response, error) {
+	var res ObjectDeleteMultiResult
+	sendOpt := sendOptions{
+		baseURL: s.client.BaseURL.BucketURL,
+		uri:     "/?delete",
+		method:  http.MethodPost,
+		body:    opt,
+		result:  &res,
+	}
+	resp, err := s.client.send(ctx, &sendOpt)
+	return &res, resp, err
+}
+
+// Object is the meta info of the object
+type Object struct {
+	Key          string `xml:",omitempty"`
+	ETag         string `xml:",omitempty"`
+	Size         int    `xml:",omitempty"`
+	PartNumber   int    `xml:",omitempty"`
+	LastModified string `xml:",omitempty"`
+	StorageClass string `xml:",omitempty"`
+	Owner        *Owner `xml:",omitempty"`
+}
+
+// MultiUploadOptions is the option of the multiupload;
+// ThreadPoolSize defaults to one
+type MultiUploadOptions struct {
+	OptIni         *InitiateMultipartUploadOptions
+	PartSize       int64
+	ThreadPoolSize int
+}
+
+// Chunk describes one part of a multipart upload
+type Chunk struct {
+	Number int
+	OffSet int64
+	Size   int64
+}
+
+// Jobs describes a single upload-part job consumed by a worker
+type Jobs struct {
+	Name       string
+	UploadId   string
+	FilePath   string
+	RetryTimes int
+	Chunk      Chunk
+	Data       io.Reader
+	Opt        *ObjectUploadPartOptions
+}
+
+// Results carries the response of one uploaded part
+type Results struct {
+	PartNumber int
+	Resp       *Response
+}
+
+func worker(s *ObjectService, jobs <-chan *Jobs, results chan<- *Results) {
+	for j := range jobs {
+		fd, err := os.Open(j.FilePath)
+		var res Results
+		if err != nil {
+			res.PartNumber = j.Chunk.Number
+			res.Resp = nil
+			results <- &res
+			continue // the file could not be opened; skip this part instead of dereferencing a nil fd
+		}
+
+		// UploadPart does not support chunked transfer, so Content-Length must be set
+		opt := &ObjectUploadPartOptions{
+			ContentLength: int(j.Chunk.Size),
+		}
+
+		rt := j.RetryTimes
+		for {
+			// seek to the part offset on every attempt so that a retry re-reads the part from its start
+			fd.Seek(j.Chunk.OffSet, io.SeekStart)
+			resp, err := s.UploadPart(context.Background(), j.Name, j.UploadId, j.Chunk.Number,
+				&io.LimitedReader{R: fd, N: j.Chunk.Size}, opt)
+			res.PartNumber = j.Chunk.Number
+			res.Resp = resp
+			if err != nil {
+				rt--
+				if rt == 0 {
+					fd.Close()
+					results <- &res
+					break
+				}
+				continue
+			}
+			fd.Close()
+			results <- &res
+			break
+		}
+	}
+}
+
+// SplitFileIntoChunks splits the file into partSize-byte chunks
+func SplitFileIntoChunks(filePath string, partSize int64) ([]Chunk, int, error) {
+	if filePath == "" || partSize <= 0 {
+		return nil, 0, errors.New("chunkSize invalid")
+	}
+
+	file, err := os.Open(filePath)
+	if err != nil {
+		return nil, 0, err
+	}
+	defer file.Close()
+
+	stat, err := file.Stat()
+	if err != nil {
+		return nil, 0, err
+	}
+	var partNum = stat.Size() / partSize
+	// at most 10000 parts are allowed
+	if partNum >= 10000 {
+		return nil, 0, errors.New("Too many parts, out of 10000")
+	}
+
+	var chunks []Chunk
+	var chunk = Chunk{}
+	for i := int64(0); i < partNum; i++ {
+		chunk.Number = int(i + 1)
+		chunk.OffSet = i * partSize
+		chunk.Size = partSize
+		chunks = append(chunks, chunk)
+	}
+
+	if stat.Size()%partSize > 0 {
+		chunk.Number = len(chunks) + 1
+		chunk.OffSet = int64(len(chunks)) * partSize
+		chunk.Size = stat.Size() % partSize
+		chunks = append(chunks, chunk)
+		partNum++
+	}
+
+	return chunks, int(partNum), nil
+
+}
+
+// MultiUpload is a high-level upload API that uploads parts concurrently.
+// Note: this API is currently provided for reference only.
+//
+// The part size partSize (>= 1, in MB) must be specified.
+// Also make sure that the number of parts does not exceed 10000.
+//
+
+func (s *ObjectService) MultiUpload(ctx context.Context, name string, filepath string, opt *MultiUploadOptions) (*CompleteMultipartUploadResult, *Response, error) {
+	// 1. Split the file into chunks
+	bufSize := opt.PartSize * 1024 * 1024
+	chunks, partNum, err := SplitFileIntoChunks(filepath, bufSize)
+	if err != nil {
+		return nil, nil, err
+	}
+
+	// 2. Initiate the multipart upload
+	optini := opt.OptIni
+	res, _, err := s.InitiateMultipartUpload(ctx, name, optini)
+	if err != nil {
+		return nil, nil, err
+	}
+	uploadID := res.UploadID
+	var poolSize int
+	if opt.ThreadPoolSize > 0 {
+		poolSize = opt.ThreadPoolSize
+	} else {
+		// Default is one
+		poolSize = 1
+	}
+
+	chjobs := make(chan *Jobs, 100)
+	chresults := make(chan *Results, 10000)
+	optcom := &CompleteMultipartUploadOptions{}
+
+	// 3. Start the workers
+	for w := 1; w <= poolSize; w++ {
+		go worker(s, chjobs, chresults)
+	}
+
+	// 4. Push the jobs
+	for _, chunk := range chunks {
+		job := &Jobs{
+			Name:       name,
+			RetryTimes: 3,
+			FilePath:   filepath,
+			UploadId:   uploadID,
+			Chunk:      chunk,
+		}
+		chjobs <- job
+	}
+	close(chjobs)
+
+	// 5. Receive the response ETags needed to complete the upload
+	for i := 1; i <= partNum; i++ {
+		res := <-chresults
+		// Note: if a part failed, its ETag cannot be retrieved.
+		if res.Resp == nil {
+			// A part has already failed, so there is no response header to read.
+			return nil, nil, fmt.Errorf("UploadID %s, part %d failed to get resp content.", uploadID, res.PartNumber)
+		}
+		etag := res.Resp.Header.Get("ETag")
+		optcom.Parts = append(optcom.Parts, Object{
+			PartNumber: res.PartNumber, ETag: etag},
+		)
+	}
+	sort.Sort(ObjectList(optcom.Parts))
+
+	v, resp, err := s.CompleteMultipartUpload(context.Background(), name, uploadID, optcom)
+
+	return v, resp, err
+}
diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/object_acl.go b/vendor/github.com/tencentyun/cos-go-sdk-v5/object_acl.go
new file mode 100644
index 000000000..2dc5935a4
--- /dev/null
+++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/object_acl.go
@@ -0,0 +1,63 @@
+package cos
+
+import (
+	"context"
+	"net/http"
+)
+
+// ObjectGetACLResult is the result of GetObjectACL
+type ObjectGetACLResult ACLXml
+
+// GetACL reads the ACL of an Object via the API; only the owner has permission to do so.
+//
+// https://www.qcloud.com/document/product/436/7744
+func (s *ObjectService) GetACL(ctx context.Context, name string) (*ObjectGetACLResult, *Response, error) {
+	var res ObjectGetACLResult
+	sendOpt := sendOptions{
+		baseURL: s.client.BaseURL.BucketURL,
+		uri:     "/" + encodeURIComponent(name) + "?acl",
+		method:  http.MethodGet,
+		result:  &res,
+	}
+	resp, err := s.client.send(ctx, &sendOpt)
+	return &res, resp, err
+}
+
+// ObjectPutACLOptions is the options of put object acl
+type ObjectPutACLOptions struct {
+	Header *ACLHeaderOptions `url:"-" xml:"-"`
+	Body   *ACLXml           `url:"-" header:"-"`
+}
+
+// PutACL writes the ACL of an Object via the API. The ACL can be passed in the headers
+// "x-cos-acl", "x-cos-grant-read", "x-cos-grant-write" and "x-cos-grant-full-control",
+// or in the body as XML, but only one of Header and Body may be used, otherwise a conflict is returned.
+//
+// Put Object ACL is an overwriting operation: the new ACL replaces the existing ACL. Only the owner has permission to do so.
+//
+// "x-cos-acl": either public-read or private; public-read means the Object is publicly readable
+// and privately writable, private means the Object is privately readable and writable.
+//
+// "x-cos-grant-read": the granted users gain read permission on the Object
+//
+// "x-cos-grant-write": the granted users gain write permission on the Object
+//
+// "x-cos-grant-full-control": the granted users gain read and write permission on the Object
+//
+// https://www.qcloud.com/document/product/436/7748
+func (s *ObjectService) PutACL(ctx context.Context, name string, opt *ObjectPutACLOptions) (*Response, error) {
+	header := opt.Header
+	body := opt.Body
+	if body != nil {
+		header = nil
+	}
+	sendOpt := sendOptions{
+		baseURL:   s.client.BaseURL.BucketURL,
+		uri:       "/" + encodeURIComponent(name) + "?acl",
+		method:    http.MethodPut,
+		optHeader: header,
+		body:      body,
+	}
+	resp, err := s.client.send(ctx, &sendOpt)
+	return resp, err
+}
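A sketch of setting a canned ACL through the header form of PutACL above, reusing the hypothetical client c; remember that the header and body forms are mutually exclusive:

// Hypothetical ACL sketch (not part of this diff); the object key is a placeholder.
opt := &cos.ObjectPutACLOptions{
	Header: &cos.ACLHeaderOptions{
		XCosACL: "public-read", // or "private"
	},
}
_, err := c.Object.PutACL(context.Background(), "exampleobject", opt)
if err != nil {
	// handle error
}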
diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/object_part.go b/vendor/github.com/tencentyun/cos-go-sdk-v5/object_part.go
new file mode 100644
index 000000000..6ef577356
--- /dev/null
+++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/object_part.go
@@ -0,0 +1,191 @@
+package cos
+
+import (
+	"context"
+	"encoding/xml"
+	"errors"
+	"fmt"
+	"io"
+	"net/http"
+)
+
+// InitiateMultipartUploadOptions is the option of InitiateMultipartUpload
+type InitiateMultipartUploadOptions struct {
+	*ACLHeaderOptions
+	*ObjectPutHeaderOptions
+}
+
+// InitiateMultipartUploadResult is the result of InitiateMultipartUpload
+type InitiateMultipartUploadResult struct {
+	XMLName  xml.Name `xml:"InitiateMultipartUploadResult"`
+	Bucket   string
+	Key      string
+	UploadID string `xml:"UploadId"`
+}
+
+// InitiateMultipartUpload initializes a multipart upload. On success it returns an Upload ID that is used in subsequent Upload Part requests.
+//
+// https://www.qcloud.com/document/product/436/7746
+func (s *ObjectService) InitiateMultipartUpload(ctx context.Context, name string, opt *InitiateMultipartUploadOptions) (*InitiateMultipartUploadResult, *Response, error) {
+	var res InitiateMultipartUploadResult
+	sendOpt := sendOptions{
+		baseURL:   s.client.BaseURL.BucketURL,
+		uri:       "/" + encodeURIComponent(name) + "?uploads",
+		method:    http.MethodPost,
+		optHeader: opt,
+		result:    &res,
+	}
+	resp, err := s.client.send(ctx, &sendOpt)
+	return &res, resp, err
+}
+
+// ObjectUploadPartOptions is the options of upload-part
+type ObjectUploadPartOptions struct {
+	Expect          string `header:"Expect,omitempty" url:"-"`
+	XCosContentSHA1 string `header:"x-cos-content-sha1" url:"-"`
+	ContentLength   int    `header:"Content-Length,omitempty" url:"-"`
+}
+
+// UploadPart uploads a part after the multipart upload has been initialized; between 1 and 10000 parts are supported, each 1 MB to 5 GB in size.
+// Every Upload Part request must carry partNumber and uploadID; partNumber is the number of the part, and parts may be uploaded out of order.
+//
+// When a part is uploaded with the same uploadID and partNumber as an earlier one, the later part overwrites the earlier one. When the uploadID does not exist, a 404 error, NoSuchUpload, is returned.
+//
+// When r is not a bytes.Buffer/bytes.Reader/strings.Reader, opt.ContentLength must be set
+//
+// https://www.qcloud.com/document/product/436/7750
+func (s *ObjectService) UploadPart(ctx context.Context, name, uploadID string, partNumber int, r io.Reader, opt *ObjectUploadPartOptions) (*Response, error) {
+	u := fmt.Sprintf("/%s?partNumber=%d&uploadId=%s", encodeURIComponent(name), partNumber, uploadID)
+	sendOpt := sendOptions{
+		baseURL:   s.client.BaseURL.BucketURL,
+		uri:       u,
+		method:    http.MethodPut,
+		optHeader: opt,
+		body:      r,
+	}
+	resp, err := s.client.send(ctx, &sendOpt)
+	return resp, err
+}
+
+// ObjectListPartsOptions is the option of ListParts
+type ObjectListPartsOptions struct {
+	EncodingType     string `url:"Encoding-type,omitempty"`
+	MaxParts         string `url:"max-parts,omitempty"`
+	PartNumberMarker string `url:"part-number-marker,omitempty"`
+}
+
+// ObjectListPartsResult is the result of ListParts
+type ObjectListPartsResult struct {
+	XMLName              xml.Name `xml:"ListPartsResult"`
+	Bucket               string
+	EncodingType         string `xml:"Encoding-type,omitempty"`
+	Key                  string
+	UploadID             string     `xml:"UploadId"`
+	Initiator            *Initiator `xml:"Initiator,omitempty"`
+	Owner                *Owner     `xml:"Owner,omitempty"`
+	StorageClass         string
+	PartNumberMarker     string
+	NextPartNumberMarker string `xml:"NextPartNumberMarker,omitempty"`
+	MaxParts             string
+	IsTruncated          bool
+	Parts                []Object `xml:"Part,omitempty"`
+}
+
+// ListParts lists the parts that have been uploaded in a specific multipart upload.
+//
+// https://www.qcloud.com/document/product/436/7747
+func (s *ObjectService) ListParts(ctx context.Context, name, uploadID string, opt *ObjectListPartsOptions) (*ObjectListPartsResult, *Response, error) {
+	u := fmt.Sprintf("/%s?uploadId=%s", encodeURIComponent(name), uploadID)
+	var res ObjectListPartsResult
+	sendOpt := sendOptions{
+		baseURL:  s.client.BaseURL.BucketURL,
+		uri:      u,
+		method:   http.MethodGet,
+		result:   &res,
+		optQuery: opt,
+	}
+	resp, err := s.client.send(ctx, &sendOpt)
+	return &res, resp, err
+}
+
+// CompleteMultipartUploadOptions is the option of CompleteMultipartUpload
+type CompleteMultipartUploadOptions struct {
+	XMLName xml.Name `xml:"CompleteMultipartUpload"`
+	Parts   []Object `xml:"Part"`
+}
+
+// CompleteMultipartUploadResult is the result of CompleteMultipartUpload
+type CompleteMultipartUploadResult struct {
+	XMLName  xml.Name `xml:"CompleteMultipartUploadResult"`
+	Location string
+	Bucket   string
+	Key      string
+	ETag     string
+}
+
+// ObjectList is used to sort the parts by part number, as required when completing the upload:
+// sort.Sort(cos.ObjectList(opt.Parts))
+type ObjectList []Object
+
+func (o ObjectList) Len() int {
+	return len(o)
+}
+
+func (o ObjectList) Swap(i, j int) {
+	o[i], o[j] = o[j], o[i]
+}
+
+func (o ObjectList) Less(i, j int) bool { // sort from the smallest to the largest part number
+	return o[i].PartNumber < o[j].PartNumber
+}
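Tying the pieces together, a sketch of a manual multipart upload using InitiateMultipartUpload, UploadPart, and CompleteMultipartUpload (the latter is defined just below). It reuses the hypothetical client c and assumes the strings and sort packages are imported; the object key and the single small in-memory part are illustrative:

// Hypothetical multipart-upload flow (not part of this diff).
ctx := context.Background()
ini, _, err := c.Object.InitiateMultipartUpload(ctx, "exampleobject", nil)
if err != nil {
	// handle error
}

// Part numbers run from 1 to 10000; a strings.Reader needs no explicit ContentLength.
resp, err := c.Object.UploadPart(ctx, "exampleobject", ini.UploadID, 1,
	strings.NewReader("hello cos"), nil)
if err != nil {
	// handle error
}

// Collect the ETags and complete; parts must be sorted by PartNumber.
opt := &cos.CompleteMultipartUploadOptions{}
opt.Parts = append(opt.Parts, cos.Object{PartNumber: 1, ETag: resp.Header.Get("ETag")})
sort.Sort(cos.ObjectList(opt.Parts))
_, _, err = c.Object.CompleteMultipartUpload(ctx, "exampleobject", ini.UploadID, opt)
if err != nil {
	// handle error
}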
+
+// CompleteMultipartUpload completes the whole multipart upload. Once all parts have been uploaded with Upload Parts, this API assembles them.
+// When using this API, the Body must list the PartNumber and ETag of every part, which are used to verify the parts.
+//
+// Because assembling the parts takes several minutes, COS returns a 200 status code as soon as assembly starts, and during assembly
+// it periodically sends whitespace to keep the connection alive; once assembly finishes, COS returns the result of the assembled parts in the Body.
+//
+// When an uploaded part is smaller than 1 MB, this request returns 400 EntityTooSmall;
+// when the part numbers are not consecutive, this request returns 400 InvalidPart;
+// when the parts in the request Body are not ordered by part number from small to large, this request returns 400 InvalidPartOrder;
+// when the UploadId does not exist, this request returns 404 NoSuchUpload.
+//
+// It is recommended to complete or abort a multipart upload promptly, because uploaded but unfinished parts occupy storage space and therefore incur storage costs.
+//
+// https://www.qcloud.com/document/product/436/7742
+func (s *ObjectService) CompleteMultipartUpload(ctx context.Context, name, uploadID string, opt *CompleteMultipartUploadOptions) (*CompleteMultipartUploadResult, *Response, error) {
+	u := fmt.Sprintf("/%s?uploadId=%s", encodeURIComponent(name), uploadID)
+	var res CompleteMultipartUploadResult
+	sendOpt := sendOptions{
+		baseURL: s.client.BaseURL.BucketURL,
+		uri:     u,
+		method:  http.MethodPost,
+		body:    opt,
+		result:  &res,
+	}
+	resp, err := s.client.send(ctx, &sendOpt)
+	// If an error occurs during the complete operation, the error response is embedded in the 200 OK response. This means that a 200 OK response can contain either a success or an error.
+	if err == nil && resp.StatusCode == 200 {
+		if res.ETag == "" {
+			return &res, resp, errors.New("response 200 OK, but body contains an error")
+		}
+	}
+	return &res, resp, err
+}
+
+// AbortMultipartUpload aborts a multipart upload and deletes the parts uploaded so far. If Upload Parts requests are still
+// uploading parts under this UploadID when Abort Multipart Upload is called, those Upload Parts requests will fail.
+// When the UploadID does not exist, 404 NoSuchUpload is returned.
+//
+// It is recommended to complete or abort a multipart upload promptly, because uploaded but unfinished parts occupy storage space and therefore incur storage costs.
+//
+// https://www.qcloud.com/document/product/436/7740
+func (s *ObjectService) AbortMultipartUpload(ctx context.Context, name, uploadID string) (*Response, error) {
+	u := fmt.Sprintf("/%s?uploadId=%s", encodeURIComponent(name), uploadID)
+	sendOpt := sendOptions{
+		baseURL: s.client.BaseURL.BucketURL,
+		uri:     u,
+		method:  http.MethodDelete,
+	}
+	resp, err := s.client.send(ctx, &sendOpt)
+	return resp, err
+}
diff --git a/vendor/github.com/tencentyun/cos-go-sdk-v5/service.go b/vendor/github.com/tencentyun/cos-go-sdk-v5/service.go
new file mode 100644
index 000000000..defe938ae
--- /dev/null
+++ b/vendor/github.com/tencentyun/cos-go-sdk-v5/service.go
@@ -0,0 +1,35 @@
+package cos
+
+import (
+	"context"
+	"encoding/xml"
+	"net/http"
+)
+
+// ServiceService handles the service-level APIs
+type ServiceService service
+
+// ServiceGetResult is the result of Get Service
+type ServiceGetResult struct {
+	XMLName xml.Name `xml:"ListAllMyBucketsResult"`
+	Owner   *Owner   `xml:"Owner"`
+	Buckets []Bucket `xml:"Buckets>Bucket,omitempty"`
+}
+
+// Get returns the list of all Buckets under the user.
+//
+// This API requires Authorization signature authentication
+// and can only list the Buckets of the account that owns the AccessID in the signature.
+//
+// https://www.qcloud.com/document/product/436/8291
+func (s *ServiceService) Get(ctx context.Context) (*ServiceGetResult, *Response, error) {
+	var res ServiceGetResult
+	sendOpt := sendOptions{
+		baseURL: s.client.BaseURL.ServiceURL,
+		uri:     "/",
+		method:  http.MethodGet,
+		result:  &res,
+	}
+	resp, err := s.client.send(ctx, &sendOpt)
+	return &res, resp, err
+}
diff --git a/vendor/github.com/ugorji/go/codec/xml.go b/vendor/github.com/ugorji/go/codec/xml.go
deleted file mode 100644
index 19fc36caf..000000000
--- a/vendor/github.com/ugorji/go/codec/xml.go
+++ /dev/null
@@ -1,508 +0,0 @@
-// Copyright (c) 2012-2018 Ugorji Nwoke. All rights reserved.
-// Use of this source code is governed by a MIT license found in the LICENSE file.
- -// +build ignore - -package codec - -import "reflect" - -/* - -A strict Non-validating namespace-aware XML 1.0 parser and (en|de)coder. - -We are attempting this due to perceived issues with encoding/xml: - - Complicated. It tried to do too much, and is not as simple to use as json. - - Due to over-engineering, reflection is over-used AND performance suffers: - java is 6X faster:http://fabsk.eu/blog/category/informatique/dev/golang/ - even PYTHON performs better: http://outgoing.typepad.com/outgoing/2014/07/exploring-golang.html - -codec framework will offer the following benefits - - VASTLY improved performance (when using reflection-mode or codecgen) - - simplicity and consistency: with the rest of the supported formats - - all other benefits of codec framework (streaming, codegeneration, etc) - -codec is not a drop-in replacement for encoding/xml. -It is a replacement, based on the simplicity and performance of codec. -Look at it like JAXB for Go. - -Challenges: - - Need to output XML preamble, with all namespaces at the right location in the output. - - Each "end" block is dynamic, so we need to maintain a context-aware stack - - How to decide when to use an attribute VS an element - - How to handle chardata, attr, comment EXPLICITLY. - - Should it output fragments? - e.g. encoding a bool should just output true OR false, which is not well-formed XML. - -Extend the struct tag. See representative example: - type X struct { - ID uint8 `codec:"http://ugorji.net/x-namespace xid id,omitempty,toarray,attr,cdata"` - // format: [namespace-uri ][namespace-prefix ]local-name, ... - } - -Based on this, we encode - - fields as elements, BUT - encode as attributes if struct tag contains ",attr" and is a scalar (bool, number or string) - - text as entity-escaped text, BUT encode as CDATA if struct tag contains ",cdata". - -To handle namespaces: - - XMLHandle is denoted as being namespace-aware. - Consequently, we WILL use the ns:name pair to encode and decode if defined, else use the plain name. - - *Encoder and *Decoder know whether the Handle "prefers" namespaces. - - add *Encoder.getEncName(*structFieldInfo). - No one calls *structFieldInfo.indexForEncName directly anymore - - OR better yet: indexForEncName is namespace-aware, and helper.go is all namespace-aware - indexForEncName takes a parameter of the form namespace:local-name OR local-name - - add *Decoder.getStructFieldInfo(encName string) // encName here is either like abc, or h1:nsabc - by being a method on *Decoder, or maybe a method on the Handle itself. - No one accesses .encName anymore - - let encode.go and decode.go use these (for consistency) - - only problem exists for gen.go, where we create a big switch on encName. - Now, we also have to add a switch on strings.endsWith(kName, encNsName) - - gen.go will need to have many more methods, and then double-on the 2 switch loops like: - switch k { - case "abc" : x.abc() - case "def" : x.def() - default { - switch { - case !nsAware: panic(...) - case strings.endsWith(":abc"): x.abc() - case strings.endsWith(":def"): x.def() - default: panic(...) - } - } - } - -The structure below accommodates this: - - type typeInfo struct { - sfi []*structFieldInfo // sorted by encName - sfins // sorted by namespace - sfia // sorted, to have those with attributes at the top. Needed to write XML appropriately. 
- sfip // unsorted - } - type structFieldInfo struct { - encName - nsEncName - ns string - attr bool - cdata bool - } - -indexForEncName is now an internal helper function that takes a sorted array -(one of ti.sfins or ti.sfi). It is only used by *Encoder.getStructFieldInfo(...) - -There will be a separate parser from the builder. -The parser will have a method: next() xmlToken method. It has lookahead support, -so you can pop multiple tokens, make a determination, and push them back in the order popped. -This will be needed to determine whether we are "nakedly" decoding a container or not. -The stack will be implemented using a slice and push/pop happens at the [0] element. - -xmlToken has fields: - - type uint8: 0 | ElementStart | ElementEnd | AttrKey | AttrVal | Text - - value string - - ns string - -SEE: http://www.xml.com/pub/a/98/10/guide0.html?page=3#ENTDECL - -The following are skipped when parsing: - - External Entities (from external file) - - Notation Declaration e.g. - - Entity Declarations & References - - XML Declaration (assume UTF-8) - - XML Directive i.e. - - Other Declarations: Notation, etc. - - Comment - - Processing Instruction - - schema / DTD for validation: - We are not a VALIDATING parser. Validation is done elsewhere. - However, some parts of the DTD internal subset are used (SEE BELOW). - For Attribute List Declarations e.g. - - We considered using the ATTLIST to get "default" value, but not to validate the contents. (VETOED) - -The following XML features are supported - - Namespace - - Element - - Attribute - - cdata - - Unicode escape - -The following DTD (when as an internal sub-set) features are supported: - - Internal Entities e.g. - AND entities for the set: [<>&"'] - - Parameter entities e.g. - - -At decode time, a structure containing the following is kept - - namespace mapping - - default attribute values - - all internal entities (<>&"' and others written in the document) - -When decode starts, it parses XML namespace declarations and creates a map in the -xmlDecDriver. While parsing, that map continuously gets updated. -The only problem happens when a namespace declaration happens on the node that it defines. -e.g. -To handle this, each Element must be fully parsed at a time, -even if it amounts to multiple tokens which are returned one at a time on request. - -xmlns is a special attribute name. - - It is used to define namespaces, including the default - - It is never returned as an AttrKey or AttrVal. - *We may decide later to allow user to use it e.g. you want to parse the xmlns mappings into a field.* - -Number, bool, null, mapKey, etc can all be decoded from any xmlToken. -This accommodates map[int]string for example. - -It should be possible to create a schema from the types, -or vice versa (generate types from schema with appropriate tags). -This is however out-of-scope from this parsing project. - -We should write all namespace information at the first point that it is referenced in the tree, -and use the mapping for all child nodes and attributes. This means that state is maintained -at a point in the tree. This also means that calls to Decode or MustDecode will reset some state. - -When decoding, it is important to keep track of entity references and default attribute values. -It seems these can only be stored in the DTD components. We should honor them when decoding. 
- -Configuration for XMLHandle will look like this: - - XMLHandle - DefaultNS string - // Encoding: - NS map[string]string // ns URI to key, used for encoding - // Decoding: in case ENTITY declared in external schema or dtd, store info needed here - Entities map[string]string // map of entity rep to character - - -During encode, if a namespace mapping is not defined for a namespace found on a struct, -then we create a mapping for it using nsN (where N is 1..1000000, and doesn't conflict -with any other namespace mapping). - -Note that different fields in a struct can have different namespaces. -However, all fields will default to the namespace on the _struct field (if defined). - -An XML document is a name, a map of attributes and a list of children. -Consequently, we cannot "DecodeNaked" into a map[string]interface{} (for example). -We have to "DecodeNaked" into something that resembles XML data. - -To support DecodeNaked (decode into nil interface{}), we have to define some "supporting" types: - type Name struct { // Preferred. Less allocations due to conversions. - Local string - Space string - } - type Element struct { - Name Name - Attrs map[Name]string - Children []interface{} // each child is either *Element or string - } -Only two "supporting" types are exposed for XML: Name and Element. - -// ------------------ - -We considered 'type Name string' where Name is like "Space Local" (space-separated). -We decided against it, because each creation of a name would lead to -double allocation (first convert []byte to string, then concatenate them into a string). -The benefit is that it is faster to read Attrs from a map. But given that Element is a value -object, we want to eschew methods and have public exposed variables. - -We also considered the following, where xml types were not value objects, and we used -intelligent accessor methods to extract information and for performance. -*** WE DECIDED AGAINST THIS. *** - type Attr struct { - Name Name - Value string - } - // Element is a ValueObject: There are no accessor methods. - // Make element self-contained. - type Element struct { - Name Name - attrsMap map[string]string // where key is "Space Local" - attrs []Attr - childrenT []string - childrenE []Element - childrenI []int // each child is a index into T or E. - } - func (x *Element) child(i) interface{} // returns string or *Element - -// ------------------ - -Per XML spec and our default handling, white space is always treated as -insignificant between elements, except in a text node. The xml:space='preserve' -attribute is ignored. - -**Note: there is no xml: namespace. The xml: attributes were defined before namespaces.** -**So treat them as just "directives" that should be interpreted to mean something**. - -On encoding, we support indenting aka prettifying markup in the same way we support it for json. - -A document or element can only be encoded/decoded from/to a struct. In this mode: - - struct name maps to element name (or tag-info from _struct field) - - fields are mapped to child elements or attributes - -A map is either encoded as attributes on current element, or as a set of child elements. -Maps are encoded as attributes iff their keys and values are primitives (number, bool, string). - -A list is encoded as a set of child elements. - -Primitives (number, bool, string) are encoded as an element, attribute or text -depending on the context. - -Extensions must encode themselves as a text string. 
- -Encoding is tough, specifically when encoding mappings, because we need to encode -as either attribute or element. To do this, we need to default to encoding as attributes, -and then let Encoder inform the Handle when to start encoding as nodes. -i.e. Encoder does something like: - - h.EncodeMapStart() - h.Encode(), h.Encode(), ... - h.EncodeMapNotAttrSignal() // this is not a bool, because it's a signal - h.Encode(), h.Encode(), ... - h.EncodeEnd() - -Only XMLHandle understands this, and will set itself to start encoding as elements. - -This support extends to maps. For example, if a struct field is a map, and it has -the struct tag signifying it should be attr, then all its fields are encoded as attributes. -e.g. - - type X struct { - M map[string]int `codec:"m,attr"` // encode keys as attributes named - } - -Question: - - if encoding a map, what if map keys have spaces in them??? - Then they cannot be attributes or child elements. Error. - -Options to consider adding later: - - For attribute values, normalize by trimming beginning and ending white space, - and converting every white space sequence to a single space. - - ATTLIST restrictions are enforced. - e.g. default value of xml:space, skipping xml:XYZ style attributes, etc. - - Consider supporting NON-STRICT mode (e.g. to handle HTML parsing). - Some elements e.g. br, hr, etc need not close and should be auto-closed - ... (see http://www.w3.org/TR/html4/loose.dtd) - An expansive set of entities are pre-defined. - - Have easy way to create a HTML parser: - add a HTML() method to XMLHandle, that will set Strict=false, specify AutoClose, - and add HTML Entities to the list. - - Support validating element/attribute XMLName before writing it. - Keep this behind a flag, which is set to false by default (for performance). - type XMLHandle struct { - CheckName bool - } - -Misc: - -ROADMAP (1 weeks): - - build encoder (1 day) - - build decoder (based off xmlParser) (1 day) - - implement xmlParser (2 days). - Look at encoding/xml for inspiration. - - integrate and TEST (1 days) - - write article and post it (1 day) - -// ---------- MORE NOTES FROM 2017-11-30 ------------ - -when parsing -- parse the attributes first -- then parse the nodes - -basically: -- if encoding a field: we use the field name for the wrapper -- if encoding a non-field, then just use the element type name - - map[string]string ==> abcval... or - val... OR - val1val2... <- PREFERED - []string ==> v1v2... - string v1 ==> v1 - bool true ==> true - float 1.0 ==> 1.0 - ... - - F1 map[string]string ==> abcval... OR - val... OR - val... <- PREFERED - F2 []string ==> v1v2... - F3 bool ==> true - ... - -- a scalar is encoded as: - (value) of type T ==> - (value) of field F ==> -- A kv-pair is encoded as: - (key,value) ==> OR - (key,value) of field F ==> OR -- A map or struct is just a list of kv-pairs -- A list is encoded as sequences of same node e.g. - - - value21 - value22 -- we may have to singularize the field name, when entering into xml, - and pluralize them when encoding. -- bi-directional encode->decode->encode is not a MUST. 
- even encoding/xml cannot decode correctly what was encoded: - - see https://play.golang.org/p/224V_nyhMS - func main() { - fmt.Println("Hello, playground") - v := []interface{}{"hello", 1, true, nil, time.Now()} - s, err := xml.Marshal(v) - fmt.Printf("err: %v, \ns: %s\n", err, s) - var v2 []interface{} - err = xml.Unmarshal(s, &v2) - fmt.Printf("err: %v, \nv2: %v\n", err, v2) - type T struct { - V []interface{} - } - v3 := T{V: v} - s, err = xml.Marshal(v3) - fmt.Printf("err: %v, \ns: %s\n", err, s) - var v4 T - err = xml.Unmarshal(s, &v4) - fmt.Printf("err: %v, \nv4: %v\n", err, v4) - } - Output: - err: , - s: hello1true - err: , - v2: [] - err: , - s: hello1true2009-11-10T23:00:00Z - err: , - v4: {[ ]} -- -*/ - -// ----------- PARSER ------------------- - -type xmlTokenType uint8 - -const ( - _ xmlTokenType = iota << 1 - xmlTokenElemStart - xmlTokenElemEnd - xmlTokenAttrKey - xmlTokenAttrVal - xmlTokenText -) - -type xmlToken struct { - Type xmlTokenType - Value string - Namespace string // blank for AttrVal and Text -} - -type xmlParser struct { - r decReader - toks []xmlToken // list of tokens. - ptr int // ptr into the toks slice - done bool // nothing else to parse. r now returns EOF. -} - -func (x *xmlParser) next() (t *xmlToken) { - // once x.done, or x.ptr == len(x.toks) == 0, then return nil (to signify finish) - if !x.done && len(x.toks) == 0 { - x.nextTag() - } - // parses one element at a time (into possible many tokens) - if x.ptr < len(x.toks) { - t = &(x.toks[x.ptr]) - x.ptr++ - if x.ptr == len(x.toks) { - x.ptr = 0 - x.toks = x.toks[:0] - } - } - return -} - -// nextTag will parses the next element and fill up toks. -// It set done flag if/once EOF is reached. -func (x *xmlParser) nextTag() { - // TODO: implement. -} - -// ----------- ENCODER ------------------- - -type xmlEncDriver struct { - e *Encoder - w encWriter - h *XMLHandle - b [64]byte // scratch - bs []byte // scratch - // s jsonStack - noBuiltInTypes -} - -// ----------- DECODER ------------------- - -type xmlDecDriver struct { - d *Decoder - h *XMLHandle - r decReader // *bytesDecReader decReader - ct valueType // container type. one of unset, array or map. - bstr [8]byte // scratch used for string \UXXX parsing - b [64]byte // scratch - - // wsSkipped bool // whitespace skipped - - // s jsonStack - - noBuiltInTypes -} - -// DecodeNaked will decode into an XMLNode - -// XMLName is a value object representing a namespace-aware NAME -type XMLName struct { - Local string - Space string -} - -// XMLNode represents a "union" of the different types of XML Nodes. -// Only one of fields (Text or *Element) is set. -type XMLNode struct { - Element *Element - Text string -} - -// XMLElement is a value object representing an fully-parsed XML element. -type XMLElement struct { - Name Name - Attrs map[XMLName]string - // Children is a list of child nodes, each being a *XMLElement or string - Children []XMLNode -} - -// ----------- HANDLE ------------------- - -type XMLHandle struct { - BasicHandle - textEncodingType - - DefaultNS string - NS map[string]string // ns URI to key, for encoding - Entities map[string]string // entity representation to string, for encoding. 
-} - -func (h *XMLHandle) newEncDriver(e *Encoder) encDriver { - return &xmlEncDriver{e: e, w: e.w, h: h} -} - -func (h *XMLHandle) newDecDriver(d *Decoder) decDriver { - // d := xmlDecDriver{r: r.(*bytesDecReader), h: h} - hd := xmlDecDriver{d: d, r: d.r, h: h} - hd.n.bytes = d.b[:] - return &hd -} - -func (h *XMLHandle) SetInterfaceExt(rt reflect.Type, tag uint64, ext InterfaceExt) (err error) { - return h.SetExt(rt, tag, &extWrapper{bytesExtFailer{}, ext}) -} - -var _ decDriver = (*xmlDecDriver)(nil) -var _ encDriver = (*xmlEncDriver)(nil) diff --git a/vendor/github.com/ulikunitz/xz/example.go b/vendor/github.com/ulikunitz/xz/example.go deleted file mode 100644 index 855e60aee..000000000 --- a/vendor/github.com/ulikunitz/xz/example.go +++ /dev/null @@ -1,40 +0,0 @@ -// Copyright 2014-2017 Ulrich Kunitz. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -package main - -import ( - "bytes" - "io" - "log" - "os" - - "github.com/ulikunitz/xz" -) - -func main() { - const text = "The quick brown fox jumps over the lazy dog.\n" - var buf bytes.Buffer - // compress text - w, err := xz.NewWriter(&buf) - if err != nil { - log.Fatalf("xz.NewWriter error %s", err) - } - if _, err := io.WriteString(w, text); err != nil { - log.Fatalf("WriteString error %s", err) - } - if err := w.Close(); err != nil { - log.Fatalf("w.Close error %s", err) - } - // decompress buffer and write output to stdout - r, err := xz.NewReader(&buf) - if err != nil { - log.Fatalf("NewReader error %s", err) - } - if _, err = io.Copy(os.Stdout, r); err != nil { - log.Fatalf("io.Copy error %s", err) - } -} diff --git a/vendor/github.com/zclconf/go-cty/cty/capsule.go b/vendor/github.com/zclconf/go-cty/cty/capsule.go index d273d1483..2fdc15eae 100644 --- a/vendor/github.com/zclconf/go-cty/cty/capsule.go +++ b/vendor/github.com/zclconf/go-cty/cty/capsule.go @@ -9,6 +9,7 @@ type capsuleType struct { typeImplSigil Name string GoType reflect.Type + Ops *CapsuleOps } func (t *capsuleType) Equals(other Type) bool { @@ -24,10 +25,22 @@ func (t *capsuleType) FriendlyName(mode friendlyTypeNameMode) string { } func (t *capsuleType) GoString() string { - // To get a useful representation of our native type requires some - // shenanigans. - victimVal := reflect.Zero(t.GoType) - return fmt.Sprintf("cty.Capsule(%q, reflect.TypeOf(%#v))", t.Name, victimVal.Interface()) + impl := t.Ops.TypeGoString + if impl == nil { + // To get a useful representation of our native type requires some + // shenanigans. + victimVal := reflect.Zero(t.GoType) + if t.Ops == noCapsuleOps { + return fmt.Sprintf("cty.Capsule(%q, reflect.TypeOf(%#v))", t.Name, victimVal.Interface()) + } else { + // Including the operations in the output will make this _very_ long, + // so in practice any capsule type with ops ought to provide a + // TypeGoString function to override this with something more + // reasonable. + return fmt.Sprintf("cty.CapsuleWithOps(%q, reflect.TypeOf(%#v), %#v)", t.Name, victimVal.Interface(), t.Ops) + } + } + return impl(t.GoType) } // Capsule creates a new Capsule type. @@ -47,8 +60,11 @@ func (t *capsuleType) GoString() string { // use the same native type. // // Each capsule-typed value contains a pointer to a value of the given native -// type. A capsule-typed value supports no operations except equality, and -// equality is implemented by pointer identity of the encapsulated pointer. +// type. 
A capsule-typed value by default supports no operations except +// equality, and equality is implemented by pointer identity of the +// encapsulated pointer. A capsule type can optionally have its own +// implementations of certain operations if it is created with CapsuleWithOps +// instead of Capsule. // // The given name is used as the new type's "friendly name". This can be any // string in principle, but will usually be a short, all-lowercase name aimed @@ -65,6 +81,29 @@ func Capsule(name string, nativeType reflect.Type) Type { &capsuleType{ Name: name, GoType: nativeType, + Ops: noCapsuleOps, + }, + } +} + +// CapsuleWithOps is like Capsule except the caller may provide an object +// representing some overloaded operation implementations to associate with +// the given capsule type. +// +// All of the other caveats and restrictions for capsule types still apply, but +// overloaded operations can potentially help a capsule type participate better +// in cty operations. +func CapsuleWithOps(name string, nativeType reflect.Type, ops *CapsuleOps) Type { + // Copy the operations to make sure the caller can't modify them after + // we're constructed. + ourOps := *ops + ourOps.assertValid() + + return Type{ + &capsuleType{ + Name: name, + GoType: nativeType, + Ops: &ourOps, }, } } diff --git a/vendor/github.com/zclconf/go-cty/cty/capsule_ops.go b/vendor/github.com/zclconf/go-cty/cty/capsule_ops.go new file mode 100644 index 000000000..3ff6855ec --- /dev/null +++ b/vendor/github.com/zclconf/go-cty/cty/capsule_ops.go @@ -0,0 +1,132 @@ +package cty + +import ( + "reflect" +) + +// CapsuleOps represents a set of overloaded operations for a capsule type. +// +// Each field is a reference to a function that can either be nil or can be +// set to an implementation of the corresponding operation. If an operation +// function is nil then it isn't supported for the given capsule type. +type CapsuleOps struct { + // GoString provides the GoString implementation for values of the + // corresponding type. Conventionally this should return a string + // representation of an expression that would produce an equivalent + // value. + GoString func(val interface{}) string + + // TypeGoString provides the GoString implementation for the corresponding + // capsule type itself. + TypeGoString func(goTy reflect.Type) string + + // Equals provides the implementation of the Equals operation. This is + // called only with known, non-null values of the corresponding type, + // but if the corresponding type is a compound type then it must be + // ready to detect and handle nested unknown or null values, usually + // by recursively calling Value.Equals on those nested values. + // + // The result value must always be of type cty.Bool, or the Equals + // operation will panic. + // + // If RawEquals is set without also setting Equals, the RawEquals + // implementation will be used as a fallback implementation. That fallback + // is appropriate only for leaf types that do not contain any nested + // cty.Value that would need to distinguish Equals vs. RawEquals for their + // own equality. + // + // If RawEquals is nil then Equals must also be nil, selecting the default + // pointer-identity comparison instead. + Equals func(a, b interface{}) Value + + // RawEquals provides the implementation of the RawEquals operation. 
+	// This is called only with known, non-null values of the corresponding
+	// type, but if the corresponding type is a compound type then it must be
+	// ready to detect and handle nested unknown or null values, usually
+	// by recursively calling Value.RawEquals on those nested values.
+	//
+	// If RawEquals is nil, values of the corresponding type are compared by
+	// pointer identity of the encapsulated value.
+	RawEquals func(a, b interface{}) bool
+
+	// ConversionFrom can provide conversions from the corresponding type to
+	// some other type when values of the corresponding type are used with
+	// the "convert" package. (The main cty package does not use this operation.)
+	//
+	// This function itself returns a function, allowing it to switch its
+	// behavior depending on the given source type. Return nil to indicate
+	// that no such conversion is available.
+	ConversionFrom func(src Type) func(interface{}, Path) (Value, error)
+
+	// ConversionTo can provide conversions to the corresponding type from
+	// some other type when values of the corresponding type are used with
+	// the "convert" package. (The main cty package does not use this operation.)
+	//
+	// This function itself returns a function, allowing it to switch its
+	// behavior depending on the given destination type. Return nil to indicate
+	// that no such conversion is available.
+	ConversionTo func(dst Type) func(Value, Path) (interface{}, error)
+
+	// ExtensionData is an extension point for applications that wish to
+	// create their own extension features using capsule types.
+	//
+	// The key argument is any value that can be compared with Go's ==
+	// operator, but should be of a named type in a package belonging to the
+	// application defining the key. An ExtensionData implementation must
+	// check to see if the given key is familiar to it, and if so return a
+	// suitable value for the key.
+	//
+	// If the given key is unrecognized, the ExtensionData function must
+	// return a nil interface. (Importantly, not an interface containing a nil
+	// pointer of some other type.)
+	// The common implementation of ExtensionData is a single switch statement
+	// over "key" which has a default case returning nil.
+	//
+	// The meaning of any given key is entirely up to the application that
+	// defines it. Applications consuming ExtensionData from capsule types
+	// should do so defensively: if the result of ExtensionData is not valid,
+	// prefer to ignore it or gracefully produce an error rather than causing
+	// a panic.
+	ExtensionData func(key interface{}) interface{}
+}
+
+// noCapsuleOps is a pointer to a CapsuleOps with no functions set, which
+// is used as the default operations value when a type is created using
+// the Capsule function.
+var noCapsuleOps = &CapsuleOps{}
+
+func (ops *CapsuleOps) assertValid() {
+	if ops.RawEquals == nil && ops.Equals != nil {
+		panic("Equals cannot be set without RawEquals")
+	}
+}
+
+// CapsuleOps returns a pointer to the CapsuleOps value for a capsule type,
+// or panics if the receiver is not a capsule type.
+//
+// The caller must not modify the CapsuleOps.
+func (ty Type) CapsuleOps() *CapsuleOps {
+	if !ty.IsCapsuleType() {
+		panic("not a capsule-typed value")
+	}
+
+	return ty.typeImpl.(*capsuleType).Ops
+}
+
+// CapsuleExtensionData is a convenience interface to the ExtensionData
+// function that can be optionally implemented for a capsule type. It will
+// check to see if the underlying type implements ExtensionData and call it
+// if so.
If not, it will return nil to indicate that the given key is not +// supported. +// +// See the documentation for CapsuleOps.ExtensionData for more information +// on the purpose of and usage of this mechanism. +// +// If CapsuleExtensionData is called on a non-capsule type then it will panic. +func (ty Type) CapsuleExtensionData(key interface{}) interface{} { + ops := ty.CapsuleOps() + if ops.ExtensionData == nil { + return nil + } + return ops.ExtensionData(key) +} diff --git a/vendor/github.com/zclconf/go-cty/cty/convert/conversion.go b/vendor/github.com/zclconf/go-cty/cty/convert/conversion.go index f9aacb4ee..8d177f151 100644 --- a/vendor/github.com/zclconf/go-cty/cty/convert/conversion.go +++ b/vendor/github.com/zclconf/go-cty/cty/convert/conversion.go @@ -16,7 +16,19 @@ func getConversion(in cty.Type, out cty.Type, unsafe bool) conversion { // Wrap the conversion in some standard checks that we don't want to // have to repeat in every conversion function. - return func(in cty.Value, path cty.Path) (cty.Value, error) { + var ret conversion + ret = func(in cty.Value, path cty.Path) (cty.Value, error) { + if in.IsMarked() { + // We must unmark during the conversion and then re-apply the + // same marks to the result. + in, inMarks := in.Unmark() + v, err := ret(in, path) + if v != cty.NilVal { + v = v.WithMarks(inMarks) + } + return v, err + } + if out == cty.DynamicPseudoType { // Conversion to DynamicPseudoType always just passes through verbatim. return in, nil @@ -33,6 +45,8 @@ func getConversion(in cty.Type, out cty.Type, unsafe bool) conversion { return conv(in, path) } + + return ret } func getConversionKnown(in cty.Type, out cty.Type, unsafe bool) conversion { @@ -124,6 +138,39 @@ func getConversionKnown(in cty.Type, out cty.Type, unsafe bool) conversion { outEty := out.ElementType() return conversionObjectToMap(in, outEty, unsafe) + case out.IsObjectType() && in.IsMapType(): + if !unsafe { + // Converting a map to an object is an "unsafe" conversion, + // because we don't know if all the map keys will correspond to + // object attributes. + return nil + } + return conversionMapToObject(in, out, unsafe) + + case in.IsCapsuleType() || out.IsCapsuleType(): + if !unsafe { + // Capsule types can only participate in "unsafe" conversions, + // because we don't know enough about their conversion behaviors + // to be sure that they will always be safe. + return nil + } + if in.Equals(out) { + // conversion to self is never allowed + return nil + } + if out.IsCapsuleType() { + if fn := out.CapsuleOps().ConversionTo; fn != nil { + return conversionToCapsule(in, out, fn) + } + } + if in.IsCapsuleType() { + if fn := in.CapsuleOps().ConversionFrom; fn != nil { + return conversionFromCapsule(in, out, fn) + } + } + // No conversion operation is available, then. 
+ return nil + default: return nil diff --git a/vendor/github.com/zclconf/go-cty/cty/convert/conversion_capsule.go b/vendor/github.com/zclconf/go-cty/cty/convert/conversion_capsule.go new file mode 100644 index 000000000..ded4079d4 --- /dev/null +++ b/vendor/github.com/zclconf/go-cty/cty/convert/conversion_capsule.go @@ -0,0 +1,31 @@ +package convert + +import ( + "github.com/zclconf/go-cty/cty" +) + +func conversionToCapsule(inTy, outTy cty.Type, fn func(inTy cty.Type) func(cty.Value, cty.Path) (interface{}, error)) conversion { + rawConv := fn(inTy) + if rawConv == nil { + return nil + } + + return func(in cty.Value, path cty.Path) (cty.Value, error) { + rawV, err := rawConv(in, path) + if err != nil { + return cty.NilVal, err + } + return cty.CapsuleVal(outTy, rawV), nil + } +} + +func conversionFromCapsule(inTy, outTy cty.Type, fn func(outTy cty.Type) func(interface{}, cty.Path) (cty.Value, error)) conversion { + rawConv := fn(outTy) + if rawConv == nil { + return nil + } + + return func(in cty.Value, path cty.Path) (cty.Value, error) { + return rawConv(in.EncapsulatedValue(), path) + } +} diff --git a/vendor/github.com/zclconf/go-cty/cty/convert/conversion_collection.go b/vendor/github.com/zclconf/go-cty/cty/convert/conversion_collection.go index 3039ba22e..ea23bf618 100644 --- a/vendor/github.com/zclconf/go-cty/cty/convert/conversion_collection.go +++ b/vendor/github.com/zclconf/go-cty/cty/convert/conversion_collection.go @@ -15,18 +15,18 @@ func conversionCollectionToList(ety cty.Type, conv conversion) conversion { return func(val cty.Value, path cty.Path) (cty.Value, error) { elems := make([]cty.Value, 0, val.LengthInt()) i := int64(0) - path = append(path, nil) + elemPath := append(path.Copy(), nil) it := val.ElementIterator() for it.Next() { _, val := it.Element() var err error - path[len(path)-1] = cty.IndexStep{ + elemPath[len(elemPath)-1] = cty.IndexStep{ Key: cty.NumberIntVal(i), } if conv != nil { - val, err = conv(val, path) + val, err = conv(val, elemPath) if err != nil { return cty.NilVal, err } @@ -37,6 +37,9 @@ func conversionCollectionToList(ety cty.Type, conv conversion) conversion { } if len(elems) == 0 { + if ety == cty.DynamicPseudoType { + ety = val.Type().ElementType() + } return cty.ListValEmpty(ety), nil } @@ -55,18 +58,18 @@ func conversionCollectionToSet(ety cty.Type, conv conversion) conversion { return func(val cty.Value, path cty.Path) (cty.Value, error) { elems := make([]cty.Value, 0, val.LengthInt()) i := int64(0) - path = append(path, nil) + elemPath := append(path.Copy(), nil) it := val.ElementIterator() for it.Next() { _, val := it.Element() var err error - path[len(path)-1] = cty.IndexStep{ + elemPath[len(elemPath)-1] = cty.IndexStep{ Key: cty.NumberIntVal(i), } if conv != nil { - val, err = conv(val, path) + val, err = conv(val, elemPath) if err != nil { return cty.NilVal, err } @@ -77,6 +80,11 @@ func conversionCollectionToSet(ety cty.Type, conv conversion) conversion { } if len(elems) == 0 { + // Prefer a concrete type over a dynamic type when returning an + // empty set + if ety == cty.DynamicPseudoType { + ety = val.Type().ElementType() + } return cty.SetValEmpty(ety), nil } @@ -93,13 +101,13 @@ func conversionCollectionToSet(ety cty.Type, conv conversion) conversion { func conversionCollectionToMap(ety cty.Type, conv conversion) conversion { return func(val cty.Value, path cty.Path) (cty.Value, error) { elems := make(map[string]cty.Value, 0) - path = append(path, nil) + elemPath := append(path.Copy(), nil) it := val.ElementIterator() for 
it.Next() { key, val := it.Element() var err error - path[len(path)-1] = cty.IndexStep{ + elemPath[len(elemPath)-1] = cty.IndexStep{ Key: key, } @@ -107,11 +115,11 @@ func conversionCollectionToMap(ety cty.Type, conv conversion) conversion { if err != nil { // Should never happen, because keys can only be numbers or // strings and both can convert to string. - return cty.DynamicVal, path.NewErrorf("cannot convert key type %s to string for map", key.Type().FriendlyName()) + return cty.DynamicVal, elemPath.NewErrorf("cannot convert key type %s to string for map", key.Type().FriendlyName()) } if conv != nil { - val, err = conv(val, path) + val, err = conv(val, elemPath) if err != nil { return cty.NilVal, err } @@ -121,9 +129,25 @@ func conversionCollectionToMap(ety cty.Type, conv conversion) conversion { } if len(elems) == 0 { + // Prefer a concrete type over a dynamic type when returning an + // empty map + if ety == cty.DynamicPseudoType { + ety = val.Type().ElementType() + } return cty.MapValEmpty(ety), nil } + if ety.IsCollectionType() || ety.IsObjectType() { + var err error + if elems, err = conversionUnifyCollectionElements(elems, path, false); err != nil { + return cty.NilVal, err + } + } + + if err := conversionCheckMapElementTypes(elems, path); err != nil { + return cty.NilVal, err + } + return cty.MapVal(elems), nil } } @@ -171,20 +195,20 @@ func conversionTupleToSet(tupleType cty.Type, listEty cty.Type, unsafe bool) con // element conversions in elemConvs return func(val cty.Value, path cty.Path) (cty.Value, error) { elems := make([]cty.Value, 0, len(elemConvs)) - path = append(path, nil) + elemPath := append(path.Copy(), nil) i := int64(0) it := val.ElementIterator() for it.Next() { _, val := it.Element() var err error - path[len(path)-1] = cty.IndexStep{ + elemPath[len(elemPath)-1] = cty.IndexStep{ Key: cty.NumberIntVal(i), } conv := elemConvs[i] if conv != nil { - val, err = conv(val, path) + val, err = conv(val, elemPath) if err != nil { return cty.NilVal, err } @@ -241,20 +265,20 @@ func conversionTupleToList(tupleType cty.Type, listEty cty.Type, unsafe bool) co // element conversions in elemConvs return func(val cty.Value, path cty.Path) (cty.Value, error) { elems := make([]cty.Value, 0, len(elemConvs)) - path = append(path, nil) + elemPath := append(path.Copy(), nil) i := int64(0) it := val.ElementIterator() for it.Next() { _, val := it.Element() var err error - path[len(path)-1] = cty.IndexStep{ + elemPath[len(elemPath)-1] = cty.IndexStep{ Key: cty.NumberIntVal(i), } conv := elemConvs[i] if conv != nil { - val, err = conv(val, path) + val, err = conv(val, elemPath) if err != nil { return cty.NilVal, err } @@ -315,19 +339,19 @@ func conversionObjectToMap(objectType cty.Type, mapEty cty.Type, unsafe bool) co // element conversions in elemConvs return func(val cty.Value, path cty.Path) (cty.Value, error) { elems := make(map[string]cty.Value, len(elemConvs)) - path = append(path, nil) + elemPath := append(path.Copy(), nil) it := val.ElementIterator() for it.Next() { name, val := it.Element() var err error - path[len(path)-1] = cty.IndexStep{ + elemPath[len(elemPath)-1] = cty.IndexStep{ Key: name, } conv := elemConvs[name.AsString()] if conv != nil { - val, err = conv(val, path) + val, err = conv(val, elemPath) if err != nil { return cty.NilVal, err } @@ -335,6 +359,130 @@ func conversionObjectToMap(objectType cty.Type, mapEty cty.Type, unsafe bool) co elems[name.AsString()] = val } + if mapEty.IsCollectionType() || mapEty.IsObjectType() { + var err error + if elems, err = 
conversionUnifyCollectionElements(elems, path, unsafe); err != nil {
+				return cty.NilVal, err
+			}
+		}
+
+		if err := conversionCheckMapElementTypes(elems, path); err != nil {
+			return cty.NilVal, err
+		}
+
 		return cty.MapVal(elems), nil
 	}
 }
+
+// conversionMapToObject returns a conversion that will take a value of the
+// given map type and return an object of the given type. The object attribute
+// types must all be compatible with the map element type.
+//
+// Will panic if the given mapType and objType are not maps and objects
+// respectively.
+func conversionMapToObject(mapType cty.Type, objType cty.Type, unsafe bool) conversion {
+	objectAtys := objType.AttributeTypes()
+	mapEty := mapType.ElementType()
+
+	elemConvs := make(map[string]conversion, len(objectAtys))
+	for name, objectAty := range objectAtys {
+		if objectAty.Equals(mapEty) {
+			// no conversion required
+			continue
+		}
+
+		elemConvs[name] = getConversion(mapEty, objectAty, unsafe)
+		if elemConvs[name] == nil {
+			// If any of our element conversions are impossible, then our
+			// whole conversion is impossible.
+			return nil
+		}
+	}
+
+	// If we fall out here then a conversion is possible, using the
+	// element conversions in elemConvs
+	return func(val cty.Value, path cty.Path) (cty.Value, error) {
+		elems := make(map[string]cty.Value, len(elemConvs))
+		elemPath := append(path.Copy(), nil)
+		it := val.ElementIterator()
+		for it.Next() {
+			name, val := it.Element()
+
+			// if there is no corresponding attribute, we skip this key
+			if _, ok := objectAtys[name.AsString()]; !ok {
+				continue
+			}
+
+			var err error
+
+			elemPath[len(elemPath)-1] = cty.IndexStep{
+				Key: name,
+			}
+
+			conv := elemConvs[name.AsString()]
+			if conv != nil {
+				val, err = conv(val, elemPath)
+				if err != nil {
+					return cty.NilVal, err
+				}
+			}
+
+			elems[name.AsString()] = val
+		}
+
+		return cty.ObjectVal(elems), nil
+	}
+}
+
+func conversionUnifyCollectionElements(elems map[string]cty.Value, path cty.Path, unsafe bool) (map[string]cty.Value, error) {
+	elemTypes := make([]cty.Type, 0, len(elems))
+	for _, elem := range elems {
+		elemTypes = append(elemTypes, elem.Type())
+	}
+	unifiedType, _ := unify(elemTypes, unsafe)
+	if unifiedType == cty.NilType {
+		// The elements have no single type they can all convert to, so
+		// the conversion as a whole is impossible.
+		return nil, path.NewErrorf("cannot find a common type for the given elements")
+	}
+
+	unifiedElems := make(map[string]cty.Value)
+	elemPath := append(path.Copy(), nil)
+
+	for name, elem := range elems {
+		if elem.Type().Equals(unifiedType) {
+			unifiedElems[name] = elem
+			continue
+		}
+		elemPath[len(elemPath)-1] = cty.IndexStep{
+			Key: cty.StringVal(name),
+		}
+		conv := getConversion(elem.Type(), unifiedType, unsafe)
+		if conv == nil {
+			// Should not happen if unify succeeded, but guard against a
+			// nil dereference below just in case.
+			return nil, elemPath.NewErrorf("cannot convert %s to %s", elem.Type().FriendlyName(), unifiedType.FriendlyName())
+		}
+		val, err := conv(elem, elemPath)
+		if err != nil {
+			return nil, err
+		}
+		unifiedElems[name] = val
+	}
+
+	return unifiedElems, nil
+}
+
+func conversionCheckMapElementTypes(elems map[string]cty.Value, path cty.Path) error {
+	elementType := cty.NilType
+	elemPath := append(path.Copy(), nil)
+
+	for name, elem := range elems {
+		if elementType == cty.NilType {
+			elementType = elem.Type()
+			continue
+		}
+		if !elementType.Equals(elem.Type()) {
+			elemPath[len(elemPath)-1] = cty.IndexStep{
+				Key: cty.StringVal(name),
+			}
+			return elemPath.NewErrorf("%s is required", elementType.FriendlyName())
+		}
+	}
+
+	return nil
+}
diff --git a/vendor/github.com/zclconf/go-cty/cty/convert/conversion_primitive.go b/vendor/github.com/zclconf/go-cty/cty/convert/conversion_primitive.go
index e0dbf491e..0d6fae964 100644
--- a/vendor/github.com/zclconf/go-cty/cty/convert/conversion_primitive.go
+++ 
b/vendor/github.com/zclconf/go-cty/cty/convert/conversion_primitive.go @@ -1,6 +1,8 @@ package convert import ( + "strings" + "github.com/zclconf/go-cty/cty" ) @@ -41,7 +43,14 @@ var primitiveConversionsUnsafe = map[cty.Type]map[cty.Type]conversion{ case "false", "0": return cty.False, nil default: - return cty.NilVal, path.NewErrorf("a bool is required") + switch strings.ToLower(val.AsString()) { + case "true": + return cty.NilVal, path.NewErrorf("a bool is required; to convert from string, use lowercase \"true\"") + case "false": + return cty.NilVal, path.NewErrorf("a bool is required; to convert from string, use lowercase \"false\"") + default: + return cty.NilVal, path.NewErrorf("a bool is required") + } } }, }, diff --git a/vendor/github.com/zclconf/go-cty/cty/convert/unify.go b/vendor/github.com/zclconf/go-cty/cty/convert/unify.go index 53ebbfe08..ee171d1ed 100644 --- a/vendor/github.com/zclconf/go-cty/cty/convert/unify.go +++ b/vendor/github.com/zclconf/go-cty/cty/convert/unify.go @@ -28,11 +28,14 @@ func unify(types []cty.Type, unsafe bool) (cty.Type, []Conversion) { // a subset of that type, which would be a much less useful conversion for // unification purposes. { + mapCt := 0 objectCt := 0 tupleCt := 0 dynamicCt := 0 for _, ty := range types { switch { + case ty.IsMapType(): + mapCt++ case ty.IsObjectType(): objectCt++ case ty.IsTupleType(): @@ -44,6 +47,8 @@ func unify(types []cty.Type, unsafe bool) (cty.Type, []Conversion) { } } switch { + case mapCt > 0 && (mapCt+dynamicCt) == len(types): + return unifyMapTypes(types, unsafe, dynamicCt > 0) case objectCt > 0 && (objectCt+dynamicCt) == len(types): return unifyObjectTypes(types, unsafe, dynamicCt > 0) case tupleCt > 0 && (tupleCt+dynamicCt) == len(types): @@ -95,6 +100,44 @@ Preferences: return cty.NilType, nil } +func unifyMapTypes(types []cty.Type, unsafe bool, hasDynamic bool) (cty.Type, []Conversion) { + // If we had any dynamic types in the input here then we can't predict + // what path we'll take through here once these become known types, so + // we'll conservatively produce DynamicVal for these. 
+	if hasDynamic {
+		return unifyAllAsDynamic(types)
+	}
+
+	elemTypes := make([]cty.Type, 0, len(types))
+	for _, ty := range types {
+		elemTypes = append(elemTypes, ty.ElementType())
+	}
+	retElemType, _ := unify(elemTypes, unsafe)
+	if retElemType == cty.NilType {
+		return cty.NilType, nil
+	}
+
+	retTy := cty.Map(retElemType)
+
+	conversions := make([]Conversion, len(types))
+	for i, ty := range types {
+		if ty.Equals(retTy) {
+			continue
+		}
+		if unsafe {
+			conversions[i] = GetConversionUnsafe(ty, retTy)
+		} else {
+			conversions[i] = GetConversion(ty, retTy)
+		}
+		if conversions[i] == nil {
+			// Shouldn't be reachable, since we were able to unify
+			return cty.NilType, nil
+		}
+	}
+
+	return retTy, conversions
+}
+
 func unifyObjectTypes(types []cty.Type, unsafe bool, hasDynamic bool) (cty.Type, []Conversion) {
 	// If we had any dynamic types in the input here then we can't predict
 	// what path we'll take through here once these become known types, so
diff --git a/vendor/github.com/zclconf/go-cty/cty/element_iterator.go b/vendor/github.com/zclconf/go-cty/cty/element_iterator.go
index 0bf84c774..9e4fff66f 100644
--- a/vendor/github.com/zclconf/go-cty/cty/element_iterator.go
+++ b/vendor/github.com/zclconf/go-cty/cty/element_iterator.go
@@ -23,6 +23,8 @@ type ElementIterator interface {
 func canElementIterator(val Value) bool {
 	switch {
+	case val.IsMarked():
+		return false
 	case val.ty.IsListType():
 		return true
 	case val.ty.IsMapType():
@@ -39,6 +41,7 @@ func canElementIterator(val Value) bool {
 }
 func elementIterator(val Value) ElementIterator {
+	val.assertUnmarked()
 	switch {
 	case val.ty.IsListType():
 		return &listElementIterator{
diff --git a/vendor/github.com/zclconf/go-cty/cty/function/argument.go b/vendor/github.com/zclconf/go-cty/cty/function/argument.go
index bfd30157e..5a26c275f 100644
--- a/vendor/github.com/zclconf/go-cty/cty/function/argument.go
+++ b/vendor/github.com/zclconf/go-cty/cty/function/argument.go
@@ -47,4 +47,24 @@ type Parameter struct {
 	// values are not, thus improving the type-check accuracy of derived
 	// values.
 	AllowDynamicType bool
+
+	// If AllowMarked is set then marked values may be passed into this
+	// argument's slot in the implementation function. If not set, any
+	// marked value will be unmarked before calling and then the markings
+	// from that value will be applied automatically to the function result,
+	// ensuring that the marks get propagated in a simplistic way even if
+	// a function is unable to handle them.
+	//
+	// For any argument whose parameter has AllowMarked set, it's the
+	// function implementation's responsibility to Unmark the given value
+	// and propagate the marks appropriately to the result in order to
+	// avoid losing the marks. Application-specific functions might use
+	// special rules to selectively propagate particular marks.
+	//
+	// The automatic unmarking of values applies only to the main
+	// implementation function. In an application that uses marked values,
+	// the Type implementation for a function must always be prepared to accept
+	// marked values, which is easy to achieve by consulting only the type
+	// and ignoring the value itself.
+ AllowMarked bool } diff --git a/vendor/github.com/zclconf/go-cty/cty/function/function.go b/vendor/github.com/zclconf/go-cty/cty/function/function.go index 9e8bf3376..efd882725 100644 --- a/vendor/github.com/zclconf/go-cty/cty/function/function.go +++ b/vendor/github.com/zclconf/go-cty/cty/function/function.go @@ -142,6 +142,21 @@ func (f Function) ReturnTypeForValues(args []cty.Value) (ty cty.Type, err error) for i, spec := range f.spec.Params { val := posArgs[i] + if val.IsMarked() && !spec.AllowMarked { + // During type checking we just unmark values and discard their + // marks, under the assumption that during actual execution of + // the function we'll do similarly and then re-apply the marks + // afterwards. Note that this does mean that a function that + // inspects values (rather than just types) in its Type + // implementation can potentially fail to take into account marks, + // unless it specifically opts in to seeing them. + unmarked, _ := val.Unmark() + newArgs := make([]cty.Value, len(args)) + copy(newArgs, args) + newArgs[i] = unmarked + args = newArgs + } + if val.IsNull() && !spec.AllowNull { return cty.Type{}, NewArgErrorf(i, "argument must not be null") } @@ -168,6 +183,15 @@ func (f Function) ReturnTypeForValues(args []cty.Value) (ty cty.Type, err error) for i, val := range varArgs { realI := i + len(posArgs) + if val.IsMarked() && !spec.AllowMarked { + // See the similar block in the loop above for what's going on here. + unmarked, _ := val.Unmark() + newArgs := make([]cty.Value, len(args)) + copy(newArgs, args) + newArgs[realI] = unmarked + args = newArgs + } + if val.IsNull() && !spec.AllowNull { return cty.Type{}, NewArgErrorf(realI, "argument must not be null") } @@ -208,9 +232,10 @@ func (f Function) Call(args []cty.Value) (val cty.Value, err error) { // Type checking already dealt with most situations relating to our // parameter specification, but we still need to deal with unknown - // values. + // values and marked values. posArgs := args[:len(f.spec.Params)] varArgs := args[len(f.spec.Params):] + var resultMarks []cty.ValueMarks for i, spec := range f.spec.Params { val := posArgs[i] @@ -218,14 +243,37 @@ func (f Function) Call(args []cty.Value) (val cty.Value, err error) { if !val.IsKnown() && !spec.AllowUnknown { return cty.UnknownVal(expectedType), nil } + + if val.IsMarked() && !spec.AllowMarked { + unwrappedVal, marks := val.Unmark() + // In order to avoid additional overhead on applications that + // are not using marked values, we copy the given args only + // if we encounter a marked value we need to unmark. However, + // as a consequence we end up doing redundant copying if multiple + // marked values need to be unwrapped. That seems okay because + // argument lists are generally small. 
+ newArgs := make([]cty.Value, len(args)) + copy(newArgs, args) + newArgs[i] = unwrappedVal + resultMarks = append(resultMarks, marks) + args = newArgs + } } if f.spec.VarParam != nil { spec := f.spec.VarParam - for _, val := range varArgs { + for i, val := range varArgs { if !val.IsKnown() && !spec.AllowUnknown { return cty.UnknownVal(expectedType), nil } + if val.IsMarked() && !spec.AllowMarked { + unwrappedVal, marks := val.Unmark() + newArgs := make([]cty.Value, len(args)) + copy(newArgs, args) + newArgs[len(posArgs)+i] = unwrappedVal + resultMarks = append(resultMarks, marks) + args = newArgs + } } } @@ -244,6 +292,9 @@ func (f Function) Call(args []cty.Value) (val cty.Value, err error) { if err != nil { return cty.NilVal, err } + if len(resultMarks) > 0 { + retVal = retVal.WithMarks(resultMarks...) + } } // Returned value must conform to what the Type function expected, to diff --git a/vendor/github.com/zclconf/go-cty/cty/function/stdlib/bool.go b/vendor/github.com/zclconf/go-cty/cty/function/stdlib/bool.go index a473d0ec3..4f1ecc8d9 100644 --- a/vendor/github.com/zclconf/go-cty/cty/function/stdlib/bool.go +++ b/vendor/github.com/zclconf/go-cty/cty/function/stdlib/bool.go @@ -11,6 +11,7 @@ var NotFunc = function.New(&function.Spec{ Name: "val", Type: cty.Bool, AllowDynamicType: true, + AllowMarked: true, }, }, Type: function.StaticReturnType(cty.Bool), @@ -25,11 +26,13 @@ var AndFunc = function.New(&function.Spec{ Name: "a", Type: cty.Bool, AllowDynamicType: true, + AllowMarked: true, }, { Name: "b", Type: cty.Bool, AllowDynamicType: true, + AllowMarked: true, }, }, Type: function.StaticReturnType(cty.Bool), @@ -44,11 +47,13 @@ var OrFunc = function.New(&function.Spec{ Name: "a", Type: cty.Bool, AllowDynamicType: true, + AllowMarked: true, }, { Name: "b", Type: cty.Bool, AllowDynamicType: true, + AllowMarked: true, }, }, Type: function.StaticReturnType(cty.Bool), diff --git a/vendor/github.com/zclconf/go-cty/cty/function/stdlib/collection.go b/vendor/github.com/zclconf/go-cty/cty/function/stdlib/collection.go index 967ba03c8..b2ce062a6 100644 --- a/vendor/github.com/zclconf/go-cty/cty/function/stdlib/collection.go +++ b/vendor/github.com/zclconf/go-cty/cty/function/stdlib/collection.go @@ -1,9 +1,12 @@ package stdlib import ( + "errors" "fmt" + "sort" "github.com/zclconf/go-cty/cty" + "github.com/zclconf/go-cty/cty/convert" "github.com/zclconf/go-cty/cty/function" "github.com/zclconf/go-cty/cty/gocty" ) @@ -122,6 +125,1094 @@ var LengthFunc = function.New(&function.Spec{ }, }) +var ElementFunc = function.New(&function.Spec{ + Params: []function.Parameter{ + { + Name: "list", + Type: cty.DynamicPseudoType, + }, + { + Name: "index", + Type: cty.Number, + }, + }, + Type: func(args []cty.Value) (cty.Type, error) { + list := args[0] + listTy := list.Type() + switch { + case listTy.IsListType(): + return listTy.ElementType(), nil + case listTy.IsTupleType(): + if !args[1].IsKnown() { + // If the index isn't known yet then we can't predict the + // result type since each tuple element can have its own type. + return cty.DynamicPseudoType, nil + } + + etys := listTy.TupleElementTypes() + var index int + err := gocty.FromCtyValue(args[1], &index) + if err != nil { + // e.g. 
fractional number where whole number is required
+				return cty.DynamicPseudoType, fmt.Errorf("invalid index: %s", err)
+			}
+			if len(etys) == 0 {
+				return cty.DynamicPseudoType, errors.New("cannot use element function with an empty list")
+			}
+			index = index % len(etys)
+			return etys[index], nil
+		default:
+			return cty.DynamicPseudoType, fmt.Errorf("cannot read elements from %s", listTy.FriendlyName())
+		}
+	},
+	Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
+		var index int
+		err := gocty.FromCtyValue(args[1], &index)
+		if err != nil {
+			// can't happen because we checked this in the Type function above
+			return cty.DynamicVal, fmt.Errorf("invalid index: %s", err)
+		}
+
+		if !args[0].IsKnown() {
+			return cty.UnknownVal(retType), nil
+		}
+
+		l := args[0].LengthInt()
+		if l == 0 {
+			return cty.DynamicVal, errors.New("cannot use element function with an empty list")
+		}
+		index = index % l
+
+		// We did all the necessary type checks in the type function above,
+		// so this is guaranteed not to fail.
+		return args[0].Index(cty.NumberIntVal(int64(index))), nil
+	},
+})
+
+// CoalesceListFunc is a function that takes any number of list arguments
+// and returns the first one that isn't empty.
+var CoalesceListFunc = function.New(&function.Spec{
+	Params: []function.Parameter{},
+	VarParam: &function.Parameter{
+		Name:             "vals",
+		Type:             cty.DynamicPseudoType,
+		AllowUnknown:     true,
+		AllowDynamicType: true,
+		AllowNull:        true,
+	},
+	Type: func(args []cty.Value) (ret cty.Type, err error) {
+		if len(args) == 0 {
+			return cty.NilType, errors.New("at least one argument is required")
+		}
+
+		argTypes := make([]cty.Type, len(args))
+
+		for i, arg := range args {
+			// if any argument is unknown, we can't be certain which type we will return
+			if !arg.IsKnown() {
+				return cty.DynamicPseudoType, nil
+			}
+			ty := arg.Type()
+
+			if !ty.IsListType() && !ty.IsTupleType() {
+				return cty.NilType, errors.New("coalescelist arguments must be lists or tuples")
+			}
+
+			argTypes[i] = arg.Type()
+		}
+
+		last := argTypes[0]
+		// If there are mixed types, we have to return a dynamic type.
+		for _, next := range argTypes[1:] {
+			if !next.Equals(last) {
+				return cty.DynamicPseudoType, nil
+			}
+		}
+
+		return last, nil
+	},
+	Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
+		for _, arg := range args {
+			if !arg.IsKnown() {
+				// If we run into an unknown list at some point, we can't
+				// predict the final result yet. (If there's a known, non-empty
+				// arg before this then we won't get here.)
+				return cty.UnknownVal(retType), nil
+			}
+
+			if arg.LengthInt() > 0 {
+				return arg, nil
+			}
+		}
+
+		return cty.NilVal, errors.New("no non-null arguments")
+	},
+})
+
+// CompactFunc is a function that takes a list of strings and returns a new list
+// with any empty string elements removed.
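As an illustrative aside (not part of the vendored code), here is a minimal sketch of the behavior implemented just below, using the `Compact` convenience wrapper that this diff also adds near the end of collection.go:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function/stdlib"
)

func main() {
	list := cty.ListVal([]cty.Value{
		cty.StringVal("a"),
		cty.StringVal(""), // empty strings are dropped by Compact
		cty.StringVal("b"),
	})
	got, err := stdlib.Compact(list)
	if err != nil {
		panic(err)
	}
	// A two-element list remains: "a" and "b".
	fmt.Printf("%#v\n", got)
}
```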
+var CompactFunc = function.New(&function.Spec{ + Params: []function.Parameter{ + { + Name: "list", + Type: cty.List(cty.String), + }, + }, + Type: function.StaticReturnType(cty.List(cty.String)), + Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { + listVal := args[0] + if !listVal.IsWhollyKnown() { + // If some of the element values aren't known yet then we + // can't yet return a compacted list + return cty.UnknownVal(retType), nil + } + + var outputList []cty.Value + + for it := listVal.ElementIterator(); it.Next(); { + _, v := it.Element() + if v.IsNull() || v.AsString() == "" { + continue + } + outputList = append(outputList, v) + } + + if len(outputList) == 0 { + return cty.ListValEmpty(cty.String), nil + } + + return cty.ListVal(outputList), nil + }, +}) + +// ContainsFunc is a function that determines whether a given list or +// set contains a given single value as one of its elements. +var ContainsFunc = function.New(&function.Spec{ + Params: []function.Parameter{ + { + Name: "list", + Type: cty.DynamicPseudoType, + }, + { + Name: "value", + Type: cty.DynamicPseudoType, + }, + }, + Type: function.StaticReturnType(cty.Bool), + Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) { + arg := args[0] + ty := arg.Type() + + if !ty.IsListType() && !ty.IsTupleType() && !ty.IsSetType() { + return cty.NilVal, errors.New("argument must be list, tuple, or set") + } + + if args[0].IsNull() { + return cty.NilVal, errors.New("cannot search a nil list or set") + } + + if args[0].LengthInt() == 0 { + return cty.False, nil + } + + if !args[0].IsKnown() || !args[1].IsKnown() { + return cty.UnknownVal(cty.Bool), nil + } + + containsUnknown := false + for it := args[0].ElementIterator(); it.Next(); { + _, v := it.Element() + eq := args[1].Equals(v) + if !eq.IsKnown() { + // We may have an unknown value which could match later, but we + // first need to continue checking all values for an exact + // match. + containsUnknown = true + continue + } + if eq.True() { + return cty.True, nil + } + } + + if containsUnknown { + return cty.UnknownVal(cty.Bool), nil + } + + return cty.False, nil + }, +}) + +// DistinctFunc is a function that takes a list and returns a new list +// with any duplicate elements removed. +var DistinctFunc = function.New(&function.Spec{ + Params: []function.Parameter{ + { + Name: "list", + Type: cty.List(cty.DynamicPseudoType), + }, + }, + Type: func(args []cty.Value) (cty.Type, error) { + return args[0].Type(), nil + }, + Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { + listVal := args[0] + + if !listVal.IsWhollyKnown() { + return cty.UnknownVal(retType), nil + } + var list []cty.Value + + for it := listVal.ElementIterator(); it.Next(); { + _, v := it.Element() + list, err = appendIfMissing(list, v) + if err != nil { + return cty.NilVal, err + } + } + + if len(list) == 0 { + return cty.ListValEmpty(retType.ElementType()), nil + } + return cty.ListVal(list), nil + }, +}) + +// ChunklistFunc is a function that splits a single list into fixed-size chunks, +// returning a list of lists. 
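For illustration only, a small sketch of the chunking behavior implemented below, via the `Chunklist` wrapper added later in this file:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function/stdlib"
)

func main() {
	list := cty.ListVal([]cty.Value{
		cty.StringVal("a"), cty.StringVal("b"), cty.StringVal("c"),
	})
	// A size of 2 yields [["a", "b"], ["c"]]: full chunks first, then
	// whatever remains as a final, shorter chunk.
	chunks, err := stdlib.Chunklist(list, cty.NumberIntVal(2))
	if err != nil {
		panic(err)
	}
	fmt.Printf("%#v\n", chunks)
}
```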
+var ChunklistFunc = function.New(&function.Spec{
+	Params: []function.Parameter{
+		{
+			Name: "list",
+			Type: cty.List(cty.DynamicPseudoType),
+		},
+		{
+			Name: "size",
+			Type: cty.Number,
+		},
+	},
+	Type: func(args []cty.Value) (cty.Type, error) {
+		return cty.List(args[0].Type()), nil
+	},
+	Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
+		listVal := args[0]
+		if !listVal.IsKnown() {
+			return cty.UnknownVal(retType), nil
+		}
+
+		if listVal.LengthInt() == 0 {
+			return cty.ListValEmpty(listVal.Type()), nil
+		}
+
+		var size int
+		err = gocty.FromCtyValue(args[1], &size)
+		if err != nil {
+			return cty.NilVal, fmt.Errorf("invalid size: %s", err)
+		}
+
+		if size < 0 {
+			return cty.NilVal, errors.New("the size argument must not be negative")
+		}
+
+		output := make([]cty.Value, 0)
+
+		// if size is 0, return a single chunk containing the whole list
+		if size == 0 {
+			output = append(output, listVal)
+			return cty.ListVal(output), nil
+		}
+
+		chunk := make([]cty.Value, 0)
+
+		l := args[0].LengthInt()
+		i := 0
+
+		for it := listVal.ElementIterator(); it.Next(); {
+			_, v := it.Element()
+			chunk = append(chunk, v)
+
+			// Close the chunk when it reaches the requested size, or when
+			// we reach the final element of the list.
+			if (i+1)%size == 0 || (i+1) == l {
+				output = append(output, cty.ListVal(chunk))
+				chunk = make([]cty.Value, 0)
+			}
+			i++
+		}
+
+		return cty.ListVal(output), nil
+	},
+})
+
+// FlattenFunc is a function that takes a list and replaces any elements
+// that are lists with a flattened sequence of the list contents.
+var FlattenFunc = function.New(&function.Spec{
+	Params: []function.Parameter{
+		{
+			Name: "list",
+			Type: cty.DynamicPseudoType,
+		},
+	},
+	Type: func(args []cty.Value) (cty.Type, error) {
+		if !args[0].IsWhollyKnown() {
+			return cty.DynamicPseudoType, nil
+		}
+
+		argTy := args[0].Type()
+		if !argTy.IsListType() && !argTy.IsSetType() && !argTy.IsTupleType() {
+			return cty.NilType, errors.New("can only flatten lists, sets and tuples")
+		}
+
+		retVal, known := flattener(args[0])
+		if !known {
+			return cty.DynamicPseudoType, nil
+		}
+
+		tys := make([]cty.Type, len(retVal))
+		for i, ty := range retVal {
+			tys[i] = ty.Type()
+		}
+		return cty.Tuple(tys), nil
+	},
+	Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
+		inputList := args[0]
+		if inputList.LengthInt() == 0 {
+			return cty.EmptyTupleVal, nil
+		}
+
+		out, known := flattener(inputList)
+		if !known {
+			return cty.UnknownVal(retType), nil
+		}
+
+		return cty.TupleVal(out), nil
+	},
+})
+
+// Flatten until it's not a cty.List, and return whether the value is known.
+// We can flatten lists with unknown values, as long as they are not
+// lists themselves.
+func flattener(flattenList cty.Value) ([]cty.Value, bool) {
+	out := make([]cty.Value, 0)
+	for it := flattenList.ElementIterator(); it.Next(); {
+		_, val := it.Element()
+		if val.Type().IsListType() || val.Type().IsSetType() || val.Type().IsTupleType() {
+			if !val.IsKnown() {
+				return out, false
+			}
+
+			res, known := flattener(val)
+			if !known {
+				return res, known
+			}
+			out = append(out, res...)
+		} else {
+			out = append(out, val)
+		}
+	}
+	return out, true
+}
+
+// KeysFunc is a function that takes a map and returns a sorted list of the map keys.
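Again as a non-authoritative sketch: the `Keys` wrapper added later in this file can be exercised like so, illustrating the sorted result described above:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function/stdlib"
)

func main() {
	m := cty.MapVal(map[string]cty.Value{
		"b": cty.NumberIntVal(1),
		"a": cty.NumberIntVal(2),
	})
	keys, err := stdlib.Keys(m)
	if err != nil {
		panic(err)
	}
	// For a map the result is cty.List(cty.String) in lexicographical
	// order: ["a", "b"]. For an object argument it would instead be a
	// tuple of strings.
	fmt.Printf("%#v\n", keys)
}
```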
+var KeysFunc = function.New(&function.Spec{ + Params: []function.Parameter{ + { + Name: "inputMap", + Type: cty.DynamicPseudoType, + AllowUnknown: true, + }, + }, + Type: func(args []cty.Value) (cty.Type, error) { + ty := args[0].Type() + switch { + case ty.IsMapType(): + return cty.List(cty.String), nil + case ty.IsObjectType(): + atys := ty.AttributeTypes() + if len(atys) == 0 { + return cty.EmptyTuple, nil + } + // All of our result elements will be strings, and atys just + // decides how many there are. + etys := make([]cty.Type, len(atys)) + for i := range etys { + etys[i] = cty.String + } + return cty.Tuple(etys), nil + default: + return cty.DynamicPseudoType, function.NewArgErrorf(0, "must have map or object type") + } + }, + Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) { + m := args[0] + var keys []cty.Value + + switch { + case m.Type().IsObjectType(): + // In this case we allow unknown values so we must work only with + // the attribute _types_, not with the value itself. + var names []string + for name := range m.Type().AttributeTypes() { + names = append(names, name) + } + sort.Strings(names) // same ordering guaranteed by cty's ElementIterator + if len(names) == 0 { + return cty.EmptyTupleVal, nil + } + keys = make([]cty.Value, len(names)) + for i, name := range names { + keys[i] = cty.StringVal(name) + } + return cty.TupleVal(keys), nil + default: + if !m.IsKnown() { + return cty.UnknownVal(retType), nil + } + + // cty guarantees that ElementIterator will iterate in lexicographical + // order by key. + for it := args[0].ElementIterator(); it.Next(); { + k, _ := it.Element() + keys = append(keys, k) + } + if len(keys) == 0 { + return cty.ListValEmpty(cty.String), nil + } + return cty.ListVal(keys), nil + } + }, +}) + +// LookupFunc is a function that performs dynamic lookups of map types. 
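A minimal sketch (illustrative only) of the lookup-with-default behavior implemented below, using the `Lookup` wrapper this diff also adds:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function/stdlib"
)

func main() {
	m := cty.MapVal(map[string]cty.Value{
		"env": cty.StringVal("prod"),
	})
	// A present key returns its value; a missing key falls back to the
	// default, converted to the map's element type.
	hit, _ := stdlib.Lookup(m, cty.StringVal("env"), cty.StringVal("none"))
	miss, _ := stdlib.Lookup(m, cty.StringVal("region"), cty.StringVal("none"))
	fmt.Printf("%#v %#v\n", hit, miss) // "prod", then "none"
}
```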
+var LookupFunc = function.New(&function.Spec{ + Params: []function.Parameter{ + { + Name: "inputMap", + Type: cty.DynamicPseudoType, + }, + { + Name: "key", + Type: cty.String, + }, + { + Name: "default", + Type: cty.DynamicPseudoType, + }, + }, + Type: func(args []cty.Value) (ret cty.Type, err error) { + ty := args[0].Type() + + switch { + case ty.IsObjectType(): + if !args[1].IsKnown() { + return cty.DynamicPseudoType, nil + } + + key := args[1].AsString() + if ty.HasAttribute(key) { + return args[0].GetAttr(key).Type(), nil + } else if len(args) == 3 { + // if the key isn't found but a default is provided, + // return the default type + return args[2].Type(), nil + } + return cty.DynamicPseudoType, function.NewArgErrorf(0, "the given object has no attribute %q", key) + case ty.IsMapType(): + if len(args) == 3 { + _, err = convert.Convert(args[2], ty.ElementType()) + if err != nil { + return cty.NilType, function.NewArgErrorf(2, "the default value must have the same type as the map elements") + } + } + return ty.ElementType(), nil + default: + return cty.NilType, function.NewArgErrorf(0, "lookup() requires a map as the first argument") + } + }, + Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { + defaultVal := args[2] + + mapVar := args[0] + lookupKey := args[1].AsString() + + if !mapVar.IsWhollyKnown() { + return cty.UnknownVal(retType), nil + } + + if mapVar.Type().IsObjectType() { + if mapVar.Type().HasAttribute(lookupKey) { + return mapVar.GetAttr(lookupKey), nil + } + } else if mapVar.HasIndex(cty.StringVal(lookupKey)) == cty.True { + return mapVar.Index(cty.StringVal(lookupKey)), nil + } + + defaultVal, err = convert.Convert(defaultVal, retType) + if err != nil { + return cty.NilVal, err + } + return defaultVal, nil + }, +}) + +// MergeFunc constructs a function that takes an arbitrary number of maps or +// objects, and returns a single value that contains a merged set of keys and +// values from all of the inputs. +// +// If more than one given map or object defines the same key then the one that +// is later in the argument sequence takes precedence. +var MergeFunc = function.New(&function.Spec{ + Params: []function.Parameter{}, + VarParam: &function.Parameter{ + Name: "maps", + Type: cty.DynamicPseudoType, + AllowDynamicType: true, + AllowNull: true, + }, + Type: func(args []cty.Value) (cty.Type, error) { + // empty args is accepted, so assume an empty object since we have no + // key-value types. 
+ if len(args) == 0 { + return cty.EmptyObject, nil + } + + // collect the possible object attrs + attrs := map[string]cty.Type{} + + first := cty.NilType + matching := true + attrsKnown := true + for i, arg := range args { + ty := arg.Type() + // any dynamic args mean we can't compute a type + if ty.Equals(cty.DynamicPseudoType) { + return cty.DynamicPseudoType, nil + } + + // check for invalid arguments + if !ty.IsMapType() && !ty.IsObjectType() { + return cty.NilType, fmt.Errorf("arguments must be maps or objects, got %#v", ty.FriendlyName()) + } + + switch { + case ty.IsObjectType() && !arg.IsNull(): + for attr, aty := range ty.AttributeTypes() { + attrs[attr] = aty + } + case ty.IsMapType(): + switch { + case arg.IsNull(): + // pass, nothing to add + case arg.IsKnown(): + ety := arg.Type().ElementType() + for it := arg.ElementIterator(); it.Next(); { + attr, _ := it.Element() + attrs[attr.AsString()] = ety + } + default: + // any unknown maps means we don't know all possible attrs + // for the return type + attrsKnown = false + } + } + + // record the first argument type for comparison + if i == 0 { + first = arg.Type() + continue + } + + if !ty.Equals(first) && matching { + matching = false + } + } + + // the types all match, so use the first argument type + if matching { + return first, nil + } + + // We had a mix of unknown maps and objects, so we can't predict the + // attributes + if !attrsKnown { + return cty.DynamicPseudoType, nil + } + + return cty.Object(attrs), nil + }, + Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { + outputMap := make(map[string]cty.Value) + + // if all inputs are null, return a null value rather than an object + // with null attributes + allNull := true + for _, arg := range args { + if arg.IsNull() { + continue + } else { + allNull = false + } + + for it := arg.ElementIterator(); it.Next(); { + k, v := it.Element() + outputMap[k.AsString()] = v + } + } + + switch { + case allNull: + return cty.NullVal(retType), nil + case retType.IsMapType(): + return cty.MapVal(outputMap), nil + case retType.IsObjectType(), retType.Equals(cty.DynamicPseudoType): + return cty.ObjectVal(outputMap), nil + default: + panic(fmt.Sprintf("unexpected return type: %#v", retType)) + } + }, +}) + +// ReverseListFunc takes a sequence and produces a new sequence of the same length +// with all of the same elements as the given sequence but in reverse order. 
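An illustrative sketch of the type-aware reversal implemented below (via the `ReverseList` wrapper added later in this file); note how a tuple's element types are reversed along with its values:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function/stdlib"
)

func main() {
	tup := cty.TupleVal([]cty.Value{
		cty.StringVal("a"), cty.NumberIntVal(1),
	})
	rev, err := stdlib.ReverseList(tup)
	if err != nil {
		panic(err)
	}
	// The result is cty.TupleVal([]cty.Value{cty.NumberIntVal(1),
	// cty.StringVal("a")}): values and element types both reversed.
	fmt.Printf("%#v\n", rev)
}
```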
+var ReverseListFunc = function.New(&function.Spec{ + Params: []function.Parameter{ + { + Name: "list", + Type: cty.DynamicPseudoType, + }, + }, + Type: func(args []cty.Value) (cty.Type, error) { + argTy := args[0].Type() + switch { + case argTy.IsTupleType(): + argTys := argTy.TupleElementTypes() + retTys := make([]cty.Type, len(argTys)) + for i, ty := range argTys { + retTys[len(retTys)-i-1] = ty + } + return cty.Tuple(retTys), nil + case argTy.IsListType(), argTy.IsSetType(): // We accept sets here to mimic the usual behavior of auto-converting to list + return cty.List(argTy.ElementType()), nil + default: + return cty.NilType, function.NewArgErrorf(0, "can only reverse list or tuple values, not %s", argTy.FriendlyName()) + } + }, + Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { + in := args[0].AsValueSlice() + outVals := make([]cty.Value, len(in)) + for i, v := range in { + outVals[len(outVals)-i-1] = v + } + switch { + case retType.IsTupleType(): + return cty.TupleVal(outVals), nil + default: + if len(outVals) == 0 { + return cty.ListValEmpty(retType.ElementType()), nil + } + return cty.ListVal(outVals), nil + } + }, +}) + +// SetProductFunc calculates the Cartesian product of two or more sets or +// sequences. If the arguments are all lists then the result is a list of tuples, +// preserving the ordering of all of the input lists. Otherwise the result is a +// set of tuples. +var SetProductFunc = function.New(&function.Spec{ + Params: []function.Parameter{}, + VarParam: &function.Parameter{ + Name: "sets", + Type: cty.DynamicPseudoType, + }, + Type: func(args []cty.Value) (retType cty.Type, err error) { + if len(args) < 2 { + return cty.NilType, errors.New("at least two arguments are required") + } + + listCount := 0 + elemTys := make([]cty.Type, len(args)) + for i, arg := range args { + aty := arg.Type() + switch { + case aty.IsSetType(): + elemTys[i] = aty.ElementType() + case aty.IsListType(): + elemTys[i] = aty.ElementType() + listCount++ + case aty.IsTupleType(): + // We can accept a tuple type only if there's some common type + // that all of its elements can be converted to. + allEtys := aty.TupleElementTypes() + if len(allEtys) == 0 { + elemTys[i] = cty.DynamicPseudoType + listCount++ + break + } + ety, _ := convert.UnifyUnsafe(allEtys) + if ety == cty.NilType { + return cty.NilType, function.NewArgErrorf(i, "all elements must be of the same type") + } + elemTys[i] = ety + listCount++ + default: + return cty.NilType, function.NewArgErrorf(i, "a set or a list is required") + } + } + + if listCount == len(args) { + return cty.List(cty.Tuple(elemTys)), nil + } + return cty.Set(cty.Tuple(elemTys)), nil + }, + Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { + ety := retType.ElementType() + + total := 1 + for _, arg := range args { + // Because of our type checking function, we are guaranteed that + // all of the arguments are known, non-null values of types that + // support LengthInt. + total *= arg.LengthInt() + } + + if total == 0 { + // If any of the arguments was an empty collection then our result + // is also an empty collection, which we'll short-circuit here. 
+ if retType.IsListType() { + return cty.ListValEmpty(ety), nil + } + return cty.SetValEmpty(ety), nil + } + + subEtys := ety.TupleElementTypes() + product := make([][]cty.Value, total) + + b := make([]cty.Value, total*len(args)) + n := make([]int, len(args)) + s := 0 + argVals := make([][]cty.Value, len(args)) + for i, arg := range args { + argVals[i] = arg.AsValueSlice() + } + + for i := range product { + e := s + len(args) + pi := b[s:e] + product[i] = pi + s = e + + for j, n := range n { + val := argVals[j][n] + ty := subEtys[j] + if !val.Type().Equals(ty) { + var err error + val, err = convert.Convert(val, ty) + if err != nil { + // Should never happen since we checked this in our + // type-checking function. + return cty.NilVal, fmt.Errorf("failed to convert argVals[%d][%d] to %s; this is a bug in cty", j, n, ty.FriendlyName()) + } + } + pi[j] = val + } + + for j := len(n) - 1; j >= 0; j-- { + n[j]++ + if n[j] < len(argVals[j]) { + break + } + n[j] = 0 + } + } + + productVals := make([]cty.Value, total) + for i, vals := range product { + productVals[i] = cty.TupleVal(vals) + } + + if retType.IsListType() { + return cty.ListVal(productVals), nil + } + return cty.SetVal(productVals), nil + }, +}) + +// SliceFunc is a function that extracts some consecutive elements +// from within a list. +var SliceFunc = function.New(&function.Spec{ + Params: []function.Parameter{ + { + Name: "list", + Type: cty.DynamicPseudoType, + }, + { + Name: "start_index", + Type: cty.Number, + }, + { + Name: "end_index", + Type: cty.Number, + }, + }, + Type: func(args []cty.Value) (cty.Type, error) { + arg := args[0] + argTy := arg.Type() + + if argTy.IsSetType() { + return cty.NilType, function.NewArgErrorf(0, "cannot slice a set, because its elements do not have indices; explicitly convert to a list if the ordering of the result is not important") + } + if !argTy.IsListType() && !argTy.IsTupleType() { + return cty.NilType, function.NewArgErrorf(0, "must be a list or tuple value") + } + + startIndex, endIndex, idxsKnown, err := sliceIndexes(args) + if err != nil { + return cty.NilType, err + } + + if argTy.IsListType() { + return argTy, nil + } + + if !idxsKnown { + // If we don't know our start/end indices then we can't predict + // the result type if we're planning to return a tuple. + return cty.DynamicPseudoType, nil + } + return cty.Tuple(argTy.TupleElementTypes()[startIndex:endIndex]), nil + }, + Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { + inputList := args[0] + + if retType == cty.DynamicPseudoType { + return cty.DynamicVal, nil + } + + // we ignore idxsKnown return value here because the indices are always + // known here, or else the call would've short-circuited. 
+ startIndex, endIndex, _, err := sliceIndexes(args) + if err != nil { + return cty.NilVal, err + } + + if endIndex-startIndex == 0 { + if retType.IsTupleType() { + return cty.EmptyTupleVal, nil + } + return cty.ListValEmpty(retType.ElementType()), nil + } + + outputList := inputList.AsValueSlice()[startIndex:endIndex] + + if retType.IsTupleType() { + return cty.TupleVal(outputList), nil + } + + return cty.ListVal(outputList), nil + }, +}) + +func sliceIndexes(args []cty.Value) (int, int, bool, error) { + var startIndex, endIndex, length int + var startKnown, endKnown, lengthKnown bool + + if args[0].Type().IsTupleType() || args[0].IsKnown() { // if it's a tuple then we always know the length by the type, but lists must be known + length = args[0].LengthInt() + lengthKnown = true + } + + if args[1].IsKnown() { + if err := gocty.FromCtyValue(args[1], &startIndex); err != nil { + return 0, 0, false, function.NewArgErrorf(1, "invalid start index: %s", err) + } + if startIndex < 0 { + return 0, 0, false, function.NewArgErrorf(1, "start index must not be less than zero") + } + if lengthKnown && startIndex > length { + return 0, 0, false, function.NewArgErrorf(1, "start index must not be greater than the length of the list") + } + startKnown = true + } + if args[2].IsKnown() { + if err := gocty.FromCtyValue(args[2], &endIndex); err != nil { + return 0, 0, false, function.NewArgErrorf(2, "invalid end index: %s", err) + } + if endIndex < 0 { + return 0, 0, false, function.NewArgErrorf(2, "end index must not be less than zero") + } + if lengthKnown && endIndex > length { + return 0, 0, false, function.NewArgErrorf(2, "end index must not be greater than the length of the list") + } + endKnown = true + } + if startKnown && endKnown { + if startIndex > endIndex { + return 0, 0, false, function.NewArgErrorf(1, "start index must not be greater than end index") + } + } + return startIndex, endIndex, startKnown && endKnown, nil +} + +// ValuesFunc is a function that returns a list of the map values, +// in the order of the sorted keys. +var ValuesFunc = function.New(&function.Spec{ + Params: []function.Parameter{ + { + Name: "values", + Type: cty.DynamicPseudoType, + }, + }, + Type: func(args []cty.Value) (ret cty.Type, err error) { + ty := args[0].Type() + if ty.IsMapType() { + return cty.List(ty.ElementType()), nil + } else if ty.IsObjectType() { + // The result is a tuple type with all of the same types as our + // object type's attributes, sorted in lexicographical order by the + // keys. (This matches the sort order guaranteed by ElementIterator + // on a cty object value.) + atys := ty.AttributeTypes() + if len(atys) == 0 { + return cty.EmptyTuple, nil + } + attrNames := make([]string, 0, len(atys)) + for name := range atys { + attrNames = append(attrNames, name) + } + sort.Strings(attrNames) + + tys := make([]cty.Type, len(attrNames)) + for i, name := range attrNames { + tys[i] = atys[name] + } + return cty.Tuple(tys), nil + } + return cty.NilType, errors.New("values() requires a map as the first argument") + }, + Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { + mapVar := args[0] + + // We can just iterate the map/object value here because cty guarantees + // that these types always iterate in key lexicographical order. 
+ var values []cty.Value + for it := mapVar.ElementIterator(); it.Next(); { + _, val := it.Element() + values = append(values, val) + } + + if retType.IsTupleType() { + return cty.TupleVal(values), nil + } + if len(values) == 0 { + return cty.ListValEmpty(retType.ElementType()), nil + } + return cty.ListVal(values), nil + }, +}) + +// ZipmapFunc is a function that constructs a map from a list of keys +// and a corresponding list of values. +var ZipmapFunc = function.New(&function.Spec{ + Params: []function.Parameter{ + { + Name: "keys", + Type: cty.List(cty.String), + }, + { + Name: "values", + Type: cty.DynamicPseudoType, + }, + }, + Type: func(args []cty.Value) (ret cty.Type, err error) { + keys := args[0] + values := args[1] + valuesTy := values.Type() + + switch { + case valuesTy.IsListType(): + return cty.Map(values.Type().ElementType()), nil + case valuesTy.IsTupleType(): + if !keys.IsWhollyKnown() { + // Since zipmap with a tuple produces an object, we need to know + // all of the key names before we can predict our result type. + return cty.DynamicPseudoType, nil + } + + keysRaw := keys.AsValueSlice() + valueTypesRaw := valuesTy.TupleElementTypes() + if len(keysRaw) != len(valueTypesRaw) { + return cty.NilType, fmt.Errorf("number of keys (%d) does not match number of values (%d)", len(keysRaw), len(valueTypesRaw)) + } + atys := make(map[string]cty.Type, len(valueTypesRaw)) + for i, keyVal := range keysRaw { + if keyVal.IsNull() { + return cty.NilType, fmt.Errorf("keys list has null value at index %d", i) + } + key := keyVal.AsString() + atys[key] = valueTypesRaw[i] + } + return cty.Object(atys), nil + + default: + return cty.NilType, errors.New("values argument must be a list or tuple value") + } + }, + Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { + keys := args[0] + values := args[1] + + if !keys.IsWhollyKnown() { + // Unknown map keys and object attributes are not supported, so + // our entire result must be unknown in this case. + return cty.UnknownVal(retType), nil + } + + // both keys and values are guaranteed to be shallowly-known here, + // because our declared params above don't allow unknown or null values. + if keys.LengthInt() != values.LengthInt() { + return cty.NilVal, fmt.Errorf("number of keys (%d) does not match number of values (%d)", keys.LengthInt(), values.LengthInt()) + } + + output := make(map[string]cty.Value) + + i := 0 + for it := keys.ElementIterator(); it.Next(); { + _, v := it.Element() + val := values.Index(cty.NumberIntVal(int64(i))) + output[v.AsString()] = val + i++ + } + + switch { + case retType.IsMapType(): + if len(output) == 0 { + return cty.MapValEmpty(retType.ElementType()), nil + } + return cty.MapVal(output), nil + case retType.IsObjectType(): + return cty.ObjectVal(output), nil + default: + // Should never happen because the type-check function should've + // caught any other case. + return cty.NilVal, fmt.Errorf("internally selected incorrect result type %s (this is a bug)", retType.FriendlyName()) + } + }, +}) + +// helper function to add an element to a list, if it does not already exist +func appendIfMissing(slice []cty.Value, element cty.Value) ([]cty.Value, error) { + for _, ele := range slice { + eq, err := Equal(ele, element) + if err != nil { + return slice, err + } + if eq.True() { + return slice, nil + } + } + return append(slice, element), nil +} + // HasIndex determines whether the given collection can be indexed with the // given key. 
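As a small sketch (illustrative only) of the pre-existing `HasIndex` wrapper documented above:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function/stdlib"
)

func main() {
	list := cty.ListVal([]cty.Value{cty.StringVal("a"), cty.StringVal("b")})
	inRange, _ := stdlib.HasIndex(list, cty.NumberIntVal(1))
	outOfRange, _ := stdlib.HasIndex(list, cty.NumberIntVal(2))
	fmt.Printf("%#v %#v\n", inRange, outOfRange) // cty.True cty.False
}
```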
func HasIndex(collection cty.Value, key cty.Value) (cty.Value, error) { @@ -138,3 +1229,91 @@ func Index(collection cty.Value, key cty.Value) (cty.Value, error) { func Length(collection cty.Value) (cty.Value, error) { return LengthFunc.Call([]cty.Value{collection}) } + +// Element returns a single element from a given list at the given index. If +// index is greater than the length of the list then it is wrapped modulo +// the list length. +func Element(list, index cty.Value) (cty.Value, error) { + return ElementFunc.Call([]cty.Value{list, index}) +} + +// CoalesceList takes any number of list arguments and returns the first one that isn't empty. +func CoalesceList(args ...cty.Value) (cty.Value, error) { + return CoalesceListFunc.Call(args) +} + +// Compact takes a list of strings and returns a new list +// with any empty string elements removed. +func Compact(list cty.Value) (cty.Value, error) { + return CompactFunc.Call([]cty.Value{list}) +} + +// Contains determines whether a given list contains a given single value +// as one of its elements. +func Contains(list, value cty.Value) (cty.Value, error) { + return ContainsFunc.Call([]cty.Value{list, value}) +} + +// Distinct takes a list and returns a new list with any duplicate elements removed. +func Distinct(list cty.Value) (cty.Value, error) { + return DistinctFunc.Call([]cty.Value{list}) +} + +// Chunklist splits a single list into fixed-size chunks, returning a list of lists. +func Chunklist(list, size cty.Value) (cty.Value, error) { + return ChunklistFunc.Call([]cty.Value{list, size}) +} + +// Flatten takes a list and replaces any elements that are lists with a flattened +// sequence of the list contents. +func Flatten(list cty.Value) (cty.Value, error) { + return FlattenFunc.Call([]cty.Value{list}) +} + +// Keys takes a map and returns a sorted list of the map keys. +func Keys(inputMap cty.Value) (cty.Value, error) { + return KeysFunc.Call([]cty.Value{inputMap}) +} + +// Lookup performs a dynamic lookup into a map. +// There are two required arguments, map and key, plus an optional default, +// which is a value to return if no key is found in map. +func Lookup(inputMap, key, defaultValue cty.Value) (cty.Value, error) { + return LookupFunc.Call([]cty.Value{inputMap, key, defaultValue}) +} + +// Merge takes an arbitrary number of maps and returns a single map that contains +// a merged set of elements from all of the maps. +// +// If more than one given map defines the same key then the one that is later in +// the argument sequence takes precedence. +func Merge(maps ...cty.Value) (cty.Value, error) { + return MergeFunc.Call(maps) +} + +// ReverseList takes a sequence and produces a new sequence of the same length +// with all of the same elements as the given sequence but in reverse order. +func ReverseList(list cty.Value) (cty.Value, error) { + return ReverseListFunc.Call([]cty.Value{list}) +} + +// SetProduct computes the Cartesian product of sets or sequences. +func SetProduct(sets ...cty.Value) (cty.Value, error) { + return SetProductFunc.Call(sets) +} + +// Slice extracts some consecutive elements from within a list. +func Slice(list, start, end cty.Value) (cty.Value, error) { + return SliceFunc.Call([]cty.Value{list, start, end}) +} + +// Values returns a list of the map values, in the order of the sorted keys. +// This function only works on flat maps. 
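A minimal sketch of the key-ordering guarantee described above, using the `Values` wrapper defined just below:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function/stdlib"
)

func main() {
	obj := cty.ObjectVal(map[string]cty.Value{
		"b": cty.NumberIntVal(2),
		"a": cty.StringVal("x"),
	})
	vals, err := stdlib.Values(obj)
	if err != nil {
		panic(err)
	}
	// Attribute values come back in lexicographical key order ("a" then
	// "b"): cty.TupleVal([]cty.Value{cty.StringVal("x"), cty.NumberIntVal(2)})
	fmt.Printf("%#v\n", vals)
}
```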
+func Values(values cty.Value) (cty.Value, error) { + return ValuesFunc.Call([]cty.Value{values}) +} + +// Zipmap constructs a map from a list of keys and a corresponding list of values. +func Zipmap(keys, values cty.Value) (cty.Value, error) { + return ZipmapFunc.Call([]cty.Value{keys, values}) +} diff --git a/vendor/github.com/zclconf/go-cty/cty/function/stdlib/conversion.go b/vendor/github.com/zclconf/go-cty/cty/function/stdlib/conversion.go new file mode 100644 index 000000000..66eb97e25 --- /dev/null +++ b/vendor/github.com/zclconf/go-cty/cty/function/stdlib/conversion.go @@ -0,0 +1,87 @@ +package stdlib + +import ( + "strconv" + + "github.com/zclconf/go-cty/cty" + "github.com/zclconf/go-cty/cty/convert" + "github.com/zclconf/go-cty/cty/function" +) + +// MakeToFunc constructs a "to..." function, like "tostring", which converts +// its argument to a specific type or type kind. +// +// The given type wantTy can be any type constraint that cty's "convert" package +// would accept. In particular, this means that you can pass +// cty.List(cty.DynamicPseudoType) to mean "list of any single type", which +// will then cause cty to attempt to unify all of the element types when given +// a tuple. +func MakeToFunc(wantTy cty.Type) function.Function { + return function.New(&function.Spec{ + Params: []function.Parameter{ + { + Name: "v", + // We use DynamicPseudoType rather than wantTy here so that + // all values will pass through the function API verbatim and + // we can handle the conversion logic within the Type and + // Impl functions. This allows us to customize the error + // messages to be more appropriate for an explicit type + // conversion, whereas the cty function system produces + // messages aimed at _implicit_ type conversions. + Type: cty.DynamicPseudoType, + AllowNull: true, + }, + }, + Type: func(args []cty.Value) (cty.Type, error) { + gotTy := args[0].Type() + if gotTy.Equals(wantTy) { + return wantTy, nil + } + conv := convert.GetConversionUnsafe(args[0].Type(), wantTy) + if conv == nil { + // We'll use some specialized errors for some trickier cases, + // but most we can handle in a simple way. + switch { + case gotTy.IsTupleType() && wantTy.IsTupleType(): + return cty.NilType, function.NewArgErrorf(0, "incompatible tuple type for conversion: %s", convert.MismatchMessage(gotTy, wantTy)) + case gotTy.IsObjectType() && wantTy.IsObjectType(): + return cty.NilType, function.NewArgErrorf(0, "incompatible object type for conversion: %s", convert.MismatchMessage(gotTy, wantTy)) + default: + return cty.NilType, function.NewArgErrorf(0, "cannot convert %s to %s", gotTy.FriendlyName(), wantTy.FriendlyNameForConstraint()) + } + } + // If a conversion is available then everything is fine. + return wantTy, nil + }, + Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) { + // We didn't set "AllowUnknown" on our argument, so it is guaranteed + // to be known here but may still be null. + ret, err := convert.Convert(args[0], retType) + if err != nil { + // Because we used GetConversionUnsafe above, conversion can + // still potentially fail in here. For example, if the user + // asks to convert the string "a" to bool then we'll + // optimistically permit it during type checking but fail here + // once we note that the value isn't either "true" or "false". 
+			gotTy := args[0].Type()
+			switch {
+			case gotTy == cty.String && wantTy == cty.Bool:
+				what := "string"
+				if !args[0].IsNull() {
+					what = strconv.Quote(args[0].AsString())
+				}
+				return cty.NilVal, function.NewArgErrorf(0, `cannot convert %s to bool; only the strings "true" or "false" are allowed`, what)
+			case gotTy == cty.String && wantTy == cty.Number:
+				what := "string"
+				if !args[0].IsNull() {
+					what = strconv.Quote(args[0].AsString())
+				}
+				return cty.NilVal, function.NewArgErrorf(0, `cannot convert %s to number; given string must be a decimal representation of a number`, what)
+			default:
+				return cty.NilVal, function.NewArgErrorf(0, "cannot convert %s to %s", gotTy.FriendlyName(), wantTy.FriendlyNameForConstraint())
+			}
+		}
+		return ret, nil
+	},
+	})
+}
diff --git a/vendor/github.com/zclconf/go-cty/cty/function/stdlib/datetime.go b/vendor/github.com/zclconf/go-cty/cty/function/stdlib/datetime.go
index aa15b7bde..3ce41ba9d 100644
--- a/vendor/github.com/zclconf/go-cty/cty/function/stdlib/datetime.go
+++ b/vendor/github.com/zclconf/go-cty/cty/function/stdlib/datetime.go
@@ -203,6 +203,33 @@ var FormatDateFunc = function.New(&function.Spec{
 	},
 })
 
+// TimeAddFunc is a function that adds a duration to a timestamp, returning a new timestamp.
+var TimeAddFunc = function.New(&function.Spec{
+	Params: []function.Parameter{
+		{
+			Name: "timestamp",
+			Type: cty.String,
+		},
+		{
+			Name: "duration",
+			Type: cty.String,
+		},
+	},
+	Type: function.StaticReturnType(cty.String),
+	Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
+		ts, err := parseTimestamp(args[0].AsString())
+		if err != nil {
+			return cty.UnknownVal(cty.String), err
+		}
+		duration, err := time.ParseDuration(args[1].AsString())
+		if err != nil {
+			return cty.UnknownVal(cty.String), err
+		}
+
+		return cty.StringVal(ts.Add(duration).Format(time.RFC3339)), nil
+	},
+})
+
 // FormatDate reformats a timestamp given in RFC3339 syntax into another time
 // syntax defined by a given format string.
 //
@@ -383,3 +410,20 @@ func splitDateFormat(data []byte, atEOF bool) (advance int, token []byte, err error) {
 func startsDateFormatVerb(b byte) bool {
 	return (b >= 'a' && b <= 'z') || (b >= 'A' && b <= 'Z')
 }
+
+// TimeAdd adds a duration to a timestamp, returning a new timestamp.
+//
+// In the HCL language, timestamps are conventionally represented as
+// strings using RFC 3339 "Date and Time format" syntax. TimeAdd requires
+// the timestamp argument to be a string conforming to this syntax.
+//
+// `duration` is a string representation of a time difference, consisting of
+// sequences of number and unit pairs, like `"1.5h"` or `"1h30m"`. The
+// accepted units are `"ns"`, `"us"` (or `"µs"`), `"ms"`, `"s"`, `"m"`, and
+// `"h"`. The first number may be negative to indicate a negative duration,
+// like `"-2h5m"`.
+//
+// The result is a string, also in RFC 3339 format, representing the result
+// of adding the given duration to the given timestamp.
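+//
+// As an illustrative sketch (hypothetical values, not from this package's
+// tests), TimeAdd(cty.StringVal("2020-01-02T00:00:00Z"), cty.StringVal("12h"))
+// would be expected to return cty.StringVal("2020-01-02T12:00:00Z").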
+func TimeAdd(timestamp cty.Value, duration cty.Value) (cty.Value, error) { + return TimeAddFunc.Call([]cty.Value{timestamp, duration}) +} diff --git a/vendor/github.com/zclconf/go-cty/cty/function/stdlib/format.go b/vendor/github.com/zclconf/go-cty/cty/function/stdlib/format.go index 664790b46..834e9b6fc 100644 --- a/vendor/github.com/zclconf/go-cty/cty/function/stdlib/format.go +++ b/vendor/github.com/zclconf/go-cty/cty/function/stdlib/format.go @@ -80,6 +80,7 @@ var FormatListFunc = function.New(&function.Spec{ lenChooser := -1 iterators := make([]cty.ElementIterator, len(args)) singleVals := make([]cty.Value, len(args)) + unknowns := make([]bool, len(args)) for i, arg := range args { argTy := arg.Type() switch { @@ -87,7 +88,8 @@ var FormatListFunc = function.New(&function.Spec{ if !argTy.IsTupleType() && !arg.IsKnown() { // We can't iterate this one at all yet then, so we can't // yet produce a result. - return cty.UnknownVal(retType), nil + unknowns[i] = true + continue } thisLen := arg.LengthInt() if iterLen == -1 { @@ -103,12 +105,26 @@ var FormatListFunc = function.New(&function.Spec{ ) } } + if !arg.IsKnown() { + // We allowed an unknown tuple value to fall through in + // our initial check above so that we'd be able to run + // the above error checks against it, but we still can't + // iterate it if the checks pass. + unknowns[i] = true + continue + } iterators[i] = arg.ElementIterator() default: singleVals[i] = arg } } + for _, isUnk := range unknowns { + if isUnk { + return cty.UnknownVal(retType), nil + } + } + if iterLen == 0 { // If our sequences are all empty then our result must be empty. return cty.ListValEmpty(cty.String), nil diff --git a/vendor/github.com/zclconf/go-cty/cty/function/stdlib/number.go b/vendor/github.com/zclconf/go-cty/cty/function/stdlib/number.go index bd9b2e51b..48438fe01 100644 --- a/vendor/github.com/zclconf/go-cty/cty/function/stdlib/number.go +++ b/vendor/github.com/zclconf/go-cty/cty/function/stdlib/number.go @@ -2,10 +2,12 @@ package stdlib import ( "fmt" + "math" "math/big" "github.com/zclconf/go-cty/cty" "github.com/zclconf/go-cty/cty/function" + "github.com/zclconf/go-cty/cty/gocty" ) var AbsoluteFunc = function.New(&function.Spec{ @@ -14,6 +16,7 @@ var AbsoluteFunc = function.New(&function.Spec{ Name: "num", Type: cty.Number, AllowDynamicType: true, + AllowMarked: true, }, }, Type: function.StaticReturnType(cty.Number), @@ -196,11 +199,13 @@ var GreaterThanFunc = function.New(&function.Spec{ Name: "a", Type: cty.Number, AllowDynamicType: true, + AllowMarked: true, }, { Name: "b", Type: cty.Number, AllowDynamicType: true, + AllowMarked: true, }, }, Type: function.StaticReturnType(cty.Bool), @@ -215,11 +220,13 @@ var GreaterThanOrEqualToFunc = function.New(&function.Spec{ Name: "a", Type: cty.Number, AllowDynamicType: true, + AllowMarked: true, }, { Name: "b", Type: cty.Number, AllowDynamicType: true, + AllowMarked: true, }, }, Type: function.StaticReturnType(cty.Bool), @@ -234,11 +241,13 @@ var LessThanFunc = function.New(&function.Spec{ Name: "a", Type: cty.Number, AllowDynamicType: true, + AllowMarked: true, }, { Name: "b", Type: cty.Number, AllowDynamicType: true, + AllowMarked: true, }, }, Type: function.StaticReturnType(cty.Bool), @@ -253,11 +262,13 @@ var LessThanOrEqualToFunc = function.New(&function.Spec{ Name: "a", Type: cty.Number, AllowDynamicType: true, + AllowMarked: true, }, { Name: "b", Type: cty.Number, AllowDynamicType: true, + AllowMarked: true, }, }, Type: function.StaticReturnType(cty.Bool), @@ -272,6 +283,7 @@ var 
NegateFunc = function.New(&function.Spec{
 			Name:             "num",
 			Type:             cty.Number,
 			AllowDynamicType: true,
+			AllowMarked:      true,
 		},
 	},
 	Type: function.StaticReturnType(cty.Number),
@@ -348,6 +360,182 @@ var IntFunc = function.New(&function.Spec{
 	},
 })
 
+// CeilFunc is a function that returns the closest whole number greater
+// than or equal to the given value.
+var CeilFunc = function.New(&function.Spec{
+	Params: []function.Parameter{
+		{
+			Name: "num",
+			Type: cty.Number,
+		},
+	},
+	Type: function.StaticReturnType(cty.Number),
+	Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
+		var val float64
+		if err := gocty.FromCtyValue(args[0], &val); err != nil {
+			return cty.UnknownVal(cty.String), err
+		}
+		return cty.NumberIntVal(int64(math.Ceil(val))), nil
+	},
+})
+
+// FloorFunc is a function that returns the closest whole number less
+// than or equal to the given value.
+var FloorFunc = function.New(&function.Spec{
+	Params: []function.Parameter{
+		{
+			Name: "num",
+			Type: cty.Number,
+		},
+	},
+	Type: function.StaticReturnType(cty.Number),
+	Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
+		var val float64
+		if err := gocty.FromCtyValue(args[0], &val); err != nil {
+			return cty.UnknownVal(cty.String), err
+		}
+		return cty.NumberIntVal(int64(math.Floor(val))), nil
+	},
+})
+
+// LogFunc is a function that returns the logarithm of a given number in a given base.
+var LogFunc = function.New(&function.Spec{
+	Params: []function.Parameter{
+		{
+			Name: "num",
+			Type: cty.Number,
+		},
+		{
+			Name: "base",
+			Type: cty.Number,
+		},
+	},
+	Type: function.StaticReturnType(cty.Number),
+	Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
+		var num float64
+		if err := gocty.FromCtyValue(args[0], &num); err != nil {
+			return cty.UnknownVal(cty.String), err
+		}
+
+		var base float64
+		if err := gocty.FromCtyValue(args[1], &base); err != nil {
+			return cty.UnknownVal(cty.String), err
+		}
+
+		return cty.NumberFloatVal(math.Log(num) / math.Log(base)), nil
+	},
+})
+
+// PowFunc is a function that returns the given number raised to the given power.
+var PowFunc = function.New(&function.Spec{
+	Params: []function.Parameter{
+		{
+			Name: "num",
+			Type: cty.Number,
+		},
+		{
+			Name: "power",
+			Type: cty.Number,
+		},
+	},
+	Type: function.StaticReturnType(cty.Number),
+	Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
+		var num float64
+		if err := gocty.FromCtyValue(args[0], &num); err != nil {
+			return cty.UnknownVal(cty.String), err
+		}
+
+		var power float64
+		if err := gocty.FromCtyValue(args[1], &power); err != nil {
+			return cty.UnknownVal(cty.String), err
+		}
+
+		return cty.NumberFloatVal(math.Pow(num, power)), nil
+	},
+})
+
+// SignumFunc is a function that determines the sign of a number, returning
+// -1, 0, or +1 to represent the sign.
+var SignumFunc = function.New(&function.Spec{
+	Params: []function.Parameter{
+		{
+			Name: "num",
+			Type: cty.Number,
+		},
+	},
+	Type: function.StaticReturnType(cty.Number),
+	Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
+		var num int
+		if err := gocty.FromCtyValue(args[0], &num); err != nil {
+			return cty.UnknownVal(cty.String), err
+		}
+		switch {
+		case num < 0:
+			return cty.NumberIntVal(-1), nil
+		case num > 0:
+			return cty.NumberIntVal(+1), nil
+		default:
+			return cty.NumberIntVal(0), nil
+		}
+	},
+})
+
+// ParseIntFunc is a function that parses a string argument and returns an integer of the specified base.
+var ParseIntFunc = function.New(&function.Spec{
+	Params: []function.Parameter{
+		{
+			Name: "number",
+			Type: cty.DynamicPseudoType,
+		},
+		{
+			Name: "base",
+			Type: cty.Number,
+		},
+	},
+
+	Type: func(args []cty.Value) (cty.Type, error) {
+		if !args[0].Type().Equals(cty.String) {
+			return cty.Number, function.NewArgErrorf(0, "first argument must be a string, not %s", args[0].Type().FriendlyName())
+		}
+		return cty.Number, nil
+	},
+
+	Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
+		var numstr string
+		var base int
+		var err error
+
+		if err = gocty.FromCtyValue(args[0], &numstr); err != nil {
+			return cty.UnknownVal(cty.String), function.NewArgError(0, err)
+		}
+
+		if err = gocty.FromCtyValue(args[1], &base); err != nil {
+			return cty.UnknownVal(cty.Number), function.NewArgError(1, err)
+		}
+
+		if base < 2 || base > 62 {
+			return cty.UnknownVal(cty.Number), function.NewArgErrorf(
+				1,
+				"base must be a whole number between 2 and 62 inclusive",
+			)
+		}
+
+		num, ok := (&big.Int{}).SetString(numstr, base)
+		if !ok {
+			return cty.UnknownVal(cty.Number), function.NewArgErrorf(
+				0,
+				"cannot parse %q as a base %d integer",
+				numstr,
+				base,
+			)
+		}
+
+		parsedNum := cty.NumberVal((&big.Float{}).SetInt(num))
+
+		return parsedNum, nil
+	},
+})
+
 // Absolute returns the magnitude of the given number, without its sign.
 // That is, it turns negative values into positive values.
 func Absolute(num cty.Value) (cty.Value, error) {
@@ -426,3 +614,34 @@ func Int(num cty.Value) (cty.Value, error) {
 	}
 	return IntFunc.Call([]cty.Value{num})
 }
+
+// Ceil returns the closest whole number greater than or equal to the given value.
+func Ceil(num cty.Value) (cty.Value, error) {
+	return CeilFunc.Call([]cty.Value{num})
+}
+
+// Floor returns the closest whole number less than or equal to the given value.
+func Floor(num cty.Value) (cty.Value, error) {
+	return FloorFunc.Call([]cty.Value{num})
+}
+
+// Log returns the logarithm of a given number in a given base.
+func Log(num, base cty.Value) (cty.Value, error) {
+	return LogFunc.Call([]cty.Value{num, base})
+}
+
+// Pow returns the given number raised to the given power.
+func Pow(num, power cty.Value) (cty.Value, error) {
+	return PowFunc.Call([]cty.Value{num, power})
+}
+
+// Signum determines the sign of a number, returning -1, 0, or +1 to
+// represent the sign.
+func Signum(num cty.Value) (cty.Value, error) {
+	return SignumFunc.Call([]cty.Value{num})
+}
+
+// ParseInt parses a string argument and returns an integer of the specified base.
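+//
+// The base must be between 2 and 62 inclusive. As an illustrative sketch
+// (hypothetical values, not from this package's tests),
+// ParseInt(cty.StringVal("ff"), cty.NumberIntVal(16)) would be expected to
+// return cty.NumberIntVal(255).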
+func ParseInt(num cty.Value, base cty.Value) (cty.Value, error) { + return ParseIntFunc.Call([]cty.Value{num, base}) +} diff --git a/vendor/github.com/zclconf/go-cty/cty/function/stdlib/string.go b/vendor/github.com/zclconf/go-cty/cty/function/stdlib/string.go index d7c89fa82..417cc3913 100644 --- a/vendor/github.com/zclconf/go-cty/cty/function/stdlib/string.go +++ b/vendor/github.com/zclconf/go-cty/cty/function/stdlib/string.go @@ -1,12 +1,15 @@ package stdlib import ( + "fmt" + "regexp" + "sort" "strings" + "github.com/apparentlymart/go-textseg/textseg" "github.com/zclconf/go-cty/cty" "github.com/zclconf/go-cty/cty/function" "github.com/zclconf/go-cty/cty/gocty" - "github.com/apparentlymart/go-textseg/textseg" ) var UpperFunc = function.New(&function.Spec{ @@ -187,6 +190,252 @@ var SubstrFunc = function.New(&function.Spec{ }, }) +var JoinFunc = function.New(&function.Spec{ + Params: []function.Parameter{ + { + Name: "separator", + Type: cty.String, + }, + }, + VarParam: &function.Parameter{ + Name: "lists", + Type: cty.List(cty.String), + }, + Type: function.StaticReturnType(cty.String), + Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) { + sep := args[0].AsString() + listVals := args[1:] + if len(listVals) < 1 { + return cty.UnknownVal(cty.String), fmt.Errorf("at least one list is required") + } + + l := 0 + for _, list := range listVals { + if !list.IsWhollyKnown() { + return cty.UnknownVal(cty.String), nil + } + l += list.LengthInt() + } + + items := make([]string, 0, l) + for ai, list := range listVals { + ei := 0 + for it := list.ElementIterator(); it.Next(); { + _, val := it.Element() + if val.IsNull() { + if len(listVals) > 1 { + return cty.UnknownVal(cty.String), function.NewArgErrorf(ai+1, "element %d of list %d is null; cannot concatenate null values", ei, ai+1) + } + return cty.UnknownVal(cty.String), function.NewArgErrorf(ai+1, "element %d is null; cannot concatenate null values", ei) + } + items = append(items, val.AsString()) + ei++ + } + } + + return cty.StringVal(strings.Join(items, sep)), nil + }, +}) + +var SortFunc = function.New(&function.Spec{ + Params: []function.Parameter{ + { + Name: "list", + Type: cty.List(cty.String), + }, + }, + Type: function.StaticReturnType(cty.List(cty.String)), + Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) { + listVal := args[0] + + if !listVal.IsWhollyKnown() { + // If some of the element values aren't known yet then we + // can't yet predict the order of the result. 
+ return cty.UnknownVal(retType), nil + } + if listVal.LengthInt() == 0 { // Easy path + return listVal, nil + } + + list := make([]string, 0, listVal.LengthInt()) + for it := listVal.ElementIterator(); it.Next(); { + iv, v := it.Element() + if v.IsNull() { + return cty.UnknownVal(retType), fmt.Errorf("given list element %s is null; a null string cannot be sorted", iv.AsBigFloat().String()) + } + list = append(list, v.AsString()) + } + + sort.Strings(list) + retVals := make([]cty.Value, len(list)) + for i, s := range list { + retVals[i] = cty.StringVal(s) + } + return cty.ListVal(retVals), nil + }, +}) + +var SplitFunc = function.New(&function.Spec{ + Params: []function.Parameter{ + { + Name: "separator", + Type: cty.String, + }, + { + Name: "str", + Type: cty.String, + }, + }, + Type: function.StaticReturnType(cty.List(cty.String)), + Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) { + sep := args[0].AsString() + str := args[1].AsString() + elems := strings.Split(str, sep) + elemVals := make([]cty.Value, len(elems)) + for i, s := range elems { + elemVals[i] = cty.StringVal(s) + } + if len(elemVals) == 0 { + return cty.ListValEmpty(cty.String), nil + } + return cty.ListVal(elemVals), nil + }, +}) + +// ChompFunc is a function that removes newline characters at the end of a +// string. +var ChompFunc = function.New(&function.Spec{ + Params: []function.Parameter{ + { + Name: "str", + Type: cty.String, + }, + }, + Type: function.StaticReturnType(cty.String), + Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { + newlines := regexp.MustCompile(`(?:\r\n?|\n)*\z`) + return cty.StringVal(newlines.ReplaceAllString(args[0].AsString(), "")), nil + }, +}) + +// IndentFunc is a function that adds a given number of spaces to the +// beginnings of all but the first line in a given multi-line string. +var IndentFunc = function.New(&function.Spec{ + Params: []function.Parameter{ + { + Name: "spaces", + Type: cty.Number, + }, + { + Name: "str", + Type: cty.String, + }, + }, + Type: function.StaticReturnType(cty.String), + Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { + var spaces int + if err := gocty.FromCtyValue(args[0], &spaces); err != nil { + return cty.UnknownVal(cty.String), err + } + data := args[1].AsString() + pad := strings.Repeat(" ", spaces) + return cty.StringVal(strings.Replace(data, "\n", "\n"+pad, -1)), nil + }, +}) + +// TitleFunc is a function that converts the first letter of each word in the +// given string to uppercase. +var TitleFunc = function.New(&function.Spec{ + Params: []function.Parameter{ + { + Name: "str", + Type: cty.String, + }, + }, + Type: function.StaticReturnType(cty.String), + Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { + return cty.StringVal(strings.Title(args[0].AsString())), nil + }, +}) + +// TrimSpaceFunc is a function that removes any space characters from the start +// and end of the given string. +var TrimSpaceFunc = function.New(&function.Spec{ + Params: []function.Parameter{ + { + Name: "str", + Type: cty.String, + }, + }, + Type: function.StaticReturnType(cty.String), + Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) { + return cty.StringVal(strings.TrimSpace(args[0].AsString())), nil + }, +}) + +// TrimFunc is a function that removes the specified characters from the start +// and end of the given string. 
+var TrimFunc = function.New(&function.Spec{
+	Params: []function.Parameter{
+		{
+			Name: "str",
+			Type: cty.String,
+		},
+		{
+			Name: "cutset",
+			Type: cty.String,
+		},
+	},
+	Type: function.StaticReturnType(cty.String),
+	Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
+		str := args[0].AsString()
+		cutset := args[1].AsString()
+		return cty.StringVal(strings.Trim(str, cutset)), nil
+	},
+})
+
+// TrimPrefixFunc is a function that removes the specified prefix from the
+// start of the given string.
+var TrimPrefixFunc = function.New(&function.Spec{
+	Params: []function.Parameter{
+		{
+			Name: "str",
+			Type: cty.String,
+		},
+		{
+			Name: "prefix",
+			Type: cty.String,
+		},
+	},
+	Type: function.StaticReturnType(cty.String),
+	Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
+		str := args[0].AsString()
+		prefix := args[1].AsString()
+		return cty.StringVal(strings.TrimPrefix(str, prefix)), nil
+	},
+})
+
+// TrimSuffixFunc is a function that removes the specified suffix from the
+// end of the given string.
+var TrimSuffixFunc = function.New(&function.Spec{
+	Params: []function.Parameter{
+		{
+			Name: "str",
+			Type: cty.String,
+		},
+		{
+			Name: "suffix",
+			Type: cty.String,
+		},
+	},
+	Type: function.StaticReturnType(cty.String),
+	Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
+		str := args[0].AsString()
+		suffix := args[1].AsString()
+		return cty.StringVal(strings.TrimSuffix(str, suffix)), nil
+	},
+})
+
 // Upper is a Function that converts a given string to uppercase.
 func Upper(str cty.Value) (cty.Value, error) {
 	return UpperFunc.Call([]cty.Value{str})
@@ -232,3 +481,60 @@ func Strlen(str cty.Value) (cty.Value, error) {
 func Substr(str cty.Value, offset cty.Value, length cty.Value) (cty.Value, error) {
 	return SubstrFunc.Call([]cty.Value{str, offset, length})
 }
+
+// Join concatenates together the string elements of one or more lists with a
+// given separator.
+func Join(sep cty.Value, lists ...cty.Value) (cty.Value, error) {
+	args := make([]cty.Value, len(lists)+1)
+	args[0] = sep
+	copy(args[1:], lists)
+	return JoinFunc.Call(args)
+}
+
+// Sort re-orders the elements of a given list of strings so that they are
+// in ascending lexicographical order.
+func Sort(list cty.Value) (cty.Value, error) {
+	return SortFunc.Call([]cty.Value{list})
+}
+
+// Split divides a given string by a given separator, returning a list of
+// strings containing the characters between the separator sequences.
+func Split(sep, str cty.Value) (cty.Value, error) {
+	return SplitFunc.Call([]cty.Value{sep, str})
+}
+
+// Chomp removes newline characters at the end of a string.
+func Chomp(str cty.Value) (cty.Value, error) {
+	return ChompFunc.Call([]cty.Value{str})
+}
+
+// Indent adds a given number of spaces to the beginnings of all but the first
+// line in a given multi-line string.
+func Indent(spaces, str cty.Value) (cty.Value, error) {
+	return IndentFunc.Call([]cty.Value{spaces, str})
+}
+
+// Title converts the first letter of each word in the given string to uppercase.
+func Title(str cty.Value) (cty.Value, error) {
+	return TitleFunc.Call([]cty.Value{str})
+}
+
+// TrimSpace removes any space characters from the start and end of the given string.
+func TrimSpace(str cty.Value) (cty.Value, error) {
+	return TrimSpaceFunc.Call([]cty.Value{str})
+}
+
+// Trim removes the specified characters from the start and end of the given string.
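+//
+// The cutset is interpreted as it is by Go's strings.Trim: it is a set of
+// characters, any of which will be trimmed from both ends, rather than an
+// exact prefix or suffix string. As an illustrative sketch (hypothetical
+// values), Trim(cty.StringVal("?!hello!?"), cty.StringVal("!?")) would be
+// expected to return cty.StringVal("hello").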
+func Trim(str, cutset cty.Value) (cty.Value, error) {
+	return TrimFunc.Call([]cty.Value{str, cutset})
+}
+
+// TrimPrefix removes the specified prefix from the start of the given string.
+func TrimPrefix(str, prefix cty.Value) (cty.Value, error) {
+	return TrimPrefixFunc.Call([]cty.Value{str, prefix})
+}
+
+// TrimSuffix removes the specified suffix from the end of the given string.
+func TrimSuffix(str, suffix cty.Value) (cty.Value, error) {
+	return TrimSuffixFunc.Call([]cty.Value{str, suffix})
+}
diff --git a/vendor/github.com/zclconf/go-cty/cty/gob.go b/vendor/github.com/zclconf/go-cty/cty/gob.go
index a77dace27..a0961b8a0 100644
--- a/vendor/github.com/zclconf/go-cty/cty/gob.go
+++ b/vendor/github.com/zclconf/go-cty/cty/gob.go
@@ -3,8 +3,11 @@ package cty
 import (
 	"bytes"
 	"encoding/gob"
+	"errors"
 	"fmt"
 	"math/big"
+
+	"github.com/zclconf/go-cty/cty/set"
 )
 
 // GobEncode is an implementation of the gob.GobEncoder interface, which
@@ -13,6 +16,10 @@ import (
 // Currently it is not possible to represent values of capsule types in gob,
 // because the types themselves cannot be represented.
 func (val Value) GobEncode() ([]byte, error) {
+	if val.IsMarked() {
+		return nil, errors.New("value is marked")
+	}
+
 	buf := &bytes.Buffer{}
 	enc := gob.NewEncoder(buf)
 
@@ -46,11 +53,12 @@ func (val *Value) GobDecode(buf []byte) error {
 		return fmt.Errorf("unsupported cty.Value encoding version %d; only 0 is supported", gv.Version)
 	}
 
-	// big.Float seems to, for some reason, lose its "pointerness" when we
-	// round-trip it, so we'll fix that here.
-	if bf, ok := gv.V.(big.Float); ok {
-		gv.V = &bf
-	}
+	// Because big.Float.GobEncode is implemented with a pointer receiver,
+	// gob encoding of an interface{} containing a *big.Float value does not
+	// round-trip correctly, emerging instead as a non-pointer big.Float.
+	// The rest of cty expects all number values to be represented by
+	// *big.Float, so we'll fix that up here.
+	gv.V = gobDecodeFixNumberPtr(gv.V, gv.Ty)
 
 	val.ty = gv.Ty
 	val.v = gv.V
@@ -123,3 +131,74 @@ type gobType struct {
 
 type gobCapsuleTypeImpl struct {
 }
+
+// gobDecodeFixNumberPtr fixes an unfortunate quirk of round-tripping cty.Number
+// values through gob: the big.Float.GobEncode method is implemented on a
+// pointer receiver, and so it loses the "pointer-ness" of the value on
+// encode, causing the values to emerge the other end as big.Float rather than
+// *big.Float as we expect elsewhere in cty.
+//
+// The implementation of gobDecodeFixNumberPtr mutates the given raw value
+// during its work, and may either return the same value mutated or a new
+// value. Callers must no longer use whatever value they pass as "raw" after
+// this function is called.
+func gobDecodeFixNumberPtr(raw interface{}, ty Type) interface{} {
+	// Unfortunately we need to work recursively here because number values
+	// might be embedded in structural or collection type values.
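+	// For example (an illustrative case), an object value with a
+	// cty.List(cty.Number) attribute arrives from gob as a
+	// map[string]interface{} containing a []interface{} whose elements are
+	// non-pointer big.Float values, each of which must be re-wrapped as a
+	// *big.Float by the cases below.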
+
+	switch {
+	case ty.Equals(Number):
+		if bf, ok := raw.(big.Float); ok {
+			return &bf // wrap in pointer
+		}
+	case ty.IsMapType() && ty.ElementType().Equals(Number):
+		if m, ok := raw.(map[string]interface{}); ok {
+			for k, v := range m {
+				m[k] = gobDecodeFixNumberPtr(v, ty.ElementType())
+			}
+		}
+	case ty.IsListType() && ty.ElementType().Equals(Number):
+		if s, ok := raw.([]interface{}); ok {
+			for i, v := range s {
+				s[i] = gobDecodeFixNumberPtr(v, ty.ElementType())
+			}
+		}
+	case ty.IsSetType() && ty.ElementType().Equals(Number):
+		if s, ok := raw.(set.Set); ok {
+			newS := set.NewSet(s.Rules())
+			for it := s.Iterator(); it.Next(); {
+				newV := gobDecodeFixNumberPtr(it.Value(), ty.ElementType())
+				newS.Add(newV)
+			}
+			return newS
+		}
+	case ty.IsObjectType():
+		if m, ok := raw.(map[string]interface{}); ok {
+			for k, v := range m {
+				aty := ty.AttributeType(k)
+				m[k] = gobDecodeFixNumberPtr(v, aty)
+			}
+		}
+	case ty.IsTupleType():
+		if s, ok := raw.([]interface{}); ok {
+			for i, v := range s {
+				ety := ty.TupleElementType(i)
+				s[i] = gobDecodeFixNumberPtr(v, ety)
+			}
+		}
+	}
+
+	return raw
+}
+
+// gobDecodeFixNumberPtrVal is a helper wrapper around gobDecodeFixNumberPtr
+// that works with already-constructed values. This is primarily for testing,
+// to fix up intentionally-invalid number values for the parts of the test
+// code that need them to be valid, such as calling GoString on them.
+func gobDecodeFixNumberPtrVal(v Value) Value {
+	raw := gobDecodeFixNumberPtr(v.v, v.ty)
+	return Value{
+		v:  raw,
+		ty: v.ty,
+	}
+}
diff --git a/vendor/github.com/zclconf/go-cty/cty/json/marshal.go b/vendor/github.com/zclconf/go-cty/cty/json/marshal.go
index f7bea1a2f..75e02577b 100644
--- a/vendor/github.com/zclconf/go-cty/cty/json/marshal.go
+++ b/vendor/github.com/zclconf/go-cty/cty/json/marshal.go
@@ -9,6 +9,10 @@ import (
 )
 
 func marshal(val cty.Value, t cty.Type, path cty.Path, b *bytes.Buffer) error {
+	if val.IsMarked() {
+		return path.NewErrorf("value has marks, so it cannot be serialized")
+	}
+
 	// If we're going to decode as DynamicPseudoType then we need to save
 	// dynamic type information to recover the real type.
 	if t == cty.DynamicPseudoType && val.Type() != cty.DynamicPseudoType {
diff --git a/vendor/github.com/zclconf/go-cty/cty/marks.go b/vendor/github.com/zclconf/go-cty/cty/marks.go
new file mode 100644
index 000000000..3898e4553
--- /dev/null
+++ b/vendor/github.com/zclconf/go-cty/cty/marks.go
@@ -0,0 +1,296 @@
+package cty
+
+import (
+	"fmt"
+	"strings"
+)
+
+// marker is an internal wrapper type used to add special "marks" to values.
+//
+// A "mark" is an annotation that can be used to represent additional
+// characteristics of values that propagate through operation methods to
+// result values. However, a marked value cannot be used with integration
+// methods normally associated with its type, in order to ensure that
+// calling applications don't inadvertently drop marks as they round-trip
+// values out of cty and back in again.
+//
+// Marked values are created only explicitly by the calling application, so
+// an application that never marks a value does not need to worry about
+// encountering marked values.
+type marker struct {
+	realV interface{}
+	marks ValueMarks
+}
+
+// ValueMarks is a map, representing a set, of "mark" values associated with
+// a Value. See Value.Mark for more information on the usage of mark values.
+type ValueMarks map[interface{}]struct{}
+
+// NewValueMarks constructs a new ValueMarks set with the given mark values.
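+//
+// For example (an illustrative sketch; the mark value "sensitive" is
+// hypothetical), NewValueMarks("sensitive") returns a ValueMarks containing
+// that single mark, equivalent to ValueMarks{"sensitive": struct{}{}}.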
+func NewValueMarks(marks ...interface{}) ValueMarks {
+	if len(marks) == 0 {
+		return nil
+	}
+	ret := make(ValueMarks, len(marks))
+	for _, v := range marks {
+		ret[v] = struct{}{}
+	}
+	return ret
+}
+
+// Equal returns true if the receiver and the given ValueMarks both contain
+// the same marks.
+func (m ValueMarks) Equal(o ValueMarks) bool {
+	if len(m) != len(o) {
+		return false
+	}
+	for v := range m {
+		if _, ok := o[v]; !ok {
+			return false
+		}
+	}
+	return true
+}
+
+func (m ValueMarks) GoString() string {
+	var s strings.Builder
+	s.WriteString("cty.NewValueMarks(")
+	i := 0
+	for mv := range m {
+		if i != 0 {
+			s.WriteString(", ")
+		}
+		s.WriteString(fmt.Sprintf("%#v", mv))
+		i++
+	}
+	s.WriteString(")")
+	return s.String()
+}
+
+// IsMarked returns true if and only if the receiving value carries at least
+// one mark. A marked value cannot be used directly with integration methods
+// without explicitly unmarking it (and retrieving the markings) first.
+func (val Value) IsMarked() bool {
+	_, ok := val.v.(marker)
+	return ok
+}
+
+// HasMark returns true if and only if the receiving value has the given mark.
+func (val Value) HasMark(mark interface{}) bool {
+	if mr, ok := val.v.(marker); ok {
+		_, ok := mr.marks[mark]
+		return ok
+	}
+	return false
+}
+
+// ContainsMarked returns true if the receiving value or any value within it
+// is marked.
+//
+// This operation is relatively expensive. If you only need a shallow result,
+// use IsMarked instead.
+func (val Value) ContainsMarked() bool {
+	ret := false
+	Walk(val, func(_ Path, v Value) (bool, error) {
+		if v.IsMarked() {
+			ret = true
+			return false, nil
+		}
+		return true, nil
+	})
+	return ret
+}
+
+func (val Value) assertUnmarked() {
+	if val.IsMarked() {
+		panic("value is marked, so must be unmarked first")
+	}
+}
+
+// Marks returns a map (representing a set) of all of the mark values
+// associated with the receiving value, without changing the marks. Returns nil
+// if the value is not marked at all.
+func (val Value) Marks() ValueMarks {
+	if mr, ok := val.v.(marker); ok {
+		// copy so that the caller can't mutate our internals
+		ret := make(ValueMarks, len(mr.marks))
+		for k, v := range mr.marks {
+			ret[k] = v
+		}
+		return ret
+	}
+	return nil
+}
+
+// HasSameMarks returns true if and only if the receiver and the given other
+// value have identical marks.
+func (val Value) HasSameMarks(other Value) bool {
+	vm, vmOK := val.v.(marker)
+	om, omOK := other.v.(marker)
+	if vmOK != omOK {
+		return false
+	}
+	if vmOK {
+		return vm.marks.Equal(om.marks)
+	}
+	return true
+}
+
+// Mark returns a new value that has the same type and underlying value as
+// the receiver but that also carries the given value as a "mark".
+//
+// Marks are used to carry additional application-specific characteristics
+// associated with values. A marked value can be used with operation methods,
+// in which case the marks are propagated to the operation results. A marked
+// value _cannot_ be used with integration methods, so callers of those
+// must derive an unmarked value using Unmark (and thus explicitly handle
+// the markings) before calling the integration methods.
+//
+// The mark value can be any value that would be valid to use as a map key.
+// The mark value should be of a named type in order to use the type itself
+// as a namespace for markings. That type can be unexported if desired, in
+// order to ensure that the mark can only be handled through the defining
+// package's own functions.
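+//
+// For example (an illustrative sketch; the type name is hypothetical), a
+// calling application might define:
+//
+//     type sensitiveMark struct{}
+//
+// and then call val.Mark(sensitiveMark{}), so that only code with access to
+// that type can test for or remove the mark.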
+//
+// An application that never calls this method does not need to worry about
+// handling marked values.
+func (val Value) Mark(mark interface{}) Value {
+	var newMarker marker
+	newMarker.realV = val.v
+	if mr, ok := val.v.(marker); ok {
+		// It's already a marker, so we'll retain existing marks.
+		newMarker.marks = make(ValueMarks, len(mr.marks)+1)
+		for k, v := range mr.marks {
+			newMarker.marks[k] = v
+		}
+	} else {
+		// It's not a marker yet, so we're creating the first mark.
+		newMarker.marks = make(ValueMarks, 1)
+	}
+	newMarker.marks[mark] = struct{}{}
+	return Value{
+		ty: val.ty,
+		v:  newMarker,
+	}
+}
+
+// Unmark separates the marks of the receiving value from the value itself,
+// returning a new unmarked value and a map (representing a set) of the marks.
+//
+// If the receiver isn't marked, Unmark returns it verbatim along with a nil
+// map of marks.
+func (val Value) Unmark() (Value, ValueMarks) {
+	if !val.IsMarked() {
+		return val, nil
+	}
+	mr := val.v.(marker)
+	marks := val.Marks() // copy so that the caller can't mutate our internals
+	return Value{
+		ty: val.ty,
+		v:  mr.realV,
+	}, marks
+}
+
+// UnmarkDeep is similar to Unmark, but it works with an entire nested structure
+// rather than just the given value directly.
+//
+// The result is guaranteed to contain no nested values that are marked, and
+// the returned marks set includes the superset of all of the marks encountered
+// during the operation.
+func (val Value) UnmarkDeep() (Value, ValueMarks) {
+	marks := make(ValueMarks)
+	ret, _ := Transform(val, func(_ Path, v Value) (Value, error) {
+		unmarkedV, valueMarks := v.Unmark()
+		for m, s := range valueMarks {
+			marks[m] = s
+		}
+		return unmarkedV, nil
+	})
+	return ret, marks
+}
+
+func (val Value) unmarkForce() Value {
+	unw, _ := val.Unmark()
+	return unw
+}
+
+// WithMarks returns a new value that has the same type and underlying value
+// as the receiver and also has the marks from the given maps (representing
+// sets).
+func (val Value) WithMarks(marks ...ValueMarks) Value {
+	if len(marks) == 0 {
+		return val
+	}
+	ownMarks := val.Marks()
+	markCount := len(ownMarks)
+	for _, s := range marks {
+		markCount += len(s)
+	}
+	if markCount == 0 {
+		return val
+	}
+	newMarks := make(ValueMarks, markCount)
+	for m := range ownMarks {
+		newMarks[m] = struct{}{}
+	}
+	for _, s := range marks {
+		for m := range s {
+			newMarks[m] = struct{}{}
+		}
+	}
+	v := val.v
+	if mr, ok := v.(marker); ok {
+		v = mr.realV
+	}
+	return Value{
+		ty: val.ty,
+		v: marker{
+			realV: v,
+			marks: newMarks,
+		},
+	}
+}
+
+// WithSameMarks returns a new value that has the same type and underlying
+// value as the receiver and also has the marks from the given source values.
+//
+// Use this if you are implementing your own higher-level operations against
+// cty using the integration methods, to re-introduce the marks from the
+// source values of the operation.
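+//
+// For example (an illustrative sketch, not part of this package), a custom
+// concatenation operation might be written as:
+//
+//     a, _ := inputA.Unmark()
+//     b, _ := inputB.Unmark()
+//     result := cty.StringVal(a.AsString() + b.AsString()).WithSameMarks(inputA, inputB)
+//
+// so that the result carries the union of the marks from both inputs.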
+func (val Value) WithSameMarks(srcs ...Value) Value {
+	if len(srcs) == 0 {
+		return val
+	}
+	ownMarks := val.Marks()
+	markCount := len(ownMarks)
+	for _, sv := range srcs {
+		if mr, ok := sv.v.(marker); ok {
+			markCount += len(mr.marks)
+		}
+	}
+	if markCount == 0 {
+		return val
+	}
+	newMarks := make(ValueMarks, markCount)
+	for m := range ownMarks {
+		newMarks[m] = struct{}{}
+	}
+	for _, sv := range srcs {
+		if mr, ok := sv.v.(marker); ok {
+			for m := range mr.marks {
+				newMarks[m] = struct{}{}
+			}
+		}
+	}
+	v := val.v
+	if mr, ok := v.(marker); ok {
+		v = mr.realV
+	}
+	return Value{
+		ty: val.ty,
+		v: marker{
+			realV: v,
+			marks: newMarks,
+		},
+	}
+}
diff --git a/vendor/github.com/zclconf/go-cty/cty/msgpack/marshal.go b/vendor/github.com/zclconf/go-cty/cty/msgpack/marshal.go
index 87b096ca4..51c75aa8d 100644
--- a/vendor/github.com/zclconf/go-cty/cty/msgpack/marshal.go
+++ b/vendor/github.com/zclconf/go-cty/cty/msgpack/marshal.go
@@ -41,6 +41,10 @@ func Marshal(val cty.Value, ty cty.Type) ([]byte, error) {
 }
 
 func marshal(val cty.Value, ty cty.Type, path cty.Path, enc *msgpack.Encoder) error {
+	if val.IsMarked() {
+		return path.NewErrorf("value has marks, so it cannot be serialized")
+	}
+
 	// If we're going to decode as DynamicPseudoType then we need to save
 	// dynamic type information to recover the real type.
 	if ty == cty.DynamicPseudoType && val.Type() != cty.DynamicPseudoType {
diff --git a/vendor/github.com/zclconf/go-cty/cty/set_helper.go b/vendor/github.com/zclconf/go-cty/cty/set_helper.go
index a88ddaffb..962bb5295 100644
--- a/vendor/github.com/zclconf/go-cty/cty/set_helper.go
+++ b/vendor/github.com/zclconf/go-cty/cty/set_helper.go
@@ -119,7 +119,13 @@ func (s ValueSet) SymmetricDifference(other ValueSet) ValueSet {
 }
 
 // requireElementType panics if the given value is not of the set's element type.
+//
+// It also panics if the given value is marked, because marked values cannot
+// be stored in sets.
 func (s ValueSet) requireElementType(v Value) {
+	if v.IsMarked() {
+		panic("cannot store marked value directly in a set (make the set itself unknown instead)")
+	}
 	if !v.Type().Equals(s.ElementType()) {
 		panic(fmt.Errorf("attempt to use %#v value with set of %#v", v.Type(), s.ElementType()))
 	}
diff --git a/vendor/github.com/zclconf/go-cty/cty/set_internals.go b/vendor/github.com/zclconf/go-cty/cty/set_internals.go
index 3fd4fb2df..e7e1d3337 100644
--- a/vendor/github.com/zclconf/go-cty/cty/set_internals.go
+++ b/vendor/github.com/zclconf/go-cty/cty/set_internals.go
@@ -32,7 +32,10 @@ var _ set.OrderedRules = setRules{}
 // This function is not safe to use for security-related applications, since
 // the hash used is not strong enough.
 func (val Value) Hash() int {
-	hashBytes := makeSetHashBytes(val)
+	hashBytes, marks := makeSetHashBytes(val)
+	if len(marks) > 0 {
+		panic("can't take hash of value that has marks or has embedded values that have marks")
+	}
 	return int(crc32.ChecksumIEEE(hashBytes))
 }
 
@@ -110,19 +113,20 @@ func (r setRules) Less(v1, v2 interface{}) bool {
 		// default consistent-but-undefined ordering then. This situation is
 		// not considered a compatibility constraint; callers should rely only
 		// on the ordering rules for primitive values.
- v1h := makeSetHashBytes(v1v) - v2h := makeSetHashBytes(v2v) + v1h, _ := makeSetHashBytes(v1v) + v2h, _ := makeSetHashBytes(v2v) return bytes.Compare(v1h, v2h) < 0 } } -func makeSetHashBytes(val Value) []byte { +func makeSetHashBytes(val Value) ([]byte, ValueMarks) { var buf bytes.Buffer - appendSetHashBytes(val, &buf) - return buf.Bytes() + marks := make(ValueMarks) + appendSetHashBytes(val, &buf, marks) + return buf.Bytes(), marks } -func appendSetHashBytes(val Value, buf *bytes.Buffer) { +func appendSetHashBytes(val Value, buf *bytes.Buffer, marks ValueMarks) { // Exactly what bytes we generate here don't matter as long as the following // constraints hold: // - Unknown and null values all generate distinct strings from @@ -136,6 +140,19 @@ func appendSetHashBytes(val Value, buf *bytes.Buffer) { // the Equivalent function will still distinguish values, but set // performance will be best if we are able to produce a distinct string // for each distinct value, unknown values notwithstanding. + + // Marks aren't considered part of a value for equality-testing purposes, + // so we'll unmark our value before we work with it but we'll remember + // the marks in case the caller needs to re-apply them to a derived + // value. + if val.IsMarked() { + unmarkedVal, valMarks := val.Unmark() + for m := range valMarks { + marks[m] = struct{}{} + } + val = unmarkedVal + } + if !val.IsKnown() { buf.WriteRune('?') return @@ -147,6 +164,17 @@ func appendSetHashBytes(val Value, buf *bytes.Buffer) { switch val.ty { case Number: + // Due to an unfortunate quirk of gob encoding for big.Float, we end up + // with non-pointer values immediately after a gob round-trip, and + // we end up in here before we've had a chance to run + // gobDecodeFixNumberPtr on the inner values of a gob-encoded set, + // and so sadly we must make a special effort to handle that situation + // here just so that we can get far enough along to fix it up for + // everything else in this package. 
+ if bf, ok := val.v.(big.Float); ok { + buf.WriteString(bf.String()) + return + } buf.WriteString(val.v.(*big.Float).String()) return case Bool: @@ -164,9 +192,9 @@ func appendSetHashBytes(val Value, buf *bytes.Buffer) { if val.ty.IsMapType() { buf.WriteRune('{') val.ForEachElement(func(keyVal, elementVal Value) bool { - appendSetHashBytes(keyVal, buf) + appendSetHashBytes(keyVal, buf, marks) buf.WriteRune(':') - appendSetHashBytes(elementVal, buf) + appendSetHashBytes(elementVal, buf, marks) buf.WriteRune(';') return false }) @@ -177,7 +205,7 @@ func appendSetHashBytes(val Value, buf *bytes.Buffer) { if val.ty.IsListType() || val.ty.IsSetType() { buf.WriteRune('[') val.ForEachElement(func(keyVal, elementVal Value) bool { - appendSetHashBytes(elementVal, buf) + appendSetHashBytes(elementVal, buf, marks) buf.WriteRune(';') return false }) @@ -193,7 +221,7 @@ func appendSetHashBytes(val Value, buf *bytes.Buffer) { } sort.Strings(attrNames) for _, attrName := range attrNames { - appendSetHashBytes(val.GetAttr(attrName), buf) + appendSetHashBytes(val.GetAttr(attrName), buf, marks) buf.WriteRune(';') } buf.WriteRune('>') @@ -203,7 +231,7 @@ func appendSetHashBytes(val Value, buf *bytes.Buffer) { if val.ty.IsTupleType() { buf.WriteRune('<') val.ForEachElement(func(keyVal, elementVal Value) bool { - appendSetHashBytes(elementVal, buf) + appendSetHashBytes(elementVal, buf, marks) buf.WriteRune(';') return false }) diff --git a/vendor/github.com/zclconf/go-cty/cty/value.go b/vendor/github.com/zclconf/go-cty/cty/value.go index 80cb8f76f..1025ba82e 100644 --- a/vendor/github.com/zclconf/go-cty/cty/value.go +++ b/vendor/github.com/zclconf/go-cty/cty/value.go @@ -45,6 +45,9 @@ func (val Value) Type() Type { // operating on other unknown values, and so an application that never // introduces Unknown values can be guaranteed to never receive any either. func (val Value) IsKnown() bool { + if val.IsMarked() { + return val.unmarkForce().IsKnown() + } return val.v != unknown } @@ -53,6 +56,9 @@ func (val Value) IsKnown() bool { // produces null, so an application that never introduces Null values can // be guaranteed to never receive any either. func (val Value) IsNull() bool { + if val.IsMarked() { + return val.unmarkForce().IsNull() + } return val.v == nil } @@ -74,6 +80,10 @@ var NilVal = Value{ // inside collections and structures to see if there are any nested unknown // values. func (val Value) IsWhollyKnown() bool { + if val.IsMarked() { + return val.unmarkForce().IsWhollyKnown() + } + if !val.IsKnown() { return false } diff --git a/vendor/github.com/zclconf/go-cty/cty/value_init.go b/vendor/github.com/zclconf/go-cty/cty/value_init.go index 3deeba3bd..2dafe17ae 100644 --- a/vendor/github.com/zclconf/go-cty/cty/value_init.go +++ b/vendor/github.com/zclconf/go-cty/cty/value_init.go @@ -240,8 +240,18 @@ func SetVal(vals []Value) Value { } elementType := DynamicPseudoType rawList := make([]interface{}, len(vals)) + var markSets []ValueMarks for i, val := range vals { + if unmarkedVal, marks := val.UnmarkDeep(); len(marks) > 0 { + val = unmarkedVal + markSets = append(markSets, marks) + } + if val.ContainsMarked() { + // FIXME: Allow this, but unmark the values and apply the + // marking to the set itself instead. 
+ panic("set cannot contain marked values") + } if elementType == DynamicPseudoType { elementType = val.ty } else if val.ty != DynamicPseudoType && !elementType.Equals(val.ty) { @@ -259,7 +269,7 @@ func SetVal(vals []Value) Value { return Value{ ty: Set(elementType), v: rawVal, - } + }.WithMarks(markSets...) } // SetValFromValueSet returns a Value of set type based on an already-constructed diff --git a/vendor/github.com/zclconf/go-cty/cty/value_ops.go b/vendor/github.com/zclconf/go-cty/cty/value_ops.go index afd621cf4..35a644be4 100644 --- a/vendor/github.com/zclconf/go-cty/cty/value_ops.go +++ b/vendor/github.com/zclconf/go-cty/cty/value_ops.go @@ -11,6 +11,18 @@ import ( // GoString is an implementation of fmt.GoStringer that produces concise // source-like representations of values suitable for use in debug messages. func (val Value) GoString() string { + if val.IsMarked() { + unVal, marks := val.Unmark() + if len(marks) == 1 { + var mark interface{} + for m := range marks { + mark = m + } + return fmt.Sprintf("%#v.Mark(%#v)", unVal, mark) + } + return fmt.Sprintf("%#v.WithMarks(%#v)", unVal, marks) + } + if val == NilVal { return "cty.NilVal" } @@ -82,7 +94,11 @@ func (val Value) GoString() string { vals := val.AsValueMap() return fmt.Sprintf("cty.ObjectVal(%#v)", vals) case val.ty.IsCapsuleType(): - return fmt.Sprintf("cty.CapsuleVal(%#v, %#v)", val.ty, val.v) + impl := val.ty.CapsuleOps().GoString + if impl == nil { + return fmt.Sprintf("cty.CapsuleVal(%#v, %#v)", val.ty, val.v) + } + return impl(val.EncapsulatedValue()) } // Default exposes implementation details, so should actually cover @@ -101,6 +117,12 @@ func (val Value) GoString() string { // Use RawEquals to compare if two values are equal *ignoring* the // short-circuit rules and the exception for null values. func (val Value) Equals(other Value) Value { + if val.IsMarked() || other.IsMarked() { + val, valMarks := val.Unmark() + other, otherMarks := other.Unmark() + return val.Equals(other).WithMarks(valMarks, otherMarks) + } + // Start by handling Unknown values before considering types. // This needs to be done since Null values are always equal regardless of // type. @@ -288,10 +310,22 @@ func (val Value) Equals(other Value) Value { } } case ty.IsCapsuleType(): - // A capsule type's encapsulated value is a pointer to a value of its - // native type, so we can just compare these to get the identity test - // we need. - return BoolVal(val.v == other.v) + impl := val.ty.CapsuleOps().Equals + if impl == nil { + impl := val.ty.CapsuleOps().RawEquals + if impl == nil { + // A capsule type's encapsulated value is a pointer to a value of its + // native type, so we can just compare these to get the identity test + // we need. + return BoolVal(val.v == other.v) + } + return BoolVal(impl(val.v, other.v)) + } + ret := impl(val.v, other.v) + if !ret.Type().Equals(Bool) { + panic(fmt.Sprintf("Equals for %#v returned %#v, not cty.Bool", ty, ret.Type())) + } + return ret default: // should never happen @@ -314,6 +348,7 @@ func (val Value) NotEqual(other Value) Value { // or null values. For more robust handling with unknown value // short-circuiting, use val.Equals(cty.True). func (val Value) True() bool { + val.assertUnmarked() if val.ty != Bool { panic("not bool") } @@ -338,6 +373,13 @@ func (val Value) RawEquals(other Value) bool { if !val.ty.Equals(other.ty) { return false } + if !val.HasSameMarks(other) { + return false + } + // Since we've now checked the marks, we'll unmark for the rest of this... 
+ val = val.unmarkForce() + other = other.unmarkForce() + if (!val.IsKnown()) && (!other.IsKnown()) { return true } @@ -448,10 +490,14 @@ func (val Value) RawEquals(other Value) bool { } return false case ty.IsCapsuleType(): - // A capsule type's encapsulated value is a pointer to a value of its - // native type, so we can just compare these to get the identity test - // we need. - return val.v == other.v + impl := val.ty.CapsuleOps().RawEquals + if impl == nil { + // A capsule type's encapsulated value is a pointer to a value of its + // native type, so we can just compare these to get the identity test + // we need. + return val.v == other.v + } + return impl(val.v, other.v) default: // should never happen @@ -462,6 +508,12 @@ func (val Value) RawEquals(other Value) bool { // Add returns the sum of the receiver and the given other value. Both values // must be numbers; this method will panic if not. func (val Value) Add(other Value) Value { + if val.IsMarked() || other.IsMarked() { + val, valMarks := val.Unmark() + other, otherMarks := other.Unmark() + return val.Add(other).WithMarks(valMarks, otherMarks) + } + if shortCircuit := mustTypeCheck(Number, Number, val, other); shortCircuit != nil { shortCircuit = forceShortCircuitType(shortCircuit, Number) return *shortCircuit @@ -475,6 +527,12 @@ func (val Value) Add(other Value) Value { // Subtract returns receiver minus the given other value. Both values must be // numbers; this method will panic if not. func (val Value) Subtract(other Value) Value { + if val.IsMarked() || other.IsMarked() { + val, valMarks := val.Unmark() + other, otherMarks := other.Unmark() + return val.Subtract(other).WithMarks(valMarks, otherMarks) + } + if shortCircuit := mustTypeCheck(Number, Number, val, other); shortCircuit != nil { shortCircuit = forceShortCircuitType(shortCircuit, Number) return *shortCircuit @@ -486,6 +544,11 @@ func (val Value) Subtract(other Value) Value { // Negate returns the numeric negative of the receiver, which must be a number. // This method will panic when given a value of any other type. func (val Value) Negate() Value { + if val.IsMarked() { + val, valMarks := val.Unmark() + return val.Negate().WithMarks(valMarks) + } + if shortCircuit := mustTypeCheck(Number, Number, val); shortCircuit != nil { shortCircuit = forceShortCircuitType(shortCircuit, Number) return *shortCircuit @@ -498,6 +561,12 @@ func (val Value) Negate() Value { // Multiply returns the product of the receiver and the given other value. // Both values must be numbers; this method will panic if not. func (val Value) Multiply(other Value) Value { + if val.IsMarked() || other.IsMarked() { + val, valMarks := val.Unmark() + other, otherMarks := other.Unmark() + return val.Multiply(other).WithMarks(valMarks, otherMarks) + } + if shortCircuit := mustTypeCheck(Number, Number, val, other); shortCircuit != nil { shortCircuit = forceShortCircuitType(shortCircuit, Number) return *shortCircuit @@ -520,6 +589,12 @@ func (val Value) Multiply(other Value) Value { // If both values are zero or infinity, this function will panic with // an instance of big.ErrNaN. 
func (val Value) Divide(other Value) Value { + if val.IsMarked() || other.IsMarked() { + val, valMarks := val.Unmark() + other, otherMarks := other.Unmark() + return val.Divide(other).WithMarks(valMarks, otherMarks) + } + if shortCircuit := mustTypeCheck(Number, Number, val, other); shortCircuit != nil { shortCircuit = forceShortCircuitType(shortCircuit, Number) return *shortCircuit @@ -546,6 +621,12 @@ func (val Value) Divide(other Value) Value { // may wish to disallow such things outright or implement their own modulo // if they disagree with the interpretation used here. func (val Value) Modulo(other Value) Value { + if val.IsMarked() || other.IsMarked() { + val, valMarks := val.Unmark() + other, otherMarks := other.Unmark() + return val.Modulo(other).WithMarks(valMarks, otherMarks) + } + if shortCircuit := mustTypeCheck(Number, Number, val, other); shortCircuit != nil { shortCircuit = forceShortCircuitType(shortCircuit, Number) return *shortCircuit @@ -576,6 +657,11 @@ func (val Value) Modulo(other Value) Value { // Absolute returns the absolute (signless) value of the receiver, which must // be a number or this method will panic. func (val Value) Absolute() Value { + if val.IsMarked() { + val, valMarks := val.Unmark() + return val.Absolute().WithMarks(valMarks) + } + if shortCircuit := mustTypeCheck(Number, Number, val); shortCircuit != nil { shortCircuit = forceShortCircuitType(shortCircuit, Number) return *shortCircuit @@ -596,6 +682,11 @@ func (val Value) Absolute() Value { // This method may be called on a value whose type is DynamicPseudoType, // in which case the result will also be DynamicVal. func (val Value) GetAttr(name string) Value { + if val.IsMarked() { + val, valMarks := val.Unmark() + return val.GetAttr(name).WithMarks(valMarks) + } + if val.ty == DynamicPseudoType { return DynamicVal } @@ -638,6 +729,12 @@ func (val Value) GetAttr(name string) Value { // This method may be called on a value whose type is DynamicPseudoType, // in which case the result will also be the DynamicValue. func (val Value) Index(key Value) Value { + if val.IsMarked() || key.IsMarked() { + val, valMarks := val.Unmark() + key, keyMarks := key.Unmark() + return val.Index(key).WithMarks(valMarks, keyMarks) + } + if val.ty == DynamicPseudoType { return DynamicVal } @@ -733,6 +830,12 @@ func (val Value) Index(key Value) Value { // This method will panic if the receiver is not indexable, but does not // impose any panic-causing type constraints on the key. func (val Value) HasIndex(key Value) Value { + if val.IsMarked() || key.IsMarked() { + val, valMarks := val.Unmark() + key, keyMarks := key.Unmark() + return val.HasIndex(key).WithMarks(valMarks, keyMarks) + } + if val.ty == DynamicPseudoType { return UnknownVal(Bool) } @@ -810,6 +913,12 @@ func (val Value) HasIndex(key Value) Value { // // This method will panic if the receiver is not a set, or if it is a null set. func (val Value) HasElement(elem Value) Value { + if val.IsMarked() || elem.IsMarked() { + val, valMarks := val.Unmark() + elem, elemMarks := elem.Unmark() + return val.HasElement(elem).WithMarks(valMarks, elemMarks) + } + ty := val.Type() if !ty.IsSetType() { @@ -841,6 +950,11 @@ func (val Value) HasElement(elem Value) Value { // of a string, call AsString and take the length of the native Go string // that is returned. 
func (val Value) Length() Value { + if val.IsMarked() { + val, valMarks := val.Unmark() + return val.Length().WithMarks(valMarks) + } + if val.Type().IsTupleType() { // For tuples, we can return the length even if the value is not known. return NumberIntVal(int64(val.Type().Length())) @@ -859,6 +973,7 @@ func (val Value) Length() Value { // This is an integration method provided for the convenience of code bridging // into Go's type system. func (val Value) LengthInt() int { + val.assertUnmarked() if val.Type().IsTupleType() { // For tuples, we can return the length even if the value is not known. return val.Type().Length() @@ -915,6 +1030,7 @@ func (val Value) LengthInt() int { // ElementIterator is an integration method, so it cannot handle Unknown // values. This method will panic if the receiver is Unknown. func (val Value) ElementIterator() ElementIterator { + val.assertUnmarked() if !val.IsKnown() { panic("can't use ElementIterator on unknown value") } @@ -943,6 +1059,7 @@ func (val Value) CanIterateElements() bool { // ForEachElement is an integration method, so it cannot handle Unknown // values. This method will panic if the receiver is Unknown. func (val Value) ForEachElement(cb ElementCallback) bool { + val.assertUnmarked() it := val.ElementIterator() for it.Next() { key, val := it.Element() @@ -957,6 +1074,11 @@ func (val Value) ForEachElement(cb ElementCallback) bool { // Not returns the logical inverse of the receiver, which must be of type // Bool or this method will panic. func (val Value) Not() Value { + if val.IsMarked() { + val, valMarks := val.Unmark() + return val.Not().WithMarks(valMarks) + } + if shortCircuit := mustTypeCheck(Bool, Bool, val); shortCircuit != nil { shortCircuit = forceShortCircuitType(shortCircuit, Bool) return *shortCircuit @@ -968,6 +1090,12 @@ func (val Value) Not() Value { // And returns the result of logical AND with the receiver and the other given // value, which must both be of type Bool or this method will panic. func (val Value) And(other Value) Value { + if val.IsMarked() || other.IsMarked() { + val, valMarks := val.Unmark() + other, otherMarks := other.Unmark() + return val.And(other).WithMarks(valMarks, otherMarks) + } + if shortCircuit := mustTypeCheck(Bool, Bool, val, other); shortCircuit != nil { shortCircuit = forceShortCircuitType(shortCircuit, Bool) return *shortCircuit @@ -979,6 +1107,12 @@ func (val Value) And(other Value) Value { // Or returns the result of logical OR with the receiver and the other given // value, which must both be of type Bool or this method will panic. func (val Value) Or(other Value) Value { + if val.IsMarked() || other.IsMarked() { + val, valMarks := val.Unmark() + other, otherMarks := other.Unmark() + return val.Or(other).WithMarks(valMarks, otherMarks) + } + if shortCircuit := mustTypeCheck(Bool, Bool, val, other); shortCircuit != nil { shortCircuit = forceShortCircuitType(shortCircuit, Bool) return *shortCircuit @@ -990,6 +1124,12 @@ func (val Value) Or(other Value) Value { // LessThan returns True if the receiver is less than the other given value, // which must both be numbers or this method will panic. 
func (val Value) LessThan(other Value) Value { + if val.IsMarked() || other.IsMarked() { + val, valMarks := val.Unmark() + other, otherMarks := other.Unmark() + return val.LessThan(other).WithMarks(valMarks, otherMarks) + } + if shortCircuit := mustTypeCheck(Number, Bool, val, other); shortCircuit != nil { shortCircuit = forceShortCircuitType(shortCircuit, Bool) return *shortCircuit @@ -1001,6 +1141,12 @@ func (val Value) LessThan(other Value) Value { // GreaterThan returns True if the receiver is greater than the other given // value, which must both be numbers or this method will panic. func (val Value) GreaterThan(other Value) Value { + if val.IsMarked() || other.IsMarked() { + val, valMarks := val.Unmark() + other, otherMarks := other.Unmark() + return val.GreaterThan(other).WithMarks(valMarks, otherMarks) + } + if shortCircuit := mustTypeCheck(Number, Bool, val, other); shortCircuit != nil { shortCircuit = forceShortCircuitType(shortCircuit, Bool) return *shortCircuit @@ -1022,6 +1168,7 @@ func (val Value) GreaterThanOrEqualTo(other Value) Value { // AsString returns the native string from a non-null, non-unknown cty.String // value, or panics if called on any other value. func (val Value) AsString() string { + val.assertUnmarked() if val.ty != String { panic("not a string") } @@ -1041,6 +1188,7 @@ func (val Value) AsString() string { // For more convenient conversions to other native numeric types, use the // "gocty" package. func (val Value) AsBigFloat() *big.Float { + val.assertUnmarked() if val.ty != Number { panic("not a number") } @@ -1064,6 +1212,7 @@ func (val Value) AsBigFloat() *big.Float { // For more convenient conversions to slices of more specific types, use // the "gocty" package. func (val Value) AsValueSlice() []Value { + val.assertUnmarked() l := val.LengthInt() if l == 0 { return nil @@ -1084,6 +1233,7 @@ func (val Value) AsValueSlice() []Value { // For more convenient conversions to maps of more specific types, use // the "gocty" package. func (val Value) AsValueMap() map[string]Value { + val.assertUnmarked() l := val.LengthInt() if l == 0 { return nil @@ -1108,6 +1258,7 @@ func (val Value) AsValueMap() map[string]Value { // // The returned ValueSet can store only values of the receiver's element type. func (val Value) AsValueSet() ValueSet { + val.assertUnmarked() if !val.Type().IsCollectionType() { panic("not a collection type") } @@ -1130,6 +1281,7 @@ func (val Value) AsValueSet() ValueSet { // the value. Since cty considers values to be immutable, it is strongly // recommended to treat the encapsulated value itself as immutable too. func (val Value) EncapsulatedValue() interface{} { + val.assertUnmarked() if !val.Type().IsCapsuleType() { panic("not a capsule-typed value") } diff --git a/vendor/golang.org/x/net/html/atom/gen.go b/vendor/golang.org/x/net/html/atom/gen.go deleted file mode 100644 index 5d052781b..000000000 --- a/vendor/golang.org/x/net/html/atom/gen.go +++ /dev/null @@ -1,712 +0,0 @@ -// Copyright 2012 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -//go:generate go run gen.go -//go:generate go run gen.go -test - -package main - -import ( - "bytes" - "flag" - "fmt" - "go/format" - "io/ioutil" - "math/rand" - "os" - "sort" - "strings" -) - -// identifier converts s to a Go exported identifier. -// It converts "div" to "Div" and "accept-charset" to "AcceptCharset". 
-func identifier(s string) string { - b := make([]byte, 0, len(s)) - cap := true - for _, c := range s { - if c == '-' { - cap = true - continue - } - if cap && 'a' <= c && c <= 'z' { - c -= 'a' - 'A' - } - cap = false - b = append(b, byte(c)) - } - return string(b) -} - -var test = flag.Bool("test", false, "generate table_test.go") - -func genFile(name string, buf *bytes.Buffer) { - b, err := format.Source(buf.Bytes()) - if err != nil { - fmt.Fprintln(os.Stderr, err) - os.Exit(1) - } - if err := ioutil.WriteFile(name, b, 0644); err != nil { - fmt.Fprintln(os.Stderr, err) - os.Exit(1) - } -} - -func main() { - flag.Parse() - - var all []string - all = append(all, elements...) - all = append(all, attributes...) - all = append(all, eventHandlers...) - all = append(all, extra...) - sort.Strings(all) - - // uniq - lists have dups - w := 0 - for _, s := range all { - if w == 0 || all[w-1] != s { - all[w] = s - w++ - } - } - all = all[:w] - - if *test { - var buf bytes.Buffer - fmt.Fprintln(&buf, "// Code generated by go generate gen.go; DO NOT EDIT.\n") - fmt.Fprintln(&buf, "//go:generate go run gen.go -test\n") - fmt.Fprintln(&buf, "package atom\n") - fmt.Fprintln(&buf, "var testAtomList = []string{") - for _, s := range all { - fmt.Fprintf(&buf, "\t%q,\n", s) - } - fmt.Fprintln(&buf, "}") - - genFile("table_test.go", &buf) - return - } - - // Find hash that minimizes table size. - var best *table - for i := 0; i < 1000000; i++ { - if best != nil && 1<<(best.k-1) < len(all) { - break - } - h := rand.Uint32() - for k := uint(0); k <= 16; k++ { - if best != nil && k >= best.k { - break - } - var t table - if t.init(h, k, all) { - best = &t - break - } - } - } - if best == nil { - fmt.Fprintf(os.Stderr, "failed to construct string table\n") - os.Exit(1) - } - - // Lay out strings, using overlaps when possible. - layout := append([]string{}, all...) - - // Remove strings that are substrings of other strings - for changed := true; changed; { - changed = false - for i, s := range layout { - if s == "" { - continue - } - for j, t := range layout { - if i != j && t != "" && strings.Contains(s, t) { - changed = true - layout[j] = "" - } - } - } - } - - // Join strings where one suffix matches another prefix. - for { - // Find best i, j, k such that layout[i][len-k:] == layout[j][:k], - // maximizing overlap length k. - besti := -1 - bestj := -1 - bestk := 0 - for i, s := range layout { - if s == "" { - continue - } - for j, t := range layout { - if i == j { - continue - } - for k := bestk + 1; k <= len(s) && k <= len(t); k++ { - if s[len(s)-k:] == t[:k] { - besti = i - bestj = j - bestk = k - } - } - } - } - if bestk > 0 { - layout[besti] += layout[bestj][bestk:] - layout[bestj] = "" - continue - } - break - } - - text := strings.Join(layout, "") - - atom := map[string]uint32{} - for _, s := range all { - off := strings.Index(text, s) - if off < 0 { - panic("lost string " + s) - } - atom[s] = uint32(off<<8 | len(s)) - } - - var buf bytes.Buffer - // Generate the Go code. 
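An aside before the code-generation writes resume below: the layout pass above packs every atom into one shared text by first discarding strings contained in others and then greedily joining the pair with the longest suffix/prefix overlap. A standalone sketch of that merge loop (the same algorithm as the deleted generator, reduced to its core):

```go
package main

import (
	"fmt"
	"strings"
)

// merge repeatedly joins the pair of strings with the longest
// suffix/prefix overlap until no overlap remains, then concatenates.
func merge(layout []string) string {
	for {
		besti, bestj, bestk := -1, -1, 0
		for i, s := range layout {
			if s == "" {
				continue
			}
			for j, t := range layout {
				if i == j || t == "" {
					continue
				}
				// Only look for overlaps longer than the best so far.
				for k := bestk + 1; k <= len(s) && k <= len(t); k++ {
					if s[len(s)-k:] == t[:k] {
						besti, bestj, bestk = i, j, k
					}
				}
			}
		}
		if bestk == 0 {
			break
		}
		layout[besti] += layout[bestj][bestk:]
		layout[bestj] = ""
	}
	return strings.Join(layout, "")
}

func main() {
	fmt.Println(merge([]string{"ondrag", "dragstart", "startle"}))
	// "ondragstartle": "dragstart"+"startle" overlap on "start",
	// then "ondrag" joins on "drag".
}
```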
- fmt.Fprintln(&buf, "// Code generated by go generate gen.go; DO NOT EDIT.\n") - fmt.Fprintln(&buf, "//go:generate go run gen.go\n") - fmt.Fprintln(&buf, "package atom\n\nconst (") - - // compute max len - maxLen := 0 - for _, s := range all { - if maxLen < len(s) { - maxLen = len(s) - } - fmt.Fprintf(&buf, "\t%s Atom = %#x\n", identifier(s), atom[s]) - } - fmt.Fprintln(&buf, ")\n") - - fmt.Fprintf(&buf, "const hash0 = %#x\n\n", best.h0) - fmt.Fprintf(&buf, "const maxAtomLen = %d\n\n", maxLen) - - fmt.Fprintf(&buf, "var table = [1<<%d]Atom{\n", best.k) - for i, s := range best.tab { - if s == "" { - continue - } - fmt.Fprintf(&buf, "\t%#x: %#x, // %s\n", i, atom[s], s) - } - fmt.Fprintf(&buf, "}\n") - datasize := (1 << best.k) * 4 - - fmt.Fprintln(&buf, "const atomText =") - textsize := len(text) - for len(text) > 60 { - fmt.Fprintf(&buf, "\t%q +\n", text[:60]) - text = text[60:] - } - fmt.Fprintf(&buf, "\t%q\n\n", text) - - genFile("table.go", &buf) - - fmt.Fprintf(os.Stdout, "%d atoms; %d string bytes + %d tables = %d total data\n", len(all), textsize, datasize, textsize+datasize) -} - -type byLen []string - -func (x byLen) Less(i, j int) bool { return len(x[i]) > len(x[j]) } -func (x byLen) Swap(i, j int) { x[i], x[j] = x[j], x[i] } -func (x byLen) Len() int { return len(x) } - -// fnv computes the FNV hash with an arbitrary starting value h. -func fnv(h uint32, s string) uint32 { - for i := 0; i < len(s); i++ { - h ^= uint32(s[i]) - h *= 16777619 - } - return h -} - -// A table represents an attempt at constructing the lookup table. -// The lookup table uses cuckoo hashing, meaning that each string -// can be found in one of two positions. -type table struct { - h0 uint32 - k uint - mask uint32 - tab []string -} - -// hash returns the two hashes for s. -func (t *table) hash(s string) (h1, h2 uint32) { - h := fnv(t.h0, s) - h1 = h & t.mask - h2 = (h >> 16) & t.mask - return -} - -// init initializes the table with the given parameters. -// h0 is the initial hash value, -// k is the number of bits of hash value to use, and -// x is the list of strings to store in the table. -// init returns false if the table cannot be constructed. -func (t *table) init(h0 uint32, k uint, x []string) bool { - t.h0 = h0 - t.k = k - t.tab = make([]string, 1<<k) - t.mask = 1<<k - 1 - for _, s := range x { - if !t.insert(s) { - return false - } - } - return true -} - -// insert inserts s in the table. -func (t *table) insert(s string) bool { - h1, h2 := t.hash(s) - if t.tab[h1] == "" { - t.tab[h1] = s - return true - } - if t.tab[h2] == "" { - t.tab[h2] = s - return true - } - if t.push(h1, 0) { - t.tab[h1] = s - return true - } - if t.push(h2, 0) { - t.tab[h2] = s - return true - } - return false -} - -// push attempts to push aside the entry at index i. -func (t *table) push(i uint32, depth int) bool { - if depth > len(t.tab) { - return false - } - s := t.tab[i] - h1, h2 := t.hash(s) - j := h1 + h2 - i - if t.tab[j] != "" && !t.push(j, depth+1) { - return false - } - t.tab[j] = s - return true -} - -// The lists of element names and attribute keys were taken from -// https://html.spec.whatwg.org/multipage/indices.html#index -// as of the "HTML Living Standard - Last Updated 16 April 2018" version. - -// "command", "keygen" and "menuitem" have been removed from the spec, -// but are kept here for backwards compatibility.
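The `table` type above implements cuckoo hashing: `hash` derives two candidate slots for each string, `insert` tries both, and `push` evicts a resident entry to its alternate slot when neither is free. Lookup therefore probes at most two entries. A toy sketch of the lookup side, reusing the same `fnv` scheme (the element, attribute, and event-handler lists the generator hashes follow below):

```go
package main

import "fmt"

// fnv is the same rolling FNV variant used by the generator above.
func fnv(h uint32, s string) uint32 {
	for i := 0; i < len(s); i++ {
		h ^= uint32(s[i])
		h *= 16777619
	}
	return h
}

// lookup probes the only two slots a cuckoo-hashed string can occupy.
func lookup(tab []string, h0, mask uint32, s string) bool {
	h := fnv(h0, s)
	h1 := h & mask
	h2 := (h >> 16) & mask
	return tab[h1] == s || tab[h2] == s
}

func main() {
	// Toy 4-slot table; real tables are built by table.init above,
	// which relocates entries instead of overwriting on collision.
	tab := make([]string, 4)
	h0, mask := uint32(0x9acb0442), uint32(3)
	for _, s := range []string{"a", "div"} {
		tab[fnv(h0, s)&mask] = s
	}
	fmt.Println(lookup(tab, h0, mask, "div"), lookup(tab, h0, mask, "span"))
	// true false
}
```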
-var elements = []string{ - "a", - "abbr", - "address", - "area", - "article", - "aside", - "audio", - "b", - "base", - "bdi", - "bdo", - "blockquote", - "body", - "br", - "button", - "canvas", - "caption", - "cite", - "code", - "col", - "colgroup", - "command", - "data", - "datalist", - "dd", - "del", - "details", - "dfn", - "dialog", - "div", - "dl", - "dt", - "em", - "embed", - "fieldset", - "figcaption", - "figure", - "footer", - "form", - "h1", - "h2", - "h3", - "h4", - "h5", - "h6", - "head", - "header", - "hgroup", - "hr", - "html", - "i", - "iframe", - "img", - "input", - "ins", - "kbd", - "keygen", - "label", - "legend", - "li", - "link", - "main", - "map", - "mark", - "menu", - "menuitem", - "meta", - "meter", - "nav", - "noscript", - "object", - "ol", - "optgroup", - "option", - "output", - "p", - "param", - "picture", - "pre", - "progress", - "q", - "rp", - "rt", - "ruby", - "s", - "samp", - "script", - "section", - "select", - "slot", - "small", - "source", - "span", - "strong", - "style", - "sub", - "summary", - "sup", - "table", - "tbody", - "td", - "template", - "textarea", - "tfoot", - "th", - "thead", - "time", - "title", - "tr", - "track", - "u", - "ul", - "var", - "video", - "wbr", -} - -// https://html.spec.whatwg.org/multipage/indices.html#attributes-3 -// -// "challenge", "command", "contextmenu", "dropzone", "icon", "keytype", "mediagroup", -// "radiogroup", "spellcheck", "scoped", "seamless", "sortable" and "sorted" have been removed from the spec, -// but are kept here for backwards compatibility. -var attributes = []string{ - "abbr", - "accept", - "accept-charset", - "accesskey", - "action", - "allowfullscreen", - "allowpaymentrequest", - "allowusermedia", - "alt", - "as", - "async", - "autocomplete", - "autofocus", - "autoplay", - "challenge", - "charset", - "checked", - "cite", - "class", - "color", - "cols", - "colspan", - "command", - "content", - "contenteditable", - "contextmenu", - "controls", - "coords", - "crossorigin", - "data", - "datetime", - "default", - "defer", - "dir", - "dirname", - "disabled", - "download", - "draggable", - "dropzone", - "enctype", - "for", - "form", - "formaction", - "formenctype", - "formmethod", - "formnovalidate", - "formtarget", - "headers", - "height", - "hidden", - "high", - "href", - "hreflang", - "http-equiv", - "icon", - "id", - "inputmode", - "integrity", - "is", - "ismap", - "itemid", - "itemprop", - "itemref", - "itemscope", - "itemtype", - "keytype", - "kind", - "label", - "lang", - "list", - "loop", - "low", - "manifest", - "max", - "maxlength", - "media", - "mediagroup", - "method", - "min", - "minlength", - "multiple", - "muted", - "name", - "nomodule", - "nonce", - "novalidate", - "open", - "optimum", - "pattern", - "ping", - "placeholder", - "playsinline", - "poster", - "preload", - "radiogroup", - "readonly", - "referrerpolicy", - "rel", - "required", - "reversed", - "rows", - "rowspan", - "sandbox", - "spellcheck", - "scope", - "scoped", - "seamless", - "selected", - "shape", - "size", - "sizes", - "sortable", - "sorted", - "slot", - "span", - "spellcheck", - "src", - "srcdoc", - "srclang", - "srcset", - "start", - "step", - "style", - "tabindex", - "target", - "title", - "translate", - "type", - "typemustmatch", - "updateviacache", - "usemap", - "value", - "width", - "workertype", - "wrap", -} - -// "onautocomplete", "onautocompleteerror", "onmousewheel", -// "onshow" and "onsort" have been removed from the spec, -// but are kept here for backwards compatibility. 
-var eventHandlers = []string{ - "onabort", - "onautocomplete", - "onautocompleteerror", - "onauxclick", - "onafterprint", - "onbeforeprint", - "onbeforeunload", - "onblur", - "oncancel", - "oncanplay", - "oncanplaythrough", - "onchange", - "onclick", - "onclose", - "oncontextmenu", - "oncopy", - "oncuechange", - "oncut", - "ondblclick", - "ondrag", - "ondragend", - "ondragenter", - "ondragexit", - "ondragleave", - "ondragover", - "ondragstart", - "ondrop", - "ondurationchange", - "onemptied", - "onended", - "onerror", - "onfocus", - "onhashchange", - "oninput", - "oninvalid", - "onkeydown", - "onkeypress", - "onkeyup", - "onlanguagechange", - "onload", - "onloadeddata", - "onloadedmetadata", - "onloadend", - "onloadstart", - "onmessage", - "onmessageerror", - "onmousedown", - "onmouseenter", - "onmouseleave", - "onmousemove", - "onmouseout", - "onmouseover", - "onmouseup", - "onmousewheel", - "onwheel", - "onoffline", - "ononline", - "onpagehide", - "onpageshow", - "onpaste", - "onpause", - "onplay", - "onplaying", - "onpopstate", - "onprogress", - "onratechange", - "onreset", - "onresize", - "onrejectionhandled", - "onscroll", - "onsecuritypolicyviolation", - "onseeked", - "onseeking", - "onselect", - "onshow", - "onsort", - "onstalled", - "onstorage", - "onsubmit", - "onsuspend", - "ontimeupdate", - "ontoggle", - "onunhandledrejection", - "onunload", - "onvolumechange", - "onwaiting", -} - -// extra are ad-hoc values not covered by any of the lists above. -var extra = []string{ - "acronym", - "align", - "annotation", - "annotation-xml", - "applet", - "basefont", - "bgsound", - "big", - "blink", - "center", - "color", - "desc", - "face", - "font", - "foreignObject", // HTML is case-insensitive, but SVG-embedded-in-HTML is case-sensitive. - "foreignobject", - "frame", - "frameset", - "image", - "isindex", - "listing", - "malignmark", - "marquee", - "math", - "mglyph", - "mi", - "mn", - "mo", - "ms", - "mtext", - "nobr", - "noembed", - "noframes", - "plaintext", - "prompt", - "public", - "rb", - "rtc", - "spacer", - "strike", - "svg", - "system", - "tt", - "xmp", -} diff --git a/vendor/golang.org/x/net/html/token.go b/vendor/golang.org/x/net/html/token.go index e3c01d7c9..ae0d1b05c 100644 --- a/vendor/golang.org/x/net/html/token.go +++ b/vendor/golang.org/x/net/html/token.go @@ -347,6 +347,7 @@ loop: break loop } if c != '/' { + z.raw.end-- continue loop } if z.readRawEndTag() || z.err != nil { @@ -1067,6 +1068,11 @@ loop: // Raw returns the unmodified text of the current token. Calling Next, Token, // Text, TagName or TagAttr may change the contents of the returned slice. +// +// The token stream's raw bytes partition the byte stream (up until an +// ErrorToken). There are no overlaps or gaps between two consecutive token's +// raw bytes. One implication is that the byte offset of the current token is +// the sum of the lengths of all previous tokens' raw bytes. func (z *Tokenizer) Raw() []byte { return z.buf[z.raw.start:z.raw.end] } diff --git a/vendor/golang.org/x/net/http2/hpack/encode.go b/vendor/golang.org/x/net/http2/hpack/encode.go index 1565cf270..97f17831f 100644 --- a/vendor/golang.org/x/net/http2/hpack/encode.go +++ b/vendor/golang.org/x/net/http2/hpack/encode.go @@ -150,7 +150,7 @@ func appendIndexed(dst []byte, i uint64) []byte { // extended buffer. // // If f.Sensitive is true, "Never Indexed" representation is used. 
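Returning briefly to the `Tokenizer.Raw` contract documented above (the `appendNewName` comment resumes below): because consecutive tokens' raw bytes partition the input with no gaps or overlaps, a token's byte offset is just the running sum of the earlier raw lengths. A minimal sketch using the vendored `golang.org/x/net/html` API:

```go
package main

import (
	"fmt"
	"strings"

	"golang.org/x/net/html"
)

func main() {
	const doc = "<p>Hello<b>world</b></p>"
	z := html.NewTokenizer(strings.NewReader(doc))

	// The current token's byte offset is the sum of the lengths of all
	// previous tokens' raw bytes, exactly as the Raw documentation says.
	offset := 0
	for {
		tt := z.Next()
		if tt == html.ErrorToken {
			break // io.EOF, or a real tokenization error
		}
		raw := z.Raw() // read before the next call to Next mutates it
		fmt.Printf("offset %2d: %q\n", offset, raw)
		offset += len(raw)
	}
}
```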
If -// f.Sensitive is false and indexing is true, "Inremental Indexing" +// f.Sensitive is false and indexing is true, "Incremental Indexing" // representation is used. func appendNewName(dst []byte, f HeaderField, indexing bool) []byte { dst = append(dst, encodeTypeByte(indexing, f.Sensitive)) diff --git a/vendor/golang.org/x/net/http2/server.go b/vendor/golang.org/x/net/http2/server.go index 57334dc79..d2ba820c7 100644 --- a/vendor/golang.org/x/net/http2/server.go +++ b/vendor/golang.org/x/net/http2/server.go @@ -52,10 +52,11 @@ import ( ) const ( - prefaceTimeout = 10 * time.Second - firstSettingsTimeout = 2 * time.Second // should be in-flight with preface anyway - handlerChunkWriteSize = 4 << 10 - defaultMaxStreams = 250 // TODO: make this 100 as the GFE seems to? + prefaceTimeout = 10 * time.Second + firstSettingsTimeout = 2 * time.Second // should be in-flight with preface anyway + handlerChunkWriteSize = 4 << 10 + defaultMaxStreams = 250 // TODO: make this 100 as the GFE seems to? + maxQueuedControlFrames = 10000 ) var ( @@ -163,6 +164,15 @@ func (s *Server) maxConcurrentStreams() uint32 { return defaultMaxStreams } +// maxQueuedControlFrames is the maximum number of control frames like +// SETTINGS, PING and RST_STREAM that will be queued for writing before +// the connection is closed to prevent memory exhaustion attacks. +func (s *Server) maxQueuedControlFrames() int { + // TODO: if anybody asks, add a Server field, and remember to define the + // behavior of negative values. + return maxQueuedControlFrames +} + type serverInternalState struct { mu sync.Mutex activeConns map[*serverConn]struct{} @@ -312,7 +322,7 @@ type ServeConnOpts struct { } func (o *ServeConnOpts) context() context.Context { - if o.Context != nil { + if o != nil && o.Context != nil { return o.Context } return context.Background() @@ -506,6 +516,7 @@ type serverConn struct { sawFirstSettings bool // got the initial SETTINGS frame after the preface needToSendSettingsAck bool unackedSettings int // how many SETTINGS have we sent without ACKs? + queuedControlFrames int // control frames in the writeSched queue clientMaxStreams uint32 // SETTINGS_MAX_CONCURRENT_STREAMS from client (our PUSH_PROMISE limit) advMaxStreams uint32 // our SETTINGS_MAX_CONCURRENT_STREAMS advertised the client curClientStreams uint32 // number of open streams initiated by the client @@ -894,6 +905,14 @@ func (sc *serverConn) serve() { } } + // If the peer is causing us to generate a lot of control frames, + // but not reading them from us, assume they are trying to make us + // run out of memory. + if sc.queuedControlFrames > sc.srv.maxQueuedControlFrames() { + sc.vlogf("http2: too many control frames in send queue, closing connection") + return + } + // Start the shutdown timer after sending a GOAWAY. When sending GOAWAY // with no error code (graceful shutdown), don't start the timer until // all open streams have been completed. @@ -1093,6 +1112,14 @@ func (sc *serverConn) writeFrame(wr FrameWriteRequest) { } if !ignoreWrite { + if wr.isControl() { + sc.queuedControlFrames++ + // For extra safety, detect wraparounds, which should not happen, + // and pull the plug. + if sc.queuedControlFrames < 0 { + sc.conn.Close() + } + } sc.writeSched.Push(wr) } sc.scheduleFrameWrite() @@ -1210,10 +1237,8 @@ func (sc *serverConn) wroteFrame(res frameWriteResult) { // If a frame is already being written, nothing happens. This will be called again // when the frame is done being written. 
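The new accounting is spread across three hunks: `writeFrame` (above) increments `queuedControlFrames` when a control frame is queued, `scheduleFrameWrite` (below) decrements it as frames drain, and the `serve` loop closes the connection once the counter exceeds `maxQueuedControlFrames`. A self-contained sketch of the same back-pressure pattern, using hypothetical types rather than the vendored ones:

```go
package main

import (
	"errors"
	"fmt"
)

const maxQueuedControl = 3 // the vendored code uses 10000

type frame struct {
	streamID uint32 // 0 => control frame (SETTINGS, PING, ...)
}

type conn struct {
	queue         []frame
	queuedControl int
}

var errTooManyControlFrames = errors.New("too many control frames queued")

// push mirrors serverConn.writeFrame: count control frames as they queue.
// (In the vendored code the limit check lives in the serve loop.)
func (c *conn) push(f frame) error {
	if f.streamID == 0 {
		c.queuedControl++
		if c.queuedControl > maxQueuedControl {
			return errTooManyControlFrames // serve() would close the conn
		}
	}
	c.queue = append(c.queue, f)
	return nil
}

// pop mirrors scheduleFrameWrite: decrement when a control frame drains.
func (c *conn) pop() (frame, bool) {
	if len(c.queue) == 0 {
		return frame{}, false
	}
	f := c.queue[0]
	c.queue = c.queue[1:]
	if f.streamID == 0 {
		c.queuedControl--
	}
	return f, true
}

func main() {
	c := &conn{}
	for i := 0; i < 5; i++ {
		if err := c.push(frame{streamID: 0}); err != nil {
			fmt.Println("frame", i, "rejected:", err) // rejects the fourth
			return
		}
	}
}
```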
// -// If a frame isn't being written we need to send one, the best frame -// to send is selected, preferring first things that aren't -// stream-specific (e.g. ACKing settings), and then finding the -// highest priority stream. +// If a frame isn't being written and we need to send one, the best frame +// to send is selected by writeSched. // // If a frame isn't being written and there's nothing else to send, we // flush the write buffer. @@ -1241,6 +1266,9 @@ func (sc *serverConn) scheduleFrameWrite() { } if !sc.inGoAway || sc.goAwayCode == ErrCodeNo { if wr, ok := sc.writeSched.Pop(); ok { + if wr.isControl() { + sc.queuedControlFrames-- + } sc.startFrameWrite(wr) continue } @@ -1533,6 +1561,8 @@ func (sc *serverConn) processSettings(f *SettingsFrame) error { if err := f.ForeachSetting(sc.processSetting); err != nil { return err } + // TODO: judging by RFC 7540, Section 6.5.3 each SETTINGS frame should be + // acknowledged individually, even if multiple are received before the ACK. sc.needToSendSettingsAck = true sc.scheduleFrameWrite() return nil @@ -2385,7 +2415,11 @@ func (rws *responseWriterState) writeChunk(p []byte) (n int, err error) { clen = strconv.Itoa(len(p)) } _, hasContentType := rws.snapHeader["Content-Type"] - if !hasContentType && bodyAllowedForStatus(rws.status) && len(p) > 0 { + // If the Content-Encoding is non-blank, we shouldn't + // sniff the body. See Issue golang.org/issue/31753. + ce := rws.snapHeader.Get("Content-Encoding") + hasCE := len(ce) > 0 + if !hasCE && !hasContentType && bodyAllowedForStatus(rws.status) && len(p) > 0 { ctype = http.DetectContentType(p) } var date string @@ -2494,7 +2528,7 @@ const TrailerPrefix = "Trailer:" // trailers. That worked for a while, until we found the first major // user of Trailers in the wild: gRPC (using them only over http2), // and gRPC libraries permit setting trailers mid-stream without -// predeclarnig them. So: change of plans. We still permit the old +// predeclaring them. So: change of plans. We still permit the old // way, but we also permit this hack: if a Header() key begins with // "Trailer:", the suffix of that key is a Trailer. Because ':' is an // invalid token byte anyway, there is no ambiguity. (And it's already @@ -2794,7 +2828,7 @@ func (sc *serverConn) startPush(msg *startPushRequest) { // PUSH_PROMISE frames MUST only be sent on a peer-initiated stream that // is in either the "open" or "half-closed (remote)" state. if msg.parent.state != stateOpen && msg.parent.state != stateHalfClosedRemote { - // responseWriter.Push checks that the stream is peer-initiaed. + // responseWriter.Push checks that the stream is peer-initiated. msg.done <- errStreamClosed return } diff --git a/vendor/golang.org/x/net/http2/transport.go b/vendor/golang.org/x/net/http2/transport.go index c0c80d893..c51a73c06 100644 --- a/vendor/golang.org/x/net/http2/transport.go +++ b/vendor/golang.org/x/net/http2/transport.go @@ -992,7 +992,7 @@ func (cc *ClientConn) roundTrip(req *http.Request) (res *http.Response, gotErrAf req.Method != "HEAD" { // Request gzip only, not deflate. Deflate is ambiguous and // not as universally supported anyway. - // See: http://www.gzip.org/zlib/zlib_faq.html#faq38 + // See: https://zlib.net/zlib_faq.html#faq39 // // Note that we don't request this for HEAD requests, // due to a bug in nginx: @@ -1216,6 +1216,8 @@ var ( // abort request body write, but send stream reset of cancel. 
errStopReqBodyWriteAndCancel = errors.New("http2: canceling request") + + errReqBodyTooLong = errors.New("http2: request body larger than specified content length") ) func (cs *clientStream) writeRequestBody(body io.Reader, bodyCloser io.Closer) (err error) { @@ -1238,10 +1240,32 @@ func (cs *clientStream) writeRequestBody(body io.Reader, bodyCloser io.Closer) ( req := cs.req hasTrailers := req.Trailer != nil + remainLen := actualContentLength(req) + hasContentLen := remainLen != -1 var sawEOF bool for !sawEOF { - n, err := body.Read(buf) + n, err := body.Read(buf[:len(buf)-1]) + if hasContentLen { + remainLen -= int64(n) + if remainLen == 0 && err == nil { + // The request body's Content-Length was predeclared and + // we just finished reading it all, but the underlying io.Reader + // returned the final chunk with a nil error (which is one of + // the two valid things a Reader can do at EOF). Because we'd prefer + // to send the END_STREAM bit early, double-check that we're actually + // at EOF. Subsequent reads should return (0, EOF) at this point. + // If either value is different, we return an error in one of two ways below. + var n1 int + n1, err = body.Read(buf[n:]) + remainLen -= int64(n1) + } + if remainLen < 0 { + err = errReqBodyTooLong + cc.writeStreamReset(cs.ID, ErrCodeCancel, err) + return err + } + } if err == io.EOF { sawEOF = true err = nil diff --git a/vendor/golang.org/x/net/http2/writesched.go b/vendor/golang.org/x/net/http2/writesched.go index 4fe307307..f24d2b1e7 100644 --- a/vendor/golang.org/x/net/http2/writesched.go +++ b/vendor/golang.org/x/net/http2/writesched.go @@ -32,7 +32,7 @@ type WriteScheduler interface { // Pop dequeues the next frame to write. Returns false if no frames can // be written. Frames with a given wr.StreamID() are Pop'd in the same - // order they are Push'd. + // order they are Push'd. No frames should be discarded except by CloseStream. Pop() (wr FrameWriteRequest, ok bool) } @@ -76,6 +76,12 @@ func (wr FrameWriteRequest) StreamID() uint32 { return wr.stream.id } +// isControl reports whether wr is a control frame for MaxQueuedControlFrames +// purposes. That includes non-stream frames and RST_STREAM frames. +func (wr FrameWriteRequest) isControl() bool { + return wr.stream == nil +} + // DataSize returns the number of flow control bytes that must be consumed // to write this entire frame. This is 0 for non-DATA frames. func (wr FrameWriteRequest) DataSize() int { diff --git a/vendor/golang.org/x/net/http2/writesched_priority.go b/vendor/golang.org/x/net/http2/writesched_priority.go index 848fed6ec..2618b2c11 100644 --- a/vendor/golang.org/x/net/http2/writesched_priority.go +++ b/vendor/golang.org/x/net/http2/writesched_priority.go @@ -149,7 +149,7 @@ func (n *priorityNode) addBytes(b int64) { } // walkReadyInOrder iterates over the tree in priority order, calling f for each node -// with a non-empty write queue. When f returns true, this funcion returns true and the +// with a non-empty write queue. When f returns true, this function returns true and the // walk halts. tmp is used as scratch space for sorting. 
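Before the `walkReadyInOrder` documentation continues below, a note on the `writeRequestBody` change above: it tracks the declared Content-Length while copying, and when the count hits exactly zero on a read that returned a nil error it issues one more `Read` to confirm true EOF (so END_STREAM can be sent early), while any surplus byte produces `errReqBodyTooLong`. A reduced sketch of that bookkeeping over a plain `io.Reader` (a hypothetical helper, not the vendored code):

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"strings"
)

var errBodyTooLong = errors.New("body larger than declared content length")

// drain reads body, verifying it holds exactly `declared` bytes. It
// mirrors the double-check above: when the declared count is consumed
// but Read returned nil, read once more to confirm we are at EOF.
func drain(body io.Reader, declared int64) error {
	buf := make([]byte, 8)
	remain := declared
	for {
		n, err := body.Read(buf)
		remain -= int64(n)
		if remain == 0 && err == nil {
			// A Reader may return its final chunk with a nil error;
			// probe once more before declaring success.
			var n1 int
			n1, err = body.Read(buf)
			remain -= int64(n1)
		}
		if remain < 0 {
			return errBodyTooLong
		}
		if err == io.EOF {
			if remain > 0 {
				return fmt.Errorf("body short by %d bytes", remain)
			}
			return nil
		}
		if err != nil {
			return err
		}
	}
}

func main() {
	fmt.Println(drain(strings.NewReader("0123456789"), 10))  // <nil>
	fmt.Println(drain(strings.NewReader("0123456789!"), 10)) // too long
}
```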
// // f(n, openParent) takes two arguments: the node to visit, n, and a bool that is true diff --git a/vendor/golang.org/x/net/http2/writesched_random.go b/vendor/golang.org/x/net/http2/writesched_random.go index 36d7919f1..9a7b9e581 100644 --- a/vendor/golang.org/x/net/http2/writesched_random.go +++ b/vendor/golang.org/x/net/http2/writesched_random.go @@ -19,7 +19,8 @@ type randomWriteScheduler struct { zero writeQueue // sq contains the stream-specific queues, keyed by stream ID. - // When a stream is idle or closed, it's deleted from the map. + // When a stream is idle, closed, or emptied, it's deleted + // from the map. sq map[uint32]*writeQueue // pool of empty queues for reuse. @@ -63,8 +64,12 @@ func (ws *randomWriteScheduler) Pop() (FrameWriteRequest, bool) { return ws.zero.shift(), true } // Iterate over all non-idle streams until finding one that can be consumed. - for _, q := range ws.sq { + for streamID, q := range ws.sq { if wr, ok := q.consume(math.MaxInt32); ok { + if q.empty() { + delete(ws.sq, streamID) + ws.queuePool.put(q) + } return wr, true } } diff --git a/vendor/golang.org/x/sys/unix/mkasm_darwin.go b/vendor/golang.org/x/sys/unix/mkasm_darwin.go deleted file mode 100644 index 4548b993d..000000000 --- a/vendor/golang.org/x/sys/unix/mkasm_darwin.go +++ /dev/null @@ -1,61 +0,0 @@ -// Copyright 2018 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -// mkasm_darwin.go generates assembly trampolines to call libSystem routines from Go. -//This program must be run after mksyscall.go. -package main - -import ( - "bytes" - "fmt" - "io/ioutil" - "log" - "os" - "strings" -) - -func main() { - in1, err := ioutil.ReadFile("syscall_darwin.go") - if err != nil { - log.Fatalf("can't open syscall_darwin.go: %s", err) - } - arch := os.Args[1] - in2, err := ioutil.ReadFile(fmt.Sprintf("syscall_darwin_%s.go", arch)) - if err != nil { - log.Fatalf("can't open syscall_darwin_%s.go: %s", arch, err) - } - in3, err := ioutil.ReadFile(fmt.Sprintf("zsyscall_darwin_%s.go", arch)) - if err != nil { - log.Fatalf("can't open zsyscall_darwin_%s.go: %s", arch, err) - } - in := string(in1) + string(in2) + string(in3) - - trampolines := map[string]bool{} - - var out bytes.Buffer - - fmt.Fprintf(&out, "// go run mkasm_darwin.go %s\n", strings.Join(os.Args[1:], " ")) - fmt.Fprintf(&out, "// Code generated by the command above; DO NOT EDIT.\n") - fmt.Fprintf(&out, "\n") - fmt.Fprintf(&out, "// +build go1.12\n") - fmt.Fprintf(&out, "\n") - fmt.Fprintf(&out, "#include \"textflag.h\"\n") - for _, line := range strings.Split(in, "\n") { - if !strings.HasPrefix(line, "func ") || !strings.HasSuffix(line, "_trampoline()") { - continue - } - fn := line[5 : len(line)-13] - if !trampolines[fn] { - trampolines[fn] = true - fmt.Fprintf(&out, "TEXT ·%s_trampoline(SB),NOSPLIT,$0-0\n", fn) - fmt.Fprintf(&out, "\tJMP\t%s(SB)\n", fn) - } - } - err = ioutil.WriteFile(fmt.Sprintf("zsyscall_darwin_%s.s", arch), out.Bytes(), 0644) - if err != nil { - log.Fatalf("can't write zsyscall_darwin_%s.s: %s", arch, err) - } -} diff --git a/vendor/golang.org/x/sys/unix/mkpost.go b/vendor/golang.org/x/sys/unix/mkpost.go deleted file mode 100644 index eb4332059..000000000 --- a/vendor/golang.org/x/sys/unix/mkpost.go +++ /dev/null @@ -1,122 +0,0 @@ -// Copyright 2016 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
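The `mkpost` generator, whose deleted file begins above and continues below, post-processes `cgo -godefs` output using nothing but regular-expression rewrites followed by `go/format`. For flavor, a standalone sketch applying one of the same renames (the `Fsid`/`Sigset_t` rule visible below):

```go
package main

import (
	"fmt"
	"go/format"
	"log"
	"regexp"
)

func main() {
	// Typical cgo -godefs output: a private __val field to be exported.
	src := []byte(`package unix

type Fsid struct {
	X__val [2]int32
}
`)

	// Same idea as mkpost below: rewrite the field by regexp, then gofmt.
	valRegex := regexp.MustCompile(`type (Fsid|Sigset_t) struct {(\s+)X__(bits|val)(\s+\S+\s+)}`)
	src = valRegex.ReplaceAll(src, []byte("type $1 struct {${2}Val$4}"))

	out, err := format.Source(src)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out) // prints the struct with an exported Val field
}
```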
- -// +build ignore - -// mkpost processes the output of cgo -godefs to -// modify the generated types. It is used to clean up -// the sys API in an architecture specific manner. -// -// mkpost is run after cgo -godefs; see README.md. -package main - -import ( - "bytes" - "fmt" - "go/format" - "io/ioutil" - "log" - "os" - "regexp" -) - -func main() { - // Get the OS and architecture (using GOARCH_TARGET if it exists) - goos := os.Getenv("GOOS") - goarch := os.Getenv("GOARCH_TARGET") - if goarch == "" { - goarch = os.Getenv("GOARCH") - } - // Check that we are using the Docker-based build system if we should be. - if goos == "linux" { - if os.Getenv("GOLANG_SYS_BUILD") != "docker" { - os.Stderr.WriteString("In the Docker-based build system, mkpost should not be called directly.\n") - os.Stderr.WriteString("See README.md\n") - os.Exit(1) - } - } - - b, err := ioutil.ReadAll(os.Stdin) - if err != nil { - log.Fatal(err) - } - - if goos == "aix" { - // Replace type of Atim, Mtim and Ctim by Timespec in Stat_t - // to avoid having both StTimespec and Timespec. - sttimespec := regexp.MustCompile(`_Ctype_struct_st_timespec`) - b = sttimespec.ReplaceAll(b, []byte("Timespec")) - } - - // Intentionally export __val fields in Fsid and Sigset_t - valRegex := regexp.MustCompile(`type (Fsid|Sigset_t) struct {(\s+)X__(bits|val)(\s+\S+\s+)}`) - b = valRegex.ReplaceAll(b, []byte("type $1 struct {${2}Val$4}")) - - // Intentionally export __fds_bits field in FdSet - fdSetRegex := regexp.MustCompile(`type (FdSet) struct {(\s+)X__fds_bits(\s+\S+\s+)}`) - b = fdSetRegex.ReplaceAll(b, []byte("type $1 struct {${2}Bits$3}")) - - // If we have empty Ptrace structs, we should delete them. Only s390x emits - // nonempty Ptrace structs. - ptraceRexexp := regexp.MustCompile(`type Ptrace((Psw|Fpregs|Per) struct {\s*})`) - b = ptraceRexexp.ReplaceAll(b, nil) - - // Replace the control_regs union with a blank identifier for now. - controlRegsRegex := regexp.MustCompile(`(Control_regs)\s+\[0\]uint64`) - b = controlRegsRegex.ReplaceAll(b, []byte("_ [0]uint64")) - - // Remove fields that are added by glibc - // Note that this is unstable as the identifers are private. - removeFieldsRegex := regexp.MustCompile(`X__glibc\S*`) - b = removeFieldsRegex.ReplaceAll(b, []byte("_")) - - // Convert [65]int8 to [65]byte in Utsname members to simplify - // conversion to string; see golang.org/issue/20753 - convertUtsnameRegex := regexp.MustCompile(`((Sys|Node|Domain)name|Release|Version|Machine)(\s+)\[(\d+)\]u?int8`) - b = convertUtsnameRegex.ReplaceAll(b, []byte("$1$3[$4]byte")) - - // Convert [1024]int8 to [1024]byte in Ptmget members - convertPtmget := regexp.MustCompile(`([SC]n)(\s+)\[(\d+)\]u?int8`) - b = convertPtmget.ReplaceAll(b, []byte("$1[$3]byte")) - - // Remove spare fields (e.g. in Statx_t) - spareFieldsRegex := regexp.MustCompile(`X__spare\S*`) - b = spareFieldsRegex.ReplaceAll(b, []byte("_")) - - // Remove cgo padding fields - removePaddingFieldsRegex := regexp.MustCompile(`Pad_cgo_\d+`) - b = removePaddingFieldsRegex.ReplaceAll(b, []byte("_")) - - // Remove padding, hidden, or unused fields - removeFieldsRegex = regexp.MustCompile(`\b(X_\S+|Padding)`) - b = removeFieldsRegex.ReplaceAll(b, []byte("_")) - - // Remove the first line of warning from cgo - b = b[bytes.IndexByte(b, '\n')+1:] - // Modify the command in the header to include: - // mkpost, our own warning, and a build tag. - replacement := fmt.Sprintf(`$1 | go run mkpost.go -// Code generated by the command above; see README.md. DO NOT EDIT. 
- -// +build %s,%s`, goarch, goos) - cgoCommandRegex := regexp.MustCompile(`(cgo -godefs .*)`) - b = cgoCommandRegex.ReplaceAll(b, []byte(replacement)) - - // Rename Stat_t time fields - if goos == "freebsd" && goarch == "386" { - // Hide Stat_t.[AMCB]tim_ext fields - renameStatTimeExtFieldsRegex := regexp.MustCompile(`[AMCB]tim_ext`) - b = renameStatTimeExtFieldsRegex.ReplaceAll(b, []byte("_")) - } - renameStatTimeFieldsRegex := regexp.MustCompile(`([AMCB])(?:irth)?time?(?:spec)?\s+(Timespec|StTimespec)`) - b = renameStatTimeFieldsRegex.ReplaceAll(b, []byte("${1}tim ${2}")) - - // gofmt - b, err = format.Source(b) - if err != nil { - log.Fatal(err) - } - - os.Stdout.Write(b) -} diff --git a/vendor/golang.org/x/sys/unix/mksyscall.go b/vendor/golang.org/x/sys/unix/mksyscall.go deleted file mode 100644 index e4af9424e..000000000 --- a/vendor/golang.org/x/sys/unix/mksyscall.go +++ /dev/null @@ -1,407 +0,0 @@ -// Copyright 2018 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -/* -This program reads a file containing function prototypes -(like syscall_darwin.go) and generates system call bodies. -The prototypes are marked by lines beginning with "//sys" -and read like func declarations if //sys is replaced by func, but: - * The parameter lists must give a name for each argument. - This includes return parameters. - * The parameter lists must give a type for each argument: - the (x, y, z int) shorthand is not allowed. - * If the return parameter is an error number, it must be named errno. - -A line beginning with //sysnb is like //sys, except that the -goroutine will not be suspended during the execution of the system -call. This must only be used for system calls which can never -block, as otherwise the system call could cause all goroutines to -hang. 
-*/ -package main - -import ( - "bufio" - "flag" - "fmt" - "os" - "regexp" - "strings" -) - -var ( - b32 = flag.Bool("b32", false, "32bit big-endian") - l32 = flag.Bool("l32", false, "32bit little-endian") - plan9 = flag.Bool("plan9", false, "plan9") - openbsd = flag.Bool("openbsd", false, "openbsd") - netbsd = flag.Bool("netbsd", false, "netbsd") - dragonfly = flag.Bool("dragonfly", false, "dragonfly") - arm = flag.Bool("arm", false, "arm") // 64-bit value should use (even, odd)-pair - tags = flag.String("tags", "", "build tags") - filename = flag.String("output", "", "output file name (standard output if omitted)") -) - -// cmdLine returns this programs's commandline arguments -func cmdLine() string { - return "go run mksyscall.go " + strings.Join(os.Args[1:], " ") -} - -// buildTags returns build tags -func buildTags() string { - return *tags -} - -// Param is function parameter -type Param struct { - Name string - Type string -} - -// usage prints the program usage -func usage() { - fmt.Fprintf(os.Stderr, "usage: go run mksyscall.go [-b32 | -l32] [-tags x,y] [file ...]\n") - os.Exit(1) -} - -// parseParamList parses parameter list and returns a slice of parameters -func parseParamList(list string) []string { - list = strings.TrimSpace(list) - if list == "" { - return []string{} - } - return regexp.MustCompile(`\s*,\s*`).Split(list, -1) -} - -// parseParam splits a parameter into name and type -func parseParam(p string) Param { - ps := regexp.MustCompile(`^(\S*) (\S*)$`).FindStringSubmatch(p) - if ps == nil { - fmt.Fprintf(os.Stderr, "malformed parameter: %s\n", p) - os.Exit(1) - } - return Param{ps[1], ps[2]} -} - -func main() { - // Get the OS and architecture (using GOARCH_TARGET if it exists) - goos := os.Getenv("GOOS") - if goos == "" { - fmt.Fprintln(os.Stderr, "GOOS not defined in environment") - os.Exit(1) - } - goarch := os.Getenv("GOARCH_TARGET") - if goarch == "" { - goarch = os.Getenv("GOARCH") - } - - // Check that we are using the Docker-based build system if we should - if goos == "linux" { - if os.Getenv("GOLANG_SYS_BUILD") != "docker" { - fmt.Fprintf(os.Stderr, "In the Docker-based build system, mksyscall should not be called directly.\n") - fmt.Fprintf(os.Stderr, "See README.md\n") - os.Exit(1) - } - } - - flag.Usage = usage - flag.Parse() - if len(flag.Args()) <= 0 { - fmt.Fprintf(os.Stderr, "no files to parse provided\n") - usage() - } - - endianness := "" - if *b32 { - endianness = "big-endian" - } else if *l32 { - endianness = "little-endian" - } - - libc := false - if goos == "darwin" && strings.Contains(buildTags(), ",go1.12") { - libc = true - } - trampolines := map[string]bool{} - - text := "" - for _, path := range flag.Args() { - file, err := os.Open(path) - if err != nil { - fmt.Fprintf(os.Stderr, err.Error()) - os.Exit(1) - } - s := bufio.NewScanner(file) - for s.Scan() { - t := s.Text() - t = strings.TrimSpace(t) - t = regexp.MustCompile(`\s+`).ReplaceAllString(t, ` `) - nonblock := regexp.MustCompile(`^\/\/sysnb `).FindStringSubmatch(t) - if regexp.MustCompile(`^\/\/sys `).FindStringSubmatch(t) == nil && nonblock == nil { - continue - } - - // Line must be of the form - // func Open(path string, mode int, perm int) (fd int, errno error) - // Split into name, in params, out params. - f := regexp.MustCompile(`^\/\/sys(nb)? 
(\w+)\(([^()]*)\)\s*(?:\(([^()]+)\))?\s*(?:=\s*((?i)SYS_[A-Z0-9_]+))?$`).FindStringSubmatch(t) - if f == nil { - fmt.Fprintf(os.Stderr, "%s:%s\nmalformed //sys declaration\n", path, t) - os.Exit(1) - } - funct, inps, outps, sysname := f[2], f[3], f[4], f[5] - - // ClockGettime doesn't have a syscall number on Darwin, only generate libc wrappers. - if goos == "darwin" && !libc && funct == "ClockGettime" { - continue - } - - // Split argument lists on comma. - in := parseParamList(inps) - out := parseParamList(outps) - - // Try in vain to keep people from editing this file. - // The theory is that they jump into the middle of the file - // without reading the header. - text += "// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT\n\n" - - // Go function header. - outDecl := "" - if len(out) > 0 { - outDecl = fmt.Sprintf(" (%s)", strings.Join(out, ", ")) - } - text += fmt.Sprintf("func %s(%s)%s {\n", funct, strings.Join(in, ", "), outDecl) - - // Check if err return available - errvar := "" - for _, param := range out { - p := parseParam(param) - if p.Type == "error" { - errvar = p.Name - break - } - } - - // Prepare arguments to Syscall. - var args []string - n := 0 - for _, param := range in { - p := parseParam(param) - if regexp.MustCompile(`^\*`).FindStringSubmatch(p.Type) != nil { - args = append(args, "uintptr(unsafe.Pointer("+p.Name+"))") - } else if p.Type == "string" && errvar != "" { - text += fmt.Sprintf("\tvar _p%d *byte\n", n) - text += fmt.Sprintf("\t_p%d, %s = BytePtrFromString(%s)\n", n, errvar, p.Name) - text += fmt.Sprintf("\tif %s != nil {\n\t\treturn\n\t}\n", errvar) - args = append(args, fmt.Sprintf("uintptr(unsafe.Pointer(_p%d))", n)) - n++ - } else if p.Type == "string" { - fmt.Fprintf(os.Stderr, path+":"+funct+" uses string arguments, but has no error return\n") - text += fmt.Sprintf("\tvar _p%d *byte\n", n) - text += fmt.Sprintf("\t_p%d, _ = BytePtrFromString(%s)\n", n, p.Name) - args = append(args, fmt.Sprintf("uintptr(unsafe.Pointer(_p%d))", n)) - n++ - } else if regexp.MustCompile(`^\[\](.*)`).FindStringSubmatch(p.Type) != nil { - // Convert slice into pointer, length. - // Have to be careful not to take address of &a[0] if len == 0: - // pass dummy pointer in that case. - // Used to pass nil, but some OSes or simulators reject write(fd, nil, 0). 
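Before the slice-argument emission resumes below: the generator's front end recognizes each `//sys` prototype with a single regular expression and splits it into function name, input parameters, and output parameters. A trimmed, runnable sketch of that parse (same regexp shape, with the optional `= SYS_...` suffix omitted):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	line := `//sys Open(path string, mode int, perm int) (fd int, err error)`

	// Normalize whitespace, as the generator above does.
	t := regexp.MustCompile(`\s+`).ReplaceAllString(strings.TrimSpace(line), ` `)

	// Same shape as the generator's prototype regexp.
	re := regexp.MustCompile(`^\/\/sys(nb)? (\w+)\(([^()]*)\)\s*(?:\(([^()]+)\))?$`)
	f := re.FindStringSubmatch(t)
	if f == nil {
		fmt.Println("malformed //sys declaration")
		return
	}
	funct, inps, outps := f[2], f[3], f[4]
	fmt.Printf("func %s\n  in:  %s\n  out: %s\n", funct, inps, outps)
	// func Open
	//   in:  path string, mode int, perm int
	//   out: fd int, err error
}
```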
- text += fmt.Sprintf("\tvar _p%d unsafe.Pointer\n", n) - text += fmt.Sprintf("\tif len(%s) > 0 {\n\t\t_p%d = unsafe.Pointer(&%s[0])\n\t}", p.Name, n, p.Name) - text += fmt.Sprintf(" else {\n\t\t_p%d = unsafe.Pointer(&_zero)\n\t}\n", n) - args = append(args, fmt.Sprintf("uintptr(_p%d)", n), fmt.Sprintf("uintptr(len(%s))", p.Name)) - n++ - } else if p.Type == "int64" && (*openbsd || *netbsd) { - args = append(args, "0") - if endianness == "big-endian" { - args = append(args, fmt.Sprintf("uintptr(%s>>32)", p.Name), fmt.Sprintf("uintptr(%s)", p.Name)) - } else if endianness == "little-endian" { - args = append(args, fmt.Sprintf("uintptr(%s)", p.Name), fmt.Sprintf("uintptr(%s>>32)", p.Name)) - } else { - args = append(args, fmt.Sprintf("uintptr(%s)", p.Name)) - } - } else if p.Type == "int64" && *dragonfly { - if regexp.MustCompile(`^(?i)extp(read|write)`).FindStringSubmatch(funct) == nil { - args = append(args, "0") - } - if endianness == "big-endian" { - args = append(args, fmt.Sprintf("uintptr(%s>>32)", p.Name), fmt.Sprintf("uintptr(%s)", p.Name)) - } else if endianness == "little-endian" { - args = append(args, fmt.Sprintf("uintptr(%s)", p.Name), fmt.Sprintf("uintptr(%s>>32)", p.Name)) - } else { - args = append(args, fmt.Sprintf("uintptr(%s)", p.Name)) - } - } else if (p.Type == "int64" || p.Type == "uint64") && endianness != "" { - if len(args)%2 == 1 && *arm { - // arm abi specifies 64-bit argument uses - // (even, odd) pair - args = append(args, "0") - } - if endianness == "big-endian" { - args = append(args, fmt.Sprintf("uintptr(%s>>32)", p.Name), fmt.Sprintf("uintptr(%s)", p.Name)) - } else { - args = append(args, fmt.Sprintf("uintptr(%s)", p.Name), fmt.Sprintf("uintptr(%s>>32)", p.Name)) - } - } else { - args = append(args, fmt.Sprintf("uintptr(%s)", p.Name)) - } - } - - // Determine which form to use; pad args with zeros. - asm := "Syscall" - if nonblock != nil { - if errvar == "" && goos == "linux" { - asm = "RawSyscallNoError" - } else { - asm = "RawSyscall" - } - } else { - if errvar == "" && goos == "linux" { - asm = "SyscallNoError" - } - } - if len(args) <= 3 { - for len(args) < 3 { - args = append(args, "0") - } - } else if len(args) <= 6 { - asm += "6" - for len(args) < 6 { - args = append(args, "0") - } - } else if len(args) <= 9 { - asm += "9" - for len(args) < 9 { - args = append(args, "0") - } - } else { - fmt.Fprintf(os.Stderr, "%s:%s too many arguments to system call\n", path, funct) - } - - // System call number. - if sysname == "" { - sysname = "SYS_" + funct - sysname = regexp.MustCompile(`([a-z])([A-Z])`).ReplaceAllString(sysname, `${1}_$2`) - sysname = strings.ToUpper(sysname) - } - - var libcFn string - if libc { - asm = "syscall_" + strings.ToLower(asm[:1]) + asm[1:] // internal syscall call - sysname = strings.TrimPrefix(sysname, "SYS_") // remove SYS_ - sysname = strings.ToLower(sysname) // lowercase - if sysname == "getdirentries64" { - // Special case - libSystem name and - // raw syscall name don't match. - sysname = "__getdirentries64" - } - libcFn = sysname - sysname = "funcPC(libc_" + sysname + "_trampoline)" - } - - // Actual call. - arglist := strings.Join(args, ", ") - call := fmt.Sprintf("%s(%s, %s)", asm, sysname, arglist) - - // Assign return values. 
- body := "" - ret := []string{"_", "_", "_"} - doErrno := false - for i := 0; i < len(out); i++ { - p := parseParam(out[i]) - reg := "" - if p.Name == "err" && !*plan9 { - reg = "e1" - ret[2] = reg - doErrno = true - } else if p.Name == "err" && *plan9 { - ret[0] = "r0" - ret[2] = "e1" - break - } else { - reg = fmt.Sprintf("r%d", i) - ret[i] = reg - } - if p.Type == "bool" { - reg = fmt.Sprintf("%s != 0", reg) - } - if p.Type == "int64" && endianness != "" { - // 64-bit number in r1:r0 or r0:r1. - if i+2 > len(out) { - fmt.Fprintf(os.Stderr, "%s:%s not enough registers for int64 return\n", path, funct) - } - if endianness == "big-endian" { - reg = fmt.Sprintf("int64(r%d)<<32 | int64(r%d)", i, i+1) - } else { - reg = fmt.Sprintf("int64(r%d)<<32 | int64(r%d)", i+1, i) - } - ret[i] = fmt.Sprintf("r%d", i) - ret[i+1] = fmt.Sprintf("r%d", i+1) - } - if reg != "e1" || *plan9 { - body += fmt.Sprintf("\t%s = %s(%s)\n", p.Name, p.Type, reg) - } - } - if ret[0] == "_" && ret[1] == "_" && ret[2] == "_" { - text += fmt.Sprintf("\t%s\n", call) - } else { - if errvar == "" && goos == "linux" { - // raw syscall without error on Linux, see golang.org/issue/22924 - text += fmt.Sprintf("\t%s, %s := %s\n", ret[0], ret[1], call) - } else { - text += fmt.Sprintf("\t%s, %s, %s := %s\n", ret[0], ret[1], ret[2], call) - } - } - text += body - - if *plan9 && ret[2] == "e1" { - text += "\tif int32(r0) == -1 {\n" - text += "\t\terr = e1\n" - text += "\t}\n" - } else if doErrno { - text += "\tif e1 != 0 {\n" - text += "\t\terr = errnoErr(e1)\n" - text += "\t}\n" - } - text += "\treturn\n" - text += "}\n\n" - - if libc && !trampolines[libcFn] { - // some system calls share a trampoline, like read and readlen. - trampolines[libcFn] = true - // Declare assembly trampoline. - text += fmt.Sprintf("func libc_%s_trampoline()\n", libcFn) - // Assembly trampoline calls the libc_* function, which this magic - // redirects to use the function from libSystem. - text += fmt.Sprintf("//go:linkname libc_%s libc_%s\n", libcFn, libcFn) - text += fmt.Sprintf("//go:cgo_import_dynamic libc_%s %s \"/usr/lib/libSystem.B.dylib\"\n", libcFn, libcFn) - text += "\n" - } - } - if err := s.Err(); err != nil { - fmt.Fprintf(os.Stderr, err.Error()) - os.Exit(1) - } - file.Close() - } - fmt.Printf(srcTemplate, cmdLine(), buildTags(), text) -} - -const srcTemplate = `// %s -// Code generated by the command above; see README.md. DO NOT EDIT. - -// +build %s - -package unix - -import ( - "syscall" - "unsafe" -) - -var _ syscall.Errno - -%s -` diff --git a/vendor/golang.org/x/sys/unix/mksyscall_aix_ppc.go b/vendor/golang.org/x/sys/unix/mksyscall_aix_ppc.go deleted file mode 100644 index 3be3cdfc3..000000000 --- a/vendor/golang.org/x/sys/unix/mksyscall_aix_ppc.go +++ /dev/null @@ -1,415 +0,0 @@ -// Copyright 2019 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -/* -This program reads a file containing function prototypes -(like syscall_aix.go) and generates system call bodies. -The prototypes are marked by lines beginning with "//sys" -and read like func declarations if //sys is replaced by func, but: - * The parameter lists must give a name for each argument. - This includes return parameters. - * The parameter lists must give a type for each argument: - the (x, y, z int) shorthand is not allowed. - * If the return parameter is an error number, it must be named err. 
- * If go func name needs to be different than its libc name, - * or the function is not in libc, name could be specified - * at the end, after "=" sign, like - //sys getsockopt(s int, level int, name int, val uintptr, vallen *_Socklen) (err error) = libsocket.getsockopt -*/ -package main - -import ( - "bufio" - "flag" - "fmt" - "os" - "regexp" - "strings" -) - -var ( - b32 = flag.Bool("b32", false, "32bit big-endian") - l32 = flag.Bool("l32", false, "32bit little-endian") - aix = flag.Bool("aix", false, "aix") - tags = flag.String("tags", "", "build tags") -) - -// cmdLine returns this programs's commandline arguments -func cmdLine() string { - return "go run mksyscall_aix_ppc.go " + strings.Join(os.Args[1:], " ") -} - -// buildTags returns build tags -func buildTags() string { - return *tags -} - -// Param is function parameter -type Param struct { - Name string - Type string -} - -// usage prints the program usage -func usage() { - fmt.Fprintf(os.Stderr, "usage: go run mksyscall_aix_ppc.go [-b32 | -l32] [-tags x,y] [file ...]\n") - os.Exit(1) -} - -// parseParamList parses parameter list and returns a slice of parameters -func parseParamList(list string) []string { - list = strings.TrimSpace(list) - if list == "" { - return []string{} - } - return regexp.MustCompile(`\s*,\s*`).Split(list, -1) -} - -// parseParam splits a parameter into name and type -func parseParam(p string) Param { - ps := regexp.MustCompile(`^(\S*) (\S*)$`).FindStringSubmatch(p) - if ps == nil { - fmt.Fprintf(os.Stderr, "malformed parameter: %s\n", p) - os.Exit(1) - } - return Param{ps[1], ps[2]} -} - -func main() { - flag.Usage = usage - flag.Parse() - if len(flag.Args()) <= 0 { - fmt.Fprintf(os.Stderr, "no files to parse provided\n") - usage() - } - - endianness := "" - if *b32 { - endianness = "big-endian" - } else if *l32 { - endianness = "little-endian" - } - - pack := "" - text := "" - cExtern := "/*\n#include \n#include \n" - for _, path := range flag.Args() { - file, err := os.Open(path) - if err != nil { - fmt.Fprintf(os.Stderr, err.Error()) - os.Exit(1) - } - s := bufio.NewScanner(file) - for s.Scan() { - t := s.Text() - t = strings.TrimSpace(t) - t = regexp.MustCompile(`\s+`).ReplaceAllString(t, ` `) - if p := regexp.MustCompile(`^package (\S+)$`).FindStringSubmatch(t); p != nil && pack == "" { - pack = p[1] - } - nonblock := regexp.MustCompile(`^\/\/sysnb `).FindStringSubmatch(t) - if regexp.MustCompile(`^\/\/sys `).FindStringSubmatch(t) == nil && nonblock == nil { - continue - } - - // Line must be of the form - // func Open(path string, mode int, perm int) (fd int, err error) - // Split into name, in params, out params. - f := regexp.MustCompile(`^\/\/sys(nb)? (\w+)\(([^()]*)\)\s*(?:\(([^()]+)\))?\s*(?:=\s*(?:(\w*)\.)?(\w*))?$`).FindStringSubmatch(t) - if f == nil { - fmt.Fprintf(os.Stderr, "%s:%s\nmalformed //sys declaration\n", path, t) - os.Exit(1) - } - funct, inps, outps, modname, sysname := f[2], f[3], f[4], f[5], f[6] - - // Split argument lists on comma. - in := parseParamList(inps) - out := parseParamList(outps) - - inps = strings.Join(in, ", ") - outps = strings.Join(out, ", ") - - // Try in vain to keep people from editing this file. - // The theory is that they jump into the middle of the file - // without reading the header. 
- text += "// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT\n\n" - - // Check if value return, err return available - errvar := "" - retvar := "" - rettype := "" - for _, param := range out { - p := parseParam(param) - if p.Type == "error" { - errvar = p.Name - } else { - retvar = p.Name - rettype = p.Type - } - } - - // System call name. - if sysname == "" { - sysname = funct - } - sysname = regexp.MustCompile(`([a-z])([A-Z])`).ReplaceAllString(sysname, `${1}_$2`) - sysname = strings.ToLower(sysname) // All libc functions are lowercase. - - cRettype := "" - if rettype == "unsafe.Pointer" { - cRettype = "uintptr_t" - } else if rettype == "uintptr" { - cRettype = "uintptr_t" - } else if regexp.MustCompile(`^_`).FindStringSubmatch(rettype) != nil { - cRettype = "uintptr_t" - } else if rettype == "int" { - cRettype = "int" - } else if rettype == "int32" { - cRettype = "int" - } else if rettype == "int64" { - cRettype = "long long" - } else if rettype == "uint32" { - cRettype = "unsigned int" - } else if rettype == "uint64" { - cRettype = "unsigned long long" - } else { - cRettype = "int" - } - if sysname == "exit" { - cRettype = "void" - } - - // Change p.Types to c - var cIn []string - for _, param := range in { - p := parseParam(param) - if regexp.MustCompile(`^\*`).FindStringSubmatch(p.Type) != nil { - cIn = append(cIn, "uintptr_t") - } else if p.Type == "string" { - cIn = append(cIn, "uintptr_t") - } else if regexp.MustCompile(`^\[\](.*)`).FindStringSubmatch(p.Type) != nil { - cIn = append(cIn, "uintptr_t", "size_t") - } else if p.Type == "unsafe.Pointer" { - cIn = append(cIn, "uintptr_t") - } else if p.Type == "uintptr" { - cIn = append(cIn, "uintptr_t") - } else if regexp.MustCompile(`^_`).FindStringSubmatch(p.Type) != nil { - cIn = append(cIn, "uintptr_t") - } else if p.Type == "int" { - cIn = append(cIn, "int") - } else if p.Type == "int32" { - cIn = append(cIn, "int") - } else if p.Type == "int64" { - cIn = append(cIn, "long long") - } else if p.Type == "uint32" { - cIn = append(cIn, "unsigned int") - } else if p.Type == "uint64" { - cIn = append(cIn, "unsigned long long") - } else { - cIn = append(cIn, "int") - } - } - - if funct != "fcntl" && funct != "FcntlInt" && funct != "readlen" && funct != "writelen" { - if sysname == "select" { - // select is a keyword of Go. Its name is - // changed to c_select. - cExtern += "#define c_select select\n" - } - // Imports of system calls from libc - cExtern += fmt.Sprintf("%s %s", cRettype, sysname) - cIn := strings.Join(cIn, ", ") - cExtern += fmt.Sprintf("(%s);\n", cIn) - } - - // So file name. - if *aix { - if modname == "" { - modname = "libc.a/shr_64.o" - } else { - fmt.Fprintf(os.Stderr, "%s: only syscall using libc are available\n", funct) - os.Exit(1) - } - } - - strconvfunc := "C.CString" - - // Go function header. - if outps != "" { - outps = fmt.Sprintf(" (%s)", outps) - } - if text != "" { - text += "\n" - } - - text += fmt.Sprintf("func %s(%s)%s {\n", funct, strings.Join(in, ", "), outps) - - // Prepare arguments to Syscall. 
- var args []string - n := 0 - argN := 0 - for _, param := range in { - p := parseParam(param) - if regexp.MustCompile(`^\*`).FindStringSubmatch(p.Type) != nil { - args = append(args, "C.uintptr_t(uintptr(unsafe.Pointer("+p.Name+")))") - } else if p.Type == "string" && errvar != "" { - text += fmt.Sprintf("\t_p%d := uintptr(unsafe.Pointer(%s(%s)))\n", n, strconvfunc, p.Name) - args = append(args, fmt.Sprintf("C.uintptr_t(_p%d)", n)) - n++ - } else if p.Type == "string" { - fmt.Fprintf(os.Stderr, path+":"+funct+" uses string arguments, but has no error return\n") - text += fmt.Sprintf("\t_p%d := uintptr(unsafe.Pointer(%s(%s)))\n", n, strconvfunc, p.Name) - args = append(args, fmt.Sprintf("C.uintptr_t(_p%d)", n)) - n++ - } else if m := regexp.MustCompile(`^\[\](.*)`).FindStringSubmatch(p.Type); m != nil { - // Convert slice into pointer, length. - // Have to be careful not to take address of &a[0] if len == 0: - // pass nil in that case. - text += fmt.Sprintf("\tvar _p%d *%s\n", n, m[1]) - text += fmt.Sprintf("\tif len(%s) > 0 {\n\t\t_p%d = &%s[0]\n\t}\n", p.Name, n, p.Name) - args = append(args, fmt.Sprintf("C.uintptr_t(uintptr(unsafe.Pointer(_p%d)))", n)) - n++ - text += fmt.Sprintf("\tvar _p%d int\n", n) - text += fmt.Sprintf("\t_p%d = len(%s)\n", n, p.Name) - args = append(args, fmt.Sprintf("C.size_t(_p%d)", n)) - n++ - } else if p.Type == "int64" && endianness != "" { - if endianness == "big-endian" { - args = append(args, fmt.Sprintf("uintptr(%s>>32)", p.Name), fmt.Sprintf("uintptr(%s)", p.Name)) - } else { - args = append(args, fmt.Sprintf("uintptr(%s)", p.Name), fmt.Sprintf("uintptr(%s>>32)", p.Name)) - } - n++ - } else if p.Type == "bool" { - text += fmt.Sprintf("\tvar _p%d uint32\n", n) - text += fmt.Sprintf("\tif %s {\n\t\t_p%d = 1\n\t} else {\n\t\t_p%d = 0\n\t}\n", p.Name, n, n) - args = append(args, fmt.Sprintf("_p%d", n)) - } else if regexp.MustCompile(`^_`).FindStringSubmatch(p.Type) != nil { - args = append(args, fmt.Sprintf("C.uintptr_t(uintptr(%s))", p.Name)) - } else if p.Type == "unsafe.Pointer" { - args = append(args, fmt.Sprintf("C.uintptr_t(uintptr(%s))", p.Name)) - } else if p.Type == "int" { - if (argN == 2) && ((funct == "readlen") || (funct == "writelen")) { - args = append(args, fmt.Sprintf("C.size_t(%s)", p.Name)) - } else if argN == 0 && funct == "fcntl" { - args = append(args, fmt.Sprintf("C.uintptr_t(%s)", p.Name)) - } else if (argN == 2) && ((funct == "fcntl") || (funct == "FcntlInt")) { - args = append(args, fmt.Sprintf("C.uintptr_t(%s)", p.Name)) - } else { - args = append(args, fmt.Sprintf("C.int(%s)", p.Name)) - } - } else if p.Type == "int32" { - args = append(args, fmt.Sprintf("C.int(%s)", p.Name)) - } else if p.Type == "int64" { - args = append(args, fmt.Sprintf("C.longlong(%s)", p.Name)) - } else if p.Type == "uint32" { - args = append(args, fmt.Sprintf("C.uint(%s)", p.Name)) - } else if p.Type == "uint64" { - args = append(args, fmt.Sprintf("C.ulonglong(%s)", p.Name)) - } else if p.Type == "uintptr" { - args = append(args, fmt.Sprintf("C.uintptr_t(%s)", p.Name)) - } else { - args = append(args, fmt.Sprintf("C.int(%s)", p.Name)) - } - argN++ - } - - // Actual call. - arglist := strings.Join(args, ", ") - call := "" - if sysname == "exit" { - if errvar != "" { - call += "er :=" - } else { - call += "" - } - } else if errvar != "" { - call += "r0,er :=" - } else if retvar != "" { - call += "r0,_ :=" - } else { - call += "" - } - if sysname == "select" { - // select is a keyword of Go. Its name is - // changed to c_select. 
- call += fmt.Sprintf("C.c_%s(%s)", sysname, arglist) - } else { - call += fmt.Sprintf("C.%s(%s)", sysname, arglist) - } - - // Assign return values. - body := "" - for i := 0; i < len(out); i++ { - p := parseParam(out[i]) - reg := "" - if p.Name == "err" { - reg = "e1" - } else { - reg = "r0" - } - if reg != "e1" { - body += fmt.Sprintf("\t%s = %s(%s)\n", p.Name, p.Type, reg) - } - } - - // verify return - if sysname != "exit" && errvar != "" { - if regexp.MustCompile(`^uintptr`).FindStringSubmatch(cRettype) != nil { - body += "\tif (uintptr(r0) ==^uintptr(0) && er != nil) {\n" - body += fmt.Sprintf("\t\t%s = er\n", errvar) - body += "\t}\n" - } else { - body += "\tif (r0 ==-1 && er != nil) {\n" - body += fmt.Sprintf("\t\t%s = er\n", errvar) - body += "\t}\n" - } - } else if errvar != "" { - body += "\tif (er != nil) {\n" - body += fmt.Sprintf("\t\t%s = er\n", errvar) - body += "\t}\n" - } - - text += fmt.Sprintf("\t%s\n", call) - text += body - - text += "\treturn\n" - text += "}\n" - } - if err := s.Err(); err != nil { - fmt.Fprintf(os.Stderr, err.Error()) - os.Exit(1) - } - file.Close() - } - imp := "" - if pack != "unix" { - imp = "import \"golang.org/x/sys/unix\"\n" - - } - fmt.Printf(srcTemplate, cmdLine(), buildTags(), pack, cExtern, imp, text) -} - -const srcTemplate = `// %s -// Code generated by the command above; see README.md. DO NOT EDIT. - -// +build %s - -package %s - - -%s -*/ -import "C" -import ( - "unsafe" -) - - -%s - -%s -` diff --git a/vendor/golang.org/x/sys/unix/mksyscall_aix_ppc64.go b/vendor/golang.org/x/sys/unix/mksyscall_aix_ppc64.go deleted file mode 100644 index c96009951..000000000 --- a/vendor/golang.org/x/sys/unix/mksyscall_aix_ppc64.go +++ /dev/null @@ -1,614 +0,0 @@ -// Copyright 2019 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -/* -This program reads a file containing function prototypes -(like syscall_aix.go) and generates system call bodies. -The prototypes are marked by lines beginning with "//sys" -and read like func declarations if //sys is replaced by func, but: - * The parameter lists must give a name for each argument. - This includes return parameters. - * The parameter lists must give a type for each argument: - the (x, y, z int) shorthand is not allowed. - * If the return parameter is an error number, it must be named err. - * If the go func name needs to be different from its libc name, - * or the function is not in libc, the name can be specified - * at the end, after the "=" sign, like - //sys getsockopt(s int, level int, name int, val uintptr, vallen *_Socklen) (err error) = libsocket.getsockopt - - -This program will generate three files and handle both gc and gccgo implementations: - - zsyscall_aix_ppc64.go: the common part of each implementation (error handler, pointer creation) - - zsyscall_aix_ppc64_gc.go: gc part with //go:cgo_import_dynamic and a call to syscall6 - - zsyscall_aix_ppc64_gccgo.go: gccgo part with C function and conversion to C type. - - The generated code looks like this - -zsyscall_aix_ppc64.go -func asyscall(...) (n int, err error) { - // Pointer Creation - r1, e1 := callasyscall(...) - // Type Conversion - // Error Handler - return -} - -zsyscall_aix_ppc64_gc.go -//go:cgo_import_dynamic libc_asyscall asyscall "libc.a/shr_64.o" -//go:linkname libc_asyscall libc_asyscall -var asyscall syscallFunc - -func callasyscall(...) (r1 uintptr, e1 Errno) { - r1, _, e1 = syscall6(uintptr(unsafe.Pointer(&libc_asyscall)), "nb_args", ... ) - return -} - -zsyscall_aix_ppc64_gccgo.go - -// int asyscall(...) - -import "C" - -func callasyscall(...) (r1 uintptr, e1 Errno) { - r1 = uintptr(C.asyscall(...)) - e1 = syscall.GetErrno() - return -} -*/ - -package main - -import ( - "bufio" - "flag" - "fmt" - "io/ioutil" - "os" - "regexp" - "strings" -) - -var ( - b32 = flag.Bool("b32", false, "32bit big-endian") - l32 = flag.Bool("l32", false, "32bit little-endian") - aix = flag.Bool("aix", false, "aix") - tags = flag.String("tags", "", "build tags") -) - -// cmdLine returns this program's commandline arguments -func cmdLine() string { - return "go run mksyscall_aix_ppc64.go " + strings.Join(os.Args[1:], " ") -} - -// buildTags returns build tags -func buildTags() string { - return *tags -} - -// Param is function parameter -type Param struct { - Name string - Type string -} - -// usage prints the program usage -func usage() { - fmt.Fprintf(os.Stderr, "usage: go run mksyscall_aix_ppc64.go [-b32 | -l32] [-tags x,y] [file ...]\n") - os.Exit(1) -} - -// parseParamList parses parameter list and returns a slice of parameters -func parseParamList(list string) []string { - list = strings.TrimSpace(list) - if list == "" { - return []string{} - } - return regexp.MustCompile(`\s*,\s*`).Split(list, -1) -} - -// parseParam splits a parameter into name and type -func parseParam(p string) Param { - ps := regexp.MustCompile(`^(\S*) (\S*)$`).FindStringSubmatch(p) - if ps == nil { - fmt.Fprintf(os.Stderr, "malformed parameter: %s\n", p) - os.Exit(1) - } - return Param{ps[1], ps[2]} -} - -func main() { - flag.Usage = usage - flag.Parse() - if len(flag.Args()) <= 0 { - fmt.Fprintf(os.Stderr, "no files to parse provided\n") - usage() - } - - endianness := "" - if *b32 { - endianness = "big-endian" - } else if *l32 { - endianness = "little-endian" - } - - pack := "" - // GCCGO - textgccgo := "" - cExtern := "/*\n#include <stdint.h>\n" - // GC - textgc := "" - dynimports := "" - linknames := "" - var vars []string - // COMMON - textcommon := "" - for _, path := range flag.Args() { - file, err := os.Open(path) - if err != nil { - fmt.Fprintf(os.Stderr, err.Error()) - os.Exit(1) - } - s := bufio.NewScanner(file) - for s.Scan() { - t := s.Text() - t = strings.TrimSpace(t) - t = regexp.MustCompile(`\s+`).ReplaceAllString(t, ` `) - if p := regexp.MustCompile(`^package (\S+)$`).FindStringSubmatch(t); p != nil && pack == "" { - pack = p[1] - } - nonblock := regexp.MustCompile(`^\/\/sysnb `).FindStringSubmatch(t) - if regexp.MustCompile(`^\/\/sys `).FindStringSubmatch(t) == nil && nonblock == nil { - continue - } - - // Line must be of the form - // func Open(path string, mode int, perm int) (fd int, err error) - // Split into name, in params, out params. - f := regexp.MustCompile(`^\/\/sys(nb)? (\w+)\(([^()]*)\)\s*(?:\(([^()]+)\))?\s*(?:=\s*(?:(\w*)\.)?(\w*))?$`).FindStringSubmatch(t) - if f == nil { - fmt.Fprintf(os.Stderr, "%s:%s\nmalformed //sys declaration\n", path, t) - os.Exit(1) - } - funct, inps, outps, modname, sysname := f[2], f[3], f[4], f[5], f[6] - - // Split argument lists on comma. - in := parseParamList(inps) - out := parseParamList(outps) - - inps = strings.Join(in, ", ") - outps = strings.Join(out, ", ") - - if sysname == "" { - sysname = funct - } - - onlyCommon := false - if funct == "readlen" || funct == "writelen" || funct == "FcntlInt" || funct == "FcntlFlock" { - // These functions call another syscall which is already implemented. - // Therefore, the gc and gccgo parts must not be generated.
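// Illustrative sketch (assumed, based on the header comment above): for a
// declaration like
//
//	//sys	readlen(fd int, p *byte, np int) (n int, err error) = read
//
// sysname resolves to "read", so only the common wrapper is emitted; it
// forwards to the callread stub that the plain read declaration already
// generated, roughly:
//
//	func readlen(fd int, p *byte, np int) (n int, err error) {
//		r0, e1 := callread(fd, uintptr(unsafe.Pointer(p)), np)
//		n = int(r0)
//		if e1 != 0 {
//			err = errnoErr(e1)
//		}
//		return
//	}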
- onlyCommon = true - } - - // Try in vain to keep people from editing this file. - // The theory is that they jump into the middle of the file - // without reading the header. - - textcommon += "// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT\n\n" - if !onlyCommon { - textgccgo += "// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT\n\n" - textgc += "// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT\n\n" - } - - // Check if value return and err return are available - errvar := "" - rettype := "" - for _, param := range out { - p := parseParam(param) - if p.Type == "error" { - errvar = p.Name - } else { - rettype = p.Type - } - } - - sysname = regexp.MustCompile(`([a-z])([A-Z])`).ReplaceAllString(sysname, `${1}_$2`) - sysname = strings.ToLower(sysname) // All libc functions are lowercase. - - // GCCGO Prototype return type - cRettype := "" - if rettype == "unsafe.Pointer" { - cRettype = "uintptr_t" - } else if rettype == "uintptr" { - cRettype = "uintptr_t" - } else if regexp.MustCompile(`^_`).FindStringSubmatch(rettype) != nil { - cRettype = "uintptr_t" - } else if rettype == "int" { - cRettype = "int" - } else if rettype == "int32" { - cRettype = "int" - } else if rettype == "int64" { - cRettype = "long long" - } else if rettype == "uint32" { - cRettype = "unsigned int" - } else if rettype == "uint64" { - cRettype = "unsigned long long" - } else { - cRettype = "int" - } - if sysname == "exit" { - cRettype = "void" - } - - // GCCGO Prototype arguments type - var cIn []string - for i, param := range in { - p := parseParam(param) - if regexp.MustCompile(`^\*`).FindStringSubmatch(p.Type) != nil { - cIn = append(cIn, "uintptr_t") - } else if p.Type == "string" { - cIn = append(cIn, "uintptr_t") - } else if regexp.MustCompile(`^\[\](.*)`).FindStringSubmatch(p.Type) != nil { - cIn = append(cIn, "uintptr_t", "size_t") - } else if p.Type == "unsafe.Pointer" { - cIn = append(cIn, "uintptr_t") - } else if p.Type == "uintptr" { - cIn = append(cIn, "uintptr_t") - } else if regexp.MustCompile(`^_`).FindStringSubmatch(p.Type) != nil { - cIn = append(cIn, "uintptr_t") - } else if p.Type == "int" { - if (i == 0 || i == 2) && funct == "fcntl" { - // These fcntl arguments need to be uintptr to be able to call FcntlInt and FcntlFlock - cIn = append(cIn, "uintptr_t") - } else { - cIn = append(cIn, "int") - } - - } else if p.Type == "int32" { - cIn = append(cIn, "int") - } else if p.Type == "int64" { - cIn = append(cIn, "long long") - } else if p.Type == "uint32" { - cIn = append(cIn, "unsigned int") - } else if p.Type == "uint64" { - cIn = append(cIn, "unsigned long long") - } else { - cIn = append(cIn, "int") - } - } - - if !onlyCommon { - // GCCGO Prototype Generation - // Imports of system calls from libc - if sysname == "select" { - // select is a keyword of Go. Its name is - // changed to c_select. - cExtern += "#define c_select select\n" - } - cExtern += fmt.Sprintf("%s %s", cRettype, sysname) - cIn := strings.Join(cIn, ", ") - cExtern += fmt.Sprintf("(%s);\n", cIn) - } - // GC Library name - if modname == "" { - modname = "libc.a/shr_64.o" - } else { - fmt.Fprintf(os.Stderr, "%s: only syscalls using libc are available\n", funct) - os.Exit(1) - } - sysvarname := fmt.Sprintf("libc_%s", sysname) - - if !onlyCommon { - // GC Runtime import of function to allow cross-platform builds.
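// Illustrative sketch (hypothetical syscall name): for a getpid declaration,
// the strings assembled below would accumulate
//
//	//go:cgo_import_dynamic libc_getpid getpid "libc.a/shr_64.o"
//	//go:linkname libc_getpid libc_getpid
//
// in the generated _gc file, and "libc_getpid" would be recorded in vars so
// that it is later declared there as a syscallFunc.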
- dynimports += fmt.Sprintf("//go:cgo_import_dynamic %s %s \"%s\"\n", sysvarname, sysname, modname) - // GC Link symbol to proc address variable. - linknames += fmt.Sprintf("//go:linkname %s %s\n", sysvarname, sysvarname) - // GC Library proc address variable. - vars = append(vars, sysvarname) - } - - strconvfunc := "BytePtrFromString" - strconvtype := "*byte" - - // Go function header. - if outps != "" { - outps = fmt.Sprintf(" (%s)", outps) - } - if textcommon != "" { - textcommon += "\n" - } - - textcommon += fmt.Sprintf("func %s(%s)%s {\n", funct, strings.Join(in, ", "), outps) - - // Prepare arguments to call. - var argscommon []string // Arguments in the common part - var argscall []string // Arguments for call prototype - var argsgc []string // Arguments for gc call (with syscall6) - var argsgccgo []string // Arguments for gccgo call (with C.name_of_syscall) - n := 0 - argN := 0 - for _, param := range in { - p := parseParam(param) - if regexp.MustCompile(`^\*`).FindStringSubmatch(p.Type) != nil { - argscommon = append(argscommon, fmt.Sprintf("uintptr(unsafe.Pointer(%s))", p.Name)) - argscall = append(argscall, fmt.Sprintf("%s uintptr", p.Name)) - argsgc = append(argsgc, p.Name) - argsgccgo = append(argsgccgo, fmt.Sprintf("C.uintptr_t(%s)", p.Name)) - } else if p.Type == "string" && errvar != "" { - textcommon += fmt.Sprintf("\tvar _p%d %s\n", n, strconvtype) - textcommon += fmt.Sprintf("\t_p%d, %s = %s(%s)\n", n, errvar, strconvfunc, p.Name) - textcommon += fmt.Sprintf("\tif %s != nil {\n\t\treturn\n\t}\n", errvar) - - argscommon = append(argscommon, fmt.Sprintf("uintptr(unsafe.Pointer(_p%d))", n)) - argscall = append(argscall, fmt.Sprintf("_p%d uintptr ", n)) - argsgc = append(argsgc, fmt.Sprintf("_p%d", n)) - argsgccgo = append(argsgccgo, fmt.Sprintf("C.uintptr_t(_p%d)", n)) - n++ - } else if p.Type == "string" { - fmt.Fprintf(os.Stderr, path+":"+funct+" uses string arguments, but has no error return\n") - textcommon += fmt.Sprintf("\tvar _p%d %s\n", n, strconvtype) - textcommon += fmt.Sprintf("\t_p%d, %s = %s(%s)\n", n, errvar, strconvfunc, p.Name) - textcommon += fmt.Sprintf("\tif %s != nil {\n\t\treturn\n\t}\n", errvar) - - argscommon = append(argscommon, fmt.Sprintf("uintptr(unsafe.Pointer(_p%d))", n)) - argscall = append(argscall, fmt.Sprintf("_p%d uintptr", n)) - argsgc = append(argsgc, fmt.Sprintf("_p%d", n)) - argsgccgo = append(argsgccgo, fmt.Sprintf("C.uintptr_t(_p%d)", n)) - n++ - } else if m := regexp.MustCompile(`^\[\](.*)`).FindStringSubmatch(p.Type); m != nil { - // Convert slice into pointer, length. - // Have to be careful not to take address of &a[0] if len == 0: - // pass nil in that case. - textcommon += fmt.Sprintf("\tvar _p%d *%s\n", n, m[1]) - textcommon += fmt.Sprintf("\tif len(%s) > 0 {\n\t\t_p%d = &%s[0]\n\t}\n", p.Name, n, p.Name) - argscommon = append(argscommon, fmt.Sprintf("uintptr(unsafe.Pointer(_p%d))", n), fmt.Sprintf("len(%s)", p.Name)) - argscall = append(argscall, fmt.Sprintf("_p%d uintptr", n), fmt.Sprintf("_lenp%d int", n)) - argsgc = append(argsgc, fmt.Sprintf("_p%d", n), fmt.Sprintf("uintptr(_lenp%d)", n)) - argsgccgo = append(argsgccgo, fmt.Sprintf("C.uintptr_t(_p%d)", n), fmt.Sprintf("C.size_t(_lenp%d)", n)) - n++ - } else if p.Type == "int64" && endianness != "" { - fmt.Fprintf(os.Stderr, path+":"+funct+" uses int64 with 32 bits mode. Case not yet implemented\n") - } else if p.Type == "bool" { - fmt.Fprintf(os.Stderr, path+":"+funct+" uses bool.
Case not yet implemented\n") - } else if regexp.MustCompile(`^_`).FindStringSubmatch(p.Type) != nil || p.Type == "unsafe.Pointer" { - argscommon = append(argscommon, fmt.Sprintf("uintptr(%s)", p.Name)) - argscall = append(argscall, fmt.Sprintf("%s uintptr", p.Name)) - argsgc = append(argsgc, p.Name) - argsgccgo = append(argsgccgo, fmt.Sprintf("C.uintptr_t(%s)", p.Name)) - } else if p.Type == "int" { - if (argN == 0 || argN == 2) && ((funct == "fcntl") || (funct == "FcntlInt") || (funct == "FcntlFlock")) { - // These fcntl arguments need to be uintptr to be able to call FcntlInt and FcntlFlock - argscommon = append(argscommon, fmt.Sprintf("uintptr(%s)", p.Name)) - argscall = append(argscall, fmt.Sprintf("%s uintptr", p.Name)) - argsgc = append(argsgc, p.Name) - argsgccgo = append(argsgccgo, fmt.Sprintf("C.uintptr_t(%s)", p.Name)) - - } else { - argscommon = append(argscommon, p.Name) - argscall = append(argscall, fmt.Sprintf("%s int", p.Name)) - argsgc = append(argsgc, fmt.Sprintf("uintptr(%s)", p.Name)) - argsgccgo = append(argsgccgo, fmt.Sprintf("C.int(%s)", p.Name)) - } - } else if p.Type == "int32" { - argscommon = append(argscommon, p.Name) - argscall = append(argscall, fmt.Sprintf("%s int32", p.Name)) - argsgc = append(argsgc, fmt.Sprintf("uintptr(%s)", p.Name)) - argsgccgo = append(argsgccgo, fmt.Sprintf("C.int(%s)", p.Name)) - } else if p.Type == "int64" { - argscommon = append(argscommon, p.Name) - argscall = append(argscall, fmt.Sprintf("%s int64", p.Name)) - argsgc = append(argsgc, fmt.Sprintf("uintptr(%s)", p.Name)) - argsgccgo = append(argsgccgo, fmt.Sprintf("C.longlong(%s)", p.Name)) - } else if p.Type == "uint32" { - argscommon = append(argscommon, p.Name) - argscall = append(argscall, fmt.Sprintf("%s uint32", p.Name)) - argsgc = append(argsgc, fmt.Sprintf("uintptr(%s)", p.Name)) - argsgccgo = append(argsgccgo, fmt.Sprintf("C.uint(%s)", p.Name)) - } else if p.Type == "uint64" { - argscommon = append(argscommon, p.Name) - argscall = append(argscall, fmt.Sprintf("%s uint64", p.Name)) - argsgc = append(argsgc, fmt.Sprintf("uintptr(%s)", p.Name)) - argsgccgo = append(argsgccgo, fmt.Sprintf("C.ulonglong(%s)", p.Name)) - } else if p.Type == "uintptr" { - argscommon = append(argscommon, p.Name) - argscall = append(argscall, fmt.Sprintf("%s uintptr", p.Name)) - argsgc = append(argsgc, p.Name) - argsgccgo = append(argsgccgo, fmt.Sprintf("C.uintptr_t(%s)", p.Name)) - } else { - argscommon = append(argscommon, fmt.Sprintf("int(%s)", p.Name)) - argscall = append(argscall, fmt.Sprintf("%s int", p.Name)) - argsgc = append(argsgc, fmt.Sprintf("uintptr(%s)", p.Name)) - argsgccgo = append(argsgccgo, fmt.Sprintf("C.int(%s)", p.Name)) - } - argN++ - } - nargs := len(argsgc) - - // COMMON function generation - argscommonlist := strings.Join(argscommon, ", ") - callcommon := fmt.Sprintf("call%s(%s)", sysname, argscommonlist) - ret := []string{"_", "_"} - body := "" - doErrno := false - for i := 0; i < len(out); i++ { - p := parseParam(out[i]) - reg := "" - if p.Name == "err" { - reg = "e1" - ret[1] = reg - doErrno = true - } else { - reg = "r0" - ret[0] = reg - } - if p.Type == "bool" { - reg = fmt.Sprintf("%s != 0", reg) - } - if reg != "e1" { - body += fmt.Sprintf("\t%s = %s(%s)\n", p.Name, p.Type, reg) - } - } - if ret[0] == "_" && ret[1] == "_" { - textcommon += fmt.Sprintf("\t%s\n", callcommon) - } else { - textcommon += fmt.Sprintf("\t%s, %s := %s\n", ret[0], ret[1], callcommon) - } - textcommon += body - - if doErrno { - textcommon += "\tif e1 != 0 {\n" - textcommon += "\t\terr = 
errnoErr(e1)\n" - textcommon += "\t}\n" - } - textcommon += "\treturn\n" - textcommon += "}\n" - - if onlyCommon { - continue - } - - // CALL Prototype - callProto := fmt.Sprintf("func call%s(%s) (r1 uintptr, e1 Errno) {\n", sysname, strings.Join(argscall, ", ")) - - // GC function generation - asm := "syscall6" - if nonblock != nil { - asm = "rawSyscall6" - } - - if len(argsgc) <= 6 { - for len(argsgc) < 6 { - argsgc = append(argsgc, "0") - } - } else { - fmt.Fprintf(os.Stderr, "%s: too many arguments to system call", funct) - os.Exit(1) - } - argsgclist := strings.Join(argsgc, ", ") - callgc := fmt.Sprintf("%s(uintptr(unsafe.Pointer(&%s)), %d, %s)", asm, sysvarname, nargs, argsgclist) - - textgc += callProto - textgc += fmt.Sprintf("\tr1, _, e1 = %s\n", callgc) - textgc += "\treturn\n}\n" - - // GCCGO function generation - argsgccgolist := strings.Join(argsgccgo, ", ") - var callgccgo string - if sysname == "select" { - // select is a keyword of Go. Its name is - // changed to c_select. - callgccgo = fmt.Sprintf("C.c_%s(%s)", sysname, argsgccgolist) - } else { - callgccgo = fmt.Sprintf("C.%s(%s)", sysname, argsgccgolist) - } - textgccgo += callProto - textgccgo += fmt.Sprintf("\tr1 = uintptr(%s)\n", callgccgo) - textgccgo += "\te1 = syscall.GetErrno()\n" - textgccgo += "\treturn\n}\n" - } - if err := s.Err(); err != nil { - fmt.Fprintf(os.Stderr, err.Error()) - os.Exit(1) - } - file.Close() - } - imp := "" - if pack != "unix" { - imp = "import \"golang.org/x/sys/unix\"\n" - - } - - // Print zsyscall_aix_ppc64.go - err := ioutil.WriteFile("zsyscall_aix_ppc64.go", - []byte(fmt.Sprintf(srcTemplate1, cmdLine(), buildTags(), pack, imp, textcommon)), - 0644) - if err != nil { - fmt.Fprintf(os.Stderr, err.Error()) - os.Exit(1) - } - - // Print zsyscall_aix_ppc64_gc.go - vardecls := "\t" + strings.Join(vars, ",\n\t") - vardecls += " syscallFunc" - err = ioutil.WriteFile("zsyscall_aix_ppc64_gc.go", - []byte(fmt.Sprintf(srcTemplate2, cmdLine(), buildTags(), pack, imp, dynimports, linknames, vardecls, textgc)), - 0644) - if err != nil { - fmt.Fprintf(os.Stderr, err.Error()) - os.Exit(1) - } - - // Print zsyscall_aix_ppc64_gccgo.go - err = ioutil.WriteFile("zsyscall_aix_ppc64_gccgo.go", - []byte(fmt.Sprintf(srcTemplate3, cmdLine(), buildTags(), pack, cExtern, imp, textgccgo)), - 0644) - if err != nil { - fmt.Fprintf(os.Stderr, err.Error()) - os.Exit(1) - } -} - -const srcTemplate1 = `// %s -// Code generated by the command above; see README.md. DO NOT EDIT. - -// +build %s - -package %s - -import ( - "unsafe" -) - - -%s - -%s -` -const srcTemplate2 = `// %s -// Code generated by the command above; see README.md. DO NOT EDIT. - -// +build %s -// +build !gccgo - -package %s - -import ( - "unsafe" -) -%s -%s -%s -type syscallFunc uintptr - -var ( -%s -) - -// Implemented in runtime/syscall_aix.go. -func rawSyscall6(trap, nargs, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err Errno) -func syscall6(trap, nargs, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err Errno) - -%s -` -const srcTemplate3 = `// %s -// Code generated by the command above; see README.md. DO NOT EDIT. - -// +build %s -// +build gccgo - -package %s - -%s -*/ -import "C" -import ( - "syscall" -) - - -%s - -%s -` diff --git a/vendor/golang.org/x/sys/unix/mksyscall_solaris.go b/vendor/golang.org/x/sys/unix/mksyscall_solaris.go deleted file mode 100644 index 3d864738b..000000000 --- a/vendor/golang.org/x/sys/unix/mksyscall_solaris.go +++ /dev/null @@ -1,335 +0,0 @@ -// Copyright 2019 The Go Authors. All rights reserved. 
-// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -/* - This program reads a file containing function prototypes - (like syscall_solaris.go) and generates system call bodies. - The prototypes are marked by lines beginning with "//sys" - and read like func declarations if //sys is replaced by func, but: - * The parameter lists must give a name for each argument. - This includes return parameters. - * The parameter lists must give a type for each argument: - the (x, y, z int) shorthand is not allowed. - * If the return parameter is an error number, it must be named err. - * If the go func name needs to be different from its libc name, - * or the function is not in libc, the name can be specified - * at the end, after the "=" sign, like - //sys getsockopt(s int, level int, name int, val uintptr, vallen *_Socklen) (err error) = libsocket.getsockopt -*/ - -package main - -import ( - "bufio" - "flag" - "fmt" - "os" - "regexp" - "strings" -) - -var ( - b32 = flag.Bool("b32", false, "32bit big-endian") - l32 = flag.Bool("l32", false, "32bit little-endian") - tags = flag.String("tags", "", "build tags") -) - -// cmdLine returns this program's commandline arguments -func cmdLine() string { - return "go run mksyscall_solaris.go " + strings.Join(os.Args[1:], " ") -} - -// buildTags returns build tags -func buildTags() string { - return *tags -} - -// Param is function parameter -type Param struct { - Name string - Type string -} - -// usage prints the program usage -func usage() { - fmt.Fprintf(os.Stderr, "usage: go run mksyscall_solaris.go [-b32 | -l32] [-tags x,y] [file ...]\n") - os.Exit(1) -} - -// parseParamList parses parameter list and returns a slice of parameters -func parseParamList(list string) []string { - list = strings.TrimSpace(list) - if list == "" { - return []string{} - } - return regexp.MustCompile(`\s*,\s*`).Split(list, -1) -} - -// parseParam splits a parameter into name and type -func parseParam(p string) Param { - ps := regexp.MustCompile(`^(\S*) (\S*)$`).FindStringSubmatch(p) - if ps == nil { - fmt.Fprintf(os.Stderr, "malformed parameter: %s\n", p) - os.Exit(1) - } - return Param{ps[1], ps[2]} -} - -func main() { - flag.Usage = usage - flag.Parse() - if len(flag.Args()) <= 0 { - fmt.Fprintf(os.Stderr, "no files to parse provided\n") - usage() - } - - endianness := "" - if *b32 { - endianness = "big-endian" - } else if *l32 { - endianness = "little-endian" - } - - pack := "" - text := "" - dynimports := "" - linknames := "" - var vars []string - for _, path := range flag.Args() { - file, err := os.Open(path) - if err != nil { - fmt.Fprintf(os.Stderr, err.Error()) - os.Exit(1) - } - s := bufio.NewScanner(file) - for s.Scan() { - t := s.Text() - t = strings.TrimSpace(t) - t = regexp.MustCompile(`\s+`).ReplaceAllString(t, ` `) - if p := regexp.MustCompile(`^package (\S+)$`).FindStringSubmatch(t); p != nil && pack == "" { - pack = p[1] - } - nonblock := regexp.MustCompile(`^\/\/sysnb `).FindStringSubmatch(t) - if regexp.MustCompile(`^\/\/sys `).FindStringSubmatch(t) == nil && nonblock == nil { - continue - } - - // Line must be of the form - // func Open(path string, mode int, perm int) (fd int, err error) - // Split into name, in params, out params. - f := regexp.MustCompile(`^\/\/sys(nb)? 
(\w+)\(([^()]*)\)\s*(?:\(([^()]+)\))?\s*(?:=\s*(?:(\w*)\.)?(\w*))?$`).FindStringSubmatch(t) - if f == nil { - fmt.Fprintf(os.Stderr, "%s:%s\nmalformed //sys declaration\n", path, t) - os.Exit(1) - } - funct, inps, outps, modname, sysname := f[2], f[3], f[4], f[5], f[6] - - // Split argument lists on comma. - in := parseParamList(inps) - out := parseParamList(outps) - - inps = strings.Join(in, ", ") - outps = strings.Join(out, ", ") - - // Try in vain to keep people from editing this file. - // The theory is that they jump into the middle of the file - // without reading the header. - text += "// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT\n\n" - - // So file name. - if modname == "" { - modname = "libc" - } - - // System call name. - if sysname == "" { - sysname = funct - } - - // System call pointer variable name. - sysvarname := fmt.Sprintf("proc%s", sysname) - - strconvfunc := "BytePtrFromString" - strconvtype := "*byte" - - sysname = strings.ToLower(sysname) // All libc functions are lowercase. - - // Runtime import of function to allow cross-platform builds. - dynimports += fmt.Sprintf("//go:cgo_import_dynamic libc_%s %s \"%s.so\"\n", sysname, sysname, modname) - // Link symbol to proc address variable. - linknames += fmt.Sprintf("//go:linkname %s libc_%s\n", sysvarname, sysname) - // Library proc address variable. - vars = append(vars, sysvarname) - - // Go function header. - outlist := strings.Join(out, ", ") - if outlist != "" { - outlist = fmt.Sprintf(" (%s)", outlist) - } - if text != "" { - text += "\n" - } - text += fmt.Sprintf("func %s(%s)%s {\n", funct, strings.Join(in, ", "), outlist) - - // Check if err return available - errvar := "" - for _, param := range out { - p := parseParam(param) - if p.Type == "error" { - errvar = p.Name - continue - } - } - - // Prepare arguments to Syscall. - var args []string - n := 0 - for _, param := range in { - p := parseParam(param) - if regexp.MustCompile(`^\*`).FindStringSubmatch(p.Type) != nil { - args = append(args, "uintptr(unsafe.Pointer("+p.Name+"))") - } else if p.Type == "string" && errvar != "" { - text += fmt.Sprintf("\tvar _p%d %s\n", n, strconvtype) - text += fmt.Sprintf("\t_p%d, %s = %s(%s)\n", n, errvar, strconvfunc, p.Name) - text += fmt.Sprintf("\tif %s != nil {\n\t\treturn\n\t}\n", errvar) - args = append(args, fmt.Sprintf("uintptr(unsafe.Pointer(_p%d))", n)) - n++ - } else if p.Type == "string" { - fmt.Fprintf(os.Stderr, path+":"+funct+" uses string arguments, but has no error return\n") - text += fmt.Sprintf("\tvar _p%d %s\n", n, strconvtype) - text += fmt.Sprintf("\t_p%d, _ = %s(%s)\n", n, strconvfunc, p.Name) - args = append(args, fmt.Sprintf("uintptr(unsafe.Pointer(_p%d))", n)) - n++ - } else if s := regexp.MustCompile(`^\[\](.*)`).FindStringSubmatch(p.Type); s != nil { - // Convert slice into pointer, length. - // Have to be careful not to take address of &a[0] if len == 0: - // pass nil in that case. 
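// Illustrative sketch (assumed output shape) of the code emitted below for a
// hypothetical declaration
//
//	//sys	write(fd int, p []byte) (n int, err error)
//
// where procwrite is the proc address variable created above and nargs (3)
// is counted before zero-padding:
//
//	func write(fd int, p []byte) (n int, err error) {
//		var _p0 *byte
//		if len(p) > 0 {
//			_p0 = &p[0]
//		}
//		r0, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procwrite)), 3, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(len(p)), 0, 0, 0)
//		n = int(r0)
//		if e1 != 0 {
//			err = e1
//		}
//		return
//	}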
- text += fmt.Sprintf("\tvar _p%d *%s\n", n, s[1]) - text += fmt.Sprintf("\tif len(%s) > 0 {\n\t\t_p%d = &%s[0]\n\t}\n", p.Name, n, p.Name) - args = append(args, fmt.Sprintf("uintptr(unsafe.Pointer(_p%d))", n), fmt.Sprintf("uintptr(len(%s))", p.Name)) - n++ - } else if p.Type == "int64" && endianness != "" { - if endianness == "big-endian" { - args = append(args, fmt.Sprintf("uintptr(%s>>32)", p.Name), fmt.Sprintf("uintptr(%s)", p.Name)) - } else { - args = append(args, fmt.Sprintf("uintptr(%s)", p.Name), fmt.Sprintf("uintptr(%s>>32)", p.Name)) - } - } else if p.Type == "bool" { - text += fmt.Sprintf("\tvar _p%d uint32\n", n) - text += fmt.Sprintf("\tif %s {\n\t\t_p%d = 1\n\t} else {\n\t\t_p%d = 0\n\t}\n", p.Name, n, n) - args = append(args, fmt.Sprintf("uintptr(_p%d)", n)) - n++ - } else { - args = append(args, fmt.Sprintf("uintptr(%s)", p.Name)) - } - } - nargs := len(args) - - // Determine which form to use; pad args with zeros. - asm := "sysvicall6" - if nonblock != nil { - asm = "rawSysvicall6" - } - if len(args) <= 6 { - for len(args) < 6 { - args = append(args, "0") - } - } else { - fmt.Fprintf(os.Stderr, "%s: too many arguments to system call\n", path) - os.Exit(1) - } - - // Actual call. - arglist := strings.Join(args, ", ") - call := fmt.Sprintf("%s(uintptr(unsafe.Pointer(&%s)), %d, %s)", asm, sysvarname, nargs, arglist) - - // Assign return values. - body := "" - ret := []string{"_", "_", "_"} - doErrno := false - for i := 0; i < len(out); i++ { - p := parseParam(out[i]) - reg := "" - if p.Name == "err" { - reg = "e1" - ret[2] = reg - doErrno = true - } else { - reg = fmt.Sprintf("r%d", i) - ret[i] = reg - } - if p.Type == "bool" { - reg = fmt.Sprintf("%s != 0", reg) - } - if p.Type == "int64" && endianness != "" { - // 64-bit number in r1:r0 or r0:r1. - if i+2 > len(out) { - fmt.Fprintf(os.Stderr, "%s: not enough registers for int64 return\n", path) - os.Exit(1) - } - if endianness == "big-endian" { - reg = fmt.Sprintf("int64(r%d)<<32 | int64(r%d)", i, i+1) - } else { - reg = fmt.Sprintf("int64(r%d)<<32 | int64(r%d)", i+1, i) - } - ret[i] = fmt.Sprintf("r%d", i) - ret[i+1] = fmt.Sprintf("r%d", i+1) - } - if reg != "e1" { - body += fmt.Sprintf("\t%s = %s(%s)\n", p.Name, p.Type, reg) - } - } - if ret[0] == "_" && ret[1] == "_" && ret[2] == "_" { - text += fmt.Sprintf("\t%s\n", call) - } else { - text += fmt.Sprintf("\t%s, %s, %s := %s\n", ret[0], ret[1], ret[2], call) - } - text += body - - if doErrno { - text += "\tif e1 != 0 {\n" - text += "\t\terr = e1\n" - text += "\t}\n" - } - text += "\treturn\n" - text += "}\n" - } - if err := s.Err(); err != nil { - fmt.Fprintf(os.Stderr, err.Error()) - os.Exit(1) - } - file.Close() - } - imp := "" - if pack != "unix" { - imp = "import \"golang.org/x/sys/unix\"\n" - - } - vardecls := "\t" + strings.Join(vars, ",\n\t") - vardecls += " syscallFunc" - fmt.Printf(srcTemplate, cmdLine(), buildTags(), pack, imp, dynimports, linknames, vardecls, text) -} - -const srcTemplate = `// %s -// Code generated by the command above; see README.md. DO NOT EDIT. - -// +build %s - -package %s - -import ( - "syscall" - "unsafe" -) -%s -%s -%s -var ( -%s -) - -%s -` diff --git a/vendor/golang.org/x/sys/unix/mksysctl_openbsd.go b/vendor/golang.org/x/sys/unix/mksysctl_openbsd.go deleted file mode 100644 index b6b409909..000000000 --- a/vendor/golang.org/x/sys/unix/mksysctl_openbsd.go +++ /dev/null @@ -1,355 +0,0 @@ -// Copyright 2019 The Go Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -// Parse the header files for OpenBSD and generate a Go-usable sysctl MIB. -// -// Build a MIB with each entry being an array containing the level, type and -// a hash that will contain additional entries if the current entry is a node. -// We then walk this MIB and create a flattened sysctl name to OID hash. - -package main - -import ( - "bufio" - "fmt" - "os" - "path/filepath" - "regexp" - "sort" - "strings" -) - -var ( - goos, goarch string -) - -// cmdLine returns this program's commandline arguments. -func cmdLine() string { - return "go run mksysctl_openbsd.go " + strings.Join(os.Args[1:], " ") -} - -// buildTags returns build tags. -func buildTags() string { - return fmt.Sprintf("%s,%s", goarch, goos) -} - -// reMatch performs a regular expression match and stores the substring slice in the value pointed to by m. -func reMatch(re *regexp.Regexp, str string, m *[]string) bool { - *m = re.FindStringSubmatch(str) - if *m != nil { - return true - } - return false -} - -type nodeElement struct { - n int - t string - pE *map[string]nodeElement -} - -var ( - debugEnabled bool - mib map[string]nodeElement - node *map[string]nodeElement - nodeMap map[string]string - sysCtl []string -) - -var ( - ctlNames1RE = regexp.MustCompile(`^#define\s+(CTL_NAMES)\s+{`) - ctlNames2RE = regexp.MustCompile(`^#define\s+(CTL_(.*)_NAMES)\s+{`) - ctlNames3RE = regexp.MustCompile(`^#define\s+((.*)CTL_NAMES)\s+{`) - netInetRE = regexp.MustCompile(`^netinet/`) - netInet6RE = regexp.MustCompile(`^netinet6/`) - netRE = regexp.MustCompile(`^net/`) - bracesRE = regexp.MustCompile(`{.*}`) - ctlTypeRE = regexp.MustCompile(`{\s+"(\w+)",\s+(CTLTYPE_[A-Z]+)\s+}`) - fsNetKernRE = regexp.MustCompile(`^(fs|net|kern)_`) -) - -func debug(s string) { - if debugEnabled { - fmt.Fprintln(os.Stderr, s) - } -} - -// Walk the MIB and build a sysctl name to OID mapping. -func buildSysctl(pNode *map[string]nodeElement, name string, oid []int) { - lNode := pNode // local copy of pointer to node - var keys []string - for k := range *lNode { - keys = append(keys, k) - } - sort.Strings(keys) - - for _, key := range keys { - nodename := name - if name != "" { - nodename += "."
- } - nodename += key - - nodeoid := append(oid, (*pNode)[key].n) - - if (*pNode)[key].t == `CTLTYPE_NODE` { - if _, ok := nodeMap[nodename]; ok { - lNode = &mib - ctlName := nodeMap[nodename] - for _, part := range strings.Split(ctlName, ".") { - lNode = ((*lNode)[part]).pE - } - } else { - lNode = (*pNode)[key].pE - } - buildSysctl(lNode, nodename, nodeoid) - } else if (*pNode)[key].t != "" { - oidStr := []string{} - for j := range nodeoid { - oidStr = append(oidStr, fmt.Sprintf("%d", nodeoid[j])) - } - text := "\t{ \"" + nodename + "\", []_C_int{ " + strings.Join(oidStr, ", ") + " } }, \n" - sysCtl = append(sysCtl, text) - } - } -} - -func main() { - // Get the OS (using GOOS_TARGET if it exists) - goos = os.Getenv("GOOS_TARGET") - if goos == "" { - goos = os.Getenv("GOOS") - } - // Get the architecture (using GOARCH_TARGET if it exists) - goarch = os.Getenv("GOARCH_TARGET") - if goarch == "" { - goarch = os.Getenv("GOARCH") - } - // Check if GOOS and GOARCH environment variables are defined - if goarch == "" || goos == "" { - fmt.Fprintf(os.Stderr, "GOARCH or GOOS not defined in environment\n") - os.Exit(1) - } - - mib = make(map[string]nodeElement) - headers := [...]string{ - `sys/sysctl.h`, - `sys/socket.h`, - `sys/tty.h`, - `sys/malloc.h`, - `sys/mount.h`, - `sys/namei.h`, - `sys/sem.h`, - `sys/shm.h`, - `sys/vmmeter.h`, - `uvm/uvmexp.h`, - `uvm/uvm_param.h`, - `uvm/uvm_swap_encrypt.h`, - `ddb/db_var.h`, - `net/if.h`, - `net/if_pfsync.h`, - `net/pipex.h`, - `netinet/in.h`, - `netinet/icmp_var.h`, - `netinet/igmp_var.h`, - `netinet/ip_ah.h`, - `netinet/ip_carp.h`, - `netinet/ip_divert.h`, - `netinet/ip_esp.h`, - `netinet/ip_ether.h`, - `netinet/ip_gre.h`, - `netinet/ip_ipcomp.h`, - `netinet/ip_ipip.h`, - `netinet/pim_var.h`, - `netinet/tcp_var.h`, - `netinet/udp_var.h`, - `netinet6/in6.h`, - `netinet6/ip6_divert.h`, - `netinet6/pim6_var.h`, - `netinet/icmp6.h`, - `netmpls/mpls.h`, - } - - ctls := [...]string{ - `kern`, - `vm`, - `fs`, - `net`, - //debug /* Special handling required */ - `hw`, - //machdep /* Arch specific */ - `user`, - `ddb`, - //vfs /* Special handling required */ - `fs.posix`, - `kern.forkstat`, - `kern.intrcnt`, - `kern.malloc`, - `kern.nchstats`, - `kern.seminfo`, - `kern.shminfo`, - `kern.timecounter`, - `kern.tty`, - `kern.watchdog`, - `net.bpf`, - `net.ifq`, - `net.inet`, - `net.inet.ah`, - `net.inet.carp`, - `net.inet.divert`, - `net.inet.esp`, - `net.inet.etherip`, - `net.inet.gre`, - `net.inet.icmp`, - `net.inet.igmp`, - `net.inet.ip`, - `net.inet.ip.ifq`, - `net.inet.ipcomp`, - `net.inet.ipip`, - `net.inet.mobileip`, - `net.inet.pfsync`, - `net.inet.pim`, - `net.inet.tcp`, - `net.inet.udp`, - `net.inet6`, - `net.inet6.divert`, - `net.inet6.ip6`, - `net.inet6.icmp6`, - `net.inet6.pim6`, - `net.inet6.tcp6`, - `net.inet6.udp6`, - `net.mpls`, - `net.mpls.ifq`, - `net.key`, - `net.pflow`, - `net.pfsync`, - `net.pipex`, - `net.rt`, - `vm.swapencrypt`, - //vfsgenctl /* Special handling required */ - } - - // Node name "fixups" - ctlMap := map[string]string{ - "ipproto": "net.inet", - "net.inet.ipproto": "net.inet", - "net.inet6.ipv6proto": "net.inet6", - "net.inet6.ipv6": "net.inet6.ip6", - "net.inet.icmpv6": "net.inet6.icmp6", - "net.inet6.divert6": "net.inet6.divert", - "net.inet6.tcp6": "net.inet.tcp", - "net.inet6.udp6": "net.inet.udp", - "mpls": "net.mpls", - "swpenc": "vm.swapencrypt", - } - - // Node mappings - nodeMap = map[string]string{ - "net.inet.ip.ifq": "net.ifq", - "net.inet.pfsync": "net.pfsync", - "net.mpls.ifq": "net.ifq", - } - - mCtls := 
make(map[string]bool) - for _, ctl := range ctls { - mCtls[ctl] = true - } - - for _, header := range headers { - debug("Processing " + header) - file, err := os.Open(filepath.Join("/usr/include", header)) - if err != nil { - fmt.Fprintf(os.Stderr, "%v\n", err) - os.Exit(1) - } - s := bufio.NewScanner(file) - for s.Scan() { - var sub []string - if reMatch(ctlNames1RE, s.Text(), &sub) || - reMatch(ctlNames2RE, s.Text(), &sub) || - reMatch(ctlNames3RE, s.Text(), &sub) { - if sub[1] == `CTL_NAMES` { - // Top level. - node = &mib - } else { - // Node. - nodename := strings.ToLower(sub[2]) - ctlName := "" - if reMatch(netInetRE, header, &sub) { - ctlName = "net.inet." + nodename - } else if reMatch(netInet6RE, header, &sub) { - ctlName = "net.inet6." + nodename - } else if reMatch(netRE, header, &sub) { - ctlName = "net." + nodename - } else { - ctlName = nodename - ctlName = fsNetKernRE.ReplaceAllString(ctlName, `$1.`) - } - - if val, ok := ctlMap[ctlName]; ok { - ctlName = val - } - if _, ok := mCtls[ctlName]; !ok { - debug("Ignoring " + ctlName + "...") - continue - } - - // Walk down from the top of the MIB. - node = &mib - for _, part := range strings.Split(ctlName, ".") { - if _, ok := (*node)[part]; !ok { - debug("Missing node " + part) - (*node)[part] = nodeElement{n: 0, t: "", pE: &map[string]nodeElement{}} - } - node = (*node)[part].pE - } - } - - // Populate current node with entries. - i := -1 - for !strings.HasPrefix(s.Text(), "}") { - s.Scan() - if reMatch(bracesRE, s.Text(), &sub) { - i++ - } - if !reMatch(ctlTypeRE, s.Text(), &sub) { - continue - } - (*node)[sub[1]] = nodeElement{n: i, t: sub[2], pE: &map[string]nodeElement{}} - } - } - } - err = s.Err() - if err != nil { - fmt.Fprintf(os.Stderr, "%v\n", err) - os.Exit(1) - } - file.Close() - } - buildSysctl(&mib, "", []int{}) - - sort.Strings(sysCtl) - text := strings.Join(sysCtl, "") - - fmt.Printf(srcTemplate, cmdLine(), buildTags(), text) -} - -const srcTemplate = `// %s -// Code generated by the command above; DO NOT EDIT. - -// +build %s - -package unix - -type mibentry struct { - ctlname string - ctloid []_C_int -} - -var sysctlMib = []mibentry { -%s -} -` diff --git a/vendor/golang.org/x/sys/unix/mksysnum.go b/vendor/golang.org/x/sys/unix/mksysnum.go deleted file mode 100644 index baa6ecd85..000000000 --- a/vendor/golang.org/x/sys/unix/mksysnum.go +++ /dev/null @@ -1,190 +0,0 @@ -// Copyright 2018 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -// Generate system call table for DragonFly, NetBSD, -// FreeBSD, OpenBSD or Darwin from master list -// (for example, /usr/src/sys/kern/syscalls.master or -// sys/syscall.h). 
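// Illustrative sketch (hypothetical input): given a FreeBSD syscalls.master
// entry such as
//
//	5	AUE_OPEN_RWTC	STD	{ int open(char *path, int flags, int mode); }
//
// the freebsd case below would extract num=5, the prototype, and name=open,
// and format() would append roughly
//
//	SYS_OPEN = 5; // { int open(char *path, int flags, int mode); }
//
// to the generated constant block.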
-package main - -import ( - "bufio" - "fmt" - "io" - "io/ioutil" - "net/http" - "os" - "regexp" - "strings" -) - -var ( - goos, goarch string -) - -// cmdLine returns this program's commandline arguments -func cmdLine() string { - return "go run mksysnum.go " + strings.Join(os.Args[1:], " ") -} - -// buildTags returns build tags -func buildTags() string { - return fmt.Sprintf("%s,%s", goarch, goos) -} - -func checkErr(err error) { - if err != nil { - fmt.Fprintf(os.Stderr, "%v\n", err) - os.Exit(1) - } -} - -// source string and substring slice for regexp -type re struct { - str string // source string - sub []string // matched sub-string -} - -// Match performs regular expression match -func (r *re) Match(exp string) bool { - r.sub = regexp.MustCompile(exp).FindStringSubmatch(r.str) - if r.sub != nil { - return true - } - return false -} - -// fetchFile fetches a text file from URL -func fetchFile(URL string) io.Reader { - resp, err := http.Get(URL) - checkErr(err) - defer resp.Body.Close() - body, err := ioutil.ReadAll(resp.Body) - checkErr(err) - return strings.NewReader(string(body)) -} - -// readFile reads a text file from path -func readFile(path string) io.Reader { - file, err := os.Open(path) - checkErr(err) - return file -} - -func format(name, num, proto string) string { - name = strings.ToUpper(name) - // There are multiple entries for enosys and nosys, so comment them out. - nm := re{str: name} - if nm.Match(`^SYS_E?NOSYS$`) { - name = fmt.Sprintf("// %s", name) - } - if name == `SYS_SYS_EXIT` { - name = `SYS_EXIT` - } - return fmt.Sprintf(" %s = %s; // %s\n", name, num, proto) -} - -func main() { - // Get the OS (using GOOS_TARGET if it exists) - goos = os.Getenv("GOOS_TARGET") - if goos == "" { - goos = os.Getenv("GOOS") - } - // Get the architecture (using GOARCH_TARGET if it exists) - goarch = os.Getenv("GOARCH_TARGET") - if goarch == "" { - goarch = os.Getenv("GOARCH") - } - // Check if GOOS and GOARCH environment variables are defined - if goarch == "" || goos == "" { - fmt.Fprintf(os.Stderr, "GOARCH or GOOS not defined in environment\n") - os.Exit(1) - } - - file := strings.TrimSpace(os.Args[1]) - var syscalls io.Reader - if strings.HasPrefix(file, "https://") || strings.HasPrefix(file, "http://") { - // Download syscalls.master file - syscalls = fetchFile(file) - } else { - syscalls = readFile(file) - } - - var text, line string - s := bufio.NewScanner(syscalls) - for s.Scan() { - t := re{str: line} - if t.Match(`^(.*)\\$`) { - // Handle continuation - line = t.sub[1] - line += strings.TrimLeft(s.Text(), " \t") - } else { - // New line - line = s.Text() - } - t = re{str: line} - if t.Match(`\\$`) { - continue - } - t = re{str: line} - - switch goos { - case "dragonfly": - if t.Match(`^([0-9]+)\s+STD\s+({ \S+\s+(\w+).*)$`) { - num, proto := t.sub[1], t.sub[2] - name := fmt.Sprintf("SYS_%s", t.sub[3]) - text += format(name, num, proto) - } - case "freebsd": - if t.Match(`^([0-9]+)\s+\S+\s+(?:(?:NO)?STD|COMPAT10)\s+({ \S+\s+(\w+).*)$`) { - num, proto := t.sub[1], t.sub[2] - name := fmt.Sprintf("SYS_%s", t.sub[3]) - text += format(name, num, proto) - } - case "openbsd": - if t.Match(`^([0-9]+)\s+STD\s+(NOLOCK\s+)?({ \S+\s+\*?(\w+).*)$`) { - num, proto, name := t.sub[1], t.sub[3], t.sub[4] - text += format(name, num, proto) - } - case "netbsd": - if t.Match(`^([0-9]+)\s+((STD)|(NOERR))\s+(RUMP\s+)?({\s+\S+\s*\*?\s*\|(\S+)\|(\S*)\|(\w+).*\s+})(\s+(\S+))?$`) { - num, proto, compat := t.sub[1], t.sub[6], t.sub[8] - name := t.sub[7] + "_" + t.sub[9] - if t.sub[11] != "" {
- name = t.sub[7] + "_" + t.sub[11] - } - name = strings.ToUpper(name) - if compat == "" || compat == "13" || compat == "30" || compat == "50" { - text += fmt.Sprintf(" %s = %s; // %s\n", name, num, proto) - } - } - case "darwin": - if t.Match(`^#define\s+SYS_(\w+)\s+([0-9]+)`) { - name, num := t.sub[1], t.sub[2] - name = strings.ToUpper(name) - text += fmt.Sprintf(" SYS_%s = %s;\n", name, num) - } - default: - fmt.Fprintf(os.Stderr, "unrecognized GOOS=%s\n", goos) - os.Exit(1) - - } - } - err := s.Err() - checkErr(err) - - fmt.Printf(template, cmdLine(), buildTags(), text) -} - -const template = `// %s -// Code generated by the command above; see README.md. DO NOT EDIT. - -// +build %s - -package unix - -const( -%s)` diff --git a/vendor/golang.org/x/sys/unix/types_aix.go b/vendor/golang.org/x/sys/unix/types_aix.go deleted file mode 100644 index 40d2beede..000000000 --- a/vendor/golang.org/x/sys/unix/types_aix.go +++ /dev/null @@ -1,237 +0,0 @@ -// Copyright 2018 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore -// +build aix - -/* -Input to cgo -godefs. See also mkerrors.sh and mkall.sh -*/ - -// +godefs map struct_in_addr [4]byte /* in_addr */ -// +godefs map struct_in6_addr [16]byte /* in6_addr */ - -package unix - -/* -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include - -#include -#include -#include -#include - - -#include -#include - -enum { - sizeofPtr = sizeof(void*), -}; - -union sockaddr_all { - struct sockaddr s1; // this one gets used for fields - struct sockaddr_in s2; // these pad it out - struct sockaddr_in6 s3; - struct sockaddr_un s4; - struct sockaddr_dl s5; -}; - -struct sockaddr_any { - struct sockaddr addr; - char pad[sizeof(union sockaddr_all) - sizeof(struct sockaddr)]; -}; - -*/ -import "C" - -// Machine characteristics - -const ( - SizeofPtr = C.sizeofPtr - SizeofShort = C.sizeof_short - SizeofInt = C.sizeof_int - SizeofLong = C.sizeof_long - SizeofLongLong = C.sizeof_longlong - PathMax = C.PATH_MAX -) - -// Basic types - -type ( - _C_short C.short - _C_int C.int - _C_long C.long - _C_long_long C.longlong -) - -type off64 C.off64_t -type off C.off_t -type Mode_t C.mode_t - -// Time - -type Timespec C.struct_timespec - -type Timeval C.struct_timeval - -type Timeval32 C.struct_timeval32 - -type Timex C.struct_timex - -type Time_t C.time_t - -type Tms C.struct_tms - -type Utimbuf C.struct_utimbuf - -type Timezone C.struct_timezone - -// Processes - -type Rusage C.struct_rusage - -type Rlimit C.struct_rlimit64 - -type Pid_t C.pid_t - -type _Gid_t C.gid_t - -type dev_t C.dev_t - -// Files - -type Stat_t C.struct_stat - -type StatxTimestamp C.struct_statx_timestamp - -type Statx_t C.struct_statx - -type Dirent C.struct_dirent - -// Sockets - -type RawSockaddrInet4 C.struct_sockaddr_in - -type RawSockaddrInet6 C.struct_sockaddr_in6 - -type RawSockaddrUnix C.struct_sockaddr_un - -type RawSockaddrDatalink C.struct_sockaddr_dl - -type RawSockaddr C.struct_sockaddr - -type RawSockaddrAny C.struct_sockaddr_any - -type _Socklen C.socklen_t - -type Cmsghdr C.struct_cmsghdr - -type ICMPv6Filter C.struct_icmp6_filter - -type Iovec C.struct_iovec - -type IPMreq C.struct_ip_mreq - -type IPv6Mreq C.struct_ipv6_mreq - -type IPv6MTUInfo C.struct_ip6_mtuinfo - -type Linger C.struct_linger - -type Msghdr C.struct_msghdr - -const ( - SizeofSockaddrInet4 = C.sizeof_struct_sockaddr_in - 
SizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6 - SizeofSockaddrAny = C.sizeof_struct_sockaddr_any - SizeofSockaddrUnix = C.sizeof_struct_sockaddr_un - SizeofSockaddrDatalink = C.sizeof_struct_sockaddr_dl - SizeofLinger = C.sizeof_struct_linger - SizeofIPMreq = C.sizeof_struct_ip_mreq - SizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq - SizeofIPv6MTUInfo = C.sizeof_struct_ip6_mtuinfo - SizeofMsghdr = C.sizeof_struct_msghdr - SizeofCmsghdr = C.sizeof_struct_cmsghdr - SizeofICMPv6Filter = C.sizeof_struct_icmp6_filter -) - -// Routing and interface messages - -const ( - SizeofIfMsghdr = C.sizeof_struct_if_msghdr -) - -type IfMsgHdr C.struct_if_msghdr - -// Misc - -type FdSet C.fd_set - -type Utsname C.struct_utsname - -type Ustat_t C.struct_ustat - -type Sigset_t C.sigset_t - -const ( - AT_FDCWD = C.AT_FDCWD - AT_REMOVEDIR = C.AT_REMOVEDIR - AT_SYMLINK_NOFOLLOW = C.AT_SYMLINK_NOFOLLOW -) - -// Terminal handling - -type Termios C.struct_termios - -type Termio C.struct_termio - -type Winsize C.struct_winsize - -//poll - -type PollFd struct { - Fd int32 - Events uint16 - Revents uint16 -} - -const ( - POLLERR = C.POLLERR - POLLHUP = C.POLLHUP - POLLIN = C.POLLIN - POLLNVAL = C.POLLNVAL - POLLOUT = C.POLLOUT - POLLPRI = C.POLLPRI - POLLRDBAND = C.POLLRDBAND - POLLRDNORM = C.POLLRDNORM - POLLWRBAND = C.POLLWRBAND - POLLWRNORM = C.POLLWRNORM -) - -//flock_t - -type Flock_t C.struct_flock64 - -// Statfs - -type Fsid_t C.struct_fsid_t -type Fsid64_t C.struct_fsid64_t - -type Statfs_t C.struct_statfs - -const RNDGETENTCNT = 0x80045200 diff --git a/vendor/golang.org/x/sys/unix/types_darwin.go b/vendor/golang.org/x/sys/unix/types_darwin.go deleted file mode 100644 index 155c2e692..000000000 --- a/vendor/golang.org/x/sys/unix/types_darwin.go +++ /dev/null @@ -1,283 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -/* -Input to cgo -godefs. 
See README.md -*/ - -// +godefs map struct_in_addr [4]byte /* in_addr */ -// +godefs map struct_in6_addr [16]byte /* in6_addr */ - -package unix - -/* -#define __DARWIN_UNIX03 0 -#define KERNEL -#define _DARWIN_USE_64_BIT_INODE -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -enum { - sizeofPtr = sizeof(void*), -}; - -union sockaddr_all { - struct sockaddr s1; // this one gets used for fields - struct sockaddr_in s2; // these pad it out - struct sockaddr_in6 s3; - struct sockaddr_un s4; - struct sockaddr_dl s5; -}; - -struct sockaddr_any { - struct sockaddr addr; - char pad[sizeof(union sockaddr_all) - sizeof(struct sockaddr)]; -}; - -*/ -import "C" - -// Machine characteristics - -const ( - SizeofPtr = C.sizeofPtr - SizeofShort = C.sizeof_short - SizeofInt = C.sizeof_int - SizeofLong = C.sizeof_long - SizeofLongLong = C.sizeof_longlong -) - -// Basic types - -type ( - _C_short C.short - _C_int C.int - _C_long C.long - _C_long_long C.longlong -) - -// Time - -type Timespec C.struct_timespec - -type Timeval C.struct_timeval - -type Timeval32 C.struct_timeval32 - -// Processes - -type Rusage C.struct_rusage - -type Rlimit C.struct_rlimit - -type _Gid_t C.gid_t - -// Files - -type Stat_t C.struct_stat64 - -type Statfs_t C.struct_statfs64 - -type Flock_t C.struct_flock - -type Fstore_t C.struct_fstore - -type Radvisory_t C.struct_radvisory - -type Fbootstraptransfer_t C.struct_fbootstraptransfer - -type Log2phys_t C.struct_log2phys - -type Fsid C.struct_fsid - -type Dirent C.struct_dirent - -// Sockets - -type RawSockaddrInet4 C.struct_sockaddr_in - -type RawSockaddrInet6 C.struct_sockaddr_in6 - -type RawSockaddrUnix C.struct_sockaddr_un - -type RawSockaddrDatalink C.struct_sockaddr_dl - -type RawSockaddr C.struct_sockaddr - -type RawSockaddrAny C.struct_sockaddr_any - -type _Socklen C.socklen_t - -type Linger C.struct_linger - -type Iovec C.struct_iovec - -type IPMreq C.struct_ip_mreq - -type IPv6Mreq C.struct_ipv6_mreq - -type Msghdr C.struct_msghdr - -type Cmsghdr C.struct_cmsghdr - -type Inet4Pktinfo C.struct_in_pktinfo - -type Inet6Pktinfo C.struct_in6_pktinfo - -type IPv6MTUInfo C.struct_ip6_mtuinfo - -type ICMPv6Filter C.struct_icmp6_filter - -const ( - SizeofSockaddrInet4 = C.sizeof_struct_sockaddr_in - SizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6 - SizeofSockaddrAny = C.sizeof_struct_sockaddr_any - SizeofSockaddrUnix = C.sizeof_struct_sockaddr_un - SizeofSockaddrDatalink = C.sizeof_struct_sockaddr_dl - SizeofLinger = C.sizeof_struct_linger - SizeofIPMreq = C.sizeof_struct_ip_mreq - SizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq - SizeofMsghdr = C.sizeof_struct_msghdr - SizeofCmsghdr = C.sizeof_struct_cmsghdr - SizeofInet4Pktinfo = C.sizeof_struct_in_pktinfo - SizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo - SizeofIPv6MTUInfo = C.sizeof_struct_ip6_mtuinfo - SizeofICMPv6Filter = C.sizeof_struct_icmp6_filter -) - -// Ptrace requests - -const ( - PTRACE_TRACEME = C.PT_TRACE_ME - PTRACE_CONT = C.PT_CONTINUE - PTRACE_KILL = C.PT_KILL -) - -// Events (kqueue, kevent) - -type Kevent_t C.struct_kevent - -// Select - -type FdSet C.fd_set - -// Routing and interface messages - -const ( - SizeofIfMsghdr = C.sizeof_struct_if_msghdr - SizeofIfData = C.sizeof_struct_if_data - SizeofIfaMsghdr = C.sizeof_struct_ifa_msghdr - 
SizeofIfmaMsghdr = C.sizeof_struct_ifma_msghdr - SizeofIfmaMsghdr2 = C.sizeof_struct_ifma_msghdr2 - SizeofRtMsghdr = C.sizeof_struct_rt_msghdr - SizeofRtMetrics = C.sizeof_struct_rt_metrics -) - -type IfMsghdr C.struct_if_msghdr - -type IfData C.struct_if_data - -type IfaMsghdr C.struct_ifa_msghdr - -type IfmaMsghdr C.struct_ifma_msghdr - -type IfmaMsghdr2 C.struct_ifma_msghdr2 - -type RtMsghdr C.struct_rt_msghdr - -type RtMetrics C.struct_rt_metrics - -// Berkeley packet filter - -const ( - SizeofBpfVersion = C.sizeof_struct_bpf_version - SizeofBpfStat = C.sizeof_struct_bpf_stat - SizeofBpfProgram = C.sizeof_struct_bpf_program - SizeofBpfInsn = C.sizeof_struct_bpf_insn - SizeofBpfHdr = C.sizeof_struct_bpf_hdr -) - -type BpfVersion C.struct_bpf_version - -type BpfStat C.struct_bpf_stat - -type BpfProgram C.struct_bpf_program - -type BpfInsn C.struct_bpf_insn - -type BpfHdr C.struct_bpf_hdr - -// Terminal handling - -type Termios C.struct_termios - -type Winsize C.struct_winsize - -// fchmodat-like syscalls. - -const ( - AT_FDCWD = C.AT_FDCWD - AT_REMOVEDIR = C.AT_REMOVEDIR - AT_SYMLINK_FOLLOW = C.AT_SYMLINK_FOLLOW - AT_SYMLINK_NOFOLLOW = C.AT_SYMLINK_NOFOLLOW -) - -// poll - -type PollFd C.struct_pollfd - -const ( - POLLERR = C.POLLERR - POLLHUP = C.POLLHUP - POLLIN = C.POLLIN - POLLNVAL = C.POLLNVAL - POLLOUT = C.POLLOUT - POLLPRI = C.POLLPRI - POLLRDBAND = C.POLLRDBAND - POLLRDNORM = C.POLLRDNORM - POLLWRBAND = C.POLLWRBAND - POLLWRNORM = C.POLLWRNORM -) - -// uname - -type Utsname C.struct_utsname - -// Clockinfo - -const SizeofClockinfo = C.sizeof_struct_clockinfo - -type Clockinfo C.struct_clockinfo diff --git a/vendor/golang.org/x/sys/unix/types_dragonfly.go b/vendor/golang.org/x/sys/unix/types_dragonfly.go deleted file mode 100644 index 3365dd79d..000000000 --- a/vendor/golang.org/x/sys/unix/types_dragonfly.go +++ /dev/null @@ -1,263 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -/* -Input to cgo -godefs. 
See README.md -*/ - -// +godefs map struct_in_addr [4]byte /* in_addr */ -// +godefs map struct_in6_addr [16]byte /* in6_addr */ - -package unix - -/* -#define KERNEL -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -enum { - sizeofPtr = sizeof(void*), -}; - -union sockaddr_all { - struct sockaddr s1; // this one gets used for fields - struct sockaddr_in s2; // these pad it out - struct sockaddr_in6 s3; - struct sockaddr_un s4; - struct sockaddr_dl s5; -}; - -struct sockaddr_any { - struct sockaddr addr; - char pad[sizeof(union sockaddr_all) - sizeof(struct sockaddr)]; -}; - -*/ -import "C" - -// Machine characteristics - -const ( - SizeofPtr = C.sizeofPtr - SizeofShort = C.sizeof_short - SizeofInt = C.sizeof_int - SizeofLong = C.sizeof_long - SizeofLongLong = C.sizeof_longlong -) - -// Basic types - -type ( - _C_short C.short - _C_int C.int - _C_long C.long - _C_long_long C.longlong -) - -// Time - -type Timespec C.struct_timespec - -type Timeval C.struct_timeval - -// Processes - -type Rusage C.struct_rusage - -type Rlimit C.struct_rlimit - -type _Gid_t C.gid_t - -// Files - -type Stat_t C.struct_stat - -type Statfs_t C.struct_statfs - -type Flock_t C.struct_flock - -type Dirent C.struct_dirent - -type Fsid C.struct_fsid - -// File system limits - -const ( - PathMax = C.PATH_MAX -) - -// Sockets - -type RawSockaddrInet4 C.struct_sockaddr_in - -type RawSockaddrInet6 C.struct_sockaddr_in6 - -type RawSockaddrUnix C.struct_sockaddr_un - -type RawSockaddrDatalink C.struct_sockaddr_dl - -type RawSockaddr C.struct_sockaddr - -type RawSockaddrAny C.struct_sockaddr_any - -type _Socklen C.socklen_t - -type Linger C.struct_linger - -type Iovec C.struct_iovec - -type IPMreq C.struct_ip_mreq - -type IPv6Mreq C.struct_ipv6_mreq - -type Msghdr C.struct_msghdr - -type Cmsghdr C.struct_cmsghdr - -type Inet6Pktinfo C.struct_in6_pktinfo - -type IPv6MTUInfo C.struct_ip6_mtuinfo - -type ICMPv6Filter C.struct_icmp6_filter - -const ( - SizeofSockaddrInet4 = C.sizeof_struct_sockaddr_in - SizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6 - SizeofSockaddrAny = C.sizeof_struct_sockaddr_any - SizeofSockaddrUnix = C.sizeof_struct_sockaddr_un - SizeofSockaddrDatalink = C.sizeof_struct_sockaddr_dl - SizeofLinger = C.sizeof_struct_linger - SizeofIPMreq = C.sizeof_struct_ip_mreq - SizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq - SizeofMsghdr = C.sizeof_struct_msghdr - SizeofCmsghdr = C.sizeof_struct_cmsghdr - SizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo - SizeofIPv6MTUInfo = C.sizeof_struct_ip6_mtuinfo - SizeofICMPv6Filter = C.sizeof_struct_icmp6_filter -) - -// Ptrace requests - -const ( - PTRACE_TRACEME = C.PT_TRACE_ME - PTRACE_CONT = C.PT_CONTINUE - PTRACE_KILL = C.PT_KILL -) - -// Events (kqueue, kevent) - -type Kevent_t C.struct_kevent - -// Select - -type FdSet C.fd_set - -// Routing and interface messages - -const ( - SizeofIfMsghdr = C.sizeof_struct_if_msghdr - SizeofIfData = C.sizeof_struct_if_data - SizeofIfaMsghdr = C.sizeof_struct_ifa_msghdr - SizeofIfmaMsghdr = C.sizeof_struct_ifma_msghdr - SizeofIfAnnounceMsghdr = C.sizeof_struct_if_announcemsghdr - SizeofRtMsghdr = C.sizeof_struct_rt_msghdr - SizeofRtMetrics = C.sizeof_struct_rt_metrics -) - -type IfMsghdr C.struct_if_msghdr - -type IfData C.struct_if_data - -type IfaMsghdr C.struct_ifa_msghdr - -type IfmaMsghdr 
C.struct_ifma_msghdr - -type IfAnnounceMsghdr C.struct_if_announcemsghdr - -type RtMsghdr C.struct_rt_msghdr - -type RtMetrics C.struct_rt_metrics - -// Berkeley packet filter - -const ( - SizeofBpfVersion = C.sizeof_struct_bpf_version - SizeofBpfStat = C.sizeof_struct_bpf_stat - SizeofBpfProgram = C.sizeof_struct_bpf_program - SizeofBpfInsn = C.sizeof_struct_bpf_insn - SizeofBpfHdr = C.sizeof_struct_bpf_hdr -) - -type BpfVersion C.struct_bpf_version - -type BpfStat C.struct_bpf_stat - -type BpfProgram C.struct_bpf_program - -type BpfInsn C.struct_bpf_insn - -type BpfHdr C.struct_bpf_hdr - -// Terminal handling - -type Termios C.struct_termios - -type Winsize C.struct_winsize - -// fchmodat-like syscalls. - -const ( - AT_FDCWD = C.AT_FDCWD - AT_SYMLINK_NOFOLLOW = C.AT_SYMLINK_NOFOLLOW -) - -// poll - -type PollFd C.struct_pollfd - -const ( - POLLERR = C.POLLERR - POLLHUP = C.POLLHUP - POLLIN = C.POLLIN - POLLNVAL = C.POLLNVAL - POLLOUT = C.POLLOUT - POLLPRI = C.POLLPRI - POLLRDBAND = C.POLLRDBAND - POLLRDNORM = C.POLLRDNORM - POLLWRBAND = C.POLLWRBAND - POLLWRNORM = C.POLLWRNORM -) - -// Uname - -type Utsname C.struct_utsname diff --git a/vendor/golang.org/x/sys/unix/types_freebsd.go b/vendor/golang.org/x/sys/unix/types_freebsd.go deleted file mode 100644 index a121dc336..000000000 --- a/vendor/golang.org/x/sys/unix/types_freebsd.go +++ /dev/null @@ -1,400 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -/* -Input to cgo -godefs. See README.md -*/ - -// +godefs map struct_in_addr [4]byte /* in_addr */ -// +godefs map struct_in6_addr [16]byte /* in6_addr */ - -package unix - -/* -#define _WANT_FREEBSD11_STAT 1 -#define _WANT_FREEBSD11_STATFS 1 -#define _WANT_FREEBSD11_DIRENT 1 -#define _WANT_FREEBSD11_KEVENT 1 - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -enum { - sizeofPtr = sizeof(void*), -}; - -union sockaddr_all { - struct sockaddr s1; // this one gets used for fields - struct sockaddr_in s2; // these pad it out - struct sockaddr_in6 s3; - struct sockaddr_un s4; - struct sockaddr_dl s5; -}; - -struct sockaddr_any { - struct sockaddr addr; - char pad[sizeof(union sockaddr_all) - sizeof(struct sockaddr)]; -}; - -// This structure is a duplicate of if_data on FreeBSD 8-STABLE. -// See /usr/include/net/if.h. -struct if_data8 { - u_char ifi_type; - u_char ifi_physical; - u_char ifi_addrlen; - u_char ifi_hdrlen; - u_char ifi_link_state; - u_char ifi_spare_char1; - u_char ifi_spare_char2; - u_char ifi_datalen; - u_long ifi_mtu; - u_long ifi_metric; - u_long ifi_baudrate; - u_long ifi_ipackets; - u_long ifi_ierrors; - u_long ifi_opackets; - u_long ifi_oerrors; - u_long ifi_collisions; - u_long ifi_ibytes; - u_long ifi_obytes; - u_long ifi_imcasts; - u_long ifi_omcasts; - u_long ifi_iqdrops; - u_long ifi_noproto; - u_long ifi_hwassist; -// FIXME: these are now unions, so maybe need to change definitions? -#undef ifi_epoch - time_t ifi_epoch; -#undef ifi_lastchange - struct timeval ifi_lastchange; -}; - -// This structure is a duplicate of if_msghdr on FreeBSD 8-STABLE. -// See /usr/include/net/if.h. 
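/*
Illustrative sketch (assumed output shape): cgo -godefs later turns Go
declarations such as the one below,

	type IfMsghdr C.struct_if_msghdr8

into concrete structs with fixed-size fields in the generated
ztypes_freebsd_*.go files, along the lines of:

	type IfMsghdr struct {
		Msglen  uint16
		Version uint8
		Type    uint8
		Addrs   int32
		Flags   int32
		Index   uint16
		Data    IfData
	}
*/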
-struct if_msghdr8 { - u_short ifm_msglen; - u_char ifm_version; - u_char ifm_type; - int ifm_addrs; - int ifm_flags; - u_short ifm_index; - struct if_data8 ifm_data; -}; -*/ -import "C" - -// Machine characteristics - -const ( - SizeofPtr = C.sizeofPtr - SizeofShort = C.sizeof_short - SizeofInt = C.sizeof_int - SizeofLong = C.sizeof_long - SizeofLongLong = C.sizeof_longlong -) - -// Basic types - -type ( - _C_short C.short - _C_int C.int - _C_long C.long - _C_long_long C.longlong -) - -// Time - -type Timespec C.struct_timespec - -type Timeval C.struct_timeval - -// Processes - -type Rusage C.struct_rusage - -type Rlimit C.struct_rlimit - -type _Gid_t C.gid_t - -// Files - -const ( - _statfsVersion = C.STATFS_VERSION - _dirblksiz = C.DIRBLKSIZ -) - -type Stat_t C.struct_stat - -type stat_freebsd11_t C.struct_freebsd11_stat - -type Statfs_t C.struct_statfs - -type statfs_freebsd11_t C.struct_freebsd11_statfs - -type Flock_t C.struct_flock - -type Dirent C.struct_dirent - -type dirent_freebsd11 C.struct_freebsd11_dirent - -type Fsid C.struct_fsid - -// File system limits - -const ( - PathMax = C.PATH_MAX -) - -// Advice to Fadvise - -const ( - FADV_NORMAL = C.POSIX_FADV_NORMAL - FADV_RANDOM = C.POSIX_FADV_RANDOM - FADV_SEQUENTIAL = C.POSIX_FADV_SEQUENTIAL - FADV_WILLNEED = C.POSIX_FADV_WILLNEED - FADV_DONTNEED = C.POSIX_FADV_DONTNEED - FADV_NOREUSE = C.POSIX_FADV_NOREUSE -) - -// Sockets - -type RawSockaddrInet4 C.struct_sockaddr_in - -type RawSockaddrInet6 C.struct_sockaddr_in6 - -type RawSockaddrUnix C.struct_sockaddr_un - -type RawSockaddrDatalink C.struct_sockaddr_dl - -type RawSockaddr C.struct_sockaddr - -type RawSockaddrAny C.struct_sockaddr_any - -type _Socklen C.socklen_t - -type Linger C.struct_linger - -type Iovec C.struct_iovec - -type IPMreq C.struct_ip_mreq - -type IPMreqn C.struct_ip_mreqn - -type IPv6Mreq C.struct_ipv6_mreq - -type Msghdr C.struct_msghdr - -type Cmsghdr C.struct_cmsghdr - -type Inet6Pktinfo C.struct_in6_pktinfo - -type IPv6MTUInfo C.struct_ip6_mtuinfo - -type ICMPv6Filter C.struct_icmp6_filter - -const ( - SizeofSockaddrInet4 = C.sizeof_struct_sockaddr_in - SizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6 - SizeofSockaddrAny = C.sizeof_struct_sockaddr_any - SizeofSockaddrUnix = C.sizeof_struct_sockaddr_un - SizeofSockaddrDatalink = C.sizeof_struct_sockaddr_dl - SizeofLinger = C.sizeof_struct_linger - SizeofIPMreq = C.sizeof_struct_ip_mreq - SizeofIPMreqn = C.sizeof_struct_ip_mreqn - SizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq - SizeofMsghdr = C.sizeof_struct_msghdr - SizeofCmsghdr = C.sizeof_struct_cmsghdr - SizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo - SizeofIPv6MTUInfo = C.sizeof_struct_ip6_mtuinfo - SizeofICMPv6Filter = C.sizeof_struct_icmp6_filter -) - -// Ptrace requests - -const ( - PTRACE_ATTACH = C.PT_ATTACH - PTRACE_CONT = C.PT_CONTINUE - PTRACE_DETACH = C.PT_DETACH - PTRACE_GETFPREGS = C.PT_GETFPREGS - PTRACE_GETFSBASE = C.PT_GETFSBASE - PTRACE_GETLWPLIST = C.PT_GETLWPLIST - PTRACE_GETNUMLWPS = C.PT_GETNUMLWPS - PTRACE_GETREGS = C.PT_GETREGS - PTRACE_GETXSTATE = C.PT_GETXSTATE - PTRACE_IO = C.PT_IO - PTRACE_KILL = C.PT_KILL - PTRACE_LWPEVENTS = C.PT_LWP_EVENTS - PTRACE_LWPINFO = C.PT_LWPINFO - PTRACE_SETFPREGS = C.PT_SETFPREGS - PTRACE_SETREGS = C.PT_SETREGS - PTRACE_SINGLESTEP = C.PT_STEP - PTRACE_TRACEME = C.PT_TRACE_ME -) - -const ( - PIOD_READ_D = C.PIOD_READ_D - PIOD_WRITE_D = C.PIOD_WRITE_D - PIOD_READ_I = C.PIOD_READ_I - PIOD_WRITE_I = C.PIOD_WRITE_I -) - -const ( - PL_FLAG_BORN = C.PL_FLAG_BORN - PL_FLAG_EXITED = C.PL_FLAG_EXITED - 
PL_FLAG_SI = C.PL_FLAG_SI -) - -const ( - TRAP_BRKPT = C.TRAP_BRKPT - TRAP_TRACE = C.TRAP_TRACE -) - -type PtraceLwpInfoStruct C.struct_ptrace_lwpinfo - -type __Siginfo C.struct___siginfo - -type Sigset_t C.sigset_t - -type Reg C.struct_reg - -type FpReg C.struct_fpreg - -type PtraceIoDesc C.struct_ptrace_io_desc - -// Events (kqueue, kevent) - -type Kevent_t C.struct_kevent_freebsd11 - -// Select - -type FdSet C.fd_set - -// Routing and interface messages - -const ( - sizeofIfMsghdr = C.sizeof_struct_if_msghdr - SizeofIfMsghdr = C.sizeof_struct_if_msghdr8 - sizeofIfData = C.sizeof_struct_if_data - SizeofIfData = C.sizeof_struct_if_data8 - SizeofIfaMsghdr = C.sizeof_struct_ifa_msghdr - SizeofIfmaMsghdr = C.sizeof_struct_ifma_msghdr - SizeofIfAnnounceMsghdr = C.sizeof_struct_if_announcemsghdr - SizeofRtMsghdr = C.sizeof_struct_rt_msghdr - SizeofRtMetrics = C.sizeof_struct_rt_metrics -) - -type ifMsghdr C.struct_if_msghdr - -type IfMsghdr C.struct_if_msghdr8 - -type ifData C.struct_if_data - -type IfData C.struct_if_data8 - -type IfaMsghdr C.struct_ifa_msghdr - -type IfmaMsghdr C.struct_ifma_msghdr - -type IfAnnounceMsghdr C.struct_if_announcemsghdr - -type RtMsghdr C.struct_rt_msghdr - -type RtMetrics C.struct_rt_metrics - -// Berkeley packet filter - -const ( - SizeofBpfVersion = C.sizeof_struct_bpf_version - SizeofBpfStat = C.sizeof_struct_bpf_stat - SizeofBpfZbuf = C.sizeof_struct_bpf_zbuf - SizeofBpfProgram = C.sizeof_struct_bpf_program - SizeofBpfInsn = C.sizeof_struct_bpf_insn - SizeofBpfHdr = C.sizeof_struct_bpf_hdr - SizeofBpfZbufHeader = C.sizeof_struct_bpf_zbuf_header -) - -type BpfVersion C.struct_bpf_version - -type BpfStat C.struct_bpf_stat - -type BpfZbuf C.struct_bpf_zbuf - -type BpfProgram C.struct_bpf_program - -type BpfInsn C.struct_bpf_insn - -type BpfHdr C.struct_bpf_hdr - -type BpfZbufHeader C.struct_bpf_zbuf_header - -// Terminal handling - -type Termios C.struct_termios - -type Winsize C.struct_winsize - -// fchmodat-like syscalls. - -const ( - AT_FDCWD = C.AT_FDCWD - AT_REMOVEDIR = C.AT_REMOVEDIR - AT_SYMLINK_FOLLOW = C.AT_SYMLINK_FOLLOW - AT_SYMLINK_NOFOLLOW = C.AT_SYMLINK_NOFOLLOW -) - -// poll - -type PollFd C.struct_pollfd - -const ( - POLLERR = C.POLLERR - POLLHUP = C.POLLHUP - POLLIN = C.POLLIN - POLLINIGNEOF = C.POLLINIGNEOF - POLLNVAL = C.POLLNVAL - POLLOUT = C.POLLOUT - POLLPRI = C.POLLPRI - POLLRDBAND = C.POLLRDBAND - POLLRDNORM = C.POLLRDNORM - POLLWRBAND = C.POLLWRBAND - POLLWRNORM = C.POLLWRNORM -) - -// Capabilities - -type CapRights C.struct_cap_rights - -// Uname - -type Utsname C.struct_utsname diff --git a/vendor/golang.org/x/sys/unix/types_netbsd.go b/vendor/golang.org/x/sys/unix/types_netbsd.go deleted file mode 100644 index 4a96d72c3..000000000 --- a/vendor/golang.org/x/sys/unix/types_netbsd.go +++ /dev/null @@ -1,290 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -/* -Input to cgo -godefs. 
See README.md -*/ - -// +godefs map struct_in_addr [4]byte /* in_addr */ -// +godefs map struct_in6_addr [16]byte /* in6_addr */ - -package unix - -/* -#define KERNEL -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -enum { - sizeofPtr = sizeof(void*), -}; - -union sockaddr_all { - struct sockaddr s1; // this one gets used for fields - struct sockaddr_in s2; // these pad it out - struct sockaddr_in6 s3; - struct sockaddr_un s4; - struct sockaddr_dl s5; -}; - -struct sockaddr_any { - struct sockaddr addr; - char pad[sizeof(union sockaddr_all) - sizeof(struct sockaddr)]; -}; - -*/ -import "C" - -// Machine characteristics - -const ( - SizeofPtr = C.sizeofPtr - SizeofShort = C.sizeof_short - SizeofInt = C.sizeof_int - SizeofLong = C.sizeof_long - SizeofLongLong = C.sizeof_longlong -) - -// Basic types - -type ( - _C_short C.short - _C_int C.int - _C_long C.long - _C_long_long C.longlong -) - -// Time - -type Timespec C.struct_timespec - -type Timeval C.struct_timeval - -// Processes - -type Rusage C.struct_rusage - -type Rlimit C.struct_rlimit - -type _Gid_t C.gid_t - -// Files - -type Stat_t C.struct_stat - -type Statfs_t C.struct_statfs - -type Flock_t C.struct_flock - -type Dirent C.struct_dirent - -type Fsid C.fsid_t - -// File system limits - -const ( - PathMax = C.PATH_MAX -) - -// Advice to Fadvise - -const ( - FADV_NORMAL = C.POSIX_FADV_NORMAL - FADV_RANDOM = C.POSIX_FADV_RANDOM - FADV_SEQUENTIAL = C.POSIX_FADV_SEQUENTIAL - FADV_WILLNEED = C.POSIX_FADV_WILLNEED - FADV_DONTNEED = C.POSIX_FADV_DONTNEED - FADV_NOREUSE = C.POSIX_FADV_NOREUSE -) - -// Sockets - -type RawSockaddrInet4 C.struct_sockaddr_in - -type RawSockaddrInet6 C.struct_sockaddr_in6 - -type RawSockaddrUnix C.struct_sockaddr_un - -type RawSockaddrDatalink C.struct_sockaddr_dl - -type RawSockaddr C.struct_sockaddr - -type RawSockaddrAny C.struct_sockaddr_any - -type _Socklen C.socklen_t - -type Linger C.struct_linger - -type Iovec C.struct_iovec - -type IPMreq C.struct_ip_mreq - -type IPv6Mreq C.struct_ipv6_mreq - -type Msghdr C.struct_msghdr - -type Cmsghdr C.struct_cmsghdr - -type Inet6Pktinfo C.struct_in6_pktinfo - -type IPv6MTUInfo C.struct_ip6_mtuinfo - -type ICMPv6Filter C.struct_icmp6_filter - -const ( - SizeofSockaddrInet4 = C.sizeof_struct_sockaddr_in - SizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6 - SizeofSockaddrAny = C.sizeof_struct_sockaddr_any - SizeofSockaddrUnix = C.sizeof_struct_sockaddr_un - SizeofSockaddrDatalink = C.sizeof_struct_sockaddr_dl - SizeofLinger = C.sizeof_struct_linger - SizeofIPMreq = C.sizeof_struct_ip_mreq - SizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq - SizeofMsghdr = C.sizeof_struct_msghdr - SizeofCmsghdr = C.sizeof_struct_cmsghdr - SizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo - SizeofIPv6MTUInfo = C.sizeof_struct_ip6_mtuinfo - SizeofICMPv6Filter = C.sizeof_struct_icmp6_filter -) - -// Ptrace requests - -const ( - PTRACE_TRACEME = C.PT_TRACE_ME - PTRACE_CONT = C.PT_CONTINUE - PTRACE_KILL = C.PT_KILL -) - -// Events (kqueue, kevent) - -type Kevent_t C.struct_kevent - -// Select - -type FdSet C.fd_set - -// Routing and interface messages - -const ( - SizeofIfMsghdr = C.sizeof_struct_if_msghdr - SizeofIfData = C.sizeof_struct_if_data - SizeofIfaMsghdr = C.sizeof_struct_ifa_msghdr - SizeofIfAnnounceMsghdr = 
C.sizeof_struct_if_announcemsghdr - SizeofRtMsghdr = C.sizeof_struct_rt_msghdr - SizeofRtMetrics = C.sizeof_struct_rt_metrics -) - -type IfMsghdr C.struct_if_msghdr - -type IfData C.struct_if_data - -type IfaMsghdr C.struct_ifa_msghdr - -type IfAnnounceMsghdr C.struct_if_announcemsghdr - -type RtMsghdr C.struct_rt_msghdr - -type RtMetrics C.struct_rt_metrics - -type Mclpool C.struct_mclpool - -// Berkeley packet filter - -const ( - SizeofBpfVersion = C.sizeof_struct_bpf_version - SizeofBpfStat = C.sizeof_struct_bpf_stat - SizeofBpfProgram = C.sizeof_struct_bpf_program - SizeofBpfInsn = C.sizeof_struct_bpf_insn - SizeofBpfHdr = C.sizeof_struct_bpf_hdr -) - -type BpfVersion C.struct_bpf_version - -type BpfStat C.struct_bpf_stat - -type BpfProgram C.struct_bpf_program - -type BpfInsn C.struct_bpf_insn - -type BpfHdr C.struct_bpf_hdr - -type BpfTimeval C.struct_bpf_timeval - -// Terminal handling - -type Termios C.struct_termios - -type Winsize C.struct_winsize - -type Ptmget C.struct_ptmget - -// fchmodat-like syscalls. - -const ( - AT_FDCWD = C.AT_FDCWD - AT_SYMLINK_FOLLOW = C.AT_SYMLINK_FOLLOW - AT_SYMLINK_NOFOLLOW = C.AT_SYMLINK_NOFOLLOW -) - -// poll - -type PollFd C.struct_pollfd - -const ( - POLLERR = C.POLLERR - POLLHUP = C.POLLHUP - POLLIN = C.POLLIN - POLLNVAL = C.POLLNVAL - POLLOUT = C.POLLOUT - POLLPRI = C.POLLPRI - POLLRDBAND = C.POLLRDBAND - POLLRDNORM = C.POLLRDNORM - POLLWRBAND = C.POLLWRBAND - POLLWRNORM = C.POLLWRNORM -) - -// Sysctl - -type Sysctlnode C.struct_sysctlnode - -// Uname - -type Utsname C.struct_utsname - -// Clockinfo - -const SizeofClockinfo = C.sizeof_struct_clockinfo - -type Clockinfo C.struct_clockinfo diff --git a/vendor/golang.org/x/sys/unix/types_openbsd.go b/vendor/golang.org/x/sys/unix/types_openbsd.go deleted file mode 100644 index 775cb57dc..000000000 --- a/vendor/golang.org/x/sys/unix/types_openbsd.go +++ /dev/null @@ -1,283 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -/* -Input to cgo -godefs. 
See README.md -*/ - -// +godefs map struct_in_addr [4]byte /* in_addr */ -// +godefs map struct_in6_addr [16]byte /* in6_addr */ - -package unix - -/* -#define KERNEL -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -enum { - sizeofPtr = sizeof(void*), -}; - -union sockaddr_all { - struct sockaddr s1; // this one gets used for fields - struct sockaddr_in s2; // these pad it out - struct sockaddr_in6 s3; - struct sockaddr_un s4; - struct sockaddr_dl s5; -}; - -struct sockaddr_any { - struct sockaddr addr; - char pad[sizeof(union sockaddr_all) - sizeof(struct sockaddr)]; -}; - -*/ -import "C" - -// Machine characteristics - -const ( - SizeofPtr = C.sizeofPtr - SizeofShort = C.sizeof_short - SizeofInt = C.sizeof_int - SizeofLong = C.sizeof_long - SizeofLongLong = C.sizeof_longlong -) - -// Basic types - -type ( - _C_short C.short - _C_int C.int - _C_long C.long - _C_long_long C.longlong -) - -// Time - -type Timespec C.struct_timespec - -type Timeval C.struct_timeval - -// Processes - -type Rusage C.struct_rusage - -type Rlimit C.struct_rlimit - -type _Gid_t C.gid_t - -// Files - -type Stat_t C.struct_stat - -type Statfs_t C.struct_statfs - -type Flock_t C.struct_flock - -type Dirent C.struct_dirent - -type Fsid C.fsid_t - -// File system limits - -const ( - PathMax = C.PATH_MAX -) - -// Sockets - -type RawSockaddrInet4 C.struct_sockaddr_in - -type RawSockaddrInet6 C.struct_sockaddr_in6 - -type RawSockaddrUnix C.struct_sockaddr_un - -type RawSockaddrDatalink C.struct_sockaddr_dl - -type RawSockaddr C.struct_sockaddr - -type RawSockaddrAny C.struct_sockaddr_any - -type _Socklen C.socklen_t - -type Linger C.struct_linger - -type Iovec C.struct_iovec - -type IPMreq C.struct_ip_mreq - -type IPv6Mreq C.struct_ipv6_mreq - -type Msghdr C.struct_msghdr - -type Cmsghdr C.struct_cmsghdr - -type Inet6Pktinfo C.struct_in6_pktinfo - -type IPv6MTUInfo C.struct_ip6_mtuinfo - -type ICMPv6Filter C.struct_icmp6_filter - -const ( - SizeofSockaddrInet4 = C.sizeof_struct_sockaddr_in - SizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6 - SizeofSockaddrAny = C.sizeof_struct_sockaddr_any - SizeofSockaddrUnix = C.sizeof_struct_sockaddr_un - SizeofSockaddrDatalink = C.sizeof_struct_sockaddr_dl - SizeofLinger = C.sizeof_struct_linger - SizeofIPMreq = C.sizeof_struct_ip_mreq - SizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq - SizeofMsghdr = C.sizeof_struct_msghdr - SizeofCmsghdr = C.sizeof_struct_cmsghdr - SizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo - SizeofIPv6MTUInfo = C.sizeof_struct_ip6_mtuinfo - SizeofICMPv6Filter = C.sizeof_struct_icmp6_filter -) - -// Ptrace requests - -const ( - PTRACE_TRACEME = C.PT_TRACE_ME - PTRACE_CONT = C.PT_CONTINUE - PTRACE_KILL = C.PT_KILL -) - -// Events (kqueue, kevent) - -type Kevent_t C.struct_kevent - -// Select - -type FdSet C.fd_set - -// Routing and interface messages - -const ( - SizeofIfMsghdr = C.sizeof_struct_if_msghdr - SizeofIfData = C.sizeof_struct_if_data - SizeofIfaMsghdr = C.sizeof_struct_ifa_msghdr - SizeofIfAnnounceMsghdr = C.sizeof_struct_if_announcemsghdr - SizeofRtMsghdr = C.sizeof_struct_rt_msghdr - SizeofRtMetrics = C.sizeof_struct_rt_metrics -) - -type IfMsghdr C.struct_if_msghdr - -type IfData C.struct_if_data - -type IfaMsghdr C.struct_ifa_msghdr - -type IfAnnounceMsghdr C.struct_if_announcemsghdr - 
-type RtMsghdr C.struct_rt_msghdr - -type RtMetrics C.struct_rt_metrics - -type Mclpool C.struct_mclpool - -// Berkeley packet filter - -const ( - SizeofBpfVersion = C.sizeof_struct_bpf_version - SizeofBpfStat = C.sizeof_struct_bpf_stat - SizeofBpfProgram = C.sizeof_struct_bpf_program - SizeofBpfInsn = C.sizeof_struct_bpf_insn - SizeofBpfHdr = C.sizeof_struct_bpf_hdr -) - -type BpfVersion C.struct_bpf_version - -type BpfStat C.struct_bpf_stat - -type BpfProgram C.struct_bpf_program - -type BpfInsn C.struct_bpf_insn - -type BpfHdr C.struct_bpf_hdr - -type BpfTimeval C.struct_bpf_timeval - -// Terminal handling - -type Termios C.struct_termios - -type Winsize C.struct_winsize - -// fchmodat-like syscalls. - -const ( - AT_FDCWD = C.AT_FDCWD - AT_SYMLINK_FOLLOW = C.AT_SYMLINK_FOLLOW - AT_SYMLINK_NOFOLLOW = C.AT_SYMLINK_NOFOLLOW -) - -// poll - -type PollFd C.struct_pollfd - -const ( - POLLERR = C.POLLERR - POLLHUP = C.POLLHUP - POLLIN = C.POLLIN - POLLNVAL = C.POLLNVAL - POLLOUT = C.POLLOUT - POLLPRI = C.POLLPRI - POLLRDBAND = C.POLLRDBAND - POLLRDNORM = C.POLLRDNORM - POLLWRBAND = C.POLLWRBAND - POLLWRNORM = C.POLLWRNORM -) - -// Signal Sets - -type Sigset_t C.sigset_t - -// Uname - -type Utsname C.struct_utsname - -// Uvmexp - -const SizeofUvmexp = C.sizeof_struct_uvmexp - -type Uvmexp C.struct_uvmexp - -// Clockinfo - -const SizeofClockinfo = C.sizeof_struct_clockinfo - -type Clockinfo C.struct_clockinfo diff --git a/vendor/golang.org/x/sys/unix/types_solaris.go b/vendor/golang.org/x/sys/unix/types_solaris.go deleted file mode 100644 index 2b716f934..000000000 --- a/vendor/golang.org/x/sys/unix/types_solaris.go +++ /dev/null @@ -1,266 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -/* -Input to cgo -godefs. See README.md -*/ - -// +godefs map struct_in_addr [4]byte /* in_addr */ -// +godefs map struct_in6_addr [16]byte /* in6_addr */ - -package unix - -/* -#define KERNEL -// These defines ensure that builds done on newer versions of Solaris are -// backwards-compatible with older versions of Solaris and -// OpenSolaris-based derivatives. 
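All of these deleted `types_*.go` files share one pattern: they are inputs to `cgo -godefs`, pairing a C preamble (the `#define`s that follow pin older Solaris struct layouts) with Go aliases for C structs, which the tool rewrites into the pure-Go declarations that actually ship. A minimal sketch of such an input file, using an illustrative header and struct rather than the exact set these files used:

```go
// +build ignore

package unix

/*
#include <sys/time.h>
*/
import "C"

// Running `go tool cgo -godefs thisfile.go` replaces each C reference
// below with an equivalent pure-Go declaration, e.g. on a 64-bit
// platform: type Timeval struct { Sec int64; Usec int64 }
type Timeval C.struct_timeval

const SizeofTimeval = C.sizeof_struct_timeval
```

Because the generated output is checked in separately (the `ztypes_*.go` files), these `// +build ignore` inputs can be dropped from `vendor/` without affecting builds.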
-#define __USE_SUNOS_SOCKETS__ // msghdr -#define __USE_LEGACY_PROTOTYPES__ // iovec -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -enum { - sizeofPtr = sizeof(void*), -}; - -union sockaddr_all { - struct sockaddr s1; // this one gets used for fields - struct sockaddr_in s2; // these pad it out - struct sockaddr_in6 s3; - struct sockaddr_un s4; - struct sockaddr_dl s5; -}; - -struct sockaddr_any { - struct sockaddr addr; - char pad[sizeof(union sockaddr_all) - sizeof(struct sockaddr)]; -}; - -*/ -import "C" - -// Machine characteristics - -const ( - SizeofPtr = C.sizeofPtr - SizeofShort = C.sizeof_short - SizeofInt = C.sizeof_int - SizeofLong = C.sizeof_long - SizeofLongLong = C.sizeof_longlong - PathMax = C.PATH_MAX - MaxHostNameLen = C.MAXHOSTNAMELEN -) - -// Basic types - -type ( - _C_short C.short - _C_int C.int - _C_long C.long - _C_long_long C.longlong -) - -// Time - -type Timespec C.struct_timespec - -type Timeval C.struct_timeval - -type Timeval32 C.struct_timeval32 - -type Tms C.struct_tms - -type Utimbuf C.struct_utimbuf - -// Processes - -type Rusage C.struct_rusage - -type Rlimit C.struct_rlimit - -type _Gid_t C.gid_t - -// Files - -type Stat_t C.struct_stat - -type Flock_t C.struct_flock - -type Dirent C.struct_dirent - -// Filesystems - -type _Fsblkcnt_t C.fsblkcnt_t - -type Statvfs_t C.struct_statvfs - -// Sockets - -type RawSockaddrInet4 C.struct_sockaddr_in - -type RawSockaddrInet6 C.struct_sockaddr_in6 - -type RawSockaddrUnix C.struct_sockaddr_un - -type RawSockaddrDatalink C.struct_sockaddr_dl - -type RawSockaddr C.struct_sockaddr - -type RawSockaddrAny C.struct_sockaddr_any - -type _Socklen C.socklen_t - -type Linger C.struct_linger - -type Iovec C.struct_iovec - -type IPMreq C.struct_ip_mreq - -type IPv6Mreq C.struct_ipv6_mreq - -type Msghdr C.struct_msghdr - -type Cmsghdr C.struct_cmsghdr - -type Inet6Pktinfo C.struct_in6_pktinfo - -type IPv6MTUInfo C.struct_ip6_mtuinfo - -type ICMPv6Filter C.struct_icmp6_filter - -const ( - SizeofSockaddrInet4 = C.sizeof_struct_sockaddr_in - SizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6 - SizeofSockaddrAny = C.sizeof_struct_sockaddr_any - SizeofSockaddrUnix = C.sizeof_struct_sockaddr_un - SizeofSockaddrDatalink = C.sizeof_struct_sockaddr_dl - SizeofLinger = C.sizeof_struct_linger - SizeofIPMreq = C.sizeof_struct_ip_mreq - SizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq - SizeofMsghdr = C.sizeof_struct_msghdr - SizeofCmsghdr = C.sizeof_struct_cmsghdr - SizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo - SizeofIPv6MTUInfo = C.sizeof_struct_ip6_mtuinfo - SizeofICMPv6Filter = C.sizeof_struct_icmp6_filter -) - -// Select - -type FdSet C.fd_set - -// Misc - -type Utsname C.struct_utsname - -type Ustat_t C.struct_ustat - -const ( - AT_FDCWD = C.AT_FDCWD - AT_SYMLINK_NOFOLLOW = C.AT_SYMLINK_NOFOLLOW - AT_SYMLINK_FOLLOW = C.AT_SYMLINK_FOLLOW - AT_REMOVEDIR = C.AT_REMOVEDIR - AT_EACCESS = C.AT_EACCESS -) - -// Routing and interface messages - -const ( - SizeofIfMsghdr = C.sizeof_struct_if_msghdr - SizeofIfData = C.sizeof_struct_if_data - SizeofIfaMsghdr = C.sizeof_struct_ifa_msghdr - SizeofRtMsghdr = C.sizeof_struct_rt_msghdr - SizeofRtMetrics = C.sizeof_struct_rt_metrics -) - -type IfMsghdr C.struct_if_msghdr - -type IfData C.struct_if_data - -type 
IfaMsghdr C.struct_ifa_msghdr - -type RtMsghdr C.struct_rt_msghdr - -type RtMetrics C.struct_rt_metrics - -// Berkeley packet filter - -const ( - SizeofBpfVersion = C.sizeof_struct_bpf_version - SizeofBpfStat = C.sizeof_struct_bpf_stat - SizeofBpfProgram = C.sizeof_struct_bpf_program - SizeofBpfInsn = C.sizeof_struct_bpf_insn - SizeofBpfHdr = C.sizeof_struct_bpf_hdr -) - -type BpfVersion C.struct_bpf_version - -type BpfStat C.struct_bpf_stat - -type BpfProgram C.struct_bpf_program - -type BpfInsn C.struct_bpf_insn - -type BpfTimeval C.struct_bpf_timeval - -type BpfHdr C.struct_bpf_hdr - -// Terminal handling - -type Termios C.struct_termios - -type Termio C.struct_termio - -type Winsize C.struct_winsize - -// poll - -type PollFd C.struct_pollfd - -const ( - POLLERR = C.POLLERR - POLLHUP = C.POLLHUP - POLLIN = C.POLLIN - POLLNVAL = C.POLLNVAL - POLLOUT = C.POLLOUT - POLLPRI = C.POLLPRI - POLLRDBAND = C.POLLRDBAND - POLLRDNORM = C.POLLRDNORM - POLLWRBAND = C.POLLWRBAND - POLLWRNORM = C.POLLWRNORM -) diff --git a/vendor/golang.org/x/text/encoding/charmap/maketables.go b/vendor/golang.org/x/text/encoding/charmap/maketables.go deleted file mode 100644 index f7941701e..000000000 --- a/vendor/golang.org/x/text/encoding/charmap/maketables.go +++ /dev/null @@ -1,556 +0,0 @@ -// Copyright 2013 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -package main - -import ( - "bufio" - "fmt" - "log" - "net/http" - "sort" - "strings" - "unicode/utf8" - - "golang.org/x/text/encoding" - "golang.org/x/text/internal/gen" -) - -const ascii = "\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f" + - "\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f" + - ` !"#$%&'()*+,-./0123456789:;<=>?` + - `@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_` + - "`abcdefghijklmnopqrstuvwxyz{|}~\u007f" - -var encodings = []struct { - name string - mib string - comment string - varName string - replacement byte - mapping string -}{ - { - "IBM Code Page 037", - "IBM037", - "", - "CodePage037", - 0x3f, - "http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM037-2.1.2.ucm", - }, - { - "IBM Code Page 437", - "PC8CodePage437", - "", - "CodePage437", - encoding.ASCIISub, - "http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM437-2.1.2.ucm", - }, - { - "IBM Code Page 850", - "PC850Multilingual", - "", - "CodePage850", - encoding.ASCIISub, - "http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM850-2.1.2.ucm", - }, - { - "IBM Code Page 852", - "PCp852", - "", - "CodePage852", - encoding.ASCIISub, - "http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM852-2.1.2.ucm", - }, - { - "IBM Code Page 855", - "IBM855", - "", - "CodePage855", - encoding.ASCIISub, - "http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM855-2.1.2.ucm", - }, - { - "Windows Code Page 858", // PC latin1 with Euro - "IBM00858", - "", - "CodePage858", - encoding.ASCIISub, - "http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/windows-858-2000.ucm", - }, - { - "IBM Code Page 860", - "IBM860", - "", - "CodePage860", - encoding.ASCIISub, - "http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM860-2.1.2.ucm", - }, - { - "IBM Code Page 862", - "PC862LatinHebrew", - "", - "CodePage862", - encoding.ASCIISub, - 
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM862-2.1.2.ucm", - }, - { - "IBM Code Page 863", - "IBM863", - "", - "CodePage863", - encoding.ASCIISub, - "http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM863-2.1.2.ucm", - }, - { - "IBM Code Page 865", - "IBM865", - "", - "CodePage865", - encoding.ASCIISub, - "http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM865-2.1.2.ucm", - }, - { - "IBM Code Page 866", - "IBM866", - "", - "CodePage866", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-ibm866.txt", - }, - { - "IBM Code Page 1047", - "IBM1047", - "", - "CodePage1047", - 0x3f, - "http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM1047-2.1.2.ucm", - }, - { - "IBM Code Page 1140", - "IBM01140", - "", - "CodePage1140", - 0x3f, - "http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/ibm-1140_P100-1997.ucm", - }, - { - "ISO 8859-1", - "ISOLatin1", - "", - "ISO8859_1", - encoding.ASCIISub, - "http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/iso-8859_1-1998.ucm", - }, - { - "ISO 8859-2", - "ISOLatin2", - "", - "ISO8859_2", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-iso-8859-2.txt", - }, - { - "ISO 8859-3", - "ISOLatin3", - "", - "ISO8859_3", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-iso-8859-3.txt", - }, - { - "ISO 8859-4", - "ISOLatin4", - "", - "ISO8859_4", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-iso-8859-4.txt", - }, - { - "ISO 8859-5", - "ISOLatinCyrillic", - "", - "ISO8859_5", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-iso-8859-5.txt", - }, - { - "ISO 8859-6", - "ISOLatinArabic", - "", - "ISO8859_6,ISO8859_6E,ISO8859_6I", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-iso-8859-6.txt", - }, - { - "ISO 8859-7", - "ISOLatinGreek", - "", - "ISO8859_7", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-iso-8859-7.txt", - }, - { - "ISO 8859-8", - "ISOLatinHebrew", - "", - "ISO8859_8,ISO8859_8E,ISO8859_8I", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-iso-8859-8.txt", - }, - { - "ISO 8859-9", - "ISOLatin5", - "", - "ISO8859_9", - encoding.ASCIISub, - "http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/iso-8859_9-1999.ucm", - }, - { - "ISO 8859-10", - "ISOLatin6", - "", - "ISO8859_10", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-iso-8859-10.txt", - }, - { - "ISO 8859-13", - "ISO885913", - "", - "ISO8859_13", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-iso-8859-13.txt", - }, - { - "ISO 8859-14", - "ISO885914", - "", - "ISO8859_14", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-iso-8859-14.txt", - }, - { - "ISO 8859-15", - "ISO885915", - "", - "ISO8859_15", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-iso-8859-15.txt", - }, - { - "ISO 8859-16", - "ISO885916", - "", - "ISO8859_16", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-iso-8859-16.txt", - }, - { - "KOI8-R", - "KOI8R", - "", - "KOI8R", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-koi8-r.txt", - }, - { - "KOI8-U", - "KOI8U", - "", - "KOI8U", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-koi8-u.txt", - }, - { - "Macintosh", - "Macintosh", - "", - "Macintosh", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-macintosh.txt", - }, - { - "Macintosh Cyrillic", - "MacintoshCyrillic", - "", - "MacintoshCyrillic", - 
encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-x-mac-cyrillic.txt", - }, - { - "Windows 874", - "Windows874", - "", - "Windows874", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-windows-874.txt", - }, - { - "Windows 1250", - "Windows1250", - "", - "Windows1250", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-windows-1250.txt", - }, - { - "Windows 1251", - "Windows1251", - "", - "Windows1251", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-windows-1251.txt", - }, - { - "Windows 1252", - "Windows1252", - "", - "Windows1252", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-windows-1252.txt", - }, - { - "Windows 1253", - "Windows1253", - "", - "Windows1253", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-windows-1253.txt", - }, - { - "Windows 1254", - "Windows1254", - "", - "Windows1254", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-windows-1254.txt", - }, - { - "Windows 1255", - "Windows1255", - "", - "Windows1255", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-windows-1255.txt", - }, - { - "Windows 1256", - "Windows1256", - "", - "Windows1256", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-windows-1256.txt", - }, - { - "Windows 1257", - "Windows1257", - "", - "Windows1257", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-windows-1257.txt", - }, - { - "Windows 1258", - "Windows1258", - "", - "Windows1258", - encoding.ASCIISub, - "http://encoding.spec.whatwg.org/index-windows-1258.txt", - }, - { - "X-User-Defined", - "XUserDefined", - "It is defined at http://encoding.spec.whatwg.org/#x-user-defined", - "XUserDefined", - encoding.ASCIISub, - ascii + - "\uf780\uf781\uf782\uf783\uf784\uf785\uf786\uf787" + - "\uf788\uf789\uf78a\uf78b\uf78c\uf78d\uf78e\uf78f" + - "\uf790\uf791\uf792\uf793\uf794\uf795\uf796\uf797" + - "\uf798\uf799\uf79a\uf79b\uf79c\uf79d\uf79e\uf79f" + - "\uf7a0\uf7a1\uf7a2\uf7a3\uf7a4\uf7a5\uf7a6\uf7a7" + - "\uf7a8\uf7a9\uf7aa\uf7ab\uf7ac\uf7ad\uf7ae\uf7af" + - "\uf7b0\uf7b1\uf7b2\uf7b3\uf7b4\uf7b5\uf7b6\uf7b7" + - "\uf7b8\uf7b9\uf7ba\uf7bb\uf7bc\uf7bd\uf7be\uf7bf" + - "\uf7c0\uf7c1\uf7c2\uf7c3\uf7c4\uf7c5\uf7c6\uf7c7" + - "\uf7c8\uf7c9\uf7ca\uf7cb\uf7cc\uf7cd\uf7ce\uf7cf" + - "\uf7d0\uf7d1\uf7d2\uf7d3\uf7d4\uf7d5\uf7d6\uf7d7" + - "\uf7d8\uf7d9\uf7da\uf7db\uf7dc\uf7dd\uf7de\uf7df" + - "\uf7e0\uf7e1\uf7e2\uf7e3\uf7e4\uf7e5\uf7e6\uf7e7" + - "\uf7e8\uf7e9\uf7ea\uf7eb\uf7ec\uf7ed\uf7ee\uf7ef" + - "\uf7f0\uf7f1\uf7f2\uf7f3\uf7f4\uf7f5\uf7f6\uf7f7" + - "\uf7f8\uf7f9\uf7fa\uf7fb\uf7fc\uf7fd\uf7fe\uf7ff", - }, -} - -func getWHATWG(url string) string { - res, err := http.Get(url) - if err != nil { - log.Fatalf("%q: Get: %v", url, err) - } - defer res.Body.Close() - - mapping := make([]rune, 128) - for i := range mapping { - mapping[i] = '\ufffd' - } - - scanner := bufio.NewScanner(res.Body) - for scanner.Scan() { - s := strings.TrimSpace(scanner.Text()) - if s == "" || s[0] == '#' { - continue - } - x, y := 0, 0 - if _, err := fmt.Sscanf(s, "%d\t0x%x", &x, &y); err != nil { - log.Fatalf("could not parse %q", s) - } - if x < 0 || 128 <= x { - log.Fatalf("code %d is out of range", x) - } - if 0x80 <= y && y < 0xa0 { - // We diverge from the WHATWG spec by mapping control characters - // in the range [0x80, 0xa0) to U+FFFD. 
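The parsing loop continuing below consumes WHATWG index lines of the form `<index>\t0x<code point>`, where the index is an offset from 0x80 because the lower half of every single-byte charmap is plain ASCII. A standalone sketch of that parse, with an illustrative sample line:

```go
package main

import "fmt"

func main() {
	// One data line from a WHATWG single-byte index file (illustrative).
	line := "0\t0x0402"
	var x, y int
	if _, err := fmt.Sscanf(line, "%d\t0x%x", &x, &y); err != nil {
		panic(err)
	}
	// The index is relative to 0x80; bytes below 0x80 decode as ASCII.
	fmt.Printf("byte 0x%02x decodes to U+%04X\n", 0x80+x, y) // byte 0x80 decodes to U+0402
}
```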
- continue - } - mapping[x] = rune(y) - } - return ascii + string(mapping) -} - -func getUCM(url string) string { - res, err := http.Get(url) - if err != nil { - log.Fatalf("%q: Get: %v", url, err) - } - defer res.Body.Close() - - mapping := make([]rune, 256) - for i := range mapping { - mapping[i] = '\ufffd' - } - - charsFound := 0 - scanner := bufio.NewScanner(res.Body) - for scanner.Scan() { - s := strings.TrimSpace(scanner.Text()) - if s == "" || s[0] == '#' { - continue - } - var c byte - var r rune - if _, err := fmt.Sscanf(s, ` \x%x |0`, &r, &c); err != nil { - continue - } - mapping[c] = r - charsFound++ - } - - if charsFound < 200 { - log.Fatalf("%q: only %d characters found (wrong page format?)", url, charsFound) - } - - return string(mapping) -} - -func main() { - mibs := map[string]bool{} - all := []string{} - - w := gen.NewCodeWriter() - defer w.WriteGoFile("tables.go", "charmap") - - printf := func(s string, a ...interface{}) { fmt.Fprintf(w, s, a...) } - - printf("import (\n") - printf("\t\"golang.org/x/text/encoding\"\n") - printf("\t\"golang.org/x/text/encoding/internal/identifier\"\n") - printf(")\n\n") - for _, e := range encodings { - varNames := strings.Split(e.varName, ",") - all = append(all, varNames...) - varName := varNames[0] - switch { - case strings.HasPrefix(e.mapping, "http://encoding.spec.whatwg.org/"): - e.mapping = getWHATWG(e.mapping) - case strings.HasPrefix(e.mapping, "http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/"): - e.mapping = getUCM(e.mapping) - } - - asciiSuperset, low := strings.HasPrefix(e.mapping, ascii), 0x00 - if asciiSuperset { - low = 0x80 - } - lvn := 1 - if strings.HasPrefix(varName, "ISO") || strings.HasPrefix(varName, "KOI") { - lvn = 3 - } - lowerVarName := strings.ToLower(varName[:lvn]) + varName[lvn:] - printf("// %s is the %s encoding.\n", varName, e.name) - if e.comment != "" { - printf("//\n// %s\n", e.comment) - } - printf("var %s *Charmap = &%s\n\nvar %s = Charmap{\nname: %q,\n", - varName, lowerVarName, lowerVarName, e.name) - if mibs[e.mib] { - log.Fatalf("MIB type %q declared multiple times.", e.mib) - } - printf("mib: identifier.%s,\n", e.mib) - printf("asciiSuperset: %t,\n", asciiSuperset) - printf("low: 0x%02x,\n", low) - printf("replacement: 0x%02x,\n", e.replacement) - - printf("decode: [256]utf8Enc{\n") - i, backMapping := 0, map[rune]byte{} - for _, c := range e.mapping { - if _, ok := backMapping[c]; !ok && c != utf8.RuneError { - backMapping[c] = byte(i) - } - var buf [8]byte - n := utf8.EncodeRune(buf[:], c) - if n > 3 { - panic(fmt.Sprintf("rune %q (%U) is too long", c, c)) - } - printf("{%d,[3]byte{0x%02x,0x%02x,0x%02x}},", n, buf[0], buf[1], buf[2]) - if i%2 == 1 { - printf("\n") - } - i++ - } - printf("},\n") - - printf("encode: [256]uint32{\n") - encode := make([]uint32, 0, 256) - for c, i := range backMapping { - encode = append(encode, uint32(i)<<24|uint32(c)) - } - sort.Sort(byRune(encode)) - for len(encode) < cap(encode) { - encode = append(encode, encode[len(encode)-1]) - } - for i, enc := range encode { - printf("0x%08x,", enc) - if i%8 == 7 { - printf("\n") - } - } - printf("},\n}\n") - - // Add an estimate of the size of a single Charmap{} struct value, which - // includes two 256 elem arrays of 4 bytes and some extra fields, which - // align to 3 uint64s on 64-bit architectures. - w.Size += 2*4*256 + 3*8 - } - // TODO: add proper line breaking. 
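The decode tables emitted just above store each rune as a `utf8Enc`: its UTF-8 bytes padded to three, plus an explicit length. Three bytes suffice because single-byte charmaps only map into the Basic Multilingual Plane, which is why the generator panics on anything longer. A sketch of how one entry is formed:

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

func main() {
	var buf [8]byte
	c := 'Ω' // U+03A9 encodes to two UTF-8 bytes
	n := utf8.EncodeRune(buf[:], c)
	// Pad to three bytes and record the length, as the decode table does.
	fmt.Printf("{%d, [3]byte{0x%02x, 0x%02x, 0x%02x}}\n", n, buf[0], buf[1], buf[2])
	// Output: {2, [3]byte{0xce, 0xa9, 0x00}}
}
```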
- printf("var listAll = []encoding.Encoding{\n%s,\n}\n\n", strings.Join(all, ",\n")) -} - -type byRune []uint32 - -func (b byRune) Len() int { return len(b) } -func (b byRune) Less(i, j int) bool { return b[i]&0xffffff < b[j]&0xffffff } -func (b byRune) Swap(i, j int) { b[i], b[j] = b[j], b[i] } diff --git a/vendor/golang.org/x/text/encoding/htmlindex/gen.go b/vendor/golang.org/x/text/encoding/htmlindex/gen.go deleted file mode 100644 index ac6b4a77f..000000000 --- a/vendor/golang.org/x/text/encoding/htmlindex/gen.go +++ /dev/null @@ -1,173 +0,0 @@ -// Copyright 2015 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -package main - -import ( - "bytes" - "encoding/json" - "fmt" - "log" - "strings" - - "golang.org/x/text/internal/gen" -) - -type group struct { - Encodings []struct { - Labels []string - Name string - } -} - -func main() { - gen.Init() - - r := gen.Open("https://encoding.spec.whatwg.org", "whatwg", "encodings.json") - var groups []group - if err := json.NewDecoder(r).Decode(&groups); err != nil { - log.Fatalf("Error reading encodings.json: %v", err) - } - - w := &bytes.Buffer{} - fmt.Fprintln(w, "type htmlEncoding byte") - fmt.Fprintln(w, "const (") - for i, g := range groups { - for _, e := range g.Encodings { - key := strings.ToLower(e.Name) - name := consts[key] - if name == "" { - log.Fatalf("No const defined for %s.", key) - } - if i == 0 { - fmt.Fprintf(w, "%s htmlEncoding = iota\n", name) - } else { - fmt.Fprintf(w, "%s\n", name) - } - } - } - fmt.Fprintln(w, "numEncodings") - fmt.Fprint(w, ")\n\n") - - fmt.Fprintln(w, "var canonical = [numEncodings]string{") - for _, g := range groups { - for _, e := range g.Encodings { - fmt.Fprintf(w, "%q,\n", strings.ToLower(e.Name)) - } - } - fmt.Fprint(w, "}\n\n") - - fmt.Fprintln(w, "var nameMap = map[string]htmlEncoding{") - for _, g := range groups { - for _, e := range g.Encodings { - for _, l := range e.Labels { - key := strings.ToLower(e.Name) - name := consts[key] - fmt.Fprintf(w, "%q: %s,\n", l, name) - } - } - } - fmt.Fprint(w, "}\n\n") - - var tags []string - fmt.Fprintln(w, "var localeMap = []htmlEncoding{") - for _, loc := range locales { - tags = append(tags, loc.tag) - fmt.Fprintf(w, "%s, // %s \n", consts[loc.name], loc.tag) - } - fmt.Fprint(w, "}\n\n") - - fmt.Fprintf(w, "const locales = %q\n", strings.Join(tags, " ")) - - gen.WriteGoFile("tables.go", "htmlindex", w.Bytes()) -} - -// consts maps canonical encoding name to internal constant. 
-var consts = map[string]string{ - "utf-8": "utf8", - "ibm866": "ibm866", - "iso-8859-2": "iso8859_2", - "iso-8859-3": "iso8859_3", - "iso-8859-4": "iso8859_4", - "iso-8859-5": "iso8859_5", - "iso-8859-6": "iso8859_6", - "iso-8859-7": "iso8859_7", - "iso-8859-8": "iso8859_8", - "iso-8859-8-i": "iso8859_8I", - "iso-8859-10": "iso8859_10", - "iso-8859-13": "iso8859_13", - "iso-8859-14": "iso8859_14", - "iso-8859-15": "iso8859_15", - "iso-8859-16": "iso8859_16", - "koi8-r": "koi8r", - "koi8-u": "koi8u", - "macintosh": "macintosh", - "windows-874": "windows874", - "windows-1250": "windows1250", - "windows-1251": "windows1251", - "windows-1252": "windows1252", - "windows-1253": "windows1253", - "windows-1254": "windows1254", - "windows-1255": "windows1255", - "windows-1256": "windows1256", - "windows-1257": "windows1257", - "windows-1258": "windows1258", - "x-mac-cyrillic": "macintoshCyrillic", - "gbk": "gbk", - "gb18030": "gb18030", - // "hz-gb-2312": "hzgb2312", // Was removed from WhatWG - "big5": "big5", - "euc-jp": "eucjp", - "iso-2022-jp": "iso2022jp", - "shift_jis": "shiftJIS", - "euc-kr": "euckr", - "replacement": "replacement", - "utf-16be": "utf16be", - "utf-16le": "utf16le", - "x-user-defined": "xUserDefined", -} - -// locales is taken from -// https://html.spec.whatwg.org/multipage/syntax.html#encoding-sniffing-algorithm. -var locales = []struct{ tag, name string }{ - // The default value. Explicitly state latin to benefit from the exact - // script option, while still making 1252 the default encoding for languages - // written in Latin script. - {"und_Latn", "windows-1252"}, - {"ar", "windows-1256"}, - {"ba", "windows-1251"}, - {"be", "windows-1251"}, - {"bg", "windows-1251"}, - {"cs", "windows-1250"}, - {"el", "iso-8859-7"}, - {"et", "windows-1257"}, - {"fa", "windows-1256"}, - {"he", "windows-1255"}, - {"hr", "windows-1250"}, - {"hu", "iso-8859-2"}, - {"ja", "shift_jis"}, - {"kk", "windows-1251"}, - {"ko", "euc-kr"}, - {"ku", "windows-1254"}, - {"ky", "windows-1251"}, - {"lt", "windows-1257"}, - {"lv", "windows-1257"}, - {"mk", "windows-1251"}, - {"pl", "iso-8859-2"}, - {"ru", "windows-1251"}, - {"sah", "windows-1251"}, - {"sk", "windows-1250"}, - {"sl", "iso-8859-2"}, - {"sr", "windows-1251"}, - {"tg", "windows-1251"}, - {"th", "windows-874"}, - {"tr", "windows-1254"}, - {"tt", "windows-1251"}, - {"uk", "windows-1251"}, - {"vi", "windows-1258"}, - {"zh-hans", "gb18030"}, - {"zh-hant", "big5"}, -} diff --git a/vendor/golang.org/x/text/encoding/internal/identifier/gen.go b/vendor/golang.org/x/text/encoding/internal/identifier/gen.go deleted file mode 100644 index 26cfef9c6..000000000 --- a/vendor/golang.org/x/text/encoding/internal/identifier/gen.go +++ /dev/null @@ -1,142 +0,0 @@ -// Copyright 2015 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -// +build ignore - -package main - -import ( - "bytes" - "encoding/xml" - "fmt" - "io" - "log" - "strings" - - "golang.org/x/text/internal/gen" -) - -type registry struct { - XMLName xml.Name `xml:"registry"` - Updated string `xml:"updated"` - Registry []struct { - ID string `xml:"id,attr"` - Record []struct { - Name string `xml:"name"` - Xref []struct { - Type string `xml:"type,attr"` - Data string `xml:"data,attr"` - } `xml:"xref"` - Desc struct { - Data string `xml:",innerxml"` - // Any []struct { - // Data string `xml:",chardata"` - // } `xml:",any"` - // Data string `xml:",chardata"` - } `xml:"description,"` - MIB string `xml:"value"` - Alias []string `xml:"alias"` - MIME string `xml:"preferred_alias"` - } `xml:"record"` - } `xml:"registry"` -} - -func main() { - r := gen.OpenIANAFile("assignments/character-sets/character-sets.xml") - reg := ®istry{} - if err := xml.NewDecoder(r).Decode(®); err != nil && err != io.EOF { - log.Fatalf("Error decoding charset registry: %v", err) - } - if len(reg.Registry) == 0 || reg.Registry[0].ID != "character-sets-1" { - log.Fatalf("Unexpected ID %s", reg.Registry[0].ID) - } - - w := &bytes.Buffer{} - fmt.Fprintf(w, "const (\n") - for _, rec := range reg.Registry[0].Record { - constName := "" - for _, a := range rec.Alias { - if strings.HasPrefix(a, "cs") && strings.IndexByte(a, '-') == -1 { - // Some of the constant definitions have comments in them. Strip those. - constName = strings.Title(strings.SplitN(a[2:], "\n", 2)[0]) - } - } - if constName == "" { - switch rec.MIB { - case "2085": - constName = "HZGB2312" // Not listed as alias for some reason. - default: - log.Fatalf("No cs alias defined for %s.", rec.MIB) - } - } - if rec.MIME != "" { - rec.MIME = fmt.Sprintf(" (MIME: %s)", rec.MIME) - } - fmt.Fprintf(w, "// %s is the MIB identifier with IANA name %s%s.\n//\n", constName, rec.Name, rec.MIME) - if len(rec.Desc.Data) > 0 { - fmt.Fprint(w, "// ") - d := xml.NewDecoder(strings.NewReader(rec.Desc.Data)) - inElem := true - attr := "" - for { - t, err := d.Token() - if err != nil { - if err != io.EOF { - log.Fatal(err) - } - break - } - switch x := t.(type) { - case xml.CharData: - attr = "" // Don't need attribute info. - a := bytes.Split([]byte(x), []byte("\n")) - for i, b := range a { - if b = bytes.TrimSpace(b); len(b) != 0 { - if !inElem && i > 0 { - fmt.Fprint(w, "\n// ") - } - inElem = false - fmt.Fprintf(w, "%s ", string(b)) - } - } - case xml.StartElement: - if x.Name.Local == "xref" { - inElem = true - use := false - for _, a := range x.Attr { - if a.Name.Local == "type" { - use = use || a.Value != "person" - } - if a.Name.Local == "data" && use { - // Patch up URLs to use https. From some links, the - // https version is different from the http one. 
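The struct tags on `registry` above do the heavy lifting: `encoding/xml` maps IANA's character-sets XML onto Go fields by element name. A self-contained sketch of the same tag-driven decoding, trimmed to two fields and fed an illustrative snippet:

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// record keeps only the name and MIB value of an IANA registry record.
type record struct {
	Name string `xml:"name"`
	MIB  string `xml:"value"`
}

func main() {
	data := `<record><name>US-ASCII</name><value>3</value></record>`
	var r record
	if err := xml.Unmarshal([]byte(data), &r); err != nil {
		panic(err)
	}
	fmt.Printf("%s has MIB value %s\n", r.Name, r.MIB) // US-ASCII has MIB value 3
}
```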
- s := a.Value - s = strings.Replace(s, "http://", "https://", -1) - s = strings.Replace(s, "/unicode/", "/", -1) - attr = s + " " - } - } - } - case xml.EndElement: - inElem = false - fmt.Fprint(w, attr) - } - } - fmt.Fprint(w, "\n") - } - for _, x := range rec.Xref { - switch x.Type { - case "rfc": - fmt.Fprintf(w, "// Reference: %s\n", strings.ToUpper(x.Data)) - case "uri": - fmt.Fprintf(w, "// Reference: %s\n", x.Data) - } - } - fmt.Fprintf(w, "%s MIB = %s\n", constName, rec.MIB) - fmt.Fprintln(w) - } - fmt.Fprintln(w, ")") - - gen.WriteGoFile("mib.go", "identifier", w.Bytes()) -} diff --git a/vendor/golang.org/x/text/encoding/japanese/maketables.go b/vendor/golang.org/x/text/encoding/japanese/maketables.go deleted file mode 100644 index 023957a67..000000000 --- a/vendor/golang.org/x/text/encoding/japanese/maketables.go +++ /dev/null @@ -1,161 +0,0 @@ -// Copyright 2013 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -package main - -// This program generates tables.go: -// go run maketables.go | gofmt > tables.go - -// TODO: Emoji extensions? -// https://www.unicode.org/faq/emoji_dingbats.html -// https://www.unicode.org/Public/UNIDATA/EmojiSources.txt - -import ( - "bufio" - "fmt" - "log" - "net/http" - "sort" - "strings" -) - -type entry struct { - jisCode, table int -} - -func main() { - fmt.Printf("// generated by go run maketables.go; DO NOT EDIT\n\n") - fmt.Printf("// Package japanese provides Japanese encodings such as EUC-JP and Shift JIS.\n") - fmt.Printf(`package japanese // import "golang.org/x/text/encoding/japanese"` + "\n\n") - - reverse := [65536]entry{} - for i := range reverse { - reverse[i].table = -1 - } - - tables := []struct { - url string - name string - }{ - {"http://encoding.spec.whatwg.org/index-jis0208.txt", "0208"}, - {"http://encoding.spec.whatwg.org/index-jis0212.txt", "0212"}, - } - for i, table := range tables { - res, err := http.Get(table.url) - if err != nil { - log.Fatalf("%q: Get: %v", table.url, err) - } - defer res.Body.Close() - - mapping := [65536]uint16{} - - scanner := bufio.NewScanner(res.Body) - for scanner.Scan() { - s := strings.TrimSpace(scanner.Text()) - if s == "" || s[0] == '#' { - continue - } - x, y := 0, uint16(0) - if _, err := fmt.Sscanf(s, "%d 0x%x", &x, &y); err != nil { - log.Fatalf("%q: could not parse %q", table.url, s) - } - if x < 0 || 120*94 <= x { - log.Fatalf("%q: JIS code %d is out of range", table.url, x) - } - mapping[x] = y - if reverse[y].table == -1 { - reverse[y] = entry{jisCode: x, table: i} - } - } - if err := scanner.Err(); err != nil { - log.Fatalf("%q: scanner error: %v", table.url, err) - } - - fmt.Printf("// jis%sDecode is the decoding table from JIS %s code to Unicode.\n// It is defined at %s\n", - table.name, table.name, table.url) - fmt.Printf("var jis%sDecode = [...]uint16{\n", table.name) - for i, m := range mapping { - if m != 0 { - fmt.Printf("\t%d: 0x%04X,\n", i, m) - } - } - fmt.Printf("}\n\n") - } - - // Any run of at least separation continuous zero entries in the reverse map will - // be a separate encode table. 
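The constant and loop that follow implement that rule: scan the sparse reverse map, and whenever the gap since the last populated entry reaches `separation`, close the current interval and start a new one, so each dense run becomes its own encode table. The same loop reappears in the korean, simplifiedchinese, and traditionalchinese generators; a standalone sketch:

```go
package main

import "fmt"

type interval struct{ low, high int }

// denseIntervals returns the half-open runs of non-zero entries in v,
// splitting whenever at least separation consecutive entries are zero.
func denseIntervals(v []int, separation int) []interval {
	var out []interval
	low, high := -1, -1
	for i, x := range v {
		if x == 0 {
			continue
		}
		if low < 0 {
			low = i
		} else if i-high >= separation {
			out = append(out, interval{low, high})
			low = i
		}
		high = i + 1
	}
	if high >= 0 {
		out = append(out, interval{low, high})
	}
	return out
}

func main() {
	v := make([]int, 100)
	v[2], v[3], v[40], v[90] = 1, 1, 1, 1
	fmt.Println(denseIntervals(v, 20)) // [{2 4} {40 41} {90 91}]
}
```

Sorting the resulting intervals by decreasing length (the `byDecreasingLength` helper) evidently lets an encoder probe the largest, most likely table first.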
- const separation = 1024 - - intervals := []interval(nil) - low, high := -1, -1 - for i, v := range reverse { - if v.table == -1 { - continue - } - if low < 0 { - low = i - } else if i-high >= separation { - if high >= 0 { - intervals = append(intervals, interval{low, high}) - } - low = i - } - high = i + 1 - } - if high >= 0 { - intervals = append(intervals, interval{low, high}) - } - sort.Sort(byDecreasingLength(intervals)) - - fmt.Printf("const (\n") - fmt.Printf("\tjis0208 = 1\n") - fmt.Printf("\tjis0212 = 2\n") - fmt.Printf("\tcodeMask = 0x7f\n") - fmt.Printf("\tcodeShift = 7\n") - fmt.Printf("\ttableShift = 14\n") - fmt.Printf(")\n\n") - - fmt.Printf("const numEncodeTables = %d\n\n", len(intervals)) - fmt.Printf("// encodeX are the encoding tables from Unicode to JIS code,\n") - fmt.Printf("// sorted by decreasing length.\n") - for i, v := range intervals { - fmt.Printf("// encode%d: %5d entries for runes in [%5d, %5d).\n", i, v.len(), v.low, v.high) - } - fmt.Printf("//\n") - fmt.Printf("// The high two bits of the value record whether the JIS code comes from the\n") - fmt.Printf("// JIS0208 table (high bits == 1) or the JIS0212 table (high bits == 2).\n") - fmt.Printf("// The low 14 bits are two 7-bit unsigned integers j1 and j2 that form the\n") - fmt.Printf("// JIS code (94*j1 + j2) within that table.\n") - fmt.Printf("\n") - - for i, v := range intervals { - fmt.Printf("const encode%dLow, encode%dHigh = %d, %d\n\n", i, i, v.low, v.high) - fmt.Printf("var encode%d = [...]uint16{\n", i) - for j := v.low; j < v.high; j++ { - x := reverse[j] - if x.table == -1 { - continue - } - fmt.Printf("\t%d - %d: jis%s<<14 | 0x%02X<<7 | 0x%02X,\n", - j, v.low, tables[x.table].name, x.jisCode/94, x.jisCode%94) - } - fmt.Printf("}\n\n") - } -} - -// interval is a half-open interval [low, high). -type interval struct { - low, high int -} - -func (i interval) len() int { return i.high - i.low } - -// byDecreasingLength sorts intervals by decreasing length. -type byDecreasingLength []interval - -func (b byDecreasingLength) Len() int { return len(b) } -func (b byDecreasingLength) Less(i, j int) bool { return b[i].len() > b[j].len() } -func (b byDecreasingLength) Swap(i, j int) { b[i], b[j] = b[j], b[i] } diff --git a/vendor/golang.org/x/text/encoding/korean/maketables.go b/vendor/golang.org/x/text/encoding/korean/maketables.go deleted file mode 100644 index c84034fb6..000000000 --- a/vendor/golang.org/x/text/encoding/korean/maketables.go +++ /dev/null @@ -1,143 +0,0 @@ -// Copyright 2013 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -// +build ignore - -package main - -// This program generates tables.go: -// go run maketables.go | gofmt > tables.go - -import ( - "bufio" - "fmt" - "log" - "net/http" - "sort" - "strings" -) - -func main() { - fmt.Printf("// generated by go run maketables.go; DO NOT EDIT\n\n") - fmt.Printf("// Package korean provides Korean encodings such as EUC-KR.\n") - fmt.Printf(`package korean // import "golang.org/x/text/encoding/korean"` + "\n\n") - - res, err := http.Get("http://encoding.spec.whatwg.org/index-euc-kr.txt") - if err != nil { - log.Fatalf("Get: %v", err) - } - defer res.Body.Close() - - mapping := [65536]uint16{} - reverse := [65536]uint16{} - - scanner := bufio.NewScanner(res.Body) - for scanner.Scan() { - s := strings.TrimSpace(scanner.Text()) - if s == "" || s[0] == '#' { - continue - } - x, y := uint16(0), uint16(0) - if _, err := fmt.Sscanf(s, "%d 0x%x", &x, &y); err != nil { - log.Fatalf("could not parse %q", s) - } - if x < 0 || 178*(0xc7-0x81)+(0xfe-0xc7)*94+(0xff-0xa1) <= x { - log.Fatalf("EUC-KR code %d is out of range", x) - } - mapping[x] = y - if reverse[y] == 0 { - c0, c1 := uint16(0), uint16(0) - if x < 178*(0xc7-0x81) { - c0 = uint16(x/178) + 0x81 - c1 = uint16(x % 178) - switch { - case c1 < 1*26: - c1 += 0x41 - case c1 < 2*26: - c1 += 0x47 - default: - c1 += 0x4d - } - } else { - x -= 178 * (0xc7 - 0x81) - c0 = uint16(x/94) + 0xc7 - c1 = uint16(x%94) + 0xa1 - } - reverse[y] = c0<<8 | c1 - } - } - if err := scanner.Err(); err != nil { - log.Fatalf("scanner error: %v", err) - } - - fmt.Printf("// decode is the decoding table from EUC-KR code to Unicode.\n") - fmt.Printf("// It is defined at http://encoding.spec.whatwg.org/index-euc-kr.txt\n") - fmt.Printf("var decode = [...]uint16{\n") - for i, v := range mapping { - if v != 0 { - fmt.Printf("\t%d: 0x%04X,\n", i, v) - } - } - fmt.Printf("}\n\n") - - // Any run of at least separation continuous zero entries in the reverse map will - // be a separate encode table. - const separation = 1024 - - intervals := []interval(nil) - low, high := -1, -1 - for i, v := range reverse { - if v == 0 { - continue - } - if low < 0 { - low = i - } else if i-high >= separation { - if high >= 0 { - intervals = append(intervals, interval{low, high}) - } - low = i - } - high = i + 1 - } - if high >= 0 { - intervals = append(intervals, interval{low, high}) - } - sort.Sort(byDecreasingLength(intervals)) - - fmt.Printf("const numEncodeTables = %d\n\n", len(intervals)) - fmt.Printf("// encodeX are the encoding tables from Unicode to EUC-KR code,\n") - fmt.Printf("// sorted by decreasing length.\n") - for i, v := range intervals { - fmt.Printf("// encode%d: %5d entries for runes in [%5d, %5d).\n", i, v.len(), v.low, v.high) - } - fmt.Printf("\n") - - for i, v := range intervals { - fmt.Printf("const encode%dLow, encode%dHigh = %d, %d\n\n", i, i, v.low, v.high) - fmt.Printf("var encode%d = [...]uint16{\n", i) - for j := v.low; j < v.high; j++ { - x := reverse[j] - if x == 0 { - continue - } - fmt.Printf("\t%d-%d: 0x%04X,\n", j, v.low, x) - } - fmt.Printf("}\n\n") - } -} - -// interval is a half-open interval [low, high). -type interval struct { - low, high int -} - -func (i interval) len() int { return i.high - i.low } - -// byDecreasingLength sorts intervals by decreasing length. 
-type byDecreasingLength []interval - -func (b byDecreasingLength) Len() int { return len(b) } -func (b byDecreasingLength) Less(i, j int) bool { return b[i].len() > b[j].len() } -func (b byDecreasingLength) Swap(i, j int) { b[i], b[j] = b[j], b[i] } diff --git a/vendor/golang.org/x/text/encoding/simplifiedchinese/maketables.go b/vendor/golang.org/x/text/encoding/simplifiedchinese/maketables.go deleted file mode 100644 index 55016c786..000000000 --- a/vendor/golang.org/x/text/encoding/simplifiedchinese/maketables.go +++ /dev/null @@ -1,161 +0,0 @@ -// Copyright 2013 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -package main - -// This program generates tables.go: -// go run maketables.go | gofmt > tables.go - -import ( - "bufio" - "fmt" - "log" - "net/http" - "sort" - "strings" -) - -func main() { - fmt.Printf("// generated by go run maketables.go; DO NOT EDIT\n\n") - fmt.Printf("// Package simplifiedchinese provides Simplified Chinese encodings such as GBK.\n") - fmt.Printf(`package simplifiedchinese // import "golang.org/x/text/encoding/simplifiedchinese"` + "\n\n") - - printGB18030() - printGBK() -} - -func printGB18030() { - res, err := http.Get("http://encoding.spec.whatwg.org/index-gb18030.txt") - if err != nil { - log.Fatalf("Get: %v", err) - } - defer res.Body.Close() - - fmt.Printf("// gb18030 is the table from http://encoding.spec.whatwg.org/index-gb18030.txt\n") - fmt.Printf("var gb18030 = [...][2]uint16{\n") - scanner := bufio.NewScanner(res.Body) - for scanner.Scan() { - s := strings.TrimSpace(scanner.Text()) - if s == "" || s[0] == '#' { - continue - } - x, y := uint32(0), uint32(0) - if _, err := fmt.Sscanf(s, "%d 0x%x", &x, &y); err != nil { - log.Fatalf("could not parse %q", s) - } - if x < 0x10000 && y < 0x10000 { - fmt.Printf("\t{0x%04x, 0x%04x},\n", x, y) - } - } - fmt.Printf("}\n\n") -} - -func printGBK() { - res, err := http.Get("http://encoding.spec.whatwg.org/index-gbk.txt") - if err != nil { - log.Fatalf("Get: %v", err) - } - defer res.Body.Close() - - mapping := [65536]uint16{} - reverse := [65536]uint16{} - - scanner := bufio.NewScanner(res.Body) - for scanner.Scan() { - s := strings.TrimSpace(scanner.Text()) - if s == "" || s[0] == '#' { - continue - } - x, y := uint16(0), uint16(0) - if _, err := fmt.Sscanf(s, "%d 0x%x", &x, &y); err != nil { - log.Fatalf("could not parse %q", s) - } - if x < 0 || 126*190 <= x { - log.Fatalf("GBK code %d is out of range", x) - } - mapping[x] = y - if reverse[y] == 0 { - c0, c1 := x/190, x%190 - if c1 >= 0x3f { - c1++ - } - reverse[y] = (0x81+c0)<<8 | (0x40 + c1) - } - } - if err := scanner.Err(); err != nil { - log.Fatalf("scanner error: %v", err) - } - - fmt.Printf("// decode is the decoding table from GBK code to Unicode.\n") - fmt.Printf("// It is defined at http://encoding.spec.whatwg.org/index-gbk.txt\n") - fmt.Printf("var decode = [...]uint16{\n") - for i, v := range mapping { - if v != 0 { - fmt.Printf("\t%d: 0x%04X,\n", i, v) - } - } - fmt.Printf("}\n\n") - - // Any run of at least separation continuous zero entries in the reverse map will - // be a separate encode table. 
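Before the interval construction repeats below (it is the same run-splitting loop sketched earlier for the JIS tables), note how `printGBK` above packs each reverse mapping into a `uint16`: the lead byte `0x81..0xFE` in the high half and the trail byte in the low half, with the trail computation hopping over `0x7F`, which is not a valid GBK trail byte. A sketch of that packing:

```go
package main

import "fmt"

// gbkBytes converts a linear index in [0, 126*190) into the two GBK
// bytes: lead 0x81..0xFE, trail 0x40..0x7E or 0x80..0xFE (0x7F skipped).
func gbkBytes(x uint16) (lead, trail byte) {
	c0, c1 := x/190, x%190
	if c1 >= 0x3f {
		c1++ // skip 0x7f
	}
	return byte(0x81 + c0), byte(0x40 + c1)
}

func main() {
	lead, trail := gbkBytes(0)
	fmt.Printf("index 0 -> 0x%02X 0x%02X\n", lead, trail) // index 0 -> 0x81 0x40
}
```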
- const separation = 1024 - - intervals := []interval(nil) - low, high := -1, -1 - for i, v := range reverse { - if v == 0 { - continue - } - if low < 0 { - low = i - } else if i-high >= separation { - if high >= 0 { - intervals = append(intervals, interval{low, high}) - } - low = i - } - high = i + 1 - } - if high >= 0 { - intervals = append(intervals, interval{low, high}) - } - sort.Sort(byDecreasingLength(intervals)) - - fmt.Printf("const numEncodeTables = %d\n\n", len(intervals)) - fmt.Printf("// encodeX are the encoding tables from Unicode to GBK code,\n") - fmt.Printf("// sorted by decreasing length.\n") - for i, v := range intervals { - fmt.Printf("// encode%d: %5d entries for runes in [%5d, %5d).\n", i, v.len(), v.low, v.high) - } - fmt.Printf("\n") - - for i, v := range intervals { - fmt.Printf("const encode%dLow, encode%dHigh = %d, %d\n\n", i, i, v.low, v.high) - fmt.Printf("var encode%d = [...]uint16{\n", i) - for j := v.low; j < v.high; j++ { - x := reverse[j] - if x == 0 { - continue - } - fmt.Printf("\t%d-%d: 0x%04X,\n", j, v.low, x) - } - fmt.Printf("}\n\n") - } -} - -// interval is a half-open interval [low, high). -type interval struct { - low, high int -} - -func (i interval) len() int { return i.high - i.low } - -// byDecreasingLength sorts intervals by decreasing length. -type byDecreasingLength []interval - -func (b byDecreasingLength) Len() int { return len(b) } -func (b byDecreasingLength) Less(i, j int) bool { return b[i].len() > b[j].len() } -func (b byDecreasingLength) Swap(i, j int) { b[i], b[j] = b[j], b[i] } diff --git a/vendor/golang.org/x/text/encoding/traditionalchinese/maketables.go b/vendor/golang.org/x/text/encoding/traditionalchinese/maketables.go deleted file mode 100644 index cf7fdb31a..000000000 --- a/vendor/golang.org/x/text/encoding/traditionalchinese/maketables.go +++ /dev/null @@ -1,140 +0,0 @@ -// Copyright 2013 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -package main - -// This program generates tables.go: -// go run maketables.go | gofmt > tables.go - -import ( - "bufio" - "fmt" - "log" - "net/http" - "sort" - "strings" -) - -func main() { - fmt.Printf("// generated by go run maketables.go; DO NOT EDIT\n\n") - fmt.Printf("// Package traditionalchinese provides Traditional Chinese encodings such as Big5.\n") - fmt.Printf(`package traditionalchinese // import "golang.org/x/text/encoding/traditionalchinese"` + "\n\n") - - res, err := http.Get("http://encoding.spec.whatwg.org/index-big5.txt") - if err != nil { - log.Fatalf("Get: %v", err) - } - defer res.Body.Close() - - mapping := [65536]uint32{} - reverse := [65536 * 4]uint16{} - - scanner := bufio.NewScanner(res.Body) - for scanner.Scan() { - s := strings.TrimSpace(scanner.Text()) - if s == "" || s[0] == '#' { - continue - } - x, y := uint16(0), uint32(0) - if _, err := fmt.Sscanf(s, "%d 0x%x", &x, &y); err != nil { - log.Fatalf("could not parse %q", s) - } - if x < 0 || 126*157 <= x { - log.Fatalf("Big5 code %d is out of range", x) - } - mapping[x] = y - - // The WHATWG spec http://encoding.spec.whatwg.org/#indexes says that - // "The index pointer for code point in index is the first pointer - // corresponding to code point in index", which would normally mean - // that the code below should be guarded by "if reverse[y] == 0", but - // last instead of first seems to match the behavior of - // "iconv -f UTF-8 -t BIG5". 
For example, U+8005 者 occurs twice in - // http://encoding.spec.whatwg.org/index-big5.txt, as index 2148 - // (encoded as "\x8e\xcd") and index 6543 (encoded as "\xaa\xcc") - // and "echo 者 | iconv -f UTF-8 -t BIG5 | xxd" gives "\xaa\xcc". - c0, c1 := x/157, x%157 - if c1 < 0x3f { - c1 += 0x40 - } else { - c1 += 0x62 - } - reverse[y] = (0x81+c0)<<8 | c1 - } - if err := scanner.Err(); err != nil { - log.Fatalf("scanner error: %v", err) - } - - fmt.Printf("// decode is the decoding table from Big5 code to Unicode.\n") - fmt.Printf("// It is defined at http://encoding.spec.whatwg.org/index-big5.txt\n") - fmt.Printf("var decode = [...]uint32{\n") - for i, v := range mapping { - if v != 0 { - fmt.Printf("\t%d: 0x%08X,\n", i, v) - } - } - fmt.Printf("}\n\n") - - // Any run of at least separation continuous zero entries in the reverse map will - // be a separate encode table. - const separation = 1024 - - intervals := []interval(nil) - low, high := -1, -1 - for i, v := range reverse { - if v == 0 { - continue - } - if low < 0 { - low = i - } else if i-high >= separation { - if high >= 0 { - intervals = append(intervals, interval{low, high}) - } - low = i - } - high = i + 1 - } - if high >= 0 { - intervals = append(intervals, interval{low, high}) - } - sort.Sort(byDecreasingLength(intervals)) - - fmt.Printf("const numEncodeTables = %d\n\n", len(intervals)) - fmt.Printf("// encodeX are the encoding tables from Unicode to Big5 code,\n") - fmt.Printf("// sorted by decreasing length.\n") - for i, v := range intervals { - fmt.Printf("// encode%d: %5d entries for runes in [%6d, %6d).\n", i, v.len(), v.low, v.high) - } - fmt.Printf("\n") - - for i, v := range intervals { - fmt.Printf("const encode%dLow, encode%dHigh = %d, %d\n\n", i, i, v.low, v.high) - fmt.Printf("var encode%d = [...]uint16{\n", i) - for j := v.low; j < v.high; j++ { - x := reverse[j] - if x == 0 { - continue - } - fmt.Printf("\t%d-%d: 0x%04X,\n", j, v.low, x) - } - fmt.Printf("}\n\n") - } -} - -// interval is a half-open interval [low, high). -type interval struct { - low, high int -} - -func (i interval) len() int { return i.high - i.low } - -// byDecreasingLength sorts intervals by decreasing length. -type byDecreasingLength []interval - -func (b byDecreasingLength) Len() int { return len(b) } -func (b byDecreasingLength) Less(i, j int) bool { return b[i].len() > b[j].len() } -func (b byDecreasingLength) Swap(i, j int) { b[i], b[j] = b[j], b[i] } diff --git a/vendor/golang.org/x/text/internal/language/compact/gen.go b/vendor/golang.org/x/text/internal/language/compact/gen.go deleted file mode 100644 index 0c36a052f..000000000 --- a/vendor/golang.org/x/text/internal/language/compact/gen.go +++ /dev/null @@ -1,64 +0,0 @@ -// Copyright 2013 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -// Language tag table generator. -// Data read from the web. 
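Both maketables programs above use the same trick for the Unicode-to-GBK/Big5 direction: instead of emitting one table covering the whole 64K-entry reverse array, any run of at least `separation` consecutive zero entries splits the data into separate dense encode tables, sorted largest first. A standalone sketch of that scan, with invented sample data (the `interval` type and threshold mirror the generators above):

```go
package main

import (
	"fmt"
	"sort"
)

// interval is a half-open interval [low, high), as in the generators above.
type interval struct{ low, high int }

func (i interval) len() int { return i.high - i.low }

// sparseIntervals returns the dense runs of a mostly-zero reverse-mapping
// table, treating any gap of at least separation zeros as a split point.
func sparseIntervals(reverse []uint16, separation int) []interval {
	var intervals []interval
	low, high := -1, -1
	for i, v := range reverse {
		if v == 0 {
			continue
		}
		if low < 0 {
			low = i // first non-zero entry opens the first run
		} else if i-high >= separation {
			intervals = append(intervals, interval{low, high})
			low = i // gap too wide: close the run, start a new one
		}
		high = i + 1
	}
	if high >= 0 {
		intervals = append(intervals, interval{low, high})
	}
	// The generators emit the largest table first.
	sort.Slice(intervals, func(i, j int) bool {
		return intervals[i].len() > intervals[j].len()
	})
	return intervals
}

func main() {
	// Toy data: two dense runs separated by a wide zero gap.
	reverse := make([]uint16, 4096)
	for i := 100; i < 300; i++ {
		reverse[i] = 1
	}
	for i := 2000; i < 2050; i++ {
		reverse[i] = 1
	}
	for _, iv := range sparseIntervals(reverse, 1024) {
		fmt.Printf("[%d, %d): %d entries\n", iv.low, iv.high, iv.len())
	}
}
```

Splitting on zero runs keeps each emitted array dense, so the generated encoder can index `encodeN[r-encodeNLow]` directly instead of searching a sparse map.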
- -package main - -import ( - "flag" - "fmt" - "log" - - "golang.org/x/text/internal/gen" - "golang.org/x/text/unicode/cldr" -) - -var ( - test = flag.Bool("test", - false, - "test existing tables; can be used to compare web data with package data.") - outputFile = flag.String("output", - "tables.go", - "output file for generated tables") -) - -func main() { - gen.Init() - - w := gen.NewCodeWriter() - defer w.WriteGoFile("tables.go", "compact") - - fmt.Fprintln(w, `import "golang.org/x/text/internal/language"`) - - b := newBuilder(w) - gen.WriteCLDRVersion(w) - - b.writeCompactIndex() -} - -type builder struct { - w *gen.CodeWriter - data *cldr.CLDR - supp *cldr.SupplementalData -} - -func newBuilder(w *gen.CodeWriter) *builder { - r := gen.OpenCLDRCoreZip() - defer r.Close() - d := &cldr.Decoder{} - data, err := d.DecodeZip(r) - if err != nil { - log.Fatal(err) - } - b := builder{ - w: w, - data: data, - supp: data.Supplemental(), - } - return &b -} diff --git a/vendor/golang.org/x/text/internal/language/compact/gen_index.go b/vendor/golang.org/x/text/internal/language/compact/gen_index.go deleted file mode 100644 index 136cefaf0..000000000 --- a/vendor/golang.org/x/text/internal/language/compact/gen_index.go +++ /dev/null @@ -1,113 +0,0 @@ -// Copyright 2015 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -package main - -// This file generates derivative tables based on the language package itself. - -import ( - "fmt" - "log" - "sort" - "strings" - - "golang.org/x/text/internal/language" -) - -// Compact indices: -// Note -va-X variants only apply to localization variants. -// BCP variants only ever apply to language. -// The only ambiguity between tags is with regions. - -func (b *builder) writeCompactIndex() { - // Collect all language tags for which we have any data in CLDR. - m := map[language.Tag]bool{} - for _, lang := range b.data.Locales() { - // We include all locales unconditionally to be consistent with en_US. - // We want en_US, even though it has no data associated with it. - - // TODO: put any of the languages for which no data exists at the end - // of the index. This allows all components based on ICU to use that - // as the cutoff point. - // if x := data.RawLDML(lang); false || - // x.LocaleDisplayNames != nil || - // x.Characters != nil || - // x.Delimiters != nil || - // x.Measurement != nil || - // x.Dates != nil || - // x.Numbers != nil || - // x.Units != nil || - // x.ListPatterns != nil || - // x.Collations != nil || - // x.Segmentations != nil || - // x.Rbnf != nil || - // x.Annotations != nil || - // x.Metadata != nil { - - // TODO: support POSIX natively, albeit non-standard. - tag := language.Make(strings.Replace(lang, "_POSIX", "-u-va-posix", 1)) - m[tag] = true - // } - } - - // TODO: plural rules are also defined for the deprecated tags: - // iw mo sh tl - // Consider removing these as compact tags. - - // Include locales for plural rules, which uses a different structure. 
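newBuilder above decodes CLDR's core.zip through the golang.org/x/text/unicode/cldr package, the same entry point available outside the generator. A minimal sketch against a local copy of the archive (the file path is an assumption; the generator's plural-rule locale collection continues below):

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/text/unicode/cldr"
)

func main() {
	// core.zip as published by unicode.org; the path is illustrative.
	f, err := os.Open("core.zip")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var d cldr.Decoder
	data, err := d.DecodeZip(f)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d locales, supplemental data present: %v\n",
		len(data.Locales()), data.Supplemental() != nil)
}
```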
- for _, plurals := range b.supp.Plurals { - for _, rules := range plurals.PluralRules { - for _, lang := range strings.Split(rules.Locales, " ") { - m[language.Make(lang)] = true - } - } - } - - var coreTags []language.CompactCoreInfo - var special []string - - for t := range m { - if x := t.Extensions(); len(x) != 0 && fmt.Sprint(x) != "[u-va-posix]" { - log.Fatalf("Unexpected extension %v in %v", x, t) - } - if len(t.Variants()) == 0 && len(t.Extensions()) == 0 { - cci, ok := language.GetCompactCore(t) - if !ok { - log.Fatalf("Locale for non-basic language %q", t) - } - coreTags = append(coreTags, cci) - } else { - special = append(special, t.String()) - } - } - - w := b.w - - sort.Slice(coreTags, func(i, j int) bool { return coreTags[i] < coreTags[j] }) - sort.Strings(special) - - w.WriteComment(` - NumCompactTags is the number of common tags. The maximum tag is - NumCompactTags-1.`) - w.WriteConst("NumCompactTags", len(m)) - - fmt.Fprintln(w, "const (") - for i, t := range coreTags { - fmt.Fprintf(w, "%s ID = %d\n", ident(t.Tag().String()), i) - } - for i, t := range special { - fmt.Fprintf(w, "%s ID = %d\n", ident(t), i+len(coreTags)) - } - fmt.Fprintln(w, ")") - - w.WriteVar("coreTags", coreTags) - - w.WriteConst("specialTagsStr", strings.Join(special, " ")) -} - -func ident(s string) string { - return strings.Replace(s, "-", "", -1) + "Index" -} diff --git a/vendor/golang.org/x/text/internal/language/compact/gen_parents.go b/vendor/golang.org/x/text/internal/language/compact/gen_parents.go deleted file mode 100644 index 9543d5832..000000000 --- a/vendor/golang.org/x/text/internal/language/compact/gen_parents.go +++ /dev/null @@ -1,54 +0,0 @@ -// Copyright 2018 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -package main - -import ( - "log" - - "golang.org/x/text/internal/gen" - "golang.org/x/text/internal/language" - "golang.org/x/text/internal/language/compact" - "golang.org/x/text/unicode/cldr" -) - -func main() { - r := gen.OpenCLDRCoreZip() - defer r.Close() - - d := &cldr.Decoder{} - data, err := d.DecodeZip(r) - if err != nil { - log.Fatalf("DecodeZip: %v", err) - } - - w := gen.NewCodeWriter() - defer w.WriteGoFile("parents.go", "compact") - - // Create parents table. - type ID uint16 - parents := make([]ID, compact.NumCompactTags) - for _, loc := range data.Locales() { - tag := language.MustParse(loc) - index, ok := compact.FromTag(tag) - if !ok { - continue - } - parentIndex := compact.ID(0) // und - for p := tag.Parent(); p != language.Und; p = p.Parent() { - if x, ok := compact.FromTag(p); ok { - parentIndex = x - break - } - } - parents[index] = ID(parentIndex) - } - - w.WriteComment(` - parents maps a compact index of a tag to the compact index of the parent of - this tag.`) - w.WriteVar("parents", parents) -} diff --git a/vendor/golang.org/x/text/internal/language/gen.go b/vendor/golang.org/x/text/internal/language/gen.go deleted file mode 100644 index cdcc7febc..000000000 --- a/vendor/golang.org/x/text/internal/language/gen.go +++ /dev/null @@ -1,1520 +0,0 @@ -// Copyright 2013 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -// Language tag table generator. -// Data read from the web. 
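gen_parents.go above derives each compact tag's parent by repeatedly calling Parent() until it reaches a tag that has a compact index, bottoming out at "und". The same chain is observable through the public golang.org/x/text/language API; a small illustration (pt-BR is an arbitrary choice, and the exact chain reflects whatever CLDR version the package was built with):

```go
package main

import (
	"fmt"

	"golang.org/x/text/language"
)

func main() {
	// Walk the CLDR parent chain, as gen_parents.go does, until the
	// root tag "und" is reached: pt-BR -> pt -> und.
	for t := language.MustParse("pt-BR"); ; t = t.Parent() {
		fmt.Println(t)
		if t.IsRoot() {
			break
		}
	}
}
```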
- -package main - -import ( - "bufio" - "flag" - "fmt" - "io" - "io/ioutil" - "log" - "math" - "reflect" - "regexp" - "sort" - "strconv" - "strings" - - "golang.org/x/text/internal/gen" - "golang.org/x/text/internal/tag" - "golang.org/x/text/unicode/cldr" -) - -var ( - test = flag.Bool("test", - false, - "test existing tables; can be used to compare web data with package data.") - outputFile = flag.String("output", - "tables.go", - "output file for generated tables") -) - -var comment = []string{ - ` -lang holds an alphabetically sorted list of ISO-639 language identifiers. -All entries are 4 bytes. The index of the identifier (divided by 4) is the language tag. -For 2-byte language identifiers, the two successive bytes have the following meaning: - - if the first letter of the 2- and 3-letter ISO codes are the same: - the second and third letter of the 3-letter ISO code. - - otherwise: a 0 and a by 2 bits right-shifted index into altLangISO3. -For 3-byte language identifiers the 4th byte is 0.`, - ` -langNoIndex is a bit vector of all 3-letter language codes that are not used as an index -in lookup tables. The language ids for these language codes are derived directly -from the letters and are not consecutive.`, - ` -altLangISO3 holds an alphabetically sorted list of 3-letter language code alternatives -to 2-letter language codes that cannot be derived using the method described above. -Each 3-letter code is followed by its 1-byte langID.`, - ` -altLangIndex is used to convert indexes in altLangISO3 to langIDs.`, - ` -AliasMap maps langIDs to their suggested replacements.`, - ` -script is an alphabetically sorted list of ISO 15924 codes. The index -of the script in the string, divided by 4, is the internal scriptID.`, - ` -isoRegionOffset needs to be added to the index of regionISO to obtain the regionID -for 2-letter ISO codes. (The first isoRegionOffset regionIDs are reserved for -the UN.M49 codes used for groups.)`, - ` -regionISO holds a list of alphabetically sorted 2-letter ISO region codes. -Each 2-letter codes is followed by two bytes with the following meaning: - - [A-Z}{2}: the first letter of the 2-letter code plus these two - letters form the 3-letter ISO code. - - 0, n: index into altRegionISO3.`, - ` -regionTypes defines the status of a region for various standards.`, - ` -m49 maps regionIDs to UN.M49 codes. The first isoRegionOffset entries are -codes indicating collections of regions.`, - ` -m49Index gives indexes into fromM49 based on the three most significant bits -of a 10-bit UN.M49 code. To search an UN.M49 code in fromM49, search in - fromM49[m49Index[msb39(code)]:m49Index[msb3(code)+1]] -for an entry where the first 7 bits match the 7 lsb of the UN.M49 code. -The region code is stored in the 9 lsb of the indexed value.`, - ` -fromM49 contains entries to map UN.M49 codes to regions. See m49Index for details.`, - ` -altRegionISO3 holds a list of 3-letter region codes that cannot be -mapped to 2-letter codes using the default algorithm. This is a short list.`, - ` -altRegionIDs holds a list of regionIDs the positions of which match those -of the 3-letter ISO codes in altRegionISO3.`, - ` -variantNumSpecialized is the number of specialized variants in variants.`, - ` -suppressScript is an index from langID to the dominant script for that language, -if it exists. If a script is given, it should be suppressed from the language tag.`, - ` -likelyLang is a lookup table, indexed by langID, for the most likely -scripts and regions given incomplete information. 
If more entries exist for a -given language, region and script are the index and size respectively -of the list in likelyLangList.`, - ` -likelyLangList holds lists info associated with likelyLang.`, - ` -likelyRegion is a lookup table, indexed by regionID, for the most likely -languages and scripts given incomplete information. If more entries exist -for a given regionID, lang and script are the index and size respectively -of the list in likelyRegionList. -TODO: exclude containers and user-definable regions from the list.`, - ` -likelyRegionList holds lists info associated with likelyRegion.`, - ` -likelyScript is a lookup table, indexed by scriptID, for the most likely -languages and regions given a script.`, - ` -nRegionGroups is the number of region groups.`, - ` -regionInclusion maps region identifiers to sets of regions in regionInclusionBits, -where each set holds all groupings that are directly connected in a region -containment graph.`, - ` -regionInclusionBits is an array of bit vectors where every vector represents -a set of region groupings. These sets are used to compute the distance -between two regions for the purpose of language matching.`, - ` -regionInclusionNext marks, for each entry in regionInclusionBits, the set of -all groups that are reachable from the groups set in the respective entry.`, -} - -// TODO: consider changing some of these structures to tries. This can reduce -// memory, but may increase the need for memory allocations. This could be -// mitigated if we can piggyback on language tags for common cases. - -func failOnError(e error) { - if e != nil { - log.Panic(e) - } -} - -type setType int - -const ( - Indexed setType = 1 + iota // all elements must be of same size - Linear -) - -type stringSet struct { - s []string - sorted, frozen bool - - // We often need to update values after the creation of an index is completed. - // We include a convenience map for keeping track of this. - update map[string]string - typ setType // used for checking. -} - -func (ss *stringSet) clone() stringSet { - c := *ss - c.s = append([]string(nil), c.s...) - return c -} - -func (ss *stringSet) setType(t setType) { - if ss.typ != t && ss.typ != 0 { - log.Panicf("type %d cannot be assigned as it was already %d", t, ss.typ) - } -} - -// parse parses a whitespace-separated string and initializes ss with its -// components. 
-func (ss *stringSet) parse(s string) { - scan := bufio.NewScanner(strings.NewReader(s)) - scan.Split(bufio.ScanWords) - for scan.Scan() { - ss.add(scan.Text()) - } -} - -func (ss *stringSet) assertChangeable() { - if ss.frozen { - log.Panic("attempt to modify a frozen stringSet") - } -} - -func (ss *stringSet) add(s string) { - ss.assertChangeable() - ss.s = append(ss.s, s) - ss.sorted = ss.frozen -} - -func (ss *stringSet) freeze() { - ss.compact() - ss.frozen = true -} - -func (ss *stringSet) compact() { - if ss.sorted { - return - } - a := ss.s - sort.Strings(a) - k := 0 - for i := 1; i < len(a); i++ { - if a[k] != a[i] { - a[k+1] = a[i] - k++ - } - } - ss.s = a[:k+1] - ss.sorted = ss.frozen -} - -type funcSorter struct { - fn func(a, b string) bool - sort.StringSlice -} - -func (s funcSorter) Less(i, j int) bool { - return s.fn(s.StringSlice[i], s.StringSlice[j]) -} - -func (ss *stringSet) sortFunc(f func(a, b string) bool) { - ss.compact() - sort.Sort(funcSorter{f, sort.StringSlice(ss.s)}) -} - -func (ss *stringSet) remove(s string) { - ss.assertChangeable() - if i, ok := ss.find(s); ok { - copy(ss.s[i:], ss.s[i+1:]) - ss.s = ss.s[:len(ss.s)-1] - } -} - -func (ss *stringSet) replace(ol, nu string) { - ss.s[ss.index(ol)] = nu - ss.sorted = ss.frozen -} - -func (ss *stringSet) index(s string) int { - ss.setType(Indexed) - i, ok := ss.find(s) - if !ok { - if i < len(ss.s) { - log.Panicf("find: item %q is not in list. Closest match is %q.", s, ss.s[i]) - } - log.Panicf("find: item %q is not in list", s) - - } - return i -} - -func (ss *stringSet) find(s string) (int, bool) { - ss.compact() - i := sort.SearchStrings(ss.s, s) - return i, i != len(ss.s) && ss.s[i] == s -} - -func (ss *stringSet) slice() []string { - ss.compact() - return ss.s -} - -func (ss *stringSet) updateLater(v, key string) { - if ss.update == nil { - ss.update = map[string]string{} - } - ss.update[v] = key -} - -// join joins the string and ensures that all entries are of the same length. -func (ss *stringSet) join() string { - ss.setType(Indexed) - n := len(ss.s[0]) - for _, s := range ss.s { - if len(s) != n { - log.Panicf("join: not all entries are of the same length: %q", s) - } - } - ss.s = append(ss.s, strings.Repeat("\xff", n)) - return strings.Join(ss.s, "") -} - -// ianaEntry holds information for an entry in the IANA Language Subtag Repository. -// All types use the same entry. -// See http://tools.ietf.org/html/bcp47#section-5.1 for a description of the various -// fields. -type ianaEntry struct { - typ string - description []string - scope string - added string - preferred string - deprecated string - suppressScript string - macro string - prefix []string -} - -type builder struct { - w *gen.CodeWriter - hw io.Writer // MultiWriter for w and w.Hash - data *cldr.CLDR - supp *cldr.SupplementalData - - // indices - locale stringSet // common locales - lang stringSet // canonical language ids (2 or 3 letter ISO codes) with data - langNoIndex stringSet // 3-letter ISO codes with no associated data - script stringSet // 4-letter ISO codes - region stringSet // 2-letter ISO or 3-digit UN M49 codes - variant stringSet // 4-8-alphanumeric variant code. - - // Region codes that are groups with their corresponding group IDs. 
- groups map[int]index - - // langInfo - registry map[string]*ianaEntry -} - -type index uint - -func newBuilder(w *gen.CodeWriter) *builder { - r := gen.OpenCLDRCoreZip() - defer r.Close() - d := &cldr.Decoder{} - data, err := d.DecodeZip(r) - failOnError(err) - b := builder{ - w: w, - hw: io.MultiWriter(w, w.Hash), - data: data, - supp: data.Supplemental(), - } - b.parseRegistry() - return &b -} - -func (b *builder) parseRegistry() { - r := gen.OpenIANAFile("assignments/language-subtag-registry") - defer r.Close() - b.registry = make(map[string]*ianaEntry) - - scan := bufio.NewScanner(r) - scan.Split(bufio.ScanWords) - var record *ianaEntry - for more := scan.Scan(); more; { - key := scan.Text() - more = scan.Scan() - value := scan.Text() - switch key { - case "Type:": - record = &ianaEntry{typ: value} - case "Subtag:", "Tag:": - if s := strings.SplitN(value, "..", 2); len(s) > 1 { - for a := s[0]; a <= s[1]; a = inc(a) { - b.addToRegistry(a, record) - } - } else { - b.addToRegistry(value, record) - } - case "Suppress-Script:": - record.suppressScript = value - case "Added:": - record.added = value - case "Deprecated:": - record.deprecated = value - case "Macrolanguage:": - record.macro = value - case "Preferred-Value:": - record.preferred = value - case "Prefix:": - record.prefix = append(record.prefix, value) - case "Scope:": - record.scope = value - case "Description:": - buf := []byte(value) - for more = scan.Scan(); more; more = scan.Scan() { - b := scan.Bytes() - if b[0] == '%' || b[len(b)-1] == ':' { - break - } - buf = append(buf, ' ') - buf = append(buf, b...) - } - record.description = append(record.description, string(buf)) - continue - default: - continue - } - more = scan.Scan() - } - if scan.Err() != nil { - log.Panic(scan.Err()) - } -} - -func (b *builder) addToRegistry(key string, entry *ianaEntry) { - if info, ok := b.registry[key]; ok { - if info.typ != "language" || entry.typ != "extlang" { - log.Fatalf("parseRegistry: tag %q already exists", key) - } - } else { - b.registry[key] = entry - } -} - -var commentIndex = make(map[string]string) - -func init() { - for _, s := range comment { - key := strings.TrimSpace(strings.SplitN(s, " ", 2)[0]) - commentIndex[key] = s - } -} - -func (b *builder) comment(name string) { - if s := commentIndex[name]; len(s) > 0 { - b.w.WriteComment(s) - } else { - fmt.Fprintln(b.w) - } -} - -func (b *builder) pf(f string, x ...interface{}) { - fmt.Fprintf(b.hw, f, x...) - fmt.Fprint(b.hw, "\n") -} - -func (b *builder) p(x ...interface{}) { - fmt.Fprintln(b.hw, x...) -} - -func (b *builder) addSize(s int) { - b.w.Size += s - b.pf("// Size: %d bytes", s) -} - -func (b *builder) writeConst(name string, x interface{}) { - b.comment(name) - b.w.WriteConst(name, x) -} - -// writeConsts computes f(v) for all v in values and writes the results -// as constants named _v to a single constant block. -func (b *builder) writeConsts(f func(string) int, values ...string) { - b.pf("const (") - for _, v := range values { - b.pf("\t_%s = %v", v, f(v)) - } - b.pf(")") -} - -// writeType writes the type of the given value, which must be a struct. 
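parseRegistry above consumes the IANA Language Subtag Registry as a token stream; the file itself is a series of `Key: value` records separated by `%%` lines. A simplified, self-contained sketch of that record format, parsed from an abridged in-memory sample (real records may continue a Description over several lines, which parseRegistry handles and this sketch ignores):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// Two abridged records in the registry's format.
const sample = `%%
Type: language
Subtag: aa
Description: Afar
Added: 2005-10-16
%%
Type: language
Subtag: ab
Description: Abkhazian
Suppress-Script: Cyrl
Added: 2005-10-16
`

func main() {
	var records []map[string]string
	var cur map[string]string
	scan := bufio.NewScanner(strings.NewReader(sample))
	for scan.Scan() {
		line := scan.Text()
		if line == "%%" { // record separator
			cur = map[string]string{}
			records = append(records, cur)
			continue
		}
		if k, v, ok := strings.Cut(line, ": "); ok && cur != nil {
			cur[k] = v
		}
	}
	for _, r := range records {
		fmt.Printf("%s (%s) suppress-script=%q\n",
			r["Subtag"], r["Description"], r["Suppress-Script"])
	}
}
```

The Suppress-Script field is what ends up in the suppressScript table written later in this file.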
-func (b *builder) writeType(value interface{}) { - b.comment(reflect.TypeOf(value).Name()) - b.w.WriteType(value) -} - -func (b *builder) writeSlice(name string, ss interface{}) { - b.writeSliceAddSize(name, 0, ss) -} - -func (b *builder) writeSliceAddSize(name string, extraSize int, ss interface{}) { - b.comment(name) - b.w.Size += extraSize - v := reflect.ValueOf(ss) - t := v.Type().Elem() - b.pf("// Size: %d bytes, %d elements", v.Len()*int(t.Size())+extraSize, v.Len()) - - fmt.Fprintf(b.w, "var %s = ", name) - b.w.WriteArray(ss) - b.p() -} - -type FromTo struct { - From, To uint16 -} - -func (b *builder) writeSortedMap(name string, ss *stringSet, index func(s string) uint16) { - ss.sortFunc(func(a, b string) bool { - return index(a) < index(b) - }) - m := []FromTo{} - for _, s := range ss.s { - m = append(m, FromTo{index(s), index(ss.update[s])}) - } - b.writeSlice(name, m) -} - -const base = 'z' - 'a' + 1 - -func strToInt(s string) uint { - v := uint(0) - for i := 0; i < len(s); i++ { - v *= base - v += uint(s[i] - 'a') - } - return v -} - -// converts the given integer to the original ASCII string passed to strToInt. -// len(s) must match the number of characters obtained. -func intToStr(v uint, s []byte) { - for i := len(s) - 1; i >= 0; i-- { - s[i] = byte(v%base) + 'a' - v /= base - } -} - -func (b *builder) writeBitVector(name string, ss []string) { - vec := make([]uint8, int(math.Ceil(math.Pow(base, float64(len(ss[0])))/8))) - for _, s := range ss { - v := strToInt(s) - vec[v/8] |= 1 << (v % 8) - } - b.writeSlice(name, vec) -} - -// TODO: convert this type into a list or two-stage trie. -func (b *builder) writeMapFunc(name string, m map[string]string, f func(string) uint16) { - b.comment(name) - v := reflect.ValueOf(m) - sz := v.Len() * (2 + int(v.Type().Key().Size())) - for _, k := range m { - sz += len(k) - } - b.addSize(sz) - keys := []string{} - b.pf(`var %s = map[string]uint16{`, name) - for k := range m { - keys = append(keys, k) - } - sort.Strings(keys) - for _, k := range keys { - b.pf("\t%q: %v,", k, f(m[k])) - } - b.p("}") -} - -func (b *builder) writeMap(name string, m interface{}) { - b.comment(name) - v := reflect.ValueOf(m) - sz := v.Len() * (2 + int(v.Type().Key().Size()) + int(v.Type().Elem().Size())) - b.addSize(sz) - f := strings.FieldsFunc(fmt.Sprintf("%#v", m), func(r rune) bool { - return strings.IndexRune("{}, ", r) != -1 - }) - sort.Strings(f[1:]) - b.pf(`var %s = %s{`, name, f[0]) - for _, kv := range f[1:] { - b.pf("\t%s,", kv) - } - b.p("}") -} - -func (b *builder) langIndex(s string) uint16 { - if s == "und" { - return 0 - } - if i, ok := b.lang.find(s); ok { - return uint16(i) - } - return uint16(strToInt(s)) + uint16(len(b.lang.s)) -} - -// inc advances the string to its lexicographical successor. -func inc(s string) string { - const maxTagLength = 4 - var buf [maxTagLength]byte - intToStr(strToInt(strings.ToLower(s))+1, buf[:len(s)]) - for i := 0; i < len(s); i++ { - if s[i] <= 'Z' { - buf[i] -= 'a' - 'A' - } - } - return string(buf[:len(s)]) -} - -func (b *builder) parseIndices() { - meta := b.supp.Metadata - - for k, v := range b.registry { - var ss *stringSet - switch v.typ { - case "language": - if len(k) == 2 || v.suppressScript != "" || v.scope == "special" { - b.lang.add(k) - continue - } else { - ss = &b.langNoIndex - } - case "region": - ss = &b.region - case "script": - ss = &b.script - case "variant": - ss = &b.variant - default: - continue - } - ss.add(k) - } - // Include any language for which there is data. 
- for _, lang := range b.data.Locales() { - if x := b.data.RawLDML(lang); false || - x.LocaleDisplayNames != nil || - x.Characters != nil || - x.Delimiters != nil || - x.Measurement != nil || - x.Dates != nil || - x.Numbers != nil || - x.Units != nil || - x.ListPatterns != nil || - x.Collations != nil || - x.Segmentations != nil || - x.Rbnf != nil || - x.Annotations != nil || - x.Metadata != nil { - - from := strings.Split(lang, "_") - if lang := from[0]; lang != "root" { - b.lang.add(lang) - } - } - } - // Include locales for plural rules, which uses a different structure. - for _, plurals := range b.data.Supplemental().Plurals { - for _, rules := range plurals.PluralRules { - for _, lang := range strings.Split(rules.Locales, " ") { - if lang = strings.Split(lang, "_")[0]; lang != "root" { - b.lang.add(lang) - } - } - } - } - // Include languages in likely subtags. - for _, m := range b.supp.LikelySubtags.LikelySubtag { - from := strings.Split(m.From, "_") - b.lang.add(from[0]) - } - // Include ISO-639 alpha-3 bibliographic entries. - for _, a := range meta.Alias.LanguageAlias { - if a.Reason == "bibliographic" { - b.langNoIndex.add(a.Type) - } - } - // Include regions in territoryAlias (not all are in the IANA registry!) - for _, reg := range b.supp.Metadata.Alias.TerritoryAlias { - if len(reg.Type) == 2 { - b.region.add(reg.Type) - } - } - - for _, s := range b.lang.s { - if len(s) == 3 { - b.langNoIndex.remove(s) - } - } - b.writeConst("NumLanguages", len(b.lang.slice())+len(b.langNoIndex.slice())) - b.writeConst("NumScripts", len(b.script.slice())) - b.writeConst("NumRegions", len(b.region.slice())) - - // Add dummy codes at the start of each list to represent "unspecified". - b.lang.add("---") - b.script.add("----") - b.region.add("---") - - // common locales - b.locale.parse(meta.DefaultContent.Locales) -} - -// TODO: region inclusion data will probably not be use used in future matchers. - -func (b *builder) computeRegionGroups() { - b.groups = make(map[int]index) - - // Create group indices. - for i := 1; b.region.s[i][0] < 'A'; i++ { // Base M49 indices on regionID. - b.groups[i] = index(len(b.groups)) - } - for _, g := range b.supp.TerritoryContainment.Group { - // Skip UN and EURO zone as they are flattening the containment - // relationship. - if g.Type == "EZ" || g.Type == "UN" { - continue - } - group := b.region.index(g.Type) - if _, ok := b.groups[group]; !ok { - b.groups[group] = index(len(b.groups)) - } - } - if len(b.groups) > 64 { - log.Fatalf("only 64 groups supported, found %d", len(b.groups)) - } - b.writeConst("nRegionGroups", len(b.groups)) -} - -var langConsts = []string{ - "af", "am", "ar", "az", "bg", "bn", "ca", "cs", "da", "de", "el", "en", "es", - "et", "fa", "fi", "fil", "fr", "gu", "he", "hi", "hr", "hu", "hy", "id", "is", - "it", "ja", "ka", "kk", "km", "kn", "ko", "ky", "lo", "lt", "lv", "mk", "ml", - "mn", "mo", "mr", "ms", "mul", "my", "nb", "ne", "nl", "no", "pa", "pl", "pt", - "ro", "ru", "sh", "si", "sk", "sl", "sq", "sr", "sv", "sw", "ta", "te", "th", - "tl", "tn", "tr", "uk", "ur", "uz", "vi", "zh", "zu", - - // constants for grandfathered tags (if not already defined) - "jbo", "ami", "bnn", "hak", "tlh", "lb", "nv", "pwn", "tao", "tay", "tsu", - "nn", "sfb", "vgt", "sgg", "cmn", "nan", "hsn", -} - -// writeLanguage generates all tables needed for language canonicalization. 
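langIndex above falls back to packing any unindexed 3-letter code straight into an integer, base 26, and inc uses the same packing to step through registry ranges such as `qaa..qtz`. A lightly adapted round trip of that scheme (intToStr here returns a string rather than filling a caller-supplied buffer, and the original inc's uppercase restoration is omitted):

```go
package main

import (
	"fmt"
	"strings"
)

const base = 'z' - 'a' + 1 // 26

// strToInt interprets a lowercase ASCII code as a base-26 number.
func strToInt(s string) uint {
	v := uint(0)
	for i := 0; i < len(s); i++ {
		v = v*base + uint(s[i]-'a')
	}
	return v
}

// intToStr is the inverse, writing v as an n-letter code.
func intToStr(v uint, n int) string {
	b := make([]byte, n)
	for i := n - 1; i >= 0; i-- {
		b[i] = byte(v%base) + 'a'
		v /= base
	}
	return string(b)
}

// inc returns the lexicographical successor of a lowercase code.
func inc(s string) string {
	return intToStr(strToInt(strings.ToLower(s))+1, len(s))
}

func main() {
	fmt.Println(intToStr(strToInt("qaa"), 3)) // round-trips to "qaa"
	// The first few private-use language codes in qaa..qtz:
	for s, i := "qaa", 0; i < 3; s, i = inc(s), i+1 {
		fmt.Println(s)
	}
}
```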
-func (b *builder) writeLanguage() { - meta := b.supp.Metadata - - b.writeConst("nonCanonicalUnd", b.lang.index("und")) - b.writeConsts(func(s string) int { return int(b.langIndex(s)) }, langConsts...) - b.writeConst("langPrivateStart", b.langIndex("qaa")) - b.writeConst("langPrivateEnd", b.langIndex("qtz")) - - // Get language codes that need to be mapped (overlong 3-letter codes, - // deprecated 2-letter codes, legacy and grandfathered tags.) - langAliasMap := stringSet{} - aliasTypeMap := map[string]AliasType{} - - // altLangISO3 get the alternative ISO3 names that need to be mapped. - altLangISO3 := stringSet{} - // Add dummy start to avoid the use of index 0. - altLangISO3.add("---") - altLangISO3.updateLater("---", "aa") - - lang := b.lang.clone() - for _, a := range meta.Alias.LanguageAlias { - if a.Replacement == "" { - a.Replacement = "und" - } - // TODO: support mapping to tags - repl := strings.SplitN(a.Replacement, "_", 2)[0] - if a.Reason == "overlong" { - if len(a.Replacement) == 2 && len(a.Type) == 3 { - lang.updateLater(a.Replacement, a.Type) - } - } else if len(a.Type) <= 3 { - switch a.Reason { - case "macrolanguage": - aliasTypeMap[a.Type] = Macro - case "deprecated": - // handled elsewhere - continue - case "bibliographic", "legacy": - if a.Type == "no" { - continue - } - aliasTypeMap[a.Type] = Legacy - default: - log.Fatalf("new %s alias: %s", a.Reason, a.Type) - } - langAliasMap.add(a.Type) - langAliasMap.updateLater(a.Type, repl) - } - } - // Manually add the mapping of "nb" (Norwegian) to its macro language. - // This can be removed if CLDR adopts this change. - langAliasMap.add("nb") - langAliasMap.updateLater("nb", "no") - aliasTypeMap["nb"] = Macro - - for k, v := range b.registry { - // Also add deprecated values for 3-letter ISO codes, which CLDR omits. - if v.typ == "language" && v.deprecated != "" && v.preferred != "" { - langAliasMap.add(k) - langAliasMap.updateLater(k, v.preferred) - aliasTypeMap[k] = Deprecated - } - } - // Fix CLDR mappings. - lang.updateLater("tl", "tgl") - lang.updateLater("sh", "hbs") - lang.updateLater("mo", "mol") - lang.updateLater("no", "nor") - lang.updateLater("tw", "twi") - lang.updateLater("nb", "nob") - lang.updateLater("ak", "aka") - lang.updateLater("bh", "bih") - - // Ensure that each 2-letter code is matched with a 3-letter code. - for _, v := range lang.s[1:] { - s, ok := lang.update[v] - if !ok { - if s, ok = lang.update[langAliasMap.update[v]]; !ok { - continue - } - lang.update[v] = s - } - if v[0] != s[0] { - altLangISO3.add(s) - altLangISO3.updateLater(s, v) - } - } - - // Complete canonicalized language tags. - lang.freeze() - for i, v := range lang.s { - // We can avoid these manual entries by using the IANA registry directly. - // Seems easier to update the list manually, as changes are rare. - // The panic in this loop will trigger if we miss an entry. - add := "" - if s, ok := lang.update[v]; ok { - if s[0] == v[0] { - add = s[1:] - } else { - add = string([]byte{0, byte(altLangISO3.index(s))}) - } - } else if len(v) == 3 { - add = "\x00" - } else { - log.Panicf("no data for long form of %q", v) - } - lang.s[i] += add - } - b.writeConst("lang", tag.Index(lang.join())) - - b.writeConst("langNoIndexOffset", len(b.lang.s)) - - // space of all valid 3-letter language identifiers. 
- b.writeBitVector("langNoIndex", b.langNoIndex.slice()) - - altLangIndex := []uint16{} - for i, s := range altLangISO3.slice() { - altLangISO3.s[i] += string([]byte{byte(len(altLangIndex))}) - if i > 0 { - idx := b.lang.index(altLangISO3.update[s]) - altLangIndex = append(altLangIndex, uint16(idx)) - } - } - b.writeConst("altLangISO3", tag.Index(altLangISO3.join())) - b.writeSlice("altLangIndex", altLangIndex) - - b.writeSortedMap("AliasMap", &langAliasMap, b.langIndex) - types := make([]AliasType, len(langAliasMap.s)) - for i, s := range langAliasMap.s { - types[i] = aliasTypeMap[s] - } - b.writeSlice("AliasTypes", types) -} - -var scriptConsts = []string{ - "Latn", "Hani", "Hans", "Hant", "Qaaa", "Qaai", "Qabx", "Zinh", "Zyyy", - "Zzzz", -} - -func (b *builder) writeScript() { - b.writeConsts(b.script.index, scriptConsts...) - b.writeConst("script", tag.Index(b.script.join())) - - supp := make([]uint8, len(b.lang.slice())) - for i, v := range b.lang.slice()[1:] { - if sc := b.registry[v].suppressScript; sc != "" { - supp[i+1] = uint8(b.script.index(sc)) - } - } - b.writeSlice("suppressScript", supp) - - // There is only one deprecated script in CLDR. This value is hard-coded. - // We check here if the code must be updated. - for _, a := range b.supp.Metadata.Alias.ScriptAlias { - if a.Type != "Qaai" { - log.Panicf("unexpected deprecated stript %q", a.Type) - } - } -} - -func parseM49(s string) int16 { - if len(s) == 0 { - return 0 - } - v, err := strconv.ParseUint(s, 10, 10) - failOnError(err) - return int16(v) -} - -var regionConsts = []string{ - "001", "419", "BR", "CA", "ES", "GB", "MD", "PT", "UK", "US", - "ZZ", "XA", "XC", "XK", // Unofficial tag for Kosovo. -} - -func (b *builder) writeRegion() { - b.writeConsts(b.region.index, regionConsts...) - - isoOffset := b.region.index("AA") - m49map := make([]int16, len(b.region.slice())) - fromM49map := make(map[int16]int) - altRegionISO3 := "" - altRegionIDs := []uint16{} - - b.writeConst("isoRegionOffset", isoOffset) - - // 2-letter region lookup and mapping to numeric codes. - regionISO := b.region.clone() - regionISO.s = regionISO.s[isoOffset:] - regionISO.sorted = false - - regionTypes := make([]byte, len(b.region.s)) - - // Is the region valid BCP 47? - for s, e := range b.registry { - if len(s) == 2 && s == strings.ToUpper(s) { - i := b.region.index(s) - for _, d := range e.description { - if strings.Contains(d, "Private use") { - regionTypes[i] = iso3166UserAssigned - } - } - regionTypes[i] |= bcp47Region - } - } - - // Is the region a valid ccTLD? 
- r := gen.OpenIANAFile("domains/root/db") - defer r.Close() - - buf, err := ioutil.ReadAll(r) - failOnError(err) - re := regexp.MustCompile(`"/domains/root/db/([a-z]{2}).html"`) - for _, m := range re.FindAllSubmatch(buf, -1) { - i := b.region.index(strings.ToUpper(string(m[1]))) - regionTypes[i] |= ccTLD - } - - b.writeSlice("regionTypes", regionTypes) - - iso3Set := make(map[string]int) - update := func(iso2, iso3 string) { - i := regionISO.index(iso2) - if j, ok := iso3Set[iso3]; !ok && iso3[0] == iso2[0] { - regionISO.s[i] += iso3[1:] - iso3Set[iso3] = -1 - } else { - if ok && j >= 0 { - regionISO.s[i] += string([]byte{0, byte(j)}) - } else { - iso3Set[iso3] = len(altRegionISO3) - regionISO.s[i] += string([]byte{0, byte(len(altRegionISO3))}) - altRegionISO3 += iso3 - altRegionIDs = append(altRegionIDs, uint16(isoOffset+i)) - } - } - } - for _, tc := range b.supp.CodeMappings.TerritoryCodes { - i := regionISO.index(tc.Type) + isoOffset - if d := m49map[i]; d != 0 { - log.Panicf("%s found as a duplicate UN.M49 code of %03d", tc.Numeric, d) - } - m49 := parseM49(tc.Numeric) - m49map[i] = m49 - if r := fromM49map[m49]; r == 0 { - fromM49map[m49] = i - } else if r != i { - dep := b.registry[regionISO.s[r-isoOffset]].deprecated - if t := b.registry[tc.Type]; t != nil && dep != "" && (t.deprecated == "" || t.deprecated > dep) { - fromM49map[m49] = i - } - } - } - for _, ta := range b.supp.Metadata.Alias.TerritoryAlias { - if len(ta.Type) == 3 && ta.Type[0] <= '9' && len(ta.Replacement) == 2 { - from := parseM49(ta.Type) - if r := fromM49map[from]; r == 0 { - fromM49map[from] = regionISO.index(ta.Replacement) + isoOffset - } - } - } - for _, tc := range b.supp.CodeMappings.TerritoryCodes { - if len(tc.Alpha3) == 3 { - update(tc.Type, tc.Alpha3) - } - } - // This entries are not included in territoryCodes. Mostly 3-letter variants - // of deleted codes and an entry for QU. - for _, m := range []struct{ iso2, iso3 string }{ - {"CT", "CTE"}, - {"DY", "DHY"}, - {"HV", "HVO"}, - {"JT", "JTN"}, - {"MI", "MID"}, - {"NH", "NHB"}, - {"NQ", "ATN"}, - {"PC", "PCI"}, - {"PU", "PUS"}, - {"PZ", "PCZ"}, - {"RH", "RHO"}, - {"VD", "VDR"}, - {"WK", "WAK"}, - // These three-letter codes are used for others as well. - {"FQ", "ATF"}, - } { - update(m.iso2, m.iso3) - } - for i, s := range regionISO.s { - if len(s) != 4 { - regionISO.s[i] = s + " " - } - } - b.writeConst("regionISO", tag.Index(regionISO.join())) - b.writeConst("altRegionISO3", altRegionISO3) - b.writeSlice("altRegionIDs", altRegionIDs) - - // Create list of deprecated regions. - // TODO: consider inserting SF -> FI. Not included by CLDR, but is the only - // Transitionally-reserved mapping not included. - regionOldMap := stringSet{} - // Include regions in territoryAlias (not all are in the IANA registry!) - for _, reg := range b.supp.Metadata.Alias.TerritoryAlias { - if len(reg.Type) == 2 && reg.Reason == "deprecated" && len(reg.Replacement) == 2 { - regionOldMap.add(reg.Type) - regionOldMap.updateLater(reg.Type, reg.Replacement) - i, _ := regionISO.find(reg.Type) - j, _ := regionISO.find(reg.Replacement) - if k := m49map[i+isoOffset]; k == 0 { - m49map[i+isoOffset] = m49map[j+isoOffset] - } - } - } - b.writeSortedMap("regionOldMap", ®ionOldMap, func(s string) uint16 { - return uint16(b.region.index(s)) - }) - // 3-digit region lookup, groupings. 
-	for i := 1; i < isoOffset; i++ {
-		m := parseM49(b.region.s[i])
-		m49map[i] = m
-		fromM49map[m] = i
-	}
-	b.writeSlice("m49", m49map)
-
-	const (
-		searchBits = 7
-		regionBits = 9
-	)
-	if len(m49map) >= 1<<regionBits {
-		log.Fatalf("Maximum number of regions exceeded: %d > %d", len(m49map), 1<<regionBits)
-	}
-	m49Index := [9]int16{}
-	fromM49 := []uint16{}
-	m49 := []int{}
-	for k := range fromM49map {
-		m49 = append(m49, int(k))
-	}
-	sort.Ints(m49)
-	for _, k := range m49 {
-		val := (k & (1<<searchBits - 1)) << regionBits
-		fromM49 = append(fromM49, uint16(val|fromM49map[int16(k)]))
-		m49Index[1:][k>>searchBits] = int16(len(fromM49))
-	}
-	b.writeSlice("m49Index", m49Index)
-	b.writeSlice("fromM49", fromM49)
-}
-
-const (
-	// TODO: put these lists in regionTypes as user data? Could be used for
-	// various optimizations and refinements and could be exposed in the API.
-	iso3166Except = "AC CP DG EA EU FX IC SU TA UK"
-	iso3166Trans  = "AN BU CS NT TP YU ZR" // SF is not in our set of Regions.
-	// DY and RH are actually not deleted, but indeterminately reserved.
-	iso3166DelCLDR = "CT DD DY FQ HV JT MI NH NQ PC PU PZ RH VD WK YD"
-)
-
-const (
-	iso3166UserAssigned = 1 << iota
-	ccTLD
-	bcp47Region
-)
-
-func find(list []string, s string) int {
-	for i, t := range list {
-		if t == s {
-			return i
-		}
-	}
-	return -1
-}
-
-// writeVariants generates per-variant information and creates a map from variant
-// name to index value. We assign index values such that sorting multiple
-// variants by index value will result in the correct order.
-// There are two types of variants: specialized and general. Specialized variants
-// are only applicable to certain language or language-script pairs. Generalized
-// variants apply to any language. Generalized variants always sort after
-// specialized variants. We will therefore always assign a higher index value
-// to a generalized variant than any other variant. Generalized variants are
-// sorted alphabetically among themselves.
-// Specialized variants may also sort after other specialized variants. Such
-// variants will be ordered after any of the variants they may follow.
-// We assume that if a variant x is followed by a variant y, then for any prefix
-// p of x, p-x is a prefix of y. This allows us to order tags based on the
-// maximum of the length of any of its prefixes.
-// TODO: it is possible to define a set of Prefix values on variants such that
-// a total order cannot be defined to the point that this algorithm breaks.
-// In other words, we cannot guarantee the same order of variants for the
-// future using the same algorithm or for non-compliant combinations of
-// variants. For this reason, consider using simple alphabetic sorting
-// of variants and ignore Prefix restrictions altogether.
-func (b *builder) writeVariant() {
-	generalized := stringSet{}
-	specialized := stringSet{}
-	specializedExtend := stringSet{}
-	// Collate the variants by type and check assumptions.
-	for _, v := range b.variant.slice() {
-		e := b.registry[v]
-		if len(e.prefix) == 0 {
-			generalized.add(v)
-			continue
-		}
-		c := strings.Split(e.prefix[0], "-")
-		hasScriptOrRegion := false
-		if len(c) > 1 {
-			_, hasScriptOrRegion = b.script.find(c[1])
-			if !hasScriptOrRegion {
-				_, hasScriptOrRegion = b.region.find(c[1])
-
-			}
-		}
-		if len(c) == 1 || len(c) == 2 && hasScriptOrRegion {
-			// Variant is preceded by a language.
-			specialized.add(v)
-			continue
-		}
-		// Variant is preceded by another variant.
-		specializedExtend.add(v)
-		prefix := c[0] + "-"
-		if hasScriptOrRegion {
-			prefix += c[1]
-		}
-		for _, p := range e.prefix {
-			// Verify that the prefix minus the last element is a prefix of the
-			// predecessor element.
- i := strings.LastIndex(p, "-") - pred := b.registry[p[i+1:]] - if find(pred.prefix, p[:i]) < 0 { - log.Fatalf("prefix %q for variant %q not consistent with predecessor spec", p, v) - } - // The sorting used below does not work in the general case. It works - // if we assume that variants that may be followed by others only have - // prefixes of the same length. Verify this. - count := strings.Count(p[:i], "-") - for _, q := range pred.prefix { - if c := strings.Count(q, "-"); c != count { - log.Fatalf("variant %q preceding %q has a prefix %q of size %d; want %d", p[i+1:], v, q, c, count) - } - } - if !strings.HasPrefix(p, prefix) { - log.Fatalf("prefix %q of variant %q should start with %q", p, v, prefix) - } - } - } - - // Sort extended variants. - a := specializedExtend.s - less := func(v, w string) bool { - // Sort by the maximum number of elements. - maxCount := func(s string) (max int) { - for _, p := range b.registry[s].prefix { - if c := strings.Count(p, "-"); c > max { - max = c - } - } - return - } - if cv, cw := maxCount(v), maxCount(w); cv != cw { - return cv < cw - } - // Sort by name as tie breaker. - return v < w - } - sort.Sort(funcSorter{less, sort.StringSlice(a)}) - specializedExtend.frozen = true - - // Create index from variant name to index. - variantIndex := make(map[string]uint8) - add := func(s []string) { - for _, v := range s { - variantIndex[v] = uint8(len(variantIndex)) - } - } - add(specialized.slice()) - add(specializedExtend.s) - numSpecialized := len(variantIndex) - add(generalized.slice()) - if n := len(variantIndex); n > 255 { - log.Fatalf("maximum number of variants exceeded: was %d; want <= 255", n) - } - b.writeMap("variantIndex", variantIndex) - b.writeConst("variantNumSpecialized", numSpecialized) -} - -func (b *builder) writeLanguageInfo() { -} - -// writeLikelyData writes tables that are used both for finding parent relations and for -// language matching. Each entry contains additional bits to indicate the status of the -// data to know when it cannot be used for parent relations. -func (b *builder) writeLikelyData() { - const ( - isList = 1 << iota - scriptInFrom - regionInFrom - ) - type ( // generated types - likelyScriptRegion struct { - region uint16 - script uint8 - flags uint8 - } - likelyLangScript struct { - lang uint16 - script uint8 - flags uint8 - } - likelyLangRegion struct { - lang uint16 - region uint16 - } - // likelyTag is used for getting likely tags for group regions, where - // the likely region might be a region contained in the group. 
- likelyTag struct { - lang uint16 - region uint16 - script uint8 - } - ) - var ( // generated variables - likelyRegionGroup = make([]likelyTag, len(b.groups)) - likelyLang = make([]likelyScriptRegion, len(b.lang.s)) - likelyRegion = make([]likelyLangScript, len(b.region.s)) - likelyScript = make([]likelyLangRegion, len(b.script.s)) - likelyLangList = []likelyScriptRegion{} - likelyRegionList = []likelyLangScript{} - ) - type fromTo struct { - from, to []string - } - langToOther := map[int][]fromTo{} - regionToOther := map[int][]fromTo{} - for _, m := range b.supp.LikelySubtags.LikelySubtag { - from := strings.Split(m.From, "_") - to := strings.Split(m.To, "_") - if len(to) != 3 { - log.Fatalf("invalid number of subtags in %q: found %d, want 3", m.To, len(to)) - } - if len(from) > 3 { - log.Fatalf("invalid number of subtags: found %d, want 1-3", len(from)) - } - if from[0] != to[0] && from[0] != "und" { - log.Fatalf("unexpected language change in expansion: %s -> %s", from, to) - } - if len(from) == 3 { - if from[2] != to[2] { - log.Fatalf("unexpected region change in expansion: %s -> %s", from, to) - } - if from[0] != "und" { - log.Fatalf("unexpected fully specified from tag: %s -> %s", from, to) - } - } - if len(from) == 1 || from[0] != "und" { - id := 0 - if from[0] != "und" { - id = b.lang.index(from[0]) - } - langToOther[id] = append(langToOther[id], fromTo{from, to}) - } else if len(from) == 2 && len(from[1]) == 4 { - sid := b.script.index(from[1]) - likelyScript[sid].lang = uint16(b.langIndex(to[0])) - likelyScript[sid].region = uint16(b.region.index(to[2])) - } else { - r := b.region.index(from[len(from)-1]) - if id, ok := b.groups[r]; ok { - if from[0] != "und" { - log.Fatalf("region changed unexpectedly: %s -> %s", from, to) - } - likelyRegionGroup[id].lang = uint16(b.langIndex(to[0])) - likelyRegionGroup[id].script = uint8(b.script.index(to[1])) - likelyRegionGroup[id].region = uint16(b.region.index(to[2])) - } else { - regionToOther[r] = append(regionToOther[r], fromTo{from, to}) - } - } - } - b.writeType(likelyLangRegion{}) - b.writeSlice("likelyScript", likelyScript) - - for id := range b.lang.s { - list := langToOther[id] - if len(list) == 1 { - likelyLang[id].region = uint16(b.region.index(list[0].to[2])) - likelyLang[id].script = uint8(b.script.index(list[0].to[1])) - } else if len(list) > 1 { - likelyLang[id].flags = isList - likelyLang[id].region = uint16(len(likelyLangList)) - likelyLang[id].script = uint8(len(list)) - for _, x := range list { - flags := uint8(0) - if len(x.from) > 1 { - if x.from[1] == x.to[2] { - flags = regionInFrom - } else { - flags = scriptInFrom - } - } - likelyLangList = append(likelyLangList, likelyScriptRegion{ - region: uint16(b.region.index(x.to[2])), - script: uint8(b.script.index(x.to[1])), - flags: flags, - }) - } - } - } - // TODO: merge suppressScript data with this table. 
- b.writeType(likelyScriptRegion{}) - b.writeSlice("likelyLang", likelyLang) - b.writeSlice("likelyLangList", likelyLangList) - - for id := range b.region.s { - list := regionToOther[id] - if len(list) == 1 { - likelyRegion[id].lang = uint16(b.langIndex(list[0].to[0])) - likelyRegion[id].script = uint8(b.script.index(list[0].to[1])) - if len(list[0].from) > 2 { - likelyRegion[id].flags = scriptInFrom - } - } else if len(list) > 1 { - likelyRegion[id].flags = isList - likelyRegion[id].lang = uint16(len(likelyRegionList)) - likelyRegion[id].script = uint8(len(list)) - for i, x := range list { - if len(x.from) == 2 && i != 0 || i > 0 && len(x.from) != 3 { - log.Fatalf("unspecified script must be first in list: %v at %d", x.from, i) - } - x := likelyLangScript{ - lang: uint16(b.langIndex(x.to[0])), - script: uint8(b.script.index(x.to[1])), - } - if len(list[0].from) > 2 { - x.flags = scriptInFrom - } - likelyRegionList = append(likelyRegionList, x) - } - } - } - b.writeType(likelyLangScript{}) - b.writeSlice("likelyRegion", likelyRegion) - b.writeSlice("likelyRegionList", likelyRegionList) - - b.writeType(likelyTag{}) - b.writeSlice("likelyRegionGroup", likelyRegionGroup) -} - -func (b *builder) writeRegionInclusionData() { - var ( - // mm holds for each group the set of groups with a distance of 1. - mm = make(map[int][]index) - - // containment holds for each group the transitive closure of - // containment of other groups. - containment = make(map[index][]index) - ) - for _, g := range b.supp.TerritoryContainment.Group { - // Skip UN and EURO zone as they are flattening the containment - // relationship. - if g.Type == "EZ" || g.Type == "UN" { - continue - } - group := b.region.index(g.Type) - groupIdx := b.groups[group] - for _, mem := range strings.Split(g.Contains, " ") { - r := b.region.index(mem) - mm[r] = append(mm[r], groupIdx) - if g, ok := b.groups[r]; ok { - mm[group] = append(mm[group], g) - containment[groupIdx] = append(containment[groupIdx], g) - } - } - } - - regionContainment := make([]uint64, len(b.groups)) - for _, g := range b.groups { - l := containment[g] - - // Compute the transitive closure of containment. - for i := 0; i < len(l); i++ { - l = append(l, containment[l[i]]...) - } - - // Compute the bitmask. - regionContainment[g] = 1 << g - for _, v := range l { - regionContainment[g] |= 1 << v - } - } - b.writeSlice("regionContainment", regionContainment) - - regionInclusion := make([]uint8, len(b.region.s)) - bvs := make(map[uint64]index) - // Make the first bitvector positions correspond with the groups. - for r, i := range b.groups { - bv := uint64(1 << i) - for _, g := range mm[r] { - bv |= 1 << g - } - bvs[bv] = i - regionInclusion[r] = uint8(bvs[bv]) - } - for r := 1; r < len(b.region.s); r++ { - if _, ok := b.groups[r]; !ok { - bv := uint64(0) - for _, g := range mm[r] { - bv |= 1 << g - } - if bv == 0 { - // Pick the world for unspecified regions. - bv = 1 << b.groups[b.region.index("001")] - } - if _, ok := bvs[bv]; !ok { - bvs[bv] = index(len(bvs)) - } - regionInclusion[r] = uint8(bvs[bv]) - } - } - b.writeSlice("regionInclusion", regionInclusion) - regionInclusionBits := make([]uint64, len(bvs)) - for k, v := range bvs { - regionInclusionBits[v] = uint64(k) - } - // Add bit vectors for increasingly large distances until a fixed point is reached. 
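The comment just above announces the fixed-point step the generator performs next: starting from the direct containment masks, keep or-ing in the masks of every reachable group until the bit vectors stop changing. A toy version of that closure over an invented three-group graph (the real code additionally interns each new vector in bvs and emits regionInclusionNext):

```go
package main

import "fmt"

func main() {
	// Invented graph: bit i set in closure[g] means g can reach group i.
	// Group 1 is inside group 0; group 2 is inside group 1.
	closure := []uint64{
		0: 1 << 0,
		1: 1<<1 | 1<<0,
		2: 1<<2 | 1<<1,
	}
	// Union reachable masks until nothing changes (the fixed point).
	for changed := true; changed; {
		changed = false
		for g := range closure {
			next := closure[g]
			for i := uint(0); i < uint(len(closure)); i++ {
				if closure[g]&(1<<i) != 0 {
					next |= closure[i]
				}
			}
			if next != closure[g] {
				closure[g] = next
				changed = true
			}
		}
	}
	for g, bits := range closure {
		fmt.Printf("group %d reaches %03b\n", g, bits) // group 2 reaches 111
	}
}
```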
- regionInclusionNext := []uint8{} - for i := 0; i < len(regionInclusionBits); i++ { - bits := regionInclusionBits[i] - next := bits - for i := uint(0); i < uint(len(b.groups)); i++ { - if bits&(1< 6 { - log.Fatalf("Too many groups: %d", i) - } - idToIndex[mv.Id] = uint8(i + 1) - // TODO: also handle '-' - for _, r := range strings.Split(mv.Value, "+") { - todo := []string{r} - for k := 0; k < len(todo); k++ { - r := todo[k] - regionToGroups[b.regionIndex(r)] |= 1 << uint8(i) - todo = append(todo, regionHierarchy[r]...) - } - } - } - b.w.WriteVar("regionToGroups", regionToGroups) - - // maps language id to in- and out-of-group region. - paradigmLocales := [][3]uint16{} - locales := strings.Split(lm[0].ParadigmLocales[0].Locales, " ") - for i := 0; i < len(locales); i += 2 { - x := [3]uint16{} - for j := 0; j < 2; j++ { - pc := strings.SplitN(locales[i+j], "-", 2) - x[0] = b.langIndex(pc[0]) - if len(pc) == 2 { - x[1+j] = uint16(b.regionIndex(pc[1])) - } - } - paradigmLocales = append(paradigmLocales, x) - } - b.w.WriteVar("paradigmLocales", paradigmLocales) - - b.w.WriteType(mutualIntelligibility{}) - b.w.WriteType(scriptIntelligibility{}) - b.w.WriteType(regionIntelligibility{}) - - matchLang := []mutualIntelligibility{} - matchScript := []scriptIntelligibility{} - matchRegion := []regionIntelligibility{} - // Convert the languageMatch entries in lists keyed by desired language. - for _, m := range lm[0].LanguageMatch { - // Different versions of CLDR use different separators. - desired := strings.Replace(m.Desired, "-", "_", -1) - supported := strings.Replace(m.Supported, "-", "_", -1) - d := strings.Split(desired, "_") - s := strings.Split(supported, "_") - if len(d) != len(s) { - log.Fatalf("not supported: desired=%q; supported=%q", desired, supported) - continue - } - distance, _ := strconv.ParseInt(m.Distance, 10, 8) - switch len(d) { - case 2: - if desired == supported && desired == "*_*" { - continue - } - // language-script pair. - matchScript = append(matchScript, scriptIntelligibility{ - wantLang: uint16(b.langIndex(d[0])), - haveLang: uint16(b.langIndex(s[0])), - wantScript: uint8(b.scriptIndex(d[1])), - haveScript: uint8(b.scriptIndex(s[1])), - distance: uint8(distance), - }) - if m.Oneway != "true" { - matchScript = append(matchScript, scriptIntelligibility{ - wantLang: uint16(b.langIndex(s[0])), - haveLang: uint16(b.langIndex(d[0])), - wantScript: uint8(b.scriptIndex(s[1])), - haveScript: uint8(b.scriptIndex(d[1])), - distance: uint8(distance), - }) - } - case 1: - if desired == supported && desired == "*" { - continue - } - if distance == 1 { - // nb == no is already handled by macro mapping. Check there - // really is only this case. - if d[0] != "no" || s[0] != "nb" { - log.Fatalf("unhandled equivalence %s == %s", s[0], d[0]) - } - continue - } - // TODO: consider dropping oneway field and just doubling the entry. - matchLang = append(matchLang, mutualIntelligibility{ - want: uint16(b.langIndex(d[0])), - have: uint16(b.langIndex(s[0])), - distance: uint8(distance), - oneway: m.Oneway == "true", - }) - case 3: - if desired == supported && desired == "*_*_*" { - continue - } - if desired != supported { - // This is now supported by CLDR, but only one case, which - // should already be covered by paradigm locales. For instance, - // test case "und, en, en-GU, en-IN, en-GB ; en-ZA ; en-GB" in - // testdata/CLDRLocaleMatcherTest.txt tests this. 
- if supported != "en_*_GB" { - log.Fatalf("not supported: desired=%q; supported=%q", desired, supported) - } - continue - } - ri := regionIntelligibility{ - lang: b.langIndex(d[0]), - distance: uint8(distance), - } - if d[1] != "*" { - ri.script = uint8(b.scriptIndex(d[1])) - } - switch { - case d[2] == "*": - ri.group = 0x80 // not contained in anything - case strings.HasPrefix(d[2], "$!"): - ri.group = 0x80 - d[2] = "$" + d[2][len("$!"):] - fallthrough - case strings.HasPrefix(d[2], "$"): - ri.group |= idToIndex[d[2]] - } - matchRegion = append(matchRegion, ri) - default: - log.Fatalf("not supported: desired=%q; supported=%q", desired, supported) - } - } - sort.SliceStable(matchLang, func(i, j int) bool { - return matchLang[i].distance < matchLang[j].distance - }) - b.w.WriteComment(` - matchLang holds pairs of langIDs of base languages that are typically - mutually intelligible. Each pair is associated with a confidence and - whether the intelligibility goes one or both ways.`) - b.w.WriteVar("matchLang", matchLang) - - b.w.WriteComment(` - matchScript holds pairs of scriptIDs where readers of one script - can typically also read the other. Each is associated with a confidence.`) - sort.SliceStable(matchScript, func(i, j int) bool { - return matchScript[i].distance < matchScript[j].distance - }) - b.w.WriteVar("matchScript", matchScript) - - sort.SliceStable(matchRegion, func(i, j int) bool { - return matchRegion[i].distance < matchRegion[j].distance - }) - b.w.WriteVar("matchRegion", matchRegion) -} diff --git a/vendor/golang.org/x/text/unicode/bidi/gen.go b/vendor/golang.org/x/text/unicode/bidi/gen.go deleted file mode 100644 index 987fc169c..000000000 --- a/vendor/golang.org/x/text/unicode/bidi/gen.go +++ /dev/null @@ -1,133 +0,0 @@ -// Copyright 2015 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -package main - -import ( - "flag" - "log" - - "golang.org/x/text/internal/gen" - "golang.org/x/text/internal/triegen" - "golang.org/x/text/internal/ucd" -) - -var outputFile = flag.String("out", "tables.go", "output file") - -func main() { - gen.Init() - gen.Repackage("gen_trieval.go", "trieval.go", "bidi") - gen.Repackage("gen_ranges.go", "ranges_test.go", "bidi") - - genTables() -} - -// bidiClass names and codes taken from class "bc" in -// https://www.unicode.org/Public/8.0.0/ucd/PropertyValueAliases.txt -var bidiClass = map[string]Class{ - "AL": AL, // ArabicLetter - "AN": AN, // ArabicNumber - "B": B, // ParagraphSeparator - "BN": BN, // BoundaryNeutral - "CS": CS, // CommonSeparator - "EN": EN, // EuropeanNumber - "ES": ES, // EuropeanSeparator - "ET": ET, // EuropeanTerminator - "L": L, // LeftToRight - "NSM": NSM, // NonspacingMark - "ON": ON, // OtherNeutral - "R": R, // RightToLeft - "S": S, // SegmentSeparator - "WS": WS, // WhiteSpace - - "FSI": Control, - "PDF": Control, - "PDI": Control, - "LRE": Control, - "LRI": Control, - "LRO": Control, - "RLE": Control, - "RLI": Control, - "RLO": Control, -} - -func genTables() { - if numClass > 0x0F { - log.Fatalf("Too many Class constants (%#x > 0x0F).", numClass) - } - w := gen.NewCodeWriter() - defer w.WriteVersionedGoFile(*outputFile, "bidi") - - gen.WriteUnicodeVersion(w) - - t := triegen.NewTrie("bidi") - - // Build data about bracket mapping. These bits need to be or-ed with - // any other bits. - orMask := map[rune]uint64{} - - xorMap := map[rune]int{} - xorMasks := []rune{0} // First value is no-op. 
- - ucd.Parse(gen.OpenUCDFile("BidiBrackets.txt"), func(p *ucd.Parser) { - r1 := p.Rune(0) - r2 := p.Rune(1) - xor := r1 ^ r2 - if _, ok := xorMap[xor]; !ok { - xorMap[xor] = len(xorMasks) - xorMasks = append(xorMasks, xor) - } - entry := uint64(xorMap[xor]) << xorMaskShift - switch p.String(2) { - case "o": - entry |= openMask - case "c", "n": - default: - log.Fatalf("Unknown bracket class %q.", p.String(2)) - } - orMask[r1] = entry - }) - - w.WriteComment(` - xorMasks contains masks to be xor-ed with brackets to get the reverse - version.`) - w.WriteVar("xorMasks", xorMasks) - - done := map[rune]bool{} - - insert := func(r rune, c Class) { - if !done[r] { - t.Insert(r, orMask[r]|uint64(c)) - done[r] = true - } - } - - // Insert the derived BiDi properties. - ucd.Parse(gen.OpenUCDFile("extracted/DerivedBidiClass.txt"), func(p *ucd.Parser) { - r := p.Rune(0) - class, ok := bidiClass[p.String(1)] - if !ok { - log.Fatalf("%U: Unknown BiDi class %q", r, p.String(1)) - } - insert(r, class) - }) - visitDefaults(insert) - - // TODO: use sparse blocks. This would reduce table size considerably - // from the looks of it. - - sz, err := t.Gen(w) - if err != nil { - log.Fatal(err) - } - w.Size += sz -} - -// dummy values to make methods in gen_common compile. The real versions -// will be generated by this file to tables.go. -var ( - xorMasks []rune -) diff --git a/vendor/golang.org/x/text/unicode/bidi/gen_ranges.go b/vendor/golang.org/x/text/unicode/bidi/gen_ranges.go deleted file mode 100644 index 02c3b505d..000000000 --- a/vendor/golang.org/x/text/unicode/bidi/gen_ranges.go +++ /dev/null @@ -1,57 +0,0 @@ -// Copyright 2015 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -package main - -import ( - "unicode" - - "golang.org/x/text/internal/gen" - "golang.org/x/text/internal/ucd" - "golang.org/x/text/unicode/rangetable" -) - -// These tables are hand-extracted from: -// https://www.unicode.org/Public/8.0.0/ucd/extracted/DerivedBidiClass.txt -func visitDefaults(fn func(r rune, c Class)) { - // first write default values for ranges listed above. - visitRunes(fn, AL, []rune{ - 0x0600, 0x07BF, // Arabic - 0x08A0, 0x08FF, // Arabic Extended-A - 0xFB50, 0xFDCF, // Arabic Presentation Forms - 0xFDF0, 0xFDFF, - 0xFE70, 0xFEFF, - 0x0001EE00, 0x0001EEFF, // Arabic Mathematical Alpha Symbols - }) - visitRunes(fn, R, []rune{ - 0x0590, 0x05FF, // Hebrew - 0x07C0, 0x089F, // Nko et al. - 0xFB1D, 0xFB4F, - 0x00010800, 0x00010FFF, // Cypriot Syllabary et. al. - 0x0001E800, 0x0001EDFF, - 0x0001EF00, 0x0001EFFF, - }) - visitRunes(fn, ET, []rune{ // European Terminator - 0x20A0, 0x20Cf, // Currency symbols - }) - rangetable.Visit(unicode.Noncharacter_Code_Point, func(r rune) { - fn(r, BN) // Boundary Neutral - }) - ucd.Parse(gen.OpenUCDFile("DerivedCoreProperties.txt"), func(p *ucd.Parser) { - if p.String(1) == "Default_Ignorable_Code_Point" { - fn(p.Rune(0), BN) // Boundary Neutral - } - }) -} - -func visitRunes(fn func(r rune, c Class), c Class, runes []rune) { - for i := 0; i < len(runes); i += 2 { - lo, hi := runes[i], runes[i+1] - for j := lo; j <= hi; j++ { - fn(j, c) - } - } -} diff --git a/vendor/golang.org/x/text/unicode/bidi/gen_trieval.go b/vendor/golang.org/x/text/unicode/bidi/gen_trieval.go deleted file mode 100644 index 9cb994289..000000000 --- a/vendor/golang.org/x/text/unicode/bidi/gen_trieval.go +++ /dev/null @@ -1,64 +0,0 @@ -// Copyright 2015 The Go Authors. All rights reserved. 
-// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -package main - -// Class is the Unicode BiDi class. Each rune has a single class. -type Class uint - -const ( - L Class = iota // LeftToRight - R // RightToLeft - EN // EuropeanNumber - ES // EuropeanSeparator - ET // EuropeanTerminator - AN // ArabicNumber - CS // CommonSeparator - B // ParagraphSeparator - S // SegmentSeparator - WS // WhiteSpace - ON // OtherNeutral - BN // BoundaryNeutral - NSM // NonspacingMark - AL // ArabicLetter - Control // Control LRO - PDI - - numClass - - LRO // LeftToRightOverride - RLO // RightToLeftOverride - LRE // LeftToRightEmbedding - RLE // RightToLeftEmbedding - PDF // PopDirectionalFormat - LRI // LeftToRightIsolate - RLI // RightToLeftIsolate - FSI // FirstStrongIsolate - PDI // PopDirectionalIsolate - - unknownClass = ^Class(0) -) - -var controlToClass = map[rune]Class{ - 0x202D: LRO, // LeftToRightOverride, - 0x202E: RLO, // RightToLeftOverride, - 0x202A: LRE, // LeftToRightEmbedding, - 0x202B: RLE, // RightToLeftEmbedding, - 0x202C: PDF, // PopDirectionalFormat, - 0x2066: LRI, // LeftToRightIsolate, - 0x2067: RLI, // RightToLeftIsolate, - 0x2068: FSI, // FirstStrongIsolate, - 0x2069: PDI, // PopDirectionalIsolate, -} - -// A trie entry has the following bits: -// 7..5 XOR mask for brackets -// 4 1: Bracket open, 0: Bracket close -// 3..0 Class type - -const ( - openMask = 0x10 - xorMaskShift = 5 -) diff --git a/vendor/golang.org/x/text/unicode/norm/maketables.go b/vendor/golang.org/x/text/unicode/norm/maketables.go deleted file mode 100644 index 30a3aa933..000000000 --- a/vendor/golang.org/x/text/unicode/norm/maketables.go +++ /dev/null @@ -1,986 +0,0 @@ -// Copyright 2011 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -// Normalization table generator. -// Data read from the web. -// See forminfo.go for a description of the trie values associated with each rune. - -package main - -import ( - "bytes" - "encoding/binary" - "flag" - "fmt" - "io" - "log" - "sort" - "strconv" - "strings" - - "golang.org/x/text/internal/gen" - "golang.org/x/text/internal/triegen" - "golang.org/x/text/internal/ucd" -) - -func main() { - gen.Init() - loadUnicodeData() - compactCCC() - loadCompositionExclusions() - completeCharFields(FCanonical) - completeCharFields(FCompatibility) - computeNonStarterCounts() - verifyComputed() - printChars() - testDerived() - printTestdata() - makeTables() -} - -var ( - tablelist = flag.String("tables", - "all", - "comma-separated list of which tables to generate; "+ - "can be 'decomp', 'recomp', 'info' and 'all'") - test = flag.Bool("test", - false, - "test existing tables against DerivedNormalizationProps and generate test data for regression testing") - verbose = flag.Bool("verbose", - false, - "write data to stdout as it is parsed") -) - -const MaxChar = 0x10FFFF // anything above this shouldn't exist - -// Quick Check properties of runes allow us to quickly -// determine whether a rune may occur in a normal form. -// For a given normal form, a rune may be guaranteed to occur -// verbatim (QC=Yes), may or may not combine with another -// rune (QC=Maybe), or may not occur (QC=No). 
-type QCResult int - -const ( - QCUnknown QCResult = iota - QCYes - QCNo - QCMaybe -) - -func (r QCResult) String() string { - switch r { - case QCYes: - return "Yes" - case QCNo: - return "No" - case QCMaybe: - return "Maybe" - } - return "***UNKNOWN***" -} - -const ( - FCanonical = iota // NFC or NFD - FCompatibility // NFKC or NFKD - FNumberOfFormTypes -) - -const ( - MComposed = iota // NFC or NFKC - MDecomposed // NFD or NFKD - MNumberOfModes -) - -// This contains only the properties we're interested in. -type Char struct { - name string - codePoint rune // if zero, this index is not a valid code point. - ccc uint8 // canonical combining class - origCCC uint8 - excludeInComp bool // from CompositionExclusions.txt - compatDecomp bool // it has a compatibility expansion - - nTrailingNonStarters uint8 - nLeadingNonStarters uint8 // must be equal to trailing if non-zero - - forms [FNumberOfFormTypes]FormInfo // For FCanonical and FCompatibility - - state State -} - -var chars = make([]Char, MaxChar+1) -var cccMap = make(map[uint8]uint8) - -func (c Char) String() string { - buf := new(bytes.Buffer) - - fmt.Fprintf(buf, "%U [%s]:\n", c.codePoint, c.name) - fmt.Fprintf(buf, " ccc: %v\n", c.ccc) - fmt.Fprintf(buf, " excludeInComp: %v\n", c.excludeInComp) - fmt.Fprintf(buf, " compatDecomp: %v\n", c.compatDecomp) - fmt.Fprintf(buf, " state: %v\n", c.state) - fmt.Fprintf(buf, " NFC:\n") - fmt.Fprint(buf, c.forms[FCanonical]) - fmt.Fprintf(buf, " NFKC:\n") - fmt.Fprint(buf, c.forms[FCompatibility]) - - return buf.String() -} - -// In UnicodeData.txt, some ranges are marked like this: -// 3400;;Lo;0;L;;;;;N;;;;; -// 4DB5;;Lo;0;L;;;;;N;;;;; -// parseCharacter keeps a state variable indicating the weirdness. -type State int - -const ( - SNormal State = iota // known to be zero for the type - SFirst - SLast - SMissing -) - -var lastChar = rune('\u0000') - -func (c Char) isValid() bool { - return c.codePoint != 0 && c.state != SMissing -} - -type FormInfo struct { - quickCheck [MNumberOfModes]QCResult // index: MComposed or MDecomposed - verified [MNumberOfModes]bool // index: MComposed or MDecomposed - - combinesForward bool // May combine with rune on the right - combinesBackward bool // May combine with rune on the left - isOneWay bool // Never appears in result - inDecomp bool // Some decompositions result in this char. 
- decomp Decomposition - expandedDecomp Decomposition -} - -func (f FormInfo) String() string { - buf := bytes.NewBuffer(make([]byte, 0)) - - fmt.Fprintf(buf, " quickCheck[C]: %v\n", f.quickCheck[MComposed]) - fmt.Fprintf(buf, " quickCheck[D]: %v\n", f.quickCheck[MDecomposed]) - fmt.Fprintf(buf, " cmbForward: %v\n", f.combinesForward) - fmt.Fprintf(buf, " cmbBackward: %v\n", f.combinesBackward) - fmt.Fprintf(buf, " isOneWay: %v\n", f.isOneWay) - fmt.Fprintf(buf, " inDecomp: %v\n", f.inDecomp) - fmt.Fprintf(buf, " decomposition: %X\n", f.decomp) - fmt.Fprintf(buf, " expandedDecomp: %X\n", f.expandedDecomp) - - return buf.String() -} - -type Decomposition []rune - -func parseDecomposition(s string, skipfirst bool) (a []rune, err error) { - decomp := strings.Split(s, " ") - if len(decomp) > 0 && skipfirst { - decomp = decomp[1:] - } - for _, d := range decomp { - point, err := strconv.ParseUint(d, 16, 64) - if err != nil { - return a, err - } - a = append(a, rune(point)) - } - return a, nil -} - -func loadUnicodeData() { - f := gen.OpenUCDFile("UnicodeData.txt") - defer f.Close() - p := ucd.New(f) - for p.Next() { - r := p.Rune(ucd.CodePoint) - char := &chars[r] - - char.ccc = uint8(p.Uint(ucd.CanonicalCombiningClass)) - decmap := p.String(ucd.DecompMapping) - - exp, err := parseDecomposition(decmap, false) - isCompat := false - if err != nil { - if len(decmap) > 0 { - exp, err = parseDecomposition(decmap, true) - if err != nil { - log.Fatalf(`%U: bad decomp |%v|: "%s"`, r, decmap, err) - } - isCompat = true - } - } - - char.name = p.String(ucd.Name) - char.codePoint = r - char.forms[FCompatibility].decomp = exp - if !isCompat { - char.forms[FCanonical].decomp = exp - } else { - char.compatDecomp = true - } - if len(decmap) > 0 { - char.forms[FCompatibility].decomp = exp - } - } - if err := p.Err(); err != nil { - log.Fatal(err) - } -} - -// compactCCC converts the sparse set of CCC values to a continguous one, -// reducing the number of bits needed from 8 to 6. -func compactCCC() { - m := make(map[uint8]uint8) - for i := range chars { - c := &chars[i] - m[c.ccc] = 0 - } - cccs := []int{} - for v, _ := range m { - cccs = append(cccs, int(v)) - } - sort.Ints(cccs) - for i, c := range cccs { - cccMap[uint8(i)] = uint8(c) - m[uint8(c)] = uint8(i) - } - for i := range chars { - c := &chars[i] - c.origCCC = c.ccc - c.ccc = m[c.ccc] - } - if len(m) >= 1<<6 { - log.Fatalf("too many difference CCC values: %d >= 64", len(m)) - } -} - -// CompositionExclusions.txt has form: -// 0958 # ... -// See https://unicode.org/reports/tr44/ for full explanation -func loadCompositionExclusions() { - f := gen.OpenUCDFile("CompositionExclusions.txt") - defer f.Close() - p := ucd.New(f) - for p.Next() { - c := &chars[p.Rune(0)] - if c.excludeInComp { - log.Fatalf("%U: Duplicate entry in exclusions.", c.codePoint) - } - c.excludeInComp = true - } - if e := p.Err(); e != nil { - log.Fatal(e) - } -} - -// hasCompatDecomp returns true if any of the recursive -// decompositions contains a compatibility expansion. -// In this case, the character may not occur in NFK*. -func hasCompatDecomp(r rune) bool { - c := &chars[r] - if c.compatDecomp { - return true - } - for _, d := range c.forms[FCompatibility].decomp { - if hasCompatDecomp(d) { - return true - } - } - return false -} - -// Hangul related constants. 
-const ( - HangulBase = 0xAC00 - HangulEnd = 0xD7A4 // hangulBase + Jamo combinations (19 * 21 * 28) - - JamoLBase = 0x1100 - JamoLEnd = 0x1113 - JamoVBase = 0x1161 - JamoVEnd = 0x1176 - JamoTBase = 0x11A8 - JamoTEnd = 0x11C3 - - JamoLVTCount = 19 * 21 * 28 - JamoTCount = 28 -) - -func isHangul(r rune) bool { - return HangulBase <= r && r < HangulEnd -} - -func isHangulWithoutJamoT(r rune) bool { - if !isHangul(r) { - return false - } - r -= HangulBase - return r < JamoLVTCount && r%JamoTCount == 0 -} - -func ccc(r rune) uint8 { - return chars[r].ccc -} - -// Insert a rune in a buffer, ordered by Canonical Combining Class. -func insertOrdered(b Decomposition, r rune) Decomposition { - n := len(b) - b = append(b, 0) - cc := ccc(r) - if cc > 0 { - // Use bubble sort. - for ; n > 0; n-- { - if ccc(b[n-1]) <= cc { - break - } - b[n] = b[n-1] - } - } - b[n] = r - return b -} - -// Recursively decompose. -func decomposeRecursive(form int, r rune, d Decomposition) Decomposition { - dcomp := chars[r].forms[form].decomp - if len(dcomp) == 0 { - return insertOrdered(d, r) - } - for _, c := range dcomp { - d = decomposeRecursive(form, c, d) - } - return d -} - -func completeCharFields(form int) { - // Phase 0: pre-expand decomposition. - for i := range chars { - f := &chars[i].forms[form] - if len(f.decomp) == 0 { - continue - } - exp := make(Decomposition, 0) - for _, c := range f.decomp { - exp = decomposeRecursive(form, c, exp) - } - f.expandedDecomp = exp - } - - // Phase 1: composition exclusion, mark decomposition. - for i := range chars { - c := &chars[i] - f := &c.forms[form] - - // Marks script-specific exclusions and version restricted. - f.isOneWay = c.excludeInComp - - // Singletons - f.isOneWay = f.isOneWay || len(f.decomp) == 1 - - // Non-starter decompositions - if len(f.decomp) > 1 { - chk := c.ccc != 0 || chars[f.decomp[0]].ccc != 0 - f.isOneWay = f.isOneWay || chk - } - - // Runes that decompose into more than two runes. - f.isOneWay = f.isOneWay || len(f.decomp) > 2 - - if form == FCompatibility { - f.isOneWay = f.isOneWay || hasCompatDecomp(c.codePoint) - } - - for _, r := range f.decomp { - chars[r].forms[form].inDecomp = true - } - } - - // Phase 2: forward and backward combining. - for i := range chars { - c := &chars[i] - f := &c.forms[form] - - if !f.isOneWay && len(f.decomp) == 2 { - f0 := &chars[f.decomp[0]].forms[form] - f1 := &chars[f.decomp[1]].forms[form] - if !f0.isOneWay { - f0.combinesForward = true - } - if !f1.isOneWay { - f1.combinesBackward = true - } - } - if isHangulWithoutJamoT(rune(i)) { - f.combinesForward = true - } - } - - // Phase 3: quick check values. 
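The Hangul constants and `isHangulWithoutJamoT` above rely on Hangul syllables being generated algorithmically from jamo indices, with trailing-jamo-less (LV) syllables falling on multiples of 28. A sketch of that arithmetic, using the standard Unicode composition formula rather than code from this file:

```go
package main

import "fmt"

// Hangul syllables are algorithmic: hangulBase + (L*21+V)*28 + T.
// A syllable with no trailing jamo (T == 0) is exactly the LV case
// that isHangulWithoutJamoT detects with r%28 == 0.
const (
	hangulBase = 0xAC00
	jamoVCount = 21
	jamoTCount = 28
)

func compose(l, v, t int) rune {
	return rune(hangulBase + (l*jamoVCount+v)*jamoTCount + t)
}

func main() {
	ga := compose(0, 0, 0)  // U+AC00 '가', an LV syllable
	gag := compose(0, 0, 1) // U+AC01 '각', an LVT syllable
	fmt.Printf("%c %c lv? %t %t\n", ga, gag,
		(ga-hangulBase)%jamoTCount == 0, (gag-hangulBase)%jamoTCount == 0)
}
```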
- for i := range chars { - c := &chars[i] - f := &c.forms[form] - - switch { - case len(f.decomp) > 0: - f.quickCheck[MDecomposed] = QCNo - case isHangul(rune(i)): - f.quickCheck[MDecomposed] = QCNo - default: - f.quickCheck[MDecomposed] = QCYes - } - switch { - case f.isOneWay: - f.quickCheck[MComposed] = QCNo - case (i & 0xffff00) == JamoLBase: - f.quickCheck[MComposed] = QCYes - if JamoLBase <= i && i < JamoLEnd { - f.combinesForward = true - } - if JamoVBase <= i && i < JamoVEnd { - f.quickCheck[MComposed] = QCMaybe - f.combinesBackward = true - f.combinesForward = true - } - if JamoTBase <= i && i < JamoTEnd { - f.quickCheck[MComposed] = QCMaybe - f.combinesBackward = true - } - case !f.combinesBackward: - f.quickCheck[MComposed] = QCYes - default: - f.quickCheck[MComposed] = QCMaybe - } - } -} - -func computeNonStarterCounts() { - // Phase 4: leading and trailing non-starter count - for i := range chars { - c := &chars[i] - - runes := []rune{rune(i)} - // We always use FCompatibility so that the CGJ insertion points do not - // change for repeated normalizations with different forms. - if exp := c.forms[FCompatibility].expandedDecomp; len(exp) > 0 { - runes = exp - } - // We consider runes that combine backwards to be non-starters for the - // purpose of Stream-Safe Text Processing. - for _, r := range runes { - if cr := &chars[r]; cr.ccc == 0 && !cr.forms[FCompatibility].combinesBackward { - break - } - c.nLeadingNonStarters++ - } - for i := len(runes) - 1; i >= 0; i-- { - if cr := &chars[runes[i]]; cr.ccc == 0 && !cr.forms[FCompatibility].combinesBackward { - break - } - c.nTrailingNonStarters++ - } - if c.nTrailingNonStarters > 3 { - log.Fatalf("%U: Decomposition with more than 3 (%d) trailing modifiers (%U)", i, c.nTrailingNonStarters, runes) - } - - if isHangul(rune(i)) { - c.nTrailingNonStarters = 2 - if isHangulWithoutJamoT(rune(i)) { - c.nTrailingNonStarters = 1 - } - } - - if l, t := c.nLeadingNonStarters, c.nTrailingNonStarters; l > 0 && l != t { - log.Fatalf("%U: number of leading and trailing non-starters should be equal (%d vs %d)", i, l, t) - } - if t := c.nTrailingNonStarters; t > 3 { - log.Fatalf("%U: number of trailing non-starters is %d > 3", t) - } - } -} - -func printBytes(w io.Writer, b []byte, name string) { - fmt.Fprintf(w, "// %s: %d bytes\n", name, len(b)) - fmt.Fprintf(w, "var %s = [...]byte {", name) - for i, c := range b { - switch { - case i%64 == 0: - fmt.Fprintf(w, "\n// Bytes %x - %x\n", i, i+63) - case i%8 == 0: - fmt.Fprintf(w, "\n") - } - fmt.Fprintf(w, "0x%.2X, ", c) - } - fmt.Fprint(w, "\n}\n\n") -} - -// See forminfo.go for format. -func makeEntry(f *FormInfo, c *Char) uint16 { - e := uint16(0) - if r := c.codePoint; HangulBase <= r && r < HangulEnd { - e |= 0x40 - } - if f.combinesForward { - e |= 0x20 - } - if f.quickCheck[MDecomposed] == QCNo { - e |= 0x4 - } - switch f.quickCheck[MComposed] { - case QCYes: - case QCNo: - e |= 0x10 - case QCMaybe: - e |= 0x18 - default: - log.Fatalf("Illegal quickcheck value %v.", f.quickCheck[MComposed]) - } - e |= uint16(c.nTrailingNonStarters) - return e -} - -// decompSet keeps track of unique decompositions, grouped by whether -// the decomposition is followed by a trailing and/or leading CCC. 
-type decompSet [7]map[string]bool - -const ( - normalDecomp = iota - firstMulti - firstCCC - endMulti - firstLeadingCCC - firstCCCZeroExcept - firstStarterWithNLead - lastDecomp -) - -var cname = []string{"firstMulti", "firstCCC", "endMulti", "firstLeadingCCC", "firstCCCZeroExcept", "firstStarterWithNLead", "lastDecomp"} - -func makeDecompSet() decompSet { - m := decompSet{} - for i := range m { - m[i] = make(map[string]bool) - } - return m -} -func (m *decompSet) insert(key int, s string) { - m[key][s] = true -} - -func printCharInfoTables(w io.Writer) int { - mkstr := func(r rune, f *FormInfo) (int, string) { - d := f.expandedDecomp - s := string([]rune(d)) - if max := 1 << 6; len(s) >= max { - const msg = "%U: too many bytes in decomposition: %d >= %d" - log.Fatalf(msg, r, len(s), max) - } - head := uint8(len(s)) - if f.quickCheck[MComposed] != QCYes { - head |= 0x40 - } - if f.combinesForward { - head |= 0x80 - } - s = string([]byte{head}) + s - - lccc := ccc(d[0]) - tccc := ccc(d[len(d)-1]) - cc := ccc(r) - if cc != 0 && lccc == 0 && tccc == 0 { - log.Fatalf("%U: trailing and leading ccc are 0 for non-zero ccc %d", r, cc) - } - if tccc < lccc && lccc != 0 { - const msg = "%U: lccc (%d) must be <= tcc (%d)" - log.Fatalf(msg, r, lccc, tccc) - } - index := normalDecomp - nTrail := chars[r].nTrailingNonStarters - nLead := chars[r].nLeadingNonStarters - if tccc > 0 || lccc > 0 || nTrail > 0 { - tccc <<= 2 - tccc |= nTrail - s += string([]byte{tccc}) - index = endMulti - for _, r := range d[1:] { - if ccc(r) == 0 { - index = firstCCC - } - } - if lccc > 0 || nLead > 0 { - s += string([]byte{lccc}) - if index == firstCCC { - log.Fatalf("%U: multi-segment decomposition not supported for decompositions with leading CCC != 0", r) - } - index = firstLeadingCCC - } - if cc != lccc { - if cc != 0 { - log.Fatalf("%U: for lccc != ccc, expected ccc to be 0; was %d", r, cc) - } - index = firstCCCZeroExcept - } - } else if len(d) > 1 { - index = firstMulti - } - return index, s - } - - decompSet := makeDecompSet() - const nLeadStr = "\x00\x01" // 0-byte length and tccc with nTrail. - decompSet.insert(firstStarterWithNLead, nLeadStr) - - // Store the uniqued decompositions in a byte buffer, - // preceded by their byte length. 
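`mkstr` above packs a header byte onto the front of each stored decomposition: the low six bits hold the byte length, bit 6 flags a composed quick check other than Yes, and bit 7 flags `combinesForward`. A minimal decoder for that layout (a hypothetical helper for illustration, not part of the package):

```go
package main

import "fmt"

// decodeHead unpacks the header byte built in mkstr above: low six bits
// are the byte length of the UTF-8 decomposition, bit 6 means
// quickCheck[MComposed] != QCYes, bit 7 means combinesForward.
func decodeHead(head uint8) (size int, qcNotYes, combinesForward bool) {
	return int(head & 0x3F), head&0x40 != 0, head&0x80 != 0
}

func main() {
	// A hypothetical 3-byte decomposition that may combine forward.
	head := uint8(3) | 0x80
	size, notYes, fwd := decodeHead(head)
	fmt.Println(size, notYes, fwd) // 3 false true
}
```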
- for _, c := range chars { - for _, f := range c.forms { - if len(f.expandedDecomp) == 0 { - continue - } - if f.combinesBackward { - log.Fatalf("%U: combinesBackward and decompose", c.codePoint) - } - index, s := mkstr(c.codePoint, &f) - decompSet.insert(index, s) - } - } - - decompositions := bytes.NewBuffer(make([]byte, 0, 10000)) - size := 0 - positionMap := make(map[string]uint16) - decompositions.WriteString("\000") - fmt.Fprintln(w, "const (") - for i, m := range decompSet { - sa := []string{} - for s := range m { - sa = append(sa, s) - } - sort.Strings(sa) - for _, s := range sa { - p := decompositions.Len() - decompositions.WriteString(s) - positionMap[s] = uint16(p) - } - if cname[i] != "" { - fmt.Fprintf(w, "%s = 0x%X\n", cname[i], decompositions.Len()) - } - } - fmt.Fprintln(w, "maxDecomp = 0x8000") - fmt.Fprintln(w, ")") - b := decompositions.Bytes() - printBytes(w, b, "decomps") - size += len(b) - - varnames := []string{"nfc", "nfkc"} - for i := 0; i < FNumberOfFormTypes; i++ { - trie := triegen.NewTrie(varnames[i]) - - for r, c := range chars { - f := c.forms[i] - d := f.expandedDecomp - if len(d) != 0 { - _, key := mkstr(c.codePoint, &f) - trie.Insert(rune(r), uint64(positionMap[key])) - if c.ccc != ccc(d[0]) { - // We assume the lead ccc of a decomposition !=0 in this case. - if ccc(d[0]) == 0 { - log.Fatalf("Expected leading CCC to be non-zero; ccc is %d", c.ccc) - } - } - } else if c.nLeadingNonStarters > 0 && len(f.expandedDecomp) == 0 && c.ccc == 0 && !f.combinesBackward { - // Handle cases where it can't be detected that the nLead should be equal - // to nTrail. - trie.Insert(c.codePoint, uint64(positionMap[nLeadStr])) - } else if v := makeEntry(&f, &c)<<8 | uint16(c.ccc); v != 0 { - trie.Insert(c.codePoint, uint64(0x8000|v)) - } - } - sz, err := trie.Gen(w, triegen.Compact(&normCompacter{name: varnames[i]})) - if err != nil { - log.Fatal(err) - } - size += sz - } - return size -} - -func contains(sa []string, s string) bool { - for _, a := range sa { - if a == s { - return true - } - } - return false -} - -func makeTables() { - w := &bytes.Buffer{} - - size := 0 - if *tablelist == "" { - return - } - list := strings.Split(*tablelist, ",") - if *tablelist == "all" { - list = []string{"recomp", "info"} - } - - // Compute maximum decomposition size. - max := 0 - for _, c := range chars { - if n := len(string(c.forms[FCompatibility].expandedDecomp)); n > max { - max = n - } - } - fmt.Fprintln(w, `import "sync"`) - fmt.Fprintln(w) - - fmt.Fprintln(w, "const (") - fmt.Fprintln(w, "\t// Version is the Unicode edition from which the tables are derived.") - fmt.Fprintf(w, "\tVersion = %q\n", gen.UnicodeVersion()) - fmt.Fprintln(w) - fmt.Fprintln(w, "\t// MaxTransformChunkSize indicates the maximum number of bytes that Transform") - fmt.Fprintln(w, "\t// may need to write atomically for any Form. Making a destination buffer at") - fmt.Fprintln(w, "\t// least this size ensures that Transform can always make progress and that") - fmt.Fprintln(w, "\t// the user does not need to grow the buffer on an ErrShortDst.") - fmt.Fprintf(w, "\tMaxTransformChunkSize = %d+maxNonStarters*4\n", len(string(0x034F))+max) - fmt.Fprintln(w, ")\n") - - // Print the CCC remap table. 
- size += len(cccMap) - fmt.Fprintf(w, "var ccc = [%d]uint8{", len(cccMap)) - for i := 0; i < len(cccMap); i++ { - if i%8 == 0 { - fmt.Fprintln(w) - } - fmt.Fprintf(w, "%3d, ", cccMap[uint8(i)]) - } - fmt.Fprintln(w, "\n}\n") - - if contains(list, "info") { - size += printCharInfoTables(w) - } - - if contains(list, "recomp") { - // Note that we use 32 bit keys, instead of 64 bit. - // This clips the bits of three entries, but we know - // this won't cause a collision. The compiler will catch - // any changes made to UnicodeData.txt that introduces - // a collision. - // Note that the recomposition map for NFC and NFKC - // are identical. - - // Recomposition map - nrentries := 0 - for _, c := range chars { - f := c.forms[FCanonical] - if !f.isOneWay && len(f.decomp) > 0 { - nrentries++ - } - } - sz := nrentries * 8 - size += sz - fmt.Fprintf(w, "// recompMap: %d bytes (entries only)\n", sz) - fmt.Fprintln(w, "var recompMap map[uint32]rune") - fmt.Fprintln(w, "var recompMapOnce sync.Once\n") - fmt.Fprintln(w, `const recompMapPacked = "" +`) - var buf [8]byte - for i, c := range chars { - f := c.forms[FCanonical] - d := f.decomp - if !f.isOneWay && len(d) > 0 { - key := uint32(uint16(d[0]))<<16 + uint32(uint16(d[1])) - binary.BigEndian.PutUint32(buf[:4], key) - binary.BigEndian.PutUint32(buf[4:], uint32(i)) - fmt.Fprintf(w, "\t\t%q + // 0x%.8X: 0x%.8X\n", string(buf[:]), key, uint32(i)) - } - } - // hack so we don't have to special case the trailing plus sign - fmt.Fprintf(w, ` ""`) - fmt.Fprintln(w) - } - - fmt.Fprintf(w, "// Total size of tables: %dKB (%d bytes)\n", (size+512)/1024, size) - gen.WriteVersionedGoFile("tables.go", "norm", w.Bytes()) -} - -func printChars() { - if *verbose { - for _, c := range chars { - if !c.isValid() || c.state == SMissing { - continue - } - fmt.Println(c) - } - } -} - -// verifyComputed does various consistency tests. -func verifyComputed() { - for i, c := range chars { - for _, f := range c.forms { - isNo := (f.quickCheck[MDecomposed] == QCNo) - if (len(f.decomp) > 0) != isNo && !isHangul(rune(i)) { - log.Fatalf("%U: NF*D QC must be No if rune decomposes", i) - } - - isMaybe := f.quickCheck[MComposed] == QCMaybe - if f.combinesBackward != isMaybe { - log.Fatalf("%U: NF*C QC must be Maybe if combinesBackward", i) - } - if len(f.decomp) > 0 && f.combinesForward && isMaybe { - log.Fatalf("%U: NF*C QC must be Yes or No if combinesForward and decomposes", i) - } - - if len(f.expandedDecomp) != 0 { - continue - } - if a, b := c.nLeadingNonStarters > 0, (c.ccc > 0 || f.combinesBackward); a != b { - // We accept these runes to be treated differently (it only affects - // segment breaking in iteration, most likely on improper use), but - // reconsider if more characters are added. 
- // U+FF9E HALFWIDTH KATAKANA VOICED SOUND MARK;Lm;0;L; 3099;;;;N;;;;; - // U+FF9F HALFWIDTH KATAKANA SEMI-VOICED SOUND MARK;Lm;0;L; 309A;;;;N;;;;; - // U+3133 HANGUL LETTER KIYEOK-SIOS;Lo;0;L; 11AA;;;;N;HANGUL LETTER GIYEOG SIOS;;;; - // U+318E HANGUL LETTER ARAEAE;Lo;0;L; 11A1;;;;N;HANGUL LETTER ALAE AE;;;; - // U+FFA3 HALFWIDTH HANGUL LETTER KIYEOK-SIOS;Lo;0;L; 3133;;;;N;HALFWIDTH HANGUL LETTER GIYEOG SIOS;;;; - // U+FFDC HALFWIDTH HANGUL LETTER I;Lo;0;L; 3163;;;;N;;;;; - if i != 0xFF9E && i != 0xFF9F && !(0x3133 <= i && i <= 0x318E) && !(0xFFA3 <= i && i <= 0xFFDC) { - log.Fatalf("%U: nLead was %v; want %v", i, a, b) - } - } - } - nfc := c.forms[FCanonical] - nfkc := c.forms[FCompatibility] - if nfc.combinesBackward != nfkc.combinesBackward { - log.Fatalf("%U: Cannot combine combinesBackward\n", c.codePoint) - } - } -} - -// Use values in DerivedNormalizationProps.txt to compare against the -// values we computed. -// DerivedNormalizationProps.txt has form: -// 00C0..00C5 ; NFD_QC; N # ... -// 0374 ; NFD_QC; N # ... -// See https://unicode.org/reports/tr44/ for full explanation -func testDerived() { - f := gen.OpenUCDFile("DerivedNormalizationProps.txt") - defer f.Close() - p := ucd.New(f) - for p.Next() { - r := p.Rune(0) - c := &chars[r] - - var ftype, mode int - qt := p.String(1) - switch qt { - case "NFC_QC": - ftype, mode = FCanonical, MComposed - case "NFD_QC": - ftype, mode = FCanonical, MDecomposed - case "NFKC_QC": - ftype, mode = FCompatibility, MComposed - case "NFKD_QC": - ftype, mode = FCompatibility, MDecomposed - default: - continue - } - var qr QCResult - switch p.String(2) { - case "Y": - qr = QCYes - case "N": - qr = QCNo - case "M": - qr = QCMaybe - default: - log.Fatalf(`Unexpected quick check value "%s"`, p.String(2)) - } - if got := c.forms[ftype].quickCheck[mode]; got != qr { - log.Printf("%U: FAILED %s (was %v need %v)\n", r, qt, got, qr) - } - c.forms[ftype].verified[mode] = true - } - if err := p.Err(); err != nil { - log.Fatal(err) - } - // Any unspecified value must be QCYes. Verify this. 
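The `recompMapPacked` string emitted in `makeTables` earlier stores one 8-byte record per recomposition: a big-endian uint32 key packing the two decomposed runes (each truncated to 16 bits), followed by the composed rune as a big-endian uint32. A sketch of the matching decode, just to illustrate the format (the real unpacking lives in the generated package):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// decode unpacks 8-byte records as written above: a big-endian uint32
// key (first rune << 16 | second rune) then a big-endian uint32 rune.
func decode(packed []byte) map[uint32]rune {
	m := make(map[uint32]rune, len(packed)/8)
	for i := 0; i+8 <= len(packed); i += 8 {
		key := binary.BigEndian.Uint32(packed[i:])
		m[key] = rune(binary.BigEndian.Uint32(packed[i+4:]))
	}
	return m
}

func main() {
	// One hypothetical record: 'A' (U+0041) + U+0301 -> 'Á' (U+00C1).
	var rec [8]byte
	binary.BigEndian.PutUint32(rec[:4], 0x0041<<16|0x0301)
	binary.BigEndian.PutUint32(rec[4:], 0x00C1)
	m := decode(rec[:])
	fmt.Printf("%c\n", m[0x0041<<16|0x0301]) // Á
}
```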
- for i, c := range chars { - for j, fd := range c.forms { - for k, qr := range fd.quickCheck { - if !fd.verified[k] && qr != QCYes { - m := "%U: FAIL F:%d M:%d (was %v need Yes) %s\n" - log.Printf(m, i, j, k, qr, c.name) - } - } - } - } -} - -var testHeader = `const ( - Yes = iota - No - Maybe -) - -type formData struct { - qc uint8 - combinesForward bool - decomposition string -} - -type runeData struct { - r rune - ccc uint8 - nLead uint8 - nTrail uint8 - f [2]formData // 0: canonical; 1: compatibility -} - -func f(qc uint8, cf bool, dec string) [2]formData { - return [2]formData{{qc, cf, dec}, {qc, cf, dec}} -} - -func g(qc, qck uint8, cf, cfk bool, d, dk string) [2]formData { - return [2]formData{{qc, cf, d}, {qck, cfk, dk}} -} - -var testData = []runeData{ -` - -func printTestdata() { - type lastInfo struct { - ccc uint8 - nLead uint8 - nTrail uint8 - f string - } - - last := lastInfo{} - w := &bytes.Buffer{} - fmt.Fprintf(w, testHeader) - for r, c := range chars { - f := c.forms[FCanonical] - qc, cf, d := f.quickCheck[MComposed], f.combinesForward, string(f.expandedDecomp) - f = c.forms[FCompatibility] - qck, cfk, dk := f.quickCheck[MComposed], f.combinesForward, string(f.expandedDecomp) - s := "" - if d == dk && qc == qck && cf == cfk { - s = fmt.Sprintf("f(%s, %v, %q)", qc, cf, d) - } else { - s = fmt.Sprintf("g(%s, %s, %v, %v, %q, %q)", qc, qck, cf, cfk, d, dk) - } - current := lastInfo{c.ccc, c.nLeadingNonStarters, c.nTrailingNonStarters, s} - if last != current { - fmt.Fprintf(w, "\t{0x%x, %d, %d, %d, %s},\n", r, c.origCCC, c.nLeadingNonStarters, c.nTrailingNonStarters, s) - last = current - } - } - fmt.Fprintln(w, "}") - gen.WriteVersionedGoFile("data_test.go", "norm", w.Bytes()) -} diff --git a/vendor/golang.org/x/text/unicode/norm/triegen.go b/vendor/golang.org/x/text/unicode/norm/triegen.go deleted file mode 100644 index 45d711900..000000000 --- a/vendor/golang.org/x/text/unicode/norm/triegen.go +++ /dev/null @@ -1,117 +0,0 @@ -// Copyright 2011 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build ignore - -// Trie table generator. -// Used by make*tables tools to generate a go file with trie data structures -// for mapping UTF-8 to a 16-bit value. All but the last byte in a UTF-8 byte -// sequence are used to lookup offsets in the index table to be used for the -// next byte. The last byte is used to index into a table with 16-bit values. 
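The header comment above describes the lookup scheme these generators target: all but the last byte of a UTF-8 sequence walk an index table, and the last byte selects a 16-bit value. A toy two-level version with made-up block size and tables, only to make the mechanics concrete (the real tries use variable block layouts and sparse blocks):

```go
package main

import "fmt"

// blockSize is an assumed toy block size; the low six bits of each
// UTF-8 byte (the continuation payload) index within a block.
const blockSize = 64

// lookup walks the index table on every byte but the last, then reads
// the 16-bit value selected by the final byte.
func lookup(index, values []uint16, s []byte) uint16 {
	var offset uint16
	for _, b := range s[:len(s)-1] {
		offset = index[int(offset)*blockSize+int(b&0x3F)]
	}
	return values[int(offset)*blockSize+int(s[len(s)-1]&0x3F)]
}

func main() {
	// Toy tables: one all-zero index block and one value block where
	// every slot holds 7, so any 2-byte sequence maps to 7.
	index := make([]uint16, blockSize)
	values := make([]uint16, blockSize)
	for i := range values {
		values[i] = 7
	}
	fmt.Println(lookup(index, values, []byte("é"))) // 7
}
```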
- -package main - -import ( - "fmt" - "io" -) - -const maxSparseEntries = 16 - -type normCompacter struct { - sparseBlocks [][]uint64 - sparseOffset []uint16 - sparseCount int - name string -} - -func mostFrequentStride(a []uint64) int { - counts := make(map[int]int) - var v int - for _, x := range a { - if stride := int(x) - v; v != 0 && stride >= 0 { - counts[stride]++ - } - v = int(x) - } - var maxs, maxc int - for stride, cnt := range counts { - if cnt > maxc || (cnt == maxc && stride < maxs) { - maxs, maxc = stride, cnt - } - } - return maxs -} - -func countSparseEntries(a []uint64) int { - stride := mostFrequentStride(a) - var v, count int - for _, tv := range a { - if int(tv)-v != stride { - if tv != 0 { - count++ - } - } - v = int(tv) - } - return count -} - -func (c *normCompacter) Size(v []uint64) (sz int, ok bool) { - if n := countSparseEntries(v); n <= maxSparseEntries { - return (n+1)*4 + 2, true - } - return 0, false -} - -func (c *normCompacter) Store(v []uint64) uint32 { - h := uint32(len(c.sparseOffset)) - c.sparseBlocks = append(c.sparseBlocks, v) - c.sparseOffset = append(c.sparseOffset, uint16(c.sparseCount)) - c.sparseCount += countSparseEntries(v) + 1 - return h -} - -func (c *normCompacter) Handler() string { - return c.name + "Sparse.lookup" -} - -func (c *normCompacter) Print(w io.Writer) (retErr error) { - p := func(f string, x ...interface{}) { - if _, err := fmt.Fprintf(w, f, x...); retErr == nil && err != nil { - retErr = err - } - } - - ls := len(c.sparseBlocks) - p("// %sSparseOffset: %d entries, %d bytes\n", c.name, ls, ls*2) - p("var %sSparseOffset = %#v\n\n", c.name, c.sparseOffset) - - ns := c.sparseCount - p("// %sSparseValues: %d entries, %d bytes\n", c.name, ns, ns*4) - p("var %sSparseValues = [%d]valueRange {", c.name, ns) - for i, b := range c.sparseBlocks { - p("\n// Block %#x, offset %#x", i, c.sparseOffset[i]) - var v int - stride := mostFrequentStride(b) - n := countSparseEntries(b) - p("\n{value:%#04x,lo:%#02x},", stride, uint8(n)) - for i, nv := range b { - if int(nv)-v != stride { - if v != 0 { - p(",hi:%#02x},", 0x80+i-1) - } - if nv != 0 { - p("\n{value:%#04x,lo:%#02x", nv, 0x80+i) - } - } - v = int(nv) - } - if v != 0 { - p(",hi:%#02x},", 0x80+len(b)-1) - } - } - p("\n}\n\n") - return -} diff --git a/vendor/golang.org/x/tools/AUTHORS b/vendor/golang.org/x/tools/AUTHORS new file mode 100644 index 000000000..15167cd74 --- /dev/null +++ b/vendor/golang.org/x/tools/AUTHORS @@ -0,0 +1,3 @@ +# This source code refers to The Go Authors for copyright purposes. +# The master list of authors is in the main Go distribution, +# visible at http://tip.golang.org/AUTHORS. diff --git a/vendor/golang.org/x/tools/CONTRIBUTORS b/vendor/golang.org/x/tools/CONTRIBUTORS new file mode 100644 index 000000000..1c4577e96 --- /dev/null +++ b/vendor/golang.org/x/tools/CONTRIBUTORS @@ -0,0 +1,3 @@ +# This source code was written by the Go contributors. +# The master list of contributors is in the main Go distribution, +# visible at http://tip.golang.org/CONTRIBUTORS. diff --git a/vendor/golang.org/x/tools/LICENSE b/vendor/golang.org/x/tools/LICENSE new file mode 100644 index 000000000..6a66aea5e --- /dev/null +++ b/vendor/golang.org/x/tools/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2009 The Go Authors. All rights reserved. 
+ +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/golang.org/x/tools/PATENTS b/vendor/golang.org/x/tools/PATENTS new file mode 100644 index 000000000..733099041 --- /dev/null +++ b/vendor/golang.org/x/tools/PATENTS @@ -0,0 +1,22 @@ +Additional IP Rights Grant (Patents) + +"This implementation" means the copyrightable works distributed by +Google as part of the Go project. + +Google hereby grants to You a perpetual, worldwide, non-exclusive, +no-charge, royalty-free, irrevocable (except as stated in this section) +patent license to make, have made, use, offer to sell, sell, import, +transfer and otherwise run, modify and propagate the contents of this +implementation of Go, where such license applies only to those patent +claims, both currently owned or controlled by Google and acquired in +the future, licensable by Google that are necessarily infringed by this +implementation of Go. This grant does not include claims that would be +infringed only as a consequence of further modification of this +implementation. If you or your agent or exclusive licensee institute or +order or agree to the institution of patent litigation against any +entity (including a cross-claim or counterclaim in a lawsuit) alleging +that this implementation of Go or any code incorporated within this +implementation of Go constitutes direct or contributory patent +infringement, or inducement of patent infringement, then any patent +rights granted to you under this License for this implementation of Go +shall terminate as of the date such litigation is filed. diff --git a/vendor/golang.org/x/tools/cmd/cover/README b/vendor/golang.org/x/tools/cmd/cover/README new file mode 100644 index 000000000..ff9523d4b --- /dev/null +++ b/vendor/golang.org/x/tools/cmd/cover/README @@ -0,0 +1,2 @@ +NOTE: For Go releases 1.5 and later, this tool lives in the +standard repository. The code here is not maintained. 
diff --git a/vendor/golang.org/x/tools/cmd/cover/cover.go b/vendor/golang.org/x/tools/cmd/cover/cover.go new file mode 100644 index 000000000..e09336499 --- /dev/null +++ b/vendor/golang.org/x/tools/cmd/cover/cover.go @@ -0,0 +1,722 @@ +// Copyright 2013 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package main + +import ( + "bytes" + "flag" + "fmt" + "go/ast" + "go/parser" + "go/printer" + "go/token" + "io" + "io/ioutil" + "log" + "os" + "path/filepath" + "sort" + "strconv" + "strings" +) + +const usageMessage = "" + + `Usage of 'go tool cover': +Given a coverage profile produced by 'go test': + go test -coverprofile=c.out + +Open a web browser displaying annotated source code: + go tool cover -html=c.out + +Write out an HTML file instead of launching a web browser: + go tool cover -html=c.out -o coverage.html + +Display coverage percentages to stdout for each function: + go tool cover -func=c.out + +Finally, to generate modified source code with coverage annotations +(what go test -cover does): + go tool cover -mode=set -var=CoverageVariableName program.go +` + +func usage() { + fmt.Fprintln(os.Stderr, usageMessage) + fmt.Fprintln(os.Stderr, "Flags:") + flag.PrintDefaults() + fmt.Fprintln(os.Stderr, "\n Only one of -html, -func, or -mode may be set.") + os.Exit(2) +} + +var ( + mode = flag.String("mode", "", "coverage mode: set, count, atomic") + varVar = flag.String("var", "GoCover", "name of coverage variable to generate") + output = flag.String("o", "", "file for output; default: stdout") + htmlOut = flag.String("html", "", "generate HTML representation of coverage profile") + funcOut = flag.String("func", "", "output coverage profile information for each function") +) + +var profile string // The profile to read; the value of -html or -func + +var counterStmt func(*File, ast.Expr) ast.Stmt + +const ( + atomicPackagePath = "sync/atomic" + atomicPackageName = "_cover_atomic_" +) + +func main() { + flag.Usage = usage + flag.Parse() + + // Usage information when no arguments. + if flag.NFlag() == 0 && flag.NArg() == 0 { + flag.Usage() + } + + err := parseFlags() + if err != nil { + fmt.Fprintln(os.Stderr, err) + fmt.Fprintln(os.Stderr, `For usage information, run "go tool cover -help"`) + os.Exit(2) + } + + // Generate coverage-annotated source. + if *mode != "" { + annotate(flag.Arg(0)) + return + } + + // Output HTML or function coverage information. + if *htmlOut != "" { + err = htmlOutput(profile, *output) + } else { + err = funcOutput(profile, *output) + } + + if err != nil { + fmt.Fprintf(os.Stderr, "cover: %v\n", err) + os.Exit(2) + } +} + +// parseFlags sets the profile and counterStmt globals and performs validations. +func parseFlags() error { + profile = *htmlOut + if *funcOut != "" { + if profile != "" { + return fmt.Errorf("too many options") + } + profile = *funcOut + } + + // Must either display a profile or rewrite Go source. 
+ if (profile == "") == (*mode == "") { + return fmt.Errorf("too many options") + } + + if *mode != "" { + switch *mode { + case "set": + counterStmt = setCounterStmt + case "count": + counterStmt = incCounterStmt + case "atomic": + counterStmt = atomicCounterStmt + default: + return fmt.Errorf("unknown -mode %v", *mode) + } + + if flag.NArg() == 0 { + return fmt.Errorf("missing source file") + } else if flag.NArg() == 1 { + return nil + } + } else if flag.NArg() == 0 { + return nil + } + return fmt.Errorf("too many arguments") +} + +// Block represents the information about a basic block to be recorded in the analysis. +// Note: Our definition of basic block is based on control structures; we don't break +// apart && and ||. We could but it doesn't seem important enough to bother. +type Block struct { + startByte token.Pos + endByte token.Pos + numStmt int +} + +// File is a wrapper for the state of a file used in the parser. +// The basic parse tree walker is a method of this type. +type File struct { + fset *token.FileSet + name string // Name of file. + astFile *ast.File + blocks []Block + atomicPkg string // Package name for "sync/atomic" in this file. +} + +// Visit implements the ast.Visitor interface. +func (f *File) Visit(node ast.Node) ast.Visitor { + switch n := node.(type) { + case *ast.BlockStmt: + // If it's a switch or select, the body is a list of case clauses; don't tag the block itself. + if len(n.List) > 0 { + switch n.List[0].(type) { + case *ast.CaseClause: // switch + for _, n := range n.List { + clause := n.(*ast.CaseClause) + clause.Body = f.addCounters(clause.Pos(), clause.End(), clause.Body, false) + } + return f + case *ast.CommClause: // select + for _, n := range n.List { + clause := n.(*ast.CommClause) + clause.Body = f.addCounters(clause.Pos(), clause.End(), clause.Body, false) + } + return f + } + } + n.List = f.addCounters(n.Lbrace, n.Rbrace+1, n.List, true) // +1 to step past closing brace. + case *ast.IfStmt: + ast.Walk(f, n.Body) + if n.Else == nil { + return nil + } + // The elses are special, because if we have + // if x { + // } else if y { + // } + // we want to cover the "if y". To do this, we need a place to drop the counter, + // so we add a hidden block: + // if x { + // } else { + // if y { + // } + // } + switch stmt := n.Else.(type) { + case *ast.IfStmt: + block := &ast.BlockStmt{ + Lbrace: n.Body.End(), // Start at end of the "if" block so the covered part looks like it starts at the "else". + List: []ast.Stmt{stmt}, + Rbrace: stmt.End(), + } + n.Else = block + case *ast.BlockStmt: + stmt.Lbrace = n.Body.End() // Start at end of the "if" block so the covered part looks like it starts at the "else". + default: + panic("unexpected node type in if") + } + ast.Walk(f, n.Else) + return nil + case *ast.SelectStmt: + // Don't annotate an empty select - creates a syntax error. + if n.Body == nil || len(n.Body.List) == 0 { + return nil + } + case *ast.SwitchStmt: + // Don't annotate an empty switch - creates a syntax error. + if n.Body == nil || len(n.Body.List) == 0 { + return nil + } + case *ast.TypeSwitchStmt: + // Don't annotate an empty type switch - creates a syntax error. + if n.Body == nil || len(n.Body.List) == 0 { + return nil + } + } + return f +} + +// unquote returns the unquoted string. 
+func unquote(s string) string { + t, err := strconv.Unquote(s) + if err != nil { + log.Fatalf("cover: improperly quoted string %q\n", s) + } + return t +} + +// addImport adds an import for the specified path, if one does not already exist, and returns +// the local package name. +func (f *File) addImport(path string) string { + // Does the package already import it? + for _, s := range f.astFile.Imports { + if unquote(s.Path.Value) == path { + if s.Name != nil { + return s.Name.Name + } + return filepath.Base(path) + } + } + newImport := &ast.ImportSpec{ + Name: ast.NewIdent(atomicPackageName), + Path: &ast.BasicLit{ + Kind: token.STRING, + Value: fmt.Sprintf("%q", path), + }, + } + impDecl := &ast.GenDecl{ + Tok: token.IMPORT, + Specs: []ast.Spec{ + newImport, + }, + } + // Make the new import the first Decl in the file. + astFile := f.astFile + astFile.Decls = append(astFile.Decls, nil) + copy(astFile.Decls[1:], astFile.Decls[0:]) + astFile.Decls[0] = impDecl + astFile.Imports = append(astFile.Imports, newImport) + + // Now refer to the package, just in case it ends up unused. + // That is, append to the end of the file the declaration + // var _ = _cover_atomic_.AddUint32 + reference := &ast.GenDecl{ + Tok: token.VAR, + Specs: []ast.Spec{ + &ast.ValueSpec{ + Names: []*ast.Ident{ + ast.NewIdent("_"), + }, + Values: []ast.Expr{ + &ast.SelectorExpr{ + X: ast.NewIdent(atomicPackageName), + Sel: ast.NewIdent("AddUint32"), + }, + }, + }, + }, + } + astFile.Decls = append(astFile.Decls, reference) + return atomicPackageName +} + +var slashslash = []byte("//") + +// initialComments returns the prefix of content containing only +// whitespace and line comments. Any +build directives must appear +// within this region. This approach is more reliable than using +// go/printer to print a modified AST containing comments. +// +func initialComments(content []byte) []byte { + // Derived from go/build.Context.shouldBuild. + end := 0 + p := content + for len(p) > 0 { + line := p + if i := bytes.IndexByte(line, '\n'); i >= 0 { + line, p = line[:i], p[i+1:] + } else { + p = p[len(p):] + } + line = bytes.TrimSpace(line) + if len(line) == 0 { // Blank line. + end = len(content) - len(p) + continue + } + if !bytes.HasPrefix(line, slashslash) { // Not comment line. + break + } + } + return content[:end] +} + +func annotate(name string) { + fset := token.NewFileSet() + content, err := ioutil.ReadFile(name) + if err != nil { + log.Fatalf("cover: %s: %s", name, err) + } + parsedFile, err := parser.ParseFile(fset, name, content, parser.ParseComments) + if err != nil { + log.Fatalf("cover: %s: %s", name, err) + } + parsedFile.Comments = trimComments(parsedFile, fset) + + file := &File{ + fset: fset, + name: name, + astFile: parsedFile, + } + if *mode == "atomic" { + file.atomicPkg = file.addImport(atomicPackagePath) + } + ast.Walk(file, file.astFile) + fd := os.Stdout + if *output != "" { + var err error + fd, err = os.Create(*output) + if err != nil { + log.Fatalf("cover: %s", err) + } + } + fd.Write(initialComments(content)) // Retain '// +build' directives. + file.print(fd) + // After printing the source tree, add some declarations for the counters etc. + // We could do this by adding to the tree, but it's easier just to print the text. + file.addVariables(fd) +} + +// trimComments drops all but the //go: comments, some of which are semantically important. +// We drop all others because they can appear in places that cause our counters +// to appear in syntactically incorrect places. 
//go: appears at the beginning of +// the line and is syntactically safe. +func trimComments(file *ast.File, fset *token.FileSet) []*ast.CommentGroup { + var comments []*ast.CommentGroup + for _, group := range file.Comments { + var list []*ast.Comment + for _, comment := range group.List { + if strings.HasPrefix(comment.Text, "//go:") && fset.Position(comment.Slash).Column == 1 { + list = append(list, comment) + } + } + if list != nil { + comments = append(comments, &ast.CommentGroup{List: list}) + } + } + return comments +} + +func (f *File) print(w io.Writer) { + printer.Fprint(w, f.fset, f.astFile) +} + +// intLiteral returns an ast.BasicLit representing the integer value. +func (f *File) intLiteral(i int) *ast.BasicLit { + node := &ast.BasicLit{ + Kind: token.INT, + Value: fmt.Sprint(i), + } + return node +} + +// index returns an ast.BasicLit representing the number of counters present. +func (f *File) index() *ast.BasicLit { + return f.intLiteral(len(f.blocks)) +} + +// setCounterStmt returns the expression: __count[23] = 1. +func setCounterStmt(f *File, counter ast.Expr) ast.Stmt { + return &ast.AssignStmt{ + Lhs: []ast.Expr{counter}, + Tok: token.ASSIGN, + Rhs: []ast.Expr{f.intLiteral(1)}, + } +} + +// incCounterStmt returns the expression: __count[23]++. +func incCounterStmt(f *File, counter ast.Expr) ast.Stmt { + return &ast.IncDecStmt{ + X: counter, + Tok: token.INC, + } +} + +// atomicCounterStmt returns the expression: atomic.AddUint32(&__count[23], 1) +func atomicCounterStmt(f *File, counter ast.Expr) ast.Stmt { + return &ast.ExprStmt{ + X: &ast.CallExpr{ + Fun: &ast.SelectorExpr{ + X: ast.NewIdent(f.atomicPkg), + Sel: ast.NewIdent("AddUint32"), + }, + Args: []ast.Expr{&ast.UnaryExpr{ + Op: token.AND, + X: counter, + }, + f.intLiteral(1), + }, + }, + } +} + +// newCounter creates a new counter expression of the appropriate form. +func (f *File) newCounter(start, end token.Pos, numStmt int) ast.Stmt { + counter := &ast.IndexExpr{ + X: &ast.SelectorExpr{ + X: ast.NewIdent(*varVar), + Sel: ast.NewIdent("Count"), + }, + Index: f.index(), + } + stmt := counterStmt(f, counter) + f.blocks = append(f.blocks, Block{start, end, numStmt}) + return stmt +} + +// addCounters takes a list of statements and adds counters to the beginning of +// each basic block at the top level of that list. For instance, given +// +// S1 +// if cond { +// S2 +// } +// S3 +// +// counters will be added before S1 and before S3. The block containing S2 +// will be visited in a separate call. +// TODO: Nested simple blocks get unnecessary (but correct) counters +func (f *File) addCounters(pos, blockEnd token.Pos, list []ast.Stmt, extendToClosingBrace bool) []ast.Stmt { + // Special case: make sure we add a counter to an empty block. Can't do this below + // or we will add a counter to an empty statement list after, say, a return statement. + if len(list) == 0 { + return []ast.Stmt{f.newCounter(pos, blockEnd, 0)} + } + // We have a block (statement list), but it may have several basic blocks due to the + // appearance of statements that affect the flow of control. + var newList []ast.Stmt + for { + // Find first statement that affects flow of control (break, continue, if, etc.). + // It will be the last statement of this basic block. + var last int + end := blockEnd + for last = 0; last < len(list); last++ { + end = f.statementBoundary(list[last]) + if f.endsBasicSourceBlock(list[last]) { + extendToClosingBrace = false // Block is broken up now. 
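`setCounterStmt`, `incCounterStmt`, and `atomicCounterStmt` above only build AST nodes; what they render to is easiest to see by printing them. A sketch using the same go/ast machinery, where `GoCover` is the documented `-var` default and index 23 is just the example from the comments:

```go
package main

import (
	"fmt"
	"go/ast"
	"go/printer"
	"go/token"
	"os"
)

func main() {
	// The counter expression newCounter builds: GoCover.Count[23].
	counter := &ast.IndexExpr{
		X: &ast.SelectorExpr{
			X:   ast.NewIdent("GoCover"),
			Sel: ast.NewIdent("Count"),
		},
		Index: &ast.BasicLit{Kind: token.INT, Value: "23"},
	}
	stmts := []ast.Stmt{
		// -mode=set renders as: GoCover.Count[23] = 1
		&ast.AssignStmt{
			Lhs: []ast.Expr{counter},
			Tok: token.ASSIGN,
			Rhs: []ast.Expr{&ast.BasicLit{Kind: token.INT, Value: "1"}},
		},
		// -mode=count renders as: GoCover.Count[23]++
		&ast.IncDecStmt{X: counter, Tok: token.INC},
	}
	fset := token.NewFileSet()
	for _, s := range stmts {
		printer.Fprint(os.Stdout, fset, s)
		fmt.Println()
	}
}
```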
+ last++ + break + } + } + if extendToClosingBrace { + end = blockEnd + } + if pos != end { // Can have no source to cover if e.g. blocks abut. + newList = append(newList, f.newCounter(pos, end, last)) + } + newList = append(newList, list[0:last]...) + list = list[last:] + if len(list) == 0 { + break + } + pos = list[0].Pos() + } + return newList +} + +// hasFuncLiteral reports the existence and position of the first func literal +// in the node, if any. If a func literal appears, it usually marks the termination +// of a basic block because the function body is itself a block. +// Therefore we draw a line at the start of the body of the first function literal we find. +// TODO: what if there's more than one? Probably doesn't matter much. +func hasFuncLiteral(n ast.Node) (bool, token.Pos) { + if n == nil { + return false, 0 + } + var literal funcLitFinder + ast.Walk(&literal, n) + return literal.found(), token.Pos(literal) +} + +// statementBoundary finds the location in s that terminates the current basic +// block in the source. +func (f *File) statementBoundary(s ast.Stmt) token.Pos { + // Control flow statements are easy. + switch s := s.(type) { + case *ast.BlockStmt: + // Treat blocks like basic blocks to avoid overlapping counters. + return s.Lbrace + case *ast.IfStmt: + found, pos := hasFuncLiteral(s.Init) + if found { + return pos + } + found, pos = hasFuncLiteral(s.Cond) + if found { + return pos + } + return s.Body.Lbrace + case *ast.ForStmt: + found, pos := hasFuncLiteral(s.Init) + if found { + return pos + } + found, pos = hasFuncLiteral(s.Cond) + if found { + return pos + } + found, pos = hasFuncLiteral(s.Post) + if found { + return pos + } + return s.Body.Lbrace + case *ast.LabeledStmt: + return f.statementBoundary(s.Stmt) + case *ast.RangeStmt: + found, pos := hasFuncLiteral(s.X) + if found { + return pos + } + return s.Body.Lbrace + case *ast.SwitchStmt: + found, pos := hasFuncLiteral(s.Init) + if found { + return pos + } + found, pos = hasFuncLiteral(s.Tag) + if found { + return pos + } + return s.Body.Lbrace + case *ast.SelectStmt: + return s.Body.Lbrace + case *ast.TypeSwitchStmt: + found, pos := hasFuncLiteral(s.Init) + if found { + return pos + } + return s.Body.Lbrace + } + // If not a control flow statement, it is a declaration, expression, call, etc. and it may have a function literal. + // If it does, that's tricky because we want to exclude the body of the function from this block. + // Draw a line at the start of the body of the first function literal we find. + // TODO: what if there's more than one? Probably doesn't matter much. + found, pos := hasFuncLiteral(s) + if found { + return pos + } + return s.End() +} + +// endsBasicSourceBlock reports whether s changes the flow of control: break, if, etc., +// or if it's just problematic, for instance contains a function literal, which will complicate +// accounting due to the block-within-an expression. +func (f *File) endsBasicSourceBlock(s ast.Stmt) bool { + switch s := s.(type) { + case *ast.BlockStmt: + // Treat blocks like basic blocks to avoid overlapping counters. + return true + case *ast.BranchStmt: + return true + case *ast.ForStmt: + return true + case *ast.IfStmt: + return true + case *ast.LabeledStmt: + return f.endsBasicSourceBlock(s.Stmt) + case *ast.RangeStmt: + return true + case *ast.SwitchStmt: + return true + case *ast.SelectStmt: + return true + case *ast.TypeSwitchStmt: + return true + case *ast.ExprStmt: + // Calls to panic change the flow. 
+ // We really should verify that "panic" is the predefined function, + // but without type checking we can't and the likelihood of it being + // an actual problem is vanishingly small. + if call, ok := s.X.(*ast.CallExpr); ok { + if ident, ok := call.Fun.(*ast.Ident); ok && ident.Name == "panic" && len(call.Args) == 1 { + return true + } + } + } + found, _ := hasFuncLiteral(s) + return found +} + +// funcLitFinder implements the ast.Visitor pattern to find the location of any +// function literal in a subtree. +type funcLitFinder token.Pos + +func (f *funcLitFinder) Visit(node ast.Node) (w ast.Visitor) { + if f.found() { + return nil // Prune search. + } + switch n := node.(type) { + case *ast.FuncLit: + *f = funcLitFinder(n.Body.Lbrace) + return nil // Prune search. + } + return f +} + +func (f *funcLitFinder) found() bool { + return token.Pos(*f) != token.NoPos +} + +// Sort interface for []block1; used for self-check in addVariables. + +type block1 struct { + Block + index int +} + +type blockSlice []block1 + +func (b blockSlice) Len() int { return len(b) } +func (b blockSlice) Less(i, j int) bool { return b[i].startByte < b[j].startByte } +func (b blockSlice) Swap(i, j int) { b[i], b[j] = b[j], b[i] } + +// offset translates a token position into a 0-indexed byte offset. +func (f *File) offset(pos token.Pos) int { + return f.fset.Position(pos).Offset +} + +// addVariables adds to the end of the file the declarations to set up the counter and position variables. +func (f *File) addVariables(w io.Writer) { + // Self-check: Verify that the instrumented basic blocks are disjoint. + t := make([]block1, len(f.blocks)) + for i := range f.blocks { + t[i].Block = f.blocks[i] + t[i].index = i + } + sort.Sort(blockSlice(t)) + for i := 1; i < len(t); i++ { + if t[i-1].endByte > t[i].startByte { + fmt.Fprintf(os.Stderr, "cover: internal error: block %d overlaps block %d\n", t[i-1].index, t[i].index) + // Note: error message is in byte positions, not token positions. + fmt.Fprintf(os.Stderr, "\t%s:#%d,#%d %s:#%d,#%d\n", + f.name, f.offset(t[i-1].startByte), f.offset(t[i-1].endByte), + f.name, f.offset(t[i].startByte), f.offset(t[i].endByte)) + } + } + + // Declare the coverage struct as a package-level variable. + fmt.Fprintf(w, "\nvar %s = struct {\n", *varVar) + fmt.Fprintf(w, "\tCount [%d]uint32\n", len(f.blocks)) + fmt.Fprintf(w, "\tPos [3 * %d]uint32\n", len(f.blocks)) + fmt.Fprintf(w, "\tNumStmt [%d]uint16\n", len(f.blocks)) + fmt.Fprintf(w, "} {\n") + + // Initialize the position array field. + fmt.Fprintf(w, "\tPos: [3 * %d]uint32{\n", len(f.blocks)) + + // A nice long list of positions. Each position is encoded as follows to reduce size: + // - 32-bit starting line number + // - 32-bit ending line number + // - (16 bit ending column number << 16) | (16-bit starting column number). + for i, block := range f.blocks { + start := f.fset.Position(block.startByte) + end := f.fset.Position(block.endByte) + fmt.Fprintf(w, "\t\t%d, %d, %#x, // [%d]\n", start.Line, end.Line, (end.Column&0xFFFF)<<16|(start.Column&0xFFFF), i) + } + + // Close the position array. + fmt.Fprintf(w, "\t},\n") + + // Initialize the position array field. + fmt.Fprintf(w, "\tNumStmt: [%d]uint16{\n", len(f.blocks)) + + // A nice long list of statements-per-block, so we can give a conventional + // valuation of "percent covered". To save space, it's a 16-bit number, so we + // clamp it if it overflows - won't matter in practice. 
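The `Pos` array written above packs each block position into three uint32s: start line, end line, and a word holding the end column in the high 16 bits and the start column in the low 16 bits. A hypothetical decoder for one triple, for illustration:

```go
package main

import "fmt"

// decodePos reverses the encoding emitted above: start line, end line,
// then (end column << 16) | start column.
func decodePos(startLine, endLine, cols uint32) (sl, sc, el, ec int) {
	return int(startLine), int(cols & 0xFFFF), int(endLine), int(cols >> 16)
}

func main() {
	// Hypothetical block spanning 10:5 through 12:2.
	sl, sc, el, ec := decodePos(10, 12, 2<<16|5)
	fmt.Printf("%d:%d,%d:%d\n", sl, sc, el, ec) // 10:5,12:2
}
```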
+ for i, block := range f.blocks { + n := block.numStmt + if n > 1<<16-1 { + n = 1<<16 - 1 + } + fmt.Fprintf(w, "\t\t%d, // %d\n", n, i) + } + + // Close the statements-per-block array. + fmt.Fprintf(w, "\t},\n") + + // Close the struct initialization. + fmt.Fprintf(w, "}\n") +} diff --git a/vendor/golang.org/x/tools/cmd/cover/doc.go b/vendor/golang.org/x/tools/cmd/cover/doc.go new file mode 100644 index 000000000..b74d5b3ce --- /dev/null +++ b/vendor/golang.org/x/tools/cmd/cover/doc.go @@ -0,0 +1,27 @@ +// Copyright 2013 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +/* +Cover is a program for analyzing the coverage profiles generated by +'go test -coverprofile=cover.out'. + +Cover is also used by 'go test -cover' to rewrite the source code with +annotations to track which parts of each function are executed. +It operates on one Go source file at a time, computing approximate +basic block information by studying the source. It is thus more portable +than binary-rewriting coverage tools, but also a little less capable. +For instance, it does not probe inside && and || expressions, and can +be mildly confused by single statements with multiple function literals. + +For usage information, please see: + go help testflag + go tool cover -help + +No longer maintained: + +For Go releases 1.5 and later, this tool lives in the +standard repository. The code here is not maintained. + +*/ +package main // import "golang.org/x/tools/cmd/cover" diff --git a/vendor/golang.org/x/tools/cmd/cover/func.go b/vendor/golang.org/x/tools/cmd/cover/func.go new file mode 100644 index 000000000..41d9fceca --- /dev/null +++ b/vendor/golang.org/x/tools/cmd/cover/func.go @@ -0,0 +1,166 @@ +// Copyright 2013 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// This file implements the visitor that computes the (line, column)-(line-column) range for each function. + +package main + +import ( + "bufio" + "fmt" + "go/ast" + "go/build" + "go/parser" + "go/token" + "os" + "path/filepath" + "text/tabwriter" + + "golang.org/x/tools/cover" +) + +// funcOutput takes two file names as arguments, a coverage profile to read as input and an output +// file to write ("" means to write to standard output). The function reads the profile and produces +// as output the coverage data broken down by function, like this: +// +// fmt/format.go:30: init 100.0% +// fmt/format.go:57: clearflags 100.0% +// ... +// fmt/scan.go:1046: doScan 100.0% +// fmt/scan.go:1075: advance 96.2% +// fmt/scan.go:1119: doScanf 96.8% +// total: (statements) 91.9% + +func funcOutput(profile, outputFile string) error { + profiles, err := cover.ParseProfiles(profile) + if err != nil { + return err + } + + var out *bufio.Writer + if outputFile == "" { + out = bufio.NewWriter(os.Stdout) + } else { + fd, err := os.Create(outputFile) + if err != nil { + return err + } + defer fd.Close() + out = bufio.NewWriter(fd) + } + defer out.Flush() + + tabber := tabwriter.NewWriter(out, 1, 8, 1, '\t', 0) + defer tabber.Flush() + + var total, covered int64 + for _, profile := range profiles { + fn := profile.FileName + file, err := findFile(fn) + if err != nil { + return err + } + funcs, err := findFuncs(file) + if err != nil { + return err + } + // Now match up functions and profile blocks. 
+ for _, f := range funcs { + c, t := f.coverage(profile) + fmt.Fprintf(tabber, "%s:%d:\t%s\t%.1f%%\n", fn, f.startLine, f.name, 100.0*float64(c)/float64(t)) + total += t + covered += c + } + } + fmt.Fprintf(tabber, "total:\t(statements)\t%.1f%%\n", 100.0*float64(covered)/float64(total)) + + return nil +} + +// findFuncs parses the file and returns a slice of FuncExtent descriptors. +func findFuncs(name string) ([]*FuncExtent, error) { + fset := token.NewFileSet() + parsedFile, err := parser.ParseFile(fset, name, nil, 0) + if err != nil { + return nil, err + } + visitor := &FuncVisitor{ + fset: fset, + name: name, + astFile: parsedFile, + } + ast.Walk(visitor, visitor.astFile) + return visitor.funcs, nil +} + +// FuncExtent describes a function's extent in the source by file and position. +type FuncExtent struct { + name string + startLine int + startCol int + endLine int + endCol int +} + +// FuncVisitor implements the visitor that builds the function position list for a file. +type FuncVisitor struct { + fset *token.FileSet + name string // Name of file. + astFile *ast.File + funcs []*FuncExtent +} + +// Visit implements the ast.Visitor interface. +func (v *FuncVisitor) Visit(node ast.Node) ast.Visitor { + switch n := node.(type) { + case *ast.FuncDecl: + start := v.fset.Position(n.Pos()) + end := v.fset.Position(n.End()) + fe := &FuncExtent{ + name: n.Name.Name, + startLine: start.Line, + startCol: start.Column, + endLine: end.Line, + endCol: end.Column, + } + v.funcs = append(v.funcs, fe) + } + return v +} + +// coverage returns the fraction of the statements in the function that were covered, as a numerator and denominator. +func (f *FuncExtent) coverage(profile *cover.Profile) (num, den int64) { + // We could avoid making this n^2 overall by doing a single scan and annotating the functions, + // but the sizes of the data structures is never very large and the scan is almost instantaneous. + var covered, total int64 + // The blocks are sorted, so we can stop counting as soon as we reach the end of the relevant block. + for _, b := range profile.Blocks { + if b.StartLine > f.endLine || (b.StartLine == f.endLine && b.StartCol >= f.endCol) { + // Past the end of the function. + break + } + if b.EndLine < f.startLine || (b.EndLine == f.startLine && b.EndCol <= f.startCol) { + // Before the beginning of the function + continue + } + total += int64(b.NumStmt) + if b.Count > 0 { + covered += int64(b.NumStmt) + } + } + if total == 0 { + total = 1 // Avoid zero denominator. + } + return covered, total +} + +// findFile finds the location of the named file in GOROOT, GOPATH etc. +func findFile(file string) (string, error) { + dir, file := filepath.Split(file) + pkg, err := build.Import(dir, ".", build.FindOnly) + if err != nil { + return "", fmt.Errorf("can't find %q: %v", file, err) + } + return filepath.Join(pkg.Dir, file), nil +} diff --git a/vendor/golang.org/x/tools/cmd/cover/html.go b/vendor/golang.org/x/tools/cmd/cover/html.go new file mode 100644 index 000000000..ef50e2bfc --- /dev/null +++ b/vendor/golang.org/x/tools/cmd/cover/html.go @@ -0,0 +1,284 @@ +// Copyright 2013 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+
+package main
+
+import (
+	"bufio"
+	"bytes"
+	"fmt"
+	"html/template"
+	"io"
+	"io/ioutil"
+	"math"
+	"os"
+	"os/exec"
+	"path/filepath"
+	"runtime"
+
+	"golang.org/x/tools/cover"
+)
+
+// htmlOutput reads the profile data from profile and generates an HTML
+// coverage report, writing it to outfile. If outfile is empty,
+// it writes the report to a temporary file and opens it in a web browser.
+func htmlOutput(profile, outfile string) error {
+	profiles, err := cover.ParseProfiles(profile)
+	if err != nil {
+		return err
+	}
+
+	var d templateData
+
+	for _, profile := range profiles {
+		fn := profile.FileName
+		if profile.Mode == "set" {
+			d.Set = true
+		}
+		file, err := findFile(fn)
+		if err != nil {
+			return err
+		}
+		src, err := ioutil.ReadFile(file)
+		if err != nil {
+			return fmt.Errorf("can't read %q: %v", fn, err)
+		}
+		var buf bytes.Buffer
+		err = htmlGen(&buf, src, profile.Boundaries(src))
+		if err != nil {
+			return err
+		}
+		d.Files = append(d.Files, &templateFile{
+			Name:     fn,
+			Body:     template.HTML(buf.String()),
+			Coverage: percentCovered(profile),
+		})
+	}
+
+	var out *os.File
+	if outfile == "" {
+		var dir string
+		dir, err = ioutil.TempDir("", "cover")
+		if err != nil {
+			return err
+		}
+		out, err = os.Create(filepath.Join(dir, "coverage.html"))
+	} else {
+		out, err = os.Create(outfile)
+	}
+	if err != nil {
+		return err
+	}
+	err = htmlTemplate.Execute(out, d)
+	if err == nil {
+		err = out.Close()
+	}
+	if err != nil {
+		return err
+	}
+
+	if outfile == "" {
+		if !startBrowser("file://" + out.Name()) {
+			fmt.Fprintf(os.Stderr, "HTML output written to %s\n", out.Name())
+		}
+	}
+
+	return nil
+}
+
+// percentCovered returns, as a percentage, the fraction of the statements in
+// the profile covered by the test run.
+// In effect, it reports the coverage of a given source file.
+func percentCovered(p *cover.Profile) float64 {
+	var total, covered int64
+	for _, b := range p.Blocks {
+		total += int64(b.NumStmt)
+		if b.Count > 0 {
+			covered += int64(b.NumStmt)
+		}
+	}
+	if total == 0 {
+		return 0
+	}
+	return float64(covered) / float64(total) * 100
+}
+
+// htmlGen generates an HTML coverage report with the provided filename,
+// source code, and tokens, and writes it to the given Writer.
+func htmlGen(w io.Writer, src []byte, boundaries []cover.Boundary) error {
+	dst := bufio.NewWriter(w)
+	for i := range src {
+		for len(boundaries) > 0 && boundaries[0].Offset == i {
+			b := boundaries[0]
+			if b.Start {
+				n := 0
+				if b.Count > 0 {
+					n = int(math.Floor(b.Norm*9)) + 1
+				}
+				fmt.Fprintf(dst, `<span class="cov%v" title="%v">`, n, b.Count)
+			} else {
+				dst.WriteString("</span>")
+			}
+			boundaries = boundaries[1:]
+		}
+		switch b := src[i]; b {
+		case '>':
+			dst.WriteString("&gt;")
+		case '<':
+			dst.WriteString("&lt;")
+		case '&':
+			dst.WriteString("&amp;")
+		case '\t':
+			dst.WriteString("        ")
+		default:
+			dst.WriteByte(b)
+		}
+	}
+	return dst.Flush()
+}
+
+// startBrowser tries to open the URL in a browser
+// and reports whether it succeeds.
+func startBrowser(url string) bool {
+	// try to start the browser
+	var args []string
+	switch runtime.GOOS {
+	case "darwin":
+		args = []string{"open"}
+	case "windows":
+		args = []string{"cmd", "/c", "start"}
+	default:
+		args = []string{"xdg-open"}
+	}
+	cmd := exec.Command(args[0], append(args[1:], url)...)
+	return cmd.Start() == nil
+}
+
+// rgb returns an rgb value for the specified coverage value
+// between 0 (no coverage) and 10 (max coverage).
+func rgb(n int) string {
+	if n == 0 {
+		return "rgb(192, 0, 0)" // Red
+	}
+	// Gradient from gray to green.
+	r := 128 - 12*(n-1)
+	g := 128 + 12*(n-1)
+	b := 128 + 3*(n-1)
+	return fmt.Sprintf("rgb(%v, %v, %v)", r, g, b)
+}
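To see what ultimately reaches the template, here is a small standalone sketch that duplicates the `rgb` helper above and prints a few of the generated CSS rules:

```go
package main

import "fmt"

// rgb duplicates the gradient helper above for a runnable demo.
func rgb(n int) string {
	if n == 0 {
		return "rgb(192, 0, 0)"
	}
	r := 128 - 12*(n-1)
	g := 128 + 12*(n-1)
	b := 128 + 3*(n-1)
	return fmt.Sprintf("rgb(%v, %v, %v)", r, g, b)
}

func main() {
	for _, n := range []int{0, 1, 10} {
		fmt.Printf(".cov%v { color: %v }\n", n, rgb(n))
	}
	// .cov0 { color: rgb(192, 0, 0) }     red: uncovered
	// .cov1 { color: rgb(128, 128, 128) } gray: lowest counts
	// .cov10 { color: rgb(20, 236, 155) } green: hottest
}
```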
+
+// colors generates the CSS rules for coverage colors.
+func colors() template.CSS {
+	var buf bytes.Buffer
+	for i := 0; i < 11; i++ {
+		fmt.Fprintf(&buf, ".cov%v { color: %v }\n", i, rgb(i))
+	}
+	return template.CSS(buf.String())
+}
+
+var htmlTemplate = template.Must(template.New("html").Funcs(template.FuncMap{
+	"colors": colors,
+}).Parse(tmplHTML))
+
+type templateData struct {
+	Files []*templateFile
+	Set   bool
+}
+
+type templateFile struct {
+	Name     string
+	Body     template.HTML
+	Coverage float64
+}
+
+const tmplHTML = `
+<!DOCTYPE html>
+<html>
+	<head>
+		<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
+		<style>
+			body {
+				background: black;
+				color: rgb(80, 80, 80);
+			}
+			body, pre, #legend span {
+				font-family: Menlo, monospace;
+				font-weight: bold;
+			}
+			#topbar {
+				background: black;
+				position: fixed;
+				top: 0; left: 0; right: 0;
+				height: 42px;
+				border-bottom: 1px solid rgb(80, 80, 80);
+			}
+			#content {
+				margin-top: 50px;
+			}
+			#nav, #legend {
+				float: left;
+				margin-left: 10px;
+			}
+			#legend {
+				margin-top: 12px;
+			}
+			#nav {
+				margin-top: 10px;
+			}
+			#legend span {
+				margin: 0 5px;
+			}
+			{{colors}}
+		</style>
+	</head>
+	<body>
+		<div id="topbar">
+			<div id="nav">
+				<select id="files">
+				{{range $i, $f := .Files}}
+				<option value="file{{$i}}">{{$f.Name}} ({{printf "%.1f" $f.Coverage}}%)</option>
+				{{end}}
+				</select>
+			</div>
+			<div id="legend">
+				<span>not tracked</span>
+			{{if .Set}}
+				<span class="cov0">not covered</span>
+				<span class="cov8">covered</span>
+			{{else}}
+				<span class="cov0">no coverage</span>
+				<span class="cov1">low coverage</span>
+				<span class="cov2">*</span>
+				<span class="cov3">*</span>
+				<span class="cov4">*</span>
+				<span class="cov5">*</span>
+				<span class="cov6">*</span>
+				<span class="cov7">*</span>
+				<span class="cov8">*</span>
+				<span class="cov9">*</span>
+				<span class="cov10">high coverage</span>
+			{{end}}
+			</div>
+		</div>
+		<div id="content">
+		{{range $i, $f := .Files}}
+		<pre class="file" id="file{{$i}}" {{if $i}}style="display: none"{{end}}>{{$f.Body}}</pre>
+		{{end}}
+		</div>
+	</body>
+	<script>
+	(function() {
+		var files = document.getElementById('files');
+		var visible = document.getElementById('file0');
+		files.addEventListener('change', onChange, false);
+		function onChange() {
+			visible.style.display = 'none';
+			visible = document.getElementById(files.value);
+			visible.style.display = 'block';
+			window.scrollTo(0, 0);
+		}
+	})();
+	</script>
+</html>
+ + + +` diff --git a/vendor/golang.org/x/tools/cmd/stringer/stringer.go b/vendor/golang.org/x/tools/cmd/stringer/stringer.go new file mode 100644 index 000000000..4b7049a35 --- /dev/null +++ b/vendor/golang.org/x/tools/cmd/stringer/stringer.go @@ -0,0 +1,643 @@ +// Copyright 2014 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Stringer is a tool to automate the creation of methods that satisfy the fmt.Stringer +// interface. Given the name of a (signed or unsigned) integer type T that has constants +// defined, stringer will create a new self-contained Go source file implementing +// func (t T) String() string +// The file is created in the same package and directory as the package that defines T. +// It has helpful defaults designed for use with go generate. +// +// Stringer works best with constants that are consecutive values such as created using iota, +// but creates good code regardless. In the future it might also provide custom support for +// constant sets that are bit patterns. +// +// For example, given this snippet, +// +// package painkiller +// +// type Pill int +// +// const ( +// Placebo Pill = iota +// Aspirin +// Ibuprofen +// Paracetamol +// Acetaminophen = Paracetamol +// ) +// +// running this command +// +// stringer -type=Pill +// +// in the same directory will create the file pill_string.go, in package painkiller, +// containing a definition of +// +// func (Pill) String() string +// +// That method will translate the value of a Pill constant to the string representation +// of the respective constant name, so that the call fmt.Print(painkiller.Aspirin) will +// print the string "Aspirin". +// +// Typically this process would be run using go generate, like this: +// +// //go:generate stringer -type=Pill +// +// If multiple constants have the same value, the lexically first matching name will +// be used (in the example, Acetaminophen will print as "Paracetamol"). +// +// With no arguments, it processes the package in the current directory. +// Otherwise, the arguments must name a single directory holding a Go package +// or a set of Go source files that represent a single Go package. +// +// The -type flag accepts a comma-separated list of types so a single run can +// generate methods for multiple types. The default output file is t_string.go, +// where t is the lower-cased name of the first type listed. It can be overridden +// with the -output flag. +// +package main // import "golang.org/x/tools/cmd/stringer" + +import ( + "bytes" + "flag" + "fmt" + "go/ast" + "go/constant" + "go/format" + "go/token" + "go/types" + "io/ioutil" + "log" + "os" + "path/filepath" + "sort" + "strings" + + "golang.org/x/tools/go/packages" +) + +var ( + typeNames = flag.String("type", "", "comma-separated list of type names; must be set") + output = flag.String("output", "", "output file name; default srcdir/_string.go") + trimprefix = flag.String("trimprefix", "", "trim the `prefix` from the generated constant names") + linecomment = flag.Bool("linecomment", false, "use line comment text as printed text when present") + buildTags = flag.String("tags", "", "comma-separated list of build tags to apply") +) + +// Usage is a replacement usage function for the flags package. +func Usage() { + fmt.Fprintf(os.Stderr, "Usage of stringer:\n") + fmt.Fprintf(os.Stderr, "\tstringer [flags] -type T [directory]\n") + fmt.Fprintf(os.Stderr, "\tstringer [flags] -type T files... 
# Must be a single package\n") + fmt.Fprintf(os.Stderr, "For more information, see:\n") + fmt.Fprintf(os.Stderr, "\thttp://godoc.org/golang.org/x/tools/cmd/stringer\n") + fmt.Fprintf(os.Stderr, "Flags:\n") + flag.PrintDefaults() +} + +func main() { + log.SetFlags(0) + log.SetPrefix("stringer: ") + flag.Usage = Usage + flag.Parse() + if len(*typeNames) == 0 { + flag.Usage() + os.Exit(2) + } + types := strings.Split(*typeNames, ",") + var tags []string + if len(*buildTags) > 0 { + tags = strings.Split(*buildTags, ",") + } + + // We accept either one directory or a list of files. Which do we have? + args := flag.Args() + if len(args) == 0 { + // Default: process whole package in current directory. + args = []string{"."} + } + + // Parse the package once. + var dir string + g := Generator{ + trimPrefix: *trimprefix, + lineComment: *linecomment, + } + // TODO(suzmue): accept other patterns for packages (directories, list of files, import paths, etc). + if len(args) == 1 && isDirectory(args[0]) { + dir = args[0] + } else { + if len(tags) != 0 { + log.Fatal("-tags option applies only to directories, not when files are specified") + } + dir = filepath.Dir(args[0]) + } + + g.parsePackage(args, tags) + + // Print the header and package clause. + g.Printf("// Code generated by \"stringer %s\"; DO NOT EDIT.\n", strings.Join(os.Args[1:], " ")) + g.Printf("\n") + g.Printf("package %s", g.pkg.name) + g.Printf("\n") + g.Printf("import \"strconv\"\n") // Used by all methods. + + // Run generate for each type. + for _, typeName := range types { + g.generate(typeName) + } + + // Format the output. + src := g.format() + + // Write to file. + outputName := *output + if outputName == "" { + baseName := fmt.Sprintf("%s_string.go", types[0]) + outputName = filepath.Join(dir, strings.ToLower(baseName)) + } + err := ioutil.WriteFile(outputName, src, 0644) + if err != nil { + log.Fatalf("writing output: %s", err) + } +} + +// isDirectory reports whether the named file is a directory. +func isDirectory(name string) bool { + info, err := os.Stat(name) + if err != nil { + log.Fatal(err) + } + return info.IsDir() +} + +// Generator holds the state of the analysis. Primarily used to buffer +// the output for format.Source. +type Generator struct { + buf bytes.Buffer // Accumulated output. + pkg *Package // Package we are scanning. + + trimPrefix string + lineComment bool +} + +func (g *Generator) Printf(format string, args ...interface{}) { + fmt.Fprintf(&g.buf, format, args...) +} + +// File holds a single parsed file and associated data. +type File struct { + pkg *Package // Package to which this file belongs. + file *ast.File // Parsed AST. + // These fields are reset for each type being generated. + typeName string // Name of the constant type. + values []Value // Accumulator for constant values of that type. + + trimPrefix string + lineComment bool +} + +type Package struct { + name string + defs map[*ast.Ident]types.Object + files []*File +} + +// parsePackage analyzes the single package constructed from the patterns and tags. +// parsePackage exits if there is an error. +func (g *Generator) parsePackage(patterns []string, tags []string) { + cfg := &packages.Config{ + Mode: packages.LoadSyntax, + // TODO: Need to think about constants in test files. Maybe write type_string_test.go + // in a separate pass? For later. + Tests: false, + BuildFlags: []string{fmt.Sprintf("-tags=%s", strings.Join(tags, " "))}, + } + pkgs, err := packages.Load(cfg, patterns...) 
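`parsePackage` leans on `golang.org/x/tools/go/packages` to do the loading and type-checking. A minimal standalone sketch of the same `Load` call outside the generator, assuming it is run from a directory containing exactly one package:

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/tools/go/packages"
)

func main() {
	// Same mode parsePackage uses: syntax plus type information.
	cfg := &packages.Config{Mode: packages.LoadSyntax, Tests: false}
	pkgs, err := packages.Load(cfg, ".")
	if err != nil {
		log.Fatal(err)
	}
	if len(pkgs) != 1 {
		log.Fatalf("error: %d packages found", len(pkgs))
	}
	fmt.Println(pkgs[0].Name, len(pkgs[0].Syntax), "syntax files")
}
```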
+ if err != nil { + log.Fatal(err) + } + if len(pkgs) != 1 { + log.Fatalf("error: %d packages found", len(pkgs)) + } + g.addPackage(pkgs[0]) +} + +// addPackage adds a type checked Package and its syntax files to the generator. +func (g *Generator) addPackage(pkg *packages.Package) { + g.pkg = &Package{ + name: pkg.Name, + defs: pkg.TypesInfo.Defs, + files: make([]*File, len(pkg.Syntax)), + } + + for i, file := range pkg.Syntax { + g.pkg.files[i] = &File{ + file: file, + pkg: g.pkg, + trimPrefix: g.trimPrefix, + lineComment: g.lineComment, + } + } +} + +// generate produces the String method for the named type. +func (g *Generator) generate(typeName string) { + values := make([]Value, 0, 100) + for _, file := range g.pkg.files { + // Set the state for this run of the walker. + file.typeName = typeName + file.values = nil + if file.file != nil { + ast.Inspect(file.file, file.genDecl) + values = append(values, file.values...) + } + } + + if len(values) == 0 { + log.Fatalf("no values defined for type %s", typeName) + } + // Generate code that will fail if the constants change value. + g.Printf("func _() {\n") + g.Printf("\t// An \"invalid array index\" compiler error signifies that the constant values have changed.\n") + g.Printf("\t// Re-run the stringer command to generate them again.\n") + g.Printf("\tvar x [1]struct{}\n") + for _, v := range values { + g.Printf("\t_ = x[%s - %s]\n", v.originalName, v.str) + } + g.Printf("}\n") + runs := splitIntoRuns(values) + // The decision of which pattern to use depends on the number of + // runs in the numbers. If there's only one, it's easy. For more than + // one, there's a tradeoff between complexity and size of the data + // and code vs. the simplicity of a map. A map takes more space, + // but so does the code. The decision here (crossover at 10) is + // arbitrary, but considers that for large numbers of runs the cost + // of the linear scan in the switch might become important, and + // rather than use yet another algorithm such as binary search, + // we punt and use a map. In any case, the likelihood of a map + // being necessary for any realistic example other than bitmasks + // is very low. And bitmasks probably deserve their own analysis, + // to be done some other day. + switch { + case len(runs) == 1: + g.buildOneRun(runs, typeName) + case len(runs) <= 10: + g.buildMultipleRuns(runs, typeName) + default: + g.buildMap(runs, typeName) + } +} + +// splitIntoRuns breaks the values into runs of contiguous sequences. +// For example, given 1,2,3,5,6,7 it returns {1,2,3},{5,6,7}. +// The input slice is known to be non-empty. +func splitIntoRuns(values []Value) [][]Value { + // We use stable sort so the lexically first name is chosen for equal elements. + sort.Stable(byValue(values)) + // Remove duplicates. Stable sort has put the one we want to print first, + // so use that one. The String method won't care about which named constant + // was the argument, so the first name for the given value is the only one to keep. + // We need to do this because identical values would cause the switch or map + // to fail to compile. + j := 1 + for i := 1; i < len(values); i++ { + if values[i].value != values[i-1].value { + values[j] = values[i] + j++ + } + } + values = values[:j] + runs := make([][]Value, 0, 10) + for len(values) > 0 { + // One contiguous sequence per outer loop. 
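The run-splitting idea is easiest to see on plain ints. A small sketch mirroring the loop below, with a hypothetical helper name:

```go
package main

import "fmt"

// splitRuns mirrors splitIntoRuns on already-sorted, de-duplicated ints.
func splitRuns(values []int) [][]int {
	var runs [][]int
	for len(values) > 0 {
		// One contiguous sequence per outer loop.
		i := 1
		for i < len(values) && values[i] == values[i-1]+1 {
			i++
		}
		runs = append(runs, values[:i])
		values = values[i:]
	}
	return runs
}

func main() {
	fmt.Println(splitRuns([]int{1, 2, 3, 5, 6, 7})) // [[1 2 3] [5 6 7]]
}
```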
+ i := 1 + for i < len(values) && values[i].value == values[i-1].value+1 { + i++ + } + runs = append(runs, values[:i]) + values = values[i:] + } + return runs +} + +// format returns the gofmt-ed contents of the Generator's buffer. +func (g *Generator) format() []byte { + src, err := format.Source(g.buf.Bytes()) + if err != nil { + // Should never happen, but can arise when developing this code. + // The user can compile the output to see the error. + log.Printf("warning: internal error: invalid Go generated: %s", err) + log.Printf("warning: compile the package to analyze the error") + return g.buf.Bytes() + } + return src +} + +// Value represents a declared constant. +type Value struct { + originalName string // The name of the constant. + name string // The name with trimmed prefix. + // The value is stored as a bit pattern alone. The boolean tells us + // whether to interpret it as an int64 or a uint64; the only place + // this matters is when sorting. + // Much of the time the str field is all we need; it is printed + // by Value.String. + value uint64 // Will be converted to int64 when needed. + signed bool // Whether the constant is a signed type. + str string // The string representation given by the "go/constant" package. +} + +func (v *Value) String() string { + return v.str +} + +// byValue lets us sort the constants into increasing order. +// We take care in the Less method to sort in signed or unsigned order, +// as appropriate. +type byValue []Value + +func (b byValue) Len() int { return len(b) } +func (b byValue) Swap(i, j int) { b[i], b[j] = b[j], b[i] } +func (b byValue) Less(i, j int) bool { + if b[i].signed { + return int64(b[i].value) < int64(b[j].value) + } + return b[i].value < b[j].value +} + +// genDecl processes one declaration clause. +func (f *File) genDecl(node ast.Node) bool { + decl, ok := node.(*ast.GenDecl) + if !ok || decl.Tok != token.CONST { + // We only care about const declarations. + return true + } + // The name of the type of the constants we are declaring. + // Can change if this is a multi-element declaration. + typ := "" + // Loop over the elements of the declaration. Each element is a ValueSpec: + // a list of names possibly followed by a type, possibly followed by values. + // If the type and value are both missing, we carry down the type (and value, + // but the "go/types" package takes care of that). + for _, spec := range decl.Specs { + vspec := spec.(*ast.ValueSpec) // Guaranteed to succeed as this is CONST. + if vspec.Type == nil && len(vspec.Values) > 0 { + // "X = 1". With no type but a value. If the constant is untyped, + // skip this vspec and reset the remembered type. + typ = "" + + // If this is a simple type conversion, remember the type. + // We don't mind if this is actually a call; a qualified call won't + // be matched (that will be SelectorExpr, not Ident), and only unusual + // situations will result in a function call that appears to be + // a type conversion. + ce, ok := vspec.Values[0].(*ast.CallExpr) + if !ok { + continue + } + id, ok := ce.Fun.(*ast.Ident) + if !ok { + continue + } + typ = id.Name + } + if vspec.Type != nil { + // "X T". We have a type. Remember it. + ident, ok := vspec.Type.(*ast.Ident) + if !ok { + continue + } + typ = ident.Name + } + if typ != f.typeName { + // This is not the type we're looking for. + continue + } + // We now have a list of names (from one line of source code) all being + // declared with the desired type. + // Grab their names and actual values and store them in f.values. 
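A quick sketch of why `byValue.Less` must branch on signedness: the same stored bit pattern orders differently when interpreted as `int64` versus `uint64`:

```go
package main

import "fmt"

func main() {
	x := ^uint64(0) // all 64 bits set, the raw pattern stored in Value.value
	fmt.Println(int64(x) < 0) // true: as a signed constant (-1) it sorts first
	fmt.Println(x > 1)        // true: as an unsigned constant it sorts last
}
```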
+ for _, name := range vspec.Names { + if name.Name == "_" { + continue + } + // This dance lets the type checker find the values for us. It's a + // bit tricky: look up the object declared by the name, find its + // types.Const, and extract its value. + obj, ok := f.pkg.defs[name] + if !ok { + log.Fatalf("no value for constant %s", name) + } + info := obj.Type().Underlying().(*types.Basic).Info() + if info&types.IsInteger == 0 { + log.Fatalf("can't handle non-integer constant type %s", typ) + } + value := obj.(*types.Const).Val() // Guaranteed to succeed as this is CONST. + if value.Kind() != constant.Int { + log.Fatalf("can't happen: constant is not an integer %s", name) + } + i64, isInt := constant.Int64Val(value) + u64, isUint := constant.Uint64Val(value) + if !isInt && !isUint { + log.Fatalf("internal error: value of %s is not an integer: %s", name, value.String()) + } + if !isInt { + u64 = uint64(i64) + } + v := Value{ + originalName: name.Name, + value: u64, + signed: info&types.IsUnsigned == 0, + str: value.String(), + } + if c := vspec.Comment; f.lineComment && c != nil && len(c.List) == 1 { + v.name = strings.TrimSpace(c.Text()) + } else { + v.name = strings.TrimPrefix(v.originalName, f.trimPrefix) + } + f.values = append(f.values, v) + } + } + return false +} + +// Helpers + +// usize returns the number of bits of the smallest unsigned integer +// type that will hold n. Used to create the smallest possible slice of +// integers to use as indexes into the concatenated strings. +func usize(n int) int { + switch { + case n < 1<<8: + return 8 + case n < 1<<16: + return 16 + default: + // 2^32 is enough constants for anyone. + return 32 + } +} + +// declareIndexAndNameVars declares the index slices and concatenated names +// strings representing the runs of values. +func (g *Generator) declareIndexAndNameVars(runs [][]Value, typeName string) { + var indexes, names []string + for i, run := range runs { + index, name := g.createIndexAndNameDecl(run, typeName, fmt.Sprintf("_%d", i)) + if len(run) != 1 { + indexes = append(indexes, index) + } + names = append(names, name) + } + g.Printf("const (\n") + for _, name := range names { + g.Printf("\t%s\n", name) + } + g.Printf(")\n\n") + + if len(indexes) > 0 { + g.Printf("var (") + for _, index := range indexes { + g.Printf("\t%s\n", index) + } + g.Printf(")\n\n") + } +} + +// declareIndexAndNameVar is the single-run version of declareIndexAndNameVars +func (g *Generator) declareIndexAndNameVar(run []Value, typeName string) { + index, name := g.createIndexAndNameDecl(run, typeName, "") + g.Printf("const %s\n", name) + g.Printf("var %s\n", index) +} + +// createIndexAndNameDecl returns the pair of declarations for the run. The caller will add "const" and "var". +func (g *Generator) createIndexAndNameDecl(run []Value, typeName string, suffix string) (string, string) { + b := new(bytes.Buffer) + indexes := make([]int, len(run)) + for i := range run { + b.WriteString(run[i].name) + indexes[i] = b.Len() + } + nameConst := fmt.Sprintf("_%s_name%s = %q", typeName, suffix, b.String()) + nameLen := b.Len() + b.Reset() + fmt.Fprintf(b, "_%s_index%s = [...]uint%d{0, ", typeName, suffix, usize(nameLen)) + for i, v := range indexes { + if i > 0 { + fmt.Fprintf(b, ", ") + } + fmt.Fprintf(b, "%d", v) + } + fmt.Fprintf(b, "}") + return b.String(), nameConst +} + +// declareNameVars declares the concatenated names string representing all the values in the runs. 
+func (g *Generator) declareNameVars(runs [][]Value, typeName string, suffix string) { + g.Printf("const _%s_name%s = \"", typeName, suffix) + for _, run := range runs { + for i := range run { + g.Printf("%s", run[i].name) + } + } + g.Printf("\"\n") +} + +// buildOneRun generates the variables and String method for a single run of contiguous values. +func (g *Generator) buildOneRun(runs [][]Value, typeName string) { + values := runs[0] + g.Printf("\n") + g.declareIndexAndNameVar(values, typeName) + // The generated code is simple enough to write as a Printf format. + lessThanZero := "" + if values[0].signed { + lessThanZero = "i < 0 || " + } + if values[0].value == 0 { // Signed or unsigned, 0 is still 0. + g.Printf(stringOneRun, typeName, usize(len(values)), lessThanZero) + } else { + g.Printf(stringOneRunWithOffset, typeName, values[0].String(), usize(len(values)), lessThanZero) + } +} + +// Arguments to format are: +// [1]: type name +// [2]: size of index element (8 for uint8 etc.) +// [3]: less than zero check (for signed types) +const stringOneRun = `func (i %[1]s) String() string { + if %[3]si >= %[1]s(len(_%[1]s_index)-1) { + return "%[1]s(" + strconv.FormatInt(int64(i), 10) + ")" + } + return _%[1]s_name[_%[1]s_index[i]:_%[1]s_index[i+1]] +} +` + +// Arguments to format are: +// [1]: type name +// [2]: lowest defined value for type, as a string +// [3]: size of index element (8 for uint8 etc.) +// [4]: less than zero check (for signed types) +/* + */ +const stringOneRunWithOffset = `func (i %[1]s) String() string { + i -= %[2]s + if %[4]si >= %[1]s(len(_%[1]s_index)-1) { + return "%[1]s(" + strconv.FormatInt(int64(i + %[2]s), 10) + ")" + } + return _%[1]s_name[_%[1]s_index[i] : _%[1]s_index[i+1]] +} +` + +// buildMultipleRuns generates the variables and String method for multiple runs of contiguous values. +// For this pattern, a single Printf format won't do. +func (g *Generator) buildMultipleRuns(runs [][]Value, typeName string) { + g.Printf("\n") + g.declareIndexAndNameVars(runs, typeName) + g.Printf("func (i %s) String() string {\n", typeName) + g.Printf("\tswitch {\n") + for i, values := range runs { + if len(values) == 1 { + g.Printf("\tcase i == %s:\n", &values[0]) + g.Printf("\t\treturn _%s_name_%d\n", typeName, i) + continue + } + g.Printf("\tcase %s <= i && i <= %s:\n", &values[0], &values[len(values)-1]) + if values[0].value != 0 { + g.Printf("\t\ti -= %s\n", &values[0]) + } + g.Printf("\t\treturn _%s_name_%d[_%s_index_%d[i]:_%s_index_%d[i+1]]\n", + typeName, i, typeName, i, typeName, i) + } + g.Printf("\tdefault:\n") + g.Printf("\t\treturn \"%s(\" + strconv.FormatInt(int64(i), 10) + \")\"\n", typeName) + g.Printf("\t}\n") + g.Printf("}\n") +} + +// buildMap handles the case where the space is so sparse a map is a reasonable fallback. +// It's a rare situation but has simple code. +func (g *Generator) buildMap(runs [][]Value, typeName string) { + g.Printf("\n") + g.declareNameVars(runs, typeName, "") + g.Printf("\nvar _%s_map = map[%s]string{\n", typeName, typeName) + n := 0 + for _, values := range runs { + for _, value := range values { + g.Printf("\t%s: _%s_name[%d:%d],\n", &value, typeName, n, n+len(value.name)) + n += len(value.name) + } + } + g.Printf("}\n\n") + g.Printf(stringMap, typeName) +} + +// Argument to format is the type name. 
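For the `Pill` example from the package comment, the single-run `stringOneRun` template above expands to roughly the following (offsets hand-computed; a sketch, not verbatim stringer output):

```go
package main

import (
	"fmt"
	"strconv"
)

type Pill int

const (
	Placebo Pill = iota
	Aspirin
	Ibuprofen
	Paracetamol
)

// Concatenated names plus an index slice marking each name's end.
const _Pill_name = "PlaceboAspirinIbuprofenParacetamol"

var _Pill_index = [...]uint8{0, 7, 14, 23, 34}

func (i Pill) String() string {
	if i < 0 || i >= Pill(len(_Pill_index)-1) {
		return "Pill(" + strconv.FormatInt(int64(i), 10) + ")"
	}
	return _Pill_name[_Pill_index[i]:_Pill_index[i+1]]
}

func main() {
	fmt.Println(Aspirin, Pill(42)) // Aspirin Pill(42)
}
```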
+const stringMap = `func (i %[1]s) String() string { + if str, ok := _%[1]s_map[i]; ok { + return str + } + return "%[1]s(" + strconv.FormatInt(int64(i), 10) + ")" +} +` diff --git a/vendor/golang.org/x/tools/cover/profile.go b/vendor/golang.org/x/tools/cover/profile.go new file mode 100644 index 000000000..b6c8120a5 --- /dev/null +++ b/vendor/golang.org/x/tools/cover/profile.go @@ -0,0 +1,213 @@ +// Copyright 2013 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package cover provides support for parsing coverage profiles +// generated by "go test -coverprofile=cover.out". +package cover // import "golang.org/x/tools/cover" + +import ( + "bufio" + "fmt" + "math" + "os" + "regexp" + "sort" + "strconv" + "strings" +) + +// Profile represents the profiling data for a specific file. +type Profile struct { + FileName string + Mode string + Blocks []ProfileBlock +} + +// ProfileBlock represents a single block of profiling data. +type ProfileBlock struct { + StartLine, StartCol int + EndLine, EndCol int + NumStmt, Count int +} + +type byFileName []*Profile + +func (p byFileName) Len() int { return len(p) } +func (p byFileName) Less(i, j int) bool { return p[i].FileName < p[j].FileName } +func (p byFileName) Swap(i, j int) { p[i], p[j] = p[j], p[i] } + +// ParseProfiles parses profile data in the specified file and returns a +// Profile for each source file described therein. +func ParseProfiles(fileName string) ([]*Profile, error) { + pf, err := os.Open(fileName) + if err != nil { + return nil, err + } + defer pf.Close() + + files := make(map[string]*Profile) + buf := bufio.NewReader(pf) + // First line is "mode: foo", where foo is "set", "count", or "atomic". + // Rest of file is in the format + // encoding/base64/base64.go:34.44,37.40 3 1 + // where the fields are: name.go:line.column,line.column numberOfStatements count + s := bufio.NewScanner(buf) + mode := "" + for s.Scan() { + line := s.Text() + if mode == "" { + const p = "mode: " + if !strings.HasPrefix(line, p) || line == p { + return nil, fmt.Errorf("bad mode line: %v", line) + } + mode = line[len(p):] + continue + } + m := lineRe.FindStringSubmatch(line) + if m == nil { + return nil, fmt.Errorf("line %q doesn't match expected format: %v", line, lineRe) + } + fn := m[1] + p := files[fn] + if p == nil { + p = &Profile{ + FileName: fn, + Mode: mode, + } + files[fn] = p + } + p.Blocks = append(p.Blocks, ProfileBlock{ + StartLine: toInt(m[2]), + StartCol: toInt(m[3]), + EndLine: toInt(m[4]), + EndCol: toInt(m[5]), + NumStmt: toInt(m[6]), + Count: toInt(m[7]), + }) + } + if err := s.Err(); err != nil { + return nil, err + } + for _, p := range files { + sort.Sort(blocksByStart(p.Blocks)) + // Merge samples from the same location. + j := 1 + for i := 1; i < len(p.Blocks); i++ { + b := p.Blocks[i] + last := p.Blocks[j-1] + if b.StartLine == last.StartLine && + b.StartCol == last.StartCol && + b.EndLine == last.EndLine && + b.EndCol == last.EndCol { + if b.NumStmt != last.NumStmt { + return nil, fmt.Errorf("inconsistent NumStmt: changed from %d to %d", last.NumStmt, b.NumStmt) + } + if mode == "set" { + p.Blocks[j-1].Count |= b.Count + } else { + p.Blocks[j-1].Count += b.Count + } + continue + } + p.Blocks[j] = b + j++ + } + p.Blocks = p.Blocks[:j] + } + // Generate a sorted slice. 
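A minimal sketch of consuming a profile with this package's `ParseProfiles`, assuming a `cover.out` produced by `go test -coverprofile=cover.out`:

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/tools/cover"
)

func main() {
	// cover.out looks like:
	//   mode: set
	//   example.com/pkg/file.go:34.44,37.40 3 1
	profiles, err := cover.ParseProfiles("cover.out")
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range profiles {
		for _, b := range p.Blocks {
			fmt.Printf("%s:%d.%d,%d.%d stmts=%d count=%d\n",
				p.FileName, b.StartLine, b.StartCol, b.EndLine, b.EndCol, b.NumStmt, b.Count)
		}
	}
}
```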
+	profiles := make([]*Profile, 0, len(files))
+	for _, profile := range files {
+		profiles = append(profiles, profile)
+	}
+	sort.Sort(byFileName(profiles))
+	return profiles, nil
+}
+
+type blocksByStart []ProfileBlock
+
+func (b blocksByStart) Len() int      { return len(b) }
+func (b blocksByStart) Swap(i, j int) { b[i], b[j] = b[j], b[i] }
+func (b blocksByStart) Less(i, j int) bool {
+	bi, bj := b[i], b[j]
+	return bi.StartLine < bj.StartLine || bi.StartLine == bj.StartLine && bi.StartCol < bj.StartCol
+}
+
+var lineRe = regexp.MustCompile(`^(.+):([0-9]+).([0-9]+),([0-9]+).([0-9]+) ([0-9]+) ([0-9]+)$`)
+
+func toInt(s string) int {
+	i, err := strconv.Atoi(s)
+	if err != nil {
+		panic(err)
+	}
+	return i
+}
+
+// Boundary represents the position in a source file of the beginning or end of a
+// block as reported by the coverage profile. In HTML mode, it will correspond to
+// the opening or closing of a <span> tag and will be used to colorize the source.
+type Boundary struct {
+	Offset int     // Location as a byte offset in the source file.
+	Start  bool    // Is this the start of a block?
+	Count  int     // Event count from the cover profile.
+	Norm   float64 // Count normalized to [0..1].
+}
+
+// Boundaries returns a Profile as a set of Boundary objects within the provided src.
+func (p *Profile) Boundaries(src []byte) (boundaries []Boundary) {
+	// Find maximum count.
+	max := 0
+	for _, b := range p.Blocks {
+		if b.Count > max {
+			max = b.Count
+		}
+	}
+	// Divisor for normalization.
+	divisor := math.Log(float64(max))
+
+	// boundary returns a Boundary, populating the Norm field with a normalized Count.
+	boundary := func(offset int, start bool, count int) Boundary {
+		b := Boundary{Offset: offset, Start: start, Count: count}
+		if !start || count == 0 {
+			return b
+		}
+		if max <= 1 {
+			b.Norm = 0.8 // Profile is in "set" mode; we want a heat map. Use cov8 in the CSS.
+		} else if count > 0 {
+			b.Norm = math.Log(float64(count)) / divisor
+		}
+		return b
+	}
+
+	line, col := 1, 2 // TODO: Why is this 2?
+	for si, bi := 0, 0; si < len(src) && bi < len(p.Blocks); {
+		b := p.Blocks[bi]
+		if b.StartLine == line && b.StartCol == col {
+			boundaries = append(boundaries, boundary(si, true, b.Count))
+		}
+		if b.EndLine == line && b.EndCol == col || line > b.EndLine {
+			boundaries = append(boundaries, boundary(si, false, 0))
+			bi++
+			continue // Don't advance through src; maybe the next block starts here.
+		}
+		if src[si] == '\n' {
+			line++
+			col = 0
+		}
+		col++
+		si++
+	}
+	sort.Sort(boundariesByPos(boundaries))
+	return
+}
+
+type boundariesByPos []Boundary
+
+func (b boundariesByPos) Len() int      { return len(b) }
+func (b boundariesByPos) Swap(i, j int) { b[i], b[j] = b[j], b[i] }
+func (b boundariesByPos) Less(i, j int) bool {
+	if b[i].Offset == b[j].Offset {
+		return !b[i].Start && b[j].Start
+	}
+	return b[i].Offset < b[j].Offset
+}
diff --git a/vendor/golang.org/x/tools/go/gcexportdata/gcexportdata.go b/vendor/golang.org/x/tools/go/gcexportdata/gcexportdata.go
new file mode 100644
index 000000000..98b3987b9
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/gcexportdata/gcexportdata.go
@@ -0,0 +1,109 @@
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package gcexportdata provides functions for locating, reading, and
+// writing export data files containing type information produced by the
+// gc compiler. This package supports the go1.7 export data format and all
+// later versions.
+// +// Although it might seem convenient for this package to live alongside +// go/types in the standard library, this would cause version skew +// problems for developer tools that use it, since they must be able to +// consume the outputs of the gc compiler both before and after a Go +// update such as from Go 1.7 to Go 1.8. Because this package lives in +// golang.org/x/tools, sites can update their version of this repo some +// time before the Go 1.8 release and rebuild and redeploy their +// developer tools, which will then be able to consume both Go 1.7 and +// Go 1.8 export data files, so they will work before and after the +// Go update. (See discussion at https://golang.org/issue/15651.) +// +package gcexportdata // import "golang.org/x/tools/go/gcexportdata" + +import ( + "bufio" + "bytes" + "fmt" + "go/token" + "go/types" + "io" + "io/ioutil" + + "golang.org/x/tools/go/internal/gcimporter" +) + +// Find returns the name of an object (.o) or archive (.a) file +// containing type information for the specified import path, +// using the workspace layout conventions of go/build. +// If no file was found, an empty filename is returned. +// +// A relative srcDir is interpreted relative to the current working directory. +// +// Find also returns the package's resolved (canonical) import path, +// reflecting the effects of srcDir and vendoring on importPath. +func Find(importPath, srcDir string) (filename, path string) { + return gcimporter.FindPkg(importPath, srcDir) +} + +// NewReader returns a reader for the export data section of an object +// (.o) or archive (.a) file read from r. The new reader may provide +// additional trailing data beyond the end of the export data. +func NewReader(r io.Reader) (io.Reader, error) { + buf := bufio.NewReader(r) + _, err := gcimporter.FindExportData(buf) + // If we ever switch to a zip-like archive format with the ToC + // at the end, we can return the correct portion of export data, + // but for now we must return the entire rest of the file. + return buf, err +} + +// Read reads export data from in, decodes it, and returns type +// information for the package. +// The package name is specified by path. +// File position information is added to fset. +// +// Read may inspect and add to the imports map to ensure that references +// within the export data to other packages are consistent. The caller +// must ensure that imports[path] does not exist, or exists but is +// incomplete (see types.Package.Complete), and Read inserts the +// resulting package into this map entry. +// +// On return, the state of the reader is undefined. +func Read(in io.Reader, fset *token.FileSet, imports map[string]*types.Package, path string) (*types.Package, error) { + data, err := ioutil.ReadAll(in) + if err != nil { + return nil, fmt.Errorf("reading export data for %q: %v", path, err) + } + + if bytes.HasPrefix(data, []byte("!")) { + return nil, fmt.Errorf("can't read export data for %q directly from an archive file (call gcexportdata.NewReader first to extract export data)", path) + } + + // The App Engine Go runtime v1.6 uses the old export data format. + // TODO(adonovan): delete once v1.7 has been around for a while. + if bytes.HasPrefix(data, []byte("package ")) { + return gcimporter.ImportData(imports, path, path, bytes.NewReader(data)) + } + + // The indexed export format starts with an 'i'; the older + // binary export format starts with a 'c', 'd', or 'v' + // (from "version"). Select appropriate importer. 
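Putting the pieces of this file together, a typical load sequence is `Find`, `NewReader`, then `Read`. A sketch; it assumes a toolchain that still installs `.a` files for the queried import path:

```go
package main

import (
	"fmt"
	"go/token"
	"go/types"
	"log"
	"os"

	"golang.org/x/tools/go/gcexportdata"
)

func main() {
	filename, path := gcexportdata.Find("fmt", "")
	if filename == "" {
		log.Fatal("no export data found for fmt")
	}
	f, err := os.Open(filename)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	r, err := gcexportdata.NewReader(f) // skip to the export data section
	if err != nil {
		log.Fatal(err)
	}
	fset := token.NewFileSet()
	imports := make(map[string]*types.Package)
	pkg, err := gcexportdata.Read(r, fset, imports, path)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(pkg.Path(), pkg.Complete())
}
```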
+ if len(data) > 0 && data[0] == 'i' { + _, pkg, err := gcimporter.IImportData(fset, imports, data[1:], path) + return pkg, err + } + + _, pkg, err := gcimporter.BImportData(fset, imports, data, path) + return pkg, err +} + +// Write writes encoded type information for the specified package to out. +// The FileSet provides file position information for named objects. +func Write(out io.Writer, fset *token.FileSet, pkg *types.Package) error { + b, err := gcimporter.BExportData(fset, pkg) + if err != nil { + return err + } + _, err = out.Write(b) + return err +} diff --git a/vendor/golang.org/x/tools/go/gcexportdata/importer.go b/vendor/golang.org/x/tools/go/gcexportdata/importer.go new file mode 100644 index 000000000..efe221e7e --- /dev/null +++ b/vendor/golang.org/x/tools/go/gcexportdata/importer.go @@ -0,0 +1,73 @@ +// Copyright 2016 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package gcexportdata + +import ( + "fmt" + "go/token" + "go/types" + "os" +) + +// NewImporter returns a new instance of the types.Importer interface +// that reads type information from export data files written by gc. +// The Importer also satisfies types.ImporterFrom. +// +// Export data files are located using "go build" workspace conventions +// and the build.Default context. +// +// Use this importer instead of go/importer.For("gc", ...) to avoid the +// version-skew problems described in the documentation of this package, +// or to control the FileSet or access the imports map populated during +// package loading. +// +func NewImporter(fset *token.FileSet, imports map[string]*types.Package) types.ImporterFrom { + return importer{fset, imports} +} + +type importer struct { + fset *token.FileSet + imports map[string]*types.Package +} + +func (imp importer) Import(importPath string) (*types.Package, error) { + return imp.ImportFrom(importPath, "", 0) +} + +func (imp importer) ImportFrom(importPath, srcDir string, mode types.ImportMode) (_ *types.Package, err error) { + filename, path := Find(importPath, srcDir) + if filename == "" { + if importPath == "unsafe" { + // Even for unsafe, call Find first in case + // the package was vendored. + return types.Unsafe, nil + } + return nil, fmt.Errorf("can't find import: %s", importPath) + } + + if pkg, ok := imp.imports[path]; ok && pkg.Complete() { + return pkg, nil // cache hit + } + + // open file + f, err := os.Open(filename) + if err != nil { + return nil, err + } + defer func() { + f.Close() + if err != nil { + // add file name to error + err = fmt.Errorf("reading export data: %s: %v", filename, err) + } + }() + + r, err := NewReader(f) + if err != nil { + return nil, err + } + + return Read(r, imp.fset, imp.imports, path) +} diff --git a/vendor/golang.org/x/tools/go/internal/gcimporter/bexport.go b/vendor/golang.org/x/tools/go/internal/gcimporter/bexport.go new file mode 100644 index 000000000..a807d0aaa --- /dev/null +++ b/vendor/golang.org/x/tools/go/internal/gcimporter/bexport.go @@ -0,0 +1,852 @@ +// Copyright 2016 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Binary package export. +// This file was derived from $GOROOT/src/cmd/compile/internal/gc/bexport.go; +// see that file for specification of the format. 
+
+package gcimporter
+
+import (
+	"bytes"
+	"encoding/binary"
+	"fmt"
+	"go/ast"
+	"go/constant"
+	"go/token"
+	"go/types"
+	"math"
+	"math/big"
+	"sort"
+	"strings"
+)
+
+// If debugFormat is set, each integer and string value is preceded by a marker
+// and position information in the encoding. This mechanism permits an importer
+// to recognize immediately when it is out of sync. The importer recognizes this
+// mode automatically (i.e., it can import export data produced with debugging
+// support even if debugFormat is not set at the time of import). This mode will
+// lead to massively larger export data (by a factor of 2 to 3) and should only
+// be enabled during development and debugging.
+//
+// NOTE: This flag is the first flag to enable if importing dies because of
+// (suspected) format errors, and whenever a change is made to the format.
+const debugFormat = false // default: false
+
+// If trace is set, debugging output is printed to standard output.
+const trace = false // default: false
+
+// Current export format version. Increase with each format change.
+// Note: The latest binary (non-indexed) export format is at version 6.
+// This exporter is still at level 4, but it doesn't matter since
+// the binary importer can handle older versions just fine.
+// 6: package height (CL 105038) -- NOT IMPLEMENTED HERE
+// 5: improved position encoding efficiency (issue 20080, CL 41619) -- NOT IMPLEMENTED HERE
+// 4: type name objects support type aliases, uses aliasTag
+// 3: Go1.8 encoding (same as version 2, aliasTag defined but never used)
+// 2: removed unused bool in ODCL export (compiler only)
+// 1: header format change (more regular), export package for _ struct fields
+// 0: Go1.7 encoding
+const exportVersion = 4
+
+// trackAllTypes enables cycle tracking for all types, not just named
+// types. The existing compiler invariants assume that unnamed types
+// that are not completely set up are not used, or else there are spurious
+// errors.
+// If disabled, only named types are tracked, possibly leading to slightly
+// less efficient encoding in rare cases. It also prevents the export of
+// some corner-case type declarations (but those are not handled correctly
+// with the textual export format either).
+// TODO(gri) enable and remove once issues caused by it are fixed
+const trackAllTypes = false
+
+type exporter struct {
+	fset *token.FileSet
+	out  bytes.Buffer
+
+	// object -> index maps, indexed in order of serialization
+	strIndex map[string]int
+	pkgIndex map[*types.Package]int
+	typIndex map[types.Type]int
+
+	// position encoding
+	posInfoFormat bool
+	prevFile      string
+	prevLine      int
+
+	// debugging support
+	written int // bytes written
+	indent  int // for trace
+}
+
+// internalError represents an error generated inside this package.
+type internalError string
+
+func (e internalError) Error() string { return "gcimporter: " + string(e) }
+
+func internalErrorf(format string, args ...interface{}) error {
+	return internalError(fmt.Sprintf(format, args...))
+}
+
+// BExportData returns binary export data for pkg.
+// If no file set is provided, position info will be missing.
+func BExportData(fset *token.FileSet, pkg *types.Package) (b []byte, err error) {
+	defer func() {
+		if e := recover(); e != nil {
+			if ierr, ok := e.(internalError); ok {
+				err = ierr
+				return
+			}
+			// Not an internal error; panic again.
+ panic(e) + } + }() + + p := exporter{ + fset: fset, + strIndex: map[string]int{"": 0}, // empty string is mapped to 0 + pkgIndex: make(map[*types.Package]int), + typIndex: make(map[types.Type]int), + posInfoFormat: true, // TODO(gri) might become a flag, eventually + } + + // write version info + // The version string must start with "version %d" where %d is the version + // number. Additional debugging information may follow after a blank; that + // text is ignored by the importer. + p.rawStringln(fmt.Sprintf("version %d", exportVersion)) + var debug string + if debugFormat { + debug = "debug" + } + p.rawStringln(debug) // cannot use p.bool since it's affected by debugFormat; also want to see this clearly + p.bool(trackAllTypes) + p.bool(p.posInfoFormat) + + // --- generic export data --- + + // populate type map with predeclared "known" types + for index, typ := range predeclared() { + p.typIndex[typ] = index + } + if len(p.typIndex) != len(predeclared()) { + return nil, internalError("duplicate entries in type map?") + } + + // write package data + p.pkg(pkg, true) + if trace { + p.tracef("\n") + } + + // write objects + objcount := 0 + scope := pkg.Scope() + for _, name := range scope.Names() { + if !ast.IsExported(name) { + continue + } + if trace { + p.tracef("\n") + } + p.obj(scope.Lookup(name)) + objcount++ + } + + // indicate end of list + if trace { + p.tracef("\n") + } + p.tag(endTag) + + // for self-verification only (redundant) + p.int(objcount) + + if trace { + p.tracef("\n") + } + + // --- end of export data --- + + return p.out.Bytes(), nil +} + +func (p *exporter) pkg(pkg *types.Package, emptypath bool) { + if pkg == nil { + panic(internalError("unexpected nil pkg")) + } + + // if we saw the package before, write its index (>= 0) + if i, ok := p.pkgIndex[pkg]; ok { + p.index('P', i) + return + } + + // otherwise, remember the package, write the package tag (< 0) and package data + if trace { + p.tracef("P%d = { ", len(p.pkgIndex)) + defer p.tracef("} ") + } + p.pkgIndex[pkg] = len(p.pkgIndex) + + p.tag(packageTag) + p.string(pkg.Name()) + if emptypath { + p.string("") + } else { + p.string(pkg.Path()) + } +} + +func (p *exporter) obj(obj types.Object) { + switch obj := obj.(type) { + case *types.Const: + p.tag(constTag) + p.pos(obj) + p.qualifiedName(obj) + p.typ(obj.Type()) + p.value(obj.Val()) + + case *types.TypeName: + if obj.IsAlias() { + p.tag(aliasTag) + p.pos(obj) + p.qualifiedName(obj) + } else { + p.tag(typeTag) + } + p.typ(obj.Type()) + + case *types.Var: + p.tag(varTag) + p.pos(obj) + p.qualifiedName(obj) + p.typ(obj.Type()) + + case *types.Func: + p.tag(funcTag) + p.pos(obj) + p.qualifiedName(obj) + sig := obj.Type().(*types.Signature) + p.paramList(sig.Params(), sig.Variadic()) + p.paramList(sig.Results(), false) + + default: + panic(internalErrorf("unexpected object %v (%T)", obj, obj)) + } +} + +func (p *exporter) pos(obj types.Object) { + if !p.posInfoFormat { + return + } + + file, line := p.fileLine(obj) + if file == p.prevFile { + // common case: write line delta + // delta == 0 means different file or no line change + delta := line - p.prevLine + p.int(delta) + if delta == 0 { + p.int(-1) // -1 means no file change + } + } else { + // different file + p.int(0) + // Encode filename as length of common prefix with previous + // filename, followed by (possibly empty) suffix. Filenames + // frequently share path prefixes, so this can save a lot + // of space and make export data size less dependent on file + // path length. 
The suffix is unlikely to be empty because + // file names tend to end in ".go". + n := commonPrefixLen(p.prevFile, file) + p.int(n) // n >= 0 + p.string(file[n:]) // write suffix only + p.prevFile = file + p.int(line) + } + p.prevLine = line +} + +func (p *exporter) fileLine(obj types.Object) (file string, line int) { + if p.fset != nil { + pos := p.fset.Position(obj.Pos()) + file = pos.Filename + line = pos.Line + } + return +} + +func commonPrefixLen(a, b string) int { + if len(a) > len(b) { + a, b = b, a + } + // len(a) <= len(b) + i := 0 + for i < len(a) && a[i] == b[i] { + i++ + } + return i +} + +func (p *exporter) qualifiedName(obj types.Object) { + p.string(obj.Name()) + p.pkg(obj.Pkg(), false) +} + +func (p *exporter) typ(t types.Type) { + if t == nil { + panic(internalError("nil type")) + } + + // Possible optimization: Anonymous pointer types *T where + // T is a named type are common. We could canonicalize all + // such types *T to a single type PT = *T. This would lead + // to at most one *T entry in typIndex, and all future *T's + // would be encoded as the respective index directly. Would + // save 1 byte (pointerTag) per *T and reduce the typIndex + // size (at the cost of a canonicalization map). We can do + // this later, without encoding format change. + + // if we saw the type before, write its index (>= 0) + if i, ok := p.typIndex[t]; ok { + p.index('T', i) + return + } + + // otherwise, remember the type, write the type tag (< 0) and type data + if trackAllTypes { + if trace { + p.tracef("T%d = {>\n", len(p.typIndex)) + defer p.tracef("<\n} ") + } + p.typIndex[t] = len(p.typIndex) + } + + switch t := t.(type) { + case *types.Named: + if !trackAllTypes { + // if we don't track all types, track named types now + p.typIndex[t] = len(p.typIndex) + } + + p.tag(namedTag) + p.pos(t.Obj()) + p.qualifiedName(t.Obj()) + p.typ(t.Underlying()) + if !types.IsInterface(t) { + p.assocMethods(t) + } + + case *types.Array: + p.tag(arrayTag) + p.int64(t.Len()) + p.typ(t.Elem()) + + case *types.Slice: + p.tag(sliceTag) + p.typ(t.Elem()) + + case *dddSlice: + p.tag(dddTag) + p.typ(t.elem) + + case *types.Struct: + p.tag(structTag) + p.fieldList(t) + + case *types.Pointer: + p.tag(pointerTag) + p.typ(t.Elem()) + + case *types.Signature: + p.tag(signatureTag) + p.paramList(t.Params(), t.Variadic()) + p.paramList(t.Results(), false) + + case *types.Interface: + p.tag(interfaceTag) + p.iface(t) + + case *types.Map: + p.tag(mapTag) + p.typ(t.Key()) + p.typ(t.Elem()) + + case *types.Chan: + p.tag(chanTag) + p.int(int(3 - t.Dir())) // hack + p.typ(t.Elem()) + + default: + panic(internalErrorf("unexpected type %T: %s", t, t)) + } +} + +func (p *exporter) assocMethods(named *types.Named) { + // Sort methods (for determinism). 
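The common-prefix compression used by `pos` above is easy to demonstrate in isolation. In this sketch, `commonPrefixLen` is copied from this file and the file names are made up:

```go
package main

import "fmt"

// commonPrefixLen is copied from the exporter above.
func commonPrefixLen(a, b string) int {
	if len(a) > len(b) {
		a, b = b, a
	}
	i := 0
	for i < len(a) && a[i] == b[i] {
		i++
	}
	return i
}

func main() {
	prev := "golang.org/x/tools/go/internal/gcimporter/bexport.go"
	next := "golang.org/x/tools/go/internal/gcimporter/bimport.go"
	n := commonPrefixLen(prev, next)
	// Only the shared-prefix length and the short suffix get encoded.
	fmt.Printf("shared prefix %d bytes, suffix %q\n", n, next[n:])
}
```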
+ var methods []*types.Func + for i := 0; i < named.NumMethods(); i++ { + methods = append(methods, named.Method(i)) + } + sort.Sort(methodsByName(methods)) + + p.int(len(methods)) + + if trace && methods != nil { + p.tracef("associated methods {>\n") + } + + for i, m := range methods { + if trace && i > 0 { + p.tracef("\n") + } + + p.pos(m) + name := m.Name() + p.string(name) + if !exported(name) { + p.pkg(m.Pkg(), false) + } + + sig := m.Type().(*types.Signature) + p.paramList(types.NewTuple(sig.Recv()), false) + p.paramList(sig.Params(), sig.Variadic()) + p.paramList(sig.Results(), false) + p.int(0) // dummy value for go:nointerface pragma - ignored by importer + } + + if trace && methods != nil { + p.tracef("<\n} ") + } +} + +type methodsByName []*types.Func + +func (x methodsByName) Len() int { return len(x) } +func (x methodsByName) Swap(i, j int) { x[i], x[j] = x[j], x[i] } +func (x methodsByName) Less(i, j int) bool { return x[i].Name() < x[j].Name() } + +func (p *exporter) fieldList(t *types.Struct) { + if trace && t.NumFields() > 0 { + p.tracef("fields {>\n") + defer p.tracef("<\n} ") + } + + p.int(t.NumFields()) + for i := 0; i < t.NumFields(); i++ { + if trace && i > 0 { + p.tracef("\n") + } + p.field(t.Field(i)) + p.string(t.Tag(i)) + } +} + +func (p *exporter) field(f *types.Var) { + if !f.IsField() { + panic(internalError("field expected")) + } + + p.pos(f) + p.fieldName(f) + p.typ(f.Type()) +} + +func (p *exporter) iface(t *types.Interface) { + // TODO(gri): enable importer to load embedded interfaces, + // then emit Embeddeds and ExplicitMethods separately here. + p.int(0) + + n := t.NumMethods() + if trace && n > 0 { + p.tracef("methods {>\n") + defer p.tracef("<\n} ") + } + p.int(n) + for i := 0; i < n; i++ { + if trace && i > 0 { + p.tracef("\n") + } + p.method(t.Method(i)) + } +} + +func (p *exporter) method(m *types.Func) { + sig := m.Type().(*types.Signature) + if sig.Recv() == nil { + panic(internalError("method expected")) + } + + p.pos(m) + p.string(m.Name()) + if m.Name() != "_" && !ast.IsExported(m.Name()) { + p.pkg(m.Pkg(), false) + } + + // interface method; no need to encode receiver. + p.paramList(sig.Params(), sig.Variadic()) + p.paramList(sig.Results(), false) +} + +func (p *exporter) fieldName(f *types.Var) { + name := f.Name() + + if f.Anonymous() { + // anonymous field - we distinguish between 3 cases: + // 1) field name matches base type name and is exported + // 2) field name matches base type name and is not exported + // 3) field name doesn't match base type name (alias name) + bname := basetypeName(f.Type()) + if name == bname { + if ast.IsExported(name) { + name = "" // 1) we don't need to know the field name or package + } else { + name = "?" // 2) use unexported name "?" 
to force package export + } + } else { + // 3) indicate alias and export name as is + // (this requires an extra "@" but this is a rare case) + p.string("@") + } + } + + p.string(name) + if name != "" && !ast.IsExported(name) { + p.pkg(f.Pkg(), false) + } +} + +func basetypeName(typ types.Type) string { + switch typ := deref(typ).(type) { + case *types.Basic: + return typ.Name() + case *types.Named: + return typ.Obj().Name() + default: + return "" // unnamed type + } +} + +func (p *exporter) paramList(params *types.Tuple, variadic bool) { + // use negative length to indicate unnamed parameters + // (look at the first parameter only since either all + // names are present or all are absent) + n := params.Len() + if n > 0 && params.At(0).Name() == "" { + n = -n + } + p.int(n) + for i := 0; i < params.Len(); i++ { + q := params.At(i) + t := q.Type() + if variadic && i == params.Len()-1 { + t = &dddSlice{t.(*types.Slice).Elem()} + } + p.typ(t) + if n > 0 { + name := q.Name() + p.string(name) + if name != "_" { + p.pkg(q.Pkg(), false) + } + } + p.string("") // no compiler-specific info + } +} + +func (p *exporter) value(x constant.Value) { + if trace { + p.tracef("= ") + } + + switch x.Kind() { + case constant.Bool: + tag := falseTag + if constant.BoolVal(x) { + tag = trueTag + } + p.tag(tag) + + case constant.Int: + if v, exact := constant.Int64Val(x); exact { + // common case: x fits into an int64 - use compact encoding + p.tag(int64Tag) + p.int64(v) + return + } + // uncommon case: large x - use float encoding + // (powers of 2 will be encoded efficiently with exponent) + p.tag(floatTag) + p.float(constant.ToFloat(x)) + + case constant.Float: + p.tag(floatTag) + p.float(x) + + case constant.Complex: + p.tag(complexTag) + p.float(constant.Real(x)) + p.float(constant.Imag(x)) + + case constant.String: + p.tag(stringTag) + p.string(constant.StringVal(x)) + + case constant.Unknown: + // package contains type errors + p.tag(unknownTag) + + default: + panic(internalErrorf("unexpected value %v (%T)", x, x)) + } +} + +func (p *exporter) float(x constant.Value) { + if x.Kind() != constant.Float { + panic(internalErrorf("unexpected constant %v, want float", x)) + } + // extract sign (there is no -0) + sign := constant.Sign(x) + if sign == 0 { + // x == 0 + p.int(0) + return + } + // x != 0 + + var f big.Float + if v, exact := constant.Float64Val(x); exact { + // float64 + f.SetFloat64(v) + } else if num, denom := constant.Num(x), constant.Denom(x); num.Kind() == constant.Int { + // TODO(gri): add big.Rat accessor to constant.Value. + r := valueToRat(num) + f.SetRat(r.Quo(r, valueToRat(denom))) + } else { + // Value too large to represent as a fraction => inaccessible. + // TODO(gri): add big.Float accessor to constant.Value. + f.SetFloat64(math.MaxFloat64) // FIXME + } + + // extract exponent such that 0.5 <= m < 1.0 + var m big.Float + exp := f.MantExp(&m) + + // extract mantissa as *big.Int + // - set exponent large enough so mant satisfies mant.IsInt() + // - get *big.Int from mant + m.SetMantExp(&m, int(m.MinPrec())) + mant, acc := m.Int(nil) + if acc != big.Exact { + panic(internalError("internal error")) + } + + p.int(sign) + p.int(exp) + p.string(string(mant.Bytes())) +} + +func valueToRat(x constant.Value) *big.Rat { + // Convert little-endian to big-endian. + // I can't believe this is necessary. 
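+	// constant.Bytes returns the value's bytes in little-endian order,
+	// while big.Int.SetBytes interprets its argument as big-endian,
+	// hence the in-place reversal below. For example, the constant
+	// 0x3039 (12345) arrives as [0x39, 0x30] and must become [0x30, 0x39].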
+ bytes := constant.Bytes(x) + for i := 0; i < len(bytes)/2; i++ { + bytes[i], bytes[len(bytes)-1-i] = bytes[len(bytes)-1-i], bytes[i] + } + return new(big.Rat).SetInt(new(big.Int).SetBytes(bytes)) +} + +func (p *exporter) bool(b bool) bool { + if trace { + p.tracef("[") + defer p.tracef("= %v] ", b) + } + + x := 0 + if b { + x = 1 + } + p.int(x) + return b +} + +// ---------------------------------------------------------------------------- +// Low-level encoders + +func (p *exporter) index(marker byte, index int) { + if index < 0 { + panic(internalError("invalid index < 0")) + } + if debugFormat { + p.marker('t') + } + if trace { + p.tracef("%c%d ", marker, index) + } + p.rawInt64(int64(index)) +} + +func (p *exporter) tag(tag int) { + if tag >= 0 { + panic(internalError("invalid tag >= 0")) + } + if debugFormat { + p.marker('t') + } + if trace { + p.tracef("%s ", tagString[-tag]) + } + p.rawInt64(int64(tag)) +} + +func (p *exporter) int(x int) { + p.int64(int64(x)) +} + +func (p *exporter) int64(x int64) { + if debugFormat { + p.marker('i') + } + if trace { + p.tracef("%d ", x) + } + p.rawInt64(x) +} + +func (p *exporter) string(s string) { + if debugFormat { + p.marker('s') + } + if trace { + p.tracef("%q ", s) + } + // if we saw the string before, write its index (>= 0) + // (the empty string is mapped to 0) + if i, ok := p.strIndex[s]; ok { + p.rawInt64(int64(i)) + return + } + // otherwise, remember string and write its negative length and bytes + p.strIndex[s] = len(p.strIndex) + p.rawInt64(-int64(len(s))) + for i := 0; i < len(s); i++ { + p.rawByte(s[i]) + } +} + +// marker emits a marker byte and position information which makes +// it easy for a reader to detect if it is "out of sync". Used for +// debugFormat format only. +func (p *exporter) marker(m byte) { + p.rawByte(m) + // Enable this for help tracking down the location + // of an incorrect marker when running in debugFormat. + if false && trace { + p.tracef("#%d ", p.written) + } + p.rawInt64(int64(p.written)) +} + +// rawInt64 should only be used by low-level encoders. +func (p *exporter) rawInt64(x int64) { + var tmp [binary.MaxVarintLen64]byte + n := binary.PutVarint(tmp[:], x) + for i := 0; i < n; i++ { + p.rawByte(tmp[i]) + } +} + +// rawStringln should only be used to emit the initial version string. +func (p *exporter) rawStringln(s string) { + for i := 0; i < len(s); i++ { + p.rawByte(s[i]) + } + p.rawByte('\n') +} + +// rawByte is the bottleneck interface to write to p.out. +// rawByte escapes b as follows (any encoding does that +// hides '$'): +// +// '$' => '|' 'S' +// '|' => '|' '|' +// +// Necessary so other tools can find the end of the +// export data by searching for "$$". +// rawByte should only be used by low-level encoders. +func (p *exporter) rawByte(b byte) { + switch b { + case '$': + // write '$' as '|' 'S' + b = 'S' + fallthrough + case '|': + // write '|' as '|' '|' + p.out.WriteByte('|') + p.written++ + } + p.out.WriteByte(b) + p.written++ +} + +// tracef is like fmt.Printf but it rewrites the format string +// to take care of indentation. +func (p *exporter) tracef(format string, args ...interface{}) { + if strings.ContainsAny(format, "<>\n") { + var buf bytes.Buffer + for i := 0; i < len(format); i++ { + // no need to deal with runes + ch := format[i] + switch ch { + case '>': + p.indent++ + continue + case '<': + p.indent-- + continue + } + buf.WriteByte(ch) + if ch == '\n' { + for j := p.indent; j > 0; j-- { + buf.WriteString(". 
") + } + } + } + format = buf.String() + } + fmt.Printf(format, args...) +} + +// Debugging support. +// (tagString is only used when tracing is enabled) +var tagString = [...]string{ + // Packages + -packageTag: "package", + + // Types + -namedTag: "named type", + -arrayTag: "array", + -sliceTag: "slice", + -dddTag: "ddd", + -structTag: "struct", + -pointerTag: "pointer", + -signatureTag: "signature", + -interfaceTag: "interface", + -mapTag: "map", + -chanTag: "chan", + + // Values + -falseTag: "false", + -trueTag: "true", + -int64Tag: "int64", + -floatTag: "float", + -fractionTag: "fraction", + -complexTag: "complex", + -stringTag: "string", + -unknownTag: "unknown", + + // Type aliases + -aliasTag: "alias", +} diff --git a/vendor/golang.org/x/tools/go/internal/gcimporter/bimport.go b/vendor/golang.org/x/tools/go/internal/gcimporter/bimport.go new file mode 100644 index 000000000..e3c310782 --- /dev/null +++ b/vendor/golang.org/x/tools/go/internal/gcimporter/bimport.go @@ -0,0 +1,1036 @@ +// Copyright 2015 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// This file is a copy of $GOROOT/src/go/internal/gcimporter/bimport.go. + +package gcimporter + +import ( + "encoding/binary" + "fmt" + "go/constant" + "go/token" + "go/types" + "sort" + "strconv" + "strings" + "sync" + "unicode" + "unicode/utf8" +) + +type importer struct { + imports map[string]*types.Package + data []byte + importpath string + buf []byte // for reading strings + version int // export format version + + // object lists + strList []string // in order of appearance + pathList []string // in order of appearance + pkgList []*types.Package // in order of appearance + typList []types.Type // in order of appearance + interfaceList []*types.Interface // for delayed completion only + trackAllTypes bool + + // position encoding + posInfoFormat bool + prevFile string + prevLine int + fake fakeFileSet + + // debugging support + debugFormat bool + read int // bytes read +} + +// BImportData imports a package from the serialized package data +// and returns the number of bytes consumed and a reference to the package. +// If the export data version is not recognized or the format is otherwise +// compromised, an error is returned. +func BImportData(fset *token.FileSet, imports map[string]*types.Package, data []byte, path string) (_ int, pkg *types.Package, err error) { + // catch panics and return them as errors + const currentVersion = 6 + version := -1 // unknown version + defer func() { + if e := recover(); e != nil { + // Return a (possibly nil or incomplete) package unchanged (see #16088). + if version > currentVersion { + err = fmt.Errorf("cannot import %q (%v), export data is newer version - update tool", path, e) + } else { + err = fmt.Errorf("cannot import %q (%v), possibly version skew - reinstall package", path, e) + } + } + }() + + p := importer{ + imports: imports, + data: data, + importpath: path, + version: version, + strList: []string{""}, // empty string is mapped to 0 + pathList: []string{""}, // empty string is mapped to 0 + fake: fakeFileSet{ + fset: fset, + files: make(map[string]*token.File), + }, + } + + // read version info + var versionstr string + if b := p.rawByte(); b == 'c' || b == 'd' { + // Go1.7 encoding; first byte encodes low-level + // encoding format (compact vs debug). + // For backward-compatibility only (avoid problems with + // old installed packages). 
Newly compiled packages use + // the extensible format string. + // TODO(gri) Remove this support eventually; after Go1.8. + if b == 'd' { + p.debugFormat = true + } + p.trackAllTypes = p.rawByte() == 'a' + p.posInfoFormat = p.int() != 0 + versionstr = p.string() + if versionstr == "v1" { + version = 0 + } + } else { + // Go1.8 extensible encoding + // read version string and extract version number (ignore anything after the version number) + versionstr = p.rawStringln(b) + if s := strings.SplitN(versionstr, " ", 3); len(s) >= 2 && s[0] == "version" { + if v, err := strconv.Atoi(s[1]); err == nil && v > 0 { + version = v + } + } + } + p.version = version + + // read version specific flags - extend as necessary + switch p.version { + // case currentVersion: + // ... + // fallthrough + case currentVersion, 5, 4, 3, 2, 1: + p.debugFormat = p.rawStringln(p.rawByte()) == "debug" + p.trackAllTypes = p.int() != 0 + p.posInfoFormat = p.int() != 0 + case 0: + // Go1.7 encoding format - nothing to do here + default: + errorf("unknown bexport format version %d (%q)", p.version, versionstr) + } + + // --- generic export data --- + + // populate typList with predeclared "known" types + p.typList = append(p.typList, predeclared()...) + + // read package data + pkg = p.pkg() + + // read objects of phase 1 only (see cmd/compile/internal/gc/bexport.go) + objcount := 0 + for { + tag := p.tagOrIndex() + if tag == endTag { + break + } + p.obj(tag) + objcount++ + } + + // self-verification + if count := p.int(); count != objcount { + errorf("got %d objects; want %d", objcount, count) + } + + // ignore compiler-specific import data + + // complete interfaces + // TODO(gri) re-investigate if we still need to do this in a delayed fashion + for _, typ := range p.interfaceList { + typ.Complete() + } + + // record all referenced packages as imports + list := append(([]*types.Package)(nil), p.pkgList[1:]...) + sort.Sort(byPath(list)) + pkg.SetImports(list) + + // package was imported completely and without errors + pkg.MarkComplete() + + return p.read, pkg, nil +} + +func errorf(format string, args ...interface{}) { + panic(fmt.Sprintf(format, args...)) +} + +func (p *importer) pkg() *types.Package { + // if the package was seen before, i is its index (>= 0) + i := p.tagOrIndex() + if i >= 0 { + return p.pkgList[i] + } + + // otherwise, i is the package tag (< 0) + if i != packageTag { + errorf("unexpected package tag %d version %d", i, p.version) + } + + // read package data + name := p.string() + var path string + if p.version >= 5 { + path = p.path() + } else { + path = p.string() + } + if p.version >= 6 { + p.int() // package height; unused by go/types + } + + // we should never see an empty package name + if name == "" { + errorf("empty package name in import") + } + + // an empty path denotes the package we are currently importing; + // it must be the first package we see + if (path == "") != (len(p.pkgList) == 0) { + errorf("package path %q for pkg index %d", path, len(p.pkgList)) + } + + // if the package was imported before, use that one; otherwise create a new one + if path == "" { + path = p.importpath + } + pkg := p.imports[path] + if pkg == nil { + pkg = types.NewPackage(path, name) + p.imports[path] = pkg + } else if pkg.Name() != name { + errorf("conflicting names %s and %s for package %q", pkg.Name(), name, path) + } + p.pkgList = append(p.pkgList, pkg) + + return pkg +} + +// objTag returns the tag value for each object kind. 
+func objTag(obj types.Object) int {
+	switch obj.(type) {
+	case *types.Const:
+		return constTag
+	case *types.TypeName:
+		return typeTag
+	case *types.Var:
+		return varTag
+	case *types.Func:
+		return funcTag
+	default:
+		errorf("unexpected object: %v (%T)", obj, obj) // panics
+		panic("unreachable")
+	}
+}
+
+func sameObj(a, b types.Object) bool {
+	// Because unnamed types are not canonicalized, we cannot simply compare types for
+	// (pointer) identity.
+	// Ideally we'd check equality of constant values as well, but this is good enough.
+	return objTag(a) == objTag(b) && types.Identical(a.Type(), b.Type())
+}
+
+func (p *importer) declare(obj types.Object) {
+	pkg := obj.Pkg()
+	if alt := pkg.Scope().Insert(obj); alt != nil {
+		// This can only trigger if we import a (non-type) object a second time.
+		// Excluding type aliases, this cannot happen because 1) we only import a package
+		// once; and 2) we ignore compiler-specific export data which may contain
+		// functions whose inlined function bodies refer to other functions that
+		// were already imported.
+		// However, type aliases require reexporting the original type, so we need
+		// to allow it (see also the comment in cmd/compile/internal/gc/bimport.go,
+		// method importer.obj, switch case importing functions).
+		// TODO(gri) review/update this comment once the gc compiler handles type aliases.
+		if !sameObj(obj, alt) {
+			errorf("inconsistent import:\n\t%v\npreviously imported as:\n\t%v\n", obj, alt)
+		}
+	}
+}
+
+func (p *importer) obj(tag int) {
+	switch tag {
+	case constTag:
+		pos := p.pos()
+		pkg, name := p.qualifiedName()
+		typ := p.typ(nil, nil)
+		val := p.value()
+		p.declare(types.NewConst(pos, pkg, name, typ, val))
+
+	case aliasTag:
+		// TODO(gri) verify type alias hookup is correct
+		pos := p.pos()
+		pkg, name := p.qualifiedName()
+		typ := p.typ(nil, nil)
+		p.declare(types.NewTypeName(pos, pkg, name, typ))
+
+	case typeTag:
+		p.typ(nil, nil)
+
+	case varTag:
+		pos := p.pos()
+		pkg, name := p.qualifiedName()
+		typ := p.typ(nil, nil)
+		p.declare(types.NewVar(pos, pkg, name, typ))
+
+	case funcTag:
+		pos := p.pos()
+		pkg, name := p.qualifiedName()
+		params, isddd := p.paramList()
+		result, _ := p.paramList()
+		sig := types.NewSignature(nil, params, result, isddd)
+		p.declare(types.NewFunc(pos, pkg, name, sig))
+
+	default:
+		errorf("unexpected object tag %d", tag)
+	}
+}
+
+const deltaNewFile = -64 // see cmd/compile/internal/gc/bexport.go
+
+func (p *importer) pos() token.Pos {
+	if !p.posInfoFormat {
+		return token.NoPos
+	}
+
+	file := p.prevFile
+	line := p.prevLine
+	delta := p.int()
+	line += delta
+	if p.version >= 5 {
+		if delta == deltaNewFile {
+			if n := p.int(); n >= 0 {
+				// file changed
+				file = p.path()
+				line = n
+			}
+		}
+	} else {
+		if delta == 0 {
+			if n := p.int(); n >= 0 {
+				// file changed
+				file = p.prevFile[:n] + p.string()
+				line = p.int()
+			}
+		}
+	}
+	p.prevFile = file
+	p.prevLine = line
+
+	return p.fake.pos(file, line)
+}
+
+// Synthesize a token.Pos
+type fakeFileSet struct {
+	fset  *token.FileSet
+	files map[string]*token.File
+}
+
+func (s *fakeFileSet) pos(file string, line int) token.Pos {
+	// Since we don't know the set of needed file positions, we
+	// reserve maxlines positions per file.
+	const maxlines = 64 * 1024
+	f := s.files[file]
+	if f == nil {
+		f = s.fset.AddFile(file, -1, maxlines)
+		s.files[file] = f
+		// Allocate the fake linebreak indices on first use.
+		// TODO(adonovan): opt: save ~512KB using a more complex scheme?
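+		// fakeLines is the identity mapping (fakeLines[i] == i), so that
+		// after f.SetLines(fakeLines) line n starts at offset n and the
+		// f.Pos(line - 1) computation below lands on the right line.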
+ fakeLinesOnce.Do(func() { + fakeLines = make([]int, maxlines) + for i := range fakeLines { + fakeLines[i] = i + } + }) + f.SetLines(fakeLines) + } + + if line > maxlines { + line = 1 + } + + // Treat the file as if it contained only newlines + // and column=1: use the line number as the offset. + return f.Pos(line - 1) +} + +var ( + fakeLines []int + fakeLinesOnce sync.Once +) + +func (p *importer) qualifiedName() (pkg *types.Package, name string) { + name = p.string() + pkg = p.pkg() + return +} + +func (p *importer) record(t types.Type) { + p.typList = append(p.typList, t) +} + +// A dddSlice is a types.Type representing ...T parameters. +// It only appears for parameter types and does not escape +// the importer. +type dddSlice struct { + elem types.Type +} + +func (t *dddSlice) Underlying() types.Type { return t } +func (t *dddSlice) String() string { return "..." + t.elem.String() } + +// parent is the package which declared the type; parent == nil means +// the package currently imported. The parent package is needed for +// exported struct fields and interface methods which don't contain +// explicit package information in the export data. +// +// A non-nil tname is used as the "owner" of the result type; i.e., +// the result type is the underlying type of tname. tname is used +// to give interface methods a named receiver type where possible. +func (p *importer) typ(parent *types.Package, tname *types.Named) types.Type { + // if the type was seen before, i is its index (>= 0) + i := p.tagOrIndex() + if i >= 0 { + return p.typList[i] + } + + // otherwise, i is the type tag (< 0) + switch i { + case namedTag: + // read type object + pos := p.pos() + parent, name := p.qualifiedName() + scope := parent.Scope() + obj := scope.Lookup(name) + + // if the object doesn't exist yet, create and insert it + if obj == nil { + obj = types.NewTypeName(pos, parent, name, nil) + scope.Insert(obj) + } + + if _, ok := obj.(*types.TypeName); !ok { + errorf("pkg = %s, name = %s => %s", parent, name, obj) + } + + // associate new named type with obj if it doesn't exist yet + t0 := types.NewNamed(obj.(*types.TypeName), nil, nil) + + // but record the existing type, if any + tname := obj.Type().(*types.Named) // tname is either t0 or the existing type + p.record(tname) + + // read underlying type + t0.SetUnderlying(p.typ(parent, t0)) + + // interfaces don't have associated methods + if types.IsInterface(t0) { + return tname + } + + // read associated methods + for i := p.int(); i > 0; i-- { + // TODO(gri) replace this with something closer to fieldName + pos := p.pos() + name := p.string() + if !exported(name) { + p.pkg() + } + + recv, _ := p.paramList() // TODO(gri) do we need a full param list for the receiver? 
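+			// The exporter writes the receiver as a one-element parameter
+			// list (see assocMethods in bexport.go), so recv.At(0) below
+			// is the method's receiver variable.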
+			params, isddd := p.paramList()
+			result, _ := p.paramList()
+			p.int() // go:nointerface pragma - discarded
+
+			sig := types.NewSignature(recv.At(0), params, result, isddd)
+			t0.AddMethod(types.NewFunc(pos, parent, name, sig))
+		}
+
+		return tname
+
+	case arrayTag:
+		t := new(types.Array)
+		if p.trackAllTypes {
+			p.record(t)
+		}
+
+		n := p.int64()
+		*t = *types.NewArray(p.typ(parent, nil), n)
+		return t
+
+	case sliceTag:
+		t := new(types.Slice)
+		if p.trackAllTypes {
+			p.record(t)
+		}
+
+		*t = *types.NewSlice(p.typ(parent, nil))
+		return t
+
+	case dddTag:
+		t := new(dddSlice)
+		if p.trackAllTypes {
+			p.record(t)
+		}
+
+		t.elem = p.typ(parent, nil)
+		return t
+
+	case structTag:
+		t := new(types.Struct)
+		if p.trackAllTypes {
+			p.record(t)
+		}
+
+		*t = *types.NewStruct(p.fieldList(parent))
+		return t
+
+	case pointerTag:
+		t := new(types.Pointer)
+		if p.trackAllTypes {
+			p.record(t)
+		}
+
+		*t = *types.NewPointer(p.typ(parent, nil))
+		return t
+
+	case signatureTag:
+		t := new(types.Signature)
+		if p.trackAllTypes {
+			p.record(t)
+		}
+
+		params, isddd := p.paramList()
+		result, _ := p.paramList()
+		*t = *types.NewSignature(nil, params, result, isddd)
+		return t
+
+	case interfaceTag:
+		// Create a dummy entry in the type list. This is safe because we
+		// cannot expect the interface type to appear in a cycle, as any
+		// such cycle must contain a named type which would have been
+		// first defined earlier.
+		// TODO(gri) Is this still true now that we have type aliases?
+		// See issue #23225.
+		n := len(p.typList)
+		if p.trackAllTypes {
+			p.record(nil)
+		}
+
+		var embeddeds []types.Type
+		for n := p.int(); n > 0; n-- {
+			p.pos()
+			embeddeds = append(embeddeds, p.typ(parent, nil))
+		}
+
+		t := newInterface(p.methodList(parent, tname), embeddeds)
+		p.interfaceList = append(p.interfaceList, t)
+		if p.trackAllTypes {
+			p.typList[n] = t
+		}
+		return t
+
+	case mapTag:
+		t := new(types.Map)
+		if p.trackAllTypes {
+			p.record(t)
+		}
+
+		key := p.typ(parent, nil)
+		val := p.typ(parent, nil)
+		*t = *types.NewMap(key, val)
+		return t
+
+	case chanTag:
+		t := new(types.Chan)
+		if p.trackAllTypes {
+			p.record(t)
+		}
+
+		dir := chanDir(p.int())
+		val := p.typ(parent, nil)
+		*t = *types.NewChan(dir, val)
+		return t
+
+	default:
+		errorf("unexpected type tag %d", i) // panics
+		panic("unreachable")
+	}
+}
+
+func chanDir(d int) types.ChanDir {
+	// tag values must match the constants in cmd/compile/internal/gc/go.go
+	switch d {
+	case 1 /* Crecv */ :
+		return types.RecvOnly
+	case 2 /* Csend */ :
+		return types.SendOnly
+	case 3 /* Cboth */ :
+		return types.SendRecv
+	default:
+		errorf("unexpected channel dir %d", d)
+		return 0
+	}
+}
+
+func (p *importer) fieldList(parent *types.Package) (fields []*types.Var, tags []string) {
+	if n := p.int(); n > 0 {
+		fields = make([]*types.Var, n)
+		tags = make([]string, n)
+		for i := range fields {
+			fields[i], tags[i] = p.field(parent)
+		}
+	}
+	return
+}
+
+func (p *importer) field(parent *types.Package) (*types.Var, string) {
+	pos := p.pos()
+	pkg, name, alias := p.fieldName(parent)
+	typ := p.typ(parent, nil)
+	tag := p.string()
+
+	anonymous := false
+	if name == "" {
+		// anonymous field - typ must be T or *T and T must be a type name
+		switch typ := deref(typ).(type) {
+		case *types.Basic: // basic types are named types
+			pkg = nil // objects defined in Universe scope have no package
+			name = typ.Name()
+		case *types.Named:
+			name = typ.Obj().Name()
+		default:
+			errorf("named base type expected")
+		}
+		anonymous = true
+	} else if alias {
+		// anonymous field: we have an explicit name because it's an alias
+		anonymous = true
+	}
+
+	return types.NewField(pos, pkg, name, typ, anonymous), tag
+}
+
+func (p *importer) methodList(parent *types.Package, baseType *types.Named) (methods []*types.Func) {
+	if n := p.int(); n > 0 {
+		methods = make([]*types.Func, n)
+		for i := range methods {
+			methods[i] = p.method(parent, baseType)
+		}
+	}
+	return
+}
+
+func (p *importer) method(parent *types.Package, baseType *types.Named) *types.Func {
+	pos := p.pos()
+	pkg, name, _ := p.fieldName(parent)
+	// If we don't have a baseType, use a nil receiver.
+	// A receiver using the actual interface type (which
+	// we don't know yet) will be filled in when we call
+	// types.Interface.Complete.
+	var recv *types.Var
+	if baseType != nil {
+		recv = types.NewVar(token.NoPos, parent, "", baseType)
+	}
+	params, isddd := p.paramList()
+	result, _ := p.paramList()
+	sig := types.NewSignature(recv, params, result, isddd)
+	return types.NewFunc(pos, pkg, name, sig)
+}
+
+func (p *importer) fieldName(parent *types.Package) (pkg *types.Package, name string, alias bool) {
+	name = p.string()
+	pkg = parent
+	if pkg == nil {
+		// use the imported package instead
+		pkg = p.pkgList[0]
+	}
+	if p.version == 0 && name == "_" {
+		// version 0 didn't export a package for _ fields
+		return
+	}
+	switch name {
+	case "":
+		// 1) field name matches base type name and is exported: nothing to do
+	case "?":
+		// 2) field name matches base type name and is not exported: need package
+		name = ""
+		pkg = p.pkg()
+	case "@":
+		// 3) field name doesn't match type name (alias)
+		name = p.string()
+		alias = true
+		fallthrough
+	default:
+		if !exported(name) {
+			pkg = p.pkg()
+		}
+	}
+	return
+}
+
+func (p *importer) paramList() (*types.Tuple, bool) {
+	n := p.int()
+	if n == 0 {
+		return nil, false
+	}
+	// negative length indicates unnamed parameters
+	named := true
+	if n < 0 {
+		n = -n
+		named = false
+	}
+	// n > 0
+	params := make([]*types.Var, n)
+	isddd := false
+	for i := range params {
+		params[i], isddd = p.param(named)
+	}
+	return types.NewTuple(params...), isddd
+}
+
+func (p *importer) param(named bool) (*types.Var, bool) {
+	t := p.typ(nil, nil)
+	td, isddd := t.(*dddSlice)
+	if isddd {
+		t = types.NewSlice(td.elem)
+	}
+
+	var pkg *types.Package
+	var name string
+	if named {
+		name = p.string()
+		if name == "" {
+			errorf("expected named parameter")
+		}
+		if name != "_" {
+			pkg = p.pkg()
+		}
+		if i := strings.Index(name, "·"); i > 0 {
+			name = name[:i] // cut off gc-specific parameter numbering
+		}
+	}
+
+	// read and discard compiler-specific info
+	p.string()
+
+	return types.NewVar(token.NoPos, pkg, name, t), isddd
+}
+
+func exported(name string) bool {
+	ch, _ := utf8.DecodeRuneInString(name)
+	return unicode.IsUpper(ch)
+}
+
+func (p *importer) value() constant.Value {
+	switch tag := p.tagOrIndex(); tag {
+	case falseTag:
+		return constant.MakeBool(false)
+	case trueTag:
+		return constant.MakeBool(true)
+	case int64Tag:
+		return constant.MakeInt64(p.int64())
+	case floatTag:
+		return p.float()
+	case complexTag:
+		re := p.float()
+		im := p.float()
+		return constant.BinaryOp(re, token.ADD, constant.MakeImag(im))
+	case stringTag:
+		return constant.MakeString(p.string())
+	case unknownTag:
+		return constant.MakeUnknown()
+	default:
+		errorf("unexpected value tag %d", tag) // panics
+		panic("unreachable")
+	}
+}
+
+func (p *importer) float() constant.Value {
+	sign := p.int()
+	if sign == 0 {
+		return constant.MakeInt64(0)
+	}
+
+	exp := p.int()
+	mant := []byte(p.string()) // big endian
+
+	// remove leading 0's if any
+	for len(mant) > 0 && mant[0] == 0 {
+		mant = mant[1:]
+	}
+
+	// convert to little endian
+	// TODO(gri) go/constant should have a more direct conversion function
+	// (e.g., once it supports a big.Float based implementation)
+	for i, j := 0, len(mant)-1; i < j; i, j = i+1, j-1 {
+		mant[i], mant[j] = mant[j], mant[i]
+	}
+
+	// adjust exponent (constant.MakeFromBytes creates an integer value,
+	// but mant represents the mantissa bits such that 0.5 <= mant < 1.0)
+	exp -= len(mant) << 3
+	if len(mant) > 0 {
+		for msd := mant[len(mant)-1]; msd&0x80 == 0; msd <<= 1 {
+			exp++
+		}
+	}
+
+	x := constant.MakeFromBytes(mant)
+	switch {
+	case exp < 0:
+		d := constant.Shift(constant.MakeInt64(1), token.SHL, uint(-exp))
+		x = constant.BinaryOp(x, token.QUO, d)
+	case exp > 0:
+		x = constant.Shift(x, token.SHL, uint(exp))
+	}
+
+	if sign < 0 {
+		x = constant.UnaryOp(token.SUB, x, 0)
+	}
+	return x
+}
+
+// ----------------------------------------------------------------------------
+// Low-level decoders
+
+func (p *importer) tagOrIndex() int {
+	if p.debugFormat {
+		p.marker('t')
+	}
+
+	return int(p.rawInt64())
+}
+
+func (p *importer) int() int {
+	x := p.int64()
+	if int64(int(x)) != x {
+		errorf("exported integer too large")
+	}
+	return int(x)
+}
+
+func (p *importer) int64() int64 {
+	if p.debugFormat {
+		p.marker('i')
+	}
+
+	return p.rawInt64()
+}
+
+func (p *importer) path() string {
+	if p.debugFormat {
+		p.marker('p')
+	}
+	// if the path was seen before, i is its index (>= 0)
+	// (the empty string is at index 0)
+	i := p.rawInt64()
+	if i >= 0 {
+		return p.pathList[i]
+	}
+	// otherwise, i is the negative path length (< 0)
+	a := make([]string, -i)
+	for n := range a {
+		a[n] = p.string()
+	}
+	s := strings.Join(a, "/")
+	p.pathList = append(p.pathList, s)
+	return s
+}
+
+func (p *importer) string() string {
+	if p.debugFormat {
+		p.marker('s')
+	}
+	// if the string was seen before, i is its index (>= 0)
+	// (the empty string is at index 0)
+	i := p.rawInt64()
+	if i >= 0 {
+		return p.strList[i]
+	}
+	// otherwise, i is the negative string length (< 0)
+	if n := int(-i); n <= cap(p.buf) {
+		p.buf = p.buf[:n]
+	} else {
+		p.buf = make([]byte, n)
+	}
+	for i := range p.buf {
+		p.buf[i] = p.rawByte()
+	}
+	s := string(p.buf)
+	p.strList = append(p.strList, s)
+	return s
+}
+
+func (p *importer) marker(want byte) {
+	if got := p.rawByte(); got != want {
+		errorf("incorrect marker: got %c; want %c (pos = %d)", got, want, p.read)
+	}
+
+	pos := p.read
+	if n := int(p.rawInt64()); n != pos {
+		errorf("incorrect position: got %d; want %d", n, pos)
+	}
+}
+
+// rawInt64 should only be used by low-level decoders.
+func (p *importer) rawInt64() int64 {
+	i, err := binary.ReadVarint(p)
+	if err != nil {
+		errorf("read error: %v", err)
+	}
+	return i
+}
+
+// rawStringln should only be used to read the initial version string.
+func (p *importer) rawStringln(b byte) string {
+	p.buf = p.buf[:0]
+	for b != '\n' {
+		p.buf = append(p.buf, b)
+		b = p.rawByte()
+	}
+	return string(p.buf)
+}
+
+// needed for binary.ReadVarint in rawInt64
+func (p *importer) ReadByte() (byte, error) {
+	return p.rawByte(), nil
+}
+
+// rawByte is the bottleneck interface for reading p.data.
+// It unescapes '|' 'S' to '$' and '|' '|' to '|'.
+// rawByte should only be used by low-level decoders.
+func (p *importer) rawByte() byte { + b := p.data[0] + r := 1 + if b == '|' { + b = p.data[1] + r = 2 + switch b { + case 'S': + b = '$' + case '|': + // nothing to do + default: + errorf("unexpected escape sequence in export data") + } + } + p.data = p.data[r:] + p.read += r + return b + +} + +// ---------------------------------------------------------------------------- +// Export format + +// Tags. Must be < 0. +const ( + // Objects + packageTag = -(iota + 1) + constTag + typeTag + varTag + funcTag + endTag + + // Types + namedTag + arrayTag + sliceTag + dddTag + structTag + pointerTag + signatureTag + interfaceTag + mapTag + chanTag + + // Values + falseTag + trueTag + int64Tag + floatTag + fractionTag // not used by gc + complexTag + stringTag + nilTag // only used by gc (appears in exported inlined function bodies) + unknownTag // not used by gc (only appears in packages with errors) + + // Type aliases + aliasTag +) + +var predecl []types.Type // initialized lazily + +func predeclared() []types.Type { + if predecl == nil { + // initialize lazily to be sure that all + // elements have been initialized before + predecl = []types.Type{ // basic types + types.Typ[types.Bool], + types.Typ[types.Int], + types.Typ[types.Int8], + types.Typ[types.Int16], + types.Typ[types.Int32], + types.Typ[types.Int64], + types.Typ[types.Uint], + types.Typ[types.Uint8], + types.Typ[types.Uint16], + types.Typ[types.Uint32], + types.Typ[types.Uint64], + types.Typ[types.Uintptr], + types.Typ[types.Float32], + types.Typ[types.Float64], + types.Typ[types.Complex64], + types.Typ[types.Complex128], + types.Typ[types.String], + + // basic type aliases + types.Universe.Lookup("byte").Type(), + types.Universe.Lookup("rune").Type(), + + // error + types.Universe.Lookup("error").Type(), + + // untyped types + types.Typ[types.UntypedBool], + types.Typ[types.UntypedInt], + types.Typ[types.UntypedRune], + types.Typ[types.UntypedFloat], + types.Typ[types.UntypedComplex], + types.Typ[types.UntypedString], + types.Typ[types.UntypedNil], + + // package unsafe + types.Typ[types.UnsafePointer], + + // invalid type + types.Typ[types.Invalid], // only appears in packages with errors + + // used internally by gc; never used by this package or in .a files + anyType{}, + } + } + return predecl +} + +type anyType struct{} + +func (t anyType) Underlying() types.Type { return t } +func (t anyType) String() string { return "any" } diff --git a/vendor/golang.org/x/tools/go/internal/gcimporter/exportdata.go b/vendor/golang.org/x/tools/go/internal/gcimporter/exportdata.go new file mode 100644 index 000000000..f33dc5613 --- /dev/null +++ b/vendor/golang.org/x/tools/go/internal/gcimporter/exportdata.go @@ -0,0 +1,93 @@ +// Copyright 2011 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// This file is a copy of $GOROOT/src/go/internal/gcimporter/exportdata.go. + +// This file implements FindExportData. + +package gcimporter + +import ( + "bufio" + "fmt" + "io" + "strconv" + "strings" +) + +func readGopackHeader(r *bufio.Reader) (name string, size int, err error) { + // See $GOROOT/include/ar.h. 
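+	// An ar(1) entry header is 60 bytes:
+	// name[16] date[12] uid[6] gid[6] mode[8] size[10] magic[2],
+	// which is exactly the 16+12+6+6+8+10+2 read below.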
+ hdr := make([]byte, 16+12+6+6+8+10+2) + _, err = io.ReadFull(r, hdr) + if err != nil { + return + } + // leave for debugging + if false { + fmt.Printf("header: %s", hdr) + } + s := strings.TrimSpace(string(hdr[16+12+6+6+8:][:10])) + size, err = strconv.Atoi(s) + if err != nil || hdr[len(hdr)-2] != '`' || hdr[len(hdr)-1] != '\n' { + err = fmt.Errorf("invalid archive header") + return + } + name = strings.TrimSpace(string(hdr[:16])) + return +} + +// FindExportData positions the reader r at the beginning of the +// export data section of an underlying GC-created object/archive +// file by reading from it. The reader must be positioned at the +// start of the file before calling this function. The hdr result +// is the string before the export data, either "$$" or "$$B". +// +func FindExportData(r *bufio.Reader) (hdr string, err error) { + // Read first line to make sure this is an object file. + line, err := r.ReadSlice('\n') + if err != nil { + err = fmt.Errorf("can't find export data (%v)", err) + return + } + + if string(line) == "!\n" { + // Archive file. Scan to __.PKGDEF. + var name string + if name, _, err = readGopackHeader(r); err != nil { + return + } + + // First entry should be __.PKGDEF. + if name != "__.PKGDEF" { + err = fmt.Errorf("go archive is missing __.PKGDEF") + return + } + + // Read first line of __.PKGDEF data, so that line + // is once again the first line of the input. + if line, err = r.ReadSlice('\n'); err != nil { + err = fmt.Errorf("can't find export data (%v)", err) + return + } + } + + // Now at __.PKGDEF in archive or still at beginning of file. + // Either way, line should begin with "go object ". + if !strings.HasPrefix(string(line), "go object ") { + err = fmt.Errorf("not a Go object file") + return + } + + // Skip over object header to export data. + // Begins after first line starting with $$. + for line[0] != '$' { + if line, err = r.ReadSlice('\n'); err != nil { + err = fmt.Errorf("can't find export data (%v)", err) + return + } + } + hdr = string(line) + + return +} diff --git a/vendor/golang.org/x/tools/go/internal/gcimporter/gcimporter.go b/vendor/golang.org/x/tools/go/internal/gcimporter/gcimporter.go new file mode 100644 index 000000000..9cf186605 --- /dev/null +++ b/vendor/golang.org/x/tools/go/internal/gcimporter/gcimporter.go @@ -0,0 +1,1078 @@ +// Copyright 2011 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// This file is a modified copy of $GOROOT/src/go/internal/gcimporter/gcimporter.go, +// but it also contains the original source-based importer code for Go1.6. +// Once we stop supporting 1.6, we can remove that code. + +// Package gcimporter provides various functions for reading +// gc-generated object files that can be used to implement the +// Importer interface defined by the Go 1.5 standard library package. +package gcimporter // import "golang.org/x/tools/go/internal/gcimporter" + +import ( + "bufio" + "errors" + "fmt" + "go/build" + "go/constant" + "go/token" + "go/types" + "io" + "io/ioutil" + "os" + "path/filepath" + "sort" + "strconv" + "strings" + "text/scanner" +) + +// debugging/development support +const debug = false + +var pkgExts = [...]string{".a", ".o"} + +// FindPkg returns the filename and unique package id for an import +// path based on package information provided by build.Import (using +// the build.Default build.Context). A relative srcDir is interpreted +// relative to the current working directory. 
+// If no file was found, an empty filename is returned. +// +func FindPkg(path, srcDir string) (filename, id string) { + if path == "" { + return + } + + var noext string + switch { + default: + // "x" -> "$GOPATH/pkg/$GOOS_$GOARCH/x.ext", "x" + // Don't require the source files to be present. + if abs, err := filepath.Abs(srcDir); err == nil { // see issue 14282 + srcDir = abs + } + bp, _ := build.Import(path, srcDir, build.FindOnly|build.AllowBinary) + if bp.PkgObj == "" { + id = path // make sure we have an id to print in error message + return + } + noext = strings.TrimSuffix(bp.PkgObj, ".a") + id = bp.ImportPath + + case build.IsLocalImport(path): + // "./x" -> "/this/directory/x.ext", "/this/directory/x" + noext = filepath.Join(srcDir, path) + id = noext + + case filepath.IsAbs(path): + // for completeness only - go/build.Import + // does not support absolute imports + // "/x" -> "/x.ext", "/x" + noext = path + id = path + } + + if false { // for debugging + if path != id { + fmt.Printf("%s -> %s\n", path, id) + } + } + + // try extensions + for _, ext := range pkgExts { + filename = noext + ext + if f, err := os.Stat(filename); err == nil && !f.IsDir() { + return + } + } + + filename = "" // not found + return +} + +// ImportData imports a package by reading the gc-generated export data, +// adds the corresponding package object to the packages map indexed by id, +// and returns the object. +// +// The packages map must contains all packages already imported. The data +// reader position must be the beginning of the export data section. The +// filename is only used in error messages. +// +// If packages[id] contains the completely imported package, that package +// can be used directly, and there is no need to call this function (but +// there is also no harm but for extra time used). +// +func ImportData(packages map[string]*types.Package, filename, id string, data io.Reader) (pkg *types.Package, err error) { + // support for parser error handling + defer func() { + switch r := recover().(type) { + case nil: + // nothing to do + case importError: + err = r + default: + panic(r) // internal error + } + }() + + var p parser + p.init(filename, id, data, packages) + pkg = p.parseExport() + + return +} + +// Import imports a gc-generated package given its import path and srcDir, adds +// the corresponding package object to the packages map, and returns the object. +// The packages map must contain all packages already imported. +// +func Import(packages map[string]*types.Package, path, srcDir string, lookup func(path string) (io.ReadCloser, error)) (pkg *types.Package, err error) { + var rc io.ReadCloser + var filename, id string + if lookup != nil { + // With custom lookup specified, assume that caller has + // converted path to a canonical import path for use in the map. + if path == "unsafe" { + return types.Unsafe, nil + } + id = path + + // No need to re-import if the package was imported completely before. 
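+		// (Complete() reports whether the package's scope already holds
+		// all of its exported objects.)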
+ if pkg = packages[id]; pkg != nil && pkg.Complete() { + return + } + f, err := lookup(path) + if err != nil { + return nil, err + } + rc = f + } else { + filename, id = FindPkg(path, srcDir) + if filename == "" { + if path == "unsafe" { + return types.Unsafe, nil + } + return nil, fmt.Errorf("can't find import: %q", id) + } + + // no need to re-import if the package was imported completely before + if pkg = packages[id]; pkg != nil && pkg.Complete() { + return + } + + // open file + f, err := os.Open(filename) + if err != nil { + return nil, err + } + defer func() { + if err != nil { + // add file name to error + err = fmt.Errorf("%s: %v", filename, err) + } + }() + rc = f + } + defer rc.Close() + + var hdr string + buf := bufio.NewReader(rc) + if hdr, err = FindExportData(buf); err != nil { + return + } + + switch hdr { + case "$$\n": + // Work-around if we don't have a filename; happens only if lookup != nil. + // Either way, the filename is only needed for importer error messages, so + // this is fine. + if filename == "" { + filename = path + } + return ImportData(packages, filename, id, buf) + + case "$$B\n": + var data []byte + data, err = ioutil.ReadAll(buf) + if err != nil { + break + } + + // TODO(gri): allow clients of go/importer to provide a FileSet. + // Or, define a new standard go/types/gcexportdata package. + fset := token.NewFileSet() + + // The indexed export format starts with an 'i'; the older + // binary export format starts with a 'c', 'd', or 'v' + // (from "version"). Select appropriate importer. + if len(data) > 0 && data[0] == 'i' { + _, pkg, err = IImportData(fset, packages, data[1:], id) + } else { + _, pkg, err = BImportData(fset, packages, data, id) + } + + default: + err = fmt.Errorf("unknown export data header: %q", hdr) + } + + return +} + +// ---------------------------------------------------------------------------- +// Parser + +// TODO(gri) Imported objects don't have position information. +// Ideally use the debug table line info; alternatively +// create some fake position (or the position of the +// import). That way error messages referring to imported +// objects can print meaningful information. + +// parser parses the exports inside a gc compiler-produced +// object/archive file and populates its scope with the results. 
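+//
+// The parser is a straightforward recursive-descent parser over the
+// tokens produced by text/scanner; the grammar production handled by
+// each parse method is given in the comment preceding it.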
+type parser struct { + scanner scanner.Scanner + tok rune // current token + lit string // literal string; only valid for Ident, Int, String tokens + id string // package id of imported package + sharedPkgs map[string]*types.Package // package id -> package object (across importer) + localPkgs map[string]*types.Package // package id -> package object (just this package) +} + +func (p *parser) init(filename, id string, src io.Reader, packages map[string]*types.Package) { + p.scanner.Init(src) + p.scanner.Error = func(_ *scanner.Scanner, msg string) { p.error(msg) } + p.scanner.Mode = scanner.ScanIdents | scanner.ScanInts | scanner.ScanChars | scanner.ScanStrings | scanner.ScanComments | scanner.SkipComments + p.scanner.Whitespace = 1<<'\t' | 1<<' ' + p.scanner.Filename = filename // for good error messages + p.next() + p.id = id + p.sharedPkgs = packages + if debug { + // check consistency of packages map + for _, pkg := range packages { + if pkg.Name() == "" { + fmt.Printf("no package name for %s\n", pkg.Path()) + } + } + } +} + +func (p *parser) next() { + p.tok = p.scanner.Scan() + switch p.tok { + case scanner.Ident, scanner.Int, scanner.Char, scanner.String, '·': + p.lit = p.scanner.TokenText() + default: + p.lit = "" + } + if debug { + fmt.Printf("%s: %q -> %q\n", scanner.TokenString(p.tok), p.scanner.TokenText(), p.lit) + } +} + +func declTypeName(pkg *types.Package, name string) *types.TypeName { + scope := pkg.Scope() + if obj := scope.Lookup(name); obj != nil { + return obj.(*types.TypeName) + } + obj := types.NewTypeName(token.NoPos, pkg, name, nil) + // a named type may be referred to before the underlying type + // is known - set it up + types.NewNamed(obj, nil, nil) + scope.Insert(obj) + return obj +} + +// ---------------------------------------------------------------------------- +// Error handling + +// Internal errors are boxed as importErrors. +type importError struct { + pos scanner.Position + err error +} + +func (e importError) Error() string { + return fmt.Sprintf("import error %s (byte offset = %d): %s", e.pos, e.pos.Offset, e.err) +} + +func (p *parser) error(err interface{}) { + if s, ok := err.(string); ok { + err = errors.New(s) + } + // panic with a runtime.Error if err is not an error + panic(importError{p.scanner.Pos(), err.(error)}) +} + +func (p *parser) errorf(format string, args ...interface{}) { + p.error(fmt.Sprintf(format, args...)) +} + +func (p *parser) expect(tok rune) string { + lit := p.lit + if p.tok != tok { + p.errorf("expected %s, got %s (%s)", scanner.TokenString(tok), scanner.TokenString(p.tok), lit) + } + p.next() + return lit +} + +func (p *parser) expectSpecial(tok string) { + sep := 'x' // not white space + i := 0 + for i < len(tok) && p.tok == rune(tok[i]) && sep > ' ' { + sep = p.scanner.Peek() // if sep <= ' ', there is white space before the next token + p.next() + i++ + } + if i < len(tok) { + p.errorf("expected %q, got %q", tok, tok[0:i]) + } +} + +func (p *parser) expectKeyword(keyword string) { + lit := p.expect(scanner.Ident) + if lit != keyword { + p.errorf("expected keyword %s, got %q", keyword, lit) + } +} + +// ---------------------------------------------------------------------------- +// Qualified and unqualified names + +// PackageId = string_lit . 
+// +func (p *parser) parsePackageId() string { + id, err := strconv.Unquote(p.expect(scanner.String)) + if err != nil { + p.error(err) + } + // id == "" stands for the imported package id + // (only known at time of package installation) + if id == "" { + id = p.id + } + return id +} + +// PackageName = ident . +// +func (p *parser) parsePackageName() string { + return p.expect(scanner.Ident) +} + +// dotIdentifier = ( ident | '·' ) { ident | int | '·' } . +func (p *parser) parseDotIdent() string { + ident := "" + if p.tok != scanner.Int { + sep := 'x' // not white space + for (p.tok == scanner.Ident || p.tok == scanner.Int || p.tok == '·') && sep > ' ' { + ident += p.lit + sep = p.scanner.Peek() // if sep <= ' ', there is white space before the next token + p.next() + } + } + if ident == "" { + p.expect(scanner.Ident) // use expect() for error handling + } + return ident +} + +// QualifiedName = "@" PackageId "." ( "?" | dotIdentifier ) . +// +func (p *parser) parseQualifiedName() (id, name string) { + p.expect('@') + id = p.parsePackageId() + p.expect('.') + // Per rev f280b8a485fd (10/2/2013), qualified names may be used for anonymous fields. + if p.tok == '?' { + p.next() + } else { + name = p.parseDotIdent() + } + return +} + +// getPkg returns the package for a given id. If the package is +// not found, create the package and add it to the p.localPkgs +// and p.sharedPkgs maps. name is the (expected) name of the +// package. If name == "", the package name is expected to be +// set later via an import clause in the export data. +// +// id identifies a package, usually by a canonical package path like +// "encoding/json" but possibly by a non-canonical import path like +// "./json". +// +func (p *parser) getPkg(id, name string) *types.Package { + // package unsafe is not in the packages maps - handle explicitly + if id == "unsafe" { + return types.Unsafe + } + + pkg := p.localPkgs[id] + if pkg == nil { + // first import of id from this package + pkg = p.sharedPkgs[id] + if pkg == nil { + // first import of id by this importer; + // add (possibly unnamed) pkg to shared packages + pkg = types.NewPackage(id, name) + p.sharedPkgs[id] = pkg + } + // add (possibly unnamed) pkg to local packages + if p.localPkgs == nil { + p.localPkgs = make(map[string]*types.Package) + } + p.localPkgs[id] = pkg + } else if name != "" { + // package exists already and we have an expected package name; + // make sure names match or set package name if necessary + if pname := pkg.Name(); pname == "" { + pkg.SetName(name) + } else if pname != name { + p.errorf("%s package name mismatch: %s (given) vs %s (expected)", id, pname, name) + } + } + return pkg +} + +// parseExportedName is like parseQualifiedName, but +// the package id is resolved to an imported *types.Package. +// +func (p *parser) parseExportedName() (pkg *types.Package, name string) { + id, name := p.parseQualifiedName() + pkg = p.getPkg(id, "") + return +} + +// ---------------------------------------------------------------------------- +// Types + +// BasicType = identifier . +// +func (p *parser) parseBasicType() types.Type { + id := p.expect(scanner.Ident) + obj := types.Universe.Lookup(id) + if obj, ok := obj.(*types.TypeName); ok { + return obj.Type() + } + p.errorf("not a basic type: %s", id) + return nil +} + +// ArrayType = "[" int_lit "]" Type . 
+//
+func (p *parser) parseArrayType(parent *types.Package) types.Type {
+	// "[" already consumed and lookahead known not to be "]"
+	lit := p.expect(scanner.Int)
+	p.expect(']')
+	elem := p.parseType(parent)
+	n, err := strconv.ParseInt(lit, 10, 64)
+	if err != nil {
+		p.error(err)
+	}
+	return types.NewArray(elem, n)
+}
+
+// MapType = "map" "[" Type "]" Type .
+//
+func (p *parser) parseMapType(parent *types.Package) types.Type {
+	p.expectKeyword("map")
+	p.expect('[')
+	key := p.parseType(parent)
+	p.expect(']')
+	elem := p.parseType(parent)
+	return types.NewMap(key, elem)
+}
+
+// Name = identifier | "?" | QualifiedName .
+//
+// For unqualified and anonymous names, the returned package is the parent
+// package unless parent == nil, in which case the returned package is the
+// package being imported. (The parent package is not nil if the name
+// is an unqualified struct field or interface method name belonging to a
+// type declared in another package.)
+//
+// For qualified names, the returned package is nil (and not created if
+// it doesn't exist yet) unless materializePkg is set (which creates an
+// unnamed package with valid package path). In the latter case, a
+// subsequent import clause is expected to provide a name for the package.
+//
+func (p *parser) parseName(parent *types.Package, materializePkg bool) (pkg *types.Package, name string) {
+	pkg = parent
+	if pkg == nil {
+		pkg = p.sharedPkgs[p.id]
+	}
+	switch p.tok {
+	case scanner.Ident:
+		name = p.lit
+		p.next()
+	case '?':
+		// anonymous
+		p.next()
+	case '@':
+		// exported name prefixed with package path
+		pkg = nil
+		var id string
+		id, name = p.parseQualifiedName()
+		if materializePkg {
+			pkg = p.getPkg(id, "")
+		}
+	default:
+		p.error("name expected")
+	}
+	return
+}
+
+func deref(typ types.Type) types.Type {
+	if p, _ := typ.(*types.Pointer); p != nil {
+		return p.Elem()
+	}
+	return typ
+}
+
+// Field = Name Type [ string_lit ] .
+//
+func (p *parser) parseField(parent *types.Package) (*types.Var, string) {
+	pkg, name := p.parseName(parent, true)
+
+	if name == "_" {
+		// Blank fields should be package-qualified because they
+		// are unexported identifiers, but gc does not qualify them.
+		// Assuming that the ident belongs to the current package
+		// causes types to change during re-exporting, leading
+		// to spurious "can't assign A to B" errors from go/types.
+		// As a workaround, pretend all blank fields belong
+		// to the same unique dummy package.
+		const blankpkg = "<_>"
+		pkg = p.getPkg(blankpkg, blankpkg)
+	}
+
+	typ := p.parseType(parent)
+	anonymous := false
+	if name == "" {
+		// anonymous field - typ must be T or *T and T must be a type name
+		switch typ := deref(typ).(type) {
+		case *types.Basic: // basic types are named types
+			pkg = nil // objects defined in Universe scope have no package
+			name = typ.Name()
+		case *types.Named:
+			name = typ.Obj().Name()
+		default:
+			p.errorf("anonymous field expected")
+		}
+		anonymous = true
+	}
+	tag := ""
+	if p.tok == scanner.String {
+		s := p.expect(scanner.String)
+		var err error
+		tag, err = strconv.Unquote(s)
+		if err != nil {
+			p.errorf("invalid struct tag %s: %s", s, err)
+		}
+	}
+	return types.NewField(token.NoPos, pkg, name, typ, anonymous), tag
+}
+
+// StructType = "struct" "{" [ FieldList ] "}" .
+// FieldList = Field { ";" Field } .
+// +func (p *parser) parseStructType(parent *types.Package) types.Type { + var fields []*types.Var + var tags []string + + p.expectKeyword("struct") + p.expect('{') + for i := 0; p.tok != '}' && p.tok != scanner.EOF; i++ { + if i > 0 { + p.expect(';') + } + fld, tag := p.parseField(parent) + if tag != "" && tags == nil { + tags = make([]string, i) + } + if tags != nil { + tags = append(tags, tag) + } + fields = append(fields, fld) + } + p.expect('}') + + return types.NewStruct(fields, tags) +} + +// Parameter = ( identifier | "?" ) [ "..." ] Type [ string_lit ] . +// +func (p *parser) parseParameter() (par *types.Var, isVariadic bool) { + _, name := p.parseName(nil, false) + // remove gc-specific parameter numbering + if i := strings.Index(name, "·"); i >= 0 { + name = name[:i] + } + if p.tok == '.' { + p.expectSpecial("...") + isVariadic = true + } + typ := p.parseType(nil) + if isVariadic { + typ = types.NewSlice(typ) + } + // ignore argument tag (e.g. "noescape") + if p.tok == scanner.String { + p.next() + } + // TODO(gri) should we provide a package? + par = types.NewVar(token.NoPos, nil, name, typ) + return +} + +// Parameters = "(" [ ParameterList ] ")" . +// ParameterList = { Parameter "," } Parameter . +// +func (p *parser) parseParameters() (list []*types.Var, isVariadic bool) { + p.expect('(') + for p.tok != ')' && p.tok != scanner.EOF { + if len(list) > 0 { + p.expect(',') + } + par, variadic := p.parseParameter() + list = append(list, par) + if variadic { + if isVariadic { + p.error("... not on final argument") + } + isVariadic = true + } + } + p.expect(')') + + return +} + +// Signature = Parameters [ Result ] . +// Result = Type | Parameters . +// +func (p *parser) parseSignature(recv *types.Var) *types.Signature { + params, isVariadic := p.parseParameters() + + // optional result type + var results []*types.Var + if p.tok == '(' { + var variadic bool + results, variadic = p.parseParameters() + if variadic { + p.error("... not permitted on result type") + } + } + + return types.NewSignature(recv, types.NewTuple(params...), types.NewTuple(results...), isVariadic) +} + +// InterfaceType = "interface" "{" [ MethodList ] "}" . +// MethodList = Method { ";" Method } . +// Method = Name Signature . +// +// The methods of embedded interfaces are always "inlined" +// by the compiler and thus embedded interfaces are never +// visible in the export data. +// +func (p *parser) parseInterfaceType(parent *types.Package) types.Type { + var methods []*types.Func + + p.expectKeyword("interface") + p.expect('{') + for i := 0; p.tok != '}' && p.tok != scanner.EOF; i++ { + if i > 0 { + p.expect(';') + } + pkg, name := p.parseName(parent, true) + sig := p.parseSignature(nil) + methods = append(methods, types.NewFunc(token.NoPos, pkg, name, sig)) + } + p.expect('}') + + // Complete requires the type's embedded interfaces to be fully defined, + // but we do not define any + return types.NewInterface(methods, nil).Complete() +} + +// ChanType = ( "chan" [ "<-" ] | "<-" "chan" ) Type . 
+// +func (p *parser) parseChanType(parent *types.Package) types.Type { + dir := types.SendRecv + if p.tok == scanner.Ident { + p.expectKeyword("chan") + if p.tok == '<' { + p.expectSpecial("<-") + dir = types.SendOnly + } + } else { + p.expectSpecial("<-") + p.expectKeyword("chan") + dir = types.RecvOnly + } + elem := p.parseType(parent) + return types.NewChan(dir, elem) +} + +// Type = +// BasicType | TypeName | ArrayType | SliceType | StructType | +// PointerType | FuncType | InterfaceType | MapType | ChanType | +// "(" Type ")" . +// +// BasicType = ident . +// TypeName = ExportedName . +// SliceType = "[" "]" Type . +// PointerType = "*" Type . +// FuncType = "func" Signature . +// +func (p *parser) parseType(parent *types.Package) types.Type { + switch p.tok { + case scanner.Ident: + switch p.lit { + default: + return p.parseBasicType() + case "struct": + return p.parseStructType(parent) + case "func": + // FuncType + p.next() + return p.parseSignature(nil) + case "interface": + return p.parseInterfaceType(parent) + case "map": + return p.parseMapType(parent) + case "chan": + return p.parseChanType(parent) + } + case '@': + // TypeName + pkg, name := p.parseExportedName() + return declTypeName(pkg, name).Type() + case '[': + p.next() // look ahead + if p.tok == ']' { + // SliceType + p.next() + return types.NewSlice(p.parseType(parent)) + } + return p.parseArrayType(parent) + case '*': + // PointerType + p.next() + return types.NewPointer(p.parseType(parent)) + case '<': + return p.parseChanType(parent) + case '(': + // "(" Type ")" + p.next() + typ := p.parseType(parent) + p.expect(')') + return typ + } + p.errorf("expected type, got %s (%q)", scanner.TokenString(p.tok), p.lit) + return nil +} + +// ---------------------------------------------------------------------------- +// Declarations + +// ImportDecl = "import" PackageName PackageId . +// +func (p *parser) parseImportDecl() { + p.expectKeyword("import") + name := p.parsePackageName() + p.getPkg(p.parsePackageId(), name) +} + +// int_lit = [ "+" | "-" ] { "0" ... "9" } . +// +func (p *parser) parseInt() string { + s := "" + switch p.tok { + case '-': + s = "-" + p.next() + case '+': + p.next() + } + return s + p.expect(scanner.Int) +} + +// number = int_lit [ "p" int_lit ] . +// +func (p *parser) parseNumber() (typ *types.Basic, val constant.Value) { + // mantissa + mant := constant.MakeFromLiteral(p.parseInt(), token.INT, 0) + if mant == nil { + panic("invalid mantissa") + } + + if p.lit == "p" { + // exponent (base 2) + p.next() + exp, err := strconv.ParseInt(p.parseInt(), 10, 0) + if err != nil { + p.error(err) + } + if exp < 0 { + denom := constant.MakeInt64(1) + denom = constant.Shift(denom, token.SHL, uint(-exp)) + typ = types.Typ[types.UntypedFloat] + val = constant.BinaryOp(mant, token.QUO, denom) + return + } + if exp > 0 { + mant = constant.Shift(mant, token.SHL, uint(exp)) + } + typ = types.Typ[types.UntypedFloat] + val = mant + return + } + + typ = types.Typ[types.UntypedInt] + val = mant + return +} + +// ConstDecl = "const" ExportedName [ Type ] "=" Literal . +// Literal = bool_lit | int_lit | float_lit | complex_lit | rune_lit | string_lit . +// bool_lit = "true" | "false" . +// complex_lit = "(" float_lit "+" float_lit "i" ")" . +// rune_lit = "(" int_lit "+" int_lit ")" . +// string_lit = `"` { unicode_char } `"` . 
+// +func (p *parser) parseConstDecl() { + p.expectKeyword("const") + pkg, name := p.parseExportedName() + + var typ0 types.Type + if p.tok != '=' { + // constant types are never structured - no need for parent type + typ0 = p.parseType(nil) + } + + p.expect('=') + var typ types.Type + var val constant.Value + switch p.tok { + case scanner.Ident: + // bool_lit + if p.lit != "true" && p.lit != "false" { + p.error("expected true or false") + } + typ = types.Typ[types.UntypedBool] + val = constant.MakeBool(p.lit == "true") + p.next() + + case '-', scanner.Int: + // int_lit + typ, val = p.parseNumber() + + case '(': + // complex_lit or rune_lit + p.next() + if p.tok == scanner.Char { + p.next() + p.expect('+') + typ = types.Typ[types.UntypedRune] + _, val = p.parseNumber() + p.expect(')') + break + } + _, re := p.parseNumber() + p.expect('+') + _, im := p.parseNumber() + p.expectKeyword("i") + p.expect(')') + typ = types.Typ[types.UntypedComplex] + val = constant.BinaryOp(re, token.ADD, constant.MakeImag(im)) + + case scanner.Char: + // rune_lit + typ = types.Typ[types.UntypedRune] + val = constant.MakeFromLiteral(p.lit, token.CHAR, 0) + p.next() + + case scanner.String: + // string_lit + typ = types.Typ[types.UntypedString] + val = constant.MakeFromLiteral(p.lit, token.STRING, 0) + p.next() + + default: + p.errorf("expected literal got %s", scanner.TokenString(p.tok)) + } + + if typ0 == nil { + typ0 = typ + } + + pkg.Scope().Insert(types.NewConst(token.NoPos, pkg, name, typ0, val)) +} + +// TypeDecl = "type" ExportedName Type . +// +func (p *parser) parseTypeDecl() { + p.expectKeyword("type") + pkg, name := p.parseExportedName() + obj := declTypeName(pkg, name) + + // The type object may have been imported before and thus already + // have a type associated with it. We still need to parse the type + // structure, but throw it away if the object already has a type. + // This ensures that all imports refer to the same type object for + // a given type declaration. + typ := p.parseType(pkg) + + if name := obj.Type().(*types.Named); name.Underlying() == nil { + name.SetUnderlying(typ) + } +} + +// VarDecl = "var" ExportedName Type . +// +func (p *parser) parseVarDecl() { + p.expectKeyword("var") + pkg, name := p.parseExportedName() + typ := p.parseType(pkg) + pkg.Scope().Insert(types.NewVar(token.NoPos, pkg, name, typ)) +} + +// Func = Signature [ Body ] . +// Body = "{" ... "}" . +// +func (p *parser) parseFunc(recv *types.Var) *types.Signature { + sig := p.parseSignature(recv) + if p.tok == '{' { + p.next() + for i := 1; i > 0; p.next() { + switch p.tok { + case '{': + i++ + case '}': + i-- + } + } + } + return sig +} + +// MethodDecl = "func" Receiver Name Func . +// Receiver = "(" ( identifier | "?" ) [ "*" ] ExportedName ")" . +// +func (p *parser) parseMethodDecl() { + // "func" already consumed + p.expect('(') + recv, _ := p.parseParameter() // receiver + p.expect(')') + + // determine receiver base type object + base := deref(recv.Type()).(*types.Named) + + // parse method name, signature, and possibly inlined body + _, name := p.parseName(nil, false) + sig := p.parseFunc(recv) + + // methods always belong to the same package as the base type object + pkg := base.Obj().Pkg() + + // add method to type unless type was imported before + // and method exists already + // TODO(gri) This leads to a quadratic algorithm - ok for now because method counts are small. + base.AddMethod(types.NewFunc(token.NoPos, pkg, name, sig)) +} + +// FuncDecl = "func" ExportedName Func . 
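+//
+// For example (an illustrative sketch, not verbatim compiler output):
+//
+//	func @"strings".Repeat (s·2 string, count·3 int) string
+//
+// where the gc-specific parameter numbering (s·2) is stripped by
+// parseParameter.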
+//
+func (p *parser) parseFuncDecl() {
+	// "func" already consumed
+	pkg, name := p.parseExportedName()
+	typ := p.parseFunc(nil)
+	pkg.Scope().Insert(types.NewFunc(token.NoPos, pkg, name, typ))
+}
+
+// Decl = [ ImportDecl | ConstDecl | TypeDecl | VarDecl | FuncDecl | MethodDecl ] "\n" .
+//
+func (p *parser) parseDecl() {
+	if p.tok == scanner.Ident {
+		switch p.lit {
+		case "import":
+			p.parseImportDecl()
+		case "const":
+			p.parseConstDecl()
+		case "type":
+			p.parseTypeDecl()
+		case "var":
+			p.parseVarDecl()
+		case "func":
+			p.next() // look ahead
+			if p.tok == '(' {
+				p.parseMethodDecl()
+			} else {
+				p.parseFuncDecl()
+			}
+		}
+	}
+	p.expect('\n')
+}
+
+// ----------------------------------------------------------------------------
+// Export
+
+// Export = PackageClause { Decl } "$$" .
+// PackageClause = "package" PackageName [ "safe" ] "\n" .
+//
+func (p *parser) parseExport() *types.Package {
+	p.expectKeyword("package")
+	name := p.parsePackageName()
+	if p.tok == scanner.Ident && p.lit == "safe" {
+		// package was compiled with -u option - ignore
+		p.next()
+	}
+	p.expect('\n')
+
+	pkg := p.getPkg(p.id, name)
+
+	for p.tok != '$' && p.tok != scanner.EOF {
+		p.parseDecl()
+	}
+
+	if ch := p.scanner.Peek(); p.tok != '$' || ch != '$' {
+		// don't call next()/expect() since reading past the
+		// export data may cause scanner errors (e.g. NUL chars)
+		p.errorf("expected '$$', got %s %c", scanner.TokenString(p.tok), ch)
+	}
+
+	if n := p.scanner.ErrorCount; n != 0 {
+		p.errorf("expected no scanner errors, got %d", n)
+	}
+
+	// Record all locally referenced packages as imports.
+	var imports []*types.Package
+	for id, pkg2 := range p.localPkgs {
+		if pkg2.Name() == "" {
+			p.errorf("%s package has no name", id)
+		}
+		if id == p.id {
+			continue // avoid self-edge
+		}
+		imports = append(imports, pkg2)
+	}
+	sort.Sort(byPath(imports))
+	pkg.SetImports(imports)
+
+	// package was imported completely and without errors
+	pkg.MarkComplete()
+
+	return pkg
+}
+
+type byPath []*types.Package
+
+func (a byPath) Len() int           { return len(a) }
+func (a byPath) Swap(i, j int)      { a[i], a[j] = a[j], a[i] }
+func (a byPath) Less(i, j int) bool { return a[i].Path() < a[j].Path() }
diff --git a/vendor/golang.org/x/tools/go/internal/gcimporter/iexport.go b/vendor/golang.org/x/tools/go/internal/gcimporter/iexport.go
new file mode 100644
index 000000000..be671c79b
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/internal/gcimporter/iexport.go
@@ -0,0 +1,723 @@
+// Copyright 2019 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Indexed binary package export.
+// This file was derived from $GOROOT/src/cmd/compile/internal/gc/iexport.go;
+// see that file for specification of the format.
+
+// +build go1.11
+
+package gcimporter
+
+import (
+	"bytes"
+	"encoding/binary"
+	"go/ast"
+	"go/constant"
+	"go/token"
+	"go/types"
+	"io"
+	"math/big"
+	"reflect"
+	"sort"
+)
+
+// Current indexed export format version. Increase with each format change.
+// 0: Go1.11 encoding
+const iexportVersion = 0
+
+// IExportData returns the binary export data for pkg.
+// If no file set is provided, position info will be missing.
+func IExportData(fset *token.FileSet, pkg *types.Package) (b []byte, err error) {
+	defer func() {
+		if e := recover(); e != nil {
+			if ierr, ok := e.(internalError); ok {
+				err = ierr
+				return
+			}
+			// Not an internal error; panic again.
+ panic(e) + } + }() + + p := iexporter{ + out: bytes.NewBuffer(nil), + fset: fset, + allPkgs: map[*types.Package]bool{}, + stringIndex: map[string]uint64{}, + declIndex: map[types.Object]uint64{}, + typIndex: map[types.Type]uint64{}, + } + + for i, pt := range predeclared() { + p.typIndex[pt] = uint64(i) + } + if len(p.typIndex) > predeclReserved { + panic(internalErrorf("too many predeclared types: %d > %d", len(p.typIndex), predeclReserved)) + } + + // Initialize work queue with exported declarations. + scope := pkg.Scope() + for _, name := range scope.Names() { + if ast.IsExported(name) { + p.pushDecl(scope.Lookup(name)) + } + } + + // Loop until no more work. + for !p.declTodo.empty() { + p.doDecl(p.declTodo.popHead()) + } + + // Append indices to data0 section. + dataLen := uint64(p.data0.Len()) + w := p.newWriter() + w.writeIndex(p.declIndex, pkg) + w.flush() + + // Assemble header. + var hdr intWriter + hdr.WriteByte('i') + hdr.uint64(iexportVersion) + hdr.uint64(uint64(p.strings.Len())) + hdr.uint64(dataLen) + + // Flush output. + io.Copy(p.out, &hdr) + io.Copy(p.out, &p.strings) + io.Copy(p.out, &p.data0) + + return p.out.Bytes(), nil +} + +// writeIndex writes out an object index. mainIndex indicates whether +// we're writing out the main index, which is also read by +// non-compiler tools and includes a complete package description +// (i.e., name and height). +func (w *exportWriter) writeIndex(index map[types.Object]uint64, localpkg *types.Package) { + // Build a map from packages to objects from that package. + pkgObjs := map[*types.Package][]types.Object{} + + // For the main index, make sure to include every package that + // we reference, even if we're not exporting (or reexporting) + // any symbols from it. + pkgObjs[localpkg] = nil + for pkg := range w.p.allPkgs { + pkgObjs[pkg] = nil + } + + for obj := range index { + pkgObjs[obj.Pkg()] = append(pkgObjs[obj.Pkg()], obj) + } + + var pkgs []*types.Package + for pkg, objs := range pkgObjs { + pkgs = append(pkgs, pkg) + + sort.Slice(objs, func(i, j int) bool { + return objs[i].Name() < objs[j].Name() + }) + } + + sort.Slice(pkgs, func(i, j int) bool { + return pkgs[i].Path() < pkgs[j].Path() + }) + + w.uint64(uint64(len(pkgs))) + for _, pkg := range pkgs { + w.string(pkg.Path()) + w.string(pkg.Name()) + w.uint64(uint64(0)) // package height is not needed for go/types + + objs := pkgObjs[pkg] + w.uint64(uint64(len(objs))) + for _, obj := range objs { + w.string(obj.Name()) + w.uint64(index[obj]) + } + } +} + +type iexporter struct { + fset *token.FileSet + out *bytes.Buffer + + // allPkgs tracks all packages that have been referenced by + // the export data, so we can ensure to include them in the + // main index. + allPkgs map[*types.Package]bool + + declTodo objQueue + + strings intWriter + stringIndex map[string]uint64 + + data0 intWriter + declIndex map[types.Object]uint64 + typIndex map[types.Type]uint64 +} + +// stringOff returns the offset of s within the string section. +// If not already present, it's added to the end. +func (p *iexporter) stringOff(s string) uint64 { + off, ok := p.stringIndex[s] + if !ok { + off = uint64(p.strings.Len()) + p.stringIndex[s] = off + + p.strings.uint64(uint64(len(s))) + p.strings.WriteString(s) + } + return off +} + +// pushDecl adds n to the declaration work queue, if not already present. +func (p *iexporter) pushDecl(obj types.Object) { + // Package unsafe is known to the compiler and predeclared. 
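+	// Its objects are therefore never written to export data, so none
+	// should ever be pushed here.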
+ assert(obj.Pkg() != types.Unsafe) + + if _, ok := p.declIndex[obj]; ok { + return + } + + p.declIndex[obj] = ^uint64(0) // mark n present in work queue + p.declTodo.pushTail(obj) +} + +// exportWriter handles writing out individual data section chunks. +type exportWriter struct { + p *iexporter + + data intWriter + currPkg *types.Package + prevFile string + prevLine int64 +} + +func (p *iexporter) doDecl(obj types.Object) { + w := p.newWriter() + w.setPkg(obj.Pkg(), false) + + switch obj := obj.(type) { + case *types.Var: + w.tag('V') + w.pos(obj.Pos()) + w.typ(obj.Type(), obj.Pkg()) + + case *types.Func: + sig, _ := obj.Type().(*types.Signature) + if sig.Recv() != nil { + panic(internalErrorf("unexpected method: %v", sig)) + } + w.tag('F') + w.pos(obj.Pos()) + w.signature(sig) + + case *types.Const: + w.tag('C') + w.pos(obj.Pos()) + w.value(obj.Type(), obj.Val()) + + case *types.TypeName: + if obj.IsAlias() { + w.tag('A') + w.pos(obj.Pos()) + w.typ(obj.Type(), obj.Pkg()) + break + } + + // Defined type. + w.tag('T') + w.pos(obj.Pos()) + + underlying := obj.Type().Underlying() + w.typ(underlying, obj.Pkg()) + + t := obj.Type() + if types.IsInterface(t) { + break + } + + named, ok := t.(*types.Named) + if !ok { + panic(internalErrorf("%s is not a defined type", t)) + } + + n := named.NumMethods() + w.uint64(uint64(n)) + for i := 0; i < n; i++ { + m := named.Method(i) + w.pos(m.Pos()) + w.string(m.Name()) + sig, _ := m.Type().(*types.Signature) + w.param(sig.Recv()) + w.signature(sig) + } + + default: + panic(internalErrorf("unexpected object: %v", obj)) + } + + p.declIndex[obj] = w.flush() +} + +func (w *exportWriter) tag(tag byte) { + w.data.WriteByte(tag) +} + +func (w *exportWriter) pos(pos token.Pos) { + p := w.p.fset.Position(pos) + file := p.Filename + line := int64(p.Line) + + // When file is the same as the last position (common case), + // we can save a few bytes by delta encoding just the line + // number. + // + // Note: Because data objects may be read out of order (or not + // at all), we can only apply delta encoding within a single + // object. This is handled implicitly by tracking prevFile and + // prevLine as fields of exportWriter. + + if file == w.prevFile { + delta := line - w.prevLine + w.int64(delta) + if delta == deltaNewFile { + w.int64(-1) + } + } else { + w.int64(deltaNewFile) + w.int64(line) // line >= 0 + w.string(file) + w.prevFile = file + } + w.prevLine = line +} + +func (w *exportWriter) pkg(pkg *types.Package) { + // Ensure any referenced packages are declared in the main index. + w.p.allPkgs[pkg] = true + + w.string(pkg.Path()) +} + +func (w *exportWriter) qualifiedIdent(obj types.Object) { + // Ensure any referenced declarations are written out too. 
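+	// (pushDecl is a no-op for declarations that are already indexed or queued.)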
+ w.p.pushDecl(obj) + + w.string(obj.Name()) + w.pkg(obj.Pkg()) +} + +func (w *exportWriter) typ(t types.Type, pkg *types.Package) { + w.data.uint64(w.p.typOff(t, pkg)) +} + +func (p *iexporter) newWriter() *exportWriter { + return &exportWriter{p: p} +} + +func (w *exportWriter) flush() uint64 { + off := uint64(w.p.data0.Len()) + io.Copy(&w.p.data0, &w.data) + return off +} + +func (p *iexporter) typOff(t types.Type, pkg *types.Package) uint64 { + off, ok := p.typIndex[t] + if !ok { + w := p.newWriter() + w.doTyp(t, pkg) + off = predeclReserved + w.flush() + p.typIndex[t] = off + } + return off +} + +func (w *exportWriter) startType(k itag) { + w.data.uint64(uint64(k)) +} + +func (w *exportWriter) doTyp(t types.Type, pkg *types.Package) { + switch t := t.(type) { + case *types.Named: + w.startType(definedType) + w.qualifiedIdent(t.Obj()) + + case *types.Pointer: + w.startType(pointerType) + w.typ(t.Elem(), pkg) + + case *types.Slice: + w.startType(sliceType) + w.typ(t.Elem(), pkg) + + case *types.Array: + w.startType(arrayType) + w.uint64(uint64(t.Len())) + w.typ(t.Elem(), pkg) + + case *types.Chan: + w.startType(chanType) + // 1 RecvOnly; 2 SendOnly; 3 SendRecv + var dir uint64 + switch t.Dir() { + case types.RecvOnly: + dir = 1 + case types.SendOnly: + dir = 2 + case types.SendRecv: + dir = 3 + } + w.uint64(dir) + w.typ(t.Elem(), pkg) + + case *types.Map: + w.startType(mapType) + w.typ(t.Key(), pkg) + w.typ(t.Elem(), pkg) + + case *types.Signature: + w.startType(signatureType) + w.setPkg(pkg, true) + w.signature(t) + + case *types.Struct: + w.startType(structType) + w.setPkg(pkg, true) + + n := t.NumFields() + w.uint64(uint64(n)) + for i := 0; i < n; i++ { + f := t.Field(i) + w.pos(f.Pos()) + w.string(f.Name()) + w.typ(f.Type(), pkg) + w.bool(f.Embedded()) + w.string(t.Tag(i)) // note (or tag) + } + + case *types.Interface: + w.startType(interfaceType) + w.setPkg(pkg, true) + + n := t.NumEmbeddeds() + w.uint64(uint64(n)) + for i := 0; i < n; i++ { + f := t.Embedded(i) + w.pos(f.Obj().Pos()) + w.typ(f.Obj().Type(), f.Obj().Pkg()) + } + + n = t.NumExplicitMethods() + w.uint64(uint64(n)) + for i := 0; i < n; i++ { + m := t.ExplicitMethod(i) + w.pos(m.Pos()) + w.string(m.Name()) + sig, _ := m.Type().(*types.Signature) + w.signature(sig) + } + + default: + panic(internalErrorf("unexpected type: %v, %v", t, reflect.TypeOf(t))) + } +} + +func (w *exportWriter) setPkg(pkg *types.Package, write bool) { + if write { + w.pkg(pkg) + } + + w.currPkg = pkg +} + +func (w *exportWriter) signature(sig *types.Signature) { + w.paramList(sig.Params()) + w.paramList(sig.Results()) + if sig.Params().Len() > 0 { + w.bool(sig.Variadic()) + } +} + +func (w *exportWriter) paramList(tup *types.Tuple) { + n := tup.Len() + w.uint64(uint64(n)) + for i := 0; i < n; i++ { + w.param(tup.At(i)) + } +} + +func (w *exportWriter) param(obj types.Object) { + w.pos(obj.Pos()) + w.localIdent(obj) + w.typ(obj.Type(), obj.Pkg()) +} + +func (w *exportWriter) value(typ types.Type, v constant.Value) { + w.typ(typ, nil) + + switch v.Kind() { + case constant.Bool: + w.bool(constant.BoolVal(v)) + case constant.Int: + var i big.Int + if i64, exact := constant.Int64Val(v); exact { + i.SetInt64(i64) + } else if ui64, exact := constant.Uint64Val(v); exact { + i.SetUint64(ui64) + } else { + i.SetString(v.ExactString(), 10) + } + w.mpint(&i, typ) + case constant.Float: + f := constantToFloat(v) + w.mpfloat(f, typ) + case constant.Complex: + w.mpfloat(constantToFloat(constant.Real(v)), typ) + w.mpfloat(constantToFloat(constant.Imag(v)), 
typ) + case constant.String: + w.string(constant.StringVal(v)) + case constant.Unknown: + // package contains type errors + default: + panic(internalErrorf("unexpected value %v (%T)", v, v)) + } +} + +// constantToFloat converts a constant.Value with kind constant.Float to a +// big.Float. +func constantToFloat(x constant.Value) *big.Float { + assert(x.Kind() == constant.Float) + // Use the same floating-point precision (512) as cmd/compile + // (see Mpprec in cmd/compile/internal/gc/mpfloat.go). + const mpprec = 512 + var f big.Float + f.SetPrec(mpprec) + if v, exact := constant.Float64Val(x); exact { + // float64 + f.SetFloat64(v) + } else if num, denom := constant.Num(x), constant.Denom(x); num.Kind() == constant.Int { + // TODO(gri): add big.Rat accessor to constant.Value. + n := valueToRat(num) + d := valueToRat(denom) + f.SetRat(n.Quo(n, d)) + } else { + // Value too large to represent as a fraction => inaccessible. + // TODO(gri): add big.Float accessor to constant.Value. + _, ok := f.SetString(x.ExactString()) + assert(ok) + } + return &f +} + +// mpint exports a multi-precision integer. +// +// For unsigned types, small values are written out as a single +// byte. Larger values are written out as a length-prefixed big-endian +// byte string, where the length prefix is encoded as its complement. +// For example, bytes 0, 1, and 2 directly represent the integer +// values 0, 1, and 2; while bytes 255, 254, and 253 indicate a 1-, +// 2-, and 3-byte big-endian string follow. +// +// Encoding for signed types use the same general approach as for +// unsigned types, except small values use zig-zag encoding and the +// bottom bit of length prefix byte for large values is reserved as a +// sign bit. +// +// The exact boundary between small and large encodings varies +// according to the maximum number of bytes needed to encode a value +// of type typ. As a special case, 8-bit types are always encoded as a +// single byte. +// +// TODO(mdempsky): Is this level of complexity really worthwhile? +func (w *exportWriter) mpint(x *big.Int, typ types.Type) { + basic, ok := typ.Underlying().(*types.Basic) + if !ok { + panic(internalErrorf("unexpected type %v (%T)", typ.Underlying(), typ.Underlying())) + } + + signed, maxBytes := intSize(basic) + + negative := x.Sign() < 0 + if !signed && negative { + panic(internalErrorf("negative unsigned integer; type %v, value %v", typ, x)) + } + + b := x.Bytes() + if len(b) > 0 && b[0] == 0 { + panic(internalErrorf("leading zeros")) + } + if uint(len(b)) > maxBytes { + panic(internalErrorf("bad mpint length: %d > %d (type %v, value %v)", len(b), maxBytes, typ, x)) + } + + maxSmall := 256 - maxBytes + if signed { + maxSmall = 256 - 2*maxBytes + } + if maxBytes == 1 { + maxSmall = 256 + } + + // Check if x can use small value encoding. + if len(b) <= 1 { + var ux uint + if len(b) == 1 { + ux = uint(b[0]) + } + if signed { + ux <<= 1 + if negative { + ux-- + } + } + if ux < maxSmall { + w.data.WriteByte(byte(ux)) + return + } + } + + n := 256 - uint(len(b)) + if signed { + n = 256 - 2*uint(len(b)) + if negative { + n |= 1 + } + } + if n < maxSmall || n >= 256 { + panic(internalErrorf("encoding mistake: %d, %v, %v => %d", len(b), signed, negative, n)) + } + + w.data.WriteByte(byte(n)) + w.data.Write(b) +} + +// mpfloat exports a multi-precision floating point number. +// +// The number's value is decomposed into mantissa × 2**exponent, where +// mantissa is an integer. 
The value is written out as mantissa (as a +// multi-precision integer) and then the exponent, except exponent is +// omitted if mantissa is zero. +func (w *exportWriter) mpfloat(f *big.Float, typ types.Type) { + if f.IsInf() { + panic("infinite constant") + } + + // Break into f = mant × 2**exp, with 0.5 <= mant < 1. + var mant big.Float + exp := int64(f.MantExp(&mant)) + + // Scale so that mant is an integer. + prec := mant.MinPrec() + mant.SetMantExp(&mant, int(prec)) + exp -= int64(prec) + + manti, acc := mant.Int(nil) + if acc != big.Exact { + panic(internalErrorf("mantissa scaling failed for %f (%s)", f, acc)) + } + w.mpint(manti, typ) + if manti.Sign() != 0 { + w.int64(exp) + } +} + +func (w *exportWriter) bool(b bool) bool { + var x uint64 + if b { + x = 1 + } + w.uint64(x) + return b +} + +func (w *exportWriter) int64(x int64) { w.data.int64(x) } +func (w *exportWriter) uint64(x uint64) { w.data.uint64(x) } +func (w *exportWriter) string(s string) { w.uint64(w.p.stringOff(s)) } + +func (w *exportWriter) localIdent(obj types.Object) { + // Anonymous parameters. + if obj == nil { + w.string("") + return + } + + name := obj.Name() + if name == "_" { + w.string("_") + return + } + + w.string(name) +} + +type intWriter struct { + bytes.Buffer +} + +func (w *intWriter) int64(x int64) { + var buf [binary.MaxVarintLen64]byte + n := binary.PutVarint(buf[:], x) + w.Write(buf[:n]) +} + +func (w *intWriter) uint64(x uint64) { + var buf [binary.MaxVarintLen64]byte + n := binary.PutUvarint(buf[:], x) + w.Write(buf[:n]) +} + +func assert(cond bool) { + if !cond { + panic("internal error: assertion failed") + } +} + +// The below is copied from go/src/cmd/compile/internal/gc/syntax.go. + +// objQueue is a FIFO queue of types.Object. The zero value of objQueue is +// a ready-to-use empty queue. +type objQueue struct { + ring []types.Object + head, tail int +} + +// empty returns true if q contains no Nodes. +func (q *objQueue) empty() bool { + return q.head == q.tail +} + +// pushTail appends n to the tail of the queue. +func (q *objQueue) pushTail(obj types.Object) { + if len(q.ring) == 0 { + q.ring = make([]types.Object, 16) + } else if q.head+len(q.ring) == q.tail { + // Grow the ring. + nring := make([]types.Object, len(q.ring)*2) + // Copy the old elements. + part := q.ring[q.head%len(q.ring):] + if q.tail-q.head <= len(part) { + part = part[:q.tail-q.head] + copy(nring, part) + } else { + pos := copy(nring, part) + copy(nring[pos:], q.ring[:q.tail%len(q.ring)]) + } + q.ring, q.head, q.tail = nring, 0, q.tail-q.head + } + + q.ring[q.tail%len(q.ring)] = obj + q.tail++ +} + +// popHead pops a node from the head of the queue. It panics if q is empty. +func (q *objQueue) popHead() types.Object { + if q.empty() { + panic("dequeue empty") + } + obj := q.ring[q.head%len(q.ring)] + q.head++ + return obj +} diff --git a/vendor/golang.org/x/tools/go/internal/gcimporter/iimport.go b/vendor/golang.org/x/tools/go/internal/gcimporter/iimport.go new file mode 100644 index 000000000..3cb7ae5b9 --- /dev/null +++ b/vendor/golang.org/x/tools/go/internal/gcimporter/iimport.go @@ -0,0 +1,606 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Indexed package import. +// See cmd/compile/internal/gc/iexport.go for the export data format. + +// This file is a copy of $GOROOT/src/go/internal/gcimporter/iimport.go. 
+ +package gcimporter + +import ( + "bytes" + "encoding/binary" + "fmt" + "go/constant" + "go/token" + "go/types" + "io" + "sort" +) + +type intReader struct { + *bytes.Reader + path string +} + +func (r *intReader) int64() int64 { + i, err := binary.ReadVarint(r.Reader) + if err != nil { + errorf("import %q: read varint error: %v", r.path, err) + } + return i +} + +func (r *intReader) uint64() uint64 { + i, err := binary.ReadUvarint(r.Reader) + if err != nil { + errorf("import %q: read varint error: %v", r.path, err) + } + return i +} + +const predeclReserved = 32 + +type itag uint64 + +const ( + // Types + definedType itag = iota + pointerType + sliceType + arrayType + chanType + mapType + signatureType + structType + interfaceType +) + +// IImportData imports a package from the serialized package data +// and returns the number of bytes consumed and a reference to the package. +// If the export data version is not recognized or the format is otherwise +// compromised, an error is returned. +func IImportData(fset *token.FileSet, imports map[string]*types.Package, data []byte, path string) (_ int, pkg *types.Package, err error) { + const currentVersion = 0 + version := -1 + defer func() { + if e := recover(); e != nil { + if version > currentVersion { + err = fmt.Errorf("cannot import %q (%v), export data is newer version - update tool", path, e) + } else { + err = fmt.Errorf("cannot import %q (%v), possibly version skew - reinstall package", path, e) + } + } + }() + + r := &intReader{bytes.NewReader(data), path} + + version = int(r.uint64()) + switch version { + case currentVersion: + default: + errorf("unknown iexport format version %d", version) + } + + sLen := int64(r.uint64()) + dLen := int64(r.uint64()) + + whence, _ := r.Seek(0, io.SeekCurrent) + stringData := data[whence : whence+sLen] + declData := data[whence+sLen : whence+sLen+dLen] + r.Seek(sLen+dLen, io.SeekCurrent) + + p := iimporter{ + ipath: path, + + stringData: stringData, + stringCache: make(map[uint64]string), + pkgCache: make(map[uint64]*types.Package), + + declData: declData, + pkgIndex: make(map[*types.Package]map[string]uint64), + typCache: make(map[uint64]types.Type), + + fake: fakeFileSet{ + fset: fset, + files: make(map[string]*token.File), + }, + } + + for i, pt := range predeclared() { + p.typCache[uint64(i)] = pt + } + + pkgList := make([]*types.Package, r.uint64()) + for i := range pkgList { + pkgPathOff := r.uint64() + pkgPath := p.stringAt(pkgPathOff) + pkgName := p.stringAt(r.uint64()) + _ = r.uint64() // package height; unused by go/types + + if pkgPath == "" { + pkgPath = path + } + pkg := imports[pkgPath] + if pkg == nil { + pkg = types.NewPackage(pkgPath, pkgName) + imports[pkgPath] = pkg + } else if pkg.Name() != pkgName { + errorf("conflicting names %s and %s for package %q", pkg.Name(), pkgName, path) + } + + p.pkgCache[pkgPathOff] = pkg + + nameIndex := make(map[string]uint64) + for nSyms := r.uint64(); nSyms > 0; nSyms-- { + name := p.stringAt(r.uint64()) + nameIndex[name] = r.uint64() + } + + p.pkgIndex[pkg] = nameIndex + pkgList[i] = pkg + } + var localpkg *types.Package + for _, pkg := range pkgList { + if pkg.Path() == path { + localpkg = pkg + } + } + + names := make([]string, 0, len(p.pkgIndex[localpkg])) + for name := range p.pkgIndex[localpkg] { + names = append(names, name) + } + sort.Strings(names) + for _, name := range names { + p.doDecl(localpkg, name) + } + + for _, typ := range p.interfaceList { + typ.Complete() + } + + // record all referenced packages as imports + list := 
append(([]*types.Package)(nil), pkgList[1:]...) + sort.Sort(byPath(list)) + localpkg.SetImports(list) + + // package was imported completely and without errors + localpkg.MarkComplete() + + consumed, _ := r.Seek(0, io.SeekCurrent) + return int(consumed), localpkg, nil +} + +type iimporter struct { + ipath string + + stringData []byte + stringCache map[uint64]string + pkgCache map[uint64]*types.Package + + declData []byte + pkgIndex map[*types.Package]map[string]uint64 + typCache map[uint64]types.Type + + fake fakeFileSet + interfaceList []*types.Interface +} + +func (p *iimporter) doDecl(pkg *types.Package, name string) { + // See if we've already imported this declaration. + if obj := pkg.Scope().Lookup(name); obj != nil { + return + } + + off, ok := p.pkgIndex[pkg][name] + if !ok { + errorf("%v.%v not in index", pkg, name) + } + + r := &importReader{p: p, currPkg: pkg} + r.declReader.Reset(p.declData[off:]) + + r.obj(name) +} + +func (p *iimporter) stringAt(off uint64) string { + if s, ok := p.stringCache[off]; ok { + return s + } + + slen, n := binary.Uvarint(p.stringData[off:]) + if n <= 0 { + errorf("varint failed") + } + spos := off + uint64(n) + s := string(p.stringData[spos : spos+slen]) + p.stringCache[off] = s + return s +} + +func (p *iimporter) pkgAt(off uint64) *types.Package { + if pkg, ok := p.pkgCache[off]; ok { + return pkg + } + path := p.stringAt(off) + errorf("missing package %q in %q", path, p.ipath) + return nil +} + +func (p *iimporter) typAt(off uint64, base *types.Named) types.Type { + if t, ok := p.typCache[off]; ok && (base == nil || !isInterface(t)) { + return t + } + + if off < predeclReserved { + errorf("predeclared type missing from cache: %v", off) + } + + r := &importReader{p: p} + r.declReader.Reset(p.declData[off-predeclReserved:]) + t := r.doType(base) + + if base == nil || !isInterface(t) { + p.typCache[off] = t + } + return t +} + +type importReader struct { + p *iimporter + declReader bytes.Reader + currPkg *types.Package + prevFile string + prevLine int64 +} + +func (r *importReader) obj(name string) { + tag := r.byte() + pos := r.pos() + + switch tag { + case 'A': + typ := r.typ() + + r.declare(types.NewTypeName(pos, r.currPkg, name, typ)) + + case 'C': + typ, val := r.value() + + r.declare(types.NewConst(pos, r.currPkg, name, typ, val)) + + case 'F': + sig := r.signature(nil) + + r.declare(types.NewFunc(pos, r.currPkg, name, sig)) + + case 'T': + // Types can be recursive. We need to setup a stub + // declaration before recursing. 
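+		// Consider "type List struct { Next *List }", for example: reading
+		// the underlying type requires List itself to be resolvable already.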
+ obj := types.NewTypeName(pos, r.currPkg, name, nil) + named := types.NewNamed(obj, nil, nil) + r.declare(obj) + + underlying := r.p.typAt(r.uint64(), named).Underlying() + named.SetUnderlying(underlying) + + if !isInterface(underlying) { + for n := r.uint64(); n > 0; n-- { + mpos := r.pos() + mname := r.ident() + recv := r.param() + msig := r.signature(recv) + + named.AddMethod(types.NewFunc(mpos, r.currPkg, mname, msig)) + } + } + + case 'V': + typ := r.typ() + + r.declare(types.NewVar(pos, r.currPkg, name, typ)) + + default: + errorf("unexpected tag: %v", tag) + } +} + +func (r *importReader) declare(obj types.Object) { + obj.Pkg().Scope().Insert(obj) +} + +func (r *importReader) value() (typ types.Type, val constant.Value) { + typ = r.typ() + + switch b := typ.Underlying().(*types.Basic); b.Info() & types.IsConstType { + case types.IsBoolean: + val = constant.MakeBool(r.bool()) + + case types.IsString: + val = constant.MakeString(r.string()) + + case types.IsInteger: + val = r.mpint(b) + + case types.IsFloat: + val = r.mpfloat(b) + + case types.IsComplex: + re := r.mpfloat(b) + im := r.mpfloat(b) + val = constant.BinaryOp(re, token.ADD, constant.MakeImag(im)) + + default: + if b.Kind() == types.Invalid { + val = constant.MakeUnknown() + return + } + errorf("unexpected type %v", typ) // panics + panic("unreachable") + } + + return +} + +func intSize(b *types.Basic) (signed bool, maxBytes uint) { + if (b.Info() & types.IsUntyped) != 0 { + return true, 64 + } + + switch b.Kind() { + case types.Float32, types.Complex64: + return true, 3 + case types.Float64, types.Complex128: + return true, 7 + } + + signed = (b.Info() & types.IsUnsigned) == 0 + switch b.Kind() { + case types.Int8, types.Uint8: + maxBytes = 1 + case types.Int16, types.Uint16: + maxBytes = 2 + case types.Int32, types.Uint32: + maxBytes = 4 + default: + maxBytes = 8 + } + + return +} + +func (r *importReader) mpint(b *types.Basic) constant.Value { + signed, maxBytes := intSize(b) + + maxSmall := 256 - maxBytes + if signed { + maxSmall = 256 - 2*maxBytes + } + if maxBytes == 1 { + maxSmall = 256 + } + + n, _ := r.declReader.ReadByte() + if uint(n) < maxSmall { + v := int64(n) + if signed { + v >>= 1 + if n&1 != 0 { + v = ^v + } + } + return constant.MakeInt64(v) + } + + v := -n + if signed { + v = -(n &^ 1) >> 1 + } + if v < 1 || uint(v) > maxBytes { + errorf("weird decoding: %v, %v => %v", n, signed, v) + } + + buf := make([]byte, v) + io.ReadFull(&r.declReader, buf) + + // convert to little endian + // TODO(gri) go/constant should have a more direct conversion function + // (e.g., once it supports a big.Float based implementation) + for i, j := 0, len(buf)-1; i < j; i, j = i+1, j-1 { + buf[i], buf[j] = buf[j], buf[i] + } + + x := constant.MakeFromBytes(buf) + if signed && n&1 != 0 { + x = constant.UnaryOp(token.SUB, x, 0) + } + return x +} + +func (r *importReader) mpfloat(b *types.Basic) constant.Value { + x := r.mpint(b) + if constant.Sign(x) == 0 { + return x + } + + exp := r.int64() + switch { + case exp > 0: + x = constant.Shift(x, token.SHL, uint(exp)) + case exp < 0: + d := constant.Shift(constant.MakeInt64(1), token.SHL, uint(-exp)) + x = constant.BinaryOp(x, token.QUO, d) + } + return x +} + +func (r *importReader) ident() string { + return r.string() +} + +func (r *importReader) qualifiedIdent() (*types.Package, string) { + name := r.string() + pkg := r.pkg() + return pkg, name +} + +func (r *importReader) pos() token.Pos { + delta := r.int64() + if delta != deltaNewFile { + r.prevLine += delta + } else if l := 
r.int64(); l == -1 { + r.prevLine += deltaNewFile + } else { + r.prevFile = r.string() + r.prevLine = l + } + + if r.prevFile == "" && r.prevLine == 0 { + return token.NoPos + } + + return r.p.fake.pos(r.prevFile, int(r.prevLine)) +} + +func (r *importReader) typ() types.Type { + return r.p.typAt(r.uint64(), nil) +} + +func isInterface(t types.Type) bool { + _, ok := t.(*types.Interface) + return ok +} + +func (r *importReader) pkg() *types.Package { return r.p.pkgAt(r.uint64()) } +func (r *importReader) string() string { return r.p.stringAt(r.uint64()) } + +func (r *importReader) doType(base *types.Named) types.Type { + switch k := r.kind(); k { + default: + errorf("unexpected kind tag in %q: %v", r.p.ipath, k) + return nil + + case definedType: + pkg, name := r.qualifiedIdent() + r.p.doDecl(pkg, name) + return pkg.Scope().Lookup(name).(*types.TypeName).Type() + case pointerType: + return types.NewPointer(r.typ()) + case sliceType: + return types.NewSlice(r.typ()) + case arrayType: + n := r.uint64() + return types.NewArray(r.typ(), int64(n)) + case chanType: + dir := chanDir(int(r.uint64())) + return types.NewChan(dir, r.typ()) + case mapType: + return types.NewMap(r.typ(), r.typ()) + case signatureType: + r.currPkg = r.pkg() + return r.signature(nil) + + case structType: + r.currPkg = r.pkg() + + fields := make([]*types.Var, r.uint64()) + tags := make([]string, len(fields)) + for i := range fields { + fpos := r.pos() + fname := r.ident() + ftyp := r.typ() + emb := r.bool() + tag := r.string() + + fields[i] = types.NewField(fpos, r.currPkg, fname, ftyp, emb) + tags[i] = tag + } + return types.NewStruct(fields, tags) + + case interfaceType: + r.currPkg = r.pkg() + + embeddeds := make([]types.Type, r.uint64()) + for i := range embeddeds { + _ = r.pos() + embeddeds[i] = r.typ() + } + + methods := make([]*types.Func, r.uint64()) + for i := range methods { + mpos := r.pos() + mname := r.ident() + + // TODO(mdempsky): Matches bimport.go, but I + // don't agree with this. + var recv *types.Var + if base != nil { + recv = types.NewVar(token.NoPos, r.currPkg, "", base) + } + + msig := r.signature(recv) + methods[i] = types.NewFunc(mpos, r.currPkg, mname, msig) + } + + typ := newInterface(methods, embeddeds) + r.p.interfaceList = append(r.p.interfaceList, typ) + return typ + } +} + +func (r *importReader) kind() itag { + return itag(r.uint64()) +} + +func (r *importReader) signature(recv *types.Var) *types.Signature { + params := r.paramList() + results := r.paramList() + variadic := params.Len() > 0 && r.bool() + return types.NewSignature(recv, params, results, variadic) +} + +func (r *importReader) paramList() *types.Tuple { + xs := make([]*types.Var, r.uint64()) + for i := range xs { + xs[i] = r.param() + } + return types.NewTuple(xs...) 
+} + +func (r *importReader) param() *types.Var { + pos := r.pos() + name := r.ident() + typ := r.typ() + return types.NewParam(pos, r.currPkg, name, typ) +} + +func (r *importReader) bool() bool { + return r.uint64() != 0 +} + +func (r *importReader) int64() int64 { + n, err := binary.ReadVarint(&r.declReader) + if err != nil { + errorf("readVarint: %v", err) + } + return n +} + +func (r *importReader) uint64() uint64 { + n, err := binary.ReadUvarint(&r.declReader) + if err != nil { + errorf("readUvarint: %v", err) + } + return n +} + +func (r *importReader) byte() byte { + x, err := r.declReader.ReadByte() + if err != nil { + errorf("declReader.ReadByte: %v", err) + } + return x +} diff --git a/vendor/golang.org/x/tools/go/internal/gcimporter/newInterface10.go b/vendor/golang.org/x/tools/go/internal/gcimporter/newInterface10.go new file mode 100644 index 000000000..463f25227 --- /dev/null +++ b/vendor/golang.org/x/tools/go/internal/gcimporter/newInterface10.go @@ -0,0 +1,21 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build !go1.11 + +package gcimporter + +import "go/types" + +func newInterface(methods []*types.Func, embeddeds []types.Type) *types.Interface { + named := make([]*types.Named, len(embeddeds)) + for i, e := range embeddeds { + var ok bool + named[i], ok = e.(*types.Named) + if !ok { + panic("embedding of non-defined interfaces in interfaces is not supported before Go 1.11") + } + } + return types.NewInterface(methods, named) +} diff --git a/vendor/golang.org/x/tools/go/internal/gcimporter/newInterface11.go b/vendor/golang.org/x/tools/go/internal/gcimporter/newInterface11.go new file mode 100644 index 000000000..ab28b95cb --- /dev/null +++ b/vendor/golang.org/x/tools/go/internal/gcimporter/newInterface11.go @@ -0,0 +1,13 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build go1.11 + +package gcimporter + +import "go/types" + +func newInterface(methods []*types.Func, embeddeds []types.Type) *types.Interface { + return types.NewInterfaceType(methods, embeddeds) +} diff --git a/vendor/golang.org/x/tools/go/internal/packagesdriver/sizes.go b/vendor/golang.org/x/tools/go/internal/packagesdriver/sizes.go new file mode 100644 index 000000000..fdc7da056 --- /dev/null +++ b/vendor/golang.org/x/tools/go/internal/packagesdriver/sizes.go @@ -0,0 +1,160 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package packagesdriver fetches type sizes for go/packages and go/analysis. +package packagesdriver + +import ( + "bytes" + "context" + "encoding/json" + "fmt" + "go/types" + "log" + "os" + "os/exec" + "strings" + "time" +) + +var debug = false + +// GetSizes returns the sizes used by the underlying driver with the given parameters. +func GetSizes(ctx context.Context, buildFlags, env []string, dir string, usesExportData bool) (types.Sizes, error) { + // TODO(matloob): Clean this up. This code is mostly a copy of packages.findExternalDriver. 
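+	// The JSON request written to the driver's stdin looks roughly like
+	// this (an illustrative sketch, not captured output):
+	//
+	//	{"command":"sizes","env":["GOARCH=amd64"],"build_flags":["-tags=netgo"]}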
+ const toolPrefix = "GOPACKAGESDRIVER=" + tool := "" + for _, env := range env { + if val := strings.TrimPrefix(env, toolPrefix); val != env { + tool = val + } + } + + if tool == "" { + var err error + tool, err = exec.LookPath("gopackagesdriver") + if err != nil { + // We did not find the driver, so use "go list". + tool = "off" + } + } + + if tool == "off" { + return GetSizesGolist(ctx, buildFlags, env, dir, usesExportData) + } + + req, err := json.Marshal(struct { + Command string `json:"command"` + Env []string `json:"env"` + BuildFlags []string `json:"build_flags"` + }{ + Command: "sizes", + Env: env, + BuildFlags: buildFlags, + }) + if err != nil { + return nil, fmt.Errorf("failed to encode message to driver tool: %v", err) + } + + buf := new(bytes.Buffer) + cmd := exec.CommandContext(ctx, tool) + cmd.Dir = dir + cmd.Env = env + cmd.Stdin = bytes.NewReader(req) + cmd.Stdout = buf + cmd.Stderr = new(bytes.Buffer) + if err := cmd.Run(); err != nil { + return nil, fmt.Errorf("%v: %v: %s", tool, err, cmd.Stderr) + } + var response struct { + // Sizes, if not nil, is the types.Sizes to use when type checking. + Sizes *types.StdSizes + } + if err := json.Unmarshal(buf.Bytes(), &response); err != nil { + return nil, err + } + return response.Sizes, nil +} + +func GetSizesGolist(ctx context.Context, buildFlags, env []string, dir string, usesExportData bool) (types.Sizes, error) { + args := []string{"list", "-f", "{{context.GOARCH}} {{context.Compiler}}"} + args = append(args, buildFlags...) + args = append(args, "--", "unsafe") + stdout, err := InvokeGo(ctx, env, dir, usesExportData, args...) + if err != nil { + return nil, err + } + fields := strings.Fields(stdout.String()) + if len(fields) < 2 { + return nil, fmt.Errorf("could not determine GOARCH and Go compiler") + } + goarch := fields[0] + compiler := fields[1] + return types.SizesFor(compiler, goarch), nil +} + +// InvokeGo returns the stdout of a go command invocation. +func InvokeGo(ctx context.Context, env []string, dir string, usesExportData bool, args ...string) (*bytes.Buffer, error) { + if debug { + defer func(start time.Time) { log.Printf("%s for %v", time.Since(start), cmdDebugStr(env, args...)) }(time.Now()) + } + stdout := new(bytes.Buffer) + stderr := new(bytes.Buffer) + cmd := exec.CommandContext(ctx, "go", args...) + // On darwin the cwd gets resolved to the real path, which breaks anything that + // expects the working directory to keep the original path, including the + // go command when dealing with modules. + // The Go stdlib has a special feature where if the cwd and the PWD are the + // same node then it trusts the PWD, so by setting it in the env for the child + // process we fix up all the paths returned by the go command. + cmd.Env = append(append([]string{}, env...), "PWD="+dir) + cmd.Dir = dir + cmd.Stdout = stdout + cmd.Stderr = stderr + if err := cmd.Run(); err != nil { + exitErr, ok := err.(*exec.ExitError) + if !ok { + // Catastrophic error: + // - executable not found + // - context cancellation + return nil, fmt.Errorf("couldn't exec 'go %v': %s %T", args, err, err) + } + + // Export mode entails a build. + // If that build fails, errors appear on stderr + // (despite the -e flag) and the Export field is blank. + // Do not fail in that case. + if !usesExportData { + return nil, fmt.Errorf("go %v: %s: %s", args, exitErr, stderr) + } + } + + // As of writing, go list -export prints some non-fatal compilation + // errors to stderr, even with -e set. 
We would prefer that it put
+	// them in the Package.Error JSON (see https://golang.org/issue/26319).
+	// In the meantime, there's nowhere good to put them, but they can
+	// be useful for debugging. Print them if $GOPACKAGESPRINTGOLISTERRORS
+	// is set.
+	if len(stderr.Bytes()) != 0 && os.Getenv("GOPACKAGESPRINTGOLISTERRORS") != "" {
+		fmt.Fprintf(os.Stderr, "%s stderr: <<%s>>\n", cmdDebugStr(env, args...), stderr)
+	}
+
+	// debugging
+	if false {
+		fmt.Fprintf(os.Stderr, "%s stdout: <<%s>>\n", cmdDebugStr(env, args...), stdout)
+	}
+
+	return stdout, nil
+}
+
+func cmdDebugStr(envlist []string, args ...string) string {
+	env := make(map[string]string)
+	for _, kv := range envlist {
+		// Use SplitN so that values containing '=' are kept intact.
+		split := strings.SplitN(kv, "=", 2)
+		k, v := split[0], split[1]
+		env[k] = v
+	}
+
+	return fmt.Sprintf("GOROOT=%v GOPATH=%v GO111MODULE=%v PWD=%v go %v", env["GOROOT"], env["GOPATH"], env["GO111MODULE"], env["PWD"], args)
+}
diff --git a/vendor/golang.org/x/tools/go/packages/doc.go b/vendor/golang.org/x/tools/go/packages/doc.go
new file mode 100644
index 000000000..3799f8ed8
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/packages/doc.go
@@ -0,0 +1,222 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+/*
+Package packages loads Go packages for inspection and analysis.
+
+The Load function takes as input a list of patterns and returns a list of Package
+structs describing individual packages matched by those patterns.
+The LoadMode controls the amount of detail in the loaded packages.
+
+Load passes most patterns directly to the underlying build tool,
+but all patterns with the prefix "query=", where query is a
+non-empty string of letters from [a-z], are reserved and may be
+interpreted as query operators.
+
+Two query operators are currently supported: "file" and "pattern".
+
+The query "file=path/to/file.go" matches the package or packages enclosing
+the Go source file path/to/file.go. For example, "file=~/go/src/fmt/print.go"
+might return the packages "fmt" and "fmt [fmt.test]".
+
+The query "pattern=string" causes "string" to be passed directly to
+the underlying build tool. In most cases this is unnecessary,
+but an application can use Load("pattern=" + x) as an escaping mechanism
+to ensure that x is not interpreted as a query operator if it contains '='.
+
+All other query operators are reserved for future use and currently
+cause Load to report an error.
+
+The Package struct provides basic information about the package, including
+
+ - ID, a unique identifier for the package in the returned set;
+ - GoFiles, the names of the package's Go source files;
+ - Imports, a map from source import strings to the Packages they name;
+ - Types, the type information for the package's exported symbols;
+ - Syntax, the parsed syntax trees for the package's source code; and
+ - TypeInfo, the result of a complete type-check of the package syntax trees.
+
+(See the documentation for type Package for the complete list of fields
+and more detailed descriptions.)
+
+For example,
+
+	Load(nil, "bytes", "unicode...")
+
+returns four Package structs describing the standard library packages
+bytes, unicode, unicode/utf16, and unicode/utf8. Note that one pattern
+can match multiple packages and that a package might be matched by
+multiple patterns: in general it is not possible to determine which
+packages correspond to which patterns.
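+
+As a minimal sketch of typical use (the mode and error handling here are
+illustrative, not prescriptive):
+
+	cfg := &Config{Mode: LoadSyntax}
+	pkgs, err := Load(cfg, "bytes")
+	if err != nil {
+		log.Fatal(err)
+	}
+	for _, pkg := range pkgs {
+		fmt.Println(pkg.ID, len(pkg.Syntax))
+	}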
+ +Note that the list returned by Load contains only the packages matched +by the patterns. Their dependencies can be found by walking the import +graph using the Imports fields. + +The Load function can be configured by passing a pointer to a Config as +the first argument. A nil Config is equivalent to the zero Config, which +causes Load to run in LoadFiles mode, collecting minimal information. +See the documentation for type Config for details. + +As noted earlier, the Config.Mode controls the amount of detail +reported about the loaded packages, with each mode returning all the data of the +previous mode with some extra added. See the documentation for type LoadMode +for details. + +Most tools should pass their command-line arguments (after any flags) +uninterpreted to the loader, so that the loader can interpret them +according to the conventions of the underlying build system. +See the Example function for typical usage. + +*/ +package packages // import "golang.org/x/tools/go/packages" + +/* + +Motivation and design considerations + +The new package's design solves problems addressed by two existing +packages: go/build, which locates and describes packages, and +golang.org/x/tools/go/loader, which loads, parses and type-checks them. +The go/build.Package structure encodes too much of the 'go build' way +of organizing projects, leaving us in need of a data type that describes a +package of Go source code independent of the underlying build system. +We wanted something that works equally well with go build and vgo, and +also other build systems such as Bazel and Blaze, making it possible to +construct analysis tools that work in all these environments. +Tools such as errcheck and staticcheck were essentially unavailable to +the Go community at Google, and some of Google's internal tools for Go +are unavailable externally. +This new package provides a uniform way to obtain package metadata by +querying each of these build systems, optionally supporting their +preferred command-line notations for packages, so that tools integrate +neatly with users' build environments. The Metadata query function +executes an external query tool appropriate to the current workspace. + +Loading packages always returns the complete import graph "all the way down", +even if all you want is information about a single package, because the query +mechanisms of all the build systems we currently support ({go,vgo} list, and +blaze/bazel aspect-based query) cannot provide detailed information +about one package without visiting all its dependencies too, so there is +no additional asymptotic cost to providing transitive information. +(This property might not be true of a hypothetical 5th build system.) + +In calls to TypeCheck, all initial packages, and any package that +transitively depends on one of them, must be loaded from source. +Consider A->B->C->D->E: if A,C are initial, A,B,C must be loaded from +source; D may be loaded from export data, and E may not be loaded at all +(though it's possible that D's export data mentions it, so a +types.Package may be created for it and exposed.) + +The old loader had a feature to suppress type-checking of function +bodies on a per-package basis, primarily intended to reduce the work of +obtaining type information for imported packages. Now that imports are +satisfied by export data, the optimization no longer seems necessary. + +Despite some early attempts, the old loader did not exploit export data, +instead always using the equivalent of WholeProgram mode. 
This was due
+to the complexity of mixing source and export data packages (now
+resolved by the upward traversal mentioned above), and because export data
+files were nearly always missing or stale. Now that 'go build' supports
+caching, all the underlying build systems can guarantee to produce
+export data in a reasonable (amortized) time.
+
+Test "main" packages synthesized by the build system are now reported as
+first-class packages, avoiding the need for clients (such as go/ssa) to
+reinvent this generation logic.
+
+One way in which go/packages is simpler than the old loader is in its
+treatment of in-package tests. In-package tests are packages that
+consist of all the files of the library under test, plus the test files.
+The old loader constructed in-package tests by a two-phase process of
+mutation called "augmentation": first it would construct and type check
+all the ordinary library packages and type-check the packages that
+depend on them; then it would add more (test) files to the package and
+type-check again. This two-phase approach had four major problems:
+1) in processing the tests, the loader modified the library package,
+   leaving no way for a client application to see both the test
+   package and the library package; one would mutate into the other.
+2) because test files can declare additional methods on types defined in
+   the library portion of the package, the dispatch of method calls in
+   the library portion was affected by the presence of the test files.
+   This should have been a clue that the packages were logically
+   different.
+3) this model of "augmentation" assumed at most one in-package test
+   per library package, which is true of projects using 'go build',
+   but not other build systems.
+4) because of the two-phase nature of test processing, all packages that
+   import the library package had to be processed before augmentation,
+   forcing a "one-shot" API and preventing the client from calling Load
+   several times in sequence as is now possible in WholeProgram mode.
+   (TypeCheck mode has a similar one-shot restriction for a different reason.)
+
+Early drafts of this package supported "multi-shot" operation.
+Although it allowed clients to make a sequence of calls (or concurrent
+calls) to Load, building up the graph of Packages incrementally,
+it was of marginal value: it complicated the API
+(since it allowed some options to vary across calls but not others),
+it complicated the implementation,
+it cannot be made to work in Types mode, as explained above,
+and it was less efficient than making one combined call (when this is possible).
+Among the clients we have inspected, none made multiple calls to Load
+but could not be easily and satisfactorily modified to make only a single call.
+However, application changes may be required.
+For example, the ssadump command loads the user-specified packages
+and in addition the runtime package. It is tempting to simply append
+"runtime" to the user-provided list, but that does not work if the user
+specified an ad-hoc package such as [a.go b.go].
+Instead, ssadump no longer requests the runtime package,
+but seeks it among the dependencies of the user-specified packages,
+and emits an error if it is not found.
+
+Overlays: The Overlay field in the Config allows providing alternate contents
+for Go source files, by providing a mapping from file path to contents.
+go/packages will pull in new imports added in overlay files when go/packages
+is run in LoadImports mode or greater.
+
+Overlay support for the go list driver isn't complete yet: if the file doesn't
+exist on disk, it will only be recognized in an overlay if it is a non-test file
+and the package would be reported even without the overlay.
+
+Questions & Tasks
+
+- Add GOARCH/GOOS?
+  They are not portable concepts, but could be made portable.
+  Our goal has been to allow users to express themselves using the conventions
+  of the underlying build system: if the build system honors GOARCH
+  during a build and during a metadata query, then so should
+  applications built atop that query mechanism.
+  Conversely, if the target architecture of the build is determined by
+  command-line flags, the application can pass the relevant
+  flags through to the build system using a command such as:
+  myapp -query_flag="--cpu=amd64" -query_flag="--os=darwin"
+  However, this approach is low-level, unwieldy, and non-portable.
+  GOOS and GOARCH seem important enough to warrant a dedicated option.
+
+- How should we handle partial failures such as a mixture of good and
+  malformed patterns, existing and non-existent packages, successful and
+  failed builds, import failures, import cycles, and so on, in a call to
+  Load?
+
+- Support bazel, blaze, and go1.10 list, not just go1.11 list.
+
+- Handle (and test) various partial success cases, e.g.
+  a mixture of good packages and:
+  invalid patterns
+  nonexistent packages
+  empty packages
+  packages with malformed package or import declarations
+  unreadable files
+  import cycles
+  other parse errors
+  type errors
+  Make sure we record errors at the correct place in the graph.
+
+- Missing packages among initial arguments are not reported.
+  Return bogus packages for them, like golist does.
+
+- "undeclared name" errors (for example) are reported out of source file
+  order. I suspect this is due to the breadth-first resolution now used
+  by go/types. Is that a bug? Discuss with gri.
+
+*/
diff --git a/vendor/golang.org/x/tools/go/packages/external.go b/vendor/golang.org/x/tools/go/packages/external.go
new file mode 100644
index 000000000..22ff769ef
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/packages/external.go
@@ -0,0 +1,79 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// This file enables an external tool to intercept package requests.
+// If the tool is present then its results are used in preference to
+// the go list command.
+
+package packages
+
+import (
+	"bytes"
+	"encoding/json"
+	"fmt"
+	"os/exec"
+	"strings"
+)
+
+// driverRequest is the JSON request that findExternalDriver passes to an
+// external driver tool on its standard input.
+type driverRequest struct {
+	Command    string            `json:"command"`
+	Mode       LoadMode          `json:"mode"`
+	Env        []string          `json:"env"`
+	BuildFlags []string          `json:"build_flags"`
+	Tests      bool              `json:"tests"`
+	Overlay    map[string][]byte `json:"overlay"`
+}
+
+// findExternalDriver returns a driver function that invokes an external
+// tool to supply the build system package structure, or nil if no such
+// tool is found.
+// If GOPACKAGESDRIVER is set in the environment, findExternalDriver uses
+// its value, otherwise it searches for a binary named gopackagesdriver on
+// the PATH.
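+// Setting GOPACKAGESDRIVER=off disables the external driver entirely, so
+// the go list driver is used even if a gopackagesdriver binary is on the
+// PATH.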
+func findExternalDriver(cfg *Config) driver { + const toolPrefix = "GOPACKAGESDRIVER=" + tool := "" + for _, env := range cfg.Env { + if val := strings.TrimPrefix(env, toolPrefix); val != env { + tool = val + } + } + if tool != "" && tool == "off" { + return nil + } + if tool == "" { + var err error + tool, err = exec.LookPath("gopackagesdriver") + if err != nil { + return nil + } + } + return func(cfg *Config, words ...string) (*driverResponse, error) { + req, err := json.Marshal(driverRequest{ + Mode: cfg.Mode, + Env: cfg.Env, + BuildFlags: cfg.BuildFlags, + Tests: cfg.Tests, + Overlay: cfg.Overlay, + }) + if err != nil { + return nil, fmt.Errorf("failed to encode message to driver tool: %v", err) + } + + buf := new(bytes.Buffer) + cmd := exec.CommandContext(cfg.Context, tool, words...) + cmd.Dir = cfg.Dir + cmd.Env = cfg.Env + cmd.Stdin = bytes.NewReader(req) + cmd.Stdout = buf + cmd.Stderr = new(bytes.Buffer) + if err := cmd.Run(); err != nil { + return nil, fmt.Errorf("%v: %v: %s", tool, err, cmd.Stderr) + } + var response driverResponse + if err := json.Unmarshal(buf.Bytes(), &response); err != nil { + return nil, err + } + return &response, nil + } +} diff --git a/vendor/golang.org/x/tools/go/packages/golist.go b/vendor/golang.org/x/tools/go/packages/golist.go new file mode 100644 index 000000000..00e21a755 --- /dev/null +++ b/vendor/golang.org/x/tools/go/packages/golist.go @@ -0,0 +1,870 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package packages + +import ( + "bytes" + "encoding/json" + "fmt" + "go/types" + "io/ioutil" + "log" + "os" + "os/exec" + "path/filepath" + "reflect" + "regexp" + "strconv" + "strings" + "sync" + "time" + + "golang.org/x/tools/go/internal/packagesdriver" + "golang.org/x/tools/internal/gopathwalk" + "golang.org/x/tools/internal/semver" +) + +// debug controls verbose logging. +var debug, _ = strconv.ParseBool(os.Getenv("GOPACKAGESDEBUG")) + +// A goTooOldError reports that the go command +// found by exec.LookPath is too old to use the new go list behavior. +type goTooOldError struct { + error +} + +// responseDeduper wraps a driverResponse, deduplicating its contents. +type responseDeduper struct { + seenRoots map[string]bool + seenPackages map[string]*Package + dr *driverResponse +} + +// init fills in r with a driverResponse. +func (r *responseDeduper) init(dr *driverResponse) { + r.dr = dr + r.seenRoots = map[string]bool{} + r.seenPackages = map[string]*Package{} + for _, pkg := range dr.Packages { + r.seenPackages[pkg.ID] = pkg + } + for _, root := range dr.Roots { + r.seenRoots[root] = true + } +} + +func (r *responseDeduper) addPackage(p *Package) { + if r.seenPackages[p.ID] != nil { + return + } + r.seenPackages[p.ID] = p + r.dr.Packages = append(r.dr.Packages, p) +} + +func (r *responseDeduper) addRoot(id string) { + if r.seenRoots[id] { + return + } + r.seenRoots[id] = true + r.dr.Roots = append(r.dr.Roots, id) +} + +// goListDriver uses the go list command to interpret the patterns and produce +// the build system package structure. +// See driver for more details. 
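+//
+// For example, callers might pass ordinary go list patterns such as
+// "./..." or an import path, or the reserved queries handled below,
+// such as "file=/path/to/foo.go".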
+func goListDriver(cfg *Config, patterns ...string) (*driverResponse, error) { + var sizes types.Sizes + var sizeserr error + var sizeswg sync.WaitGroup + if cfg.Mode&NeedTypesSizes != 0 || cfg.Mode&NeedTypes != 0 { + sizeswg.Add(1) + go func() { + sizes, sizeserr = getSizes(cfg) + sizeswg.Done() + }() + } + + // Determine files requested in contains patterns + var containFiles []string + var packagesNamed []string + restPatterns := make([]string, 0, len(patterns)) + // Extract file= and other [querytype]= patterns. Report an error if querytype + // doesn't exist. +extractQueries: + for _, pattern := range patterns { + eqidx := strings.Index(pattern, "=") + if eqidx < 0 { + restPatterns = append(restPatterns, pattern) + } else { + query, value := pattern[:eqidx], pattern[eqidx+len("="):] + switch query { + case "file": + containFiles = append(containFiles, value) + case "pattern": + restPatterns = append(restPatterns, value) + case "iamashamedtousethedisabledqueryname": + packagesNamed = append(packagesNamed, value) + case "": // not a reserved query + restPatterns = append(restPatterns, pattern) + default: + for _, rune := range query { + if rune < 'a' || rune > 'z' { // not a reserved query + restPatterns = append(restPatterns, pattern) + continue extractQueries + } + } + // Reject all other patterns containing "=" + return nil, fmt.Errorf("invalid query type %q in query pattern %q", query, pattern) + } + } + } + + response := &responseDeduper{} + var err error + + // See if we have any patterns to pass through to go list. Zero initial + // patterns also requires a go list call, since it's the equivalent of + // ".". + if len(restPatterns) > 0 || len(patterns) == 0 { + dr, err := golistDriver(cfg, restPatterns...) + if err != nil { + return nil, err + } + response.init(dr) + } else { + response.init(&driverResponse{}) + } + + sizeswg.Wait() + if sizeserr != nil { + return nil, sizeserr + } + // types.SizesFor always returns nil or a *types.StdSizes + response.dr.Sizes, _ = sizes.(*types.StdSizes) + + var containsCandidates []string + + if len(containFiles) != 0 { + if err := runContainsQueries(cfg, golistDriver, response, containFiles); err != nil { + return nil, err + } + } + + if len(packagesNamed) != 0 { + if err := runNamedQueries(cfg, golistDriver, response, packagesNamed); err != nil { + return nil, err + } + } + + modifiedPkgs, needPkgs, err := processGolistOverlay(cfg, response) + if err != nil { + return nil, err + } + if len(containFiles) > 0 { + containsCandidates = append(containsCandidates, modifiedPkgs...) + containsCandidates = append(containsCandidates, needPkgs...) + } + if err := addNeededOverlayPackages(cfg, golistDriver, response, needPkgs); err != nil { + return nil, err + } + // Check candidate packages for containFiles. + if len(containFiles) > 0 { + for _, id := range containsCandidates { + pkg, ok := response.seenPackages[id] + if !ok { + response.addPackage(&Package{ + ID: id, + Errors: []Error{ + { + Kind: ListError, + Msg: fmt.Sprintf("package %s expected but not seen", id), + }, + }, + }) + continue + } + for _, f := range containFiles { + for _, g := range pkg.GoFiles { + if sameFile(f, g) { + response.addRoot(id) + } + } + } + } + } + + return response.dr, nil +} + +func addNeededOverlayPackages(cfg *Config, driver driver, response *responseDeduper, pkgs []string) error { + if len(pkgs) == 0 { + return nil + } + dr, err := driver(cfg, pkgs...) 
+ if err != nil { + return err + } + for _, pkg := range dr.Packages { + response.addPackage(pkg) + } + _, needPkgs, err := processGolistOverlay(cfg, response) + if err != nil { + return err + } + if err := addNeededOverlayPackages(cfg, driver, response, needPkgs); err != nil { + return err + } + return nil +} + +func runContainsQueries(cfg *Config, driver driver, response *responseDeduper, queries []string) error { + for _, query := range queries { + // TODO(matloob): Do only one query per directory. + fdir := filepath.Dir(query) + // Pass absolute path of directory to go list so that it knows to treat it as a directory, + // not a package path. + pattern, err := filepath.Abs(fdir) + if err != nil { + return fmt.Errorf("could not determine absolute path of file= query path %q: %v", query, err) + } + dirResponse, err := driver(cfg, pattern) + if err != nil || (len(dirResponse.Packages) == 1 && len(dirResponse.Packages[0].Errors) == 1) { + // There was an error loading the package. Try to load the file as an ad-hoc package. + // Usually the error will appear in a returned package, but may not if we're in modules mode + // and the ad-hoc is located outside a module. + var queryErr error + dirResponse, queryErr = driver(cfg, query) + if queryErr != nil { + // Return the original error if the attempt to fall back failed. + return err + } + } + isRoot := make(map[string]bool, len(dirResponse.Roots)) + for _, root := range dirResponse.Roots { + isRoot[root] = true + } + for _, pkg := range dirResponse.Packages { + // Add any new packages to the main set + // We don't bother to filter packages that will be dropped by the changes of roots, + // that will happen anyway during graph construction outside this function. + // Over-reporting packages is not a problem. + response.addPackage(pkg) + // if the package was not a root one, it cannot have the file + if !isRoot[pkg.ID] { + continue + } + for _, pkgFile := range pkg.GoFiles { + if filepath.Base(query) == filepath.Base(pkgFile) { + response.addRoot(pkg.ID) + break + } + } + } + } + return nil +} + +// modCacheRegexp splits a path in a module cache into module, module version, and package. +var modCacheRegexp = regexp.MustCompile(`(.*)@([^/\\]*)(.*)`) + +func runNamedQueries(cfg *Config, driver driver, response *responseDeduper, queries []string) error { + // calling `go env` isn't free; bail out if there's nothing to do. + if len(queries) == 0 { + return nil + } + // Determine which directories are relevant to scan. + roots, modRoot, err := roots(cfg) + if err != nil { + return err + } + + // Scan the selected directories. Simple matches, from GOPATH/GOROOT + // or the local module, can simply be "go list"ed. Matches from the + // module cache need special treatment. + var matchesMu sync.Mutex + var simpleMatches, modCacheMatches []string + add := func(root gopathwalk.Root, dir string) { + // Walk calls this concurrently; protect the result slices. + matchesMu.Lock() + defer matchesMu.Unlock() + + path := dir + if dir != root.Path { + path = dir[len(root.Path)+1:] + } + if pathMatchesQueries(path, queries) { + switch root.Type { + case gopathwalk.RootModuleCache: + modCacheMatches = append(modCacheMatches, path) + case gopathwalk.RootCurrentModule: + // We'd need to read go.mod to find the full + // import path. Relative's easier. + rel, err := filepath.Rel(cfg.Dir, dir) + if err != nil { + // This ought to be impossible, since + // we found dir in the current module. 
+ panic(err) + } + simpleMatches = append(simpleMatches, "./"+rel) + case gopathwalk.RootGOPATH, gopathwalk.RootGOROOT: + simpleMatches = append(simpleMatches, path) + } + } + } + + startWalk := time.Now() + gopathwalk.Walk(roots, add, gopathwalk.Options{ModulesEnabled: modRoot != "", Debug: debug}) + if debug { + log.Printf("%v for walk", time.Since(startWalk)) + } + + // Weird special case: the top-level package in a module will be in + // whatever directory the user checked the repository out into. It's + // more reasonable for that to not match the package name. So, if there + // are any Go files in the mod root, query it just to be safe. + if modRoot != "" { + rel, err := filepath.Rel(cfg.Dir, modRoot) + if err != nil { + panic(err) // See above. + } + + files, err := ioutil.ReadDir(modRoot) + for _, f := range files { + if strings.HasSuffix(f.Name(), ".go") { + simpleMatches = append(simpleMatches, rel) + break + } + } + } + + addResponse := func(r *driverResponse) { + for _, pkg := range r.Packages { + response.addPackage(pkg) + for _, name := range queries { + if pkg.Name == name { + response.addRoot(pkg.ID) + break + } + } + } + } + + if len(simpleMatches) != 0 { + resp, err := driver(cfg, simpleMatches...) + if err != nil { + return err + } + addResponse(resp) + } + + // Module cache matches are tricky. We want to avoid downloading new + // versions of things, so we need to use the ones present in the cache. + // go list doesn't accept version specifiers, so we have to write out a + // temporary module, and do the list in that module. + if len(modCacheMatches) != 0 { + // Collect all the matches, deduplicating by major version + // and preferring the newest. + type modInfo struct { + mod string + major string + } + mods := make(map[modInfo]string) + var imports []string + for _, modPath := range modCacheMatches { + matches := modCacheRegexp.FindStringSubmatch(modPath) + mod, ver := filepath.ToSlash(matches[1]), matches[2] + importPath := filepath.ToSlash(filepath.Join(matches[1], matches[3])) + + major := semver.Major(ver) + if prevVer, ok := mods[modInfo{mod, major}]; !ok || semver.Compare(ver, prevVer) > 0 { + mods[modInfo{mod, major}] = ver + } + + imports = append(imports, importPath) + } + + // Build the temporary module. + var gomod bytes.Buffer + gomod.WriteString("module modquery\nrequire (\n") + for mod, version := range mods { + gomod.WriteString("\t" + mod.mod + " " + version + "\n") + } + gomod.WriteString(")\n") + + tmpCfg := *cfg + + // We're only trying to look at stuff in the module cache, so + // disable the network. This should speed things up, and has + // prevented errors in at least one case, #28518. + tmpCfg.Env = append(append([]string{"GOPROXY=off"}, cfg.Env...)) + + var err error + tmpCfg.Dir, err = ioutil.TempDir("", "gopackages-modquery") + if err != nil { + return err + } + defer os.RemoveAll(tmpCfg.Dir) + + if err := ioutil.WriteFile(filepath.Join(tmpCfg.Dir, "go.mod"), gomod.Bytes(), 0777); err != nil { + return fmt.Errorf("writing go.mod for module cache query: %v", err) + } + + // Run the query, using the import paths calculated from the matches above. + resp, err := driver(&tmpCfg, imports...) 
+ if err != nil { + return fmt.Errorf("querying module cache matches: %v", err) + } + addResponse(resp) + } + + return nil +} + +func getSizes(cfg *Config) (types.Sizes, error) { + return packagesdriver.GetSizesGolist(cfg.Context, cfg.BuildFlags, cfg.Env, cfg.Dir, usesExportData(cfg)) +} + +// roots selects the appropriate paths to walk based on the passed-in configuration, +// particularly the environment and the presence of a go.mod in cfg.Dir's parents. +func roots(cfg *Config) ([]gopathwalk.Root, string, error) { + stdout, err := invokeGo(cfg, "env", "GOROOT", "GOPATH", "GOMOD") + if err != nil { + return nil, "", err + } + + fields := strings.Split(stdout.String(), "\n") + if len(fields) != 4 || len(fields[3]) != 0 { + return nil, "", fmt.Errorf("go env returned unexpected output: %q", stdout.String()) + } + goroot, gopath, gomod := fields[0], filepath.SplitList(fields[1]), fields[2] + var modDir string + if gomod != "" { + modDir = filepath.Dir(gomod) + } + + var roots []gopathwalk.Root + // Always add GOROOT. + roots = append(roots, gopathwalk.Root{filepath.Join(goroot, "/src"), gopathwalk.RootGOROOT}) + // If modules are enabled, scan the module dir. + if modDir != "" { + roots = append(roots, gopathwalk.Root{modDir, gopathwalk.RootCurrentModule}) + } + // Add either GOPATH/src or GOPATH/pkg/mod, depending on module mode. + for _, p := range gopath { + if modDir != "" { + roots = append(roots, gopathwalk.Root{filepath.Join(p, "/pkg/mod"), gopathwalk.RootModuleCache}) + } else { + roots = append(roots, gopathwalk.Root{filepath.Join(p, "/src"), gopathwalk.RootGOPATH}) + } + } + + return roots, modDir, nil +} + +// These functions were copied from goimports. See further documentation there. + +// pathMatchesQueries is adapted from pkgIsCandidate. +// TODO: is it reasonable to do Contains here, rather than an exact match on a path component? +func pathMatchesQueries(path string, queries []string) bool { + lastTwo := lastTwoComponents(path) + for _, query := range queries { + if strings.Contains(lastTwo, query) { + return true + } + if hasHyphenOrUpperASCII(lastTwo) && !hasHyphenOrUpperASCII(query) { + lastTwo = lowerASCIIAndRemoveHyphen(lastTwo) + if strings.Contains(lastTwo, query) { + return true + } + } + } + return false +} + +// lastTwoComponents returns at most the last two path components +// of v, using either / or \ as the path separator. +func lastTwoComponents(v string) string { + nslash := 0 + for i := len(v) - 1; i >= 0; i-- { + if v[i] == '/' || v[i] == '\\' { + nslash++ + if nslash == 2 { + return v[i:] + } + } + } + return v +} + +func hasHyphenOrUpperASCII(s string) bool { + for i := 0; i < len(s); i++ { + b := s[i] + if b == '-' || ('A' <= b && b <= 'Z') { + return true + } + } + return false +} + +func lowerASCIIAndRemoveHyphen(s string) (ret string) { + buf := make([]byte, 0, len(s)) + for i := 0; i < len(s); i++ { + b := s[i] + switch { + case b == '-': + continue + case 'A' <= b && b <= 'Z': + buf = append(buf, b+('a'-'A')) + default: + buf = append(buf, b) + } + } + return string(buf) +} + +// Fields must match go list; +// see $GOROOT/src/cmd/go/internal/load/pkg.go. 
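+//
+// Editorial note (illustrative, not part of the upstream file): jsonPackage
+// decodes one object of the JSON stream produced by the invocation that
+// golistargs (below) constructs, for example:
+//
+//	go list -e -json -compiled=true -test=false -export=false -deps=true -find=false -- fmt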
+type jsonPackage struct { + ImportPath string + Dir string + Name string + Export string + GoFiles []string + CompiledGoFiles []string + CFiles []string + CgoFiles []string + CXXFiles []string + MFiles []string + HFiles []string + FFiles []string + SFiles []string + SwigFiles []string + SwigCXXFiles []string + SysoFiles []string + Imports []string + ImportMap map[string]string + Deps []string + TestGoFiles []string + TestImports []string + XTestGoFiles []string + XTestImports []string + ForTest string // q in a "p [q.test]" package, else "" + DepOnly bool + + Error *jsonPackageError +} + +type jsonPackageError struct { + ImportStack []string + Pos string + Err string +} + +func otherFiles(p *jsonPackage) [][]string { + return [][]string{p.CFiles, p.CXXFiles, p.MFiles, p.HFiles, p.FFiles, p.SFiles, p.SwigFiles, p.SwigCXXFiles, p.SysoFiles} +} + +// golistDriver uses the "go list" command to expand the pattern +// words and return metadata for the specified packages. dir may be +// "" and env may be nil, as per os/exec.Command. +func golistDriver(cfg *Config, words ...string) (*driverResponse, error) { + // go list uses the following identifiers in ImportPath and Imports: + // + // "p" -- importable package or main (command) + // "q.test" -- q's test executable + // "p [q.test]" -- variant of p as built for q's test executable + // "q_test [q.test]" -- q's external test package + // + // The packages p that are built differently for a test q.test + // are q itself, plus any helpers used by the external test q_test, + // typically including "testing" and all its dependencies. + + // Run "go list" for complete + // information on the specified packages. + buf, err := invokeGo(cfg, golistargs(cfg, words)...) + if err != nil { + return nil, err + } + seen := make(map[string]*jsonPackage) + // Decode the JSON and convert it to Package form. + var response driverResponse + for dec := json.NewDecoder(buf); dec.More(); { + p := new(jsonPackage) + if err := dec.Decode(p); err != nil { + return nil, fmt.Errorf("JSON decoding failed: %v", err) + } + + if p.ImportPath == "" { + // The documentation for go list says that “[e]rroneous packages will have + // a non-empty ImportPath”. If for some reason it comes back empty, we + // prefer to error out rather than silently discarding data or handing + // back a package without any way to refer to it. + if p.Error != nil { + return nil, Error{ + Pos: p.Error.Pos, + Msg: p.Error.Err, + } + } + return nil, fmt.Errorf("package missing import path: %+v", p) + } + + if old, found := seen[p.ImportPath]; found { + if !reflect.DeepEqual(p, old) { + return nil, fmt.Errorf("internal error: go list gives conflicting information for package %v", p.ImportPath) + } + // skip the duplicate + continue + } + seen[p.ImportPath] = p + + pkg := &Package{ + Name: p.Name, + ID: p.ImportPath, + GoFiles: absJoin(p.Dir, p.GoFiles, p.CgoFiles), + CompiledGoFiles: absJoin(p.Dir, p.CompiledGoFiles), + OtherFiles: absJoin(p.Dir, otherFiles(p)...), + } + + // Work around https://golang.org/issue/28749: + // cmd/go puts assembly, C, and C++ files in CompiledGoFiles. + // Filter out any elements of CompiledGoFiles that are also in OtherFiles. + // We have to keep this workaround in place until go1.12 is a distant memory. 
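+		// Editorial example (illustrative, not part of the upstream file):
+		// given GoFiles=[x.go] and OtherFiles=[y.s], an affected version of
+		// cmd/go may report CompiledGoFiles=[x.go y.s]; the filter below
+		// drops y.s, leaving CompiledGoFiles=[x.go].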
+		if len(pkg.OtherFiles) > 0 {
+			other := make(map[string]bool, len(pkg.OtherFiles))
+			for _, f := range pkg.OtherFiles {
+				other[f] = true
+			}
+
+			out := pkg.CompiledGoFiles[:0]
+			for _, f := range pkg.CompiledGoFiles {
+				if other[f] {
+					continue
+				}
+				out = append(out, f)
+			}
+			pkg.CompiledGoFiles = out
+		}
+
+		// Extract the PkgPath from the package's ID.
+		if i := strings.IndexByte(pkg.ID, ' '); i >= 0 {
+			pkg.PkgPath = pkg.ID[:i]
+		} else {
+			pkg.PkgPath = pkg.ID
+		}
+
+		if pkg.PkgPath == "unsafe" {
+			pkg.GoFiles = nil // ignore fake unsafe.go file
+		}
+
+		// Assume go list emits only absolute paths for Dir.
+		if p.Dir != "" && !filepath.IsAbs(p.Dir) {
+			log.Fatalf("internal error: go list returned non-absolute Package.Dir: %s", p.Dir)
+		}
+
+		if p.Export != "" && !filepath.IsAbs(p.Export) {
+			pkg.ExportFile = filepath.Join(p.Dir, p.Export)
+		} else {
+			pkg.ExportFile = p.Export
+		}
+
+		// imports
+		//
+		// Imports contains the IDs of all imported packages.
+		// ImportMap records (path, ID) only where they differ.
+		ids := make(map[string]bool)
+		for _, id := range p.Imports {
+			ids[id] = true
+		}
+		pkg.Imports = make(map[string]*Package)
+		for path, id := range p.ImportMap {
+			pkg.Imports[path] = &Package{ID: id} // non-identity import
+			delete(ids, id)
+		}
+		for id := range ids {
+			if id == "C" {
+				continue
+			}
+
+			pkg.Imports[id] = &Package{ID: id} // identity import
+		}
+		if !p.DepOnly {
+			response.Roots = append(response.Roots, pkg.ID)
+		}
+
+		// Workaround for pre-go1.11 versions of go list.
+		// TODO(matloob): they should be handled by the fallback.
+		// Can we delete this?
+		if len(pkg.CompiledGoFiles) == 0 {
+			pkg.CompiledGoFiles = pkg.GoFiles
+		}
+
+		if p.Error != nil {
+			pkg.Errors = append(pkg.Errors, Error{
+				Pos: p.Error.Pos,
+				Msg: strings.TrimSpace(p.Error.Err), // Trim to work around golang.org/issue/32363.
+			})
+		}
+
+		response.Packages = append(response.Packages, pkg)
+	}
+
+	return &response, nil
+}
+
+// absJoin absolutizes and flattens the lists of files.
+func absJoin(dir string, fileses ...[]string) (res []string) {
+	for _, files := range fileses {
+		for _, file := range files {
+			if !filepath.IsAbs(file) {
+				file = filepath.Join(dir, file)
+			}
+			res = append(res, file)
+		}
+	}
+	return res
+}
+
+func golistargs(cfg *Config, words []string) []string {
+	const findFlags = NeedImports | NeedTypes | NeedSyntax | NeedTypesInfo
+	fullargs := []string{
+		"list", "-e", "-json",
+		fmt.Sprintf("-compiled=%t", cfg.Mode&(NeedCompiledGoFiles|NeedSyntax|NeedTypesInfo|NeedTypesSizes) != 0),
+		fmt.Sprintf("-test=%t", cfg.Tests),
+		fmt.Sprintf("-export=%t", usesExportData(cfg)),
+		fmt.Sprintf("-deps=%t", cfg.Mode&NeedDeps != 0),
+		// go list doesn't let you pass -test and -find together,
+		// probably because you'd just get the TestMain.
+		fmt.Sprintf("-find=%t", !cfg.Tests && cfg.Mode&findFlags == 0),
+	}
+	fullargs = append(fullargs, cfg.BuildFlags...)
+	fullargs = append(fullargs, "--")
+	fullargs = append(fullargs, words...)
+	return fullargs
+}
+
+// invokeGo returns the stdout of a go command invocation.
+func invokeGo(cfg *Config, args ...string) (*bytes.Buffer, error) {
+	stdout := new(bytes.Buffer)
+	stderr := new(bytes.Buffer)
+	cmd := exec.CommandContext(cfg.Context, "go", args...)
+	// On darwin the cwd gets resolved to the real path, which breaks anything that
+	// expects the working directory to keep the original path, including the
+	// go command when dealing with modules.
+ // The Go stdlib has a special feature where if the cwd and the PWD are the + // same node then it trusts the PWD, so by setting it in the env for the child + // process we fix up all the paths returned by the go command. + cmd.Env = append(append([]string{}, cfg.Env...), "PWD="+cfg.Dir) + cmd.Dir = cfg.Dir + cmd.Stdout = stdout + cmd.Stderr = stderr + if debug { + defer func(start time.Time) { + log.Printf("%s for %v, stderr: <<%s>>\n", time.Since(start), cmdDebugStr(cmd, args...), stderr) + }(time.Now()) + } + + if err := cmd.Run(); err != nil { + // Check for 'go' executable not being found. + if ee, ok := err.(*exec.Error); ok && ee.Err == exec.ErrNotFound { + return nil, fmt.Errorf("'go list' driver requires 'go', but %s", exec.ErrNotFound) + } + + exitErr, ok := err.(*exec.ExitError) + if !ok { + // Catastrophic error: + // - context cancellation + return nil, fmt.Errorf("couldn't exec 'go %v': %s %T", args, err, err) + } + + // Old go version? + if strings.Contains(stderr.String(), "flag provided but not defined") { + return nil, goTooOldError{fmt.Errorf("unsupported version of go: %s: %s", exitErr, stderr)} + } + + // This error only appears in stderr. See golang.org/cl/166398 for a fix in go list to show + // the error in the Err section of stdout in case -e option is provided. + // This fix is provided for backwards compatibility. + if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "named files must be .go files") { + output := fmt.Sprintf(`{"ImportPath": "command-line-arguments","Incomplete": true,"Error": {"Pos": "","Err": %q}}`, + strings.Trim(stderr.String(), "\n")) + return bytes.NewBufferString(output), nil + } + + // Workaround for #29280: go list -e has incorrect behavior when an ad-hoc package doesn't exist. + if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "no such file or directory") { + output := fmt.Sprintf(`{"ImportPath": "command-line-arguments","Incomplete": true,"Error": {"Pos": "","Err": %q}}`, + strings.Trim(stderr.String(), "\n")) + return bytes.NewBufferString(output), nil + } + + // Workaround for an instance of golang.org/issue/26755: go list -e will return a non-zero exit + // status if there's a dependency on a package that doesn't exist. But it should return + // a zero exit status and set an error on that package. + if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "no Go files in") { + // try to extract package name from string + stderrStr := stderr.String() + var importPath string + colon := strings.Index(stderrStr, ":") + if colon > 0 && strings.HasPrefix(stderrStr, "go build ") { + importPath = stderrStr[len("go build "):colon] + } + output := fmt.Sprintf(`{"ImportPath": %q,"Incomplete": true,"Error": {"Pos": "","Err": %q}}`, + importPath, strings.Trim(stderrStr, "\n")) + return bytes.NewBufferString(output), nil + } + + // Export mode entails a build. + // If that build fails, errors appear on stderr + // (despite the -e flag) and the Export field is blank. + // Do not fail in that case. + // The same is true if an ad-hoc package given to go list doesn't exist. + // TODO(matloob): Remove these once we can depend on go list to exit with a zero status with -e even when + // packages don't exist or a build fails. + if !usesExportData(cfg) && !containsGoFile(args) { + return nil, fmt.Errorf("go %v: %s: %s", args, exitErr, stderr) + } + } + + // As of writing, go list -export prints some non-fatal compilation + // errors to stderr, even with -e set. 
We would prefer that it put
+	// them in the Package.Error JSON (see https://golang.org/issue/26319).
+	// In the meantime, there's nowhere good to put them, but they can
+	// be useful for debugging. Print them if $GOPACKAGESPRINTGOLISTERRORS
+	// is set.
+	if len(stderr.Bytes()) != 0 && os.Getenv("GOPACKAGESPRINTGOLISTERRORS") != "" {
+		fmt.Fprintf(os.Stderr, "%s stderr: <<%s>>\n", cmdDebugStr(cmd, args...), stderr)
+	}
+
+	// debugging
+	if false {
+		fmt.Fprintf(os.Stderr, "%s stdout: <<%s>>\n", cmdDebugStr(cmd, args...), stdout)
+	}
+
+	return stdout, nil
+}
+
+func containsGoFile(s []string) bool {
+	for _, f := range s {
+		if strings.HasSuffix(f, ".go") {
+			return true
+		}
+	}
+	return false
+}
+
+func cmdDebugStr(cmd *exec.Cmd, args ...string) string {
+	env := make(map[string]string)
+	for _, kv := range cmd.Env {
+		// Split on the first "=" only, so that values containing "="
+		// (e.g. GOFLAGS=-mod=vendor) are preserved intact.
+		split := strings.SplitN(kv, "=", 2)
+		k, v := split[0], split[1]
+		env[k] = v
+	}
+	var quotedArgs []string
+	for _, arg := range args {
+		quotedArgs = append(quotedArgs, strconv.Quote(arg))
+	}
+
+	return fmt.Sprintf("GOROOT=%v GOPATH=%v GO111MODULE=%v PWD=%v go %s", env["GOROOT"], env["GOPATH"], env["GO111MODULE"], env["PWD"], strings.Join(quotedArgs, " "))
+}
diff --git a/vendor/golang.org/x/tools/go/packages/golist_overlay.go b/vendor/golang.org/x/tools/go/packages/golist_overlay.go
new file mode 100644
index 000000000..ffc7a367f
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/packages/golist_overlay.go
@@ -0,0 +1,300 @@
+package packages
+
+import (
+	"bytes"
+	"encoding/json"
+	"fmt"
+	"go/parser"
+	"go/token"
+	"path"
+	"path/filepath"
+	"strconv"
+	"strings"
+	"sync"
+)
+
+// processGolistOverlay provides rudimentary support for adding
+// files that don't exist on disk to an overlay. The results can
+// sometimes be incorrect.
+// TODO(matloob): Handle unsupported cases, including the following:
+// - determining the correct package to add given a new import path
+func processGolistOverlay(cfg *Config, response *responseDeduper) (modifiedPkgs, needPkgs []string, err error) {
+	havePkgs := make(map[string]string) // importPath -> non-test package ID
+	needPkgsSet := make(map[string]bool)
+	modifiedPkgsSet := make(map[string]bool)
+
+	for _, pkg := range response.dr.Packages {
+		// This is an approximation of import path to id. This can be
+		// wrong for tests, vendored packages, and a number of other cases.
+		havePkgs[pkg.PkgPath] = pkg.ID
+	}
+
+	var rootDirs map[string]string
+	var onceGetRootDirs sync.Once
+
+	// If no new imports are added, it is safe to avoid loading any needPkgs.
+	// Otherwise, it's hard to tell which package is actually being loaded
+	// (due to vendoring) and whether any modified package will show up
+	// in the transitive set of dependencies (because new imports are added,
+	// potentially modifying the transitive set of dependencies).
+	var overlayAddsImports bool
+
+	for opath, contents := range cfg.Overlay {
+		base := filepath.Base(opath)
+		dir := filepath.Dir(opath)
+		var pkg *Package
+		var testVariantOf *Package // if opath is a test file, this is the package it is testing
+		var fileExists bool
+		isTest := strings.HasSuffix(opath, "_test.go")
+		pkgName, ok := extractPackageName(opath, contents)
+		if !ok {
+			// Don't bother adding a file that doesn't even have a parsable package statement
+			// to the overlay.
+			continue
+		}
+	nextPackage:
+		for _, p := range response.dr.Packages {
+			if pkgName != p.Name {
+				continue
+			}
+			for _, f := range p.GoFiles {
+				if !sameFile(filepath.Dir(f), dir) {
+					continue
+				}
+				if isTest && !hasTestFiles(p) {
+					// TODO(matloob): Are there packages other than the 'production' variant
+					// of a package that this can match? This shouldn't match the test main package
+					// because the file is generated in another directory.
+					testVariantOf = p
+					continue nextPackage
+				}
+				pkg = p
+				if filepath.Base(f) == base {
+					fileExists = true
+				}
+			}
+		}
+		// The overlay could have included an entirely new package.
+		if pkg == nil {
+			onceGetRootDirs.Do(func() {
+				rootDirs = determineRootDirs(cfg)
+			})
+			// Try to find the module or gopath dir the file is contained in.
+			// Then, for modules, add the module path to the beginning.
+			var pkgPath string
+			for rdir, rpath := range rootDirs {
+				// TODO(matloob): This doesn't properly handle symlinks.
+				r, err := filepath.Rel(rdir, dir)
+				if err != nil {
+					continue
+				}
+				pkgPath = filepath.ToSlash(r)
+				if rpath != "" {
+					pkgPath = path.Join(rpath, pkgPath)
+				}
+				// We only create one new package even if it can belong in multiple modules or GOPATH entries.
+				// This is okay because tools (such as the LSP) that use overlays will recompute the overlay
+				// once the file is saved, and golist will do the right thing.
+				// TODO(matloob): Implement module tiebreaking?
+				break
+			}
+			if pkgPath == "" {
+				continue
+			}
+			isXTest := strings.HasSuffix(pkgName, "_test")
+			if isXTest {
+				pkgPath += "_test"
+			}
+			id := pkgPath
+			if isTest && !isXTest {
+				id = fmt.Sprintf("%s [%s.test]", pkgPath, pkgPath)
+			}
+			// Try to reclaim a package with the same id if it exists in the response.
+			for _, p := range response.dr.Packages {
+				if reclaimPackage(p, id, opath, contents) {
+					pkg = p
+					break
+				}
+			}
+			// Otherwise, create a new package.
+			if pkg == nil {
+				pkg = &Package{PkgPath: pkgPath, ID: id, Name: pkgName, Imports: make(map[string]*Package)}
+				response.addPackage(pkg)
+				havePkgs[pkg.PkgPath] = id
+				// Add the production package's sources for a test variant.
+				if isTest && !isXTest && testVariantOf != nil {
+					pkg.GoFiles = append(pkg.GoFiles, testVariantOf.GoFiles...)
+					pkg.CompiledGoFiles = append(pkg.CompiledGoFiles, testVariantOf.CompiledGoFiles...)
+				}
+			}
+		}
+		if !fileExists {
+			pkg.GoFiles = append(pkg.GoFiles, opath)
+			// TODO(matloob): Adding the file to CompiledGoFiles can exhibit the wrong behavior
+			// if the file will be ignored due to its build tags.
+			pkg.CompiledGoFiles = append(pkg.CompiledGoFiles, opath)
+			modifiedPkgsSet[pkg.ID] = true
+		}
+		imports, err := extractImports(opath, contents)
+		if err != nil {
+			// Let the parser or type checker report errors later.
+			continue
+		}
+		for _, imp := range imports {
+			_, found := pkg.Imports[imp]
+			if !found {
+				overlayAddsImports = true
+				// TODO(matloob): Handle cases when the following block isn't correct.
+				// These include imports of test variants, imports of vendored packages, etc.
+				id, ok := havePkgs[imp]
+				if !ok {
+					id = imp
+				}
+				pkg.Imports[imp] = &Package{ID: id}
+			}
+		}
+		continue
+	}
+
+	// toPkgPath tries to guess the package path given the id.
+	// This isn't always correct -- it's certainly wrong for
+	// vendored packages' paths.
+	toPkgPath := func(id string) string {
+		// TODO(matloob): Handle vendor paths.
+ i := strings.IndexByte(id, ' ') + if i >= 0 { + return id[:i] + } + return id + } + + // Do another pass now that new packages have been created to determine the + // set of missing packages. + for _, pkg := range response.dr.Packages { + for _, imp := range pkg.Imports { + pkgPath := toPkgPath(imp.ID) + if _, ok := havePkgs[pkgPath]; !ok { + needPkgsSet[pkgPath] = true + } + } + } + + if overlayAddsImports { + needPkgs = make([]string, 0, len(needPkgsSet)) + for pkg := range needPkgsSet { + needPkgs = append(needPkgs, pkg) + } + } + modifiedPkgs = make([]string, 0, len(modifiedPkgsSet)) + for pkg := range modifiedPkgsSet { + modifiedPkgs = append(modifiedPkgs, pkg) + } + return modifiedPkgs, needPkgs, err +} + +func hasTestFiles(p *Package) bool { + for _, f := range p.GoFiles { + if strings.HasSuffix(f, "_test.go") { + return true + } + } + return false +} + +// determineRootDirs returns a mapping from directories code can be contained in to the +// corresponding import path prefixes of those directories. +// Its result is used to try to determine the import path for a package containing +// an overlay file. +func determineRootDirs(cfg *Config) map[string]string { + // Assume modules first: + out, err := invokeGo(cfg, "list", "-m", "-json", "all") + if err != nil { + return determineRootDirsGOPATH(cfg) + } + m := map[string]string{} + type jsonMod struct{ Path, Dir string } + for dec := json.NewDecoder(out); dec.More(); { + mod := new(jsonMod) + if err := dec.Decode(mod); err != nil { + return m // Give up and return an empty map. Package won't be found for overlay. + } + if mod.Dir != "" && mod.Path != "" { + // This is a valid module; add it to the map. + m[mod.Dir] = mod.Path + } + } + return m +} + +func determineRootDirsGOPATH(cfg *Config) map[string]string { + m := map[string]string{} + out, err := invokeGo(cfg, "env", "GOPATH") + if err != nil { + // Could not determine root dir mapping. Everything is best-effort, so just return an empty map. + // When we try to find the import path for a directory, there will be no root-dir match and + // we'll give up. + return m + } + for _, p := range filepath.SplitList(string(bytes.TrimSpace(out.Bytes()))) { + m[filepath.Join(p, "src")] = "" + } + return m +} + +func extractImports(filename string, contents []byte) ([]string, error) { + f, err := parser.ParseFile(token.NewFileSet(), filename, contents, parser.ImportsOnly) // TODO(matloob): reuse fileset? + if err != nil { + return nil, err + } + var res []string + for _, imp := range f.Imports { + quotedPath := imp.Path.Value + path, err := strconv.Unquote(quotedPath) + if err != nil { + return nil, err + } + res = append(res, path) + } + return res, nil +} + +// reclaimPackage attempts to reuse a package that failed to load in an overlay. +// +// If the package has errors and has no Name, GoFiles, or Imports, +// then it's possible that it doesn't yet exist on disk. +func reclaimPackage(pkg *Package, id string, filename string, contents []byte) bool { + // TODO(rstambler): Check the message of the actual error? + // It differs between $GOPATH and module mode. 
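+	// Editorial example (illustrative, not part of the upstream file): a
+	// reclaimable package is typically the bare result of listing a path
+	// that does not exist yet, e.g. an entry with only ID set and a single
+	// list error, and no Name, files, or Imports; the checks below verify
+	// exactly that shape before adopting the package for the overlay file.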
+	if pkg.ID != id {
+		return false
+	}
+	if len(pkg.Errors) != 1 {
+		return false
+	}
+	if pkg.Name != "" || pkg.ExportFile != "" {
+		return false
+	}
+	if len(pkg.GoFiles) > 0 || len(pkg.CompiledGoFiles) > 0 || len(pkg.OtherFiles) > 0 {
+		return false
+	}
+	if len(pkg.Imports) > 0 {
+		return false
+	}
+	pkgName, ok := extractPackageName(filename, contents)
+	if !ok {
+		return false
+	}
+	pkg.Name = pkgName
+	pkg.Errors = nil
+	return true
+}
+
+func extractPackageName(filename string, contents []byte) (string, bool) {
+	f, err := parser.ParseFile(token.NewFileSet(), filename, contents, parser.PackageClauseOnly) // TODO(matloob): reuse fileset?
+	if err != nil {
+		return "", false
+	}
+	return f.Name.Name, true
+}
diff --git a/vendor/golang.org/x/tools/go/packages/packages.go b/vendor/golang.org/x/tools/go/packages/packages.go
new file mode 100644
index 000000000..f20e444f4
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/packages/packages.go
@@ -0,0 +1,1075 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package packages
+
+// See doc.go for package documentation and implementation notes.
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+	"go/ast"
+	"go/parser"
+	"go/scanner"
+	"go/token"
+	"go/types"
+	"io/ioutil"
+	"log"
+	"os"
+	"path/filepath"
+	"strings"
+	"sync"
+
+	"golang.org/x/tools/go/gcexportdata"
+)
+
+// A LoadMode controls the amount of detail to return when loading.
+// The bits below can be combined to specify which fields should be
+// filled in the result packages.
+// The zero value is a special case, equivalent to combining
+// the NeedName, NeedFiles, and NeedCompiledGoFiles bits.
+// ID and Errors (if present) will always be filled.
+// Load may return more information than requested.
+type LoadMode int
+
+const (
+	// NeedName adds Name and PkgPath.
+	NeedName LoadMode = 1 << iota
+
+	// NeedFiles adds GoFiles and OtherFiles.
+	NeedFiles
+
+	// NeedCompiledGoFiles adds CompiledGoFiles.
+	NeedCompiledGoFiles
+
+	// NeedImports adds Imports. If NeedDeps is not set, the Imports field will contain
+	// "placeholder" Packages with only the ID set.
+	NeedImports
+
+	// NeedDeps adds the fields requested by the LoadMode in the packages in Imports. If NeedImports
+	// is not set, NeedDeps has no effect.
+	NeedDeps
+
+	// NeedExportsFile adds ExportFile.
+	NeedExportsFile
+
+	// NeedTypes adds Types, Fset, and IllTyped.
+	NeedTypes
+
+	// NeedSyntax adds Syntax.
+	NeedSyntax
+
+	// NeedTypesInfo adds TypesInfo.
+	NeedTypesInfo
+
+	// NeedTypesSizes adds TypesSizes.
+	NeedTypesSizes
+)
+
+const (
+	// Deprecated: LoadFiles exists for historical compatibility
+	// and should not be used. Please directly specify the needed fields using the Need values.
+	LoadFiles = NeedName | NeedFiles | NeedCompiledGoFiles
+
+	// Deprecated: LoadImports exists for historical compatibility
+	// and should not be used. Please directly specify the needed fields using the Need values.
+	LoadImports = LoadFiles | NeedImports | NeedDeps
+
+	// Deprecated: LoadTypes exists for historical compatibility
+	// and should not be used. Please directly specify the needed fields using the Need values.
+	LoadTypes = LoadImports | NeedTypes | NeedTypesSizes
+
+	// Deprecated: LoadSyntax exists for historical compatibility
+	// and should not be used.
Please directly specify the needed fields using the Need values.
+	LoadSyntax = LoadTypes | NeedSyntax | NeedTypesInfo
+
+	// Deprecated: LoadAllSyntax exists for historical compatibility
+	// and should not be used. Please directly specify the needed fields using the Need values.
+	LoadAllSyntax = LoadSyntax
+)
+
+// A Config specifies details about how packages should be loaded.
+// The zero value is a valid configuration.
+// Calls to Load do not modify this struct.
+type Config struct {
+	// Mode controls the level of information returned for each package.
+	Mode LoadMode
+
+	// Context specifies the context for the load operation.
+	// If the context is cancelled, the loader may stop early
+	// and return an ErrCancelled error.
+	// If Context is nil, the load cannot be cancelled.
+	Context context.Context
+
+	// Dir is the directory in which to run the build system's query tool
+	// that provides information about the packages.
+	// If Dir is empty, the tool is run in the current directory.
+	Dir string
+
+	// Env is the environment to use when invoking the build system's query tool.
+	// If Env is nil, the current environment is used.
+	// As in os/exec's Cmd, only the last value in the slice for
+	// each environment key is used. To specify the setting of only
+	// a few variables, append to the current environment, as in:
+	//
+	//	opt.Env = append(os.Environ(), "GOOS=plan9", "GOARCH=386")
+	//
+	Env []string
+
+	// BuildFlags is a list of command-line flags to be passed through to
+	// the build system's query tool.
+	BuildFlags []string
+
+	// Fset provides source position information for syntax trees and types.
+	// If Fset is nil, Load will use a new fileset; the Fset field itself is
+	// left unchanged.
+	Fset *token.FileSet
+
+	// ParseFile is called to read and parse each file
+	// when preparing a package's type-checked syntax tree.
+	// It must be safe to call ParseFile simultaneously from multiple goroutines.
+	// If ParseFile is nil, the loader will use parser.ParseFile.
+	//
+	// ParseFile should parse the source from src and use filename only for
+	// recording position information.
+	//
+	// An application may supply a custom implementation of ParseFile
+	// to change the effective file contents or the behavior of the parser,
+	// or to modify the syntax tree. For example, selectively eliminating
+	// unwanted function bodies can significantly accelerate type checking.
+	ParseFile func(fset *token.FileSet, filename string, src []byte) (*ast.File, error)
+
+	// If Tests is set, the loader includes not just the packages
+	// matching a particular pattern but also any related test packages,
+	// including test-only variants of the package and the test executable.
+	//
+	// For example, when using the go command, loading "fmt" with Tests=true
+	// returns four packages, with IDs "fmt" (the standard package),
+	// "fmt [fmt.test]" (the package as compiled for the test),
+	// "fmt_test" (the test functions from source files in package fmt_test),
+	// and "fmt.test" (the test binary).
+	//
+	// In build systems with explicit names for tests,
+	// setting Tests may have no effect.
+	Tests bool
+
+	// Overlay provides a mapping of absolute file paths to file contents.
+	// If the file with the given path already exists, the parser will use the
+	// alternative file contents provided by the map.
+	//
+	// Overlays provide incomplete support for when a given file doesn't
+	// already exist on disk. See the package doc above for more details.
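+	//
+	// Editorial example (illustrative, not part of the upstream file):
+	//
+	//	cfg.Overlay = map[string][]byte{
+	//		"/abs/path/to/gen.go": []byte("package p\n\nconst Generated = true\n"), // hypothetical file
+	//	}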
+ Overlay map[string][]byte +} + +// driver is the type for functions that query the build system for the +// packages named by the patterns. +type driver func(cfg *Config, patterns ...string) (*driverResponse, error) + +// driverResponse contains the results for a driver query. +type driverResponse struct { + // Sizes, if not nil, is the types.Sizes to use when type checking. + Sizes *types.StdSizes + + // Roots is the set of package IDs that make up the root packages. + // We have to encode this separately because when we encode a single package + // we cannot know if it is one of the roots as that requires knowledge of the + // graph it is part of. + Roots []string `json:",omitempty"` + + // Packages is the full set of packages in the graph. + // The packages are not connected into a graph. + // The Imports if populated will be stubs that only have their ID set. + // Imports will be connected and then type and syntax information added in a + // later pass (see refine). + Packages []*Package +} + +// Load loads and returns the Go packages named by the given patterns. +// +// Config specifies loading options; +// nil behaves the same as an empty Config. +// +// Load returns an error if any of the patterns was invalid +// as defined by the underlying build system. +// It may return an empty list of packages without an error, +// for instance for an empty expansion of a valid wildcard. +// Errors associated with a particular package are recorded in the +// corresponding Package's Errors list, and do not cause Load to +// return an error. Clients may need to handle such errors before +// proceeding with further analysis. The PrintErrors function is +// provided for convenient display of all errors. +func Load(cfg *Config, patterns ...string) ([]*Package, error) { + l := newLoader(cfg) + response, err := defaultDriver(&l.Config, patterns...) + if err != nil { + return nil, err + } + l.sizes = response.Sizes + return l.refine(response.Roots, response.Packages...) +} + +// defaultDriver is a driver that looks for an external driver binary, and if +// it does not find it falls back to the built in go list driver. +func defaultDriver(cfg *Config, patterns ...string) (*driverResponse, error) { + driver := findExternalDriver(cfg) + if driver == nil { + driver = goListDriver + } + return driver(cfg, patterns...) +} + +// A Package describes a loaded Go package. +type Package struct { + // ID is a unique identifier for a package, + // in a syntax provided by the underlying build system. + // + // Because the syntax varies based on the build system, + // clients should treat IDs as opaque and not attempt to + // interpret them. + ID string + + // Name is the package name as it appears in the package source code. + Name string + + // PkgPath is the package path as used by the go/types package. + PkgPath string + + // Errors contains any errors encountered querying the metadata + // of the package, or while parsing or type-checking its files. + Errors []Error + + // GoFiles lists the absolute file paths of the package's Go source files. + GoFiles []string + + // CompiledGoFiles lists the absolute file paths of the package's source + // files that were presented to the compiler. + // This may differ from GoFiles if files are processed before compilation. + CompiledGoFiles []string + + // OtherFiles lists the absolute file paths of the package's non-Go source files, + // including assembly, C, C++, Fortran, Objective-C, SWIG, and so on. 
+ OtherFiles []string + + // ExportFile is the absolute path to a file containing type + // information for the package as provided by the build system. + ExportFile string + + // Imports maps import paths appearing in the package's Go source files + // to corresponding loaded Packages. + Imports map[string]*Package + + // Types provides type information for the package. + // The NeedTypes LoadMode bit sets this field for packages matching the + // patterns; type information for dependencies may be missing or incomplete, + // unless NeedDeps and NeedImports are also set. + Types *types.Package + + // Fset provides position information for Types, TypesInfo, and Syntax. + // It is set only when Types is set. + Fset *token.FileSet + + // IllTyped indicates whether the package or any dependency contains errors. + // It is set only when Types is set. + IllTyped bool + + // Syntax is the package's syntax trees, for the files listed in CompiledGoFiles. + // + // The NeedSyntax LoadMode bit populates this field for packages matching the patterns. + // If NeedDeps and NeedImports are also set, this field will also be populated + // for dependencies. + Syntax []*ast.File + + // TypesInfo provides type information about the package's syntax trees. + // It is set only when Syntax is set. + TypesInfo *types.Info + + // TypesSizes provides the effective size function for types in TypesInfo. + TypesSizes types.Sizes +} + +// An Error describes a problem with a package's metadata, syntax, or types. +type Error struct { + Pos string // "file:line:col" or "file:line" or "" or "-" + Msg string + Kind ErrorKind +} + +// ErrorKind describes the source of the error, allowing the user to +// differentiate between errors generated by the driver, the parser, or the +// type-checker. +type ErrorKind int + +const ( + UnknownError ErrorKind = iota + ListError + ParseError + TypeError +) + +func (err Error) Error() string { + pos := err.Pos + if pos == "" { + pos = "-" // like token.Position{}.String() + } + return pos + ": " + err.Msg +} + +// flatPackage is the JSON form of Package +// It drops all the type and syntax fields, and transforms the Imports +// +// TODO(adonovan): identify this struct with Package, effectively +// publishing the JSON protocol. +type flatPackage struct { + ID string + Name string `json:",omitempty"` + PkgPath string `json:",omitempty"` + Errors []Error `json:",omitempty"` + GoFiles []string `json:",omitempty"` + CompiledGoFiles []string `json:",omitempty"` + OtherFiles []string `json:",omitempty"` + ExportFile string `json:",omitempty"` + Imports map[string]string `json:",omitempty"` +} + +// MarshalJSON returns the Package in its JSON form. +// For the most part, the structure fields are written out unmodified, and +// the type and syntax fields are skipped. +// The imports are written out as just a map of path to package id. +// The errors are written using a custom type that tries to preserve the +// structure of error types we know about. +// +// This method exists to enable support for additional build systems. It is +// not intended for use by clients of the API and we may change the format. 
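+//
+// Editorial example (illustrative, not part of the upstream file): a package
+// with a single import marshals to something like
+//
+//	{"ID":"example.com/p","Name":"p","PkgPath":"example.com/p",
+//	 "GoFiles":["/src/p/p.go"],"CompiledGoFiles":["/src/p/p.go"],
+//	 "Imports":{"fmt":"fmt"}}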
+func (p *Package) MarshalJSON() ([]byte, error) {
+	flat := &flatPackage{
+		ID:              p.ID,
+		Name:            p.Name,
+		PkgPath:         p.PkgPath,
+		Errors:          p.Errors,
+		GoFiles:         p.GoFiles,
+		CompiledGoFiles: p.CompiledGoFiles,
+		OtherFiles:      p.OtherFiles,
+		ExportFile:      p.ExportFile,
+	}
+	if len(p.Imports) > 0 {
+		flat.Imports = make(map[string]string, len(p.Imports))
+		for path, ipkg := range p.Imports {
+			flat.Imports[path] = ipkg.ID
+		}
+	}
+	return json.Marshal(flat)
+}
+
+// UnmarshalJSON reads in a Package from its JSON format.
+// See MarshalJSON for details about the format accepted.
+func (p *Package) UnmarshalJSON(b []byte) error {
+	flat := &flatPackage{}
+	if err := json.Unmarshal(b, &flat); err != nil {
+		return err
+	}
+	*p = Package{
+		ID:              flat.ID,
+		Name:            flat.Name,
+		PkgPath:         flat.PkgPath,
+		Errors:          flat.Errors,
+		GoFiles:         flat.GoFiles,
+		CompiledGoFiles: flat.CompiledGoFiles,
+		OtherFiles:      flat.OtherFiles,
+		ExportFile:      flat.ExportFile,
+	}
+	if len(flat.Imports) > 0 {
+		p.Imports = make(map[string]*Package, len(flat.Imports))
+		for path, id := range flat.Imports {
+			p.Imports[path] = &Package{ID: id}
+		}
+	}
+	return nil
+}
+
+func (p *Package) String() string { return p.ID }
+
+// loaderPackage augments Package with state used during the loading phase.
+type loaderPackage struct {
+	*Package
+	importErrors map[string]error // maps each bad import to its error
+	loadOnce     sync.Once
+	color        uint8 // for cycle detection
+	needsrc      bool  // load from source (Mode >= LoadTypes)
+	needtypes    bool  // type information is either requested or depended on
+	initial      bool  // package was matched by a pattern
+}
+
+// loader holds the working state of a single call to load.
+type loader struct {
+	pkgs map[string]*loaderPackage
+	Config
+	sizes        types.Sizes
+	parseCache   map[string]*parseValue
+	parseCacheMu sync.Mutex
+	exportMu     sync.Mutex // enforces mutual exclusion of exportdata operations
+
+	// TODO(matloob): Add an implied mode here and use that instead of mode.
+	// Implied mode would contain all the fields we need the data for so we can
+	// get the actually requested fields. We'll zero them out before returning
+	// packages to the user. This will make it easier for us to get the conditions
+	// where we need certain modes right.
+}
+
+type parseValue struct {
+	f     *ast.File
+	err   error
+	ready chan struct{}
+}
+
+func newLoader(cfg *Config) *loader {
+	ld := &loader{
+		parseCache: map[string]*parseValue{},
+	}
+	if cfg != nil {
+		ld.Config = *cfg
+	}
+	if ld.Config.Mode == 0 {
+		ld.Config.Mode = NeedName | NeedFiles | NeedCompiledGoFiles // Preserve zero behavior of Mode for backwards compatibility.
+	}
+	if ld.Config.Env == nil {
+		ld.Config.Env = os.Environ()
+	}
+	if ld.Context == nil {
+		ld.Context = context.Background()
+	}
+	if ld.Dir == "" {
+		if dir, err := os.Getwd(); err == nil {
+			ld.Dir = dir
+		}
+	}
+
+	if ld.Mode&NeedTypes != 0 {
+		if ld.Fset == nil {
+			ld.Fset = token.NewFileSet()
+		}
+
+		// ParseFile is required even in LoadTypes mode
+		// because we load source if export data is missing.
+		if ld.ParseFile == nil {
+			ld.ParseFile = func(fset *token.FileSet, filename string, src []byte) (*ast.File, error) {
+				const mode = parser.AllErrors | parser.ParseComments
+				return parser.ParseFile(fset, filename, src, mode)
+			}
+		}
+	}
+	return ld
+}
+
+// refine connects the supplied packages into a graph and then adds type and
+// syntax information as requested by the LoadMode.
+func (ld *loader) refine(roots []string, list ...*Package) ([]*Package, error) { + rootMap := make(map[string]int, len(roots)) + for i, root := range roots { + rootMap[root] = i + } + ld.pkgs = make(map[string]*loaderPackage) + // first pass, fixup and build the map and roots + var initial = make([]*loaderPackage, len(roots)) + for _, pkg := range list { + rootIndex := -1 + if i, found := rootMap[pkg.ID]; found { + rootIndex = i + } + lpkg := &loaderPackage{ + Package: pkg, + needtypes: (ld.Mode&(NeedTypes|NeedTypesInfo) != 0 && rootIndex < 0) || rootIndex >= 0, + needsrc: (ld.Mode&(NeedSyntax|NeedTypesInfo) != 0 && rootIndex < 0) || rootIndex >= 0 || + len(ld.Overlay) > 0 || // Overlays can invalidate export data. TODO(matloob): make this check fine-grained based on dependencies on overlaid files + pkg.ExportFile == "" && pkg.PkgPath != "unsafe", + } + ld.pkgs[lpkg.ID] = lpkg + if rootIndex >= 0 { + initial[rootIndex] = lpkg + lpkg.initial = true + } + } + for i, root := range roots { + if initial[i] == nil { + return nil, fmt.Errorf("root package %v is missing", root) + } + } + + // Materialize the import graph. + + const ( + white = 0 // new + grey = 1 // in progress + black = 2 // complete + ) + + // visit traverses the import graph, depth-first, + // and materializes the graph as Packages.Imports. + // + // Valid imports are saved in the Packages.Import map. + // Invalid imports (cycles and missing nodes) are saved in the importErrors map. + // Thus, even in the presence of both kinds of errors, the Import graph remains a DAG. + // + // visit returns whether the package needs src or has a transitive + // dependency on a package that does. These are the only packages + // for which we load source code. + var stack []*loaderPackage + var visit func(lpkg *loaderPackage) bool + var srcPkgs []*loaderPackage + visit = func(lpkg *loaderPackage) bool { + switch lpkg.color { + case black: + return lpkg.needsrc + case grey: + panic("internal error: grey node") + } + lpkg.color = grey + stack = append(stack, lpkg) // push + stubs := lpkg.Imports // the structure form has only stubs with the ID in the Imports + // If NeedImports isn't set, the imports fields will all be zeroed out. + // If NeedDeps isn't also set we want to keep the stubs. + if ld.Mode&NeedImports != 0 && ld.Mode&NeedDeps != 0 { + lpkg.Imports = make(map[string]*Package, len(stubs)) + for importPath, ipkg := range stubs { + var importErr error + imp := ld.pkgs[ipkg.ID] + if imp == nil { + // (includes package "C" when DisableCgo) + importErr = fmt.Errorf("missing package: %q", ipkg.ID) + } else if imp.color == grey { + importErr = fmt.Errorf("import cycle: %s", stack) + } + if importErr != nil { + if lpkg.importErrors == nil { + lpkg.importErrors = make(map[string]error) + } + lpkg.importErrors[importPath] = importErr + continue + } + + if visit(imp) { + lpkg.needsrc = true + } + lpkg.Imports[importPath] = imp.Package + } + } + if lpkg.needsrc { + srcPkgs = append(srcPkgs, lpkg) + } + if ld.Mode&NeedTypesSizes != 0 { + lpkg.TypesSizes = ld.sizes + } + stack = stack[:len(stack)-1] // pop + lpkg.color = black + + return lpkg.needsrc + } + + if ld.Mode&(NeedImports|NeedDeps) == 0 { + // We do this to drop the stub import packages that we are not even going to try to resolve. + for _, lpkg := range initial { + lpkg.Imports = nil + } + } else { + // For each initial package, create its import DAG. 
+ for _, lpkg := range initial { + visit(lpkg) + } + } + if ld.Mode&NeedDeps != 0 { // TODO(matloob): This is only the case if NeedTypes is also set, right? + for _, lpkg := range srcPkgs { + // Complete type information is required for the + // immediate dependencies of each source package. + for _, ipkg := range lpkg.Imports { + imp := ld.pkgs[ipkg.ID] + imp.needtypes = true + } + } + } + // Load type data if needed, starting at + // the initial packages (roots of the import DAG). + if ld.Mode&NeedTypes != 0 { + var wg sync.WaitGroup + for _, lpkg := range initial { + wg.Add(1) + go func(lpkg *loaderPackage) { + ld.loadRecursive(lpkg) + wg.Done() + }(lpkg) + } + wg.Wait() + } + + result := make([]*Package, len(initial)) + importPlaceholders := make(map[string]*Package) + for i, lpkg := range initial { + result[i] = lpkg.Package + } + for i := range ld.pkgs { + // Clear all unrequested fields, for extra de-Hyrum-ization. + if ld.Mode&NeedName == 0 { + ld.pkgs[i].Name = "" + ld.pkgs[i].PkgPath = "" + } + if ld.Mode&NeedFiles == 0 { + ld.pkgs[i].GoFiles = nil + ld.pkgs[i].OtherFiles = nil + } + if ld.Mode&NeedCompiledGoFiles == 0 { + ld.pkgs[i].CompiledGoFiles = nil + } + if ld.Mode&NeedImports == 0 { + ld.pkgs[i].Imports = nil + } + if ld.Mode&NeedExportsFile == 0 { + ld.pkgs[i].ExportFile = "" + } + if ld.Mode&NeedTypes == 0 { + ld.pkgs[i].Types = nil + ld.pkgs[i].Fset = nil + ld.pkgs[i].IllTyped = false + } + if ld.Mode&NeedSyntax == 0 { + ld.pkgs[i].Syntax = nil + } + if ld.Mode&NeedTypesInfo == 0 { + ld.pkgs[i].TypesInfo = nil + } + if ld.Mode&NeedTypesSizes == 0 { + ld.pkgs[i].TypesSizes = nil + } + if ld.Mode&NeedDeps == 0 { + for j, pkg := range ld.pkgs[i].Imports { + ph, ok := importPlaceholders[pkg.ID] + if !ok { + ph = &Package{ID: pkg.ID} + importPlaceholders[pkg.ID] = ph + } + ld.pkgs[i].Imports[j] = ph + } + } + } + return result, nil +} + +// loadRecursive loads the specified package and its dependencies, +// recursively, in parallel, in topological order. +// It is atomic and idempotent. +// Precondition: ld.Mode&NeedTypes. +func (ld *loader) loadRecursive(lpkg *loaderPackage) { + lpkg.loadOnce.Do(func() { + // Load the direct dependencies, in parallel. + var wg sync.WaitGroup + for _, ipkg := range lpkg.Imports { + imp := ld.pkgs[ipkg.ID] + wg.Add(1) + go func(imp *loaderPackage) { + ld.loadRecursive(imp) + wg.Done() + }(imp) + } + wg.Wait() + + ld.loadPackage(lpkg) + }) +} + +// loadPackage loads the specified package. +// It must be called only once per Package, +// after immediate dependencies are loaded. +// Precondition: ld.Mode & NeedTypes. +func (ld *loader) loadPackage(lpkg *loaderPackage) { + if lpkg.PkgPath == "unsafe" { + // Fill in the blanks to avoid surprises. + lpkg.Types = types.Unsafe + lpkg.Fset = ld.Fset + lpkg.Syntax = []*ast.File{} + lpkg.TypesInfo = new(types.Info) + lpkg.TypesSizes = ld.sizes + return + } + + // Call NewPackage directly with explicit name. + // This avoids skew between golist and go/types when the files' + // package declarations are inconsistent. + lpkg.Types = types.NewPackage(lpkg.PkgPath, lpkg.Name) + lpkg.Fset = ld.Fset + + // Subtle: we populate all Types fields with an empty Package + // before loading export data so that export data processing + // never has to create a types.Package for an indirect dependency, + // which would then require that such created packages be explicitly + // inserted back into the Import graph as a final step after export data loading. + // The Diamond test exercises this case. 
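+	// Editorial note (illustrative, not part of the upstream file): for a
+	// diamond-shaped import graph in which B and C both import D, the export
+	// data for B and for C must resolve D to the same *types.Package;
+	// pre-populating an empty package for D guarantees that identity.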
+ if !lpkg.needtypes { + return + } + if !lpkg.needsrc { + ld.loadFromExportData(lpkg) + return // not a source package, don't get syntax trees + } + + appendError := func(err error) { + // Convert various error types into the one true Error. + var errs []Error + switch err := err.(type) { + case Error: + // from driver + errs = append(errs, err) + + case *os.PathError: + // from parser + errs = append(errs, Error{ + Pos: err.Path + ":1", + Msg: err.Err.Error(), + Kind: ParseError, + }) + + case scanner.ErrorList: + // from parser + for _, err := range err { + errs = append(errs, Error{ + Pos: err.Pos.String(), + Msg: err.Msg, + Kind: ParseError, + }) + } + + case types.Error: + // from type checker + errs = append(errs, Error{ + Pos: err.Fset.Position(err.Pos).String(), + Msg: err.Msg, + Kind: TypeError, + }) + + default: + // unexpected impoverished error from parser? + errs = append(errs, Error{ + Pos: "-", + Msg: err.Error(), + Kind: UnknownError, + }) + + // If you see this error message, please file a bug. + log.Printf("internal error: error %q (%T) without position", err, err) + } + + lpkg.Errors = append(lpkg.Errors, errs...) + } + + files, errs := ld.parseFiles(lpkg.CompiledGoFiles) + for _, err := range errs { + appendError(err) + } + + lpkg.Syntax = files + + lpkg.TypesInfo = &types.Info{ + Types: make(map[ast.Expr]types.TypeAndValue), + Defs: make(map[*ast.Ident]types.Object), + Uses: make(map[*ast.Ident]types.Object), + Implicits: make(map[ast.Node]types.Object), + Scopes: make(map[ast.Node]*types.Scope), + Selections: make(map[*ast.SelectorExpr]*types.Selection), + } + lpkg.TypesSizes = ld.sizes + + importer := importerFunc(func(path string) (*types.Package, error) { + if path == "unsafe" { + return types.Unsafe, nil + } + + // The imports map is keyed by import path. + ipkg := lpkg.Imports[path] + if ipkg == nil { + if err := lpkg.importErrors[path]; err != nil { + return nil, err + } + // There was skew between the metadata and the + // import declarations, likely due to an edit + // race, or because the ParseFile feature was + // used to supply alternative file contents. + return nil, fmt.Errorf("no metadata for %s", path) + } + + if ipkg.Types != nil && ipkg.Types.Complete() { + return ipkg.Types, nil + } + log.Fatalf("internal error: nil Pkg importing %q from %q", path, lpkg) + panic("unreachable") + }) + + // type-check + tc := &types.Config{ + Importer: importer, + + // Type-check bodies of functions only in non-initial packages. + // Example: for import graph A->B->C and initial packages {A,C}, + // we can ignore function bodies in B. + IgnoreFuncBodies: (ld.Mode&(NeedDeps|NeedTypesInfo) == 0) && !lpkg.initial, + + Error: appendError, + Sizes: ld.sizes, + } + types.NewChecker(tc, ld.Fset, lpkg.Types, lpkg.TypesInfo).Files(lpkg.Syntax) + + lpkg.importErrors = nil // no longer needed + + // If !Cgo, the type-checker uses FakeImportC mode, so + // it doesn't invoke the importer for import "C", + // nor report an error for the import, + // or for any undefined C.f reference. + // We must detect this explicitly and correctly + // mark the package as IllTyped (by reporting an error). + // TODO(adonovan): if these errors are annoying, + // we could just set IllTyped quietly. + if tc.FakeImportC { + outer: + for _, f := range lpkg.Syntax { + for _, imp := range f.Imports { + if imp.Path.Value == `"C"` { + err := types.Error{Fset: ld.Fset, Pos: imp.Pos(), Msg: `import "C" ignored`} + appendError(err) + break outer + } + } + } + } + + // Record accumulated errors. 
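`appendError` above funnels several unrelated error shapes (driver `Error` values, `*os.PathError` and `scanner.ErrorList` from the parser, `types.Error` from the type checker) into the one `Error` type. What makes the type-checker case work is that `types.Config.Error` accepts a callback, so checking reports every error rather than stopping at the first. A self-contained sketch of that collection pattern, independent of this loader:

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"go/types"
)

func main() {
	// Two deliberate mistakes: y is undefined, and x is declared but unused.
	const src = `package p
func f() { var x int; _ = y }`

	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, "p.go", src, 0)
	if err != nil {
		fmt.Println("parse:", err) // would be a scanner.ErrorList
		return
	}

	var all []types.Error
	conf := types.Config{
		// With an Error callback installed, the checker keeps going
		// and reports every error it finds, not just the first.
		Error: func(err error) {
			if te, ok := err.(types.Error); ok {
				all = append(all, te)
			}
		},
	}
	// Check still returns the first error, but the callback saw them all.
	_, _ = conf.Check("p", fset, []*ast.File{file}, nil)

	for _, te := range all {
		fmt.Printf("%s: %s\n", fset.Position(te.Pos), te.Msg)
	}
}
```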
+ illTyped := len(lpkg.Errors) > 0
+ if !illTyped {
+ for _, imp := range lpkg.Imports {
+ if imp.IllTyped {
+ illTyped = true
+ break
+ }
+ }
+ }
+ lpkg.IllTyped = illTyped
+}
+
+// An importerFunc is an implementation of the single-method
+// types.Importer interface based on a function value.
+type importerFunc func(path string) (*types.Package, error)
+
+func (f importerFunc) Import(path string) (*types.Package, error) { return f(path) }
+
+// We use a counting semaphore to limit
+// the number of parallel I/O calls per process.
+var ioLimit = make(chan bool, 20)
+
+func (ld *loader) parseFile(filename string) (*ast.File, error) {
+ ld.parseCacheMu.Lock()
+ v, ok := ld.parseCache[filename]
+ if ok {
+ // cache hit
+ ld.parseCacheMu.Unlock()
+ <-v.ready
+ } else {
+ // cache miss
+ v = &parseValue{ready: make(chan struct{})}
+ ld.parseCache[filename] = v
+ ld.parseCacheMu.Unlock()
+
+ var src []byte
+ for f, contents := range ld.Config.Overlay {
+ if sameFile(f, filename) {
+ src = contents
+ }
+ }
+ var err error
+ if src == nil {
+ ioLimit <- true // wait
+ src, err = ioutil.ReadFile(filename)
+ <-ioLimit // signal
+ }
+ if err != nil {
+ v.err = err
+ } else {
+ v.f, v.err = ld.ParseFile(ld.Fset, filename, src)
+ }
+
+ close(v.ready)
+ }
+ return v.f, v.err
+}
+
+// parseFiles reads and parses the Go source files and returns the ASTs
+// of the ones that could be at least partially parsed, along with a
+// list of I/O and parse errors encountered.
+//
+// Because files are scanned in parallel, the token.Pos
+// positions of the resulting ast.Files are not ordered.
+//
+func (ld *loader) parseFiles(filenames []string) ([]*ast.File, []error) {
+ var wg sync.WaitGroup
+ n := len(filenames)
+ parsed := make([]*ast.File, n)
+ errors := make([]error, n)
+ for i, file := range filenames {
+ if ld.Config.Context.Err() != nil {
+ parsed[i] = nil
+ errors[i] = ld.Config.Context.Err()
+ continue
+ }
+ wg.Add(1)
+ go func(i int, filename string) {
+ parsed[i], errors[i] = ld.parseFile(filename)
+ wg.Done()
+ }(i, file)
+ }
+ wg.Wait()
+
+ // Eliminate nils, preserving order.
+ var o int
+ for _, f := range parsed {
+ if f != nil {
+ parsed[o] = f
+ o++
+ }
+ }
+ parsed = parsed[:o]
+
+ o = 0
+ for _, err := range errors {
+ if err != nil {
+ errors[o] = err
+ o++
+ }
+ }
+ errors = errors[:o]
+
+ return parsed, errors
+}
+
+// sameFile returns true if x and y have the same basename and denote
+// the same file.
+//
+func sameFile(x, y string) bool {
+ if x == y {
+ // It could be the case that y doesn't exist.
+ // For instance, it may be an overlay file that
+ // hasn't been written to disk. To handle that case
+ // let x == y through. (We added the exact absolute path
+ // string to the CompiledGoFiles list, so the unwritten
+ // overlay case implies x==y.)
+ return true
+ }
+ if strings.EqualFold(filepath.Base(x), filepath.Base(y)) { // (optimisation)
+ if xi, err := os.Stat(x); err == nil {
+ if yi, err := os.Stat(y); err == nil {
+ return os.SameFile(xi, yi)
+ }
+ }
+ }
+ return false
+}
+
+// loadFromExportData returns type information for the specified
+// package, loading it from an export data file on the first request.
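`parseFile` above uses a compact duplicate-suppression idiom: the first caller for a filename installs a `parseValue` with an open `ready` channel, releases the map lock, does the parse, and closes the channel; every later caller for the same file finds the entry and blocks on `<-v.ready` until the result is filled in. The map lock is therefore never held across I/O. A reduced sketch of the pattern (the `memo` type and `compute` function are illustrative, not part of this package):

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

type entry struct {
	ready chan struct{} // closed once val is set
	val   string
}

type memo struct {
	mu    sync.Mutex
	cache map[string]*entry
}

func (m *memo) get(key string, compute func(string) string) string {
	m.mu.Lock()
	e, ok := m.cache[key]
	if ok {
		// Cache hit: release the lock, then wait for the first caller.
		m.mu.Unlock()
		<-e.ready
	} else {
		// Cache miss: publish the entry, then compute without the lock.
		e = &entry{ready: make(chan struct{})}
		m.cache[key] = e
		m.mu.Unlock()

		e.val = compute(key)
		close(e.ready) // wake all waiters
	}
	return e.val
}

func main() {
	m := &memo{cache: make(map[string]*entry)}
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(m.get("hello", strings.ToUpper)) // computed once
		}()
	}
	wg.Wait()
}
```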
+func (ld *loader) loadFromExportData(lpkg *loaderPackage) (*types.Package, error) { + if lpkg.PkgPath == "" { + log.Fatalf("internal error: Package %s has no PkgPath", lpkg) + } + + // Because gcexportdata.Read has the potential to create or + // modify the types.Package for each node in the transitive + // closure of dependencies of lpkg, all exportdata operations + // must be sequential. (Finer-grained locking would require + // changes to the gcexportdata API.) + // + // The exportMu lock guards the Package.Pkg field and the + // types.Package it points to, for each Package in the graph. + // + // Not all accesses to Package.Pkg need to be protected by exportMu: + // graph ordering ensures that direct dependencies of source + // packages are fully loaded before the importer reads their Pkg field. + ld.exportMu.Lock() + defer ld.exportMu.Unlock() + + if tpkg := lpkg.Types; tpkg != nil && tpkg.Complete() { + return tpkg, nil // cache hit + } + + lpkg.IllTyped = true // fail safe + + if lpkg.ExportFile == "" { + // Errors while building export data will have been printed to stderr. + return nil, fmt.Errorf("no export data file") + } + f, err := os.Open(lpkg.ExportFile) + if err != nil { + return nil, err + } + defer f.Close() + + // Read gc export data. + // + // We don't currently support gccgo export data because all + // underlying workspaces use the gc toolchain. (Even build + // systems that support gccgo don't use it for workspace + // queries.) + r, err := gcexportdata.NewReader(f) + if err != nil { + return nil, fmt.Errorf("reading %s: %v", lpkg.ExportFile, err) + } + + // Build the view. + // + // The gcexportdata machinery has no concept of package ID. + // It identifies packages by their PkgPath, which although not + // globally unique is unique within the scope of one invocation + // of the linker, type-checker, or gcexportdata. + // + // So, we must build a PkgPath-keyed view of the global + // (conceptually ID-keyed) cache of packages and pass it to + // gcexportdata. The view must contain every existing + // package that might possibly be mentioned by the + // current package---its transitive closure. + // + // In loadPackage, we unconditionally create a types.Package for + // each dependency so that export data loading does not + // create new ones. + // + // TODO(adonovan): it would be simpler and more efficient + // if the export data machinery invoked a callback to + // get-or-create a package instead of a map. + // + view := make(map[string]*types.Package) // view seen by gcexportdata + seen := make(map[*loaderPackage]bool) // all visited packages + var visit func(pkgs map[string]*Package) + visit = func(pkgs map[string]*Package) { + for _, p := range pkgs { + lpkg := ld.pkgs[p.ID] + if !seen[lpkg] { + seen[lpkg] = true + view[lpkg.PkgPath] = lpkg.Types + visit(lpkg.Imports) + } + } + } + visit(lpkg.Imports) + + viewLen := len(view) + 1 // adding the self package + // Parse the export data. + // (May modify incomplete packages in view but not create new ones.) 
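As the comments explain, the `gcexportdata` machinery identifies packages by `PkgPath` rather than by the loader's `ID`, which is why a PkgPath-keyed `view` of the transitive imports has to be assembled before the `Read` call that follows. For orientation, a hedged sketch of the raw API being driven here (the export file path is illustrative; real callers locate it via the build system):

```go
package main

import (
	"fmt"
	"go/token"
	"go/types"
	"log"
	"os"

	"golang.org/x/tools/go/gcexportdata"
)

func main() {
	const exportFile = "/tmp/fmt.a" // illustrative; normally supplied by the build system
	f, err := os.Open(exportFile)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// NewReader locates the export data section within the file.
	r, err := gcexportdata.NewReader(f)
	if err != nil {
		log.Fatal(err)
	}

	fset := token.NewFileSet()
	// The PkgPath-keyed view: Read may complete packages already in
	// this map, and the loader above arranges for every package the
	// data could mention to be present, so none need to be created.
	view := make(map[string]*types.Package)
	pkg, err := gcexportdata.Read(r, fset, view, "fmt")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(pkg.Path(), "complete:", pkg.Complete())
}
```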
+ tpkg, err := gcexportdata.Read(r, ld.Fset, view, lpkg.PkgPath) + if err != nil { + return nil, fmt.Errorf("reading %s: %v", lpkg.ExportFile, err) + } + if viewLen != len(view) { + log.Fatalf("Unexpected package creation during export data loading") + } + + lpkg.Types = tpkg + lpkg.IllTyped = false + + return tpkg, nil +} + +func usesExportData(cfg *Config) bool { + return cfg.Mode&NeedExportsFile != 0 || cfg.Mode&NeedTypes != 0 && cfg.Mode&NeedTypesInfo == 0 +} diff --git a/vendor/golang.org/x/tools/go/packages/visit.go b/vendor/golang.org/x/tools/go/packages/visit.go new file mode 100644 index 000000000..b13cb081f --- /dev/null +++ b/vendor/golang.org/x/tools/go/packages/visit.go @@ -0,0 +1,55 @@ +package packages + +import ( + "fmt" + "os" + "sort" +) + +// Visit visits all the packages in the import graph whose roots are +// pkgs, calling the optional pre function the first time each package +// is encountered (preorder), and the optional post function after a +// package's dependencies have been visited (postorder). +// The boolean result of pre(pkg) determines whether +// the imports of package pkg are visited. +func Visit(pkgs []*Package, pre func(*Package) bool, post func(*Package)) { + seen := make(map[*Package]bool) + var visit func(*Package) + visit = func(pkg *Package) { + if !seen[pkg] { + seen[pkg] = true + + if pre == nil || pre(pkg) { + paths := make([]string, 0, len(pkg.Imports)) + for path := range pkg.Imports { + paths = append(paths, path) + } + sort.Strings(paths) // Imports is a map, this makes visit stable + for _, path := range paths { + visit(pkg.Imports[path]) + } + } + + if post != nil { + post(pkg) + } + } + } + for _, pkg := range pkgs { + visit(pkg) + } +} + +// PrintErrors prints to os.Stderr the accumulated errors of all +// packages in the import graph rooted at pkgs, dependencies first. +// PrintErrors returns the number of errors printed. +func PrintErrors(pkgs []*Package) int { + var n int + Visit(pkgs, nil, func(pkg *Package) { + for _, err := range pkg.Errors { + fmt.Fprintln(os.Stderr, err) + n++ + } + }) + return n +} diff --git a/vendor/golang.org/x/tools/internal/fastwalk/fastwalk.go b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk.go new file mode 100644 index 000000000..7219c8e9f --- /dev/null +++ b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk.go @@ -0,0 +1,196 @@ +// Copyright 2016 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package fastwalk provides a faster version of filepath.Walk for file system +// scanning tools. +package fastwalk + +import ( + "errors" + "os" + "path/filepath" + "runtime" + "sync" +) + +// TraverseLink is used as a return value from WalkFuncs to indicate that the +// symlink named in the call may be traversed. +var TraverseLink = errors.New("fastwalk: traverse symlink, assuming target is a directory") + +// SkipFiles is a used as a return value from WalkFuncs to indicate that the +// callback should not be called for any other files in the current directory. +// Child directories will still be traversed. +var SkipFiles = errors.New("fastwalk: skip remaining files in directory") + +// Walk is a faster implementation of filepath.Walk. +// +// filepath.Walk's design necessarily calls os.Lstat on each file, +// even if the caller needs less info. +// Many tools need only the type of each file. 
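`Visit` sorts each package's import keys so that traversal order is deterministic even though `Imports` is a map, and `PrintErrors` is just a postorder `Visit` that prints `pkg.Errors`. Typical caller-side usage might look like the following (pattern and mode are illustrative):

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/tools/go/packages"
)

func main() {
	cfg := &packages.Config{
		Mode: packages.NeedName | packages.NeedImports | packages.NeedDeps,
	}
	pkgs, err := packages.Load(cfg, "golang.org/x/tools/go/packages") // illustrative pattern
	if err != nil {
		log.Fatal(err) // Load itself failed; distinct from per-package errors
	}
	// Print per-package errors, dependencies first, and stop if any.
	if n := packages.PrintErrors(pkgs); n > 0 {
		log.Fatalf("%d errors while loading", n)
	}
	// Postorder: every dependency is printed before its importers.
	packages.Visit(pkgs, nil, func(pkg *packages.Package) {
		fmt.Println(pkg.ID)
	})
}
```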
+// On some platforms, this information is provided directly by the readdir
+// system call, avoiding the need to stat each file individually.
+// fastwalk_unix.go contains a fork of the syscall routines.
+//
+// See golang.org/issue/16399
+//
+// Walk walks the file tree rooted at root, calling walkFn for
+// each file or directory in the tree, including root.
+//
+// If walkFn returns filepath.SkipDir, the directory is skipped.
+//
+// Unlike filepath.Walk:
+// * file stat calls must be done by the user.
+// The only provided metadata is the file type, which does not include
+// any permission bits.
+// * multiple goroutines stat the filesystem concurrently. The provided
+// walkFn must be safe for concurrent use.
+// * Walk can follow symlinks if walkFn returns the TraverseLink
+// sentinel error. It is the walkFn's responsibility to prevent
+// Walk from going into symlink cycles.
+func Walk(root string, walkFn func(path string, typ os.FileMode) error) error {
+ // TODO(bradfitz): make numWorkers configurable? We used a
+ // minimum of 4 to give the kernel more info about multiple
+ // things we want, in hopes its I/O scheduling can take
+ // advantage of that. Hopefully most are in cache. Maybe 4 is
+ // even too low of a minimum. Profile more.
+ numWorkers := 4
+ if n := runtime.NumCPU(); n > numWorkers {
+ numWorkers = n
+ }
+
+ // Make sure to wait for all workers to finish, otherwise
+ // walkFn could still be called after returning. This Wait call
+ // runs after close(e.donec) below.
+ var wg sync.WaitGroup
+ defer wg.Wait()
+
+ w := &walker{
+ fn: walkFn,
+ enqueuec: make(chan walkItem, numWorkers), // buffered for performance
+ workc: make(chan walkItem, numWorkers), // buffered for performance
+ donec: make(chan struct{}),
+
+ // buffered for correctness & not leaking goroutines:
+ resc: make(chan error, numWorkers),
+ }
+ defer close(w.donec)
+
+ for i := 0; i < numWorkers; i++ {
+ wg.Add(1)
+ go w.doWork(&wg)
+ }
+ todo := []walkItem{{dir: root}}
+ out := 0
+ for {
+ workc := w.workc
+ var workItem walkItem
+ if len(todo) == 0 {
+ workc = nil
+ } else {
+ workItem = todo[len(todo)-1]
+ }
+ select {
+ case workc <- workItem:
+ todo = todo[:len(todo)-1]
+ out++
+ case it := <-w.enqueuec:
+ todo = append(todo, it)
+ case err := <-w.resc:
+ out--
+ if err != nil {
+ return err
+ }
+ if out == 0 && len(todo) == 0 {
+ // It's safe to quit here, as long as the buffered
+ // enqueue channel isn't also readable, which might
+ // happen if the worker sends both another unit of
+ // work and its result before the other select was
+ // scheduled and both w.resc and w.enqueuec were
+ // readable.
+ select {
+ case it := <-w.enqueuec:
+ todo = append(todo, it)
+ default:
+ return nil
+ }
+ }
+ }
+ }
+}
+
+// doWork reads directories as instructed (via workc) and runs the
+// user's callback function.
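The loop above is a small hand-rolled scheduler: `workc` is set to nil whenever the `todo` stack is empty, which disables that `select` case, so a single `select` both feeds idle workers and drains their results; the final drain of `enqueuec` handles the race spelled out in the in-line comment. From a caller's perspective the contract is much simpler: return the sentinel errors to steer the walk. A hedged usage sketch (note this is an `internal` package, so the import is shown for illustration only):

```go
package main

import (
	"fmt"
	"os"
	"sync"

	"golang.org/x/tools/internal/fastwalk" // internal package; import shown for illustration only
)

func main() {
	var (
		mu    sync.Mutex // walkFn runs on multiple goroutines concurrently
		count int
	)
	err := fastwalk.Walk(".", func(path string, typ os.FileMode) error {
		if typ.IsRegular() {
			mu.Lock()
			count++
			mu.Unlock()
		}
		if typ == os.ModeSymlink {
			// Opt in to following this link; avoiding cycles is then
			// the callback's responsibility.
			return fastwalk.TraverseLink
		}
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("regular files seen:", count)
}
```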
+func (w *walker) doWork(wg *sync.WaitGroup) { + defer wg.Done() + for { + select { + case <-w.donec: + return + case it := <-w.workc: + select { + case <-w.donec: + return + case w.resc <- w.walk(it.dir, !it.callbackDone): + } + } + } +} + +type walker struct { + fn func(path string, typ os.FileMode) error + + donec chan struct{} // closed on fastWalk's return + workc chan walkItem // to workers + enqueuec chan walkItem // from workers + resc chan error // from workers +} + +type walkItem struct { + dir string + callbackDone bool // callback already called; don't do it again +} + +func (w *walker) enqueue(it walkItem) { + select { + case w.enqueuec <- it: + case <-w.donec: + } +} + +func (w *walker) onDirEnt(dirName, baseName string, typ os.FileMode) error { + joined := dirName + string(os.PathSeparator) + baseName + if typ == os.ModeDir { + w.enqueue(walkItem{dir: joined}) + return nil + } + + err := w.fn(joined, typ) + if typ == os.ModeSymlink { + if err == TraverseLink { + // Set callbackDone so we don't call it twice for both the + // symlink-as-symlink and the symlink-as-directory later: + w.enqueue(walkItem{dir: joined, callbackDone: true}) + return nil + } + if err == filepath.SkipDir { + // Permit SkipDir on symlinks too. + return nil + } + } + return err +} + +func (w *walker) walk(root string, runUserCallback bool) error { + if runUserCallback { + err := w.fn(root, os.ModeDir) + if err == filepath.SkipDir { + return nil + } + if err != nil { + return err + } + } + + return readDir(root, w.onDirEnt) +} diff --git a/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_fileno.go b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_fileno.go new file mode 100644 index 000000000..ccffec5ad --- /dev/null +++ b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_fileno.go @@ -0,0 +1,13 @@ +// Copyright 2016 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build freebsd openbsd netbsd + +package fastwalk + +import "syscall" + +func direntInode(dirent *syscall.Dirent) uint64 { + return uint64(dirent.Fileno) +} diff --git a/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_ino.go b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_ino.go new file mode 100644 index 000000000..ab7fbc0a9 --- /dev/null +++ b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_ino.go @@ -0,0 +1,14 @@ +// Copyright 2016 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build linux darwin +// +build !appengine + +package fastwalk + +import "syscall" + +func direntInode(dirent *syscall.Dirent) uint64 { + return uint64(dirent.Ino) +} diff --git a/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_namlen_bsd.go b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_namlen_bsd.go new file mode 100644 index 000000000..a3b26a7ba --- /dev/null +++ b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_namlen_bsd.go @@ -0,0 +1,13 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +// +build darwin freebsd openbsd netbsd + +package fastwalk + +import "syscall" + +func direntNamlen(dirent *syscall.Dirent) uint64 { + return uint64(dirent.Namlen) +} diff --git a/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_namlen_linux.go b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_namlen_linux.go new file mode 100644 index 000000000..e880d358b --- /dev/null +++ b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_namlen_linux.go @@ -0,0 +1,29 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build linux +// +build !appengine + +package fastwalk + +import ( + "bytes" + "syscall" + "unsafe" +) + +func direntNamlen(dirent *syscall.Dirent) uint64 { + const fixedHdr = uint16(unsafe.Offsetof(syscall.Dirent{}.Name)) + nameBuf := (*[unsafe.Sizeof(dirent.Name)]byte)(unsafe.Pointer(&dirent.Name[0])) + const nameBufLen = uint16(len(nameBuf)) + limit := dirent.Reclen - fixedHdr + if limit > nameBufLen { + limit = nameBufLen + } + nameLen := bytes.IndexByte(nameBuf[:limit], 0) + if nameLen < 0 { + panic("failed to find terminating 0 byte in dirent") + } + return uint64(nameLen) +} diff --git a/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_portable.go b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_portable.go new file mode 100644 index 000000000..a906b8759 --- /dev/null +++ b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_portable.go @@ -0,0 +1,37 @@ +// Copyright 2016 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build appengine !linux,!darwin,!freebsd,!openbsd,!netbsd + +package fastwalk + +import ( + "io/ioutil" + "os" +) + +// readDir calls fn for each directory entry in dirName. +// It does not descend into directories or follow symlinks. +// If fn returns a non-nil error, readDir returns with that error +// immediately. +func readDir(dirName string, fn func(dirName, entName string, typ os.FileMode) error) error { + fis, err := ioutil.ReadDir(dirName) + if err != nil { + return err + } + skipFiles := false + for _, fi := range fis { + if fi.Mode().IsRegular() && skipFiles { + continue + } + if err := fn(dirName, fi.Name(), fi.Mode()&os.ModeType); err != nil { + if err == SkipFiles { + skipFiles = true + continue + } + return err + } + } + return nil +} diff --git a/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_unix.go b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_unix.go new file mode 100644 index 000000000..3369b1a0b --- /dev/null +++ b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_unix.go @@ -0,0 +1,127 @@ +// Copyright 2016 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build linux darwin freebsd openbsd netbsd +// +build !appengine + +package fastwalk + +import ( + "fmt" + "os" + "syscall" + "unsafe" +) + +const blockSize = 8 << 10 + +// unknownFileMode is a sentinel (and bogus) os.FileMode +// value used to represent a syscall.DT_UNKNOWN Dirent.Type. 
+const unknownFileMode os.FileMode = os.ModeNamedPipe | os.ModeSocket | os.ModeDevice + +func readDir(dirName string, fn func(dirName, entName string, typ os.FileMode) error) error { + fd, err := syscall.Open(dirName, 0, 0) + if err != nil { + return &os.PathError{Op: "open", Path: dirName, Err: err} + } + defer syscall.Close(fd) + + // The buffer must be at least a block long. + buf := make([]byte, blockSize) // stack-allocated; doesn't escape + bufp := 0 // starting read position in buf + nbuf := 0 // end valid data in buf + skipFiles := false + for { + if bufp >= nbuf { + bufp = 0 + nbuf, err = syscall.ReadDirent(fd, buf) + if err != nil { + return os.NewSyscallError("readdirent", err) + } + if nbuf <= 0 { + return nil + } + } + consumed, name, typ := parseDirEnt(buf[bufp:nbuf]) + bufp += consumed + if name == "" || name == "." || name == ".." { + continue + } + // Fallback for filesystems (like old XFS) that don't + // support Dirent.Type and have DT_UNKNOWN (0) there + // instead. + if typ == unknownFileMode { + fi, err := os.Lstat(dirName + "/" + name) + if err != nil { + // It got deleted in the meantime. + if os.IsNotExist(err) { + continue + } + return err + } + typ = fi.Mode() & os.ModeType + } + if skipFiles && typ.IsRegular() { + continue + } + if err := fn(dirName, name, typ); err != nil { + if err == SkipFiles { + skipFiles = true + continue + } + return err + } + } +} + +func parseDirEnt(buf []byte) (consumed int, name string, typ os.FileMode) { + // golang.org/issue/15653 + dirent := (*syscall.Dirent)(unsafe.Pointer(&buf[0])) + if v := unsafe.Offsetof(dirent.Reclen) + unsafe.Sizeof(dirent.Reclen); uintptr(len(buf)) < v { + panic(fmt.Sprintf("buf size of %d smaller than dirent header size %d", len(buf), v)) + } + if len(buf) < int(dirent.Reclen) { + panic(fmt.Sprintf("buf size %d < record length %d", len(buf), dirent.Reclen)) + } + consumed = int(dirent.Reclen) + if direntInode(dirent) == 0 { // File absent in directory. + return + } + switch dirent.Type { + case syscall.DT_REG: + typ = 0 + case syscall.DT_DIR: + typ = os.ModeDir + case syscall.DT_LNK: + typ = os.ModeSymlink + case syscall.DT_BLK: + typ = os.ModeDevice + case syscall.DT_FIFO: + typ = os.ModeNamedPipe + case syscall.DT_SOCK: + typ = os.ModeSocket + case syscall.DT_UNKNOWN: + typ = unknownFileMode + default: + // Skip weird things. + // It's probably a DT_WHT (http://lwn.net/Articles/325369/) + // or something. Revisit if/when this package is moved outside + // of goimports. goimports only cares about regular files, + // symlinks, and directories. + return + } + + nameBuf := (*[unsafe.Sizeof(dirent.Name)]byte)(unsafe.Pointer(&dirent.Name[0])) + nameLen := direntNamlen(dirent) + + // Special cases for common things: + if nameLen == 1 && nameBuf[0] == '.' { + name = "." + } else if nameLen == 2 && nameBuf[0] == '.' && nameBuf[1] == '.' { + name = ".." + } else { + name = string(nameBuf[:nameLen]) + } + return +} diff --git a/vendor/golang.org/x/tools/internal/gopathwalk/walk.go b/vendor/golang.org/x/tools/internal/gopathwalk/walk.go new file mode 100644 index 000000000..04bb96a36 --- /dev/null +++ b/vendor/golang.org/x/tools/internal/gopathwalk/walk.go @@ -0,0 +1,250 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package gopathwalk is like filepath.Walk but specialized for finding Go +// packages, particularly in $GOPATH and $GOROOT. 
+package gopathwalk
+
+import (
+ "bufio"
+ "bytes"
+ "fmt"
+ "go/build"
+ "io/ioutil"
+ "log"
+ "os"
+ "path/filepath"
+ "strings"
+
+ "golang.org/x/tools/internal/fastwalk"
+)
+
+// Options controls the behavior of a Walk call.
+type Options struct {
+ Debug bool // Enable debug logging
+ ModulesEnabled bool // Search module caches. Also disables legacy goimports ignore rules.
+}
+
+// RootType indicates the type of a Root.
+type RootType int
+
+const (
+ RootUnknown RootType = iota
+ RootGOROOT
+ RootGOPATH
+ RootCurrentModule
+ RootModuleCache
+ RootOther
+)
+
+// A Root is a starting point for a Walk.
+type Root struct {
+ Path string
+ Type RootType
+}
+
+// SrcDirsRoots returns the roots from build.Default.SrcDirs(). Not modules-compatible.
+func SrcDirsRoots(ctx *build.Context) []Root {
+ var roots []Root
+ roots = append(roots, Root{filepath.Join(ctx.GOROOT, "src"), RootGOROOT})
+ for _, p := range filepath.SplitList(ctx.GOPATH) {
+ roots = append(roots, Root{filepath.Join(p, "src"), RootGOPATH})
+ }
+ return roots
+}
+
+// Walk walks Go source directories ($GOROOT, $GOPATH, etc) to find packages.
+// For each package found, add will be called with the absolute
+// paths of the containing source directory and the package directory.
+// add is called concurrently, so it must be safe for concurrent use.
+func Walk(roots []Root, add func(root Root, dir string), opts Options) {
+ for _, root := range roots {
+ walkDir(root, add, opts)
+ }
+}
+
+func walkDir(root Root, add func(Root, string), opts Options) {
+ if _, err := os.Stat(root.Path); os.IsNotExist(err) {
+ if opts.Debug {
+ log.Printf("skipping nonexistent directory: %v", root.Path)
+ }
+ return
+ }
+ if opts.Debug {
+ log.Printf("scanning %s", root.Path)
+ }
+ w := &walker{
+ root: root,
+ add: add,
+ opts: opts,
+ }
+ w.init()
+ if err := fastwalk.Walk(root.Path, w.walk); err != nil {
+ log.Printf("gopathwalk: scanning directory %v: %v", root.Path, err)
+ }
+
+ if opts.Debug {
+ log.Printf("scanned %s", root.Path)
+ }
+}
+
+// walker is the callback for fastwalk.Walk.
+type walker struct {
+ root Root // The source directory to scan.
+ add func(Root, string) // The callback that will be invoked for every possible Go package dir.
+ opts Options // Options passed to Walk by the user.
+
+ ignoredDirs []os.FileInfo // The ignored directories, loaded from .goimportsignore files.
+}
+
+// init initializes the walker based on its Options.
+func (w *walker) init() {
+ var ignoredPaths []string
+ if w.root.Type == RootModuleCache {
+ ignoredPaths = []string{"cache"}
+ }
+ if !w.opts.ModulesEnabled && w.root.Type == RootGOPATH {
+ ignoredPaths = w.getIgnoredDirs(w.root.Path)
+ ignoredPaths = append(ignoredPaths, "v", "mod")
+ }
+
+ for _, p := range ignoredPaths {
+ full := filepath.Join(w.root.Path, p)
+ if fi, err := os.Stat(full); err == nil {
+ w.ignoredDirs = append(w.ignoredDirs, fi)
+ if w.opts.Debug {
+ log.Printf("Directory added to ignore list: %s", full)
+ }
+ } else if w.opts.Debug {
+ log.Printf("Error statting ignored directory: %v", err)
+ }
+ }
+}
+
+// getIgnoredDirs reads an optional config file at <path>/.goimportsignore
+// of relative directories to ignore when scanning for go files.
+// The provided path is one of the $GOPATH entries with "src" appended.
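Putting `Root`, `Options`, and `Walk` together, a caller such as goimports effectively does something like the sketch below. The `add` callback here just records candidate directories; per the doc comment it runs concurrently, so it guards its map with a mutex (again, the `internal` import is shown for illustration only):

```go
package main

import (
	"fmt"
	"go/build"
	"sync"

	"golang.org/x/tools/internal/gopathwalk" // internal package; import shown for illustration only
)

func main() {
	var mu sync.Mutex
	dirs := make(map[string]gopathwalk.RootType)

	// add records every candidate package directory; Walk calls it
	// from multiple goroutines, hence the mutex.
	add := func(root gopathwalk.Root, dir string) {
		mu.Lock()
		dirs[dir] = root.Type
		mu.Unlock()
	}

	// Scan $GOROOT/src and the src directory of every $GOPATH entry.
	roots := gopathwalk.SrcDirsRoots(&build.Default)
	gopathwalk.Walk(roots, add, gopathwalk.Options{Debug: false})

	fmt.Println("candidate package directories:", len(dirs))
}
```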
+func (w *walker) getIgnoredDirs(path string) []string { + file := filepath.Join(path, ".goimportsignore") + slurp, err := ioutil.ReadFile(file) + if w.opts.Debug { + if err != nil { + log.Print(err) + } else { + log.Printf("Read %s", file) + } + } + if err != nil { + return nil + } + + var ignoredDirs []string + bs := bufio.NewScanner(bytes.NewReader(slurp)) + for bs.Scan() { + line := strings.TrimSpace(bs.Text()) + if line == "" || strings.HasPrefix(line, "#") { + continue + } + ignoredDirs = append(ignoredDirs, line) + } + return ignoredDirs +} + +func (w *walker) shouldSkipDir(fi os.FileInfo) bool { + for _, ignoredDir := range w.ignoredDirs { + if os.SameFile(fi, ignoredDir) { + return true + } + } + return false +} + +func (w *walker) walk(path string, typ os.FileMode) error { + dir := filepath.Dir(path) + if typ.IsRegular() { + if dir == w.root.Path && (w.root.Type == RootGOROOT || w.root.Type == RootGOPATH) { + // Doesn't make sense to have regular files + // directly in your $GOPATH/src or $GOROOT/src. + return fastwalk.SkipFiles + } + if !strings.HasSuffix(path, ".go") { + return nil + } + + w.add(w.root, dir) + return fastwalk.SkipFiles + } + if typ == os.ModeDir { + base := filepath.Base(path) + if base == "" || base[0] == '.' || base[0] == '_' || + base == "testdata" || + (w.root.Type == RootGOROOT && w.opts.ModulesEnabled && base == "vendor") || + (!w.opts.ModulesEnabled && base == "node_modules") { + return filepath.SkipDir + } + fi, err := os.Lstat(path) + if err == nil && w.shouldSkipDir(fi) { + return filepath.SkipDir + } + return nil + } + if typ == os.ModeSymlink { + base := filepath.Base(path) + if strings.HasPrefix(base, ".#") { + // Emacs noise. + return nil + } + fi, err := os.Lstat(path) + if err != nil { + // Just ignore it. + return nil + } + if w.shouldTraverse(dir, fi) { + return fastwalk.TraverseLink + } + } + return nil +} + +// shouldTraverse reports whether the symlink fi, found in dir, +// should be followed. It makes sure symlinks were never visited +// before to avoid symlink loops. +func (w *walker) shouldTraverse(dir string, fi os.FileInfo) bool { + path := filepath.Join(dir, fi.Name()) + target, err := filepath.EvalSymlinks(path) + if err != nil { + return false + } + ts, err := os.Stat(target) + if err != nil { + fmt.Fprintln(os.Stderr, err) + return false + } + if !ts.IsDir() { + return false + } + if w.shouldSkipDir(ts) { + return false + } + // Check for symlink loops by statting each directory component + // and seeing if any are the same file as ts. + for { + parent := filepath.Dir(path) + if parent == path { + // Made it to the root without seeing a cycle. + // Use this symlink. + return true + } + parentInfo, err := os.Stat(parent) + if err != nil { + return false + } + if os.SameFile(ts, parentInfo) { + // Cycle. Don't traverse. + return false + } + path = parent + } + +} diff --git a/vendor/golang.org/x/tools/internal/semver/semver.go b/vendor/golang.org/x/tools/internal/semver/semver.go new file mode 100644 index 000000000..4af7118e5 --- /dev/null +++ b/vendor/golang.org/x/tools/internal/semver/semver.go @@ -0,0 +1,388 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package semver implements comparison of semantic version strings. +// In this package, semantic version strings must begin with a leading "v", +// as in "v1.0.0". 
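`shouldTraverse` above avoids symlink cycles without keeping a visited set: it resolves the link target, then stats each ancestor of the link path and asks `os.SameFile` whether the target is one of them; if it is, following the link would re-enter a directory already on the current path. A standalone sketch of that ancestor check (the helper name and paths are illustrative):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// targetIsAncestor reports whether target (a resolved symlink destination)
// is the same directory as any ancestor of linkPath, i.e. whether following
// the link would loop. Illustrative helper, not part of gopathwalk's API.
func targetIsAncestor(linkPath string, target os.FileInfo) (bool, error) {
	path := linkPath
	for {
		parent := filepath.Dir(path)
		if parent == path {
			return false, nil // reached the filesystem root: no cycle
		}
		pi, err := os.Stat(parent)
		if err != nil {
			return false, err
		}
		if os.SameFile(target, pi) {
			return true, nil // the target is already on the path: cycle
		}
		path = parent
	}
}

func main() {
	// Illustrative: check a link at /tmp/a/b/link whose target resolved to /tmp/a.
	ti, err := os.Stat("/tmp/a")
	if err != nil {
		fmt.Println(err)
		return
	}
	loop, err := targetIsAncestor("/tmp/a/b/link", ti)
	fmt.Println("would loop:", loop, "err:", err)
}
```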
+//
+// The general form of a semantic version string accepted by this package is
+//
+// vMAJOR[.MINOR[.PATCH[-PRERELEASE][+BUILD]]]
+//
+// where square brackets indicate optional parts of the syntax;
+// MAJOR, MINOR, and PATCH are decimal integers without extra leading zeros;
+// PRERELEASE and BUILD are each a series of non-empty dot-separated identifiers
+// using only alphanumeric characters and hyphens; and
+// all-numeric PRERELEASE identifiers must not have leading zeros.
+//
+// This package follows Semantic Versioning 2.0.0 (see semver.org)
+// with two exceptions. First, it requires the "v" prefix. Second, it recognizes
+// vMAJOR and vMAJOR.MINOR (with no prerelease or build suffixes)
+// as shorthands for vMAJOR.0.0 and vMAJOR.MINOR.0.
+package semver
+
+// parsed is the parsed form of a semantic version string.
+type parsed struct {
+ major string
+ minor string
+ patch string
+ short string
+ prerelease string
+ build string
+ err string
+}
+
+// IsValid reports whether v is a valid semantic version string.
+func IsValid(v string) bool {
+ _, ok := parse(v)
+ return ok
+}
+
+// Canonical returns the canonical formatting of the semantic version v.
+// It fills in any missing .MINOR or .PATCH and discards build metadata.
+// Two semantic versions compare equal only if their canonical formattings
+// are identical strings.
+// The canonical invalid semantic version is the empty string.
+func Canonical(v string) string {
+ p, ok := parse(v)
+ if !ok {
+ return ""
+ }
+ if p.build != "" {
+ return v[:len(v)-len(p.build)]
+ }
+ if p.short != "" {
+ return v + p.short
+ }
+ return v
+}
+
+// Major returns the major version prefix of the semantic version v.
+// For example, Major("v2.1.0") == "v2".
+// If v is an invalid semantic version string, Major returns the empty string.
+func Major(v string) string {
+ pv, ok := parse(v)
+ if !ok {
+ return ""
+ }
+ return v[:1+len(pv.major)]
+}
+
+// MajorMinor returns the major.minor version prefix of the semantic version v.
+// For example, MajorMinor("v2.1.0") == "v2.1".
+// If v is an invalid semantic version string, MajorMinor returns the empty string.
+func MajorMinor(v string) string {
+ pv, ok := parse(v)
+ if !ok {
+ return ""
+ }
+ i := 1 + len(pv.major)
+ if j := i + 1 + len(pv.minor); j <= len(v) && v[i] == '.' && v[i+1:j] == pv.minor {
+ return v[:j]
+ }
+ return v[:i] + "." + pv.minor
+}
+
+// Prerelease returns the prerelease suffix of the semantic version v.
+// For example, Prerelease("v2.1.0-pre+meta") == "-pre".
+// If v is an invalid semantic version string, Prerelease returns the empty string.
+func Prerelease(v string) string {
+ pv, ok := parse(v)
+ if !ok {
+ return ""
+ }
+ return pv.prerelease
+}
+
+// Build returns the build suffix of the semantic version v.
+// For example, Build("v2.1.0+meta") == "+meta".
+// If v is an invalid semantic version string, Build returns the empty string.
+func Build(v string) string {
+ pv, ok := parse(v)
+ if !ok {
+ return ""
+ }
+ return pv.build
+}
+
+// Compare returns an integer comparing two versions according to
+// semantic version precedence.
+// The result will be 0 if v == w, -1 if v < w, or +1 if v > w.
+//
+// An invalid semantic version string is considered less than a valid one.
+// All invalid semantic version strings compare equal to each other.
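The accessors above are all thin wrappers over `parse`. A small sanity-check sketch of their behavior on a few inputs, taken from the doc comments plus the stated invalid-sorts-below-valid rule (the `internal` import is shown for illustration only):

```go
package main

import (
	"fmt"

	"golang.org/x/tools/internal/semver" // internal package; import shown for illustration only
)

func main() {
	fmt.Println(semver.IsValid("v2.1.0-pre+meta"))        // true
	fmt.Println(semver.IsValid("2.1.0"))                  // false: the leading "v" is required
	fmt.Println(semver.Canonical("v2.1"))                 // "v2.1.0": missing .PATCH is filled in
	fmt.Println(semver.Major("v2.1.0"))                   // "v2"
	fmt.Println(semver.MajorMinor("v2.1.0"))              // "v2.1"
	fmt.Println(semver.Prerelease("v2.1.0-pre+meta"))     // "-pre"
	fmt.Println(semver.Compare("v1.0.0-alpha", "v1.0.0")) // -1: a prerelease sorts below the release
	fmt.Println(semver.Compare("bogus", "v1.0.0"))        // -1: invalid versions sort below valid ones
}
```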
+func Compare(v, w string) int { + pv, ok1 := parse(v) + pw, ok2 := parse(w) + if !ok1 && !ok2 { + return 0 + } + if !ok1 { + return -1 + } + if !ok2 { + return +1 + } + if c := compareInt(pv.major, pw.major); c != 0 { + return c + } + if c := compareInt(pv.minor, pw.minor); c != 0 { + return c + } + if c := compareInt(pv.patch, pw.patch); c != 0 { + return c + } + return comparePrerelease(pv.prerelease, pw.prerelease) +} + +// Max canonicalizes its arguments and then returns the version string +// that compares greater. +func Max(v, w string) string { + v = Canonical(v) + w = Canonical(w) + if Compare(v, w) > 0 { + return v + } + return w +} + +func parse(v string) (p parsed, ok bool) { + if v == "" || v[0] != 'v' { + p.err = "missing v prefix" + return + } + p.major, v, ok = parseInt(v[1:]) + if !ok { + p.err = "bad major version" + return + } + if v == "" { + p.minor = "0" + p.patch = "0" + p.short = ".0.0" + return + } + if v[0] != '.' { + p.err = "bad minor prefix" + ok = false + return + } + p.minor, v, ok = parseInt(v[1:]) + if !ok { + p.err = "bad minor version" + return + } + if v == "" { + p.patch = "0" + p.short = ".0" + return + } + if v[0] != '.' { + p.err = "bad patch prefix" + ok = false + return + } + p.patch, v, ok = parseInt(v[1:]) + if !ok { + p.err = "bad patch version" + return + } + if len(v) > 0 && v[0] == '-' { + p.prerelease, v, ok = parsePrerelease(v) + if !ok { + p.err = "bad prerelease" + return + } + } + if len(v) > 0 && v[0] == '+' { + p.build, v, ok = parseBuild(v) + if !ok { + p.err = "bad build" + return + } + } + if v != "" { + p.err = "junk on end" + ok = false + return + } + ok = true + return +} + +func parseInt(v string) (t, rest string, ok bool) { + if v == "" { + return + } + if v[0] < '0' || '9' < v[0] { + return + } + i := 1 + for i < len(v) && '0' <= v[i] && v[i] <= '9' { + i++ + } + if v[0] == '0' && i != 1 { + return + } + return v[:i], v[i:], true +} + +func parsePrerelease(v string) (t, rest string, ok bool) { + // "A pre-release version MAY be denoted by appending a hyphen and + // a series of dot separated identifiers immediately following the patch version. + // Identifiers MUST comprise only ASCII alphanumerics and hyphen [0-9A-Za-z-]. + // Identifiers MUST NOT be empty. Numeric identifiers MUST NOT include leading zeroes." + if v == "" || v[0] != '-' { + return + } + i := 1 + start := 1 + for i < len(v) && v[i] != '+' { + if !isIdentChar(v[i]) && v[i] != '.' { + return + } + if v[i] == '.' { + if start == i || isBadNum(v[start:i]) { + return + } + start = i + 1 + } + i++ + } + if start == i || isBadNum(v[start:i]) { + return + } + return v[:i], v[i:], true +} + +func parseBuild(v string) (t, rest string, ok bool) { + if v == "" || v[0] != '+' { + return + } + i := 1 + start := 1 + for i < len(v) { + if !isIdentChar(v[i]) { + return + } + if v[i] == '.' 
{ + if start == i { + return + } + start = i + 1 + } + i++ + } + if start == i { + return + } + return v[:i], v[i:], true +} + +func isIdentChar(c byte) bool { + return 'A' <= c && c <= 'Z' || 'a' <= c && c <= 'z' || '0' <= c && c <= '9' || c == '-' +} + +func isBadNum(v string) bool { + i := 0 + for i < len(v) && '0' <= v[i] && v[i] <= '9' { + i++ + } + return i == len(v) && i > 1 && v[0] == '0' +} + +func isNum(v string) bool { + i := 0 + for i < len(v) && '0' <= v[i] && v[i] <= '9' { + i++ + } + return i == len(v) +} + +func compareInt(x, y string) int { + if x == y { + return 0 + } + if len(x) < len(y) { + return -1 + } + if len(x) > len(y) { + return +1 + } + if x < y { + return -1 + } else { + return +1 + } +} + +func comparePrerelease(x, y string) int { + // "When major, minor, and patch are equal, a pre-release version has + // lower precedence than a normal version. + // Example: 1.0.0-alpha < 1.0.0. + // Precedence for two pre-release versions with the same major, minor, + // and patch version MUST be determined by comparing each dot separated + // identifier from left to right until a difference is found as follows: + // identifiers consisting of only digits are compared numerically and + // identifiers with letters or hyphens are compared lexically in ASCII + // sort order. Numeric identifiers always have lower precedence than + // non-numeric identifiers. A larger set of pre-release fields has a + // higher precedence than a smaller set, if all of the preceding + // identifiers are equal. + // Example: 1.0.0-alpha < 1.0.0-alpha.1 < 1.0.0-alpha.beta < + // 1.0.0-beta < 1.0.0-beta.2 < 1.0.0-beta.11 < 1.0.0-rc.1 < 1.0.0." + if x == y { + return 0 + } + if x == "" { + return +1 + } + if y == "" { + return -1 + } + for x != "" && y != "" { + x = x[1:] // skip - or . + y = y[1:] // skip - or . + var dx, dy string + dx, x = nextIdent(x) + dy, y = nextIdent(y) + if dx != dy { + ix := isNum(dx) + iy := isNum(dy) + if ix != iy { + if ix { + return -1 + } else { + return +1 + } + } + if ix { + if len(dx) < len(dy) { + return -1 + } + if len(dx) > len(dy) { + return +1 + } + } + if dx < dy { + return -1 + } else { + return +1 + } + } + } + if x == "" { + return -1 + } else { + return +1 + } +} + +func nextIdent(x string) (dx, rest string) { + i := 0 + for i < len(x) && x[i] != '.' 
{ + i++ + } + return x[:i], x[i:] +} diff --git a/vendor/modules.txt b/vendor/modules.txt index 18b5366ae..239eba0c3 100644 --- a/vendor/modules.txt +++ b/vendor/modules.txt @@ -1,45 +1,53 @@ # cloud.google.com/go v0.45.1 -cloud.google.com/go/storage +cloud.google.com/go/compute/metadata cloud.google.com/go/iam cloud.google.com/go/internal cloud.google.com/go/internal/optional cloud.google.com/go/internal/trace cloud.google.com/go/internal/version -cloud.google.com/go/compute/metadata -# github.com/Azure/azure-sdk-for-go v21.3.0+incompatible +cloud.google.com/go/storage +# github.com/Azure/azure-sdk-for-go v36.2.0+incompatible github.com/Azure/azure-sdk-for-go/profiles/2017-03-09/resources/mgmt/resources github.com/Azure/azure-sdk-for-go/profiles/2017-03-09/storage/mgmt/storage -github.com/Azure/azure-sdk-for-go/storage +github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-02-01/resources github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2016-01-01/storage +github.com/Azure/azure-sdk-for-go/storage github.com/Azure/azure-sdk-for-go/version -# github.com/Azure/go-autorest v10.15.4+incompatible +# github.com/Azure/go-autorest/autorest v0.9.2 github.com/Azure/go-autorest/autorest -github.com/Azure/go-autorest/autorest/adal github.com/Azure/go-autorest/autorest/azure -github.com/Azure/go-autorest/logger -github.com/Azure/go-autorest/version -github.com/Azure/go-autorest/autorest/date +# github.com/Azure/go-autorest/autorest/adal v0.8.1-0.20191028180845-3492b2aff503 +github.com/Azure/go-autorest/autorest/adal +# github.com/Azure/go-autorest/autorest/azure/cli v0.2.0 github.com/Azure/go-autorest/autorest/azure/cli +# github.com/Azure/go-autorest/autorest/date v0.2.0 +github.com/Azure/go-autorest/autorest/date +# github.com/Azure/go-autorest/autorest/to v0.3.0 github.com/Azure/go-autorest/autorest/to +# github.com/Azure/go-autorest/autorest/validation v0.2.0 github.com/Azure/go-autorest/autorest/validation +# github.com/Azure/go-autorest/logger v0.1.0 +github.com/Azure/go-autorest/logger +# github.com/Azure/go-autorest/tracing v0.5.0 +github.com/Azure/go-autorest/tracing # github.com/Azure/go-ntlmssp v0.0.0-20180810175552-4a21cbd618b4 github.com/Azure/go-ntlmssp # github.com/ChrisTrenkamp/goxpath v0.0.0-20170922090931-c385f95c6022 github.com/ChrisTrenkamp/goxpath -github.com/ChrisTrenkamp/goxpath/tree -github.com/ChrisTrenkamp/goxpath/tree/xmltree github.com/ChrisTrenkamp/goxpath/internal/execxp -github.com/ChrisTrenkamp/goxpath/parser -github.com/ChrisTrenkamp/goxpath/tree/xmltree/xmlbuilder -github.com/ChrisTrenkamp/goxpath/tree/xmltree/xmlele github.com/ChrisTrenkamp/goxpath/internal/execxp/findutil github.com/ChrisTrenkamp/goxpath/internal/execxp/intfns github.com/ChrisTrenkamp/goxpath/internal/xsort github.com/ChrisTrenkamp/goxpath/lexer +github.com/ChrisTrenkamp/goxpath/parser github.com/ChrisTrenkamp/goxpath/parser/pathexpr -github.com/ChrisTrenkamp/goxpath/xconst +github.com/ChrisTrenkamp/goxpath/tree +github.com/ChrisTrenkamp/goxpath/tree/xmltree +github.com/ChrisTrenkamp/goxpath/tree/xmltree/xmlbuilder +github.com/ChrisTrenkamp/goxpath/tree/xmltree/xmlele github.com/ChrisTrenkamp/goxpath/tree/xmltree/xmlnode +github.com/ChrisTrenkamp/goxpath/xconst # github.com/Unknwon/com v0.0.0-20151008135407-28b053d5a292 github.com/Unknwon/com # github.com/agext/levenshtein v1.2.2 @@ -49,17 +57,17 @@ github.com/agl/ed25519 github.com/agl/ed25519/edwards25519 # github.com/aliyun/alibaba-cloud-sdk-go 
v0.0.0-20190329064014-6e358769c32a github.com/aliyun/alibaba-cloud-sdk-go/sdk -github.com/aliyun/alibaba-cloud-sdk-go/sdk/auth/credentials -github.com/aliyun/alibaba-cloud-sdk-go/sdk/requests -github.com/aliyun/alibaba-cloud-sdk-go/services/location -github.com/aliyun/alibaba-cloud-sdk-go/services/sts github.com/aliyun/alibaba-cloud-sdk-go/sdk/auth +github.com/aliyun/alibaba-cloud-sdk-go/sdk/auth/credentials github.com/aliyun/alibaba-cloud-sdk-go/sdk/auth/credentials/provider +github.com/aliyun/alibaba-cloud-sdk-go/sdk/auth/signers github.com/aliyun/alibaba-cloud-sdk-go/sdk/endpoints github.com/aliyun/alibaba-cloud-sdk-go/sdk/errors +github.com/aliyun/alibaba-cloud-sdk-go/sdk/requests github.com/aliyun/alibaba-cloud-sdk-go/sdk/responses github.com/aliyun/alibaba-cloud-sdk-go/sdk/utils -github.com/aliyun/alibaba-cloud-sdk-go/sdk/auth/signers +github.com/aliyun/alibaba-cloud-sdk-go/services/location +github.com/aliyun/alibaba-cloud-sdk-go/services/sts # github.com/aliyun/aliyun-oss-go-sdk v0.0.0-20190103054945-8205d1f41e70 github.com/aliyun/aliyun-oss-go-sdk/oss # github.com/aliyun/aliyun-tablestore-go-sdk v4.1.2+incompatible @@ -76,52 +84,56 @@ github.com/apparentlymart/go-cidr/cidr github.com/apparentlymart/go-dump/dump # github.com/apparentlymart/go-textseg v1.0.0 github.com/apparentlymart/go-textseg/textseg +# github.com/apparentlymart/go-versions v0.0.2-0.20180815153302-64b99f7cb171 +github.com/apparentlymart/go-versions/versions +github.com/apparentlymart/go-versions/versions/constraints # github.com/armon/circbuf v0.0.0-20190214190532-5111143e8da2 github.com/armon/circbuf # github.com/armon/go-radix v1.0.0 github.com/armon/go-radix -# github.com/aws/aws-sdk-go v1.22.0 +# github.com/aws/aws-sdk-go v1.25.3 github.com/aws/aws-sdk-go/aws +github.com/aws/aws-sdk-go/aws/arn github.com/aws/aws-sdk-go/aws/awserr -github.com/aws/aws-sdk-go/service/dynamodb -github.com/aws/aws-sdk-go/service/s3 -github.com/aws/aws-sdk-go/aws/credentials -github.com/aws/aws-sdk-go/aws/endpoints -github.com/aws/aws-sdk-go/internal/sdkio github.com/aws/aws-sdk-go/aws/awsutil github.com/aws/aws-sdk-go/aws/client github.com/aws/aws-sdk-go/aws/client/metadata +github.com/aws/aws-sdk-go/aws/corehandlers +github.com/aws/aws-sdk-go/aws/credentials +github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds +github.com/aws/aws-sdk-go/aws/credentials/endpointcreds +github.com/aws/aws-sdk-go/aws/credentials/processcreds +github.com/aws/aws-sdk-go/aws/credentials/stscreds github.com/aws/aws-sdk-go/aws/crr +github.com/aws/aws-sdk-go/aws/csm +github.com/aws/aws-sdk-go/aws/defaults +github.com/aws/aws-sdk-go/aws/ec2metadata +github.com/aws/aws-sdk-go/aws/endpoints github.com/aws/aws-sdk-go/aws/request +github.com/aws/aws-sdk-go/aws/session github.com/aws/aws-sdk-go/aws/signer/v4 -github.com/aws/aws-sdk-go/private/protocol -github.com/aws/aws-sdk-go/private/protocol/jsonrpc +github.com/aws/aws-sdk-go/internal/ini github.com/aws/aws-sdk-go/internal/s3err +github.com/aws/aws-sdk-go/internal/sdkio +github.com/aws/aws-sdk-go/internal/sdkmath +github.com/aws/aws-sdk-go/internal/sdkrand +github.com/aws/aws-sdk-go/internal/sdkuri +github.com/aws/aws-sdk-go/internal/shareddefaults +github.com/aws/aws-sdk-go/private/protocol github.com/aws/aws-sdk-go/private/protocol/eventstream github.com/aws/aws-sdk-go/private/protocol/eventstream/eventstreamapi +github.com/aws/aws-sdk-go/private/protocol/json/jsonutil +github.com/aws/aws-sdk-go/private/protocol/jsonrpc +github.com/aws/aws-sdk-go/private/protocol/query 
+github.com/aws/aws-sdk-go/private/protocol/query/queryutil github.com/aws/aws-sdk-go/private/protocol/rest github.com/aws/aws-sdk-go/private/protocol/restxml github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil -github.com/aws/aws-sdk-go/aws/arn -github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds -github.com/aws/aws-sdk-go/aws/credentials/stscreds -github.com/aws/aws-sdk-go/aws/defaults -github.com/aws/aws-sdk-go/aws/ec2metadata -github.com/aws/aws-sdk-go/aws/session +github.com/aws/aws-sdk-go/service/dynamodb github.com/aws/aws-sdk-go/service/iam +github.com/aws/aws-sdk-go/service/s3 github.com/aws/aws-sdk-go/service/sts -github.com/aws/aws-sdk-go/internal/ini -github.com/aws/aws-sdk-go/internal/shareddefaults -github.com/aws/aws-sdk-go/internal/sdkrand -github.com/aws/aws-sdk-go/private/protocol/json/jsonutil -github.com/aws/aws-sdk-go/private/protocol/query -github.com/aws/aws-sdk-go/internal/sdkuri github.com/aws/aws-sdk-go/service/sts/stsiface -github.com/aws/aws-sdk-go/aws/corehandlers -github.com/aws/aws-sdk-go/aws/credentials/endpointcreds -github.com/aws/aws-sdk-go/aws/credentials/processcreds -github.com/aws/aws-sdk-go/aws/csm -github.com/aws/aws-sdk-go/private/protocol/query/queryutil # github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d github.com/bgentry/go-netrc/netrc # github.com/bgentry/speakeasy v0.1.0 @@ -133,26 +145,26 @@ github.com/bmatcuk/doublestar # github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e github.com/chzyer/readline # github.com/coreos/etcd v3.3.10+incompatible +github.com/coreos/etcd/auth/authpb github.com/coreos/etcd/client github.com/coreos/etcd/clientv3 github.com/coreos/etcd/clientv3/concurrency -github.com/coreos/etcd/pkg/transport -github.com/coreos/etcd/pkg/pathutil -github.com/coreos/etcd/pkg/srv -github.com/coreos/etcd/pkg/types -github.com/coreos/etcd/version -github.com/coreos/etcd/auth/authpb github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes github.com/coreos/etcd/etcdserver/etcdserverpb github.com/coreos/etcd/mvcc/mvccpb +github.com/coreos/etcd/pkg/pathutil +github.com/coreos/etcd/pkg/srv github.com/coreos/etcd/pkg/tlsutil +github.com/coreos/etcd/pkg/transport +github.com/coreos/etcd/pkg/types +github.com/coreos/etcd/version # github.com/coreos/go-semver v0.2.0 github.com/coreos/go-semver/semver # github.com/davecgh/go-spew v1.1.1 github.com/davecgh/go-spew/spew # github.com/dgrijalva/jwt-go v3.2.0+incompatible github.com/dgrijalva/jwt-go -# github.com/dimchansky/utfbom v1.0.0 +# github.com/dimchansky/utfbom v1.1.0 github.com/dimchansky/utfbom # github.com/dylanmei/iso8601 v0.1.0 github.com/dylanmei/iso8601 @@ -168,16 +180,18 @@ github.com/gogo/protobuf/proto github.com/gogo/protobuf/protoc-gen-gogo/descriptor # github.com/golang/mock v1.3.1 github.com/golang/mock/gomock +github.com/golang/mock/mockgen +github.com/golang/mock/mockgen/model # github.com/golang/protobuf v1.3.2 github.com/golang/protobuf/proto +github.com/golang/protobuf/protoc-gen-go/descriptor github.com/golang/protobuf/ptypes github.com/golang/protobuf/ptypes/any github.com/golang/protobuf/ptypes/duration github.com/golang/protobuf/ptypes/timestamp -github.com/golang/protobuf/protoc-gen-go/descriptor # github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db github.com/golang/snappy -# github.com/google/go-cmp v0.3.0 +# github.com/google/go-cmp v0.3.1 github.com/google/go-cmp/cmp github.com/google/go-cmp/cmp/cmpopts github.com/google/go-cmp/cmp/internal/diff @@ -192,14 +206,8 @@ github.com/google/uuid github.com/googleapis/gax-go/v2 # 
github.com/gophercloud/gophercloud v0.0.0-20190208042652-bc37892e1968 github.com/gophercloud/gophercloud +github.com/gophercloud/gophercloud/internal github.com/gophercloud/gophercloud/openstack -github.com/gophercloud/gophercloud/openstack/objectstorage/v1/containers -github.com/gophercloud/gophercloud/openstack/objectstorage/v1/objects -github.com/gophercloud/gophercloud/pagination -github.com/gophercloud/gophercloud/openstack/identity/v2/tokens -github.com/gophercloud/gophercloud/openstack/identity/v3/tokens -github.com/gophercloud/gophercloud/openstack/utils -github.com/gophercloud/gophercloud/openstack/objectstorage/v1/accounts github.com/gophercloud/gophercloud/openstack/blockstorage/extensions/volumeactions github.com/gophercloud/gophercloud/openstack/blockstorage/v1/volumes github.com/gophercloud/gophercloud/openstack/blockstorage/v2/snapshots @@ -224,15 +232,19 @@ github.com/gophercloud/gophercloud/openstack/containerinfra/v1/clusters github.com/gophercloud/gophercloud/openstack/containerinfra/v1/clustertemplates github.com/gophercloud/gophercloud/openstack/db/v1/configurations github.com/gophercloud/gophercloud/openstack/db/v1/databases +github.com/gophercloud/gophercloud/openstack/db/v1/datastores github.com/gophercloud/gophercloud/openstack/db/v1/instances github.com/gophercloud/gophercloud/openstack/db/v1/users github.com/gophercloud/gophercloud/openstack/dns/v2/recordsets github.com/gophercloud/gophercloud/openstack/dns/v2/zones +github.com/gophercloud/gophercloud/openstack/identity/v2/tenants +github.com/gophercloud/gophercloud/openstack/identity/v2/tokens github.com/gophercloud/gophercloud/openstack/identity/v3/endpoints github.com/gophercloud/gophercloud/openstack/identity/v3/groups github.com/gophercloud/gophercloud/openstack/identity/v3/projects github.com/gophercloud/gophercloud/openstack/identity/v3/roles github.com/gophercloud/gophercloud/openstack/identity/v3/services +github.com/gophercloud/gophercloud/openstack/identity/v3/tokens github.com/gophercloud/gophercloud/openstack/identity/v3/users github.com/gophercloud/gophercloud/openstack/imageservice/v2/imagedata github.com/gophercloud/gophercloud/openstack/imageservice/v2/images @@ -269,6 +281,9 @@ github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/vpnaas/sit github.com/gophercloud/gophercloud/openstack/networking/v2/networks github.com/gophercloud/gophercloud/openstack/networking/v2/ports github.com/gophercloud/gophercloud/openstack/networking/v2/subnets +github.com/gophercloud/gophercloud/openstack/objectstorage/v1/accounts +github.com/gophercloud/gophercloud/openstack/objectstorage/v1/containers +github.com/gophercloud/gophercloud/openstack/objectstorage/v1/objects github.com/gophercloud/gophercloud/openstack/objectstorage/v1/swauth github.com/gophercloud/gophercloud/openstack/sharedfilesystems/v2/errors github.com/gophercloud/gophercloud/openstack/sharedfilesystems/v2/messages @@ -276,28 +291,28 @@ github.com/gophercloud/gophercloud/openstack/sharedfilesystems/v2/securityservic github.com/gophercloud/gophercloud/openstack/sharedfilesystems/v2/sharenetworks github.com/gophercloud/gophercloud/openstack/sharedfilesystems/v2/shares github.com/gophercloud/gophercloud/openstack/sharedfilesystems/v2/snapshots -github.com/gophercloud/gophercloud/openstack/identity/v2/tenants -github.com/gophercloud/gophercloud/openstack/db/v1/datastores -github.com/gophercloud/gophercloud/internal +github.com/gophercloud/gophercloud/openstack/utils +github.com/gophercloud/gophercloud/pagination # 
github.com/gophercloud/utils v0.0.0-20190128072930-fbb6ab446f01 github.com/gophercloud/utils/openstack/clientconfig -# github.com/hashicorp/aws-sdk-go-base v0.3.0 +# github.com/hashicorp/aws-sdk-go-base v0.4.0 github.com/hashicorp/aws-sdk-go-base # github.com/hashicorp/consul v0.0.0-20171026175957-610f3c86a089 github.com/hashicorp/consul/api -github.com/hashicorp/consul/testutil github.com/hashicorp/consul/lib/freeport +github.com/hashicorp/consul/testutil github.com/hashicorp/consul/testutil/retry # github.com/hashicorp/errwrap v1.0.0 github.com/hashicorp/errwrap -# github.com/hashicorp/go-azure-helpers v0.0.0-20190129193224-166dfd221bb2 +# github.com/hashicorp/go-azure-helpers v0.10.0 github.com/hashicorp/go-azure-helpers/authentication +github.com/hashicorp/go-azure-helpers/sender github.com/hashicorp/go-azure-helpers/storage # github.com/hashicorp/go-checkpoint v0.5.0 github.com/hashicorp/go-checkpoint -# github.com/hashicorp/go-cleanhttp v0.5.0 +# github.com/hashicorp/go-cleanhttp v0.5.1 github.com/hashicorp/go-cleanhttp -# github.com/hashicorp/go-getter v1.4.0 +# github.com/hashicorp/go-getter v1.4.2-0.20200106182914-9813cbd4eb02 github.com/hashicorp/go-getter github.com/hashicorp/go-getter/helper/url # github.com/hashicorp/go-hclog v0.0.0-20181001195459-61d530d6c27f @@ -313,13 +328,13 @@ github.com/hashicorp/go-retryablehttp github.com/hashicorp/go-rootcerts # github.com/hashicorp/go-safetemp v1.0.0 github.com/hashicorp/go-safetemp -# github.com/hashicorp/go-slug v0.3.0 +# github.com/hashicorp/go-slug v0.4.1 github.com/hashicorp/go-slug -# github.com/hashicorp/go-tfe v0.3.23 +# github.com/hashicorp/go-tfe v0.3.27 github.com/hashicorp/go-tfe # github.com/hashicorp/go-uuid v1.0.1 github.com/hashicorp/go-uuid -# github.com/hashicorp/go-version v1.1.0 +# github.com/hashicorp/go-version v1.2.0 github.com/hashicorp/go-version # github.com/hashicorp/golang-lru v0.5.1 github.com/hashicorp/golang-lru/simplelru @@ -328,46 +343,43 @@ github.com/hashicorp/hcl github.com/hashicorp/hcl/hcl/ast github.com/hashicorp/hcl/hcl/parser github.com/hashicorp/hcl/hcl/printer -github.com/hashicorp/hcl/hcl/token -github.com/hashicorp/hcl/json/parser github.com/hashicorp/hcl/hcl/scanner github.com/hashicorp/hcl/hcl/strconv +github.com/hashicorp/hcl/hcl/token +github.com/hashicorp/hcl/json/parser github.com/hashicorp/hcl/json/scanner github.com/hashicorp/hcl/json/token -# github.com/hashicorp/hcl/v2 v2.0.0 +# github.com/hashicorp/hcl/v2 v2.3.0 github.com/hashicorp/hcl/v2 -github.com/hashicorp/hcl/v2/hclsyntax +github.com/hashicorp/hcl/v2/ext/customdecode +github.com/hashicorp/hcl/v2/ext/dynblock +github.com/hashicorp/hcl/v2/ext/tryfunc +github.com/hashicorp/hcl/v2/ext/typeexpr +github.com/hashicorp/hcl/v2/gohcl github.com/hashicorp/hcl/v2/hcldec -github.com/hashicorp/hcl/v2/hclwrite -github.com/hashicorp/hcl/v2/json github.com/hashicorp/hcl/v2/hcled github.com/hashicorp/hcl/v2/hclparse -github.com/hashicorp/hcl/v2/gohcl -github.com/hashicorp/hcl/v2/ext/typeexpr -github.com/hashicorp/hcl/v2/ext/dynblock +github.com/hashicorp/hcl/v2/hclsyntax github.com/hashicorp/hcl/v2/hcltest -# github.com/hashicorp/hcl2 v0.0.0-20190821123243-0c888d1241f6 -github.com/hashicorp/hcl2/gohcl -github.com/hashicorp/hcl2/hcl -github.com/hashicorp/hcl2/hcl/hclsyntax -github.com/hashicorp/hcl2/hclparse -github.com/hashicorp/hcl2/hclwrite -github.com/hashicorp/hcl2/hcl/json +github.com/hashicorp/hcl/v2/hclwrite +github.com/hashicorp/hcl/v2/json # github.com/hashicorp/hil v0.0.0-20190212112733-ab17b08d6590 github.com/hashicorp/hil 
github.com/hashicorp/hil/ast github.com/hashicorp/hil/parser github.com/hashicorp/hil/scanner -# github.com/hashicorp/logutils v1.0.0 -github.com/hashicorp/logutils # github.com/hashicorp/serf v0.0.0-20160124182025-e4ec8cc423bb github.com/hashicorp/serf/coordinate -# github.com/hashicorp/terraform-config-inspect v0.0.0-20190821133035-82a99dc22ef4 +# github.com/hashicorp/terraform-config-inspect v0.0.0-20191212124732-c6ae6269b9d7 github.com/hashicorp/terraform-config-inspect/tfconfig +# github.com/hashicorp/terraform-svchost v0.0.0-20191011084731-65d371908596 +github.com/hashicorp/terraform-svchost +github.com/hashicorp/terraform-svchost/auth +github.com/hashicorp/terraform-svchost/disco # github.com/hashicorp/vault v0.10.4 -github.com/hashicorp/vault/helper/pgpkeys -github.com/hashicorp/vault/helper/jsonutil github.com/hashicorp/vault/helper/compressutil +github.com/hashicorp/vault/helper/jsonutil +github.com/hashicorp/vault/helper/pgpkeys # github.com/hashicorp/yamux v0.0.0-20180604194846-3520598351bb github.com/hashicorp/yamux # github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af @@ -375,30 +387,30 @@ github.com/jmespath/go-jmespath # github.com/joyent/triton-go v0.0.0-20180313100802-d8f9c0314926 github.com/joyent/triton-go github.com/joyent/triton-go/authentication +github.com/joyent/triton-go/client github.com/joyent/triton-go/errors github.com/joyent/triton-go/storage -github.com/joyent/triton-go/client # github.com/json-iterator/go v1.1.5 github.com/json-iterator/go # github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0 github.com/kardianos/osext # github.com/keybase/go-crypto v0.0.0-20161004153544-93f5b35093ba -github.com/keybase/go-crypto/openpgp -github.com/keybase/go-crypto/openpgp/packet -github.com/keybase/go-crypto/openpgp/armor -github.com/keybase/go-crypto/openpgp/errors -github.com/keybase/go-crypto/openpgp/s2k -github.com/keybase/go-crypto/rsa github.com/keybase/go-crypto/brainpool github.com/keybase/go-crypto/cast5 +github.com/keybase/go-crypto/openpgp +github.com/keybase/go-crypto/openpgp/armor github.com/keybase/go-crypto/openpgp/elgamal +github.com/keybase/go-crypto/openpgp/errors +github.com/keybase/go-crypto/openpgp/packet +github.com/keybase/go-crypto/openpgp/s2k +github.com/keybase/go-crypto/rsa # github.com/lib/pq v1.0.0 github.com/lib/pq github.com/lib/pq/oid +# github.com/likexian/gokit v0.20.15 +github.com/likexian/gokit/assert # github.com/lusis/go-artifactory v0.0.0-20160115162124-7e4ce345df82 github.com/lusis/go-artifactory/src/artifactory.v401 -# github.com/marstr/guid v1.1.0 -github.com/marstr/guid # github.com/masterzen/simplexml v0.0.0-20160608183007-4572e39b1ab9 github.com/masterzen/simplexml/dom # github.com/masterzen/winrm v0.0.0-20190223112901-5e5c9a7fe54b @@ -428,7 +440,7 @@ github.com/mitchellh/go-wordwrap github.com/mitchellh/hashstructure # github.com/mitchellh/mapstructure v1.1.2 github.com/mitchellh/mapstructure -# github.com/mitchellh/panicwrap v0.0.0-20190213213626-17011010aaa4 +# github.com/mitchellh/panicwrap v1.0.0 github.com/mitchellh/panicwrap # github.com/mitchellh/prefixedio v0.0.0-20190213213902-5733675afd51 github.com/mitchellh/prefixedio @@ -438,6 +450,8 @@ github.com/mitchellh/reflectwalk github.com/modern-go/concurrent # github.com/modern-go/reflect2 v1.0.1 github.com/modern-go/reflect2 +# github.com/mozillazg/go-httpheader v0.2.1 +github.com/mozillazg/go-httpheader # github.com/nu7hatch/gouuid v0.0.0-20131221200532-179d4d0c4d8d github.com/nu7hatch/gouuid # github.com/oklog/run v1.0.0 @@ -450,8 +464,8 @@ 
github.com/pkg/browser github.com/pkg/errors # github.com/posener/complete v1.2.1 github.com/posener/complete -github.com/posener/complete/cmd/install github.com/posener/complete/cmd +github.com/posener/complete/cmd/install github.com/posener/complete/match # github.com/satori/go.uuid v1.2.0 github.com/satori/go.uuid @@ -460,15 +474,23 @@ github.com/spf13/afero github.com/spf13/afero/mem # github.com/svanharmelen/jsonapi v0.0.0-20180618144545-0c0828c3f16d github.com/svanharmelen/jsonapi +# github.com/tencentcloud/tencentcloud-sdk-go v3.0.82+incompatible +github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common +github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/errors +github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/http +github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/profile +github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/tag/v20180813 +# github.com/tencentyun/cos-go-sdk-v5 v0.0.0-20190808065407-f07404cefc8c +github.com/tencentyun/cos-go-sdk-v5 # github.com/terraform-providers/terraform-provider-openstack v1.15.0 github.com/terraform-providers/terraform-provider-openstack/openstack # github.com/ugorji/go v0.0.0-20180813092308-00b869d2f4a5 github.com/ugorji/go/codec # github.com/ulikunitz/xz v0.5.5 github.com/ulikunitz/xz +github.com/ulikunitz/xz/internal/hash github.com/ulikunitz/xz/internal/xlog github.com/ulikunitz/xz/lzma -github.com/ulikunitz/xz/internal/hash # github.com/vmihailenco/msgpack v4.0.1+incompatible github.com/vmihailenco/msgpack github.com/vmihailenco/msgpack/codes @@ -476,89 +498,84 @@ github.com/vmihailenco/msgpack/codes github.com/xanzy/ssh-agent # github.com/xlab/treeprint v0.0.0-20161029104018-1d6e34225557 github.com/xlab/treeprint -# github.com/zclconf/go-cty v1.1.0 +# github.com/zclconf/go-cty v1.3.1 github.com/zclconf/go-cty/cty -github.com/zclconf/go-cty/cty/gocty github.com/zclconf/go-cty/cty/convert -github.com/zclconf/go-cty/cty/json github.com/zclconf/go-cty/cty/function github.com/zclconf/go-cty/cty/function/stdlib +github.com/zclconf/go-cty/cty/gocty +github.com/zclconf/go-cty/cty/json github.com/zclconf/go-cty/cty/msgpack github.com/zclconf/go-cty/cty/set # github.com/zclconf/go-cty-yaml v1.0.1 github.com/zclconf/go-cty-yaml # go.opencensus.io v0.22.0 -go.opencensus.io/trace -go.opencensus.io/plugin/ochttp +go.opencensus.io go.opencensus.io/internal -go.opencensus.io/trace/internal -go.opencensus.io/trace/tracestate +go.opencensus.io/internal/tagencoding +go.opencensus.io/metric/metricdata +go.opencensus.io/metric/metricproducer +go.opencensus.io/plugin/ochttp go.opencensus.io/plugin/ochttp/propagation/b3 +go.opencensus.io/resource go.opencensus.io/stats +go.opencensus.io/stats/internal go.opencensus.io/stats/view go.opencensus.io/tag +go.opencensus.io/trace +go.opencensus.io/trace/internal go.opencensus.io/trace/propagation -go.opencensus.io -go.opencensus.io/metric/metricdata -go.opencensus.io/stats/internal -go.opencensus.io/internal/tagencoding -go.opencensus.io/metric/metricproducer -go.opencensus.io/resource +go.opencensus.io/trace/tracestate # golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4 -golang.org/x/crypto/ssh -golang.org/x/crypto/ssh/agent -golang.org/x/crypto/ssh/knownhosts golang.org/x/crypto/bcrypt -golang.org/x/crypto/openpgp -golang.org/x/crypto/pkcs12 +golang.org/x/crypto/blowfish +golang.org/x/crypto/cast5 golang.org/x/crypto/curve25519 golang.org/x/crypto/ed25519 +golang.org/x/crypto/ed25519/internal/edwards25519 golang.org/x/crypto/internal/chacha20 
-golang.org/x/crypto/poly1305 -golang.org/x/crypto/blowfish +golang.org/x/crypto/internal/subtle +golang.org/x/crypto/md4 +golang.org/x/crypto/openpgp golang.org/x/crypto/openpgp/armor +golang.org/x/crypto/openpgp/elgamal golang.org/x/crypto/openpgp/errors golang.org/x/crypto/openpgp/packet golang.org/x/crypto/openpgp/s2k +golang.org/x/crypto/pkcs12 golang.org/x/crypto/pkcs12/internal/rc2 -golang.org/x/crypto/ed25519/internal/edwards25519 -golang.org/x/crypto/internal/subtle -golang.org/x/crypto/md4 -golang.org/x/crypto/cast5 -golang.org/x/crypto/openpgp/elgamal -# golang.org/x/net v0.0.0-20190620200207-3b0461eec859 +golang.org/x/crypto/poly1305 +golang.org/x/crypto/ssh +golang.org/x/crypto/ssh/agent +golang.org/x/crypto/ssh/knownhosts +# golang.org/x/net v0.0.0-20191009170851-d66e71096ffb golang.org/x/net/context -golang.org/x/net/idna -golang.org/x/net/trace golang.org/x/net/context/ctxhttp +golang.org/x/net/html +golang.org/x/net/html/atom golang.org/x/net/html/charset -golang.org/x/net/internal/timeseries +golang.org/x/net/http/httpguts golang.org/x/net/http2 golang.org/x/net/http2/hpack -golang.org/x/net/html -golang.org/x/net/http/httpguts -golang.org/x/net/html/atom +golang.org/x/net/idna +golang.org/x/net/internal/timeseries +golang.org/x/net/trace # golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45 golang.org/x/oauth2 -golang.org/x/oauth2/jwt +golang.org/x/oauth2/google golang.org/x/oauth2/internal golang.org/x/oauth2/jws -golang.org/x/oauth2/google +golang.org/x/oauth2/jwt # golang.org/x/sys v0.0.0-20190804053845-51ab0e2deafa -golang.org/x/sys/windows -golang.org/x/sys/unix golang.org/x/sys/cpu +golang.org/x/sys/unix +golang.org/x/sys/windows # golang.org/x/text v0.3.2 -golang.org/x/text/unicode/norm -golang.org/x/text/transform -golang.org/x/text/secure/bidirule -golang.org/x/text/unicode/bidi golang.org/x/text/encoding golang.org/x/text/encoding/charmap golang.org/x/text/encoding/htmlindex -golang.org/x/text/language -golang.org/x/text/encoding/internal/identifier golang.org/x/text/encoding/internal +golang.org/x/text/encoding/internal/identifier golang.org/x/text/encoding/japanese golang.org/x/text/encoding/korean golang.org/x/text/encoding/simplifiedchinese @@ -566,58 +583,73 @@ golang.org/x/text/encoding/traditionalchinese golang.org/x/text/encoding/unicode golang.org/x/text/internal/language golang.org/x/text/internal/language/compact -golang.org/x/text/internal/utf8internal -golang.org/x/text/runes golang.org/x/text/internal/tag +golang.org/x/text/internal/utf8internal +golang.org/x/text/language +golang.org/x/text/runes +golang.org/x/text/secure/bidirule +golang.org/x/text/transform +golang.org/x/text/unicode/bidi +golang.org/x/text/unicode/norm # golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 golang.org/x/time/rate +# golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0 +golang.org/x/tools/cmd/cover +golang.org/x/tools/cmd/stringer +golang.org/x/tools/cover +golang.org/x/tools/go/gcexportdata +golang.org/x/tools/go/internal/gcimporter +golang.org/x/tools/go/internal/packagesdriver +golang.org/x/tools/go/packages +golang.org/x/tools/internal/fastwalk +golang.org/x/tools/internal/gopathwalk +golang.org/x/tools/internal/semver # google.golang.org/api v0.9.0 +google.golang.org/api/gensupport +google.golang.org/api/googleapi +google.golang.org/api/googleapi/internal/uritemplates +google.golang.org/api/googleapi/transport +google.golang.org/api/internal google.golang.org/api/iterator google.golang.org/api/option -google.golang.org/api/googleapi 
google.golang.org/api/storage/v1 google.golang.org/api/transport/http -google.golang.org/api/internal -google.golang.org/api/googleapi/internal/uritemplates -google.golang.org/api/gensupport -google.golang.org/api/googleapi/transport google.golang.org/api/transport/http/internal/propagation # google.golang.org/appengine v1.6.1 -google.golang.org/appengine/urlfetch google.golang.org/appengine google.golang.org/appengine/datastore -google.golang.org/appengine/internal -google.golang.org/appengine/internal/urlfetch -google.golang.org/appengine/internal/app_identity -google.golang.org/appengine/internal/modules google.golang.org/appengine/datastore/internal/cloudkey -google.golang.org/appengine/internal/datastore -google.golang.org/appengine/internal/base -google.golang.org/appengine/internal/log -google.golang.org/appengine/internal/remote_api google.golang.org/appengine/datastore/internal/cloudpb +google.golang.org/appengine/internal +google.golang.org/appengine/internal/app_identity +google.golang.org/appengine/internal/base +google.golang.org/appengine/internal/datastore +google.golang.org/appengine/internal/log +google.golang.org/appengine/internal/modules +google.golang.org/appengine/internal/remote_api +google.golang.org/appengine/internal/urlfetch +google.golang.org/appengine/urlfetch # google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55 -google.golang.org/genproto/googleapis/iam/v1 -google.golang.org/genproto/googleapis/rpc/status -google.golang.org/genproto/googleapis/rpc/code google.golang.org/genproto/googleapis/api/annotations +google.golang.org/genproto/googleapis/iam/v1 +google.golang.org/genproto/googleapis/rpc/code +google.golang.org/genproto/googleapis/rpc/status google.golang.org/genproto/googleapis/type/expr # google.golang.org/grpc v1.21.1 google.golang.org/grpc -google.golang.org/grpc/test/bufconn -google.golang.org/grpc/codes -google.golang.org/grpc/status -google.golang.org/grpc/metadata -google.golang.org/grpc/credentials -google.golang.org/grpc/health -google.golang.org/grpc/health/grpc_health_v1 -google.golang.org/grpc/grpclog -google.golang.org/grpc/keepalive google.golang.org/grpc/balancer +google.golang.org/grpc/balancer/base google.golang.org/grpc/balancer/roundrobin +google.golang.org/grpc/binarylog/grpc_binarylog_v1 +google.golang.org/grpc/codes google.golang.org/grpc/connectivity +google.golang.org/grpc/credentials +google.golang.org/grpc/credentials/internal google.golang.org/grpc/encoding google.golang.org/grpc/encoding/proto +google.golang.org/grpc/grpclog +google.golang.org/grpc/health +google.golang.org/grpc/health/grpc_health_v1 google.golang.org/grpc/internal google.golang.org/grpc/internal/backoff google.golang.org/grpc/internal/balancerload @@ -626,18 +658,19 @@ google.golang.org/grpc/internal/channelz google.golang.org/grpc/internal/envconfig google.golang.org/grpc/internal/grpcrand google.golang.org/grpc/internal/grpcsync +google.golang.org/grpc/internal/syscall google.golang.org/grpc/internal/transport +google.golang.org/grpc/keepalive +google.golang.org/grpc/metadata google.golang.org/grpc/naming google.golang.org/grpc/peer google.golang.org/grpc/resolver google.golang.org/grpc/resolver/dns google.golang.org/grpc/resolver/passthrough google.golang.org/grpc/stats +google.golang.org/grpc/status google.golang.org/grpc/tap -google.golang.org/grpc/credentials/internal -google.golang.org/grpc/balancer/base -google.golang.org/grpc/binarylog/grpc_binarylog_v1 -google.golang.org/grpc/internal/syscall +google.golang.org/grpc/test/bufconn # 
gopkg.in/ini.v1 v1.42.0
gopkg.in/ini.v1
# gopkg.in/yaml.v2 v2.2.2
diff --git a/version/version.go b/version/version.go
index 90b10da42..168a4d657 100644
--- a/version/version.go
+++ b/version/version.go
@@ -11,7 +11,7 @@ import (
)
// The main version number that is being run at the moment.
-var Version = "0.12.11"
+var Version = "0.13.0"
// A pre-release marker for the version. If this is "" (empty string)
// then it means that it is a final release. Otherwise, this is a pre-release
diff --git a/website/docs/backends/types/artifactory.html.md b/website/docs/backends/types/artifactory.html.md
index 24ec55cb3..ce30ae716 100644
--- a/website/docs/backends/types/artifactory.html.md
+++ b/website/docs/backends/types/artifactory.html.md
@@ -33,7 +33,7 @@ terraform {
}
```
-## Example Referencing
+## Data Source Configuration
```hcl
data "terraform_remote_state" "foo" {
diff --git a/website/docs/backends/types/azurerm.html.md b/website/docs/backends/types/azurerm.html.md
index aca2a54e4..02d57b2ee 100644
--- a/website/docs/backends/types/azurerm.html.md
+++ b/website/docs/backends/types/azurerm.html.md
@@ -77,7 +77,7 @@ terraform {
-> **NOTE:** When using a Service Principal or an Access Key - we recommend using a [Partial Configuration](/docs/backends/config.html) for the credentials.
-## Example Referencing
+## Data Source Configuration
When authenticating using a Service Principal:
diff --git a/website/docs/backends/types/consul.html.md b/website/docs/backends/types/consul.html.md
index 66e610d2b..c73ab6f3d 100644
--- a/website/docs/backends/types/consul.html.md
+++ b/website/docs/backends/types/consul.html.md
@@ -29,7 +29,7 @@ terraform {
Note that for the access credentials we recommend using a
[partial configuration](/docs/backends/config.html).
-## Example Referencing
+## Data Source Configuration
```hcl
data "terraform_remote_state" "foo" {
diff --git a/website/docs/backends/types/cos.html.md b/website/docs/backends/types/cos.html.md
new file mode 100644
index 000000000..bf1d09945
--- /dev/null
+++ b/website/docs/backends/types/cos.html.md
@@ -0,0 +1,61 @@
+---
+layout: "backend-types"
+page_title: "Backend Type: cos"
+sidebar_current: "docs-backends-types-standard-cos"
+description: |-
+  Terraform can store the state remotely, making it easier to version and work with in a team.
+---
+
+# COS
+
+**Kind: Standard (with locking)**
+
+Stores the state as an object in a configurable prefix in a given bucket on [Tencent Cloud Object Storage](https://intl.cloud.tencent.com/product/cos) (COS).
+This backend also supports [state locking](/docs/state/locking.html).
+
+~> **Warning!** It is highly recommended that you enable [Object Versioning](https://intl.cloud.tencent.com/document/product/436/19883)
+on the COS bucket to allow for state recovery in the case of accidental deletions and human error.
+
+## Example Configuration
+
+```hcl
+terraform {
+  backend "cos" {
+    region = "ap-guangzhou"
+    bucket = "bucket-for-terraform-state-1258798060"
+    prefix = "terraform/state"
+  }
+}
+```
+
+This assumes we have a [COS Bucket](https://www.terraform.io/docs/providers/tencentcloud/r/cos_bucket.html) named `bucket-for-terraform-state-1258798060` already created.
+Terraform state will be written to the file `terraform/state/terraform.tfstate`.
+
+## Data Source Configuration
+
+To make use of the COS remote state in another configuration, use the [`terraform_remote_state` data source](/docs/providers/terraform/d/remote_state.html).
+
+```hcl
+data "terraform_remote_state" "foo" {
+  backend = "cos"
+
+  config = {
+    region = "ap-guangzhou"
+    bucket = "bucket-for-terraform-state-1258798060"
+    prefix = "terraform/state"
+  }
+}
+```
+
+## Configuration variables
+
+The following configuration options or environment variables are supported:
+
+ * `secret_id` - (Optional) Secret ID of Tencent Cloud. It supports the environment variable `TENCENTCLOUD_SECRET_ID`.
+ * `secret_key` - (Optional) Secret key of Tencent Cloud. It supports the environment variable `TENCENTCLOUD_SECRET_KEY`.
+ * `region` - (Optional) The region of the COS bucket. It supports the environment variable `TENCENTCLOUD_REGION`.
+ * `bucket` - (Required) The name of the COS bucket. It must be created manually beforehand.
+ * `prefix` - (Optional) The directory for saving the state file in the bucket. Defaults to `"env:"`.
+ * `key` - (Optional) The path for saving the state file in the bucket. Defaults to `terraform.tfstate`.
+ * `encrypt` - (Optional) Whether to enable server-side encryption of the state file. If set to true, COS will use the `AES256` encryption algorithm to encrypt the state file.
+ * `acl` - (Optional) Object ACL to be applied to the state file; allowed values are `private` and `public-read`. Defaults to `private`.
diff --git a/website/docs/backends/types/etcd.html.md b/website/docs/backends/types/etcd.html.md
index 04232c02e..302d5486b 100644
--- a/website/docs/backends/types/etcd.html.md
+++ b/website/docs/backends/types/etcd.html.md
@@ -23,7 +23,7 @@ terraform {
}
```
-## Example Referencing
+## Data Source Configuration
```hcl
data "terraform_remote_state" "foo" {
diff --git a/website/docs/backends/types/etcdv3.html.md b/website/docs/backends/types/etcdv3.html.md
index 1be49373f..43257c8ce 100644
--- a/website/docs/backends/types/etcdv3.html.md
+++ b/website/docs/backends/types/etcdv3.html.md
@@ -29,7 +29,7 @@ terraform {
Note that for the access credentials we recommend using a
[partial configuration](/docs/backends/config.html).
-## Example Referencing
+## Data Source Configuration
```hcl
data "terraform_remote_state" "foo" {
diff --git a/website/docs/backends/types/gcs.html.md b/website/docs/backends/types/gcs.html.md
index 72226b6d3..62ee24b44 100644
--- a/website/docs/backends/types/gcs.html.md
+++ b/website/docs/backends/types/gcs.html.md
@@ -28,7 +28,7 @@ terraform {
}
```
-## Example Referencing
+## Data Source Configuration
```hcl
data "terraform_remote_state" "foo" {
@@ -52,15 +52,27 @@ resource "template_file" "bar" {
The following configuration options are supported:
- * `bucket` - (Required) The name of the GCS bucket.
-   This name must be globally unique.
-   For more information, see [Bucket Naming Guidelines](https://cloud.google.com/storage/docs/bucketnaming.html#requirements).
- * `credentials` / `GOOGLE_CREDENTIALS` - (Optional) Local path to Google Cloud Platform account credentials in JSON format.
-   If unset, [Google Application Default Credentials](https://developers.google.com/identity/protocols/application-default-credentials) are used.
-   The provided credentials need to have the `devstorage.read_write` scope and `WRITER` permissions on the bucket.
- * `access_token` - (Optional) A temporary [OAuth 2.0 access token] obtained from
-   the Google Authorization server, i.e. the `Authorization: Bearer` token used to
-   authenticate HTTP requests to GCP APIs. This is an alternative to `credentials`. If both are specified, `access_token` will be used over the `credentials` field.
- * `prefix` - (Optional) GCS prefix inside the bucket.
Named states for workspaces are stored in an object called `<prefix>/<name>.tfstate`.
- * `path` - (Deprecated) GCS path to the state file of the default state. For backwards compatibility only, use `prefix` instead.
- * `encryption_key` / `GOOGLE_ENCRYPTION_KEY` - (Optional) A 32 byte base64 encoded 'customer supplied encryption key' used to encrypt all state. For more information see [Customer Supplied Encryption Keys](https://cloud.google.com/storage/docs/encryption#customer-supplied).
+ * `bucket` - (Required) The name of the GCS bucket. This name must be
+   globally unique. For more information, see [Bucket Naming
+   Guidelines](https://cloud.google.com/storage/docs/bucketnaming.html#requirements).
+ * `credentials` / `GOOGLE_BACKEND_CREDENTIALS` / `GOOGLE_CREDENTIALS` -
+   (Optional) Local path to Google Cloud Platform account credentials in JSON
+   format. If unset, [Google Application Default
+   Credentials](https://developers.google.com/identity/protocols/application-default-credentials)
+   are used. The provided credentials need to have the
+   `devstorage.read_write` scope and `WRITER` permissions on the bucket.
+   **Warning**: if using the Google Cloud Platform provider as well, it will
+   also pick up the `GOOGLE_CREDENTIALS` environment variable.
+ * `access_token` - (Optional) A temporary [OAuth 2.0 access token] obtained
+   from the Google Authorization server, i.e. the `Authorization: Bearer` token
+   used to authenticate HTTP requests to GCP APIs. This is an alternative to
+   `credentials`. If both are specified, `access_token` will be used over the
+   `credentials` field.
+ * `prefix` - (Optional) GCS prefix inside the bucket. Named states for
+   workspaces are stored in an object called `<prefix>/<name>.tfstate`.
+ * `path` - (Deprecated) GCS path to the state file of the default state. For
+   backwards compatibility only, use `prefix` instead.
+ * `encryption_key` / `GOOGLE_ENCRYPTION_KEY` - (Optional) A 32 byte base64
+   encoded 'customer supplied encryption key' used to encrypt all state. For
+   more information see [Customer Supplied Encryption
+   Keys](https://cloud.google.com/storage/docs/encryption#customer-supplied).
diff --git a/website/docs/backends/types/http.html.md b/website/docs/backends/types/http.html.md
index ba0a723eb..f4a97aceb 100644
--- a/website/docs/backends/types/http.html.md
+++ b/website/docs/backends/types/http.html.md
@@ -30,7 +30,7 @@ terraform {
}
```
-## Example Referencing
+## Data Source Configuration
```hcl
data "terraform_remote_state" "foo" {
diff --git a/website/docs/backends/types/local.html.md b/website/docs/backends/types/local.html.md
index fb30fdebd..b8a690cb6 100644
--- a/website/docs/backends/types/local.html.md
+++ b/website/docs/backends/types/local.html.md
@@ -23,7 +23,7 @@ terraform {
}
```
-## Example Reference
+## Data Source Configuration
```hcl
data "terraform_remote_state" "foo" {
diff --git a/website/docs/backends/types/manta.html.md b/website/docs/backends/types/manta.html.md
index 583e7c108..926891618 100644
--- a/website/docs/backends/types/manta.html.md
+++ b/website/docs/backends/types/manta.html.md
@@ -26,7 +26,7 @@ terraform {
Note that for the access credentials we recommend using a
[partial configuration](/docs/backends/config.html).
-## Example Referencing
+## Data Source Configuration
```hcl
data "terraform_remote_state" "foo" {
diff --git a/website/docs/backends/types/oss.html.md b/website/docs/backends/types/oss.html.md
index 08e3a11f8..301e4b470 100644
--- a/website/docs/backends/types/oss.html.md
+++ b/website/docs/backends/types/oss.html.md
@@ -16,6 +16,11 @@ This backend also supports state locking and consistency checking via
[Alibaba Cloud Table Store](https://www.alibabacloud.com/help/doc-detail/27280.htm), which can be enabled by setting
the `tablestore_table` field to an existing TableStore table name.
+-> **Note:** The OSS backend is available from Terraform version 0.12.2.
+
+!> **Warning:** If you set `tablestore_table`, please ensure the table does not contain a primary key named
+`LockID`, `Info`, or `Digest`. Otherwise, an error `OTSParameterInvalid Duplicated attribute column ...` will be thrown.
+
## Example Configuration
```hcl
@@ -37,9 +42,9 @@ a [OTS TableStore](https://www.terraform.io/docs/providers/alicloud/r/ots_table.
Terraform state will be written into the file `path/mystate/version-1.tfstate`.
The `TableStore` must have a primary key of type `string`.
-## Using the OSS remote state
+## Data Source Configuration
-To make use of the OSS remote state we can use the
+To make use of the OSS remote state in another configuration, use the
[`terraform_remote_state` data source](/docs/providers/terraform/d/remote_state.html).
@@ -78,6 +83,7 @@ The following configuration options or environment variables are supported:
 * `access_key` - (Optional) Alibaba Cloud access key. It supports environment variables `ALICLOUD_ACCESS_KEY` and `ALICLOUD_ACCESS_KEY_ID`.
 * `secret_key` - (Optional) Alibaba Cloud secret access key. It supports environment variables `ALICLOUD_SECRET_KEY` and `ALICLOUD_ACCESS_KEY_SECRET`.
 * `security_token` - (Optional) STS access token. It supports environment variable `ALICLOUD_SECURITY_TOKEN`.
+ * `ecs_role_name` - (Optional, Available in 0.12.14+) The RAM Role Name attached to an ECS instance for API operations. You can retrieve this from the 'Access Control' section of the Alibaba Cloud console.
 * `region` - (Optional) The region of the OSS bucket. It supports environment variables `ALICLOUD_REGION` and `ALICLOUD_DEFAULT_REGION`.
 * `endpoint` - (Optional) A custom endpoint for the OSS API. It supports environment variables `ALICLOUD_OSS_ENDPOINT` and `OSS_ENDPOINT`.
 * `bucket` - (Required) The name of the OSS bucket.
@@ -90,7 +96,7 @@ The following configuration options or environment variables are supported:
 * `acl` - (Optional) [Object ACL](https://www.alibabacloud.com/help/doc-detail/52284.htm)
   to be applied to the state file.
- * `shared_credentials_file` - (Optional, Available in 0.12.8+) This is the path to the shared credentials file. If this is not set and a profile is specified, `~/.aliyun/config.json` will be used.
+ * `shared_credentials_file` - (Optional, Available in 0.12.8+) This is the path to the shared credentials file. It can also be sourced from the `ALICLOUD_SHARED_CREDENTIALS_FILE` environment variable. If this is not set and a profile is specified, `~/.aliyun/config.json` will be used.
 * `profile` - (Optional, Available in 0.12.8+) This is the Alibaba Cloud profile name as set in the shared credentials file. It can also be sourced from the `ALICLOUD_PROFILE` environment variable.
 * `assume_role` - (Optional, Available in 0.12.6+) If provided with a role ARN, will attempt to assume this role using the supplied credentials.
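+
+As a minimal sketch (not verified against the full backend schema, and the
+role ARN below is hypothetical), assuming a RAM role might look like this:
+
+```hcl
+terraform {
+  backend "oss" {
+    bucket = "bucket-for-terraform-state"
+    prefix = "path/mystate"
+    region = "cn-beijing"
+
+    assume_role {
+      # Hypothetical RAM role ARN; replace with your own.
+      role_arn = "acs:ram::1234567890123456:role/terraform"
+    }
+  }
+}
+```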
diff --git a/website/docs/backends/types/pg.html.md b/website/docs/backends/types/pg.html.md
index 76ceaa34c..9c837accd 100644
--- a/website/docs/backends/types/pg.html.md
+++ b/website/docs/backends/types/pg.html.md
@@ -52,9 +52,9 @@ To use a Postgres server running on the same machine as Terraform, configure loc
terraform init -backend-config="conn_str=postgres://localhost/terraform_backend?sslmode=disable"
```
-## Example Referencing
+## Data Source Configuration
-To make use of the pg remote state we can use the [`terraform_remote_state` data source](/docs/providers/terraform/d/remote_state.html).
+To make use of the pg remote state in another configuration, use the [`terraform_remote_state` data source](/docs/providers/terraform/d/remote_state.html).
```hcl
data "terraform_remote_state" "network" {
diff --git a/website/docs/backends/types/remote.html.md b/website/docs/backends/types/remote.html.md
index d4320e810..1af0313cd 100644
--- a/website/docs/backends/types/remote.html.md
+++ b/website/docs/backends/types/remote.html.md
@@ -22,8 +22,6 @@ Cloud's run environment, with log output streaming to the local terminal. Remote
Terraform Cloud can also be used with local operations, in which case only state
is stored in the Terraform Cloud backend.
-
-
## Command Support
Currently the remote backend supports the following Terraform commands:
@@ -49,32 +47,57 @@ Currently the remote backend supports the following Terraform commands:
## Workspaces
-The remote backend can work with either a single remote workspace, or with
-multiple similarly-named remote workspaces (like `networking-dev` and
-`networking-prod`). The `workspaces` block of the backend configuration
+The remote backend can work with either a single remote Terraform Cloud workspace,
+or with multiple similarly-named remote workspaces (like `networking-dev`
+and `networking-prod`). The `workspaces` block of the backend configuration
determines which mode it uses:
-- To use a single workspace, set `workspaces.name` to the remote workspace's
-  full name (like `networking`).
+- To use a single remote Terraform Cloud workspace, set `workspaces.name` to the
+  remote workspace's full name (like `networking`).
-- To use multiple workspaces, set `workspaces.prefix` to a prefix used in
+- To use multiple remote workspaces, set `workspaces.prefix` to a prefix used in
  all of the desired remote workspace names. For example, set
-  `prefix = "networking-"` to use a group of workspaces with names like
-  `networking-dev` and `networking-prod`.
+  `prefix = "networking-"` to use Terraform Cloud workspaces with
+  names like `networking-dev` and `networking-prod`. This is helpful when
+  mapping multiple Terraform CLI [workspaces](../../state/workspaces.html)
+  used in a single Terraform configuration to multiple Terraform Cloud
+  workspaces.
-  When interacting with workspaces on the command line, Terraform uses
-  shortened names without the common prefix. For example, if
-  `prefix = "networking-"`, use `terraform workspace select prod` to switch to
-  the `networking-prod` workspace.
+When interacting with workspaces on the command line, Terraform uses
+shortened names without the common prefix. For example, if
+`prefix = "networking-"`, use `terraform workspace select prod` to switch to
+the Terraform CLI workspace `prod` within the current configuration. Remote
+Terraform operations such as `plan` and `apply` executed against that Terraform
+CLI workspace will be executed in the Terraform Cloud workspace `networking-prod`.
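+
+For example, a minimal sketch of a prefix-based configuration (the
+organization name `example-org` is hypothetical):
+
+```hcl
+terraform {
+  backend "remote" {
+    organization = "example-org" # hypothetical organization name
+
+    workspaces {
+      prefix = "networking-"
+    }
+  }
+}
+```
+
+With this configuration, running `terraform workspace select prod` followed
+by `terraform apply` would execute the run in the Terraform Cloud workspace
+`networking-prod`.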
+ +Additionally, the [`${terraform.workspace}`](../../state/workspaces.html#current-workspace-interpolation) +interpolation sequence should be removed from Terraform configurations that run +remote operations against Terraform Cloud workspaces. The reason for this is that +each Terraform Cloud workspace currently only uses the single `default` Terraform +CLI workspace internally. In other words, if your Terraform configuration +used `${terraform.workspace}` to return `dev` or `prod`, remote runs in Terraform Cloud +would always evaluate it as `default` regardless of +which workspace you had set with the `terraform workspace select` command. That +would most likely not be what you wanted. (It is ok to use `${terraform.workspace}` +in local operations.) The backend configuration requires either `name` or `prefix`. Omitting both or setting both results in a configuration error. If previous state is present when you run `terraform init` and the corresponding remote workspaces are empty or absent, Terraform will create workspaces and/or -update the remote state accordingly. +update the remote state accordingly. However, if your workspace needs variables +set or requires a specific version of Terraform for remote operations, we +recommend that you create your remote workspaces on Terraform Cloud before +running any remote operations against them. -## Example Configuration +## Example Configurations + +-> **Note:** We recommend omitting the token from the configuration, and instead using + [`terraform login`](/docs/commands/login.html) or manually configuring + `credentials` in the [CLI config file](/docs/commands/cli-config.html#credentials). + +### Basic Configuration ```hcl # Using a single workspace: @@ -102,23 +125,7 @@ terraform { } ``` -## Example Reference - -```hcl -data "terraform_remote_state" "foo" { - backend = "remote" - - config = { - organization = "company" - - workspaces { - name = "workspace" - } - } -} -``` - -## Example configuration using CLI input +### Using CLI Input ```hcl # main.tf @@ -144,6 +151,22 @@ Running `terraform init` with the backend file: terraform init -backend-config=backend.hcl ``` +### Data Source Configuration + +```hcl +data "terraform_remote_state" "foo" { + backend = "remote" + + config = { + organization = "company" + + workspaces = { + name = "workspace" + } + } +} +``` + ## Configuration variables The following configuration options are supported: @@ -153,8 +176,9 @@ The following configuration options are supported: * `organization` - (Required) The name of the organization containing the targeted workspace(s). * `token` - (Optional) The token used to authenticate with the remote backend. - We recommend omitting the token from the configuration, and instead setting it - as `credentials` in the + We recommend omitting the token from the configuration, and instead using + [`terraform login`](/docs/commands/login.html) or manually configuring + `credentials` in the [CLI config file](/docs/commands/cli-config.html#credentials). * `workspaces` - (Required) A block specifying which remote workspace(s) to use. The `workspaces` block supports the following keys: @@ -164,5 +188,31 @@ The following configuration options are supported: * `prefix` - (Optional) A prefix used in the names of one or more remote workspaces, all of which can be used with this configuration. The full workspace names are used in Terraform Cloud, and the short names - (minus the prefix) are used on the command line. If omitted, only the - default workspace can be used. 
This option conflicts with `name`.
+   (minus the prefix) are used on the command line for Terraform CLI workspaces.
+   If omitted, only the default workspace can be used. This option conflicts with `name`.
+
+-> **Note:** You must use the `name` key when configuring a `terraform_remote_state`
+data source that retrieves state from another Terraform Cloud workspace. The `prefix` key is only
+intended for use when configuring an instance of the remote backend.
+
+## Excluding Files from Upload with .terraformignore
+
+-> **Version note:** `.terraformignore` support was added in Terraform 0.12.11.
+
+When executing a remote `plan` or `apply` in a [CLI-driven run](/docs/cloud/run/cli.html),
+an archive of your configuration directory is uploaded to Terraform Cloud. You can define
+paths to ignore from upload via a `.terraformignore` file at the root of your configuration directory. If this file is not present, the archive will exclude the following by default:
+
+* `.git/` directories
+* `.terraform/` directories (with the exception of `.terraform/modules`)
+
+The `.terraformignore` file can include rules as one would include in a
+[.gitignore file](https://git-scm.com/book/en/v2/Git-Basics-Recording-Changes-to-the-Repository#Ignoring-Files):
+
+* Comments (starting with `#`) or blank lines are ignored
+* End a pattern with a forward slash (`/`) to specify a directory
+* Negate a pattern by starting it with an exclamation point (`!`)
+
+Note that unlike `.gitignore`, only the `.terraformignore` at the root of the configuration
+directory is considered.
diff --git a/website/docs/backends/types/s3.html.md b/website/docs/backends/types/s3.html.md
index a260a63ba..bdc8000e2 100644
--- a/website/docs/backends/types/s3.html.md
+++ b/website/docs/backends/types/s3.html.md
@@ -102,9 +102,9 @@ This is seen in the following AWS IAM Statement:
}
```
-## Using the S3 remote state
+## Data Source Configuration
-To make use of the S3 remote state we can use the
+To make use of the S3 remote state in another configuration, use the
[`terraform_remote_state` data source](/docs/providers/terraform/d/remote_state.html).
@@ -355,7 +355,7 @@ $ terraform apply
### Running Terraform in Amazon EC2
Teams that make extensive use of Terraform for infrastructure management
-often [run Terraform in automation](/guides/running-terraform-in-automation.html)
+often [run Terraform in automation](https://learn.hashicorp.com/terraform/development/running-terraform-in-automation)
to ensure a consistent operating environment and to limit access to the
various secrets and other sensitive information that Terraform configurations
tend to require.
diff --git a/website/docs/backends/types/swift.html.md b/website/docs/backends/types/swift.html.md
index d5db2dbba..d201ef945 100644
--- a/website/docs/backends/types/swift.html.md
+++ b/website/docs/backends/types/swift.html.md
@@ -29,7 +29,7 @@ This will create a container called `terraform-state` and an object within that
For the access credentials we recommend using a
[partial configuration](/docs/backends/config.html).
-## Example Referencing +## Data Source Configuration ```hcl data "terraform_remote_state" "foo" { diff --git a/website/docs/backends/types/terraform-enterprise.html.md b/website/docs/backends/types/terraform-enterprise.html.md index 791828087..b4b9ee364 100644 --- a/website/docs/backends/types/terraform-enterprise.html.md +++ b/website/docs/backends/types/terraform-enterprise.html.md @@ -42,7 +42,7 @@ terraform { We recommend using a [partial configuration](/docs/backends/config.html) and omitting the access token, which can be provided as an environment variable. -## Example Referencing +## Data Source Configuration ```hcl data "terraform_remote_state" "foo" { diff --git a/website/docs/cli-index.html.md b/website/docs/cli-index.html.md index dcdccc5c4..b3c8e0acf 100644 --- a/website/docs/cli-index.html.md +++ b/website/docs/cli-index.html.md @@ -30,7 +30,7 @@ _intermediate and advanced users,_ who need to find complete and detailed information quickly. - **New user?** Try the - [Getting Started guide](https://learn.hashicorp.com/terraform/getting-started/install.html) + [Getting Started guide](https://learn.hashicorp.com/terraform/getting-started/install) at [Learn Terraform](https://learn.hashicorp.com/terraform), then return here once you've used Terraform to manage some simple resources. - **Curious about Terraform?** See [Introduction to Terraform](/intro/index.html) diff --git a/website/docs/commands/0.12upgrade.html.markdown b/website/docs/commands/0.12upgrade.html.markdown index c5d292650..3783be54e 100644 --- a/website/docs/commands/0.12upgrade.html.markdown +++ b/website/docs/commands/0.12upgrade.html.markdown @@ -99,9 +99,17 @@ are not supported by the tool itself, but if you are on a Unix-style system you can achieve this using the `find` command as follows: ``` -find . -name '*.tf' -printf "%h\n" | xargs -n1 terraform 0.12upgrade -yes +find . -name '*.tf' -printf "%h\n" | uniq | xargs -n1 terraform 0.12upgrade -yes ``` +On Mac OS X, the `find` included with the system does not support the `-printf` argument. You can install GNU find using Homebrew in order to use that argument: + +``` +brew install findutils +``` +Once installed, run the above command line using `gfind` instead of `find`. + + Note that the above includes the `-yes` option to override the interactive prompt, so be sure you have a clean work tree before running it. diff --git a/website/docs/commands/apply.html.markdown b/website/docs/commands/apply.html.markdown index 11c372469..9fa019c20 100644 --- a/website/docs/commands/apply.html.markdown +++ b/website/docs/commands/apply.html.markdown @@ -20,13 +20,17 @@ By default, `apply` scans the current directory for the configuration and applies the changes appropriately. However, a path to another configuration or an execution plan can be provided. Explicit execution plans files can be used to split plan and apply into separate steps within -[automation systems](/guides/running-terraform-in-automation.html). +[automation systems](https://learn.hashicorp.com/terraform/development/running-terraform-in-automation). The command-line flags are all optional. The list of available flags are: * `-backup=path` - Path to the backup file. Defaults to `-state-out` with the ".backup" extension. Disabled by setting to "-". +* `-compact-warnings` - If Terraform produces any warnings that are not + accompanied by errors, show them in a more compact form that includes only + the summary messages. + * `-lock=true` - Lock the state file when locking is supported. 
* `-lock-timeout=0s` - Duration to retry a state lock. diff --git a/website/docs/commands/cli-config.html.markdown b/website/docs/commands/cli-config.html.markdown index 331169862..fd4e9a552 100644 --- a/website/docs/commands/cli-config.html.markdown +++ b/website/docs/commands/cli-config.html.markdown @@ -63,35 +63,86 @@ The following settings can be set in the CLI configuration file: [plugin caching](/docs/configuration/providers.html#provider-plugin-cache) and specifies, as a string, the location of the plugin cache directory. -- `credentials` — provides credentials for use with Terraform Cloud. - Terraform uses this when performing remote operations or state access with - the [remote backend](../backends/types/remote.html) and when accessing - Terraform Cloud's [private module registry.](/docs/cloud/registry/index.html) +- `credentials` - configures credentials for use with Terraform Cloud or + Terraform Enterprise. See [Credentials](#credentials) below for more + information. - This setting is a repeatable block, where the block label is a hostname - (either `app.terraform.io` or the hostname of a Terraform Enterprise instance) and - the block body contains a `token` attribute. Whenever Terraform accesses - state, modules, or remote operations from that hostname, it will - authenticate with that API token. +- `credentials_helper` - configures an external helper program for the storage + and retrieval of credentials for Terraform Cloud or Terraform Enterprise. + See [Credentials Helpers](#credentials-helpers) below for more information. - ``` hcl - credentials "app.terraform.io" { - token = "xxxxxx.atlasv1.zzzzzzzzzzzzz" - } - ``` +## Credentials - ~> **Important:** The token provided here must be a - [user token](/docs/cloud/users-teams-organizations/users.html#api-tokens) - or a - [team token](/docs/cloud/users-teams-organizations/api-tokens.html#team-api-tokens); - organization tokens cannot be used for command-line Terraform actions. +[Terraform Cloud](/docs/cloud/index.html) provides a number of remote network +services for use with Terraform, and +[Terraform Enterprise](/docs/enterprise/index.html) allows hosting those +services inside your own infrastructure. For example, these systems offer both +[remote operations](/docs/cloud/run/cli.html) and a +[private module registry](/docs/cloud/registry/index.html). - -> **Note:** The credentials hostname must match the hostname in your module - sources and/or backend configuration. If your Terraform Enterprise instance - is available at multiple hostnames, use one of them consistently. (The SaaS - version of Terraform Cloud responds to API calls at both its current - hostname, app.terraform.io, and its historical hostname, - atlas.hashicorp.com.) +When interacting with Terraform-specific network services, Terraform expects +to find API tokens in CLI configuration files in `credentials` blocks: + +```hcl +credentials "app.terraform.io" { + token = "xxxxxx.atlasv1.zzzzzzzzzzzzz" +} +``` + +You can have multiple `credentials` blocks if you regularly use services from +multiple hosts. Many users will configure only one, for either +Terraform Cloud (at `app.terraform.io`) or for their organization's own +Terraform Enterprise host. Each `credentials` block contains a `token` argument +giving the API token to use for that host. 
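+
+For example, a sketch with credentials for both Terraform Cloud and a
+hypothetical Terraform Enterprise host at `tfe.example.com`:
+
+```hcl
+credentials "app.terraform.io" {
+  token = "xxxxxx.atlasv1.zzzzzzzzzzzzz"
+}
+
+# Hypothetical Terraform Enterprise host
+credentials "tfe.example.com" {
+  token = "yyyyyy.atlasv1.zzzzzzzzzzzzz"
+}
+```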
+ +~> **Important:** If you are using Terraform Cloud or Terraform Enterprise, +the token provided must be either a +[user token](/docs/cloud/users-teams-organizations/users.html#api-tokens) +or a +[team token](/docs/cloud/users-teams-organizations/api-tokens.html#team-api-tokens); +organization tokens cannot be used for command-line Terraform actions. + +-> **Note:** The credentials hostname must match the hostname in your module +sources and/or backend configuration. If your Terraform Enterprise instance +is available at multiple hostnames, use only one of them consistently. +Terraform Cloud responds to API calls at both its current hostname +`app.terraform.io`, and its historical hostname `atlas.hashicorp.com`. + +If you are running the Terraform CLI interactively on a computer that is capable +of also running a web browser, you can optionally obtain credentials and save +them in the CLI configuration automatically using +[the `terraform login` command](./login.html). + +### Credentials Helpers + +If you would prefer not to store your API tokens directly in the CLI +configuration as described in the previous section, you can optionally instruct +Terraform to use a different credentials storage mechanism by configuring a +special kind of plugin program called a _credentials helper_. + +```hcl +credentials_helper "example" { + args = [] +} +``` + +`credentials_helper` is a configuration block that can appear at most once +in the CLI configuration. Its label (`"example"` above) is the name of the +credentials helper to use. The `args` argument is optional and allows passing +additional arguments to the helper program, for example if it needs to be +configured with the address of a remote host to access for credentials. + +A configured credentials helper will be consulted only to retrieve credentials +for hosts that are _not_ explicitly configured in a `credentials` block as +described in the previous section. +Conversely, this means you can override the credentials returned by the helper +for a specific hostname by writing a `credentials` block alongside the +`credentials_helper` block. + +Terraform does not include any credentials helpers in the main distribution. +To learn how to write and install your own credentials helpers to integrate +with existing in-house credentials management systems, see +[the guide to Credentials Helper internals](/docs/internals/credentials-helpers.html). ## Deprecated Settings diff --git a/website/docs/commands/graph.html.markdown b/website/docs/commands/graph.html.markdown index 2de101a03..56cb3fb5e 100644 --- a/website/docs/commands/graph.html.markdown +++ b/website/docs/commands/graph.html.markdown @@ -36,12 +36,12 @@ Options: * `-draw-cycles` - Highlight any cycles in the graph with colored edges. This helps when diagnosing cycle errors. -* `-module-depth=n` - Specifies the depth of modules to show in the output. - By default this is `-1`, which will expand all. - * `-type=plan` - Type of graph to output. Can be: `plan`, `plan-destroy`, `apply`, `validate`, `input`, `refresh`. +* `-module-depth=n` - (deprecated) In prior versions of Terraform, specified the + depth of modules to show in the output. 
+ ## Generating Images The output of `terraform graph` is in the DOT format, which can diff --git a/website/docs/commands/init.html.markdown b/website/docs/commands/init.html.markdown index 3a6fab09c..81067b3aa 100644 --- a/website/docs/commands/init.html.markdown +++ b/website/docs/commands/init.html.markdown @@ -163,4 +163,4 @@ other interesting features such as integration with version control hooks. There are some special concerns when running `init` in such an environment, including optionally making plugins available locally to avoid repeated re-installation. For more information, see -[`Running Terraform in Automation`](/guides/running-terraform-in-automation.html). +[`Running Terraform in Automation`](https://learn.hashicorp.com/terraform/development/running-terraform-in-automation). diff --git a/website/docs/commands/login.html.markdown b/website/docs/commands/login.html.markdown new file mode 100644 index 000000000..085781f54 --- /dev/null +++ b/website/docs/commands/login.html.markdown @@ -0,0 +1,45 @@ +--- +layout: "docs" +page_title: "Command: login" +sidebar_current: "docs-commands-login" +description: |- + The terraform login command can be used to automatically obtain and save an API token for Terraform Cloud, Terraform Enterprise, or any other host that offers Terraform services. +--- + +# Command: login + +The `terraform login` command can be used to automatically obtain and save an +API token for Terraform Cloud, Terraform Enterprise, or any other host that offers Terraform services. + +-> **Note:** This command is suitable only for use in interactive scenarios +where it is possible to launch a web browser on the same host where Terraform +is running. If you are running Terraform in an unattended automation scenario, +you can +[configure credentials manually in the CLI configuration](https://www.terraform.io/docs/commands/cli-config.html#credentials). + +## Usage + +Usage: `terraform login [hostname]` + +If you don't provide an explicit hostname, Terraform will assume you want to +log in to Terraform Cloud at `app.terraform.io`. + +## Credentials Storage + +By default, Terraform will obtain an API token and save it in plain text in a +local CLI configuration file called `credentials.tfrc.json`. When you run +`terraform login`, it will explain specifically where it intends to save +the API token and give you a chance to cancel if the current configuration is +not as desired. + +If you don't wish to store your API token in the default location, you can +optionally configure a +[credentials helper program](cli-config.html#credentials-helpers) which knows +how to store and later retrieve credentials in some other system, such as +your organization's existing secrets management system. + +## Login Server Support + +The `terraform login` command works with any server supporting the +[login protocol](/docs/internals/login-protocol.html), including Terraform Cloud +and Terraform Enterprise. diff --git a/website/docs/commands/logout.html.markdown b/website/docs/commands/logout.html.markdown new file mode 100644 index 000000000..644ff5171 --- /dev/null +++ b/website/docs/commands/logout.html.markdown @@ -0,0 +1,30 @@ +--- +layout: "docs" +page_title: "Command: logout" +sidebar_current: "docs-commands-logout" +description: |- + The terraform logout command is used to remove credentials stored by terraform login. +--- + +# Command: logout + +The `terraform logout` command is used to remove credentials stored by +`terraform login`. 
These credentials are API tokens for Terraform Cloud,
+Terraform Enterprise, or any other host that offers Terraform services.
+
+## Usage
+
+Usage: `terraform logout [hostname]`
+
+If you don't provide an explicit hostname, Terraform will assume you want to
+log out of Terraform Cloud at `app.terraform.io`.
+
+-> **Note:** The API token is only removed from local storage, not destroyed on
+the remote server, so it will remain valid until manually revoked.
+
+## Credentials Storage
+
+By default, Terraform will remove the token stored in plain text in a local CLI
+configuration file called `credentials.tfrc.json`. If you have configured a
+[credentials helper program](cli-config.html#credentials-helpers), Terraform
+will use the helper's `forget` command to remove it.
diff --git a/website/docs/commands/plan.html.markdown b/website/docs/commands/plan.html.markdown
index 838daf7a2..be5f228bc 100644
--- a/website/docs/commands/plan.html.markdown
+++ b/website/docs/commands/plan.html.markdown
@@ -21,7 +21,7 @@ will behave as expected.
The optional `-out` argument can be used to save the generated plan to a file
for later execution with `terraform apply`, which can be useful when
-[running Terraform in automation](/guides/running-terraform-in-automation.html).
+[running Terraform in automation](https://learn.hashicorp.com/terraform/development/running-terraform-in-automation).
## Usage
@@ -37,6 +37,10 @@ inspect a planfile.
The command-line flags are all optional. The list of available flags are:
+* `-compact-warnings` - If Terraform produces any warnings that are not
+  accompanied by errors, show them in a more compact form that includes only
+  the summary messages.
+
* `-destroy` - If set, generates a plan to destroy all the known resources.
* `-detailed-exitcode` - Return a detailed exit code when the command exits.
diff --git a/website/docs/commands/refresh.html.markdown b/website/docs/commands/refresh.html.markdown
index c03e68911..4700bf3e9 100644
--- a/website/docs/commands/refresh.html.markdown
+++ b/website/docs/commands/refresh.html.markdown
@@ -29,6 +29,10 @@ The command-line flags are all optional. The list of available flags are:
* `-backup=path` - Path to the backup file. Defaults to `-state-out` with
  the ".backup" extension. Disabled by setting to "-".
+* `-compact-warnings` - If Terraform produces any warnings that are not
+  accompanied by errors, show them in a more compact form that includes only
+  the summary messages.
+
* `-input=true` - Ask for input for variables if not directly set.
* `-lock=true` - Lock the state file when locking is supported.
diff --git a/website/docs/commands/state/show.html.md b/website/docs/commands/state/show.html.md
index 237784787..dcc86974f 100644
--- a/website/docs/commands/state/show.html.md
+++ b/website/docs/commands/state/show.html.md
@@ -19,10 +19,6 @@ Usage: `terraform state show [options] ADDRESS`
The command will show the attributes of a single resource in the
state file that matches the given address.
-The attributes are listed in alphabetical order (with the except of "id"
-which is always at the top). They are outputted in a way that is easy
-to parse on the command-line.
-
This command requires an address that points to a single resource in the
state. Addresses are
in [resource addressing format](/docs/commands/state/addressing.html).
@@ -32,6 +28,11 @@ The command-line flags are all optional. The list of available flags are:
* `-state=path` - Path to the state file. Defaults to "terraform.tfstate".
Ignored when [remote state](/docs/state/remote.html) is used.
+The output of `terraform state show` is intended for human consumption, not
+programmatic consumption. To extract state data for use in other software, use
+[`terraform show -json`](../show.html#json-output) and decode the result
+using the documented structure.
+
## Example: Show a Resource
The example below shows a `packet_device` resource named `worker`:
diff --git a/website/docs/commands/taint.html.markdown b/website/docs/commands/taint.html.markdown
index 8108ff433..a948fd1f7 100644
--- a/website/docs/commands/taint.html.markdown
+++ b/website/docs/commands/taint.html.markdown
@@ -36,12 +36,13 @@ the case.
Usage: `terraform taint [options] address`
The `address` argument is the address of the resource to mark as tainted.
-The address is in the usual resource address syntax, as shown in
-the output from other commands, such as:
+The address is in
+[the resource address syntax](/docs/internals/resource-addressing.html),
+as shown in the output from other commands, such as:
 * `aws_instance.foo`
 * `aws_instance.bar[1]`
- * `aws_instance.baz[\"key\"]` (quotes in resource addresses must be escaped on the command line, so that they are not interpreted by your shell)
+ * `aws_instance.baz[\"key\"]` (quotes in resource addresses must be escaped on the command line, so that they are not interpreted by your shell)
 * `module.foo.module.bar.aws_instance.qux`
The command-line flags are all optional. The list of available flags are:
@@ -70,14 +71,36 @@ This example will taint a single resource:
```
$ terraform taint aws_security_group.allow_all
-The resource aws_security_group.allow_all in the module root has been marked as tainted!
+The resource aws_security_group.allow_all in the module root has been marked as tainted.
```
+## Example: Tainting a Single Resource Created with `for_each`
+
+It is necessary to wrap the resource address in single quotes and escape the inner quotes.
+This example will taint a single resource created with `for_each`:
+
+```
+$ terraform taint 'module.route_tables.azurerm_route_table.rt[\"DefaultSubnet\"]'
+The resource module.route_tables.azurerm_route_table.rt["DefaultSubnet"] in the module root has been marked as tainted.
+```
+
+
## Example: Tainting a Resource within a Module
This example will only taint a resource within a module:
```
$ terraform taint "module.couchbase.aws_instance.cb_node[9]"
-Resource instance module.couchbase.aws_instance.cb_node[9] has been marked as tainted!
+Resource instance module.couchbase.aws_instance.cb_node[9] has been marked as tainted.
```
+
+Although we recommend that most configurations use only one level of nesting
+and employ [module composition](/docs/modules/composition.html), it's possible
+to have multiple levels of nested modules. In that case the resource instance
+address must include all of the steps to the target instance, as in the
+following example:
+
+```
+$ terraform taint "module.child.module.grandchild.aws_instance.example[2]"
+Resource instance module.child.module.grandchild.aws_instance.example[2] has been marked as tainted.
```
diff --git a/website/docs/configuration-0-11/interpolation.html.md b/website/docs/configuration-0-11/interpolation.html.md
index 3bde9d080..738783625 100644
--- a/website/docs/configuration-0-11/interpolation.html.md
+++ b/website/docs/configuration-0-11/interpolation.html.md
@@ -39,27 +39,27 @@ Use the `var.` prefix followed by the variable name. For example,
#### User map variables
-The syntax is `var.MAP["KEY"]`.
For example, `${var.amis["us-east-1"]}`
+The syntax is `var.<MAP>["<KEY>"]`. For example, `${var.amis["us-east-1"]}`
would get the value of the `us-east-1` key within the `amis` map
variable.

#### User list variables

-The syntax is `"${var.LIST}"`. For example, `"${var.subnets}"`
+The syntax is `"${var.<LIST>}"`. For example, `"${var.subnets}"`
would get the value of the `subnets` list, as a list. You can also
return list elements by index: `${var.subnets[idx]}`.

#### Attributes of your own resource

-The syntax is `self.ATTRIBUTE`. For example `${self.private_ip}`
+The syntax is `self.<ATTRIBUTE>`. For example `${self.private_ip}`
will interpolate that resource's private IP address.

--> **Note**: The `self.ATTRIBUTE` syntax is only allowed and valid within
+-> **Note**: The `self.<ATTRIBUTE>` syntax is only allowed and valid within
provisioners.

#### Attributes of other resources

-The syntax is `TYPE.NAME.ATTRIBUTE`. For example,
+The syntax is `<TYPE>.<NAME>.<ATTRIBUTE>`. For example,
`${aws_instance.web.id}` will interpolate the ID attribute from the
`aws_instance` resource named `web`. If the resource has a `count`
attribute set, you can access individual attributes with a zero-based
@@ -68,27 +68,27 @@ syntax to get a list of all the attributes: `${aws_instance.web.*.id}`.

#### Attributes of a data source

-The syntax is `data.TYPE.NAME.ATTRIBUTE`. For example. `${data.aws_ami.ubuntu.id}` will interpolate the `id` attribute from the `aws_ami` [data source](./data-sources.html) named `ubuntu`. If the data source has a `count`
+The syntax is `data.<TYPE>.<NAME>.<ATTRIBUTE>`. For example, `${data.aws_ami.ubuntu.id}` will interpolate the `id` attribute from the `aws_ami` [data source](./data-sources.html) named `ubuntu`. If the data source has a `count`
attribute set, you can access individual attributes with a zero-based
index, such as `${data.aws_subnet.example.0.cidr_block}`. You can also use the splat
syntax to get a list of all the attributes: `${data.aws_subnet.example.*.cidr_block}`.

#### Outputs from a module

-The syntax is `MODULE.NAME.OUTPUT`. For example `${module.foo.bar}` will
+The syntax is `module.<NAME>.<OUTPUT>`. For example `${module.foo.bar}` will
interpolate the `bar` output from the `foo`
[module](/docs/modules/index.html).

#### Count information

-The syntax is `count.FIELD`. For example, `${count.index}` will
+The syntax is `count.index`. For example, `${count.index}` will
interpolate the current index in a multi-count resource. For more
information on `count`, see the [resource configuration
page](./resources.html).

#### Path information

-The syntax is `path.TYPE`. TYPE can be `cwd`, `module`, or `root`.
+The syntax is `path.<TYPE>`. TYPE can be `cwd`, `module`, or `root`.
`cwd` will interpolate the current working directory. `module` will
interpolate the path to the current module. `root` will interpolate the
path of the root module. In general, you probably want the
@@ -96,7 +96,7 @@ path of the root module. In general, you probably want the

#### Terraform meta information

-The syntax is `terraform.FIELD`. This variable type contains metadata about
+The syntax is `terraform.<FIELD>`. This variable type contains metadata about
the currently executing Terraform run. FIELD can currently only be `env` to
reference the currently active [state environment](/docs/state/environments.html).
@@ -146,7 +146,7 @@ Terraform ships with built-in functions. Functions are called with the
syntax `name(arg, arg2, ...)`. For example, to read a file:
`${file("path.txt")}`.
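+
+For instance, a function call can appear anywhere interpolation is allowed,
+such as in a resource argument. The following is only an illustrative sketch;
+the resource type and file path are examples, not part of this page's
+reference material:
+
+```hcl
+resource "aws_instance" "web" {
+  # Read a startup script from a file alongside the configuration.
+  user_data = "${file("scripts/init.sh")}"
+}
+```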
-~> **NOTE**: Proper escaping is required for JSON field values containing quotes +~> **Note**: Proper escaping is required for JSON field values containing quotes (`"`) such as `environment` values. If directly setting the JSON, they should be escaped as `\"` in the JSON, e.g. `"value": "I \"love\" escaped quotes"`. If using a Terraform variable value, they should be escaped as `\\\"` in the diff --git a/website/docs/configuration/expressions.html.md b/website/docs/configuration/expressions.html.md index 7dfd45487..040e2f732 100644 --- a/website/docs/configuration/expressions.html.md +++ b/website/docs/configuration/expressions.html.md @@ -22,8 +22,8 @@ and a number of built-in functions. Expressions can be used in a number of places in the Terraform language, but some contexts limit which expression constructs are allowed, such as requiring a literal value of a particular type or forbidding -references to resource attributes. Each language feature's documentation -describes any restrictions it places on expressions. +[references to resource attributes](/docs/configuration/expressions.html#references-to-resource-attributes). +Each language feature's documentation describes any restrictions it places on expressions. You can experiment with the behavior of Terraform's expressions from the Terraform expression console, by running @@ -171,6 +171,9 @@ The following named values are available: If the resource has the `count` argument set, the value of this expression is a _list_ of objects representing its instances. + If the resource has the `for_each` argument set, the value of this expression + is a _map_ of objects representing its instances. + For more information, see [references to resource attributes](#references-to-resource-attributes) below. * `var.` is the value of the @@ -183,7 +186,8 @@ The following named values are available: * `data..` is an object representing a [data resource](./data-sources.html) of the given data source type and name. If the resource has the `count` argument set, the value - is a list of objects representing its instances. + is a list of objects representing its instances. If the resource has the `for_each` + argument set, the value is a map of objects representing its instances. * `path.module` is the filesystem path of the module where the expression is placed. * `path.root` is the filesystem path of the root module of the configuration. @@ -291,11 +295,11 @@ for use in references, as follows: To obtain a map of values of a particular argument for _labelled_ nested block types, use a [`for` expression](#for-expressions): - `[for k, device in aws_instance.example.device : k => device.size]`. + `{for k, device in aws_instance.example.device : k => device.size}`. -When a particular resource has the special +When a resource has the [`count`](https://www.terraform.io/docs/configuration/resources.html#count-multiple-resource-instances-by-count) -argument set, the resource itself becomes a list of instance objects rather than +argument set, the resource itself becomes a _list_ of instance objects rather than a single object. In that case, access the attributes of the instances using either [splat expressions](#splat-expressions) or index syntax: @@ -303,6 +307,33 @@ either [splat expressions](#splat-expressions) or index syntax: instances. * `aws_instance.example[0].id` returns just the id of the first instance. 
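+
+For example, with a resource configured like the following (the resource type
+and arguments here are illustrative only), both of the access forms above are
+available:
+
+```hcl
+resource "aws_instance" "example" {
+  count = 3
+
+  ami           = "ami-a1b2c3d4"
+  instance_type = "t2.micro"
+}
+
+output "first_instance_id" {
+  value = aws_instance.example[0].id
+}
+
+output "all_instance_ids" {
+  value = aws_instance.example[*].id
+}
+```
+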
+When a resource has the +[`for_each`](/docs/configuration/resources.html#for_each-multiple-resource-instances-defined-by-a-map-or-set-of-strings) +argument set, the resource itself becomes a _map_ of instance objects rather than +a single object, and attributes of instances must be specified by key, or can +be accessed using a [`for` expression](#for-expressions). + +* `aws_instance.example["a"].id` returns the id of the "a"-keyed resource. +* `[for value in aws_instance.example: value.id]` returns a list of all of the ids + of each of the instances. + +Note that unlike `count`, splat expressions are _not_ directly applicable to resources managed with `for_each`, as splat expressions are for lists only. You may apply a splat expression to values in a map like so: + +* `values(aws_instance.example)[*].id` + +### Local Named Values + +Within the bodies of certain expressions, or in some other specific contexts, +there are other named values available beyond the global values listed above. +(For example, the body of a resource block where `count` is set can use a +special `count.index` value.) These local names are described in the +documentation for the specific contexts where they appear. + +-> **Note:** Local named values are often referred to as _variables_ or +_temporary variables_ in their documentation. These are not [input +variables](./variables.html); they are just arbitrary names +that temporarily represent a value. + ### Values Not Yet Known When Terraform is planning a set of changes that will apply your configuration, @@ -581,11 +612,14 @@ The above expression is equivalent to the following `for` expression: [for o in var.list : o.interfaces[0].name] ``` -Splat expressions also have another useful effect: if they are applied to -a value that is _not_ a list or tuple then the value is automatically wrapped -in a single-element list before processing. That is, `var.single_object[*].id` -is equivalent to `[var.single_object][*].id`, or effectively -`[var.single_object.id]`. This behavior is not interesting in most cases, +Splat expressions are for lists only (and thus cannot be used [to reference resources +created with `for_each`](/docs/configuration/resources.html#referring-to-instances-1), +which are represented as maps in Terraform). However, if a splat expression is applied +to a value that is _not_ a list or tuple then the value is automatically wrapped in +a single-element list before processing. + +For example, `var.single_object[*].id` is equivalent to `[var.single_object][*].id`, +or effectively `[var.single_object.id]`. This behavior is not interesting in most cases, but it is particularly useful when referring to resources that may or may not have `count` set, and thus may or may not produce a tuple value: @@ -630,29 +664,31 @@ form. 
This covers many uses, but some resource types include repeatable _nested blocks_ in their arguments, which do not accept expressions: ```hcl -resource "aws_security_group" "example" { - name = "example" # can use expressions here +resource "aws_elastic_beanstalk_environment" "tfenvtest" { + name = "tf-test-name" # can use expressions here - ingress { - # but the "ingress" block is always a literal block + setting { + # but the "setting" block is always a literal block } } ``` -You can dynamically construct repeatable nested blocks like `ingress` using a +You can dynamically construct repeatable nested blocks like `setting` using a special `dynamic` block type, which is supported inside `resource`, `data`, `provider`, and `provisioner` blocks: ```hcl -resource "aws_security_group" "example" { - name = "example" # can use expressions here +resource "aws_elastic_beanstalk_environment" "tfenvtest" { + name = "tf-test-name" + application = "${aws_elastic_beanstalk_application.tftest.name}" + solution_stack_name = "64bit Amazon Linux 2018.03 v2.11.4 running Go 1.12.6" - dynamic "ingress" { - for_each = var.service_ports + dynamic "setting" { + for_each = var.settings content { - from_port = ingress.value - to_port = ingress.value - protocol = "tcp" + namespace = setting.value["namespace"] + name = setting.value["name"] + value = setting.value["value"] } } } @@ -662,12 +698,12 @@ A `dynamic` block acts much like a `for` expression, but produces nested blocks instead of a complex typed value. It iterates over a given complex value, and generates a nested block for each element of that complex value. -- The label of the dynamic block (`"ingress"` in the example above) specifies +- The label of the dynamic block (`"setting"` in the example above) specifies what kind of nested block to generate. - The `for_each` argument provides the complex value to iterate over. - The `iterator` argument (optional) sets the name of a temporary variable that represents the current element of the complex value. If omitted, the name - of the variable defaults to the label of the `dynamic` block (`"ingress"` in + of the variable defaults to the label of the `dynamic` block (`"setting"` in the example above). - The `labels` argument (optional) is a list of strings that specifies the block labels, in order, to use for each generated block. You can use the temporary @@ -679,7 +715,7 @@ Since the `for_each` argument accepts any collection or structural value, you can use a `for` expression or splat expression to transform an existing collection. -The iterator object (`ingress` in the example above) has two attributes: +The iterator object (`setting` in the example above) has two attributes: * `key` is the map key or list element index for the current element. If the `for_each` expression produces a _set_ value then `key` is identical to @@ -692,9 +728,15 @@ to generate meta-argument blocks such as `lifecycle` and `provisioner` blocks, since Terraform must process these before it is safe to evaluate expressions. -If you need to iterate over combinations of values from multiple collections, -use [`setproduct`](./functions/setproduct.html) to create a single collection -containing all of the combinations. +The `for_each` value must be a map or set with one element per desired +nested block. If you need to declare resource instances based on a nested +data structure or combinations of elements from multiple data structures you +can use Terraform expressions and functions to derive a suitable value. 
+For some common examples of such situations, see the +[`flatten`](/docs/configuration/functions/flatten.html) +and +[`setproduct`](/docs/configuration/functions/setproduct.html) +functions. ### Best Practices for `dynamic` Blocks diff --git a/website/docs/configuration/functions/base64sha256.html.md b/website/docs/configuration/functions/base64sha256.html.md index 343b6d641..381f410be 100644 --- a/website/docs/configuration/functions/base64sha256.html.md +++ b/website/docs/configuration/functions/base64sha256.html.md @@ -14,7 +14,7 @@ earlier, see [0.11 Configuration Language: Interpolation Syntax](../../configuration-0-11/interpolation.html). `base64sha256` computes the SHA256 hash of a given string and encodes it with -Base64. This is not equivalent to base64encode(sha256512("test")) since sha512() +Base64. This is not equivalent to `base64encode(sha256("test"))` since `sha256()` returns hexadecimal representation. The given string is first encoded as UTF-8 and then the SHA256 algorithm is applied diff --git a/website/docs/configuration/functions/base64sha512.html.md b/website/docs/configuration/functions/base64sha512.html.md index cfca09ca5..b910d78f3 100644 --- a/website/docs/configuration/functions/base64sha512.html.md +++ b/website/docs/configuration/functions/base64sha512.html.md @@ -14,7 +14,7 @@ earlier, see [0.11 Configuration Language: Interpolation Syntax](../../configuration-0-11/interpolation.html). `base64sha512` computes the SHA512 hash of a given string and encodes it with -Base64. This is not equivalent to base64encode(sha512("test")) since sha512() +Base64. This is not equivalent to `base64encode(sha512("test"))` since `sha512()` returns hexadecimal representation. The given string is first encoded as UTF-8 and then the SHA512 algorithm is applied diff --git a/website/docs/configuration/functions/can.html.md b/website/docs/configuration/functions/can.html.md new file mode 100644 index 000000000..c957020cb --- /dev/null +++ b/website/docs/configuration/functions/can.html.md @@ -0,0 +1,80 @@ +--- +layout: "functions" +page_title: "can - Functions - Configuration Language" +sidebar_current: "docs-funcs-conversion-can" +description: |- + The can function tries to evaluate an expression given as an argument and + indicates whether the evaluation succeeded. +--- + +# `can` Function + +-> **Note:** This page is about Terraform 0.12 and later. For Terraform 0.11 and +earlier, see +[0.11 Configuration Language: Interpolation Syntax](../../configuration-0-11/interpolation.html). + +`can` evaluates the given expression and returns a boolean value indicating +whether the expression produced a result without any errors. + +This is a special function that is able to catch errors produced when evaluating +its argument. For most situations where you could use `can` it's better to use +[`try`](./try.html) instead, because it allows for more concise definition of +fallback values for failing expressions. + +The primary purpose of `can` is to turn an error condition into a boolean +validation result when writing +[custom variable validation rules](../variables.html#custom-validation-rules). +For example: + +``` +variable "timestamp" { + type = string + + validation { # NOTE: custom validation is currently an opt-in experiment (see link above) + # formatdate fails if the second argument is not a valid timestamp + condition = can(formatdate("", var.timestamp)) + error_message = "The timestamp argument requires a valid RFC 3339 timestamp." 
+  }
+}
+```
+
+The `can` function can only catch and handle _dynamic_ errors resulting from
+access to data that isn't known until runtime. It will not catch errors
+relating to expressions that can be proven to be invalid for any input, such
+as a malformed resource reference.
+
+~> **Warning:** The `can` function is intended only for simple tests in
+variable validation rules. Although it can technically accept any sort of
+expression and be used elsewhere in the configuration, we recommend against
+using it in other contexts. For error handling elsewhere in the configuration,
+prefer to use [`try`](./try.html).
+
+## Examples
+
+```
+> local.foo
+{
+  "bar" = "baz"
+}
+> can(local.foo.bar)
+true
+> can(local.foo.boop)
+false
+```
+
+The `can` function will _not_ catch errors relating to constructs that are
+provably invalid even before dynamic expression evaluation, such as a malformed
+reference or a reference to a top-level object that has not been declared:
+
+```
+> can(local.nonexist)
+
+Error: Reference to undeclared local value
+
+A local value with the name "nonexist" has not been declared.
+```
+
+## Related Functions
+
+* [`try`](./try.html), which tries evaluating a sequence of expressions and
+  returns the result of the first one that succeeds.
diff --git a/website/docs/configuration/functions/cidrsubnets.html.md b/website/docs/configuration/functions/cidrsubnets.html.md
index e9156190f..7308cf5de 100644
--- a/website/docs/configuration/functions/cidrsubnets.html.md
+++ b/website/docs/configuration/functions/cidrsubnets.html.md
@@ -7,17 +7,17 @@ description: |-
  ranges within a particular CIDR prefix.
---

-# `cidrsubnet` Function
+# `cidrsubnets` Function

-> **Note:** This page is about Terraform 0.12 and later. For Terraform 0.11 and
earlier, see
[0.11 Configuration Language: Interpolation Syntax](../../configuration-0-11/interpolation.html).

-`cidrsubnet` calculates a sequence of consecutive IP address ranges within
+`cidrsubnets` calculates a sequence of consecutive IP address ranges within
a particular CIDR prefix.

```hcl
-cidrsubnet(prefix, newbits...)
+cidrsubnets(prefix, newbits...)
```

`prefix` must be given in CIDR notation, as defined in
diff --git a/website/docs/configuration/functions/flatten.html.md b/website/docs/configuration/functions/flatten.html.md
index a6899fabd..0c290aa95 100644
--- a/website/docs/configuration/functions/flatten.html.md
+++ b/website/docs/configuration/functions/flatten.html.md
@@ -31,3 +31,85 @@ flattened recursively:
```

Indirectly-nested lists, such as those in maps, are _not_ flattened.
+
+## Flattening nested structures for `for_each`
+
+The
+[resource `for_each`](/docs/configuration/resources.html#for_each-multiple-resource-instances-defined-by-a-map-or-set-of-strings)
+and
+[`dynamic` block](/docs/configuration/expressions.html#dynamic-blocks)
+language features both require a collection value that has one element for
+each repetition.
+
+Sometimes your input data structure isn't naturally in a suitable shape for
+use in a `for_each` argument, and `flatten` can be a useful helper function
+when reducing a nested data structure into a flat one.
+
+For example, consider a module that declares a variable like the following:
+
+```hcl
+variable "networks" {
+  type = map(object({
+    cidr_block = string
+    subnets = map(object({
+      cidr_block = string
+    }))
+  }))
+}
+```
+
+The above is a reasonable way to model objects that naturally form a tree,
+such as top-level networks and their subnets.
The repetition for the top-level
+networks can use this variable directly, because it's already in a form
+where the resulting instances match one-to-one with map elements:
+
+```hcl
+resource "aws_vpc" "example" {
+  for_each = var.networks
+
+  cidr_block = each.value.cidr_block
+}
+```
+
+However, in order to declare all of the _subnets_ with a single `resource`
+block, we must first flatten the structure to produce a collection where each
+top-level element represents a single subnet:
+
+```hcl
+locals {
+  # flatten ensures that this local value is a flat list of objects, rather
+  # than a list of lists of objects.
+  network_subnets = flatten([
+    for network_key, network in var.networks : [
+      for subnet_key, subnet in network.subnets : {
+        network_key = network_key
+        subnet_key  = subnet_key
+        network_id  = aws_vpc.example[network_key].id
+        cidr_block  = subnet.cidr_block
+      }
+    ]
+  ])
+}
+
+resource "aws_subnet" "example" {
+  # local.network_subnets is a list, so we must now project it into a map
+  # where each key is unique. We'll combine the network and subnet keys to
+  # produce a single unique key per instance.
+  for_each = {
+    for subnet in local.network_subnets : "${subnet.network_key}.${subnet.subnet_key}" => subnet
+  }
+
+  vpc_id            = each.value.network_id
+  availability_zone = each.value.subnet_key
+  cidr_block        = each.value.cidr_block
+}
+```
+
+The above results in one subnet instance per subnet object, while retaining
+the associations between the subnets and their containing networks.
+
+## Related Functions
+
+* [`setproduct`](./setproduct.html) finds all of the combinations of multiple
+  lists or sets of values, which can also be useful when preparing collections
+  for use with `for_each` constructs.
diff --git a/website/docs/configuration/functions/merge.html.md b/website/docs/configuration/functions/merge.html.md
index 162e5a4d2..edcc0a18a 100644
--- a/website/docs/configuration/functions/merge.html.md
+++ b/website/docs/configuration/functions/merge.html.md
@@ -3,8 +3,9 @@ layout: "functions"
page_title: "merge - Functions - Configuration Language"
sidebar_current: "docs-funcs-collection-merge"
description: |-
-  The merge function takes an arbitrary number of maps and returns a single
-  map after merging the keys from each argument.
+  The merge function takes an arbitrary number of maps or objects, and returns a
+  single map or object that contains a merged set of elements from all
+  arguments.
---

# `merge` Function
@@ -13,19 +14,33 @@ description: |-
earlier, see
[0.11 Configuration Language: Interpolation Syntax](../../configuration-0-11/interpolation.html).

-`merge` takes an arbitrary number of maps and returns a single map that
-contains a merged set of elements from all of the maps.
+`merge` takes an arbitrary number of maps or objects, and returns a single map
+or object that contains a merged set of elements from all arguments.

-If more than one given map defines the same key then the one that is later
-in the argument sequence takes precedence.
+If more than one given map or object defines the same key or attribute, then
+the one that is later in the argument sequence takes precedence. If the
+argument types do not match, the resulting type will be an object matching the
+type structure of the attributes after the merging rules have been applied.
## Examples

```
-> merge({"a"="b", "c"="d"}, {"e"="f", "c"="z"})
+> merge({a="b", c="d"}, {e="f", c="z"})
{
  "a" = "b"
  "c" = "z"
  "e" = "f"
}
```
+
+```
+> merge({a="b"}, {a=[1,2], c="z"}, {d=3})
+{
+  "a" = [
+    1,
+    2,
+  ]
+  "c" = "z"
+  "d" = 3
+}
+```
diff --git a/website/docs/configuration/functions/range.html.md b/website/docs/configuration/functions/range.html.md
index daaab05bf..21bf4a3b4 100644
--- a/website/docs/configuration/functions/range.html.md
+++ b/website/docs/configuration/functions/range.html.md
@@ -36,7 +36,7 @@ The sequence-building algorithm follows the following pseudocode:

```
let num = start
-while num <= limit: (or, for negative step, num >= limit)
+while num < limit: (or, for negative step, num > limit)
  append num to the sequence
  num = num + step
return the sequence
diff --git a/website/docs/configuration/functions/setintersection.html.md b/website/docs/configuration/functions/setintersection.html.md
index ef48dc036..6444aeb8b 100644
--- a/website/docs/configuration/functions/setintersection.html.md
+++ b/website/docs/configuration/functions/setintersection.html.md
@@ -40,5 +40,6 @@ the ordering of the given elements is not preserved.
  a given element value.
* [`setproduct`](./setproduct.html) computes the _Cartesian product_ of multiple
  sets.
+* [`setsubtract`](./setsubtract.html) computes the _relative complement_ of two sets.
* [`setunion`](./setunion.html) computes the _union_ of multiple sets.
diff --git a/website/docs/configuration/functions/setproduct.html.md b/website/docs/configuration/functions/setproduct.html.md
index dd5fdd22d..6b9f77e5b 100644
--- a/website/docs/configuration/functions/setproduct.html.md
+++ b/website/docs/configuration/functions/setproduct.html.md
@@ -118,11 +118,112 @@ elements all have a consistent type:
]
```

+## Finding combinations for `for_each`
+
+The
+[resource `for_each`](/docs/configuration/resources.html#for_each-multiple-resource-instances-defined-by-a-map-or-set-of-strings)
+and
+[`dynamic` block](/docs/configuration/expressions.html#dynamic-blocks)
+language features both require a collection value that has one element for
+each repetition.
+
+Sometimes your input data comes in separate values that cannot be directly
+used in a `for_each` argument, and `setproduct` can be a useful helper function
+for the situation where you want to find all unique combinations of elements in
+a number of different collections.
+
+For example, consider a module that declares variables like the following:
+
+```hcl
+variable "networks" {
+  type = map(object({
+    base_cidr_block = string
+  }))
+}
+
+variable "subnets" {
+  type = map(object({
+    number = number
+  }))
+}
+```
+
+If the goal is to create each of the defined subnets per each of the defined
+networks, creating the top-level networks can directly use `var.networks`
+because it's already in a form where the resulting instances match one-to-one
+with map elements:
+
+```hcl
+resource "aws_vpc" "example" {
+  for_each = var.networks
+
+  cidr_block = each.value.base_cidr_block
+}
+```
+
+However, in order to declare all of the _subnets_ with a single `resource`
+block, we must first produce a collection whose elements represent all of
+the combinations of networks and subnets, so that each element itself
+represents a subnet:
+
+```hcl
+locals {
+  # setproduct works with sets and lists, but our variables are both maps
+  # so we'll need to convert them first.
+  networks = [
+    for key, network in var.networks : {
+      key        = key
+      cidr_block = network.base_cidr_block
+    }
+  ]
+  subnets = [
+    for key, subnet in var.subnets : {
+      key    = key
+      number = subnet.number
+    }
+  ]
+
+  network_subnets = [
+    # in pair, element zero is a network and element one is a subnet,
+    # in all unique combinations.
+    for pair in setproduct(local.networks, local.subnets) : {
+      network_key = pair[0].key
+      subnet_key  = pair[1].key
+      network_id  = aws_vpc.example[pair[0].key].id
+
+      # The cidr_block is derived from the corresponding network. See the
+      # cidrsubnet function for more information on how this calculation works.
+      cidr_block = cidrsubnet(pair[0].cidr_block, 4, pair[1].number)
+    }
+  ]
+}
+
+resource "aws_subnet" "example" {
+  # local.network_subnets is a list, so we must now project it into a map
+  # where each key is unique. We'll combine the network and subnet keys to
+  # produce a single unique key per instance.
+  for_each = {
+    for subnet in local.network_subnets : "${subnet.network_key}.${subnet.subnet_key}" => subnet
+  }
+
+  vpc_id            = each.value.network_id
+  availability_zone = each.value.subnet_key
+  cidr_block        = each.value.cidr_block
+}
+```
+
+The above results in one subnet instance per combination of network and subnet
+elements in the input variables.
+
## Related Functions

* [`contains`](./contains.html) tests whether a given list or set contains
  a given element value.
+* [`flatten`](./flatten.html) is useful for flattening hierarchical data
+  into a single list, for situations where the relationships between two
+  object types are defined explicitly.
* [`setintersection`](./setintersection.html) computes the _intersection_ of
  multiple sets.
+* [`setsubtract`](./setsubtract.html) computes the _relative complement_ of two sets.
* [`setunion`](./setunion.html) computes the _union_ of multiple sets.
diff --git a/website/docs/configuration/functions/setsubtract.html.md b/website/docs/configuration/functions/setsubtract.html.md
new file mode 100644
index 000000000..0bf3b7acc
--- /dev/null
+++ b/website/docs/configuration/functions/setsubtract.html.md
@@ -0,0 +1,49 @@
+---
+layout: "functions"
+page_title: "setsubtract - Functions - Configuration Language"
+sidebar_current: "docs-funcs-collection-setsubtract"
+description: |-
+  The setsubtract function returns a new set containing the elements
+  from the first set that are not present in the second set
+---
+
+# `setsubtract` Function
+
+-> **Note:** This page is about Terraform 0.12 and later. For Terraform 0.11 and
+earlier, see
+[0.11 Configuration Language: Interpolation Syntax](../../configuration-0-11/interpolation.html).
+
+The `setsubtract` function returns a new set containing the elements from the first set that are not present in the second set. In other words, it computes the
+[relative complement](https://en.wikipedia.org/wiki/Complement_(set_theory)#Relative_complement) of the second set in the first set.
+
+```hcl
+setsubtract(a, b)
+```
+
+## Examples
+
+```
+> setsubtract(["a", "b", "c"], ["a", "c"])
+[
+  "b",
+]
+```
+
+### Set Difference (Symmetric Difference)
+
+```
+> setunion(setsubtract(["a", "b", "c"], ["a", "c", "d"]), setsubtract(["a", "c", "d"], ["a", "b", "c"]))
+[
+  "b",
+  "d",
+]
+```
+
+## Related Functions
+
+* [`setintersection`](./setintersection.html) computes the _intersection_ of multiple sets.
+* [`setproduct`](./setproduct.html) computes the _Cartesian product_ of multiple
+  sets.
+* [`setunion`](./setunion.html) computes the _union_ of
+  multiple sets.
diff --git a/website/docs/configuration/functions/setunion.html.md b/website/docs/configuration/functions/setunion.html.md index 77e3d92d9..41103e588 100644 --- a/website/docs/configuration/functions/setunion.html.md +++ b/website/docs/configuration/functions/setunion.html.md @@ -45,3 +45,4 @@ the ordering of the given elements is not preserved. multiple sets. * [`setproduct`](./setproduct.html) computes the _Cartesian product_ of multiple sets. +* [`setsubtract`](./setsubtract.html) computes the _relative complement_ of two sets diff --git a/website/docs/configuration/functions/templatefile.html.md b/website/docs/configuration/functions/templatefile.html.md index cfe5e347f..55756bed9 100644 --- a/website/docs/configuration/functions/templatefile.html.md +++ b/website/docs/configuration/functions/templatefile.html.md @@ -29,7 +29,9 @@ into a separate file for readability. The "vars" argument must be a map. Within the template file, each of the keys in the map is available as a variable for interpolation. The template may also use any other function available in the Terraform language, except that -recursive calls to `templatefile` are not permitted. +recursive calls to `templatefile` are not permitted. Variable names must +each start with a letter, followed by zero or more letters, digits, or +underscores. Strings in the Terraform language are sequences of Unicode characters, so this function will interpret the file contents as UTF-8 encoded text and @@ -64,6 +66,59 @@ backend 10.0.0.2:8080 ``` +### Generating JSON or YAML from a template + +If the string you want to generate will be in JSON or YAML syntax, it's +often tricky and tedious to write a template that will generate valid JSON or +YAML that will be interpreted correctly when using lots of individual +interpolation sequences and directives. + +Instead, you can write a template that consists only of a single interpolated +call to either [`jsonencode`](./jsonencode.html) or +[`yamlencode`](./yamlencode.html), specifying the value to encode using +[normal Terraform expression syntax](/docs/configuration/expressions.html) +as in the following examples: + +``` +${jsonencode({ + "backends": [for addr in ip_addrs : "${addr}:${port}"], +})} +``` + +``` +${yamlencode({ + "backends": [for addr in ip_addrs : "${addr}:${port}"], +})} +``` + +Given the same input as the `backends.tmpl` example in the previous section, +this will produce a valid JSON or YAML representation of the given data +structure, without the need to manually handle escaping or delimiters. +In the latest examples above, the repetition based on elements of `ip_addrs` is +achieved by using a +[`for` expression](/docs/configuration/expressions.html#for-expressions) +rather than by using +[template directives](/docs/configuration/expressions.html#directives). + +```json +{"backends":["10.0.0.1:8080","10.0.0.2:8080"]} +``` + +If the resulting template is small, you can choose instead to write +`jsonencode` or `yamlencode` calls inline in your main configuration files, and +avoid creating separate template files at all: + +```hcl +locals { + backend_config_json = jsonencode({ + "backends": [for addr in ip_addrs : "${addr}:${port}"], + }) +} +``` + +For more information, see the main documentation for +[`jsonencode`](./jsonencode.html) and [`yamlencode`](./yamlencode.html). 
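+
+As a sketch of how such a template could then be rendered from configuration
+(the template filename and the values passed here are illustrative only):
+
+```hcl
+output "backend_config_rendered" {
+  # Render the jsonencode-based template shown above with example values.
+  value = templatefile("${path.module}/backends.json.tmpl", {
+    ip_addrs = ["10.0.0.1", "10.0.0.2"]
+    port     = 8080
+  })
+}
+```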
+ ## Related Functions * [`file`](./file.html) reads a file from disk and returns its literal contents diff --git a/website/docs/configuration/functions/trim.html.md b/website/docs/configuration/functions/trim.html.md new file mode 100644 index 000000000..f6402a83c --- /dev/null +++ b/website/docs/configuration/functions/trim.html.md @@ -0,0 +1,31 @@ +--- +layout: "functions" +page_title: "trim - Functions - Configuration Language" +sidebar_current: "docs-funcs-string-trim" +description: |- + The trim function removes the specified characters from the start and end of + a given string. +--- + +# `trim` Function + +-> **Note:** This page is about Terraform 0.12 and later. For Terraform 0.11 and +earlier, see +[0.11 Configuration Language: Interpolation Syntax](../../configuration-0-11/interpolation.html). + +`trim` removes the specified characters from the start and end of the given +string. + +## Examples + +``` +> trim("?!hello?!", "!?") +hello +``` + +## Related Functions + +* [`trimprefix`](./trimprefix.html) removes a word from the start of a string. +* [`trimsuffix`](./trimsuffix.html) removes a word from the end of a string. +* [`trimspace`](./trimspace.html) removes all types of whitespace from + both the start and the end of a string. diff --git a/website/docs/configuration/functions/trimprefix.html.md b/website/docs/configuration/functions/trimprefix.html.md new file mode 100644 index 000000000..f2198834f --- /dev/null +++ b/website/docs/configuration/functions/trimprefix.html.md @@ -0,0 +1,30 @@ +--- +layout: "functions" +page_title: "trimprefix - Functions - Configuration Language" +sidebar_current: "docs-funcs-string-trimprefix" +description: |- + The trimprefix function removes the specified prefix from the start of a + given string. +--- + +# `trimprefix` Function + +-> **Note:** This page is about Terraform 0.12 and later. For Terraform 0.11 and +earlier, see +[0.11 Configuration Language: Interpolation Syntax](../../configuration-0-11/interpolation.html). + +`trimprefix` removes the specified prefix from the start of the given string. + +## Examples + +``` +> trimprefix("helloworld", "hello") +world +``` + +## Related Functions + +* [`trim`](./trim.html) removes characters at the start and end of a string. +* [`trimsuffix`](./trimsuffix.html) removes a word from the end of a string. +* [`trimspace`](./trimspace.html) removes all types of whitespace from + both the start and the end of a string. diff --git a/website/docs/configuration/functions/trimsuffix.html.md b/website/docs/configuration/functions/trimsuffix.html.md new file mode 100644 index 000000000..aec898687 --- /dev/null +++ b/website/docs/configuration/functions/trimsuffix.html.md @@ -0,0 +1,30 @@ +--- +layout: "functions" +page_title: "trimsuffix - Functions - Configuration Language" +sidebar_current: "docs-funcs-string-trimsuffix" +description: |- + The trimsuffix function removes the specified suffix from the end of a + given string. +--- + +# `trimsuffix` Function + +-> **Note:** This page is about Terraform 0.12 and later. For Terraform 0.11 and +earlier, see +[0.11 Configuration Language: Interpolation Syntax](../../configuration-0-11/interpolation.html). + +`trimsuffix` removes the specified suffix from the end of the given string. + +## Examples + +``` +> trimsuffix("helloworld", "world") +hello +``` + +## Related Functions + +* [`trim`](./trim.html) removes characters at the start and end of a string. +* [`trimprefix`](./trimprefix.html) removes a word from the start of a string. 
+* [`trimspace`](./trimspace.html) removes all types of whitespace from
+  both the start and the end of a string.
diff --git a/website/docs/configuration/functions/try.html.md b/website/docs/configuration/functions/try.html.md
new file mode 100644
index 000000000..bf8d98795
--- /dev/null
+++ b/website/docs/configuration/functions/try.html.md
@@ -0,0 +1,117 @@
+---
+layout: "functions"
+page_title: "try - Functions - Configuration Language"
+sidebar_current: "docs-funcs-conversion-try"
+description: |-
+  The try function tries to evaluate a sequence of expressions given as
+  arguments and returns the result of the first one that does not produce
+  any errors.
+---
+
+# `try` Function
+
+-> **Note:** This page is about Terraform 0.12 and later. For Terraform 0.11 and
+earlier, see
+[0.11 Configuration Language: Interpolation Syntax](../../configuration-0-11/interpolation.html).
+
+`try` evaluates all of its argument expressions in turn and returns the result
+of the first one that does not produce any errors.
+
+This is a special function that is able to catch errors produced when evaluating
+its arguments, which is particularly useful when working with complex data
+structures whose shape is not well-known at implementation time.
+
+For example, if some data is retrieved from an external system in JSON or YAML
+format and then decoded, the result may have attributes that are not guaranteed
+to be set. We can use `try` to produce a normalized data structure which has
+a predictable type that can therefore be used more conveniently elsewhere in
+the configuration:
+
+```hcl
+locals {
+  raw_value = yamldecode(file("${path.module}/example.yaml"))
+  normalized_value = {
+    name   = tostring(try(local.raw_value.name, null))
+    groups = try(local.raw_value.groups, [])
+  }
+}
+```
+
+With the above local value expressions, configuration elsewhere in the module
+can refer to `local.normalized_value` attributes without the need to repeatedly
+check for and handle absent attributes that would otherwise produce errors.
+
+We can also use `try` to deal with situations where a value might be provided
+in two different forms, allowing us to normalize to the most general form:
+
+```hcl
+variable "example" {
+  type = any
+}
+
+locals {
+  example = try(
+    [tostring(var.example)],
+    tolist(var.example),
+  )
+}
+```
+
+The above permits `var.example` to be either a list or a single string. If it's
+a single string then it'll be normalized to a single-element list containing
+that string, again allowing expressions elsewhere in the configuration to just
+assume that `local.example` is always a list.
+
+This second example contains two expressions that can both potentially fail.
+For example, if `var.example` were set to `{}` then it could be converted to
+neither a string nor a list. If `try` exhausts all of the given expressions
+without any succeeding, it will return an error describing all of the problems
+it encountered.
+
+We strongly suggest using `try` only in special local values whose expressions
+perform normalization, so that the error handling is confined to a single
+location in the module and the rest of the module can just use straightforward
+references to the normalized structure and thus be more readable for future
+maintainers.
+
+The `try` function can only catch and handle _dynamic_ errors resulting from
+access to data that isn't known until runtime. It will not catch errors
+relating to expressions that can be proven to be invalid for any input, such
+as a malformed resource reference.
+ +~> **Warning:** The `try` function is intended only for concise testing of the +presence of and types of object attributes. Although it can technically accept +any sort of expression, we recommend using it only with simple attribute +references and type conversion functions as shown in the examples above. +Overuse of `try` to suppress errors will lead to a configuration that is hard +to understand and maintain. + +## Examples + +``` +> local.foo +{ + "bar" = "baz" +} +> try(local.foo.bar, "fallback") +baz +> try(local.foo.boop, "fallback") +fallback +``` + +The `try` function will _not_ catch errors relating to constructs that are +provably invalid even before dynamic expression evaluation, such as a malformed +reference or a reference to a top-level object that has not been declared: + +``` +> try(local.nonexist, "fallback") + +Error: Reference to undeclared local value + +A local value with the name "nonexist" has not been declared. +``` + +## Related Functions + +* [`can`](./can.html), which tries evaluating an expression and returns a + boolean value indicating whether it succeeded. diff --git a/website/docs/configuration/modules.html.md b/website/docs/configuration/modules.html.md index 9d81357e3..400e7449e 100644 --- a/website/docs/configuration/modules.html.md +++ b/website/docs/configuration/modules.html.md @@ -200,6 +200,25 @@ provider configuration must be destroyed before that provider configuration is removed, unless the related resources are re-configured to use a different provider configuration first. +### Provider Version Constraints in Modules + +To declare that a module requires particular versions of a specific provider, +use a [`required_providers`](terraform.html#specifying-required-provider-versions) +block inside a `terraform` block: + +```hcl +terraform { + required_providers { + aws = ">= 2.7.0" + } +} +``` + +Shared modules should constrain only the minimum allowed version, using a `>=` +constraint. This specifies the minimum version the provider is compatible +with while allowing users to upgrade to newer provider versions without +altering the module source code. + ### Implicit Provider Inheritance For convenience in simple configurations, a child module automatically inherits @@ -238,7 +257,7 @@ or a child module may need to use different provider settings than its parent. For such situations, it's necessary to pass providers explicitly as we will see in the next section. -## Passing Providers Explicitly +### Passing Providers Explicitly When child modules each need a different configuration of a particular provider, or where the child module requires a different provider configuration @@ -328,19 +347,17 @@ provider "aws" { Each resource should then have its own `provider` attribute set to either `"aws.src"` or `"aws.dst"` to choose which of the two provider instances to use. -At this time it is required to write an explicit proxy configuration block -even for default (un-aliased) provider configurations when they will be passed -via an explicit `providers` block: +A proxy configuration block is one that is either completely empty or that +contains only the `alias` argument. It serves as a placeholder for +provider configurations passed between modules. Although an empty proxy +configuration block is valid, it is not necessary: proxy configuration blocks +are needed only to establish which _alias_ provider configurations a child +module is expecting. 
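+
+For example, a child module that expects to be passed an aliased provider
+configuration can declare a proxy configuration block like the following
+(a sketch; the alias name is illustrative):
+
+```hcl
+provider "aws" {
+  # An empty body aside from the alias: this is a proxy configuration block,
+  # to be filled in by a providers argument in the calling module block.
+  alias = "src"
+}
+```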
-```hcl -provider "aws" { -} -``` - -If such a block is not present, the child module will behave as if it has no -configurations of this type at all, which may cause input prompts to supply -any required provider configuration arguments. This limitation will be -addressed in a future version of Terraform. +A proxy configuration block must not include the `version` argument. To specify +version constraints for a particular child module without creating a local +module configuration, use the [`required_providers`](/docs/configuration/terraform.html#specifying-required-provider-versions) +setting inside a `terraform` block. ## Multiple Instances of a Module diff --git a/website/docs/configuration/outputs.html.md b/website/docs/configuration/outputs.html.md index a0c270154..2ab93e4a1 100644 --- a/website/docs/configuration/outputs.html.md +++ b/website/docs/configuration/outputs.html.md @@ -52,6 +52,9 @@ refers to the `private_ip` attribute exposed by an `aws_instance` resource defined elsewhere in this module (not shown). Any valid expression is allowed as an output value. +-> **Note:** Outputs are only rendered when Terraform applies your plan. Running +`terraform plan` will not render outputs. + ## Accessing Child Module Outputs In a parent module, outputs of child modules are available in expressions as diff --git a/website/docs/configuration/providers.html.md b/website/docs/configuration/providers.html.md index 96a911b39..dff4f6cc9 100644 --- a/website/docs/configuration/providers.html.md +++ b/website/docs/configuration/providers.html.md @@ -93,9 +93,9 @@ for installation instructions. For more information, see [the `terraform init` command](/docs/commands/init.html). -## `version`: Provider Versions +## Provider Versions -[inpage-versions]: #version-provider-versions +[inpage-versions]: #provider-versions Providers are plugins released on a separate rhythm from Terraform itself, and so they have their own version numbers. For production use, you should @@ -118,31 +118,23 @@ suggested below. * provider.aws: version = "~> 1.0" ``` -To constrain the provider version as suggested, add the `version` meta-argument -to the provider configuration block: +To constrain the provider version as suggested, add a `required_providers` +block inside a `terraform` block: ```hcl -provider "aws" { - version = "~> 1.0" - - region = "us-east-1" +terraform { + required_providers { + aws = "~> 1.0" + } } ``` -This meta-argument applies to all providers. -[The `terraform providers` command](/docs/commands/providers.html) can be used +Use [the `terraform providers` command](/docs/commands/providers.html) to view the specified version constraints for all providers used in the current configuration. -The `version` argument value may either be a single explicit version or -a version constraint string. Constraint strings use the following syntax to -specify a _range_ of versions that are acceptable: - -* `>= 1.2.0`: version 1.2.0 or newer -* `<= 1.2.0`: version 1.2.0 or older -* `~> 1.2.0`: any non-beta version `>= 1.2.0` and `< 1.3.0`, e.g. `1.2.X` -* `~> 1.2`: any non-beta version `>= 1.2.0` and `< 2.0.0`, e.g. `1.X.Y` -* `>= 1.0.0, <= 2.0.0`: any version between 1.0.0 and 2.0.0 inclusive +For more information on the `required_providers` block, see +[Specifying Required Provider Versions](https://www.terraform.io/docs/configuration/terraform.html#specifying-required-provider-versions). 
When `terraform init` is re-run with providers already installed, it will
use an already-installed provider that meets the constraints in preference
@@ -150,6 +142,13 @@ to downloading a new version. To upgrade to the latest acceptable version
of each provider, run `terraform init -upgrade`. This command also upgrades
to the latest versions of all Terraform modules.

+Provider version constraints can also be specified using a `version` argument
+within a `provider` block, but that simultaneously declares a new provider
+configuration that may cause problems particularly when writing shared modules.
+For that reason, we recommend using the `required_providers` block as described
+above, and _not_ using the `version` argument within `provider` blocks.
+`version` is still supported for compatibility with older Terraform versions.
+
## `alias`: Multiple Provider Instances

[inpage-alias]: #alias-multiple-provider-instances
@@ -246,7 +245,7 @@ Operating system | User plugins directory
Windows | `%APPDATA%\terraform.d\plugins`
All other systems | `~/.terraform.d/plugins`

-Once a plugin is installed, `terraform init` can initialize it normally.
+Once a plugin is installed, `terraform init` can initialize it normally. You must run this command from the directory where the configuration files are located.

Providers distributed by HashiCorp can also go in the user plugins directory. If a manually installed version meets the configuration's version constraints,
diff --git a/website/docs/configuration/resources.html.md b/website/docs/configuration/resources.html.md
index e2b337250..e8c22fde8 100644
--- a/website/docs/configuration/resources.html.md
+++ b/website/docs/configuration/resources.html.md
@@ -246,7 +246,7 @@ resource "aws_instance" "server" {
  ami = "ami-a1b2c3d4"
  instance_type = "t2.micro"

-  tags {
+  tags = {
    Name = "Server ${count.index}"
  }
}
@@ -309,7 +309,7 @@ resource "aws_instance" "server" {
  instance_type = "t2.micro"
  subnet_id = var.subnet_ids[count.index]

-  tags {
+  tags = {
    Name = "Server ${count.index}"
  }
}
@@ -340,6 +340,12 @@ infrastructure object associated with it (as described above in
[Resource Behavior](#resource-behavior)), and each is separately created,
updated, or destroyed when the configuration is applied.

+-> **Note:** The keys of the map (or all the values in the case of a set of strings) must
+be _known values_, or you will get an error message that `for_each` has dependencies
+that cannot be determined before apply, and a `-target` may be needed. `for_each` keys
+cannot be the result of (or rely on the result of) impure functions, including `uuid`,
+`bcrypt`, or `timestamp`, as their evaluation is deferred until the apply step.
+
```hcl
resource "azurerm_resource_group" "rg" {
  for_each = {
@@ -396,7 +402,7 @@ resource "aws_instance" "server" {
  instance_type = "t2.micro"
  subnet_id = each.key # note: each.key and each.value are the same for a set

-  tags {
+  tags = {
    Name = "Server ${each.key}"
  }
}
@@ -411,6 +417,16 @@ can't refer to any resource attributes that aren't known until after a
configuration is applied (such as a unique ID generated by the remote API when
an object is created).

+The `for_each` value must be a map or set with one element per desired
+resource instance. If you need to declare resource instances based on a nested
+data structure or combinations of elements from multiple data structures you
+can use Terraform expressions and functions to derive a suitable value.
+For some common examples of such situations, see the +[`flatten`](/docs/configuration/functions/flatten.html) +and +[`setproduct`](/docs/configuration/functions/setproduct.html) +functions. + ### `provider`: Selecting a Non-default Provider Configuration [inpage-provider]: #provider-selecting-a-non-default-provider-configuration diff --git a/website/docs/configuration/syntax-json.html.md b/website/docs/configuration/syntax-json.html.md index 1399f5b56..c6cb5d57f 100644 --- a/website/docs/configuration/syntax-json.html.md +++ b/website/docs/configuration/syntax-json.html.md @@ -93,7 +93,7 @@ resource "aws_instance" "example" { ``` Within each top-level block type the rules for mapping to JSON are slightly -different (see [Block-type-specific Exceptions][inpage-exceptions] below), but the following general rules apply in most cases: +different (see the [block-type-specific exceptions](#block-type-specific-exceptions) below), but the following general rules apply in most cases: * The JSON object representing the block body contains properties that correspond either to argument names or to nested block type names. diff --git a/website/docs/configuration/syntax.html.md b/website/docs/configuration/syntax.html.md index 4279eb513..2b9e6b419 100644 --- a/website/docs/configuration/syntax.html.md +++ b/website/docs/configuration/syntax.html.md @@ -32,7 +32,7 @@ It is not necessary to know all of the details of HCL syntax in order to use Terraform, and so this page summarizes the most important details. If you are interested, you can find a full definition of HCL syntax in -[the HCL native syntax specification](https://github.com/hashicorp/hcl/blob/hcl2/hcl/hclsyntax/spec.md). +[the HCL native syntax specification](https://github.com/hashicorp/hcl/blob/hcl2/hclsyntax/spec.md). ## Arguments and Blocks diff --git a/website/docs/configuration/terraform.html.md b/website/docs/configuration/terraform.html.md index bc9bca4fa..50a28e723 100644 --- a/website/docs/configuration/terraform.html.md +++ b/website/docs/configuration/terraform.html.md @@ -103,12 +103,6 @@ to newer versions of Terraform without altering the module. The `required_providers` setting is a map specifying a version constraint for each provider required by your configuration. -This is one of several ways to define -[provider version constraints](./providers.html#provider-versions), -and is particularly suited to re-usable modules that expect a provider -configuration to be provided by their caller but still need to impose a -minimum version for that provider. - ```hcl terraform { required_providers { @@ -117,7 +111,71 @@ terraform { } ``` +Version constraint strings within the `required_providers` block use the +same version constraint syntax as for +[the `required_version` argument](#specifying-a-required-terraform-version) +described above. + +When a configuration contains multiple version constraints for a single +provider -- for example, if you're using multiple modules and each one has +its own constraint -- _all_ of the constraints must hold to select a single +provider version for the whole configuration. + Re-usable modules should constrain only the minimum allowed version, such as `>= 1.0.0`. This specifies the earliest version that the module is compatible with while leaving the user of the module flexibility to upgrade to newer versions of the provider without altering the module. 
+
+Root modules should use a `~>` constraint to set both a lower and upper bound
+on versions for each provider they depend on, as described in
+[Provider Versions](providers.html#provider-versions).
+
+An alternate syntax is also supported, but not intended for use at this time.
+It exists to support future enhancements.
+
+```hcl
+terraform {
+  required_providers {
+    aws = {
+      version = ">= 2.7.0"
+    }
+  }
+}
+```
+
+## Experimental Language Features
+
+From time to time the Terraform team will introduce new language features
+initially via an opt-in experiment, so that the community can try the new
+feature and give feedback on it prior to it becoming a backward-compatibility
+constraint.
+
+In releases where experimental features are available, you can enable them on
+a per-module basis by setting the `experiments` argument inside a `terraform`
+block:
+
+```hcl
+terraform {
+  experiments = [example]
+}
+```
+
+The above would opt in to an experiment named `example`, assuming such an
+experiment were available in the current Terraform version.
+
+Experiments are subject to arbitrary changes in later releases and, depending on
+the outcome of the experiment, may change drastically before final release or
+may not be released in stable form at all. Such breaking changes may appear
+even in minor and patch releases. We do not recommend using experimental
+features in Terraform modules intended for production use.
+
+In order to make that explicit and to avoid module callers inadvertently
+depending on an experimental feature, any module with experiments enabled will
+generate a warning on every `terraform plan` or `terraform apply`. If you
+want to try experimental features in a shared module, we recommend enabling the
+experiment only in alpha or beta releases of the module.
+
+The introduction and completion of experiments is reported in
+[Terraform's changelog](https://github.com/hashicorp/terraform/blob/master/CHANGELOG.md),
+so you can watch the release notes there to discover which experiment keywords,
+if any, are available in a particular Terraform release.
diff --git a/website/docs/configuration/variables.html.md b/website/docs/configuration/variables.html.md
index f13afeedf..993eeb3d9 100644
--- a/website/docs/configuration/variables.html.md
+++ b/website/docs/configuration/variables.html.md
@@ -23,7 +23,7 @@ When you declare them in [child modules](./modules.html), the calling
module should pass values in the `module` block.

Input variable usage is introduced in the Getting Started guide section
-[_Input Variables_](/intro/getting-started/variables.html).
+[_Input Variables_](https://learn.hashicorp.com/terraform/getting-started/variables).

-> **Note:** For brevity, input variables are often referred to as just
"variables" or "Terraform variables" when it is clear from context what sort of
@@ -164,6 +164,64 @@ might be included in documentation about the module, and so it should be written
from the perspective of the user of the module rather than its maintainer. For
commentary for module maintainers, use comments.

+## Custom Validation Rules
+
+~> **Warning:** This feature is currently experimental and is subject to breaking
+changes even in minor releases. We welcome your feedback, but cannot
+recommend using this feature in production modules yet.
+
+In addition to Type Constraints as described above, a module author can specify
+arbitrary custom validation rules for a particular variable using a `validation`
+block nested within the corresponding `variable` block:
+
+```hcl
+variable "image_id" {
+  type = string
+  description = "The id of the machine image (AMI) to use for the server."
+
+  validation {
+    condition = length(var.image_id) > 4 && substr(var.image_id, 0, 4) == "ami-"
+    error_message = "The image_id value must be a valid AMI id, starting with \"ami-\"."
+  }
+}
+```
+
+The `condition` argument is an expression that must use the value of the
+variable to return `true` if the value is valid, or `false` if it is invalid.
+The expression can refer only to the variable that the condition applies to,
+and _must not_ produce errors.
+
+If the failure of an expression is the basis of the validation decision, use
+[the `can` function](./functions/can.html) to detect such errors. For example:
+
+```hcl
+variable "image_id" {
+  type = string
+  description = "The id of the machine image (AMI) to use for the server."
+
+  validation {
+    # regex(...) fails if it cannot find a match
+    condition = can(regex("^ami-", var.image_id))
+    error_message = "The image_id value must be a valid AMI id, starting with \"ami-\"."
+  }
+}
+```
+
+If `condition` evaluates to `false`, Terraform will produce an error message
+that includes the sentences given in `error_message`. The error message string
+should be at least one full sentence explaining the constraint that failed,
+using a sentence structure similar to the above examples.
+
+This is [an experimental language feature](./terraform.html#experimental-language-features)
+that currently requires an explicit opt-in using the experiment keyword
+`variable_validation`:
+
+```hcl
+terraform {
+  experiments = [variable_validation]
+}
+```
+
 ## Assigning Values to Root Module Variables
 
 When variables are declared in the root module of your configuration, they
diff --git a/website/docs/import/usage.html.md b/website/docs/import/usage.html.md
index 3d6aa8c4e..717635a77 100644
--- a/website/docs/import/usage.html.md
+++ b/website/docs/import/usage.html.md
@@ -39,12 +39,14 @@ resource configuration:
 $ terraform import aws_instance.example i-abcd1234
 ```
 
-This command locates the AWS instance with ID `i-abcd1234` and attaches
-its existing settings, as described by the EC2 API, to the name
-`aws_instance.example` in the Terraform state.
+This command locates the AWS instance with ID `i-abcd1234`. It then attaches
+the instance's existing settings, as described by the EC2 API, to the resource
+name `aws_instance.example` in a module; in this example, the address implies
+the root module. Finally, the mapping is saved in the Terraform state.
 
-It is also possible to import to resources in child modules and to single
-instances of a resource with `count` set. See
+It is also possible to import to resources in child modules, using their paths,
+and to single instances of a resource with `count` or `for_each` set. See
 [_Resource Addressing_](/docs/internals/resource-addressing.html) for more
 details on how to specify a target resource.
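+
+For example, a resource inside a child module, or one instance of a resource
+that uses `count`, can be imported with addresses such as the following (the
+`module.web` name is a hypothetical placeholder):
+
+```
+$ terraform import module.web.aws_instance.example i-abcd1234
+$ terraform import 'aws_instance.example[0]' i-abcd1234
+```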
diff --git a/website/docs/internals/credentials-helpers.html.md b/website/docs/internals/credentials-helpers.html.md
new file mode 100644
index 000000000..c599c2b18
--- /dev/null
+++ b/website/docs/internals/credentials-helpers.html.md
@@ -0,0 +1,171 @@
+---
+layout: "docs"
+page_title: "Credentials Helpers"
+sidebar_current: "docs-internals-credentials-helpers"
+description: |-
+  Credentials helpers are external programs that know how to store and retrieve API tokens for remote Terraform services.
+---
+
+# Credentials Helpers
+
+For Terraform-specific features that interact with remote network services,
+such as [module registries](/docs/registry/) and
+[remote operations](/docs/cloud/run/cli.html), Terraform by default looks for
+API credentials to use in these calls in
+[the CLI configuration](/docs/commands/cli-config.html).
+
+Credentials helpers offer an alternative approach that allows you to customize
+how Terraform obtains credentials using an external program, which can then
+directly access an existing secrets management system in your organization.
+
+This page is about how to write and install a credentials helper. To learn
+how to configure a credentials helper that was already installed, see
+[the CLI config Credentials Helpers section](/docs/commands/cli-config.html#credentials-helpers).
+
+## How Terraform finds Credentials Helpers
+
+A credentials helper is a normal executable program that is installed in a
+particular location and whose name follows a specific naming convention.
+
+A credentials helper called "credstore", for example, would be implemented as
+an executable program named `terraform-credentials-credstore` (with an `.exe`
+extension on Windows only), and installed in one of the
+[default plugin search locations](/docs/extend/how-terraform-works.html#plugin-locations).
+
+## How Terraform runs Credentials Helpers
+
+Once Terraform has located the configured credentials helper, it will execute
+it once for each credentials request that cannot be satisfied by a `credentials`
+block in the CLI configuration.
+
+For the following examples, we'll assume a "credstore" credentials helper
+configured as follows:
+
+```
+credentials_helper "credstore" {
+  args = ["--host=credstore.example.com"]
+}
+```
+
+Terraform runs the helper program with each of the arguments given in `args`,
+followed by a _verb_ and then the hostname that the verb will apply to.
+The current set of verbs is:
+
+* `get`: retrieve the credentials for the given hostname
+* `store`: store new credentials for the given hostname
+* `forget`: delete any stored credentials for the given hostname
+
+To represent credentials, the credentials helper protocol uses a JSON object
+whose contents correspond with the contents of
+[`credentials` blocks in the CLI configuration](/docs/commands/cli-config.html#credentials).
+To represent an API token, the object contains a property called "token" whose
+value is the token string:
+
+```json
+{
+  "token": "example-token-value"
+}
+```
+
+The following sections describe the specific expected behaviors for each of the
+three verbs.
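+
+As a rough sketch of the overall shape, a helper might dispatch on the verb
+like this (hypothetical Go code, not a complete implementation; the storage
+calls are left as comments):
+
+```go
+package main
+
+import (
+	"fmt"
+	"io/ioutil"
+	"os"
+)
+
+func main() {
+	args := os.Args[1:]
+	if len(args) < 2 {
+		fmt.Fprintln(os.Stderr, "usage: terraform-credentials-example [args] <verb> <hostname>")
+		os.Exit(1)
+	}
+	verb, hostname := args[len(args)-2], args[len(args)-1]
+
+	switch verb {
+	case "get":
+		// Look up the token for hostname in your secrets system, print a
+		// JSON credentials object on stdout, and exit with status zero.
+		fmt.Printf("{\"token\":%q}\n", "example-token-for-"+hostname)
+	case "store":
+		// Always consume all of stdin, then persist the credentials object.
+		if _, err := ioutil.ReadAll(os.Stdin); err != nil {
+			fmt.Fprintln(os.Stderr, "failed to read credentials object:", err)
+			os.Exit(1)
+		}
+	case "forget":
+		// Delete any stored credentials for hostname here.
+	default:
+		// The protocol may grow new verbs, so reject anything unknown.
+		fmt.Fprintf(os.Stderr, "unsupported verb %q\n", verb)
+		os.Exit(1)
+	}
+}
+```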
+
+## `get`: retrieve the credentials for the given hostname
+
+To retrieve credentials for `app.terraform.io`, Terraform would run the
+"credstore" helper as follows:
+
+```
+terraform-credentials-credstore --host=credstore.example.com get app.terraform.io
+```
+
+If the credentials helper is able to provide credentials for the given host
+then it must print a JSON credentials object to its stdout stream and then
+exit with status code zero to indicate success.
+
+If it is unable to provide the requested credentials for any reason, it must
+print an end-user-oriented plain text error message to its stderr stream and
+then exit with a _non-zero_ status code.
+
+## `store`: store new credentials for the given hostname
+
+To store new credentials for `app.terraform.io`, Terraform would run the
+"credstore" helper as follows:
+
+```
+terraform-credentials-credstore --host=credstore.example.com store app.terraform.io
+```
+
+Terraform then writes a JSON credentials object to the helper program's stdin
+stream. If the helper is able to store the given credentials then it must do
+so and then exit with status code zero and no output on stdout or stderr to
+indicate success.
+
+If it is unable to store the given credentials for any reason, it _must_ still
+fully read its stdin until EOF and then print an end-user-oriented plain text
+error message to its stderr stream before exiting with a non-zero status
+code.
+
+The new credentials must fully replace any existing credentials stored for the
+given hostname.
+
+## `forget`: delete any stored credentials for the given hostname
+
+To forget any existing credentials for `app.terraform.io`, Terraform would run
+the "credstore" helper as follows:
+
+```
+terraform-credentials-credstore --host=credstore.example.com forget app.terraform.io
+```
+
+No JSON credentials objects are used for the `forget` verb.
+
+If the helper program is able to delete its stored credentials for the given
+hostname or if there are no such credentials stored already then it must
+exit with status code zero and produce no output on stdout or stderr.
+
+If it is unable to forget the stored credentials for any reason, particularly
+if the helper cannot be sure that the credentials are no longer available for
+retrieval, the helper program must print an end-user-oriented plain text error
+message to its stderr stream and then exit with a non-zero status code.
+
+## Handling Other Commands
+
+The credentials helper protocol may be extended with additional verbs in the
+future, so for forward-compatibility a credentials helper must react to any
+unsupported verb by printing an end-user-oriented plain text error message to
+its stderr stream and then exiting with a non-zero status code.
+
+## Handling Unsupported Credentials Object Properties
+
+Currently Terraform defines only the `token` property within JSON credentials
+objects, but this format might be extended in the future.
+
+If a credentials helper is asked to store an object that has any properties
+other than `token` and if it is not able to faithfully retain them then it
+must behave as if the object is unstorable, returning an error. It must _not_
+store the `token` value in isolation and silently drop other properties, as
+that might change the meaning of the credentials object.
+
+If technically possible within the constraints of the target system, a
+credentials helper should prefer to store the whole JSON object as-is for
+later retrieval. For systems that are more constrained, it's acceptable to
+store only the `token` string so long as the program rejects objects containing
+other properties as described above.
+
+## Installing a Credentials Helper
+
+Terraform does not have any automatic installation mechanism for credentials
+helpers. Instead, the user must extract the helper program executable into
+one of the [default plugin search locations](/docs/extend/how-terraform-works.html#plugin-locations).
+
+If you are packaging a credentials helper for distribution, place it in an
+archive named with the expected naming scheme (`terraform-credentials-example`)
+and, if the containing archive format supports it and it's meaningful for the
+target operating system, mark the file as executable to increase the chances
+that it will work immediately after extraction.
+
+Terraform does _not_ honor the `-plugin-dir` argument to `terraform init` when
+searching for credentials helpers, because credentials are also used by other
+commands that can be run prior to `terraform init`. Only the default search
+locations are supported.
diff --git a/website/docs/internals/login-protocol.html.markdown b/website/docs/internals/login-protocol.html.markdown
new file mode 100644
index 000000000..560a471be
--- /dev/null
+++ b/website/docs/internals/login-protocol.html.markdown
@@ -0,0 +1,114 @@
+---
+layout: "docs"
+page_title: "Login Protocol"
+sidebar_current: "docs-internals-login-protocol"
+description: |-
+  The login protocol is used for authenticating Terraform against servers providing Terraform-native services.
+---
+
+# Server-side Login Protocol
+
+~> **Note:** You don't need to read these docs to _use_
+[`terraform login`](/docs/commands/login.html). The information below is for
+anyone intending to implement the server side of `terraform login` in order to
+offer Terraform-native services in a third-party system.
+
+The `terraform login` command supports performing an OAuth 2.0 authorization
+request using configuration provided by the target host. You may wish to
+implement this protocol if you are producing a third-party implementation of
+any [Terraform-native services](/docs/internals/remote-service-discovery.html),
+such as a Terraform module registry.
+
+First, Terraform uses
+[remote service discovery](/docs/internals/remote-service-discovery.html) to
+find the OAuth configuration for the host. The host must support the service
+name `login.v1` and define for it an object containing OAuth client
+configuration values, like this:
+
+```json
+{
+  "login.v1": {
+    "client": "terraform-cli",
+    "grant_types": ["authz_code"],
+    "authz": "/oauth/authorization",
+    "token": "/oauth/token",
+    "ports": [10000, 10010]
+  }
+}
+```
+
+The properties within the discovery object are as follows:
+
+* `client` (Required): The `client_id` value to use when making requests, as
+  defined in [RFC 6749 section 2.2](https://tools.ietf.org/html/rfc6749#section-2.2).
+
+  Because Terraform is a _public client_ (it is installed on end-user systems
+  and thus cannot protect an OAuth client secret), the `client_id` is purely
+  advisory and the server must not use it as a guarantee that an authorization
+  request is truly coming from Terraform.
+
+* `grant_types` (Optional): A JSON array of strings describing a set of OAuth
+  2.0 grant types the server is able to support. A "grant type" selects a
+  specific mechanism by which an OAuth server authenticates the request and
+  issues an authorization token.
+ + Terraform CLI currently only supports a single grant type: + + * `authz_code`: [authorization code grant](https://tools.ietf.org/html/rfc6749#section-4.1). + Both the `authz` and `token` properties are required when `authz_code` is + present. + + Other grant types may be supported in future versions of Terraform CLI, + and may impose different requirements on the `authz` and `token` properties. + If not specified, `grant_types` defaults to `["authz_code"]`. + +* `authz` (Required if needed for a given grant type): the server's + [authorization endpoint](https://tools.ietf.org/html/rfc6749#section-3.1). + If given as a relative URL, it is resolved from the location of the + service discovery document. + +* `token` (Required if needed for a given grant type): the server's + [token endpoint](https://tools.ietf.org/html/rfc6749#section-3.2). + If given as a relative URL, it is resolved from the location of the + service discovery document. + +* `ports` (Optional): A two-element JSON array giving an inclusive range of + TCP ports that Terraform may use for the temporary HTTP server it will start + to provide the [redirection endpoint](https://tools.ietf.org/html/rfc6749#section-3.1.2) + for the first step of an authorization code grant. Terraform opens a TCP + listen port on the loopback interface in order to receive the response from + the server's authorization endpoint. + + If not specified, Terraform is free to select any TCP port greater than or + equal to 1024. + + Terraform allows constraining this port range for interoperability with OAuth + server implementations that require each `client_id` to be associated with + a fixed set of valid redirection endpoint URLs. Configure such a server + to expect a range of URLs of the form `http://localhost:10000/` + with different consecutive port numbers, and then specify that port range + using `ports`. + + We recommend allowing at least 10 distinct port numbers if possible, and + assigning them to numbers greater than or equal to 10000, to minimize the + risk that all of the possible ports will already be in use on a particular + system. + +When requesting an authorization code grant, Terraform CLI implements the +[Proof Key for Code Exchange](https://tools.ietf.org/html/rfc7636) extension in +order to protect against other applications on the system intercepting the +incoming request to the redirection endpoint. We strongly recommend that you +select an OAuth server implementation that also implements this extension and +verifies the code challenge sent to the token endpoint. + +Terraform CLI does not support OAuth refresh tokens or token expiration. If your +server issues time-limited tokens, Terraform CLI will simply begin receiving +authorization errors once the token expires, after which the user can run +`terraform login` again to obtain a new token. + +-> **Note:** As a special case, Terraform can use a +[Resource Owner Password Credentials Grant](https://tools.ietf.org/html/rfc6749#section-4.3) +only when interacting with `app.terraform.io` ([Terraform Cloud](/docs/cloud/index.html)), +under the recommendation in the OAuth specification to use this grant type only +when the client and server are closely related. The `password` grant type is +not supported for any other hostname and will be ignored. 
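+
+For reference, the `S256` code challenge check described in RFC 7636 that the
+token endpoint is expected to perform amounts to the following (a sketch in
+Go with hypothetical values; see the RFC for the exact requirements):
+
+```go
+package main
+
+import (
+	"crypto/sha256"
+	"crypto/subtle"
+	"encoding/base64"
+	"fmt"
+)
+
+// verifyPKCE reports whether codeChallenge matches
+// BASE64URL-ENCODE(SHA256(codeVerifier)), the S256 method from RFC 7636.
+func verifyPKCE(codeVerifier, codeChallenge string) bool {
+	sum := sha256.Sum256([]byte(codeVerifier))
+	expected := base64.RawURLEncoding.EncodeToString(sum[:])
+	return subtle.ConstantTimeCompare([]byte(expected), []byte(codeChallenge)) == 1
+}
+
+func main() {
+	// Hypothetical values for illustration only.
+	fmt.Println(verifyPKCE("example-code-verifier", "example-code-challenge"))
+}
+```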
diff --git a/website/docs/internals/remote-service-discovery.html.md b/website/docs/internals/remote-service-discovery.html.md index 84ba3c6e0..87e56ba74 100644 --- a/website/docs/internals/remote-service-discovery.html.md +++ b/website/docs/internals/remote-service-discovery.html.md @@ -83,8 +83,9 @@ version 1 of the module registry protocol: ## Supported Services -At present, only one service identifier is in use: +At present, the following service identifiers are in use: +* `login.v1`: [login protocol version 1](/docs/commands/login.html#protocol-v1) * `modules.v1`: [module registry API version 1](/docs/registry/api.html) ## Authentication diff --git a/website/docs/modules/composition.html.markdown b/website/docs/modules/composition.html.markdown index 5e5437ec4..53dbf228b 100644 --- a/website/docs/modules/composition.html.markdown +++ b/website/docs/modules/composition.html.markdown @@ -89,7 +89,7 @@ pass those values into the module from data sources instead: ```hcl data "aws_vpc" "main" { - tags { + tags = { Environment = "production" } } @@ -117,7 +117,7 @@ reasons, certain infrastructure may be shared across multiple development environments, while in production the infrastructure is unique and managed directly by the production configuration. -Rather than trying to write a module that itself tries detect whether something +Rather than trying to write a module that itself tries to detect whether something exists and create it if not, we recommend applying the dependency inversion approach: making the module accept the object it needs as an argument, via an input variable. diff --git a/website/docs/plugins/basics.html.md b/website/docs/plugins/basics.html.md index bba437822..46b766a84 100644 --- a/website/docs/plugins/basics.html.md +++ b/website/docs/plugins/basics.html.md @@ -13,7 +13,7 @@ topic in Terraform, and is not required knowledge for day-to-day usage. If you don't plan on writing any plugins, this section of the documentation is not necessary to read. For general use of Terraform, please see our [Intro to Terraform](/intro/index.html) and [Getting -Started](/intro/getting-started/install.html) guides. +Started](https://learn.hashicorp.com/terraform/getting-started/install) guides. This page documents the basics of how the plugin system in Terraform works, and how to setup a basic development environment for plugin development diff --git a/website/docs/plugins/provider.html.md b/website/docs/plugins/provider.html.md index c4fa7a443..5bcf819bc 100644 --- a/website/docs/plugins/provider.html.md +++ b/website/docs/plugins/provider.html.md @@ -13,7 +13,7 @@ topic in Terraform, and is not required knowledge for day-to-day usage. If you don't plan on writing any plugins, this section of the documentation is not necessary to read. For general use of Terraform, please see our [Intro to Terraform](/intro/index.html) and [Getting -Started](/intro/getting-started/install.html) guides. +Started](https://learn.hashicorp.com/terraform/getting-started/install) guides. A provider in Terraform is responsible for the lifecycle of a resource: create, read, update, delete. An example of a provider is AWS, which diff --git a/website/docs/providers/index.html.markdown b/website/docs/providers/index.html.markdown index 0421863b1..04c6026f6 100644 --- a/website/docs/providers/index.html.markdown +++ b/website/docs/providers/index.html.markdown @@ -15,7 +15,7 @@ infrastructure type can be represented as a resource in Terraform. 
A provider is responsible for understanding API interactions and exposing resources. Providers generally are an IaaS (e.g. Alibaba Cloud, AWS, GCP, Microsoft Azure, OpenStack), PaaS (e.g. Heroku), or SaaS services (e.g. Terraform Cloud, -DNSimple, CloudFlare). +DNSimple, Cloudflare). Use the navigation to the left to find available providers by type or scroll down to see all providers. @@ -28,16 +28,21 @@ down to see all providers. - [Alibaba Cloud](/docs/providers/alicloud/index.html) - [Archive](/docs/providers/archive/index.html) - [Arukas](/docs/providers/arukas/index.html) +- [Auth0](/docs/providers/auth0/index.html) - [Avi Vantage](/docs/providers/avi/index.html) - [Aviatrix](/docs/providers/aviatrix/index.html) - [AWS](/docs/providers/aws/index.html) - [Azure](/docs/providers/azurerm/index.html) - [Azure Active Directory](/docs/providers/azuread/index.html) - [Azure Stack](/docs/providers/azurestack/index.html) +- [A10 Networks](/docs/providers/vthunder/index.html) +- [BaiduCloud](/docs/providers/baiducloud/index.html) - [Bitbucket](/docs/providers/bitbucket/index.html) - [Brightbox](/docs/providers/brightbox/index.html) - [CenturyLinkCloud](/docs/providers/clc/index.html) +- [Check Point](/docs/providers/checkpoint/index.html) - [Chef](/docs/providers/chef/index.html) +- [CherryServers](/docs/providers/cherryservers/index.html) - [Circonus](/docs/providers/circonus/index.html) - [Cisco ASA](/docs/providers/ciscoasa/index.html) - [Cisco ACI](/docs/providers/aci/index.html) @@ -52,6 +57,7 @@ down to see all providers. - [DNSimple](/docs/providers/dnsimple/index.html) - [DNSMadeEasy](/docs/providers/dme/index.html) - [Docker](/docs/providers/docker/index.html) +- [Dome9](/docs/providers/dome9/index.html) - [Dyn](/docs/providers/dyn/index.html) - [Exoscale](/docs/providers/exoscale/index.html) - [External](/docs/providers/external/index.html) @@ -71,17 +77,21 @@ down to see all providers. - [Hetzner Cloud](/docs/providers/hcloud/index.html) - [HTTP](/docs/providers/http/index.html) - [HuaweiCloud](/docs/providers/huaweicloud/index.html) +- [HuaweiCloudStack](/docs/providers/huaweicloudstack/index.html) - [Icinga2](/docs/providers/icinga2/index.html) - [Ignition](/docs/providers/ignition/index.html) +- [Incapsula](/docs/providers/incapsula/index.html) - [InfluxDB](/docs/providers/influxdb/index.html) - [JDCloud](/docs/providers/jdcloud/index.html) - [Kubernetes](/docs/providers/kubernetes/index.html) +- [LaunchDarkly](/docs/providers/launchdarkly/index.html) - [Librato](/docs/providers/librato/index.html) - [Linode](/docs/providers/linode/index.html) - [Local](/docs/providers/local/index.html) - [Logentries](/docs/providers/logentries/index.html) - [LogicMonitor](/docs/providers/logicmonitor/index.html) - [Mailgun](/docs/providers/mailgun/index.html) +- [MetalCloud](/docs/providers/metalcloud/index.html) - [MongoDB Atlas](/docs/providers/mongodbatlas/index.html) - [MySQL](/docs/providers/mysql/index.html) - [Naver Cloud](/docs/providers/ncloud/index.html) @@ -92,6 +102,9 @@ down to see all providers. 
- [Null](/docs/providers/null/index.html) - [Nutanix](/docs/providers/nutanix/index.html) - [1&1](/docs/providers/oneandone/index.html) +- [Okta](/docs/providers/okta/index.html) +- [Okta ASA](/docs/providers/oktaasa/index.html) +- [OpenNebula](/docs/providers/opennebula/index.html) - [OpenStack](/docs/providers/openstack/index.html) - [OpenTelekomCloud](/docs/providers/opentelekomcloud/index.html) - [OpsGenie](/docs/providers/opsgenie/index.html) @@ -106,6 +119,7 @@ down to see all providers. - [PowerDNS](/docs/providers/powerdns/index.html) - [ProfitBricks](/docs/providers/profitbricks/index.html) - [Pureport](/docs/providers/pureport/index.html) +- [Quorum](/docs/providers/quorum/index.html) - [RabbitMQ](/docs/providers/rabbitmq/index.html) - [Rancher](/docs/providers/rancher/index.html) - [Rancher2](/docs/providers/rancher2/index.html) @@ -119,6 +133,7 @@ down to see all providers. - [Skytap](/docs/providers/skytap/index.html) - [SoftLayer](/docs/providers/softlayer/index.html) - [Spotinst](/docs/providers/spotinst/index.html) +- [StackPath](/docs/providers/stackpath/index.html) - [StatusCake](/docs/providers/statuscake/index.html) - [TelefonicaOpenCloud](/docs/providers/telefonicaopencloud/index.html) - [Template](/docs/providers/template/index.html) @@ -130,10 +145,12 @@ down to see all providers. - [UCloud](/docs/providers/ucloud/index.html) - [UltraDNS](/docs/providers/ultradns/index.html) - [Vault](/docs/providers/vault/index.html) +- [Venafi](/docs/providers/venafi/index.html) - [VMware NSX-T](/docs/providers/nsxt/index.html) - [VMware vCloud Director](/docs/providers/vcd/index.html) - [VMware vRA7](/docs/providers/vra7/index.html) - [VMware vSphere](/docs/providers/vsphere/index.html) +- [Vultr](/docs/providers/vultr/index.html) - [Yandex](/docs/providers/yandex/index.html) diff --git a/website/docs/providers/type/cloud-index.html.markdown b/website/docs/providers/type/cloud-index.html.markdown index bf1363bf9..8b503e352 100644 --- a/website/docs/providers/type/cloud-index.html.markdown +++ b/website/docs/providers/type/cloud-index.html.markdown @@ -18,8 +18,10 @@ vendor in close collaboration with HashiCorp, and are tested by HashiCorp. - [Arukas](/docs/providers/arukas/index.html) +- [BaiduCloud](/docs/providers/baiducloud/index.html) - [Brightbox](/docs/providers/brightbox/index.html) - [CenturyLinkCloud](/docs/providers/clc/index.html) +- [CherryServers](/docs/providers/cherryservers/index.html) - [Cisco ACI](/docs/providers/aci/index.html) - [CloudScale.ch](/docs/providers/cloudscale/index.html) - [CloudStack](/docs/providers/cloudstack/index.html) @@ -32,10 +34,13 @@ vendor in close collaboration with HashiCorp, and are tested by HashiCorp. - [Heroku](/docs/providers/heroku/index.html) - [Hetzner Cloud](/docs/providers/hcloud/index.html) - [HuaweiCloud](/docs/providers/huaweicloud/index.html) +- [HuaweiCloudStack](/docs/providers/huaweicloudstack/index.html) - [JDCloud](/docs/providers/jdcloud/index.html) - [Linode](/docs/providers/linode/index.html) +- [MetalCloud](/docs/providers/metalcloud/index.html) - [Naver Cloud](/docs/providers/ncloud/index.html) - [Nutanix](/docs/providers/nutanix/index.html) +- [OpenNebula](/docs/providers/opennebula/index.html) - [OpenStack](/docs/providers/openstack/index.html) - [OpenTelekomCloud](/docs/providers/opentelekomcloud/index.html) - [OVH](/docs/providers/ovh/index.html) @@ -45,9 +50,11 @@ vendor in close collaboration with HashiCorp, and are tested by HashiCorp. 
- [Skytap](/docs/providers/skytap/index.html) - [Selectel](/docs/providers/selectel/index.html) - [SoftLayer](/docs/providers/softlayer/index.html) +- [StackPath](/docs/providers/stackpath/index.html) - [TelefonicaOpenCloud](/docs/providers/telefonicaopencloud/index.html) - [TencentCloud](/docs/providers/tencentcloud/index.html) - [Triton](/docs/providers/triton/index.html) - [UCloud](/docs/providers/ucloud/index.html) -- [Yandex](/docs/providers/yandex/index.html) +- [Vultr](/docs/providers/vultr/index.html) +- [Yandex.Cloud](/docs/providers/yandex/index.html) - [1&1](/docs/providers/oneandone/index.html) diff --git a/website/docs/providers/type/community-index.html.markdown b/website/docs/providers/type/community-index.html.markdown index 0e86b9d23..af064ef2b 100644 --- a/website/docs/providers/type/community-index.html.markdown +++ b/website/docs/providers/type/community-index.html.markdown @@ -17,31 +17,44 @@ please fill out this [community providers form](https://docs.google.com/forms/d/
-- [1Password](https://github.com/anasinnyk/terraform-provider-1password/) + +- [1Password](https://github.com/anasinnyk/terraform-provider-1password) - [Abiquo](https://github.com/abiquo/terraform-provider-abiquo) -- [Active Directory](https://github.com/GSLabDev/terraform-provider-ad) +- [Active Directory - adlerrobert](https://github.com/adlerrobert/terraform-provider-activedirectory) +- [Active Directory - GSLabDev](https://github.com/GSLabDev/terraform-provider-ad) +- [Airtable](https://github.com/paultyng/terraform-provider-airtable) - [Aiven](https://github.com/aiven/terraform-provider-aiven) - [AlienVault](https://github.com/form3tech-oss/terraform-provider-alienvault) - [AnsibleVault](https://github.com/MeilleursAgents/terraform-provider-ansiblevault) - [Apigee](https://github.com/zambien/terraform-provider-apigee) - [Artifactory](https://github.com/atlassian/terraform-provider-artifactory) - [Auth](https://github.com/Shuttl-Tech/terraform-provider-auth) -- [Auth0](https://github.com/bocodigitalmedia/terraform-provider-auth0) +- [Auth0](https://github.com/alexkappa/terraform-provider-auth0) +- [Automic Continuous Delivery](https://github.com/Automic/terraform-provider-cda) - [AVI](https://github.com/avinetworks/terraform-provider-avi) - [Aviatrix](https://github.com/AviatrixSystems/terraform-provider-aviatrix) - [AWX](https://github.com/mauromedda/terraform-provider-awx) - [Azure Devops](https://github.com/agarciamiravet/terraform-provider-azuredevops) -- [CDA](https://github.com/Automic/terraform-provider-cda) +- [Bitbucket Server](https://github.com/gavinbunney/terraform-provider-bitbucketserver) +- [CDAP](https://github.com/GoogleCloudPlatform/terraform-provider-cdap) +- [CDS](https://github.com/capitalonline/terraform-provider-cds) +- [Centreon](https://github.com/smutel/terraform-provider-centreon) +- [Checkly](https://github.com/bitfield/terraform-provider-checkly) - [Cherry Servers](https://github.com/cherryservers/terraform-provider-cherryservers) +- [Citrix ADC](https://github.com/citrix/terraform-provider-citrixadc) - [Cloud Foundry](https://github.com/cloudfoundry-community/terraform-provider-cf) -- [CloudAMQP](https://github.com/cloudamqp/terraform-provider) -- [CloudKarafka](https://github.com/cloudkarafka/terraform-provider) -- [CloudMQTT](https://github.com/cloudmqtt/terraform-provider) +- [Cloud.dk](https://github.com/danitso/terraform-provider-clouddk) +- [Cloudability](https://github.com/skyscrapr/terraform-provider-cloudability) +- [CloudAMQP](https://github.com/cloudamqp/terraform-provider-cloudamqp) +- [Cloudforms](https://github.com/GSLabDev/terraform-provider-cloudforms) +- [CloudKarafka](https://github.com/CloudKarafka/terraform-provider-cloudkarafka) +- [CloudMQTT](https://github.com/CloudMQTT/terraform-provider-cloudmqtt) - [CloudPassage Halo](https://gitlab.com/kiwicom/terraform-provider-cphalo) - [CodeClimate](https://github.com/babbel/terraform-provider-codeclimate) - [Confidant](https://github.com/stripe/terraform-provider-confidant) - [Consul ACL](https://github.com/Ashald/terraform-provider-consulacl) -- [CoreOS Container Linux Configs](https://github.com/coreos/terraform-provider-ct) +- [CoreOS Container Linux Configs](https://github.com/poseidon/terraform-provider-ct) +- [Coveo Cloud](https://github.com/ernesto-arm/coveo-provider) - [CouchDB](https://github.com/nicolai86/terraform-provider-couchdb) - [Credhub](https://github.com/orange-cloudfoundry/terraform-provider-credhub) - [Cronitor](https://github.com/nauxliu/terraform-provider-cronitor) @@ 
-55,10 +68,12 @@ please fill out this [community providers form](https://docs.google.com/forms/d/ - [EfficientIP](https://github.com/alexissavin/terraform-provider-solidserver) - [Elastic Cloud Enterprise (ECE)](https://github.com/Ascendon/terraform-provider-ece) - [Elasticsearch](https://github.com/phillbaker/terraform-provider-elasticsearch) -- [ElephantSQL](https://github.com/elephantsql/terraform-provider) +- [ElephantSQL](https://github.com/ElephantSQL/terraform-provider-elephantsql) - [Enterprise Cloud](https://github.com/nttcom/terraform-provider-ecl) - [ESXI](https://github.com/josenk/terraform-provider-esxi) -- [Foreman](https://github.com/wayfair/terraform-provider-foreman) +- [Flowdock](https://github.com/sirenfei/terraform-provider-flowdock) +- [Foreman - wayfair](https://github.com/wayfair/terraform-provider-foreman) +- [Foreman - HanseMerkur](https://github.com/HanseMerkur/terraform-provider-foreman) - [Gandi](https://github.com/tiramiseb/terraform-provider-gandi) - [Generic Rest API](https://github.com/Mastercard/terraform-provider-restapi) - [Git](https://github.com/fourplusone/terraform-provider-git) @@ -66,49 +81,64 @@ please fill out this [community providers form](https://docs.google.com/forms/d/ - [GitHub File](https://github.com/form3tech-oss/terraform-provider-githubfile) - [GitInfo](https://github.com/Teralytic/terraform-provider-gitinfo) - [Glue](https://github.com/MikeSouza/terraform-provider-glue) -- [GoCD](https://github.com/drewsonne/terraform-provider-gocd) +- [GoCD](https://github.com/beamly/terraform-provider-gocd) - [Google Calendar](https://github.com/sethvargo/terraform-provider-googlecalendar) - [Google G Suite](https://github.com/DeviaVir/terraform-provider-gsuite) - [GorillaStack](https://github.com/GorillaStack/terraform-provider-gorillastack) +- [Greylog](https://github.com/suzuki-shunsuke/go-graylog) +- [Harbor](https://github.com/BESTSELLER/terraform-harbor-provider) - [Hiera](https://github.com/ribbybibby/terraform-provider-hiera) +- [Hiera 5](https://gitlab.com/sbitio/terraform-provider-hiera5) - [HPE OneView](https://github.com/HewlettPackard/terraform-provider-oneview) - [HTTP File Upload](https://github.com/GSLabDev/terraform-provider-httpfileupload) - [IBM Cloud](https://github.com/IBM-Cloud/terraform-provider-ibm) - [IIJ GIO](https://github.com/iij/terraform-provider-p2pub) - [Infoblox](https://github.com/hiscox/terraform-provider-infoblox) - [InsightOPS](https://github.com/Tweddle-SE-Team/terraform-provider-insight) +- [Instaclustr](https://github.com/instaclustr/terraform-provider-instaclustr) +- [Instana](https://github.com/gessnerfl/terraform-provider-instana) +- [Iron.io](https://github.com/danitso/terraform-provider-ironio) - [Jira](https://github.com/anubhavmishra/terraform-provider-jira) - [Jira (Extended)](https://github.com/fourplusone/terraform-provider-jira) - [JumpCloud](https://github.com/CognotektGmbH/terraform-provider-jumpcloud/) +- [JunOS](https://github.com/jeremmfr/terraform-provider-junos) - [Kafka](https://github.com/Mongey/terraform-provider-kafka) - [Kafka Connect](https://github.com/b-social/terraform-provider-kafkaconnect) - [Keboola](https://github.com/plmwong/terraform-provider-keboola) - [Keycloak](https://github.com/mrparkers/terraform-provider-keycloak) -- [Kibana](https://github.com/ewilde/terraform-provider-kibana) - [Keyring](https://github.com/rremer/terraform-provider-keyring) +- [Kibana](https://github.com/ewilde/terraform-provider-kibana) - [Kong](https://github.com/kevholditch/terraform-provider-kong) - 
[Ksyun](https://github.com/KscSDK/terraform-provider-ksyun) +- [Kubectl](https://github.com/gavinbunney/terraform-provider-kubectl) +- [Kubernetes](https://github.com/mingfang/terraform-provider-k8s) - [libvirt](https://github.com/dmacvicar/terraform-provider-libvirt) - [Logentries](https://github.com/dikhan/terraform-provider-logentries) - [Logz.io](https://github.com/jonboydell/logzio_terraform_provider) - [LXD](https://github.com/sl1pm4t/terraform-provider-lxd) +- [MAAS](https://github.com/Roblox/terraform-provider-maas) - [Manifold](https://github.com/manifoldco/terraform-provider-manifold) -- [Matchbox](https://github.com/coreos/terraform-provider-matchbox) +- [Matchbox](https://github.com/poseidon/terraform-provider-matchbox) +- [MinIO](https://github.com/aminueza/terraform-provider-minio) - [MongoDB Atlas](https://github.com/akshaykarle/terraform-provider-mongodbatlas) +- [Nagios XI](https://github.com/devopsdunkin/terraform-provider-nagios) - [Name](https://github.com/mhumeSF/terraform-provider-namedotcom) - [Nelson](https://github.com/getnelson/terraform-provider-nelson) - [NetApp](https://github.com/miechus/terraform-provider-netapp) - [NSX-V](https://github.com/GSLabDev/terraform-provider-nsxv) -- [Okta](https://github.com/articulate/terraform-provider-okta) -- [Online.net](https://github.com/src-d/terraform-provider-online-net) +- [Oktawave](https://github.com/oktawave-code/terraform-provider-oktawave) +- [Online.net](https://github.com/src-d/terraform-provider-online) - [Open Day Light](https://github.com/GSLabDev/terraform-provider-odl) - [OpenAPI](https://github.com/dikhan/terraform-provider-openapi) - [OpenFaaS](https://github.com/ewilde/terraform-provider-openfaas) +- [Openshift](https://github.com/llomgui/terraform-provider-openshift) - [OpenvCloud](https://github.com/gig-tech/terraform-provider-ovc) - [oVirt](https://github.com/oVirt/terraform-provider-ovirt) - [Pass](https://github.com/camptocamp/terraform-provider-pass) +- [PHPIPAM](https://github.com/lord-kyron/terraform-provider-phpipam) - [Pingdom](https://github.com/russellcardullo/terraform-provider-pingdom) - [Pivotal Tracker](https://github.com/xchapter7x/terraform-provider-pivotaltracker) +- [Prometheus Operator](https://github.com/greg-gajda/terraform-provider-po) - [Proxmox](https://github.com/Telmate/terraform-provider-proxmox) - [Puppet CA](https://github.com/camptocamp/terraform-provider-puppetca) - [PuppetDB](https://github.com/camptocamp/terraform-provider-puppetdb) @@ -116,7 +146,7 @@ please fill out this [community providers form](https://docs.google.com/forms/d/ - [QingCloud](https://github.com/yunify/terraform-provider-qingcloud) - [Qiniu](https://github.com/qiniu/terraform-provider-qiniu) - [Redshift](https://github.com/frankfarrell/terraform-provider-redshift) -- [RKE](https://github.com/yamamoto-febc/terraform-provider-rke) +- [RKE](https://github.com/rancher/terraform-provider-rke) - [Rollbar](https://github.com/babbel/terraform-provider-rollbar) - [SakuraCloud](https://github.com/sacloud/terraform-provider-sakuracloud) - [SCVMM](https://github.com/GSLabDev/terraform-provider-scvmm) @@ -125,7 +155,6 @@ please fill out this [community providers form](https://docs.google.com/forms/d/ - [Sentry](https://github.com/jianyuan/terraform-provider-sentry) - [Sewan](https://github.com/SewanDevs/terraform-provider-sewan) - [Shell](https://github.com/scottwinkler/terraform-provider-shell) -- [Smartronix](https://github.com/changli3/terraform-provider-smartronix) - 
[Snowflake](https://github.com/ShopRunner/terraform-provider-snowflake) - [snowflakedb](https://github.com/chanzuckerberg/terraform-provider-snowflake) - [sops](https://github.com/carlpett/terraform-provider-sops) @@ -136,16 +165,23 @@ please fill out this [community providers form](https://docs.google.com/forms/d/ - [Stripe](https://github.com/franckverrot/terraform-provider-stripe) - [Sumo Logic](https://github.com/SumoLogic/sumologic-terraform-provider) - [TeamCity](https://github.com/cvbarros/terraform-provider-teamcity) -- [Transloadit](https://github.com/bocodigitalmedia/terraform-provider-transloadit) +- [Telegram](https://github.com/yi-jiayu/terraform-provider-telegram) +- [Time](https://github.com/bflad/terraform-provider-time) +- [Transloadit](https://github.com/delphire/terraform-provider-transloadit) - [Trello](https://github.com/jtsaito/terraform-provider-trello) +- [tumblr](https://github.com/rfiestas/terraform-provider-tumblr) +- [Unifi](https://github.com/paultyng/terraform-provider-unifi) +- [UpCloud](https://github.com/UpCloudLtd/terraform-provider-upcloud/) - [Updown.io](https://github.com/mvisonneau/terraform-provider-updown) - [Uptimerobot](https://github.com/louy/terraform-provider-uptimerobot) - [Vaulted](https://github.com/sumup-oss/terraform-provider-vaulted) +- [Veeam](https://github.com/GSLabDev/terraform-provider-veeam) - [Venafi](https://github.com/Venafi/terraform-provider-venafi) - [vRealize Automation](https://github.com/GSLabDev/terraform-provider-vra) - [Vultr](https://github.com/squat/terraform-provider-vultr) -- [Wavefront](https://github.com/spaceapegames/terraform-provider-wavefront) +- [Wavefront](https://github.com/wavefrontHQ/terraform-provider-wavefront) - [Win DNS](https://github.com/PortOfPortland/terraform-provider-windns) +- [XML](https://github.com/ssomagani/terraform-provider-xml) - [YAML](https://github.com/Ashald/terraform-provider-yaml) - [Zendesk](https://github.com/nukosuke/terraform-provider-zendesk) - [ZeroTier](https://github.com/cormacrelf/terraform-provider-zerotier) diff --git a/website/docs/providers/type/infra-index.html.markdown b/website/docs/providers/type/infra-index.html.markdown index 39922698e..e83b39c5f 100644 --- a/website/docs/providers/type/infra-index.html.markdown +++ b/website/docs/providers/type/infra-index.html.markdown @@ -20,10 +20,13 @@ and are tested by HashiCorp. - [Chef](/docs/providers/chef/index.html) - [Consul](/docs/providers/consul/index.html) - [Docker](/docs/providers/docker/index.html) +- [Dome9](/docs/providers/dome9/index.html) - [Helm](/docs/providers/helm/index.html) - [Kubernetes](/docs/providers/kubernetes/index.html) - [Mailgun](/docs/providers/mailgun/index.html) - [Nomad](/docs/providers/nomad/index.html) +- [Okta](/docs/providers/okta/index.html) +- [Okta ASA](/docs/providers/oktaasa/index.html) - [RabbitMQ](/docs/providers/rabbitmq/index.html) - [Rancher](/docs/providers/rancher/index.html) - [Rancher2](/docs/providers/rancher2/index.html) @@ -33,3 +36,4 @@ and are tested by HashiCorp. 
- [Terraform](/docs/providers/terraform/index.html) - [Terraform Cloud](/docs/providers/tfe/index.html) - [Vault](/docs/providers/vault/index.html) +- [Venafi](/docs/providers/venafi/index.html) diff --git a/website/docs/providers/type/misc-index.html.markdown b/website/docs/providers/type/misc-index.html.markdown index ca1f04b7e..9929e0203 100644 --- a/website/docs/providers/type/misc-index.html.markdown +++ b/website/docs/providers/type/misc-index.html.markdown @@ -26,3 +26,5 @@ by the vendors and the Terraform community, and are tested by HashiCorp. - [Random](/docs/providers/random/index.html) - [Template](/docs/providers/template/index.html) - [TLS](/docs/providers/tls/index.html) +- [Quorum](/docs/providers/quorum/index.html) + diff --git a/website/docs/providers/type/monitor-index.html.markdown b/website/docs/providers/type/monitor-index.html.markdown index 64027f534..57cd0f91d 100644 --- a/website/docs/providers/type/monitor-index.html.markdown +++ b/website/docs/providers/type/monitor-index.html.markdown @@ -19,11 +19,13 @@ HashiCorp, and are tested by HashiCorp. --- +- [Auth0](/docs/providers/auth0/index.html) - [Circonus](/docs/providers/circonus/index.html) - [Datadog](/docs/providers/datadog/index.html) - [Dyn](/docs/providers/dyn/index.html) - [Grafana](/docs/providers/grafana/index.html) - [Icinga2](/docs/providers/icinga2/index.html) +- [LaunchDarkly](/docs/providers/launchdarkly/index.html) - [Librato](/docs/providers/librato/index.html) - [Logentries](/docs/providers/logentries/index.html) - [LogicMonitor](/docs/providers/logicmonitor/index.html) diff --git a/website/docs/providers/type/network-index.html.markdown b/website/docs/providers/type/network-index.html.markdown index a722b643e..9f7bf9eb4 100644 --- a/website/docs/providers/type/network-index.html.markdown +++ b/website/docs/providers/type/network-index.html.markdown @@ -20,6 +20,8 @@ in close collaboration with HashiCorp, and are tested by HashiCorp. - [Akamai](/docs/providers/akamai/index.html) - [Avi Vantage](/docs/providers/avi/index.html) - [Aviatrix](/docs/providers/aviatrix/index.html) +- [A10 Networks](/docs/providers/vthunder/index.html) +- [Check Point](/docs/providers/checkpoint/index.html) - [Cloudflare](/docs/providers/cloudflare/index.html) - [Cisco ASA](/docs/providers/ciscoasa/index.html) - [DNS](/docs/providers/dns/index.html) @@ -28,6 +30,7 @@ in close collaboration with HashiCorp, and are tested by HashiCorp. - [F5 BIG-IP](/docs/providers/bigip/index.html) - [FortiOS](/docs/providers/fortios/index.html) - [HTTP](/docs/providers/http/index.html) +- [Incapsula](/docs/providers/incapsula/index.html) - [NS1](/docs/providers/ns1/index.html) - [Palo Alto Networks](/docs/providers/panos/index.html) - [PowerDNS](/docs/providers/powerdns/index.html) diff --git a/website/docs/provisioners/index.html.markdown b/website/docs/provisioners/index.html.markdown index e033a8e12..45a7e43b5 100644 --- a/website/docs/provisioners/index.html.markdown +++ b/website/docs/provisioners/index.html.markdown @@ -65,7 +65,7 @@ is immediately available on system boot. For example: * Oracle Cloud Infrastructure: `metadata` or `extended_metadata` on [`oci_core_instance`](/docs/providers/oci/r/core_instance.html) or [`oci_core_instance_configuration`](/docs/providers/oci/r/core_instance_configuration.html). 
-* VMWare vSphere: Attach a virtual CDROM to +* VMware vSphere: Attach a virtual CDROM to [`vsphere_virtual_machine`](/docs/providers/vsphere/r/virtual_machine.html) using the `cdrom` block, containing a file called `user-data.txt`. diff --git a/website/docs/provisioners/puppet.html.markdown b/website/docs/provisioners/puppet.html.markdown index f4d598145..844671b54 100644 --- a/website/docs/provisioners/puppet.html.markdown +++ b/website/docs/provisioners/puppet.html.markdown @@ -78,7 +78,7 @@ The following arguments are supported: a certificate from the Puppet master CA (defaults to the FQDN of the resource). -* `extension_request (map)` - (Optional) A map of [extension +* `extension_requests (map)` - (Optional) A map of [extension requests](https://puppet.com/docs/puppet/latest/ssl_attributes_extensions.html#concept-932) to be embedded in the certificate signing request before it is sent to the Puppet master CA and then transferred to the final certificate when the CSR diff --git a/website/docs/registry/modules/publish.html.md b/website/docs/registry/modules/publish.html.md index 093edc5d9..9c343284e 100644 --- a/website/docs/registry/modules/publish.html.md +++ b/website/docs/registry/modules/publish.html.md @@ -89,7 +89,7 @@ The webhook will notify the registry of the new version and it will appear on the registry usually in less than a minute. If your version doesn't appear properly, you may force a sync with GitHub -by viewing your module on the registry and clicking "Force GitHub Sync" +by viewing your module on the registry and clicking "Resync Module" under the "Manage Module" dropdown. This process may take a few minutes. Please only do this if you do not see the version appear, since it will cause the registry to resync _all versions_ of your module. diff --git a/website/docs/registry/modules/verified.html.md b/website/docs/registry/modules/verified.html.md index d7ff853ac..90dacdf8c 100644 --- a/website/docs/registry/modules/verified.html.md +++ b/website/docs/registry/modules/verified.html.md @@ -16,8 +16,8 @@ The blue verification badge appears next to modules that are verified. ![Verified module listing](/assets/images/docs/registry-verified.png) -If a module is verified, it is promised to be actively maintained and of -high quality. It isn't indicative of flexibility or feature support; very +Verified modules are expected to be actively maintained by the Cloud providers. +The verified badge isn’t indicative of flexibility or feature support; very simple modules can be verified just because they're great examples of modules. Likewise, an unverified module could be extremely high quality and actively maintained. An unverified module shouldn't be assumed to be poor quality, it diff --git a/website/docs/registry/private.html.md b/website/docs/registry/private.html.md index 7a28faf72..80a8b7502 100644 --- a/website/docs/registry/private.html.md +++ b/website/docs/registry/private.html.md @@ -24,7 +24,7 @@ created by other teams, you will benefit from a private module registry. ## Terraform Cloud's Private Registry [Terraform Cloud](https://www.hashicorp.com/products/terraform) -includes a private module registry, available at both Pro and Premium tiers. +includes a private module registry. It is available to all accounts, including free organizations. 
It uses the same VCS-backed tagged release workflow as the Terraform Registry,
but imports modules from your private VCS repos (on any of Terraform Cloud's
supported VCS
diff --git a/website/docs/registry/providers/docs.html.md b/website/docs/registry/providers/docs.html.md
new file mode 100644
index 000000000..7f2e25cb4
--- /dev/null
+++ b/website/docs/registry/providers/docs.html.md
@@ -0,0 +1,247 @@
+---
+layout: "registry"
+page_title: "Terraform Registry - Provider Documentation"
+sidebar_current: "docs-registry-provider-docs"
+description: |-
+  Expected document structure for publishing providers to the Terraform Registry.
+---
+
+# Provider Documentation
+
+The [Terraform Registry][terraform-registry] displays documentation for the providers it hosts. This page describes the expected format for provider documentation.
+
+## Publishing
+
+-> **Note:** Publishing is currently in a closed beta. Although we do not expect this document to change significantly before opening provider publishing to the community, this reference currently only applies to providers already appearing on the [Terraform Registry providers list][terraform-registry-providers].
+
+The Terraform Registry publishes providers from their Git repositories, creating a version for each Git tag that matches the [Semver](https://semver.org/) versioning format. Provider documentation is published automatically as part of the provider release process.
+
+Provider documentation is always tied to a provider version. A given version always displays the documentation from that version's Git commit, and the only way to publish updated documentation is to release a new version of the provider.
+
+### Storage Limits
+
+The maximum number of documents allowed for a single provider version is 1000.
+
+Each document can contain no more than 500KB of data. Documents which exceed this limit will be truncated, and a note will be displayed in the Terraform Registry.
+
+## Format
+
+Provider documentation should be a directory of Markdown documents in the provider repository. Each Markdown document is rendered as a separate page. The directory should include a document for the provider index, a document for each resource and data source, and optional documents for any guides.
+
+### Directory Structure
+
+| Location | Filename | Description |
+|-|-|-|
+| `docs/` | `index.md` | Index page for the provider. |
+| `docs/guides/` | `<guide>.md` | Additional documentation for guides. |
+| `docs/resources/` | `<resource>.md` | Information for a Resource. Filename should not include a `<PROVIDER NAME>_` prefix. |
+| `docs/data-sources/` | `<data source>.md` | Information on a provider data source. |
+
+-> **Note:** In order to support provider docs which have already been formatted for publishing to [terraform.io][terraform-io-providers], the Terraform Registry also supports docs in a `website/docs/` legacy directory with file extensions of `.html.markdown` or `.html.md`.
+
+### Headers
+
+We strongly suggest that provider docs include the following sections to help users understand how to use the provider. Create additional sections if they would enhance usability of the resource (for example, “Imports” or “Customizable Timeouts”).
+
+#### Index Headers
+
+    # Provider
+
+    Summary of what the provider is for, including use cases and links to
+    app/service documentation.
+
+    ## Example Usage
+
+    ```hcl
+    // Code block with an example of how to use this provider.
+    ```
+
+    ## Argument Reference
+
+    * List any arguments for the provider block.
+ +#### Resource/Data Source Headers + + # Resource/Data Source + + Description of what this resource does, with links to official + app/service documentation. + + ## Example Usage + + ```hcl + // Code block with an example of how to use this resource. + ``` + + ## Argument Reference + + * `attribute_name` - (Optional/Required) List arguments this resource takes. + + ## Attribute Reference + + * `attribute_name` - List attributes that this resource exports. + +### YAML Frontmatter + +Markdown source files may contain YAML frontmatter, which provides organizational information and display hints. Frontmatter can be omitted for resources and data sources that don't require a subcategory. + +Frontmatter is not rendered in the Terraform Registry web UI. + +#### Example + +```markdown +--- +page_title: "Authenticating with Foo Service via OAuth" +subcategory: "Authentication" +--- +``` + +#### Supported Attributes + +The following frontmatter attributes are supported by the Terraform Registry: + +* **page_title** - The title of this document, which will display in the docs navigation. This is only required for documents in the `guides/` folder. +* **subcategory** - An optional additional layer of grouping that affects the display of the docs navigation; [see Subcategories below](#subcategories) for more details. Resources and data sources should be organized into subcategories if the number of resources would be difficult to quickly scan for a user. Guides should be separated into subcategories if there are multiple guides which fit into 2 or more distinct groupings. + +### Callouts + +If you start a paragraph with a special arrow-like sigil, it will become a colored callout box. You can't make multi-paragraph callouts. For colorblind users (and for clarity in general), callouts will automatically start with a strong-emphasized word to indicate their function. + +Sigil | Text prefix | Color +------|-------------------|------- +`->` | `**Note**` | blue +`~>` | `**Note**` | yellow +`!>` | `**Warning**` | red + +## Navigation Hierarchy + +Provider docs are organized by category: resources, data sources, and guides. At a minimum, a provider must contain an index (`docs/index.md`) and at least one resource or data source. + +### Typical Structure + +A provider named `example` with a resource and data source for `instance` would have these 3 files: + +``` +docs/ + index.md + data-sources/ + instance.md + resources/ + instance.md +``` + +After publishing this provider version, its page on the Terraform Registry would display a navigation which resembles this hierarchy: + +* example Provider +* Resources + * example_instance +* Data Sources + * example_instance + +### Subcategories + +To group these resources by a service or other dimension, add the optional `subcategory` field to the YAML frontmatter of the resource and data source: + +```markdown +--- +subcategory: "Compute" +--- +``` + +This would change the navigation hierarchy to the following: + +* example Provider +* Compute + * Resources + * example_instance + * Data Sources + * example_instance + +Resources and data sources without a subcategory will be rendered before any subcategories. + +### Guides + +Providers can optionally include 1 or more guides. These can assist users in using the provider for certain scenarios. 
+ +``` +docs/ + index.md + guides/ + authenticating.md + data-sources/ + instance.md + resources/ + instance.md +``` + +The title for guides is controlled with the `page_title` attribute in the YAML frontmatter: + +```markdown +--- +page_title: "Authenticating with Example Cloud" +--- +``` + +The `page_title` is used (instead of the filename) for rendering the link to this guide in the navigation: + +* example Provider +* Guides + * Authenticating with Example Cloud +* Resources + * example_instance +* Data Sources + * example_instance + +Guides are always rendered before resources, data sources, and any subcategories. + +If a `page_title` attribute is not found, the title will default to the filename without the extension. + +### Guides Subcategories + +If a provider has many guides, you can use subcategories to group them into separate top-level sections. For example, given the following directory structure: + +``` +docs/ + index.md + guides/ + authenticating-basic.md + authenticating-oauth.md + setup.md + data-sources/ + instance.md + resources/ + instance.md +``` + +Assuming that these three guides have titles similar to their filenames, and the first two include `subcategory: "Authentication"` in their frontmatter, the Terraform Registry would display this navigation structure: + +* example Provider +* Guides + * Initial Setup +* Authentication + * Authenticating with Basic Authentication + * Authenticating with OAuth +* Resources + * example_instance +* Data Sources + * example_instance + +Guides without a subcategory are always rendered before guides with subcategories. Both are always rendered before resources and data sources. + +## Migrating Legacy Providers Docs + +For most provider docs already published to [terraform.io][terraform-io-providers], no changes are required to publish them to the Terraform Registry. + +~> **Important:** The only exceptions are providers which organize resources, data sources, or guides into subcategories. See the [Subcategories](#subcategories) section above for more information. + +If you want to publish docs on the Terraform Registry that are not currently published to terraform.io, take the following steps to migrate to the newer format: + +1. Move the `website/docs/` folder to `docs/` +2. Expand the folder names to match the Terraform Registry's expected format: + * Rename `docs/d/` to `docs/data-sources/` + * Rename `docs/r/` to `docs/resources/` +3. Change file suffixes from `.html.markdown` or `.html.md` to `.md`. + +[terraform-registry]: https://registry.terraform.io +[terraform-registry-providers]: https://registry.terraform.io/browse/providers +[terraform-io-providers]: https://www.terraform.io/docs/providers/ diff --git a/website/docs/state/remote.html.md b/website/docs/state/remote.html.md index bb3c05396..ace197fac 100644 --- a/website/docs/state/remote.html.md +++ b/website/docs/state/remote.html.md @@ -17,7 +17,7 @@ Terraform at the same time. With _remote_ state, Terraform writes the state data to a remote data store, which can then be shared between all members of a team. Terraform supports storing state in [Terraform Cloud](https://www.hashicorp.com/products/terraform/), -[HashiCorp Consul](https://www.consul.io/), Amazon S3, and more. +[HashiCorp Consul](https://www.consul.io/), Amazon S3, Alibaba Cloud OSS, and more. Remote state is a feature of [backends](/docs/backends). 
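+
+On a Unix system with Git available, those migration steps might look roughly
+like this (a sketch only; adjust the globs and paths to match your repository):
+
+```
+$ git mv website/docs docs
+$ git mv docs/d docs/data-sources
+$ git mv docs/r docs/resources
+$ for f in docs/*.html.* docs/*/*.html.*; do git mv "$f" "${f%.html.*}.md"; done
+```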
Configuring and using remote backends is easy and you can get started with remote state diff --git a/website/docs/state/sensitive-data.html.md b/website/docs/state/sensitive-data.html.md index 8ecaf4ea9..f5ccaa116 100644 --- a/website/docs/state/sensitive-data.html.md +++ b/website/docs/state/sensitive-data.html.md @@ -8,45 +8,33 @@ description: |- # Sensitive Data in State -Terraform state can contain sensitive data depending on the resources in-use +Terraform state can contain sensitive data, depending on the resources in use and your definition of "sensitive." The state contains resource IDs and all resource attributes. For resources such as databases, this may contain initial passwords. -Some resources (such as AWS IAM Access Keys) have options for PGP encrypting the -values within the state. This is implemented on a per-resource basis and -you should assume the value is plaintext unless otherwise documented. +When using local state, state is stored in plain-text JSON files. -When using local state, state is stored in plain-text JSON files. When -using [remote state](/docs/state/remote.html), state is only ever held in memory when used by Terraform. -It may be encrypted at rest but this depends on the specific remote state -backend. - -It is important to keep this in mind if you do (or plan to) store sensitive -data (e.g. database passwords, user passwords, private keys) as it may affect -the risk of exposure of such sensitive data. +When using [remote state](/docs/state/remote.html), state is only ever held in +memory when used by Terraform. It may be encrypted at rest, but this depends on +the specific remote state backend. ## Recommendations -Storing state remotely may provide you encryption at rest depending on the -backend you choose. As of Terraform 0.9, Terraform will only hold the state -value in memory when remote state is in use. It is never explicitly persisted -to disk. +If you manage any sensitive data with Terraform (like database passwords, user +passwords, or private keys), treat the state itself as sensitive data. -For example, encryption at rest can be enabled with the S3 backend and IAM -policies and logging can be used to identify any invalid access. Requests for -the state go over a TLS connection. +Storing state remotely can provide better security. As of Terraform 0.9, +Terraform does not persist state to the local disk when remote state is in use, +and some backends can be configured to encrypt the state data at rest. -[Terraform Cloud](https://www.hashicorp.com/products/terraform/) is -a commercial product from HashiCorp that also acts as a [backend](/docs/backends) -and provides encryption at rest for state. Terraform Cloud also knows -the identity of the user requesting state and maintains a history of state -changes. This can be used to provide access control and detect any breaches. +For example: -## Future Work - -Long term, the Terraform project wants to further improve the ability to -secure sensitive data. There are plans to provide a -generic mechanism for specific state attributes to be encrypted or even -completely omitted from the state. These do not exist yet except on a -resource-by-resource basis if documented. +- [Terraform Cloud](/docs/cloud/index.html) always encrypts state at rest and + protects it with TLS in transit. Terraform Cloud also knows the identity of + the user requesting state and maintains a history of state changes. This can + be used to control access and track activity. 
[Terraform Enterprise](/docs/enterprise/index.html) + also supports detailed audit logging. +- The S3 backend supports encryption at rest when the `encrypt` option is + enabled. IAM policies and logging can be used to identify any invalid access. + Requests for the state go over a TLS connection. diff --git a/website/docs/state/workspaces.html.md b/website/docs/state/workspaces.html.md index 568d4f823..899d5a5e4 100644 --- a/website/docs/state/workspaces.html.md +++ b/website/docs/state/workspaces.html.md @@ -27,6 +27,7 @@ Multiple workspaces are currently supported by the following backends: * [AzureRM](/docs/backends/types/azurerm.html) * [Consul](/docs/backends/types/consul.html) + * [COS](/docs/backends/types/cos.html) * [GCS](/docs/backends/types/gcs.html) * [Local](/docs/backends/types/local.html) * [Manta](/docs/backends/types/manta.html) @@ -39,6 +40,13 @@ It was renamed in 0.10 based on feedback about confusion caused by the overloading of the word "environment" both within Terraform itself and within organizations that use Terraform. +-> **Note**: The Terraform CLI workspace concept described in this document is +different from but related to the Terraform Cloud +[workspace](/docs/cloud/workspaces/index.html) concept. +If you use multiple Terraform CLI workspaces in a single Terraform configuration +and are migrating that configuration to Terraform Cloud, see this [migration +document](/docs/cloud/migrate/workspaces.html). + ## Using Workspaces Terraform starts with a single workspace named "default". This @@ -70,7 +78,10 @@ Terraform workspace. Within your Terraform configuration, you may include the name of the current workspace using the `${terraform.workspace}` interpolation sequence. This can -be used anywhere interpolations are allowed. +be used anywhere interpolations are allowed. However, it should **not** be +used in remote operations against Terraform Cloud workspaces. For an +explanation, see the [remote backend](../backends/types/remote.html#workspaces) +document. Referencing the current workspace is useful for changing behavior based on the workspace. For example, for non-default workspaces, it may be useful @@ -89,7 +100,7 @@ tagging behavior: ```hcl resource "aws_instance" "example" { - tags { + tags = { Name = "web - ${terraform.workspace}" } @@ -195,4 +206,6 @@ meant to be a shared resource. They aren't a private, local-only notion The "current workspace" name is stored only locally in the ignored `.terraform` directory. This allows multiple team members to work on -different workspaces concurrently. +different workspaces concurrently. The "current workspace" name is **not** +currently meaningful in Terraform Cloud workspaces since it will always +have the value `default`. diff --git a/website/guides/core-workflow.html.md b/website/guides/core-workflow.html.md index a7b2feedd..afecf4dd3 100644 --- a/website/guides/core-workflow.html.md +++ b/website/guides/core-workflow.html.md @@ -157,7 +157,7 @@ Terraform operations are executed in a shared Continuous Integration (CI) environment. The work needed to create such a CI environment is nontrivial, and is outside the scope of this core workflow overview, but a full deep dive on this topic can be found in our -[Running Terraform in Automation](https://www.terraform.io/guides/running-terraform-in-automation.html) +[Running Terraform in Automation](https://learn.hashicorp.com/terraform/development/running-terraform-in-automation) guide. 
This longer iteration cycle of committing changes to version control and then diff --git a/website/guides/running-terraform-in-automation.html.md b/website/guides/running-terraform-in-automation.html.md deleted file mode 100644 index 04c18a704..000000000 --- a/website/guides/running-terraform-in-automation.html.md +++ /dev/null @@ -1,331 +0,0 @@ ---- -layout: "guides" -page_title: "Running Terraform in Automation - Guides" -sidebar_current: "guides-running-terraform-in-automation" -description: |- - Terraform can, with some caveats, be run in automated processes such as - continuous delivery pipelines. Ths guide describes some techniques for - doing so and some gotchas to watch out for. ---- - -# Running Terraform in Automation - -~> **This is an advanced guide!** When getting started with Terraform, it's -recommended to use it locally from the command line. Automation can become -valuable once Terraform is being used regularly in production, or by a larger -team, but this guide assumes familiarity with the normal, local CLI -workflow. - -For teams that use Terraform as a key part of a change management and -deployment pipeline, it can be desirable to orchestrate Terraform runs in some -sort of automation in order to ensure consistency between runs, and provide -other interesting features such as integration with version control hooks. - -Automation of Terraform can come in various forms, and to varying degrees. -Some teams continue to run Terraform locally but use _wrapper scripts_ to -prepare a consistent working directory for Terraform to run in, while other -teams run Terraform entirely within an orchestration tool such as Jenkins. - -This guide covers some things that should be considered when implementing -such automation, both to ensure safe operation of Terraform and to accommodate -some current limitations in Terraform's workflow that require careful -attention in automation. - -The guide assumes that Terraform will be running in an _non-interactive_ -environment, where it is not possible to prompt for input at the terminal. -This is not necessarily true for wrapper scripts, but is often true when -running in orchestration tools. - -This is a general guide, giving an overview of things to consider when -implementing orchestration of Terraform. Due to its general nature, it is not -possible to go into specifics about any particular tools, though other -tool-specific guides may be produced later if best practices emerge around -such a tool. - -## Automated Workflow Overview - -When running Terraform in automation, the focus is usually on the core -plan/apply cycle. The main path, then, is broadly the same as for CLI -usage: - -1. Initialize the Terraform working directory. -2. Produce a plan for changing resources to match the current configuration. -3. Have a human operator review that plan, to ensure it is acceptable. -4. Apply the changes described by the plan. - -Steps 1, 2 and 4 can be carried out using the familiar Terraform CLI commands, -with some additional options: - -* `terraform init -input=false` to initialize the working directory. -* `terraform plan -out=tfplan -input=false` to create a plan and save it to the local file `tfplan`. -* `terraform apply -input=false tfplan` to apply the plan stored in the file `tfplan`. - -The `-input=false` option indicates that Terraform should not attempt to -prompt for input, and instead expect all necessary values to be provided by -either configuration files or the command line. 
It may therefore be necessary -to use the `-var` and `-var-file` options on `terraform plan` to specify any -variable values that would traditionally have been manually-entered under -interactive usage. - -It is strongly recommended to use a backend that supports -[remote state](/docs/state/remote.html), since that allows Terraform to -automatically save the state in a persistent location where it can be found -and updated by subsequent runs. Selecting a backend that supports -[state locking](/docs/state/locking.html) will additionally provide safety -against race conditions that can be caused by concurrent Terraform runs. - -## Controlling Terraform Output in Automation - -By default, some Terraform commands conclude by presenting a description -of a possible next step to the user, often including a specific command -to run next. - -An automation tool will often abstract away the details of exactly which -commands are being run, causing these messages to be confusing and -un-actionable, and possibly harmful if they inadvertently encourage a user to -bypass the automation tool entirely. - -When the environment variable `TF_IN_AUTOMATION` is set to any non-empty -value, Terraform makes some minor adjustments to its output to de-emphasize -specific commands to run. The specific changes made will vary over time, -but generally-speaking Terraform will consider this variable to indicate that -there is some wrapping application that will help the user with the next -step. - -To reduce complexity, this feature is implemented primarily for the main -workflow commands described above. Other ancillary commands may still produce -command line suggestions, regardless of this setting. - -## Plan and Apply on different machines - -When running in an orchestration tool, it can be difficult or impossible to -ensure that the `plan` and `apply` subcommands are run on the same machine, -in the same directory, with all of the same files present. - -Running `plan` and `apply` on different machines requires some additional -steps to ensure correct behavior. A robust strategy is as follows: - -* After `plan` completes, archive the entire working directory, including the - `.terraform` subdirectory created during `init`, and save it somewhere - where it will be available to the apply step. A common choice is as a - "build artifact" within the chosen orchestration tool. -* Before running `apply`, obtain the archive created in the previous step - and extract it _at the same absolute path_. This re-creates everything - that was present after plan, avoiding strange issues where local files - were created during the plan step. - -Terraform currently makes some assumptions which must be accommodated by -such an automation setup: - -* The saved plan file can contain absolute paths to child modules and other - data files referred to by configuration. Therefore it is necessary to ensure - that the archived configuration is extracted at an identical absolute path. - This is most commonly achieved by running Terraform in some sort of isolation, - such as a Docker container, where the filesystem layout can be controlled. -* Terraform assumes that the plan will be applied on the same operating system - and CPU architecture as where it was created. For example, this means that - it is not possible to create a plan on a Windows computer and then apply it - on a Linux server. 
-* Terraform expects the provider plugins that were used to produce a - plan to be available and identical when the plan is applied, to ensure - that the plan is interpreted correctly. An error will be produced if - Terraform or any plugins are upgraded between creating and applying a plan. -* Terraform can't automatically detect if the credentials used to create a - plan grant access to the same resources used to apply that plan. If using - different credentials for each (e.g. to generate the plan using read-only - credentials) it is important to ensure that the two are consistent - in which account on the corresponding service they belong to. - -~> The plan file contains a full copy of the configuration, the state that -the plan applies to, and any variables passed to `terraform plan`. If any of -these contain sensitive data then the archived working directory containing -the plan file should be protected accordingly. For provider authentication -credentials, it is recommended to use environment variables instead where -possible since these are _not_ included in the plan or persisted to disk -by Terraform in any other way. - -## Interactive Approval of Plans - -Another challenge with automating the Terraform workflow is the desire for an -interactive approval step between plan and apply. To implement this robustly, -it is important to ensure that either only one plan can be outstanding at a -time or that the two steps are connected such that approving a plan passes -along enough information to the apply step to ensure that the correct plan is -applied, as opposed to some later plan that also exists. - -Different orchestration tools address this in different ways, but generally -this is implemented via a _build pipeline_ feature, where different steps -can be applied in sequence, with later steps having access to data produced -by earlier steps. - -The recommended approach is to allow only one plan to be outstanding at a -time. When a plan is applied, any other existing plans that were produced -against the same state are invalidated, since they must now be recomputed -relative to the new state. By forcing plans to be approved (or dismissed) in -sequence, this can be avoided. - -## Auto-Approval of Plans - -While manual review of plans is strongly recommended for production -use-cases, it is sometimes desirable to take a more automatic approach -when deploying in pre-production or development situations. - -Where manual approval is not required, a simpler sequence of commands -can be used: - -* `terraform init -input=false` -* `terraform apply -input=false -auto-approve` - -This variant of the `apply` command implicitly creates a new plan and then -immediately applies it. The `-auto-approve` option tells Terraform not -to require interactive approval of the plan before applying it. - -~> When Terraform is empowered to make destructive changes to infrastructure, -manual review of plans is always recommended unless downtime is tolerated -in the event of unintended changes. Use automatic approval **only** with -non-critical infrastructure. - -## Testing Pull Requests with `terraform plan` - -`terraform plan` can be used as a way to perform certain limited verification -of the validity of a Terraform configuration, without affecting real -infrastructure. 
Although the plan step updates the state to match real -resources, thus ensuring an accurate plan, the updated state is _not_ -persisted, and so this command can safely be used to produce "throwaway" plans -that are created only to aid in code review. - -When implementing such a workflow, hooks can be used within the code review -tool in question (for example, Github Pull Requests) to trigger an orchestration -tool for each new commit under review. Terraform can be run in this case -as follows: - -* `terraform plan -input=false` - -As in the "main" workflow, it may be necessary to provide `-var` or `-var-file` -as appropriate. The `-out` option is not used in this scenario because a -plan produced for code review purposes will never be applied. Instead, a -new plan can be created and applied from the primary version control branch -once the change is merged. - -~> Beware that passing sensitive/secret data to Terraform via -variables or via environment variables will make it possible for anyone who -can submit a PR to discover those values, so this flow must be -used with care on an open source project, or on any private project where -some or all contributors should not have direct access to credentials, etc. - -## Multi-environment Deployment - -Automation of Terraform often goes hand-in-hand with creating the same -configuration multiple times to produce parallel environments for use-cases -such as pre-release testing or multi-tenant infrastructure. Automation -in such a situation can help ensure that the correct settings are used for -each environment, and that the working directory is properly configured -before each operation. - -The two most interesting commands for multi-environment orchestration are -`terraform init` and `terraform workspace`. The former can be used with -additional options to tailor the backend configuration for any differences -between environments, while the latter can be used to safely switch between -multiple states for the same config stored in a single backend. - -Where possible, it's recommended to use a single backend configuration for -all environments and use the `terraform workspace` command to switch -between workspaces: - -* `terraform init -input=false` -* `terraform workspace select QA` - -In this usage model, a fixed naming scheme is used within the backend -storage to allow multiple states to exist without any further configuration. - -Alternatively, the automation tool can set the environment variable -`TF_WORKSPACE` to an existing workspace name, which overrides any selection -made with the `terraform workspace select` command. Using this environment -variable is recommended only for non-interactive usage, since in a local shell -environment it can be easy to forget the variable is set and apply changes -to the wrong state. - -In some more complex situations it is impossible to share the same -[backend configuration](/docs/backends/config.html) across environments. For -example, the environments may exist in entirely separate accounts within the -target service, and thus need to use different credentials or endpoints for the -backend itself. In such situations, backend configuration settings can be -overridden via -[the `-backend-config` option to `terraform init`](/docs/commands/init.html#backend-config). 
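For illustration, such an override might look like the following; the settings shown are hypothetical, and the available keys depend on the chosen backend:

```
$ terraform init -input=false \
    -backend-config="bucket=example-state-prod" \
    -backend-config="key=networking/terraform.tfstate" \
    -backend-config="region=us-east-1"
```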
- -## Pre-installed Plugins - -In default usage, [`terraform init`](/docs/commands/init.html#backend-config) -downloads and installs the plugins for any providers used in the configuration -automatically, placing them in a subdirectory of the `.terraform` directory. -This affords a simpler workflow for straightforward cases, and allows each -configuration to potentially use different versions of plugins. - -In automation environments, it can be desirable to disable this behavior -and instead provide a fixed set of plugins already installed on the system -where Terraform is running. This then avoids the overhead of re-downloading -the plugins on each execution, and allows the system administrator to control -which plugins are available. - -To use this mechanism, create a directory somewhere on the system where -Terraform will run and place into it the plugin executable files. The -plugin release archives are available for download on -[releases.hashicorp.com](https://releases.hashicorp.com/). Be sure to -download the appropriate archive for the target operating system and -architecture. - -After extracting the necessary plugins, the contents of the new plugin -directory will look something like this: - -``` -$ ls -lah /usr/lib/custom-terraform-plugins --rwxrwxr-x 1 user user 84M Jun 13 15:13 terraform-provider-aws-v1.0.0-x3 --rwxrwxr-x 1 user user 84M Jun 13 15:15 terraform-provider-rundeck-v2.3.0-x3 --rwxrwxr-x 1 user user 84M Jun 13 15:15 terraform-provider-mysql-v1.2.0-x3 -``` - -The version information at the end of the filenames is important so that -Terraform can infer the version number of each plugin. Multiple versions of the -same provider plugin can be installed, and Terraform will use the newest one -that matches the -[provider version constraints](/docs/configuration/providers.html#provider-versions) -in the Terraform configuration. - -With this directory populated, the usual auto-download and -[plugin discovery](/docs/extend/how-terraform-works.html#discovery) -behavior can be bypassed using the `-plugin-dir` option to `terraform init`: - -* `terraform init -input=false -plugin-dir=/usr/lib/custom-terraform-plugins` - -When this option is used, only the plugins in the given directory are -available for use. This gives the system administrator a high level of -control over the execution environment, but on the other hand it prevents -use of newer plugin versions that have not yet been installed into the -local plugin directory. Which approach is more appropriate will depend on -unique constraints within each organization. - -Plugins can also be provided along with the configuration by creating a -`terraform.d/plugins/OS_ARCH` directory, which will be searched before -automatically downloading additional plugins. The `-get-plugins=false` flag can -be used to prevent Terraform from automatically downloading additional plugins. - -## Terraform Cloud - -As an alternative to home-grown automation solutions, Hashicorp offers -[Terraform Cloud](https://www.hashicorp.com/products/terraform/). - -Internally, Terraform Cloud runs the same Terraform CLI commands -described above, using the same release binaries offered for download on this -site. - -Terraform Cloud builds on the core Terraform CLI functionality to add -additional features such as role-based access control, orchestration of the -plan and apply lifecycle, a user interface for reviewing and approving plans, -and much more. 
- -It will always be possible to run Terraform via in-house automation, to -allow for usage in situations where Terraform Cloud is not appropriate. -It is recommended to consider Terraform Cloud as an alternative to -in-house solutions, since it provides an out-of-the-box solution that -already incorporates the best practices described in this guide and can thus -reduce time spent developing and maintaining an in-house alternative. diff --git a/website/intro/examples/consul.html.markdown b/website/intro/examples/consul.html.markdown index ecd3df359..2ec2ae117 100644 --- a/website/intro/examples/consul.html.markdown +++ b/website/intro/examples/consul.html.markdown @@ -36,7 +36,7 @@ visiting the [Web UI](https://demo.consul.io/ui/dc1/kv/). We can see that the `tf_test/id` and `tf_test/public_dns` values have been set. -You can now [tear down the infrastructure](/intro/getting-started/destroy.html) +You can now [tear down the infrastructure](https://learn.hashicorp.com/terraform/getting-started/destroy). Because we set the `delete` property of two of the Consul keys, Terraform will clean up those keys on destroy. We can verify this by using the Web UI. diff --git a/website/intro/examples/index.html.markdown b/website/intro/examples/index.html.markdown index c4d60aeaf..a70af7a59 100644 --- a/website/intro/examples/index.html.markdown +++ b/website/intro/examples/index.html.markdown @@ -25,7 +25,7 @@ Experimenting in this way can help you learn how the Terraform lifecycle works, as well as how to repeatedly create and destroy infrastructure. If you're completely new to Terraform, we recommend reading the -[getting started guide](/intro/getting-started/install.html) before diving into +[getting started guide](https://learn.hashicorp.com/terraform/getting-started/install) before diving into the examples. However, due to the intuitive configuration Terraform uses it isn't required. diff --git a/website/intro/getting-started/build.html.md b/website/intro/getting-started/build.html.md deleted file mode 100644 index d38575e3b..000000000 --- a/website/intro/getting-started/build.html.md +++ /dev/null @@ -1,291 +0,0 @@ ---- -layout: "intro" -page_title: "Build Infrastructure" -sidebar_current: "gettingstarted-build" -description: |- - With Terraform installed, let's dive right into it and start creating some infrastructure. ---- - -# Build Infrastructure - -With Terraform installed, let's dive right into it and start creating -some infrastructure. - -We'll build infrastructure on -[AWS](https://aws.amazon.com) for this Getting Started guide -since it is popular and generally understood, but Terraform -can [manage many providers](/docs/providers/index.html), -including multiple providers in a single configuration. -Some examples of this are in the -[use cases section](/intro/use-cases.html). - -If you don't have an AWS account, -[create one now](https://aws.amazon.com/free/). -For the getting started guide, we'll only be using resources -which qualify under the AWS -[free-tier](https://aws.amazon.com/free/), -meaning it will be free. - -~> **Warning!** If you're not using an account that qualifies under the AWS -[free-tier](https://aws.amazon.com/free/), you may be charged to run these -examples. The most you should be charged should only be a few dollars, but -we're not responsible for any charges that may incur. - -## Configuration - -The set of files used to describe infrastructure in Terraform is simply -known as a Terraform _configuration_. 
We're going to write our first -configuration now to launch a single AWS EC2 instance. - -The format of the configuration files is -[documented here](/docs/configuration/index.html). -Configuration files can -[also be JSON](/docs/configuration/syntax.html), but we recommend only using JSON when the -configuration is generated by a machine. - -The entire configuration is shown below. We'll go over each part -after. Save the contents to a file named `example.tf`. Verify that -there are no other `*.tf` files in your directory, since Terraform -loads all of them. - -```hcl -provider "aws" { - access_key = "ACCESS_KEY_HERE" - secret_key = "SECRET_KEY_HERE" - region = "us-east-1" -} - -resource "aws_instance" "example" { - ami = "ami-2757f631" - instance_type = "t2.micro" -} -``` - -~> **Note**: The above configuration is designed to work on most EC2 accounts, -with access to a default VPC. For EC2 Classic users, please use `t1.micro` for -`instance_type`, and `ami-408c7f28` for the `ami`. If you use a region other than -`us-east-1` then you will need to choose an AMI in that region -as AMI IDs are region specific. - -Replace the `ACCESS_KEY_HERE` and `SECRET_KEY_HERE` with your -AWS access key and secret key, available from -[this page](https://console.aws.amazon.com/iam/home?#security_credential). -We're hardcoding them for now, but will extract these into -variables later in the getting started guide. - -~> **Note**: If you simply leave out AWS credentials, Terraform will -automatically search for saved API credentials (for example, -in `~/.aws/credentials`) or IAM instance profile credentials. -This option is much cleaner for situations where tf files are checked into -source control or where there is more than one admin user. -See details [here](https://aws.amazon.com/blogs/apn/terraform-beyond-the-basics-with-aws/). -Leaving IAM credentials out of the Terraform configs allows you to leave those -credentials out of source control, and also use different IAM credentials -for each user without having to modify the configuration files. - -This is a complete configuration that Terraform is ready to apply. -The general structure should be intuitive and straightforward. - -The `provider` block is used to configure the named provider, in -our case "aws". A provider is responsible for creating and -managing resources. Multiple provider blocks can exist in a -Terraform configuration if the infrastructure needs them. - -The `resource` block defines a resource that exists within -the infrastructure. A resource might be a physical component such -as an EC2 instance, or it can be a logical resource such as -a Heroku application. - -The resource block has two strings before opening the block: -the resource type and the resource name. In our example, the -resource type is "aws\_instance" and the name is "example." -The prefix of the type maps to the provider. In our case -"aws\_instance" automatically tells Terraform that it is -managed by the "aws" provider. - -Within the resource block itself is configuration for that -resource. This is dependent on each resource provider and -is fully documented within our -[providers reference](/docs/providers/index.html). For our EC2 instance, we specify -an AMI for Ubuntu, and request a "t2.micro" instance so we -qualify under the free tier. 
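As a sketch of the credentials note above: if the `access_key` and `secret_key` arguments are omitted from the `provider` block, the AWS provider can read the standard AWS environment variables instead. The values shown are placeholders:

```
$ export AWS_ACCESS_KEY_ID="ACCESS_KEY_HERE"
$ export AWS_SECRET_ACCESS_KEY="SECRET_KEY_HERE"
$ terraform plan
```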
-
-## Initialization
-
-The first command to run for a new configuration -- or after checking out
-an existing configuration from version control -- is `terraform init`, which
-initializes various local settings and data that will be used by subsequent
-commands.
-
-Terraform uses a plugin based architecture to support the numerous infrastructure
-and service providers available. As of Terraform version 0.10.0, each "Provider" is its
-own encapsulated binary distributed separately from Terraform itself. The
-`terraform init` command will automatically download and install any Provider
-binary for the providers in use within the configuration, which in this case is
-just the `aws` provider:
-
-
-```
-$ terraform init
-Initializing the backend...
-Initializing provider plugins...
-- downloading plugin for provider "aws"...
-
-The following providers do not have any version constraints in configuration,
-so the latest version was installed.
-
-To prevent automatic upgrades to new major versions that may contain breaking
-changes, it is recommended to add version = "..." constraints to the
-corresponding provider blocks in configuration, with the constraint strings
-suggested below.
-
-* provider.aws: version = "~> 1.0"
-
-Terraform has been successfully initialized!
-
-You may now begin working with Terraform. Try running "terraform plan" to see
-any changes that are required for your infrastructure. All Terraform commands
-should now work.
-
-If you ever set or change modules or backend configuration for Terraform,
-rerun this command to reinitialize your environment. If you forget, other
-commands will detect it and remind you to do so if necessary.
-```
-
-The `aws` provider plugin is downloaded and installed in a subdirectory of
-the current working directory, along with various other book-keeping files.
-
-The output specifies which version of the plugin was installed, and suggests
-specifying that version in configuration to ensure that running
-`terraform init` in future will install a compatible version. This step
-is not necessary for following the getting started guide, since this
-configuration will be discarded at the end.
-
-## Apply Changes
-
-~> **Note:** The commands shown in this guide apply to Terraform 0.11 and
-   above. Earlier versions require using the `terraform plan` command to
-   see the execution plan before applying it. Use `terraform version`
-   to confirm your running version.
-
-In the same directory as the `example.tf` file you created, run
-`terraform apply`. You should see output similar to below, though we've
-truncated some of the output to save space:
-
-```
-$ terraform apply
-# ...
-
-+ aws_instance.example
-    ami:                      "ami-2757f631"
-    availability_zone:        "<computed>"
-    ebs_block_device.#:       "<computed>"
-    ephemeral_block_device.#: "<computed>"
-    instance_state:           "<computed>"
-    instance_type:            "t2.micro"
-    key_name:                 "<computed>"
-    placement_group:          "<computed>"
-    private_dns:              "<computed>"
-    private_ip:               "<computed>"
-    public_dns:               "<computed>"
-    public_ip:                "<computed>"
-    root_block_device.#:      "<computed>"
-    security_groups.#:        "<computed>"
-    source_dest_check:        "true"
-    subnet_id:                "<computed>"
-    tenancy:                  "<computed>"
-    vpc_security_group_ids.#: "<computed>"
-```
-
-This output shows the _execution plan_, describing which actions Terraform
-will take in order to change real infrastructure to match the configuration.
-The output format is similar to the diff format generated by tools
-such as Git. The output has a `+` next to `aws_instance.example`,
-meaning that Terraform will create this resource. Beneath that,
-it shows the attributes that will be set.
When the value displayed
-is `<computed>`, it means that the value won't be known
-until the resource is created.
-
-
-If `terraform apply` failed with an error, read the error message
-and fix the error that occurred. At this stage, it is likely to be a
-syntax error in the configuration.
-
-If the plan was created successfully, Terraform will now pause and wait for
-approval before proceeding. If anything in the plan seems incorrect or
-dangerous, it is safe to abort here with no changes made to your infrastructure.
-In this case the plan looks acceptable, so type `yes` at the confirmation
-prompt to proceed.
-
-Executing the plan will take a few minutes since Terraform waits for the EC2
-instance to become available:
-
-```
-# ...
-aws_instance.example: Creating...
-  ami:           "" => "ami-2757f631"
-  instance_type: "" => "t2.micro"
-  [...]
-
-aws_instance.example: Still creating... (10s elapsed)
-aws_instance.example: Creation complete
-
-Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
-
-# ...
-```
-
-After this, Terraform is all done! You can go to the EC2 console to see the newly
-created EC2 instance. (Make sure you're looking at the same region that was
-configured in the provider configuration!)
-
-Terraform also wrote some data into the `terraform.tfstate` file. This state
-file is extremely important; it keeps track of the IDs of created resources
-so that Terraform knows what it is managing. This file must be saved and
-distributed to anyone who might run Terraform. It is generally recommended to
-[setup remote state](https://www.terraform.io/docs/state/remote.html)
-when working with Terraform, to share the state automatically, but this is
-not necessary for simple situations like this Getting Started guide.
-
-You can inspect the current state using `terraform show`:
-
-```
-$ terraform show
-aws_instance.example:
-  id = i-32cf65a8
-  ami = ami-2757f631
-  availability_zone = us-east-1a
-  instance_state = running
-  instance_type = t2.micro
-  private_ip = 172.31.30.244
-  public_dns = ec2-52-90-212-55.compute-1.amazonaws.com
-  public_ip = 52.90.212.55
-  subnet_id = subnet-1497024d
-  vpc_security_group_ids.# = 1
-  vpc_security_group_ids.3348721628 = sg-67652003
-```
-
-You can see that by creating our resource, we've also gathered
-a lot of information about it. These values can actually be referenced
-to configure other resources or outputs, which will be covered later in
-the getting started guide.
-
-## Provisioning
-
-The EC2 instance we launched at this point is based on the AMI
-given, but has no additional software installed. If you're running
-an image-based infrastructure (perhaps creating images with
-[Packer](https://www.packer.io)), then this is all you need.
-
-However, many infrastructures still require some sort of initialization
-or software provisioning step. Terraform supports provisioners,
-which we'll cover a little bit later in the getting started guide,
-in order to do this.
-
-## Next
-
-Congratulations! You've built your first infrastructure with Terraform.
-You've seen the configuration syntax, an example of a basic execution
-plan, and understand the state file.
-
-Next, we're going to move on to [changing and destroying infrastructure](/intro/getting-started/change.html).
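The Provisioning section of the page above mentions provisioners without showing one. As a brief hedged sketch, not part of the original guide, a `local-exec` provisioner runs a command on the machine where Terraform itself is executing once the resource has been created:

```hcl
resource "aws_instance" "example" {
  ami           = "ami-2757f631"
  instance_type = "t2.micro"

  # Runs locally (not on the new instance) after creation;
  # `self` refers to the resource this provisioner is attached to.
  provisioner "local-exec" {
    command = "echo ${self.public_ip} > ip_address.txt"
  }
}
```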
diff --git a/website/intro/getting-started/change.html.md b/website/intro/getting-started/change.html.md
deleted file mode 100644
index d65bbe648..000000000
--- a/website/intro/getting-started/change.html.md
+++ /dev/null
@@ -1,123 +0,0 @@
----
-layout: "intro"
-page_title: "Change Infrastructure"
-sidebar_current: "gettingstarted-change"
-description: |-
-  In the previous page, you created your first infrastructure with Terraform: a single EC2 instance. In this page, we're going to modify that resource, and see how Terraform handles change.
----
-
-# Change Infrastructure
-
-In the previous page, you created your first infrastructure with
-Terraform: a single EC2 instance. In this page, we're going to
-modify that resource, and see how Terraform handles change.
-
-Infrastructure is continuously evolving, and Terraform was built
-to help manage and enact that change. As you change Terraform
-configurations, Terraform builds an execution plan that only
-modifies what is necessary to reach your desired state.
-
-By using Terraform to change infrastructure, you can version
-control not only your configurations but also your state so you
-can see how the infrastructure evolved over time.
-
-## Configuration
-
-Let's modify the `ami` of our instance. Edit the `aws_instance.example`
-resource in your configuration and change it to the following:
-
-```hcl
-resource "aws_instance" "example" {
-  ami           = "ami-b374d5a5"
-  instance_type = "t2.micro"
-}
-```
-
-~> **Note:** EC2 Classic users please use AMI `ami-656be372` and type `t1.micro`
-
-We've changed the AMI from being an Ubuntu 16.04 LTS AMI to being
-an Ubuntu 16.10 AMI. Terraform configurations are meant to be
-changed like this. You can also completely remove resources
-and Terraform will know to destroy the old one.
-
-## Apply Changes
-
-After changing the configuration, run `terraform apply` again to see how
-Terraform will apply this change to the existing resources.
-
-```
-$ terraform apply
-# ...
-
--/+ aws_instance.example
-    ami:                      "ami-2757f631" => "ami-b374d5a5" (forces new resource)
-    availability_zone:        "us-east-1a" => "<computed>"
-    ebs_block_device.#:       "0" => "<computed>"
-    ephemeral_block_device.#: "0" => "<computed>"
-    instance_state:           "running" => "<computed>"
-    instance_type:            "t2.micro" => "t2.micro"
-    private_dns:              "ip-172-31-17-94.ec2.internal" => "<computed>"
-    private_ip:               "172.31.17.94" => "<computed>"
-    public_dns:               "ec2-54-82-183-4.compute-1.amazonaws.com" => "<computed>"
-    public_ip:                "54.82.183.4" => "<computed>"
-    subnet_id:                "subnet-1497024d" => "<computed>"
-    vpc_security_group_ids.#: "1" => "<computed>"
-```
-
-The prefix `-/+` means that Terraform will destroy and recreate
-the resource, rather than updating it in-place. While some attributes
-can be updated in-place (which are shown with the `~` prefix), changing the
-AMI for an EC2 instance requires recreating it. Terraform handles these details
-for you, and the execution plan makes it clear what Terraform will do.
-
-Additionally, the execution plan shows that the AMI change is what
-required the resource to be replaced. Using this information,
-you can adjust your changes to possibly avoid destroy/create updates
-if they are not acceptable in some situations.
-
-Once again, Terraform prompts for approval of the execution plan before
-proceeding. Answer `yes` to execute the planned steps:
-
-
-```
-# ...
-aws_instance.example: Refreshing state... (ID: i-64c268fe)
-aws_instance.example: Destroying...
-aws_instance.example: Destruction complete
-aws_instance.example: Creating...
-  ami:                      "" => "ami-b374d5a5"
-  availability_zone:        "" => "<computed>"
-  ebs_block_device.#:       "" => "<computed>"
-  ephemeral_block_device.#: "" => "<computed>"
-  instance_state:           "" => "<computed>"
-  instance_type:            "" => "t2.micro"
-  key_name:                 "" => "<computed>"
-  placement_group:          "" => "<computed>"
-  private_dns:              "" => "<computed>"
-  private_ip:               "" => "<computed>"
-  public_dns:               "" => "<computed>"
-  public_ip:                "" => "<computed>"
-  root_block_device.#:      "" => "<computed>"
-  security_groups.#:        "" => "<computed>"
-  source_dest_check:        "" => "true"
-  subnet_id:                "" => "<computed>"
-  tenancy:                  "" => "<computed>"
-  vpc_security_group_ids.#: "" => "<computed>"
-aws_instance.example: Still creating... (10s elapsed)
-aws_instance.example: Still creating... (20s elapsed)
-aws_instance.example: Creation complete
-
-Apply complete! Resources: 1 added, 0 changed, 1 destroyed.
-
-# ...
-```
-
-As indicated by the execution plan, Terraform first destroyed the existing
-instance and then created a new one in its place. You can use `terraform show`
-again to see the new values associated with this instance.
-
-## Next
-
-You've now seen how easy it is to modify infrastructure with
-Terraform. Feel free to play around with this more before continuing.
-In the next section we're going to [destroy our infrastructure](/intro/getting-started/destroy.html).
diff --git a/website/intro/getting-started/dependencies.html.md b/website/intro/getting-started/dependencies.html.md
deleted file mode 100644
index 32d159b77..000000000
--- a/website/intro/getting-started/dependencies.html.md
+++ /dev/null
@@ -1,201 +0,0 @@
----
-layout: "intro"
-page_title: "Resource Dependencies"
-sidebar_current: "gettingstarted-deps"
-description: |-
-  In this page, we're going to introduce resource dependencies, where we'll not only see a configuration with multiple resources for the first time, but also scenarios where resource parameters use information from other resources.
----
-
-# Resource Dependencies
-
-In this page, we're going to introduce resource dependencies,
-where we'll not only see a configuration with multiple resources
-for the first time, but also scenarios where resource parameters
-use information from other resources.
-
-Up to this point, our example has only contained a single resource.
-Real infrastructure has a diverse set of resources and resource
-types. Terraform configurations can contain multiple resources,
-multiple resource types, and these types can even span multiple
-providers.
-
-On this page, we'll show a basic example of multiple resources
-and how to reference the attributes of other resources to configure
-subsequent resources.
-
-## Assigning an Elastic IP
-
-We'll improve our configuration by assigning an elastic IP to
-the EC2 instance we're managing. Modify your `example.tf` and
-add the following:
-
-```hcl
-resource "aws_eip" "ip" {
-  instance = "${aws_instance.example.id}"
-}
-```
-
-This should look familiar from the earlier example of adding
-an EC2 instance resource, except this time we're building
-an "aws\_eip" resource type. This resource type allocates
-and associates an
-[elastic IP](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html)
-to an EC2 instance.
-
-The only parameter for
-[aws\_eip](/docs/providers/aws/r/eip.html) is "instance" which
-is the EC2 instance to assign the IP to. For this value, we
-use an interpolation to use an attribute from the EC2 instance
-we managed earlier.
-
-The syntax for this interpolation should be straightforward:
-it requests the "id" attribute from the "aws\_instance.example"
-resource.
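The same interpolation syntax works anywhere in the configuration, not just in resource arguments. As a sketch that is not part of the original page, the Elastic IP's address could be exported as an output value:

```hcl
output "elastic_ip" {
  # Interpolates the "public_ip" attribute of the aws_eip named "ip".
  value = "${aws_eip.ip.public_ip}"
}
```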
-
-## Apply Changes
-
-Run `terraform apply` to see how Terraform plans to apply this change.
-The output will look similar to the following:
-
-```
-$ terraform apply
-
-+ aws_eip.ip
-    allocation_id:     "<computed>"
-    association_id:    "<computed>"
-    domain:            "<computed>"
-    instance:          "${aws_instance.example.id}"
-    network_interface: "<computed>"
-    private_ip:        "<computed>"
-    public_ip:         "<computed>"
-
-+ aws_instance.example
-    ami:                      "ami-b374d5a5"
-    availability_zone:        "<computed>"
-    ebs_block_device.#:       "<computed>"
-    ephemeral_block_device.#: "<computed>"
-    instance_state:           "<computed>"
-    instance_type:            "t2.micro"
-    key_name:                 "<computed>"
-    placement_group:          "<computed>"
-    private_dns:              "<computed>"
-    private_ip:               "<computed>"
-    public_dns:               "<computed>"
-    public_ip:                "<computed>"
-    root_block_device.#:      "<computed>"
-    security_groups.#:        "<computed>"
-    source_dest_check:        "true"
-    subnet_id:                "<computed>"
-    tenancy:                  "<computed>"
-    vpc_security_group_ids.#: "<computed>"
-```
-
-Terraform will create two resources: the instance and the elastic
-IP. In the "instance" value for the "aws\_eip", you can see the
-raw interpolation is still present. This is because this variable
-won't be known until the "aws\_instance" is created. It will be
-replaced at apply-time.
-
-As usual, Terraform prompts for confirmation before making any changes.
-Answer `yes` to apply. The continued output will look similar to the
-following:
-
-```
-# ...
-aws_instance.example: Creating...
-  ami:           "" => "ami-b374d5a5"
-  instance_type: "" => "t2.micro"
-  [..]
-aws_instance.example: Still creating... (10s elapsed)
-aws_instance.example: Creation complete
-aws_eip.ip: Creating...
-  allocation_id:     "" => "<computed>"
-  association_id:    "" => "<computed>"
-  domain:            "" => "<computed>"
-  instance:          "" => "i-f3d77d69"
-  network_interface: "" => "<computed>"
-  private_ip:        "" => "<computed>"
-  public_ip:         "" => "<computed>"
-aws_eip.ip: Creation complete
-
-Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
-```
-
-As shown above, Terraform created the EC2 instance before creating the Elastic
-IP address. Due to the interpolation expression that passes the ID of the EC2
-instance to the Elastic IP address, Terraform is able to infer a dependency,
-and knows it must create the instance first.
-
-## Implicit and Explicit Dependencies
-
-By studying the resource attributes used in interpolation expressions,
-Terraform can automatically infer when one resource depends on another.
-In the example above, the expression `${aws_instance.example.id}` creates
-an _implicit dependency_ on the `aws_instance` named `example`.
-
-Terraform uses this dependency information to determine the correct order
-in which to create the different resources. In the example above, Terraform
-knows that the `aws_instance` must be created before the `aws_eip`.
-
-Implicit dependencies via interpolation expressions are the primary way
-to inform Terraform about these relationships, and should be used whenever
-possible.
-
-Sometimes there are dependencies between resources that are _not_ visible to
-Terraform. The `depends_on` argument is accepted by any resource and accepts
-a list of resources to create _explicit dependencies_ for.
-
-For example, perhaps an application we will run on our EC2 instance expects
-to use a specific Amazon S3 bucket, but that dependency is configured
-inside the application code and thus not visible to Terraform. In
-that case, we can use `depends_on` to explicitly declare the dependency:
-
-```hcl
-# New resource for the S3 bucket our application will use.
-resource "aws_s3_bucket" "example" {
-  # NOTE: S3 bucket names must be unique across _all_ AWS accounts, so
-  # this name must be changed before applying this example to avoid naming
-  # conflicts.
- bucket = "terraform-getting-started-guide" - acl = "private" -} - -# Change the aws_instance we declared earlier to now include "depends_on" -resource "aws_instance" "example" { - ami = "ami-2757f631" - instance_type = "t2.micro" - - # Tells Terraform that this EC2 instance must be created only after the - # S3 bucket has been created. - depends_on = ["aws_s3_bucket.example"] -} -``` - -## Non-Dependent Resources - -We can continue to build this configuration by adding another EC2 instance: - -```hcl -resource "aws_instance" "another" { - ami = "ami-b374d5a5" - instance_type = "t2.micro" -} -``` - -Because this new instance does not depend on any other resource, it can -be created in parallel with the other resources. Where possible, Terraform -will perform operations concurrently to reduce the total time taken to -apply changes. - -Before moving on, remove this new resource from your configuration and -run `terraform apply` again to destroy it. We won't use this second instance -any further in the getting started guide. - -## Next - -In this page you were introduced to using multiple resources, interpolating -attributes from one resource into another, and declaring dependencies between -resources to define operation ordering. - -In the next section, [we'll use provisioners](/intro/getting-started/provision.html) -to do some basic bootstrapping of our launched instance. diff --git a/website/intro/getting-started/destroy.html.md b/website/intro/getting-started/destroy.html.md deleted file mode 100644 index 643b543b0..000000000 --- a/website/intro/getting-started/destroy.html.md +++ /dev/null @@ -1,62 +0,0 @@ ---- -layout: "intro" -page_title: "Destroy Infrastructure" -sidebar_current: "gettingstarted-destroy" -description: |- - We've now seen how to build and change infrastructure. Before we move on to creating multiple resources and showing resource dependencies, we're going to go over how to completely destroy the Terraform-managed infrastructure. ---- - -# Destroy Infrastructure - -We've now seen how to build and change infrastructure. Before we -move on to creating multiple resources and showing resource -dependencies, we're going to go over how to completely destroy -the Terraform-managed infrastructure. - -Destroying your infrastructure is a rare event in production -environments. But if you're using Terraform to spin up multiple -environments such as development, test, QA environments, then -destroying is a useful action. - -## Destroy - -Resources can be destroyed using the `terraform destroy` command, which is -similar to `terraform apply` but it behaves as if all of the resources have -been removed from the configuration. - -``` -$ terraform destroy -# ... - - - aws_instance.example -``` - -The `-` prefix indicates that the instance will be destroyed. As with apply, -Terraform shows its execution plan and waits for approval before making any -changes. - -Answer `yes` to execute this plan and destroy the infrastructure: - -``` -# ... -aws_instance.example: Destroying... - -Destroy complete! Resources: 1 destroyed. - -# ... -``` - -Just like with `apply`, Terraform determines the order in which -things must be destroyed. In this case there was only one resource, so no -ordering was necessary. In more complicated cases with multiple resources, -Terraform will destroy them in a suitable order to respect dependencies, -as we'll see later in this guide. - -## Next - -You now know how to create, modify, and destroy infrastructure -from a local machine. 
- -Next, we move on to features that make Terraform configurations -slightly more useful: [variables, resource dependencies, provisioning, -and more](/intro/getting-started/dependencies.html). diff --git a/website/intro/getting-started/install.html.markdown b/website/intro/getting-started/install.html.markdown deleted file mode 100644 index ed2de691b..000000000 --- a/website/intro/getting-started/install.html.markdown +++ /dev/null @@ -1,64 +0,0 @@ ---- -layout: "intro" -page_title: "Installing Terraform" -sidebar_current: "gettingstarted-install" -description: |- - Terraform must first be installed on your machine. Terraform is distributed as - a binary package for all supported platforms and architecture. This page will - not cover how to compile Terraform from source. ---- - -# Install Terraform - -Terraform must first be installed on your machine. Terraform is distributed as a -[binary package](/downloads.html) for all supported platforms and architectures. -This page will not cover how to compile Terraform from source, but compiling -from source is covered in the [documentation](/docs/index.html) for those who -want to be sure they're compiling source they trust into the final binary. - -## Installing Terraform - -To install Terraform, find the [appropriate package](/downloads.html) for your -system and download it. Terraform is packaged as a zip archive. - -After downloading Terraform, unzip the package. Terraform runs as a single -binary named `terraform`. Any other files in the package can be safely removed -and Terraform will still function. - -The final step is to make sure that the `terraform` binary is available on the `PATH`. -See [this page](https://stackoverflow.com/questions/14637979/how-to-permanently-set-path-on-linux) -for instructions on setting the PATH on Linux and Mac. -[This page](https://stackoverflow.com/questions/1618280/where-can-i-set-path-to-make-exe-on-windows) -contains instructions for setting the PATH on Windows. - -## Verifying the Installation - -After installing Terraform, verify the installation worked by opening a new -terminal session and checking that `terraform` is available. By executing -`terraform` you should see help output similar to this: - -```text -$ terraform -Usage: terraform [--version] [--help] [args] - -The available commands for execution are listed below. -The most common, useful commands are shown first, followed by -less common or more advanced commands. If you're just getting -started with Terraform, stick with the common commands. For the -other commands, please read the help and docs before usage. - -Common commands: - apply Builds or changes infrastructure - console Interactive console for Terraform interpolations -# ... -``` - -If you get an error that `terraform` could not be found, your `PATH` environment -variable was not set up properly. Please go back and ensure that your `PATH` -variable contains the directory where Terraform was installed. - -## Next Steps - -Time to [build infrastructure](/intro/getting-started/build.html) using a -minimal Terraform configuration file. You will be able to examine Terraform's -execution plan before you deploy it to AWS. 
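For reference, the installation steps described on the page above typically reduce to a few commands on Linux or macOS. The zip filename below is a hypothetical example and will vary by version and platform:

```
$ unzip terraform_0.11.14_linux_amd64.zip
$ sudo mv terraform /usr/local/bin/
$ terraform version
```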
diff --git a/website/intro/getting-started/modules.html.md b/website/intro/getting-started/modules.html.md deleted file mode 100644 index 82a984fde..000000000 --- a/website/intro/getting-started/modules.html.md +++ /dev/null @@ -1,269 +0,0 @@ ---- -layout: "intro" -page_title: "Modules" -sidebar_current: "gettingstarted-modules" -description: |- - Up to this point, we've been configuring Terraform by editing Terraform configurations directly. As our infrastructure grows, this practice has a few key problems: a lack of organization, a lack of reusability, and difficulties in management for teams. ---- - -# Modules - -Up to this point, we've been configuring Terraform by editing Terraform -configurations directly. As our infrastructure grows, this practice has a few -key problems: a lack of organization, a lack of reusability, and difficulties -in management for teams. - -_Modules_ in Terraform are self-contained packages of Terraform configurations -that are managed as a group. Modules are used to create reusable components, -improve organization, and to treat pieces of infrastructure as a black box. - -This section of the getting started will cover the basics of using modules. -Writing modules is covered in more detail in the -[modules documentation](/docs/modules/index.html). - -~> **Warning!** The examples on this page are _**not** eligible_ for -[the AWS free tier](https://aws.amazon.com/free/). Do not try the examples -on this page unless you're willing to spend a small amount of money. - -## Using Modules - -If you have any instances running from prior steps in the getting -started guide, use `terraform destroy` to destroy them, and remove all -configuration files. - -The [Terraform Registry](https://registry.terraform.io/) includes a directory -of ready-to-use modules for various common purposes, which can serve as -larger building-blocks for your infrastructure. - -In this example, we're going to use -[the Consul Terraform module for AWS](https://registry.terraform.io/modules/hashicorp/consul/aws), -which will set up a complete [Consul](https://www.consul.io) cluster. -This and other modules can be found via the search feature on the Terraform -Registry site. - -Create a configuration file with the following contents: - -```hcl -provider "aws" { - access_key = "AWS ACCESS KEY" - secret_key = "AWS SECRET KEY" - region = "us-east-1" -} - -module "consul" { - source = "hashicorp/consul/aws" - - num_servers = "3" -} -``` - -The `module` block begins with the example given on the Terraform Registry -page for this module, telling Terraform to create and manage this module. -This is similar to a `resource` block: it has a name used within this -configuration -- in this case, `"consul"` -- and a set of input values -that are listed in -[the module's "Inputs" documentation](https://registry.terraform.io/modules/hashicorp/consul/aws?tab=inputs). - -(Note that the `provider` block can be omitted in favor of environment -variables. See the [AWS Provider docs](/docs/providers/aws/index.html) -for details. This module requires that your AWS account has a default VPC.) - -The `source` attribute is the only mandatory argument for modules. It tells -Terraform where the module can be retrieved. Terraform automatically -downloads and manages modules for you. - -In this case, the module is retrieved from the official Terraform Registry. -Terraform can also retrieve modules from a variety of sources, including -private module registries or directly from Git, Mercurial, HTTP, and local -files. 
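For illustration, those alternative `source` forms look roughly like this; the local path, repository URL, and `ref` value are hypothetical:

```hcl
# A module in a local directory, relative to this configuration.
module "consul_local" {
  source = "./modules/consul"
}

# The same module fetched directly from a Git repository, pinned to a tag.
module "consul_git" {
  source = "git::https://github.com/hashicorp/terraform-aws-consul.git?ref=v0.7.3"
}
```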
-
-The other attributes shown are inputs to our module. This module supports many
-additional inputs, but all are optional and have reasonable default values for
-experimentation.
-
-After adding a new module to the configuration, it is necessary to run (or
-re-run) `terraform init` to obtain and install the new module's source code:
-
-```
-$ terraform init
-# ...
-```
-
-By default, this command does not check for new module versions that may be
-available, so it is safe to run multiple times. The `-upgrade` option will
-additionally check for any newer versions of existing modules and providers
-that may be available.
-
-## Apply Changes
-
-With the Consul module (and its dependencies) installed, we can now apply
-these changes to create the resources described within.
-
-If you run `terraform apply`, you will see a large list of all of the
-resources encapsulated in the module. The output is similar to what we
-saw when using resources directly, but the resource names now have
-module paths prefixed to their names, as in the following example:
-
-```
-  + module.consul.module.consul_clients.aws_autoscaling_group.autoscaling_group
-      id:                                    <computed>
-      arn:                                   <computed>
-      default_cooldown:                      <computed>
-      desired_capacity:                      "6"
-      force_delete:                          "false"
-      health_check_grace_period:             "300"
-      health_check_type:                     "EC2"
-      launch_configuration:                  "${aws_launch_configuration.launch_configuration.name}"
-      max_size:                              "6"
-      metrics_granularity:                   "1Minute"
-      min_size:                              "6"
-      name:                                  <computed>
-      protect_from_scale_in:                 "false"
-      tag.#:                                 "2"
-      tag.2151078592.key:                    "consul-clients"
-      tag.2151078592.propagate_at_launch:    "true"
-      tag.2151078592.value:                  "consul-example"
-      tag.462896764.key:                     "Name"
-      tag.462896764.propagate_at_launch:     "true"
-      tag.462896764.value:                   "consul-example-client"
-      termination_policies.#:                "1"
-      termination_policies.0:                "Default"
-      vpc_zone_identifier.#:                 "6"
-      vpc_zone_identifier.1880739334:        "subnet-5ce4282a"
-      vpc_zone_identifier.3458061785:        "subnet-16600f73"
-      vpc_zone_identifier.4176925006:        "subnet-485abd10"
-      vpc_zone_identifier.4226228233:        "subnet-40a9b86b"
-      vpc_zone_identifier.595613151:         "subnet-5131b95d"
-      vpc_zone_identifier.765942872:         "subnet-595ae164"
-      wait_for_capacity_timeout:             "10m"
-```
-
-The `module.consul.module.consul_clients` prefix shown above indicates
-not only that the resource is from the `module "consul"` block we wrote,
-but in fact that this module has its own `module "consul_clients"` block
-within it. Modules can be nested to decompose complex systems into
-manageable components.
-
-The full set of resources created by this module includes an autoscaling group,
-security groups, IAM roles and other individual resources that all support
-the Consul cluster that will be created.
-
-Note that, as we warned above, the resources created by this module are
-not eligible for the AWS free tier, so proceeding further will incur some
-cost. To proceed with the creation of the Consul cluster, type
-`yes` at the confirmation prompt.
-
-```
-# ...
-
-module.consul.module.consul_clients.aws_security_group.lc_security_group: Creating...
-  description:            "" => "Security group for the consul-example-client launch configuration"
-  egress.#:               "" => "<computed>"
-  ingress.#:              "" => "<computed>"
-  name:                   "" => "<computed>"
-  name_prefix:            "" => "consul-example-client"
-  owner_id:               "" => "<computed>"
-  revoke_rules_on_delete: "" => "false"
-  vpc_id:                 "" => "vpc-22099946"
-
-# ...
-
-Apply complete! Resources: 34 added, 0 changed, 0 destroyed.
-```
-
-After several minutes and many log messages about all of the resources
-being created, you'll have a three-server Consul cluster up and running.
-Without needing any knowledge of how Consul works, how to install Consul,
-or how to form a Consul cluster, you've created a working cluster in just
-a few minutes.
-
-## Module Outputs
-
-Just as the module instance had input arguments such as `num_servers` above,
-a module can also produce _output_ values, similar to resource attributes.
-
-[The module's outputs reference](https://registry.terraform.io/modules/hashicorp/consul/aws?tab=outputs)
-describes all of the different values it produces. Overall, it exposes the
-id of each of the resources it creates, as well as echoing back some of the
-input values.
-
-One of the supported outputs is called `asg_name_servers`, and its value
-is the name of the auto-scaling group that was created to manage the Consul
-servers.
-
-To reference this, we'll expose it as an output value of our _own_. This
-value could actually be used anywhere: in another resource, to configure
-another provider, etc.
-
-Add the following to the end of the existing configuration file created
-above:
-
-```hcl
-output "consul_server_asg_name" {
-  value = "${module.consul.asg_name_servers}"
-}
-```
-
-The syntax for referencing module outputs is `${module.NAME.OUTPUT}`, where
-`NAME` is the module name given in the header of the `module` configuration
-block and `OUTPUT` is the name of the output to reference.
-
-If you run `terraform apply` again, Terraform will make no changes to
-infrastructure, but you'll now see the "consul\_server\_asg\_name" output with
-the name of the created auto-scaling group:
-
-```
-# ...
-
-Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
-
-Outputs:
-
-consul_server_asg_name = tf-asg-2017103123350991200000000a
-```
-
-If you look in the Auto Scaling Groups section of the EC2 console, you should
-find an auto-scaling group of this name, and from there find the three
-Consul servers it is running. (If you can't find it, make sure you're looking
-in the right region!)
-
-## Destroy
-
-Just as with top-level resources, we can destroy the resources created by
-the Consul module to avoid ongoing costs:
-
-```
-$ terraform destroy
-# ...
-
-Terraform will perform the following actions:
-
-  - module.consul.module.consul_clients.aws_autoscaling_group.autoscaling_group
-
-  - module.consul.module.consul_clients.aws_iam_instance_profile.instance_profile
-
-  - module.consul.module.consul_clients.aws_iam_role.instance_role
-
-# ...
-```
-
-As usual, Terraform describes all of the actions it will take. In this case,
-it plans to destroy all of the resources that were created by the module.
-Type `yes` to confirm and, after a few minutes and even more log output,
-all of the resources should be destroyed:
-
-```
-Destroy complete! Resources: 34 destroyed.
-```
-
-With all of the resources destroyed, you can delete the configuration file
-we created above. We will not make any further use of it, and so this avoids
-the risk of accidentally re-creating the Consul cluster.
-
-## Next
-
-For more information on modules, the types of sources supported, how
-to write modules, and more, read the in-depth
-[module documentation](/docs/modules/index.html).
-
-Next, we learn about [Terraform's remote collaboration features](/intro/getting-started/remote.html).
diff --git a/website/intro/getting-started/next-steps.html.markdown b/website/intro/getting-started/next-steps.html.markdown
deleted file mode 100644
index 1271c0fa5..000000000
--- a/website/intro/getting-started/next-steps.html.markdown
+++ /dev/null
@@ -1,30 +0,0 @@
----
-layout: "intro"
-page_title: "Next Steps"
-sidebar_current: "gettingstarted-nextsteps"
-description: |-
-  That concludes the getting started guide for Terraform. Hopefully you're now able not only to see what Terraform is useful for, but also to put this knowledge to use in building and improving your own infrastructure.
----
-
-# Next Steps
-
-That concludes the getting started guide for Terraform. Hopefully
-you're now able not only to see what Terraform is useful for, but
-also to put this knowledge to use in building and improving your
-own infrastructure.
-
-We've covered the basics of all of these features in this guide.
-
-As a next step, the following resources are available:
-
-* [Documentation](/docs/index.html) - The documentation is an in-depth
-  reference guide to all the features of Terraform, including
-  technical details about the internals of how Terraform operates.
-
-* [Examples](/intro/examples/index.html) - The examples have more
-  full-featured configuration files, showing some of the possibilities
-  with Terraform.
-
-* [Import](/docs/import/index.html) - The import section of the documentation
-  covers importing existing infrastructure into Terraform.
-
diff --git a/website/intro/getting-started/outputs.html.md b/website/intro/getting-started/outputs.html.md
deleted file mode 100644
index 51100a77b..000000000
--- a/website/intro/getting-started/outputs.html.md
+++ /dev/null
@@ -1,84 +0,0 @@
----
-layout: "intro"
-page_title: "Output Variables"
-sidebar_current: "gettingstarted-outputs"
-description: |-
-  In the previous section, we introduced input variables as a way to parameterize Terraform configurations. In this page, we introduce output variables as a way to organize data to be easily queried and shown back to the Terraform user.
----
-
-# Output Variables
-
-In the previous section, we introduced input variables as a way
-to parameterize Terraform configurations. In this page, we
-introduce output variables as a way to organize data to be
-easily queried and shown back to the Terraform user.
-
-When building potentially complex infrastructure, Terraform
-stores hundreds or thousands of attribute values for all your
-resources. But as a user of Terraform, you may only be interested
-in a few values of importance, such as a load balancer IP,
-VPN address, etc.
-
-Outputs are a way to tell Terraform what data is important.
-This data is output when `apply` is called, and can be
-queried using the `terraform output` command.
-
-## Defining Outputs
-
-Let's define an output to show us the public IP address of the
-Elastic IP address that we create. Add this to any of your
-`*.tf` files:
-
-```hcl
-output "ip" {
-  value = "${aws_eip.ip.public_ip}"
-}
-```
-
-This defines an output variable named "ip". The name of the variable
-must conform to Terraform variable naming conventions if it is
-to be used as an input to other modules. The `value` field
-specifies what the value will be, and almost always contains
-one or more interpolations, since the output data is typically
-dynamic. In this case, we're outputting the
-`public_ip` attribute of the Elastic IP address.
-
-Multiple `output` blocks can be defined to specify multiple
-output variables.
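-
-As a hedged illustration of that point (the `instance_id` output below is an
-assumption for this example, referencing the `aws_instance.example` resource
-from earlier pages), a configuration with several outputs might look like:
-
-```hcl
-# The Elastic IP output defined above.
-output "ip" {
-  value = "${aws_eip.ip.public_ip}"
-}
-
-# A second, hypothetical output exposing the instance ID.
-output "instance_id" {
-  value = "${aws_instance.example.id}"
-}
-```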
-
-## Viewing Outputs
-
-Run `terraform apply` to populate the output. This only needs
-to be done once after the output is defined. The apply output
-should change slightly. At the end you should see this:
-
-```
-$ terraform apply
-...
-
-Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
-
-Outputs:
-
-  ip = 50.17.232.209
-```
-
-`apply` highlights the outputs. You can also query the outputs
-after the apply using `terraform output`:
-
-```
-$ terraform output ip
-50.17.232.209
-```
-
-This command is useful for scripts that need to extract output
-values.
-
-## Next
-
-You now know how to parameterize configurations with input
-variables, extract important data using output variables,
-and bootstrap resources using provisioners.
-
-Next, we're going to take a look at
-[how to use modules](/intro/getting-started/modules.html), a useful
-abstraction to organize and reuse Terraform configurations.
diff --git a/website/intro/getting-started/provision.html.md b/website/intro/getting-started/provision.html.md
deleted file mode 100644
index 3cc6a691a..000000000
--- a/website/intro/getting-started/provision.html.md
+++ /dev/null
@@ -1,128 +0,0 @@
----
-layout: "intro"
-page_title: "Provision"
-sidebar_current: "gettingstarted-provision"
-description: |-
-  Introduces provisioners that can initialize instances when they're created.
----
-
-# Provision
-
-You're now able to create and modify infrastructure. Next, let's see
-how to use provisioners to initialize instances when they're created.
-
-If you're using an image-based infrastructure (perhaps with images
-created with [Packer](https://www.packer.io)), then what you've
-learned so far is good enough. But if you need to do some initial
-setup on your instances, then provisioners let you upload files,
-run shell scripts, or install and trigger other software like
-configuration management tools, etc.
-
-## Defining a Provisioner
-
-To define a provisioner, modify the resource block defining the
-"example" EC2 instance to look like the following:
-
-```hcl
-resource "aws_instance" "example" {
-  ami           = "ami-b374d5a5"
-  instance_type = "t2.micro"
-
-  provisioner "local-exec" {
-    command = "echo ${aws_instance.example.public_ip} > ip_address.txt"
-  }
-}
-```
-
-This adds a `provisioner` block within the `resource` block. Multiple
-`provisioner` blocks can be added to define multiple provisioning steps.
-Terraform supports
-[multiple provisioners](/docs/provisioners/index.html),
-but for this example we are using the `local-exec` provisioner.
-
-The `local-exec` provisioner executes a command locally on the machine
-running Terraform. We're using this provisioner rather than the others
-so we don't have to worry about specifying any
-[connection info](/docs/provisioners/connection.html) right now.
-
-## Running Provisioners
-
-Provisioners are only run when a resource is _created_. They
-are not a replacement for configuration management or for changing
-the software of an already-running server; they are instead just
-meant as a way to bootstrap a server. For configuration management,
-you should use Terraform provisioning to invoke a real configuration
-management solution.
-
-Make sure that your infrastructure is
-[destroyed](/intro/getting-started/destroy.html) if it isn't already,
-then run `apply`:
-
-```
-$ terraform apply
-# ...
-
-aws_instance.example: Creating...
-  ami:           "" => "ami-b374d5a5"
-  instance_type: "" => "t2.micro"
-aws_eip.ip: Creating...
-  instance: "" => "i-213f350a"
-
-Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
-```
-
-Terraform will output anything from provisioners to the console,
-but in this case there is no output. However, we can verify
-everything worked by looking at the `ip_address.txt` file:
-
-```
-$ cat ip_address.txt
-54.192.26.128
-```
-
-It contains the IP, just as we asked!
-
-## Failed Provisioners and Tainted Resources
-
-If a resource is created successfully but fails during provisioning,
-Terraform will error and mark the resource as "tainted". A
-resource that is tainted has been physically created, but can't
-be considered safe to use since provisioning failed.
-
-When you generate your next execution plan, Terraform will not attempt to restart
-provisioning on the same resource because it isn't guaranteed to be safe. Instead,
-Terraform will remove any tainted resources and create new resources, attempting to
-provision them again after creation.
-
-Terraform also does not automatically roll back and destroy the resource
-during the apply when the failure happens, because that would go
-against the execution plan: the execution plan said the resource
-would be created, but never said it would be deleted.
-If you create an execution plan with a tainted resource, however, the
-plan will clearly state that the resource will be destroyed because
-it is tainted.
-
-## Destroy Provisioners
-
-Provisioners can also be defined to run only during a destroy
-operation. These are useful for performing system cleanup, extracting
-data, etc.
-
-For many resources, using built-in cleanup mechanisms (such as init
-scripts) is recommended where possible, but provisioners can be used
-if necessary.
-
-The getting started guide won't show any destroy provisioner examples.
-If you need to use destroy provisioners, please
-[see the provisioner documentation](/docs/provisioners).
-
-## Next
-
-Provisioning is important for bootstrapping instances. As another
-reminder, it is not a replacement for configuration management; it is
-meant simply to bootstrap machines. If you use configuration
-management, you should use provisioning as a way to bootstrap the
-configuration management tool.
-
-In the next section, we start looking at [variables as a way to
-parameterize our configurations](/intro/getting-started/variables.html).
diff --git a/website/intro/getting-started/remote.html.markdown b/website/intro/getting-started/remote.html.markdown
deleted file mode 100644
index 52b4363db..000000000
--- a/website/intro/getting-started/remote.html.markdown
+++ /dev/null
@@ -1,113 +0,0 @@
----
-layout: "intro"
-page_title: "Terraform Remote"
-sidebar_current: "gettingstarted-remote"
-description: |-
-  We've now seen how to build, change, and destroy infrastructure from a local machine. However, you can run Terraform remotely against shared state to version and audit the history of your infrastructure.
----
-
-# Remote Backends
-
-We've now seen how to build, change, and destroy infrastructure
-from a local machine. This is great for testing and development,
-but in production environments it is better to share responsibility
-for infrastructure. The best way to do this is by running Terraform in a remote
-environment with shared access to state.
-
-Terraform supports team-based workflows with a feature known as [remote
-backends](/docs/backends). Remote backends allow Terraform to use a shared
-storage space for state data, so any member of your team can use Terraform to
-manage the same infrastructure.
-
-Depending on the features you wish to use, Terraform has multiple remote
-backend options. You could use Consul, a free and open source option, for
-state storage, locking, and environments. Or you could use S3, which
-supports only state storage, as a low-cost and minimally featured solution.
-
-[Terraform Cloud](https://www.hashicorp.com/products/terraform/?utm_source=oss&utm_medium=getting-started&utm_campaign=terraform)
-is HashiCorp's commercial solution and also acts as a remote backend.
-Terraform Cloud allows teams to easily version, audit, and collaborate
-on infrastructure changes. Each proposed change generates
-a Terraform plan which can be reviewed and collaborated on as a team.
-When a proposed change is accepted, the Terraform logs are stored,
-resulting in a linear history of infrastructure states to
-help with auditing and policy enforcement. Additional benefits to
-running Terraform remotely include moving access
-credentials off of developer machines and freeing local machines
-from long-running Terraform processes.
-
-## How to Store State Remotely
-
-First, we'll use [Consul](https://www.consul.io) as our backend. Consul
-is a free and open source solution that provides state storage, locking, and
-environments. It is a great way to get started with Terraform backends.
-
-We'll use the [demo Consul server](https://demo.consul.io) for this guide.
-This should not be used for real data. Additionally, the demo server doesn't
-permit locking. If you want to play with [state locking](/docs/state/locking.html),
-you'll have to run your own Consul server or use a backend that supports locking.
-
-To begin, configure the backend in your configuration:
-
-```hcl
-terraform {
-  backend "consul" {
-    address = "demo.consul.io"
-    path    = "getting-started-RANDOMSTRING"
-    lock    = false
-    scheme  = "https"
-  }
-}
-```
-
-Please replace "RANDOMSTRING" with some random text. The demo server is
-public, and we want to avoid colliding with someone else running through
-the getting started guide.
-
-The `backend` section configures the backend you want to use. After
-configuring a backend, run `terraform init` to set up Terraform. It should
-ask if you want to migrate your state to Consul. Say "yes" and Terraform
-will copy your state.
-
-Now, if you run `terraform apply`, Terraform should state that there are
-no changes:
-
-```
-$ terraform apply
-# ...
-
-No changes. Infrastructure is up-to-date.
-
-This means that Terraform did not detect any differences between your
-configuration and real physical resources that exist. As a result, Terraform
-doesn't need to do anything.
-```
-
-Terraform is now storing your state remotely in Consul. Remote state
-storage makes collaboration easier and keeps state and secret information
-off your local disk. Remote state is loaded into memory only when it is used.
-
-If you want to move back to local state, you can remove the backend configuration
-block from your configuration and run `terraform init` again. Terraform will
-once again ask if you want to migrate your state back to local.
-
-## Terraform Cloud
-
-[Terraform Cloud](https://www.hashicorp.com/products/terraform/?utm_source=oss&utm_medium=getting-started&utm_campaign=terraform) is a commercial solution which combines a predictable and reliable shared run environment with tools to help you work together on Terraform configurations and modules.
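-
-It can also be configured as a backend itself. As a hedged sketch (the
-organization and workspace names below are hypothetical, and this assumes a
-Terraform version recent enough to support the `remote` backend), that
-configuration might look like:
-
-```hcl
-terraform {
-  backend "remote" {
-    hostname     = "app.terraform.io"
-    organization = "example-org"
-
-    workspaces {
-      name = "getting-started"
-    }
-  }
-}
-```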
-
-Although Terraform Cloud can act as a standard remote backend to support Terraform runs on local machines, it works even better as a remote run environment. It supports two main workflows for performing Terraform runs:
-
-- A VCS-driven workflow, in which it automatically queues plans whenever changes are committed to your configuration's VCS repo.
-- An API-driven workflow, in which a CI pipeline or other automated tool can upload configurations directly.
-
-For a hands-on introduction to Terraform Cloud, [follow the Terraform Cloud getting started guide](/docs/cloud/getting-started/index.html).
-
-## Next
-
-You now know how to create, modify, destroy, version, and
-collaborate on infrastructure. With these building blocks,
-you can effectively experiment with any part of Terraform.
-
-We've now concluded the getting started guide; however, there are
-a number of [next steps](/intro/getting-started/next-steps.html)
-available as you continue working with Terraform.
diff --git a/website/intro/getting-started/variables.html.md b/website/intro/getting-started/variables.html.md
deleted file mode 100644
index 007fe5f44..000000000
--- a/website/intro/getting-started/variables.html.md
+++ /dev/null
@@ -1,259 +0,0 @@
----
-layout: "intro"
-page_title: "Input Variables"
-sidebar_current: "gettingstarted-variables"
-description: |-
-  You now have enough Terraform knowledge to create useful configurations, but we're still hardcoding access keys, AMIs, etc. To become truly shareable and committable to version control, we need to parameterize the configurations. This page introduces input variables as a way to do this.
----
-
-# Input Variables
-
-You now have enough Terraform knowledge to create useful
-configurations, but we're still hard-coding access keys,
-AMIs, etc. To become truly shareable and version
-controlled, we need to parameterize the configurations. This page
-introduces input variables as a way to do this.
-
-## Defining Variables
-
-Let's first extract our access key, secret key, and region
-into a few variables. Create another file `variables.tf` with
-the following contents.
-
--> **Note**: The file can be named anything, since Terraform loads all
-files ending in `.tf` in a directory.
-
-```hcl
-variable "access_key" {}
-variable "secret_key" {}
-variable "region" {
-  default = "us-east-1"
-}
-```
-
-This defines three variables within your Terraform configuration. The first
-two have empty blocks `{}`. The third sets a default. If a default value is
-set, the variable is optional. Otherwise, the variable is required. If you run
-`terraform plan` now, Terraform will prompt you for the values for unset string
-variables.
-
-## Using Variables in Configuration
-
-Next, replace the AWS provider configuration with the following:
-
-```hcl
-provider "aws" {
-  access_key = "${var.access_key}"
-  secret_key = "${var.secret_key}"
-  region     = "${var.region}"
-}
-```
-
-This uses more interpolations, this time prefixed with `var.`, which
-tells Terraform that you're accessing variables. This configures
-the AWS provider with the given variables.
-
-## Assigning Variables
-
-There are multiple ways to assign variables. They are listed below
-in descending order of precedence.
-
-#### Command-line flags
-
-You can set variables directly on the command-line with the
-`-var` flag. Any command in Terraform that inspects the configuration
-accepts this flag, such as `apply`, `plan`, and `refresh`:
-
-```
-$ terraform apply \
-  -var 'access_key=foo' \
-  -var 'secret_key=bar'
-# ...
-```
-
-Once again, setting variables this way will not save them; they'll
-have to be entered again each time a command is run.
-
-#### From a file
-
-To persist variable values, create a file and assign variables within
-this file. Create a file named `terraform.tfvars` with the following
-contents:
-
-```hcl
-access_key = "foo"
-secret_key = "bar"
-```
-
-Terraform automatically loads any files in the current directory matching
-`terraform.tfvars` or `*.auto.tfvars` to populate variables. If
-the file is named something else, you can use the `-var-file` flag directly to
-specify a file. These files use the same syntax as Terraform
-configuration files, and like Terraform configuration files, they
-can also be JSON.
-
-We don't recommend saving usernames and passwords to version control, but you
-can create a local secret variables file and use `-var-file` to load it.
-
-You can use multiple `-var-file` arguments in a single command, with some
-checked in to version control and others not checked in. For example:
-
-```
-$ terraform apply \
-  -var-file="secret.tfvars" \
-  -var-file="production.tfvars"
-```
-
-#### From environment variables
-
-Terraform will read environment variables in the form of `TF_VAR_name`
-to find the value for a variable. For example, setting the
-`TF_VAR_access_key` environment variable sets the `access_key` variable.
-
--> **Note**: Environment variables can only populate string-type variables.
-List and map type variables must be populated via one of the other mechanisms.
-
-#### UI Input
-
-If you execute `terraform apply` with certain variables unspecified,
-Terraform will ask you to input their values interactively. These
-values are not saved, but this provides a convenient workflow when getting
-started with Terraform. UI Input is not recommended for everyday use of
-Terraform.
-
--> **Note**: In Terraform versions 0.11 and earlier, UI Input is only supported
-for string variables. List and map variables must be populated via one of the
-other mechanisms. Terraform 0.12 introduces the ability to populate complex
-variable types from the UI prompt.
-
-#### Variable Defaults
-
-If no value is assigned to a variable via any of these methods and the
-variable has a `default` key in its declaration, that value will be used
-for the variable.
-
-## Lists
-
-Lists can be defined either explicitly or implicitly:
-
-```hcl
-# implicitly by using brackets [...]
-variable "cidrs" { default = [] }
-
-# explicitly
-variable "cidrs" { type = "list" }
-```
-
-You can specify lists in a `terraform.tfvars` file:
-
-```hcl
-cidrs = [ "10.0.0.0/16", "10.1.0.0/16" ]
-```
-
-## Maps
-
-We've replaced our sensitive strings with variables, but we still
-are hard-coding AMIs. Unfortunately, AMIs are specific to the region
-that is in use. One option is to just ask the user to input the proper
-AMI for the region, but Terraform can do better than that with
-_maps_.
-
-Maps are a way to create variables that are lookup tables. An example
-will show this best. Let's extract our AMIs into a map and add
-support for the `us-west-2` region as well:
-
-```hcl
-variable "amis" {
-  type = "map"
-  default = {
-    "us-east-1" = "ami-b374d5a5"
-    "us-west-2" = "ami-4b32be2b"
-  }
-}
-```
-
-A variable can have a `map` type assigned explicitly, or it can be implicitly
-declared as a map by specifying a default value that is a map. The above
-demonstrates both.
-
-Then, replace the `aws_instance` with the following:
-
-```hcl
-resource "aws_instance" "example" {
-  ami           = "${lookup(var.amis, var.region)}"
-  instance_type = "t2.micro"
-}
-```
-
-This introduces a new type of interpolation: a function call. The
-`lookup` function does a dynamic lookup in a map for a key. The
-key is `var.region`, meaning that the value of the `region`
-variable is used as the lookup key.
-
-While we don't use it in our example, it is worth noting that you
-can also do a static lookup of a map directly with
-`${var.amis["us-east-1"]}`.
-
-## Assigning Maps
-
-We set defaults above, but maps can also be set using the `-var` and
-`-var-file` values. For example:
-
-```
-$ terraform apply -var 'amis={ us-east-1 = "foo", us-west-2 = "bar" }'
-# ...
-```
-
--> **Note**: Even if every key will be assigned as input, the variable must be
-established as a map by setting its default to `{}`.
-
-Here is an example of setting a map's keys from a file. Starting with these
-variable definitions:
-
-```hcl
-variable "region" {}
-variable "amis" {
-  type = "map"
-}
-```
-
-You can specify keys in a `terraform.tfvars` file:
-
-```hcl
-amis = {
-  "us-east-1" = "ami-abc123"
-  "us-west-2" = "ami-def456"
-}
-```
-
-And access them via `lookup()`:
-
-```hcl
-output "ami" {
-  value = "${lookup(var.amis, var.region)}"
-}
-```
-
-Like so:
-
-```
-$ terraform apply -var region=us-west-2
-
-Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
-
-Outputs:
-
-  ami = ami-def456
-```
-
-## Next
-
-Terraform provides variables for parameterizing your configurations.
-Maps let you build lookup tables in cases where that makes sense.
-Setting and using variables is uniform throughout your configurations.
-
-In the next section, we'll take a look at
-[output variables](/intro/getting-started/outputs.html) as a mechanism
-to expose certain values more prominently to the Terraform operator.
diff --git a/website/intro/index.html.markdown b/website/intro/index.html.markdown
index ad39b9085..e87baa5f7 100644
--- a/website/intro/index.html.markdown
+++ b/website/intro/index.html.markdown
@@ -36,6 +36,8 @@
 low-level components such as compute instances, storage, and networking, as
 well as high-level components such as DNS entries, SaaS features, etc.
 
+
+
 Examples work best to showcase Terraform. Please see the
 [use cases](/intro/use-cases.html).
 
@@ -73,5 +75,5 @@
 See the page on [Terraform use cases](/intro/use-cases.html) to see the
 multiple ways Terraform can be used. Then see
 [how Terraform compares to other software](/intro/vs/index.html)
 to see how it fits into your existing infrastructure. Finally, continue onwards with
-the [getting started guide](/intro/getting-started/install.html) to use
+the [getting started guide](https://learn.hashicorp.com/terraform/getting-started/install) to use
 Terraform to manage real infrastructure and to see how it works.
diff --git a/website/layouts/backend-types.erb b/website/layouts/backend-types.erb index 315470890..39f1200b6 100644 --- a/website/layouts/backend-types.erb +++ b/website/layouts/backend-types.erb @@ -36,6 +36,9 @@ > consul + > + cos + > etcd diff --git a/website/layouts/docs.erb b/website/layouts/docs.erb index f1197097f..38765f2ba 100644 --- a/website/layouts/docs.erb +++ b/website/layouts/docs.erb @@ -186,6 +186,14 @@ init + > + login + + + > + logout + + > output @@ -429,6 +437,10 @@ > Internals