google_bigquery_job re-run after 6 months #9768
From an (admittedly naive) provider's perspective, this is working as intended. Since the job in the API has disappeared, it should be recreated, and we can't distinguish between a nonexistent job and one that used to exist. The only potential fix we could consider is to not check if the job is present in the API, but that would actually break import for the resource- we don't know in the
I think storing the hash of the source file in state would be appropriate for load jobs, since the load job should only re-run when the source file changes. We tried to achieve this behavior by adding
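A hash-keyed job ID along these lines can approximate that behavior today. This is a hypothetical sketch (resource, file, and table names are illustrative): because `google_bigquery_job` is create-once, deriving `job_id` from a file hash forces a replacement only when the source file's contents change.

```hcl
# Hypothetical sketch: key the job ID off the source file's hash so a new
# BigQuery load job is only created when the file actually changes.
resource "google_bigquery_job" "load" {
  # filemd5() re-hashes data.json on every plan; a changed hash yields a new
  # job_id, which forces replacement of this create-once resource.
  job_id = "load_data_${filemd5("${path.module}/data.json")}"

  load {
    destination_table {
      project_id = "my-project"
      dataset_id = "my_dataset"
      table_id   = "my_table"
    }
    source_uris       = ["gs://my-bucket/data.json"]
    source_format     = "NEWLINE_DELIMITED_JSON"
    write_disposition = "WRITE_TRUNCATE"
  }
}
```

Note that this does not address the 6-month purge itself: once GCP drops the job from its history, the provider still sees a 404 on refresh and plans a re-creation even though the hash is unchanged.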
First, referencing the response by @rileykarson:
I think this is appropriate given how the provider works natively. A BigQuery Job is created once and cannot be modified. If there is a change to a BigQuery Job resource, this will result in a new Job being created in BigQuery.
I think this is appropriate too. Importing a job, whether it still exists in BigQuery or has been purged after the 180-day period, should indicate to Terraform not to create the resource because it already exists/existed. Isn't this in line with other uses of
I'm inclined to agree with @SunilPaiEmpire's points above. It is frustrating. We are considering a script we can run as part of our workflow that will check for and remove jobs over a certain age, but it complicates what would otherwise be a very tidy process.
Revisiting this as the same issue was recently brought up again by another user. @rileykarson Could you forward it to our internal issue tracker too? Since jobs do not persist past 6 months on the server side, we'll need to think of something at the provider level. Could you elaborate on what "not check if the job is present in the API" would look like? Is it that we don't read the live state at all, since we would never attempt to re-create any BigQuery job in Terraform?
Yeah- instead of clearing the resource id on a 404 error (this code) we'd preserve the old state of the resource by immediately exiting Read without an error. We'd have to figure out what updates to a purged job definition mean as part of that, I think. We could track purged jobs with a boolean clientside, and then use a CustomizeDiff to upgrade update plans to deletions, or add a guard to Update so that updates are no-ops. For the import case I was thinking about, we could only allow importing valid jobs by tracking a flag clientside for that- we'd start it false and set it to true after a successful read. Since Import is ultimately a Read call, we'd return an error on a 404 where that flag is still false. Or just take the "import of nonexisting jobs always succeeds" approach from #9768 (comment).
Thanks again for the details. I'm trying to see if we can simplify the handling by just preserving any existing state without having to rely on an additional client-side field. Can you confirm that if we
@rileykarson Could you correct any misunderstanding above? What do you think of the overall plan?
I wonder if ephemeral resources might be a good fit for this? https://developer.hashicorp.com/terraform/language/resources/ephemeral
@wj-chen sorry I missed that when you sent it! Looks reasonable, with minor notes below.
Yes
We can't readily print arbitrary messages unfortunately- we could update a state value on the resource, but it's hard to display that to an end user without triggering a diff.
Ephemeral resources are more akin to a secret value than a job resource! They're ephemeral in the sense that they don't get permanently recorded in state and only exist for the length of the operation. I haven't worked directly with them yet and am not a HashiCorp employee, but the mental model I've used based on what I've read is "temporary datasource" more than anything.
Community Note
If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.
Terraform Version
Terraform v1.0.1
on linux_amd64
Your version of Terraform is out of date! The latest version
is 1.0.4. You can update by downloading from https://www.terraform.io/downloads.html
Affected Resource(s)
google_bigquery_job
Terraform Configuration Files
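The original configuration was not captured in this extract. A minimal configuration of the kind described, a load job reading a JSON file from GCS (all project, bucket, and table names hypothetical), would look like:

```hcl
# Hypothetical reproduction: a create-once load job. After GCP purges the
# job from its ~6-month history, refresh sees a 404 and Terraform plans to
# re-create the job even though nothing in the configuration changed.
resource "google_bigquery_job" "load_json" {
  job_id = "load_json_data"

  load {
    destination_table {
      project_id = "my-project"
      dataset_id = "my_dataset"
      table_id   = "my_table"
    }
    source_uris       = ["gs://my-bucket/data.json"]
    source_format     = "NEWLINE_DELIMITED_JSON"
    write_disposition = "WRITE_TRUNCATE"
  }
}
```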
Expected Behavior
Since the job has already run and there is no change in the JSON file, the BQ load job should not be re-created.
Actual Behavior
After 6 months, GCP deletes the BQ job history, and load jobs are re-created even though there is no change in JSON files.
Steps to Reproduce
1. terraform apply
2. Wait 6+ months for GCP to purge the BQ job history.
3. terraform apply - the load jobs are re-created despite no change.
The issue is caused by the design of Terraform state management for google_bigquery_job load jobs combined with the GCP job history lifetime (6 months).
b/371632037