SPLAT-2410: vsphere conversion #352
base: main
Conversation
Skipping CI for Draft Pull Request.
/test ?
@jcpowermac: The following commands are available to trigger required jobs and optional jobs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/test e2e-vsphere-capi-techpreview
9b35375 to c7b2126 (Compare)
Walkthrough
Adds VMware vSphere platform support across Machine API Migration: wires vSphere types and schemes, integrates feature gates, extends controllers’ platform switches, and introduces bidirectional vSphere conversion logic between CAPI and MAPI for Machines and MachineSets.
Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Tip: 👮 Agentic pre-merge checks are now available in preview! Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs.
Please see the documentation for more information. Example:
reviews:
pre_merge_checks:
custom_checks:
- name: "Undocumented Breaking Changes"
mode: "warning"
instructions: |
Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).
Please share your feedback with us on this Discord post.
Pre-merge checks (3 passed)
✅ Passed checks (3 passed)
8fec374 to 050ec85 (Compare)
050ec85 to f988cc6 (Compare)
@coderabbitai review
✅ Actions performed: Review triggered.
notes:
/test e2e-vsphere-capi-techpreview
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
pkg/controllers/machinesync/machine_sync_controller.go (3)
1284-1293: Bug: parsing providerSpec uses the AWS parser, and the second read uses `a` twice
- This function is AWS-specific; it will fail for VSphere/OpenStack.
- The second parse reads from `a` instead of `b`.
Minimal fix now, plus a TODO to make it provider-agnostic:
- ps1, err := mapi2capi.AWSProviderSpecFromRawExtension(a.Spec.ProviderSpec.Value)
+ ps1, err := mapi2capi.AWSProviderSpecFromRawExtension(a.Spec.ProviderSpec.Value)
@@
- ps2, err := mapi2capi.AWSProviderSpecFromRawExtension(a.Spec.ProviderSpec.Value)
+ ps2, err := mapi2capi.AWSProviderSpecFromRawExtension(b.Spec.ProviderSpec.Value)
Follow-up: replace the AWS-specific parsing by normalizing and deep-equaling the Raw bytes (YAML -> JSON) so it works for all providers. I can send a patch.
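As a rough illustration of that follow-up (not code from this PR): the helper name compareRawProviderSpecs and the use of sigs.k8s.io/yaml below are assumptions; the idea is simply to decode both RawExtensions generically and deep-compare the results. Imports assumed: reflect, k8s.io/apimachinery/pkg/runtime, sigs.k8s.io/yaml.

// compareRawProviderSpecs decodes two providerSpec payloads into generic maps
// and reports whether they are semantically equal, regardless of provider type.
func compareRawProviderSpecs(a, b *runtime.RawExtension) (bool, error) {
	parse := func(raw *runtime.RawExtension) (map[string]interface{}, error) {
		if raw == nil || len(raw.Raw) == 0 {
			return nil, nil
		}
		var m map[string]interface{}
		// sigs.k8s.io/yaml accepts both YAML and JSON input.
		if err := yaml.Unmarshal(raw.Raw, &m); err != nil {
			return nil, err
		}
		return m, nil
	}

	ma, err := parse(a)
	if err != nil {
		return false, err
	}
	mb, err := parse(b)
	if err != nil {
		return false, err
	}

	return reflect.DeepEqual(ma, mb), nil
}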
1029-1032: Returning formatted error even when err is nil
fmt.Errorf("failed to remove finalizer: %w", err) returns a non-nil error even if err is nil. Guard before wrapping.
- _, err := util.RemoveFinalizer(ctx, r.Client, mapiMachine, SyncFinalizer)
-
- return false, fmt.Errorf("failed to remove finalizer: %w", err)
+ if _, err := util.RemoveFinalizer(ctx, r.Client, mapiMachine, SyncFinalizer); err != nil {
+   return false, fmt.Errorf("failed to remove finalizer: %w", err)
+ }
+ return false, nil
Apply the same pattern to the other two locations.
Also applies to: 1149-1152, 1191-1195
1105-1111: RemoveFinalizer called on possibly nil infraMachine
infraMachine can be nil; calling RemoveFinalizer will panic. Guard with util.IsNilObject(infraMachine).
- if changed, err := util.RemoveFinalizer(ctx, r.Client, infraMachine, SyncFinalizer); err != nil {
+ if !util.IsNilObject(infraMachine) {
+   if changed, err := util.RemoveFinalizer(ctx, r.Client, infraMachine, SyncFinalizer); err != nil {
      // ...
-   } else if changed {
-     logger.Info("Removing Cluster API infra machine sync finalizer")
-     hasChanged = true
+   } else if changed {
+     logger.Info("Removing Cluster API infra machine sync finalizer")
+     hasChanged = true
+   }
  }
Apply similarly in reconcileCAPItoMAPIMachineDeletion.
Also applies to: 1175-1177
pkg/controllers/machinesetsync/machineset_sync_controller.go (1)
1224-1232: AWS-only ProviderSpec parsing breaks non-AWS (including vSphere) comparisons. Replace with a provider-agnostic RawExtension diff. This will fail on vSphere/OpenStack/PowerVS. Decode the RawExtension JSON generically and compare.
Apply:
- ps1, err := mapi2capi.AWSProviderSpecFromRawExtension(a.Spec.Template.Spec.ProviderSpec.Value)
+ ps1, err := parseRawProviderSpec(a.Spec.Template.Spec.ProviderSpec.Value)
  if err != nil {
    return nil, fmt.Errorf("unable to parse first MAPI machine set providerSpec: %w", err)
  }
- ps2, err := mapi2capi.AWSProviderSpecFromRawExtension(b.Spec.Template.Spec.ProviderSpec.Value)
+ ps2, err := parseRawProviderSpec(b.Spec.Template.Spec.ProviderSpec.Value)
  if err != nil {
    return nil, fmt.Errorf("unable to parse second MAPI machine set providerSpec: %w", err)
  }
Add helper (outside this hunk):
// parseRawProviderSpec decodes ProviderSpec.Value.Raw as generic JSON for cross-provider diffs.
func parseRawProviderSpec(raw *runtime.RawExtension) (map[string]any, error) {
	if raw == nil || len(raw.Raw) == 0 {
		return nil, nil
	}
	var m map[string]any
	if err := json.Unmarshal(raw.Raw, &m); err != nil {
		return nil, err
	}
	return m, nil
}
And import:
import ( + "encoding/json"
🧹 Nitpick comments (6)
pkg/conversion/mapi2capi/vsphere.go (1)
298-305: Comment contradicts code for PowerOffMode. The comment says to set TrySoft, but the code sets Hard. Pick one and document why.
- // Setting this to VirtualMachinePowerOpModeTrySoft VirtualMachinePowerOpMode = "trySoft"
- // The timeout might have to be adjusted
- vsphereMachine.Spec.PowerOffMode = vspherev1.VirtualMachinePowerOpModeHard
+ // Prefer soft power-off to reduce guest impact; adjust timeout if needed.
+ vsphereMachine.Spec.PowerOffMode = vspherev1.VirtualMachinePowerOpModeTrySoft
pkg/conversion/capi2mapi/vsphere.go (1)
156-166: Wiping template labels may hide desired propagation. Clearing Template.Spec.ObjectMeta.Labels = nil is risky; labels often carry selectors and operational signals. Consider preserving them, or making the wipe conditional behind a flag.
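If the conditional route is taken, a minimal sketch could look like this; the preserveTemplateLabels flag is hypothetical, and machineSet stands in for whichever object the conversion builds:

// Only wipe template labels when the caller has not asked to preserve them,
// so selector and operational labels keep propagating by default.
if !preserveTemplateLabels {
	machineSet.Spec.Template.Spec.ObjectMeta.Labels = nil
}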
pkg/controllers/machinesync/machine_sync_controller.go (1)
350-353: Event recorded on possibly nil object. When mirroring CAPI->MAPI and mapiMachine is nil, the event should be recorded on capiMachine (or on the newly created object). Avoid passing nil.
- r.Recorder.Event(mapiMachine, corev1.EventTypeWarning, "ConversionWarning", warning)
+ r.Recorder.Event(capiMachine, corev1.EventTypeWarning, "ConversionWarning", warning)
cmd/machine-api-migration/main.go (1)
327-346: Cache selection: consider watching the CAPI namespace only. The current ByObject config watches MAPI objects in the MAPI namespace and defaults to the CAPI namespace, which is fine. If we add vSphere templates/clusters listing at scale, consider adding explicit cache entries to minimize memory.
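For reference, explicit per-object cache entries could look roughly like this with controller-runtime v0.16+ (capiNamespace, mapiNamespace, and the import aliases are illustrative, not taken from this PR):

cacheOpts := cache.Options{
	// Everything not listed below is only watched in the CAPI namespace.
	DefaultNamespaces: map[string]cache.Config{
		capiNamespace: {},
	},
	ByObject: map[client.Object]cache.ByObject{
		// MAPI objects stay scoped to the MAPI namespace.
		&mapiv1beta1.Machine{}:    {Namespaces: map[string]cache.Config{mapiNamespace: {}}},
		&mapiv1beta1.MachineSet{}: {Namespaces: map[string]cache.Config{mapiNamespace: {}}},
		// vSphere infra templates only need caching in the CAPI namespace.
		&vspherev1.VSphereMachineTemplate{}: {Namespaces: map[string]cache.Config{capiNamespace: {}}},
	},
}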
pkg/controllers/machinesetsync/machineset_sync_controller.go (2)
1177-1199: vSphere infra template compare path looks sound; consider aligning the ObjectMeta diff across providers. OpenStack/PowerVS branches use deep.Equal for ObjectMeta while AWS/vSphere use util.ObjectMetaEqual. For consistent semantics, prefer util.ObjectMetaEqual everywhere.
- if diffObjectMeta := deep.Equal(typedInfraMachineTemplate1.ObjectMeta, typedinfraMachineTemplate2.ObjectMeta); len(diffObjectMeta) > 0 {
+ if diffObjectMeta := util.ObjectMetaEqual(typedInfraMachineTemplate1.ObjectMeta, typedinfraMachineTemplate2.ObjectMeta); len(diffObjectMeta) > 0 {
    diff[".metadata"] = diffObjectMeta
  }
1135-1138: Wrong error sentinel in OpenStack type assertion. Should return errAssertingCAPIOpenStackMachineTemplate, not the IBMPowerVS one.
- if !ok {
-   return nil, errAssertingCAPIIBMPowerVSMachineTemplate
- }
+ if !ok {
+   return nil, errAssertingCAPIOpenStackMachineTemplate
+ }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (6)
cmd/machine-api-migration/main.go (4 hunks)
pkg/controllers/common.go (3 hunks)
pkg/controllers/machinesetsync/machineset_sync_controller.go (7 hunks)
pkg/controllers/machinesync/machine_sync_controller.go (5 hunks)
pkg/conversion/capi2mapi/vsphere.go (1 hunk)
pkg/conversion/mapi2capi/vsphere.go (1 hunk)
🔇 Additional comments (9)
pkg/conversion/capi2mapi/vsphere.go (1)
236-238: Credentials secret name: confirm the canonical value. Hardcoding "vsphere-cloud-credentials" may not match existing MAPI clusters (commonly "vsphere-credentials"). Confirm against the MAPI vSphere provider.
I can switch to a configurable value or detect via existing secrets if you prefer.
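One possible shape for the detection option (the helper name, both candidate secret names, and the final fallback are illustrative assumptions; imports are the usual context, core/v1, and controller-runtime client packages):

// detectVSphereCredentialsSecret returns the first candidate credentials secret
// that exists in the given namespace, falling back to the hardcoded default.
func detectVSphereCredentialsSecret(ctx context.Context, c client.Client, namespace string) string {
	for _, name := range []string{"vsphere-cloud-credentials", "vsphere-credentials"} {
		secret := &corev1.Secret{}
		if err := c.Get(ctx, client.ObjectKey{Namespace: namespace, Name: name}, secret); err == nil {
			return name
		}
	}
	return "vsphere-cloud-credentials"
}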
pkg/controllers/common.go (1)
50-52: VSphere wiring looks good. The provider switch correctly returns VSphere types for machine and template.
Also applies to: 69-71
pkg/controllers/machinesync/machine_sync_controller.go (1)
569-571: VSphere paths are correctly plumbed through. Conversion routing and the infra diff leverage VSphere types as expected.
Also applies to: 602-614, 1385-1404
cmd/machine-api-migration/main.go (1)
68-77: VSphere scheme and gating hooked up correctly. Scheme registration and platform gating include vSphere; controllers will start only when enabled via FeatureGateMachineAPIMigration.
Also applies to: 201-206
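Roughly, the gating described above amounts to something like the following; the accessor and constant names are paraphrased and should be treated as assumptions rather than the exact wiring in main.go:

// Start the vSphere sync controllers only when the migration gate is on and
// the cluster platform is vSphere.
if featureGates.Enabled(features.FeatureGateMachineAPIMigration) &&
	platform == configv1.VSpherePlatformType {
	// register the machine sync and machineset sync controllers with the manager
}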
pkg/controllers/machinesetsync/machineset_sync_controller.go (5)
51-51: Import for vSphere types looks correct.
84-86: New vSphere assertion error wired consistently with existing patterns.
727-740: vSphere CAPI->MAPI conversion branch is correct and mirrors other providers.
755-756: vSphere MAPI->CAPI conversion hook added correctly.
1100-1101: vSphere list initializers look good.
case *vspherev1.VSphereMachineTemplateList:
	for _, template := range list.Items {
		if template.GetName() != newInfraMachineTemplateName {
			outdatedTemplates = append(outdatedTemplates, &template)
		}
	}
Fix address-of-loop-variable bug when collecting outdated templates (also affects other providers).
Using &template from a range loop appends the same pointer repeatedly. Iterate by index and take the address of the slice element. Apply this across all provider cases in this function.
Apply:
case *awsv1.AWSMachineTemplateList:
- for _, template := range list.Items {
- if template.GetName() != newInfraMachineTemplateName {
- outdatedTemplates = append(outdatedTemplates, &template)
- }
- }
+ for i := range list.Items {
+ tmpl := &list.Items[i]
+ if tmpl.GetName() != newInfraMachineTemplateName {
+ outdatedTemplates = append(outdatedTemplates, tmpl)
+ }
+ }
case *ibmpowervsv1.IBMPowerVSMachineTemplateList:
- for _, template := range list.Items {
- if template.GetName() != newInfraMachineTemplateName {
- outdatedTemplates = append(outdatedTemplates, &template)
- }
- }
+ for i := range list.Items {
+ tmpl := &list.Items[i]
+ if tmpl.GetName() != newInfraMachineTemplateName {
+ outdatedTemplates = append(outdatedTemplates, tmpl)
+ }
+ }
case *openstackv1.OpenStackMachineTemplateList:
- for _, template := range list.Items {
- if template.GetName() != newInfraMachineTemplateName {
- outdatedTemplates = append(outdatedTemplates, &template)
- }
- }
+ for i := range list.Items {
+ tmpl := &list.Items[i]
+ if tmpl.GetName() != newInfraMachineTemplateName {
+ outdatedTemplates = append(outdatedTemplates, tmpl)
+ }
+ }
case *vspherev1.VSphereMachineTemplateList:
- for _, template := range list.Items {
- if template.GetName() != newInfraMachineTemplateName {
- outdatedTemplates = append(outdatedTemplates, &template)
- }
- }
+ for i := range list.Items {
+ tmpl := &list.Items[i]
+ if tmpl.GetName() != newInfraMachineTemplateName {
+ outdatedTemplates = append(outdatedTemplates, tmpl)
+ }
🤖 Prompt for AI Agents
In pkg/controllers/machinesetsync/machineset_sync_controller.go around lines 388
to 393, the code takes the address of the range loop variable (e.g. &template)
which results in appending the same pointer repeatedly; change the loops to
iterate by index and append the address of the slice element (e.g.
&list.Items[i]) instead, and apply this index-based addressing fix to every
provider case in this function that currently uses &loopVar in a range loop to
collect outdated templates.
mapiCloneMode, err := convertCAPVCloneModeToMAPI(fldPath.Child("cloneMode"), m.vsphereMachine.Spec.CloneMode)
if err != nil {
	errors = append(errors, err)
}

// Convert network configuration
mapiNetworkSpec, networkWarnings, networkErrors := convertCAPVNetworkSpecToMAPI(fldPath.Child("network"), m.vsphereMachine.Spec.Network)
if len(networkErrors) > 0 {
	errors = append(errors, networkErrors...)
}
warnings = append(warnings, networkWarnings...)

// Convert data disks
mapiDataDisks, diskWarnings, diskErrors := convertCAPVDataDisksToMAPI(fldPath.Child("dataDisks"), m.vsphereMachine.Spec.DataDisks)
if len(diskErrors) > 0 {
	errors = append(errors, diskErrors...)
}
warnings = append(warnings, diskWarnings...)

mapiProviderConfig := mapiv1.VSphereMachineProviderSpec{
	TypeMeta: metav1.TypeMeta{
		Kind:       "VSphereMachineProviderSpec",
		APIVersion: "machine.openshift.io/v1beta1",
	},
	Template:          m.vsphereMachine.Spec.Template,
	NumCPUs:           m.vsphereMachine.Spec.NumCPUs,
	NumCoresPerSocket: m.vsphereMachine.Spec.NumCoresPerSocket,
	MemoryMiB:         m.vsphereMachine.Spec.MemoryMiB,
	DiskGiB:           m.vsphereMachine.Spec.DiskGiB,
	TagIDs:            m.vsphereMachine.Spec.TagIDs,
	Snapshot:          m.vsphereMachine.Spec.Snapshot,
	CloneMode:         mapiCloneMode,
	Network:           mapiNetworkSpec,
	DataDisks:         mapiDataDisks,
}
Reading old CAPV fields; should use VirtualMachineCloneSpec
In CAPV v1beta1 these fields (Template/CloneMode/Snapshot/NumCPUs/NumCoresPerSocket/MemoryMiB/DiskGiB/TagIDs/Network) live under Spec.VirtualMachineCloneSpec. Current code reads top-level fields, losing data and causing asymmetric conversion.
- mapiCloneMode, err := convertCAPVCloneModeToMAPI(fldPath.Child("cloneMode"), m.vsphereMachine.Spec.CloneMode)
+ mapiCloneMode, err := convertCAPVCloneModeToMAPI(fldPath.Child("cloneMode"), m.vsphereMachine.Spec.VirtualMachineCloneSpec.CloneMode)
@@
- mapiNetworkSpec, networkWarnings, networkErrors := convertCAPVNetworkSpecToMAPI(fldPath.Child("network"), m.vsphereMachine.Spec.Network)
+ mapiNetworkSpec, networkWarnings, networkErrors := convertCAPVNetworkSpecToMAPI(fldPath.Child("network"), m.vsphereMachine.Spec.VirtualMachineCloneSpec.Network)
@@
- Template: m.vsphereMachine.Spec.Template,
- NumCPUs: m.vsphereMachine.Spec.NumCPUs,
- NumCoresPerSocket: m.vsphereMachine.Spec.NumCoresPerSocket,
- MemoryMiB: m.vsphereMachine.Spec.MemoryMiB,
- DiskGiB: m.vsphereMachine.Spec.DiskGiB,
- TagIDs: m.vsphereMachine.Spec.TagIDs,
- Snapshot: m.vsphereMachine.Spec.Snapshot,
+ Template: m.vsphereMachine.Spec.VirtualMachineCloneSpec.Template,
+ NumCPUs: m.vsphereMachine.Spec.VirtualMachineCloneSpec.NumCPUs,
+ NumCoresPerSocket: m.vsphereMachine.Spec.VirtualMachineCloneSpec.NumCoresPerSocket,
+ MemoryMiB: m.vsphereMachine.Spec.VirtualMachineCloneSpec.MemoryMiB,
+ DiskGiB: m.vsphereMachine.Spec.VirtualMachineCloneSpec.DiskGiB,
+ TagIDs: m.vsphereMachine.Spec.VirtualMachineCloneSpec.TagIDs,
+ Snapshot: m.vsphereMachine.Spec.VirtualMachineCloneSpec.Snapshot,📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| mapiCloneMode, err := convertCAPVCloneModeToMAPI(fldPath.Child("cloneMode"), m.vsphereMachine.Spec.CloneMode) | |
| if err != nil { | |
| errors = append(errors, err) | |
| } | |
| // Convert network configuration | |
| mapiNetworkSpec, networkWarnings, networkErrors := convertCAPVNetworkSpecToMAPI(fldPath.Child("network"), m.vsphereMachine.Spec.Network) | |
| if len(networkErrors) > 0 { | |
| errors = append(errors, networkErrors...) | |
| } | |
| warnings = append(warnings, networkWarnings...) | |
| // Convert data disks | |
| mapiDataDisks, diskWarnings, diskErrors := convertCAPVDataDisksToMAPI(fldPath.Child("dataDisks"), m.vsphereMachine.Spec.DataDisks) | |
| if len(diskErrors) > 0 { | |
| errors = append(errors, diskErrors...) | |
| } | |
| warnings = append(warnings, diskWarnings...) | |
| mapiProviderConfig := mapiv1.VSphereMachineProviderSpec{ | |
| TypeMeta: metav1.TypeMeta{ | |
| Kind: "VSphereMachineProviderSpec", | |
| APIVersion: "machine.openshift.io/v1beta1", | |
| }, | |
| Template: m.vsphereMachine.Spec.Template, | |
| NumCPUs: m.vsphereMachine.Spec.NumCPUs, | |
| NumCoresPerSocket: m.vsphereMachine.Spec.NumCoresPerSocket, | |
| MemoryMiB: m.vsphereMachine.Spec.MemoryMiB, | |
| DiskGiB: m.vsphereMachine.Spec.DiskGiB, | |
| TagIDs: m.vsphereMachine.Spec.TagIDs, | |
| Snapshot: m.vsphereMachine.Spec.Snapshot, | |
| CloneMode: mapiCloneMode, | |
| Network: mapiNetworkSpec, | |
| DataDisks: mapiDataDisks, | |
| } | |
| mapiCloneMode, err := convertCAPVCloneModeToMAPI(fldPath.Child("cloneMode"), m.vsphereMachine.Spec.VirtualMachineCloneSpec.CloneMode) | |
| if err != nil { | |
| errors = append(errors, err) | |
| } | |
| // Convert network configuration | |
| mapiNetworkSpec, networkWarnings, networkErrors := convertCAPVNetworkSpecToMAPI(fldPath.Child("network"), m.vsphereMachine.Spec.VirtualMachineCloneSpec.Network) | |
| if len(networkErrors) > 0 { | |
| errors = append(errors, networkErrors...) | |
| } | |
| warnings = append(warnings, networkWarnings...) | |
| // Convert data disks | |
| mapiDataDisks, diskWarnings, diskErrors := convertCAPVDataDisksToMAPI(fldPath.Child("dataDisks"), m.vsphereMachine.Spec.DataDisks) | |
| if len(diskErrors) > 0 { | |
| errors = append(errors, diskErrors...) | |
| } | |
| warnings = append(warnings, diskWarnings...) | |
| mapiProviderConfig := mapiv1.VSphereMachineProviderSpec{ | |
| TypeMeta: metav1.TypeMeta{ | |
| Kind: "VSphereMachineProviderSpec", | |
| APIVersion: "machine.openshift.io/v1beta1", | |
| }, | |
| Template: m.vsphereMachine.Spec.VirtualMachineCloneSpec.Template, | |
| NumCPUs: m.vsphereMachine.Spec.VirtualMachineCloneSpec.NumCPUs, | |
| NumCoresPerSocket: m.vsphereMachine.Spec.VirtualMachineCloneSpec.NumCoresPerSocket, | |
| MemoryMiB: m.vsphereMachine.Spec.VirtualMachineCloneSpec.MemoryMiB, | |
| DiskGiB: m.vsphereMachine.Spec.VirtualMachineCloneSpec.DiskGiB, | |
| TagIDs: m.vsphereMachine.Spec.VirtualMachineCloneSpec.TagIDs, | |
| Snapshot: m.vsphereMachine.Spec.VirtualMachineCloneSpec.Snapshot, | |
| CloneMode: mapiCloneMode, | |
| Network: mapiNetworkSpec, | |
| DataDisks: mapiDataDisks, | |
| } |
🤖 Prompt for AI Agents
In pkg/conversion/capi2mapi/vsphere.go around lines 184 to 218, the code is
reading CAPV v1beta1 fields from the top-level Spec but those fields (Template,
CloneMode, Snapshot, NumCPUs, NumCoresPerSocket, MemoryMiB, DiskGiB, TagIDs,
Network, DataDisks) live under Spec.VirtualMachineCloneSpec; update the code to
first read vmCloneSpec := m.vsphereMachine.Spec.VirtualMachineCloneSpec (handle
nil/missing VMCloneSpec with a nil-check and append an informative error if
needed), then pass fldPath.Child("virtualMachineCloneSpec") or adjust child
field paths when calling convertCAPVCloneModeToMAPI,
convertCAPVNetworkSpecToMAPI, and convertCAPVDataDisksToMAPI so they operate on
vmCloneSpec fields, and populate mapiProviderConfig fields from vmCloneSpec
(e.g., Template = vmCloneSpec.Template, NumCPUs = vmCloneSpec.NumCPUs, CloneMode
= converted value from vmCloneSpec.CloneMode, Network = converted network from
vmCloneSpec.Network, DataDisks = converted disks from vmCloneSpec.DataDisks,
etc.) ensuring existing error/warning aggregation remains intact.
// Create workspace from the CAPV spec fields
workspace := &mapiv1.Workspace{
	Server:       m.vsphereMachine.Spec.Server,
	Datacenter:   m.vsphereMachine.Spec.Datacenter,
	Folder:       m.vsphereMachine.Spec.Folder,
	Datastore:    m.vsphereMachine.Spec.Datastore,
	ResourcePool: m.vsphereMachine.Spec.ResourcePool,
}
🛠️ Refactor suggestion
Workspace fields sourced from wrong location
Workspace should be built from Spec.VirtualMachineCloneSpec.{Server,Datacenter,Folder,Datastore,ResourcePool}.
- workspace := &mapiv1.Workspace{
- Server: m.vsphereMachine.Spec.Server,
- Datacenter: m.vsphereMachine.Spec.Datacenter,
- Folder: m.vsphereMachine.Spec.Folder,
- Datastore: m.vsphereMachine.Spec.Datastore,
- ResourcePool: m.vsphereMachine.Spec.ResourcePool,
- }
+ workspace := &mapiv1.Workspace{
+ Server: m.vsphereMachine.Spec.VirtualMachineCloneSpec.Server,
+ Datacenter: m.vsphereMachine.Spec.VirtualMachineCloneSpec.Datacenter,
+ Folder: m.vsphereMachine.Spec.VirtualMachineCloneSpec.Folder,
+ Datastore: m.vsphereMachine.Spec.VirtualMachineCloneSpec.Datastore,
+ ResourcePool: m.vsphereMachine.Spec.VirtualMachineCloneSpec.ResourcePool,
🤖 Prompt for AI Agents
In pkg/conversion/capi2mapi/vsphere.go around lines 220 to 227, the Workspace is
being populated from m.vsphereMachine.Spec.* fields but should source Server,
Datacenter, Folder, Datastore and ResourcePool from
m.vsphereMachine.Spec.VirtualMachineCloneSpec instead; update the Workspace
construction to use m.vsphereMachine.Spec.VirtualMachineCloneSpec.Server,
.Datacenter, .Folder, .Datastore and .ResourcePool (and add a nil-check for
VirtualMachineCloneSpec if necessary before accessing its fields to avoid
panics).
if len(device.AddressesFromPools) > 0 {
	addressesFromPools := make([]mapiv1.AddressesFromPool, len(device.AddressesFromPools))
	for j, pool := range device.AddressesFromPools {
		addressesFromPools[j] = mapiv1.AddressesFromPool{
			Group:    *pool.APIGroup,
			Resource: pool.Kind, // This might need adjustment based on actual mapping
			Name:     pool.Name,
		}
	}
	devices[i].AddressesFromPools = addressesFromPools
}
🛠️ Refactor suggestion
Nil APIGroup deref and Resource/Kind mapping
*pool.APIGroup will panic if APIGroup is nil. Also, mapping CAPV Kind to MAPI Resource needs conversion (e.g., IPAddressPool -> ipaddresspools).
- Group: *pool.APIGroup,
- Resource: pool.Kind, // This might need adjustment based on actual mapping
+ Group: derefOrEmpty(pool.APIGroup),
+ Resource: kindToResource(pool.Kind),Add helpers:
func derefOrEmpty(p *string) string { if p == nil { return "" }; return *p }
func kindToResource(kind string) string {
if kind == "IPAddressPool" { return "ipaddresspools" }
return strings.ToLower(kind) + "s"
}
Import strings. Mirror the reverse mapping in mapi2capi.
🤖 Prompt for AI Agents
In pkg/conversion/capi2mapi/vsphere.go around lines 286-296, avoid dereferencing
pool.APIGroup directly and correctly map CAPV Kind to MAPI resource names: add a
helper derefOrEmpty(*string) string that returns "" if nil and use it for Group,
add a kindToResource(kind string) string that maps "IPAddressPool" to
"ipaddresspools" and otherwise returns strings.ToLower(kind) + "s", import
"strings", and replace direct uses of *pool.APIGroup and pool.Kind with
derefOrEmpty(pool.APIGroup) and kindToResource(pool.Kind); also ensure the
reverse mapping exists in mapi2capi to mirror this conversion.
vsphereMachineTemplate, err := vsphereMachineToVSphereMachineTemplate(vsphereMachine.(*vspherev1.VSphereMachine), v.machineSet.Name, v.infrastructure.Status.InfrastructureName)
if err != nil {
	return nil, nil, warnings, fmt.Errorf("failed to convert VSphereMachine to VSphereMachineTemplate: %w", err)
}
Wrong namespace passed to VSphereMachineTemplate
You pass the cluster name (v.infrastructure.Status.InfrastructureName) as the template's namespace. That should be the CAPI namespace (e.g., openshift-cluster-api), or left unset for the caller to assign. Using the cluster name will misplace the template.
Apply this diff to stop passing an invalid namespace (let the controller set it later):
- vsphereMachineTemplate, err := vsphereMachineToVSphereMachineTemplate(vsphereMachine.(*vspherev1.VSphereMachine), v.machineSet.Name, v.infrastructure.Status.InfrastructureName)
+ vsphereMachineTemplate, err := vsphereMachineToVSphereMachineTemplate(vsphereMachine.(*vspherev1.VSphereMachine), v.machineSet.Name, "")📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| vsphereMachineTemplate, err := vsphereMachineToVSphereMachineTemplate(vsphereMachine.(*vspherev1.VSphereMachine), v.machineSet.Name, v.infrastructure.Status.InfrastructureName) | |
| if err != nil { | |
| return nil, nil, warnings, fmt.Errorf("failed to convert VSphereMachine to VSphereMachineTemplate: %w", err) | |
| } | |
| vsphereMachineTemplate, err := vsphereMachineToVSphereMachineTemplate(vsphereMachine.(*vspherev1.VSphereMachine), v.machineSet.Name, "") | |
| if err != nil { | |
| return nil, nil, warnings, fmt.Errorf("failed to convert VSphereMachine to VSphereMachineTemplate: %w", err) | |
| } |
🤖 Prompt for AI Agents
In pkg/conversion/mapi2capi/vsphere.go around lines 176 to 180, you're passing
the cluster name (v.infrastructure.Status.InfrastructureName) as the
VSphereMachineTemplate namespace which is incorrect; instead stop passing the
invalid namespace and leave it unset so the caller/controller can assign it —
update the call to vsphereMachineToVSphereMachineTemplate to pass an empty
string (or the proper CAPI namespace if intentionally known, e.g.,
v.machineSet.Namespace) for the namespace argument and keep the rest of the call
and error handling as-is.
// todo: jcallen ???
capiMachineSet.Spec.Template.Spec.InfrastructureRef.Namespace = v.machineSet.Namespace

// Set bootstrap configuration from provider spec if available
🛠️ Refactor suggestion
InfrastructureRef namespace set to MAPI ns
capiMachineSet.Spec.Template.Spec.InfrastructureRef.Namespace = v.machineSet.Namespace points to the MAPI namespace. Infra resources should live in the CAPI namespace. Either omit here and let the caller/controller set it, or pass the correct namespace in.
- capiMachineSet.Spec.Template.Spec.InfrastructureRef.Namespace = v.machineSet.Namespace
+ // Leave unset here; controller assigns the CAPI namespace before create.
+ // capiMachineSet.Spec.Template.Spec.InfrastructureRef.Namespace = ""
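For context, the caller-side assignment this implies is small; capiNamespace below stands for whatever CAPI namespace the controller is configured with (e.g. openshift-cluster-api) and is not taken from this PR:

// In the sync controller, just before creating or patching the CAPI MachineSet:
capiMachineSet.Spec.Template.Spec.InfrastructureRef.Namespace = capiNamespace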
+ // capiMachineSet.Spec.Template.Spec.InfrastructureRef.Namespace = ""📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| // todo: jcallen ??? | |
| capiMachineSet.Spec.Template.Spec.InfrastructureRef.Namespace = v.machineSet.Namespace | |
| // Set bootstrap configuration from provider spec if available | |
| // todo: jcallen ??? | |
| // Leave unset here; controller assigns the CAPI namespace before create. | |
| // capiMachineSet.Spec.Template.Spec.InfrastructureRef.Namespace = "" | |
| // Set bootstrap configuration from provider spec if available |
🤖 Prompt for AI Agents
In pkg/conversion/mapi2capi/vsphere.go around lines 191 to 194, the code sets
capiMachineSet.Spec.Template.Spec.InfrastructureRef.Namespace to the MAPI
machine set namespace which is incorrect because infra resources must live in
the CAPI namespace; either remove this assignment so the caller/controller can
set the correct namespace, or set it to the CAPI namespace (e.g., use the CAPI
MachineSet/Cluster namespace variable available in scope —
capiMachineSet.Namespace or the explicit CAPI namespace field passed into the
converter) so the InfrastructureRef points to the proper CAPI namespace.
// generateInfraMachineTemplateNameWithSpecHash generates a name with a hash suffix based on the spec.
// This is a simplified implementation - in practice, you'd want to use a proper hash function.
func generateInfraMachineTemplateNameWithSpecHash(name string, spec interface{}) (string, error) {
	// For now, just return the name - in a real implementation, you'd compute a hash
	// of the spec and append it to ensure uniqueness
	return name, nil
}
🛠️ Refactor suggestion
Name hashing stub will cause template name collisions
Returning the bare name prevents rolling updates and deduplication. Use a stable hash of the Spec appended to the base name.
-func generateInfraMachineTemplateNameWithSpecHash(name string, spec interface{}) (string, error) {
- // For now, just return the name - in a real implementation, you'd compute a hash
- // of the spec and append it to ensure uniqueness
- return name, nil
-}
+func generateInfraMachineTemplateNameWithSpecHash(name string, spec interface{}) (string, error) {
+ b, err := yaml.Marshal(spec)
+ if err != nil {
+ return "", fmt.Errorf("marshal spec: %w", err)
+ }
+ sum := sha256.Sum256(b)
+ short := hex.EncodeToString(sum[:])[:12]
+ return fmt.Sprintf("%s-%s", name, short), nil
+}Note: add imports crypto/sha256 and encoding/hex.
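Used that way, the generated name only changes when the spec changes, which is what allows outdated templates to be detected and rotated. A usage sketch (the base name and suffix below are purely illustrative):

name, err := generateInfraMachineTemplateNameWithSpecHash("my-cluster-worker", vsphereMachineTemplate.Spec)
if err != nil {
	return err
}
// e.g. "my-cluster-worker-3f9a1c2b4d5e"; a spec change yields a different suffix.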
// Convert AddressesFromPools
if len(device.AddressesFromPools) > 0 {
	addressesFromPools := make([]corev1.TypedLocalObjectReference, len(device.AddressesFromPools))
	for j, pool := range device.AddressesFromPools {
		addressesFromPools[j] = corev1.TypedLocalObjectReference{
			APIGroup: &pool.Group,
			Kind:     pool.Resource, // This might need adjustment based on actual mapping
			Name:     pool.Name,
		}
	}
	devices[i].AddressesFromPools = addressesFromPools
}
💡 Verification agent
🧩 Analysis chain
AddressesFromPools: Resource vs Kind mismatch
MAPI uses plural resource (e.g., "ipaddresspools"), while CAPV expects Kind (e.g., "IPAddressPool"). Mapping pool.Resource directly to Kind will break lookups.
- APIGroup: &pool.Group,
- Kind: pool.Resource, // This might need adjustment based on actual mapping
+ APIGroup: &pool.Group,
+ Kind: resourceToKind(pool.Group, pool.Resource),
Name: pool.Name,Add a small mapper:
func resourceToKind(group, resource string) string {
if strings.EqualFold(group, "ipam.cluster.x-k8s.io") && resource == "ipaddresspools" {
return "IPAddressPool"
}
// best-effort fallback
return resource
}Also add strings import.
Would you like me to wire this helper in both directions (also in capi2mapi)?
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "PWD: $(pwd)"
echo
echo "---- File snippet: pkg/conversion/mapi2capi/vsphere.go (if present) ----"
if [ -f pkg/conversion/mapi2capi/vsphere.go ]; then
nl -ba pkg/conversion/mapi2capi/vsphere.go | sed -n '1,260p'
echo "---- lines 320-400 ----"
nl -ba pkg/conversion/mapi2capi/vsphere.go | sed -n '320,400p'
else
echo "File not found: pkg/conversion/mapi2capi/vsphere.go"
fi
echo
echo "---- Search for 'AddressesFromPools' occurrences ----"
rg -n --hidden -S "AddressesFromPools" || true
echo
echo "---- Show context for occurrences of 'AddressesFromPools' ----"
rg -n --hidden -S -C3 "AddressesFromPools" || true
echo
echo "---- Search for 'pool.Resource' occurrences ----"
rg -n --hidden -S "pool\\.Resource" || true
echo
echo "---- Search for struct definitions or types that include fields 'Group' and 'Resource' ----"
rg -n --hidden -S "struct\\s*\\{[^}]*Group[^}]*Resource|type\\s+\\w+\\s+struct\\s*\\{[^}]*Group|Resource\\s+string" || true
echo
echo "---- Search for literal 'ipaddresspools' or 'ipaddresspool' in repo ----"
rg -n --hidden -S "ipaddresspools|ipaddresspool" || true
echo
echo "---- Search for existing resource->kind mapping helpers ----"
rg -n --hidden -S "resourceToKind|ResourceToKind|resource_to_kind|toKind\\(|ToKind\\(" || true
echo
echo "---- Search for other conversions that set TypedLocalObjectReference.Kind from a '.Resource' field ----"
rg -n --hidden -S "Kind:\\s*pool\\.Resource|Kind:\\s*.*\\.Resource" || true
echo
echo "---- Done ----"
Length of output: 138
🏁 Script executed:
#!/bin/bash
# safer script: don't exit on first error
set -uo pipefail
FILE=pkg/conversion/mapi2capi/vsphere.go
echo "PWD: $(pwd)"
echo
if [ -f "$FILE" ]; then
echo "---- Showing $FILE (first 400 lines) ----"
sed -n '1,400p' "$FILE" || true
else
echo "File not found: $FILE"
fi
echo
echo "---- ripgrep (rg) available? ----"
if command -v rg >/dev/null 2>&1; then
echo "rg: available"
else
echo "rg: not found"
fi
echo
echo "---- Search for 'AddressesFromPools' occurrences ----"
if command -v rg >/dev/null 2>&1; then
rg -n -S -C3 "AddressesFromPools" || true
else
grep -R --line-number -n -E "AddressesFromPools" . || true
fi
echo
echo "---- Search for 'pool.Resource' occurrences ----"
if command -v rg >/dev/null 2>&1; then
rg -n -S "pool\\.Resource" || true
else
grep -R --line-number -n "pool.Resource" . || true
fi
echo
echo "---- Search for 'ipaddresspools' or 'ipaddresspool' ----"
if command -v rg >/dev/null 2>&1; then
rg -n -S "ipaddresspools|ipaddresspool" || true
else
grep -R --line-number -n -E "ipaddresspools|ipaddresspool" . || true
fi
echo
echo "---- Search for existing resource->kind helper functions ----"
if command -v rg >/dev/null 2>&1; then
rg -n -S "resourceToKind|ResourceToKind|resource_to_kind|toKind\\(|ToKind\\(" || true
else
grep -R --line-number -n -E "resourceToKind|ResourceToKind|resource_to_kind|toKind\\(|ToKind\\(" . || true
fi
echo
echo "---- Search for other conversions assigning TypedLocalObjectReference.Kind from a '.Resource' field ----"
if command -v rg >/dev/null 2>&1; then
rg -n -S "Kind:\\s*.*\\.Resource|TypedLocalObjectReference\\{[^}]*Kind:\\s*.*\\.Resource" || true
else
grep -R --line-number -n -E "Kind:\\s*.*\\.Resource|TypedLocalObjectReference\\{[^}]*Kind:\\s*.*\\.Resource" . || true
fi
echo
echo "Done"
Length of output: 18970
Fix AddressesFromPools Kind/Resource mismatch
MAPI uses resource names (e.g. "ipaddresspools"); CAPV expects Kind (e.g. "IPAddressPool"). Current code assigns Resource <-> Kind directly which will break lookups. Replace the direct assignments in both conversions with small mapping helpers.
Files and suggested diffs:
File: pkg/conversion/mapi2capi/vsphere.go (around lines 351-362)
- APIGroup: &pool.Group,
- Kind: pool.Resource, // This might need adjustment based on actual mapping
+ APIGroup: &pool.Group,
+ Kind: resourceToKind(pool.Group, pool.Resource),
  Name: pool.Name,
File: pkg/conversion/capi2mapi/vsphere.go (around lines 285-293)
- Group: *pool.APIGroup,
- Resource: pool.Kind, // This might need adjustment based on actual mapping
+ Group: *pool.APIGroup,
+ Resource: kindToResource(*pool.APIGroup, pool.Kind),
  Name: pool.Name,
Add small helpers (and import "strings"):
func resourceToKind(group, resource string) string {
if strings.EqualFold(group, "ipam.cluster.x-k8s.io") && resource == "ipaddresspools" {
return "IPAddressPool"
}
return resource
}
func kindToResource(group, kind string) string {
if strings.EqualFold(group, "ipam.cluster.x-k8s.io") && strings.EqualFold(kind, "IPAddressPool") {
return "ipaddresspools"
}
return strings.ToLower(kind) // best-effort fallback
}
🤖 Prompt for AI Agents
In pkg/conversion/mapi2capi/vsphere.go around lines 351-362 the code wrongly
assigns MAPI resource names to CAPV Kind (e.g. "ipaddresspools" ->
"ipaddresspools") which breaks lookups; add a small helper resourceToKind(group,
resource string) string (and import "strings") that returns "IPAddressPool" when
group equals "ipam.cluster.x-k8s.io" and resource == "ipaddresspools", otherwise
returns resource, then replace direct Kind assignment with
resourceToKind(pool.Group, pool.Resource); make the analogous change in
pkg/conversion/capi2mapi/vsphere.go (around lines 285-293) by adding
kindToResource(group, kind string) string that maps "IPAddressPool" ->
"ipaddresspools" for the same group and otherwise returns strings.ToLower(kind),
import "strings" there as well, and use kindToResource when converting CAPV Kind
back to MAPI Resource.
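A quick round-trip check for the two helpers sketched above could look like this (a test sketch under the usual testing package, not an existing test in this PR):

func TestAddressPoolKindResourceRoundTrip(t *testing.T) {
	const group = "ipam.cluster.x-k8s.io"
	// MAPI resource -> CAPV kind -> MAPI resource.
	if got := kindToResource(group, resourceToKind(group, "ipaddresspools")); got != "ipaddresspools" {
		t.Errorf("resource round trip mismatch: got %q", got)
	}
	// CAPV kind -> MAPI resource -> CAPV kind.
	if got := resourceToKind(group, kindToResource(group, "IPAddressPool")); got != "IPAddressPool" {
		t.Errorf("kind round trip mismatch: got %q", got)
	}
}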
@jcpowermac: This pull request references SPLAT-2410 which is a valid jira issue.
Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.21.0" version, but no target version was set.
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
@jcpowermac: all tests passed! Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
No description provided.