
Conversation

@oliashish
Contributor

@oliashish oliashish commented Jan 22, 2026

Currently there is only one hardcoded string `openstack-aee-default-env` for the ansibleEE environment ConfigMap, which is applied throughout the cluster.
This PR gives users the ability to customize the ansibleEE env ConfigMap at the Deployment and NodeSet levels individually. The precedence order for ansibleEEEnvConfigMap is deployment > nodeset > default.
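
Roughly, the resolution follows this shape (a minimal sketch only; the function and variable names here are illustrative, not the actual controller code):

// resolveEnvConfigMapName applies the deployment > nodeset > default precedence.
func resolveEnvConfigMapName(deploymentValue, nodeSetValue string) string {
	if deploymentValue != "" {
		// Deployment-level value wins when set.
		return deploymentValue
	}
	if nodeSetValue != "" {
		// Otherwise fall back to the NodeSet-level value.
		return nodeSetValue
	}
	// Keep the previous hardcoded default for backward compatibility.
	return "openstack-aee-default-env"
}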

Test Cases

  • No ConfigMap (uses the default)
  • A new ConfigMap with no data, referenced in the NodeSet (overrides the default, sets nothing)
  • A new ConfigMap with some data, referenced in the NodeSet (overrides the default, sets the env; see the example ConfigMap after this list)
  • A new ConfigMap referenced in the Deployment (overrides the NodeSet's ConfigMap, sets the env for that Deployment's job pods)
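
For illustration, the referenced object is just a plain ConfigMap whose data keys become environment variables for the ansibleEE job pods. A hypothetical example built with the upstream corev1 types (the name and the env var values are made up):

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Hypothetical ConfigMap; reference it via ansibleEEEnvConfigMapName on the NodeSet or Deployment.
envCM := &corev1.ConfigMap{
	ObjectMeta: metav1.ObjectMeta{Name: "my-aee-env", Namespace: "openstack"},
	Data: map[string]string{
		"ANSIBLE_VERBOSITY":   "2",
		"ANSIBLE_FORCE_COLOR": "True",
	},
}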

Jira: OSPRH-18955

@openshift-ci
Contributor

openshift-ci bot commented Jan 22, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: oliashish
Once this PR has been reviewed and has the lgtm label, please assign dprince for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot requested review from fultonj and rebtoor January 22, 2026 13:24
@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/d91bb4d5006d46cb8f53bf9db1e962da

✔️ openstack-k8s-operators-content-provider SUCCESS in 3h 12m 34s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 23m 20s
❌ cifmw-crc-podified-edpm-baremetal FAILURE in 48m 11s
✔️ adoption-standalone-to-crc-ceph-provider SUCCESS in 2h 56m 12s
✔️ openstack-operator-tempest-multinode SUCCESS in 1h 43m 43s

@oliashish oliashish force-pushed the fix/configurable-envConfigMapName branch from b1bf4a9 to 0822da4 on January 23, 2026 21:19
@oliashish oliashish changed the title from "WIP: added customizable ansibleEE env ConfigMap name" to "Fix: Added customizable ansibleEE env ConfigMap name" on Jan 23, 2026
if envConfigMapName == "" {
	envConfigMapName = "openstack-aee-default-env"
}

Contributor Author


Can we verify if this block is okay for backward compatibility? Or is there a better way to approach this?

Contributor


LGTM

@oliashish oliashish requested a review from slagle January 23, 2026 21:30
// If not specified, defaults to "openstack-aee-default-env".
// This can be overridden at the Deployment level.
AnsibleEEEnvConfigMapName string `json:"ansibleEEEnvConfigMapName,omitempty"`
}
Contributor


I think we can drop this from the NodeSet actually. I don't see where it's used honestly. And given that the env vars apply to the entire Deployment, there isn't a way to set different env vars per NodeSet anyway, so it should only be allowed on the Deployment.

Contributor Author


This was done so that any env configuration applied at the NodeSet level propagates to each Deployment that is part of that NodeSet (unless it is set at the Deployment level).

This might be useful where we need different configurations for different Deployments and can use the NodeSet as an umbrella for default behavior. Examples:

  • Different compute or machine types needing different Ansible runner settings
  • Reusable NodeSet definitions with default values
  • Multiple Deployments reusing the same NodeSet with its predefined env configuration, until a different configuration is required at the Deployment level or for a specific use case

This is also useful for an end user configuring the environment who doesn't want to repeat the same env configuration in every Deployment YAML (for example, 100 OSDPDs); instead, the user can provide it once on the NodeSet and all 100 OSDPDs will use it. This is less error prone and easier to manage.

With this approach we can manage both the NodeSet level (default) and the Deployment level (specific overrides).

What do you say?

Contributor


The problem is that the env vars that will be set for Ansible from the ConfigMap apply globally to the ansible execution. Given a Deployment can be for multiple NodeSets, there is no way to handle different env vars per NodeSet in the same Deployment.

Even in this PR, I don't see how the ConfigMap is used at the NodeSet level. It's never set anywhere. In deployment_controller.go, the value is gotten from the Deployment, and then in ansible_execution.go, that's the value that is used. The value on the NodeSet is never used, afaict.

Contributor Author


The problem is that the env vars that will be set for Ansible from the ConfigMap apply globally to the ansible execution. Given a Deployment can be for multiple NodeSets, there is no way to handle different env vars per NodeSet in the same Deployment.

No, there is a loop for handling different env vars per NodeSet

for _, nodeSet := range nodeSets.Items {
	...
	// override the ansibleEEEnvConfigMapName on nodeset if there is any provided for a specific deployment.
	if instance.Spec.AnsibleEEEnvConfigMapName != "" {
		ansibleEESpec.AnsibleEEEnvConfigMapName = instance.Spec.AnsibleEEEnvConfigMapName
	}
}

Even in this PR, I don't see how the ConfigMap is used at the NodeSet level. It's never set anywhere.


func (instance OpenStackDataPlaneNodeSet) GetAnsibleEESpec() AnsibleEESpec {
	return AnsibleEESpec{
		NetworkAttachments:        instance.Spec.NetworkAttachments,
		ExtraMounts:               instance.Spec.NodeTemplate.ExtraMounts,
		Env:                       instance.Spec.Env,
		ServiceAccountName:        instance.Name,
		AnsibleEEEnvConfigMapName: instance.Spec.AnsibleEEEnvConfigMapName, // HERE, the instance is NodeSet type, this will only be overridden if ConfigMap for environment is set in deployment
	}
}

In deployment_controller.go, the value is gotten from the Deployment, and then in ansible_execution.go, that's the value that is used. The value on the NodeSet is never used, afaict.

Only if the env ConfigMap is explicitly set on the Deployment; otherwise the value from the NodeSet is used.

if envConfigMapName == "" {
	envConfigMapName = "openstack-aee-default-env"
}
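
For reference, the resolved name is what the ansibleEE job ultimately consumes; roughly, it gets attached as an envFrom source on the job pod so the ConfigMap keys become environment variables. A sketch using the upstream corev1 types (the exact wiring in ansible_execution.go may differ):

// Sketch only: attach the resolved ConfigMap as an envFrom source on the AEE container.
// corev1 = k8s.io/api/core/v1
envFrom := []corev1.EnvFromSource{
	{
		ConfigMapRef: &corev1.ConfigMapEnvSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: envConfigMapName},
		},
	},
}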

Contributor


LGTM

@oliashish
Contributor Author

/retest

@openshift-ci
Contributor

openshift-ci bot commented Jan 28, 2026

@oliashish: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: ci/prow/precommit-check
Commit: 0822da4
Details: link
Required: true
Rerun command: /test precommit-check

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
