
Conversation

@bertiethorpe
Member

No description provided.

@sjpb
Collaborator

sjpb commented Nov 7, 2025

Should probably do a fatimage build for this to check it doesn't somehow break that. We don't need to bump the image though.


@sjpb sjpb left a comment


Initial review

@@ -0,0 +1,11 @@
---

- hosts: "{{ target_hosts | default('all') }}"

Nah, the "highest" group we should ever use is cluster. That is all instances controlled by the appliance. Hosts in all but not in cluster are ones we have maybe added into an inventory but don't want to control, e.g. external NFS, external Pulp, ...

TBF we should document that somewhere!
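As an illustrative sketch of that convention (host and group names here are hypothetical), an inventory where externally-managed hosts sit in all but outside cluster might look like:

```yaml
# Hypothetical inventory sketch: "cluster" is everything the appliance
# controls; hosts under "all" but outside "cluster" (e.g. an external
# NFS server) are present for reference only and are not managed.
all:
  children:
    cluster:
      children:
        login:
          hosts:
            login-0:
        compute:
          hosts:
            compute-0:
  hosts:
    external-nfs:   # in "all" but not "cluster": not controlled by the appliance
```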

@@ -0,0 +1,11 @@
---

- hosts: "{{ target_hosts | default('all') }}"

I don't think target_hosts is necessary; this can just be hosts: cluster. If you want to only run on some hosts as an ad-hoc, you can just use ansible-playbook --limit ..., which is what we do for e.g. the rebuild ad-hoc.

I know you suggested being able to "tweak" this for rebuild groups (presumably when running this from site, rather than as an ad-hoc), but TBH, with the way that's passed via vars: at the moment, you can't override it from inventory anyway, so I wouldn't bother.
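A minimal sketch of that suggestion, assuming the rest of the play is otherwise unchanged:

```yaml
# Sketch: target the cluster group directly instead of a target_hosts var.
- hosts: cluster
  gather_facts: no
  become: no
  tasks:
    - name: Lock/Unlock instances
      openstack.cloud.server_action:
        action: "{{ server_action | default('lock') }}"
```

An ad-hoc run on a subset is then just e.g. ansible-playbook <playbook> --limit login, with no extra variable needed.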

  gather_facts: no
  become: no
  vars:
    protected_environments:

In common inventory please, so it can be overridden. And in keeping with naming for other similar things I'd:

  1. Define it in environments/common/group_vars/all/defaults.yml
  2. Call it appliances_protected_environments (note the plural, I don't like it TBH but it's what we've got)
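A sketch of that default (the empty-list value is an assumption; sites would override it in their own inventory):

```yaml
# environments/common/group_vars/all/defaults.yml (sketch)
appliances_protected_environments: []   # e.g. ['production']; override per site
```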

      register: env_confirm_safe
      when:
        - appliances_environment_name in protected_environments
        - not (prd_continue | default(false) | bool)

I said in the ticket I didn't like prd_continue. There are two options here:

  1. A better name. Not sure but initial ideas: appliances_protected_environment_continue or appliances_protected_environment_autoapprove (a la TF) or something
  2. Maybe the logic copes with appliances_protected_environments being falsy, and in that case it always continues? Then you can just set it from extra vars or whatever without needing a 2nd var at all. (Note for extra vars you need a | bool, as that lets people do -e foo=no and it works "as expected".)
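A sketch of option 2, assuming the confirmation task is an ansible.builtin.pause prompt and the list var is appliances_protected_environments (both names taken from the suggestions above; the task body itself is hypothetical):

```yaml
# Sketch: an empty/falsy protected-environments list always continues,
# so no second control variable is needed.
- name: Confirm running against a protected environment
  ansible.builtin.pause:
    prompt: "Environment {{ appliances_environment_name }} is protected; press Enter to continue"
  register: env_confirm_safe
  when:
    - appliances_protected_environments | default([])
    - appliances_environment_name in appliances_protected_environments
```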

  tasks:
    - name: Lock/Unlock instances
      openstack.cloud.server_action:
        action: "{{ server_action | default('lock') }}"

The fact that the parameter is "action", not "state", seems a bit crackers, but that's not on you, so the var name makes sense, although I'm maybe tempted by appliances_server_action (see below). For this case I think having the default defined here does make sense TBH - I can't really see how having this set in inventory, to provide differences per site or per instance, would make sense.

- ansible.builtin.import_playbook: safe-env.yml

- name: Lock all instances
  vars:

See above.
