Conversation


@gekios gekios commented Jul 1, 2019

No description provided.

    def get_down_osds(self):
        data, _ = self.client.run_cmd('ceph osd tree -f json')
        down_osds = []
        self.down_osds = []
Owner

This is not necessary. See L20 of this file.

The constructor of this class is calling

       self.down_osds = self.get_down_osds()
       self.out_osds = self.get_out_osds()

which makes the assignment of self.down_osds redundant.

If you found that down_osds is not populated, the root cause might be in this function.
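
A minimal sketch of the suggestion, assuming the usual `ceph osd tree -f json` output (a top-level "nodes" list whose OSD entries carry a "status" of "up" or "down") and that json is already imported; the loop body is reconstructed here, since the hunks in this PR only show the append and return lines:

    def get_down_osds(self):
        data, _ = self.client.run_cmd('ceph osd tree -f json')
        down_osds = []
        for node_osd in json.loads(data).get('nodes', []):
            # only OSD entries carry a status; buckets (hosts, roots) do not
            if node_osd.get('status') == 'down':
                down_osds.append(node_osd.get('id'))
        # return the local list; the constructor assigns it to self.down_osds
        return down_osds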

Author

The issue I saw was that self.out_osds was populated initially through the constructor, but when the script marks a disk out it doesn't get updated. That causes disks to be removed while ignoring the threshold, which in the end breaks the Ceph cluster. I first checked the get_out_osds() function and it looks good.
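
A sketch of one way to address that, assuming a hypothetical mark_out() helper and the described flow (neither appears in these hunks): refresh the cached list immediately after marking an OSD out, so later threshold checks never see the stale value built in the constructor:

    def mark_out(self, osd_id):
        self.client.run_cmd('ceph osd out {}'.format(osd_id))
        # refresh the cached state so the next threshold check
        # does not rely on the list populated in __init__
        self.out_osds = self.get_out_osds()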

            self.down_osds.append(node_osd.get('id'))
        return self.down_osds

    def get_out_osds(self):
Owner

Same here.


    if len(self.gateways) < 2:
        self.reboot_allowed = False
    if len(self.gateways) >= 2:
Owner

The inverted logic makes more sense here. Good catch.
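
Since the two conditions are exact complements, and if the second branch only sets the flag (its body is not shown in this hunk, so this is an assumption), the pair could collapse to a single assignment:

    # equivalent to the two complementary if-statements above,
    # assuming the second branch just sets the flag to True
    self.reboot_allowed = len(self.gateways) >= 2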
