initial orangefs work for sdp#1
AJMKing wants to merge 18 commits into SKA-ScienceDataProcessor:master from
Conversation
oneswig
left a comment
Thanks Ali, some good stuff here, but we'll need to reshape a bit here and there. I'll try to get the Heat template work done and then do a test run of the code checked in.
inventory.pvfs
Outdated
@@ -0,0 +1,21 @@
# Ansible Shade uses OpenStack clients running locally
Inventory files should be auto-generated by the Heat role (which you didn't run, so that's fine).
I'm wondering where your DNS is coming from, have you tinkered with that?
What's interesting here is that we have overlapping roles: a node can be both server and client. This needs a little bit of thinking (but probably, in the first instance, manual adjustment)
I had the SSH config point to the nodes and sort out the keys, so Ansible will be resolving names that way. Not sure how this works with Heat.
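The overlapping-roles point above could be handled directly in the inventory: a node that is both server and client simply appears in both groups, and role tasks gate on group membership. A minimal sketch (group and host names are illustrative, not from the repo):

```ini
# Hypothetical inventory fragment: a node can belong to both
# the server and client groups; tasks then condition on
# group membership rather than assuming the sets are disjoint.
[orangefs_servers]
ofs-node-0
ofs-node-1

[orangefs_clients]
ofs-node-0    ; also a server
ofs-node-1    ; also a server
compute-0
compute-1
```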
@@ -0,0 +1,10 @@
[Unit]
As discussed, if this could get tucked into the RPM that'd be grand.
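Shipping the unit file inside the RPM could look roughly like this spec-file fragment (the package layout and unit name here are assumptions, not taken from the actual OrangeFS spec):

```
# Hypothetical orangefs.spec fragment: install the client unit
# file into the systemd unit directory and wire up the standard
# enable/disable scriptlets, instead of copying it from this repo.
%install
install -D -m 0644 orangefs-client.service \
    %{buildroot}%{_unitdir}/orangefs-client.service

%post
%systemd_post orangefs-client.service

%preun
%systemd_preun orangefs-client.service

%files
%{_unitdir}/orangefs-client.service
```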
@@ -0,0 +1,29 @@
---
- name: copy rpms to server
I don't think we should have the RPMs as artefacts in this repo, but how we handle them otherwise is TBD. Best option may be to make a locally-accessible repo server which contains the OrangeFS RPMs, plus a few others we've got kicking around.
    name: "{{ item }}"
    state: present
  with_items:
    - /tmp/orangefs-2.9.7-1.fc25.x86_64.rpm
These two items appear to be the same...
You mean the two RPMs? One should provide the server only, the other provides the client.
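If the RPMs move out of the repo and onto a locally-accessible repo server as suggested, the clients would only need a repo definition; the play could then install by package name instead of copying files. A sketch, with a placeholder URL and repo id:

```ini
# /etc/yum.repos.d/local-orangefs.repo -- hypothetical repo
# definition pointing at a locally-hosted server carrying the
# OrangeFS RPMs (plus the other local packages mentioned above).
# baseurl is a placeholder, not a real host.
[local-orangefs]
name=Local OrangeFS packages
baseurl=http://repo.example.internal/orangefs/fc25/x86_64/
enabled=1
gpgcheck=0
```

The copy-then-install tasks would then collapse to a single `yum` task with `name: orangefs` (or the client package) and `state: present`.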
orangefs/tasks/main.yml
Outdated
  when: inventory_hostname in groups['orange-data']

- name: setup mdraid 5 on nvmes
  command: mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 creates=/dev/md0
If /dev/md0 already exists, does this fail? If it doesn't, it'll get flattened in the next task. If it does, it means the playbook won't rerun.
If md0 exists the command won't run. If it does not, it will try to build the array. In the case that md0 was deleted or mdadm stopped the array but RAID superblocks still exist on the devices, the playbook will fail. Should probably break this out into a role that sets up mdraid, since LVM is also an option.
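A possible shape for that standalone mdraid role, handling the stale-superblock case so reruns don't fail mid-play (task names and the examine-based guard are a sketch, not the checked-in code):

```yaml
# Hypothetical mdraid role tasks. `creates: /dev/md0` skips the
# create when the array device already exists; the examine check
# additionally skips it when RAID superblocks survive on a member
# device even though /dev/md0 is gone (mdadm --examine exits
# non-zero when no superblock is found).
- name: check for an existing RAID superblock on the first NVMe
  command: mdadm --examine /dev/nvme0n1
  register: md_examine
  failed_when: false
  changed_when: false

- name: setup mdraid 5 on nvmes
  command: >
    mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
  args:
    creates: /dev/md0
  when: md_examine.rc != 0
```

A fuller role would also cover assembling an intact array (`mdadm --assemble`) rather than only skipping creation, and could offer LVM as an alternative backend.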
orangefs/tasks/main.yml
Outdated
@@ -0,0 +1,87 @@
---
It might be tidier syntax to conditionally include task files for client, server, etc. after the common tasks here.
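That conditional-include structure might look like this in main.yml (file and group names are illustrative; on current Ansible the `include:` keyword would be `include_tasks:`):

```yaml
# Hypothetical main.yml: run common tasks everywhere, then pull
# in per-role task files only on the hosts that need them.
- include: common.yml

- include: server.yml
  when: inventory_hostname in groups['orangefs_servers']

- include: client.yml
  when: inventory_hostname in groups['orangefs_clients']
```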
orangefs/tasks/main.yml~
Outdated
@@ -0,0 +1,90 @@
---
You Emacs users have no shame :-)
orange_setup/tasks/main.yml~
Outdated
@@ -0,0 +1,20 @@
---
Using the new directory structure for p3-appliances. Generalised some components (a bit) to enable cluster deployment with FC25 images.
Orangefs
Move Alasdair's latest work into new directory layout
More tweaks, almost there - getting unknown protocol (-92) on mount
Final fixes, deliver a working filesystem from baseline infrastructure
First cut at maintenance actions playbook
Branch for the initial OrangeFS work for SDP. Contains two new roles to set up and configure OrangeFS, along with the RPMs and the client unit file.