ryddan
Posted: Thu Dec 03, 2020 6:07 am    Post subject: vgextend drbdpool on RDQM HA setup?
Newbie
Joined: 10 Sep 2001    Posts: 9    Location: Sweden
Is it possible to extend drbdpool?
I need to create a new queue manager on RDQM, but there is not enough space left in drbdpool.
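For context, I check the remaining space in the pool with the standard LVM commands (nothing RDQM specific, just a sketch):
Code:
# show total/free space in the drbdpool volume group
vgs drbdpool
# or per-physical-volume detail
pvdisplay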
crashdog
Posted: Thu Dec 10, 2020 3:22 am    Post subject:
Voyager
Joined: 02 Apr 2017    Posts: 77
ryddan
Posted: Mon Dec 14, 2020 7:26 am    Post subject:
Newbie
Joined: 10 Sep 2001    Posts: 9    Location: Sweden
crashdog
Posted: Thu Dec 17, 2020 3:47 am    Post subject:
Voyager
Joined: 02 Apr 2017    Posts: 77
The question is what you really want to achieve. You mentioned in your first post that you want to create a new RDQM, not extend the space for an existing one.
I just tried to extend drbdpool on my lab system.
For that I extended the physical drive (in my case a virtual disk on MS Hyper-V), then used fdisk to resize the partition, rebooted, and ran pvresize to add the newly created space (a rough sketch of the commands follows the before/after output below). In my case I added 10 GB to sdb:
Before:
Code:
[root@t-mqhub1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 127G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
├─sda2 8:2 0 512M 0 part /boot
└─sda3 8:3 0 126G 0 part
├─vg_root-lv_root 253:0 0 44G 0 lvm /
├─vg_root-lv_swap 253:1 0 16G 0 lvm [SWAP]
├─vg_root-lv_home 253:2 0 512M 0 lvm /home
├─vg_root-lv_var 253:3 0 49G 0 lvm /var
└─vg_root-lv_tmp 253:4 0 1G 0 lvm /tmp
sdb 8:16 0 110G 0 disk
└─sdb1 8:17 0 110G 0 part
├─drbdpool-tmqhubha1_00 253:5 0 10G 0 lvm
│ └─drbd100 147:100 0 10G 0 disk /var/mqm/vols/tmqhubha1
└─drbdpool-tmqhubha3_00 253:6 0 10G 0 lvm
└─drbd101 147:101 0 10G 0 disk
[root@t-mqhub1 ~]# pvdisplay
--- Physical volume ---
PV Name /dev/sdb1
VG Name drbdpool
PV Size <100.00 GiB / not usable 0
Allocatable yes
PE Size 4.00 MiB
Total PE 25599
Free PE 20479
Allocated PE 5120
PV UUID L63qCU-ik9Q-rdOv-SysX-ddVd-8pN7-hGZWhx
After:
Code:
[root@t-mqhub1 ~]# pvdisplay
--- Physical volume ---
PV Name /dev/sdb1
VG Name drbdpool
PV Size <110.00 GiB / not usable 2.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 28159
Free PE 23039
Allocated PE 5120
PV UUID L63qCU-ik9Q-rdOv-SysX-ddVd-8pN7-hGZWhx
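In terms of commands, it was roughly this (a sketch from my lab; device names will differ, and growpart from cloud-utils is an alternative to the interactive fdisk steps):
Code:
# after growing the virtual disk in Hyper-V:
# 1. grow partition 1 of /dev/sdb to fill the new space
growpart /dev/sdb 1    # or delete/recreate sdb1 in fdisk at the same start sector
# 2. reboot so the kernel picks up the new partition table
reboot
# 3. tell LVM that the physical volume has grown
pvresize /dev/sdb1
# 4. verify the new free extents
pvdisplay /dev/sdb1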
The existing RDQMs are happily running just like before:
Code:
[root@t-mqhub1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 127G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
├─sda2 8:2 0 512M 0 part /boot
└─sda3 8:3 0 126G 0 part
├─vg_root-lv_root 253:0 0 44G 0 lvm /
├─vg_root-lv_swap 253:1 0 16G 0 lvm [SWAP]
├─vg_root-lv_home 253:2 0 512M 0 lvm /home
├─vg_root-lv_var 253:3 0 49G 0 lvm /var
└─vg_root-lv_tmp 253:4 0 1G 0 lvm /tmp
sdb 8:16 0 110G 0 disk
└─sdb1 8:17 0 110G 0 part
├─drbdpool-tmqhubha1_00 253:5 0 10G 0 lvm
│ └─drbd100 147:100 0 10G 0 disk /var/mqm/vols/tmqhubha1
└─drbdpool-tmqhubha3_00 253:6 0 10G 0 lvm
└─drbd101 147:101 0 10G 0 disk
[mqm@t-mqhub1 ~]$ rdqmstatus -m TMQHUBHA1
Node: t-mqhub1.md80.ch
Queue manager status: Running
CPU: 0.04%
Memory: 184MB
Queue manager file system: 87MB used, 9.8GB allocated [1%]
HA role: Primary
HA status: Normal
HA control: Enabled
HA current location: This node
HA preferred location: This node
HA floating IP interface: eth0:1
HA floating IP address: 192.168.178.91
Node: t-mqhub2.md80.ch
HA status: Normal
Node: t-mqhub3.md80.ch
HA status: Normal
Addendum: you have to do this on all 3 nodes, of course.
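If the disks are set up identically on each node, something like this saves a bit of typing (untested sketch; hostnames are from my lab, and it assumes the disks and partitions have already been grown and the nodes rebooted):
Code:
# resize the PV and check the result on every node in the HA group
for h in t-mqhub1 t-mqhub2 t-mqhub3; do
    ssh root@$h 'pvresize /dev/sdb1 && vgs drbdpool'
done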
Hope this helps.
Cheers,
Gerhard
ryddan
Posted: Thu Dec 17, 2020 4:32 am    Post subject:
Newbie
Joined: 10 Sep 2001    Posts: 9    Location: Sweden
Thanks for your help trying this.
Seems like it should work, then.
I normally add a new, empty disk to the volume group instead:
prepare the disk first with pvcreate /dev/sdX, then run vgextend VolumeGroup /dev/sdX.
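Something like this, that is (untested on RDQM; /dev/sdc is just an example device):
Code:
# initialise the new disk as an LVM physical volume
pvcreate /dev/sdc
# add it to the drbdpool volume group
vgextend drbdpool /dev/sdc
# confirm the extra free space
vgs drbdpool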
Have you also tried restarting one or more of the servers afterwards?
I guess I will have to test it in a lab setup first, too. What I would really like is for IBM to write instructions for this, if it is possible.
crashdog
Posted: Thu Dec 17, 2020 4:43 am    Post subject:
Voyager
Joined: 02 Apr 2017    Posts: 77
Yes, a reboot is no issue. I also set the preferred node and it switched over just fine:
Code:
[mqm@t-mqhub1 ~]$ rdqmadm -m TMQHUBHA1 -p -n t-mqhub2.md80.ch
The preferred replicated data node has been set to 't-mqhub2.md80.ch' for queue
manager 'TMQHUBHA1'.
[mqm@t-mqhub1 ~]$ dspmq
QMNAME(TMQHUBHA1) STATUS(Running elsewhere)
QMNAME(TMQHUBHA3) STATUS(Running elsewhere)
Code:
[mqm@t-mqhub2 ~]$ dspmq
QMNAME(TMQHUBHA1) STATUS(Running)
QMNAME(TMQHUBHA3) STATUS(Running elsewhere)
[mqm@t-mqhub2 ~]$ rdqmstatus -m TMQHUBHA1
Node: t-mqhub2.md80.ch
Queue manager status: Running
CPU: 0.06%
Memory: 181MB
Queue manager file system: 87MB used, 9.8GB allocated [1%]
HA role: Primary
HA status: Normal
HA control: Enabled
HA current location: This node
HA preferred location: This node
HA floating IP interface: eth0:1
HA floating IP address: 192.168.178.91
Node: t-mqhub1.md80.ch
HA status: Normal
Node: t-mqhub3.md80.ch
HA status: Normal
Maybe IBM considers this too common a task to document specifically. Or they may look at it as an OS task rather than something directly MQ related.
Cheers,
Gerhard