Exclusive locking with clvm
Eric Belhomme

Hello,
Here is the context: two Debian Lenny servers connected to a Fibre Channel SAN.
Both servers see the LUNs exported by the SAN.
Both servers run clvm, rebuilt against openais instead of cman.
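(For reference, a minimal sketch of the startup order this setup relies on, on each node; the init script path is an assumption and may differ on a custom build:)
/etc/init.d/openais start   # membership/locking layer must be up first
clvmd                       # then clvmd, which connects to openais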
openais seems to be working exactly as it should:
Apr 9 3:07:54.102914 [CLM ] CLM CONFIGURATION CHANGE
Apr 9 3:07:54.102934 [CLM ] New Configuration:
Apr 9 3:07:54.102964 [CLM ] r(0) ip(192.168.66.161)
Apr 9 3:07:54.102985 [CLM ] Members Left:
Apr 9 3:07:54.103008 [CLM ] Members Joined:
Apr 9 3:07:54.103140 [CLM ] CLM CONFIGURATION CHANGE
Apr 9 3:07:54.103163 [CLM ] New Configuration:
Apr 9 3:07:54.103190 [CLM ] r(0) ip(192.168.66.119)
Apr 9 3:07:54.103215 [CLM ] r(0) ip(192.168.66.161)
I created a PV like this (on one of the nodes):
pvcreate /dev/sdb
sdb being the volume mapped from the SAN
then a VG:
vgcreate -cy vg_lun1 /dev/sdb
and finally an LV:
lvcreate -n TEST -L 20G vg_lun1
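(Incidentally, a quick way to confirm that -cy really set the clustered flag on the VG; assuming a stock vgs, the attribute string should end with a 'c':)
vgs -o vg_name,vg_attr vg_lun1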
The problem:
I want exclusive access to the LVs, i.e. if
/dev/vg_lun1/TEST is mounted on one node, I must not be able to
mount it on another. But that is not the case: I can mount the LV
on both nodes simultaneously!
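(Worth noting: clvmd does not arbitrate mounts as such; what it provides is exclusive activation, so that the LV only has a device node on one node at a time. A sketch, assuming clvmd's locking is actually working:)
# on node A: activate the LV with a cluster-wide exclusive lock
lvchange -aey vg_lun1/TEST
# on node B, the same activation should then fail,
# and with no active device there is nothing to mount
lvchange -aey vg_lun1/TEST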
Yet openais does seem to be doing its job:
Apr 9 3:09:50.071141 [LCK ] LIB request: saLckResourceUnlock V_vg_lun1-1
Apr 9 3:09:50.071333 [LCK ] EXEC request: saLckResourceUnlock V_vg_lun1-1
Apr 9 3:09:50.071437 [LCK ] LIB request: saLckResourceClose V_vg_lun1-1
Apr 9 3:09:50.071548 [LCK ] EXEC request: saLckResourceClose V_vg_lun1-1
Apr 9 3:17:19.273458 [LCK ] EXEC request: saLckResourceOpen V_vg_lun1-2
Apr 9 3:17:19.274246 [LCK ] EXEC request: saLckResourceLock V_vg_lun1-2
Apr 9 3:17:19.320474 [LCK ] EXEC request: saLckResourceOpen zSkX3CNc3eZlawIaf8TwWhXASSDW8E2MoQYt6ofI9DWKWDssLFYpQmViefdClzSz-1
Apr 9 3:17:19.321269 [LCK ] EXEC request: saLckResourceLock zSkX3CNc3eZlawIaf8TwWhXASSDW8E2MoQYt6ofI9DWKWDssLFYpQmViefdClzSz-1
Apr 9 3:17:19.321818 [LCK ] EXEC request: saLckResourceOpen zSkX3CNc3eZlawIaf8TwWhXASSDW8E2MoQYt6ofI9DWKWDssLFYpQmViefdClzSz-2
Apr 9 3:17:19.322666 [LCK ] EXEC request: saLckResourceLock zSkX3CNc3eZlawIaf8TwWhXASSDW8E2MoQYt6ofI9DWKWDssLFYpQmViefdClzSz-2
Apr 9 3:17:19.324213 [LCK ] EXEC request: saLckResourceUnlock V_vg_lun1-2
Apr 9 3:17:19.324562 [LCK ] EXEC request: saLckResourceClose V_vg_lun1-2
Apr 9 3:18:30.854003 [LCK ] EXEC request: saLckResourceOpen V_vg_lun1-1
Apr 9 3:18:30.854743 [LCK ] EXEC request: saLckResourceLock V_vg_lun1-1
Apr 9 3:18:30.860982 [LCK ] EXEC request: saLckResourceUnlock V_vg_lun1-1
Apr 9 3:18:30.861380 [LCK ] EXEC request: saLckResourceClose V_vg_lun1-1
The open/close and lock/unlock calls are clearly there (the V_vg_lun1 resources look like transient VG metadata locks; the long UUID-named resources appear to be the per-LV activation locks).
But if I start clvmd in debug mode:
# clvmd -d 1
CLVMD[218cf770]: Apr 9 03:45:04 CLVMD started
CLVMD[218cf770]: Apr 9 03:45:04 Our local node id is -1062714719
CLVMD[218cf770]: Apr 9 03:45:04 Add_internal_client, fd = 7
CLVMD[218cf770]: Apr 9 03:45:04 Connected to OpenAIS
CLVMD[218cf770]: Apr 9 03:45:04 Cluster ready, doing some more initialisation
CLVMD[218cf770]: Apr 9 03:45:04 starting LVM thread
CLVMD[40800950]: Apr 9 03:45:04 LVM thread function started
CLVMD[218cf770]: Apr 9 03:45:04 clvmd ready for work
CLVMD[218cf770]: Apr 9 03:45:04 Using timeout of 60 seconds
CLVMD[218cf770]: Apr 9 03:45:04 confchg callback. 1 joined, 0 left, 2 members
File descriptor 4 left open
File descriptor 5 left open
File descriptor 6 left open
WARNING: Locking disabled. Be careful! This could corrupt your metadata.
CLVMD[40800950]: Apr 9 03:45:04 LVM thread waiting for work
Do you see the WARNING??? Yet my LVM config _should_ be correct:
global {
    locking_type = 3
}
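(Two sanity checks that might help, assuming standard paths: confirm the tools actually see locking_type = 3, and confirm clvmd reads /etc/lvm/lvm.conf rather than some build-time prefix left over from the recompile:)
lvm dumpconfig | grep locking_type
strace -f -e open clvmd -d 1 2>&1 | grep lvm.conf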
Any ideas?
--
Rico
By the way, what are you using as the FS? Because normally, the FS should
complain and simply refuse to mount...
--
My assertion that we can do better with computer languages is a
persistent belief and fond hope, but you'll note I don't actually claim
to be either rational or right. Except when it's convenient.
Larry Wall
Here it's ext3, but it could just as well be swapfs or xfs.
--
Rico