Welcome

All the steps described here were tested in production environments.

Wednesday, July 14, 2010

Zone commands

Checking a zone

zoneadm list -cv (lists the zones and their states)
zoneadm -z zona1 ready (takes the zone from installed to ready)
zoneadm -z zona1 boot (takes the zone from ready to running)
zlogin -C zona1 (console login to the zone to bring up services; after this you can log in via telnet or ssh)
Note: the first time, several configuration values must be entered (time zone, hostname, etc.)
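After booting, a quick way to spot zones that are not yet running is to filter the `zoneadm list -cv` output. A minimal sketch; the here-doc stands in for the real command output, and the zone names are illustrative:

```shell
# List zones whose state is not "running".
# On a live system, replace the here-doc function with: zoneadm list -cv
zoneadm_sample() {
cat <<'EOF'
  ID NAME   STATUS     PATH
   0 global running    /
   - zona1  installed  /export/zona1
EOF
}
zoneadm_sample | awk 'NR > 1 && $3 != "running" { print $2, "->", $3 }'
```

This prints one line per non-running zone, e.g. `zona1 -> installed`.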

Other zone checks

zoneadm -z zona1 reboot (works only if the zone is running)
zoneadm -z zona1 halt
zoneadm -z zona1 verify
zoneadm -z zona1 uninstall (uninstall the zone)
zoneadm -z zona1 uninstall -F (force-uninstall the zone)
zonecfg -z zona1 delete (the zone must be uninstalled first)
zonecfg -z zona1 info

Backing up a zone configuration

zonecfg -z zona1 export >zona1.config
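The export above can be generated for every configured zone at once. A sketch that turns each non-global zone name from `zoneadm list -cv` into an export command; the commands are printed rather than executed so they can be reviewed first, and the here-doc stands in for live output:

```shell
# Emit one `zonecfg export` command per non-global zone.
# On a live system, replace the here-doc function with: zoneadm list -cv
list_zones() {
cat <<'EOF'
  ID NAME   STATUS     PATH
   0 global running    /
   - zona1  installed  /export/zona1
EOF
}
list_zones | awk 'NR > 1 && $2 != "global" \
    { printf "zonecfg -z %s export >%s.config\n", $2, $2 }'
```

Pipe the result through `sh` once it looks right.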

Restoring a zone

Case 1: the configuration was lost

zonecfg -z zona1 -f zona1.config
zoneadm list -cv (the zone should now show as configured)
zoneadm -z zona1 install (the zone should now show as installed) (then continue as in Case 2)
rm /export/zona1/root/etc/.UNCONFIGURED (avoids the configuration questions on first console login)

Case 2: the zone is in the installed state

ufsrestore -ivf zona1.DMP
mv dev /export/zona1/
mv root /export/zona1/
zoneadm list -cv
zoneadm -z zona1 ready
zoneadm list -cv
zoneadm -z zona1 boot
zoneadm list -cv
telnet 10.71.100.99
ssh -l root 10.71.100.99
zlogin -C zona1

Removing / adding an IP in a zone

# zonecfg -z zona1
zonecfg:zona1> info
zonepath: /export/zona1
autoboot: true
net:
address: 10.27.33.43
physical: ce2
zonecfg:zona1> remove net address=10.27.33.43
zonecfg:zona1> info
zonepath: /export/zona1
zonecfg:zona1> add net
zonecfg:zona1:net> set address=10.27.33.49
zonecfg:zona1:net> set physical=ce2
zonecfg:zona1:net> end
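The same change can be scripted instead of typed interactively, since zonecfg accepts its subcommands as a single quoted argument. A sketch that builds the one-liner from the addresses in the session above and prints it for review (drop the `echo` to actually run it on a Solaris host):

```shell
# Non-interactive form of the interactive session above.
old_ip=10.27.33.43
new_ip=10.27.33.49
nic=ce2
# Printed for review; remove the echo to execute it on a live system.
echo zonecfg -z zona1 \
  "\"remove net address=${old_ip}; add net; set address=${new_ip}; set physical=${nic}; end\""
```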

Step by step: creating zones

1) format the disks, using the whole disk as slice 0
2) create the metadevices
[ SKOL ] / # metainit d60 3 1 c2t50060E800456EE02d0s0 1 c2t50060E800456EE02d1s0 1 c2t50060E800456EE02d2s0
d60: Concat/Stripe is setup
[ SKOL ] / # metastat d60
d60: Concat/Stripe
Size: 100346880 blocks (47 GB)
Stripe 0:
Device Start Block Dbase Reloc
/dev/dsk/c2t50060E800456EE02d0s0 0 No Yes
Stripe 1:
Device Start Block Dbase Reloc
/dev/dsk/c2t50060E800456EE02d1s0 7680 No Yes
Stripe 2:
Device Start Block Dbase Reloc
/dev/dsk/c2t50060E800456EE02d2s0 7680 No Yes

Device Relocation Information:
Device Reloc Device ID
/dev/dsk/c2t50060E800456EE02d0 Yes id1,ssd@n60060e800456ee00000056ee00000020
/dev/dsk/c2t50060E800456EE02d1 Yes id1,ssd@n60060e800456ee00000056ee00000021
/dev/dsk/c2t50060E800456EE02d2 Yes id1,ssd@n60060e800456ee00000056ee00000022
[ SKOL ] / # metainit d61 -p d60 6g
d61: Soft Partition is setup
[ SKOL ] / # metainit d62 -p d60 10g
d62: Soft Partition is setup
[ SKOL ] / # metainit d63 -p d60 30g
d63: Soft Partition is setup
[ SKOL ] / #
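The three soft partitions (6 + 10 + 30 GB) must fit inside the 47 GB concat d60. A quick arithmetic check, using the 512-byte block count that metastat reported:

```shell
# d60 reports 100346880 blocks of 512 bytes; the soft partitions take 46 GB.
total_gb=$((100346880 * 512 / 1024 / 1024 / 1024))
allocated_gb=$((6 + 10 + 30))
echo "concat d60: ${total_gb} GB, soft partitions: ${allocated_gb} GB"
```

This prints 47 GB total against 46 GB allocated, so the layout fits with a little headroom.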

Then I create the file systems
[ SKOL ] / # newfs /dev/md/rdsk/d61
newfs: construct a new file system /dev/md/rdsk/d61: (y/n)? y
[ SKOL ] / # newfs /dev/md/rdsk/d62
newfs: construct a new file system /dev/md/rdsk/d62: (y/n)? y
[ SKOL ] / # newfs /dev/md/rdsk/d63
newfs: construct a new file system /dev/md/rdsk/d63: (y/n)? y
[ SKOL ] / #

mkdir -p /export/zona1
cd /export/
chmod 700 zona1
mkdir /u00
mkdir /u01
mount /export/zona1

zonecfg -z zona1 -f /usr/scripts/SOL10/CREAR.ZONAS/crea.zona1.SKOL.ksh
[ SKOL ] / # zoneadm list -cv
ID NAME STATUS PATH
0 global running /
- zona1 configured /export/zona1
[ SKOL ] / #
[ SKOL ] /usr/scripts/SOL10/CREAR.ZONAS # chmod 700 /export/zona1
[ SKOL ] /usr/scripts/SOL10/CREAR.ZONAS # zoneadm list -cv
[ SKOL ] /usr/scripts/SOL10/CREAR.ZONAS # ls -ld /export/zona1
drwx------ 3 root root 512 Apr 10 08:53 /export/zona1
[ SKOL ] /usr/scripts/SOL10/CREAR.ZONAS # zoneadm -z zona1 install
Preparing to install zone .
Checking file system on device to be mounted at
Checking file system on device to be mounted at
Creating list of files to copy from the global zone.
Copying 124550 files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize 1021 packages on the zone.
Initializing package 49 of 1021 : percent complete: 4%
[ SKOL ] /usr/scripts/SOL10/CREAR.ZONAS # zoneadm list -cv
ID NAME STATUS PATH
0 global running /
- zona1 installed /export/zona1
[ SKOL ] /usr/scripts/SOL10/CREAR.ZONAS # zoneadm -z zona1 ready
[ SKOL ] /usr/scripts/SOL10/CREAR.ZONAS # zoneadm list -cv
ID NAME STATUS PATH
0 global running /
1 zona1 ready /export/zona1
[ SKOL ] /usr/scripts/SOL10/CREAR.ZONAS #

[ SKOL ] /usr/scripts/SOL10/CREAR.ZONAS # zoneadm -z zona1 boot

This is the script I use to create the zone:
create -b
set zonepath=/export/zona1
set autoboot=true
add fs
set dir=/bea
set special=/dev/md/dsk/d92
set raw=/dev/md/rdsk/d92
set type=ufs
end
add fs
set dir=/u01
set special=/dev/md/dsk/d99
set raw=/dev/md/rdsk/d99
set type=ufs
end
add fs
set dir=/u02
set special=/dev/md/dsk/d100
set raw=/dev/md/rdsk/d100
set type=ufs
end
add net
set address=10.67.133.144
set physical=ce2
end

Using fssnap to back up zones

I normally use ufsdump, but this is another valid option.
Here is an example.

[asun0001] /space # fssnap -F ufs -o bs=/space /

/dev/fssnap/0
[asun0001] /space # ls -lh /space
total 3442
drwxrwxrwx 2 root root 8.0K Jul 3 12:02 lost+found
-rw------- 1 root root 11G Oct 10 11:47 snapshot0
[asun0001] /space #
[asun0001] /space # fssnap -i
0 /
[asun0001] /space # /usr/lib/fs/ufs/fssnap -i /
Snapshot number : 0
Block Device : /dev/fssnap/0
Raw Device : /dev/rfssnap/0
Mount point : /
Device state : idle
Backing store path : /space/snapshot0
Backing store size : 1472 KB
Maximum backing store size : Unlimited
Snapshot create time : Tue Oct 10 11:45:11 2006
Copy-on-write granularity : 32 KB

[asun0001] / # mount -F ufs -o ro /dev/fssnap/0 /space/KKK
[asun0001] / # df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c0t0d0s0 11G 7.3G 3.1G 71% /
/dev/dsk/c0t0d0s5 5.8G 1.2G 4.5G 22% /var
swap 3.3G 40K 3.3G 1% /tmp
swap 3.3G 72K 3.3G 1% /var/run
/dev/dsk/c0t1d0s0 9.8G 4.4G 5.3G 46% /export/big-zone
/dev/dsk/c0t0d0s7 49G 52M 49G 1% /space
/dev/fssnap/0 11G 7.3G 3.1G 71% /space/KKK
[asun0001] / #

[asun0001] / # zonecfg -z small-zone export >/space/small-zone.config
[asun0001] / # zonecfg -z big-zone export >/space/big-zone.config
[asun0001] / # zoneadm list -cv
ID NAME STATUS PATH
0 global running /
4 big-zone running /export/big-zone
- small-zone configured /export/small-zone
[asun0001] / #
[asun0001] / # zlogin -S small-zone init 0
[asun0001] / # zoneadm list -cv
ID NAME STATUS PATH
0 global running /
4 big-zone running /export/big-zone
- small-zone installed /export/small-zone
[asun0001] / #
[asun0001] /export # zoneadm -z small-zone uninstall -F
[asun0001] /export # zonecfg -z small-zone delete -F
[asun0001] /export #
[asun0001] / # zoneadm list -cv
ID NAME STATUS PATH
0 global running /
4 big-zone running /export/big-zone
[asun0001] / #
[asun0001] / # zonecfg -z small-zone -f /space/small-zone.config
[asun0001] / # zoneadm list -cv
ID NAME STATUS PATH
0 global running /
4 big-zone running /export/big-zone
- small-zone configured /export/small-zone
[asun0001] / # zoneadm -z small-zone verify
[asun0001] / # zoneadm list -cv
ID NAME STATUS PATH
0 global running /
4 big-zone running /export/big-zone
- small-zone configured /export/small-zone
[asun0001] / #

[asun0001] / # zoneadm -z small-zone install
Preparing to install zone .
Creating list of files to copy from the global zone.
Copying 3186 files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize 1605 packages on the zone.
Initialized 1605 packages on zone.
Zone is initialized.
Installation of 2 packages was skipped.
Installation of these packages generated warnings:
The file contains a log of the zone installation.
[asun0001] / #

Storage swap / mirror / unmirror with Veritas

The goal is to replace the IBM Shark SAN storage with an IBM DS8000.
To do this, I add 4 disks of equal capacity to the existing disk group, named BGT.
First I label them with format, then run vxdctl enable so that Veritas sees them.
Then, with vxdiskadm, I add those disks to the existing BGT disk group.
Next I mirror the volumes with vxassist and check the status with vxtask list.
When that finishes, I break the mirror and remove the references to the old disks from Veritas.
The whole task took about 40 minutes; the total space used by the volumes/file systems was 120 GB.
[SKOL] /usr/scripts # format
Searching for disks...done

c10t6005076306FFC600000000000000F203d0: configured with capacity of 65.98GB
c10t6005076306FFC600000000000000F204d0: configured with capacity of 65.98GB
c10t6005076306FFC600000000000000F205d0: configured with capacity of 65.98GB
c10t6005076306FFC600000000000000F300d0: configured with capacity of 65.98GB


AVAILABLE DISK SELECTIONS:
0. c0t8d0
/pci@7c,700000/pci@1/pci@1/scsi@2/sd@8,0
1. c0t9d0
/pci@7c,700000/pci@1/pci@1/scsi@2/sd@9,0
2. c0t10d0
/pci@7c,700000/pci@1/pci@1/scsi@2/sd@a,0
3. c0t11d0
/pci@7c,700000/pci@1/pci@1/scsi@2/sd@b,0
4. c2t5006048C52A66167d0
/pci@7c,600000/SUNW,qlc@1/fp@0,0/ssd@w5006048c52a66167,0
5. c3t5005076300C9AF0Dd0
/pci@7c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076300c9af0d,0
6. c3t5005076300C9AF0Dd1
/pci@7c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076300c9af0d,1
7. c3t5005076300C9AF0Dd2
/pci@7c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076300c9af0d,2
8. c3t5005076300C9AF0Dd3
/pci@7c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076300c9af0d,3
9. c3t5005076300C9AF0Dd4
/pci@7c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076300c9af0d,4
10. c3t5005076300C9AF0Dd5
/pci@7c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w5005076300c9af0d,5
Specify disk (enter its number): 261
selecting c10t6005076306FFC600000000000000F203d0
Disk not labeled. Label it now? y
Specify disk (enter its number): 262
selecting c10t6005076306FFC600000000000000F204d0
Disk not labeled. Label it now? y
Specify disk (enter its number)[262]: 263
selecting c10t6005076306FFC600000000000000F205d0
Disk not labeled. Label it now? y
Specify disk (enter its number)[263]: 264
selecting c10t6005076306FFC600000000000000F300d0
Disk not labeled. Label it now? y
[SKOL] # vxdctl enable
[SKOL] # vxdiskadm
Volume Manager Support Operations
Menu: VolumeManager/Disk

1 Add or initialize one or more disks
2 Encapsulate one or more disks
3 Remove a disk
4 Remove a disk for replacement
5 Replace a failed or removed disk
6 Mirror volumes on a disk
7 Move volumes from a disk
8 Enable access to (import) a disk group
9 Remove access to (deport) a disk group
10 Enable (online) a disk device
11 Disable (offline) a disk device
12 Mark a disk as a spare for a disk group
13 Turn off the spare flag on a disk
14 Unrelocate subdisks back to a disk
15 Exclude a disk from hot-relocation use
16 Make a disk available for hot-relocation use
17 Prevent multipathing/Suppress devices from VxVM's view
18 Allow multipathing/Unsuppress devices from VxVM's view
19 List currently suppressed/non-multipathed devices
20 Change the disk naming scheme
21 Get the newly connected/zoned disks in VxVM view
22 Change/Display the default disk layouts
23 Mark a disk as allocator-reserved for a disk group
24 Turn off the allocator-reserved flag on a disk
list List disk information

? Display help about menu
?? Display help about the menuing system
q Exit from menus

Select an operation to perform: 1

Add or initialize disks
Menu: VolumeManager/Disk/AddDisks
........
.......
Select disk devices to add: [,all,list,q,?] IBM_DS8x000_3
Here is the disk selected. Output format: [Device_Name]

IBM_DS8x000_3

Continue operation? [y,n,q,?] (default: y) y
Which disk group [,none,list,q,?] (default: none) BGT
Use a default disk name for the disk? [y,n,q,?] (default: y) y
Add disk as a spare disk for BGT? [y,n,q,?] (default: n) n
Exclude disk from hot-relocation use? [y,n,q,?] (default: n) n
Add site tag to disk? [y,n,q,?] (default: n) n
The selected disks will be added to the disk group BGT with
default disk names.
IBM_DS8x000_3
Continue with operation? [y,n,q,?] (default: y) y
IBM_DS8x000_3
Encapsulate this device? [y,n,q,?] (default: y) n
IBM_DS8x000_3
Instead of encapsulating, initialize? [y,n,q,?] (default: n) y
Initializing device IBM_DS8x000_3.
Enter desired private region length
[,q,?] (default: 65536)
Adding disk device IBM_DS8x000_3 to disk group BGT with disk
name BGT05.
Add or initialize other disks? [y,n,q,?] (default: n) y
Select disk devices to add: [,all,list,q,?] IBM_DS8x000_4
Here is the disk selected. Output format: [Device_Name]

IBM_DS8x000_4

Continue operation? [y,n,q,?] (default: y)
Which disk group [,none,list,q,?] (default: none) BGT
Use a default disk name for the disk? [y,n,q,?] (default: y)
Add disk as a spare disk for BGT? [y,n,q,?] (default: n)
Exclude disk from hot-relocation use? [y,n,q,?] (default: n)
Add site tag to disk? [y,n,q,?] (default: n)
IBM_DS8x000_4
Continue with operation? [y,n,q,?] (default: y)
IBM_DS8x000_4
Encapsulate this device? [y,n,q,?] (default: y) n
IBM_DS8x000_4
Instead of encapsulating, initialize? [y,n,q,?] (default: n) y
Initializing device IBM_DS8x000_4.
Enter desired private region length
[,q,?] (default: 65536)
Adding disk device IBM_DS8x000_4 to disk group BGT with disk
name BGT06.
Add or initialize other disks? [y,n,q,?] (default: n) y
Select disk devices to add: [,all,list,q,?] IBM_DS8x000_5
IBM_DS8x000_5
Which disk group [,none,list,q,?] (default: none) BGT
Use a default disk name for the disk? [y,n,q,?] (default: y)
Add disk as a spare disk for BGT? [y,n,q,?] (default: n)
Exclude disk from hot-relocation use? [y,n,q,?] (default: n)
Add site tag to disk? [y,n,q,?] (default: n)
IBM_DS8x000_5
Continue with operation? [y,n,q,?] (default: y)
IBM_DS8x000_5
Encapsulate this device? [y,n,q,?] (default: y) n
IBM_DS8x000_5
Instead of encapsulating, initialize? [y,n,q,?] (default: n) y
Initializing device IBM_DS8x000_5.
Enter desired private region length
[,q,?] (default: 65536)
Adding disk device IBM_DS8x000_5 to disk group BGT with disk
name BGT07.
Add or initialize other disks? [y,n,q,?] (default: n) y
Select disk devices to add: [,all,list,q,?] IBM_DS8x000_6
IBM_DS8x000_6
Continue operation? [y,n,q,?] (default: y)
Which disk group [,none,list,q,?] (default: none) BGT
Use a default disk name for the disk? [y,n,q,?] (default: y)
Add disk as a spare disk for BGT? [y,n,q,?] (default: n)
Exclude disk from hot-relocation use? [y,n,q,?] (default: n)
Add site tag to disk? [y,n,q,?] (default: n)
IBM_DS8x000_6
Continue with operation? [y,n,q,?] (default: y)
IBM_DS8x000_6
Encapsulate this device? [y,n,q,?] (default: y) n
IBM_DS8x000_6
Instead of encapsulating, initialize? [y,n,q,?] (default: n) y
Initializing device IBM_DS8x000_6.
Enter desired private region length
[,q,?] (default: 65536)
Adding disk device IBM_DS8x000_6 to disk group BGT with disk
name BGT08.
Add or initialize other disks? [y,n,q,?] (default: n) n
Select an operation to perform: q
Goodbye.
[SKOL] # Now I check the new disks added to the DG
[SKOL] # vxprint -ht -g BGT
DG NAME NCONFIG NLOG MINORS GROUP-ID
ST NAME STATE DM_CNT SPARE_CNT APPVOL_CNT
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
CO NAME CACHEVOL KSTATE STATE
VT NAME RVG KSTATE STATE NVOLUME
V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
SC NAME PLEX CACHE DISKOFFS LENGTH [COL/]OFF DEVICE MODE
DC NAME PARENTVOL LOGVOL
SP NAME SNAPVOL DCO
EX NAME ASSOC VC PERMS MODE STATE
SR NAME KSTATE

dg BGT default default 49000 1242237191.240.SKOL

dm BGT01 IBM_SHARK0_121 auto 65536 128794368 -
dm BGT02 IBM_SHARK0_122 auto 65536 128794368 -
dm BGT03 IBM_SHARK0_123 auto 65536 128794368 -
dm BGT04 IBM_SHARK0_124 auto 65536 128794368 -
dm BGT05 IBM_DS8x000_3 auto 65536 138313472 -
dm BGT06 IBM_DS8x000_4 auto 65536 138313472 -
dm BGT07 IBM_DS8x000_5 auto 65536 138313472 -
dm BGT08 IBM_DS8x000_6 auto 65536 138313472 -

v vol01 - ENABLED ACTIVE 140392448 SELECT vol01-01 fsgen
pl vol01-01 vol01 ENABLED ACTIVE 140392448 STRIPE 4/128 RW
sd BGT01-01 vol01-01 BGT01 0 35098112 0/0 IBM_SHARK0_121 ENA
sd BGT02-01 vol01-01 BGT02 0 35098112 1/0 IBM_SHARK0_122 ENA
sd BGT03-01 vol01-01 BGT03 0 35098112 2/0 IBM_SHARK0_123 ENA
sd BGT04-01 vol01-01 BGT04 0 35098112 3/0 IBM_SHARK0_124 ENA

v vol02 - ENABLED ACTIVE 374784000 SELECT vol02-01 fsgen
pl vol02-01 vol02 ENABLED ACTIVE 374784000 STRIPE 4/128 RW
sd BGT01-02 vol02-01 BGT01 35098112 93696000 0/0 IBM_SHARK0_121 ENA
sd BGT02-02 vol02-01 BGT02 35098112 93696000 1/0 IBM_SHARK0_122 ENA
sd BGT03-02 vol02-01 BGT03 35098112 93696000 2/0 IBM_SHARK0_123 ENA
sd BGT04-02 vol02-01 BGT04 35098112 93696000 3/0 IBM_SHARK0_124 ENA

[SKOL] /usr/scripts # vxassist -g BGT maxsize BGT05 BGT06 BGT07 BGT08
Maximum volume size: 553252864 (270143Mb)
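The maxsize output can be cross-checked: VxVM reports sizes in 512-byte sectors, so 2048 sectors make one MB and 553252864 sectors should match the 270143 MB it printed:

```shell
# VxVM sector (512 bytes) to MB conversion for the maxsize output above.
sectors=553252864
echo "$((sectors / 2048)) MB"
```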
########## Start mirroring volumes vol01 and vol02
[SKOL] /usr/scripts # vxassist -g BGT mirror vol01 BGT05 BGT06 BGT07 BGT08
[SKOL] /usr/scripts # vxassist -g BGT mirror vol02 BGT05 BGT06 BGT07 BGT08
[SKOL] /usr/scripts # vxprint -ht -g BGT
DG NAME NCONFIG NLOG MINORS GROUP-ID
ST NAME STATE DM_CNT SPARE_CNT APPVOL_CNT
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
CO NAME CACHEVOL KSTATE STATE
VT NAME RVG KSTATE STATE NVOLUME
V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
SC NAME PLEX CACHE DISKOFFS LENGTH [COL/]OFF DEVICE MODE
DC NAME PARENTVOL LOGVOL
SP NAME SNAPVOL DCO
EX NAME ASSOC VC PERMS MODE STATE
SR NAME KSTATE

dg BGT default default 49000 1242237191.240.SKOL

dm BGT01 IBM_SHARK0_121 auto 65536 128794368 -
dm BGT02 IBM_SHARK0_122 auto 65536 128794368 -
dm BGT03 IBM_SHARK0_123 auto 65536 128794368 -
dm BGT04 IBM_SHARK0_124 auto 65536 128794368 -
dm BGT05 IBM_DS8x000_3 auto 65536 138313472 -
dm BGT06 IBM_DS8x000_4 auto 65536 138313472 -
dm BGT07 IBM_DS8x000_5 auto 65536 138313472 -
dm BGT08 IBM_DS8x000_6 auto 65536 138313472 -

v vol01 - ENABLED ACTIVE 140392448 SELECT - fsgen
pl vol01-01 vol01 ENABLED ACTIVE 140392448 STRIPE 4/128 RW
sd BGT01-01 vol01-01 BGT01 0 35098112 0/0 IBM_SHARK0_121 ENA
sd BGT02-01 vol01-01 BGT02 0 35098112 1/0 IBM_SHARK0_122 ENA
sd BGT03-01 vol01-01 BGT03 0 35098112 2/0 IBM_SHARK0_123 ENA
sd BGT04-01 vol01-01 BGT04 0 35098112 3/0 IBM_SHARK0_124 ENA
pl vol01-02 vol01 ENABLED ACTIVE 140392448 STRIPE 4/128 RW
sd BGT05-01 vol01-02 BGT05 0 35098112 0/0 IBM_DS8x000_3 ENA
sd BGT06-01 vol01-02 BGT06 0 35098112 1/0 IBM_DS8x000_4 ENA
sd BGT07-01 vol01-02 BGT07 0 35098112 2/0 IBM_DS8x000_5 ENA
sd BGT08-01 vol01-02 BGT08 0 35098112 3/0 IBM_DS8x000_6 ENA

v vol02 - ENABLED ACTIVE 374784000 SELECT - fsgen
pl vol02-01 vol02 ENABLED ACTIVE 374784000 STRIPE 4/128 RW
sd BGT01-02 vol02-01 BGT01 35098112 93696000 0/0 IBM_SHARK0_121 ENA
sd BGT02-02 vol02-01 BGT02 35098112 93696000 1/0 IBM_SHARK0_122 ENA
sd BGT03-02 vol02-01 BGT03 35098112 93696000 2/0 IBM_SHARK0_123 ENA
sd BGT04-02 vol02-01 BGT04 35098112 93696000 3/0 IBM_SHARK0_124 ENA
pl vol02-02 vol02 ENABLED ACTIVE 374784000 STRIPE 4/128 RW
sd BGT05-02 vol02-02 BGT05 35098112 93696000 0/0 IBM_DS8x000_3 ENA
sd BGT06-02 vol02-02 BGT06 35098112 93696000 1/0 IBM_DS8x000_4 ENA
sd BGT07-02 vol02-02 BGT07 35098112 93696000 2/0 IBM_DS8x000_5 ENA
sd BGT08-02 vol02-02 BGT08 35098112 93696000 3/0 IBM_DS8x000_6 ENA
######### Break the mirror off the old disks
[SKOL] # vxplex -g BGT -o rm dis vol01-01
[SKOL] # vxplex -g BGT -o rm dis vol02-01
[SKOL] # vxprint -ht -g BGT
DG NAME NCONFIG NLOG MINORS GROUP-ID
ST NAME STATE DM_CNT SPARE_CNT APPVOL_CNT
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
CO NAME CACHEVOL KSTATE STATE
VT NAME RVG KSTATE STATE NVOLUME
V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
SC NAME PLEX CACHE DISKOFFS LENGTH [COL/]OFF DEVICE MODE
DC NAME PARENTVOL LOGVOL
SP NAME SNAPVOL DCO
EX NAME ASSOC VC PERMS MODE STATE
SR NAME KSTATE

dg BGT default default 49000 1242237191.240.SKOL

dm BGT01 IBM_SHARK0_121 auto 65536 128794368 -
dm BGT02 IBM_SHARK0_122 auto 65536 128794368 -
dm BGT03 IBM_SHARK0_123 auto 65536 128794368 -
dm BGT04 IBM_SHARK0_124 auto 65536 128794368 -
dm BGT05 IBM_DS8x000_3 auto 65536 138313472 -
dm BGT06 IBM_DS8x000_4 auto 65536 138313472 -
dm BGT07 IBM_DS8x000_5 auto 65536 138313472 -
dm BGT08 IBM_DS8x000_6 auto 65536 138313472 -

v vol01 - ENABLED ACTIVE 140392448 SELECT vol01-02 fsgen
pl vol01-02 vol01 ENABLED ACTIVE 140392448 STRIPE 4/128 RW
sd BGT05-01 vol01-02 BGT05 0 35098112 0/0 IBM_DS8x000_3 ENA
sd BGT06-01 vol01-02 BGT06 0 35098112 1/0 IBM_DS8x000_4 ENA
sd BGT07-01 vol01-02 BGT07 0 35098112 2/0 IBM_DS8x000_5 ENA
sd BGT08-01 vol01-02 BGT08 0 35098112 3/0 IBM_DS8x000_6 ENA

v vol02 - ENABLED ACTIVE 374784000 SELECT vol02-02 fsgen
pl vol02-02 vol02 ENABLED ACTIVE 374784000 STRIPE 4/128 RW
sd BGT05-02 vol02-02 BGT05 35098112 93696000 0/0 IBM_DS8x000_3 ENA
sd BGT06-02 vol02-02 BGT06 35098112 93696000 1/0 IBM_DS8x000_4 ENA
sd BGT07-02 vol02-02 BGT07 35098112 93696000 2/0 IBM_DS8x000_5 ENA
sd BGT08-02 vol02-02 BGT08 35098112 93696000 3/0 IBM_DS8x000_6 ENA
##### Remove the old disks' references from the BGT DG, leaving only the new ones
[SKOL] # vxdg -g BGT rmdisk BGT01
[SKOL] # vxdg -g BGT rmdisk BGT02
[SKOL] # vxdg -g BGT rmdisk BGT03
[SKOL] # vxdg -g BGT rmdisk BGT04
[SKOL] /usr/scripts # vxprint -ht -g BGT
DG NAME NCONFIG NLOG MINORS GROUP-ID
ST NAME STATE DM_CNT SPARE_CNT APPVOL_CNT
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
CO NAME CACHEVOL KSTATE STATE
VT NAME RVG KSTATE STATE NVOLUME
V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
SC NAME PLEX CACHE DISKOFFS LENGTH [COL/]OFF DEVICE MODE
DC NAME PARENTVOL LOGVOL
SP NAME SNAPVOL DCO
EX NAME ASSOC VC PERMS MODE STATE
SR NAME KSTATE

dg BGT default default 49000 1242237191.240.SKOL

dm BGT05 IBM_DS8x000_3 auto 65536 138313472 -
dm BGT06 IBM_DS8x000_4 auto 65536 138313472 -
dm BGT07 IBM_DS8x000_5 auto 65536 138313472 -
dm BGT08 IBM_DS8x000_6 auto 65536 138313472 -

v vol01 - ENABLED ACTIVE 140392448 SELECT vol01-02 fsgen
pl vol01-02 vol01 ENABLED ACTIVE 140392448 STRIPE 4/128 RW
sd BGT05-01 vol01-02 BGT05 0 35098112 0/0 IBM_DS8x000_3 ENA
sd BGT06-01 vol01-02 BGT06 0 35098112 1/0 IBM_DS8x000_4 ENA
sd BGT07-01 vol01-02 BGT07 0 35098112 2/0 IBM_DS8x000_5 ENA
sd BGT08-01 vol01-02 BGT08 0 35098112 3/0 IBM_DS8x000_6 ENA

v vol02 - ENABLED ACTIVE 374784000 SELECT vol02-02 fsgen
pl vol02-02 vol02 ENABLED ACTIVE 374784000 STRIPE 4/128 RW
sd BGT05-02 vol02-02 BGT05 35098112 93696000 0/0 IBM_DS8x000_3 ENA
sd BGT06-02 vol02-02 BGT06 35098112 93696000 1/0 IBM_DS8x000_4 ENA
sd BGT07-02 vol02-02 BGT07 35098112 93696000 2/0 IBM_DS8x000_5 ENA
sd BGT08-02 vol02-02 BGT08 35098112 93696000 3/0 IBM_DS8x000_6 ENA
[SKOL] # df -h|grep BGT
/dev/vx/dsk/BGT/vol01 67G 34G 32G 52% /carga
/dev/vx/dsk/BGT/vol02 179G 96G 82G 54% /interfaz
# exit

Tuesday, July 13, 2010

A bit of ZFS

[nuve] / # df -h
Filesystem size used avail capacity Mounted on
rpool/ROOT/s10x_u6wos_07b 67G 3.4G 62G 6% /
rpool/export 67G 19K 62G 1% /export
rpool/export/home 67G 18K 62G 1% /export/home
rpool 67G 35K 62G 1% /rpool
[nuve] / # zfs create rpool/SOFT
[nuve] / # df -h
Filesystem size used avail capacity Mounted on
rpool/ROOT/s10x_u6wos_07b 67G 3.4G 62G 6% /
rpool/export 67G 19K 62G 1% /export
rpool/export/home 67G 18K 62G 1% /export/home
rpool 67G 35K 62G 1% /rpool
rpool/SOFT 67G 18K 62G 1% /rpool/SOFT
[nuve] / # zfs set quota=10G rpool/SOFT
[nuve] / # df -h
Filesystem size used avail capacity Mounted on
rpool/ROOT/s10x_u6wos_07b 67G 3.4G 62G 6% /
rpool/export 67G 19K 62G 1% /export
rpool/export/home 67G 18K 62G 1% /export/home
rpool 67G 36K 62G 1% /rpool
rpool/SOFT 10G 18K 10G 1% /rpool/SOFT
[nuve] / #

[nuve] / # zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 6.56G 60.4G 36.5K /rpool
rpool/ROOT 3.42G 60.4G 18K legacy
rpool/ROOT/s10x_u6wos_07b 3.42G 60.4G 3.42G /
rpool/SOFT 1.13G 8.87G 1.13G /rpool/SOFT
rpool/dump 1.00G 60.4G 1.00G -
rpool/export 37K 60.4G 19K /export
rpool/export/home 18K 60.4G 18K /export/home
rpool/swap 1G 61.4G 16K -
[nuve] / #

To change the file system's mount point:
[nuve] / # zfs set mountpoint=/SOFT rpool/SOFT
[nuve] / # df -h
Filesystem size used avail capacity Mounted on
rpool/ROOT/s10x_u6wos_07b 67G 3.4G 60G 6% /
rpool/export 67G 19K 60G 1% /export
rpool/export/home 67G 18K 60G 1% /export/home
rpool 67G 36K 60G 1% /rpool
rpool/SOFT 10G 1.9G 8.1G 20% /SOFT
[nuve] / #

Restoring the services database (the SMF repository)

When the SMF repository database dies on Solaris 10, run:
/lib/svc/bin/restore_repository
Choose the "boot" backup option it offers.

Error message indicating the problem:
svc.configd: Fatal error: /etc/svc/volatile/svc_nonpersist.db: integrity check failed. Details in /etc/svc/volatile/db_errors

Changing the IP on Solaris 10

To change the IP, keep in mind (in the first Solaris 10 releases, ipnodes was not linked to hosts)
that you must change not only /etc/inet/hosts but also /etc/inet/ipnodes.
Example:
[skol] / # more /etc/inet/ipnodes
#
# Internet host table
#
::1 localhost
127.0.0.1 localhost
10.67.33.115 skol loghost
[skol ] / # more /etc/inet/hosts
#
# Internet host table
#
127.0.0.1 localhost
10.67.33.115 skol skol. loghost
10.67.33.55 mail mail. mailhost mailhost.

Processes consuming the most memory

ps -efo vsz,pid,comm | sort -rn
To convert the first column to MB, divide it by 1024.
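The KB-to-MB division can be done inline with awk. A sketch; the here-doc with made-up PIDs and sizes stands in for the live ps output:

```shell
# Convert the VSZ column (KB) of `ps -efo vsz,pid,comm` to MB.
# On a live system, replace the here-doc function with the real ps pipeline.
ps_sample() {
cat <<'EOF'
 VSZ   PID COMMAND
204800 1234 /usr/bin/java
 10240 5678 /usr/sbin/sshd
EOF
}
ps_sample | awk 'NR > 1 { printf "%d MB  pid=%s  %s\n", $1 / 1024, $2, $3 }'
```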

Configuring remote syslog

Add the following at the end of /etc/syslog.conf (selector and action must be separated by a TAB):
*.info;mail.none;kern.none;daemon.none @10.67.33.115
auth.notice @10.67.33.115
daemon.info @10.67.33.115

It is also advisable to add the IP address of the host that will receive the logs to the local /etc/hosts:
10.67.33.115 loghost

Then restart syslog: /etc/init.d/syslog stop && /etc/init.d/syslog start

Enabling X on SuSE

vi /etc/sysconfig/displaymanager
Enable port 6000:
# TCP port 6000 of Xserver. When set to "no" (default) Xserver is
DISPLAYMANAGER_XSERVER_TCP_PORT_6000_OPEN="yes"
/sbin/SuSEconfig
/usr/sbin/rcxdm restart
reboot

Monday, July 12, 2010

Growing an SVM file system inside a disk set

To grow an SVM file system that lives inside a disk set, proceed as in this example,
where we grow /u/app/oracle/admin/SKOL/arch, which belongs to d74:
# df -h
Filesystem size used avail capacity Mounted on
/dev/md/SKOL/dsk/d74 10.0G 90M 9.8G 1% /u/app/oracle/admin/SKOL/arch
# metastat -s SKOL
SKOL/d74: Soft Partition
Device: SKOL/d70
State: Okay
Size: 20971520 blocks (10 GB)
Extent Start Block Block count
0 482345472 20971520

SKOL/d70: Concat/Stripe
Size: 628838400 blocks (299 GB)
Stripe 0: (interlace: 128 blocks)
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c6t60060E800456EE00000056EE000000F4d0s0 0 No Okay Yes
/dev/dsk/c6t60060E800456EE00000056EE000000F5d0s0 0 No Okay Yes
/dev/dsk/c6t60060E800456EE00000056EE000000F6d0s0 0 No Okay Yes
/dev/dsk/c6t60060E800456EE00000056EE000000F7d0s0 0 No Okay Yes
/dev/dsk/c6t60060E800456EE00000056EE000000F8d0s0 0 No Okay Yes
/dev/dsk/c6t60060E800456EE00000056EE000000F9d0s0 0 No Okay Yes
/dev/dsk/c6t60060E800456EE00000056EE000000FAd0s0 0 No Okay Yes
/dev/dsk/c6t60060E800456EE00000056EE000000FBd0s0 0 No Okay Yes
/dev/dsk/c6t60060E800456EE00000056EE000000FCd0s0 0 No Okay Yes
/dev/dsk/c6t60060E800456EE00000056EE000000FDd0s0 0 No Okay Yes

SKOL/d73: Soft Partition
Device: SKOL/d70
State: Okay
Size: 157286400 blocks (75 GB)
Extent Start Block Block count
0 325058944 157286400

SKOL/d72: Soft Partition
Device: SKOL/d70
State: Okay
Size: 157286400 blocks (75 GB)
Extent Start Block Block count
0 167772416 157286400

SKOL/d71: Soft Partition
Device: SKOL/d70
State: Okay
Size: 167772160 blocks (80 GB)
Extent Start Block Block count
0 128 167772160

With this command I add 15 GB to the file system that already had 10 GB, bringing it to 25 GB total.
# metattach -s SKOL d74 15g
SKOL/d74: Soft Partition has been grown
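The grown soft partition should now report 25 GB, which with SVM's 512-byte blocks is exactly the 52428800 blocks that metastat shows below. A quick arithmetic check:

```shell
# 10 GB original + 15 GB added = 25 GB, expressed in 512-byte blocks.
blocks=$((25 * 1024 * 1024 * 1024 / 512))
echo "$blocks blocks"
```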

# metastat -s SKOL d74
SKOL/d74: Soft Partition
Device: SKOL/d70
State: Okay
Size: 52428800 blocks (25 GB)
Extent Start Block Block count
0 482345472 52428800

SKOL/d70: Concat/Stripe
Size: 628838400 blocks (299 GB)
Stripe 0: (interlace: 128 blocks)
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c6t60060E800456EE00000056EE000000F4d0s0 0 No Okay Yes
/dev/dsk/c6t60060E800456EE00000056EE000000F5d0s0 0 No Okay Yes
/dev/dsk/c6t60060E800456EE00000056EE000000F6d0s0 0 No Okay Yes
/dev/dsk/c6t60060E800456EE00000056EE000000F7d0s0 0 No Okay Yes
/dev/dsk/c6t60060E800456EE00000056EE000000F8d0s0 0 No Okay Yes
/dev/dsk/c6t60060E800456EE00000056EE000000F9d0s0 0 No Okay Yes
/dev/dsk/c6t60060E800456EE00000056EE000000FAd0s0 0 No Okay Yes
/dev/dsk/c6t60060E800456EE00000056EE000000FBd0s0 0 No Okay Yes
/dev/dsk/c6t60060E800456EE00000056EE000000FCd0s0 0 No Okay Yes
/dev/dsk/c6t60060E800456EE00000056EE000000FDd0s0 0 No Okay Yes

Device Relocation Information:
Device Reloc Device ID
/dev/dsk/c6t60060E800456EE00000056EE000000F4d0 Yes id1,ssd@n60060e800456ee00000056ee000000f4
/dev/dsk/c6t60060E800456EE00000056EE000000F5d0 Yes id1,ssd@n60060e800456ee00000056ee000000f5
/dev/dsk/c6t60060E800456EE00000056EE000000F6d0 Yes id1,ssd@n60060e800456ee00000056ee000000f6
/dev/dsk/c6t60060E800456EE00000056EE000000F7d0 Yes id1,ssd@n60060e800456ee00000056ee000000f7
/dev/dsk/c6t60060E800456EE00000056EE000000F8d0 Yes id1,ssd@n60060e800456ee00000056ee000000f8
/dev/dsk/c6t60060E800456EE00000056EE000000F9d0 Yes id1,ssd@n60060e800456ee00000056ee000000f9
/dev/dsk/c6t60060E800456EE00000056EE000000FAd0 Yes id1,ssd@n60060e800456ee00000056ee000000fa
/dev/dsk/c6t60060E800456EE00000056EE000000FBd0 Yes id1,ssd@n60060e800456ee00000056ee000000fb
/dev/dsk/c6t60060E800456EE00000056EE000000FCd0 Yes id1,ssd@n60060e800456ee00000056ee000000fc
/dev/dsk/c6t60060E800456EE00000056EE000000FDd0 Yes id1,ssd@n60060e800456ee00000056ee000000fd

The file system still reports the old size until growfs is run:
# df -h |grep d74
/dev/md/SKOL/dsk/d74 10.0G 90M 9.8G 1% /u/app/oracle/admin/SKOL/arch

# growfs -M /u/app/oracle/admin/SKOL/arch /dev/md/SKOL/rdsk/d74
Warning: 4096 sector(s) in last cylinder unallocated
/dev/md/SKOL/rdsk/d74: 52428800 sectors in 8534 cylinders of 48 tracks, 128 sectors
25600.0MB in 534 cyl groups (16 c/g, 48.00MB/g, 128 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
..........
super-block backups for last 10 cylinder groups at:
51512864, 51611296, 51709728, 51808160, 51906592, 52005024, 52103456,
52201888, 52300320, 52398752
# df -h |grep d74
/dev/md/SKOL/dsk/d74 25G 105M 25G 1% /u/app/oracle/admin/SKOL/arch
#

Clearing the STOP_FAILED state (Sun Cluster 3.0)

To clear the STOP_FAILED state on a Sun Cluster 3.0 resource group, follow the steps in this example.

With scstat -g, identify the resource groups that need clearing:
scstat -g |more
-- Resource Groups and Resources -
Group Name Resources
---------- ---------
Resources: oracle-rg cluster1 oracle-server-rs oracle-hastorage-rs servicios-hastorage-rs servicios-rs oracle-LSN_
PR01-rs oracle-LSN_PR02-rs oracle-LSN_PR03-rs oracle-LSN_PR04-rs oracle-LSN_PR05-rs

-- Resource Groups --

Group Name Node Name State
---------- --------- -----

Group: oracle-rg server1 Error--stop failed
Group: oracle-rg server2 Offline

# scswitch -c -h server1 -j oracle-server-rs -f STOP_FAILED
scswitch: NOTICE: Operation succeeded, but resource group oracle-rg remains in ERROR_STOP_FAILED state on node server1 because some resources in the group remain
online while others are offline. To clear this condition, switch the resource group offline.

# scswitch -F -g oracle-rg
# scstat -g |more

-- Resource Groups and Resources --

Group Name Resources
---------- ---------
Resources: oracle-rg cluster1 oracle-server-rs oracle-hastorage-rs servicios-hastorage-rs servicios-rs oracle-LSN_
PR01-rs oracle-LSN_PR02-rs oracle-LSN_PR03-rs oracle-LSN_PR04-rs oracle-LSN_PR05-rs

-- Resource Groups --

Group Name Node Name State
---------- --------- -----
Group: oracle-rg server1 Online
Group: oracle-rg server2 Offline

Now run scswitch against each of the resources associated with the oracle-rg resource group:

# scswitch -n -j oracle-hastorage-rs
# scswitch -e -j oracle-hastorage-rs
# scswitch -F -g oracle-rg
# scswitch -n -j oracle-LSN_PR05-rs
# scswitch -n -j oracle-LSN_PR04-rs
# scswitch -n -j oracle-LSN_PR03-rs
# scswitch -n -j oracle-LSN_PR02-rs
# scswitch -n -j oracle-LSN_PR01-rs
# scswitch -n -j oracle-server-rs
# scswitch -n -j oracle-hastorage-rs
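The five listener resources follow a naming pattern, so the per-resource scswitch calls can be generated in a loop instead of typed one by one. A sketch that prints the commands for review (pipe the output to `sh`, or drop the echo, once it looks right on the cluster node):

```shell
# Generate the disable commands for the five Oracle listener resources,
# highest-numbered first, matching the order used above.
for n in 5 4 3 2 1; do
    echo "scswitch -n -j oracle-LSN_PR0${n}-rs"
done
echo "scswitch -n -j oracle-server-rs"
```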