
Thursday, June 19, 2014

Rebalancing a GPFS File System with Disks of Different Sizes

How to rebalance a GPFS file system when the disks that make it up are not all the same size.

First, I analyze the file system:

root@pm-db1-test:/gpfs6/pm10test > mmdf gpfs6 --block-size 1G
disk                disk size  failure holds    holds              free GB             free GB
name                    in GB    group metadata data        in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 2.3 TB)
nsd_hdisk14               200       -1 yes      yes              68 ( 34%)             1 ( 00%)
nsd_hdisk6                200       -1 yes      yes             200 (100%)             1 ( 0%)
nsd_hdisk9                 50       -1 yes      yes               1 (  00%)             1 ( 0%)
nsd_hdisk17               200       -1 yes      yes               1 (  00%)             1 ( 0%)
                -------------                         -------------------- -------------------
(pool total)              600                                   268 ( 45%)             1 ( 00%)

                =============                         ==================== ===================
(total)                   600                                   268 ( 45%)             1 ( 00%)

Inode Information
-----------------
Number of used inodes:            4280
Number of free inodes:          202568
Number of allocated inodes:     206848
Maximum number of inodes:       206848


I suspend the disk that belongs to the GPFS file system (a suspended disk stops receiving new block allocations, but its data remains readable):

root@pm-db1-test:/gpfs6/pm10test > mmchdisk gpfs6 suspend -d nsd_hdisk9


If I look now, the suspended disk is marked with an asterisk (*), and we can see it has 0% available, so the file system needs to be rebalanced to move its data onto the other disks.


root@pm-db1-test:/gpfs6/pm10test > mmdf gpfs6 --block-size 1G
disk                disk size  failure holds    holds              free GB             free GB
name                    in GB    group metadata data        in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 2.3 TB)
nsd_hdisk14               200       -1 yes      yes              68 ( 34%)             1 ( 00%)
nsd_hdisk6                200       -1 yes      yes             200 (100%)             1 ( 0%)
nsd_hdisk9                 50       -1 yes      yes               1 (  00%)             1 ( 0%) *
nsd_hdisk17               200       -1 yes      yes               1 (  00%)             1 ( 0%)
                -------------                         -------------------- -------------------
(pool total)              600                                   268 ( 45%)             1 ( 00%)

                =============                         ==================== ===================
(total)                   600                                   268 ( 45%)             1 ( 00%)

Inode Information
-----------------
Number of used inodes:            4280
Number of free inodes:          202568
Number of allocated inodes:     206848
Maximum number of inodes:       206848
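
The suspended state can also be confirmed with mmlsdisk, which reports the availability and status of every disk in the file system; a quick check, where the status of nsd_hdisk9 should now read "suspended":

root@pm-db1-test:/gpfs6/pm10test > mmlsdisk gpfs6 -d nsd_hdisk9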

I rebalance the file system before removing the disk (the -b option of mmrestripefs rebalances data evenly across the disks that are not suspended, draining nsd_hdisk9):

root@pm-db1-test:/gpfs6/pm10test > mmrestripefs gpfs6 -b
GPFS: 6027-589 Scanning file system metadata, phase 1 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 2 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 3 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 4 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-565 Scanning user file metadata ...
   0.30 % complete on Thu Jun 19 09:35:23 2014  (     28613 inodes       1185 MB)
   1.75 % complete on Thu Jun 19 09:35:45 2014  (     28617 inodes       6836 MB)
   2.41 % complete on Thu Jun 19 09:36:13 2014  (     28617 inodes       9429 MB)
   2.56 % complete on Thu Jun 19 09:36:34 2014  (     28621 inodes      10021 MB)
   2.79 % complete on Thu Jun 19 09:36:56 2014  (     28621 inodes      10917 MB)
   2.89 % complete on Thu Jun 19 09:37:24 2014  (     28623 inodes      11301 MB)
   3.02 % complete on Thu Jun 19 09:37:47 2014  (     28631 inodes      11808 MB)
   3.16 % complete on Thu Jun 19 09:38:12 2014  (     28640 inodes      12366 MB)
   3.18 % complete on Thu Jun 19 09:38:33 2014  (     28641 inodes      12463 MB)
   3.32 % complete on Thu Jun 19 09:38:59 2014  (     86060 inodes      12986 MB)
   3.38 % complete on Thu Jun 19 09:39:21 2014  (     98355 inodes      13216 MB)
   3.78 % complete on Thu Jun 19 09:39:48 2014  (     98357 inodes      14807 MB)
   3.95 % complete on Thu Jun 19 09:40:09 2014  (    135233 inodes      15452 MB)
   4.58 % complete on Thu Jun 19 09:40:33 2014  (    135245 inodes      17925 MB)
   5.77 % complete on Thu Jun 19 09:40:57 2014  (    135246 inodes      22598 MB)
   7.30 % complete on Thu Jun 19 09:41:18 2014  (    147466 inodes      28559 MB)
   7.70 % complete on Thu Jun 19 09:41:43 2014  (    147466 inodes      30114 MB)
   8.56 % complete on Thu Jun 19 09:42:13 2014  (    153640 inodes      33488 MB)
   9.54 % complete on Thu Jun 19 09:42:35 2014  (    153641 inodes      37346 MB)
  10.13 % complete on Thu Jun 19 09:43:07 2014  (    153644 inodes      39635 MB)
  11.44 % complete on Thu Jun 19 09:43:29 2014  (    153646 inodes      44756 MB)
  12.24 % complete on Thu Jun 19 09:43:50 2014  (    153654 inodes      47901 MB)
  13.54 % complete on Thu Jun 19 09:44:12 2014  (    153672 inodes      52988 MB)
  13.61 % complete on Thu Jun 19 09:44:35 2014  (    153675 inodes      53277 MB)
  13.71 % complete on Thu Jun 19 09:44:58 2014  (    165938 inodes      53659 MB)
  13.83 % complete on Thu Jun 19 09:45:19 2014  (    165950 inodes      54141 MB)
  14.81 % complete on Thu Jun 19 09:45:49 2014  (    191956 inodes      57977 MB)
  15.28 % complete on Thu Jun 19 09:46:10 2014  (    196638 inodes      59794 MB)
  15.38 % complete on Thu Jun 19 09:46:33 2014  (    200736 inodes      60181 MB)
  15.43 % complete on Thu Jun 19 09:47:00 2014  (    200739 inodes      60374 MB)
  15.48 % complete on Thu Jun 19 09:47:24 2014  (    200741 inodes      60567 MB)
  15.55 % complete on Thu Jun 19 09:47:47 2014  (    200744 inodes      60856 MB)
  15.76 % complete on Thu Jun 19 09:48:15 2014  (    200749 inodes      61657 MB)
  15.82 % complete on Thu Jun 19 09:48:35 2014  (    200752 inodes      61898 MB)
  15.89 % complete on Thu Jun 19 09:48:56 2014  (    200756 inodes      62188 MB)
  15.96 % complete on Thu Jun 19 09:49:24 2014  (    200766 inodes      62457 MB)
  16.01 % complete on Thu Jun 19 09:49:44 2014  (    200768 inodes      62651 MB)
  16.19 % complete on Thu Jun 19 09:50:05 2014  (    200772 inodes      63357 MB)
  16.24 % complete on Thu Jun 19 09:50:30 2014  (    200791 inodes      63550 MB)
  16.26 % complete on Thu Jun 19 09:50:51 2014  (    206848 inodes      63648 MB)
 100.00 % complete on Thu Jun 19 09:50:52 2014
GPFS: 6027-552 Scan completed successfully.

We check that the disk to be removed is now 100% free:

root@pm-db1-test:/gpfs6/pm10test > mmdf gpfs6
disk                disk size  failure holds    holds              free KB             free KB
name                    in KB    group metadata data        in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 2.3 TB)
nsd_hdisk14         209715200       -1 yes      yes        76011520 ( 36%)         73536 ( 0%)
nsd_hdisk6          209715200       -1 yes      yes        76115968 ( 36%)         28416 ( 0%)
nsd_hdisk9           52428800       -1 yes      yes        52328448 (100%)          4864 ( 0%) *
nsd_hdisk17         209715200       -1 yes      yes        76075008 ( 36%)         73952 ( 0%)
                -------------                         -------------------- -------------------
(pool total)        629145600                             228202496 ( 36%)        175904 ( 00%)

                =============                         ==================== ===================
(total)             629145600                             228202496 ( 36%)        175904 ( 00%)

Inode Information
-----------------
Number of used inodes:            4280
Number of free inodes:          202568
Number of allocated inodes:     206848
Maximum number of inodes:       206848

I stop the disk before deleting it:

root@pm-db1-test:/gpfs6/pm10test > mmchdisk gpfs6 stop -d nsd_hdisk9

I delete the disk from the file system:

root@pm-db1-test:/gpfs6/pm10test > mmdeldisk gpfs6 nsd_hdisk9 -r -a
Deleting disks ...
Scanning system storage pool
GPFS: 6027-589 Scanning file system metadata, phase 1 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 2 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 3 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 4 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-565 Scanning user file metadata ...
 100.00 % complete on Thu Jun 19 09:52:29 2014
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-379 Could not invalidate disk(s).
Checking Allocation Map for storage pool 'system'
GPFS: 6027-370 tsdeldisk64 completed.
mmdeldisk: 6027-1371 Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.

I check that the disk was removed from gpfs6 (the 6027-379 "Could not invalidate disk(s)" message above is expected here: since the disk was stopped, GPFS could not write to it to invalidate its descriptor):

root@pm-db1-test:/gpfs6/pm10test > mmdf gpfs6
disk                disk size  failure holds    holds              free KB             free KB
name                    in KB    group metadata data        in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 2.3 TB)
nsd_hdisk14         209715200       -1 yes      yes        76046336 ( 36%)         73536 ( 0%)
nsd_hdisk6          209715200       -1 yes      yes        76115968 ( 36%)         28416 ( 0%)
nsd_hdisk17         209715200       -1 yes      yes        76075008 ( 36%)         73952 ( 0%)
                -------------                         -------------------- -------------------
(pool total)        629145600                             228237312 ( 36%)        175904 ( 00%)

                =============                         ==================== ===================
(total)             629145600                             228237312 ( 36%)        175904 ( 00%)

Inode Information
-----------------
Number of used inodes:            4280
Number of free inodes:          202568
Number of allocated inodes:     206848
Maximum number of inodes:       206848

root@pm-db1-test:/gpfs6/pm10test > mmlsnsd

 File system   Disk name    NSD servers
---------------------------------------------------------------------------
 gpfs1         nsd_hdisk4   (directly attached)
 gpfs2         nsd_hdisk5   (directly attached)
 gpfs6         nsd_hdisk14  (directly attached)
 gpfs6         nsd_hdisk17  (directly attached)
 gpfs6         nsd_hdisk6   (directly attached)
 gpfs7         nsd_hdisk15  (directly attached)
 gpfs7         nsd_hdisk7   (directly attached)
 gpfs7         nsd_hdisk10  (directly attached)
 gpfs7         nsd_hdisk12  (directly attached)
 gpfs7         nsd_hdisk16  (directly attached)
 gpfs7         nsd_hdisk18  (directly attached)
 (free disk)   nsd_hdisk9   (directly attached)
 (free disk)   nsd_tb_hdisk13 (directly attached)
 (free disk)   nsd_tb_hdisk3 (directly attached)


I delete the NSD:

root@pm-db1-test:/gpfs6/pm10test > mmdelnsd nsd_hdisk9
mmdelnsd: Processing disk nsd_hdisk9
mmdelnsd: 6027-1371 Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
root@pm-db1-test:/gpfs6/pm10test > lspv
hdisk1          none                                nsd_hdisk14
hdisk2          none                                nsd_hdisk15
hdisk3          none                                nsd_hdisk6
hdisk4          none                                nsd_hdisk7
hdisk5          none                                None
hdisk6          none                                nsd_hdisk10
hdisk7          none                                nsd_hdisk12
hdisk8          none                                nsd_hdisk16
hdisk9          none                                nsd_hdisk17
hdisk10         00cd9514489fa863                    swapvg          active
hdisk11         00cd9514f79fc340                    appsvg          active
hdisk12         00cd951448c61170                    appsvg          active
hdisk13         none                                nsd_tb_hdisk3
hdisk14         none                                nsd_hdisk4
hdisk15         none                                nsd_hdisk5
hdisk16         none                                nsd_tb_hdisk13
hdisk0          00cd951467067023                    rootvg          active
hdisk17         none                                nsd_hdisk18
hdisk18         00f81ae386bb5e11                    altinst_rootvg


I remove the physical disk from AIX (on both nodes of the cluster):


root@pm-db1-test:/gpfs6/pm10test > rmdev -Rdl hdisk5
hdisk5 deleted
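
The same cleanup is needed on the second node. A minimal sketch, assuming the peer node is pm-db2-test (the name that appears later in the mmadddisk output) and that the LUN maps to the same hdisk number there, which is worth verifying first with lspv:

# assumption: peer node is pm-db2-test and uses the same hdisk numbering
root@pm-db1-test:/gpfs6/pm10test > ssh pm-db2-test "rmdev -Rdl hdisk5"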

From the storage side I expand the LUN, taking it from 50 GB to 200 GB, and then run cfgmgr to detect the disk again:


root@pm-db1-test:/gpfs6/pm10test > cfgmgr
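
To confirm that AIX now sees the new size, query the disk; bootinfo -s reports the size in MB, so a value around 204800 is expected for 200 GB:

root@pm-db1-test:/gpfs6/pm10test > bootinfo -s hdisk5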

I initialize the disk, zeroing out its first blocks (this wipes any leftover NSD or PVID header):

root@pm-db1-test:/gpfs6/pm10test > dd if=/dev/zero of=/dev/hdisk5 bs=100k count=1000
1000+0 records in.
1000+0 records out.
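
To double-check that the start of the disk was really wiped, lquerypv can dump the first block in hex; after the dd above, every byte should read zero:

root@pm-db1-test:/gpfs6/pm10test > lquerypv -h /dev/hdisk5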

I set the attributes I need (no_reserve releases the SCSI reservation so both nodes can access the disk concurrently, plus the queue depth and read/write timeout):

root@pm-db1-test:/gpfs6/pm10test > chdev -l hdisk5 -a reserve_policy=no_reserve -a queue_depth=8 -a rw_timeout=60
hdisk5 changed
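
These attributes must match on the other node as well; a sketch, again assuming the peer is pm-db2-test and that the device is also hdisk5 there:

# assumption: same device name on the peer node
root@pm-db1-test:/gpfs6/pm10test > ssh pm-db2-test "chdev -l hdisk5 -a reserve_policy=no_reserve -a queue_depth=8 -a rw_timeout=60"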

I define the GPFS disk descriptor file:

root@pm-db1-test:/gpfs6/pm10test > echo '#DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup:DesiredName:StoragePool' > /tmp/gpfs5
root@pm-db1-test:/gpfs6/pm10test > echo 'hdisk5:::::nsd_hdisk9:' >> /tmp/gpfs5
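
This is the colon-separated descriptor format of GPFS 3.4 and earlier; from GPFS 3.5 onward the same definition is written as an NSD stanza file instead, roughly like the following sketch (not taken from this cluster):

# assumption: stanza equivalent of the descriptor above, GPFS 3.5+ syntax
%nsd:
  device=/dev/hdisk5
  nsd=nsd_hdisk9
  usage=dataAndMetadata
  failureGroup=-1
  pool=system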

I create the NSD:

root@pm-db1-test:/gpfs6/pm10test > mmcrnsd -F /tmp/gpfs5
mmcrnsd: Processing disk hdisk5
mmcrnsd: 6027-1371 Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
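
At this point the new NSD should appear as a free disk, not yet assigned to any file system; mmlsnsd -F lists exactly those:

root@pm-db1-test:/gpfs6/pm10test > mmlsnsd -F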

I add the disk to the gpfs6 file system:

root@pm-db1-test:/gpfs6/pm10test > mmadddisk gpfs6 -F /tmp/gpfs5

GPFS: 6027-531 The following disks of gpfs6 will be formatted on node pm-db2-test:
    nsd_hdisk9: size 209715200 KB
Extending Allocation Map
Checking Allocation Map for storage pool 'system'
GPFS: 6027-1503 Completed adding disks to file system gpfs6.
mmadddisk: 6027-1371 Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
root@pm-db1-test:/gpfs6/pm10test >
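
As an aside, mmadddisk also accepts a -r flag that rebalances existing data onto the new disk in the same step, which would make the separate mmrestripefs below unnecessary; the combined form would be:

root@pm-db1-test:/gpfs6/pm10test > mmadddisk gpfs6 -F /tmp/gpfs5 -r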

We verify that the file system looks right by running the first command again:

root@pm-db1-test:/gpfs6/pm10test > mmdf gpfs6 --block-size 1G
disk                disk size  failure holds    holds              free GB             free GB
name                    in GB    group metadata data        in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 2.3 TB)
nsd_hdisk14               200       -1 yes      yes              73 ( 36%)             1 ( 0%)
nsd_hdisk6                200       -1 yes      yes              73 ( 36%)             1 ( 0%)
nsd_hdisk9                200       -1 yes      yes             200 (100%)             1 ( 0%)
nsd_hdisk17               200       -1 yes      yes              73 ( 36%)             1 ( 0%)
                -------------                         -------------------- -------------------
(pool total)              800                                   418 ( 52%)             1 ( 00%)

                =============                         ==================== ===================
(total)                   800                                   418 ( 52%)             1 ( 00%)

And now we rebalance so that all the disks end up at the same occupancy level:

root@pm-db1-test:/gpfs6/pm10test > mmrestripefs gpfs6 -b
GPFS: 6027-589 Scanning file system metadata, phase 1 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 2 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 3 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 4 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-565 Scanning user file metadata ...
   0.13 % complete on Thu Jun 19 10:11:44 2014  (      4018 inodes        525 MB)
   1.44 % complete on Thu Jun 19 10:12:12 2014  (     22450 inodes       5631 MB)
   1.90 % complete on Thu Jun 19 10:12:44 2014  (     28614 inodes       7422 MB)
   2.90 % complete on Thu Jun 19 10:13:16 2014  (     28618 inodes      11344 MB)
   2.97 % complete on Thu Jun 19 10:13:41 2014  (     28621 inodes      11633 MB)
   3.30 % complete on Thu Jun 19 10:14:08 2014  (     28622 inodes      12929 MB)
   3.38 % complete on Thu Jun 19 10:14:28 2014  (     28623 inodes      13217 MB)
   3.40 % complete on Thu Jun 19 10:14:58 2014  (     28625 inodes      13313 MB)
   3.65 % complete on Thu Jun 19 10:15:23 2014  (     28639 inodes      14282 MB)
   3.70 % complete on Thu Jun 19 10:15:45 2014  (     28640 inodes      14475 MB)
   3.72 % complete on Thu Jun 19 10:16:10 2014  (     28641 inodes      14475 MB)
   3.73 % complete on Thu Jun 19 10:16:42 2014  (     86056 inodes      14585 MB)
   3.86 % complete on Thu Jun 19 10:17:04 2014  (     98344 inodes      15087 MB)
   3.89 % complete on Thu Jun 19 10:17:25 2014  (     98355 inodes      15216 MB)
   4.08 % complete on Thu Jun 19 10:17:58 2014  (    116788 inodes      15967 MB)
   4.75 % complete on Thu Jun 19 10:18:19 2014  (    116799 inodes      18604 MB)
   4.95 % complete on Thu Jun 19 10:18:59 2014  (    135234 inodes      19362 MB)
   5.38 % complete on Thu Jun 19 10:19:22 2014  (    135249 inodes      21057 MB)
   5.69 % complete on Thu Jun 19 10:19:47 2014  (    135250 inodes      22275 MB)
   6.00 % complete on Thu Jun 19 10:20:10 2014  (    135251 inodes      23485 MB)
   6.14 % complete on Thu Jun 19 10:20:35 2014  (    147470 inodes      24044 MB)
   7.17 % complete on Thu Jun 19 10:20:59 2014  (    153633 inodes      28041 MB)
   7.96 % complete on Thu Jun 19 10:21:20 2014  (    153644 inodes      31152 MB)
   9.16 % complete on Thu Jun 19 10:21:47 2014  (    153644 inodes      35852 MB)
   9.89 % complete on Thu Jun 19 10:22:11 2014  (    153645 inodes      38710 MB)
  10.17 % complete on Thu Jun 19 10:22:36 2014  (    153646 inodes      39807 MB)
  10.71 % complete on Thu Jun 19 10:22:58 2014  (    153647 inodes      41903 MB)
  11.75 % complete on Thu Jun 19 10:23:20 2014  (    153647 inodes      45999 MB)
  12.31 % complete on Thu Jun 19 10:23:47 2014  (    153666 inodes      48192 MB)
  12.57 % complete on Thu Jun 19 10:24:22 2014  (    153668 inodes      49208 MB)
  12.64 % complete on Thu Jun 19 10:24:53 2014  (    153673 inodes      49448 MB)
  12.66 % complete on Thu Jun 19 10:25:19 2014  (    153674 inodes      49544 MB)
  12.71 % complete on Thu Jun 19 10:25:47 2014  (    165935 inodes      49759 MB)
  12.76 % complete on Thu Jun 19 10:26:12 2014  (    165936 inodes      49951 MB)
  13.03 % complete on Thu Jun 19 10:26:34 2014  (    165951 inodes      50974 MB)
  13.72 % complete on Thu Jun 19 10:26:56 2014  (    165962 inodes      53689 MB)
  13.76 % complete on Thu Jun 19 10:27:26 2014  (    165964 inodes      53833 MB)
  13.91 % complete on Thu Jun 19 10:27:53 2014  (    178205 inodes      54442 MB)
  13.96 % complete on Thu Jun 19 10:28:34 2014  (    178207 inodes      54634 MB)
  14.12 % complete on Thu Jun 19 10:28:56 2014  (    196639 inodes      55253 MB)
  14.19 % complete on Thu Jun 19 10:29:23 2014  (    196641 inodes      55542 MB)
  14.19 % complete on Thu Jun 19 10:29:44 2014  (    200737 inodes      55544 MB)
  14.50 % complete on Thu Jun 19 10:30:08 2014  (    200739 inodes      56737 MB)
  14.52 % complete on Thu Jun 19 10:30:34 2014  (    200740 inodes      56833 MB)
  14.57 % complete on Thu Jun 19 10:31:03 2014  (    200742 inodes      57026 MB)
  14.62 % complete on Thu Jun 19 10:31:25 2014  (    200743 inodes      57218 MB)
  14.65 % complete on Thu Jun 19 10:31:52 2014  (    200745 inodes      57315 MB)
  14.67 % complete on Thu Jun 19 10:32:21 2014  (    200746 inodes      57412 MB)
  14.85 % complete on Thu Jun 19 10:32:52 2014  (    200750 inodes      58117 MB)
  14.89 % complete on Thu Jun 19 10:33:19 2014  (    200752 inodes      58261 MB)
  14.91 % complete on Thu Jun 19 10:33:41 2014  (    200753 inodes      58357 MB)
  14.96 % complete on Thu Jun 19 10:34:04 2014  (    200757 inodes      58552 MB)
  14.98 % complete on Thu Jun 19 10:34:31 2014  (    200762 inodes      58627 MB)
  15.03 % complete on Thu Jun 19 10:35:11 2014  (    200766 inodes      58820 MB)
  15.08 % complete on Thu Jun 19 10:35:50 2014  (    200768 inodes      59014 MB)
  15.10 % complete on Thu Jun 19 10:36:14 2014  (    200772 inodes      59112 MB)
  15.29 % complete on Thu Jun 19 10:36:44 2014  (    200784 inodes      59817 MB)
  15.33 % complete on Thu Jun 19 10:37:18 2014  (    206848 inodes      60011 MB)
 100.00 % complete on Thu Jun 19 10:37:29 2014
GPFS: 6027-552 Scan completed successfully.


Everything looks good:

root@pm-db1-test:/gpfs6/pm10test > mmdf gpfs6
disk                disk size  failure holds    holds              free KB             free KB
name                    in KB    group metadata data        in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 2.3 TB)
nsd_hdisk14         209715200       -1 yes      yes       109422592 ( 52%)         84448 ( 0%)
nsd_hdisk6          209715200       -1 yes      yes       109503488 ( 52%)         36160 ( 00%)
nsd_hdisk9          209715200       -1 yes      yes       109511680 ( 52%)         28928 ( 0%)
nsd_hdisk17         209715200       -1 yes      yes       109483008 ( 52%)         56032 ( 00%)
                -------------                         -------------------- -------------------
(pool total)        838860800                             437920768 ( 52%)        205568 ( 00%)

                =============                         ==================== ===================
(total)             838860800                             437920768 ( 52%)        205568 ( 00%)

Inode Information
-----------------
Number of used inodes:            4291
Number of free inodes:          202557
Number of allocated inodes:     206848
Maximum number of inodes:       206848