AIX (Advanced Interactive eXecutive) is a series of proprietary Unix operating systems developed and sold by IBM.
Performance Optimization With Enhanced RISC (POWER) version 7 gives the AIX operating system a distinct performance advantage.
POWER7 adds new capabilities built around multiple cores and multiple hardware threads per core, which can be presented as a pool of virtual CPUs.
AIX 7 includes a new built-in clustering capability called Cluster Aware AIX.
AIX POWER7 systems include the Active Memory Expansion feature.

Wednesday, November 30, 2011

Moving a CDROM from One Partition to Another Partition..!!

CHECK THE CDROM ON THE PARENT LPAR 

Check the cdrom and the I/O adapter providing it, then check the physical location of that I/O
adapter.

# lsdev -l cd0 -F parent
scsi1
# lsdev -l scsi1 -F parent
sisscsia0
# lsdev -l sisscsia0 -F physloc
U789D.001.DQD07P3-P1-C4
# lsdev -l scsi1 -F physloc
U789D.001.DQD07P3-P1-C4-T2

REMOVE THE CDROM FROM THE LPAR 
#
# rmdev -dl cd0
cd0 deleted
#

MOVING THE ADAPTER 
Using the physical location determined above, go to the properties of the managed system on the HMC and verify the I/O adapter.

Move the verified adapter to the destination lpar 

Select the adapter to move and the lpar to which the adapter is to be moved 
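
The move itself is a dynamic LPAR (DLPAR) operation on the HMC. If you prefer the HMC command line to the GUI, the same move can be done with lshwres and chhwres; this is only a sketch, and the managed system name, partition names and DRC index below are placeholders for your own environment:

lshwres -r io --rsubtype slot -m MANAGED_SYSTEM -F drc_index,drc_name,lpar_name
chhwres -r io -m MANAGED_SYSTEM -o m -p SOURCE_LPAR -t TARGET_LPAR -l DRC_INDEX

The first command lists the physical I/O slots with their DRC indexes so you can match the slot to the physical location found earlier; the second moves that slot from the source LPAR to the target LPAR.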

If you face the same error I did, go to the parent LPAR and delete the parent adapter of the
cdrom (the adapter you want to move has to be deleted from the parent LPAR first).

DELETE THE ADAPTER 

# rmdev -Rdl sisscsia0
rmt0 deleted
scsi0 deleted
scsi1 deleted
sisscsia0 deleted
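
Note that rmdev -Rdl also deletes every child device of the adapter (here the tape drive rmt0 and both SCSI buses went with it). If you want to see beforehand what hangs off an adapter, lsdev can list its children, using the adapter name found earlier:

# lsdev -p sisscsia0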

VERIFY ON THE TARGET LPAR


Go to the terminal console of the target LPAR and verify that the cdrom has been moved.
Verify the adapter by its physical location.

# hostname
gurapp123
# cfgmgr
# lsdev -l cd0
cd0 Available 02-08-01-2,0 16 Bit LVD SCSI DVD-RAM Drive
# mkdir /cdrom
# mount -v cdrfs -o ro /dev/cd0 /cdrom
# umount /cdrom
# lsdev -l cd0 -F parent
scsi1
# lsdev -l scsi1 -F physloc
U789D.001.DQD07P3-P1-C4-T2
#

Friday, November 25, 2011

Datapath Errors...!!!!

Hostname:/tmp > datapath query adapter

Active Adapters :2

Adpt#     Name   State     Mode             Select     Errors  Paths  Active
    0   fscsi0  NORMAL   ACTIVE           70265214          0     12      12
    1   fscsi1  NORMAL   ACTIVE               2473          0     12      12

Hostname:/tmp > datapath query device

Total Devices : 6


DEV#:   0  DEVICE NAME: vpath0  TYPE: 2107900         POLICY:    Optimized
SERIAL: 75L05910400
==========================================================================
Path#      Adapter/Hard Disk          State     Mode     Select     Errors
    0          fscsi0/hdisk1           OPEN   NORMAL   14033352          0
    1          fscsi0/hdisk5           OPEN   NORMAL   14035182          0
    2          fscsi1/hdisk3           OPEN   NORMAL        466          0
    3         fscsi1/hdisk10           OPEN   NORMAL        478          0

DEV#:   1  DEVICE NAME: vpath2  TYPE: 2107900         POLICY:    Optimized
SERIAL: 75L05910402
==========================================================================
Path#      Adapter/Hard Disk          State     Mode     Select     Errors
    0          fscsi0/hdisk2           OPEN   NORMAL    3804635          0
    1          fscsi0/hdisk6           OPEN   NORMAL    3807376          0
    2          fscsi1/hdisk4           OPEN   NORMAL        232          0
    3         fscsi1/hdisk11           OPEN   NORMAL        230          0

DEV#:   2  DEVICE NAME: vpath4  TYPE: 2107900         POLICY:    Optimized
SERIAL: 75CWXA12250
==========================================================================
Path#      Adapter/Hard Disk          State     Mode     Select     Errors
    0         fscsi0/hdisk17           OPEN   NORMAL     590944          0
    1         fscsi0/hdisk19           OPEN   NORMAL     589958          0
    2         fscsi1/hdisk14           OPEN   NORMAL        288          0
    3         fscsi1/hdisk21           OPEN   NORMAL        262          0

DEV#:   3  DEVICE NAME: vpath5  TYPE: 2107900         POLICY:    Optimized
SERIAL: 75CWXA12251
==========================================================================
Path#      Adapter/Hard Disk          State     Mode     Select     Errors
    0         fscsi0/hdisk18           OPEN   NORMAL    2123442          0
    1         fscsi0/hdisk20           OPEN   NORMAL    2124663          0
    2         fscsi1/hdisk15           OPEN   NORMAL        389          0
    3         fscsi1/hdisk22           OPEN   NORMAL        379          0

DEV#:   4  DEVICE NAME: vpath1  TYPE: 2107900         POLICY:    Optimized
SERIAL: 75L05910403
==========================================================================
Path#      Adapter/Hard Disk          State     Mode     Select     Errors
    0          fscsi0/hdisk7           OPEN   NORMAL       5057          0
    1          fscsi0/hdisk8           OPEN   NORMAL       5146          0
    2          fscsi1/hdisk9           OPEN   NORMAL          0          0
    3         fscsi1/hdisk12           OPEN   NORMAL          0          0

DEV#:   5  DEVICE NAME: vpath7  TYPE: 2107900         POLICY:    Optimized
SERIAL: 75CWXA10403
==========================================================================
Path#      Adapter/Hard Disk          State     Mode     Select     Errors
    0         fscsi0/hdisk29           OPEN   NORMAL       5207          0
    1         fscsi0/hdisk30           OPEN   NORMAL       5266          0
    2         fscsi1/hdisk13           OPEN   NORMAL          0          0
    3         fscsi1/hdisk16           OPEN   NORMAL          0          0



To reset the problem adapter (fscsi1 in this example), take it offline, remove it from the SDD configuration, delete the device, and reconfigure:

1) datapath set adapter fscsi1 offline
2) datapath remove adapter fscsi1
3) rmdev -dl fscsi1 -R
4) cfgmgr



For a DEGRAD error, do the following
===========================

Hostname:/root > datapath query adapter

Active Adapters :2

Adpt#     Name   State     Mode             Select     Errors  Paths  Active
    0   fscsi0  NORMAL   ACTIVE         1054574508          0     96      64
    1   fscsi1  DEGRAD   ACTIVE          143796542       1561     96      60


=========================================

datapath set adapter 1 offline
datapath remove adapter 1
rmdev -dl fcs1 -R
cfgmgr
lspath
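
Before taking an adapter offline it is worth confirming that the disks still have working paths through the remaining adapter. A rough check that counts the enabled paths carried by each parent adapter, assuming the default lspath output of status, device name and parent:

# lspath | grep Enabled | awk '{print $3}' | sort | uniq -c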

Generating an ssh key to log in to a server without a password prompt

Hostname:/home/username/.ssh > ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/username/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/username/.ssh/id_rsa.
Your public key has been saved in /home/username/.ssh/id_rsa.pub.
The key fingerprint is:
7f:21:a4:57:fd:ce:13:a6:8b:0e:d5:2c:17:33:18:e6 username@Hostname
The key's randomart image is:
+--[ RSA 2048]----+
|           o     |
|          o o.   |
|          .E.+.  |
|         o .o +. |
|        S oo.+ o.|
|         o..o.oo.|
|         .. .. .o|
|          ... . .|
|          .o .   |
+-----------------+

Hostname:/home/username/.ssh >

Hostname:/home/username/.ssh > ls -lart
total 152
-rw-r--r--    1 username  staff         61537 Aug 25 08:42 known_hosts
drwxr-xr-x    4 username  staff          4096 Aug 26 10:54 ..
-rw-r--r--    1 username  staff           395 Aug 26 12:38 id_rsa.pub
-rw-------    1 username  staff          1675 Aug 26 12:38 id_rsa
drwx------    2 username  staff           256 Aug 26 12:38 .

Hostname:/home/username/.ssh > cat -n id_rsa.pub
     1  ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMjTKlVGLSRpPBAdGZs07Go6XifBHVMBBZNxortPv9MQibR9xHZU5LguzN+dFgpSY58nxbozmE7eMYqJryaaecLxluzkY3uCYdQljVF/oWm0TvoYlzwScji8BIXKyqORjMHfjbHcIQgT5EPAup0IdCIihEwohsZ3Ur7elKfvOwPYm8+UPrfJrba7lBl6KQ50u5ssHznj7kXuzIWU2HrTVn1q+/Jjr3iXEcdQro6olOfvBifBHxfhImZezGHOsmVdzCwX08SzLBgCgvOwedTbgSzu8/iA/1mp8dnoVzCaOZ2511cT4TI2K052PXi6MSMRrgEwiMhOE2uE2APRvrxS7/ username@Hostname

Hostname:/home/username/.ssh >
Hostname:/home/username/.ssh > scp -p id_rsa.pub username@newhostname:/home/username/.ssh/authorized_keys
username@newhostname password:
id_rsa.pub                                                                                                                 100%  395     0.4KB/s   00:00

Hostname:/home/username/.ssh >
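
Note that copying id_rsa.pub straight over authorized_keys replaces any keys that already exist on the target host. If keys may already be there, appending is safer; this is just an alternative to the scp above, with the same user and host:

Hostname:/home/username/.ssh > cat id_rsa.pub | ssh username@newhostname 'cat >> .ssh/authorized_keys'
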
Hostname:/home/username/.ssh > cd

Hostname:/home/username > ssh newhostname
Last login: Fri Aug 26 12:42:03 WET 2011 on ssh from Hostname
*******************************************************************************
*                                                                             *
*  AIX Version 6.1                                                            *
*                                                                             *
*  - setup network and routes                                                 *
*  - modify /etc/snmpdv3.conf                                                 *
*  - modify /etc/motd                                                         *
*  - install sdd driver if SAN used                                           *
*                                                                             *
*******************************************************************************

newhostname:/home/username > exit
Connection to newhostname closed.

Hostname:/home/username >

Remove a single failed path

I noticed that the VIO server (VIOS) error log was reporting some failed paths for LUNs connecting to the SAN. The VIOS command errlog -ls
(the equivalent of the AIX command errpt -a) showed errors on the Fibre Channel adapter fscsi2:

Diagnostic Analysis
Diagnostic Log sequence number: 1126130
Resource tested:        fscsi2
Menu Number:            2603902
Description:


Error Log Analysis has detected multiple communication
errors.  These errors can be caused by attached devices,
a switch, a hub, or a SCSI-to-FC convertor.

If connected to a switch, refer to the Storage Area
Network (SAN) problem determination procedures for
additional problem resolution.

Multiple path redundancy

Each LUN that had a failed path still had other paths on this VIOS functioning correctly. In addition, each of the LUNs is presented to the VIO client via MPIO through this and another VIO server. That makes for a lot of redundancy, which gave us some breathing space to sort out the real cause of one of the many paths being lost. In the meantime, it's quite easy to remove the failing path on the VIOS using the rmpath command.

View paths for a LUN

First, I used the VIOS lspath command from the VIOS restricted shell to look at a single PV. This showed that there were multiple paths from the VIOS through to the SAN (in this case going to SVC).

lspath -dev hdisk63
Or via the AIX shell after logging in to the VIOS as padmin and running oem_setup_env:
lspath -l hdisk63
Whichever version of the lspath command you use, here's the output showing several paths for the same disk.
status  name    parent connection
Enabled hdisk63 fscsi0 500507680110239f,3d000000000000 <  Four
Enabled hdisk63 fscsi0 50050768014025bd,3d000000000000 <  paths
Enabled hdisk63 fscsi0 50050768011025bd,3d000000000000 <  via
Enabled hdisk63 fscsi0 500507680140239f,3d000000000000 <  fscsi0
Enabled hdisk63 fscsi1 500507680130239f,3d000000000000 < Another
Enabled hdisk63 fscsi1 500507680120239f,3d000000000000 < four
Enabled hdisk63 fscsi1 50050768013025bd,3d000000000000 < from
Enabled hdisk63 fscsi1 50050768012025bd,3d000000000000 < fscsi1
Enabled hdisk63 fscsi2 500507680110239f,3d000000000000 < Three good paths on fscsi2
Failed  hdisk63 fscsi2 50050768014025bd,3d000000000000  <--- This failed path needs to be removed or recovered
Enabled hdisk63 fscsi2 50050768011025bd,3d000000000000 < Three good paths on fscsi2
Enabled hdisk63 fscsi2 500507680140239f,3d000000000000 < Three good paths on fscsi2
Option 1: Sledgehammer special

Removing all the paths for hdisk63 via fscsi2 would work, but it would remove the good fscsi2 paths at the same time. A bit drastic, but let's face it, sledgehammers had to be invented for a reason. Anyway, as there are several other paths to the same LUN (four via fscsi0 and another four via fscsi1), removing the three good paths on fscsi2 along with the one that has failed isn't really a problem. After all four fscsi2 paths are exterminated, you can rediscover the three good ones using the VIOS cfgdev command or the AIX command cfgmgr.

Here are the steps I took to remove all four paths for fscsi2 from hdisk63:

rmpath -dev hdisk63 -pdev fscsi2

lspath -dev hdisk63
status  name    parent connection

Enabled hdisk63 fscsi0 500507680110239f,3d000000000000
Enabled hdisk63 fscsi0 50050768014025bd,3d000000000000
Enabled hdisk63 fscsi0 50050768011025bd,3d000000000000
Enabled hdisk63 fscsi0 500507680140239f,3d000000000000
Enabled hdisk63 fscsi1 500507680130239f,3d000000000000
Enabled hdisk63 fscsi1 500507680120239f,3d000000000000
Enabled hdisk63 fscsi1 50050768013025bd,3d000000000000
Enabled hdisk63 fscsi1 50050768012025bd,3d000000000000
Defined hdisk63 fscsi2 500507680110239f,3d000000000000
Defined hdisk63 fscsi2 50050768014025bd,3d000000000000
Defined hdisk63 fscsi2 50050768011025bd,3d000000000000
Defined hdisk63 fscsi2 500507680140239f,3d000000000000


Aussie Cultural Lesson
Here's a little aside for the benefit of readers not overly familiar with Australian slang. A "dummy" is a pacifier / comforter sometimes given to babies to, well, pacify them. On occasion some babies have been known to expunge the said dummy with speed and skill of Olympian standards.



Well, the rmpath command didn't actually remove the paths. It kept them Defined in the ODM. When I ran cfgdev (or cfgmgr), the command spat the dummy.

Some error messages may contain invalid information
for the Virtual I/O Server environment.

Method error (/usr/lib/methods/cfgscsidisk -l hdisk63 ):
        0514-082 The requested function could only be performed for some
                 of the specified paths.
At this point, lspath shows that the three good paths have recovered, but the failed path is still Defined and is the cause of the above error.

lspath -dev hdisk63
status  name    parent connection

Enabled hdisk63 fscsi0 500507680110239f,3d000000000000
Enabled hdisk63 fscsi0 50050768014025bd,3d000000000000
Enabled hdisk63 fscsi0 50050768011025bd,3d000000000000
Enabled hdisk63 fscsi0 500507680140239f,3d000000000000
Enabled hdisk63 fscsi1 500507680130239f,3d000000000000
Enabled hdisk63 fscsi1 500507680120239f,3d000000000000
Enabled hdisk63 fscsi1 50050768013025bd,3d000000000000
Enabled hdisk63 fscsi1 50050768012025bd,3d000000000000
Enabled hdisk63 fscsi2 500507680110239f,3d000000000000
Defined hdisk63 fscsi2 50050768014025bd,3d000000000000
Enabled hdisk63 fscsi2 50050768011025bd,3d000000000000
Enabled hdisk63 fscsi2 500507680140239f,3d000000000000

Option 2: Search and destroy

We would have done better to remove the fscsi2 paths from the ODM altogether, using the rmpath command with the -rm flag. This is similar to the -d flag on the rmdev command, as it deletes the references from the ODM.

rmpath -dev hdisk63  -pdev fscsi2 -rm
paths Deleted

Now all the paths via fscsi2 for this hdisk are gone:
lspath -dev hdisk63
status  name    parent connection

Enabled hdisk63 fscsi0 500507680110239f,3d000000000000
Enabled hdisk63 fscsi0 50050768014025bd,3d000000000000
Enabled hdisk63 fscsi0 50050768011025bd,3d000000000000
Enabled hdisk63 fscsi0 500507680140239f,3d000000000000
Enabled hdisk63 fscsi1 500507680130239f,3d000000000000
Enabled hdisk63 fscsi1 500507680120239f,3d000000000000
Enabled hdisk63 fscsi1 50050768013025bd,3d000000000000
Enabled hdisk63 fscsi1 50050768012025bd,3d000000000000
Then when you rediscover the paths via cfgdev / cfgmgr, only the three good ones come back. No error message from cfgdev this time:
cfgdev
lspath -dev hdisk63
status  name    parent connection

Enabled hdisk63 fscsi0 500507680110239f,3d000000000000
Enabled hdisk63 fscsi0 50050768014025bd,3d000000000000
Enabled hdisk63 fscsi0 50050768011025bd,3d000000000000
Enabled hdisk63 fscsi0 500507680140239f,3d000000000000
Enabled hdisk63 fscsi1 500507680130239f,3d000000000000
Enabled hdisk63 fscsi1 500507680120239f,3d000000000000
Enabled hdisk63 fscsi1 50050768013025bd,3d000000000000
Enabled hdisk63 fscsi1 50050768012025bd,3d000000000000
Enabled hdisk63 fscsi2 500507680110239f,3d000000000000
Enabled hdisk63 fscsi2 50050768011025bd,3d000000000000
Enabled hdisk63 fscsi2 500507680140239f,3d000000000000
Option 3: Can you be more specific?

A better solution would be to remove just the bad path. As hdisk63 is already fixed, let's do it on a different LUN which also has a bad path:

 lspath -dev hdisk54
status  name    parent connection

Enabled hdisk54 fscsi0 500507680110239f,34000000000000
Enabled hdisk54 fscsi0 50050768014025bd,34000000000000
Enabled hdisk54 fscsi0 50050768011025bd,34000000000000
Enabled hdisk54 fscsi0 500507680140239f,34000000000000
Enabled hdisk54 fscsi1 500507680130239f,34000000000000
Enabled hdisk54 fscsi1 500507680120239f,34000000000000
Enabled hdisk54 fscsi1 50050768013025bd,34000000000000
Enabled hdisk54 fscsi1 50050768012025bd,34000000000000
Enabled hdisk54 fscsi2 500507680110239f,34000000000000
Failed  hdisk54 fscsi2 50050768014025bd,34000000000000
Enabled hdisk54 fscsi2 50050768011025bd,34000000000000
Enabled hdisk54 fscsi2 500507680140239f,34000000000000


The rmpath command allows you to narrow the path you want to remove down to a single connection. Here's an extract from the command documentation for the VIOS rmpath command:

rmpath command

Purpose


Removes from the system a path to an MPIO-capable device.

Syntax

rmpath { [ -dev Name ] [ -pdev Parent ] [ -conn Connection ] } [ -rm ]

Once again, I'll use the -rm flag to remove the path from the ODM. Otherwise it would simply go from Enabled to Defined and still report a problem when running cfgmgr. But this time, I can narrow the path down to a single connection using the -conn flag:

rmpath -dev hdisk54 -pdev fscsi2 -conn "50050768014025bd,34000000000000" -rm

path Deleted
lspath -dev hdisk54
status  name    parent connection

Enabled hdisk54 fscsi0 500507680110239f,34000000000000
Enabled hdisk54 fscsi0 50050768014025bd,34000000000000
Enabled hdisk54 fscsi0 50050768011025bd,34000000000000
Enabled hdisk54 fscsi0 500507680140239f,34000000000000
Enabled hdisk54 fscsi1 500507680130239f,34000000000000
Enabled hdisk54 fscsi1 500507680120239f,34000000000000
Enabled hdisk54 fscsi1 50050768013025bd,34000000000000
Enabled hdisk54 fscsi1 50050768012025bd,34000000000000
Enabled hdisk54 fscsi2 500507680110239f,34000000000000
Enabled hdisk54 fscsi2 50050768011025bd,34000000000000
Enabled hdisk54 fscsi2 500507680140239f,34000000000000
Looking for failure


The lspath command can list paths by their status, which makes it easy to find all of the failed paths.

lspath -status failed
status    name    parent connection


Available ses1    sas0   a00,0           < What are these guys
Available ses2    sas0   20a00,0         < doing here?
Failed    hdisk3  fscsi2 50050768014025bd,1000000000000 < This line is where we want to start
Failed    hdisk4  fscsi2 50050768014025bd,2000000000000
Failed    hdisk6  fscsi2 50050768014025bd,19000000000000
Failed    hdisk7  fscsi2 50050768014025bd,1a000000000000
Failed    hdisk8  fscsi2 50050768014025bd,1b000000000000
Failed    hdisk9  fscsi2 50050768014025bd,1c000000000000
Failed    hdisk10 fscsi2 50050768014025bd,1d000000000000
Failed    hdisk11 fscsi2 50050768014025bd,e000000000000
Failed    hdisk12 fscsi2 50050768014025bd,23000000000000
Failed    hdisk13 fscsi2 50050768014025bd,24000000000000
Failed    hdisk16 fscsi2 50050768014025bd,5000000000000
Failed    hdisk17 fscsi2 50050768014025bd,6000000000000
Failed    hdisk18 fscsi2 50050768014025bd,7000000000000
Failed    hdisk20 fscsi2 50050768014025bd,9000000000000
Failed    hdisk22 fscsi2 50050768014025bd,b000000000000
Failed    hdisk32 fscsi2 50050768014025bd,16000000000000
Failed    hdisk21 fscsi2 50050768014025bd,a000000000000
Failed    hdisk25 fscsi2 50050768014025bd,f000000000000
Failed    hdisk26 fscsi2 50050768014025bd,10000000000000
Failed    hdisk27 fscsi2 50050768014025bd,11000000000000
Failed    hdisk28 fscsi2 50050768014025bd,12000000000000
Failed    hdisk29 fscsi2 50050768014025bd,13000000000000
Failed    hdisk33 fscsi2 50050768014025bd,17000000000000
Failed    hdisk34 fscsi2 50050768014025bd,18000000000000
Failed    hdisk35 fscsi2 50050768014025bd,1e000000000000
Failed    hdisk36 fscsi2 50050768014025bd,1f000000000000
Failed    hdisk37 fscsi2 50050768014025bd,20000000000000
Failed    hdisk38 fscsi2 50050768014025bd,21000000000000
Failed    hdisk39 fscsi2 50050768014025bd,22000000000000
Failed    hdisk40 fscsi2 50050768014025bd,26000000000000
Failed    hdisk41 fscsi2 50050768014025bd,27000000000000
Failed    hdisk42 fscsi2 50050768014025bd,28000000000000
Failed    hdisk43 fscsi2 50050768014025bd,29000000000000
Failed    hdisk44 fscsi2 50050768014025bd,2a000000000000
Failed    hdisk47 fscsi2 50050768014025bd,2d000000000000
Failed    hdisk48 fscsi2 50050768014025bd,2e000000000000
Failed    hdisk49 fscsi2 50050768014025bd,2f000000000000
Failed    hdisk50 fscsi2 50050768014025bd,30000000000000
Failed    hdisk51 fscsi2 50050768014025bd,31000000000000
Failed    hdisk5  fscsi2 50050768014025bd,3000000000000
Failed    hdisk45 fscsi2 50050768014025bd,2b000000000000
Failed    hdisk52 fscsi2 50050768014025bd,32000000000000
Failed    hdisk53 fscsi2 50050768014025bd,33000000000000
Failed    hdisk19 fscsi2 50050768014025bd,8000000000000
Failed    hdisk61 fscsi2 50050768014025bd,3b000000000000
Failed    hdisk64 fscsi2 50050768014025bd,3e000000000000
Failed    hdisk65 fscsi2 50050768014025bd,3f000000000000
Failed    hdisk66 fscsi2 50050768014025bd,40000000000000
Failed    hdisk67 fscsi2 50050768014025bd,41000000000000
Failed    hdisk68 fscsi2 50050768014025bd,42000000000000
Failed    hdisk70 fscsi2 50050768014025bd,44000000000000

It's easy enough to script this now:

lspath -status failed | grep Failed | while read status hdisk parent connection
do
rmpath -dev $hdisk -pdev $parent -conn $connection -rm
done

It seems smarter not to throw out the good paths along with the bad one and then have to repair the damage.
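
If you want a dry run first, the same loop with echo in front of rmpath just prints the commands instead of executing them, and running lspath -status failed again after a cfgdev shows whether any failed paths remain:

lspath -status failed | grep Failed | while read status hdisk parent connection
do
echo rmpath -dev $hdisk -pdev $parent -conn $connection -rm
done

cfgdev
lspath -status failed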

How to enable the paths of the AIX Box

Different ways to enable the paths on an adapter.

# lspath -l hdisk4
Failed  hdisk4 fscsi0
Failed  hdisk4 fscsi0
Enabled hdisk4 fscsi1
Enabled hdisk4 fscsi1

# rmpath -l hdisk4 -p fscsi0
paths Defined

#  lspath -l hdisk4
Defined hdisk4 fscsi0
Defined hdisk4 fscsi0
Enabled hdisk4 fscsi1
Enabled hdisk4 fscsi1

#  rmpath -d -l hdisk4 -p fscsi0
paths Deleted

# cfgmgr

# lspath -l hdisk4
Enabled hdisk4 fscsi0
Enabled hdisk4 fscsi0
Enabled hdisk4 fscsi1
Enabled hdisk4 fscsi1

Or you can use chpath to enable the path, as shown below

# chpath -l hdiskxx -p fscsixx -s enable
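
If a lot of paths are in the Failed state, the same read-loop idea works with chpath as well. A small sketch that assumes the default three-column lspath output (status, name, parent) and simply tries to re-enable every failed path:

lspath | grep Failed | while read status hdisk parent
do
chpath -l $hdisk -p $parent -s enable
done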

Tuesday, October 4, 2011

Datastage InfoSphere Information Server patch installation instructions on AIX Box

About patches

Patches for IBM InfoSphere Information Server Version 8.1 consist of two files.
Patch component file names and descriptions:

Patch component   Description
README.txt        Contains important information specific to the patch.
*.ispkg           Patch package installed with the Update Installer.

Update Installer

Patches are installed with the Update Installer that is part of the IBM InfoSphere Information Server installation. Updates to the Update Installer are made available separately.
Before you install a patch, follow these steps to ensure that you are using the latest version of the Update Installer:
  1. From the command line run the following command:
    • Windows: C:\IBM\InformationServer\Updates\bin\VersionInfo.bat
    • Linux, UNIX: /opt/IBM/InformationServer/Updates/bin/VersionInfo.sh
    For example, the following Update Installer version information is displayed for Version 8.1.0.160:
    IBM Information Server Update Installer Version 8.1.0.160
  2. Ensure that the version number returned by the VersionInfo script is the same version as the latest Update Installer

Using console mode installation on Linux and UNIX operating systems

Follow these steps to install a patch on Linux or UNIX operating systems:
  1. Log in as the root user.
  2. If your installation includes IBM InfoSphere DataStage, source the dsenv file. In the command prompt window, type the following command:
    . /opt/IBM/InformationServer/Server/DSEngine/dsenv
  3. If you are installing a patch on IBM AIX:
    1. Unset LDR_CNTRL after sourcing dsenv to avoid adversely impacting the amount of available memory in IBM WebSphere Application Server. Type the following command:
      unset LDR_CNTRL
    2. On the services tier, use only the JRE for WebSphere Application Server. By default, the JRE is located in the /opt/IBM/WebSphere/AppServer/java directory. Type the following command:
      /opt/IBM/WebSphere/AppServer/java/bin/java -jar updater.jar other arguments
      Note: If your topology contains a computer dedicated to the engine tier, when installing on the engine tier, use the JRE from the InfoSphere Information Server Version 8.1 Fix Pack 1 package. Alternatively, you can make a copy of the JRE located in the /opt/IBM/InformationServer/ASBNode/lib directory.
  4. Type the following command in the /opt/IBM/InformationServer/Updates/bin directory to install a patch:
    ./InstallUpdate.sh -console
    Alternatively, you can specify all the command line parameters and values to avoid interactive installation prompts. Type the following command to install a patch:
    ./InstallUpdate.sh -p patch_JR000000_type_os.ispkg -user admin -password AdminPassword -wasadmin wasadmin -waspassword WebSpherePassword -console
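
Putting the AIX-specific steps together, a typical console-mode installation looks something like the sequence below; the patch file name, user names and passwords are the same placeholder values used above:

# . /opt/IBM/InformationServer/Server/DSEngine/dsenv
# unset LDR_CNTRL
# cd /opt/IBM/InformationServer/Updates/bin
# ./InstallUpdate.sh -p patch_JR000000_type_os.ispkg -user admin -password AdminPassword -wasadmin wasadmin -waspassword WebSpherePassword -console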

Uninstalling a patch

Not all patches can be uninstalled. The README.txt file included with each patch indicates whether the patch can be uninstalled. Only the last patch installed can be uninstalled directly. To uninstall a patch that is not the last one installed, you must first uninstall, in reverse order, every patch that was installed after it. Installing a patch that cannot be uninstalled therefore prevents all previously installed patches from being uninstalled, because patches must be uninstalled in reverse order of their installation. To determine which patches have been installed, locate the Version.xml file:
  • Windows: C:\IBM\InformationServer\Version.xml
  • Linux, UNIX: /opt/IBM/InformationServer/Version.xml
In the Version.xml file, the entry for the last patch installed is at the end of the history section and is similar to the following example:
...
<History>
...
    <Sequence description="Description of the patch" id="1" installLocation="" 
lastUpdateDate="Thu Feb 14 13:13:52 EST 2008" patch="patch_JR000000" 
rollback="/opt/IBM/InformationServer/Updates/patch_JR000000" 
status="Success" version=""/>
</History>
...
You must use the Update Installer in console mode when you uninstall patches.
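
A quick way to list every patch recorded in the history section (assuming the default Linux or UNIX installation path shown above):

# grep "patch=" /opt/IBM/InformationServer/Version.xml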

Uninstalling on Linux and UNIX operating systems

Follow these steps to uninstall the latest patch installed on Linux or UNIX operating systems:
  1. Log in as the root user.
  2. If your installation includes IBM InfoSphere DataStage, source the dsenv file. In the command prompt window, type the following command:
    . /opt/IBM/InformationServer/Server/DSEngine/dsenv
  3. If you are uninstalling a patch on IBM AIX, unset LDR_CNTRL after sourcing dsenv to avoid adversely impacting the amount of available memory in IBM WebSphere Application Server. Type the following command:
    unset LDR_CNTRL
  4. Type the following command in the /opt/IBM/InformationServer/Updates/bin directory to uninstall the latest installed patch:
    ./InstallUpdate.sh -rollback patch_JR000000
 

Patch installation log files

Viewing the log files that are created during installation is useful when you are troubleshooting installation problems. During the installation and uninstallation process, the log file is simultaneously updated in the following directories:
Microsoft® Windows:
  • C:\IBM\InformationServer\logs
  • C:\IBM\InformationServer\Updates\PatchName
Linux, UNIX:
  • /opt/IBM/InformationServer/logs
  • /opt/IBM/InformationServer/Updates/PatchName
After a successful installation, the log file is named ISInstall.YYYY.MM.DD.HH.MM.SS.log where YYYY.MM.DD.HH.MM.SS is the date and time that the installation was started.
After an unsuccessful installation, the ISInstall.YYYY.MM.DD.HH.MM.SS.log file is included in the InformationServer/logs directory, and all logs that are used for troubleshooting are included in the InformationServer/isdump-operating_system-YYYY.MM.DD.HH.MM.SS.zip file.
Before installing a patch, the application stack should be stopped, and started again afterwards,
using scripts like the ones shown below.
---> stopDatastage_all.sh

#!/bin/sh
#
echo "Setting environment..."
. /opt/IBM/InformationServer/Server/DSEngine/dsenv

echo "Stopping DataStage agents..."
/opt/IBM/InformationServer/ASBNode/bin/NodeAgents.sh stop
sleep 10
#
echo "Stopping DataStage Engine"
/opt/IBM/InformationServer/Server/DSEngine/bin/uv -admin -stop
sleep 10
#
echo "Stopping WAS..."
/opt/IBM/InformationServer/ASBServer/bin/MetadataServer.sh stop

# DataStage listener
#su - dsadm "-c /opt/IBM/InformationServer/Server/DSSAPbin/dsidocd.rc stop"
sleep 10
#
echo "Stopping DB2"
# DB2
su - db2inst1 -c "db2 force application all"
sleep 2
su - db2inst1 -c "db2 force application all"
su - db2inst1 -c db2stop
sleep 5
su - db2inst1 -c "db2stop force"
 
---> startDatastage_all.sh

#!/bin/sh
# Start DB2
su - db2inst1 -c db2start
sleep 5

# Start WAS
/opt/IBM/InformationServer/ASBServer/bin/MetadataServer.sh run
sleep 15

# Start the DataStage engine
/opt/IBM/InformationServer/Server/DSEngine/bin/uv -admin -start
sleep 10

# Start the ASB agent
/opt/IBM/InformationServer/ASBNode/bin/NodeAgents.sh start

# DataStage listener
#su - dsadm "-c /opt/IBM/InformationServer/Server/DSSAPbin/dsidocd.rc start"
 

Thursday, September 22, 2011

Processors History with the Capacity...!!!

Here's a historical view of the Power chip family, moving from the Power1 implemented in IBM's 1 micrometer chip baking processes up to the Power7 implemented in 45 nanometer processes:

This is the latest-greatest public roadmap for the Power chips, which as you can see is a little short on details for Power8 and which doesn't even mention the plus versions of the chips:

The following AIX roadmap is interesting in that it shows IBM's Unix variant being tweaked to exploit the Power7+ chips sometime in the second half of 2011.

To see where IBM might be taking the Power7+ and Power8 chips, it makes sense to look at how the chips and memory components have evolved over time. Here's how the latest several generations of Power chips have stacked up:



POWER7 :
POWER7 was released in February 2010 and was a substantial evolution from the POWER6 design, focusing more on power efficiency through multiple cores and simultaneous multithreading.
While the POWER6 features a dual-core processor, each capable of two-way simultaneous multithreading (SMT), the IBM POWER7 processor has eight cores, and four threads per core, for a total capacity of 32 simultaneous threads. Its power consumption is similar to the preceding POWER6, while quadrupling the number of cores, with each core having higher performance.
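
On a running AIX partition you can see this directly: smtctl reports whether SMT is enabled and how many hardware threads each core is running, lsdev lists the processors presented to the partition, and bindprocessor -q lists the resulting logical CPUs. For example:

# smtctl
# lsdev -Cc processor
# bindprocessor -q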

POWER7
========
Produced                         2010
Designed by                    IBM
Max. CPU clock rate      2.4 GHz  to 4.25 GHz
Min. feature size             45 nm
Instruction set                 Power Architecture
Microarchitecture           Power ISA v.2.06
Cores                            4, 6, 8
L1 cache                       32+32 KB/core
L2 cache                       256 KB/core
L3 cache                       32 MB

As of July 2011, the range of POWER7 systems includes "Express" models (710, 720, 730, 740 and 750), Enterprise models (770, 780 and 795) and High Performance computing models (755 and 775). Enterprise models differ in having Capacity on Demand capabilities. Maximum specifications are shown in the table below.

IBM POWER7 servers

Name                  Number of chips  Number of cores  CPU clock frequency
710 Express           1                6                3.7 GHz
710 Express           1                8                3.55 GHz
720 Express           1                8                3.0 GHz
730 Express           2                12               3.7 GHz
730 Express           2                16               3.55 GHz
740 Express           2                12               3.7 GHz
740 Express           2                16               3.55 GHz
750 Express           4                24               3.72 GHz
750 Express           4                32               3.22 GHz or 3.61 GHz
755                   4                32               3.61 GHz
770                   8                48               3.5 GHz
770                   8                64               3.1 GHz
775 (Per Node)        32               256              3.83 GHz
780 (MaxCore mode)    8                64               3.86 GHz
780 (TurboCore mode)  8                32               4.14 GHz
795                   32               192              3.7 GHz
795 (MaxCore mode)    32               256              4.0 GHz
795 (TurboCore mode)  32               128              4.25 GHz



POWER6 :
POWER6 was announced on May 21, 2007. It adds VMX to the POWER series. It also introduces the second generation of IBM ViVA, ViVA-2. It is a dual-core design, reaching 5.0 GHz at 65 nm. It has very advanced interchip communication technology. Its power consumption is nearly the same as the preceding POWER5, whilst offering doubled performance.

As of 2008, the range of POWER6 systems includes "Express" models (the 520, 550 and 560) and Enterprise models (the 570 and 595). The various system models are designed to serve any sized business. For example, the 520 Express is marketed to small businesses while the Power 595 is marketed for large, multi-environment data centers. The main difference between the Express and Enterprise models is that the latter include Capacity Upgrade on Demand (CUoD) capabilities and hot-pluggable processor and memory "books". All Power systems are noted for their excellent scalability and storage capabilities.


IBM POWER6 servers

Name         Number of sockets  Number of cores  CPU clock frequency
520 Express  2                  4                4.2 GHz or 4.7 GHz
550 Express  4                  8                4.2 GHz or 5.0 GHz
560 Express  8                  16               3.6 GHz
570          8                  16               4.4 GHz or 5.0 GHz
570          16                 32               4.2 GHz
575          16                 32               4.7 GHz
595          32                 64               4.2 GHz or 5.0 GHz


POWER6
========
Produced               2007
Designed by          IBM
Min. feature size    65 nm
Instruction set        Power Architecture
Microarchitecture  Power ISA v.2.05
Cores                   2
L1 cache              64+64 KB/core
L2 cache              4 MB/core
L3 cache              32 MB/chip (off-chip)


POWER5
POWER5 MCM with four processors and four 36 MB external L3 cache modules.
IBM introduced the POWER5 processor in 2004. It is a dual-core processor with support for simultaneous multithreading with two threads, so it implements 4 logical processors. Using the Virtual Vector Architecture, several POWER5 processors can act together as a single vector processor. The POWER5 added more instructions to the ISA.
The POWER5+ added even more instructions, bringing the ISA to version 2.02.
Key enhancements introduced into the POWER5 processor and system design points include:
  • Designed for entry and high-end servers
  • Simultaneous multi-threading
  • Dynamic resource balancing to efficiently allocate system resources to each thread
  • Software-controlled thread prioritization
  • Dynamic power management to reduce power consumption without affecting performance
  • Micro-Partitioning technology (hardware support for Shared Processor Partitions)
  • Virtual storage, virtual Ethernet
  • Enhanced scalability, parallelism
  • Enhanced memory subsystem
  • Improved performance
  • Compatibility with existing POWER4 systems
  • Enhanced reliability, availability, serviceability

                              POWER4 design                   POWER5 design
L1 data cache                 2-way set associative, FIFO     4-way set associative, LRU
L2 cache                      8-way set associative, 1.44 MB  10-way set associative, 1.9 MB
L3 cache                      32 MB (118 clock cycles)        36 MB (~80 clock cycles)
Memory bandwidth              4 GB/second per chip            ~16 GB/second per chip
Simultaneous multi-threading  No                              Yes
Processor addressing          1 processor                     1/10th of processor
Dynamic power management      No                              Yes
Size                          412 mm²                         389 mm²