AIX (Advanced Interactive eXecutive) is a series of proprietary Unix operating systems developed and sold by IBM.
Performance Optimization With Enhanced RISC (POWER) version 7 processors give the AIX operating system a unique performance advantage.
POWER7 introduces new capabilities that use multiple cores and multiple CPU threads to create a pool of virtual CPUs.
AIX 7 includes a new built-in clustering capability called Cluster Aware AIX.
AIX POWER7 systems include the Active Memory Expansion feature.

Sunday, December 30, 2012

AIX 5.3


The EOM (end of marketing) date has been announced for AIX 5.3: 04/11, meaning that AIX 5.3 will no longer be marketed by IBM from April 2011 onwards, and that it is now time for customers to start thinking about upgrading to AIX 6.1. The EOS (end of service) date for AIX 5.3 is 04/12, meaning AIX 5.3 will be serviced by IBM until April 2012; after that, IBM will only service AIX 5.3 for an additional fee. The EOL (end of life) date is 04/16, which is April 2016. The final technology level for AIX 5.3 is Technology Level 12, although some additional service packs for TL12 will still be released.

IBM has also announced EOM and EOS dates for HACMP 5.4 and PowerHA 5.5, so if you're using any of these versions, you also need to upgrade to PowerHA 6.1:

Sep 30, 2010: EOM HACMP 5.4, PowerHA 5.5
Sep 30, 2011: EOS HACMP 5.4
Sep 30, 2012: EOS HACMP 5.5
TOPICS: AIX, EMC, INSTALLATION, POWERHA / HACMP, STORAGE AREA NETWORK, SYSTEM ADMINISTRATION
Quick setup guide for HACMP
Use this procedure to quickly configure an HACMP cluster consisting of two nodes, with disk heartbeating.

Prerequisites:

Make sure you have the following in place:

Have the IP addresses and host names of both nodes, as well as those for a service IP label, and add these to the /etc/hosts file on both nodes of the new HACMP cluster (see the example below this list).
Make sure you have the HACMP software installed on both nodes. Just install all the filesets of the HACMP CD-ROM, and you should be good.
Make sure you have this entry in /etc/inittab (as one of the last entries):
clinit:a:wait:/bin/touch /usr/es/sbin/cluster/.telinit
In case you're using EMC SAN storage, make sure you configure your disks correctly as hdiskpower devices. Or, if you're using a mksysb image, you may want to follow the EMC ODM cleanup procedure.
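For example, with made-up host names and addresses, the /etc/hosts file on both nodes could contain entries like the ones below, and you can verify that the HACMP filesets are present with lslpp (the addresses, host names and service IP label here are placeholders; use the ones for your own environment):
10.251.14.50   node01
10.251.14.51   node02
10.251.20.10   serviceip
# lslpp -l "cluster.*"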
Steps:
Create the cluster and its nodes:
# smitty hacmp
Initialization and Standard Configuration
Configure an HACMP Cluster and Nodes
Enter a cluster name and select the nodes you're going to use. It is vital here to have the hostnames and IP addresses correctly entered in the /etc/hosts file on both nodes.
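If you want to double-check at this point, you can verify that both host names resolve consistently and review the topology HACMP has recorded. The node names below are just the ones used in this example:
# host node01
# host node02
# /usr/es/sbin/cluster/utilities/cltopinfo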
Create an IP service label:
# smitty hacmp
Initialization and Standard Configuration
Configure Resources to Make Highly Available
Configure Service IP Labels/Addresses
Add a Service IP Label/Address
Enter an IP Label/Address (press F4 to select one), and enter a Network name (again, press F4 to select one).
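To review the IP labels and networks that are now known to the cluster topology, you can use the cllsif utility (the output depends on your configuration):
# /usr/es/sbin/cluster/utilities/cllsif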
Set up a resource group:
# smitty hacmp
Initialization and Standard Configuration
Configure HACMP Resource Groups
Add a Resource Group
Enter the name of the resource group. It's a good habit to have resource group names end with "rg", so you can recognize them as resource groups. Also, select the participating nodes. For the "Fallback Policy", it is a good idea to change it to "Never Fallback". This way, when the primary node in the cluster comes back up while the resource group is up-and-running on the secondary node, the resource group won't automatically fall back from the secondary to the primary node.

Note: The order of the nodes is determined by the order you select the nodes here. If you put in "node01 node02" here, then "node01" is the primary node. If you want to have this any other way, now is a good time to correctly enter the order of node priority.
Add the Service IP Label to the resource group:
# smitty hacmp
Initialization and Standard Configuration
Configure HACMP Resource Groups
Change/Show Resources for a Resource Group (standard)
Select the resource group you've created earlier, and add the Service IP/Label.
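If you prefer to verify this from the command line, clshowres lists the resources configured for each resource group, so you can confirm the service IP label shows up under your resource group:
# /usr/es/sbin/cluster/utilities/clshowres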
Run a verification/synchronization:
# smitty hacmp
Extended Configuration
Extended Verification and Synchronization
Just hit [ENTER] here. Resolve any issues that may come up from this synchronization attempt, and repeat the process until the verification/synchronization returns "Ok". It's a good idea here to set "Automatically correct errors" to Yes.
Start the HACMP cluster:
# smitty hacmp
System Management (C-SPOC)
Manage HACMP Services
Start Cluster Services
Select both nodes to start. Make sure to also start the Cluster Information Daemon.
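To confirm that the cluster subsystems have actually started on a node, you can check the cluster group with lssrc; clstrmgrES (and clinfoES, if you started the Cluster Information Daemon) should be listed as active, and on recent HACMP levels the cluster manager should report a stable state:
# lssrc -g cluster
# lssrc -ls clstrmgrES | grep state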
Check the status of the cluster:
# clstat -o
# cldump
Wait until the cluster is stable and both nodes are up.
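Besides clstat and cldump, clRGinfo gives a quick overview of where each resource group is online. The output below is only an illustration using the example names from this article:
# /usr/es/sbin/cluster/utilities/clRGinfo
------------------------------------------------------
Group Name     Group State     Node
------------------------------------------------------
examplerg      ONLINE          node01
               OFFLINE         node02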
Basically, the cluster is now up and running. However, during the Verification & Synchronization step it will complain about not having a non-IP network. The next part describes how to set up a disk heartbeat network, which allows the nodes of the HACMP cluster to exchange heartbeat packets over a SAN disk. We're assuming here that you're using EMC storage. The process on other types of SAN storage is more or less similar, apart from some naming differences: for example, SAN disks are called "hdiskpower" devices on EMC storage, and "vpath" devices on IBM SAN storage.

First, look at the available SAN disk devices on your nodes and select a small disk that won't be used to store any data, but only for the purpose of disk heartbeating. It is a good habit to request your SAN storage admin to zone a small LUN to both nodes of the HACMP cluster specifically as a disk heartbeating device. Make a note of the PVID of this disk device; for example, if you choose to use device hdiskpower4:
# lspv | grep hdiskpower4
hdiskpower4   000a807f6b9cc8e5    None
So, we're going to set up the disk heartbeat network on device hdiskpower4, with PVID 000a807f6b9cc8e5:
Create a concurrent volume group:
# smitty hacmp
System Management (C-SPOC)
HACMP Concurrent Logical Volume Management
Concurrent Volume Groups
Create a Concurrent Volume Group
Select both nodes to create the concurrent volume group on by pressing F7 for each node. Then select the correct PVID. Give the new volume group a name, for example "hbvg".
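To confirm the concurrent volume group exists on both nodes, check the PVID-to-volume-group mapping with lspv on each node; continuing the example from above, the disk should now show up as part of "hbvg":
# lspv | grep hdiskpower4
hdiskpower4   000a807f6b9cc8e5    hbvg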
Set up the disk heartbeat network:
# smitty hacmp
Extended Configuration
Extended Topology Configuration
Configure HACMP Networks
Add a Network to the HACMP Cluster
Select "diskhb" and accept the default Network Name.
Run a discovery:
# smitty hacmp
Extended Configuration
Discover HACMP-related Information from Configured Nodes
Add the disk device:
# smitty hacmp
Extended Configuration
Extended Topology Configuration
Configure HACMP Communication Interfaces/Devices
Add Communication Interfaces/Devices
Add Discovered Communication Interface and Devices
Communication Devices
Select the same disk device on each node by pressing F7.
Run a Verification & Synchronization again, as described above. Then check with clstat and/or cldump again to see if the disk heartbeat network comes online.
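If you want to test the disk heartbeat path itself, the dhb_read utility can be used: run it in receive mode on one node and in transmit mode on the other. This is only a sketch, using the example device from above; the exact path of dhb_read and the device name may differ on your system:
On node01:
# /usr/sbin/rsct/bin/dhb_read -p hdiskpower4 -r
On node02:
# /usr/sbin/rsct/bin/dhb_read -p hdiskpower4 -t
If the disk heartbeat path works, both sides should report that the link is operating normally.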
TOPICS: AIX, POWERHA / HACMP, SYSTEM ADMINISTRATION
NFS mounts on HACMP failing
When you want to mount an NFS file system on a node of an HACMP cluster, there are a couple of items you need to check before it will work:

Make sure the hostname and IP address of the HACMP node resolve correctly (forward and reverse), by running:
# nslookup [hostname]
# nslookup [ip-address]
The next thing you will want to check, on the NFS server, is whether the node names of your HACMP cluster nodes are correctly added to the /etc/exports file. If they are, run:
# exportfs -va
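For example, an /etc/exports entry on the NFS server that allows both cluster nodes to mount a file system could look like the line below; the path and host names are placeholders:
/export/data -access=node01:node02,root=node01:node02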
The last, and trickiest, item you will want to check is whether a service IP label is defined as an IP alias on the same adapter as your node's hostname, e.g.:
# netstat -nr
Routing tables
Destination   Gateway       Flags  Refs  Use    If  Exp  Groups

Route Tree for Protocol Family 2 (Internet):
default       10.251.14.1   UG      4    180100 en1  -     -
10.251.14.0   10.251.14.50  UHSb    0         0 en1  -     -
10.251.14.50  127.0.0.1     UGHS    3    791253 lo0  -     -
The example above shows you that the default gateway is defined on the en1 interface. The next command shows you where your Service IP label lives:
# netstat -i
Name  Mtu   Network   Address         Ipkts   Ierrs Opkts
en1   1500  link#2    0.2.55.d3.75.77 2587851 0      940024
en1   1500  10.251.14 node01          2587851 0      940024
en1   1500  10.251.20 serviceip       2587851 0      940024
lo0   16896 link#1                    1912870 0     1914185
lo0   16896 127       loopback        1912870 0     1914185
lo0   16896 ::1                       1912870 0     1914185
As you can see, the Service IP label (called "serviceip" in the example above) is defined on en1. In that case, for NFS to work, you also need to add "serviceip" to the /etc/exports file on the NFS server and re-run "exportfs -va". You should also make sure that the hostname "serviceip" resolves to an IP address correctly (and, of course, that the IP address resolves back to the correct hostname) on both the NFS server and the client.
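To put this together, on the NFS server you would extend the export to also list the service IP label, re-export, and then verify name resolution on both the NFS server and the client. The path and names below are only examples:
/export/data -access=node01:node02:serviceip,root=node01:node02:serviceip
# exportfs -va
# nslookup serviceip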