AIX (Advanced Interactive eXecutive) is a series of proprietary Unix operating systems developed and sold by IBM.
Performance Optimization With Enhanced RISC (POWER) version 7 gives the AIX OS a unique performance advantage.
POWER7 features new capabilities, using multiple cores and multiple CPU threads to create a pool of virtual CPUs.
AIX 7 includes a new built-in clustering capability called Cluster Aware AIX.
AIX POWER7 systems include the Active Memory Expansion feature.

Sunday, December 30, 2012

Paging space best practices


Here are a few rules that your paging spaces should adhere to for best performance:

The size of paging space should match the size of the memory.
Use more than one paging space, each on a different disk.
All paging spaces should have the same size.
All paging spaces should be mirrored.
Paging spaces should not be put on "hot" disks.
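To check your current paging space configuration against these rules, and to add an extra paging space on another disk, you can use commands along these lines (a sketch; the disk name hdisk1 and the size of 32 logical partitions are examples):
# lsps -a
# mkps -s 32 -n -a rootvg hdisk1
Here, -s 32 is the size in logical partitions, -n activates the new paging space immediately, and -a activates it automatically at every restart.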

How to restore an image.data file from tape


Restoring from tape:

First change the block size of the tape device to 512:

# chdev -l rmt0 -a block_size=512
Check to make sure the block size of the tape drive has been changed:
# lsattr -El rmt0
You will receive output similar to this:
attribute     value description                          user_settable

block_size    512   BLOCK size (0=variable length)       True
compress      yes   Use data COMPRESSION                 True
density_set_1 71    DENSITY setting #1                   True
density_set_2 38    DENSITY setting #2                   True
extfm         yes   Use EXTENDED file marks              True
mode          yes   Use DEVICE BUFFERS during writes     True
ret           no    RETENSION on tape change or reset    True
ret_error     no    RETURN error on tape change or reset True
size_in_mb    36000 Size in Megabytes                    False
Change to the /tmp directory (or a directory where you would like to store the /image.data file from the mksysb image) and restore the /image.data file from the tape:
# cd /tmp
# restore -s2 -xqvf /dev/rmt0.1 ./image.data

How to restore an image.data file from an existing mksysb file


Change to the /tmp directory (or a directory where you would like to store the /image.data file from the mksysb image) and restore the /image.data file from the mksysb:

# cd /tmp
# restore -xqvf [/location/of/mksysb/file] ./image.data
If you want to list the files in a mksysb image first, you can run the following command:
# restore -Tqvf [/location/of/mksysb/file]

How to edit an image.data file to break a mirror


Create a new image.data file by running the following command:

# cd /
# mkszfile
Edit the image.data file to break the mirror, by running the following command:
# vi /image.data
What you are looking for are the "lv_data" stanzas. There will be one for every logical volume associated with rootvg.

The following is an example of an lv_data stanza from an image.data file of a mirrored rootvg. The lines that need changing are LV_SOURCE_DISK_LIST, COPIES, and PP:
lv_data:
VOLUME_GROUP= rootvg
LV_SOURCE_DISK_LIST= hdisk0 hdisk1
LV_IDENTIFIER= 00cead4a00004c0000000117b1e92c90.2
LOGICAL_VOLUME= hd6
VG_STAT= active/complete
TYPE= paging
MAX_LPS= 512
COPIES= 2
LPs= 124
STALE_PPs= 0
INTER_POLICY= minimum
INTRA_POLICY= middle
MOUNT_POINT=
MIRROR_WRITE_CONSISTENCY= off
LV_SEPARATE_PV= yes
PERMISSION= read/write
LV_STATE= opened/syncd
WRITE_VERIFY= off
PP_SIZE= 128
SCHED_POLICY= parallel
PP= 248
BB_POLICY= non-relocatable
RELOCATABLE= yes
UPPER_BOUND= 32
LABEL=
MAPFILE= /tmp/vgdata/rootvg/hd6.map
LV_MIN_LPS= 124
STRIPE_WIDTH=
STRIPE_SIZE=
SERIALIZE_IO= no
FS_TAG=
DEV_SUBTYP=
Note: There are two disks in the 'LV_SOURCE_DISK_LIST', the 'COPIES' value reflects two copies, and the 'PP' value is double that of the 'LPs' value.

The following is an example of the same lv_data stanza after manually breaking the mirror. The lines that have been changed are LV_SOURCE_DISK_LIST, COPIES, and PP. Edit each 'lv_data' stanza in the image.data file as shown below to break the mirrors.
lv_data:
VOLUME_GROUP= rootvg
LV_SOURCE_DISK_LIST= hdisk0
LV_IDENTIFIER= 00cead4a00004c0000000117b1e92c90.2
LOGICAL_VOLUME= hd6
VG_STAT= active/complete
TYPE= paging
MAX_LPS= 512
COPIES= 1
LPs= 124
STALE_PPs= 0
INTER_POLICY= minimum
INTRA_POLICY= middle
MOUNT_POINT=
MIRROR_WRITE_CONSISTENCY= off
LV_SEPARATE_PV= yes
PERMISSION= read/write
LV_STATE= opened/syncd
WRITE_VERIFY= off
PP_SIZE= 128
SCHED_POLICY= parallel
PP= 124
BB_POLICY= non-relocatable
RELOCATABLE= yes
UPPER_BOUND= 32
LABEL=
MAPFILE= /tmp/vgdata/rootvg/hd6.map
LV_MIN_LPS= 124
STRIPE_WIDTH=
STRIPE_SIZE=
SERIALIZE_IO= no
FS_TAG=
DEV_SUBTYP=
Note: The 'LV_SOURCE_DISK_LIST' has been reduced to one disk, the 'COPIES' value has been changed to reflect one copy, and the 'PP' value has been changed so that it is equal to the 'LPs' value.

Save the edited image.data file. You can now use your newly edited image.data file to create a new mksysb to file, tape, or DVD.

E.g.: To file or tape: place the edited image.data file in the / (root) directory and rerun your mksysb command without using the "-i" flag. If running the backup through SMIT, make sure you set the option "Generate new /image.data file?" to 'no' (By default it is set to 'yes').
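For example, a mksysb to a file, picking up the already present /image.data (so without the -i flag), might look like this; the target path is just an example:
# mksysb -X /backup/server1.mksysb
The -X flag automatically expands /tmp during the backup if needed.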

To DVD: Use the -i flag and specify the [/location] of the edited image.data file. If running through SMIT specify the edited image.data file location in the "User supplied image.data file" field.

Within NIM, you would create an 'image_data' resource to restore a mksysb without preserving mirrors, as described in the next section.

Note: If you don't want to edit the image.data file manually, here's a script that updates the COPIES and PP values to a single copy for you, assuming your image.data file is called /image.data (the LV_SOURCE_DISK_LIST lines are left untouched, so you may still want to adjust those by hand):
# Initialize the flag that marks we're inside a mirrored lv_data stanza
COPIESFLAG=0
cat /image.data | while read LINE ; do
  if [ "${LINE}" = "COPIES= 2" ] ; then
    COPIESFLAG=1
    echo "COPIES= 1"
  else
    if [ ${COPIESFLAG} -eq 1 ] ; then
      PP=`echo ${LINE} | awk '{print $1}'`
      if [ "${PP}" = "PP=" ] ; then
        PPNUM=`echo ${LINE} | awk '{print $2}'`
        ((PPNUMNEW=$PPNUM/2))
        echo "PP= ${PPNUMNEW}"
        COPIESFLAG=0
      else
        echo "${LINE}"
      fi
    else
      echo "${LINE}"
    fi
  fi
done > /image.data.1disk

Creating an image_data resource without preserving mirrors for use with NIM


Transfer the /image.data file to the NIM master and store it in the location you desire. It is a good idea to place the file, or any NIM resource for that matter, in a descriptively named location, for example: /export/nim/image_data. This ensures you can easily identify your "image_data" NIM resource file locations, should you ever need multiple "image_data" resources.

Make sure your image.data filenames are descriptive as well. A common way to name the file would be in relation to your client name, for example: server1_image_data.

Run the nim command, or use smitty and the fast path 'nim_mkres' to define the file that you have edited using the steps above:

From command line on the NIM master:

# nim -o define -t image_data -a server=master -a location=/export/nim/image_data/server1_image_data -a comments="image.data file with broken mirror for server1" server1_image_data
NOTE: "server1_image_data" is the name given to the 'image_data' resource.

Using smit on the NIM master:
# smit nim_mkres
Select 'image_data' as the Resource Type. Then complete the following screen:
                       Define a Resource

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                               [Entry Fields]
* Resource Name            [server1_image_data]
* Resource Type             image_data
* Server of Resource       [master]
* Location of Resource     [/export/nim/image_data/server1_image_data]
  Comments                 []

  Source for Replication   []
Run the following command to make sure the 'image_data' resource was created:
# lsnim -t image_data
The command will give output similar to the following:
# lsnim -t image_data
server1_image_data     resources       image_data
Run the following command to get information about the 'image_data' resource:
# lsnim -l server1_image_data
server1_image_data:
   class       = resources
   type        = image_data
   Rstate      = ready for use
   prev_state  = unavailable for use
   location    = /export/nim/image_data/server1_image_data
   alloc_count = 0
   server      = master

Using the image_data resource to restore a mksysb without preserving mirrors using NIM


Specify the 'image_data' resource when running the 'bos_inst' operation from the NIM master:

From command line on the NIM master:

# nim -o bos_inst -a source=mksysb -a lpp_source=[lpp_source] -a spot=[SPOT] -a mksysb=[mksysb] -a image_data=server1_image_data -a accept_licenses=yes server1
Using smit on the NIM master:
# smit nim_bosinst
Select the client to install. Select 'mksysb' as the type of install. Select a SPOT at the same level as the mksysb you are installing. Select an lpp_source at the same level as the mksysb you are installing.

NOTE: It is recommended to use an lpp_source at the same AIX Technology Level, but if using an lpp_source at a higher level than the mksysb, the system will be updated to the level of the lpp_source during installation. This will only update Technology Levels.

This will not migrate AIX to a higher version: if you're using an AIX 5300-08 mksysb, you cannot use an AIX 6.1 lpp_source. However, if you use an AIX 5300-08 mksysb and allocate a 5300-09 lpp_source, your target system will be updated to 5300-09 during the installation.
         Install the Base Operating System on Standalone Clients

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[TOP]                                         [Entry Fields]
* Installation Target                       server1
* Installation TYPE                         mksysb
* SPOT                                      SPOTaix53tl09sp3
  LPP_SOURCE                               [LPPaix53tl09sp3]
  MKSYSB                                    server1_mksysb

  BOSINST_DATA to use during installation  []
  IMAGE_DATA to use during installation    [server1_image_data]

How to unconfigure items after mksysb recovery using NIM


At some point you may want to test a mksysb recovery on a different host. The major issue with this is that you bring up a server within the same network that is a copy of an actual server already active in your network. To avoid ending up with two identical servers in your network, here's how you do this:

First make sure that you have a separate IP address available for the server to be recovered, for configuration on your test server. You definitely don't want to bring up a second server in your network with the same IP configuration.

Make sure you have a mksysb created of the server that you wish to recover onto another server. Then, create a simple script that disables all the items that you don't want to have running after the mksysb recovery, for example:

# cat /export/nim/cust_scripts/custom.ksh
#!/bin/ksh

# Save a copy of /etc/inittab
cp /etc/inittab /etc/inittab.org

# Remove unwanted entries from the inittab
rmitab hacmp 2>/dev/null
rmitab tsmsched 2>/dev/null
rmitab tsm 2>/dev/null
rmitab clinit 2>/dev/null
rmitab pst_clinit 2>/dev/null
rmitab qdaemon 2>/dev/null
rmitab sddsrv 2>/dev/null
rmitab nimclient 2>/dev/null
rmitab nimsh 2>/dev/null
rmitab naviagent 2>/dev/null

# Get rid of the crontabs
mkdir -p /var/spool/cron/crontabs.org
mv /var/spool/cron/crontabs/* /var/spool/cron/crontabs.org/

# Disable start scripts
chmod 000 /etc/rc.d/rc2.d/S01app

# copy inetd.conf
cp /etc/inetd.conf /etc/inetd.conf.org
# take out unwanted items
cat /etc/inetd.conf.org | grep -v bgssd > /etc/inetd.conf

# remove the hacmp cluster configuration
if [ -x /usr/es/sbin/cluster/utilities/clrmclstr ] ; then
        /usr/es/sbin/cluster/utilities/clrmclstr
fi

# clear the error report
errclear 0

# clean out mail queue
rm /var/spool/mqueue/*
The next thing you need to do is configure this script as a 'script' resource in NIM. Run:
# smitty nim_mkres
Select 'script' and complete the form afterwards. For example, if you called it 'UnConfig_Script':
# lsnim -l UnConfig_Script
UnConfig_Script:
   class       = resources
   type        = script
   comments    =
   Rstate      = ready for use
   prev_state  = unavailable for use
   location    = /export/nim/cust_scripts/custom.ksh
   alloc_count = 0
   server      = master
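If you prefer the command line over SMIT, the same script resource can be defined with a one-liner, assuming the location shown above:
# nim -o define -t script -a server=master -a location=/export/nim/cust_scripts/custom.ksh UnConfig_Script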
Then, when you are ready to perform the actual mksysb recovery using "smitty nim_bosinst", you can add this script resource on the following line:
Customization SCRIPT to run after installation [UnConfig_Script]

Nmon analyser - A free tool to produce performance reports



Searching for an easy way to create high-quality graphs that you can print, publish to the Web, or cut and paste into performance reports? Look no further. The nmon_analyser tool takes files produced by the NMON performance tool, turns them into Microsoft Excel spreadsheets, and automatically produces these graphs.

You can download the tool here:
http://www.ibm.com/developerworks/aix/library/au-nmon_analyser/

MD5 for AIX


If you need to run an MD5 checksum on a file on AIX, you will notice that there's no md5 or md5sum command available on AIX. Instead, use the following command:

# csum -h MD5 [filename]
Note: csum can't handle files larger than 2 GB.
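Should you need a different algorithm, csum can also produce SHA-1 checksums:
# csum -h SHA1 [filename]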

AIX 5.3


The EOM (end of marketing) date has been announced for AIX 5.3: 04/11, meaning that AIX 5.3 will no longer be marketed by IBM from April 2011, and that it is now time for customers to start thinking about upgrading to AIX 6.1. The EOS (end of service) date for AIX 5.3 is 04/12, meaning AIX 5.3 will be serviced by IBM until April 2012; after that, IBM will only service AIX 5.3 for an additional fee. The EOL (end of life) date is 04/16, April 2016. The final technology level for AIX 5.3 is technology level 12, although some service packs for TL12 will still be released.

IBM has also announced EOM and EOS dates for HACMP 5.4 and PowerHA 5.5, so if you're using any of these versions, you also need to upgrade to PowerHA 6.1:

Sep 30, 2010: EOM HACMP 5.4, PowerHA 5.5
Sep 30, 2011: EOS HACMP 5.4
Sep 30, 2012: EOS HACMP 5.5
TOPICS: AIX, EMC, INSTALLATION, POWERHA / HACMP, STORAGE AREA NETWORK, SYSTEM ADMINISTRATION
Quick setup guide for HACMP
Use this procedure to quickly configure an HACMP cluster, consisting of 2 nodes and disk heartbeating.

Prerequisites:

Make sure you have the following in place:

Have the IP addresses and host names of both nodes, and of a service IP label, ready. Add these to the /etc/hosts files on both nodes of the new HACMP cluster.
Make sure you have the HACMP software installed on both nodes. Just install all the filesets of the HACMP CD-ROM, and you should be good.
Make sure you have this entry in /etc/inittab (as one of the last entries):
clinit:a:wait:/bin/touch /usr/es/sbin/cluster/.telinit
In case you're using EMC SAN storage, make sure you configure your disks correctly as hdiskpower devices. Or, if you're using a mksysb image, you may want to follow the EMC ODM cleanup procedure.
Steps:
Create the cluster and its nodes:
# smitty hacmp
Initialization and Standard Configuration
Configure an HACMP Cluster and Nodes
Enter a cluster name and select the nodes you're going to use. It is vital here to have the hostnames and IP addresses correctly entered in the /etc/hosts file of both nodes.
Create an IP service label:
# smitty hacmp
Initialization and Standard Configuration
Configure Resources to Make Highly Available
Configure Service IP Labels/Addresses
Add a Service IP Label/Address
Enter an IP Label/Address (press F4 to select one), and enter a Network name (again, press F4 to select one).
Set up a resource group:
# smitty hacmp
Initialization and Standard Configuration
Configure HACMP Resource Groups
Add a Resource Group
Enter the name of the resource group. It's a good habit to make sure that a resource group name ends with "rg", so you can recognize it as a resource group. Also, select the participating nodes. For the "Fallback Policy", it is a good idea to change it to "Never Fallback". This way, when the primary node in the cluster comes up, and the resource group is up-and-running on the secondary node, you won't see a failover occur from the secondary to the primary node.

Note: The order of the nodes is determined by the order you select the nodes here. If you put in "node01 node02" here, then "node01" is the primary node. If you want to have this any other way, now is a good time to correctly enter the order of node priority.
Add the Service IP/Label to the resource group:
# smitty hacmp
Initialization and Standard Configuration
Configure HACMP Resource Groups
Change/Show Resources for a Resource Group (standard)
Select the resource group you've created earlier, and add the Service IP/Label.
Run a verification/synchronization:
# smitty hacmp
Extended Configuration
Extended Verification and Synchronization
Just hit [ENTER] here. Resolve any issues that may come up from this synchronization attempt. Repeat this process until the verification/synchronization process returns "Ok". It's a good idea here to select "Automatically correct errors".
Start the HACMP cluster:
# smitty hacmp
System Management (C-SPOC)
Manage HACMP Services
Start Cluster Services
Select both nodes to start. Make sure to also start the Cluster Information Daemon.
Check the status of the cluster:
# clstat -o
# cldump
Wait until the cluster is stable and both nodes are up.
Basically, the cluster is now up-and-running. However, during the Verification & Synchronization step, it will complain about not having a non-IP network. The next part describes setting up a disk heartbeat network that will allow the nodes of the HACMP cluster to exchange disk heartbeat packets over a SAN disk. We're assuming here that you're using EMC storage. The process on other types of SAN storage is more or less similar, except for some naming differences; e.g., SAN disks are called "hdiskpower" devices on EMC storage and "vpath" devices on IBM SAN storage.

First, look at the available SAN disk devices on your nodes, and select a small disk that won't be used to store any data, but only for the purpose of disk heartbeating. It is a good habit to request your SAN storage admin to zone a small LUN as a disk heartbeating device to both nodes of the HACMP cluster. Make a note of the PVID of this disk device; for example, if you choose to use device hdiskpower4:
# lspv | grep hdiskpower4
hdiskpower4   000a807f6b9cc8e5    None
So, we're going to set up the disk heartbeat network on device hdiskpower4, with PVID 000a807f6b9cc8e5:
Create a concurrent volume group:
# smitty hacmp
System Management (C-SPOC)
HACMP Concurrent Logical Volume Management
Concurrent Volume Groups
Create a Concurrent Volume Group
Select both nodes to create the concurrent volume group on by pressing F7 for each node. Then select the correct PVID. Give the new volume group a name, for example "hbvg".
Set up the disk heartbeat network:
# smitty hacmp
Extended Configuration
Extended Topology Configuration
Configure HACMP Networks
Add a Network to the HACMP Cluster
Select "diskhb" and accept the default Network Name.
Run a discovery:
# smitty hacmp
Extended Configuration
Discover HACMP-related Information from Configured Nodes
Add the disk device:
# smitty hacmp
Extended Configuration
Extended Topology Configuration
Configure HACMP Communication Interfaces/Devices
Add Communication Interfaces/Devices
Add Discovered Communication Interface and Devices
Communication Devices
Select the same disk device on each node by pressing F7.
Run a Verification & Synchronization again, as described earlier above. Then check with clstat and/or cldump again, to check if the disk heartbeat network comes online.
TOPICS: AIX, POWERHA / HACMP, SYSTEM ADMINISTRATION
NFS mounts on HACMP failing
When you want to mount an NFS file system on a node of an HACMP cluster, there are a couple of items you need to check before it will work:

Make sure the hostname and IP address of the HACMP node are resolvable and provide the correct output, by running:
# nslookup [hostname]
# nslookup [ip-address]
The next thing you will want to check on the NFS server is whether the node names of your HACMP cluster nodes are correctly added to the /etc/exports file. If they are, run:
# exportfs -va
The last, and trickiest, item you will want to check is whether a service IP label is defined as an IP alias on the same adapter as your node's hostname, e.g.:
# netstat -nr
Routing tables
Destination   Gateway       Flags  Refs  Use    If  Exp  Groups

Route Tree for Protocol Family 2 (Internet):
default       10.251.14.1   UG      4    180100 en1  -     -
10.251.14.0   10.251.14.50  UHSb    0         0 en1  -     -
10.251.14.50  127.0.0.1     UGHS    3    791253 lo0  -     -
The example above shows you that the default gateway is defined on the en1 interface. The next command shows you where your Service IP label lives:
# netstat -i
Name  Mtu   Network   Address         Ipkts   Ierrs Opkts
en1   1500  link#2    0.2.55.d3.75.77 2587851 0      940024
en1   1500  10.251.14 node01          2587851 0      940024
en1   1500  10.251.20 serviceip       2587851 0      940024
lo0   16896 link#1                    1912870 0     1914185
lo0   16896 127       loopback        1912870 0     1914185
lo0   16896 ::1                       1912870 0     1914185
As you can see, the Service IP label (in the example above called "serviceip") is defined on en1. In that case, for NFS to work, you also want to add the "serviceip" to the /etc/exports file on the NFS server and re-run "exportfs -va". And you should also make sure that hostname "serviceip" resolves to an IP address correctly (and of course the IP address resolves to the correct hostname) on both the NFS server and the client.
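As an illustration, a hypothetical /etc/exports entry on the NFS server that grants access to both the node name and the service IP label might look like this:
/data -access=node01:serviceip,root=node01:serviceip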

install_all_updates


A useful command to update software on your AIX server is install_all_updates. It is similar to running smitty update_all, but it works from the command line. The only thing you need to provide is the directory name, for example:

# install_all_updates -d .
This installs all the software updates from the current directory. Of course, you will have to make sure the current directory actually contains the software. Don't worry about generating a Table Of Contents (.toc) file in this directory, because install_all_updates generates one for you.

By default, install_all_updates will apply the filesets. Use -c to commit any software. Also, by default, it will expand any file systems as needed; use -x to prevent this behavior. It will install any requisites by default (use -n to prevent this). You can use -p to run a preview, and you can use -s to skip the recommended maintenance or technology level verification at the end of the install_all_updates output. You may have to use the -Y option to agree to all license agreements.

To install all available updates from the CD-ROM, agree to all license agreements, and skip the recommended maintenance or technology level verification, run:
# install_all_updates -d /cdrom -Y -s

Translate hardware address to physical location


This is how to translate a hardware address to a physical location:

The command lscfg shows the hardware addresses of all hardware. For example, the following command will give you more detail on an individual device (e.g. ent1):

# lscfg -pvl ent1
ent1 U788C.001.AAC1535-P1-T2 2-Port 10/100/1000 Base-TX PCI-X Adapter

2-Port 10/100/1000 Base-TX PCI-X Adapter:
Network Address.............001125C5E831
ROM Level.(alterable).......DV0210
Hardware Location Code......U788C.001.AAC1535-P1-T2

PLATFORM SPECIFIC

Name: ethernet
Node: ethernet@1,1
Device Type: network
Physical Location: U788C.001.AAC1535-P1-T2
This ent1 device is an 'Internal Port'. If we check ent2 on the same box:
# lscfg -pvl ent2
ent2 U788C.001.AAC1535-P1-C13-T1 2-Port 10/100/1000 Base-TX PCI-X

2-Port 10/100/1000 Base-TX PCI-X Adapter:
Part Number.................03N5298
FRU Number..................03N5298
EC Level....................H138454
Brand.......................H0
Manufacture ID..............YL1021
Network Address.............001A64A8D516
ROM Level.(alterable).......DV0210
Hardware Location Code......U788C.001.AAC1535-P1-C13-T1

PLATFORM SPECIFIC

Name: ethernet
Node: ethernet@1
Device Type: network
Physical Location: U788C.001.AAC1535-P1-C13-T1
This is a device on a PCI I/O card.

For a physical address like U788C.001.AAC1535-P1-C13-T1:
U788C.001.AAC1535 - This part identifies the 'system unit/drawer'. If your system is made up of several drawers, then look on the front and match the ID to this section of the address. Now go round the back of the server.
P1 - This is the PCI bus number. You may only have one.
C13 - Card Slot C13. They are numbered on the back of the server.
T1 - This is port 1 of 2 that are on the card.
Your internal ports won't have the Card Slot numbers, just the T number, representing the port. This should be marked on the back of your server. E.g.: U788C.001.AAC1535-P1-T2 means unit U788C.001.AAC1535, PCI bus P1, port T2 and you should be able to see T2 printed on the back of the server.

Clearing password history



Sometimes, when password rules are very strict, a user may have problems creating a new password that is both easy to remember and still adheres to the password rules. To aid the user, it can be useful to clear the password history for his or her account. The password history is stored in /etc/security/pwdhist.pag. The command you use to remove the password history is:

# chuser histsize=0 username
Actually, this command not only removes the password history, but also changes the histsize setting for the account to zero, meaning that the user is never again checked for re-using old passwords. After running the command above, you may want to set histsize back to the default value, which you can look up as follows:
# grep -p ^default /etc/security/user | grep histsize
        histsize = 20
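Then set it back; assuming the default of 20 shown above:
# chuser histsize=20 username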

How to change the HMC password



You can ssh as user hscroot to the HMC, and change the password this way:

hscroot@hmc> chhmcusr -u hscroot -t passwd
Enter the new password:
Retype the new password:

Changing a password without a prompt in AIX


If you have to change the password for a user and need to script this, for example when you have to change passwords for multiple users or on several different servers, here's an easy way to change the password for a user without having to type the password at the command line prompt:

# echo "user:password" | chpasswd
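If you need to do this for several accounts at once, a simple loop will do; a sketch with hypothetical user names and an example password:
for user in user1 user2 user3 ; do
  echo "${user}:NewPass1" | chpasswd
done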
TOPICS: AIX, SYSTEM ADMINISTRATION
Sendmail tips
To find out if sendmail is running:

# ps -ef | grep sendmail
To stop and restart sendmail:
# stopsrc -s sendmail
# startsrc -s sendmail -a "-bd -q30m"
Or:
# refresh -s sendmail
Use the -v flag on the mail command for "verbose" output. This is especially useful if you can't deliver mail, but also don't get any errors. E.g.:
# cat /etc/motd |mailx -v -s"test" email@address.com
To get sendmail to work on a system without DNS, create and/or edit /etc/netsvc.conf. It should contain 1 line only:
hosts=local
If you see the following error in the error report when starting sendmail:
DETECTING MODULE
'srchevn.c'@line:'355'
FAILING MODULE
sendmail
Then verify that your /etc/mail/sendmail.cf file is correct, and/or try starting the sendmail daemon as follows (instead of just running "startsrc -s sendmail"):
# startsrc -s sendmail -a "-bd -q30m"

Restrict a user to FTP to a particular folder in AIX


You can restrict a certain user account to only access a single folder. This is handled by file /etc/ftpaccess.ctl. There's a manual page available within AIX on file ftpaccess.ctl:

# man ftpaccess.ctl
In general, file /etc/ftpusers controls which accounts are allowed to FTP to a server. So, if this file exists, you will have to add the account to this file.

Here's an example of what you would set in the ftpaccess.ctl file if you wanted to restrict user ftp's login to /home/ftp. The user will be able to change directory further down the tree, but not outside this directory. Also, when user ftp logs in and runs pwd, it will show only "/" and not "/home/ftp".
# cat /etc/ftpaccess.ctl
useronly: ftp
If the user is required to write files to the server with specific access, for example, read and write access for user, group and others, then this can be accomplished by the user itself by running the FTP command:
ftp> site umask 111
200 UMASK set to 111 (was 027)
ftp> site umask
200 Current UMASK is 111
To further restrict an account that is only used for FTP purposes, make sure to disable login and remote login for the account via smitty user.
TOPICS: AIX, SYSTEM ADMINISTRATION
PS1
The following piece of code fits nicely in the /etc/profile file. It makes sure the PS1 prompt is set in such a way that you can see who is logged in on which system and what the current path is. At the same time, it also sets the window title to the same string.

H=`uname -n`
if [ $(whoami) = "root" ] ; then
PS1='^[]2;${USER}@(${H}) ${PWD##/*/}^G^M${USER}@(${H}) ${PWD##/*/} # '
else
PS1='^[]2;${USER}@(${H}) ${PWD##/*/}^G^M${USER}@(${H}) ${PWD##/*/} $ '
fi
Note: to type the special characters, such as ^[, you first have to type CTRL-V, and then CTRL-[. Likewise for ^G: type it as CTRL-V and then CTRL-G.

Second note: the escape characters only work properly when setting the window title using PuTTY. If you or any of your users use Reflection to access the servers, the escape codes don't work. In that case, shorten it to:
if [ $(whoami) = "root" ] ; then
PS1='${USER}@(${H}) ${PWD##/*/} # '
else
PS1='${USER}@(${H}) ${PWD##/*/} $ '
fi
TOPICS: AIX, NETWORKING, SYSTEM ADMINISTRATION
IP alias
To configure IP aliases on AIX:

Use the ifconfig command to create an IP alias. To have the alias created when the system starts, add the ifconfig command to the /etc/rc.net script.

The following example creates an alias on the en1 network interface. The alias must be defined on the same subnet as the network interface.

# ifconfig en1 alias 9.37.207.29 netmask 255.255.255.0 up
The following example deletes the alias:
# ifconfig en1 delete 9.37.207.29
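On more recent AIX levels you can also make an alias persistent across reboots through the ODM, instead of editing /etc/rc.net; a sketch using the alias4 attribute of the interface (and delalias4 to remove it again):
# chdev -l en1 -a alias4=9.37.207.29,255.255.255.0
# chdev -l en1 -a delalias4=9.37.207.29,255.255.255.0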

Adding a fileset to a SPOT in AIX


For example, if you wish to add the bos.alt_disk_install.rte fileset to a SPOT:

List the available spots:

# lsnim -t spot | grep 61
SPOTaix61tl05sp03     resources       spot
SPOTaix61tl03sp07     resources       spot
List the available lpp sources:
# lsnim -t lpp_source | grep 61
LPPaix61tl05sp03       resources       lpp_source
LPPaix61tl03sp07       resources       lpp_source
Check if the SPOT already has this fileset:
# nim -o showres SPOTaix61tl05sp03 | grep -i bos.alt
No output is shown: the fileset is not part of the SPOT. Check if the LPP source has the fileset:
# nim -o showres LPPaix61tl05sp03 | grep -i bos.alt
  bos.alt_disk_install.boot_images 6.1.5.2                    I  N usr
  bos.alt_disk_install.rte    6.1.5.1                    I  N usr,root
Install the first fileset (bos.alt_disk_install.boot_images) in the SPOT. The other fileset is a prerequisite of the first fileset and will be automatically installed as well.
# nim -o cust -a filesets=bos.alt_disk_install.boot_images \
  -a lpp_source=LPPaix61tl05sp03 SPOTaix61tl05sp03
Note: Use the -F option to force a fileset into the SPOT, if needed (e.g. when the SPOT is in use for a client).

Check if the SPOT now has the fileset installed:
# nim -o showres SPOTaix61tl05sp03 | grep -i bos.alt
  bos.alt_disk_install.boot_images
  bos.alt_disk_install.rte 6.1.5.1 C F Alternate Disk Installation

Nimadm for specialists


A very good article about migrating AIX from version 5.3 to 6.1 can be found on the following page of IBM developerWorks:

http://www.ibm.com/developerworks/aix/library/au-migrate_nimadm/index.html?ca=drs

For a smooth nimadm process, make sure that you clean up as many filesets on your server as possible (get rid of the things you no longer need). The more filesets that need to be migrated, the longer the process will take. Also make sure that openssl/openssh is up-to-date on the server to be migrated; the migration is likely to break when you have old versions installed.

Very useful is also a gigabit Ethernet connection between the NIM server and the server to be upgraded, as the nimadm process copies over the client rootvg to the NIM server and back.

The log file for a nimadm process can be found on the NIM server in /var/adm/ras/alt_mig.
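For reference, a typical nimadm invocation from the NIM master looks something like this (all names are examples; -j names a volume group on the NIM master used for the migration cache, -d is the spare target disk on the client, and -Y accepts the license agreements):
# nimadm -j nimadmvg -c server1 -s SPOTaix61tl05sp03 -l LPPaix61tl05sp03 -d hdisk1 -Y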

Longer login names


User names can only be eight characters or fewer in AIX version 5.2 and earlier. Starting with AIX version 5.3, IBM increased the maximum number of characters to 255. To verify the setting in AIX 5.3 and later, you can extract the value from getconf:

# getconf LOGIN_NAME_MAX
9
Or use lsattr:
# lsattr -El sys0 -a max_logname
max_logname 9 Maximum login name length at boot time True
To change the value, simply adjust the v_max_logname parameter (shown as max_logname in lsattr) using chdev to the maximum number of characters desired plus one to accommodate the terminating character. For example, if you want to have user names that are 128 characters long, you would adjust the v_max_logname parameter to 129:
# chdev -l sys0 -a max_logname=129
sys0 changed
Please note that this change will not go into effect until you have rebooted the operating system. Once the server has been rebooted, you can verify that the change has taken effect:
# getconf LOGIN_NAME_MAX
128
Keep in mind, however, that if your environment includes servers running AIX versions prior to 5.3, or other operating systems that cannot handle user names longer than eight characters, and you rely on NIS or other shared authentication measures, it would be wise to stick with eight-character user names.

Difference between major and minor numbers


A major number refers to a type of device, and a minor number specifies a particular device of that type or sometimes the operation mode of that device type.

Example:

# lsdev -Cc tape
rmt0 Available 3F-08-02 IBM 3580 Ultrium Tape Drive (FCP)
rmt1 Available 3F-08-02 IBM 3592 Tape Drive (FCP)
smc0 Available 3F-08-02 IBM 3576 Library Medium Changer (FCP)
In the list above:

rmt1 is a standalone IBM 3592 tape drive;
rmt0 is an LTO (Ultrium) drive inside a tape library;
smc0 is the medium changer (the robotic part) of that same tape library.

Now look at their major and minor numbers:
# ls -l /dev/rmt* /dev/smc*
crw-rw-rwT 1 root system 38, 0 Nov 13 17:40 /dev/rmt0
crw-rw-rwT 1 root system 38,128 Nov 13 17:40 /dev/rmt1
crw-rw-rwT 1 root system 38, 1 Nov 13 17:40 /dev/rmt0.1
crw-rw-rwT 1 root system 38, 66 Nov 13 17:40 /dev/smc0
All use the IBM tape device driver (and so share the same major number, 38), but they are different entities (with minor numbers 0, 128 and 66 respectively). Also, compare rmt0 and rmt0.1: it's the same device, but with a different mode of operation.

bootlist: Multiple boot logical volumes found


This describes how to resolve the following error when setting the bootlist:

# bootlist -m normal hdisk2 hdisk3
0514-229 bootlist: Multiple boot logical volumes found on 'hdisk2'.
Use the 'blv' attribute to specify the one from which to boot.
To resolve this: clear the boot logical volumes from the disks:
# chpv -c hdisk2
# chpv -c hdisk3
Verify that the disks can no longer be used to boot from by running:
# ipl_varyon -i
Then re-run bosboot on both disks:
# bosboot -ad /dev/hdisk2
bosboot: Boot image is 38224 512 byte blocks.
# bosboot -ad /dev/hdisk3
bosboot: Boot image is 38224 512 byte blocks.
Finally, set the bootlist again:
# bootlist -m normal hdisk2 hdisk3
Another way around it is by specifying hd5 using the blv attribute:
# bootlist -m normal hdisk2 blv=hd5 hdisk3 blv=hd5
This will set the correct boot logical volume, but the error will show up if you ever run the bootlist command again without the blv attribute.
TOPICS: AIX, LVM, SYSTEM ADMINISTRATION
Mirrorvg without locking the volume group
When you run the mirrorvg command, you will (by default) lock the volume group it is run against. As a result, you have no way of knowing the status of the sync process that occurs after mirrorvg has run the mklvcopy commands for all the logical volumes in the volume group. Especially with very large volume groups, this can be a problem.

The solution however is easy: run the mirrorvg command with the -s option, to prevent it from running the sync. Then, when mirrorvg has completed, run syncvg yourself with the -P option.

For example, if you wish to mirror the rootvg from hdisk0 to hdisk1:

# mirrorvg -s rootvg hdisk1
Of course, make sure the new disk is included in the boot list for the rootvg:
# bootlist -m normal hdisk0 hdisk1
Now rootvg is mirrored, but not yet synced; run "lsvg -l rootvg" and you'll see the stale partitions. So run the syncvg command yourself. With the -P option you can specify the number of threads that should be started to perform the sync process; usually, you can specify at least 2 to 3 times the number of cores in the system. Using the -P option has an extra benefit: there will be no lock on the volume group, allowing you to run "lsvg rootvg" in another window to check the status of the sync process.
# syncvg -P 4 -v rootvg
And in another window:
# lsvg rootvg | grep STALE | xargs
STALE PVs: 1 STALE PPs: 73
TOPICS: AIX, LVM, SYSTEM ADMINISTRATION
File system creation time
To determine the time and date a file system was created, you can use the getlvcb command. First, figure out which logical volume is used for a particular file system; for example, if you want to know for the /opt file system:

# lsfs /opt
Name         Nodename Mount Pt VFS   Size    Options Auto Accounting
/dev/hd10opt --       /opt     jfs2  4194304 --      yes  no
So file system /opt is located on logical volume hd10opt. Then run the getlvcb command:
# getlvcb -AT hd10opt
  AIX LVCB
  intrapolicy = c
  copies = 2
  interpolicy = m
  lvid = 00f69a1100004c000000012f9dca819a.9
  lvname = hd10opt
  label = /opt
  machine id = 69A114C00
  number lps = 8
  relocatable = y
  strict = y
  stripe width = 0
  stripe size in exponent = 0
  type = jfs2
  upperbound = 32
  fs = vfs=jfs2:log=/dev/hd8:vol=/opt:free=false:quota=no
  time created  = Thu Apr 28 20:26:36 2011
  time modified = Thu Apr 28 20:40:38 2011
You can clearly see the "time created" for this file system in the example above.

VGs (normal, big, and scalable)


The original VG type, commonly known as standard or normal, allows a maximum of 32 physical volumes (PVs), no more than 1016 physical partitions (PPs) per PV, and an upper limit of 256 logical volumes (LVs) per VG. Subsequently, a new VG type was introduced, referred to as big VG. A big VG allows up to 128 PVs and a maximum of 512 LVs.

AIX 5L Version 5.3 has introduced a new VG type called scalable volume group (scalable VG). A scalable VG allows a maximum of 1024 PVs and 4096 LVs. The maximum number of PPs applies to the entire VG and is no longer defined on a per-disk basis. This opens up the prospect of configuring VGs with a relatively small number of disks and fine-grained storage allocation options through a large number of PPs, which are small in size. The scalable VG can hold up to 2,097,152 (2048 K) PPs. As with the older VG types, the size is specified in units of megabytes and the size variable must be equal to a power of 2. The range of PP sizes starts at 1 (1 MB) and goes up to 131,072 (128 GB). This is more than two orders of magnitude above the 1024 (1 GB), which is the maximum for both normal and big VG types in AIX 5L Version 5.2. The new maximum PP size provides architectural support for 256 petabyte disks.

The table below shows the variation of configuration limits with different VG types. Note that the maximum number of user definable LVs is given by the maximum number of LVs per VG minus 1 because one LV is reserved for system use. Consequently, system administrators can configure 255 LVs in normal VGs, 511 in big VGs, and 4095 in scalable VGs.

VG type      Max PVs  Max LVs  Max PPs per VG        Max PP size
Normal VG    32       256      32,512 (1016 * 32)    1 GB
Big VG       128      512      130,048 (1016 * 128)  1 GB
Scalable VG  1024     4096     2,097,152             128 GB

The scalable VG implementation in AIX 5L Version 5.3 provides configuration flexibility with respect to the number of PVs and LVs that can be accommodated by a given instance of the new VG type. The configuration options allow any scalable VG to contain 32, 64, 128, 256, 512, 768, or 1024 disks and 256, 512, 1024, 2048, or 4096 LVs. You do not need to configure the maximum values of 1024 PVs and 4096 LVs at the time of VG creation to account for potential future growth. You can always increase the initial settings at a later date as required.

The System Management Interface Tool (SMIT) and the Web-based System Manager graphical user interface fully support the scalable VG. Existing SMIT panels, which are related to VG management tasks, have been changed and many new panels added to account for the scalable VG type. For example, you can use the new SMIT fast path _mksvg to directly access the Add a Scalable VG SMIT menu.

The user commands mkvg, chvg, and lsvg have been enhanced in support of the scalable VG type.
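For example, creating a scalable VG with the default limits is as simple as this (the volume group and disk names are examples):
# mkvg -S -y datavg hdisk2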

For more information:
http://www.ibm.com/developerworks/aix/library/au-aix5l-lvm.html.

Renaming disk devices


Getting disk devices named the same way on, for example, 2 nodes of a PowerHA cluster, can be really difficult. For us humans though, it's very useful to have the disks named the same way on all nodes, so we can recognize the disks a lot faster, and don't have to worry about picking the wrong disk.

The way to get around this usually involved either creating dummy disk devices or running the configuration manager on a specific adapter, like: cfgmgr -vl fcs0. This complicated procedure is no longer needed as of AIX 7.1 and AIX 6.1 TL6, because a new command has been made available, called rendev, which makes renaming devices very easy:

# lspv
hdisk0  00c8b12ce3c7d496  rootvg  active
hdisk1  00c8b12cf28e737b  None

# rendev -l hdisk1 -n hdisk99

# lspv
hdisk0  00c8b12ce3c7d496  rootvg  active
hdisk99 00c8b12cf28e737b  None
TOPICS: AIX, BACKUP & RESTORE, SYSTEM ADMINISTRATION
Lsmksysb
There's a simple command to list information about a mksysb image, called lsmksysb:

# lsmksysb -lf mksysb.image
VOLUME GROUP:      rootvg
BACKUP DATE/TIME:  Mon Jun 6 04:00:06 MST 2011
UNAME INFO:        AIX testaix1 1 6 0008CB1A4C00
BACKUP OSLEVEL:    6.1.6.0
MAINTENANCE LEVEL: 6100-06
BACKUP SIZE (MB):  49920
SHRINK SIZE (MB):  17377
VG DATA ONLY:      no

rootvg:
LV NAME    TYPE     LPs  PPs  PVs  LV STATE      MOUNT POINT
hd5        boot     1    2    2    closed/syncd  N/A
hd6        paging   32   64   2    open/syncd    N/A
hd8        jfs2log  1    2    2    open/syncd    N/A
hd4        jfs2     8    16   2    open/syncd    /
hd2        jfs2     40   80   2    open/syncd    /usr
hd9var     jfs2     40   80   2    open/syncd    /var
hd3        jfs2     40   80   2    open/syncd    /tmp
hd1        jfs2     8    16   2    open/syncd    /home
hd10opt    jfs2     8    16   2    open/syncd    /opt
dumplv1    sysdump  16   16   1    open/syncd    N/A
dumplv2    sysdump  16   16   1    open/syncd    N/A
hd11admin  jfs2     1    2    2    open/syncd    /admin

Use dd to back up a raw partition



The savevg command can be used to back up user volume groups. All logical volume information is archived, as well as JFS and JFS2 mounted filesystems. However, this command cannot be used to back up raw logical volumes.

Save the contents of a raw logical volume onto a file using:

# dd if=/dev/lvname of=/file/system/lvname.dd
This will create a copy of logical volume "lvname" to a file "lvname.dd" in file system /file/system. Make sure that wherever you write your output file to (in the example above to /file/system) has enough disk space available to hold a full copy of the logical volume. If the logical volume is 100 GB, you'll need 100 GB of file system space for the copy.

If you want to test how this works, you can create a logical volume with a file system on top of it, and create some files in that file system. Then unmount the file system, and use dd to copy the logical volume as described above.
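A minimal sketch of such a test, with hypothetical names (a one-LP logical volume in rootvg with a JFS2 file system on top of it):
# mklv -y testlv -t jfs2 rootvg 1
# crfs -v jfs2 -d testlv -m /testfs -A no
# mount /testfs
# echo hello > /testfs/testfile
# umount /testfs
# dd if=/dev/testlv of=/tmp/testlv.dd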

Then, throw away the file system using "rmfs -r", and after that has completed, recreate the logical volume and the file system. If you now mount the file system, you will see that it is empty. Unmount the file system, and use the following dd command to restore your backup copy:
# dd if=/file/system/lvname.dd of=/dev/lvname
Then, mount the file system again, and you will see that the contents of the file system (the files you've placed in it) are back.

Restoring individual files from a mksysb image


Sometimes, you just need that one single file from a mksysb image backup. It's really not that difficult to accomplish this.

First of all, go to the directory that contains the mksysb image file:

# cd /sysadm/iosbackup
In this example, we're using the mksysb image of a Virtual I/O Server, created using iosbackup. This is basically the same as a mksysb image from a regular AIX system. The image file for this mksysb backup is called vio1.mksysb.

First, try to locate the file you're looking for; for example, if you're looking for file nimbck.ksh:
# restore -T -q -l -f vio1.mksysb | grep nimbck.ksh
New volume on vio1.mksysb:
Cluster size is 51200 bytes (100 blocks).
The volume number is 1.
The backup date is: Thu Jun  9 23:00:28 MST 2011
Files are backed up by name.
The user is padmin.
-rwxr-xr-x- 10   staff  May 23  08:37  1801 ./home/padmin/nimbck.ksh
Here you can see the original file was located in /home/padmin.

Now recover that one single file:
# restore -x -q -f vio1.mksysb ./home/padmin/nimbck.ksh
x ./home/padmin/nimbck.ksh
Note that it is important to add the dot before the filename that needs to be recovered; otherwise it won't work. Your file is now restored to ./home/padmin/nimbck.ksh, a folder relative to the current directory you're in right now:
# cd ./home/padmin
# ls -als nimbck.ksh
4 -rwxr-xr-x    1 10  staff  1801 May 23 08:37 nimbck.ksh

Too many open files


To determine if the number of open files is growing over a period of time, issue lsof to report the open files against a PID on a periodic basis. For example:

# lsof -p (PID of process) -r (interval) > lsof.out
Note: The interval is in seconds, 1800 for 30 minutes.

This output does not give the actual file names to which the handles are open. It provides only the name of the file system (directory) in which they are contained. The lsof command indicates if the open file is associated with an open socket or a file. When it references a file, it identifies the file system and the inode, not the file name.

Run the following command to determine the file name:
# df -kP filesystem_from_lsof | awk '{print $6}' | tail -1
Now note the filesystem name. And then run:
# find filesystem_name -inum inode_from_lsof -print
This will show the actual file name.

To increase the limit, change or add the nofiles=XXXXX parameter in the /etc/security/limits file, or run:
# chuser nofiles=XXXXX user_id
You can also use svmon:
# svmon -P java_pid -m | grep pers
This lists open files in the format: filesystem_device:inode. Use the same procedure as above for finding the actual file name.

clstat: Failed retrieving cluster information.


If clstat is not working, you may get the following error when running it:

# clstat
Failed retrieving cluster information.

There are a number of possible causes:
clinfoES or snmpd subsystems are not active.
snmp is unresponsive.
snmp is not configured correctly.
Cluster services are not active on any nodes.

Refer to the HACMP Administration Guide for more information.
Additional information for verifying the SNMP configuration on AIX 6
can be found in /usr/es/sbin/cluster/README5.5.0.UPDATE
To resolve this, first of all, go ahead and read the README that is referred to. You'll find that you have to enable an entry in /etc/snmpdv3.conf:
Commands clstat or cldump will not start if the internet MIB tree is not enabled in snmpdv3.conf file. This behavior is usually seen in AIX 6.1 onwards where this internet MIB entry was intentionally disabled as a security issue. This internet MIB entry is required to view/resolve risc6000clsmuxpd (1.3.6.1.4.1.2.3.1.2.1.5) MIB sub tree which is used by clstat or cldump functionality.

There are two ways to enable this MIB sub tree (risc6000clsmuxpd):

1) Enable the main internet MIB entry by adding this line in /etc/snmpdv3.conf file

VACM_VIEW defaultView internet - included -

But doing so is not advisable as it unlocks the entire MIB tree

2) Enable only the MIB sub tree for risc6000clsmuxpd without enabling the main MIB tree by adding this line in /etc/snmpdv3.conf file

VACM_VIEW defaultView 1.3.6.1.4.1.2.3.1.2.1.5 - included -

Note: After enabling the MIB entry above, the snmp daemon must be restarted with the following commands as shown below:

# stopsrc -s snmpd
# startsrc -s snmpd

After snmp is restarted leave the daemon running for about two minutes before attempting to start clstat or cldump.
Sometimes, even after doing this, clstat or cldump still don't work. The next thing may sound silly, but edit the /etc/snmpdv3.conf file, and take out the comments. Change this:
smux 1.3.6.1.4.1.2.3.1.2.1.2 gated_password  # gated
smux 1.3.6.1.4.1.2.3.1.2.1.5 clsmuxpd_password # HACMP/ES for AIX ...
To:
smux 1.3.6.1.4.1.2.3.1.2.1.2 gated_password
smux 1.3.6.1.4.1.2.3.1.2.1.5 clsmuxpd_password
Then, recycle the daemons on all cluster nodes. This can be done while the cluster is up and running:
# stopsrc -s hostmibd
# stopsrc -s snmpmibd
# stopsrc -s aixmibd
# stopsrc -s snmpd
# sleep 4
# chssys -s hostmibd -a "-c public"
# chssys -s aixmibd  -a "-c public"
# chssys -s snmpmibd  -a "-c public"
# sleep 4
# startsrc -s snmpd
# startsrc -s aixmibd
# startsrc -s snmpmibd
# startsrc -s hostmibd
# sleep 120
# stopsrc -s clinfoES
# startsrc -s clinfoES
# sleep 120
Now, to verify that it works, run either clstat or cldump, or the following command:
# snmpinfo -m dump -v -o /usr/es/sbin/cluster/hacmp.defs cluster
Still not working at this point? Then run an Extended Verification and Synchronization:
# smitty cm_ver_and_sync.select
After that, clstat, cldump and snmpinfo should work.

AIX fibre channel error - FCS_ERR6


This error can occur if the fibre channel adapter is extremely busy. The AIX FC adapter driver is trying to map an I/O buffer for DMA access, so the FC adapter can read or write into the buffer. The DMA mapping is done by making a request to the PCI bus device driver.

The PCI bus device driver is saying that it can't satisfy the request right now. There was simply too much I/O at that moment, and the adapter couldn't handle it all. When the FC adapter is configured, the driver tells the PCI bus driver how much resource to set aside for it, and that limit may have been exceeded. It is therefore recommended to increase the max_xfer_size attribute on the fibre channel devices.

It depends on the type of fibre channel adapter, but usually the possible sizes are:

0x100000, 0x200000, 0x400000, 0x800000, 0x1000000

To view the current setting type the following command:

# lsattr -El fcsX -a max_xfer_size
Replace the X with the fibre channel adapter number.

You should get an output similar to the following:
max_xfer_size 0x100000 Maximum Transfer Size True
The value can be changed as follows, after which the server needs to be rebooted:
# chdev -l fcsX -a max_xfer_size=0x1000000 -P

Migrating users from one AIX system to another


Since the files involved in the following procedure are flat ASCII files and their format has not changed from V4 to V5, the users can be migrated between systems running the same or different versions of AIX (for example, from V4 to V5).

Files that can be copied over:

/etc/group
/etc/passwd
/etc/security/group
/etc/security/limits
/etc/security/passwd
/etc/security/.ids
/etc/security/environ
/etc/security/.profile
NOTE: Edit the passwd file so the root entry is as follows:
root:!:0:0::/:/usr/bin/ksh
When you copy the /etc/passwd and /etc/group files, make sure they contain at least a minimum set of essential user and group definitions.

Listed specifically as users are the following:
root, daemon, bin, sys, adm, uucp, guest, nobody, lpd

Listed specifically as groups are the following:
system, staff, bin, sys, adm, uucp, mail, security, cron, printq, audit, ecs, nobody, usr

If the bos.compat.links fileset is installed, you can copy the /etc/security/mkuser.defaults file over. If it is not installed, the file is located as mkuser.default in the /usr/lib/security directory. If you copy over mkuser.defaults, changes must be made to the stanzas. Replace group with pgrp, and program with shell. A proper stanza should look like the following:
    user:
            pgrp = staff
            groups = staff
            shell = /usr/bin/ksh
            home = /home/$USER
The following files may also be copied over, as long as the AIX version in the new machine is the same:
/etc/security/login.cfg
/etc/security/user
NOTE: If you decide to copy these two files, open the /etc/security/user file and make sure that variables such as tty, registry, auth1 and so forth are set properly for the new machine. Otherwise, do not copy these two files, and just add all the user stanzas to the newly created files on the new machine.

Once the files are moved over, execute the following:
# usrck -t ALL
# pwdck -t ALL
# grpck -t ALL
This will clear up any discrepancies (such as uucp not having an entry in /etc/security/passwd). Ideally this should be run on the source system before copying over the files as well as after porting these files to the new system.

NOTE: It is possible to find user ID conflicts when migrating users from older versions of AIX to newer versions. AIX has added new user IDs in different release cycles. These are reserved IDs and should not be deleted. If your old user IDs conflict with the newer AIX system user IDs, it is advised that you assign new user IDs to these older IDs.

From: http://www-01.ibm.com/support/docview.wss?uid=isg3T1000231

mkpasswd


An interesting open source project is Expect. It's a tool that can be used to automate interactive applications.

The RPM for Expect can be downloaded from http://www.perzl.org/aix/index.php?n=Main.Expect, and the home page for Expect is http://www.nist.gov/el/msid/expect.cfm.

A very interesting tool that is part of the Expect RPM is "mkpasswd". It is a little Tcl script that uses Expect to work with the passwd program to generate a random password and set it immediately. A somewhat adjusted version of "mkpasswd" can be downloaded here. The adjusted version of mkpasswd will generate a random password for a user, with a length of 8 characters (the maximum password length by default for AIX), if you run for example:

# /usr/local/bin/mkpasswd username
sXRk1wd3
To see the interactive work performed by Expect for mkpasswd, use the -v option:
# /usr/local/bin/mkpasswd -v username
spawn /bin/passwd username
Changing password for "username"
username's New password:
Enter the new password again:
password for username is s8qh1qWZ
By using mkpasswd, you'll never have to come up with a random password yourself again, and it will prevent Unix system admins from assigning easily guessable new passwords to accounts, such as "changeme" or "abc1234".

Now, what if you want to let other (non-root) users run this utility, while at the same time preventing them from resetting the password of user root?

Let's say you want user pete to be able to reset other users' passwords. Add the following entries to the /etc/sudoers file by running visudo:
# visudo

Cmnd_Alias MKPASSWD = /usr/local/bin/mkpasswd, \
                      ! /usr/local/bin/mkpasswd root
pete ALL=(ALL) NOPASSWD:MKPASSWD
This will allow pete to run the /usr/local/bin/mkpasswd utility, which he can use to reset passwords.

First, to check what he can run, use the "sudo -l" command:
# su - pete
$ sudo -l
User pete may run the following commands on this host:
(ALL) NOPASSWD: /usr/local/bin/mkpasswd, !/usr/local/bin/mkpasswd root
Then, an attempt, using pete's account, to reset another user's password (which is successful):
$ sudo /usr/local/bin/mkpasswd mark
oe09'ySMj
Then another attempt, to reset the root password (which fails):
$ sudo /usr/local/bin/mkpasswd root
Sorry, user pete is not allowed to execute
'/usr/local/bin/mkpasswd root' as root.

Unconfiguring child objects


When removing a device on AIX, you may run into a message saying that a child device is not in a correct state. For example:

# rmdev -dl fcs3
Method error (/usr/lib/methods/ucfgcommo):
0514-029 Cannot perform the requested function because a
child device of the specified device is not in a correct state.
To determine what the child devices are, use the -p option of the lsdev command. From the man page of the lsdev command:
-p Parent
     Specifies the device logical name from the Customized Devices
     object class for the parent of devices to be displayed. The
     -p Parent flag can be used to show the child devices of the
     given Parent. The Parent argument to the -p flag may contain
     the same wildcard characters that can be used with the odmget
     command. This flag cannot be used with the -P flag.
For example:
# lsdev -p fcs3        
fcnet3 Defined   07-01-01 Fibre Channel Network Protocol Device
fscsi3 Available 07-01-02 FC SCSI I/O Controller Protocol Device
To remove the device, and all child devices, use the -R option. From the man page for the rmdev command:
-R
     Unconfigures the device and its children.
     When used with the -d or -S flags, the
     children are undefined or stopped, respectively.
The command to remove adapter fcs3 and all its child devices is:
# rmdev -Rdl fcs3
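If you want to check the state of the child devices in a scripted way first, the -F flag of lsdev can be combined with -p (field names as per the lsdev man page):
# lsdev -p fcs3 -F "name status"
fcnet3 Defined
fscsi3 Available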

Erasing disks




During a system decommission process, it is advisable to format or at least erase all drives. There are two ways of accomplishing that:

If you have time:

AIX allows disks to be erased via the Format media service aid in the AIX diagnostic package. To erase a hard disk, run the following command:

# diag -T format
This will start the Format media service aid in a menu-driven interface. If prompted, choose your terminal. You will then be presented with a resource selection list. Choose the hdisk devices you want to erase from this list and commit your changes according to the instructions on the screen.

Once you have committed your selection, choose Erase Disk from the menu. You will then be asked to confirm your selection. Choose Yes. You will be asked if you want to Read data from drive or Write patterns to drive. Choose Write patterns to drive. You will then have the opportunity to modify the disk erasure options. After you specify the options you prefer, choose Commit Your Changes. The disk is now erased. Please note that this process can take a long time to complete.

If you want to do it quick-and-dirty:

For each disk, use the dd command to overwrite the data on the disk. For example:
for disk in $(lspv | awk '{print $1}') ; do
   dd if=/dev/zero of=/dev/r${disk} bs=1024 count=10
   echo $disk wiped
done
This does the trick: it reads zeroes from /dev/zero and writes 10 blocks of 1024 bytes to each disk. That overwrites the PVID and LVM information at the start of the disk, rendering the disk unusable. Note, however, that only the start of the disk is overwritten; the rest of the data is left in place.
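If you need to overwrite the disks completely rather than just their start, omit the count argument so dd runs to the end of each disk. This can take many hours, the 1 MB block size below is just a suggestion, and remember that this loop also hits the rootvg disks of the running system, so it belongs at the very end of a decommission:

for disk in $(lspv | awk '{print $1}') ; do
   dd if=/dev/zero of=/dev/r${disk} bs=1024k
   echo $disk wiped completely
done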

Mounting USB drive on AIX




To familiarize yourself with using USB drives on AIX, take a look at the following article at IBM developerWorks:

http://www.ibm.com/developerworks/aix/library/au-flashdrive/

Before you start using it, make sure you DLPAR the USB controller to your LPAR, if you have not done so already. You should then see the USB devices on your system:

# lsconf | grep usb
+ usbhc0 U78C0.001.DBJX589-P2          USB Host Controller
+ usbhc1 U78C0.001.DBJX589-P2          USB Host Controller
+ usbhc2 U78C0.001.DBJX589-P2          USB Enhanced Host Controller
+ usbms0 U78C0.001.DBJX589-P2-C8-T5-L1 USB Mass Storage
After you plug in the USB drive, run cfgmgr to discover the drive, or if you don't want to run the whole cfgmgr, run:
# /etc/methods/cfgusb -l usb0
Some devices may not be recognized by AIX, and may require you to run the lquerypv command:
# lquerypv -h /dev/usbms0
To create a 2 TB file system on the drive, run:
# mkfs -olog=INLINE,ea=v2 -s2000G -Vjfs2 /dev/usbms0
To mount the file system, run:
# mount -o log=INLINE /dev/usbms0 /usbmnt
Then enjoy using a 2 TB file system:
# df -g /usbmnt
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/usbms0     2000.00   1986.27    1%     3182     1% /usbmnt
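Before unplugging the drive, unmount the file system and remove the device definition, so the next cfgmgr run starts clean:
# umount /usbmnt
# rmdev -dl usbms0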

Resolving LED code 555


If your system hangs with LED code 555, it most likely means that one of your rootvg file systems is corrupt. The following link provides information on how to resolve it:

http://www-304.ibm.com/support/docview.wss?uid=isg3T1000217
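In short, that procedure involves booting from installation media into service mode, accessing rootvg before mounting its file systems, and running fsck against the rootvg file systems. A sketch of the typical commands, assuming the standard rootvg logical volume names:
# fsck -y /dev/rhd4          (/)
# fsck -y /dev/rhd2          (/usr)
# fsck -y /dev/rhd3          (/tmp)
# fsck -y /dev/rhd9var       (/var)
# fsck -y /dev/rhd1          (/home)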

After completing the procedure, the system may still hang with LED code 555. If that happens, boot the system from media, enter service mode again, and access the volume group. Then check which disk holds the boot logical volume:

# lslv -m hd5
Then also check your bootlist:
# bootlist -m normal -o
If these two don't match, set the boot list to the correct disk, as indicated by the lslv command above. For example, to set it to hdisk1, run:
# bootlist -m normal hdisk1
And then, make sure you can run the bosboot commands:
# bosboot -ad /dev/hdisk1
# bosboot -ad /dev/ipldevice
Note: replace hdisk1 in the examples above with the disk that was indicated by the lslv command.

If the bosboot on the ipldevice fails, you have two options: recover the system from a mksysb image, or recreate hd5. First, create a copy of your ODM:
# mount /dev/hd4 /mnt
# mount /dev/hd2 /mnt/usr
# mkdir /mnt/etc/objrepos/bak
# cp /mnt/etc/objrepos/Cu* /mnt/etc/objrepos/bak
# cp /etc/objrepos/Cu* /mnt/etc/objrepos
# umount /dev/hd2
# umount /dev/hd4
# exit
Then, recreate hd5, for example, for hdisk1:
# rmlv hd5
# cd /dev
# rm ipldevice
# rm ipl_blv
# mklv -y hd5 -t boot -ae rootvg 1 hdisk1
# ln /dev/rhd5 /dev/ipl_blv
# ln /dev/rhdisk1 /dev/ipldevice
# bosboot -ad /dev/hdisk1
If things still won't boot at this time, the only option you have left is to recover the system from a mksysb image.


Change default value of hcheck_interval

The default value of hcheck_interval for VSCSI hdisks is 0, meaning that health checking is disabled. The hcheck_interval attribute of an hdisk can only be changed directly if the volume group to which the hdisk belongs is not active. If the volume group is active, the ODM value of hcheck_interval can be altered in the CuAt class instead, as shown in the following example for hdisk0:


# chdev -l hdisk0 -a hcheck_interval=60 -P

The change will then be applied once the system is rebooted. However, it is possible to change the default value of the hcheck_interval attribute in the PdAt ODM class. As a result, you won't have to worry about its value anymore and newly discovered hdisks will automatically get the new default value, as illustrated in the example below:


# odmget -q 'attribute = hcheck_interval AND uniquetype = \
PCM/friend/vscsi' PdAt | sed 's/deflt = \"0\"/deflt = \"60\"/' \
| odmchange -o PdAt -q 'attribute = hcheck_interval AND \
uniquetype = PCM/friend/vscsi'
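To verify the new default in the PdAt class, and the value that a given hdisk picked up (hdisk0 used for illustration), you can run:
# odmget -q 'attribute = hcheck_interval AND uniquetype = PCM/friend/vscsi' PdAt
# lsattr -El hdisk0 -a hcheck_interval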


How to make a system backup of a VIOS

To create a system backup of a Virtual I/O Server (VIOS), run the following commands (as user root):


# /usr/ios/cli/ioscli viosbr -backup -file vios_config_bkup \
-frequency daily -numfiles 10
# /usr/ios/cli/ioscli backupios -nomedialib -file /mksysb/$(hostname).mksysb -mksysb


The first command (viosbr) creates a backup of the configuration information in /home/padmin/cfgbackups. It also schedules the command to run every day, keeping up to 10 backup files in /home/padmin/cfgbackups.

The second command is the mksysb equivalent for a Virtual I/O Server: backupios. This command creates the mksysb image in the /mksysb folder, excluding any ISO repository in rootvg (-nomedialib) as well as anything else listed in /etc/exclude.rootvg.
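To check the resulting configuration backups later, viosbr can list the contents of a backup file; for example (the backup file name below is an assumption based on the command above):
# /usr/ios/cli/ioscli viosbr -view -file /home/padmin/cfgbackups/vios_config_bkup.01.tar.gz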

Wednesday, December 26, 2012

Raid Levels in Unix

The different RAID levels available today:

Raid 0 - Striping data across the disks.
Data is striped across all the disks present in the array, which improves read and write performance: reading a large file from a single disk takes longer than reading the same file from a RAID 0 array. There is no data redundancy in this case, so a single disk failure means the data is lost.

Raid 1 - Mirroring.
RAID 0 provides no redundancy: if one disk fails, the data is lost. RAID 1 overcomes that problem by mirroring the data, so if one disk fails the data is still accessible through the other disk.

Raid 2
Data is split at the bit level and spread across the data disks and redundant disks. A special error correction code (ECC) accompanies each data block and is tallied when the data is read from disk, to maintain data integrity.

Raid 3 - Byte-level striping with dedicated parity.
Data is striped across multiple disks at the byte level, with the parity maintained on a separate, dedicated disk. A single failed disk, including the parity disk itself, can be survived; losing a second disk results in data loss.

Raid 4
Similar to RAID 3, the only difference being that the data is striped across multiple disks at the block level.

Raid 5
Block-level striping with distributed parity. The data and parity are striped across all disks, which provides data redundancy. A minimum of three disks is required, and if any one disk fails the data is still safe.

Raid 6
Block-level striping with dual distributed parity. It stripes blocks of data and parity across all disks in the array, but maintains two sets of parity information for each parcel of data, further increasing redundancy. So even if two disks fail, the data is still intact.

Raid 7
Asynchronous, cached striping with dedicated parity. This level is not an open industry standard. It is based on the concepts of RAID 3 and RAID 4, with a great deal of cache included across multiple levels and a specialized real-time processor that manages the array asynchronously.
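On AIX, RAID 0 and RAID 1 behavior can be approximated in software at the LVM level. A hedged sketch, assuming a volume group named datavg with disks hdisk2 and hdisk3 (all names illustrative):
# mklv -y stripedlv -S 64K datavg 10 hdisk2 hdisk3     (RAID 0: striped logical volume, 64 KB stripe size)
# mklv -y mirroredlv -c 2 datavg 10 hdisk2 hdisk3      (RAID 1: logical volume with two copies)
# mirrorvg datavg hdisk3                               (RAID 1: mirror an entire volume group)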

Configuring MPIO for Virtual AIX client

Configuring MPIO for the virtual AIX client
This document describes the procedure to set up Multi-Path I/O on the AIX clients of the virtual I/O server.
Procedure:
This procedure assumes that the disks are already allocated to both the VIO servers involved in this configuration.
· Creating Virtual Server and Client SCSI Adapters
First of all, via the HMC, create SCSI server adapters on the two VIO servers, and then two virtual client SCSI adapters on the newly created client partition, each mapping to one of the VIO servers' server SCSI adapters.
An example:
Here is an example of configuring and exporting an ESS LUN from both VIO servers to a client partition:
· Selecting the disk to export
You can check for the ESS LUN that you are going to use for MPIO by running the following commands on the VIO servers.
On the first VIO server:
$ lsdev -type disk
name status description
..
hdisk3 Available MPIO Other FC SCSI Disk Drive
hdisk4 Available MPIO Other FC SCSI Disk Drive
hdisk5 Available MPIO Other FC SCSI Disk Drive
..
$ lspv
..
hdisk3 00c3e35c99c0a332 None
hdisk4 00c3e35c99c0a51c None
hdisk5 00c3e35ca560f919 None
..
In this case hdisk5 is the ESS disk that we are going to use for MPIO.
Then run the following command to list the attributes of the disk that you chose for MPIO:
$ lsdev -dev hdisk5 -attr
..
algorithm fail_over Algorithm True
..
lun_id 0x5463000000000000 Logical Unit Number ID False
..
..
pvid 00c3e35ca560f9190000000000000000 Physical volume identifier False
..
reserve_policy single_path Reserve Policy True
Note down the lun_id, pvid, and reserve_policy of hdisk5.
· Command to change the reservation policy on the disk
You see that the reserve policy is set to single_path. Change this to no_reserve by running the following command:
$ chdev -dev hdisk5 -attr reserve_policy=no_reserve
hdisk5 changed
On the second VIO server:
On the second VIO server too, find the hdisk# that has the same pvid; it may be a different hdisk number than on the first VIO server, but the pvid should be the same.
$ lspv
..
hdisk7 00c3e35ca560f919 None
..
The pvid of hdisk7 is the same as that of hdisk5 on the first VIO server.
$ lsdev -type disk
name status description
..
hdisk7 Available MPIO Other FC SCSI Disk Drive
..
$ lsdev -dev hdisk7 -attr
..
algorithm fail_over Algorithm True
..
lun_id 0x5463000000000000 Logical Unit Number ID False
..
pvid 00c3e35ca560f9190000000000000000 Physical volume identifier False
..
reserve_policy single_path Reserve Policy True
You will note that the lun_id and pvid of hdisk7 on this server are the same as those of hdisk5 on the first VIO server.
$ chdev -dev hdisk7 -attr reserve_policy=no_reserve
hdisk7 changed
· Creating the Virtual Target Device
Now on both VIO servers, run the mkvdev command using the appropriate hdisk#s respectively:
$ mkvdev -vdev hdisk# -vadapter vhost# -dev vhdisk#
The above command will fail on the second VIO server if the reserve_policy was not first set to no_reserve on the hdisk.
After the above command runs successfully on both servers, the same LUN is exported to the client from both servers.
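For the example above, this would look as follows on the first and second VIO server respectively (vhost0 and the vclient_disk0 device name are illustrative assumptions):
$ mkvdev -vdev hdisk5 -vadapter vhost0 -dev vclient_disk0
$ mkvdev -vdev hdisk7 -vadapter vhost0 -dev vclient_disk0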
· Check for correct mapping between the server and the client


Double-check via the HMC that the slot numbers of the client virtual SCSI adapters match the respective slot numbers on the servers. In the example, slot number 4 of the client virtual SCSI adapter maps to slot number 5 of the VIO server VIO1_nimtb158, and slot number 5 of the client virtual SCSI adapter maps to slot number 5 of the VIO server VIO1_nimtb159.
· On the client partition
Now you are ready to install the client. You can install the client using any of the following methods, described in the Redbook on virtualization at http://www.redbooks.ibm.com/redpieces/abstracts/sg247940.html:
1. NIM installation
2. Alternate disk installation
3. Using the CD media
Once you install the client, run the following commands to check for MPIO:
# lsdev -Cc disk
hdisk0 Available Virtual SCSI Disk Drive
# lspv
hdisk0 00c3e35ca560f919 rootvg active
# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1
· Dual Path
When one of the VIO servers goes down, the path coming from that server shows up as Failed in the lspath output.
# lspath
Failed hdisk0 vscsi0
Enabled hdisk0 vscsi1
· Path Failure Detection
The path remains in the "failed" state even after the VIO server is up again. We need to either change the status back to "enabled" with the chpath command (as shown below), or set the attributes hcheck_interval and hcheck_mode to "60" and "nonactive" respectively, so that a path failure (and recovery) is detected automatically.
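For example, to manually re-enable the failed path from the first VIO server (hdisk0 and vscsi0 as in the example above):
# chpath -l hdisk0 -p vscsi0 -s enable
# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1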
· Setting the related attributes
Here is the command to be run for setting the above attributes on the client partition:
$ chdev -l hdisk# -a hcheck_interval=60 -a hcheck_mode=nonactive -P
The VIO AIX client needs to be rebooted for the hcheck_interval attribute to take effect.
· EMC for Storage
In case an EMC device is used as the storage attached to the VIO servers, make sure of the following:
1. PowerPath version 4.4 is installed on the VIO servers.
2. Create hdiskpower devices which are shared between both VIO servers.
· Additional Information
Another thing to take note of is that you cannot use the same name for the Virtual SCSI Server Adapter and the Virtual Target Device; the mkvdev command will error out if the same name is used for both.
$ mkvdev -vdev hdiskpower0 -vadapter vhost0 -dev hdiskpower0
Method error (/usr/lib/methods/define -g -d):
0514-013 Logical name is required.
The reserve attribute is named differently for an EMC device than for an ESS or FAStT storage device: it is "reserve_lock".
Run the following command as padmin to check the value of the attribute:
$ lsdev -dev hdiskpower# -attr reserve_lock
Run the following command as padmin to change the value of the attribute:
$ chdev -dev hdiskpower# -attr reserve_lock=no
· Commands to change the Fibre Channel Adapter attributes
Also change the following attributes of the fscsi# devices: fc_err_recov to "fast_fail" and dyntrk to "yes":
$ chdev -dev fscsi# -attr fc_err_recov=fast_fail dyntrk=yes -perm
The reason for changing fc_err_recov to "fast_fail" is that if the Fibre Channel adapter driver detects a link event, such as a lost link between a storage device and a switch, any new I/O or future retries of the failed I/Os will be failed immediately by the adapter, until the adapter driver detects that the device has rejoined the fabric. The default setting for this attribute is "delayed_fail". Setting the dyntrk attribute to "yes" makes AIX tolerate cabling changes in the SAN.
The VIOS needs to be rebooted for the fscsi# attributes to take effect.
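To verify the settings afterwards (fscsi0 used for illustration), run as padmin:
$ lsdev -dev fscsi0 -attr fc_err_recov
$ lsdev -dev fscsi0 -attr dyntrk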


Moving an LPAR to another Server

Steps for migrating an LPAR

1. Have Storage zone the LPAR's disks to the new HBA(s). Also have them add an additional 40GB drive for the new boot disk. By doing this we have a back-out path to the old boot disk on the old frame.
2. Collect data from the current LPAR:
a. Network information - write down the IP address and IPv4 alias(es) for each interface
b. Run "oslevel -r" - you will need this when setting up NIM for the mksysb recovery
c. Is the LPAR running AIO? If so, it will need to be configured after the mksysb recovery
d. Run "lspv" and save the output; it contains volume group and PVID information
e. Any other customizations you deem necessary

3. Create a mksysb backup of this LPAR (see the sketch after this list)
4. Reconfigure the NIM machine for this LPAR with the new Ethernet MAC address. The foolproof method is to remove the machine and re-create it.
5. In NIM, configure the LPAR for a mksysb recovery. Select the appropriate SPOT and LPP source, based on the "oslevel -r" data collected in step 2.
6. Shut down the LPAR on the old frame (halt the LPAR)
7. Move the network cables, fibre cables, disk, and zoning, if needed, to the LPAR on the new frame
8. On the HMC, bring up the LPAR on the new frame in SMS mode and select a network boot. Verify that the SMS profile has only a single HBA (if CLARiiON-attached, zoned to a single SP), otherwise the recovery will fail with a 554.
9. Follow the prompts for building a new OS. Select the new 40GB drive as the boot disk (use the lspv info collected in step 2 to identify the correct 40GB drive). Leave the defaults (NO) for the remaining questions (shrink file systems, recover devices, and import volume groups).
10. After the LPAR has booted, from the console (the network interface may be down):
a. lspv                       Note the hdisk# of the boot disk
b. bootlist -m normal -o      Verify the boot list is set; if not, set it:
   bootlist -m normal hdisk#
c. ifconfig en0 down          If the interface got configured, down it
d. ifconfig en0 detach        and remove it
e. lsdev -Cc adapter          Note the Ethernet interfaces (e.g. ent0, ent1)
f. rmdev -dl en#              Remove all en devices
g. rmdev -dl ent#             Remove all ent devices
h. cfgmgr                     Will rediscover the en/ent devices
i. chdev -l ent# -a media_speed=100_Full_Duplex   Set on each interface unless running GIG; in that case leave the defaults
j. Configure the network interfaces and aliases, using the info recorded in step 2:
   mktcpip -h <hostname> -a <ip address> -m <netmask> -i <interface> -g <gateway> -A no -t N/A -s
   chdev -l en# -a alias4=<alias ip>,<netmask>
k. Verify that the network is working.

11. If the LPAR was running AIO (data collected in step 2), verify it is running (smitty aio)
12. Check for any other customizations which may have been made on this LPAR
13. Vary on the volume groups; use the "lspv" data collected in step 2 to identify by PVID a hdisk in each volume group. Run for each volume group:
a. importvg -y <vgname> hdisk#   Imports and varies on the volume group the hdisk belongs to
b. varyonvg <vgname>
c. mount all                     Verify the mounts are good
14. Verify the paging space is configured appropriately
a. lsps -a                       Look for Active and Auto set to yes
b. chps -ay pagingXX             Run for each paging space, sets Auto
c. swapon /dev/pagingXX          Run for each paging space, sets Active

15. Verify the LPAR is running the 64-bit kernel
a. bootinfo -K                   If 64, you are good
b. ln -sf /usr/lib/boot/unix_64 /unix     If 32, change to run 64-bit:
c. ln -sf /usr/lib/boot/unix_64 /usr/lib/boot/unix
d. bosboot -ak /usr/lib/boot/unix_64

16. If the LPAR has PowerPath:
a. Run "powermt config"          Creates the powerpath0 device
b. Run "pprootdev on"            Sets PowerPath control of the boot disk
c. If CLARiiON, make configuration changes to enable SP failover:
   chdev -l powerpath0 -Pa QueueDepthAdj=1
   chdev -l fcsX -Pa num_cmd_elems=2048          For each fibre adapter
   chdev -l fscsiX -Pa fc_err_recov=fast_fail    For each fibre adapter
d. Halt the LPAR
e. Activate the Normal profile   If Sym/DMX, verify two HBAs are in the profile
f. If CLARiiON-attached, have Storage add a zone to the 2nd SP
   i. Run cfgmgr                 Configures the 2nd set of disks
g. Run "pprootdev fix"           Puts the rootvg PVIDs back on the hdisks
h. lspv | grep rootvg            Get the boot disk hdisk#s
i. bootlist -m normal hdisk# hdisk#   Set the boot list with both hdisks

17. From the HMC, remove the LPAR profile from the old frame
18. Pull the cables from the old LPAR (Ethernet and fibre), and deactivate the patch panel ports
19. Update documentation: Server Master, the AIX hardware spreadsheet, and the patch panel spreadsheet
20. Return the old boot disk to Storage.
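As a minimal sketch of the mksysb backup from step 3 (the /mksysb target directory, for example an NFS mount, is an illustrative assumption):
# mksysb -i /mksysb/$(hostname).mksysb
The -i flag generates a fresh /image.data file as part of the backup.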
How will you install optional software on an AIX machine?
Firstly, what is optional software? Optional software is software that is not pre-installed when you install an AIX machine.

Secondly, how will you identify a software product? OS software is identified using the following format:

versionnumber.releasenumber.modificationlevel.fixlevel
Version number - 1 to 2 digits. Release number - 1 to 2 digits. Modification level - 1 to 4 digits. Fix level - 1 to 4 digits.

Now let us look at some of the key terms needed to go further:
1. Fileset - The smallest installable unit for the AIX OS. An example of a completely installable unit is bos.net.uucp; an example of a separately installable part of a product is bos.net.nfs.client.
2. Package - A group of separately installable filesets that provides a set of related functions. E.g.: bos.net
3. Licensed Program Product (LPP) - A complete software product, including all packages associated with that licensed program. E.g.: BOS
4. Bundle - A list of software, containing filesets, packages, and LPPs, grouped together for a specific use. Examples: server bundle, network bundle, graphics bundle
5. PTF - Program Temporary Fix. A temporary solution to a problem that results from a defect in a current, unaltered release of the program, as diagnosed by IBM.
6. APAR - Authorized Program Analysis Report. A report of a problem caused by a defect in a current, unaltered release of the program, for which an emergency fix may be provided.

Let's say you already have an older version of a piece of software on your system, and you want to install a newer version. In this case, you can install the software in applied mode. Installing in applied mode keeps the older version on the system in an unavailable state (it is not removed), while the newer version is applied and made available to you; both versions remain on disk. If you are satisfied with the newer version, you can commit it: once committed, the saved older version is removed. If instead you reject the newer version, the older version is made available again and the newer version is removed.
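A hedged sketch of this apply/commit/reject cycle with installp (the fileset name and the installation device are illustrative):
# installp -agX -d /dev/cd0 bos.net.nfs.client      (apply the new level, with prerequisites)
# installp -s                                       (list software in the applied state)
# installp -c bos.net.nfs.client                    (commit the new level, removing the saved old one)
# installp -r bos.net.nfs.client                    (or reject it, reverting to the old level)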
Coming to the final part of the article, let us get down to the business of installing software.

The software can also be installed using SMIT. You can list the installed software using the following commands:
# smit list_installed
# lslpp -l bos.rte.*