AIX (Advanced Interactive eXecutive) is a series of proprietary Unix operating systems developed and sold by IBM.
Performance Optimization With Enhanced RISC (POWER) version 7 gives the AIX operating system a unique performance advantage.
POWER7 features new capabilities using multiple cores and multiple CPU threads, creating a pool of virtual CPUs.
AIX 7 includes a new built-in clustering capability called Cluster Aware AIX (CAA).
AIX POWER7 systems include the Active Memory Expansion feature.

Wednesday, August 31, 2011

AIX BOOTING


Introduction
The initial step in booting is the Power-On Self Test (POST). Its purpose is to verify that the basic hardware is in a functional state. The memory, keyboard, and communication and audio devices are also initialized. It is during this step that you can press a function key to choose a different boot list (F1 for maintenance mode, F2 for diagnostic mode in AIX). The System Read Only Storage (System ROS) is specific to each system type. It is necessary for AIX 5L V5.3 to boot, but it does not build the data structures required for booting; instead, it locates and loads the bootstrap code. System ROS contains generic boot information and is operating-system independent.

Software ROS (also called the bootstrap) forms an IPL control block that is compatible with AIX 5L V5.3, takes control, and builds AIX 5L-specific boot information. A special file system located in memory, named the RAMFS file system, is created. Software ROS then locates, loads, and turns control over to the AIX 5L boot logical volume (BLV). Software ROS is AIX 5L information created based on the machine type, and it is responsible for completing the machine preparation needed to start the AIX 5L kernel. A complete list of the files that are part of the BLV can be obtained from the directory /usr/lib/boot.

Boot phase 1
  • The init process started from the RAMFS executes the boot script rc.boot with argument 1. If the init process fails for some reason, code c06 is shown on the LED display.
  • At this stage, the restbase command is called to copy a partial image of the ODM from the BLV into the RAMFS. If this operation is successful, the LED display shows 510; otherwise, LED code 548 is shown.
  • After this, the cfgmgr -f command reads the Config_rules class from the reduced ODM. In this class, devices with the attribute phase=1 are considered base devices. Base devices are those necessary to access rootvg; for example, if rootvg is located on a hard disk, every device from the motherboard to that disk must be initialized so that rootvg can be activated in boot phase 2.
  • At the end of boot phase 1, the bootinfo -b command is called to determine the last boot device. At this stage, the LED shows 511.
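The LED-checkpoint pattern running through these phases can be sketched as a small shell fragment. This is an illustration only: the real rc.boot runs AIX-specific commands (restbase, cfgmgr -f, bootinfo -b), which are replaced here by the stand-in true so the sketch runs anywhere. The LED numbers match the codes described above.

```shell
#!/bin/sh
# Sketch of the rc.boot phase-1 checkpoint style (illustrative only).
# The AIX commands are replaced by the stand-in 'true'.
led() { LED="$1"; echo "LED: $LED"; }   # stand-in for the LED display update

if true; then led 510; else led 548; fi   # restbase: copy partial ODM to RAMFS
true                                      # cfgmgr -f: configure base devices
led 511                                   # bootinfo -b: find last boot device
```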

Boot phase 2
  • In boot phase 2, the rc.boot script is called with the parameter 2. During this phase the following steps are taken:
  • The root volume group is varied on with a special version of the varyonvg command named ipl_varyon. If this command is successful, the system displays 517; otherwise one of the LED codes 552, 554, or 556 appears and the boot process is halted.
  • Then the root file system hd4 is checked using the fsck -f command. This verifies only whether the file system was unmounted cleanly before the last shutdown. If the command fails, the LED shows 555.
  • The root file system is mounted on a temporary mount point /mnt in the RAMFS. If this fails, 557 appears on the LED display.
  • The /usr file system is verified using the fsck -f command and then mounted. If the operation fails, 518 appears.
  • The /var file system is verified using the fsck -f command and then mounted.
  • The corecopy command checks whether a dump occurred. If it did, the dump is copied from the default dump device, /dev/hd6 (paging space), to the default copy directory /var/adm/ras; after this, /var is unmounted.
  • Then the primary paging space from rootvg, /dev/hd6, is activated.
  • The mergedev process is called and all /dev files from the RAMFS are copied onto disk. All customized ODM files from the RAM file system are copied to disk; the ODM versions on hd4 and hd5 are now synchronized.
  • Finally, the root file system from the rootvg disk is mounted over the root file system from the RAMFS. The mount points for the rootvg file systems become available.
  • Now the /var and /usr file systems from rootvg are mounted again on their ordinary mount points.
  • There is no console available at this stage, so all boot messages are copied to a boot log; the alog command maintains and manages these logs.
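On a running AIX system the boot log written during this phase can be read back with alog -o -t boot. The alog command keeps each log at a fixed size, silently discarding the oldest lines. That circular behaviour can be imitated portably with tail; this is a sketch of the idea, not the real alog implementation:

```shell
#!/bin/sh
# Imitation of alog's fixed-size log behaviour using tail.
# On AIX itself you would read the boot log with: alog -o -t boot
LOG=/tmp/boot.log.$$
MAXLINES=3
: > "$LOG"
for msg in "varyonvg rootvg" "fsck /dev/hd4" "mount /usr" "mount /var"; do
  echo "$msg" >> "$LOG"
  # keep only the newest $MAXLINES lines, as a fixed-size log would
  tail -n "$MAXLINES" "$LOG" > "$LOG.tmp" && mv "$LOG.tmp" "$LOG"
done
OUT=$(cat "$LOG")   # the oldest entry ("varyonvg rootvg") has been dropped
echo "$OUT"
rm -f "$LOG"
```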

Boot phase 3
  • After phase 2 is completed, rootvg is activated and the following steps are taken: the /etc/init process is started; it reads the /etc/inittab file and calls rc.boot with argument 3.
  • The /tmp file system is mounted.
  • The rootvg is synchronized by calling the syncvg command, launched as a background process. As a result, all stale partitions in rootvg are updated. At this stage, the LED code 553 is shown.
  • At this stage, the cfgmgr command is called; if the system is booted in normal mode, cfgmgr is called with the option -p2. The cfgmgr command reads the Config_rules file from the ODM and calls all methods corresponding to either phase=2 or phase=3. All other devices that are not base devices are configured at this time.
  • Then the console is configured by the cfgcon command. After the console is configured, boot messages are sent to it unless STDOUT has been redirected; all missed messages can be found in /var/adm/ras/conslog. LED codes that can be displayed at this time are: c31 - console not configured (provides instructions to select a console); c32 - console is an LFT terminal; c33 - console is a TTY; c34 - console is a file on disk.
  • Finally, the ODM in the BLV is synchronized with the ODM from the / (root) file system by the savebase command.
  • The syncd daemon and the errdemon are started, and the LED display is turned off.
  • If the file /etc/nologin exists, it will be removed.
  • If there are devices marked as missing in CuDv a message is displayed on console.
  • The message system initialization completed is sent to the console.
  • The execution of rc.boot is complete.
  • Process init will continue processing the next commands from /etc/inittab.
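The console LED codes described in phase 3 (c31 through c34) amount to a simple lookup from console type to code. A small illustrative shell function makes the mapping explicit; the code values come from the text above, while the function itself is hypothetical:

```shell
#!/bin/sh
# Map a console type chosen during phase 3 to its LED code.
# The codes are the ones documented above; the function is illustrative.
console_led() {
  case "$1" in
    none) echo c31 ;;   # console not configured; instructions to select one
    lft)  echo c32 ;;   # console is an LFT terminal
    tty)  echo c33 ;;   # console is a TTY
    file) echo c34 ;;   # console is a file on disk
    *)    return 1 ;;
  esac
}
console_led lft   # prints c32
```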
System initialization
During system startup, after the root file system has been mounted in the pre-initialization process, the following sequence of events occurs:
  1. The init command is run as the last step of the startup process.
  2. The init command attempts to read the /etc/inittab file.
  3. If the /etc/inittab file exists, the init command attempts to locate an initdefault entry in the /etc/inittab file.
a) If the initdefault entry exists, the init command uses the specified run level as the initial system run level.
b) If the initdefault entry does not exist, the init command requests that the user enter a run level from the console.
c) If the user enters an S, s, M, or m run level, the init command enters the maintenance run level. This is the only run level that does not require a properly formatted /etc/inittab file.
  4. If the /etc/inittab file does not exist, the init command places the system in the maintenance run level by default.
  5. The init command rereads the /etc/inittab file every 60 seconds. If the /etc/inittab file has changed since the last time the init command read it, the new commands in the /etc/inittab file are executed.
  • The /etc/inittab file controls the initialization process
  • The /etc/inittab file supplies the script for the init command's role as a general process dispatcher. The process that constitutes the majority of the init command's process-dispatching activities is the /etc/getty line process, which initiates individual terminal lines. Other processes typically dispatched by the init command are daemons and the shell.
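Each /etc/inittab entry has the colon-separated form Identifier:RunLevel:Action:Command. The way init locates the initdefault entry can be sketched with awk against a sample file; the sample entries below are typical in shape but not copied from a real system:

```shell
#!/bin/sh
# Find the initial run level the way init does: locate the initdefault
# entry in an inittab-style file and read its RunLevel field.
SAMPLE=/tmp/inittab.sample.$$
cat > "$SAMPLE" <<'EOF'
init:2:initdefault:
rc:2:wait:/etc/rc 2>&1 | alog -tboot > /dev/console
cons:0123456789:respawn:/usr/sbin/getty /dev/console
EOF

# Fields are colon-separated: Identifier:RunLevel:Action:Command
RUNLEVEL=$(awk -F: '$3 == "initdefault" { print $2 }' "$SAMPLE")
echo "initial run level: $RUNLEVEL"
rm -f "$SAMPLE"
```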



Understanding the Boot Process
During the boot process, the system tests the hardware, loads and executes the operating system, and configures devices. To boot the operating system, the following resources are required:
  • A boot image that can be loaded after the machine is turned on or reset.
  • Access to the root and /usr file systems.
There are three types of system boots:

Hard Disk Boot
A machine is started for normal operations with the key in the Normal position. For more information, see "Understanding System Boot Processing" .
Diskless Network Boot
A diskless or dataless workstation is started remotely over a network. One or more remote file servers provide the files and programs that diskless or dataless workstations need to boot.
Service Boot
A machine is started from a hard disk, network, tape, or CD-ROM with the key set in the Service position. This condition is also called maintenance mode. In maintenance mode, a system administrator can perform tasks such as installing new or updated software and running diagnostic checks. For more information, see "Understanding the Service Boot Process" .

During a hard disk boot, the boot image is found on a local disk created when the operating system was installed. During the boot process, the system configures all devices found in the machine and initializes other basic software required for the system to operate (such as the Logical Volume Manager). At the end of this process, the file systems are mounted and ready for use. For more information about the file system used during boot processing, see "Understanding the RAM File System" .
The same general requirements apply to diskless network clients. They also require a boot image and access to the operating system file tree. Diskless network clients have no local file systems and get all their information by way of remote access.
