Virtual I/O Server installation overview
The Virtual I/O Server is a dedicated partition that runs a special operating system called IOS. This type of partition has physical resources assigned to it in its HMC profile. The administrator issues IOS commands on the server partition to create virtual resources that present virtual LAN adapters, virtual SCSI adapters, and virtual disk drives to client partitions. The client partitions' operating systems recognize these resources as physical devices. The Virtual I/O Server manages the interaction between the client LPAR and the physical device backing the virtualized service. Once the administrator logs in to the Virtual I/O Server as the user padmin, he or she has access to a restricted Korn shell session and uses IOS commands to create, change, and remove these physical and virtual devices as well as to configure and manage the VIO server. Executing the help command on the VIO server command line lists the commands that are available in padmin's restricted Korn shell session.
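For example, running help in padmin's restricted shell prints the available commands; the listing below is abbreviated and illustrative, and the exact set of commands depends on the installed IOS level:

$ help
  lsdev    mkvdev    cfgdev    rmdev           (device commands)
  mkvg     mklv      lspv      lsmap           (storage and mapping commands)
  mktcpip  lstcpip   entstat   netstat         (network commands)
  license  ioslevel  updateios oem_setup_env   (install and maintenance)
  ...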
Virtual I/O Server installation
VIO Server code is packaged and shipped as an AIX mksysb image on a VIO DVD
Installation methods
– DVD install
– HMC install - open an rshterm session and type “installios”; follow the prompts (see the sketch below)
– Network Installation Manager (NIM)
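A minimal sketch of the HMC method, assuming the VIO installation DVD is in the HMC's drive; the prompt text is paraphrased and the exact questions vary with the HMC level:

hscroot@hmc:~> installios
  (installios prompts for the managed system, the Virtual I/O Server
   partition and profile, the installation source, and the IP address,
   netmask and gateway used to network-boot the partition; once the
   answers are supplied, the HMC netboots the partition and installs
   the mksysb image)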
VIO Server can support multiple client types
– AIX 5.3
– SUSE Linux Enterprise Server 9 or 10 for POWER
– Red Hat Enterprise Linux AS for POWER Version 3 and 4
Virtual I/O Server Administration
The VIO server uses a command line interface running in a restricted shell
– no smitty or GUI
There is no root login on the VIO Server
A special user – padmin – executes VIO server commands
On first login after install, user padmin is prompted to change the password
After that, padmin runs the command “license -accept”
Slightly modified commands are used for managing devices, networks, code installation and maintenance, etc.
The padmin user can start a root AIX shell for setting up third-party devices using the command “oem_setup_env” (see the example session below)
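A first-login session then looks roughly like this; the prompts are paraphrased and the exact messages depend on the IOS level:

login: padmin
padmin's New password:             (forced password change on first login)
Enter the new password again:
$ license -accept
$ oem_setup_env                    (root AIX shell for third-party setup)
# exit
$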
Enabling the Advanced POWER Virtualization Feature
IBM Virtual I/O Server
The Virtual I/O Server is part of the IBM eServer p5 Advanced Power Virtualization hardware feature. Virtual I/O Server allows sharing of physical resources between LPARs including virtual SCSI and virtual networking. This allows more efficient utilization of physical resources through sharing between LPARs and facilitates server consolidation.
Installation
You have two options to install the AIX-based VIO Server:
1. Install from CD
2. Install from network via an AIX NIM-Server
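For the network method, the VIO mksysb image is registered as resources on the NIM master and pushed with an ordinary bos_inst operation; this is only a sketch, and the resource and machine names (vios_mksysb, vios_spot, vios1) are hypothetical:

# nim -o bos_inst -a source=mksysb -a mksysb=vios_mksysb \
      -a spot=vios_spot -a accept_licenses=yes vios1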
Installation method #1 is probably the more frequently used method in a pure Linux environment, as installation method #2 requires the presence of an AIX NIM (Network Installation Management) server. Both methods differ only in the initial boot step and are then the same. They both lead to the following installation screen:
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM      STARTING SOFTWARE       IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM        PLEASE WAIT...        IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
Elapsed time since release of system processors: 51910 mins 20 secs
-------------------------------------------------------------------------------
Welcome to the Virtual I/O Server.
boot image timestamp: 10:22 03/23
The current time and date: 17:23:47 08/10/2005
number of processors: 1    size of memory: 2048MB
boot device: /pci@800000020000002/pci@2,3/ide@1/disk@0:\ppc\chrp\bootfile.exe
SPLPAR info: entitled_capacity: 50 platcpus_active: 2
This system is SMT enabled: smt_status: 00000007; smt_threads: 2
kernel size: 10481246; 32 bit kernel
-------------------------------------------------------------------------------
The next step then is to define the system console. After some time you should see the following screen:
******* Please define the System Console. *******
Type a 1 and press Enter to use this terminal as the system console.
Then choose the language of the installation:
>>> 1 Type 1 and press Enter to have English during install.
This is the main installation menu of the AIX-based VIO-Server:
Welcome to Base Operating System
Installation and Maintenance
Type the number of your choice and press Enter. Choice is indicated by >>>.

>>> 1 Start Install Now with Default Settings
2 Change/Show Installation Settings and Install
3 Start Maintenance Mode for System Recovery
88 Help ? 99 Previous Menu
>>> Choice [1]:
Select the hard disk on which to install the VIO base operating system, just as you would for an AIX base operating system installation.
Once the installation is over, you will get a login prompt similar to that of an AIX server.
The VIO server is essentially AIX with virtualization software loaded on top of it. Generally, no applications are hosted on the VIO server; it is used for sharing I/O resources (disk and network) with the client LPARs hosted in the same physical server.
Initial setup
After the reboot you are presented with the VIO Server login prompt. You cannot log in as user root; you have to use the special user ID padmin. No initial password is set, and immediately after login you are forced to set a new one.
Before you can do anything you have to accept the I/O Server license.
This is done with the license command:
$ license -accept
Once you are logged in as user padmin you find yourself in a restricted Korn shell with only a limited set of commands. You can see all available commands with the command help. All these commands are shell aliases to a single SUID-binary called ioscli which is located in the directory /usr/ios/cli/bin. If you are familiar with AIX you will recognize most commands but most command line parameters differ from the AIX versions.
As there are no man pages available, you can see all options for each command by issuing help followed by the command name. Here is an example for the command lsmap:
$ help lsmap
Usage: lsmap {-vadapter ServerVirtualAdapter | -plc PhysicalLocationCode | -all}
             [-net] [-fmt delimiter]
Displays the mapping between physical and virtual devices.
-all Displays mapping for all the server virtual adapter
devices.
-vadapter Specifies the server virtual adapter device
by device name.
-plc Specifies the server virtual adapter device
by physical location code.
-net Specifies supplied device is a virtual server
Ethernet adapter.
-fmt Divides output by a user-specified delimiter.
A very important command is oem_setup_env, which gives you access to the regular AIX command line interface. This is provided solely for the installation of OEM device drivers.
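A minimal sketch of installing an OEM driver this way; the fileset name vendor.driver.rte and the media device /dev/cd0 are hypothetical placeholders:

$ oem_setup_env
# installp -agXd /dev/cd0 vendor.driver.rte
# exit
$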
Virtual SCSI setup
To map a LV:
# mkvg: creates the volume group, where a new LV will be created using the mklv command
# lsdev: shows the virtual SCSI server adapters that could be used for mapping with the LV
# mkvdev: maps the virtual SCSI server adapter to the LV
# lsmap -all: shows the mapping information
To map a physical disk (see the example further below)
# lsdev: shows the virtual SCSI server adapters that could be used for mapping with a physical disk
# mkvdev: maps the virtual SCSI server adapter to a physical disk
# lsmap -all: shows the mapping information
Client partition commands
No commands are needed; the Linux kernel is notified immediately
Create new volume group datavg with member disk hdisk1
# mkvg -vg datavg hdisk1
Create new logical volume vdisk0 in volume group
# mklv -lv vdisk0 datavg 10G
Map the virtual SCSI server adapter to the logical volume
# mkvdev -vdev vdisk0 -vadapter vhost0
Display the mapping information
# lsmap -all
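Mapping a whole physical disk instead of a logical volume follows the same pattern; hdisk2, vhost1 and the virtual target device name vtscsi1 are example names:

List the candidate disks and virtual SCSI server adapters
# lsdev -type disk
# lsdev -virtual
Map the virtual SCSI server adapter to the physical disk
# mkvdev -vdev hdisk2 -vadapter vhost1 -dev vtscsi1
Display the mapping information
# lsmap -vadapter vhost1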
Virtual Ethernet setup
To list all virtual and physical adapters use the lsdev -type adapter command.
$ lsdev -type adapter
name status description
ent0 Available 2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)
ent1 Available 2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)
ent2 Available Virtual I/O Ethernet Adapter (l-lan)
ide0 Available ATA/IDE Controller Device
sisscsia0 Available PCI-X Dual Channel Ultra320 SCSI Adapter
vhost0 Available Virtual SCSI Server Adapter
vhost1 Available Virtual SCSI Server Adapter
vhost2 Available Virtual SCSI Server Adapter
vhost3 Available Virtual SCSI Server Adapter
vsa0 Available LPAR Virtual Serial Adapter
Choose the virtual Ethernet adapter we want to map to the physical Ethernet adapter:
$ lsdev -virtual
name status description
ent2 Available Virtual I/O Ethernet Adapter (l-lan)
vhost0 Available Virtual SCSI Server Adapter
vhost1 Available Virtual SCSI Server Adapter
vhost2 Available Virtual SCSI Server Adapter
vhost3 Available Virtual SCSI Server Adapter
vsa0 Available LPAR Virtual Serial Adapter
The command mkvdev maps a physical adapter to a virtual adapter, creates a layer 2 network bridge and defines the default virtual adapter with its default VLAN ID. It creates a new shared Ethernet adapter, e.g., ent3.
Make sure the physical and virtual interfaces are unconfigured (down or detached).
Scenario A (one VIO server)
Create a shared ethernet adapter ent3 with a physical one (ent0) and a virtual one (ent2) with PVID 1:
$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
ent3 Available
en3
et3
This has created a new shared ethernet adapter ent3 (you can verify that with the lsdev command). Now configure the TCP/IP settings for this new shared ethernet adapter (ent3). Please note that you have to specify the interface (en3) and not the adapter (ent3).
$ mktcpip -hostname op710-1-vio -inetaddr 9.156.175.231 -interface en3 -netmask 255.255.255.0 -gateway 9.156.175.1 -nsrvaddr 9.64.163.21 -nsrvdomain ibm.com
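You can check the new device with the lsdev command; the output below is abbreviated and illustrative:

$ lsdev -dev ent3
name status description
ent3 Available Shared Ethernet Adapter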
Scenario B (two VIO servers)
Create a shared ethernet adapter ent3 with a physical one (ent0) and a virtual one (ent2) with PVID 1:
$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
Configure the TCP/IP settings for the new shared ethernet adapter (ent3):
$ mktcpip -hostname op710-1-vio -inetaddr 9.156.175.231 -interface en3 -netmask 255.255.255.0 -gateway 9.156.175.1 -nsrvaddr 9.64.163.21 -nsrvdomain ibm.com
Client partition commands
No new commands are needed; just the typical TCP/IP configuration is done on the virtual Ethernet interface that is defined in the client partition profile on the HMC
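For example, on a Linux client partition the virtual adapter appears as an ordinary Ethernet device and is configured the usual way; the interface name eth0 and the addresses are illustrative assumptions:

# ip addr add 9.156.175.232/24 dev eth0
# ip link set eth0 up
# ip route add default via 9.156.175.1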