WPAR Short Notes

Posted: May 30, 2011 in Uncategorized

WPAR & 6.1

WPAR is a software-based virtualization solution for creating and managing multiple individual AIX OS environments within a single AIX-based LPAR.

Live Partition Mobility: a PowerVM feature; the ability to migrate a running LPAR between systems.

WPARs reduce the number of managed LPARs

Inside a WPAR, an application has the following benefits:

Private execution environments

Dedicated network addresses and filesystems.

Interprocess communication that is restricted to processes executing in the same WPAR

System WPAR: an instance of AIX that contains dedicated writable filesystems and system service daemons. It can share the global environment's /usr and /opt filesystems in read-only mode.

Application WPAR: a WPAR that hosts only a single application or process. It shares the filesystems of the global environment and does not run any system service daemons.

It is not possible to log in to an application WPAR, either locally or remotely.

Global Environment: it owns all physical and virtual resources of the LPAR and allocates resources to the WPARs. Most performance and tuning activities are performed from this environment. A sysadmin must be logged in to the global environment to create, activate, and manage WPARs.

Processes: a process running inside a WPAR can only see other processes in the same WPAR.

Processes running in other WPARs and in the global environment are invisible to it. Processes can only access resources that are explicitly available inside the WPAR.

Users: application WPARs inherit their user profiles from the global environment, so they have the same privileges that the global environment does. System WPARs maintain an independent set of users.

Resources: resources created or owned by the global environment can only be used by the global environment unless they are explicitly shared with a WPAR. Resources created or owned by a WPAR are visible only to that WPAR and to the global environment. To isolate filesystems between system WPARs, a separate directory tree is created under /wpars for each WPAR. Inside this directory each WPAR maintains its own home, tmp, and var directories. A system WPAR will also mount the global environment's /opt and /usr filesystems read-only. Application WPARs do not create their own filesystems, so they are usually allowed access to the filesystems owned by the global environment.

Each system WPAR is assigned its own network address. WPARs running under the same AIX instance communicate via the loopback interface.

When to use workload partitions:

  • Improve application availability
  • Simplify OS and APP management
  • Manage application resource utilization

The upper limit on the number of WPARs that can run within an LPAR is 8192.

WPAR administration:

  • To use the main WPAR menu: smit wpar
  • To use the application WPAR menu: smit manage_appwpar
  • To use the system WPAR menu: smit manage_syswpar

Create a system WPAR: mkwpar -n wpar001

mkwpar -n wpar001 -N address=9.3.5.182

First the OS creates and mounts the WPAR's filesystems. Next it populates them with the necessary system files. Finally it synchronizes the root part of the installed software. When the creation of the new WPAR is complete, it is left in the Defined state.

Starting a WPAR:

lswpar (Defined state):

Name     State  Type  Hostname  Directory
wpar001  D      S     wpar001   /wpars/wpar001

startwpar wpar001 (mounts filesystems and adds the IP address)

lswpar (Active state):

Name     State  Type  Hostname  Directory
wpar001  A      S     wpar001   /wpars/wpar001

You can log in to the WPAR using clogin from the global environment, or via telnet. clogin does not depend on a TCP/IP connection.

To determine whether you are in a WPAR or in the global environment, execute the uname -W command. It returns 0 if you are in the global environment, and a value other than 0 if you are inside a WPAR.
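As a quick sketch of that check in script form (the fallback to 0 is only there so the snippet also runs on non-AIX systems, where uname -W does not exist):

```shell
# On AIX, `uname -W` prints 0 in the global environment and a non-zero
# WPAR ID inside a WPAR. Elsewhere the flag fails, so fall back to 0.
wpar_id=$(uname -W 2>/dev/null || echo 0)
if [ "$wpar_id" -eq 0 ]; then
    echo "global environment"
else
    echo "inside WPAR $wpar_id"
fi
```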

Stopping a WPAR: shutdown -F (stops the WPAR from inside the WPAR)

stopwpar wpar001 (stops the WPAR from the global environment)

-F (forces the WPAR shutdown when stopping it from the global environment)

-N shuts down immediately.

Rebooting a WPAR: shutdown -Fr (reboots the WPAR from inside the WPAR)

rebootwpar wpar001 (reboots from the global environment)

Changing a WPAR:

You can change a WPAR's name only when the WPAR is in the Defined state.

chwpar -n wpar001

Broken state: entered when a WPAR ends up in an undefined condition.

Investigation:

Check the logs (/var/adm/ras, /var/adm/wpars)

Check the processes: ps -@ (shows processes by WPAR)

Removing a WPAR: verify the WPAR is in the Defined state, take a backup, then run rmwpar wpar001.
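Putting the lifecycle commands above together, a minimal create-to-remove session from the global environment might look like this (wpar001 and the address are the example values used in these notes; this is an illustrative AIX-only sketch):

```shell
mkwpar -n wpar001 -N address=9.3.5.182   # create (ends in the Defined state)
startwpar wpar001                        # mount filesystems, add IP, start
clogin wpar001                           # log in without needing TCP/IP
stopwpar wpar001                         # stop from the global environment
rmwpar wpar001                           # remove once back in the Defined state
```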

WPAR states:

Defined (D): the WPAR is created but not started

Active (A): the normal running state

Broken (B): entered when a failure occurs

Transitional (T): the WPAR is in the process of changing from one state to another

Paused (P): reached when a WPAR has had a successful checkpoint or restore of its data

Mobile partitions can be created with the -c flag.

Creating an application WPAR: execute the application within a WPAR using wparexec. Ex: wparexec /hga/myapp

The wparexec command starts myapp immediately after creation. This type of WPAR only exists while the application is running; when the application ends, the WPAR also ends and all of its resources are freed.

If the application WPAR has a dependency on a filesystem that is not mounted, wparexec will mount that filesystem automatically.

lswpar (Transitional state):

Name   State  Type  Hostname  Directory
myapp  T      A     myapp     /

lswpar (Active state):

Name   State  Type  Hostname  Directory
myapp  A      A     myapp     /

lswpar (the entry disappears once the application ends)

File Systems:

Types of File systems: namefs, jfs, jfs2, NFS.

By default the system creates /, /tmp, /home, and /var as jfs2, and /opt, /proc, and /usr as namefs.

Creating a filesystem for a running WPAR: crfs -v jfs2 -m /wpars/wpar001/newfs -u wpr00 -a logname=INLINE -a size=1G

Changing a filesystem: chfs -a size=512M /wpars/wpar001/newfs

Backing up the global environment: stop all WPARs, then run a mksysb, mkdvd, or mkcd command with the -N flag.

The IBM Workload Partition Manager for AIX is a tool for monitoring and managing WPARs.

AIX 6.1

Workload Partition Manager (an extra software package that needs to be installed)

  • Live application mobility (move a partition from one system to another)
  • Automatically move partitions if necessary

AIX 6 prerequisites: POWER4, 5, or 6

WPAR: a lightweight miniature AIX running inside AIX. It is software partitioning rather than hypervisor partitioning.

WPARs share the global system resources with the single copy of AIX: they share the AIX OS kernel, and they share processors, memory, and I/O adapters from the global resources.

Each WPAR shares /usr and /opt with the global AIX read-only.

Private filesystems: /, /tmp, /var, /home.

Its own network IP address and hostname

A separate administrative and security domain

Two types of WPAR:

  • System
  • Application

Live application mobility: moving a running WPAR to another machine or LPAR.

  • Installing a new machine (a very fast way to move WPARs)
  • Multi-system workload balancing (load balancing of CPUs, memory, and I/O)
  • Use mobility when upgrading a machine (AIX or firmware) or for repair

System WPAR: a copy of AIX

  • Create it and it goes to the Defined state; run it to activate it; you can stop it, and remove it if it is not required.
  • A complete virtualized OS environment (runs multiple services and applications)
  • Runs services like inetd, cron, and syslog
  • Has its own root user, users, and groups.
  • Does not share any filesystems with other WPARs or the global system.

Application WPAR:

  • Isolates an individual application
  • Lightweight: one process, which can start further processes
  • Created and started in seconds
  • Starts when created; automatically removed when the application stops
  • Shares the global filesystems
  • Good for HPC (high-performance computing), i.e. long-running applications

WPAR Manager:

  • Install the WPAR agent and it will talk to all the WPARs on a machine
  • The WPAR Manager can see the WPARs running on the machine
  • It can communicate with WPARs by using a web browser
  • A web server is running
  • A graphical interface
  • Create, remove, and move WPARs
  • Start and stop them
  • Monitoring and reporting
  • Manual relocation
  • Automated relocation

Workload application mobility: relocate

  • On the WPAR Manager, select the WPAR → click Relocate → select the target AIX
  • chkptwpar -k → freezes the WPAR, saves the WPAR processes and state to a statefile on NFS, and kills the WPAR processes once they are no longer needed
  • restartwpar → takes the statefile, rebuilds the WPAR processes and state, and starts the WPAR
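The manual checkpoint/restart path can be sketched as the following command sequence (the wp13 name, statefile path, and log paths mirror the example later in these notes; treat them as illustrative, AIX-only commands):

```shell
# On the source AIX (global environment): freeze wp13, save its process
# state to an NFS-backed statefile, and kill the frozen processes (-k).
/opt/mcr/bin/chkptwpar wp13 -d /wpars/wp13/tmp/state \
    -o /wpars/wp13/tmp/checkpoint.log -k

# Remove the WPAR definition on the source, preserving its data (-p).
rmwpar -p wp13

# On the target AIX: rebuild the processes from the statefile and start wp13.
/opt/mcr/bin/restartwpar wp13 -d /wpars/wp13/tmp/state \
    -o /wpars/wp13/tmp/restart.log
```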

Reasons for using WPARs:

  • Reduced system admin time: many applications on one instance reduces installs and updates of AIX, monitoring, backup, recovery, etc.
  • Application encapsulation: treat apps as isolated units → create/remove, start/stop, checkpoint/resume
  • Rapid creation of an environment for a new application
  • Reduced costs: only one copy of AIX, plus shared access to AIX disks.
  • Simple to move an application to a different machine: application mobility, performance balancing

Starting and stopping a WPAR:

  • Access the WPAR Manager console. It is a secure link: https://hostname:14443/ibm/console (log on)
  • Managed systems (an entire physical server or an LPAR) and workload partitions are under the Resource Views tab
  • A WPAR in the Active state is running; not running → Defined; a green tick means mobility is enabled; Transitional state → working
  • Select WPARs in the Defined state → Actions → Start → OK
  • Select WPARs in the Active state → Actions → Stop → select normal stop / hard stop / force WPAR to stop → OK
  • Monitor the action using Monitoring → Task Activity
  • (Or) run /usr/sbin/stopwpar -h sec_wpar on the global system.

Application mobility (moving a WPAR between machines):

  • Check whether the WPAR is mobility-enabled; if not, you cannot move it
  • Select the WPAR → Actions → Relocate → click Browse → OK
  • Monitor the activity under Task Activity on the Monitoring tab.

Creating a WPAR (quick way):

  • New → give the WPAR name → give the hostname → give the managed system → select system/application → if application, give the application name and select/deselect Enable Mobility; if system WPAR, select/deselect Use private /usr and /opt, and Enable Mobility.
  • Give the NFS server and remote directory if you selected Enable Mobility.
  • (Or) /usr/sbin/mkwpar -c -h wparname -n wp13 -M dev=/nfs/wp13root directory=/ host=managed_system mountopts=rw vfs=nfs -R active=yes -S
  • The WPAR is then in the Defined state, so Actions → Start.

Creating a WPAR (detailed way):

  • Guided Activities → Create Workload Partition → Next → select the partition type (system/application) → give the partition name → Next → deploy this WPAR to an existing managed system → give the managed system name → give the password → click on Start workload partition when system starts and Start the WPAR immediately upon deployment → Next → Enable relocation → give the network details → give the NFS server name and remote directory

Mobility between POWER4, POWER5, and POWER6 machines:

  • Compatibility check: select the WPAR → click Actions → Compatibility (shows the managed systems that meet the basic requirements for relocating the selected WPAR)
  • A WPAR cannot move between different machine generations, e.g. POWER4 to POWER5. First stop the WPAR and remove it with the Preserve local file systems on server option; the WPAR is then in the Undeployed state. Then click the WPAR → Actions → Deploy → enter the target system → click Start the WPAR immediately upon deployment and Preserve file systems → OK.

WPAR properties:

  • Change properties: select the WPAR → Actions → View/Modify WPAR
  • Change the processors using Resource Control

Access and control via the command line:

  • lswpar → gives WPAR details (name, state, type, hostname, directory)
  • startwpar mywpar
  • stopwpar -hN mywpar
  • lswpar -L mywpar
  • mkwpar -n first
  • mkwpar -n -h -N netmask address
  • -c for checkpoint
  • -M directory=/ vfs=nfs host=9.9.9.9 dev=/nfs/wp13 /opt
  • startwpar wp13
  • clogin wp13

Application mobility:

Source AIX: /opt/mcr/bin/chkptwpar wp13 -d /wpars/wp13/tmp/state -o /wpars/wp13/tmp/checkpoint.log -k

rmwpar -p wp13

Target AIX: /opt/mcr/bin/restartwpar wp13 -d /wpars/wp13/tmp/state -o /wpars/wp13/tmp/restart.log

Running an application WPAR:

wparexec -n temp -h hostname /usr/bin/sleep 30

Process: starting the WPAR, mounting, loading, stopping

Comparing a WPAR and global AIX:

  • WPAR: df -n (/, /home, /tmp, /var → NFS mounts; /opt, /usr → read-only)
  • host wp13 → hostname and IP address
  • All IP addresses of WPARs must be placed as IP aliases on the global AIX
  • No physical volumes are visible in WPARs
  • No paging space is visible in WPARs
  • All processes running in WPARs are also visible on the global AIX
  • ps -ef -@ | pg (an extra column shows the WPAR name)
  • ps -ef -@ wp13 | pg
  • topas -@ on the global AIX
  • topas inside a WPAR shows some results for the WPAR and some for the global AIX: yellow values are global AIX, white are the WPAR.

NIM Short Notes

NIM

Master: the machine where you set up and maintain your NIM environment.

Client: a target for NIM master operations, such as installation and updates.

NIM classes: machines, networks, resources, groups

Group: a collection of machines or resources

Resources: lpp_source, SPOT, mksysb, bosinst_data, script, image_data, installp_bundle

lsnim – lists the contents of the NIM master's database

lsnim -c machines → shows the machine names

lsnim -l

/etc/bootptab: this file is used by the bootpd daemon. With no operations in progress, this file is empty. It is updated automatically by the NIM master when a NIM operation is executed that requires the client machine to boot from a NIM SPOT.

/etc/exports: any sort of installation, boot, mksysb, or savevg operation requires the use of NFS. This file is updated with the locations that are NFS-exported from the master to the client and the permissions associated with those exports.

/etc/hosts: it maps a system's hostname to an IP address. If an IP address does not match up to the correct hostname, your installation fails.

/etc/niminfo: this file should always exist on the NIM master. It is built when you first initialize the NIM environment and is required to run nim commands and perform NIM operations. If /etc/niminfo is accidentally deleted, you can rebuild it.

/tftpboot: the main purpose of this directory is to hold the boot images created by NIM when a boot or installation is initiated. It also holds informational files about the clients that are having a boot or installation operation performed.

SPOT: Shared Product Object Tree. A directory of code (installed filesets) that is used during the client boot procedure. Its content is equivalent to a /usr filesystem (binaries, executables, libraries, header files, and shell scripts).

Boot images exist in the /tftpboot directory; kernels are stored there as well.

lsnim -t spot → lists the available SPOTs

To find the oslevel -r output, use lsnim -l . If the SPOT and mksysb are not at the same level, installation will only work if the SPOT is at a higher level than the mksysb.

lpp_source: similar to the AIX install CDs. It contains AIX Licensed Program Products (LPPs) in Backup File Format.

An lpp_source with the attribute simages=yes can be used to create a SPOT and to install the AIX operating system.

An lpp_source with the attribute simages=no cannot be used to install the base AIX operating system.

lpp_source types: lsnim -t lpp_source

mksysb: this resource is a file containing an image of the root volume group of a machine.

It is used to restore a machine.

Defining a mksysb resource: nim -o define -t mksysb -a source= -a server=master -a location= resource_name

lsnim -t mksysb

bosinst_data: a flat ASCII file, like the bosinst.data used for restoring system backup images from tape or CD/DVD. This resource enables push/pull installation of multiple machines at the same time.

script: contains the commands to perform customization, such as filesystem resizing and additional user creation.

To start a NIM environment:

  1. Select a machine to be the master
  2. Install AIX on the master
  3. Install the NIM filesets: bos.sysmgt.nim.master, bos.sysmgt.nim.spot
  4. Configure the selected machine as the NIM master using smitty nimconfig → mention the network name and interface: nimconfig -a netname=net_10_1_1 -a pif_name=en0 -a netboot_kernel=mp -a cable_type=tp -a client_reg=no
  5. When a machine is added to the NIM environment, the /etc/niminfo file is created.
  6. To rebuild the NIM master's /etc/niminfo file, use the nimconfig -r command
  7. To rebuild and recover a NIM client's /etc/niminfo file, use the niminit command. Ex: niminit -a master= -a name=
  8. Create filesystems for NIM → the lpp_source and SPOT resources are directories, and the related filesystems must be created.
  9. Define the basic resources (lpp_source, SPOT) → smitty nim_mkres
  10. Define the client (smitty nim_mkmac)
  11. Start the client installation (smitty nim_task_inst)
  12. Verify /etc/bootptab
  13. Verify that the boot files were created in /tftpboot
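The core of the setup steps above can be sketched as a command sequence (an illustrative, AIX-only sketch; the network and resource names, and the filesystem sizes, are example values, not fixed requirements):

```shell
# Step 3: install the NIM master filesets from the first install CD
installp -acgXd /dev/cd0 bos.sysmgt.nim.master bos.sysmgt.nim.spot

# Step 4: initialize the master (this creates /etc/niminfo)
nimconfig -a netname=net_10_1_1 -a pif_name=en0 \
          -a netboot_kernel=mp -a cable_type=tp -a client_reg=no

# Step 8: filesystems to hold the resources (sizes are examples)
crfs -v jfs2 -g rootvg -m /export/lpp_source -a size=6G
crfs -v jfs2 -g rootvg -m /export/spot -a size=2G

# Step 9: define an lpp_source from CD, then build a SPOT from it
nim -o define -t lpp_source -a server=master \
    -a location=/export/lpp_source/lpp5300 -a source=/dev/cd0 lpp5300
nim -o define -t spot -a server=master \
    -a location=/export/spot -a source=lpp5300 spot5300
```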

NIM daemons: nimesis, nimd, bootpd, tftpd

The NIM master uses bootpd and tftpd.

The bootpd daemon uses the /etc/bootptab file when a NIM client is configured to boot from the NIM master.

The tftpd daemon uses the /etc/tftpaccess.ctl file to determine which directory hierarchies it is allowed to share.

The /var/adm/ras directory contains the NIM master log files.

The /var/adm/ras/nimlog file contains information about failed NIM operations.

Use the alog command to view the NIM logs: alog -f /var/adm/ras/nimlog -o

Estimation of minimum disk requirements: lpp_source → 6 GB, SPOT → 2 GB, mksysb → 40 GB

File System Hierarchy:

/tftpboot: used for the NIM master boot images (kernels) and info files for the NIM clients.

/export/lpp_source: used for storing versions of the AIX base-level filesets in specific directories.

/export/spot: used for storing non-/usr SPOTs.

/export/images: used for storing system backup images. Images can be created by NIM mksysb.

/export/mksysb: holds the mksysb image files to install on clients, approximately 1.5 GB per image.

/export/res: for bosinst_data, image_data, and scripts.

/export/53 contains lppsource_53TL6 and spot_53TL6.

The NIM server's size depends on how many versions of AIX filesets, TLs, PTFs, and Service Packs you keep.

Filesets for the NIM master: bos.net.tcp.server, bos.net.nfs.server, bos.sysmgt.nim.master, bos.sysmgt.nim.spot.

Master config: smitty nim → Configure NIM environment → Advanced configuration → Initialize NIM master only (give details like the network name and interface).

Making the lpp_source:

  • Copy software from CD or DVD into the /export/53 filesystem: smitty bffcreate (give the input device, the software packages to copy, and the directory for storing the software packages)
  • Define it as a NIM resource: smitty nim → Configure the NIM environment → Advanced configuration → Create basic installation resources → Create a new lpp_source (give the resource server, lpp_source name, and lpp_source directory)

Making the SPOT: smitty nim → Configure the NIM environment → Advanced configuration → Create basic installation resources → Create a new SPOT (give the resource server, input device, SPOT name, and SPOT directory).

NIM Configuration:

Define a client machine: smitty nim → Perform NIM administration tasks → Manage machines → Define a machine (NIM machine name, machine type (standalone), hardware platform type (chrp), kernel to use for network boot (mp), cable type (tp))

Display NIM network objects: lsnim -l -c networks

The basic NIM installation resources:

1) one NIM lpp_source and one SPOT

2) for a mksysb installation, a mksysb resource and a SPOT

Define an lpp_source: nim -o define -t lpp_source -a server=master -a location=/export/lpp_source/lpp5300 -a source=/dev/cd0 lpp5300

Creating a NIM lpp_source from a directory → nim -o define -t lpp_source -a server=master -a location=/export/lpp_source/lpp5300 lpp5300

Removing a NIM lpp_source: nim -o remove lpp5300

Checking a NIM lpp_source: nim -Fo check lpp5304

Creating a NIM SPOT: nim -o define -t spot -a server=master -a location=/export/spot -a source=lpp5300 -a installp_flags=-aQg spot5300

Listing filesets in a SPOT: nim -o lslpp -a filesets=all -a lslpp_flags=La spot6100-01

nim -o lslpp spot6100-01

Listing fixes in a SPOT: nim -o fix_query spot6100-01

TL of a SPOT: lsnim -l spot6100-01 | grep oslevel

Listing client filesets: nim -o lslpp -a filesets=all -a lslpp_flags=-La client

Removing a NIM SPOT → nim -o remove spot5300

Checking a SPOT → nim -o check spot5300

Resetting a NIM SPOT → nim -Fo check spot5300

Creating a NIM client → nim -o define -t standalone -a if1="net_10_1_1 lpar55 0 ent0" LPAR55

Define NIM machines using smit nim_mkmac

Removing a NIM client definition: nim -o remove LPAR55

Installing NIM Clients:

Base Operating System Installation

System Clone installation

Automated customization after a generic BOS install.

BOS install through NIM:

  • nim -o allocate -a spot=spot5304 -a lpp_source=lpp5304 LPAR55
  • Initiate the install: nim -o bos_inst -a source=spot5304 -a installp_flags=agX -a accept_licenses=yes LPAR55
  • If the installation is unsuccessful, you need to reallocate the resources
  • Reset and deallocate NIM resources: nim -Fo reset LPAR55; nim -Fo deallocate -a subclass=all LPAR55
  • View the progress of the installation: nim -o showlog -a log_type=boot LPAR55

Using SMIT to install a standalone client: smitty nim_bosinst → select a target for the operation → select the installation type → select the lpp_source → select the SPOT

After the initial program load → SMS menu → Setup Remote IPL → Interpartition Logical LAN → select the IP parameters (client IP, server IP, gateway, subnet mask) → Ping Test → Execute Ping Test → Select Boot Options → Select Install/Boot Device (Network) → select the normal boot mode

Steps to migrate the NIM master to AIX 5L V5.3:

  1. Unmount all NFS mounts
  2. Document the AIX and NIM master configuration (snap -ac, lsnim)
  3. Perform a NIM database backup → smitty nim_backup_db → Backup the NIM database
  4. Perform a mksysb of the NIM master
  5. Insert AIX 5L V5.3 CD volume 1 into the CD drive

Creating a mksysb from a NIM client:

nim -o define -t mksysb -a server=master -a source=lpar5 -a mk_image=yes -a location=/export/images/mksysb.lpar5 mksysb_lpar5

Backing up the VIO server:

backupios -file /home/padmin/viobackup/VIO.mksysb -mksysb

Restoring the VIO server:

Define the mksysb resource: smitty nim_mkres (select mksysb) → define the SPOT resource: smitty nim_mkres (select spot) → perform the BOS installation

NIM Commands

nimconfig -a pif_name=en0 -a netname=net1 → to initialize the NIM master with network name net1

nimconfig -r → to rebuild the /etc/niminfo file, which contains the variables for NIM

nim -o define -t lpp_source -a source=/dev/cd0 -a server=master -a location=/export/lpp_source/lpp_source1 lpp_source1 → to define the lpp_source1 image in the /export/lpp_source/lpp_source1 directory from source cd0

nim -o define -t mksysb -a server=master -a location=/resources/mksysb.image mksysb1 → to define the mksysb resource mksysb1 from source /resources/mksysb.image on the master

nim -o remove inst_resource → to remove the resource

nim -o showres lpp_source6100 → lists the contents of the lpp_source

nim -o showres -a instfix_flags=T lppsource_61_01

nim -o check lpp_source1 → to check the status of the lpp_source lpp_source1

nim -o allocate -a spot=spot1 -a lpp_source=lpp_source1 node1 → to allocate the resources spot1 and lpp_source1 to the client node1

nim -o bos_inst node1 → to initiate the BOS installation on node1 with the allocated resources

nim -o dkls_init dcmds → to initialize the machine dcmds for diskless operation

nim -o dtls_init dcmds → to initialize the machine dcmds for dataless operation

nim -o cust dcmds → to initialize the machine dcmds for a customize operation

nim -o diag dcmds → to initialize the machine dcmds for a diag operation

nim -o maint dcmds → to initialize the machine dcmds for a maintenance operation

nim -o define -t standalone -a platform=rspc -a if1="net1 dcmds xxxxx" -a cable_type1=bnc dcmds → to define the machine dcmds as standalone with platform rspc, network net1, cable type bnc, and MAC address xxxxx

nim -o unconfig master → to unconfigure the NIM master

nim -o allocate -a spot=spot1 dcmds → to allocate the resource spot1 to the machine dcmds

nim -o deallocate -a spot=spot1 dcmds → to deallocate the resource spot1 from the machine dcmds

nim -o remove dcmds → to remove the machine dcmds after removing all resources associated with it

nim -o reboot dcmds → to reboot the client dcmds

nim -o define -t lpp_source -a location=/software/lpp1 -a server=master -a source=/dev/cd0 lpp1 → to define the lpp_source lpp1 on the master in the /software/lpp1 directory from the source device /dev/cd0

lsnim → to list the NIM resources

lsnim -l dcmds → to list detailed info about the object dcmds

lsnim -O dcmds → to list the operations the dcmds object can support

lsnim -c resources dcmds → to list the resources allocated to the machine dcmds

nimclient → the client version of the nim command (a user can obtain the same results as nim on the server)

NIM Master Configuration:

NIM installation:

Filesets required for NIM installation:

  • bos.sysmgt.nim.master
  • bos.sysmgt.nim.client
  • bos.sysmgt.nim.spot

Put volume 1 of your media in the drive and run installp -acgXd /dev/cd0 bos.sysmgt.nim, or use smit install_all

Initial setup: smit nim_config_env

Initializing the NIM master: nimconfig -a pif_name=en0 -a master_port=1058 -a netname=master_net -a cable_type=bnc

or smitty nimconfig.

lsnim -l master → shows information about the NIM master

lsnim -l | more → the boot resource created a /tftpboot directory to hold all of your boot images. All NIM clients that are on the same subnet as this master will be assigned to the master_net network.

Set up the first lpp_source resource: create a filesystem called /export/nim/lpp_source.

nim -o define -t lpp_source -a location=/export/nim/lpp_source/53_05 -a server=master -a comments='5300-05 lpp_source' -a multi_volume=yes -a source=/dev/cd0 -a packages=all 5305_lpp

or

smit nim_mkres → select lpp_source

If you wish to add other volumes you can:

A) bffcreate the volumes into the lpp_source

B) Use NIM to add the volumes: smitty nim_res_op → select the lpp_source → select update → give the target lpp_source and source

lsnim -l 5305_lpp

Rstate: if this is not set to 'ready for use' then you cannot use this resource. Running a check on the lpp_source will clear this up: nim -o check

Set up the first SPOT resource: create a filesystem called /export/nim/spot.

nim -o define -t spot -a server=master -a source=5305_lpp -a location=/export/nim/spot -a auto_expand=yes -a comments='5300-05 spot' 5305_spot

or

smitty nim_mkres → select SPOT.

lsnim -l 5305_spot

Unconfiguring the NIM master: nim -o unconfigure master.

Installing software on a client: smitty nim (or smit nim_inst_latest) → Perform NIM software installation and maintenance tasks → Install and update software → Install software → select the client and the lpp_source.

Updating client software to the latest level: smitty nim (or nim_update_all) → Perform NIM software installation and maintenance tasks → Install and update software → Update installed software to latest level → select the client, then select the lpp_source.

Alternate disk install for new TLs: smitty nim (or nim_alt_clone) → Perform NIM software installation and maintenance tasks → Alternate disk installation → Clone the rootvg to an alternate disk (select the target machine and disk)

Alternate disk install for a new release: smit nim (or nimadm_migrate) → Perform NIM software installation and maintenance tasks → Alternate disk installation → NIM alternate disk migration → Perform NIM alternate disk migration (select the client, disk name, lpp_source, and SPOT name)

Performing installs from the client: smit nim (or nim_client_inst) → Install and upgrade software

RTE installation:

  • Requires an lpp_source and a SPOT
  • The default is to install the BOS.autoi bundle
  • Define the client on the NIM master
  • Prepare the NIM master to supply RTE install resources to the client
  • Initiate the installation from the client

Defining the client: smit nim_mkmac (give the hostname, press Enter), then give machine type → standalone, hardware platform type → chrp, communication protocol needed by the client → nimsh, cable type → N/A.

Client on a new network: smit nim_mkmac → give the hostname and press Enter. Type of network attached to the network install interface → ent (Ethernet network), Enter. Give the NIM network → network2, the subnet mask → 255.255.255.0, and the default gateway used by the machine and the master.

Setting up the master to install: smit nim_bosinst → select the target machine → select the installation type (rte) → select the lpp_source → select the SPOT → install the base OS on the standalone client

Checking the NIM master: lsnim -l client; tail -1 /etc/bootptab (the bf field in /etc/bootptab specifies the boot file that will be transferred to the client using TFTP after the client contacts the master using BOOTP); ls -l /tftpboot (the boot images are actually symbolic links); showmount -e (shows the exported filesystems)

Typical Install Sequence:

  • The client initiates a BOOTP request to the NIM server.
  • The NIM server responds with information about the boot file (from the bootptab file)
  • The client initiates a TFTP transfer of the boot file from the NIM server.
  • The client runs the boot file
  • The client NFS-mounts the SPOT and lpp_source
  • The operating system is installed

Accessing SMS: open the HMC → select the LPAR → Activate → select the profile (default) → click on Open a terminal window → Advanced → select boot mode SMS → OK → select Remote IPL → select the adapter → select the internet protocol version (IPv4) → select the network service (BOOTP) → set the IP parameters (client IP, server IP, gateway, subnet mask) → set the boot list (Select Install/Boot Device → select Network → select the network service (BOOTP) → select the normal boot mode) → exit (Are you sure you want to exit SMS? Yes).

Monitoring progress on the master: lsnim -l client (info → prompting_for_data_at_console)

Installation: main BOS installation menu (select Install now with default settings)

To view the bosinst log → nim -o showlog -a log_type=bosinst client

Listing valid operations for an object type: lsnim -Pot master

Listing valid operations for an object: lsnim -O client

Rebuilding the /etc/niminfo file: nimconfig -r

niminit -a name=client -a master=master

Backing up the NIM database: smitty nim_backup_db (the default value is /etc/objrepos/nimdb.backup)

Restoring a previously created backup: smitty nim_restore_db

NIM Log files:

/var/adm/ras/nimlog

/var/adm/ras/nim.installp

/var/adm/ras/nimsh.log

/var/adm/ras/bosinstlog

/var/adm/ras/nim.setup

High availability (alternate NIM master)

/etc/niminfo: lists the active NIM master and the valid alternate masters.

Configure an alternate NIM master: smit niminit_altmstr

Synchronizing the NIM database: smit nim_altmstr (select Synchronize alternate master's NIM DB)

VIO Short Notes

PowerVM: it allows you to increase the utilization of servers. PowerVM includes logical partitioning, Micro-Partitioning, system virtualization, VIO, the hypervisor, and so on.

Simultaneous multithreading: SMT is an IBM microprocessor technology that allows two separate hardware instruction streams to run concurrently on the same physical processor.

Virtual Ethernet: VLANs allow secure connections between logical partitions without the need for a physical I/O adapter or cabling. The ability to securely share Ethernet bandwidth across multiple partitions increases hardware utilization.

Virtual SCSI: VSCSI provides secure communication between the partitions and the VIO server. The combination of VSCSI and VIO capabilities allows you to share storage adapter bandwidth and to subdivide single large disks into smaller segments. The adapters and disks can be shared across multiple partitions, increasing utilization.

VIO server: it allows physical resources to be shared by a group of partitions. The VIO server can use both virtualized storage and network adapters, making use of VSCSI and virtual Ethernet.

Redundant VIO server: An AIX or Linux partition can be a client of one or more VIO servers at the same time. A good strategy to improve availability for sets of client partitions is to connect them to two VIO servers. Redundancy also makes it possible to upgrade to the latest technologies without affecting production workloads.

Micro-Partitioning: Sharing processing capacity among one or more logical partitions. The benefit of Micro-Partitioning is significantly increased overall utilization of processor resources. A micro-partition must have at least 0.1 processing units. The maximum number of partitions on any System p server is 254.

Uncapped Mode : The processing capacity can exceed the entitled capacity when resources are available in the shared processor pool and the micro partition is eligible to run.

Capped Mode : The processing capacity can never exceed the entitled capacity.

Virtual Processors :A virtual processor is a representation of a physical processor that is presented to the operating system running in a micro partition.

If a micro-partition has 1.60 processing units and 2 virtual processors, each virtual processor will have 0.80 processing units.
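The split above can be checked with a one-line calculation (a sketch only — the `units_per_vp` helper is ours, not an AIX command; the hypervisor performs the real allocation):

```shell
# units_per_vp: divide entitled capacity evenly across the virtual processors.
# Illustrative helper only -- the hypervisor does this allocation itself.
units_per_vp() {
    awk -v e="$1" -v v="$2" 'BEGIN { printf "%.2f\n", e / v }'
}

units_per_vp 1.60 2   # prints 0.80
```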

Dedicated processors: Whole processors assigned to dedicated LPARs. The minimum processor allocation for such an LPAR is one.

IVM (Integrated Virtualization Manager): A management solution that performs a subset of the HMC features for a single server, avoiding the need for a dedicated HMC.

Live Partition Mobility: Allows you to move running AIX or Linux partitions from one physical Power6 server to another without disruption.

VIO

The VIO version is 1.5.

The VIO command-line interface is the IOSCLI.

oem_setup_env drops you into the OEM (root shell) environment.

The command for configuration through SMIT is cfgassist.

The initial login to the VIO server is padmin.

Help for VIO commands, e.g.: help errlog

Hardware requirements for creating VIO :

  1. Power 5 or 6
  2. HMC
  3. At least one storage adapter
  4. If you want to share a physical disk, one large physical disk
  5. Ethernet adapter
  6. At least 512 MB memory

Latest version for vio is 2.1 fixpack 23

Copying the virtual IO server DVD media to a NIM server:

mount /cdrom

cd /cdrom

cp /cdrom/bosinst.data /nim/resources

Execute the smitty installios command.

Using smitty installios you can install the VIO software.

The topas -cecdisp flag shows detailed disk statistics.

The viostat -extdisk flag shows detailed disk statistics.

wkldmgr and wkldagent handle Workload Manager; they can be used to record performance data, which can be viewed with wkldout.

chtcpip: changes TCP/IP parameters

viosecure: handles security settings

mksp: Creates a storage pool

chsp: Adds or removes physical volumes from a storage pool

lssp: Lists information about storage pools

mkbdsp: Attaches storage from a storage pool to a virtual SCSI adapter

rmbdsp: Removes storage from a virtual SCSI adapter and returns it to the storage pool

The default storage pool is rootvg.

Creation of VIO server using HMC version 7 :

Select the managed system -> Configuration -> Create Logical Partition -> VIO server

Enter the partition name and ID.

Check the Mover service box if the VIO server partition to be created will support partition mobility.

Give a partition profile name, e.g. default.

Processors : You can assign entire processors to your partition for dedicated use, or you can assign partial processors units from the shared processor pool. Select shared.

Specify the minimum, desired and maximum processing units.

Specify minimum, desired and maximum virtual processors, and select uncapped with a weight of 191.

The system will try to allocate the desired values

The partition will not start if the managed system cannot provide the minimum amount of processing units.

You cannot dynamically increase the amount of processing units beyond the maximum.

Assign the memory also min, desired and max.

The ratio between the minimum and maximum amount of memory cannot be more than 1:64.
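That 1:64 constraint can be sanity-checked before filling in the profile. A sketch with hypothetical values — the `mem_ratio_ok` helper is ours, and the HMC enforces the real rule:

```shell
# mem_ratio_ok: the minimum memory may not be less than 1/64 of the maximum.
mem_ratio_ok() {
    min_mb=$1
    max_mb=$2
    # integer check: min/max >= 1/64  <=>  min * 64 >= max
    if [ $((min_mb * 64)) -ge "$max_mb" ]; then echo ok; else echo too-small; fi
}

mem_ratio_ok 512 8192    # prints ok        (ratio 1:16)
mem_ratio_ok 128 16384   # prints too-small (ratio 1:128)
```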

IO: select the physical I/O adapters for the partition. Required means the partition cannot start unless these adapters are available to it; desired means the partition can start without them. A required adapter cannot be moved in a dynamic LPAR operation.

A VIO server partition requires a Fibre Channel adapter to attach SAN disks for the client partitions, and an Ethernet adapter for Shared Ethernet Adapter bridging to external networks.

VIO requires a minimum of 30 GB of disk space.

Create virtual Ethernet and SCSI adapters: increase the maximum number of virtual adapters to 100.

The maximum number of adapters must not be set to more than 1024.

In Actions -> Create -> Ethernet adapter, give the adapter ID and VLAN ID.

Select the Access External Network check box to use this adapter as a gateway between the internal and external networks.

Create the SCSI adapter the same way.

VIO server S/W installation :

  1. Place the CD/DVD in the p5 box
  2. Activate the VIO server by clicking Activate; select the default profile
  3. Check the Open terminal window or console option, click Advanced, then OK
  4. Under the boot mode drop-down list, select SMS

After installation completes, log in as padmin and press 'a' (to accept the software maintenance agreement terms).

license -accept accepts the license.

Creating a shared Ethernet adapter

  1. lsdev -virtual (check the virtual Ethernet adapter)
  2. lsdev -type adapter (check the physical Ethernet adapter)
  3. Use the lsmap -all -net command to check the slot numbers of the virtual Ethernet adapter.
  4. mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
  5. lsmap -all -net
  6. Use cfgassist or the mktcpip command to configure TCP/IP, or:
  7. mktcpip -hostname vio_server1 -inetaddr 9.3.5.196 -interface ent3 -netmask 255.255.244.0 -gateway 9.3.4.1
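The steps above can be collected into a dry-run script that only prints the IOSCLI commands for review (device names and addresses are the example's own; swap `echo` for real execution on the VIOS):

```shell
# Dry-run sketch of SEA creation plus TCP/IP setup from the list above.
run() { echo "$@"; }   # prints the command instead of executing it

run mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
run mktcpip -hostname vio_server1 -inetaddr 9.3.5.196 -interface ent3 \
    -netmask 255.255.244.0 -gateway 9.3.4.1
```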

Defining virtual disks

Virtual disks can either be whole physical disks, logical volumes or files. The physical disks can be local or SAN disks.

Create the virtual disks

  1. Log in as padmin and run the cfgdev command to rebuild the list of visible devices.
  2. lsdev -virtual (make sure virtual SCSI server adapters are available, e.g. vhost0)
  3. lsmap -all -> to check the slot numbers and vhost adapter numbers.
  4. mkvg -f -vg rootvg_clients hdisk2 -> creates the rootvg_clients VG.
  5. mklv -lv dbsrv_rvg rootvg_clients 10G

Creating virtual device mappings:

  1. lsdev -vpd | grep vhost
  2. mkvdev -vdev dbsrv_rvg -vadapter vhost2 -dev dbsrv_rvg
  3. lsdev -virtual
  4. lsmap -all

The fget_config -Av command, provided on the IBM DS4000 series, lists the LUN names.
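The two lists above combine into one dry-run sketch (names like hdisk2, vhost2 and dbsrv_rvg are the example's; the script only prints the commands):

```shell
# Dry-run sketch: create a client VG and LV, then map the LV to a vhost adapter.
run() { echo "$@"; }   # prints instead of executing

run mkvg -f -vg rootvg_clients hdisk2
run mklv -lv dbsrv_rvg rootvg_clients 10G
run mkvdev -vdev dbsrv_rvg -vadapter vhost2 -dev dbsrv_rvg
run lsmap -all
```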

Virtual SCSI Optical devices:

A DVD or CD device can be virtualized and assigned to client partitions. Only one VIO client can access the device at a time.

Steps :

  1. Leave the DVD drive assigned to the VIO server
  2. Create a server SCSI adapter using the HMC
  3. Run the cfgdev command to get the new vhost adapter; check using lsdev -virtual
  4. Create the virtual device for the DVD drive (mkvdev -vdev cd0 -vadapter vhost3 -dev vcd)
  5. Create a client SCSI adapter in each LPAR using the HMC
  6. Run cfgmgr
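The VIOS-side portion of those steps as a dry-run sketch (vhost3 and the vcd name are the example's; nothing is executed, only printed):

```shell
# Dry-run sketch of virtualizing the DVD drive (steps 3-4 above).
run() { echo "$@"; }

run cfgdev                                       # pick up the new vhost adapter
run lsdev -virtual                               # confirm it is present
run mkvdev -vdev cd0 -vadapter vhost3 -dev vcd   # virtual optical device backed by cd0
```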

Moving the drive :

  1. Find the vscsi adapter using lscfg | grep Cn (n is the slot number)
  2. rmdev -Rl vscsiN
  3. Run cfgmgr in the target LPAR

Use the dsh command to find which LPAR currently holds the drive.

Unconfiguring the dvd drive :

  1. rmdev -dev vcd -ucfg
  2. lsdev -slots
  3. rmdev -dev pci5 -recursive -ucfg
  4. cfgdev
  5. lsdev -virtual

Mirroring the VIO rootvg:

  1. chvg -factor 6 rootvg (rootvg can then include up to 5 PVs with 6096 PPs each)
  2. extendvg -f rootvg hdisk2
  3. lspv
  4. mirrorios -f hdisk2
  5. lsvg -lv rootvg
  6. bootlist -mode normal -ls
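The same mirroring sequence as a dry-run sketch (hdisk2 is the example's mirror disk; the script prints each VIOS command rather than running it):

```shell
# Dry-run sketch of mirroring the VIOS rootvg onto hdisk2.
run() { echo "$@"; }

run chvg -factor 6 rootvg      # raise the PP limit so rootvg can grow
run extendvg -f rootvg hdisk2  # add the mirror disk
run mirrorios -f hdisk2        # mirror rootvg (VIOS command)
run bootlist -mode normal -ls  # confirm both disks are in the boot list
```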

Creating Partitions :

  1. Create a new partition using the HMC with AIX/Linux
  2. Give the partition ID and partition name
  3. Give proper memory settings (min/desired/max)
  4. Skip the physical I/O
  5. Give proper processing units (min/desired/max)
  6. Create a virtual Ethernet adapter (give adapter ID and VLAN ID)
  7. Create a virtual SCSI adapter
  8. In optional settings:
     • Enable connection monitoring
     • Automatically start with managed system
     • Enable redundant error path reporting
  9. Under boot modes, select normal

Advanced Virtualization:

Providing continuous availability of VIO servers: use multiple VIO servers to provide highly available virtual SCSI and shared Ethernet services.

IVM supports a single VIO server.

Virtual scsi redundancy can be achieved by using MPIO and LVM mirroring at client partition and VIO server level.

Continuous availability for VIO

  • Shared Ethernet adapter failover
  • Network interface backup in the client
  • MPIO in the client with SAN
  • LVM Mirroring

Virtual Scsi Redundancy:

Virtual scsi redundancy can be achieved using MPIO and LVM mirroring.

The client uses MPIO to access a SAN disk, and LVM mirroring to access two SCSI disks.

MPIO: Use MPIO for a highly available virtual SCSI configuration. The disks on the storage are assigned to both VIO servers. MPIO for virtual SCSI devices supports failover mode only.

Configuring MPIO:

  • Create 2 virtual IO server partitions
  • Install both VIO servers
  • Change fc_err_recov to fast_fail and dyntrk (lets AIX tolerate cabling changes) to yes: chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm
  • Reboot the VIO servers
  • Create the client partitions; add virtual Ethernet adapters
  • Use the fget_config command (fget_config -Av) to get the LUN-to-hdisk mappings
  • Use the lsdev -dev hdisk -vpd command to retrieve the information
  • The reserve_policy for each disk must be set to no_reserve: chdev -dev hdisk2 -attr reserve_policy=no_reserve
  • Map the hdisks to vhost adapters: mkvdev -vdev hdisk2 -vadapter vhost0 -dev app_server
  • Install the client partitions
  • Configure the client partitions
  • Test MPIO

Configure the client partitions:

  • Check the MPIO configuration (lspv, lsdev -Cc disk)
  • Run lspath
  • Enable the health check mode: chdev -l hdisk0 -a hcheck_interval=50 -P
  • Enable the vscsi client adapter path timeout: chdev -l vscsi0 -a vscsi_path_to=30 -P
  • Change the priority of a path: chpath -l hdisk0 -p vscsi0 -a priority=2
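The client-side tuning above as a dry-run sketch (device names are the example's; the commands are only printed, and would be run on the AIX client, not the VIOS):

```shell
# Dry-run sketch of the client-side MPIO tuning from the list above.
run() { echo "$@"; }

run chdev -l hdisk0 -a hcheck_interval=50 -P   # -P: apply at next boot
run chdev -l vscsi0 -a vscsi_path_to=30 -P     # vscsi path timeout
run chpath -l hdisk0 -p vscsi0 -a priority=2   # deprioritize the vscsi0 path
```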

Testing MPIO:

  • lspath
  • Shut down VIO2
  • lspath
  • Start VIO2
  • lspath
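When scripting that check, the lspath output can be summarized. A sketch run against canned sample text (the `count_state` helper is ours; on a live client you would pipe the real `lspath` output into it):

```shell
# Sketch: count path states from lspath-style output.
count_state() {
    awk -v s="$1" '$1 == s { n++ } END { print n + 0 }'
}

# Canned stand-in for 'lspath' output on a two-path client.
sample='Enabled hdisk0 vscsi0
Failed  hdisk0 vscsi1'

echo "$sample" | count_state Enabled   # prints 1
echo "$sample" | count_state Failed    # prints 1
```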

LVM Mirroring: This sets up a highly available virtual SCSI configuration. The client partitions are configured with two virtual SCSI adapters, each connected to a different VIO server and providing one disk to the client partition.

Configuring LVM Mirroring:

  • Create 2 virtual IO partitions; select one Ethernet adapter and one storage adapter
  • Install both VIO servers
  • Configure the virtual SCSI adapters on both servers
  • Create client partitions; each client partition needs to be configured with 2 virtual SCSI adapters
  • Add one or two virtual Ethernet adapters
  • Create the volume group and logical volumes on VIO1 and VIO2
  • A logical volume from the rootvg_clients VG should be mapped to each of the 4 vhost devices (mkvdev -vdev nimsrv_rvg -vadapter vhost0 -dev vnimsrv_rvg)
  • lsmap -all
  • When you bring up the client partitions you should have hdisk0 and hdisk1; mirror the rootvg
  • lspv
  • lsdev -Cc disk
  • extendvg rootvg hdisk1
  • mirrorvg -m rootvg hdisk1
  • Test LVM mirroring

Testing LVM mirroring:

  • lsvg -l rootvg
  • Shut down VIO2
  • lspv hdisk1 (check the PV state, stale partitions)
  • Reactivate VIO2 and varyonvg rootvg
  • lspv hdisk1
  • lsvg -l rootvg
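The stale-partition check can be scripted. A sketch against a canned stand-in for the relevant `lspv hdisk1` line (the `stale_partitions` helper and the sample layout are assumptions; on AIX pipe the real command):

```shell
# Sketch: pull the stale-partition count out of 'lspv hdisk1'-style output.
stale_partitions() {
    awk '/STALE PARTITIONS:/ { print $3 }'
}

printf 'TOTAL PPs:          546\nSTALE PARTITIONS:   17\n' | stale_partitions   # prints 17
```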

Shared Ethernet adapter: Connects a physical network to a virtual Ethernet network, allowing several client partitions to share one physical adapter.

Shared Ethernet redundancy: Protects against temporary failure of communication with external networks. Approaches to achieve continuous availability:

  • Shared Ethernet adapter failover
  • Network interface backup

Shared Ethernet adapter failover: Offers Ethernet redundancy. In a SEA failover configuration, two VIO servers have the bridging functionality of the SEA. They use a control channel to determine which of them supplies the Ethernet service to the client. The client partition gets one virtual Ethernet adapter bridged by two VIO servers.

Requirements for configuring SEA failover:

  • One SEA on one VIOS acts as the primary adapter and the second SEA on the second VIOS acts as a backup adapter.
  • Each SEA must have at least one virtual Ethernet adapter with the "Access external network" flag (trunk flag) checked. This enables the SEA to provide bridging functionality between the 2 VIO servers.
  • This adapter on both SEAs has the same PVID.
  • The priority value defines which of the 2 SEAs will be primary and which secondary. An adapter with priority 1 has the highest priority.

Procedure for configuring SEA failover:

  • Configure a virtual Ethernet adapter via DLPAR (ent2):
    • Select the VIO -> click the task button -> choose DLPAR -> Virtual adapters
    • Click Actions -> Create -> Ethernet adapter
    • Enter the slot number for the virtual Ethernet adapter into Adapter ID
    • Enter the port virtual LAN ID (PVID). The PVID allows the virtual Ethernet adapter to communicate with other virtual Ethernet adapters that have the same PVID.
    • Select IEEE 802.1Q
    • Check the "Access external network" box
    • Give the virtual adapter a low trunk priority
    • Click OK
  • Create another virtual adapter to be used as a control channel on VIOS1 (give it another VLAN ID; do not check "Access external network") (ent3)
  • Create the SEA on VIO1 with the failover attribute: mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent3 (ex: ent4)
  • Create a VLAN Ethernet adapter on the SEA to communicate with the external VLAN-tagged network: mkvdev -vlan ent4 -tagid 222 (ex: ent5)
  • Assign an IP address to the SEA VLAN adapter on VIOS1 using mktcpip
  • Repeat the same steps on VIO2 (give the higher trunk priority: 2)
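The two mkvdev calls from the procedure as a dry-run sketch (adapter names ent0-ent5 are the example's; the commands are printed, not executed):

```shell
# Dry-run sketch of the SEA-failover mkvdev calls on VIOS1.
run() { echo "$@"; }

run mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 \
    -attr ha_mode=auto ctl_chan=ent3   # creates the SEA (ent4 in the notes)
run mkvdev -vlan ent4 -tagid 222       # VLAN adapter on the SEA (ent5)
```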

Client LPAR Procedure:

  • Create client LPAR same as above.

Network interface backup: NIB can be used to provide redundant access to external networks when 2 VIO servers are used.

Configuring NIB:

  • Create 2 VIO server partitions
  • Install both VIO servers
  • Configure each VIO server with one virtual Ethernet adapter. Each VIO server needs to be on a different VLAN.
  • Define SEA with the correct VLAN ID
  • Add virtual Scsi adapters
  • Create client partitions
  • Define the ether channel using smitty etherchannel

Configuring multiple shared processor pools:

Configuration -> Shared processor pool management -> Select the pool name

VIOs Security:

Enable basic firewall settings: viosecure -firewall on

View all open ports in the firewall configuration: viosecure -firewall view

View current security settings: viosecure -view -nonint

Change system security settings to default: viosecure -level default

List all failed logins: lsfailedlogin

Dump the global command log: lsgcl

Backup:

Create a mksysb file of the system on an NFS mount: backupios -file /mnt/vios.mksysb -mksysb

Create a backup of all structures of VGs and/or storage pools: savevgstruct vdiskvg (data will be stored under /home/ios/vgbackups)

List all backups made with savevgstruct: restorevgstruct -ls

Back up the system to an NFS-mounted file system: backupios -file /mnt
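When scheduling backupios it helps to date-stamp the target file. A sketch — the /mnt path and the naming scheme are our assumptions, not a VIOS convention:

```shell
# Sketch: generate a date-stamped mksysb target for backupios.
backup_name() {
    printf '/mnt/vios-%s.mksysb' "$(date +%Y%m%d)"
}

# The command that would then be run on the VIOS:
echo "backupios -file $(backup_name) -mksysb"
```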

Performance Monitoring:

Retrieve statistics for ent0: entstat -all ent0

Reset the statistics for ent0: entstat -reset ent0

View disk statistics: viostat 2

Show a summary for the system: viostat -sys 2

Show disk stats by adapter: viostat -adapter 2

Turn on disk performance counters: chdev -dev sys0 -attr iostat=true

topas -cecdisp

Link aggregation on the VIO server:

Link aggregation means you can give one IP address to two network cards connected to two different switches for redundancy. One network card is active at a time.

Devices -> Communication -> EtherChannel / IEEE 802.3ad Link Aggregation -> Add an EtherChannel / Link Aggregation

Select ent0 and mode 8023ad.

Select a backup adapter for redundancy, e.g. ent1.

A virtual adapter named ent2 will be created automatically.

Then assign an IP address: smitty tcpip -> Minimum configuration and startup -> select ent2 -> enter the IP address.


HACMP Short Notes

Posted: May 30, 2011 in Uncategorized

HACMP

HACMP : High Availability Cluster Multi-Processing

High Availability: Elimination of both planned and unplanned system and application downtime, achieved through eliminating hardware and software single points of failure.

Cluster Topology : The Nodes, networks, storage, clients, persistent node ip label/devices

Cluster resources: Components HACMP can move from one node to another, e.g. service labels, file systems and applications.

RSCT Version: 2.4.2

SDD Version: 1.3.1.3

HA Configuration :

  • Define the cluster and nodes
  • Define the networks and disks
  • Define the topology
  • Verify and synchronize
  • Define the resources and resource groups
  • Verify and synchronize

After installation, changes are made to: /etc/inittab, /etc/rc.net, /etc/services, /etc/snmpd.conf, /etc/snmpd.peers, /etc/syslog.conf,

/etc/trcfmt, /var/spool/cron/crontabs/root, /etc/hosts; the HACMP group will be added.

Software Components:

Application server

HACMP Layer

RSCT Layer

AIX Layer

LVM Layer

TCP/IP Layer

HACMP Services :

Cluster communication daemon(clcomdES)

Cluster Manager (clstrmgrES)

Cluster information daemon(clinfoES)

Cluster lock manager (cllockd)

Cluster SMUX peer daemon (clsmuxpd)

HACMP daemons: clstrmgr, clinfo, clsmuxpd, cllockd.

HA supports up to 32 nodes

HA supports up to 48 networks

HA supports up to 64 resource groups per cluster

HA supports up to 128 cluster resources

IP Label: The label that is associated with a particular IP address, as defined by DNS (/etc/hosts).

Base IP label: The default IP address, set on the interface by AIX at startup.

Service IP label: A label under which a service is provided; it may be bound to a single node or multiple nodes. These are the addresses that HACMP keeps highly available.

IP alias: An IP address that is added to an interface rather than replacing its base IP address.

RSCT Monitors the state of the network interfaces and devices.

IPAT via replacement : The service IP label will replace the boot IP address on the interface.

IPAT via aliasing: The service IP label will be added as an alias on the interface.

Persistent IP address: this can be assigned to a network for a particular node.

In HACMP the NFS exports file is /usr/es/sbin/cluster/etc/exports.

Shared LVM:

  • Shared volume group is a volume group that resides entirely on the external disks shared by cluster nodes
  • Shared LVM can be made available on Non concurrent access mode, Concurrent Access mode, Enhanced concurrent access mode.

NON concurrent access mode: This environment typically uses journaled file systems to manage data.

Create a non-concurrent shared volume group: smitty mkvg -> give the VG name, No for automatically available after system restart, Yes for Activate VG after it is created, give the VG major number.

Create a non-concurrent shared file system: smitty crjfs -> rename the FS names, No to mount automatically at system restart; test the newly created FS by mounting and unmounting it.

Importing a volume group to a fallover node:

  • Vary off the volume group
  • Run the discovery process
  • Import the volume group

Concurrent Access Mode: Not supported for file systems; instead you must use raw LVs and physical disks.

Creating concurrent access volume group:

  • Verify the disk status using lsdev -Cc disk
  • smitty cl_convg -> Create a concurrent volume group -> Enter
  • Import the volume group using importvg -C -y vg_name physical_volume_name
  • varyonvg vgname

Create LV’s on the concurrent VG: smitty cl_conlv.

Enhanced concurrent mode VGs: Can be used for both concurrent and non-concurrent access. The VG is varied on on all nodes in the cluster; access for modifying the data is granted only to the node that has the resource group active.

Active or passive mode:

Active varyon: all high level operations permitted.

Passive varyon: Read only permissions on the VG.

Create an enhanced concurrent mode VG: mkvg -n -s 32 -C -y myvg hdisk11 hdisk12

Resource group behaviour:

Cascading: Fallover using dynamic node priority. Online on first available node

Rotating : Failover to next priority node in the list. Never fallback. Online using distribution policy.

Concurrent : Online on all available nodes . never fallback

RG dependencies: clrgdependency -t

/etc/hosts : /etc/hosts for name resolution. All cluster node IP interfaces must be added on this file.

/etc/inittab: the entry hacmp:2:once:/usr/es/sbin/cluster/etc/rc.init >/dev/console 2>&1 starts clcomdES and clstrmgrES.

/etc/rc.net file is called by cfgmgr. To configure and start TCP/IP during the boot process.

C-SPOC uses clcomdES to execute commands on remote nodes.

C-SPOC commands are located in /usr/es/sbin/cluster/cspoc.

You should not stop a node with the forced option on more than one node at a time, nor when the RG is in concurrent mode.

Cluster commands are in /usr/es/sbin/cluster

User Administration : cl_usergroup

Create a concurrent VG -> smitty cl_convg

To find the resource group information: clrginfo -P

HACMP Planning:

Maximum no.of nodes in a cluster is 32

In an HACMP Cluster, the heartbeat messages are exchanged via IP networks and Point-to-Point networks

IP Label represents the name associated with a specific IP address

Service IP label/address: The service IP address is an IP address used for client access.

2 types of service IP addresses:

Shared Service IP address: It can be active only on one node at a time.

Node-bound service IP address: An IP address that can be configured on only one node.

Method of providing high availability service IP addresses:

IP address takeover via IP aliases

IPAT via IP replacement

An IP alias is an IP address that is configured on a communication interface in addition to the base IP address. IP aliasing is an AIX function that is supported by HACMP. AIX supports multiple IP aliases on each communication interface, and each IP alias can be on a different subnet.

Network Interface:

Service Interface: This interface used for providing access to the application running on that node. The service IP address is monitored by HACMP via RSCT heartbeat.

Boot Interface: This is a communication interface. With IPAT via aliasing, during failover the service IP label is aliased onto the boot interface

Persistent node IP label: Its useful for administrative purpose.

When an application is started or moved to another node together with its associated resource group, the service IP address can be configured in two ways.

  • Replacing the base IP address of a communication interface. The service IP label and boot IP label must be on the same subnet.
  • Configuring one communication interface with an additional IP address on top of the existing one. This method is IP aliasing. All IP addresses/labels must be on different subnets.

Default method is IP aliasing.

HACMP Security: Implemented directly by clcomdES, Uses HACMP ODM classes and the /usr/es/sbin/cluster/rhosts file to determine partners.

Resource Group Takeover relationship:

Resource Group: It’s a logical entity containing the resources to be made highly available by HACMP.

Resources: Filesystems, NFS, Raw logical volumes, Raw physical disks, Service IP addresses/Labels, Application servers, startup/stop scripts.

To be made highly available by HACMP, each resource should be included in a resource group.

Resource group takeover relationship:

  1. Cascading
  2. Rotating
  3. Concurrent
  4. Custom

Cascading:

    • Cascading resource group is activated on its home node by default.
    • Resource group can be activated on low priority node if the highest priority node is not available at cluster startup.
    • If node failure resource group falls over to the available node with the next priority.
    • Upon node reintegration into the cluster, a cascading resource group falls back to its home node by default.
    • Attributes:

1. Inactive takeover(IT): Initial acquisition of a resource group in case the home node is not available.

2. Fallover priority can be configured in default node priority list.

3. Cascading without fallback (CWOF) is an attribute that modifies the fallback behavior. If the CWOF flag is set to true, the resource group will not fall back to any joining node. When the flag is false, the resource group falls back to the higher priority node.

Rotating:

    • At cluster startup first available node in the node priority list will activate the resource group.
    • If the resource group is on the takeover node, it will never fall back to a higher priority node if one becomes available.
    • Rotating resource groups require the use of IP address takeover. The nodes in the resource chain must all share the same network connection to the resource group.

Concurrent:

    • A concurrent RG can be active on multiple nodes at the same time.

Custom:

    • Users have to explicitly specify the desired startup, fallover and fallback procedures.
    • This support only IPAT – via aliasing service IP addresses.

Startup Options:

  • Online on home node only
  • Online on first available node
  • Online on all available nodes
  • Online using distribution policy: the resource group will only be brought online if the node has no other resource group online. You can check this with lssrc -ls clstrmgrES

Fallover Options:

  • Fallover to next priority node in list
  • Fallover using dynamic node priority: the fallover node can be selected on the basis of its available CPU, its available memory, or the lowest disk usage. HACMP uses RSCT to gather this information; the resource group then falls over to the node that best meets the criteria.
  • Bring offline: the resource group will be brought offline if an error occurs. This option is designed for resource groups that are online on all available nodes.

Fallback Options:

  • Fallback to higher priority node in the list
  • Never fallback

Basic Steps to implement an HACMP cluster:

  • Planning
  • Install and connect the hardware
  • Configure shared storage
  • Installing and configuring application software
  • Install HACMP software and reboot each node
  • Define the cluster topology
  • Synchronize the cluster topology
  • Configure cluster resources
  • Configure cluster resource group and shared storage
  • Synchronize the cluster
  • Test the cluster

HACMP installation and configuration:

HACMP release notes : /usr/es/lpp/cluster/doc

smitty install_all -> fast path for installation

cluster.es and cluster.cspoc images must be installed on all servers

Start the cluster communication daemon -> startsrc -s clcomdES

Upgrading the cluster options: node by node migration and snapshot conversion

Steps for migration:

  • Stop cluster services on all nodes
  • Upgrade the HACMP software on each node
  • Start cluster services on one node at a time

Convert from a supported version of HAS to HACMP:

  • Current s/w should be commited
  • Save snapshot
  • Remove the old version
  • Install HA 5.1 and verify

Check the previous cluster version: lslpp -h "cluster"

To save your HACMP configuration, create a snapshot in HACMP

Remove old version of HACMP: smitty install_remove ( select software name cluster*)

lppchk -v and lppchk -c cluster* both run clean if the installation is OK.

After you have installed HA on the cluster nodes you need to convert and apply the snapshot. Converting the snapshot must be performed before rebooting the cluster nodes.

clconvert_snapshot -C -v version -s -> converts an HA snapshot from the old version to the new version.

After installation, rebooting the cluster nodes is required to activate the new cluster manager.

Verification and synchronization: smitty hacmp -> Extended configuration -> Extended verification and synchronization -> Verify changes only

Perform Node-by-Node Migration:

  • Save the current configuration in a snapshot.
  • Stop cluster services on one node using graceful with takeover.
  • Verify the cluster services.
  • Install the latest HACMP version.
  • Check the installed software using lppchk.
  • Reboot the node.
  • Restart the HACMP software (smitty hacmp -> System Management -> Manage cluster services -> Start cluster services).
  • Repeat the above steps on all nodes.
  • Logs are written to /tmp/hacmp.out, /tmp/cm.log, /tmp/clstrmgr.debug.
  • The config_too_long message appears when the cluster manager detects that an event has been processing for more than the specified time. To change the time interval: smitty hacmp -> Extended configuration -> Extended event configuration -> Change/show time until warning

Cluster snapshots are saved in the /usr/es/sbin/cluster/snapshots.

The synchronization process will fail when migration is incomplete. To back out of the change you must restore the active ODM (smitty hacmp -> Problem determination tools -> Restore HACMP configuration database from active configuration).

Upgrading to a new HACMP version involves converting the ODM from the previous release to the current release. That is done by /usr/es/sbin/cluster/conversion/cl_convert -F -v 5.1

The log file for the conversion is /tmp/clconvert.log.

Clean-up process if the installation is interrupted: smitty install -> Software maintenance and installation -> Clean up after an interrupted installation

Network Configuration:

Physical networks: TCP/IP-based, such as Ethernet and token ring; device-based, such as RS232 and target-mode SSA (tmssa).

Configuring cluster Topology:

Standard and Extended configuration

smitty hacmp -> Initialization and standard configuration

IP aliasing is used as the default mechanism for service IP label/address assignment to a network interface.

  • Configure nodes: smitty hacmp -> Initialization and standard configuration -> Configure nodes to an HACMP cluster (give the cluster name and node names)
  • Configure resources: use "Configure resources to make highly available" (configure IP address/label, application server, volume groups, logical volumes, file systems)
  • Configure resource groups: use "Configure HACMP resource groups"; you can choose cascading, rotating, custom or concurrent
  • Assign resources to each resource group: Configure HACMP resource groups -> Change/show resources for a resource group
  • Verify and synchronize the cluster configuration
  • Display the cluster configuration

Steps for cluster configuration using extended path:

  • Run discovery: Running discovery retrieves current AIX configuration information from all cluster nodes.
  • Configuring an HA cluster: smitty hacmp → Extended configuration → Extended topology configuration → Configure an HACMP cluster → Add/change/show an HA cluster
  • Defining a node: smitty hacmp → Extended configuration → Extended topology configuration → Configure HACMP nodes → Add a node to the HACMP cluster
  • Defining sites: This is optional.
  • Defining networks: Run discovery before network configuration.
    1. IP-based networks: smitty hacmp → Extended configuration → Extended topology configuration → Configure HACMP networks → Add a network to the HACMP cluster → select the type of network (enter network name, type, netmask, enable IP takeover via IP aliases (default is true), and the IP address offset for heartbeating over IP aliases)
  • Defining communication interfaces: smitty hacmp → Extended configuration → Extended topology configuration → Configure HACMP communication interfaces/devices → Select communication interfaces → add node name, network name, network interface, IP label/address, network type
  • Defining communication devices: smitty hacmp → Extended configuration → Extended topology configuration → Configure HACMP communication interfaces/devices → Select communication devices
  • To see boot IP labels on a node, use netstat -in
  • Defining persistent IP labels: A persistent IP label always stays on the same node, does not require an additional physical interface, and is not part of any resource group. smitty hacmp → Extended topology configuration → Configure persistent node IP labels/addresses → Add a persistent node IP label (enter node name, network name, node IP label/address)
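The netstat -in check above can be scripted. A minimal sketch, assuming a hypothetical sample of netstat -in output (on a live node you would pipe the real command instead):

```shell
# Pull interface names and addresses from `netstat -in`-style output.
# The sample text is made up for illustration; on a cluster node run:
#   netstat -in | awk 'NR > 1 { print $1, $4 }'
sample='Name  Mtu   Network     Address         Ipkts
en0   1500  10.1.1      10.1.1.10       12345
en1   1500  10.1.2      10.1.2.10       6789'

result=$(printf '%s\n' "$sample" | awk 'NR > 1 { print $1, $4 }')
echo "$result"
```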

Resource Group Configuration

  • smitty hacmp → Initialization and standard configuration → Configure HACMP resource groups → Add a standard resource group → select cascading/rotating/concurrent/custom (enter resource group name and participating node names)
  • Assigning resources to the RG: smitty hacmp → Initialization and standard configuration → Configure HACMP resource groups → Change/show resources for a standard resource group (add service IP label/address, VG, FS, application servers)

Resource group and application management:

  • Bring a resource group offline: smitty cl_admin → select HACMP resource group and application management → Bring a resource group offline
  • Bring a resource group online: smitty hacmp → select HACMP resource group and application management → Bring a resource group online
  • Move a resource group: smitty hacmp → select HACMP resource group and application management → Move a resource group to another node

C-SPOC: Under smitty cl_admin

  • Manage HACMP services
  • HACMP Communication interface management
  • HACMP resource group and application manipulation
  • HACMP log viewing and management
  • HACMP file collection management
  • HACMP security and users management
  • HACMP LVM
  • HACMP concurrent LVM
  • HACMP physical volume management

Post Implementation and administration:

C-SPOC commands are located in the /usr/es/sbin/cluster/cspoc directory.

HACMP for AIX ODM object classes are stored in /etc/es/objrepos.

User group administration in hacmp is smitty cl_usergroup

Problem Determination:

To verify the cluster configuration use smitty clverify.dialog

Log file to store output: /var/hacmp/clverify/clverify.log

HACMP Log Files:

/usr/es/adm/cluster.log: Generated by HACMP scripts and daemons.

/tmp/hacmp.out: This log file contains a line-by-line record of every command executed by the event scripts.

/usr/es/sbin/cluster/history/cluster.mmddyyyy: The system creates a cluster history file every day.

/tmp/clstrmgr.debug: Messages generated by clstrmgrES activity.

/tmp/cspoc.log: generated by hacmp c-spoc commands

/tmp/dms_loads.out: stores log messages every time hacmp triggers the deadman switch

/var/hacmp/clverify/clverify.log: cluster verification log.

/var/ha/log/grpsvcs, /var/ha/log/topsvcs, /var/ha/log/grpglsm: daemon logs.

Snapshots: The primary information saved in a cluster snapshot is the data stored in the HACMP ODM classes(HACMPcluster, HACMPnode, HACMPnetwork, HACMPdaemons).

The cluster snapshot utility stores the data it saves in two separate files:

ODM data file(.odm), Cluster state information file(.info)

To create a cluster snapshot: smitty hacmp → HACMP extended configuration → HACMP snapshot configuration → Add a cluster snapshot

Cluster Verification and testing:

High and Low water mark values are 33 and 24

The default value for syncd is 60.

Before starting the cluster, the clcomd daemon is added to /etc/inittab and started by init.

Verify the status of the cluster services: lssrc -g cluster (the cluster manager daemon (clstrmgrES), the cluster SMUX peer daemon (clsmuxpd), and the cluster topology services daemon (topsvcsd) should be running).

Status of the different cluster subsystems: lssrc -g topsvcs and lssrc -g emsvcs

In the /tmp/hacmp.out file, look for the node_up and node_up_complete events.
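The hacmp.out check can be done with grep. A sketch over a hypothetical log excerpt (on a cluster node you would grep the real /tmp/hacmp.out):

```shell
# Count completed node_up events in an hacmp.out-style log.
# The excerpt below is invented for illustration; on a node run:
#   grep -c node_up_complete /tmp/hacmp.out
log='EVENT START: node_up node1
EVENT COMPLETED: node_up node1
EVENT COMPLETED: node_up_complete node1'

count=$(printf '%s\n' "$log" | grep -c node_up_complete)
echo "node_up_complete events: $count"
```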

To check the HACMP cluster status: /usr/sbin/cluster/clstat. To use this command you must have started the clinfo daemon.

To change the SNMP version: /usr/sbin/snmpv3_ssw -1

Stop the cluster services using smitty clstop: graceful, takeover, or forced. In the log file /tmp/hacmp.out, search for node_down and node_down_complete.

Graceful: Resources will be released, but will not be acquired by other nodes.

Graceful with takeover: Resources will be released and acquired by other nodes.

Forced: Cluster services will be stopped, but the resource group will not be released.

Resource group states: online, offline, acquiring, releasing, error, temporary error, or unknown.

Find the resource group status: /usr/es/sbin/cluster/utilities/clfindres or clRGinfo

Options: -t displays the settling time; -p displays priority override locations

To review the cluster topology: /usr/es/sbin/cluster/utilities/cltopinfo

Different type of NFS mounts: hard and soft

Hard mount is default choice.

NFS export file: /usr/es/sbin/cluster/etc/exports.

If the adapter configured with a service IP address fails: verify in /tmp/hacmp.out that a swap_adapter event has occurred, and verify that the service IP address has been moved using netstat -in.

You can implement RS232 heartbeat network between any 2 nodes.

To test a serial connection: lsdev -Cc tty; the baud rate is set to 38400, parity to none, and bits per character to 8.

RSCT verification: lssrc -ls topsvcs. To check RSCT group services: lssrc -ls grpsvcs

Monitor heartbeats over all the defined networks: cllsif.log from /var/ha/run/topsvcs.clustername.

Prerequisites:

PowerHA Version 5.5 → AIX V5300-09 → RSCT level 2.4.10

BOS components: bos.rte.*, bos.adt.*, bos.net.tcp.*

bos.clvm.enh (when using enhanced concurrent resource manager access)

The cluster.es.nfs fileset, shipped on the PowerHA installation medium, installs NFSv4 support. From the AIX BOS, bos.net.nfs.server 5.3.7.0 and bos.net.nfs.client 5.3.7.0 are required.

Check that all nodes have the same version of RSCT using lslpp -l rsct

Installing PowerHA: release notes are in /usr/es/sbin/cluster/release_notes

Enter smitty install_all → select input device → press F4 for a software listing → Enter

Steps for increase the size of a shared lun:

  • Stop the cluster on all nodes
  • Run cfgmgr
  • varyonvg vgname
  • lsattr -El hdisk#
  • chvg -g vgname
  • lsvg vgname
  • varyoffvg vgname
  • On subsequent cluster nodes that share the VG, run: cfgmgr; lsattr -El hdisk#; importvg -L vgname hdisk#
  • Synchronize
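The sequence above can be sketched as a dry-run script. Each step is only echoed here, since the real commands require an AIX node; the VG and hdisk names are placeholders:

```shell
# Dry-run sketch of the shared-LUN grow sequence: each step is echoed
# rather than executed. On AIX, replace echo with the real commands.
grow_shared_lun() {
    vg=$1; disk=$2
    echo "stop cluster services on all nodes"
    echo "cfgmgr"                 # rediscover the resized LUN
    echo "varyonvg $vg"
    echo "lsattr -El $disk"       # confirm the new size is visible
    echo "chvg -g $vg"            # grow the VG onto the new space
    echo "lsvg $vg"               # verify the extra free PPs
    echo "varyoffvg $vg"
    echo "importvg -L $vg $disk"  # refresh the definition on peer nodes
    echo "synchronize the cluster"
}
grow_shared_lun app1vg hdisk4
```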

PowerHA creates a backup copy of the modified files during synchronization on all nodes. These backups are stored in /var/hacmp/filebackup directory.

The file collection logs are stored in /var/hacmp/log/clutils.log file.

User and group Administration:

Adding a user: smitty cl_usergroup → Users in an HACMP cluster → Add a user to the cluster (also: list users, change/show characteristics of a user, remove a user from the cluster)

Adding a group: smitty cl_usergroup → Groups in an HACMP cluster → Add a group to the cluster (also: list groups, change/show characteristics of a group, remove a group from the cluster)

Command is used to change password on all cluster nodes: /usr/es/sbin/cluster/utilities/clpasswd

smitty cl_usergroup → Users in an HACMP cluster

  • Add a user to the cluster
  • List users in the cluster
  • Change/show characteristics of a user in the cluster
  • Remove a user from the cluster

smitty cl_usergroup → Groups in an HACMP cluster

  • Add a group to the cluster
  • List groups in the cluster
  • Change a group in the cluster
  • Remove a group

smitty cl_usergroup → Passwords in an HACMP cluster

Importing a VG automatically: smitty hacmp → Extended configuration → HACMP extended resource configuration → Change/show resources and attributes for a resource group → set "Automatically import volume groups" to true

C-SPOC LVM: smitty cl_admin → HACMP Logical Volume Management

  • Shared Volume groups
  • Shared Logical volumes
  • Shared File systems
  • Synchronize shared LVM mirrors (Synchronize by VG/Synchronize by LV)
  • Synchronize a shared VG definition

C-SPOC concurrent LVM: smitty cl_admin → HACMP concurrent LVM

  • Concurrent volume groups
  • Concurrent Logical volumes
  • Synchronize concurrent LVM mirrors

C-SPOC physical volume management: smitty cl_admin → HACMP physical volume management

  • Add a disk to the cluster
  • Remove a disk from the cluster
  • Cluster disk replacement
  • Cluster datapath device management

Cluster verification: smitty hacmp → Extended verification → Extended verification and synchronization. Verification log files are stored in /var/hacmp/clverify.

/var/hacmp/clverify/clverify.log → verification log

/var/hacmp/clverify/pass/nodename → if verification succeeds

/var/hacmp/clverify/fail/nodename → if verification fails

Automatic cluster verification: Each time you start cluster services and every 24 hours.

Configure automatic cluster verification: smitty hacmp → Problem determination tools → HACMP verification → Automatic cluster configuration monitoring

Cluster status monitoring: /usr/es/sbin/cluster/clstat -a and -o

/usr/es/sbin/cluster/utilities/cldump → provides a snapshot of the key cluster status components

clshowsrv: displays the status of the HACMP subsystems

Disk Heartbeat:

  • It's a non-IP heartbeat
  • It uses a dedicated disk/LUN
  • It's a point-to-point network
  • If more than 2 nodes exist in your cluster, you will need a minimum of n non-IP heartbeat networks
  • Disk heartbeating typically requires 4 seeks/second: each of the two nodes writes to the disk and reads from it once per second. The filemon tool monitors seeks.

Configuring disk heartbeat:

  • vpaths are configured as member disks of an enhanced concurrent volume group. smitty lvm → Volume groups → Add a volume group → give the VG name, PV names, and VG major number, and set "Create VG concurrent capable" to enhanced concurrent.
  • Import the new VG on all nodes using smitty importvg or importvg -V 53 -y c23vg vpath5
  • Create the diskhb network → smitty hacmp → Extended configuration → Extended topology configuration → Configure HACMP networks → Add a network to the HACMP cluster → choose diskhb
  • Add 2 communication devices → smitty hacmp → Extended configuration → Extended topology configuration → Configure HACMP communication interfaces/devices → Add communication interfaces/devices → Add pre-defined communication interfaces and devices → Communication devices → choose the diskhb
  • Create one communication device for the other node as well

Testing disk heartbeat connectivity: /usr/sbin/rsct/dhb_read is used to test the validity of a diskhb connection.

dhb_read -p vpath0 -r receives data over the diskhb network

dhb_read -p vpath3 -t transmits data over the diskhb network

Monitoring disk heartbeat: monitor the activity of the disk heartbeats via lssrc -ls topsvcs; watch the "Missed HBs" field.

Configure HACMP application monitoring: smitty cm_cfg_appmon → Add a process application monitor → give process names and application startup/stop scripts

Application availability analysis tool: smitty hacmp → System management → Resource group and application management → Application availability analysis

Commands:

List the cluster topology : /usr/es/sbin/cluster/utilities/cllsif

/usr/es/sbin/cluster/clstat

Start the cluster: smitty clstart. Monitor with /tmp/hacmp.out and check for node_up_complete.

Stop the cluster: smitty clstop → monitor with /tmp/hacmp.out and check for node_down_complete.

Determine the state of the cluster: /usr/es/sbin/cluster/utilities/clcheck_server

Display the status of the HACMP subsystems: clshowsrv -v / -a

Display the topology information: cltopinfo -c / -n / -w / -i

Monitor the heartbeat activity: lssrc -ls topsvcs (check for dropped packets and errors)

Display resource group attributes: clRGinfo -v, -p, -t, -c, -a, or clfindres

HMC & LPAR Short Notes

Posted: May 30, 2011 in Uncategorized

HMC AND LPAR

HMC device is required to perform LPAR , DLPAR and CUOD configuration and management.

A single HMC can manage 48 i5 systems and 254 LPARs

A partition can have a maximum of 64 virtual processors.

A mix of dedicated and shared processors within the same partition is not supported.

Sharing a pool of virtualized processors is known as Micro Partitioning technology

The maximum no.of physical processors on p5 is 64.

In Micro-Partitioning technology the minimum capacity is 1/10 of a processing unit.

Virtual Ethernet enables inter partition communication without a dedicated physical network adapter.

The virtual IO server owns the real resources that are shared with other clients.

Shared Ethernet adapter is a new service that acts as a layer 2 network switch to route network traffic from a virtual Ethernet to a real network adapter.

On p5 – 595 Max no.of processors- 64, Max Memory Size – 2TB, Dedicated processor partitions-64, Shared processor partitions- 254.

HMC model for p5 – 595 is 7310-C04 or 7310-CR3

HMC Functions: LPAR, DLPAR, Capacity on demand without reboot, Inventory and microcode management, Remote power control.

254 partitions supported by one HMC.

A Partition Profile is used to allocate resources such as processing units, memory and IO cards to a partition. Several partition profiles may be created for the same partition.

System profile is a collection of partition profiles. A partition profile cannot be added to a system profile if the partition resources are already committed to another partition profile.

To change from one system profile to another, all the running partitions must be shutdown.

To find the current firmware level: lscfg -vp | grep -p 'Platform Firmware:'

Simultaneous multi threading : The instructions from the OS are loaded simultaneously into the processor and executed.

DLPAR : DLPAR allows us to add, move or remove processors, memory and IO resources to, from or between active partitions manually without having to restart or shutdown the partition.

Unused processing units available in shared processor pool.

Dedicated processors are whole processors that are assigned to a single partition. The minimum no. of dedicated processors you must assign is one processor.

When a partition with dedicated processors is powered down, their processors will be available to the shared processor pool. This capability is enabled by “Allow idle processors to be shared”.

Idle processors from active partitions with dedicated processors can be used by any uncapped partition that requires additional processing units to complete its jobs.

Shared processor minimum processing unit is 0.1

Capped : The processor usage never exceeds the assigned processing capacity.

Uncapped : Processing capacity may be exceeded when the shared processor pool has spare processing units.

Weight is a number between 0 and 255. If there are 3 processing units available in the shared processor pool, partition A has an uncapped weight of 80, and B has 160, then LPAR A will receive 1 processing unit and B will receive 2 processing units.
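The weight arithmetic above can be checked with a one-line awk calculation: spare capacity is divided in proportion to each LPAR's weight.

```shell
# Worked version of the uncapped-weight example: 3 spare processing
# units split between weights 80 (LPAR A) and 160 (LPAR B).
out=$(awk 'BEGIN {
    spare = 3.0          # free processing units in the shared pool
    wA = 80; wB = 160    # uncapped weights of LPARs A and B
    printf "A=%.0f B=%.0f", spare * wA / (wA + wB), spare * wB / (wA + wB)
}')
echo "$out"
```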

Minimum Memory is the minimum amount of memory which needed by the logical partition to start.

Desired memory is the requested amount of memory for the partition; the partition will receive an amount of memory between the minimum and the desired value. Desired memory is the amount the LPAR should have when it is powered on. If the managed system does not have the desired amount of memory available, the smaller amount of uncommitted memory that is available will be assigned to the LPAR when it is activated.

You can't increase the memory beyond the maximum value.

Installed memory is the total amount of memory installed on the managed system.

Creating a new LPAR :

Server and Partition → Server Management → right-click Partitions → Create → Logical Partition

Give partition ID(Numeric between 1 and 254) and name (max 31 characters)

Give partition type (AIX or linux, i5/OS, VIO)

Select work load management group NO

Give profile name

Specify the min, desired, and max memory

Select dedicated or shared processors

If you select dedicated, give the min, desired, and max processors

If you select shared, give the min, desired, and max processing units and click Advanced

Click the capped or uncapped radio button and give the min, desired, and max virtual processors

If you select uncapped, give the weight as well.

Allocate physical IO resources : Select the IO and click the add as required/add as desired.

IO resources can be configured as required or desired. A required resource is needed for the partition to start when the profile is activated. Desired resources are assigned to the partition if they are available when the profile is activated.

And select the console, location code

To create another profile: right-click the partition → Create → Profile → give the profile ID.

Change default profile: right-click the partition → Change default profile → choose the profile.

Restart options :

DUMP : Initiate a main storage or system memory dump on the logical partition and restart the logical partition when complete.

Immediate : as quickly as possible , without notifying the logical partition.

DUMP Retry : Retry a main storage or system memory dump on the logical partition and restart the logical partition when complete.

Shutdown options :

Delayed : Shutdown the logical partition by starting the delayed power off sequence.

Immediate : as quickly as possible , without notifying the logical partition.

DLPAR:

DLPAR can be performed against the following types :

Physical Adapters

Processors

Memory

VIO Adapters

Right-click the partition → Dynamic Logical Partitioning → Physical adapter resources → add/move/remove

Licensed Internal Code Updates: To install licensed internal code fixes on your managed systems for a current release click “change licensed internal code for the current release”

To upgrade licensed internal code fixes on your managed systems for a current release click “upgrade licensed internal code for the current release”

HMC security: Servers and clients communicate over the Secure Sockets Layer (SSL), which provides server authentication, data encryption, and data integrity.

HMC serial number: lshmc -v

To format the DVD-RAM media

The following steps show how to format the DVD-RAM disk:

1. Place a DVD-RAM disk in to the HMC DVD drive.

2. In the HMC Navigation area, under your managed system, click Licensed Internal Code Maintenance.

3. Then click HMC Code Update.

4. In the right-hand window, click Format Removable Media.

5. Then select the Format DVD radio button.

6. Select Backup/restore.

7. Then click the Format button.

The DVD-RAM disk should be formatted in a few seconds, after which you will receive a "Format DVD has been successfully completed – ACT0001F" message.

Back up to formatted DVD media

Use the following steps to back up the critical console data (CCD) to the formatted DVD media:

1. In the HMC Navigation area, click Licensed Internal Code Maintenance.

2. Then click the HMC Code Update.

3. In the right-hand window, click Back up Critical Console Data.

4. Select the Back up to DVD on local system radio button and click the Next button.

5. Enter some valid text in the description window and click OK.

AIX Short Notes

Posted: May 30, 2011 in Uncategorized

AIX

LVM:

VG: One or more PVs can make up a VG.

Within each volume group one or more logical volumes can be defined.

VGDA(Volume group descriptor area) is an area on the disk that contains information pertinent to the vg that the PV belongs to. It also includes information about the properties and status of all physical and logical volumes that are part of the vg.

VGSA(Volume group status area) is used to describe the state of all PPs from all physical volumes within a volume group. VGSA indicates if a physical partition contains accurate or stale information.

LVCB(Logical volume control block) contains important information about the logical volume, such as the no. of logical partitions or disk allocation policy.

VG type    Max PVs   Max LVs   Max PPs/VG   Max PP size

Normal     32        256       32512        1 GB

Big        128       512       130048       1 GB

Scalable   1024      4096      2097152      128 GB

PVIDs are stored in the ODM.

Creating a PVID: chdev -l hdisk3 -a pv=yes

Clearing the PVID: chdev -l hdisk3 -a pv=clear

Display the allocation of PPs to LVs: lspv -p hdisk0

Display the layout of a PV: lspv -M hdisk0

Disabling partition allocation for a physical volume: chpv -an hdisk2 (Allocatable=no)

Enabling partition allocation for a physical volume: chpv -ay hdisk2 (Allocatable=yes)

Change the disk to unavailable: chpv -vr hdisk2 (PV state=removed)

Change the disk to available: chpv -va hdisk2 (PV state=active)

Clear the boot record: chpv -c hdisk1

To define hdisk3 as a hot spare: chpv -hy hdisk3

To remove hdisk3 as a hot spare: chpv -hn hdisk3

Migrating two disks: migratepv hdisk1 hdisk2

Migrate only the PPs that belong to a particular LV: migratepv -l testlv hdisk1 hdisk5

Move data from one partition located on a physical disk to another physical partition on a different disk: migratelp testlv/1/2 hdisk5/123

Logical track group (LTG) size is the maximum allowed transfer size for a disk I/O operation: lquerypv -M hdisk0

VOLUME GROUPS

For each VG, two device driver files are created under /dev.

Creating a VG: mkvg -y vg1 -s 64 -V 99 hdisk4

Creating a big VG: mkvg -B -y vg1 -s 128 -f -n -V 101 hdisk2

Creating a scalable VG: mkvg -S -y vg1 -s 128 -f hdisk3 hdisk4 hdisk5

Adding disks that require more than 1016 PPs/PV: chvg -t 2 vg1

Information about a VG read from the VGDA located on a disk: lsvg -n vg1

Change the auto-varyon flag for a VG to on: chvg -ay newvg

Change the auto-varyon flag for a VG to off: chvg -an newvg

Quorum ensures data integrity in the event of disk failure. A quorum is a state in which 51 percent or more of the PVs in a VG are accessible. When quorum is lost, the VG varies itself off.

Turn off the quorum: chvg -Qn testvg

Turn on the quorum: chvg -Qy testvg

To change the maximum number of PPs per PV: chvg -t 16 testvg

To change a normal VG to a scalable VG: 1. varyoffvg ttt 2. chvg -G ttt 3. varyonvg ttt

Change the LTG size: chvg -L 128 testvg → VGs are created with a variable logical track group size.

Hot spare: All PPs on the hot-spare physical volume should be free. A PP located on a failing disk will be copied from its mirror copy to one or more disks from the hot spare pool.

Designate hdisk4 as a hot spare: chpv -hy hdisk4

Migrate data from a failing disk to a spare disk: chvg -hy vgname

Change the synchronization policy: chvg -sy testvg (the synchronization policy controls automatic synchronization of stale partitions within the VG)

Change the maximum number of PPs within a VG: chvg -P 2048 testvg

Change the maximum number of LVs per VG: chvg -v 4096 testvg

To remove a VG lock: chvg -u

Extending a volume group: extendvg testvg hdisk3; if a PVID is present, use extendvg -f testvg hdisk3

Reducing a disk from a VG: reducevg testvg hdisk3

Synchronize the ODM information: synclvodm testvg

To move data from one system to another, use the exportvg command. exportvg only removes the VG definition from the ODM and does not delete any data from the physical disks: exportvg testvg

importvg: recreates the reference to the VG data and makes that data available. This command reads the VGDA of one of the PVs that are part of the VG. It uses redefinevg to find all other disks that belong to the VG. It adds the corresponding entries into the ODM database and updates /etc/filesystems with the new values: importvg -y testvg hdisk7

  • Server A: lsvg -l app1vg
  • Server A: umount /app1
  • Server A: varyoffvg app1vg
  • Server B: lspv | grep app1vg
  • Server B: exportvg app1vg
  • Server B: importvg -y app1vg -V 90 vpath0
  • Server B: chvg -an app1vg
  • Server B: varyoffvg app1vg

Varying on a volume group: varyonvg testvg

Varying off a volume group: varyoffvg testvg

Reorganizing a volume group: this command is used to reorganize physical partitions within a VG. The PPs are rearranged on the disks according to the intra- and inter-physical volume allocation policies: reorgvg testvg

Synchronize the VG: syncvg -v testvg; syncvg -p hdisk4 hdisk5

Mirroring a volume group: lsvg -p rootvg; extendvg rootvg hdisk1; mirrorvg rootvg; bosboot -ad /dev/hdisk1; bootlist -m normal hdisk0 hdisk1
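The rootvg mirroring sequence can be sketched as a dry-run loop; each command is only echoed here, since the real commands need an AIX node. hdisk1 as the new mirror disk is a placeholder:

```shell
# Dry-run of the rootvg mirroring sequence; on AIX, run the commands
# themselves instead of echoing them.
mirror_rootvg() {
    for cmd in \
        "lsvg -p rootvg" \
        "extendvg rootvg hdisk1" \
        "mirrorvg rootvg" \
        "bosboot -ad /dev/hdisk1" \
        "bootlist -m normal hdisk0 hdisk1"
    do
        echo "$cmd"
    done
}
mirror_rootvg
```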

Splitting a volume group: splitvg -y newvg -c 1 testvg

Rejoin the two copies: joinvg testvg

Logical Volumes:

Create an LV: mklv -y lv3 -t jfs2 -a im testvg 10 hdisk5

Remove an LV: umount /fs1; rmlv lv1

Delete all data belonging to logical volume lv1 on physical volume hdisk7: rmlv -p hdisk7 lv1

Display the number of logical partitions and their corresponding physical partitions: lslv -m lv1

Display information about logical volume testlv read from the VGDA located on hdisk6: lslv -n hdisk6 testlv

Display the LVCB: getlvcb -AT lv1

Increasing the size of an LV: extendlv -a ie -ex lv1 3 hdisk5 hdisk6

Copying an LV: cplv -v dumpvg -y lv8 lv1

Creating copies of an LV: mklvcopy -k lv1 3 hdisk7 &

Splitting an LV: umount /fs1; splitlvcopy -y copylv testlv 2

Removing a copy of an LV: rmlvcopy testlv 2 hdisk6

Changing the maximum number of logical partitions to 1000: chlv -x 1000 lv1

Installation :

New and complete overwrite installation: for a new machine, to overwrite an existing installation, or to reassign your hard disks.

Migration: upgrades AIX versions, for example from 5.2 to 5.3. This method preserves most file systems, including the root volume group.

Preservation installation: use this if you want to preserve the user data; see /etc/preserve.list. This installation overwrites the /usr, /tmp, /var, and / file systems by default. The /etc/filesystems file is listed in /etc/preserve.list by default.

TCB:

  • To check whether TCB is installed: /usr/bin/tcbck
  • By installing a system with the TCB option, you enable the trusted path, trusted shell, trusted processes, and system integrity checking.
  • Every device is part of the TCB, and every file in the /dev directory is monitored by the TCB.
  • Critical information about many files is stored in the /etc/security/sysck.cfg file.
  • You can enable TCB only at installation time.

Installation steps: Through the HMC → Activate → override the boot mode to SMS.

Without an HMC → after POST → on hearing the 2 beeps → press 1.

Insert AIX 5L CD 1 → select boot options (No. 5) → select install/boot device (option 1) → select CD/DVD → select SCSI → select normal boot → exit from SMS → system boots from media → choose language → Change/show installation settings → New and complete overwrite → select hard disk → install options → Enter to confirm → after installation the system reboots automatically

Erase a hard disk → using the diag command

Alternate Disk Installation:

  • Cloning the current running rootvg to an alternate disk
  • Installing a mksysb image on another disk.

alt_disk_copy: creates copies of rootvg on an alternate set of disks.

alt_disk_mksysb: installs an existing mksysb on an alternate set of disks.

alt_rootvg_op: performs wake, sleep, and customize operations.

Alternate mksysb installation: smitty alt_mksysb

Alternate rootvg cloning: smitty alt_clone

Cloning AIX :

  • Having an online backup, in case of a disk crash.
  • When applying new maintenance levels, a copy of rootvg is made to an alternate disk, then the updates are applied to that copy.

To view the BOS installation logs: cd /var/adm/ras; cat devinst.log, or alog -o -f bosinstlog, or smit alog_show

Installation Packages:

Fileset: A fileset is the smallest installable unit. Ex: bos.net.uucp

Package: A group of installable filesets. Ex: bos.net

Licensed Program Products: A complete software product. Ex: BOS

Bundle: A bundle is a list of software that contains filesets, packages, and LPPs. Install a software bundle using smitty update_all.

PTF: Program temporary fix. An updated fileset or a new fileset that fixes a previous system problem. PTFs are installed through installp.

APAR: Authorized program analysis report. APARs are applied to the system through instfix.

Fileset revision level identification: version.release.modification.fixlevel
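The VRMF fields can be split out of a fileset level string with awk; the level shown is just an example value:

```shell
# Split a fileset level (version.release.modification.fix) into its
# four VRMF fields.
level=5.3.7.0
vrmf=$(echo "$level" | awk -F. '{ printf "V=%s R=%s M=%s F=%s", $1, $2, $3, $4 }')
echo "$vrmf"
```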

The filesets that are below level 4.1.2.0: oslevel -l 4.1.2.0
The filesets at levels later than the current maintenance level: oslevel -g
To list all known recommended maintenance levels on the system: oslevel -rq
oslevel -s for the SP level
Current maintenance level: oslevel -r

Installing software: applied and committed states

Applied: In the applied state, the previous version is stored in /usr/lpp/packagename.

Committed: The previous version is removed first and then the installation proceeds, so a committed fileset cannot be rejected.

To install filesets within the bos.net software package from /usr/sys/inst.images in the applied state: installp -avX -d /usr/sys/inst.images bos.net

Install software in the committed state: installp -acpX -d /usr/sys/inst.images bos.net

A record of the installp output is stored in /var/adm/sw/installp.summary

Commit all updates: installp -cgX all

List all installable software: installp -L -d /dev/cd0

Cleaning up after a failed installation: installp -C

Removing installed software: installp -ugp

Software installation: smitty install_latest

Committing applied updates: smitty install_commit

Rejecting applied updates: smitty install_reject

Removing installed software: smitty install_remove

To find what maintenance level your filesets are currently on: lslpp -l

To list the individual files that are installed with a particular fileset: lslpp -f bos.net

To list the installation and update history of filesets: lslpp -h

To list fixes that are on a CD-ROM in /dev/cd0: instfix -T -d /dev/cd0

To determine whether an APAR is installed: instfix -ik IY737478

To list what maintenance levels are installed: instfix -i | grep ML

To install an APAR: instfix -k IY75645 -d /dev/cd0

Installing an individual fix by APAR: smitty update_by_fix

To install new fixes available from IBM: smitty update_all

Verifying the integrity of the OS: lppchk -v

Creating installation images on disk: smitty bffcreate

Verify whether the software installed on your system is in a consistent state: lppchk

To install RPM packages use geninstall: geninstall -d media all

Uninstall software: geninstall -u -f file

List installable software on a device: geninstall -L -d media

AIX Boot Process:

  1. When the server is Powered on Power on self test(POST) is run and checks the hardware
  2. On successful completion on POST Boot logical volume is searched by seeing the bootlist
  3. The AIX boot logical contains AIX kernel, rc.boot, reduced ODM & BOOT commands. AIX kernel is loaded in the RAM.
  4. Kernel takes control and creates a RAM file system.
  5. Kernel starts /etc/init from the RAM file system
  6. init runs the rc.boot 1 ( rc.boot phase one) which configures the base devices.
  7. rc.boot1 calls restbase command which copies the ODM files from Boot Logical Volume to RAM file system
  8. rc.boot1 calls cfgmgr –f command to configure the base devices
  9. rc.boot1 calls bootinfo –b command to determine the last boot device
  10. Then init starts rc.boot2 which activates rootvg
  11. rc.boot2 calls ipl_varyon command to activate rootvg
  12. rc.boot2 runs fsck –f /dev/hd4 and mount the partition on / of RAM file system
  13. rc.boot2 runs fsck –f /dev/hd2 and mounts /usr file system
  14. rc.boot2 runs fsck –f /dev/hd9var and mount /var file system and runs copy core command to copy the core dump if available from /dev/hd6 to /var/adm/ras/vmcore.0 file. And unmounts /var file system
  15. rc.boot2 runs swapon /dev/hd6 and activates paging space
  16. rc.boot2 runs migratedev and copies the device files from RAM file system to /file system
  17. rc.boot2 runs cp /../etc/objrepos/Cu* /etc/objrepos and copies the ODM files from RAM file system to / filesystem
  18. rc.boot2 runs mount /dev/hd9var and mounts /var filesystem
  19. rc.boot2 copies the boot log messages to alog
  20. rc.boot2 removes the RAM file system
  21. Kernel starts /etc/init process from / file system
  22. The /etc/init points /etc/inittab file and rc.boot3 is started. Rc.boot3 configures rest of the devices
  23. rc.boot3 runs fsck –f /dev/hd3 and mount /tmp file system
  24. rc.boot3 runs syncvg rootvg &
  25. rc.boot3 runs cfgmgr –p2 or cfgmgr –p3 to configure rest of the devices. Cfgmgr –p2 is used when the physical key on MCA architecture is on normal mode and cfgmgr –p3 is used when the physical key on MCA architecture is on service mode.
  26. rc.boot3 runs cfgcon command to configure the console
  27. rc.boot3 runs savebase command to copy the ODM files from /dev/hd4 to /dev/hd5
  28. rc.boot3 starts syncd 60 and errdemon
  29. rc.boot3 turns off the LEDs
  30. rc.boot3 removes the /etc/nologin file
  31. rc.boot3 checks CuDv for chgstatus=3 and displays any missing devices on the console
  32. The next line of /etc/inittab is executed

/etc/inittab file format: identifier:runlevel:action:command
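The four colon-separated fields of an inittab record can be pulled apart with awk. A minimal sketch; the "myapp" entry is a hypothetical example, not a stock AIX record:

```shell
# A hypothetical /etc/inittab record: identifier:runlevel:action:command
line='myapp:2:respawn:/usr/local/bin/myapp -d'

# Split on ":" and label each field
echo "$line" | awk -F: '{ printf "id=%s runlevel=%s action=%s command=%s\n", $1, $2, $3, $4 }'
# id=myapp runlevel=2 action=respawn command=/usr/local/bin/myapp -d
```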

mkitab → adds records to the /etc/inittab file

lsitab → lists records in the /etc/inittab file

chitab → changes records in the /etc/inittab file

rmitab → removes records from the /etc/inittab file

To display a boot list: bootlist -m normal -o

To change a boot list: bootlist -m normal cd0 hdisk0

Troubleshooting on boot process:

Accessing a system that will not boot: press F5 on a PCI-based system to boot from the tape/CD-ROM → insert volume 1 of the installation media → select maintenance mode for system recovery → Access a Root Volume Group → select the volume group

Damaged boot image: access the system as above → check the / and /tmp file system sizes → determine the boot disk using lslv -m hd5 → recreate the boot image using bosboot -a -d /dev/hdiskn → check for CHECKSTOP errors in the error log (if such errors are found, hardware is probably failing) → shut down and restart the system

Corrupted file system or corrupted JFS log: access the system as above → run fsck on all file systems → format the JFS log using /usr/sbin/logform /dev/hd8 → recreate the boot image using bosboot -a -d /dev/hdiskn

Super block corrupted: if fsck indicates that block 8 is corrupted, the superblock for the file system is corrupted and needs to be repaired; restore it from the backup copy kept in block 31 (dd count=1 bs=4k skip=31 seek=1 if=/dev/hdn of=/dev/hdn) → rebuild the JFS log using /usr/sbin/logform /dev/hd8 → mount the root and usr file systems (mount /dev/hd4 /mnt; mount /usr) → copy the system configuration to a backup directory (cp /mnt/etc/objrepos* /mnt/etc/objrepos/backup) → copy the configuration from the RAM file system (cp /etc/objrepos/Cu* /mnt/etc/objrepos) → unmount all file systems → save the clean ODM to the BLV using savebase -d /dev/hdisk → reboot
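The dd repair above can be rehearsed safely on a scratch file instead of a real /dev/hdN device. A sketch; conv=notrunc is added here only because dd would otherwise truncate a regular file (a raw device has no such problem):

```shell
# Build a 32-block (4 KB each) "filesystem" in a scratch file and plant a
# marker in block 31, where the backup superblock lives.
f=/tmp/fsdemo.$$
dd if=/dev/zero of="$f" bs=4k count=32 2>/dev/null
printf 'SUPERBLOCK-BACKUP' | dd of="$f" bs=4k seek=31 conv=notrunc 2>/dev/null

# Same shape as the repair command: copy block 31 over block 1
dd count=1 bs=4k skip=31 seek=1 if="$f" of="$f" conv=notrunc 2>/dev/null

# Block 1 now holds the backup copy
dd if="$f" bs=4k skip=1 count=1 2>/dev/null | head -c 17   # SUPERBLOCK-BACKUP
rm -f "$f"
```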

Corrupted /etc/inittab file: check for an empty or missing inittab file. Check for problems with /etc/environment, /bin/sh, /bin/bsh, /etc/fsck and /etc/profile → reboot

Run level → a selected group of processes. 2 is multi-user and the default run level; S, s, M and m are for maintenance mode

Identifying the current run level → cat /etc/.init.state

Displaying the history of previous run levels: /usr/lib/acct/fwtmp < /var/adm/wtmp | grep run-level

Changing system run levels: telinit M

Run-level scripts allow users to start and stop selected applications while changing the run level. Scripts beginning with K are stop scripts; those beginning with S are start scripts.

Go to maintenance mode by using shutdown -m

rc.boot file: The /sbin/rc.boot file is a shell script, called by init, that configures devices, boots from disk, varies on the root volume group, enables file systems and calls the BOS installation programs.

/etc/rc file: It performs normal startup initialization. It varies on all volume groups, activates all paging spaces (swapon -a), configures all dump devices (sysdumpdev -q), performs file system checks (fsck -fp) and mounts all file systems.

/etc/rc.net: It contains network configuration information.

/etc/rc.tcpip: it starts all network-related daemons (inetd, gated, routed, timed, rwhod)

Backups:

mksysb: creates a bootable image of all mounted file systems on rootvg. This command is used to restore a system to its original state.

Tape format: BOS boot image (kernel and device drivers), BOS install image (tapeblksz, image.data, bosinst.data), dummy table of contents, rootvg backup

Exclude file systems using mksysb -ie /dev/rmt0

Exclusions are read from /etc/exclude.rootvg (cat /etc/exclude.rootvg)

List the contents of a mksysb image: smitty lsmksysb

Restore a mksysb image: smitty restmksysb
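For reference, /etc/exclude.rootvg holds one grep-style pattern per line that mksysb -e matches against the files being backed up; a sketch of an entry list, with hypothetical paths:

```
^./tmp/
^./home/scratch/
```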

The savevg command finds and backs up all files belonging to the specified volume group, e.g. savevg -ivf /dev/rmt0 uservg.

The restvg command restores a user volume group.

The backup command backs up all files and file systems; the restore command extracts files from archives created with the backup command.

Verify the contents of backup media → tcopy /dev/rmt0

Daily Management :

/etc/security/environ : Contains the environment attributes for a user.

/etc/security/lastlog : an ASCII file that contains last-login attributes (time of last unsuccessful login, unsuccessful login count, time of last login)

/etc/security/limits : specifies the process resource limits for each user

/etc/security/user : contains the extended attributes of users

/usr/lib/security/mkuser.default : It contains the default attributes for a new user.

The /etc/utmp file contains a record of users logged into the system. Command: who -a

/var/adm/wtmp file contains connect-time accounting records

/etc/security/failedlogin contains a record of unsuccessful login attempts.

/etc/environment contains variables specifying the basic environment for all processes.

The /etc/profile file is the first file that the OS uses at login time.

To enable user smith to access this system remotely : chuser rlogin=true smith

Remove a user: rmuser smith

Remove a user along with the authentication information: rmuser -p smith

Display the current run level: who -r

Display the active processes: who -p

Changing the current shell : chsh

Change the prompt: export PS1="Ready."

To list all 64-bit processes: ps -M

To change the priority of a process: nice and renice

SUID –set user id – This attribute sets the effective and saved user ids of the process to the owner id of the file on execution

SGID – set group id — This attribute sets the effective and saved group ids of the process to the group id of the file on execution

The cron daemon runs shell commands at specified dates and times.

Use the at command to submit commands that are to be run only once.

System Planning:

RAID: Redundant array of independent disks.

RAID 0: Striping. Data is split into blocks of equal size and stored on different disks.

RAID 1: Mirroring. Duplicate copies are kept on separate physical disks.

RAID 5: Striping with Parity. Data is split into blocks of equal size, and an additional block on each stripe holds parity information, so the data can be rebuilt if one disk fails.

RAID 10: It is a combination of mirroring and striping.
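The parity mechanism behind RAID 5 can be sketched with shell arithmetic: the parity block is the bitwise XOR of the data blocks, so any single lost block can be rebuilt by XORing the survivors. The byte values are arbitrary examples:

```shell
# Three one-byte "data blocks" and their parity
d1=0xA5; d2=0x3C; d3=0x0F
parity=$(( d1 ^ d2 ^ d3 ))

# Pretend the disk holding d2 failed: rebuild it from parity + survivors
rebuilt=$(( parity ^ d1 ^ d3 ))
printf 'lost=0x%02X rebuilt=0x%02X\n' "$d2" "$rebuilt"
# lost=0x3C rebuilt=0x3C
```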

AIX 5.3 requires at least 2.2 GB of physical space.

Configuration:

ODM: The ODM (Object Data Manager) is a repository in which the OS keeps information about your system, such as devices, software and TCP/IP configuration.

Basic Components of ODM: object classes, objects, descriptors

ODM directories: /usr/lib/objrepos, /usr/share/lib/objrepos, /etc/objrepos

Steps for NFS implementation:

  • NFS daemons should be running on both server and client
  • The file systems that need to be remotely available have to be exported (smitty mknfsexp, exportfs -a, showmount -e myserver)
  • The exported file systems need to be mounted on the remote systems

NFS services: /usr/sbin/rpc.mountd, /usr/sbin/nfsd, /usr/sbin/biod, rpc.statd, rpc.lockd

Changing an exported file system: smitty chnfsexp

TCP/IP daemons: inetd, gated, routed, named

ODM commands: odmadd, odmchange, odmcreate, odmshow, odmdelete, odmdrop, odmget

To start smit in graphical mode use smit -m

Creating alias: alias rm=/usr/sbin/linux/rm

export PATH=/usr/linux/bin:$PATH; print $PATH

Network File System:

Daemons: server side (/usr/sbin/rpc.mountd, /usr/sbin/nfsd, portmap, rpc.statd, rpc.lockd); client side (/usr/sbin/biod)

Start the NFS daemons using mknfs -N. To start all NFS daemons use startsrc -g nfs.

Exporting nfs directories:

  • Verify whether NFS is running using lssrc -g nfs
  • smitty mknfsexp
  • Specify the path name and set the mode (rw, ro). This updates the /etc/exports file.
  • /usr/sbin/exportfs -a → sends all the information in /etc/exports to the kernel
  • Verify that all file systems are exported using showmount -e myserver
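For reference, smitty mknfsexp records each export as one line in /etc/exports; a sketch of an entry, with hypothetical directory and host names:

```
/home/projects -access=client1:client2,root=client1
```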

Export an NFS directory temporarily using exportfs -i /dirname

Unexport an NFS directory using smitty rmnfsexp

Establishing NFS mounts using smitty mknfsmnt

Changing an exported file system using smitty chnfsexp

Network configuration:

Stop the TCP/IP daemons using the /etc/tcp.clean script.

/etc/services file contains information about the known services

Add network routes using smitty mkroute or route add -net 192.168.1 -netmask 255.255.255.0

Traceroute command shows the route taken

Changing the IP address: smitty mktcpip

Identifying network interfaces: lsdev -Cc if

Activating network interface: ifconfig interface address netmask up

Deactivating network interface: ifconfig tr0 down

Deleting an address: ifconfig tr0 delete

Detaching network interface: ifconfig tr0 detach

Creating an IP alias: ifconfig interface address netmask alias

To determine the MTU size of a network interface use lsattr -El interface.

Paging Space: a page is a unit of virtual memory that holds 4 KB of data.

Increasing paging space: chps -s 3 hd6 (grows hd6 by 3 logical partitions)

Reducing paging space: chps -d 1 hd6

Moving a paging space within the VG: migratepv -l hd6 hdisk0 hdisk1

Removing a paging space: swapoff /dev/paging03; rmps paging03
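chps works in logical partitions (LPs), so the real size change depends on the volume group's physical partition size. A quick sketch of the arithmetic, assuming a 64 MB PP size:

```shell
# chps -s 3 hd6 grows the paging space by 3 LPs; in MB that is:
pp_size_mb=64          # assumed PP size of rootvg
lps_added=3
echo $(( lps_added * pp_size_mb ))   # 192 (MB added)
```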

Device configuration:

lscfg → details about devices, e.g. lscfg -vpl rmt0

To show more about a particular processor: lsattr -El proc0

To discover how much memory is installed: lsattr -El sys0 | grep realmem

To show processor details: lscfg | grep proc or lsdev -Cc processor

To show available processors: bindprocessor -q

To turn on SMT: smtctl -m on -w boot

To turn off SMT: smtctl -m off -w now

Modify an existing device configuration using chdev. The device can be in the defined, stopped or available state.

To change the maxuproc value: chdev -l sys0 -a maxuproc=100

Remove a device configuration: rmdev -Rdl rmt0

bootinfo -y → reports whether the kernel is 32-bit or 64-bit.

Commands to enable the 64-bit kernel: ln -sf /usr/lib/boot/unix_64 /unix → ln -sf /usr/lib/boot/unix_64 /usr/lib/boot/unix → bosboot -ad /dev/ipldevice → shutdown -r → ls -al /unix

File Systems:

Types: Journaled, Enhanced journaled, CDROM, NFS

FS Structure: Super block, allocation groups, inodes, blocks, fragments, and device logs

Super block: contains control information about the file system, such as the overall file system size in 512-byte blocks, the FS name, FS log device, version number, number of inodes, list of free inodes, list of free data blocks, date and time of creation, and FS state.

This data is stored in the first block of the file system, and a backup copy is kept in block 31.

Allocation group: consists of inodes and their corresponding data blocks.

Inodes: contain control information about a file, such as its type, size, owner, and the dates and times when it was created, modified and last accessed; they also hold pointers to the data blocks that store the actual data. For JFS, the maximum number of inodes (and therefore files) is determined by the number of bytes per inode (NBPI), fixed at creation time. For JFS2 there is no NBPI; inodes are allocated dynamically.

Data blocks: hold the actual data. The default block size is 4 KB.

Device logs: the JFS log stores transactional information, which can be used to roll back incomplete operations if the machine crashes. rootvg uses the LV hd8 as a common log.
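The NBPI relationship above is plain arithmetic: the fixed JFS inode count is the file system size divided by the bytes-per-inode value chosen at creation time. The sizes below are illustrative:

```shell
# 512 MB filesystem with one inode per 4 KB (an example NBPI value)
fs_size=$(( 512 * 1024 * 1024 ))
nbpi=4096
echo $(( fs_size / nbpi ))   # 131072 inodes
```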

FS differences:

Function        JFS      JFS2
Max FS size     1 TB     4 PB
Max file size   64 GB    4 PB
No. of inodes   Fixed    Dynamic
Inode size      128 B    512 B
Fragment size   512 B    512 B
Block size      4 KB     4 KB

Creating a FS: crfs -v jfs2 -g testvg -a size=10M -m /fs1

Display mounted FS: mount

Display characteristics of a FS: lsfs

Initialize a log device: logform /dev/loglv01

Display information about inodes: istat /etc/passwd

Monitoring and Performance Tuning:

The quotaon command enables disk quotas for one or more file systems.

The quotaoff command disables disk quotas for one or more file systems.

Enable user quotas on /home: chfs -a "quota=userquota,groupquota" /home

To check the consistency of the quota files use quotacheck.

The edquota command creates each user's or group's soft and hard limits for allowable disk space and maximum number of files.

Error logging is automatically started by the rc.boot script

The errstop command stops error logging.

The daemon for the error log is errdemon.

The path to your system's error log file: /usr/lib/errdemon -l

Change the maximum size of the error log: errdemon -s 2000000

Display all the errors that have a specific error id: errpt -j 8527F6F4

Display all the errors logged in a specific time window: errpt -s 1122164405 -e 1123100405
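The errpt -s/-e timestamps use an mmddHHMMyy layout; the example 1122164405 reads as November 22, 16:44, year 05. date can produce "now" in the same layout:

```shell
# Build an errpt-style timestamp (mmddHHMMyy) for the current moment
ts=$(date +%m%d%H%M%y)
echo "$ts"        # 10 digits: month, day, hour, minute, 2-digit year

# Hypothetical usage: show errors logged since that moment
# errpt -s "$ts"
```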

To delete all the entries: errclear 0

Delete all the entries classified as software errors: errclear -d S 0

VMSTAT: It reports kernel threads, virtual memory, disks, traps and cpu activity.

To display 5 summaries at 1 second intervals use vmstat 1 5

kthr (kernel thread states): r → average number of runnable kernel threads; b → average number of kernel threads placed in the VMM wait queue

Memory (usage of virtual and real memory): avm → active virtual pages, the total number of pages allocated in paging space (a high value is not an indicator of poor performance); fre → size of the free list (a large portion of real memory is used as a cache for file system data)

Page (information about page faults and paging activity): re → pager input/output list; pi → pages paged in from paging space; po → pages paged out to paging space; fr → pages freed; sr → pages scanned by the page-replacement algorithm; cy → clock cycles used by the page-replacement algorithm

Faults (trap and interrupt rate averages per second): in → device interrupts; sy → system calls; cs → kernel thread context switches

CPU (breakdown of percentage usage of CPU time): us → user time; sy → system time; id → CPU idle time; wa → waiting for I/O requests; pc → number of physical processors consumed; ec → percentage of entitled capacity consumed

Disks (number of transfers per second)
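The columns above lend themselves to awk post-processing; this sketch averages the id (CPU idle) column over a vmstat-style sample. The two data lines are illustrative numbers, not real measurements, and id is the 16th field in this layout:

```shell
# Two header lines, then data lines; id is the 16th field here
sample=' kthr    memory              page              faults        cpu
 r b   avm   fre re pi po fr sr cy in  sy  cs  us sy id wa
 1 0 200000 5000  0  0  0  0  0  0 120 300 150 10  5 80  5
 2 0 200100 4900  0  0  0  0  0  0 130 310 160 12  6 78  4'

echo "$sample" | awk 'NR > 2 { sum += $16; n++ } END { print sum/n }'
# 79
```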

SAR: sar 2 5(%usr, %sys, %wio, %idle, physc)

To report activity for the first 2 processors for each second for the next 5 intervals: sar -u -P 0,1 1 5

topas: a full-screen, real-time monitor of local system statistics (CPU, memory, paging, disk, network and processes).

Tuning Parameters:

/etc/tunables directory centralizes the tunable files.

nextboot: this file is automatically applied at boot time.

lastboot: contains the tunable parameters with their values after the last boot.

lastboot.log: contains a log of the creation of the lastboot file.

NIM installation procedure

Posted: May 30, 2011 in Uncategorized

TABLE OF CONTENTS

1 Introduction

2 Contacts

3 High Level Overview

4 Helpful NIM Commands

5 NIM Install and Configuration

5.1 Introduction to Network Installation Management

5.2 Create the NIM Master

5.2.1 Build NIM Filesystem

5.2.2 Install NIM Master

5.2.3 Build the LPP Source

5.2.4 Update the SPOT

5.3 Configure NIM Master

5.3.1 Configure Master Network Resource

5.3.2 Configure LPP Resources

5.3.3 Configure SPOT Resources

5.3.4 Create mksysb NIM Resources

5.4 Install NIM Clients

5.4.1 Adding the Client to the NIM Master

5.4.1.1 Add the Client Server to NIM Master

5.4.1.2 Preparing the NIM Client for a bos_install

5.4.2 Configure the Client Servers

5.4.2.1 Booting into SMS Mode

5.4.2.2 Configuring Remote IPL

5.4.2.3 Adapter Configuration and Test

5.4.2.4 Booting the Server from the NIM Master

6 Server Backups and Restores

6.1 Structure of the NIM Master

6.2 Preparation for NIM Backups and Restores

6.3 Backup a NIM Client

6.4 Restore a NIM Client

6.5 Reconfiguration on the Client after the mksysb Backup Is Installed

1 Introduction

The intent of this document is to provide detailed steps for AIX install, backup, restore and migration using the Network Installation Management (NIM) tool for LPARs in the NHSS environment. It also includes guidance for setting up a NIM environment.

Assumptions used in this document:

→ There is already a backup strategy in place and mksysb images are being saved on the NIM Master in the /export/nim/mksysb NFS directory.

→ The P51A server has been designated as the NIM Master.


3. High Level overview:

The following is a fundamental list of activities to be performed in order to create the NIM environment and perform a NIM install on the client partition.

• Planning the NIM configuration

  - Plan the NIM Master network config (what network is being used)

  - Plan the NIM Master and Client NIM names

  - Plan the NIM Master directory structure

  - Plan the NIM Client network config

  - Plan the NIM Master resource names (lppsource, spot…)

  - Plan the NIM Master mksysb resource

• Implement the NIM Master

  - Install the bos.sysmgt filesets

  - Create the required filesystems

  - Use smitty nim to configure the NIM environment

  - Use smitty nim_mkres to build the lpp, spot, bosinst and mksysb resources

• Create the NIM clients

  - smitty nim_mkmac

• Allocate the NIM resources to the client in preparation for the mksysb restore

  - smitty nim_bosinst

• Boot the client into SMS and select boot from Ethernet (add the IPs as required)

• The client boots and performs the mksysb restore

  - Once finished, the NIM resources are deallocated

  - Additional AIX configuration such as EtherChannel may be required


4 Helpful NIM commands:

• smitty nim

  - smitty nim_mkmac

  - smitty nim_mknet

  - smitty nim_bosinst

• lsnim

• lsnim -l

• lsnim -a spot (who has the spot allocated)

• nim -Fo reset

• nim -o deallocate -a subclass=all

• nim -o check

• nim -o check spot52

• nim -o lslpp

• nim -o showres

• nim -o fix_query

• /usr/lib/instl/lppmgr -d / -u -b -r

  - -d = lppsource directory

  - -u = remove duplicate updates

  - -b = remove duplicate base levels

  - -k = remove languages

  - -x = remove supersedes

  - -r = remove files

  - -m = move files

  - -l = list files

  - -V = verbose


5. NIM Install and Configuration

There are different ways to install AIX. Given the physical configuration of a managed system, use of the Network Installation Management (NIM) environment to install AIX is recommended.

5.1. Introduction to Network Installation Management

This section provides an introduction to the NIM environment and the operations you can perform to manage the installation of the AIX Base Operating System (BOS) and optional software on one or more machines. NIM gives you the ability to install and maintain not only the AIX operating system, but also any additional software and fixes that may be applied over time. NIM also allows you to customize the configuration of machines both during and after installation. NIM eliminates the need for access to physical media, such as tapes and CD-ROMs, because the media is a NIM resource on the NIM master server. System backups can be created with NIM, and stored on any server in the NIM environment, including the NIM master. Use NIM to restore a system backup to the same server or to another server. Before you begin configuring the NIM environment, you should already have the following:

  • NFS and TCP/IP installed
  • Name resolution configured
  • TCP/IP configured correctly

For any installation procedure, you need a software source to install from, such as the AIX 5.2 product CDs (in the NHSS environment). The AIX 5.2 product CDs contain the boot images used to boot the system from the CD-ROM, the installation images, and the installation commands used to install those images.

In the NIM environment, the software source is separated into two NIM resources, the LPP_Source and the SPOT. The LPP_Source is a directory on your NIM server. When the LPP_Source is created, installation images are copied from the product CDs to the LPP_Source directory. The product CDs also contain boot images that enable the system to boot from the CD-ROM, and installation commands that are used to install the installation images. The equivalent NIM resource is called the SPOT (Shared Product Object Tree). The SPOT is a directory that contains the installation commands used to install the installation images from the LPP_Source onto a system. The SPOT is also used to build the boot images needed to boot a client system; separate boot images exist for each type of adapter (Ethernet, token ring, and so on). The illustration above shows that when an LPP_Source resource is created, installation images are copied from the product CDs to the LPP_Source directory, and that a SPOT resource contains the installation commands used to install those images from the LPP_Source resource onto a system.

When using the nim_master_setup script to install a NIM master on a system, it creates an LPP_Source and SPOT resource for you and defines the resources in the NIM environment. The nim_master_setup script will also copy the AIX update images from your update CD to the LPP_Source and then install the update images into the SPOT resource. In addition to the LPP_Source and SPOT resources, several NIM resources can help customize the BOS installation process and basic network configuration. The following table shows all the NIM resources that are created with the nim_master_setup script:

Table 5. NIM resources created by the nim_master_setup script

NIM Resource   Name Given       Description
spot*          spot_52          Commands used during installation. The network boot images are built from the SPOT.
lpp_source     lppsource_52     Directory containing installation images.
mksysb         hostname.mksysb  System backup image.
bosinst_data   bosinst_ow       Answers the questions asked during the BOS installation, which allows a non-prompted new and complete overwrite installation.
resolv_conf    resolv_res       Provides the domain name and name server information.
res_group      basic_res_grp    Used by the nim_clients_setup script to allocate the bosinst_data, mksysb, lpp_source, spot and resolv_conf resources to install the client partitions.

* Required resource for installation

Besides the lpp_source and spot resources, which represent the BOS, a system backup image called a mksysb is usually created. A mksysb is a generic system backup that includes the BOS plus customer software. As a standard AIX BOS image, a mksysb can be used to clone a customized AIX to servers that have no AIX installed. Also, as a dedicated server backup on the NIM master, in the event a client server has to be recovered, such as after a catastrophic hardware or software problem, the mksysb can be "pushed" down to the server to restore its operation. Scripts are available that create the resources necessary to perform a mksysb installation.

Each LPAR on the P5 server will then be defined in the NIM environment as a standalone system resource, also referred to as a NIM client. Use smitty nim_mkmac to add the servers as NIM clients. A nim_clients_setup script is also available to define NIM clients and initiate an AIX installation on them.

5.2. Create the NIM master

This section describes how to install the NIM master from scratch. It first builds a separate volume group called nimvg if an additional hard disk is available, then runs the nim_master_setup script to install NIM from the CDs. If NIM has been installed through smitty, the NIM master can also be configured through smitty nim, which is described in the next section.

5.2.1 Build NIM Filesystem

It is optional but recommended to build the NIM file systems in a separate nimvg volume group if an additional hard disk is available. If hard disks are limited, the NIM file systems can be established on rootvg.

5.2.2 Install NIM Master

To install the NIM Master, do the following:

  • Ensure that AIX 5.3 CD 1 of 8 is available to install AIX
  • The NIM server is on AIX 5.3 ML05 and all LPARs are on AIX 5.2 ML09
  • Issue the following commands to configure the NIM environment:

  - smitty nim

  - choose the NIM easy-create option

  - select the network name, lppsource name and spot name

  - select the filesystem size and the volume group to create them in (in our case nimvg)

  - Note: the estimated run time is 40 minutes.

  • After the above command has completed, verify that the following directories were created:

  - /export/nim

  - /export/nim/spot/spot_52

  - /export/nim/lpp_source/lppsource_52

  - /tftpboot

  • Manually create the mksysb and backup directory using the following commands:

  - mkdir /export/nim/mksysb

  - mklv

  - crfs

  - mount

5.2.3 Build the LPP Source

Note: the lppsource and SPOT were created during the NIM easy-create process. However, the lppsource can also be created manually as described below.

Copy the filesets from the AIX 5.2 CDs to /media/AIX, which is used as a media server:

  - Insert the AIX 5.2 CD 1 of 8 into the CD-ROM drive.

  - Enter the following command at the command line: smitty bffcreate

  - The Copy Software to Hard Disk for Future Installation screen appears. Do the following:

  - Enter /dev/cd0 in the INPUT device / directory for software field, then press Enter.

  - In the DIRECTORY for storing software package field, enter /media/AIX/AIX_52.

  - Update the TOC: inutoc

5.3 Configure NIM master

This section describes how to configure the NIM master when the NIM fileset (and lpp source) has been installed onto the server. There are two lpp sources – AIX 5.3 and AIX 5.2 – installed on the NIM master.

5.3.1 Configure Master Network Resource

To create the master network resource, do the following:

  - smitty nim_mknet

  - The Network Type screen appears. Select ent, then press Enter.

  - The Define a Network screen appears. Enter the appropriate information as follows.

Define a Network

Type or select values in entry fields.

Press Enter AFTER making all desired changes.

[Entry Fields]

* Network Name [master]

Network Type ent

Ethernet Type Standard

Network IP Address []

Subnetmask []

Other Network Type

Comments [master network]

F1=Help F2=Refresh F3=Cancel F4=List

Esc+5=Reset F6=Command F7=Edit F8=Image

F9=Shell F10=Exit Enter=Do

5.3.2 Configure LPP Resources

To create an lpp_source, do the following:

  - smitty nim_mkres

  - The Resource Type screen appears. Select the lpp_source type, then press Enter.

  - The Define a Resource screen appears. Do the following:

  - In the Resource Name field, enter lppsource_52.

  - In the Server of Resource field, press F4, then select master from the list.

  - In the Location of Resource field, enter /media/AIX/AIX52.

  - In the Comment field, enter a comment such as "This is the latest AIX 5.2 OS filesets".

  - Press Enter to add the resource.

Define a Resource

Type or select values in entry fields.

Press Enter AFTER making all desired changes.

[Entry Fields]

* Resource Name [lppsource_52]

* Resource Type lpp_source

* Server of Resource [master]

* Location of Resource [/export/software+]

Source of Install Images []

Names of Option Packages []

Comments [This is the lates+]

F1=Help F2=Refresh F3=Cancel F4=List

Esc+5=Reset F6=Command F7=Edit F8=Image

F9=Shell F10=Exit Enter=Do

  - Once the command has completed, press F10 to exit to the command prompt.


5.3.3 Configure SPOT Resources

To create a spot resource, do the following:

  - smitty nim_mkres

  - The Resource Type screen appears. Select the spot type, then press Enter.

  - The Define a Resource screen appears. Do the following:

  - In the Resource Name field, enter spot52.

  - In the Server of Resource field, press F4, then select master from the list.

  - In the Source of Install Images field, press F4, then select lppsource_52 from the list.

  - In the Location of Resource field, enter /export/nim/spot.

  - In the Comment field, enter a comment such as "This is the spot for AIX 5.2 ML7".

  - Press Enter to add the resource.

Define a Resource

Type or select values in entry fields.

Press Enter AFTER making all desired changes.

[Entry Fields]

* Resource Name [spot52]

* Resource Type spot

* Server of Resource [master]

* Source of Install Images []

* Location of Resource [/export/NIM/spot]

Expand file systems if space needed? yes

Comments [This is the spot for+]

installp Flags

COMMIT software updates? no

SAVE replaced files? yes

AUTOMATICALLY install requisite software? yes

OVERWRITE same or newer versions? no

VERIFY install and check file sizes? no

F1=Help F2=Refresh F3=Cancel F4=List

Esc+5=Reset F6=Command F7=Edit F8=Image

F9=Shell F10=Exit Enter=Do

  - It may take 10-15 minutes to create the spot resource from the lpp_source. Once the command has completed without error, press F10 to exit to the command prompt.

5.3.4 Create mksysb NIM Resources

You must create at least one mksysb resource as your system backup image. The actual mksysb image can be created from each LPAR weekly as a backup AIX 5.x image for each LPAR, saved in the /export/nim/mksysb/hostname directory. It can also be copied to a CD as a separate backup, especially for the NIM master. To create a mksysb resource, do the following:

  - Assume the mksysb is a file in /export/nim/mksysb/hostname/hostname.mksysb

  - Define the new mksysb resource in NIM by entering the following command from the command line:

  - smitty nim_mkres

  - The Resource Type screen appears. Select the mksysb resource type, then press Enter.

  - The Define a Resource screen appears. Do the following:

  - In the Resource Name field, enter hostname.mksysb.

  - In the Server of Resource field, press F4, then select master from the list.

  - In the Location of Resource field, enter /export/nim/mksysb/hostname.

  - In the Comment field, enter a comment as required.

  - Press Enter to add the resource.

Define a Resource

Type or select values in entry fields.

Press Enter AFTER making all desired changes.

[TOP] [Entry Fields]

* Resource Name [52mksysb]

* Resource Type mksysb

* Server of Resource [master]

* Location of Resource [/export/nim/mksysb+]

Comments []

Source for Replication []

-OR-

System Backup Image Creation Options:

CREATE system backup image? no

NIM CLIENT to backup []

PREVIEW only? no

IGNORE space requirements? no

[MORE…9]

F1=Help F2=Refresh F3=Cancel F4=List

Esc+5=Reset F6=Command F7=Edit F8=Image

F9=Shell F10=Exit Enter=Do

  - Once the command has completed, press F10 to exit to the command prompt.

5.4 Install NIM Clients

Now you can start to configure and install AIX on each LPAR (NIM client). The following sections describe the installation procedure for one NIM client; repeat it until all servers are installed. There are two stages to configuring and installing a new client server from the NIM master: the first involves adding the client on the NIM master; the second involves configuring the client server for a NIM action and then initiating a NIM boot over the network. This automatically installs the new server using the client template definition on the NIM master.

5.4.1 Adding the Client to the NIM Master

5.4.1.1 Add the Client Server to NIM master

The next step adds the client server to the NIM server. Run the following command:

smitty nim_mkmac

Enter the host name of the NIM client and press Enter, then select ent as the attached network. Fill in the fields of the next screen using the following values and press Enter when complete:

Define a Machine

Type or select values in entry fields.

Press Enter AFTER making all desired changes.

[Entry Fields]

* NIM Machine Name [catom-mddbpca01]

* Machine Type [standalone]

* Hardware Platform Type [chrp]

Kernel to use for Network Boot [mp]

Primary Network Install Interface

* Cable Type tp

* Network Speed Setting [auto]

* Network Duplex Setting [auto]

* NIM Network master_net

* Host Name catos-nimpa00

Network Adapter Hardware Address [0]

Network Adapter Logical Device Name []

IPL ROM Emulation Device []

CPU Id []

Machine Group []

Comments []

F1=Help F2=Refresh F3=Cancel F4=List

Esc+5=Reset F6=Command F7=Edit F8=Image

F9=Shell F10=Exit Enter=Do

To verify the new client was added correctly, run the following command:

  - lsnim -l

5.4.1.2 Preparing the NIM Client for a bos_install

At this point, the newly created NIM client must have a bosinst_data resource allocated to it in order to make it serviceable.

To allocate the bosinst.data, run the following command:

§ smitty nim_bosinst

§ Select the previously defined client name, and press Enter.

§ Choose “mksysb” for the installation type, and press Enter.

§ Select the desired mksysb resource (52mksysb for instance), and press Enter.

Select the MKSYSB to use for the installation

Move cursor to desired item and press Enter.

  52mksysb      resources       mksysb

F1=Help       F2=Refresh    F3=Cancel     F4=List
Esc+5=Reset   F6=Command    F7=Edit       F8=Image      F9=Shell

§ Select the previously defined SPOT resource, and press Enter.

Select the SPOT to use for the installation

Move cursor to desired item and press Enter.

  spot52        resources       spot

F1=Help       F2=Refresh    F3=Cancel     F4=List
Esc+5=Reset   F6=Command    F7=Edit       F8=Image      F9=Shell

§ Complete the next screen by filling in the following fields, and pressing Enter twice to confirm when complete.

Install the Base Operating System on Standalone Clients

[TOP]                                           [Entry Fields]
* Installation Target                       catos-mddbbca02
* Installation TYPE                         mksysb
* SPOT                                      spot52
* LPP_SOURCE                                52_lppres
  MKSYSB                                    52mksysb

  BOSINST_DATA to use during installation  [52bid_64bit_2dr]
  IMAGE_DATA to use during installation    []
  RESOLV_CONF to use for network           [resolv_conf]
      configuration
  Customization SCRIPT to run after        []
      installation
  Remain NIM client after install?         [yes]
  PRESERVE NIM definitions for resources   [yes]
      on target
  FORCE PUSH the installation              [no]
  Initiate reboot and reboot now?          [no]
  -OR-
  Set bootlist for installation at the     [no]
      next reboot?

  Additional BUNDLES to install            []
  -OR-
  Additional FILESETS to install           []
      (bundles will be ignored)

  installp Flags
  COMMIT software updates?                 [yes]
  SAVE replaced files?                     [no]
  AUTOMATICALLY install requisite software? [yes]
  EXTEND filesystems if space needed?      [yes]
  OVERWRITE same or newer versions?        [no]
  VERIFY install and check file sizes?     [no]

  Group controls (only valid for group targets):
  Number of concurrent operations          []
  Time limit (hours)                       []

  Schedule a Job                           [no]
  YEAR                                     []
  MONTH                                    []
  DAY (1-31)                               []
  HOUR (0-23)                              []
  MINUTES (0-59)                           []
[BOTTOM]

F1=Help       F2=Refresh    F3=Cancel     F4=List

§ Press the F10 key to exit this screen and return to the command line.

To verify the completed NIM client profile, run the following command:

lsnim -l catos-mddbbca02
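The smitty nim_bosinst panel above can also be sketched as a single nim command. The resource names below are the ones defined earlier in this document; the attribute list mirrors the SMIT fields shown above:

```shell
# Allocate resources and enable the mksysb install (sketch of the
# smitty nim_bosinst panel; resource names are from this document)
nim -o bos_inst \
    -a source=mksysb \
    -a mksysb=52mksysb \
    -a spot=spot52 \
    -a lpp_source=52_lppres \
    -a bosinst_data=52bid_64bit_2dr \
    -a resolv_conf=resolv_conf \
    -a accept_licenses=yes \
    -a boot_client=no \
    catos-mddbbca02
```

With boot_client=no, the master only stages the install; the client is then netbooted manually from SMS as described in the next section.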

5.4.2 Configure the Client Servers

Now that the NIM master has been updated with the client information, the client server needs to be configured for network boot through the HMC console.

5.4.2.1 Booting into SMS Mode.

This procedure assumes that the target server is currently “down” (not activated).

o Open the HMC console.

§ Using WebSM, select the target LPAR in the Navigation Area (left pane) and right-click it.

1. Click Activate. The Activate Partition menu opens with a selection of partition profiles.

2. Select Open a terminal window to open a virtual terminal (vterm) window.

3. Click (Advanced…) to open the advanced options menu.

4. For the boot mode, select SMS.

5. Click OK to close the advanced options menu.

6. Click OK. After a few moments, the terminal window will open and the system will boot to the SMS prompt.


5.4.2.2 Configuring Remote IPL

In the SMS menu, choose “Setup Remote IPL (Initial Program Load)” from the main menu.

o Choose Port 1 and press Enter, then select “1. IP Parameters” and press Enter.

o Fill in each field using the IP addresses of both the NIM server and the NIM client from the Server Install Worksheets.

Note: in this environment the “Gateway IP Address” field is set to the IP address of the NIM master.

5.4.2.3 Adapter Configuration and Test

o Return to the previous screen and choose option “2. Adapter Configuration”.

o Set the network speed and duplex to auto.

o Press Esc twice and select option “3. Ping Test”.

o Select “1. Execute Ping Test” on the next screen. The NIM client will now attempt to ping the NIM server.

After a wait of up to 60 seconds, a success message should appear. If it does not, review and correct the network adapter configuration until the ping succeeds.


5.4.2.4 Booting the Server from the NIM Master

After the successful ping, press any key and then select “M” to return to the SMS main menu. From this screen, select “1. Select Install/Boot Devices” and press Enter.

From the next screen, choose “7. List all Devices” and press Enter.

o After the buses are scanned, a list of boot devices will be presented. Select “1. Ethernet” and press Enter.

o Choose “2. Normal Mode Boot” and press Enter.

o Finally, choose “1. Yes” to exit the SMS menu and initiate the boot process.

After the “STARTING SOFTWARE PLEASE WAIT…” message, the new client will start booting and install the new image automatically. Once the “Welcome to AIX” message appears, the client has successfully started the boot process. Approximately 30 minutes later, the new client will be installed and ready for login.

Repeat the procedure from section 5.4.1 for each remaining NIM client.


6 Server Backups and Restores

It is highly recommended to make complete system backups after implementing AIX on all p570 LPARs. This section is intended to highlight a few key points about backups and restores on the AIX platform.

6.1 Structure of the NIM Master

OS backups (mksysb) usually occur through regularly scheduled jobs on each of the servers. These servers NFS-mount the /export/nim/mksysb directory from the NIM master over the network and perform a mksysb to a file in the NFS directory. Once the mksysb completes, the NFS directory is unmounted, leaving the completed mksysb file on the NIM master.

In the event a server has to be recovered, such as after a catastrophic hardware or software failure, this mksysb can be “pushed” down to the server to restore it to operation.

The /export/nim directory holds the required NIM master resources, such as the mksysb images, the SPOT, and the LPP_SOURCE.

6.2 Preparation for NIM Backups and Restores

Some key components required for Backups and Restores are listed below.

  • The /etc/hosts file on all servers must contain the hostnames and IP addresses of all servers, and it must be accurate.
  • The network must be fully functional.
  • The /export/nim/mksysb directory is NFS-exported read/write from the NIM master to all clients.
  • The client must have a mksysb mount point directory created.
  • Scripts should be created to automate these steps.
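Since an accurate /etc/hosts file is a hard requirement, a small sanity check is worth scripting. The sketch below checks that each server hostname appears exactly once in an /etc/hosts-style file; the sample data stands in for the real file, the first three hostnames come from this document, and catos-mddbbca03 is a deliberately missing hypothetical host to show the warning path:

```shell
# Sketch: verify each server hostname appears exactly once in an
# /etc/hosts-style file. Sample data replaces the real /etc/hosts;
# catos-mddbbca03 is a made-up host used to demonstrate a failure.
cat > /tmp/hosts.sample <<'EOF'
10.1.1.10  catos-nimpa00      # NIM master
10.1.1.11  catos-mddbbca01    # NIM client
10.1.1.12  catos-mddbbca02    # NIM client
EOF

for h in catos-nimpa00 catos-mddbbca01 catos-mddbbca02 catos-mddbbca03; do
    # Count non-comment lines whose hostname fields match exactly
    n=$(awk -v h="$h" '$0 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == h) c++ } END { print c + 0 }' /tmp/hosts.sample)
    if [ "$n" -eq 1 ]; then
        echo "OK: $h"
    else
        echo "WARNING: $h has $n entries"
    fi
done
```

Pointing the same loop at the real /etc/hosts on each server before a scheduled mksysb run catches stale or duplicate entries early.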

6.3 Backup a NIM Client

The backup process begins on the NIM Client (for instance – catos-mddbbca01).

o Step 1 – the client NFS-mounts the /export/nim/mksysb directory from the NIM master.

o Step 2 – the client issues the mksysb command and places the resulting file in the NFS-mounted mksysb directory. This mksysb file contains the operating system (rootvg) along with any customer data stored in it.

o Step 3 – the client unmounts the NFS directory. The mksysb backup now resides on the NIM master.

o Step 4 – Backup software such as TSM can then be used to manage the backup of the mksysb images now residing on the NIM master. TSM is also used to back up all application data; this is beyond the scope of this document.
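Steps 1 to 3 above can be sketched as a short client-side script. The NFS path and master hostname come from this document; the local mount point is an assumed example, and error handling is deliberately minimal:

```shell
#!/bin/sh
# Sketch of the client-side mksysb backup (steps 1-3 above).
# /mnt/mksysb is an assumed local mount point that must already exist.
NIMMASTER=catos-nimpa00
MNT=/mnt/mksysb

mount "$NIMMASTER:/export/nim/mksysb" "$MNT"        # step 1: NFS mount
mksysb -i "$MNT/$(hostname)/mksysb.$(hostname)"     # step 2: back up rootvg
umount "$MNT"                                       # step 3: unmount
```

A script along these lines is what the scheduled backup jobs mentioned in 6.1 would run on each client.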

6.4 Restore a NIM Client

The restore process involves the use of NIM to restore a previously saved mksysb image of a server in the event of a catastrophic failure.

The restore process begins on the NIM master but then shifts to the NIM Client Server at step 3 (perform a netboot).

o Step 1 – The desired mksysb image mksysb.hostname to restore is selected in the /export/nim/mksysb/hostname directory.

o Step 2 – Remove and recreate the mksysb resource on the NIM master. Use smitty nim_mkres

o Step 3 – the NIM master is configured to allocate the desired mksysb image to the desired client and set the client to netboot. Use “smitty nim_bosinst”.

o Step 4 – the client is netbooted. During this process, it follows the instructions setup on the NIM master and performs a non-prompted OS install.

o Step 5 – at the end of the netboot, the NIM resources are automatically deallocated on the NIM Master and the server is rebooted to the login screen.
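Steps 2 and 3 on the NIM master can be sketched from the command line as well. The resource and host names below are the ones used in this document; the mksysb location follows the /export/nim/mksysb/hostname layout described in step 1:

```shell
# Sketch of restore steps 2-3 on the NIM master: recreate the mksysb
# resource, then stage the non-prompted install for the client.
nim -o remove 52mksysb
nim -o define -t mksysb -a server=master \
    -a location=/export/nim/mksysb/catos-mddbbca01/mksysb.catos-mddbbca01 \
    52mksysb
nim -o bos_inst -a source=mksysb -a mksysb=52mksysb -a spot=spot52 \
    -a accept_licenses=yes -a boot_client=no \
    catos-mddbbca01
```

After this, the client is netbooted from SMS (step 4) exactly as in section 5.4.2.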

6.5 Reconfiguration on the client after mksysb backup is installed

To restore the original environment in which the mksysb was built, perform the following steps:

o Step 1 – Check the hostname.

o Step 2 – Check all Ethernet adapters and labels.

o Step 3 – Check all EtherChannels.

o Step 4 – Check the HACMP filesets and start HACMP.

o Step 5 – Check that all SAN disks are attached and configured.

o Step 6 – Start GPFS.

o Step 7 – Mount the filesystems.

o Step 8 – Run smitty aio, select “Change/Show Asynchronous I/O”, and set “STATE to be configured at system restart” to “available”. This is required by the Oracle application.

o Step 9 – Have the application team verify that Oracle RAC is working.
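The checks above can be sketched as a sequence of standard AIX, HACMP, and GPFS commands; all output must still be reviewed by hand, and the aio0 device name in the last line is the usual default but should be confirmed on the system:

```shell
# Sketch of post-restore checks (steps 1-8 above); review output manually.
hostname                                # step 1: hostname
ifconfig -a                             # step 2: Ethernet adapters and labels
lsdev -Cc adapter                       # step 3: adapters incl. EtherChannels
lslpp -l "cluster.*"                    # step 4: HACMP filesets present
lsdev -Cc disk                          # step 5: SAN disks attached
mmstartup -a                            # step 6: start GPFS on all nodes
mount -a                                # step 7: mount filesystems
chdev -l aio0 -a autoconfig=available   # step 8: AIO available at restart
```

The chdev line is the command-line equivalent of the smitty aio change described in step 8.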