
Monday, March 4, 2013

Persistent Binding for HBA Card

Persistent binding: as the name implies, persistent binding ensures that LUNs zoned from a storage frame are always assigned a specific target ID. Now the next question that may pop up in your mind is, "Why do we need that?"

I don't have a single strong answer for it. After some googling I found a few explanations, none of them entirely convincing.

Below are the points which I feel could reasonably answer the question.


1. In a cluster environment, where the same LUN is assigned to multiple machines, each machine should be configured with a persistent target ID.


2. It is a recommended step for Emulex-branded HBA cards, whose driver software is provided by Emulex, and the same applies to QLogic-branded cards.


Here I will cover only Emulex and QLogic cards, as these are the most commonly used cards on Solaris boxes.


NOTE: You do not need to configure persistent binding for Sun-branded Emulex and QLogic cards.
So HBA cards are categorized as Sun-branded and non-Sun-branded.

Below are a few tips to identify Sun-branded and non-Sun-branded QLogic and Emulex cards.

Emulex Card:

LPxxxxxx-S are Sun Emulex HBA cards (i.e. they have a "-S" for Sun at the end).

On Solaris 10, fcinfo hba-port provides the HBA details; on earlier versions of Solaris the prtpicl command can be used.

If you see any of the following references they are NON-SUN HBA Cards:

NON-SUN Qlogic:
  • qlaxxxx
  • QLGC,qla
  • QLGC,qlc
  • SUNW,qla

NON-SUN Emulex:
  • emlx (emlxs is SUN)
  • lpfc

    Click on the below link for the detailed explanation.

    Identify HBA card

So we have some idea of HBA card identification; now let's go ahead and configure persistent binding for an Emulex card. As I have previously mentioned, persistent binding is needed only for non-Sun-branded Emulex cards.

To configure persistent binding, two files need to be modified:

    /kernel/drv/lpfc.conf
    /kernel/drv/sd.conf

lpfc.conf - kernel configuration file for the lpfc driver module
sd.conf - kernel configuration file for the sd driver module

lpfc.conf is where you configure the persistent binding; sd.conf is where you add enough entries to allow the OS to scan for more devices.

For persistent binding you need to find the HBA card instance; you can easily determine this using the lputil command.

    Here is an example

    bash-3.00# /usr/sbin/lpfc/lputil
    LightPulse Common Utility for Solaris/SPARC. Version 2.0a13 (1/3/2006).
    Copyright (c) 2005, Emulex Corporation

    Emulex Fibre Channel Host Adapters Detected: 3
    Host Adapter 0 (lpfc3) is an LP11002-E (Ready Mode)
    Host Adapter 1 (lpfc4) is an LP11002-E (Ready Mode)
    Host Adapter 2 (lpfc2) is an LP9802 (Ready Mode)


As you can see above, there are 3 cards. LP11002-E looks like a dual-port card, which can be confirmed by looking at the /etc/path_to_inst file. Here is an extract from the file:


    "/pci@1c,600000/lpfc@1,1" 1 "emlxs"
    "/pci@1c,600000/lpfc@1,1" 1 "lpfc"
    "/pci@1c,600000/lpfc@1,1/fp@0,0" 1 "fp"
    "/pci@1c,600000/fibre-channel@1" 3 "lpfc"
    "/pci@1c,600000/fibre-channel@1,1" 4 "lpfc"
    "/pci@1d,700000/lpfc@2" 2 "emlxs"
    "/pci@1d,700000/lpfc@2" 2 "lpfc"
    "/pci@1d,700000/lpfc@2/fp@0,0" 2 "fp"

From the above:

"/pci@1c,600000/fibre-channel@1" 3 "lpfc"
"/pci@1c,600000/fibre-channel@1,1" 4 "lpfc"


See how the physical path of the lpfc entries is the same for instances 3 and 4: it is a single dual-port HBA card, and lpfc3 and lpfc4 are the instances of its two ports.
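The binding itself goes into /kernel/drv/lpfc.conf. Below is only a sketch of what such entries look like for the Emulex lpfc driver: fcp-bind-method and fcp-bind-WWPN are the parameter names documented in Emulex's lpfc.conf, but the WWPNs shown are placeholders, and the lpfc3t0/lpfc4t0 suffixes assume you are binding target ID 0 on instances 3 and 4. Verify the exact syntax against the comments shipped in your driver's own lpfc.conf.

```
# Bind by WWPN (fcp-bind-method=2 selects WWPN binding).
fcp-bind-method=2;

# Format: "<target-port-WWPN>:lpfc<instance>t<target-id>"
# The WWPNs below are placeholders - use the real values from your fabric.
fcp-bind-WWPN="50060e8000000001:lpfc3t0",
              "50060e8000000002:lpfc4t0";
```

The target ID chosen here is the same one that must appear in the matching sd.conf entries.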

    Sample sd.conf file:

    name="sd" parent="lpfc" target=0 lun=0 hba="lpfc0";
    name="sd" parent="lpfc" target=0 lun=1 hba="lpfc0";
    name="sd" parent="lpfc" target=0 lun=2 hba="lpfc0";
    name="sd" parent="lpfc" target=0 lun=3 hba="lpfc0";
    name="sd" parent="lpfc" target=0 lun=4 hba="lpfc0";
    name="sd" parent="lpfc" target=0 lun=5 hba="lpfc0";
    name="sd" parent="lpfc" target=0 lun=6 hba="lpfc0";
    name="sd" parent="lpfc" target=0 lun=7 hba="lpfc0";

    name="sd" parent="lpfc" target=1 lun=0 hba="lpfc1";
    name="sd" parent="lpfc" target=1 lun=1 hba="lpfc1";
    name="sd" parent="lpfc" target=1 lun=2 hba="lpfc1";
    name="sd" parent="lpfc" target=1 lun=3 hba="lpfc1";
    name="sd" parent="lpfc" target=1 lun=4 hba="lpfc1";
    name="sd" parent="lpfc" target=1 lun=5 hba="lpfc1";
    name="sd" parent="lpfc" target=1 lun=6 hba="lpfc1";
    name="sd" parent="lpfc" target=1 lun=7 hba="lpfc1";





Now in the above output, assigning the lun number is the tricky part. You need to work with the storage team for it: get the hex code for each LUN and convert it to decimal, which is the value that must be specified as the lun number.
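The conversion itself is simple; for example, if the storage team reports a LUN as hex 1A, the decimal value for sd.conf can be obtained with printf:

```shell
# Convert a hexadecimal LUN number (as reported by the storage team)
# to the decimal value that goes into sd.conf as "lun=".
printf "%d\n" 0x1A
# -> 26
```

The reverse direction is printf "%x\n" 26, if you ever need to check a decimal value against the storage team's hex list.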

    Another example of /etc/path_to_inst

    > grep -i lpfc /etc/path_to_inst
    "/pci@1e,600000/pci@0/pci@1/pci@0/pci@8/fibre-channel@1" 0 "lpfc"
    "/pci@1e,600000/pci@0/pci@1/pci@0/pci@8/fibre-channel@1,1" 1 "lpfc"
    "/pci@1e,600000/pci@0/pci@8/fibre-channel@0" 4 "lpfc"
    "/pci@1e,600000/pci@0/pci@8/fibre-channel@0,1" 5 "lpfc"
    "/pci@1f,700000/pci@0/pci@2/pci@0/pci@8/fibre-channel@2" 2 "lpfc"
    "/pci@1f,700000/pci@0/pci@2/pci@0/pci@8/fibre-channel@2,1" 3 "lpfc"
    "/pci@1f,700000/pci@0/pci@9/fibre-channel@0" 6 "lpfc"
    "/pci@1f,700000/pci@0/pci@9/fibre-channel@0,1" 7 "lpfc"

From the above you can work out that there are 4 Emulex dual-port HBA cards (8 lpfc instances, two per card).









Wednesday, September 19, 2012

Replacing a disk in Veritas Volume Manager

Replacing a disk in VxVM
--------------------------------------------------------

Replacing a disk has always been a difficult task. The main worry running in our mind is that data should not be lost while performing the activity; it should remain intact, otherwise you know the pain of facing the other teams.


Here I will explain very simple steps which will make you comfortable doing such critical activities.


I will be talking about concatenated volumes, where you don't have redundant copies of data; later I will cover mirrored volumes.


Disk replacement in concatenated volume


Here is the vxprint -htq output


bash-3.00# vxprint -htq

Disk group: appdg

dg appdg        default      default  4000     1347953587.80.vcs1

dm disk03       disk_3       auto     65536    2027168  -
dm disk04       disk_4       auto     65536    2027168  -

v  appvol       -            ENABLED  ACTIVE   3121152  SELECT    -        fsgen
pl appvol-01    appvol       ENABLED  ACTIVE   3121152  CONCAT    -        RW
sd disk03-01    appvol-01    disk03   0        2027168  0         disk_3   ENA
sd disk04-01    appvol-01    disk04   0        1093984  2027168   disk_4   ENA


In the appdg diskgroup above, the first subdisk is disk03-01, which is associated with disk03, and the plex is of concatenated type.

In this scenario, disk03 is the first disk added to the diskgroup, and on it Veritas created the first subdisk of the "appvol" volume. There is no way to replace this disk; if you try to do so, you will end up corrupting the filesystem and the data.


But if you want to replace the second disk (flagged with a failing status), that is absolutely possible without data loss, although the volume will not be accessible during the activity.

First we will use vxdiskadm to replace the disk, later we will try with only command prompt.


Connect the new disk in an empty slot and initialize it.


Run vxdiskadm from the command prompt; it will open up a list of menu items.



-> Select option no 4 - Remove a disk for replacement


Select the correct disk which you want to replace


(Caution - do not select the first disk in the diskgroup)


Once you have removed the disk, it will prompt you to select a disk to replace it with. Select the disk which you have initialized.


Once you are done with the above steps, you will find the plex in the DISABLED/RECOVER state:


# vxprint -htqv

Disk group: appdg

v  appvol       -            DISABLED ACTIVE   3121152  SELECT    -        fsgen
pl appvol-01    appvol       DISABLED RECOVER  3121152  CONCAT    -        RW
sd disk03-01    appvol-01    disk03   0        2027168  0         disk_3   ENA
sd disk04-01    appvol-01    disk04   0        1093984  2027168   disk_5   ENA
#


You need to correct the above. The below sequence of commands needs to be executed to fix it:


#vxmend -g appdg fix stale appvol-01


#vxmend -g appdg fix clean appvol-01


#vxvol -g appdg start appvol

That's it, you are done. Mount the volume and you are ready to use it.


Replacing a failed disk in a mirrored volume

Here is my volume status

# vxprint -htqg mir
dg mir          default      default  4000     1348198265.15.vcs1

dm mir01        disk_3       auto     65536    2027168  -
dm mir02        disk_4       auto     65536    2027168  -

v  appvolmir    -            ENABLED  ACTIVE   1024000  SELECT    -        fsgen
pl appvolmir-01 appvolmir    ENABLED  ACTIVE   1024000  CONCAT    -        RW
sd mir01-01     appvolmir-01 mir01    0        1024000  0         disk_3   ENA
pl appvolmir-02 appvolmir    ENABLED  ACTIVE   1024000  CONCAT    -        RW
sd mir02-01     appvolmir-02 mir02    0        1024000  0         disk_4   ENA


Here appvolmir is a mirrored volume in the mir diskgroup, having 2 plexes: appvolmir-01 and appvolmir-02.

Change the plex status to offline

#vxmend -g mir off appvolmir-02

# vxprint -htqg mir
dg mir          default      default  4000     1348198265.15.vcs1

dm mir01        disk_3       auto     65536    2027168  -
dm mir02        disk_4       auto     65536    2027168  -

v  appvolmir    -            ENABLED  ACTIVE   1024000  SELECT    -        fsgen
pl appvolmir-01 appvolmir    ENABLED  ACTIVE   1024000  CONCAT    -        RW
sd mir01-01     appvolmir-01 mir01    0        1024000  0         disk_3   ENA
pl appvolmir-02 appvolmir    DISABLED OFFLINE  1024000  CONCAT    -        RW
sd mir02-01     appvolmir-02 mir02    0        1024000  0         disk_4   ENA


Disassociate the offline plex from the volume:

#vxplex -g mir dis  appvolmir-02

# vxprint -htqg mir
dg mir          default      default  4000     1348198265.15.vcs1

dm mir01        disk_3       auto     65536    2027168  -
dm mir02        disk_4       auto     65536    2027168  -

pl appvolmir-02 -            DISABLED -        1024000  CONCAT    -        RW
sd mir02-01     appvolmir-0 mir02    0        1024000  0         disk_4   ENA

v  appvolmir    -            ENABLED  ACTIVE   1024000  SELECT    -        fsgen
pl appvolmir-01 appvolmir    ENABLED  ACTIVE   1024000  CONCAT    -        RW
sd mir01-01     appvolmir-01 mir01    0        1024000  0         disk_3   ENA
#
 

 Remove the plex

#vxedit -g mir -r rm appvolmir-02

 # vxprint -htqg mir
dg mir          default      default  4000     1348198265.15.vcs1

dm mir01        disk_3       auto     65536    2027168  -
dm mir02        disk_4       auto     65536    2027168  -

v  appvolmir    -            ENABLED  ACTIVE   1024000  SELECT    -        fsgen
pl appvolmir-01 appvolmir    ENABLED  ACTIVE   1024000  CONCAT    -        RW
sd mir01-01     appvolmir-01 mir01    0        1024000  0         disk_3   ENA
#

Once the plex is removed, remove the disk from Veritas control:

#vxdg -g mir rmdisk mir02 




# vxprint -htqg mir
dg mir          default      default  4000     1348198265.15.vcs1

dm mir01        disk_3       auto     65536    2027168  -

v  appvolmir    -            ENABLED  ACTIVE   1024000  SELECT    -        fsgen
pl appvolmir-01 appvolmir    ENABLED  ACTIVE   1024000  CONCAT    -        RW
sd mir01-01     appvolmir-01 mir01    0        1024000  0         disk_3   ENA
#



So disk removal is done. Now it's time to attach a new disk and sync the data. Identify a new disk of similar or larger size.

Here in this example I will be using disk_5 as the replacement disk.

# vxprint -htqg mir
dg mir          default      default  4000     1348198265.15.vcs1

dm mirreplacedisk disk_5     auto     65536    41764864 -
dm mir01        disk_3       auto     65536    2027168  -

v  appvolmir    -            ENABLED  ACTIVE   1024000  SELECT    -        fsgen
pl appvolmir-01 appvolmir    ENABLED  ACTIVE   1024000  CONCAT    -        RW
sd mir01-01     appvolmir-01 mir01    0        1024000  0         disk_3   ENA
#


So once the new disk is initialized and added to the diskgroup, mirror the volume. Here is the vxdisk list output:

# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
c0d0s2       auto:sliced     ibmdg01      ibmdg        online
disk_3       auto:cdsdisk    mir01        mir          online
disk_4       auto:cdsdisk    -            -            online
disk_5       auto:cdsdisk    mirreplacedisk  mir          online
disk_6       auto:SVM        -            -            SVM
disk_7       auto:SVM        -            -            SVM
disk_8       auto:ZFS        -            -            ZFS
disk_9       auto:ZFS        -            -            ZFS
disk_10      auto:ZFS        -            -            ZFS
disk_11      auto:cdsdisk    -            -            online
disk_12      auto:cdsdisk    -            -            online


"mirrreplacedisk" is the replaced disk

#vxassist -g mir make mirror appvolmir mirreplacedisk

# vxprint -htqg mir
dg mir          default      default  4000     1348198265.15.vcs1

dm mirreplacedisk disk_5     auto     65536    41764864 -
dm mir01        disk_3       auto     65536    2027168  -

v  appvolmir    -            ENABLED  ACTIVE   1024000  SELECT    -        fsgen
pl appvolmir-01 appvolmir    ENABLED  ACTIVE   1024000  CONCAT    -        RW
sd mir01-01     appvolmir-01 mir01    0        1024000  0         disk_3   ENA
pl appvolmir-02 appvolmir    ENABLED  ACTIVE   1024000  CONCAT    -        RW
sd mirreplacedisk-01 appvolmir-02 mirreplacedisk 0 1024000 0      disk_5   ENA
#
 













Friday, September 14, 2012

Driver versus firmware

The difference between firmware and a driver lies in the application of the piece of software.
Most electronic hardware needs only firmware to run at a basic level. E.g. a DVD player needs firmware to know how to read a CD or DVD. But to have it accessible under an operating system like Windows, you need a driver to access the CD or DVD or watch a movie.
To visualize: a car needs an engine to drive at all, but needs a person (a driver!) to control it.
Firmware is the software that runs on the device. A driver is the software that tells your operating system how to communicate with the device. Not all devices have firmware--only devices with some level of intelligence.
Read more: http://wiki.answers.com/Q/What_is_the_difference_between_firmware_and_drivers#ixzz25ieOoLEf

Thursday, September 13, 2012

ALOM reset.

1. ALOM console is hung

I am able to access the console but not able to type in the username and password. As per my findings from the internet, ALOM can be reset by completely powering off the box, which means removing the power cables.

The second option I could find is that it can be done at the OS level. Here is the command:

#/usr/platform/`uname -i`/sbin/scadm resetrsc

Wednesday, September 12, 2012

Veritas Dynamic Multipathing

Veritas Dynamic Multipathing:

As the name indicates, Veritas Dynamic Multipathing provides multipathing functionality for operating system native devices.
It represents the multiple paths of a device using a DMP metadevice, also known as a DMP node.

Important points:

You cannot create a Veritas filesystem directly on a DMP metadevice; the only way to have a filesystem is to create a VxVM volume and create a VxFS filesystem on it.

VxVM volumes and ZFS volumes can coexist on the same host, but a device which has a VxVM label cannot be used in ZFS, and vice versa.

Features of multipathing:

It provides the following features:

1. Availability
2. Reliability
3. Performance 

It does those by using path failover and load balancing.

Two kernel threads perform the checking of HBA status and path restoration:

1. errord
2. restored

This can be checked using #vxdmpadm stat

Support for ZFS
---------------------

By default, VxVM does not support ZFS. To enable it, you need to turn on the dmp_native_support tunable:

#vxdmpadm settune dmp_native_support=on

To check the status 

#vxdmpadm gettune dmp_native_support


Here is the sequence of commands to enable VxDMP native support and create a zpool, followed by adding a new device to the zpool.

1.Enable the dmp_native_support

#vxdmpadm settune dmp_native_support=on

2.Check the dmp_native_support status

#vxdmpadm gettune dmp_native_support

Once the above steps are done, you can go ahead and create the zpool.

3. Bring the disk under Veritas Volume manager

#vxdctl enable or #vxdisk scandisks

4. Check the disks status

#vxdisk list

5.Now its time to create zpool using the device

#zpool create testpool <device_name>


6. Check the zpool status

#zpool status

7.Add a new device

#zpool add testpool <device name>

------------------------------------------------------------------

To see all the subpaths of the dmpnode name

#vxdmpadm getsubpaths dmpnodename=<name> 

# vxdmpadm getsubpaths dmpnodename=disk_8
NAME         STATE[A]   PATH-TYPE[M] CTLR-NAME  ENCLR-TYPE   ENCLR-NAME    ATTRS
================================================================================
c2t5d0       ENABLED(A)    -          c2         Disk         disk          -

where disk_8 is the dmpnodename

Now for the other direction: when we have the device details and want to find out the corresponding DMP node name:


# vxdmpadm getdmpnode nodename=c2t5d0
NAME                 STATE        ENCLR-TYPE   PATHS  ENBL  DSBL  ENCLR-NAME
==============================================================================
disk_8  


Other useful commands:

#vxdmpadm list dmpnode all

#vxdmpadm list dmpnode dmpnodename=<dmp node name>

# vxdmpadm list dmpnode dmpnodename=disk_4
dmpdev          = disk_4
state           = enabled
enclosure       = disk
cab-sno         = DISKS
asl             = scsi3_jbod
vid             = VMware,
pid             = VMware Virtual S
array-name      = Disk
array-type      = Disk
iopolicy        = MinimumQ
avid            = -
lun-sno         = 6000C29A8511F1C593A6B3235534E5EA
udid            = VMware%2C%5FVMware%20Virtual%20S%5FDISKS%5F6000C29A8511F1C593A6B3235534E5EA
dev-attr        = -
###path         = name state type transport ctlr hwpath aportID aportWWN attr
path            = c2t1d0s2 enabled(a) - SCSI c2 /pci@0,0/pci15ad,1976@10 - - -
#






Tuesday, September 4, 2012

Configuring network management port to access console

The console can be accessed remotely using either the serial management port or the network management port. Almost all Solaris servers have both options.

Here, I will be showing, how to set up the Network Management port.

Access the console locally. Once you are connected, you will be prompted for a username and password. Use the default username and password below.

Username : root
password : changeme

Once you are logged in, you will be placed at the ILOM service processor prompt "->".

Note: assign a new password during the initial system configuration.
Initially the network management port is configured to use DHCP.

Follow the below steps to configure the Network Mgmt port with static IP address.


Step 1:

-> set /SP/network state=enabled
Set ’state’ to ’enabled’

Step 2:

-> set /SP/network pendingipaddress=xx.xx.xx.xx

Step 3:

-> set /SP/network pendingipdiscovery=static

Step 4:

-> set /SP/network pendingipnetmask=xx.xx.xx.xx

Step 5:


-> set /SP/network pendingipgateway=xxx.xxx.xx.xxx

Step 6:

Final steps to commit the changes.

-> set /SP/network commitpending=true

Step 7:

Verify the network settings.

-> show /SP/network

That's it, you are done. Connect to your network management port using ssh.

Tuesday, August 28, 2012

Below are a few Solaris questions; I hope these will be useful.

1. Can a RAID level 5 volume in SVM withstand multiple disk failures?

Ans: Yes, if a sufficient number of disks are configured as hot spares.

2. Does the Solaris operating system continue to function if all state database replicas are deleted?

Ans: Yes, Solaris continues to function as normal; however, the system loses all Solaris Volume Manager configuration data if a reboot occurs with no existing state database replicas on disk.

Solaris Volume Manager

Configuration file and Process details of SVM


I won't get into the details of Solaris Volume Manager, as there are many articles available on the internet which provide plenty of information.
Here are a few points I would like to share that are handy in understanding the important concepts of SVM.
As you know, for each and every piece of software configured on a machine there are a few configuration files involved, and a few processes associated with it.

So first, let's discuss the main configuration files for SVM.

They are located in /etc/lvm:

-rw-r--r--   1 root     sys          101 Aug 28 20:01 md.cf
-rw-r--r--   1 root     sys           95 Aug 28 20:35 mddb.cf


md.cf - metadevice configuration file 

Cat output of the file is shown below

# cat md.cf
# metadevice configuration file
# do not hand edit
d0 -m d10 d11 1
d10 1 1 c2t3d0s6
d11 1 1 c2t4d0s6

The above file content can be displayed using the below command.

#metastat -p

mddb.cf - metadevice database location details

Below is the output of cat on that file

# cat mddb.cf
#metadevice database location file do not hand edit
#driver minor_t daddr_t device id       checksum
sd      512     16      id1,sd@n6000c29d87e38ec3b5a13a31456d14c1/a      -3659
sd      512     8208    id1,sd@n6000c29d87e38ec3b5a13a31456d14c1/a      -11851
sd      512     16400   id1,sd@n6000c29d87e38ec3b5a13a31456d14c1/a      -20043

From the above output we can extract that there are three metadatabase replicas created on the same device.

Daemons for SVM:

disabled         20:34:50 svc:/network/rpc/meta:default
disabled        20:34:50 svc:/system/metainit:default


You need not start these manually: when you create your first metadatabase on the machine they start on their own, and when you delete all the metadatabases the above services go back into disabled mode.

As you know, the metadatabase is the foundation of any metadevice. If for some reason all metadatabase replicas are corrupted, you lose the complete SVM configuration.

Recovering SVM configuration from a corrupted metadatabase.

Follow the steps below in case all metadatabases are deleted or corrupted.

Step 1:

Recreate the metadatabase replicas on the same slices which existed earlier, or use a different slice (note: metadb -a adds replicas, and -f is needed when no replicas exist yet):

# metadb -a -f /dev/dsk/c2t3d0s0

Step 2:

Now you need to rebuild the metadevice configuration. The first question that may arise in your mind is how we find which metadevice was configured on which slice, and whether it was mirrored. Don't worry: the configuration file (/etc/lvm/md.cf) which I explained earlier still has the complete details. Now you may be thinking that even this file will be emptied as a result of the metadatabase corruption. No: as I mentioned earlier, SVM maintains the metadevice and metadatabase configurations in different files, so even if you delete all metadatabases, your metadevice configuration file remains intact.

# cat md.cf
# metadevice configuration file
# do not hand edit
d0 -m d10 d11 1
d10 1 1 c2t3d0s6
d11 1 1 c2t4d0s6

Now using the above, you can easily rebuild your metadevices. As you can see, d0 is a mirror metadevice which has two submirrors, d10 and d11:

#metainit  d10 1 1 c2t3d0s6
#metainit  d11 1 1 c2t4d0s6

#metainit d0 -m d10

#metattach d0 d11

So your metadevice is in place.

NOTE: Don't create a new filesystem on this metadevice, or else you will lose all the previous data. Just mount the metadevice and access your old data.








Wednesday, August 22, 2012

Identifying HBA cards on Solaris servers

Installing and configuring an HBA on a server, or rather configuring multiple HBAs, is a very common task for a Unix administrator. There are two major manufacturers of HBA cards:
1. Emulex
2. Qlogic


Sun-branded QLogic and Emulex cards are those for which the driver, and support, are provided by Sun. Now the difficult task is: how do we identify whether a card installed on the server is Sun-branded or non-Sun-branded?

Below are a few commands by which you can determine the type of the card and the WWPN number.

To display the connected card

#luxadm -e port

To display the wwn and wwpn details

#luxadm -e dump_map /devices/pci@23d,600000/SUNW,qlc@1/fp@0,0

prtpicl is a very powerful command which can be used to display the complete PCI hierarchy.

prtpicl command usage:

#prtpicl -v -c scsi-fcp
#prtpicl -v -c scsi-fcp | /usr/xpg4/bin/grep -e port-wwn -e devfs-path

For solaris 10

#fcinfo hba-port

Monday, August 20, 2012

Solaris Flash image creation and installation

Creation and deployment of flar image:

-----------------------------------------------

Before deploying the flar image, you need to be sure that the clone system has the same hardware; a flar image created on x86 cannot be deployed onto SPARC hardware. The filesystem should also have enough space to store the flar image. Now the question is: how much space is required to create the flar image?

Run df -h and add up the used space of all the filesystems that will be included, to determine the space required to create the flar image.
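As a sketch, the adding-up can be scripted; the filesystem list below is only an example and should be replaced with whatever will actually go into the archive:

```shell
# Sum the "used" column (KB) of df -k for the chosen filesystems
# and print the total in GB (1048576 KB = 1 GB).
df -k / /var /opt | awk 'NR > 1 { used += $3 } END { printf "%.1f GB\n", used / 1048576 }'
```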

Creating a flar image


#flar create -n Sol10 -c /var/opt/Sol10.flar

Where Sol10 is the archive name and Sol10.flar is the file name; they need not be the same.

To view the metadata(means the contents) of the flar image

#flar info -l  /var/opt/Sol10.flar


Once you create the flar image it can be deployed using nfs, http, ftp and local tape. 

I will explain here the simple steps to deploy a flar image using NFS.

As you see above, the flar image name is "Sol10.flar", located in /var/opt.

Share /var/opt as an NFS share:

#share -F nfs -d "NFS share for flar image" /var/opt

That's it; you are done with the server hosting the flar image.

Installing the flar image:

Pre-requisite:

1. The hardware has to be the same as the machine on which the flar image was created.
2. Obviously, it should be on the network.

Boot the server from CDROM, assuming you don't have a jumpstart server.

Select Interactive Install (i.e. option 1).

Continue with the installation process, which I believe we are all familiar with.

Stop when you reach the step below.


Here you need to select the second option "Network File System"



You need to provide the absolute path to the flar image. "server" is the name of the server where the flar image is located; instead of the server name, you can provide the server's IP address.



So you are done. Once you click Next, the installer will install the image and finally prompt you to reboot the server once it has completed.








Tuesday, July 24, 2012

Shell Scripting

1. To remove all the empty lines in a file:
            #grep -v "^$" testfile    (where testfile is the filename)

^ - matches the beginning of a line
$ - matches the end of a line

These characters are called anchors.

One more example : 

grep -i ^ts testfile

This will display the lines that begin with "ts" (the -i makes the match case-insensitive).


Usage of sed:

Deleting all the lines matching a particular pattern:

Take the vfstab file where you want to delete all the lines containing tmpfs

#sed '/tmpfs/d' vfstab

Where d indicates: delete all the lines matching the pattern tmpfs.
Similarly, p means print. Let's do one exercise.

I want to print only the ufs filesystem specified in vfstab file

#sed -n '/ufs/p' vfstab

The -n option suppresses sed's default output so that only the lines matching the pattern are printed; without it, every line would be displayed (and matching lines twice).

There are a few operators which are commonly used:

p - print
d - delete
g - global, meaning the operation is performed on every occurrence of the pattern
s - substitute

All the other operators, I believe, are self-explanatory.
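As a quick illustration of the g operator combined with s, here is a hypothetical substitution that replaces every occurrence on the line rather than only the first:

```shell
# Without the trailing g, sed replaces only the first match on each
# line; with g, every /dev/dsk on the line becomes /dev/rdsk.
echo "/dev/dsk/c0t0d0 /dev/dsk/c0t1d0" | sed 's%/dev/dsk%/dev/rdsk%g'
# -> /dev/rdsk/c0t0d0 /dev/rdsk/c0t1d0
```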
 


# echo "suman mandal" | sed 's/\([a-z]*\) \([a-z]*\)/\2 \1/'

The output of the above command will be:
mandal suman





vxprint -g oracle -dF "%publen" | awk 'BEGIN {s = 0} {s += $1} END {print s/2097152, "GB"}'