installing_and_configuring_nas_on_debian_jessie_en [2017/09/05 12:18] (current)
 +====== Installing and Configuring NAS on Debian Jessie ======
 +
 +What's up folks, here I'll show how to configure a NAS on Debian Jessie and its clients.
 +
 +Network-attached storage (NAS) is a file-level computer data storage server connected to a computer network, providing data access to a heterogeneous group of clients. NAS not only operates as a file server, but is specialized for this task either by its hardware, software, or configuration of those elements. NAS is often manufactured as a computer appliance – a specialized computer built from the ground up for storing and serving files – rather than simply a general purpose computer being used for the role.
 +
 +As of 2010 NAS devices began gaining popularity as a convenient method of sharing files among multiple computers. Potential benefits of dedicated network-attached storage, compared to general-purpose servers also serving files, include faster data access, easier administration,​ and simple configuration.
 +
 +NAS systems are networked appliances which contain one or more hard drives, often arranged into logical, redundant storage containers or RAID. Network-attached storage removes the responsibility of file serving from other servers on the network. They typically provide access to files using network file sharing protocols such as NFS, SMB/CIFS, or AFP.
 +
 +Note that hard drives with "NAS" in their name are functionally similar to other drives but may have different firmware, vibration tolerance, or power dissipation to make them more suitable for use in RAID arrays, which are sometimes used in NAS implementations. For example, some NAS versions of drives support a command extension that allows extended error recovery to be disabled. In a non-RAID application, it may be important for a disk drive to go to great lengths to successfully read a problematic storage block, even if it takes several seconds. In an appropriately configured RAID array, a single bad block on a single drive can be recovered completely via the redundancy encoded across the RAID set. If a drive spends several seconds executing extensive retries it might cause the RAID controller to flag the drive as "down", whereas if it simply replied promptly that the block of data had a checksum error, the RAID controller would use the redundant data on the other drives to correct the error and continue without any problem. Such a "NAS" SATA hard disk drive can be used as an internal PC hard drive, without any problems or adjustments needed, as it simply supports additional options and may possibly be built to a higher quality standard (particularly if accompanied by a higher quoted MTBF figure and higher price) than a regular consumer drive.
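 +
On Linux, this error-recovery timeout (SCT ERC, often marketed as TLER or CCTL) can usually be inspected and tuned with smartmontools, provided the drive supports it. A sketch — the device name here is just an example; run it against your own disk:

```shell
# query the drive's SCT Error Recovery Control timers (if supported)
smartctl -l scterc /dev/sda

# cap read/write error recovery at 7.0 seconds (values are in 0.1 s units),
# so a RAID layer sees a prompt error instead of a long retry stall
smartctl -l scterc,70,70 /dev/sda
```

Note that on many consumer drives this setting is not persistent across power cycles, so it would have to be reapplied at boot.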
 +
 +What we shall use here:
 +
 +  * **NAS Server: Debian Jessie**
 +      * **IP:** 10.101.0.100/24
 +      * **Name:** nas
 +      * **Disk:** /dev/sdb 100 GB
 +  * **Client Debian Jessie**
 +      * **IP:** 10.101.0.102/24
 +      * **Name:** client01
 +      * **Client configuration Path:** /etc/iscsi
 +  * **Client CentOS 7.1**
 +      * **IP:** 10.101.0.104/24
 +      * **Name:** client02
 +      * **Client configuration Path:** /var/lib/iscsi
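 +
Since the examples below refer to these hosts by name, it can help to add them to /etc/hosts on each machine. A sketch using the addresses above (adjust to your own network):

```
10.101.0.100    nas
10.101.0.102    client01
10.101.0.104    client02
```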
 +We need to update the repositories and upgrade the system as follows
 +
 +<sxh bash>
 +aptitude update && aptitude dist-upgrade -y
 +</​sxh>​
 +
 +We need to install the prerequisites to use the NAS
 +
 +<sxh bash>
 +aptitude install iscsitarget iscsitarget-dkms lvm2 -y
 +</​sxh>​
 +
 +Now we need to rebuild the module dependencies
 +
 +<sxh bash>
 +depmod -a
 +</​sxh>​
 +
 +Let's load the iscsi module
 +
 +<sxh bash>
 +modprobe iscsi_trgt
 +</​sxh>​
 +
 +Let's set the iscsi module to be loaded at boot time.
 +
 +<sxh bash>
 +echo "iscsi_trgt" >> /etc/modules
 +</​sxh>​
 +
 +Now let's reboot the server.
 +
 +<sxh bash>
 +reboot
 +</​sxh>​
 +
 +Now we can get information about the iscsi devices and other details from
 +
 +<sxh bash>
 +cat /​proc/​net/​iet/​session /​proc/​net/​iet/​volume
 +</​sxh>​
 +
 +Now we need to initialize the second disk as an LVM physical volume
 +
 +<sxh bash>
 +pvcreate -v /dev/sdb
 +</​sxh>​
 +
 +Now we need to create the volume group, which we'll use afterwards to create the logical volume.
 +
 +<sxh bash>
 +vgcreate -v STORAGE /dev/sdb
 +</​sxh>​
 +
 +Now let's create a logical volume that will be used as a LUN by the client.
 +
 +<sxh bash>
 +lvcreate -v -L 7G -n lun0 STORAGE
 +</​sxh>​
 +
 +Let's make a backup of the ietd configuration file that controls the LUNs
 +
 +<sxh bash>
 +cp -a /​etc/​iet/​ietd.conf{,​.bkp}
 +</​sxh>​
 +
 +Now let's clean up the file.
 +
 +<sxh bash>
 +cat /dev/null > /​etc/​iet/​ietd.conf
 +</​sxh>​
 +
 +Now let's edit the file as follows
 +
 +<sxh apache>
 +vim /​etc/​iet/​ietd.conf
 +##  A target definition and the target name. The targets ​ name  (the iSCSI  Qualified ​ Name  )
 +# must  be  a  globally unique name (as defined by the  iSCSI  standard) ​ and  has  to  start
 +# with  iqn followed ​ by  a  single ​ dot like this
 +# Target iqn.<​yyyy-mm>​.<​tld.domain.some.host>​[:<​identifier>​] e.g (Target iqn.2004-07.com.example.host:​storage.disk2.sys1.xyz)
 +Target iqn.2015-04.br.com.douglasqsantos:​storage.lun0
 +## This assigns an optional <​aliasname>​ to the target.
 +Alias LUN0
 +## The <​username>​ and <​password>​ used  to  authenticate ​ the  iSCSI initiators to this target.
 +# It may be different from the username and password in  section ​ GLOBAL ​ OPTIONS, ​ which  is
 +# used  for discovery
 +IncomingUser usuario senha
 +## The <​username>​ and <​password>​ used to  authenticate ​ this  iSCSI target ​ to  initiators.
 +# Only  one  OutgoingUser ​ per  target is supported. It may be different from the username
 +# and password in section ​  ​GLOBAL ​  ​OPTIONS, ​ which  is  used  for  discovery.
 +OutgoingUser
 +## Lun <lun> Path=<​device>,​Type=(fileio|blockio)[,​ScsiId=<​scsi_id>​][,​ScsiSN=<​scsi_sn>​][,​IOMode=(wb|ro)] | Sectors=<​size>,​Type=nullio
 +# Parameters after  <​lun> ​ should ​ not  contain ​ any  blank  space character except the first blank space after <lun> is needed.
 +# This  line  must  occur at least once. The value of <lun> can be between 0 and 2^14-1
 +# In fileio mode (default), it defines a mapping between a "Logical Unit Number" <lun> and a given device <​device>​, which
 +# can be any block device (including regular ​ block  devices ​ like hdX and sdX and virtual block devices like LVM and Software RAID
 +# devices) or regular files
 +# In blockio mode, it defines a mapping between ​ a  "​Logical ​ Unit Number"​ <lun> and a given block device <​device>​. ​ This mode will
 +# perform direct block i/o with the device, ​ bypassing ​ page-cache for  all operations.
 +# Optionally a <​scsi_id>​ can be specified to assign a unique ID to the  iSCSI  volume. ​ This  is  useful e.g. in conjunction with a
 +# multipath-aware ​ initiator ​ host  accessing ​ the  same  <​device>​ through ​ several ​ targets.
 +# By default, LUNs are writable, employing write-through ​ caching. By  setting IOMode to "​ro"​ a LUN can only be accessed read only.
 +# Setting IOMode to "​wb"​ will enable write-back ​ caching. NOTE: IOMode "​wb"​ is ignored when employing blockio.
 +# In nullio mode, it defines a mapping ​ between ​ a  "​Logical ​ Unit Number"​ <lun> and an unnamed virtual device with <​size>​ sectors.
 +# This is ONLY useful for performance ​ measurement ​ purposes.
 +Lun 0 Path=/​dev/​STORAGE/​lun0,​Type=fileio
 +## Optional. ​ The number of connections within a session. Has to be set to "​1"​ (in words: one), which is also the default since MC/S
 +# is not supported.
 +MaxConnections 1
 +## Optional.  The maximum number of sessions for this target. If this is set to 0 (which is the default) there is no explicit
 +# session limit.
 +MaxSessions 0
 +## Optional. ​ If  value is non-zero, the initiator will be "​ping"​ed during phases of inactivity (i.e. no data transfers) every value
 +# seconds ​ to  verify ​ the  connection ​ is  still  alive. ​ If  the initiator ​ fails  to  respond ​ within ​ NOPTimeout ​ seconds, ​ the
 +# connection will be closed.
 +NOPInterval 1
 +## Optional. ​ If  a  non-zero ​ NOPInterval ​ is used to periodically "​ping"​ the initiator during phases of inactivity (i.e.  no  data
 +# transfers), ​ the  initiator ​ must  respond within value seconds, otherwise the connection will be closed. If value is set to zero
 +# or if it exceeds NOPInterval , it will be set to NOPInterval.
 +NOPTimeout 5
 +##  Optional. Has to be set to "​Yes"​ - which is also the default.
 +DataPDUInOrder Yes
 +## Optional. Has to be set to "​Yes"​ - which is also the default.
 +DataSequenceInOrder Yes
 +</​sxh>​
 +
 +Now we need to define who will be allowed access to our LUNs; here we shall allow the CIDR 10.101.0.0/24
 +
 +Let's make a backup of the configuration file.
 +
 +<sxh bash>
 +cp /​etc/​iet/​initiators.allow{,​.bkp}
 +</​sxh>​
 +
 +Now we shall comment out the line ALL ALL, which enables access to all LUNs for all clients, and add a new line enabling access to lun0
 +
 +<sxh bash>
 +vim /​etc/​iet/​initiators.allow
 +[...]
 +# ALL ALL
 +iqn.2015-04.br.com.douglasqsantos:storage.lun0 10.101.0.0/24
 +</​sxh>​
 +
 +Now we need to enable iscsitarget to be launched at boot time, and set the IP address and port the server will bind to.
 +
 +<sxh bash>
 +vim /​etc/​default/​iscsitarget
 +ISCSITARGET_ENABLE=true
 +ISCSITARGET_MAX_SLEEP=3
 +
 +# ietd options
 +# See ietd(8) for details
 +ISCSITARGET_OPTIONS="​--address=192.168.1.100 --port=3260 "
 +</​sxh>​
 +
 +Let's restart the iscsitarget service as follows.
 +
 +<sxh bash>
 +/​etc/​init.d/​iscsitarget restart
 +</​sxh>​
 +
 +Let's show the open sessions on our LUN as follows
 +
 +<sxh bash>
 +cat /​proc/​net/​iet/​session
 +tid:1 name:​iqn.2015-04.br.com.douglasqsantos:​storage.lun0
 +</​sxh>​
 +
 +Let's check the LUNs that will be exported as follows.
 +
 +<sxh bash>
 +cat /​proc/​net/​iet/​volume
 +tid:1 name:iqn.2015-04.br.com.douglasqsantos:storage.lun0
 +    lun:0 state:0 iotype:​fileio iomode:wt blocks:​4194304 blocksize:​512 path:/​dev/​STORAGE/​lun0
 +</​sxh>​
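 +
A quick sanity check on what /proc/net/iet/volume reports: the exported size is blocks × blocksize. For the 7 GiB LVs used in this guide the arithmetic works out as follows (shell sketch):

```shell
# a 7 GiB LUN is reported as 14680064 blocks of 512 bytes
blocks=14680064
blocksize=512
echo $(( blocks * blocksize / 1024 / 1024 / 1024 ))   # -> 7 (GiB)
```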
 +
 +====== Configuring the Debian Client ======
 +
 +Let's update the repositories and upgrade the whole system.
 +
 +<sxh bash>
 +aptitude update && aptitude dist-upgrade -y
 +</​sxh>​
 +
 +Let's install the iscsi client as follows
 +
 +<sxh bash>
 +aptitude install open-iscsi -y
 +</​sxh>​
 +
 +Let's make a backup of the client configuration file as follows
 +
 +<sxh bash>
 +cp -Rfa /​etc/​iscsi/​initiatorname.iscsi{,​.bkp}
 +</​sxh>​
 +
 +Now let's change the iscsi name of our client.
 +
 +<sxh bash>
 +vim /​etc/​iscsi/​initiatorname.iscsi
 +InitiatorName=iqn.2015-04.br.com.douglasqsantos.client01:​88288d9a1b78
 +</​sxh>​
 +
 +Let's restart the iscsi service
 +
 +<sxh bash>
 +/​etc/​init.d/​open-iscsi restart
 +</​sxh>​
 +
 +Now let's try to discover the LUN on the NAS
 +
 +<sxh bash>
 +iscsiadm -m discovery -t st -p 192.168.1.100
 +192.168.1.100:​3260,​1 iqn.2015-04.br.com.douglasqsantos:​storage.lun0
 +</​sxh>​
 +
 +Since we got the information we wanted, let's configure the LUN access.
 +
 +Let's enable the automatic connection.
 +
 +<sxh bash>
 +iscsiadm -m node iqn.2015-04.br.com.douglasqsantos:​storage.lun0 --op=update --name node.startup --value=automatic
 +</​sxh>​
 +
 +Now let's change the authentication type.
 +
 +<sxh bash>
 +iscsiadm -m node iqn.2015-04.br.com.douglasqsantos:​storage.lun0 --op=update --name node.session.auth.authmethod --value=CHAP
 +</​sxh>​
 +
 +Now let's change the user that has access privileges
 +
 +<sxh bash>
 +iscsiadm -m node iqn.2015-04.br.com.douglasqsantos:​storage.lun0 --op=update --name node.session.auth.username --value=usuario
 +</​sxh>​
 +
 +Now let's change the user password that has access privileges
 +
 +<sxh bash>
 +iscsiadm -m node iqn.2015-04.br.com.douglasqsantos:​storage.lun0 --op=update --name node.session.auth.password --value=senha
 +</​sxh>​
 +
 +Now let's log in to the server.
 +
 +<sxh bash>
 +iscsiadm -m node iqn.2015-04.br.com.douglasqsantos:​storage.lun0 --login
 +Logging in to [iface: default, target: iqn.2015-04.br.com.douglasqsantos:​storage.lun0,​ portal: 192.168.1.100,​3260] (multiple)
 +Login to [iface: default, target: iqn.2015-04.br.com.douglasqsantos:​storage.lun0,​ portal: 192.168.1.100,​3260] successful
 +</​sxh>​
 +
 +Now we can show the connection with the node
 +
 +<sxh bash>
 +iscsiadm -m node -o show
 +# BEGIN RECORD 2.0-873
 +node.name = iqn.2015-04.br.com.douglasqsantos:​storage.lun0
 +node.tpgt = 1
 +node.startup = automatic
 +node.leading_login = No
 +iface.hwaddress = <​empty>​
 +iface.ipaddress = <​empty>​
 +iface.iscsi_ifacename = default
 +iface.net_ifacename = <​empty>​
 +iface.transport_name = tcp
 +iface.initiatorname = <​empty>​
 +iface.bootproto = <​empty>​
 +iface.subnet_mask = <​empty>​
 +iface.gateway = <​empty>​
 +iface.ipv6_autocfg = <​empty>​
 +iface.linklocal_autocfg = <​empty>​
 +iface.router_autocfg = <​empty>​
 +iface.ipv6_linklocal = <​empty>​
 +iface.ipv6_router = <​empty>​
 +iface.state = <​empty>​
 +iface.vlan_id = 0
 +iface.vlan_priority = 0
 +iface.vlan_state = <​empty>​
 +iface.iface_num = 0
 +iface.mtu = 0
 +iface.port = 0
 +node.discovery_address = 192.168.1.100
 +node.discovery_port = 3260
 +node.discovery_type = send_targets
 +node.session.initial_cmdsn = 0
 +node.session.initial_login_retry_max = 8
 +node.session.xmit_thread_priority = -20
 +node.session.cmds_max = 128
 +node.session.queue_depth = 32
 +node.session.nr_sessions = 1
 +node.session.auth.authmethod = CHAP
 +node.session.auth.username = usuario
 +node.session.auth.password = ********
 +[...]
 +</​sxh>​
 +
 +Now we can show the session as follows
 +
 +<sxh bash>
 +iscsiadm -m session -o show
 +tcp: [6] 192.168.1.100:​3260,​1 iqn.2015-04.br.com.douglasqsantos:​storage.lun0 (non-flash)
 +</​sxh>​
 +
 +We can show the connection with the host as well
 +
 +<sxh bash>
 +iscsiadm -m host -o show
 +tcp: [10] 192.168.1.102,​[<​empty>​],<​empty>​ <​empty>​
 +</​sxh>​
 +
 +If you have some problem with the connection and need to remove the host, session or node, you can use the following commands
 +
 +<sxh bash>
 +iscsiadm -m host -o delete
 +iscsiadm -m node -o delete
 +iscsiadm -m session -o delete
 +</​sxh>​
 +
 +For example, let's log out from the session.
 +
 +<sxh bash>
 +iscsiadm -m node -u
 +Logging out of session [sid: 6, target: iqn.2015-04.br.com.douglasqsantos:​storage.lun0,​ portal: 192.168.1.100,​3260]
 +Logout of [sid: 6, target: iqn.2015-04.br.com.douglasqsantos:​storage.lun0,​ portal: 192.168.1.100,​3260] successful.
 +</​sxh>​
 +
 +Now we can remove the node.
 +
 +<sxh bash>
 +iscsiadm -m node iqn.2015-04.br.com.douglasqsantos:​storage.lun0 -o delete
 +</​sxh>​
 +
 +Now if I try to show the information about the node I'll get the following message
 +
 +<sxh bash>
 +iscsiadm -m node iqn.2015-04.br.com.douglasqsantos:​storage.lun0 -o show
 +iscsiadm: No records found
 +</​sxh>​
 +
 +If you've tested the steps above, please log in to the NAS again before following the next steps.
 +
 +Now let's check dmesg for the new device
 +
 +<sxh bash>
 +dmesg
 +[...]
 +[ 2581.823463] sd 10:0:0:0: [sdb] Attached SCSI disk
 +[ 3127.027900] scsi11 : iSCSI Initiator over TCP/IP
 +[ 3128.041754] scsi 11:0:0:0: Direct-Access ​    ​IET ​     VIRTUAL-DISK ​    ​0 ​   PQ: 0 ANSI: 4
 +[ 3128.048398] sd 11:0:0:0: Attached scsi generic sg2 type 0
 +[ 3128.049183] sd 11:0:0:0: [sdb] 14680064 512-byte logical blocks: (7.51 GB/7.00 GiB)
 +[ 3128.050004] sd 11:0:0:0: [sdb] Write Protect is off
 +[ 3128.050012] sd 11:0:0:0: [sdb] Mode Sense: 77 00 00 08
 +[ 3128.053991] sd 11:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn'​t support DPO or FUA
 +[ 3128.059804] ​ sdb: unknown partition table
 +[ 3128.065097] sd 11:0:0:0: [sdb] Attached SCSI disk
 +</​sxh>​
 +
 +Now we need to create a partition on our new disk as follows
 +
 +<sxh bash>
 +fdisk /dev/sdb
 +
 +Welcome to fdisk (util-linux 2.25.2).
 +Changes will remain in memory only, until you decide to write them.
 +Be careful before using the write command.
 +
 +Device does not contain a recognized partition table.
 +Created a new DOS disklabel with disk identifier 0xd7a32fe8.
 +
 +Command (m for help): n
 +Partition type
 +   ​p ​  ​primary (0 primary, 0 extended, 4 free)
 +   ​e ​  ​extended (container for logical partitions)
 +Select (default p): p
 +Partition number (1-4, default 1): 1
 +First sector (2048-14680063,​ default 2048): #ENTER
 +Last sector, +sectors or +size{K,​M,​G,​T,​P} (2048-14680063,​ default 14680063): #ENTER
 +
 +Created a new partition 1 of type '​Linux'​ and of size 7 GiB.
 +
 +Command (m for help): w
 +The partition table has been altered.
 +Calling ioctl() to re-read partition table.
 +Syncing disks.
 +</​sxh>​
 +
 +As I've created the partition using all the available space, let's create the file system now.
 +
 +<sxh bash>
 +mkfs.ext4 -L ISCSI -m 0 /dev/sdb1
 +mke2fs 1.42.12 (29-Aug-2014)
 +Creating filesystem with 1834752 4k blocks and 458752 inodes
 +Filesystem UUID: d9ccbb02-b64f-480e-b1e8-c4c37df2f568
 +Superblock backups stored on blocks:
 +    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
 +
 +Allocating group tables: done
 +Writing inode tables: done
 +Creating journal (32768 blocks): done
 +Writing superblocks and filesystem accounting information:​ done
 +</​sxh>​
 +
 +Here I've used the option -L to set a label and -m 0 so that 5% of the disk is not reserved for the root user.
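 +
To see what -m 0 buys you, a quick arithmetic sketch based on the mke2fs output above (1834752 blocks of 4 KiB): the default 5% reservation would set aside roughly 358 MiB.

```shell
# default ext4 reservation is 5% of blocks; blocks are 4 KiB here
total_blocks=1834752
reserved=$(( total_blocks * 5 / 100 ))        # 91737 blocks
echo $(( reserved * 4 / 1024 ))               # -> 358 (MiB)
```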
 +
 +Now let's create a directory to mount the new partition.
 +
 +<sxh bash>
 +mkdir /iscsi
 +</​sxh>​
 +
 +Now let's mount the new partition as follows
 +
 +<sxh bash>
 +mount /dev/sdb1 /iscsi/
 +</​sxh>​
 +
 +Now let's display our partitions
 +
 +<sxh bash>
 +df -Th
 +Filesystem ​          ​Type ​     Size  Used Avail Use% Mounted on
 +/​dev/​dm-0 ​           ext4      9,3G  1,2G  7,6G  14% /
 +udev                 ​devtmpfs ​  ​10M ​    ​0 ​  ​10M ​  0% /dev
 +tmpfs                tmpfs     ​201M ​ 4,5M  196M   3% /run
 +tmpfs                tmpfs     ​501M ​    ​0 ​ 501M   0% /dev/shm
 +tmpfs                tmpfs     ​5,​0M ​    ​0 ​ 5,0M   0% /run/lock
 +tmpfs                tmpfs     ​501M ​    ​0 ​ 501M   0% /​sys/​fs/​cgroup
 +/​dev/​mapper/​vg01-var ext4      6,3G  396M  5,6G   7% /var
 +/​dev/​mapper/​vg01-tmp ext4      1,8G  2,9M  1,7G   1% /tmp
 +/​dev/​sda1 ​           ext4      268M   ​33M ​ 218M  13% /boot
 +/​dev/​sdb1 ​           ext4      6,8G   ​16M ​ 6,4G   1% /iscsi
 +</​sxh>​
 +
 +As we've mounted the partition with no problem, let's set it to mount at boot time.
 +
 +<sxh bash>
 +echo "/dev/sdb1 /iscsi ext4 _netdev,defaults,noatime 0 0" >> /etc/fstab
 +</​sxh>​
 +
 +As the mount is over the network we need to use the _netdev option, and to tune the filesystem I used the noatime option to disable access-time updates on files.
 +
 +Now let's update the iscsi configuration file with the relevant options
 +
 +<sxh bash>
 +vim /​etc/​iscsi/​iscsid.conf
 +[...]
 +node.startup = automatic
 +[...]
 +node.session.auth.authmethod = CHAP
 +[...]
 +node.session.auth.username = usuario
 +node.session.auth.password = senha
 +[...]
 +node.session.cmds_max = 1024
 +[...]
 +node.session.queue_depth = 128
 +[...]
 +node.session.iscsi.FastAbort = No
 +</​sxh>​
 +
 +Now let's restart the client to make sure that everything is ok.
 +
 +<sxh bash>
 +reboot
 +</​sxh>​
 +
 +Now we can take a look at the uptime of the client and make sure that it has just restarted.
 +
 +<sxh bash>
 +uptime
 + ​19:​44:​11 up 0 min,  1 user,  load average: 0,36, 0,11, 0,04
 +</​sxh>​
 +
 +Now let's take a look at the mount points.
 +
 +<sxh bash>
 +df -Th
 +Filesystem ​          ​Type ​     Size  Used Avail Use% Mounted on
 +/​dev/​dm-0 ​           ext4      9,3G  1,2G  7,6G  14% /
 +udev                 ​devtmpfs ​  ​10M ​    ​0 ​  ​10M ​  0% /dev
 +tmpfs                tmpfs     ​201M ​ 4,5M  196M   3% /run
 +tmpfs                tmpfs     ​501M ​    ​0 ​ 501M   0% /dev/shm
 +tmpfs                tmpfs     ​5,​0M ​    ​0 ​ 5,0M   0% /run/lock
 +tmpfs                tmpfs     ​501M ​    ​0 ​ 501M   0% /​sys/​fs/​cgroup
 +/​dev/​sda1 ​           ext4      268M   ​33M ​ 218M  13% /boot
 +/​dev/​mapper/​vg01-var ext4      6,3G  396M  5,6G   7% /var
 +/​dev/​mapper/​vg01-tmp ext4      1,8G  2,9M  1,7G   1% /tmp
 +/​dev/​sdb1 ​           ext4      6,8G   ​16M ​ 6,4G   1% /iscsi
 +</​sxh>​
 +
 +Now let's check the client connection on the server side.
 +
 +On the NAS Server
 +
 +<sxh bash>
 +cat /​proc/​net/​iet/​session
 +tid:1 name:​iqn.2015-04.br.com.douglasqsantos:​storage.lun0
 +    sid:​3377699741303296 initiator:​iqn.2015-04.br.com.douglasqsantos.client01:​88288d9a1b78
 +        cid:0 ip:​192.168.1.102 state:​active hd:none dd:none
 +</​sxh>​
 +
 +====== Creating a brand new LUN on NAS Server ======
 +
 +Now let's create one more logical volume to use as a LUN
 +
 +<sxh bash>
 +lvcreate -v -L 7G -n lun1 STORAGE
 +</​sxh>​
 +
 +Now we need to create a new entry for our new LUN
 +
 +<sxh apache>
 +vim  /​etc/​iet/​ietd.conf
 +### LUN 0 ###
 +##  A target definition and the target name. The targets ​ name  (the iSCSI  Qualified ​ Name  )
 +# must  be  a  globally unique name (as defined by the  iSCSI  standard) ​ and  has  to  start
 +# with  iqn followed ​ by  a  single ​ dot like this
 +# Target iqn.<​yyyy-mm>​.<​tld.domain.some.host>​[:<​identifier>​] e.g (Target iqn.2004-07.com.example.host:​storage.disk2.sys1.xyz)
 +Target iqn.2015-04.br.com.douglasqsantos:​storage.lun0
 +## This assigns an optional <​aliasname>​ to the target.
 +Alias LUN0
 +## The <​username>​ and <​password>​ used  to  authenticate ​ the  iSCSI initiators to this target.
 +# It may be different from the username and password in  section ​ GLOBAL ​ OPTIONS, ​ which  is
 +# used  for discovery
 +IncomingUser usuario senha
 +## The <​username>​ and <​password>​ used to  authenticate ​ this  iSCSI target ​ to  initiators.
 +# Only  one  OutgoingUser ​ per  target is supported. It may be different from the username
 +# and password in section ​  ​GLOBAL ​  ​OPTIONS, ​ which  is  used  for  discovery.
 +OutgoingUser
 +## Lun <lun> Path=<​device>,​Type=(fileio|blockio)[,​ScsiId=<​scsi_id>​][,​ScsiSN=<​scsi_sn>​][,​IOMode=(wb|ro)] | Sectors=<​size>,​Type=nullio
 +# Parameters after  <​lun> ​ should ​ not  contain ​ any  blank  space character except the first blank space after <lun> is needed.
 +# This  line  must  occur at least once. The value of <lun> can be between 0 and 2^14-1
 +# In fileio mode (default), it defines a mapping between a "Logical Unit Number" <lun> and a given device <​device>​, which
 +# can be any block device (including regular ​ block  devices ​ like hdX and sdX and virtual block devices like LVM and Software RAID
 +# devices) or regular files
 +# In blockio mode, it defines a mapping between ​ a  "​Logical ​ Unit Number"​ <lun> and a given block device <​device>​. ​ This mode will
 +# perform direct block i/o with the device, ​ bypassing ​ page-cache for  all operations.
 +# Optionally a <​scsi_id>​ can be specified to assign a unique ID to the  iSCSI  volume. ​ This  is  useful e.g. in conjunction with a
 +# multipath-aware ​ initiator ​ host  accessing ​ the  same  <​device>​ through ​ several ​ targets.
 +# By default, LUNs are writable, employing write-through ​ caching. By  setting IOMode to "​ro"​ a LUN can only be accessed read only.
 +# Setting IOMode to "​wb"​ will enable write-back ​ caching. NOTE: IOMode "​wb"​ is ignored when employing blockio.
 +# In nullio mode, it defines a mapping ​ between ​ a  "​Logical ​ Unit Number"​ <lun> and an unnamed virtual device with <​size>​ sectors.
 +# This is ONLY useful for performance ​ measurement ​ purposes.
 +Lun 0 Path=/​dev/​STORAGE/​lun0,​Type=fileio
 +## Optional. ​ The number of connections within a session. Has to be set to "​1"​ (in words: one), which is also the default since MC/S
 +# is not supported.
 +MaxConnections 1
 +## Optional.  The maximum number of sessions for this target. If this is set to 0 (which is the default) there is no explicit
 +# session limit.
 +MaxSessions 0
 +## Optional. ​ If  value is non-zero, the initiator will be "​ping"​ed during phases of inactivity (i.e. no data transfers) every value
 +# seconds ​ to  verify ​ the  connection ​ is  still  alive. ​ If  the initiator ​ fails  to  respond ​ within ​ NOPTimeout ​ seconds, ​ the
 +# connection will be closed.
 +NOPInterval 1
 +## Optional. ​ If  a  non-zero ​ NOPInterval ​ is used to periodically "​ping"​ the initiator during phases of inactivity (i.e.  no  data
 +# transfers), ​ the  initiator ​ must  respond within value seconds, otherwise the connection will be closed. If value is set to zero
 +# or if it exceeds NOPInterval , it will be set to NOPInterval.
 +NOPTimeout 5
 +##  Optional. Has to be set to "​Yes"​ - which is also the default.
 +DataPDUInOrder Yes
 +## Optional. Has to be set to "​Yes"​ - which is also the default.
 +DataSequenceInOrder Yes
 +
 +### LUN 1 ###
 +##  A target definition and the target name. The targets ​ name  (the iSCSI  Qualified ​ Name  )
 +# must  be  a  globally unique name (as defined by the  iSCSI  standard) ​ and  has  to  start
 +# with  iqn followed ​ by  a  single ​ dot like this
 +# Target iqn.<​yyyy-mm>​.<​tld.domain.some.host>​[:<​identifier>​] e.g (Target iqn.2004-07.com.example.host:​storage.disk2.sys1.xyz)
 +Target iqn.2015-04.br.com.douglasqsantos:​storage.lun1
 +## This assigns an optional <​aliasname>​ to the target.
 +Alias LUN1
 +## The <​username>​ and <​password>​ used  to  authenticate ​ the  iSCSI initiators to this target.
 +# It may be different from the username and password in  section ​ GLOBAL ​ OPTIONS, ​ which  is
 +# used  for discovery
 +IncomingUser usuario senha
 +## The <​username>​ and <​password>​ used to  authenticate ​ this  iSCSI target ​ to  initiators.
 +# Only  one  OutgoingUser ​ per  target is supported. It may be different from the username
 +# and password in section ​  ​GLOBAL ​  ​OPTIONS, ​ which  is  used  for  discovery.
 +OutgoingUser
 +## Lun <lun> Path=<​device>,​Type=(fileio|blockio)[,​ScsiId=<​scsi_id>​][,​ScsiSN=<​scsi_sn>​][,​IOMode=(wb|ro)] | Sectors=<​size>,​Type=nullio
 +# Parameters after  <​lun> ​ should ​ not  contain ​ any  blank  space character except the first blank space after <lun> is needed.
 +# This  line  must  occur at least once. The value of <lun> can be between 0 and 2^14-1
 +# In fileio mode (default), it defines a mapping between a "Logical Unit Number" <lun> and a given device <​device>​, which
 +# can be any block device (including regular ​ block  devices ​ like hdX and sdX and virtual block devices like LVM and Software RAID
 +# devices) or regular files
 +# In blockio mode, it defines a mapping between ​ a  "​Logical ​ Unit Number"​ <lun> and a given block device <​device>​. ​ This mode will
 +# perform direct block i/o with the device, ​ bypassing ​ page-cache for  all operations.
 +# Optionally a <​scsi_id>​ can be specified to assign a unique ID to the  iSCSI  volume. ​ This  is  useful e.g. in conjunction with a
 +# multipath-aware ​ initiator ​ host  accessing ​ the  same  <​device>​ through ​ several ​ targets.
 +# By default, LUNs are writable, employing write-through ​ caching. By  setting IOMode to "​ro"​ a LUN can only be accessed read only.
 +# Setting IOMode to "​wb"​ will enable write-back ​ caching. NOTE: IOMode "​wb"​ is ignored when employing blockio.
 +# In nullio mode, it defines a mapping ​ between ​ a  "​Logical ​ Unit Number"​ <lun> and an unnamed virtual device with <​size>​ sectors.
 +# This is ONLY useful for performance ​ measurement ​ purposes.
 +Lun 0 Path=/​dev/​STORAGE/​lun1,​Type=fileio
 +## Optional. ​ The number of connections within a session. Has to be set to "​1"​ (in words: one), which is also the default since MC/S
 +# is not supported.
 +MaxConnections 1
 +## Optional.  The maximum number of sessions for this target. If this is set to 0 (which is the default) there is no explicit
 +# session limit.
 +MaxSessions 0
 +## Optional. ​ If  value is non-zero, the initiator will be "​ping"​ed during phases of inactivity (i.e. no data transfers) every value
 +# seconds ​ to  verify ​ the  connection ​ is  still  alive. ​ If  the initiator ​ fails  to  respond ​ within ​ NOPTimeout ​ seconds, ​ the
 +# connection will be closed.
 +NOPInterval 1
 +## Optional. ​ If  a  non-zero ​ NOPInterval ​ is used to periodically "​ping"​ the initiator during phases of inactivity (i.e.  no  data
 +# transfers), ​ the  initiator ​ must  respond within value seconds, otherwise the connection will be closed. If value is set to zero
 +# or if it exceeds NOPInterval , it will be set to NOPInterval.
 +NOPTimeout 5
 +##  Optional. Has to be set to "​Yes"​ - which is also the default.
 +DataPDUInOrder Yes
 +## Optional. Has to be set to "​Yes"​ - which is also the default.
 +DataSequenceInOrder Yes
 +</​sxh>​
 +
 +Now we need to allow access from 192.168.1.0/24
 +
 +<sxh bash>
 +vim /​etc/​iet/​initiators.allow
 +[...]
 +iqn.2015-04.br.com.douglasqsantos:​storage.lun0 192.168.1.0/​24
 +iqn.2015-04.br.com.douglasqsantos:​storage.lun1 192.168.1.0/​24
 +</​sxh>​
 +
 +Now let's restart the iscsi server
 +
 +<sxh bash>
 +/​etc/​init.d/​iscsitarget restart
 +</​sxh>​
 +
 +Now let's display the volumes
 +
 +<sxh bash>
 +cat /​proc/​net/​iet/​volume
 +tid:2 name:​iqn.2015-04.br.com.douglasqsantos:​storage.lun1
 +    lun:0 state:0 iotype:​fileio iomode:wt blocks:​14680064 blocksize:​512 path:/​dev/​STORAGE/​lun1
 +tid:1 name:​iqn.2015-04.br.com.douglasqsantos:​storage.lun0
 +    lun:0 state:0 iotype:​fileio iomode:wt blocks:​14680064 blocksize:​512 path:/​dev/​STORAGE/​lun0
 +</​sxh>​
 +
 +====== CentOS Client ======
 +
 +Use the following script to make sure that your system has all the packages and configuration needed for this how-to: http://wiki.douglasqsantos.com.br/doku.php/confinicialcentos6_en
 +
 +Let's update the repositories and upgrade the system with the newest packages
 +
 +<sxh bash>
 +yum check-update && yum update -y
 +</​sxh>​
 +
 +Now we need to install the packages that enable CentOS to work with iscsi
 +<sxh bash>
 +yum install iscsi-initiator-utils iscsi-initiator-utils-devel -y
 +</​sxh>​
 +
 +Now we need to add the iscsi services to be started at boot time.
 +<sxh bash>
 +chkconfig --add iscsi
 +chkconfig --add iscsid
 +</​sxh>​
 +
 +After adding the iscsi services to boot time we need to enable them.
 +<sxh bash>
 +chkconfig iscsi on
 +chkconfig iscsid on
 +</​sxh>​
 +
 +Now let's restart the service to be able to work with it.
 +<sxh bash>
 +/​etc/​init.d/​iscsi restart
 +</​sxh>​
 +
 +Let's check which LUNs are available to us on the NAS server
 +<sxh bash>
 +iscsiadm -m discovery -t st -p 10.101.0.25
 +10.101.0.25:​3260,​1 iqn.2013-01.br.com.douglasqsantos:​storage.lun1
 +10.101.0.25:​3260,​1 iqn.2013-01.br.com.douglasqsantos:​storage.lun0
 +</​sxh>​
 +
 +As we can see there are two LUNs available to connect to, but we need to use only the one that was created for us.
 +
 +We need to get the iqn of the CentOS client.
 +<sxh bash>
 +cat /​etc/​iscsi/​initiatorname.iscsi
 +InitiatorName=iqn.1994-05.com.redhat:​4a84d448b327
 +</​sxh>​
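 +
If you just need the bare IQN value (for example to paste into initiators.allow), it can be stripped out of that line. A small sketch using the sample value from the output above:

```shell
# strip the "InitiatorName=" prefix; in practice the line would be read
# from /etc/iscsi/initiatorname.iscsi
line='InitiatorName=iqn.1994-05.com.redhat:4a84d448b327'
echo "${line#InitiatorName=}"   # -> iqn.1994-05.com.redhat:4a84d448b327
```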
 +
 +Now that we have the iqn, which is the iscsi id of the client, we need to allow only the CentOS client to connect to it.
 +
 +On the NAS Server.
 +<sxh bash>
 +vim /​etc/​iet/​initiators.allow
 +[...]
 +iqn.2013-01.br.com.douglasqsantos:​storage.lun0 10.101.0.0/​24
 +iqn.2013-01.br.com.douglasqsantos:​storage.lun1 iqn.1994-05.com.redhat:​4a84d448b327
 +</​sxh>​
 +
 +As we can see there is an issue: lun0 will still be shown to the CentOS client, but lun1 won't be shown to the Debian client, so let's restart iscsitarget.
 +<sxh bash>
+/etc/init.d/iscsitarget restart
+</sxh>
 +
+On the Debian client let's check the iSCSI targets available.
 +<sxh bash>
 +iscsiadm -m discovery -t st -p 10.101.0.25
+10.101.0.25:3260,1 iqn.2013-01.br.com.douglasqsantos:storage.lun0
+</sxh>
 +
+As we can see only one LUN was shown, because we've created a rule that allows only the CentOS client to see the other one.
 +
+On the CentOS client we can see both LUN connections.
 +<sxh bash>
 +iscsiadm -m discovery -t st -p 10.101.0.25
+10.101.0.25:3260,1 iqn.2013-01.br.com.douglasqsantos:storage.lun0
+10.101.0.25:3260,1 iqn.2013-01.br.com.douglasqsantos:storage.lun1
+</sxh>
 +
+Let's get the IQN of the Debian client.
 +<sxh bash>
+tail -n 1 /etc/iscsi/initiatorname.iscsi
+InitiatorName=iqn.1993-08.org.debian:01:51fe411a118d
+</sxh>
 +
 +
+Now let's create another rule matching the IQN of the Debian client, so that only this IQN has access to lun0.
 +
 +
+On the NAS Server.
 +<sxh bash>
+vim /etc/iet/initiators.allow
+[...]
+iqn.2013-01.br.com.douglasqsantos:storage.lun0 iqn.1993-08.org.debian:01:51fe411a118d
+iqn.2013-01.br.com.douglasqsantos:storage.lun1 iqn.1994-05.com.redhat:4a84d448b327
+</sxh>
 +
 +
 +Let's restart the iscsitarget.
 +<sxh bash>
+/etc/init.d/iscsitarget restart
+</sxh>
 +
+On the CentOS client let's run the discovery again.
 +<sxh bash>
 +iscsiadm -m discovery -t st -p 10.101.0.25
+10.101.0.25:3260,1 iqn.2013-01.br.com.douglasqsantos:storage.lun1
+</sxh>
 +
+If you get an error like the one below.
 +<sxh bash>
 +iscsiadm -m discovery -t st -p 10.101.0.25
+iscsiadm: This command will remove the record [iface: default, target: iqn.2013-01.br.com.douglasqsantos:storage.lun0, portal: 10.101.0.25,3260], but a session is using it. Logout session then rerun command to remove record.
+10.101.0.25:3260,1 iqn.2013-01.br.com.douglasqsantos:storage.lun1
+</sxh>
 +
 +
+We need to log out first.
 +<sxh bash>
 +iscsiadm -m node -u
+</sxh>
 +
+Now we have control over which LUNs each client can see.
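+
+For reference, login and logout can also be done per target instead of for all recorded nodes; these are standard iscsiadm flags, shown here with the target and portal used in this how-to (the commands need a live target, so adjust them to your environment):
+<sxh bash>
+# Log out of a single target instead of all nodes
+iscsiadm -m node -T iqn.2013-01.br.com.douglasqsantos:storage.lun1 -p 10.101.0.25:3260 --logout
+# Log back in to that target only
+iscsiadm -m node -T iqn.2013-01.br.com.douglasqsantos:storage.lun1 -p 10.101.0.25:3260 --login
+</sxh>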
 +
 +
+Now, as we don't need to log in with a username and password for lun1, we can connect as follows.
 +<sxh bash>
+iscsiadm -m node -l -T iqn.2013-01.br.com.douglasqsantos:storage.lun1 -p 10.101.0.25:3260
+Logging in to [iface: default, target: iqn.2013-01.br.com.douglasqsantos:storage.lun1, portal: 10.101.0.25,3260] (multiple)
+Login to [iface: default, target: iqn.2013-01.br.com.douglasqsantos:storage.lun1, portal: 10.101.0.25,3260] successful.
+</sxh>
 +
 +
 +We can check the session with the NAS Server.
 +<sxh bash>
 +iscsiadm -m session -P 2
+Target: iqn.2013-01.br.com.douglasqsantos:storage.lun1
+    Current Portal: 10.101.0.25:3260,1
+    Persistent Portal: 10.101.0.25:3260,1
+        **********
+        Interface:
+        **********
+        Iface Name: default
+        Iface Transport: tcp
+        Iface Initiatorname: iqn.1994-05.com.redhat:bf9c07079f6
+        Iface IPaddress: 10.101.0.50
+        Iface HWaddress: <empty>
+        Iface Netdev: <empty>
+        SID: 1
+        iSCSI Connection State: LOGGED IN
+        iSCSI Session State: LOGGED_IN
+        Internal iscsid Session State: NO CHANGE
+        *********
+        Timeouts:
+        *********
+        Recovery Timeout: 120
+        Target Reset Timeout: 30
+        LUN Reset Timeout: 30
+        Abort Timeout: 15
+        *****
+        CHAP:
+        *****
+        username: <empty>
+        password: ********
+        username_in: <empty>
+        password_in: ********
+        ************************
+        Negotiated iSCSI params:
+        ************************
+        HeaderDigest: None
+        DataDigest: None
+        MaxRecvDataSegmentLength: 262144
+        MaxXmitDataSegmentLength: 8192
+        FirstBurstLength: 65536
+        MaxBurstLength: 262144
+        ImmediateData: Yes
+        InitialR2T: Yes
+        MaxOutstandingR2T: 1
+</sxh>
 +
 +
+As we can see there is a connection in the LOGGED IN state.
 +
+We can see the new device in the dmesg output.
 +<sxh bash>
 +scsi7 : iSCSI Initiator over TCP/IP
+scsi 7:0:0:1: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
+sd 7:0:0:1: Attached scsi generic sg2 type 0
+sd 7:0:0:1: [sdc] 4194304 512-byte logical blocks: (7.51 GB/7.00 GiB)
+sd 7:0:0:1: [sdc] Write Protect is off
+sd 7:0:0:1: [sdc] Mode Sense: 77 00 00 08
+sd 7:0:0:1: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
+ sdc: unknown partition table
+sd 7:0:0:1: [sdc] Attached SCSI disk
+</sxh>
 +
+As we can see there is now a device called sdc available via iSCSI, so let's partition it.
 +<sxh bash>
 +fdisk /dev/sdc
 +
+WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
+         switch off the mode (command 'c') and change display units to
+         sectors (command 'u').
+
+Command (m for help): n
+Command action
+   e   extended
+   p   primary partition (1-4)
+p
+Partition number (1-4): 1
+First cylinder (1-1020, default 1): #ENTER
+Using default value 1
+Last cylinder, +cylinders or +size{K,M,G} (1-1020, default 1020): #ENTER
+Using default value 1020
+
+Command (m for help): w
+The partition table has been altered!
+
+Calling ioctl() to re-read partition table.
+Syncing disks.
+</sxh>
 +
+Above I've created a partition using the whole disk, so now we need to create a filesystem on it to be able to use the partition.
 +<sxh bash>
 +mkfs.ext4 -L ISCSI -m 0 /dev/sdc1
 +mke2fs 1.41.12 (17-May-2010)
 +Filesystem label=ISCSI
 +OS type: Linux
 +Block size=4096 (log=2)
 +Fragment size=4096 (log=2)
 +Stride=0 blocks, Stripe width=0 blocks
 +458752 inodes, 1833952 blocks
 +0 blocks (0.00%) reserved for the super user
 +First data block=0
 +Maximum filesystem blocks=1879048192
 +56 block groups
 +32768 blocks per group, 32768 fragments per group
 +8192 inodes per group
 +Superblock backups stored on blocks:
 +    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
 +
 +Writing inode tables: done
 +Creating journal (32768 blocks): done
 +Writing superblocks and filesystem accounting information:​ done
 +
 +This filesystem will be automatically checked every 28 mounts or
+180 days, whichever comes first.  Use tune2fs -c or -i to override.
+</sxh>
 +
+The -L flag sets the label of the filesystem, and -m 0 avoids reserving 5% of the disk space for the root user.
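+
+To get a feel for what -m 0 saves, we can redo the arithmetic with the numbers mkfs printed above (1833952 blocks of 4096 bytes; the default reservation is 5%):
+<sxh bash>
+blocks=1833952     # total blocks reported by mkfs above
+block_size=4096    # block size reported by mkfs above
+# Blocks the default -m 5 setting would have reserved for root
+reserved=$((blocks * 5 / 100))
+# Convert to MiB: with -m 0 this space stays available to users
+echo "$((reserved * block_size / 1024 / 1024)) MiB"
+</sxh>
+
+So on this volume the default reservation would have cost roughly 358 MiB.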
 +
 +
+Now let's create a new directory to mount the partition on
 +<sxh bash>
 +mkdir /iscsi
+</sxh>
 +
 +Now let's mount the new partition on /iscsi
 +<sxh bash>
 +mount /dev/sdc1 /iscsi/
+</sxh>
 +
+Now let's list the mounted filesystems.
 +<sxh bash>
 +df
+Filesystem   Type    Size  Used Avail Use% Mounted on
+/dev/sda5    ext4     47G  7.8G   37G  18% /
+tmpfs       tmpfs    3.9G  5.3M  3.9G   1% /dev/shm
+/dev/sda1    ext4    461M   81M  357M  19% /boot
+/dev/sda7    ext4    410G  232G  157G  60% /home
+/dev/sdb1    ext4    294G  191M  279G   1% /srv
+/dev/sdc1    ext4    6.9G  144M  6.8G   3% /iscsi
+</sxh>
 +
+As everything is ok, let's add the new partition to /etc/fstab so it is mounted at boot time.
 +<sxh bash>
 +echo "/​dev/​sdc1 /iscsi ext4 _netdev,​defaults,​noatime 0 0">>​ /etc/fstab
 +</​sxh>​
 +
+As the new partition is mounted over the network we need the _netdev flag, and to make I/O more efficient the noatime option disables access-time updates on files.
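+
+Before rebooting you may want to sanity-check that the line we appended actually carries the _netdev flag; a small sketch, assuming the exact entry added above:
+<sxh bash>
+# The fstab entry appended above
+line='/dev/sdc1 /iscsi ext4 _netdev,defaults,noatime 0 0'
+# The fourth field holds the mount options
+opts=$(echo "$line" | awk '{print $4}')
+case ",$opts," in
+  *,_netdev,*) echo "ok: _netdev present" ;;
+  *)           echo "missing _netdev" ;;
+esac
+</sxh>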
 +
+Now let's update the iSCSI client configuration.
 +<sxh apache>
+vim /etc/iscsi/iscsid.conf
 +[...]
 +node.startup = automatic
 +[...]
 +node.session.cmds_max = 1024
 +[...]
 +node.session.queue_depth = 128
 +[...]
 +node.session.iscsi.FastAbort = No
+</sxh>
 +
 +
+Let's reboot the client and make sure that everything is working properly.
 +<sxh bash>
 +reboot
+</sxh>
 +
 +Let's check the uptime of the client.
 +<sxh bash>
 +uptime
+ 15:26:35 up 0 min,  1 user,  load average: 0.00, 0.00, 0.00
+</sxh>
 +
+Let's list the mounted filesystems again.
 +<sxh bash>
 +df
+Filesystem   Type    Size  Used Avail Use% Mounted on
+/dev/sda5    ext4     47G  7.8G   37G  18% /
+tmpfs       tmpfs    3.9G  5.3M  3.9G   1% /dev/shm
+/dev/sda1    ext4    461M   81M  357M  19% /boot
+/dev/sda7    ext4    410G  232G  157G  60% /home
+/dev/sdb1    ext4    294G  191M  279G   1% /srv
+/dev/sdc1    ext4    6.9G  144M  6.8G   3% /iscsi
+</sxh>
 +
 +Now let's see the connection with the NAS server.
 +
 +On the NAS server.
 +<sxh bash>
+cat /proc/net/iet/session
+tid:2 name:iqn.2013-01.br.com.douglasqsantos:storage.lun1
+    sid:562949990973952 initiator:iqn.1994-05.com.redhat:43c7a0d2ad63
+        cid:0 ip:10.101.0.1 state:active hd:none dd:none
+tid:1 name:iqn.2013-01.br.com.douglasqsantos:storage.lun0
+    sid:281474997486080 initiator:iqn.1993-08.org.debian:01:fa98a8565dfe
+        cid:0 ip:10.101.0.26 state:active hd:none dd:none
+</sxh>
 +
 +Let's check the volumes
 +<sxh bash>
+cat /proc/net/iet/volume
+tid:2 name:iqn.2013-01.br.com.douglasqsantos:storage.lun1
+    lun:1 state:0 iotype:fileio iomode:wt blocks:4194304 blocksize:512 path:/dev/STORAGE/lun1
+tid:1 name:iqn.2013-01.br.com.douglasqsantos:storage.lun0
+    lun:0 state:0 iotype:fileio iomode:wt blocks:4194304 blocksize:512 path:/dev/STORAGE/lun0
+</sxh>
 +
+If you need to add more LUNs, just create another LV and add the corresponding entries to /etc/iet/ietd.conf as we did with the other ones.
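+
+As a sketch of that procedure, assuming the same STORAGE volume group and the naming used earlier in this how-to (lun2 and its size are hypothetical examples), the steps on the NAS server would look roughly like this:
+<sxh bash>
+# Create a new logical volume in the STORAGE VG (name and size are examples)
+lvcreate -L 7G -n lun2 STORAGE
+
+# Then append a matching stanza to /etc/iet/ietd.conf, e.g.:
+# Target iqn.2013-01.br.com.douglasqsantos:storage.lun2
+#     Lun 2 Path=/dev/STORAGE/lun2,Type=fileio
+
+# And restart the target so the new LUN is exported
+/etc/init.d/iscsitarget restart
+</sxh>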
 +
 +
 +====== References ======
 +
+  - [[http://pt.wikipedia.org/wiki/Network-Attached_Storage|http://pt.wikipedia.org/wiki/Network-Attached_Storage]]
+  - [[http://iscsitarget.sourceforge.net/|http://iscsitarget.sourceforge.net/]]
+  - [[http://troysunix.blogspot.com.br/2011/06/configuring-iscsi-targets-in-linux.html|http://troysunix.blogspot.com.br/2011/06/configuring-iscsi-targets-in-linux.html]]
+  - [[http://manpages.ubuntu.com/manpages/lucid/man5/ietd.conf.5.html|http://manpages.ubuntu.com/manpages/lucid/man5/ietd.conf.5.html]]
+  - [[https://wiki.debian.org/SAN/iSCSI/iscsitarget|https://wiki.debian.org/SAN/iSCSI/iscsitarget]]
+  - [[https://www.suse.com/documentation/sles10/book_sle_reference/data/sec_inst_system_iscsi_target.html|https://www.suse.com/documentation/sles10/book_sle_reference/data/sec_inst_system_iscsi_target.html]]
+  - [[http://linhost.info/2012/05/configure-ubuntu-to-serve-as-an-iscsi-target/|http://linhost.info/2012/05/configure-ubuntu-to-serve-as-an-iscsi-target/]]