Blog

Saturday, August 17, 2013

Configuring a Clustered NetApp Filer as an NFS Datastore for VMware ESXi Implementing Multiple VLANs, MTUs and IPs

On your NetApp filer you can easily configure multiple VLANs with differing MTUs on the same LACP-trunked 1GbE or 10GbE ports, with stacked IPs on the storage VLAN to assist with load balancing. In this example, network 10.0.0.0/24 (VLAN 10, MTU 1500) is just the regular network. Network 10.0.1.0/24 (VLAN 20, MTU 9000) is the NFS storage network. On your switch, create an LACP trunk to the filer's interfaces and then trunk VLANs 10 and 20. Your ESXi servers' storage network would also be on VLAN 20 and use the load balancing policy Route based on IP hash. On the switch you would create a static trunk (since ESXi 5 does not support LACP). The VMkernel port on the vSwitch would be untagged for the storage network. Here's /etc/rc:

hostname filer1
ifconfig e0a flowcontrol send
ifconfig e0b flowcontrol send
ifconfig e0c flowcontrol send
ifconfig e0d flowcontrol send
vif create lacp NETWORK -b ip e0a e0b e0c e0d
vlan create NETWORK 10 20
ifconfig NETWORK-10 `hostname`-NETWORK-10 netmask 255.255.255.0 mtusize 1500 -wins partner 10.0.0.51
ifconfig NETWORK-20 `hostname`-NETWORK-20 netmask 255.255.255.0 mtusize 9000 -wins partner 10.0.1.54
ifconfig NETWORK-20 alias `hostname`-NETWORK-20-ALIAS-1 netmask 255.255.255.0
ifconfig NETWORK-20 alias `hostname`-NETWORK-20-ALIAS-2 netmask 255.255.255.0
ifconfig NETWORK-20 alias `hostname`-NETWORK-20-ALIAS-3 netmask 255.255.255.0
route add default 10.0.0.1
routed on
options dns.enable on
options nis.enable off
savecore

Ensure /etc/hosts is populated correctly with the IPs of both toasters in the event of failover/failback:

127.0.0.1 localhost
10.0.0.50 filer1 filer1-NETWORK-10   
10.0.1.50 filer1-NETWORK-20
10.0.1.51 filer1-NETWORK-20-ALIAS-1
10.0.1.52 filer1-NETWORK-20-ALIAS-2
10.0.1.53 filer1-NETWORK-20-ALIAS-3
10.0.0.51 filer2 filer2-NETWORK-10
10.0.1.54 filer2-NETWORK-20
10.0.1.55 filer2-NETWORK-20-ALIAS-1
10.0.1.56 filer2-NETWORK-20-ALIAS-2
10.0.1.57 filer2-NETWORK-20-ALIAS-3

Ensure your VM exports (/etc/exports) are secured so that access is permitted only from the ESXi VMkernel port on the storage network of each ESXi host - in this case there are 3 ESXi hosts. Additionally, individual IPs don't need to be listed if an entire subnet requires rw and root access to the VM volumes:

/vol/root      -sec=sys,rw,anon=0,nosuid
/vol/root/home -sec=sys,rw,nosuid
/vol/downloads -sec=sys,rw,nosuid
/vol/vm00      -sec=sys,rw=10.0.1.10:10.0.1.11:10.0.1.12,root=10.0.1.10:10.0.1.11:10.0.1.12
/vol/vm01      -sec=sys,rw=10.0.1.10:10.0.1.11:10.0.1.12,root=10.0.1.10:10.0.1.11:10.0.1.12
/vol/vm02      -sec=sys,rw=10.0.1.10:10.0.1.11:10.0.1.12,root=10.0.1.10:10.0.1.11:10.0.1.12
/vol/vm03      -sec=sys,rw=10.0.1.10:10.0.1.11:10.0.1.12,root=10.0.1.10:10.0.1.11:10.0.1.12
/vol/iso       -sec=sys,rw=10.0.1.10:10.0.1.11:10.0.1.12,root=10.0.1.10:10.0.1.11:10.0.1.12
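If every host on the storage subnet should have identical access, the per-IP lists can be replaced with a subnet specification instead. A sketch only - the subnet below is this example's storage network:

```
/vol/vm00      -sec=sys,rw=10.0.1.0/24,root=10.0.1.0/24
```

This is less granular but easier to maintain as you add ESXi hosts to the cluster.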

This configuration would need to be made identically on filer1 and filer2, with the exception that on filer2 the hostname changes in /etc/rc.
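On the ESXi side, each host can then mount the volumes as NFS datastores. A sketch only - the datastore name and the filer address (one of the VLAN 20 names above, resolvable from the ESXi host) are this example's values:

```shell
# Mount /vol/vm00 from the filer's storage-VLAN address as datastore "vm00"
esxcfg-nas -a -o filer1-NETWORK-20 -s /vol/vm00 vm00

# List mounted NFS datastores to confirm
esxcfg-nas -l
```

Spreading different datastores across the filer's alias IPs lets the IP-hash load balancing policy use more than one physical link.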

Saturday, July 27, 2013

Passwordless root SSH Public Key Authentication on CentOS 6

It's often useful to be able to SSH to other machines without being prompted for a password. Additionally, if you are using tools such as Parallel SSH you will need to set up public key SSH authentication. Setting it up is relatively straightforward:

On the client machine (i.e. the one you are SSHing from) you will need to create an SSH RSA key. Run the following command - ensure you don't supply a passphrase:

[root@node01 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
c6:66:93:16:73:0b:bf:46:46:28:7d:a5:38:a3:4d:6d root@node01
The key's randomart image is:
+--[ RSA 2048]----+
|            .    |
|       . + o     |
|      . @ E      |
|       * & .     |
|      . S =      |
|       = + .     |
|          o      |
|         .       |
|                 |
+-----------------+

This will generate the following files:

[root@node01 ~]# cd ~/.ssh
[root@node01 .ssh]# ls -l
total 8
-rw-------. 1 root root 1675 Jul 27 15:01 id_rsa
-rw-r--r--. 1 root root  406 Jul 27 15:01 id_rsa.pub

On the client machine tighten up file system permissions thus:

[root@node01 ~]# chmod 700 ~/.ssh
[root@node01 ~]# chmod 600 ~/.ssh/*
[root@node01 ~]# ls -ld ~/.ssh && ls -l ~/.ssh
drwx------. 2 root root 4096 Jul 27 15:01 /root/.ssh
-rw-------. 1 root root 1675 Jul 27 15:01 id_rsa
-rw-------. 1 root root  406 Jul 27 15:01 id_rsa.pub

Now copy the public key to the machine you want to SSH to and fix permissions (you will be prompted for the root password):

[root@node01 ~]# ssh root@node02 'mkdir -p /root/.ssh'
[root@node01 ~]# scp /root/.ssh/id_rsa.pub root@node02:/root/.ssh/authorized_keys
[root@node01 ~]# ssh root@node02 'chmod 700 /root/.ssh'
[root@node01 ~]# ssh root@node02 'chmod 600 /root/.ssh/*'

You can also use the ssh-copy-id utility to perform the above steps. If you don't have scp on the remote machine you will need to install it:

[root@node01 ~]# ssh root@node02 'yum install openssh-clients'
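The ssh-copy-id shortcut mentioned above replaces the manual scp and chmod steps. A sketch, using this example's hostnames:

```shell
# Append the local public key to root's authorized_keys on node02
# (prompts for the root password once; creates ~/.ssh and sets permissions)
ssh-copy-id -i /root/.ssh/id_rsa.pub root@node02
```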

You should now be able to SSH directly from node01 to node02 without providing a password:

[root@node01 ~]# ssh node02
Last login: Wed Jul 27 15:41:56 2011 from 10.255.5.57
[root@node02 ~]#

IMPORTANT: There is a bug in CentOS 6 / SELinux that causes all client-presented public keys to be ignored when SELinux is set to Enforcing. To fix this simply run:

[root@node01 ~]# ssh root@node02 'restorecon -R -v /root/.ssh'
restorecon reset /root/.ssh context system_u:object_r:ssh_home_t:s0->system_u:object_r:home_ssh_t:s0
restorecon reset /root/.ssh/authorized_keys context unconfined_u:object_r:ssh_home_t:s0->system_u:object_r:home_ssh_t:s0

Sunday, July 21, 2013

Fortinet Fortigate 300C Active Directory Integration

We recently had to install a Fortinet Fortigate 300C cluster. You may wish to integrate your firewall cluster into Active Directory to facilitate AD based administrative and VPN logins. This guide is based on FortiOS v4.0 MR3 Patch 8 (v4.0,build0632,120705 (MR3 Patch 8)).

Configure DNS

The first thing to do is ensure your Fortigate's DNS is configured to point to your Active Directory servers. Go to Network -> DNS and set the DNS servers accordingly.

Configure LDAP

Then you need to configure LDAP. So go to User -> Remote -> LDAP and create a new LDAP entry. You will need to create an LDAP entry for each domain controller.

Windows Server uses sAMAccountName as the Common Name (CN) Identifier. Your Distinguished Name is typically your top level AD DN. You need to do a Regular bind to AD, and as a result you will need to specify a user that has access to AD to make queries. In this case the user LDAPBindFortinet was created explicitly with a non-expiring password. The User DN is CN=LDAPBindFortinet,OU=Services,OU=FireDaemon,DC=firedaemon,DC=int. Make sure you test connectivity and that you can successfully browse the directory. If you are having trouble divining CNs and DNs try browsing your directory with Softerra's LDAP Administrator.
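The same LDAP entry can also be created from the FortiOS CLI. A sketch only - the entry name, server IP and bind password are hypothetical placeholders; the DNs are the ones used above:

```
config user ldap
    edit "AD-DC1"
        set server "10.0.0.10"
        set cnid "sAMAccountName"
        set dn "DC=firedaemon,DC=int"
        set type regular
        set username "CN=LDAPBindFortinet,OU=Services,OU=FireDaemon,DC=firedaemon,DC=int"
        set password <bind-password>
    next
end
```

Repeat for each domain controller, changing the entry name and server address.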

Configure User Group

You will now need to create a remote authentication user group. So go to User -> User Group -> User Group. Name it appropriately, then add your two Active Directory servers. Your users will ideally need to be in a group to permit firewall or VPN access. In this example, the group the users are in is: CN=FortinetUsers,OU=Groups,OU=FireDaemon,DC=firedaemon,DC=int. You can obtain this DN by browsing a user and looking at its MemberOf attribute.

Add Remote Users

Lastly, you will need to add remote users (in this case for firewall configuration). So go to System -> Admin -> Administrators and add remote users.

You should now be able to log in to your Fortigate as a domain user.

Friday, June 28, 2013

Arista Networks switch easter egg

If you have an Arista Networks switch, here's a little easter egg that you may find amusing: at the command prompt, type show chickens and enjoy the output.

Saturday, June 01, 2013

Setting up DHCP on an Enslaved VLAN Bridge on CentOS Linux

I had to set up a single interface on a server with dual DHCP IP addresses: one obtained on the native untagged interface and one on a tagged interface enslaved to a VLAN bridge, in order to roll out Enomaly SpotCloud. Thus the primary interface obtains its IP address via DHCP, along with the bridged interface on a VLAN. To set this up:

1. cd /etc/sysconfig/network-scripts

2. vi ifcfg-eth0 so it looks like the following (change the MAC address accordingly):

DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
HWADDR=f4:ce:46:82:55:f4 

3. Then create your VLAN interface configuration. So vi ifcfg-eth0.1051:

DEVICE=eth0.1051
BOOTPROTO=dhcp
VLAN=yes
BRIDGE=virbr0
ONBOOT=yes

4. Then create your bridge interface configuration. So vi ifcfg-virbr0:

DEVICE=virbr0
TYPE=Bridge
ONBOOT=yes
DELAY=0
BOOTPROTO=dhcp

Note that TYPE must be Bridge with a capital B - otherwise it won't work. And there you have it - when the box boots it gets a DHCP lease on eth0 and on virbr0 which is on VLAN 1051.
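The steps above can be applied and verified without a reboot. A sketch, assuming brctl from the bridge-utils package is installed (the 8021q module is normally loaded automatically when the VLAN interface comes up):

```shell
service network restart

# Confirm eth0.1051 is enslaved to the bridge
brctl show virbr0

# Confirm the VLAN-to-interface mapping
cat /proc/net/vlan/config

# Check that both eth0 and virbr0 obtained DHCP leases
ip addr
```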

Friday, May 10, 2013

Configuring iSCSI on CentOS 5.6

I recently had to load CentOS 5.6 on several HP BL2x220C blade servers to run Enomaly SpotCloud. One of the requirements was to provision disk for KVM virtual machine storage. This could be local disk or optionally iSCSI disk. The following describes the steps I went through to configure iSCSI successfully.

1. You will need to configure your storage system. I was using an HDS HNAS Mercury cluster. The configuration of the HNAS is probably beyond the scope of this post, but in essence you need to create a File System of your required size, then assign that File System to an EVS (Hitachi terminology for a virtual storage system) with an assigned cluster node and an IP address on the storage VLAN. You then need to create iSCSI Logical Units within the File System - one LUN will be required for each host. Lastly, create iSCSI targets within the EVS iSCSI domain, with access permitted only from the host that will use each target, along with the LUN ID and LUN name. You will end up with a series of globally unique iSCSI Qualified Names (IQNs), each backing a LUN of a fixed size (eg. 500GB) that is only accessible from a single host: iqn.2011-04.spotcloud:sc-evs-iscsi01.sc-target01.

2. Back to the CentOS side of things - make sure your interfaces are configured correctly and you can ping the storage system. I have two Virtual Connect modules in the HP C7000 enclosure - hence two interfaces were available. Static IPs were used on the storage network. I edited:

/etc/sysconfig/network-scripts/ifcfg-eth0
/etc/sysconfig/network-scripts/ifcfg-eth1
/etc/sysconfig/network

3. Make sure the iSCSI initiator tools are installed. You can do this via yum or from the original source media. Via yum:

yum install iscsi-initiator-utils

Via virtual media:

mount /dev/cdrom /mnt
cd /mnt/CentOS
rpm -ivh iscsi*
cd /
umount /mnt

Don't forget to eject the virtual media.

4. Make sure iSCSI starts on boot and start the daemon:

chkconfig iscsi on
service iscsi start

5. Discover your iSCSI targets:

iscsiadm -m discovery -t sendtargets -p 10.255.4.10

The IP address is that of the storage system.

6. Delete any unnecessary iSCSI nodes:

service iscsi stop
iscsiadm -m node <nodename> -o delete
service iscsi start

The <nodename> is the IQN mentioned earlier. You may discover multiple nodes - in that case, configure the storage system to filter available LUNs by client source IP address.
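Once only the intended node remains, you can log in and confirm the session before looking for the device. A sketch using standard open-iscsi commands:

```shell
# Log in to all discovered nodes (service iscsi start also does this)
iscsiadm -m node --login

# Verify the active session and its target IQN
iscsiadm -m session
```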

7. Work out which device is the iSCSI node:

fdisk -l

8. Create a partition then format it:

fdisk /dev/sdb
mkfs.ext4 /dev/sdb1

9. Label the device:

e2label /dev/sdb1 /sc-node01

10. Configure the mount in /etc/fstab (note the _netdev mount option to ensure the iSCSI LUN is mounted after networking has been brought up):

LABEL=/sc-node01 /var/lib/xen/images ext4 defaults,_netdev,noatime 0 0
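With the fstab entry in place, the mount can be tested without a reboot. A sketch, assuming the mount point may not yet exist:

```shell
mkdir -p /var/lib/xen/images
mount -a
df -h /var/lib/xen/images
```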

And that's it - you are in business. Lastly, if you are interested, here is the Virtual Connect configuration used to configure the blades. This configures blades 1A and 1B, interfaces 1 and 2. Interface 1 is assigned untagged VLAN 1050 (eth0) and tagged VLAN 1051 (eth0.1051). Interface 2 is assigned untagged VLAN 1052 (eth1), which is the storage network.

add profile D4-C2-B01 -NoDefaultEnetConn -NoDefaultFcConn -NoDefaultFcoeConn
add enet-connection D4-C2-B01
add enet-connection D4-C2-B01
add server-port-map D4-C2-B01:1 SC-Management VlanID=1050 Untagged=True
add server-port-map D4-C2-B01:1 SC-VM VlanID=1051
add server-port-map D4-C2-B01:2 SC-iSCSI VlanID=1052 Untagged=True
assign profile D4-C2-B01 enc0:1A

add profile D4-C2-B02 -NoDefaultEnetConn -NoDefaultFcConn -NoDefaultFcoeConn
add enet-connection D4-C2-B02
add enet-connection D4-C2-B02
add server-port-map D4-C2-B02:1 SC-Management VlanID=1050 Untagged=True
add server-port-map D4-C2-B02:1 SC-VM VlanID=1051
add server-port-map D4-C2-B02:2 SC-iSCSI VlanID=1052 Untagged=True
assign profile D4-C2-B02 enc0:1B

Tuesday, March 05, 2013

FireDaemon Service does not run and Process ID changes every few seconds

If the Process ID of your FireDaemon service is changing rapidly, it's probably because the application is crashing, not starting correctly or terminating. Generally this kind of problem can be a pain to troubleshoot, but there are a few things you can do to fix it:

  1. Check the Windows Event Logs; they usually reveal exactly what's happening.
  2. Try running your service as the user you installed the application as. This user should be a local or domain administrator. To change the service's user credentials set them in the Login section in the Settings tab: /manual/SettingsTab.html
  3. The local file system permissions might be wrong, see http://forums.firedaemon.com/threads/system-permission-on-local-drives.648/ for more information.
  4. If the executable is on a mapped drive or UNC path, your path might be in the wrong format, see http://forums.firedaemon.com/threads/how-do-i-use-mapped-drives-and-or-unc-paths.38/ for more information.
  5. Are you remotely connected via RDP?  Make sure the "Shadow Console" is enabled.  See
    http://forums.firedaemon.com/threads/accessing-the-shadow-console-via-remote-desktop-rdp-using-mstsc-admin-or-console.397/ for more information.
  6. If all else fails, enable Debug Logging in the FireDaemon service, let the service run a few times, then look at the debug log to see what's happening. If you don't understand it, you can raise a support ticket and attach the debug log.

Saturday, January 26, 2013

Application doesn't launch under FireDaemon

FireDaemon services are often run off other local drives, eg. E:, F:, etc. These drives could be a new local disk array, iSCSI targets or SAN LUNs. If you find your app is not launching under FireDaemon control, ensure that the drive's Security permissions include SYSTEM / Full Control. You need to check this because when you add a new drive to a machine and format it with NTFS, this permission is not set automatically. To check this:

  1. Go to My Computer and look for the local drive you want to check.
  2. Right click on the local drive and select Properties.
  3. Click on the Security tab
  4. In the list of "Group or user names" look for SYSTEM. If it is not there click Edit
  5. A new dialog box will be displayed titled "Permissions for E:"
  6. Click Add
  7. A new dialog box will be displayed titled "Select Users or Groups"
  8. In the "Enter the object names to select" type SYSTEM and click the Check Names button.
  9. Click OK
  10. Then in the "Permissions for E:" dialog check Full Control
  11. Then click OK twice.
Your FireDaemon apps should launch correctly.
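The same grant can also be scripted from an elevated command prompt. A sketch for Windows Vista/Server 2008 and later, where icacls is available; substitute your own drive letter:

```
rem Grant SYSTEM full control on E:, inherited by all folders (OI) and files (CI)
icacls E:\ /grant "SYSTEM:(OI)(CI)F"
```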

Monday, December 31, 2012

Application Window is not visible when logged into remote desktop

When you log into a computer remotely, by default you are only seeing the desktop of the user that you logged in as. Interactive services (including FireDaemon ones) are only visible on the shadow console session or on session 0. This is covered in the following article:

http://forums.firedaemon.com/threads/accessing-the-shadow-console-via-remote-desktop-rdp-using-mstsc-admin-or-console.397/

Tuesday, December 18, 2012

Are administrative rights necessary to run FireDaemon?

The FireDaemon Pro GUI must be run as an administrator to function correctly on Windows XP and Windows Server 2003. On Windows Vista, 2008 and 7 the GUI elevates correctly, so the user does not need to be an administrator. Services can be run as any user; however, the privileges of that user will determine whether the service can interact with the desktop, access network resources and so forth. As a rule of thumb, services should be run as LocalSystem (the default). If network access is required then generally run your service as an administrator. FireDaemon will automatically grant user accounts the "Log on as a service" right.

 

