Introduction
Many NetScalers are managed by server admins and/or security people who do not have extensive networking experience. This topic introduces important networking concepts to help you configure NetScalers successfully. Most of the following concepts apply to all networks, but this topic takes a NetScaler perspective.
The content is intended to be introductory only. Search Google for more detail on each topic.
Request-Response
Request-Response Overview
Request/Response – fundamentally, a Client sends a Request to a Server. The Server processes the Request and sends back either a successful Response or an Error. Request-Response describes almost all client-server networking.
Clients send Requests – For NetScaler, Clients are usually web browsers. But it can be any client-side program that requests something from a server.
Servers Respond to Requests – For NetScaler, Servers are usually web servers. These machines receive HTTP requests from clients, perform the HTTP Method (command) contained in the request, and send back the response.
What’s in a Request?
Requests are sent to Web Servers using the HTTP protocol – Web Browsers use the HTTP protocol to send Requests to Web Servers. Web Servers use the HTTP protocol to send Responses back to Web Browsers.
Protocol – A protocol defines a vocabulary for how machines communicate with each other. Since web browsers and web servers use the same protocol, they can understand each other.
HTTP is an OSI Layer 7 protocol – HTTP is defined by the OSI Model as a Layer 7, or application layer, protocol. Layer 7 protocols run on top of (encapsulated in) other lower layer protocols, as detailed later.
HTTP Request Commands – HTTP Requests contain commands for the web server. The web server is intended to carry out the requested command. In the HTTP Protocol, Request Commands are also known as Request Methods.
HTTP GET Method – The most common Command in an HTTP Request is GET. This Command asks the web server to send back a file. In other words, web servers are essentially nothing more than file servers.
- Additional HTTP Request Commands/Methods beyond GET will be detailed in Part 2.
HTTP Path – attached to the GET Command is the path to the requested file. Web servers can host thousands of files, so the client needs some method of requesting a particular file. In HTTP, the format is something like /directory/directory/file.html. On NetScaler, you can access the HTTP path in a policy expression by entering HTTP.REQ.URL.PATH. A sketch of a raw GET request follows the next bullet.
- More info on URLs will be provided later in Part 2.
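As a rough illustration, the sketch below (Python, standard library only) sends the kind of GET request described above and prints the first part of the response. The host example.com and the path /index.html are placeholder values, not anything from this article.

```python
# Minimal sketch of the request line a client sends for an HTTP GET.
# example.com and /index.html are placeholder values for illustration only.
import socket

host = "example.com"
path = "/index.html"

# Open a TCP connection to the web server's port 80.
with socket.create_connection((host, 80), timeout=5) as sock:
    # The first line of the Request is the Method, the Path, and the HTTP version.
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    sock.sendall(request.encode("ascii"))
    # The first line of the Response contains the HTTP version and a status code.
    print(sock.recv(200).decode("ascii", errors="replace"))
```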
Addresses Overview
Unique addresses – Every machine (including clients and servers) has at least one address. Addresses are unique across the whole Internet; only one machine can own a particular address. If you have two machines with the same address, which machine receives the Request or Response?
Requests are sent to a Destination Address – when the client sends a request to a web server, it sends it to the server’s address. This is similar to email: you enter the address of the recipient. The server’s address is put in the Destination Address field of the Request Packet.
- Requests are placed on a Network, which delivers them to the destination – The client puts the Request Packet on the network. The network uses the Destination Address in the packet to get the packet to the web server. This process is detailed later.
Web Servers reply to the Source Address – when the Request Packet is put on the network, the client machine inserts its own address as the Source Address. The web server receives the Request and performs its processing. The web server then needs to send the Response back to the Client. It extracts the Source Address from the Request Packet, and puts that in the Destination Address of the Response Packet. If the original Source Address is wrong or missing, then the response will never make it back to the client.
- Sometimes, Requests get to Servers successfully, but Responses fail to come back – If you don’t receive a Response to your Request, then either the Request didn’t make it to the Server, or the Response never made it from the Server back to the Client. The key point is that there are two communication paths: the first is from Client to Server, and the second is from Server to Client. Either one of those paths could fail.
Numeric-based addresses – All network addresses are ultimately numeric, because that’s the language that machines understand. Network packets contain Source and Destination addresses in numeric form. Routers and other networking equipment read the numeric addresses, perform a table lookup to find the next hop to reach the destination, and quickly place the packet on the next interface to reach the destination. This is much quicker if addresses are numbers instead of words.
- IP Addresses are one type of address – Different OSI layers have different addresses. Layer 3 IP Addresses are how the network (Internet) gets the packet from the Source to the Destination and back again. Clients and Servers have unique IP Addresses. Layer 3 networking will be detailed later.
- IP Address format – Each IP address is four numbers separated by three periods (e.g. 216.58.194.132). Each of the four numbers must be in the range from 0 to 255. Most network training guides cover IP addressing in excruciating detail so I won’t repeat it here.
Human-readable addresses – When a human enters the destination address of a Web Server, humans much prefer to enter words instead of numbers. So there needs to be a method to convert word-based addresses into numeric-based addresses. This method is called DNS (Domain Name System), which will be detailed later.
Web Servers and File Transfer
Web Servers are File Servers – essentially, Web Servers are not much more than file servers. A Web Client requests the Web Server to send it a file.
Web Clients use the HTTP Protocol to download files from a Web Server.
Web Clients are responsible for doing something meaningful with the files downloaded from Web Servers – The files downloaded from a Web Server can be: displayed to the user, processed by a program, or stored.
- Web Browsers – Web Browsers are a type of Web Client that usually want to display the files that are downloaded from Web Servers. If the file contains HTML tags, then the Browser will render the HTML tags and display them to the user.
- API Web Clients – Web Clients can use an HTTP-based API to download data files from a web server. These data files are typically processed by a client-side script or program, and aren’t displayed directly to the user.
- Downloaders – some Web Clients are simply Downloaders, meaning all they do is use HTTP to download files and store them on the hard drive. Later, the user can do something with those downloaded files.
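A minimal "Downloader" Web Client might look like the following sketch (Python standard library). The URL and the saved filename are placeholders.

```python
# A bare-bones Downloader: fetch a file over HTTP and store it on disk.
# The URL and the filename are placeholder values for illustration.
import urllib.request

url = "http://example.com/report.pdf"

with urllib.request.urlopen(url, timeout=10) as response:
    data = response.read()          # the HTTP Body of the Response

with open("report.pdf", "wb") as f:
    f.write(data)                   # the user can do something with the file later
```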
Web Server and Web Client Scripting
Web Server Script Processing – web servers can do more than just file serving: they can also run server-side scripts that dynamically modify the files before the files are downloaded to the web client.
Web Server Script Languages – different web server programs support different server-side script languages. These server-side script languages include: Java, ASP.NET, Ruby, PHP, Node.js, etc.
- Web Server Data – Server-side scripts use data to dynamically modify the HTML pages. The data can be retrieved from a database, or it can be provided by the Web Client.
Web Browser Scripting – all Web Browsers use JavaScript for client-side scripting. Client-side scripts add animations and other dynamic features to web pages.
- Web Browser Plug-ins – Additional client-side scripting languages can be added to web browsers by installing plug-ins, like Flash and Java. But today these plug-ins are increasingly rare because JavaScript can do almost everything that Flash and Java can do.
Other Client-side programs – Non-browser Client-side programs can use any language (including PowerShell) that supports sending HTTP Requests and processing the HTTP Responses.
Server Services and Server Port Numbers
Web Server Software – there are many web server programs like IIS, Apache, NGINX, Express (Node.js), WebLogic, etc. Some are built into the operating system (e.g. IIS is built into Windows Server); others must be downloaded and installed.
Web Server Software runs as a Service – The Web Server Software installation process creates a Service (or UNIX/Linux Daemon) that launches automatically every time the Server reboots. Services can be stopped and restarted. Server admins should be familiar with Server Services.
Servers can run multiple Services at the same time – A single Server can run many Services at the same time: an Email Server Service, an FTP Server Service, an SSH Server Service, a Web Server Service, etc. There needs to be some way for the Client to tell the Server that the Request is intended for the Web Server Service and not the SSH Service.
Services listen on a Port Number – when the Web Server Service starts, it begins listening for requests on a particular Port Number (typically port 80 for unencrypted HTTP traffic, and port 443 for encrypted SSL/TLS traffic). Other Services listen on different port numbers. It’s not possible for two Services to listen on the same port number.
Clients send packets to a Destination Port Number – When a Client wants to send an HTTP Request to a Web Server Service, it needs to add the Destination Port Number to the packet. If you open a browser and type a DNS name into the browser’s address bar, by default, the browser will send the packet to Port 80, which is usually the port number that Web Server Services are listening on.
Client Programs and Client Ports
Multiple Client Programs – multiple programs can be running at the same time on a single Client; for example: Outlook, Internet Explorer, Chrome, Slack, etc. When the Response is sent from the Server back to the Client, which client-side program should receive the Response?
Client Ephemeral Ports – whenever a client program sends a request to a Server, the operating system assigns a random port number between 1024 and 65535 to the client process. The range of ephemeral port numbers varies for different client operating systems.
Servers send Response to Client Ephemeral Port – The Server sends the Response to the Client’s Ephemeral Port, also known as the Source Port. This is how the client’s operating system matches the Response with the client program that initiated the Request.
Source Port in Network Packet – In order for a Server to know the Client’s Ephemeral Port, the Source Port number must be included in the Request packet.
Each Client Request can use a different Ephemeral Port – A client program can send multiple requests to multiple server machines, and each of these outstanding requests usually has a unique Client Ephemeral port.
Summary of the Network Packet Fields discussed so far – In order for Packets to reach the Server Service and return to the Client Program, every network packet must contain the following fields:
- Destination Address – the Server’s IP address
- Destination Port – port 80 for Web Server Services
- Source Address – the Client’s IP address
- Source Port – the ephemeral port assigned by the operating system
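The sketch below shows those four fields from a client’s point of view: once a socket is connected, the operating system has already assigned the source (ephemeral) port. The destination example.com:80 is a placeholder.

```python
# Illustration of the four packet fields listed above, as seen by a client socket.
# example.com is a placeholder destination.
import socket

with socket.create_connection(("example.com", 80)) as sock:
    src_ip, src_port = sock.getsockname()   # Source Address and Source (ephemeral) Port
    dst_ip, dst_port = sock.getpeername()   # Destination Address and Destination Port
    print(f"Source      {src_ip}:{src_port}")
    print(f"Destination {dst_ip}:{dst_port}")
```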
Sessions
Sessions Overview
Sessions and the OSI network model – A session is a longer-lived connection between two endpoints. Each layer of the OSI network model has a different conception of sessions. (Note: the OSI model is detailed in every network training book/video/class).
- Layer 4 – TCP Connection
- Layer 6 – SSL/TLS Session
- Layer 7 – HTTP Session
- Web Server Session – doesn’t map to the OSI Model. Think “shopping carts”
Higher layer sessions require lower layer sessions – Sessions at higher layers require Sessions (or Connections) at lower layers to be established first. For example, HTTP Requests can’t be sent unless a TCP Connection is established first.
A single Server Service can handle Requests from multiple Clients at the same time – When a Client connects to a Service’s Port Number, the Server Service creates a session for the Client IP and Client Port Number. Each combination of Client IP and Client Port Number is a different session.
Session duration – Sessions at higher layers can live beyond a single lower layer session. For example, a Web Server Session might exist for days, while each HTTP Request might only live for a few seconds.
Session multiplexing – A single lower layer session might be used for multiple higher level sessions. For example, NetScaler appliances multiplex multiple HTTP Requests onto a single TCP Connection.
Network Sessions
Application Data can exceed the maximum size of a Network Packet – Requests and Responses (especially responses) can be too big for a single packet. Thus the Request and/or Response must be broken up into multiple packets. These multiple packets must then be reassembled after they arrive at the destination.
Packets can arrive out of Order – when a Request or Response is broken into multiple packets, the destination needs to reassemble the packets in the correct order. Each packet contains a Sequence Number. The first packet might have Sequence Number 1, while the second packet might have Sequence Number 2, etc. These Sequence Numbers are used to reassemble the packet in the correct order.
Packet Loss – Some packets might not make it to the destination machine. TCP uses the Sequence Numbers to determine if it received all of the packets from the source machine. If one of the sequences is missing, then TCP asks the source to resend the packet. Packet resend is also known as retransmission.
TCP and UDP Overview – there are two Layer 4 Session protocols – TCP, and UDP. TCP handles many of the Network Session services (reassembly, retransmission, etc.) mentioned above. UDP does not do any of these services, and instead requires higher layer protocols to handle them.
TCP Port Numbers and UDP Port Numbers are different – Each Layer 4 protocol has its own set of port numbers. TCP port numbers are different from UDP port numbers. A Server Service listening on TCP 80 is not necessarily listening on UDP 80. When talking about port numbers, you must indicate whether the port number is TCP or UDP, especially when asking firewall teams to open ports. Most of the common ports are TCP, but some protocols (e.g. voice) use UDP.
TCP Protocol (Layer 4)
TCP Three-way handshake – Before two machines can communicate using TCP, a three-way handshake must be performed:
- The TCP Client initiates the TCP connection by sending a TCP SYN packet (connection request) to the Server Service Port Number.
- The Server creates a TCP Session in its memory, and sends a SYN+ACK packet (acknowledgement) back to the TCP client.
- The TCP Client receives the SYN+ACK packet and then sends an ACK back to the TCP Server, which finishes the establishment of the TCP connection. HTTP Requests and Responses can now be sent across this established TCP connection.
TCP Connections are established between Port Numbers – The TCP Connection is established between the Client’s TCP Port (ephemeral port), and the Server’s TCP Port (e.g. port 80 for web servers).
Multiple Clients to one Server Port – A single Server TCP Port can have many TCP Connections with many clients. Each combination of Client Port/Client IP with the Server Port is considered a separate TCP Connection. You can view these TCP Connections by running netstat on the server.
- Netstat shows Layer 4 only – netstat command shows TCP connections (Layer 4) only, not HTTP Requests (Layer 7).
HTTP requires a TCP Connection to be established first – When an HTTP Client wants to send an HTTP Request to a web server, a TCP Connection must be established first. The Client and the Server do the three-way TCP handshake on TCP Port 80. Then the HTTP Request and HTTP Response are sent over this TCP connection. HTTP is a Layer 7 protocol, while TCP is a Layer 4 protocol. Higher layer protocols run on top of lower layer protocols. It is impossible to send a Layer 7 Request (HTTP Request) without first establishing a Layer 4 session/connection.
Use Telnet to verify that a Service is listening on a TCP Port number – when you telnet to a server machine on a particular port number, you are essentially completing the three-way TCP handshake with a particular Server Service. This is an easy method to determine if a Server machine has a Service listening on a particular port number, and that you’re able to communicate with that port number.
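If telnet isn’t handy, the same check can be scripted. The sketch below (Python, standard library) simply attempts the three-way handshake and reports whether it completed; the host and ports are placeholders.

```python
# A telnet-style check: can we complete the TCP three-way handshake on a given port?
# The host and ports below are placeholder values.
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        # create_connection succeeds only if the handshake completes,
        # i.e. a Service is listening and reachable on that TCP port.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or blocked by a firewall.
        return False

print(tcp_port_open("example.com", 80))
print(tcp_port_open("example.com", 81))
```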
UDP Protocol (Layer 4)
UDP is Sessionless – UDP is a much simpler protocol than TCP. For example, there’s no three-way handshake like TCP. Since there’s no handshake, there’s no UDP session.
No Sequence Numbers – UDP packets do not contain sequence numbers. If a packet is lost, UDP does not request a resend like TCP does. If packets arrive out of order, UDP cannot detect this, and cannot reassemble them in the correct order. If these features are desirable, then the application (Layer 7) needs to implement them instead of relying on UDP.
Why UDP over TCP? – TCP session information is carried in every TCP packet, making the TCP header (20 bytes) bigger than the UDP header (8 bytes). Also, TCP retransmissions are not as fast or efficient as other methods of recovering from packet loss. For example, Citrix has recently reconfigured HDX/ICA so it can use UDP instead of TCP. They did this by essentially creating their own version of TCP sessions. Citrix’s version of network sessions over UDP is more efficient (smaller packets, quicker recovery) than TCP’s version.
- Audio uses UDP – For audio traffic, there’s usually no point in resending lost packets. Getting rid of retransmissions makes UDP more efficient (less bandwidth, less latency) than TCP.
You can’t use Telnet to troubleshoot UDP – since there’s no three-way handshake in UDP, it’s impossible to use telnet to determine if a Server Service is listening on a UDP port or not. With UDP, the UDP Client machine sends a Request to the UDP Server. The UDP Server does not send any acknowledgment that it received the UDP Request. Thus all a UDP Client can do is wait for a response from the server. If the server doesn’t respond, it doesn’t necessarily mean that the server isn’t listening on the UDP port.
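The sketch below illustrates that limitation: the client fires a datagram and can only wait. The address 192.0.2.53 (a TEST-NET documentation address), the port, and the payload are all placeholders; a real service would expect a properly formatted request.

```python
# UDP has no handshake: the client just sends a datagram and hopes for a reply.
# 192.0.2.53, port 53, and the payload are placeholder values for illustration.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3.0)
sock.sendto(b"hello", ("192.0.2.53", 53))   # no connection, no acknowledgement

try:
    data, addr = sock.recvfrom(512)
    print("reply from", addr)
except socket.timeout:
    # No reply proves nothing: the service may be down, the port may be closed,
    # or the server may simply ignore this payload.
    print("no reply - cannot tell whether anything is listening")
finally:
    sock.close()
```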
You can’t use netstat to see UDP sessions – since there’s no UDP session, if you run netstat on a machine, you won’t see any UDP sessions. To really see UDP traffic, use a packet capture program like Wireshark.
Application Sessions – web servers typically have their own application session mechanism. Application sessions usually extend beyond a single TCP session to encompass multiple TCP sessions. Web Server sessions are detailed in Part 2.
HTTP Basics
HTTP Protocol Overview
URLs – users enter a URL into a browser’s address bar. An example URL is https://en.wikipedia.org/wiki/URL
- https:// or http:// – the first part of the URL specifies the Layer 7 protocol that the browser will use to connect to the web server.
- en.wikipedia.org – the second part of the URL is the human-readable DNS name that translates to the web server’s IP address.
- /wiki/URL – the remaining part of the URL is the Path and Query. The Path indicates the path to the file you want to download; the Query (everything after a ?) carries parameters for the server.
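As a quick illustration, Python’s standard urllib.parse module splits a URL into exactly these pieces. The query string in the example below is added purely for illustration; it is not part of the Wikipedia URL in the text.

```python
# Splitting a URL into the pieces described above, using Python's standard library.
from urllib.parse import urlparse

parts = urlparse("https://en.wikipedia.org/wiki/URL?printable=yes")

print(parts.scheme)   # 'https' - the Layer 7 protocol
print(parts.netloc)   # 'en.wikipedia.org' - the DNS name to resolve
print(parts.path)     # '/wiki/URL' - the Path of the requested file
print(parts.query)    # 'printable=yes' - the Query string (if any)
```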
Why forward slashes in URLs? – Web Server programs were originally developed for UNIX and Linux. Thus they share some of the Linux characteristics. For example, file paths in HTTP requests use forward slash (/) instead of backslash (\).
- Some URLs are case sensitive – Since UNIX/Linux is case sensitive, file paths in HTTP requests are sometimes case sensitive.
HTTP vs HTML – Web Browsers use the HTTP protocol to download HTML files from a web server. HTTP is the communication protocol to get a file. HTML defines how a web browser displays a web page (HTML file) to a user. There are many books and videos that explain HTML, but not many explain HTTP.
HTTP Packet
HTTP Request Command (Method) – at the top of every HTTP Request packet is the HTTP command. This command might be something like this: GET /Citrix/StoreWeb/login.aspx HTTP/1.1
HTTP Response Code – at the top of every HTTP Response is a code like this: HTTP/1.1 200 OK. Different codes mean success or error. Code 200 means success. You’ll need to memorize many of these codes.
Header and body – HTTP Packets are split into two sections: header, and body.
- HTTP Headers – Below the Request Command (Method), are a series of Headers. Web Browsers insert Headers into requests. Web Servers insert Headers into responses. Request Headers and Response Headers are totally different. You’ll need to memorize most of these Headers.
- HTTP Body – Below the Headers is the Body. Not every HTTP Packet has a Body. In an HTTP Response, the HTTP Body contains the actual downloaded file (e.g. HTML file). In an HTTP Request, the HTTP Body contains data (parameters) that is uploaded with the Request.
Raw HTTP packets – To view a raw HTTP packet, use your browser’s developer tools (F12 key), or use a proxy program like Fiddler. In a Browser’s Developer Tools, switch to the Network tab to see the HTTP Requests and HTTP Responses.
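Another way to see the same pieces is programmatically. The sketch below uses Python’s standard http.client module; example.com is a placeholder host.

```python
# Viewing the pieces of an HTTP exchange programmatically instead of with F12/Fiddler.
# example.com is a placeholder host.
import http.client

conn = http.client.HTTPConnection("example.com", 80, timeout=10)
conn.request("GET", "/")                  # Request line: Method + Path

resp = conn.getresponse()
print(resp.status, resp.reason)           # Response Code, e.g. 200 OK
for name, value in resp.getheaders():     # Response Headers
    print(f"{name}: {value}")

body = resp.read()                        # Response Body (the downloaded file)
conn.close()
```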
Multiple HTTP requests – A single webpage requires multiple HTTP requests. When an HTML file is processed by a Web Browser, the HTML file contains links to supporting files (CSS, JavaScript, images) that must be downloaded from the web server. Each of these file downloads is a separate HTTP Request.
- HTTP and TCP Connections – Every HTTP Request requires a TCP Connection to be established first. Older Web Servers tear down the TCP Connection after every single HTTP Request. This means that if a web page needs 20 downloads, then 20 TCP Connections, including 20 three-way handshakes, are required. Newer Web Servers keep the TCP Connection established for a period of time, allowing each of the 20 HTTP Requests to be sent across the existing TCP Connection.
HTTP Redirects – one HTTP Response Code that you must understand is the HTTP Redirect. These HTTP Response packets have response code 301 or 302. The HTTP Response Header named Location identifies the new URL that the browser is expected to navigate to. The flow is as follows (a code sketch follows this list):
- User’s Web Browser sends an HTTP Request to a Web Server
- Web Server sends back an HTTP Response with a 301/302 response code and Location header.
- User’s Web Browser navigates to the URL contained in the Location header. Note that the Web Server tells the Web Browser where to go. But it’s the browser that actually goes there.
- Redirect usage – Redirects are used extensively by web applications. Most web-based applications would not function without redirects. A common usage of Redirects is in authenticated websites where an unauthenticated user is redirected to a login page, and after login, the user is redirected back to the original webpage.
- Not all Web Clients support HTTP Redirects – Web Browsers certainly can perform a redirect. However, other Web Clients (e.g. Citrix Receiver) do not follow Redirects.
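The sketch below shows what a redirect looks like from the client side. It uses Python’s http.client, which does not follow redirects automatically, so the 301/302 code and the Location header remain visible. The host and path are placeholders; substitute any URL you know redirects.

```python
# Observing a redirect without following it. http.client does not follow redirects,
# so the 301/302 status and the Location header are visible to the caller.
# The host and path below are placeholder values.
import http.client

conn = http.client.HTTPConnection("example.com", 80, timeout=10)
conn.request("GET", "/old-page")

resp = conn.getresponse()
print(resp.status)                        # e.g. 301 or 302
print(resp.getheader("Location"))         # the URL the server wants the client to visit

conn.close()
```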
Additional HTTP concepts will be detailed in Part 2.
Networking
Layer 2 (Ethernet) and Layer 3 (Routing) Networking
Subnet – all machines connected to a single “wire” are considered to be on the same subnet. Machines on the subnet can communicate directly with other machines on the same subnet.
Routers – If two machines are on different subnets, then those two machines can only communicate with each other by using an intermediary device that is connected to both subnets. This intermediary device is called a router. The router is connected to both subnets (wires) and can take packets from one subnet and put them on the other subnet.
Layer 2 – When machines on the same subnet want to communicate with each other, they use a Layer 2 protocol, like Ethernet.
Layer 3 – When machines on different subnets want to communicate with each other, they use a Layer 3 protocol, like IP (Internet Protocol).
Local IP address vs remote IP address
Local vs Remote – since different protocols are used for intra-subnet (Layer 2) and inter-subnet (Layer 3) communication, machines need to know which other machines are on the local subnet, and which machines are on a remote subnet.
Subnet Mask – all machines have an IP address. All machines are configured with a subnet mask. The subnet mask defines which bits of the IP address are on the same subnet. For example, if a machine with address 10.1.0.1 wants to talk to a machine with address 10.1.0.2, and if the subnet mask is 255.255.0.0, when both addresses are compared to the subnet mask, the results are the same, and thus both machines are on the same subnet, and Ethernet is used. If the results are different, then the other machine is on a different subnet, and IP Routing is used. There is a considerable amount of training material on subnet masks so I won’t repeat that material here.
Wrong Subnet Mask – If either machine is configured with the wrong subnet mask, then one of the machines might think the other machine is on a different subnet, when actually it’s on the same subnet. Or one of the machines might think the other machine is on the same subnet, when actually it’s on a different subnet. Remember, same-subnet communication uses a different protocol than communication between subnets. Thus it’s important that the subnet mask is configured correctly.
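The subnet-mask comparison can be illustrated with Python’s standard ipaddress module, using the addresses and mask from the example above.

```python
# Checking whether two machines are on the same subnet, given their subnet mask.
# The addresses and mask match the example in the text.
import ipaddress

mask = "255.255.0.0"
net_a = ipaddress.ip_network(f"10.1.0.1/{mask}", strict=False)
net_b = ipaddress.ip_network(f"10.1.0.2/{mask}", strict=False)

# Same network address after applying the mask -> same subnet -> Ethernet is used.
print(net_a.network_address == net_b.network_address)   # True

# With a different mask or addresses, the results would differ,
# and the packet would have to go through a router instead.
```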
Layer 2 Ethernet communication
Every machine sees every packet – A characteristic of Layer 2 (Ethernet) is that every machine sees traffic from every other machine on the same subnet.
MAC addresses – When two machines on the same subnet talk to each other, they use a Layer 2 address. In Ethernet, this is called the MAC address. Every Ethernet NIC (network card) in every machine has a unique MAC address.
NICs Listen for their MAC address – The Ethernet packet put on the wire contains the MAC address of the destination machine. All machines on the same subnet see the packet. If the listening NIC has a MAC address that matches the packet, then the listening NIC processes the rest of the packet. If the listening NIC doesn’t have a matching MAC address, then the packet is ignored. You can override this ignoring of packets by turning on promiscuous mode, which is useful for packet capture programs (e.g. Wireshark).
Source MAC address – When an Ethernet packet reaches a machine, the machine needs to know where to send the reply. Thus both the destination MAC address and the source MAC address are included in the Ethernet packet.
Ethernet Packet Fields – In summary, a typical Ethernet packet contains the following fields:
- Destination MAC address
- Source MAC address
- Destination IP address
- Source IP address
- Destination TCP/UDP port number
- Source TCP/UDP port number
Other Layer 2 technologies – another common Layer 2 technology seen in datacenters is Fibre Channel for storage area networking (SAN). Fibre Channel has its own Layer 2 addresses called the World Wide Name (WWN). Fibre Channel does not use IP in Layer 3, and instead has its own Layer 3 protocol, and its own Layer 3 addresses (FCID).
ARP (Address Resolution Protocol)
Users enter IP Address, not MAC Address – when a user wants to talk to another machine, the user enters a DNS name, which is translated to an IP address. If the destination IP address is on a remote subnet, then the Layer 3 protocol IP Routing will get the packet to the destination. But if the destination IP address is on the same subnet as the source machine, then the destination IP address first needs to be converted to a MAC address. Machines use Address Resolution Protocol (ARP) to find the MAC address that’s associated with an IP address that’s on the same subnet.
- Remember, machines use the Subnet Mask to determine if the destination is local or remote.
ARP Process – The source machine sends out an Ethernet broadcast with the ARP message “who has IP address 10.1.0.2”. Every machine on the same subnet sees the message. If one of the machines is configured with IP address 10.1.0.2, then that machine replies to the source machine, and includes its MAC address in the response. The source machine can now send a packet directly to the destination machine’s Ethernet MAC address.
ARP Cache – after the ARP protocol resolves an IP address to a MAC address, the MAC address is cached on the machine for a period of time (e.g. 30 seconds). If another IP packet needs to be sent to the same destination IP address, then there’s no need to perform ARP again, since the source machine already knows the destination machine’s MAC address. When the cache entry expires, then ARP needs to be performed again.
IP Conflict – Remember, a particular IP address can only be assigned to one machine. If two machines have the same IP address, then both machines will respond to the ARP request. Sometimes the ARP response will be one machine’s MAC address, and sometimes it will be the other machine’s MAC address. This behavior is typically logged as a “MAC move” or an “IP conflict”. Since only half the packets are reaching each machine, both machines will stop working.
Layer 3 on top of Layer 2
Routing to other subnets – When a machine wants to talk to a machine on a different subnet, the source machine needs to send the packet to a router. The router will then forward the packet to the destination machine on the other subnet.
Default gateway – Every client machine is configured with a default gateway, which is the IP address of a router on the same subnet as the client machine. The client machine assumes that the default gateway (router) can reach every other subnet.
- On a NetScaler or UNIX/Linux device, the default route (default gateway) is shown as route 0.0.0.0/0.0.0.0.
Router’s MAC address – Since the router and the source machine are on the same Ethernet subnet, they use Ethernet MAC addresses to communicate. The source machine first ARP’s the router’s IP address to find the router’s MAC address. The source machine then puts the packet on the wire with the destination IP address and the router’s MAC address.
- The Destination IP Address is the final destination’s (the web server’s) IP address, and not Router’s IP address. However, the MAC Address is the Router’s MAC address, and not the final destination’s MAC Address.
- ARP across subnet boundaries – It’s not possible for a source machine to find the MAC address of a machine on a remote subnet. If you ping an IP address on a remote subnet, and if you look in the ARP cache, you might see the router’s MAC address instead of the destination machine’s MAC address. That’s because routers do not forward Ethernet broadcasts to other subnets.
- Router must be on same subnet as client machine – since client machines use Ethernet, ARP, and MAC addresses to talk to routers, the router (default gateway) and the client machine must be on the same subnet. More specifically, the router must have an IP address on the same IP subnet as the client machine. When the client machine’s IP address and the router’s IP address are compared to the subnet mask, the results must match. You cannot configure a default gateway that is on a different subnet than the client machine.
Routing table lookup – When the router receives the packet on its NIC’s MAC address, it sees that the destination IP address is not one of the router’s IP addresses, so it looks in its memory (routing table) to determine what network interface it needs to put the packet on. The router has a list of which IP subnet is on which router interface. IP Subnets are defined by the address prefix and the subnet mask.
Router ARP’s the destination machine on other subnet – If the destination IP address is on one of the subnets/interfaces that the router is connected to, then the router will perform an ARP on that subnet/interface to get the destination machine’s MAC address.
Router puts the original packet on the destination interface, but with some changes:
- The destination MAC address is changed to the destination machine’s MAC address instead of the router’s MAC address.
- The source MAC address in the packet is now the router’s MAC address, thus making it easier for the destination machine to reply.
- The IP Addresses in the packet do not change. Only the MAC addresses change.
There can only be one default route on a machine, which impacts multi-NIC machines – Some machines (e.g. NetScaler appliances) might be configured with multiple IP addresses on multiple subnets. Only one router can be specified as the default gateway (default route). This default gateway must be on one of the subnets that the client machine is connected to. See the NetScaler networking sections below for details on how to handle the limitation of only a single default route.
Multiple Routers and Routing Protocols
Router-to-router communication – When a router receives a packet that is destined to a remote IP subnet, the router might not be Layer 2 (Ethernet) connected to the remote IP subnet. In that case, the router needs to send the packet to another router. It does this by changing the destination MAC address of the packet to a different router’s MAC address. Both routers need to be connected to the same Ethernet subnet.
Routing Protocols – Routers communicate with each other to build a topology of the shortest path or quickest path to reach a destination. Most of the CCNA/CCNP/CCIE training material details how the routers perform this path selection.
Ethernet Switches
Ethernet Subnet = Single wire – All machines on the same Ethernet subnet share a single “wire”. Or at least that’s how it used to work.
Switch backplane – Today, each machine connects a cable to a port on a switch. The switch merges the switch ports into a shared backplane. The machines communicate with each other across the backplane instead of a single “wire”.
MAC address learning – The switch learns which MAC addresses are on which switch ports.
Switches switch known MAC addresses to only known switch ports – If the switch knows which switch port connects to the destination MAC address of an Ethernet packet, then the switch only puts the Ethernet packet on the one switch port. This means that Ethernet packets are no longer seen by every machine on the wire. This improves security by preventing network capture tools from seeing every packet on the Ethernet subnet.
Switches flood unknown MAC addresses to all switch ports – If the switch doesn’t know which switch port connects to a destination MAC address, then the switch floods the packet to every switch port on the subnet. If one of the switch ports replies, then the switch learns the MAC address on that switch port.
Switches flood broadcast packets – The switch also floods broadcast packets to every switch port in the Ethernet subnet.
Switches and VLANs
VLANs – A single Ethernet Switch can have different switch ports in different Ethernet Subnets. Each Ethernet Subnet is called a VLAN (Virtual Local Area Network). All switch ports in the same Ethernet Subnet are in the same VLAN.
VLAN ID – Each VLAN has an ID, which is a number between 1 and 4094. Thus a Switch can have Switch Ports in up to 4094 different Ethernet Subnets.
Switch Port VLAN configuration – a Switch administrator assigns each switch port to a VLAN ID. By default, Switch Ports are in VLAN 1 and shutdown. The Switch administrator must specify the VLAN ID and enable (unshut) the Switch Port.
Pure Layer 2 Switches don’t route – When a Switch receives a packet for a port in VLAN 10, it only switches the packet to other Switch Ports that are also in VLAN 10. Pure Layer 2 Switches do not route (forward) packets between VLANs.
Some Switches can route – Some Switches have routing functionality (Layer 3). The Layer 3 Switch has IP addresses on multiple Ethernet subnets (one IP address with MAC address for each subnet). The client machine has the Default Gateway set to the Switch’s IP address. When Ethernet packets are sent to the Switch’s MAC address, the Layer 3 Switch forwards (routes) the packets to a different IP subnet.
DHCP (Dynamic Host Configuration Protocol)
Static IP Addresses or DHCP (Dynamic) IP Addresses – Before a machine can communicate on a network, the machine needs an IP address. The IP address can be assigned statically by the administrator, or the machine can get an IP address from a DHCP Server. Most client machines use DHCP by default. DHCP is usually required for virtual desktops and non-persistent XenApp servers.
DHCP Process – When a DHCP-enabled machine boots, it sends a DHCP Request broadcast packet asking for an IP address. A DHCP server sees the DHCP IP address request, and sends back a DHCP reply with an IP address in it. DHCP servers keep track of which IP addresses are available and try to avoid IP conflicts.
DHCP Request doesn’t cross routers – The DHCP Request broadcast is Layer 2 (Ethernet) only, and won’t cross Layer 3 boundaries (routers).
DHCP Server on same subnet – If the DHCP server is on the same subnet as the DHCP client, then no problem. But this is rarely the case.
DHCP Server on remote subnet – If the DHCP server is on a different subnet, then the local router needs to forward the DHCP request to the remote DHCP server. The local router must be configured to listen for DHCP requests. To enable DHCP forwarding on a subnet, ask the networking team to configure the subnet’s router (default gateway) with an IP Helper Address or DHCP Proxy/Forwarder.
DHCP Server provides the Default Gateway – When a DHCP server sends an IP address to the client, the DHCP server also sends the Default Gateway IP address. This allows the client machine to communicate both Layer 2 and Layer 3.
DHCP Scopes – A single DHCP server can hand out IP addresses to multiple subnets. Each subnet is a different DHCP Scope. The scope configuration and list of issued IP addresses are stored in a database.
DHCP Server Redundancy – If the DHCP Server is down, then DHCP Clients cannot get an IP address when they boot, and thus can’t communicate on the network. You typically need at least two DHCP Servers. However, the DHCP database is usually stored locally on each DHCP Server, so you need some mechanism to replicate the database to each DHCP Server. Windows Server 2012 and newer have a DHCP database replication capability, as do other DHCP servers like Infoblox.
DNS (Domain Name System)
DNS converts words to numbers – When users use a browser to visit a website, the user enters a human-readable, word-based address. However, machines can’t communicate using words, so these words must first be converted to a numeric address. That’s the role of DNS.
DNS Client – Every client machine has a DNS Client. The DNS Client talks to DNS Servers to convert word-based addresses (DNS names) into number-based addresses (IP addresses).
DNS Query – The DNS Client sends a DNS Query, which is a word-based address, to a DNS Server. The DNS Server sends back an IP Address.
DNS Servers configured on client machine – On every client machine, you specify which DNS Servers the DNS Client should use to resolve DNS names into IP addresses. You enter the IP addresses of two or more DNS Servers.
- DHCP can deliver DNS Server addresses – These DNS Server IP addresses can also be delivered by the DHCP Server when the DHCP Client requests an IP address.
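For illustration, the sketch below asks the operating system’s DNS Client to resolve a name into IP addresses (Python standard library). The name www.google.com is used only because the article uses it as an example.

```python
# What the DNS Client does on an application's behalf: turn a name into IP addresses.
import socket

results = socket.getaddrinfo("www.google.com", 80, type=socket.SOCK_STREAM)

for family, socktype, proto, canonname, sockaddr in results:
    print(sockaddr[0])   # one resolved IP address per answer
```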
DNS scalability – The Internet has billions of IP addresses, and most of them have DNS names associated with them. It would be impossible for a single DNS server to have a single database with every DNS name contained within it. To handle this scalability problem, DNS names are split into a hierarchy, with different DNS servers handling different portions of the hierarchy. The DNS hierarchy is a tree structure, with the root on top, and leaves (DNS records) on the bottom.
DNS names and DNS hierarchy – A typical DNS name has multiple words separated by periods. For example, www.google.com. Each word is handled by a different portion of the DNS hierarchy.
Root of the DNS tree – The root portion of the DNS tree is handled by many DNS servers hosted worldwide. The root DNS servers are usually owned and operated by government agencies, or large service providers.
- DNS Root Hints – The list of IP Addresses for the root DNS servers is hard coded into every DNS server. This list of root DNS servers is sometimes called Root Hints.
Walk the DNS tree – it’s critical that you understand this process:
- Implicit period (root) – DNS names are read from right to left. At the end of www.google.com is an implicit period. So the last character of every DNS name is a period, which represents the top (root) of the DNS tree.
- Next is .com. The root DNS Servers have a link to the .com DNS Servers. When a .com DNS name needs to be resolved, you first ask the root servers for the IP addresses of the DNS Servers that know about .com addresses. These .com DNS Servers are usually owned and maintained by the Internet Domain Registrars.
- Next is google.com. The .com servers have a link to the google.com DNS Servers. When a google.com DNS name needs to be resolved, you ask a .com DNS server for the IP addresses of the DNS servers that know about google.com addresses.
- Finally, you ask the google.com DNS Servers to resolve www.google.com into an IP address. The google.com DNS Servers can resolve www.google.com directly without linking to any other DNS Server.
Local DNS Servers – DNS Clients do not resolve DNS names themselves. Instead, they send the DNS Query to one of their configured DNS Servers, and that DNS Server resolves the DNS Name into an IP address. The DNS Server IP addresses configured on the DNS Client are sometimes called Local DNS Servers and/or Resolvers.
Recursive queries – A DNS Server can be configured to perform recursive queries. When a DNS Client sends a DNS Query to a DNS Server, if the DNS Server can’t resolve the address using its local database, then the recursive DNS Server will walk the DNS tree to get the answer. If Recursion is not enabled, then the DNS server simply sends back an error (or a referral) to the DNS client.
DNS Caching – Resolved DNS queries are cached for a configurable period of time. This DNS cache exists on both the Resolver/Recursive DNS Server and on the DNS Client. The caching time is defined by the TTL (Time-to-live) field of the DNS record. When a DNS Client needs to resolve the same DNS name again, it simply looks in its cache for the IP address, and thus doesn’t need to ask the DNS Resolver Server again. Similarly, if two DNS Clients are configured to use the same Local DNS Servers/Resolvers, and the second DNS Client needs to resolve a DNS name that the first DNS Client already resolved, the DNS Resolver Server simply looks in its cache and sends back the response. There’s no reason to walk the DNS tree again, at least not until the TTL expires.
DNS is not in the data path – Once a DNS name has been resolved into an IP Address, DNS is done. The traffic is now between the user’s client software (e.g. web browser), and the IP address. DNS is not in the data path. It’s critical that you understand this, because this is the source of much confusion when configuring NetScaler GSLB.
FQDN – When a DNS name is shown as multiple words separated by periods, this is called a Fully Qualified Domain Name (FQDN).
DNS Suffixes – But you can also sometimes enter just the leftmost label of a DNS name and leave off the rest. In this case, the DNS Client will append a DNS Suffix to the single word, thus creating an FQDN, and send the FQDN to the DNS Resolver to get an IP address. A DNS Client can be configured with multiple DNS Suffixes, and the DNS Client will try each of the suffixes in order until it finds one that works. When you ping the single-word address, ping will show you the FQDN that it used to get an IP address.
Authoritative DNS Servers – A small portion of the DNS hierarchy/tree is stored on one or more DNS servers. These DNS servers are considered “authoritative” for this portion of the DNS tree. When you send a DNS Query to a DNS Server that has the actual DNS records in its configuration, the DNS Server will send back the IP Address and flag the response as “authoritative”. But when you send a DNS query to a DNS Resolver that doesn’t have google.com's DNS records in its local database, the DNS Resolver will get the answer from google.com's DNS servers and flag the IP Address response as “non-authoritative”. The only way to get an “authoritative” response for www.google.com is to ask google.com's DNS servers directly.
- DNS Zones – The portion of the DNS tree hosted on an authoritative DNS server is called the DNS Zone. A single DNS server can host multiple DNS Zones. DNS Zones typically contain only a single domain name (e.g. google.com). If DNS records for both company.com and corp.com are hosted on the same DNS server, then these are two separate zones.
- Zone Files – DNS records need to be stored somewhere. On UNIX/Linux DNS servers, DNS records are stored in text files, which are called Zone Files. Microsoft DNS servers might store DNS records inside of Active Directory instead of in files.
DNS records – Different types of DNS records can be created on authoritative DNS servers:
- A (host) – this is the most common type of record. It’s simply a mapping of one FQDN to one IP address. If you create multiple Host records (A records) with the same FQDN, but different IP addresses, then you are essentially configuring DNS Round Robin load balancing.
- CNAME (alias) – this record maps (aliases) one FQDN to another FQDN. CNAMEs allow you to put the IP address into one A record, and have other CNAME records that map to that one A record. Then whenever you update the IP address in the A record, all of the CNAMEs start resolving to the new IP address. Otherwise, if you had multiple A records for different FQDNs pointing to the same IP address, you’d have to update the IP address in each of the A records.
- NS (name server, for delegation) – DNS Resolvers use NS records to enumerate every DNS server that is authoritative for a DNS Zone. This record is important for GSLB configurations.
Resolving a CNAME – While the DNS Resolver is walking the tree, a CNAME might be returned instead of an IP address. However, the DNS Resolver’s job is to return an IP address, not a CNAME, so the Resolver has to start over again with walking the DNS tree to resolve the CNAME into an IP address. If the DNS Resolver gets another CNAME, then it starts over again until it finally gets an IP Address.
- CNAME is not a redirect – The FQDN in the user’s address bar doesn’t change. The ultimate response from the DNS Resolver is still just an IP address.
- CNAMEs and NetScaler GSLB – CNAMEs are typically used in NetScaler GSLB configurations. CNAMEs are one method of delegating resolution of a FQDN to a NetScaler.
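The sketch below shows a CNAME from the client’s perspective using Python’s standard resolver call: you get back the canonical name, the alias you asked for, and the IP addresses. The hostname is a placeholder; use any FQDN you know is published as a CNAME.

```python
# Seeing a CNAME from the client side: the resolver returns the final (canonical) name,
# the aliases, and the IP addresses. The hostname below is a placeholder.
import socket

canonical, aliases, addresses = socket.gethostbyname_ex("www.example.org")

print("canonical name:", canonical)   # the A record the CNAME chain ends at
print("aliases:", aliases)            # the name(s) you originally asked for
print("addresses:", addresses)        # still just IP addresses - a CNAME is not a redirect
```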
NS records and DNS delegation – NS records can also be used to delegate sub-trees to other DNS servers. For example, google.com can delegate gslb.google.com to other DNS servers. In that case, in the google.com zone, you create NS records for gslb.google.com that point to two or more other DNS servers (or NetScaler appliances running GSLB).
Physical Networking
Layer 1 (Physical cables)
NetScalers connect to network switches using several types of media (cables).
Gigabit cables are usually copper CAT6 twisted pair with 8-wire RJ-45 connectors on both sides
10 Gigabit or higher cables are usually fiber optic cables with LC connectors on both sides.
Transceivers (SFP, SFP+)
- Transceivers convert optical to electrical and vice versa – To connect a fiber optic cable between two network ports, you must first insert a transceiver into the switch ports. The transceiver converts the electrical signals from the switch or NetScaler into optical (laser) signals. The transceiver then converts the laser signals to electrical signals on the other side.
- Transceivers are pluggable – just insert them. Because they are pluggable, you can insert different types of transceivers into different switch/NIC ports. Some switch ports might be fiber, while others might be copper.
- Different types of transceiver – SFP transceivers only work up to gigabit speeds. For 10 Gig, you need SFP+ transceivers.
For cheaper 10 Gig connections, Cisco offers Direct Attach Copper (DAC) cables:
- Transceivers are built into both sides of the cable so you don’t have to buy them separately.
- The cables are based on Copper Twinax. Copper cable and its built-in transceivers are cheaper than optical fiber and optical transceivers.
- The cables are short distance (e.g. 5 meters); for anything longer than about 10 meters, you must use optical fiber instead.
Port Channel (cable bonds)
Bonding – Two or more cables can be bound together to look like one cable. This increases bandwidth, and increases reliability. If you bond 4 Gigabit cables together, you get 4 Gigabit of bandwidth instead of just 1 Gigabit of bandwidth. If one of those cables stops working for any reason, then traffic can use the other 3 cables.
Network impact of Cable Bonding – Cable bonding does not affect networking in any way. Ethernet and IP routing don’t care if there’s one cable, or if there are multiple cables bonded into a single link. However, if you connect multiple cables to the same VLAN without bonding, then that definitely impacts both Ethernet and IP routing. Don’t connect multiple cables to one VLAN unless you bond those cables.
Various Names for Cable Bonding – On Cisco switches, cable bonding functionality has several names. Probably the most common name is “port channel”. Other names include: “link aggregation”, “port aggregation”, “switch-assisted teaming”, and “Etherchannel”.
Bonding Configuration – to bond cables together, you must configure both sides of the connection identically. You configure the switch to bond cables. And you configure the NetScaler (or server) to bond cables. On NetScaler, the feature is called Channel. On NetScaler, a Channel is represented by a new interface called LA/1 or something like that. LA = Link Aggregate. If you want to bond cables on a NetScaler, then ask the switch administrator to configure the switch side first.
ARP to a single MAC on multiple NICs – Each cable is plugged into a NIC. Each NIC has its own MAC Address. An IP Address can only be ARP’d to a single MAC address, which means the incoming traffic only goes to one of the cables. To get around this problem, when a port channel (bond) is configured, a single MAC address is shared by all of the cables in the bond, and both sides of the cable bond know that the single MAC address is reachable on all members of the cable bond.
Load Balancing across the bond members – The Ethernet switch and the NetScaler will essentially load balance traffic across all members of the bond. There are several port channel load balancing algorithms, but the most common is based on source IP and destination IP. All packets that match the same source and destination will go down the same cable. Packets with other combinations of source and destination might go down a different cable. If you are bonding Gigabit cables, since a single source/destination connection only goes down one cable, it can only use up to 1 Gigabit of throughput. Bonds only provide increased bandwidth if there are many source/destination combinations. A conceptual hashing sketch follows.
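The sketch below is only a conceptual illustration of source/destination hashing, not any vendor’s actual algorithm; the addresses are invented.

```python
# Conceptual sketch (not any vendor's actual algorithm) of source/destination hashing:
# every packet of the same flow hashes to the same member cable, so one flow can
# never use more bandwidth than one member link.
import hashlib

def pick_member(src_ip: str, dst_ip: str, member_count: int) -> int:
    digest = hashlib.md5(f"{src_ip}->{dst_ip}".encode()).digest()
    return digest[0] % member_count     # same src/dst pair -> same cable every time

print(pick_member("10.1.0.5", "192.0.2.10", 4))   # always the same member
print(pick_member("10.1.0.6", "192.0.2.10", 4))   # a different pair may pick another
```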
LACP – Cables can be bonded together manually, or automatically. LACP is a protocol that allows the two sides (switch and NetScaler) of the bonded connection to negotiate which cables are in the bond, and which cables aren’t. LACP is not the actual bonding feature; instead, LACP is merely a negotiation protocol to make bonding configuration easier.
Multi-switch Port Channels – Port Channels (bonds) are usually only supported between one switch and one NetScaler. To bond ports from one NetScaler to multiple switches, you configure something called Multi-chassis Port Channel. Multi-chassis refers to the multiple switches. You almost always want multi-chassis since that lets you survive a switch failure.
- Virtual Port Channel – On Cisco NX-OS switches, the multi-chassis port channel feature is called “virtual port channel”, or vPC for short. vPC requires LACP to be configured on both sides. When connecting a single Port Channel to multiple Nexus switches, ask the network team to create a “virtual port channel”.
- Stacked Switches – Other switches support a “stacked” configuration where multiple switches look like one switch. There are usually cables in the back of the switches that connect them together. This multi-chassis port channel doesn’t usually need LACP, but there’s no harm in enabling LACP.
Configure Manual Channel On NetScaler – to create a “manual” channel, you go to System > Network > Channels, create a channel, select the channel interface name (e.g. LA/2), and add the interfaces.
Configure LACP Channel on NetScaler – To create a LACP channel, go to System > Network > Interfaces, double-click a member interface, scroll down, check the box to enable LACP, and enter a “key” (e.g. 1). All members of the same channel must have the same key. If you enter “1” as the key, then a new interface named LA/1 is created, where the “1” = the LACP key.
- Channel Number on NetScaler can be different than the Channel Number on the Switch – The LACP “key” configured on the NetScaler does not need to match the port channel number on the switch side. NetScalers typically have Channels named LA/1, LA/2, etc., while Switches can have port channel interfaces named po281, po282, etc.
VLAN tagging
VLAN review – Earlier, I mentioned that switches can support multiple Ethernet subnets, and each of these Ethernet subnets is a different VLAN. Each switch port is configured to belong to a particular VLAN. Ports in the same VLAN use Ethernet to communicate with each other. Ports in separate VLANs use routers to communicate with each other.
Multiple VLANs on one port – Switches can also be configured to allow a single switch port to be connected to multiple VLANs. A NetScaler usually needs to be connected to multiple subnets (VLANs), as detailed later. You can either assign each VLAN to a separate cable, or you can combine multiple VLANs/subnets onto a single cable.
VLAN tagging – If a switch port supports multiple VLANs, when a packet is received by the switch port, the switch needs some sort of identifier to know which VLAN the packet is on. A VLAN tag is added to the Ethernet packet, where the VLAN tag matches the VLAN ID configured on the switch.
- Tags are added and removed on both sides of the switch cable – The NetScaler adds the VLAN tag to packets sent to the switch. The switch removes the tag and switches the packet to other switch ports in the same VLAN. When packets are switched to the NetScaler, the switch adds the VLAN tag so NetScaler knows which VLAN the packet came from.
Trunk Port vs Access Port – When multiple VLANs are configured on a single switch port, this is called a Trunk Port. When a single switch port only allows one VLAN (without tagging), this is called an Access Port. Switch ports default as Access Ports, unless a switch administrator specifically configures it as a Trunk Port. Access Ports don’t need VLAN tagging, but Trunk Ports do need VLAN tagging. When you want multiple VLANs on a single switch port, ask the networking team to configure a trunk port.
- Trunk Ports and VLAN ID tagging – when a switch port is configured as a Trunk Port, by default, every VLAN assigned to that Trunk Port requires VLAN ID tagging. The NetScaler must be configured to add and remove the same VLAN ID tags that the switch is expecting.
- Trunk Ports and Native VLAN – One of the VLANs assigned to the Trunk Port can be untagged. This untagged VLAN is called the native VLAN. Only one VLAN can be untagged. Native VLAN is an optional configuration. Some switch administrators, for security reasons, will not configure an untagged VLAN (native VLAN) on a Trunk Port. If untagged VLANs are not allowed on the switch, then you must configure the NetScaler to tag every packet, as detailed later. If there’s a native VLAN, then some NetScaler configuration (e.g. NSIP) is simplified, as detailed later.
Trunk Ports reduce the number of cables – if you had to connect a different cable (or Port Channel) from NetScaler for each VLAN, then the number of cables (and switch ports) can quickly get out of hand. The purpose of Trunk Ports is to reduce the number of cables.
Trunk Ports and Port Channels are separate features – If you want to bond multiple cables together, then you configure a Port Channel. If you want multiple VLANs on a single cable or Port Channel, then you configure a Trunk Port. These are two completely separate features.
Trunk Ports and Routing are separate features – Configuring a Trunk port with multiple VLANs does not automatically enable routing between those VLANs. Each VLAN on the Trunk Port is a separate Layer 2 Ethernet broadcast domain, and they can’t communicate with each other without routing. Routing is configured in a separate part of the Layer 3 switch, or on a separate router device. In other words, Trunk Ports are unrelated to routing.
Multiple NICs in one machine
A single machine (e.g. NetScaler) can have multiple NICs, which means multiple cables.
Single VLAN/subnet does not need VLAN configuration on the NetScaler – if the NetScaler is only connected to one subnet/VLAN, then no special configuration is needed. Just create the NSIP, SNIP, and VIPs in the same IP subnet. You can optionally bond multiple cables into a Port Channel for redundancy and increased bandwidth.
Two or more NICs to one VLAN requires bonding – If two or more NICs are connected to the same VLAN, then the NICs must be bonded together into a Port Channel. Port Channels require identical configuration on the switch side and on the NetScaler side. If you don’t bond them together, then you run the risk of bridging loops and/or MAC moves.
Multiple VLANs/subnets require VLAN configuration on the NetScaler – if a NetScaler is connected to multiple IP subnets, then the NetScaler must be configured to identify which subnet is on which NIC. On the NetScaler, for each IP subnet, you create a Subnet IP address (SNIP). Then you create a VLAN object, bind it to an interface (or Port Channel), and bind a subnet IP address (SNIP) to the VLAN object, so NetScaler knows which IP addresses are on which VLAN and interface (see the CLI sketch after this list).
- VLAN objects are required on a multi-homed NetScaler, even if VLAN tagging is not needed – If a NetScaler is connected to two subnets, it doesn’t matter if VLAN tagging is required or not; the VLAN objects still must be defined on the NetScaler, so the NetScaler can link IP subnets with interfaces.
- VLAN Tagging – You specify the VLAN ID when creating the VLAN object. If the switch port is a Trunk Port, then there’s a checkbox in the VLAN object to enable tagging. If the switch Trunk Port is configured with a native VLAN, then one of the VLANs bound to the Interface/Channel can be untagged.
- If VLAN tagging is not needed, then the VLAN ID configured on the NetScaler doesn’t have to match the switch’s VLAN ID – If the VLAN is not tagged, then the VLAN ID entered on the NetScaler is only locally significant, and doesn’t have to match the switch’s VLAN ID. However, it’s easier to troubleshoot if the VLAN IDs match.
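As a rough CLI sketch of the above (the interface number, VLAN ID, and addresses are made-up example values), an untagged VLAN on a multi-homed NetScaler might be configured like this; note that the VLAN object is still created even though nothing is tagged:

```
add ns ip 10.30.0.10 255.255.255.0 -type SNIP
add vlan 30
bind vlan 30 -ifnum 1/1
bind vlan 30 -IPAddress 10.30.0.10 255.255.255.0
```

Because the bind command omits the tagged option, packets on interface 1/1 stay untagged and VLAN ID 30 is only locally significant.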
Routing table – When you create a SNIP/VLAN on a NetScaler, a “direct” connection is added to the routing table. You can view the routing table at System > Network > Routes. “Direct” means the NetScaler has a Layer 2 connection to the IP Subnet.
One Default Route – the routing table usually has a route 0.0.0.0 that points to the Default Gateway/Router. There can only be one default route on a device. The NetScaler can send Layer 2 packets out any directly connected interface/VLAN, but Layer 3 packets only go out the one default route, which is on only one VLAN.
Static Routes to override Default Route – To use routers on a different subnet than the default route, you add static routes to the routing table. To add a static route, you specify the destination subnet you are trying to reach, and the router (Next Hop or Gateway) you want to use to reach that destination. The Next Hop address must be on one of the VLANs that the NetScaler is connected to.
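For example (addresses are illustrative only), a static route that sends everything destined for 10.0.0.0/8 to an internal router at 10.30.0.1, overriding the default route for those destinations, might look like:

```
add route 10.0.0.0 255.0.0.0 10.30.0.1
```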
How Source IP is chosen – when the multi-VLAN NetScaler wants to send a packet to a remote subnet (not directly connected) through a router, the NetScaler first looks in its routing table for the next hop address. The NetScaler must have a SNIP address on the same subnet as the next hop address. This subnet-specific IP address is used as the Source IP for the Layer 3 packet. The destination machine sends the reply to this subnet-specific Source IP.
NetScaler Networking
Traffic flow through NetScaler
VIPs (Virtual IP) – VIPs receive traffic. When you create a Virtual Server (e.g. Load Balancing Virtual Server), you specify a Virtual IP address (VIP). This VIP listens for traffic from clients. You also specify a port number to listen on.
SNIPs (Subnet IP) – SNIPs are the Source IP when NetScaler sends traffic to a web server. When NetScalers need to send a packet, they look in the routing table for the next hop address, and select a SNIP on the same subnet as the next hop. This SNIP is inserted into the packet as the Source IP. The web servers reply to the SNIP.
Load Balancing traffic – simplified
- VIP/Virtual Server – Clients send traffic to a VIP.
- Services – Bound to the Load Balancing Virtual Server are one or more Load Balancing Services (or Service Group). These Load Balancing Services define the web server IP address and the web server port number. NetScaler chooses one of the Load Balancing services, and forwards the HTTP request to it.
- Monitors – NetScaler should not send traffic to a web server unless that web server is healthy. Monitors periodically send health check probes to web servers.
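To tie these three pieces together, here is a minimal, hypothetical load balancing configuration; all names, IP addresses, and ports are example values:

```
add lb vserver vs_web HTTP 192.168.100.50 80
add service svc_web1 10.30.0.101 HTTP 80
add service svc_web2 10.30.0.102 HTTP 80
bind service svc_web1 -monitorName http
bind service svc_web2 -monitorName http
bind lb vserver vs_web svc_web1
bind lb vserver vs_web svc_web2
```

The VIP is 192.168.100.50, the two Services point at the web servers, and the built-in http monitor marks a Service DOWN if its probe fails, so the Virtual Server stops sending traffic to that web server.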
NetScaler Source IP
SNIP replaces Client IP – When NetScaler communicates with a back-end web server, the source IP is a SNIP. The web server does not see the original Client IP address. Essentially, the source IP address in the original HTTP packet was changed from the Client IP to the SNIP. On other load balancers, this is sometimes called Source NAT.
If SNIP is the source IP, how to log the original Client IP? – Since web servers behind a NetScaler only see the SNIP, the HTTP entries in the web server access logs (e.g. IIS log) all come from the same SNIP. If the web server needs to see the real Client IP, then NetScaler has two options: insert the Client IP into a HTTP Request Header, or configure NetScaler to not use a SNIP.
Client IP Header Insertion – when you create a Load Balancing Service, there’s a checkbox to insert the real client IP into a user-defined HTTP Header. This Header is typically named X-Forwarded-For, or Real IP, or Client IP, or something like that. The web server then needs to be configured to extract the custom HTTP header and log it. The packets on the wire still have a SNIP as the Source IP.
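A hedged CLI equivalent of that checkbox, assuming an existing Service named svc_web1 and the common X-Forwarded-For header name:

```
set service svc_web1 -cip ENABLED X-Forwarded-For
```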
USIP – The default mode for NetScaler is Use Subnet IP (USNIP). This can be changed to Use Source IP (USIP), which leaves the original Source IP (Client IP) in the packets. When web servers respond, they send the reply to the Client IP, and not the SNIP. If the Response does not go through the NetScaler, then NetScaler is only seeing half of the conversation, which breaks many NetScaler features. If you need USIP mode, then reconfigure the default gateway on the web servers to point to a NetScaler SNIP. When the web server replies to the Client IP, it will send the reply packet to its default gateway, which is a NetScaler SNIP, thus allowing NetScaler to see the entire conversation. USIP can be enabled globally for all new Load Balancing Services, or can be enabled on specific Load Balancing Services, so you can use SNIP for some web servers and USIP for others.
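For example (the Service name is illustrative), USIP can be switched on globally or on a single Service:

```
enable ns mode USIP
set service svc_web1 -usip YES
```

The first command changes the default for newly created Services; the second enables USIP on one existing Service only.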
NetScaler networking design questions
Dedicated management VLAN? – Do you want to put the NetScaler Management IP (NSIP) on a dedicated Management VLAN? If so, then the NetScaler needs to be connected to the Management VLAN.
- Dedicated cable? – Is the management VLAN on its own cable? Or is it on a Trunk Port with other VLANs?
- Access Port – If the management VLAN is on a dedicated cable, then configure that switch port as an Access Port so the NSIP VLAN is not tagged.
- Native VLAN on Trunk Port – If the management VLAN is on a Trunk Port, it’s easiest if the NSIP VLAN is untagged, so configure the NSIP VLAN as the native VLAN.
- Tagged management VLAN? – Or does the network team require the management VLAN to be tagged? If so, then NetScaler will need special NSVLAN configuration.
Interface 0/1 is only for management – Dedicated management VLANs are usually connected to interface 0/1 on the NetScaler. If you don’t have a dedicated management VLAN, then don’t use interface 0/1 on physical NetScalers; instead, use interfaces 1/1 and higher. That’s because interface 0/1 is not optimized for high-throughput traffic.
What VLAN do you want the VIPs to be on? – one VLAN? Multiple VLANs? Clients send traffic to VIPs. For public-facing VIPs, they are typically created on a DMZ VLAN. You must connect the NetScalers to all VLANs that host NetScaler VIPs. In other words, the NetScaler must be Layer 2 connected to any VLAN where you want to create a NetScaler VIP.
Do you want the NetScaler to be Layer 2 connected to the web server VLANs? – there usually is no requirement for NetScaler to be Layer 2 connected to the web servers, since NetScaler can use a router to reach the web servers on remote subnets.
- Web Servers reply to SNIP – In USNIP mode (the default), Web Servers reply to the packet’s Source IP, which is a NetScaler SNIP. This is sometimes called one-arm mode, because there’s no need to change any of the networking on the web servers. The Default Gateway on the Web Servers does not need to be changed. The Web Servers do not need to be moved to any other VLAN.
- SNIP as Web Server Default Gateway – Some load balancing architectures require the web servers to use a NetScaler SNIP as their default gateway. This is either an older architecture, or an advanced architecture. In this case, the NetScaler SNIP would need to be on the same subnet as the web servers, and thus the NetScaler needs to be connected to the web server VLAN. This is sometimes called two-arm mode.
- One-arm vs two-arm is unrelated to the number of VLANs – One-arm and two-arm have nothing to do with the number of VLANs a NetScaler is connected to. With one-arm, the Source IP of the packets is changed to a NetScaler SNIP, so no networking changes are needed on the web servers. With two-arm, the Source IP of the packets is not changed, but the web server replies still need to reach the NetScaler, so the web server Default Gateway is changed to a NetScaler SNIP.
Which networking connections need redundancy? – plug in two or more cables and bond them (Port Channel)
- Port Channel across multiple switches? – on Cisco NX-OS, configure “virtual port channel”. LACP is required on the NetScaler.
Will you combine multiple VLANs onto a single Interface/Channel? – if so, then configure the switch port or channel as a Trunk Port.
Which VLAN will host the default route (default gateway)? – The default route is usually through a router on the DMZ VLAN, which allows the NetScaler to send replies to any Internet IP address.
- Static routes for internal subnets – for internal subnets, create static routes that use an internal router as next hop address. Instead of adding a static route for every single internal subnet (e.g. 10.10.5.0/24, 10.10.6.0/24, etc.), can you summarize the internal networks (e.g. 10.0.0.0/8)? Ask the networking team for assistance.
- PBR for dedicated management interface – When an internal machine connects to the NSIP, it sends a packet that eventually goes through a router that is connected to the dedicated management VLAN. When the NSIP replies, it should send that reply back to the same management router. However, your default route is probably on the DMZ network, and you probably have static routes for internal subnets that use a router on a different data VLAN. To send the replies correctly, configure the NetScaler with a Policy Based Route that causes all packets with Source IP = NSIP to use a management VLAN router as the next hop address. PBR also fixes routing issues for traffic that is sourced from the NSIP (LDAP, NTP, Syslog, etc.)
- MBF? – Another option for the management network is Mac Based Forwarding, which keeps track of which interface a packet came in on, and replies out the same interface. This works for replies from the NSIP, but doesn’t do anything for traffic that is sourced by the NSIP (LDAP, NTP, Syslog, etc.)
- Multiple Internet Circuits – If clients connect to the NetScaler VIPs through multiple Internet circuits, then you probably want replies to go back out the same way they came in. This won’t work if you only have a single default route. The easiest way to enable this is to enable Mac Based Forwarding (MBF), which keeps track of which interface/router a client request came in on, and replies out the same interface/router. You can combine MBF with PBR for the NSIP-sourced traffic.
Is the NetScaler connected to multiple VLANs/Subnets? – if so, then you must configure VLANs on the NetScaler. VLAN objects are required on the NetScaler whether you need VLAN tagging or not.
A NetScaler might be connected to a single VLAN – this is the easiest configuration. Just create NSIP, SNIP, and VIPs in the same IP subnet. No special NetScaler networking configuration required.
NetScaler Forwarding Tables
NetScaler has at least three tables for choosing how to forward a packet. They are listed below in priority order (MBF overrides PBR, which overrides the routing table).
- Mac Based Forwarding (MBF) – keeps track of which interface/router a client request came in on, and replies out the same interface/router. Only works for replies. Since it overrides routing tables, MBF is usually discouraged unless absolutely necessary.
- Policy Based Route (PBR) – chooses a next hop address based on information in the packet (e.g. source IP, source port, destination port). Normal routing only chooses next hop based on destination IP, while PBR can use additional packet fields. PBRs are difficult to maintain, and thus most networking people try to avoid them. But they are sometimes necessary (e.g. dedicated management network).
- Routing Table – the routes in the routing table come from three sources: SNIPs (directly connected subnets), manually-configured Static Routes (including default route), and Dynamic Routing (OSPF, BGP).
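If MBF turns out to be necessary (e.g. the multiple Internet circuits scenario described earlier), it is a global mode, and the routing table can be reviewed from the CLI:

```
enable ns mode MBF
show route
```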
NetScaler networking configuration
NetScaler networking configuration vs Server networking configuration – NetScaler networking is completely different than server networking. NetScaler is configured like a switch, not like a server.
- Servers assign IPs to NICs – On servers, you configure an IP address directly on each NIC. Most servers only have one NIC.
- NetScalers are configured like a Layer 3 Switch – On NetScalers, you assign VLANs to interfaces, just like a switch. Then you put NetScaler-owned IP addresses into each of those VLANs, which again, is just like a Layer 3 switch. More specifically, you create VLAN objects, bind the VLAN to the interface/channel, and bind a SNIP to the VLAN.
Disable unused Interfaces – if a NetScaler interface (NIC) does not have a cable, then disable the interface (System > Network > Interfaces, right-click an interface, and Disable). If you don’t disable the unused interfaces, then High Availability (HA) will think the interfaces are down and thus fail over.
LACP – if your port channels have LACP enabled, go to System > Network > Interfaces, edit two or more member interfaces, check the box for LACP, and enter the same LACP Key. If you enter 1 as the key, then a channel named LA/1 is created.
For manual port channels, go to System > Network > Channels, add a channel, select LA/1 or similar, and bind the member interfaces.
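A CLI sketch of the same steps, with example interface numbers and an example LACP key (LA/1 is created automatically by LACP; LA/2 shows the manual alternative):

```
disable interface 1/5
set interface 1/1 -lacpMode ACTIVE -lacpKey 1
set interface 1/2 -lacpMode ACTIVE -lacpKey 1
add channel LA/2 -ifnum 1/3 1/4
```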
NSIP is special – NSIP lives in VLAN 1. If you don’t need to tag the management VLAN, then leave NSIP in VLAN 1. VLAN 1 on the NetScaler does not need to match the switch because you’re not tagging the packets with the VLAN ID. When you bind VLANs to the other interfaces, those interfaces are removed from VLAN 1 and put in other VLANs. The remaining interface in VLAN 1 is your management interface.
- NSVLAN – if your management VLAN is tagged, then normal VLAN tagging configuration won’t work. Instead, you must configure NSVLAN to tag the NSIP/management packets with the VLAN ID. All other VLANs are configured normally as shown next.
- PBR – if you have a dedicated management VLAN/subnet, configure a Policy Based Route (PBR) that matches the NSIP as Source IP and uses a management VLAN router as the next hop.
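Hedged CLI examples of both items, using made-up VLAN ID, NSIP, and router addresses (an NSVLAN change typically requires saving the configuration and rebooting):

```
set ns config -nsvlan 100 -ifnum 0/1 -tagged
add ns pbr pbr_mgmt ALLOW -srcIP = 10.100.0.5 -nextHop 10.100.0.1
apply ns pbrs
```

The PBR matches any packet sourced from the NSIP (10.100.0.5 in this sketch) and forwards it to the management router at 10.100.0.1.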
VLANs – If the NetScaler is connected to multiple VLANs:
- Create a SNIP for each VLAN (except the dedicated management VLAN).
- Create a VLAN object for each VLAN, and specify the VLAN ID (same as switch). It doesn’t matter if the VLAN is tagged or not, you still must create a separate VLAN object on NetScaler for each subnet.
- Bind the VLAN object to the interface or channel. If the switch needs the VLAN to be tagged, then check the box to tag the packets with the specified VLAN ID.
- Bind the VLAN object to the SNIP for that VLAN.
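Put together, the four steps above might look like this for a tagged VLAN 20 on channel LA/1 (all values are examples):

```
add ns ip 10.20.0.10 255.255.255.0 -type SNIP
add vlan 20
bind vlan 20 -ifnum LA/1 -tagged
bind vlan 20 -IPAddress 10.20.0.10 255.255.255.0
```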
Static Routes – Add static routes for internal subnets through an internal router on a “data” network. The “data” network is usually a high bandwidth connection that is different than the management network.
Change Default Route to DMZ router – now that PBR and Static Routes are configured, you can probably safely delete the default route (0.0.0.0) and recreate it pointing to the DMZ router, without losing your connection to the NSIP.
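For example (router addresses are illustrative), the internal static route and the recreated default route might look like this, assuming the old default route pointed to an internal router at 10.20.0.1 and the DMZ router is 192.168.100.1:

```
add route 10.0.0.0 255.0.0.0 10.20.0.1
rm route 0.0.0.0 0.0.0.0 10.20.0.1
add route 0.0.0.0 0.0.0.0 192.168.100.1
```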
Layer 2 Troubleshooting – To verify VLAN connectivity, log into another device on the same VLAN (e.g. router/firewall) and ping the NetScaler SNIP or NSIP. Immediately check the ARP cache to see if the IP address was converted to a MAC address. If not, then layer 2 is not configured correctly somewhere (e.g. VLAN configuration), or there’s a hardware failure (e.g. bad switch port).
Layer 3 Troubleshooting – There are many potential causes of Layer 3 routing issues. A common problem is incorrect Source IP chosen by the NetScaler. To see the Source IP, SSH to the NetScaler, run shell, then run nstcpdump.sh host <Destination_IP>. You should see a list of packets with Source IP/Port and Destination IP/Port. Then work with the firewall and routing teams to troubleshoot packet routing.
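For example, from an SSH session to the NetScaler (the destination IP is an example):

```
shell
nstcpdump.sh host 10.30.0.101
```

Check that the Source IP shown in the output is the SNIP you expect for that destination subnet.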
NetScaler High Availability (HA)
Disable unused interfaces – All network interfaces on NetScaler by default have HA monitoring enabled. If any enabled interface is down (e.g. cable not connected), then HA will fail over. Disable the unused interfaces so HA won’t monitor them any more.
HA heartbeat packets are untagged – Two nodes in a HA pair send heartbeat packets out all interfaces. These heartbeat packets are untagged. If the switch does not allow untagged packets (no native VLAN on a Trunk Port), then some special configuration is required.
- On NetScaler, for each Trunk interface/channel, turn off tagging for one VLAN. Don’t worry about the switch configuration. Just do this on the NetScaler side.
- On NetScaler, go to System > Network > Interfaces (or Channels), double-click the interface/channel, and enable Tag All VLANs. The VLAN you untagged in the previous step will now be tagged again. As a bonus, HA heartbeat packets will also be tagged with that same VLAN ID.
- To verify that HA heartbeats are working across all interfaces, SSH to each NetScaler node, and run show ha node. Look for “interfaces on which HA heartbeat packets are not seen”. There should be nothing in the list.
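A hedged CLI version of those steps, reusing the example VLAN 20 and channel LA/1 from earlier (re-bind the VLAN without the tagged option, then enable Tag All):

```
unbind vlan 20 -ifnum LA/1
bind vlan 20 -ifnum LA/1
set channel LA/1 -tagall ON
show ha node
```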
GARP – When an HA pair fails over, the new primary appliance performs a Gratuitous ARP. For two devices on the same subnet (e.g. router and NetScaler) to talk to each other, they first perform ARP to convert the IP addresses to MAC addresses. The IP address to MAC address mappings are cached (ARP cache). Each HA NetScaler node has different MAC Addresses. After a failover, the new primary needs to tell the router to start sending traffic to the new node’s MAC addresses instead of the old node’s MAC addresses. A GARP packet is intended to inform a router to update its ARP cache with the new MAC address information. Some routing devices (e.g. firewalls) will not accept GARP packets, and instead will wait for the ARP cache entry to time out. Or the router/firewall might not allow the IP address to move to a different MAC address. If HA failover stops all traffic, work with the router/firewall admin to troubleshoot GARP.
Port Channels and HA failover – A port channel has two or more member interfaces. If one of the member interfaces is down, should the appliance failover? How many member interfaces must fail before HA failover should occur? On NetScaler, double-click the channel, and you can specify a minimum throughput. If bonded throughput falls below this number due to member interface failure, then HA fails over.
Fail Safe – If at least one monitored interface is down on both HA nodes, then both nodes are considered unhealthy and neither will serve traffic. You can enable Fail-Safe mode so that one node stays primary and keeps serving traffic, even if not every interface is functional.
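Hedged CLI equivalents of these two settings; the throughput value (in Mbps) is an example, and the channel parameter name (-lrMinThroughput) is my assumption, so verify it against your firmware’s documentation:

```
set channel LA/1 -lrMinThroughput 2000
set ha node -failSafe ON
```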
Firewalls
DMZ – public facing NetScaler VIPs should be on a DMZ VLAN that is sandwiched between two firewalls. That means the NetScaler must be connected to the DMZ.
- Firewalls can route – When you connect a NetScaler to a DMZ, the firewall is usually the router.
- NAT – Most DMZ VLANs use private IPs (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) instead of public IPs. These private IP addresses are not routable across the Internet. To make them accessible, you NAT a company-owned public IP to the private DMZ IP. Ask the firewall administrator to configure the NAT translations for each publicly-accessible DMZ VIP.
Internal VIPs – internal VIPs (accessed by internal users) should be on an internal VLAN (not in the DMZ).
Multiple security zones – If you connect a single NetScaler to both DMZ and Internal, here’s how the traffic flows:
- Client connects to DMZ VIP, which goes through the firewall that separates the Internet from the DMZ.
- NetScaler internal SNIP connects to internal server. Since the NetScaler is connected to the Internal network, NetScaler will use an internal SNIP for this traffic. If you have a firewall between DMZ and internal, that firewall has now been bypassed.
Separate NetScaler appliances for DMZ and internal – Bypassing the DMZ-to-internal firewall is usually not what security teams want. Ask Security for their opinion on this architecture. A more secure approach is to have different NetScaler appliances for DMZ and internal. The DMZ appliance is connected only to DMZ (except dedicated management VLAN). When the DMZ NetScaler needs to communicate with an internal server, the DMZ NetScaler uses a DMZ SNIP to send the packet to the DMZ-to-internal firewall. The DMZ-to-internal firewall inspects the traffic, and forwards it if the firewall rules allow. The firewall rule allows the DMZ SNIP to talk to the web server, but the firewall does not allow client IPs (on the Internet) to talk directly to the web server.
Traffic Isolation – NetScaler has some features that can isolate traffic on a single appliance:
- Net Profiles – allow you to specify a particular SNIP to be used by a vServer or Service (see the sketch after this list). Firewalls can allow different SNIPs to access different web servers.
- Traffic Domains – each Traffic Domain is a separate routing table. Put different NetScaler objects in different Traffic Domains to keep their traffic separate. Not all NetScaler features are supported inside non-default Traffic Domains.
- Partitions – carve up an MPX/VPX appliance into different partitions, with each partition having access to a subset of the hardware. Each partition is essentially a separate NetScaler config, which means separate routing tables. However, not all NetScaler features work in a partition.
- NetScaler SDX – carve up physical hardware into multiple virtual machines. Each VM is a full NetScaler VPX, each with its own configuration. No feature limitations.
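For example (the names and the SNIP address are illustrative), a Net Profile that pins a Virtual Server’s back-end connections to one specific SNIP:

```
add netProfile np_dmz -srcIP 192.168.100.10
set lb vserver vs_web -netProfile np_dmz
```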
Network firewall (Layer 4) vs NetScaler Web App Firewall (Layer 7) – Most network firewalls only filter on port numbers and IP addresses. A few of them can filter on HTTP packet contents.
- NetScaler WAF vs next-gen network firewalls – NetScaler has a security feature called Web App Firewall (WAF), which does HTTP inspection/filtering. HTTP packet inspection on next-gen network firewalls is usually signature based, but NetScaler WAF can also be configured with a whitelist to only allow HTTP packets that match the whitelist.
- Put network firewalls in front of NetScalers – NetScaler is not a layer 4 firewall like a Cisco ASA or Palo Alto. Thus you should always put a network firewall in front of your NetScaler, even if you enabled the NetScaler WAF feature.
Next Step