
SonicOS 5.9 Admin Guide

Firewall Settings

Configuring Advanced Access Rule Settings

Firewall Settings > Advanced

To configure advanced access rule options, select Firewall Settings > Advanced under Firewall.

The Firewall Settings > Advanced page includes the following firewall configuration option groups:

Detection Prevention

Enable Stealth Mode - By default, the security appliance responds to incoming connection requests as either “blocked” or “open.” If you enable Stealth Mode, your security appliance does not respond to blocked inbound connection requests. Stealth Mode makes your security appliance essentially invisible to hackers.
Randomize IP ID - Select Randomize IP ID to prevent hackers using various detection tools from detecting the presence of a security appliance. IP packets are given random IP IDs, which makes it more difficult for hackers to “fingerprint” the security appliance.
Decrement IP TTL for forwarded traffic - Time-to-live (TTL) is a value in an IP packet that tells a network router whether or not the packet has been in the network too long and should be discarded. Select this option to decrease the TTL value for packets that have been forwarded and therefore have already been in the network for some time.
Never generate ICMP Time-Exceeded packets - The SonicWall appliance generates Time-Exceeded packets to report when it has dropped a packet because its TTL value has decreased to zero. Select this option if you do not want the SonicWall appliance to generate these reporting packets (see the sketch after this list).
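As a rough illustration (not SonicOS code) of how the two TTL-related options interact, the following Python sketch walks through the forwarding decision for a single packet. The option names and values are placeholders for the settings described above.

# Illustrative sketch only -- not SonicOS code. Shows how the
# "Decrement IP TTL for forwarded traffic" and "Never generate ICMP
# Time-Exceeded packets" options interact on a forwarding decision.

DECREMENT_TTL_FOR_FORWARDED = True      # hypothetical setting
NEVER_GENERATE_TIME_EXCEEDED = False    # hypothetical setting

def forward_decision(ttl):
    """Return (action, notification) for a forwarded packet with the given TTL."""
    if DECREMENT_TTL_FOR_FORWARDED:
        ttl -= 1
    if ttl <= 0:
        # The packet has been in the network too long and is discarded.
        if NEVER_GENERATE_TIME_EXCEEDED:
            return ("drop", None)
        return ("drop", "ICMP Time-Exceeded")
    return ("forward", None)

if __name__ == "__main__":
    for ttl in (64, 2, 1):
        print(ttl, forward_decision(ttl))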

Dynamic Ports

Enable FTP Transformations for TCP port(s) in Service Object – FTP operates on TCP ports 20 and 21, where port 21 is the Control Port and port 20 is the Data Port. However, when non-standard ports are used (for example, 2020 or 2121), SonicWall drops the packets by default because it cannot identify them as FTP traffic. The Enable FTP Transformations for TCP port(s) in Service Object option allows you to select a Service Object that specifies a custom control port for FTP traffic.

To illustrate how this feature works, consider the following example of an FTP server behind the SonicWall listening on port 2121:

a
On the Network > Address Objects page, create an Address Object for the private IP address of the FTP server with the following values:
Name: FTP Server Private
Zone: LAN
Type: Host
IP Address: 192.168.168.2
b
On the Network > Services page, create a custom Service for the FTP Server with the following values:
Name: FTP Custom Port Control
Protocol: TCP(6)
Port Range: 2121 - 2121
c
On the Network > NAT Policies page, create the corresponding NAT Policy, and on the Firewall > Access Rules page, create the corresponding Access Rule.

d
Lastly, on the Firewall Settings > Advanced page, for the Enable FTP Transformations for TCP port(s) in Service Object option, select the FTP Custom Port Control Service Object. A quick way to verify the custom control port is shown in the sketch after these steps.
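Assuming the NAT policy publishes the server on a public address, a short client test such as the following Python sketch can confirm that FTP control traffic is accepted on the custom port 2121 used in this example. The public IP address and credentials are placeholders; substitute your own.

# Illustrative check only: confirm an FTP control connection on the
# custom port used in this example (2121). Host and credentials are
# placeholders; substitute your own.
from ftplib import FTP

ftp = FTP()
ftp.connect("203.0.113.10", 2121, timeout=10)   # public IP/port of the published FTP server
ftp.login("anonymous", "test@example.com")      # or a real account on the server
ftp.set_pasv(True)                              # passive mode; data channel negotiated via PASV
print(ftp.getwelcome())
print(ftp.nlst())                               # a directory listing exercises the data channel
ftp.quit()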
Enable support for Oracle (SQLNet) - Select this option if you have Oracle9i or earlier applications on your network. For Oracle10g or later applications, it is recommended that this option not be selected.

For Oracle9i and earlier applications, the data channel port is different from the control connection port. When this option is enabled, a SQLNet control connection is scanned for a data channel being negotiated. When a negotiation is found, a connection entry for the data channel is created dynamically, with NAT applied if necessary. Within SonicOS, the SQLNet and data channel are associated with each other and treated as a session.

For Oracle10g and later applications, the two ports are the same, so the data channel port does not need to be tracked separately; thus, the option does not need to be enabled.

Enable RTSP Transformations - Select this option to support on-demand delivery of real-time data, such as audio and video. RTSP (Real Time Streaming Protocol) is an application-level protocol for control over delivery of data with real-time properties.

Source Routed Packets

Drop Source Routed Packets (enabled by default) - Clear this check box if you are testing traffic between two specific hosts and you are using source routing.

Connections

The Connections section provides the ability to fine-tune the performance of the appliance to prioritize either optimal performance or support for an increased number of simultaneous connections that are inspected by firewall services. There is no change in the level of security protection provided by either of the DPI Connections settings below. The following connection options are available:

Maximum SPI Connections (DPI services disabled) - This option does not provide SonicWall DPI Security Services protection and optimizes the firewall for the maximum number of connections with only stateful packet inspection enabled.
Maximum DPI Connections (DPI services enabled) - This is the default and recommended setting for most SonicWall deployments.
DPI Connections (DPI services enabled with additional performance optimization) - This option is intended for performance critical deployments. This option trades off the number of maximum DPI connections for an increased firewall DPI inspection throughput.
* 
NOTE: When changing the Connections setting, the SonicWall security appliance must be restarted for the change to be implemented.

The maximum number of connections also depends on whether App Flow is enabled and if an external collector is configured, as well as the physical capabilities of the particular model of SonicWall security appliance. Mousing over the question mark icon next to the Connections heading displays a pop-up table of the maximum number of connections for your specific SonicWall security appliance for the various configuration permutations. The table entry for your current configuration is indicated in the table, as shown in the example below.

Access Rule Service Options

Force inbound and outbound FTP data connections to use default port 20 - The default configuration allows FTP connections from port 20 but remaps outbound traffic to a port such as 1024. If this check box is selected, any FTP data connection through the security appliance must come from port 20 or the connection is dropped. The event is then logged on the security appliance.
Apply firewall rules for intra-LAN traffic to/from the same interface - Applies firewall rules to traffic that is received on a LAN interface and that is destined for the same LAN interface. Typically, this is only necessary when secondary LAN subnets are configured.
Always issue RST for discarded outgoing TCP connections - Issues a TCP/IP reset (RST) flag for discarded outgoing TCP connections. Default is enabled.

IP and UDP Checksum Enforcement

Enable IP header checksum enforcement - Select this to enforce IP header checksums. Packets with incorrect checksums in the IP header are dropped. This option is disabled by default.
Enable UDP checksum enforcement - Select this to enforce UDP packet checksums. Packets with incorrect checksums are dropped. This option is disabled by default.

IPv6 Advanced Configuration

Drop IPv6 Routing Header type 0 packets – Select this to prevent a potential DoS attack that exploits IPv6 Routing Header type 0 (RH0) packets. When this setting is enabled, RH0 packets are dropped unless their destination is this SonicWall security appliance and their Segments Left value is 0. Segments Left specifies the number of route segments remaining before reaching the final destination. Enabled by default. For more information, see http://tools.ietf.org/html/rfc5095.
Decrement IPv6 hop limit for forwarded traffic - Similar to the IPv4 TTL, when this option is selected, the hop limit is decremented for forwarded traffic and the packet is dropped when the hop limit reaches 0. Disabled by default.
Never generate IPv6 ICMP Time-Exceeded packets - Select this option if you do not want the SonicWall appliance to generate Time-Exceeded packets that report when the appliance drops packets because the hop limit has decremented to 0. Disabled by default.
Drop and log network packets whose source or destination address is reserved by RFC - Select this option to reject and log network packets that have a source or destination address defined as reserved for future definition and use, as specified in RFC 4291 for IPv6. Disabled by default.

Configuring Bandwidth Management

Bandwidth Management Overview

Bandwidth management (BWM) is a means of allocating bandwidth resources to critical applications on a network. SonicOS Enhanced offers an integrated traffic shaping mechanism through its ingress and egress BWM interfaces. BWM can be applied to traffic in either the ingress or egress directions, or both.

* 
NOTE: Although BWM is a fully integrated Quality of Service (QoS) system, wherein classification and shaping are performed on the single SonicWall appliance, effectively eliminating the dependency on external systems and thus obviating the need for marking, it is possible to concurrently configure BWM and QoS (layer 2 and/or layer 3 marking) settings on a single Access Rule. This allows those external systems to benefit from the classification performed on the SonicWall even after it has already shaped the traffic. Refer to Firewall Settings > QoS Mapping (NSA Series Only) for BWM QoS details.

Understanding Bandwidth Management

BWM is controlled by the SonicWall Security Appliance on ingress and egress traffic. It allows network administrators to guarantee minimum bandwidth and prioritize traffic based on access rules created in the Firewall > Access Rules page. By controlling the amount of bandwidth to an application or user, the network administrator can prevent a small number of applications or users from consuming all available bandwidth. Balancing the bandwidth allocated to different network traffic and then assigning priorities to traffic improves network performance. The SonicOS provides eight priority queues (0 – 7 or Realtime – Lowest).

Three types of bandwidth management are available:

 

Bandwidth Management Types

Advanced – Enables Advanced Bandwidth Management. Maximum egress and ingress bandwidth limitations can be configured on any interface, per interface, by configuring bandwidth objects, access rules, and application policies and attaching them to the interface.

Global – (Default) All zones can have guaranteed and maximum bandwidth assigned to services and can have prioritized traffic. When Global BWM is enabled on an interface, all of the traffic to and from that interface is bandwidth managed. Default Global BWM queues:
2 – High
4 – Medium: Default priority for all traffic that is not managed by a BWM-enabled Firewall Access Rule or Application Control Policy.
6 – Low

None – Disables BWM.

When Global bandwidth management is enabled on an interface, all traffic to and from that interface is bandwidth managed.

If the bandwidth management type is None, and there are three traffic types that are using an interface, and the link capacity of the interface is 100 Mbps, the cumulative capacity for all three types of traffic is 100 Mbps.

If the bandwidth management type is changed from None to Global, and the available ingress and egress bandwidth is configured at 10 Mbps, then by default, all three traffic types are sent to the medium priority queue.

The medium priority queue, by default, has a guaranteed bandwidth of 50 percent and a maximum bandwidth of 100 percent. If no Global bandwidth management policies are configured, the cumulative link capacity for each traffic type is 10 Mbps.
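As a worked illustration of this example, the following Python sketch converts the default medium-queue percentages into absolute rates for the 10 Mbps interface limit described above. The queue values are the documented defaults; the calculation itself is simply percentage of the configured interface bandwidth.

# Worked example using the defaults described above: converting the
# percentage-based queue settings into absolute rates for a 10 Mbps link.
link_kbps = 10_000          # interface BWM limit from the example (10 Mbps)

medium_queue = {"guaranteed_pct": 50, "maximum_pct": 100}   # default medium queue

guaranteed_kbps = link_kbps * medium_queue["guaranteed_pct"] / 100
maximum_kbps = link_kbps * medium_queue["maximum_pct"] / 100

print(f"Guaranteed: {guaranteed_kbps:.0f} kbps")   # 5000 kbps
print(f"Maximum:    {maximum_kbps:.0f} kbps")      # 10000 kbps (may burst to the full link)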


Packet Queuing

BWM rules each consume memory for packet queuing, so the number of allowed queued packets and rules on SonicOS is limited by platform (values are subject to change):

 

Maximum Queued Packets and Rules Based on Platform

Platform     Max Queued Packets    Max Total BWM Rules
NSA 3500     2080                  100
NSA 4500     2080                  100
NSA 5000     2080                  100
NSA E5500    6420                  100
NSA E6500    6420                  100
NSA E7500    6420                  100
NSA E8500    6420                  100
NSA E8510    6420                  100

Firewall Settings > BWM

BWM works by first enabling bandwidth management on the Firewall Settings > BWM page, enabling BWM on an interface, firewall rule, or app rule, and then allocating the available bandwidth for that interface on the ingress and egress traffic. Individual limits are then assigned to each class of network traffic. By assigning priorities to network traffic, applications requiring a quick response time, such as Telnet, can take precedence over traffic that is less sensitive to response time, such as FTP.

To view the BWM configuration, navigate to the Firewall Settings > BWM page.

This page consists of the following entities:

* 
NOTE: The defaults are set by SonicWall to provide BWM ease-of-use. It is recommended that you review the specific bandwidth needs and enter the values on this page accordingly.
Bandwidth Management Type Option:
Advanced — Any zone can have guaranteed and maximum bandwidth and prioritized traffic assigned per interface.
Global — All zones can have guaranteed and maximum bandwidth assigned to services and can have prioritized traffic.
None — Disables BWM.
* 
NOTE: When you change the Bandwidth Management Type from Global to Advanced, the default BWM actions that are in use in any App Rules policies are automatically converted to Advanced BWM settings.

When you change the Type from Advanced to Global, the default BWM actions are converted to BWM Global-Medium. The firewall does not store your previous action priority levels when you switch the Type back and forth. You can view the conversions on the Firewall > App Rules page.

Priority Column — Displays the priority number and name.
Enable Check box — When checked, the priority queue is enabled.
Guaranteed and Maximum\Burst Text Field – Sets the guaranteed and maximum/burst rates. The corresponding Enable check box must be checked in order for the rate to take effect. These rates are specified as percentages; the configured bandwidth on an interface is used in calculating the absolute value. The sum of all guaranteed bandwidth must not exceed 100%, and the guaranteed bandwidth must not be greater than the maximum bandwidth per queue (see the validation sketch after the note below).
* 
NOTE: The default settings for this page consist of three priorities with preconfigured guaranteed and maximum bandwidth. The medium priority has the highest guaranteed value because this priority queue is used by default for all traffic not governed by a BWM-enabled policy.
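The two constraints just described (the guaranteed percentages of the enabled queues must not sum to more than 100%, and each queue's guaranteed value must not exceed its maximum/burst value) can be expressed as a small validation check. The queue values in the following Python sketch are illustrative and are not the firewall defaults.

# Minimal validation sketch for the two rules stated above:
#   1) the sum of guaranteed bandwidth across enabled queues must not exceed 100%
#   2) per queue, guaranteed must not exceed maximum/burst
# Queue values below are illustrative, not the firewall defaults.
queues = {
    # priority: (enabled, guaranteed %, maximum/burst %)
    "2 High":   (True, 30, 100),
    "4 Medium": (True, 50, 100),
    "6 Low":    (True, 20, 100),
}

def validate(queues):
    errors = []
    total_guaranteed = 0
    for name, (enabled, guaranteed, maximum) in queues.items():
        if not enabled:
            continue
        if guaranteed > maximum:
            errors.append(f"{name}: guaranteed {guaranteed}% exceeds maximum {maximum}%")
        total_guaranteed += guaranteed
    if total_guaranteed > 100:
        errors.append(f"sum of guaranteed bandwidth is {total_guaranteed}% (> 100%)")
    return errors

print(validate(queues) or "settings are consistent")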

Action Objects

Action Objects define how the App Rules policy reacts to matching events. You can customize an action or select one of the predefined default actions. The predefined actions are displayed in the App Control Policy Settings page when you add or edit a policy from the App Rules page.

Custom BWM actions behave differently than the default BWM actions. Custom BWM actions are configured by adding a new action object from the Firewall > Action Objects page and selecting the Bandwidth Management action type. Custom BWM actions and policies using them retain their priority level setting when the Bandwidth Management Type is changed from Global to Advanced, and from Advanced to Global.

A number of BWM action options are also available in the predefined, default action list. The BWM action options change depending on the Bandwidth Management Type setting on the Firewall Settings > BWM page. If the Bandwidth Management Type is set to Global, all eight levels of BWM are available. If the Bandwidth Management Type is set to Advanced, no priorities are set. The priorities are set by configuring a bandwidth object under Firewall > Bandwidth Objects.

The following table lists the predefined default actions that are available when adding a policy.

 

Available Default BWM Actions

If BWM Type = Global:
BWM Global-Realtime
BWM Global-Highest
BWM Global-High
BWM Global-Medium High
BWM Global-Medium
BWM Global-Medium Low
BWM Global-Low
BWM Global-Lowest

If BWM Type = Advanced:
Advanced BWM High
Advanced BWM Medium
Advanced BWM Low

Glossary

Bandwidth Management (BWM): Refers to any of a variety of algorithms or methods used to shape or police traffic. Shaping often refers to the management of outbound traffic, while policing often refers to the management of inbound traffic (also known as admission control). There are many different methods of bandwidth management, including various queuing and discarding techniques, each with its own design strengths. SonicWall employs a Token-Based Class-Based Queuing method for inbound and outbound BWM, as well as a discard mechanism for certain types of inbound traffic. A minimal token-bucket sketch illustrating the rate-limiting idea follows this glossary.

Guaranteed Bandwidth: A declared percentage of the total available bandwidth on an interface which will always be granted to a certain class of traffic. Applicable to both inbound and outbound BWM. The total Guaranteed Bandwidth across all BWM rules cannot exceed 100% of the total available bandwidth. SonicOS Enhanced 5.0 and higher enhances the Bandwidth Management feature to provide rate limiting functionality. You can now create traffic policies that specify maximum rates for Layer 2, 3, or 4 network traffic. The Guaranteed Bandwidth can also be set to 0%.

Ingress BWM: The ability to shape the rate at which traffic enters a particular interface. For TCP traffic, actual shaping occurs when the rate of the ingress flow can be adjusted by the TCP Window Adjustment mechanism. For UDP traffic, a discard mechanism is used since UDP has no native feedback controls.

Maximum Bandwidth: A declared percentage of the total available bandwidth on an interface defining the maximum bandwidth to be allowed to a certain class of traffic. Applicable to both inbound and outbound BWM. Used as a throttling mechanism to specify a bandwidth rate limit. The Bandwidth Management feature is enhanced to provide rate limiting functionality. You can now create traffic policies that specify maximum rates for Layer 2, 3, or 4 network traffic. This enables bandwidth management in cases where the primary WAN link fails over to a secondary connection that cannot handle as much traffic. The Maximum Bandwidth can be set to 0%, which prevents all traffic.

Egress BWM: Conditioning the rate at which traffic is sent out an interface. Outbound BWM uses a credit (or token) based queuing system with 8 priority rings to service different types of traffic, as classified by Access Rules.

Priority: An additional dimension used in the classification of traffic. SonicOS uses eight priority values (0 = highest, 7 = lowest) to comprise the queue structure used for BWM. Queues are serviced in the order of their priority.

Queuing: A method of making effective use of the available bandwidth on a link. Queues are commonly employed to sort and separately manage traffic after it has been classified.
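As a minimal illustration of the token-based rate-limiting idea referenced in this glossary (not the SonicOS engine), the following Python sketch enforces a per-class maximum rate with a token bucket. The rate and burst values are illustrative.

# Minimal token-bucket sketch of a per-class maximum-bandwidth limit.
# This illustrates the general technique, not the SonicOS BWM engine.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_len):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True        # within the maximum rate: transmit
        return False           # exceeds the maximum rate: delay or drop (Violation Action)

# Roughly 1 Mbps maximum with a 16 KB burst allowance (illustrative values).
bucket = TokenBucket(rate_bytes_per_sec=125_000, burst_bytes=16_384)
print(bucket.allow(1500))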

Global Bandwidth Management

* 
NOTE: This section uses Global BWM as the Bandwidth Management Type (Firewall Settings > BWM).

Global Bandwidth Management can be configured using the following methods:

Configuring Global Bandwidth Management

To set the Bandwidth Management type to Global:
1
On the SonicWall Security Appliance, go to Firewall Settings > BWM.
2
Set the Bandwidth Management Type option to Global.

3
Enable the priorities that you want by selecting the appropriate check boxes in the Enable column.
* 
NOTE: You must enable the priorities in this dialog to be able to configure these priorities in Access Rules, App Rules, and Action Objects.
4
Enter the Guaranteed bandwidth percentage that you want for each selected priority.
5
Enter the Maximum\Burst bandwidth percentage that you want for each selected priority.
6
Click Accept.

Configuring Global BWM on an Interface

To configure Global BWM on an interface:
1
On the SonicWall Security Appliance, go to Network > Interfaces.
2
Click the Configure button for the appropriate interface.
3
Click the Advanced tab.

4
Under Bandwidth Management, select the Enable Interface Egress Bandwidth Limitation option.
5
When this option is selected, the total egress traffic on the interface is limited to the amount specified in the Maximum Interface Egress Bandwidth (kbps) box. When this option is not selected, no bandwidth limitation is set at the interface level, but egress traffic can still be shaped using other options.
6
In the Maximum Interface Egress Bandwidth (kbps) box, enter the maximum egress bandwidth for the interface (in kilobits per second).
7
Select the Enable Interface Ingress Bandwidth Limitation option.
8
When this option is selected, the total ingress traffic is limited to the amount specified in the Maximum Interface Ingress Bandwidth box. When this option is not selected, no bandwidth limitation is set at the interface level, but ingress traffic can still be shaped using other options.
9
In the Maximum Interface Ingress Bandwidth (kbps) box, enter the maximum ingress bandwidth for the interface (in kilobits per second).
10
Click OK.

Configuring BWM in an Access Rule

You can configure BWM in each Access Rule. This method configures the direction in which to apply BWM and sets the priority queue.

* 
NOTE: Before you can configure any priorities in an Access Rule, you must first enable the priorities that you want to use on the Firewall Settings > BWM page. Refer to the Firewall Settings > BWM page to determine which priorities are enabled. If you select a Bandwidth Priority that is not enabled on the Firewall Settings > BWM page, the traffic is automatically mapped to priority 4 (Medium). See Configuring Global Bandwidth Management.

Priorities are listed in the Access Rules dialog Bandwidth Priority list as follows:

0 Realtime
1 Highest
2 High
3 Medium High
4 Medium
5 Medium Low
6 Low
7 Lowest
To configure BWM in an Access Rule:
1
Navigate to the Firewall > Access Rules page.
2
Click the Configure icon for the rule you want to edit. The Edit Rule General tab dialog is displayed.
3
Click the BWM tab.

4
Select the Enable Egress Bandwidth Management ('allow' rules only) option.
5
Select the appropriate egress priority from the Bandwidth Priority list.
6
Select the Enable Ingress Bandwidth Management ('allow' rules only) option.
7
Select the appropriate ingress priority from the Bandwidth Priority list.
8
Click OK.

Configuring BWM in an Action Object

If you do not want to use the predefined Global BWM actions or policies, you have the option to create a new one that fits your needs.

To create a new BWM action object for Global bandwidth management:
1
Navigate to the Firewall > Action Objects page.
2
Click Add New Action Object at the bottom of the page. The Add/Edit Action Object dialog is displayed.

3
In the Action Name field, enter a name for the action object.
4
In the Action drop-down menu, select Bandwidth Management to control and monitor Application Level bandwidth usage. New options are displayed.

5
From the Bandwidth Aggregation Method drop-down menu, select either Per Policy or Per Action. Per Policy is the default.
6
Select the Enable Egress Bandwidth Management option. The Bandwidth Object option becomes available.
7
From the Bandwidth Object drop-down menu, select an existing bandwidth object or create a new bandwidth object.
8
If you selected:
An existing option, go to Step 16.
Create new Bandwidth Object, the Add Bandwidth Object dialog displays.

9
Enter a meaningful name in the Name field.
10
In the Guaranteed Bandwidth field, enter the amount of bandwidth that this bandwidth object guarantees, and then select either kbps or Mbps from the drop-down menu.
11
In the Maximum Bandwidth field, enter the maximum bandwidth for this bandwidth object and then select either kbps or Mbps from the drop-down menu.
12
From the Traffic Priority drop-down menu, select the priority for this bandwidth object, from 0 Realtime to 7 Lowest. The default is 0 Realtime.
13
From the Violation Action drop-down menu, select either Delay or Drop for the action to be taken. The default is Delay.
14
Enter an optional comment in the Comment field.
15
Click OK.
16
Select the Enable Ingress Bandwidth Management option, and then select an existing bandwidth object or create a new bandwidth object.
17
If you selected:
An existing option, go to Step 18.
Create new Bandwidth Object, the Add Bandwidth Object dialog displays. Follow Step 9 through Step 15.
18
Select Enable Tracking Bandwidth Usage.
19
Click OK.

Configuring Application Rules

Configuring BWM in an Application Rule allows you to create policies that regulate bandwidth consumption by specific file types within a protocol, while allowing other file types to use unlimited bandwidth. This enables you to distinguish between desirable and undesirable traffic within the same protocol.

Application Rule BWM supports the following Policy Types:

SMTP Client
HTTP Client
HTTP Server
FTP Client
FTP Client File Upload
FTP Client File Download
FTP Data Transfer
POP3 Client
POP3 Server
Custom Policy
IPS Content
App Control Content
CFS
* 
NOTE: You must first enable BWM as follows before you can configure BWM in an Application Rule.
Before you configure BWM in an App Rule:
1
Enable the priorities you want to use in Firewall Settings > BWM. See Configuring Global Bandwidth Management.
2
Enable BWM in an Action Object. See Configuring BWM in an Action Object.
3
Configure BWM on the interface. See Configuring Global BWM on an Interface.
To configure BWM in an Application Rule:
1
Navigate to the Firewall > App Rules page.

2
Under App Rules Policies, in the Heading row, click Action. The page will sort by Action type.
3
Click the Configure icon in the Configure column for the policy you want to configure. The App Control Policy Settings dialog is displayed.

4
In the Action Object list, select the BWM action object that you want.
5
Click OK.

Configuring App Flow Monitor

BWM can also be configured from the App Flow Monitor page by selecting a service type application or a signature type application and then clicking the Create Rule button. The Bandwidth Management options available there depend on the enabled priority levels in the Global Priority Queue table on the Firewall Settings > BWM page. The priority levels enabled by default are High, Medium, and Low.

* 
NOTE: You must have SonicWall Application Visualization enabled before proceeding.
To configure BWM using the App Flow Monitor:
1
Navigate to the Dashboard > App Flow Monitor page.

2
Select the service-based applications or signature-based applications to which you want to apply global BWM.
* 
NOTE: General applications cannot be selected. Service-based applications and signature-based applications cannot be mixed in a single rule.

Creating a rule for service-based applications will result in creating a firewall access rule, and creating a rule for signature-based applications will create an application control policy.

3
Click Create Rule. The Create Rule pop-up is displayed.

4
Select the Bandwidth Manage radio button, and then select a global BWM priority.
5
Click Create Rule. A confirmation pop-up is displayed.

6
Click OK.
7
Navigate to Firewall > Access Rules page (for service-based applications) and Firewall > App Rules (for signature-based applications) to verify that the rule was created.
* 
NOTE: For service-based applications, the new rule is identified with a tag in the Comments column and a prefix of ~services=<servicename> in the Service column. For example, ~services=NTP&t=1306361297.

For signature-based applications, the new rule is identified with the prefix ~BWM_Global-<priority>=~catname=<app_name> in the Name column and the prefix ~catname=<app_name> in the Object column.

Advanced Bandwidth Management

Advanced Bandwidth Management enables administrators to manage specific classes of traffic based on their priority and maximum bandwidth settings. Advanced Bandwidth Management consists of three major components:

Classifier – classifies packets that pass through the firewall into the appropriate traffic class.
Estimator – estimates and calculates the bandwidth used by a traffic class during a time interval to determine if that traffic class has available bandwidth.
Scheduler – schedules traffic for transmission based on the bandwidth status of the traffic class provided by the estimator.

This graphic illustrates the basic concepts of Advanced Bandwidth Management.

Advanced Bandwidth Management concepts

Bandwidth management configuration is based on policies which specify bandwidth limitations for traffic classes. A complete bandwidth management policy consists of two parts: a classifier and a bandwidth rule.

A classifier specifies the actual parameters, such as priority, guaranteed bandwidth, and maximum bandwidth, and is configured in a bandwidth object. Classifiers identify and organize packets into traffic classes by matching specific criteria.

A bandwidth rule is an access rule or application rule in which a bandwidth object is enabled. Access rules and application rules are configured for specific interfaces or interface zones.

The first step in bandwidth management is that all packets that pass through the SonicOS firewall are assigned a classifier (class tag). The classifiers identify packets as belonging to a particular traffic class. Classified packets are then passed to the BWM engine for policing and shaping. The SonicOS uses two types of classifiers:

Access Rules
Application Rules

The following table shows the classifiers that are configured in a bandwidth object:

 

Bandwidth object classifiers

Guaranteed Bandwidth – The bandwidth that is guaranteed to be provided for a particular traffic class.

Maximum Bandwidth – The maximum bandwidth that a traffic class can utilize.

Traffic Priority – The priority of the traffic class: 0 is the highest priority, 7 is the lowest priority.

Violation Action – The firewall action that occurs when traffic exceeds the maximum bandwidth: Delay (packets are queued and sent when possible) or Drop (packets are dropped immediately).

After packets have been tagged with a specific traffic class, the BWM engine gathers them for policing and shaping based on the bandwidth settings that have been defined in a bandwidth object, enabled in an access rule, and attached to application rules.
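The following Python sketch is a simplified illustration of the classify-then-shape flow described above. The rule criteria, class tags, and per-class settings are hypothetical and do not represent the SonicOS rule engine.

# Illustrative sketch of the classify-then-shape flow described above.
# Rule criteria, class tags, and per-class settings are hypothetical.

RULES = [
    # (description, match function, traffic class tag)
    ("FTP control",    lambda p: p["proto"] == "tcp" and p["dport"] == 21,   "class-ftp"),
    ("VoIP signaling", lambda p: p["proto"] == "udp" and p["dport"] == 5060, "class-voip"),
]

# Per-class settings as they would come from a bandwidth object.
CLASSES = {
    "class-ftp":  {"priority": 5, "max_kbps": 2000, "violation": "delay"},
    "class-voip": {"priority": 0, "max_kbps": 512,  "violation": "drop"},
    "default":    {"priority": 4, "max_kbps": None, "violation": "delay"},
}

def classify(packet):
    """Return the class tag for a packet by matching rule criteria in order."""
    for _desc, match, tag in RULES:
        if match(packet):
            return tag
    return "default"

if __name__ == "__main__":
    for pkt in ({"proto": "tcp", "dport": 21, "len": 1500},
                {"proto": "udp", "dport": 5060, "len": 200},
                {"proto": "tcp", "dport": 443, "len": 1500}):
        tag = classify(pkt)
        print(pkt["dport"], "->", tag, CLASSES[tag])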

Classifiers also identify the direction of packets in the traffic flow. Classifiers can be set for either the egress, ingress, or both directions. For Bandwidth Management, the terms ingress and egress are defined as follows:

Ingress – Traffic from initiator to responder in a particular traffic flow.
Egress – Traffic from responder to initiator in a particular traffic flow.

For example, a client behind Interface X0 has a connection to a server which is behind Interface X1. The following table shows:

Direction of traffic flow in each direction for client and server
Direction of traffic on each interface
Direction indicated by the BWM classifier
 

Direction of traffic

Direction of Traffic Flow    Direction of Interface X0    Direction of Interface X1    BWM Classifier
Client to Server             Egress                       Ingress                      Egress
Server to Client             Ingress                      Egress                       Ingress

To be compatible with traditional bandwidth management settings in WAN zones, the terms inbound and outbound are still supported to define traffic direction. These terms are only applicable to active WAN zone interfaces.

Outbound – Traffic from LAN\DMZ zone to WAN zone (Egress).
Inbound – Traffic from WAN zone to LAN\DMZ zone (Ingress).

Elemental Bandwidth Settings

The Elemental Bandwidth Settings feature enables a bandwidth object to be applied to individual elements under a parent traffic class. Elemental Bandwidth Settings is a sub-option of Firewall > Bandwidth Objects. The following table shows the parameters that are configured under Elemental Bandwidth Settings.

 

Elemental Bandwidth Settings

Enable Per-IP Bandwidth Management – When enabled, the maximum elemental bandwidth setting applies to each IP address under the parent traffic class.

Maximum Bandwidth – The maximum elemental bandwidth that can be allocated to an IP address under the parent traffic class. The maximum elemental bandwidth cannot be greater than the maximum bandwidth of its parent class.

When you enable Per-IP Bandwidth Management, the IP address of the initiator is used as the key to identify an elemental traffic flow. The Responder IP address is ignored.
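The following Python sketch illustrates the per-IP (elemental) accounting idea: the initiator IP address is the only lookup key, and each initiator is held to its own elemental maximum under the parent class. The limits and data structures are illustrative, not the SonicOS implementation.

# Illustrative sketch of per-IP (elemental) accounting under a parent class.
# The maximum values are placeholders; the initiator IP is the only lookup key
# and the responder IP is ignored, as described above.
from collections import defaultdict

PARENT_MAX_KBPS = 10_000      # parent traffic class maximum
PER_IP_MAX_KBPS = 1_000       # elemental maximum (must not exceed the parent's)

usage_kbps = defaultdict(float)   # keyed on initiator IP only

def admit(initiator_ip, flow_kbps):
    """Admit a flow only if the initiator stays within its per-IP maximum."""
    if usage_kbps[initiator_ip] + flow_kbps > PER_IP_MAX_KBPS:
        return False
    usage_kbps[initiator_ip] += flow_kbps
    return True

print(admit("10.0.0.5", 800))   # True
print(admit("10.0.0.5", 400))   # False: would exceed the 1000 kbps per-IP maximum
print(admit("10.0.0.6", 400))   # True: a different initiator has its own allotment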

Zone-Free Bandwidth Management

The zone-free bandwidth management feature enables bandwidth management on all interfaces regardless of their zone assignments. Previously, bandwidth management only applied to these zones:

LAN\DMZ to WAN\VPN
WAN\VPN to LAN\DMZ

In SonicOS 5.9, zone-free bandwidth management can be performed across all interfaces regardless of zone.

Zone-free bandwidth management allows administrators to configure the maximum bandwidth limitation independently, in either the ingress or egress direction, or both, and apply it to any interfaces using Access Rules and Application Rules.

* 
NOTE: Interface bandwidth limitation is only available on physical interfaces. Failover and load balancing configuration does not affect interface bandwidth limitations.

Weighted Fair Queuing

Traditionally, SonicOS bandwidth management distributes traffic to 8 queues based on the priority of the traffic class of the packets. These 8 queues operate with strict priority queuing. Packets with the highest priority are always transmitted first.

Strict priority queuing can cause high priority traffic to monopolize all of the available bandwidth on an interface, and low priority traffic will consequently be stuck in its queue indefinitely. Under strict priority queuing, the scheduler always gives precedence to higher priority queues. This can result in bandwidth starvation to lower priority queues.

Weighted Fair Queuing (WFQ) alleviates the problem of bandwidth starvation by servicing packets from each queue in a round-robin manner, so that all queues are serviced fairly within a given time interval. High priority queues get more service and lower priority queues get less service. No queue gets all of the service because of its high priority, and no queue is left unserviced because of its low priority.

For example, Traffic Class A is configured as Priority 1 with a maximum bandwidth of 400 kbps. Traffic Class B is configured as Priority 3 with a maximum bandwidth of 600 kbps. Both traffic classes are queued to an interface that has a maximum bandwidth of only 500 kbps. Both queues are serviced in a round-robin manner based on their priority, so both queues are serviced, but Traffic Class A is transmitted faster than Traffic Class B, as shown in the following table and in the scheduling sketch after it.

The following table shows the shaped bandwidth for each consecutive sampling interval:

 

Shaped Bandwidth for Consecutive Sampling Intervals

                     Traffic Class A                 Traffic Class B
Sampling Interval    Incoming kbps    Shaped kbps    Incoming kbps    Shaped kbps
1                    500              380            500              120
2                    500              350            500              150
3                    400              300            800              200
4                    600              400            400              100
5                    200              180            600              320
6                    200              200            250              250
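As a rough illustration of the weighted round-robin idea described above (not the SonicOS scheduler), the following Python sketch services every non-empty queue each round and gives higher-priority queues a larger share. The priorities, weights, and queue contents are invented for the example.

# Minimal weighted-fair-queuing sketch: every non-empty queue is serviced each
# round, with higher-priority queues receiving a larger share. Weights and
# queue contents are illustrative; this is not the SonicOS scheduler.
from collections import deque

# Priority 0 is highest; give it the largest weight (packets serviced per round).
WEIGHTS = {0: 4, 3: 2, 6: 1}

queues = {
    0: deque(f"A{i}" for i in range(6)),   # e.g. a higher-priority traffic class
    3: deque(f"B{i}" for i in range(6)),   # e.g. a medium-priority traffic class
    6: deque(f"C{i}" for i in range(6)),   # e.g. a lower-priority traffic class
}

def service_round(queues):
    """Transmit up to WEIGHTS[p] packets from each non-empty queue, highest priority first."""
    sent = []
    for prio in sorted(queues):
        for _ in range(WEIGHTS[prio]):
            if queues[prio]:
                sent.append(queues[prio].popleft())
    return sent

while any(queues.values()):
    print(service_round(queues))   # every queue appears in early rounds; none is starved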

Configuring Advanced Bandwidth Management

Advanced Bandwidth Management is configured as follows:

Enabling Advanced Bandwidth Management

To enable Advanced Bandwidth Management:
1
On the SonicWall Security Appliance, go to Firewall Settings > BWM.
2
Set the Bandwidth Management Type option to Advanced.

3
Click Accept.
* 
NOTE: When Advanced BWM is selected, the priorities fields are disabled and cannot be set here. Under Advanced BWM, the priorities are set in bandwidth policies. See Configuring Bandwidth Policies.

Configuring Bandwidth Policies

Bandwidth policies are configured as follows:

Configuring a Bandwidth Object
To configure a bandwidth object:
1
On the SonicWall Security Appliance, go to Firewall > Bandwidth Objects.

2
Do one of the following:
Click the Add button to create a new Bandwidth Object.
Click the Configure button of the Bandwidth Object you want to change.

3
Click the General tab.
4
In the Name box, enter a name for this bandwidth object.
5
In the Guaranteed Bandwidth box, enter the amount of bandwidth that this bandwidth object will guarantee to provide for a traffic class (in kbps or Mbps).
6
In the Maximum Bandwidth box, enter the maximum amount of bandwidth that this bandwidth object will provide for a traffic class.
* 
NOTE: The actual allocated bandwidth may be less than this value when multiple traffic classes compete for a shared bandwidth.
7
In the Traffic Priority box, enter the priority that this bandwidth object will provide for a traffic class. The highest priority is 0. The lowest priority is 7.
* 
NOTE: When multiple traffic classes compete for shared bandwidth, classes with the highest priority are given precedence.
8
In the Violation Action box, enter the action that this bandwidth object will provide (delay or drop) when traffic exceeds the maximum bandwidth setting.
Delay specifies that excess traffic packets will be queued and sent when possible.
Drop specifies that excess traffic packets will be dropped immediately.
9
In the Comment box, enter a text comment or description for this bandwidth object.
Enabling Elemental Bandwidth Management

Elemental Bandwidth Management enables the SonicOS to enforce bandwidth rules and policies on each individual IP that passes through the firewall.

To enable elemental bandwidth management in a bandwidth object:
1
On the SonicWall Security Appliance, go to Firewall > Bandwidth Objects.
2
Click the Configure button of the Bandwidth Object you want to change.

3
Click the Elemental tab.
4
Select the Enable Per-IP Bandwidth Management option.
5
In the Maximum Bandwidth box, enter the maximum elemental bandwidth that can be allocated to an individual IP address under the parent traffic class.
* 
NOTE: When enabled, the maximum elemental bandwidth setting applies to each individual IP under the parent traffic class.
Enabling a Bandwidth Object in an Access Rule

Bandwidth objects (and their configurations) can be enabled in Access Rules.

To enable a bandwidth object in an Access Rule:
1
On the SonicWall Security Appliance, go to Firewall > Access Rules.
2
Do one of the following:
Click the Add button to create a new Access Rule.
Click the Configure button for the appropriate Access Rule.
3
Click the BWM tab.

4
To enable a bandwidth object for the egress direction, under Bandwidth Management, select the Enable Egress Bandwidth Management check box.
5
From the Select a Bandwidth Object list, select the bandwidth object you want for the egress direction.
6
To enable a bandwidth object for the ingress direction, under Bandwidth Management, select the Enable Ingress Bandwidth Management check box.
7
From the Select a Bandwidth Object list, select the bandwidth object you want for the ingress direction.
8
To enable bandwidth usage tracking, select the Enable Tracking Bandwidth Usage option.
9
Click OK.
Enabling a Bandwidth Object in an Action Object
To enable a bandwidth object in an action object:
1
On the SonicWall Security Appliance, go to Firewall > Action Objects.
2
If creating a new action object, in the Action Name list, enter a name for the action object.
3
From the Action list, select Bandwidth Management.

4
In the Bandwidth Aggregation Method list, select the appropriate bandwidth aggregation method.
5
To enable bandwidth management in the egress direction, select the Enable Egress Bandwidth Management option.
6
From the Bandwidth Object list, select the bandwidth object for the egress direction.
7
To enable bandwidth management in the ingress direction, select the Enable Ingress Bandwidth Management option.
8
From the Bandwidth Object list, select the bandwidth object for the ingress direction.
9
To enable bandwidth usage tracking, select the Enable Tracking Bandwidth Usage option.

Setting Interface Bandwidth Limitations

To set the bandwidth limitations for an interface:
1
On the SonicWall Security Appliance, go to Network > Interfaces.
2
Click the Configure button for the appropriate interface.
3
Click the Advanced tab.

4
Under Bandwidth Management, select the Enable Interface Egress Bandwidth Limitation option. This option is not selected by default.

When this option is selected and BWM Management Type is set to:

Global, if there isn’t a corresponding Access Rule or App Rule, the total egress traffic on the interface is limited to the amount specified in the Maximum Interface Egress Bandwidth (kbps) field.
Advanced, the maximum available egress BWM is defined, but as advanced BWM is policy based, the limitation is not enforced unless there is a corresponding Access Rule or App Rule.

When this option is not selected, no bandwidth limitation is set at the interface level, but egress traffic can still be shaped using other options.

5
In the Maximum Interface Egress Bandwidth (kbps) box, enter the maximum egress bandwidth for the interface (in kilobits per second). The default is 384.000000 Kbps.
6
Select the Enable Interface Ingress Bandwidth Limitation option. This option is not selected by default.

When this option is selected and BWM Management Type is set to:

Global, if there isn’t a corresponding Access Rule or App Rule, the total ingress traffic on the interface is limited to the amount specified in the Maximum Interface Ingress Bandwidth (kbps) field.
Advanced, the maximum available ingress BWM is defined, but as advanced BWM is policy based, the limitation is not enforced unless there is a corresponding Access Rule or App Rule.

When this option is not selected, no bandwidth limitation is set at the interface level, but ingress traffic can still be shaped using other options.

7
In the Maximum Interface Ingress Bandwidth (kbps) box, enter the maximum ingress bandwidth for the interface (in kilobits per second). The default is 384.000000 Kbps.
8
Click OK.

Upgrading to Advanced Bandwidth Management

Advanced Bandwidth Management uses Bandwidth Objects as the configuration method. Bandwidth objects are configured under Firewall > Bandwidth Objects, and can then be enabled in Access Rules.

Traditional Bandwidth Management configuration is not compatible with SonicOS 5.9 firmware. However, to ensure that customers can maintain their current network settings, they can use the Advanced Bandwidth Management Upgrade feature when they install the SonicOS 5.9 firmware.

The Advanced Bandwidth Upgrade feature automatically converts all active, valid, traditional BWM configurations to the Bandwidth Objects design model.

In a traditional BWM configuration, the BWM engine only affects traffic when it is transmitted through the primary WAN interface or the active load balancing WAN interface. Traffic that does not pass through these interfaces is not subject to bandwidth management, regardless of the Access Rule or App Rule settings.

Under Advanced Bandwidth Management, the BWM engine can enforce Bandwidth Management settings on any interface.

During the Advanced Bandwidth Management Upgrade process, the SonicOS translates the traditional BWM settings into a default Bandwidth Object and links it to the original classifier rule (Access Rule or App Rule). The auto-generated default Bandwidth Object inherits all the BWM parameters for both the Ingress and Egress directions.

The two following graphics show the traditional BWM settings. The graphic that follows them shows the new Bandwidth Objects which are automatically generated during the Advanced Bandwidth Management Upgrade process.

This graphic shows the traditional Access Rule settings from the Firewall > Access Rules > Configure dialog:

This graphic shows the traditional Action Object settings from the Firewall > Action Object > Configure dialog:

The following graphic shows the four new Bandwidth Objects which are automatically generated during the Advanced Bandwidth Management Upgrade process. These settings can be viewed on the Firewall > Bandwidth Objects screen.

 

Configuring Flood Protection

Firewall Settings > Flood Protection

The Firewall Settings > Flood Protection page lets you manage TCP (Transmission Control Protocol) traffic settings and view statistics on TCP Traffic through the security appliance.


TCP Settings

The TCP Settings section allows you to:

Enforce strict TCP compliance with RFC 793 and RFC 1122 – Select to ensure strict compliance with several TCP timeout rules. This setting maximizes TCP security, but it may cause problems with the Window Scaling feature for Windows Vista users. When this option is selected, the Enable TCP handshake enforcement option becomes active.
Enable TCP handshake enforcement – Require a successful three-way TCP handshake for all TCP connections.
Enforce strict TCP compliance with RFC 5961 – Select to ensure compliance with IETF RFC 5961. RFC 5961 protects against vulnerability CVE-2004-0230 by stopping spoofed off-path TCP packet injection attacks. This option is selected by default.
* 
CAUTION: For maximum security, it is recommended that all client devices be updated to comply with RFC 5961. Disabling this option is not recommended; do so with caution and only if legacy client devices have not been updated to follow RFC 5961 and RST floods are occurring.
Enable TCP checksum enforcement – If an invalid TCP checksum is calculated, the packet is dropped.
Default TCP Connection Timeout – The default time assigned to Access Rules for TCP traffic. If a TCP session is active for a period in excess of this setting, the TCP connection is cleared by the SonicWall. The default value is 5 minutes, the minimum value is 1 minute, and the maximum value is 999 minutes.
* 
NOTE: Setting excessively long connection time-outs slows the reclamation of stale resources, and in extreme cases, could lead to exhaustion of the connection cache.
Maximum Segment Lifetime (seconds) – Determines the number of seconds that any TCP packet is valid before it expires. The minimum value is 1 second, the maximum value is 60 seconds, and the default value is 8 seconds.

This setting is also used to determine the amount of time (calculated as twice the Maximum Segment Lifetime, or 2MSL) that an actively closed TCP connection remains in the TIME_WAIT state to ensure that the proper FIN/ACK exchange has occurred to cleanly close the TCP connection.

SYN Flood Protection Methods

SYN/RST/FIN Flood protection helps to protect hosts behind the SonicWall from Denial of Service (DoS) or Distributed DoS attacks that attempt to consume the host’s available resources by creating one of the following attack mechanisms:

Sending TCP SYN packets, RST packets, or FIN packets with invalid or spoofed IP addresses.
Creating excessive numbers of half-opened TCP connections.

SYN Flood Protection Using Stateless Cookies

The method of SYN flood protection employed starting with SonicOS uses stateless SYN Cookies, which increases the reliability of SYN Flood detection and also improves overall resource utilization on the SonicWall. With stateless SYN Cookies, the SonicWall does not have to maintain state on half-opened connections. Instead, it uses a cryptographic calculation (rather than randomness) to arrive at SEQr (see Understanding a TCP Handshake).
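The following Python sketch illustrates the general stateless SYN cookie technique rather than SonicOS's exact calculation: SEQr is derived from a keyed hash of the connection 4-tuple, a secret, and a coarse timestamp, so no per-connection state is stored until the returning ACK is validated. The secret, time granularity, and addresses are placeholders.

# Simplified stateless SYN-cookie sketch (general technique, not the exact
# SonicOS calculation): SEQr is derived cryptographically from the connection
# 4-tuple, a secret, and a coarse timestamp, so no half-open state is stored.
import hashlib, hmac, time

SECRET = b"rotate-me-periodically"

def syn_cookie(src_ip, src_port, dst_ip, dst_port, when=None):
    """Return a 32-bit SEQr for the SYN/ACK, derived instead of stored."""
    t = int((when if when is not None else time.time()) // 64)  # 64-second granularity
    msg = f"{src_ip}:{src_port}>{dst_ip}:{dst_port}|{t}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")

def ack_is_valid(src_ip, src_port, dst_ip, dst_port, ack_number, now=None):
    """The final ACK must equal SEQr + 1 for the current or previous time slice."""
    now = now if now is not None else time.time()
    for age in (0, 64):
        expected = (syn_cookie(src_ip, src_port, dst_ip, dst_port, now - age) + 1) & 0xFFFFFFFF
        if ack_number == expected:
            return True
    return False

seqr = syn_cookie("198.51.100.7", 51515, "203.0.113.10", 443)
print(ack_is_valid("198.51.100.7", 51515, "203.0.113.10", 443, (seqr + 1) & 0xFFFFFFFF))  # True
print(ack_is_valid("198.51.100.7", 51515, "203.0.113.10", 443, 12345))                    # almost certainly False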

Layer-Specific SYN Flood Protection Methods

SonicOS provides several protections against SYN Floods generated from two different environments: trusted (internal) or untrusted (external) networks. Attacks from untrusted WAN networks usually occur on one or more servers protected by the firewall. Attacks from the trusted LAN networks occur as a result of a virus infection inside one or more of the trusted networks, generating attacks on one or more local or remote hosts.

To provide a firewall defense to both attack scenarios, SonicOS provides two separate SYN Flood protection mechanisms on two different layers. Each gathers and displays SYN Flood statistics and generates log messages for significant SYN Flood events.

SYN Proxy (Layer 3) – This mechanism shields servers inside the trusted network from WAN-based SYN flood attacks, using a SYN Proxy implementation to verify the WAN clients before forwarding their connection requests to the protected server. You can enable SYN Proxy only on WAN interfaces.
SYN Blacklisting (Layer 2) – This mechanism blocks specific devices from generating or forwarding SYN flood attacks. You can enable SYN Blacklisting on any interface.

Understanding SYN Watchlists

The internal architecture of both SYN Flood protection mechanisms is based on a single list of Ethernet addresses that are the most active devices sending initial SYN packets to the firewall. This list is called a SYN watchlist. Because this list contains Ethernet addresses, the device tracks all SYN traffic based on the address of the device forwarding the SYN packet, without considering the IP source or destination address.

Each watchlist entry contains a value called a hit count. The hit count value increments when the device receives an initial SYN packet from a corresponding device. The hit count decrements when the TCP three-way handshake completes. The hit count for any particular device generally equals the number of half-open connections pending since the last time the device reset the hit count. The device default for resetting a hit count is once a second.

The thresholds for logging, SYN Proxy, and SYN Blacklisting are all compared to the hit count values when determining if a log message or state change is necessary. When a SYN Flood attack occurs, the number of pending half-open connections from the device forwarding the attacking packets increases substantially because of the spoofed connection attempts. When you set the attack thresholds correctly, normal traffic flow produces few attack warnings, but the same thresholds detect and deflect attacks before they result in serious network degradation.
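The following Python sketch illustrates the watchlist bookkeeping just described: a per-MAC hit count that increments on an initial SYN, decrements when the handshake completes, is reset periodically, and is compared against the configured thresholds. The threshold values and data structures are placeholders, not SonicOS defaults.

# Illustrative sketch of the SYN watchlist bookkeeping described above.
# Thresholds and the reset interval are placeholders, not SonicOS defaults.
from collections import defaultdict

LOG_THRESHOLD = 100         # hit counts above this trigger a log message
PROXY_THRESHOLD = 300       # hit counts above this enable SYN Proxy
BLACKLIST_THRESHOLD = 1000  # hit counts above this blacklist the device

hit_count = defaultdict(int)   # keyed on the Ethernet (MAC) address forwarding the SYN

def on_syn(mac):
    hit_count[mac] += 1        # another half-open connection is pending

def on_handshake_complete(mac):
    hit_count[mac] = max(0, hit_count[mac] - 1)

def evaluate(mac):
    """Compare the pending half-open count against the configured thresholds."""
    count = hit_count[mac]
    if count > BLACKLIST_THRESHOLD:
        return "blacklist"
    if count > PROXY_THRESHOLD:
        return "enable SYN proxy"
    if count > LOG_THRESHOLD:
        return "log"
    return "normal"

def reset_interval():
    """Called roughly once a second by default to reset the counts."""
    hit_count.clear()

for _ in range(150):
    on_syn("00:11:22:33:44:55")
print(evaluate("00:11:22:33:44:55"))   # "log"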

Understanding a TCP Handshake

A typical TCP handshake (simplified) begins with an initiator sending a TCP SYN packet with a 32-bit sequence (SEQi) number. The responder then sends a SYN/ACK packet acknowledging the received sequence by sending an ACK equal to SEQi+1 and a random, 32-bit sequence number (SEQr). The responder also maintains state awaiting an ACK from the initiator. The initiator’s ACK packet should contain the next sequence (SEQi+1) along with an acknowledgment of the sequence it received from the responder (by sending an ACK equal to SEQr+1). The exchange looks as follows:

1
Initiator -> SYN (SEQi=0001234567, ACKi=0) -> Responder
2
Initiator <- SYN/ACK (SEQr=3987654321, ACKr=0001234568) <- Responder
3
Initiator -> ACK (SEQi=0001234568, ACKi=3987654322) -> Responder

Because the responder has to maintain state on all half-opened TCP connections, it is possible for memory depletion to occur if SYNs come in faster than they can be processed or cleared by the responder. A half-opened TCP connection is one that has not transitioned to the established state through completion of the three-way handshake. When the SonicWall is between the initiator and the responder, it effectively becomes the responder, brokering (proxying) the TCP connection to the actual responder (private host) it is protecting.

Configuring Layer 3 SYN Flood Protection - SYN Proxy

To configure SYN Flood Protection features, go to the Layer 3 SYN Flood Protection - SYN Proxy section of the Firewall Settings > Flood Protection page.

A SYN Flood Protection mode is the level of protection that you can select to defend against half-opened TCP sessions and high-frequency SYN packet transmissions. This feature enables you to set three different levels of SYN Flood Protection:

Watch and Report Possible SYN Floods – This option enables the device to monitor SYN traffic on all interfaces on the device and to log suspected SYN flood activity that exceeds a packet count threshold. The feature does not turn on the SYN Proxy on the device so the device forwards the TCP three-way handshake without modification. This is the least invasive level of SYN Flood protection. Select this option if your network is not in a high risk environment.
Proxy WAN Client Connections When Attack is Suspected – This option causes the device to enable the SYN Proxy feature on WAN interfaces when the number of incomplete connection attempts per second surpasses a specified threshold. This method ensures the device continues to process valid traffic during the attack and that performance does not degrade. Proxy mode remains enabled until all WAN SYN flood attacks stop occurring or until the device blacklists all of them using the SYN Blacklisting feature. This is the intermediate level of SYN Flood protection. Select this option if your network experiences SYN Flood attacks from internal or external sources.
Always Proxy WAN Client Connections – This option sets the device to always use SYN Proxy. This method blocks all spoofed SYN packets from passing through the device. Note that this is an extreme security measure and directs the device to respond to port scans on all TCP ports because the SYN Proxy feature forces the device to respond to all TCP SYN connection attempts. This can degrade performance and can generate a false positive. Select this option only if your network is in a high risk environment.

Configuring SYN Attack Threshold

The SYN Attack Threshold configuration options provide limits for SYN Flood activity before the device drops packets. The device gathers statistics on WAN TCP connections, keeping track of the maximum and average maximum incomplete WAN connections per second. From these statistics, the device suggests a value for the SYN flood threshold. There are two options in this section:

Suggested value calculated from gathered statistics – The suggested attack threshold based on WAN TCP connection statistics.
Attack Threshold (Incomplete Connection Attempts/Second) – Enables you to set the threshold for the number of incomplete connection attempts per second before the device drops packets at any value between 5 and 200000, with a default of 300.

Configuring SYN Proxy Options

When the device applies a SYN Proxy to a TCP connection, it responds to the initial SYN packet with a manufactured SYN/ACK reply, waiting for the ACK in response before forwarding the connection request to the server. Devices attacking with SYN Flood packets do not respond to the SYN/ACK reply. The firewall identifies them by their lack of this type of response and blocks their spoofed connection attempts. SYN Proxy forces the firewall to manufacture a SYN/ACK response without knowing how the server will respond to the TCP options normally provided on SYN/ACK packets.

To provide more control over the options sent to WAN clients when in SYN Proxy mode, you can configure the following two objects:

SACK (Selective Acknowledgment) – This parameter controls whether or not Selective ACK is enabled. With SACK enabled, a packet or series of packets can be dropped, and the receiver informs the sender which data has been received and where holes may exist in the data.
MSS (Minimum Segment Size) – This sets the threshold for the size of TCP segments, preventing a segment that is too large from being sent to the targeted server. For example, if the server is an IPsec gateway, it may need to limit the MSS it receives to provide space for IPsec headers when tunneling traffic. The firewall cannot predict the MSS value the server would use when it responds to the manufactured SYN packet during the proxy sequence. Being able to control the size of a segment enables you to control the manufactured MSS value sent to WAN clients.

The SYN Proxy Threshold region contains the following options:

All LAN/DMZ servers support the TCP SACK option – This check box enables Selective ACK where a packet can be dropped and the receiving device indicates which packets it received. Enable this check box only when you know that all servers covered by the firewall accessed from the WAN support the SACK option.
Limit MSS sent to WAN clients (when connections are proxied) – Enables you to enter the maximum Minimum Segment Size value. If you specify an override value for the default of 1460, this indicates that a segment of that size or smaller is sent to the client in the SYN/ACK cookie. Setting this value too low can decrease performance when the SYN Proxy is always enabled. Setting this value too high can break connections if the server responds with a smaller MSS value. A minimal clamping sketch follows this list.
Maximum TCP MSS sent to WAN clients. The value of the MSS. The default is 1460.
* 
NOTE: When using Proxy WAN client connections, remember to set these options conservatively since they only affect connections when a SYN Flood takes place. This ensures that legitimate connections can proceed during an attack.
Always log SYN packets received – Logs all SYN packets received.
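The following Python sketch illustrates the clamping behavior implied by the Limit MSS option: when connections are proxied, the MSS advertised in the manufactured SYN/ACK never exceeds the configured limit. The values are illustrative.

# Illustrative sketch of MSS limiting for proxied connections: the MSS value
# placed in the manufactured SYN/ACK never exceeds the configured limit.
CONFIGURED_MSS_LIMIT = 1460   # "Maximum TCP MSS sent to WAN clients" (default noted above)

def mss_for_synack(client_requested_mss):
    """Clamp the advertised MSS to the configured limit for proxied connections."""
    return min(client_requested_mss, CONFIGURED_MSS_LIMIT)

print(mss_for_synack(1460))   # 1460
print(mss_for_synack(9000))   # 1460 -- a jumbo-frame request is clamped
print(mss_for_synack(1380))   # 1380 -- smaller requests pass through unchanged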

Configuring Layer 2 SYN/RST/FIN Flood Protection - MAC Blacklisting

The SYN/RST/FIN Blacklisting feature maintains a list of devices that have exceeded the SYN, RST, and FIN Blacklist attack threshold. The firewall device drops packets sent from blacklisted devices early in the packet evaluation process, enabling the firewall to handle greater amounts of these packets, providing a defense against attacks originating on local networks while also providing second-tier protection for WAN networks.

A device cannot appear on both the SYN/RST/FIN Blacklist and the watchlist simultaneously. With blacklisting enabled, the firewall removes devices exceeding the blacklist threshold from the watchlist and places them on the blacklist. Conversely, when the firewall removes a device from the blacklist, it places it back on the watchlist. Any device whose MAC address has been placed on the blacklist will be removed from it approximately three seconds after the flood emanating from that device has ended.
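
The following Python sketch illustrates the watchlist-to-blacklist transitions described above under stated assumptions (a simple per-MAC packet counter and the default threshold of 1,000 packets per second); it is a conceptual outline, not the firewall's implementation.

```python
import time

BLACKLIST_THRESHOLD = 1000   # SYN/RST/FIN packets per second (the default)
REMOVAL_DELAY = 3            # seconds after the flood ends (approximate)

watchlist = {}               # MAC address -> packets counted in the current second
blacklist = {}               # MAC address -> time the flood was last observed

def record_packet(mac, now=None):
    """Sketch of the list transitions: a device sits on exactly one list at a
    time, and blacklisted traffic is dropped early in packet evaluation.
    (Resetting the per-second counters is omitted for brevity.)"""
    now = now if now is not None else time.time()
    if mac in blacklist:
        blacklist[mac] = now                 # flood still in progress
        return "dropped-early"
    count = watchlist.get(mac, 0) + 1
    if count > BLACKLIST_THRESHOLD:
        watchlist.pop(mac, None)             # move from the watchlist to the blacklist
        blacklist[mac] = now
        return "blacklisted"
    watchlist[mac] = count
    return "watched"

def expire_blacklist(now=None):
    """A device returns to the watchlist roughly three seconds after its flood ends."""
    now = now if now is not None else time.time()
    for mac, last_seen in list(blacklist.items()):
        if now - last_seen > REMOVAL_DELAY:
            del blacklist[mac]
            watchlist[mac] = 0
```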

The SYN/RST/FIN Blacklisting region contains the following options:

Threshold for SYN/RST/FIN flood blacklisting (SYNs / Sec) – The maximum number of SYN, RST, and FIN packets allowed per second. The default is 1,000. This value should be larger than the SYN Proxy threshold value because blacklisting attempts to thwart more vigorous local attacks or severe attacks from a WAN network.
Enable SYN/RST/FIN flood blacklisting on all interfaces – This check box enables the blacklisting feature on all interfaces on the firewall.
Never blacklist WAN machines – This check box ensures that systems on the WAN are never added to the SYN Blacklist. This option is recommended as leaving it unchecked may interrupt traffic to and from the firewall’s WAN ports.
Always allow SonicWall management traffic – This check box causes IP traffic from a blacklisted device targeting the firewall’s WAN IP addresses to not be filtered. This allows management traffic and routing protocols to maintain connectivity through a blacklisted device.

UDP Settings

Default UDP Connection Timeout (seconds) - Enter the number of seconds of idle time you want to allow before UDP connections time out. This value is overridden by the UDP Connection timeout you set for individual rules.

UDP Flood Protection

UDP Flood Attacks are a type of denial-of-service (DoS) attack. They are initiated by sending a large number of UDP packets to random ports on a remote host. As a result, the victimized system’s resources will be consumed with handling the attacking packets, which eventually causes the system to be unreachable by other clients.

SonicWall UDP Flood Protection defends against these attacks by using a “watch and block” method. The appliance monitors UDP traffic to a specified destination. If the rate of UDP packets per second exceeds the allowed threshold for a specified duration of time, the appliance drops subsequent UDP packets to protect against a flood attack.

UDP packets that are DNS queries or responses to or from a DNS server configured on the appliance are allowed to pass, regardless of the state of UDP Flood Protection.
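
As an illustration of the “watch and block” method, the following Python sketch counts UDP packets per destination, activates blocking once the rate stays above the threshold for the configured blocking time, and exempts DNS traffic. The class and parameter names are hypothetical; only the behavior they model comes from the description above.

```python
import time

class UdpFloodGuard:
    """A sketch of the 'watch and block' method: count UDP packets per
    destination and, once the rate exceeds the threshold for the configured
    blocking time, drop subsequent packets to that destination."""

    def __init__(self, threshold_pps=1000, blocking_time=2, dns_servers=()):
        self.threshold_pps = threshold_pps        # UDP Flood Attack Threshold
        self.blocking_time = blocking_time        # UDP Flood Attack Blocking Time
        self.dns_servers = set(dns_servers)       # appliance-configured DNS servers
        self.window = {}                          # dest -> (window_start, packet_count)
        self.over_since = {}                      # dest -> time the threshold was first exceeded

    def allow(self, src_ip, dest_ip, now=None):
        now = now if now is not None else time.time()
        # DNS queries or responses to/from a configured DNS server always pass.
        if src_ip in self.dns_servers or dest_ip in self.dns_servers:
            return True
        start, count = self.window.get(dest_ip, (now, 0))
        if now - start >= 1.0:                    # start a new one-second window
            if count <= self.threshold_pps:       # previous window was under the limit
                self.over_since.pop(dest_ip, None)
            start, count = now, 0
        count += 1
        self.window[dest_ip] = (start, count)
        if count > self.threshold_pps:
            self.over_since.setdefault(dest_ip, now)
            if now - self.over_since[dest_ip] >= self.blocking_time:
                return False                      # flood confirmed: drop the packet
        return True
```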

The following settings configure UDP Flood Protection:

Enable UDP Flood Protection – Enables UDP Flood Protection.
UDP Flood Attack Threshold (UDP Packets / Sec) – The rate of UDP packets per second sent to a host, range or subnet that triggers UDP Flood Protection.
UDP Flood Attack Blocking Time (Sec) – After the appliance detects the rate of UDP packets exceeding the attack threshold for this duration of time, UDP Flood Protection is activated, and the appliance will begin dropping subsequent UDP packets.
UDP Flood Attack Protected Destination List – The destination address object or address group that will be protected from UDP Flood Attack.

ICMP Flood Protection

ICMP Flood Protection functions identically to UDP Flood Protection, except that it monitors for ICMP Flood Attacks and no DNS traffic is allowed to bypass it.

The following settings configure ICMP Flood Protection:

Enable ICMP Flood Protection – Enables ICMP Flood Protection.
ICMP Flood Attack Threshold (ICMP Packets / Sec) – The rate of ICMP packets per second sent to a host, range or subnet that triggers ICMP Flood Protection.
ICMP Flood Attack Blocking Time (Sec) – After the appliance detects the rate of ICMP packets exceeding the attack threshold for this duration of time, ICMP Flood Protection is activated, and the appliance will begin dropping subsequent ICMP packets.
ICMP Flood Attack Protected Destination List – The destination address object or address group that will be protected from ICMP Flood Attack.

Traffic Statistics

The Firewall > Flood Protection page provides the following traffic statistics:

TCP Traffic Statistics

The TCP Traffic Statistics table provides statistics on the following:

Connections Opened – Incremented when a TCP connection initiator sends a SYN, or a TCP connection responder receives a SYN.
Connections Closed – Incremented when a TCP connection is closed when both the initiator and the responder have sent a FIN and received an ACK.
Connections Refused – Incremented when a RST is encountered, and the responder is in a SYN_RCVD state.
Connections Aborted – Incremented when a RST is encountered, and the responder is in some state other than SYN_RCVD.
Total TCP Packets – Incremented with every processed TCP packet.
Validated Packets Passed – Incremented under the following conditions:
When a TCP packet passes checksum validation (while TCP checksum validation is enabled).
When a valid SYN packet is encountered (while SYN Flood protection is enabled).
When a SYN Cookie is successfully validated on a packet with the ACK flag set (while SYN Flood protection is enabled).
Malformed Packets Dropped - Incremented under the following conditions:
When TCP checksum fails validation (while TCP checksum validation is enabled).
When the TCP SACK Permitted (Selective Acknowledgement, see RFC1072) option is encountered, but the calculated option length is incorrect.
When the TCP MSS (Maximum Segment Size) option is encountered, but the calculated option length is incorrect.
When the TCP SACK option data is calculated to be either less than the minimum of 6 bytes, or modulo incongruent to the block size of 4 bytes.
When the TCP option length is determined to be invalid.
When the TCP header length is calculated to be less than the minimum of 20 bytes.
When the TCP header length is calculated to be greater than the packet’s data length.
Invalid Flag Packets Dropped - Incremented under the following conditions:
When a non-SYN packet is received that cannot be located in the connection-cache (while SYN Flood protection is disabled).
When a packet with flags other than SYN, RST+ACK or SYN+ACK is received during session establishment (while SYN Flood protection is enabled).
TCP XMAS Scan will be logged if the packet has FIN, URG, and PSH flags set.
TCP FIN Scan will be logged if the packet has the FIN flag set.
TCP Null Scan will be logged if the packet has no flags set (a short sketch of these three flag checks follows this list).
When a new TCP connection initiation is attempted with something other than just the SYN flag set.
When a packet with the SYN flag set is received within an established TCP session.
When a packet without the ACK flag set is received within an established TCP session.
Invalid Sequence Packets Dropped – Incremented under the following conditions:
When a packet within an established connection is received where the sequence number is less than the connection’s oldest unacknowledged sequence.
When a packet within an established connection is received where the sequence number is greater than the connection’s oldest unacknowledged sequence + the connection’s last advertised window size.
Invalid Acknowledgement Packets Dropped –Incremented under the following conditions:
When a packet is received with the ACK flag set, and with neither the RST or SYN flags set, but the SYN Cookie is determined to be invalid (while SYN Flood protection is enabled).
When a packet’s ACK value (adjusted by the sequence number randomization offset) is less than the connection’s oldest unacknowledged sequence number.
When a packet’s ACK value (adjusted by the sequence number randomization offset) is greater than the connection’s next expected sequence number.
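
The scan patterns noted under Invalid Flag Packets Dropped are simple combinations of TCP flag bits. The following Python sketch shows only those three flag tests; it is an illustration, not the firewall's full validation path.

```python
# TCP flag bits as they appear in the flags byte of the TCP header.
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

def classify_scan(flags):
    """Return the scan name that would be logged for these flag combinations,
    or None if the flags do not match one of the three patterns."""
    if (flags & (FIN | URG | PSH)) == (FIN | URG | PSH):
        return "TCP XMAS Scan"       # FIN, URG, and PSH all set
    if flags == 0:
        return "TCP Null Scan"       # no flags set
    if flags == FIN:
        return "TCP FIN Scan"        # only FIN set
    return None

assert classify_scan(FIN | URG | PSH) == "TCP XMAS Scan"
assert classify_scan(0) == "TCP Null Scan"
assert classify_scan(FIN) == "TCP FIN Scan"
assert classify_scan(SYN) is None    # an ordinary connection attempt
```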

SYN, RST, and FIN Flood Statistics

You can view SYN, RST and FIN Flood statistics in the lower half of the TCP Traffic Statistics list. The following are SYN Flood statistics.

Max Incomplete WAN Connections / sec – The maximum number of pending embryonic half-open connections recorded since the firewall has been up (or since the last time the TCP statistics were cleared).
Average Incomplete WAN Connections / sec – The average number of pending embryonic half-open connections, based on the total number of samples since boot up (or the last TCP statistics reset).
SYN Floods in Progress – The number of individual forwarding devices that are currently exceeding either SYN Flood threshold.
RST Floods in Progress – The number of individual forwarding devices that are currently exceeding the SYN/RST/FIN flood blacklisting threshold.
FIN Floods in Progress – The number of individual forwarding devices that are currently exceeding the SYN/RST/FIN flood blacklisting threshold.
Total SYN, RST, or FIN Floods Detected – The total number of events in which a forwarding device has exceeded the lower of either the SYN attack threshold or the SYN/RST/FIN flood blacklisting threshold.
TCP Connection SYN-Proxy State (WAN only) – Indicates whether or not Proxy-Mode is currently on the WAN interfaces.
Current SYN-Blacklisted Machines – The number of devices currently on the SYN blacklist.
Current RST-Blacklisted Machines – The number of devices currently on the RST blacklist.
Current FIN-Blacklisted Machines – The number of devices currently on the FIN blacklist.
Total SYN-Blacklisting Events – The total number of instances any device has been placed on the SYN blacklist.
Total RST-Blacklisting Events – The total number of instances any device has been placed on the RST blacklist.
Total FIN-Blacklisting Events – The total number of instances any device has been placed on the FIN blacklist.
Total SYN Blacklist Packets Rejected – The total number of packets dropped because of the SYN blacklist.
Total RST Blacklist Packets Rejected – The total number of packets dropped because of the RST blacklist.
Total FIN Blacklist Packets Rejected – The total number of packets dropped because of the FIN blacklist.
Invalid SYN Flood Cookies Received – The total number of invalid SYN flood cookies received.

UDP Traffic Statistics

The UDP Traffic Statistics table provides statistics on the following:

Connections Opened – Incremented when a new UDP connection (flow) is opened through the firewall.
Connections Closed – Incremented when a UDP connection is closed, for example, when the flow times out.
Total UDP Packets – Incremented with every processed UDP packet.
Validated Packets Passed – Incremented when a UDP packet passes checksum validation (while UDP checksum validation is enabled).
Malformed Packets Dropped - Incremented under the following conditions:
When the UDP checksum fails validation (while UDP checksum validation is enabled).
When the UDP header length is calculated to be greater than the packet’s data length.
UDP Floods In Progress – The number of individual forwarding devices that are currently exceeding the UDP Flood Attack Threshold.
Total UDP Floods Detected – The total number of events in which a forwarding device has exceeded the UDP Flood Attack Threshold.
Total UDP Flood Packets Rejected – The total number of packets dropped because of UDP Flood Attack detection.

ICMP Traffic Statistics

The ICMP Traffic Statistics table provides the same categories of information as the UDP Traffic Statistics, except for ICMP Flood Attacks instead of UDP Flood Attacks.

Configuring Multicast Settings

Firewall Settings > Multicast

Multicasting, also called IP multicasting, is a method for sending one Internet Protocol (IP) packet simultaneously to multiple hosts. Multicast is suited to the rapidly growing segment of Internet traffic - multimedia presentations and video conferencing. For example, consider a single host transmitting an audio or video stream that ten hosts want to receive. In multicasting, the sending host transmits a single IP packet with a specific multicast address, and the 10 hosts simply need to be configured to listen for packets targeted to that address to receive the transmission. Multicasting is a point-to-multipoint IP communication mechanism that operates in a connectionless mode - hosts receive multicast transmissions by “tuning in” to them, a process similar to tuning in to a radio.
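
To make the “tuning in” model concrete, the following Python sketch (standard library only) joins a multicast group and receives whatever is sent to it; the group address 239.1.1.1 and port 5004 are arbitrary example values, not settings from this guide. Joining the group is what generates the IGMP Membership Report that the firewall uses to decide where to forward the stream.

```python
import socket
import struct

GROUP, PORT = "239.1.1.1", 5004   # arbitrary example group in the 224.0.0.0-239.255.255.255 range

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group causes the operating system to send an IGMP Membership
# Report, which multicast snooping uses to decide where to forward the stream.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, sender = sock.recvfrom(2048)
    print(f"received {len(data)} bytes of multicast data from {sender[0]}")
```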

The Firewall Settings > Multicast page allows you to manage multicast traffic on the SonicWall security appliance.

Topics:

Multicast Snooping

This section provides configuration tasks for Multicast Snooping.

Enable Multicast - This check box is disabled by default. Select this check box to support multicast traffic.
Require IGMP Membership reports for multicast data forwarding - This check box is enabled by default. Select this check box to improve performance by forwarding multicast data only to interfaces that have joined a multicast group address using IGMP.
Multicast state table entry timeout (minutes) - This field has a default of 5. The value range for this field is 1 to 60 (minutes). Update the default timer value of 5 under the following conditions:
You suspect membership queries or reports are being lost on the network.
You want to reduce the IGMP traffic on the network and currently have a large number of multicast groups or clients. This is a condition where you do not have a router to route traffic.
You want to synchronize the timing with an IGMP router.

Multicast Policies

This section provides configuration tasks for Multicast Policies.

Enable reception of all multicast addresses - This radio button is not enabled by default. Select this radio button to receive all (class D) multicast addresses. Receiving all multicast addresses may cause your network to experience performance degradation.
Enable reception for the following multicast addresses - This radio button is enabled by default. In the drop-down menu, select Create new multicast address object or Create new multicast address object group.
* 
NOTE: Only address objects and groups associated with the MULTICAST zone are available to select. Only addresses from 224.0.0.1 to 239.255.255.255 can be bound to the MULTICAST zone. You can specify up to 200 Multicast addresses.
Topics:

Creating a Multicast Address Object

To create a multicast address object:
1
In the Enable reception for the following multicast addresses drop-down menu, select Create new multicast address object. The Add Address Object dialog displays.

2
Configure:
Name: The name of the address object.
Zone Assignment: Select MULTICAST.
Type: Select from the drop-down menu:
Host
Range
Network
MAC
FQDN
3
Depending on your selection, the options change. If you selected:
* 
NOTE: An IP address must be in the range for multicast, 224.0.0.0 to 239.255.255.255.
Host, in the IP Address field, enter the IP address of the host.
Range, in the Starting IP Address and Ending IP Address fields, enter the starting and ending IP address for the address range.
Network, enter in the:
Network field, the IP address of the network.
Netmask/Prefix Length field, either the netmask for the network or the prefix length.
MAC:
Enter the MAC address in the MAC Address field.
If this is a multi-homed host, select the Multi-homed host checkbox. This option is selected by default.
FQDN, enter the FQDN host name in the FQDN hostname field.
4
Click OK.

Creating a Multicast Address Object Group

To create a multicast address object group:
1
In the Enable reception for the following multicast addresses drop-down menu, select Create new multicast address object group. The Add Multicast Address Object Group dialog displays.

2
Enter a friendly name in the Name field.
3
Select one or more Multicast address objects from the left list.
4
Click the right arrow button.
5
Click OK.

IGMP State Table

This section provides descriptions of the fields in the IGMP State table.

Multicast Group Address—Provides the multicast group address the interface is joined to.
Interface / VPN Tunnel—Provides the interface (such as LAN) for the VPN policy.
IGMP Version—Provides the IGMP version (such as V2 or V3).
Time Remaining—Provides the amount of time left before the IGMP entry will be flushed. This is calculated by subtracting the elapsed time since the multicast address was added from the Multicast state table entry timeout (minutes) value, which has a default value of 5 minutes.
Flush—Click the icon to flush the specific entry immediately.
Flush and Flush All buttons—To flush a specific entry immediately, check the box to the left of the entry and click Flush. Click Flush All to immediately flush all entries.

Enabling Multicast on LAN-Dedicated Interfaces

To enable multicast support on LAN-dedicated interfaces:
1
Enable multicast support on your SonicWall security appliance:
a
Navigate to Firewall Settings > Multicast.
b
In the Multicast Snooping section, click on the Enable Multicast check box.
c
In the Multicast Policies section, select the Enable reception of all multicast addresses radio button.
2
Enable multicast support on LAN interfaces:
a
In the Network > Interfaces page, click on the Configure icon for the LAN interface. The Edit Interface dialog displays.
b
Click the Advanced tab.
c
In the Advanced Settings section, click the Enable Multicast Support check box.
d
Click OK.

Enabling Multicast for Address Objects over a VPN Tunnel

To enable multicast support for address objects over a VPN tunnel:
1
Enable multicast support on your SonicWall security appliance:
a
Navigate to Firewall Settings > Multicast.
b
In the Multicast Snooping section, click on the Enable Multicast check box.
c
In the Multicast Policies section, select the Enable reception for the following multicast addresses radio button.
d
Select from the drop-down menu, Create new multicast address object....
2
Create a multicast address object as described in Creating a Multicast Address Object.
3
Enable multicast support on the VPN policy for your GroupVPN.
a
Navigate to the VPN > Settings page.
b
In the VPN Policies table, click on the Configure icon to edit your GroupVPN’s VPN policy. The VPN Policy dialog displays.
c
Click the Advanced tab.
d
In the Advanced Settings section, select the Enable Multicast check box.
e
Click OK.

Enabling Multicast Through a VPN

To enable multicast across the WAN through a VPN:
1
Enable multicast globally.
a
On the Firewall Settings > Multicast page, check the Enable Multicast check box.
b
Click the Apply button for each security appliance.
2
Enable multicast support on each individual interface participating in the multicast network.
a
On the Network > Interfaces page of each participating security appliance, click the Edit icon for each interface.
b
Click the Advanced tab.
c
Select the Enable Multicast Support check box.
3
Enable multicast on the VPN policies between the security appliances.
a
Navigate to the VPN > Settings page.
b
Click the Edit icon for each VPN policy. The VPN Policy dialog displays.
c
Click the Advanced tab.
d
In the Advanced Settings section, select the Enable Multicast check box.
e
Click OK.
4
Navigate to the Firewall > Access Rules page. The Access Rules table is updated.
* 
NOTE: The default WLAN > MULTICAST access rule for IGMP traffic is set to DENY. This needs to be changed to ALLOW on all participating appliances to enable multicast, if they have multicast clients on their WLAN zones.
5
Make sure the tunnels are active between the sites.
6
Start the multicast server application and client applications.

As multicast data is sent from the multicast server to the multicast group (224.0.0.0 through 239.255.255.255), the SonicWall security appliance queries its IGMP state table for that group to determine where to deliver that data. Similarly, when the appliance receives that data at the VPN zone, the appliance queries its IGMP state table to determine where it should deliver the data.

The IGMP state tables (upon updating) should provide information indicating that there is a multicast client on the X3 interface, and across the vpnMcastServer tunnel for the 224.15.16.17 group.

* 
NOTE: By selecting Enable reception of all multicast addresses, you might see entries other than those you are expecting to see when viewing your IGMP state table. These are caused by other multicast applications that might be running on your hosts.

Managing Quality of Service

Firewall Settings > QoS Mapping (NSA Series Only)

Quality of Service (QoS) refers to a diversity of methods intended to provide predictable network behavior and performance. This sort of predictability is vital to certain types of applications, such as Voice over IP (VoIP), multimedia content, or business-critical applications such as order or credit-card processing. No amount of bandwidth can provide this sort of predictability, because any amount of bandwidth will ultimately be used to its capacity at some point in a network. Only QoS, when configured and implemented correctly, can properly manage traffic, and guarantee the desired levels of network service.

Topics:

Classification

Classification is necessary as a first step so that traffic in need of management can be identified. SonicOS Enhanced uses Access Rules as the interface to classification of traffic. This provides fine controls using combinations of Address Object, Service Object, and Schedule Object elements, allowing for classification criteria as general as all HTTP traffic and as specific as SSH traffic from hostA to serverB on Wednesdays at 2:12am.

SonicOS on SonicWall NSA series appliances has the ability to recognize, map, modify, and generate the industry-standard external CoS designators, DSCP and 802.1p (refer to the 802.1p and DSCP QoS).

Once traffic is identified, or classified, it can be managed. Management can be performed internally by SonicOS’s BWM, which is perfectly effective as long as the network is a fully contained autonomous system. When external or intermediate elements are introduced, such as foreign network infrastructures with unknown configurations, or other hosts contending for bandwidth (for example, the Internet), the ability to offer guarantees and predictability is diminished. In other words, as long as the endpoints of the network and everything in between are within your management, BWM will work exactly as configured. Once external entities are introduced, the precision and efficacy of BWM configurations can begin to degrade.

But all is not lost. When SonicOS classifies the traffic, it can tag the traffic to communicate this classification to certain external systems that are capable of abiding by CoS tags; thus they too can participate in providing QoS.

* 
NOTE: Many service providers do not support CoS tags such as 802.1p or DSCP. Also, most network equipment with standard configurations will not be able to recognize 802.1p tags, and could drop tagged traffic.

Although DSCP will not cause compatibility issues, many service providers will simply strip or ignore the DSCP tags, disregarding the code points.

If you wish to use 802.1p or DSCP marking on your network or your service provider’s network, you must first establish that these methods are supported. Verify that your internal network equipment can support CoS priority marking, and that it is correctly configured to do so. Check with your service provider — some offer fee-based support for QoS using these CoS methods.

Marking

When the traffic has been classified, if it is to be handled by QoS capable external systems (for example, CoS-aware switches or routers as might be available on a premium service provider’s infrastructure, or on a private WAN), it must be tagged so that the external systems can make use of the classification, and provide the correct handling and Per Hop Behaviors (PHB).

Originally, this was attempted at the IP layer (layer 3) with RFC791’s three Precedence bits and the RFC1349 ToS (type of service) field, but this was used by a grand total of 17 people throughout history. Its successor, RFC2474, introduced the much more practical and widely used DSCP (Differentiated Services Code Point), which offers up to 64 classifications, as well as user-definable classes. DSCP was further enhanced by RFC2598 (Expedited Forwarding, intended to provide leased-line behaviors) and RFC2597 (Assured Forwarding levels within classes, also known as Gold, Silver, and Bronze levels).

DSCP is a safe marking method for traffic that traverses public networks because there is no risk of incompatibility. At the very worst, a hop along the path might disregard or strip the DSCP tag, but it will rarely mistreat or discard the packet.

The other prevalent method of CoS marking is IEEE 802.1p. 802.1p occurs at the MAC layer (layer 2) and is closely related to IEEE 802.1Q VLAN marking, sharing the same 16-bit field, although it is actually defined in the IEEE 802.1D standard. Unlike DSCP, 802.1p will only work with 802.1p capable equipment, and is not universally interoperable. Additionally, 802.1p, because of its different packet structure, can rarely traverse wide-area networks, even private WANs. Nonetheless, 802.1p is gaining wide support among Voice and Video over IP vendors, so a solution for supporting 802.1p across network boundaries (that is, WAN links) was introduced in the form of 802.1p to DSCP mapping.

802.1p to DSCP mapping allows 802.1p tags from one LAN to be mapped to DSCP values by SonicOS Enhanced, allowing the packets to safely traverse WAN links. When the packets arrive on the other side of the WAN or VPN, the receiving SonicOS Enhanced appliance can then map the DSCP tags back to 802.1p tags for use on that LAN. Refer to the 802.1p and DSCP QoS for more information.
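
The following Python sketch illustrates the round trip under simple assumptions: an outbound table maps each 802.1p class to a DSCP value, and the inbound direction maps a DSCP value back to the class whose range contains it. The table below is illustrative only; the appliance's actual correspondences are configured on the Firewall Settings > QoS Mapping page.

```python
# Illustrative 802.1p class -> DSCP correspondence (not the appliance's table).
P8021_TO_DSCP = {0: 0, 1: 8, 2: 16, 3: 24, 4: 32, 5: 40, 6: 48, 7: 56}

def outbound_map(p8021_tag):
    """LAN -> WAN: replace the layer 2 tag with a layer 3 DSCP code point."""
    return P8021_TO_DSCP[p8021_tag]

def inbound_map(dscp):
    """WAN -> LAN: map a DSCP value back to the 802.1p class whose range contains it."""
    for tag in sorted(P8021_TO_DSCP, reverse=True):
        if dscp >= P8021_TO_DSCP[tag]:
            return tag
    return 0

assert outbound_map(6) == 48      # a voice-class frame crosses the WAN as DSCP 48
assert inbound_map(43) == 5       # DSCP 43 falls in the range that maps back to CoS 5
```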

Conditioning

The traffic can be conditioned (or managed) using any of the many policing, queuing, and shaping methods available. SonicOS provides internal conditioning capabilities with its Egress and Ingress Bandwidth Management (BWM), detailed in the Bandwidth Management. SonicOS’s BWM is a perfectly effective solution for fully autonomous private networks with sufficient bandwidth, but can become somewhat less effective as more unknown external network elements and bandwidth contention are introduced. Refer to the Example Scenario for a description of contention issues.

Topics:
Site to Site VPN over QoS Capable Networks

If the network path between the two end points is QoS aware, SonicOS can DSCP tag the inner encapsulated packet so that it is interpreted correctly at the other side of the tunnel, and it can also DSCP tag the outer ESP encapsulated packet so that its class can be interpreted and honored by each hop along the transit network. SonicOS can map 802.1p tags created on the internal networks to DSCP tags so that they can safely traverse the transit network. Then, when the packets are received on the other side, the receiving SonicWall appliance can translate the DSCP tags back to 802.1p tags for interpretation and honoring by that internal network.

Site to Site VPN over Public Networks

SonicOS integrated BWM is very effective in managing traffic between VPN connected networks because ingress and egress traffic can be classified and controlled at both endpoints. If the network between the endpoints is not QoS aware, it regards and treats all VPN ESP traffic equally. Because there is typically no control over these intermediate networks or their paths, it is difficult to fully guarantee QoS, but BWM can still help to provide more predictable behavior.

Site-to-site VPN over public networks configuration

To provide end-to-end QoS, business-class service providers are increasingly offering traffic conditioning services on their IP networks. These services typically depend on the customer premise equipment to classify and tag the traffic, generally using a standard marking method such as DSCP. SonicOS Enhanced has the ability to DSCP mark traffic after classification, as well as the ability to map 802.1p tags to DSCP tags for external network traversal and CoS preservation. For VPN traffic, SonicOS can DSCP mark not only the internal (payload) packets, but the external (encapsulating) packets as well so that QoS capable service providers can offer QoS even on encrypted VPN traffic.

The actual conditioning method employed by service providers varies from one to the next, but it generally involves a class-based queuing method such as Weighted Fair Queuing for prioritizing traffic, as well as a congestion avoidance method, such as tail-drop or Random Early Detection.

802.1p and DSCP QoS

The following sections detail the 802.1p standard and DSCP QoS. These features are supported on SonicWall NSA platforms, except for the SonicWall NSA 210 appliance:

Enabling 802.1p

SonicOS Enhanced supports layer 2 and layer 3 CoS methods for broad interoperability with external systems participating in QoS enabled environments. The layer 2 method is the IEEE 802.1p standard wherein 3-bits of an additional 16-bits inserted into the header of the Ethernet frame can be used to designate the priority of the frame, as illustrated in the following figure:

Using Ethernet Data Frame to Designate Priority


TPID: Tag Protocol Identifier begins at byte 12 (after the 6 byte destination and source fields), is 2 bytes long, and has an Ethertype of 0x8100 for tagged traffic.
802.1p: The first three bits of the TCI (Tag Control Information – beginning at byte 14, and spanning 2 bytes) define user priority, giving eight (2^3) priority levels. IEEE 802.1p defines the operation for these 3 user priority bits.
CFI: Canonical Format Indicator is a single-bit flag, always set to zero for Ethernet switches. CFI is used for compatibility reasons between Ethernet networks and Token Ring networks. If a frame received at an Ethernet port has the CFI set to 1, that frame should not be forwarded as is to an untagged port.
VLAN ID: VLAN ID (starts at bit 5 of byte 14) is the identification of the VLAN. It is 12 bits long and allows for the identification of 4,096 (2^12) unique VLAN IDs. Of the 4,096 possible IDs, an ID of 0 is used to identify priority frames, and an ID of 4,095 (FFF) is reserved, so the maximum possible VLAN configurations are 4,094.
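
A bit-level Python sketch of how the fields above sit inside the 16-bit TCI (illustrative parsing only, not how SonicOS processes frames):

```python
def parse_vlan_tag(tpid, tci):
    """Split the 16-bit TCI into the 802.1p priority, CFI, and VLAN ID fields.
    Real frame parsing would read these values out of the Ethernet header."""
    assert tpid == 0x8100, "not an 802.1Q/802.1p tagged frame"
    priority = (tci >> 13) & 0x07    # 802.1p user priority, 0-7
    cfi = (tci >> 12) & 0x01         # Canonical Format Indicator
    vlan_id = tci & 0x0FFF           # 12-bit VLAN ID, 0-4095
    return priority, cfi, vlan_id

# A frame tagged by SonicOS carries VLAN ID 0 ("priority tagged"); the example
# TCI 0xC000 encodes priority 6, CFI 0, VLAN ID 0.
assert parse_vlan_tag(0x8100, 0xC000) == (6, 0, 0)
```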

802.1p support begins by enabling 802.1p marking on the interfaces which you wish to have process 802.1p tags. 802.1p can be enabled on any Ethernet interface on any SonicWall appliance.

The behavior of the 802.1p field within these tags can be controlled by Access Rules. The default 802.1p Access Rule action of None will reset existing 802.1p tags to 0, unless otherwise configured (see Managing QoS Marking for details).

Enabling 802.1p marking will allow the target interface to recognize incoming 802.1p tags generated by 802.1p capable network devices, and will also allow the target interface to generate 802.1p tags, as controlled by Access Rules. Frames that have 802.1p tags inserted by SonicOS will bear VLAN ID 0.

802.1p tags will only be inserted according to Access Rules, so enabling 802.1p marking on an interface will not, at its default setting, disrupt communications with 802.1p-incapable devices.

802.1p requires specific support by the networking devices with which you wish to use this method of prioritization. Many voice and video over IP devices provide support for 802.1p, but the feature must be enabled. Check your equipment’s documentation for information on 802.1p support if you are unsure. Similarly, many server and host network cards (NICs) have the ability to support 802.1p, but the feature is usually disabled by default. On Win32 operating systems, you can check for and configure 802.1p settings on the Advanced tab of the Properties page of your network card. If your card supports 802.1p, it will list it as 802.1p QoS, 802.1p Support, QoS Packet Tagging, or something similar.

To process 802.1p tags, the feature must be present and enabled on the network interface. The network interface will then be able to generate packets with 802.1p tags, as governed by QoS capable applications. By default, general network communications will not have tags inserted so as to maintain compatibility with 802.1p-incapable devices.

* 
NOTE: If your network interface does not support 802.1p, it will not be able to process 802.1p tagged traffic, and will ignore it. Make certain when defining Access Rules to enable 802.1p marking that the target devices are 802.1p capable.

It should also be noted that when performing a packet capture (for example, with the diagnostic tool Ethereal) on 802.1p capable devices, some 802.1p capable devices will not show the 802.1q header in the packet capture. Conversely, a packet capture performed on an 802.1p-incapable device will almost invariably show the header, but the host will be unable to process the packet.

Before moving on to Managing QoS Marking, it is important to introduce ‘DSCP Marking’ because of the potential interdependency between the two marking methods, as well as to explain why the interdependency exists.

Example Scenario

802.1p and DSCP QoS: Sample configuration

In the scenario above, we have Remote Site 1 connected to ‘Main Site’ by an IPsec VPN. The company uses an internal 802.1p/DSCP capable VoIP phone system, with a private VoIP signaling server hosted at the Main Site. The Main Site has a mixed gigabit and Fast-Ethernet infrastructure, while Remote Site 1 is all Fast Ethernet. Both sites employ 802.1p capable switches for prioritization of internal traffic.

1
PC-1 at Remote Site 1 is transferring a 23 terabyte PowerPoint™ presentation to File Server 1, and the 100mbit link between the workgroup switch and the upstream switch is completely saturated.
2
At the Main Site, a caller on the 802.1p/DSCP capable VoIP Phone 10.50.165.200 initiates a call to the person at VoIP phone 192.168.168.200. The calling VoIP phone 802.1p tags the traffic with priority tag 6 (voice), and DSCP tags the traffic with a tag of 48.
a
If the link between the Core Switch and the firewall is a VLAN, some switches will include the received 802.1p priority tag, in addition to the DSCP tag, in the packet sent to the firewall; this behavior varies from switch to switch, and is often configurable.
b
If the link between the Core Switch and the firewall is not a VLAN, there is no way for the switch to include the 802.1p priority tag. The 802.1p priority is removed, and the packet (including only the DSCP tag) is forwarded to the firewall.

When the firewall sends the packet across the VPN/WAN link, it can include the DSCP tag in the packet, but it is not possible to include the 802.1p tag. This would have the effect of losing all prioritization information for the VoIP traffic, because when the packet arrived at the Remote Site, the switch would have no 802.1p MAC layer information with which to prioritize the traffic. The Remote Site switch would treat the VoIP traffic the same as the lower-priority file transfer because of the link saturation, introducing delay, and possibly dropped packets, to the VoIP flow, resulting in call quality degradation.

So how can critical 802.1p priority information from the Main Site LAN persist across the VPN/WAN link to Remote Site LAN? Through the use of QoS Mapping.

QoS Mapping is a feature which converts layer 2 802.1p tags to layer 3 DSCP tags so that they can safely traverse (in mapped form) 802.1p-incapable links; when the packet arrives for delivery to the next 802.1p-capable segment, QoS Mapping converts from DSCP back to 802.1p tags so that layer 2 QoS can be honored.

In our above scenario, the firewall at the Main Site assigns a DSCP tag (e.g. value 48) to the VoIP packets, as well as to the encapsulating ESP packets, allowing layer 3 QoS to be applied across the WAN. This assignment can occur either by preserving the existing DSCP tag, or by mapping the value from an 802.1p tag, if present. When the VoIP packets arrive at the other side of the link, the mapping process is reversed by the receiving SonicWall, mapping the DSCP tag back to an 802.1p tag.

3
The receiving SonicWall at the Remote Site is configured to map the DSCP tag range 48-55 to 802.1p tag 6. When the packet exits the SonicWall, it will bear 802.1p tag 6. The Switch will recognize it as voice traffic, and will prioritize it over the file-transfer, guaranteeing QoS even in the event of link saturation.

DSCP Marking

DSCP (Differentiated Services Code Point) marking uses 6-bits of the 8-bit ToS field in the IP Header to provide up to 64 classes (or code points) for traffic. Since DSCP is a layer 3 marking method, there is no concern about compatibility as there is with 802.1p marking. Devices that do not support DSCP will simply ignore the tags, or at worst, they will reset the tag value to 0.
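
In code, extracting or setting the code point is a simple shift, because DSCP occupies the upper 6 bits of the former ToS byte (the remaining 2 bits are now used for ECN). A small illustrative Python sketch:

```python
def dscp_from_tos(tos_byte):
    """The DSCP code point is the upper 6 bits of the former ToS byte;
    the low 2 bits (now used for ECN) are ignored here."""
    return (tos_byte >> 2) & 0x3F

def tos_from_dscp(dscp):
    """Build a ToS byte carrying the given code point, with the ECN bits left at 0."""
    return (dscp & 0x3F) << 2

assert dscp_from_tos(0xB8) == 46   # 0xB8 is the classic Expedited Forwarding marking
assert tos_from_dscp(48) == 0xC0   # DSCP 48 (control traffic) as a raw ToS byte
```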

ToS Header of IP Packet Used for DSCP Marking

The above diagram depicts an IP packet, with a close-up on the ToS portion of the header. The ToS bits were originally used for Precedence and ToS (delay, throughput, reliability, and cost) settings, but were later repurposed by RFC2474 for the more versatile DSCP settings.

The following table shows the commonly used code points, as well as their mapping to the legacy Precedence and ToS settings.

 

DSCP Marking: Commonly Used Code Points

DSCP | DSCP Description | Legacy IP Precedence | Legacy IP ToS (D, T, R)
0 | Best effort | 0 (Routine – 000) | -
8 | Class 1 | 1 (Priority – 001) | -
10 | Class 1, gold (AF11) | 1 (Priority – 001) | T
12 | Class 1, silver (AF12) | 1 (Priority – 001) | D
14 | Class 1, bronze (AF13) | 1 (Priority – 001) | D, T
16 | Class 2 | 2 (Immediate – 010) | -
18 | Class 2, gold (AF21) | 2 (Immediate – 010) | T
20 | Class 2, silver (AF22) | 2 (Immediate – 010) | D
22 | Class 2, bronze (AF23) | 2 (Immediate – 010) | D, T
24 | Class 3 | 3 (Flash – 011) | -
26 | Class 3, gold (AF31) | 3 (Flash – 011) | T
28 | Class 3, silver (AF32) | 3 (Flash – 011) | D
30 | Class 3, bronze (AF33) | 3 (Flash – 011) | D, T
32 | Class 4 | 4 (Flash Override – 100) | -
34 | Class 4, gold (AF41) | 4 (Flash Override – 100) | T
36 | Class 4, silver (AF42) | 4 (Flash Override – 100) | D
38 | Class 4, bronze (AF43) | 4 (Flash Override – 100) | D, T
40 | Express forwarding | 5 (CRITIC/ECP – 101) | -
46 | Expedited forwarding (EF) | 5 (CRITIC/ECP – 101) | D, T
48 | Control | 6 (Internet Control – 110) | -
56 | Control | 7 (Network Control – 111) | -

DSCP marking can be performed on traffic to/from any interface and to/from any zone type, without exception. DSCP marking is controlled by Access Rules, from the QoS tab, and can be used in conjunction with 802.1p marking, as well as with SonicOS’s internal bandwidth management.

Topics:
DSCP Marking and Mixed VPN Traffic

Among their many security measures and characteristics, IPsec VPNs employ anti-replay mechanisms based upon monotonically incrementing sequence numbers added to the ESP header. Packets with duplicate sequence numbers are dropped, as are packets that do not adhere to sequence criteria. One such criterion governs the handling of out-of-order packets. SonicOS Enhanced provides a replay window of 64 packets, that is, if an ESP packet for a Security Association (SA) is delayed by more than 64 packets, the packet will be dropped.

This should be considered when using DSCP marking to provide layer 3 QoS to traffic traversing a VPN. If you have a VPN tunnel that is transporting a diversity of traffic, some that is being DSCP tagged high priority (for example, VoIP), and some that is DSCP tagged low-priority, or untagged/best-effort (for example, FTP), your service provider will prioritize the handling and delivery of the high-priority ESP packets over the best-effort ESP packets. Under certain traffic conditions, this can result in the best-effort packets being delayed for more than 64 packets, causing them to be dropped by the receiving SonicWall’s anti-replay defenses.

If symptoms of such a scenario emerge (for example, excessive retransmissions of low-priority traffic), it is recommended that you create a separate VPN policy for the high-priority and low-priority classes of traffic. This is most easily accomplished by placing the high-priority hosts (for example, the VoIP network) on their own subnet.
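
For reference, the following Python sketch models a 64-packet sliding anti-replay window of the kind described above: duplicate sequence numbers are rejected, and a packet whose sequence number lags the highest received number by more than the window size is dropped. It is a generic illustration of the mechanism, not SonicOS code.

```python
class AntiReplayWindow:
    """A sliding replay window: duplicates and packets delayed past the
    window are rejected, mirroring the 64-packet behavior described above."""

    def __init__(self, window_size=64):
        self.window_size = window_size
        self.highest = 0
        self.seen = 0          # bitmap of the last `window_size` sequence numbers

    def accept(self, seq):
        if seq > self.highest:                     # packet advances the window
            shift = seq - self.highest
            self.seen = (self.seen << shift) & ((1 << self.window_size) - 1)
            self.seen |= 1
            self.highest = seq
            return True
        offset = self.highest - seq
        if offset >= self.window_size:             # delayed by more than the window: drop
            return False
        if self.seen & (1 << offset):              # duplicate sequence number: drop
            return False
        self.seen |= 1 << offset
        return True
```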

Configure for 802.1p CoS 4 – Controlled load

If you want to change the inbound mapping of DSCP tag 15 from its default 802.1p mapping of 1 to an 802.1p mapping of 2, it would have to be done in two steps because mapping ranges cannot overlap. Attempting to assign an overlapping mapping will give the error DSCP range already exists or overlaps with another range. First, you will have to remove 15 from its current end-range mapping to 802.1p CoS 1 (changing the end-range mapping of 802.1p CoS 1 to DSCP 14), then you can assign DSCP 15 to the start-range mapping on 802.1p CoS 2.
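
The two-step requirement follows from the non-overlap constraint on DSCP ranges. The following Python sketch models that constraint with hypothetical range values; the error string mirrors the message quoted above.

```python
# 802.1p class -> (DSCP start, DSCP end); hypothetical starting values.
ranges = {1: (8, 15), 2: (16, 23)}

def overlaps(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def set_range(cls, start, end):
    """Assign a DSCP range to an 802.1p class, rejecting any overlap."""
    candidate = (start, end)
    for other, rng in ranges.items():
        if other != cls and overlaps(candidate, rng):
            raise ValueError("DSCP range already exists or overlaps with another range")
    ranges[cls] = candidate

# Assigning 15 to class 2 first would raise the overlap error; instead,
# step 1 shrinks class 1, and step 2 extends class 2 to start at 15.
set_range(1, 8, 14)
set_range(2, 15, 23)
```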

QoS Mapping

The primary objective of QoS Mapping is to allow 802.1p tags to persist across non-802.1p compliant links (for example, WAN links) by mapping them to corresponding DSCP tags before sending across the WAN link, and then mapping from DSCP back to 802.1p upon arriving at the other side:

QoS mapping configuration

* 
NOTE: Mapping will not occur until you assign Map as an action on the QoS tab of an Access Rule. The mapping table only defines the correspondence that will be employed by an Access Rule’s Map action.

For example, according to the default table, an 802.1p tag with a value of 2 will be outbound mapped to a DSCP value of 16, while a DSCP tag of 43 will be inbound mapped to an 802.1p value of 5.

Each of these mappings can be reconfigured. If you wanted to change the outbound mapping of 802.1p tag 4 from its default DSCP value of 32 to a DSCP value of 43, you can click the Configure icon for 4 – Controlled load and select the new To DSCP value from the drop-down box.

You can restore the default mappings by clicking the Reset QoS Settings button.

Managing QoS Marking

QoS marking is configured from the QoS tab of Access Rules under the Firewall > Access Rules page of the management interface. Both 802.1p and DSCP marking as managed by SonicOS Enhanced Access Rules provide 4 actions: None, Preserve, Explicit, and Map. The default action for DSCP is Preserve and the default action for 802.1p is None.

The following table describes the behavior of each action on both methods of marking:

 

QoS Marking: Behavior

Action: None
802.1p (layer 2 CoS): When packets matching this class of traffic (as defined by the Access Rule) are sent out the egress interface, no 802.1p tag will be added.
DSCP (layer 3): The DSCP tag is explicitly set (or reset) to 0.
Notes: If the target interface for this class of traffic is a VLAN subinterface, the 802.1p portion of the 802.1q tag will be explicitly set to 0. If this class of traffic is destined for a VLAN and is using 802.1p for prioritization, a specific Access Rule using the Preserve, Explicit, or Map action should be defined for this class of traffic.

Action: Preserve
802.1p (layer 2 CoS): Existing 802.1p tag will be preserved.
DSCP (layer 3): Existing DSCP tag value will be preserved.

Action: Explicit
802.1p (layer 2 CoS): An explicit 802.1p tag value can be assigned (0-7) from a drop-down menu that will be presented.
DSCP (layer 3): An explicit DSCP tag value can be assigned (0-63) from a drop-down menu that will be presented.
Notes: If either the 802.1p or the DSCP action is set to Explicit while the other is set to Map, the explicit assignment occurs first, and then the other is mapped according to that assignment.

Action: Map
802.1p (layer 2 CoS): The mapping setting defined in the Firewall Settings > QoS Mapping page will be used to map from a DSCP tag to an 802.1p tag.
DSCP (layer 3): The mapping setting defined in the Firewall Settings > QoS Mapping page will be used to map from an 802.1p tag to a DSCP tag. An additional check box will be presented to Allow 802.1p Marking to override DSCP values. Selecting this check box will assert the mapped 802.1p value over any DSCP value that might have been set by the client. This is useful to override clients setting their own DSCP CoS values.
Notes: If Map is set as the action on both DSCP and 802.1p, mapping will only occur in one direction: if the packet is from a VLAN and arrives with an 802.1p tag, then DSCP will be mapped from the 802.1p tag; if the packet is destined to a VLAN, then 802.1p will be mapped from the DSCP tag.

For example, refer to the following figure which provides a bi-directional DSCP tag action.

Configuration Showing Bi-Directional DSCP Tag Action

HTTP access from a Web-browser on 192.168.168.100 to the Web server on 10.50.165.2 will result in the tagging of the inner (payload) packet and the outer (encapsulating ESP) packets with a DSCP value of 8. When the packets emerge from the other end of the tunnel, and are delivered to 10.50.165.2, they will bear a DSCP tag of 8. When 10.50.165.2 sends response packets back across the tunnel to 192.168.168.100 (beginning with the very first SYN/ACK packet) the Access Rule will tag the response packets delivered to 192.168.168.100 with a DSCP value of 8.

This behavior applies to all four QoS action settings for both DSCP and 802.1p marking.

One practical application for this behavior would be configuring an 802.1p marking rule for traffic destined for the VPN zone. Although 802.1p tags cannot be sent across the VPN, reply packets coming back across the VPN can be 802.1p tagged on egress from the tunnel. This requires that 802.1p tagging is active on the physical egress interface, and that the [Zone] > VPN Access Rule has an 802.1p marking action other than None.

After ensuring 802.1p compatibility with your relevant network devices, and enabling 802.1p marking on applicable SonicWall interfaces, you can begin configuring Access Rules to manage 802.1p tags.

Referring to the following figure, the Remote Site 1 network could have two Access Rules configured as follows:

 

Remote Site 1: Sample Access Rule Configurations

Tab | Setting | Access Rule 1 | Access Rule 2
General | Action | Allow | Allow
General | From Zone | LAN | VPN
General | To Zone | VPN | LAN
General | Service | VOIP | VOIP
General | Source | Lan Primary Subnet | Main Site Subnets
General | Destination | Main Site Subnets | Lan Primary Subnet
General | Users Allowed | All | All
General | Schedule | Always on | Always on
General | Enable Logging | Enabled | Enabled
General | Allow Fragmented Packets | Enabled | Enabled
QoS | DSCP Marking Action | Map | Map
QoS | Allow 802.1p Marking to override DSCP values | Enabled | Enabled
QoS | 802.1p Marking Action | Map | Map

The first Access Rule (governing LAN>VPN) would have the following effects:

VoIP traffic (as defined by the Service Group) from LAN Primary Subnet destined to be sent across the VPN to Main Site Subnets would be evaluated for both DSCP and 802.1p tags.
The combination of setting both DSCP and 802.1p marking actions to Map is described in the table earlier in the Managing QoS Marking.
Sent traffic containing only an 802.1p tag (for example, CoS = 6) would have the VPN-bound inner (payload) packet DSCP tagged with a value of 48. The outer (ESP) packet would also be tagged with a value of 48.
Assuming returned traffic has been DSCP tagged (CoS = 48) by the SonicWall at the Main Site, the return traffic will be 802.1p tagged with CoS = 6 on egress.
Sent traffic containing only a DSCP tag (for example, CoS = 48) would have the DSCP value preserved on both inner and outer packets.
Assuming returned traffic has been DSCP tagged (CoS = 48) by the SonicWall at the Main Site, the return traffic will be 802.1p tagged with CoS = 6 on egress.
Sent traffic containing both an 802.1p tag (for example, CoS = 6) and a DSCP tag (for example, CoS = 63) would give precedence to the 802.1p tag, which would be mapped accordingly. The VPN-bound inner (payload) packet would be DSCP tagged with a value of 48, and the outer (ESP) packet would also be tagged with a value of 48.

Assuming returned traffic has been DSCP tagged (CoS = 48) by the SonicWall at the Main Site, the return traffic will be 802.1p tagged with CoS = 6 on egress.

To examine the effects of the second Access Rule (VPN>LAN), we’ll look at the Access Rules configured at the Main Site.

 

Main Site: Sample Access Rule Configurations

Tab | Setting | Access Rule 1 | Access Rule 2
General | Action | Allow | Allow
General | From Zone | LAN | VPN
General | To Zone | VPN | LAN
General | Service | VOIP | VOIP
General | Source | Lan Subnets | Remote Site 1 Subnets
General | Destination | Remote Site 1 Subnets | Lan Subnets
General | Users Allowed | All | All
General | Schedule | Always on | Always on
General | Enable Logging | Enabled | Enabled
General | Allow Fragmented Packets | Enabled | Enabled
QoS | DSCP Marking Action | Map | Map
QoS | Allow 802.1p Marking to override DSCP values | Enabled | Enabled
QoS | 802.1p Marking Action | Map | Map

VoIP traffic (as defined by the Service Group) arriving from Remote Site 1 Subnets across the VPN destined to LAN Subnets on the LAN zone at the Main Site would hit the Access Rule for inbound VoIP calls. Traffic arriving at the VPN zone will not have any 802.1p tags, only DSCP tags.

Traffic exiting the tunnel containing a DSCP tag (for example, CoS = 48) would have the DSCP value preserved. Before the packet is delivered to the destination on the LAN, it will also be 802.1p tagged according to the QoS Mapping settings (for example, CoS = 6) by the SonicWall at the Main Site.
Assuming returned traffic has been 802.1p tagged (for example, CoS = 6) by the VoIP phone receiving the call at the Main Site, the return traffic will be DSCP tagged according to the conversion map (CoS = 48) on both the inner and outer packet sent back across the VPN.
Assuming returned traffic has been DSCP tagged (for example, CoS = 48) by the VoIP phone receiving the call at the Main Site, the return traffic will have the DSCP tag preserved on both the inner and outer packet sent back across the VPN.
Assuming returned traffic has been both 802.1p tagged (for example, CoS = 6) and DSCP tagged (for example, CoS = 14) by the VoIP phone receiving the call at the Main Site, the return traffic will be DSCP tagged according to the conversion map (CoS = 48) on both the inner and outer packet sent back across the VPN.

Glossary

802.1p – IEEE 802.1p is a Layer 2 (MAC layer) Class of Service mechanism that tags packets by using 3 priority bits (for a total of 8 priority levels) within the additional 16-bits of an 802.1q header. 802.1p processing requires compatible equipment for tag generation, recognition and processing, and should only be employed on compatible networks. 802.1p is supported on SonicWall NSA platforms.
Bandwidth Management (BWM) – Refers to any of a variety of algorithms or methods used to shape traffic or police traffic. Shaping often refers to the management of outbound traffic, while policing often refers to the management of inbound traffic (also known as admission control). There are many different methods of bandwidth management, including various queuing and discarding techniques, each with their own design strengths. SonicWall employs a Token Based Class Based Queuing method for inbound and outbound BWM, as well as a discard mechanism for certain types of inbound traffic.
Class of Service (CoS) – A designator or identifier, such as a layer 2 or layer 3 tag, that is applied to traffic after classification. CoS information will be used by the Quality of Service (QoS) system to differentiate between the classes of traffic on the network, and to provide special handling (for example, prioritized queuing, low latency) as defined by the QoS system administrator.
Classification – The act of identifying (or differentiating) certain types (or classes) of traffic. Within the context of QoS, this is performed for the sake of providing customized handling, typically prioritization or de-prioritization, based on the traffic’s sensitivity to delay, latency, or packet loss. Classification within SonicOS Enhanced uses Access Rules, and can occur based on any or all of the following elements: source zone, destination zone, source address object, destination address object, service object, schedule object.
Code Point – A value that is marked (or tagged) into the DSCP portion of an IP packet by a host or by an intermediate network device. There are currently 64 Code Points available, from 0 to 63, used to define the ascending prioritized class of the tagged traffic.
Conditioning – A broad term used to describe a plurality of methods of providing Quality of Service to network traffic, including but not limited to discarding, queuing, policing, and shaping.
DiffServ – Differentiated Services. A standard for differentiating between different types or classes of traffic on an IP network for the purpose of providing tailored handling to the traffic based on its requirements. DiffServ primarily depends upon Code Point values marked in the ToS header of an IP packet to differentiate between different classes of traffic. DiffServ service levels are executed on a Per Hop Basis at each router (or other DiffServ enabled network device) through which the marked traffic passes. DiffServ Service levels currently include at a minimum Default, Assured Forwarding, and Expedited Forwarding. DiffServ is supported on SonicWall NSA platforms. Refer to the DSCP Marking for more information.
Discarding – A congestion avoidance mechanism that is employed by QoS systems in an attempt to predict when congestion might occur on a network, and to prevent the congestion by dropping over-limit traffic. Discarding can also be thought of as a queue management algorithm, since it attempts to avoid situations of full queues. Advanced discard mechanisms will abide by CoS markings so as to avoid dropping sensitive traffic. Common methods are:
Tail Drop – An indiscriminate method of dealing with a full queue wherein the last packets into the queue are dropped, regardless of their CoS marking.
Random Early Detection (RED) – RED monitors the status of queues to try to anticipate when a queue is about to become full. It then randomly discards packets in a staggered fashion to help minimize the potential of Global Synchronization. Basic implementations of RED, like Tail Drop, do not consider CoS markings.
Weighted Random Early Detection (WRED) – An implementation of RED that factors DSCP markings into its discard decision process.
DSCP (Differentiated Services Code Points) – The repurposing of the ToS field of an IP header as described by RFC2474. DSCP uses 64 Code Point values to enable DiffServ (Differentiated Services). By marking traffic according to its class, each packet can be treated appropriately at every hop along the network.
Global Synchronization – A potential side effect of discarding, the congestion avoidance method designed to deal with full queues. Global Synchronization occurs when multiple TCP flows through a congested link are dropped at the same time (as can occur in Tail Drop). When the native TCP slow-start mechanism commences with near simultaneity for each of these flows, the flows will again flood the link. This leads to cyclical waves of congestion and under-utilization.
Guaranteed Bandwidth – A declared percentage of the total available bandwidth on an interface which will always be granted to a certain class of traffic. Applicable to both inbound and outbound BWM. The total Guaranteed Bandwidth across all BWM rules cannot exceed 100% of the total available bandwidth. SonicOS Enhanced 5.0 and higher enhances the Bandwidth Management feature to provide rate limiting functionality. You can now create traffic policies that specify maximum rates for Layer 2, 3, or 4 network traffic. This enables bandwidth management in cases where the primary WAN link fails over to a secondary connection that cannot handle as much traffic. The Guaranteed Bandwidth can also be set to 0%.
Inbound (Ingress or IBWM) – The ability to shape the rate at which traffic enters a particular interface. For TCP traffic, actual shaping can occur where the rate of the ingress flow can be adjusted by delaying egress acknowledgements (ACKs) causing the sender to slow its rate. For UDP traffic, a discard mechanism is used since UDP has no native feedback controls.
IntServ – Integrated Services, as defined by RFC1633. An alternative CoS system to DiffServ, IntServ differs fundamentally from DiffServ in that it has each device request (or reserve) its network requirements before it sends its traffic. This requires that each hop on the network be IntServ aware, and it also requires each hop to maintain state information for every flow. IntServ is not supported by SonicOS. The most common implementation of IntServ is RSVP.
Maximum Bandwidth – A declared percentage of the total available bandwidth on an interface defining the maximum bandwidth to be allowed to a certain class of traffic. Applicable to both inbound and outbound BWM. Used as a throttling mechanism to specify a bandwidth rate limit. The Bandwidth Management feature is enhanced to provide rate limiting functionality. You can now create traffic policies that specify maximum rates for Layer 2, 3, or 4 network traffic. This enables bandwidth management in cases where the primary WAN link fails over to a secondary connection that cannot handle as much traffic. The Maximum Bandwidth can be set to 0%, which will prevent all traffic.
Outbound (Egress or OBWM) – Conditioning the rate at which traffic is sent out an interface. Outbound BWM uses a credit (or token) based queuing system with eight priority queues to service different types of traffic, as classified by Access Rules.
Priority – An additional dimension used in the classification of traffic. SonicOS uses eight priority values (0 = realtime, 7 = lowest) to comprise the queue structure used for BWM. Queues are serviced in the order of their priority.
Mapping – With regard to SonicOS’ implementation of QoS, the practice of converting layer 2 CoS tags (802.1p) to layer 3 CoS tags (DSCP) and back again for the purpose of preserving the 802.1p tags across network links that do not support 802.1p tagging. The map correspondence is fully user-definable, and the act of mapping is controlled by Access Rules. Mapping is supported on SonicWall NSA platforms.
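As a rough illustration of the mapping idea, the sketch below converts between 802.1p priorities and DSCP code points with a simple lookup table. The correspondence shown is a hypothetical example; in SonicOS the map is fully user-definable through Access Rules.

# Hypothetical 802.1p-to-DSCP correspondence, for illustration only.
DOT1P_TO_DSCP = {0: 0, 1: 8, 2: 16, 3: 24, 4: 32, 5: 40, 6: 48, 7: 56}
DSCP_TO_DOT1P = {dscp: prio for prio, dscp in DOT1P_TO_DSCP.items()}

def map_dot1p_to_dscp(dot1p_priority):
    """Translate a layer 2 CoS tag into a layer 3 code point."""
    return DOT1P_TO_DSCP.get(dot1p_priority, 0)    # default to best effort

def map_dscp_to_dot1p(dscp):
    """Recover an 802.1p priority on the far side of a non-802.1p link."""
    return DSCP_TO_DOT1P.get(dscp, 0)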
Marking – Also known as tagging or coloring – The act of applying layer 2 (802.1p) or layer 3 (DSCP) information to a packet for the purpose of differentiation, so that it can be properly classified (recognized) and prioritized by network devices along the path to its destination. Marking is supported on SonicWall NSA platforms.
MPLS - Multi Protocol Label Switching. A term that comes up frequently in the area of QoS, but which is natively unsupported by most customer premise IP networking devices, including SonicWall appliances. MPLS is a carrier-class network service that attempts to enhance the IP network experience by adding the concept of connection-oriented paths (Label Switch Paths – LSPs) along the network. When a packet leaves a customer premise network, it is tagged by a Label Edge Router (LER) so that the label can be used to determine the LSP. The MPLS tag itself resides between layer 2 and layer 3, imparting upon MPLS characteristics of both network layers. MPLS is becoming quite popular for VPNs, offering both layer 2 and layer 3 VPN services, but remains interoperable with existing IPsec VPN implementations. MPLS is also very well known for its QoS capabilities, and interoperates well with conventional DSCP marking.
Per Hop Behavior (PHB) – The handling that will be applied to a packet by each DiffServ capable router it traverses, based upon the DSCP classification of the packet. The behavior can be among such actions as discard, re-mark (re-classify), best-effort, assured forwarding, or expedited forwarding.
Policing – A facility of traffic conditioning that attempts to control the rate of traffic into or out of a network link. Policing methods range from indiscriminate packet discarding to algorithmic shaping, to various queuing disciplines.
Queuing – To effectively make use of a link’s available bandwidth, queues are commonly employed to sort and separately manage traffic after it has been classified. Queues are then managed using a variety of methods and algorithms to ensure that the higher priority queues always have room to receive more traffic, and that they can be serviced (de-queued or processed) before lower priority queues. Some common queue disciplines include:
FIFO – First In First Out. A very simple, undiscriminating queue where the first packet in is the first packet to be processed.
Class Based Queuing (CBQ) – A queuing discipline that takes into account the CoS of a packet, ensuring that higher priority traffic is treated preferentially.
Weighted Fair Queuing (WFQ) – A discipline that attempts to service queues using a simple formula based upon the packets’ IP precedence and the total number of flows. WFQ has a tendency to become imbalanced when there is a disproportionately large number of high-priority flows to be serviced, often having the opposite of the desired effect.
Token Based CBQ – An enhancement to CBQ that employs a token- or credit-based system that helps to smooth or normalize link utilization, avoiding burstiness as well as under-utilization. Employed by SonicOS BWM.
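The sketch below shows the general shape of a token (credit) based queue of this kind: credit accrues at a steady rate, and a packet may only be dequeued when enough credit is available, which smooths bursts. The rates and sizes are illustrative assumptions, not SonicOS values.

import time

class TokenQueue:
    """Minimal token-bucket queue: credit accrues at a fixed rate in bytes per second."""
    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.burst = burst_bytes          # maximum credit that can accumulate
        self.credit = burst_bytes
        self.last = time.monotonic()

    def try_send(self, packet_len):
        """Return True if the packet may be sent now, consuming credit."""
        now = time.monotonic()
        self.credit = min(self.burst, self.credit + (now - self.last) * self.rate)
        self.last = now
        if self.credit >= packet_len:
            self.credit -= packet_len
            return True
        return False                      # hold the packet until credit accrues

# Example: a queue guaranteed roughly 125,000 bytes per second (about 1 Mbps).
queue = TokenQueue(rate_bytes_per_sec=125_000, burst_bytes=15_000)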
RSVP – Resource Reservation Protocol. An IntServ signaling protocol employed by some applications where the anticipated need for network behavior (for example, delay and bandwidth) is requested so that it can be reserved along the network path. Setting up this Reservation Path requires that each hop along the way be RSVP capable, and that each agrees to reserve the requested resources. This system of QoS is comparatively resource intensive, since it requires each hop to maintain state on existing flows. Although IntServ’s RSVP is quite different from DiffServ’s DSCP, the two can interoperate. RSVP is not supported by SonicOS.
Shaping – An attempt by a QoS system to modify the rate of traffic flow, usually by employing some feedback mechanism to the sender. The most common example of this is TCP rate manipulation, where acknowledgements (ACKs) sent back to a TCP sender are queued and delayed so as to increase the calculated round-trip time (RTT), leveraging the inherent behavior of TCP to force the sender to slow the rate at which it sends data.
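Because a TCP sender's throughput is roughly bounded by its window size divided by the round-trip time, delaying ACKs (and so inflating the measured RTT) lowers the rate the sender can sustain. The figures in this small Python calculation are purely illustrative.

def tcp_rate_bytes_per_sec(window_bytes, rtt_seconds):
    """Approximate steady-state TCP throughput: window / RTT."""
    return window_bytes / rtt_seconds

# A 64 KB window at a 20 ms RTT versus an 80 ms RTT (the latter standing in
# for ACKs being queued and delayed by the shaper); both values are hypothetical.
print(tcp_rate_bytes_per_sec(65_536, 0.020))   # ~3.3 MB/s
print(tcp_rate_bytes_per_sec(65_536, 0.080))   # ~0.8 MB/s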
Type of Service (ToS) – A field within the IP header wherein CoS information can be specified. Historically used, albeit somewhat rarely, in conjunction with IP precedence bits to define CoS. The ToS field is now rather commonly used by DiffServ’s code point values.

Bandwidth Management

For information on Bandwidth Management (BWM), see Bandwidth Management Overview.

 

Configuring SSL Control

Firewall Settings > SSL Control

This chapter describes how to plan, design, implement, and maintain the SSL Control feature.

Overview of SSL Control

SonicOS Enhanced firmware versions 4.0 and higher include SSL Control, a system for providing visibility into the handshake of SSL sessions, and a method for constructing policies to control the establishment of SSL connections. SSL (Secure Sockets Layer) is the dominant standard for the encryption of TCP based network communications, with its most common and well-known application being HTTPS (HTTP over SSL). SSL provides digital certificate-based endpoint identification, along with cryptographic confidentiality and digest-based integrity for network communications.

SSL Control Network Communication

An effect of the security provided by SSL is the obscuration of all payload, including the URL (Uniform Resource Locator, for example, https://www.MySonicWall.com) being requested by a client when establishing an HTTPS session. This is due to the fact that HTTP is transported within the encrypted SSL tunnel when using HTTPS. It is not until the SSL session is established (step 14, figure 1) that the actual target resource (www.MySonicWall.com) is requested by the client, but since the SSL session is already established, no inspection of the session data by the firewall or any other intermediate device is possible. As a result, URL based content filtering systems cannot consider the request to determine permissibility in any way other than by IP address.

While IP address based filtering does not work well for unencrypted HTTP because of the efficiency and popularity of Host-header based virtual hosting (defined in Key Concepts below), IP filtering can work effectively for HTTPS due to the rarity of Host-header based HTTPS sites. But this trust relies on the integrity of the HTTPS server operator, and assumes that SSL is not being used for deceptive purposes.

For the most part, SSL is employed legitimately, being used to secure sensitive communications, such as online shopping or banking, or any session where there is an exchange of personal or valuable information. The ever decreasing cost and complexity of SSL, however, has also spurred the growth of more dubious applications of SSL, designed primarily for the purposes of obfuscation or concealment rather than security.

An increasingly common camouflage is the use of SSL encrypted Web-based proxy servers for the purpose of hiding browsing details, and bypassing content filters. While it is simple to block well known HTTPS proxy services of this sort by their IP address, it is virtually impossible to block the thousands of privately-hosted proxy servers that are readily available through a simple Web-search. The challenge is not the ever-increasing number of such services, but rather their unpredictable nature. Since these services are often hosted on home networks using dynamically addressed DSL and cable modem connections, the targets are constantly moving. Trying to block an unknown SSL target would require blocking all SSL traffic, which is practically infeasible.

SSL Control provides a number of methods to address this challenge by arming the security administrator with the ability to dissect and apply policy based controls to SSL session establishment. While the current implementation does not decode the SSL application data, it does allow for gateway-based identification and disallowance of suspicious SSL traffic.

Topics:

Key Features of SSL Control

 

SSL Control: Features and Benefits

Feature

Benefit

Common-Name based White and Black Lists

The administrator can define lists of explicitly allowed or denied certificate subject common names (described in Key Concepts). Entries will be matched on substrings, for example, a blacklist entry for “prox” will match “www.megaproxy.com”, “www.proxify.com” and “proxify.net”. This allows the administrator to easily block all SSL exchanges employing certificates issued to subjects with potentially objectionable names. Inversely, the administrator can easily authorize all certificates within an organization by whitelisting a common substring for the organization. Each list can contain up to 1,024 entries.

Since the evaluation is performed on the subject common-name embedded in the certificate, even if the client attempts to conceal access to these sites by using an alternative hostname or even an IP address, the subject will always be detected in the certificate, and policy will be applied.

Self-Signed Certificate Control

It is common practice for legitimate sites secured by SSL to use certificates issued by well-known certificate authorities, as this is the foundation of trust within SSL. It is almost equally common for network appliances secured by SSL (such as SonicWall security appliances) to use self-signed certificates for their default method of security. So while self-signed certificates in closed-environments are not suspicious, the use of self-signed certificates by publicly or commercially available sites is. A public site using a self-signed certificate is often an indication that SSL is being used strictly for encryption rather than for trust and identification. While not absolutely incriminating, this sometimes suggests that concealment is the goal, as is commonly the case for SSL encrypted proxy sites.

The ability to set a policy to block self-signed certificates allows security administrators to protect against this potential exposure. To prevent discontinuity of communications to known/trusted SSL sites using self-signed certificates, the whitelist feature can be used for explicit allowance.

Untrusted Certificate Authority Control

Like the use of self-signed certificates, encountering a certificate issued by an untrusted CA is not an absolute indication of disreputable obscuration, but it does suggest questionable trust.

SSL Control can compare the issuer of the certificate in SSL exchanges against the certificates in the SonicWall’s certificate store. The certificate store contains approximately 100 well-known CA certificates, exactly like today’s Web-browsers. If SSL Control encounters a certificate that was issued by a CA not in its certificate store, it can disallow the SSL connection.

For organizations running their own private certificate authorities, the private CA certificate can easily be imported into the SonicWall’s certificate store to recognize the private CA as trusted. The store can hold up to 256 certificates.

SSL version, Cipher Strength, and Certificate Validity Control

SSL Control provides additional management of SSL sessions based on characteristics of the negotiation, including the ability to disallow the potentially exploitable SSLv2, the ability to disallow weak encryption (ciphers less than 64 bits), and the ability to disallow SSL negotiations where a certificate’s date ranges are invalid. This enables the administrator to create a rigidly secure environment for network users, eliminating exposure to risk through unseen cryptographic weaknesses, or through disregard for or misunderstanding of security warnings.

Zone-Based Application

SSL Control is applied at the zone level, allowing the administrator to enforce SSL policy on the network. When SSL Control is enabled on a zone, Client Hellos sent from clients on that zone through the SonicWall trigger inspection. The SonicWall then looks for the Server Hello and Certificate sent in response and evaluates them against the configured policy. Enabling SSL Control on the LAN zone, for example, will inspect all SSL traffic initiated by clients on the LAN to any destination zone.

Configurable Actions and Event Notifications

When SSL Control detects a policy violation, it can log the event and block the connection, or it can simply log the event while allowing the connection to proceed.

Key Concepts to SSL Control

SSL – Secure Sockets Layer (SSL) is a network security mechanism introduced by Netscape in 1995. SSL was designed “to provide privacy between two communicating applications (a client and a server) and also to authenticate the server, and optionally the client.” SSL’s most popular application is HTTPS, designated by a URL beginning with https:// rather than simply http://, and it is recognized as the standard method of encrypting Web traffic on the Internet. An SSL HTTP transfer typically uses TCP port 443, whereas a regular HTTP transfer uses TCP port 80. Although HTTPS is what SSL is best known for, SSL is not limited to securing HTTP, but can also be used to secure other TCP protocols such as SMTP, POP3, IMAP, and LDAP.

SSL session establishment occurs as follows:

SSL Session Establishment Communication

SSLv2 – The earliest version of SSL still in common use. SSLv2 was found to have a number of weaknesses, limitations, and theoretical deficiencies (comparatively noted in the SSLv3 entry), and is looked upon with scorn, disdain, and righteous indignation by security purists.
SSLv3 – SSLv3 was designed to maintain backward compatibility with SSLv2, while adding the following enhancements:
Alternate key exchange methods, including Diffie-Hellman.
Hardware token support for both key exchange and bulk encryption.
SHA, DSS, and Fortezza support.
Out-of-Band data transfer.
TLS – Transport Layer Security (version 1.0), also known as SSLv3.1, is very similar to SSLv3, but improves upon SSLv3 in the following ways:
 

Differences between SSL and TLS

SSL: Uses a preliminary HMAC algorithm
TLS: Uses HMAC as described in RFC 2104

SSL: Does not apply MAC to version info
TLS: Applies MAC to version info

SSL: Does not specify a padding value
TLS: Initializes padding to a specific value

SSL: Limited set of alerts and warnings
TLS: Detailed Alert and Warning messages

MAC – A MAC (Message Authentication Code) is calculated by applying an algorithm (such as MD5 or SHA1) to data. The MAC is a message digest, or a one-way hash code that is fairly easy to compute, but which is virtually irreversible. In other words, with the MAC alone, it would be theoretically impossible to determine the message upon which the digest was based. It is equally difficult to find two different messages that would result in the same MAC. If the receiver’s MAC calculation matches the sender’s MAC calculation on a given piece of data, the receiver is assured that the data has not been altered in transit.
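For instance, an HMAC, the keyed digest construction described in RFC 2104 and used by TLS, can be computed with Python's standard library. The key and message below are placeholders.

import hmac
import hashlib

key = b"shared-secret"           # placeholder key agreed during the handshake
message = b"record payload"      # placeholder data to be protected

digest = hmac.new(key, message, hashlib.sha1).hexdigest()
print(digest)   # the receiver recomputes this value and compares it to detect tampering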
Client Hello – The first message sent by the client to the server following TCP session establishment. This message starts the SSL session, and consists of the following components:
Version – The version of SSL that the client wishes to use in communications. This is usually the most recent version of SSL supported by the client.
Random – A 32-bit timestamp coupled with a 28-byte random structure.
Session ID – This can either be empty if no Session ID data exists (essentially requesting a new session) or can reference a previously issued Session ID.
Cipher Suites – A list of the cryptographic algorithms, in preferential order, supported by the client.
Compression Methods – A list of the compression methods supported by the client (typically null).
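Purely as a structural sketch (the field names below mirror the list above; this is not an actual wire-format parser), the Client Hello can be pictured like this:

import os
import time
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ClientHello:
    version: str                               # highest SSL/TLS version the client offers
    # 32-bit timestamp plus 28 random bytes, per the Random component above.
    random: bytes = field(default_factory=lambda: int(time.time()).to_bytes(4, "big") + os.urandom(28))
    session_id: Optional[bytes] = None         # empty requests a brand-new session
    cipher_suites: List[str] = field(default_factory=list)            # preference-ordered ciphers
    compression_methods: List[str] = field(default_factory=lambda: ["null"])

hello = ClientHello(version="TLS 1.0",
                    cipher_suites=["TLS_RSA_WITH_AES_128_CBC_SHA"])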
Server Hello – The SSL server’s response to the Client Hello. It is this portion of the SSL exchange that SSL Control inspects. The Server Hello contains the version of SSL negotiated in the session, along with cipher, session ID and certificate information. The actual X.509 server certificate itself, although a separate step of the SSL exchange, usually begins (and often ends) in the same packet as the Server Hello.
Certificates - X.509 certificates are unalterable digital stamps of approval for electronic security. There are four main characteristics of certificates:
Identify the subject of a certificate by a common name or distinguished name (CN or DN).
Contain the public key that can be used to encrypt and decrypt messages between parties
Provide a digital signature from the trusted organization (Certificate Authority) that issued the certificate.
Indicate the valid date range of the certificate
Subject – The entity guaranteed by a certificate, identified by a common name (CN). When a client browses to an SSL site, such as https://www.MySonicWall.com, the server sends its certificate which is then evaluated by the client. The client checks that the certificate’s dates are valid, that it was issued by a trusted CA, and that the subject CN matches the requested host name (that is, they are both “www.MySonicWall.com”). Although a subject CN mismatch elicits a browser alert, it is not always a sure sign of deception. For example, if a client browses to https://MySonicWall.com, which resolves to the same IP address as www.MySonicWall.com, the server will present its certificate bearing the subject CN of www.MySonicWall.com. An alert will be presented to the client, despite the total legitimacy of the connection.
Certificate Authority (CA) - A Certificate Authority (CA) is a trusted entity that has the ability to sign certificates intended, primarily, to validate the identity of the certificate’s subject. Well-known certificate authorities include VeriSign, Thawte, Equifax, and Digital Signature Trust. In general, for a CA to be trusted within the SSL framework, its certificate must be stored within a trusted store, such as that employed by most Web-browsers, operating systems and run-time environments. The SonicOS trusted store is accessible from the System > Certificates page. The CA model is built on associative trust, where the client trusts a CA (by having the CA's certificate in its trusted store), the CA trusts a subject (by having issued the subject a certificate), and therefore the client can trust the subject.
Untrusted CA – An untrusted CA is a CA that is not contained in the trusted store of the client. In the case of SSL Control, an untrusted CA is any CA whose certificate is not present in System > Certificates.
Self-Signed Certificates – Any certificate where the issuer’s common-name and the subject’s common-name are the same, indicating that the certificate was self-signed.
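To show the shape of the self-signed and untrusted-CA checks described above, here is a minimal sketch. The certificate is reduced to issuer and subject common names, and the trusted store to a set of issuer names, which is a deliberate simplification of real X.509 chain validation.

from dataclasses import dataclass

@dataclass
class Certificate:
    subject_cn: str
    issuer_cn: str

# Stand-in for the appliance's trusted store (System > Certificates); names are examples.
TRUSTED_ISSUERS = {"VeriSign", "Thawte", "Equifax", "Digital Signature Trust"}

def is_self_signed(cert):
    """Self-signed: the issuer's common name matches the subject's common name."""
    return cert.issuer_cn == cert.subject_cn

def is_untrusted_ca(cert):
    """Untrusted CA: the issuer is not present in the trusted store."""
    return cert.issuer_cn not in TRUSTED_ISSUERS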
Virtual Hosting – A method employed by Web servers to host more than one website on a single server. A common implementation of virtual hosting is name-based (Host-header) virtual hosting, which allows for a single IP address to host multiple websites. With Host-header virtual hosting, the server determines the requested site by evaluating the “Host:” header sent by the client. For example, both www.website1.com and www.website2.com might resolve to 64.41.140.173. If the client sends a “GET /” along with Host: www.website1.com, the server can return content corresponding to that site.

Host-header virtual hosting is generally not employed in HTTPS because the host header cannot be read until the SSL connection is established, but the SSL connection cannot be established until the server sends its Certificate. Since the server cannot determine which site the client will request (all that is known during the SSL handshake is the IP address), it cannot determine the appropriate certificate to send. While sending any certificate might allow the SSL handshake to commence, a certificate name (subject) mismatch will trigger a browser alert.
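To illustrate the role of the Host header in plain HTTP, the following hypothetical request reuses the example address and hostnames above; it will only succeed if such a server is actually reachable.

import http.client

# Both hostnames resolve to the same IP; the server chooses the site from the Host header.
conn = http.client.HTTPConnection("64.41.140.173", 80)
conn.request("GET", "/", headers={"Host": "www.website1.com"})
response = conn.getresponse()
print(response.status)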

Weak Ciphers – Relatively weak symmetric cryptography ciphers. Ciphers are classified as weak when they are less than 64 bits. For the most part, export ciphers are weak ciphers. Common Weak Ciphers lists common weak ciphers.

Common Weak Ciphers

Caveats and Advisories

1
Self-signed and Untrusted CA enforcement – If enforcing either of these two options, it is strongly advised that you add the common names of any SSL secured network appliances within your organization to the whitelist to ensure that connectivity to these devices is not interrupted. For example, the default subject name of SonicWall network security appliances is 192.168.168.168, and the default common name of SonicWall SSL VPN appliances is 192.168.200.1.
2
If your organization employs its own private Certificate Authority (CA), it is strongly advised that you import your private CA’s certificate into the System > Certificates store, particularly if you will be enforcing blocking of certificates issued by untrusted CAs. For more information on this process, see Managing Certificates.
3
SSL Control inspection is currently only performed on TCP port 443 traffic. SSL negotiations occurring on non-standard ports will not be inspected at this time.
4
Server Hello fragmentation – In some rare instances, an SSL server will fragment the Server Hello. If this occurs, the current implementation of SSL Control will not decode the Server Hello. SSL Control policies will not be applied to the SSL session, and the SSL session will be allowed.
5
Session termination handling – When SSL Control detects a policy violation and terminates an SSL session, it will simply terminate the session at the TCP layer. Because the SSL session is in an embryonic state at this point, it is not currently possible to redirect the client, or to provide any kind of informational notification of termination to the client.
6
Whitelist precedence – The whitelist takes precedence over all other SSL Control elements. Any SSL server certificate which matches an entry in the whitelist will allow the SSL session to proceed, even if other elements of the SSL session are in violation of the configured policy. This is by design.
7
SonicOS Enhanced 5.0 increased the number of pre-installed (well-known) CA certificates from 8 to 93. The resulting repository is very similar to what can be found in most Web-browsers. Other certificate-related changes:
a
The maximum number of CA certificates was raised from 6 to 256.
b
The maximum size of an individual certificate was raised from 2,048 to 4,096.
c
The maximum number of entries in the whitelist and blacklist is 1,024 each.

SSL Control Configuration

SSL Control is located under Firewall Settings > SSL Control. SSL Control has a global setting, as well as a per-zone setting. By default, SSL Control is not enabled at the global or zone level. The individual page controls are as follows (refer to Key Concepts to SSL Control for more information on terms used below).

Enable SSL Control – The global setting for SSL Control. This must be enabled for SSL Control applied to zones to be effective.
Log the event – If an SSL policy violation, as defined within the Configuration section below, is detected, the event will be logged, but the SSL connection will be allowed to continue.
Block the connection and log the event – In the event of a policy violation, the connection will be blocked and the event will be logged.
Enable Blacklist – Controls detection of the entries in the blacklist, as configured in the Configure Lists section below.
Enable Whitelist – Controls detection of the entries in the whitelist, as configured in the Configure Lists section below. Whitelisted entries will take precedence over all other SSL control settings.
Detect Expired Certificates – Controls detection of certificates whose start date is after the current system time, or whose end date is before the current system time. Date validation depends on the SonicWall’s System Time. Make sure your System Time is set correctly, preferably synchronized with NTP, on the System > Time page.
Detect SSLv2 – Controls detection of SSLv2 exchanges. SSLv2 is known to be susceptible to cipher downgrade attacks because it does not perform integrity checking on the handshake. Best practices recommend using SSLv3 or TLS in its place.
Detect Self-signed certificates – Controls the detection of certificates where both the issuer and the subject have the same common name.
Detect Certificates signed by an Untrusted CA – Controls the detection of certificates where the issuer’s certificate is not in the SonicWall’s System > Certificates trusted store.
Detect Weak Ciphers (<64 bits) – Controls the detection of SSL sessions negotiated with symmetric ciphers less than 64 bits, commonly indicating export cipher usage.
Detect MD5 Digest – Controls the detection of certificates that were created using an MD5 Hash.
Configure Blacklist and Whitelist – Allows the administrator to define strings for matching common names in SSL certificates. Entries are case-insensitive, and will be used in pattern-matching fashion, for example:
 

SSL certificate Pattern Matching

Entry: sonicwall.com
Will Match: https://www.sonicwall.com, https://csm.demo.MySonicWall.com, https://MySonicWall.com, https://supersonicwall.computers.org, https://67.115.118.87 (see note 1)
Will Not Match: https://www.sonicwall.de

Entry: prox
Will Match: https://proxify.org, https://www.proxify.org, https://megaproxy.com, https://1070652204 (see note 2)
Will Not Match: https://www.freeproxy.ru (see note 3)


Note 1: 67.115.118.67 is currently the IP address to which sslvpn.demo.sonicwall.com resolves, and that site uses a certificate issued to sslvpn.demo.sonicwall.com. This will result in a match to sonicwall.com as matching occurs based on the common name in the certificate.

Note 2: This is the decimal notation for the IP address 63.208.219.44, whose certificate is issued to www.megaproxy.com.

Note 3: www.freeproxy.ru will not match prox as the common name on the certificate that is currently presented by this site is a self-signed certificate issued to “-“. This can, however, easily be blocked by enabling control of self-signed or Untrusted CA certificates.
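As an illustration of the case-insensitive substring matching shown in the table above (the list contents and function name below are only examples):

def matches_list(subject_cn, entries):
    """Case-insensitive substring match of a certificate subject against a list."""
    cn = subject_cn.lower()
    return any(entry.lower() in cn for entry in entries)

blacklist = ["prox", "sonicwall.com"]
print(matches_list("www.megaproxy.com", blacklist))   # True: "prox" is a substring
print(matches_list("www.sonicwall.de", blacklist))    # False: no entry is a substring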

How to configure whitelists and blacklists is described in Configuring White Lists and Black Lists.

Configuring White Lists and Black Lists

To configure the White List and Black List, click the Configure button to bring up SSL Control Custom Lists dialog.

Entries can be added, edited and deleted with the buttons beneath each list. Clicking the Add… button displays the Add Whitelist Domain Entry or Add Blacklist Domain Entry dialog, which are similar.

* 
NOTE: List matching will be based on the subject common name in the certificate presented in the SSL exchange, not on the URL (resource) requested by the client.

Changes to any of the SSL Control settings will not affect currently established connections; only new SSL exchanges that occur following the change commit will be inspected and affected.

Enabling SSL Control on Zones

Once SSL Control has been globally enabled, and the desired options have been configured, SSL Control must be enabled on one or more zones. When SSL Control is enabled on a zone, Client Hellos sent from clients on that zone through the SonicWall trigger inspection. The SonicWall then looks for the Server Hello and Certificate sent in response and evaluates them against the configured policy. Enabling SSL Control on the LAN zone, for example, will inspect all SSL traffic initiated by clients on the LAN to any destination zone.

* 
NOTE: If you are activating SSL Control on a zone (for example, the LAN zone) where there are clients who will be accessing an SSL server on another zone connected to the SonicWall (for example, the DMZ zone) it is recommended that you add the subject common name of that server’s certificate to the whitelist to ensure continuous trusted access.

To enable SSL Control on a zone, browse to the Network > Zones page, and select the configure icon for the desired zone. In the Edit Zone window, select the Enable SSL Control check box, and click OK. All new SSL connections initiated from that zone will now be subject to inspection.

SSL Control Events

Log events will include the client’s username in the notes section (not shown in the figure below) if the user logged in manually, or was identified through CIA/Single Sign On. If the user’s identity is not available, the note will indicate that the user is Unidentified. The table after the figure explains the Event Messages.

 

SSL Control: Event Messages

1. SSL Control: Certificate with Invalid date – The certificate’s start date is after the system time, or its end date is before the system time.

2. SSL Control: Certificate chain not complete – The certificate has been issued by an intermediate CA with a trusted top-level CA, but the SSL server did not present the intermediate certificate. This log event is informational and does not affect the SSL connection.

3. SSL Control: Self-signed certificate – The certificate is self-signed (the CN of the issuer and the subject match).

4. SSL Control: Untrusted CA – The certificate has been issued by a CA that is not in the System > Certificates store of the SonicWall.

5. SSL Control: Website found in blacklist – The common name of the subject matched a pattern entered into the blacklist.

6. SSL Control: Weak cipher being used – The symmetric cipher being negotiated was less than 64 bits.

7. See #2 – See #2.

8. SSL Control: Failed to decode Server Hello – The Server Hello from the SSL server was undecipherable. Also occurs when the certificate and Server Hello are in different packets, as is the case when connecting to an SSL server on a SonicWall appliance. This log event is informational, and does not affect the SSL connection.

9. SSL Control: Website found in whitelist – The common name of the subject (typically a website) matched a pattern entered into the whitelist. Whitelist entries are always allowed, even if there are other policy violations in the negotiation, such as SSLv2 or weak ciphers.

10. SSL Control: HTTPS via SSLv2 – The SSL session was being negotiated using SSLv2, which is known to be susceptible to certain man-in-the-middle attacks. Best practices recommend using SSLv3 or TLS instead.