Session-Aware Load Balancing Cluster Guide

FortiController-5903C

5.2.10

FortiController-5903C

The FortiController-5903C distributes IPv4 TCP and UDP sessions to multiple FortiGate-5000-series boards (called workers) over the FortiGate-5144C chassis fabric backplane. The FortiController-5903C includes four front panel 40Gbps Quad Small Form-factor Pluggable Plus (QSFP+) interfaces (F1 to F4) for connecting to 40Gbps networks. The FortiController-5903C forms a session-aware load balanced cluster and uses DP processors to load balance millions of sessions to the cluster, providing up to 40 Gbps of traffic to each cluster member (each worker). Cluster performance scales linearly as more workers are added.

Clusters can also be formed with one or two FortiController-5903Cs and up to 12 workers. All of the workers must be the same model. Currently FortiGate-5001B, FortiGate-5001C, FortiGate-5101C, FortiGate-5001D, FortiGate-5001E, and FortiGate-5001E1 workers are supported. FortiGate-5001C, FortiGate-5001D, FortiGate-5001E, and FortiGate-5001E1 workers can handle up to 40 Gbps of traffic. FortiGate-5001B and FortiGate-5101C workers can handle up to 10 Gbps.
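Each worker joins the cluster from its own CLI by switching it into FortiController mode, after which it restarts and is managed through the FortiController. The lines below are a minimal sketch only, assuming a FortiGate-5001-series worker, a single FortiController-5903C, and the system elbc worker setting used for SLBC; confirm the exact command names for your worker model and firmware before relying on them.

    config system elbc
        set mode forticontroller
    end

With two FortiController-5903Cs installed in the same chassis, the mode is typically set to dual-forticontroller instead so that the worker communicates with both hub/switch slots.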

The FortiController-5903C can also provide 40Gbps fabric and 1Gbps base backplane channel layer-2 switching in a dual star architecture.

You should install the FortiController-5903C in a FortiGate-5144C chassis to meet FortiController-5903C power requirements, to have access to a 40G fabric backplane, and to have enough slots for the number of workers that the FortiController-5903C can load balance sessions to.

In all ATCA chassis, FortiController-5903Cs are installed in the first and second hub/switch slots (usually slots 1 and 2). A single FortiController-5903C should be installed in slot 1 (although it can be installed in slot 2). If you add a second FortiController-5903C, install it in slot 2.

FortiController-5903C Front Panel

The FortiController-5903C includes the following hardware features:

  • One 1Gbps base backplane channel for layer-2 base backplane switching between workers installed in the same chassis as the FortiController-5903C. This base backplane channel includes 13 1Gbps connections to up to 13 other slots in the chassis (slots 2 to 14).
  • One 40Gbps fabric backplane channel for layer-2 fabric backplane switching between workers installed in the same chassis as the FortiController-5903C. This fabric backplane channel includes 13 40Gbps connections to up to 13 other slots in the chassis (slots 2 to 14). Speed can be changed to 10Gbps or 1Gbps.
  • Four front panel 40Gbps QSFP+ fabric channel interfaces (F1 to F4). In a session-aware load balanced cluster these interfaces are connected to 40Gbps networks to distribute sessions to workers installed in chassis slots 3 to 14. Each of these interfaces can also be split into four 10Gbps SFP+ interfaces. The MTU size of these interfaces is 9000 bytes. Splitting the F1 to F4 interfaces may require you to reset the workers to factory defaults and then completely re-configure your SLBC cluster. Fortinet recommends splitting interfaces before configuring your cluster to avoid the downtime caused by having to redo your configuration (see the configuration sketch after the note at the end of this section).
  • Two front panel 10Gbps SFP+ base channel interfaces (B1 and B2) that connect to the base backplane channel. These interfaces are used for heartbeat and management communication between FortiController-5903Cs. Speed can be changed to 1Gbps.
  • On-board DP processors to provide high-capacity session-aware load balancing.
  • One 1Gbps out-of-band management Ethernet interface (MGMT).
  • Internal 64 GByte SSD for storing log messages, DLP archives, the SQL log message database, historic reports, IPS packet archives, quarantined files, WAN Optimization byte caching, and web caching.
  • One RJ-45, RS-232 serial console connection (CONSOLE).
  • One front panel USB port.
Note

If your attached network equipment is sensitive to optical power, you may need to use attenuators with the F1 to F4 QSFP+ or split SFP+ fabric channel front panel interfaces to reduce the optical power transmitted to attached network equipment.
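The fabric and base channel interface speeds mentioned in the list above are changed from the FortiController-5903C CLI. The lines below are an illustrative sketch only; the physical-port configuration trees and in particular the speed keywords (such as the option that splits a 40Gbps QSFP+ port into four 10Gbps interfaces, or the 1Gbps option for B1 and B2) are assumptions that can vary by firmware release, so check the available values with the CLI help before applying them.

    config switch fabric-channel physical-port
        edit f1
            set speed 4x10G
        next
    end

    config switch base-channel physical-port
        edit b1
            set speed 1000full
        next
    end

As noted in the interface list, splitting F1 to F4 after the cluster is formed may require resetting the workers to factory defaults and re-configuring the cluster, which is why Fortinet recommends splitting the interfaces before building the cluster.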
