Fortinet Document Library

Administration Guide

Main HA Cluster CLI Commands

hc-settings

Configure the unit as an HA Cluster mode unit and configure the cluster fail-over IP set.

hc-status

List the status of the HA Cluster units.

hc-slave

Add, update, or remove a slave unit to or from the HA Cluster.

hc-master

Turn the file scan on the Master node on or off, and adjust the Master's scan power.

Example configuration

This example shows the steps for setting up an HA cluster using three FortiSandbox 3000D units.

Step 1 - Prepare the hardware

The following hardware will be required:

  • Nine cables for network connections.
  • Three 1/10 Gbps switches.
  • Three FortiSandbox 3000D units with proper power connections (units A, B, and C).
Note: The master and primary slaves should be on different power circuits.

Step 2 - Prepare the subnets

Prepare three subnets for your cluster (customize as needed):

  • Switch A: 192.168.1.0/24: For system management.
    • Gateway address: 192.168.1.1
    • External management IP address: 192.168.1.99
  • Switch B: 192.168.2.0/24: For internal cluster communications.
  • Switch C: 192.168.3.0/24: For the outgoing port (port 3) on each unit.
    • Gateway address: 192.168.3.1
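The addressing plan above can be sanity-checked before cabling with a short script. This is an optional sketch using only the Python standard library; the subnets and host addresses are the ones used throughout this example (units A, B, and C take hosts .99, .100, and .101, and .98 is the cluster fail-over IP configured in Step 4).

```python
# Sanity-check the example addressing plan with the standard ipaddress module.
# Subnets and addresses come from Step 2 of this guide; adjust for your own plan.
import ipaddress

subnets = {
    "management (Switch A)": ipaddress.ip_network("192.168.1.0/24"),
    "cluster (Switch B)": ipaddress.ip_network("192.168.2.0/24"),
    "outgoing (Switch C)": ipaddress.ip_network("192.168.3.0/24"),
}

# Per-unit port addresses used in Steps 4-6, plus the fail-over IP (.98).
plan = {
    "management (Switch A)": ["192.168.1.99", "192.168.1.100", "192.168.1.101", "192.168.1.98"],
    "cluster (Switch B)": ["192.168.2.99", "192.168.2.100", "192.168.2.101"],
    "outgoing (Switch C)": ["192.168.3.99", "192.168.3.100", "192.168.3.101"],
}

for name, addresses in plan.items():
    net = subnets[name]
    for addr in addresses:
        # Every planned address must fall inside its switch's subnet.
        assert ipaddress.ip_address(addr) in net, f"{addr} is outside {net}"
print("addressing plan is consistent")
```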
Step 3 - Set up the physical connections
  1. Connect port 1 of each FortiSandbox device to Switch A.
  2. Connect port 2 of each FortiSandbox device to Switch B.
  3. Connect port 3 of each FortiSandbox device to Switch C.
Step 4 - Configure the master
  1. Power on the device (Unit A), and log into the CLI (see Connecting to the Command Line Interface).
  2. Configure the port IP addresses and gateway address with the following commands:

     set port1-ip 192.168.1.99/24
     set port2-ip 192.168.2.99/24
     set port3-ip 192.168.3.99/24

  3. Configure the device as the master node and its cluster fail-over IP for Port1 with the following commands:

     hc-settings -sc -tM -nMasterA -cTestHCsystem -ppassw0rd -iport2
     hc-settings -si -iport1 -a192.168.1.98/24

     See the FortiSandbox CLI Reference Guide, available in the Fortinet Document Library, for more information about the CLI commands.

  4. Review the cluster status with the following command:

     hc-status -l

     Other ports on the device can be used for file inputs.

Step 5 - Configure the primary slave
  1. Power on the device (Unit B), and log into the CLI.
  2. Configure the port IP addresses and gateway address with the following commands:

     set port1-ip 192.168.1.100/24
     set port2-ip 192.168.2.100/24
     set port3-ip 192.168.3.100/24

  3. Configure the device as the primary slave node with the following commands:

     hc-settings -s -tP -nPslaveB -iport2
     hc-settings -l
     hc-slave -a -s192.168.2.99 -ppassw0rd

  4. Review the cluster status with the following command:

     hc-status -l

Step 6 - Configure the regular slave
  1. Power on the device (Unit C), and log into the CLI.
  2. Configure the port IP addresses and gateway address with the following commands:

     set port1-ip 192.168.1.101/24
     set port2-ip 192.168.2.101/24
     set port3-ip 192.168.3.101/24

  3. Configure the device as a regular slave node with the following commands:

     hc-settings -s -tR -nSlaveC -iport2
     hc-settings -l
     hc-slave -a -s192.168.2.99 -ppassw0rd

  4. Review the cluster status with the following command:

     hc-status -l
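The per-unit addressing in Steps 4-6 follows a simple pattern: port N of each unit sits on 192.168.N.0/24, and units A, B, and C take hosts .99, .100, and .101. As an optional sketch, the `set` commands can be generated from that pattern rather than typed by hand (the unit labels are just for display):

```python
# Generate the "set portN-ip" commands used in Steps 4-6 of this example.
# Host numbers 99/100/101 correspond to units A/B/C.
units = {"A (master)": 99, "B (primary slave)": 100, "C (regular slave)": 101}

def port_commands(host):
    # Ports 1-3 sit on 192.168.1.0/24, 192.168.2.0/24, and 192.168.3.0/24.
    return [f"set port{n}-ip 192.168.{n}.{host}/24" for n in (1, 2, 3)]

for unit, host in units.items():
    print(f"# Unit {unit}")
    for cmd in port_commands(host):
        print(cmd)
```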

Step 7 - Configure other settings

VM Image settings and network settings, such as the default gateway, static routes, and DNS servers, should be configured on each unit individually. Scan-related settings, such as the scan profile, should be set on the Master unit only; they are synchronized to the Slave nodes. For more details, refer to Master's Role and Slave's Role.

Step 8 - Finish

The HA cluster can now be treated like a single, extremely powerful standalone FortiSandbox unit.

In this example, files are submitted to, and reports and logs are available over, IP address 192.168.1.99.

Note: A FortiSandbox 3500D unit is configured as a cluster system with blade 1 as the Master node, blade 2 as the Primary Slave node, and the other blades as Regular Slave nodes.

What happens during a failover

The Master node and Primary Slave nodes send heartbeats to each other to detect whether their peers are alive. If the Master node is not accessible, such as during a reboot, a failover occurs. Users can also configure a Ping server to frequently check the unit's network condition; when the condition warrants, the unit downgrades itself to the Primary Slave type to trigger a failover. The failover logic handles two different scenarios:

Objective node available

The objective node is a slave (either Primary or Regular) that can confirm the new Master. For example, if a cluster consists of one Master node, one Primary Slave node, and one Regular Slave node, the Regular Slave node is the objective node.

After a Primary Slave node takes over the Master role, and the new role is accepted by the objective node, the original Master node will accept the decision when it is back online.

After the original Master is back online, it will become a Primary Slave node.

No Objective node available

This occurs when the cluster's internal communication is down.

For example, if the cluster contains one Master node and one Primary Slave node and the Master node reboots, or if the internal cluster communication is down due to a failed switch, all Primary Slave nodes become Masters (resulting in more than one Master unit).

When the system is back online, the unit with the largest serial number keeps the Master role and the others revert to Primary Slave.
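The serial-number tie-break described above can be illustrated with a short conceptual sketch. The function name and serial numbers are made up for illustration; they are not FortiSandbox internals.

```python
# Split-brain resolution sketch: when internal communication is restored and
# more than one unit holds the Master role, the unit with the largest serial
# number keeps it and the others revert to Primary Slave.
def resolve_masters(master_serials):
    # Serial numbers are compared as strings here, purely for illustration.
    winner = max(master_serials)
    return {sn: ("Master" if sn == winner else "Primary Slave") for sn in master_serials}

# Hypothetical serial numbers for two units that both became Master:
roles = resolve_masters(["FSA3KD1111111111", "FSA3KD2222222222"])
```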

When the new Master is decided, it will:

  1. Build up the scan environment.
  2. Apply all the settings synchronized from the original Master, except the port3 IP and the internal communication port IP of the original Master.

After a failover occurs, the original Master might become a Primary Slave node.

It keeps its original port3 IP and internal cluster communication IP. All other interface ports are shut down when it becomes a slave node, and some functionality, such as Email Alerts, is turned off. To re-configure its settings, such as the interface IP, use the CLI commands or the Master's Central Management page.
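The promotion and demotion flow described in this section can be summarized in a conceptual sketch. The timeout value, class, and function names here are illustrative assumptions, not FortiSandbox internals: a Primary Slave considers the Master down once heartbeats stop arriving, promotes itself, and the original Master demotes when it returns.

```python
import time

HEARTBEAT_TIMEOUT = 15.0  # seconds; illustrative value, not a FortiSandbox setting

class Node:
    def __init__(self, name, role):
        self.name = name
        self.role = role
        self.last_heartbeat = time.monotonic()  # updated on each received heartbeat

def master_alive(master, now):
    # The Master is considered alive while heartbeats keep arriving in time.
    return (now - master.last_heartbeat) < HEARTBEAT_TIMEOUT

def maybe_failover(master, primary_slave, now):
    if not master_alive(master, now):
        primary_slave.role = "Master"   # Primary Slave promotes itself
        master.role = "Primary Slave"   # original Master demotes when back online
    return primary_slave.role
```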

Note: As the new Master takes over, the port that client devices communicate with switches to it. Because the new Master needs time to start up all its services, clients may experience a temporary service interruption.
