Fortinet Document Library


Administration Guide

Main HA-Cluster CLI Commands

On the primary (master) and secondary (primary slave) nodes, you must enable interface port1 so that they can communicate with each other.

hc-settings

Configure the unit as an HA-Cluster mode unit and set the cluster failover IP.

hc-status -l

List the status of HA-Cluster units.

hc-slave

-a adds a worker unit to the cluster.

-r removes a worker unit from the cluster.

-u updates a worker unit's information.

hc-master -s<10-100>

Turn on file scan on the primary (master) node with 10% to 100% processing capacity.

hc-master -r<slave serial number>

Remove the worker unit with the specified serial number from the primary node.

After removing a worker node, use hc-status -l on the primary node to verify that the worker unit has been removed.

Example configuration

This example shows the steps for setting up an HA cluster using three FortiSandbox 3000D units.

Step 1 - Prepare the hardware:

The following hardware will be required:

  • Nine cables for network connections.
  • Three 1/10 Gbps switches.
  • Three FortiSandbox 3000D units with proper power connections (units A, B, and C).
Note: Put the primary (master) and secondary (primary slave) nodes on different power circuits.

Step 2 - Prepare the subnets:

Prepare three subnets for your cluster (customize as needed):

  • Switch A: 192.168.1.0/24: For system management.
    • Gateway address: 192.168.1.1
    • External management IP address: 192.168.1.99
  • Switch B: 192.168.2.0/24: For internal cluster communications.
  • Switch C: 192.168.3.0/24: For the outgoing port (port 3) on each unit.
    • Gateway address: 192.168.3.1
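As a sanity check before cabling, the address plan above can be validated programmatically. The following Python sketch (illustrative only, not part of FortiSandbox) uses the standard ipaddress module to confirm that each unit's planned port IP falls inside the subnet of the switch it connects to:

```python
import ipaddress

# Subnets from this example's plan: one per switch.
subnets = {
    "port1": ipaddress.ip_network("192.168.1.0/24"),  # Switch A: system management
    "port2": ipaddress.ip_network("192.168.2.0/24"),  # Switch B: internal cluster communications
    "port3": ipaddress.ip_network("192.168.3.0/24"),  # Switch C: outgoing port
}

# Per-unit port IPs used in steps 4-6 of this example.
units = {
    "A (primary)":   {"port1": "192.168.1.99",  "port2": "192.168.2.99",  "port3": "192.168.3.99"},
    "B (secondary)": {"port1": "192.168.1.100", "port2": "192.168.2.100", "port3": "192.168.3.100"},
    "C (worker)":    {"port1": "192.168.1.101", "port2": "192.168.2.101", "port3": "192.168.3.101"},
}

def check_plan(units, subnets):
    """Return a list of (unit, port) pairs whose IP is outside its assigned subnet."""
    errors = []
    for unit, ports in units.items():
        for port, ip in ports.items():
            if ipaddress.ip_address(ip) not in subnets[port]:
                errors.append((unit, port))
    return errors
```

An empty result from check_plan means every planned address lands on the correct switch subnet.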
Step 3 - Set up the physical connections:
  1. Connect port 1 of each FortiSandbox device to Switch A.
  2. Connect port 2 of each FortiSandbox device to Switch B.
  3. Connect port 3 of each FortiSandbox device to Switch C.
Step 4 - Configure the primary (master):
  1. Power on the device (Unit A), and log into the CLI (see Connecting to the Command Line Interface).
  2. Configure the port IP addresses and gateway address with the following commands:

     set port1-ip 192.168.1.99/24
     set port2-ip 192.168.2.99/24
     set port3-ip 192.168.3.99/24

  3. Configure the device as the primary node and its cluster failover IP for port1 with the following commands:

     hc-settings -sc -tM -nMasterA -cTestHCsystem -ppassw0rd -iport2
     hc-settings -si -iport1 -a192.168.1.98/24

     See the FortiSandbox CLI Reference Guide, available in the Fortinet Document Library, for more information about the CLI commands.

  4. Review the cluster status with the following command:

     hc-status -l

     Other ports on the device can be used for file inputs.

Step 5 - Configure the secondary (primary slave):
  1. Power on the device (Unit B), and log into the CLI.
  2. Configure the port IP addresses and gateway address with the following commands:

     set port1-ip 192.168.1.100/24
     set port2-ip 192.168.2.100/24
     set port3-ip 192.168.3.100/24

  3. Configure the device as the secondary node with the following commands:

     hc-settings -sc -tP -nPslaveB -cTestHCsystem -ppassw0rd -iport2
     hc-settings -l
     hc-slave -a -s192.168.2.99 -ppassw0rd

  4. Review the cluster status with the following command:

     hc-status -l

Step 6 - Configure the worker (slave):
  1. Power on the device (Unit C), and log into the CLI.
  2. Configure the port IP addresses and gateway address with the following commands:

     set port1-ip 192.168.1.101/24
     set port2-ip 192.168.2.101/24
     set port3-ip 192.168.3.101/24

  3. Configure the device as a worker node with the following commands:

     hc-settings -sc -tR -cTestHCsystem -ppassw0rd -nSlaveC -iport2
     hc-settings -l
     hc-slave -a -s192.168.2.99 -ppassw0rd

  4. Review the cluster status with the following command:

     hc-status -l
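The per-unit commands in steps 4 through 6 follow a regular pattern: in this plan, the port number matches the third octet of its subnet, and only the host octet, role flag, and node name vary per unit. The following Python sketch (an illustration, not a FortiSandbox tool; node names, cluster name, and password are taken from this example) generates the core commands for each unit:

```python
# (host octet, hc-settings role flag, node name) for each unit in this example.
UNITS = [
    (99,  "-tM", "MasterA"),   # Unit A: primary (master)
    (100, "-tP", "PslaveB"),   # Unit B: secondary (primary slave)
    (101, "-tR", "SlaveC"),    # Unit C: worker (slave)
]

def commands_for(octet, role, name):
    """Return the core CLI commands for one unit in this example's plan."""
    # Port n sits on subnet 192.168.n.0/24, so the port number doubles as the third octet.
    lines = [f"set port{p}-ip 192.168.{p}.{octet}/24" for p in (1, 2, 3)]
    lines.append(f"hc-settings -sc {role} -n{name} -cTestHCsystem -ppassw0rd -iport2")
    if role != "-tM":
        # Non-primary units join the cluster via the primary's port2 address.
        lines.append("hc-slave -a -s192.168.2.99 -ppassw0rd")
    return lines

for unit in UNITS:
    print("\n".join(commands_for(*unit)))
```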

Step 7 - Configure other settings:

VM image settings and network settings, such as the default gateway, static routes, and DNS servers, must be configured on each unit individually. Scan-related settings, such as the scan profile, should be set on the primary unit only; they are synchronized to the worker nodes. For more details, refer to Role of the primary (master) and worker (slave) node.

Step 8 - Finish:

The HA-Cluster can now be treated like a single, extremely powerful standalone FortiSandbox unit.

In this example, files are submitted to, and reports and logs are retrieved from, the external management IP address 192.168.1.99.

 

Note: When a FortiSandbox 3500D is configured as a cluster system, blade 1 is configured as the primary node, blade 2 as the secondary node, and the other blades as worker nodes.

Note: If you use the GUI to change a role from worker to standalone, you must first remove the worker from the primary with the CLI command hc-master -r<slave serial number>, then use hc-status -l to verify that the worker unit has been removed.

What happens during a failover

The primary (master) and secondary (primary slave) nodes send heartbeats to each other to detect whether their peer is alive. If the primary node becomes inaccessible, such as during a reboot, a failover occurs. You can also configure a ping server to check the unit's network condition regularly; if the check fails, the unit downgrades itself to the secondary (primary slave) role, triggering a failover. The failover logic handles two different scenarios:
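The heartbeat-based failure detection described above can be sketched as follows. This is an illustrative model only, not FortiSandbox internals; the actual heartbeat interval and timeout are not documented here, so the 15-second timeout is an assumption:

```python
import time

HEARTBEAT_TIMEOUT = 15.0  # seconds; assumed value for illustration


class PeerMonitor:
    """Tracks the peer's heartbeats; the peer is considered failed once no
    heartbeat has arrived within the timeout window."""

    def __init__(self, timeout=HEARTBEAT_TIMEOUT, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock            # injectable clock, for testing
        self.last_seen = clock()      # treat startup as a fresh heartbeat

    def heartbeat(self):
        """Record a heartbeat received from the peer."""
        self.last_seen = self.clock()

    def peer_alive(self):
        """True while the peer's last heartbeat is within the timeout."""
        return (self.clock() - self.last_seen) <= self.timeout
```

In this model, the secondary would begin a takeover when peer_alive() first returns False, which corresponds to the "primary node is not accessible" condition above.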

Objective node available

The objective node is a non-primary node (either the secondary or a worker) that can confirm the new primary. For example, if a cluster consists of one primary node, one secondary node, and one worker node, the worker node is the objective node.

After a secondary node takes over the primary role and the objective node accepts the new role, the original primary node accepts the decision when it comes back online and becomes a secondary node.

No Objective node available

This occurs when the cluster's internal communication is down.

For example, if the cluster contains one primary node and one secondary node and the primary node reboots, or if internal cluster communication is down because of a failed switch, all secondary nodes become primary nodes (leaving more than one primary unit).

When communication is restored, the unit with the largest serial number keeps the primary role and the others revert to secondary.
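The serial-number tie-break can be sketched as follows (the serial numbers shown are hypothetical, for illustration only):

```python
def resolve_primary(primaries):
    """Given the serial numbers of all units claiming the primary role,
    return (winner, units_that_revert_to_secondary).

    The unit with the largest serial number keeps the primary role."""
    winner = max(primaries)
    return winner, [s for s in primaries if s != winner]


# Two units ended up primary after a communication outage (hypothetical serials):
winner, demoted = resolve_primary(["FSA3KD0000000101", "FSA3KD0000000205"])
```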

When the new primary is decided, it will:

  1. Build up the scan environment.
  2. Apply all the settings synchronized from the original primary, except the port3 IP and the internal communication port IP of the original primary.

After a failover occurs, the original primary might become a secondary node.

It keeps its original port3 IP and internal cluster communication IP; all other interface ports are shut down while it operates in this demoted role. Some functionality, such as email alerts, is turned off. To reconfigure its settings, such as interface IPs, use the CLI commands or the primary's Central Management page.

Note: When the new primary takes over, the port that client devices communicate with switches to it. Because the new primary needs time to start all its services, clients may experience a temporary service interruption.
