7.0.0

HA for FortiProxy-VM on Azure

When designing a reliable architecture in Azure, you must take resiliency and high availability (HA) into account. See Microsoft's Overview of the reliability pillar. Running FortiProxy inside Azure offers different reliability levels depending on the building blocks used.

Microsoft offers different SLAs on Azure based on the deployment that you use:

  • Availability Zone (AZ) (different datacenter in the same region): 99.99%

  • Availability Set (different rack and power): 99.95%

  • Single VM with premium SSD: 99.9%

Building blocks

  • Active-passive with external and internal Azure load balancer (LB): this design deploys two FortiProxy-VMs in active-passive mode connected using the unicast FortiProxy clustering protocol (FGCP). In this setup, the Azure LB handles traffic failover using a health probe towards the FortiProxy-VMs. Failover time is determined by the Azure LB health probe: with a 5-second probe interval and 2 consecutive failed probes required to mark a VM down, failover takes a maximum of about 15 seconds. You configure the public IP addresses on the Azure LB; they provide ingress and egress flows with inspection by the FortiProxy. Microsoft provides guidance on this architecture.

  • Active-passive HA with SDN connector failover: this design deploys two FortiProxy-VMs in active-passive mode connected using the unicast FGCP HA protocol, which synchronizes the configuration. On failover, the passive FortiProxy takes control and issues API calls to Azure to shift the public IP address and update the internal user-defined routes to point to itself. Shifting the public IP address and the gateway IP addresses of the routes takes time for Azure to complete. Microsoft provides a general architecture; in FortiProxy's case, the API call logic is built in, instead of requiring additional outside logic such as Azure Functions or ZooKeeper nodes.
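The health-probe timing described in the first building block maps directly onto the Azure LB probe settings. A minimal Terraform sketch (resource names, the backend load balancer reference, and the probe port are illustrative assumptions, not from this document; attribute names follow the AzureRM provider v3 and should be verified against your provider version):

```
# Probe matching the documented timing: probes every 5 seconds and
# marks a FortiProxy-VM down after 2 consecutive failures (~15 s worst case).
resource "azurerm_lb_probe" "fortiproxy_probe" {
  name                = "fortiproxy-health-probe"   # illustrative name
  loadbalancer_id     = azurerm_lb.external.id      # assumes an azurerm_lb named "external"
  protocol            = "Tcp"
  port                = 8008                        # probe port is deployment-specific; verify yours
  interval_in_seconds = 5
  number_of_probes    = 2
}
```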

Availability zones and availability sets are available as options in the Azure Marketplace. You can select them during deployment.
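Both building blocks rely on unicast FGCP heartbeats between the two FortiProxy-VMs, since Azure does not support the multicast heartbeat used on physical appliances. The following CLI sketch is modeled on the FortiGate FGCP syntax; all names, the interface, and the peer IP are placeholders, and FortiProxy's exact options may differ, so consult the FortiProxy CLI reference for your version:

```
config system ha
    set group-name "AzureHA"            # placeholder cluster name
    set mode a-p                        # active-passive
    set hbdev "port3" 100               # dedicated heartbeat interface (placeholder)
    set unicast-hb enable               # unicast heartbeat for cloud environments
    set unicast-hb-peerip 172.16.3.5    # peer's heartbeat IP (placeholder)
end
```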

Deploying FortiProxy HA using Terraform

Use the Terraform code on the GitHub page to deploy FortiProxy active-passive HA across two availability zones. See the README.md file in the repository for detailed deployment instructions.
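The deployment follows the standard Terraform workflow; this is a sketch only, and the repository URL, subdirectory, and variable handling are assumptions to be confirmed against the project's README.md:

```
# Clone the project (repository path assumed; see the project page)
git clone https://github.com/fortinet/fortiproxy-terraform.git
cd fortiproxy-terraform        # change into the active-passive HA example directory per the README

terraform init                 # download the AzureRM provider
terraform plan                 # preview the two-zone active-passive deployment
terraform apply                # deploy; review and confirm the plan
```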

Visit the FortiProxy Terraform Azure GitHub project page for a complete list of FortiProxy Azure solutions. For issues, see this GitHub project's Issues tab. For other questions related to the GitHub project, contact github@fortinet.com.
