Administration Guide

What to do when coredump files are truncated or damaged

Sometimes you may find that the size of a coredump file is 0, or that the stack information in the coredump file is obviously truncated. This usually means the coredump file is truncated or damaged. To provide enough information to locate the root cause of a system or daemon crash, you need to resolve the problem and generate a complete coredump file.
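
To confirm the symptom, you can copy the core file to a Linux analysis host and load it in gdb together with the matching daemon binary; gdb typically warns that the core file is truncated when the file is incomplete. The file names below are placeholders for illustration only:

$ gdb &lt;daemon binary&gt; &lt;core file&gt;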

1. Check whether there is enough disk space (especially in /var/log) to generate and store a coredump file:

/# df -h

Filesystem Size Used Available Use% Mounted on

/dev/root 472.5M 335.7M 136.8M 71% /

none 1.1G 116.0K 1.1G 0% /tmp

none 3.8G 2.5M 3.8G 0% /dev/shm

/dev/sdb1 362.4M 213.7M 129.1M 62% /data

/dev/sdb3 90.6M 56.0K 85.6M 0% /home

/dev/sda1 439.1G 7.5G 409.3G 2% /var/log

2. Check whether the generated coredump file is very large. In older versions there is a 50G limit for proxyd core files.
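
For example, you can list the core files on disk and compare their sizes with that limit; the directory and file name below are placeholders, since the exact location can vary by release:

/# ls -lh /var/log/&lt;core file&gt;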

3. Check if there is any file system issue:

FortiWeb# execute fscklogdisk

This operation will fsck logdisk !

Do you want to continue? (y/n)y

fsck logdisk…

FortiWeb#

4. Set enable-core-file to generate a complete coredump:

By default, if the coredump file is very large (usually on a FortiWeb box with a large memory size), generating the core file and writing it to disk can take a long time (from several minutes to more than 10 minutes). The negative impact is that a reboot is triggered if the dump cannot be completed within 120 seconds, and the daemon does not respond to new requests during this period.
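
To put the 120-second window in perspective: assuming, purely as an example, a 20 GB core image and a sustained disk write speed of about 100 MB/s, writing the file alone takes roughly 200 seconds, well past the point at which the hung-task timeout triggers a reboot.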

However, the coredump mechanism is essential for further diagnosing a critical issue, because it records important information from memory and CPU registers at the moment the issue happens.

In FortiWeb 6.3.15 and later releases, a new option enable-best-effort is added for set enable-core-file. When this option is set, the “hung task timeout” does not take effect, so you can always expect the system to generate a complete coredump file (see the example after the option descriptions below). This option is useful for analyzing a difficult issue, although it may cause the service to stop responding for a long time. Also, in 6.3.15 and later releases, the 50G core size limit has been removed.

FortiWeb# config server-policy setting

FortiWeb(setting) # set enable-core-file #only works for proxyd

disable Disable coredump for proxyd.

enable Enable coredump action for proxyd, stop if coredump cannot finish in hung task timeout seconds.

enable-best-effort Enable coredump action for proxyd, stop until the entire core file is generated.
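
For example, to always wait for a complete proxyd core file, the option can be applied as follows (the closing end command is standard Fortinet CLI for committing the change and is shown here as an assumption about the usual workflow):

FortiWeb# config server-policy setting

FortiWeb(setting) # set enable-core-file enable-best-effort

FortiWeb(setting) # end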

5. Other configurable behavior:

You can set the maximum number of daemon coredump files that can be stored on disk. If more core files are generated, the oldest one is removed (see the example at the end of this step).

FortiWeb(setting) # set core-file-count

3 3

5 5

Note: This command only works for daemon coredump files. For kernel coredump and core files, the limits are fixed: only 1 coredump file and up to 5 core files.
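
For example, to keep up to 5 proxyd coredump files on disk (the end command again commits the change):

FortiWeb# config server-policy setting

FortiWeb(setting) # set core-file-count 5

FortiWeb(setting) # end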
