LDM Troubleshooting

Contents

  * Can't start the LDM
  * The LDM server terminates after executing its configuration-file
  * The LDM isn't logging
  * The LDM isn't receiving data
  * The LDM suddenly disappears on an RHEL/CentOS system
  * "ldmadmin scour" takes too long

Can't start the LDM

If the LDM won't start, then it should log the reason why. Check the LDM log file.
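For example, assuming the default log-file location used elsewhere in this document:

      tail -n 100 ~/var/logs/ldmd.log
    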

If that is inconclusive, then start the LDM in interactive mode by explicitly telling it to log to the standard error stream via the -l option. Execute the command

      ldmd -l- [-v]
    

This will prevent the LDM from daemonizing itself and it will log directly to the terminal. The running LDM can be stopped by typing ^C (control-C).

The LDM server terminates after executing its configuration-file

This could be caused by

The LDM isn't logging

  1. Verify that LDM logging works by executing the following commands as the LDM user. If the second command prints "This is a test", then LDM logging works. (The checks in this list are gathered into a single sketch script below.)

              ulogger This is a test 2>/dev/null
              tail -1 ~/var/logs/ldmd.log
            
  2. Verify that the LDM log file is owned and writable by the LDM user:

              ls -l ~/var/logs/ldmd.log
            
     If it's not, then make it so:

              sudo chown ldm ~/var/logs/ldmd.log
              chmod u+w ~/var/logs/ldmd.log
            
  3. Verify that the disk partition that contains the LDM log file isn't full:

              df ~/var/logs/ldmd.log
            
     If it is full, then delete unneeded files to free up space.

  4. If the script "~/bin/refresh_logging" doesn't exist or simply executes the utility hupsyslog(1), then
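The individual checks above can be run in one pass. The following is only a sketch, assuming the default ~/var/logs/ldmd.log location used above:

      #!/bin/sh
      # Sketch: run the LDM logging checks from the list above in one pass.
      log="$HOME/var/logs/ldmd.log"
      ulogger This is a test 2>/dev/null
      sleep 1         # give the logging system a moment to flush
      tail -1 "$log"  # should print "This is a test"
      ls -l "$log"    # the LDM user should own the file and be able to write it
      df "$log"       # the containing partition shouldn't be full
    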

The LDM isn't receiving data

  1. Are you sure? Verify that the LDM hasn't received the data-products in question by executing the following command as the LDM user on the same system as the LDM (a concrete invocation is sketched after this list):

              notifyme -v [-f feedtype] [-p pattern] -o 9999999
            
     where feedtype and pattern are a feed specification and extended regular expression, respectively, that match the missing data-products.

     If this command indicates that the LDM is unavailable, then start it; otherwise, continue.

  2. Verify that your system clock is correct. If your LDM asks for data-products from the future, then it won't receive anything until that time.

  3. Verify that each upstream LDM that should be sending the data-products is, indeed, receiving those data-products by executing the following command as the LDM user on the downstream system:

              notifyme -v [-f feedtype] [-p pattern] -o 9999999 -h host
            
     where feedtype and pattern are as before and host is the hostname of the upstream LDM system (you can get this from the relevant REQUEST entries in the LDM configuration-file).

     If the notifyme(1) command indicates that
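As a concrete illustration (the upstream host name here is hypothetical, and IDS|DDPLUS is just an example feed), the following asks the upstream LDM whether it has received products whose identifiers begin with "SAUS":

      notifyme -v -f "IDS|DDPLUS" -p "^SAUS" -o 9999999 -h upstream.example.edu
    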

The LDM suddenly disappears on an RHEL/CentOS system

Several LDM users have reported the sudden disappearance of a running LDM on their RHEL/CentOS 6 or 7 systems. There's no warning: nothing in the LDM log file. It just vanishes — as if the superuser had sent it a SIGKILL with extreme prejudice.

It turns out, that's exactly what happened. Only, it wasn't the superuser per se, but the out-of-memory manager acting on behalf of the superuser. The smoking gun is an entry in the system log file from the out-of-memory manager about terminating the LDM process around the time that it disappears.

The current workaround is to tell the out-of-memory (OOM) manager that the LDM processes are important by assigning the LDM process-group a particular "score". LDM user Daryl Herzmann explains:

So there is a means to set a "score" on each Linux process to inform the oom killer about how it should prioritize the killing. For RHEL/centos 6+7, this can be done by `echo -1000 > /proc/$PID/oom_score_adj`. For some other Linux flavours, the score should be -17 and the proc file is oom_adj. Google is your friend!

A simple crontab(1) entry like the following (in the system-crontab format of /etc/crontab or /etc/cron.d, which includes a user field) will set this value for ldmd automatically each hour.

      1 * * * * root pgrep -f "ldmd" | while read PID; do echo -1000 > /proc/$PID/oom_score_adj; done
    

Of course, this solution leaves a small window of time between an LDM restart and the top of the next hour during which the score would not be set. There are likely more robust solutions here I am blissfully ignorant of.
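One more robust alternative, sketched below on the assumption that the LDM is run as a systemd service (the LDM distribution doesn't ship a unit file, so the file and paths here are hypothetical), is to have systemd set the score at startup via its OOMScoreAdjust= directive:

      # /etc/systemd/system/ldm.service -- hypothetical unit; adjust paths to your installation
      [Unit]
      Description=Unidata LDM server
      After=network.target

      [Service]
      Type=forking
      User=ldm
      ExecStart=/home/ldm/bin/ldmadmin start
      ExecStop=/home/ldm/bin/ldmadmin stop
      OOMScoreAdjust=-1000

      [Install]
      WantedBy=multi-user.target
    

Because child processes inherit the score, ldmd and its children are covered from the moment the service starts, closing the restart window mentioned above.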

The OOM killer can be effectively disabled by disallowing memory overcommitment with the following commands. This is not recommended for production environments: if an out-of-memory condition does present itself, the result depends on the resources available to the kernel at the time and could be anything from a kernel panic to a hang.

      sysctl vm.overcommit_memory=2
      echo "vm.overcommit_memory=2" >> /etc/sysctl.conf
    

"ldmadmin scour" takes too long

Daryl Herzmann encountered this problem on his system, which uses the XFS filesystem. As he explained it:

Please, I don't wish to start a war regarding which filesystem is the best here... If you have used XFS (now the default filesystem in RHEL7) in the past, you may have suffered from very poor performance with IO related to small files. For me and LDM, this would rear its very ugly head when I wished to `ldmadmin scour` the /data/ folder. It would take 4+ hours to scour out a day's worth of NEXRAD III files. If you looked at output like sysstat, you would see the process at 100% iowait.

I created a thread about this on the redhat community forums[1] and was kindly responded to by one of the XFS developers, Eric Sandeen. He wrote the following:

This is because your xfs filesystem does not store the filetype in the directory, and so every inode in the tree must be stat'd (read) to determine the filetype when you use the "-type f" qualifier. This is much slower than just reading directory information. In RHEL7.3, mkfs.xfs will enable filetypes by default. You can do so today with "mkfs.xfs -n ftype=1".

So what he is saying is that you have to reformat your filesystem to take advantage of this setting.
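Whether an existing XFS filesystem already has this feature can be checked without reformatting; on the /data filesystem from the example above, the xfs_info output reports ftype=1 when filetypes are enabled:

      xfs_info /data | grep ftype
    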

So I did some testing, and now `ldmadmin scour` takes only 4 minutes to traverse the NEXRAD III directory tree!