General error: 2006 MySQL server has gone away

“MySQL server has gone away” is a cryptic error that can be hard to troubleshoot (look at all the various responses on Stack Overflow!). Many problems can cause this error; I would like to document one specific case. In this example, the client is a PHP app using the Phalcon framework:

[Mon, 09 Apr 18 03:34:08 -0400][ERROR]  SQLSTATE[HY000]: General error: 2006 MySQL server has gone away
exception 'PDOException' with message 'SQLSTATE[HY000]: General error: 2006 MySQL server has gone away' in /path/to/ModelBase.php:
Stack trace:
#0 [internal function]: PDOStatement->execute()
#17 {main}

Continue reading General error: 2006 MySQL server has gone away


Viewing logs for a cluster of instances on Google Stackdriver Logging

Stackdriver Logging is a great feature of Google Compute Engine (GCE). You pretty much need a centralized logging solution if you are taking maximum advantage of the features offered by GCE. For example, most production applications will run on a cluster of web servers. If you set up the cluster as a managed instance group on GCE, Google can auto-scale the size of the cluster based on traffic. The challenge is that it’s much harder to troubleshoot errors across a cluster. The requests that caused the error could be spread across any number of servers, with randomly assigned names. If load drops and the server pool contracts, you will entirely lose any log data on a server that’s auto-deleted. Stackdriver Logging is the answer to this problem. Configure all servers to send all logs to Stackdriver, and you can view all of your web server logs in one interface, with the entries in chronological order.
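For example (the project name and log name below are hypothetical placeholders, not from my actual setup), an advanced filter that pulls only error-level syslog entries from every GCE instance in a project might look something like:

```
resource.type="gce_instance"
logName="projects/my-project/logs/syslog"
severity>=ERROR
```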

View StackDriver Advanced Filter as a Gist on GitHub

Continue reading Viewing logs for a cluster of instances on Google Stackdriver Logging

Getting DSSP to run with GROMACS 4.5.5 on Red Hat Linux


A new version of GROMACS (4.6 series) has been released since this post was written. Please try installing the latest version of GROMACS before attempting the steps in this post.


There is a bug in the do_dssp command in GROMACS 4.5.5 that prevents the analysis of secondary structure using DSSP. Attempting to run do_dssp will result in a segmentation fault. The bug has been patched since 4.5.5, but the patched version has not been released (see this post on the GROMACS mailing list). To get the patched version, follow the instructions from this post.

git clone git://
git checkout --track -b release-4-5-patches origin/release-4-5-patches

Continue reading Getting DSSP to run with GROMACS 4.5.5 on Red Hat Linux

Mini Maker Faire Orlando Report: Electronics

In my previous post about the Mini Maker Faire Orlando, I described some of the cool hardware that was on display.  In this post, I’m going to describe some of the electronics available from local vendors.


Electrimod MiniStack

Local vendor Electrimod (Clermont, FL) was on hand to showcase their pluggable modules for PIC microcontroller development.  Basically, Electrimod is developing products that are equivalent to Arduino shields.  You start with the PIC module, and then plug in whatever other modules you need to build your prototype in the form of a stack.

These brand-new products aren’t available yet, but they should be shipping by June or July 2012.

Continue reading Mini Maker Faire Orlando Report: Electronics

TeXLive and Asymptote on CentOS 5

TeX Live

For reasons unknown, a TeX Live package is not available for Red Hat Enterprise Linux/CentOS 5 from the major repositories (EPEL or DAG).  I consider this to be a glaring omission, since TeX Live is a great improvement upon teTeX.  Since I don’t have time right now to package it myself, I installed TeX Live manually in my user directory, and it’s working fine.  I used the network install process, which starts with downloading a command-line installer and then following the detailed installation instructions with the base path set to $HOME/texlive/2011.


The binary version of Asymptote installed with TeX Live didn’t run on my system, so I installed the Asymptote vector graphics language manually.  As root, I used yum to install the gc and gc-devel packages to provide the Boehm garbage collector that Asymptote uses.  I also had to install the package texinfo-tex from the CentOS base repo to provide the texindex utility that Asymptote uses to build its documentation.  Once the dependencies were in place, I downloaded the Asymptote source archive and unpacked it.  I used the following commands to build Asymptote in my user directory:

./configure --prefix=$HOME/asymptote
make install

Finally, I set up the path environment variable in my .bashrc so that the copy of Asymptote I just built will run instead of the binary that comes with TeX Live, and my locally installed TeX Live will run instead of the system installation of teTeX:

export PATH=~/asymptote/bin:~/texlive/2011/bin/x86_64-linux:$PATH

If you are doing this from scratch, you should check whether there is a way to prevent the TeX Live installer from installing Asymptote.

Technically, I could remove the system installation of teTeX at this point, but the LyX package depends on teTeX and I’ll have to see if there’s a way to tell yum to keep LyX and get rid of teTeX.

How to install Octave video tools on CentOS 5

This post describes how to get GNU Octave up and running on a CentOS 5 Linux system for use in reading, processing, and writing video files.  You will need to use the EPEL and DAG/rpmforge repositories, but I won’t explain how to do that here.

NOTE: I had to go through some trial and error to get this working.  I tried to summarize only the necessary steps, but I can’t guarantee I got it completely right until I try a fresh install on another system which does not have any of the dependencies already installed.  Please leave a comment if you encounter any problems.

As a superuser, use yum to install octave.  You need octave-devel and ncurses-devel in order to install any Octave packages.  You will need to have the EPEL repository enabled:

yum install octave octave-devel ncurses-devel

As a superuser, use yum to install ffmpeg and ffmpeg-devel from the rpmforge repo.  The video package for Octave uses the ffmpeg libraries (libavformat and libavcodec) to perform the actual video processing.  You will need the ffmpeg-devel package, because Octave builds the video package from source.

  1. Since ffmpeg and ffmpeg-devel are only available from DAG/rpmforge, I suggest disabling EPEL before installing these packages.  This will ensure that all the dependencies are installed from rpmforge, instead of mixing packages from EPEL and rpmforge.  Mixing dependencies from different repositories might lead to incompatibilities and bugs that can be hard to trace.
  2. yum install ffmpeg ffmpeg-devel

Install the video package for Octave.  I prefer to do this as an ordinary user (not root) so that the packages will be installed in my home directory (I have a “rule” that only the package manager is allowed to put files into system locations).  You will need to set the CXXFLAGS environment variable to work around a bug in the C++ headers for the ffmpeg libraries.  If you are going to build other packages that link to ffmpeg’s libraries, you should probably set CXXFLAGS in your .bashrc.  I think this bug was fixed in later versions of ffmpeg, but the fix hasn’t made it into EPEL or rpmforge yet.

  1. Download video package
  2. Unpack the archive: tar xfz video-1.0.2.tar.gz
  3. Change to the unpacked directory: cd video-1.0.2
  4. Temporarily set C++ flags and configure: CXXFLAGS=-D__STDC_CONSTANT_MACROS ./configure
  5. Temporarily set C++ flags and build: CXXFLAGS=-D__STDC_CONSTANT_MACROS make
  6. Install: make install

If you don’t set the C++ flags this way, you will get an error like this:

/usr/include/libavutil/common.h: In function ‘int32_t av_clipl_int32(int64_t)’:
/usr/include/libavutil/common.h:154: error: ‘UINT64_C’ was not declared in this scope

Moving example code to GitHub

As this site has grown, the example code has gradually become unmanageable. I’ve posted snippets and fragments here and there over the years, and the original code is scattered in various locations across several computers. Further, as people have pointed out bugs or ways to improve the examples, I’ve had a hard time making changes. This problem is crying out for version control with a central repository. Since I’ve been using Git for the past year or so, I decided to try GitHub. So, I am gradually moving all the examples from the site to:

When I put code in a post, I will provide a link to GitHub where you can browse or download the code.  You won’t need to use Git, or even learn anything about it.

Reading an array from a text file with Fortran 90/95

If you’re used to coding in more modern languages, Fortran I/O can seem a little bizarre.  Strings in Fortran are much more difficult to work with, since they are fixed-length rather than null-terminated. The following example illustrates a simple way to read an array of numbers from a text file when the array length is unknown at compile time.

program io_test
      real, dimension(:), allocatable :: x
      integer :: n

      open (unit=99, file='array.txt', status='old', action='read')
      read(99, *) n
      allocate(x(n))
      read(99, *) x

      write(*, *) x

      deallocate(x)
      close(99)
end program io_test

Here is the text file that the array is read from. The integer on the first line is the number of elements to read from the next line.

10
1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 10.0

Managing a pool of MPI processes with Python and Pypar

MPI is a standard for communication between multiple processes in parallel computing. These processes can be running on different cores, CPUs, or entirely different computers in a grid. Because MPI is a standard, there are many implementations available (many of them open source). The features of MPI can be accessed from Python with the packages pypar and mpi4py.  Here I present a Python script that implements a “process pool” design pattern using pypar.  I have observed this pattern often enough in my own work that I wrote this framework to avoid reinventing the wheel every time I come across it.

This pattern is useful for any embarrassingly parallel problem.  This describes a computing task that can be easily accelerated by running multiple parallel processes that do not need to interact with one another.  For example, I have large scientific data sets from several runs of an experiment that need to be analyzed.  Since the data from each run can be analyzed independently from the other runs, I can analyze all the data sets at once on a parallel machine.  The code below implements a “master-worker” paradigm that requires at least three processes to accelerate the calculation.  The first process becomes the master, which does no calculation but hands out tasks to the workers.  The rest of the processes are workers, which receive a chunk of work, finish it, return the result to the master process, and then wait for more work.

#!/usr/bin/env python
from numpy import *
import pypar
import time

# Constants
MASTER_PROCESS = 0      # rank of the master process
WORK_TAG = 1            # tag marking a message that contains work
DIE_TAG = 2             # tag telling a worker to shut down

MPI_myID = pypar.rank()

if MPI_myID == MASTER_PROCESS:
    ### Master Process ###
    num_processors = pypar.size()
    print "Master process found " + str(num_processors - 1) + " worker processors."

    # Create a list of dummy arrays to pass to the worker processes
    work_size = 10
    work_array = range(0, work_size)
    for i in range(len(work_array)):
        work_array[i] = arange(0.0, 10.0)

    work_index = 0
    num_completed = 0

    # Start all worker processes with an initial chunk of work
    for i in range(1, min(num_processors, work_size)):
        pypar.send(work_index, i, tag=WORK_TAG)
        pypar.send(work_array[work_index], i)
        print "Sent work index " + str(work_index) + " to processor " + str(i)
        work_index += 1

    # Receive a result from any worker, then send that worker new data
    while work_index < work_size:
        results, status = pypar.receive(source=pypar.any_source, tag=pypar.any_tag,
                                        return_status=True)
        num_completed += 1
        proc = status.source
        pypar.send(work_index, proc, tag=WORK_TAG)
        pypar.send(work_array[work_index], proc)
        print "Sent work index " + str(work_index) + " to processor " + str(proc)
        work_index += 1

    # Get results from the remaining worker processes
    while num_completed < work_size:
        results, status = pypar.receive(source=pypar.any_source, tag=pypar.any_tag,
                                        return_status=True)
        num_completed += 1

    # Shut down worker processes
    for proc in range(1, num_processors):
        print "Stopping worker process " + str(proc)
        pypar.send(-1, proc, tag=DIE_TAG)
else:
    ### Worker Processes ###
    continue_working = True
    while continue_working:
        work_index, status = pypar.receive(source=MASTER_PROCESS, tag=pypar.any_tag,
                                           return_status=True)

        if status.tag == DIE_TAG:
            continue_working = False
        else:
            work_array, status = pypar.receive(source=MASTER_PROCESS, tag=pypar.any_tag,
                                               return_status=True)

            # Code below simulates a task running
            time.sleep(random.random_integers(low=0, high=5))
            result_array = work_array.copy()

            # Tag the result with the work index so the master can match it up
            pypar.send(result_array, destination=MASTER_PROCESS, tag=work_index)

pypar.finalize()


Redirecting text output from Python functions

Two posts ago, I described how I wrote a function in Python that reads in a binary file from Labview. In my last post, I described using wxPython to write a GUI to process the data from those binary files. Naturally, I called the binary-file-reader function from the GUI. The problem is that the file reader prints a lot of information to the terminal, using Python print statements. None of this goes to the GUI, requiring the user to run the GUI from a terminal and keep an eye on the text output, which is inconvenient. However, I don’t want to modify the file reader to include GUI-specific code, because that would be less modular and less re-usable. Instead, I learned that Python has a very easy facility to redirect stdout, the default destination of the print statement. I modified the file reader as follows:

def readWGMfile(binaryFile, readSecondDerivatives, output=None):
    if output is not None:
        # allow the caller to hand in an alternative to standard out--for example, if
        # the caller is a GUI, redirect "print" statements to a GUI control
        print "Redirecting output..."
        import sys
        sys.stdout = output

    # rest of the function... 

I added an optional argument to the file reader, allowing the caller to specify an object to replace stdout. The sys module is then used to redirect stdout to the object specified by the caller. There’s no change to the default usage of the function, but it’s a lot more flexible. Here is the object that I defined to capture the text:

class Monitor:
    """The Monitor class defines an object which is passed to the function readWGMfile(). Standard
    output is redirected to this object."""

    def __init__(self, notifyWindow):
        import sys
        self.out = sys.stdout
        self._notifyWindow = notifyWindow        # keep track of which window needs to receive events

    def write(self, s):
        """Required method that replaces stdout."""
        if s.rstrip() != "":        # don't need to be passing blank messages
            wx.PostEvent(self._notifyWindow, TextMessageEvent(s.rstrip()))

The only requirement of the Monitor class is that it implement a write(self, string) method that accepts a string that would otherwise have gone to standard out. In my case, I post a special event which is sent to the specified window. Here is the definition of that event:

class TextMessageEvent(wx.PyEvent):
    """Event to notify GUI that somebody wants to pass along a message in the form of a string."""
    def __init__(self, message):
        wx.PyEvent.__init__(self)   # the base class must be initialized
        self.message = message
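Stripped of the wxPython pieces, the redirect mechanism itself can be demonstrated in a few lines of plain Python. This sketch (the ListCapture class is invented for illustration) collects print output in a list instead of posting events:

```python
import sys

class ListCapture:
    """Minimal stand-in for Monitor: collects non-blank writes in a list."""
    def __init__(self):
        self.lines = []

    def write(self, s):
        if s.rstrip() != "":        # skip the bare newlines print() emits
            self.lines.append(s.rstrip())

capture = ListCapture()
old_stdout = sys.stdout
sys.stdout = capture            # everything printed now goes to capture.write()
print("hello from the file reader")
sys.stdout = old_stdout         # always restore the real stdout afterwards
print(capture.lines)            # -> ['hello from the file reader']
```

Saving the original sys.stdout and restoring it afterwards is the important habit here; otherwise any later print output silently disappears into the capture object.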

Once again, I’m impressed with the way that Python makes hard things easy and very hard things possible. In my next post, I’ll give a quick example of how easy it is to “thread” things in Python.