The case of the open file handle

I did a rewrite of the Processing.js test suite code to get around a bug Al hit on Linux.  In doing so, I hit another nasty bug that Ted eventually solved.  I needed to get an escaped version of JavaScript or Processing code loaded into the js shell via the -f flag.  Previously I had been passing this as a string via -e, but that started erroring out once the files grew beyond the maximum argument length allowed.  My solution was to create a temporary file in Python, write the encoded script into it, pass that file to the shell for processing, then delete it.
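
The shell invocation itself looked roughly like this (a simplified sketch, not the real harness; 'js' and the path are placeholders):

import subprocess

# run the js shell against the temp file with -f instead of passing the source via -e
tmp_path = '/tmp/escaped-test.js'  # placeholder; the real path comes from tempfile below
proc = subprocess.Popen(['js', '-f', tmp_path], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()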

It worked great on OS X, but on Linux, about halfway through the test run, we were getting an exception: "Too many open files."  We have 855 test files at present, so it seemed unlikely that we had hit a real OS maximum.  Raising the number of file descriptors allowed per process (ulimit -n 8192) let the run finish, which pointed to a descriptor leak rather than a real limit.  Looking at my code, I first thought that I must be accumulating pipes (I use stdout and stderr to communicate with the js shell).  Nothing I tried was fixing it, and it seemed crazy to me that subprocess.Popen() wouldn't close them properly.
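
On Linux, one quick way to confirm that descriptors really are piling up is to count the entries in /proc/self/fd as the run progresses (this diagnostic wasn't part of the harness; it's just an illustration):

import os

# each entry in /proc/self/fd is one open descriptor held by this process (Linux only)
print(len(os.listdir('/proc/self/fd')))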

Eventually Ted figured out that my issue was that I wasn't closing my temp files properly.  Here's the relevant bit of what I was doing:

tmp = tempfile.mkstemp()  
  
t = open(tmp[1], 'w')  
t.write(es)  
t.close()

"But I am closing it," I protested. "No, sir. You're not!" said Ted, and he was right. Looking again at the docs for mkstemp(), and reading more carefully, I noticed this:

mkstemp() returns a tuple containing an OS-level handle to an open file (as would be returned by os.open()) and the absolute pathname of that file, in that order.

Did you catch that?  "...an open file..."  I missed it totally.  I have said it before, but it is worth repeating: Python has really amazing docs.  I should read them more often.  Here's the fix:

tmp = tempfile.mkstemp()  
os.close(tmp[0])  # close the OS-level descriptor that mkstemp() left open
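
Put together, the corrected temp-file dance looks something like this (again just a sketch; es is the escaped source from the earlier snippet):

import os, tempfile

tmp = tempfile.mkstemp()
os.close(tmp[0])       # mkstemp() hands back an already-open OS-level descriptor
t = open(tmp[1], 'w')  # reopen the file by name, as before
t.write(es)            # es is the escaped source, as above
t.close()
# ... run the js shell against tmp[1], then clean up ...
os.remove(tmp[1])

An equally valid fix would be to wrap the descriptor with os.fdopen(tmp[0], 'w') and write through that, so there is nothing extra to close.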