Markus Müller-Olm:

The Complexity of Faint Code Elimination in Parallel Programs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

\emph{Dead code elimination} is a classic optimizing transformation
for (imperative) programs that reduces both code size and execution
time.  Its goal is to eliminate computations whose results are never
needed.  The classic approach to dead code elimination is an
iterative procedure: in each step, `dead' assignments are first
identified by means of a simple data flow analysis and then removed.
Iteration is useful because the removal step may render further
commands dead.  But even if this approach is iterated until no
further dead assignments are found, certain assignments can remain in
the program that cannot influence relevant statements such as output
statements.  Such code has been termed \emph{faint code}.  By
employing a more complex data flow analysis, faint code can be
eliminated efficiently from imperative programs with procedures.
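
To make the distinction concrete, here is a minimal sketch in Python
(the program representation, the variable names, and both analyses
are our own toy versions, purely for illustration).  Each assignment
in a loop body is a pair of its target and the variables it uses, and
only `out' is printed.  The iterative procedure keeps x := x + 1
because x is `used' -- by the assignment itself -- whereas the
faintness analysis propagates relevance backwards from the printed
variables and flags it:

    # A tiny loop body; each assignment (target, used_vars) may
    # execute arbitrarily often.  Only `out' is printed afterwards.
    loop_body = [
        ("x", {"x"}),     # x := x + 1  -- x feeds only itself
        ("out", {"y"}),   # out := y
    ]
    printed = {"out"}

    def dead_assignments(body):
        """One round of the classic analysis: an assignment is dead
        if its target is neither printed nor used anywhere."""
        used = set(printed)
        for _, uses in body:
            used |= uses
        return [(t, u) for t, u in body if t not in used]

    def iterative_dce(body):
        """Remove dead assignments and repeat until none are left."""
        while True:
            dead = dead_assignments(body)
            if not dead:
                return body
            body = [s for s in body if s not in dead]

    def faint_assignments(body):
        """A variable is relevant iff it is printed or feeds a
        relevant variable; assignments to other targets are faint."""
        relevant = set(printed)
        changed = True
        while changed:
            changed = False
            for tgt, uses in body:
                if tgt in relevant and not uses <= relevant:
                    relevant |= uses
                    changed = True
        return [(t, u) for t, u in body if t not in relevant]

    print(iterative_dce(list(loop_body)))  # keeps ('x', {'x'})
    print(faint_assignments(loop_body))    # reports ('x', {'x'}) faint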

Recent research has shown that an important class of simple data flow
analysis problems can be solved efficiently for programs with
procedures and a certain kind of parallelism known as \emph{fork/join
parallelism}.  This class includes the data flow analysis used in the
iterative dead code elimination procedure sketched above, so dead
code can be eliminated efficiently from such programs.  However, the
class does \emph{not} contain faint code analysis.
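
One way to see why faintness does not fit this class of simple
(bitvector) problems -- again a Python sketch over our own toy
representation -- is to compare transfer functions.  For ordinary
liveness, every statement contributes fixed gen/kill sets, no matter
which variables are live after it; for the `strong' liveness
underlying faintness, the variables used by x := y become live only
if x itself is live, so the effect of a statement depends on the
incoming set:

    def liveness_transfer(stmt, live_after):
        """Simple (bitvector) liveness: gen = uses, kill = {target},
        fixed per statement."""
        target, uses = stmt
        return (live_after - {target}) | uses

    def strong_liveness_transfer(stmt, live_after):
        """Strong liveness, the complement of faintness: uses are
        generated only if the target is itself live, so the effect
        depends on the incoming set -- no fixed gen/kill pair."""
        target, uses = stmt
        if target in live_after:
            return (live_after - {target}) | uses
        return live_after

    stmt = ("x", {"y"})                           # x := y
    print(liveness_transfer(stmt, set()))         # {'y'}
    print(strong_liveness_transfer(stmt, set()))  # set(): y stays faint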

We investigate the complexity of detecting faint code in such
parallel programs.  More specifically, we show that the problem is
NP-complete even for acyclic parallel programs without procedures,
and that it becomes PSPACE-hard for parallel programs involving loops
and procedures.
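
As a rough intuition for why parallelism makes the problem hard (our
own illustration, not the reductions used in the paper): in a
fork/join program, whether an assignment can influence an output may
depend on how the parallel components interleave.  In the runnable
Python fragment below, z's value reaches the printed variable x only
under schedules where y = z runs before x = y, so an assignment to z
would not be faint; detecting such transitive, schedule-dependent
influence is what an analysis has to cope with:

    import threading

    x, y, z = 0, 1, 42

    def left():
        global x
        x = y          # copies y into x

    def right():
        global y
        y = z          # copies z into y

    a = threading.Thread(target=left)
    b = threading.Thread(target=right)
    a.start(); b.start()
    a.join(); b.join()
    print(x)           # prints 1 or 42, depending on the schedule

The number of possible schedules grows exponentially with the number
of such parallel statements, which hints at why even the acyclic case
without procedures is already NP-hard.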