# MACHINE-INDEPENDENT OPTIMIZATION

**Global Common Subexpressions:** An important optimization is finding computations of the same expression in two different basic blocks. If one precedes the other, we can store the result the first time it is computed and use the stored result on subsequent occurrences.

**Copy Propagation:** A copy statement, *u = v*, assigns one variable *v* to another, *u*. In some circumstances, we can replace all uses of *u* by *v*, thus eliminating both the assignment and *u*.

**Code Motion:** Another optimization is to move a computation outside the loop in which it appears. This change is only correct if the computation produces the same value each time around the loop.

**Induction Variables:** Many loops have induction variables, variables that take on a linear sequence of values each time around the loop. Some of these are used only to count iterations, and they often can be eliminated, thus reducing the time it takes to go around the loop.
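As a minimal sketch (the loop and variable names are hypothetical, not from the text), the classic strength-reduction transformation replaces a multiplication involving an induction variable with an addition to a derived induction variable that takes on the same sequence of values:

```python
# Hypothetical illustration of induction-variable strength reduction.

def addresses_with_multiply(n):
    # Original loop: t = 4*i is recomputed from i on every iteration.
    out = []
    for i in range(n):
        t = 4 * i
        out.append(t)
    return out

def addresses_with_addition(n):
    # After strength reduction: t is a derived induction variable,
    # updated by adding 4 each time around the loop.
    out = []
    t = 0
    for i in range(n):
        out.append(t)
        t = t + 4
    return out
```

Both versions produce the same linear sequence, so the multiplication can be replaced by the cheaper addition; if *i* is used only for counting, it can then be eliminated entirely.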

**Data-Flow Analysis:** A data-flow analysis schema defines a value at each point in the program. Statements of the program have associated transfer functions that relate the value before the statement to the value after. Statements with more than one predecessor must have their value defined by combining the values at the predecessors, using a meet (or confluence) operator.

**Data-Flow Analysis on Basic Blocks:** Because the propagation of data-flow values within a block is usually quite simple, data-flow equations are generally set up to have two variables for each block, called IN and OUT, that represent the data-flow values at the beginning and end of the block, respectively. The transfer functions for the statements in a block are composed to get the transfer function for the block as a whole.

**Reaching Definitions:** The reaching-definitions data-flow framework has values that are sets of statements in the program that define values for one or more variables. The transfer function for a block kills definitions of variables that are definitely redefined in the block and adds (“generates”) those definitions of variables that occur within the block. The confluence operator is union, since definitions reach a point if they reach any predecessor of that point.
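The equations OUT[B] = gen[B] ∪ (IN[B] − kill[B]) and IN[B] = ⋃ OUT[P] over predecessors P can be solved iteratively. A minimal Python sketch, with a hypothetical three-block graph in the usage example (the block and definition names are illustrative, not from the text):

```python
# Iterative reaching-definitions analysis over basic blocks.
# gen[b] and kill[b] are sets of definition ids; the meet is set union
# and information flows forward along the edges of the flow graph.

def reaching_definitions(blocks, preds, gen, kill):
    IN = {b: set() for b in blocks}
    OUT = {b: set(gen[b]) for b in blocks}
    changed = True
    while changed:
        changed = False
        for b in blocks:
            # Meet (union) over all predecessors of b.
            IN[b] = set().union(*(OUT[p] for p in preds[b])) if preds[b] else set()
            # Transfer function for the block.
            new_out = gen[b] | (IN[b] - kill[b])
            if new_out != OUT[b]:
                OUT[b] = new_out
                changed = True
    return IN, OUT
```

For example, if B2 is a loop body that redefines the variable defined by d1 in B1, both d1 and d2 reach the top of B2 (d1 around the loop entry, d2 around the back edge), but only d2 leaves it.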

**Live Variables:** Another important data-flow framework computes the variables that are live (will be used before redefinition) at each point. The framework is similar to reaching definitions, except that the transfer function runs backward. A variable is live at the beginning of a block if it is either used before definition in the block or is live at the end of the block and not redefined in the block.
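Liveness is the mirror image of reaching definitions: IN[B] = use[B] ∪ (OUT[B] − def[B]), with OUT[B] the union of IN over successors. A minimal sketch, with a hypothetical straight-line graph as the usage example:

```python
# Backward liveness analysis. use[b] holds variables read before any
# definition in b; def_[b] holds variables defined in b. The meet is
# union over successors, and values propagate backward.

def live_variables(blocks, succs, use, def_):
    IN = {b: set() for b in blocks}
    OUT = {b: set() for b in blocks}
    changed = True
    while changed:
        changed = False
        for b in blocks:
            # Meet (union) over all successors of b.
            OUT[b] = set().union(*(IN[s] for s in succs[b])) if succs[b] else set()
            # Backward transfer function for the block.
            new_in = use[b] | (OUT[b] - def_[b])
            if new_in != IN[b]:
                IN[b] = new_in
                changed = True
    return IN, OUT
```

In the example below, B1 defines x, B2 uses x and defines y, and B3 uses y; so x is live across the B1→B2 edge but nothing is live on entry to B1.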

**Available Expressions:** To discover global common subexpressions, we determine the available expressions at each point – expressions that have been computed, and neither of the expression’s arguments was redefined after the last computation. The data-flow framework is similar to reaching definitions, but the confluence operator is intersection rather than union.
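Because the meet is intersection, OUT for every block except the entry must be initialized to the universal set of expressions, not the empty set. A minimal sketch, with a hypothetical diamond-shaped graph as the usage example:

```python
# Iterative available-expressions analysis. e_gen[b] holds expressions
# computed in b and still valid at its end; e_kill[b] holds expressions
# whose operands b redefines. The meet is intersection, so OUT is
# initialized to the universal set everywhere except the entry block.

def available_expressions(blocks, preds, entry, e_gen, e_kill, universe):
    OUT = {b: set(universe) for b in blocks}
    OUT[entry] = set(e_gen[entry])
    IN = {b: set() for b in blocks}
    changed = True
    while changed:
        changed = False
        for b in blocks:
            if b == entry:
                IN[b] = set()
            else:
                # Meet (intersection) over all predecessors of b.
                IN[b] = set(universe)
                for p in preds[b]:
                    IN[b] &= OUT[p]
            new_out = e_gen[b] | (IN[b] - e_kill[b])
            if new_out != OUT[b]:
                OUT[b] = new_out
                changed = True
    return IN, OUT
```

In the example, B1 computes a+b and the two branch blocks B2 and B3 rejoin at B4; since B3 redefines an operand of a+b, the expression is not available at B4, exactly because intersection demands availability along every path.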

**Abstraction of Data-Flow Problems:** Common data-flow problems, such as those already mentioned, can be expressed in a common mathematical structure. The values are members of a semilattice, whose meet is the confluence operator. Transfer functions map lattice elements to lattice elements. The set of allowed transfer functions must be closed under composition and include the identity function.

**Monotone Frameworks:** A semilattice has a <= relation defined by *a <= b* if and only if *a ∧ b = a*. Monotone frameworks have the property that each transfer function preserves the <= relationship; that is, *a <= b* implies *f(a) <= f(b)*, for all lattice elements *a* and *b* and transfer function *f*.

**Distributive Frameworks:** These frameworks satisfy the condition that *f(a ∧ b) = f(a) ∧ f(b)*, for all lattice elements *a* and *b* and transfer function *f*. It can be shown that the distributive condition implies the monotone condition.

**Iterative Solution to Abstract Frameworks:** All monotone data-flow frameworks can be solved by an iterative algorithm in which the IN and OUT values for each block are initialized appropriately (depending on the framework), and new values for these variables are repeatedly computed by applying the transfer and confluence operations. This solution is always safe (optimizations that it suggests will not change what the program does), but the solution is certain to be the best possible only if the framework is distributive.

**The Constant Propagation Framework:** While the basic frameworks such as reaching definitions are distributive, there are interesting monotone-but-not-distributive frameworks as well. One involves propagating constants by using a semilattice whose elements are mappings from the program variables to constants, plus two special values that represent “no information” and “definitely not a constant.”
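The failure of distributivity can be seen concretely. In the hypothetical example below (variable names and the statement *z = x + y* are illustrative), one path sets x=2, y=3 and the other sets x=3, y=2; applying the transfer function for *z = x + y* after the meet loses the fact that z = 5 along both paths:

```python
# Constant-propagation lattice values: an int constant, UNDEF ("no
# information", the top element), or NAC ("definitely not a constant").
UNDEF, NAC = "UNDEF", "NAC"

def meet_val(a, b):
    # Meet of two lattice values: UNDEF is the identity; two equal
    # constants meet to themselves; anything else meets to NAC.
    if a == UNDEF:
        return b
    if b == UNDEF:
        return a
    return a if a == b else NAC

def meet_map(m1, m2):
    # Meet of two variable-to-value mappings, variable by variable.
    return {v: meet_val(m1[v], m2[v]) for v in m1}

def transfer(m):
    # Transfer function for the statement z = x + y.
    out = dict(m)
    if isinstance(m["x"], int) and isinstance(m["y"], int):
        out["z"] = m["x"] + m["y"]
    elif m["x"] == NAC or m["y"] == NAC:
        out["z"] = NAC
    else:
        out["z"] = UNDEF
    return out
```

Here *f*(m1 ∧ m2) maps z to NAC, while *f*(m1) ∧ *f*(m2) maps z to 5: the framework is monotone (the first result is <= the second) but not distributive.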

**Partial-Redundancy Elimination:** Many useful optimizations, such as code motion and global common-subexpression elimination, can be generalized to a single problem called partial-redundancy elimination. Expressions that are needed, but are available along only some of the paths to a point, are computed only along the paths where they are not available. The correct application of this idea requires the solution to a sequence of four different data-flow problems plus other operations.

**Dominators:** A node in a flow graph dominates another if every path to the latter must go through the former. A proper dominator is a dominator other than the node itself. Each node except the entry node has an immediate dominator – that one of its proper dominators that is dominated by all the other proper dominators.
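Dominators can themselves be computed as a forward data-flow problem with intersection as the meet: D(entry) = {entry}, and D(n) = {n} ∪ ⋂ D(p) over predecessors p. A minimal sketch, using a hypothetical diamond graph in the usage example:

```python
# Iterative dominator computation: D(n) = {n} ∪ ⋂ D(p) over preds p,
# with every set initialized to all nodes except at the entry.

def dominators(nodes, preds, entry):
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n == entry:
                continue
            new = {n} | set.intersection(*(dom[p] for p in preds[n]))
            if new != dom[n]:
                dom[n] = new
                changed = True
    return dom
```

In the diamond 1→2, 1→3, 2→4, 3→4, node 4 is dominated only by 1 and itself: neither branch node lies on every path to 4.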

**Depth-First Ordering of Flow Graphs:** If we perform a depth-first search of a flow graph, starting at its entry, we produce a depth-first spanning tree. The depth-first order of the nodes is the reverse of a postorder traversal of the tree.
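A minimal sketch of computing this order (reverse postorder) with a recursive depth-first search; the four-node graph in the usage example is hypothetical:

```python
# Depth-first order (reverse postorder) of a flow graph: perform a DFS
# from the entry, record each node when its search finishes (postorder),
# then reverse the list.

def depth_first_order(succs, entry):
    seen, post = set(), []
    def dfs(n):
        seen.add(n)
        for s in succs[n]:
            if s not in seen:
                dfs(s)
        post.append(n)  # n is finished: append in postorder
    dfs(entry)
    return post[::-1]   # reverse postorder = depth-first order
```

In this order every advancing or cross edge goes from an earlier node to a later one; only retreating edges go backward.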

**Classification of Edges:** When we construct a depth-first spanning tree, all the edges of the flow graph can be divided into three groups: advancing edges (those that go from ancestor to proper descendant), retreating edges (those from descendant to ancestor), and cross edges (others). An important property is that all cross edges go from right to left in the tree. Another important property is that of these edges, only the retreating edges have a head lower than the tail in the depth-first order (reverse postorder).

**Back Edges:** A back edge is one whose head dominates its tail. Every back edge is a retreating edge, regardless of which depth-first spanning tree for its flow graph is chosen.

**Reducible Flow Graphs:** If every retreating edge is a back edge, regardless of which depth-first spanning tree is chosen, then the flow graph is said to be reducible. The vast majority of flow graphs are reducible; those whose only control-flow statements are the usual loop-forming and branching statements are certainly reducible.

**Natural Loops:** A natural loop is a set of nodes with a header node that dominates all the nodes in the set and has at least one back edge entering that node. Given any back edge, we can construct its natural loop by taking the head of the edge plus all nodes that can reach the tail of the edge without going through the head. Two natural loops with different headers are either disjoint or one is completely contained in the other; this fact lets us talk about a hierarchy of nested loops, as long as “loops” are taken to be natural loops.
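The construction amounts to a backward search from the tail of the back edge that never crosses the header. A minimal sketch (the example graph is hypothetical):

```python
# Construct the natural loop of a back edge (n, h): the header h, plus
# all nodes that can reach the tail n without going through h, found by
# a backward search from n that never expands past h.

def natural_loop(preds, back_edge):
    n, h = back_edge
    loop = {h, n}
    stack = [n] if n != h else []
    while stack:
        m = stack.pop()
        for p in preds[m]:
            if p not in loop:   # h is already in loop, so we stop there
                loop.add(p)
                stack.append(p)
    return loop
```

For the graph 1→2→3→4 with back edge 4→2 and loop exit 4→5, the backward search from 4 collects 3 and stops at the header 2, giving the loop {2, 3, 4}.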

**Depth-First Order Makes the Iterative Algorithm Efficient:** The iterative algorithm requires relatively few passes, as long as propagation of information along acyclic paths is sufficient; i.e., cycles add nothing. If we visit nodes in depth-first order, any data-flow framework that propagates information forward, e.g., reaching definitions, will converge in no more passes than 2 plus the largest number of retreating edges on any acyclic path. The same holds for backward-propagating frameworks, like live variables, if we visit in the reverse of depth-first order (i.e., in postorder).

**Regions**: Regions are sets of nodes and edges with a header *h* that dominates all nodes in the region. The predecessors of any node other than *h* in the region must also be in the region. The edges of the region are all that go between nodes of the region, with the possible exception of some or all that enter the header.

**Regions and Reducible Flow Graphs:** Reducible flow graphs can be parsed into a hierarchy of regions. These regions are either loop regions, which include all the edges into the header, or body regions, which have no edges into the header.

**Region-Based Data-Flow Analysis:** An alternative to the iterative approach to data-flow analysis is to work up and down the region hierarchy, computing transfer functions from the header of each region to each node in that region.

**Region-Based Induction Variable Detection:** An important application of region-based analysis is in a data-flow framework that tries to compute formulas for each variable in a loop region whose value is an affine (linear) function of the number of times around the loop.