
   
Related Work

Our work builds upon the tremendous amount of previous research in program analysis and program transformations. In particular, we attempt to extend classical analyses and transformations to semantically higher-level operations that are encapsulated in library functions. For example, one of our annotations specifies an abstract interpretation [9,17], and another draws from the pointer analysis work of Wilson and Lam [28].

Compilers have long used hints and pragmas to guide optimizations such as register allocation and inlining, and to summarize procedure information such as whether a function has side effects. More recently, annotations have been used to guide dynamic compilation [13]. While annotations themselves are not new, our use of them is. First, our annotations describe function implementations rather than call-site-specific information. As a result, application programs require no annotations, so the annotations are hidden from the everyday user. Second, and more fundamentally, our advanced annotations can convey domain-specific information that other annotation languages cannot: annotators can define concepts, such as data distribution, that extend beyond those of the base language. However, unlike most hints and pragmas, incorrect use of our annotations can lead to transformations that do not preserve the library's semantics.
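As a point of contrast, conventional hints of this kind typically assert a single fixed property at a declaration. The following minimal sketch uses GCC's "pure" function attribute to declare that a hypothetical library routine has no side effects, which lets the compiler eliminate or hoist redundant calls; such a hint cannot express richer domain-specific facts like a matrix's data distribution:

    /* GCC-style hint: asserts that vector_norm has no side effects,
       so redundant calls may be removed or moved by the compiler.
       The routine itself is hypothetical, shown only for illustration. */
    double vector_norm(const double *v, int n) __attribute__ ((pure));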

Our work is closely related to partial evaluation [5,6,10], which improves performance by specializing routines for specific inputs. Partial evaluation combines inlining, constant propagation, and constant folding to evaluate as much of the program as possible at compile time. Recent work in program specialization has generalized partial evaluation to the notion of staged optimizations, which can take place at compile time, link time, or run time [13,14], and which can be applied to class libraries in object-oriented programs [27]. All of these approaches specialize based on the values of variables that are constant for some duration of the program. By contrast, our approach can specialize based on other criteria: for example, specialization can occur at a particular program point when the program enters a particular state. Our approach can also perform optimizations, such as loop-invariant code motion, that cannot be expressed using partial evaluation.
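To make the comparison concrete, the following sketch (in C, with hypothetical names) illustrates the kind of residual code that partial evaluation produces when an argument is known at compile time:

    /* General-purpose routine: computes x^n by repeated multiplication. */
    double power(double x, int n)
    {
        double r = 1.0;
        int i;
        for (i = 0; i < n; i++)
            r = r * x;
        return r;
    }

    /* For the call power(x, 3), a partial evaluator can inline the body,
       propagate the constant n = 3, and unroll the loop, yielding a
       specialized residual function equivalent to: */
    double power_3(double x)
    {
        return x * x * x;
    }

Such specialization is driven entirely by values that are fixed for some portion of the program's execution, whereas our annotations can also key transformations to the abstract state the program has reached at a given point.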

Software generators [23,24] and program transformation systems [22] are compilers for domain-specific programming languages. While these systems provide sophisticated transformations of high-level language constructs, they typically manipulate programs only at the syntactic level. Semantic properties, such as those resulting from dataflow analysis, are either awkward to express or completely unavailable. Our approach instead focuses on exploiting semantic, rather than syntactic, information.

There has been considerable work in formal semantics and formal specifications. In particular, Vandevoorde uses powerful analysis and inference capabilities to specialize procedure implementations [26]. However, complete axiomatic theories are difficult to write and do not exist for many domains. In addition, this approach depends on theorem provers, which are computationally intensive and only partially automated. Our work differs from these primarily in the scope and completeness of our annotations, which describe only specific implementation properties instead of complete behaviors.

Open and extensible compilers give the programmer complete access to the internal representation of the program [16,12]. While these systems are quite general, they impose a considerable burden: to use them, the programmer must understand (1) general compiler implementation techniques, (2) how to configure the specific compiler being used, and (3) how to express and execute the desired optimizations. Similarly, meta-object protocols provide sophisticated mechanisms for modifying the compilation of object-oriented programs [8,19], but they can be difficult to use. Our compiler limits configurability to a small but powerful set of capabilities and provides a simple way to access them.

Finally, we note that our system is an instance of aspect-oriented programming [18]. In our case, the cross-cutting aspect is performance improvement, and our annotation language and compiler are specific mechanisms for implementing this aspect. An important feature of aspects is that they can be specified separately from the rest of the code; in our case, this separation is achieved by placing the annotations in a separate file.

