Our concern is how to determine data dependences between program constructs in programming languages with pointer variables. We are particularly interested in computing data dependences for languages that manipulate heap-allocated storage, such as Lisp and Pascal. We have defined a family of algorithms that compute safe approximations to the flow, output, and anti-dependences of a program written in such a language. Our algorithms account for destructive updates to fields of a structure and thus are not limited to the cases where all structures are trees or acyclic graphs; they are applicable to programs that build cyclic structures.

Our technique extends an analysis method described by Jones and Muchnick that determines an approximation to the actual layouts of memory that can arise at each program point during execution. We extend the domain used in their abstract interpretation so that the (abstract) memory locations are labeled by the program points that set their contents. Data dependences are then determined from these memory layouts according to the component labels found along the access paths that must be traversed during execution to evaluate the program's statements and predicates.

For structured programming constructs, the technique can be extended to distinguish between loop-carried and loop-independent dependences, as well as to determine lower bounds on the minimum distances of loop-carried dependences.
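As a loose illustration (not the authors' abstract-interpretation algorithm), the idea of labeling memory locations with the program points that set their contents can be sketched in a few lines of Python. All names here are hypothetical; the toy shows how a destructive update to a heap cell redirects a later read's flow dependence:

```python
# Toy sketch: label each heap "cell" with the statement that last wrote it,
# then read off flow dependences from the labels seen along the access paths
# a later statement traverses. (Illustrative only; the real analysis works
# over abstract memory layouts, not a concrete execution.)

class Cell:
    def __init__(self):
        self.value = None
        self.writer = None   # label of the statement that last set this cell

def write(cell, value, stmt):
    # A second write to the same cell is a destructive update: it induces an
    # output dependence on the earlier writer and replaces the label.
    cell.value = value
    cell.writer = stmt

def flow_deps_of_read(cells_read):
    # A read depends on every statement whose label appears on a cell it reads.
    return {c.writer for c in cells_read if c.writer is not None}

# Toy program: S1 writes x.head, S2 destructively updates it, S3 reads it.
x_head = Cell()
write(x_head, 1, "S1")
write(x_head, 2, "S2")
deps = flow_deps_of_read([x_head])   # S3 reads x.head
print(deps)                          # {'S2'}: S3 flow-depends on S2, not S1
```

The point of the labels is that the dependence of the read falls out of the memory layout alone, with no need to re-trace the program.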
Recently, two of our stronger graduate students had trouble finding work as software developers. Both students had done master's-level work in operating systems, networking, software engineering, and various "hot" technologies, including Java, C++, and client-server computing. Both had also worked as programmers before starting graduate school. Given these credentials, I was surprised that these students, in effect, had to move closer to prospective employers before employers gave their resumes serious consideration.

My students' experiences prompted me to conduct an informal survey about the perceived value of a computer science degree. In October 1998, I asked employees of 23 exhibitors at OOPSLA '98 (software, consulting, and training firms) to assess what their company expected from any applicant with a BS or MS in computer science. What follows are my findings, together with concluding observations on what these findings might signify.
Parallel sparse matrix-matrix multiplication (PSpGEMM) algorithms spend most of their running time on inter-process communication. In distributed matrix-matrix multiplication, much of this time goes to exchanging the partial results needed to compute the final product matrix. This overhead can be reduced with a one-dimensional distributed algorithm for parallel sparse matrix-matrix multiplication that uses a novel accumulation pattern whose cost is logarithmic in the number of processors (i.e., O(log p), where p is the number of processors). The algorithm's MPI communication overhead and execution time were evaluated on an HPC cluster, using randomly generated sparse matrices with dimensions up to one million by one million. The results showed reduced inter-process communication overhead for matrices with larger dimensions compared with another one-dimensional parallel algorithm whose accumulation of partial results has higher run-time complexity.
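The abstract does not detail the accumulation pattern, but the reason pairwise (tree-style) combining of partial results needs only O(log p) rounds can be shown with a small simulation. This sketch is hypothetical and stands in for the MPI exchange with plain addition:

```python
import math

def log_accumulate_rounds(p):
    """Communication rounds to combine p partial results pairwise."""
    return math.ceil(math.log2(p)) if p > 1 else 0

def simulate_tree_accumulate(partials):
    # Pairwise combination: each round halves the number of holders of a
    # partial result, so p partials merge in ceil(log2 p) rounds instead of
    # the p - 1 steps a sequential receive-and-add accumulation would take.
    rounds = 0
    while len(partials) > 1:
        partials = [sum(partials[i:i + 2])
                    for i in range(0, len(partials), 2)]  # "merge" a pair
        rounds += 1
    return partials[0], rounds

total, rounds = simulate_tree_accumulate([1, 2, 3, 4, 5, 6, 7, 8])
print(total, rounds)   # 36 3  (8 partials combined in log2(8) = 3 rounds)
```

With p = 1024 processes this pattern needs only 10 rounds, which is where the reduction in communication overhead for large matrices comes from.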
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.