First, a Brief Detour into the Definition of Abstract: An abstract is a brief summary of the contents of a research report, article, or presentation. When an abstract stands alone, separate from a paper or poster, the title and author(s) are added to give it context. Traditionally, the abstract covers an Introduction, Methods, Results, and Discussion (the IMRaD format) – in the shortest amount of space imaginable.


Title: This is the most succinct statement of your work. If you could define your research in one catchy, concise, concrete statement, this would be it.


Authors: List authors and institutional affiliations according to the preferred method in your field. For instance, in computational sciences, the standard is to list authors alphabetically. The presenting author (you) will be distinguished from your co-authors on the submission form. Affiliations must follow each author's name unless the authors are from the same institution.


Abstract (Body): There are four key elements in the body of an abstract: (1) Introduction: Problem Description, Motivation, and Relevance; (2) Methods; (3) Results; and (4) Discussion (or Conclusions). These four key elements comprise the IMRaD organizational format.

1.       The Introduction typically describes the problem and its importance.

          The Problem Description defines and describes your research topic. What is the specific question that you are going to answer? If you are developing software or hardware, what are you hoping to accomplish?

          The Purpose, Motivation, or Relevance describes why the problem is important. You must convey why you have undertaken your project and what you hoped to learn from your research.


2.       The Methods are the framework, procedures, and tools for investigating your defined problem. It is sufficient to briefly summarize how you approached the problem: describe the computer systems used, the computational techniques, the analytical techniques, and your analysis procedures.


3.       The Results (or outcomes) of your work should be listed concisely and objectively in a logical sequence. Were any comparisons made to existing ideas? If you developed software or hardware, did you run a benchmark study where it was appropriate to do so?


4.       The Discussion (or Conclusions) offers an evaluation and interpretation of your findings and makes some suggestions about solutions to your stated problem. Can you make generalizations, or project new insights into your scientific field? Are there any future improvements to consider?


The art of writing a good scientific abstract is to address the four key elements of the IMRaD format using two or three well-constructed sentences per element. Use simple statements, precise language, and well-known abbreviations when possible.


Keep it short and simple!



  Samples of Abstracts

What's Performance Got to Do With It?

Valerie Taylor, Northwestern University


Efficient execution of applications requires insights into how system features impact the performance of the application. The availability of national, high-speed networks has made available distributed systems for execution of large-scale applications. Distributed systems, which are composed of systems at geographically different sites, are heterogeneous; such systems consist of heterogeneous networks, processors, run-time systems, and operating systems. This heterogeneity complicates the task of gaining insights into the performance of the application. This talk presents the Prophesy Project, an infrastructure that aids in gaining this needed insight based upon one's experience and that of others. Prophesy consists of three major components: a relational database that allows for the recording of performance data, system features and application details; an application analysis component that automatically instruments applications and generates control flow information; and a data analysis component that facilitates the development of performance models, predictions, and trends. As a result, the Prophesy system can be used to develop models based upon significant data, identify the most efficient implementation of a given function based upon the given system configuration, explore the various trends implicated by the significant data, and predict the performance on a different system.

(The following abstract for an algorithm corresponds to an invited presentation given at Tapia 2001)

Mining Very Large Dimensional Data Sets

Vipin Kumar, University of Minnesota


Data sets with high dimensionality pose major challenges for conventional data mining algorithms. For example, traditional clustering algorithms such as K-means fail to produce good clusters in large dimensional data sets even when they are used along with well-known dimensionality reduction techniques such as Principal Component Analysis. This talk presents graph-based methods for clustering related data items in large high-dimensional data sets. Relations among data items are captured using a graph or a hypergraph, and efficient multi-level graph-based algorithms are used to find clusters of highly related items. We present results of experiments on several data sets including S&P500 stock data for the period of 1994-1996, protein coding data, and document data sets from a variety of domains. These experiments demonstrate that our approach is applicable and effective in a wide range of domains, and outperforms conventional techniques such as K-Means even when they are used in conjunction with dimensionality reduction methods such as Principal Component Analysis or Latent Semantic Indexing scheme.

(The following abstract for a new strategy corresponds to an article that appeared in Parallel Computing Research, 4(1996), No. 3.)

A Hilbert Space-Filling Data Decomposition Method for Parallel Distribution of Data

Srinivas Chippada, Clint Dawson, Carter Edwards, Monica Martinez, Mary Wheeler, University of Texas at Austin


The shallow water flow equations have various important applications. For instance, they can be used to predict tidal ranges and surges affecting a coastal area under development. When coupled with a transport model, pollution impact on bays and estuaries can be predicted. To be useful to decision-makers making policy on a moment's notice, the shallow water codes must be able to execute quickly. We developed and implemented a parallelization strategy for a serial, validated shallow water code used by the Texas Water Development Board. The parallelization strategy uses a general message-passing library that runs under both MPI and PVM and does not store global arrays. A preprocessor and a postprocessor were written to handle data decomposition as well as input and output. Two overlapping data decomposition approaches, load balanced with equal weighting on each subdomain, were implemented. The computational domain consisted of a 10,147-node, 18,578-element triangulation of a region corresponding to the Gulf of Mexico and the western Atlantic Ocean along the U.S. east coast. The computational tests were carried out on an Intel Paragon distributed-memory supercomputer. Because of the size of the computational domain, the base number of processors was 2. For a speed-up study over 2, 4, 8, 16, and 32 processors, the theoretical speed-up rates should be 1, 2, 4, 8, and 16. The monotonic element ordering decomposition yielded speed-up rates of 1.00, 1.85, 3.36, 5.37, and 7.57. The Hilbert space-filling curve (HSFC) decomposition strategy, which enforced nearest-neighbor groupings, had speed-up rates of 1.00, 1.98, 3.85, 6.28, and 10.41. The HSFC decomposition strategy resulted in dramatically better speed-up rates when compared to the monotonic element ordering decomposition because the nearest-neighbor grouping had the effect of minimizing interprocessor communication. Further improvements in speed-up rates might be possible with improved load balancing.
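
The scaling comparison in the sample abstract above can be sanity-checked with a short script. This is only an illustrative sketch (the `efficiency` helper and variable names are our own, not from the paper): it takes the speed-up figures quoted in the abstract and computes parallel efficiency, i.e. the fraction of the theoretical linear speed-up each decomposition actually achieved.

```python
# Speed-up figures quoted in the abstract; the 2-processor run is the base case.
processors = [2, 4, 8, 16, 32]
ideal = [p / processors[0] for p in processors]   # theoretical: 1, 2, 4, 8, 16
monotonic = [1.00, 1.85, 3.36, 5.37, 7.57]        # monotonic element ordering
hsfc = [1.00, 1.98, 3.85, 6.28, 10.41]            # Hilbert space-filling curve

def efficiency(measured, theoretical):
    """Parallel efficiency: measured speed-up over theoretical speed-up."""
    return [round(m / t, 2) for m, t in zip(measured, theoretical)]

print("monotonic:", efficiency(monotonic, ideal))
print("HSFC:     ", efficiency(hsfc, ideal))
```

The efficiencies make the abstract's conclusion concrete: both strategies decay as processors are added, but the HSFC decomposition loses far less to interprocessor communication at 32 processors.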