Technical Reports

These are the technical reports published by the Computer Science Department. Some of these reports are available in Adobe Acrobat (PDF) format. To get the latest version of the free Adobe Acrobat Reader, visit the Adobe Acrobat Reader download site.


TR-UAH-CS-1995-01 PDF version (1.53MB) PostScript version (2.23MB)


Letha Etzkorn and Carl G. Davis, "An Approach to Object-Oriented Program Understanding," Technical Report TR-UAH-CS-1995-01, Computer Science Dept., Univ. Alabama in Huntsville, 1995.


An automated tool to assist in the understanding of legacy code can be useful both in the areas of software reuse and software maintenance. Most previous work in this area has concentrated on functionally-oriented code. Whereas object-oriented code has been shown to be inherently more reusable than functionally-oriented code, in many cases the eventual reuse of the object-oriented code was not considered during development. The research described in this paper addresses an approach to the automated understanding of object-oriented code as an aid to the reuse of object-oriented code.


TR-UAH-CS-1995-02 PDF version (110 KB) PostScript version (223 KB)


Thomas H. Hinke, Harry S. Delugach, and Randall P. Wolf, "Genie: A Database Generator for Testing Inference Detection Tools," Technical Report TR-UAH-CS-1995-02, Computer Science Dept., Univ. Alabama in Huntsville, 1995.


This paper describes a system called Genie, which generates databases suitable for testing inference detection tools. In order to provide the inter-relationships that must exist among data instances if the database is actually to have inferences, Genie uses a simulator to mimic "real world" activity and captures data from the simulator. Since the data is based on a simulation, it will have the necessary inter-relationships. When simulator-based data cohesiveness is not required, Genie provides a means to generate instances that are not related to the simulator. It also provides a means to associate external semantics with the data by renaming data to associate it with desired "real-world" objects and activities. The paper describes the database that is currently generated by Genie and then shows how a set of inferences that have been identified by the AERIE inference research project can be supported by the database. These inferences are organized in terms of the inference targets specified by the AERIE inference model. The paper describes a language called FGL (Fact Generation Language), which can be used to program Genie to generate various databases, including the one presented in this paper. It then presents a description of the Genie architecture. Finally, the paper concludes with observations of our experience to date in using Genie to support the development of inference detection tools.


TR-UAH-CS-1996-01 PDF version (201 KB) PostScript version (2.35 MB)


Ning Tang and Timothy S. Newman, "A Vector-Parallel Realization of the Marching Cubes," Technical Report TR-UAH-CS-1996-01, Computer Science Dept., Univ. Alabama in Huntsville, 1996.


The Marching Cubes algorithm is a popular high-resolution isosurface extraction method used in volume data visualization. However, it is relatively computationally intensive, making real-time operation on normal workstations a difficult goal when applied to large datasets. One solution is to transform the serial algorithm into a vector-parallel algorithm designed to exploit the potential computing power supplied by a supercomputer. In this paper, we present an implementation of the Marching Cubes that considers the inherent parallelism in the algorithm as well as the specific characteristics of the pipelined CPU of a vector-parallel supercomputer (Cray C90). In our approach, we vectorize two time-consuming operations in the Marching Cubes. The first operation is the interpolation of the intersection points between the isosurface and the cube edges. The second vectorized operation is the computation of topological equivalences for classes of intersections. In this paper, we describe the details of our parallel algorithm and present the experimental results for several typical volume datasets.
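The first vectorized operation the abstract mentions can be illustrated in scalar form. A minimal sketch (an illustration of the standard technique, not the authors' Cray C90 code) of the linear interpolation that locates the isosurface crossing on a single cube edge:

```python
def edge_intersection(p0, p1, v0, v1, iso):
    """Linearly interpolate the isosurface crossing along one cube edge.

    p0, p1: endpoint coordinates (3-tuples); v0, v1: scalar field values
    at those endpoints; iso: the isovalue. Assumes v0 != v1, i.e. the
    edge is known to straddle the isosurface.
    """
    t = (iso - v0) / (v1 - v0)  # parametric position of the crossing
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))
```

On a vector machine, this same arithmetic is applied to whole arrays of active edges at once rather than one edge at a time, which is the essence of the vectorization described.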

TR-UAH-CS-1996-02 PDF version (64 KB) PostScript version (203 KB)


Randy Wolf and Harry S. Delugach, "Knowledge Acquisition Via Tracked Repertory Grids," Technical Report TR-UAH-CS-1996-02, Computer Science Dept., Univ. Alabama in Huntsville, 1996.


One of the more valuable and flexible forms of knowledge acquisition is based upon the use of repertory grids. A useful extension of repertory grids can be created by providing a method of semantically linking associated constructs and repertory grids. This network of grids is a semantic network whose nodes are individual repertory grids and whose links act as 'tracks.' A track is a generalization of the laddering process used by repertory grid systems. These linked repertory grids, which are acquired using the natural language interface of repertory grids, can form an operational definition of a problem-solving method.

TR-UAH-CS-1996-04 PDF version (22 KB) PostScript version (68 KB)


Tonya R. Thorne, Harry S. Delugach, "The Requirements Dissection Model: A Model for Aiding Users/Customers in the Development of Software Requirements," Technical Report TR-UAH-CS-1996-04, Computer Science Dept., Univ. Alabama in Huntsville, 1996.


This paper briefly looks at the various methodologies that are used to represent software requirements. One of the deficiencies of the various software methodologies available is that they do not use terminology that the user/customer can understand. One of the ways to help in developing stable software requirements is to make the user/customer as actively involved as possible during the software development process. Unfortunately, not many software requirements notation models incorporate the user's/customer's viewpoint or, if they do, they do not present it in a way that the user/customer can easily understand. The Requirements Dissection Model is designed in simple language that an untrained user/customer can comprehend, yet is sophisticated enough to capture the salient features of software systems for implementation in the design of these systems. Six representative systems were chosen to show how the Requirements Dissection Model can be implemented. After analyzing these systems, the paper concludes with ideas for future research on the Requirements Dissection Model.

TR-UAH-CS-1997-01 PDF version (116 KB) PostScript version (373 KB)


L. H. Etzkorn, C. G. Davis, B. L. Vinz, R. P. Wolf, J. C. Wolf, M. Y. Yun, L. L. Bowen, A. M. Orme, L. W. Lewis, and D. B. Etzkorn, "An Examination of Object-Oriented Reuse Views in the PATRicia System," Technical Report TR-UAH-CS-1997-01, Computer Science Dept., Univ. Alabama in Huntsville, 1997.


Software reuse has been shown to increase productivity, reduce costs, and improve software quality. The identification of reusable code in existing (legacy) code is an important part of the software reuse process. Most research that has addressed this problem has concentrated on code created in the functional decomposition paradigm. However, it has been widely shown that object-oriented code is inherently more reusable than functionally-oriented code. In many cases, eventual reuse of the code was not considered in the software development process, and so even though the object-oriented paradigm tends to result in more reusable code than the functional decomposition paradigm does, the code itself was not specifically designed for reuse. This paper describes various views of reuse in object-oriented systems. These views employ object-oriented metrics to aid in the quantification of the reusability of code components in object-oriented systems.

Keywords: software reuse, object-oriented metrics, knowledge-based, program understanding.

TR-UAH-CS-1997-02 PDF version (56 KB) PostScript version (557 KB)


Letha Etzkorn, Carl Davis, and Wei Li, "A Statistical Comparison of Various Definitions of the LCOM Metric," Technical Report TR-UAH-CS-1997-02, Computer Science Dept., Univ. Alabama in Huntsville, 1997.


Several different definitions of the Lack of Cohesion of Methods (LCOM) metric exist, and various implementations of the metric are possible with regard to inheritance and the treatment of the constructor and destructor in the calculation. This paper discusses the pros and cons of the possible definitions and implementations of the LCOM metric. An experiment that compared each implementation and definition of LCOM to cohesiveness as determined by seven experts is described. Linear regression analyses comparing cohesiveness to the various LCOM metrics are discussed.
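For readers unfamiliar with the metric, a minimal sketch of one widely cited definition (Chidamber and Kemerer's LCOM, one of the several variants such a comparison covers) can be written directly from the definition:

```python
from itertools import combinations

def lcom(method_attrs):
    """Chidamber & Kemerer LCOM for one class.

    method_attrs maps each method name to the set of instance
    attributes it uses. P counts method pairs sharing no attributes,
    Q counts pairs sharing at least one; LCOM = max(P - Q, 0).
    """
    p = q = 0
    for (_, a1), (_, a2) in combinations(method_attrs.items(), 2):
        if a1 & a2:
            q += 1
        else:
            p += 1
    return max(p - q, 0)
```

Other variants differ exactly along the lines the report studies, e.g. whether inherited attributes, constructors, and destructors are included in the attribute sets.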

TR-UAH-CS-1997-03 PDF version (67 KB) PostScript version (993 KB)


Harry S. Delugach, "Conceptual Integration In Multiple Viewed Requirements Development," Technical Report TR-UAH-CS-1997-03, Computer Science Dept., Univ. Alabama in Huntsville, 1997.


This paper addresses software requirements development and how it can be supported by combining the multiple views of participants with the ability for participants to gain feedback from other views. We include a brief justification for the inclusion of multiple views, a brief summary of multiple-viewed approaches, and an introduction to conceptual graphs as a representation method for requirements. Using some simple techniques, a brief example, and the WordNet database, we show how conceptual feedback supports elicitation and acquisition. We finally outline some future work we will undertake to further explore the techniques.

Keywords: software requirements engineering, requirements acquisition, requirements elicitation, conceptual graphs, conceptual feedback.

TR-UAH-CS-1997-04 PDF version (326 KB) PostScript version (908 KB)


Jeffrey Fox, Heather Huber, David Krum, Jay Moon, Dong Ouyang, Insuk Sickler, and Julie Vo, "Software Requirements Specification: Picasso Requirements Assistant," Technical Report TR-UAH-CS-1997-04, Computer Science Dept., Univ. Alabama in Huntsville, 1997.


Picasso will function as a major component of the Requirements Assistant system. This system is being developed to provide an environment in which a group of developers can collaborate on the production of a set of software requirements. These developers may be working on different continents and may not be able to meet in person. The system will support a multiple-viewed strategy that allows developers to create Requirements-Views using the CASE tools with which they are familiar. Views that have been created will be stored in a common repository for future reference. In addition to providing a storage and retrieval facility, the system will provide facilities to analyze for inconsistency, incompleteness, and ambiguity between the different views. As a result, the translation of views from the respective CASE tool format into a common internal representation is required. The notation of conceptual graphs is a well-defined knowledge representation which has been chosen as a suitable internal representation for the system [SOW97]. A description of the reasons for choosing the conceptual graphs notation is presented in the notes section of this document. The system will provide facilities to resolve the conflicts detected between the views. Finally, the system will track activity in the system by keeping a log of user requests, problem reports, and corrective actions that will be used to calculate and report project metrics.

TR-UAH-CS-1998-01 PDF version (44 KB) PostScript version (674 KB)


Letha Hughes Etzkorn, "The Use of A Simple Methodology for Flip Flop Conversion as an Aid in Teaching Synchronous Sequential Circuits in a Digital Systems Design Course," Technical Report TR-UAH-CS-1998-01, Computer Science Dept., Univ. Alabama in Huntsville, 1998.


Most digital systems textbooks treat the topic of converting one flip flop to another by simply giving the student certain simple conversion circuits, such as the use of an inverter between the R and S inputs of an RS flip flop to form a D flip flop, or tying together the inputs of a JK flip flop to make a T flip flop. However, a more general, but very simple, methodology for flip flop conversion has advantages when used to teach synchronous sequential circuits to students in a digital systems course. The use of this methodology removes a source of student confusion (how did they come up with that circuit in the first place?) and allows the student to practice certain standard flip flop techniques before the student is required to use those techniques in a more general sequential circuit analysis. This paper describes this methodology.
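The general methodology described is the standard excitation-table approach: for each present state and desired next state, look up the inputs the target flip flop requires. A small illustrative sketch (an assumption of the technique, not code from the report) tabulating the JK inputs needed to make a JK flip flop behave as a T flip flop:

```python
# JK flip flop excitation table: (Q, Q_next) -> required (J, K),
# where 'x' denotes a don't-care input.
JK_EXCITATION = {
    (0, 0): ('0', 'x'),
    (0, 1): ('1', 'x'),
    (1, 0): ('x', '1'),
    (1, 1): ('x', '0'),
}

def t_next_state(t, q):
    # Desired behaviour: a T flip flop toggles its state when T = 1.
    return q ^ t

def conversion_table():
    """Tabulate (T, Q, J, K): the JK inputs that realize T behaviour."""
    rows = []
    for t in (0, 1):
        for q in (0, 1):
            j, k = JK_EXCITATION[(q, t_next_state(t, q))]
            rows.append((t, q, j, k))
    return rows
```

Resolving the don't cares in the resulting table gives J = T and K = T, i.e. the familiar tied-inputs circuit, but derived systematically rather than presented as a given, which is the pedagogical point of the methodology.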

TR-UAH-CS-1998-02 PDF version (44 KB)


Min Dai and Timothy S. Newman, "Hyperbolic and Parabolic Quadric Surface Fitting Algorithms - Comparison Between the Least Squares Approach and the Parameter Optimization Approach," Technical Report TR-UAH-CS-1998-02, Computer Science Dept., Univ. Alabama in Huntsville, 1998.


Locating and classifying quadric surfaces is a significant step in the recognition of 3D manufactured objects because quadric surfaces are commonly occurring shapes in man-made products. Surface fitting based on the input sample data point set is an effective strategy for quadric surface recognition. Two quadric surface fitting algorithms that are especially useful for hyperboloid and paraboloid fitting are described in this report. One is the Least Squares Approach and the other is the Parameter Optimization Approach. A comparison is made between the performances of these methods.
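The least-squares idea can be illustrated on a deliberately simplified quadric. This sketch (a hypothetical reduced form, not the authors' algorithm, which handles general quadrics) fits an axis-aligned paraboloid z = a·x² + b·y² + c to sample points by forming and solving the normal equations:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_paraboloid(points):
    """Least-squares fit of z = a*x^2 + b*y^2 + c to (x, y, z) samples.

    Builds the design matrix D with rows [x^2, y^2, 1] and solves the
    normal equations (D^T D) w = D^T z for the coefficients [a, b, c].
    """
    D = [[x * x, y * y, 1.0] for x, y, _ in points]
    z = [p[2] for p in points]
    m, n = len(D), 3
    DtD = [[sum(D[k][i] * D[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    Dtz = [sum(D[k][i] * z[k] for k in range(m)) for i in range(n)]
    return solve(DtD, Dtz)
```

The Parameter Optimization Approach would instead iteratively adjust the surface parameters to minimize a (possibly geometric) error measure, which is the trade-off the report compares.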

TR-UAH-CS-1999-01 PDF version (64 KB) PostScript version (423 KB)


Anuradha Lakshminarayana and Timothy S. Newman, "Principal Component Analysis of Lack of Cohesion in Methods (LCOM) metrics," Technical Report TR-UAH-CS-1999-01, Computer Science Dept., Univ. Alabama in Huntsville, 1999.


In this report, we study the Lack of Cohesion in Methods (LCOM) metric for an object-oriented system and examine the suitability of eight variations of this metric through a principal component analysis.
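Principal component analysis of such metric variants reduces to an eigen-analysis of their covariance matrix. A minimal illustrative sketch (not the authors' statistical tooling) that extracts the first principal component by power iteration:

```python
def principal_component(data, iters=200):
    """First principal component of mean-centred data via power iteration.

    data: list of equal-length numeric tuples (one row per observation,
    e.g. one row of LCOM-variant values per class). Illustrative only;
    a full PCA would extract all components and their variances.
    """
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - means[j] for j in range(d)] for row in data]
    # Sample covariance matrix of the centred data.
    cov = [[sum(X[k][i] * X[k][j] for k in range(n)) / (n - 1)
            for j in range(d)] for i in range(d)]
    # Power iteration converges to the dominant eigenvector.
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

If several metric variants load heavily on the same component, they are largely measuring the same thing, which is the kind of suitability question such an analysis addresses.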

TR-UAH-CS-2002-01 PDF version (94 KB) PostScript version (353 KB)


William Lee and Timothy S. Newman, "On OpenGL Rendering of Isosurfaces," Technical Report TR-UAH-CS-2002-01, Computer Science Dept., Univ. Alabama in Huntsville, 2002.


One of the goals of volume visualization is to provide the visualization user with an accurate (i.e., realistic) graphical representation of a real-world phenomenon. One way to heighten that realism is through lighting and shading. In particular, performing correct lighting and shading in OpenGL requires, among other things, normal vector calculation. In this technical report, we discuss a user-defined normal vector calculation procedure and all the OpenGL constructs necessary to light and shade an isosurface-based rendering appropriately.
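A common user-defined normal calculation for isosurfaces, likely similar in spirit to the one the report discusses, takes the central-difference gradient of the scalar field at each vertex; the normalized (and, by convention, negated) gradient is then passed to OpenGL as the per-vertex normal via `glNormal`. A minimal sketch of the gradient step (the field layout and negation convention are assumptions):

```python
def gradient_normal(field, i, j, k):
    """Unit normal at interior grid point (i, j, k) of a 3D scalar field.

    field is a nested list indexed as field[i][j][k]. Central differences
    approximate the gradient; the negated, normalised gradient points
    outward from higher field values, a common isosurface convention.
    """
    gx = (field[i + 1][j][k] - field[i - 1][j][k]) / 2.0
    gy = (field[i][j + 1][k] - field[i][j - 1][k]) / 2.0
    gz = (field[i][j][k + 1] - field[i][j][k - 1]) / 2.0
    length = (gx * gx + gy * gy + gz * gz) ** 0.5 or 1.0  # avoid divide-by-zero
    return (-gx / length, -gy / length, -gz / length)
```

Normals at isosurface vertices that fall between grid points are then obtained by interpolating the grid-point normals, mirroring the edge interpolation of the vertex positions themselves.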