98 Seiten, Note: 1,0
Directory of Figures
Directory of Tables
Directory of Listings
Table of Contents
2. Motivation and Problem Description
3. Experiment Background
3.1. A short summary on Empiricism and Experimentation
3.1.1. What is meant by Empiricism?
3.1.2. Research and Experimentation methods
3.1.2.1. Case Studies or Benchmarks
3.1.2.2. Field Studies
3.1.2.3. Controlled Experiments
3.1.3. Empirical research in Software Engineering – Specifics and Dangers
3.2. Aspect-Oriented Programming
3.2.1. Aspect-Orientation in General
3.2.2. AspectJ – a short Introduction
4. The Experiment
4.1. Experiment Buildup
4.1.1. The planned Course of the Experiment
4.1.2. The Questionnaire
4.1.3. The Hard- and Software used in the Experiment
4.1.3.1. The Application used for Editing
4.1.3.2. The Development Environment and Hardware
4.1.4. The Tasks
4.1.4.1. Task 1: The Logging Task
4.1.4.2. Task 2: The Parameter Null Task
4.1.4.3. Task 3: The Synchronization Task
4.1.4.4. Task 4: The Check Player Argument Task
4.1.4.5. Task 5: The Notify Observers Task
4.1.4.6. Task 6: The Observers Null Check Task
4.1.4.7. Task 7: The Refresh Constraint Task
4.1.4.8. Task 8: The Label Value Check Task
4.1.4.9. Task 9: The Current Level Check Task
4.2. Implementation of the Experiment
5. Experiment Analysis and Results
5.1. Data Processing and Preparation
5.2. Data analysis and presentation
5.2.1. The Logging Task
5.2.2. The Parameter Null Task
5.2.3. The Synchronization Task
5.2.4. The Player Check Task
5.2.5. The Notify Observers Task
5.2.6. The Observers Null Task
5.2.7. The Refresh Constraint Task
5.2.8. The Label Value Check Task
5.2.9. The Level Check Task
5.2.10. Results of the Development Times and Descriptive Statistics
5.2.11. Statistical Tests on the Results
5.2.12. Doing a Break-Even Analysis
5.2.13. Participant Grouping
6.1. Thoughts on Validity
6.1.1. Internal Validity
6.1.2. External Validity
6.2. General Discussion
7. Related Work
9.1. The questionnaire (German)
9.2. The aspect-oriented task descriptions (German)
9.3. The object-oriented task descriptions (German)
This book describes and evaluates a controlled experiment on aspect-oriented programming. The experiment was designed to make the overall performance of developers using object-orientation and aspect-orientation comparable across a number of tasks. The primary focus of the experiment lay on whether aspect-orientation has a positive impact on development time when comparing its performance with that of the object-oriented approach on the same task.
Figure 4-1 – A simplified UML Diagram of the target application
Figure 5-1 – A sample diagram showing progress per time in seconds depending on parameter count of methods for participant 16 and the Logging task
Figure 5-2 – A sample diagram showing overall unaltered progress of participant 16 on the Logging task
Figure 5-3 – A sample diagram showing progress per time in seconds depending on parameter count of methods for participant 16 and the Logging task after data cleansing
Figure 5-4 – A sample diagram showing overall unaltered progress of participant 16 on the Logging task after data cleansing
Figure 5-5 - A sample diagram showing overall unaltered progress of participant 16 on the Parameter Null task
Figure 5-6 - A sample diagram showing overall unaltered progress of participant 16 on the Synchronization task
Figure 5-7 - A sample diagram showing overall unaltered progress of participant 16 on the Player Check task
Figure 5-8 - A sample diagram showing overall unaltered progress of participant 16 on the Notify Observers task
Figure 5-9 - A sample diagram showing overall unaltered progress of participant 16 on the Observers Null task, using the copy and paste approach
Figure 5-10 / Figure 5-11 – A sample diagram showing overall unaltered progress of participant 2 on the Observers Null task, using the find and replace approach
Figure 5-12 - A sample diagram showing overall unaltered progress of participant 16 on the Refresh Constraint task
Figure 5-13 - A sample diagram showing overall unaltered progress of participant 16 on the Label Value Check task
Figure 5-14 – A sample diagram showing overall unaltered progress of participant 16 on the Level Check task
Figure 5-15 - Sum of times for the object-oriented and aspect-oriented solutions and the ratio
Figure 5-16 – A scatter diagram showing development times for all participants using aspect-orientation on the larger three tasks
Figure 5-17 - A scatter diagram showing development times for all participants using object-orientation on the larger three tasks
Figure 5-18 - A scatter diagram showing development times for all participants using aspect-orientation on the smaller six tasks
Figure 5-19 - A scatter diagram showing development times for all participants using object-orientation on the smaller six tasks
Figure 5-20 – Normalized diagram for participant 16 and the logging task
Figure 5-21 – Scatter diagram for the distribution of the break-even values for all tasks
Figure 9-1 – Page 1 of the questionnaire
Figure 9-2 – Page 2 of the questionnaire
Table 5-1 – Regression equations and determination coefficients for all participants for the Logging task using unaltered data
Table 5-2 – Regression equations and determination coefficients for all participants for the Logging task using cleansed data
Table 5-3 - Regression equations and determination coefficients for all participants for the Parameter Null task using unaltered data and cleansed data
Table 5-4 - Regression equations and determination coefficients for all participants for the Synchronization task using unaltered data and cleansed data
Table 5-5 - Regression equations and determination coefficients for all participants for the Player Check task using unaltered data
Table 5-6 - Regression equations and determination coefficients for all participants for the Notify Observers task using unaltered data
Table 5-7 - Regression equations and determination coefficients for all participants for the Observers Null task using unaltered data
Table 5-8 - Regression equations and determination coefficients for all participants for the Refresh Constraint task using unaltered data
Table 5-9 - Regression equations and determination coefficients for all participants for the Label Value Check task using unaltered data
Table 5-10 - Regression equations and determination coefficients for all participants for the Level Check task using unaltered data
Table 5-11 – Development times for every participant and every task, measured in seconds
Table 5-12 – Differences between aspect-oriented and object-oriented development times
Table 5-13 – Descriptive statistics for the development times of all participants
Table 5-14 – Ratios of aspect-oriented to object-oriented development time
Table 5-15 – Statistical functions on the ratio data
Table 5-16 – Results of the statistical tests for normal distribution
Table 5-17 – Results of the Wilcoxon-test
Table 5-18 – Break-even values for all participants and all tasks
Table 5-19 – Descriptive Statistics on the break-even values
Table 5-20 – Answers to the questionnaire
Table 5-21 – Three tables showing the developer categorizations
Table 5-22 – Descriptive statistics values for the participant groupings
Table 5-23 – Results for normal distribution tests for the advanced group
Table 5-24 – Results for the paired T-tests on the tasks of the advanced group that are supposedly normally distributed
Table 5-25 – Results for the Wilcoxon-test on all tasks for the advanced group
Table 5-26 - Results for normal distribution tests for the average group
Table 5-27 - Results for the paired T-tests on the tasks of the average group that are supposedly normally distributed
Table 5-28 - Results for the Wilcoxon-test on all tasks for the average group
Table 5-29 - Results for normal distribution tests for the novice group
Table 5-30 - Results for the paired T-tests on the tasks of the novice group that are supposedly normally distributed
Table 5-31 - Results for the Wilcoxon-test on all tasks for the novices group
Listing 3-1 – An example aspect in AspectJ Syntax
Listing 4-1 – An example statement of the Logging Task using only object-oriented programming
Listing 4-2 – A possible AspectJ Solution for the Logging Task
Listing 4-3 – The null-reference check using pure object-orientation
Listing 4-4 – A possible AspectJ solution for the null-reference checks
Listing 4-5 – The code template for the synchronization task
Listing 4-6 – The AspectJ solution for the synchronization task
Listing 4-7 – An example object-oriented solution for the check player argument task
Listing 4-8 – Possible AspectJ solution for the check player argument task
Listing 4-9 – An object-oriented example for the notify observers task
Listing 4-10 – An AspectJ solution for the notify observers task
Listing 4-11 - An object-oriented example for the observers null check task
Listing 4-12 - An AspectJ solution for the observers null check task
Listing 4-13 – Possible solution for the refresh constraint task using object-orientation
Listing 4-14 - An AspectJ solution for the refresh constraint task
Listing 4-15 - Possible solution for the label value check task using object-orientation
Listing 4-16 - An AspectJ solution for the label value check task
Listing 4-17 - Possible solution for the current level check task using object-orientation
Listing 4-18 - An AspectJ solution for the current level check task
“Time is money” is what many people say when considering the role of time in modern business. Many cost prediction models and actual prices for various products (especially where human service and creativity play a major role) are based on time as a central factor. This applies to large parts of the software industry, where the time developers need to finish the software is a critical factor in almost any software project. The technique of aspect-oriented programming could present a possibility to save a large amount of time, especially for redundant code in larger projects.
This work introduces a controlled experiment that analyzes the development costs, in terms of additional development time, caused by the specification of redundant code in the object-oriented programming language Java in comparison to the aspect-oriented programming language AspectJ, which is essentially an add-on to the Java language. Chapter two describes the motivation and the background of the study, arguing for the importance of empirical research in this area. Chapter three summarizes some historical background on empiricism and the methods associated with today’s empirical research (such as controlled experiments), and gives a short introduction to aspect-oriented programming and its AspectJ implementation. Chapter four explains the experiment’s setup, the specific tasks, and their different solutions in aspect-oriented and object-oriented programming. Chapter five presents the experiment’s results, from the raw data to the aggregated forms, together with the statistical and exploratory analyses performed on the data. After a discussion of the experiment’s results and thoughts on validity in chapter six, chapter seven summarizes related work in the field of studies on aspect-orientation. Finally, chapter eight concludes this work.
Aspect-oriented programming has been introduced as a major step forward in software development. Some even proposed that
"AOP could be the next step in the steady evolution of the OO paradigm, or perhaps it will evolve into a completely new paradigm independent of OO",
making it possible to solve difficult problems affecting larger parts of software in less time than it would take using exclusively procedural or object-oriented programming. Judging from its conception, it seems especially well suited to avoiding redundant, repetitive lines of code that appear in different places in a program. The study described in this work therefore focused on redundant code and repetitive tasks, where progress can easily be measured in time.
All of this sounds like a great achievement for the field of software engineering, but as so often in the young discipline of computer science, little has been done to strengthen these claims with empirical studies and controlled experiments. Regrettably, this applies not only to aspect-orientation as a relatively new technique, but to nearly all software engineering methods and techniques. In software engineering, empirical research makes up only a very small share of all research work and publications. This can be seen as a dangerous development given the great impact that computer science and business information systems have on everyday life; Basili and Tichy both argue in that direction. Today, vehicles, weapons, machines, airplanes, spacecraft, and nearly every other electrical or mechanical system is somehow controlled, observed, or at least influenced by software. Modern technology has grown exponentially and become almost completely software-reliant in such a short time span (about 30 years) that it might be considered extremely careless not to validate software engineering methods, techniques, languages, and products through empirical research, not to mention the potential costs and loss of time that could be avoided by separating good methods from bad (even if results hold only for a given situation and context). Software development is still a process dominated by human creativity, and personnel costs therefore make up the bulk of development budgets (attempts at using industrial methods for producing software still have to prove their worth). A lot of time and money is wasted on projects that are never finished or finished long after their deadline, on faulty software and the correction of those faults, and on software that is not accepted by the client or user, not to mention the potential risks and costs of security leaks and their exploitation.
Unfortunately, there have been nearly no studies on the cost of writing redundant code either.
So, to make a bold claim of our own: there is a huge lack of empirical research in software engineering to back up many of the assumptions made about methods and techniques. One reason might be that empirical research always consumes time and resources, usually more than literature study and argument collection do. Even for the object-oriented paradigm, which has been around for quite some time now, only a handful of studies and experiments have tried to strengthen the claim that object-orientation is the better-suited approach to programming. The rationale behind this claim is often the statement that it fits the way the human brain works. Most people would agree that this argument makes sense, and it does. Still, more studies backing it up are needed.
Hence, this experiment was motivated by all of the above grievances and the need for more empirical research. The main motivation was to support or falsify the assumption that aspect-orientation decreases the time needed to write redundant code, but in a rather exploratory manner, not concentrating on rejecting or confirming a single hypothesis. One of the central questions was: when is aspect-orientation the technique that provides a time advantage, when is the plain object-oriented approach the better choice, and is it even possible to tell at all? Going a little further, an attempt was made to find a break-even point depending on task complexity and redundancy, which could serve as a rough predictor for anyone trying to decide which technique to use in a given situation; this is why a very fine-grained and basic approach was taken. Another part of the motivation was that, as a side effect of the overall data collection procedure, a large data set on the process and time consumption of writing redundant code in object-oriented Java was gathered, which can be analyzed and used in further studies.
There are also other facets of development that aspect-orientation has an impact on. For example, aspects provide clean design separation and modularization of the so-called crosscutting concerns, parts of the software that cannot be cleanly encapsulated into one single class or component (because they crosscut multiple components, and code needs to be inserted at these spots for the crosscutting to work as intended). Modularizing crosscutting concerns can enhance the readability, maintainability, and flexibility of applications. But aspects also add considerable complexity to a program, especially when aspects get tangled into each other, more than one aspect weaves itself into the same place in the code, and on top of that the dynamic features of aspects are used, which can be very confusing to write and understand. This can make debugging applications crosscut by many aspects a burden or even nearly impossible. Some of the studies mentioned in the related work chapter are concerned with these facets of aspect-orientation, and (Highley, et al., 1999) provides a critical analysis of the aspect-oriented approach in general. This work, however, focuses only on the removal of code redundancy and the associated time savings.
The term empiricism is derived from a Greek word meaning “based on experience” and is used in two different contexts: the philosophical and the scientific.
In philosophy, empiricism is a theory of knowledge holding that any real knowledge has to be gathered by experiencing and perceiving things with the human senses. This ultimately implies that human beings cannot gain knowledge and ideas out of themselves just by “thinking”, which is the position of a rival school of philosophy. These two schools of thought are called empiricism (knowledge arises from experience) and rationalism (knowledge can be gained by reasoning). In fact, many would agree that neither exists in the real world in its most radical interpretation, and as a discussion of them is far outside the scope of this thesis, they will not be covered further. Interested readers might want to consult the philosophical literature on both matters.
The more common use of the term empiricism is in the modern sciences, with a meaning derived from and closely related to the philosophical one. It characterizes a research methodology that tries to strengthen or falsify (it can never prove) scientific theories and hypotheses using experiments or observations of reality, which ultimately leads back to the philosophical notion of learning by experience and observation. Empirical research is especially widespread in the natural and social sciences, as well as in medicine and pharmaceutical research.
This chapter’s information is primarily based on the work of Prechelt (and the summary by Josupeit-Walter), who concentrated his evaluation and description of empirical research methods on those methods fit to be used in software engineering. The social sciences use other, more detailed categorizations for these methods and are in many respects more precise in their description and implementation. Nevertheless, this work will stay with the methods presented by Prechelt. Most of these research methods are based on observation, either through qualitative manual approaches or automated quantitative ones.
Case studies are commonly used to evaluate tools or methods in a controlled but typical environment. They mostly consist of one or more tasks or use cases the participants have to fulfill using certain methods or tools, sometimes to evaluate a single method or tool and sometimes to compare two different approaches. The results can be qualitative or quantitative, depending on the implementation of the study. In contrast to controlled experiments (explained in more detail later), case studies do not try to keep all variable factors constant, which means that any factor influencing the outcome can have a great impact on the results, making them hard to reproduce and their causes hard to trace because of the many interdependencies. All the same, case studies are useful for a rough and efficient evaluation of certain approaches or technologies.
To summarize, their advantages lie in their ease of implementation and broad applicability, while their drawback lies in sometimes unreliable results that may not yield clear insights or conclusions.
A special case of case studies are benchmarks, which are standardized case studies that produce only quantitative data. Benchmarks are commonly used to create results that can be compared directly to other implementations of the same benchmark (as in many hardware tests, where a benchmark program is run on computers with different hardware to make their overall performance or the performance of a single device comparable) or to create template data representing a threshold that certain devices, tools, or methods have to meet for quality testing.
Unlike case studies, benchmarks, or controlled experiments, field studies are carried out in the field, meaning in real industrial software projects, and are designed as an accompanying observation of specific factors, processes, behaviors, or situations in these projects. Field studies try not to influence the observed projects and processes, so as not to distort the results. Their advantage is that they can be used to observe even complex situations that would be too time-consuming or complicated to recreate in a controlled environment, or unaltered real-life situations. But these advantages lead to a drawback comparable to that of case studies, only much stronger: their results are very hard to generalize into a hypothesis, and the description of the whole field study can become very complex because of the complex circumstances.
Of all research methods, controlled experiments exert the most control over the experiment’s implementation and circumstances. In the optimal case, only one or a few factors are left open as variables; everything else is kept constant. These experiments are the hardest to design and implement, as they require thorough planning and disciplined execution. The setup is generally well defined, and only those factors that are the focus of the experiment’s observations are kept variable. Controlled experiments therefore have a high validity and can easily be reproduced many times (when following the exact setup and implementation), producing reliable and comparable results. Their biggest disadvantage is their large cost in time and work for preparation, buildup, and evaluation.
Polls (sometimes called interviews) are easy to implement and evaluate, and can therefore be used to gather data from a large number of people. However, their results are hard to interpret and can be very unreliable, as every submitted answer is a completely subjective rating or evaluation by the specific person. And as people tend to hold a large number of different opinions and views, polls tend to show a large variability in the range of answers, especially when open questions are used. Poll results tend to be more reliable the more persons are included and when the range of questioned persons is representative of certain groups (like an even ratio of software developers and project managers).
Meta studies take a number of other studies on a certain topic and evaluate whether there are differences or similarities between these studies’ results, whether certain gaps remain in the overall data, and whether questions remain unanswered. They tend to require less work than most other studies, as anyone carrying out a meta study mainly has to do investigative work on the results and essays of the original experimenters. Their most important aspect is the resulting summary of a possibly large field of other works, which other scientists can use to get an overview of the current state of research on the topic.
As stated above, software development is still a process dominated by human creativity. Its mechanisms therefore still elude complete understanding and are very hard to measure and capture through observation and data collection. As Prechelt has written in his book, there are many specifics of empirical research and controlled experiments to be considered in software engineering. He states that for many controlled experiments, the most important variable to control is the variation among participants’ approaches and performance on a problem (which is especially large for programming or modeling tasks). The wide range of experience with modeling, programming, programming languages, and development tools among software developers, which lies in the very nature of software development and still eludes any quantitative measurement, makes the results of empirical experiments generally hard to predict or interpret. Empirical research in software engineering is still at its beginning, and researchers are still far from being able to handle and control these variations in a way that would produce very reliable results in most situations. These are some of the reasons why computer scientists tend to stay away from empirical research (Tichy summarizes the 16 excuses most used to avoid experimentation in software engineering in his paper already cited above: (Tichy, 1997)).
Even the object-oriented approach, which is currently the most used in industry and academics, has not been validated thoroughly. Some even argue that there are still problems in the idea of object-orientation.
Object-oriented programming has enjoyed an amazing triumphal procession in the past years, both in the academic world and in industrial development practice. It still has its drawbacks, however, and is sometimes not sufficient for solving a specific set of problems. In 1997, Kiczales and his colleagues published the paper (Kiczales, et al., 1997), which introduced aspect-oriented programming as a modified approach to solving specific problems in software development. The idea was that certain functional parts of software crosscut an application, or at least large parts of it (like logging, the most well-worn example for aspect-orientation, tracing, security, synchronization, or other functions). Today, these specific functions are commonly called crosscutting concerns. Using the object-oriented approach, developers had a hard time implementing these crosscutting concerns seamlessly in their programs, because their nature prevented a clean separation of concerns and ultimately led to tangled and difficult-to-read code (imagine an example where each method call that had to be logged for debugging purposes needed a separate logging statement inserted manually into the code). Such code was also very tough to maintain and change, as the calls to these functions were scattered across the rest of the code, and one central code change to solve the problem (encapsulation of functionality usually being one of the main benefits of object-orientation) was not possible. All these drawbacks led to the idea of aspect-oriented programming, where the so-called aspects replace the tangled and scattered fragments in the old code with one central, isolated point, effectively modularizing the crosscutting concern in the code. For the logging example, this aspect could be given the task of calling the logging functionality (which might be a method of a class providing this function) on all occasions the developer wants it to.
This makes it easy for the developer to have every single method call in the program logged without having to insert logging statements into the code itself.
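To make the scattering concrete, here is a minimal, hypothetical Java sketch of the object-oriented situation described above (the Account and Logger classes and their method names are invented for illustration): every method that should be traced must repeat the same logging call by hand.

```java
// Hypothetical example: the logging concern is scattered across the class,
// because each traced method needs its own manually inserted logging statement.
class Account {
    private double balance;

    void deposit(double amount) {
        Logger.log("Account.deposit called");   // manually inserted logging statement
        balance += amount;
    }

    void withdraw(double amount) {
        Logger.log("Account.withdraw called");  // the same kind of statement, duplicated again
        balance -= amount;
    }

    double getBalance() {
        return balance;
    }
}

class Logger {
    // A stand-in for whatever class provides the logging functionality.
    static void log(String message) {
        System.out.println(message);
    }
}
```

Adding or changing the logging behavior means touching every one of these scattered call sites, which is exactly the maintenance problem that aspects are meant to remove.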
AspectJ is the implementation of the aspect-oriented add-on for the well-known and widely used programming language Java. It provides language constructs and mechanisms that implement the aspect-oriented crosscutting concerns using aspects. The paper (Kiczales, et al., 2001) presents an overview of AspectJ and its constructs, as well as how to use them. Some of these mechanisms will be explained here, but for a deeper introduction, the AspectJ Tutorial of the AspectJ project team provides a more sophisticated resource.
AspectJ introduces the aspect language construct into Java, which is defined in practically the same way as a standard Java class and provides the language unit that encapsulates and modularizes crosscutting functionality. Kiczales and his colleagues differentiate the crosscutting mechanisms of AspectJ into dynamic crosscutting, meaning the ability to run additional code at well-defined points during program execution, and static crosscutting, meaning the extension of existing types with new operations. There is some confusion concerning the meaning of the terms dynamic and static crosscutting, as some seem to use them in different contexts for different concepts, while others use them the way Kiczales and his colleagues did. So it cannot be clearly stated whether these definitions of dynamic and static crosscutting are deprecated today. For simplicity, this AspectJ introduction focuses on the concept originally meant by dynamic crosscutting: running aspect code at well-defined points in the program.
These well-defined points are commonly called join points and can represent different occasions during program execution, like a method call, an exception being thrown, or the reading of a class field. A set of join points is called a pointcut (like all method calls in a certain class). Another interpretation is to say that a join point is an incarnation of a certain pointcut definition, somewhat like the relation of an object to a class. The code that is to be executed at the join points of a pointcut is called advice. The following code example shows the different concepts and their syntax in AspectJ:
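The original listing is not reproduced in this excerpt. Based on its description in the following paragraph, the aspect could look roughly as follows; the package name, field, helper method, and printed message are assumptions, while the aspect keyword, the methodCall pointcut over public String-returning methods of MyClass, and the before advice are taken from the text:

```aspectj
package example;

// A sketch reconstructing the example aspect from its textual description.
public aspect ExampleAspect {

    // Aspects may contain fields and methods, just like classes.
    private int callCount = 0;

    private void countCall() {
        callCount++;
    }

    // Pointcut matching all calls to public methods of MyClass that return a
    // String; wildcards leave the method name and parameter list unconstrained.
    pointcut methodCall() : call(public String MyClass.*(..));

    // Advice: runs before every join point matched by the pointcut.
    before() : methodCall() {
        countCall();
        System.out.println("About to call a String-returning method of MyClass");
    }
}
```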
Listing 3-1 – An example aspect in AspectJ Syntax
At first glance, the syntax of the aspect frame does not differ much from that of a class: it has a name and a package, can have imports, and is defined the same way as a class (it also gets its own source file), except for the keyword aspect. An aspect can also have fields and methods, like the aspect in the example. These are the similarities between an aspect and a class, but the aspect-specific constructs are more interesting. There is a pointcut definition called methodCall, which hooks itself onto all calls to public methods of the class MyClass that return a String. The method name, number of parameters, and parameter types are not relevant for this pointcut, as wildcards are used in the definition. Pointcuts use constructs called primitives, which represent certain points in program execution, like the call of a method (in the example, the call primitive is used). These wildcards and primitives make AspectJ a very powerful tool; interested readers should refer to the AspectJ documentation for more on wildcards and primitives as well as further AspectJ syntax. Hence, the pointcut stands for a set of join points in the program, specifically all calls to methods of the class MyClass that return a String. The last construct in the example is the advice, which represents the code to be executed on certain occasions. In this case, a line is printed to the console every time before a method matching the methodCall pointcut is executed, as indicated by the before keyword. Alternatively, the after or around keywords can be used to run the advice code after or instead of the original method call.
The process of combining the object-oriented and the aspect-oriented code is called weaving: the weaver inserts calls to the aspect's advice code at the designated join points. This definition of weaving is rather rudimentary, but it suffices in the context of this work; a more exact description and explanation can be found in (Hanenberg, 2005), which is generally a good source of information on aspect-oriented concepts and background.
The experiment was planned as a set of small programming tasks, organized as two main assignments that each consisted of the same nine tasks. The nine tasks were designed so that each had different variables influencing its editing; these are described separately for each task below. Each assignment had to be solved once using the plain object-oriented approach (as the control technique) and once using only an aspect. All participants therefore completed all nine tasks twice: some started with the object-oriented assignment, some with the aspect-oriented one, and after finishing they had to solve the same nine tasks using the other technique. The starting assignment was chosen randomly, but it was ensured that an equal number of participants started with each. When using object-oriented programming, participants were not allowed to use an aspect; when using aspect-oriented programming, they were not allowed to modify the original code and could only modify their aspect. The object-oriented tasks could only be solved by writing heavily redundant code, as can be seen in the descriptions of the specific tasks below. All participants, regardless of which assignment they were given first, took part in a short AspectJ tutorial of about 60 to 90 minutes in which they could try out some example exercises to get used to AspectJ syntax and handling. The tutorial gave a purely practical introduction to those parts and concepts of AspectJ needed to solve the tasks in the study, completely ignoring all other AspectJ concepts. All in all, the experiment was planned to take each participant approximately five hours to complete, but no hard time limit was set.
Before beginning the study, all participants had to fill out an electronic questionnaire in which they self-assessed their skills and experience, mostly on a scale from one to six. It included questions about general programming skills and about Java, Eclipse, and AspectJ experience, and it contained free-text fields in which participants could note any additional experience with other programming languages or techniques, such as logic programming. The questionnaire data was intended to be used later to find out whether specific previous knowledge influenced a participant's progress in the study. The complete original questionnaire can be found in the appendix.
A self-written game designed explicitly for the experiment was used as the target application the participants had to work on. It consists of nine classes in three packages, with 110 methods, eight constructors, and 37 instance variables, written in pure Java (version 1.6), with each class in its own source file. The game has a small graphical user interface with an underlying event-based model-view-controller architecture (see Figure 4-1) and was modified in large parts to fit the experiment's requirements.
illustration not visible in this excerpt
Figure 4-1 – A simplified UML Diagram of the target application
Essentially, the application is a simple text-based labyrinth game in which the player navigates through a labyrinth towards the goal field while trying to avoid traps hidden on some fields. It has a JFrame called LabyrinthFrame, which acts as the view in the model-view-controller architecture and is responsible for every kind of feedback the user gets. Its registered listener for all events and actions is the GameManager class, which also acts as the core of the game, controlling the main game logic. The GameManager is called by the GameRunner class, which contains the main method and uses the FileAccess class, which provides functionality for reading level data from files. The GameObject, Player, Trap, GameLevel, and LevelField classes represent the underlying model of the game.
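The event-based wiring between view and controller can be pictured with a minimal sketch. The class names below follow the thesis, but the interfaces and methods are illustrative assumptions (the original code is not part of this excerpt, and the real view is a Swing JFrame):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical listener interface standing in for Swing's event listeners.
interface GameListener {
    void onAction(String command);
}

// The view: forwards user actions to its registered listeners.
class LabyrinthFrame {
    private final List<GameListener> listeners = new ArrayList<>();

    void addListener(GameListener l) {
        listeners.add(l);
    }

    // Simulates a user interaction, e.g. a button press in the real JFrame.
    void userPressed(String command) {
        for (GameListener l : listeners) {
            l.onAction(command);
        }
    }
}

// The controller and core of the game: reacts to events from the view.
class GameManager implements GameListener {
    String lastCommand;

    public void onAction(String command) {
        lastCommand = command; // the actual game logic would run here
    }
}
```

The design mirrors the description above: the view knows nothing about the game logic and only dispatches events, while the GameManager reacts to them.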
A set of Lenovo ThinkPad laptops was used for the study, and every participant was provided with a mouse to allow efficient work. All machines had the same configuration, consisting of the integrated development environment and the task workspaces, batch files for starting the tasks (used to simplify starting Eclipse with different workspaces), a database for the data logged automatically during the experiment, and a screen-logging application.
Eclipse Ganymede was used as the integrated development environment, along with the AspectJ Development Tools plugin. For logging purposes, a self-written development-trace plugin (the detailed topic of the following work: (Josupeit-Walter, 2009)) had been added to Eclipse. It wrote the current workspace status into a database every time the user paused while editing the code (about two seconds of inactivity were needed to trigger it), saved the workspace, or ran a test case. That way, a large base of evaluation data could be generated for every task each participant edited. Additionally, the screen logger installed on each computer was started before the participants began working on the tasks. Its videos served as redundant data and allowed a more detailed evaluation in case the data in the database was not sufficient to reconstruct what a participant did.
These additional programs created some adverse conditions for the participants: the regular workspace saving noticeably increased the latency of Eclipse, so its performance and reaction time were significantly reduced during the experiment, and the mouse pointer flickered due to the screen logger. But as this was the case for every participant, conditions remained equal and the effect applies to all data.
All tasks had to be done in the order given. Participants received sheets of paper with the problem description and instructions for each task, along with some general hints. Each task was started via a named batch file, which launched Eclipse with the specific workspace for that task. Each workspace also contained a set of test cases the participants were instructed to use to evaluate their progress on the current task. That way they could find out whether they were done with the current task and where errors or unfinished work remained. Only when all test cases succeeded were they allowed to begin the next task. After each task, Eclipse had to be closed and the following task started using the next batch file. The first three, larger tasks were expected to make it easy to achieve better results using aspect-orientation, while for the smaller tasks it was predictable that plain object-orientation would generally achieve better results.
In the following sections, all nine tasks are explained in more detail:
The first (and also the largest) task of the study was to add a logging feature to the application: each method (but no constructor) had to be supplemented with a call to the logger at the beginning of its body. For this, a corresponding logger interface was provided, which expected the name of the class in which the method was declared, the method name, the return type, an array of the actual parameter instances, and an array of type String with the formal parameter names. An example of a log call is given below in Listing 4-1.
illustration not visible in this excerpt
Listing 4-1 – An example statement of the Logging Task using only object-oriented programming
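Since Listing 4-1 is not reproduced in this excerpt, the following is an illustrative sketch of such a manual log call. The interface described above is only known by its expected arguments, so every name here (GameLogger, ConsoleLogger, LoggedPlayer, the output format) is an assumption for illustration:

```java
import java.util.StringJoiner;

// Hypothetical stand-in for the logger interface provided to participants.
interface GameLogger {
    void log(String className, String methodName, String returnType,
             Object[] actualParams, String[] formalParamNames);
}

class ConsoleLogger implements GameLogger {
    // Builds a line such as "LoggedPlayer.moveTo : void (x=3, y=5)".
    static String format(String className, String methodName, String returnType,
                         Object[] actualParams, String[] formalParamNames) {
        StringJoiner params = new StringJoiner(", ", "(", ")");
        for (int i = 0; i < formalParamNames.length; i++) {
            params.add(formalParamNames[i] + "=" + actualParams[i]);
        }
        return className + "." + methodName + " : " + returnType + " " + params;
    }

    public void log(String className, String methodName, String returnType,
                    Object[] actualParams, String[] formalParamNames) {
        System.out.println(format(className, methodName, returnType,
                                  actualParams, formalParamNames));
    }
}

class LoggedPlayer {
    private static final GameLogger LOGGER = new ConsoleLogger();

    // In the object-oriented variant, every method must begin with such a
    // hand-written call -- the source of the heavy redundancy mentioned above.
    void moveTo(int x, int y) {
        LOGGER.log("LoggedPlayer", "moveTo", "void",
                   new Object[] { x, y }, new String[] { "x", "y" });
        // ... actual movement logic would follow here ...
    }
}
```

With an aspect, a single before advice could supply the same call for every method, which is exactly the contrast the task was designed to measure.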
 (Highley, et al., 1999), p. 2
 (Basili, et al., 2007) and (Basili, 2007), for example. Basili and his colleagues have written many publications on the role and use of empirical studies and experimentation in software engineering
 (Tichy, 1997)
Most people use the word paradigm as a replacement for technique, approach, or concept, even though this differs from the word's original meaning: an underlying concept or principle agreed upon by everyone in the field. The word can still be used in software engineering, but people should be aware that it is often used in this looser, strictly incorrect sense.
 The paper of Josupeit-Walter (Josupeit-Walter, 2008) summarizes most of the studies on object-orientation. None of them really managed to back everything that is said about its benefits.
 The whole field of theory of knowledge concerns itself with knowledge, its nature, how it is gained, etc. See http://en.wikipedia.org/wiki/Epistemology
 (Prechelt, 2001)
 (Josupeit-Walter, 2008)
 Interesting books to read for anyone interested in detailed information on empirical research methods: (Bortz, et al., 2002), which is very thorough and precise, (Rogge, 1995), which gives a good summary and short explanation or (Christensen, 1977), which is more focused on the experimental approach.
Open questions leave the answer to the respondent; closed questions give a list of concrete answers or a range of ordinal ratings the respondent has to choose from (some allow only one answer to be picked, some allow more than one).
 See (Josupeit-Walter, 2008) or (Prechelt, 2001) for a summary of empirical research on object-orientation.
 Two papers which go into that direction are (Jones, 1994) and the follow-up (Steidley, 1994)
 As of the time of this work, AspectJ was available as version 1.6.2
 Their web site can be found at http://www.eclipse.org/aspectj/
 (Kiczales, et al., 2001), p.3
 (Hanenberg, 2005) provides a thorough description of dynamic and static features.
 More information on the Model-View-Controller design pattern can be found here: http://en.wikipedia.org/wiki/Model-view-controller
Specifically, they were R60 models
 The database server was PostgreSQL in version 8.3: http://www.postgresql.org/
 CamStudio Version 2.00: http://camstudio.org/
 Which is Eclipse version 3.4 and can be found on http://www.eclipse.org
 Version 1.6.0 of the AJDT was used: http://www.eclipse.org/ajdt/
 Which were written for JUnit, a tool for running test cases: http://www.junit.org/