COM3529 — SOFTWARE TESTING & ANALYSIS
Assignment, Spring 2021
Automated Tool Support for Logic Coverage of Java Code
1 Overview
The aim of this assignment is to develop automated tool support for logic testing of Java methods.
The aim of this tool, or framework, is to minimise the amount of effort required by a human tester
when faced with the task of manually writing a test suite for some arbitrary Java method that they
might be given. The more automated support your tool/framework can provide, the higher the mark
you will get.
A very basic submission that is worthy of a pass mark, for example, might involve implementing
Random Testing for Condition Coverage (i.e., a very basic coverage criterion), along with a scheme
for manually instrumenting conditions. (This would be a simple extension of what we did in the
Week 5 lecture for randomly generating test cases for Branch Coverage of the classify method of
the Triangle class.)
A more advanced submission might seek to provide automated tool support for a more advanced
coverage criterion like MCDC. It might automatically parse Java methods to obtain branch predicates
and figure out the test requirements needed by such a criterion. It might automatically
instrument the Java code, as opposed to relying on a tester manually having to insert logging
statements. It might apply a more advanced test generation method, for example Search-Based
Testing. A more advanced submission might even automatically write out the Java code statements
needed to produce a JUnit test suite.
Section 4 goes into further detail about the requirements behind this assignment, and the different
aspects that you are expected to tackle.
Before we go into further detail, there are two key things you should know about this assignment:
• You can use third-party Java libraries as part of your assignment. For example, I wouldn’t
expect you to write a full-featured parser for Java, if you need one. (Alternatively, you may want
to write a very basic parser yourself that provides simple and limited support; it is up to you.) If
you can source a package to do certain key tasks (that might normally take months to program
yourself), then you can use it. (This is a Software Engineering module, after all.)
However, you must include any Java packages or tools “as is” (i.e., without modifying them) —
for example as a .jar file — and declare it as a dependency of your tool. (This is quite easy
to achieve in Maven, which we have been using throughout the course.) That is, you must not
copy anybody’s code and use it as part of your own tool, as though you had written it yourself.
This would amount to plagiarism, of course.
• You can work as an individual or in teams of up to four people. Marks will be awarded
on the basis of what you have achieved in proportion to the number of people in your team.
The advantage of working in a team is that you will be able to tackle more aspects of the
project, while also being free to discuss the different aspects of the problem it entails and give
each other help, without fear of falling foul of academic rules regarding assignment collusion.
Unless you specify otherwise (e.g., because of a large discrepancy in the distribution of the
work among team members), the overall mark you achieve for the assignment will be awarded
to each member of the group.
2 Submission of Your Work
Your submission should take the form of a GitHub repository. Your repository should contain all the
Java code needed to run and operate your tool, along with example Java methods on which your tool
will run, and for which it successfully works. (Note that there is no choice of language here: this
assignment must be completed in Java.)
Your GitHub repository should contain a README.md file in the root directory, written in Markdown.
The README.md should contain:
• Details of your team, listing each person involved, their Sheffield email address, and the work
they contributed to the project.
• A section overviewing the support your tool provides and what automation it delivers. This
section should further include a detailed list of all of your tool’s different features and how they
would assist a human tester in performing logic testing in practice.
• A section detailing how to install and run your tool (or how to run the different parts of your tool,
if different stages are required). This section should include a list of libraries or utilities your
tool is dependent on, and how to get hold of them. (A simple way to handle this requirement
is to limit yourself to packages that can be automatically obtained via Maven (https://maven.apache.org/)
or Gradle (https://gradle.org/), and use one of those build automation tools.)
• One or more worked examples that demonstrate how to use your tool in practice.
The deadline for this assignment is Monday, 26 April 2021, 5pm (Week 9).
By this date, you should have emailed me, Phil McMinn (p.mcminn@sheffield.ac.uk), with the URL
of your repository.
You must not make any further changes to your repository after this date. If you do, your work will
be subject to standard Department of Computer Science lateness penalties, i.e. 5% per working day
following the submission date, up to a maximum of five days — after which you will score zero.
3 Help and Questions
The lab session in Week 5 will be devoted to any immediate questions about the assignment.
You may ask further questions about the assignment via the module discussion board, where a
member of the teaching team will respond to your query.
4 Detailed Requirements
There are many aspects to writing an automated test case generation tool/framework. Your submission
will consider each of the following aspects, but may not need to address them completely. In
other words, your submission may only provide limited automation for some aspect, requiring a human
tester to do the rest of the work. The more automation you can provide “out of the box”, however,
the higher the mark you will achieve.
In this section, we will consider the calculate method of the BMICalculator class, provided in
the uk.ac.shef.com3529.practicals package of the online code repository supporting this module
(available at https://github.com/philmcminn/com3529-code) and listed here:
 1 package uk.ac.shef.com3529.practicals;
 2
 3 public class BMICalculator {
 4
 5     public enum Type {
 6         UNDERWEIGHT,
 7         NORMAL,
 8         OVERWEIGHT,
 9         OBESE;
10     }
11
12     public static Type calculate(double weightInPounds, int heightFeet, int heightInches) {
13         double weightInKilos = weightInPounds * 0.453592;
14         double heightInMeters = ((heightFeet * 12) + heightInches) * .0254;
15         double bmi = weightInKilos / Math.pow(heightInMeters, 2.0);
16
17         if (bmi < 18.5) {
18             return Type.UNDERWEIGHT;
19         } else if (bmi >= 17.5 && bmi < 25) {
20             return Type.NORMAL;
21         } else if (bmi >= 25 && bmi < 30) {
22             return Type.OVERWEIGHT;
23         } else {
24             return Type.OBESE;
25         }
26     }
27 }
4.1 Analysis of the Method Under Test
To start at the beginning, your tool needs to know what conditions are present in the method under
test and their form. For example, the calculate method has branch predicates on lines 17, 19, and
21. The predicate on line 17 only has one condition, while the predicates on lines 19 and 21 both
consist of two conditions joined by the && logical operator. Furthermore, the branch predicate
on line 19 won’t be encountered unless the branch predicate on line 17 evaluates to false, while the
branch predicate on line 21 won’t be encountered unless both of the predicates on lines 17 and 19
evaluate to false.
At a basic level, the tool could simply expect this information to be input manually by the tester.
This could be provided via some additional file, or the tester could use your framework as an API (i.e.,
a package of useful Java methods for supporting automated testing), writing the required information
as Java code — in the form of parameters to methods provided by your framework. Of course, this
wouldn’t be providing much automated support at all, since the burden of work is on the human. But
it provides a basis by which to start — your framework will need some data structures in which to
store all of this information.
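As an illustration of what such data structures might look like (the class and field names here are purely illustrative, not a required design), a branch predicate and its conditions could be recorded as follows:

import java.util.List;

public class BranchPredicate {

    private final int lineNumber;             // e.g. 19 for the second predicate in calculate
    private final List<Condition> conditions; // e.g. the conditions bmi >= 17.5 and bmi < 25
    private final String operator;            // e.g. "&&"

    public BranchPredicate(int lineNumber, List<Condition> conditions, String operator) {
        this.lineNumber = lineNumber;
        this.conditions = conditions;
        this.operator = operator;
    }

    public int getLineNumber() { return lineNumber; }
    public List<Condition> getConditions() { return conditions; }
    public String getOperator() { return operator; }
}

class Condition {

    private final int id;        // the ID used by the instrumentation (see Section 4.3)
    private final String source; // the condition as written, e.g. "bmi < 25"

    public Condition(int id, String source) {
        this.id = id;
        this.source = source;
    }

    public int getId() { return id; }
    public String getSource() { return source; }
}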
Later on, you may choose to provide further automation in this area, for example by parsing the
Java method. You could write your own simple parser that is capable of detecting if, while, and for
statements, and dissecting the conditions within them. Or, you may wish to enlist the support of a
full-blown parsing tool, such as the JavaParser library (https://javaparser.org/).
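For instance, a minimal sketch using JavaParser (assuming version 3.x of the library, declared as a Maven or Gradle dependency; the class name and file path are illustrative) could extract each branch predicate and split it into its individual conditions like this:

import com.github.javaparser.StaticJavaParser;
import com.github.javaparser.ast.CompilationUnit;
import com.github.javaparser.ast.expr.BinaryExpr;
import com.github.javaparser.ast.expr.Expression;
import com.github.javaparser.ast.stmt.IfStmt;

import java.io.File;
import java.io.FileNotFoundException;
import java.util.ArrayList;
import java.util.List;

public class PredicateExtractor {

    public static void main(String[] args) throws FileNotFoundException {
        CompilationUnit cu = StaticJavaParser.parse(
                new File("src/main/java/uk/ac/shef/com3529/practicals/BMICalculator.java"));

        for (IfStmt ifStmt : cu.findAll(IfStmt.class)) {
            int line = ifStmt.getBegin().map(position -> position.line).orElse(-1);
            List<Expression> conditions = new ArrayList<>();
            collectConditions(ifStmt.getCondition(), conditions);
            System.out.println("Branch predicate at line " + line + ": " + conditions);
        }
    }

    // Splits a branch predicate into its individual conditions by descending through && and ||.
    private static void collectConditions(Expression expr, List<Expression> conditions) {
        if (expr instanceof BinaryExpr) {
            BinaryExpr binary = (BinaryExpr) expr;
            if (binary.getOperator() == BinaryExpr.Operator.AND
                    || binary.getOperator() == BinaryExpr.Operator.OR) {
                collectConditions(binary.getLeft(), conditions);
                collectConditions(binary.getRight(), conditions);
                return;
            }
        }
        conditions.add(expr); // a leaf condition, e.g. bmi < 18.5
    }
}

Run against the BMICalculator listing above, this should report the predicates on lines 17, 19, and 21, with the latter two split into two conditions each.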
4.2 Generating Test Requirements
Once you’ve figured out the conditions in the method under test, you can figure out the test requirements
needed by various logic coverage criteria. This is easy for a basic criterion like Condition
Coverage, but more technical for a coverage criterion like MCDC (in restricted or correlated form).
For the branch predicate on line 17, there is only one condition, so this is trivial. All logic coverage
criteria for this predicate are the same as Branch Coverage — two test requirements, where the
predicate/condition evaluates to true in one requirement, and false in the other.
Things get more complicated for the branch predicates on lines 19 and 21, however, which are
composed of two conditions. Generating the test requirements for Condition Coverage involves
requirements that exercise each individual condition as both true and false. MCDC is more complicated,
and you will need to consider whether you will support either the restricted or correlated variant (or
both). Of course, you can support multiple coverage criteria for further marks!
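As a sketch of how such requirements could be derived automatically, the following (assuming the conditions have already been extracted, so that the predicate can be evaluated over abstract truth values) enumerates the truth table of a predicate and finds, for each condition, the Restricted MCDC independence pairs: pairs of assignments that differ only in that condition and change the outcome of the predicate.

import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class MCDCRequirements {

    // Finds the Restricted MCDC independence pairs for one (major) condition: pairs of
    // truth-value assignments that differ only in that condition and flip the predicate.
    public static List<boolean[][]> independencePairs(Predicate<boolean[]> predicate,
                                                      int numConditions, int major) {
        List<boolean[][]> pairs = new ArrayList<>();
        for (int assignment = 0; assignment < (1 << numConditions); assignment++) {
            boolean[] values = toValues(assignment, numConditions);
            if (values[major]) {
                continue; // consider each unordered pair only once
            }
            boolean[] toggled = values.clone();
            toggled[major] = true;
            if (predicate.test(values) != predicate.test(toggled)) {
                pairs.add(new boolean[][] { values, toggled });
            }
        }
        return pairs;
    }

    private static boolean[] toValues(int assignment, int numConditions) {
        boolean[] values = new boolean[numConditions];
        for (int i = 0; i < numConditions; i++) {
            values[i] = ((assignment >> i) & 1) == 1;
        }
        return values;
    }

    public static void main(String[] args) {
        // The predicate on line 19 of calculate, (bmi >= 17.5) && (bmi < 25), treated abstractly.
        Predicate<boolean[]> line19 = values -> values[0] && values[1];
        for (int major = 0; major < 2; major++) {
            System.out.println("Condition " + major + ": "
                    + independencePairs(line19, 2, major).size() + " independence pair(s)");
        }
    }
}

Condition Coverage requirements can be derived in the same setting simply by requiring each condition to take the value true at least once and false at least once.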
4.3 Instrumentation
One challenge your tool needs to solve is how to know which conditions have been covered, i.e.
whether they were executed or not, and if they were, whether they evaluated to true or false.
At the very least, you will need to provide a scheme whereby a tester can themselves insert the
logging statements that the tool needs in order to run.
We did this for the classify method of the Triangle class in the Week 5 lecture, for example. For
considering individual conditions, you might want to do something cleverer. For example, instrumentation
of line 19 of the calculate method of the BMICalculator class could take the form:
... if (logCondition(3, bmi >= 17.5) && logCondition(4, bmi < 25)) { ...
where logCondition is a method provided by your framework that takes the ID number of the condition
as its first parameter and the condition itself as its second, logs the result, and returns the boolean
result of evaluating the condition so that the program preserves its normal functionality:
public boolean logCondition(int id, boolean condition) {
    boolean result = condition;
    // ... log the id somewhere, along with the result,
    // thereby storing whether the condition was executed
    // as true or false, for computing coverage later on...
    return result;
}
If you’re planning on incorporating a Search-Based method into your tool (see below), the instrumentation
will likely need to log the values of variables so that your tool can also compute fitness. For
example:
public boolean logGreaterThanOrEqualsCondition(int id, double leftOperand, double rightOperand) {
    double fitness = rightOperand - leftOperand;
    boolean result = false;
    if (fitness <= 0) {
        result = true;
    } else {
        fitness += K; // K is a constant penalty added whenever the condition is false
    }
    // ... log the id somewhere, along with the result,
    // thereby storing whether the condition was executed
    // as true or false, for computing coverage later on...
    // ... also log fitness ...
    return result;
}
Or, you may wish to have the tool insert those instrumentation statements itself. This could be
achieved in conjunction with the JavaParser library, mentioned earlier.
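A sketch of how this could be done with JavaParser (again assuming version 3.x; the file path is illustrative, and logCondition is assumed to be statically imported, or otherwise in scope, in the instrumented class) is:

import com.github.javaparser.StaticJavaParser;
import com.github.javaparser.ast.CompilationUnit;
import com.github.javaparser.ast.expr.BinaryExpr;
import com.github.javaparser.ast.expr.Expression;
import com.github.javaparser.ast.stmt.IfStmt;

import java.io.File;
import java.io.FileNotFoundException;

public class Instrumenter {

    private int nextId = 0;

    public static void main(String[] args) throws FileNotFoundException {
        CompilationUnit cu = StaticJavaParser.parse(
                new File("src/main/java/uk/ac/shef/com3529/practicals/BMICalculator.java"));

        Instrumenter instrumenter = new Instrumenter();
        for (IfStmt ifStmt : cu.findAll(IfStmt.class)) {
            ifStmt.setCondition(instrumenter.instrument(ifStmt.getCondition()));
        }
        System.out.println(cu); // the instrumented source, to be written out and compiled
    }

    // Wraps each individual condition in a call to logCondition(id, condition), descending
    // through && and || so that only the leaf conditions are wrapped.
    private Expression instrument(Expression expr) {
        if (expr instanceof BinaryExpr) {
            BinaryExpr binary = (BinaryExpr) expr;
            if (binary.getOperator() == BinaryExpr.Operator.AND
                    || binary.getOperator() == BinaryExpr.Operator.OR) {
                binary.setLeft(instrument(binary.getLeft()));
                binary.setRight(instrument(binary.getRight()));
                return binary;
            }
        }
        // Build the wrapping call by parsing its source text.
        return StaticJavaParser.parseExpression(
                "logCondition(" + (nextId++) + ", " + expr + ")");
    }
}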
4.4 Test Data Generation
Your tool needs to provide some level of test data generation support. This could be random, as per
the example in the Week 5 lecture. However, you may wish to provide a more advanced mechanism,
using Search-Based techniques. You could use the Alternating Variable Method (e.g., via the
publicly available open-source AVMf — https://avmframework.org), or by incorporating a simple
Evolutionary Algorithm of your own.
(Hint: The AVMf provides an example of test data generation that you could study to see what ideas
you could borrow for this assignment.)
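A minimal sketch of configurable random generation for the calculate method (the bounds used here are illustrative assumptions, not requirements, and a fuller tool would let the tester configure them per parameter) might look like this:

import java.util.Random;

import uk.ac.shef.com3529.practicals.BMICalculator;

public class RandomTestDataGenerator {

    public static void main(String[] args) {
        Random random = new Random();
        for (int i = 0; i < 10; i++) {
            // Illustrative bounds for each parameter of calculate.
            double weightInPounds = 50 + random.nextDouble() * 350; // 50 to 400 lbs
            int heightFeet = 3 + random.nextInt(5);                 // 3 to 7 feet
            int heightInches = random.nextInt(12);                  // 0 to 11 inches

            BMICalculator.Type result =
                    BMICalculator.calculate(weightInPounds, heightFeet, heightInches);
            System.out.println(weightInPounds + ", " + heightFeet + ", "
                    + heightInches + " -> " + result);
        }
    }
}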
4.5 Computing Coverage
For a given test case, and given the logging information provided by your instrumentation, you need
to be able to figure out which test requirements were covered by the test case, and hence its coverage level.
This would contribute to the coverage level for a series of test cases that together maximise coverage,
provided as an automatically generated test suite for a tester to use.
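A sketch of how this could be computed, assuming the instrumentation records each condition evaluation as a (condition ID, outcome) pair and that Condition Coverage test requirements are represented in the same way, is:

import java.util.HashSet;
import java.util.Set;

public class CoverageCalculator {

    // Test requirements and covered outcomes, both stored as "id:outcome" strings, e.g. "3:true".
    private final Set<String> requirements = new HashSet<>();
    private final Set<String> covered = new HashSet<>();

    public void addRequirement(int conditionId, boolean outcome) {
        requirements.add(conditionId + ":" + outcome);
    }

    // Called by the instrumentation (e.g. from logCondition) whenever a condition is evaluated.
    public void record(int conditionId, boolean outcome) {
        covered.add(conditionId + ":" + outcome);
    }

    // The proportion of test requirements met by the test cases logged so far.
    public double coverageLevel() {
        if (requirements.isEmpty()) {
            return 1.0;
        }
        Set<String> met = new HashSet<>(requirements);
        met.retainAll(covered);
        return (double) met.size() / requirements.size();
    }

    // The requirements still to be covered, which could be reported back to the tester.
    public Set<String> uncoveredRequirements() {
        Set<String> uncovered = new HashSet<>(requirements);
        uncovered.removeAll(covered);
        return uncovered;
    }
}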
4.6 Outputting Test Cases
Your tool needs to output the test cases it generates in some form or other. This could take the form
of a very basic series of System.out.println statements that write information to the console. To
provide comprehensive tool support though, a tester would want a JUnit test suite. This may only
be partial — containing calls to the method under test with test data only, requiring the human tester
to fill in the assertions. Or, it could have a go at providing those assertions as well by observing
the outputs of the method given the inputs. For example, suppose the values 182, 6, and 0 were
generated for the calculate method:
assertTrue(BMICalculator.calculate(182, 6, 0) == Type.NORMAL);
For methods that return integers or doubles, you could use assertEquals instead.
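A sketch of how such a test case could be written out automatically is given below; note that the expected value is simply the output observed when running the method on the generated inputs, so the assertion records current behaviour rather than independently checked correctness.

import uk.ac.shef.com3529.practicals.BMICalculator;

public class JUnitWriter {

    // Writes a JUnit test method for one generated input, using the observed output of
    // calculate as the expected value (as in the assertTrue example above). The generated
    // code assumes the enclosing test class imports BMICalculator, org.junit.Test, and
    // org.junit.Assert.assertTrue.
    public static String writeTestCase(int id, double weightInPounds,
                                       int heightFeet, int heightInches) {
        BMICalculator.Type observed =
                BMICalculator.calculate(weightInPounds, heightFeet, heightInches);
        return "@Test\n"
             + "public void test" + id + "() {\n"
             + "    assertTrue(BMICalculator.calculate(" + weightInPounds + ", "
             + heightFeet + ", " + heightInches + ") == BMICalculator.Type." + observed + ");\n"
             + "}\n";
    }

    public static void main(String[] args) {
        System.out.println(writeTestCase(1, 182, 6, 0));
    }
}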
4.7 Examples
You need to devise some examples on which to try your test tool/framework, and present them as
evidence of what your tool does as part of your submission. You can use the BMICalculator as one
of these examples.
4.8 Documentation
Finally, you should document what your tool does, and how to use it via a README.md file provided in
the root directory of your GitHub repository. See Section 2 for more details on what to do here.
5 Indicative Marking Scheme
Your submission will be graded according to the following criteria.
A grade will be awarded to each criterion. The grades assigned over all criteria will then be aggregated
depending on the number of people in your team. For example, an individual working on their
own would not be expected to hit as many high levels of achievement as a group of four to obtain the
same overall final grade.
For each criterion (automation aspect), grades are awarded on a five-point scale (Pass, 3rd, 2:2, 2:1, 1st); the indicative levels of achievement are as follows:

1. Analysis of the Method Under Test
• Requires significant manual input: 3rd
• Requires some manual input, partial automated support: 2:1
• Method under test fully parsed, with conditions and structure of predicates extracted and analysed: 1st

2. Test Requirement Generation
• Simple coverage criterion implemented (e.g., Condition Coverage): 3rd
• Advanced coverage criterion implemented (e.g., Restricted or Correlated MCDC): 2:1
• Multiple criteria implemented: 1st

3. Instrumentation
• Supplies a simple API that is applied within code blocks: 3rd
• Supplies an advanced API that is applied within conditions: 2:1
• Method under test is automatically parsed and instrumented: 1st

4. Test Data Generation
• Very basic random number generation: Pass
• Configurable random number generation (e.g., input parameters can be configured with upper and lower bounds): 3rd
• Advanced random number generation (e.g., can be used to generate non-numerical inputs randomly, such as strings and other types, like objects): 2:2
• Advanced random number generation (e.g., can use example inputs as the basis of seeds, similar to fuzzing), or applies a search-based technique “out of the box” (e.g., the AVMf): 2:1
• Applies own search-based method (e.g., implemented own evolutionary algorithm) or similarly advanced technique: 1st

5. Coverage Level Computation and Reporting
• Coverage level computed for simple criterion as implemented for (2) above: 2:2
• Coverage level computed and individual uncovered elements reported: 1st

6. Test Suite Output
• Simple output of inputs to the command line: 2:2
• Writes out JUnit Java code that can be compiled separately and run: 1st

7. GitHub Repo and README.md
• Problems with repo (e.g., files missing) and/or instructions deficient: 2:2
• Everything works and can be set up from the repo, according to instructions supplied in the README.md: 2:1
• README.md especially well polished; installation and running the tool worked flawlessly: 1st