Sunday, March 27, 2011

Essential Software Test Design. Torbjörn Ryber


It does not matter how nice-looking your documents are, or how good your plans are, if the tests do not measure up.

Seven context-driven school rules:
1. The value of each practice depends on its context.
2. There is good practice in its own context, but there is no best practice which applies at all times.
3. People who work together are the most important part of how every project fits together.
4. The project is developed over time in a way which often cannot be predicted.
5. The product is a solution: if the problem is not solved, the product does not work.
6. Good software testing is a challenging intellectual process.
7. Only through judgement and skill, put into practice together throughout the whole project, can we do the right thing at the right time in order to test our products effectively.


1. Test Design: An exploratory tester is first and foremost a test designer.
Anyone can design a test accidentally. The excellent exploratory tester is able
to craft tests for systematic exploration of the product. Test design is a big
subject, of course, but one way to approach it is to consider it a questioning
process. To design a test is to craft a question for a product that will reveal vital
information.
To get better at this: Go to a feature (something reasonably complex, like
the table formatting feature of your favorite word processor) and ask thirty
questions about it that you can answer, in whole or part, by performing some
test activity, by which I mean some test, set of tests, or task that creates tests.
Identify that activity along with each question. If you can’t find thirty questions
that are substantially different from each other, then perform a few tests and
try again. Notice how what you experience with the product gives you more
questions.
Another aspect of test design is making models. Each model suggests different
tests. There are lots of books on modeling (you might try a book on UML, for
instance). Pick a kind of model, such as a flowchart, data flow diagram, truth
table, or state diagram, and create that kind of model representing a feature you
are testing. When you can make such models on napkins or whiteboards in two
minutes or less, confidently and without hesitation, you will find that you also
are more confident at designing tests without hesitation.
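
For instance, a state diagram can be as small as a table of transitions. Below is a minimal Python sketch; the document-editing feature it models is invented for illustration (none of these states come from the book), but it shows how every arc in such a model suggests at least one test.

```python
# A state diagram expressed as data: (state, event) -> next state.
# The feature modelled here (a hypothetical file-editing dialog) is
# invented purely to illustrate the technique.
transitions = {
    ("closed", "open"): "clean",
    ("clean", "edit"): "dirty",
    ("clean", "close"): "closed",
    ("dirty", "save"): "clean",
    ("dirty", "close"): "prompt_save",
    ("prompt_save", "confirm"): "closed",
    ("prompt_save", "cancel"): "dirty",
}

def transition_tests(transitions):
    """Yield one test idea per arc: drive the feature into `state`,
    trigger `event`, and check that it ends up in `expected`."""
    for (state, event), expected in transitions.items():
        yield f"from '{state}', trigger '{event}', expect '{expected}'"

for idea in transition_tests(transitions):
    print(idea)
```

Covering every arc once is only the weakest state-model criterion; sequences of two or more transitions give stronger test suites.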


2. Careful Observation: Excellent exploratory testers are more careful observers
than novices, or for that matter, experienced scripted testers. The scripted tester
need only observe what the script tells him to observe. The exploratory tester
must watch for anything unusual, mysterious, or otherwise relevant to the
testing. Exploratory testers must also be careful to distinguish observation from
inference, even under pressure, lest they allow preconceived assumptions to
blind them to important tests or product behavior.
To get better at this: Try watching another tester test something you’ve
already tested, and notice what they see that you didn’t see first. Notice how
they see things that you don’t and vice versa. Ask yourself why you didn’t see
everything. Another thing you can do is to videotape the screen while you test,
or use a product like Spector that takes screen shots every second. Periodically
review the last fifteen minutes of your testing, and see if you notice anything
new.
Or try this: describe a screen in writing to someone else and have them draw
the screen from your description. Continue until you can draw each other’s
screens. Ideally, do this with multiple people, so that you aren’t merely getting
better at speaking to one person.
To distinguish observation from inference, make some observations about a
product, write them down, and then ask yourself, for each one, did you actually
see that, or are you merely inferring it? For instance, when I load a file in
Microsoft Word, I might be tempted to say that I witnessed the file loading, but
I didn’t really. The truth is I saw certain things, such as the appearance of words
on the screen that I recall being in that file, and I take those things to be evidence
that the file was properly loaded. In fact, the file may not have loaded correctly
at all. It might be corrupted in some way I have not yet detected.
Another way to explore observation and inference is to watch stage magic.
Even better, learn to perform stage magic. Every magic trick works in part by
exploiting mistakes we make when we draw inferences from observations. By
being fooled by a magic trick, then learning how it works, I get insight into how
I might be fooled by software.

3. Critical Thinking: Excellent exploratory testers are able to review and explain
their logic, looking for errors in their own thinking. This is especially important
when reporting the status of a session of exploratory tests, or investigating a
defect.
To get better at this: Pick a test that you recently performed. Ask what
question was at the root of that test. What was it really trying to discover? Then
think of a way that you could get a test result that pointed you in one direction
(e.g. program broken in a certain way) when reality is in the opposite direction
(e.g. program not broken, what you’re seeing is the side effect of an option
setting elsewhere in the program, or a configuration problem). Is it possible for
the test to appear to fail even though the product works perfectly? Is it possible
for the product to be deeply broken even though the test appeared to pass? I can
think of three major ways this could happen: inadequate coverage, inadequate
oracle, or tester error.

4. Diverse Ideas: Excellent exploratory testers produce more and better ideas
than novices. They may make use of heuristics to accomplish this. Heuristics
are mental devices such as guidelines, generic checklists, mnemonics, or rules of
thumb. The Satisfice Heuristic Test Strategy Model (http://www.satisfice.com/
tools/satisfice-tsm-4p.pdf) is an example of a set of heuristics for rapid
generation of diverse ideas. James Whittaker and Alan Jorgensen’s «17 attacks»
is another (see How to Break Software).
To get better at this: Practice using the Heuristic Test Strategy Model. Try it
out on a feature of some product you want to test. Go down the lists of ideas
in the model, and for each one think of a way to test that feature in some way
related to that idea. Novices often have a lot of trouble doing this. I think that’s
because the lists work mainly by pattern matching on past experience. Testers see
something in the strategy model that triggers the memory of a kind of testing or a
kind of bug, and then they apply that memory to the thing they are testing today.
The ideas in the model overlap, but they each bring something unique, too.

5. Rich Resources: Excellent exploratory testers build a deep inventory of tools,
information sources, test data, and friends to draw upon. While testing, they
remain alert for opportunities to apply those resources to the testing at hand.
To get better at this: Go to a shareware site, such as Download.com, and
review the utilities section. Think about how you might use each utility as a test
tool. Visit the Web sites related to each technology you are testing and look for
tutorials or white papers. Make lots of friends, so you can call upon them to
help you when you need a skill they have.


6. Self-Management: Excellent exploratory testers manage the value of their
own time. They must be able to tell the difference between a dead end and a
promising lead. They must be able to relate their work to their mission and
choose among the many possible tasks to be done.
To get better at this: Set yourself a charter to test something for an hour.
The charter could be a single sentence like «test error handling in the report
generator». Set an alarm to go off every fifteen minutes. Each time the alarm
goes off, say out loud why you are doing whatever you are doing at that exact
moment. Justify it. Say specifically how it relates to your charter. If it is
off-charter, say why you broke away from the charter and whether that was a
well-made decision.
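
The alarm part of the exercise is easy to automate. A trivial sketch, assuming you run it in a terminal beside your test session (the interval and session length are simply the ones named in the exercise):

```python
import time

SESSION_MINUTES = 60    # the one-hour charter from the exercise
INTERVAL_MINUTES = 15   # stop and justify your current activity this often

elapsed = 0
while elapsed < SESSION_MINUTES:
    time.sleep(INTERVAL_MINUTES * 60)
    elapsed += INTERVAL_MINUTES
    # "\a" rings the terminal bell; swap in any notification you prefer.
    print(f"\a{elapsed} min: say out loud what you are doing "
          f"and how it relates to the charter")
```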

7. Rapid Learning: Excellent exploratory testers climb learning curves more
quickly than most. Intelligence helps, of course, but this, too, is a matter of skill
and practice. It’s also a matter of confidence: having faith that no matter how
complex and difficult a technology looks at first, you will be able to learn what
you need to know to test it.
To get better at this: Go to a bookstore. Pick a computer book at random.
Flip through it in five minutes or less, then close the book and answer these
questions: what does this technology do, why would anyone care, how does it
work, and what’s an example of it in action? If you can’t answer any of those
questions, then open the book again and find the answer.

8. Status Reporting: Tap an excellent exploratory tester on the shoulder at any
time and ask, «What is your status?» The tester will be able to tell you what was
tested, what test techniques and data were used, what mechanisms were used to
detect problems if they occurred, what risks the tests were intended to explore,
and how that related to the mission of testing.
To get better at this: Do a thirty-minute testing drill. Pick a feature and test
it. At the end of exactly thirty minutes, stop. Then without the use of notes, say
out loud what you tested, how you would have recognized a problem, what
problems you found, and what obstacles you faced. In other words, make a test
report. As a variation, give yourself ten minutes to write down the report.

I have memorized the Heuristic Test Strategy Model; when I am asked this question, I can list thirty-three different ways to test. I say to myself «CITESTDSFDPOCRUSPICSTMPLFSDFSCURR» and then expand each letter.

What kinds of specifics affect ET? Here are some of them:
• the mission of the test project
• the mission of this particular test session
• the role of the tester
• the tester (skills, talents, and preferences)
• available tools and facilities
• available time
• available test data and materials
• available help from other people
• accountability requirements
• what the tester’s clients care about
• the current testing strategy
• the status of other testing efforts on the same product
• the product itself
- its user interface
- its behavior
- its present state of execution
- its defects
- its testability
- its purpose
• what the tester knows about the product
- what just happened in the previous test
- known problems with it
- past problems with it
- strengths and weaknesses
- risk areas and magnitude of perceived risk
- recent changes to it
- direct observations of it
- rumors about it
- the nature of its users and user behavior
- how it’s supposed to work
- how it’s put together
- how it’s similar to or different from other products
• what the tester would like to know about the product

Let’s talk about very exploratory testing. Freestyle exploratory testing fits in any of the following situations:
• You need to provide rapid feedback on a new product or feature.
• You need to learn the product quickly.
• You have already tested using scripts, and seek to diversify the testing.
• You want to find the single most important bug in the shortest time.
• You want to check the work of another tester by doing a brief independent
investigation.
• You want to investigate and isolate a particular defect.
• You want to investigate the status of a particular risk, in order to evaluate the
need for scripted tests in that area.

Freestyle exploratory testing aside, ET fits anywhere that testing is not completely dictated in advance. This includes all of the above situations, plus at least these additional ones:
• Improvising on scripted tests.
• Interpreting vague test instructions.
• Product analysis and test planning.
• Improving existing tests.
• Writing new test scripts.
• Regression testing based on old bug reports.
• Testing based on reading the user manual and checking each assertion.

Dynamic Test Design Techniques:
• Data: equivalence partitions, boundary value analysis, domain testing (see the sketch after this list)
• Flows: work processes, use cases for test cases
• Event based: state graphs
• Logic: decision trees, decision tables
• Combinational analysis: all pairs, elementary comparisons
• Risk-based testing: risk, defect guessing, taxonomies, heuristics, attack patterns
• Advanced testing: scenario-based, soap opera, time cycles, data cycles
• For the developer: control flow, data flow
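
To make the first item concrete, here is a minimal Python sketch of equivalence partitioning and boundary value analysis. The input field, its 1–100 range, and the validity rule are all invented for illustration:

```python
# Hypothetical rule under test: an input field accepting integers 1..100.
LOW, HIGH = 1, 100

def is_valid(value):
    """Stand-in for the real feature; the rule itself is an assumption."""
    return LOW <= value <= HIGH

# Equivalence partitions: one representative per class is enough,
# because any member of a class should behave like any other.
partitions = {
    "below range (invalid)": LOW - 50,
    "inside range (valid)": (LOW + HIGH) // 2,
    "above range (invalid)": HIGH + 50,
}

# Boundary value analysis: test at each boundary and one step either
# side of it, where off-by-one defects tend to hide.
boundaries = sorted({LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1})

for label, value in partitions.items():
    print(f"partition {label}: input {value} -> valid={is_valid(value)}")
for value in boundaries:
    print(f"boundary: input {value} -> valid={is_valid(value)}")
```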




Black, White and Grey:
Black Box Techniques – Behaviour-Based Testing
White Box Techniques – Structural Testing
Combined: Grey Box Testing

British standard BS 7925-2 covers component testing. In the latest official version, the following techniques are listed:
1. Equivalence partitioning
2. Boundary value analysis
3. State diagrams
4. Decision tables
5. Use case testing
6. Classification tree method

They deal additionally with the following approaches:
7. Defect guessing
8. Exploratory testing
For structural testing, the standard deals with code coverage in a number of
variants, where you cover all of the following (a small statement-versus-branch
illustration follows this list):
1. Statements
2. Branches
3. Loops
4. A combination of branches and conditions (multiple combination decision
coverage)
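
The gap between these coverage levels is easy to see in code. Below is a small Python sketch with an invented function (tools such as coverage.py measure this for real): one test can execute every statement yet still miss the false outcome of each decision.

```python
def discount(amount, is_member):
    """Invented example with two independent decisions."""
    rate = 0.0
    if is_member:        # decision A
        rate = 0.1
    if amount > 100:     # decision B
        rate += 0.05
    return amount * (1 - rate)

# One test executes every statement (both conditions true)...
assert discount(200, True) == 200 * 0.85

# ...but branch coverage also requires the false outcome of each decision:
assert discount(50, True) == 50 * 0.9      # B false
assert discount(200, False) == 200 * 0.95  # A false
assert discount(50, False) == 50.0         # both false
```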

Cem Kaner, Professor at the Florida Institute of Technology and author of several books, teaches ten elementary techniques in his black-box testing course:
1. Function testing
2. Specification-based testing
3. Domain testing
4. Risk-based testing
5. Scenario testing
6. Regression testing
7. Stress testing
8. User testing
9. State machines
10. Volume testing


Peter Zimmerer from Siemens in Germany presented what he calls the Test Design Poster at EuroSTAR in December 2005.

1. Requirements on the system and desired quality
2. Requirements on the tests in terms of strength and depth
3. Test strategy: which tests are performed in which parts of the
chain of development
4. Existing basic test data
5. The system to be tested
6. Techniques for software and hardware
7. Compatible tool support

Black box
Interfaces, data, models
Standards, norms, formal specifications, requirements
Criteria, functions, interfaces
Requirement-based with traceability matrix
Use case based (activity diagram, sequence diagram)
CRUD: Create, Read, Update, Delete (database operations)
Scenario tests, soap opera tests
User profiles: frequency and priority/criticality (reliability)
Statistical testing (Markov chains)
Random (ape testing)
Design by contract (built-in self-test)
Equivalence groups
Classification trees
Domain tests, category partition
Boundary value analysis
Special values
Test catalogue for input data values, input data fields
State graphs
Cause-effect graphs
Decision tables (see the sketch after this list)
Syntax tests (grammar)
Combinatorial testing (pairwise tests)
Evolution tests
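
To make the decision-table entry concrete: a short Python sketch that keeps the table as data and derives one test case per rule. The login policy is invented for illustration; a real table would come from the requirements and either cover all condition combinations or mark don't-care entries explicitly.

```python
# Decision table as data: each row is one rule.
# (valid_user, valid_password, account_locked) -> expected outcome
decision_table = [
    ((True,  True,  False), "logged in"),
    ((True,  True,  True),  "rejected: account locked"),
    ((True,  False, False), "rejected: wrong password"),
    ((False, False, False), "rejected: unknown user"),
]

def test_cases(table):
    """Yield one concrete test case per rule of the table."""
    for (user_ok, password_ok, locked), expected in table:
        yield (f"user_ok={user_ok}, password_ok={password_ok}, "
               f"locked={locked} -> expect: {expected}")

for case in test_cases(decision_table):
    print(case)
```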

Grey box
Dependencies/relationships between classes, objects, methods, functions
Dependencies/relationships between components, services, applications, systems
Communication behaviour (dependency analysis)
Tracking (passive testing)
Protocol based (sequence diagram)

White box
- Control flows (code-based, model-based)
-- Branches
-- Conditions
-- Interfaces
-- Static measurement
-- Cyclomatic complexity
-- Measurement (e.g. Halstead)
- Data flows
-- Read/write
-- Define/use

Positive, valid cases
- Normal, expected construction

Negative, invalid cases
- Unauthorised construction
- Defect management
- Exception management

Defect-based
- Risk-based
- Systematic defect analysis (FMEA: Failure Mode and Effects Analysis)
- Defect catalogues, bug taxonomies (Beizer, Kaner)
- Attack patterns (Whittaker)
- Defect models which depend on the technology and nature of the system
- Defect patterns: standard patterns or from root cause analysis
- Defect reports (previous)
- Error guessing
- Test patterns (Binder), question patterns (Q-patterns: Vipul Kocher)
- Ad hoc, intuitive, based on experience
- Exploratory testing, heuristics
- Mutation tests (see the sketch after this list)
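
Mutation testing in particular is worth a tiny illustration: plant one small defect (a "mutant") and check whether the suite notices. The example below is hand-rolled with an invented function; real mutation tools generate and run the mutants automatically.

```python
def is_adult(age):
    return age >= 18

def is_adult_mutant(age):
    return age > 18      # single mutation: >= replaced by >

def suite_passes(impl):
    """Run a small test suite against a given implementation."""
    tests = [(17, False), (18, True), (30, True)]  # note the boundary case
    return all(impl(age) == expected for age, expected in tests)

assert suite_passes(is_adult)             # suite passes on the original
assert not suite_passes(is_adult_mutant)  # boundary test "kills" the mutant
```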


Regression testing
- Parts which have changed
- Parts affected by the changes
- Risky, highly prioritised, critical parts
- Parts which are often changed
- Everything

Advanced Testing
- Cyclical tests (data cycles, time cycles)
- Several test cases run together (parallelism etc.)
- Exploratory testing (ideas which are difficult to think of in advance)
- Scenario testing (how will it look in reality?)
- Soap opera testing (all the strange and extreme combinations you can think of, and then more still)
