Hi Scott, I just reviewed this chapter. I have feedback on every page. Can we have a call or meet in person to go over the feedback? I can help clear up a number of disconnects, including a change to the definition of Exploratory Testing at the very least.
Please let me know. Cheers.
I have just published an update to this excerpt. The big change is the addition of the Choose Testing Types decision point and discussion of the Testing Quadrant. Also fixed a number of minor bugs.
Perhaps add an alternative flow to TFP. I took this approach from the DA training back to the team to help the developers out. We pulled together the QA and the developer, with myself as the BDD analyst, and compiled developer tests:
The developer drives, in collaboration with the QA/BDD Analyst, the solution requirements (via acceptance tests) and the design (via developer tests) based on the requested functionality.
Defects were initially reduced by half, cutting costs substantially!
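To make the flow concrete, here is a minimal sketch of the kind of developer test that can come out of such a session. All names and the business rule are hypothetical, assuming a Python/pytest-style setup; the point is only that the test is written first, from an acceptance criterion agreed with the QA/BDD analyst, and the production code is then written to make it pass:

```python
# Hypothetical production code, written AFTER the tests below were agreed.
def apply_discount(price, customer_is_member):
    """Members get 10% off; negative prices are rejected."""
    if price < 0:
        raise ValueError("price must be non-negative")
    return round(price * 0.9, 2) if customer_is_member else price

# Acceptance-test-style checks; the Given/When/Then is captured in the name.
def test_member_gets_ten_percent_discount():
    assert apply_discount(100.0, customer_is_member=True) == 90.0

def test_non_member_pays_full_price():
    assert apply_discount(100.0, customer_is_member=False) == 100.0
```

The acceptance tests double as the agreed definition of the behaviour, which is what made the three-way collaboration pay off.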
Thanks Scott. I had a look at the updated version. It is shaping up nicely.
Keep my comments in mind for future reference on Test Intensity, if you think about having another update to the test strategy process goal.
Just to reiterate:
I still maintain that "path coverage" is a test coverage technique used to achieve a certain test intensity (see http://www.leovanderaalst.nl/Test%20design%20techniques%20were%20not%20invented%20to%20bully%20testers.pdf for an explanation of how it is linked), alongside "decision coverage", "equivalence classes", "pairwise testing", "orthogonal arrays", "boundary value analysis", "operational profiles", "load profiles", "right and fault paths" and "checklists" (according to TMap Next).
These techniques will then be used to apply test design techniques such as "decision tables", "data combination tests", "elementary comparison", "error guessing", "exploratory tests", "data cycle tests", "process cycle tests" and "semantic and syntactic tests", according to TMap Next.
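For readers unfamiliar with the techniques named above, here is a minimal illustrative sketch (my own example, not taken from TMap Next) of two of them, equivalence classes and boundary value analysis, applied to a made-up eligibility rule:

```python
# Hypothetical rule under test: ages 18..65 inclusive are eligible.
def is_eligible(age):
    return 18 <= age <= 65

# Equivalence classes: one representative value per class is enough
# (below range, in range, above range).
equivalence_cases = {16: False, 40: True, 70: False}

# Boundary value analysis: test each boundary and its neighbours,
# where off-by-one defects tend to hide.
boundary_cases = {17: False, 18: True, 65: True, 66: False}

for age, expected in {**equivalence_cases, **boundary_cases}.items():
    assert is_eligible(age) == expected, f"age {age}"
```

The same rule could also be attacked with a decision table or pairwise combinations; the chosen mix of techniques is what determines the achieved test intensity.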
I am able to explain this to anyone that is interested in the above.
Version 3 of the excerpt is now available.
Thanks very much for the feedback. It took a while, but we've acted on most of it. Later today we will post an updated version of this excerpt, but for now I thought I would respond with a summary of the changes we've made.
Here are our thoughts on the feedback.
Aldo: (going in the same numerical order as your points on April 8)
I made some small changes to my posting from yesterday. I have also attached an updated "How" we discussed over Skype.
I don't think there is a need for a role of BDD Analyst as a staffing option.
My view is that BDD is a skill/technique that can be applied by a Team Member such as a Business Analyst/Proxy Product Owner or an Automation Tester. In some teams a highly skilled Product Owner could also do this.
The implementation of the BDD scenarios will be automated tests, typically written by an Automation Tester.
I had a look through the updated chapter, and have the following comments:
1. “Test Coverage” process factor
The “test coverage” process factor focuses on a higher, more strategic level than individual test design techniques such as path coverage. The intent is to determine the expected test intensity for each quality characteristic the product must adhere to, according to the expectations of its stakeholders.
Applying this, business and technical decision makers will be required to decide how much risk the product or business runs by testing specific quality attributes of the product, such as functionality, security or performance (you can use standards such as ISO/IEC 9126 - https://en.wikipedia.org/wiki/ISO/IEC_9126, https://en.wikipedia.org/wiki/List_of_system_quality_attributes, or even the testing quadrants). If the risk is high for a given quality attribute (say, maintainability), the expected test coverage (or test intensity) for that attribute should be correspondingly deep/high.
So I would not use “path coverage” in this context. Path coverage is a specific test design technique used to achieve a certain expected level of test coverage (or test intensity) for a specific quality attribute (say, security).
Once the risk of not achieving a certain amount of test coverage (or test intensity) has been determined for the product, the test design technique called path coverage can be applied to achieve a high level of test coverage, combined with other test design techniques (say, decision tables and multiple condition/decision coverage). Through the combination of these different test design techniques the team is able to achieve a high level of test coverage for the quality attribute, based on the expectations of the stakeholders.
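The distinction between decision coverage and path coverage can be shown on a toy example (my own illustration, not from the chapter): two test cases can exercise every decision outcome yet still leave paths untested, which is why combining techniques raises the achieved intensity:

```python
# Hypothetical function with two independent decisions.
def classify(a, b):
    result = 0
    if a > 0:       # decision 1
        result += 1
    if b > 0:       # decision 2
        result += 2
    return result

# Two cases achieve full decision coverage (both True, both False)...
assert classify(1, 1) == 3
assert classify(-1, -1) == 0

# ...but full PATH coverage also needs the two mixed paths:
assert classify(1, -1) == 1
assert classify(-1, 1) == 2
```

With only the first two cases a defect hiding on a mixed path would go undetected, even though every individual branch was exercised.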
Another example: if the “Usability” quality attribute does not require a high degree of test coverage because it is low risk for the stakeholders, then a basic test design technique called a “checklist” will suffice: all GUI screens are tested against the same checklist, and no additional test design techniques are added, matching the expected low test intensity for that attribute.
The test coverage decisions are part of the approach to determine what type of testing effort is required overall for the project per quality attribute such as functionality, security, performance etc.
I think that instead of using "Test Coverage" (which can be confused with test coverage and design techniques), we should perhaps reword it to "Expected Test Intensity".
So I would reword the decisions under "Test Coverage" differently. I suggest we rename it to “Expected Test Intensity Strategy”, which will have the following options:
2. With the above section describing quality attributes and their links to test coverage, it perhaps negates the need for the “Non-Functional Testing Strategy” option. This implies that the non-functionals will be discussed as part of the process explained above.
Suggestion is to reword the process factor and options as follows:
Functional and Non-Functional Quality Attributes (or simply Quality Attributes):
3. The value of the automation pyramid is simply in guiding the conversation for testing to determine where such test coverage can be achieved. I would make that part of an “Automation strategy” section by providing options of where to spend the automation effort. If that will not work, perhaps have a process goal option called “Automation Coverage Capability” and use the pyramid levels in such a process goal option.
Automation Coverage Capability:
4. Thinking further about the current “Automation Strategy” option in the “Test Strategy” process goal, I think we should consider the “Architectural Strategy” as well. Whether the “Automation Strategy” falls under one, the other, or both is worth thinking about, as things like CI or CD are heavily influenced by, and influence, the architecture.
5. Then, some of the decisions made under the “Initial Testing Strategy” will influence the process goals “Explore Initial Scope”, “Identify Risks”, “Develop Initial Release Plan” and “Identify Initial Architecture Strategy”. This warrants consideration in each of those process goals.
For “Explore Initial Scope”, the test decisions made in the “Develop Initial Test Strategy” process goal will influence this goal. Ideally there should be an option named “Test Strategy” as part of “Explore General Requirements”. The test strategy decisions will help inform the initial scope, as they will add or remove work from it.
For the “Identify Risks” process goal, I would add the testing risks under “Explore Risks”, either as “Quality and Testing” or simply as another “Testing” option. The test strategy will inform the decision makers of any testing risks.
For “Develop Initial Release Plan”, I would add “Test Strategy or Test Plan” as part of the “Scope” option. The test strategy will influence the release plan, especially if additional tail-end testing is required, such as compliance, security or E2E integration testing.
And finally, for “Identify Initial Architecture Strategy”, I would add “Testing Automation Strategy” as an option under “Explore Technology Architecture”. The automation decisions will influence the architecture and will introduce requirements of their own to make the architecture friendlier towards the automation strategy. (As per point 4 above.)
There is a chance that the testing strategy process goal will also influence other process goals, but these are the major impacts as I see them at this stage.
I am happy to discuss any of these comments above over Skype.
Looking forward to hearing your thoughts.
@Jerry - We're adding BDD analyst as a staffing option. BUT, still not convinced we're there yet. I'm thinking that there might be a need for a Functional Testing decision point, not sure yet though.
Updated Teaming Strategy==>Whole Team as suggested
Development strategy==>Agree with what you're saying, but not sure this is the right spot for this. Many of the things you're saying are covered in the goal Produce a Potentially Consumable Solution. I'm thinking we park this discussion until we have that goal published, hopefully later this month.
Test first programming - Reworked the description.
Generated data - Reworked the description.
Automation strategy - Updated the description to discuss challenges around skills/mindset.
Defect reporting - Conversation: Yes! Added operational monitoring as an option.
Our hope is to have an update to the excerpt posted later today.
© 2013-2019 Project Management Institute, Inc.
14 Campus Boulevard, Newtown Square, PA 19073-3299 USA