
Develop Initial Test Strategy

  • 24 Oct 2018 2:04 PM
    Reply # 6871873 on 5901520

    Hi Scott, I just reviewed this chapter. I have feedback on every page. Can we have a call or meet in person to go over the feedback? I can help clear up a number of disconnects, including a change to the definition of Exploratory Testing at the very least.

    Please let me know. Cheers.

  • 19 Sep 2018 2:55 PM
    Reply # 6677086 on 5901520
    Scott Ambler (Administrator)

    I have just published an update to this excerpt.  The big change is the addition of the Choose Testing Types decision point and a discussion of the Testing Quadrant.  We also fixed a lot of minor bugs.

  • 22 May 2018 2:09 AM
    Reply # 6250566 on 5901520

    Perhaps add an alternative flow to TFP. I took this approach from the DA training back to the team to help the developers out. We pulled the QA and developer together with myself, the BDD analyst, and compiled developer tests:

    The developer drives/collaborates with the QA/BDD analyst on the solution requirements (via acceptance tests) and collaboratively designs the solution (via developer tests) based on the requested functionality.

    Defects were initially reduced by half, reducing costs substantially!

  • 21 May 2018 7:07 PM
    Reply # 6249876 on 5901520

    Thanks Scott. I had a look at the updated version. It is shaping up nicely. 

    Keep my comments on Test Intensity in mind for future reference, if you think about having another update to the test strategy process goal. 

    Just to reiterate: 

    I still maintain that "path coverage" is a test coverage technique to achieve a certain test intensity (See here: for an explanation of how it is linked), alongside "decision coverage", "equivalence classes", "pairwise testing", "orthogonal arrays", "boundary value analysis", "operational profiles", "load profiles", "right and fault paths" and "checklists" (according to TMap Next). 

    These coverage techniques are then combined with test design techniques such as "decision tables", "data combination tests", "elementary comparison", "error guessing", "exploratory tests", "data cycle tests", "process cycle tests" and "semantic and syntactic tests", according to TMap Next. 
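    As a concrete illustration of one of the coverage techniques listed above, boundary value analysis can be sketched in a few lines. The age rule, helper function and values below are hypothetical, invented for illustration; they are not from TMap Next itself:

    ```python
    # Hypothetical sketch of boundary value analysis for an inclusive range rule.

    def boundary_values(lo, hi):
        """Return the classic six boundary test inputs for an inclusive range."""
        return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

    def is_valid_age(age):
        """Example rule under test: age must be between 18 and 65 inclusive."""
        return 18 <= age <= 65

    # Exercise the rule at each boundary; only in-range values should pass.
    results = {v: is_valid_age(v) for v in boundary_values(18, 65)}
    ```

    The point is that six targeted inputs around the two boundaries give high confidence in the range check without testing every possible value, which is exactly the intensity trade-off these techniques are meant to manage.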

    I am happy to explain this to anyone who is interested in the above. 



  • 08 May 2018 6:54 AM
    Reply # 6142856 on 5901520
    Scott Ambler (Administrator)

    Version 3 of the excerpt is now available.

  • 07 May 2018 11:55 AM
    Reply # 6141144 on 5901520
    Scott Ambler (Administrator)

    Thanks very much for the feedback.  It took a while, but we've acted on most of it. Later today we will post an updated version of this excerpt, but for now I thought I would respond with a summary of the changes we've made.

    Here are our thoughts on the feedback.


    • We're keeping the BDD analyst for now.  We've seen enough situations where this sort of role needed to be filled that it makes sense.  We may rethink this position at some point.

    Aldo: (going in the same numerical order as your points on April 8)

    1. Reworked it to Test Intensity.  But kept the options simpler than what you suggested.  
    2. Did not rework non-functional testing strategy.  We liked your suggestion but wanted to keep it as is for now.
    3. Added Automation Coverage as a decision point.
    4. Fleshed out Automation Strategy.
    5. We disagreed with this suggestion. At the phase level, in this case Inception, we make it pretty clear that you'll iterate back and forth between addressing these goals and that they affect each other.  Having said that, we'll review the Inception overview with this in mind to ensure that it is in fact as clear as we think. We didn't want to put explicit ties between each goal as you suggest because it would increase the coupling between goals, which is something we want to avoid.

  • 09 Apr 2018 4:32 PM
    Reply # 6080044 on 5901520

    Hi Scott

    I made some small changes to my posting from yesterday. I have also attached an updated "How" we discussed over Skype.



    1 file
    Last modified: 10 Apr 2018 4:19 PM | Aldo Rall
  • 09 Apr 2018 6:25 AM
    Reply # 6053652 on 5901520

    Hi Scott,

    I don't think there is a need for a role of BDD Analyst as a staffing option.

    My thought is that BDD is a skill/technique that can be applied by a Team Member such as a Business Analyst/Proxy Product Owner or an Automation Tester. In some Teams a highly skilled Product Owner could also do this. 

    The implementation of BDD will be automated tests, typically written by an Automation Tester.
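    For instance, a BDD-style acceptance test can be written as a plain automated test with the Given/When/Then structure carried in comments, without any dedicated BDD framework. A minimal sketch; the Account class and its behaviour are invented for illustration:

    ```python
    # Hypothetical sketch: a BDD-style acceptance test as a plain test function.

    class Account:
        def __init__(self, balance=0):
            self.balance = balance

        def withdraw(self, amount):
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount

    def test_withdrawal_reduces_balance():
        # Given an account with a balance of 100
        account = Account(balance=100)
        # When the holder withdraws 30
        account.withdraw(30)
        # Then the remaining balance is 70
        assert account.balance == 70
    ```

    Tools such as Cucumber or behave would express the Given/When/Then in a separate feature file, but the automated implementation ends up in the same shape as above.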

  • 08 Apr 2018 8:38 PM
    Reply # 6053275 on 5901520

    Hi Scott,

    I had a look through the updated chapter, and have the following comments: 

    1. “Test Coverage” process factor

    The “test coverage” process factor focuses on a higher, more strategic level than individual test design techniques such as path coverage. The intent is to determine, per quality characteristic, the expected test intensity the product must meet according to the expectations of its stakeholders. 

    Applying this, business and technical decision makers will be required to decide how much risk the product or business runs for specific quality attributes of the product, such as functionality, security or performance (you can use standards such as ISO/IEC 9126, or even the testing quadrants). If the risk is high for a given quality attribute (say, maintainability), the expected test coverage (or test intensity) for that attribute should be deep, i.e. high.

    So, I would not use “path coverage” in this context. Path coverage is a specific test design technique used to achieve a certain expected level of test coverage (or test intensity) for a specific quality attribute (say, the security quality attribute).

    Once the risk of not achieving a certain amount of test coverage (or test intensity) has been determined for the product, the test design technique called path coverage can be applied, combined with other test design techniques (say, decision tables and multiple condition/decision coverage), to achieve a high level of test coverage. Through the combination of these different test design techniques, the team is able to achieve a high level of test coverage for the quality attribute based on the expectations of the stakeholders.
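    To make the distinction concrete, here is a hypothetical sketch of path coverage for a toy function with two independent decisions, giving four execution paths. The function, its thresholds and values are invented for illustration:

    ```python
    # Hypothetical sketch: path coverage versus decision coverage.

    def classify(amount, is_member):
        """Toy pricing rule with two independent decisions."""
        discount = 0
        if amount > 100:    # decision 1
            discount += 10
        if is_member:       # decision 2
            discount += 5
        return discount

    # Decision coverage only requires each branch to be taken once somewhere
    # (two cases suffice); path coverage requires all four combinations.
    paths = {
        (150, True):  classify(150, True),   # both decisions true
        (150, False): classify(150, False),  # decision 1 only
        (50, True):   classify(50, True),    # decision 2 only
        (50, False):  classify(50, False),   # neither
    }
    ```

    The deeper the expected test intensity for the relevant quality attribute, the more such paths and combinations the team commits to exercising.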

    Another example: if the “usability” quality attribute does not require a high degree of test coverage because it is low risk for the stakeholders, then a basic test design technique called a “checklist” will suffice. All GUI screens are tested against the same checklist, without adding further test design techniques, achieving the expected low test coverage for the usability quality attribute.

    The test coverage decisions are part of the approach to determine what type of testing effort is required overall for the project per quality attribute such as functionality, security, performance etc.

    I think that instead of “Test Coverage” (which can be confused with test coverage design techniques) we should perhaps reword it to “Expected Test Intensity”.

    So, I would reword the decisions under “Test Coverage” differently. I suggest we rename it to “Expected Test Intensity Strategy”, with the following options:

    • Expected Test Intensity of Quality Attributes based on Business Risk
    • Expected Test Intensity of Quality Attributes based on Technical Risk
    • Expected Test Intensity based on Project or Product Risks
    • Expected Test Intensity based on Ad-hoc Risks
    • No Expected Test Intensity articulated
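    The risk-based options above could be operationalised as a simple lookup from risk scores to expected test intensity. A hypothetical sketch; the attribute names, risk scores and thresholds are all invented for illustration:

    ```python
    # Hypothetical sketch: expected test intensity derived from business and
    # technical risk per quality attribute (scores 1-5, invented values).

    RISK = {  # quality attribute -> (business_risk, technical_risk)
        "functionality": (5, 4),
        "security": (4, 5),
        "usability": (2, 1),
    }

    def expected_intensity(business, technical):
        score = max(business, technical)  # take the worse of the two risks
        if score >= 4:
            return "deep"    # combine several test design techniques
        if score >= 3:
            return "medium"
        return "light"       # e.g. a checklist may suffice

    intensity = {attr: expected_intensity(*r) for attr, r in RISK.items()}
    ```

    The value of writing the mapping down, even this crudely, is that stakeholders can see and challenge the intensity decision per quality attribute rather than inheriting it implicitly.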


    2. With the above section describing how quality attributes link to test coverage, it perhaps negates the need for the “Non-Functional Testing Strategy” option. This implies that the non-functionals will be discussed as part of the process explained above. 

    Suggestion is to reword the process factor and options as follows:

    Functional and Non-Functional Quality Attributes (or simply Quality Attributes):

    • Agile Testing Quadrants
    • Organisational Test Policy
    • Create your own
    • ISO/IEC standard
    • None


    3. The value of the automation pyramid is simply in guiding the conversation to determine where such test coverage can be achieved. I would make that part of an “Automation Strategy” section by providing options for where to spend the automation effort. If that will not work, perhaps have a process goal option called “Automation Coverage Capability” and use the pyramid levels in such an option.

    Automation Coverage Capability:

    • Unit Level
    • Component Level
    • Solution Level
    • Service/API Level
    • UI Level
    • Multi-System Levels
    • Manual


    4. Thinking further about the current “Automation Strategy” option in the “Test Strategy” process goal, I think we should consider the “Architectural Strategy” as well. Whether the “Automation Strategy” falls under one, the other or both is worth thinking about, as things like CI or CD are heavily influenced by, and influence, the architecture.


    5. Then, some of the decisions made under the “Initial Test Strategy” will influence the process goals “Explore Initial Scope”, “Identify Risks”, “Develop Initial Release Plan” and “Identify Initial Architecture Strategy”. This warrants consideration in each of those process goals.

    For “Explore Initial Scope”, the test decisions made from the “Develop Initial Test Strategy” process goal will influence the “Explore Initial Scope” process goal. Ideally it should be an option as part of the “Explore General Requirements” named “Test Strategy”. The Test strategy decisions will help to inform the initial scope as it will add or remove work from the initial scope.

    For the “Identify Risks” process goal, I would add the testing risks under “Explore Risks”, either as “Quality and Testing” or just as another “Testing” option. The test strategy will inform the decision makers of any testing risks.

    For “Develop Initial Release Plan”, I would add “Test Strategy or Test Plan” as part of the “Scope” option. The test strategy will influence the release plan, especially if additional tail-end testing will be required, such as compliance, security or E2E integration testing.

    And finally, for “Identify Initial Architecture Strategy”, I would add “Testing Automation Strategy” as an option for “Explore Technology Architecture”. The automation decisions will influence the architecture and will impose their own requirements, adjusting the architecture to be friendlier to the automation strategy. (As per point 4 above.)

    There is a chance that the testing strategy process goal will also influence other process goals, but these are the major impacts as I see them at this stage.


    I am happy to discuss any of these comments above over Skype.

    Looking forward to hearing your thoughts.


    Last modified: 09 Apr 2018 4:31 PM | Aldo Rall
  • 08 Apr 2018 9:55 AM
    Reply # 6052691 on 5901520
    Scott Ambler (Administrator)

    @Jerry - We're adding BDD analyst as a staffing option.  BUT, still not convinced we're there yet.  I'm thinking that there might be a need for a Functional Testing decision point, not sure yet though.


    • Adding the testing pyramid diagram as previously discussed.
    • Adding a Test Coverage decision point as previously discussed.


    Updated Teaming Strategy==>Whole Team as suggested

    Development strategy==>Agree with what you're saying, but not sure this is the right spot for this.  Many of the things you're saying are covered in the goal Produce a Potentially Consumable Solution.  I'm thinking we park this discussion until we have that goal published, hopefully later this month.

    Test first programming - Reworked the description.

    Generated data - Reworked the description.

    Automation strategy - Updated the description to discuss challenges around skills/mindset.

    Defect reporting - Conversation: Yes!  Added operational monitoring as an option.


    Our hope is to have an update to the excerpt posted later today.



© 2013-2019 Project Management Institute, Inc.
