QA Process Notes and Suggestions
Overview and Purpose
This document provides suggestions for the activities, roles, and responsibilities of our QA process. It does not cover the rich foundation of test scripts and scenarios already provided by the Fluid team, which we will be adapting for CollectionSpace use. See:
http://wiki.fluidproject.org/display/fluid/Testing+Fluid+Components
http://wiki.fluidproject.org/display/fluid/Reorderer+QA+Test+Plan+-+Image+Reorderer
The QA phase should have a specific flow, with clear test plans that team members can pick up and begin to work with. The key here is a culture around testing, and having someone who can help teach, explain, and bring newcomers into the process.
During the QA process, it is important that we maintain constant communication with PMs regarding red flags, so that issues can be prioritized and their impact on the release handled as appropriate.
Quality Assurance Elements and Best Practices
During the design phase for each release, it would be best practice to define our “acceptance tests”: tests that outline the criteria for each function and feature included in the release. These are the criteria that must be met for a particular feature/function to be deemed complete. For example, we need to ensure that testing scripts, documentation, acceptance requirements, and time estimates are all in alignment. The developers will code to these criteria, and testing will check that the criteria have been met. Acceptance tests would be matched directly to user stories where user stories exist; for features/functions that do not have user stories (e.g. infrastructure), they would be matched to the roadmap and JIRA items that describe those areas.
The acceptance tests therefore would be of two kinds, depending on the item being tested. Test scripts provide a written expression of the QA activities that will take place during the QA process. Each of the items listed below will be used as appropriate to the item being tested:
- Unit tests
- Use cases
- User stories
- Compatibility: Platform/OS/Browser/Plug-in/API
- Ad hoc
For example, automated tests and unit tests are more likely to cover infrastructure elements, while use cases and user stories are more likely to cover the end-user experience (including an Administrator experience when relevant). Performance, stress, and compatibility testing will be designed to ensure that the application can run well in real-world situations. Ad hoc testing is when we bang on the application with no particular order or script, to find oddities that might be hidden from scripted testing; this sort of testing is particularly appropriate for end users to participate in.
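To make the unit-test item concrete: each acceptance criterion can map to one small, direct test. Here is a minimal sketch in Python's standard `unittest` style, using a hypothetical `validate_record` function as a stand-in for whatever feature is under test (the real tests would target actual CollectionSpace/Fluid code, likely in JavaScript):

```python
import unittest


def validate_record(record):
    """Hypothetical stand-in for a feature under test: a record is
    valid only if it has a non-empty 'id' and a 'title' field."""
    return bool(record.get("id")) and "title" in record


class ValidateRecordTest(unittest.TestCase):
    """Each test method corresponds to one acceptance criterion."""

    def test_complete_record_is_accepted(self):
        self.assertTrue(validate_record({"id": "obj-1", "title": "Vase"}))

    def test_missing_id_is_rejected(self):
        self.assertFalse(validate_record({"title": "Vase"}))

    def test_missing_title_is_rejected(self):
        self.assertFalse(validate_record({"id": "obj-1"}))
```

Run with `python -m unittest` from the directory containing the file. The point is the pattern, not the function: when acceptance criteria are written this way, developers code to them and QA can verify them mechanically.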
To avoid confusion and wasted effort, it is important that the entire team (including implementer testers) use JIRA in the same way. The following lists the JIRA elements for which we need to develop guidelines describing how each is used.
- Define what we mean by each of the priority designations.
- Descriptions should begin with a concise statement of the problem, so that reviewers scanning the summary view can quickly get a sense of what the problem is and decide whether they must read the JIRA in more detail immediately.
- Who closes bugs?
  - The assignee can mark an issue “fixed”, but we need to agree on who can actually close the bug. Is that always the QA lead? Are there some items that the PM can close, or should close? We should differentiate between JIRA items that team members create for their own task tracking (the team member can close those) and JIRA items entered as bugs (the QA lead should close those, or pass them to the PM). Other examples?
- Who determines that a bug does not need to be closed for a release?
  - Is this the release manager or the lead PM? These might require a conversation, but someone needs to be officially designated as the decision maker.
- Can we have an auto-link from a page (i.e. its URL) to a JIRA entry?
  - This would make it much faster and easier for a tester to create the JIRA with the necessary information.
  - If we can do this, can we also auto-fill the OS/Browser fields?
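One way such an auto-link could work is a pre-filled issue-creation URL embedded in each page, since JIRA's CreateIssueDetails screen accepts field values as query parameters. A sketch in Python; the host, project id, and issue-type id below are placeholders, not our real instance's values:

```python
from urllib.parse import urlencode

# Hypothetical JIRA instance URL; substitute our real host.
JIRA_BASE = "https://issues.example.org/secure/CreateIssueDetails!init.jspa"


def prefilled_jira_url(page_url, os_name, browser):
    """Build a create-issue link that pre-fills the summary and
    environment fields, so the tester only adds the details."""
    params = {
        "pid": "10010",       # hypothetical project id
        "issuetype": "1",     # hypothetical "Bug" issue-type id
        "summary": "Issue found on " + page_url,
        "environment": "OS: %s; Browser: %s" % (os_name, browser),
    }
    return JIRA_BASE + "?" + urlencode(params)
```

For example, `prefilled_jira_url("http://wiki.fluidproject.org/...", "OS X 10.5", "Firefox 3")` yields a link the tester can click to open a new issue with the page and environment already filled in. Auto-filling OS/Browser would mean detecting those values on the page (e.g. from the browser's user-agent string) and passing them through the same way.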
We need to designate which team member(s) are playing which QA roles. Primary roles are listed below. Are there other roles we need to identify?
- QA Lead
- User groups
- PM group
- Development groups
The QA process will have several phases during which different roles will be active and different QA goals will be prioritized. Here are several considerations.
- Which role/group is testing when, and in what order?
- How does one group hand off to, or overlap with, another group?
  - The decision that it is time to hand off
  - Communication protocols for the handoff
- Duration of each phase
- What if we need a STIM during QA?
As we’ve seen with the installs that Kasper and Richard have been doing, testing the documentation is a key element of the QA process. Some areas to consider include:
- Which documentation needs to be tested and when?
- Who will do the testing?
- Do we create JIRAs for revising the documentation, or does the person conducting QA on the documentation simply revise it?
  - The QA person will likely make some revisions, but the original writer will own others, for example revisions that are very detailed and specific to the function documented.
We have a considerable amount of documentation on the wiki regarding release management but we know that the process needs refinement. Some areas that need to be refined include:
- Decision making
- Communication protocols
- Version control