QA Best Practices, Part 1/2


Working in QA, it’s essential to build up a set of best practices that guarantee smooth and consistent operation

By Sarah Fittje

During my time at Project A, I’ve worked with quite a number of different teams, some of which only existed in a specific constellation for a short time. Despite occasional similarities, each project and team is unique in its own way. Working in QA, it’s essential to build up a set of best practices that guarantee smooth and consistent operation regardless of the particular project. In this two-part series, I’d like to share my QA best practices with you.

Of all the teams and projects, it’s the Project A LATAM team that has influenced me the most, enabling me to gather these best practices while actively working on the Brazilian e-commerce shops Evino and Natue.

This first part will mainly focus on some conventions and soft skills that will aid you in your own work and in communicating with your fellow teammates. The second part will cover some more tech-related topics.

Wording

Using the same wording throughout the team saves a lot of time and avoids a lot of stress. When onboarding new people or introducing new features, make sure to clarify specific wording conventions and stay consistent.

Metrics

To help your devs reproduce and understand a bug you found, provide them with as much information as possible. It’s worth spending a little time digging deeper into the problem so that you can answer the following questions:

  • Location — Where in the system does the bug occur?
  • Action — When does the bug occur?
  • Device — Is there a specific platform on which the bug occurs?
  • Browser — Does the bug only occur in specific browsers?
  • Server/instance — Does this bug only occur for a specific test instance?
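
Answered concretely, those questions boil down to a short block in your report. Here is a sketch of what that might look like (the shop, pages, and values are made up for illustration):

```markdown
- **Location:** checkout page, payment step
- **Action:** happens after submitting an expired coupon code
- **Device:** mobile only (reproduced on Android and iOS)
- **Browser:** all tested browsers (Chrome, Firefox, Safari)
- **Server/instance:** staging only, not reproducible on production
```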

Comprehensible Reports

Good documentation is comprehensible — and every report is a small piece of documentation. It’s essential that all team members understand your report. This is not always an easy task to accomplish since the reproducibility of a bug can be complex and loaded with several conditions.

Bug Reports and Test Reports

First of all, make sure you understand the difference between a test and a bug report. Both require their own structure.

A test report is a report related to a specific issue branch. The best thing to use as a foundation is the respective ticket specification, which you can always extend to suit your tested use cases.

A bug report, on the other hand, documents a bug found on your production system.

Easy as pie!

With this knowledge, take the following guidelines into consideration when writing your reports:

1. Bulleted lists are more readable than continuous text

2. Bug report

If possible, answer the following questions:

  • Where — location/platform — Where does the problem occur?
  • What — consequence — What is the problem?
  • When — action — How to reproduce the problem?
  • Why — reason — What is causing the problem?
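
Put together, a bug report answering these questions could look like the following sketch (the feature and the suspected cause are invented for the sake of the example):

```markdown
- **Where:** product detail page, desktop and mobile
- **What:** the "Add to cart" button stays disabled although the product is in stock
- **When:** open any product with more than one variant and keep the preselected variant
- **Why:** (suspected) the stock check only runs when the variant selection changes
```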

3. Test report

Start your report with the overall test status:

  • QA STATE (QA ongoing)
  • QA FAIL
  • QA DONE

Single use case statuses:

  • Passed
  • Failed
  • Fixed (only possible from the second test iteration onward: a use case that Failed before now passes)
  • Specify (further specification by PM needed)
  • Invalid (caused by misunderstanding, falsely reported, etc.)
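
A small test report combining the overall status with the single use case statuses might then look like this sketch (the use cases are hypothetical):

```markdown
**QA FAIL**

- ✅ Passed: guest checkout with a single item
- ✅ Passed: checkout with a saved address
- ❌ Failed: coupon code is not applied to the cart total
  (the request to the coupon endpoint returns 500, details below)
- ❓ Specify: expected behavior for expired coupons is not defined in the ticket
```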

4. If a request failed, provide the following information:

  • Request URL
  • HTTP method
  • HTTP status code
  • Payload
  • Response
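
To keep this information consistent across reports, it can help to always render it the same way. Here is a minimal sketch of a helper that does this (a hypothetical function, not part of any particular tool), turning those five pieces of information into a Markdown snippet:

```python
def format_failed_request(url: str, method: str, status_code: int,
                          payload: str, response_body: str) -> str:
    """Render the details of a failed request as a Markdown snippet
    that can be pasted into a test or bug report."""
    lines = [
        "**Failed request**",
        f"- Request URL: `{url}`",
        f"- HTTP method: `{method}`",
        f"- HTTP status code: `{status_code}`",
        f"- Payload: `{payload}`",
        f"- Response: `{response_body}`",
    ]
    return "\n".join(lines)

# Example with made-up values:
snippet = format_failed_request(
    url="https://shop.example.com/api/cart/coupon",
    method="POST",
    status_code=500,
    payload='{"code": "SUMMER10"}',
    response_body='{"error": "internal server error"}',
)
print(snippet)
```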

5. If you have any suggestions to make things better, state them

6. Use Markdown to lay out your report

7. You can also replace the use case statuses with icons — or use both

8. If a use case fails, summarize the problem in a nutshell so you don’t overload the report

Here is a comparison of two bug reports which illustrates how poorly structured reports can impair readability:


Side note: Always clarify beforehand which information is expected within your team. Some people only want information on failed use cases, while others might want a complete list including all tested use cases regardless of the status. I would recommend the second option, since retracing future bug occurrences will be a lot easier.

Keeping Track

While testing a specific issue, you might detect some weird behavior elsewhere in the product. Although this is a common situation, it can throw testers for a loop: it’s easy to lose focus when new bugs demand investigation. The key to keeping track of everything is to document your current state of testing frequently.

There are two main options to deal with new bugs you detect while testing an issue:

1. Make a note concerning the new bug:

When you need to concentrate on testing an issue, you shouldn’t be distracted by new bugs. That doesn’t mean newly occurring bugs aren’t important, but sometimes it makes sense to deal with them later. So make a quick note containing the most important information, so that you can reproduce the bug later and check whether it’s related to the current ticket (I’ll cover this topic in more detail in part two).

2. Investigate the bug directly:

Whenever you are not heavily involved in a huge issue, you can investigate a bug directly and decide whether it’s related or unrelated to the currently tested issue. After successfully scrutinizing the bug and filing it in the ticketing system, you can proceed with the current issue.

Documenting your test progress not only helps you keep your bearings, it also keeps your team informed about the current state of testing.

Duplicate Tickets

One of a tester’s tasks is to document bugs. Testers aren’t the only ones filing bugs, though. Since other team members might find the same bugs you do, check the existing tickets for similar keywords to avoid creating duplicates.

Whenever you stumble across a duplicate ticket, mark it as such and link it to the original ticket. Also check which of the two is currently in development and close the other one.
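
If your team uses Jira, for example, a quick JQL keyword search before filing a new ticket can surface potential duplicates (the project key and keywords below are placeholders for your own):

```
project = SHOP AND issuetype = Bug AND text ~ "coupon checkout" ORDER BY created DESC
```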

Linking Tickets

Most software projects are based on a backend and a frontend. When a new feature is planned, you quickly realize that backend and frontend devs will have different tasks to accomplish — makes sense, right? For some projects it seems obvious to separate backend and frontend issues. This has advantages regarding clarity, but there are downsides as well.

The most important aspect of having separate backend and frontend tickets is to check for potential links between them. Bear in mind that even when a project separates frontend and backend tickets, some features might still only touch one side. Now assume you’re testing a frontend improvement and set it to QA DONE without knowing that there is a related backend ticket, which might contain just a small logic change or a refactoring task. In that case you risk releasing half a feature that consequently might not work as expected, at least if we assume that a ticket with the status QA DONE will be deployed to production soon after successful testing.

To avoid such a scenario, you should consider a workflow which ensures that an issue is only set to QA DONE when all related tickets are QA DONE as well. A good way to do this is to create one main ticket which includes all related tickets. These related tickets could be subtasks, Github issues from different repos, or other normal tickets that are simply linked to the main ticket. Of course, you could also just link the respective tickets without having an overall main ticket. If there is no need for separation, one ticket for all tasks will do the job too.
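
Such a main ticket doesn’t need to be fancy; a description along these lines (the ticket keys are invented) already makes the dependencies visible:

```markdown
**Main ticket: wishlist feature**

- [ ] BACKEND-123: wishlist API endpoints
- [ ] FRONTEND-456: wishlist page and "add to wishlist" button
- [ ] FRONTEND-457: wishlist icon in the header

Set this ticket to QA DONE only when all linked tickets are QA DONE.
```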

Sequential / Non-Sequential Use Cases

Here at Project A, we have a term called “QA ping pong”. It sounds funny, but it’s not so funny when it happens: one and the same ticket is juggled between QA and developers several times. There can be several reasons for this.

Either the developer:

  • doesn’t want to admit that nobody is perfect and rejects QA Fails without verifying the failed issues
  • doesn’t read the test report properly

Or the QA:

  • marks a ticket as failed in every test iteration as soon as the first bug occurs

Before you declare a ticket as failed and hand it back to the developer, check whether the failed use case affects further use cases. If it doesn’t, keep testing the remaining use cases so that you discover as many bugs as possible in a single iteration. Otherwise your process will look like this:

  • Dev Done
  • In QA > Use case 1 fails
  • QA Fail
  • Dev Done
  • In QA > Use case 3 fails
  • QA Fail
  • Dev Done
  • In QA > New bug caused by last fix + Use case 7 fails
  • QA Fail
  • Dev Done
  • … and so on

Of course, there might be new bugs caused by the last fix, but in practice such bugs hardly drive up the number of iterations the way QA ping pong does.

Direct communication can also clear up misunderstandings and prevent QA ping pong. So whenever you have the chance to talk to a person directly, take it. Don’t be afraid of confrontation! 🙂

Bug Communication

When you communicate bugs, verbally or in writing, never be offensive towards developers. A QA engineer’s goal should be to establish a good relationship with the dev team. Only then will the devs truly appreciate your QA work.


So much for now! Stay tuned for part two, which will cover more technical topics such as how bugs relate to one another, browser dev tools, API testing, and the distinction between frontend and backend bugs.