Quality at Pendo: Our experience of Gorilla Testing

Welcome to the second part of our two-part blog on Quality at Pendo. In part one we looked at ensemble testing and how it benefited one of our key releases. In part two we tried our hand at Gorilla testing to see how this form of testing might benefit our testing approach and culture.

Goals:

  1. To try a different method of testing
  2. To open new channels and methods for communication and collaboration within the quality team
  3. To promote knowledge sharing
  4. To increase test coverage and confidence in areas affected by framework migration (including data)

Read the first part of this series here: Quality at Pendo: Our Experiences in Ensemble Testing

Gorilla Testing

The Why

Gorilla testing is about rigorously testing a feature in random ways in an attempt to break it. The name is believed to come from the old American Tourister luggage ad in which a gorilla bangs on the luggage to try to break it. We had a very important reworking of our permissions code coming up, one with a broad impact across our application, so we got the whole quality team together to try to break this new functionality.


One of the ways this differs from Ensemble testing is that the testers were told what change was being introduced, but were otherwise intentionally left in the dark about the specific requirements. The benefit here is that we weren’t prescribing any testing patterns or limiting the scope of effort; testers simply tried to use the feature and break it.

The Process

The introduction/kick-off session for this process covered how we would coordinate communication during a pandemic (the Zoom call setup and the Slack room #gorilla-testing) and introduced the change itself. We then began by focusing on the first step: migrating subscriptions to use the new permission set, which we would then be testing.


Once we had the kick-off session out of the way we hopped on a Zoom call and began testing. Folks would chime in when they had a question or believed they had found an issue. We did this in two-hour time blocks across three days to ensure strong coverage without impacting team members’ day-to-day work. Some people naturally had conflicting meetings, so they continued testing on their own when they had the bandwidth to do so. Ideally these testing sessions would be done in person to facilitate conversation and learning as we all test through the new feature, but with a global pandemic underway we wanted to see how well it would work over video conferencing.


On the fourth day we got together to review our findings with the developers, and to discuss how we thought the process went and what could be improved. This is important because we strongly believe in being agile, and a huge part of that is constantly evaluating and re-evaluating how things are going and how we can improve.

The Results

  • Three high-severity issues found
  • Seven lower-severity issues found
  • A handful of “known issues” also surfaced, which shed light on how impactful they were
  • Risk areas identified that we didn’t know about
  • Greater collaboration and teamwork while performing the exercise
  • Increased product knowledge


We also found that these efforts (Ensemble and Gorilla) shine a light on the benefits of collaborative testing. We can learn from each other and work as one team despite often being siloed and focused on our own feature team’s needs. Bringing fresh eyes to areas we don’t typically test, and proactively breaking down our silos, delivers stronger results, better quality, and an invaluable approach to tackling big features and changes.

Improvements

One of the biggest takeaways for improving this process in future iterations was the need to identify the right level of ambiguity. This particular change impacted our entire application, so it would have suited us better to have a few more guardrails in place so there wasn’t as much overlap between our quality engineers in certain areas. There were also some instances where engineers asked, “What should I be testing?”


The benefit of Gorilla testing IS having a certain level of ambiguity, which reveals how usable the feature under test really is. But it is also important to strike a balance: give testers enough information to do what they do best, while ensuring there is plenty of scope for them to explore and investigate the change for themselves.


As with all testing activities at Pendo, developers were involved in this process throughout. However, the developers would have been better served by a touchpoint with us after each session, instead of waiting until the last day to review all the findings. In hindsight, we feel that a faster find-and-fix approach would have elevated our Gorilla testing and provided a better set of results.


While we had good participation from the quality teams, we should have required attendance at the 30-minute introduction session (or at the very least recorded it) to ensure everyone understood the intent of this effort.


Summary

There are a number of clichés around the idea of working together being better than working solo. We learned the following:

  • How the new feature worked
  • That there were still bugs to resolve prior to release
  • That some areas of the feature needed polish
  • New testing techniques and approaches from our interactions with each other


Both Ensemble and Gorilla testing offered us these learning opportunities and benefits. I am happy to say we have had two group testing efforts since then that also proved beneficial. They may not have been called Ensemble or Gorilla testing specifically, but they were group efforts that gave us fresh eyes and more opportunities to learn and grow, as individuals and as a group.


Findings: Ensemble vs. Gorilla

  • Better for a smaller group
  • Better for smaller change
  • Would work better in person rather than online in video conferences
  • Requires more structure/setup (Ensemble) vs. little to no structure (Gorilla)
  • Cross-team collaboration
  • Beneficial in finding bugs/issues
  • Increased product knowledge
  • Identified risk areas