It feels like our episodes are all out of whack, but that’s just what happens when you have a lot going on! The team attended HubSpot’s Inbound 2022 a few weeks ago. Jess’s favorite part was getting to meet everyone in person for the first time since we hired some new faces, and Doug’s favorite part was the Westin lobby after hours, because that’s when the best things happen. How the two oldest people on the team outlasted all the youngins is beyond us.
Today’s topic is one Jess has been chewing on, and it’s been a hot-button issue in her head for the past few weeks: UAT and QAT. She believes the two get interchanged frequently, and she wants to address the best way to manage both, where people go wrong, and the lessons they’ve learned from launches.
What’s the difference between UAT and QAT?
Doug tries to be funny here and says the difference is rather obvious: one has a U and the other a Q. All jokes aside, we need to acknowledge upfront that we’re not using these terms in the strict software-development sense they come from. At the simplest level, QAT is quality assurance testing and UAT is user acceptance testing. From our perspective, QAT is done internally and UAT is done externally: when we test the system, it’s QAT; when the customer or client tests it, it’s UAT.
To highlight the difference further, as Jess would explain it: QAT is us checking whether the system works the way we expected, and UAT is having the user go through the system the way they would actually use it and try to break it.
To Doug, UAT is more than that. It’s finding scenarios where something doesn’t work and asking how often that happens. Where Doug thinks UAT goes wrong, and ends up hurting the end user, is when you move into the infinite project. User acceptance testing is not about generating iterations; it’s about answering one question: does this do what it was specified to do? If it doesn’t do something that was never specified, you might make an argument that whoever was responsible for the specifying screwed up.
UAT is also thinking about, “Was this done?” Even if something isn’t technically part of the use case, does it meet our standards?
Another place this goes wrong is when people start testing the exceptions. We don’t build to the exceptions; we build to the rule. It’s dangerous to tell someone to “go in and break it,” because now they’re going to test all the one-offs that rarely happen, and then we end up making adjustments for those instead of for the usual way the system is used.
Who should be doing the UAT and the QAT?
Those who built it should be doing the QAT and the end user should be doing the UAT.
If the end user is sales, should it be a salesperson? If it’s marketing, should it be a marketing person?
No. When you have a project team, you should know who on the customer side is responsible for everything involved, and they should go in and see whether everything does what it’s supposed to do. If the UAT is designed to have everyone test it out, we just call that using the system.
The question behind the question: is it the main point of contact who should do the UAT? Should a sales rep be doing it? Should the sales manager?
To Doug, this is situational. Think of building a new stadium. As you build the plumbing, you test each piece to see whether the toilet actually flushes. You go through each piece, but before opening day you have a group of people come in and flush everything at once, because that’s how you find out where things don’t hold up.
Sometimes something is missing that we won’t notice is missing. In user acceptance testing, when the client shows us where they’re confused, we’re able to catch those missing pieces.
Doug finds a process improvement here: who’s doing the UAT should be defined at the point you’re building, not after. That person should be someone high level who is fully involved, someone who owns the project on their end.
Whoever it is should be briefed on what they’re supposed to be doing; this isn’t a blind exercise. They should be clear on what they’re testing. If you don’t have a defined scope for what you’re testing, you’re not going to be able to tell the difference between an enhancement and a fix.
Dharmesh Shah had a great post where he said, “There’s nothing wrong with building software that has a strong opinion.” That’s the whole point, isn’t it? Our prime directive is that the business process must drive the technology; the technology should never dictate the business process. In defining the user stories and the use cases, you’re also defining the trade-offs. The system is built to help people make the right decisions. If you test outside of, or out of alignment with, the choices that were made, then you’re back to building a Frankenstein system. And that’s why 70-80% of implementations fail.
We need to talk about two types of UAT: a feature UAT and a usage UAT. The feature UAT should have a checklist to test against, making sure the things specified have actually been built.
Do you not think that you should have a checklist for the usage UAT?
According to Jess, you’re probably going to have a checklist, but it’s scenario-based. Doug doesn’t think you should have a checklist at all; rather, you should have a clear use case to test.
In the end, the user should be able to come in and tell us what they found frustrating and why, or where things are failing for them.
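To make the contrast concrete, here’s a minimal sketch of how the two test types might be written down. This is our illustration, not something from the episode; every item and field name in it is hypothetical.

```python
# Hypothetical sketch: feature UAT vs. usage UAT for a CRM build.

# Feature UAT: a flat checklist -- each specified item was either
# built or it wasn't.
feature_checklist = [
    "New leads auto-assign to the right owner",
    "Deal stages match the agreed sales process",
    "Required fields are enforced before a stage change",
]

# Usage UAT: a scenario written the way the end user actually works,
# walked end to end rather than checked item by item.
usage_scenario = {
    "role": "Sales rep",
    "use_case": "Work a new inbound lead from assignment to booked meeting",
    "steps": [
        "Open the newly assigned lead from the task queue",
        "Log the first call and schedule the follow-up",
        "Book the meeting and confirm a deal record was created",
    ],
    "done_when": "The rep completes the flow without leaving the CRM",
}
```

The feature checklist answers “was it built?”; the scenario answers “does it work the way the user actually works?”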
A key to a successful UAT/QAT is making sure the brief defines what “complete” means, because the user is going to want the system to be perfect, and that’s not the goal. The goal is to get through whatever you outlined as complete; otherwise you’re in never-ending-project mode.
How many iterations would you go through during a UAT? For example, if a user goes in, does the UAT, and gives feedback, and we make those fixes, would you then have the user go back in and give more feedback? How do you know how much bouncing back and forth to do?
The danger there is that you can slide into enhancement mode. When you do a UAT, it either passes or fails. If something fails, you have to define what failed and what the fix is.
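Here’s a minimal sketch of that pass/fail discipline, assuming the fix-versus-enhancement rule Doug describes; the class and field names are hypothetical, not from the episode.

```python
from dataclasses import dataclass

@dataclass
class UATFinding:
    scenario: str   # the use case that surfaced the issue
    in_spec: bool   # was this behavior part of the agreed use case?
    passed: bool    # did the system do what was specified?

    def disposition(self) -> str:
        if self.passed:
            return "pass"
        if not self.in_spec:
            # Unspecified behavior isn't a UAT failure; parking it on a
            # backlog is what keeps the project from becoming infinite.
            return "enhancement -> backlog"
        # A true failure: define what failed and what the fix is.
        return "fix -> define what failed and what the fix is"

print(UATFinding("Lead routing", in_spec=True, passed=False).disposition())
print(UATFinding("Bulk re-import edge case", in_spec=False, passed=False).disposition())
```

Anything that passes is done, anything unspecified goes to the backlog, and only true failures bounce back for another round.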
Jess’s Takeaways:
The biggest takeaway: map the build.
Follow Jess, Doug & Imagine on socials for updates on the show or other insights:
Doug Davidoff: Twitter - @dougdavidoff | LinkedIn
Jess Cardenas: Twitter - @JessDCardenas | LinkedIn
Imagine Business Development: Twitter - @DemandCreator | LinkedIn
Subscribe to the show on Spotify & Apple Podcasts
Check out Let's Play RevOps on Twitch for more commentary on this topic