IF U SEEK: Decoding Conversion Insights

Showcasing the guest of our new podcast episode, David Standen, who talks about conversion insights for product design

Explore product design with David Standen on If U Seek. Learn strategies to align design with business goals and optimize conversion rates.

We’re excited to bring you the latest episode of If U Seek! This week, we have the pleasure of hosting David Standen, a Product Design Leader, with a remarkable background including experiences at Shopify, eBay, Nokia, and more. David shares his extensive knowledge on aligning design efforts with stakeholder objectives and fostering a common language that drives conversion and business success.

This episode is filled with actionable strategies, insightful tips, and engaging discussions that are essential for anyone involved in UX design and conversion optimization.

You can find If U Seek on Spotify, Apple Podcasts, or YouTube Music.


In this insightful episode, David emphasizes the importance of continuous optimization and the role of ongoing testing in enhancing conversion rates. He provides practical advice on how to effectively communicate the significance of testing to stakeholders, ensuring understanding and buy-in for UX research processes within your organization.

Key Points in This Episode Include:

  • Communicating the Importance of Testing: Learn how to articulate the significance of testing initiatives to stakeholders.
  • Continuous Study Practices: Discover the value of ongoing study practices in optimizing conversion rates.
  • Accelerated Testing Methods: Explore unmoderated testing approaches that speed up the process while maintaining quality outcomes.

Tune in to gain valuable insights from David Standen that will help you elevate your UX strategies and drive measurable improvements in business outcomes. You can follow the conversation from the transcript below.


Layshi: If you seek to shape the future, listen to those who design it. Welcome to “If U Seek” by Useberry, where expert voices guide us to UX wisdom. Layshi Curbelo here, your host on this journey of If U Seek. If you are seeking a different way to learn and understand users, this is the podcast for you.

Maybe you are questioning, why If U Seek? Picture it as an open door to curiosity. In every episode, we will strive to explore and gain deep insights from experts shaping their domains. We want you to feel enlightened, educated, or even inspired after each episode. The idea is to foster connections within the UX industry.

Now, some important information for you. Uncover the secrets of your website with Useberry Usability Testing. Pinpoint issues, explore user perspectives, and make decisions backed by data. Your website’s optimal performance starts with Useberry. Join us on social and share your thoughts in the comments.

Your journey to enhance user experience begins now.

Today, we are thrilled to have David Standen on the podcast. A product design leader known for his customer-centric approach and hands-on expertise, David has a rich background in product and design that includes time at Shopify, eBay, Nokia, founding startups, and advising early-stage ventures.

David brings a wealth of experience to the table. He knows that successful innovation relies on deep customer insights, and he prioritizes objective feedback over subjective preferences: user feedback is fuel for data-driven design decisions. Get ready for a podcast that will help you speak the same language as your stakeholders, in this episode of Decoding Conversion Insights.

Let’s start with the show.

Layshi: Hi everybody. My name is Layshi Curbelo. I’m your host of If U Seek. Here’s another episode, and this time I have the amazing opportunity to be recording with David Standen. Hi, David.

David: Hey, how you doing?

Layshi: I’m doing amazing. How about you? I’m recording from the beautiful island in the Caribbean, Puerto Rico. Where are you from?

David: I’m based in London. We’ve just survived a severe hailstorm, randomly. But yeah, it’s great to be part of the podcast. So yeah, thanks for inviting me on.

Layshi: Thank you so much for saying yes and sharing your knowledge with the audience. I’m super excited to have another colleague from London, so far away.

David: Yeah, but you’ve got better. You’ve got much better weather. I wish I was where you are.

Layshi: That’s what people say. But to be honest, today is a rainy day here on the island. It’s a really, really rainy day. So, happy to learn from you in this episode of Decoding Conversion Insights. Are you ready to share some of your insights?

David: Yeah, happy to dive in. Yeah, let’s see if we can share some of my learnings from my experience. Let’s go.

Layshi: So I remember when I started in my career as a UX designer, and I know that a lot of UX designers and researchers need to be really focused on numbers when they are doing testing. Really focused on the metrics, on percentages, even a shift from one to five percent. I know that for some people this sounds so minimal, but for us in testing, it’s so much, right? And yeah, it can make such a difference. And when you are doing an A/B test, it can be like a battle.

So, taking metrics into consideration, what metrics do you typically track to measure the success of conversion rate optimization?

David: Yeah, it’s a great topic and one, hopefully we can unpick through the show. But I think one of the first things to really focus on is clearly the number of users that are actually able to complete the journey, so the actual conversion rate. But I think it’s more important to dive a little bit deeper and go through the acquisition funnel and really see what areas indicate any friction or frustration. That’s known as the drop off rates, and it helps you dive a little bit deeper into the experience and uncover a little bit more.

But I think it’s also important to look at the time users spend during a single session, it’s a good indicator to know whether they understand things enough, if they’re finding it maybe more complex than what they should. And that’s known as the average session duration.

But from a UX perspective, I think user engagement is critical. I think you need to be able to leverage heatmaps to understand the effectiveness of your experience and understand where people are actually clicking. I think they’re beautiful things to look at, but if you can interpret them in the right way, I think they’re really powerful.

And I think also the number of people that are actually completing a task. So, when you’re setting up task-based tests, it’s important to know who is completing them and who isn’t. And you can even go as far as calculating a score to help you understand the impact before your improvement and after.

And so leveraging a usability score can help you get a bit more insight into how good your work is, or how bad it is, right? It doesn’t always go to plan.
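The metrics David lists here, overall conversion rate, per-step drop-off, and task completion, can be sketched as a quick calculation. The funnel step names and counts below are hypothetical, not from the episode:

```python
# Hypothetical acquisition-funnel counts: (step name, users who reached it).
funnel = [("landing", 1000), ("signup", 420), ("checkout", 180), ("purchase", 90)]

# Overall conversion rate: users completing the journey / users entering it.
conversion_rate = funnel[-1][1] / funnel[0][1]

# Per-step drop-off rate highlights where friction or frustration appears.
drop_off = {}
for (step, entered), (_, advanced) in zip(funnel, funnel[1:]):
    drop_off[step] = 1 - advanced / entered

# Simple task completion rate from a task-based test (8 of 10 participants).
task_completion = 8 / 10

print(conversion_rate)        # 0.09
print(drop_off["signup"])     # ~0.571, the biggest relative drop after landing
print(task_completion)        # 0.8
```

Measured before and after a design change, the same numbers give the before/after comparison David mentions.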

Layshi: Thank you so much for that amazing answer. I definitely agree with you. I remember doing a lot of testing at different kinds of companies, and the majority of the time they actually measure bounce rate.

They measure heatmaps and understand user interactions. And one of the things that I really love is when you can actually record the behavior and understand what things confuse users.

I remember one time my co-worker and I were almost having a fight about an element. I was like, no, I don’t think people will actually feel that this is clickable and it will be visible to the user, and he was like, no, forget about it, people will actually get it. And after a couple of recordings, we saw the actual behavior, and neither he nor I had the actual answer.

David: Yeah, sometimes you just can’t explain it, right? Things just don’t go the way you think sometimes. I think that’s also the beauty of running these tests, right? You can be pleasantly surprised sometimes, which is great. And I think that is the magic.

Yeah. It’s being able to remove any form of emotional attachment you’ve got to the work and say, wow, that completely blew my mind.

Layshi: Definitely. Can you discuss the process you follow when designing and implementing conversion experiments?

David: Yeah. In terms of the process that you actually follow, I think it’s really important, before you get set up, to make sure you’re focusing on the right area. So do lots of early research and analysis just to understand the potential of the improvement that you want to make. That’s going to help you really hone in and focus your conversations around that area.

I think you also need to have a dialogue with your team, right? Your team’s going to be made up of developers and engineers, product designers, product people, even more designers if the team’s much larger, and you need to understand any assumptions that may exist.

You may even be new to a team, right? So I think it’s good to play that out and understand what assumptions people have. That’s going to help you get a level of shared understanding, and it helps you then think: okay, if we have these assumptions or leaps of faith, let’s start thinking about how we might structure them as a hypothesis. What might that look like? If we do this, then this is likely to happen. And from that, if you’ve got a clear sense of the experiment and what you expect to see, you should have a clear sense of the behaviors that would come through the test.

And I think you should commit. You should certainly share, with confidence, that you’re expecting these types of behaviors to happen, and that will lead to a target that you wish to hit. Then you can start designing the experiment, and again, just make sure you’re rigorously checking the experiment so you’re always focusing on the right thing.

And I think what’s paramount to really anything is just focusing on the experiment to begin with, so you remain focused. Some things help underpin a good test: really straightforward, good design principles. So making sure that things are clear, simple, consistent, and easy to follow. I really encourage people to test variants. It’s really easy to commit to one thing and be confident about the first thing, but you never know, right? Just keep testing variants.

You just never know. I think if you’re continually looking at variants, you’re always going to be improving the usability, and you can make sure you’re creating the right experience the first time. So, yeah, always do variants.

It’s also powerful to leverage user testing platforms as a way to benchmark existing experiences. So, very much like an A/B test, you can use an existing experience as the control to create the benchmark, and then you can actually measure that against a prototype of a new experience and understand the before and after and the effectiveness of that work.

So, to see things through more effectively, I’d focus on a number of those things to build a solid foundation.
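Benchmarking a control against a variant, as David describes, usually comes down to comparing two conversion proportions. Here is a minimal two-proportion z-test sketch using only the Python standard library; the counts are hypothetical, not from the episode:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z statistic for the difference between two conversion proportions."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical benchmark: control converts 90/1000, variant 120/1000.
z = two_proportion_z(90, 1000, 120, 1000)

# Two-sided p-value from the standard normal CDF (via math.erf).
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(round(z, 2), round(p_value, 3))
```

A p-value below the conventional 0.05 threshold is what a "significant uplift" typically refers to; dedicated A/B testing tools run essentially this comparison with extra corrections.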

Layshi: You mentioned something that I would love to highlight, remembering my experience at different companies: the way you collaborate with other departments, and how important it is not to design in isolation, just believing that your hypotheses are the solutions that will actually help the world.

David: Yeah, it’s a collaborative exercise. It really is. It’s a team effort. And diverse opinions are very much part of that; as long as everybody can share and feel comfortable sharing those opinions, I think you get an effective result, because everyone’s been able to contribute. I think that’s important.

Layshi: I remember working with the sales department specifically; they had a lot of different requests for how they wanted to change the designs. And the conversation was, let’s do this variant. And they were happy, I was happy, the team was happy. Everyone is happy because we have something to prove, right?

It’s not only “I love blue.” It’s how blue actually converts with the elements that we have for interactions and experience, you know? So, yeah, I encourage the audience to work in collaboration with different departments to do better design.

And not only the typical departments; as I mentioned, it could be sales, but also other departments like customer service. I remember one time working with customer service on the implications of our designs for their workflow.

David: It’s so important. Yeah. I think having that feedback loop, it doesn’t matter whether it’s the sales team or the customer success team; they’re all feedback points. And the closer the loop, the better the result you’re going to get.

Layshi: Definitely! I remember and I think this is something that I do, but I definitely encourage people to also do it.

I am a people person. So I go around the office, I take my cup of coffee, and I start walking through the different departments. I’m talking to people, but at the same time I’m actually listening to some of the things that are happening in the company. For example: this is not working well; we need to change something on this landing page for this particular client. Okay, so do you have that problem currently? Let me talk about it; let me see what I can do for you. And every time I could pick up different things from different departments, it actually helped me design a better experience for the entire company.

So yeah, if you are not a people person and you don’t want to walk around the office, you can do it in different ways. You can chat about it and hang out.

David: You’ll find a way. I think empathy is a really powerful tool. So find your way of communicating and opening up; it’s a powerful thing to have.

Layshi: Now I want to ask you, how do you analyze and interpret the results of experiments to inform decision making? It’s a hard question, because sometimes we really get to, okay, I have my designs, now I have my numbers, and that’s it. I don’t know how I can actually level this up.

David: Yeah, if you have the tools available and you have A/B testing, that’s a real good shortcut. You can compare the performance of the variation against the control. And if you’re able to do some benchmarking in a user testing tool, that is a great way to validate a design before having to build anything.

So, in any situation, I think, try and find a way to leverage that. It’s a really powerful thing to do, because then you can effectively communicate how well things have gone and build the confidence in the teams and throughout the business.

Other things to really look out for are significant uplifts within the conversion, which might indicate that it’s working. Then dive into why you might be seeing specific behavior coming through and form a level of insight, because that insight could be really powerful: you could start applying it to different areas of the product. You may find something working in the acquisition flow; there’s no reason why it couldn’t be used later or midway through the experience. And when you’re analyzing things, you should also look out for usability issues.

You may have been confident in one of the directions and created a usability issue that you might not have foreseen. So pay close attention to the feedback that you get. Also, make sure the interaction patterns that you’re using are familiar and not stretching what people are used to.

It’s just a way to give you a deeper understanding of how different design changes actually do impact behavior as well. Yeah, I was even thinking maybe AI plays a bit of a part here. There’s a bit of a gap, I think, particularly with the analysis; not necessarily the integrity, because I think you still play a part in interpreting for your own user base.

But I think I’m quite excited to see AI play a part in this space, actually.

Layshi: Interesting that you actually mentioned AI. I think AI has a lot to do with how we can summarize important information in a way that informs more quickly. I remember I mentioned in another episode how we can humanize the information and make it more about the user. Because the majority of the time, when we report this kind of content, it’s numbers, and we actually work for users who are represented by numbers, and sometimes in our minds we lose sight of the fact that we are changing people’s behaviors and the way they use tools and products. So it’s super important to humanize the content.

And I think another important thing I can add: it’s not only how we can actually put a face on the percentages and the metrics and the information we’re reporting, but also how we can communicate this higher up and actually showcase that it will make a difference in the way the company works. I remember one time when changing a design had a big impact on an entire department’s workflow, and we actually reduced the amount of work time for that specific design because it was basically an automation, so users could make contact directly, and it was better for customer service.

The main idea is not only how we can humanize the results, or how we can condense those results with AI, but also how this can play a huge role in the way the company works, right?

David: Yeah, completely. And I think there are big changes happening with AI and customer service tools right now. I think that’s all built around efficiencies and, again, optimization. But we should never forget the human element. As a designer, you play a really important role as the voice of the customer. And I think that’s where we really bring a lot to the table, and we should never forget that. Always be active and bring these conversations to the table; especially when you’re overwhelmed with data, you play a really important part.

Ad placement: Let’s stop the conversation for a moment. Step into the future of UX research with Useberry. From user recordings to click tracking and user flows, our platform offers a variety of insights for better decision making. Elevate your research game with Useberry. Follow us on social and drop your thoughts in the comments.

Now, let’s continue with the show.

Layshi: Now that we have a little more light on how we can communicate things and how important our percentages are, I want to talk about a challenge for people working in testing environments: the importance of continual testing and iteration for maintaining optimal conversion rates. Because we are really into these elements, we are testing this amount of variants, but we do maybe one round, two rounds, and after that maybe higher-ups and stakeholders say, okay, I have what I have. Talk about that a little bit.

David: Yeah, for sure. I think it’s always easy to get caught up, deliver one thing, and move on to the next.

I think there’s almost an appetite to do more, deliver more. But we really need to pay particularly close attention to the outcomes we’re trying to create. Continually testing and iterating is crucial because user behaviors are always changing and the landscape’s always evolving. By testing and listening, we can adapt and help refine the things that we’re doing. It helps us stay ahead of the curve, and it helps us keep up as well, because with the rapid rate of change with AI, it’s really important that we continually test; it can so easily slip away.

But I think we should always be monitoring the work that we’ve delivered as well, and actually run surveys or continuously revisit, because that behavior is going to change. Ultimately, it will help us iteratively improve. The more we iterate, the more seamless an experience we can create, and that will drive conversion rates on an ongoing basis. Just because you fixed it once doesn’t mean it isn’t going to need to be optimized again. It certainly will in time. So always measure afterwards.

Layshi: I have a colleague, and I will not say his name, but we do this kind of thing together, which we call the excuses. Every time we really want to test an element, we say, let’s begin with the excuses. The first time, I remember, it was like, okay, let’s change images for illustrations.

Okay, that’s the excuse. And then it was like, okay, so this element will be localized for another experience. So now it’s different; now it’s not the same test, because we are doing localization. And we’re making it up; I mean, there are hypotheses, of course, but we are trying to be really out in front, innovating, trying to do better with conversion optimization rates. So we need an excuse to iterate every time.

David: It’s good. It’s a great way to do it and a great way to make it part of the process, I love it.

Layshi: Just a tip and trick there. The excuse.

David: The excuse, something everybody should adopt.

Layshi: Yeah. In time-sensitive situations, certain techniques prove more effective than others, such as unmoderated experimentation. And I know that companies need everything from yesterday; companies need numbers right now. So I want to talk a little bit about the advantages of unmoderated testing and how we can actually get rapid feedback compared to other, more traditional testing methods.

David: Yeah, I think it’s a great topic, and I’m a big advocate for unmoderated testing. It’s great to the point where you can get access to participants really quickly as part of a user panel, and I think it’s a way to shorten that feedback loop. So, ultimately, it’s the speed you get from being able to do an unmoderated test, and you can iterate quickly from the test.

I also think it’s a great way, if you can run some unmoderated testing, to easily play back the results live, almost in a matter of hours, and it just builds the confidence of your surrounding team and really helps you engage better with your stakeholders.

Ultimately, again, you’ve got access to scale as well, and probably a more diverse participant group than you might be able to get immediate access to without unmoderated testing, unless you do have a panel that you’ve built up over time. I really do believe you’ve got the advantage of scale as well. I think it’s really good to be able to leverage that, quickly test externally, and remove any biased feedback that you might have in teams, removing that confirmation bias overall.

That said, I wouldn’t over-index on it, as if unmoderated testing is the go-to; I think it’s certainly a balance of both. Leveraging unmoderated testing allows you to be a bit more pragmatic about when to dive deeper using moderated testing: leverage unmoderated to save time, but also unpick more insights using a moderated format, which helps you build more empathy for the user at the right moment.

Layshi: I remember doing unmoderated testing, and as I mentioned before, we can be in a fight, like, oh my God, I was thinking of this. As you mentioned, confirmation bias: we have this idea of how the user will use our designs, but when you are actually not there, you notice the things that people maybe have as a mental model, or how people really want to navigate or use your products or services, and this testing is super important. And as you mentioned, have a balance, right? Because you also need the kind of interviews where you can actually probe the person, ask follow-up questions, and understand other types of motivations. Moderated testing has its beautiful insights too.

David: Yeah, completely! I think both are extremely valuable. It’s just another tool designers can have within their toolkits, whether to build confidence quickly in the organization or to pick and choose the right form of testing; it’s just really valuable.

Layshi: David, I have a question for you. It’s a question that the majority of my students ask me every time. They are really into testing; they are trying to start their first project and do unmoderated testing. But the hardest part for them is actually understanding how to give the user the information that guides them to do the tasks they need to, and how to build a guide of the things they need to mark and check.

David: Yeah, I think you’ve always got to be working on structuring an unmoderated test. You might not always get it right the first time, and again, you continually refine the tests that you’re doing. It first starts with screening, making sure that you’re focusing on your user.

Is that user your user persona? That’s going to help you get more meaningful feedback that’s going to be relevant for the business. Also, you’ve got to be really clear in your instructions, being very step by step, really granular.

If you set up a test and then play it back and run it yourself, you may catch a few errors in the first or second pass, but you’re always going to be making it better and better. And at the end of that, you need to make sure that you’ve got a clear action to take within the test, and you’ve got to be careful that it is not leading in any way.

You can also follow up with surveys to try and dive a bit deeper, if you didn’t necessarily get the level of insight that you wanted, or if there were some areas that came out of the test that you needed to dive more into.

And I think that’s really important. By testing regularly yourself, you’re making sure it’s easy to read and intuitive. These things help you facilitate the tests, which can hopefully open the door to more open feedback.

Layshi: Definitely. I always try to tell them: think about this as a resume. When you are writing a resume, you need action words in the bullet points explaining your education or your past experience. So, when you are creating an unmoderated test and you want to give instructions, start with actions. What kinds of actions or interactions do you want the user to do? That makes the way you create the whole test more directional. Because that’s another thing I always mention: you are creating a testing experience for the user experience that you want to test.

David: Completely. Another thing is, base it on real scenarios and try to reflect the real experience. So if you’re going to structure the test, take a real, task-specific approach. This really helps you get a better, more focused result.

Layshi: I have a question, and it’s related to topics that I love: diversity, equity, inclusion. And we talked a little bit about confirmation bias, right? But how can you actually mitigate the risk of bias when designing experiences and ensure fair and unbiased outcomes? Because that’s super hard.

David: It is, yeah. I think you’ve got to learn more about yourself and be really self-aware. That takes practice and time. Sometimes it’s easy to fall into a subjective state of mind when things are going to plan, so the bias can always creep in. You’ve got to avoid letting your own assumptions become that influencing factor. So, minimize the leading language that might be creeping in; step back and really reframe questions in the right way. Again, the screening of participants is really important.

Make sure that you’re focusing on the right demographics, and if you’re unable to remove your bias, get some help. There’s nothing wrong with asking for help to rewrite your questions or reframe them in a way that removes the bias. It’s really important to try and mitigate any form of bias that might creep in the first time; I think you’ll be more aware of it the second time.

There are other things that you can try as well. Rather than just setting up the tests and reframing questions, you can randomize the order of tasks within tests. You can use different mixed methods: by combining think-out-loud studies with a structured survey, you’re going to get a mix of qualitative and quantitative feedback. That’s a really great way to balance bias and remove it.

Always be aware that when we’re running these tests, it’s maybe not going to be perfect, and there’s always room for improvement. That’s when you get the opportunity to continuously improve the way you’re running tests, improve how you’re working together as a team, and even improve how you’re playing back the work that you’re doing as a team. Everybody plays a part in trying to remove bias.

So, it’s just about trying to make things fair and as unbiased as possible. By doing that as a team, you can get more open feedback, and it’s hopefully going to reflect more accurate insights through the tests that you do as a team.

Layshi: I love that you mentioned: if you have a bias, ask for help. That’s one of the hardest things in terms of identification: I have this bias; how can I work around it? And I think this is another hard thing, but I want to mention it because I think it’s super important: representation. There are not a lot of companies that are actually doing this well, which is why it’s so important to mention. When you have a diverse group or team in your company, you have the opportunity, before you actually launch a test, to get the team around it.

Actually, let’s say, for example, and this happened to me at a couple of companies: I start creating a test and I have my own biases. But when I shared the test with another colleague before we released it to the public, he said, hey, wait a minute, did you think about this and this and this? Because he is aware of different things that for me are normal but are actually biases. And that’s why representation is so important, and collaboration with different mindsets, different intersectionalities, and different kinds of opinions.

David: Yeah, 100%. I think also dialing into cultural differences as well. That’s why it’s really important to play out a test within the safety of your own team before you go a bit more public with it. But you just need to step back and realize it’s more input; it’s also feedback. Don’t take things personally: the result is going to be much more profound than where you started, and it’s always going to inch you much closer to a better test in the end. Being open is a very tricky place to get to, but I think it’s actively encouraged.

And even if it’s not a matter of asking for help, maybe it’s just asking, what do you think? Getting a second opinion is always worth a million dollars.

Layshi: Reach out, designers; don’t work alone. I think that’s the summary of this episode.

David: Break the silo, break the silo.

Layshi: Yep. So, David, we are almost at the end of this amazing interview, and thank you so much for sharing all your knowledge. Before we go, any tips and tricks that you want the audience to add to their notebooks?

David: Yeah, because we’ve been focusing on conversion optimization: don’t forget about empathy. It plays such an important role in everybody’s work, in all aspects. Put yourself in the shoes of the users and empathize with the experiences that they go through. It’s going to help you uncover so much insight and also really help you differentiate the product that you’re working on. So if you’re designing experiments or even making business decisions, always prioritize empathy and find a way to include it.

Layshi: And also, every element is worth testing.

David: Yeah. 100%. There’s always an excuse.

Layshi: There’s always an excuse, exactly! David. Thank you so much for being here on If U Seek. I’ve been having a great time. Thank you for sharing your knowledge with the audience.

David: Thank you very much. I’m glad my voice didn’t break.

Layshi: We survived the interview. Well, everyone, thank you so much, and we’ll see you in the next episode of If U Seek.

Thank you for joining us on If U Seek. For more exciting content, follow us on our social channels. Your reviews mean the world to us, so don’t forget to leave one. And of course, hit that subscribe button on Apple Podcasts, Google Podcasts, or Spotify to stay updated on our latest episodes. If U Seek is a platform for discussions and personal insights. The opinions presented by guests are independent and do not represent the official position of the host, Useberry, or sponsors.

See you on the next episode of If U Seek.
