
IF U SEEK: Exploring AI’s Impact on Research


Discover AI’s impact on research with Andrew Gordon, Senior Research Consultant at Prolific, discussing efficiency, biases, and methodology transformation.

Welcome to another episode of IF U SEEK, the podcast where we explore the latest trends and insights in UX design, research and business. Today, we have the pleasure of speaking with Andrew Gordon, a Senior Research Consultant at Prolific, who is well-known for his expertise in cognitive neuroscience and misinformation reliance.

In this episode, Andrew talks about the transformative effects of AI on research and researchers alike. He sheds light on how AI enhances data collection efficiency and addresses biases inherent in online research practices. Through our conversation, you’ll gain valuable insights into the role of AI in modern research and its implications.

You can find If U Seek on Spotify, Apple Podcasts, or Google Podcasts.

Preview

Key Points in This Episode Include:

  • The significant role AI plays in enhancing the efficiency of gathering and analyzing large volumes of online data for research purposes.
  • The various challenges and opportunities that arise when using AI to cross-reference and verify information from diverse online sources.
  • Practical insights on how AI integration impacts traditional online research methodologies, including its effects on research outcomes and data quality.
  • Strategies to mitigate biases in online research by optimizing the use of AI and ensuring diverse representation in data sets.
  • Tips for effectively communicating research findings and methodologies to diverse audiences, ensuring clarity and engagement.

Transcript

Layshi: If you seek to shape the future, listen to those who design it. Welcome to “If U Seek” by Useberry, where expert voices guide us to UX wisdom. Layshi Curbelo here, your host of the journey of If U Seek. If you are seeking a different way to learn and understand users, this is the podcast for you.

Maybe you are questioning, why If U Seek? Picture it as an open door to curiosity. In every episode, we will strive to explore and gain deep insights from experts shaping their domains. We want you to feel enlightened, educated, or even inspired after each episode. The idea is to foster connections within the UX industry.

Also, remember: from user recordings to click tracking and user flows, our platform offers a variety of insights for better decision making. Elevate your research game with Useberry. Discover more at Useberry.com. Follow us on social and drop your thoughts in the comments.

Today, we are thrilled to welcome Andrew Gordon, a Senior Research Consultant at Prolific, with a PhD in Cognitive Neuroscience from the University of Bristol, specializing in misinformation reliance, and a postdoc at the University of California.

His transition into consulting allows him to combine his deep knowledge with a passion for impactful research on a broader scale. A prolific author with numerous peer-reviewed publications, Andrew is a recognized speaker at academic and industry conferences, where he shares evidence-based best practices.

Today, we delve into the topic of exploring AI's impact on research. So if you're eager to learn how to leverage AI effectively, stay tuned for this enlightening conversation. Let's start the show. Hello everyone. I'm the host of If U Seek, and I'm super happy to be here today with Andrew Gordon. How are you, Andrew?

Andrew: I’m doing all right. Thanks.

Layshi: Super happy to have you here on the podcast. How has your day been? I don't know if this is your morning; it's morning for me. How about you? I'm in Puerto Rico, the Caribbean island.

Andrew: Yeah, it's early afternoon here in a very, very gray and rainy southern England. But we've got our four-day weekend coming up, so it's, yeah, the end of the week.

Layshi: Well, that doesn't sound too bad. Actually, it sounds amazing, breaks! So today's episode has a lot of people wondering what they can learn from you. It's a hot topic, definitely. And today we'll be talking about exploring AI's impact on research practice.

Are you ready to share some insights?

Andrew: I'm very, very ready. Yeah, I think it's definitely an area that's on a lot of people's minds at the moment. So yeah, happy to give what insights I can.

Layshi: As we say in the Caribbean, we have a little phrase that says, like, "Un pajarito me dijo."

Basically, it's "a little bird told me" that you are a Senior Research Consultant at Prolific, and I would love to know what a typical day looks like and what you do in your role.

Andrew: Yeah, yeah, for sure. So I think it's probably best if I start off by explaining a little bit about what Prolific is and what we do.

So essentially, what we are is an online marketplace that connects researchers from academia, AI, and corporate research with participants all around the world. We've been going about 10 years now; we grew out of the University of Oxford when our two co-founders were struggling to find participants for their research, so they built a little app to do exactly that. And now, 10 years on in April, we've got about 150,000 people all around the world in 36 countries and about 30,000 active researchers, sorry, on the platform.

So quite a big change. In terms of what I do at Prolific, like you say, I'm a Senior Research Consultant. Really, no two days are the same for me. I currently lead the Prolific research services team, which basically means I work very closely with researchers across different disciplines. I work on their projects with them, or I even manage projects for them fully. So most days you'll usually find me buried in creating bespoke experiments, managing participant pools, recruiting niche participants, writing analysis code, writing reports, all manner of different things. But given the breadth of research that happens on Prolific, pretty much every project I run is totally different, which is quite nice for me, because it means every single day is different.

And alongside all that stuff, I actually came to Prolific from academia, so I do still continue to run my own streams of research under the Prolific banner, mostly focused on research methodologies for online research. So I'm interested in what the best predictors of data quality are and how we can get the most out of participants. And more recently I've been really, really interested in AI, the impact that AI is having on research, and how we can best leverage AI going forward.

So, a little bit of everything. I do a little bit of public opinion polling as well. So yeah, it's a pretty varied role and no two days are really the same.

Layshi: Apparently you do a lot of things. You’re almost like an octopus.

Andrew: Yeah, I like to think of it that way. Yeah.

Layshi: I mean, this is super nice, because you have the opportunity to actually learn from different things and not get bored doing just one thing at a time.

And you can also learn from people. I remember when I started my career, trying to explain to my dad what a typical day looked like and the things that I do. And he was like, I don't get it. You study people?

Andrew: And I get that an awful lot. Yeah.

Layshi: Kind of, kind of.

Andrew: Yeah. It's definitely hard trying to explain, honestly, an online marketplace for research to people. Most people aren't really familiar with that concept. So when people ask me what I do every day, I do struggle a little bit to explain it.

Layshi: You study people, Andrew.

Andrew: Essentially, essentially. Yes. Yeah. I study people in different contexts.

Layshi: Just kidding. Just kidding. As a UX designer, as you can imagine, I also need to do research, of course, and I try to do the best I can with the tools available to me. Designers out there currently have access to a lot of information, and right now, of course, they also have access to AI.

Considering this, how does AI assist in gathering and analyzing large volumes of online data for research purposes? I think it's kind of a good tool, but sometimes we have a little bit of fear about how we can actually use it in a correct way. What are your opinions on that?

Andrew: Yeah, definitely. The one thing I will caveat at the start of my response is that I find the pace of change in AI and what it can do a little bit overwhelming. So I'm always conscious that what I say might be out of date a week from now, or even by tomorrow. So, you know, worth bearing that in mind. But in what I do, I'm starting to really see the impact of AI on the research world in general, right? So, you think about research 15 years ago: we were basically using pen and paper, you know, before online data collection was possible.

We then moved to online data collection, which kind of revolutionized research again. Suddenly, researchers were able to have much larger samples from much more diverse backgrounds and do their research much faster. And now you've got this AI revolution, this third wave of revolution happening as well, where AI is impacting every single stage of the research pipeline.

So to take it in stages of where I've seen the impact: at the early stage of research, when you're exploring the space and trying to develop a literature review, there are tools out there that use AI to make that process much more condensed, right? AI can search the literature space for you just through natural language questions, and you can interact with that. And beyond that, large language models are kind of ideal for summarizing content and developing themes and clusters around things. So that whole ideation stage of research has just been massively, massively condensed.

And then you get to the actual survey itself, right? A quick search online will show you all the different types of AI that can help in the survey design process. So now you can actually just prompt an AI with a survey idea, and it will generate it entirely for you, right?

There are even tools out there that can do qualitative research, so you can have an AI interview your participants for you. You can get synthetic responses from AI to test the space of an idea before you actually go live with human participants. When you get to the analysis stage, large language models are pretty proficient at doing that. They can suggest analyses, and they can write code for you to do analyses. It still requires significant direction on the researcher's part, but it's getting more accessible and easier every day.

And then finally, and I guess this is where the concerns about AI start to creep in a little bit more, AI can write content for you, right? A lot of what researchers do is run research and write papers. Now, if AI gets involved at that paper-writing point, that's the point at which we probably want to be slightly concerned. There are tools out there like Grammarly, which is AI, right, but it's not exactly a bad thing. It's just helping you write better.

But if someone's going to the ChatGPTs of this world and actually asking it to write a method section or an introduction to a paper, that's when we probably should start to be concerned. So, yeah, it's basically pervasive throughout the research cycle. And to be honest, I only see one direction of travel, which is that it will become much, much more embedded as time goes on.

I don't really think that's a bad thing. It's making research easier and more accessible to a much larger part of the population. And broadly speaking, the more people you have doing research, the more insights you have and the more data you have to make decisions. The big unknown for me is, given that it facilitates so much more research, is it facilitating largely low-quality research or more high-quality research?

And I guess time will tell on that one.

Layshi: One of the things that you mentioned that is super important for me to highlight is the way that we use tools. As you mentioned, Grammarly has AI, but it's a way to actually improve the things that we need to do, not necessarily do those things for us.

Andrew: Right. Exactly. Yeah.

Layshi: And another important thing is that right now, AI is actually helping, I don't know if this is the correct word, but it's giving everyone the opportunity to do things that maybe they didn't have the knowledge for at one point, but that are now easier to do, and they are capable of doing them because of AI.

And I think it's making professional work available to people who don't necessarily have a professional education but maybe have a technical background, so they can do it in a way that will be more efficient for companies. It's a difficult topic because of opinions, but to be honest, I think one of the things we can say for sure is that it's speeding up the way we work and the way we can produce things. And I think the important thing is how we measure that final result and what type of result we actually want to have. Is it a result that is completely automated and doesn't have any human connection, integration, or sense of belonging? Or is it something we are actually thinking through, where we are really mindful of biases, mindful of information, and really conscious of the data that we are using?

So I really like your approach. Some people say, and I've heard a lot of conversations like this, oh my God, we will not use AI. Well, I don't know if that is the correct way to actually see it.

Andrew: I don't think so. I think, you know, like anything, AI can be misused, right?

I think it's all about how you use it. I see AI as a way to augment what we're doing and to improve and speed things up. If you're using AI to fully replace something which should have a human involved, I think that's where the problems start to come in. And I think there is a risk that people are potentially too trusting of AI, not generally speaking, but of certain bits of AI output, and maybe using it without thinking through how they should be involved in that pipeline with the AI.

Layshi: My favorite example, and I love it: I remember my grandma actually saying to me when I started school, she was really into, you need to practice mathematics. And I was super bad at mathematics. And I remember one time, when I was almost a teen, I said to her, I have the calculator now, I don't need to do these things in my head.

And she was so pissed off. She was like, oh my God, you will forget everything. And to be honest, that is AI at this point. It's a tool that will help you, but you need to know the process in order to get the results. You need to craft a prompt correctly to actually get good results, because you can have a calculator but still definitely get a bad answer.

Andrew: Oh, a hundred percent. I mean, a good example of that is, you know, you could potentially use an AI to run some pretty complex statistical tests on some data that you don't understand, and you're going to get a result out. But without fully understanding why that test is being run, you know, what the process is to arrive at that conclusion, there's a real risk, right, that whatever you decide based on that is going to be wrong.

So I think, yeah, it's that concept of making sure you are a cog in the AI pipeline, right? Don't fully outsource everything to AI. Make sure you're part of it.

Layshi: Yep. Yep. Now, as we know, we need to be careful even when cross-referencing; things can change and will keep changing with the use of AI, and so will how diverse, representative, and trustworthy the data online currently is.

And we know that sometimes data is not so trustworthy. So let me ask you: what challenges arise when using AI to cross-reference and verify information gathered from various online sources?

Andrew: Yeah, you're definitely right to call this out. I think AI is not infallible, right? Most people will probably be aware of things like AI hallucinations, and these become very problematic, especially when you're talking about things like cross-referencing or verifying information. A hallucination is essentially incorrect information that an AI produces. It could be caused by a whole number of factors, but the result is that the AI will provide you with a piece of information, potentially in a very authoritative and clear way, but the information could be completely wrong.

A good example of this is when you ask some common large language models for citations of published research. So you might say, can you give me some examples of papers that were published looking at X, Y, and Z? Sometimes what you'll find is they'll actually produce a full citation of a paper: the authors, the name of the paper, the journal, the issue number, and even the page range.

But it doesn't actually exist, right? That is an entire hallucination. It looks completely legit, but it doesn't exist. Now, the good news is that there are tools out there which are better at this kind of thing, for instance, referencing and validating information. One I would call out that I use very regularly is called Perplexity AI.

And that's a really good tool if you are exploring a space. You ask it a question, and it will provide data back with references that you can actually verify, which is really great. But it is a problem more generally with LLMs. And part of the issue as well is that most LLMs and AIs don't really have live access to the internet.

So, for instance, ChatGPT: its training data goes up to January 2022, so it doesn't know anything that's happened beyond that. So if you're asking about things beyond that point, it's either just going to say no, or it's going to give you completely incorrect information.

So it's kind of incumbent on the user to understand which AI should be used for which task. And more than that, I think it's really important for everyone to keep in the back of their mind what AI is: it's basically a predictive model of data, right? These models don't evaluate the truth of the information they provide at all.

At a very, very basic level, what these language models are doing is basically predicting the next word in the sentence they're giving back to you. They can't evaluate truth yet; one day they might be able to, which is a whole different can of worms, especially in the kind of post-truth world we're living in at the moment.

But the fact that they can't do that means that you basically need to, to paraphrase Ronald Reagan, trust but verify the data, right? So don't just take it at face value. Make sure you know that what the AI is producing for you is correct.
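To make that "trust but verify" idea concrete, here is a minimal sketch, our own illustration rather than anything discussed in the episode, of one way to sanity-check AI-suggested citations: query an open bibliographic index such as the CrossRef REST API and see whether a close match to the suggested title actually exists. The example title below is hypothetical, and a match is only a weak signal that the citation is real; you would still want to open the paper itself before relying on it.

```python
import requests

def citation_exists(title: str) -> bool:
    """Check whether a paper title suggested by an LLM has a close match
    in the CrossRef index. A loose sanity check, not proof of correctness."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.title": title, "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    # Accept the citation only if a returned title is a near-verbatim match.
    return any(
        title.lower() in (item.get("title") or [""])[0].lower()
        or (item.get("title") or [""])[0].lower() in title.lower()
        for item in items
    )

# Hypothetical title, the kind of thing a model might invent.
suggested = "A Study That May Not Exist: LLMs and Citation Hallucination"
print(citation_exists(suggested))  # likely False if the title was hallucinated
```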

Layshi: Yeah, this is almost like a soap opera. When you are talking with an AI, it's like, I know that you told me something, but I don't know if I can trust you.

Andrew: Yeah, yeah. And you'll find, you know, I've found myself in arguments with AI, saying, that's wrong, can you give me the correct information? And it will just keep pumping out incorrect information. It's probably a limit that won't exist on AI for very much longer, given the pace of change, but for now it's something that everyone really needs to be aware of.

Layshi: And also, since it's a trained model, it's super important to provide it with information if you want to use it as a tool. This is another thing that I've actually been experimenting with myself.

A lot of people right now are in the space of designing prompts, and it's not only, give me this and provide it in this format. It's more like, okay, I have all this information, and I want it in this format, with this outcome, with this kind of reference, following this specific standard operation, whatever. And that is the way you can start to really maximize the AI as a resource.

And there are actually a lot of opportunities right now for designers working only on prompts. It's super interesting how much you can actually do by designing a good prompt. Even challenging the AI: give me, I don't know, three different versions of this, but also point out the differences between each version, and tell me which one, in your opinion, best suits this particular case.

Andrew: Yeah, for sure. It's a really good call-out, because I think for 99 percent of people, the way they interact with AI is on such a surface level, right? They might go to ChatGPT and ask it a question, whereas there are actually much more advanced ways you can interact with these models, like you just called out. And the better you get at doing that, the better the output is going to be, and the more useful it's going to be for you. So, yeah, there's crafting the correct prompts, knowing about things like the temperature of responses for AI, and those kinds of things.

As soon as you start to grasp those ideas with AI, you'll find that it becomes significantly more useful and much more reliable at the same time.
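As a rough illustration of the kind of "more advanced" interaction Andrew alludes to, here is a minimal sketch assuming the current OpenAI Python client; the model name and the prompt are placeholder choices for illustration, not recommendations from the episode. It pairs a structured prompt (multiple versions, differences, a recommendation) with an explicit, low temperature setting for more deterministic output.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = (
    "You are helping design a UX survey about onboarding friction.\n"
    "Produce 3 alternative versions of a screening question, "
    "list the differences between them, and recommend one with a reason.\n"
    "Return the answer as a numbered list."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",          # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,              # low temperature for more deterministic output
)

print(response.choices[0].message.content)
```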

Layshi: Now I want to ask you, how has the integration of AI impacted traditional online research methodologies? I know that, as you mentioned at the beginning, we started with paper-and-pencil research, but can it be a huge change just adding a little bit of AI?

What is your opinion on that?

Andrew: Yeah, I mean, significantly is what I'd say. As I mentioned before, it's now present in every stage of the research pipeline, and in terms of how it's actually impacted online methodologies as a whole, there are three ways I think about it.

So the first is speed, right? It's basically sped up everything we can do, so things that used to take days or months can now be condensed into hours. A good example again is building out a survey: you used to have to define the survey in advance question by question, set it up with all your logic, and then iterate on top of that, and so on and so on.

Now you can do that with a single prompt, right? It doesn't mean the process ends there, although I suppose it could, but it dramatically shortens the time the process takes. So we've really sped up the pace at which research can be done by using AI. The second, and probably the biggest, impact it's had, though, is flexibility.

It’s really enabled researchers to do things that they just couldn’t do before. So I touched on this earlier, but a nice example of this is qualitative work. So imagine you were going to run a series of interviews, right? Back in the old days, the pen and paper days of research, you would need an awful lot of time because you would basically need to be running all those interviews yourself.

Right? One after the other. And basically what that meant was it really limited the size of the sample you could get, because who has that much time, and the breadth of insight you could actually achieve. Nowadays, there are AI-powered tools that can basically replace you in that task, right? I've been playing around with some recently where you essentially give the AI a statement of work to act as an interviewer on your behalf.

And it will interview a participant, respond to what the participant is saying, adapt, and all those kinds of things. And really, by removing the need for the human interviewer, you've now got a situation where qualitative work, which used to be much slower and run on much smaller samples, can now be done at the pace of quantitative work, which is a huge change, a kind of sea change in research, to be honest.

So flexibility is massive. The third way I think it's impacted research, which is probably the one we don't want to think about so much, is potentially the validity of research. AI has taken a lot of decisions out of researchers' hands. Take the survey creation example: there's a real risk that whatever the AI is generating or doing for you doesn't actually do what you want it to do.

So how do we know an AI generated survey is any good? How do we ensure that the AI interview is asking the right questions at the right time? So when we get into the situation where we’re over relying on AI, like I said before, kind of starting to replace us fully, there’s a very real risk that’s actually going to lead to a drop in research quality.

And the other side of this validity coin is not on the researcher side but on the respondent side. AI tools are really widely available now, right? Anybody can use ChatGPT; there are no barriers to that. There is a real risk that this starts to impact how people respond when they're taking part in online research.

So there was a study recently which actually flagged that 40 to 45 percent of responses were AI-generated. Obviously, if this is a problem that continues to grow, it's clear there are pretty big issues for the validity of the research being conducted, because we might be moving to a situation where we're not analyzing human responses, we're analyzing AI versions of human responses.

But as with anything, it's an arms race, right? As this becomes a larger problem, there will be more and more focus on solving it. So I still remain totally optimistic. I don't think it's a problem that invalidates any online research tool; I think it's just about knowing the best way to solve it, and researchers knowing what best practices they can follow to avoid it.

Layshi: And just for one moment, let's stop the conversation. Don't let biases dictate your UX research outcomes. Useberry's randomization feature levels the playing field, providing accurate and reliable insights. Set up studies effortlessly, analyze with confidence, and make informed decisions at lightning speed. Dive into Useberry.com, join us on social, and share your thoughts in the comments. Your journey to enhanced user experience begins now.

Now I want to talk about one of my favorite topics, and this is something that I always try to push a little bit: diversity, equity, and inclusion, of course with a little twist of online research. The other day I was reading an article about bias. We know that everyone has biases; it would be a lie to say, no, I don't have a bias. Because actually, bias is a way our minds act: we need to make decisions so quickly that we rely on the mental models we have. And this is not just me defining bias; this is actually something you can fact-check.

I'm not using AI for this reference. For people who are listening to us, did you know that sampling bias can affect online research outcomes? This is true. It occurs when data collection is not random, leading to misleading conclusions.

So that being said, what are the common types of bias that you encounter in online research, and how can we mitigate them?

Andrew: Yeah. So let me start off by saying that there are so many different types of bias in research, whether you're running it online or offline, and there are biases outside of research that you can bring into research as well. So there's no way to cover them all in this conversation.

That being said, I think there are some, some certain biases which are potentially significantly more relevant when you’re doing online work, and the first is exactly as you picked up on, which is sampling bias. So, you know, back before online research was a thing and everything was in person, you were inherently limited in the sample that you could get, right?

So sampling bias was still an issue, but it was kind of less of a factor because you didn’t really have a choice of who was in your, your studies a lot of the time. Now that you have access to hundreds of thousands of participants around the world in different countries, different demographics, it’s now really incumbent on researchers to make sure that they are sampling correctly and that the sample is correct for their research question.

The idea underlying sampling bias is, you don’t want a situation where certain participants have a higher or lower chance of being selected than others, because then the data that you get out is going to be skewed one way or the other.

So, an example: you're running a study on youth opinions of TikTok, and you accidentally survey a bunch of people over the age of 50. The data that comes out of that study is going to be pretty useless, right? The study data will be totally skewed by having that incorrect sample included. That's probably a pretty unrealistic example. A more realistic one is if you're doing something like a new product test and you want to find out the opinions of the general population.

In order for your results to be reliable, you basically need to ensure that your sample is a microcosm of the larger population. And that's the real way you mitigate sampling bias, right? You look at your overall population: who is it you want to study, who is the population of interest? Then your smaller sample, your research sample, should reflect them as much as possible.

It's not possible to get a one-to-one mapping, but the sample should reflect them in terms of demographic aspects like age, gender, ethnicity, and these kinds of things. And that's really the way you avoid sampling bias. It becomes even more of an issue, I think, with AI and training AI. A lot of the training of AI is done with what's called reinforcement learning from human feedback.

So an AI model will generate output, humans will then evaluate that output on certain factors and feed that back, and then the model will be updated, right? That's how the model learns, by taking those human responses into account. Now, it's obviously really important to get as diverse a set of participants as you possibly can for that process.

So you want to get as many ages, genders, ethnicities, educational levels, etc., as possible reflected in your training pool. We actually did some research on this with Professor David Jurgens and Jiaxin Pei from the University of Michigan. They built a data annotation tool called "Potato".

And we used that to sample a very diverse set of participants and ask them to rate things like the toxicity of a statement, the offensiveness of a statement, the politeness of an email, the kinds of things that would be used in AI model training. And fairly predictably, what we found was that different demographics responded differently to these questions.

So younger people had a totally different perception of politeness than older people, and males had a different perception of toxicity than females. And what this really highlights is that if you're training an AI, you want all of those different perspectives included, right? You want the biggest, broadest set of perspectives you possibly can, because then the AI that comes out of it will take all of that into account and be much more usable.

The other bias I just want to touch on quickly is self selection bias, which is something pretty unique to online research a lot of the time, because it occurs when people are given the choice to participate in research. So there’s a question as to whether the people who are choosing to take part in research are different to people more broadly.

Right, so with online research, you have people signing up to a platform to take part in studies. The question then becomes: how reflective of the broader population are those people? Anecdotally, there's evidence that they tend to skew younger because they're more tech-savvy, and they tend to skew slightly more left-wing, but the evidence still remains anecdotal. And I think it's just something that researchers need to be really aware of when they're doing online research. So I'll stop there; there are lots of other biases we could go into.
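To make the sampling point concrete, here is a minimal sketch, our own illustration rather than a Prolific feature or anything described in the episode, of proportional quota allocation: given population shares for the demographic groups of interest, it computes how many participants to recruit per group so the sample mirrors the population. The age brackets and shares below are hypothetical.

```python
import math

def quota_allocation(population_shares: dict[str, float], sample_size: int) -> dict[str, int]:
    """Split a target sample size across demographic groups in proportion
    to their share of the population of interest (a simple quota sample)."""
    quotas = {group: math.floor(share * sample_size)
              for group, share in population_shares.items()}
    # Hand out any seats lost to rounding to the groups with the largest shares.
    shortfall = sample_size - sum(quotas.values())
    for group, _ in sorted(population_shares.items(), key=lambda kv: -kv[1])[:shortfall]:
        quotas[group] += 1
    return quotas

# Hypothetical age distribution for a "general population" product test.
age_shares = {"18-29": 0.21, "30-44": 0.26, "45-59": 0.25, "60+": 0.28}
print(quota_allocation(age_shares, sample_size=400))
# e.g. {'18-29': 84, '30-44': 104, '45-59': 100, '60+': 112}
```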

Layshi: It's just a fascinating topic. Sorry, I think I could spend three days talking with you about this. For example, one of the things that blows my mind is the data sets. Right now there is a discussion out there about how diverse the data sets feeding and training AI are. And I know a big company, everybody knows the name, we will not say it, that is actually creating open-source data sets from different languages, religions, and people, even metrics and faces. And this is super good, in a way, to make those data sets more diverse, but it's also questionable how they get this information, how these people are participating, and how they are trading their information in order to participate.

So yeah, it's a difficult topic, but it's super important, because sometimes it's this interplay of authority, power, and how we can include people from diverse backgrounds, but maybe not necessarily in an equitable way. That is super important, too.

Andrew: Yeah, I think so. And it also goes to the question that there are an awful lot of existing data sets online that you can use to train models, for instance, and mostly they don't include any kind of demographic information about who has given those opinions.

We found some tentative evidence that responses in one data set we looked at correlated very highly with the responses we got from older white males and didn't correlate highly with any other demographic, which suggests that those data sets are actually being produced by a pretty small subset of the population, which should be a concern.

And I think, honestly, the solution, other than always generating data with a diverse set of participants, is that this data should be labeled, right? It should show who the data has come from, so that can be taken into account at the point the data is used. I think that would go a very long way toward solving this problem as well.

Layshi: Great answer, I really like it. So now, before we go, I want to leave the audience with some tips and tricks on online research methodologies, taking advantage of having you with us, of course. First of all, how can researchers optimize the user experience of online research platforms to minimize participant dropout rates?

This is super important. I know a lot of UX people who say to me, oh my God, it's super hard to find at least 25 participants, and from those 25, I only have 10 that are good; the others drop out. How can we work with this?

Andrew: Yeah, it's a great question. To be honest, it's something a lot of people ask me about in my day-to-day. I also see a lot of unoptimized research that doesn't really think about the participant experience, and I think that feeds into this quite a lot. When you put something out that isn't optimized to maximize retention of participants, it can lead to participants leaving partway through, which is obviously a massive problem if you're running longitudinal research. They also might just zone out halfway through, right, and stop caring about their responses, which becomes a problem for the validity of the results.

Or, I guess at the very extreme end, they might get frustrated and intentionally sabotage their responses. Unlikely, but I've seen it happen. So when I'm working on a project and designing the methodology for it, there are a few core things I think about to mitigate the risk of dropout.

So the first is really just providing enough information to participants right at the start, so they have enough information to decide: do I want to take part in this study? Does it sound like it's for me? If the task they're going to do is tedious, which, let's be honest, a lot of tasks, especially in psychology, can be, they may decide it's not for them and never begin, right, which is exactly what we want. We don't want them to begin the study and drop out halfway through.

We want them to say, actually, that's not for me, I'm not going to take part. So essentially, we need to give the participant enough information to make that decision, which sometimes just isn't the case. Second, once they are in the survey or the task, whatever it is, you need to really optimize that task to make it a good experience for them.

So it's thinking about things like keeping them informed of how far through the task they are and how long they have left, giving them the opportunity to take breaks, and, most importantly, just making the task engaging and fun. There's a whole literature out there on gamifying research and how much of a positive impact it can have on data.

Even things as simple as adding a bit of variety to what the participant is doing, even if you're not going to study those different things, just a little bit of variety can make a massive difference in retention rates. Really, you should be thinking: would I want to take part in this research? And if I did, what would keep me engaged in it?

Third is payment. It does kind of come down to it: you want to make sure participants are being fairly compensated for their time. It's one of the principles Prolific was founded on, that people should get ethical payment for their time, because they're then more likely to provide high-quality data, and the data really backs that up. If you're paying participants very little, what you're signaling to them is that you don't care about their time, and you're incentivizing them to get through your study as quickly as possible so they can get on to the next study and maximize their earnings, right?

You want to avoid that. Giving them fair payment goes pretty much all the way to addressing it. And finally, if you're conducting longitudinal research specifically, so you're having multiple waves of studies coming through, what you really want to think about is how you are going to incentivize those participants to stay in something over time, right?

So it's much easier to keep participants in a single experiment than it is to keep them coming back week after week for six months. Again, it's setting expectations: tell them when the next study is going to appear and how they are going to access it, offer them a bonus if they complete all parts of the study, develop a contact strategy, keep them informed, and message them when the next study becomes available.

So we often do this by scheduling messages to go out on the Prolific platform to say: the next study is available tomorrow, here's your link, get ready. And if you do all those things, it's not uncommon in the studies I run to see an 80 to 85 percent retention rate across time.

Now, obviously, it also depends on the amount of time between sessions, so it's going to be a lot easier to keep people coming back one week after the other than if you're bringing them back every six months, right? That's a big factor as well. But generally speaking, if you follow those guidelines, you'll get pretty good retention pretty much every time.
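As a small illustration of tracking the kind of retention figures Andrew mentions, here is a minimal sketch with hypothetical data that computes, for a multi-wave longitudinal study, what share of the wave-one participants came back in each later wave.

```python
import pandas as pd

# Hypothetical completion log for a three-wave longitudinal study:
# one row per participant per wave they completed.
completions = pd.DataFrame({
    "participant_id": ["p1", "p2", "p3", "p4", "p1", "p2", "p3", "p1", "p2"],
    "wave":           [1,    1,    1,    1,    2,    2,    2,    3,    3],
})

wave1 = set(completions.loc[completions["wave"] == 1, "participant_id"])
retention = (
    completions[completions["participant_id"].isin(wave1)]
    .groupby("wave")["participant_id"]
    .nunique()
    .div(len(wave1))
)
print(retention)  # wave 1 -> 1.00, wave 2 -> 0.75, wave 3 -> 0.50
```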

Layshi: What I'm hearing from you, Andrew, and I will summarize it in a funky way: if you want to cut down those dropout rates, create a good user experience for your research.

Andrew: Yes, exactly that. Exactly that. And to be honest, I don't think people skip that on purpose. I think it just slips some people's minds when they're creating studies, because what they really care about is, how good is this study at getting the data I want? They should be thinking about that, obviously, but they should also be thinking, what's it actually like to be a participant in this study?

If you don't want to participate in it, why would anybody else?

Layshi: Yep. Yep. And also, you were talking about all these tips and tricks, and I was thinking, this is almost like a funnel. I always try to compare things with my day-to-day business, and when I create a funnel, it's like: these are the types of questions the person will feel are too personal to ask at this point, at the beginning of the funnel; maybe they will quit because this is too boring.

Or maybe this type of question isn't even related to the thing they want to get from that funnel. So, yeah, I was thinking about that, and it's like, we need to create a good user experience in our research methodologies.

Andrew: Yes. Yeah, a hundred percent. Honestly, I think if you're only thinking about this at the point the research is already running, it's too late, right? You need to design everything up front and be thinking about this before a participant ever sets foot in your study. So thinking about, I guess you could call it the top of the funnel, right? From there onwards, what's the best experience I can give to participants?

Layshi: Yep. Yep. So what are some practical tips for effectively communicating research findings and online methodologies to diverse audiences? I talk about this a lot in other podcasts and episodes: people who are creative and people who come from the management side, even if they are speaking the same language, communicate in different ways, right?

People from management talk about numbers; creative people talk about qualitative things. So it's really important to actually try to speak the same language, and even more important when you are trying to convey information. Any tips and tricks out there?

Andrew: Yeah, first of all, I'm a big believer in sharing research with the public at large. I think it's really important, and I think there's a massive gap in that happening at the moment. There's a disconnect between the public and researchers, because most of the research that's being done ends up in scientific journals, often behind a paywall, and it's only ever read by other researchers.

So it's kind of like a closed community. The findings never make their way out of that community into public view unless they're picked up by the media, and if they are, the media is very good at sensationalizing results and really losing the point. So even if that happens, the public may not be getting the true story. It's a difficult one.

I don't think there's an easy answer for how we communicate research more widely. The one thing I would say is that leveraging social media and things like YouTube holds great promise. I follow a channel on YouTube called Two Minute Papers. They are basically the best at doing this, right?

So they take incredibly complex AI papers, the kind that, when I read them, I find very, very hard to get through because they're so high-level, and they turn them into a five-minute video on YouTube and explain them in a nice, simple way. It allows me to stay up to date with this whole area of research without ever, you know, actually having to read the papers. It's initiatives like that that really close the gap between us and the public. The other thing is open science principles. It's happening more and more that researchers are making their papers open access, which basically means anybody can read them and they are not behind paywalls.

That makes a massive difference. And, you know, the public can go to open repositories like SciArchive and all these websites to find those papers. And the final thing, which I've only started doing more recently, actually, since I've worked for Prolific, is sharing the output of your data with the people who made it happen, right?

I get contacted a lot by the participants in the studies I run saying, I'd really love to see the results of this, and even if they don't contact me, I still make a point of sharing the results. So whether it's a blog post that comes out, a white paper, or a scientific journal article, I try to make sure I share it with the people who made it happen.

I think that's a really nice way to bring the public and participants along for the research, right?

Layshi: That's a super nice thing to do, but there's also a way of not only sharing information but also sharing points of view. Every time I do an interview and we release an episode, the guest speaker points things out and shares on social channels in a different way, sometimes one I didn't even ask for. And sometimes there are things I didn't think were so good in the interview that they really like, and that could also happen with research.

Sometimes they are really into one finding where you're like, what? And they really want to share that. It's super important, because everyone has a different point of view and a different way they can communicate the information; it's surprising. Another thing that I love is how we can humanize the information, and that YouTube channel is super cool. Can you mention it again? Sorry.

Andrew: Yeah, it’s called Two Minute Papers.

Layshi: Two Minute Papers.

Andrew: I only learned about it very recently from a colleague, but she said it was brilliant and I fully agree. You can almost just have it on in the background, you know, while you're doing work, just to keep up with everything.

Layshi: We have another episode with Sergei Golubev where I was talking about humanizing information, how we can share that specific part of how this is making an impact on people's lives, and how that is the way you can actually catch the attention of stakeholders, of people in your company, your internal clients, who sometimes think, yeah, we need research.

This is part of the process, just give me the bullet points. But when you start actually humanizing information and putting it in context, with quotes and even a specific part of the research where people talk about it, that is how you can actually hook them.

Andrew: Yes, yeah, for sure. I totally agree. And I think initiatives like this podcast, right, being able to talk more conversationally about research, are also great, because there's a big element of making research digestible for people, right?

Otherwise it's just too much to ask them to, yeah, read a scientific paper, which, you know, I'm a researcher and sometimes I dread the thought of reading scientific papers, so.

Layshi: Even Andrew doesn't read them. No, I'm just kidding. We are getting to the end of the episode. I'm super sad, because I love talking to you; I think you have a lot of knowledge to share with the audience, and we could be here for four hours. So before we go, I want to thank you, Andrew, for sharing your knowledge with If U Seek. And before we go, if we want to ask you some questions, where can we find you?

Andrew: Yeah, you can find me on Twitter, it's @AndrewJGordon, or on LinkedIn, just search for Andrew Gordon Prolific. And, to be honest, there are probably going to be a lot of links to me on the Prolific website, so prolific.com.

Layshi: Thank you so much for participating with us and sharing all this information with the audience today.

Andrew: No worries. Thank you very much for having me. It was a great conversation.

Layshi: Well, see you in the next episode of If U Seek.

Layshi: Thank you for joining us on If U Seek. For more exciting content, follow us on our social channels. Your review means the world to us, so don't forget to leave one. And of course, hit that subscribe button on Apple Podcasts, Google Podcasts, or Spotify to stay updated on our latest episodes.

If U Seek is a platform for discussions and personal insights. The opinions presented by guests are independent and do not represent the official position of the host, Useberry, or sponsors. See you on the next episode of If U Seek.
