BJKS Podcast

54. Jessica Kay Flake: Schmeasurement, making stats engaging, and the Psychological Science Accelerator

April 07, 2022

Jessica Flake is Assistant Professor of quantitative psychology and modeling at McGill University, where she studies measurement. In this conversation, we talk about her recent paper 'Measurement Schmeasurement: Questionable measurement practices and how to avoid them' (with former guest of the podcast Eiko Fried), how she makes stats lectures interesting, and her work on the Psychological Science Accelerator.

BJKS Podcast is a podcast about neuroscience, psychology, and anything vaguely related, hosted by Benjamin James Kuper-Smith. In 2022, episodes will appear irregularly, roughly twice per month. You can find the podcast on all podcasting platforms (e.g., Spotify, Apple/Google Podcasts, etc.). 

Timestamps
0:00:04: Eiko Fried is maybe not that good at p-hacking
0:02:03: How Jessica got into researching measurement
0:10:42: The title of 'Measurement Schmeasurement'
0:16:15: So what is Schmeasurement?
0:24:47: How does Jessica ('literally the best prof ever') make statistics engaging?
0:43:02: Is transparency the solution to schmeasurement?
0:49:56: Was I measuring or schmeasuring in my recent paper?
1:03:39: The next generation of the open science movement
1:15:15: What's it like working on large collaborative projects like The Psychological Science Accelerator?

References

My episode with Eiko Fried: https://geni.us/bjks-fried
The Twitter thread that started schmeasurement: https://twitter.com/JkayFlake/status/917514276893536257

Axelrod (1980). Effective choice in the prisoner's dilemma. Journal of Conflict Resolution.
Flake & Fried (2020). Measurement schmeasurement: Questionable measurement practices and how to avoid them. Advances in Methods and Practices in Psychological Science.
Flake, Pek, & Hehman (2017). Construct validation in social and personality research: Current practice and recommendations. Social Psychological and Personality Science.
Flake, Davidson, Wong, & Pek (2022). Construct validity and the validity of replication studies: A systematic review.
Kuper-Smith, Doppelhofer, Oganian, Rosenblau, & Korn (2021). Risk perception and optimism during the early stages of the COVID-19 pandemic. Royal Society Open Science.
Moshontz, ... & Chartier (2018). The Psychological Science Accelerator: Advancing psychology through a distributed collaborative network. Advances in Methods and Practices in Psychological Science.
Nosek, Beck, Campbell, Flake, Hardwicke, Mellor, ... & Vazire (2019). Preregistration is hard, and worthwhile. Trends in Cognitive Sciences.
Simmons, Nelson, & Simonsohn (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science.

[This is an automated transcript to help with search engines. It will contain a lot of errors]

Benjamin James Kuper-Smith: [00:00:00] Oh, yeah, by the way. So I always look, usually on YouTube, whether I can find anything about the guests. And, um, I found a video of you with, uh, with Eiko Fried at SIPS, the, I dunno, the conference, I'm assuming. And you said, uh, you're trying to get him to p-hack all his measurements in his depression study.
 
 

Uh, did you manage to do that?  
 
 

Jessica Flake: Yeah.  
 
 

Well  
 
 

Benjamin James Kuper-Smith: Is he now p-hacking?
 
 

Jessica Flake: Yeah, he's trying, he's not maybe that good at it. This is a precursor, actually, to the Measurement Schmeasurement paper: we did a workshop, and one of the, like, shticks during the workshop was I would try to get him to engage in questionable measurement practices and make suggestions that, you know, he hack his instruments and stuff, and then he would be like, no, I should be doing this.
 
 

And so it was like demonstrating what you could do versus what you should do. I think that was probably a little cheesier than is acceptable for a conference, but we did it.
 
 

Benjamin James Kuper-Smith: Uh, I still haven't really [00:01:00] been to a conference, I guess, because of the whole COVID thing. Um, I started my PhD, and like basically the second I had results, uh, COVID happened. So, um, it was,
 
 

Jessica Flake: Yeah. My students, uh, haven't been to a conference yet either, and they're going to their first one this summer. And it's really, it's kind of sad, 'cause they've been pretty isolated and they're excited to go to their first one. But I think when we think about the pandemic and what it's done to our jobs, we forget something that's kind of insidious about it, which is that it took away a lot of those little interactions with the broader community that, at least for me, give you, like, energy and interest.
 
 

And it's just nothing. It's just, like, all Zoom screens. So I hope you get out to a conference soon. Now, some people don't like conferences, but when you do get out to one and you meet other people with shared interests, you're like, oh, this is more fun.
 
 

Benjamin James Kuper-Smith: Yeah. And I'm really looking forward to that. There's one I'm going to go to in, oh, July, I guess, so it's a little bit off, but yeah, we're looking forward to that one. So I [00:02:00] guess today we'll be talking mainly about your paper 'Measurement Schmeasurement: Questionable measurement practices and how to avoid them'. In general, you know, I like to take a slightly more indirect route to talking about the topics. I'm just curious how you started working on this. I guess you did your PhD on measurement, um, and before that you did a master's in quantitative methodology. Yeah, I'm just curious, how did you, so early I'd almost say, get into these kinds of methods topics? It seems to me, maybe that's the wrong assumption, but it seems to me as if these kinds of, like, technical things, like the how-to-do-science stuff, are something that often comes a bit later, but you seem to have been interested in it pretty early.
 
 

Jessica Flake: Yeah. 
 
 

In a lot of the areas of psychology, there's undergraduate training in that. So when you're an undergraduate, you might take a course in social psychology or neuroscience or cognitive psychology, and this [00:03:00] shapes your interests. Like, when you go to graduate school, you usually select an area to go into. And with quantitative methods, which is an area of psychology, quantitative psychology, there's no such class. Except maybe you could take statistics in the statistics department, or maybe your psychology department has statistics or something like that, but you don't think of it as an area of psychology.

So a lot of undergraduates wouldn't know, when they're finishing up, that you could specialize in that area. You might just learn about it by accident, and that's kind of how it happened to me. I got kind of lucky. I had taken extra statistics courses to get a bachelor of science. That was like an extra requirement, but I liked my statistics courses.
 
 

I was surprised I liked them more than I thought. And I was like, okay, well, I'll take the extra ones to get a bachelor of science instead of a bachelor of arts, because that looks a little bit better on your final diploma. I mean, who cares about art? Right? So then, um, 
 
 

Benjamin James Kuper-Smith: then you've got some M a. 
 
 

Jessica Flake: Yeah, I know that wasn't an option. 
 
 

They should make that an option. Um, so I was getting ready to [00:04:00] graduate and I wasn't sure about graduate school, or about getting a job, or whatever. And I met somebody who was doing a PhD in measurement, and he was like, oh, well, maybe you should consider going to graduate school in quantitative methods.
 
 

You really like your statistics courses. And I was like, that's a thing? And he sent me this article. There was an article written by Steve Wise, I can't remember the title of it now, but basically the opening paragraph was like: do you really like psychology and statistics, but you're not sure what area of psychology you would go to graduate school in? You should try quantitative psychology. And I was like, this is me.

Um, and so, that person that I had met, they were doing a PhD internship at my undergraduate institution, but they were getting their PhD at James Madison University, where I did that master's degree. And he was like, I'm going to go defend my dissertation, why don't you come along? I'll set up some meetings with the faculty and you can come and learn about the master's program in quantitative psychology. And I was like, okay. And I [00:05:00] went there and everybody was really nice. And, um, they let me into the program, and it didn't cost a lot of money; it was a funded program.
 
 

And so I had thought, well, I could get a job, like an entry-level job, or I could go get this master's degree. And so I did the master's degree instead. And in the master's degree, I took measurement theory with Debbie Bandalos, who's a respected scholar in measurement. And I think I just, like, kind of hit the jackpot, getting to take a course with such an amazing instructor. She's so clear, she's so interesting. I thought that measurement was just the most interesting thing one could study. It has this mix of psychology and psychological theory, but also quantitative methods. And I was like, this is it, this is what I need to do. And so then I Googled, like, PhD programs in measurement, and that's how I ended up applying where I applied, and ultimately going to the University of Connecticut.

And that PhD program at the time was called Measurement, Evaluation and Assessment. So it was, like, really clear. They've changed the name of it [00:06:00] now, so it's like I feel homeless for a PhD program. It's called, like, Research Methodology, Measurement, and Evaluation now, so it still has the word measurement in it. But yeah, it basically just stemmed from having an area of psychology I felt super interested in, but really being interested in the application of quantitative methods to psychology. It's not that I'm interested in statistics for medicine or for other disciplines, because statistics are in a lot of disciplines. I really like psychology, and we have, like, the worst or hardest measurement problems. And so it's a good place to be if you're interested in measurement.
 
 

Benjamin James Kuper-Smith: That's interesting. I mean, I feel like now I've found something I'm really interested in, that I want to work on thematically, topically, um, for the next few years at least. But I guess before that it was like: I'm generally interested in all of these things, but don't really know what exactly. And I guess I just took the route of trying out, like, half of the things you can do, working on, like, attention [00:07:00] and voluntary action and body ownership, and I guess anything that wasn't clinical, more or less. And I guess it's kind of cool: if you have, like, a general interest in psychology, but nothing specific topically, then working on something that's kind of relevant to all of those things does seem like a good route.
 
 

Jessica Flake: Yeah, it's our, like, meta area. Um, we also have the history of psychology; that's meta too. They can, like, do the history of anything in psychology. So you can play around, I guess, in whatever area of psychology you want, knowing that you're studying the methods that that area uses.
 
 

Benjamin James Kuper-Smith: I'm assuming you still focus on kind of one kind of measurement, or...
 
 

Jessica Flake: Okay. Um, so though it might seem small from the outside, I do have, like, an area of measurement that I focus [00:08:00] on. Actually, at McGill University, where I work now, we have three professors that work in measurement, but we feel like we're all doing very different things. So
 
 

Benjamin James Kuper-Smith: of course. 
 
 

Jessica Flake: my colleague Carl Falk works on response styles and this framework called item response theory. And so he's interested in, like, how people kind of respond in the same way to a survey, like they do 5, 5, 5, or they get into lazy responding, stuff like that. Um, my other colleague, Heungsun Hwang, works on this aspect of structural equation modeling that is not your typical survey factor analysis stuff, but is called component analysis.
 
 

And I work in, like, more general psychometrics. So instrument design, like questionnaires, surveys, and tests; how to evaluate instruments; how to build validity arguments and assess the assumptions of validity for those instruments; and some of the modeling aspects of that, especially when you have [00:09:00] data collected from instruments that are really complex. So you have large data sets that are clustered in some way, or they come from geographically or culturally diverse groups, and how you can tell the instruments are similar across those groups. I'm interested in a lot of things that are related to measurement modeling that would come with really big data sets. And I feel like that's really different than what they're doing, but all three of us are in kind of psychometrics more broadly. So it seems small from the outside, but on the inside we feel like we're all working on different things.
 
 

Benjamin James Kuper-Smith: Yeah. I mean, by big data sets, do you mean lots of variables, and not necessarily, like, thousands of participants, or more like... or both?
 
 

Jessica Flake: both. 
 
 

But more that there's a lot of participants and the structure of their responses might be complex. So the participants might be clustered in countries, or they might speak different languages but have given you [00:10:00] responses to the same survey questions, or they might be measured over time. These are all kinds of datasets where you could think, well, I could analyze the data set and just look at it at the country level, or just look at the person level, or just look at the language level. Those are complex data structures, because they have all these different ways that you can slice and dice the data. And thinking about how to analyze responses from instruments, there's different models that you can use to do that.
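As a very rough illustration of the kind of clustered survey data she describes, here is a minimal sketch that computes a reliability coefficient (Cronbach's alpha) separately per country. The file and column names are hypothetical, and a simple per-group alpha is only a crude first look, not the formal measurement-invariance modeling Flake works on.

```python
# Sketch: a crude per-group check on clustered survey data. This is NOT
# formal measurement-invariance testing, just a first look at whether
# the "same" instrument behaves similarly across country clusters.
# "survey.csv", "country", and the "item_*" columns are hypothetical.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a block of item-response columns."""
    k = items.shape[1]
    item_variances = items.var(ddof=1).sum()        # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of sum scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

df = pd.read_csv("survey.csv")
item_cols = [c for c in df.columns if c.startswith("item_")]

# Large differences across countries hint that responses to the "same"
# survey may not be comparable across groups.
for country, group in df.groupby("country"):
    print(country, round(cronbach_alpha(group[item_cols]), 2))
```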
 
 

Benjamin James Kuper-Smith: So lots, lots of potential for p-hacking there.
 
 

Jessica Flake: Oh, yeah. Oh yeah. P-hack your heart out.
 
 

Benjamin James Kuper-Smith: Um, yeah. As I mentioned, the paper of yours that I have read, and that we're talking about, is the Measurement Schmeasurement paper. Shall we just talk about the title first? I find it really interesting, because in one sense, I think it's almost the best title I've ever heard, because in two words it summarizes the attitude of an entire field, and that's also the topic of the paper. And I don't think I know any other paper [00:11:00] that basically does that in two words, at least not that I can think of. On the other hand, you know, I live in Germany right now, and I know lots of people who speak English as a second language, and I wonder how many of those actually get what it means. So yeah, I was just curious, did you think of that, was that a concern or something, or...
 
 

Jessica Flake: Right, it's a linguistically isolating title. So the title comes from, yeah, so it's really common in English for somebody to
 
 

Benjamin James Kuper-Smith: yeah, I know.  
 
 

Jessica Flake: do that with something that you don't care about. And then you're like, oh, schmo, you know. Oh, I need to go get my laptop. Laptop schmaptop. You know, I don't know, it's just a thing. I don't know what the equivalent is. Is there an equivalent in other languages
 
 

Benjamin James Kuper-Smith: Uh, let me think. I  
 
 

Jessica Flake: of this, like, some little mocking thing you say?
 
 

Benjamin James Kuper-Smith: Whether there's something like that in German. Um, it feels like there is, but I couldn't tell you what it is. But yeah, it's interesting, because it's one of those things where, to native speakers, it's the most obvious thing what it means.
 
 

Jessica Flake: It's a pretty cliche title; like, a lot of [00:12:00] fields have one. If you Google the title...
 
 

Benjamin James Kuper-Smith: oh really? 
 
 

Oh, I hadn't seen that.  
 
 

Jessica Flake: My friend, who's a philosopher, is like, oh yeah, we have a famous philosophy paper that has this kind of title. Like, I think it's sort of a cheap trick. But it comes from, so it's actually how Eiko and I met.
 
 

So Eiko, the coauthor on the paper, Eiko Fried, we didn't know each other in real life. We only knew each other on Twitter. And we were chatting on Twitter, not privately though, like in Twitter threads. We were chatting about measurement, and somebody was complaining, I guess, about these instruments in a paper. I could dig up the tweet, it's still there somewhere. And they were like, oh yeah, you know, there's all these problems with the paper. And then somebody was like, yeah, but look at these measurement problems, you know, these surveys are bad, or something like that. And I just responded and said, measurement schmeasurement, like, who cares, who cares about that?
 
 

And it was like me being sarcastic, because I'm the person on this thread who really cares; like, that's what my research area is. And Eiko [00:13:00] was on this thread, and he sent me a private message. And he was like, would you want to make a symposium for a conference called Measurement Schmeasurement? And we'll invite people to come and talk about the problematic measures in their areas, like the schmeasurement that they see going on. And I was like, yeah, sure, let's do it. So we made this symposium called Measurement Schmeasurement for the Association for Psychological Science conference. I can't even remember where it was that year, or how long ago this has been; it's been three or four years or something.
 
 

And we met there for the first time in person, and then we had this symposium, and, uh, it was quite well attended, and different presenters talked about different measurement things. And so the title just came from me kind of, yeah, just linguistically joking around, being like, oh, schmeasurement.

And we weren't sure if we would call the paper that, but we eventually gave a workshop at SIPS, the Society for the Improvement of Psychological Science conference, [00:14:00] that was about that more directly. And then we were like, you know, we should write a paper about this issue. And so we just kept the title. It's sort of polarizing: like, when I polled my friends, half were like, it's terrible, and the other half were like, it's great. But it is there to stay.
 
 

Benjamin James Kuper-Smith: Yeah, yeah. I mean, it does have a subtitle, which is 'questionable measurement practices and how to avoid them', which also sounds like that J.K. Rowling film, uh, Fantastic Beasts and Where to Find Them.
 
 

Jessica Flake: The whole title is just, like, cliche after cliche. I know, it's kind of cheap.
 
 

Benjamin James Kuper-Smith: I mean, it works, right? I read the paper, I mean, for one, because I talked to Eiko and read some of his other stuff, and because I thought, well, that sounds like an interesting title. So I guess for me it worked. Um, it's funny, I was going to ask how you two met, because, you know, for preparing the episode with Eiko, I knew roughly where, like, where he was when, [00:15:00] and then I looked at your CVs, and it doesn't look like you've ever been on the same continent, apart from maybe a conference. So I was really wondering how you met. But, um, I think a lot of people meet like that on Twitter. He's very...
 
 

Jessica Flake: You can find your, like, measurement people on Twitter. We had met at the conference, and then we did the workshop, and then I invited him as a speaker out at McGill, and so he's been to Montreal. But to write the paper, we met up in Glasgow. We, like,
 
 

Benjamin James Kuper-Smith: as you.  
 
 

Jessica Flake: You know, yeah, it was kind of a wild ride, but we were like, why don't we meet for one week and turn this workshop into a paper?
 
 

And so we 
 
 

Benjamin James Kuper-Smith: I really like that. So not, not like you happened to both be in...
 
 

Jessica Flake: No, no, no. We planned to
 
 

Benjamin James Kuper-Smith: I see. I see. That's cool. 
 
 

Jessica Flake: meet there, and it was a sort of neutral area. I needed to be there anyway, I think; I was giving a talk at the University of Glasgow, and I was like, hey, I'm getting a lot closer to you than I normally am, why don't we meet [00:16:00] up? So he came out for, like, a week or something, and we holed up in Lisa DeBruine's office. She's a professor at the University of Glasgow. She gave us her office, and we wrote the paper on her dry erase board.
 
 

Benjamin James Kuper-Smith: That's cool. Um, so maybe, I guess we've kind of been dancing around the topic now for 15 minutes or something. Um, can you just provide a brief summary of the paper, just so people who are not familiar with it get a rough first idea of what this is about? And then we can actually talk about it in more detail.
 
 

Jessica Flake: Yeah. So, well, getting back a little bit to where the idea for the paper came from: we were invited to give this measurement workshop, and I didn't want to give a workshop on how to create an instrument or how to do validity research on an instrument. There's a lot of textbooks written about that; that's kind of a well-trodden topic in psychology. But I had done some meta-science work that made me think about measurement a little differently, particularly: what are people actually doing when they use, score, analyze, and report on instruments? And I [00:17:00] just found that it was really lacking. Getting basic information about instruments, evaluating whether they seem to make sense at all, could be quite difficult. And so that's that whole, like, measurement schmeasurement kind of aspect of it. And so we decided to do a workshop that focused on good reporting and transparent practices of instrument use, instrument analysis, um, instrument interpretation. And so that's what the paper talks about.
 
 

It doesn't necessarily talk about how to make a good instrument or how to make a good measurement. It's talking about the lack of information that we have on instruments. Like, when an author uses an instrument in a paper, you might not know much about it. They might not say what the instrument was like, how long it was, how they statistically analyzed it. All of the things that you use, stimuli or instruments, get turned into numbers eventually. So where did those numbers come from? There's just no [00:18:00] information. And so part of the paper is just convincing the reader that this is a big problem. Like, it's really common in psychology just to sort of gloss over the measurement parts and get to the punchline. And we go through why this is a problem. So it's a problem because you can't evaluate if the instruments are any good, which is not something we should just accept at face value. We don't accept that hypotheses are just good; we shouldn't accept that the instruments are good. But if you have more information, you can evaluate the instruments.
 
 

And so it talks about what information you should report, so that other people can read your paper and get their questions answered about your instruments. Because the status quo is: if you're reading a paper, you would have lots of questions about the instruments. Where did this come from? How long is it? How was it administered? Those are the questions that need to be answered in the paper. So then it's just, like, answer these questions and you're good to go.
 
 

Benjamin James Kuper-Smith: Yeah. I mean, it seems to me that one way to [00:19:00] describe the paper briefly is almost to say, like, you know, you draw this analogy between questionable research practices and questionable measurement practices, and say that all these solutions we've come up with in that domain kind of center around transparency and being open about what you're doing.
 
 

Just please, also do that with your measurements. 
 
 

Jessica Flake: Yeah, exactly right. And there's been a lot of focus on statistics, so, how you included participants or not, all the things that you can raise questions about with statistical analyses. Borrowing from that, we should do that with our instruments too. The scrutiny of our instruments has been kind of mild, and we just want to take it up a level.
 
 

Benjamin James Kuper-Smith: Yeah, it's funny. I once kind of randomly took a physics course. I mean, it's a long story, but I once took, like, an experimental physics course or something like that, uh, at least the first few lectures. This was after my psychology bachelor's, [00:20:00] and what really struck me is the first thing the instructor basically said. I don't remember exactly what the topic was, but it was something like: okay, here's this thing we want to measure, and we're interested in this thing, and we can say that our measurement has this level of precision. So given this level of precision, this is the kind of, uh, conclusion we can draw; we can draw conclusions to this kind of extent. And this was, I think, about, like, the physical length of something, something that's pretty easy to observe. But it just struck me.
 
 

I was sitting there going, like, I don't think anyone's, uh, I mean, sure, we've talked about methods, and we've talked about stats and how you measure things, like in a design-of-psychological-studies module, but never to this level of: what can we actually measure with this thing? It just struck me as a much more formal and precise way of talking about measurement that just hadn't really come across, to that extent, in psychology.[00:21:00]
 
 

Jessica Flake: Yeah. I taught a measurement course last year, and we did talk about, um, the standard error of measurement, and how you can use that to think about the precision of a score and putting confidence intervals around people's scores. Like, say you answered a survey and you got a score of 10. We could use the reliability of the survey responses to estimate the precision of your score, and that confidence interval might be, like, from, I don't know, 2 to 18. So we can actually quantify it. But this makes a lot of other assumptions about your instrument that aren't just related to the reliability, so it doesn't answer all of the questions. But most students will not take, in their undergraduate or graduate training, a course where they learn about that stuff.
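For readers who want the arithmetic behind this, here is a minimal sketch of the calculation she describes: using a scale's reliability to put a confidence interval around one person's observed score. The score SD of 10 and reliability of .83 are illustrative assumptions, not values from the episode.

```python
# Standard error of measurement: SEM = SD * sqrt(1 - reliability).
# The SD and reliability below are assumed for illustration only.
import math

observed = 10          # the person's score on the survey
score_sd = 10          # SD of scores in the reference sample (assumed)
reliability = 0.83     # e.g., Cronbach's alpha (assumed)

sem = score_sd * math.sqrt(1 - reliability)

# Approximate 95% confidence interval around the observed score
low, high = observed - 1.96 * sem, observed + 1.96 * sem
print(f"SEM = {sem:.2f}, 95% CI = [{low:.1f}, {high:.1f}]")
# With these assumed numbers the interval is roughly [2, 18], the kind
# of width mentioned above.
```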
 
 

So there is some of that formal approach to measurement in psychology, but it's woefully underrepresented in curricula, and there aren't very many people who study [00:22:00] it who could teach it. And this is a kind of long-standing crisis in psychology, that there's not enough people. There's a lack of support for training people formally in methods, to research our methods. There's not very many PhD programs where you can specialize in that, so that means there's not very many people who go back out and become faculty to teach the students. And so a lot of times, when you have methodological training in PhD programs, those courses might be taught by people who are not experts in that area. And I don't think you have to be a world-renowned expert to do a good job teaching a class, but it creates this, like, general lack of expertise in our whole discipline. We're really siloed into our own little communities, just like how we have disciplinary silos. But you would think that foundational training in methods, research methods, measurement, and statistics would be for all psychologists. But that's not really the case. That's not how it plays out in our graduate programs.[00:23:00]
 
 

Benjamin James Kuper-Smith: Yeah. I mean, um, I guess I had a few points I wanted to ask about teaching stats later, but I guess we can just do it now. Um, maybe first a caveat: whenever I talk about education in psychology, like, I did do a bachelor's in psychology, but I've never been particularly, how should we say, conscientious in my attendance of lectures. Um, especially not for statistics. So I just want to say, when I say I didn't know much about this, it might just be because I didn't go to the lectures. I mean, I did a degree in the UK, and in the UK you have the British Psychological Society, and they do require you to take a certain amount of modules in methods and stats.
 
 

Um, so you can't just do whatever topics you want, that kind of thing. Yeah, for me, the weird thing was my master's. I did a kind of, at least in Europe it feels like it was, a fairly unusual, unique [00:24:00] master's, in that we could basically do whatever we wanted. It was called Brain and Mind Sciences, and they were just like, choose any module you want from this whole list of master's programs at UCL at the time. And, I mean, there was even one module I wanted to take that wasn't on the list, from computer science, and I was like, can I do it? And it was like, sure, just do whatever you want, basically. So there was no requirement there. And of course I didn't choose anything
 
 

Jessica Flake: Oh,  
 
 

Benjamin James Kuper-Smith: of any kind, um, but yeah. Um, so I just wanted to have that as a brief caveat at the beginning: that I don't know anything about anything, and that's maybe not necessarily representative of the British education system, other than that they don't require attendance.
 
 

So, I saw that you teach intro to stats courses, or intro methodology courses, that kind of thing. And I had a look, I mean, I just typed your name into Google, and then, oh yeah, here we go, I think probably on page three or [00:25:00] something, it was your RateMyProfessors.com profile. And, uh, it was very positive. So I just wrote down one quote from there, which was from an intro to stats course. I think someone said: 'literally the best prof ever, she's somehow made statistics engaging'. So my question is: how?
 
 

Jessica Flake: How? Well, yeah, um, so I have a couple of courses that I teach, and that class is a unique one. I teach graduate-level courses in measurement and in multilevel modeling, or hierarchical linear regression. Those are small courses, like 20 students or something; they're data analysis courses. Intro stats is my big-lecture, first-year undergraduate course.
 
 

So in Canada, this is, like, 18-year-olds or 19-year-olds on their first day of university. And I call it baby stats. Um, and you learn all the basics: like, half of the course is descriptive [00:26:00] statistics, like the mean, the variance. And then we do t-tests, we do one little tiny ANOVA, and one tiny little one-predictor regression. So it's not complicated statistical modeling; it's baby stats. It's big, 200 to 400 people. It's like a rock concert in a lecture hall. I wear a mic. Um, they're all out there, their little faces.
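Incidentally, the whole 'baby stats' toolkit she lists (descriptives, t-tests, a one-way ANOVA, a one-predictor regression) fits in a few lines of scipy; the data below is randomly generated purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(0.0, 1.0, 50)   # three simulated groups of 50 scores
b = rng.normal(0.3, 1.0, 50)
c = rng.normal(0.6, 1.0, 50)

print(a.mean(), a.var(ddof=1))   # descriptives: the mean, the variance
print(stats.ttest_ind(a, b))     # a t-test
print(stats.f_oneway(a, b, c))   # one little tiny ANOVA
print(stats.linregress(a, b))    # one tiny one-predictor regression
```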
 
 

Benjamin James Kuper-Smith: It's just like a sea of people, then?
 
 

Jessica Flake: Yeah, it's a sea of people. Um, I don't know what I do to make the course engaging, exactly. I mean, I think, uh, one aspect of it is that I like teaching the course. A lot of faculty don't want to teach this course, and I took it on. I remember getting interested in stats as an undergrad; the course was disparaged by my comrades, who said it was going to be terrible, and I quite liked it. And I thought, well, this is my chance to do that for those other students out there who maybe aren't sure if they want to take it.
 
 

So I do bring a [00:27:00] lot of energy to it. Like, I'm interested in it, I'm excited about it. I psych myself up, like, just before, you know, like if I were a rock star going to a rock concert. I don't, like, do cocaine before I teach, but, you know, I slap myself around and, like, try to... yeah. I know, right? We'll see, maybe after tenure. Um, but I do try to psych myself up for it. So I think I show up with some interest and engagement, and I don't think that's a zero for students. I've done some research into student motivation, and having, like, an interested and enthusiastic instructor, uh, students pick up on that.
 
 

But I do a lot of other stuff that's just, like, good practice. I have a lot of real-life examples. I ask students to contribute examples. I have a lot of in-class problem solving, so students would take some time to talk to each other about a problem, and then they would have to solve it, and they can come up and share. You know, back when we were in the real world, we would clap if a student came up in front of the whole class and worked out a problem; we would clap for them. [00:28:00] I'd ask them their name and say, oh, you know, you're the temporary professor. Just do things to make it feel like we're all learning here and we're having a good time, instead of just them all sitting there blinking, and the lights are low and the projector is on.
 
 

Um, actually, one of the things that I think is probably most successful in that class, that has nothing to do with me, is that I assign homework for every chapter I teach, and they have to do it within a week and a half. And that sounds like something students would hate, but in a class where a lot of them are worried about learning the material, getting regular feedback from their homework takes it down a notch. And so a lot of my evaluations are like: it was a lot of homework, but doing it every week kept me on top of it; I couldn't fall behind in this class; this was the class I made sure not to fall behind in. And so I think there are just some plain old good practices of teaching, which is giving students a lot of opportunities to practice, and giving them prompt and quick feedback. There are some redos and do-overs in the class; if they don't do well, they can try again. You know, having two [00:29:00] midterms and a final so they could drop their low midterm, those kinds of things just kind of keep students going in the face of stuff that they might not think they're doing very well at, or that they're not very interested in.
 
 

If you can take the stress down, students will like your class more. I think that's a part of it. 
 
 

Benjamin James Kuper-Smith: Yeah. I mean, one comment that I saw in a few of the reviews said something like, 'she wants us to do well', or something like that. And I guess that kind of plays into this whole thing: it's not like it's you against the professor and they're trying to punish you and make it hard, it's just, make it fun and engaging. And...
 
 

Jessica Flake: Yeah, I tell them it's not a trick. You know, it's weird, a lot of students think that the professors are trying to trick them, like that they're going to write test questions to trick them. Well, I think some of this, my perspective, comes from the fact that my PhD is actually from a school of education. So I've had some formal training in how to teach, in pedagogy and assessment and stuff like that.
 
 

Benjamin James Kuper-Smith: So you're basically, like, the only professor who actually knows how to teach.
 
 

Jessica Flake: It's a little, it's like a secret, 'cause I'm in a psych department now, and, like, I hope [00:30:00] nobody finds me out. But yeah, I mean, I check in with students. Like, when they're not doing well at a homework, I slow down a little bit. There's a lot of things you can do, if you really care about them learning the material and not about the grade curve, that also make them feel a little less stressed out in the class.

And so, yeah, I do. I'm like, I want you to do well. And I think it's true. I always tell them, I know you think this is the useless class you're taking, but statistics are every day in your life. This is actually the most important class you're taking, and you're going to like it, damn it. You know, that's kind of my approach with these guys: you're going to like statistics.
 
 

You're gonna like it statistics. I mean, they're every day when you open up any app or look at anything on your computer or do anything, catch a plane, catch a bus, catch a train. All of this is related to statistical models are used in so many aspects of your life. Like get on board don't delay.  
 
 

Benjamin James Kuper-Smith: Yeah. I mean, it's pretty cool that [00:31:00] they enjoy that, because, yeah, I guess it can seem daunting to have, like, a few hundred 18-year-olds. So, I don't know, do they have to take that course as part of the degree?
 
 

Jessica Flake: They do. Yeah. 
 
 

It's like you said: it's a requirement. You know, the Canadian Psychological Association and the American Psychological Association require a certain amount of classes in statistics. So they have to take two, and they have to take the baby stats before they take the, I dunno, toddler stats or whatever the next one is.
 
 

Benjamin James Kuper-Smith: Yeah. I mean, the crazy thing is, like, when I started the doctoral program, it was in Hamburg, 'cause that's where our professor was first, and then we moved to Heidelberg, which is at a medical university. And I don't know what it's like in the US, I guess every education system is different. But at least in Germany, medicine, I don't think, involves many numbers, let's say, or at least very little. And they had this thing in the doctoral program, I mean, it's not really a doctoral program, because
 
 

in Europe, you know, you usually do your master's and your PhD separately, um, and you often move institutions, and that kind of stuff. But we [00:32:00] had to take a few courses to get a PhD, and the stats course was basically what you just described. So, you know, in our PhD, I had to go to those lectures where they told us, like, what a mean is. And I was like, ah, God. But some people, I guess, needed that.
 
 

Jessica Flake: I think it's good for you. They talked about taking some of that stuff out of our graduate stats course, and I was like, no: baby stats for everybody. Baby stats for undergrads, baby stats for grads,
 
 

Benjamin James Kuper-Smith: but it  
 
 

Jessica Flake: baby stats for postdocs, actually. I mean, it depends on how much you actually...
 
 

Benjamin James Kuper-Smith: The best thing is, there are even some statisticians, um, at the hospital doing their PhD who have to take it.
 
 

Jessica Flake: Yeah, I mean, some people really do have the background training, and maybe they could skip that first course. But a lot of people could hear a good amount of that stuff again. I learn something new every time I teach it, you know; I learn something new about the mean.
 
 

Benjamin James Kuper-Smith: Yeah.  
 
 

Jessica Flake: There's always more you can learn about it.
 
 

Benjamin James Kuper-Smith: Yeah, yeah. Um, 
 
 

Jessica Flake: I try not to look at my RateMyProfessors [00:33:00] very often.
 
 

Benjamin James Kuper-Smith: yeah, no, I mean, 
 
 

Jessica Flake: The only one I remember is the one where the guy was like, she says 'like' a lot, she's really juvenile.
 
 

Benjamin James Kuper-Smith: I got, I didn't  
 
 

Jessica Flake: That was one of them. I was like, okay, thanks.
 
 

I mean, I'm, like, six years older than that person, probably. So the fact that I'm not just, like, super old, I guess is a problem for them. They, like, only respect their professors who are fully gray-haired or something.
 
 

Benjamin James Kuper-Smith: yeah, exactly. Just because I'm not like three times as old as you, it doesn't, it doesn't mean I don't know what I'm doing.  
 
 

Jessica Flake: Yeah,  
 
 

Benjamin James Kuper-Smith: Yeah, no, I don't know. I mean, I don't know whether that kind of stuff exists in Europe. Um, I've never, like, I've heard of...
 
 

Jessica Flake: Stereotypes, you mean?
 
 

Benjamin James Kuper-Smith: Uh, no, I mean the kind of system like RateMyProfessors.com or something like that. I don't know whether it exists; I've never seen it, let's say, at least. And, um,
 
 

Jessica Flake: Yeah. It's not university-sanctioned in any way. So
 
 

Benjamin James Kuper-Smith: Yeah. Yeah. So, 
 
 

Jessica Flake: there might be 20 ratings on there for me, but my class is 400 students. So I don't think a lot of people use it. It depends; at some universities there's, like, a [00:34:00] culture, I think, of using it more.
 
 

Benjamin James Kuper-Smith: I mean, it's good if everyone does it, because then it's just feedback and you can learn from it. But I guess it's a bit like if you ever look on Amazon: you know, there can often be very insightful reviews, right? Often you have this thing that's, like, you know, it's great, but here's this one problem that really ruins it, and you're going, like, oh, I'm pretty glad they said that. But often, if you look at something and you really want to make sure you get the right thing, with, like, different versions from different companies available, I just end up going: I can't buy any of these, because they're all bad. They all have this fatal flaw in them, and I can't buy anything now. So I guess it's similar: if this kind of feedback is not required, then you'll just get random people who, for whatever reason, decided to write it.
 
 

Jessica Flake: And also, like, you know, at McGill there's not that many people who teach this class. So what am I going to do, just go on RateMyProfessors and hear that they're terrible? And then you have to go take the class with them anyway. I mean, it depends on the class, I guess. Like, if you're picking an elective, I guess you could look for a class with a good [00:35:00] professor and pick that. But for this required class,
 
 

I'm the one person who teaches it in the fall. 
 
 

So.  
 
 

Benjamin James Kuper-Smith: Yeah. I mean, it is what it is. But, I mean, one question I had, and I think I might've actually talked with Eiko about this, I'm not sure, I talked about it with someone and it might've been him, was the question of how much methods and stats and that kind of stuff, or formal modeling, these kinds of things, should you even teach at an undergraduate level? Because I remember in my psychology course, you know, it felt at least as if 70, 80% of the people wanted to do some sort of clinical work afterwards, and if they basically heard the word 'variable' or a number, they started getting slightly uncomfortable. And, I don't know, I sometimes wonder, does it make sense to force people through this if they're just there to pass the thing anyway?
 
 

Yeah, I just wonder, I don't know, whether that kind of stuff makes sense, or whether they just, you know, learn how to pass the test. And then, I don't know, [00:36:00] maybe there's only the good students who learn everything properly. But yeah, how do you think about that?
 
 

Jessica Flake: Um, I think at the undergrad level it's a little bit different than at the grad level. Students are maybe getting a general degree, and, uh, this is different in Canada and the US than it is in Europe, where the undergrad degree is more general and less specialized. But for people who want to be clinicians, or people just getting a more general degree who might not go any further in the discipline, I think having a course or two in statistics that's focused on interpretation and information literacy and consumption is useful. I mean, again, statistics are everywhere. You can't go through an election without seeing a standard error, um, or a confidence interval.
 
 

Like even in my course, it's not that? 
 
 

focused on information and statistical consumption. It's, it's focused on like, how do you calculate a [00:37:00] variance? So maybe that's not the most useful way to teach those courses at the undergrad level. And if you're a clinician, you might like to be able to read a paper about a new technique and read the methods section and not think that it's just totally in another language. 
 
 

Like, if you've got some basic understanding of statistical tests and how to interpret them, that would probably be quite useful. So thinking about it from an information literacy perspective is probably better than how we do it now. A few courses, though maybe not a lot, could go a long way toward that.
 
 

I taught, online, a master's-level sort of certificate course for working professionals. So they were working, but they could take this certificate; uh, it was less than a full master's degree. And one of the courses in there was a research methods course, but it was designed not for them to be able to carry out the research; it was designed for them to be able to read and consume and interpret and evaluate different kinds of research studies, so that they would know about different [00:38:00] designs. And there's a little bit of a statistical section in there about, like, what's a p-value, what's statistical power, because you see that in studies.
 
 

So I do think that's useful, even for people who are maybe going a clinical route, because with a lot of the things that they do, there's this whole idea that we're empirically based, that we're using techniques that have been researched. Otherwise it's like the medical doctors who can't read the studies about the medicines, who know so little about statistics that they can't even spot something blatantly terrible, like, oh, I dunno, how drinking red wine cures cancer, or whatever some of these, like, stupid studies in medical research say. So I think it's kind of the same thing. I mean, I often wish, you know, that medical doctors had some basic training in statistical literacy. How do they figure things out? I think they just figure out what to do through word of mouth. I don't think they, like, read research studies and base their [00:39:00] medical decisions on them. They just talk to their other doctor friends, and then their doctor friends are like, oh, what about this new drug? And they just start prescribing it. I mean, that is not the best way to do it. I don't...
 
 

Benjamin James Kuper-Smith: I mean, if I think about their lives, I don't think they'd have the time. Even if you have someone who has all the skills, I mean...
 
 

Jessica Flake: Yeah, right. They're not going to... like, they don't have, like, a 'read a research study' day. I mean, I hardly even have that day.
 
 

Benjamin James Kuper-Smith: yeah, 
 
 

Jessica Flake: Um, at the graduate level, I think it's different. If you're engaged in empirical work as part of your thesis, I think we could basically do away with any content training and only have methods, and people do all the content training in their lab. That's the European style versus, you know, North America, where there's more course-level, formal training. In a PhD in Europe or Australia or something, it's all just research; maybe the formal training comes from the master's degree, like you were saying. But yours was, like, pick your own adventure.
 
 

So 
 
 

Benjamin James Kuper-Smith: I mean, that's  
 
 

Jessica Flake: I think those could be methods and statistics courses. And [00:40:00] then, when you go to do your research, you get the, um, content training through the lab, or through reading the literature, through running studies. Actually, neuroscience is an interesting case where there's a lot of necessary technical training. If you're doing fMRI or, um, even EEG and stuff, it might be useful to have some technical, formal training, and some of that via coursework, or via, like, a lab practicum or something a little more structured, because there's a lot of technical knowledge involved there. Like, I dunno, do you just want everybody to kind of learn how to do that on the street? It seems not ideal.
 
 

Benjamin James Kuper-Smith: Yeah. I always find it difficult talking about these things in general, because basically whenever someone tells me I have to do something, I don't want to do it, even if it's a really good idea for me and I would have chosen it anyway. Um, I guess, for someone who's basically spent most of his life, basically all of his life, in formal education until now, I sometimes wonder why I do it. I don't even like being taught [00:41:00] anything. I'm just like, kind of, you know, just let me be and figure out my stuff on my own. But it's obviously also a very inefficient way, because, at least I imagine, if I had paid more attention in stats, then I would have a much better foundation now, rather than having to learn it during my PhD, basically.
 
 

Jessica Flake: Yeah. It's probably not very clear to students why it's relevant. I spend a lot of time in my intro class telling students that statistics is, like, part of the backbone of psychology: a lot of psychology is based on research, and research uses statistics. And they don't even really know that, you know; they don't necessarily understand that when they chose the major. It
 
 

Benjamin James Kuper-Smith: I definitely didn't. 
 
 

Jessica Flake: seemed interesting or something; they didn't realize that there were a lot of statistics involved. So some of it is, I think, what you're talking about: not having the autonomy to pursue your own interests in education, which is really important for student motivation. Like, it's a fundamental aspect of being motivated, having the autonomy to [00:42:00] choose things that you're interested in. And I think it's probably still, in part, some basic fundamental training in methods and statistics, while also giving students that autonomy to learn and apply things that they're interested in. But, man, professors can be really bad salespeople for their content. I'm like: tell students why it's important. They need to know that it's important. Tell them why it's important, tell them how it's going to help them in their studies, or help them find the aspects of it that are interesting. I think that's, like, a huge part of teaching that a lot of professors just check out on.
 
 

Maybe they don't understand it, or,  
 
 

Benjamin James Kuper-Smith: Or just assume that everyone knows.
 
 

Jessica Flake: yeah. Right. And that's a big problem in the academy.  
 
 

Benjamin James Kuper-Smith: Or even just, you know, 'you've chosen this'. If it's an elective module, like, you've chosen this, you probably know why you're here, otherwise you wouldn't be here. Or, yeah. Um, I guess my slight excursion has been slightly longer than I wanted it to be. Yeah. So to get back to measurement: [00:43:00] um, yeah, is transparency then, I mean, more transparency, is that kind of, I mean, you kind of called it a necessary requirement or something, for you to even be able to evaluate all the other stuff? And I guess that's kind of the point you make, right? Just tell us what you did and why you did it, and then we can see whether that decision made sense, rather than just saying 'we measured depression' and then maybe mentioning, uh, one of the many scales you used, or whatever.
 
 

Jessica Flake: Yeah, 
 
 

it's, um, I think it comes from this desire to want to increase rigor, and a good way to increase rigor is to evaluate what's going on and to see how it could be better. But then you can't evaluate what's going on, because you don't have any of the information. So transparency is this necessary but not sufficient aspect of research reporting. Which is, it's a hard sell, [00:44:00] because if you're more transparent, it's like everybody can see that you suck. So say you suck: this old system of not being transparent might work well for you, because you can suck kind of in plain sight and not really mention it, and maybe nobody will notice, and then you just get to go along and publish your paper. Whereas if we have the expectation of transparency, you have to be like: I suck, I used sucky instrument number two, I used sucky instrument number four, you know. So it's a hard sell. But if we make it unacceptable to just leave out all this important information, the idea is that eventually it'll raise the tide of rigor, because this information will be there, and it'll help you, like, call out the things that suck.
 
 

Whereas now it's hard to tell: things are sort of sucking, but they're hiding, I guess, in the papers; [00:45:00] it's unclear. And sometimes things are fine, it's just not reported. There's another whole aspect of this transparency thing, which is: we can't replicate or reproduce research if we don't have basic information about the instruments, about the materials, about the stimuli, about the procedures. Like, you can't even redo what the people did. In some of the background or other research I was working on, we were writing this paper looking at the instruments used in large-scale replication studies, and in the first large-scale replication study, the Reproducibility Project: Psychology, they couldn't replicate, meaning run the replication, in multiple studies, 'cause they couldn't figure out what instruments had been used in the original studies. In one study, they later found out that they used the wrong one, because what was reported in the paper wasn't really quite clear. And so they contacted the authors, and the authors were like, well, it was this.
 
 

And then later they found out it was actually something else. So there's all this basic stuff. Like, if you're interested in measurement, [00:46:00] if you're interested in psychometrics, if you're interested in, like, the latest, most interesting latent variable model, you can't even do or think about that stuff, because all the basic information is missing from the literature. You can't figure out what happened. I mean, sometimes you can't even figure out what they were studying or measuring, which is where the whole issue comes from. Well, so before I wrote this paper with Eiko, I wrote this other paper that's a systematic review of instruments used in papers published in the Journal of Personality and Social Psychology.
 
 

JPSP that journal is pretty prestigious in that field. And I read a bunch of those papers and just try to figure out what they were measuring. And if the instruments had any empirical evidence for their use and it was a mess, it was like there would be six studies and a paper and say, all the studies are measuring self-esteem, there'd be six different self-esteem issue instruments. 
 
 

And every study, like why [00:47:00] in one of the studies, two of the instruments from a previous study, it might be combined to make a totally new instrument of self esteem. I mean, things that are just really bizarre and haphazard and why I call this, this is measurement. It's not measurement it's measurement. 
 
 

It's just some proxy approach for developing instruments. It's not systematic, it's haphazard. It's probably related to p-hacking. It's probably out of convenience, or out of interpretability, or finding results that you think are going to help you publish the paper. So I read like 30 or 40 of those papers published in that journal.
 
 

And it was sort of a come-to-Jesus experience for me, where I was really interested in all these psychometric modeling things, but I couldn't even figure out what was being studied and how it was being measured. So say the study measured four things, they're all slightly related, I don't know.
 
 

They could be like self-esteem and self-efficacy and self-concept and self-determination. So there's this long lit review, it's [00:48:00] super theoretical about all these things, and you get to the measures section and there's like six things, and they're all sort of, kind of named what the four things in the lit review are named.
 
 

And you can't tell if they're all different or the same. And then you get to the results section and there's like three results. So there's four things in the lit review, six things in the measures section, three things in the results section. And that's when this whole issue came up of, why isn't it that we're
 
 

detailed and systematic with how we select, report, use, and analyze our instruments? I had a few coauthors on that review paper that came before the schmeasurement paper, and one of them was reading it and he was like, I cannot believe this is published. And so when you really dig into it, it's flabbergasting how much is missing, or how haphazard things seem to be, or how it just seems that nobody cares, or nobody even notices that [00:49:00] your instrument of self-esteem is slightly different
 
 

in every study you ran.
 
 

Benjamin James Kuper-Smith: Yeah. That's not a good sign. 
 
 

Jessica Flake: Yeah. So this transparency thing coming first: to me, it feels like the lack of it inherently undermines rigor. But when we were working on that paper, the reviewers really pushed us to think about, what's the difference between bad and not transparent, the fact that you can't tell? And so we really tried to separate our thinking into: you don't have the information, so you can't tell, and that's questionable; that's different than, you can tell what's going on and you think it's bad.
 
 

Benjamin James Kuper-Smith: Yeah. I mean, you can't even do that. Right. That's kind of  
 
 

Jessica Flake: Yeah. So we kind of had to pull those apart a little bit. But my experience reading the literature is like, oh, I can't tell what's going on, and this is also very bad.
 
 

Like, it must be bad,  
 
 

Benjamin James Kuper-Smith: Like, it's not even bad. It's just not good enough to be bad. To maybe make this slightly more concrete, and maybe also get a bit of advice for my own stuff, I thought we could talk about that. As I was reading your paper, I mean, [00:50:00] I'm in my PhD and we published a COVID paper right at the beginning of the pandemic.
 
 

And as I was reading the paper, I thought, hmm, some of these problems I recognize; maybe that's some of those things we did, or didn't do, that maybe we should have done. So I was just curious, maybe we could just talk about some things we did, and then maybe you can say what we should have done, what we should have reported more, or, I don't...
 
 

Jessica Flake: I mean, I think this is a good exercise. Whenever I talk about, like, present on schmeasurement, I always say sharing is caring, and I share a time that I engaged in, like, you know...
 
 

I got lost in the garden of forking paths and did some p-hacking-adjacent kind of thing. So I think it's good to do those kinds of exercises.
 
 

Benjamin James Kuper-Smith: So that study, it was basically about risk perception of COVID. Like, how likely do you think you are to get COVID, or to infect someone else? That's what we were interested in. And this was right at the beginning of the pandemic. I mean, we collected data on the [00:51:00] 11th of March, 2020,
 
 

Jessica Flake: Oh, wow.  
 
 

Benjamin James Kuper-Smith: which is almost exactly two years ago. 
 
 

So  
 
 

Jessica Flake: yeah. yeah. 
 
 

Cause everything shut down here on March 13,  
 
 

Benjamin James Kuper-Smith: Yeah. I think basically, we were at a hospital, and the institute director said something like, we're probably going to shut down pretty soon, so I guess everyone take your stuff home this week. So, yeah, we were just like, should we do something about this?
 
 

Maybe, especially because my supervisor had done some stuff about optimism bias: how people think bad stuff is less likely to happen to them than to someone like them, and good stuff is more likely to happen to them than to someone like them. So people kind of have these skewed perceptions about risk, and that might lead to people potentially not actually, you know, distancing and all this kind of stuff, because they think, like, I'll be fine.
 
 

And so we were just kind of curious, like, is any of this there? And I mean, we were [00:52:00] super careful about p-hacking, that kind of stuff, right? Like, we didn't do it, I guess I wouldn't be talking about this if we did. Well, at least not that I'm aware of. I mean, we had open data, open code, some preregistration stuff.
 
 

We replicated everything with a second dataset collected at the same time. So from that perspective, I think we at least tried. But there was, for example, the whole measurement thing, which I guess we skipped largely for time reasons, because it all had to happen so quickly, because we wanted to be before lockdown.
 
 

Basically, we just didn't have the time to really think about the measurement stuff explicitly, the way that maybe you would say we should. So the main kind of question we had is like, how likely are you to get infected with COVID?
 
 

And we asked that question for four different time horizons: within the next few weeks, within the next two months, within the next year, or within your life. Then we also always asked that for you and for someone similar [00:53:00] to you, same age, same sex, and same geographical location.
 
 

So three factors that are kind of relevant for this kind of stuff, for the mortality: you know, the older you are, and being male, I think those were risk factors for getting severe symptoms. And where you are obviously matters for getting infected. So that was the one thing. Then the other thing was, how likely are you to infect someone else?
 
 

And that was split into six different contexts. So your family, how likely are you to infect friends or colleagues, someone while commuting, that kind of stuff, and then something else. That was the kind of thing we had. And because this was a fairly large dataset with lots of variables, to kind of be consistent throughout the entire thing, we basically said, okay, we're going to take the average per participant of the four time horizons for getting infected and of the six social contexts for infecting others.
 
 

And that's kind of our main independent variable. And [00:54:00] I think we did report a Cronbach's alpha, I think we did do that, but that was basically the end of what we did in terms of validating our measure. So I don't know, is that enough for you to say, like, oh, if you have these items that make up the main thing, you should have done this and this and this? Or do you need more information, or...
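[A minimal sketch in Python of the aggregation Ben describes: per-participant means over the four time-horizon items and the six social-context items. The column names and data here are hypothetical, not the actual variables from the paper.]

import pandas as pd

# Hypothetical wide-format responses: one row per participant,
# a likelihood rating for each item.
df = pd.DataFrame({
    "self_weeks": [10, 40], "self_months": [20, 50],
    "self_year": [30, 60], "self_life": [50, 80],
    "other_family": [15, 45], "other_friends": [20, 50],
    "other_colleagues": [25, 55], "other_commuting": [30, 60],
    "other_strangers": [35, 65], "other_else": [40, 70],
})

time_horizons = ["self_weeks", "self_months", "self_year", "self_life"]
contexts = [c for c in df.columns if c.startswith("other_")]

# Average per participant across the four time horizons (getting infected)
# and across the six social contexts (infecting others).
df["risk_self"] = df[time_horizons].mean(axis=1)
df["risk_other"] = df[contexts].mean(axis=1)
print(df[["risk_self", "risk_other"]])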
 
 

Jessica Flake: Yeah. Well, I think so. I'd have to read your paper. This goes back to these two issues of transparency versus did you do a really good job, and those are separate issues. So I think the least transparent thing I could imagine you guys writing is something like, we used these four items, Cronbach's alpha was such-and-such. That's not very much information about where those items came from, how they were developed.
 
 

Did they come from a previous study? Did you guys  
 
 

Benjamin James Kuper-Smith: Yeah, that's still true, we just made it up. Yeah, we didn't, I don't think we said that.
 
 

Jessica Flake: People don't normally say it. They should say, yeah, they should say, we developed [00:55:00] these items, which people don't say. And you're...
 
 

Benjamin James Kuper-Smith: It sounds too grand, though. "We developed..." It was like, well,
 
 

Jessica Flake: I know, they should just say, we just made these up.
 
 

That's my top choice of what authors should say. They should say, we just made these...
 
 

Benjamin James Kuper-Smith: ...came up with this, you know. I mean, it was obviously all three of us, but basically that is what it is. Right? We didn't, yeah,
 
 

Jessica Flake: So being clear about that, I think, is important. And it's important because if you just made something up, it might not be doing as good of a job as you would like, because it hasn't been used a lot before.
 
 

So it's important to say that, because then, you know, in your limitations you can say, well, we just made these items up, and so they might, you know,
 
 

need more piloting.  
 
 

Benjamin James Kuper-Smith: I mean, so we did write that this whole way of assessing comparative optimism has been used quite a bit, and we linked to that, that this general approach works this way and we kind of applied it to this topic. But I guess the funny thing is, it didn't even, I mean, you know, we were obviously thinking about, what are we measuring?
 
 

And does it make sense? But we [00:56:00] didn't, I guess, I mean, there's nothing in the limitations saying that this is a potential fault.
 
 

Jessica Flake: Yeah. I mean, I think that saying, there's this general idea in the other literature, is a little bit different than, there are these other instruments in the published literature with this kind of wording, and we adapted or changed this wording for the COVID context. And that's different than, we just made up the wording for the first time.
 
 

Or, there's an instrument that already exists that has this exact wording and we're using it. So these are kind of three scenarios that are all fine to do in your research, but it can be hard to tell, when you're reading it, what actually happened. Is it that these items are kind of based on items from other adjacent areas or areas of similar content?
 
 

Is it that you totally wrote them anew? Or is it an instrument that has been in use for a while with the exact same wording? Knowing how item [00:57:00] wording was developed is really important for evaluating the instruments, because the wording of items can dramatically impact how people respond to them.
 
 

And so if you're just  
 
 

Benjamin James Kuper-Smith: One of my big fears, always, just changing the wording and then, yeah,
 
 

Jessica Flake: If you're just making something up for the first time, there's more of a chance that people didn't think about the question the way you thought. Or, in the case of aggregation, they may have interpreted those questions quite differently, and when you aggregate them, you might be aggregating things together
 
 

that don't make sense. And so what justifies your choice of aggregation? You know, alpha is not the best way to think about that, because alpha just gets higher the more things you have, whereas a correlation coefficient doesn't. So looking at the correlations between the four things that you aggregated, the average correlation, can be a little bit better than alpha.
 
 

So there are some things that maybe you could just do better, because there's better justification. But there's also just reading the paper and being like, oh, [00:58:00] well, it seems like they just made these questions up. Did they pilot them in any way?
 
 

Did they review them in any way? How did they perfect them? How do they know that people didn't read them and just think, I have no idea what this means?
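[To make the alpha point concrete, here is a minimal sketch in Python of the check Jessica suggests: reporting the average inter-item correlation alongside Cronbach's alpha. The data are simulated, and the standardized-alpha formula used here assumes roughly equal item variances, so treat this as an illustration, not the analysis from any particular paper.]

import numpy as np

def average_interitem_correlation(items):
    # Mean of the off-diagonal entries of the item correlation matrix.
    r = np.corrcoef(items, rowvar=False)
    k = r.shape[0]
    return r[~np.eye(k, dtype=bool)].mean()

def standardized_alpha(k, r_bar):
    # Standardized Cronbach's alpha for k items with average correlation r_bar.
    return k * r_bar / (1 + (k - 1) * r_bar)

rng = np.random.default_rng(0)
# Hypothetical data: 200 participants answering 4 items driven by one latent trait.
latent = rng.normal(size=(200, 1))
items = latent + rng.normal(scale=1.0, size=(200, 4))

r_bar = average_interitem_correlation(items)
print(f"average inter-item correlation: {r_bar:.2f}")  # around .50 here
print(f"alpha with k=4 items:  {standardized_alpha(4, r_bar):.2f}")
print(f"alpha with k=20 items: {standardized_alpha(20, r_bar):.2f}")
# Same average correlation, many more items: alpha climbs even though the
# items are no more strongly related, which is the point Jessica is making.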
 
 

Benjamin James Kuper-Smith: Yeah, I dunno. I mean, the answer in this case is just, we thought, what would be a sensible question? And then we wrote it. And, you know, we also wanted to make sure that what we found wasn't particularly country-specific. We all spoke German and English, so we had the whole thing in German and in English, and for English we collected data in the UK and the US. And, you know, we just looked over it and were like, it makes sense.
 
 

It seems about right. 
 
 

Jessica Flake: I don't think that that's crazy. I think, you know, my ideal, I guess, is you'd say, well, we just made these items up, and our research team [00:59:00] reviewed their content for clarity and for wording, and we revised them to make them clearer, if that is the case. And then I think it's okay to have these limitations in a study
 
 

and publish it, especially in the case of COVID, where you want to get it out, you know, it might be very time sensitive. It's okay to say, we reviewed these items for their content and clarity, we made some revisions, we had these two languages, we had speakers of both. Just a little bit of information about that can be helpful in the paper.
 
 

It gives a little bit more confidence. But I went through this exercise in my measurement class of making up an instrument and collecting data on it, so that my students would have real data to analyze. And I did something I had never done before. Well, I had my students do it as a homework assignment.
 
 

I had them interview somebody, thinking aloud, a cognitive interview with all these items that me and my teaching intern, my grad student Lindsay Allie, wrote. And the people reading those items [01:00:00] thought about and interpreted them in ways we never would have imagined.
 
 

Benjamin James Kuper-Smith: Yeah. 
 
 

Jessica Flake: They just did not land the way we thought they would. I mean, we wrote some of them to be bad,
 
 

because, you know, it's a measurement class and we've got to have some bad items in there, but wow. And especially the cultural diversity aspect, how people from different cultures read them. The students had to do two cognitive interviews each, and we had quite a few students interview
 
 

other international students. People were really interpreting the items in ways that we didn't anticipate. And this is really common, and I think any psychologist would know it: a bunch of PhD and master's level people wrote some questions and then they gave them to the people in the hospital, who then saw this.
 
 

Like my granny is in the hospital.  
 
 

My granny is from Kentucky. She calls black people colored. Okay. Like she's 94. Okay.  
 
 

Like  
 
 

Benjamin James Kuper-Smith: could be saying worse things at that age.
 
 

Jessica Flake: I know she could. She's pretty mild, you know, she's a [01:01:00] sweet granny in a lot of ways, but she probably wouldn't read the question the same way you did. And so I think that aspect of it, even if you're not doing a lot of complex analyses, just having a little bit of room in your mind that what you wrote may not have landed.
 
 

And if you just wrote it, then that you reviewed it for clarity and that you wrote it is good to know. It's good to report that: we developed these items, we reviewed them for clarity, we had people who spoke both languages review them, that was our best shot at it. I think it's okay to say that. And, you know, we aggregated; I think justifying aggregation is important.
 
 

We aggregated these because they were highly correlated, or we separated them because they weren't. Cronbach's alpha, I think, does raise a lot of questions, because it gets higher and higher as you have more variables. So it's not a good way to know if things are really correlated.
 
 

You only had a few variables. So maybe in this case, it's not as  
 
 

much of a concern, but we were like, oh, our  
 
 

20-item instrument. I mean, 20 items can be correlated at like 0.2 and still [01:02:00] have a high Cronbach's alpha. So that doesn't give you good information about that. But, you know, you make a decision, you justify it.
 
 

You say, you know, we didn't do a lot for this: these items looked face valid to us, we reviewed them, they made sense to us, they were correlated, we aggregated them. I think that's fine, especially in a quick study, but it would be nice if the person reading the paper could tell that's what you did.
 
 

So those are the two different issues. You could do more, you could always do more, you could develop that instrument for like five years, you know, but you don't want to.
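[For reference, the inflation Jessica describes falls straight out of the standardized alpha formula, alpha = k*r / (1 + (k - 1)*r), where k is the number of items and r is the average inter-item correlation. With k = 20 and r = 0.2, alpha = 20(0.2) / (1 + 19(0.2)) = 4 / 4.8, or about 0.83, conventionally read as good reliability even though the items share little variance. With k = 4 and the same r = 0.2, alpha is only 0.8 / 1.6 = 0.5.]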
 
 

Benjamin James Kuper-Smith: Yeah. I guess this may be one of those blind spots that you have. It's like, of course we just made this up. You know, this pandemic, we'd basically heard about it like two weeks ago, and suddenly we realized entire countries can close down.
 
 

Of course we didn't spend years, but yeah, I guess we could have just said that. 
 
 

Jessica Flake: Yeah. Or, we based these items off of this other area. You know, if there's framing of the questions, like having [01:03:00] people think about different timeframes, there is probably some kind of measurement precedent for that. It's reporting on those sorts of things.
 
 

You know, it's just the kind of thing where you can say, oh, well, there are these other instruments, it was loosely based off of this. It's just good to know: where did they come from? Did they just come out of...
 
 

Benjamin James Kuper-Smith: Yeah. That's fair enough. Yeah. I mean, I obviously don't want to give the idea that we didn't think about this at all. Right. We did obviously talk about this quite a bit, and I think implicitly it's in there quite a bit that we made it up and why we made certain decisions, but it maybe wasn't quite as good as it could have been. 
 
 

Jessica Flake: Explicit. Yeah. 
 
 

Well, that's one of the things about the open science movement, because I'm thinking a lot about what the next generation of things people are going to care about is. And you said, oh, you know, we had open data and we preregistered. You did those things because the open science movement told you they were good things to do. And so now I'm trying to push the open science movement to say, well, it's also good to [01:04:00] make sure you dot all your i's and cross your t's about the instruments, you know, when you're writing your preregistration, that there are more details there than maybe you had thought in the past. A lot of people, when they started preregistering, started thinking through the details of their analyses a lot more.
 
 

It kind of requires you to do that. And before, you might've said something like, oh yeah, I mean, we really cared about the statistical tests, but we didn't really think through too much what we'd do ahead of time, and so we just did what we thought was best. And okay, that's fine,
 
 

as long as you're honest about it. But the open science movement kind of pushed everybody to think about that aspect of it a little bit more. And so something that Eiko and I have been working on, and that I've been working on more generally, is getting people to take that same kind of thinking and apply it to their instruments.
 
 

And then people are like, oh, now we have to think about so many things. Yeah, you do. 
 
 

Benjamin James Kuper-Smith: Yeah. Yeah.  
 
 

Jessica Flake: Yeah. So  
 
 

Benjamin James Kuper-Smith: That's always what I have with open science and open code or whatever. Sometimes the response is like, you know, how much am I supposed to do? [01:05:00] Like, what do you want me to do? And then, I mean, somewhat confrontationally, facetiously, you can say, well, you just have to do your job.
 
 

Jessica Flake: Yeah.  
 
 

Yeah. I just have to  
 
 

do all this stuff and it's your job. So  
 
 

just do it, you know,  
 
 

Benjamin James Kuper-Smith: the way you get  
 
 

Jessica Flake: If you don't like it, go do something else. Yeah, I know.
 
 

People are like, oh, it took me X amount of time to revise this paper. It's like, yeah, that's your gig.
 
 

Benjamin James Kuper-Smith: Yeah. It's still not that great,  
 
 

Jessica Flake: Yeah.  
 
 

Benjamin James Kuper-Smith: But yeah, I mean, it's funny, when you mentioned that actually getting responses from people filling out a questionnaire or something can be really insightful. For my bachelor thesis project, I kind of randomly ended up doing something about blind people and sleep problems and kind of psychiatric disorders.
 
 

And first of all, blind people, you can't just give them a questionnaire on a computer or give them a piece of paper. I mean, there was lots of effort to get the consent sheet in braille, for example, and send that around the country. [01:06:00] And anyway, the way we did it is we called the people and basically walked through the questionnaire with them, with all of them.
 
 

And what was really fascinating was just how differently, as you mentioned, people reacted to things. It's been a while, but, you know, we had a bunch of clinical questionnaires or whatever, I don't remember which ones they were, but I think on average it maybe took 45 minutes to an hour or something to do the whole thing.
 
 

And there were, you know, some people who did it in half an hour, and one guy took two hours, something like that, because every single time I asked him a question, he just started philosophizing about the wording of it.
 
 

Jessica Flake: Oh, my  
 
 

Benjamin James Kuper-Smith: And he'd be like, oh, well, I'm not really concerned about this problem, it's more that... You know, he'd just take everything apart.
 
 

Whereas other people were just like, ugh, three, two, or whatever. And I mean, how do you deal with that problem? It feels like such a... [01:07:00]
 
 

yeah. Every,  
 
 

Jessica Flake: Yeah, let's not overwhelm ourselves. When you really start thinking about how people respond to questions, it feels like an infinity of uncertainty, and it seems like we'll never be able to measure anything. But, you know, for example, in any given paper, when we make up a couple of items that we think are clear, having a few other people that are kind of different demographically pilot the items could be super helpful, because it might help you.
 
 

I think we take our questions at face value a little more than we should when they're just developed by the researcher. And you can't go and have hundreds of thousands of people read them and think aloud as they're reading, because that would take too much time.
 
 

But you could do it with like five people and say, you know, most of these people interpreted the item the way we thought, or, we realized these two items were not [01:08:00] interpreted the way we thought. That simple aspect of item development is woefully missing in the psychological literature. We really just don't know if people are interpreting the items the way we think they will.
 
 

We'll look at the correlations of the items, we'll look at the item distributions, we'll factor-analyze the items, multilevel-model the items. We'll do all these complicated statistical things to the items, and none of that stuff tells us if people interpret them the right way either; it just tells us that the items are correlated. So...
 
 

Benjamin James Kuper-Smith: I guess, I mean, that's what I like to do. Not necessarily extensive piloting per se, but at least being in the room with the people. I mean, I also do other stuff; I don't really use questionnaires much. It's more about decision-making: you have a few options, you choose whatever you like most. But I really like being in the room and asking about it.
 
 

So, what did you think, afterwards. And like, why did you, you know, I'm not judging, I'm just trying to make the experiment good. Like, [01:09:00] why did you do that? And yeah, I mean, there's always going to...
 
 

Jessica Flake: I mean, I even wonder, if we just had a box on a survey of, did you understand this question, how often people would just say, I didn't understand it, I didn't even understand what you're asking me. I think when we just make up instruments, and that's kind of acceptable, and we don't talk at all about how they were developed, how they were piloted, whether there was any clarity check, you miss out on all that potential feedback from participants.
 
 

And sometimes it's fine, because, you know, some things are easier or harder to measure than others. Sometimes the study's really strong in a lot of other ways, and you can't maximize all aspects of your study; there are always trade-offs. You know, sometimes there's a limitation of your study, which is that you just made up those three items and they all made sense to you.
 
 

And that seemed fine. But I don't think that we should encourage that, or that it should be [01:10:00] the norm for how we do things. It should be easy to tell that's happening, and when it's happening, we should be
 
 

a little skeptical. Um,  
 
 

Benjamin James Kuper-Smith: I guess the thing also is.  
 
 

Jessica Flake: That's, that's schmeasurement.
 
 

Benjamin James Kuper-Smith: The main, I mean, not solution to the problem, but kind of first big step that you propose, just being more open and transparent about what you actually did and where the questions come from and why you chose them, et cetera: that's not the labor-intensive part of doing science, right?
 
 

That just means you write an additional paragraph in your paper or  
 
 

Jessica Flake: Yeah, right,  
 
 

Benjamin James Kuper-Smith: It's a pretty quick fix for a pretty substantial problem, in that sense. You know, in our entire COVID study, it took quite a bit in the end to actually analyze this stuff properly and make sure, you know, we're not p-hacking, et cetera,
 
 

and to kind of have a clear plan of how we're going to analyze this. And, you know, writing that we made the thing up would have been a sentence here and there; that's all it would have taken. Right? And...
 
 

Jessica Flake: It kind of [01:11:00] sounds like that at the surface, but it's akin to, say you hadn't preregistered, and you did 20 analyses with different versions of these variables aggregated different ways, and you found four of them were kind of interesting and confirmed your hypothesis, and you reported them in the paper.
 
 

And then I go, Hey, you know, but you should just tell me all that other stuff you did, 
 
 

Benjamin James Kuper-Smith: Right.  
 
 

Jessica Flake: Right. 
 
 

You don't want to do that, because it makes it more clear that your research probably isn't any good. And so the transparency aspect of analysis forced a lot of people to rein that in, and to change what they actually do.
 
 

And I think in the false-positive psychology paper, they have the example of with disclosure and without disclosure. Nobody is just going to write that paragraph where they're like, of the 348 analyses that we ran on the 20 different versions of our variables, and all of these different ones with interactions,
 
 

these two were significant, and blah, blah, [01:12:00] blah. You know, nobody wants to write that. And so when you have to start being transparent about it, you're like, oh, well, we should maybe do this a little differently. Like, we should do the statistical analysis differently than we would have, because before we just would've done all sorts of things.
 
 

And so I think it does imply, like, if I told you, every time you make an instrument up and you don't do anything to pilot it, just tell me, what you would do in your next study is be like, oh, well, I probably don't want to write that in the paper. Why don't we pilot this a little bit more? You know, I'll go talk to a few of the people in the hospital and have them read through the items and just see. And so that's how this transparency thing ends up improving rigor, because people end up being hesitant to say that they did stuff that, now that they're actually thinking about it, wasn't that rigorous.
 
 

They don't want to report that. Though I do think it's acceptable to use an ad hoc, or just made up, or whatever you want to call it, instrument with little development. In some cases I think it can be [01:13:00] acceptable, but I think people should hesitate before doing that.
 
 

You know, they should try to use an established instrument. But they're also going to hesitate to say, well, when we were analyzing all these different items, if we grouped these three together, we got a significant result, versus if we grouped these four together. You know, they're not going to say that.
 
 

And so, if we force them to be transparent, they might actually engage less in the thoughtless practices that kind of undermine the validity of the study. So I think that's why there's always this pushback against transparency: it ends up making people think through what they're doing more. And why would they do that when they didn't have to in the past?
 
 

Benjamin James Kuper-Smith: I mean, I think you were a coauthor on the paper, was it 'Preregistration is hard, and worthwhile'? Were you on that? Yeah. And I liked how the paper ended, because that's kind of what I've found from doing it myself a few times.
 
 

I guess the [01:14:00] first one takes a while, because you realize how unclearly and imprecisely you thought about what you were doing. At least that was my case; I'm sure there are some people who are perfectly clear and consistent in what they do, but at least in my case, I was like, well, I thought I knew what I was going to do, but when I actually had to write it down, I realized it was a lot fuzzier in more places than I would have thought.
 
 

Jessica Flake: Yeah. 
 
 

And you're like, oh, I thought I would usually do that. Actually, a lot of people might think that's a bad idea. You start thinking about how people are going to evaluate it, because you're going to put it all out there. It's like, oh, well, no, I was just going to make up the questions for that aspect of it.
 
 

Maybe I should just do a search and see if there's an instrument available already. You know, you might start to second-guess some of those decisions, because you know other people are going to be seeing them. Whereas before, there was all this wiggle room to leave things out. As we demand more transparency,
 
 

people don't have as much wiggle room to do that. And I think it kind of pulls the rigor boat up or [01:15:00] something, or at least toward what people think other people want to hear.
 
 

Benjamin James Kuper-Smith: Yeah. Yeah. Just that  
 
 

Jessica Flake: Yeah.  
 
 

Benjamin James Kuper-Smith: the fear of people asking about it is enough. Yeah. I guess now I understand measurement perfectly and know exactly what to do. I just wanted to talk briefly about your work with the Psychological Science Accelerator. About this I know fairly little; I mean, I think I just saw something on your website or somewhere that you were going to vote on this and that.
 
 

I guess I've never taken part in one of those Many Labs or, I keep forgetting what all these different projects are called, I've never taken part in them. And whenever I see them, I think, this is important work, but I don't know whether I'd want to do it. Not the replication, but the being part of this big project, where it feels like just a lot of admin and that kind of thing, and making sure to...
 
 

Jessica Flake: You're right. I mean, you're totally right. [01:16:00]  
 
 

Benjamin James Kuper-Smith: And to me personally, I like the idea of the lone genius sitting in a room and thinking of cool stuff. I really like just doing that kind of stuff. So I'm just curious, maybe: what is the Psychological Science Accelerator? Yeah, this can be very brief.
 
 

What's your role? And how did you get involved with it?
 
 

Jessica Flake: Yeah, so the Psychological Science Accelerator is a distributed laboratory network. It's thousands of researchers from over a thousand labs who have joined the network, and it doesn't have dues, there's no contract, it's voluntary to join. You can come and go as you please. What the Accelerator does is it runs studies.
 
 

So we have study proposals, and if a study proposal is accepted, then we ask our network of labs: hey, do you want to collect data for this study? And so it enables us [01:17:00] to run really large studies with labs that are all over the world. The first one that's now published is a face perception study, and we had over 11,000 participants from all populated continents.
 
 

So it's a voluntary thing, but the people who are in the network, the labs who sign up, are interested in participating in the studies sometimes, not every study. We have, well, we have nine studies right now on the go, plus three COVID rapid studies. So, you know, some labs might participate in just one of those.
 
 

Some labs might participate in two if it's of interest to them. Sometimes when we accept a new study, labs that do research in that area will sign up because they want to participate in that particular study. So, Chris Chartier started the Psychological Science Accelerator, maybe it was in 2018,
 
 

I forget what year. It was after a [01:18:00] SIPS conference. I had met him there and we'd been talking about it, and he pitched this idea that we should have a CERN for psychology: why don't we, as psychology, work in a big team to pump out really big ideas that are more globally representative?
 
 

And he had posted a blog post about it, and he wanted to get labs to sign up. A couple of weeks after that, I saw that he had started to get labs signing up from Europe and the US and various places internationally. And I called him and I was like, hey, I think you're going to have a lot of complicated data,
 
 

and this organization is going to need a data and methods committee, it's going to need that kind of forward thinking about the methods and the data. And so that's how I joined. I joined early on; I think there were like 20 of us early on. We put out a call for studies and we were reviewing [01:19:00] studies.
 
 

And then as we got going, we thought, well, we need a structure. You know, we need a director and an assistant director and associate directors, and we need study review processes and study project management processes and data storage and data release procedures. It was just getting really big.
 
 

Basically, we just had to inject a lot of bureaucracy into it. And I've been doing that there now for like four years. We didn't have committees before I came; I helped us develop them, and I run the data and methods committee, I'm in charge of the methods aspect of it. So we want to make sure that every study has a methodologist, that there's somebody with methodological and statistical expertise on the analysis at hand, that there's an analysis plan or preregistration. There's some stuff that we want to ensure is happening.
 
 

And then on the data side, Patrick Forscher helped me. He was the on-the-ground, and I guess inaugural, assistant director for data; he helped develop a lot of the data management and data sharing [01:20:00] policies that we have. And so a lot of what I do there is just try to make the whole ship run better. Like, when we get a study, reviewing the study and making sure somebody on the team knows about the methods, and if they don't, finding somebody from the network who does know about the methods; making sure that there's a plan for data release and what format the data are going to be released in.
 
 

We want to have all of our data be open, we want to share all of that with the community. So I have a little committee, there are people on it, we meet, we talk about ways to make the PSA run better. Something that we're thinking about right now is a way of collating all of our open data, so that people who want to access our open data can find it more easily.
 
 

Right now it's all on individual project OSF pages. This is not interesting, okay; I mean, all the stuff that we do, it's not interesting. One of the projects that I'm working on: we have translated a lot of instruments, and I've become more and more concerned that the instruments are [01:21:00] not equivalent across translated versions.
 
 

And we haven't done any research to see if they are. So I have two papers on the go with a committee, looking at some of our translated instruments and whether they're psychometrically or statistically similar across versions. We do a lot of stuff, translation best practices with our bilinguals, so it's not like we're just totally not doing a good job, but we have never empirically evaluated the instruments. And all of the instruments are available, and all the data from them are available.
 
 

So I think there should be something to go along with that, to say, this instrument was translated, and it's psychometrically very different from the original, or, it's psychometrically the same as the original. And a lot of the instruments are in English first and they're translated everywhere else second. And I think we overestimate the ability to take our Western ideas and translate them into different languages on a survey.
 
 

I don't think that always works as well as we would think. So something that I've gotten more interested in is all of the translated instruments and materials that we generate, and how [01:22:00] we can empirically evaluate them, and how we can develop methods that the whole field can look to for doing this kind of research, because research is getting just more and more global.
 
 

And so basically it's a lot of Doodle polls and Google Docs and Slack messages. We have meetings. I had a meeting today with some PSA people about auditing all our studies, to see if we can find all the data from all of them, and to see if we can find all of the analysis plans or preregistrations for all this stuff. Sometimes people just disappear, they fall off. So we want to find all the people who were supposed to be in charge of all the studies and make sure they're still in charge and that they're doing it. I mean, it's like the wrangling of cats.
 
 

Benjamin James Kuper-Smith: Yeah. I mean, about the translation thing: you know, I was born in England and grew up mainly in Germany, and this is a real sore point for me, I don't know whether sore point is the correct term, but I'm wary of translations. For example, when I read fiction, I've basically kind of stopped reading translations, which of [01:23:00] course limits me, but then at least to two widely spoken languages with lots of speakers and lots of books.
 
 

So that's good. But I mean, I remember once there was this supposedly great translation of a book, and I just read it, just compared the two. I was like, this is just a different book.
 
 

Jessica Flake: Yeah.  
 
 

I mean, this is exactly like earlier, when I was like, yeah, you know, you just wrote the questions, maybe the people didn't interpret them the way you thought. It's the same thing with translated instruments. You know, you translate an instrument, and sure...
 
 

Benjamin James Kuper-Smith: it and yeah. 
 
 

Jessica Flake: Yeah, like, sure, having a bilingual look through it and say that it translated is one thing, and that's better than nothing. But this idea that we can just translate our instruments and pick them up and use them,
 
 

and think that they're going to mean the same thing that they meant in the original version, this is
 
 

a huge assumption. All of these large-scale replication projects and team science projects have just made that assumption, and they haven't really evaluated it at all. None of the Many Labs have [01:24:00] done any evaluation of the translated versions of their instruments.
 
 

I have a paper out about the instruments used in Many Labs 2, and I think it was 16 languages those instruments were translated into. We're working on a paper looking at the instruments' psychometrics and whether they're different across translated versions, but in the one paper that's already published, we just have all the reliabilities across the different labs.
 
 

The translated instruments are less reliable on average. It's kind of expected; you know, from your experience, you're like, no, you translate the thing and it's totally different, or it's not as good, or it doesn't have the same meaning.
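[A minimal sketch in Python of a crude first check along these lines: computing Cronbach's alpha separately for each translated version of an instrument. The languages, items, and data are all made up, and comparing reliabilities like this does not establish measurement invariance; that would need something like multi-group factor analysis.]

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # Classical alpha: k/(k-1) * (1 - sum of item variances / variance of total score).
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
item_cols = [f"item{i}" for i in range(1, 5)]
frames = []
# Simulate two translations that work ("en", "de") and one that adds noise ("xx").
for lang, noise in [("en", 1.0), ("de", 1.0), ("xx", 2.0)]:
    latent = rng.normal(size=(150, 1))
    responses = latent + rng.normal(scale=noise, size=(150, 4))
    frame = pd.DataFrame(responses, columns=item_cols)
    frame["language"] = lang
    frames.append(frame)
data = pd.concat(frames, ignore_index=True)

# Reliability per translated version; a big gap flags versions worth a closer look.
for lang, group in data.groupby("language"):
    print(lang, round(cronbach_alpha(group[item_cols]), 2))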
 
 

Benjamin James Kuper-Smith: Yeah. And I mean, I'm even talking about English and German, two Germanic languages. They're pretty similar, right? Countries that are basically next to each other. I'm not talking about Russian and Brazilian Portuguese or something, and I mean, even that's not that far apart if you compare it to, I dunno, Vietnamese and
 
 

Jessica Flake: Yeah.  
 
 

Benjamin James Kuper-Smith: and other than that,  
 
 

Jessica Flake: Yeah. That's where my [01:25:00] thinking is going next for these team science things: how we can be an engine for doing culturally relevant large-scale research, instead of just translating instruments that were originally in English. I think of myself as working on how to make these instruments better, but I think the result of some of this work is just going to be that we can't do that very well and we should quit. But we'll see; the PSA has been doing it, and I'm trying to help us get better analysis pipelines for evaluating some of our instruments. So we've got a couple of projects on the go about that. But it's mostly not research that I'm doing over there, it's the Doodle polls and the...
 
 

Benjamin James Kuper-Smith: So is that okay? I mean, that's probably because of your, what did you say, you have a committee.
 
 

Jessica Flake: Yeah. I have a  
 
 

Benjamin James Kuper-Smith: But I'm assuming it's less administrative work if you're one of the labs that signed up and said, like, oh yeah, sure, we want to collect some data,
 
 

Jessica Flake: yeah, exactly. Right. Yeah.  
 
 

The,  
 
 

Benjamin James Kuper-Smith: kind of. 
 
 

I mean, I guess, is it then that you [01:26:00] say, this is the next project, who wants to sign up? And then you just have to do that, right?
 
 

Jessica Flake: Yeah. 
 
 

If you're just a participating member lab, you can just sign up for a study when you want, and you're on our mailing list. So you would sign up for a study when you want and collect data; you'd be sent a protocol and told what you have to do to collect the data, and then you would collect the data.
 
 

Whereas the people on the leadership team are thinking about how it should work. You know, should we change from Google Drive? I think we're going to move over to Canvas as a management platform. Should we track study progress in a Google Sheet, or should we track it in Slack?
 
 

How should we meet? Should we meet on Zoom, or should we meet on a Slack call? I mean, these are the things that we're concerned about, all of this kind of administrative stuff. Like, we developed this analysis plan approval process: when there's a study, it has an analysis plan, and we want to check to make sure that it's not totally bogus. [01:27:00]
 
 

And so, yeah, we voted: okay, we have this thing, you've got to submit your analysis plan. Well, nobody knows about it, just the 20 people who cared to vote on the issue and decided that we would have an analysis plan approval process. And so now we're like, okay, we have this process,
 
 

how do we get the studies to do it? How do we get people to know about it? It's hard to communicate with anybody, because there are so many different people and they're spread all over the world. And so, yeah, it's a lot of just communicating.
 
 

Benjamin James Kuper-Smith: Yeah, I wish these things were easier. I mean, I've had probably at least one project that I've seriously considered doing. I do cooperation and social decision-making, and there are, from economics and psychology and neuroscience, all these models of how people make decisions.
 
 

And I don't really know how good we are at predicting what someone's going to do. Like, if you give someone a prisoner's dilemma, how well does that model predict what this person's going to do? I just have no idea what the [01:28:00] predictive value of these models is. And I've seriously considered doing something about it, especially because in the cooperation literature there is Axelrod's famous tournament, where they basically got people to submit different strategies and asked which would win and get the most points, that kind of stuff.
 
 

So there has been a precedent in the field, and I've seriously considered doing something like that, but with testing how good the models are at actually predicting how people behave. And every time, I think, this is really good, I should do it. And then it's like, there you have it:
 
 

You're just going to do a lot of admin.  
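[As an aside, the tournament structure Ben alludes to is easy to sketch. Below is a minimal round-robin iterated prisoner's dilemma in Python with three toy strategies; the payoffs and strategies are illustrative, not Axelrod's actual entries or Ben's proposed model-comparison study, and with such a tiny field the ranking need not match Axelrod's famous result, where tit-for-tat won against a much larger pool.]

import itertools

# Payoff for (my_move, their_move); True means cooperate.
PAYOFF = {(True, True): 3, (True, False): 0, (False, True): 5, (False, False): 1}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return their_history[-1] if their_history else True

def always_defect(my_history, their_history):
    return False

def always_cooperate(my_history, their_history):
    return True

def play_match(strat_a, strat_b, rounds=200):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = {
    "tit_for_tat": tit_for_tat,
    "always_defect": always_defect,
    "always_cooperate": always_cooperate,
}
totals = dict.fromkeys(strategies, 0)
for (name_a, a), (name_b, b) in itertools.combinations(strategies.items(), 2):
    s_a, s_b = play_match(a, b)
    totals[name_a] += s_a
    totals[name_b] += s_b
print(sorted(totals.items(), key=lambda kv: -kv[1]))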
 
 

Jessica Flake: Yeah, 
 
 

Yeah, but that's the gig, man. The further up you go in this business, the more administration you do. The expense reports; my whole life right now is figuring out the best way to spend my money, because it's been a pandemic, my grant money, and paying people, making sure the student's computer can run the software.
 
 

I've booked hotel [01:29:00] rooms for grad students before, you know, so they don't have to book it, because they don't have money. There's all this, oh, COVID teaching and COVID restrictions and the university during the pandemic. This is all just administrative work, or drudgery. It's administrative drudgery.
 
 

It's not like cool research projects. So you better watch 
 
 

Benjamin James Kuper-Smith: but why did you choose then to work on the thing that forces  
 
 

Jessica Flake: I  
 
 

Benjamin James Kuper-Smith: exclusively that, I mean,  
 
 

Jessica Flake: Yeah. I  
 
 

Benjamin James Kuper-Smith: would you like doing some of that or was  
 
 

Jessica Flake: I liked doing some of it. 
 
 

I was not on an academic track for my career, and I didn't care if I was an academic or not. Actually, I planned not to be an academic. I didn't even think about being an academic; I didn't know what one was. And then...
 
 

I  
 
 

Benjamin James Kuper-Smith: plan. 
 
 

Jessica Flake: Right. I just didn't even care about that. But in my postdoc,
 
 

which I took for kind of geographical and personal reasons, it made sense for [01:30:00] my personal life, I got this opportunity, and so I took it. And I started, you know, I went to SIPS and I met these people, and I thought that it was super energizing. I was learning more about academia and how it worked and how it could work better.
 
 

And then there were all these people who wanted to change it and make it work better. And the Accelerator is one of those things that I put a lot of time into that doesn't pay the kind of dividends that we traditionally value in academia. But there's such a big effect for me of working with people who want to improve our science.
 
 

It kind of gives me the energy to do the other stuff. And having this network of people, I don't know, they feel like my comrades; I feel like those are my people. And if I say to them, hey, our translation, you know, the empirical evaluation of our translated instruments, is something we really need to work on,
 
 

They're like, oh Yeah. 
 
 

that's awesome. You know, like, yeah, you're [01:31:00] right, we should be. So it's just, we have this community, and I feel like I'm working on something that matters and that can do something really different in the discipline. You know, we're running some of the largest, most globally representative studies in the history of psychology.
 
 

It's cool to be a small part of that. And even though it is a lot of drudgery, when I meet with the people that I meet with and we talk about stuff, it sometimes gives me a little pep in my step. You know, when our first paper was accepted, people from all over the world were sending little emojis, being excited about it.
 
 

It's like, wow, we just did this with over a hundred other people and data from all over the world, and you just don't get that; that's not the day-to-day of academia. And so it's a little bit of a pet project for me, or a passion project sort of thing. I get a lot of value out of it, even though, you know, I think, oh, it might not be the traditional academic outcomes. But, you know, I'm on a really big paper.
 
 

[01:32:00] I've got some grant money related to some PSA projects, so it's not totally counter to what I should be doing as a professor. And my students are working on some research related to the translation stuff, so it's got some research paybacks too. But it's also just having that group of people, where we fight with journals about how to publish papers.
 
 

You know, we push a lot of boundaries, because sometimes journals want to publish our study because it has a giant dataset, but they want to publish it in a way that we don't like. You know, we've been in fights with Nature Human Behaviour, with PNAS; we've been in fights with all these journals to make sure that we can keep our integrity and keep our open science and all the things that we want.
 
 

It's kind of cool to see a little bit of progress being made through these little incremental wins that we get. We've got some recent funding, a couple million dollars from the Templeton Foundation, so it's like, oh, we're gaining some traction. So I'm not going to walk off now; I've been doing the Google [01:33:00] Sheets for four years, so I'm going to keep doing it. But you can get involved at different levels.
 
 

You know, you don't have to. I probably do more administrative stuff than is necessary, but I do kind of like that stuff, even though I complain.
 
 

Benjamin James Kuper-Smith: Okay, cool. I mean, I've been holding you for too long already, and we managed to find a positive ending, so let's just leave it there.
