BJKS Podcast

80. Simine Vazire: Scientific editing, the purpose of journals, and the future of psychological science

Simine Vazire is a Professor of Psychology at the University of Melbourne. In this conversation, we talk about her work on meta-science, the purpose of journals and peer review, Simine's plans for being Editor-in-Chief at Psychological Science, the hidden curriculum of scientific publishing, and much more.

BJKS Podcast is a podcast about neuroscience, psychology, and anything vaguely related, hosted by Benjamin James Kuper-Smith.

Support the show: https://geni.us/bjks-patreon

Timestamps
0:00:00: What is SIPS and why did Simine co-found it?
0:05:10: Why Simine resigned from the NASEM Reproducibility & Replicability committee
0:13:07: Do we still need journals and peer review in 2023?
0:28:04: What does an Editor-in-Chief actually do?
0:37:09: Simine will be EiC of Psychological Science
0:59:44: The 'hidden curriculum' of scientific publishing
1:04:03: Why Simine created a GoFundMe for Data Colada
1:15:10: A book or paper more people should read
1:17:10: Something Simine wishes she'd learnt sooner
1:18:44: Advice for PhD students and postdocs


References/links

Episode of Black Goat Podcast I mentioned: https://blackgoat.podbean.com/e/simine-flips-out/

Mini-interview with Simine in Science: https://www.science.org/content/article/how-reform-minded-new-editor-psychology-s-flagship-journal-will-shake-things

My 2nd interview w/ Adam Mastroianni, and his blog post on peer review:
https://geni.us/bjks-mastroianni_2

Interview w/ Chris Chambers and Peer Community In RR
https://geni.us/bjks-chambers

Simine's vision statement for Psychological Science
https://drive.google.com/file/d/1mozmB2m5kxOoPvQSqDSguRrP5OobutU6/view

GoFundMe for Data Colada's legal fees
https://www.gofundme.com/f/uhbka-support-data-coladas-legal-defense

Francesca Gino's response
https://www.francesca-v-harvard.org/

NYT Magazine article about Amy Cuddy (and Joe Simmons)
https://www.nytimes.com/2017/10/18/magazine/when-the-revolution-came-for-amy-cuddy.html

Streisand effect
https://en.wikipedia.org/wiki/Streisand_effect

Holcombe (during dogwalk). On peer review. Personal communication to Simine.
Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science.
Reich, E. S. (2009). Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World.

[This is an automated transcript that contains many errors]

Benjamin James Kuper-Smith: [00:00:00] Maybe I'll start very, very broadly and then we can narrow it down a little bit. What, what is SIPS? And, uh, I'm, I'm asking this, this broadly because I think a lot of my audience comes also from neuroscience. Um, so maybe what is SIPS and how, I guess, not did you get involved with it, but you co-founded it, I believe?
 
 

Simine Vazire: Yeah. So SIPS is the Society for the Improvement of Psychological Science. Um, yeah, I co-founded it with Brian Nosek. The first meeting was in 2016, but we started, I reached out to Brian in 2015 after yet another conference where, so this was, you know, three, four years into the replication crisis in social and personality psychology, my home discipline. At every single conference you would go to, there would be at least one, usually several, panels about open science or replication or things like that. And there would be the same debates and the same talking points over and over again. And the same, like, awkward hushed whispers in the elevator and, you know, conversations at the bar [00:01:00] after the program ended and things like that. So, after another such conference, I emailed Brian and I said, you know, is there any meeting or group where people can get together and get past these debates? And get together with other people who agree on the fundamentals of, like, things could be better, and just start working on improving things rather than debating whether or not things are this bad or that bad or what's the cause of the bad things. So yeah, by emailing him and, you know, going back and forth a bit, realizing there isn't such a platform or opportunity for people to get together, we decided to try it out. And so it was first going to be just a one-time meeting, to see what the reception was like, how much interest there was, and Brian and the Center for Open Science provided a lot of really important support.
 
 

It wouldn't have been possible without them. So we wanted to make it really accessible to anybody who wanted to come. And yeah, it was just basically to see how big the community was, were people interested, what would happen [00:02:00] if we brought people together. And back then, the idea of, like, a hackathon was really new in psychology. I'd never been to a conference where you did something other than listen to people talk, so, like, the ratio of people speaking to listening was always very, very skewed. And there was never any action component at the conferences that I'd been to. And so that was really Brian Nosek's idea to bring, I'm sure he had these experiences more from the tech world, to bring that to psychology.
 
 

So the conference was very few talks, just people getting together in groups and working on a concrete problem, a concrete solution, things like that. So the first conference in 2016 was about a hundred people, and at the end of the conference we literally, like, voted with raised hands whether we wanted to keep going, whether we wanted this to be an ongoing group. So we voted to do that and we formed an interim executive committee, and then that group wrote bylaws and eventually held elections. And it's been going ever since. So [00:03:00] now I'm no longer affiliated with it other than being the editor in chief of the journal that's affiliated with the society. But I'm not on the executive committee anymore. I don't run anything. I don't run the conference, but it's still happening. Still going strong. The last meeting was in Italy and there were, I think, a few hundred people there. The peak in person was right before COVID. I think it was, like, I don't remember, 500 people or something, that could be wrong.
 
 

And then when it moved online, registration went up, but now it's kind of a hybrid thing. So there's a few hundred people in person, a few hundred people online. So there's the annual conference, next year it'll be in Nairobi, Kenya, but there's also a few other initiatives that the society does. They do some small funds for different projects.
 
 

They do awards for projects and outputs. Um, they run PsyArXiv, the preprint server for psychology. Yeah, a few other things like that.
 
 

Benjamin James Kuper-Smith: You mentioned that you, or you said that you contacted Brian Nosek. Like, was the Center [00:04:00] for Open Science already a thing back then? I don't know exactly the timeline of when that started. Is that also why you contacted him, or?
 
 

Simine Vazire: Yeah. Yeah. I thought if, if there was an initiative like this, I didn't want to, like, reinvent the wheel. I thought he would know, because, yeah, the Center for Open Science was already a pretty big thing by then. So 2015, so I contacted him probably, like, a month or two after the Reproducibility Project: Psychology came out.
 
 

The big hundred. 
 
 

It's the hundred-replications paper that the Center for Open Science, and Brian, played a really big role in. But yeah, so the Center for Open Science had already been around for a while by then.
 
 

Benjamin James Kuper-Smith: yeah. By the way, I think the decision to go to Kenya next year is, uh, I think a lot of people like that, right? 
 
 

Simine Vazire: Yeah, I think it'll be really great. I think it's great to just do it in different places, and I think it really broadens the community. 
 
 

Benjamin James Kuper-Smith: Yeah, definitely. I mean, I guess now people from Africa, well, I mean, from Kenya at least, can be like, yeah, maybe I can go. Whereas people from Europe and North America are like, ah, can I afford it?
 
 

Simine Vazire: right, right, right. Shakes things up a bit.[00:05:00]  
 
 

Benjamin James Kuper-Smith: Yeah, um, maybe in, in line with the kind of... not exactly in line with SIPS, actually, but when it was announced that you were going to be the editor in chief of Psychological Science, um, there was this mini-interview in Science, and there was a quote there that I was just, uh, curious about, uh, what exactly it meant.
 
 

It said, in 2018, for instance, um, Vazire, is that actually how you pronounce it?
 
 

Simine Vazire: Yeah.  
 
 

Benjamin James Kuper-Smith: or, yeah, okay. Uh, resigned from the National Academies of Sciences, Engineering, and Medicine's Reproducibility and Replicability in Science study committee. Great sentence. Uh, frustrated with what she saw as its lack of engagement with the question of how much error in science should be considered acceptable.
 
 

I actually don't know what the context is. Um, I'm curious what was, what, what, what was the context? 
 
 

Simine Vazire: I mean, I didn't know very much about these committees before I was on one either. So the National Academy of Sciences was commissioned by the U.S. Congress. So U.S. Congress passed a law saying we [00:06:00] want to know more about these replication and reproducibility issues in science. So that meant that the National Science Foundation in the U.S.
 
 

had a mandate to produce a report on this issue, and they outsourced that basically to the National Academy of Sciences, who formed a study committee tasked with writing this report for Congress, basically. So I was chosen to be on the study committee. I don't know how. I feel really lucky to have gotten to be part of that, because it's a really, really interesting process. There were about a dozen people from all different fields of science, and some other people, like one philosopher of science and a couple of people in industry, um, so probably more than a dozen people altogether. And so we would get together in person for a few days every few months, for a year, I think. And so I went to all the in-person meetings, and in between we would work on the report, the document, and basically got to the point where we had finished all the meetings and we had a full draft of the report that we [00:07:00] revised a few times. And my experiences led me to the conclusion that this was the wrong committee to write a definitive report on these issues, for a number of reasons.
 
 

I mean, some of it was frustration with the process, some of which I can't talk about very much, because everything that happens during the committee meetings is confidential. So I can't talk about any specifics that happened during the committee meetings, but I can point to things in the report. So, for example, there was a chapter in the report early on about kind of setting the context, of, like, helping the public understand errors in science, and I think that's important. I think that it's important to acknowledge that errors in science are normal, that we shouldn't expect science to never make errors, and so on. But I also think that we shouldn't ask the public for a blank check.
 
 

So we shouldn't tell the public, any and all errors in science are normal. You should accept whatever happens in science, even if we keep making the same error over and over again, or if we make errors that seem really avoidable or things like that. So I don't know what the answer to this question is, but I really wanted us to [00:08:00] grapple with the question of like, what's a reasonable amount or pattern of error in science? 
 
 

What's reasonable to ask the public to tolerate, and what's too much error, or a kind of error that's not acceptable, or a pattern of errors that's not acceptable, where the public should then lose trust in, at least, that scientific field or community. And I think it's a really, really difficult question.
 
 

I still don't know the answer, but I was frustrated that the report didn't engage very much with that question. To me, it felt like it was asking the public for complete trust, regardless of what kinds of errors a particular scientific discipline might make. But also, another thing I came to realize, and maybe this should have been obvious to me from the start, is that the way these committees are constituted is, I think, appropriate for many topics: picking kind of experts and people who are really successful in their fields to sit on these committees and do the reviews and write the summaries about what's the best evidence on this topic. So for [00:09:00] something like genetically modified foods, yeah, it makes sense to me that you would want the best experts and so on. The problem is when you're doing something about science itself, and you get the people who are most successful, at the top of their field, I don't think that's the right group to evaluate it. How would we know if there were, like, deep problems? You know, problems that shouldn't be tolerated in science, problems that need to be fixed, or that the public has a right to be concerned about.
 
 

Benjamin James Kuper-Smith: Sorry, why aren't those people appropriate? 
 
 

Simine Vazire: Because, by definition, the people who have succeeded in the current system are not very likely to think that there's a problem with the current system, if there was one. So I don't think that the way that this committee was, uh, constituted, and the way that all NAS committees, from what I understand, are constituted, which is mostly, not exclusively, but mostly picking people at the top of their, uh, their disciplines. Um, and obviously I was an exception to that. I was relatively junior for someone on that committee, which again, I don't know [00:10:00] exactly how I got on there, so I don't mean that about me, but many of the other people on the committee were, you know, very, very successful. And so the idea that they, and I don't mean them in particular, but the idea that people chosen in this way would then be willing to turn around and say, oh yeah, there are all these problems in the system, or there might be problems in the system. It's just the wrong way to go about it, in my opinion.
 
 

Benjamin James Kuper-Smith: I mean, do you think you were asked because of your involvement with SIPS? I mean, it sounds like a pretty, or that, that kind of work at least.
 
 

Simine Vazire: Maybe. I mean, I know that the role that I was supposed to play on the committee, because people were kind of given a discipline or a group that they represent, and I was representing journals and journal editors. So I think it was more about that than about SIPS. I still don't know why they didn't get a more eminent person in that role, and they probably wish they had now. But I don't know, maybe it was on purpose, to not get all late-career people. I was, you know, in the early part of the mid-career stage, I would [00:11:00] say, at that, at that point. Like, by some definitions, I was still technically early career, but, I mean, that's a whole other issue, all the varying
 
 

definitions of early career. Um, so maybe that was on purpose, in which case they deserve credit for that. They didn't pick...
 
 

Benjamin James Kuper-Smith: I mean, maybe they recognized the same thing that you mentioned, right? There should be a bit of a mix in there.
 
 

Simine Vazire: It's possible. In that case, either I wasn't good enough, or one person wasn't enough. I don't know. Maybe, I mean, I felt a lot of pressure the whole time, that I was the one chosen in this role, and I needed to live up to that, and it was an opportunity that, you know, sometimes I would imagine, like, what would Brian Nosek do if he had a seat at this table?
 
 

And that's a lot of pressure and big shoes to fill, and I did feel a little bit like I failed. I felt, I felt like I was trying to bring up these issues and trying to get a little bit more meta, at least in the report, but I advocated for some things in the report that I wasn't able to get traction on. 
 
 

Benjamin James Kuper-Smith: I mean, you said that you, you know, were one of the more junior people. Did [00:12:00] then making the decision to resign from that position feel like an extra big hurdle? Or was it almost less of a hurdle because you were not the, you know, most senior person in...
 
 

Simine Vazire: I was, I was really scared. I remember the phone call. I remember where I was standing, where I was pacing during the phone call to the chair of the committee. I was really nervous. Yeah. I wasn't sure I was doing the right thing either, but even if I had been, I would have been very nervous.
 
 

Benjamin James Kuper-Smith: Was it a good decision? In hindsight?
 
 

Simine Vazire: I think it was fine. I think in hindsight it mattered a lot less than I thought it did. 
 
 

Benjamin James Kuper-Smith: Yeah, okay, yeah, that's fair. Um, obviously I, I haven't read everything about you, um, but this was the one instance where, for example, I read about it in this one interview, so if they hadn't mentioned it, then I wouldn't have heard about it either. Um, yeah, I mean, I thought maybe, I'm jumbling my order completely, but, um, I thought maybe we could talk about journals and editing them, and [00:13:00] why we have journals, and then of course your, uh, new position from, what is it, January, at Psych Science.
 
 

Um, so I thought we could maybe start very broad. Um, why do we... Why do we need journals? Or maybe what's the purpose of a journal, not, you know, originally or historically, but like right now. Um, yeah. 
 
 

Simine Vazire: I think, I think it depends how you define journal. I think the purpose of peer review, which is different from the purpose of a journal, but I'm going to answer that question instead.
 
 

Um, I mean if you think of like scientific outputs, let's say papers, but in principle it could be other kinds of outputs too. 
 
 

If you think of scientific papers coming out as, like, a fire hose, it's too much. We need some kind of help filtering the good from the bad, both because not everybody has the expertise to evaluate every paper on its merits, but also because we don't have time. And so I think of peer review as one relatively early filter. Uh, it doesn't do a great job, so it lets a lot of bad stuff in, and [00:14:00] it kicks out some good stuff that it shouldn't, although most things, most things that people want to publish get published eventually in a journal, so it's just a matter of persistence, for better or worse. So I think that that's important.
 
 

I think it doesn't serve the purpose of being, like, a really reliable quality filter, and it shouldn't, and it can't, but that's sometimes a misconception, I think, about peer review and about journals. So, one way I put it to someone recently is: if you think of, like, the life cycle of a scientific claim, from the first time it's exposed to the public or to the world, to when it becomes, like, an established fact, which of course maybe never happens, but let's oversimplify and say at some point it's a done deal. It's, like, really, really solid.
 
 

It's like really, really solid. So if that were a 24 hour period, I would think of like journal based peer review as the first hour maybe. And in that first hour of scrutiny, you learn a lot of things. You can already start ignoring some claims and. If you pay more attention to others. But it shouldn't stop there. 
 
 

There's so much more that happens after journal peer review [00:15:00] before we get to the point where it should become, like, a textbook finding or a really, really established fact. But if we got rid of that first hour, we'd be in even more of a mess of, like, how to make sense of the entire fire hose of information coming at us.
 
 

So I do think it serves a really important function, but it's a relatively early and non-definitive function of filtering things that seem more worth our attention from things that seem less worth our attention. And of course, 'worth our attention' can mean very different things. Another thing that I think journals... a purpose I think they should have is kind of having a brand that says: our aim is to identify the things worthy of your attention for this reason. And that brand could be, like, we want to bring things to your attention that we're really sure about. Or another journal could say, no, we want to bring things to your attention that are really exciting, but that we're not necessarily sure about. Or another journal could say, we want to bring things to your attention that, I don't know, whatever, like, would have implications for the broadest [00:16:00] range of people possible, or something like that, right? There could be so many different criteria on which you decide what's most worth our attention or what we want. And then, if journals were really explicit about their brand, or what they're trying to maximize, then they could be evaluated on that.
 
 

You could look back and say, okay, you claimed that you were selecting for the things that were most robust and rigorous, so then they should have a high rate of replicability, they should, you know, whatever, we could put them to the test to see if the journal achieved its goal. Whereas for another journal that said, no, we want to bring you stuff that's really exciting but not at all sure, then we wouldn't necessarily expect a high rate of replicability, but there would be truth in advertising.
 
 

Readers would know that's not what they're selecting on, necessarily. Right now I feel like many journals don't pin down exactly what they're selecting for, and so then it's hard to know if they're doing a good job, it's hard to hold them accountable, and then they're a little bit kind of doing a, what's the expression I'm trying to think of? The motte-and-bailey, where they'll, like, kind of on one hand say, oh, we're publishing the most exciting, [00:17:00] groundbreaking stuff, but then also, you can trust it, and it's totally reliable, and blah blah. It's like, well, probably it can't be both, so pick one, or at least tell us which papers you're publishing because they're exciting and which ones you're pretty sure about. So I think journals should do a better job of this, but one function they could serve is to say: we're filtering mostly on this, so if you're interested in that, you know, follow us and we'll pick out the articles that, that are worth your time if you're interested in the same values or the same dimensions. Yeah, I think those are some of the...
 
 

Benjamin James Kuper-Smith: Hmm. I think usually it's more about topic, right? It's usually, like, this is the topic, and that's kind of the limitation. But I've rarely seen someone say, yeah, we publish speculative stuff, it might...
 
 

Simine Vazire: But I mean, we all kind of know that some journals are known for that, right? And within a topic there are different journals, and there's kind of a reputation, but it's more informal, like, oh yeah, this journal is more about rigor and this journal is more about flashiness or whatever. [00:18:00] But yeah, I don't know that journals would want to own those, those reputations completely, but in a way it would protect them, right?
 
 

Because if a journal would just admit that it's about flashiness, well, they wouldn't put it that way, but there's a positive way to spin that, right? They're about, like, big-if-true things that really, like, need to get out there because they could revolutionize everything, but that we should not interpret as definitive. Then they wouldn't look bad if those things don't replicate, because that wasn't the point. So I do think it could be a way of protecting themselves, to have this kind of honest advertising of what they're selecting for.
 
 

Benjamin James Kuper-Smith: Um, I want to talk a little bit more about that. I just have a question about something you mentioned earlier, um, and I don't want to, if I ask it later, it's going to be completely out of context. Um, just briefly, because you mentioned the filter, um, of peer review and that kind of thing. How much of a filter is it if basically everything can get published, if the authors are just willing to submit frequently enough? Is it just a filter then because of the hierarchy of how much attention [00:19:00] people pay to different journals?
 
 

Simine Vazire: I don't, I don't think it's just that, but I do think it's mostly that. So there's a couple of different ways in which things could get kind of completely excluded from the peer-reviewed literature. One is if, during peer review, an issue comes up and the authors acknowledge that issue, and believe it, and take it seriously, right?
 
 

So then they could, maybe they'd stop trying to publish the article if it's not fixable. Or they fix it, and a claim that would have been wrong doesn't get through peer review, but a correct claim does. So just because an article eventually gets through peer review doesn't mean the claim does. And I should give credit to my partner Alex Holcombe, who pointed this out on our dog walk this morning. So that wasn't my original idea.
 
 

Um,  
 
 

Benjamin James Kuper-Smith: That's going to go in the references. Holcombe, during dog walk...
 
 

Simine Vazire: yeah. Um, but I do think some things authors give up trying to publish if they get rejected enough, or they, yeah, they come to see that the reviewers have a point and they change their claim or they change [00:20:00] their approach. But I, I do think a lot of the filtering is more about how high-visibility a journal it ends up getting into.
 
 

I mean, there's a well-known statistic, which I think is true but I'm not 100 percent sure, that the modal number of citations of scientific outputs is zero. So many things technically are peer reviewed, but... yeah, there's a filtering on, functionally, does it get any attention? Although, I do think the peer-reviewed label is still treated as a sign of quality, and so we need to be careful if it's not actually serving any purpose.
 
 

If anything could get, quote unquote, into a peer reviewed journal, then it's really not serving much of a function. Or we need to change the way people understand what that label means, if it's, if it's not actually filtering anything. 
 
 

Benjamin James Kuper-Smith: Yeah, I mean, what I find kind of tricky is, I mean, I don't really know. I mean, I know there's some people who say, you know, we shouldn't have peer review at all, it's kind of all a bit pointless, and that kind of stuff. And [00:21:00] I mean, I talked to Adam Mastroianni recently again, um, and he had this fairly widely read, uh, blog post, let's say, about it. And I mean, one, one really good point I think he makes is: you can just make a new journal with anything. So you can have a creationist journal, you know, you can just make a journal, you can call it peer-reviewed, and then it's fine. And I do wonder, so, I mean, it feels kind of weird to me to say peer review is completely worthless, but also, if that can happen, it...
 
 

Simine Vazire: Well, I think we should separate peer review from journals. 
 
 

Benjamin James Kuper-Smith: you  
 
 

Simine Vazire: So, even if we got rid of journals, I still think we should have peer review. I think the problem, the solution is not to get rid of peer review, the solution is to calibrate people's understanding of peer review. So, getting past peer review doesn't mean as much as people think it means, but that's okay. Like, I really think the problem isn't that peer review is horrible and broken; I think journals... are more broken than peer review.
 
 

But, um, [00:22:00] I think the problem is that peer review gets put on this pedestal, and people think that it does something that not only does it fail to do, but it can't and shouldn't be expected to do.
 
 

Benjamin James Kuper-Smith: So what is the kind of best-case but realistic scenario for peer review and its function? Is it to just point out certain flaws and hope that they get fixed, or?
 
 

Simine Vazire: Yeah, I think it's, I think in the future, peer review will be more and more separated from journals. I think journals will serve more of a curating function, and peer review will be seen as an ongoing process. And it doesn't end at acceptance, and that will become more the case as publication becomes separated from peer review.
 
 

So publication will happen first, then peer review will happen on top of that. And that helps, because then there's no end point for peer review. In the traditional system, where peer review happens first and then publication, publication is seen as a point when scrutiny should stop, or you have to be a jerk [00:23:00] to continue critiquing a paper, or something like that. Um, but to start with publication and then do peer review on top of that helps, because then it's more obvious that peer review is an ongoing process. And then anyone could come and curate a list of recommended articles. They might end up looking silly if the peer review in the future points out, and there comes to be a consensus, that that article actually had flaws. That's normal. That's, I mean, yeah, journals are taking a risk, and anyone who curates and recommends papers is taking a risk when they're saying, we think these papers are going to stand the test of time, if that's what they're saying. But we need that too. I do think we need some curation. You know, in the same way that, like, even on a micro level, there are some people I follow on Twitter who, if they recommend a paper, I'm much more likely to read it. So they're curating papers for me, right? They're not a journal, but that's basically, so a journal should be people who are good at that: evaluating a bunch of papers and deciding which ones to recommend to their [00:24:00] followers, basically. I think that's kind of what a journal is doing, although right now it's also taking on the role of organizing the peer review. Which I think is necessary right now, that's the stage we're at. But it's a really hard way to do peer review, because it's locked in time. It depends on who's available during this short time window, are they going to do a good job, et cetera. In an ideal world, peer review wouldn't start and end there, it would be ongoing.
 
 

Benjamin James Kuper-Smith: From a kind of organizational perspective, how do you think this is going to change? I mean, I talked to Chris Chambers, and he mentioned, for example, Peer Community In, and some of the, especially for Registered Reports, you know, you have peer review and acceptance independent of, you know, I mean, not completely independent, but a bunch of journals say, like, we'll accept anything that's accepted.
 
 

Do you think that's kind of the way forward, or? 
 
 

Simine Vazire: I think, like, from a mechanistic perspective, that's the way forward. Like, I think there are platforms now, there are, there are technical solutions to [00:25:00] these problems. I think the harder part is changing the incentive structure, making it so that people want to choose those options, and I don't know how to get there.
 
 

Like, I think there's still a lot of prestige attached to traditional journals, and that's not changing very fast. It is changing, but it's pretty slow. The technology and the ability to do these kinds of journal-agnostic peer review platforms is there; it's far ahead of where I think the incentives and reward systems are.
 
 

Benjamin James Kuper-Smith: Maybe in terms of, I mean, we talked about brands earlier, of journals. I'm curious, I mean, you, I guess, are still editor in chief of Collabra: Psychology. What's the brand of Collabra: Psychology supposed to be, and why did we need that journal? When was it founded, five years ago or something?
 
 

I don't know exactly. 
 
 

Simine Vazire: Yeah, it depends how you count, but yeah. Um, so Collabra: Psychology is a psychology-wide journal [00:26:00] where we evaluate manuscripts based primarily on rigor and ethical considerations, not on novelty. And there's a tiny little asterisk there that's probably not worth getting into. But yeah, so the idea is to still be selective.
 
 

It's not meant to be, like, a low bar. It's meant to be a high bar, but a high bar based mostly on the ethics and the rigor and the calibration of the claims. And of course, that could mean a lot of different things. There isn't a formula. There are some things that are required, like the data and code and materials need to be shared to the extent ethically possible, things like that.
 
 

But it's not like you have to have a particular sample size, or you have to pre-register, or you have to do whatever. But to the extent that you don't reach those ideal design features and things like that, then your conclusions should be caveated appropriately. Yeah, part of it was to try to see if we could create a journal that had those two characteristics: to select mostly [00:27:00] based on rigor and related issues, not on novelty, but still be moderately selective. Like, not be seen as a place that accepts everything. And in terms of how we've done, I think the reputation of the journal is quite good. The submissions are quite strong. For a new journal, it's done amazingly well.
 
 

The publisher tells me that, yeah, this is not at all typical for a new journal, to get up and running and be as successful as this one has been. I think having SIPS affiliated with it helps a lot, too. So that came a little bit after the journal was founded.
 
 

Benjamin James Kuper-Smith: Oh, okay. So it wasn't... Yeah, I saw that, you know, on the SIPS website, it says Collabra is the official journal of the society, but that wasn't from the... it sounds like it was founded through SIPS, but then it was...
 
 

Simine Vazire: No, it was founded actually by Dan Morgan when he worked at UC Press. It was his idea, so he was more on the publishing side of things. And then a few years after... actually, Collabra was founded before SIPS existed. And then
 
 

I think maybe a year after SIPS was founded, it affiliated with [00:28:00] Collabra.
 
 

Benjamin James Kuper-Smith: Okay, well, I wanted to ask a little bit about what editor in chiefs actually do, or no, editors in chief, that way around, um, what they actually do, uh, and especially why you, you know, want to be and will be the editor in chief of Psych Science and that kind of stuff. Maybe, as a starting point, I mean, partly I want to know, like, what exactly do they do, but maybe the way I'm asking this is: why did you decide to be editor in chief of Collabra: Psychology, um, because that seems like a journal that's already doing a lot of the things that you want implemented. So, um, yeah.
 
 

So, um, yeah. 
 
 

Simine Vazire: Yeah, the honest answer to why I applied for editor in chief of Collabra: Psychology is just that I love editing, and I felt like I was going to be at loose ends if I wasn't editing. Um, it was funny, when I was finishing my term as editor in chief of the journal I edited before Collabra, [00:29:00] SPPS, um, I remember I was going for a walk with my mom and I was telling her it was going to be sad when my term was over.
 
 

I didn't have anything else lined up. And she was like, well, why can't you just keep doing what you're doing? And I was like, what do you mean? And she's like, well, you could just, like, read people's papers and email them and tell them what's wrong with their papers. And I was like, yeah, you know, funnily enough, when I'm an editor, they care, but when I'm not an editor, they don't care what I think about their papers. Um, so instead of doing that, I decided to apply for the Collabra position. I mean,
 
 

it is  
 
 

Benjamin James Kuper-Smith: So just briefly, I,  
 
 

Simine Vazire: also just a  
 
 

Benjamin James Kuper-Smith: uh, sorry to interrupt, but I, I listened to the episode of your, of the Black Goat podcast where, I think this was the one where you'd just stopped being an editor, and you said you were going to do this. I'm curious, did this actually happen? I think the plan was for you to, once a month, or I can't remember what it was, but,
 
 

Simine Vazire: Yeah. Yeah. Yeah. I wrote a blog post saying I was going to do it, but no, yeah, I didn't do it. 
 
 

Cause I, yeah, I did Collabra instead, although I thought I could maybe do both, but yeah, no.[00:30:00]  
 
 

Benjamin James Kuper-Smith: Okay, so, so what's, what's the fun part about being an editor? I've never been one. I 
 
 

Simine Vazire: Oh, I think, I think of it as, so, my favorite thing to teach, and it took me a while to figure out, but maybe this is an answer to a question you're asking later about what I wish I had known, um, when I started my career. So I thought that I would teach my substantive area, which is, like, social and personality psychology. And I really didn't enjoy teaching that. And it wasn't until I was, like, assigned to teach research methods that I realized, oh, I love teaching research methods. And I think the reason I like editing is the same reason I like teaching research methods. I'd much rather talk about the process and evaluate the process, and kind of think of research as, like, a puzzle. And I love being faced with a puzzle, and you're basically evaluating, do the pieces fit together? Like, this is what they were aiming to answer, this is what they did, and this is what they concluded. Do those things match? Do they go together? I really enjoy that, and I enjoy teaching students how to think critically. So I [00:31:00] think of, you know, teaching research methods as basically teaching critical thinking, and I think of editing as, like, applied critical thinking. Like, okay, you've learned in the abstract what critical thinking is, now go at it with this paper. And a lot of what we do when we're mentoring undergrads, for example, if we have lab meetings where we read an article and discuss it, we're training them on, like, how to do evaluation and critical thinking of a specific paper. Um, and I really enjoy that process, and I feel like I'm always learning. Like, you know, when I would teach research methods every year, I felt like my editing was better during the terms that I was teaching research methods, because I would remind myself even really basic things.
 
 

Like, I would relearn, oh yeah, of course we should evaluate papers on their, whatever, this aspect of their external validity, or things like that. And part of it is just hard to keep it all in your head, and so when you're teaching, you're going through all the things that you're supposed to be evaluating research on. Yeah, so I find that really fun. Um, [00:32:00] so that's, that's part of it. I mean, that's what you do even as a reviewer, though, so that's not specific to being an editor, much less an editor in chief. Other parts of being an editor in chief, I mean, honestly, the most fun part to me is digging into individual papers, which then maybe editor in chief isn't the right role for, because you don't actually spend that much time handling a paper all the way through to acceptance.
 
 

But, um, editors in chief do a lot of triaging, so desk rejecting, and I think that's really, really important, because it's a chance to, on the positive way of framing it, it's a chance to identify papers that are undervalued, that other editors might miss but you see value in, and then you want to give them a chance. And that feels very powerful, because you're helping to shape incentives, you're helping to reward research that you think should be valued more than it is. The flip side of it is you also get to kind of kick back papers that another editor might have thought were worth publishing, but you have different values, and you think that different things need to be prioritized. And so obviously, I [00:33:00] don't want to, like, on an individual paper level, it's not a fun task to reject a paper, but when you feel like you're shaping the rewards of the field, and you're saying, no,
 
 

this might have been okay in the past, but now we're going to expect more, or if you're going to use that kind of design, then you can't draw this kind of conclusion, or things like that. Being able to change the norms of, like, what should be expected and what is an appropriate conclusion to draw from different kinds of evidence and so on, it feels important. And so even triage, it's a lot of negativity, a lot of potential conflict, and it's not as in-depth as a submission that goes out for external review and so on. But it has a big influence on the incentive structure, because of the things you let through versus don't. It's also a power that can be abused very easily, so it's something, obviously, editors need to be really, really conscious of: how accountable they are for those decisions, how much they're communicating to the public, to the audience, the community, about what kinds of decisions [00:34:00] they're going to be making, so that people don't waste their time submitting things that, as a matter of policy, are going to get desk rejected. As much as possible, I think it's a good idea to get a second opinion, so it's not uncommon at journals for an editor to consult with another editor with more specialized expertise before desk rejecting. So there are steps you can take, I think, to not abuse that power, but it is an important job, even though it's... Um, and then the third aspect of it is, um, helping to shape journal policy, um, both journal policy and journal practice. So things like selecting the associate editors, selecting the editorial board. Those affect what actually happens in practice, as well as setting policies, which of course also affects what happens in practice. And it's not that editors in chief have complete control over those decisions, but they get a say in them, obviously.
 
 

Benjamin James Kuper-Smith: I mean, probably more than any other kind of academic, right?
 
 

Simine Vazire: Yeah, it, it depends on the specific journal and the specific dynamics of the decision makers, [00:35:00] but yes, in principle, quite a bit of...
 
 

Benjamin James Kuper-Smith: I wanted to ask a little bit about, whilst you're mentioning it, desk rejections and that kind of stuff, because I was a little bit surprised that you mentioned that as a role of the editor in chief. Because, for example, before I started, uh, researching this, this interview, I didn't know that you were editor in chief of the journal I just published in, because I never, you know, I never heard from you, right?
 
 

I heard from, uh, an associate editor, or whatever, I can't remember what the precise title is. So just as a, like, behind-the-scenes thing: I mean, this probably will depend on the journal, but, um, is it usually that the editor in chief sees, I mean, they can't see every submission, right? Um, I assume that's what the...
 
 

Simine Vazire: Depends what you mean by 'see'. No, at many journals, the editor in chief sees every submission and then assigns it to a handling editor, an associate editor. At really high-volume journals, they often have a structure where there are also senior editors. So there's editor in chief, senior editors, and associate editors. And in that [00:36:00] case, they might split the job of, like, first pass between the editor in chief and senior editors. But even then, strictly speaking, it would first go to the editor in chief, and then they would select a senior editor. So they could still catch a manuscript that they decide
 
 

to desk reject at
 
 

Benjamin James Kuper-Smith: But I guess, like, the editor in chief of Scientific Reports or PLOS ONE, they're gonna have a full email inbox if they have to go through every one.
 
 

Simine Vazire: Yeah, yeah, yeah. Yeah. And even at Psych Science, I think, for the first time, I probably won't read every submission. But at the other journals that I've edited, I did. And, I mean, yeah, to different levels. So I read it enough to know either that I can't evaluate it, and then I assign it to someone more appropriate and tell them I didn't evaluate it for desk rejection. Or I read it more carefully, and I either decide that I don't think it should be desk rejected, and I pass it on, and that person could still desk reject it. Um, or I desk reject it. So at the two journals that I've been editor in chief of so far, I did a lot of the desk rejection, and I also assigned some [00:37:00] manuscripts to myself that went out for peer review. At Psych Science, I think I'll have to shift some of these practices, just because of the volume.
 
 

Benjamin James Kuper-Smith: Okay. Um, yeah, maybe let's, let's talk a little bit about Psych Science and what you want to do there. I mean, so maybe to link it to what we said earlier, journals as brands. What exactly is the brand of Psych Science? Is it gonna stay exactly the same when you're there? Do you want to shift that a little bit, or focus it, or, I don't know.
 
 

Simine Vazire: Yeah, that's a good question. I mean, I think there's a lot that is fixed. So it's selective, that's not going to change. Um, the acceptance rate is quite low. It's short reports, I'm not going to change that. It's psychology-wide, that's not going to change. Uh, so, being selective, that's often not articulated much beyond just 'the best articles', right?
 
 

If you're a selective journal, you're selecting the best. I think that has some value, but I think we should be pluralistic about what makes a good article. So there's not one way to be exceptional. I think, um, submissions, manuscripts can be exceptional on a variety of different dimensions. [00:38:00] And some, I think, are underappreciated.
 
 

I would like to, to kind of raise the profile of some kinds of research that I think haven't been super well represented at Psych Science. So some basic descriptive research, some using mixed methods, qualitative and quantitative methods, and then some subdisciplines of psychology. And, who knows, you know, I'm about to find out whether that's because we don't get submissions in those areas, or because, yeah, the reviewers or editors don't value them.
 
 

Um,  
 
 

Benjamin James Kuper-Smith: But I guess that's part of the brand also, right? To say like, hey, we are open to these things. We haven't had that many submissions. 
 
 

Simine Vazire: Yeah, so I'd like to communicate that submissions should be exceptional on at least one dimension, but we're open, we're pluralistic about that. There's not only one way to be an exceptional paper. It doesn't have to be groundbreaking. It doesn't have to be theoretically sophisticated. It doesn't have to have a mediation or causal, um, mechanism that's fully articulated, or whatever. And in fact, one thing I would [00:39:00] really like to keep pushing for, and it's not to say that previous editors haven't, but I want to maybe go a bit further, is to say that you should not claim to be exceptional on all the dimensions. That's just not possible. And to the extent that authors overclaim about being exceptional on too many things, that's going to be penalized, not rewarded. So I would like to really move towards: yes, the papers have to be exceptional on some dimension, and they should be calibrated and accurate in their presentation of what's exceptional about the paper and what's not. So, what trade-offs did you have to make to get that amazing sample that is underrepresented in the literature and really sheds new light on the phenomenon through the sample and the population it represents?
 
 

That's great. Probably you had to make some trade-offs to achieve that. Maybe you didn't use the best measure, because you had to use something that could be administered on the internet, or, you know, who knows. But being upfront about those trade-offs, so showing the value of the thing you [00:40:00] did, that you prioritized, that's really exceptional, while acknowledging, and fully incorporating into your conclusions, the compromises that you had to make in your design. And sure, there are a few rare studies that are really exceptional on multiple dimensions. I don't think there are any that are exceptional on all; it's just not possible to do everything perfectly all the time. So, I think, just more honesty, more calibration. But of course it's going to be hard.
 
 

I know we're sending authors mixed messages if we're saying we have a very low acceptance rate, so you have to really be the best, but you shouldn't overclaim,
 
 

right? So it's going to be tricky to navigate that, and part of it is just going to be through our own practices: demonstrating that we're serious about this, that we are going to reward papers that are well calibrated if they are also very strong on some dimensions, and that we're not going to be tolerant of papers that exaggerate and misrepresent things.
 
 

Benjamin James Kuper-Smith: I was just wondering, when you said that, about [00:41:00] something that I have encountered with journals. I mean, you know, I just kind of finished my PhD, so I'm still, you know, far less experienced than many other people. But one thing that occurred to me is that, like, it's often quite difficult to know what a journal exactly wants, because these kinds of mission statements or whatever can often be fairly vague.
 
 

Um, and/or very long. And then you have submission guidelines and these things, and I feel like at some point, it takes a lot of time to submit to a particular journal, even though where it is published in a sense doesn't really matter much, um, because, you know, you already did the research, in that sense. And I'm just curious, is this a trade-off where it's either have a very long mission statement that's very precise, or a short, vague one? Or, I don't know, like, how do you... I guess this is more like, um, a general problem in a sense with, uh,
 
 

Simine Vazire: Yeah. Well, I think [00:42:00] another problem, which causes the problem you're talking about, is that a journal can say everything it wants, or an editor in chief can write an editorial saying whatever they want. I mean, not whatever they want. But if the action editors, the handling editors, so the associate editors, senior editors, don't carry that out, or have a different understanding of it, then there will be inconsistency within the journal.
 
 

And it's almost unavoidable that there's going to be some variability in how individual editors carry out the same kind of abstract principles. Or even in some cases, if you have concrete guidelines, they might not get applied super consistently. So one thing I think journals should do better, and there's almost no training or really effort towards this in my experience, is trying to get teams of editors for a journal to really communicate better, to get on the same page about things. If they disagree, and they're planning to implement things differently, to talk about that, and try to then say, okay, well, then our public-facing communication about this should [00:43:00] reflect that, that nuance or that complexity. So, I'm going to try very hard to create opportunities for the team of editors to communicate better than I... Um, and I think the technology is better now. So things like Slack, like, we should have a Slack for the editors of a journal, so that we can have discussions about, hey, the editorial says we're going to do this.
 
 

This is how I'm interpreting it, is that how you're interpreting it? Or, in this particular case, I'm going to do this because of what it says in the submission guidelines, or whatever. Is that how other people have been implementing that rule? So I think that will help. Um, there will always be some of that, though.
 
 

And I think that's super frustrating for authors, right? That depending on which editor you get, you might have a different experience. And that's just reality. I mean, unfortunately, there's no way to completely eliminate that.
 
 

Benjamin James Kuper-Smith: Yeah, I mean, it's still people making decisions,  
 
 

Simine Vazire: Yeah. Yeah, and no one would agree to be an editor if you had to [00:44:00] suspend all subjective judgment, right? You're chosen in part because people, somebody thought you had decent judgment. Um, it would be weird to then ask those editors to just behave like robots. And 
 
 

Benjamin James Kuper-Smith: Yeah, yeah, I  
 
 

Simine Vazire: bad for peer review. 
 
 

Benjamin James Kuper-Smith: Yeah, I guess then it becomes very easy to game something if there are, like, you know, just strict criteria; then, yeah, you just try and get through those and you're done. Um, yeah, I mean, maybe, uh, I mean, you've published your vision statement about what you want to do at Psych Science. And by the way, one thing that's surprising about it is how, I mean, in a way it shouldn't be a surprise, but, like, it's easy to read.
 
 

And, like, straightforward. I don't know, I mean, I guess that's probably why it worked and that kind of stuff. But I was kind of surprised when I read through it to go, like, oh yeah, okay, okay, I like it. It's nine pages, but it actually didn't take that long to read. Yeah, I mean, some of the changes you want to make, I mean, I think there was one quote in the vision statement that, um, [00:45:00] I was not surprised by, but, um, especially as I'd read the interview that came out in Science first, where it said you're sometimes described as a revolutionary.
 
 

And then in the vision statement, you say, on openness and transparency, Psychological Science should continue leading in incremental steps. I was like, that's not the revolution I was promised. Uh, I thought there would be some burning buildings or something. But yeah, what do you, what are some changes you want to make?
 
 

That aren't already there. 
 
 

Simine Vazire: Yeah, well, specifically on transparency and openness, or more...?
 
 

Benjamin James Kuper-Smith: Uh, kind of, maybe those two first, and then... 
 
 

Simine Vazire: Yeah. I mean, the reason I said that about transparency and openness is that Psych Science is actually quite progressive on that dimension, thanks in part to the previous editors, especially Eric Eich and Steve Lindsay. And Patricia Bauer announced in her most recent editorial that Psych Science is moving to TOP Level 2, which is requiring data, code, and materials for all [00:46:00] papers, unless there are ethical barriers to it. So that was already decided. So, from my perspective, it's like, okay, if Psych Science is already committed to doing that, those are the big things. There's not a lot more I want to do on transparency right now. I mean, you know, it's a four-year term, so there will probably be things. The main thing where I think transparency could be improved at Psych Science, and at any journal, is, uh, making the peer review process more transparent for articles that are published. Um, and I think this is an area where it's kind of hypocritical that journals ask more and more transparency of authors, but they're not being transparent themselves, to the degree they could be. And there are some real constraints on transparency. Like, I do think that reviewers' identities should be protected.
 
 

If they don't want to sign their reviews, we should be careful not to reveal their identities, and so on. So I'm not saying everything should be transparent all the time. Of course, there need to be considerations of other values besides transparency. But the content of the reviews and the decision letter, I think [00:47:00] there really is a big benefit to sharing that. Um, both for individual papers, where people might want to dig in and see what was raised during peer review, how thorough the peer review process was for this paper, et cetera. But also at a more systems level, like, for accountability for the journal and for editors: if there are patterns of systematic bias across manuscripts, um, then having that review history be public will help people detect any, like, systematic patterns. So, that's one change I'd like to make on transparency and openness. But honestly, I do think Psych Science, especially given its commitment to moving to TOP Level 2, is already in pretty good shape on openness and transparency.
 
 

Benjamin James Kuper-Smith: Um, other changes? 
 
 

Simine Vazire: Other changes? I mean, one big one is the one we talked about already: really trying to reward calibrated claims, and convince authors that we're going to reward them. While maintaining, you know, this [00:48:00] very selective profile. Like, of course we're going to be picky, but if you misrepresent or overclaim, that's not going to help you here. So, again, I know that's walking a fine line for authors, and we'll have to figure out exactly how to implement it: making sure that authors are allowed to promote the positive aspects of their research and emphasize those, while still holding them to being accurate and accurately representing the limitations and the caveats. So that's a big change. Um, other changes, I should consult my notes; there are so many things going on in my head right now with Psych Science. I would like to do more with... so requiring data and code and so on is one thing, but actually checking it is another, and I think we're still a long way away from being able to
 
 

do that  
 
 

Benjamin James Kuper-Smith: that that it's there, or that it's correct, or... 
 
 

Simine Vazire: both. 
 
 

So anything, really. Like, checking that it's there... it depends what you mean by that. There's already [00:49:00] a process for checking that there's something there. But checking that it's the right thing, checking that it's usable, that it's interpretable, and then checking that it actually leads to the statistical results reported in the paper, you know, computational reproducibility. I would love to move towards checking that, but I think we need to get there in steps. So initially it might be, like, end-of-year audits of a randomly selected subset of articles. Basically that would reflect more on us as a journal than on the individual papers, asking: how are we doing? For the typical paper in Psych Science this year, is the data usable, are the results reproducible, et cetera?
 
 

And then eventually moving to doing that before the paper is published: checking all of that and making sure it's at the level where it's useful to other people and computationally reproducible. I think we're a ways from being able to do that at scale, and we'd have to be able to do it quickly so that we're not holding up the publication of the article, and so on.
 
 

But ultimately, I'd like to move there. And actually, I [00:50:00] think political science and econ journals are ahead of psychology journals on that dimension. So I'd like to learn from other journals how they're doing that. Also, more generally, just encouraging post-publication critiques of Psych Science papers.
 
 

So one part is trying to do audits ourselves, or get a third party to do audits, to see how we're doing on these kinds of reproducibility issues. But another would be to figure out the best avenue for encouraging corrections, critiques, et cetera, of published papers. So that while we are trying to select the best papers on some dimensions, um, we understand that it's very hard to predict what's going to remain solid and robust and important and all of the dimensions we're trying to select on in the long run, and peer review should not end when something is published in Psych Science. Um, so trying to participate in and encourage post-publication critiques and [00:51:00] reviews. I don't know exactly what we'll do
 
 

Benjamin James Kuper-Smith: Yeah, I was about to ask, because it also feels like there's not much of a general culture around that. I mean, of course, people talk about it, right, and people have opinions they'll share. But, for example, even if you had, let's say, a comment function, something like that underneath papers,
 
 

I don't think most people would even look for that, right? I mean, I know some journals have it, I  
 
 

Simine Vazire: I mean, there are,  
 
 

Benjamin James Kuper-Smith: you know, 
 
 

Simine Vazire: there are two submission types at Psych Science currently, commentaries and letters to the editor, where authors can submit critiques of published papers. They have slightly different criteria. Um, I don't know if they're working as well as they could, and to the extent they're not, I don't know where the block is in the process. So I want to look into all that. But honestly, this is one of the areas where, when you start getting into the nitty-gritty of what the ideal system is, what the criteria should be, it gets really complicated really fast. But even just starting those conversations and trying out different things, and maybe we'll get it wrong [00:52:00] and we'll try a different way or something like that, but thinking through those issues.
 
 

Benjamin James Kuper-Smith: On a kind of broader perspective, um, I'm curious: it's kind of a funny question, but do you enjoy that kind of stuff? Because, to me, it seems very different from why most people get into science, you know, these kinds of structural changes and creating a system through which things can go the way you want them to.
 
 

To me, it's quite different from many people going, like, oh, I'm interested in that question, or tinkering with things, that kind of thing.
 
 

Simine Vazire: Yeah, I think partly I go through phases where I'm interested in different things. So there were probably times when I wanted to spend all of my time on my own research, but then I got tired of that, and now I'm in a phase where I like doing something different. Maybe I'll go back to the other phase.
 
 

I mean, I still like doing research, but I don't want to spend all my time doing it anymore. Um, but I think the other thing is, it really depends. Like, sometimes it's super frustrating to try to change the [00:53:00] system, and a lot of the time I would say it's really frustrating. But sometimes there is low-hanging fruit.
 
 

There are really simple changes you can make that seem to make a decent-sized difference. I don't want to say a big difference; that doesn't happen very often. But, so, another change I'd like to try to make at Psych Science, I'm still thinking it through, is to increase the inclusivity of the journal, especially towards regions of the world and populations of authors, and reviewers for that matter, that are typically underrepresented. Some of that journal editors aren't really in a position to fix, but some of it we can help fix. So for example, I'm surprised at how often I talk to people and they have misconceptions about how journals work or how peer review works. And then I realize, well, yeah, how would they have learned? Even people in well-resourced parts of the world, who are well connected and well represented at journals, often have misconceptions.
 
 

But then if you talk [00:54:00] about people who have relatively little access to insiders, how would they know? So one thing I'm planning to do is hold kind of drop-in Zoom office hours in different time zones, at times that would be convenient for people in different parts of the world, where people can just come and ask questions like, do I need a cover letter, or whatever.
 
 

Actually, one of my hopes is to get rid of cover letters. 
 
 

So that's one kind of misconception: some people spend a lot of time on cover letters, and then nobody reads them. At most, you know, someone might skim it. So things like that, I think, kind of demystifying and making explicit some things where usually you have to know the right person to learn the ins and outs. Or things like having an official appeals policy, so that authors can see what kinds of appeals we consider, even just saying that appeals are an option.
 
 

Some people don't know that. But also specifying what will make for a successful appeal and what appeals we won't consider at all. Um, so all kinds of stuff like that, to try to reduce the inequalities that come [00:55:00] from differences in access and differences in representation at journals. Those kinds of things, if they're effective, are rewarding, because you think, well, why isn't everyone doing this?
 
 

And you start doing it and it kind of works. And that's nice when you find something like that. 
 
 

Benjamin James Kuper-Smith: Yeah. I mean, I think you also wrote about, and this I actually found a little bit surprising, um, an idea to provide free, um, not copy editing, but something like that for people who don't speak English natively. Yeah, I mean, maybe you
 
 

Simine Vazire: I don't think being a native English speaker is the relevant criterion because I'm not a native English speaker and I would be offended if someone told me I should get a native English speaker to read my paper. 
 
 

Benjamin James Kuper-Smith: Well, it'd be voluntary, I don't think you'd force it on anyone. Yeah,
 
 

Simine Vazire: Yeah, yeah, yeah. Um, so I'm still trying to find the right mechanism for that. But part of the reason I think it's really important is that, as an editor, I've seen manuscripts come in where the research itself is really high quality, but the way the research is [00:56:00] communicated, the writing, the structure, is quite difficult.
 
 

And so it's hard to ask reviewers to do the work of getting past the presentation and evaluating the research. And so, for those papers, I think it would be beneficial for reviewers, but also for authors, hopefully, if there were resources that would help with just the superficial presentation, so that it would be easier for reviewers to evaluate. So I'm looking for kind of third-party resources we could pair with, where the editors do the job of a first-pass evaluation, like, there's something really worth considering here, but it's really hard to evaluate, or hard to ask volunteers to spend their time evaluating. And then we pair with an organization that could provide help with the writing and the structure and so on. But it would have to be reserved for authors who don't have access to that kind of help from within the author team. So basically, if there was [00:57:00] someone on the author team who clearly has a lot of experience publishing in English-language journals of this type, then I wouldn't want... And I wouldn't want people to submit thinking, oh yeah, well, we'll polish
 
 

the  
 
 

Benjamin James Kuper-Smith: exactly, exactly. 
 
 

Simine Vazire: So it would be a service for authors who, for structural reasons, don't have access to that kind of support, or who haven't developed the English-language writing skills. Um, so I think there are a lot of steps involved and a lot of figuring out exactly what the right partnership is for that kind of thing. I'm looking into
 
 

Benjamin James Kuper-Smith: Yeah, that was one of those things where I thought, that seems like a really good idea that's going to be hard to implement without either offending lots of people or setting false hopes or all that kind of stuff.
 
 

Simine Vazire: Right, right, right. Yeah, I mean, I have a potential idea, but I don't want to say too much yet because I don't know if it'll work out. 
 
 

Benjamin James Kuper-Smith: Yeah, okay. 
 
 

Simine Vazire: I think it would require partnership with other organizations. I guess I can say that. 
 
 

Benjamin James Kuper-Smith: Wow. [00:58:00] Um, yeah, I mean, obviously you don't know all the things you're going to do. You're probably going to learn a lot about the journal in the first week or whatever of actually working there.
 
 

Simine Vazire: Yeah, right. Luckily, January 1 is in the middle of our summer, so it will be a nice, relatively slow time in my work life for the fire hose to be turned 
 
 

Benjamin James Kuper-Smith: Yeah, actually, a very basic question: how much time does an editor-in-chief spend on editing-in-chief? Is it,
 
 

Simine Vazire: It totally depends on the journal. So even, like, the difference between Collabra and the journal I edited before Collabra, SPPS: for SPPS I spent, I would say, a day and a half a week, probably. And for Collabra it's maybe half a day a week, or three quarters of a day a week. Collabra gets a lot fewer submissions than SPPS did.
 
 

Maybe I spent more time doing SPPS and might have repressed those memories. 
 
 

Benjamin James Kuper-Smith: But I assume for Psych Science it's quite a lot, I would imagine,
 
 

Simine Vazire: I think so, [00:59:00] yeah.  
 
 

Benjamin James Kuper-Smith: sorry, are these positions paid? 
 
 

Simine Vazire: You do get a stipend. Not at Collabra; at Collabra no one's paid for that editorial role. But journals that have a subscription, that are paywalled, usually pay stipends to editors, from a few thousand dollars up to a few tens of thousands.
 
 

Benjamin James Kuper-Smith: Yeah, I mean, it sounds like a huge amount of work, especially if you do it for several years,
 
 

Simine Vazire: Yeah, the stipend isn't proportional to the amount of work, but it does help. Often it'll cover, like, a research assistant who helps keep your lab running, or, you know, something that helps offset some of the time.
 
 

Benjamin James Kuper-Smith: In some sense, I think we might've been speaking about my next question already. Um, but you wrote, I think this was in the vision statement, about the hidden curriculum, of publishing in particular. I quite like the term. I'm curious whether you could expand a little bit on what that [01:00:00] is.
 
 

Um, again, we've kind of, I think we talked about it to some extent, but 
 
 

Simine Vazire: Yeah. So it's things like the fact that some authors appeal when their paper is rejected. Some people don't know that, because they don't know anyone who would ever do that. Or, yeah, I think there's a lot of stuff that is part of the culture of whatever lab or department you learned from and are in now. And because those cultures vary locally quite a bit, there are just very different ideas about what you should do as an author or as a reviewer during the peer review process. Or, if something happens, is it okay to email a journal or an editor and say, hey, my paper's been sitting there for nine months, what's going on? Um, so a lot of that, I think, is stuff there's very little explicit training on, and people just learn from the people around them. And so there are huge inequities in what people learn. Sometimes they get taught explicitly different things, or just the [01:01:00] fact that people have an opportunity to learn these things at all can be very uneven.
 
 

Benjamin James Kuper-Smith: I mean, I feel like to some extent things like this podcast maybe help a little bit with that kind of stuff, um, by just asking a bunch of random questions that kind of explore the edges of certain things. But it seems to me a lot of these things are quite nuanced and specific, and probably sometimes take a long time to explain; it's not easily communicated.
 
 

Um, so what are some ways of helping with that in general, but, um, maybe also as an editor?
 
 

Simine Vazire: Yeah. I mean, I think there are some things that journals could make policies about. Appeals, I think, is a good example. Another example is editors publishing in their own journals. I think this is something where it's very unclear whether, or to what extent, it's acceptable, and as a matter of practice in psychology, at least at the [01:02:00] journals that I'm familiar with, it's almost never disclosed as a conflict of interest.
 
 

If you see an article where one of the authors is also an editor at that journal, and then you look at where the conflict of interest would be declared, there's usually nothing there. So, having a policy that that should at least be disclosed and declared as a potential conflict of interest, so that readers know. I mean, that's a very niche example, but I think there are some things journals can have policies about, so that people even know it's a thing and have some idea what to expect. Otherwise, yeah, I think a lot of informal conversations. So, like, I'm on a lot of editor panels and things like that at conferences or on Zoom. I think trying to do those things beyond the usual conferences, so trying to go to regional conferences in different parts of the world, or even different parts of the country, where people might have less access to journal editors and things like that. But yeah, it's not gonna make a huge difference if it's, you know, one conference at a time [01:03:00] or one web seminar at a time, but it's still something. Um, and then to the extent that those things can be recorded or transcribed, and that's easier for other people to access, that helps. And then trying to write those things down.
 
 

So for Collabra, for example, I made a kind of guide to editing that I shared publicly. These were more my informal advice to editors at Collabra; I wanted authors to know what we're saying, how we're communicating informally about what you should typically do in these situations. They're not policies, editors aren't bound by them, but it's trying to make transparent how we're operating, and our general values and norms in our editorial team. So I think there's a lot we can do. I mean, you're right, in the end there are always going to be some people the information doesn't reach, and I think we'll always have to keep making more effort to find those gaps, but we can do a lot
 
 

better.  
 
 

Benjamin James Kuper-Smith: And it's obviously not just your role to solve that particular [01:04:00] problem. It's kind of a collective thing. Yes, it's funny. So, when I, you know, initially contacted you to do this interview, um, I think you'd just started the GoFundMe for the Data Colada lawsuit, in the Gino case.
 
 

Um, now it's a few months later and quite a lot has changed on that front in particular. But maybe for context: we're recording this mid-October, and God knows what's gonna happen even between us recording this and me publishing it. But maybe briefly, I mean, you and quite a few other people created this GoFundMe to help with the legal fees.
 
 

Um, yeah, why? 
 
 

Simine Vazire: Yeah, um, so yeah, it wasn't just me, it was a group of us that did it together. A couple of reasons. One is the obvious reason: that Data Colada were going to be in a [01:05:00] financially precarious position, and we wanted to help with that. But I think an equally big reason, if not bigger, was to measure and make public how much support there was for these kinds of activities.
 
 

And it was interesting. Like, on GoFundMe you have to set a target, and when we were talking about where to set the target, we disagreed by orders of magnitude, and I was completely wrong. Like, I thought $25,000, $50,000 would have been reasonable, um, so I was maybe a bit too cynical, which is a good lesson for me. I was really blown away by how many people donated. So many of the donations were relatively small, which means we had many, many donations. And who donated! It's just such a diversity of people. Um, and the lack of backlash, as far as I saw. I mean, maybe people are whispering things behind our backs, but I was really pleasantly surprised to [01:06:00] see how unified a chorus there was in support of Data Colada, and I think without the fundraiser, we just couldn't have known.
 
 

Benjamin James Kuper-Smith: Didn't you see, I think on Twitter, there was a conspiracy theory that this was fake. Did you see that?
 
 

Simine Vazire: Yeah, yeah, yeah. That it's an AstroTurf thing that's actually funded by 
 
 

Benjamin James Kuper-Smith: Wait, I know these, like, I've gone through the list. Like, I know lots of these names. 
 
 

Simine Vazire: Yeah, I feel like you've made it when there are conspiracy theories about you. Like, yeah, if the fundraiser wasn't successful, there would be no conspiracy theories about
 
 

Benjamin James Kuper-Smith: Yeah. I mean, to be fair, this is a lot of money, right? Like, suddenly, you know, I can kind of understand how people go, wait a minute, these three people create a blog post and suddenly $300,000 or more, I think, by now, just appears. Um, but I mean, it's very obvious once you go through the names; there are also lots of famous names that most people would know.
 
 

Yeah. Um, but I kind of want to get a little bit into, [01:07:00] you know, not necessarily discussing this particular case, um, but the broader situation: what happens in similar situations, or how people should or could behave. Because I found it really difficult. I mean, maybe, what's the word, to be transparent:
 
 

I did donate some money to that, but I really went back and forth on it, because I also felt like, ah, it's kind of really weird that, you know, you have one person who did or didn't do something, and then she's kind of attacking three people, and now suddenly this entire field is, you know, going against that.
 
 

Uh, I don't like that kind of thing at all.  
 
 

Simine Vazire: It felt like piling on, you 
 
 

Benjamin James Kuper-Smith: Not exactly like piling on, but it's, um, it's just completely asymmetrical, maybe that's the word. Um, just in the sense that, I mean, I guess I'm just very hesitant to call something... fraud, anything like that, right? And I mean, she's released [01:08:00] a website, I'll link to it in the description, where she says explicitly: I did not commit fraud.
 
 

She said it a bit better. But yeah, I find it very tricky.
 
 

Simine Vazire: I doubt... you know, of the several thousand people who donated, there's no way all of them think that Gino committed fraud. To me, what I interpret a donation as meaning is that the work Data Colada did was valuable and is not defamatory, and we don't want to be in a community, a scientific community, where you can't share these kinds of
 
 

Benjamin James Kuper-Smith: Yeah, that's exactly why I did it. Yeah, 
 
 

Simine Vazire: So, I think even if you draw a different conclusion from the evidence that they raised, you could find the evidence, and the effort that went into it, really valuable. And we should also mention that it wasn't just Data Colada, right? There were anonymous whistleblowers who shared findings of their own with Data Colada that went into those blog [01:09:00] posts. So, I mean, those people aren't being sued and they're not the beneficiaries of the fundraiser, but I think the support we're showing is also support for them and their work.
 
 

Benjamin James Kuper-Smith: I mean, for me, it was just: I don't want to be in a scientific world in which you publish something and then get sued for it. Um, but yeah, it's kind of difficult to discuss this in general without going into the specifics. But I mean, did those three people handle it correctly, the way they did it?
 
 

For example, I think on the website, Gino said, like, they didn't contact her beforehand, um, even though they say they do. I don't know.
 
 

Simine Vazire: Uh, I don't. I didn't know that. I don't know what they claim or what she 
 
 

Benjamin James Kuper-Smith: Yeah. I mean, sorry, briefly: the one thing I read is that they said, like, we always contact the people beforehand, and then she said, no, they didn't.
 
 

Simine Vazire: Uh, 
 
 

Benjamin James Kuper-Smith: But. 
 
 

Simine Vazire: they don't say they always do, I don't think. I was under the impression that they don't always do it; I don't remember reading that they always do. Um, and I don't think people always should. I don't think it should be a rule; I think there are lots of good reasons why, in any particular case, you might not. Yeah, I mean, personally, from what I know, and I don't know much more than what's public in the blog posts, I think all of it was above board and was [01:10:00] professional and responsible. I mean, I do know them personally, and so that probably influences my prior about how careful they were. Um, but no one has pointed out something to me that they did or wrote where I'm like, oh yeah, they probably shouldn't have done that. And I also think we should be really careful with that counterfactual. So one thing that's interesting to me about this situation and the fundraiser is that I think four or five years ago, there would not have been as much support for these error-detection kinds of activities. And I remember when Data Colada had posted a blog post about power posing, and then the New York Times Magazine ran a [01:11:00] story about Amy Cuddy and her experiences with criticism. And there was a particular anecdote where the reporter who wrote the story was interviewing Joe Simmons and pointed out, you know: Joe, you went to grad school with Amy. Don't you think that maybe you should have picked up the phone and just called her when you were looking into the results and so on? And in the New York Times Magazine story, Joe, like, thought about it and said, yeah, maybe I should have. And then people on social media were pointing this out and saying, see, even Joe thinks he should have done things differently and was too harsh, or whatever. I think that's crazy. Can you imagine getting a phone call, and on the other end of the line someone is saying, I have questions about the integrity of your results, or the robustness of your results, or whatever it was? I don't want a phone call. Absolutely not. Put it in an email and let me read it without having to react on the spot, and so on. I mean, maybe I'm wrong. Maybe [01:12:00] most people would prefer a phone call. I just don't think it's obvious. And engaging in this counterfactual, like, don't you think they should have done this differently? I think we should be skeptical of our own intuitions about that. Like, is there really a world where we wouldn't be saying,
 
 

oh, but don't you think they should have done this, and they should have done that, whatever was the opposite of what they did? Because the reality is, when somebody feels attacked and feels hurt, we obviously tend to wonder if the person who hurt them did something wrong. That's totally normal.
 
 

And to look for ways they could have done it without hurting them. But this kind of criticism hurts, and there's just no way around it. And if we're saying they should have done it in a way that didn't hurt, we're saying they shouldn't have done it at all. And absolutely, I think people shouldn't pile on more hurt.
 
 

And I think they should go out of their way to be professional about it and be careful, be really responsible. But very often, even when they do that, and I would say Data Colada is an example of that, they still get criticized for not doing [01:13:00] enough to kind of sugarcoat it, or whatever. But I was really pleasantly surprised that in this case we didn't see a ton of discourse around that. It wasn't so much about, like, what could Data Colada have done differently, or... I don't
 
 

Benjamin James Kuper-Smith: Yeah, I mean, I think the really surprising aspect to me is just how big this whole thing became, right? I mean,
 
 

Simine Vazire: Yeah, well, it's the Streisand effect, isn't it? I mean, actually, in this case it's not the Streisand effect, because Gino is not shooting herself in the foot by suing them for 25 million dollars and drawing more attention to herself. It's going to deter future people from raising these kinds of issues, so that isn't shooting herself in the foot.
 
 

It's in some ways protecting future cases of bad research. But, um, I don't think it would have gotten this big if she
 
 

Benjamin James Kuper-Smith: Yeah, yeah.  
 
 

Simine Vazire: them for 25  
 
 

Benjamin James Kuper-Smith: Yeah, I think it had already gotten bigger than I expected before the lawsuit, and then that just really, you know, got
 
 

Simine Vazire: Yeah. Well, also, when you [01:14:00] have potential misconduct in
 
 

Benjamin James Kuper-Smith: Yeah, yeah, it was, yeah. 
 
 

Simine Vazire: research about ethical and unethical conduct, that's just too juicy. And you have two researchers who study dishonesty, and questions raised about the integrity of their data.
 
 

Benjamin James Kuper-Smith: But it's, yeah, it's kind of weird, though, how this case again shows that Dan Ariely, so far, did the smart thing by just shutting up, basically.
 
 

Simine Vazire: I know. I was gonna say, we're talking so much about Gino, why aren't we talking about Dan Ariely? And I don't know, again, in both cases, I'm reluctant to say anything about what I think the source of the problems in the data is, but there are clearly very serious problems in the data that need to be answered, and he's doing a very good job of shifting attention away from that.
 
 

Benjamin James Kuper-Smith: Yeah. Yeah, unfortunately, the classic move of just sitting there and letting it pass by seems to work quite well in these
 
 

Simine Vazire: And announcing your new book that just 
 
 

Benjamin James Kuper-Smith: Yeah, [01:15:00] exactly. Yeah. 
 
 

Simine Vazire: It's amazing. 
 
 

Benjamin James Kuper-Smith: Yeah. Um, anyway, so, recurring questions: a book or paper you think more people should read?
 
 

Simine Vazire: A book that comes to mind, just because I read it fairly recently, is called Plastic Fantastic. It's about a case of fraud in physics, written by Eugenie Reich, a science journalist who wrote a lot about whistleblowers and fraud and things like that, and who has since become a lawyer representing whistleblowers. So she has a really interesting perspective on all of this, but the book is just a very well written story of a particular case of fraud in physics. And lately there have been a lot of questions around questionable papers in physics claiming to find room-temperature superconductivity. That's not what this book is about, but it's kind of fascinating to watch all of that unfold on Twitter. [01:16:00] It's different; it's not fraud exactly. I think the questions are mostly not about fraud, but around hype, and also, apparently, the same claim keeps getting made and then retracted, and then either the same lab or another lab says, no, now we really did find room-temperature superconductivity, and then we're like, what? Come on, be more skeptical. It's really fascinating to watch. Anyway, it's fun to read about these things in a completely different field, and especially a field that we think of as so much more rigorous or scientific or whatever. And maybe they are, I don't know, but this particular story at least shows some of the
 
 

Benjamin James Kuper-Smith: Well, what case is that though? Plastic Fantastic? 
 
 

Simine Vazire: I don't even know how to describe it because I don't understand the physics well enough. Something to do with really small things, but I don't remember even what they're supposed 
 
 

Benjamin James Kuper-Smith: Oh, so it sounds like it's about materials science, but okay.
 
 

Simine Vazire: I think it's materials, uh, and in my defense it was at least like four months ago that I read the book, so it's not like I just read it, but [01:17:00] even if it had been three weeks I already would have forgotten, so that's not really 
 
 

Benjamin James Kuper-Smith: Yeah. Okay. But it's worth reading for people interested
 
 

Simine Vazire: yeah, it's good.  
 
 

Benjamin James Kuper-Smith: research integrity and that kind of stuff. Okay. Uh, something you wish you'd learned sooner. 
 
 

Simine Vazire: Something I wish I'd learned sooner is that nobody really knows what they're doing. I mean, it's related to the whole imposter syndrome thing, but the solution to imposter syndrome is often framed as all these positive messages about how you're so much more competent than you think you are, you shouldn't have imposter syndrome because you're great. And I think actually, to me, the answer is that you shouldn't have imposter syndrome because everyone else is just figuring it out as they go along too. It took me a while to realize that, and I still find myself naively assuming that people know a lot more and understand things a lot better than they do. And then when you see the inside of things, you're like, oh, we're all just fumbling along.
 
 

Benjamin James Kuper-Smith: Yeah. 
 
 

Simine Vazire: Or people can be really, really smart about one [01:18:00] thing, and then really not about other 
 
 

Benjamin James Kuper-Smith: That's always the most fun part. 
 
 

Simine Vazire: It's mind-blowing.
 
 

Benjamin James Kuper-Smith: I've definitely been in situations where someone is actually really smart about something, and then they just don't get the most basic thing about something I happen to know. You're like, wait a minute, why don't they... and it almost feels like it's just not real, a
 
 

sense of, like, wait a minute, are they making fun of me?
 
 

Simine Vazire: I still have this problem, like, talking to people in other fields. Like, when I talk to people in econ, I think, oh, I'm gonna look so dumb, they know so much I don't know, especially about stats and stuff like that. But then you talk, and... I don't know. I mean, I do think, overall, I'm not nearly as smart as an econ person.
 
 

But I think there are gaps in their knowledge, too, and it's a little bit comforting when I find those.
 
 

Benjamin James Kuper-Smith: I see. Advice for PhD students or postdocs. Um, I used to preface this with the fact that I just finished, but because I'm doing lots of interviews just after finishing, I
 
 

Simine Vazire: Oh, 
 
 

Benjamin James Kuper-Smith: don't want to milk that statement all the time. Um, 
 
 

Simine Vazire: [01:19:00] Yeah, one piece of advice I remember hearing from Brian Nosek, and I think the quote is attributed to Mark Twain: when in doubt, do the right thing. And I think it's really good advice, because a lot of times there might be very clear strategic reasons to make compromises to what you might ideally want to do. Like, maybe ideally you would submit only to this kind of journal, but for career purposes you have to also submit to other journals, or whatever. And I think it's fine to be strategic when you decide to be. But there are going to be a lot of situations where you really don't know what's going to be beneficial or strategic. And so that default of, well, when I don't have other reasons to choose path A or path B, then at least in those situations I should choose based on what's right, I think that can be a really good guide, at least a basic foundation you can't sink below. The other, related thing I would say is: think ahead of time about what the deal breakers are for you, and where your own personal limits are in terms of [01:20:00] compromises and integrity. And I mean integrity not just in the narrow sense of not faking your
 
 

data or not engaging in QRPs, but also in how you treat people. Like, are you willing to become one of those people who says they'll do something but then never does it? Or are you willing to lose contact with your students to the point where there are, like, three levels of people between you and them?
 
 

You know, what kinds of practices do you look at now and say, I don't ever want to become that person? And then try to remember those and try to hold that line.
 
 

Benjamin James Kuper-Smith: yeah. I mean, going back to the first part, I guess there is so much decision making with very limited information.  
 
 

Simine Vazire: Yeah,  
 
 

Benjamin James Kuper-Smith: But is it, is it when in doubt do the right thing? Is that because, you know, even if it goes horribly wrong, you did what you thought was right at the time, so you can live with the decision? 
 
 

Or is it also because, you know, your intuition about these things might just also be more correct? 
 
 

Simine Vazire: I always thought about it as the first one, but the second one... might have some validity to it. So maybe the morally right thing is also correlated with the strategic thing? Is that the idea? That, like,
 
 

Benjamin James Kuper-Smith: Yeah, maybe. 
 
 

Simine Vazire: about what's morally right might actually be beneficial for other reasons, too? I mean, in the long run, I'd like to believe that we live in a world where the morally right thing is correlated with good outcomes, but only in the long... I don't have too much optimism 
 
 

about that  
 
 

Benjamin James Kuper-Smith: And on average, not for every case.  
 
 

Simine Vazire: Yeah. 
 
 

Yeah. Yeah. 
 
 

Benjamin James Kuper-Smith: Okay. Uh, I think those are my questions. Um, thank you very much. 
 
 

Simine Vazire: Thank you.