BJKS Podcast

84. Brian Nosek: Improving science, the past & future of the Center for Open Science, and failure in science

December 08, 2023

Brian Nosek is a professor of psychology at the University of Virginia, and Co-founder and Executive Director of the Center for Open Science. In this conversation, we discuss the Center for Open Science, Brian's early interest in improving science, how COS got started, what Brian would like to do in the future, and how to figure out whether ideas are working.

BJKS Podcast is a podcast about neuroscience, psychology, and anything vaguely related, hosted by Benjamin James Kuper-Smith.

Support the show: https://geni.us/bjks-patreon

Timestamps
00:00: Brian's early interest in improving science
15:24: How the Center for Open Science got funded (by John and Laura Arnold)
26:08: How long is COS financed into the future?
29:01: What if COS isn't benefitting science anymore?
35:42: Is Brian a scientist or an entrepreneur?
40:58: The future of the Center for Open Science
51:13: A book or paper more people should read
54:42: Something Brian wishes he'd learnt sooner
58:53: Advice for PhD students/postdocs


References & Links

Article about John Arnold: https://www.wired.com/2017/01/john-arnold-waging-war-on-bad-science/
Scientific virtues (including stupidity): https://slimemoldtimemold.com/2022/02/10/the-scientific-virtues/

Cohen (1994). The earth is round (p < .05). American Psychologist.
Greenwald (1975). Consequences of prejudice against the null hypothesis. Psychological Bulletin.
Greenwald, McGhee & Schwartz (1998). Measuring individual differences in implicit cognition: The Implicit Association Test. Journal of Personality and Social Psychology.
Hardwicke & Ioannidis (2018). Mapping the universe of registered reports. Nature Human Behaviour.
Meehl (1967). Theory-testing in psychology and physics: A methodological paradox. Philosophy of Science.
Nosek, Banaji & Greenwald (2002). Harvesting implicit group attitudes and beliefs from a demonstration web site. Group Dynamics: Theory, Research, and Practice.
Nosek & Bar-Anan (2012). Scientific utopia: I. Opening scientific communication. Psychological Inquiry.
Nosek, Spies & Motyl (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science.
Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science.
Rosenthal (1979). The file drawer problem and tolerance for null results. Psychological Bulletin.
Schwartz (2008). The importance of stupidity in scientific research. Journal of Cell Science.
Uhlmann, Ebersole, Chartier, Errington, Kidwell, Lai, McCarthy, Riegelman, Silberzahn & Nosek (2019). Scientific utopia III: Crowdsourcing science. Perspectives on Psychological Science.

[This is an automated transcript that contains many errors]

Benjamin James Kuper-Smith: [00:00:00] Yeah. I mean, as I mentioned, kind of before we started recording, I want to talk about the Center for Open Science, which this year has its 10th anniversary, uh, and, you know, all sorts of related open science reproducibility stuff, to put it very broadly. I'd like to kind of briefly take you from, let's say your PhD or something like that until kind of the Center for Open Science started, or you were starting to try and start it maybe. 
 
 

Especially I'm curious because it seems to me, when I looked at your Google Scholar and your publications, that it was fairly traditional science in that sense, where you had some topics you worked on, you published papers, and then from 2012 onwards, it was like a switch flipped, and there were suddenly lots of metascientific publications.
 
 

So I'm curious what happened in, let's say, the 2000s in your career, and what brought you to that point.
 
 

Brian Nosek: Yeah, for me, there's always been a deep interest in what is now called metascience, [00:01:00] which I just considered thinking about how science works and how we can do better science. I started graduate school in 1996, and in my second year took a research methods class with Alan Kazdin. As part of that course, we would read papers by Bob Rosenthal and Jacob Cohen and Paul Meehl and Tony Greenwald, and they had written these papers in the 1960s and 70s describing low power and its consequences for reducing the credibility of research, describing the ignoring of negative or null results. Tony's original paper is called Consequences of Prejudice Against the Null Hypothesis.
 
 

It's just a wonderful paper, about publication bias more generally and all the consequences it would have for the trustworthiness of the literature. [00:02:00] And, you know, here we are, late 90s, reading these papers from the 60s and 70s, feeling like, wow, these really are problems. This stuff is making it so we don't know what to trust in the literature. Tony's paper, the prejudice against the null hypothesis one, estimated, given his assumptions, nothing about p-hacking, just about publication bias, that 30 percent of the literature is false. Like, what? 30 percent? And no one had ever challenged it. And this is 20, 30 years later, and we're still talking about it.
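[A minimal sketch of the arithmetic behind an estimate like this, for readers who want it. This is not Greenwald's actual model; the inputs below, a power of 0.5 and 20% of tested hypotheses being true, are illustrative assumptions. If only statistically significant results get published, the fraction of published findings that are false is

\[ \frac{\alpha(1-\pi)}{\alpha(1-\pi) + (1-\beta)\pi} = \frac{0.05 \times 0.8}{0.05 \times 0.8 + 0.5 \times 0.2} = \frac{0.04}{0.14} \approx 29\% \]

where \(\alpha = 0.05\) is the significance threshold, \(1-\beta\) is statistical power, and \(\pi\) is the fraction of tested hypotheses that are true. Under those assumptions, publication bias alone, with no p-hacking at all, gets you to roughly 30 percent.]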
 
 

And so, a little bit longer here, but this really was for me like a change moment of coming into science with idealism, right? We just study questions. We figure out stuff. We share what we're finding. We argue about it with people. And then we eventually figure out what the truth is. And then discovering that the whole system of science is organized around publication, and how it is we advance our [00:03:00] careers, how we get a job, how we keep a job. There's all these things in that reward system that end up producing incredible science, not credible science. And here we are, all knowing that it happens, and nothing's changing. What's up with that? That's bananas. Why is that happening? That, at least, was the origin of getting interested in these problems.
 
 

I'm happy to dig into that, or just zoom on to: so then what?
 
 

Benjamin James Kuper-Smith: Yeah, I mean, let me think. I mean, I don't know what the details are of that initial part. I 
 
 

Brian Nosek: We did recognize these are system problems, right? That's part of why, you know, after each class, we'd all go to the local bar and be talking about it over drinks, like, this is crazy. How is it that it exists like this? But, oh my God, I can't change it.
 
 

I'm just, you know, a grad student. What am I going to do? [00:04:00] And we would talk about all the solutions that are things that we talk about today. And like, I remember conversations over drinks, you know, many times saying, wouldn't it be awesome if you could decide at the journal whether to publish something before anybody knows what the results are?
 
 

You just decide based on the methods. And this was not unique to my graduate student cohort; it was a conversation lots of people had. But of course, we're having this conversation in theory, right? We can't do any of that stuff. So we would have all of those discussions about all of the solutions that are prominent solutions today, discussions that I'm sure most graduate programs had for decades, but with a feeling of impotence. I didn't want to feel impotent, right? I love science. I got into science because I love science. And so what can I do as a grad student? For me, the accessible point of entry to [00:05:00] starting to change how methodology happens, for my work at least, was to build a website to collect data. And the reason that became something I felt like I could do is that I came with a computer engineering background, so I can build tools.
 
 

Oh, I can build solutions. And the same semester where I'm studying all these problems, the allocation of participant hours to do my research was 15 total hours from the participant pool. And, you know, I'd just finished reading an article by Jacob Cohen saying power is ridiculously low, and I got 15 hours of participants for the whole semester. Oh my God, what am I going to do? So clearly, we need a better way to get data. And so in 1998, right, the internet was still relatively new. There was lots of uncertainty about whether you can get any reliable data, but it felt like, oh, this is a way to do it. That we [00:06:00] might be able to double, you know, maybe I'll get 15 more people a month coming to my website to complete these studies.
 
 

That would be great. I'd double my sample size in a month. so, so pitch to my collaborators, Tony Rewald and Mazari Banaji, that we build a website to measure. Implicit bias. We have this new tool, the implicit association test that could help try to investigate things without having to ask people to self reflect on how they feel about different topics. And so maybe people would do it if you gave them a website to do it. part of that was, they, they were on board and the way that Tony Greenwald works is that. He said, yes. Okay, let's build the website. I've scheduled a press conference six weeks from now. What? Uh, what? You did what? So I didn't sleep for six weeks trying to get this thing done and it actually, you know, I'm on the way to the airport because he scheduled it in Seattle. 
 
 

I lived in [00:07:00] Connecticut, and, you know, the day before, it still had problems. And so I finally got it working, and on the way to the airport confirmed that the website was running, and went out to Seattle. And it was the press conference for introducing the IAT, the 1998 paper that he and two students from his lab published. I'm not an author on that. It came out with the website that made it possible to do that. So that was crazy and fun, but, you know, Mahzarin and I were on the airplane going over saying, how many people do you think will come to do tests on this website? You know, and we were saying,
 
 

Benjamin James Kuper-Smith: I mean, they're doing it for free, right? You're not paying them or anything like that.
 
 

Brian Nosek: no, no, this is just, yeah. 
 
 

An open website: you go, you click, and you say, I want to measure my implicit racial biases. I want to measure implicit associations about gender and science and math. And, you know, we thought, boy, our [00:08:00] outlandish estimate was, if we could get 50,000 people in the lifetime of the website, that would kind of double what would be a productive lab's throughput of participants over the lifetime of the lab. Wouldn't that be great? That happened in three days. So it was like an awakening of, oh my gosh, I can build a tool to try to address a problem, and it actually can work. You know, there was luck at work in many parts of this, but it can work to actually solve that problem. Power was no longer a problem there.
 
 

You know, I don't know what it is now, but more than 3,000,000 people a year come to the Project Implicit website completing tests. We solved our power problem. That's not a problem anymore for that area of research, at least for the people that are connected with using that data. It was just, like, oh, [00:09:00] okay, change is possible, and sometimes it just requires some idea for how to actually make it happen.
 
 

Benjamin James Kuper-Smith: Yeah, it's funny, as I was preparing this, I saw that you had some papers very early on, some of your first, that were about doing experiments or questionnaires online, and I thought, oh, maybe that's an early sign, but then I thought, ah, it's probably just a coincidence. So it was funny to hear you say that this actually came from a similar impetus.
 
 

Brian Nosek: It was, completely. And there's another example from very early on. There was a sense that researchers need a better methodological toolbox, but we don't have training in our courses for a lot of these things. And so I organized a summer school where researchers that wanted to teach a class on something could. Say I know how to use Excel for doing research really well.
 
 

Nobody should use Excel for doing research. But anyway, someone says, I want to teach a class on Excel. Another person says SAS, or, you know, tools that I don't think even existed at the time, but [00:10:00] different tools, different analytic techniques. Anybody could organize a class, and then anybody could sign up for that class.
 
 

And we had like 300 people across the university come and take classes from this. Just any grad student that wanted to offer a session could offer it, that sort of thing, to try to increase our methodological toolkits across the community. So the combination of those two things was sort of enlightening to me: if you create some opportunities for people to work collectively
 
 

on addressing a problem that they all agree is a problem, you actually get a long way with very little. We had no resources. There was no funding for this. I just set up a little web page for people to sign up for classes, I sent out an email saying anybody want to offer a class, and then I organized it into a schedule. Got some rooms. 300 people got trained. I mean, look at that. That's amazing. So those are early examples [00:11:00] of how do we do science better?
 
 

Benjamin James Kuper-Smith: Yeah, I guess you were probably starting to do science at just the right time. I mean, the website for the Implicit Association Test, I guess, was also pretty much free, more or less, I'm assuming. Um, and I'm assuming the press conference really helped with actually getting people to the website. But I guess, yes, you had two examples of basically relatively cheap ways of changing things.
 
 

Brian Nosek: Of getting 
 
 

Benjamin James Kuper-Smith: Yeah. 
 
 

Brian Nosek: Yeah. Yeah. Of course, they turned into things that weren't cheap. Like the website now: it requires a fair bit of resources to keep it running, because it got so big. But it was easy to start with something without a lot of resources. Mahzarin's NSF grant at the time provided some support, because as soon as it became a real thing, it was like, oh my God, I'm not a programmer anymore.
 
 

I had done programming, but, you know, this is not what I am. So I need help if this is going to be a real [00:12:00] thing. And so we ended up reallocating some grant dollars to get some real programming support instead of me trying to do it.
 
 

Benjamin James Kuper-Smith: But so the website, is that already a precursor to the Center for Open Science and the OSF, or is that a separate thing?
 
 

Brian Nosek: No, it ends up being that. So to follow the trail: I graduated with my PhD in 2002 and started at the University of Virginia as an assistant professor, and my first grant was from NIH, to support further development of the website, to make it a more robust platform for many researchers to use and to make the data more accessible. So 2003 is when I got that award, and for the next five years that was my primary grant. An interesting tidbit from that is that they said to cut the budget, as often happens, 10 or 15 percent or something like that. But what they [00:13:00] recommended that I cut out was the postdoc position whose job was going to be to make all the data publicly available for everybody. And my argument was like this:
 
 

We're generating tons of data, more than I or my lab could even use. And so the data itself is a valued asset that could spur other kinds of research. And they were sort of like, yeah, yeah, whatever, cut that out. Just focus on the building. So this lack of appreciation of data availability at that time was evident just in that small decision.
 
 

You know, who knows why they recommended that, but that was the recommendation. But that grant then made it possible for my lab to be a technical lab in addition to the substantive work. So I basically had a dual lab, a small software development shop and substantive research, and the symbiosis of the [00:14:00] research team with the software team really institutionalized that feeling of: we can build tools to make our research better, to have it be more robust, more effective, more rigorous, everything else, and then also potentially to share it. That grant needed to be renewed in 2007 or 2008, if I stretched the funds. I started applying for renewal grants at the time, and I had two different proposals that I was trying to get funded. One was to create what was called the International Participant Pool.
 
 

Benjamin James Kuper-Smith: Hmm. 
 
 

Brian Nosek: The idea was, this was pre-MTurk and other things, but the idea was that different labs could contribute samples that they had access to, and then gain credits to extract samples that they needed. So if I was a researcher in South Dakota and had very little access to a diverse pool [00:15:00] of Americans, because that's what I needed for my question, I could submit my South Dakotans and then extract a more diverse pool from that submission, or I could pay extra and get more samples. So I thought it was really cool.
 
 

Benjamin James Kuper-Smith: Yeah. 
 
 

Brian Nosek: But that didn't get funded, and then MTurk and others came along, which basically did the same stuff, but without requiring everybody to coordinate their samples across labs. The other was to create what was called at the time the Open Source Science Framework. And the basic argument was: there's tons of data that people are generating. It would be really productive to actually share that data, to make the findings that we have open to interrogation, to look at the data in other ways. Maybe we could get people to commit things in advance, what their plans are. And that couldn't get funded, with very polarized reviews, some reviewers saying this would change everything, other reviewers saying, but people don't like sharing data. Why would we fund a [00:16:00] platform to share
 
 

Benjamin James Kuper-Smith: To not share data. Yeah. 
 
 

Brian Nosek: Right? Yeah, exactly. Yeah, that's what we need. We need tools to make it harder to share. You know, so we were working on these problems and thinking about how we can build tools to make not just our own work, but research in general, more effective. But we hit barriers to getting the support.
 
 

Benjamin James Kuper-Smith: Yeah. I found a quote in an article, not about you, but it starts with you, in Wired. I'm sure you've read it. Um, I thought, I dunno, is it a good article, or,
 
 

Brian Nosek: yeah. Yeah. Yeah. 
 
 

Benjamin James Kuper-Smith: I'll, 
 
 

Brian Nosek: that Sam Apple will be a 
 
 

Benjamin James Kuper-Smith: uh, I don't remember. I'll, I'll, I'll link it in the description, but I thought I'd just read a, the beginning of it because, um, I guess it kind of summarized very quickly what you just, maybe what you're about to describe it starts with. 
 
 

Brian Nosek had pretty much given up on finding a funder. For two years, he had sent out grant proposals for a software project. And for two years, they had been rejected again and again, which was, by 2011, discouraging but not all that [00:17:00] surprising to the 38-year-old scientist. An associate professor at the University of Virginia, Nosek had made a name for himself in a hot subfield of social psychology, studying people's unconscious biases.
 
 

But that, that's not what this project was about. At least, not exactly. Um, who are John and Laura Arnold? 
 
 

Brian Nosek: Yeah. John and Laura Arnold came in unexpectedly to make possible these aspirations that we had, that we couldn't get resourced through the standard mechanisms. In, oh boy, 2012, we had started the Reproducibility Project in psychology and were building the Open Science Framework. Jeff Spies was a grad student in the lab. He had a software development background also, and had interests in science as a process too. And so we shared that old grant proposal and had been talking about his [00:18:00] interests and adaptations that were related to, but different from, that proposal. So he brought a lot of new ideas to it. And he said, I want to do that as a dissertation project. And that's a very unusual dissertation project in psychology, to build a technology. But we talked with a few potential committee members and then got to an idea where we said, okay, yeah, we could do this as a dissertation project. We didn't have funding for it. But I was giving lectures to corporate groups on implicit bias, and they would pay money to the nonprofit Project Implicit, and then Project Implicit would send my lab money. So I was able to fund that project just through giving these sort of corporate lectures. The Reproducibility Project was sort of one of these crowdsourced efforts of saying, oh, we're debating about replications. Instead of running a hundred replications ourselves, let's see if a community of researchers would each contribute a replication, and we could aggregate them together and report it. [00:19:00] So we were doing those on nickels and dimes, and they got some press attention. And one of those articles John Arnold noticed. John and his spouse Laura had just recently retired, at about the same age as I was. We're about the same age today as we were then; that's kind of how age works,
 
 

Benjamin James Kuper-Smith: More or less. Heh heh heh. 
 
 

Brian Nosek: That's my area of expertise. Um, he had a very successful career as an energy trader, had worked for Enron before Enron became, you know, synonymous with what it's synonymous with, and started his own firm and had so much success that he was able to retire very young, and the two of them devoted their careers to being philanthropic on a variety of different topics.
 
 

And one of the areas of interest that they had early was research integrity. And they saw what we were [00:20:00] doing and said, that sounds interesting. And so John suggested that his new officer in this area, named Stuart Buck, reach out and see what we were doing. And so we had a couple of emails back and forth with Stuart in late 2012. Zoom didn't exist, but Google had a video chat. And so he said, oh, why don't you show me how this OSF thing works? And so we had a little video chat, showed him. He said, oh, I want to present this to my board, which was Laura, John, and the president of the organization at the time, Denis Calabrese. So he was going to present. We said, no, no, no, let us present it. We don't want him to describe it wrong or whatever. So he said, sure, yeah, no problem. And so we set up a video call and sort of showed what we were doing with the OSF at the time. It was still private.
 
 

There was no public-facing content yet. It was just the software we [00:21:00] were building. And then they said, okay, why don't you come out to Houston, where they're based, and we'll have a conversation about it. So we said, okay. We flew out to Houston in early December, chatted for an hour and a half, two hours, about the OSF, about the Reproducibility Project, and then about the big picture of what we see as the challenges and what we'd like to change in the research culture. And we left, and like a day later, they followed up by email and said, okay, please draw up a budget. Draw up a budget? Okay. Uh, actually it wasn't a day, because we were working on the budget on the plane ride home. So it must've been right away; they suggested drawing up a budget before we even left. And so we drew up a budget and said, let's think big, you know. And thinking big for us was small for them. Through that correspondence, they said, okay, we want to fund you.
 
 

We want to start this nonprofit called the Center for Open Science. Uh, go for it. [00:22:00] Uh, here's five million dollars to start.
 
 

Benjamin James Kuper-Smith: What were you hoping for? 
 
 

Brian Nosek: Oh my 
 
 

Benjamin James Kuper-Smith: Yeah. 
 
 

Brian Nosek: Oh, I mean, what was I hoping for? I mean, God, if we had gotten five hundred thousand dollars, that would have been incredible. And so we pitched what we thought was the most outlandish possible budget. Not that we thought we would get it.
 
 

They just said, just show us how you're thinking about it. So: over five years, we could imagine spending 17 million, and here are all the crazy ways we would spend 17 million. And so they said, okay, we'll give you 5 million. I think the first two years were 5 million, or five years, I think, the grant was, and then of course the other stuff came along. But it was just an astonishing amount of resources, from our perspective, to get started on a problem that of course is a huge problem, and we could [00:23:00] pursue it in a way that was unconstrained. How do you want to solve it? Well, we're going to start an organization, and we're going to figure out how we're going to solve it as we go. So it was just this amazing gift, this opportunity that they provided, and a ridiculous amount of risk that they took, you know, these two chumps showing up saying, oh yeah, we can change how science
 
 

Benjamin James Kuper-Smith: want to build a website. 
 
 

Brian Nosek: Yeah, 
 
 

Benjamin James Kuper-Smith: Can I have 17 million? You
 
 

Brian Nosek: that they were willing to provide the support. So, you know, we're obviously eternally grateful for that risk that they took, but it also made it possible to then get other funders who would express interest. And the really weird thing about being in the right place at the right moment was that, with the initial coverage of doing the Reproducibility Project and building the OSF, we had four different funders reach out to us. Right? This is four years now after having not been able to get NSF or NIH [00:24:00] interested. Now funders are calling me, and I'm thinking, oh my gosh, what have I done? This is ridiculous. What a change of fortunes. And it was because the funders did care about this. They did see an issue. They did see an opportunity. And we just happened to be there at that moment with an idea, with a way that we wanted to tackle it, that they were convinced was at least interesting enough to give a little bit of stimulus to.
 
 

Benjamin James Kuper-Smith: I have a question about kind of random grants. I mean, from what it sounds like to me, you were contacted by them and then given this outside of the standard grant application process. And the weird thing is, this is the second time this has happened on the podcast.
 
 

Um, I talked to Michael Hornberger and Hugo Spiers, who had made Sea Hero Quest, which is an online game that was played by like 3 million people for dementia research and that kind of stuff. And this also started with them just being called by someone from, like, Deutsche Telekom. It's like, [00:25:00] hey, we want to give some money to science.
 
 

Basically, you're on the shortlist, write a proposal. And then, like, kind of out of the blue, like, without... Does that stuff happen? Or how do I just randomly talk to two people who, not randomly, but, you know, yeah,
 
 

Brian Nosek: It does happen. There are many different philanthropic foundations or philanthropists that think in this way of proactively seeking people or groups or organizations that are doing things aligned with their interests. Um, it's hard to plan for. Like, you can't really plan for it, right?
 
 

Because you don't know what other people are thinking. But it's wonderful that it exists, if one is lucky enough to get the attention of some of these groups. So yes. And it has happened since, you know, because now we're established. It's easier, once you're established, for groups that are looking to give funds in that proactive way to find you. But it also [00:26:00] still takes a ton of cultivation of opportunities, of reaching out and saying, this is what we're about, and are our interests aligned?
 
 

Benjamin James Kuper-Smith: It was kind of an almost very boring question, which is, I mean, now the Center for Open Science is this big thing that almost has to continue. So, I mean, what if this should, like, fall apart in two years or something?
 
 

Brian Nosek: Yeah, 
 
 

Benjamin James Kuper-Smith: how do you kind of ensure that this runs in the long run and is going to be available? 
 
 

How long are you actually planning ahead for this kind of stuff? 
 
 

Brian Nosek: Yeah, no, it's a good question. At the outset, it sort of was like, yeah, if this crashes and burns, no big deal, because it's a crazy idea. We'll see what happens. And risks like that are worth taking, because if they do work, great, big stuff happens. If they don't, ah, it was worth trying. We've learned a lot. Now we're at the point where people are depending on us in a different way, right? There are millions of data files hosted on our services, and if [00:27:00] that goes away, that causes harm. And so it has changed, over the 10 years, into a different feeling of responsibility for the stewardship of public goods infrastructure that researchers use and rely on, and all the other things that we do as well. As soon as it was clear that it was working, we transitioned to having a plan for our demise. That is, to put aside some funds so that, in the event that COS needs to shut down or loses its funding entirely, this reserve would, at minimum, make sure that everything that we host on the infrastructure solutions will stay there and be hosted for some period of time, at least long enough for people to retrieve it or find new outlets. For several years that would be preserved, so that it outlasts COS. The investment in that kind of sustainability is [00:28:00] increasing over time. But the core part for us organizationally is that what needs to be sustained is the mission, not the organization. So if we, for example, say, you know what, the way that COS is formulated isn't serving the purpose of advancing the mission, to increase openness, integrity, and reproducibility of research, then what we should do is move the maintenance of the OSF to some other entity. Maybe it's a consortium of governments or a consortium of universities. They take over that, and COS can go off to greener pastures, because the organization exists to serve the mission, not the other way around.
 
 

So the real priority of how we think about this long-term planning is to make sure that it's the assets, the public good assets that have been created, that have a sustainability plan. Just[00:29:00]
 
 

Benjamin James Kuper-Smith: How do you recognize if it's not serving the mission anymore? And would, would someone like you who started it from the beginning and who's, you know, so associated with it, would you be the right person to identify that? ha ha ha 
 
 

Brian Nosek: There's so much ego invested in it and our solutions, right? That is a risk. And, you know, this happens all the time, where organizations and individuals in organizations do so much rationalizing. Of course we have to continue to exist, right? People talk about that with, like, Joe Biden in the U.S.
 
 

elections, right? Is he really just about himself, or is he about his principles? Why doesn't he step aside if people think he's too old? You know, I'm not in the too-old category yet, but I might be in the too-embedded category, to be able to make clear judgments about when things that we're doing shouldn't be continued.
 
 

But that's a part of doing good science too, right? Constantly being skeptical [00:30:00] of our own work, looking for its weaknesses, in order to make sure that we are advancing the actual aims of increasing knowledge or improving scientific practice, et cetera. And so one of the things that we try to cultivate as a mindset in the organization and in all of our collaborations is that if one of our solutions isn't working, we really, really, really want to know that. Because if registered reports isn't actually improving research practice in the ways that we hope, then continuing to push on registered reports is advantaging the solution over the mission. The mission is to improve openness, integrity, and reproducibility of research. We don't want to get so enamored of the solutions that we came up with,
 
 

or that we support, or that we invest time in, that we end up, ironically, doing things that are counter to the mission. Um, so, [00:31:00] to the extent that we embody that, it has been a great benefit for keeping sort of the ego part out of it. And it's related to something that we talk a lot about in terms of how we idealize science: the importance of getting it right over being right. We're going to have lots of wrong ideas, lots of wrong beliefs about how the world works or about whether our solutions will work. If our success metric is on getting it right, then evidence against the solutions as they're currently embodied is a benefit, because it identifies the weaknesses of what we're actually trying to do, which is accelerate that mission.
 
 

Benjamin James Kuper-Smith: So what hasn't worked then? I mean, I guess it's been running for, you know, 10 years officially, and I guess a bit longer than that. What are some things that you thought were like, this is gonna, you know, really help science, and it just didn't. 
 
 

Brian Nosek: Yeah, yeah, there are [00:32:00] too many to describe in full, because there are lots of small decisions that get made of, oh, here's a feature that we think might help. Here's a practice that might do something good. And then, oh, that didn't happen the way we thought it would. One very small example: on the OSF, we used to have a feature called Quick Files, which was to make it incredibly easy for people to share a file, because OSF has sort of this mindset of creating projects and adapting them as a research process unfolds. And so it's high investment for managing a research project, because it has so much flexibility, and it presumes sort of a long period of time over which that project emerges, in the standard use case. So what we needed to do was provide a super simple one, where you just go on a website, you push something, and then it's available right on the website. [00:33:00] Like, a good problem to solve. There are certainly other places where you can do that easily. It didn't get the usage that we thought, and more importantly, it didn't transition people from doing that to doing things that are more about opening up their entire research life cycle.
 
 

So those are all preserved. it really helped because it identified a flaw in the implementation that wasn't, that we didn't recognize as clearly as we should have until that metascientific research was done and revealed that limitation. 
 
 

Benjamin James Kuper-Smith: Yeah. Slightly different question: what's it like running an organization with 50 people? Like, with, well, around 50 employees. I don't know, maybe it's a bit more, a bit less. Is,
 
 

Brian Nosek: No, it's, it's right around 50 right now. 
 
 

Benjamin James Kuper-Smith: Um, is that what you imagined science was gonna be like? 
 
 

Brian Nosek: So it's, [00:36:00] it was not until someone else labeled me that I sort of went, oh, maybe entrepreneur does fit what I
 
 

Benjamin James Kuper-Smith: Yeah. 
 
 

Brian Nosek: started, Oh yeah, I've started three organizations, all nonprofits or public goods, entrepreneurship. Uh, but I guess that is kind of stuff that I do. Even though I had never sort of had that as an identity, 
 
 

Benjamin James Kuper-Smith: Just to add to that: whilst you were speaking earlier, at some point I was thinking, if everything was the same about you, but you had slightly different interests, you could make a lot of money. Like, with this sort of thing, you could probably have made a lot of money, right?
 
 

Because you can really feel this enthusiasm about building things, but your interest is in science. So it's,
 
 

Brian Nosek: right. Yeah, no, that's, Yeah, I don't know if that's a compliment or a zing, but, you know, I think it's accurate 
 
 

Benjamin James Kuper-Smith: I, I meant it in terms of like, um,[00:37:00]  
 
 

Brian Nosek: the mindset, the 
 
 

Benjamin James Kuper-Smith: yeah, and the kind of ability also to, you know, make it happen. Because as you said, lots of graduate students have this kind of like thoughts of things that should be done and that kind of stuff. But it's very hard to actually make it happen and then stick with it. And yeah, 
 
 

Brian Nosek: Yeah, yeah. I like trying, and I don't mind failing. And I think that is an important orientation for taking risks on things that have perhaps low probabilities of success, but that, if they did succeed, would hopefully provide some value. I do like that in general as an approach, and I am interested in those common goods problems, those things that are aspirational: particularly in the academic context, what we idealize scholarly research to be about. How can we be more like what we think it should be about? [00:38:00] And so those are directed problems that, for me, provide meaning, whereas the pursuit of money by providing a solution that makes money is not very exciting. Of course, if I can provide a solution that provides meaning and also makes money, that would be awesome, too. But that's not the path, at least, that I'm on.
 
 

Benjamin James Kuper-Smith: you know. Although I hope you didn't interpret that as an insult. 
 
 

Brian Nosek: No, 
 
 

Benjamin James Kuper-Smith: okay. Okay. 
 
 

Brian Nosek: no, 
 
 

Benjamin James Kuper-Smith: I definitely didn't mean it that way. 
 
 

Brian Nosek: Yeah, no, it's funny to reflect on those characterizations, identity things, because, you know, another part of it, from the original question, is that my identity is as an academic. And I effectively have not been an academic for 10 years, and that is a weird disjoint for me experientially, because the scholarly community is what I'm all about. It's what I [00:39:00] feel part of, and I am doing something that's quite distinct from it, in terms of what my everyday life is like. I didn't conceive of myself as running an organization like this in advance. But I did conceive of myself as tackling these problems with whatever it took to tackle those problems. And so the decision making for me is less about what the daily work is, and more about what work I need to do to make progress on the problems that I care about. So how I arrived at this role is sort of like, well, that's what it's taking, so of course that's what I'm going to do. Rather than, oh, I think I want to be a chief executive now, that sounds fun. I just never thought that way.
 
 

Benjamin James Kuper-Smith: Yeah, I guess it's kind of like finding your path. You know, not by imagining what, what your perfect scenario is and then chasing that, but kind of by going like what am I interested in and step by step you kind [00:40:00] of end up in this position that, I mean, you seem pretty happy with, with what you're doing. I don't know. 
 
 

Brian Nosek: Yeah. Oh, I feel very fortunate that I get to work every day on the problem that matters to me. And of course there are individual things that I would rather not spend my time doing. You know, I'm not excited about writing grant proposals for the purpose of writing grant proposals, but I am excited about getting resources to keep advancing the solutions that we're trying to advance. So that is, you know, how I frame, at least for me, the motivation to do the work: it is helping me try to make progress on the problem I want to make progress on. And as long as that's true, then I am satisfied with my, you know, my life.
 
 

It's, it's, uh, it's, I'm very grateful to be able to do that. 
 
 

Benjamin James Kuper-Smith: That's pretty cool. I had a [00:41:00] question about what you, you know, want to do with the Center for Open Science in the future and that kind of stuff. I mean, maybe for innovations, like I said: how far ahead are you thinking for the organization? Is it kind of like, oh, this is the bigger plan we have? Or how do you think about innovations? Because, as you mentioned, there might be some new idea that comes up, or a new paper that says this is a great method, but this is a bad method.
 
 

Yeah. I don't know. How do you plan for that kind of stuff for, 
 
 

Brian Nosek: Yeah, at least for me, the approach is to think through an idealized end state. We wrote these papers in 2011-12 called the Scientific Utopia papers, and there's a third one that came out much later, and those really were the effort to articulate: here is an end state that I think would be awesome. And then the entrepreneurial work is, well, how do we make progress from here to there? [00:42:00] What resources do I need? What solutions could start now? How do we start recognizing where the culture is, where people are, to get them to take incremental steps that will move them closer to that ideal end state? With a recognition that we'll never get to the ideal end state, and also with a recognition that that articulated ideal may not be the right end state, but it is a north star. So having a north star of: I would love to be in a world of scholarly research that looks like this, then drives the decision making in the moments, of how we can get access to resources, get communities together, start experimenting on things to see if we can make progress toward that, and
 
 

There is how do we change the scholarly communication more generally. And so when I've been talking to people about what's coming next for C. O. S. the way that I've been framing it recently is about the, you know, the 1st decade of C. O. S. was really focused on the research producer. is it that we help when you and I are doing our research? To document it, to plan it, to share it, to make it more usable by others. And we've made [00:44:00] some progress there, both on technology and incentives and everything else, but there's still lots and lots to do. But the next decade, I want to join up the. Research producer with the research consumer, and the consumer in the immediate case is other researchers. do we help make it so that the research process as the producer is sharing it gets evaluated more effectively gets extracted more effectively gets reviewed and integrated with the body body of knowledge as it accumulates. And so 1 of the examples of that, that we've been working on for several years, we sort of had starts and stops, but never quite got resources to be able to pursue it in any meaningful way is the model of a what we call a life cycle journal trading peer review. 
 
 

not as a static item that happens at the end of the research process a single time and is the determinant of [00:45:00] published or not published, right? You get the reward or not. But rather review not just by peers, but also by machines and objective measures, all kinds of evaluations happening throughout the research life cycle. Registered Reports is an illustration of that.
 
 

Right? We review earlier, but we can elaborate on that. So that, for example, the corrections process at the end is a natural part of the research process, the scholarly communication process, rather than what is a very badly implemented model now for corrections to papers, because they're not even changing the paper.
 
 

The paper is sacrosanct; you can't possibly change the paper. Which is ridiculous, because that's not how knowledge accumulates, right? So we have a scholarly communication system that's misaligned with how it is that science actually happens and how knowledge develops over time. And so the next step is: how do we start to bring the reality of how science does work, or should [00:46:00] work, in quotes, into scholarly communication, and connect that still with scholarly rewards, so researchers can be involved in that kind of process but still get the things that they need to advance their careers?
 
 

Benjamin James Kuper-Smith: yeah. I mean, you mentioned the, the kind of rewards for scholars. I mean, one thing just, uh, kind of as an aside as I, Yeah. Today I applied for my first, uh, grant money, which was, uh, A lot more work than I expected. Uh, 
 
 

Brian Nosek: Yeah. 
 
 

Benjamin James Kuper-Smith: but one thing there was, uh, they had an open science section. So you actually had to, you know, fill in like, what was your, um, yeah, I can't remember exactly what it was, but it's like, yeah, do you, what, what's your plan to like make this accessible to people and that kind of stuff. 
 
 

And so it seems like definitely there's, yeah, stuff is changing. I don't know how common that is, uh, but the one that I've applied for so far, a hundred percent 
 
 

Brian Nosek: Right. Hey, a hundred percent. Yeah. Great. Yeah. I mean, it is increasingly common and there's so many different things that. [00:47:00] being tried, which is wonderful, right? There isn't any one solution. The things that come out of certainly aren't the, the 1 way, or the only way, or even necessarily the right way to do these changes. So, the fact that there is experimentation and trying different methods of changing the rewards of creating new norms of trying out new practices. It's it's a very, for me, it's a very exciting time, given the diversity of efforts. 
 
 

Benjamin James Kuper-Smith: it is the kind of. This larger system, is that a harder problem than the kind of initial one? Because it seems to me like you, I mean, for example, to get funders to, to require people to have some information about this. This is not like a technical thing you can solve by your own, by programming something, right? 
 
 

You kind of need to convince people of something. Does that make it harder, or just different?
 
 

Brian Nosek: Yeah, I feel like it's just different, because it still is in the same realm of things. And this is often an error in technology [00:48:00] initiatives, you know, the "if you build it, they will come" approach to things, right? You may have the best solution in the world, but if you don't build into that how it is you get people onboarded and build community around it and build engagement, it's not going to go anywhere, right? So an example is something that's been tried many times: a journal of null results. A dozen, two dozen times, people have started a journal of null results thinking, oh, that'll solve the problem, right? No, it doesn't solve the problem. Nobody submits, because a journal that's defined by publishing things that nobody else will publish is guaranteed to fail. Why would anyone submit there? No one submits that stuff. You have to find a different way to engage the problem, one that actually fits with what people can be convinced to do that they're not already immediately interested in doing. And so that is a really fun part of the [00:49:00] work: trying to figure out how to design solutions that actually engage the community to try them out and can engage stakeholders to see value in it and pursue adoption of it. That's most of the work. Usually the technology is relatively straightforward compared to figuring out how all those agents engage
 
 

Benjamin James Kuper-Smith: Yeah, 
 
 

Brian Nosek: with it. 
 
 

Benjamin James Kuper-Smith: It's interesting, as you were talking, I was kind of thinking: you mentioned earlier that all the graduate students in the nineties, and probably before then and after then, had these idealistic visions, but didn't know what to do because the system is so different. In a way, you're just providing an option to do that.
 
 

It's almost like a way of preserving that idealism until, at some point, at the very latest, those people become the people in charge.
 
 

Brian Nosek: Right. Yeah. No, and that's been one of the most gratifying elements of reform movement is that is not been, you know, no one can [00:50:00] solve system challenges alone. What was. Surprising is how much energy and how many people are engaged in and willing to do things to try to help facilitate change in the research community. You know, the original reproducibility project, 270 co authors on that paper. 269 of whom get no credit for their contribution on that paper, right? When you ask people who did the reproduced product, they'll say, me, challenge them to give another name, even if they're a co author, they may not remember, right? It is ridiculous to consider that that was a total collective effort. And. The people that contributed did so because they really just care about the science that they're involved in. They wanted to contribute as a service, uh, and did so as a service, uh, because of that idealism. And so you're, I [00:51:00] think you're totally right that it's sometimes it's about creating the conditions so that people's idealism can come through and inspire others to pursue similar things. 
 
 

Benjamin James Kuper-Smith: Yeah. At the end of each episode I have a few recurring questions. Well, three recurring questions. The first is, what's a book or paper you think more people should read? Can be famous, not famous, old, new. Yeah, anything that comes to mind? Yeah. 
 
 

Brian Nosek: Well, I mentioned those papers about methodology from the sixties and seventies. It is worth reading those, because you go back and read them and you're like, wow, this could have been written yesterday. So those are worth reading. But the one that comes to mind is a fantastic single-page paper by a fellow named Martin Schwartz called The Importance of Stupidity in Scientific Research. And I love this paper because it provides a mindset that I think is underappreciated, especially [00:52:00] in one's early career, where we spend our time, you know, with imposter syndrome, worrying that we're not good enough.
 
 

We're not smart enough, or whatever else. And we take the experience of not understanding as an indication of a problem with ourselves. And what the paper points out so beautifully is that that feeling of not understanding is the feeling you're supposed to have in science. You're studying something you don't understand. That's why you're doing the science. So that feeling of not understanding is an approach orientation in science, where it gives you the latitude: oh, of course I don't understand this. Okay, I'm going to see what I can figure out, see what I can learn. And then, once I feel like I understand something, well, then I move on to something else that I don't understand.
 
 

That's kind of what you do. And so I love that paper, and I think everybody should read it and do what they can to cultivate that mindset in their own work.
 
 

Benjamin James Kuper-Smith: Yeah, it's [00:53:00] funny, I talked to Adam Mastroianni a few weeks ago, or months ago, I guess by now. He recommended a blog post called, uh, The Virtues, Scientific Virtues, something like that, I'll put it in the description. Um, that basically collects virtues that scientists are supposed to have, and that are all kind of the opposite of what you think they are. 
 
 

And the first one, I think, is stupidity. And it's just lots of, like, tiny stories and quotations about supposedly very smart people who, well, didn't seem that smart. I don't know, but lots of stuff like that. Um, so I'll link that too.
 
 

Brian Nosek: Yeah. Yeah. I mean, really, if you think about it, we're going to be wrong a lot, almost always, because our most interesting idea to explain the part of the world we're trying to explain in a new way is going to be wrong in some fashion, right? Newton's laws of mechanics are wrong when you go at very fast speeds. Was Newton an idiot? No. Those are [00:54:00] really important; they're one of the most successful theories ever. And so, embracing our wrongness instead, pursuing the attitude of: we're going to find some dead ends. Great. We'll move on to other things. We'll see if those are dead ends, too. And the faster we find our dead ends, the better, because then we can move on to other parts of the problem that we don't know are dead ends yet.
 
 

Benjamin James Kuper-Smith: Yeah, yeah, it's weird. I obviously don't like feeling stupid, but whenever I understand something, it's like, oh, that's boring. So those are the two feelings: do you want to be bored, or feel stupid?
 
 

Brian Nosek: I'm dumb, but at least I'm interested. 
 
 

Benjamin James Kuper-Smith: Exactly. Um, second question, something you wish you'd learned sooner. Uh, this can be private. This can be from your work life. This can be whatever you want. It's something you wish you'd learn sooner. 
 
 

Brian Nosek: Um, I guess, relating to the conversation, I [00:55:00] think it would be having the self-confidence that I could make progress on problems that I cared about. Now I have that, maybe overconfidence at this point, and I had some of it in grad school. Those examples that I gave earlier, you know, let's start a summer school, 
 
 

let's build a website. Those, I feel like I leaned into. But I wish I had had experiences earlier where I had enough self-confidence to try. I've never minded failing; I've failed a lot, even before then. But I was lacking, especially as an undergrad, the confidence to sort of stick my neck out on something that I cared about. And once I felt that confidence of doing that, and failing, whatever, then stuff really started to click for me. I'm glad that it did in grad school, but I wish I had learned it sooner.[00:56:00]  
 
 

Benjamin James Kuper-Smith: But how, sorry, how did you kind of learn that? Was it through just failing and then going, oh, this isn't that bad? Or was it by actually succeeding and going... I don't know. 
 
 

Brian Nosek: Yeah, how? Um, I don't know. I don't have a clear enough memory of how that came together. But maybe the website experience felt close enough to things that I felt enough confidence in that I could stick my neck out and say, I might be able to do that. And then, once Tony scheduled a press conference six weeks later, I realized, oh, wait, maybe I can't, but I'm going to do it anyway. 
 
 

Benjamin James Kuper-Smith: Yeah, 
 
 

Brian Nosek: So maybe that hard expectation, one that did feel outside of my comfort zone and was imposed after the fact, I'm inventing a story now, maybe that sort of helped to break the chains: just try it out and see what happens, and maybe it works [00:57:00] out for the best. It's possible that it was some success experiences building up self-confidence to keep trying and not mind the failures as much. Certainly, I mean, in science, you know, we experience so much failure. You've submitted your first grant proposal? That's awesome. I had, you know, 15 before I got my first one to be 
 
 

Benjamin James Kuper-Smith: Don't say that. 
 
 

Brian Nosek: ...grant proposals, and... 
 
 

Benjamin James Kuper-Smith: I just submitted it. 
 
 

Brian Nosek: I'm a champion failure, so don't worry, don't worry. But it was just one of those experiences of, oh, we all have these experiences, because, you know, science is about criticism. 
 
 

And so, you know, papers get rejected, rejected, rejected: oh my God, I'm never gonna do this. Uh, one of the hardest experiences I had was that my girlfriend at the time, now my spouse, published a paper in grad school using the IAT before I published a paper using the IAT. And it's like, I have to break up [00:58:00] with this person. 
 
 

I'm the IAT guy. How is it that my spouse is publishing a paper using this tool before I do? Like, that was a deep failure experience with a close other, like, this is unbelievable. So failure happens, uh, and how is it that we adapt to that? And again, embrace the meaning more than the outcome. 
 
 

The outcomes are important for where we end up in our careers, obviously, but the meaning, for many of us, is in the problem that we're trying to solve, the question that we're interested in. And so keeping the focus there, I think, is an important part of being satisfied with the work that we do, and also being resilient to the failures that we will inevitably confront. 
 
 

Benjamin James Kuper-Smith: Okay, um, final question. As I said, I just finished my PhD, uh, will be a postdoc at some point. Uh, any advice for [00:59:00] people kind of on that transition, um, advanced PhD students, postdocs? 
 
 

Brian Nosek: Maybe I, yeah, maybe I just... 
 
 

Benjamin James Kuper-Smith: Or was that it? Yeah. Okay. 
 
 

Brian Nosek: Yeah, so, inadvertently, it is because the rewards in our area of work are so clear and tangible, right? Getting a faculty job, getting a paper published, getting a grant award. It is very hard to resist those things as the objectives. And it's okay to define those as objectives, right? 
 
 

I want to be an academic, so I define getting an academic job as the objective. But I think where we can go wrong, and by go wrong I mean end up being dissatisfied with where we land, is if we define it as an objective even when it moves further and further from what we actually want to do and what provides meaning in our [01:00:00] lives. So, for example: I want to be an academic so that I can study the questions that I want to study, investigate these areas of work, and, uh, engage with students on these questions, right? It's the "so that" that's the important part. And in a lot of the decisions that we make, you know, we sort of feel like we have to sacrifice. 
 
 

I want a publication so that... I don't want a publication for publication's sake; I want to share what I learned. And if what the reviewers ask for, or what I feel like I have to do to get publications or to get a job, is to sacrifice good science, or what provides meaning for me, then I'll be dissatisfied even if I get it. Because, just to put concrete examples in the context of what we're talking about, if I feel like I have to p-hack and suppress stuff and exaggerate my evidence in order to get the publication and get the job, [01:01:00] then I'm not going to like myself or my work in that job. So why would I do that? And so, for me, the approach orientation is: do the work, do the life, the way that you value. You can't control all the outcomes, but you can control that you're doing it in a way that means something to you. And if the discipline isn't ready to align with the way that you do it... 
 
 

Well, you wouldn't want to be in that discipline anyway, because you would have to do it in a way that you don't value. It's very easy to say and hard to live, and there are obviously lots of complications in between, but prioritizing the things that actually give us meaning, I think, in the long run leads to a much greater and much better contribution to making the world a better place, because we keep the focus on the [01:02:00] things that we actually care about. 
 
 

Benjamin James Kuper-Smith: Perfect. Thank you very much. 
 
 

Brian Nosek: Glad to do it. Thanks for having me.
