
Wednesday, November 4, 2020

When Appreciation of Truth Hangs in the Balance, We All Have Work To Do—Regardless of Which Way it Falls

Let This Be a Wake-Up Call


By Kai Chan


The appreciation of truth, order, and government for the people (not just supporters) hangs in the balance as the US election remains too close to call. No matter what happens, we should all feel an urgency like never before, as we witness a once-great democratic nation crumbling.


What might have been excusable four years ago as a one-time lapse in judgement is not excusable now. Back in November 2016, people speculated that maybe Trump wouldn’t be so brash, loose with the truth, or incendiary as President. Others argued that if it got really bad, voters would send him packing in four years.


He was every bit as bad as we could have imagined. And now it’s clear that nearly 50% of Americans prefer the lies and misleading half-truths spewed from a badly written Twitter stream to reputable and reliable journalism. The same almost-50% of Americans love a leader who ‘sounds like us’ (to paraphrase numerous supporters), even if he flagrantly abuses his power for personal and political gain. And they trust a President on climate change, although he is woefully lacking in scientific understanding and at odds with the vast majority of the scientific community, all while climate-enhanced disasters burn and flood their way through US neighbourhoods and homes.


The alternative-reality crisis exacerbates the climate, ecological, and inequality crises. None would be so bad if it weren't for the others.
Regardless of the election result, this is a nation that needs transformative change, now. But it’s one that’s dangerously close to sinking into a trap of social-media-fuelled echo chambers of lies and conspiracy theories, one that sees enemies in anyone seeking true unity for the nation.


Four years ago, I wrote to my graduate and undergraduate students to help them process what a post-truth world means for those whose entire purpose is seeking truth. It’s déjà vu now.


Except that it’s worse. Progressives threw everything they thought they had at achieving a different result. Even if they barely succeeded, they have also failed. The division is so great now, the distrust so deep, the truth so apparently elusive, that a marginal win is nowhere near good enough.


This is not about left vs. right. If a Republican politician with integrity were elected on an honest platform that was fiscally and socially conservative, I would have no beef. As the Lincoln Project demonstrates, the problem with Trump isn’t his Republican affiliation, but rather the threat he poses to cherished and crucial American institutions needed for a functioning democracy.

We need scientists more than ever—including social scientists, of course. They (we) seek the truth for a living. We don’t own the truth, but we have honed the best system available for pursuing it. We can certainly identify lies.


With a million species at risk of extinction, a global climate on the precipice of dangerous tipping points, and pervasive systemic racism and injustice, the truth is essential. And there’s no time to spare.


This crisis of alternative realities is far worse than the US scientific-integrity problems of 2004, when a group of us at Stanford wrote in Nature that “If a government abuses science to justify its policies, scientists have a duty to speak out”.


And yet action cannot take the form of scientists simply spouting the truth, as if that will convince anyone new. No, effective science engagement—like effective policy—must recognize that people are not rational agents: people process ‘facts’ together in ways that consolidate group membership around shared values, even when those ‘facts’ are wrong. Better to be wrong with your friends than right and alone?


To succeed, we all need to address this reality crisis.


We need to puncture the thought bubbles on social media that breed ignorance, incivility, and polarization.


We need to address the systemic inequities that have led so many Americans (especially in the rust belt) to feel angry at being left behind.


And we need to reach out and talk about real issues, even—no, especially—with those who might disagree.


This is hard. And uncomfortable. But necessary.


Those who think this kind of polarization couldn’t happen in Canada or elsewhere are thinking wishfully. Yes, Canada has some distinct advantages over the US. But just last year, in the Canadian federal election, we saw western alienation and memes of #Wexit (a western-province exit from the nation).


As the intersecting global climate, biodiversity, and inequality crises come to a head, it’s hard not to imagine that a functional society depends on addressing this growing fissure now—in every nation.




Wednesday, September 9, 2020

Structural classism in Canadian public research funding

By: Maayan Kreitzman


It’s once again scholarship-application season, when we get plied with emails from the Canadian government about funding competitions for graduate school. And while I’m no longer eligible for graduate student scholarships, and am thus freed of feeling bad about failing to get them, I can, however, still feel bad. That’s because these awards, and by extension the Canadian public research funding system as a whole, are pretty damn terrible. This blog post will explain why.


Like many others in academia, our lab group and department have lately been jolted into a conversation about the racist systems of oppression alive in our departments and institutions. The Black Lives Matter movement has made it impossible to look away from some of the most egregious and violent examples of racism and oppression in our society, but it has also turned the lens inward to the seemingly progressive spaces where we conduct our scholarship. Due to the initiative of our students' society (and a few professors as well), and students within our lab group, we’ve been having more conversations about this topic - through monthly lab meetings about anti-racism, through establishing a departmental anti-racism working group, and through the recent proposal for funding from the President’s office for a staff person to analyse racism and inclusion in our department (more on this in a future post). These are welcome developments that have allowed for franker and more challenging discussions about recruitment, retention, and resource allocation than I have previously experienced during my six years here. And though I and many others have been critical of Canada’s approach to funding research for years, these recent conversations throw an even brighter and less flattering light on the advent of scholarship-application season.


The Canadian public system for funding research follows the same logic from bottom to top: money = prestige and excellence; large bundles of money = large bundles of prestige and excellence. The funding system is characterized by programs with huge and illogical jumps in funding levels between one tier and the next, or between programs: from graduate scholarships that are stratified into three tiers ($21k/yr, $35k/yr, and $50k/yr) to research chairs that are stratified into several tiers within various programs ($100k/yr and $200k/yr for “regular” Canada Research Chairs, $350k/yr and $1 million/yr for “Canada 150 Research Chairs”, and up to $1.4 million for “Canada Excellence Research Chairs”). For example, the 2017 announcement of $117.6 million for a new Canada 150 Research Chairs program promised to spend five to ten million dollars each on a handful of foreign imports. Again, a “normal” Tier 1 Canada Research Chair is worth $200k/year, even though these are themselves extremely prestigious and competitive. The same goes for the Vanier scholarships for PhD students, which are worth $29k/year more than a “normal” PhD scholarship.


What is the logic of these massive differences in funding levels? Certainly, few people would dispute that differences in experience, career stage, and impact of contributions justify differences in compensation, or that different research programs have different funding needs for technical reasons. But how large should these differences be, and at what point do they amplify rather than reflect differences? Some will say that brilliant people really are worth more than others, and that in order to compete for smart people with other sectors and other countries, funding must be competitive, lest they go elsewhere. Moreover, once you’ve found these rare and brilliant individuals, the more money you throw at them, the more brilliant results you will get. But is there any evidence to support this theory? As far as graduate students in our department go, the people who are allowed to apply for the highest level of funding (Vanier scholarships) through the department have already committed to a supervisor and a program. They aren’t shopping around anymore by the time they apply. This suggests that while the extra money is certainly nice, it is not actually functioning to attract bright students - something else is.


The victims of the research chair and graduate funding programs are not the same. In the case of the highest tier of the Canada Research Chairs program (the “Canada excellence research chairs” - a moniker I can’t seem to type without scare quotes and an eyeroll), the worst outcome is a waste of public money for proportionally small productivity gains, as millions are committed to one individual’s salary and research program. In the case of the graduate student program, the issue is not so much money wasted on the highest levels of scholarships (which actually embody a decent living wage), but the fact that the “normal” tier of support results in actual poverty: $21,000/year PhD stipends, and $17,500/year for MSc students, a truly impossible amount to live on (and even less than recipients of CERB can expect to receive in a year as emergency living income). Take, for example, a student living in Vancouver or Toronto, who might expect to pay $800 per month for a bedroom in a house shared with four or more other students. In Canada, you're considered to be in "housing poverty" if you spend more than 30% of your income on rent. So at $9,600 per year spent on rent, both the PhD and Master's students are in *acute* housing poverty, spending roughly half their total income on rent (about 46% and 55%, respectively), even with multiple roommates. (And let’s remember, these are the rates paid to students who are competitive enough to win NSERC/SSHRC scholarships. For those who don’t make the cut, or for international students who don’t qualify, UBC’s baseline PhD funding is $18,000/year, and there is NO guaranteed funding for Master's students.) So though their outcomes are different, the steep jumps between levels for both student scholarships and research chairs send a similar message: good people are scarce commodities, and the best people are worth two to 20 times as much as their colleagues.
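For the arithmetic-minded, here is that rent-burden calculation spelled out - a minimal back-of-the-envelope sketch in Python, using only the rent and stipend figures quoted above (the labels are just shorthand for the tiers described in this post):

```python
# Rent burden for a grad student paying $800/month for a shared room,
# compared against the annual stipend figures quoted above.
annual_rent = 800 * 12  # $9,600/year

stipends = {
    "PhD scholarship (lowest tier)": 21_000,
    "MSc scholarship": 17_500,
    "UBC baseline PhD funding": 18_000,
}

for label, income in stipends.items():
    share = annual_rent / income  # fraction of income going to rent
    flag = "above" if share > 0.30 else "below"
    print(f"{label}: {share:.0%} of income on rent ({flag} the 30% housing-poverty line)")
```

It puts the rent share at roughly 46%, 55%, and 53% of income, respectively - all well above the 30% threshold, which is the point of the paragraph above.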


On top of this stratification is the fact that government awards are likely to be given to people who already have plenty of experience and awards. This is the literal policy of NSERC granting committees for both graduate scholarships and chairs - if you or your institution have received NSERC funding in the past, you’re more qualified to receive more in the future. This stacking means that the people most likely to receive the added prestige and money are the ones who already have plenty of experience and social capital. Thus, they can stockpile awards and prizes, while others with less experience, who might actually benefit more from funding and recognition, are not considered competitive. This leads to a calcified “rich get richer” dynamic, which disadvantages people who come from weaker socioeconomic backgrounds or smaller institutions, with all the ramifications thereof for the diversity of people producing research.


The top-heavy hierarchy of funding, coupled with the stacking of awards onto those who already hold them, not only fails from an equity perspective; it also fails to actually analyse the optimal distribution of funds from the perspective of outcomes and public benefit. Is the added productivity of giving extra money to someone who already has structural advantages greater than the added productivity of giving that money to someone with a thinner CV or less experience? What about the efficiency of giving large bundles to high-profile research groups (e.g., Canada 150 Research Chairs) versus spreading the money in smaller bundles (e.g., NSERC Discovery Grants)? How do these dynamics differ for research funding versus salary funding? What thresholds are reasonable for baseline funding, and what are the optimal intervals for signalling meritocratic achievement at various career stages? Should graduate students and junior researchers be treated as actual workers, or as apprentices who take some temporary hard knocks and are rewarded for it later? These questions, while subjective in part, could actually be answered through analysis of program dollars, dropout rates, research outcomes, and other available data in various fields and at various career stages - analysis which I do not think Canada’s funding agencies have done. They could also be answered, without much technocratic analysis, simply through an articulation of the culture and values that the Canadian public system holds and wishes to promote - for example, that nobody should live below the poverty line, that talent should be nurtured and retained rather than just attracted externally, and that success and prestige are not signalled chiefly through huge jumps in funding.


Clearly, these are not the values our system currently promotes. Its excessively bundled and hierarchical programs perpetuate a situation where money and prestige are one and the same, wasting money in some cases and, in others, depriving individuals and research groups of the modest funding they need to thrive. Spending so much on individual researchers who have the extraordinary good luck (or monomania) to thrive in the current system, in the hopes that Canada nabs one before they win a Nobel Prize, sends the message that some people need to be exorbitantly bribed to be here while others who choose to stay in the first place must compete over scraps. This value logic perversely wastes money on importing talent or “competing” with other sectors for it, rather than actually growing, stewarding, and retaining talented people who truly want to call Canada home.


A competing theory for how to produce good research is that there is no lack of brilliance coming in the door, but there is a lack of stewardship and retention of that talent. There are people who, because they aren’t automatons, become exhausted and disillusioned with the competitive economy of prestige in academia and leave. It is customary to regard these dropouts as failures who perhaps wouldn’t have come to much anyway. But what if they are academia’s most valuable wasted resource? Perhaps altogether even more valuable than all the brains that have drained away to other countries (or chose to remain there) for their higher compensation packages? Even in IRES, a program that has a lot to be proud of in terms of student and faculty diversity, recent discussions in the Anti-Racism Working Group have shed light on the fact that we often experience attrition among First Nations students. Furthermore, while IRES has strong representation of international scholars, we have a comparatively poor track record for attracting and retaining the people who are most dispossessed and marginalized here in Canada: Indigenous and Black North American students. Surely this is not due to any racial or cultural propensity toward failure - rather, it means that there is something about our program, our institution, and/or our funding structures (to say nothing of our society at large) that is not enabling everyone to thrive equally. Perhaps some of these people, as well as many non-marginalized folks, would be happy and productive with a livable wage, a modest amount of research funding that was more spread around, and some balance in life.


Canadian public funding should envision and enact a culture of dignity for all (including international students that do not come with wealth), frugality for those who have been over-funded in the past, and recognition for outstanding contributions that is not as closely linked to massive differences in funding. If signalling prestige is important to differentiate people and attract talent, why not do so with much smaller bonuses in funding or by decoupling the two completely? Perhaps the most outstanding students and researchers should get an extra thousand or two dollars as a prize? Or, perhaps they could get a plaque that says “CONGRATULATIONS, YOU WON ACADEMIA”. Some great research is fairly inexpensive; some mediocre research costs a lot. There are real differences in needs and priorities that the allocation of money must reflect. But the size of your paycheck or budget is not the best signifier of quality and achievement, and should not be used as such. 


To situate myself in this: I write not from a place of personal bitterness, but rather from an understanding of how my privilege has functioned in a system that has been OK for me in a way that isn’t true for many others. The public funding I received (a competitive NSERC fellowship, but at the lowest tier of PhD funding that Canada offers), combined with TAships, was about enough to cover my fairly modest needs (i.e., roommates well into my 30s, no kids, shopping second-hand) for five years of my PhD; the final year was funded through an external grant. But the only way I was comfortable existing on that salary was because I had the cushion of a large savings account, courtesy of coming from a family that has money, and more recently because I have a partner who makes significantly more than the median Canadian income. Even though I could technically survive on the funding I received, I also had connections, close nearby family, and a native language that made everything easier. I never had to worry about what would happen in an emergency. If I didn’t have these class privileges to give me peace of mind, graduate school would have been a different, and much more stressful, story.


As a public entity, the tri-council system has an enormous responsibility to shape conditions for a healthy academic sector that serves Canadians. It could do so much better.


Acknowledgements: Many thanks to Jo Fitzgibbons who contributed information about the IRES Anti-racism Working Group's findings, and provided helpful comments and edits, to Anna Santo for helpful comments and edits, and to Kai Chan for discussion and background.




Wednesday, August 12, 2020

Push for Science in Policy through IPBES: Here's How to Get Started

Part of a series of posts about IPBES (the UN's Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services) and an inside look at its processes. More to come.

Perhaps you were compelled by the global biodiversity crisis laid out in the IPBES* Global Assessment, or inspired by its bold call for transformative change. Or maybe you've been impressed by all the news coverage, or the prominent recognition of the importance of diverse ways of knowing. If you are like the Zoom full of people who attended a recent conference session about IPBES**, one way or another you realize that IPBES is every bit as powerful and needed as its older sibling (IPCC***).

And you wonder how to get involved. This post is intended to guide you.


1. Get to know (some of the work of) IPBES. This includes a variety of assessments (Global, Regional, Land Degradation, Pollination), as well as other reports (e.g., about models, scenarios and values). For an introduction to the Global Assessment, its key points, and how to cite different pieces, see this post.

Beyond these technical pieces, though, there are increasingly accessible ways to get to know IPBES. Follow @IPBES on Twitter. Listen to the new IPBES podcast series, Nature Insight. Frequent the website, and read guest articles (like A million threatened species? Thirteen questions and answers and What Is Transformative Change, and How Do We Achieve It?).

2. Review IPBES products. Any researcher or policymaker (including students) can sign up as reviewers. You can review draft chapters, or even scoping reports (which set the stage for future assessments—including the proposed Transformative Change Assessment). To see what's open for review, follow IPBES notifications here. Here are some tips about reviewing:

(a) Don't be afraid to say, "This is confusing". IPBES products are intended to be accessible. If you're interested, and you don't understand, that's a problem (and not your problem).

(b) Think about what's there and what's not (but should be). It's easy to critique the text that's present, but also think about what else should be included.

(c) Evaluate the flow of ideas. These documents are not always easy to follow, but they should be. Many reviewers attend to particular pieces, and not how the whole fits together. The whole is important.

(d) Don't get stuck word-smithing. A little of this is welcome, but the words used are often highly constrained, so much critique here would be a waste of everyone's time.

3. (If you're early-career) Apply to be an IPBES Fellow. This is a superb program, with an international network of brilliant, interesting people.

4. (If you're established in your career) Apply to be an Expert in a scoping or assessment process. As above, see notifications here.

As I've noted elsewhere in this series, I started out a skeptic about IPBES. But I've become convinced that it's desperately needed and is making crucial contributions to science and policy about nature and people, shining a light on the ecological crisis and possible ways out of it.

*IPBES stands for the UN's Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services.
**This post synthesizes answers provided by Patty Balvanera, Marla Emery, Doug Beard, Jeannine Cavender-Bares, and myself at the ESA Annual Meeting in 2020.
***IPCC stands for the UN's Intergovernmental Panel on Climate Change.

Monday, July 6, 2020

Why Some PhD Courses Shouldn't Have Grades

Kai Chan for CHANS Lab Views

This is part of a series, How to Write a Winning Proposal—in 10 Hard Steps


This past semester, I had the great pleasure of redesigning and teaching our program’s core PhD course (RES 602; see the Intro to this series). I bucked many years of history with my reimagined grading policy for the course, which effectively replaces grading with individualized feedback. I told the students that they would all receive ‘A’s if they did the work and put in a genuine effort. Here’s why.


First, some context: in this course on “Interdisciplinary Research Design for Sustainability Impact”, the whole purpose is to coach students to become rigorous, insightful, impactful researchers. Thus, the students and their own work are at the centre. And these are diverse students: all interdisciplinary to some degree, but in vastly different ways. Some work primarily in the physical or natural sciences, some in the social sciences and humanities. Some do largely qualitative work, while most do some quantitative research. They straddle different epistemologies, and they adhere to radically different theories of change.


In a nutshell, there are seven classes of reasons for my grading policy of making ‘A’ the default.


1. The students are all on their own journeys, doing radically different kinds of research in different academic traditions, with different standards of evaluation. Some of these I know well; others I’m still just learning. → I can’t judge them all equally.


2. I have my own positionality in all of this. I’m more excited by some kinds of questions and approaches than others. → It’s therefore key to decouple my feedback from measures of performance that will go on students’ records.


3. The spectre of having to provide defensible grades to all would substantially shift my teaching towards a different set of uniform assignments. By their uniformity, such assignments could never properly equip a diverse class of students to do their own projects. Moreover, assigning (and justifying) numeric grades is time-consuming and detracts from the time I can spend giving tailored feedback. → Freedom from grades enables me to make different contributions to different students.


4. We don’t actually need grades for any real purpose at this stage. They simply serve to let people know whether students are on track. And when what students really need is a letter of recommendation, I can write a detailed letter for all of them without relying on grades. → Why perpetuate an ill-fitting tradition?


5. Grades exist partly to motivate students when other motivations aren’t sufficient. This also doesn’t apply here. If I’m not giving students assignments that are clearly meaningful to their programs and later careers, or if those things aren’t sufficiently motivating (I’m sure they are), then we’ve got bigger problems. → Self-determined motivations are superior to externally imposed ones (i.e. grades) (Gagné & Deci 2005).


6. You don’t need grades to provide hard-hitting feedback. Many times, my comments pointed to the lack of some element (e.g. “You’ve specified a great set of real-world implications. But what about the academic ones? How will your project contribute to a broader understanding of similar problems in different contexts?”). Even without grades, comments like this are hard to take, but at least there’s a decent chance students will receive them as purely constructive, a growth area. Accompanied by a 3 / 5 grade, which suggests that students should have already known this, I imagine such comments would feel like a smack-down. Meanwhile, boosting feelings of competence is key for intrinsic motivation (Ryan & Deci 2000). → Grades add salt to wounds.


7. Grades (or their absence) fundamentally change the relationship between professor and student. Freed from being judge and jury, professors without grades can be a source of guidance and assistance on the student’s journey towards becoming an independent researcher. That’s what I want. → Grades don’t put the relationship first, and the relationship should come first.


Now clearly I didn’t do everything right in this brand new course. I’m sure I bungled all kinds of things (hopefully small). But in the last class, in the feedback session, students said two things that earned many thumbs up on Zoom, and which were music to my ears.


First, students noted how appreciative they were of the opportunity to receive highly tailored and detailed feedback about how their thinking was developing. Individualized feedback gives students a clear roadmap for change in their own context, and without differentiated grades, it is unencumbered by my notion of how much better they could have been. In the words of one student during our in-class debrief, “In the tradeoff between grades and feedback, I’ll take feedback any day! And you provided extensive, thoughtful feedback every single week.”


Students also commented that they appreciated having a safe and honest space to delve into the messy truths of research design. This “safe-to-fail” approach extended to both in-class discussions and weekly assignments. For assignments, I permitted students to re-imagine the instructions to suit their own research projects. “Because of that [flexibility],” one student commented, “we all got what we really needed out of the assignments, not just what was expected of us.” When coupled with detailed and tailored feedback, this empowered students to imagine and construct robust research plans based on honest feedback from me and from their peers. Additionally, weekly informal discussions about how the class was going provided an opportunity to be nimble and adapt assignments on the fly based on the real-time needs of students. “You made it safe to make mistakes,” one said, “and that’s what we needed.”



Next up: Author Contributions: Epic Fail, or Relational Success? (extra)

Previous: How to Find a Grad Project That Fulfills You. Step 1: Identify Your Critical Ingredients

The Intro to this series (with links to the full set): How to Write a Winning Proposal—in 10 Hard Steps


References

Gagné, M. and E. L. Deci (2005). "Self-determination theory and work motivation." Journal of Organizational Behavior 26(4): 331-362. https://onlinelibrary.wiley.com/doi/abs/10.1002/job.322

Ryan, R. M. and E. L. Deci (2000). "Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being." American Psychologist 55(1): 68-78. 



Tuesday, June 23, 2020

Understand How Others Go about Research. Step 0: Let Experts Reveal Their Messy Realities

Kai Chan for CHANS Lab Views

This is part of a series, How to Write a Winning Proposal—in 10 Hard Steps


Listening to established researchers is absolutely key to learning how to do research, but not via regular research talks. Those teach you very little about the messy realities you’ll have to navigate.


Imagine: you’re sitting in a lecture hall (or these days, on Zoom), listening to a researcher you truly respect. Chances are, the talk seamlessly proceeds from a compelling statement of context through to research questions that spark your inner curiosity, innovative methods, interesting findings, and impactful implications for both academia and the broader world.


Sometimes life doesn’t go as planned. Plan for it, by understanding others’ messy realities.


Star-struck in your seat, you think, “It’s so easy for them.” And then you think either, (a) “I’ll do it just like that,” or (b) “I could never do that.” Thinking (a) is hubris, because the process of research is never that tidy. Thinking (b) is your imposter syndrome: you can do great work, but tidy talks won’t tell you how to get there.


Research design and execution—particularly in the early stages—are sausage-making. Anyone who suggests that it’s a simple, straightforward process is afraid to reveal their sausage factory. For most of us, it’s the equivalent of pig lips and bums, blood and gore all over. Sorry to fellow animal-lovers for the imagery, but that’s the truth, metaphorically.


So, because there’s tremendous wisdom and insight in established researchers, particularly those who both do research and guide students through it, bring in the experts for their unique insights. But do it in a way that makes it abundantly clear that you don’t want the usual research seminar. You want some of that, but interspersed with the raw, ugly truths about the sometimes-bumbling, sometimes-lucky journey that got them there.


You want this because no matter how carefully you plan it out, you will have your own messy realities. You’ll have your own bumps in the road, where you realize that you need to stop and repair before you proceed. Or where you realize you’ve gone down the wrong track, and you have to retrace your steps to achieve what you set out to do.


It helps to provide some structure for your experts, though. ‘Messy realities’ and ‘sausage-making’ can mean many different things, so you won’t necessarily get what you seek in asking for that. It may help to provide what seems like a straightforward recipe for research design, and ask them to speak to how their research process navigated those steps. If your experts are like my brilliant and genuine colleagues, that will motivate them to uncover the many ways that things don’t go as planned, but nevertheless get you somewhere good.


Hearing from experts can take the form of guest lectures in a course. If there’s no such course, you might start one, even as a student. Or you can organize a series of student-led brown-bag seminars with professors (and maybe some graduating students or alumni).


Over the span of the posts in this series, I'm going to share my own messy realities and some of my colleagues' (with permission, of course), which pertain to the various weeks of the course. For the big picture, though, students were struck by the messy realities of our paths to our present. This was true not just for me (figure above), but also for Gunilla Öberg, who spoke of her transformation from environmental chlorine chemist to interdisciplinary scholar scrutinizing contrasting beliefs and ideological blinders associated with endocrine disrupting chemicals and sustainable sewage management (or "poo and pee as resources").


It helps to unveil the messy realities of research, even if only to guard against imposter syndrome. What false starts and dead-ends have you encountered on your way? Please comment below—that is, if you don’t mind sharing with strangers in this fully public forum….


Next up: How to Find a Grad Project That Fulfills You. Step 1: Identify Your Critical Ingredients

The Intro to this series (with links to the full set)




Thursday, June 18, 2020

IPBES—An Inside Take (the Series)

By Kai Chan, a Coordinating Lead Author for the Global Assessment, Chapter 5.

IPBES (the UN's Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services) is making waves in the arena of environmental science and policy, particularly that dealing with biodiversity conservation, ecosystem services, and the multiple values of nature. It is also somewhat of an enigma, especially for those who haven't participated yet in a formal role.

But if you work in environmental science and policy, you're sure to be confronted by a wide range of questions, including whether you should get involved in an assessment, task force or review process. You might also wonder how it works, how politics enters the process (or if it doesn't), what the assessments are useful for, and how to cite them.
The first IPBES Assessment was on pollination

This series of posts is based on an inside take from someone who has been involved in multiple work packages, starting with the Conceptual Framework, but also including the Global Assessment, and now also the Values Assessment and the (proposed) Transformative Change Assessment.

Let me be clear: this series of posts is not a set of advertisements for IPBES. I entered the Conceptual Framework process highly skeptical but wondering about the questions above, and how much value there is in engaging in this kind of international science-policy process. At the time (the beginning for IPBES), the only way for me to understand what IPBES was about was to get involved. I did, and I was not initially inspired to do more. In fact, I then figured it wasn't worth my while, but at least I knew why. But years later, as you'll learn in these posts, fate conspired to rope me in.

Moreover, I keep questioning deeply whether working with IPBES is the best use of my time (worth the opportunity costs), despite some important successes. Although I've been very frustrated at times (through no fault of the IPBES Secretariat—for whom I have tremendous respect—but rather due to the institutional constraints hard-wired into the organization), I'm increasingly convinced it is.

Here are the posts in chronological order:






Citing the IPBES Global Assessment—Appropriately and Fairly for Authors


By Kai Chan, a Coordinating Lead Author for the Global Assessment, Chapter 5.

Updated with the formatted chapters. Part of a series of posts about IPBES (the UN's Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services) and an inside look at its processes. Next up is Push for Science in Policy through IPBES: Here's How to Get Started.

You want an authoritative source for the decline of nature, its implications for people, and the causes of this degradation. Or a single source that reviews possible futures, pathways towards sustainable ones, or promising policy options. Chances are you want to cite the IPBES Global Assessment—but what specifically, and how? There’s the Science article, the Summary for Policymakers, the whole Assessment, and its component chapters. Your choices have important implications for which documents get read, and who gets credit.

It’s tempting just to cite the Science article based on the Global Assessment. Although I’m an author of that article, and I might have done the same five years ago, I’m going to argue that this easy strategy is both unfair and inappropriate.

Díaz et al., a great citation for the Global Assessment—but not alone.

Díaz, S., J. Settele, E. S. Brondízio, H. T. Ngo, J. Agard, A. Arneth, P. Balvanera, K. A. Brauman, S. H. M. Butchart, K. M. A. Chan, L. A. Garibaldi, K. Ichii, J. Liu, S. M. Subramanian, G. F. Midgley, P. Miloslavich, Z. Molnár, D. Obura, A. Pfaff, S. Polasky, A. Purvis, J. Razzaque, B. Reyers, R. R. Chowdhury, Y.-J. Shin, I. Visseren-Hamakers, K. J. Willis and C. N. Zayas (2019). "Pervasive human-driven decline of life on Earth points to the need for transformative change." Science 366(6471): eaax3100. https://science.sciencemag.org/content/sci/366/6471/eaax3100


Why? The Global Assessment runs some 1,800 pages, based on three years of work by ~500 authors. As you can see, only a small fraction of those Assessment authors are represented in the citation above (for understandable reasons). The Science article is a brief abstraction. Think of it as an ad of sorts. In most cases, it is appropriate to cite Díaz et al., but in virtually every case it's important to also cite the Assessment as a whole (or its chapters):

IPBES (2019). Global assessment report of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. S. Díaz, J. Settele, E. Brondízio and H. T. Ngo. Bonn, Germany, IPBES Secretariat: 1753. Doi: 10.5281/zenodo.3831673 https://ipbes.net/global-assessment

For the Assessment itself as above, only four names are listed (the Co-Chairs and Hien Ngo, the essential lead staff member), but Google Scholar does credit a broader set of authors (I’m not sure whom; I do know it’s on my profile). Because of this uncertainty, but also because of the imprecision of citing a massive 1800-page Assessment for a single point, it’s often better to cite the relevant chapter. You can download the full set of citations for the IPBES Global Assessment here (in BibTeX format).

There are some points that are integrative across multiple chapters, e.g., trends in biodiversity and ecosystem services, and their causes (Chapter 2 Nature, 2 NCP, 2 Drivers); transformative change and how it might be implemented (Chapters 5 and 6). In such cases, it often makes sense to cite the whole Assessment, or the Summary for Policymakers (the “SPM”):



IPBES (2019). Summary for policymakers of the global assessment report on biodiversity and ecosystem services of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. S. Díaz, J. Settele, E. Brondízio, M. Guèze, J. Agard, A. Arneth, P. Balvanera, K. Brauman, S. Butchart, K. Chan, L. Garibaldi, K. Ichii, J. Liu, S. M. Subramanian, G. Midgley, P. Miloslavich, Z. Molnár, D. Obura, A. Pfaff, S. Polasky, A. Purvis, J. Razzaque, B. Reyers, R. R. Chowdhury, Y.-J. Shin, I. Visseren-Hamakers, K. Willis, and C. Zayas. Bonn, Germany, IPBES Secretariat. Doi: 10.5281/zenodo.3553579 https://www.ipbes.net/news/ipbes-global-assessment-summary-policymakers-pdf

But as with the Science article, only a small number of the 500 authors of the Assessment are authors of the SPM (the Coordinating Lead Authors, Co-Chairs, and two key staff). Again, this is understandable and appropriate (writing the SPM was a huge undertaking), and my point isn't to take issue with the rules. Rather, many Lead Authors (LAs) contributed crucial insights to the chapters that formed the basis for the SPM, so let's cite the chapters also to give them credit for that.

Moreover, the SPM is not a scientific document, but rather a science-policy document. It doesn’t cite the many thousands of relevant studies in the scientific literature. These connections should be made prominent—in fairness to the thousands of authors who contributed to that large evidence base.

If you want to make a point about the evidence, cite the Assessment itself and/or its chapters. For global goals, cite Chapter 3 (below).

So, if you want to make a point about what the over-100 nations agreed to (it was 132 in May 2019), cite the SPM, but if you want to make a point about the basis of evidence, cite the Assessment itself and/or its chapters. For those interested in those finer points, below are the chapters, appropriate citation info, and what you might find most interesting and relevant within each.

A final wrinkle I just came to understand properly: Contributing Authors (CAs), who may have contributed a substantial section to the text (or just a paragraph), are not listed on official citations—even on the chapters. This is because unlike the Lead Authors, etc., Contributing Authors are not chosen for various dimensions of diversity through official processes involving the Multidisciplinary Expert Panel and Bureau. There is a need for thorough and even representation of (e.g.) scholars from less-developed nations, so I'm not arguing with the rules. But if there is a peer-reviewed paper associated with a chapter, it should better reflect the intellectual contributions of the full set of authors.

...

Chapter 1 sets the stage for the Assessment, and introduces an important historical narrative about economic development, and how some nations and regions have developed more rapidly, somewhat at the expense of others, by externalizing impacts on nature.

Brondízio, E. S., S. Díaz, J. Settele, H. T. Ngo, M. Guèze, Y. Aumeeruddy-Thomas, X. Bai, A. Geschke, Z. Molnár, A. Niamir, U. Pascual, A. Simcock and J. Jaureguiberry (2019). Chapter 1: Introduction to and rationale of the global assessment. Global assessment report of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. E. S. Brondízio, J. Settele, S. Díaz and H. T. Ngo: xxx-yyy. Doi: 10.5281/zenodo.3831852

Chapter 2 has three parts, each essentially forming its own chapter. These review the trends since 1970 in (a) nature, including biodiversity; (b) nature’s contributions to people, including ecosystem services; and (c) the drivers of change in nature and its contributions to people:

Purvis, A., Z. Molnar, D. Obura, K. Ichii, K. Willis, N. Chettri, E. Dulloo, A. Hendry, B. Gabrielyan, J. Gutt, U. Jacob, E. Keskin, A. Niamir, B. Öztürk and P. Jaureguiberry (2019). Status and trends - nature. Global assessment report of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. E. S. Brondízio, J. Settele, S. Díaz and H. Ngo. Doi: 10.5281/zenodo.3832005

Brauman, K. A., L. A. Garibaldi, S. Polasky, C. Zayas, Y. Aumeeruddy-Thomas, P. Brancalion, F. DeClerck, M. Mastrangelo, N. Nkongolo, H. Palang, L. Shannon, U. B. Shrestha and M. Verma (2019). Status and trends - nature’s contributions to people (NCP). Global assessment report of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. E. S. Brondízio, J. Settele, S. Díaz and H. Ngo. Doi: 10.5281/zenodo.3832035

Balvanera, P., A. Pfaff, A. Viña, E. García Frapolli, L. Merino, P. A. Minang, N. Nagabata, S. Hussein and A. Sidorovich (2019). Status and trends - drivers of change. Global assessment report of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. E. S. Brondízio, J. Settele, S. Díaz and H. Ngo. Doi: 10.5281/zenodo.3831881

Chapter 3 assesses the progress toward international goals for nature (e.g., the Aichi Targets under the Convention on Biological Diversity) and for sustainability (the UN Sustainable Development Goals):

Butchart, S. H. M., P. Miloslavich, B. Reyers, S. M. Subramanian, C. Adams, E. Bennett, B. Czúcz, L. Galetto, K. Galvin, V. Reyes-García, G. L. R., T. Bekele, W. Jetz, I. B. M. Kosamu, M. G. Palomo, M. Panahi, E. R. Selig, G. S. Singh, D. Tarkhnishvili, H. Xu, A. J. Lynch, M. T. H. and A. Samakov (2019). Assessing progress towards meeting major international objectives related to nature and nature’s contributions to people. Global assessment report of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. E. S. Brondízio, J. Settele, S. Díaz and H. Ngo. Doi: 10.5281/zenodo.3832052

Chapter 4 assesses a wide range of scenarios and models projecting (mostly non-transformative) changes into the future:

Shin, Y. J., A. Arneth, R. Roy Chowdhury, G. F. Midgley, P. Leadley, Y. Agyeman Boafo, Z. Basher, E. Bukvareva, A. Heinimann, A. I. Horcea-Milcu, P. Kindlmann, M. Kolb, Z. Krenova, T. Oberdorff, P. Osano, I. Palomo, R. Pichs Madruga, P. Pliscoff, C. Rondinini, O. Saito, J. Sathyapalan and T. Yue (2019). Plausible futures of nature, its contributions to people and their good quality of life. Global assessment report of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. E. S. Brondízio, J. Settele, S. Díaz and H. Ngo. Doi: 10.5281/zenodo.3832073

Chapter 5 assesses the pathways toward sustainable futures, reviewing a broad range of optimistic scenarios, and identifying the levers and leverage points for transformative changes towards sustainability:

Chan, K. M. A., J. Agard, J. Liu, A. P. D. d. Aguiar, D. Armenteras, A. K. Boedhihartono, W. W. L. Cheung, S. Hashimoto, G. C. H. Pedraza, T. Hickler, J. Jetzkowitz, M. Kok, M. Murray-Hudson, P. O'Farrell, T. Satterfield, A. K. Saysel, R. Seppelt, B. Strassburg, D. Xue, O. Selomane, L. Balint, A. Mohamed (2019). Pathways towards a Sustainable Future. Global assessment report of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. E. S. Brondízio, J. Settele, S. Díaz and H. Ngo. Doi: 10.5281/zenodo.3832099

Chapter 5 has also sparked peer-reviewed articles, including one in People and Nature. That paper, about the levers and leverage points, includes a critical reflection on what is novel, as well as a clearer and more scholarly representation of the rigorous expert deliberation process that yielded those insights. (And there, contributing authors finally get credit.)

Chan, K. M. A., D. R. Boyd, R. K. Gould, J. Jetzkowitz, J. Liu, B. Muraca, R. Naidoo, P. Olmsted, T. Satterfield, O. Selomane, G. G. Singh, R. Sumaila, H. T. Ngo, A. K. Boedhihartono, J. Agard, A. P. D. d. Aguiar, D. Armenteras, L. Balint, C. Barrington-Leigh, W. W. L. Cheung, S. Díaz, J. Driscoll, K. Esler, H. Eyster, E. J. Gregr, S. Hashimoto, G. C. H. Pedraza, T. Hickler, M. Kok, T. Lazarova, A. A. A. Mohamed, M. Murray-Hudson, P. O'Farrell, I. Palomo, A. K. Saysel, R. Seppelt, J. Settele, B. Strassburg, D. Xue and E. S. Brondízio (2020). "Levers and leverage points for pathways to sustainability." People and Nature. https://besjournals.onlinelibrary.wiley.com/doi/10.1002/pan3.10124

 

Chapter 6 assesses options, obstacles and opportunities for transformative change, focusing more narrowly than Chapter 5 on particular policy and governance tools:

Razzaque, J., I. J. Visseren-Hamakers, P. McElwee, G. M. Rusch, E. Kelemen, E. Turnhout, M. Williams, A. P. Gautam, A. Fernandez-Llamazares, I. Chan, L. Gerber, M. Islar, S. Karim, M. Lim, L. J., L. G., A. Mohammed, E. Mungatana and R. Muradian (2019). Options for Decision-makers. Global assessment report of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. E. S. Brondízio, J. Settele, S. Díaz and H. Ngo. Doi: 10.5281/zenodo.3832107


Creative Commons Licence
CHANS Lab Views by Kai Chan's lab is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Based on a work at https://chanslabviews.blogspot.com.