A new paper offers up a “top 10” list of science communication (scicomm) challenges and potential solutions – but also highlights the flaws in the list. I’m hoping it can be a starting point for a discussion that could help people address at least some of the scicomm problems they’re grappling with.
Here’s the deal: science communication can be a tricky business. It can be defined in a wide variety of ways, and includes a host of different interests that have different (and sometimes competing) goals. But while the goals may differ, many of the challenges are the same. How do we reach people? How do we get non-scientists interested? How do we get scientists involved in the process?
I take a broad view of science communication, defining it as anything that involves one person communicating with another person about science. That includes teaching, museums, art, science journalism, blogging, peer-reviewed journals, etc. And while I’m very interested in the academic study of science communication, my focus is on the nuts and bolts of practical scicomm efforts. Which efforts work? Which ones don’t? Why? I want to learn from other people’s experiences. (That’s why I started this blog in the first place.)
As a result, while the scicomm universe is a large one (depending on how you define it), I’m interested in identifying the major shared challenges that most of us face – and any best practices that can help folks address those challenges.
So I was particularly interested in reading a December 2014 paper published in the journal Science Communication, titled “What Do Science Communicators Talk About When They Talk About Science Communications? Engaging With the Engagers” – full citation below. (In the interest of brevity, I’ll refer to this paper as “the Engagement paper” from here on out.)
The Engagement paper's authors, Cormick et al. – all of whom are affiliated with the Commonwealth Scientific and Industrial Research Organisation, Australia's national science agency – wrote the paper following the Science Rewired Big Science Communication Summit, which was held in Sydney in June 2013. But to put the work in context, we need to go back just a bit further, to a national audit of science engagement that was conducted in Australia in 2011 and 2012.
That audit evaluated 411 scicomm activities and found that 60 percent of them fell into the "deficit model" category – meaning they rested on the assumption that the public would be much more supportive of science (and science-based decision-making) if people simply knew more about science. (A full report on the Australian scicomm audit was published in January 2013.) However, the authors of the Engagement paper note that the audit "also found that most science communicators actually favored participatory, critical approaches to science engagement but felt hindered by a lack of resources and organizational support for such engagement."
Thus the stage was set for the June 2013 summit, which brought together 250 science communicators from around Australia to explore the issues raised in the audit. The communicators participated in workshops to nominate the key obstacles to implementing scicomm “best practices” and develop solutions for overcoming those obstacles. The communicators were split into groups to brainstorm ideas, and then voted to determine which ideas were best. In addition to moderators, each group also included a small “brain trust” of subject matter experts.
A Top 10 List
Here are the 10 ideas that came out on top (all quotes are from the Engagement paper):
- “Undertake broad and local ‘engagement’ into better understanding communities’ needs and trust factors.”
- To ensure scicomm pros can get the data they need, “provide models and standards for evaluation methodologies and best-practice examples.”
- Research grants should include communication and outreach components (and science courses should incorporate scicomm elements).
- To ensure that citizen science projects and participants share expectations, there should be “best-practice models of citizen science that look at the impediments and solutions achieved” – and those solutions should be widely disseminated for use elsewhere.
- “Establish standards for evaluation, with well-considered tailored objectives for different audience[s].” (I’ll be honest – I’m not really clear on what this means.)
- “Establish wider networks that allow for real knowledge sharing and access to key influencers.”
- “Professional development/peer mentors/best-practice models/a national learning network” for sharing how to get “beyond tweets and blogs.”
- To incentivize scicomm activities by scientists, “research grants [should] include communications/outreach components.” (This echoes the third bullet, above.)
- “Granting bodies [should] develop ‘pilot’ grants for citizen science with [a] science mentor, and seek to publish results.”
- “Provide best-practice models for collaboration and mechanisms to bring potential collaborators together.”
Notice anything about many of the items on that list? As Cormick et al. kindly phrased it, some of the ideas "might be considered a bit too broad to be truly useful." I would argue that most of them are either out of the control of science communicators (e.g., at least three can be implemented only by granting agencies) or too vague to be useful.
For example, while I agree that it is a great idea for communicators to "establish wider networks that allow for real knowledge sharing and access to key influencers," that is much easier said than done. How, exactly, should one go about establishing these wider networks? (I've touched on this briefly in the past, and may revisit it in the future, but my basic advice is: put yourself out there, keep an eye open for opportunities, and take advantage of them when they arise. And, in general, it's a good idea to be helpful and nice to people. See? Even my advice about it is vague.)
Cormick et al. also point out that the list omits a number of things that would probably be considered "best practices."
According to the Engagement paper, the “brain trust” subject-matter experts offered some insight into how these oversights happened – many of the best ideas were simply voted down during the workshops. To quote the paper: “One moderator stated, ‘It was frustrating to see the best ideas often languishing because they were unfamiliar, or people didn’t have a lot of understanding of them.’”
The paper also includes five such suggestions that didn't make the final cut:
- “Identify and understand people’s emotional/physical/intellectual needs for science.”
- “Embed scientific knowledge into the community’s already existing systems/cultural activities.”
- “Practitioners must gain an understanding of different communities and their values, interests, and motivations (use successful examples).”
- “Use an evidence-based approach to choose communication that works.”
- “Recognize [the] iterative nature of evaluation and collaborate with relevant experts for evaluation.”
I like these ideas more than many of those that did make the top 10 list – they're fairly practical steps that make sense to me. They may not offer a step-by-step "how to" manual, but they're ideas that can be put into action. For example, "use an evidence-based approach to choose communication that works" sums up my approach to scicomm – if something works, keep doing it; if something doesn't work, try something else.
Altogether, the summit (and the Engagement paper) offers us 15 suggestions for ways to improve our science communication efforts. But what do you think?
How can these broad suggestions be turned to practical use? Are they even applicable to the problems you face as a science communicator? What would your top 10 list – of problems or solutions – look like?
And have you seen promising ideas shot down, simply because they were unfamiliar?
I’d like this post to be a starting point for a conversation in comments. Hopefully, we can learn from each other. What do you think?
Citation: Craig Cormick, Oona Nielssen, Peta Ashworth, John La Salle, and Carol Saab, "What Do Science Communicators Talk About When They Talk About Science Communications? Engaging With the Engagers," Science Communication, published online Dec. 16, 2014. DOI: 10.1177/1075547014560829