What fake news about spiders can teach us about the global spread of (mis)information

Read the full story from Cell Press.

It’s no secret that the internet and social media fuel rampant spread of (mis)information in many areas of life. Now, researchers have explored this phenomenon as it applies to news about spiders. The verdict? Don’t blindly trust anything you read online about these eight-legged arthropods — or anything else for that matter — and always consider the source.

Climate disinformation leaves lasting mark as world heats

Read the full story from the Associated Press.

Even as surveys show the public generally has become more concerned about climate change, a sizeable number of Americans have become even more distrustful of the scientific consensus.

Improving science literacy means changing science education

Introductory science classes typically require students to memorize facts, rather than teaching them the basis of scientific thinking. Maskot via Getty Images

by Zahilyn D. Roche Allred, Florida International University

To graduate with a science major, college students must complete between 40 and 60 credit hours of science coursework. That means spending around 2,500 hours in the classroom throughout their undergraduate career.

However, research has shown that despite all that effort, most college science courses give students only a fragmented understanding of fundamental scientific concepts. The typical teaching method reinforces memorization of isolated facts, marching from one textbook chapter to the next without necessarily connecting them, rather than showing students how to use that information and link those facts meaningfully.

The ability to make these connections is important beyond the classroom as well, because it’s the basis of science literacy: the ability to use scientific knowledge to accurately evaluate information and make decisions based on evidence.

As a chemistry education researcher, I have been working since 2019 with my colleague Sonia Underwood to learn more about how chemistry students integrate and apply their knowledge to other scientific disciplines.

In our most recent study, we investigated how well college students could use their chemistry knowledge to explain real-world biological phenomena. We did this by having them do activities designed to make those cross-disciplinary connections.

We found that even though most of the students had not been given similar opportunities that would prepare them to make those links, activities like these can help – if they are made part of the curriculum.

Three-dimensional learning

A large body of research shows that traditional science education, for both science majors and non-majors, doesn’t do a good job of teaching science students how to apply their scientific knowledge and explain things that they may not have learned about directly.

With that in mind, we developed a series of cross-disciplinary activities guided by a framework called “three-dimensional learning.”

In short, three-dimensional learning, known as 3DL, emphasizes that the teaching, learning and assessing of college students should involve the use of fundamental ideas within a discipline. It should also involve tools and rules that support students in making connections within and between disciplines. Finally, it should engage students in the use of their knowledge. The framework was developed on the basis of how people learn as a way to help all students gain a deep understanding of science.

We did this in collaboration with Rebecca L. Matz, an expert in science, technology, engineering and math education. Then we took these activities to the classroom.

Making scientific connections

To begin, we interviewed 28 first-year college students majoring in the sciences or engineering. All were enrolled in both introductory chemistry and biology courses. We asked them to identify connections between the content of these courses and what they believed to be the take-home messages from each course.

The students responded with extensive lists of topics, concepts and skills that they’d learned in class. Some, but not all, correctly identified the core ideas of each science. They understood that their chemistry knowledge was essential to their understanding of biology, but not that the reverse might be true as well.

For example, students talked about how their knowledge gained in their chemistry course regarding interactions – that is, attractive and repulsive forces – was important to understand how and why the chemical species that make up DNA come together.

For their biology course, on the other hand, the core idea that the students spoke of most was the structure-function relationship – how the shapes of chemical and biological species determine their functions.

Next, we designed a set of cross-disciplinary activities to guide students in using core chemistry ideas and knowledge to explain real-world biological phenomena.

The students reviewed a core chemistry idea and used that knowledge to explain a familiar chemistry scenario. Next, they applied it to explaining a biological scenario.

One activity explored the impacts of ocean acidification on seashells. Here, the students were asked to use basic chemistry ideas to explain how increasing levels of carbon dioxide in seawater are affecting shell-building marine animals such as corals, clams and oysters.

Other activities asked the students to apply chemistry knowledge to explaining osmosis – how water transfers in and out of cells in the human body – or how temperature can alter the stability of human DNA.

Overall, the students felt confident in their chemistry knowledge and could easily explain the chemistry scenarios. They had a harder time applying the same chemistry knowledge to explaining the biological scenarios.

In the ocean acidification activity, the majority of the students were able to accurately predict how an increase in carbon dioxide affects the ocean's acidity. However, they weren't always able to explain how these changes affect marine life by hampering the formation of shells.

These findings highlight that a big gap remains between what students learn in their science courses and how well prepared they are to apply that information. This problem remains despite the fact that in 2012, the National Science Foundation put out a set of three-dimensional learning guidelines to help educators make science education more effective.

However, the students in our study also reported that these activities helped them see links between the two disciplines that they wouldn’t have perceived otherwise.

So we also came away with evidence that our chemistry students, at least, would like to have the ability to gain a deeper understanding of science, and how to apply it.

Zahilyn D. Roche Allred, Postdoctoral Scholar, Department of Chemistry and Biochemistry, Florida International University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How science fuels a culture of misinformation

By Joelle Renstrom

On November 8, 2021, the American Heart Association journal Circulation published a 300-word abstract of a research paper warning that mRNA Covid vaccines caused heart inflammation in study subjects. An abstract typically summarizes and accompanies the full paper, but this one was published by itself. According to Altmetric, the abstract was picked up by 23 news outlets and shared by more than 69,000 Twitter users. On the basis of that abstract, a video on BrandNewTube, a social media outlet that circumvents YouTube’s anti-misinformation policies, pronounced Covid vaccinations “murder.” Sixteen days later, the American Heart Association added an “expression of concern,” noting that the abstract might not be reliable, and on December 21 it issued a correction that changed the title to indicate that the study did not establish cause and effect, noting that the study had neither a control group nor a statistical analysis of its results.

This incident underscores a flaw at the center of the scientific enterprise. It’s all too easy to make outsize claims that sidestep the process of peer review. No publication should carry a standalone abstract, particularly one making such a bold claim, and particularly during a pandemic. But the problem goes much deeper than that: Even scientific papers that have passed through the intended safeguards of peer review can become vectors for confusion and unsubstantiated claims.

As we’ve seen again and again over the past two years, Covid-19 hasn’t been just a viral pandemic, but also a pandemic of disinformation—what the World Health Organization calls an “infodemic.” Many scientists blame social media for the proliferation of Covid-related falsehoods, from the suggestion that Covid could be treated by drinking disinfectants to the insistence that masks don’t help prevent transmission. Facebook, Twitter, TikTok, and other platforms have indeed propagated dangerous misinformation. However, social media is a symptom of the problem more than the cause. Misinformation and disinformation often start with scientists themselves.

Institutions incentivize scientists going for tenure to focus on quantity rather than quality of publications and to exaggerate study results beyond the bounds of rigorous analysis. Scientific journals themselves can boost their revenue when they are more widely read. Thus, some journals may pounce on submissions with juicy titles that will attract readers. At the same time, many scientific articles contain more jargon than ever, which encourages misinterpretation, political spin, and a declining public trust in the scientific process. Addressing scientific misinformation requires top-down changes to promote accuracy and accessibility, starting with scientists and the scientific publishing process itself.

Universities want their scientists to win prestigious grants and funding, and to do that, the research has to be flashy and boundary-pushing.

The history of the scientific journal goes back hundreds of years. In 1731 the Medical Society of Edinburgh (a forerunner of the Royal Society of Edinburgh) launched the first fully peer-reviewed publication, Medical Essays and Observations, initiating what has become the gold standard of credibility: vetting by experts. In the traditional model, scientists conduct original research and write up their findings and methodology, including data, tables, images, and any other relevant information. They submit their article to a journal, whose editors send it to other experts in the field for review. Those peer reviewers evaluate the scientific soundness of the study and advise the journal editors whether to accept it. Editors may also ask authors to revise and resubmit, a process that takes anywhere from weeks to months.

By 2010, most traditional scientific journals also had digital counterparts. The “open access” movement has made roughly one-third of those articles freely available to the public. Meanwhile, the number of scientific journals and the number of published papers increased dramatically, and most academic institutions established themselves on social media to help promote the work of their researchers.

In this new world, scientific journals and scientists compete for clicks just like mainstream publications. The articles that are downloaded, read, and shared the most earn high Altmetric Attention Scores and boost their journals’ “impact factors.” Studies show that people are more likely to read and share articles with short, positively worded, or emotion-invoking titles.

The rating system can’t help but affect scientists’ publications and their careers. “Many [scientists] are required to achieve certain metrics in order to progress their career, obtain funding, or even keep their jobs,” according to Ph.D. candidate and researcher Benjamin Freeling of the University of Adelaide, who was lead author of a study on the topic, published in the Proceedings of the National Academy of Sciences in 2019. “There’s less room for a scientist to work on a scientific question of immense importance to humanity if that question won’t lead to a particular quantity of publications and citations,” he wrote in an email to OpenMind. Valuing exposure above the scientific process incentivizes sloppy and unethical practices and exemplifies Goodhart’s Law, named for the British economist Charles Goodhart: “When a measure becomes a target, it ceases to be a good measure.”

University of Washington data scientist Jevin West, who studies the spread of misinformation, says that university public relations offices responsible for press releases and other media interactions “also play a role in the hype machine.” Universities want their scientists to win prestigious grants and funding, and to do that, “the research has to be flashy and boundary-pushing.” PR offices may add to that flash by exaggerating the certainty or implications of findings in press releases, which are routinely published almost verbatim in media outlets.

Many reporters don’t distinguish between unvetted preprints and formally published papers; to casual web sleuths, the two can appear nearly the same.

The demand for headline-worthy publications has led to a surge in research studies that can’t be replicated. A reproducibility project by the Center for Open Science found that only 26 percent of the top cancer studies published between 2010 and 2021 could be replicated, often because the original papers lacked details about data and methodology. Compounding the problem, researchers cite these nonreplicable studies more often than those that can be replicated, perhaps because they tend to be more sensational and therefore get more clicks.

Most readers, including journalists, can’t discern the quality of the science. Yet it’s “taken forever for the publishing community to provide banners on the original papers” to signal they “might not reach the conclusion readers think,” West says. Tentative or unsubstantiated claims can have profound social impacts. West references a one-paragraph letter written by two physicians and published in the New England Journal of Medicine in 1980, which he regards as largely responsible for the current opioid crisis. The authors asserted that “addiction is rare in patients treated with narcotics,” but they provided no supporting evidence.

It took 37 years before the New England Journal of Medicine added an editorial note warning that the letter had been “heavily and uncritically cited,” but neither a warning nor a retraction can put the misinformation genie back in the bottle, especially given the letter’s decades-long influence on narcotics prescriptions. What’s more, readers can still access misleading studies, and researchers continue to cite them even after they’ve been retracted, either because they don’t know about the retraction or because they don’t care.

The rise of preprints, scientific papers that have yet to be peer-reviewed, has generated further debate about the proper way to communicate scientific research. Some people celebrate preprints as a way to invite advance feedback and disseminate findings faster. Others argue that so much unvetted material adds to the misinformation glut.

Preprints accounted for roughly 25 percent of Covid-19–related studies published in 2020. Of those preprints, 29 percent were cited at least once in mainstream news articles. Take the infamous example of ivermectin, a drug developed for treating parasitic infections. A preprint touting its efficacy in treating Covid-19 patients appeared on the Social Science Research Network (SSRN) server in April 2021, prompting widespread interest in and approval of the drug, including by governments in Bolivia, Brazil, and Peru. As people began taking ivermectin to treat or prevent Covid-19, scientists expressed concern about the data used in the preprint—data supplied by Surgisphere, a health-care analytics company whose unreliable data had previously led to retractions of papers in The Lancet and The New England Journal of Medicine. The paper was removed from SSRN, and shortly thereafter Surgisphere shut down its website and disappeared.

The now-removed preprint paper inserted ivermectin directly into the political spin machine. What’s more, the hype and drama surrounding the drug obscured the critical uncertainty about whether it could actually treat or prevent Covid-19. (A subsequent study suggests that if taken soon after diagnosis, ivermectin can help prevent serious illness.) It also distracted from the important fact that the efficacy of any drug depends on timing, dosage, and other health and safety factors that people shouldn’t try to determine on their own.

Approximately 70 percent of preprint literature is eventually peer-reviewed and published, but what about all the rest, which never become anything more than preprints? Many reporters don’t distinguish between unvetted preprints and formally published papers; to casual web sleuths, the two can appear nearly the same. When unsubstantiated findings guide personal behaviors and policies, even a small number of faulty studies can have significant impact. A team of international researchers found that when first-draft results are shared widely, “it can be very difficult to ‘unlearn’ what we thought was true”—even when the drafts are amended later on.

Unlearning falsehoods is especially challenging given today’s oversaturated news cycle. Online news aggregators syndicate local and national publications and present readers with an endless barrage of information via notifications and emails. In this context, it’s hardly surprising that readers tend to click on splashy headlines and articles that confirm their preexisting beliefs. “Science is embedded in an information ecosystem that encourages clickbait and facilitates confirmation bias,” West says.

And when people try to explore the research behind the headlines, they run into barriers: Scientific articles are becoming increasingly hard to understand as researchers pack them with more jargon than ever. A group of Swedish researchers who evaluated scientific abstracts written between 1881 and 2015 found a steady decrease in readability over time. By 2015, more than 20 percent of scientific abstracts required a post-college reading level. A big issue is the heavy use of acronyms; as of 2019, 73 percent of scientific abstracts contained them. Scientists themselves sometimes avoid citing papers rife with jargon because not even they can confidently parse it. We’ve all heard of “legalese,” but “science-ese” can be similarly inscrutable and alienating to readers.
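To make the kind of readability scoring such studies rely on concrete, here is a minimal Python sketch of the Flesch-Kincaid grade-level formula, a widely used readability metric (not necessarily the exact measure the Swedish team applied), with a deliberately crude vowel-group syllable counter; the example sentences are invented for illustration:

```python
import re

def count_syllables(word):
    # Crude heuristic: each run of consecutive vowels counts as one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    # Flesch-Kincaid grade level:
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    # The result roughly corresponds to a U.S. school grade.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

plain = "The cat sat on the mat. The dog ran to the park."
jargon = ("Immunohistochemical characterization demonstrated "
          "heterogeneous subcellular localization.")
```

Under this formula, scoring a short, jargon-dense abstract against a plain-language summary makes the gap vivid: a grade above roughly 13–14 corresponds to the “post-college reading level” the researchers describe.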

10 years ago, the debate was around whether scientists should spend their time engaging with the public. Now the question is how to do it.

Addressing the science-driven misinformation problem will require a “profound restructuring of how the science ‘industry’ works,” Benjamin Freeling says. One recommendation is for journals to help readers see the preprint as a work in progress, not as the end result. Critical care physician Michael Mullins, editor-in-chief of Toxicology Communications, referred to a 2020 paper about the effects of hydroxychloroquine on Covid patients that appeared on the preprint server medRxiv and was published in the International Journal of Antimicrobial Agents that same day without undergoing peer review. Many people (including the president of the United States) regarded the study as complete, underscoring the danger of scientists using preprints to circumvent peer review.

One change that statistician Daniel Lakens of Eindhoven University of Technology advocates is to implement a system of “registered reports,” in which a study’s design, methodology, and statistical plan are peer-reviewed and accepted before data are collected, regardless of what the data ultimately show. These reports would be pre-accepted for publication, with the paper appearing once the final data and analysis are complete. Registered reports would combat the tendency to publish papers with the greatest potential for publicity and clicks, because publication would revolve not around the outcome but around the process. In 2020 the journal Royal Society Open Science initiated a rapid Registered Report system that allows for ongoing documented revisions. Many other journals have followed suit, attempting to balance the need for a faster review process with the need for accuracy. If publication depended on process rather than outcome or the potential for clicks, scientists could focus on producing better science.

As for the academic public relations machine, Jevin West believes scientists should be held accountable for the text in university press releases. Carl Bergstrom, a biologist at the University of Washington who is active in public outreach, suggests that scientists sign off on press releases before they’re sent out, putting those releases through their own form of scientific review.

Scientists aren’t responsible for the critical thinking skills of the average reader or the revenue models of journals, but they should recognize how they contribute to the spread of misinformation. To address the jargon problem, scientists could use fewer acronyms and include “lay summaries,” also known as plain-language summaries. Some publications now require these, but they could go further by requiring glossaries of technical terms and acronyms, jargon cheat sheets, or other types of decoders necessary for understanding a study, especially for open-access and preprint articles. Freeling’s advice is more blunt: “Try better writing.”

Scientists can also communicate more effectively with the public by harnessing social media. Freshwater ecologist Lauren Kuehne, who devotes much of her work to science communication, advocates informative blog posts, Twitter threads, TikTok videos, and public talks to build relationships. But open communication comes with its own issues, especially balancing a desire for influence with trustworthiness. Organizations such as the American Association for the Advancement of Science (AAAS) offer workshops and communication tool kits on effective public science communication, but scientists have to pursue that information on their own. The good news, says Kuehne, is that “10 years ago, the debate was around whether scientists should spend their time engaging with the public,” whereas now the question isn’t “whether it’s important, but how to do it.”

Direct public engagement is the best way to help people understand that even the most canonized scientific facts once were subject to debate. Making the scientific process more transparent will expose flaws and may even beget controversy, but ultimately it will allow scientists to strengthen error-correcting mechanisms as well as build public trust.

That science works despite the problems noted here is, as Bergstrom puts it, “amazing.” But the ability of science to transcend flaws in the system shouldn’t be amazing—it should be standard. Let’s save our amazement for the discoveries that emerge because of the scientific enterprise, not in spite of it.

This story originally appeared on OpenMind, a digital magazine tackling science controversies and deceptions.

Deny, Deceive, Delay: Documenting and Responding to Climate Disinformation at COP26 and Beyond

A new report released Thursday by the Institute for Strategic Dialogue (ISD) and the 20+ member coalition Climate Action Against Disinformation (CAAD) documents the extent and diverse nature of climate disinformation during last year’s international climate conference in Glasgow, COP26. The report, the most comprehensive of its type to date, offers seven key policy recommendations to stop disinformation from jeopardizing future climate action and policy-making.

Across social media, high-traction disinformation was found to originate primarily from a select number of pundits and political actors, who merge climate and “Culture Wars” narratives to violate multiple content moderation policies in tandem. Twitter carried the most false content by volume, while Facebook’s algorithm drove greater exposure to climate disinformation than its own Climate Science Center, and its fact-checking policies remain woefully under-enforced. 

Based on the narratives and tactics identified by CAAD’s bespoke monitoring system, the coalition recommends that policymakers formally recognize the threat, adopt a universal definition of climate disinformation and limit loopholes for traditional media outlets in tech regulation such as the EU’s Digital Services Act – all of which will help mitigate the risk that false or misleading content hinders climate negotiations and legislative agendas at this critical juncture.

How high-profile scientists felt tricked by group denying climate change

Read the full story from the BBC.

A dozen scientists, politicians, and campaigners say they have been tricked into participating in online events promoting climate-change denial.

The events were organised by the Creative Society, an international activist group that denies global warming is being caused by human activity.

Twitter bans climate change propaganda ads as deniers target platforms

Read the full story in the Washington Post.

Twitter is banning advertisements that promote climate change denial in an effort to curb the reach of groups seeking to downplay the extent of the environmental crisis.

Under the new policy, advertisements that contradict the “scientific consensus” on climate change will be prohibited along with other types of banned ads, such as campaigns that contain violence, profanity or personal attacks. Twitter will rely on reports from the Intergovernmental Panel on Climate Change, a United Nations body, to inform its decisions about which advertisements break its rules, according to the company.

Misinformation is derailing renewable energy projects across the United States

NPR reports that opponents of renewable energy are successfully stalling or rejecting projects across the country. Researchers say that in many groups, misinformation plays a role in slowing or derailing projects.

Columbia Journalism Review publishes two-part series on decline of local news reporting, why it matters, and how to improve it

The Columbia Journalism Review recently published a two-part series by Steve Waldman on the decline of local news reporting. It’s worth a look because, as the author points out in part one, “academic studies show that the local news collapse has likely led to lower voter turnout and bond ratings, and more corruption, waste, air pollution, and corporate crime.”

Part two of the series explores how to create a better local news system, including better service for communities of color and rural areas, and looks at how to improve the business model for local news.

Facebook failed to label over 50% of posts from top climate deniers

Read the full story from Treehugger.

How seriously does Facebook take its climate commitments? 

The company, now known as Meta, has reached net-zero greenhouse gas emissions for its global operations and says its supply chain will be net-zero by 2030. Yet a new report from watchdog group the Center for Countering Digital Hate (CCDH) finds that its platforms are still emitting unfiltered climate denial. 

“At a very simple level, Facebook is falling short of its promises to label and tackle climate disinformation,” CCDH Chief Executive Officer Imran Ahmed tells Treehugger.