Rxivist combines preprints from bioRxiv with data from Twitter to help you find the papers being discussed in your field. Currently indexing 52,258 bioRxiv papers from 242,323 authors.
Most downloaded bioRxiv papers, all time
in category scientific communication and education
345 results found.
68,086 downloads · scientific communication and education
Good scientific writing is essential to career development and to the progress of science. A well-structured manuscript allows readers and reviewers to get excited about the subject matter, to understand and verify the paper's contributions, and to integrate these contributions into a broader context. However, many scientists struggle with producing high-quality manuscripts and typically get little training in paper writing. Focusing on how readers consume information, we present a set of 10 simple rules to help you get across the main idea of your paper. These rules are designed to make your paper more influential and the process of writing more efficient and pleasurable.
26,521 downloads · scientific communication and education
Although the Journal Impact Factor (JIF) is widely acknowledged to be a poor indicator of the quality of individual papers, it is used routinely to evaluate research and researchers. Here, we present a simple method for generating the citation distributions that underlie JIFs. Application of this straightforward protocol reveals the full extent of the skew of these distributions and the variation in citations received by published papers that is characteristic of all scientific journals. Although there are differences among journals across the spectrum of JIFs, the citation distributions overlap extensively, demonstrating that the citation performance of individual papers cannot be inferred from the JIF. We propose that this methodology be adopted by all journals as a move to greater transparency, one that should help to refocus attention on individual pieces of work and counter the inappropriate usage of JIFs during the process of research assessment.
18,107 downloads · scientific communication and education
We wish to answer this question: If you observe a “significant” P value after doing a single unbiased experiment, what is the probability that your result is a false positive? The weak evidence provided by P values between 0.01 and 0.05 is explored by exact calculations of false positive risks. When you observe P = 0.05, the odds in favour of there being a real effect (given by the likelihood ratio) are about 3:1. This is far weaker evidence than the odds of 19:1 that might, wrongly, be inferred from the P value. And if you want to limit the false positive risk to 5%, you would have to assume that you were 87% sure that there was a real effect before the experiment was done. If you observe P = 0.001 in a well-powered experiment, it gives a likelihood ratio of almost 100:1 in favour of there being a real effect. That would usually be regarded as conclusive, but the false positive risk would still be 8% if the prior probability of a real effect was only 0.1. And, in this case, if you wanted to achieve a false positive risk of 5% you would need to observe P = 0.00045. It is recommended that the terms “significant” and “non-significant” never be used. Rather, P values should be supplemented by specifying the prior probability that would be needed to produce a specified (e.g. 5%) false positive risk. It may also be helpful to specify the minimum false positive risk associated with the observed P value. Despite decades of warnings, many areas of science still insist on labelling a result of P < 0.05 as “statistically significant”. This practice must account for a substantial part of the lack of reproducibility in some areas of science. And this is before you get to the many other well-known problems, like multiple comparisons, lack of randomisation and P-hacking. Science is endangered by statistical misunderstanding, and by university presidents and research funders who impose perverse incentives on scientists.
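The abstract's false positive risk reasoning can be checked with a few lines of arithmetic. The sketch below uses the simpler "p-less-than" formulation, FPR = α(1−π) / (α(1−π) + power·π), not the exact "p-equals" likelihood-ratio calculation the abstract describes, so the numbers are illustrative rather than the paper's; the function name and the assumed 80% power are our own.

```python
def false_positive_risk(alpha, power, prior):
    """Probability that a 'significant' result is a false positive,
    using the simple 'p-less-than' formulation:
    FPR = alpha*(1 - prior) / (alpha*(1 - prior) + power*prior)."""
    false_positives = alpha * (1.0 - prior)   # significant results when there is no effect
    true_positives = power * prior            # significant results when there is an effect
    return false_positives / (false_positives + true_positives)

# With a 50:50 prior and 80% power, p < 0.05 carries modest risk:
print(round(false_positive_risk(0.05, 0.8, 0.5), 3))   # 0.059
# With an implausible hypothesis (prior = 0.1), most "discoveries" are false:
print(round(false_positive_risk(0.05, 0.8, 0.1), 3))   # 0.36
```

Even this conservative version shows the abstract's core point: for unlikely hypotheses, a "significant" P value provides much weaker evidence than the 5% threshold suggests.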
9,990 downloads · scientific communication and education
Scientific publications enable results and ideas to be transmitted throughout the scientific community. The number and type of journal publications also have become the primary criteria used in evaluating career advancement. Our analysis suggests that publication practices have changed considerably in the life sciences over the past thirty years. More experimental data is now required for publication, and the average time required for graduate students to publish their first paper has increased and is approaching the desirable duration of Ph.D. training. Since publication is generally a requirement for career progression, schemes to reduce the time of graduate student and postdoctoral training may be difficult to implement without also considering new mechanisms for accelerating communication of their work. The increasing time to publication also delays potential catalytic effects that ensue when many scientists have access to new information. The time has come for life scientists, funding agencies, and publishers to discuss how to communicate new findings in a way that best serves the interests of the public and the scientific community.
9,224 downloads · scientific communication and education
Despite their recognized limitations, bibliometric assessments of scientific productivity have been widely adopted. We describe here an improved method that makes novel use of the co-citation network of each article to field-normalize the number of citations it has received. The resulting Relative Citation Ratio is article-level and field-independent, and provides an alternative to the invalid practice of using Journal Impact Factors to identify influential papers. To illustrate one application of our method, we analyzed 88,835 articles published between 2003 and 2010, and found that the National Institutes of Health awardees who authored those papers occupy relatively stable positions of influence across all disciplines. We demonstrate that the values generated by this method strongly correlate with the opinions of subject matter experts in biomedical research, and suggest that the same approach should be generally applicable to articles published in all areas of science. A beta version of iCite, our web tool for calculating Relative Citation Ratios of articles listed in PubMed, is available at https://icite.od.nih.gov.
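The iCite tool mentioned above can also be queried programmatically. A minimal sketch of building such a request is shown below; the `/api/pubs` endpoint path and `pmids` parameter are assumptions based on the public service, not details taken from the abstract, so check the iCite documentation before relying on them.

```python
from urllib.parse import urlencode

# Assumed endpoint for iCite's JSON API (verify against the iCite docs).
ICITE_API = "https://icite.od.nih.gov/api/pubs"

def icite_query_url(pmids):
    """Build a query URL for fetching citation metrics (including the
    Relative Citation Ratio) for a batch of PubMed IDs."""
    return ICITE_API + "?" + urlencode({"pmids": ",".join(str(p) for p in pmids)})

print(icite_query_url([23456789, 34567890]))
```

Fetching that URL with any HTTP client would return JSON records per article; the Relative Citation Ratio is expected to appear as a field in each record.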
8,967 downloads · scientific communication and education
The data in this report summarise the responses gathered from 365 principal investigators of academic laboratories who started their independent positions in the UK within the six years up to 2018. We find that too many new investigators express frustration and little optimism for the future. These data also reveal that many of these individuals lack the support required to make a successful transition to independence, and that simple measures could be put in place by both funders and universities to better support these early career researchers. We use these data to make recommendations both for good practice and for changes to policy that would significantly improve conditions for those currently finding independence challenging. We find that some new investigators face significant obstacles when building momentum and hiring a research team, in particular in gaining access to PhD students. We also find that significant gender differences persist in some important areas, such as starting salaries, which cannot be explained by seniority. Our data also underline the importance of support networks, within and outside the department, and the positive influence of good mentorship through this difficult career stage.
7,477 downloads · scientific communication and education
Clarity and accuracy of reporting are fundamental to the scientific process. The understandability of written language can be estimated using readability formulae. Here, in a corpus consisting of 707,452 scientific abstracts published between 1881 and 2015 from 122 influential biomedical journals, we show that the readability of science is steadily decreasing. Further, we demonstrate that this trend is indicative of a growing usage of general scientific jargon. These results are concerning for scientists and for the wider public, as they impact both the reproducibility and accessibility of research findings.
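Readability formulae of the kind used in such analyses fit in a few lines. Below is a minimal sketch of the Flesch Reading Ease score, one common formula (whether it matches the formulae this particular study used is an assumption), with a crude vowel-group syllable counter:

```python
import re

def count_syllables(word):
    # Crude heuristic: one syllable per group of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    # Higher scores mean easier text.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

plain = "We read the cells. The cells grew fast."
jargon = "Quantitative transcriptomic characterization demonstrated accelerated cellular proliferation."
print(flesch_reading_ease(plain) > flesch_reading_ease(jargon))  # True: jargon scores far lower
```

Long sentences and polysyllabic jargon both drag the score down, which is exactly the trend the abstract reports across 134 years of abstracts.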
6,885 downloads · scientific communication and education
Inaccurate data in scientific papers can result from honest error or intentional falsification. This study attempted to determine the percentage of published papers containing inappropriate image duplication, a specific type of inaccurate data. The images from a total of 20,621 papers in 40 scientific journals from 1995 to 2014 were visually screened. Overall, 3.8% of published papers contained problematic figures, with at least half exhibiting features suggestive of deliberate manipulation. The prevalence of papers with problematic images rose markedly during the past decade. Additional papers written by authors of papers with problematic images had an increased likelihood of containing problematic images as well. As this analysis focused only on one type of data, it is likely that the actual prevalence of inaccurate data in the published literature is higher. The marked variation in the frequency of problematic images among journals suggests that journal practices, such as pre-publication image screening, influence the quality of the scientific literature.
6,414 downloads · scientific communication and education
Background: Evidence-based clinical practice relies on unbiased reporting of negative results. Meta-analysis of drug safety and efficacy across many clinical trials is difficult given the unconstrained nature of reasons that are provided to ClinicalTrials.gov to explain clinical trial terminations. Methods and Findings: We scanned all trials in ClinicalTrials.gov marked with the “terminated” status (N=3122), meaning the trial had been stopped before the scheduled end date. Under the current reporting framework, any number of reasons may be given for termination, and these need not conform to a controlled vocabulary. Here we develop a controlled vocabulary for trial termination, and map each terminated trial to as many as three vocabulary terms. Mapping to this “ontology of termination” allows further analysis and conclusions. First, we identify the subset of terminated trials that ended citing safety concerns (6.2%) or failure to establish efficacy (10.8%), and we were further able to stratify these rates across trials of different phases. Second, we examine termination reasons where a stricter data model could have preserved more evidentiary value, either because the data model was misused (7.6%) or because the given reason left unclear whether the decision to terminate was based on analysis of the data (74.9%, with 20.4% mentioning a decision-maker that may have had access to the data). Third, we show that imposing a controlled vocabulary of reasons for termination would avoid ambiguity and improve the evidentiary value of clinical trials. Conclusions: We encourage wider use of an “ontology of termination” and propose four questions that should be posed on trial termination. These simple steps would promote transparency and enable ready access to negative trial results for meta-analysis.
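The mapping step described above — free-text termination reasons matched to up to three controlled-vocabulary terms — could be sketched as simple keyword matching. The vocabulary terms and trigger keywords below are hypothetical stand-ins, not the paper's actual ontology:

```python
# Hypothetical controlled vocabulary: term -> trigger keywords (not the paper's ontology).
VOCABULARY = {
    "safety": ["safety", "adverse", "toxicity"],
    "efficacy": ["efficacy", "futility", "no benefit"],
    "enrollment": ["enrollment", "accrual", "recruit"],
    "funding": ["funding", "sponsor", "budget"],
}

def map_termination_reason(free_text, max_terms=3):
    """Map a free-text termination reason to at most `max_terms` vocabulary terms."""
    text = free_text.lower()
    hits = [term for term, keywords in VOCABULARY.items()
            if any(kw in text for kw in keywords)]
    return hits[:max_terms]

print(map_termination_reason("Terminated due to slow accrual and loss of sponsor funding"))
# ['enrollment', 'funding']
```

Once every terminated trial carries structured terms like these, the stratified rates the authors report (e.g. safety vs. efficacy terminations by phase) become straightforward group-by queries.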
5,709 downloads · scientific communication and education
Functional neuroimaging techniques have transformed our ability to probe the neurobiological basis of behaviour and are increasingly being applied by the wider neuroscience community. However, concerns have recently been raised that the conclusions drawn from some human neuroimaging studies are either spurious or not generalizable. Problems such as low statistical power, flexibility in data analysis, software errors, and lack of direct replication apply to many fields, but perhaps particularly to fMRI. Here we discuss these problems, outline current and suggested best practices, and describe how we think the field should evolve to produce the most meaningful answers to neuroscientific questions.
5,491 downloads · scientific communication and education
Background: Previous research shows that men often receive more research funding than women, but does not provide empirical evidence as to why this occurs. In 2014, the Canadian Institutes of Health Research (CIHR) created a natural experiment by dividing all investigator-initiated funding into two new grant programs: one with and one without an explicit review focus on the caliber of the principal investigator. Methods: We analyzed application success among 23,918 grant applications from 7,093 unique principal investigators in a 5-year natural experiment across all investigator-initiated CIHR grant programs in 2011-2016. We used Generalized Estimating Equations to account for multiple applications by the same applicant and an interaction term between each principal investigator's self-reported sex and grant programs to compare success rates between male and female applicants under different review criteria. Results: The overall grant success rate across all competitions was 15.8%. After adjusting for age and research domain, the predicted probability of funding success in traditional programs was 0.9 percentage points higher for male than for female principal investigators (OR 0.934, 95% CI 0.854-1.022). In the new program focused on the proposed science, the gap was 0.9 percentage points in favour of male principal investigators (OR 0.998, 95% CI 0.794-1.229). In the new program with an explicit review focus on the caliber of the principal investigator, the gap was 4.0 percentage points in favour of male principal investigators (OR 0.705, 95% CI 0.519-0.960). Interpretation: This study suggests gender gaps in grant funding are attributable to less favourable assessments of women as principal investigators, not differences in assessments of the quality of science led by women. We propose ways for funders to avoid allowing gender bias to influence research funding. Funding: This study was unfunded.
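The relationship between the reported odds ratios and percentage-point gaps can be checked with simple arithmetic. The sketch below converts an odds ratio into a probability gap at a given baseline success rate; using the reported overall rate of 15.8% as the male baseline is our own simplifying assumption (the paper's figures are adjusted for age and research domain), so this is a consistency check, not a reproduction.

```python
def probability_gap(baseline_rate, odds_ratio):
    """Percentage-point gap implied by an odds ratio at a given baseline rate."""
    base_odds = baseline_rate / (1.0 - baseline_rate)   # convert probability to odds
    other_odds = odds_ratio * base_odds                 # apply the odds ratio
    other_rate = other_odds / (1.0 + other_odds)        # convert back to probability
    return baseline_rate - other_rate

# An OR of 0.705 at a 15.8% baseline implies a gap of roughly 4 percentage points:
print(round(100 * probability_gap(0.158, 0.705), 1))   # 4.1
```

This illustrates why a seemingly modest odds ratio of 0.705 corresponds to the ~4-point success gap the abstract reports for the investigator-focused program.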
5,427 downloads · scientific communication and education
This article presents a practical roadmap for scholarly data repositories to implement data citation in accordance with the Joint Declaration of Data Citation Principles, a synopsis and harmonization of the recommendations of major science policy bodies. The roadmap was developed by the Repositories Expert Group, as part of the Data Citation Implementation Pilot (DCIP) project, an initiative of FORCE11.org and the NIH BioCADDIE (https://biocaddie.org) program. The roadmap makes 11 specific recommendations, grouped into three phases of implementation: a) required steps needed to support the Joint Declaration of Data Citation Principles, b) recommended steps that facilitate article/data publication workflows, and c) optional steps that further improve data citation support provided by data repositories.
5,350 downloads · scientific communication and education
Researchers in the life sciences are posting work to preprint servers at an unprecedented and increasing rate, sharing papers online before (or instead of) publication in peer-reviewed journals. Though the increasing acceptance of preprints is driving policy changes for journals and funders, there is little information about their usage. Here, we collected and analyzed data on all 37,648 preprints uploaded to bioRxiv.org, the largest biology-focused preprint server, in its first five years. We find preprints are being downloaded more than ever before (1.1 million tallied in October 2018 alone) and that the rate of preprints being posted has increased to a recent high of 2,100 per month. We also find that two-thirds of preprints posted before 2017 were later published in peer-reviewed journals, and find a relationship between journal impact factor and preprint downloads. Lastly, we developed Rxivist.org, a web application providing multiple ways of interacting with preprint metadata.
4,557 downloads · scientific communication and education
The robustness of scholarly peer review has been challenged by evidence of disparities in publication outcomes based on author gender and nationality. To address this, we examined the peer review outcomes of 23,876 initial submissions and 7,192 full submissions that were submitted to the biosciences journal eLife between 2012 and 2017. Women and authors from nations outside of North America and Europe were underrepresented both as gatekeepers (editors and peer reviewers) and authors. We found evidence of a homophilic relationship between the demographics of the gatekeepers and authors in determining the outcome of peer review; that is, gatekeepers favored manuscripts from authors of the same gender and from the same country. The acceptance rate for manuscripts with male last authors was higher than for female last authors, and this gender inequity was greatest when the team of reviewers was all male; mixed-gender gatekeeper teams led to more equitable peer review outcomes. Homophily between the country affiliations of the gatekeeper and the corresponding author also led to higher acceptance rates for many countries. We conclude with a discussion of mechanisms that could contribute to this effect, directions for future research, and policy implications. Code and anonymized data have been made available at https://github.com/murrayds/elife-analysis
4,053 downloads · scientific communication and education
The present study analyzed 960 papers published in Molecular and Cellular Biology (MCB) from 2009 to 2016 and found 59 (6.1%) to contain inappropriately duplicated images. The 59 instances of inappropriate image duplication led to 42 corrections, 5 retractions and 12 instances in which no action was taken. Our experience suggests that the majority of inappropriate image duplications result from errors during figure preparation that can be remedied by correction. Nevertheless, ~10% of papers with inappropriate image duplications in MCB were retracted. If this proportion is representative, then as many as 35,000 papers in the literature are candidates for retraction due to image duplication. The resolution of inappropriate image duplication concerns after publication required an average of 6 hours of journal staff time per published paper. MCB instituted a pilot program to screen images of accepted papers prior to publication, which identified 12 of 83 manuscripts (14.5%) with image concerns in two months. The screening and correction of papers before publication required an average of 30 minutes of staff time per problematic paper. Image screening can identify papers with problematic images prior to publication, reducing post-publication problems and requiring significantly less staff time than the correction of problems after publication.
3,805 downloads · scientific communication and education
There has been increasing concern that most published medical findings are false. But what does it mean to be false? Here we describe the range of definitions of false discoveries in the scientific literature. We summarize the philosophical, statistical, and experimental evidence for each type of false discovery. We discuss common problems underlying scientific and data-analytic practices and point to tools and behaviors that can be implemented to reduce the problems with published scientific results.
3,787 downloads · scientific communication and education
Learning to write a scientific manuscript is one of the most important and rewarding scientific training experiences, yet most young scientists only embark on this experience relatively late in graduate school, after gathering sufficient data in the lab. However, familiarity with the process of writing a scientific manuscript and receiving peer reviews often leads to a more focused and driven experimental approach. To jump-start this training, we developed a protocol for teaching manuscript writing and reviewing in the classroom, appropriate for new graduate or upper-level undergraduate students of developmental biology. First, students are provided one of four cartoon data sets focused on genetic models of animal development. Students are instructed to use their creativity to convert evidence into argument, and then to integrate their interpretations into a manuscript, including an illustrated, mechanistic model figure. After student manuscripts are submitted, manuscripts are redacted and distributed to classmates for peer review. Here, we present our cartoon datasets, homework instructions, and grading rubrics as a new resource for the scientific community. We also describe methods for developing new datasets so that instructors can adapt this activity to other disciplines. Our data-driven manuscript writing exercise, as well as the formative and summative assessments resulting from the peer review, enables students to learn fundamental concepts in developmental genetics. In addition, students practice essential skills of scientific communication, including arguing from evidence, developing and testing models, the unique conventions of scientific writing, and the joys of scientific storytelling.
3,504 downloads · scientific communication and education
Despite the growth of Open Access, illegally circumventing paywalls to access scholarly publications is becoming a more mainstream phenomenon. The web service Sci-Hub is among the biggest facilitators of this, offering free access to around 62 million publications. So far, little is known about how and why its users access publications through Sci-Hub. By utilizing the recently released corpus of Sci-Hub and comparing it to data on ~28 million downloads made through the service, this study tries to address some of these questions. The comparative analysis shows that both the usage and the complete corpus are largely made up of recently published articles, with users disproportionately favoring newer articles and 35% of downloaded articles having been published after 2013. These results hint that embargo periods before publications become Open Access are frequently circumvented using Guerilla Open Access approaches like Sci-Hub. At the journal level, the downloads show a bias towards some scholarly disciplines, especially chemistry, suggesting increased barriers to access in these fields. Comparing use and corpus at the publisher level, it becomes clear that only 11% of publishers are highly requested relative to the baseline frequency, while 45% of all publishers are accessed significantly less than expected. Despite this, the oligopoly of publishers is even more pronounced at the level of content consumption, with 80% of all downloads coming from only 9 publishers. All of this suggests that Sci-Hub is used by different populations for a number of different reasons, and that there is still a lack of access to the published scientific record. Further analysis of these openly available data resources will undoubtedly be valuable for the investigation of academic publishing.
3,461 downloads · scientific communication and education
The postdoctoral community is an essential component of the academic and scientific workforce. As economic and political pressures impacting these enterprises continue to change, the postdoc experience has evolved from short, focused periods of training into often multidisciplinary, extended positions with less clear outcomes. As efforts are underway to amend U.S. federally funded research policies, the paucity of postdoc data has made evaluating the impact of policy recommendations challenging. Here we present comprehensive survey results from over 7,600 postdocs based at 351 academic and non-academic U.S. institutions in 2016. In addition to demographic and salary information, we present multivariate analyses on the factors that influence postdoc career plans and mentorship satisfaction in this population. We further analyze gender dynamics and expose wage disparities and career choice differences. Academic research positions remain the predominant career choice of postdocs in the U.S., although unequally between postdocs based on gender and residency status. Receiving mentorship training during the postdoctoral period has a large, positive effect on postdoc mentorship satisfaction. Strikingly, the quality of and satisfaction with postdoc mentorship appears to also heavily influence career choice. The data presented here are the most comprehensive data on the U.S. postdoc population to date. These results provide an evidence basis for informing government and institutional policies, and establish a critical cornerstone for quantifying the effects of future legislation aimed at the academic and scientific workforce.
3,116 downloads · scientific communication and education
As academic careers become more competitive, junior scientists need to understand the value that mentorship brings to their success in academia. Previous research has found that, unsurprisingly, successful mentors tend to train successful students. But what characteristics of this relationship predict success, and how? We analyzed an open-access database of about 20,000 researchers who have undergone both graduate and postdoctoral training, compiled across several fields of biomedical science. Our results show that postdoctoral mentors were more instrumental to trainees' success compared to graduate mentors. A trainee's success in academia was predicted by the degree of intellectual synthesis with their mentors, resulting from fusing the influence of disparate advisors. This suggests that junior scientists should have increased chances of success by training with and linking the ideas of mentors from different fields. We discuss the implications of these results for choosing mentors and determining the duration of postdoctoral training.
- 21 May 2019: PLOS Biology has published a community page about Rxivist.org and its design.
- 10 May 2019: The paper analyzing the Rxivist dataset has been published at eLife.
- 1 Mar 2019: We now have summary statistics about bioRxiv downloads and submissions.
- 8 Feb 2019: Data from Altmetric is now available on the Rxivist details page for every preprint. Look for the "donut" under the download metrics.
- 30 Jan 2019: preLights has featured the Rxivist preprint and written about our findings.
- 22 Jan 2019: Nature just published an article about Rxivist and our data.
- 13 Jan 2019: The Rxivist preprint is live!