The 3 dangers of publishing in “megajournals”–and how you can avoid them

You like the idea of “megajournals”–online-only, open access journals that cover many subjects and publish content based only on whether it is scientifically sound. You get that PLOS ONE, PeerJ and others offer a path to a more efficient, faster, more open scholarly publishing world.

But you’re not publishing there.

Because you’ve heard rumors that they’re not peer reviewed, or that they’re “peer-review lite” journals. You’re concerned they’re journals of last resort, article dumping grounds. You’re worried your co-authors will balk, that your work won’t be read, or that your CV will look bad.

Well, you’re not the only one. And it’s true: although they’ve got great potential for science as a whole, megajournals (which include PLOS ONE as well as BMJ Open, SAGE Open, Scientific Reports, Open Biology, PeerJ, and SpringerPlus) carry some potential career liabilities.

But they don’t have to. With a little savvy, publishing in megajournals can actually boost your career, at the same time as you support a great new trend in science communication. So here are the biggest dangers of megajournal publishing–and the tips that let you not have to worry about them:

1. My co-authors won’t want to publish in megajournals

Sometimes wanting to publish somewhere yourself isn’t enough–you’ve got to convince skeptical co-authors (or advisors!). Luckily, there’s a lot of data about megajournals’ advantages for you to share with the skeptics. And the easiest way to convince a group of scientists of anything is to show them the data.

Megajournals publish prestigious science

Megajournals aren’t for losers: top scientists, including Nobelists, publish there, and they also serve as megajournal editors and advisory board members. So, let your co-authors know: you’ll be in great company if you publish in a megajournal.

Megajournals boost citation and readership impact

Megajournals can get you more readers because they’re Open Access. A 2008 BMJ study showed that “full text downloads were 89% higher, PDF downloads 42% higher, and unique visitors 23% higher for open access articles than for subscription access articles.” These findings have been confirmed for other disciplines, as well. Open Access journals can also get you more citations, as numerous studies have shown.

Megajournals promote real-world use

With more readers comes more applications in the real world–another important form of impact. The most famous example is Jack Andraka, a teenager who devised a test for pancreatic cancer using information found in the Open Access medical literature. Every day, articles about diet and public health in Malawi, more efficient ways to monitor animal species in the face of rapid climate change, and other life-changing applied science are shared in Open Access megajournals.

Megajournals publish fast

If the readership and citation numbers don’t appeal to your co-authors, what about super fast publication times? Megajournals often publish more quickly than other journals. PLOS ONE has a median time-to-publication of around six months; PeerJ’s median time to first decision is 24 days; time to final acceptance is a mere 51 days. Why? Rather than having to prove to your reviewers the significance of your findings, you just have to prove that the underlying science is sound. That leaves you with more time to do other research.

Megajournals save money

Megajournals are also often cheaper to publish in, due to economies of scale. That means that while the Journal of Physical Therapy requires you to pay $4030 for an article, PLOS ONE can get you 29x the article influence for a third of the price. PeerJ claims that its even cheaper prices–a $299 flat rate for as many articles as you want to publish, ever–have saved academia over $1 million to date.

2. No one in my field will find out about it

You’ve convinced your co-authors–megajournals are faster, cheaper, and publish great research by renowned scientists. Now, how do you get others in your field to read an article in a journal they’ve never heard of?

Getting your colleagues to read your article is as easy as posting it in the places where they go to read. You can start before you publish by posting a preprint to Figshare or a disciplinary preprint server like arXiv or PeerJ Preprints, in order to whet your colleagues’ appetite. Make sure to use good keywords to make it findable–particularly since a growing percentage of articles today are found via Google Scholar and PubMed searches rather than encountered in journals.

Once your paper has been more formally published in your megajournal of choice, you can leverage the social media interest you’ve already gained to share the final product. Twitter’s a great way to get attention, especially if you use hashtags your colleagues follow. So is posting to disciplinary listservs. A blog post sharing the “story behind the paper” and summarizing your findings can be powerful, too. Together, these can be all it takes to get your article noticed.

Microbiologist Jonathan Eisen is a great example. He promoted his article upon publication with great success, provoking over 80 tweets and 17 comments on a blog post describing his PLOS ONE paper, “Stalking the Fourth Domain in Metagenomic Data”. The article itself has received ~47,000 views, 300 Mendeley readers, 23 comments, 35 Google Scholar citations, and hundreds of social media mentions to date, thanks in part to Eisen’s savvy self-promotion.

3. My CV will look like I couldn’t publish in “good” journals

It’s a sad fact that reviewers for tenure and promotion often judge the quality of articles by the journal of publication when skimming CVs. Most megajournal titles won’t ring any bells (yet) for those sorts of reviewers.

So, it’s your job to demonstrate the impact of your article. Luckily, that’s easier than you might think. Today, we don’t have to rely on the journal brand name as an impact proxy–we can look at the impact of the article itself, using article-level metrics.

One of the most compelling article-level stats is good old-fashioned citations. You can find these via Google Scholar, Scopus, or Web of Science, all of which have their pros and cons. Another great one is article downloads, which many megajournals report: even if your article is too new to be cited yet, you can show it’s making an impact with readers.

To demonstrate broader and more immediate impacts, also highlight your diverse audiences and the ways they engage with your research. Social media platforms leave footprints on the web. These “altmetrics” can be captured and aggregated at the article level:

|             | scholarly audience                | public audience            |
|-------------|-----------------------------------|----------------------------|
| recommended | Faculty of 1000 recommendations   | popular press mentions     |
| cited       | traditional citations             | Wikipedia citations        |
| discussed   | scholarly blog coverage           | blog and Twitter mentions  |
| saved       | Mendeley and CiteULike bookmarks  | Delicious bookmarks        |
| read        | PDF views                         | HTML views                 |

There are many places to collect this information, and rounding it all up can be a pain. Luckily, many megajournals will compile these metrics for you: PLOS has developed its own article-level metrics suite, while Nature’s Scientific Reports and many other journals use Altmetric.com’s informative article-level metrics reports.
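If you do end up gathering numbers from several providers by hand, the rounding-up itself is mechanical. Here’s a minimal sketch in Python–the provider names and counts are hypothetical, purely for illustration:

```python
from collections import Counter

def aggregate_metrics(per_source):
    """Merge the metric counts reported by several providers into a
    single article-level summary; counts for the same metric are summed."""
    totals = Counter()
    for metrics in per_source.values():
        totals.update(metrics)  # adds counts key by key
    return dict(totals)

# Hypothetical reports for one article, keyed by provider:
reports = {
    "publisher":     {"pdf_views": 1200, "html_views": 4600},
    "altmetric.com": {"tweets": 85, "blog_posts": 3},
    "mendeley":      {"readers": 300},
}

print(aggregate_metrics(reports))
```

In practice each provider covers different metrics, so the merge is mostly a union; where two sources report the *same* metric, you’d want to pick one source rather than sum them, to avoid double counting.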


If your megajournal doesn’t offer metrics, or you would like to compile metrics for all your megajournal articles in one place, you can pull everything together with an Impactstory profile instead.

And just like that, you’re turning megajournals into valuable assets for both science and your career:  you’ve convinced your co-authors, done some savvy social media promotion to get your discipline’s attention, and turned your megajournal article from a CV liability to a CV victory through the smart use of article-level metrics.  Congratulations!

Have you found success by publishing in megajournals? Got other megajournal publishing tips to offer? Share your story in the comments section below!

 


Announcing a better way to measure your value: the Total Impact Score

Measuring the full impact of a scholar’s work is important to us here at Impactstory. No single metric captures all the flavors of your impact–until now.

We’re announcing a thrilling new feature to be rolled out in the next few days: Total Impact Scores.* Now, using one metric to rule them all, you can capture and calculate not only your value as a Scholar, but your worth as a Human Being.

We are increasingly able to track your productivity, effectiveness, and health thanks to the Quantified Self movement. Smart appliances are able to tell us more than ever about your habits in the home.

By forging partnerships with new data providers, we’re able to get a fuller picture of your value on the job and in your private life. To help you make sense of all that data, we’ve summarized your impact in the Total Impact Score.

While the exact algorithms we use to calculate your Total Impact Scores are proprietary, we can share with you some of the data streams that are taken into account when compiling your Total Impact Score:

We have also paid close attention to concerns about the over-dependence upon quantitative measures, and will soon roll out qualitative supplements to the Total Impact Score, including full-text reports on your effectiveness as a parent, spouse, co-worker, and friend–as reported by your loved ones and colleagues.

Stay tuned for future announcements about the Total Impact Score and other innovations in altmetrics!

* Some might recognize the name–Total-Impact is what we called the first iteration of Impactstory. With our single impact metric, the Total Impact Score, you can truly calculate your total impact, beyond the Academy.

Four great reasons to stop caring so much about the h-index

You’re surfing the research literature on your lunch break and find an unfamiliar author listed on a great new publication. How do you size them up in a snap?


Google Scholar is an obvious first step. You type their name in, find their profile, and–ah, there it is! Their h-index, right at the top. Now you know their quality as a scholar.

Or do you?

The h-index is an attempt to sum up a scholar in a single number that balances productivity and impact. Anna, our example, has an h-index of 25 because she has 25 papers that have each received at least 25 citations.
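That definition is simple enough to compute directly. A minimal sketch (the citation counts here are illustrative, not Anna’s real record):

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least
    h citations each (Hirsch, 2005)."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:  # the rank-th most-cited paper still has >= rank citations
            h = rank
        else:
            break
    return h

# 25 papers with 25+ citations each, like Anna's record:
print(h_index([40] * 25 + [3] * 10))  # -> 25

# A junior researcher: one blockbuster paper can't push h past
# the total number of papers published.
print(h_index([500, 4, 2]))  # -> 2
```

The second call previews a problem discussed below: h is capped by the number of papers, no matter how heavily any one of them is cited.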

Today, this number is used for both informal evaluation (like sizing up colleagues) and formal evaluation (like tenure and promotion).

We think that’s a problem.

The h-index is failing on the job, and here’s how:

1. Comparing h-indices is comparing apples and oranges.

Let’s revisit Anna LLobet, our example. Her h-index is 25. Is that good?

Well, “good” depends on several variables. First, what is her field of study? What’s considered “good” in Clinical Medicine (84) is different than what is considered “good” in Mathematics (19). Some fields simply publish and cite more than others.

Next, how far along is Anna in her career? Junior researchers have an h-index disadvantage. Their h-index can only be as high as the number of papers they have published, even if each paper is highly cited. If she is only 9 years into her career, Anna will not have published as many papers as someone who has been in the field for 35 years.

Furthermore, citations take years to accumulate. The consequence is that the h-index doesn’t have much discriminatory power for young scholars, and can’t be used to compare researchers at different stages of their careers. To compare Anna to a more senior researcher would be like comparing apples and oranges.

Did you know that Anna also has more than one h-index? Her h-index (and yours) depends on which database you are looking at, because citation counts differ from database to database. (Which one should she list on her CV? The highest one, of course. :))

2. The h-index ignores science that isn’t shaped like an article.

What if you work in a field that values patents over publications, like chemistry? Sorry, only articles count toward your h-index. Same thing goes for software, blog posts, or other types of “non-traditional” scholarly outputs (and even one you’d consider “traditional”: books).

Similarly, the h-index only uses citations to your work that come from journal articles, written by other scholars. Your h-index can’t capture if you’ve had tremendous influence on public policy or in improving global health outcomes. That doesn’t seem smart.

3. A scholar’s impact can’t be summed up with a single number.

We’ve seen from the journal impact factor that single-number impact indicators can encourage lazy evaluation. At the scariest times in your career–when you are going up for tenure or promotion, for instance–do you really want to encourage that? Of course not. You want your evaluators to see all of the ways you’ve made an impact in your field. Your contributions are too many and too varied to be summed up in a single number. Researchers in some fields are rejecting the h-index for this very reason.

So, why judge Anna by her h-index alone?

Questions of completeness aside, the h-index might not measure the right things for your needs. Its particular balance of quantity versus influence can miss the point of what you care about. For some people, that might be a single hit paper, popular with both other scholars and the public. (This article on the “Big Food” industry and its global health effects is a good example.) Others might care more about how often their many, rarely cited papers are used by practitioners (like those by CG Bremner, who studied Barrett Syndrome, a lesser-known relative of gastroesophageal reflux disease). When evaluating others, the metrics you’re using should get at the root of what you’re trying to understand about their impact.

4. The h-index is dumb when it comes to authorship.

Some physicists are one of a thousand authors on a single paper. Should their fractional authorship weigh equally with your single-author paper? The h-index doesn’t take that into consideration.

What if you are first author on a paper? (Or last author, if that’s the way you indicate lead authorship in your field.) Shouldn’t citations to that paper weigh more for you than they do for your co-authors, since you had a larger influence on the development of that publication?

The h-index doesn’t account for these nuances.

So, how should we use the h-index?

Many have attempted to fix the h-index’s weaknesses with various computational models that, for example, reward highly cited papers, correct for career length, rank authors’ papers against other papers published in the same year and source, or count just the average citations of the most high-impact “core” of an author’s work.
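To make the career-length correction concrete: Hirsch himself proposed the m-quotient, the h-index divided by the number of years since an author’s first publication. A minimal sketch, with illustrative numbers:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    # For a descending list, c >= rank holds for exactly the first h ranks.
    return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

def m_quotient(citations, years_since_first_paper):
    """Hirsch's m-quotient: h divided by career length in years, a
    simple correction for the junior-researcher disadvantage."""
    return h_index(citations) / years_since_first_paper

junior = m_quotient([15, 12, 11, 11, 10, 10, 10, 10, 10, 10], 5)  # h = 10
senior = m_quotient([30] * 20 + [2] * 30, 25)                     # h = 20
print(junior, senior)  # -> 2.0 0.8  (the junior scholar fares better)
```

Despite the lower raw h-index, the junior researcher comes out ahead once career length is factored in–which is exactly the nuance the plain h-index misses.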

None of these have been widely adopted, and all of them boil down a scientist’s career to a single number that only measures one type of impact.

What we need is more data.

Altmetrics–new measures of how scholarship is recommended, cited, saved, viewed, and discussed online–are just the solution. Altmetrics measure the influence of all of a researcher’s outputs, not just their papers. A variety of new altmetrics tools can help you get a more complete picture of others’ research impact, beyond the h-index. You can also use these tools to communicate your own, more complete impact story to others.

So what should you do when you run into an h-index? Have fun looking if you are curious, but don’t take the h-index too seriously.

Are you more than your h-index?  Email us today at team@impactstory.org for some free “I am more than my h-index” stickers!

Come hang out with us this Thursday!


Are you curious about altmetrics? Want to learn more about Impactstory, the only non-profit company committed to helping you find all your research impact?

Follow us on Google+ and get your invitation to join Stacy at our official, one-hour Google Hangout this Thursday, March 27th at 2 pm EDT/11 am PDT.

Stay for a few minutes or the entire hour, it’s up to you! We just want to get to know you better and chat about our favorite topic, altmetrics.

Even if you can’t make it, follow Impactstory on Google+ to stay in the loop with our latest news and learn about future Hangouts!

 

How to be the grad student your advisor brags about

Your advisor is ridiculously busy–so how do you get her to keep track of all the awesome research you are doing? Short answer: do great work that has such high online visibility, she can’t ignore it.

Easy, right?

But if you’re like me, you actually might appreciate a primer on how to maximize and document your research’s impact. Here, I’ve compiled a guide to get you started.

1. Do great work.

To begin with, you need to do work that’s worth bragging about. Self-promotion and great metrics don’t amount to much if your research isn’t sound.

2. Increase your work’s visibility.

Assuming that you’ve got that under control, making your “hidden” work visible is an easy next step. Gather the conference posters, software code, data, and other research products that have been sitting on your hard drive.

Using Figshare, you can upload datasets and make them findable online. You can do the same for your software using GitHub, and for your slide decks using Slideshare.

Want to make your work popular? Consider licensing it openly. Open licenses like CC-BY allow others to reuse your work more easily, advancing science quickly while still giving you credit. Here are some guides to help you license your data, code, and papers.

Making your work openly available has the benefit of allowing others to reuse and repurpose your findings in new and unexpected ways–adding to the number of citations you could potentially receive. These sites can also report metrics that let you see how often your work is viewed, downloaded, and used in other ways. (More about that later.)

3. Raise your own profile by joining the conversation.

Informal exchanges are the heart of scientific communication, but formal “conversations” like written responses to journal articles are also important. Here are three steps to raising your profile.

  1. Engage others in formal forums. You may already participate in conversations in your field at conferences and in the literature. If you do not, you should. Presenting posters, in particular, can be a helpful way to get feedback on your work while at the same time getting to know others in your field in a professional context.

  2. Engage others more and often. Don’t be a wallflower, online or off. Though it can be intimidating to chat up senior researchers in your field–or even other grad students, for that matter–it’s a necessary step to building a community of collaborators. An easy way to start is by joining the Web equivalent of a ‘water cooler’ conversation: Twitter. There are lots of great guides to help you get started (PDF). When you’ve gained some confidence and have longform insights to add, start a blog to share your thoughts. This post offers great tips on academic blogging for beginners, as does this article.

  3. Engage others in the open. Conversations that happen via email only serve those who are on the email chain. Two great places to have conversations that can benefit anyone who chooses to listen–while also getting you some name recognition–are disciplinary listservs and Twitter. Open engagement also lets others join the debate.

4. Know your impact: track your work’s use online.

Once you’ve made your contributions to your discipline more visible, track the ways your work is being used and discussed by others online. There are great tools that can help: the Altmetric.com bookmarklet, Academia.edu’s visualization dashboard, Mendeley’s Social Statistics summaries, basic metrics on Figshare, GitHub, and Slideshare, and Impactstory profiles.

See the buzz around articles with the Altmetric.com bookmarklet

The Altmetric.com bookmarklet can help you understand the reach of a particular article. Where altmetrics aren’t already displayed on a journal’s website, you can use the bookmarklet. Drag and drop the Altmetric bookmarklet (available here) into your browser toolbar, and then click it next time you’re looking at an article on a publisher’s website. You’ll get a summary of any buzz around your article–tweets, blog posts, mentions in the press, even Reddit discussions.

Track international impact with Academia.edu’s download map

One of our favorite altmetrics visualization suites can be found on Academia.edu. In addition to a tidy summary of pageviews and referral sources for your documents hosted on their site, they also offer a great map visualization, which can help you to easily see the international reach of your work. This tool can be especially helpful for those in applied, internationally-focused research–for example, Swedish public health researchers studying the spread of disease in Venezuela–to understand the consumption of articles, white papers, and policy documents hosted on Academia.edu. One important limitation is that it doesn’t cover documents hosted elsewhere on the web.

Understand who’s reading your work with Mendeley Social Statistics

Mendeley’s Social Statistics summaries can also help you understand what type of scholars are reading your research, and where they are located. Are they faculty or graduate students? Do they consider themselves biologists, educators, or social scientists? If you’re writing about quantum mechanics, your advisor will be thrilled to see you have many “Faculty” readers in the field of Physics. Like Academia.edu visualizations, Mendeley’s Social Statistics are only available for content hosted on Mendeley.com.

Go beyond the article: track impact for your data, slides, and code

The services above work well for research articles, but what about your data, slides, and code? Luckily, Figshare, Slideshare, and GitHub (which we discussed in Step 2) track impact in addition to hosting content.

To track your data’s impact, get to know Figshare’s basic social sharing statistics (Twitter, Google+, and Facebook), which are displayed alongside pageviews and cites.


To understand how others are using your presentations, use Slideshare’s metrics for slide decks. Impact is broken down into three categories: Views, Actions, and Embeds.


For code, leverage GitHub’s social features. Stars indicate whether others have bookmarked your projects, and Forks let you see whether others are reusing your code.


Put it all together with Impactstory

So, there are many great places to discover your impact. Too many, in fact: it’s tough to visit all these individually, and tough to see and share an overall picture of your impact that way.

An Impactstory profile can help. Impactstory compiles information from across the Web on how often people view, cite, reuse, and share your journal articles, datasets, software code, and other research outputs. Send your advisor a link to your Impactstory profile and include it in your annual review–she’ll be impressed when reminded of all the work you’ve done (that software package she had forgotten about!) and all the attention your work is getting online (who knew your code gets such buzz!).

Congrats! You’re on your way.

You’re an awesome researcher who has lots of online visibility. Citations to your work have increased, now that you have name recognition and your work can more easily be found and reused. You’re tracking your impact regularly, and have a better understanding of your audience to show for it. Most importantly, you’re officially brag-worthy.

Are there tips I didn’t cover here that you’d like to share? Tell us in the comments.

Hello! I’m Stacy.

It is with much excitement that I write this post to introduce myself as Impactstory’s Director of Marketing & Research. Like many of you, I’ve watched with admiration as Heather and Jason built a great product that supports their vision of an Open Internet for scientists, where all scholarship gets the credit it deserves for moving knowledge forward.

I come to Impactstory from a tenure-track position at an academic library, where I spent the last few years supporting scientists’ research data management needs. Some of you might also be familiar with my research into how altmetrics can be adopted in research libraries, to the benefit of faculty and librarians alike.

Last week was my first with Impactstory. We spent many long days coding, writing, debating, and strategizing. I’m still exhausted, but also more inspired and happy than I’ve been in quite some time.

So happy, in fact, I’ll share with you some of our short-term plans:

  • Launching our research into the various impacts of scientific software, for which Impactstory recently won an NSF EAGER grant
  • Expanding our efforts to equip our supporters with the means to promote us–and altmetrics, more generally–within their campus and community
  • Continuing to roll out kick-butt features to Impactstory profiles

Stay tuned for more specific updates soon!

Top 5 altmetrics trends to watch in 2014

Last year was an exciting one for altmetrics. But it’s history. We were recently asked: what’s 2014 going to look like? So, without further ado, here are our top 5 trends to watch:

Openness: This is just part of a larger trend toward open science–something altmetrics is increasingly (and aptly) identified with. In 2013, it became clearer than ever that we’re winning the fight for universal OA. Since metrics are qualitatively more valuable when we can verify, share, remix, and build on them, we’ll see continued progress toward making both traditional and novel metrics more open. But closedness still offers quick monetization, and so we’ll see continued tension here.

Acquisitions by the old guard: Last year saw the big players start to move into the altmetrics space, with EBSCO acquiring Plum Analytics and Elsevier grabbing Mendeley. In 2014 we’ll likely see more high-profile altmetrics acquisitions, as established megacorps attempt to hedge their bets against industry-destabilizing change. We’re not against this, per se; it’s a sign that altmetrics are quickly coming of age. But we also think it underscores the importance of having a nonprofit, scientist-run altmetrics provider, too.

More complex modelling: Lots of money got invested in altmetrics in 2013. This year it’ll get spent, largely on improving the descriptive power of altmetrics tools. We’ll see more network-awareness (who tweeted or cited your paper? how authoritative are they?), more context mining (is your work cited from methods or discussion sections?), more visualization (show me a picture of all my impacts this month), more digestion (are there three or four dimensions that can represent my “scientific personality?”), and more composite indices (maybe high Mendeley readership plus low Facebook attention predicts later citation, while high marks on both do not). The low-hanging altmetrics fruit–things like simply counting tweets–are increasingly plucked. In 2014 we’ll see the beginning of what comes next.

Growing interest from administrators and funders: We gave multiple invited talks at the NSF, NIH, and White House this year to folks highly placed in the research funding ecosystem. These leaders are keenly aware of the shortcomings of traditional impact assessment, and eager to supplement it with new data. Administrators too want to tell more meaningful, textured stories about faculty impact. So in 2014, we’ll see several grant, hiring, and T&P guidelines suggest applicants include altmetrics when relevant.

Empowered scientists: But this interest from the scholarly communications superstructure is tricky. Historically, metrics of scholarly impact have often been wielded as technologies of control: reductionist, Taylorist management tools. There’s been concern that more metrics will only tighten this control. That’s not misplaced. But nor is it the only story: we believe 2014 will also see the emergence of the opposite trend. As scientists use tools like Impactstory to gather, analyze, and share their own stories, comprehensive metrics become a way for them to articulate more textured, honest narratives of impact in decisive, authoritative terms. Altmetrics will give scientists growing opportunities to show they’re more than their h-indices.

And there you have it, our top five altmetrics trends for 2014. Are we missing any? Let us know in the comments!