
Citation Politics, Special Issues, and Identifying “Core” Rhetoric Journals

You can Google "Rhetoric Journals" and find lists, usually from either disciplinary organizations or folks working for libraries, that identify journals that cover rhetoric as an academic subject area. American Rhetoric and The Consortium of Doctoral Programs in Rhetoric and Composition both keep a list, for example.

The problem with these types of lists is that they use a deductive approach that tends to marginalize scholarship that is important to scholars who identify with the subject area but are studying topics that haven't been covered very much. This sort of oversight could be because the research is innovative, but I would guess that more often than not it's because the topic isn't recognized as relevant by key gatekeepers such as reviewers, editors, and graduate faculty. Sometimes gatekeepers will code that as a lack of "fit." Or perhaps a writer who identifies with the field simply prefers to publish in places with a different readership, and they'd argue that their audience and journal should be considered relevant even if others might not (yet) read it that way. Case in point: my colleague Norbert Elliot has been publishing his research on writing in linguistics journals, and it would be hard to make a compelling argument, at least to some of my colleagues, that something like the Journal of Writing Analytics is about rhetoric. It should go without saying that much of this outgrouping and othering is an effect of the historical marginalization of BIPOC, LGBTQI+, and differently abled scholars.

So the predefined lists will only get you so far in identifying research that helps you better understand an area of study. A different way to identify "core" journals is to take these predefined lists and look at what the people doing the research are reading, or at least citing, since that would hypothetically provide a bigger picture of what is being practiced rather than prescribed. This is something of a chicken-and-egg problem, though, since this type of analysis requires identifying a corpus of articles that count as "rhetoric" or whatever discipline you're interested in. It doesn't fix the central classification problem I mentioned above, but it can point to less privileged publication venues that are being read by folks who are still attempting to publish in the conservative locations while reading more broadly to understand the topic area. At best, it can surface potentially important research venues that aren't valued as much as the historically privileged journals.

This idea was interesting enough to me that I wanted to see what would happen if I tried it. Rhetoric Society Quarterly is arguably one of the most recognized academic rhetoric journals. It provides a unique test case since the journal grew out of the collaboration of scholars who were being marginalized by their departments in communication, English, writing, and philosophy. Today RSQ is a popular venue for folks in departments of English, Communication, and Writing, and ideally you'd expect that editors and peer reviewers would have to be a little more flexible about what counts as "fit," since they're gatekeeping for several different fields of study that overlap around the word "rhetoric." But who knows?

So I collected every citation from the reference lists of RSQ articles from the last ten years, hoping that the sources being cited would provide a different way of understanding where rhetoric is being studied. I fully expected that the most popular choices from the canonical lists would be highly cited, but I was hoping to find something unexpected: sources that were being read but were not making those "core" lists I mentioned above.
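(If you're curious about the mechanics: once the references are extracted from each article, the tallying itself is trivial. Here's a minimal sketch in Python; the `bibliographies` input format is a stand-in assumption for illustration, not a description of the actual collection step.)

```python
from collections import Counter

def count_journal_citations(bibliographies):
    """Tally how often each journal shows up across a set of reference lists.

    `bibliographies` is a list of reference lists, one per article, where
    each reference has already been reduced to a journal title string
    (non-journal sources filtered out upstream).
    """
    counts = Counter()
    for refs in bibliographies:
        # Normalize lightly so "Quarterly Journal of Speech " and
        # "quarterly journal of speech" count as the same venue.
        counts.update(title.strip().lower() for title in refs)
    return counts

# Top 20 most cited journals in a corpus:
# count_journal_citations(corpus).most_common(20)
```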

Here's what I found. (After each journal I've listed the number of times that journal was cited in my corpus. I intentionally left out non-journal sources (books, newspapers, blog posts, etc.) because I wanted to start with a smaller question.)

  1. QJS 359
  2. RSQ 306
  3. Philosophy and Rhetoric 161
  4. Rhetoric & Public Affairs 108
  5. Rhetoric Review 91
  6. College English 86
  7. Western Journal of Communication 75
  8. CCC 58
  9. Argumentation and Advocacy 33
  10. Critical Studies in Media Communication 32
  11. Communication Monographs 31
  12. Rhetorica 31
  13. Southern Journal of Communication 30
  14. JAC 26
  15. Written Communication 25
  16. Communication and Critical/Cultural Studies 23
  17. Communication Quarterly 21
  18. Communication Studies 21
  19. Argumentation 19
  20. Advances in the History of Rhetoric 18

Disappointing, but not unexpected. Each of those journals would show up in a bunch of the "core" lists. The only journals that are marginally interesting, IMHO, are the regional communication journals like Southern Journal of Communication, but even those are normative rhetoric journals.

But then I had another idea. What if I only looked at references in the special issues of RSQ? Special issues are proposed by guest editors who have a hand in selecting the pieces they want to publish. I got this idea after reading a special issue of RSQ on wearables; the articles just felt different. They seemed to imagine rhetoric differently than a status quo issue. I started wondering if these special issues might be different enough that they'd provide a platform for better inclusion. I can't help but imagine that innovation might be one of the reasons that special issues are valued by journal editors.

So I limited my lists of cited sources from the last ten years to just the special issues. The themes of those issues were: Neurorhetorics (2010), Human Rights (2011), Regional Rhetorics (2012), Comparative Rhetoric (2013), Untimely Historiographies (2014), La Idea de la Retórica Americana/The Idea of American Rhetoric (2015), Wearables (2016), Rhetoric's Bestiary (2017), Keywords (2018), and Rhetoric's Demagogue (2019). Here are the most cited journals, issue by issue. I should point out that these counts aren't exhaustive--there were lots more sources; these are just the most popular ones.

Human Rights (2011), 41(3)

  1. The Lancet 5
  2. Quarterly Journal of Speech 3
  3. PMLA 3
  4. Harvard International Law Journal 2
  5. Philosophy & Rhetoric 2
  6. Critical Studies in Media Communication 2
  7. Rhetoric & Public Affairs 2

Interesting, right? The issue on rhetoric and human rights draws heavily from The Lancet and the Harvard International Law Journal. And I found that each of the other special issues also contained a more varied list of preferred sources.

Regional Rhetorics (2012)

  1. Philosophy & Rhetoric 4
  2. Critical Studies in Media Communication 4
  3. Rhetoric Society Quarterly 3
  4. Quarterly Journal of Speech 2
  5. Southern Communication Journal 2
  6. Geographical Review 2

Not as interesting, but Geographical Review!

Comparative Rhetoric (2013)

  1. College English 12
  2. Style 6
  3. Rhetoric Review 6
  4. Rhetoric Society Quarterly 5
  5. College Composition and Communication 4
  6. Quarterly Journal of Speech 3
  7. Rhetorica 3
  8. PMLA 2

Meh. No big differences, but more citations to writing/English journals.

Untimely Historiographies (2014)

  1. Rhetoric Society Quarterly 4
  2. Quarterly Journal of Speech 3
  3. Media, Culture, and Society 3
  4. Advances in the History of Rhetoric 2
  5. Critical Inquiry 2
  6. Philosophy & Rhetoric 2
  7. Political Theory 2

Media, Culture, and Society? Critical Inquiry? Political Theory?

La Idea de la Retórica Americana/The Idea of American Rhetoric (2015)

  1. Rhetoric Society Quarterly 9
  2. Quarterly Journal of Speech 5
  3. Presidential Studies Quarterly 3
  4. Advances in the History of Rhetoric 3
  5. College Composition and Communication 2

Cool! Presidential Studies Quarterly.

Wearables (2016)

  1. Rhetoric Society Quarterly 6
  2. Quarterly Journal of Speech 3
  3. Mobile Media & Communication 3
  4. Written Communication 3
  5. Critical Public Health 2
  6. Journal of Business and Technical Communication 2
  7. Rhetoric Review 2
  8. Philosophy & Rhetoric 2
  9. College English 2
  10. New Media & Society 2

I was right! The citation distribution in this one was more varied. Many more journals, including a smattering of technical communication scholarship.

Rhetoric's Bestiary (2017)

  1. Philosophy & Rhetoric 17
  2. Rhetoric Society Quarterly 7
  3. Environmental Communication 4
  4. Western Journal of Communication 2
  5. Southern Communication Journal 1
  6. Presidential Studies Quarterly 1
  7. Rhetoric Review 1
  8. PMLA 1

Environmental Communication! The many references to Philosophy & Rhetoric make me think that several of the articles could have been theory (re: canonized Western lit theory) heavy.

Keywords (2018)

  1. Rhetoric Society Quarterly 52
  2. Quarterly Journal of Speech 20
  3. Rhetoric Review 6
  4. Western Journal of Communication 4
  5. Computers and Composition 4
  6. College Composition and Communication 3
  7. Review of Communication 3
  8. Philosophy & Rhetoric 3
  9. Rhetoric & Public Affairs 3

Reifying. The disciplinary keywords issue overwhelmingly cited RSQ.

Rhetoric’s Demagogue (2019)

  1. Rhetoric & Public Affairs 21
  2. Rhetoric Society Quarterly 6
  3. Western Journal of Communication 4
  4. Women's Studies in Communication 4
  5. Quarterly Journal of Speech 4
  6. Communication Studies 2
  7. Western Journal of Speech Communication 2

I'm curious who was citing Women's Studies in Communication in this issue.

Overall, each of these lists mostly includes the normative journals in different ratios. But in most of the special issues you also see a few sources that might not be identified as part of a "core." There are some methodological issues with assessing things this way (how comparable is the corpus for each special issue to the entire ten-year run? How big a sample would I want? Is a 2009 citation comparable to a 2019 citation?), but I'm wondering how important special issues are for shifting the thinking of entire communities. A follow-up question would be: how much were any of these less cited journals cited after they were highlighted in the special issue? I want to keep looking at this, but it seems possible that special issues could be important media for transforming how a field thinks about itself.

Where Do Citation Metrics Come From?

Frequently when citation metrics are written about in the popular press, they are critiqued for their (mis)use and abuse. There's a lot to be said about why metrics don't measure what they say they do. The journal impact factor (JIF), for example, highlights how frequently articles in a specific journal have been cited within the last few years, and the h-index produces a similar metric at the author level. Both indicators assume a norm--that in an ideal world, the "most valuable" articles would be recognized universally by everyone, and others writing about the topic would change their citation practices to include the new valuable piece. There are numerous problems with that normative assumption. No one reads everything; topics and papers don't fit neatly into citable categories; citation practices have been historically racist and elitist.​*​ There are numerous other reasons citation doesn't straightforwardly identify inherent value, but one of the best is that "value" is a deliberative topic, not just an aggregate of popular practice.
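(To make the author-level version concrete: the h-index is just a cutoff on a sorted list of per-paper citation counts. A minimal sketch, with a made-up citation list:)

```python
def h_index(citation_counts):
    """Largest h such that the author has h papers cited at least h times each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical author: five papers with these citation counts.
# h_index([10, 8, 5, 4, 3]) == 4  (four papers with at least 4 citations each)
```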

Citations are meaningful somehow, though. Writers include citations, and placing them within writing transforms the literary space of the text. How are they meaningful in a particular situation? It varies. Authors include citations for reasons ranging from paying homage to substantiating claims to identifying methodology.​†​ In that last sentence, I included a citation because I cribbed that list from "When to Cite," a hybrid scholarly/tutorial article written by Eugene Garfield, whose Institute for Scientific Information created what became Web of Knowledge, one of the big three citation databases. Garfield's article is a normative "how to" article, and his list is mostly his personal opinion. I could have just as easily looked at one of my own articles, described why I cited a particular source (at least what I now remember), and given other reasons. Why did I choose Garfield instead of myself or another source about writing? Mostly because he's famous for inventing citation metrics, I knew about the article already, and his article was easy to find with keywords about citation. That, of course, doesn't even begin to describe the ways that readers might make sense of why they cite when writing, especially if you believe, as I do, that much of what is written is beyond anyone's intentionality.

So citations perform meaningfulness multivocally and differently at every point of material production. Writers think of them differently as they are positioned in texts. Editors look at them with their own eyes. Readers make sense of them given their own context. Each person also rereads them with new eyes. And on and on. Interpretation varies while the material of citation stays the same.

When citations are aggregated as metrics, their meaningfulness is transformed in a new way. Much like public polls produce something like "public opinion" as a technique of aggregation, citation metrics produce something like "scholarly value."​‡​ JIF is calculated by dividing the number of citations a journal received in a given year to items it published in the previous two years by the number of citable items it published in those two years. This aggregate value represents the journal's approximate number of citations per article. The JIF metric depends on a vast number of assumptions. The most obvious is the existence of a journal that has published citable articles for the last two years. A more fundamental assumption is that there is a list of every citation to that journal, sitting somewhere ready to reference. That list doesn't exist. One of the major differences between an impact factor from Web of Science, Scopus, and Google Scholar is the (incomplete) list of citations each has managed to compile. It's been well documented how these different databases highlight differently curated sets of data and often produce wildly different metrics.
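(The formula itself is trivial; everything interesting, and contestable, hides in where the counts come from. A minimal sketch of the standard two-year calculation, with hypothetical inputs:)

```python
def journal_impact_factor(citations_in_year, articles_by_year, year):
    """Approximate the two-year JIF for `year`.

    citations_in_year[y]: citations received during `year` to items the
                          journal published in year y (hypothetical input)
    articles_by_year[y]:  citable items the journal published in year y
    """
    cites = citations_in_year.get(year - 1, 0) + citations_in_year.get(year - 2, 0)
    items = articles_by_year.get(year - 1, 0) + articles_by_year.get(year - 2, 0)
    return cites / items if items else 0.0

# 120 citations in 2017 to articles from 2015-2016, spread over 80 articles:
# journal_impact_factor({2015: 70, 2016: 50}, {2015: 45, 2016: 35}, 2017) -> 1.5
```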

Each aggregated bibliometric value depends on a foundational infrastructure that provides the raw material. The metrics are constructed from what that infrastructure makes available, and each aggregated value flattens out the gaps and specificities of the missing parts. For instance, the Journal of Medical Humanities includes a variety of genres in its pages--including poetry. Poetry usually isn't cited, at least not in the same way that a JAMA article would be. Aggregated metrics miss nuance that makes a difference and produce numbers that don't highlight those differences.

It's become popular to use the aggregates as evidence for evaluating individual, publishing, and disciplinary value. At my home institution, a variety of metrics are used to divide public funding among every school in the state. Sometimes metrics work well, especially if the person or thing being evaluated fits the normative assumptions valued by the metric. Just as often, metrics overlook and provide poor evidence for assessing value. For example, the Quarterly Journal of Speech is frequently esteemed as the most important journal for rhetoricians in communication departments, primarily because it's one of the longest running. If you compare its 2017 impact factor (.46) to Communication Monographs (1.738), it doesn't come out so well. Communication Monographs is a more eclectic journal, though. Its topics often appeal to a generalist audience, so the pool of potential citing documents is bigger. Yet it would be a mistake to suggest that Quarterly is less important for people who focus on rhetorical scholarship in communication. You couldn't learn a lot about rhetorical theory from reading Communication Monographs.

That doesn't even begin to get at the problems with citation metrics. In 2018, Paula Chakravartty, Rachel Kuo, Victoria Grubbs, and Charlton McIlwain pointed out how citational practices in communication forward systemic racism.​§​ The academic journal system started in Europe and has been overwhelmingly sustained and forwarded by a labor force that is to this day predominantly white.​¶​ This means both that the scholarly topics of concern emerged from white in-groups and that the majority of editors and supporters are enculturated in that legacy of racism. There have been and continue to be problems of access in education and community that affect which topics and people end up in the pages of the journals. Differences in service loads, teaching expectations, funding, and much more are glossed over by performance metrics, even though they affect access and opportunity for publishing or citing.​#​ Read the article. The same issues are affected by gender, too. Although there is evidence that gender disparities in citation metrics are smaller than in previous decades,​**​ every step toward better inclusion and diversity is met with two steps back.​††​

The double edge of aggregate citation metrics is that they both perform and provide material evidence of what should be valued. Each time a metric is invoked as evidence of something, it lends additional credibility to the metric as evidence. Metrics postulate an invisible norm, often that the highest number of citations or mentions is inherently valuable. That norm produces incentives that feed back into the maintenance and care of the infrastructure. If a journal is given better funding or receives more recognition for a higher impact factor, it is incentivized to maximize that impact factor. To say that Communication Monographs is valuable because of its higher impact factor is to simultaneously suggest that the practices that enable that journal are the important ones. If that metric is tied to better funding or more support, it undercuts the value of specialist journals like Quarterly Journal of Speech or Communication and Critical/Cultural Studies (JIF = .767). Metrics silently lend support to the disparities and differences that plague academic labor. Aggregates flatten contextualized meaning to provide evidence of normative behavior. If you are an academic writer who has ever thought twice about where to send your writing based on a metric of some sort, you have participated in that norm (guilty here). The norm supports existing academic infrastructure, an infrastructure that does not work for many of the problems faced in the 21st century. Thought of as indicators of value, metrics reinforce the status quo.

But metrics could instead be looked at as entry points for examination. Each performance metric can be examined for the assumptions and material it reinforces, the ones that support normative infrastructure. Since JIF measures and evaluates journals, one way to examine infrastructure would be to look for what Sara Ahmed calls "strategic inefficiencies," the points in production that slow the work of people advocating change. Anyone who has attempted to publish in a journal will be able to tell you how strategic inefficiencies affected them. (Raise your hand if you have a peer review story.) Collecting these stories, each meaningful in its own way, helps to articulate where value is being manipulated by a metric. Another way to open up the black box of metrics is to read them against their own grain. In a previous post I conducted a co-citation analysis of several rhetoric journals to identify which citations are frequently grouped together. A typical analysis reads frequent co-citations as foundational research for a field. A different way to look at them would be to see their authors as in-groups/out-groups/gatekeepers in a profession that is just as much defined by who you know as by what you know.
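(For anyone who wants to try reading co-citation against the grain, the counting step is again simple; the interpretive work is in what you do with the pairs. A minimal sketch, assuming each article's references have been reduced to hashable keys--this is the generic technique, not necessarily the exact analysis from that earlier post:)

```python
from collections import Counter
from itertools import combinations

def co_citation_counts(bibliographies):
    """Count how often each pair of sources is cited by the same article.

    `bibliographies` is a list of reference lists, one per citing article;
    each reference is reduced to a hashable key (author + title, say).
    """
    pairs = Counter()
    for refs in bibliographies:
        # sorted() keeps (a, b) and (b, a) from counting as separate pairs
        for pair in combinations(sorted(set(refs)), 2):
            pairs[pair] += 1
    return pairs

# Pairs cited together most often across a corpus:
# co_citation_counts(corpus).most_common(10)
```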

This is all just to say these metrics work both ways, as evidence of both functioning and crumbling infrastructure, and as Shannon Mattern has pointed out, "To fill in the gaps in this literature, to draw connections among different disciplines, is an act of repair or, simply, of taking care — connecting threads, mending holes, amplifying quiet voices."


  1. ​*​
    Chakravartty, P., Kuo, R., Grubbs, V., & McIlwain, C. (2018). #CommunicationSoWhite. Journal of Communication, 68(2), 254–266.
  2. ​†​
    Garfield, E. (1996). When to cite. The Library Quarterly: Information, Community, Policy, 66(4), 449–458.
  3. ​‡​
Hauser, G. A. (2010). Vernacular Voices: The Rhetoric of Publics and Public Spheres. Columbia, SC: University of South Carolina Press.
  4. ​§​
    Chakravartty, P., Kuo, R., Grubbs, V., & McIlwain, C. (2018). #CommunicationSoWhite. Journal of Communication, 68(2), 254–266.
  5. ​¶​
    Moxham, N., & Fyfe, A. (2018). The Royal Society and the Prehistory of Peer Review, 1665–1965. The Historical Journal, 61(4), 863–889.
  6. ​#​
    Gunning, S. (2000). Now That They Have Us, What’s the Point? In S. G. Lim, M. Herrera-Sobek & G. M. Padilla (Eds.), Power, Race, and Gender in Academe (pp. 171–182). New York, NY: Modern Language Association of America.
  7. ​**​
    Andersen, J. P., Schneider, J. W., Jagsi, R., & Nielsen, M. W. (2019). Gender Variations in Citation Distributions in Medicine are Very Small and Due to Self-Citation and Journal Prestige. ELife, 8, e45374; Mayer, V., Press, A., Verhoeven, D., & Sterne, J. (2017). How Do We Intervene in the Stubborn Persistence of Patriarchy in Communication Research? In D. T. Scott & A. Shaw (Eds.), Interventions: Communication theory and practice. New York, NY: Peter Lang.
  8. ​††​
Caruth, G. D., & Caruth, D. L. (2013). Adjunct Faculty: Who Are These Unsung Heroes of Academe? Current Issues in Education, 16(3), 1–10.