Citation counts are just that -- a tally of how many times a scholarly work has been cited by others. A count indicates that a piece is being read and utilized.
Citation counts have strengths and weaknesses. For instance, raw counts are not an indication of quality. Imagine, for example, a piece that is widely used as a teaching example because the authors employed a flawed procedure. This notoriety for a weakness might drive up the counts associated with the article, despite the dubious quality of the piece.
Citation counts may also be increased or decreased by external forces beyond merit, such as reviews in newspapers or popular magazines, availability of the piece in multiple formats (including open access), and the advertising budget of the publishing house. Even the promotional efforts of the authors themselves may have an effect on counts.
Works in some fields take longer to make it into the stream of scholarly dialogue than others. It depends, in part, on publication schedules. A field with many annual or quarterly publications, or one for which the primary communication vehicle is the monograph, will have a scholarly dialogue pace that moves more slowly than one dominated by weekly or monthly publications. Thus, in some fields, citation of a work may peak a good 4-5 years after initial publication.
The Hirsch index (h-index) measures impact (use by the scholarly community) and assumes that an author has multiple publications, each with multiple citations.
The index is the largest number h such that the author has at least h publications with at least h citations each. So, if a scholar has 4 publications with 3, 2, 0, and 0 citations respectively, then the index is 2 (at least two publications with at least 2 citations each). Likewise, if another scholar has only 1 publication with 28 citations, that academic's index is 1.
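The rule above can be sketched in a few lines of Python (the function name is ours, for illustration):

```python
def h_index(citations):
    """Return the largest h such that at least h publications
    have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    # Walk down the ranked list; the index is the last rank at which
    # the publication's citation count still meets or exceeds its rank.
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([3, 2, 0, 0]))  # the four-publication example: 2
print(h_index([28]))          # the single-publication example: 1
```

Sorting in descending order makes the check simple: once a publication's citation count falls below its rank, no later publication can raise the index.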
Theoretically, an academic's h-index will increase over the course of a career. A scholar retiring with 50 publications, however, only one of which was cited (once), would have an h-index of 1. That does not imply that the scholarship was necessarily flawed; it may only mean that the various pieces were published in obscure journals or by small or new publishing houses.
The index is helpful when comparing the impact of two scholars in the same field who have the same type of output (a similar number of monographs, a similar number of articles, etc.) and are at the same rank. But impact, again, is not the same as quality or overall productivity. When using the index for comparison, it is vital for the scholars to be in the same discipline, since scholarly communication proceeds at different paces in different fields. For instance, monograph-focused fields cite at a slower pace than article-focused fields. Some fields also may include, and variously weigh, publications and works intended for lay practitioners and working professionals when making comparisons.
For purposes of tenure and promotion, there is no substitute for a committee reading the artifacts in a colleague's dossier to determine the merit of an individual work.
The h-Index: An Indicator of Research and Publication Output. (2023). Pulse International, 24(8), 1–19. (Access requires UWG NetID)
Schreiber, M. (2009). A case study of the modified Hirsch index hm accounting for multiple coauthors. Journal of the American Society for Information Science & Technology, 60(6), 1274–1282. https://doi.org/10.1002/asi.21057. (Access requires UWG NetID)
The i10-index is used primarily by Google Scholar to indicate the number of publications that have at least 10 citations.
If a scholar has 11 publications with 23, 11, 8, 6, 5, 2, 1, 0, 0, 0, and 0 citations respectively, then the i10-index for that scholar is 2: two publications have generated at least 10 citations.
It can be used in the same way as the h-index. The chief difference is that the i10-index better reflects the number of a scholar's higher-citation articles than the h-index does.
Altmetrics track the impact of a scholar's work beyond the academy. They are, in essence, an alternative to traditional impact measures.
Alternative metrics track mentions in sources including, but not limited to:
The book review and its sibling, the performance review, are staples for measuring how well crafted, researched, or presented a major work might be.
It is important to remember that while news outlets sometimes are able to provide reviews of musical, dramatic, and artistic events quickly, reviews of academic monographs may trail the publication of the book by two to three years.
Reviews represent the opinion of a single individual. And, while reviewers may strive to be impartial, they do have their own aesthetics, preferences, and biases. Therefore, when evaluating a work it is best to weigh several reviews of the same piece.
For monographs, generally the publisher will ask an author for a list of potential reviewers and/or a list of journals that publish reviews on the subject matter covered in the book. To these persons and journals the publisher may send a "review copy" of the text. The cost for review copies may be deducted from author royalties, depending on how the author's contract is written.