In 2004 Dale Benos and I published guidelines for reporting statistics in journals published by the American Physiological Society (6). In doing so, we hoped the guidelines would help improve the caliber of statistical information reported in those journals. By 2007, an admittedly short interval, it was clear that the mere publication of the guidelines was unlikely to impact reporting practices (7).

The guidelines themselves sparked unsolicited comment (see Refs. 7, 15, 16, 18, and 20). Our sequel to the guidelines (7) provided for invited commentary (see Refs. 2, 17, 19, and 21). Through it all, Dale and I believed that the guidelines reflected mainstream statistical best practices (6, 8).^{1} I still do. I am sure Dale would too.

Back then, to us and to the Editors-in-Chief, authors complained most vigorously about

*Guideline 5. Report variability using a standard deviation.*^{2}

They did so for two reasons. They failed to appreciate the distinction between a standard deviation and a standard error:

I do not agree with the edict about presenting data as [standard deviations] rather than [standard errors of the mean]. These presentations are for visual effect only. . . . To me, this edict is silly, particularly since showing [standard deviations rather than standard errors of the mean] is a cosmetic issue only. (Comment cited in Ref. 7.)

or they preferred to report a standard deviation using the format mean ± SD when the guidelines and other papers (1, 3, 10, 11, 13) advocated notation of this form: mean (SD).
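The substance of that distinction can be sketched numerically. In this hypothetical illustration (the population values and seed are my assumptions, not drawn from the guidelines), the standard deviation estimates the spread among observations and hovers near *σ* at every sample size, whereas the standard error of the mean shrinks by a factor of √*n*:

```python
# A hypothetical illustration of standard deviation (SD) vs. standard
# error of the mean (SE = SD / sqrt(n)) for samples of increasing size
# drawn from the same normal population.
import math
import random
from statistics import stdev

random.seed(1)  # reproducible; the seed is an arbitrary choice

mu, sigma = 100.0, 15.0  # assumed population parameters, for illustration only
for n in (10, 100, 1000):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    sd = stdev(sample)       # estimates sigma: variability among observations
    se = sd / math.sqrt(n)   # estimates uncertainty about the population mean
    print(f"n={n:5d}  SD={sd:6.2f}  SE={se:5.2f}")
# SD stays near sigma (15) at every n; SE shrinks toward zero as n grows.
```

The point of the sketch is that the two statistics answer different questions, so substituting one for the other is not merely cosmetic.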

As I begin my second term as Editor-in-Chief of *Advances in Physiology Education*, a full 10 years after Dale Benos and I published our sequel to the guidelines for reporting statistics, I remain committed to improving statistical practice and reporting among researchers. I am but one of many. Our collective efforts have gone on for decades. There is a reason for that. As educators, we know only too well how difficult it is to correct a misconception one of our students may hold. Why should we be any different?

I equate trying to change the reporting practices of statistics with trying to change the direction of an ocean liner with a kayak. Good luck with that. When I mentioned this analogy to a colleague, he said, "What we need are more kayaks. A lot more kayaks."

I understand it is difficult to change entrenched practices (7, 9). I get that change is slow. But that does not mean we should not try.

With this Editorial, I am announcing that I have asked the Associate Editors and Editorial Board of *Advances*—effective June 2017—to actively promote two of the 2004 guidelines for reporting statistics:

*Guideline 5. Report variability using a standard deviation*.

*Guideline 7. Report a precise P value*.

Below is my rationale for doing so.

*Guideline 5. Report variability using a standard deviation*.^{3} Suppose the random variable *Y* represents the physiological thing we care about. Let us simplify our lives and assume that *Y* is distributed normally with mean *μ* and standard deviation *σ*. The mean *μ* describes the location of the center of the distribution of *Y*, and the standard deviation *σ*—the square root of the variance *σ*^{2}—describes the spread of the normal distribution (Fig. 1).

These two parameters, *μ* and *σ*, determine a normal distribution. We can describe the theoretical distribution of possible outcomes of our random variable *Y* with the normal probability density function *f*(*y*):

$$f(y) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(y-\mu)^2 / (2\sigma^2)}$$

for *−∞ < y <* +*∞* (12). Bear in mind that the standard deviation *σ* is a single positive number.
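The density function above can be evaluated directly. This minimal sketch (the values of *μ* and *σ* are assumptions chosen for illustration) codes the formula by hand and cross-checks it against Python's built-in `statistics.NormalDist`:

```python
# A minimal sketch: evaluating the normal probability density function
# f(y) = (1 / (sigma * sqrt(2*pi))) * exp(-(y - mu)**2 / (2 * sigma**2))
# and cross-checking against Python's statistics.NormalDist (3.8+).
import math
from statistics import NormalDist

def normal_pdf(y: float, mu: float, sigma: float) -> float:
    """Density of a normal distribution with mean mu and SD sigma."""
    return math.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

mu, sigma = 0.0, 1.0  # hypothetical values, for illustration
for y in (-1.0, 0.0, 1.0):
    assert math.isclose(normal_pdf(y, mu, sigma), NormalDist(mu, sigma).pdf(y))

# The density peaks at y = mu, where f(mu) = 1 / (sigma * sqrt(2*pi)).
print(round(normal_pdf(mu, mu, sigma), 4))  # → 0.3989
```

Because *σ* alone sets the spread, doubling *σ* halves the peak height while the total area under the curve remains 1.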

*Guideline 7. Report a precise P value*. The 2004 guidelines said a precise *P* value does two things: it communicates more information with the same amount of ink, and it permits each reader to assess a statistical result (6). Moreover, only with a precise *P* value can we estimate the chances that we will reproduce someone else’s scientific result (5, 22). Table 1 reiterates guidelines for the appropriate rounding of precise *P* values.
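One way to make such rounding concrete is a small formatting helper. This is a hypothetical sketch: the function name and the thresholds below reflect one common convention and are my assumptions, not necessarily the rules in Table 1:

```python
# A hypothetical sketch of precise-P-value formatting. The thresholds
# are assumptions for illustration; consult Table 1 for the actual
# rounding rules the guideline recommends.
def format_p(p: float) -> str:
    """Format a P value as precise text rather than 'P < 0.05'."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("a P value must lie in [0, 1]")
    if p < 0.001:
        return "P < 0.001"      # below this, an inequality is customary
    if p < 0.1:
        return f"P = {p:.3f}"   # three decimal places for small P
    return f"P = {p:.2f}"       # two decimal places otherwise

print(format_p(0.0312))   # → P = 0.031
print(format_p(0.47))     # → P = 0.47
print(format_p(0.00004))  # → P < 0.001
```

Reporting "P = 0.031" rather than "P < 0.05" costs nothing and lets each reader weigh the evidence for themselves.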

As Dale and I wrote in 2004 and in 2007 (6, 8), these guidelines embody fundamental concepts in statistics (10), and they are consistent with *Scientific Style and Format* (3), the style manual used by the American Physiological Society. These guidelines are also fully supported by the American Statistical Association (14, 23), which only recently issued its first-ever position statement on a specific component of statistical practice (23).

In 2007 Tom Lang (17) wrote:

Meeting high standards should be required in all research and publication efforts, not merely recommended. We require investigators to use the scientific method; we do not just recommend that they do. We require investigators to explain their experimental procedures; we do not just recommend that they do. We even require investigators to format their references correctly; we do not just recommend that they do. Authors should be required to report statistics as completely and as accurately as every other aspect of the research. To allow ignorance, tradition, personal preference, or the practices of other journals to justify anything less is to legitimize the very forces that science attempts to overcome.

Given that I am a relentless optimist, I am hopeful—even expectant—that *Advances* readers and authors will embrace these two guidelines. As always, if you are moved to comment, I am happy to listen.

## DISCLOSURES

No conflicts of interest, financial or otherwise, are declared by the author(s).

## ACKNOWLEDGMENTS

I thank Ronald Wasserstein (Executive Director, American Statistical Association), Rita Scheman (Director of Publications and Executive Editor, American Physiological Society), Gordon Drummond (School of Medicine, University of Edinburgh, UK), Tom Lang (Medical Writing and Editing Program, University of Chicago), Calvin Williams (Clemson University, Clemson, South Carolina), past Editor-in-Chief Rob Carroll, past Deputy Editor Jon Kibble, Deputy Editor Barb Goodman, and Associate Editors David Harris, Mohammed Khalil, Jodie Krontiris-Litowitz, Bryan Mackenzie, Nancy Pelaez, Kathy Ryan, Arif Siddiqui, and Dee Silverthorn for their helpful comments and suggestions.

## Footnotes

↵1 The aforementioned papers and the first three papers in my *Explorations in Statistics* series are included in a *Reporting Statistics* collection, which is available at http://advan.physiology.org/Reporting-Statistics.

↵2 The distinction between a standard deviation and a standard error is substantive: a standard deviation estimates the variability among observations in a sample, but a standard error of the mean estimates the uncertainty about the actual value of some population mean. This distinction has been emphasized repeatedly (1, 4, 6, 10, 11).

↵3 If the standard deviation is not a meaningful estimate of variability, as when the data are skewed, then the range or interquartile range provides a more meaningful estimate.

- Copyright © 2017 the American Physiological Society