statistics; +221 new citations


Grant DJ, Manichaikul A, Alberg AJ, Bandera EV, Barnholtz-Sloan J, Bondy M, Cote ML, Funkhouser E, Moorman PG, Peres LC, Peters ES, Schwartz AG, Terry PD, Wang XQ, Keku TO, Hoyo C, Berchuck A, Sandler DP, Taylor JA, O’Brien KM, Velez Edwards DR, Edwards TL, Beeghly-Fadiel A, Wentzensen N, Pearce CL, Wu AH, Whittemore AS, McGuire V, Sieh W, Rothstein JH, Modugno F, Ness R, Moysich K, Rossing MA, Doherty JA, Sellers TA, Permuth-Way JB, Monteiro AN, Levine DA, Setiawan VW, Haiman CA, LeMarchand L, Wilkens LR, Karlan BY, Menon U, Ramus S, Gayther S, Gentry-Maharaj A, Terry KL, Cramer DW, Goode EL, Larson MC, Kaufmann SH, Cannioto R, Odunsi K, Etter JL, Huang RY, Bernardini MQ, Tone AA, May T, Goodman MT, Thompson PJ, Carney ME, Tworoger SS, Poole EM, Lambrechts D, Vergote I, Vanderstichele A, Van Nieuwenhuysen E, Anton-Culver H, Ziogas A, Brenton JD, Bjorge L, Salvensen HB, Kiemeney LA, Massuger LFAG, Pejovic T, Bruegl A, Moffitt M, Cook L, Le ND, Brooks-Wilson A, Kelemen LE, Pharoah PDP, Song H, Campbell I, Eccles D, DeFazio A, Kennedy CJ, Schildkraut JM.

Cancer Med. 2019 Apr 18. doi: 10.1002/cam4.1996. [Epub ahead of print]


Stats + Stories: How Autocrats Use Statistics – WYSO


WYSO is partnering with Stats and Stories, a podcast produced at Miami University.

Autocratic or authoritarian leaders work to control just about everything that affects the lives of those they rule, particularly information. Restricting the news and information individuals can access makes them reliant on the state as they make sense of the world. It can also make them easier to rule. Authoritarianism and information are the focus of this episode of Stats and Stories. Rosemary Pennington is joined by regular panelists John Bailer, chair of Miami's Department of Statistics, and Richard Campbell of Media, Journalism and Film. Their guest is Arturas Rozenas, an assistant professor in New York University's Department of Politics. His research focuses on authoritarian states, electoral competition and statistical methodology. He came to the studio after traveling to Miami on a visit sponsored by the Havighurst Center for Russian and Post-Soviet Studies as part of its colloquium series on Russian media strategies at home and abroad.

Stats and Stories is a partnership between Miami University's Departments of Statistics and Media, Journalism and Film and the American Statistical Association. You can follow the program on Twitter or iTunes. If you'd like to share your thoughts on the program, send an e-mail, and be sure to listen for future editions of Stats and Stories, where we discuss the statistics behind the stories and the stories behind the statistics.

Weather Talk: Tornado statistics tell a story – INFORUM


Statistics are a terribly callous method of discussing tornadoes, but data do tell a story. Five fatal tornadoes have struck in the United States so far this year, resulting in a total of 27 deaths. The tornadoes occurred on four separate days.

The earliest fatality was in east-central Mississippi, outside of Columbus, Mississippi, on Feb. 23. Eight days later, 23 people were killed by a tornado in a populated rural area near Auburn, Alabama. This past Sunday, there were three fatalities from three separate tornadoes: two in eastern Texas and one in Mississippi, again outside of Columbus, Mississippi.

Of the 27 deaths, 21 occurred in mobile homes, four in houses, one outside and one in a permanent, substantial structure. All of the fatal tornadoes so far this year have been rated EF2, EF3 or EF4. The Alabama tornado that killed 23 people was rated EF4. The EF rating is an estimate of peak wind speed based on the damage.

statistics; +248 new citations


Sifuentes-Cantú C, Contreras-Yáñez I, Gutiérrez M, Torres-Ruiz J, Zamora-Medina MDC, Romo-Tena J, Castillo JP, Ruiz-Medrano E, Martín-Nares E, Quintanilla-González L, Bermúdez-Bermejo P, Pérez-Rodríguez R, López-Morales J, Whittall-García L, García-Galicia J, Valdés-Corona L, Pascual-Ramos V.

J Clin Rheumatol. 2019 Apr 12. doi: 10.1097/RHU.0000000000001036. [Epub ahead of print]


Coincidence helps with quantum measurements


Quantum phenomena are experimentally difficult to deal with, and the effort increases dramatically with the size of the system. For some years now, scientists have been able to control small quantum systems and investigate their quantum properties. Such quantum simulations are considered promising early applications of quantum technologies that could solve problems where simulations on conventional computers fail. However, the quantum systems used as quantum simulators must continue to grow, and the entanglement of many particles remains a phenomenon that is difficult to understand. “In order to operate a quantum simulator consisting of ten or more particles in the laboratory, we must characterize the states of the system as accurately as possible,” explains Christian Roos from the Institute of Quantum Optics and Quantum Information at the Austrian Academy of Sciences.

So far, quantum state tomography, which describes a quantum system completely, has been used for the characterization of quantum states. This method, however, involves a very high measuring and computing effort and is not suitable for systems of more than half a dozen particles. Two years ago, the researchers led by Christian Roos, together with colleagues from Germany and Great Britain, presented a very efficient method for characterizing complex quantum states, but it can describe only weakly entangled states. That limitation is circumvented by a new method presented last year by the theorists led by Peter Zoller, which can characterize any entangled state. Together with experimental physicists Rainer Blatt and Christian Roos and their team, they have now demonstrated this method in the laboratory.

Quantum simulations on larger systems

“The new method is based on the repeated measurement of randomly selected transformations of individual particles. The statistical evaluation of the measurement results then provides information about the degree of entanglement of the system,” explains Andreas Elben from Peter Zoller’s team. The Austrian physicists demonstrated the process in a quantum simulator consisting of several ions arranged in a row in a vacuum chamber. Starting from a simple state, the researchers let the individual particles interact with the help of laser pulses and thus generate entanglement in the system. “We perform 500 local transformations on each ion and repeat the measurements a total of 150 times in order to then be able to use statistical methods to determine information about the entanglement state from the measurement results,” explains PhD student Tiff Brydges from the Institute of Quantum Optics and Quantum Information.
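For intuition, the statistical core of such a randomized-measurement protocol can be sketched for the simplest possible case, a single qubit. The code below is a toy illustration under idealized assumptions (exact, noise-free outcome probabilities rather than a finite number of measurement shots), not the laboratory procedure itself: it averages a Hamming-distance-weighted combination of outcome probabilities over Haar-random single-qubit rotations, whose mean equals the purity of the state.

```python
import cmath
import math
import random

def haar_random_unitary():
    """Sample a Haar-random 2x2 unitary (global phase omitted)."""
    theta = math.asin(math.sqrt(random.random()))
    phi = random.uniform(0.0, 2.0 * math.pi)
    psi = random.uniform(0.0, 2.0 * math.pi)
    return [
        [cmath.exp(1j * psi) * math.cos(theta), cmath.exp(1j * phi) * math.sin(theta)],
        [-cmath.exp(-1j * phi) * math.sin(theta), cmath.exp(-1j * psi) * math.cos(theta)],
    ]

def outcome_probs(rho, u):
    """Measurement probabilities after rotating rho by u: the diagonal of u rho u†."""
    probs = []
    for s in (0, 1):
        p = 0.0
        for a in range(2):
            for b in range(2):
                p += (u[s][a] * rho[a][b] * u[s][b].conjugate()).real
        probs.append(p)
    return probs

def estimate_purity(rho, n_unitaries=10000):
    """Average X_u = 2 * sum_{s,s'} (-2)^(-D(s,s')) P_u(s) P_u(s') over random u;
    the mean of X_u equals the purity Tr(rho^2)."""
    total = 0.0
    for _ in range(n_unitaries):
        p0, p1 = outcome_probs(rho, haar_random_unitary())
        # For one qubit the Hamming-distance weight (-2)^(-D) is 1 or -1/2.
        total += 2.0 * (p0 * p0 + p1 * p1 - p0 * p1)
    return total / n_unitaries

random.seed(0)
pure = [[1.0, 0.0], [0.0, 0.0]]    # |0><0|: purity 1
mixed = [[0.5, 0.0], [0.0, 0.5]]   # maximally mixed: purity 1/2
print(estimate_purity(pure))   # close to 1.0, up to sampling error
print(estimate_purity(mixed))  # close to 0.5
```

The same idea extends to N qubits with weights (-2)^(-D(s,s')) over measured bitstrings s and s', which is what makes the approach far cheaper than full state tomography.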

In the work now published in Science, the Innsbruck physicists characterize the dynamical evolution of a system of ten ions, as well as a ten-ion subsystem of a 20-ion chain. “In the laboratory, this new method helps us a lot because it enables us to understand our quantum simulator even better and, for example, to assess the purity of the entanglement more precisely,” says Christian Roos, who assumes that the new method can be successfully applied to quantum systems with up to several dozen particles.

The scientific work was published in the journal Science and financially supported by the European Research Council ERC and the Austrian Science Fund FWF. “This publication shows once again the fruitful cooperation between the theoretical physicists and the experimental physicists here in Innsbruck,” emphasizes Peter Zoller. “At the University of Innsbruck and the Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences, young researchers from both fields find very good conditions for research work that is competitive worldwide.”

Story Source:

Materials provided by University of Innsbruck. Note: Content may be edited for style and length.

Russell Westbrook’s Statistics Are Misleading – Fox Sports Radio


Listen to Colin Cowherd explain why NBA basketball can often lead to the most misleading statistics in sports when it comes to individual player accolades.

Russell Westbrook has astoundingly averaged a triple-double in each of the last three seasons, a feat previously achieved only by Oscar Robertson nearly 60 years ago.

To someone not well acquainted with professional basketball, it would be natural to look at Westbrook's numbers and conclude that the player with the best stats must be the best player in the game, right?

Check out the audio below as Colin thinks individual statistics are the biggest misconception of NBA basketball and explains why he doesn’t even have Russell Westbrook in his Top 12 players rankings.

WPU gives power outage update, shares energy statistics – Daily Globe


WPU General Manager Scott Hain updated the commission on the community’s recovery from last week’s ice storm and power outages.

“Our last outage of this magnitude was six years ago,” he said. He explained that crews spent about 24 hours Thursday and Friday just assessing damage before beginning repairs.

Repairing downed poles and iced-up lines required the help of an additional eight trucks and about 50 linemen from Kansas and Nebraska.

Throughout the process, several major buildings, including Sanford Worthington Medical Center, the municipal wastewater plant and Prairie Justice Center, ran entirely on self-generated power. That significantly reduced the workload for linemen, Hain noted.

“We still have some work to do,” said Hain, who expressed optimism that crews will have all power restored as soon as possible.

He also presented energy statistics for commissioners’ review.

Every six months, the city of Owatonna publishes a study of average energy prices in 13 southern Minnesota towns. According to the study, Worthington ranks second-lowest in residential and commercial electricity rates and lowest in industrial electricity rates.

In water prices, Worthington’s average fares slightly worse, at fifth-lowest in residential and seventh-lowest in commercial and industrial water.

Additionally, Hain shared a breakdown of WPU’s power supply mix, as follows:

  • 33% coal
  • 27% hydro power
  • 18% wind
  • 12% nuclear
  • 10% natural gas
  • Less than 1% solar

Of those sources, a total of 45% (hydro and wind) is renewable and 57% (hydro, wind and nuclear) is carbon-free.
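Those totals follow directly from the mix figures; a quick arithmetic check (shares as listed above, with the sub-1% solar share left out of the sums):

```python
# WPU power-supply shares as reported, in percent (solar, <1%, omitted)
mix = {"coal": 33, "hydro": 27, "wind": 18, "nuclear": 12, "natural_gas": 10}

renewable = mix["hydro"] + mix["wind"]        # hydro + wind
carbon_free = renewable + mix["nuclear"]      # hydro + wind + nuclear

print(f"renewable: {renewable}%, carbon-free: {carbon_free}%")
# prints: renewable: 45%, carbon-free: 57%
```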

The proportion of hydro power is likely to increase when the Red Rock Rural Water System comes online, Hain explained. He added that solar power, which is available 20% of the time, is also likely to increase as further developments are made.

Commissioners also awarded a bid for the Clary Street and McMillan Street water reconstruction project. The only bid submitted came from Duininck Inc. at $1,125,098.50 — about $76,000 (7%) more than the engineer’s estimate.

Hain said even though the bid is high, the project will probably meet its total budget.

Can science survive without statistical significance? – Science News


In science, the success of an experiment is often determined by a measure called “statistical significance.” A result is considered to be “significant” if the difference observed in the experiment between groups (of people, plants, animals and so on) would be very unlikely if no difference actually exists. The common cutoff for “very unlikely” is that you’d see a difference as big or bigger only 5 percent of the time if it wasn’t really there — a cutoff that might seem, at first blush, very strict.

It sounds esoteric, but statistical significance has been used to draw a bright line between experimental success and failure. Achieving an experimental result with statistical significance often determines if a scientist’s paper gets published or if further research gets funded. That makes the measure far too important in deciding research priorities, statisticians say, and so it’s time to throw it in the trash. 

More than 800 statisticians and scientists are calling for an end to judging studies by statistical significance in a March 20 comment published in Nature. An accompanying March 20 special issue of the American Statistician makes the manifesto crystal clear in its introduction: “‘statistically significant’ — don’t say it and don’t use it.”

There is good reason to want to scrap statistical significance. But with so much research now built around the concept, it’s unclear how — or with what other measures — the scientific community could replace it. The American Statistician offers a full 43 articles exploring what scientific life might look like without this measure in the mix.

This isn’t the first call for an end to statistical significance, and it probably won’t be the last. “This is not easy,” says Nicole Lazar, a statistician at the University of Georgia in Athens and a guest editor of the American Statistician special issue. “If it were easy, we’d be there already.”

What does statistical significance offer?

Many scientific studies today are designed around a framework of “null hypothesis significance testing.” In this type of test, a scientist compares results of an experiment asking, say, if a drug reduces depression in a treated versus control group. The scientist compares the results against the hypothesis that no difference really exists between the groups. The goal is not to prove that the drug fights depression. Instead, the idea is to gather enough data (eventually) to reject the hypothesis that it doesn’t.

The scientist will compare the groups using a statistical analysis that results in a P value, a result between 0 and 1, with the “P” standing for probability. The value signifies the likelihood that repeating the experiment would yield a result with a difference as big (or bigger) than the one the scientist got if the drug doesn’t actually reduce depression. Smaller P values mean that the scientist is less likely to see a difference that large if no difference really exists. In scientific parlance, the value is “statistically significant” if P is less than or equal to 0.05.

When scientists interpret P values correctly, they can be useful for finding out how compatible experimental results are with the scientists’ expectations, Lazar says. Because a P value is a probability, it “has variability attached to it,” she explains. “If I repeated my procedure over and over, I’d get a whole range of P values. Some would be significant, some wouldn’t.”

Because of this variability, P equal to 0.05 was never meant to be an end result. Instead, it was more of a beginning, “something that would cause you to raise your eyebrows and investigate further,” Lazar says.

Where did the idea for statistical significance come from?

Many scientists now interpret P equal to 0.05 as a cutoff between an experiment that “worked” and one that didn’t. That cutoff can be attributed to one man: famed 20th century statistician Ronald Fisher. In a 1925 monograph, Fisher offered a simple test that research scientists could use to produce a P value. And he offered the cutoff of P equals 0.05, saying “it is convenient to take this point as a limit in judging whether a deviation [a difference between groups] is to be considered significant or not.”

That “convenient” suggestion has reverberated far beyond what Fisher probably intended. In 2015, more than 96 percent of papers in the PubMed database of biomedical and life science papers boasted results with P less than or equal to 0.05.

What’s the problem with statistical significance?

But science and statistics have never been so simple as to cater to convenient cutoffs. A P value, no matter how small, is just a probability. It doesn’t mean an experiment worked. And it doesn’t tell you if the difference in results between experimental groups is big or small. In fact, it doesn’t even say whether the difference is meaningful.

The 0.05 cutoff has become shorthand for scientific quality, says Blake McShane, one of the authors on the Nature commentary and a statistician at Northwestern University in Evanston, Ill. “First you show me your P less than 0.05, and then I will go and think about the data quality and study design,” he says. “But you better have that [P less than 0.05] first.”

That shorthand also draws a bright line between scientific findings that are “good” and those that are “bad,” when in fact no such line exists. “On one side of the threshold, you label it one thing, and if it falls on the other side, it’s something else,” McShane says. But nothing in statistics, or reality, actually works that way. Strictly speaking, he says, “there’s no difference between a P value of 0.049 and a P value of 0.051.”

What would it take to get rid of statistical significance?

Because statistical significance is entrenched in science culture, being used widely in decisions on whether to fund, promote or publish scientific research, a switch to anything else would take huge effort, says Steven Goodman, a Stanford University medical research methodologist who contributed one of the 43 articles of the special issue of the American Statistician. “The currency in that economy is the P value,” he says.

Computer programs that calculate a P value automatically from experimental data have helped to make the measure even more of a “crutch,” Goodman notes. Using it as the default means that scientists “haven’t developed the scientific muscles to understand what it means to reason under true uncertainty.” True uncertainty doesn’t mean scientists throw up their hands and say the data don’t reveal anything. In statistics, “uncertainty” refers to how much data is expected to vary from one experiment to another. Learning to interpret that uncertainty in scientific results, he notes, would require a lot more statistical training than many scientists usually get.

Shifting to one or many new kinds of statistics that better capture uncertainty would also mean that scientists would have to put more effort into making judgment calls. Journal editors and peer reviewers would have to learn to rely on other criteria to determine if a study was worth publishing. Scientific journals might have to change their standards. “It’s very, very hard to dislodge,” Goodman says. “The world of science is not ruled or directed by statisticians.”

Partially because of the potential challenges of change, some scientists don’t want to throw out statistical significance cutoffs just yet. Some want to start by raising the bar. Instead of P less than or equal to 0.05 as a cutoff, Valen Johnson, a statistician at Texas A&M University in College Station, prefers P less than or equal to 0.005 — a 0.5 percent chance that someone would observe a difference as big or bigger than the difference observed if the null hypothesis were true. “It’s not quite an absolute threshold, but we’d have fewer false positives.”
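One way to see the appeal of a stricter cutoff is to simulate many studies, some testing real effects and some testing null effects. The sketch below is a self-contained illustration (normal approximation, made-up effect sizes, not drawn from Johnson's work) that counts what share of "discoveries" at each threshold are actually false positives:

```python
import math
import random
import statistics

def p_value(a, b):
    """Two-sided P for a difference in means (normal approximation)."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

def false_discovery_share(alpha, n=50, studies=4000):
    """In a world where half of all tested effects are truly null, return the
    share of 'significant' results (p <= alpha) that are false positives."""
    false_hits, true_hits = 0, 0
    for i in range(studies):
        effect = 0.0 if i % 2 == 0 else 0.5   # half null, half real
        a = [random.gauss(0.0, 1.0) for _ in range(n)]
        b = [random.gauss(effect, 1.0) for _ in range(n)]
        if p_value(a, b) <= alpha:
            if effect == 0.0:
                false_hits += 1
            else:
                true_hits += 1
    return false_hits / (false_hits + true_hits)

random.seed(7)
print(false_discovery_share(0.05))    # a visible share of discoveries are false
print(false_discovery_share(0.005))   # a much smaller share
```

The price of the stricter threshold is that genuinely real effects are missed more often; the cutoff trades false positives for false negatives rather than eliminating error.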

Is there a better way to judge if a study is solid?

Unfortunately, there is no single alternative that everyone agrees would be better for all experiments. “Everyone knows what they’re against,” Goodman says. “Very few people know what they’re for.”

New computer programs offer people who aren’t statisticians the freedom to move beyond the P value measure, notes Julia Haaf, a psychological methodologist at the University of Amsterdam in the Netherlands. “The reason why P values got so popular was because it was the only thing people could do” throughout much of the 20th century, she says. “Now you have options.”

Scientists could add confidence intervals to their results. These are estimated ranges of values (based on your experiment) that are likely to include the true difference between treatments or conditions. Scientists could also embrace Bayes factors, as Haaf has done, comparing how much the data in an experiment support one hypothesis over another hypothesis. And depending on how an experiment is designed, sometimes a test that spits out a P value can still be the right choice.
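A confidence interval takes only a few lines to compute. The sketch below uses a normal approximation and made-up measurements, so the numbers are purely illustrative:

```python
import math
import statistics

def diff_ci(a, b, z=1.96):
    """Approximate 95% confidence interval for mean(a) - mean(b)
    (normal approximation; z = 1.96 gives roughly 95% coverage)."""
    diff = statistics.mean(a) - statistics.mean(b)
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    return diff - z * se, diff + z * se

# Hypothetical outcome scores for a treated and a control group
treated = [5.1, 4.8, 6.0, 5.5, 4.9, 5.7, 5.3, 5.0]
control = [4.2, 4.6, 4.1, 4.9, 4.4, 4.0, 4.7, 4.3]
low, high = diff_ci(treated, control)
# Reports an estimated range for the true difference rather than a
# binary significant / not-significant verdict.
print(f"difference plausibly between {low:.2f} and {high:.2f}")
```

A Bayes factor would go a step further, quantifying how strongly the same data favor one hypothesis over another, but even the interval alone conveys the magnitude and uncertainty that a lone P value hides.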

But no matter what statistical test is chosen, a scientist should not set a cutoff to serve as a shortcut in separating scientific wheat from chaff, critics of statistical significance say. These cutoffs will always be too black and white, and scientists need to embrace the idea of statistical gray.  

In any case, scientists shouldn’t be judging an experiment’s quality by a single statistical test anyway — whatever that test may be, McShane says. Other factors may be of equal concern. “What’s the quality of your data? What’s your study design like? Do you have an understanding of the underlying mechanism?” he says. “These other factors are just as important, and often more important, than measures like P values.”

What does a future without statistical significance look like?

The P value itself is only the result of a statistical test, and no one is trying to get rid of it. Instead, the signers of the Nature manifesto are against the idea of statistical significance, where P is less than or equal to 0.05. That limit gives a false sense of certainty about results, McShane says. “Statistics is often wrongly perceived to be a way to get rid of uncertainty,” he says. But it’s really “about quantifying the degree of uncertainty.”

Embracing that uncertainty would change how science is communicated to the public. People expect clear yes-or-no answers from science, or want to know that an experiment “found” something, though that’s never truly the case, Haaf says. There is always uncertainty in scientific results. But right now scientists and nonscientists alike have bought into the false certainty of statistical significance.

Those teaching or communicating science — and those learning and listening — would need to understand and embrace uncertainty right along with the scientific community. “I’m not sure how we do that,” says Haaf. “What people want from science is answers, and sometimes the way we report data should show [that] we don’t have a clear answer; it’s messier than you think.”

Wildcats’ unofficial statistics | National – Cleveland American


Running back Gary Brightwell drags safety Rourke Freeburg into the secondary on his run in the University of Arizona’s spring game at Arizona Stadium, Saturday, April 13, 2019, in Tucson, Ariz.

Official statistics were not kept for the Arizona Wildcats’ spring game Saturday. However, the Star’s Michael Lev kept a play-by-play and compiled an unofficial version.

What follows are the stats for the quarterbacks, running backs and receivers, as well as notable plays by defensive players. They do not include QB rushing numbers or individual tackle totals.