Wagoner County Sheriff’s Office Reports Drop In Crime Statistics – News On 6

Wagoner County Sheriff Chris Elliott said new statistics released by the Oklahoma State Bureau of Investigation show roughly a 30 percent decrease in the total crime index for Wagoner County, an index that includes violent crimes throughout the area. 

The full report issued every year is put together using crime and arrest reports from police and sheriff’s departments across the state. 

Those index crimes include everything from murder, rape, and robbery to arson, along with several other violent crimes. 

Elliott said he believes that the total crime index is down in the county because of several big changes the sheriff’s office has gone through. 

Some of the main changes he points to are the restructuring of the sheriff’s office, eliminating some higher-up positions and using that money to put more deputies on the street, and hiring more jail staff and more investigators. 

Elliott said another big factor is using a Violent Crimes Task Force to target high-crime areas. He said the task force was set up with the help of a grant through the Oklahoma Attorney General’s Office. People who live in the county told News On 6 they believe those statistics can help bring more people and families into the area. 

“As someone that lives in the county it’s exciting to see, I think it’s important to have a safe place for our kids, a safe place for us to be able to just know you are able to be taken care of,” said Ashley Brown, who lives in Wagoner County. 

The full 165-page report can be found here:

Crime statistics for the state from 2002 to 2018 can be found here. 

Behind the statistics: The trauma of losing a loved one to femicide – FRANCE 24

Issued on: 03/09/2019 – 10:48 | Modified: 03/09/2019 – 12:41

France on Tuesday launched a three-month conference on domestic abuse amid mounting concerns over high femicide figures. FRANCE 24 met families of some of the victims to uncover the stories behind the statistics.

More than 100 women have been killed in France due to domestic abuse since the start of 2019, sparking a massive effort to combat the violence. France’s Minister of Gender Equality Marlène Schiappa launched a conference aimed at reducing the 222,000 incidents of marital physical or sexual violence that happen every year in France, according to official data.

As part of FRANCE 24’s coverage of the conference on femicide, reporters Catherine Norris Trent and Julie Dungelhoeff met some of the families of women killed by their partners.

‘He didn’t let her leave’

Ghylaine Bouchait was 34 years old on September 24, 2017, when she succumbed to injuries sustained during a violent altercation with her partner.

Her grieving mother and sisters recount how she tried to escape her abusive partner but never succeeded.

‘He’s killed me’

Julie Douib, a mother of two, was 34 years old on March 3, when she was shot dead by her ex-partner.

Her parents, Violetta and Lucien Douib, revisit that dreadful day, when a neighbour – after hearing shots and seeing a man leave Julie’s apartment – rushed to see their daughter lift her head and say, “He’s killed me.”

Those were her last words.

A Look at Alberta Agriculture’s Yield Estimates vs. Statistics Canada – DTN The Progressive Farmer

The only crop that Statistics Canada estimates to yield lower is canola, pegged at 40.1 bpa compared with the province’s estimate of 40.8 bpa.

The black line with markers plots the five-year average change in yield from the province’s final weekly yield estimate to the official estimates released by Statistics Canada, including any revisions made. Over the past five years, spring wheat yields have been consistently estimated higher by Statistics Canada, with an average of 6 bpa reported over and above the province’s estimates. Barley follows, also showing an average 6 bpa increase over the province’s estimates, although official yield estimates released by Statistics Canada were higher than the province’s final estimate in only four of the five years.

On average, final official yield estimates averaged close to 2 bpa higher for the canola crop and 1.4 bpa for durum. Statistics Canada’s final dry pea yield estimates were reported higher than the province’s estimates in only two of the five years (2014-18), while the five-year average yield reported by Statistics Canada in its final estimates is 0.5 bpa lower than the provincial estimate.

Given Statistics Canada’s harvested acre estimates for 2019, the 6 bpa increase in spring wheat would add roughly 1 million metric tons to the balance sheet, while the 6 bpa increase in barley yield would add 408,000 metric tons. Given the 0.5 bpa reduction in dry peas seen on average over the past five years, this would lower provincial production by approximately 26,000 mt.
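
The tonnage figures above follow from simple bushels-to-tonnes arithmetic. Here is a minimal sketch of that conversion; the harvested-acre figures are illustrative assumptions (the article does not give them), while the bushel weights are the standard 60 lb for wheat and peas and 48 lb for barley:

LB_PER_KG = 2.20462

def added_tonnes(bpa_change, harvested_acres, lb_per_bushel):
    # Extra production, in metric tons, from a yield change of bpa_change bu/ac.
    kg_per_bushel = lb_per_bushel / LB_PER_KG
    return bpa_change * harvested_acres * kg_per_bushel / 1000.0

# Harvested-acre figures below are illustrative assumptions, not article values.
print(f"Spring wheat: {added_tonnes(6.0, 6_100_000, 60):,.0f} mt")   # ~1 million mt
print(f"Barley:       {added_tonnes(6.0, 3_100_000, 48):,.0f} mt")   # ~405,000 mt
print(f"Dry peas:     {added_tonnes(-0.5, 1_900_000, 60):,.0f} mt")  # ~-26,000 mt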

Cliff Jamieson can be reached at cliff.jamieson@dtn.com

Follow him on Twitter @CliffJamieson

Connecticut police release traffic statistics for holiday weekend – Press Herald

HARTFORD, Conn. — Connecticut State Police say troopers made more than 40 arrests for driving under the influence and issued about 500 citations during Labor Day weekend.

The State Police released Labor Day traffic statistics Monday morning. Extra troopers were sent out on the highways for the holiday weekend to target drunken drivers, distracted drivers and other hazards.

State Police say they responded to more than 5,700 calls for service, helped about 250 motorists and responded to about 360 accidents, including one fatality.

Forty-three people were arrested for driving under the influence of alcohol or drugs.

Most of the citations were issued for speeding. Drivers were also cited for seatbelt violations and distracted driving.

The additional patrols started at midnight Friday and ran through Monday night.

What Statistics Can and Can’t Tell Us About Ourselves – The New Yorker

Harold Eddleston, a seventy-seven-year-old from Greater Manchester, was still reeling from a cancer diagnosis he had been given that week when, on a Saturday morning in February, 1998, he received the worst possible news. He would have to face the future alone: his beloved wife had died unexpectedly, from a heart attack.

Eddleston’s daughter, concerned for his health, called their family doctor, a well-respected local man named Harold Shipman. He came to the house, sat with her father, held his hand, and spoke to him tenderly. Pushed for a prognosis as he left, Shipman replied portentously, “I wouldn’t buy him any Easter eggs.” By Wednesday, Eddleston was dead; Dr. Shipman had murdered him.

Harold Shipman was one of the most prolific serial killers in history. In a twenty-three-year career as a mild-mannered and well-liked family doctor, he injected at least two hundred and fifteen of his patients with lethal doses of opiates. He was finally arrested in September, 1998, six months after Eddleston’s death.

David Spiegelhalter, the author of an important and comprehensive new book, “The Art of Statistics” (Basic), was one of the statisticians tasked by the ensuing public inquiry to establish whether the mortality rate of Shipman’s patients should have aroused suspicion earlier. Then a biostatistician at Cambridge, Spiegelhalter found that Shipman’s excess mortality—the number of his older patients who had died in the course of his career over the number that would be expected of an average doctor’s—was a hundred and seventy-four women and forty-nine men at the time of his arrest. The total closely matched the number of victims confirmed by the inquiry.

One person’s actions, written only in numbers, tell a profound story. They gesture toward the unimaginable grief caused by one man. But at what point do many deaths become too many deaths? How do you distinguish a suspicious anomaly from a run of bad luck? For that matter, how can we know in advance the number of people we expect to die? Each death is preceded by individual circumstances, private stories, and myriad reasons; what does it mean to wrap up all that uncertainty into a single number?

In 1825, the French Ministry of Justice ordered the creation of a national collection of crime records. It seems to have been the first of its kind anywhere in the world—the statistics of every arrest and conviction in the country, broken down by region, assembled and ready for analysis. It’s the kind of data set we take for granted now, but at the time it was extraordinarily novel. This was an early instance of Big Data—the first time that mathematical analysis had been applied in earnest to the messy and unpredictable realm of human behavior.

Or maybe not so unpredictable. In the early eighteen-thirties, a Belgian astronomer and mathematician named Adolphe Quetelet analyzed the numbers and discovered a remarkable pattern. The crime records were startlingly consistent. Year after year, irrespective of the actions of courts and prisons, the number of murders, rapes, and robberies reached almost exactly the same total. There is a “terrifying exactitude with which crimes reproduce themselves,” Quetelet said. “We know in advance how many individuals will dirty their hands with the blood of others. How many will be forgers, how many poisoners.”

To Quetelet, the evidence suggested that there was something deeper to discover. He developed the idea of a “Social Physics,” and began to explore the possibility that human lives, like planets, had an underlying mechanistic trajectory. There’s something unsettling in the idea that, amid the vagaries of choice, chance, and circumstance, mathematics can tell us something about what it is to be human. Yet Quetelet’s overarching findings still stand: at some level, human life can be quantified and predicted. We can now forecast, with remarkable accuracy, the number of women in Germany who will choose to have a baby each year, the number of car accidents in Canada, the number of plane crashes across the Southern Hemisphere, even the number of people who will visit a New York City emergency room on a Friday evening.

In some ways, this is what you would expect from any large, disordered system. Think about the predictable and quantifiable way that gases behave. It might be impossible to trace the movement of each individual gas molecule, but the uncertainty and disorder at the molecular level wash out when you look at the bigger picture. Similarly, larger regularities emerge from our individually unpredictable lives. It’s almost as though we woke up each morning with a chance, that day, of becoming a murderer, causing a car accident, deciding to propose to our partner, being fired from our job. “An assumption of ‘chance’ encapsulates all the inevitable unpredictability in the world,” Spiegelhalter writes.

But it’s one thing when your aim is to speak in general terms about who we are together, as a collective entity. The trouble comes when you try to go the other way—to learn something about us as individuals from how we behave as a collective. And, of course, those answers are often the ones we most want.

The dangers of making individual predictions from our collective characteristics were aptly demonstrated in a deal struck by the French lawyer André-François Raffray in 1965. He agreed to pay a ninety-year-old woman twenty-five hundred francs every month until her death, whereupon he would take possession of her apartment in Arles.

At the time, the average life expectancy of French women was 74.5 years, and Raffray, then forty-seven, no doubt thought he’d negotiated himself an auspicious contract. Unluckily for him, as Bill Bryson recounts in his new book, “The Body,” the woman was Jeanne Calment, who went on to become the oldest person on record. She survived for thirty-two years after their deal was signed, outliving Raffray, who died at seventy-seven. By then, he had paid more than twice the market value for an apartment he would never live in.

Raffray learned the hard way that people are not well represented by the average. As the mathematician Ian Stewart points out in “Do Dice Play God?” (Basic), the average person has one breast and one testicle. In large groups, the natural variability among human beings cancels out, the random zig being countered by the random zag; but that variability means that we can’t speak with certainty about the individual—a fact with wide-ranging consequences.

Every day, millions of people, David Spiegelhalter included, swallow a small white statin pill to reduce the risk of heart attack and stroke. If you are one of those people, and go on to live a long and happy life without ever suffering a heart attack, you have no way of knowing whether your daily statin was responsible or whether you were never going to have a heart attack in the first place. Of a thousand people who take statins for five years, the drugs will help only eighteen to avoid a major heart attack or stroke. And if you do find yourself having a heart attack you’ll never know whether it was delayed by taking the statin. “All I can ever know,” Spiegelhalter writes, “is that on average it benefits a large group of people like me.”

That’s the rule with preventive drugs: for most individuals, most of those drugs won’t do anything. The fact that they produce a collective benefit makes them worth taking. But it’s a pharmaceutical form of Pascal’s wager: you may as well act as though God were real (and believe that the drugs will work for you), because the consequences otherwise outweigh the inconvenience.

There is so much that, on an individual level, we don’t know: why some people can smoke and avoid lung cancer; why one identical twin will remain healthy while the other develops a disease like A.L.S.; why some otherwise similar children flourish at school while others flounder. Despite the grand promises of Big Data, uncertainty remains so abundant that specific human lives remain boundlessly unpredictable. Perhaps the most successful prediction engine of the Big Data era, at least in financial terms, is the Amazon recommendation algorithm. It’s a gigantic statistical machine worth a huge sum to the company. Also, it’s wrong most of the time. “There is nothing of chance or doubt in the course before my son,” Dickens’s Mr. Dombey says, already imagining the business career that young Paul will enjoy. “His way in life was clear and prepared, and marked out before he existed.” Paul, alas, dies at age six.

And yet, amid the oceans of unpredictability, we’ve somehow managed not to drown. Statisticians have navigated a route to maximum certainty in an uncertain world. We might not be able to address insular quandaries, like “How long will I live?,” but questions like “How many patient deaths are too many?” can be tackled. In the process, a powerful idea has arisen to form the basis of modern scientific research.

A stranger hands you a coin. You have your suspicions that it’s been weighted somehow, perhaps to make heads come up more often. But for now you’ll happily go along with the assumption that the coin is fair.

You toss the coin twice, and get two heads in a row. Nothing to get excited about just yet. A perfectly fair coin will throw two heads in a row twenty-five per cent of the time—a probability known as the p-value. You keep tossing and get another head. Then another. Things are starting to look fishy, but even if you threw the coin a thousand times, or a million, you could never be absolutely sure it was rigged. The chances might be minuscule, but in theory a fair coin could still produce any combination of heads.

Scientists have picked a path through all this uncertainty by setting an arbitrary threshold, and agreeing that anything beyond that point gives you grounds for suspicion. Since 1925, when the British statistician Ronald Fisher first suggested the convention, that threshold has typically been set at five per cent. You’re seeing a suspicious number of heads, and once the chance of a fair coin turning up at least as many heads as you’ve seen dips below five per cent, you can abandon your stance of innocent until proved guilty. In this case, five heads in a row, with a p-value of 3.125 per cent, would do it.
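
A minimal sketch of the coin arithmetic: each extra head halves the p-value, and five heads in a row is the first run that slips under Fisher’s five-per-cent line:

# Chance that a fair coin produces n heads in a row, compared with the 5% threshold.
for n in range(1, 8):
    p = 0.5 ** n
    note = "  <-- first run below the 5% threshold" if p < 0.05 and 0.5 ** (n - 1) >= 0.05 else ""
    print(f"{n} heads in a row: p = {p:.5f} ({p * 100:.3f}%){note}")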

This is the underlying principle behind how modern science comes to its conclusions. It doesn’t matter if we’re uncovering evidence for climate change or deciding whether a drug has an effect: the concept is identical. If the results are too unusual to have happened by chance—at least, not more than one time out of twenty—you have reason to think that your hypothesis has been vindicated. “Statistical significance” has been established.

Take a clinical trial on aspirin run by the Oxford medical epidemiologist Richard Peto in 1988. Aspirin interferes with the formation of blood clots, and can be used to prevent them in the arteries of the heart or the brain. Peto’s team wanted to know whether aspirin increased your chances of survival if it was administered in the middle of a heart attack.

Their trial involved 17,187 people and showed a remarkable effect. In the group that was given a placebo, 1,016 patients died; of those who had taken the aspirin, only 804 died. Aspirin didn’t work for everyone, but it was unlikely that so many people would have survived if the drug did nothing. The numbers passed the threshold; the team concluded that the aspirin was working.
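
As a rough illustration of how such numbers get tested, here is a standard chi-squared test on a two-by-two table built from the trial’s headline figures. The article gives only the total of 17,187 participants and the deaths in each arm, so the near-even split below is an assumption made purely for the sketch:

from scipy.stats import chi2_contingency

# Deaths are taken from the article; the arm sizes are an assumed near-even split.
placebo_n, placebo_deaths = 8_600, 1_016
aspirin_n, aspirin_deaths = 8_587, 804

table = [
    [placebo_deaths, placebo_n - placebo_deaths],
    [aspirin_deaths, aspirin_n - aspirin_deaths],
]
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi-squared = {chi2:.1f} (dof = {dof}), p = {p_value:.1e}")
# p is many orders of magnitude below 0.05: implausible that aspirin did nothing.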

Such statistical methods have become the currency of modern research. They’ve helped us to make great strides forward, to find signals in noisy data. But, unless you are extraordinarily careful, trying to erase uncertainty comes with downsides. Peto’s team submitted the results of their experiment to an illustrious medical journal, which came back with a request from a referee: could Peto and his colleagues break the results down into groups? The referee wanted to know how many women had been saved by the aspirin, how many men, how many with diabetes, how many in this or that age bracket, and so on.

Peto objected. By subdividing the big picture, he argued, you introduce all kinds of uncertainty into the results. For one thing, the smaller the size of the groups considered, the greater the chance of a fluke. It would be “scientifically stupid,” he observed, to draw conclusions on anything other than the big picture. The journal was insistent, so Peto relented. He resubmitted the paper with all the subgroups the referee had asked for, but with a sly addition. He also subdivided the results by astrological sign. It wasn’t that astrology was going to influence the impact of aspirin; it was that, just by chance, the number of people for whom aspirin works will be greater in some groups than in others. Sure enough, in the study, it appeared as though aspirin didn’t work for Libras and Geminis but halved your risk of death if you happened to be a Capricorn.

Using sufficiently large groups might help to insure against flukes, but there’s another trap that befalls unsuspecting scientists. It’s one that Peto’s experiment also serves to underline, and one that has led to nothing less than a statistical crisis at the heart of science.

The easiest way to understand the issue is by returning to the conundrum of the biased coin. (Coins are the statistician’s pet example for a reason.) Suppose that you’re particularly keen not to draw a false conclusion, and decide to hang on to your hypothesis that the coin is fair unless you get twenty heads in a row. A fair coin would do this only about one in a million times, so it’s an extraordinarily high level of proof to demand—far beyond the threshold of five per cent used by much of science.

Now, imagine I gave out fair coins to every person in the United States and asked everyone to complete the same test. Here’s the issue: even with a threshold of one in a million—even with everything perfectly fair and aboveboard—we would still expect around three hundred of these people to throw twenty heads in a row. If they were following Fisher’s method, they’d have no choice but to conclude that they’d been given a trick coin. The fact is that, wherever you decide to set the threshold, if you repeat your experiment enough times, extremely unlikely outcomes are bound to arise eventually.
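
The expected count in that thought experiment is just the per-person probability multiplied by the population; a short sketch, with the U.S. population rounded to 330 million as an assumption:

p_twenty_heads = 0.5 ** 20                  # roughly one in a million per person
us_population = 330_000_000                 # rounded assumption
expected = us_population * p_twenty_heads
print(f"P(20 heads in a row) = {p_twenty_heads:.2e}")
print(f"Expected number of 'trick coin' verdicts: {expected:.0f}")   # ~315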

Apple learned this shortly after the iPod Shuffle was launched. The device would play songs from a user’s library at random, but Apple found itself inundated with complaints from users who were convinced that their Shuffle was playing songs in a pattern. Patterns are much more likely to occur than we think: even if several songs by the same artist, or consecutive songs from an album, had only a tiny probability of appearing next to one another in the playlist, so many people were listening to their iPods that such seemingly strange coincidences were inevitable.

In science, the situation is starker, and the stakes are higher. With a threshold of only five per cent, one in twenty studies will inadvertently find evidence for nonexistent phenomena in its data. That’s another reason that Peto resisted the proposal that he look at various subpopulations: the greater the number of groups you look at, the greater your chances of seeing spurious effects. And this is far from being only a theoretical concern. In medicine, a study of forty-nine of the most cited medical publications from 1990 to 2003 found that the conclusions of sixteen per cent were contradicted by subsequent studies. Psychology fares worse still in these surveys (possibly because its studies are cheaper to reproduce). A 2015 study found that attempts to reproduce a hundred psychological experiments yielded significant results in only thirty-six per cent of them, even though ninety-seven per cent of the initial studies reported a p-value under the five-per-cent threshold. And scientists fear that, as with the iPod Shuffle, the fluke results tend to get an outsized share of attention.

Many high-profile studies are now widely believed to have been founded on such flukes. You may have come across the research on power posing, which suggests that adopting a dominant stance helps to reduce stress hormones in the body. The study has a thousand citations, and an accompanying TED talk has amassed more than fifty million views, but the findings have failed to be replicated and are now regarded as a notable example of the flaws in Fisher’s methods.

It’s not that scientific fraud is common; it’s that too many researchers have failed to handle uncertainty with sufficient care. This issue has only been exacerbated in the era of Big Data. The more data that are collected, cross-referenced, and searched for correlations, the easier it becomes to reach false conclusions. Illustrating this point, Spiegelhalter includes a 2009 study in which researchers put a subject into an fMRI scanner and analyzed the response in 8,064 brain sites while showing photographs of different human expressions. The scientists wanted to see what regions of the brain were lighting up in response to the photographs and used a threshold of a tenth of one per cent for their experiment. “The twist was that the ‘subject’ was a 4lb Atlantic Salmon which ‘was not alive at the time of scanning,’ ” Spiegelhalter notes.

But, even at that threshold, run enough tests and you’re bound to cross it eventually. Of the more than eight thousand sites in the dead fish’s brain the researchers inspected, sixteen duly showed a statistically significant response. And the fear is that equally unfounded conclusions, albeit less apparently so, will routinely be drawn, with the false assurance of “statistical significance.” Science still stands up to scrutiny, precisely because it invites scrutiny. But the p-value crisis suggests that our current procedures could be improved upon.
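
Under the simplifying assumption that the 8,064 tests were independent, the expected number of false positives at a 0.1 per cent threshold is easy to compute (in real fMRI data neighbouring sites are correlated, so this is only a sketch):

from scipy.stats import binom

n_sites, alpha = 8_064, 0.001               # sites tested, per-test threshold
expected_false_positives = n_sites * alpha
p_sixteen_or_more = binom.sf(15, n_sites, alpha)   # P(at least 16 by chance alone)
print(f"Expected false positives under independence: {expected_false_positives:.1f}")
print(f"Chance of 16 or more purely by chance:       {p_sixteen_or_more:.3f}")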

Scientists now say that researchers should declare their hypothesis in advance of a study, in order to make fishing for significant results much more difficult. Most agree that the incentives of science need to be changed, too—that studies designed to replicate the work of others should be valued more highly. There are also suggestions for an alternative way to present experimental findings. Many people have called for the focus of science to be on the size of the effect—how many lives are saved by a drug, for instance—rather than on whether the data for some effect cross some arbitrary threshold. How impressed should we be by very strong evidence for a very weak effect? Let’s go back to aspirin. A gigantic study—it tracked twenty-two thousand individuals over five years—demonstrated that taking small daily doses of the drug would reduce the risk of a heart attack. The p-value, the probability of this happening by chance, was tiny: 0.001 per cent. But so, too, was the effect size. A hundred and thirty otherwise healthy individuals would have to take the drug to prevent a single heart attack, and all the while each person would be increasing his or her risk of adverse side effects. It’s a risk that is now deemed to outweigh the benefits for most people, and the advice for older adults to take a baby aspirin a day has recently been recanted.
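
To see how a tiny p-value can coexist with a weak effect, here is an illustrative calculation. The event rates below are assumptions chosen only so that the number needed to treat comes out near the article’s figure of 130; they are not taken from the study:

from scipy.stats import norm

n_per_arm = 11_000           # ~22,000 participants split into two arms (assumption)
rate_control = 0.0177        # assumed five-year event rate without aspirin
rate_aspirin = 0.0100        # assumed event rate with aspirin

arr = rate_control - rate_aspirin            # absolute risk reduction
nnt = 1 / arr                                # number needed to treat (~130)
pooled = (rate_control + rate_aspirin) / 2
se = (2 * pooled * (1 - pooled) / n_per_arm) ** 0.5
z = arr / se
p_value = 2 * norm.sf(z)                     # two-sided test of the difference
print(f"ARR = {arr:.4f}, NNT = {nnt:.0f}, p = {p_value:.1e}")
# The p-value is tiny, yet roughly 130 people must take the drug for one to benefit.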

But perhaps the real problem is how difficult we find it to embrace uncertainty. Earlier this year, eight hundred and fifty prominent academics, including David Spiegelhalter, signed a letter to Nature arguing that the issue can’t be solved with a technical work-around. P-values aren’t the problem; the problem is our obsession with setting a threshold.

Drawing an arbitrary line in the sand creates an illusion that we can divide the true from the false. But the results of a complicated experiment cannot be reduced to a yes-or-no answer. Back when Spiegelhalter was asked to determine whether Dr. Harold Shipman’s mortality rate should have aroused suspicion earlier, he swiftly decided that the standard test of statistical significance would be a “grossly inappropriate” way to monitor doctors. The medical profession would effectively be pointing the finger of suspicion at one in every twenty innocent doctors—thousands of clinicians in the U.K. Doctors would be penalized for treating higher-risk patients.

Instead, Spiegelhalter and his colleagues proposed an alternative test, which took account of patient deaths as they occurred, contrasting the accumulating deaths with the expected number. Year on year, it sequentially compares the likelihood that a doctor’s high mortality rates are a run of bad luck with something more suspicious, and raises an alarm once the evidence starts to build. But even this highly sophisticated method will, owing to the capricious whims of chance, eventually cast suspicion on the innocent. Indeed, as soon as a monitoring system for general practitioners was piloted, it “immediately identified a G.P. with even higher mortality rates than Shipman,” Spiegelhalter writes. This was an unlucky doctor who worked in a coastal town with an elderly population. The result highlights how careful you need to be even with the best statistical methods. In Spiegelhalter’s words, while statistics can find the outliers, it “cannot offer reasons why these might have occurred, so they need careful implementation in order to avoid false accusations.”
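
The article does not spell out the exact test, so the snippet below is only a generic CUSUM-style sketch of the idea: compare accumulating deaths with the expected number and raise an alarm once the running excess crosses a limit. The yearly counts and the threshold are made up for illustration:

def monitor(observed_by_year, expected_by_year, threshold=15.0):
    # Track the running excess of observed over expected deaths, resetting at
    # zero (a basic CUSUM), and flag any year in which it crosses the threshold.
    excess = 0.0
    for year, (obs, exp) in enumerate(zip(observed_by_year, expected_by_year), 1):
        excess = max(0.0, excess + (obs - exp))
        yield year, excess, excess >= threshold

# Made-up yearly death counts for a single, hypothetical doctor:
observed = [12, 14, 13, 19, 21, 24, 26]
expected = [11.0, 11.5, 12.0, 12.0, 12.5, 12.5, 13.0]
for year, excess, alarm in monitor(observed, expected):
    print(f"year {year}: cumulative excess = {excess:5.1f}{'  ALARM' if alarm else ''}")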

Statistics, for all its limitations, has a profound role to play in the social realm. The Shipman inquiry concluded that, if such a monitoring system had been in place, it would have raised the alarm as early as 1984. Around a hundred and seventy-five lives could have been saved. A mathematical analysis of what it is to be human can take us only so far, and, in a world of uncertainty, statistics will never eradicate doubt. But one thing is for sure: it’s a very good place to start. ♦

Watch out, investors: Statistics can lie. Your 401(k) will thank you seeing the truth – USA TODAY

Ken Fisher, Special to USA TODAY | Published 7:08 a.m. ET Sept. 1, 2019 | Updated 9:56 a.m. ET Sept. 1, 2019

“There are three kinds of lies: lies, damned lies, and statistics.”

Mark Twain popularized that quote over a century ago. It couldn’t be more true now.

Pundits constantly bombard investors with wild economic and stock market claims backed with charts and data. Much of it is bunk. But it isn’t hard to see through it if you’re armed with a few simple tricks.

Those tricks come from one of my all-time favorite books: Darrell Huff’s 1954 classic, How to Lie With Statistics. It’s humorous, illustrated and non-academic—a great guide about how people torture data to force them to fit their argument. Here are its five most useful nuggets.

1. Beware stats from surveys 

Surveys are only as good as their representativeness. But most have baked-in biases and aren’t truly representative. Political polls, wrong in so many recent elections, prove this. So do opinion surveys of all stripes. As Huff writes: “No conclusion that ‘sixty-seven percent of the American people are against’ something should be read without the lingering question, sixty-seven percent of which American people?”

2. Don’t take “averages” at face value 

According to the Social Security Administration, average U.S. annual wages in 2017 were both $48,251.57 and $31,561.49. How? Technically there are different kinds of averages. The higher figure is the arithmetic mean. The lower is the median — the number in the middle, with half of wages higher and half lower. Both are “right,” but the mean average often gets skewed by outliers. You can’t assess an average until you know what kind it is and what skew it may hide. Keep this in mind when digesting climate stats. As Huff showed, Oklahoma City’s average temperature from 1890 – 1952 was a cool 60.2 degrees—but temperatures in that window ranged from -17 to 113.
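
A tiny illustration of the same point with a made-up wage sample: one large outlier drags the arithmetic mean well above the median.

from statistics import mean, median

# An invented wage sample: one very high earner skews the mean upward.
wages = [22_000, 28_000, 31_000, 33_000, 35_000,
         38_000, 42_000, 55_000, 95_000, 400_000]
print(f"mean   = ${mean(wages):,.0f}")     # pulled up by the $400k outlier
print(f"median = ${median(wages):,.0f}")   # the middle value, far lower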

3. Consider “axis” fraud 

Usually, in economic graphs, the horizontal line or “axis” shows dates and the vertical axis shows quantity or size. It’s easy to make anything look way wilder than reality simply by stretching the vertical axis scale or skipping a whole chunk of it. In 2014 someone did this with the Dow to make that year’s market movement look just like the run-up to the 1929 crash. It went viral—scaring many. Properly graphed, the two periods looked nothing alike. Blatant fear-mongering, lying with charts.
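
For illustration, here is how the same series can be made to look calm or dramatic purely by choosing the vertical axis; the data are invented.

import matplotlib.pyplot as plt

months = list(range(12))
index = [100 + 0.4 * m for m in months]     # a gentle drift upward (made up)

fig, (ax_full, ax_zoomed) = plt.subplots(1, 2, figsize=(8, 3))
ax_full.plot(months, index)
ax_full.set_ylim(0, 120)                    # full axis: the rise looks modest
ax_full.set_title("Full axis")

ax_zoomed.plot(months, index)
ax_zoomed.set_ylim(99.5, 105)               # stretched axis: same data looks wild
ax_zoomed.set_title("Truncated axis")

plt.tight_layout()
plt.show()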

4. Know your profits

Does an article about corporate profits report earnings per share? Or does it talk about gross operating profit margins? Is it discussing return on assets? Or on equity? As Huff explained, “You can, for instance, express exactly the same fact by calling it a one percent return on sales, a fifteen percent return on investment, a ten-million dollar profit, an increase in profits of forty percent (compared with 1935-39 average), or a decrease of sixty percent from last year.” Writers and companies will often pick the one that best suits their angle.

5. Correlation isn’t causation

As Huff explained, once upon a time, a study showed non-smokers got better college grades than smokers. But did not smoking help students get better grades? Or did bad grades drive more people to smoke? Is it a mere coincidence? Tyler Vigen’s hilarious website, Spurious Correlations, shows how easy it is to draw false connections between two totally unconnected things. Did you know annual per-capita cheese consumption correlates with the number of people who died from tangled bedsheets? Or that U.S. chicken consumption correlates with oil imports? People pull similar stunts with market data every day. That’s where we get seasonal myths.
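
One way to convince yourself how easily such accidental correlations arise is to simulate pairs of completely unrelated series and check how often they look strongly related; the sketch below uses independent random walks, which are notorious for this, and entirely simulated data.

import random

random.seed(1)

def random_walk(n):
    # A simple Gaussian random walk of length n.
    position, walk = 0.0, []
    for _ in range(n):
        position += random.gauss(0, 1)
        walk.append(position)
    return walk

def pearson(xs, ys):
    # Plain Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Count how many of 1,000 pairs of unrelated 100-step walks look "correlated".
strong = sum(abs(pearson(random_walk(100), random_walk(100))) > 0.7
             for _ in range(1_000))
print(f"{strong} of 1,000 pairs of independent series have |r| > 0.7")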

For more, get the book.

It’s cheap and a breezy 144 pages. Your 401(k) will thank you.

Ken Fisher is founder and executive chairman of Fisher Investments, author of 11 books, four of which were New York Times bestsellers, and is No. 200 on the Forbes 400 list of richest Americans. Follow him on Twitter: @KennethLFisher

The views and opinions expressed in this column are the author’s and do not necessarily reflect those of USA TODAY.

Submissions Sought for Undergraduate Statistics Project Competition

The purpose of the Undergraduate Statistics Project Competition (USPROC) is to encourage the development of data analysis skills, enhance presentation skills, and recognize outstanding work by undergraduate statistics students.

There are two submission deadlines. The first is December 20 for class and research projects happening in summer/fall. The second deadline is June 26, 2020, for winter/spring courses and projects and year-long projects. Winners will be announced within 2–3 months of the submission deadlines.

There are two categories for submissions to the competition:

Cash prizes will be awarded in both categories.

Undergraduate Statistics Research Conference Coming in November

The Electronic Undergraduate Statistics Research Conference (eUSR) will take place November 1.

The conference will feature a keynote address by Jennifer Thompson, biostatistician at Devoted Health; talks from the Undergraduate Statistics Research Project Competition winners; and sessions on graduate school and careers in statistics and data science.

Undergraduate students who want to share their work should submit an abstract by October 23.

Why Be a Statistical Consultant?

Mary Kwasny is an associate professor in the department of preventive medicine and an active member of the Biostatistics Collaboration Center at Northwestern University, Feinberg School of Medicine. She has been enjoying the art of statistical consulting and collaboration for more than 20 years in academic medical centers and external non-profits.

As a relatively new member of the statistics profession, you might be considering a career as a statistical consultant. You may also be wondering what it is exactly that statistical consultants do. (As a more established statistician, I hear that question a lot. I also want to reply that we don’t do anything “exactly”; we allow for error, unlike mathematicians.)

Well, as statisticians, we know how to deal with data—collect it, clean it, analyze it, and interpret the findings. We are not, however, experts in the subject matter in which the data was collected. But, the subject matter experts, or clients, who collect data are—guess what!—not experts in statistics. Working in a particular industry or research field, or on a specific project, the statistical consultant needs to not only understand, but be able to translate, the subject matter into a statistical problem and then translate findings back to the client.

Anyone who has read a translation of a translation, or played Telestrations or Telephone, knows this is not always easy. As statistical consultants, we have to be able to understand how the data are generated and what measures are being used. We have to translate research questions into problems that can be analyzed with data, determine the appropriate type of statistical analysis, and sometimes be the “honest broker” when interpreting the results to avoid bias.

Apart from knowing statistics and learning about the research field you work in, you will also need communication skills to be able to talk to clients, understand their language, and communicate statistics effectively. You will need business skills, business acumen, time-management skills, networking talents, and a host of other skills. Putting yourself out there as a new professional is hard, isn’t it?

Consulting Comes in Many Colors

There are different types of consulting and different types of consulting jobs available. On one end of the spectrum, there are short-term consultations, which may just be one or two meetings. These typically are for quantitatively skilled individuals who are comfortable running their own analyses, but appreciate an expert looking over their work. On the other end of the spectrum, there are long-term consultations; these are occasionally called collaborations in an academic setting. These are projects in which you are an integral part of the research team from start to finish, and they may span many years. And there are projects that could be anywhere in between. Consulting may include being an expert witness in a trial or helping someone with a master’s thesis in physical therapy.

As for settings, you could work on your own as an independent contractor or in a small group. You could also work in a large group of consultants, in a consulting firm, or be the “in-house” statistician in a large company. I personally enjoy working in an academic environment; I am a member of a collaboration center in which part of my job is consulting. Consulting may not be part of your job description at all, but you may want to do some consulting “on the side” if your current employer allows it.

Consulting Takes You to the Unknown

Almost every consulting project is a giant step into the unknown. You meet with a client, discuss their research questions who knows how many times, learn more about their problem, and then you do the statistics part—possibly including, but not limited to, providing assistance for study design, data collection, proposing an analysis plan, conducting an analysis, and writing up the results. But it doesn’t stop here. You need to estimate the time and cost for your work and later invoice for your work—and all this is for the straightforward projects.

It’s definitely not like coursework, in which you can expect to fit longitudinal models for your Longitudinal Analysis class or survival models for your Survival Analysis class. Any consulting project has the potential to be … anything!

My colleague and friend Masha Kocherginsky said that in more than 15 years of consulting, she has never encountered a real-world consulting project that’s straight out of a textbook, and it’s never the same analysis twice. As a result, it’s never boring!

So Why Would You Want to Do This?

For me, it was precisely because of all the above. John Tukey once said, “The best thing about being a statistician is that you get to play in everyone’s backyard.” I love learning and problem solving. I love seeing what other people are excited about. I love helping them learn more about their data, which, in turn, helps them learn more about their subject matter. Topics change, people are different, and the one thing you know will happen is you will learn something and help someone. This alone can be addictive.

There are times I call consulting a mix between statistical triage and statistical improvisation. In triage, the key is to quickly assess, diagnose, and assist a patient. Any actor will tell you their craft is honed when they are forced to improvise. Not that you are “making things up” as you go, but rather you are attempting to give unscripted advice to an unknown question.

Short consults can be difficult this way, but not everyone needs a full-time statistician. Other projects provide great opportunities to learn new topic domains, as well as to delve into potentially different statistical methods or skills. For these projects, I use the analogy that statistics are the scaffolding that helps construct or renovate the building. Statistical consulting provides a means to keep statistical and communication skills sharp. Additionally, like teaching an introductory statistics course, it provides the opportunity to discover different and potentially better ways to explain things. All told, statistical consulting is educational, exciting, and challenging.

When I was a graduate student, I did a summer internship and had my first “outside the classroom” statistical consult. A pediatric surgical fellow wanted to look at the importance of staging laparotomy in pediatric Hodgkin’s disease. As an intern, I had the good fortune to work on this consult under the supervision of a faculty member, who was there to facilitate the consult. It was the first time I used categorization and regression trees (in my defense, I don’t think random forests had been developed at that time). I learned something about medical procedures and treatments. The investigator was really nice and brought doughnuts to our meetings. And I got my first collaborative publication out of it. It was amazing. I was hooked. I never worked with that investigator again, but recently looked him up only to discover it was his first publication as well. He is now an endowed chair in surgical research at a prominent children’s hospital. Apparently, he was hooked, too.

Consulting Perks and Struggles

Not all projects have gone as smoothly or been as productive—and I quickly realized doughnuts were not the norm. There have been problems with investigators wanting to run inappropriate methods, times when I thought I understood a data set and was—in fact—wrong, and times when I sent an invoice only to have the recipient email back, “FOR WHAT!!!” I would hazard a guess that these problems are common and that every statistical consultant, anywhere in the world, has dealt with one or more of the same struggles. But, over the years, I have learned to be a better collaborator and to make it a priority to align expectations so those problems don’t occur quite as much. Granted, bad consulting experiences will happen, but, fortunately, there are many more positive experiences and enough enjoyment in the “random consult” to continue including collaboration in my job description.

Another perk of consulting is that, if you are interested in methodologic research, it is a hotbed for motivating examples and finding omissions and gaps in knowledge. Clients may seek statistical help after data collection, and it may be clear that traditional statistical methods are not appropriate. Other times, when investigators are collecting data, they change protocol out of necessity. These changes may not have gotten a statistical stamp of approval beforehand. For example, I once worked with an investigator who failed to tell me they changed urine collection from a 24-hour collection to a 12-hour collection until the end of the trial. While pragmatic and understandable, had they consulted me on the change, I would have requested a period during which both were collected so we could properly adjust numbers taken under the different conditions.

In many of these situations, we can make assumptions, appropriately and clearly state those assumptions and potential limitations, and still provide a timely analysis. However, there may be more statistically efficient or better methods that could be developed over time to address these adaptations. The bonus with these motivational examples is that, provided you have the investigator’s consent, you automatically have data to use as an example!

Apart from the statistical, educational, and communication opportunities consulting provides, it is also a convenient way to expand your professional network. Being newer to the workforce, this may not seem like an important reason to become a consultant, but thanks to different collaborators over the years, I have developed a network of experts in various fields. These individuals have referred other investigators in need of statistical assistance to me and provide me with feedback if I’m looking for second opinions or clarifications when working on another project—or, since I work in a medical school, when looking for a trusted medical specialist.

Consulting for New Statisticians

To new statisticians, I would strongly recommend some consulting, be it on the job or as pro bono work. It is a great way to keep learning and honing your statistical craft. I would also suggest you don’t go it alone; find a mentor, colleague, or boss who is available for advice. This is especially helpful when you have one of those difficult clients.

Independent consulting can be a fabulous career choice, but first it is important to have a network of potential clients and a portfolio of work. As you develop as a consultant, you will be able to better estimate how long a project might take, more easily figure out the real question the investigator is asking, and develop an expanding toolbox of statistical methods to be used for other projects and new problems. While you learn and hone these skills, it is vital to have backup and advice.

The Statistical Consulting Section is active in the online ASA Community and caters to all types of consulting—within universities, pro-bono, on the side, sole proprietorships, or wherever and however consulting is done. This is a tremendous resource for those who are thinking of, or actively consulting in, any of those environments. So, go! Consult! Have fun!