Statistics ministry seeks GST data to improve national accounts


NEW DELHI: The statistics ministry has sought quarterly goods and services tax (GST) data and information on tax paying units to improve the quality of both National Accounts Statistics and state-level accounts.

The ministry of statistics and programme implementation (MoSPI) has reached out to the finance ministry for GST data on tax collections along with summary information on units, classification-wise, that have paid taxes for use in compilation of national and state-level accounts, officials familiar with the development told ET.

“We have asked finance ministry for GST data on output, input and revenue,” said an official. “They are aware of our request but are waiting for the GST data and system to stabilise.” Independent economists said the move will make proxies cleaner, reduce data collection costs and help obtain higher quality statistical data.

“GST should be leveraged for national statistics because it will improve data by making it quick, timely and robust,” said D K Joshi, chief economist at top credit rating firm Crisil. “Besides reducing costs, the response burden will also go down substantially. This process should be initiated as soon as possible,” he said.

Once a list of units registered under GST is provided to MoSPI to update business registers, these can be used to conduct various statistical surveys, experts said.

At present, the statistics released by MoSPI are based on administrative sources, surveys and censuses conducted by the central and state governments and non-official sources and studies.

To address privacy and competition related concerns, MoSPI has suggested that the finance ministry share data without naming any companies. “They are dealing with data of companies which is sensitive from competition point of view,” said the government official quoted earlier. “Also, firms might not be comfortable one government department sharing their details with another.”

Keeping in view the legal provisions and practices in other countries such as Australia, where tax data is shared with statistical agencies, it was suggested that similar protocols be worked out by the Central Board of Indirect Taxes and Customs to share tax data with statistical agencies.

Vital statistics, Oct. 18


Forms to report births to the News-Press are available at Mosaic Life Care, just outside maternity. Forms are normally picked up Tuesdays and Fridays. Forms also are available at the News-Press front desk.

Stacy and Simon Wertenberger, St. Joseph, a girl born Oct. 8.

Katherine Webb and Jeffrey McElroy Jr., Cameron, Missouri, a girl born Oct. 11.

Lauren and Wil Anderson, Cosby, Missouri, a boy born Oct. 11.

Ivory Wilfong and Stephen Kirsch III, St. Joseph, a girl born Oct. 11.

Baleigh and Josh Carrithers, St. Joseph, a girl born Oct. 11.

Dayonna McGaughy and Trevor Genson, St. Joseph, a boy born Oct. 12.

Lexie Droz and Matthew Schmille, Amazonia, Missouri, a girl born Oct. 13.

Marissa Wolfenbarger and Logan Hazzard, St. Joseph, a boy born Oct. 14.

Vanessa and Ben Boyer, St. Joseph, a boy born Oct. 16.


Shannon Shineman and Andrew Watts, Oak Grove, Missouri, a boy born Oct. 8, at Fairfax Community Hospital in Fairfax, Missouri.


Judith D. Gasper and Ronald E. Gasper

The Surprising Statistics Of Stock Market Corrections


Investors typically have a vague idea of what they’re getting themselves into by buying stocks but don’t quite understand the mechanics behind market corrections. The mainstream media isn’t helping investors make good decisions because creating fear drives viewership. Investors’ memories are short, and market pullbacks are normal.

Personal finance writers and financial advisors often push suboptimal strategies that feel good, like excessively large cash allocations and dollar-cost averaging. By understanding what is statistically likely to happen, you can know what to expect when the market dips and answer other portfolio strategy questions, like whether you statistically do better waiting for pullbacks in cash or investing money immediately.

The stock market doesn’t go straight up.

On any given day, stocks have roughly a 53 percent chance of rising and a 47 percent chance of falling. Over any given 3-month period, stocks rise 68 percent of the time, dropping the other 32 percent of the time. Over a typical 12-month period, the odds of making money in stocks rise to roughly 75 percent. However, if you are in the market for a long enough period of time, there is a 100 percent chance that you will experience temporary price declines at times.
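The way win rates grow with holding period can be illustrated with a quick Monte Carlo sketch. The 0.03 percent daily drift and 1 percent daily volatility below are illustrative assumptions of mine, not the data behind the figures above:

```python
import random

# Simulate the fraction of holding periods that end with a gain,
# using an assumed small positive daily drift (illustrative only).
random.seed(42)

def win_rate(days, trials=5000):
    """Fraction of simulated holding periods that finish positive."""
    wins = 0
    for _ in range(trials):
        total_log_return = sum(random.gauss(0.0003, 0.01) for _ in range(days))
        if total_log_return > 0:
            wins += 1
    return wins / trials

for label, days in [("1 day", 1), ("3 months", 63), ("12 months", 252)]:
    print(f"{label}: {win_rate(days):.0%} of simulated periods finished up")
```

Even with these toy parameters, the pattern matches the article's point: the daily odds are close to a coin flip, but the longer the holding period, the higher the fraction of winning periods.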


Source: Thomson Reuters

After bottoming in early 2009, stocks have rallied massively but have not gone straight up. Recent history shows seven declines of 9.8 percent or more in the S&P 500 (SPY) since the bear market bottom (the current downturn is roughly 6 percent as of this writing). Staying the course each time was the right move. By knowing what to expect from the market, we can sleep easier and make better investment decisions. In fact, we can make an educated guess about how often market corrections will occur in the future based on past data.

How often are corrections likely to occur?

Guggenheim Funds did a research piece this August on every stock market decline from 1946 on. They found that pullbacks, or declines of 5 percent or greater, occur about 1.5 times per year. Market declines of 10 percent or greater (corrections) occur roughly 0.5 times per year. Lastly, market declines of 20 percent or greater (bear markets), occur on average about every seven years.
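Taken at face value, these frequencies translate directly into long-horizon expectations. A small sketch (the 10-year horizon is my own choice, not Guggenheim's):

```python
# Expected counts of each type of decline over an assumed 10-year
# horizon, using the per-year frequencies quoted above.
frequencies_per_year = {
    "pullbacks (5%+)": 1.5,
    "corrections (10%+)": 0.5,
    "bear markets (20%+)": 1 / 7,   # roughly one every seven years
}

horizon_years = 10
for name, per_year in frequencies_per_year.items():
    expected = per_year * horizon_years
    print(f"{name}: about {expected:.1f} expected in {horizon_years} years")
```

In other words, a decade-long investor should plan on sitting through roughly fifteen pullbacks, five corrections, and at least one bear market.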


Source: Guggenheim Funds

A couple of things jump out when comparing the recent data to the historical data. The first thing to notice is that market declines of 10 percent or greater seem to be becoming more common than they were in the past. I attribute this to the rise of quantitative trading. Markets are reacting (or overreacting) to new information quicker than they have in the past. On the flip side, markets seem to be recovering faster from declines. It’s not immediately clear whether the increase in the number of roughly 10 percent or greater declines is statistically significant. However, market corrections have implications for you as an investor.

The most obvious tactic is to move money countercyclically when you can. Pullbacks of 5-10 percent have only taken a month to recover on average. When the market does pull back, avoid selling into it if possible. If you live off of your investments, you can draw money from your bond allocation for living expenses when pullbacks happen, as they do roughly 1.5 times per year. This way, you aren’t fighting the traffic of other panicky and overleveraged people selling stocks into the slightest downturn.

Statistically, you usually only have to wait a month to recover. Also, if you happen to get a liquidity boost like an annual bonus when the market is down, you typically won’t have to wait very long to make money off the bounce-back to the upside. Surprisingly, however, the data does not back up waiting in cash for a pullback.

Should you wait for a pullback to buy stocks?

Not usually.

Talking heads on CNBC love to claim that they are “waiting for a pullback” to buy. After all, it seems like the most natural and prudent thing to do. While there is a time and place to wait for price declines (and/or interest rate declines) to buy assets, the strategy of waiting for pullbacks to buy stocks is grossly overused. In the same vein, financial advisors like to push the use of dollar-cost averaging as a way to reduce risk and increase return. However, neither of these methods has been shown to outperform simple buy-and-hold over time.

Vanguard did an amazing white paper on this phenomenon a few years ago, entitled “Dollar-Cost Averaging Just Means Taking Risk Later.” I always like to link to sources, and I actively encourage everyone who has the time to read the white paper (it’s only 8 pages). In the paper, Vanguard found that lump-sum investing (not waiting for a pullback) outperformed dollar-cost averaging/waiting to invest roughly two-thirds of the time. Here’s what they had to say about waiting:

Clearly, if markets are trending upward, it’s logical to implement a strategic asset allocation as soon as possible because it should offer a higher long-run expected return than cash.

Stocks clearly go up over time, so even though they are likely to pull back, the odds are that they will be pulling back from a higher price than the price you paid. Additionally, being invested allows you to collect dividends and interest, which you can either reinvest or use for lifestyle purposes.
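Vanguard's roughly two-thirds figure is easy to reproduce in spirit with a toy simulation. The monthly drift and volatility below are illustrative assumptions of mine, not Vanguard's methodology:

```python
import random

# Compare lump-sum investing with 12-month dollar-cost averaging under
# an assumed 0.7% mean monthly return and 4% monthly volatility.
random.seed(0)

def lump_sum_win_rate(months=12, trials=10000):
    """Fraction of simulated years in which lump sum beats DCA."""
    lump_wins = 0
    for _ in range(trials):
        returns = [random.gauss(0.007, 0.04) for _ in range(months)]
        # Lump sum: all capital invested from month one.
        lump = 1.0
        for r in returns:
            lump *= 1 + r
        # DCA: one slice of capital moves from cash to stocks each month.
        dca, invested = 0.0, 0.0
        for r in returns:
            invested += 1.0 / months
            dca = (dca + 1.0 / months) * (1 + r)
        dca += 1.0 - invested  # any cash never invested (zero here)
        if lump > dca:
            lump_wins += 1
    return lump_wins / trials

print(f"Lump sum beat DCA in {lump_sum_win_rate():.0%} of simulated years")
```

Because stocks drift upward on average, the portfolio with more time fully invested wins most of the time; DCA mostly just lowers both risk and expected return by holding cash longer.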

Many, many personal finance writers and financial advisors push waiting for pullbacks, but the data does not back their claims up. Dollar-cost averaging has a place: if you contribute to a 401(k) with each paycheck, you are dollar-cost averaging. This is acceptable because you aren’t letting the money pile up in cash while waiting for a decline to time the market.

Where dollar-cost averaging does not have a place is the practice of letting cash sit only to take the risk later. Unless you have a solid macroeconomic reason that markets are likely to decline, stocks are the place to be. Right now, I see the potential for higher interest rates to drag on GDP (especially around the residential real estate market) but see no cause for panic in the greater economy.


Over time, stock returns converge with company fundamentals, and risk moderates relative to the original amount invested. The old adage that “time in the market beats timing the market” is backed by statistical research. However, market pullbacks are common, and understanding how they work helps guide our decisions to buy and sell stocks. Knowing what to do (and what not to do) when the market pulls back helps you achieve your goals and get to where you want to be.


Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Red-zone statistics provide invaluable handicapping info


One of VSiN’s goals in pro football coverage this season is to encourage handicappers to spend more time studying red-zone performance. There are team skill sets related to maximizing offensive opportunities in close, or to repelling touchdown tries on defense. Those are often the difference-makers for bettors in games with tight point spreads.

Some of you may be skeptical about the importance of red-zone statistics. It’s true that starting a new series of downs at the 4-yard line after a big play isn’t as difficult as starting at the 20.

But, third-and-2 is a lot easier than third-and-9, yet third-down rates on offense and defense are among the most important indicator stats in football. It’s easier to rack up nice yards-per-play numbers on soft defenses or on fast tracks in domes. It’s easier to punt for distance in Denver.

It’s harder to make all of your field goals in the winter in swirling winds. Complications are always in play.

Smart analysts accept that no stat is perfect. They use a wide variety of indicators to make good faith evaluations of team strengths and weaknesses.

Problems in the red zone are one of the defining offensive characteristics of both the Giants and Jets so far in 2018.

Bottom five red-zone offenses

28th Raiders 45%
29th Giants 44%
30th Jaguars 39%
31st Texans 35%
32nd Jets 30%

The Jets don’t just trail the league; they’re way behind the other bad red-zone offenses. It would take six touchdowns in their next six red-zone trips just to reach a percentage better than Oakland’s. The current league midpoint is around 56 percent, so the Jets are obviously a long way from average. The Giants are closer to average than they are to the Jets, but still the fourth-worst offense in pro football at this skill set.
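The "six touchdowns in six trips" arithmetic can be sketched out. The article doesn't give trip counts, so the 20 red-zone trips assumed below are my own illustrative number (20 trips at 30 percent gives 6 touchdowns, which makes the claim work out):

```python
# Assumed season-to-date red-zone trips for the Jets (not from the
# article); at a 30% conversion rate, that means 6 TDs so far.
current_trips = 20
current_tds = round(current_trips * 0.30)  # 6

OAKLAND_RATE = 0.45  # Raiders' quoted red-zone percentage

def rate_after_streak(tds_added, trips_added):
    """Red-zone TD rate after a streak of additional trips."""
    return (current_tds + tds_added) / (current_trips + trips_added)

for n in range(1, 7):
    rate = rate_after_streak(n, n)
    marker = " <- passes Oakland" if rate > OAKLAND_RATE else ""
    print(f"after {n} straight TD trips: {rate:.1%}{marker}")
```

Under these assumptions, five straight touchdown trips still leaves the Jets at 11-of-25 (44 percent), and only the sixth pushes them to 12-of-26 (46.2 percent), just past the Raiders.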

But, there is some good news here. The Jets have a 3-3 record anyway. It’s not uncommon for a rookie quarterback to struggle to find the end zone in close. Smart signal callers do get things figured out. In fact, this is one of the great stats for monitoring how well or how quickly a young QB is doing that. If Sam Darnold improves as he learns, the Jets have a chance to maintain their current run at the .500 mark for the full season.

Also worth noting is the Jets have performed well in terms of quick strikes on offense, and impact plays via defense or special teams. New York scored 34 and 42 points the past two weeks despite scoring TDs on just two of nine red-zone tries against the Broncos and Colts.

Things are much more complicated for the 1-5 Giants. Despite having a veteran quarterback, a head coach who used to be an offensive coordinator, and a few rushing or receiving weapons, the G-Men’s outlook feels bleaker because of offensive-line woes. And, even if they improve enough to play .500 ball the rest of the way, that only yields a 6-10 final record.

We encourage you to start monitoring this stat on a game-by-game basis. NFL box scores show red-zone performance, and you can search for season-to-date percentages.

The Jets draw the much tougher red-zone test this week: Sunday, they’ll play host to a Vikings team that ranks No. 2 in red-zone defense. On Monday night, the Giants catch a break in visiting the porous Falcons, who rank 30th in close on that side of the ball.

Letter: Don’t treat women like passive statistics


I’m tired of hearing about violence against women. As Dr. Jackson Katz said in a 2013 TED talk, “It’s a passive construction… It’s a bad thing that happens to women, but when you look at that term ’violence against women,’ nobody is doing it to them. It just happens to them.”

I no longer want to hear how many women were sexually assaulted. I want to hear how many men assaulted women.

I want to hear how many men raped women last year.

I want to hear how many men got teen-aged women pregnant.

Instead of treating women like passive statistics (women “were” assaulted), we should hear how many men did these things to women, AND how many men were prosecuted for their behavior.

These women were not assaulted or raped by statistics – they were raped by men.

Please, no bloviating about how the men haven’t been proven guilty yet. The women are never given that benefit of doubt about their guilt. They are forever tarnished by statements to the effect that it was all their fault, (shouldn’t have been there, shouldn’t have worn that, etc.) with no trial at all.

If we ignore the wording of how such information is presented to us, we prevent it from producing the consequences that would force that behavior to change.

Ruthann Yaeger, Rochester

Green Bay Packers vs. San Francisco 49ers: Preview, prediction, time, TV, statistics to know for ‘Monday Night Football’


Before the season started, this likely looked like one of the marquee matchups of the year. The Green Bay Packers and San Francisco 49ers on Monday night, featuring Aaron Rodgers and Jimmy Garoppolo, would have been quite the show. 

Of course, Garoppolo’s torn ACL ruined all that. And the Packers’ own up-and-down start has taken at least some of the shine off them as an inner-circle NFC contender. So now we have a flawed, spiraling 1-4 team starting its backup quarterback and No. 3 running back and also featuring a porous defense (49ers) taking on a flawed, but still dangerous 2-2-1 team playing with a gimpy star quarterback, missing two of its top three receivers, and looking iffy on defense (Packers).

Still, one can’t help but hope for some fireworks when Rodgers plays in primetime, and especially when two old-school NFC rivals get together. Can Green Bay use this week as a springboard to propel them through the rest of the season, or will the 49ers send them to their second consecutive loss? 

We’ll find out Monday night (8:15 p.m., ESPN). 

When the Packers have the ball 

If you just looked at the overall numbers of the major players involved, you might come away with the idea that Green Bay’s offense has been fantastic this season. Aaron Rodgers has completed 63 percent of his passes at 7.6 yards per attempt, his highest yards per attempt figure since the 2014 season. He has thrown 10 touchdowns against just one interception, yielding a passer rating of 100.1 — a figure which is not the best of his career but is better than where he’s been in two of the past three years. Rodgers is also averaging a career-best 314.4 passing yards per game. 

Rodgers’ No. 1 wideout Davante Adams is off to a terrific start. Adams has caught at least five passes in all five of Green Bay’s games and has scored a touchdown in four of five. After breaking out with a monster nine-catch, 140-yard performance last week, Adams has a season-long line of 37 catches for 425 yards and four scores, and he’s caught a career-best 67 percent of the passes thrown his way. 


Even the Green Bay running game looks pretty damn good. The Packers as a whole are averaging 4.6 yards per carry, the ninth-best figure in the league. And that’s despite the fact that Green Bay’s leading ball-carrier, Jamaal Williams, has averaged just 3.7 yards a pop while toting the rock more than twice as often as the next guy in line, Aaron Jones.

But while the Packers are racking up yards, they are not exactly racking up scores. Their offense has produced exactly as many field goals as touchdowns so far this season, and has bogged down repeatedly in the red zone. So even though the Packers are rarely going three-and-out (14 percent of drives, third-lowest in the league), rarely punting (31 percent, fourth-lowest), and gaining a lot of yards (34 per drive, eighth-best in the league), they rank just 19th in the NFL in points per game and 20th in points per drive. 

Some of that low scoring output is due to the bad luck of Mason Crosby missing four field goals and an extra point last week against the Lions. Add in those 13 points and suddenly Green Bay ranks a much more respectable 12th in points per game and 10th in points per drive. The Packers can’t expect Crosby to be perfect going forward but something closer to what he had done through the first four weeks of the season (10 of 11 on field goal attempts and 8 of 9 on extra points) is reasonable. 

On Monday, Green Bay should have a chance to rebound from last week’s shoddy effort. The 49ers have one of the NFL’s worst defenses, and one that has been particularly bad in the red zone. The Niners have allowed 5.21 points per red zone opportunity, per Football Outsiders, which ranks 22nd in the NFL. They’ve also allowed touchdowns on 63 percent of their opponents’ red zone trips, which ranks 21st in the league. San Francisco has particularly struggled against the pass, so Rodgers should be able to freely move the ball downfield even on his injured knee and even while likely working without two of his top three wideouts. (Geronimo Allison and Randall Cobb are widely expected to miss the game, and be replaced by Marquez Valdes-Scantling and Equanimeous St. Brown.) Even with Richard Sherman on the 49ers, San Francisco’s defensive backs are eminently beatable. 

And if the Packers can move the ball well and put some points on the board, we might finally see Mike McCarthy and company turn the running game over to the explosive Aaron Jones, who appears to be the team’s best back but for some reason is behind Williams and sometimes Ty Montgomery in the pecking order. 

When the 49ers have the ball 

This game would likely be much more exciting if the 49ers weren’t missing Jimmy Garoppolo and Jerick McKinnon. The San Francisco offense this season has not looked at all like what we expected it to coming into the year, as McKinnon tore his ACL in camp, Garoppolo tore his in Week 3, Marquise Goodwin and Dante Pettis have been in and out of the lineup with injuries, and even McKinnon’s backup, Matt Breida, has been banged up. 

Still, C.J. Beathard and company totaled 45 points against the Chargers and Cardinals over the past two weeks, which is a respectable total. 

The entire offense essentially runs through tight end George Kittle and wideout Pierre Garcon due to the various injuries to many of the principal players involved. Kittle has been one of the NFL’s best tight ends this season, as he has 23 catches for 399 yards and a score. He should really have even more than that, as he dropped a wide-open touchdown from Garoppolo in Week 1, saw Garoppolo miss him on a play where he was wide open later on, and has had several big plays stopped short of the end zone. The Green Bay pass defense has been pretty good against tight ends so far this season, with players at the position totaling 22 catches for 238 yards in five games. But with Buffalo and Detroit on the schedule over the past two weeks, they were facing two of the offenses that use the tight end least often. Kyle Rudolph had seven catches for 72 yards back in Week 1 and the Jordan Reed-Vernon Davis combination had six for 135 yards in Week 3. 

If Beathard and Kittle can’t find a way to get in sync, it may be tough sledding for the 49er offense. Beathard has shown precious little chemistry with any of his other pass-catchers, and fullback Kyle Juszczyk acted as the de facto No. 2 guy last week amidst all the injuries. Teams do not exactly seem scared of Beathard beating them either down the field or with throws to the perimeter — and with good reason. Beathard is averaging just 5.88 yards per attempt on 32 perimeter throws this season, per Sports Info Solutions. That ranks 31st among 39 qualified quarterbacks. Even worse, Beathard’s perimeter completions have averaged just 2.54 yards in the air, 38th among the same group of players and ahead of only Nathan Peterman.

With the uncertainty surrounding Matt Breida’s status for Monday night, the San Francisco run game may be somewhat limited as well. Alfred Morris is a solid one-cut runner but he does not provide the same variety of skills as Breida, and his presence on the field is more of a run/pass tip-off than is Breida’s. Even taking away Breida’s untouched 66-yard touchdown run, he’s averaging almost twice the yards per rush that Morris is this season. If he’s out, Raheem Mostert looks like the No. 2 guy behind Morris, and that did not exactly work out so well for San Francisco a week ago, as Mostert carried five times for 11 yards and lost a fumble. 

Knowing Kyle Shanahan, he will have some wrinkles in his playbook that take advantage of the specific weaknesses of Green Bay’s defense, and will figure out a way to get Kittle open often enough for San Francisco to look respectable moving the ball. But it also probably won’t be enough against a team with Rodgers. 

Prediction: Packers 27, 49ers 20

2018 AGU Election Statistics


In its most recent biennial election, for which voting ended last month, AGU membership chose 57 new leaders to serve 2-year terms in 2019–2020. Union officers, Board members, section officers, and student and early-career representatives to the Council were elected. Here the AGU Leadership Development/Governance Committee takes a look back at the voting this year and how 2018 stacked up in comparison to prior elections.

See the accompanying article entitled “Lozier to Be AGU President-Elect/AGU Leadership Transitions” for election results and an overview of the leadership transition that’s now getting started.

Electronic Voting

Members voted electronically, and access to voting was provided to all eligible voters for a period of 30 days. All members who joined or renewed their membership by 13 August 2018 were eligible to vote in this year’s leadership election.

Survey and Ballot Systems, Inc. (SBS) conducted the voting. SBS, which offers election planning and management services, provided unique login links and other support services for eligible voters throughout the election. On 27 September, the company certified the results, which were then reviewed by the AGU Leadership Development/Governance Committee.

Participation Rate Tops 20%

The total number of ballots validated in the election was 9,141. The number of eligible voters was 45,491, making the participation rate 20.09%. This is slightly lower than in AGU’s last election in 2016 in which the participation rate was 21.13%.

SBS provided all voters the opportunity to rate their satisfaction with the 2018 voting process, and 4,259 comments were received in response. This is a good indication of voter engagement, and 89.5% of voters continue to be satisfied or very satisfied with the voting process. Voters provided many comments and suggestions, which AGU will analyze and discuss over the coming weeks. Voter feedback is very important, and comments received in 2016 were instrumental in helping the Leadership Development/Governance Committee plan for the 2018 election.

Getting the Word Out

The election was supported by articles in Eos and other communications throughout this year. The Leadership Development/Governance Committee published the proposed slate in Eos on 7 June and the final slate on 17 July.

A special election website was created to aid members with the voting process. Promotion of the election included the AGUniverse newsletter, Eos print ads, Eos Buzz ads, the AGU home page carousel, Facebook, Twitter, and emails. The election vendor sent reminder emails to eligible voters throughout the election, as did section leaders.


After reviewing the election report provided by SBS, the committee kicked off the process to notify candidates and announce the results. The process required that all 114 candidates be notified before the election results could be publicly announced. Each of the 23 sections participating in the 2018 election provided a single point of contact to receive results and contact candidates. Leadership Development/Governance Committee members contacted Board candidates and the student and early-career candidates. Results were released online on 10 October 2018.

The Leadership Development/Governance Committee expresses its gratitude to all candidates and to all AGU members who voted.

—Leadership Development/Governance Committee: Margaret Leinen (Chair; email: [email protected]), Robert A. Duce, Luis Gonzalez, Hans Lechner, Catherine McCammon, Jim Pizzuto, Sabine Stanley, George Tsoflias, Vaughan Turekian, Tong Zhu, Chris McEntee, and Cheryl Enderlein

“Fixed mindsets” might be why we don’t understand statistics



The wrongful conviction of Sally Clark for the murder of her two sons is a famous case of misuse of statistics in the courts.

In 1999, an English solicitor named Sally Clark went on trial for the murder of her two infant sons. She claimed both succumbed to sudden infant death syndrome. An expert witness for the prosecution, Sir Roy Meadow, argued that the odds of SIDS claiming two children from such an affluent family were 1 in 73 million, likening it to the odds of backing an 80-1 horse in the Grand National four years in a row and winning every time. The jury convicted Clark, and she was sentenced to life in prison.

But the Royal Statistical Society issued a statement after the verdict insisting that Meadow had erred in his calculation and that there was “no statistical basis” for his stated figure. Clark’s conviction was overturned on appeal in January 2003, and the case has become a canonical example of the consequences of flawed statistical reasoning.

A new study in Frontiers in Psychology examined why people struggle so much to solve statistical problems, particularly why we show a marked preference for complicated solutions over simpler, more intuitive ones. Chalk it up to our resistance to change. The study concluded that fixed mindsets are to blame: we tend to stick with the familiar methods we learned in school, blinding us to the existence of a simpler solution.

“As soon as you pick up a newspaper, you’re confronted with so many numbers and statistics that you need to interpret correctly.”

Roughly 96 percent of the general population struggles with solving problems relating to statistics and probability. Yet being a well-informed citizen in the 21st century requires us to be able to engage competently with these kinds of tasks, even if we don’t encounter them in a professional setting. “As soon as you pick up a newspaper, you’re confronted with so many numbers and statistics that you need to interpret correctly,” says co-author Patrick Weber, a graduate student in math education at the University of Regensburg in Germany. Most of us fall far short of the mark.

Part of the problem is the counterintuitive way in which such problems are typically presented. Meadow presented his evidence in the so-called “natural frequency format” (for example, 1 in 10 people), rather than in terms of a percentage (10 percent of the population). That was a smart decision, since 1-in-10 is a more intuitive, jury-friendly approach. Recent studies have shown that performance rates on many statistical tasks increased from four percent to 24 percent when the problems were presented using the natural frequency format.

That makes sense, since calculating a probability is complicated, requiring three multiplications and one addition, according to Weber, before dividing the resulting two terms. In contrast, just one addition and one division are needed with the natural frequency format. “With natural frequencies, you have one reference set that you can vividly imagine,” says Weber. The probability format is more abstract and less intuitive.

A Bayesian task

But what about the remaining 76 percent who still can’t solve these kinds of problems? Weber and his colleagues wanted to figure out why. They recruited 180 students from the university and presented them with two sample problems in so-called Bayesian reasoning, framed in either a probability format or a natural frequency format.

This involves giving subjects a base-rate statistic—say, the probability of a 40-year-old woman being diagnosed with breast cancer (1 percent)—along with a sensitivity element (a woman with breast cancer will get a positive result on her mammogram 80 percent of the time) and a false alarm rate (a woman without breast cancer still has a 9.6 percent chance of getting a positive result on her mammogram). So if a 40-year-old woman tests positive for breast cancer, what is the probability she actually has the disease (the “posterior” probability estimate)?
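Assuming the standard application of Bayes’ theorem (the variable names below are ours, not the study’s), the mammogram numbers work out like this:

```python
# Bayes' theorem in the probability format: the multiplications and
# addition Weber describes, followed by a division.
base_rate = 0.01      # P(cancer) for a 40-year-old woman
sensitivity = 0.80    # P(positive mammogram | cancer)
false_alarm = 0.096   # P(positive mammogram | no cancer)

true_pos = base_rate * sensitivity             # 0.01 * 0.80 = 0.008
false_pos = (1 - base_rate) * false_alarm      # 0.99 * 0.096 = 0.09504
posterior = true_pos / (true_pos + false_pos)  # P(cancer | positive)

print(f"P(cancer | positive) = {posterior:.1%}")
```

Despite the alarming-sounding inputs, the posterior comes out to only about 7.8 percent—most positive results are false alarms, because healthy women vastly outnumber those with cancer.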


One sample problem asked participants to calculate the likelihood that a randomly selected person with fresh needle marks on their arm was a heroin addict. (Photo: Spencer Platt/Getty Images)

The mammogram problem is so well known that Weber et al. came up with their own problems. For instance, the probability of a randomly picked person from a given population being addicted to heroin is 0.01 percent (the base rate). If the person selected is a heroin addict, there is a 100 percent probability that person will have fresh needle marks on their arm (the sensitivity element). However, there is also a 0.19 percent chance that the randomly picked person will have fresh needle marks on their arm even if they are not a heroin addict (the false-alarm rate). So what is the probability that a randomly picked person with fresh needle marks is addicted to heroin (the posterior probability)?

Here is the same problem in the natural frequencies format: 10 out of 100,000 people will be addicted to heroin. And 10 out of 10 heroin addicts will have fresh needle marks on their arms. Meanwhile, 190 out of 99,990 people who are not addicted to heroin will nonetheless have fresh needle marks. So what percentage of the people with fresh needle marks is addicted to heroin?

In both cases, the answer is five percent, but the process by which one arrives at that answer is far simpler in the natural frequency format. The set of people with fresh needle marks on their arms is the sum of all the heroin addicts (10) plus the 190 non-addicts. Divide the 10 addicts by that total of 200, and you have the correct answer.
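The two routes to that five-percent answer can be checked side by side (a minimal sketch of our own, not the study’s materials):

```python
# Probability format: Bayes' theorem with the percentages from the problem.
base_rate = 0.0001     # 0.01% chance a random person is a heroin addict
sensitivity = 1.0      # addicts always have fresh needle marks
false_alarm = 0.0019   # 0.19% of non-addicts have fresh marks anyway

prob_format = (base_rate * sensitivity) / (
    base_rate * sensitivity + (1 - base_rate) * false_alarm
)

# Natural frequency format: one addition and one division.
addicts_with_marks = 10        # out of 100,000 people
non_addicts_with_marks = 190   # out of the remaining 99,990
freq_format = addicts_with_marks / (addicts_with_marks + non_addicts_with_marks)

print(f"{prob_format:.1%} vs {freq_format:.1%}")  # both ≈ 5.0%
```

Same answer either way—but the second path asks for far less bookkeeping, which is Weber’s point.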

A fixed mind

The students had to show their work, so it would be easier to follow their thought processes. Weber and his colleagues were surprised to find that even when presented with problems in the natural frequency format, half the participants didn’t use the simpler method to solve them. Rather, they “translated” the problem into the more challenging probability format with all the extra steps, because it was the more familiar approach.

That is the essence of a fixed mindset, also known as the Einstellung effect. “We have previous knowledge that we incorporate into our decisions,” says Weber. That can be a good thing, enabling us to make decisions faster. But it can also blind us to new, simpler solutions to problems. Even expert chess players are prone to this. They ponder an opponent’s move and choose the tried-and-true counter-strategy they know so well, when there might be an easier way to checkmate their opponent.

“You can rigorously define these natural frequencies mathematically.”

Weber proposes that one reason this happens is that students are simply overexposed to the probability format in their math classes. This is partly an issue with the standard curriculum, but he speculates another factor might be a prejudice among teachers that natural frequencies are somehow less mathematically rigorous. That is not the case. “You can rigorously define these natural frequencies mathematically,” Weber insists.

Changing this mindset is a tall order, requiring on the one hand a redesign of the math curriculum to incorporate the natural frequency format. But that won’t have much of an impact if the teachers aren’t comfortable using it either, so universities will also need to incorporate it into their teacher training programs. “This would give students a helpful tool to understand the concept of uncertainty, in addition to the standard probabilities,” says Weber.

DOI: Frontiers in Psychology, 2018. 10.3389/fpsyg.2018.01833  (About DOIs).

How Statistics Doomed Washington State’s Death Penalty


Last week, the American death penalty lurched one step closer to its eventual demise, as the Washington Supreme Court decided to fan away some of the smoke from Lewis Powell’s cigarette.

In State v. Gregory, the state court held that the death penalty, as imposed in the state of Washington, was unconstitutional because it was racially biased.  

How does that relate to Powell and tobacco? Fastidious and health conscious (acquaintances remember seeing him order a turkey sandwich for lunch, then set aside the bread and eat only the turkey), Powell was a non-smoker. But he also sat from 1963 until 1970 on the board of Virginia-based tobacco giant Philip Morris. Like all members of the board, he posed in the customary annual photo with a lit cigarette in his fingers.

Over the past half century, that cigarette has befouled the U.S. Supreme Court’s miserable handling of capital punishment. In 1972, the Court put a moratorium on death sentences. It held that Georgia’s capital punishment laws violated the Eighth Amendment’s ban on “cruel and unusual punishment.” The justices could not agree on a rationale—but the case came to stand for the idea that the death penalty by itself might not be unconstitutional, but would be so if state systems were arbitrary or racially biased. The result was a 15-year scramble by state legislatures to design a more consistent way of choosing which murderers to put to death.

That revised system was tested in a 1987 case called McCleskey v. Kemp. The defendant, Warren McCleskey, was an African American man sentenced under Georgia’s new procedures to die for murdering Atlanta police officer Frank Schlatt. McCleskey challenged his sentence by proffering a massive statistical study of the death penalty in Georgia by legal scholars David Baldus and Charles Pulaski and statistician George Woodworth. They concluded that, controlling for other variables, murderers who killed white people were four times more likely to receive a death sentence than those who killed African Americans. In other words, it said, Georgia was “operating a dual system,” based on race: the legal penalty for killing whites was significantly greater than for killing blacks.

Punishing by race seemed a clear violation of the Eighth Amendment’s ban on “cruel and unusual punishment” and of the Fourteenth Amendment’s guarantee of “the equal protection of the laws.”

But the Supreme Court divided. Four justices—Justices William Brennan, Thurgood Marshall, Harry A. Blackmun, and John Paul Stevens—voted to reject Georgia’s racist system. Four others—Chief Justice William H. Rehnquist and Justices Byron White, Sandra Day O’Connor, and Antonin Scalia—wanted to approve it.

Powell cast the deciding vote and wrote the majority opinion, concluding, “At most, the Baldus study indicates a discrepancy that appears to correlate with race. Apparent disparities in sentencing are an inevitable part of our criminal justice system.”

Statistical evidence, Powell argued, could provide “only a likelihood that a particular factor entered into some decisions”; it could never establish certainty that it had done so in any individual case.

Anyone from the tobacco south recognizes the logic. In 1964, during Powell’s service on the Philip Morris board, the U.S. surgeon general released the famous report, Smoking and Health. Then as now, the numbers were unmistakable: cigarettes kill smokers.

But Philip Morris, like all the rest of the industry, responded with denial. The statistical correlation, the industry said, didn’t prove anything. Something else might be causing the cancer. In response, a member of the company’s board stated, “We don’t accept the idea that there are harmful agents in tobacco.”

The logic Powell applied to the death penalty is the same logic Philip Morris employed while he served on its board. Numbers on paper don’t prove a thing.

The death-penalty lawyer Anthony Amsterdam has called McCleskey “the Dred Scott decision of our time”—the moral equivalent of the 1857 opinion denying black Americans any chance of citizenship. After his retirement, Powell told his biographer that he would change his vote in McCleskey if he could.

But it was too late. The Supreme Court was committed to cigarette-maker logic.

Last week, the Washington Supreme Court, in a fairly pointed opinion, declared that, at least in its jurisdiction, numbers have real meaning. And to those who have eyes to see, numbers make clear the truth about death-sentencing: It is arbitrary and racist in its application.

The court’s decision was based on two studies commissioned by lawyers defending Allen Gregory, who was convicted of rape and murder in Tacoma, Washington, in 2001 and sentenced to death by a jury there. The court appointed a special commissioner to evaluate the reports, hear the state’s response, and file a detailed evaluation. The evidence, the court said, showed that Washington counties with larger black populations had higher rates of death sentences—and that in Washington, “black defendants were four and a half times more likely to be sentenced to death than similarly situated white defendants.” Thus, the state court concluded, “Washington’s death penalty is administered in an arbitrary and racially biased manner”—and violated the Washington State Constitution’s prohibition on “cruel punishment.”

The court’s opinion is painstaking—almost sarcastic—on one point: “Let there be no doubt—we adhere to our duty to resolve constitutional questions under our own [state] constitution, and accordingly, we resolve this case on adequate and independent state constitutional principles.” “Adequate and independent” are magic words in U.S. constitutional law; they mean that the state court’s opinion is not based on the U.S. Constitution, and its rule will not change if the nine justices in Washington, D.C., change their view of the federal Eighth Amendment. Whatever the federal constitutionality of the death penalty, Washington state is now out of its misery.

Last spring, a conservative federal judge, Jeffrey Sutton of the Sixth Circuit, published 51 Imperfect Solutions: States and the Making of American Constitutional Law, a book urging lawyers and judges to focus less on federal constitutional doctrine and look instead to state constitutions for help with legal puzzles. That’s an idea that originated in the Northwest half a century ago, with the jurisprudence of former Oregon Supreme Court Justice Hans Linde. It was a good idea then and it’s a good idea now. State courts can never overrule federal decisions protecting federal constitutional rights; they can, however, interpret their own state constitutions to give more protection than does the federal Constitution. There’s something bracing about this kind of judicial declaration of independence, when it is done properly.

And the Washington court’s decision is well timed. It is immune to the dark clouds gathering over President Trump’s new model Supreme Court.  Viewed with the logic of history, capital punishment is on the sunset side of the mountain; but conservative Justices Neil Gorsuch and Brett Kavanaugh are likely to join the other conservatives in lashing the court even more firmly to the decaying framework of official death, no matter how much tobacco-company logic they must deploy as a disguise for its arbitrariness and cruelty.

Smoke may cloud the law in D.C. for years yet. But in the state of Washington, numbers are actual numbers. When racism and cruelty billow across the sky, that state’s courts will no longer pretend they cannot see.  


Garrett Epps is a contributing editor for The Atlantic. He teaches constitutional law and creative writing for law students at the University of Baltimore. His latest book is American Justice 2014: Nine Clashing Visions on the Supreme Court.