“Fixed mindsets” might be why we don’t understand statistics


The wrongful conviction of Sally Clark for the murder of her two sons is a famous case of misuse of statistics in the courts.

In 1999, an English solicitor named Sally Clark went on trial for the murder of her two infant sons. She claimed both succumbed to sudden infant death syndrome. An expert witness for the prosecution, Sir Roy Meadow, argued that the odds of SIDS claiming two children from such an affluent family were 1 in 73 million, likening it to the odds of backing an 80-1 horse in the Grand National four years in a row and winning every time. The jury convicted Clark, and she was sentenced to life in prison.

But the Royal Statistical Society issued a statement after the verdict insisting that Meadow had erred in his calculation and that there was “no statistical basis” for his stated figure. Clark’s conviction was overturned on appeal in January 2003, and the case has become a canonical example of the consequences of flawed statistical reasoning.

A new study in Frontiers in Psychology examined why people struggle so much to solve statistical problems, particularly why we show a marked preference for complicated solutions over simpler, more intuitive ones. Chalk it up to our resistance to change. The study concluded that fixed mindsets are to blame: we tend to stick with the familiar methods we learned in school, blinding us to the existence of a simpler solution.

“As soon as you pick up a newspaper, you’re confronted with so many numbers and statistics that you need to interpret correctly.”

Roughly 96 percent of the general population struggles with solving problems relating to statistics and probability. Yet being a well-informed citizen in the 21st century requires us to be able to engage competently with these kinds of tasks, even if we don’t encounter them in a professional setting. “As soon as you pick up a newspaper, you’re confronted with so many numbers and statistics that you need to interpret correctly,” says co-author Patrick Weber, a graduate student in math education at the University of Regensburg in Germany. Most of us fall far short of the mark.

Part of the problem is the counterintuitive way in which such problems are typically presented. Meadow presented his evidence in the so-called “natural frequency format” (for example, 1 in 10 people), rather than in terms of a percentage (10 percent of the population). That was a smart decision, since 1-in-10 is a more intuitive, jury-friendly approach. Recent studies have shown that performance rates on many statistical tasks increased from four percent to 24 percent when the problems were presented using the natural frequency format.

That makes sense, since calculating a probability is complicated, requiring three multiplications and one addition, according to Weber, before dividing the resulting two terms. In contrast, just one addition and one division are needed with the natural frequency format. “With natural frequencies, you have one reference set that you can vividly imagine,” says Weber. The probability format is more abstract and less intuitive.

A Bayesian task

But what about the remaining 76 percent who still can’t solve these kinds of problems? Weber and his colleagues wanted to figure out why. They recruited 180 students from the university and presented them with two sample problems in so-called Bayesian reasoning, framed in either a probability format or a natural frequency format.

This involves giving subjects a base-rate statistic—say, the probability of a 40-year-old woman being diagnosed with breast cancer (1 percent)—along with a sensitivity element (a woman with breast cancer will get a positive result on her mammogram 80 percent of the time) and a false alarm rate (a woman without breast cancer still has a 9.6 percent chance of getting a positive result on her mammogram). So if a 40-year-old woman tests positive for breast cancer, what is the probability she actually has the disease (the “posterior” probability estimate)?

One sample problem asked participants to calculate the likelihood that a randomly selected person with fresh needle marks on their arm was a heroin addict.
Spencer Platt/Getty Images

The mammogram problem is so well known that Weber et al. came up with their own problems. For instance, the probability of a randomly picked person from a given population being addicted to heroin is 0.01 percent (the base rate). If the person selected is a heroin addict, there is a 100 percent probability that person will have fresh needle marks on their arm (the sensitivity element). However, there is also a 0.19 percent chance that the randomly picked person will have fresh needle marks on their arm even if they are not a heroin addict (the false-alarm rate). So what is the probability that a randomly picked person with fresh needle marks is addicted to heroin (the posterior probability)?

Here is the same problem in the natural frequencies format: 10 out of 100,000 people will be addicted to heroin. And 10 out of 10 heroin addicts will have fresh needle marks on their arms. Meanwhile, 190 out of 99,990 people who are not addicted to heroin will nonetheless have fresh needle marks. So what percentage of the people with fresh needle marks is addicted to heroin?

In both cases, the answer is five percent, but the process by which one arrives at that answer is far simpler in the natural frequency format. The set of people with fresh needle marks on their arms is the sum of all the heroin addicts (10) plus the 190 non-addicts. Divide the 10 addicts by that total of 200, and you have the correct answer.
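To make the contrast concrete, here is a minimal Python sketch of both solution paths, using the numbers from the needle-marks problem; the function names are illustrative, not taken from the study.

def posterior_from_probabilities(base_rate, sensitivity, false_alarm_rate):
    # Bayes' theorem in probability format: several multiplications,
    # an addition, and a division.
    true_positives = sensitivity * base_rate
    false_positives = false_alarm_rate * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

def posterior_from_frequencies(addicts_with_marks, non_addicts_with_marks):
    # Natural frequency format: one addition and one division.
    return addicts_with_marks / (addicts_with_marks + non_addicts_with_marks)

# Probability format: 0.01% base rate, 100% sensitivity, 0.19% false alarms.
print(posterior_from_probabilities(0.0001, 1.0, 0.0019))  # ~0.05

# Natural frequency format: 10 addicts and 190 non-addicts have fresh marks.
print(posterior_from_frequencies(10, 190))  # 0.05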

A fixed mind

The students had to show their work, so it would be easier to follow their thought processes. Weber and his colleagues were surprised to find that even when presented with problems in the natural frequency format, half the participants didn’t use the simpler method to solve them. Rather, they “translated” the problem into the more challenging probability format with all the extra steps, because it was the more familiar approach.

That is the essence of a fixed mindset, also known as the Einstellung effect. “We have previous knowledge that we incorporate into our decisions,” says Weber. That can be a good thing, enabling us to make decisions faster. But it can also blind us to new, simpler solutions to problems. Even expert chess players are prone to this. They ponder an opponent’s move and choose the tried and true counter-strategy they know so well, when there might be an easier way to checkmate their opponent.

“You can rigorously define these natural frequencies mathematically.”

Weber proposes that one reason this happens is that students are simply overexposed to the probability format in their math classes. This is partly an issue with the standard curriculum, but he speculates another factor might be a prejudice among teachers that natural frequencies are somehow less mathematically rigorous. That is not the case. “You can rigorously define these natural frequencies mathematically,” Weber insists.

Changing this mindset is a tall order, requiring on the one hand a redesign of the math curriculum to incorporate the natural frequency format. But that won’t have much of an impact if the teachers aren’t comfortable using it either, so universities will also need to incorporate it into their teacher training programs. “This would give students a helpful tool to understand the concept of uncertainty, in addition to the standard probabilities,” says Weber.

DOI: Frontiers in Psychology, 2018. 10.3389/fpsyg.2018.01833

How Statistics Doomed Washington State’s Death Penalty


Last week, the American death penalty lurched one step closer to its eventual demise, as the Washington Supreme Court decided to fan away some of the smoke from Lewis Powell’s cigarette.

In State v. Gregory, the state court held that the death penalty, as imposed in the state of Washington, was unconstitutional because it was racially biased.  

How does that relate to Powell and tobacco? Fastidious and health conscious (acquaintances remember seeing him order a turkey sandwich for lunch, then set aside the bread and eat only the turkey), Powell was a non-smoker. But he also sat from 1963 until 1970 on the board of Virginia-based tobacco giant Philip Morris. Like all members of the board, he posed in the customary annual photo with a lit cigarette in his fingers.

Over the past half century, that cigarette has befouled the U.S. Supreme Court’s miserable handling of capital punishment. In 1972, the Court put a moratorium on death sentences. It held that Georgia’s capital punishment laws violated the Eighth Amendment’s ban on “cruel and unusual punishment.” The justices could not agree on a rationale—but the case came to stand for the idea that the death penalty by itself might not be unconstitutional, but would be so if state systems were arbitrary or racially biased. The result was a 15-year scramble by state legislatures to design a more consistent way of choosing which murderers to put to death.

That revised system was tested in a 1987 case called McCleskey v. Kemp. The defendant, Warren McCleskey, was an African American man sentenced under Georgia’s new procedures to die for murdering Atlanta police officer Frank Schlatt. McCleskey challenged his sentence by proffering a massive statistical study of the death penalty in Georgia by legal scholars David Baldus and Charles Pulaski and statistician George Woodworth. They concluded that, controlling for other variables, murderers who killed white people were four times more likely to receive a death sentence than those who killed African Americans. In other words, it said, Georgia was “operating a dual system,” based on race: the legal penalty for killing whites was significantly greater than for killing blacks.

Punishing by race seemed a clear violation of the Eighth Amendment’s ban on “cruel and unusual punishment” and of the Fourteenth Amendment’s guarantee of “the equal protection of the laws.”

But the Supreme Court divided. Four justices—Justices William Brennan, Thurgood Marshall, Harry A. Blackmun, and John Paul Stevens—voted to reject Georgia’s racist system. Four others—Chief Justice William H. Rehnquist and Justices Byron White, Sandra Day O’Connor, and Antonin Scalia—wanted to approve it.

Powell cast the deciding vote and wrote the majority opinion, concluding, “At most, the Baldus study indicates a discrepancy that appears to correlate with race. Apparent disparities in sentencing are an inevitable part of our criminal justice system.”

Statistical evidence, Powell argued, could provide “only a likelihood that a particular factor entered into some decisions”; it could never establish certainty that it had done so in any individual case.

Anyone from the tobacco south recognizes the logic. In 1964, during Powell’s service on the Philip Morris board, the U.S. surgeon general released the famous report, Smoking and Health. Then as now, the numbers were unmistakable: cigarettes kill smokers.

But Philip Morris, like all the rest of the industry, responded with denial. The statistical correlation, the industry said, didn’t prove anything. Something else might be causing the cancer. In response, a member of the company’s board stated, “We don’t accept the idea that there are harmful agents in tobacco.”

The logic Powell applied to the death penalty is the same logic Philip Morris employed while he served on its board. Numbers on paper don’t prove a thing.

The death-penalty lawyer Anthony Amsterdam has called McCleskey “the Dred Scott decision of our time”—the moral equivalent of the 1857 opinion denying black Americans any chance of citizenship. After his retirement, Powell told his biographer that he would change his vote in McCleskey if he could.

But it was too late. The Supreme Court was committed to cigarette-maker logic.

Last week, the Washington Supreme Court, in a fairly pointed opinion, declared that, at least in its jurisdiction, numbers have real meaning. And to those who have eyes to see, numbers make clear the truth about death-sentencing: It is arbitrary and racist in its application.

The court’s decision was based on two studies commissioned by lawyers defending Allen Gregory, who was convicted of rape and murder in Tacoma, Washington, in 2001 and sentenced to death by a jury there. The court appointed a special commissioner to evaluate the reports, hear the state’s response, and file a detailed evaluation. The evidence, the court said, showed that Washington counties with larger black populations had higher rates of death sentences—and that in Washington, “black defendants were four and a half times more likely to be sentenced to death than similarly situated white defendants.” Thus, the state court concluded, “Washington’s death penalty is administered in an arbitrary and racially biased manner”—and violated the Washington State Constitution’s prohibition on “cruel punishment.”

The court’s opinion is painstaking—almost sarcastic—on one point: “Let there be no doubt—we adhere to our duty to resolve constitutional questions under our own [state] constitution, and accordingly, we resolve this case on adequate and independent state constitutional principles.” “Adequate and independent” are magic words in U.S. constitutional law; they mean that the state court’s opinion is not based on the U.S. Constitution, and its rule will not change if the nine justices in Washington, D.C., change their view of the federal Eighth Amendment. Whatever the federal constitutionality of the death penalty, Washington state is now out of its misery.

Last spring, a conservative federal judge, Jeffrey Sutton of the Sixth Circuit, published 51 Imperfect Solutions: States and the Making of American Constitutional Law,  a book urging lawyers and judges to focus less on federal constitutional doctrine and look instead to state constitutions for help with legal puzzles. That’s an idea that originated in the Northwest half-a-century ago, with the jurisprudence of former Oregon Supreme Court Justice Hans Linde. It was a good idea then and it’s a good idea now. State courts can never overrule federal decisions protecting federal constitutional rights; they can, however, interpret their own state constitutions to give more protection than does the federal Constitution. There’s something bracing about this kind of judicial declaration of independence, when it is done properly.

And the Washington court’s decision is well timed. It is immune to the dark clouds gathering over President Trump’s new model Supreme Court.  Viewed with the logic of history, capital punishment is on the sunset side of the mountain; but conservative Justices Neil Gorsuch and Brett Kavanaugh are likely to join the other conservatives in lashing the court even more firmly to the decaying framework of official death, no matter how much tobacco-company logic they must deploy as a disguise for its arbitrariness and cruelty.

Smoke may cloud the law in D.C. for years yet. But in the state of Washington, numbers are actual numbers. When racism and cruelty billow across the sky, that state’s courts will no longer pretend they cannot see.  


Garrett Epps is a contributing editor for The Atlantic. He teaches constitutional law and creative writing for law students at the University of Baltimore. His latest book is American Justice 2014: Nine Clashing Visions on the Supreme Court.

What the statistics don’t say about teachers’ salaries


Primary education is dominated by women, which means that salaries are kept down. (Keystone)

How much does a schoolteacher earn in Switzerland? The federal system means that there are big differences between the cantons. There is also a gap between theory and practice.

If you’re a kindergarten teacher, you’d be better off working in the French-speaking canton of Geneva than in the southern Alpine canton of Graubünden: you’ll get CHF97,000 ($97,700) per year in the former, CHF60,000 in the latter.

For primary school teachers, the story is the same: CHF97,000 in top payer Geneva, versus CHF66,000 in Italian-speaking Ticino. When it comes to secondary school, the salary gap is less: CHF105,000 in Geneva compared with CHF85,000 in the central canton of Nidwalden.

Overall, the average pay for a new teacher in Switzerland with the right qualifications is CHF82,500, or CHF6,875 per month, gross (without deductions). Salaries rise through a teacher’s career, with the maximum you can earn at CHF8,700 per month…

From theory to practice

But it’s not quite as simple as that. Differences between cantons, as outlined above, are crucial. Under Switzerland’s federal system, cantons oversee education policy and set the salary levels. Differences are thus partly due to the varying cost of living between regions, which accounts for pay gaps across many professions, especially in the public sector.

Another important point is that this whole range of salaries, as published by the Swiss Conference of Cantonal Directors of Education, and highlighted by the news portal Watson, is theoretical: what is shown is the maximum for a full-time post. But in Switzerland, 57% of teachers work part-time.

Pascal Frischknecht, deputy secretary general of the German-speaking Federation of Swiss Teachers, points out that in cantons Zurich and St Gallen nobody earns more than 88% or 87% respectively of the published maximum salaries. This means CHF900 less each month for a newly-qualified primary school teacher.

And while it’s true that salaries increase – theoretically – over time, the Federation says that such increases are not systematic and, in many cantons, have barely been applied at all during the past ten years. In canton Basel-Country, overall salaries have even dropped by 1%.

The French-speaking cantons are similar. Jean-Marc Haller, secretary general of the Union of French-speaking Teachers, says that cantonal cuts are not always officially published. The figures remain theoretical because in reality salaries stagnate. It’s also sometimes the case that two teachers doing the same job in the same school receive different pay, he says.

Teachers in around half of the cantons say that their salary increases have not been enough to allow them to maintain their purchasing power.

And on an international level, Switzerland has seen its overall spending on compulsory education fall and then stagnate at 3.4% of GDP; 0.2 points below the Organisation for Economic Cooperation and Development (OECD) average, despite a rise in the number of pupils.

Equality?

In principle, education does not discriminate between genders: a female teacher should earn the same as a male member of staff in the same job. But the reality, again, is not quite so simple.

Frischknecht, from the Federation of Swiss Teachers, speaks of “indirect discrimination”.  Early grade education is dominated by women and the more female a profession becomes, the less the salaries rise, he says.

In April 2018, the latest edition of the wage book, an authority on the issue, showed that there had been an increase of 36.4% in the minimum salary of primary teachers over the past 12 years. But the Federation says this is in fact due to a calculation error on the part of the Zurich cantonal office of economy and labour, an error “recognized by the author”.

The person in question is said to have compared the data for 2006 from the more rural canton of Aargau with 2018 data from canton Zurich – so, rather than showing a uniform rise, what the figure really might show is the inequality between urban and non-urban teachers.


Translated from French by Isobel Leybold-Johnson, swissinfo.ch



Relist statistics OT 2017: The relist rest stop offers a golden ticket to some, becomes a holding pen and dead end to …


Posted Fri, October 12th, 2018 4:24 pm by Ralph Mayrell, Michael Kimberly and John Elwood

For four years running, we have pored over the prior term’s relists to give the readers of this blog a clearer idea of just what a relist means. When we began with this mind-numbing task after October Term 2014, a first relist meant that the ultimate odds of a grant were better than 50/50 — not bad when the average chance of a grant hovers around three percent. Being relisted continued to serve as a harbinger during the 2017 term, often signaling when the Supreme Court was interested enough in a petition to seriously consider granting it. But for all that relists still told court-watchers last term, the noise-to-signal ratio increased noticeably compared with previous terms. The increased noise appears to result from three related shifts in the court’s relist practice:

First, the number of relisted petitions increased to 159 in the 2017 term — up significantly from the 129 relisted petitions resolved in the 2016 term. (If you thought that nothing in the Supreme Court changes by 23 percent in a single year, you’d be wrong.) And that figure includes only those petitions that were disposed of during October Term 2017, omitting at least 26 petitions that were relisted during the 2017 term but remained pending as of the term’s end. So the justices appear to be a bit freer about relisting cases.

Second, once relisted, petitions were more likely than before to be relisted a second time. During OT 2015 and OT 2016, only 40 percent of first-time relists were relisted a second time (or more). In OT 2017, by contrast, 64 percent of relisted petitions were relisted at least twice. But most weren’t allowed to settle in and get comfortable: The court only relisted a third of the twice-relisted petitions for a third time, denying or summarily reversing 55 percent and granting 10 percent of the remaining twice-relisted petitions. That’s another big change from 2016, when 75 percent of twice-relisted petitions were relisted a third time, and from 2015, when 55 percent of twice-relisted petitions were relisted again. The few petitions surviving the two-relist purge, however, tended to make it to five or more relists in OT 2017 — which is a higher rate than in previous years but not dramatically so.

Third, the grant rate on relists was lower in OT 2017 than in previous years. A petition relisted once had a 32 percent chance of ultimately being granted in the 2017 term compared to a 43 percent chance in 2016 and a 49 percent chance in 2015. Interestingly, the inverse wasn’t true: While the chance of denial for relisted petitions climbed from 35 percent in 2015 to 45 percent in 2016, it fell slightly in 2017, to 42 percent. That’s due to an increase in the number of summary opinions. Indeed, if there is any good news here for relisted petitions, it is that for petitions that are relisted two, three or four times, the grant rate remained static from 2016 to 2017, but the rate of denial decreased as the court more often summarily reversed.

What does all of this mean? On the whole, the number of petitions relisted has increased, and the number of times that a petition is relisted twice has increased — so at any given time, there were many more relisted petitions on the docket in OT 2017 than in prior terms. At the same time, the overall number of cases granted has not increased commensurately. Sadly (for us more than anyone) that means that relists — while still effectively a prerequisite to a grant — have become a less reliable indicator that a case will ultimately be granted. So maybe the Relist Watch columns of yore got it about right in saying:

If a case has been relisted once, it generally means that the Court is paying close attention to the case, and the chances of a grant are higher than for an average case. But once a case has been relisted more than twice, it is generally no longer a likely candidate for plenary review, and is more likely to result in a summary reversal or a dissent from the denial of cert.

So why is the court relisting more petitions? One possibility is that it is relisting multiple cases that raise the same or related issues. For example, the court relisted at least four different cases — Allen, Gates, James and Robinson — each raising the identical question of whether sentence enhancements imposed under the residual clause of the then-mandatory sentencing guidelines’ career offender provision are unconstitutional. Those petitions were relisted repeatedly (some of them 10 times) before the court denied them all. Another example is Bormuth and Lund, cases on either side of a circuit split involving legislative prayer, which the court either relisted or rescheduled 13 and 15 times respectively. (We are told by those who would know that rescheduling and relisting now serve similar purposes.)

In any event, now that the Supreme Court once again has a full complement of justices, we expect that there may be a brief flurry of grants from the ranks of the relists, at least once Justice Brett Kavanaugh has had a chance to settle in and review them. After that, it is possible the relist trends will continue to zigzag as another justice’s preferences affect relist practices. Relist Watch will keep monitoring, and we will report back next year.


Thanks to Andrew Quinn for undertaking for a second year the daunting task of reading every Relist Watch to gather the data used in this article.


New England Patriots vs. Kansas City Chiefs: Preview, prediction, statistics to know for ‘Sunday Night Football’


This week’s edition of ‘Sunday Night Football’ features two of the inner-circle contenders in the AFC going head to head. Bill Belichick’s New England Patriots play host to Andy Reid’s Kansas City Chiefs on Sunday night, and who wins this one could go a long way toward determining who gets home-field advantage in the AFC playoffs. 

A Patriots win puts them right back in the mix for a No. 1 seed over the rest of the season, while a Chiefs win would leave Kansas City at 6-0 and holding the tiebreaker against the most likely contender for the spot, a whole lot more than just a leg up in the race. These two teams squared off in a night game in New England last year, and it was then that the Chiefs’ offense announced itself as a force to be reckoned with and one of the most creative units in the NFL. Things have only gotten more creative and more explosive since then, as the transition from Alex Smith to Patrick Mahomes has gone about as well as could possibly be expected.

Belichick and company will be ready for the Chiefs’ trickeration this time around, though, and won’t be caught off guard. And you can be sure Josh McDaniels, Tom Brady, and the New England offense will have some new wrinkles for the Chiefs’ defense to contend with on the other side of the ball. 

Which of these dynamic offenses will win out? We’ll see on Sunday night. 

When the Patriots have the ball

New England’s offense has gotten back on track in a big way over the past several weeks. When we covered the Patriots’ offense in this space ahead of their game against the Miami Dolphins a few weeks ago, we detailed two major struggles: running the ball and throwing downfield. 

The New England running game, in particular, has struggled to get on track — and that may not cease any time soon. Jeremy Hill was lost for the season to a torn ACL in Week 1, and Rex Burkhead was placed on Injured Reserve with a neck injury earlier this week. Add in the fact that rookie Sony Michel is seemingly still hobbled by the knee injury that plagued him during training camp and has been wildly ineffective so far — 24 carries for 84 yards — and it’s not looking good for the Patriots rushing attack.

… 

Brady struggled for a few years in the early 2010s to throw the ball downfield, but in recent seasons he had cleaned that issue up and become an excellent deep thrower. This year has been a return to poor form. On throws 15 or more yards downfield, Brady is just 6 of 18 for 139 yards, one touchdown and one interception. His 57.4 passer rating on such throws ranks 31st among the 34 quarterbacks who have attempted at least five passes 15 or more yards downfield. 

Somewhat surprisingly, the Patriots’ backfield has been much better since losing Burkhead. Narrowing down the backfield options to only Sony Michel and James White has been beneficial, as the roles are now much more clearly delineated and the Pats are much more focused in attacking opposing defenses. Michel has rushed for 210 yards and two touchdowns on 43 carries over the past two weeks, while White has essentially operated as Brady’s de facto No. 1 receiver, moving all over the field to create matchup issues and smoke linebackers and safeties on option routes. He has 18 catches for 145 yards and two scores in the past two games. 


Kansas City’s pass defense has struggled badly this season against running backs out of the backfield, so White could be in for yet another big game. Opposing backs have 39 catches for 454 yards and three scores against Kansas City so far this season. The Chiefs have also struggled badly against tight ends, which is definitely not a sentence you want to read when you’re about to go up against Rob Gronkowski. Opposing tight ends have 33 grabs for 453 yards and a touchdown against KC, with players like Jesse James (5-138-1), George Kittle (5-79), and even Jeff Heuerman (4-57) and Niles Paul (7-65) having big games against them. Gronk has been somewhat quiet so far this season, averaging the second-lowest yards per reception figure of his career and scoring just once in five games. This is a blow-up spot for him against one of the NFL’s worst defenses. 

Having Julian Edelman back on the field to work the underneath zone areas should benefit the New England passing game. Brady and Edelman have mind-meld type chemistry, and he fits much better in that role than did Chris Hogan, who has just been dreadful this year. Using Edelman underneath with Josh Gordon, Phillip Dorsett, and even Cordarrelle Patterson stretching the field vertically on the perimeter should open things up for Gronkowski inside, and the Chiefs are unlikely to be able to deal with that. 

And we know how the Pats operate. Once they start stretching you vertically, they can work Edelman and White to the perimeter, and then gash you up the middle with Michel and the power running game. The Chiefs’ run defense is even worse than their pass unit, and they don’t particularly stand much of a chance of slowing New England down. 

When the Chiefs have the ball

The Chiefs have already appeared in this column a few times this season. In the most recent edition, written before they played the Denver Broncos, we detailed the relative struggles of the Kareem Hunt-led running game. 

Kansas City is averaging just 3.9 yards per rush, 21st in the NFL. Kareem Hunt — who began last season by rushing for 401 yards and four touchdowns on 47 carries during the first three games of the year — has gained only 168 yards on his 52 carries, and has not gained more than 16 yards on any of them. (By this time last year, he had five runs longer than that, including three touchdown runs of 50 yards or more.)

So, what’s the issue? Is it blocking? Well, Sports Info Solutions credits Chiefs offensive linemen with precisely zero blown run blocks so far. But an absence of blown blocks doesn’t mean the blocks have actually been good. The ball-carrier has been stopped in the backfield on 21 percent of Kansas City’s carries, per Football Outsiders, 19th in the league. And the Chiefs rank 26th in FO’s Adjusted Line Yards, which assigns credit to the offensive line in the run game based on a percentage of yards gained per carry. Hunt is averaging just 0.85 yards before contact per carry, per data culled from Sports Info Solutions. That figure ranks 51st among the 65 running backs with at least 10 carries so far this season.

Hunt then broke out in a big way during the Chiefs’ stunning comeback win over Denver on ‘Monday Night Football,’ rushing for 121 yards and a touchdown on 19 carries. He followed up that performance with 22 carries for 87 yards and a score against the Jaguars last week. Hunt’s now on pace to rush for over 1,200 yards despite his slow start. If he gets there, he’d be just the 14th player since the AFL-NFL merger to reach 1,200 rushing yards in each of his first two seasons. 

Hunt seems at least somewhat likely to continue rolling against New England. The Patriots’ run defense has allowed 4.4 yards per carry this season, 21st in the NFL. The Pats have stuffed only 14 percent of opponent rushing attempts in the backfield, per Football Outsiders, 27th in the league. They’ve also allowed conversions on 80 percent of rushing attempts on third or fourth downs with two or fewer yards to go, 27th in the league again. Hunt’s ability to break tackles (11 in the run game so far this season, fifth in the NFL per Sports Info Solutions) will be tested against a Patriots defense that is a strong tackling unit, but if he can get to the second level untouched, yards are there to be gained. 

It will be interesting to see whether Bill Belichick and company prioritize shutting down Hunt or Patrick Mahomes and the Chiefs’ dynamic passing game. Hunt destroyed New England in last year’s season opener, running wild for 148 yards and a touchdown on the ground and catching five passes for 98 yards and two more scores. Mahomes, meanwhile, has arguably been the league’s MVP through the first five weeks of the season, throwing for a league-high 14 touchdowns while averaging 8.6 yards per attempt and leading the league in QBR. 

It is arguably “easier” for the Pats to concentrate on taking away Hunt and forcing Mahomes to beat them through the air, simply because the design of Kansas City’s passing game allows them to throw the ball to any area of the field, in any matchup, with ease. Plus, Belichick’s Patriots have routinely dominated young quarterbacks like Mahomes in games played at Gillette Stadium. 

Since 2001, the Patriots have played 32 home games against a quarterback who was in the first or second year of his career. The Patriots are (seriously, I swear) 31-1 in those games. The lone loss came against former 49ers quarterback Colin Kaepernick back in 2012. Combined, that group of players has completed 581 of 1,070 passes (54.3 percent) for 6,676 yards (6.2 per attempt), with 31 touchdowns and 44 interceptions. That works out to an absurdly low 65.9 passer rating, which is essentially the equivalent of turning every quarterback into JaMarcus Russell. In other words, Mahomes has his work cut out for him.

But the Kansas City passing game has been damn near unstoppable this season. Mahomes just easily marched the Chiefs up and down the field against a Jacksonville defense that is far better than this New England unit. The smart money is on both teams racking up a ton of points in this one, with the team that gets the ball last coming away with a victory. 

Prediction: Chiefs 34, Patriots 30

Why the US needs better crime reporting statistics


President Donald Trump has long focused on Chicago as a hotbed for American crime. This came up yet again on Oct. 8, when he said that he had directed the Justice Department to work with local officials in Chicago to stem violence in a city overwhelmed by its high rate of violent crime.

With 24.1 homicides per 100,000 people – more than four times the overall U.S. rate – Chicago certainly suffers from serious problems. But as of a Sept. 25 report, the FBI calls St. Louis, my hometown, the most dangerous city in America, with 6,461 violent crimes reported in the city limits in 2017. That’s an increase of more than 7 percent from the previous year.

St. Louis only ranks third for homicides in the U.S. by rate, but it’s the No. 1 most dangerous city. So by what metric does the government measure “most dangerous” – and why is Trump’s focus concentrated on Chicago and not St. Louis? As a statistician studying how people can manipulate numbers, particularly crime data, it is clear to me that the way crimes are currently counted in the U.S. can easily confuse and mislead.

Crime statistics

Since 1929, the FBI has managed the Uniform Crime Reports (UCR), a project that compiles official data on crime across the U.S., provided by smaller law enforcement agencies. For example, in Missouri, data is provided directly to the state by both the county police departments and the smaller municipalities. This information is then sent to the FBI.

With 18,000 different law enforcement agencies providing crime data to the FBI, there must be a standard metric of reporting. So all crimes are classified into only two categories: Part 1 and Part 2.

Part 1 crimes include murder, rape, robbery, larceny-theft and arson – the serious crimes. Part 2 crimes include simple assault, loitering, embezzlement, DUI’s and prostitution – the less serious crimes.

Okay, makes sense. But here’s the catch: None of these crimes are weighted. When a “beautiful, innocent 9-year-old child who was laying on the bed doing her homework” is murdered in Ferguson as a retaliation killing, it counts just the same as when an individual is arrested for shoplifting US$50 or more from the Dollar Store. This flawed metric allows for incredible confusion.

Take this example. You live in a nice neighborhood with a Kmart on the edge of it. “Serious” crime includes all the shoplifting from the Kmart; let’s say 150 incidents in a year. It also includes all the murders and rapes; call it 20 incidents in a year. The Kmart closes. All of a sudden, your crime rate has gone from 170 to 20: an 88 percent decrease in crime.
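A tiny Python sketch of that arithmetic shows how fragile an unweighted count is, and how even crude severity weights would change the story; the weights here are invented purely for illustration, since the UCR assigns none.

before = {"shoplifting": 150, "murder_and_rape": 20}
after = {"shoplifting": 0, "murder_and_rape": 20}  # the Kmart closes

# Unweighted count: every Part 1 incident counts the same.
drop = 1 - sum(after.values()) / sum(before.values())
print(f"Reported 'serious' crime fell {drop:.0%}")  # 88%, with no change in violence

# Hypothetical severity weights (illustrative only).
weights = {"shoplifting": 1, "murder_and_rape": 100}

def weighted_total(counts):
    return sum(weights[k] * v for k, v in counts.items())

print(weighted_total(before), weighted_total(after))  # 2150 vs. 2000: only a ~7% drop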

Chicago mayoral spokesman Matt McGrath criticized Trump’s comments to The Washington Post, saying, “Just last week, [the Chicago Police Department] reported there have been 100 fewer murders and 500 fewer shooting victims in Chicago this year, the second straight year of declines.” And really, I crunched the numbers; all serious crimes are only up 6.88 percent since 2014.

But it isn’t the serious crimes that make me look under my bed before I go to sleep at night. It’s the violent crimes. Those are up 24.27 percent in Chicago between 2014 and 2017. Murder is up 59.53 percent. (Researchers are still trying to figure out what’s caused the spike.)

This metric can be misleading. Former St. Louis Mayor Francis Slay touts “small gains” as overall crime numbers drop. Sure, the number of Part 1 crimes has actually dropped by 0.4 percent since 2014. But violent crimes in the city of St. Louis have increased 24.04 percent.

People can also get confused by the way crimes are sliced geographically. For example, in 2016, the city of St. Louis had a homicide rate of 59.8 per 100,000 people, while St. Louis County, which is separated from the city by a street, had a homicide rate of about 3.2 per 100,000. What combination of the two making up greater St. Louis gets reported in the news? Depends on the day.
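The blended rate is simple arithmetic, as this sketch shows. The homicide rates are the ones cited above; the population figures are rough assumptions (about 310,000 for the city and 1 million for the county), not taken from the article.

city_rate, city_pop = 59.8, 310_000        # homicides per 100,000, 2016
county_rate, county_pop = 3.2, 1_000_000   # populations are assumed, approximate

city_homicides = city_rate * city_pop / 100_000
county_homicides = county_rate * county_pop / 100_000

combined = (city_homicides + county_homicides) / (city_pop + county_pop) * 100_000
print(f"Greater St. Louis: {combined:.1f} per 100,000")  # ~16.6, a very different headline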

New measures

Here’s what I know: The U.S. needs a better metric. How we measure crime has been contentious since the original FBI crime reporting document was released in 1929.

There are even issues with the counting itself. The FBI website removed data from Chicago’s crime statistics in 2013, because the FBI deemed it to be underreported.

Hopefully, a more accurate metric comes in with the FBI’s National Incident-Based Reporting System, scheduled to roll out in 2020. For example, if a criminal assaults someone in their home and steals jewelry as well, that’s only counted as an assault under the UCR system. Under NIBRS, both the assault and theft would be counted.
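Here is a toy sketch of that counting difference. The severity ordering is a simplified stand-in for the official UCR hierarchy, and the incident mirrors the break-in example above.

SEVERITY = ["murder", "rape", "robbery", "assault", "burglary", "theft"]  # most to least serious (simplified)

incident = ["assault", "theft"]  # a break-in with an assault and stolen jewelry

ucr_recorded = [min(incident, key=SEVERITY.index)]  # UCR: only the most serious offense counts
nibrs_recorded = list(incident)                     # NIBRS: every offense is recorded

print(ucr_recorded)    # ['assault']
print(nibrs_recorded)  # ['assault', 'theft']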

But this system doesn’t seem to address the key issue: weights. Murdering a child cannot possibly count the same as stealing from the Dollar Store. It is inconceivable that raping someone can count the same as illegal gambling. You serve different amounts of jail time based on the severity of the crime – why wouldn’t crimes also be weighted?

Cities like Chicago and St. Louis most certainly have issues with crime. But how the U.S. measures “dangerous” must be made clearer. The current system does a disservice to our police and our communities by allowing this misrepresentation of the facts. Until then, politicians will be able to exploit the confusion to mislead the public, intentionally or unintentionally.

Why don’t we understand statistics? Fixed mindsets may be to blame


Unfavorable methods of teaching statistics in schools and universities may be to blame for people overlooking simple solutions to statistical problems, leaving those problems needlessly hard to solve. This can have serious consequences when applied to professional settings like court cases. Published in Frontiers in Psychology, the study shows for the first time that fixed mindsets — potentially triggered by suboptimal education curricula — lead to difficulties finding the simple solution to statistical problems.

We are faced with probabilities and statistics on a daily basis. These are most commonly presented as percentages (e.g. 10% of the population), but a more intuitive way of understanding this information — called natural frequencies — is to present it as two whole numbers (e.g. 1 in 10 people).

Does this remind you of math problems you had to try solving in school? You’re not alone.

“Even though natural frequencies are much easier to understand, people are more familiar with probabilities represented by percentages because of their education,” says Patrick Weber of the University of Regensburg, Germany, who led the study with colleagues Karin Binder and Stefan Krauss.

However, although people are more familiar with probabilities, it does not mean they are any better at understanding them.

“A recent meta-analysis showed the vast majority of people have difficulties solving a task presented in probability format,” says Weber. “This can result in severe misjudgments when applied in professional settings.”

Weber refers to a famous example of the misuse of statistics in court when the prosecution relied heavily on flawed statistical evidence presented by a medical professional. An insufficient understanding of statistical probability led to Sally Clark being wrongly convicted of the murder of her two sons, based on the misjudgment of the probability that they could have died from natural causes.

The researchers believe that people are ‘blind’ when working with probabilities — yet reluctant to translate them into the simpler natural frequencies that would make the problems easier to understand.

“The same meta-analysis showed that when the task was presented in natural frequency format instead of probabilities, performance rates increased from 4% to 24%,” says Weber. (See below for an example task.)

But while the success rate was much higher when the data was presented as two whole numbers rather than a percentage, around three-quarters of participants still could not solve the task at all. Weber and his colleagues were keen to find out why.

They gave groups of university students different reasoning tasks, one presented in probability format and the other in natural frequency. Participants were asked to show their working so the researchers could follow the cognitive processes behind their answers.

They found that, when the questions were presented in natural frequencies, half the participants did not use natural frequencies to solve the problems, but instead ‘translated’ them into the more difficult probability format.

Weber and his team believe that a fixed mindset — known as the Einstellung effect — may explain participants’ preference to change the data.

“Students are a lot more familiar with probabilities than with natural frequencies due to their education. In high school and university contexts, natural frequencies are not considered as equally mathematically valid as probabilities,” says Weber.

“This means that working with probabilities is a well-established strategy when it comes to solving statistical problems,” Weber continues. “While in many situations students profit from such an established strategy, the mental sets developed over a long period of time during school and university can make them ‘blind’ to simpler solutions — or unable to find a solution at all.”

Weber and his team believe this is a widespread problem deeply rooted in school and university curricula all over the world. They do, however, recognize that their study consisted only of university students, who may perform differently from the general population.

“We assume that while overall solution rates might vary, the tendency to avoid using natural frequencies is widespread across the whole population,” says Weber.

The researchers hope their new insights — published in a research collection on judgment and decision making under uncertainty — will encourage global change to statistical teaching strategies in schools and universities.

“We want our findings to encourage curriculum designers to incorporate natural frequencies systematically into school mathematics and statistics. This would give students a helpful tool to understand the concept of uncertainty — in addition to the ‘standard’ probabilities.”

Example of a problem posed in probability and natural frequency format

Probability format: The probability of being addicted to heroin is 0.01% for a person randomly picked from a population (base rate). If a randomly picked person from this population is addicted to heroin, the probability is 100% that he or she will have fresh needle pricks (sensitivity). If a randomly picked person from this population is not addicted to heroin, the probability is 0.19% that he or she will still have fresh needle pricks (false alarm rate). What is the probability that a randomly picked person with fresh needle pricks is addicted to heroin (posterior probability)?

Solution: With the help of Bayes’ theorem, the corresponding posterior probability P(H|N), with H denoting “person is addicted to heroin” and N denoting “person has fresh needle pricks”, can be calculated:

P(H|N) = (P(N|H) x P(H)) / (P(N|H) x P(H) + P(N|¬H) x P(¬H)) = (100% x 0.01%) / (100% x 0.01% + 0.19% x 99.99%) = 5%
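As a quick check (a sketch, not part of the press release), the same computation in Python with exact fractions confirms that the 5% figure is not a rounding artifact.

from fractions import Fraction

P_H = Fraction(1, 10_000)              # base rate: 0.01%
P_N_given_H = Fraction(1)              # sensitivity: 100%
P_N_given_notH = Fraction(19, 10_000)  # false alarm rate: 0.19%

numerator = P_N_given_H * P_H
denominator = numerator + P_N_given_notH * (1 - P_H)
print(numerator / denominator)  # 10000/199981, approximately 0.050005, i.e. about 5%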

Natural frequencies format: 10 out of 100,000 people from a given population are addicted to heroin. 10 out of 10 people who are addicted to heroin will have fresh needle pricks. 190 out of 99,990 people who are not addicted to heroin will nevertheless have fresh needle pricks. What percentage of the people with fresh needle pricks is addicted to heroin?

Solution:

Number of heroin addicts: 10

Number of people with needle pricks: All the heroin addicts + 190 non-addicts = 200

Percentage of people with needle pricks who are addicts = 10/200 = 5%

Original research article: https://www.frontiersin.org/articles/10.3389/fpsyg.2018.01833/full


New York Giants vs. Philadelphia Eagles: Preview, pick, time, channel, statistics to know for ‘Thursday Night Football’


The NFC East, right now, is the worst division in football. None of the division’s four teams are above .500 as of this writing, as Washington is sitting in first place with a 2-2 record. All four teams have a negative point differential, and none of them has a unit ranked in the top 10 in efficiency on either side of the ball. 

On Thursday night, two of those four teams will battle it out to see if they can at least salvage the early part of their season. 

The defending champion Philadelphia Eagles are just 2-3 heading into their showdown with the New York Giants, and they’re only 1-2 since getting star quarterback Carson Wentz back on the field. The team’s defense has fallen off significantly from a year ago, and the offense has been dealing with a spate of injuries. New York’s defense has been even worse than Philly’s, and while the offense has not dealt with a ton of injuries, it has not exactly experienced much in the way of success, either.

Can the Eagles right the ship and propel themselves back into the playoff picture? Can the Giants save their season from falling over the edge? We’ll find out Thursday night (8:20 p.m., NFL Network, Fox). Here’s what to look out for. 


When the Giants have the ball

This offseason, the Giants made a big, bold move to improve their offense. Picking No. 2 overall in the NFL Draft, the Giants chose to pass on a quarterback who could become the successor to Eli Manning and instead selected Penn State running back Saquon Barkley, reasoning that Barkley’s spectacular talent would be enough to lift the play of everyone around him. General manager Dave Gettleman mocked analytics and those who believe that running backs are not valuable enough players to be picked at No. 2 overall, and the Giants exuded confidence that Barkley would transform their team. 

To Barkley’s credit, he has been every bit as good as the Giants envisioned. Through five games, Barkley has racked up 308 yards on the ground and 274 through the air. That makes him one of just four players in the league with at least 300 rushing yards and at least 200 receiving yards this season. He’s also the first player in NFL history to reach those marks within the first five games of his career. Much of Barkley’s production, though, has come via big plays as opposed to consistent gains. In other words, Barkley has been gobbled up near the line of scrimmage fairly often, but he’s boosted his overall numbers with explosive gains. 

Indeed, Barkley has six runs of 15 yards or more already this season, including a 68-yard touchdown on opening day. But he’s also gained two or fewer yards on 43 of his 71 carries. As such, his success rate of 38 percent ranks 31st among the 38 players with at least 40 rushes this season, per Football Outsiders. Additionally, because Barkley’s 274 receiving yards have come on 31 catches, he has not been quite as valuable a receiving back as one might initially believe. That 8.84 yards per reception figure ranks 12th among players with at least 10 catches so far this season, for example. 

But even if Barkley has not been quite as extraordinary as he might appear on first glance, he has still been quite good. More concerning for the Giants is that he appears to have had little or no effect on the overall quality of the team’s offense. The Giants clearly expected him to be a rising tide that lifted all boats, and that simply has not happened. The New York offense is nearly as inept this year as it has been for the last two. The Giants rank 25th in yards per game this season and 23rd in points per game. They have 10 offensive touchdowns in five games — a figure that exceeds only five teams, one of which has played only four games. 

Odell Beckham is averaging 7.8 catches per game, the highest mark of his career … but he’s also averaging a career-low 11.8 yards per reception and he’s found the end zone just once in five games. Sterling Shepard’s 10.9 yards per reception figure represents a steep drop from last year’s mark of 12.4 per reception as well. Eli Manning’s overall passing figures are up a bit, though largely because his 2017 season was so terrible. (Manning is averaging an adjusted yards per attempt figure of 6.18, which is worse than the figures he posted in 2014, 2015, and 2016 but better than his dreadful 2017 campaign.) Manning’s also been sacked on 7.9 percent of his drop backs, and the offensive line appears to still be a disaster even after the offseason signing of left tackle Nate Solder and the jettisoning of former first-round pick Ereck Flowers, who was released earlier this week. 

The Giants may have a pretty decent chance to get their offense going Thursday night, however, because Philadelphia’s pass defense has been much more friendly to the opposition than it was a year ago. Eagles opponents are averaging 7.4 yards per attempt and have a passer rating of 96.5. The Eagles have been destroyed by No. 2 receivers in particular, with Football Outsiders ranking them 31st in DVOA against No. 2’s. Cornerback Jalen Mills has struggled badly, yielding a 128.0 passer rating on throws in his direction, according to Sports Info Solutions. Of the five Eagles defenders who have been targeted at least 10 times in coverage, three of them are allowing a passer rating of 100 or higher. 

When the Eagles have the ball

The Eagles will be without Jay Ajayi for not just this game, but the rest of the year. That means the team will likely not play a single game all season with their entire running backs corps at full strength. Ajayi will be replaced on Thursday by the combination of Corey Clement and Wendell Smallwood, with the possibility of rookie Josh Adams mixing in for some work as well. That trio filled in for Ajayi against the Colts a few weeks ago and fared pretty well. Combined, they carried the ball 32 times for 142 yards and caught six passes for another 54 yards.

That game was Carson Wentz’s first after returning from ACL surgery late last season, and it was also his worst game so far. Wentz has looked better with each passing week, seeing his yards per attempt figure climb (6.9, 7.0, 8.9) along with his passer rating (84.9, 99.4, 115.3) as he’s settled back into the role. Wentz is completing a career-best 67.2 percent of his passes and his overall yards per attempt figure is now right in line with where it was last year, when he was an inner-circle MVP candidate. His touchdown rate has dropped off from the huge spike it took last season, but he’s also only thrown one interception on 122 pass attempts.

Wentz’s receiving corps is at least healthy, even if his backs aren’t. Alshon Jeffery missed the first few weeks of the season, but has been back on the field for the past two. He provides an element that neither tight end Zach Ertz nor No. 2 receiver Nelson Agholor can, with his ability to stretch the field vertically from the perimeter. Ertz does the same kind of thing from the inside, but Agholor is almost exclusively an underneath receiver at this point, even with Doug Pederson’s creative offensive design moving him around the formation. (Agholor is averaging an insanely low 7.3 yards per reception. Among the 84 receivers with 10-plus catches this season, that figure ranks 82nd, ahead of only Chris Conley and Ryan Switzer.)

Interestingly, the Giants’ pass defense this season has been weakest against No. 1 receivers like Jeffery (27th in DVOA) and interior receivers like Agholor (28th). Wide receivers lined up in the slot have caught 27 of 39 passes for 364 yards and a touchdown against the Giants, good for a 107.2 passer rating. Tight ends haven’t been as big a problem for the Giants’ pass defense as they were a year ago, when they seemingly allowed every tight end they played against to score, but they did allow four catches for 87 yards to Texans tight ends, four catches for 86 yards to Saints tight ends, and another three catches for 38 yards to Panthers rookie tight end Ian Thomas. Their other two games came against the Jaguars and Cowboys, whose tight ends are not significant parts of their offense. Zach Ertz essentially is the Eagles’ passing game, so this will be a different kind of test for New York’s safeties and linebackers. 

The Eagles might initially find more success running the ball against New York, as the Giants have allowed at least 118 rushing yards in four of their five games and are yielding 4.6 yards per carry. The Giants are getting Olivier Vernon back for this game, which should help, but they need to get better tackling when players break through to the second level so they don’t give up quite as many medium and long-gaining runs. The duo of Smallwood and Clement doesn’t necessarily command as much defensive respect as Ajayi, but they are capable players and if you give the Eagles light boxes, they are fully capable of breaking through. 

That would be far more difficult, though, if Lane Johnson — a late addition to the injury report — does not play. A Johnson absence would also have an adverse effect on Wentz, who struggled badly when his tackle was out during his rookie season. 

Prediction: Eagles 24, Giants 13

Vital statistics, Oct. 11


Forms to report births to the News-Press are available at Mosaic Life Care, just outside maternity. Forms are normally picked up Tuesdays and Fridays. Forms also are available at the News-Press front desk.

Hannah and Todd Bridges, St. Joseph, a boy born Sept. 18.

ShyAnn Myers and Paul Saunders, St. Joseph, a girl born Oct. 4.

Karine Murphy and Caleb Burns, Elwood, Kansas, a boy born Oct. 6.

April Eshew and Miles Baldwin, Winston, Missouri, a girl born Oct. 7.

Brandi and Chris Grier, Gower, Missouri, a girl born Oct. 8.

Kortney Grippando and Chris Nichols, St. Joseph, a girl born Oct. 9.

MARRIAGE LICENSES

Jeramiah Michael Gilbert, 40, and Genevia Ann Reinert, 38, both of St. Joseph.

Samuel Rosas Mabry, 27, Gaffney, South Carolina, and Sierra Rheannon Willite, 20, St. Joseph.

Michael Allan Mitchell, 27, and Leeann Misty April Bost, 25, both of St. Joseph.

Neil Perry Pollert, 32, and Elizabeth Joyce Dannels, 33, both of Easton, Missouri.

Mark Daniel Ray, 31, and Britani Kailene Jo Henderson, 23, both of St. Joseph.

Jordan Alfred Taber, 28, and Hannah Craig Elizabeth O’Donnell, 25, both of St. Joseph.

Patrick Daniel Taliaferro, 33, and Amanda Marie Babcock, 32, both of St. Joseph.

DIVORCE SUITS FILED

Ashley R. Sparks and William D. Sparks

Rosemary R. Coffman and William R. Coffman