60+ Fascinating Smartphone Apps Usage Statistics For 2019 [Infographic] – Social Media Today

P-values and “statistical significance”: what they actually mean – Vox.com

For too long, many scientists’ careers have been built around the pursuit of a single statistic: p<.05.

In many scientific disciplines, that’s the threshold beyond which study results can be declared “statistically significant,” which is often interpreted to mean that it’s unlikely the results were a fluke, a result of random chance.

But this isn’t what it actually means in practice. “Statistical significance” is too often misunderstood — and misused. That’s why a trio of scientists writing in Nature this week are calling “for the entire concept of statistical significance to be abandoned.”

Their biggest argument: “Statistically significant” or “not statistically significant” is too often easily misinterpreted to mean either “the study worked” or “the study did not work.” A “true” effect can sometimes yield a p-value of greater than .05. And we know from recent years that science is rife with false-positive studies that achieved values of less than .05 (read my explainer on the replication crisis in social science for more).

The Nature commentary authors argue that the math is not the problem. Instead, it’s human psychology. Bucketing results into “statistically significant” and “statistically non-significant,” they write, leads to a too black-and-white approach to scrutinizing science.

More than 800 other scientists and statisticians across the world have signed on to this manifesto. For now, it seems more like a provocative argument than the start of a real sea change. Nature, for one, “is not seeking to change how it considers statistical analysis in evaluation of papers at this time,” the journal noted.

But the tides may be rising against “statistical significance.” This isn’t the first time scientists and statisticians have challenged the status quo. In 2016, I wrote about how a large group of them called for tightening the threshold to .005, making it much harder to call a result “statistically significant.” (Concurrently with the Nature commentary, the journal The American Statistician devoted an entire issue to the problem of “statistical significance.”) There’s a wide recognition that p-values can be problematic.

I suspect this proposal will be heavily debated (as is everything in science). At least this latest call for radical change does highlight an important fact plaguing science: Statistical significance is widely misunderstood. Let me walk you through it. I think it will help you understand this debate better, and help you see that there are a lot more ways to judge the merits of a scientific finding than p-values.

Wait, what is a p-value? What’s statistical significance?

(Image: Mick Wiggins/Getty Creative Images)

Even the simplest definitions of p-values tend to get complicated, so bear with me as I break it down.

When researchers calculate a p-value, they’re putting to the test what’s known as the null hypothesis. First thing to know: This is not a test of the question the experimenter most desperately wants to answer.

Let’s say the experimenter really wants to know if eating one bar of chocolate a day leads to weight loss. To test that, they assign 50 participants to eat one bar of chocolate a day. Another 50 are commanded to abstain from the delicious stuff. Both groups are weighed before the experiment and then after, and their average weight change is compared.

The null hypothesis is the devil’s advocate argument. It states there is no difference in the weight loss of the chocolate eaters versus the chocolate abstainers.

Rejecting the null is a major hurdle scientists need to clear to prove their hypothesis. If the null stands, it means they haven’t eliminated a major alternative explanation for their results. And what is science if not a process of narrowing down explanations?

So how do they rule out the null? They calculate some statistics.

The researcher basically asks: How ridiculous would it be to believe the null hypothesis is the true answer, given the results we’re seeing?

Rejecting the null is kind of like the “innocent until proven guilty” principle in court cases, Regina Nuzzo, a mathematics professor at Gallaudet University, explained. In court, you start off with the assumption that the defendant is innocent. Then you start looking at the evidence: the bloody knife with his fingerprints on it, his history of violence, eyewitness accounts. As the evidence mounts, that presumption of innocence starts to look naive. At a certain point, jurors get the feeling, beyond a reasonable doubt, that the defendant is not innocent.

Null hypothesis testing follows a similar logic: If there are huge and consistent weight differences between the chocolate eaters and chocolate abstainers, the null hypothesis — that there are no weight differences — starts to look silly and you can reject it.

You might be thinking: Isn’t this a pretty roundabout way to prove an experiment worked?

You are correct!

Rejecting the null hypothesis is indirect evidence of an experimental hypothesis. It says nothing about whether your scientific conclusion is correct.

Sure, the chocolate eaters may lose some weight. But is it because of the chocolate? Maybe. Or maybe they felt extra guilty eating candy every day, and they knew they were going to be weighed by strangers wearing lab coats (weird!), so they skimped on other meals.

Rejecting the null doesn’t tell you anything about the mechanism by which chocolate causes weight loss. It doesn’t tell you if the experiment is well designed, or well controlled for, or if the results have been cherry-picked.

It just helps you understand how rare the results are.

But — and this is a tricky, tricky point — it’s not how rare the results of your experiment are. It’s how rare the results would be in the world where the null hypothesis is true. That is, it’s how rare the results would be if nothing in your experiment worked and the difference in weight was due to random chance alone.

Here’s where the p-value comes in: The p-value quantifies this rareness. It tells you how often you’d see the numerical results of an experiment — or even more extreme results — if the null hypothesis is true and there’s no difference between the groups.

If the p-value is very small, it means the numbers would rarely (but not never!) occur by chance alone. So when the p is small, researchers start to think the null hypothesis looks improbable. And they take a leap to conclude “their [experimental] data are pretty unlikely to be due to random chance,” Nuzzo explains.

Here’s another tricky point: Researchers can never completely rule out the null (just like jurors are not firsthand witnesses to a crime). So scientists instead pick a threshold where they feel pretty confident that they can reject the null. For many disciplines, that’s now set at less than .05.

Ideally, a p of .05 means if you ran the experiment 100 times — again, assuming the null hypothesis is true — you’d see these same numbers (or more extreme results) five times.
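
To make that concrete, here is a minimal Python sketch (my own, not from the article) that simulates the chocolate experiment in a world where the null hypothesis is true. It uses a simple two-sample z-test as an approximation of the usual t-test; roughly 5 percent of runs come out “significant” at p < .05 even though nothing is going on.

```python
import math
import random

def two_sample_p(a, b):
    """Two-sided p-value from a z-test on the difference in group means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    # Standard normal CDF via the error function; p = 2 * (1 - Phi(|z|))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
trials = 5000
false_positives = 0
for _ in range(trials):
    # Null hypothesis is true: both groups drawn from the same distribution
    chocolate = [random.gauss(0, 1) for _ in range(50)]
    control = [random.gauss(0, 1) for _ in range(50)]
    if two_sample_p(chocolate, control) < 0.05:
        false_positives += 1

print(f"Fraction of 'significant' results under the null: {false_positives / trials:.3f}")
```

The fraction hovers near .05, exactly as the threshold is designed to allow: even with no real effect, about one run in twenty clears the bar.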

And one last, super-thorny concept that almost everyone gets wrong: A p<.05 does not mean there’s less than a 5 percent chance your experimental results are due to random chance. It does not mean there’s only a 5 percent chance you’ve landed on a false positive. Nope. Not at all.

Again: A p-value of less than .05 means that there is less than a 5 percent chance of seeing these results (or more extreme results), in the world where the null hypothesis is true. This sounds nitpicky, but it’s critical. It’s the misunderstanding that leads people to be unduly confident in p-values. The false-positive rate for experiments at p=.05 can be much higher than 5 percent.

Let’s repeat it: P-values don’t necessarily tell you if an experiment “worked” or not

Psychology PhD student Kristoffer Magnusson has designed a pretty cool interactive calculator that estimates the probability of obtaining a range of p-values for any given true difference between groups. I used it to create the following scenario.

Let’s say there’s a study where the actual difference between two groups is equal to half a standard deviation. (Yes, this is a nerdy way of putting it. But think of it like this: It means 69 percent of those in the experimental group show results higher than the mean of the control group. Researchers call this a “medium-size” effect.) And let’s say there are 50 people each in the experimental group and the control group.

In this scenario, you should only be able to obtain a p-value between .03 and .05 around 7.62 percent of the time.

If you ran this experiment over and over and over again, you’d actually expect to see a lot more p-values with a much lower number. That’s what the following chart shows. The x-axis is the specific p-values, and the y-axis is the frequency you’d find them repeating this experiment. Look how many p-values you’d find below .001.

[Chart: frequency of p-values obtained from many repeated runs of the experiment]

This is why many scientists get wary when they see too many results cluster around .05. It shouldn’t happen that often and raises red flags that the results have been cherry-picked, or, in science-speak, “p-hacked.” In science, it can be much too easy to game and tweak statistics to achieve significance.

And from this chart, you’ll see: Yes, you can obtain a p-value of greater than .05 when an experimental hypothesis is true. It just shouldn’t happen as often. In this case, around 9.84 percent of all p-values should fall between .05 and .1.
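
The numbers in this scenario can be reproduced with a quick simulation. This sketch is my own, not Magnusson’s calculator: it draws a treatment group whose true mean is shifted by half a standard deviation and tallies where the p-values land, giving fractions close to the 7.62 and 9.84 percent figures above, with many p-values below .001.

```python
import math
import random

def two_sample_p(a, b):
    """Two-sided p-value from a z-test on the difference in group means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
trials = 5000
pvals = []
for _ in range(trials):
    treated = [random.gauss(0.5, 1) for _ in range(50)]  # true effect: half an SD
    control = [random.gauss(0.0, 1) for _ in range(50)]
    pvals.append(two_sample_p(treated, control))

frac = lambda lo, hi: sum(lo <= p < hi for p in pvals) / trials
print(f"p < .001:        {frac(0, 0.001):.1%}")
print(f".03 <= p < .05:  {frac(0.03, 0.05):.1%}")
print(f".05 <= p < .1:   {frac(0.05, 0.1):.1%}")
print(f"p >= .05 (miss): {frac(0.05, 1.01):.1%}")
```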

There are better, more nuanced approaches to evaluating science

Many scientists recognize there are more robust ways to evaluate a scientific finding, and they already engage in them. But somehow these approaches don’t currently hold as much power as “statistical significance.” They are:

  • Concentrating on effect sizes (how big of a difference does an intervention make, and is it practically meaningful?)
  • Confidence intervals (what’s the range of doubt built into any given answer?)
  • Whether a result is a novel finding or a replication (put more weight on a theory many labs have looked into)
  • Whether a study’s design was preregistered (so that authors can’t manipulate their results post-test), and that the underlying data is freely accessible (so anyone can check the math)
  • There are also alternative statistical techniques — like Bayesian analysis — that in some ways more directly evaluate a study’s results. (P-values ask the question “how rare are my results?” Bayes factors ask the question “what is the probability my hypothesis is the best explanation for the results we found?” Both approaches have trade-offs. )
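
As a toy illustration of the first two bullets, here is a short Python sketch with invented weight-change numbers (the data are made up purely for illustration). Instead of a bare “significant/not significant” verdict, it reports the raw difference, a standardized effect size (Cohen’s d), and a 95 percent confidence interval via a normal approximation.

```python
import math

# Hypothetical weight changes in kg (invented data, for illustration only)
chocolate = [-2.1, -0.8, -1.5, 0.3, -2.4, -1.1, -0.5, -1.9, -0.2, -1.6]
control   = [-0.4,  0.6, -0.9, 0.8, -0.1,  0.2, -0.7,  0.5, -0.3,  0.1]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

diff = mean(chocolate) - mean(control)
# Pooled standard deviation -> Cohen's d, the standardized effect size
pooled_sd = math.sqrt((var(chocolate) + var(control)) / 2)
cohens_d = diff / pooled_sd
# 95% confidence interval for the difference (normal approximation)
se = math.sqrt(var(chocolate) / len(chocolate) + var(control) / len(control))
ci = (diff - 1.96 * se, diff + 1.96 * se)
print(f"difference = {diff:.2f} kg, Cohen's d = {cohens_d:.2f}, "
      f"95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

The point of reporting it this way is that a reader sees both how big the effect is and how much doubt surrounds it, rather than a single pass/fail label.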

The real problem isn’t with statistical significance; it’s with the culture of science

The authors of the latest Nature commentary aren’t calling for the end of p-values. They’d still like scientists to report them where appropriate, but not necessarily label them “significant” or not.

There’s likely to be argument around this strategy. Some might think it’s useful to have simple rules of thumb, or thresholds, to evaluate science. And we still need to have phrases in our language to describe scientific results. Erasing “statistical significance” might just confuse things.

In any case, changing the definition of statistical significance, or nixing it entirely, doesn’t address the real problem. And the real problem is the culture of science.

In 2016, Vox sent out a survey to more than 200 scientists asking, “If you could change one thing about how science works today, what would it be and why?” One of the clear themes in the responses: The institutions of science need to get better at rewarding failure.

One young scientist told us, “I feel torn between asking questions that I know will lead to statistical significance and asking questions that matter.”

The biggest problem in science isn’t statistical significance; it’s the culture. She felt torn because young scientists need publications to get jobs. Under the status quo, in order to get publications, you need statistically significant results. Statistical significance alone didn’t lead to the replication crisis. The institutions of science incentivized the behaviors that allowed it to fester.

Reversing alarming health statistics a brother at a time – The Albany Herald


ALBANY — The brothers of Albany’s Gamma Omicron Lambda Chapter of Alpha Phi Alpha fraternity don’t just talk about issues; they lead by example.

Distraught over a report that showed Dougherty County as one of the state’s least healthy counties and other alarming statistics that pointed to a health crisis in southwest Georgia, members of the local fraternity decided it was time to do something about it.

“We came up with this initiative, ‘Healthy Men, Healthy Families, Healthy Communities,’” Alpha Maurice Elliard said. “There were all these negative numbers: Dougherty County ranked No. 153 out of Georgia’s 159 counties in health statistics; one of the Top 20 regions for diabetes in the nation; 70 percent of adult men in America either overweight or obese, and worse numbers for African Americans: more than 70 percent of African-American men and 82 percent of African-American women.

“As I started researching these alarming statistics, I observed the places where a large number of people frequent — places like churches — asking myself, ‘Are these numbers real?’ That’s when I knew it was time to do something.”

Elliard and other members of the Alpha Phi Alpha fraternity came up with a simple plan to try to effect change in the community’s poor health numbers: start with the organization, expand to families and bring the community along.

Eleven Alphas started on the second Tuesday in January of this year — the fraternity’s meeting date — embarking on a two-month “Biggest Loser” challenge. By the time the second Tuesday in March rolled around, nine of the original participants had stuck out the challenge. And, significantly, all nine lost weight.

Retired educator Prince P. Reid was the competition champion, losing 19.2 pounds, a total that amounted to 9.2 percent of his body weight when the competition began.

“My goal was to lose around 10 percent of my body weight,” Reid, a former president of the fraternity, said. “I took this thing personally. We were talking about those statistics, and something hit me: ‘I want to be here for my family.’ I remember being at the January meeting wearing a three-piece suit and thinking, ‘This is tight; it didn’t fit me like this before.’

“I used to walk the Peachtree Road Race (in Atlanta) each year, but I had to get a hip replacement and wasn’t sure how this was going to work. But I started back walking (after the surgery) and was able to do 3-6 miles five days a week. I got a Fitbit and found that my walks were about 6,000 steps. I worked to increase that to the magic 10,000. If people saw me walking in the afternoon, I was trying to get to 10,000.”

Even Alpha brother Harry Davis, a logistics manager at Marine Corps Logistics Base-Albany who has completed marathons in 20 states in his quest to finish one of the 26-mile-plus races in all 50, found that his health could improve.

“I took the statistics that Brother Elliard presented personally,” Davis said. “I started looking for more arrows to add to my (health) quiver. Even if you think you’re in pretty good shape, if you don’t eat the right kinds of foods, you can have issues. I made some minor tweaks, ate more raw vegetables and cut down on the red meat.

“What I found was that there were so many benefits to what we were doing. I noticed that I was more mentally alert, I slept better, I felt better. I found these things increased exponentially, and that’s important in the (at-risk) population that we’re a part of.”

Elliard said that there’s no way of knowing whether the nine Alphas’ healthy choices impacted others in the fraternity, but he did say that it impacted others in his life.

“I didn’t preach, but my girlfriend noted the changes I was making,” the Albany State University business professor said. “She saw me wearing a suit for the first time in 15 years, and since we ate a lot of meals together, she told me that she was wearing clothes that she hadn’t been able to wear in a long time, too.”

Elliard said the Alphas hope to put together another challenge soon. And this time they’re going to encourage other fraternities and groups to join them. Then, he says, those seeds planted by the fraternity will continue to spread and grow.

“This challenge was everything we’d hoped for when we started talking about these statistics that are very real,” the Alpha brother said. “I kept thinking about these slogans — ‘You’ve got to move to improve,’ ‘Don’t dig your grave with a fork’ … You do a little research and you find out these things, like the No. 1 cause for bankruptcy in this country is health-related issues. There are lots of reasons to take better care of ourselves.

“It’s also important that guys like Harry (Davis) live a long time, too, so that I’ll get to enjoy my Social Security.”

Stay Informed

Untangling the microbiome — with statistics – Scope


I first became aware that my body was covered with bacteria in a high school microbiology class. Almost everything from my skin to my intestines, I learned, is inhabited by microbes that keep me healthy. These bacteria digest nutrients or even protect me from other pathogens. Ever since I took that class, I’ve been captivated by the idea that not all microbes are out to get us.

There is a careful balance, however, between so-called “good” and “bad” bacteria when it comes to those that live in and on humans. Certain “bad” species may be linked to disease when they outnumber the “good” guys. But when you factor in that every person has their own collection of different bacteria, these links become more fragile.

This is something that I’ve thought a lot about when reading new microbiome studies done in humans: How do we know that “bad” bacteria are directly related to a disease and it’s not just a coincidence? What can we do when scientists come to different conclusions?

I recently had the chance to think more about these questions during a conversation with Stanford statistician Susan Holmes, PhD, at the American Association for the Advancement of Science annual meeting last month. Holmes has connected things like premature birth and differences in lifestyle to variation in the human microbiome. She’s also a big proponent of sharing data and repeating studies to make sure that her findings are real.

As she explained in our chat, shared by Stanford News:

The biggest source of variability in the microbiome is the person-to-person variability. It’s a problem if you’re looking for causality. That’s a red flag word for us — causality — meaning something about the bacterial community causes some disease. You actually don’t know whether it’s the bacteria or whether the bacteria are a sign of something that happened before. It’s very much individualized, so everybody’s history matters.

But one of my favorite things she said about causality in the microbiome was this:

It’s like this horrible tangled ball and a thread that you’re trying to pull out. It’s a mess, and you pull out one thread at a time, but everything is really tied together.

Parsing out which bacteria in the microbiome cause something is not an easy task. Everything is connected to the immune system, genetics and other microbes. Holmes said that if scientists are open about the decisions they made when conducting their studies, it could help explain the differences.

Erin I. Garcia de Jesus is a graduate student in the Science Communication Master’s Program at UC Santa Cruz. She loves writing about microbes, health, animals, the world we live in and surprising discoveries. 

Photo by vickisdesigns

‘Statistical Significance’ Is Overused And Often Misleading : Shots – Health News – NPR


Statisticians say it may not be wise to put all their eggs in the significance basket. (intraprese/Getty Images)

A recent study that questioned the healthfulness of eggs raised a perpetual question: Why do studies, as has been the case with health research involving eggs, so often flip-flop from one answer to another?

The truth isn’t changing all the time. But one reason for the fluctuations is that scientists have a hard time handling the uncertainty that’s inherent in all studies. There’s a new push to address this shortcoming in a widely used – and abused – scientific method.

Scientists and statisticians are putting forth a bold idea: Ban the very concept of “statistical significance.”

We hear that phrase all the time in relation to scientific studies. Critics, who are numerous, say that declaring a result to be statistically significant or not essentially forces complicated questions to be answered as true or false.

“The world is much more uncertain than that,” says Nicole Lazar, a professor of statistics at the University of Georgia. She is involved in the latest push to ban the use of the term “statistical significance.”

An entire issue of the journal The American Statistician is devoted to this question, with 43 articles and a 17,500-word editorial that Lazar co-authored.

Some of the scientists involved in that effort also wrote a more digestible commentary that appears in Thursday’s issue of Nature. More than 850 scientists and statisticians told the Nature commentary authors they want to endorse this idea.

In the early 20th century, the father of statistics, R.A. Fisher, developed a test of significance. It involves a variable called the p-value, which he intended to be a guide for judging results.

Over the years, scientists have warped that idea beyond all recognition. They’ve created an arbitrary threshold for the p-value, typically 0.05, and they use that to declare whether a scientific result is significant or not.

This shortcut often determines whether studies get published or not, whether scientists get promoted and who gets grant funding.

“It’s really gotten stretched all out of proportion,” says Ron Wasserstein, the executive director of the American Statistical Association. He’s been advocating this change for years and he’s not alone.

“Failure to make these changes are really now starting to have a sustained negative impact on how science is conducted,” he says. “It’s time to start making the changes. It’s time to move on.”

There are many downsides to this, he says. One is that scientists have been known to massage their data to make their results hit this magic threshold. Arguably worse, scientists often find that they can’t publish their interesting (if somewhat ambiguous) results if they aren’t statistically significant. But that information is actually still useful, and advocates say it’s wasteful simply to throw it away.

There are some prominent voices in the world of statistics who reject the call to abolish the term “statistical significance.”

“Nature ought to invite somebody to bring out the weakness and dangers of some of these recommendations,” says Deborah Mayo, a philosopher of science at Virginia Tech.

“Banning the word ‘significance’ may well free researchers from being held accountable when they downplay negative results” and otherwise manipulate their findings, she notes.

“We should be very wary of giving up on something that allows us to hold researchers accountable.”

The appeal of “statistical significance” is deeply embedded in how science is done.

Scientists – like the rest of us – are far more likely to believe that a result is true if it’s statistically significant. Still, Blake McShane, a statistician at the Kellogg School of Management at Northwestern University, says we put far too much faith in the concept.

“All statistics naturally bounce around quite a lot from study to study to study,” McShane says. That’s because there’s lots of variation from one group of people to another, and also because subtle differences in approach can lead to different conclusions.

So, he says, we shouldn’t be at all surprised if a result that’s statistically significant in one study doesn’t meet that threshold in the next.

McShane, who co-authored the Nature commentary, says this phenomenon also partly explains why studies done in one lab are frequently not reproduced in other labs. This is sometimes referred to as the “reproducibility crisis,” when in fact, the apparent conflict between studies may be an artifact of relying on the concept of statistical significance.

But despite these flaws, science embraces statistical significance because it’s a shortcut that provides at least some insight into the strength of an observation.

Journals are reluctant to abandon the concept. “Nature is not seeking to change how it considers statistical analysis in evaluation of papers at this time,” the journal noted in an editorial that accompanies the commentary.

Veronique Kiermer, publisher and executive editor of the PLOS journals, bemoans the overreliance on statistical significance, but says her journals don’t have the leverage to force a change.

“The problem is that the practice is so engrained in the research community,” she writes in an email, “that change needs to start there, when hypotheses are formulated, experiments designed and analyzed, and when researchers decide whether to write up and publish their work.”

One problem is what scientists would use instead of statistical significance. The advocates for change say the community can still use the p-value test, but as part of a broader approach to measuring uncertainty.

A bit more humility would also be in order, these advocates for change say.

“Uncertainty is present always,” Wasserstein says. “That’s part of science. So rather than trying to dance around it, we [should] accept it.”

That goes a bit against human nature. After all, we want answers, not more questions.

But McShane says arriving at a yes/no answer about whether to eat eggs is too simplistic. If we step beyond that, we can ask more important questions. How big is the risk? How likely is it to be real? What are the costs and benefits to an individual?

Lazar has an even more extreme view. She says when she hears about individual studies, like the egg one, her statistical intuition leads her to shrug: “I don’t even pay attention to it anymore.”

You can reach NPR Science Correspondent Richard Harris at rharris@npr.org.

Lies, Damned Lies, And Statistics: Why London’s Murder Rate Is Not Higher Than NYC’s – bellingcat

On April 3, 2018, an article in USA Today stated that the murder rate in London exceeded that of New York City for the first time. A large number of articles and posts have since repeated this claim.

These various articles and posts continue to be cited as evidence that London is more dangerous or violent than New York. Social media posts have amplified the claim, stretching it from a singular statement about an odd situation that held for a short period of time into much broader assertions. Some commentators have used the false narrative that “London has more murders than New York City” to make various points about immigration, terrorism, and gun laws.

Was The Claim True?

In the narrowest sense, the claim was actually true for a short period of time. There was a short stretch in early 2018 in which there were fewer than expected murders in New York City and an unusually high number of murders in London.  This occurred in February and March of 2018.  However, even during that quarter-year, there were more murders in New York City than London due to a great disparity in January of 2018.

The media eventually reversed course. Later, several articles correctly stated that the murder rate had reverted back to the historical average, with London having fewer murders.

How To Do A Murder Rate Comparison

The fair way to compare murders is on a per capita basis, and one generally accepted method is numbers of murders per year per 100,000 of population.  Annual rates are more statistically useful for murder rates than daily, weekly, or even monthly rates.

Even in the large cities like London and New York, murders are not always a daily event. There may be days or even weeks without murders, and they are not always evenly spread across the calendar.

There simply are not enough murders in either city for daily or weekly figures to be considered accurate portrayals.  There are individual days on the calendar when there is a murder in New York and not one in London, or vice versa.

If one were to engage in statistical cherry-picking, one could say that the murder rate is infinitely higher in New York or infinitely higher in London on a particular day. It would, statistically, be true. But practically, it would be meaningless. The fair way to compare murder rates for such cities is via an annual rate, whether for a calendar year or a 12-month rolling average.

First, to do this fairly, one has to consider population figures. The New York City versus London comparison is often made because the populations of the two cities are roughly comparable. However, the last census in the U.S. was in 2010 and the last one in the UK was in 2011. Therefore, population estimates are useful for this work, although we should always keep in mind that these are estimated figures.

The most recent official estimated figure for New York City’s population is 8,622,298 as of July 1, 2017. The official estimated figure for greater London at the same point in time was 8,825,001. These figures will be used for calculation of relative rates, as they are from the exact same point in time and based on official figures. (Note: If someone has a more recent data set for both cities on the same date, please contact me via the comments section and I will update accordingly.)

Getting to the bottom of murder rates, comparatively speaking, should not be too hard. Unlike some statistics, it is relatively easy to make a one-for-one comparison — a dead body is a dead body. Dead bodies, particularly ones that are dead because someone committed a murder, are counted and categorised. Articles are written about them. In the U.S. and UK, murders rarely go unreported or underreported.

Other categories of crime, such as assaults and frauds, are far more difficult to compare, as many crimes go unreported and definitions vary widely.  What constitutes an assault varies from state to state in the U.S., so even internal comparisons are difficult.

Data Sources

For the essential test case of Greater London versus New York City, there are excellent sources of data.  The London Metropolitan Police statistics website (here) has both current and historic figures.  The New York City Police Department has a similar website (here) with very good statistics.

It can be difficult to do a direct comparison at times, as the London statistics are reported on a monthly basis and it is often weeks before the previous month’s statistics are posted.  New York’s, on the other hand, are updated every week. As the weeks do not usually correspond with the end of the month, getting the figures to overlap exactly is not always easy.

It is also important to note that the London Metropolitan Police area does not cover the City of London. This is often confusing to people outside of London or outside the UK.  The City of London is only a small (2.9 square kilometres) part of greater London, comprising what had been Roman and Medieval London. It is now, principally, a financial district. It has a separate governance structure and a separate, much smaller, police force, the City of London Police. The City’s actual population of residents is quite low, estimated at less than 10,000 people in 2018.  Murders in the City of London are rare, but not unheard of. There was one in 2018.

What Was The Situation In 2018?

According to New York’s figures, there were 295 murders in 2018.  This yields a murder rate of 3.42 per 100,000 population in 2018.

According to London figures, there were 128 murders in 2018 in the London Metropolitan Police boroughs of Greater London, i.e. everything except the City. The Daily Mail claimed a total of 134 murders in greater London, a figure the Telegraph also repeated. This figure of 134 includes one murder from the City of London, implying 133 for the Metropolitan Police area; I cannot account for the discrepancy between the 128 and 133 figures. However, assuming the higher figure of 134 murders is correct, the murder rate for London for 2018 was 1.52 per 100,000 population.
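The per-capita arithmetic behind both rates is simple; as a sanity check, this short Python sketch reproduces the figures above from the 2018 murder counts and the July 1, 2017 population estimates quoted earlier:

```python
def rate_per_100k(murders: int, population: int) -> float:
    """Annual murder rate per 100,000 residents."""
    return murders / population * 100_000

# 2018 murder counts and July 1, 2017 official population estimates from the text.
nyc = rate_per_100k(295, 8_622_298)
london = rate_per_100k(134, 8_825_001)

print(f"New York City: {nyc:.2f} per 100,000")  # 3.42
print(f"London:        {london:.2f} per 100,000")  # 1.52
print(f"Ratio (NYC/London): {nyc / london:.2f}")  # 2.25
```

The ratio of roughly 2.25 is what supports the claim that New York's rate was more than double London's.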


We can therefore see that New York’s murder rate was actually more than twice that of London’s.

The early figures for 2019 show an even starker difference than 2018. New York had 53 murders in the period from January 1 to March 3, 2019; during the first two months of 2019, London had 16 murders. The 3-day discrepancy in the reporting periods arises because London's monthly and New York's weekly reporting cycles do not line up exactly (March data for London was not yet published when this report was drafted on March 18, 2019).

Terrorism Deaths

Several accounts on social media have claimed that the 2018 figures for London are artificially low because they do not include deaths from terrorism.  This is rather a pointless argument as the number of terrorism deaths in London in 2018 was zero.

State of New York vs City of New York

Various interlocutors on social media have tried to confuse the state of New York with the City of New York. This makes for an unfair comparison as New York state has a much larger area and population than London, whereas both London and New York City are densely populated cities of approximately equal population.

For the record, New York State had 547 murders in 2017, yielding a murder rate of 2.8 per 100,000 population (the comparable report for 2018 is not yet available).

Historic Trends

These graphics, meanwhile, show the general historic trends in New York City and London, and are useful for making historic comparisons:


Statistics Show Most Horses Need To Be Re-Homed More Than Once – Horse Racing News – Paulick Report


The Standardbred Retirement Foundation (SRF), operating since 1989, compiled data on the number of homes an adopted horse needed, based on information it collected from 1992 to March 11, 2019.

The number of horses this data is based on is 1,692.

1 Home       8.5%
2 Homes     49.0%
3 Homes     20.0%
4 Homes     10.5%
5 Homes       6.0%
6 Homes       3.0%
8 Homes       2.0%
9-11 Homes  1.0%

This data does not include:

  • Horses that were never adopted, likely due to age or injury and were retired with SRF until deceased;
  • Horses presently under the expense of SRF, not likely to be adopted;
  • Horses presently available for adoption.

Fewer than 9 of every 100 adoptive homes keep their horse for life.
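The headline numbers can be read straight off the table above; a short Python sketch (treating the 9-11 home bracket at a midpoint of 10 when averaging, which is my own assumption, since the source gives only the bracket):

```python
# Share of adopted horses by number of homes needed (SRF data, 1992-2019).
# The "9-11 Homes" bracket is represented by its midpoint, 10 -- an assumption.
distribution = {1: 8.5, 2: 49.0, 3: 20.0, 4: 10.5, 5: 6.0, 6: 3.0, 8: 2.0, 10: 1.0}

assert sum(distribution.values()) == 100.0  # the percentages are exhaustive

one_home = distribution[1]
print(f"Kept by their first home: {one_home}%")       # 8.5%, i.e. fewer than 9 in 100
print(f"Re-homed at least once:  {100 - one_home}%")  # 91.5%

# Weighted average number of homes per adopted horse.
avg_homes = sum(homes * pct for homes, pct in distribution.items()) / 100
print(f"Average homes per horse: approximately {avg_homes}")
```

Under the midpoint assumption, the average adopted horse passes through roughly 2.8 homes.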

Because the information received from adopters is unreliable, it is difficult to pinpoint why these horses need more than one home, but changes in lifestyle, such as divorce, going off to college, and financial issues, may be the reason. This also explains why, after an individual finds a new home for their horse, and when adoption programs relinquish ownership or do not follow up diligently for the life of the horse, the vast majority of these horses are back at risk, and some are found in kill pens.

It stands to reason that if Standardbred horses do not have a lifetime protective guardian, the 91.5 percent that need to be re-homed may be at risk.

For further information please contact SRF at 609-738-3255 or via email at [email protected] or see www.adoptahorse.org.

Read more at TapInto.net

statistics; +510 new citations


Caniglia EC, Robins JM, Cain LE, Sabin C, Logan R, Abgrall S, Mugavero MJ, Hernández-Díaz S, Meyer L, Seng R, Drozd DR, Seage Iii GR, Bonnet F, Le Marec F, Moore RD, Reiss P, van Sighem A, Mathews WC, Jarrín I, Alejos B, Deeks SG, Muga R, Boswell SL, Ferrer E, Eron JJ, Gill J, Pacheco A, Grinsztejn B, Napravnik S, Jose S, Phillips A, Justice A, Tate J, Bucher HC, Egger M, Furrer H, Miro JM, Casabona J, Porter K, Touloumi G, Crane H, Costagliola D, Saag M, Hernán MA.

Stat Med. 2019 Mar 18. doi: 10.1002/sim.8120. [Epub ahead of print]


Global Epichlorohydrin (ECH) Market Report 2019: Capacities, Production, Consumption, Trade Statistics, and Price Outlook to 2023 – PRNewswire


DUBLIN, March 19, 2019 /PRNewswire/ — The “Epichlorohydrin (ECH): 2019 World Market Outlook and Forecast up to 2023” report has been added to ResearchAndMarkets.com’s offering.

The report is an essential resource for anyone looking for detailed information on the world epichlorohydrin market. It covers data on global, regional and national markets, including present and future trends for supply and demand, prices, and downstream industries.

In addition to the analytical part, the report provides a range of tables and figures which together give a true insight into the national, regional and global markets for epichlorohydrin.

Report Scope

  • The report covers global, regional and country markets of epichlorohydrin
  • It describes the present situation, historical background and future forecast
  • Comprehensive data on epichlorohydrin capacities, production, consumption, trade statistics, and prices in recent years are provided (globally, regionally and by country)
  • The report provides a wealth of information on epichlorohydrin manufacturers and distributors
  • Each regional market overview covers production of epichlorohydrin in the region/country, consumption trends, price data, trade in the most recent year, and manufacturers
  • An epichlorohydrin market forecast for the next five years, including market volumes and prices, is also provided

Reasons to Buy

  • Your knowledge of the epichlorohydrin market will become broader
  • Analysis of the epichlorohydrin market as well as detailed knowledge of both global and regional factors impacting the industry will take you one step further in managing your business environment
  • You will boost your company’s business and sales activities by gaining insight into the epichlorohydrin market
  • Your search for prospective partners and suppliers will be largely facilitated
  • Epichlorohydrin market forecast will strengthen your decision-making process

Key Topics Covered:




3.1. World epichlorohydrin capacity

  • Capacity broken down by region
  • Capacity divided by country
  • Manufacturers and their capacity by plant

3.2. World epichlorohydrin production

  • Global output dynamics
  • Production by region
  • Production by country

3.3. Epichlorohydrin consumption

  • World consumption
  • Consumption trends in Europe
  • Consumption trends in Asia Pacific
  • Consumption trends in North America

3.4. Epichlorohydrin global trade

  • World trade dynamics
  • Export and import flows in regions

3.5. Epichlorohydrin prices in the world market


Each country section comprises the following parts:

  • Total installed capacity in country
  • Production in country
  • Manufacturers in country
  • Consumption in country
  • Export and import in country
  • Prices in country

4.1. European epichlorohydrin market analysis

  • Belgium
  • Czech Republic
  • France
  • Germany
  • Italy
  • Netherlands
  • Poland
  • Russia
  • Spain
  • Switzerland

4.2. Asian epichlorohydrin market analysis

  • China
  • India
  • Japan
  • South Korea
  • Thailand
  • Taiwan

4.3. North American epichlorohydrin market analysis

4.4. Latin American epichlorohydrin market analysis

4.5. Middle East and Africa epichlorohydrin market analysis

  • Saudi Arabia
  • South Africa


5.1. Epichlorohydrin capacity and production forecast up to 2023

  • Global production forecast
  • On-going projects

5.2. Epichlorohydrin consumption forecast up to 2023

  • World consumption forecast
  • Forecast of consumption in Europe
  • Consumption forecast in Asia Pacific
  • Consumption forecast in North America

5.3. Epichlorohydrin market prices forecast up to 2023



7.1. Consumption by application

7.2. Downstream markets review and forecast

For more information about this report visit https://www.researchandmarkets.com/research/wp88qp/global?w=5

Research and Markets also offers Custom Research services providing focused, comprehensive and tailored research.

Media Contact:

Laura Wood, Senior Manager

For E.S.T Office Hours Call +1-917-300-0470

For U.S./CAN Toll Free Call +1-800-526-8630

For GMT Office Hours Call +353-1-416-8900

U.S. Fax: 646-607-1907

Fax (outside U.S.): +353-1-481-1716 

SOURCE Research and Markets

Related Links