# oscarbonilla.com

## Doing it wrong

Couldn’t resist posting this one from xkcd:

Written by ob

September 20th, 2010 at 10:14 am

Posted in Humor, Math

## Lucia de Berk

This is infuriating.

In June 2004, Lucia was convicted of 7 murders and 3 attempted murders by the Court of Appeal in The Hague. She was given a life sentence; in view of the lack of evidence, a perplexing sentence. There are no eye witnesses, there is no direct incriminating evidence. Lucia was never seen in a suspicious situation. She was never found in possession of any of the poisons she was alleged to have used.

So how did they catch this supposed murderer? Why were they even investigating her?

Everything started with an at first glance striking number of incidents (deaths or resuscitations) during Lucia’s shifts at the Juliana Children’s Hospital in the Hague: the JKZ. The run drew attention to her. Seven incidents in a row all in the shifts of one nurse could not possibly be a matter of chance! The services of a former statistician, now professor of Psychology of Law, Henk Elffers, were called in, and the number he came up with must have wiped out all remaining doubt. He figured that the probability that all of seven incidents could have happened during Lucia’s shifts by pure chance was 1 in 6,000,000,000.

So instead of looking at the data to support a theory, they looked at the data to form a theory. This is totally the wrong approach. You can find all sorts of patterns given a large enough data set. That is why seasoned researchers form a theory first and then analyze or gather data in order to test the theory. If you have no theory you’re just doing cargo cult science. As for the 1 in 6,000,000,000 chance, it looks like a case of the Birthday Paradox. Given enough deaths and nurses, the probability of some nurse being present in 7 consecutive deaths is pretty high. Ben Goldacre has more.
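To get a feel for why, here's a back-of-the-envelope sketch. The numbers below are made up for illustration (they are not the actual JKZ figures), and nurses are treated as independent, which is a simplification: suppose each nurse works a third of the shifts and incidents land on shifts at random.

```python
def p_some_nurse_covers_all(shift_fraction, n_incidents, n_nurses):
    """Probability that at least one nurse is, by pure chance, on duty
    for every one of n_incidents random incidents."""
    # Chance one given nurse is on duty for all the incidents.
    p_one = shift_fraction ** n_incidents
    # Chance at least one of many nurses matches (independence assumed).
    return 1 - (1 - p_one) ** n_nurses

# One nurse working a third of shifts: 7 incidents on her watch is rare.
print(p_some_nurse_covers_all(1/3, 7, 1))     # ≈ 0.00046
# But across, say, 5000 nurses, a match *somewhere* is likely.
print(p_some_nurse_covers_all(1/3, 7, 5000))  # ≈ 0.9
```

The point is the second number: the prosecution computed something like the first probability when the relevant question was the second.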

Even more bizarre was the staggering foolishness of some of the statistical experts used by the court. One, Henk Elffers, a professor of law, combined individual statistical tests by taking p-values – a mathematical expression of statistical significance – and multiplying them together. This bit is for the nerds: you do not just multiply p-values together, you weave them with a clever tool, like maybe ‘Fisher’s method for combination of independent p-values’. If you multiply p-values together, then chance incidents will rapidly appear to be vanishingly unlikely. Let’s say you worked in twenty hospitals, each with a pattern of incidents that is purely random noise: let’s say p=0.5. If you multiply those harmless p-values, of entirely chance findings, you end up with a final p-value of p < 0.000001, falsely implying that the outcome is extremely highly statistically significant. With this mathematical error, by this reasoning, if you change hospitals a lot, you automatically become a suspect.
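Here's a minimal sketch of the difference in code, comparing the naive product with Fisher's method. Under the null hypothesis, $X = -2\sum \ln p_i$ follows a chi-square distribution with $2k$ degrees of freedom, and for even degrees of freedom the chi-square survival function has a closed form, so no stats library is needed:

```python
import math

def fisher_combined_p(pvalues):
    """Fisher's method for combining k independent p-values."""
    k = len(pvalues)
    x = -2 * sum(math.log(p) for p in pvalues)
    # Chi-square survival function for even df = 2k (closed form).
    half = x / 2
    return math.exp(-half) * sum(half ** i / math.factorial(i)
                                 for i in range(k))

# Twenty hospitals, each pure random noise (p = 0.5).
ps = [0.5] * 20
print(math.prod(ps))          # naive product ≈ 9.5e-07: "highly significant"
print(fisher_combined_p(ps))  # ≈ 0.93: nothing remarkable at all
```

Same twenty harmless p-values; the naive product screams murder while the correct combination shrugs.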

Multiplying p-values? Really?

Written by ob

April 9th, 2010 at 4:19 pm

Posted in Math

## Pigeons Beat Students at Probabilities

Interesting. Pigeons outperform humans at the Monty Hall problem. First the pigeons:

Each pigeon was faced with three lit keys, one of which could be pecked for food. At the first peck, all three keys switched off and after a second, two came back on including the bird’s first choice. The computer, playing the part of Monty Hall, had selected one of the unpecked keys to deactivate. If the pigeon pecked the right key of the remaining two, it earned some grain. On the first day of testing, the pigeons switched on just a third of the trials. But after a month, all six birds switched almost every time, earning virtually the maximum grainy reward.

Then the students:

At first, they were equally likely to switch or stay. By the final trial, they were still only switching on two thirds of the trials. They had edged towards the right strategy but they were a long way from the ideal approach of the pigeons. And by the end of the study, they were showing no signs of further improvement.

There is something to be said about our preconceptions and how biased we can be when looking at data. Pigeons are immune to this.

Despite our best attempts at reasoning, most of us arrive at the wrong answer.

Pigeons, on the other hand, rely on experience to work out probabilities. They have a go, and they choose the strategy that seems to be paying off best.
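The pigeons' strategy is easy to vindicate with a quick simulation sketch: switching wins about two thirds of the time, staying only one third.

```python
import random

def monty_hall(switch, trials=100_000):
    """Simulate the Monty Hall game and return the win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Monty opens a door that is neither the pick nor the car
        # (which one he picks when pick == car doesn't affect the rate).
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=False))  # ≈ 1/3
print(monty_hall(switch=True))   # ≈ 2/3
```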

I’ve written about the Monty Hall Problem here.

P.S. In case you missed the joke, look here.

Written by ob

April 4th, 2010 at 11:05 am

Posted in Math

## Introduction

For the past couple of weeks I’ve been trying to write an article explaining briefly what p-values are and what they really measure. Turns out there are enough subtleties involved that I keep writing and writing and haven’t published anything. So I’ve decided that it’s time for a change of tactic.

I’m going to work my way up to p-values, explaining each of the pieces in detail. Then, when I’m done, I’ll write a summary that links back to the longer explanations and, hopefully, distills the journey into a more succinct explanation.

This is the first installment of the series, and it deals with the basic idea of probabilities.

For Math Geeks
In these boxes you’ll find formal definitions that are intended to complement the main text. If you are not a math geek, you can safely ignore these.

Let’s start at the very beginning,
a very good place to start.
– Maria (The Sound of Music)

## In the beginning there were probabilities

The idea behind naïve probabilities is simple. You have a Universe of all possible outcomes of some experiment (sometimes called a sample space and denoted by the Greek letter omega: $\Omega$), and you are interested in some subset of them, namely some event (denoted by $E$). The probability of event $E$ occurring is the cardinality (number of elements) of $E$ over the cardinality of $\Omega$, the set of all possible outcomes.

$P(E)=\displaystyle\frac{|E|}{|\Omega|}$

Say you are throwing a pair of dice. How many possible outcomes of this experiment can there be? If you ignore the possible but unlikely event that one of the dice will land on its edge, there are 36 possible outcomes. That means that the probability of getting snake eyes (two ones) is 1/36. You could even enumerate all the outcomes and construct a set like {(1, 1), (1, 2), … (6, 6)} where each pair (x, y) represents die 1 landing on x and die 2 landing on y.
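That enumeration takes only a few lines of code, and once you have it, other events (say, the dice summing to 7) come for free:

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes (die 1, die 2).
omega = list(product(range(1, 7), repeat=2))

def p(event):
    """Naive probability: |E| / |Omega|."""
    return Fraction(len(event), len(omega))

snake_eyes = [o for o in omega if o == (1, 1)]
sum_is_seven = [(x, y) for x, y in omega if x + y == 7]
print(p(snake_eyes))    # 1/36
print(p(sum_is_seven))  # 1/6
```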

I said naïve before because this assignment of probabilities makes a couple of implicit assumptions about the sample space and the events. First of all, it assumes that the sample space is finite. I’m going to completely ignore infinite sample spaces and instead focus on the second implicit assumption: that each outcome is equally likely.

What if some outcomes are more likely than others? For example, what if the dice are loaded? All of a sudden 1/36 doesn’t look like such a good probability assignment for snake eyes.

In general, you don’t have to assign equal probabilities to each of the outcomes. Assuming they are equally likely is usually just a convenient starting point. But if you know that’s not the case, then starting with equal probabilities is not very smart.

As an example, in the Monty Hall problem, if your second cousin thrice removed is part of the staff and he lets you in on the fact that the car is not behind door number three, that completely changes the problem. You would never assign P = 1/3 to each of the doors. You know for a fact that the probability of the car being behind door number three is exactly zero.

In a general sense then, probabilities can’t be defined by just counting possible outcomes. They must be defined as general functions that map a set of outcomes to numbers between zero and one. They must, of course, satisfy some special properties.

For Math Geeks
A probability function $P$ maps a sample space ($\Omega$) to a number in the interval $[0,1]$, and satisfies the following three properties:

1. $P(E) \ge 0 \textrm{ for every } E$
2. $P(\Omega) = 1$
3. $\textrm{if }E_1, E_2, \ldots \textrm{ are disjoint, then }$
$P\left(\displaystyle\bigcup_{i=1}^{\infty}E_i\right) = \displaystyle\sum_{i=1}^{\infty}P(E_i)$
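As a concrete example of such a function, here's a loaded die (the weights are made up) where six is twice as likely as every other face. The assignment is no longer the naive $|E|/|\Omega|$, yet it satisfies all three properties:

```python
from fractions import Fraction

# A loaded die: six is twice as likely as each of the other faces.
weights = {1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 2}
total = sum(weights.values())  # 7

def P(event):
    """Probability of an event (a set of outcomes) under the loaded die."""
    return Fraction(sum(weights[o] for o in event), total)

print(P({6}))                        # 2/7, not the naive 1/6
print(P({1, 2, 3, 4, 5, 6}))         # 1, so P(Omega) = 1 holds
print(P({1, 2}) == P({1}) + P({2}))  # True: additivity for disjoint events
```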

But notice that the previous definition of probabilities ($|E|/|\Omega|$) was very handy in the sense that just by knowing the cardinalities we had the appropriate probabilities. If we assign the probabilities unevenly, how do we describe them without having to enumerate each one individually?

This is where probability distributions help. And that will be the subject of the next post.

Written by ob

April 3rd, 2010 at 10:17 pm

Posted in Math
