WHAT IS STATISTICS?

LINK TO SUMMARY SLIDES FOR VIDEO:


StatsExamples-what-is-statistics.pdf

TRANSCRIPT OF VIDEO:


Slides 1

What is statistics?
First, statistics and math are two different things. Statistics uses math, but the purpose and goal are what set them apart.
Statistics tries to use numbers to understand reality in terms of the things we measure. Broadly speaking, there are two types of statistics: descriptive statistics and inferential statistics.

Slides 2

With descriptive statistics we're using special numbers, descriptive or summary statistics, to represent a larger set of numbers.
For example, imagine that we have a bunch of eggs and we want to know how big they are - that is, how much they weigh. We could weigh all the eggs, but then we have a bunch of different numbers, and it's hard to really understand what's going on with so many numbers.
Instead of keeping track of all the individual weights, we could calculate the average weight of the eggs. We can then describe the sizes of our eggs by talking about their average weight.
Or we might be interested in how variable the eggs are, which we can represent with a few different descriptive statistics.
There are also more complicated aspects about sets of numbers that we can describe with other summary statistics.
For example, are there a few much larger eggs or super small eggs that are different from all the rest? There are descriptive values we can calculate to describe these things.
When we use our descriptive statistics in this way, we’re helping to simplify the overall situation so that we can understand it better and describe it to other people.
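For instance, a minimal Python sketch of such summary statistics, using made-up egg weights in grams, might look like this:
```python
import statistics

# Made-up egg weights in grams
weights = [52.1, 49.8, 55.3, 61.0, 50.4, 48.7, 53.9, 57.2]

mean_weight = statistics.mean(weights)   # the average weight
sd_weight = statistics.stdev(weights)    # how variable the weights are

print(f"mean = {mean_weight:.1f} g, standard deviation = {sd_weight:.1f} g")
```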

Slides 3

Descriptive statistics is therefore the aspect of statistics in which the purpose is to describe or summarize a set of data values.
When we make data figures to show our data, this is also a form of descriptive statistics - conveying a sense of what the data looks like.
The details of how to calculate good descriptive statistics and how to make good figures are topics we cover in other videos on StatsExamples.

Slides 4

Another aspect of descriptive statistics is choosing the right metrics to measure when we want to study things.
On the left are some straightforward metrics that are used all the time in the sciences. Values like length, mass, temperature, and speed are well-defined. What they measure is obvious and we often have a direct use for the actual numbers.
On the right are values that are less straightforward. Do we really care about blood pressure itself or do we measure it because it's related to the risk of heart attack, the real value we care about?
We all value biodiversity and quality of life, but how do we measure these things - by number of species or by using lifespan? Both of these choices are missing something - the variation in types of species and whether the life is enjoyable or not.
Strengths of beliefs are often measured on a 5 or 7 point scale, but what does that really mean?
Understanding which metrics are the best ones to use for the question at hand is an important part of statistics and experimental design.

Slides 5

Another aspect of choosing metrics is what kind of numbers and values to work with.
Some types of values can lead to equations that are easy to understand and work with, but others can give you weird results.
For example, working with integers, that is whole numbers, decimal numbers, and proportions usually works well.
On the other hand, fractions and ratios, while often being nice for descriptions, can end up having some terrible mathematical properties when we try to do more complicated things.
If we just want to describe things, these are all fine.
But if we want to do more, the inferential statistics I'm about to talk about, certain metrics work better than others.

Slides 6

The second type of statistics, inferential statistics, arises from the fact that we usually don’t just care about the set of numbers we have.
We usually have a large system we care about, but can only measure a small subset of the values.
We have all the numbers for the smaller subset and can use descriptive statistics to describe that subset, otherwise known as a sample, perfectly.
But if we want to use that data to describe the larger population these values came from, we have to make some assumptions or estimations, otherwise called inferences.
These terms I'm using, sample and population, are technical terms in the field of statistics.
The population is the overall system of values that we care about; we'll call its descriptive statistics the population parameters.
We can almost never measure these population parameters directly, instead we calculate the descriptive statistics of a sample from the population.
We then use these statistics to make educated guesses about the parameters. These educated guesses, based on sample data, are called inferences.
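As a rough illustration of this sample-versus-population idea, here is a Python sketch that invents a population of egg weights, draws a small sample, and compares the sample statistic to the population parameter we would normally never get to see:
```python
import random

random.seed(1)

# Invented "population": the weights (g) of every egg in a huge flock.
# In practice we could never list all of these; they are shown only for illustration.
population = [random.gauss(55, 5) for _ in range(100_000)]

# We can only afford to weigh a small sample.
sample = random.sample(population, 30)

sample_mean = sum(sample) / len(sample)              # sample statistic
population_mean = sum(population) / len(population)  # population parameter (normally unknown)

print(f"sample mean     = {sample_mean:.2f} g  (our estimate, an inference)")
print(f"population mean = {population_mean:.2f} g  (usually unknowable)")
```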

Slides 7

For example, what if we wanted to know what type of eggs to expect from a giant flock of chickens?
We could measure the weights of some eggs from some chickens and then assume that the rest of the chickens will lay eggs that are similar.
We may be required to use this approach for two reasons.
First, measuring every egg from every chicken in our entire flock might be too much work.
Also, measuring eggs today wouldn’t allow us to predict the future unless we conceptually use that set of today's eggs as a sample from the population of all sets of eggs that we will get now and in the future.

Slides 8

Of course, no sample is guaranteed to perfectly match with the population as a whole. In the pictures shown here, two different samples from identical populations would give us very different sample averages and make different predictions about the population.
Because of this, we can't assume that the population looks exactly like the sample and we have to use some fancy procedures to predict what the population probably looks like based on our sample.
The word probably is key - we use the mathematics of probability to make statements about what the population is likely to look like.
For situations like this we would generally give a range around the sample average where the population average is likely to be.
For example, if our sample average was 10, then we would say that the population average is probably near 10 - but if the sample average was 18, then we would say that the population average is probably near 18.
On a related note, if we thought the population average was 18, but our sample average was 10, then we may change our mind about what we think the population is like.
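One common way to give that range around the sample average is a confidence interval. Here is a minimal Python sketch using made-up sample values and the approximate 95% normal multiplier of 1.96 (a small sample like this would normally use a t-multiplier instead):
```python
import math
import statistics

# Made-up sample values
sample = [10.2, 9.1, 11.4, 10.8, 9.6, 10.1, 8.9, 10.5, 11.0, 9.7]

n = len(sample)
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(n)   # standard error of the mean

# Approximate 95% interval for the population average
low, high = mean - 1.96 * sem, mean + 1.96 * sem
print(f"the population average is probably between {low:.2f} and {high:.2f}")
```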

Slides 9

What I just mentioned, having an idea about a population and seeing if our sample data confirms it, brings us to the second aspect of inferential statistics.
In this approach, we pose a question about a population, or more than one population, and use the statistics from our sample or samples to answer it.
These are what are known as statistical tests.
The first step is to pose a question about the parameters of one or more populations.
If we had all the values for the population or populations, we could answer the question directly, but we don't, so we measure some sample values from the population.
From our sample data we calculate the appropriate sample statistics. Often we want to know if the average sizes are the same so we would calculate some averages, but there are lots of other types of questions we may be interested in as well.
Then we calculate the probabilities of seeing our sample statistics if the answer to our original question is yes or no.
Then, based on these probabilities we make a decision about the probable answer to our original questions about the population or populations.

Slides 10

Let's think about an example with two populations of eggs we're interested in. We may want to know if the average sizes of eggs from the two populations are the same.
If the populations are too big to completely measure, or if we're trying to make predictions about the future, then we'll be using samples from these populations.
So step one is to collect some samples and measure the eggs.
Since we're interested in the average size we'll calculate the average for each sample. The chance that each sample average exactly matches each population average is very low however. By random chance, the averages will be different even if the populations have the same average.
We would therefore have to calculate the probability of seeing a pair of sample averages as different from each other as ours are, if the population averages are the same.
In other words, if the two populations really do have the same average, then the samples should be similar, but we don't expect them to be identical. How similar should they be, and how different is too different? That is the question. Statistics uses ideas from probability math to calculate this probability.
Then, once we have that probability we use it to answer our question.
If the probability of seeing two sample averages as different as we do is small, then we would decide that the population averages are probably different. Note that we don't prove they're different, we make a decision based on probability.
If the probability of seeing two sample averages as different as we do is not small, then these samples are what would easily have been seen if the population averages are the same. We would then conclude that we don't have the evidence to decide that the population averages are different. Note that we don't prove the averages are the same, we make a decision based on probability.
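One standard way to get that probability for a pair of sample averages is a two-sample t-test. Here is a minimal Python sketch using SciPy, with invented egg weights:
```python
from scipy import stats

# Invented egg weights (g) sampled from the two populations
eggs_a = [52.1, 49.8, 55.3, 61.0, 50.4, 48.7, 53.9, 57.2]
eggs_b = [58.4, 60.1, 57.9, 63.2, 59.5, 61.7, 56.8, 62.0]

t_stat, p_value = stats.ttest_ind(eggs_a, eggs_b)

# p_value: the probability of seeing sample averages at least this different
# if the two population averages are actually equal.
if p_value < 0.05:
    print(f"p = {p_value:.3f}: decide the population averages are probably different")
else:
    print(f"p = {p_value:.3f}: not enough evidence that the population averages differ")
```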

Slides 11

Statistical tests are used for lots of questions, way more than I can list here, but let's think about a few.
The example we just looked at was asking if two populations have the same average. We looked at an example of the sizes of eggs in two populations, but there are also tests for more than two. For example, do pills manufactured in five different factories all have the same average amount of active ingredient? Do people in all 50 states have the same average blood pressure?
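For questions like these with more than two populations, one common choice is a one-way ANOVA. A minimal Python sketch with invented pill measurements from three factories:
```python
from scipy import stats

# Invented active-ingredient amounts (mg) from pills sampled at three factories
factory_1 = [4.9, 5.1, 5.0, 5.2, 4.8]
factory_2 = [5.0, 5.3, 5.1, 5.2, 5.4]
factory_3 = [4.7, 4.9, 4.8, 5.0, 4.6]

f_stat, p_value = stats.f_oneway(factory_1, factory_2, factory_3)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # small p: the averages probably differ
```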

Slides 12

Another question could be - do populations have the same variability? We usually care more about averages, but we might also care about whether different populations are as varied as one another. For example, are the thicknesses of brake pads made in a factory more or less consistent when using different materials or procedures? Do men and women have the same range of blood pressures throughout the day?
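One common test for equal variability is Levene's test. A minimal Python sketch with invented brake-pad thicknesses:
```python
from scipy import stats

# Invented brake-pad thicknesses (mm) made with two different materials
material_a = [10.1, 10.0, 9.9, 10.2, 10.0, 10.1]
material_b = [10.4, 9.5, 10.9, 9.2, 10.6, 9.8]

stat, p_value = stats.levene(material_a, material_b)
print(f"p = {p_value:.3f}")  # small p: the variabilities probably differ
```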

Slides 13

We could also think about a question like - is the proportion of some trait the same across different populations? For example, is the proportion of blue eyes the same in men and women? We can't assess every man and every woman, but we can get samples of men and women and then measure the proportion of blue eyes in each. We know the sample proportions won't exactly match the population proportions, but we can calculate the probability of seeing samples as different as ours if the proportions of blue eyes are equal in the overall populations of all men and all women.
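One way to compare a proportion across two groups is a chi-square test on the counts. A minimal Python sketch with invented blue-eye counts:
```python
from scipy import stats

# Invented counts: [blue eyes, not blue eyes] in each sample
men_counts = [18, 82]      # 100 men sampled
women_counts = [27, 73]    # 100 women sampled

chi2, p_value, dof, expected = stats.chi2_contingency([men_counts, women_counts])
print(f"p = {p_value:.3f}")  # small p: the population proportions probably differ
```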

Slides 14

Or maybe we measure a pair of variables in the population and we want to know if they're related. Is the pattern between the variables random or non-random? It's hard to see a convincing pattern in the figure on the left, and the pattern on the right is clear, but what about the one in the middle? Is the pattern of data points random or is there something real and non-random going on?
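A common way to ask whether a pattern like this is non-random is a correlation test. A minimal Python sketch with invented (x, y) pairs:
```python
from scipy import stats

# Invented paired measurements
x = [1.0, 2.1, 2.9, 4.2, 5.1, 6.0, 7.2, 8.1]
y = [2.3, 2.9, 3.8, 4.1, 5.2, 5.5, 6.9, 7.4]

r, p_value = stats.pearsonr(x, y)
print(f"r = {r:.2f}, p = {p_value:.4f}")  # small p: the pattern is probably non-random
```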

Zoom 11-14

This searching for evidence of non-random patterns in the data is the core of inferential hypothesis testing.
Sample data will always appear to show differences between groups or have some kind of pattern. The question becomes - is there evidence for a non-random pattern? When we see two samples with different averages, is this because their populations have different averages or is randomness causing our samples to be different?

Slides 15

Because sample data will appear to have differences or patterns we have to ask the question - is there evidence for a non-random pattern? We answer this question with probabilities.
For all of our tests, we calculate the chance of seeing what we do, our sample data, if things are just random.
If those probabilities are low, then we make one decision and if they're not, then we make another.
Typically, when the sample data is very unlikely, that means the differences or patterns that seem apparent are probably non-random, and we would decide that they are real.
If the probability of seeing our sample data is not low, based on assumptions we've made, then this usually translates into a conclusion that the apparent differences or patterns are just random, and we would lack evidence to decide that these differences or patterns are real.
An important thing to keep in mind is that we never prove anything beyond all doubt, instead we just make decisions supported by the mathematics of probability.

Slides 16

Here's where I mention that full inferential statistics is more complicated than what I've described.
What I've been talking about is something called hypothesis testing, which is more properly known as the frequentist approach to statistics. It's the kind of statistics that's been around the longest, it's the easiest to understand, and it's what all intro stats courses focus on. If you're taking your first stats class, you'll almost certainly just be doing descriptive statistics and frequentist statistics.
That being said, there are at least three other approaches to statistics that are in wide use.
Bayesian Statistics uses a special probability rule to calculate the probability of a hypothesis being correct and updates that probability as more data is added to the model. The math is more complicated, but for some questions it's faster for a computer to calculate things and can be used for questions where hypothesis testing doesn't work.
Likelihood statistics uses a special value, called a likelihood, that describes how well a hypothesis or model accounts for the observed data. Instead of using these values to make yes or no decisions, they are generally used in a purely supportive manner to describe the strength of a conclusion.
Finally, the AIC (Akaike information criterion) approach is one where you compare a bunch of different mathematical models to each other to see which one predicts the sample data the best. Since more complicated models will almost always fit the sample data better, the approach includes a penalty for the number of parameters you use. The AIC approach's goal is to find the model that gives the best balance between accuracy and complexity.
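To make the AIC idea concrete: the criterion is AIC = 2k - 2 ln(L), where k is the number of parameters and L is the model's maximized likelihood, and lower is better. A minimal Python sketch with invented likelihoods for two candidate models:
```python
import math

# Invented (number of parameters, maximized likelihood) for two candidate models
models = {
    "simple model":  (2, 0.010),
    "complex model": (6, 0.015),
}

for name, (k, likelihood) in models.items():
    aic = 2 * k - 2 * math.log(likelihood)   # AIC = 2k - 2*ln(L)
    print(f"{name}: AIC = {aic:.1f}")        # lower AIC: better accuracy/complexity balance
```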
If you're watching this and don't plan to be a research scientist, you're probably fine just making sure to understand descriptive statistics and the frequentist approach on the left. If you've heard of p-values, they live on the left side of this figure. You might see things on the right side from time to time if you read scientific papers however.
On the other hand, if you're planning to do scientific research or have a highly scientific career - in addition to showing up in papers you read, the stuff on the right will come up when you get into more advanced stats courses and data analysis.

Zoom out

Now when you hear someone talking about statistics you have a sense of what they mean.
And if anyone ever says that they have statistics that prove something, what they really mean is that they have data that would be super unlikely if their claim wasn't true.
Weird things do sometimes happen though, which is why real scientists usually repeat experiments to see if they get the same results multiple times.
As with all the videos on this channel, there is a direct link to a PDF of this summary slide below.

End screen

Feel free to like, subscribe, comment, share and all the usual YouTube things. If you found this useful, please help someone else find it too.
Also, check out the StatsExamples website where there are links to a bunch of videos like this, pages with more statistics education material, and even some cartoons. Also, everything on the site is free.






This information is intended for the greater good; please use statistics responsibly.