# Numbers {#sec-numbers}
```{r}
#| echo: false
source("_common.R")
```
## Introduction
Numeric vectors are the backbone of data science, and you've already used them a bunch of times earlier in the book.
Now it's time to systematically survey what you can do with them in R, ensuring that you're well situated to tackle any future problem involving numeric vectors.
We'll start by giving you a couple of tools to make numbers if you have strings, and then go into a little more detail on `count()`.
Then we'll dive into various numeric transformations that pair well with `mutate()`, including more general transformations that can be applied to other types of vectors, but are often used with numeric vectors.
We'll finish off by covering the summary functions that pair well with `summarize()` and show you how they can also be used with `mutate()`.
### Prerequisites
This chapter mostly uses functions from base R, which are available without loading any packages.
But we still need the tidyverse because we'll use these base R functions inside of tidyverse functions like `mutate()` and `filter()`.
Like in the last chapter, we'll use real examples from nycflights13, as well as toy examples made with `c()` and `tribble()`.
```{r}
#| label: setup
#| message: false
library(tidyverse)
library(nycflights13)
```
## Making numbers
In most cases, you'll get numbers already recorded in one of R's numeric types: integer or double.
In some cases, however, you'll encounter them as strings, possibly because you've created them by pivoting from column headers or because something has gone wrong in your data import process.
readr provides two useful functions for parsing strings into numbers: `parse_double()` and `parse_number()`.
Use `parse_double()` when you have numbers that have been written as strings:
```{r}
x <- c("1.2", "5.6", "1e3")
parse_double(x)
```
Use `parse_number()` when the string contains non-numeric text that you want to ignore.
This is particularly useful for currency data and percentages:
```{r}
x <- c("$1,234", "USD 3,513", "59%")
parse_number(x)
```
## Counts {#sec-counts}
It's surprising how much data science you can do with just counts and a little basic arithmetic, so dplyr strives to make counting as easy as possible with `count()`.
This function is great for quick exploration and checks during analysis:
```{r}
flights |> count(dest)
```
(Despite the advice in @sec-workflow-style, we usually put `count()` on a single line because it's usually used at the console for a quick check that a calculation is working as expected.)
If you want to see the most common values, add `sort = TRUE`:
```{r}
flights |> count(dest, sort = TRUE)
```
And remember that if you want to see all the values, you can use `|> View()` or `|> print(n = Inf)`.
You can perform the same computation "by hand" with `group_by()`, `summarize()` and `n()`.
This is useful because it allows you to compute other summaries at the same time:
```{r}
flights |>
group_by(dest) |>
summarize(
n = n(),
delay = mean(arr_delay, na.rm = TRUE)
)
```
`n()` is a special summary function that doesn't take any arguments and instead accesses information about the "current" group.
This means that it only works inside dplyr verbs:
```{r}
#| error: true
n()
```
There are a couple of variants of `n()` and `count()` that you might find useful:
- `n_distinct(x)` counts the number of distinct (unique) values of one or more variables.
For example, we could figure out which destinations are served by the most carriers:
```{r}
flights |>
group_by(dest) |>
summarize(carriers = n_distinct(carrier)) |>
arrange(desc(carriers))
```
- A weighted count is a sum.
For example, you could "count" the number of miles each plane flew:
```{r}
flights |>
group_by(tailnum) |>
summarize(miles = sum(distance))
```
Weighted counts are a common problem so `count()` has a `wt` argument that does the same thing:
```{r}
#| results: false
flights |> count(tailnum, wt = distance)
```
- You can count missing values by combining `sum()` and `is.na()`.
In the `flights` dataset this represents flights that are cancelled:
```{r}
flights |>
group_by(dest) |>
summarize(n_cancelled = sum(is.na(dep_time)))
```
### Exercises
1. How can you use `count()` to count the number of rows with a missing value for a given variable?
2. Expand the following calls to `count()` to instead use `group_by()`, `summarize()`, and `arrange()`:
1. `flights |> count(dest, sort = TRUE)`
2. `flights |> count(tailnum, wt = distance)`
## Numeric transformations
Transformation functions work well with `mutate()` because their output is the same length as the input.
The vast majority of transformation functions are already built into base R.
It's impractical to list them all so this section will show the most useful ones.
As an example, while R provides all the trigonometric functions that you might dream of, we don't list them here because they're rarely needed for data science.
### Arithmetic and recycling rules {#sec-recycling}
We introduced the basics of arithmetic (`+`, `-`, `*`, `/`, `^`) in @sec-workflow-basics and have used them a bunch since.
These functions don't need a huge amount of explanation because they do what you learned in grade school.
But we need to briefly talk about the **recycling rules** which determine what happens when the left and right hand sides have different lengths.
This is important for operations like `flights |> mutate(air_time = air_time / 60)` because there are 336,776 numbers on the left of `/` but only one on the right.
R handles mismatched lengths by **recycling,** or repeating, the short vector.
We can see this in operation more easily if we create some vectors outside of a data frame:
```{r}
x <- c(1, 2, 10, 20)
x / 5
# is shorthand for
x / c(5, 5, 5, 5)
```
Generally, you only want to recycle single numbers (i.e. vectors of length 1), but R will recycle any shorter length vector.
It usually (but not always) gives you a warning if the longer vector isn't a multiple of the shorter:
```{r}
x * c(1, 2)
x * c(1, 2, 3)
```
These recycling rules are also applied to logical comparisons (`==`, `<`, `<=`, `>`, `>=`, `!=`) and can lead to a surprising result if you accidentally use `==` instead of `%in%` and the data frame has an unfortunate number of rows.
For example, take this code which attempts to find all flights in January and February:
```{r}
flights |>
filter(month == c(1, 2))
```
The code runs without error, but it doesn't return what you want.
Because of the recycling rules it finds flights in odd-numbered rows that departed in January and flights in even-numbered rows that departed in February.
And unfortunately there's no warning because `flights` has an even number of rows.
To protect you from this type of silent failure, most tidyverse functions use a stricter form of recycling that only recycles single values.
Unfortunately that doesn't help here, or in many other cases, because the key computation is performed by the base R function `==`, not `filter()`.
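The remedy is `%in%`, which tests whether each value on the left appears anywhere in the vector on the right, so no recycling is involved:
```{r}
#| results: false
flights |>
  filter(month %in% c(1, 2))
```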
### Minimum and maximum
The arithmetic functions work with pairs of variables.
Two closely related functions are `pmin()` and `pmax()`, which when given two or more variables will return the smallest or largest value in each row:
```{r}
df <- tribble(
~x, ~y,
1, 3,
5, 2,
7, NA,
)
df |>
mutate(
min = pmin(x, y, na.rm = TRUE),
max = pmax(x, y, na.rm = TRUE)
)
```
Note that these are different to the summary functions `min()` and `max()` which take multiple observations and return a single value.
You can tell that you've used the wrong form when all the minimums and all the maximums have the same value:
```{r}
df |>
mutate(
min = min(x, y, na.rm = TRUE),
max = max(x, y, na.rm = TRUE)
)
```
### Modular arithmetic
Modular arithmetic is the technical name for the type of math you did before you learned about decimal places, i.e. division that yields a whole number and a remainder.
In R, `%/%` does integer division and `%%` computes the remainder:
```{r}
1:10 %/% 3
1:10 %% 3
```
Modular arithmetic is handy for the `flights` dataset, because we can use it to unpack the `sched_dep_time` variable into `hour` and `minute`:
```{r}
flights |>
mutate(
hour = sched_dep_time %/% 100,
minute = sched_dep_time %% 100,
.keep = "used"
)
```
We can combine that with the `mean(is.na(x))` trick from @sec-logical-summaries to see how the proportion of cancelled flights varies over the course of the day.
The results are shown in @fig-prop-cancelled.
```{r}
#| label: fig-prop-cancelled
#| fig-cap: |
#| A line plot with scheduled departure hour on the x-axis, and proportion
#| of cancelled flights on the y-axis. Cancellations seem to accumulate
#| over the course of the day until 8pm, very late flights are much
#| less likely to be cancelled.
#| fig-alt: |
#| A line plot showing how proportion of cancelled flights changes over
#| the course of the day. The proportion starts low at around 0.5% at
#| 6am, then steadily increases over the course of the day until peaking
#| at 4% at 7pm. The proportion of cancelled flights then drops rapidly
#| getting down to around 1% by midnight.
flights |>
group_by(hour = sched_dep_time %/% 100) |>
summarize(prop_cancelled = mean(is.na(dep_time)), n = n()) |>
filter(hour > 1) |>
ggplot(aes(x = hour, y = prop_cancelled)) +
geom_line(color = "grey50") +
geom_point(aes(size = n))
```
### Logarithms
Logarithms are an incredibly useful transformation for dealing with data that ranges across multiple orders of magnitude and converting exponential growth to linear growth.
In R, you have a choice of three logarithms: `log()` (the natural log, base e), `log2()` (base 2), and `log10()` (base 10).
We recommend using `log2()` or `log10()`.
`log2()` is easy to interpret because a difference of 1 on the log scale corresponds to doubling on the original scale and a difference of -1 corresponds to halving; whereas `log10()` is easy to back-transform because (e.g.) a value of 3 on the log scale corresponds to 10\^3 = 1000 on the original scale.
The inverse of `log()` is `exp()`; to compute the inverse of `log2()` or `log10()` you'll need to use `2^` or `10^`.
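For example, on the `log2()` scale each doubling is a step of one, and `2^` undoes the transformation:
```{r}
x <- c(1, 2, 4, 8, 16)
log2(x)
2^log2(x) # back-transform to the original scale
```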
### Rounding {#sec-rounding}
Use `round(x)` to round a number to the nearest integer:
```{r}
round(123.456)
```
You can control the precision of the rounding with the second argument, `digits`.
`round(x, digits)` rounds to the nearest `10^-digits`, so `digits = 2` will round to the nearest 0.01.
This definition is useful because it implies `round(x, -3)` will round to the nearest thousand, which indeed it does:
```{r}
round(123.456, 2) # two digits
round(123.456, 1) # one digit
round(123.456, -1) # round to nearest ten
round(123.456, -2) # round to nearest hundred
```
There's one weirdness with `round()` that seems surprising at first glance:
```{r}
round(c(1.5, 2.5))
```
`round()` uses what's known as "round half to even" or Banker's rounding: if a number is half way between two integers, it will be rounded to the **even** integer.
This is a good strategy because it keeps the rounding unbiased: half of all 0.5s are rounded up, and half are rounded down.
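You can see the pattern with a longer run of halves: half of them round up and half round down:
```{r}
round(c(0.5, 1.5, 2.5, 3.5, 4.5))
```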
`round()` is paired with `floor()` which always rounds down and `ceiling()` which always rounds up:
```{r}
x <- 123.456
floor(x)
ceiling(x)
```
These functions don't have a `digits` argument, so you can instead scale down, round, and then scale back up:
```{r}
# Round down to nearest two digits
floor(x / 0.01) * 0.01
# Round up to nearest two digits
ceiling(x / 0.01) * 0.01
```
You can use the same technique if you want to `round()` to a multiple of some other number:
```{r}
# Round to nearest multiple of 4
round(x / 4) * 4
# Round to nearest 0.25
round(x / 0.25) * 0.25
```
### Cutting numbers into ranges
Use `cut()`[^numbers-1] to break up (aka bin) a numeric vector into discrete buckets:
[^numbers-1]: ggplot2 provides some helpers for common cases in `cut_interval()`, `cut_number()`, and `cut_width()`.
ggplot2 is an admittedly weird place for these functions to live, but they are useful as part of histogram computation and were written before any other parts of the tidyverse existed.
```{r}
x <- c(1, 2, 5, 10, 15, 20)
cut(x, breaks = c(0, 5, 10, 15, 20))
```
The breaks don't need to be evenly spaced:
```{r}
cut(x, breaks = c(0, 5, 10, 100))
```
You can optionally supply your own `labels`.
Note that `labels` should have one fewer element than `breaks`.
```{r}
cut(x,
breaks = c(0, 5, 10, 15, 20),
labels = c("sm", "md", "lg", "xl")
)
```
Any values outside of the range of the breaks will become `NA`:
```{r}
y <- c(NA, -10, 5, 10, 30)
cut(y, breaks = c(0, 5, 10, 15, 20))
```
See the documentation for other useful arguments like `right` and `include.lowest`, which control if the intervals are `[a, b)` or `(a, b]` and if the lowest interval should be `[a, b]`.
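For example, `right = FALSE` switches to intervals that are closed on the left, which also changes which values fall outside the breaks (here 20 becomes `NA` unless you also set `include.lowest = TRUE`):
```{r}
cut(x, breaks = c(0, 5, 10, 15, 20), right = FALSE)
```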
### Cumulative and rolling aggregates {#sec-cumulative-and-rolling-aggregates}
Base R provides `cumsum()`, `cumprod()`, `cummin()`, `cummax()` for running, or cumulative, sums, products, mins and maxes.
dplyr provides `cummean()` for cumulative means.
Cumulative sums tend to come up the most in practice:
```{r}
x <- 1:10
cumsum(x)
```
If you need more complex rolling or sliding aggregates, try the [slider](https://slider.r-lib.org/) package.
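For example, here's a minimal sketch of a centered rolling mean, assuming you've installed slider:
```{r}
#| eval: false
library(slider)
x <- 1:10
# average each value with one neighbor on either side
slide_dbl(x, mean, .before = 1, .after = 1)
```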
### Exercises
1. Explain in words what each line of the code used to generate @fig-prop-cancelled does.
2. What trigonometric functions does R provide?
Guess some names and look up the documentation.
Do they use degrees or radians?
3. Currently `dep_time` and `sched_dep_time` are convenient to look at, but hard to compute with because they're not really continuous numbers.
You can see the basic problem by running the code below: there's a gap between each hour.
```{r}
#| eval: false
flights |>
filter(month == 1, day == 1) |>
ggplot(aes(x = sched_dep_time, y = dep_delay)) +
geom_point()
```
Convert them to a more truthful representation of time (either fractional hours or minutes since midnight).
4. Round `dep_time` and `arr_time` to the nearest five minutes.
## General transformations
The following sections describe some general transformations which are often used with numeric vectors, but can be applied to all other column types.
### Ranks
dplyr provides a number of ranking functions inspired by SQL, but you should always start with `dplyr::min_rank()`.
It uses the typical method for dealing with ties, e.g., 1st, 2nd, 2nd, 4th.
```{r}
x <- c(1, 2, 2, 3, 4, NA)
min_rank(x)
```
Note that the smallest values get the lowest ranks; use `desc(x)` to give the largest values the smallest ranks:
```{r}
min_rank(desc(x))
```
If `min_rank()` doesn't do what you need, look at the variants `dplyr::row_number()`, `dplyr::dense_rank()`, `dplyr::percent_rank()`, and `dplyr::cume_dist()`.
See the documentation for details.
```{r}
df <- tibble(x = x)
df |>
mutate(
row_number = row_number(x),
dense_rank = dense_rank(x),
percent_rank = percent_rank(x),
cume_dist = cume_dist(x)
)
```
You can achieve many of the same results by picking the appropriate `ties.method` argument to base R's `rank()`; you'll probably also want to set `na.last = "keep"` to keep `NA`s as `NA`.
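For example, this base R call gives the same result as `min_rank(x)`:
```{r}
rank(x, ties.method = "min", na.last = "keep")
```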
`row_number()` can also be used without any arguments when inside a dplyr verb.
In this case, it'll give the number of the "current" row.
When combined with `%%` or `%/%` this can be a useful tool for dividing data into similarly sized groups:
```{r}
df <- tibble(id = 1:10)
df |>
mutate(
row0 = row_number() - 1,
three_groups = row0 %% 3,
three_in_each_group = row0 %/% 3
)
```
### Offsets
`dplyr::lead()` and `dplyr::lag()` allow you to refer to the values just before or just after the "current" value.
They return a vector of the same length as the input, padded with `NA`s at the start or end:
```{r}
x <- c(2, 5, 11, 11, 19, 35)
lag(x)
lead(x)
```
- `x - lag(x)` gives you the difference between the current and previous value.
```{r}
x - lag(x)
```
- `x == lag(x)` tells you whether the current value is the same as the previous one (so `x != lag(x)` flags a change).
```{r}
x == lag(x)
```
You can lead or lag by more than one position by using the second argument, `n`.
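For example, `n = 2` looks two positions back:
```{r}
lag(x, n = 2)
```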
### Consecutive identifiers
Sometimes you want to start a new group every time some event occurs.
For example, when you're looking at website data, it's common to want to break up events into sessions, where you begin a new session after a gap of more than `x` minutes since the last activity.
For example, imagine you have the times when someone visited a website:
```{r}
events <- tibble(
time = c(0, 1, 2, 3, 5, 10, 12, 15, 17, 19, 20, 27, 28, 30)
)
```
And you've computed the time between each event, and figured out if there's a gap that's big enough to qualify:
```{r}
events <- events |>
mutate(
diff = time - lag(time, default = first(time)),
has_gap = diff >= 5
)
events
```
But how do we go from that logical vector to something that we can `group_by()`?
`cumsum()`, from @sec-cumulative-and-rolling-aggregates, comes to the rescue: each gap, i.e. each row where `has_gap` is `TRUE`, increments `group` by one (@sec-numeric-summaries-of-logicals):
```{r}
events |> mutate(
group = cumsum(has_gap)
)
```
Another approach for creating grouping variables is `consecutive_id()`, which starts a new group every time one of its arguments changes.
For example, inspired by [this stackoverflow question](https://stackoverflow.com/questions/27482712), imagine you have a data frame with a bunch of repeated values:
```{r}
df <- tibble(
x = c("a", "a", "a", "b", "c", "c", "d", "e", "a", "a", "b", "b"),
y = c(1, 2, 3, 2, 4, 1, 3, 9, 4, 8, 10, 199)
)
```
If you want to keep the first row from each repeated `x`, you could use `group_by()`, `consecutive_id()`, and `slice_head()`:
```{r}
df |>
group_by(id = consecutive_id(x)) |>
slice_head(n = 1)
```
### Exercises
1. Find the 10 most delayed flights using a ranking function.
How do you want to handle ties?
Carefully read the documentation for `min_rank()`.
2. Which plane (`tailnum`) has the worst on-time record?
3. What time of day should you fly if you want to avoid delays as much as possible?
4. What does `flights |> group_by(dest) |> filter(row_number() < 4)` do?
What does `flights |> group_by(dest) |> filter(row_number(dep_delay) < 4)` do?
5. For each destination, compute the total minutes of delay.
For each flight, compute the proportion of the total delay for its destination.
6. Delays are typically temporally correlated: even once the problem that caused the initial delay has been resolved, later flights are delayed to allow earlier flights to leave.
Using `lag()`, explore how the average flight delay for an hour is related to the average delay for the previous hour.
```{r}
#| results: false
flights |>
mutate(hour = dep_time %/% 100) |>
group_by(year, month, day, hour) |>
summarize(
dep_delay = mean(dep_delay, na.rm = TRUE),
n = n(),
.groups = "drop"
) |>
filter(n > 5)
```
7. Look at each destination.
Can you find flights that are suspiciously fast (i.e. flights that represent a potential data entry error)?
Compute the air time of a flight relative to the shortest flight to that destination.
Which flights were most delayed in the air?
8. Find all destinations that are flown by at least two carriers.
Use those destinations to come up with a relative ranking of the carriers based on their performance for the same destination.
## Numeric summaries
Just using the counts, means, and sums that we've introduced already can get you a long way, but R provides many other useful summary functions.
Here is a selection that you might find useful.
### Center
So far, we've mostly used `mean()` to summarize the center of a vector of values.
As we've seen in @sec-sample-size, because the mean is the sum divided by the count, it is sensitive to even just a few unusually high or low values.
An alternative is to use the `median()`, which finds a value that lies in the "middle" of the vector, i.e. 50% of the values are above it and 50% are below it.
Depending on the shape of the distribution of the variable you're interested in, mean or median might be a better measure of center.
For example, for symmetric distributions we generally report the mean while for skewed distributions we usually report the median.
@fig-mean-vs-median compares the mean vs. the median departure delay (in minutes) for each day.
The median delay is always smaller than the mean delay because flights sometimes leave multiple hours late, but never leave multiple hours early.
```{r}
#| label: fig-mean-vs-median
#| fig-cap: |
#| A scatterplot showing the difference between summarizing daily departure
#| delay with median instead of mean.
#| fig-alt: |
#| All points fall below a 45° line, meaning that the median delay is
#| always less than the mean delay. Most points are clustered in a
#| dense region of mean [0, 20] and median [0, 5]. As the mean delay
#| increases, the spread of the median also increases. There are two
#| outlying points with mean ~60, median ~50, and mean ~85, median ~55.
flights |>
group_by(year, month, day) |>
summarize(
mean = mean(dep_delay, na.rm = TRUE),
median = median(dep_delay, na.rm = TRUE),
n = n(),
.groups = "drop"
) |>
ggplot(aes(x = mean, y = median)) +
geom_abline(slope = 1, intercept = 0, color = "white", linewidth = 2) +
geom_point()
```
You might also wonder about the **mode**, or the most common value.
This summary works well only for very simple cases (which is why you might have learned about it in high school), but not for many real datasets.
If the data is discrete, there may be multiple most common values, and if the data is continuous, there might be no most common value because every value is ever so slightly different.
For these reasons, the mode tends not to be used by statisticians and there's no mode function included in base R[^numbers-2].
[^numbers-2]: The `mode()` function does something quite different!
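If you do need the most common value of a discrete variable, one common makeshift (a tabulation trick, not a base R mode function) is:
```{r}
x <- c(1, 2, 2, 3, 3, 3)
names(which.max(table(x))) # returns the most frequent value as a string
```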
### Minimum, maximum, and quantiles {#sec-min-max-summary}
What if you're interested in locations other than the center?
`min()` and `max()` will give you the largest and smallest values.
Another powerful tool is `quantile()` which is a generalization of the median: `quantile(x, 0.25)` will find the value of `x` that is greater than 25% of the values, `quantile(x, 0.5)` is equivalent to the median, and `quantile(x, 0.95)` will find the value that's greater than 95% of the values.
For the `flights` data, you might want to look at the 95% quantile of delays rather than the maximum, because it will ignore the 5% of most delayed flights which can be quite extreme.
```{r}
flights |>
group_by(year, month, day) |>
summarize(
max = max(dep_delay, na.rm = TRUE),
q95 = quantile(dep_delay, 0.95, na.rm = TRUE),
.groups = "drop"
)
```
### Spread
Sometimes you're not so interested in where the bulk of the data lies, but in how it is spread out.
Two commonly used summaries are the standard deviation, `sd(x)`, and the inter-quartile range, `IQR()`.
We won't explain `sd()` here since you're probably already familiar with it, but `IQR()` might be new --- it's `quantile(x, 0.75) - quantile(x, 0.25)` and gives you the range that contains the middle 50% of the data.
We can use this to reveal a small oddity in the `flights` data.
You might expect the spread of the distance between origin and destination to be zero, since airports are always in the same place.
But the code below reveals a data oddity for airport [EGE](https://en.wikipedia.org/wiki/Eagle_County_Regional_Airport):
```{r}
flights |>
group_by(origin, dest) |>
summarize(
distance_iqr = IQR(distance),
n = n(),
.groups = "drop"
) |>
filter(distance_iqr > 0)
```
### Distributions
It's worth remembering that all of the summary statistics described above are a way of reducing the distribution down to a single number.
This means that they're fundamentally reductive, and if you pick the wrong summary, you can easily miss important differences between groups.
That's why it's always a good idea to visualize the distribution before committing to your summary statistics.
@fig-flights-dist shows the overall distribution of departure delays.
The distribution is so skewed that we have to zoom in to see the bulk of the data.
This suggests that the mean is unlikely to be a good summary and we might prefer the median instead.
```{r}
#| echo: false
#| label: fig-flights-dist
#| fig-cap: |
#| (Left) The histogram of the full data is extremely skewed making it
#| hard to get any details. (Right) Zooming into delays of less than two
#| hours makes it possible to see what's happening with the bulk of the
#| observations.
#| fig-alt: |
#| Two histograms of `dep_delay`. On the left, it's very hard to see
#| any pattern except that there's a very large spike around zero, the
#| bars rapidly decay in height, and for most of the plot, you can't
#| see any bars because they are too short to see. On the right,
#| where we've discarded delays of greater than two hours, we can
#| see that the spike occurs slightly below zero (i.e. most flights
#| leave a couple of minutes early), but there's still a very steep
#| decay after that.
#| fig-asp: 0.5
library(patchwork)
full <- flights |>
ggplot(aes(x = dep_delay)) +
geom_histogram(binwidth = 15, na.rm = TRUE)
delayed120 <- flights |>
filter(dep_delay < 120) |>
ggplot(aes(x = dep_delay)) +
geom_histogram(binwidth = 5)
full + delayed120
```
It's also a good idea to check that distributions for subgroups resemble the whole.
In the following plot, 365 frequency polygons of `dep_delay`, one for each day, are overlaid.
The distributions seem to follow a common pattern, suggesting it's fine to use the same summary for each day.
```{r}
#| fig-alt: |
#| The distribution of `dep_delay` is highly right skewed with a strong
#| peak slightly less than 0. The 365 frequency polygons are mostly
#| overlapping, forming a thick black band.
flights |>
filter(dep_delay < 120) |>
ggplot(aes(x = dep_delay, group = interaction(day, month))) +
geom_freqpoly(binwidth = 5, alpha = 1/5)
```
Don't be afraid to explore your own custom summaries specifically tailored for the data that you're working with.
In this case, that might mean separately summarizing the flights that left early vs. the flights that left late, or given that the values are so heavily skewed, you might try a log-transformation.
Finally, don't forget what you learned in @sec-sample-size: whenever creating numerical summaries, it's a good idea to include the number of observations in each group.
### Positions
There's one final type of summary that's useful for numeric vectors, but also works with every other type of value: extracting a value at a specific position: `first(x)`, `last(x)`, and `nth(x, n)`.
For example, we can find the first and last departure for each day:
```{r}
flights |>
group_by(year, month, day) |>
summarize(
first_dep = first(dep_time, na_rm = TRUE),
fifth_dep = nth(dep_time, 5, na_rm = TRUE),
last_dep = last(dep_time, na_rm = TRUE)
)
```
(NB: Because dplyr functions use `_` to separate components of function and argument names, these functions use `na_rm` instead of `na.rm`.)
If you're familiar with `[`, which we'll come back to in @sec-subset-many, you might wonder if you ever need these functions.
There are three reasons: the `default` argument allows you to provide a default if the specified position doesn't exist, the `order_by` argument allows you to locally override the order of the rows, and the `na_rm` argument allows you to drop missing values.
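Here's a small sketch of `default` and `na_rm` (the `na_rm` argument needs dplyr 1.1.0 or later):
```{r}
x <- c(NA, 10, 20)
first(x)               # NA: the literal first element
first(x, na_rm = TRUE) # 10: the first non-missing element
nth(x, 5, default = 0) # 0: position 5 doesn't exist, so the default is used
```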
Extracting values at positions is complementary to filtering on ranks.
Filtering gives you all variables, with each observation in a separate row:
```{r}
flights |>
group_by(year, month, day) |>
mutate(r = min_rank(sched_dep_time)) |>
filter(r %in% c(1, max(r)))
```
### With `mutate()`
As the names suggest, the summary functions are typically paired with `summarize()`.
However, because of the recycling rules we discussed in @sec-recycling they can also be usefully paired with `mutate()`, particularly when you want to do some sort of group standardization.
For example (a runnable sketch follows this list):
- `x / sum(x)` calculates the proportion of a total.
- `(x - mean(x)) / sd(x)` computes a Z-score (standardized to mean 0 and sd 1).
- `(x - min(x)) / (max(x) - min(x))` standardizes to range \[0, 1\].
- `x / first(x)` computes an index based on the first observation.
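Here's a sketch of the first two using a toy data frame (the variable names are made up for illustration):
```{r}
df <- tibble(
  g = c("a", "a", "a", "b", "b"),
  x = c(1, 3, 5, 2, 8)
)
df |>
  group_by(g) |>
  mutate(
    prop = x / sum(x),          # proportion of the group total
    z = (x - mean(x)) / sd(x)   # z-score within each group
  )
```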
### Exercises
1. Brainstorm at least 5 different ways to assess the typical delay characteristics of a group of flights.
When is `mean()` useful?
When is `median()` useful?
When might you want to use something else?
Should you use arrival delay or departure delay?
Why might you want to use data from `planes`?
2. Which destinations show the greatest variation in air speed?
3. Create a plot to further explore the adventures of EGE.
Can you find any evidence that the airport moved locations?
Can you find another variable that might explain the difference?
## Summary
You're already familiar with many tools for working with numbers, and after reading this chapter you now know how to use them in R.
You've also learned a handful of useful general transformations that are commonly, but not exclusively, applied to numeric vectors like ranks and offsets.
Finally, you worked through a number of numeric summaries, and discussed a few of the statistical challenges that you should consider.
Over the next two chapters, we'll dive into working with strings with the stringr package.
Strings are a big topic so they get two chapters, one on the fundamentals of strings and one on regular expressions.