papersize | documentclass | classoption | colorlinks |
---|---|---|---|
a4 | scrartcl | DIV=14 | true |
All students are required to complete the preparatory course 'R Advanced for Methodology' early in Michaelmas Term, ideally in weeks 0 and 1. You will be auto-enrolled into the R course when enrolling into MY472 on Moodle.
Office hour slots to be booked via LSE's StudentHub
- Friedrich Geiecke, Department of Methodology. Office hours: Tuesdays 16:00-18:00 via Zoom
- Martin Lukac, Department of Methodology. Office hours: Mondays 14:00-17:00 via Zoom
- Lectures are prerecorded and available via Moodle
- Lecture discussions: Tuesdays 09:00-11:00 and 15:00-16:00 via Zoom (you can choose which one to attend)
- Classes on:
- Thursdays 10:00-11:00, CLM.3.02 and via Zoom
- Fridays 16:00-17:00, LRB.R.21 and via Zoom
No lectures or classes will take place during (Reading) Week 6.
Week | Date | Topic |
---|---|---|
1 | 28 Sep | Introduction to data |
2 | 5 Oct | The shape of data |
3 | 12 Oct | HTML and CSS |
4 | 19 Oct | Using data from the Internet |
5 | 26 Oct | Working with APIs |
6 | 2 Nov | Reading week |
7 | 9 Nov | Textual data |
8 | 16 Nov | Data visualization |
9 | 23 Nov | Creating and managing databases |
10 | 30 Nov | Interacting with online databases |
11 | 7 Dec | Cloud Computing |
This course will cover the principles of digital methods for collecting, processing, and storing data. The course will also cover workflow management for typical data transformation and cleaning projects, frequently the starting point and most time-consuming part of any data science project. We take a project-based approach to the study of computation, together with some group-based collaboration, both essential ingredients of modern data science work. We will also make frequent use of version control and group collaboration tools such as git and GitHub.
We begin by discussing fundamental data types and how data is stored and recorded electronically. We continue with an introduction to R Markdown and the reshaping of data in R. This is followed by a discussion of common data formats on the internet, such as markup languages (e.g. HTML and XML) and JSON. Students also study the fundamentals of acquiring and managing data from the internet, both by scraping websites and by accessing the APIs of online databases and social network services.
After the reading week, we will learn how to work with unstructured data in the form of text. We then continue with an overview of the principles of exploratory data analysis through data visualization, e.g. using R's ggplot2. Next, we will cover database design, especially relational databases, using examples from a variety of fields. Students are introduced to SQL through MySQL, and programming assignments in this unit of the course are designed to ensure that students learn to create, populate, and query an SQL database. We will then introduce NoSQL using MongoDB and the JSON data format for comparison. For both types of database, students will be encouraged to work with data relevant to their own interests as they learn to create, populate, and query data. The course concludes with a discussion of cloud computing. Students will first learn the basics of cloud computing, which can serve various purposes such as data analysis, and then how to set up a cloud computing environment through Amazon Web Services, a popular cloud platform.
Students will be expected to complete five weekly, structured problem sets, each begun in the staff-led lab sessions and finished outside of class. Answers should be formatted and submitted for assessment. One or more of these problem sets will be completed in collaboration with other students.
Take-home exam (50%) and in-class assessment (50%).
Student problem sets will be marked and will provide 50% of the mark.
Assignments will be marked using the following criteria:
- 70–100: Very Good to Excellent (Distinction). Perceptive, focused use of a good depth of material with a critical edge. Original ideas or structure of argument.
- 60–69: Good (Merit). Perceptive understanding of the issues plus a coherent, well-read, and stylish treatment, though lacking originality.
- 50–59: Satisfactory (Pass). A "correct" answer based largely on lecture material. Little detail or originality, but presented in an adequate framework. Small factual errors allowed.
- 30–49: Unsatisfactory (Fail) and 0–29: Unsatisfactory (Bad fail). Based entirely on lecture material but unstructured and with an increasing error component. Concepts are disordered or flawed. Poor presentation. Errors of concept and scope, or poor knowledge, structure, and expression.
Some of the assignments will involve shorter questions, to which the answers can be relatively unambiguously coded as (fully or partially) correct or incorrect. In the marking, these questions may be further broken down into smaller steps and marked step by step. The final mark is then a function of the proportion of question parts answered correctly. In such marking, the principle of partial credit is observed as far as feasible: an answer to one part of a question is treated as correct when it is correct conditional on the answers to other parts of the question, even if those other parts have been answered incorrectly.
In the first week, we will introduce the basic concepts of the course, including how data is recorded, stored, and shared. Because the course relies fundamentally on GitHub, a collaborative code and data sharing platform, we will introduce the use of git and GitHub, using the lab session to guide students through setting up an account and subscribing to the course organisation and assignments.
This week will also introduce basic data types, in a language-agnostic manner, from the perspective of machine implementations through to high-level programming languages. We will then focus on how basic data types are implemented in R.
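The machine-level point is language-agnostic; a minimal Python sketch (toy values, not course material) of how the "same" number occupies different bytes as a float, an integer, and text:

```python
import struct

# Toy illustration: the number 7 stored three different ways.
as_double = struct.pack(">d", 7.0)   # 64-bit IEEE 754 float: 8 bytes
as_int32 = (7).to_bytes(4, "big")    # 32-bit integer: 4 bytes
as_text = "7".encode("utf-8")        # character data: 1 byte here

print(len(as_double), len(as_int32), len(as_text))  # 8 4 1

# Floating-point storage is finite, so decimal fractions can round:
print(0.1 + 0.2 == 0.3)  # False
```

High-level languages such as R hide these representations, but they resurface in practice, e.g. when floating-point comparisons fail unexpectedly.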
- Lecture slides
- R example: Introduction to RMarkdown and as rmd source
- R example: vectors, lists, data frames
- Wickham, Hadley. N.d. Advanced R, 2nd ed. Ch. 3, Names and values; Ch. 4, Vectors; and Ch. 5, Subsetting. (Ch. 2-3 of the print edition.)
- GitHub Guides, especially: "Understanding the GitHub Flow", "Hello World", and "Getting Started with GitHub Pages".
- GitHub. "Markdown Syntax" (a cheatsheet).
- Lake, P. and Crowther, P. 2013. Concise guide to databases: A Practical Introduction. London: Springer-Verlag. Chapter 1, Data, an Organizational Asset
- Nelson, Meghan. 2015. "An Intro to Git and GitHub for Beginners (Tutorial)."
- Jim McGlone, "Creating and Hosting a Personal Site on GitHub: A step-by-step beginner's guide to creating a personal website and blog using Jekyll and hosting it for free using GitHub Pages."
- Installing git and setting up an account on GitHub
- How to complete and submit assignments using GitHub Classroom
- Forking and correcting a broken RMarkdown file
- Cloning a website repository, modifying it, and publishing a personal webpage
This week moves beyond the rectangular format common in statistical datasets, modeled on a spreadsheet, to cover relational structures and the concept of database normalization. We will also cover ways to restructure data from "wide" to "long" format, within strictly rectangular data structures. Additional topics concerning text encoding, date formats, and sparse matrix formats are also covered.
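The wide-to-long idea can be sketched in a few lines; the course itself does this with tidyverse reshaping functions in R, but here is a dependency-free Python illustration with invented toy numbers:

```python
# Invented "wide" data: one row per country, one column per year.
wide = [
    {"country": "FR", "2019": 2.7, "2020": 2.6},
    {"country": "DE", "2019": 3.9, "2020": 3.8},
]

# "Long" format: one row per (country, year) observation.
long_rows = [
    {"country": row["country"], "year": year, "value": val}
    for row in wide
    for year, val in row.items()
    if year != "country"
]
print(long_rows[0])  # {'country': 'FR', 'year': '2019', 'value': 2.7}
```

The long layout is strictly rectangular and treats the year as data rather than as a column name, which is what makes it "tidy".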
- Lecture slides
- R examples: conditionals, loops, and functions, introduction to key tidyverse functions, industrial production dataset, and industrial production and unemployment dataset
- Wickham, Hadley and Garett Grolemund. 2017. R for Data Science: Import, Tidy, Transform, Visualize, and Model Data. Sebastopol, CA: O'Reilly. Part II Wrangle, Tibbles, Data Import, Tidy Data (Ch. 7-9 of the print edition).
- The Tidyverse collection of packages for R.
- Link to GitHub Classroom available via Moodle on Monday, October 5, 2pm
- Deadline on Friday, October 16, 2pm
From week 3 to week 5, we will learn how to get data from the Internet. This week introduces the basics, including markup languages (HTML, XML, and Markdown) and other common data formats such as JSON (JavaScript Object Notation). We also cover basic web scraping, turning web data into text or numbers, as well as the client-server model and how machines and humans transmit data over networks and to and from databases.
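JSON text maps naturally onto nested lists and dictionaries once parsed; a minimal Python sketch with an invented payload (the course works with the analogous structures in R):

```python
import json

# Invented payload in the style of a web API response.
payload = '{"articles": [{"title": "Hello, world", "year": 2020}]}'

data = json.loads(payload)            # JSON text -> dicts and lists
title = data["articles"][0]["title"]
print(title)                          # Hello, world

round_trip = json.dumps(data)         # dicts and lists -> JSON text
print(round_trip)
```

The same parse/serialize cycle underlies most API work later in the course.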
- Lazer, David, and Jason Radford. 2017. “Data Ex Machina: Introduction to Big Data.” Annual Review of Sociology 43(1): 19–39.
- Howe, Shay. 2015. Learn to Code HTML and CSS: Develop and Style Websites. New Riders. Chs 1-8.
- Kingl, Arvid. 2018. Web Scraping in R: rvest Tutorial.
- Munzert, Simon, Christian Rubba, Peter Meissner, and Dominic Nyhuis D. 2014. Automated Data Collection with R: A Practical Guide to Web Scraping and Text Mining. Hoboken, NJ/Chichester, UK:Wiley & Sons. Ch. 2-4, 9.
- Severance, Charles Russell. 2015. Introduction to Networking: How the Internet Works. Charles Severance, 2015.
- Duckett, Jon. 2011. HTML and CSS: Design and Build Websites. New York: Wiley.
- Scraping tables
- Scraping unstructured data
Continuing from the material covered in Week 3, we will learn advanced topics in web scraping, including scraping documents in XML (such as RSS feeds), scraping websites that require authentication, and scraping websites with non-static components.
- Sai Swapna Gollapudi. 2018. Learn Web Scraping and Browser Automation Using RSelenium in R.
- Wickham, Hadley. 2015. Parse and process XML (and HTML) with xml2
- Mozilla Developer Web Docs. What is JavaScript.
- Schouwenaars, Filip. 2015. Web Scraping with R and PhantomJS.
- Mozilla Developer Web Docs. A First Splash into JavaScript.
- Link to GitHub Classroom available via Moodle on Monday, October 19, 2pm
- Deadline on Friday, October 30, 2pm
How to work with Application Programming Interfaces (APIs), which offer developers and researchers access to data in a structured format. Our running examples will be the New York Times API and the Twitter API.
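Most REST APIs are queried by assembling a URL from a base endpoint plus parameters. A hedged Python sketch; the endpoint and key below are placeholders, not the real New York Times or Twitter parameters:

```python
from urllib.parse import urlencode

# Placeholder endpoint and key: real APIs such as the NYT's require
# registering for a key, and their parameter names differ.
base = "https://api.example.com/articlesearch"
params = {"q": "climate change", "page": 0, "api-key": "YOUR_KEY"}

url = base + "?" + urlencode(params)  # percent-encodes the query string
print(url)
```

In the lab we will do the equivalent from R, where packages handle the URL building and authentication for us.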
- Steinert-Threlkeld. 2018. Twitter as Data. Cambridge University Press.
- Ruths and Pfeffer. 2014. Social media for large studies of behavior. Science.
- Interacting with the New York Times API
- Interacting with Twitter's REST and Streaming API
- Link to GitHub Classroom available via Moodle on Monday, October 26, 2pm
- Deadline on Friday, November 13, 2pm
We will learn how to work with unstructured data in the form of text, and how to deal with format conversion, encoding problems, and serialization. We will also cover search and replace operations using regular expressions, as well as the most common textual data types in R and Python.
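A small Python sketch of the kind of regular-expression search and replace we will practice (the sentence is invented; R's stringr offers the same operations):

```python
import re

text = "Prices rose from £1,200 to £1,450 in 2020."  # invented sentence

# Search: every pound amount (a £ followed by digits and commas).
amounts = re.findall(r"£[\d,]+", text)
print(amounts)  # ['£1,200', '£1,450']

# Replace: redact standalone four-digit years.
redacted = re.sub(r"\b\d{4}\b", "<YEAR>", text)
print(redacted)  # Prices rose from £1,200 to £1,450 in <YEAR>.
```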
- Kenneth Benoit. July 16, 2019. "Text as Data: An Overview" Forthcoming in Cuirini, Luigi and Robert Franzese, eds. Handbook of Research Methods in Political Science and International Relations. Thousand Oaks: Sage.
- Group working with textual data
The lecture this week will offer an overview of the principles of exploratory data analysis through (good) data visualization. In the seminars, we will practice producing our own graphs using ggplot2.
- Wickham, Hadley and Garett Grolemund. 2017. R for Data Science: Import, Tidy, Transform, Visualize, and Model Data. Sebastopol, CA: O'Reilly. Data visualization, Graphics for communication (Ch. 1 and 22 of the print edition).
- Hughes, A. (2015) "Visualizing inequality: How graphical emphasis shapes public opinion" Research and Politics.
- Tufte, E. (2002) "The visual display of quantitative information".
- Data visualization with ggplot2
- Link to GitHub Classroom available via Moodle on Monday, November 16, 2pm
- Deadline on Friday, November 27, 2pm
This session will offer an introduction to relational databases: their structure, logic, and main types. We will learn how to write SQL, a language designed to query this type of database that is currently employed by most tech companies, and how to use it from R via the DBI package.
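The create-populate-query cycle can be sketched with Python's built-in SQLite driver as a lightweight stand-in; the class itself uses MySQL from R, but the SQL is similar (table and rows below are invented):

```python
import sqlite3

# In-memory SQLite database standing in for MySQL.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE posts (id INTEGER PRIMARY KEY, author TEXT, likes INTEGER)"
)
conn.executemany(
    "INSERT INTO posts (author, likes) VALUES (?, ?)",
    [("ada", 10), ("ada", 5), ("grace", 7)],  # invented rows
)

# Query: total likes per author.
rows = conn.execute(
    "SELECT author, SUM(likes) FROM posts GROUP BY author ORDER BY author"
).fetchall()
print(rows)  # [('ada', 15), ('grace', 7)]
conn.close()
```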
- Beaulieu. 2009. Learning SQL. O'Reilly. (Chapters 1, 3, 4, 5, 8)
- Stephens et al. 2009. Teach yourself SQL in one hour a day. Sam's Publishing.
- Analyzing public Facebook data in a SQLite database
This week, we will dive deeper into databases. In particular, this week covers the following topics: how to set up and use relational databases in the cloud, how to run big data analytics through data warehousing services (e.g. Google BigQuery), and the fundamentals of NoSQL databases.
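Where a relational row is flat, a document store such as MongoDB keeps nested, JSON-like documents; a toy Python illustration of the difference (all data invented):

```python
# A MongoDB-style document: nested and schema-flexible, unlike
# a flat relational row.
doc = {
    "user": "ada",
    "posts": [
        {"text": "hello", "likes": 10},
        {"text": "again", "likes": 5},
    ],
}

# What a JOIN plus aggregation would do in SQL is a traversal here.
total_likes = sum(p["likes"] for p in doc["posts"])
print(total_likes)  # 15
```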
- Beaulieu. 2009. Learning SQL. O'Reilly. (Chapter 2)
- Hows, Membrey, and Plugge. 2014. MongoDB Basics. Apress. (Chapter 1)
- Tigani and Naidu. 2017. Google BigQuery Analytics. Wiley. (Chapters 1-3)
- MongoDB Basics on edX
- Analyzing Big Data in less time with Google BigQuery on YouTube
- SQL JOINs, subqueries, and BigQuery
- Link to GitHub Classroom available via Moodle on Monday, November 30, 2pm
- Deadline on Friday, December 11, 2pm
In this week, we focus on setting up computation environments on the Internet. We will introduce cloud computing concepts and learn why the big shift to the cloud is occurring in industry and how it is relevant to us as data scientists. In the lab, we will walk through a cloud environment setup using Amazon Web Services: we will sign up for an account, launch a cloud computing environment, create a webpage, and set up a statistical computing environment.
- Rajaraman, V. 2014. "Cloud Computing." Resonance 19(3): 242–58.
- AWS: What is cloud computing.
- Azure: Developer guide.
- Puparelia, Nayan. 2016. "Cloud Computing." MIT Press. Ch. 1-3.
- Botta, Alessio, Walter De Donato, Valerio Persico, and Antonio Pescapé. 2016. "Integration of Cloud Computing and Internet of Things: A Survey." Future Generation Computer Systems 56: 684–700.
- Set up an AWS account (link from Moodle for AWS Educate free account)
- Secure the account
- Configure EC2 instance
- Work with EC2 instance
- Log in to the EC2 Linux console
- Set up a web server
- Install R and some packages
- Stop the instance
- Link to GitHub Classroom available via Moodle on ...
- Deadline on Friday, January 15, 2pm