Most of the data on the web is still only available as HTML - while it is structured (hierarchical), it is often not available in a form useful for analysis (flat / tidy).
<html>
  <head><title>This is a title</title></head>
  <body>
    <p align="center">Hello world!</p>
    <br/>
    <div class="name" id="first">John</div>
    <div class="name" id="last">Doe</div>
    <div class="contact">
      <div class="home">555-555-1234</div>
      <div class="home">555-555-2345</div>
      <div class="work">555-555-9999</div>
      <div class="fax">555-555-8888</div>
    </div>
  </body>
</html>
rvest
rvest is a package from the tidyverse that makes basic processing and manipulation of HTML data straightforward. It provides high-level functions for interacting with HTML via the xml2 library.
Core functions:
read_html() - read HTML data from a URL or character string.
html_elements() / html_nodes() - select specified elements from the HTML document using CSS selectors (or XPath).
html_element() / html_node() - select a single element from the HTML document using CSS selectors (or XPath).
html_table() - parse an HTML table into a data frame.
html_text() / html_text2() - extract a tag's text content.
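As a minimal sketch of how these fit together (assuming rvest is installed; the HTML string here is a trimmed-down version of the example above):

library(rvest)

html = read_html('<div class="name" id="first">John</div><div class="name" id="last">Doe</div>')

html |> html_elements("div")                 # all <div> elements
html |> html_element("#first")               # just the element with id="first"
html |> html_elements("div") |> html_text()  # c("John", "Doe")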
We will be using a tool called SelectorGadget to help us identify the HTML elements of interest - it does this by constructing a CSS selector which can be used to subset the HTML document.
Some examples of basic selector syntax are given below:
Selector            Example         Description
.class              .title          Select all elements with class="title"
#id                 #name           Select all elements with id="name"
element             p               Select all <p> elements
element element     div p           Select all <p> elements inside a <div> element
element>element     div > p         Select all <p> elements with <div> as a parent
[attribute]         [class]         Select all elements with a class attribute
[attribute=value]   [class=title]   Select all elements with class="title"
There are also a number of additional combinators and pseudo-classes (e.g. :first-child, :nth-of-type()) that improve flexibility.
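As a sketch of how a few of these selectors behave, here they are applied to a trimmed-down version of the example document from the start of this section (assuming rvest is loaded):

html = read_html(
  '<body>
    <p align="center">Hello world!</p>
    <div class="name" id="first">John</div>
    <div class="name" id="last">Doe</div>
    <div class="contact">
      <div class="home">555-555-1234</div>
      <div class="work">555-555-9999</div>
    </div>
  </body>'
)

html |> html_elements(".name")         # class     - both name <div>s
html |> html_elements("#first")        # id        - only the <div> with id="first"
html |> html_elements("div div")       # nesting   - the <div>s inside the contact <div>
html |> html_elements("[class=home]")  # attribute - the <div> with class="home"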
The difference between html_text() and html_text2() is in how whitespace is handled - html_text() returns the raw text from the document, while html_text2() simulates how the text would be rendered by a browser.
html = read_html("<p> This is the first sentence in the paragraph. This is the second sentence that should be on the same line as the first sentence.<br>This third sentence should start on a new line. </p>")
html |> html_text()
[1] " \n This is the first sentence in the paragraph.\n This is the second sentence that should be on the same line as the first sentence.This third sentence should start on a new line.\n "
html |> html_text2()
[1] "This is the first sentence in the paragraph. This is the second sentence that should be on the same line as the first sentence.\nThis third sentence should start on a new line."
html |> html_text() |> cat(sep = "\n")
This is the first sentence in the paragraph.
This is the second sentence that should be on the same line as the first sentence.This third sentence should start on a new line.
html |> html_text2() |> cat(sep = "\n")
This is the first sentence in the paragraph. This is the second sentence that should be on the same line as the first sentence.
This third sentence should start on a new line.
There is a standard for communicating to users whether it is acceptable to automatically scrape a website: the robots exclusion standard, implemented via a robots.txt file.
You can find examples at all of your favorite websites: Google, Facebook, etc.
These files are meant to be machine-readable, but the polite package can handle this for us (and much more).
polite::bow("http://google.com")
<polite session> http://google.com
User-agent: polite R package
robots.txt: 313 rules are defined for 4 bots
Crawl delay: 5 sec
The path is scrapable for this user-agent
polite::bow("http://facebook.com")
<polite session> http://facebook.com
User-agent: polite R package
robots.txt: 525 rules are defined for 21 bots
Crawl delay: 5 sec
The path is not scrapable for this user-agent
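Beyond checking robots.txt, polite can also manage the scraping itself. A minimal sketch of the bow/scrape workflow (the "a" selector at the end is just illustrative):

library(polite)
library(rvest)

session = bow("http://google.com")  # establish a session & consult robots.txt
page = scrape(session)              # fetch the page, respecting the crawl delay
page |> html_elements("a") |> html_attr("href")  # e.g. extract the page's links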
Example - Rotten Tomatoes
For the movies in the Popular Streaming Movies list on rottentomatoes.com, create a data frame with each movie's title, its tomatometer score, whether the movie is fresh or rotten, and the movie's URL.
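One possible approach is sketched below - note that the URL and the selectors (.movie-title, .tomato-score) are placeholders; the real values need to be found with SelectorGadget on the live page and are likely to change as the site is redesigned.

library(rvest)

# placeholder URL - find the actual listing page on rottentomatoes.com
page = read_html("https://www.rottentomatoes.com/browse/movies_at_home/")

movies = data.frame(
  title = page |> html_elements(".movie-title")   |> html_text2(),    # placeholder selector
  score = page |> html_elements(".tomato-score")  |> html_text2(),    # placeholder selector
  url   = page |> html_elements(".movie-title a") |> html_attr("href")
)
# a movie is "fresh" when its tomatometer score is at least 60%
movies$status = ifelse(readr::parse_number(movies$score) >= 60, "fresh", "rotten")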
Exercise 1
Using the URL for each movie, now go out and grab the number of reviews, the runtime, and the number of user ratings.
If you finish that, you can then try to scrape the MPAA rating and the audience score.