One of the drags with the Twitter API is the OAuth system. For web application types it’s no big deal to-ing and fro-ing between servers. For command line things like this, it’s a pain. That pain shows up mostly on StackOverflow, in a bunch of questions about the struggles of getting R/twitteR/ROAuth working.
A good thing though is that the ROAuth package has been updated to make things easier so it’s not a completely wretched thing like it once was.
The Twitter Developer Account
Firstly you’ll have to log into dev.twitter.com and create a new application. I’m only bothered about read only access to tweets and not interested in following new folk, sending direct messages or anything like that.
Make sure that your application adheres to the following:
- There is NO callback URL required. We’re doing this manually so we don’t need redirecting anywhere.
- The “Sign In With Twitter” option is checked. This generates the PIN code we’ll need to confirm our OAuth verification in the R application.
Make a note of the ConsumerKey and ConsumerSecret settings as you’ll need those for the following R script.
Connecting to Twitter with R
Open up a text editor and type or copy/paste the following:
library(twitteR)

cred <- OAuthFactory$new(consumerKey="xxxxxxxxxxxxx",
                         consumerSecret="xxxxxxxxxxxx",
                         requestURL="http://api.twitter.com/oauth/request_token",
                         accessURL="http://api.twitter.com/oauth/access_token",
                         authURL="http://api.twitter.com/oauth/authorize")

download.file(url="http://curl.haxx.se/ca/cacert.pem", destfile="cacert.pem")

cred$handshake(cainfo="cacert.pem")
Replace the consumer key and secret values with the ones you’ve written down from your own Twitter application.
Now fire up R and run the following line of code:
source("twitterauth.r") (or whatever you called the file you've created)
This will load all the required libraries and attempt to make the call to Twitter’s OAuth system. You should see something like this:
> source("twitterauth.r")
Loading required package: ROAuth
Loading required package: RCurl
Loading required package: bitops
Loading required package: digest
Loading required package: rjson
trying URL 'http://curl.haxx.se/ca/cacert.pem'
Content type 'text/plain' length 251338 bytes (245 Kb)
opened URL
==================================================
downloaded 245 Kb
To enable the connection, please direct your web browser to:
http://api.twitter.com/oauth/authorize?oauth_token=cv88UGfPrAJnraJPqdGjeg5QJEUMk185jOUncJhDk
When complete, record the PIN given to you and provide it here:
Open up a browser and copy/paste the URL you have been given:
Click on the blue “Authorize app” button and it will then reveal the PIN code:
Back in your R session, type in the PIN code and press return. You should be returned to the R prompt (>).
Now we need to register our new credentials with the Twitter library, and R will respond to confirm.
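A minimal sketch of the registration step, assuming the cred object from the earlier handshake is still in your session (registerTwitterOAuth() is the twitteR function that stores credentials for subsequent API calls):

```r
# Register the OAuth credentials with the twitteR package so that
# subsequent calls (searchTwitter, userTimeline, etc.) are authenticated.
registerTwitterOAuth(cred)
# On success this should print:
# [1] TRUE
```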
Just to make sure everything is working, run a quick search test.
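A test along these lines should do, using twitteR's searchTwitter() function (the hashtag and count here are just examples):

```r
# Fetch a handful of recent tweets mentioning #bigdata;
# n controls how many tweets to return.
tweets <- searchTwitter("#bigdata", n=5)

# Pull the text out of each returned status object and print it.
lapply(tweets, function(t) t$getText())
```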
And you should see some output.
[[1]]
[1] "eriksmits: #BigData could generate millions of new jobs http://t.co/w1FGdxjBI9 via @FortuneMagazine, is a Java Hadoop developer key for creating value?"

[[2]]
[1] "MobileBIAus: RT @BI_Television: RT @DavidAFrankel: Big Data Collides with Market Research http://t.co/M36yXMWJbg #bigdata #analytics"

[[3]]
[1] "alibaba_aus: @PracticalEcomm discusses how the use of #BigData can combat #ecommerce fraud http://t.co/fmd9m0wWeH"

[[4]]
[1] "alankayvr: RT @ventanaresearch: It's not too late! Join us in S.F. for the 2013 Technology Leadership Summit - sessions on #BigData #Cloud & more http…"

[[5]]
[1] "DelrayMom: MT @Loyalty360: White Paper 6 #Tips for turning #BigData into key #insights, http://t.co/yuNRWxzXfZ, @SAS, #mktg #data #contentmarketing"
It’s worth saving the credentials data for later use. So in R we can save the data with:
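One way to do this, assuming the cred object is still in your workspace (the filename is arbitrary):

```r
# Serialise the credentials object to disk so it can be reused
# in later R sessions without repeating the OAuth handshake.
save(cred, file="twitter_credentials.RData")
```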
Saving this file avoids the headache of constantly going through the above process to get authorisation. Once saved, it’s just a case of loading the file and registering it as your Twitter OAuth credentials.
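In a later session that might look something like this (assuming the filename used when saving):

```r
library(twitteR)

# Restore the saved cred object into the workspace...
load("twitter_credentials.RData")

# ...and register it so twitteR calls are authenticated again.
registerTwitterOAuth(cred)
```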
So that’s a lot of the required work done to get the authentication sorted. Next time we can start pulling in the data, storing it, and doing some sentiment analysis on it.