I’ve posted three examples of Twitter hashtag datasets in the last week: one on China, one on Iran, and one on Algeria.  To build these datasets, I needed to obtain older tweets, which is slightly more difficult than simply filtering the streaming feed for your hashtag of choice.  The original code I wrote for this task is in Python and is well-parallelized.  However, that code is uncommented and looks more complicated than it really is because of the parallelization choices.

As part of my recent exercise to replace Python with R for entire tasks, I decided to rewrite this code in R tonight.  The code is pretty simple, well-commented, and consists of two functions, loadTag and downloadTag.  Both functions are included in the script below the break.
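To give a sense of the shape of the two functions without scrolling down, here is a simplified sketch rather than the full script: it assumes the old Twitter Search API endpoint and its JSON field names (results, created_at, from_user, id), which may differ in detail from what the actual script does.

```r
# A minimal sketch of downloadTag/loadTag, not the full script below the break.
# The Search API endpoint and JSON field names are assumptions.
library(RCurl)
library(rjson)

downloadTag <- function(tag, pages = 15, file = paste(tag, ".csv", sep = "")) {
  for (page in 1:pages) {
    url <- paste("http://search.twitter.com/search.json?q=%23", tag,
                 "&rpp=100&page=", page, sep = "")
    response <- fromJSON(getURL(url))
    if (length(response$results) == 0) break

    # Keep only id, date, and username (see the encoding caveat below).
    # The date is reformatted so as.POSIXct() can parse it later without a
    # format string.
    rows <- t(sapply(response$results, function(tweet) {
      parsed <- strptime(tweet$created_at,
                         format = "%a, %d %b %Y %H:%M:%S +0000", tz = "UTC")
      c(id   = as.character(tweet$id),
        date = format(parsed, "%Y-%m-%d %H:%M:%S"),
        user = tweet$from_user)
    }))
    write.table(rows, file = file, append = TRUE, sep = ",",
                col.names = FALSE, row.names = FALSE)
  }
}

loadTag <- function(tag, file = paste(tag, ".csv", sep = "")) {
  read.csv(file, header = FALSE, col.names = c("id", "date", "user"),
           stringsAsFactors = FALSE)
}
```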

There is one significant issue with the code, however.  At the moment, neither rjson nor RJSONIO seems to support Unicode data in JSON responses.  Furthermore, when character vectors of "unknown" encoding are written to file with a function like write.table, they produce output that cannot be reliably read back into R.  As a result, the code below does not retain the text of a tweet, only its id, date, and username.  The script can easily be modified to include other JSON variables by changing the anonymous function on line 70.
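As a hypothetical illustration of that change, using the sketch above rather than the actual line 70, the per-tweet extraction function might be extended like this (again, the JSON field names are assumptions about the old Search API):

```r
# Hypothetical per-tweet extraction function. Adding the tweet text, or any
# other JSON field, is just a matter of appending it to the returned vector;
# text is the field that hits the Unicode/write.table problem described above.
extractTweet <- function(tweet) {
  c(id   = as.character(tweet$id),
    date = tweet$created_at,
    user = tweet$from_user,
    text = tweet$text)
}

# e.g., inside downloadTag:  rows <- t(sapply(response$results, extractTweet))
```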

Once you’ve downloaded some data, producing figures like the ones in the posts above is only two lines away: tweets <- loadTag(tag) followed by ggplot(data=tweets, aes(x=as.POSIXct(date))) + geom_bar(aes(fill=..count..), binwidth=60*5).  Here’s the current figure of 5-minute frequencies since the 20th.  Note that the x-axis is EST, unlike the previous post, where it was UTC; pass tz="UTC" to as.POSIXct to change this.
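For completeness, here are those two lines as a block with the library call included (tag is whatever hashtag string you downloaded):

```r
library(ggplot2)

# Load the tweets previously fetched with downloadTag(tag)
tweets <- loadTag(tag)

# 5-minute bins of tweet volume; the x-axis uses the local timezone (EST here).
# Pass tz = "UTC" to as.POSIXct to plot in UTC instead.
ggplot(data = tweets, aes(x = as.POSIXct(date))) +
  geom_bar(aes(fill = ..count..), binwidth = 60 * 5)
```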