I stumbled upon the Pattern library by accident... I was absolutely delighted, copied it to my local GitHub, and it comes with (hundreds of?) examples. Add to that the materials on the developer's (CLIPS) site: "The module is free, well-documented and bundled with 50+ examples and 350+ unit tests." But in the near future I have to tackle other topics, so for now I am simply copying an article here that will need to be worked through in the most thorough manner later.
Web Scraping
Even with the best of websites, I don’t think I’ve ever encountered a scraping job that couldn’t be described as “A small and simple general model with heaps upon piles of annoying little exceptions”
- Swizec Teller http://swizec.com/blog/scraping-with-mechanize-and-beautifulsoup/swizec/5039
What is it?
A large portion of the data that we as social scientists are interested in resides on the web in some form or another. Web scraping is a method for pulling data from the structured (or not so structured!) HTML that makes up a web page. Python has numerous libraries for approaching this type of problem, many of which are incredibly powerful. If there is something you want to do, there's usually a way to accomplish it. Perhaps not easily, but it can be done.
How is it accomplished?
In general, there are three problems that you might face when undertaking a scraping task:
- You have a single page, or a set of pages, that you know of and you want to scrape.
- You have a source that generates links, e.g., RSS feeds, to various pages with the same structure.
- You have a page that contains many pages of interest that are scattered across the file system and you only have general rules for reaching these pages.
There's a library for that! (Yea, I know...)
As mentioned previously, Python has various libraries for scraping tasks. The ones I have found the most useful, and the ones used throughout this tutorial, are:
- requests
- lxml
- pattern
- Scrapy
In addition you need some method to examine the source of a webpage in a structured manner. I use Chrome which, as a WebKit browser, allows for "Inspect Element" functionality. Alternatively, there is Firebug for Firefox. I have no idea about Safari, Opera, or any other browser you wish to use.
So, let's look at some webpage source. I'm going to pick on the New York Times throughout (I thought about using the eventdata.psu.edu page...it actually has very well formatted HTML).
On to the Python
First, I feel obligated to show the philosophy of Python any time I give a talk that uses Python. So, let's take a look.
In [1]:
import this
Okay, cool. Whatever. Let's get down to some actual webscraping.
Scraping a page that you know
The easiest approach to webscraping is getting the content from a page that you know in advance. I'll go ahead and keep using that NYT page we looked at earlier. There are three basic steps to scraping a single page:
- Get (request) the page
- Parse the page content
- Select the content of interest using an XPath selector
In [1]:
import requests
import lxml.html as lh

url = 'http://www.nytimes.com/reuters/2013/01/25/world/americas/25reuters-venezuela-prison.html?partner=rss&emc=rss'

# Get (request) the page
page = requests.get(url)
# Parse the page content
doc = lh.fromstring(page.content)
# Select the content of interest using an XPath selector
text = doc.xpath('//p[@itemprop="articleBody"]')

# Glue the article-body paragraphs together into one string
finalText = str()
for par in text:
    finalText += par.text_content()

print finalText
So we now have our lovely output. This output can be manipulated in various ways, or written to an output file.
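The simplest option is to write it to a text file. A minimal sketch, assuming the finalText variable from the cell above and a hypothetical output filename:
In []:
import codecs

# Hypothetical output file; use any path you like
with codecs.open('nyt_article.txt', encoding='utf-8', mode='w') as f:
    f.write(finalText)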
Scraping generated links
Let's say you want to get a stream of news stories in an easy manner. You could visit the homepage of the NYT and work from there, or you can use an RSS feed. RSS stands for Really Simple Syndication and is, at its heart, an XML document, which allows it to be easily parsed. The fantastic library pattern allows for easy parsing of RSS feeds. Using pattern's Newsfeed() method, it is possible to parse a feed and obtain attributes of the XML document. The search() method returns an iterable composed of the individual stories. Each result has a variety of attributes such as .url, .title, .description, and more. The following code demonstrates these methods.
In [3]:
import pattern.web

url = 'http://rss.nytimes.com/services/xml/rss/nyt/World.xml'

# Parse the feed and grab the five most recent stories
results = pattern.web.Newsfeed().search(url, count=5)

# Each result has attributes such as .url, .title, and .description
print '%s \n\n %s \n\n %s \n\n' % (results[0].url, results[0].title, results[0].description)
That looks pretty good, but the description looks nastier than we would generally prefer. Luckily, pattern provides functions to get rid of the HTML in a string.
In []:
print '%s \n\n %s \n\n %s \n\n' % (results[0].url, results[0].title, pattern.web.plaintext(results[0].description))
While it's all well and good to have the title and description of a story, this is often insufficient (some descriptions are just the title, which isn't particularly helpful). To get further information on the story, it is possible to combine the single-page scraping discussed previously and the results from the RSS scrape. The following code implements a function to scrape the NYT article pages, which can be done easily since the NYT is wonderfully consistent in their HTML, and then iterates over the results, applying the scrape function to each result.
In []:
import codecs
import os

# codecs.open() does not expand '~', so expand the home directory explicitly
outputFile = codecs.open(os.path.expanduser('~/tutorialOutput.txt'), encoding='utf-8', mode='a')

def scrape(url):
    # Request and parse the article page, then pull out the body paragraphs,
    # exactly as in the single-page example above
    page = requests.get(url)
    doc = lh.fromstring(page.content)
    text = doc.xpath('//p[@itemprop="articleBody"]')
    finalText = str()
    for par in text:
        finalText += par.text_content()
    return finalText

# Apply the scrape function to each story obtained from the RSS feed
for result in results:
    outputText = scrape(result.url)
    outputFile.write(outputText)

outputFile.close()
Scraping arbitrary websites
The final approach is for a website where the pages containing the information you want are spread around in a fairly consistent manner, but there is no simple, straightforward scheme by which the pages are named.
I'll offer a brief aside here to mention that it is often possible to make slight modifications to the URL of a website and obtain many different pages. For example, a website that contains Indian parliament speeches has the URL http://164.100.47.132/LssNew/psearch/Result13.aspx?dbsl= with differing values appended after the =. Thus, using a for-loop allows for the programmatic creation of different URLs. Some sample code is below.
In []:
url = 'http://164.100.47.132/LssNew/psearch/Result13.aspx?dbsl='

# Programmatically generate a different URL for each record
for i in xrange(5175, 5973):
    newUrl = url + str(i)
    print 'Scraping: %s' % newUrl
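Each generated URL can then be handed to the same request-and-parse steps used for the NYT page. A minimal sketch, assuming requests and lxml.html (as lh) are already imported; the //p selector is only a placeholder, since you would need to inspect this site's source to find the element that actually holds the speech text:
In []:
url = 'http://164.100.47.132/LssNew/psearch/Result13.aspx?dbsl='

for i in xrange(5175, 5973):
    newUrl = url + str(i)
    print 'Scraping: %s' % newUrl
    page = requests.get(newUrl)
    doc = lh.fromstring(page.content)
    # Placeholder selector; replace with the XPath that matches the speech text
    paragraphs = doc.xpath('//p')
    speechText = ''.join(par.text_content() for par in paragraphs)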
Getting back on topic, it is often more difficult than the above to iterate over numerous webpages within a site. This is where the Scrapy library comes in. Scrapy allows for the creation of web spiders that crawl over a webpage, following any links that they find. This is often far more difficult to implement than a simple scraper since it requires the identification of rules for link following. The State Department offers a good example. I don't really have time to go into the depths of writing a Scrapy spider, but I thought I would put up some code to illustrate what it looks like.
In []:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item
from BeautifulSoup import BeautifulSoup
import re
import codecs

class MySpider(CrawlSpider):
    name = 'statespider' #name is a name

    #defines the URL that the spider should start on. adjust the year.
    start_urls = ['http://www.state.gov/r/pa/prs/dpb/2010/index.htm',
                  ]

    #defines the rules for the spider
    rules = (
        #allows only links within the navigation panel that have /year/ in them.
        Rule(SgmlLinkExtractor(allow=('/2010/',), restrict_xpaths=('//*[@id="local-nav"]',))),
        #follows links within the calendar on the index page for the individual years, while denying any links with /video/ in them
        Rule(SgmlLinkExtractor(restrict_xpaths=('//*[@id="dpb-calendar"]',), deny=('/video/',)), callback='parse_item'),
    )

    def parse_item(self, response):
        self.log('Hi, this is an item page! %s' % response.url) #prints the response.url out in the terminal to help with debugging

        #Insert code to scrape page content

        #opens the file defined above and writes 'texts' using utf-8
        #(filename and texts are placeholders that the scraping code would define)
        with codecs.open(filename, 'w', encoding='utf-8') as output:
            output.write(texts)
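For illustration only, here is a minimal sketch of what parse_item might look like with the missing scraping step filled in; it would sit inside MySpider in place of the stub above. The //p/text() selector and the filename scheme are assumptions made for this sketch, not the actual structure of the State Department pages, so you would need to inspect the page source and adjust both.
In []:
    def parse_item(self, response):
        self.log('Hi, this is an item page! %s' % response.url)

        # Assumed selector: pull all paragraph text from the briefing page
        hxs = HtmlXPathSelector(response)
        texts = '\n'.join(hxs.select('//p/text()').extract())

        # Hypothetical filename derived from the URL so each briefing gets its own file
        filename = response.url.split('/')[-1] + '.txt'

        with codecs.open(filename, 'w', encoding='utf-8') as output:
            output.write(texts)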
The Pitfalls of Webscraping
Web scraping is much, much, much more of an art than a science. It is often non-trivial to identify the XPath selector that will get you what you want. Also, some web programmers can't seem to decide how they want to structure the pages they write, so they just change the HTML every few pages. Notice that for the NYT example, if articleBody gets changed to articleBody1, everything breaks. There are ways around this that are often convoluted, messy, and hackish. Usually, however, where there is a will there is a way... brief pause to demonstrate the lengths this can go to.
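One illustrative (and admittedly hackish) way around it is to loosen the XPath selector so it matches any itemprop value that merely starts with articleBody. This is just a sketch of the idea; it obviously does not survive every possible redesign:
In []:
# Matches itemprop="articleBody", "articleBody1", "articleBody2", and so on
text = doc.xpath('//p[starts-with(@itemprop, "articleBody")]')
finalText = str()
for par in text:
    finalText += par.text_content()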
PITF Human Atrocities
As a wrap-up, I thought I would show the workflow I have been using to perform real-time scraping, from various news sites, of stories pertaining to human atrocities. This is illustrative both of web scraping and of the issues that can accompany programming.
The general flow of the scraper is:
RSS feed -> identify relevant stories -> scrape story -> place results in mongoDB -> repeat every hour
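A rough sketch of that loop is below, reusing the scrape function defined earlier. The keyword filter, the MongoDB connection details, and the collection name are all assumptions made purely for illustration; they are not the actual production scraper.
In []:
import time
import pattern.web
import pymongo

# Hypothetical MongoDB connection and collection
connection = pymongo.Connection('localhost', 27017)
collection = connection['scraperDB']['stories']

# Hypothetical keywords used to identify relevant stories
keywords = ['massacre', 'atrocity', 'genocide']

feedUrl = 'http://rss.nytimes.com/services/xml/rss/nyt/World.xml'

while True:
    # RSS feed -> identify relevant stories
    for result in pattern.web.Newsfeed().search(feedUrl, count=50):
        description = pattern.web.plaintext(result.description)
        if any(word in description.lower() for word in keywords):
            # scrape story -> place results in mongoDB
            story = {'url': result.url,
                     'title': result.title,
                     'text': scrape(result.url)}
            collection.insert(story)
    # repeat every hour
    time.sleep(3600)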