The pattern.web module has tools for online data mining: asynchronous requests, a uniform API for web services (Google, Bing, Twitter, Facebook, Wikipedia, Wiktionary, Flickr, RSS), an HTML DOM parser, HTML tag stripping functions, a web crawler, webmail, caching, and Unicode support.
It can be used by itself or with other pattern modules: web | db | en | search | vector | graph.
Documentation
URLs
Asynchronous requests
Search engine + web services (google, bing, twitter, facebook, wikipedia, flickr)
Web sort
HTML to plaintext
HTML DOM parser
PDF parser
Crawler
E-mail
Locale
Cache
URLs
The URL object is a subclass of Python’s urllib2.Request that can be used to connect to a web address. The URL.download() method can be used to retrieve the content (e.g., HTML source code). The constructor’s method parameter defines how query data is encoded:
GET: query data is encoded in the URL string (usually for retrieving data).
POST: query data is encoded in the message body (for posting data).
url = URL(string='', method=GET, query={})
url.string          # u'http://user:pw@domain.com:30/path/page?p=1#anchor'
url.parts           # Dictionary of attributes:
url.protocol        # u'http'
url.username        # u'user'
url.password        # u'pw'
url.domain          # u'domain.com'
url.port            # 30
url.path            # [u'path']
url.page            # u'page'
url.query           # {u'p': 1}
url.querystring     # u'p=1'
url.anchor          # u'anchor'
url.exists          # False if URL.open() raises a HTTP404NotFound.
url.redirect        # Actual URL after redirection, or None.
url.headers         # Dictionary of HTTP response headers.
url.mimetype        # Document MIME-type.
url.open(timeout=10, proxy=None)
url.download(timeout=10, cached=True, throttle=0, proxy=None, unicode=False)
url.copy()
URL() expects a string that starts with a valid protocol (e.g. http://).
URL.open() returns a connection from which data can be retrieved with connection.read().
URL.download() caches and returns the retrieved data.
It raises a URLTimeout if the download time exceeds the given timeout.
It sleeps for throttle seconds after the download is complete.
A proxy server can be given as a (host, protocol)-tuple, e.g., ('proxy.com', 'https').
With unicode=True, the data is returned as a Unicode string. By default it is False because the data can be binary (e.g., JPEG, ZIP), but unicode=True is advised for HTML.
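For example, a URL's parsed attributes can be inspected without opening a connection (a quick sketch using the address from the reference above):
from pattern.web import URL
url = URL('http://user:pw@domain.com:30/path/page?p=1#anchor')
print url.domain  # u'domain.com'
print url.port    # 30
print url.page    # u'page'
print url.query   # {u'p': 1}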
The example below downloads an image.
The extension() helper function parses the file extension from a file name:
from pattern.web import URL, extension
url = URL('http://www.clips.ua.ac.be/media/pattern_schema.gif')
f = open('test' + extension(url.page), 'wb') # save as test.gif
f.write(url.download())
f.close()
URL downloads
The download() function takes a URL string, calls URL.download() and returns the retrieved data. It takes the same optional parameters as URL.download(). This saves you a line of code.
from pattern.web import download
html = download('http://www.clips.ua.ac.be/', unicode=True)
URL mime-type
The URL.mimetype property can be used to check the type of document at the given URL. This is more reliable than sniffing the filename extension (which may be omitted).
from pattern.web import URL, MIMETYPE_IMAGE
url = URL('http://www.clips.ua.ac.be/media/pattern_schema.gif')
print url.mimetype in MIMETYPE_IMAGE

True
Global              Value
MIMETYPE_WEBPAGE    ['text/html']
MIMETYPE_STYLESHEET ['text/css']
MIMETYPE_PLAINTEXT  ['text/plain']
MIMETYPE_PDF        ['application/pdf']
MIMETYPE_NEWSFEED   ['application/rss+xml', 'application/atom+xml']
MIMETYPE_IMAGE      ['image/gif', 'image/jpeg', 'image/png']
MIMETYPE_AUDIO      ['audio/mpeg', 'audio/mp4', 'audio/x-wav']
MIMETYPE_VIDEO      ['video/mpeg', 'video/mp4', 'video/avi', 'video/quicktime']
MIMETYPE_ARCHIVE    ['application/x-tar', 'application/zip']
MIMETYPE_SCRIPT     ['application/javascript']
URL exceptions
The URL.open() and URL.download() methods raise a URLError if an error occurs (e.g., no internet connection, server is down). URLError has a number of subclasses:
Exception                   Description
URLError                    URL has errors (e.g., a missing t in htp://).
URLTimeout                  URL takes too long to load.
HTTPError                   URL causes an error on the contacted server.
HTTP301Redirect             URL causes too many redirects.
HTTP400BadRequest           URL contains an invalid request.
HTTP401Authentication       URL requires a login and a password.
HTTP403Forbidden            URL is not accessible (check user-agent).
HTTP404NotFound             URL doesn't exist.
HTTP500InternalServerError  URL causes an error (bug?) on the server.
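For example, these exceptions can be caught around URL.download() (a minimal sketch; the address is illustrative):
from pattern.web import URL, URLTimeout, HTTP404NotFound
try:
    html = URL('http://www.clips.ua.ac.be/does-not-exist').download(timeout=10)
except HTTP404NotFound:
    print 'page not found'
except URLTimeout:
    print 'server took too long'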
User-agent and referrer
The URL.open() and URL.download() methods have two optional parameters user_agent and referrer, which can be used to identify the application accessing the web. Some websites include code to block out any application except browsers. By setting a user_agent you can make the application appear as a browser. This is called spoofing and it is not encouraged, but sometimes necessary.
For example, to pose as a Firefox browser:
URL('http://www.clips.ua.ac.be').download(user_agent='Mozilla/5.0')
Find URLs
The find_urls() function can be used to parse URLs from a text string. It will retrieve a list of links starting with http://, https://, www. and domain names ending with .com, .org or .net. It will detect and strip leading punctuation (open parens) and trailing punctuation (period, comma, close parens). Similarly, the find_email() function can be used to parse e-mail addresses from a string.
from pattern.web import find_urls
print find_urls('Visit our website (www.clips.ua.ac.be)', unique=True)

['www.clips.ua.ac.be']
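find_email() works the same way; a brief sketch with a made-up address:
from pattern.web import find_email
print find_email('Questions? Mail info@example.com for details.') # e.g., [u'info@example.com']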
Asynchronous requests
The asynchronous() function can be used to execute a function “in the background” (i.e., threaded). It takes the function, its arguments and optional keyword arguments. It returns an AsynchronousRequest object that contains the function’s return value (when done). The main program does not halt in the meantime.
request = asynchronous(function, *args, **kwargs)
request.done # True when the function is done.
request.elapsed # Running time, in seconds.
request.value # Function return value when done (or None).
request.error # Function Exception (or None).
request.now() # Waits for function and returns its value.
The example below executes a Google query without halting the main program. Instead, it displays a “busy” message (e.g., a progress bar updated in the application’s event loop) until request.done.
from pattern.web import asynchronous, time, Google
request = asynchronous(Google().search, 'holy grail', timeout=4)
while not request.done:
    time.sleep(0.1)
    print 'busy...'
print request.value
There is no way to stop a thread. You are responsible for ensuring that the given function doesn’t hang.
Search engine + web services
The SearchEngine object has a number of subclasses that can be used to query different web services (e.g., Google, Wikipedia). SearchEngine.search() returns a list of Result objects for a given query string – similar to a search field and a results page in a browser.
engine = SearchEngine(license=None, throttle=1.0, language=None)
engine.license      # Service license key.
engine.throttle     # Time between requests (being nice to server).
engine.language     # Restriction for Result.language (e.g., 'en').
engine.search(query,
    type = SEARCH,  # SEARCH | IMAGE | NEWS
   start = 1,       # Starting page.
   count = 10,      # Results per page.
    size = None,    # Image size: TINY | SMALL | MEDIUM | LARGE
  cached = True)    # Cache locally?
Note: SearchEngine.search() takes the same optional parameters as URL.download().
Google, Bing, Twitter, Facebook, Wikipedia, Flickr
SearchEngine is subclassed by Google, Yahoo, Bing, DuckDuckGo, Twitter, Facebook, Wikipedia, Wiktionary, Wikia, DBPedia, Flickr and Newsfeed. The constructors take the same parameters:
engine = Google(license=None, throttle=0.5, language=None)
engine = Bing(license=None, throttle=0.5, language=None)
engine = Twitter(license=None, throttle=0.5, language=None)
engine = Facebook(license=None, throttle=1.0, language='en')
engine = Wikipedia(license=None, throttle=5.0, language=None)
engine = Flickr(license=None, throttle=5.0, language=None)
Each search engine has different settings for the search() method. For example, Twitter.search() returns up to 3000 results for a given query (30 queries with 100 results each, or 300 queries with 10 results each). It has a limit of 150 queries per 15 minutes. Each call to search() counts as one query.
Engine       type                       start          count    limit      throttle
Google       SEARCH ¹                   1-100/count    1-10     paid       0.5
Bing         SEARCH | NEWS | IMAGE ¹ ²  1-1000/count   1-50     paid       0.5
Yahoo        SEARCH | NEWS | IMAGE ¹ ³  1-1000/count   1-50     paid       0.5
DuckDuckGo   SEARCH                     1              -        -          0.5
Twitter      SEARCH                     1-3000/count   1-100    600/hour   0.5
Facebook     SEARCH | NEWS              1              1-100    500/hour   1.0
Wikipedia    SEARCH                     1              1        -          5.0
Wiktionary   SEARCH                     1              1        -          5.0
Wikia        SEARCH                     1              1        -          5.0
DBPedia      SPARQL                     1+             1-1000   10/sec     1.0
Flickr       IMAGE                      1+             1-500    -          5.0
Newsfeed     NEWS                       1              1+       ?          1.0
¹ Google, Bing and Yahoo are paid services – see further how to obtain a license key.
² Bing.search(type=NEWS) has a count of 1-15.
³ Yahoo.search(type=IMAGE) has a count of 1-35.
Web service license key
Some services require a license key. They may work without one, but this implies that you share a public license key (and query limit) with other users of the pattern.web module. If the query limit is exceeded, SearchEngine.search() raises a SearchEngineLimitError.
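For example, a minimal sketch that falls back gracefully when the shared query limit is exhausted:
from pattern.web import Bing, SearchEngineLimitError
try:
    results = Bing(license=None).search('holy grail')
except SearchEngineLimitError:
    print 'query limit exceeded; supply your own license key'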
Google is a paid service ($1 for 200 queries), with 100 free queries per day. When you obtain a license key (follow the link below), activate "Custom Search API" and "Translate API" under "Services" and look up the key under "API Access".
Bing is a paid service ($1 for 500 queries), with 5,000 free queries per month.
Yahoo is a paid service ($1 for 1250 queries) that requires an OAuth key + secret, which can be passed as a tuple: Yahoo(license=(key, secret)).
Obtain a license key: Google, Bing, Yahoo, Twitter, Facebook, Flickr.
Web service request throttle
A SearchEngine.search() request takes a minimum amount of time to complete, as outlined in the table above. This is intended as etiquette towards the server providing the service. Raise the throttle value if you plan to run multiple queries in batch. Wikipedia requests are especially intensive. If you plan to mine a lot of data from Wikipedia, download the Wikipedia database instead.
Search engine results
SearchEngine.search() returns a list of Result objects. The returned list has an additional total property, which is the total number of results available for the given query (not just the number retrieved). Each Result is a dictionary with extra properties:
result = Result(url)
result.url # URL of content associated with the given query.
result.title # Content title.
result.text # Content summary.
result.language # Content language.
result.author # For news items and images.
result.date # For news items.
result.download(timeout=10, cached=True, proxy=None)
Result.download() takes the same optional parameters as URL.download().
The attributes (e.g., result.text) are Unicode strings.
For example:
from pattern.web import Bing, SEARCH, plaintext
engine = Bing(license=None) # Enter your license key.
for i in range(1, 5):
    for result in engine.search('holy handgrenade', type=SEARCH, start=i):
        print repr(plaintext(result.text))

u"The Holy Hand Grenade of Antioch is a fictional weapon from ..."
u'Once the number three, being the third number, be reached, then ...'
Since SearchEngine.search() takes the same optional parameters as URL.download() it is easy to disable local caching, set a proxy server, a throttle (minimum time) or a timeout (maximum time).
from pattern.web import Google
engine = Google(license=None) # Enter your license key.
for result in engine.search('tim', cached=False, proxy=('proxy.com', 'https')):
    print result.url
    print result.text
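Result.download() retrieves the full document behind a result (it takes the same parameters as URL.download()); a brief sketch:
from pattern.web import Bing, plaintext
engine = Bing(license=None)
for result in engine.search('holy handgrenade', count=1):
    html = result.download(timeout=10, cached=True) # full HTML of the result page
    print plaintext(html)[:100]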
Image search
For Flickr, Bing and Yahoo, image URLs retrieved with search(type=IMAGE) can be filtered by setting the size to TINY, SMALL, MEDIUM, LARGE or None (any size). Images may be subject to copyright.
For Flickr, use search(copyright=False) to retrieve results with no copyright restrictions (either public domain or Creative Commons by-sa).
For Twitter, each result has a Result.picture property with the URL to the user’s profile picture.
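For example, a sketch that prints the URLs of a few small, copyright-free images from Flickr (the query and count are illustrative):
from pattern.web import Flickr, IMAGE, SMALL
engine = Flickr(license=None)
for result in engine.search('kitten', type=IMAGE, size=SMALL, copyright=False, count=3):
    print result.url # URL of the image; result.download() retrieves the image data.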
Google translate
Google.translate() returns the translated string in the given language.
Google.identify() returns a (language code, confidence)-tuple for a given string.
from pattern.web import Google
s = "C'est un lapin, lapin de bois. Quoi? Un cadeau."
g = Google()
print g.translate(s, input='fr', output='en', cached=False)
print g.identify(s)

u"It's a rabbit, wood. What? A gift."
(u'fr', 0.76)
Remember to activate the Translate API in the Google API Console. Max. 1,000 characters per request.
Twitter search
The start parameter of Twitter.search() takes an int (the starting page, as with the other search engines) or a tweet.id. If you create two Twitter objects, their result pages for a given query may not correspond, since new tweets become available more quickly than we can query pages. The best way is to pass the last seen tweet id:
from pattern.web import Twitter
t = Twitter()
i = None
for j in range(3):
    for tweet in t.search('win', start=i, count=10):
        print tweet.text
        i = tweet.id
Twitter streams
Twitter.stream() returns an endless, live stream of Result objects. A Stream is a Python list that accumulates each time Stream.update() is called:
from pattern.web import Twitter, time
s = Twitter().stream('#fail')
for i in range(10):
    time.sleep(1)
    s.update(bytes=1024)
    print s[-1].text if s else ''
To clear the accumulated list, call Stream.clear().
Twitter trends
Twitter.trends() returns a list of 10 “trending topics”:
from pattern.web import Twitter
print Twitter().trends(cached=False)

[u'#neverunderstood', u'Not Top 10', ...]
Wikipedia articles
Wikipedia.search() returns a single WikipediaArticle for the given (case-sensitive) query, which is the title of an article. Wikipedia.index() returns an iterator over all article titles on Wikipedia. The language parameter of the Wikipedia() constructor defines the language of the returned articles (by default it is 'en', which corresponds to en.wikipedia.org).
article = WikipediaArticle(title='', source='', links=[])
article.source              # Article HTML source.
article.string              # Article plaintext unicode string.
article.title               # Article title.
article.sections            # Article sections.
article.links               # List of titles of linked articles.
article.external            # List of external links.
article.categories          # List of categories.
article.media               # List of linked media (images, sounds, ...)
article.languages           # Dictionary of (language, article)-items.
article.language            # Article language (i.e., 'en').
article.disambiguation      # True if it is a disambiguation page.
article.plaintext(**kwargs) # See plaintext() for parameters overview.
article.download(media, **kwargs)
WikipediaArticle.plaintext() is similar to plaintext(), with special attention for MediaWiki markup. It strips metadata, infoboxes, table of contents, annotations, thumbnails and disambiguation links.
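A short sketch that retrieves an article and inspects a few of its properties (assuming an internet connection):
from pattern.web import Wikipedia
article = Wikipedia(language='en').search('Monty Python')
print article.title             # u'Monty Python'
print article.links[:3]         # titles of the first three linked articles
print article.plaintext()[:100] # first 100 characters of plain text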
Wikipedia article sections
WikipediaArticle.sections is a list of WikipediaSection objects. Each section has a title and a number of paragraphs that belong together.
section = WikipediaSection(article, title='', start=0, stop=0, level=1)
section.article # WikipediaArticle parent.
section.parent # WikipediaSection this section is part of.
section.children # WikipediaSections belonging to this section.
section.title # Section title.
section.source # Section HTML source.
section.string # Section plaintext unicode string.
section.content # Section string minus title.
section.level # Section nested depth (from 0).
section.links # List of titles of linked articles.
section.tables # List of WikipediaTable objects.
The following example downloads a Wikipedia article and prints the title of each section, indented according to the section level:
from pattern.web import Wikipedia
article = Wikipedia().search('cat')
for section in article.sections:
    print repr(' ' * section.level + section.title)

u'Cat'
u' Nomenclature and etymology'
u' Taxonomy and evolution'
u'  Genetics'
u' Anatomy'
u' Behavior'
u'  Sociability'
u'  Grooming'
u'  Fighting'
...
Wikipedia article tables
WikipediaSection.tables is a list of WikipediaTable objects. Each table has a title, headers and rows.
table = WikipediaTable(section, title='', headers=[], rows=[], source='')
table.section # WikipediaSection parent.
table.source # Table HTML source.
table.title # Table title.
table.headers # List of table column headers.
table.rows # List of table rows, each a list of column values.
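For example, a sketch that prints the headers of each table in an article:
from pattern.web import Wikipedia
article = Wikipedia().search('cat')
for section in article.sections:
    for table in section.tables:
        print repr(table.title)
        print table.headers        # list of column headers
        print len(table.rows), 'rows'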
Wikia
Wikia is a free hosting service for thousands of wikis. Wikipedia, Wiktionary and Wikia all inherit the MediaWiki base class, so Wikia has the same methods and properties as Wikipedia. Its constructor takes the name of a domain on Wikia. Note the use of Wikia.index(), which returns an iterator over all available article titles:
from pattern.web import Wikia
w = Wikia(domain='montypython')
for i, title in enumerate(w.index(start='a', throttle=1.0, cached=True)):
    if i >= 3:
        break
    article = w.search(title)
    print repr(article.title)

u'Albatross'
u'Always Look on the Bright Side of Life'
u'And Now for Something Completely Different'
DBPedia
DBPedia is a database of structured information mined from Wikipedia and stored as (subject, predicate, object)-triples (e.g., cat IS-A animal). DBPedia can be queried with SPARQL, where subject, predicate and/or object can be given as ?variables. The Result objects in the list returned from DBPedia.search() have the variables as additional properties:
from pattern.web import DBPedia
sparql = '\n'.join((
    'prefix dbo: <http://dbpedia.org/ontology/>',
    'select ?person ?place where {',
    '    ?person a dbo:President.',
    '    ?person dbo:birthPlace ?place.',
    '}'
))
for r in DBPedia().search(sparql, start=1, count=10):
    print '%s (%s)' % (r.person.name, r.place.name)

Álvaro Arzú (Guatemala City)
Árpád Göncz (Budapest)
...
Facebook posts, comments & likes
Facebook.search(query, type=SEARCH) returns a list of Result objects, where each result is a (publicly available) post that contains (or which comments contain) the given query.
Facebook.search(id, type=NEWS) returns posts from a given user profile. You need to supply a personal license key. You can get a key when you authorize Pattern to search Facebook in your name.
Facebook.search(id, type=COMMENTS) retrieves comments for a given post’s Result.id. You can also pass the id of a post or a comment to Facebook.search(id, type=LIKES) to retrieve users that liked it.
from pattern.web import Facebook, NEWS, COMMENTS, LIKES
fb = Facebook(license='your key')
me = fb.profile(id=None) # user info dict
for post in fb.search(me['id'], type=NEWS, count=100):
    print repr(post.id)
    print repr(post.text)
    print repr(post.url)
    if post.comments > 0:
        print '%i comments' % post.comments
        print [(r.text, r.author) for r in fb.search(post.id, type=COMMENTS)]
    if post.likes > 0:
        print '%i likes' % post.likes
        print [r.author for r in fb.search(post.id, type=LIKES)]

u'530415277_10151455896030278'
u'Tom De Smedt likes CLiPS Research Center'
u'http://www.facebook.com/CLiPS.UA'
1 likes
[(u'485942414773810', u'CLiPS Research Center')]
...
The maximum count for COMMENTS and LIKES is 1000 (by default, 10).
RSS + Atom newsfeeds
The Newsfeed object is a wrapper for Mark Pilgrim’s Universal Feed Parser. Newsfeed.search() takes the URL of an RSS or Atom news feed and returns a list of Result objects.
from pattern.web import Newsfeed
NATURE = 'http://www.nature.com/nature/current_issue/rss/index.html'
for result in Newsfeed().search(NATURE)[:5]:
    print repr(result.title)

u'Biopiracy rules should not block biological control'
u'Animal behaviour: Same-shaped shoals'
u'Genetics: Fast disease factor'
u'Biomimetics: Material monitors mugginess'
u'Cell biology: Lung lipid hurts breathing'
Newsfeed.search() has an optional parameter tags, which is a list of custom tags to parse:
for result in Newsfeed().search(NATURE, tags=['dc:identifier']):
    print result.dc_identifier
Web sort
The return value of SearchEngine.search() has a total property which can be used to sort queries by “crowdvoting”. The sort() function sorts a given list of terms according to their total result count, and returns a list of (percentage, term)-tuples.
sort(
    terms = [],       # List of search terms.
  context = '',       # Term used for sorting.
  service = GOOGLE,   # GOOGLE | BING | YAHOO | FLICKR
  license = None,     # Service license key.
   strict = True,     # Wrap query in quotes?
   prefix = False,    # context + term or term + context?
   cached = True)
When a context is defined, the function sorts by relevance to the context, e.g., sort(["black", "white"], context="Darth Vader") yields black as the best candidate, because "black Darth Vader" is more common in search results.
Now let’s see who is more dangerous:
from pattern.web import sort
results = sort(terms=[
    'arnold schwarzenegger',
    'chuck norris',
    'dolph lundgren',
    'steven seagal',
    'sylvester stallone',
    'mickey mouse'], context='dangerous', prefix=True)
for weight, term in results:
    print "%.2f" % (weight * 100) + '%', term

84.34% 'dangerous mickey mouse'
 9.24% 'dangerous chuck norris'
 2.41% 'dangerous sylvester stallone'
 2.01% 'dangerous arnold schwarzenegger'
 1.61% 'dangerous steven seagal'
 0.40% 'dangerous dolph lundgren'
HTML to plaintext
The HTML source code of a web page can be retrieved with URL.download(). HTML is a markup language that uses tags to define text formatting. For example, <b>hello</b> displays hello in bold. For many tasks we may want to strip the formatting so we can analyze (e.g., parse or count) the plain text.
The plaintext() function removes HTML formatting from a string.
plaintext(html, keep=[], replace=blocks, linebreaks=2, indentation=False)
It performs the following steps to clean up the given string:
Strip javascript: remove all <script> elements.
Strip CSS: remove all <style> elements.
Strip comments: remove all <!-- ... --> comments.
Strip forms: remove all <form> elements.
Strip tags: remove all HTML tags.
Decode entities: replace &lt; with < (for example).
Collapse spaces: replace consecutive spaces with a single space.
Collapse linebreaks: replace consecutive linebreaks with a single linebreak.
Collapse tabs: replace consecutive tabs with a single space, optionally indentation (i.e., tabs at the start of a line) can be preserved.
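A small sketch of these steps applied to an inline string:
from pattern.web import plaintext
html = '<div><p>Hello  <b>world</b>! &lt;3</p><!-- a comment --></div>'
print plaintext(html) # tags and comments stripped, &lt; decoded, whitespace collapsed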
plaintext parameters
The keep parameter is a list of tags to retain. By default, attributes are stripped, e.g., <a href="..."> becomes <a>. To retain certain attributes, a dictionary of tag → [attributes] items can be given instead (e.g., {'a': ['href']}, as in the example below).
The replace parameter defines how HTML elements are replaced with other characters to improve plain text layout. It is a dictionary of tag → (before, after) items. By default, it replaces block elements (i.e., <h1>, <p>, <div>, <table>, ...) with two linebreaks, <td> with one tab, and <li> with an asterisk (*) and a linebreak.
The linebreaks parameter defines the maximum number of consecutive linebreaks to retain.
The indentation parameter defines whether or not to retain tab indentation.
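For example, a custom replace rule can mark up list items (a sketch; whether custom rules merge with or override the defaults for other tags is not covered here):
from pattern.web import plaintext
html = '<ul><li>pizza</li><li>fries</li></ul>'
print plaintext(html, replace={'li': ('* ', '\n')}) # roughly: '* pizza' and '* fries' on separate lines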
The following example downloads an HTML document and keeps a minimal amount of formatting (headings, bold, links).
from pattern.web import URL, plaintext
s = URL('http://www.clips.ua.ac.be').download()
s = plaintext(s, keep={'h1': [], 'h2': [], 'strong': [], 'a': ['href']})
print s
plaintext = strip + decode + collapse
The different steps in plaintext() are available as separate functions:
decode_utf8(string)             # Byte string to Unicode string.
encode_utf8(string)             # Unicode string to byte string.
strip_tags(html, keep=[], replace=blocks)  # Non-trivial, using SGML parser.
strip_between(a, b, string)     # Remove anything between (and including) a and b.
strip_javascript(html)          # Strips between '<script>' and '</script>'.
strip_inline_css(html)          # Strips between '<style>' and '</style>'.
strip_comments(html)            # Strips between '<!--' and '-->'.
strip_forms(html)               # Strips between '<form>' and '</form>'.
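For example (a brief sketch):
from pattern.web import strip_tags, strip_between
print strip_tags('<b>hello</b> <i>world</i>')              # hello world
print strip_between('<!--', '-->', 'a <!-- comment --> b') # a  b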