http://www.qiushibaike.com/
This is the site to be crawled; the task is to scrape each author's name, the joke text, and the number of comments.
http://www.qiushibaike.com/8hr/page/3/?s=4940212
http://www.qiushibaike.com/8hr/page/4/?s=4940212
Comparing the URLs of successive pages, the only thing that changes is the number after page, so the url can be written as:
url = 'http://www.qiushibaike.com/8hr/page/'+str(page)
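As a quick sanity check, the URL construction can be sketched as below (the helper name page_url is my own, not part of the original script):

```python
def page_url(page):
    # only the page number varies between listing pages
    return 'http://www.qiushibaike.com/8hr/page/' + str(page)

# pages 1 through 35, matching the crawl loop in the full script
urls = [page_url(p) for p in range(1, 36)]
```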
Looking at the page source, the fields we need are not confined to one small region but are scattered across the markup, so regular expressions are a natural fit for pulling them out. To let the pattern match across newlines, the re.S flag is required. In practice the captured content still needs some cleanup, such as stripping leftover tags. The complete code follows:
#coding=utf-8
import re
import urllib2

# crawl pages 1-35 of the joke listing
for page in range(1, 36):
    url = 'http://www.qiushibaike.com/8hr/page/' + str(page)
    user_agent = 'Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:49.0)'
    headers = {'User-Agent': user_agent}
    qq = open("QSBK.txt", "a+")  # append results across pages
    try:
        request = urllib2.Request(url, headers=headers)
        response = urllib2.urlopen(request)
        content = response.read().decode('utf-8')
        # re.S lets '.' match newlines, so one pattern can span several tags
        pattern = re.compile(r'<div class="author clearfix">.*?href.*?<img src.*?title=.*?'
                             r'<h2>(.*?)</h2>.*?<div class="content">(.*?)</div>'
                             r'.*?<i class="number">(.*?)</i>', re.S)
        for item in pattern.finditer(content):
            # strip leftover markup from the joke body
            i = re.sub(r'<span>|</span>|<br/>', '', item.group(2))
            a = item.group(1).replace('\n', '')  # author name
            b = i.replace('\n', '')              # joke text
            c = item.group(3).replace('\n', '')  # comment count
            print a + '|' + b + '|' + c
            qq.write(a.encode('utf-8') + '|' + b.encode('utf-8') + '|' + c.encode('utf-8') + '\n')
    except urllib2.URLError, e:
        if hasattr(e, "code"):
            print e.code
        if hasattr(e, "reason"):
            print e.reason
    finally:
        qq.close()  # close the file even if the request failed
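To illustrate why re.S matters here, the same pattern can be run against a small made-up fragment that mimics the page structure (the sample HTML below is an assumption for demonstration, not the real page source):

```python
import re

# made-up fragment mirroring the structure the pattern expects (assumption)
sample = '''<div class="author clearfix">
<a href="/users/1"><img src="x.jpg" title="demo"/></a>
<h2>SomeUser</h2>
</div>
<div class="content"><span>line one<br/>line two</span></div>
<i class="number">12</i>'''

pattern = re.compile(r'<div class="author clearfix">.*?href.*?<img src.*?title=.*?'
                     r'<h2>(.*?)</h2>.*?<div class="content">(.*?)</div>'
                     r'.*?<i class="number">(.*?)</i>', re.S)

m = pattern.search(sample)          # with re.S, '.' also matches the newlines
text = re.sub(r'<span>|</span>|<br/>', '', m.group(2))  # same cleanup as the script
```

Without re.S the search returns no match at all, because the default `.` stops at each newline between the tags.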