An example of faking random request headers for a crawler in Pyspider
Pyspider uses the tornado library for HTTP requests, and each request can carry a range of parameters: a connect timeout, a data-transfer timeout, request headers, and so on. In the stock pyspider framework, however, crawler-wide parameters can only be set through the crawl_config Python dictionary (shown below); the framework converts this dictionary into task data when it issues the HTTP request. The drawback of this approach is that it cannot vary the request header from one request to the next.
crawl_config = {
    "user_agent": "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36",
    "timeout": 120,
    "connect_timeout": 60,
    "retries": 5,
    "fetch_type": 'js',
    "auto_recrawl": True,
}
Here is a way to give each request a random request header:
1. Write the following script, place it in pyspider's libs folder, and name it header_switch.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Created on 2017-10-18 11:52:26

import random


class HeadersSelector(object):
    """
    Each header deliberately omits the Host and Cookie fields;
    those are filled in per request.
    """
    headers_1 = {
        "Proxy-Connection": "keep-alive",
        "Pragma": "no-cache",
        "Cache-Control": "no-cache",
        "User-Agent": "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
        "DNT": "1",
        "Accept-Encoding": "gzip, deflate, sdch",
        "Accept-Language": "zh-CN,zh;q=0.8,en-US;q=0.6,en;q=0.4",
        "Referer": "https://www.baidu.com/s?wd=%BC%96%E7%A0%81&rsv_spt=1&rsv_iqid=0x9fcbc99a0000b5d7&issp=1&f=8&rsv_bp=1&rsv_idx=2&ie=utf-8&rqlang=cn&tn=baiduhome_pg&rsv_enter=0&oq=If-None-Match&inputT=7282&rsv_t",
        "Accept-Charset": "gb2312,gbk;q=0.7,utf-8;q=0.7,*;q=0.7",
    }  # a browser header found online
    headers_2 = {
        "Proxy-Connection": "keep-alive",
        "Pragma": "no-cache",
        "Cache-Control": "no-cache",
        "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.221 Safari/537.36 SE 2.X MetaSr 1.0",
        "Accept": "image/gif,image/x-xbitmap,image/jpeg,application/x-shockwave-flash,application/vnd.ms-excel,application/vnd.ms-powerpoint,application/msword,*/*",
        "DNT": "1",
        "Referer": "https://www.baidu.com/link?url=c-FMHf06-ZPhoRM4tWduhraKXhnSm_RzjXZ-ZTFnPAvZN",
        "Accept-Encoding": "gzip, deflate, sdch",
        "Accept-Language": "zh-CN,zh;q=0.8,en-US;q=0.6,en;q=0.4",
    }  # a browser on Windows 7
    headers_3 = {
        "Proxy-Connection": "keep-alive",
        "Pragma": "no-cache",
        "Cache-Control": "no-cache",
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0",
        "Accept": "image/x-xbitmap,image/jpeg,application/x-shockwave-flash,application/vnd.ms-excel,application/vnd.ms-powerpoint,application/msword,*/*",
        "DNT": "1",
        "Referer": "https://www.baidu.com/s?wd=http%B4%20Pragma&rsf=1&rsp=4&f=1&oq=Pragma&tn=baiduhome_pg&ie=utf-8&usm=3&rsv_idx=2&rsv_pq=e9bd5e5000010",
        "Accept-Encoding": "gzip, deflate, sdch",
        "Accept-Language": "zh-CN,zh;q=0.8,en-US;q=0.7,en;q=0.6",
    }  # Firefox on Linux
    headers_4 = {
        "Proxy-Connection": "keep-alive",
        "Pragma": "no-cache",
        "Cache-Control": "no-cache",
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:55.0) Gecko/20100101 Firefox/55.0",
        "Accept": "*/*",
        "DNT": "1",
        "Referer": "https://www.baidu.com/link?url=c-FMHf06-ZPhoRM4tWduhraKXhnSm_RzjXZ-ZTFnP",
        "Accept-Encoding": "gzip, deflate, sdch",
        "Accept-Language": "zh-CN,zh;q=0.9,en-US;q=0.7,en;q=0.6",
    }  # Firefox on Windows 10
    headers_5 = {
        "Connection": "keep-alive",
        "Pragma": "no-cache",
        "Cache-Control": "no-cache",
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36 Edge/15.15063",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
        "Referer": "https://www.baidu.com/link?url=c-FMHf06-ZPhoRM4tWduhraKXhnSm_RzjXZ-",
        "Accept-Encoding": "gzip, deflate, sdch",
        "Accept-Language": "zh-CN,zh;q=0.9,en-US;q=0.7,en;q=0.6",
        "Accept-Charset": "gb2312,gbk;q=0.7,utf-8;q=0.7,*;q=0.7",
    }  # Edge on Windows 10 (the UA string identifies Edge)
    headers_6 = {
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
        "Accept-Encoding": "gzip, deflate, sdch",
        "Accept-Language": "zh-CN,zh;q=0.8",
        "Pragma": "no-cache",
        "Cache-Control": "no-cache",
        "Connection": "keep-alive",
        "DNT": "1",
        "Referer": "https://www.baidu.com/s?wd=If-None-Match&rsv_spt=1&rsv_iqid=0x9fcbc99a0000b5d7&issp=1&f=8&rsv_bp=1&rsv_idx=2&ie=utf-8&rq",
        "Accept-Charset": "gb2312,gbk;q=0.7,utf-8;q=0.7,*;q=0.7",
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.221 Safari/537.36 SE 2.X MetaSr 1.0",
    }  # a browser on Windows 10

    def __init__(self):
        pass

    def select_header(self):
        n = random.randint(1, 6)
        switch = {
            1: self.headers_1,
            2: self.headers_2,
            3: self.headers_3,
            4: self.headers_4,
            5: self.headers_5,
            6: self.headers_6,
        }
        headers = switch[n]
        return headers
Only six request headers are defined here; for a large crawl you can add many more, even hundreds, and widen the range passed to random accordingly.
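As an aside, if the numbered switch dict becomes awkward to maintain at that scale, the same random pick can be made with random.choice over a list of header dicts. A minimal sketch, with the header contents abbreviated to stand-in User-Agent strings:

```python
import random

# Abbreviated stand-ins for the full header dicts shown above.
HEADERS_POOL = [
    {"User-Agent": "Mozilla/5.0 (Windows NT 6.3; WOW64) ... Chrome/52.0.2743.116"},
    {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) ... Firefox/52.0"},
    {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ... Edge/15.15063"},
]

def select_header():
    # random.choice scales to a pool of any size without a numbered dict
    return random.choice(HEADERS_POOL)
```

Growing the pool is then just appending another dict to the list.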
2. In the pyspider script, write the following:
#!/usr/bin/env python
# -*- encoding: utf-8 -*-
# Created on 2017-08-18 11:52:26

from pyspider.libs.base_handler import *
from pyspider.libs.header_switch import HeadersSelector
import sys

defaultencoding = 'utf-8'
if sys.getdefaultencoding() != defaultencoding:
    reload(sys)
    sys.setdefaultencoding(defaultencoding)


class Handler(BaseHandler):
    crawl_config = {
        "user_agent": "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36",
        "timeout": 120,
        "connect_timeout": 60,
        "retries": 5,
        "fetch_type": 'js',
        "auto_recrawl": True,
    }

    @every(minutes=24 * 60)
    def on_start(self):
        header_slt = HeadersSelector()
        header = header_slt.select_header()  # get a fresh header
        # header["X-Requested-With"] = "XMLHttpRequest"
        orig_href = 'http://sww.bjxch.gov.cn/gggs.html'
        self.crawl(orig_href,
                   callback=self.index_page,
                   headers=header)  # the headers must be passed to crawl(); cookies come from response.cookies

    @config(age=24 * 60 * 60)
    def index_page(self, response):
        header_slt = HeadersSelector()
        header = header_slt.select_header()  # get a fresh header
        # header["X-Requested-With"] = "XMLHttpRequest"
        if response.cookies:
            header["Cookie"] = response.cookies
The crucial point is that every callback (on_start, index_page, and so on) instantiates a header selector each time it runs, so every request gets a different header. Take care to include the following code in each callback:
header_slt = HeadersSelector()
header = header_slt.select_header()  # get a fresh header
# header["X-Requested-With"] = "XMLHttpRequest"
header["Host"] = "www.baidu.com"
if response.cookies:
    header["Cookie"] = response.cookies
When an AJAX request is sent via XHR it carries an extra header, which sites often use to detect Ajax requests; add {'X-Requested-With': 'XMLHttpRequest'} to the headers to fetch such content.
Once the URL is fixed, the Host field of the request header is fixed too, so add it as needed. The urlparse module in the standard library can parse the host out of a URL: parse the URL and read its netloc attribute.
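For example, using the article's own target URL (the module is urlparse in Python 2 and urllib.parse in Python 3):

```python
try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse      # Python 2

url = 'http://sww.bjxch.gov.cn/gggs.html'
host = urlparse(url).netloc  # the value to put in the Host header
print(host)  # sww.bjxch.gov.cn
```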
If the response carries cookies, add them to the request header.
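Note that response.cookies is a dict, while the Cookie request header is a single string; a small helper can do the conversion (the cookie name and value below are made-up examples):

```python
def cookies_to_header(cookies):
    """Join a cookie dict into the 'k1=v1; k2=v2' string the Cookie header expects."""
    return "; ".join("%s=%s" % (k, v) for k, v in cookies.items())

# hypothetical dict, standing in for response.cookies
cookies = {"session": "abc123"}
header = {}
header["Cookie"] = cookies_to_header(cookies)
print(header["Cookie"])  # session=abc123
```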
Any other disguise fields you need can be added the same way.
That is all it takes to give the crawler random request headers.