Web scraping in Python with BeautifulSoup
I previously covered building a crawler with phantomjs (https://www.nhooo.com/article/55789.htm), which worked by driving selectors. With the BeautifulSoup Python module (docs: http://www.crummy.com/software/BeautifulSoup/bs4/doc/), you can extract page content with very little code:
# coding=utf-8
from urllib.parse import urlencode
from urllib.request import urlopen
from bs4 import BeautifulSoup

url = 'http://www.baidu.com/s'
values = {'wd': '网球'}
encoded_param = urlencode(values)
full_url = url + '?' + encoded_param
response = urlopen(full_url)
soup = BeautifulSoup(response, 'html.parser')
alinks = soup.find_all('a')
The snippet above fetches Baidu's search results for the query "网球" (tennis).
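In practice you usually want the link text and targets, not the raw Tag objects. A minimal sketch of pulling those out (using a small inline document in place of the live Baidu response, which is an assumption for illustration):

```python
from bs4 import BeautifulSoup

# Stand-in document; a real crawler would pass the HTTP response instead.
html = '<a href="/a">first</a><a href="/b">second</a><span>not a link</span>'
soup = BeautifulSoup(html, 'html.parser')

# Collect (text, href) pairs from every <a> tag that actually has an href.
links = [(a.get_text(), a.get('href')) for a in soup.find_all('a') if a.get('href')]
print(links)  # [('first', '/a'), ('second', '/b')]
```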
BeautifulSoup ships with many useful methods. A few of the handier features:
Constructing a node element:
soup = BeautifulSoup('<b class="boldest">Extremely bold</b>', 'html.parser')
tag = soup.b
type(tag)
# <class 'bs4.element.Tag'>
Attributes live in the .attrs property, which returns a dict:
tag.attrs
# {'class': ['boldest']}
You can also read a single attribute directly with tag['class'].
Attributes can be added, changed, and deleted freely:
tag['class'] = 'verybold'
tag['id'] = 1
tag
# <b class="verybold" id="1">Extremely bold</b>
del tag['class']
del tag['id']
tag
# <b>Extremely bold</b>
tag['class']
# KeyError: 'class'
print(tag.get('class'))
# None
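One detail worth noting: bs4 treats class as a multi-valued attribute, so tag['class'] comes back as a list of individual classes rather than a single string. A small sketch (assuming the html.parser backend):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<b class="boldest strong">Extremely bold</b>', 'html.parser')
tag = soup.b

# class is multi-valued: bs4 splits it into a list of classes.
print(tag['class'])  # ['boldest', 'strong']

# id is not multi-valued, so it stays a plain string.
soup2 = BeautifulSoup('<b id="one">x</b>', 'html.parser')
print(soup2.b['id'])  # one
```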
You can also navigate and search the DOM at will, as in the example below.
1. Build a document
html_doc = """<html><head><title>The Dormouse's story</title></head>
<body><p><b>The Dormouse's story</b></p>
<p>Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a> and
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p>...</p>
</body></html>"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_doc, 'html.parser')
2. Poke at it every which way
soup.head
# <head><title>The Dormouse's story</title></head>
soup.title
# <title>The Dormouse's story</title>
soup.body.b
# <b>The Dormouse's story</b>
soup.a
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
soup.find_all('a')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
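find_all also accepts keyword filters and a limit, handy when you only need specific links. A brief self-contained sketch against a few sister links like the ones above:

```python
from bs4 import BeautifulSoup

doc = ('<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>'
       '<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>'
       '<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>')
soup = BeautifulSoup(doc, 'html.parser')

# Filter by attribute value with a keyword argument.
print(soup.find_all(id='link2'))  # matches the Lacie tag only

# Stop after the first two matches with limit.
print(len(soup.find_all('a', limit=2)))  # 2

# find() returns the first match (or None if nothing matches).
print(soup.find('a', id='link3').get_text())  # Tillie
```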
head_tag = soup.head
head_tag
# <head><title>The Dormouse's story</title></head>
head_tag.contents
# [<title>The Dormouse's story</title>]
title_tag = head_tag.contents[0]
title_tag
# <title>The Dormouse's story</title>
title_tag.contents
# ["The Dormouse's story"]
len(soup.contents)
# 1
soup.contents[0].name
# 'html'
text = title_tag.contents[0]
text.contents
# AttributeError: 'NavigableString' object has no attribute 'contents'
for child in title_tag.children:
    print(child)
# The Dormouse's story
head_tag.contents
# [<title>The Dormouse's story</title>]
for child in head_tag.descendants:
    print(child)
# <title>The Dormouse's story</title>
# The Dormouse's story
len(list(soup.children))
# 1
len(list(soup.descendants))
# 25
title_tag.string
# "The Dormouse's story"
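.string only works when a tag has exactly one string child; with mixed content it returns None. For those cases, get_text() and stripped_strings collect all the text in a subtree. A small sketch (assuming html.parser):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<p>Once upon a time there were <b>three</b> little sisters</p>',
                     'html.parser')

# .string is None here because <p> has more than one child.
print(soup.p.string)  # None

# get_text() concatenates every string in the subtree.
print(soup.p.get_text())  # Once upon a time there were three little sisters

# stripped_strings yields each text fragment with surrounding whitespace removed.
print(list(soup.p.stripped_strings))
```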