# Python Crawler: Downloading Life Aphorisms

Crawl a web page of life aphorisms (人生格言) and save the quotes to a local file.

Code:

```python
import requests          # requests library: fetches the web page
from lxml import etree   # lxml library: XPath data parsing

# Request headers -- every browser's user-agent is different; look yours up in the browser
header = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/127.0.0.0 Safari/537.36 Edg/127.0.0.0'
}

url = "http://m.3chongmen.com/renshenggeyan/162.html"  # target URL
res1 = requests.get(url=url, headers=header).text
html = etree.HTML(res1)
title = html.xpath('//div[@class="title"]/h1/text()')[0]   # parse: extract the title
content = html.xpath('//div[@class="content"]/text()')     # parse: extract the content
content = "".join(content)
print(title)
print(content)
```

(Run result: screenshot omitted.)

## Analysis

**Importing requests.** requests is a third-party library, so install it first:

```shell
pip install requests
```

**Importing lxml.** Install it the same way:

```shell
pip install lxml
```

**Headers.** In the simplest case, supplying just the `user-agent` is enough. Right-click the page and choose "Inspect", open the "Network" tab, press Ctrl+R to reload, click the first request, open "Headers", and scroll to the bottom to find "User-Agent"; copy its value into PyCharm.

**XPath data parsing.** The page source returned by requests contains the information we want, so we need to extract it, which is where XPath parsing comes in. Learn XPath syntax and the lxml library first; plenty of material is available online.

## Extension

Extract every life aphorism listed on the index page and save each one locally.

Code:

```python
import requests
from lxml import etree

header = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/127.0.0.0 Safari/537.36 Edg/127.0.0.0'
}

def spider(url):
    """Fetch one article page and return its title and text content."""
    res1 = requests.get(url=url, headers=header).text
    html = etree.HTML(res1)
    content = html.xpath('//div[@class="content"]/text()')
    content = "".join(content)
    title = html.xpath('//div[@class="title"]/h1/text()')[0]
    return title, content

url1 = "http://m.3chongmen.com/renshenggeyan"
res = requests.get(url=url1, headers=header).text
html = etree.HTML(res)
# Collect the per-article links from the index list
links = html.xpath('//ul[@class="list_cnt"]//a[@target="_blank"]/@href')
for link in links:
    title, content = spider(link)
    # Note: the 格言/ directory must already exist, or open() raises FileNotFoundError
    with open(f'格言/{title}.txt', 'w', encoding='utf-8') as f:
        f.write(title + '\n\n')
        f.write(content)
```

(Run result: screenshot omitted.)
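To see how the two XPath expressions behave without touching the network, here is a small offline sketch. The HTML snippet is invented to mimic the structure the scraper assumes (a `div.title > h1` heading and a `div.content` body); note that `text()` also picks up the tail text after child elements such as `<br/>`, which is why the code joins a list of fragments:

```python
from lxml import etree  # pip install lxml

# Invented HTML mimicking the page structure the scraper expects
snippet = """
<html><body>
  <div class="title"><h1>Sample aphorisms</h1></div>
  <div class="content">
    Quote one.
    <br/>
    Quote two.
  </div>
</body></html>
"""

html = etree.HTML(snippet)
title = html.xpath('//div[@class="title"]/h1/text()')[0]
parts = html.xpath('//div[@class="content"]/text()')  # text nodes around <br/>
content = "".join(p.strip() for p in parts if p.strip())

print(title)    # Sample aphorisms
print(content)  # Quote one.Quote two.
```

The same expressions run unchanged against the real page source once requests has fetched it.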
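One caveat with the extension script: the `@href` values scraped from an index page are sometimes relative paths rather than full URLs. Whether that happens on this particular site is an assumption, but `urllib.parse.urljoin` from the standard library handles both cases safely, so passing every link through it costs nothing:

```python
from urllib.parse import urljoin

base = "http://m.3chongmen.com/renshenggeyan"

# A relative path is resolved against the base URL (hypothetical example path)
print(urljoin(base, "/renshenggeyan/162.html"))
# An already-absolute URL passes through unchanged
print(urljoin(base, "http://m.3chongmen.com/renshenggeyan/163.html"))
```

In the loop, `spider(urljoin(url1, link))` would then work regardless of which form the site emits.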
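Two more robustness gaps in the save step: the `格言/` directory must exist before `open()` is called, and a scraped title may contain characters that are illegal in file names (such as `/` or `?`). A minimal sketch of a safer save helper, with an invented `save_quote` name and an invented character set to strip:

```python
import os
import re

def save_quote(title, content, out_dir="格言"):
    """Save one aphorism to <out_dir>/<title>.txt, creating the directory
    if needed and replacing characters that are illegal in file names."""
    os.makedirs(out_dir, exist_ok=True)              # no-op if it already exists
    safe_title = re.sub(r'[\\/:*?"<>|]', "_", title).strip() or "untitled"
    path = os.path.join(out_dir, f"{safe_title}.txt")
    with open(path, "w", encoding="utf-8") as f:
        f.write(title + "\n\n")                      # original title inside the file
        f.write(content)
    return path

print(save_quote("a/b: quote?", "Live well.", out_dir="格言_demo"))
```

The loop body in the extension script would then shrink to `save_quote(*spider(link))`.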