==== Scraping with Requests ====

=== 1. Introduction ===

What is Requests? Requests is an HTTP library for Python, written on top of urllib and released under the Apache2 License. Compared with urllib, Requests is far more convenient and saves a great deal of work, so it is the recommended library for scraping.

=== 2. Installation ===

Install with pip:

<code bash>
pip install requests
</code>

=== 3. Basic requests ===

<code python>
import requests

req = requests.get("http://www.baidu.com")
req = requests.post("http://www.baidu.com")
req = requests.put("http://www.baidu.com")
req = requests.delete("http://www.baidu.com")
req = requests.head("http://www.baidu.com")
req = requests.options("http://www.baidu.com")
</code>

=== 3.1 GET requests ===

Query parameters are passed as a dict (JSON-style parameters work as well):

<code python>
import requests

url = "http://www.baidu.com/s"
params = {'wd': '尚学堂'}
response = requests.get(url, params=params)
print(response.url)
response.encoding = 'utf-8'
html = response.text
# print(html)
</code>

=== 3.2 POST requests ===

Form data is likewise passed as a dict:

<code python>
import requests

url = "http://www.sxt.cn/index/login/login.html"
formdata = {
    "user": "17703181473",
    "password": "123456"
}
response = requests.post(url, data=formdata)
response.encoding = 'utf-8'
html = response.text
# print(html)
</code>

=== 3.3 Custom request headers ===

Faking the request headers is a technique used all the time when scraping; we can use it to disguise the client:

<code python>
import requests

headers = {'User-Agent': 'python'}
r = requests.get('http://www.zhidaow.com', headers=headers)
print(r.request.headers['User-Agent'])
</code>

=== 3.4 Setting a timeout ===

The timeout parameter sets a time limit; if no response content has arrived once it elapses, an exception is raised:

<code python>
requests.get('http://github.com', timeout=0.001)
</code>

=== 3.5 Proxies ===

Proxies are commonly used while scraping to avoid getting an IP banned. Requests supports them through the proxies parameter:

<code python>
import requests

proxies = {
    "http": "http://10.10.1.10:3128",
    "https": "https://10.10.1.10:1080",
}
requests.get("http://www.zhidaow.com", proxies=proxies)
</code>

If the proxy requires a username and password, write it like this:

<code python>
proxies = {
    "http": "http://user:pass@10.10.1.10:3128/",
}
</code>

=== 3.6 Sessions keep cookies automatically ===

A session maintains a conversation, e.g. logging in and then continuing to operate (identity information is remembered). A plain requests call, by contrast, is a one-off request, and identity information is not retained.

<code python>
import requests

# Create a session object
s = requests.Session()

# Make a GET request through the session; the cookie it sets is kept
s.get('http://httpbin.org/cookies/set/sessioncookie/123456789')
</code>

=== 3.7 SSL verification ===

<code python>
# Suppress the InsecureRequestWarning raised by unverified requests
requests.packages.urllib3.disable_warnings()

resp = requests.get(url, verify=False, headers=headers)
</code>

=== 4. Reading the response ===

^ Code ^ Meaning ^
| resp.json() | response body parsed as JSON |
| resp.text | response body as a string |
| resp.content | response body as bytes |
| resp.headers | the response headers |
| resp.url | the URL that was fetched |
| resp.encoding | the page encoding |
| resp.request.headers | the request headers that were sent |
| resp.cookies | the cookies |

A short usage sketch of these attributes appears at the end of this page.

Related scripts:

  * /TJPROJ6/RNA_SH/personal_dir/fengjie/Work_Orders/summary.py
  * /TJPROJ6/RNA_SH/personal_dir/fengjie/fengjie/script/get_project_info
  * /TJPROJ6/RNA_SH/personal_dir/fengjie/fengjie/script/daka/P12_daka.py

<code python>
import requests
from urllib.parse import urlencode

session = requests.session()

# key1: log in, so the session records the auth cookies
headers = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36"
}
login_url = "http://192.168.5.60:8080/iclock/accounts/login/"
form_data = {
    "username": username,    # username/password must be defined beforehand
    "password": password,
    "logintype": "employee"
}
session.post(login_url, headers=headers, data=form_data)

# key2: query the clock-in records with the logged-in session
params = {
    "starttime": starttime,  # starttime/endtime must be defined beforehand
    "endtime": endtime
}
info_url = "http://192.168.5.60:8080/iclock/staff/transactions/?" + urlencode(params)
resp = session.post(info_url, headers=headers)

# key3: parse the JSON response
try:
    if resp.status_code == 200:
        html = resp.json()
except requests.ConnectionError as e:
    print('Error', e.args)
</code>

Reference 1: [[https://www.jianshu.com/p/fb6ee6cc5c1c]]
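To make the table in section 4 concrete, here is a minimal sketch that fetches one page and prints each attribute. It assumes httpbin.org (a public echo service) is reachable; any URL that returns JSON would do for the resp.json() line.

<code python>
import requests

# Fetch a test page; httpbin echoes the request back as JSON
resp = requests.get("http://httpbin.org/get", params={"demo": "1"})

print(resp.url)              # final URL, query string included
print(resp.status_code)      # HTTP status code
print(resp.encoding)         # encoding taken from the response headers
print(resp.headers)          # response headers (case-insensitive dict)
print(resp.request.headers)  # headers that were actually sent
print(resp.cookies)          # cookies set by the server (empty here)
print(resp.text[:200])       # body decoded to str (first 200 chars)
print(resp.content[:200])    # raw body bytes (first 200 bytes)
print(resp.json())           # body parsed as JSON (httpbin returns JSON)
</code>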