Below is a web scraper that uses Beautiful Soup to scrape a team roster from this site. Each column of data is put into a list and then looped into a CSV file. I want to scrape the team name ("Team" in the code), but I'm struggling to incorporate the meta tag (HTML below) into my CSV writer loop.
<meta property="og:site_name" content="Tampa Bay Rays" />
I think the problem is that the length of the "Team" list doesn't match the lengths of the other columns. For example, my current code prints lists that look like this:
[Player A, Player B, Player C]
[46,36,33]
[Tampa Bay Rays]
But I need the Team list (the last one) to match the length of the first two, like this:
[Player A, Player B, Player C]
[46,36,33]
[Tampa Bay Rays, Tampa Bay Rays, Tampa Bay Rays]
Does anyone know how to make this meta-tag adjustment work in my writerow CSV loop? Thanks in advance!
import requests
import csv
from bs4 import BeautifulSoup
page=requests.get('http://m.rays.mlb.com/roster/')
soup=BeautifulSoup(page.text, 'html.parser')
#Remove Unwanted Links
last_links=soup.find(class_='nav-tabset-container')
last_links.decompose()
side_links=soup.find(class_='column secondary span-5 right')
side_links.decompose()
#Generate CSV
f=csv.writer(open('MLB_Active_Roster.csv','w',newline=''))
f.writerow(['Name','Number','Hand','Height','Weight','DOB','Team'])
#Find Player Name Links
player_list=soup.find(class_='layout layout-roster')
player_list_items=player_list.find_all('a')
#Extract Player Name Text
names=[player_name.contents[0] for player_name in player_list_items]
#Find Player Number
number_list=soup.find(class_='layout layout-roster')
number_list_items=number_list.find_all('td',index='0')
#Extract Player Number Text
number=[player_number.contents[0] for player_number in number_list_items]
#Find B/T
hand_list=soup.find(class_='layout layout-roster')
hand_list_items=hand_list.find_all('td',index='3')
#Extract B/T
handedness=[player_hand.contents[0] for player_hand in hand_list_items]
#Find Height
height_list=soup.find(class_='layout layout-roster')
height_list_items=height_list.find_all('td',index='4')
#Extract Height
height=[player_height.contents[0] for player_height in height_list_items]
#Find Weight
weight_list=soup.find(class_='layout layout-roster')
weight_list_items=weight_list.find_all('td',index='5')
#Extract Weight
weight=[player_weight.contents[0] for player_weight in weight_list_items]
#Find DOB
DOB_list=soup.find(class_='layout layout-roster')
DOB_list_items=DOB_list.find_all('td',index='6')
#Extract DOB
DOB=[player_DOB.contents[0] for player_DOB in DOB_list_items]
#Find Team Name
team_list=soup.find('meta',property='og:site_name')
Team=[team_name.contents[0] for team_name in team_list]
print(Team)
#Loop Excel Rows
for i in range(len(names)):
    f.writerow([names[i],number[i],handedness[i],height[i],weight[i],DOB[i],Team[i]])
The change is simple: replace the #Find Team Name section with this:
#Find Team Name
team_list=soup.find('meta',property='og:site_name')
Team = [team_list['content'] for _ in names]
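As a standalone illustration (with hypothetical roster data, not the scraped values), the comprehension simply repeats the single meta-tag value once per player, so the Team column ends up the same length as the other columns:

```python
# Hypothetical stand-ins for the scraped lists / meta tag value
names = ['Player A', 'Player B', 'Player C']
team_value = 'Tampa Bay Rays'  # what team_list['content'] would hold

# Repeat the scalar once per name so all columns line up
Team = [team_value for _ in names]
print(Team)  # -> ['Tampa Bay Rays', 'Tampa Bay Rays', 'Tampa Bay Rays']
```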
Full program:
import requests
import csv
from bs4 import BeautifulSoup
page=requests.get('http://m.rays.mlb.com/roster/')
soup=BeautifulSoup(page.text, 'html.parser')
#Remove Unwanted Links
last_links=soup.find(class_='nav-tabset-container')
last_links.decompose()
side_links=soup.find(class_='column secondary span-5 right')
side_links.decompose()
#Generate CSV
f=csv.writer(open('MLB_Active_Roster.csv','w',newline=''))
f.writerow(['Name','Number','Hand','Height','Weight','DOB','Team'])
#Find Player Name Links
player_list=soup.find(class_='layout layout-roster')
player_list_items=player_list.find_all('a')
#Extract Player Name Text
names=[player_name.contents[0] for player_name in player_list_items]
#Find Player Number
number_list=soup.find(class_='layout layout-roster')
number_list_items=number_list.find_all('td',index='0')
#Extract Player Number Text
number=[player_number.contents[0] for player_number in number_list_items]
#Find B/T
hand_list=soup.find(class_='layout layout-roster')
hand_list_items=hand_list.find_all('td',index='3')
#Extract B/T
handedness=[player_hand.contents[0] for player_hand in hand_list_items]
#Find Height
height_list=soup.find(class_='layout layout-roster')
height_list_items=height_list.find_all('td',index='4')
#Extract Height
height=[player_height.contents[0] for player_height in height_list_items]
#Find Weight
weight_list=soup.find(class_='layout layout-roster')
weight_list_items=weight_list.find_all('td',index='5')
#Extract Weight
weight=[player_weight.contents[0] for player_weight in weight_list_items]
#Find DOB
DOB_list=soup.find(class_='layout layout-roster')
DOB_list_items=DOB_list.find_all('td',index='6')
#Extract DOB
DOB=[player_DOB.contents[0] for player_DOB in DOB_list_items]
#Find Team Name
team_list=soup.find('meta',property='og:site_name')
Team = [team_list['content'] for _ in names]
for i in range(len(names)):
    f.writerow([names[i],number[i],handedness[i],height[i],weight[i],DOB[i],Team[i]])
The result in the CSV file:
Name,Number,Hand,Height,Weight,DOB,Team
Jose Alvarado,46,L/L,"6'2""",245lbs,5/21/95,Tampa Bay Rays
Matt Andriese,35,R/R,"6'2""",225lbs,8/28/89,Tampa Bay Rays
Chris Archer,22,R/R,"6'2""",195lbs,9/26/88,Tampa Bay Rays
Diego Castillo,63,R/R,"6'3""",240lbs,1/18/94,Tampa Bay Rays
Nathan Eovaldi,24,R/R,"6'2""",225lbs,2/13/90,Tampa Bay Rays
Chih-Wei Hu,58,R/R,"6'0""",220lbs,11/4/93,Tampa Bay Rays
Andrew Kittredge,36,R/R,"6'1""",200lbs,3/17/90,Tampa Bay Rays
Adam Kolarek,56,L/L,"6'3""",205lbs,1/14/89,Tampa Bay Rays
Sergio Romo,54,R/R,"5'11""",185lbs,3/4/83,Tampa Bay Rays
Jaime Schultz,57,R/R,"5'10""",200lbs,6/20/91,Tampa Bay Rays
Blake Snell,4,L/L,"6'4""",200lbs,12/4/92,Tampa Bay Rays
Ryne Stanek,55,R/R,"6'4""",215lbs,7/26/91,Tampa Bay Rays
Hunter Wood,61,R/R,"6'1""",165lbs,8/12/93,Tampa Bay Rays
Ryan Yarbrough,48,R/L,"6'5""",205lbs,12/31/91,Tampa Bay Rays
Wilson Ramos,40,R/R,"6'1""",245lbs,8/10/87,Tampa Bay Rays
Jesus Sucre,45,R/R,"6'0""",200lbs,4/30/88,Tampa Bay Rays
Jake Bauers,9,L/L,"6'1""",195lbs,10/6/95,Tampa Bay Rays
Ji-Man Choi,26,L/R,"6'1""",230lbs,5/19/91,Tampa Bay Rays
C.J. Cron,44,R/R,"6'4""",235lbs,1/5/90,Tampa Bay Rays
Matt Duffy,5,R/R,"6'2""",170lbs,1/15/91,Tampa Bay Rays
Adeiny Hechavarria,11,R/R,"6'0""",195lbs,4/15/89,Tampa Bay Rays
Daniel Robertson,28,R/R,"5'11""",200lbs,3/22/94,Tampa Bay Rays
Joey Wendle,18,L/R,"6'1""",190lbs,4/26/90,Tampa Bay Rays
Carlos Gomez,27,R/R,"6'3""",220lbs,12/4/85,Tampa Bay Rays
Kevin Kiermaier,39,L/R,"6'1""",215lbs,4/22/90,Tampa Bay Rays
Mallex Smith,0,L/R,"5'10""",180lbs,5/6/93,Tampa Bay Rays
The problem lies in how you are passing attributes to the find function.
Instead of this:
player_list=soup.find(class_='layout layout-roster')
you can use the attrs dictionary form:
player_list=soup.find(attrs={"class":"layout layout-roster"})
(apply this change to all of the find calls)
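A quick sanity check of the attrs form on a tiny inline document (this snippet is illustrative only and does not hit the real site):

```python
from bs4 import BeautifulSoup

# Tiny inline document standing in for the real roster page
html = '<div class="layout layout-roster"><a>Player A</a></div>'
soup = BeautifulSoup(html, 'html.parser')

# attrs= makes the dictionary filter on attributes; passing the dict
# as the first positional argument would be treated as a name filter
div = soup.find(attrs={"class": "layout layout-roster"})
print(div.a.contents[0])  # -> Player A
```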
Your final script should look like this:
side_links=soup.find(attrs={"class":'column secondary span-5 right'})
side_links.decompose()
#Generate CSV
f=csv.writer(open('MLB_Active_Roster.csv','w',newline=''))
f.writerow(['Name','Number','Hand','Height','Weight','DOB','Team'])
#Find Player Name Links
player_list=soup.find(attrs={"class":'layout layout-roster'})
player_list_items=player_list.find_all('a')
#Extract Player Name Text
names=[player_name.contents[0] for player_name in player_list_items]
#Find Player Number
number_list=soup.find(attrs={"class":'layout layout-roster'})
number_list_items=number_list.find_all('td',{"index":'0'})
#Extract Player Number Text
number=[player_number.contents[0] for player_number in number_list_items]
#Find B/T
hand_list=soup.find(attrs={"class":'layout layout-roster'})
hand_list_items=hand_list.find_all('td',{"index":'3'})
#Extract B/T
handedness=[player_hand.contents[0] for player_hand in hand_list_items]
#Find Height
height_list=soup.find(attrs={"class":'layout layout-roster'})
height_list_items=height_list.find_all('td',{"index":'4'})
#Extract Height
height=[player_height.contents[0] for player_height in height_list_items]
#Find Weight
weight_list=soup.find(attrs={"class":'layout layout-roster'})
weight_list_items=weight_list.find_all('td',{"index":'5'})
#Extract Weight
weight=[player_weight.contents[0] for player_weight in weight_list_items]
#Find DOB
DOB_list=soup.find(attrs={"class":'layout layout-roster'})
DOB_list_items=DOB_list.find_all('td',{"index":'6'})
#Extract DOB
DOB=[player_DOB.contents[0] for player_DOB in DOB_list_items]
#Find Team Name
team_list=soup.find('meta',{"property":'og:site_name'})
Team=[team_list['content']]*len(names)
print(Team)
#Loop Excel Rows
for i in range(len(names)):
    f.writerow([names[i],number[i],handedness[i],height[i],weight[i],DOB[i],Team[i]])
There is a lot of repetition in your code; try to avoid copy-and-paste programming.
On that note, you can build a list by repeating a single item: ['foo']*3 gives ['foo', 'foo', 'foo']. That is handy for the team name, which is the same for every player. You can then use zip() and writerows() to write all the lists to the CSV in a single line of code.
import requests
import csv
from bs4 import BeautifulSoup
page = requests.get('http://m.rays.mlb.com/roster/')
soup = BeautifulSoup(page.text, 'html.parser')
soup.find(class_='nav-tabset-container').decompose()
soup.find(class_='column secondary span-5 right').decompose()
roster = soup.find(class_='layout layout-roster')
names = [n.contents[0] for n in roster.find_all('a')]
number = [n.contents[0] for n in roster.find_all('td', index='0')]
handedness = [n.contents[0] for n in roster.find_all('td', index='3')]
height = [n.contents[0] for n in roster.find_all('td', index='4')]
weight = [n.contents[0] for n in roster.find_all('td', index='5')]
DOB = [n.contents[0] for n in roster.find_all('td', index='6')]
team = [soup.find('meta',property='og:site_name')['content']] * len(names)
with open('MLB_Active_Roster.csv', 'w', newline='') as fp:
    f = csv.writer(fp)
    f.writerow(['Name','Number','Hand','Height','Weight','DOB','Team'])
    f.writerows(zip(names, number, handedness, height, weight, DOB, team))
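One caveat worth noting: zip() stops at the shortest input, so if the team list were left at a single element, every row after the first would be silently dropped. A small stdlib-only sketch with hypothetical data:

```python
import csv
import io

names = ['Player A', 'Player B', 'Player C']
number = ['46', '36', '33']
team = ['Tampa Bay Rays'] * len(names)  # replicate to the roster length

buf = io.StringIO()
w = csv.writer(buf)
w.writerow(['Name', 'Number', 'Team'])
w.writerows(zip(names, number, team))
print(len(buf.getvalue().splitlines()))  # -> 4 (header + 3 players)

# An un-replicated one-element list truncates the output instead,
# because zip stops at the shortest iterable:
print(len(list(zip(names, number, ['Tampa Bay Rays']))))  # -> 1
```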