Question:

Data scraping in R

祝宾白
2023-03-14

I want to use the statistics on the Premier League website for a class project. This is the site: https://www.premierleague.com/stats/top/players/goals

The rvest code below only returns the first 20 rows shown on the page; the remaining entries (and the per-season filter) appear to be loaded dynamically:

library(tidyverse)
library(rvest)
url <- "https://www.premierleague.com/stats/top/players/goals?se=79"
url %>%
  read_html() %>% 
  html_nodes("table") %>% 
  .[[1]] %>% 
   html_table()
   Rank                  Player            Club         Nationality Stat
1     1            Alan Shearer               -             England  260
2     2            Wayne Rooney         Everton             England  208
3     3             Andrew Cole               -             England  187    
4     4           Frank Lampard               -             England  177
5     5           Thierry Henry               -              France  175
6     6           Robbie Fowler               -             England  163
7     7           Jermain Defoe AFC Bournemouth             England  162
8     8            Michael Owen               -             England  150
9     9           Les Ferdinand               -             England  149
10   10        Teddy Sheringham               -             England  146
11   11        Robin van Persie               -         Netherlands  144
12   12           Sergio Agüero Manchester City           Argentina  143
13   13 Jimmy Floyd Hasselbaink               -         Netherlands  127
14   14            Robbie Keane               -             Ireland  126
15   15          Nicolas Anelka               -              France  125
16   16            Dwight Yorke               - Trinidad And Tobago  123
17   17          Steven Gerrard               -             England  120
18   18              Ian Wright               -             England  113
19   19             Dion Dublin               -             England  111
20   20            Emile Heskey               -             England  110
<div data-script="pl_stats" data-widget="stats-table" data-current-size="20"
     data-stat="" data-type="player" data-page-size="20" data-page="0"
     data-comps="1" data-num-entries="2162">

<div class="dropDown noLabel topStatsFilterDropdown" data-listener="true">
    <div data-metric="mins_played" class="current currentStatContainer" 
aria-expanded="false">Minutes played</div>
    <ul class="dropdownList" role="listbox">

1 answer

融泓
2023-03-14

This solution requires that you have access to a Selenium server.

library(RSelenium) # not on CRAN (install with devtools::install_github("ropensci/RSelenium"))
library(rvest)
library(xml2)      # xml_find_first() used below comes from xml2

# helper functions ---------------------------

# click_el() solves the problem mentioned here:
# https://stackoverflow.com/questions/11908249/debugging-element-is-not-clickable-at-point-error
click_el <- function(rem_dr, el) {
  rem_dr$executeScript("arguments[0].click();", args = list(el))
}

# wrapper around findElement()
find_el <- function(rem_dr, xpath) {
  rem_dr$findElement("xpath", xpath)
}

# check if an element exists on the dom
el_exists <- function(rem_dr, xpath) {
  maybe_el <- read_html(rem_dr$getPageSource()[[1]]) %>%
    xml_find_first(xpath = xpath)
  length(maybe_el) != 0
}

# try to click on a element if it exists
click_if_exists <- function(rem_dr, xpath) {
  if (el_exists(rem_dr, xpath)) {
    suppressMessages({
      try({
        el <- find_el(rem_dr, xpath)
        el$clickElement()
      }, silent = TRUE
      )
    })
  }
}

# close Google ads so they don't get in the way of clicking other elements
maybe_close_ads <- function(rem_dr) {
  click_if_exists(rem_dr, '//a[@id="advertClose" and @class="closeBtn"]')
}

# click on button that requires we accept cookies
maybe_accept_cookies <- function(rem_dr) {
  click_if_exists(rem_dr, "//div[@class='btn-primary cookies-notice-accept']")
}

# parse the data table you're interested in
get_tbl <- function(rem_dr) {
  read_html(rem_dr$getPageSource()[[1]]) %>% 
    html_nodes("table") %>% 
    .[[1]] %>% 
    html_table()
}

# actual execution ---------------------------

# first you need to start a Selenium server. I'm running the server inside a
# docker container and having it listen on port 4445 on my local machine
# (see http://rpubs.com/johndharrison/RSelenium-Basics for more details).
# Run this in a terminal, not in R:
`docker run -d -p 4445:4444 selenium/standalone-firefox:2.53.1`

# connect to selenium server from within r
rem_dr <- remoteDriver(
  remoteServerAddr = "localhost", port = 4445L, browserName = "firefox"
)
rem_dr$open()
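# optional sanity check (not part of the original answer): getStatus() should
# return a list of server details if the connection to the Selenium server works
rem_dr$getStatus()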

# go to webpage
rem_dr$navigate("https://www.premierleague.com/stats/top/players/goals")

# close ads
maybe_close_ads(rem_dr)
Sys.sleep(3)

# the seasons to iterate over
start <- 1992:2017 # you may want to replace this with `start <- 1992:1995` when testing
seasons <- paste0(start, "/", substr(start + 1, 3, 4))
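# `seasons` is now c("1992/93", "1993/94", ..., "2017/18"), matching the labels
# used by the page's season dropdown (see the data-option-name xpath below)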

# list to hold each season's data
out_list <- vector("list", length(seasons))
names(out_list) <- seasons

for (season in seasons) {

  maybe_close_ads(rem_dr)

  # to filter the data by season, we first need to click on the "filter by season" drop down
  # menu, so that the divs representing the various seasons become active (otherwise, 
  # we can't click them)
  cur_season <- find_el(
    rem_dr, '//div[@class="current" and @data-dropdown-current="FOOTBALL_COMPSEASON" and @role="button"]'
  )
  click_el(rem_dr, cur_season)
  Sys.sleep(3)

  # now we can select the season of interest
  xpath <- sprintf(
    '//ul[@data-dropdown-list="FOOTBALL_COMPSEASON"]/li[@data-option-name="%s"]', 
    season
  )
  season_lnk <- find_el(rem_dr, xpath)
  click_el(rem_dr, season_lnk)
  Sys.sleep(3)

  # parse the table shown on the first page
  tbl <- get_tbl(rem_dr)

  # iterate over all additional pages 
  nxt_page_act <- '//div[@class="paginationBtn paginationNextContainer"]'
  nxt_page_inact <- '//div[@class="paginationBtn paginationNextContainer inactive"]'

  while (!el_exists(rem_dr, nxt_page_inact)) {

    maybe_close_ads(rem_dr)
    maybe_accept_cookies(rem_dr)

    rem_dr$maxWindowSize()
    btn <- find_el(rem_dr, nxt_page_act)
    click_el(rem_dr, btn) # click "next button"

    maybe_accept_cookies(rem_dr)
    new_tbl <- get_tbl(rem_dr)
    tbl <- rbind(tbl, new_tbl)
    cat(".")
    Sys.sleep(2)
  }

  # put this season's data into the output list
  out_list[[season]] <- tbl
  print(season)
}

This takes a little while to run. When I ran it, I got 6,731 rows of data in total (across all seasons).
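If you want a single data frame afterwards, one possible follow-up (a sketch, assuming every season's table parses with the same column names — Rank / Player / Club / Nationality / Stat — and compatible column types) is to bind the list together while keeping the season label as a column:

library(dplyr)

# stack the per-season tables; .id stores each list element's name ("1992/93", ...)
goals_all <- bind_rows(out_list, .id = "Season")

# e.g. look at the 1992/93 top scorers
goals_all %>%
  filter(Season == "1992/93") %>%
  arrange(Rank) %>%
  head()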
