How to make Selenium not wait till full page load, which has a slow script?

Problem Description

Selenium's driver.get(url) waits until the full page has loaded. But the page I am scraping tries to load a dead JS script, so my Python script waits for it and does nothing for several minutes. This problem can occur on every page of the site.

    from selenium import webdriver

    driver = webdriver.Chrome()
    driver.get('https://www.cortinadecor.com/productos/17/estores-enrollables-screen/estores-screen-corti-3000')
    # The page tries to load: https://www.cetelem.es/eCommerceCalculadora/resources/js/eCalculadoraCetelemCombo.js
    driver.find_element_by_name('ANCHO').send_keys("100")

How can I limit the wait time, block the AJAX load of that file, or is there another way?

I am testing my script in webdriver.Chrome(), but I will use PhantomJS(), or possibly Firefox(). So if a method relies on changing browser settings, it must be universal.

Recommended Answer

When Selenium loads a page/url, by default it follows a configuration with pageLoadStrategy set to normal. To make Selenium not wait for the full page load, we can configure pageLoadStrategy. pageLoadStrategy supports the following 3 values:

1. normal (full page load)
2. eager (interactive)
3. none

Here is the code block to configure pageLoadStrategy (a Selenium 4 variant is sketched after the note below):

• Firefox:

    from selenium import webdriver
    from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

    caps = DesiredCapabilities().FIREFOX
    caps["pageLoadStrategy"] = "normal"  # complete
    # caps["pageLoadStrategy"] = "eager"  # interactive
    # caps["pageLoadStrategy"] = "none"
    driver = webdriver.Firefox(desired_capabilities=caps, executable_path=r'C:\path\to\geckodriver.exe')
    driver.get("http://google.com")

• Chrome:

    from selenium import webdriver
    from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

    caps = DesiredCapabilities().CHROME
    caps["pageLoadStrategy"] = "normal"  # complete
    # caps["pageLoadStrategy"] = "eager"  # interactive
    # caps["pageLoadStrategy"] = "none"
    driver = webdriver.Chrome(desired_capabilities=caps, executable_path=r'C:\path\to\chromedriver.exe')
    driver.get("http://google.com")

• Note: The pageLoadStrategy values normal, eager and none are a requirement as per the WebDriver W3C Editor's Draft, but the eager value is still a WIP (Work In Progress) within the ChromeDriver implementation. You can find a detailed discussion in "Eager" Page Load Strategy workaround for Chromedriver Selenium in Python.
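The snippets above use the Selenium 3 DesiredCapabilities API. In Selenium 4 the desired_capabilities argument is deprecated in favour of the browser Options classes, which expose the same setting as page_load_strategy. A minimal sketch, assuming Selenium 4 and a chromedriver discoverable on the PATH:

    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    options = Options()
    # Same three values as above: "normal", "eager" or "none"
    options.page_load_strategy = "eager"
    driver = webdriver.Chrome(options=options)  # assumes chromedriver is on PATH
    driver.get("http://google.com")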

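With pageLoadStrategy set to eager or none, driver.get() returns before slow or dead scripts finish, so you normally pair it with an explicit wait for the element you actually need; a page-load timeout can also cap the wait, which addresses the "limit the time wait" part of the question. This pairing is not part of the answer above, just a rough sketch assuming Selenium 4 and the ANCHO field from the question (timeouts are illustrative):

    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.common.exceptions import TimeoutException

    options = Options()
    options.page_load_strategy = "none"  # do not wait for the page to finish loading
    driver = webdriver.Chrome(options=options)
    driver.set_page_load_timeout(30)     # hard cap on driver.get(), as an extra safety net

    try:
        driver.get('https://www.cortinadecor.com/productos/17/estores-enrollables-screen/estores-screen-corti-3000')
    except TimeoutException:
        pass  # the dead JS request may still hit the cap; the DOM we need is usually present

    # Wait only for the element we interact with, not for the whole page
    ancho = WebDriverWait(driver, 15).until(
        EC.presence_of_element_located((By.NAME, "ANCHO"))
    )
    ancho.send_keys("100")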
