Web scrape troubleshooting


List of technical issues

  1. Content of the web page was changed (site revision): the expected web content (of the specified DOM element) became empty.
    • Use multiple sources for the same column, e.g., different HTML DOM elements that carry the same column value.
    • Back up the HTML text of the parent DOM element.
    • (optional) Keep a complete backup of the HTML file.
  2. The IP was banned by the server (see the cURL sketch after this list).
    • Set a temporization (sleep time) between requests, e.g., PHP sleep() (http://php.net/manual/en/function.sleep.php), the Scrapy AutoThrottle extension (http://doc.scrapy.org/en/1.0/topics/autothrottle.html#topics-autothrottle), or sleep a random number of seconds.
    • The server responded with a status of 403 ('403 Forbidden', https://zh.wikipedia.org/wiki/HTTP_403) --> change the network IP.
  3. CAPTCHA (https://en.wikipedia.org/wiki/CAPTCHA)
  4. AJAX
  5. The web page requires signing in.
  6. The server blocks requests without a Referer or other headers (see the cURL sketch after this list).
  7. Connection timeout during an HTTP request, e.g., in PHP default_socket_timeout is 30 seconds[1][2] (see the cURL sketch after this list).
  8. Language and URL-encoded strings (see the encoding sketch after this list).
  9. Data cleaning issues, e.g., a non-breaking space or other whitespace characters (see the encoding sketch after this list).
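
The following PHP sketch illustrates items 2, 6, and 7 together: a cURL-based fetch helper that sends Referer and User-Agent headers, sets explicit timeouts, and pauses a random number of seconds between requests. The URLs, header values, and sleep range are placeholder assumptions for illustration, not values from this page.

<?php
// Minimal sketch, assuming placeholder URLs and header values.
function fetch_page($url)
{
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,            // return the body instead of printing it
        CURLOPT_FOLLOWLOCATION => true,            // follow redirects
        CURLOPT_CONNECTTIMEOUT => 10,              // seconds allowed to establish the connection
        CURLOPT_TIMEOUT        => 30,              // overall timeout, mirroring PHP's default_socket_timeout
        CURLOPT_HTTPHEADER     => [
            'Referer: https://www.example.com/',   // some servers reject requests without a Referer
        ],
        CURLOPT_USERAGENT => 'Mozilla/5.0 (compatible; MyCrawler/1.0)', // placeholder user agent
    ]);
    $html = curl_exec($ch);
    if ($html === false) {
        error_log('cURL error: ' . curl_error($ch)); // e.g. a timeout or connection failure
    }
    curl_close($ch);
    return $html;
}

$urls = ['https://www.example.com/page/1', 'https://www.example.com/page/2']; // placeholder URL list
foreach ($urls as $url) {
    $html = fetch_page($url);
    // ... parse $html here ...
    sleep(random_int(2, 5)); // random pause between requests to reduce the risk of an IP ban
}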
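
For items 8 and 9, the sketch below shows one way to URL-encode a non-ASCII query parameter and to normalize non-breaking spaces in scraped text. The keyword and sample string are made-up placeholders.

<?php
// Minimal sketch, assuming a placeholder keyword and sample string.

// Item 8: URL-encode a non-ASCII query parameter before requesting the page.
$keyword = '臺北';                                              // placeholder keyword
$url = 'https://www.example.com/search?q=' . rawurlencode($keyword);
// $url is now https://www.example.com/search?q=%E8%87%BA%E5%8C%97

// Item 9: collapse non-breaking spaces and other Unicode whitespace into plain spaces.
$scraped = "price:\xC2\xA0 1,000 ";                             // "\xC2\xA0" is a UTF-8 non-breaking space
$clean = trim(preg_replace('/[\p{Z}\s]+/u', ' ', $scraped));    // -> "price: 1,000"

echo $url . PHP_EOL . $clean . PHP_EOL;
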
Difficulty in implementing | Approach | Comments
easy           | The URL is the resource of the dataset. |
more difficult | The URL is the resource of the dataset, but you must simulate a POST form submit with the form data or a specific user agent. | Use an HTTP request and response inspection tool or PHP cURL (see the sketch after this table).
more difficult | You must simulate user behavior in a browser, such as clicking a button, submitting a form, and finally obtaining the file. | Use Selenium or Headless Chrome.
difficult      | AJAX |
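
A minimal sketch of the "simulate a POST form submit" row using PHP cURL. The form action URL and field names are assumptions for illustration; inspect the real form with the browser's developer tools (Network tab) to find the actual request data.

<?php
// Minimal sketch, assuming a placeholder form URL and placeholder form fields.
$ch = curl_init('https://www.example.com/export');   // placeholder form action URL
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query([     // the form data observed in the request
        'year'   => '2020',
        'format' => 'csv',
    ]),
    CURLOPT_USERAGENT      => 'Mozilla/5.0 (compatible; MyCrawler/1.0)', // placeholder user agent
]);
$response = curl_exec($ch);
curl_close($ch);

if ($response !== false) {
    file_put_contents('export.csv', $response);      // save the downloaded file
}

If the form also requires cookies or a session, CURLOPT_COOKIEJAR and CURLOPT_COOKIEFILE can be added to persist them across requests.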

Before starting to write the web scraping (crawler) script

  • Does the website offer datasets or files (e.g., open data)?
  • Does the website offer an API (application programming interface)?

How to find an unofficial (3rd-party) web crawler? Suggested search keyword strategies:

  • target website + crawler
  • target website + bot
  • target website + download / downloader

Further reading

References

