Web scrape troubleshooting
Revision as of 18:03, 16 January 2024
Before writing the web-scraping script (crawler)
- Does the website offer datasets or files? e.g. open data
- Does the website offer an API (application programming interface)?
List of technical issues
- The content of the web page changed (a new revision): the expected content of the specified DOM element became empty.
- Multiple sources for the same column, e.g. different HTML DOM elements that carry the same column value.
  - Back up the HTML text of the parent DOM element
  - (optional) Back up the complete HTML file
- The IP was banned by the server
  - Set a temporization (sleep time) between requests, e.g. PHP: sleep - Manual, the Scrapy AutoThrottle extension, or sleeping a random number of seconds.
  - The server responded with a status of 403 '403 Forbidden' --> change the network IP
- CAPTCHA
- AJAX
  - Autoscroll on Chrome or Edge, written by Peter Legierski (@PeterLegierski on Twitter)
- The web page requires signing in
- The request is blocked when the Referer or other headers are missing.
- Language and URL-encoded strings (urlencode)
- Data cleaning issues, e.g. non-breaking spaces or other whitespace characters
- Is the link a permanent link?
- Enabling/disabling CSS or JavaScript
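Several of the issues above (missing Referer/User-Agent headers, 403 bans, and request pacing) can be handled in one place. A minimal Python sketch using only the standard library; the User-Agent string and the delay range are illustrative assumptions, not values the original recommends:

```python
import random
import time
import urllib.request

def build_request(url, referer=None):
    """Build a request with browser-like headers.

    Some servers block requests that lack a User-Agent or Referer
    header; the User-Agent value here is an illustrative placeholder.
    """
    headers = {"User-Agent": "Mozilla/5.0 (compatible; example-crawler)"}
    if referer:
        headers["Referer"] = referer
    return urllib.request.Request(url, headers=headers)

def throttle(min_delay=1.0, max_delay=3.0):
    """Sleep a random number of seconds between requests,
    so the traffic pattern looks less like an automated crawler."""
    time.sleep(random.uniform(min_delay, max_delay))

# Typical loop: throttle(); then urllib.request.urlopen(build_request(url)).
# A 403 response at this point usually means the IP itself was banned,
# and only a different network IP (or proxy) will help.
```
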
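For the data-cleaning issue, non-breaking spaces and other odd whitespace characters often survive extraction and break string comparisons. One way to normalize them in Python, a sketch not tied to any particular scraping library:

```python
import re
import unicodedata

def normalize_whitespace(text):
    """Collapse all Unicode whitespace (including non-breaking
    spaces, U+00A0) into single ASCII spaces and trim the ends."""
    # NFKC normalization maps the non-breaking space and similar
    # compatibility characters to their plain equivalents.
    text = unicodedata.normalize("NFKC", text)
    # On str, Python's \s already matches Unicode whitespace.
    return re.sub(r"\s+", " ", text).strip()
```
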
| Difficulty of implementation | Description | Approach | Comments |
|---|---|---|---|
| Easy | Well-formatted HTML elements | The URL is the resource of the dataset. | |
| Advanced | Interactive websites | The URL is the resource of the dataset; requires simulating a POST form submission with the form data or a user agent | Using an HTTP request/response data tool or PHP: cURL |
| More difficult | Interactive websites | Requires simulating user behavior in the browser, such as clicking buttons and submitting forms, to finally obtain the file. | Using Selenium or Headless Chrome |
| Difficult | Interactive websites | AJAX | |
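The "Advanced" row's POST form submission can be sketched with the standard library. The URL and field names below are hypothetical placeholders; the real values must be read off the form action and the browser's network inspector for the target site:

```python
import urllib.parse
import urllib.request

def encode_form(fields):
    """URL-encode form fields the way a browser form submission does."""
    return urllib.parse.urlencode(fields).encode("utf-8")

def submit_form(url, fields,
                user_agent="Mozilla/5.0 (compatible; example-crawler)"):
    """POST url-encoded form data; the User-Agent value is an
    illustrative assumption, not a requirement."""
    req = urllib.request.Request(
        url,
        data=encode_form(fields),  # supplying data makes this a POST
        headers={
            "Content-Type": "application/x-www-form-urlencoded",
            "User-Agent": user_agent,
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Hypothetical target, replace with the real form action and field names:
# submit_form("https://example.com/search", {"query": "keyword", "page": "1"})
```
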
Search keyword strategy
How to find an unofficial (third-party) web crawler? Suggested search keyword strategies:
- target website + crawler site:github.com
- target website + scraper site:github.com
- target website + bot site:github.com
- target website + download / downloader site:github.com
- target website + browser client site:github.com
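The keyword patterns above can be generated mechanically. A small sketch; the Google URL format is one assumption, and any search engine that supports the site: operator works the same way:

```python
from urllib.parse import quote_plus

# Keyword suffixes from the strategy list above.
SEARCH_SUFFIXES = ["crawler", "scraper", "bot",
                   "download", "downloader", "browser client"]

def build_search_urls(target_site):
    """Build one search URL per suggested keyword pattern,
    restricted to github.com via the site: operator."""
    urls = []
    for suffix in SEARCH_SUFFIXES:
        query = f"{target_site} {suffix} site:github.com"
        urls.append("https://www.google.com/search?q=" + quote_plus(query))
    return urls
```
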
Further reading
- Stateless: Why say that HTTP is a stateless protocol? - Stack Overflow
- Stateful: What is stateful? Webopedia Definition
- List of HTTP status codes - Wikipedia
- Skill tree of web scraping
- The Supreme Court of South Korea issued a ruling on web scraping similar to the US precedent – Gea-Suan Lin's BLOG