Write a Highly Efficient Python Web Crawler

Mark Duan | First published: 2015-07-14 | Last updated: 2025-06-25

As described in my previous blog post, I used a Python web-crawling library, Scrapy, to crawl static websites. In Scrapy you can write custom downloader middleware, which can handle page content such as JavaScript.

However, Scrapy already implements much of the underlying machinery for us: it uses its own dispatcher, and it provides pipelines for the parsing work after download. One drawback of relying on such a library is that strange bugs are hard to track down, because the jobs run in parallel.

For this tutorial, I want to show the structure of a simple and efficient web crawler.

First of all, we need a scheduler that can parallelize the jobs, because most of the time is spent waiting on requests. I use gevent to schedule the jobs. Gevent uses libevent as its underlying library, which combines multithreading and event-based techniques to run the jobs in parallel.

Here is the sample code:

import gevent
from gevent import monkey
from gevent.queue import Queue
from selenium import webdriver

monkey.patch_socket()  # make socket calls cooperative so greenlets can switch while waiting on I/O

class WebCrawler:
    def __init__(self, urls=[], num_worker=1):
        self.url_queue = Queue()
        for url in urls:               # fill the queue with the seed URLs
            self.url_queue.put(url)
        self.num_worker = num_worker

    def worker(self, pid):
        driver = self.initializeAnImageDisabledDriver()  # initialize the PhantomJS webdriver (a sketch of this helper appears below)
        # TODO: catch exceptions (timeouts, crashed driver) around driver.get()
        while not self.url_queue.empty():
            url = self.url_queue.get()
            driver.get(url)
            # collect the elements we care about from the rendered page
            elems = driver.find_elements_by_xpath("//script | //iframe | //img")

    def run(self):
        jobs = [gevent.spawn(self.worker, i) for i in xrange(self.num_worker)]
        gevent.joinall(jobs)  # wait for all workers to finish

The next part is the headless browser. I use PhantomJS with the options --webdriver=4444 --disk-cache=true --ignore-ssl-errors=true --load-images=false --max-disk-cache-size=100000. You can find detailed descriptions of these options in its documentation.
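For reference, here is one possible way to write the initializeAnImageDisabledDriver helper used by the worker above. This is only a sketch: it assumes you let Selenium launch PhantomJS itself and pass the flags as service_args, and the 30-second timeout is an arbitrary choice. If you prefer to start PhantomJS manually with --webdriver=4444, you can instead connect to it with webdriver.Remote.

# Inside the WebCrawler class (selenium's webdriver is already imported above):
def initializeAnImageDisabledDriver(self):
    # Pass the PhantomJS command-line flags through Selenium's service_args.
    service_args = [
        '--disk-cache=true',
        '--ignore-ssl-errors=true',
        '--load-images=false',
        '--max-disk-cache-size=100000',
    ]
    driver = webdriver.PhantomJS(service_args=service_args)
    driver.set_page_load_timeout(30)  # do not hang forever on a bad page
    return driver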

PhantomJS uses the Selenium WebDriver protocol as its front end for handling requests, while WebKit and Qt serve as its underlying browser engine and controller. It has memory-leak bugs, so a PhantomJS instance will consume a ton of memory, and it can only use one core of your CPU; you can, however, deploy many instances of PhantomJS on different ports. I originally wrote a daemon process to monitor memory usage, but later I realized I could use a Perl script to check the status of each process and send it a kill signal when it exceeds a limit such as 1 GB of memory.
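My monitor was a daemon and later a Perl script; the following is a minimal Python sketch of the same idea, assuming the psutil package is available and that 1 GB is the chosen limit (both are assumptions, not part of the original setup):

import time
import psutil

MEM_LIMIT = 1024 ** 3  # assumed limit: 1 GB per PhantomJS process

def watchdog(interval=60):
    # Periodically scan for phantomjs processes and kill any that exceed the limit.
    while True:
        for proc in psutil.process_iter():
            try:
                if proc.name() == 'phantomjs' and proc.memory_info().rss > MEM_LIMIT:
                    proc.kill()  # the crawler worker is expected to restart its driver
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                continue
        time.sleep(interval)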

To speed up the crawler, I verify each website with a static fetch first: some websites are badly written and can deadlock the headless browser, so those are simply skipped.
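One way to do such a static pre-check (a sketch, assuming the requests library and a hypothetical 10-second timeout) is to issue a plain HTTP GET and only hand responsive pages to PhantomJS:

import requests

def is_reachable(url, timeout=10):
    # Quick static check: fetch the page without rendering any JavaScript.
    # If it errors out or times out, skip it instead of tying up a PhantomJS worker.
    try:
        resp = requests.get(url, timeout=timeout)
        return resp.status_code < 400
    except requests.RequestException:
        return False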
