I wrote a program that scrapes internet sites. It is pretty straightforward because it processes links one by one.
Since internet speed is the limiting factor, the process is rather slow.
I wonder how I can make it use multiple processes. Python's modules are confusing here: I don't know the difference between subprocess and multiprocessing. I've heard that since Python 2.x multiprocessing has been difficult and ineffective because of the GIL. I am using Python 3.2, so I wonder if things have improved.
First of all: the GIL doesn't apply to multiprocessing, only to threading, and it isn't as big a problem as people make it out to be. For one thing, the lock is released during any network or file I/O, so anything that scrapes websites is unlikely to see a performance bottleneck on account of the GIL.
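As a quick illustration (a minimal sketch of my own, not your code; the URLs are placeholders), plain threads already help for I/O-bound scraping precisely because the GIL is released while waiting on the network:

import threading
import urllib.request

def fetch(url, results):
    # urllib releases the GIL while blocked on the network, so these
    # threads download concurrently despite the GIL.
    results[url] = len(urllib.request.urlopen(url, timeout=10).read())

results = {}
threads = [threading.Thread(target=fetch, args=(url, results))
           for url in ('http://www.example.com/', 'http://www.python.org/')]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)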
The subprocess module is intended to call arbitrary processes from your program; basically, anything you can run from the command line on the machine is fair game for this module.
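For example (a minimal sketch assuming a Unix-like system where the echo command is available):

import subprocess

# Run an external command in a separate process and capture its output.
output = subprocess.check_output(['echo', 'hello from another process'])
print(output.decode())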
The multiprocessing module, on the other hand, is intended to make distributing work across multiple Python processes as easy as using threads for the same kind of work. It is the module you should be looking at if you want to implement site scraping across multiple processes.
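A minimal sketch of that idea (my own illustration; the URLs are placeholders and error handling is omitted):

import multiprocessing
import urllib.request

def fetch(url):
    # Each call runs in one of the pool's worker processes.
    return url, len(urllib.request.urlopen(url, timeout=10).read())

if __name__ == '__main__':
    urls = ['http://www.example.com/', 'http://www.python.org/']
    pool = multiprocessing.Pool(processes=4)
    try:
        for url, size in pool.map(fetch, urls):
            print(url, size, 'bytes')
    finally:
        pool.close()
        pool.join()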
That said, why don’t you take a look at Scrapy, instead:
Scrapy is a fast high-level screen scraping and web crawling framework, used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.
Scrapy uses an event-driven loop approach instead of using multiple threads or processes to solve the ‘network I/O is slow’ problem. An event loop switches between different tasks in the program when network data (or any other I/O) is pending. Instead of waiting for network data to come in before continuing, an event loop switches to a different task instead while leaving it up to the OS to notify the program when the network data has arrived.
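To make that concrete, here is a minimal sketch of an event-loop fetcher using the stdlib asyncio module. Note that asyncio only arrived in Python 3.4, so it is newer than the 3.2 mentioned in the question; it is shown purely to illustrate how an event loop interleaves waiting tasks (the hostnames are placeholders):

import asyncio

async def fetch(host):
    # While this coroutine waits on the network, the event loop runs the others.
    reader, writer = await asyncio.open_connection(host, 80)
    writer.write(b'GET / HTTP/1.0\r\nHost: ' + host.encode() + b'\r\n\r\n')
    await writer.drain()
    body = await reader.read()
    writer.close()
    await writer.wait_closed()
    return host, len(body)

async def main():
    hosts = ['www.example.com', 'www.python.org']
    for host, size in await asyncio.gather(*(fetch(h) for h in hosts)):
        print(host, size, 'bytes')

asyncio.run(main())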
If Scrapy doesn’t fit your specific needs, you can still make use of the same trick. Take a look at any of the following frameworks to help you do the same thing in your program:
- The stdlib asyncore module
- twisted
- eventlets
- greenlets

Scrapy uses twisted, if that makes any difference for you.
If you have a list of sites to scrape, your answer may be as simple as breaking that list in two and running two copies of your Python program, one against each list. If the program normally spends most of its time waiting for sites to respond, you'll see a speed-up. If, on the other hand, your program's speed is limited by your network bandwidth, or by how fast it can process each site, you will see a slowdown. So time your program running as one instance (maybe an average time over 100 sites), then time two instances, then three, and so on. You will find a sweet spot where either more or fewer instances would slow your program down.
Multiprocessing is very difficult when the processes need to share data. Make sure they do not need to share any data or communicate with each other, and multiprocessing is easy. Of course, really being sure that there are no dependencies is hard. 🙂
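A rough sketch of the "split the list and run several copies" idea (the script name scrape.py and the urls.txt file are hypothetical, and the sketch assumes the scraper takes a URL-list file as its only argument):

import subprocess
import sys

N = 2  # number of copies to run; tune this by timing, as described above
with open('urls.txt') as f:
    urls = [line.strip() for line in f if line.strip()]

procs = []
for i in range(N):
    part = 'urls_part%d.txt' % i
    with open(part, 'w') as out:
        out.write('\n'.join(urls[i::N]))  # every N-th URL goes to this copy
    procs.append(subprocess.Popen([sys.executable, 'scrape.py', part]))

for p in procs:
    p.wait()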
Many of the suggestions made are for Python 2.x.
I found that Python 3.x provides an easy way to do this kind of concurrent work: the concurrent.futures module.
The following snippet demonstrates how to do it:
import concurrent.futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

def load_url(url, timeout):
    # Retrieve a single page and return its raw contents.
    return urllib.request.urlopen(url, timeout=timeout).read()

# Start the downloads and map each future back to its URL.
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    future_to_url = dict((executor.submit(load_url, url, 60), url)
                         for url in URLS)
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        if future.exception() is not None:
            print('%r generated an exception: %s' % (url,
                                                     future.exception()))
        else:
            print('%r page is %d bytes' % (url, len(future.result())))
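Note that this example uses ThreadPoolExecutor, i.e. threads rather than separate processes. concurrent.futures also provides ProcessPoolExecutor with the same interface, so swapping the executor class in the with statement gives you real multiprocessing, provided load_url and its arguments are picklable and the code is guarded by if __name__ == '__main__' on platforms such as Windows.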