6

I am looking for a way to speed up my code, and I have heard good things about threads and about urllib3. Apparently, people disagree about which solution is best.

The problem with my script below is the execution time: so slow!

Step 1: I fetch this page http://www.cambridgeesol.org/institutions/results.php?region=Afghanistan&type=&BULATS=on

Step 2: I parse the page with BeautifulSoup

Step 3: I put the data in an excel doc

Step 4: I do it again, and again, and again for all the countries in my (long) list (I am just changing "Afghanistan" in the URL to another country)

Here is my code:

ws = wb.add_sheet("BULATS_IA") # We add a new tab in the excel doc
x = 0 # We need x and y for pulling the data into the excel doc
y = 0
Countries_List = ['Afghanistan','Albania','Andorra','Argentina','Armenia','Australia','Austria','Azerbaijan','Bahrain','Bangladesh','Belgium','Belize','Bolivia','Bosnia and Herzegovina','Brazil','Brunei Darussalam','Bulgaria','Cameroon','Canada','Central African Republic','Chile','China','Colombia','Costa Rica','Croatia','Cuba','Cyprus','Czech Republic','Denmark','Dominican Republic','Ecuador','Egypt','Eritrea','Estonia','Ethiopia','Faroe Islands','Fiji','Finland','France','French Polynesia','Georgia','Germany','Gibraltar','Greece','Grenada','Hong Kong','Hungary','Iceland','India','Indonesia','Iran','Iraq','Ireland','Israel','Italy','Jamaica','Japan','Jordan','Kazakhstan','Kenya','Kuwait','Latvia','Lebanon','Libya','Liechtenstein','Lithuania','Luxembourg','Macau','Macedonia','Malaysia','Maldives','Malta','Mexico','Monaco','Montenegro','Morocco','Mozambique','Myanmar (Burma)','Nepal','Netherlands','New Caledonia','New Zealand','Nigeria','Norway','Oman','Pakistan','Palestine','Papua New Guinea','Paraguay','Peru','Philippines','Poland','Portugal','Qatar','Romania','Russia','Saudi Arabia','Serbia','Singapore','Slovakia','Slovenia','South Africa','South Korea','Spain','Sri Lanka','Sweden','Switzerland','Syria','Taiwan','Thailand','Trinadad and Tobago','Tunisia','Turkey','Ukraine','United Arab Emirates','United Kingdom','United States','Uruguay','Uzbekistan','Venezuela','Vietnam']
Longueur = len(Countries_List)

for Countries in Countries_List:
    y = 0

    htmlSource = urllib.urlopen("http://www.cambridgeesol.org/institutions/results.php?region=%s&type=&BULATS=on" % (Countries)).read() # I open the page with the name of the corresponding country in the URL
    s = soup(htmlSource)
    tableGood = s.findAll('table')
    try:
        rows = tableGood[3].findAll('tr')
        for tr in rows:
            cols = tr.findAll('td')
            y = 0
            x = x + 1
            for td in cols:
                hum = td.text
                ws.write(x, y, hum)
                y = y + 1
                wb.save("%s.xls" % name_excel)

    except IndexError:
        pass

So I know that not everything is perfect, but I am looking forward to learning new things in Python! The script is very slow because of urllib2 and BeautifulSoup. For the soup part, I guess I can't really make it much better, but for urllib2 I don't know how.

EDIT 1: "Multiprocessing useless with urllib2?" seems interesting in my case. What do you guys think about this potential solution?

# Make sure that the queue is thread-safe!!

def producer(self):
    # Only need one producer, although you could have multiple
    with open('urllist.txt', 'r') as fh:
        for line in fh:
            self.queue.enqueue(line.strip())

def consumer(self):
    # Fire up N of these babies for some speed
    while True:
        url = self.queue.dequeue()
        dh = urllib2.urlopen(url)
        with open('/dev/null', 'w') as fh: # gotta put it somewhere
            fh.write(dh.read())
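
(If I understand correctly, the standard library's Queue.Queue is already thread-safe, so the shared queue could be as simple as this minimal sketch, with put/get playing the role of the enqueue/dequeue above.)

# Minimal sketch: Queue.Queue is thread-safe out of the box, so producer and
# consumer threads can share one instance directly.
import Queue

queue = Queue.Queue()
queue.put('http://www.bulats.org')   # producer side
url = queue.get()                    # consumer side, blocks until an item is available
# ... fetch url here ...
queue.task_done()                    # lets queue.join() know this item is finished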

EDIT 2: urllib3. Can anyone tell me more about it?

Re-use the same socket connection for multiple requests (HTTPConnectionPool and HTTPSConnectionPool) (with optional client-side certificate verification). https://github.com/shazow/urllib3

Since I am requesting the same website 122 times for different pages, I guess reusing the same socket connection could be interesting, am I wrong? Couldn't it be faster? ...

http = urllib3.PoolManager()
r = http.request('GET', 'http://www.bulats.org')
for Pages in Pages_List:
    r = http.request('GET', 'http://www.bulats.org/agents/find-an-agent?field_continent_tid=All&field_country_tid=All&page=%s' % (Pages))
    s = soup(r.data)
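
If I read the urllib3 README correctly, I could even point a pool at the single host I am hitting. This is just an untested sketch of what I have in mind (HTTPConnectionPool and the maxsize value are my guesses from the docs):

# Untested sketch: one pool for one host, so every page reuses the same connection.
import urllib3
from bs4 import BeautifulSoup as soup

pool = urllib3.HTTPConnectionPool('www.bulats.org', maxsize=1)
for Pages in Pages_List:  # Pages_List as above
    r = pool.request('GET', '/agents/find-an-agent?field_continent_tid=All&field_country_tid=All&page=%s' % (Pages))
    s = soup(r.data)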
Carto_
  • Yes, your problem here is that the fetching is single-threaded. But if you use multithreading, you'll have to make the process of writing to Excel thread-safe. I recommend Scrapy, a scraping framework for Python, which more or less does everything for you. – WooParadog Apr 22 '12 at 04:19
  • Thank you very much, I will see what Scrapy can do for me. And isn't urllib3 a valid solution too? :) But if there is any possibility to make it faster without using Scrapy, that would be better for me. I am learning Python, so I'd like to understand it all! – Carto_ Apr 22 '12 at 05:02
  • You should not open more than one connection to the same website; I believe it is more like a "gentleman's agreement". – Bahadir Cambel Apr 22 '12 at 11:37
  • 2 or 3 simultaneous connections to the same server is fine, 100 would not be. Remember that every connection you make has a performance cost. There's the TCP 3-way handshake, and then there's slow-start. Use pipelining if you can, otherwise use connection keep-alive. – Ben Voigt Apr 23 '12 at 03:33

3 Answers

9

Consider using something like workerpool. Referring to the Mass Downloader example, combining it with urllib3 would look something like this:

import workerpool
import urllib3

URL_LIST = [] # Fill this from somewhere

NUM_SOCKETS = 3
NUM_WORKERS = 5

# We want a few more workers than sockets so that they have extra
# time to parse things and such.

http = urllib3.PoolManager(maxsize=NUM_SOCKETS)
workers = workerpool.WorkerPool(size=NUM_WORKERS)

class MyJob(workerpool.Job):
    def __init__(self, url):
        self.url = url

    def run(self):
        r = http.request('GET', self.url)
        # ... do parsing stuff here


for url in URL_LIST:
    workers.put(MyJob(url))

# Send shutdown jobs to all threads, and wait until all the jobs have been completed
# (If you don't do this, the script might hang due to a rogue undead thread.)
workers.shutdown()
workers.wait()

You may note from the Mass Downloader examples that there are multiple ways of doing this. I chose this particular example just because it's less magical, but any of the other strategies are valid as well.

Disclaimer: I am the author of both urllib3 and workerpool.

shazow
  • Woaw, thanks for this super idea! I'm gonna try that :-) I am also looking at Twisted, which seems to be quite quick too. Which do you think is best? – Carto_ Apr 24 '12 at 06:45
  • Hum, and also another question (sorry for that): how can I do the same thing for one page, but with a lot of different POSTs in a form? Your solution (urllib3) seems suited for that, so please give me a hand ;) Here is the difficult part of the script: http://pastebin.com/m9TAs2cj – Carto_ Apr 24 '12 at 07:00
  • @Carto_ Twisted is a fine solution too. You could also look at using Requests instead of urllib3, which adds lots of features on top of urllib3 like cookie sessions. – shazow Apr 24 '12 at 16:40
2

I don't think urllib or BeautifulSoup is slow. I ran your code on my local machine with a modified version (I removed the Excel stuff). It took around 100 ms per country to open the connection, download the content, parse it, and print it to the console.

About 10 ms of that is the total time BeautifulSoup spent parsing the content and printing it to the console per country. That is fast enough.

Nor do I believe that using Scrapy or threading is going to solve the problem, because the real problem is the expectation that it is going to be fast.

Welcome to the world of HTTP. Sometimes it will be slow, sometimes it will be very fast. A couple of reasons for slow connections:

  • the server handling your request (it sometimes returns a 404),
  • DNS resolution,
  • the HTTP handshake,
  • your ISP's connection stability,
  • your bandwidth,
  • the packet loss rate,

etc.

Don't forget, you are trying to make 121 HTTP requests to a server consecutively, and you don't know what kind of servers they have. They might also ban your IP address because of the consecutive calls.

Take a look at the Requests library and read its documentation. If you're doing this to learn more Python, don't jump straight into Scrapy.
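
For instance, something along these lines (just a rough sketch, not tested against your URLs) keeps one session open and reuses the underlying connection for repeated requests to the same host:

# Rough sketch: a requests.Session reuses the underlying connection
# (keep-alive) between requests to the same host.
import requests

session = requests.Session()
for country in ['Afghanistan', 'Albania']:  # your full Countries_List goes here
    url = "http://www.cambridgeesol.org/institutions/results.php?region=%s&type=&BULATS=on" % country
    html = session.get(url).text  # hand this to BeautifulSoup as before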

Bahadir Cambel
  • Thank you very much for all this information! Of course Scrapy seems interesting, but my objective is to learn Python, so I need to stick with plain Python :-) I also need to move wb.save("%s.xls" % name_excel) out of the "for" loop; it was quite stupid. And I'm gonna take a look at the Requests lib, as you advise :) – Carto_ Apr 22 '12 at 08:55
  • "Consider using urllib3. It supports connection pooling and multiple concurrent requests via processes (not threads). It should solve this problem. Be careful to garbage collect connection pools if you contact many different sites, since each site gets its own pool." In my case it can be interesting no ? – Carto_ Apr 22 '12 at 09:30
  • I haven't used urllib3 yet, but what you wrote down is very promising. The biggest step in learning is to try different options and understand the problems and the possible solutions. You might also say "learn the ins and outs" and find the solution that suits your problem. But first understand the "problem" very well. Requests has a nice API over urllib, so take a look at its source code as well. Keep on playing.. – Bahadir Cambel Apr 22 '12 at 11:35
  • Thanks Bahadir Cambel! For the HTTP retrieval part: 5.01 seconds. For the parsing part: 0.43 seconds. The problem really comes from the HTTP requests. – Carto_ Apr 22 '12 at 12:09
  • I can download the page in 100 ms from Amsterdam; I don't know where you live. All these timings depend on a couple of things, and believe me, urllib is not the bottleneck. There is DNS resolution, the HTTP handshake, your ISP's connection stability, your bandwidth, the packet loss rate, etc., and the server might be overloaded as well. The thing is, you don't know whether urllib is the problem. Execute the same request with another URL and compare the results. It still might not give you an exact comparison of the libraries. The web is a living organism that can change in a millisecond.. – Bahadir Cambel Apr 22 '12 at 13:25
  • You're right! I am actually in China, connected through a VPN. So yes, overall my connection is the main factor in this speed problem. BUT, in order to learn Python, I'd like to understand how to parallelize the HTTP request step of my script. The parser is the quickest step, so I'd like to feed it the HTML sources just in time. So if it's possible to put the HTML sources in a "queue", I guess I can save some time, no? :) – Carto_ Apr 22 '12 at 14:05
  • If you're making requests to the same server, don't parallelize them; use a single connection. As I mentioned in a comment on your question: "You should not open more than one connection to the same website, I believe it is more like a 'gentleman's agreement'." But the queue you have in mind is a valid solution if you're going to make requests to multiple servers. – Bahadir Cambel Apr 22 '12 at 21:40
  • Hum, understood! But how can I do that (a single connection)? – Carto_ Apr 23 '12 at 01:42
0

Hey Guys,

Some news on the problem! I've found this script, which might be useful! I'm testing it right now and it's promising (6.03 seconds to run the script below).

My idea is to find a way to mix that with urllib3. Indeed, I'm making requests to the same host many times.

The PoolManager will take care of reusing connections for you whenever you request the same host. This should cover most scenarios without significant loss of efficiency, but you can always drop down to a lower level component for more granular control. (urllib3 doc site)

Anyway, it seems very interesting, and even if I can't yet see how to mix these two functionalities (urllib3 and the threading script below), I guess it's doable! :-) (I put a rough sketch of the mix right after the script.)

Thank you very much for taking the time to give me a hand with this, it looks promising!

import Queue
import threading
import urllib2
import time
from bs4 import BeautifulSoup



hosts = ["http://www.bulats.org//agents/find-an-agent?field_continent_tid=All&field_country_tid=All", "http://www.bulats.org//agents/find-an-agent?field_continent_tid=All&field_country_tid=All&page=1", "http://www.bulats.org//agents/find-an-agent?field_continent_tid=All&field_country_tid=All&page=2", "http://www.bulats.org//agents/find-an-agent?field_continent_tid=All&field_country_tid=All&page=3", "http://www.bulats.org//agents/find-an-agent?field_continent_tid=All&field_country_tid=All&page=4", "http://www.bulats.org//agents/find-an-agent?field_continent_tid=All&field_country_tid=All&page=5", "http://www.bulats.org//agents/find-an-agent?field_continent_tid=All&field_country_tid=All&page=6"]

queue = Queue.Queue()
out_queue = Queue.Queue()

class ThreadUrl(threading.Thread):
    """Threaded Url Grab"""
    def __init__(self, queue, out_queue):
        threading.Thread.__init__(self)
        self.queue = queue
        self.out_queue = out_queue

    def run(self):
        while True:
            #grabs host from queue
            host = self.queue.get()

            #grabs urls of hosts and then grabs chunk of webpage
            url = urllib2.urlopen(host)
            chunk = url.read()

            #place chunk into out queue
            self.out_queue.put(chunk)

            #signals to queue job is done
            self.queue.task_done()

class DatamineThread(threading.Thread):
    """Threaded HTML parsing"""
    def __init__(self, out_queue):
        threading.Thread.__init__(self)
        self.out_queue = out_queue

    def run(self):
        while True:
            #grabs a chunk of HTML from the out queue
            chunk = self.out_queue.get()

            #parse the chunk
            soup = BeautifulSoup(chunk)
            #print soup.findAll(['table'])

            tableau = soup.find('table')
            rows = tableau.findAll('tr')
            for tr in rows:
                cols = tr.findAll('td')
                for td in cols:
                    texte_bu = td.text
                    texte_bu = texte_bu.encode('utf-8')
                    print texte_bu

            #signals to queue job is done
            self.out_queue.task_done()

start = time.time()
def main():

    #spawn a pool of threads, and pass them queue instance
    for i in range(5):
        t = ThreadUrl(queue, out_queue)
        t.setDaemon(True)
        t.start()

    #populate queue with data
    for host in hosts:
        queue.put(host)

    for i in range(5):
        dt = DatamineThread(out_queue)
        dt.setDaemon(True)
        dt.start()


    #wait on the queue until everything has been processed
    queue.join()
    out_queue.join()

main()
print "Elapsed Time: %s" % (time.time() - start)
Carto_
  • Hum, when I add more pages to [hosts] it doesn't seem to work as well ... I don't know why! By the way, how do I increment a variable globally (I mean across all threads, like a global for the functions)? – Carto_ Apr 23 '12 at 09:42
  • Hum, it seems not to work perfectly! I got some errors: RuntimeError: dictionary changed size during iteration and Exception: Attempt to overwrite cell: sheetname=u'BULATS_IA_PARSED' rowx=107 colx=3 – Carto_ Apr 23 '12 at 16:01