httpGet vs. urllib2 slow at fetching data from the web

Can anyone tell me why urllib2.urlopen takes so long to fetch file/binary data from a website while httpGet fetches data quickly? I know I'm talking apples vs. oranges here, because httpGet doesn't work well with binary data, and binary data is what I need. Take, for example, this file. It's an animated gif. If I use httpGet, I'll get either a scrambled animation or just a single frame. If I use urllib2.urlopen() followed by .read() and a subsequent StringUtil.toBytes(), I get the complete file intact, just as you would in a browser, but it takes nearly 5 seconds for the code to run.
FWIW, I have read through a mind-numbing number of posts across the web from other folks using ImageIO or ByteArrayOutputStreams or InputStreams… but with no success for me. urllib2 works, but it is very slow.
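For reference, here's a minimal sketch of the slow-but-correct urllib2 path I'm describing, assuming the script console and a placeholder URL (substitute the gif you're testing with):

import urllib2
from org.python.core.util import StringUtil

url = 'http://example.com/animation.gif'  # placeholder URL

response = urllib2.urlopen(url)
filebytes = response.read()                 # Jython str holding the raw response
filebytes = StringUtil.toBytes(filebytes)   # convert to a Java byte[] before writing

filepath = system.file.saveFile("animation.gif")
if filepath is not None:
	system.file.writeFile(filepath, filebytes)

The StringUtil.toBytes() conversion is what keeps the binary intact; httpGet presumably hands the response back as a plain string, which would explain the scrambling.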

Try running this in a script console and check out the file.

import urllib2

url = ''  # gif URL elided from the original post

# Slow-but-correct urllib2 path:
#response = urllib2.urlopen(url)
#filebytes = response.read()

# Fast httpGet path that scrambles the animation:
filebytes = system.net.httpGet(url)

if filebytes is not None:
	# Needed on the urllib2 path to turn the Jython str into a Java byte[]:
	#from org.python.core.util import StringUtil
	#filebytes = StringUtil.toBytes(filebytes)
	filepath = system.file.saveFile("peanutbutterjellytime.gif")
	if filepath is not None:
		system.file.writeFile(filepath, filebytes)

This is what httpGet gets you.

Huh, I've heard complaints that urllib2 is slower, but it only takes ~75ms to download that gif with urllib2 from my machine.
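For what it's worth, this is roughly how you can time it yourself in the script console (placeholder URL):

import urllib2
import time

url = 'http://example.com/animation.gif'  # placeholder URL

start = time.time()
data = urllib2.urlopen(url).read()
print '%d bytes in %.0f ms' % (len(data), (time.time() - start) * 1000)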

Strange. If I keep running the script multiple times, I will eventually get sub-1-second runs. But if I pause and try running again, I get a five-second delay with urllib2. This happens with any file from any site. httpGet consistently returns quickly.
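Here's a rough sketch for seeing that pattern in a single run, timing both calls back to back in the script console (placeholder URL):

import urllib2
import time

url = 'http://example.com/animation.gif'  # placeholder URL

for i in range(5):
	start = time.time()
	urllib2.urlopen(url).read()
	t_urllib2 = (time.time() - start) * 1000

	start = time.time()
	system.net.httpGet(url)
	t_httpget = (time.time() - start) * 1000

	print 'run %d: urllib2 %.0f ms, httpGet %.0f ms' % (i, t_urllib2, t_httpget)

If only the first urllib2 call is slow within a run, that would match the pause-and-rerun behavior above.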

I tried executing again quickly after cancelling the save dialog, and it returned with less than a 1-second delay.