Even though you're unfamiliar with coding, this should still do what you want. Of course, you're more than welcome to wait for another user to post a GUI solution.
Simply copy and paste the following code into a text document and save it as dlimgs.py. I recommend making a new folder in your home directory called bin and saving it there.
#!/usr/bin/env python
import sys, urllib2, re

def main(url):
    page = getpage(url)
    # Skip everything before the article body so the header images are ignored
    start = page.find('articlebody')
    page = page[start:]
    lines = page.split('\n')
    for l in lines:
        if ('<img' in l) and ('.jpg' in l):
            matches = re.search(r".*<img.*'(.*\.jpg)'.*", l)
            if matches is None:
                continue
            img = matches.group(1)
            name = img[img.rfind('/') + 1:]
            print 'Downloading: ' + name
            img = getpage(img)
            # Write in binary mode since the data is image bytes, not text
            with open(name, 'wb') as f:
                f.write(img)

def getpage(url):
    # Send a browser-like User-Agent so the site doesn't reject the request
    user_agent = 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0'
    headers = {'User-Agent': user_agent}
    req = urllib2.Request(url, None, headers)
    response = urllib2.urlopen(req)
    return response.read()

if __name__ == '__main__':
    main(sys.argv[1])
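The script above targets Python 2 (urllib2 was removed in Python 3, where its functionality moved to urllib.request, and print became a function). If your system only ships Python 3, a roughly equivalent sketch might look like the following; the extract_jpg_urls helper name is mine, not part of the original script:

```python
#!/usr/bin/env python3
# Python 3 sketch of the same idea: urllib2's functionality lives in
# urllib.request in Python 3, and print is a function.
import re
import sys
import urllib.request

USER_AGENT = ('Mozilla/5.0 (X11; Ubuntu; Linux x86_64; '
              'rv:31.0) Gecko/20100101 Firefox/31.0')

def getpage(url):
    # Fetch a URL with a browser-like User-Agent; returns raw bytes
    req = urllib.request.Request(url, None, {'User-Agent': USER_AGENT})
    with urllib.request.urlopen(req) as response:
        return response.read()

def extract_jpg_urls(html):
    # Same single-quoted <img ... '...jpg'> pattern as the original script
    urls = []
    for line in html.split('\n'):
        if '<img' in line and '.jpg' in line:
            m = re.search(r"<img.*'(.*\.jpg)'", line)
            if m:
                urls.append(m.group(1))
    return urls

def main(url):
    page = getpage(url).decode('utf-8', errors='replace')
    # Skip everything before the article body, as in the original
    start = page.find('articlebody')
    for img_url in extract_jpg_urls(page[start:]):
        name = img_url[img_url.rfind('/') + 1:]
        print('Downloading: ' + name)
        with open(name, 'wb') as f:
            f.write(getpage(img_url))

if __name__ == '__main__' and len(sys.argv) > 1:
    main(sys.argv[1])
```

You would invoke it the same way, just with python3 instead of python.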
Then open a terminal with Ctrl+Alt+T and do the following:

- Move to where you saved it using the cd command (Example: cd ~/bin)
- Invoke the script with python dlimgs.py <url>
It will download all the images and save them in the ~/bin folder. Note that this was written specifically for the website you supplied in the question, so it will skip the header images at the top of the page. It will probably throw errors for other websites. One more note: it will overwrite any images with the same filenames in the ~/bin dir, so be careful.
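If you're worried about clobbering existing files, one option (not part of the original script) is to pick a free filename before writing; unique_name here is a hypothetical helper you could drop into either version of the script:

```python
import os

def unique_name(name):
    # Hypothetical helper: return name unchanged if nothing exists at that
    # path, otherwise append a counter before the extension
    # (cat.jpg -> cat-1.jpg, cat-2.jpg, ...)
    if not os.path.exists(name):
        return name
    base, ext = os.path.splitext(name)
    i = 1
    while os.path.exists('%s-%d%s' % (base, i, ext)):
        i += 1
    return '%s-%d%s' % (base, i, ext)
```

Then change the open(name, 'wb') line to open(unique_name(name), 'wb').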