There is a nice little utility called wget that is useful for downloading files directly from the command line. For example, if you want to download the Google logo image that appears on the Google search page (http://www.google.com/intl/en/images/logo.gif), do the following:

$ wget http://www.google.com/intl/en/images/logo.gif

and you will see progress messages like the ones below:
--23:08:07--  http://www.google.com/intl/en/images/logo.gif
           => `logo.gif'
Resolving www.google.com... 22.214.171.124, 126.96.36.199
Connecting to www.google.com|188.8.131.52|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 8,558 (8.4K) [image/gif]

100%[====================================>] 8,558       --.--K/s

23:08:08 (148.57 KB/s) - `logo.gif' saved [8558/8558]
and the file will be saved in the current directory. This is particularly useful when you want a program to download files to the computer automatically.
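For instance, a small shell script can loop over a list of URLs and fetch each one unattended. The sketch below is illustrative: the URL list and the DRY_RUN flag are assumptions, and by default it only prints the wget command it would run; clear DRY_RUN to actually download.

```shell
#!/bin/sh
# Sketch of an unattended download loop; the URL list and the
# DRY_RUN flag are illustrative. Set DRY_RUN= to actually fetch.
DRY_RUN=yes
for url in \
    "http://www.google.com/intl/en/images/logo.gif"
do
    file=${url##*/}    # local filename: everything after the last '/'
    cmd="wget --output-document=$file $url"
    if [ -n "$DRY_RUN" ]; then
        echo "$cmd"    # show what would run
    else
        $cmd           # actually download the file
    fi
done
```

A script like this can be run from cron to keep local copies of files up to date.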
The man page of wget gives more information on how to use it creatively. For example, to download Google's image and save it as google.gif instead of logo.gif:

$ wget http://www.google.com/intl/en/images/logo.gif --output-document=google.gif

Or, if you want to download a whole site:
$ wget --wait=20 --limit-rate=20K -r -p -U Mozilla http://sitename.com
(For large sites, this can take a long time and may fill up a decent chunk of your hard drive. But it will work.)
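To keep such a crawl from wandering too far, wget's --level (maximum recursion depth) and --no-parent (never ascend above the starting directory) options help. The wrapper below is only a sketch: the function name, the depth of 2, and the echo-based dry run are assumptions, and the long options are simply the spelled-out forms of -r, -p, and -U used above.

```shell
#!/bin/sh
# Sketch: wrap the mirror invocation so the crawl depth is easy to tune.
# The function name and defaults are illustrative, not part of wget.
mirror_site() {
    # $1 = site URL, $2 = maximum recursion depth
    echo wget --wait=20 --limit-rate=20K \
         --recursive --level="$2" \
         --page-requisites --no-parent \
         --user-agent=Mozilla "$1"
    # Remove the leading "echo" above to actually run the crawl.
}
mirror_site http://sitename.com 2
```

--wait and --limit-rate stay in place so the crawl remains polite to the remote server even when the depth is raised.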