How To Download Files From Linux Command Line

Posted on January 5, 2017, at 4:30 pm by Jithin

In this tutorial, we will learn how to download files from the Linux command line.

Wget is part of the GNU Project; the name is derived from World Wide Web (WWW). It is a free, user-friendly command-line downloader for Linux and UNIX environments, used primarily for non-interactive downloads from the web. Wget supports downloads over HTTP, HTTPS, and FTP, and is available on most platforms, including Windows, Mac, and Linux. It can mirror an entire website for offline viewing, download recursively, work through proxies, and pause and resume downloads. With features such as resumable downloads, bandwidth control, and authentication handling, it is powerful and versatile enough to match some of the best graphical downloaders available today.

 

Installation of Wget

Wget, being a GNU project, comes bundled with most standard Linux distributions, so there is usually no need to download and install it separately. If it is not installed by default, you can install it using yum or apt.

On a Red Hat based system such as Fedora, you can use the following command to install Wget on your machine.

$ sudo yum install wget

If you are using a Debian based system like Ubuntu, you can use the following command.

$ sudo apt-get install wget

 

Basic Usage of Wget

1) To download a single file using Wget, use the following command.

$ wget http://www.website-name.com/file

 

2) If you want to download a file and save it under a different name, use the following command.

$ wget -O [Preferred_Name] http://www.website-name.com/file

The above command saves the file under whatever name you supply in place of [Preferred_Name].
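For example, assuming a hypothetical URL and file name:

```shell
# Save the downloaded file as archive.zip instead of its original name.
# Both the URL and the name archive.zip are placeholders for illustration.
wget -O archive.zip http://www.website-name.com/file
```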

 

3) To download a whole website recursively, use the following command.

$ wget -r http://www.website-name.com
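In practice, -r is often combined with a few other wget options to keep the download contained and usable offline. A hedged example, with a placeholder URL:

```shell
# Recursive download of one section of a site for offline viewing.
# --no-parent     : do not ascend above the starting directory
# --convert-links : rewrite links so the local copy browses offline
# --level=2       : limit recursion depth (wget's default is 5)
wget -r --no-parent --convert-links --level=2 http://www.website-name.com/docs/
```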

 

4) To limit the download speed, use the following command.

$ wget --limit-rate=[VALUE] http://www.website-name.com
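The VALUE is a number of bytes per second, optionally with a k (kilobytes) or m (megabytes) suffix. For example, with a placeholder URL:

```shell
# Cap the transfer rate at 200 KB/s so the download does not
# saturate the connection. The URL is a placeholder.
wget --limit-rate=200k http://www.website-name.com/file
```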

 

5) To download specific file types, such as PNG and PDF, from a website, use the following command.

$ wget -r -A png,pdf http://www.website-name.com

 

6) To resume a stopped or interrupted download, use the following command.

$ wget -c http://www.website-name.com

The above command resumes the download from the point where it stopped earlier.

 

7) To continue the download process in the background, use the following command.

$ wget -b http://www.website-name.com

When downloading a huge file, you may prefer to run the download in the background and keep using the shell prompt. To do this, execute wget with the -b option, as shown above; wget then writes its progress to a file named wget-log in the current directory.

You can check the download progress by viewing the contents of the wget-log file with the tail command as follows:

$ tail -f wget-log

 

8) To customize the number of retry attempts, use the following command.

$ wget --tries=[DESIRED_VALUE] http://www.website-name.com

By default, if the internet connection is lost or disrupted, wget makes up to 20 attempts to reconnect and complete the download. The --tries option lets you change this number: replace DESIRED_VALUE with the number of retries you want wget to make when connectivity is interrupted.
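For example, on a flaky connection you might raise the retry count and pair it with -c so each retry resumes rather than restarts. The URL and retry count below are placeholders:

```shell
# Retry up to 50 times (wget's default is 20), resuming the
# partial file on each attempt instead of starting over.
wget --tries=50 -c http://www.website-name.com/file
```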

 

If you need any further assistance please contact our support department.

 

 
