About Plesk Statistics Tool

Plesk Statistics is a tool that calculates disk space and traffic usage on a per-domain basis. This information is available to end users, resellers, and the provider. The statistics tool does not merely gather informative figures: it also automatically suspends subscriptions that exceed the configured resource usage limits. The calculation is performed by the statistics utility, which is launched by a script scheduled to run daily. On Linux, the following cron job takes care of the task:



/usr/local/psa/admin/plib/DailyMaintainance/script.php >/dev/null 2>&1

Look for it in /etc/cron.daily/50plesk-daily.
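A quick way to verify this on a given server is sketched below. Both paths are standard Plesk locations, but the guards make the snippet safe to run anywhere, including non-Plesk machines:

```shell
# Check for the Plesk daily cron job and the statistics utility.
# On a non-Plesk machine the guards simply report that nothing was found.
if [ -f /etc/cron.daily/50plesk-daily ]; then
    cat /etc/cron.daily/50plesk-daily        # shows the daily maintenance command
else
    echo "no Plesk daily cron job found"
fi

stats=/usr/local/psa/admin/sbin/statistics
if [ -x "$stats" ]; then
    echo "statistics utility present: $stats"
else
    echo "not a Plesk server: $stats not found"
fi
```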

On Windows, it is the task named “Daily script task” in the Task Scheduler. This daily maintenance script performs many functions, such as:

1) Suspending subscriptions that overuse resources.

2) Checking for Plesk updates.

Running the statistics utility is one of them. For every domain, the utility does the following, in turn:

1) Calculates disk usage and writes it to the Plesk database (the disk_usage and domains tables).

2) Parses mail and FTP logs to calculate SMTP/POP3/IMAP/FTP traffic usage for the domain.

3) Processes the data from the web server logs.
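The figures written in step 1 can be inspected directly on the server. A hedged example: `plesk db` passes a query to the Plesk database, but the disk_usage column names below are from memory and may differ between Plesk versions; the guard keeps the snippet safe on machines without the Plesk CLI.

```shell
# Inspect the per-domain disk usage figures the statistics utility wrote.
# NOTE: column names (httpdocs, logs, dbases) are an assumption and may
# vary between Plesk versions.
if command -v plesk >/dev/null 2>&1; then
    plesk db "SELECT dom_id, httpdocs, logs, dbases FROM disk_usage"
else
    echo "plesk CLI not available on this machine"
fi
```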

Any user can see their domains’ web server logs from the Plesk interface by using the built-in file manager. On Linux, the web server logs are located in the $HTTPD_VHOSTS_D/system/<domain_name>/logs/ directory. On Windows, they are located in the logs/ directory of the domain’s virtual host directory.

On Linux the processing of the Apache logs consists of the following steps:

1) The statistics utility reads the data from access_log, proxy_access_log, access_ssl_log, and proxy_access_ssl_log files and writes it to the corresponding *_log.stat and *_log.webstat files (i.e. the data from access_log and proxy_access_log goes in access_log.stat and access_log.webstat, and the data from *_ssl_log files goes in access_ssl_log.stat and access_ssl_log.webstat).

2) It then writes the data from the *.stat files into the corresponding *.processed files (e.g. the data from access_log.stat goes into access_log.processed), then sorts the contents. The *.stat files are removed afterwards.

3) Parses the *.processed files to calculate web server traffic, then calls the log rotation utility, which cleans up the *.processed logs according to the domain’s log rotation settings. This explains why there are no provisions for rotating individual access log files in the logrotate configs (found in /usr/local/psa/etc/logrotate.d/<domain.tld>) – they never grow to any noticeable size, because the information is moved to the .processed logs, and those are rotated instead.

4) Creates hard links in the $HTTPD_VHOSTS_D/<domain_name>/logs/ directory pointing to the actual logs stored in $HTTPD_VHOSTS_D/system/<domain_name>/logs/. This mechanism lets end users see and manage the logs for their domain(s), but keeps Apache from breaking if a user deletes the /logs directory in their web space: Apache would be unable to recreate it, as the directory is owned by root:root.

5) Writes the obtained traffic data to the Plesk database (the DomainsTraffic and ClientsTraffic tables).

6) Calls the web statistics engine (either Webalizer or AWStats, depending on the domain’s settings). It processes the .webstat files to generate an HTML representation of the traffic data, available to customers in the Web Statistics menu, then erases the contents of the .webstat files.
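Step 4 relies on how hard links behave: two directory entries point at the same inode, so deleting the user-visible entry does not remove the real log file. A self-contained sketch (using a throwaway temporary directory, not a real Plesk layout) demonstrates this:

```shell
# Recreate the layout from step 4 in a temporary directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/system/example.com/logs" "$tmp/example.com/logs"

# The "real" log, as the web server would write it:
echo 'GET / 200' > "$tmp/system/example.com/logs/access_log"

# The hard link the statistics utility would create for the user:
ln "$tmp/system/example.com/logs/access_log" "$tmp/example.com/logs/access_log"

# The user deletes their copy -- only one directory entry goes away:
rm "$tmp/example.com/logs/access_log"

# The actual log file is untouched:
cat "$tmp/system/example.com/logs/access_log"    # prints: GET / 200

rm -rf "$tmp"
```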

On Windows the process is much simpler. When it comes to the web server logs, the statistics utility does the following:

1) Calculates traffic based on the data in the IIS log and writes it to the Plesk database.

2) Writes the time it ran to the registry.

3) Generates a separate log and a configuration file. These are used by the web statistics engine (Webalizer or AWStats). The generation of web statistics is handled by a scheduled task named “Daily web statistics analyzers run task”, and the temporary log is removed after the statistics calculation is done.

The next time the utility runs, it connects to the registry to get the date and time of its last execution, then processes the data that has accumulated in the IIS log since that time, writes it to the web statistics log, and so on. Log rotation on Windows is implemented by standard IIS means, so it does not depend on the utility running. Ideally, the process goes smoothly every time. In practice, however, the statistics engine requires a fairly large amount of memory to operate and is a prime target for the OOM killer. Logs are not rotated for domains whose statistics processing was terminated before the web server logs were processed (as there are no processed logs to rotate). The algorithm is designed this way to avoid data loss.
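The registry bookmark boils down to a simple idea: remember how far you got, and process only what is new on the next run. A generic sketch of that idea (not Plesk's actual code; the file names are made up, and a state file stands in for the registry value):

```shell
# Generic incremental log processing: a state file stores how many lines
# were handled on the previous run; the next run picks up from there.
tmp=$(mktemp -d)
log="$tmp/server.log"        # stands in for the IIS log
state="$tmp/last_line"       # stands in for the registry value
printf 'req1\nreq2\n' > "$log"
echo 0 > "$state"

process_new() {
    last=$(cat "$state")
    total=$(wc -l < "$log")
    tail -n "+$((last + 1))" "$log"   # only lines added since the last run
    echo "$total" > "$state"
}

process_new                  # first run: prints req1 and req2
printf 'req3\n' >> "$log"
process_new                  # second run: prints only req3
rm -rf "$tmp"
```

If the run is interrupted before the state file is updated, the same lines are simply reprocessed next time, which is the same data-loss-avoidance trade-off described above.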


If you need any further assistance please contact our support department.