This blog is an excerpt from the book chapter:
Web Analytics Overview. In Encyclopedia of Information Science and Technology, 3rd Edition.
There are two major methods to collect usage data: web server logging and page tagging.
Web server logging is a traditional method of usage data collection. A log file is generated by a web server to record server activities and HTTP headers in a textual format. There are various log file formats. The data most commonly logged in the NCSA Common Log Format (http://www.w3.org/Daemon/User/Config/Logging.html) are the remote host (client IP), date/time, HTTP request line, response status, and response size. The figure below shows an example of the Common Log Format as implemented in Apache Web Server 2.2. Additional data, such as HTTP headers, process ID, scripts, and request rewrites, can be logged in proprietary formats or in the Extended Log File Format (http://www.w3.org/TR/WD-logfile.html). Log analysis software can be used to extract and analyze log files. Popular tools are Analog (http://www.analog.cx), Deep Log Analyzer (http://www.deep-software.com), Webalizer (http://www.webalizer.org), and AWStats (http://awstats.sourceforge.net).
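To make the format concrete, here is a minimal sketch of parsing one Common Log Format line in Python. The regular expression and the sample line are illustrative; real log analyzers such as those listed above handle many more variations.

```python
import re

# Fields of the NCSA Common Log Format:
# remotehost rfc931 authuser [date] "request" status bytes
# The "-" placeholder appears where a field is unavailable.
CLF_PATTERN = re.compile(
    r'(?P<host>\S+) (?P<ident>\S+) (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<size>\d+|-)'
)

def parse_clf_line(line: str) -> dict:
    """Return the Common Log Format fields of one log line as a dict."""
    match = CLF_PATTERN.match(line)
    if match is None:
        raise ValueError("Line is not in Common Log Format")
    return match.groupdict()

# Illustrative sample line (the classic Apache documentation example).
sample = ('127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] '
          '"GET /apache_pb.gif HTTP/1.0" 200 2326')
fields = parse_clf_line(sample)
print(fields["host"], fields["status"], fields["size"])
```

A tool like AWStats applies essentially this kind of field extraction at scale, then aggregates the parsed records into reports on visits, pages, and status codes.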
A third method of data collection, application-level logging, has been on the rise lately. Application-level logging is tightly coupled with an application; it is a functional feature of the application itself. It expands traditional web analytics, which focuses on generic HTTP requests and user actions. An application can be a shopping site, a web portal, a blog service, a learning management system, a forum, or a social networking service. Each of these applications has its own unique usage data, collected beyond generic web requests or user actions. The usage data is processed by the application itself or by a functional module tightly coupled with it, rather than by independent logging or analytics services. For example, SharePoint 2010 provides framework-specific analytics data, such as usage of templates and web parts.
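As a hypothetical sketch of application-level logging, the snippet below shows a shopping-site application recording a domain-specific "add to cart" event that a generic HTTP log would never capture. The function name, event names, and field names are all illustrative assumptions, not drawn from any particular framework.

```python
import json
from datetime import datetime, timezone

def log_app_event(event_type: str, user_id: str, **details) -> str:
    """Serialize one application-specific event as a JSON line.

    Unlike a web server log entry, the record carries domain fields
    (product, quantity, etc.) that only the application knows about.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        "user": user_id,
        **details,  # application-specific fields beyond generic HTTP data
    }
    return json.dumps(record)

# Illustrative usage: a shopping site logs an add-to-cart action.
line = log_app_event("add_to_cart", "u42", product_id="sku-123", quantity=2)
print(line)
```

Because the application emits these records itself, they can be analyzed in terms the application understands (products, templates, web parts), which is exactly what generic request-level analytics cannot provide.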