JSON: Best Practices for Logging

Last Updated: Oct 16, 2013 03:11PM PDT

If you're like most people, you probably don't enjoy wasting time scouring log files. Even though Loggly makes log files fun, we want to help you get more out of your logs without even looking at them. This is where structured data comes in. Many of our users have made the transition to JSON and aren't going back!

Here's an example. Say this is a portion of your log file:

Hoover, 29, 251 Kearny Street, San Francisco, CA, 2012-09-29
Teton, 21, 123 Great Avenue, Teton, ID, 2012-09-29

Say you want to figure out how many 29-year-olds are from San Francisco. This may look familiar to people who are used to dealing with unstructured logs:

$ grep 29 file.log | cut -d, -f4 | sort | uniq -c | sort -nr

With the above approach, you'll also match entries where "29" appears anywhere in the line, including the dates (both example lines end in -29), so you end up needing an even more complicated command.
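Here's a sketch of what that more complicated command might look like, using awk to match "29" only in the second (age) field, assuming the comma-and-space-separated layout shown above:

$ awk -F', ' '$2 == "29" { print $4 }' file.log | sort | uniq -c | sort -nr

Even this version breaks as soon as the line layout changes, which is exactly the fragility structured logging avoids.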

If you're sending JSON data to Loggly, you can instead perform a search for city:"san francisco", choose the age field name from the "Filter by Field" section, and you'll see a count for each of the unique values.
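For reference, the first example line above might look like this as a JSON event (the field names here are illustrative, not required by Loggly):

{ "name": "Hoover", "age": 29, "address": "251 Kearny Street", "city": "San Francisco", "state": "CA", "date": "2012-09-29" }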


Getting Started

You'll need to convert your plain text logs into JSON. This is usually straightforward. Within your Apache configuration file (httpd.conf), set up a custom logging format. Here are a couple of examples:

Common Log Format:

LogFormat "{ \"remoteHost\":\"%h\", \"remoteLogname\":\"%l\", \"user\":\"%u\", \"time\":\"%t\", \"request\":\"%r\", \"status\":\"%>s\", \"size\":%b }" jsonlog

CustomLog logs/access_log jsonlog
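Note that %b is quoted above: Apache logs a "-" when no bytes are sent, which would be invalid JSON if left unquoted. With this format, a logged event might look something like this (values are illustrative):

{ "remoteHost":"127.0.0.1", "remoteLogname":"-", "user":"frank", "time":"[16/Oct/2013:15:11:02 -0700]", "request":"GET /index.html HTTP/1.1", "status":"200", "size":"2326" }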

 

NCSA extended/combined log format:

LogFormat "{ \"remoteHost\":\"%h\", \"remoteLogname\":\"%l\", \"user\":\"%u\", \"time\":\"%t\", \"request\":\"%r\", \"status\":\"%s\", \"size\":\"%b\", \"referer\":\"%{Referer}i\", \"userAgent\":\"%{User-agent}i\" }" jsonlog

CustomLog logs/access_log jsonlog

A custom format with additional request details:

LogFormat "{ \"time\":\"%t\", \"remoteIP\":\"%a\", \"host\":\"%V\", \"request\":\"%U\", \"query\":\"%q\", \"method\":\"%m\", \"status\":\"%>s\", \"userAgent\":\"%{User-agent}i\", \"referer\":\"%{Referer}i\" }" jsonlog

CustomLog logs/access_log jsonlog
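After restarting Apache, it's worth confirming that each logged line really is valid JSON. One quick way to check (assuming Python is installed and the path matches your CustomLog directive) is to run the latest entry through a JSON parser:

$ tail -n 1 logs/access_log | python -m json.tool

If the line parses, the formatted object is printed back; if not, you'll get an error pointing at the offending character.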

 

We also have instructions for sending Apache logs over HTTPS.

Using Your Data

Now that you have JSON data within Loggly, you may never want to look at the raw log events again. You'll be able to take advantage of Loggly's many field-aware features, such as Trends, Filter by Field, and Top Values (unique values).

Watch Your Field Names

For every JSON field name in your events, Loggly adds that name to the list of facets, but only up to a limit. The limit is determined dynamically, but if you keep the number of unique field names below 150, all events should be indexed successfully. This protects you from an excessively high number of JSON fields appearing for display. If you do hit the limit, some field-value pairs in your events may not be visible when the event is expanded.

To avoid hitting the limit, keep your unique set of JSON field names as small as possible. For example, JSON events like the following may result in a very large number of unique fields, one for each possible error code:
 

{
    "serverfault5234": "Disk full"
}

Instead, generate JSON like this, so there are at most three unique fields:
 
{
    "event": "serverfault",
    "code": 5234,
    "reason": "Disk full"
}
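This structure pays off outside Loggly as well. For example, if you keep a local copy of these events as newline-delimited JSON (events.log here is a hypothetical file), a quick jq sketch can count failures by reason:

$ jq -r 'select(.event == "serverfault") | .reason' events.log | sort | uniq -c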

Regardless of whether you hit the limit, all event text is fully indexed for search, so you can always find your events via free-text search.
 

 
