
I am currently using the Python structlog JSONRenderer and would like to change my log configuration so that the event is rendered as the first JSON attribute, for better readability.

Current Configuration:

structlog.configure(processors=[structlog.processors.JSONRenderer()])
log = structlog.get_logger()

Current log call site:

log.msg("Response: ",
        content_type=content_type,
        content_length=resp.headers.get('content-length'),
        status_code=resp.status_code)

Current Output:

{"content_type": "application/json", "content_length": null, "status_code": 200, "event": "Response: "}

Desired Output:

{"event": "Response: ", "content_type": "application/json", "content_length": null, "status_code": 200}

Any assistance would be greatly appreciated.

Patrick Bray
  • What's the problem? The order of the key-value pairs really doesn't matter in JSON or dictionaries. (kwargs are passed as dictionaries, hence are not ordered.) – Abdul Aziz Barkat Jan 26 '21 at 04:37
  • Yeah, the order of the key-value pairs shouldn't really matter once the logs are imported into a logging tool; it's just handy to have the log message output first, to reduce the impact of structured logging when running locally and reading the output by hand. Thinking I am probably better off using a separate, more human-readable format for this going forward, though. – Patrick Bray Jan 26 '21 at 05:17

3 Answers


structlog.processors.JSONRenderer simply passes the event dict to json.dumps, unless you specify another serializer callable:

structlog.configure(processors=[structlog.processors.JSONRenderer(serializer=mydumps)])

mydumps is then a function that does what json.dumps does but puts event first. It could look like this:

import json

def mydumps(dic, **kw):
    # Build a new dict with "event" inserted first, then copy over
    # the remaining keys in their original order.
    mod = {}
    if "event" in dic:
        mod["event"] = dic["event"]
    for k in dic:
        if k != "event":
            mod[k] = dic[k]
    return json.dumps(mod, **kw)

It makes a new dict, looks for the event key in the input and inserts it first, then copies the remaining keys over, and finally passes the result along with **kw to json.dumps.
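This works because json.dumps serializes keys in the dict's insertion order (guaranteed for plain dicts since Python 3.7), so building a fresh dict with event first is all that is needed. A minimal standalone check:

```python
import json

# json.dumps keeps the dict's insertion order, so inserting "event"
# into a fresh dict before the other keys makes it the first JSON key.
reordered = {"event": "Response: "}
reordered.update({"content_type": "application/json", "status_code": 200})
print(json.dumps(reordered))
# → {"event": "Response: ", "content_type": "application/json", "status_code": 200}
```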

Note that this way you do not need to specify beforehand which other keys your logs might have (like content_type), since each event may carry different fields.

kdcode

It looks like you might be using a Python version older than 3.6; from 3.6 onward, dicts keep keys in insertion order (guaranteed as of 3.7). You can use KeyValueRenderer to set the key order and use OrderedDict as the context_class:

from collections import OrderedDict

structlog.configure(
    processors=[
        structlog.processors.KeyValueRenderer(
            key_order=["event", "content_type", "content_length", "status_code"]
        ),
        structlog.processors.JSONRenderer()
    ],
    context_class=OrderedDict,
)
log = structlog.get_logger()

Reference: KeyValueRenderer
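The effect of key_order on the event dict can be sketched in plain Python (a rough approximation for illustration only, not structlog's actual implementation — structlog also handles missing keys and sort_keys):

```python
def reorder(event_dict, key_order):
    # Keys named in key_order come first (absent ones are skipped here),
    # followed by the remaining keys in their original order.
    out = {k: event_dict[k] for k in key_order if k in event_dict}
    out.update((k, v) for k, v in event_dict.items() if k not in out)
    return out

print(reorder(
    {"status_code": 200, "event": "Response: ", "content_type": "application/json"},
    ["event", "content_type", "content_length", "status_code"],
))
# → {'event': 'Response: ', 'content_type': 'application/json', 'status_code': 200}
```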

Abdul Aziz Barkat

I merged and improved upon both answers by re-using what is already in structlog.

The accepted answer uses a custom JSON dump function, which I wanted to avoid; the other solution has the side effect of rendering the log from a dict to a string before JSON encoding, losing a lot of its usefulness.

A complete example that works both in a local shell and on a remote server:

import collections
import logging
import sys

import orjson
import structlog
from structlog.typing import EventDict, WrappedLogger


class ForcedKeyOrderRenderer(structlog.processors.KeyValueRenderer):
    """Based upon KeyValueRenderer but returns a dict instead of a string."""

    def __call__(
        self, _: WrappedLogger, __: str, event_dict: EventDict
    ) -> EventDict:
        # KeyValueRenderer's private _ordered_items helper applies key_order,
        # sort_keys and drop_missing; keep the ordered pairs as a dict
        # instead of joining them into a "key=value" string.
        return collections.OrderedDict(self._ordered_items(event_dict))


shared_processors = [
    structlog.contextvars.merge_contextvars,
    structlog.processors.add_log_level,
    structlog.processors.TimeStamper(fmt="%Y-%m-%d %H:%M:%S", utc=True),
]

if sys.stderr.isatty():
    processors = shared_processors + [
        structlog.processors.StackInfoRenderer(),
        structlog.dev.set_exc_info,
        structlog.dev.ConsoleRenderer(),
    ]
    logger_factory = None
else:  # pragma: no cover
    processors = shared_processors + [
        structlog.processors.format_exc_info,
        structlog.processors.dict_tracebacks,
        ForcedKeyOrderRenderer(
            sort_keys=True,
            key_order=[
                "event",
                "content_type",
                "content_length",
                "status_code",
            ],
            drop_missing=True,
        ),
        structlog.processors.JSONRenderer(serializer=orjson.dumps),
    ]
    logger_factory = structlog.BytesLoggerFactory()

if not structlog.is_configured():
    structlog.configure(
        cache_logger_on_first_use=True,
        wrapper_class=structlog.make_filtering_bound_logger(logging.INFO),
        processors=processors,
        logger_factory=logger_factory,
    )

logger = structlog.get_logger()

Drachenfels