I need to extract and process data (variably-sized binary messages) from a very large message log. Using the GIF example and the online documentation, I have defined and compiled the variably-sized message layout into msg_log.py. Calling msg_log.from_file("small_logfile") lets me inspect and verify field values for the first message in the logfile.
For small logfiles that fit in memory, how do I get msg_log.py to inspect the 2nd, 3rd, and subsequent messages in the log?
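The framing itself seems simple, so here is a minimal standard-library sketch of the iteration I have in mind (`iter_messages` is my own name, not part of Kaitai; it assumes, per the layout below, that the first byte of every message holds that message's total length, headers included):

```python
def iter_messages(data: bytes):
    """Yield (offset, raw_message) for each message in the buffer.

    Assumes the first byte of every message (the u1 "length" field
    of message_header below) is that message's total length.
    """
    pos = 0
    while pos < len(data):
        length = data[pos]  # u1 length prefix
        if length == 0 or pos + length > len(data):
            raise ValueError("corrupt length %d at offset %d" % (length, pos))
        yield pos, data[pos:pos + length]
        pos += length

# Two dummy messages, 45 and 50 bytes long:
log = bytes([45]) + bytes(44) + bytes([50]) + bytes(49)
offsets = [off for off, _ in iter_messages(log)]  # [0, 45]
```

Each raw slice could then be handed to the generated parser — something like a `from_bytes` classmethod or wrapping the slice in a KaitaiStream — but I am unsure which entry point the Python runtime expects, which is part of my question.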
For very large logfiles, I would expect to page the input through a byte buffer. I haven't done that yet, and I haven't found examples or discussion of how to go about it. How do I keep msg_log.py in sync with the paged byte buffer as its contents change?
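What I picture is reading fixed-size chunks and carrying any partial message over to the next read. A standard-library sketch of that buffering (the function name, `chunk_size`, and the one-byte length prefix are my assumptions, matching the layout below):

```python
import io

def iter_messages_stream(f, chunk_size=64 * 1024):
    """Yield each raw message from a file-like object without
    loading the whole log, carrying partial messages across reads.

    Assumes the first byte of every message is its total length.
    """
    buf = b""
    while True:
        chunk = f.read(chunk_size)
        buf += chunk
        while buf:
            length = buf[0]
            if length == 0:
                raise ValueError("corrupt zero-length message")
            if len(buf) < length:
                break  # partial message: need more input
            yield buf[:length]
            buf = buf[length:]
        if not chunk:  # EOF
            if buf:
                raise ValueError("log ends mid-message")
            return

# Two dummy messages, paged through a deliberately tiny buffer:
log = bytes([45]) + bytes(44) + bytes([50]) + bytes(49)
sizes = [len(m) for m in iter_messages_stream(io.BytesIO(log), chunk_size=7)]
```

An alternative I have seen mentioned for large files is mmap, which would let a parser seek anywhere in the file without explicit paging — but I don't know whether the Kaitai Python runtime can work directly on top of either approach.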
My message structure is currently defined as follows. (I have also tried "seq" instead of "instances", but could still only inspect the first message.)
meta:
  id: message
  endian: be
instances:
  msg_header:
    pos: 0x00
    type: message_header
  dom_header:
    pos: 0x06
    type: domain_header
  body:
    pos: 0x2b
    size: msg_header.length - 43
types:
  message_header:
    seq:
      - id: length
        type: u1
      # <other fixed-size fields - 5 bytes>
  domain_header:
    seq:
      # <fixed-size fields - 37 bytes>
  message_body:
    seq:
      - id: body
        size-eos: true
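From the documentation I wonder whether the right direction is instead a top-level `seq` that repeats the message type until end of stream, something like this sketch (untested against my data; `msg_log` and `messages` are names I made up, the sub-types are the ones defined above):

```yaml
meta:
  id: msg_log
  endian: be
seq:
  - id: messages
    type: message
    repeat: eos
types:
  message:
    seq:
      - id: msg_header
        type: message_header
      - id: dom_header
        type: domain_header
      - id: body
        size: msg_header.length - 43
  # message_header and domain_header as defined above
```

If that is the intended pattern, I would still like to know how it interacts with the paged-buffer question above.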