Are you taking feature requests? I often find a time-based accounting graph of log events, after optionally filtering them, useful: just a simple count of how frequently messages occur over each span of time.
I already have a Python script which ingests wrapper logs from stdin, looks at the timestamp on each line, sorts them, counts how many occur in each 5-minute bucket, and outputs a CSV of bucket timestamps and counts. I usually then open that up in Excel and graph it. This is neat since I can grep to filter beforehand, like this:
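Not the exact script, but a minimal sketch of that approach, assuming the timestamp is the first two whitespace-separated fields of each wrapper log line (the format string is an assumption):

```python
import sys
import csv
from datetime import datetime

BUCKET_SECONDS = 5 * 60  # 5-minute buckets

def bucket_counts(lines, ts_format="%Y/%m/%d %H:%M:%S"):
    """Count log lines per 5-minute bucket, keyed by the bucket's start epoch."""
    counts = {}
    for line in lines:
        # Assumes a timestamp like "2022/02/12 14:10:03 | ..." starts the line.
        try:
            ts = datetime.strptime(" ".join(line.split()[:2]), ts_format)
        except (ValueError, IndexError):
            continue  # skip lines without a parseable timestamp
        epoch = int(ts.timestamp())
        bucket = epoch - (epoch % BUCKET_SECONDS)
        counts[bucket] = counts.get(bucket, 0) + 1
    return counts

if __name__ == "__main__":
    writer = csv.writer(sys.stdout)
    writer.writerow(["bucket_start_epoch", "count"])
    for bucket, count in sorted(bucket_counts(sys.stdin).items()):
        writer.writerow([bucket, count])
```

Piped together with grep it would look like `grep -i error wrapper.log | python buckets.py > counts.csv`.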
Hm, can you show a screenshot of what the end result looks like? It would definitely be possible (I’d just need to pull in JFreeChart, probably), but I’m not sure I have the right idea of what the end product you’re looking for is.
Yeah, that would definitely be possible. I’ll probably remove all the automatic file associations from the installer and let anyone set them up themselves as they want to.
For each 5-minute interval there is a timestamp of when it starts, and then a count of how many log entries fell in that time period (after filtering).
The graphical output would ideally be like a time trend chart from Perspective/Vision. I’ve been too lazy to figure out how to get datetimes nicely on my X axis, so I instead convert them to Unix timestamps, but my graphs look like this from my spreadsheet software:
From this I can notice spikes, maybe notice patterns, and find interesting time periods to look at the log more closely. Like in this one, I might be curious why I had over 90 log messages in 5 minutes roughly around time 1644675000 (February 12, 2022 2:10:00 PM). (I use https://www.epochconverter.com/ quite a lot…)
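For what it’s worth, that epoch lookup can also be done inline with Python’s `datetime` module instead of the website; a quick sketch using the timestamp mentioned above:

```python
from datetime import datetime, timezone

# 1644675000 is the epoch second called out above.
dt = datetime.fromtimestamp(1644675000, tz=timezone.utc)
print(dt.strftime("%B %d, %Y %I:%M:%S %p UTC"))  # February 12, 2022 02:10:00 PM UTC
```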
Would you be able to add an option to select between GMT and local?
Would you be able to add an “export to CSV” option, or export to Excel? Just an FYI: in the script I wrote to parse wrappers, I replaced all double quotes with single quotes so that I could safely wrap each row cell in double quotes (unless of course you use an obscure delimiter). I assume if you export to Excel directly, though, you wouldn’t need to do that.
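As a side note, the quote substitution isn’t strictly necessary if the CSV writer handles quoting: Python’s `csv` module escapes an embedded double quote by doubling it, per the RFC 4180 convention. A small sketch:

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_ALL)
# The embedded double quotes are escaped by doubling them, per RFC 4180.
writer.writerow(['2022-02-12 14:10:00', 'message with "quotes" inside'])
print(buf.getvalue().strip())
# "2022-02-12 14:10:00","message with ""quotes"" inside"
```

Excel (and most other spreadsheet tools) reads the doubled quotes back correctly.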
Haha nice. My reasoning for this is so that we can more easily track using Excel which issues we’ve resolved and which we’re still looking into, and add additional columns etc.
The two timestamps are always different in my wrappers: the main one that includes the date is local, and the one in the log comment is GMT and only has the time part. That’s a bit odd, especially with me being +9/10:30 different; the local date is very likely to differ from the GMT date (assuming, of course, that the gateway timezone is set to local time and not GMT).
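To illustrate the date mismatch, a sketch assuming a +10:30 local offset (one of the offsets mentioned above, used purely for illustration): any local time before 10:30 AM falls on the previous GMT date.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical local offset of +10:30 for illustration.
local_tz = timezone(timedelta(hours=10, minutes=30))

# An early-morning local time: the corresponding GMT date is the day before.
local = datetime(2022, 2, 13, 8, 0, tzinfo=local_tz)
gmt = local.astimezone(timezone.utc)
print(local.date(), gmt.date())  # 2022-02-13 2022-02-12
```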
Adds log ‘density’ display to scrollbar in log viewer (thanks for the idea, @justin.brzozoski)
This is an experimental/incubating feature. Currently, logs are aggregated into 2-minute buckets and rendered in the scrollbar:
By dumb luck, the recent log I grabbed off one of our gateways to try out the new Kindling had one huge spike in density. I didn’t know that going in and was kinda confused why the graph was mostly empty when I opened it.
I’ve noticed similar things a few times; I might try to get fancier and do a log scale horizontally, or cap the absolute magnitude. What I’m doing is basically manually rendering a graph and then scaling it down to fit inside the scrollbar, so when there’s lots of detail you can get some very faint lines.
I’ll probably add time-slice selection in the column settings in the next release - maybe 1, 2, 5, 10 minutes or something like that.
I suspect you also know this, but it’s difficult to tell where the log lines currently in the view exist on the time graph. The scrollbar is linear with number of lines, the time graph is linear with time, and they don’t align. I briefly (instinctively) tried to move the scrollbar on top of the spike, then realized that wasn’t going to work. I’m not sure what simple solutions exist for this issue. Maybe just highlight where the currently viewable log lines are on the time graph in a second color? But that would require updating/refreshing your scaled graphic, so it might be more processing than it’s worth.
I don’t know what the good solution is here. Something to ponder for now.
I like Kindling a lot. Thank you for putting it together.
Hi @PGriffith is there a way to use Kindling to help us understand quarantine.xml files?
I often find things have been quarantined, but opening the file doesn’t help whatsoever as it is all encrypted. I just want to know what table in the database it is failing with!
Ideally also why it is failing…
Or do I need to get a different file for this analysis?
The quarantine file is not generally very helpful (it’s missing too much information) but if you zip up the entire folder in the datacache directory in the install directory, you can open that zip using the ‘S+F Cache Zip File’ tool:
That should hopefully give you some insight into what’s going wrong.