For each 5-minute interval there is a timestamp of when it starts, followed by a count of how many log entries fell in that period (after filtering).
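Purely as illustration, that bucketing step is something like this (a throwaway Kotlin sketch; the function name and the epoch-millisecond input are my own assumptions, nothing from Kindling itself):

```kotlin
import java.time.Instant

// Group log-entry timestamps (epoch milliseconds) into 5-minute buckets and count
// how many entries landed in each bucket. Hypothetical helper, not Kindling's API.
fun bucketCounts(timestamps: List<Long>, bucketMillis: Long = 5L * 60 * 1000): Map<Long, Int> =
    timestamps.groupingBy { (it / bucketMillis) * bucketMillis }.eachCount()

fun main() {
    val entries = listOf(1_644_674_700_000L, 1_644_674_760_000L, 1_644_675_010_000L)
    bucketCounts(entries).toSortedMap().forEach { (start, count) ->
        println("${Instant.ofEpochMilli(start)}  $count")
    }
}
```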
The graphical output would ideally be like a time trend chart from Perspective/Vision. I’ve been too lazy to figure out how to get datetimes nicely on my X axis, so instead I groom them into Unix timestamps, but my graphs from my spreadsheet software look like this:
From this I can notice spikes, maybe spot patterns, and find interesting time periods to look at more closely in the log. In this one, for example, I might be curious why I had over 90 log messages in 5 minutes roughly around time 1644675000 (February 12, 2022 2:10:00 PM). (I use https://www.epochconverter.com/ quite a lot…)
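(The conversion itself is easy enough to do in code too; just as a sketch of what that site is doing, rendering the same epoch seconds as both GMT and local:)

```kotlin
import java.time.Instant
import java.time.ZoneId
import java.time.ZoneOffset
import java.time.format.DateTimeFormatter

// Render a Unix timestamp (in seconds) as both GMT and local wall-clock time.
fun main() {
    val epochSeconds = 1644675000L
    val instant = Instant.ofEpochSecond(epochSeconds)
    val fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")
    println("GMT:   " + fmt.format(instant.atZone(ZoneOffset.UTC)))
    println("Local: " + fmt.format(instant.atZone(ZoneId.systemDefault())))
}
```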
Would you be able to add an option to select between GMT and local?
Would you be able to add an “export to CSV” or “export to Excel” option? Just an FYI: in the script I wrote to parse wrappers, I replaced all double quotes with single quotes so that I could wrap each row cell in double quotes (unless of course you use an obscure delimiter instead). I assume if you export to Excel directly you wouldn’t need to do that.
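In case it helps, the quoting step I mean looks roughly like this (a simplified sketch of my own script, not Kindling code; standard CSV would instead escape embedded quotes by doubling them):

```kotlin
// Replace embedded double quotes with single quotes, then wrap every cell in double
// quotes so commas inside a cell don't break the row. Simplified illustration only.
fun toCsvRow(cells: List<String>): String =
    cells.joinToString(",") { "\"" + it.replace('"', '\'') + "\"" }

fun main() {
    println(toCsvRow(listOf("ERROR", """Tag "Pump1" read failed, retrying""")))
    // prints: "ERROR","Tag 'Pump1' read failed, retrying"
}
```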
Haha nice. My reasoning for this is so that we can more easily track using Excel which issues we’ve resolved and which we’re still looking into, and add additional columns etc.
The two timestamps are always different in my wrappers: the main one, which includes the date, is local, and the one in the log comment is GMT and only has the time part. That’s a bit odd, and with me being +9/10:30 different, the local date is very likely to differ from the GMT date (assuming, of course, that the gateway timezone is set to local time and not GMT).
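To illustrate the date mismatch (the Australia/Adelaide zone below is just an example of a +10:30 offset, not necessarily my actual gateway):

```kotlin
import java.time.LocalDateTime
import java.time.ZoneId
import java.time.ZoneOffset

// An early-morning local timestamp in a +10:30 zone falls on the previous calendar
// date in GMT, which is why the local date and the GMT date often won't match.
fun main() {
    val local = LocalDateTime.of(2022, 2, 13, 8, 30)    // wrapper's local timestamp
    val zone = ZoneId.of("Australia/Adelaide")          // example zone, +10:30 in February
    val gmt = local.atZone(zone).withZoneSameInstant(ZoneOffset.UTC)
    println("Local: $local  ->  GMT: ${gmt.toLocalDateTime()}")  // GMT: 2022-02-12T22:00
}
```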
v0.2.0:
Adds log ‘density’ display to scrollbar in log viewer (thanks for the idea, @justin.brzozoski)
This is an experimental/incubating feature. Currently, logs are aggregated into 2-minute buckets and rendered in the scrollbar:
By dumb luck, the recent log I grabbed off one of our gateways to try out the new Kindling had one huge spike in density. I didn’t know that going in and was kinda confused why the graph was mostly empty when I opened it.
I’ve noticed similar things a few times; I might try to get fancier and do a log scale horizontally or cap the absolute magnitude. What I’m doing is basically manually rendering a graph and then scaling it down to fit inside the scrollbar, so when there’s lots of detail you can get some very faint lines.
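Something along these lines is what I have in mind for the scaling (a rough sketch only, not the actual rendering code):

```kotlin
import kotlin.math.ln

// Rough sketch of the two ideas above: optionally cap each bucket count and/or
// compress it with a log scale, then normalize to 0..1 so one huge spike doesn't
// flatten every other bucket into a barely visible line.
fun scaleCounts(counts: List<Int>, cap: Int? = null, logScale: Boolean = false): List<Double> {
    val adjusted = counts.map { c ->
        val capped = if (cap != null) minOf(c, cap) else c
        if (logScale) ln(1.0 + capped) else capped.toDouble()
    }
    val max = adjusted.maxOrNull() ?: return emptyList()
    return adjusted.map { if (max == 0.0) 0.0 else it / max }
}
```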
I’ll probably add time-slice selection in the column settings in the next release - maybe 1, 2, 5, 10 minutes or something like that.
I suspect you also know this, but it’s difficult to tell where the log lines currently in view fall on the time graph. The scrollbar is linear with the number of lines, the time graph is linear with time, and the two don’t align. I briefly (instinctively) tried to move the scrollbar on top of the spike, then realized that wasn’t going to work. I’m not sure what simple solutions exist for this. Maybe just highlight, in a second color, where the currently viewable log lines sit on the time graph? But that would require updating/refreshing your scaled graphic, so it might be more processing than it’s worth.
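To make the highlight idea concrete, the mapping itself seems simple enough; a sketch with made-up names (I have no idea how Kindling tracks the visible rows internally):

```kotlin
// Map the timestamps of the first and last visible log rows to fractional positions
// (0.0..1.0) along the time axis, so the viewable region could be shaded on the graph.
// All names here are hypothetical; this is just the arithmetic.
fun visibleTimeRange(
    firstVisibleTs: Long,
    lastVisibleTs: Long,
    logStartTs: Long,
    logEndTs: Long,
): ClosedFloatingPointRange<Double> {
    val span = (logEndTs - logStartTs).coerceAtLeast(1L).toDouble()
    val from = ((firstVisibleTs - logStartTs) / span).coerceIn(0.0, 1.0)
    val to = ((lastVisibleTs - logStartTs) / span).coerceIn(0.0, 1.0)
    return from..to
}
```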
I don’t know what the good solution is here. Something to ponder for now.
I like Kindling a lot. Thank you for putting it together.
Can you provide any more information? The latest published version (0.2.1) works fine on my Windows laptop. The tool I’m using, JDeploy, should automatically download an appropriate JDK for you.
Hi @PGriffith, is there a way to use Kindling to help us understand quarantine.xml files?
I often find things have been quarantined, but opening the file doesn’t help whatsoever as it is all encrypted. I just want to know what table in the database it is failing with!
Ideally also why it is failing…
Or do I need to get a different file for this analysis?
The quarantine file on its own is generally not very helpful (it’s missing too much information), but if you zip up the entire folder in the datacache directory in the install directory, you can open that zip using the ‘S+F Cache Zip File’ tool:
That should hopefully give you some insight into what’s going wrong.
Standalone executables are available for Windows and Linux users who don’t want to use JDeploy; see the releases page above
CSV and XLSX exports for some data formats (see #55, thanks to Josh Hansen)
‘Legacy’ thread dump parsing from prior versions that didn’t output a JSON export (see #63, thanks again to Josh and to Corbin)
Support for pasting logs and thread dumps directly from the clipboard (see #65)
Added some UI and parsing for deadlocked threads (see #68, thanks again to Josh)
And some other miscellaneous UI tweaks and improvements on the backend. Anyone using JDeploy should automatically receive the update the next time they launch.
I’m not sure if it would even be possible, but one thing that would be extremely useful is the ability to snapshot which queries are actually being run against a database connection.
The queries-per-second and long-running figures on the Database Status page aren’t very helpful when trying to narrow down what is hitting certain databases.
Maybe it could turn on deeper logging for x seconds, then parse it back out of the log?
Hmm. Currently Kindling doesn’t have any way to ‘reach in’ to a gateway and configure anything. Adding extra parsing to the logs to try to pull out patterns is doable, but tricky to generalize.
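As a very rough idea of what that parsing might look like (assuming the relevant loggers were already turned up so queries actually show up in the messages; none of this exists in Kindling today):

```kotlin
// Scan exported log messages for SQL statements and tally them by rough "shape",
// replacing numeric literals so repeated queries group together. Crude sketch only.
val queryPattern = Regex("""\b(SELECT|INSERT|UPDATE|DELETE)\b.+""", RegexOption.IGNORE_CASE)

fun tallyQueries(messages: Sequence<String>): Map<String, Int> =
    messages
        .mapNotNull { queryPattern.find(it)?.value?.trim() }
        .groupingBy { it.replace(Regex("""\d+"""), "?") }
        .eachCount()
```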