Multiple Loggers with Same Name Error

I was attempting to troubleshoot some gateway scripts today and ran into some issues. I just started using Ignition 8.1 and am also using Docker for the first time (so it's taking me some time to get used to where to find the things I would normally do). I had created a FileWatcher class so I could monitor some files in a folder on the server. I had this working on 7.9, but am trying to migrate it to 8.1. I have run into several problems, starting with Gateway Tag Change Events not firing, so I opted to test with a Tag Value Change Event instead to troubleshoot. The first thing I ran into was this error:

jvm 1    | 2024/07/19 07:44:06 |   File "<tagevent:valueChanged>", line 5, in valueChanged
jvm 1    | 2024/07/19 07:44:06 | ValueError: Unable to change logger level:  NameError: Multiple loggers named "FileWatcher" found.

The code that generated this was:

def valueChanged(tag, tagPath, previousValue, currentValue, initialChange, missedEvents):
	from java.nio.file import Files, FileSystems, Paths

	log = system.util.getLogger("FileWatcher")
	system.util.setLoggingLevel("FileWatcher", "trace")
	
	filePaths = system.tag.readBlocking(['[default]Data Sync/Austin/FilePath','[default]Data Sync/Bastrop/FilePath','[default]Data Sync/Junction City/FilePath'])
	
	shared.fileWatcher.testFilePath(filePaths[0].value)
	shared.fileWatcher.testFilePath(filePaths[1].value)
	shared.fileWatcher.testFilePath(filePaths[2].value)

When I set the logging level to "trace" in the script, it generates the error above. Initially I couldn't get my FileWatcher service running as it did on 7.9, and since the Gateway Tag Change Event wasn't firing my code at all, this tag event script was my attempt to isolate the problem. Getting the logger itself works fine; setting the logging level is what fails. I noticed in the Logs that a couple of other FileWatcher loggers now exist (perspective.FileWatcher and com.inductiveautomation.ignition.gateway.project.ProjectFileWatcher). If I rename my logger (to, say, FileWatcher1), the issue goes away. Getting a logger by name has always referenced the existing instance, creating one if it didn't already exist, so adjusting the logging level by name shouldn't find multiple loggers.

So my questions:

  • Is this a bug? I suspect the search is overlapping with the new loggers from perspective and the projects.
  • Other than the Gateway Status > Logs, is there any other way, programmatically, to see all the loggers that are available so I can see what is going on here?
  • Is there a way to clear created loggers programmatically, or is a gateway restart necessary?
  • Is there a better method to naming the loggers in a system so we avoid this issue?

This is Ignition version: 8.1.38 (b2024030513)

It's hard to say. The implementation of setLoggingLevel clearly raises an error when duplicates (matched by trailing text) are found, but I can't come up with a good reason for this, even looking at the original ticket from 2015 when this change was made. The original intent was "warn when you try to set the logging level on something that doesn't exist", which seems reasonable, but erroring on duplicates feels unnecessarily fragile.

Yes, you can, though it's probably not necessary.

	from org.slf4j import LoggerFactory
	# will be a ch.qos.logback.classic.LoggerContext
	loggerContext = LoggerFactory.getILoggerFactory()
	loggers = loggerContext.getLoggerList()
	system.perspective.print(', '.join(l.name for l in loggers))

I would be very careful with the LoggerContext object you receive - we do zero testing around this, so it's entirely possible calling the wrong method here will just break all logging on your system until you restart.

See above :smile: There's a tempting reset method on LoggerContext, but I have no idea what it will do on a running system.

The problem isn't you, here, it's us/our implementation of this system function, so I wouldn't sweat it.

As for what you're ultimately trying to do:
From LoggerContext, you can retrieve your specific logger by name:
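Something like this should work (a sketch, untested on a live gateway; logback's `LoggerContext.getLogger(name)` does an exact-name lookup and creates the logger if it doesn't already exist):

```python
from org.slf4j import LoggerFactory

# getILoggerFactory() returns the ch.qos.logback.classic.LoggerContext
loggerContext = LoggerFactory.getILoggerFactory()

# getLogger() matches the exact name, so it won't collide with
# perspective.FileWatcher or ProjectFileWatcher
fileWatcherLogger = loggerContext.getLogger("FileWatcher")
```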

And then set a specific level on it:
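A sketch of that, repeating the lookup so it stands alone (logback's `Logger.setLevel` takes a `ch.qos.logback.classic.Level` and takes effect immediately, sidestepping the duplicate-name check in `system.util.setLoggingLevel`):

```python
from org.slf4j import LoggerFactory
from ch.qos.logback.classic import Level

loggerContext = LoggerFactory.getILoggerFactory()
fileWatcherLogger = loggerContext.getLogger("FileWatcher")

# setLevel() on the logback Logger instance applies immediately
fileWatcherLogger.setLevel(Level.TRACE)
```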