Ignition-AI on ChatGPT (Must have paid account, Sorry)

Large Language Models (LLMs) like GPT-4 offer invaluable assistance across various tasks, enhancing creativity and efficiency with their ability to generate human-like text. However, they are not foolproof solutions; LLMs can "hallucinate" information, creating convincing yet inaccurate content. It's essential to view them as tools that augment human capabilities, not as standalone arbiters of truth. The synergy between human oversight and LLMs ensures the best outcomes, leveraging their strengths while mitigating their limitations.

That said, check this out.

This looks bogus or spam.

I asked Paul, he said I could post it.

I have a paid account, and I am checking it out now. If someone can provide me with a challenging, realistic Ignition question, I don't mind testing it out and reporting back.

I gave it one to test the waters:

Write me a script that has a function that accepts a tagPaths parameter and a tagValues parameter. It tries to do a system.tag.writeBlocking with the parameters and inspects the return values. If everything is not good, it raises a custom exception, let's call it TagWriteException, and the string should be the list of tag paths that had issues and what those issues were (up to a 100 character limit).

Result: (ignoring all non-code responses)

class TagWriteException(Exception):
    def __init__(self, message):
        super(TagWriteException, self).__init__(message)

def write_tags_and_check(tagPaths, tagValues):
    """
    Attempts to write values to the specified tag paths and checks for errors.

    :param tagPaths: A list of tag paths to write to.
    :param tagValues: A list of values corresponding to the tag paths.
    :raises TagWriteException: If any tag writes are unsuccessful.
    """
    if len(tagPaths) != len(tagValues):
        raise ValueError("The length of tagPaths and tagValues must be the same.")

    # Attempt to write to the tags
    results = system.tag.writeBlocking(tagPaths, tagValues)

    # Check for any issues in the results
    errorMessages = []
    for i, result in enumerate(results):
        if not result.isGood():
            errorMessages.append("{}: {}".format(tagPaths[i], result.toString()))

    # If there are errors, raise an exception
    if errorMessages:
        errorMessage = "; ".join(errorMessages)[:100]  # Limiting to 100 characters
        raise TagWriteException("Issues with tag writes: " + errorMessage)

# Example usage
try:
    write_tags_and_check(["path/to/tag1", "path/to/tag2"], [123, 456])
except TagWriteException as e:
    print("Error: " + str(e))
except ValueError as e:
    print("Error: " + str(e))

It's real, as in Jim's a real person.
I'll say for the record that the one question I asked got a very convincing hallucination that you'd have to be well-versed to know was wrong.

I really think there's just not enough context about Ignition to get a good hit rate with LLMs. Great for general-purpose programming knowledge, Python questions, etc. - bad with specific domain contexts where there's exactly one right answer.


It always seems to revert back to Python 3 syntax as well, with f-strings and similar, no matter how much you ask it to write in Python 2. I am guessing most of the training data is Python 3 (compared to Python 2) and it's just unavoidable.
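For anyone pasting generated code into Ignition, this matters because Ignition's scripting environment is Jython 2.7, where an f-string is a SyntaxError. A minimal sketch of the workaround (the tag path and quality string below are hypothetical, just for illustration):

```python
# Python 3 syntax that ChatGPT tends to produce - a SyntaxError in Jython 2.7:
#   message = f"Tag {path} failed with quality {quality}"

# Hypothetical values, for illustration only
path = "path/to/tag1"
quality = "Bad_NotFound"

# str.format works in both Python 2 and Python 3, so it's safe in Ignition scripts
message = "Tag {} failed with quality {}".format(path, quality)

# Named placeholders also work and are easier to read with many substitutions
message2 = "Tag {p} failed with quality {q}".format(p=path, q=quality)

print(message)
```

Asking it to "use .format instead of f-strings" in the prompt sometimes helps, but as noted above, it tends to drift back to Python 3 idioms after a few turns.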
