Anyone have any guidance for how to use Ignition with Claude or any other LLM? You can make expert-level web pages with just a few properly structured prompts. Can we do the same with Ignition? I have a Vision project that I want to port over to Perspective, and I think Claude could make expert-level dashboards, but I am not sure where to start.
-
Please read the FAQ, particularly the rule about LLM content here.
-
Don't expect LLMs to produce working code or configurations for Ignition. Ignition is too small a niche. You have to already know how to work with Ignition to deal with the hallucinations and misapplication of what LLMs think are the right standards.
Learning how to use Perspective is the short answer. AI can help with making styles, or a blueprint in HTML that you can use as a reference for building your Perspective views. Try to use standard Perspective features to keep your project stable and maintainable. Good luck.
@a.alghannam is 100% correct. The allure of LLM/AI is that it will be the magic push button that just makes your project for you. But even in cases where it does, it is making a lot of decisions and assumptions that you might not want. And I don't think any LLM is that great with Ignition anyway.
You will waste more time trying to get Claude to do it correctly, then going back to fix things manually, etc., than you would just learning to do it yourself. Ignition already makes it easy for you with a drag-and-drop page builder; learn the basics and just start, imo.
Ah, looks like I kind of stepped into something here. I appreciate all the feedback.
I spent about 6 months on this Vision project, mostly working on the tag side, contextualizing and normalizing data, and making the corresponding dashboards in Vision because the license was potentially cheaper (that was my thought process anyway). I have been very deep in the weeds with the tags, scripting, expressions, etc., and have achieved very nice results. AI is absolutely the duck's guts at working with the JSON files. It also makes very stupid mistakes, you often have to "start over" with a chat window, and most of the time I wonder if I would be quicker and better off without it. But at the end of the day it is just going to get better and better, so you'd better get used to it, and once you get across the learning curve you will be pretty unstoppable with your AI-enhanced workflows. Enhanced being the key word there.
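For context, the JSON work I mean is mostly bulk edits to tag exports. Here is a minimal sketch of the kind of thing I either hand to the LLM or just script myself, assuming an Ignition 8.x tag export with the usual nested "tags" lists and "name" keys; the file names and the naming rule are made up:

```python
# Hypothetical example: normalize tag names in an Ignition 8.x tag export.
# Assumes the standard export shape (folders carrying nested "tags" lists
# and "name" keys); the file names and naming rule are illustrative only.
import json
import re

def normalize(name):
    # Collapse spaces/dashes to underscores and drop anything else unusual.
    return re.sub(r"[^A-Za-z0-9_]", "", name.replace(" ", "_").replace("-", "_"))

def walk(node):
    if "name" in node:
        node["name"] = normalize(node["name"])
    for child in node.get("tags", []):
        walk(child)

with open("tag_export.json") as f:
    export = json.load(f)

walk(export)

with open("tag_export_normalized.json", "w") as f:
    json.dump(export, f, indent=2)
```

The point is less the script itself than that this kind of mechanical JSON munging is exactly where the LLM actually saves me time.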
The main problem I have is that I do not have access to the real banger IDEs like Claude, so I use Google products, which are not as good at coding. Anthropic released Claude Opus 4.5 yesterday and it absolutely leapfrogged everything before it. In 6 months, nobody will even be talking about it, as it itself will be leapfrogged.
I think in 20 years we will be talking about AI hallucinations the way Boomers talk about the sound of dial-up modems…
Walker Reynolds said recently at Tulip's Operations Calling that the old drag-and-drop way of creating dashboards is dead, and that AI will know better what to do with the data and how to present it (if you prompt it correctly, of course) and will do it lightning fast. Look at how it builds websites and apps already; it is insane. So I think it is pretty short-sighted not to expect this to happen for Perspective very shortly.
The MCP module for Ignition is supposed to drop in the first quarter of next year, right? That is pretty much along the lines of what I am talking about. I predict it will be another year or so before we are fully at create-your-own dashboards, at which point engineers will focus on contextualizing the data with their domain expertise.
Again, appreciate all the comments. I think for now the best bet is to learn Perspective the hard way, and use the tools where I can (like on JSON files) but with heavy, heavy oversight. It is kind of funny: when I first started using AI I was super polite to it, calling it Sir sometimes. Now I pretty much just rag-doll it any chance I get, lol. Hope that it doesn't hold a grudge in a few years,
Not to get too off topic on an Ignition forum, but having spent the last few weeks poring over AI business stuff:
> In 6 months, nobody will even be talking about it, as it itself will be leapfrogged.
We will see. There's basically no more training data left in the world for LLMs. The only extra juice they're getting out of them now comes from making them "reason", a.k.a. burning more tokens/compute. Anthropic and OpenAI, the ones we know about, are spending more on inference (the computation behind the results of your prompt) than their revenue. There's also no good way to properly quantify and sell something like a prompt. This leads to things like https://www.viberank.app/, where someone can pay $2k a month and then cost Anthropic over $50k in less than a month.

This is the case across the board: the more users and usage AI companies have, the more money they burn. The new inference chips are the next, and probably last, hope of reversing this. So we will see whether these companies are even financially viable in a few years and not just the money pits they have been for the last 4-5 years. Not trying to be petty or cute here, but you can also just get better and better at Perspective by using it yourself.
> I think in 20 years we will be talking about AI hallucinations the way Boomers talk about the sound of dial-up modems…
Not possible. Hallucinations are inherently part of how LLMs work. They cannot be coded out or done away with by more training data (of which there isn't any more anyway!).
> Walker Reynolds said recently at Tulip's Operations Calling that the old drag-and-drop way of creating dashboards is dead,
No idea who this person is, but that is what feels short-sighted, imo. What is faster: dragging and dropping components in the exact manner you know you want them to appear, or prompt, view, re-prompt, re-view, re-query? You said it yourself, you need to prompt it in the right way. Or you can just drag and drop it once, lol. It feels like people are just going "oh cool, new toy!" But the old way of doing it yourself is direct, from your mind onto the screen. Not from your mind, into a prompt, into hoping the AI understands your prompt correctly, and then reviewing and re-trying.
> Hope that it doesn't hold a grudge in a few years,
We'll see if it hasn't given way to the next tech hype train by then, quantum, lol.
Yup. We volunteers on this forum have seen a lot of LLM hallucinations in the past couple of years, and we aren't happy. Feel free to carry on with LLMs, but don't bring that broken [expletive] here for us to help fix.
I know Walker personally, and he's super smart, but he might be overdosing on his own hype.
But even if you end up right about LLM code assistance, the fact that Ignition is such a small niche is still going to be a huge hurdle. I will continue toying with LLMs, but I won't rely on them for my business. We'll see about the future.
Things LLMs struggle with:
- They scrape sites, including forums, which often means ingesting the non-functional code that people post. So there are frequent hallucinations about functions that do not work, which the LLM will repeatedly push as working.
- They also struggle with versions. They often suggest deprecated code and don't properly track the differences between, say, 7.9 and 8.3.
What LLMs do well and can be useful for:
- Creating test cases or injecting diagnostics into your code when you are running into issues you are trying to track down (see the sketch below)
- Formatting and documentation after scripts are finished
- Replicating a specific object format you feed it, applied to new input data
Things people using LLMs do poorly:
- Not defining requirements or setting up contextual guidance in their prompts.
At the end of the day LLMs are simply a tool and need to be treated as such. They can be quite powerful when used well but they can, and often do, lead people astray and cause grief.
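To make the diagnostics point above concrete, here is a minimal sketch of the sort of logging an LLM can bolt onto an existing Ignition script while you chase a problem. system.util.getLogger and system.tag.readBlocking are standard Ignition scripting calls; the function name, tag path, and threshold are made up for illustration.

```python
# Hypothetical Ignition (Jython) script with diagnostics injected for
# troubleshooting. The logger and tag-read calls are standard Ignition API;
# the tag path, function name, and threshold are illustrative only.
logger = system.util.getLogger("DashboardDebug")

def checkLineRate(tagPath="[default]Line1/RatePerMin", threshold=100):
    qv = system.tag.readBlocking([tagPath])[0]
    logger.info("Read %s -> value=%s, quality=%s" % (tagPath, qv.value, qv.quality))

    if not qv.quality.isGood():
        logger.warn("Bad quality on %s, skipping evaluation" % tagPath)
        return None

    below = qv.value < threshold
    logger.debug("threshold=%s, below=%s" % (threshold, below))
    return below
```

Output goes to the gateway (or designer) console under the named logger, so you can turn the level up while debugging and back down afterward without touching the script again.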
Thanks for all the feedback. I decided to jump in feet first with Perspective, no LLM assistance, and after about 8 hours of deep diving I am starting to find my way around a little better. I understand there is no easy way. Coding has always, always, always been the easy part, so anyone using an LLM strictly because they do not know how to code in the first place is very scary. Anyway, glad I asked the question; I now better understand how to approach the problem. Wish I had dived into Perspective sooner, I'm finding a lot of awesome resources on the Exchange.
In my view, Ignition’s value has always come from simplifying things that used to be hard. AI is rapidly erasing that barrier. We already have IDE agents that write code, generate tests, and automate workflows that used to require real expertise. If AI keeps shrinking that complexity while enabling deeper customization, Ignition’s main differentiator weakens.
I don’t think Ignition can afford to sit this out. The newest models are on a different level, and progress is accelerating. Ignition needs to align with that reality.
What part of “Inductive Automation announced an MCP plugin for Ignition” strikes you as IA sitting this one out?
My comment was a response to the skepticism I’m seeing here about the module’s possibilities. I don’t think Ignition is falling behind, but the discussion here makes it seem like the module is more of an AI-trend reaction than a real competitive advantage.
I've said it before and I'll say it again: any monkey can write code, including LLMs and AI. But that's not the value of a system integrator or engineer using Ignition. Ignition is, after all, just a tool. It's akin to saying anyone can swing a hammer and drive a nail, but are you going to trust just anyone to put a roof on your house? AI, and the people who rely on it to build fully implemented and integrated SCADA and controls systems, are going to build some shitty, leaky roofs.
My impression of the MCP module is that it will be a godsend for analyzing all the data a SCADA system produces, to derive useful insights and improvement opportunities, without necessarily pushing the data to third parties.
This topic is about using LLMs to creatively construct a Perspective project, instead of derivatively, which is all I see LLMs doing in the code space. For now.
Yes, skepticism. Justified by the face-plants that show up here.