Newbie to Mitsubishi iQ-R connection to Ignition - need help

I am currently working with a customer that has Mitsubishi iQ PLCs connected to Ignition through KepServerEX for an automated assembly line (not built by our company) that sees 2 to 3 second delays waiting on data collection to complete before the next part can be processed.

Our company is producing a new, nearly identical automated assembly line that will use Mitsubishi iQ-R PLCs with CC-Link IE Field gigabit networks for many items, and our customer has tasked us with finding a way to talk to Ignition at gigabit speed to eliminate the delays. It is my understanding that KepServerEX cannot connect to CC-Link IE Field through a PCI card (Q80BD-J71GF11-T2 CC-LINK IE FIELD PCI PC I/F).

Does anyone have experience talking at gigabit speed to Mitsubishi PLCs with this PCI card, talking Modbus through the Mitsubishi RJ71EN71 Ethernet/CC-Link IE Field module, or using the Mitsubishi iQ-R OPC UA server module (RD81OPC96)?
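For what it's worth, if the RD81OPC96 route looks promising, one thing I was planning to do is sanity-check the module's OPC UA endpoint with a small client script before pointing Ignition at it. This is only a rough sketch using the python-opcua package; the endpoint URL and node ID below are placeholders I made up, not values from the module's documentation.

```python
# Minimal OPC UA read test against an RD81OPC96 module (hypothetical endpoint/node).
# Requires: pip install opcua
from opcua import Client

ENDPOINT = "opc.tcp://192.168.3.39:4840"   # placeholder - use the module's configured endpoint
NODE_ID = "ns=2;s=D100"                    # placeholder - browse the server for real node IDs

client = Client(ENDPOINT)
client.connect()
try:
    value = client.get_node(NODE_ID).get_value()
    print("Read %s = %r" % (NODE_ID, value))
finally:
    client.disconnect()
```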

I requested information from IA Support, but they could not provide much help, so I would appreciate any guidance you can offer.

This looks quite capable.


I have some experience working through delays with the previous generation of iQ CPUs going through KepServerEX. I was able to get the delays down by tweaking the number of channels/devices that the tags talked through. It's been a few years, so I don't remember whether configuring multiple connections to each PLC (and thus multiple channels in KepServerEX) was better, or whether fewer was better. There might have been a happy medium around 2-3, but again, it's been a few years.

At that time, I also found a limitation in the Mitsubishi driver's block size that was remedied in version 6.1.601.0, so that is another setting you might try tweaking from the defaults.
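If it helps, one way to tell whether the channel-count or block-size changes are actually paying off is to time a batch read from the Designer script console against the Kepware connection. A rough sketch only; the OPC server name and item paths are placeholders for whatever is configured on your gateway.

```python
# Rough batch-read timing test from the Ignition Designer script console.
# "Kepware" and the item paths are placeholders - substitute your OPC connection
# name and real KepServerEX item paths (Channel.Device.Address).
import time

server = "Kepware"
paths = ["Channel1.Device1.D%d" % i for i in range(100, 150)]

start = time.time()
results = system.opc.readValues(server, paths)
elapsed = time.time() - start

bad = [qv for qv in results if not qv.quality.isGood()]
print("Read %d items in %.3f s (%d bad quality)" % (len(results), elapsed, len(bad)))
```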

I know it's a billion years later, but I use Mitsubishi MXConfigurator to talk to hundreds of Mitsubishi PLCs via Ethernet, CC-Link, and CC-Link IE.
I communicate with 32 PLCs using one IP address, then tunnel through the various Mitsubishi protocols. From there the data goes to Kepware via OPC, and then on to my database, Ignition, and other software systems in our plant.

I have a coworker who only thinks in terms of Ignition and always wants to kick Kepware to the curb since I introduced MXConfigurator to the plant. He doesn't understand that Kepware keeps me in control of my data with the open manner in which it writes to our database. We have very old PLCs and Andon displays that Ignition can't process on its own. Furthermore, as far as I am concerned, Ignition is for visualization at this facility, not MES. So my data will remain filtered through Kepware.

Kepware has a great reputation and functionality. It is being seriously undermined at the moment by the elimination of permanent licenses. That is an existential threat to capital investments that are expected to run for decades after commissioning.

Kepware also requires running Windows as a critical part of one's production network. Given the relative prevalence of various types of malware, particularly ransomware, I think that is engineering malpractice. In my not-so-humble opinion.


8.1.28 brings a native driver for MELSEC PLCs. We did some testing with System Q, iQ-R, and iQ-F, and it's quite performant. I hear there might be support for the older FX series (FX3U-ENET, etc.) on the horizon as well, which really opens things up - the FX series is one of the most popular PLC lines globally.
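For anyone comparing it against a Kepware path, the same kind of on-demand read timing can be run through the internal OPC server, pointed at the new device connection. A rough sketch only; the "Melsec" device name, the D-register address syntax, and the default server name are assumptions to verify against your own gateway and the driver docs.

```python
# Rough on-demand read timing through the native MELSEC driver (8.1.28+).
# "Ignition OPC UA Server" is the default internal OPC connection name;
# the [Melsec] device name and D-register address syntax are placeholders.
import time

server = "Ignition OPC UA Server"
paths = ["ns=1;s=[Melsec]D%d" % i for i in range(0, 50)]

start = time.time()
results = system.opc.readValues(server, paths)
elapsed = time.time() - start

bad = [qv for qv in results if not qv.quality.isGood()]
print("Read %d items in %.3f s (%d bad quality)" % (len(results), elapsed, len(bad)))
```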

@charles.turner - I'd be interested to know if you can use the driver in 8.1.28 to tunnel through to all those other devices similar to what you're doing with MXConfigurator.

Yeah, great points all around. Currently they run their Kepserver in a VM on a crowded server on the corporate network with integrator VPN access...ouch.
My parallel setup is on a Windows Server but not in a VM. There is no internet access, and I am using Zero Trust for everything but emergency situations, even on-site.
I will eventually pool their Kepserver once we pull it from the corporate network.
Our wireless network, database, and storage are all on separate Linux boxes. Sorry, IT.
I have not decided exactly how to implement the DMZ for SQL just yet; most likely a mirrored database synced to Azure, with most ports blocked.

Not yet, but here's hoping.
MXConfigurator needs a serious update for a few bugs.
My primary complaint with MXC is that when I want to add a new channel after it has been running for a while, it won't easily accept the new tunnel. It sometimes takes hours of attempts, and it could probably benefit from some fault handling. It is also possible that it is just me, but there is not a lot of info on the English forums. Although I work for a Japanese automotive company, none of my coworkers are in my field, and I have no time for Google Translate.

I endorse this approach completely.

Do you mean Microsoft SQL Server? On Windows? Were you aware that it is available for Linux, too?

:grin:

{ That's how I do development with stripped down copies of client DBs. }

FWIW, current US government recommendations for securing industrial networks place the production database on the production LAN, not in the DMZ. A read-only replica in the DMZ sounds appropriate. (That replica can then be replicated out to a wider network, while only exposing the production DB to the DMZ.)