Is there any way to “undock” buttons, labels, and images so they can be dragged around the screen at runtime? Or would that have to be done with code?
You can move components with code. What are you trying to accomplish?
That would have to be done with code (all the component manipulation stuff is purely a part of the Designer).
You could use a mouse motion event listener and the fpmi.gui.moveComponent() library.
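The geometry behind that approach is just an offset calculation: remember where the component was when the mouse went down, then add the mouse's travel to it on each mouseDragged event. A standalone sketch (the function name is mine, not from any posted window; in FactoryPMI you'd pass the result to fpmi.gui.moveComponent):

```python
# Sketch of the math a mouseDragged event script would need.
# In FactoryPMI you'd call fpmi.gui.moveComponent(component, new_x, new_y)
# with the result; here the geometry is shown as a plain function.

def dragged_position(start_x, start_y, press_mouse, current_mouse):
    """Return the component's new top-left corner.

    start_x, start_y  -- component position when the mouse was pressed
    press_mouse       -- (x, y) of the mousePressed event
    current_mouse     -- (x, y) of the current mouseDragged event
    """
    dx = current_mouse[0] - press_mouse[0]
    dy = current_mouse[1] - press_mouse[1]
    return start_x + dx, start_y + dy

# Dragging the mouse 30 px right and 10 px down moves the component the same amount:
print(dragged_position(100, 200, (500, 400), (530, 410)))  # (130, 210)
```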
(Nathan and I are in rare parallel answers form today)
So I noticed - can’t believe I beat you to both! It’s 3AM and I’m on a roll without coffee!
I must be slipping - It’s 10AM here and I have had coffee… I’m multitasking though!
lol - I may not have the funky keyboard and pedals, but I’m multitasking too!
Last OT remark on this thread - I somehow never discovered these instant coffee packages that include the creamer and sugar until I got to Korea. Really want one now, but then I’d be up all night for sure.
[quote=“nathan”]You can move components with code. What are you trying to accomplish?[/quote]
Ok, I’ll see what I can do with the moveComponent library.
What I am trying to do is set up a transparent box that the operator can drag around that will zoom in on certain parts of the machine. For instance, let’s say you had a Plant Overview screen for a bottling plant. At any given time there could be thousands of different bottles in process at once. So, if I gave the operator an image component (let’s say a magnifying glass) that they could drag around, they could have a “zoom in” effect wherever the magnifying glass was. All I would need is the location, and I’d be able to update an image (say, a conveyor with a dozen bottles on it) and display the order data for the area they were magnifying.
Sorry about the hijack. I don’t think moveComponent is what you want at all.
One approach would be to open a borderless mouseover window like the Demo project does with the webcam example. A few simple scripts would set up any components that you want the end user to be able to hover over.
[quote=“Step7”]Ok, I’ll see what I can do with the moveComponent library.
What I am trying to do is set up a transparent box that the operator can drag around that will zoom in on certain parts of the machine. For instance, let’s say you had a Plant Overview screen for a bottling plant. At any given time there could be thousands of different bottles in process at once. So, if I gave the operator an image component (let’s say a magnifying glass) that they could drag around, they could have a “zoom in” effect wherever the magnifying glass was. All I would need is the location, and I’d be able to update an image (say, a conveyor with a dozen bottles on it) and display the order data for the area they were magnifying.[/quote]
Step7: I think that’s an awesome idea if you can pull it off. Hold on - I’m coding something up to help you out.
Ok, here’s the draggable component. Detecting what it’s near is a challenge I’ll leave for you, but feel free to ask questions!
draggable_component.fwin (4.75 KB)
That’s awesome Carl. I’ll surely have questions, but I already have all of the hooks in place to make it work.
Great, I might want to do a screen/video capture of that when you’re done, it sounds like it’ll be very cool.
Madness - that’s too cool!
Hey, I updated the code a bit to use 2 client tags instead of dynamic properties; it cuts down on the work and allows universal scripting for all objects.
Step7, can we see your finished code as well?
obj.py (1.03 KB)
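I haven’t seen obj.py, but storing the press offset in two shared client tags - so one script can move any component - might look roughly like this (a dict stands in for the fpmi.tag reads/writes so the sketch runs anywhere; all names are illustrative):

```python
# Rough guess at the "two client tags" pattern: on mousePressed, store the
# offset between the mouse and the component's corner in two shared tags;
# on mouseDragged, any component can reuse the same script to follow the mouse.
# A dict stands in for client tag reads/writes here.

client_tags = {"DragOffsetX": 0, "DragOffsetY": 0}

def on_mouse_pressed(comp_x, comp_y, mouse_x, mouse_y):
    # Remember where inside the component the user grabbed it.
    client_tags["DragOffsetX"] = mouse_x - comp_x
    client_tags["DragOffsetY"] = mouse_y - comp_y

def on_mouse_dragged(mouse_x, mouse_y):
    # New corner = mouse position minus the stored grab offset.
    return (mouse_x - client_tags["DragOffsetX"],
            mouse_y - client_tags["DragOffsetY"])

on_mouse_pressed(100, 100, 110, 105)   # grabbed 10 px in, 5 px down
print(on_mouse_dragged(200, 150))      # (190, 145)
```

Because the offsets live in shared tags rather than per-component dynamic properties, the same mouseDragged script works on every draggable object.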
Carl, I also noticed the fpmi.gui.convertPointToScreen function; can we get some documentation on it? Also, is there a way to take the area enclosed in an object, say a rectangle, and save it to a temp window at runtime? I wouldn’t mind making the whole plant on one screen, then zooming in on the selected part of the plant, which is what I believe Step7 is trying to do.
What I ultimately did was use a container instead of a box. When I drag the magnifying glass around the screen and it gets close to an object I want to blow up, I make the container visible and the underlying object invisible. Once my container is visible and I release the mouse, I can open up windows, view feedback, or whatever, just as though I was on my other screen where the containers usually reside in full size.
I’m still playing around with it, but will post some code when it’s cleaned up.
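The proximity test that approach needs could be as simple as a center-to-center distance check; this is a guess at the shape of it, not Step7’s actual code (his wasn’t posted, and every name here is illustrative):

```python
# One way to do the proximity test described above: when the magnifier's
# center falls within some distance of a target's center, swap visibility
# (show the zoomed container, hide the underlying object).

def center(x, y, w, h):
    """Center point of a component's bounds (x, y, width, height)."""
    return (x + w / 2.0, y + h / 2.0)

def is_near(mag_bounds, target_bounds, threshold=40):
    """True if the magnifier center is within `threshold` px of the target center."""
    mx, my = center(*mag_bounds)
    tx, ty = center(*target_bounds)
    return (mx - tx) ** 2 + (my - ty) ** 2 <= threshold ** 2

# A 20x20 magnifier at (90, 90) sits right over a 40x40 target at (80, 80):
print(is_near((90, 90, 20, 20), (80, 80, 40, 40)))  # True
print(is_near((500, 500, 20, 20), (80, 80, 40, 40)))  # False
```

In the real window you’d loop over the candidate targets inside the mouseDragged script and set each container’s visible property from the result.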
Can you tell me at what point a screen simply has too many tags and objects? I don’t care about the initial load; I know that will be slow due to all of the images.
What I’ve done now (I’m not claiming this is a sensible idea, yet) is take my zoom tool and put it on steroids. I just took a standard screen (about 800x600), added a container with the dimensions of 10000x10000, and dropped my entire plant overview into it. I added the zoom logic onto this container, and now I can pan and scan all over the plant with the mouse, sort of like Google Maps. It looks kind of cool, and is amazingly responsive. Of course, since my original plant overview had limited information because the resolution of the components was too low, there really isn’t a lot to look at. But now, the sky’s the limit as far as a virtual monitor goes. Using my bottling plant example, I could even tag a bottle to remain visible on the screen as it travels through the plant, like an animated fly-through in CAD.
So, when is this going to break? Could I use, say, 500 tags and 200 objects? 1000?
Huh, I don’t really understand what you just said, I think I have to see it, but it sounds really cool!
As for your question, it won’t really ‘break’, it’ll just start taking more and more of the computer’s resources. At some point, it will ‘back up’, meaning that dispatching the tag value changes to the components takes longer than the scan time, and the scan is held up. If the scan gets held up for a long time, the next time it goes to scan, it will be so out of date, the Gateway will have to give it a fresh batch of full tags, not just the tag diffs. This will compound the problem, so I suppose that is the point at which it will ‘break’.
So when will that happen? Depends on the computer. I’ve seen screens with > 400 tag subscriptions and > 600 components working fine. Unfortunately, the benchmarks we just completed didn’t scale screen complexity up higher than this, so I can’t give a definitive answer. It also depends on how frequently your tags change. If you’re subscribed to 1000 tags but only 100 change per second, it’s very different than subscribing to 1000 tags and having all 1000 change every second. Tag bindings only take up resources when the tag actually changes.
Hopefully this background info gives you enough to evaluate whether your situation will work or not. You can calculate your own efficiency by opening up the Diagnostics window and looking at the SQLTags throughput (scans/sec). If your client poll rate (settable as an expert project property in the Designer) is 250ms (the default), then the ideal throughput would be 4 scans per second. The lower this gets, the worse your efficiency is (the longer it’s taking to handle all of the tag changes that are coming in).
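As a worked example of that efficiency calculation (just the arithmetic from the paragraph above, wrapped in functions):

```python
# Ideal throughput is one scan per client poll period, so a 250 ms poll
# rate should yield 4 scans/sec. Efficiency is the observed scan rate
# (from the Diagnostics window) divided by that ideal.

def ideal_scans_per_sec(poll_rate_ms):
    return 1000.0 / poll_rate_ms

def efficiency(actual_scans_per_sec, poll_rate_ms):
    """Fraction of the ideal scan rate actually achieved (1.0 = keeping up)."""
    return actual_scans_per_sec / ideal_scans_per_sec(poll_rate_ms)

print(ideal_scans_per_sec(250))   # 4.0
print(efficiency(3.0, 250))       # 0.75 -- handling tag changes is lagging by 25%
```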
Hope this helps,
Are you referring to the client PC or the server PC? Or both?
Ok, if you took the “floating components” code that was posted earlier in this thread, and attached the code to a container instead of a box, then you could drag the container (and its contents) around. Then, in the designer, if you make the container huge, you can pan around the container just like you would with Google maps. As soon as I can bring this project to a breaking point and give it a version number, I’ll send it to you.
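For anyone trying the huge-container trick before that project gets posted, the only extra math beyond the drag script is clamping the container’s position so its edges never pan inside the visible window. A sketch under the sizes mentioned above (10000x10000 container, roughly 800x600 screen):

```python
def clamp_pan(x, y, container_w, container_h, view_w, view_h):
    """Clamp the big container's top-left so no blank space shows in the view.

    With the container larger than the view, the valid x range is
    [view_w - container_w, 0], and likewise for y.
    """
    x = max(view_w - container_w, min(0, x))
    y = max(view_h - container_h, min(0, y))
    return x, y

# Panning past the bottom-right corner of a 10000x10000 container in an
# 800x600 view snaps back to the last position that still fills the screen:
print(clamp_pan(-9500, -9700, 10000, 10000, 800, 600))  # (-9200, -9400)
```

You’d apply this to the drag result each mouseDragged event before moving the container.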