Function to export from Perspective Table to Excel

Hi everyone,

Today I started working on exporting the content of a Table in Ignition Perspective to Excel via scripting.

As often happens, the more you progress, the more you want to improve the code and make it as flexible as possible. So I was wondering if a function to handle this type of need has already been developed, considering the complexity involved in building such a complete solution.

The requirements I’m trying to bring together into a single function are:

  1. Handling input data as either JSON or Dataset

  2. Converting date columns, since they are otherwise exported in Unix format

  3. Exporting only visible columns

  4. Determining which data to export — whether to use the filtered data or the full content of the table

I’ve searched the Ignition Exchange without success, but maybe I missed something. Either way, it might be worth considering implementing built-in export functionality directly in the component in the future.

Thanks and have a great day!

You didn't find anything because your requirements aren't the same as everyone else's.
I can think of a lot of other options/parameters that such a feature could cover.
And I say feature, because at this point it's not just a function, it's a full feature.

The issue with this kind of resource is that it has to become increasingly complex and complicated with each use case you make it handle.
If you want flexibility, make several general-purpose functions. Let's take your points as examples:

  1. Make a function that converts a dataset to JSON, and a function that converts JSON to a dataset. Then the export function doesn't need to handle both. Just feed it whatever it was made to handle.
  2. Format your data yourself. You can even make a function to do this consistently across your project. Otherwise, you need to make the export function take parameters for the format, the localization, maybe the timezone... and that's just for dates. What about float precision? How do we handle nulls? Empty strings?
  3. Well... filter them out before passing the data to the export function.
  4. Ties in with points #2 and #3: format your data.
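
To make the split concrete, here's a hypothetical sketch of point #1 in the style suggested above. The accessor names (getColumnCount, getColumnName, getRowCount, getValueAt) are the ones Ignition's Dataset interface exposes; the stub class is purely illustrative so the helper can be exercised anywhere, and the function name is made up:

```python
def dataset_to_dicts(ds):
    """Convert any Dataset-like object into a list of {column: value} dicts."""
    cols = [ds.getColumnName(i) for i in range(ds.getColumnCount())]
    return [
        {col: ds.getValueAt(row, i) for i, col in enumerate(cols)}
        for row in range(ds.getRowCount())
    ]

class _StubDataset(object):
    """Minimal stand-in for an Ignition Dataset, for illustration only."""
    def __init__(self, headers, rows):
        self._headers, self._rows = headers, rows
    def getColumnCount(self):
        return len(self._headers)
    def getColumnName(self, i):
        return self._headers[i]
    def getRowCount(self):
        return len(self._rows)
    def getValueAt(self, row, col):
        return self._rows[row][col]

records = dataset_to_dicts(_StubDataset(['id', 'name'], [[1, 'pump'], [2, 'valve']]))
```

With a converter like this in one place, the export function only ever sees one shape of data.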

In the end, what you need is:

  • a function that takes data and makes an Excel sheet out of it. Use Apache POI for this. It looks weird at first, but there's nothing really hard about it.
  • some controls to pick the options you want to make available (select the data source (full table or filtered data), whether or not to filter on column visibility...)
  • an export button that will get the data, apply the options, format the data, pass it to the data_to_xls function, and save the returned workbook / download it / insert it into a db / email it / whatever you need to do with it

Then you end up with something that's flexible and reusable, but also makes sense for your use case without being overly complicated.
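
On the Apache POI point, a rough sketch only (Jython, relying on the POI classes Ignition bundles; the function name data_to_xlsx and the sheet name are arbitrary choices, not an established API):

```python
from org.apache.poi.xssf.usermodel import XSSFWorkbook
from java.io import ByteArrayOutputStream

def data_to_xlsx(headers, rows):
    """Build an XLSX workbook from headers + list-of-lists; return its bytes."""
    wb = XSSFWorkbook()
    sheet = wb.createSheet('Export')
    # Header row first, then one row per record.
    header_row = sheet.createRow(0)
    for col, name in enumerate(headers):
        header_row.createCell(col).setCellValue(name)
    for r, row in enumerate(rows, start=1):
        poi_row = sheet.createRow(r)
        for c, value in enumerate(row):
            poi_row.createCell(c).setCellValue(value)
    out = ByteArrayOutputStream()
    wb.write(out)
    wb.close()
    return out.toByteArray()
```

From a Perspective session, system.perspective.download can then hand the returned bytes to the browser; treat this as a sketch to adapt, not a drop-in implementation (cell typing, styling, and large datasets all need more care).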

You can find example code for #3 in the Exchange "Tag Report Utility" project.

Thank you both for the suggestions. It's true that the need can become so broad that it's understandable that each person might end up developing their own dedicated function.

Inspired by your input, I’ve started creating several helper functions that I’ll then call from the main export function:

  • conversion from JSON to Dataset
  • filtering of visible columns
  • conversion from millis to date
  • header selection (whether to keep the DB column name or the label from the UI)
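
For what it's worth, two of those helpers can stay plain Python, independent of any Ignition classes. A hedged sketch (the Perspective column config keys field and visible are the standard ones, but the function names and the date format are made up for the example):

```python
from datetime import datetime

def millis_to_string(millis, fmt='%Y-%m-%d %H:%M:%S'):
    """Convert epoch milliseconds to a formatted UTC date string."""
    return datetime.utcfromtimestamp(millis / 1000.0).strftime(fmt)

def filter_visible(records, columns_config):
    """Keep only the fields whose column config has visible == True."""
    visible = {c['field'] for c in columns_config if c.get('visible', True)}
    return [{k: v for k, v in row.items() if k in visible} for row in records]

formatted = millis_to_string(0)
trimmed = filter_visible(
    [{'id': 1, 'secret': 'x'}],
    [{'field': 'id', 'visible': True}, {'field': 'secret', 'visible': False}],
)
```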

I have a question: currently, to check whether the data is in JSON format or a Dataset, I’m using:

from com.inductiveautomation.ignition.common import Dataset
if isinstance(rawData, Dataset):

Is there a better or more appropriate way to handle this?

Thank you

I would say that isinstance is a normal and widely accepted way of verifying classes, and I believe the Dataset class would work for verifying the common BasicDataset, but it wouldn't identify the equally common PyDataSet. I don't know if that's important, but instead of importing classes for use with isinstance, I like to simply use an object's built-in class name string for this sort of thing:

Example:

# Check whether the given rawData object is a BasicDataset or DatasetUtilities$PyDataSet
if rawData.__class__.__name__ in ['BasicDataset', 'DatasetUtilities$PyDataSet']:

Not true, for some years now. (Was true back in v7.9.) Dataset is an interface, not a class.

isinstance() is much more efficient than string comparisons, and will accept subclasses that don't have names (anonymous classes, etc). I would definitely use it for the Dataset case.

For more general object type discrimination, consider implementing "duck-typing" with hasattr().
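
A minimal illustration of the duck-typing idea: rather than checking the class, check for the methods the export code actually calls. The helper name and the toy class are made up for the example:

```python
def looks_like_dataset(obj):
    """True if obj quacks like a Dataset (has the accessors we intend to call)."""
    return all(hasattr(obj, name) for name in
               ('getColumnCount', 'getColumnName', 'getRowCount', 'getValueAt'))

class _Quacks(object):
    """Toy object exposing the same accessors as a Dataset."""
    def getColumnCount(self): return 0
    def getColumnName(self, i): return ''
    def getRowCount(self): return 0
    def getValueAt(self, r, c): return None

is_dataset_like = looks_like_dataset(_Quacks())  # True
plain_list = looks_like_dataset([])              # False
```

This sidesteps the class-hierarchy question entirely: anything with the right methods is accepted, named class or not.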

Here is a simple code snippet that can be used to test that assumption:

from com.inductiveautomation.ignition.common import Dataset

headers = ['Column 1', 'Column 2']
data = [[1, 2], [3, 4]]
rawData = system.dataset.toPyDataSet(system.dataset.toDataSet(headers, data))

print type(rawData)
print isinstance(rawData, Dataset)

Output: in version 8.1.44

<type 'com.inductiveautomation.ignition.common.script.builtin.DatasetUtilities$PyDataSet'>
False

As a Vision developer, I'm usually tempted to use isinstance when I want to recursively dig out some nested component, but my hang-up is that isinstance is too generic. For example, if I want to dig out all the JButtons using isinstance, I'll also get any component that has JButton as a superclass, because even though they have a different class name, they are still instances of JButton. Consequently, otherwise unnecessary complexity has to be added to the code to filter out the unwanted classes. However, this use case isn't that, so if detecting PyDataSets isn't a consideration, I don't see a problem with the OP's code.
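
The subclass catch is easy to demonstrate with plain classes: isinstance() matches the whole hierarchy, while an exact-type comparison does not. (Toy classes here, standing in for JButton and its subclasses.)

```python
class Button(object):
    pass

class FancyButton(Button):
    pass

b = FancyButton()
by_isinstance = isinstance(b, Button)  # True: subclasses count as instances
by_exact_type = type(b) is Button      # False: exact class comparison only
```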

Hmmm. That might be a jython bug. See the "implements Dataset" here (for v8.1.44):

It is, indeed, a long-standing jython bug.

The workaround is to use Dataset.isInstance(rawData)

The issue with this approach is that it fails on Dataset subclasses; duck typing with hasattr checks, as Phil suggests below, is probably better.

See my note above about precision. Precision is the reason I prefer this approach.

Nice. To me, that actually looks better.

Thank you all for your responses.
All the arguments have been very helpful.

Huh, I'm very surprised that both of us didn't already know about that. Fascinating.

Good caveat. While it should behave the same way as much as possible, anyone doing funky stuff with string comparisons on dataset related types may end up with different results than they expect in 8.3.0.

For the sake of it, a slightly expanded version of your snippet:

from com.inductiveautomation.ignition.common import Dataset

headers = ['Column 1', 'Column 2']
data = [[1, 2], [3, 4]]
rawData = system.dataset.toDataSet(headers, data)
pds = system.dataset.toPyDataSet(rawData)

print type(rawData)
print type(pds)
print isinstance(rawData, Dataset)
print isinstance(pds, Dataset)
print Dataset.isInstance(rawData)
print Dataset.isInstance(pds)

Gives this result in 8.3.0:

>>> 
<type 'com.inductiveautomation.ignition.common.BasicDataset'>
<type 'com.inductiveautomation.ignition.common.script.builtin.DatasetUtilities$PyDataSet'>
True
False
True
True

But you don't actually have to do the explicit PyDataset wrapping anymore, and I would actively encourage, upon migration to 8.3, that folks go back and remove it, since it's not needed:

headers = ['Column 1', 'Column 2']
data = [[1, 2], [3, 4]]
rawData = system.dataset.toDataSet(headers, data)

for row in rawData:
	print row

The type() of a dataset will continue to report the underlying Java type, as well, despite this wrapping.

Yeah. Especially since it applies to a bunch of common interfaces like List and Map. At a quick look, it seems to depend on how deep in the parentage the interface is declared. :man_shrugging:

That's the power of peer review. I had imagined everybody had run into this, and I knew the OP's code was going to require further development if it had to contend with data from database queries or other sources that return PyDataSets. I'm glad the Dataset.isInstance() method came up in the discussion. I believe that's what I would roll with if I were the OP.