Dataset Column Datatype Inference

I have a dataset whose data is filled in via a runScript() binding.

There's one column that is usually an integer, but there's no reason it couldn't be a double. It seems like Python (really Jython, so Java underneath) assumes it's an integer unless I include a decimal point.

So the issue is that when the first row in the dataset has an integer in this column, the column is assumed to be an integer type. Then, further down the dataset, Java hits an error when it tries to convert a double to an int. It's really annoying to have to go back and add a decimal point to every row entry for this dataset. Is there anything IA could do about this behavior, or is it just a Java thing?

You're skipping over a lot of details, but I'm guessing somewhere in there you're calling system.dataset.toDataSet on some Python structure. You're correct; that will rely on 'sniffing' the first row's datatype to determine what Java type to use for the actual dataset column.
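
For illustration (hypothetical data, assuming a list-of-lists structure like yours), this is the failure mode:

```python
headers = ["value"]
rows = [
    [1],    # first row is an int, so the column is sniffed as Integer
    [2.5],  # this double can't be coerced to int, so the call errors out
]

# Raises once it hits the second row, because the column type was
# already fixed by the first row.
ds = system.dataset.toDataSet(headers, rows)
```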
The easiest solution would be to explicitly cast every value in that column (in your script) via the float() builtin. Confusingly, Jython's float is actually a Java double, but c'est la vie.
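
A minimal sketch of that, assuming your rows are a list of lists built before the toDataSet call:

```python
headers = ["value"]
rows = [[1], [2.5], [3]]

# Cast the affected column up front; Jython's float() yields a Java
# double, so the first row no longer locks the column to Integer.
for row in rows:
    row[0] = float(row[0])

ds = system.dataset.toDataSet(headers, rows)  # column comes out as Double
```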
The second best solution, if that's not an option for whatever reason, would be to use the DatasetBuilder class to explicitly specify your column names and types, then imperatively add however many rows you need. (If you want a nicer interface than having to import it, consider Ignition Extensions).
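
A minimal sketch, assuming the usual import path for the builder:

```python
from com.inductiveautomation.ignition.common.util import DatasetBuilder
from java.lang import Double, String

# Column names and types are declared up front, so nothing is sniffed
# from the first row.
builder = DatasetBuilder.newBuilder() \
    .colNames("label", "value") \
    .colTypes(String, Double)

builder.addRow("first", 1.0)
builder.addRow("second", 2.5)

ds = builder.build()
```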

Correct assumption.

I opted for the annoying, but not too bad, solution of just adding ".0" to every int for those columns. Technically I only needed it for the first row of the dataset, but I want it on all rows just for consistency.
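
In other words, just this in the data that fills the rows (values hypothetical):

```python
# Before: the first row seeded the column as Integer
rows = [[1], [2], [3]]

# After: every entry carries the decimal point, so the column stays Double
rows = [[1.0], [2.0], [3.0]]
```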

You could also use the Integration Toolkit module's unionAll() expression function, which uses DatasetBuilder under the hood and accepts string shortcuts for data types.