We are running Windows Server 2022 with MSSQL Express. Based on our current growth rate, I estimate we have about five years before reaching the 10GB database size limit. However, the customer wants to ensure we won’t run into issues down the road.
I've read that an in-place upgrade of an existing MSSQL installation can sometimes cause problems. Has anyone done this recently, and if so, what was your experience?
Additionally, I believe we can use MSSQL Standard without CALs since everything is running locally on a single Windows Server. Can anyone confirm if this is correct?
Update
I got the ISO for MSSQL 2022 Standard and was able to (I think) successfully upgrade from MSSQL 2022 Express to Standard.
There was some info suggesting you run setup and select "Upgrade from a previous version of SQL Server" under Installation, but this gets stuck on the Select Features step: I get an error saying there are no features selected for upgrade, and I'm unable to proceed further.
Not sure if it's required, but I used the stop-Ignition BAT file to stop the Ignition services first.
Instead, you need to select the Maintenance section and then Edition Upgrade.
Everything seems OK. I'd like to change the database instance name from "SQLEXPRESS" to something else so folks know it's no longer Express, but I'm not sure how hard that would be.
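For anyone else doing this, one quick way to double-check that the edition upgrade actually took is to query the server properties (run in SSMS or sqlcmd):

```sql
-- Verify edition and version after the Edition Upgrade
SELECT SERVERPROPERTY('Edition')        AS Edition,         -- should now say Standard
       SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('InstanceName')   AS InstanceName;    -- still SQLEXPRESS
```

As far as I know, SQL Server doesn't support renaming an instance in place; the usual advice is to either install a new named instance and migrate, or just live with the old name.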
By "everything" do you mean that you have no Vision Client or Perspective Sessions running anywhere else?
Our Sales Engineering's standard interpretation and advice regarding CALs is that you will need 1 for each device/machine that will run a Client or Session (and at least 1 for the Gateway).
I have Perspective screens but there is no need to use them on other nodes. They are for configuration and status only. This application just POSTs Ignition historical data to an Azure database. Everything is currently running on the one server. Only KEPServerEX is on another Windows server.
We do plan to do that, but the customer wants more. Some larger systems may not last as long: the current system has only been online a few months, and others will be 4 or 5 times its size.
I'm afraid of the open source stuff. We have so many installs of MSSQL Express and the manager, just because they come bundled in some apps.
I had started this application with (I think) MySQL over a year ago, but we had some issues with free/open-source databases that made their IT group upset. I'm sure MSSQL is more of a target, but they don't bug us about a CVE for this or that — so far, never for MSSQL. I guess the Windows updates take care of them?
I don't know that I would make a blanket statement like this. While you can export data to CSV files, to me, that just gets messy, and if some industry regulations require you to keep data for a specific period of time, it's easiest/best to just keep it in the database.
Although the odds of ever pulling up historical data more than a month old are low, and more than a year even lower. (99% of the time everyone just looks at the last 24 hours or less.)
I know you say you don't want open source, but I'd recommend looking into PostgreSQL with the Timescale add-on for historian. You won't be limited on size, and if 10GB will last you 5 years on MSSQL, it will last you way longer with Timescale.
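For what it's worth, the Timescale setup is pretty small. A minimal sketch, assuming a hypothetical `history` table with a `ts` timestamp column (the compression policy is where most of the size savings come from):

```sql
-- Enable the extension and convert a plain table into a hypertable
CREATE EXTENSION IF NOT EXISTS timescaledb;

CREATE TABLE history (
    ts    timestamptz NOT NULL,
    tagid integer,
    value double precision
);
SELECT create_hypertable('history', 'ts');

-- Compress chunks older than a week (big on-disk savings for historian data)
ALTER TABLE history SET (timescaledb.compress);
SELECT add_compression_policy('history', INTERVAL '7 days');
```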
Also, Microsoft's licensing requires either user or device CALs, depending on which is more cost-effective for you (do you have lots of users sharing a few devices, or a few users across lots of devices?), or per-core server licenses, which allow unlimited users/devices but are priced accordingly (pretty high). When you license with CALs, you're required to buy enough to cover however many end users/devices are using the server. You can't just buy the standard (non-core) server license and legally/morally use it without the appropriate number of CALs to go along with it.
I believe PostgreSQL was the database we were asked to patch or upgrade due to certain Vulnerabilities and Exposures (CVEs). While I didn’t perform the updates myself, I recall looking into how they could be applied and noting that it wasn’t a straightforward patch.
Everything is running on a single local server, which we access via RDP for status monitoring and configuration changes. There are no typical process screens, only configuration screens. Data is logged historically by Ignition and then sent via POST requests to an Azure database.
It might be convenient to access the configuration screens from another node, but it's not essential.
The data is related to what our customer will charge their clients and the data is shared with them. My understanding is that they want to retain all Azure data while also using Ignition as a backup, though not as the sole backup.
I have yet to encounter any CVE in critical software (like PostgreSQL) that wasn't already patched via a regular `apt update ; apt upgrade` on an LTS version of Ubuntu by the time the CVE was made public.
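For reference, that routine is just the standard Ubuntu patch cycle; `unattended-upgrades` is the stock package that automates the security portion of it (a sketch of the setup, not specific to PostgreSQL):

```shell
# Manual patch cycle
sudo apt update
apt list --upgradable    # preview what would change, e.g. postgresql packages
sudo apt upgrade

# Or let security patches apply automatically
sudo apt install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades
```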
Your miseries are entirely due to using Windows. Ergo my numerous denunciations linked above.
We don't have that option. It's nice that Ubuntu can auto-patch PostgreSQL or the OS so you don't need to. In our case, it was looking like we needed to back up PostgreSQL, upgrade it, and patch to fix the CVE.
Anyway, as always, lots I don't know but was just looking for MSSQL upgrade experiences.
Archiving data and keeping it in the database can be two different things. Pruning to CSV, like I mentioned, would be one form of archiving the data. It's still available, just not for querying directly.
We archive data to an archive table because the data still needs to be queryable. So while they can be two different things, they don't have to be.
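In case it helps, the archive-table pattern is basically a copy-then-delete inside one transaction. A sketch in T-SQL — the table and column names (`history`, `history_archive`, `t_stamp`) and the one-year cutoff are hypothetical:

```sql
-- Move rows older than one year into the archive table, atomically
BEGIN TRANSACTION;

INSERT INTO history_archive
SELECT * FROM history
WHERE t_stamp < DATEADD(year, -1, GETDATE());

DELETE FROM history
WHERE t_stamp < DATEADD(year, -1, GETDATE());

COMMIT;
```

Keeping both statements in the same transaction means a failure mid-way can't leave rows duplicated or lost.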
Have you checked out the Historical Tag Analysis tool on the Exchange? This was built on our dev system using MSSQL Express to help us determine at a glance which tags were logging the most data and blowing up our database. Of course this won't solve the 10GB limit, but we were able to quickly identify tags that were logging every second when they only needed to log on change, like setpoints or discrete values.
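If you'd rather eyeball it with raw SQL first, you can get a similar "which tags log the most" view straight from the historian tables. A sketch assuming Ignition's default historian schema — the partition table name (`sqlt_data_1_2024_01` here) varies per install, so substitute your own:

```sql
-- Row count per tag in one historian partition table
SELECT te.tagpath,
       COUNT(*) AS row_count
FROM sqlt_data_1_2024_01 AS d
JOIN sqlth_te AS te ON te.id = d.tagid
GROUP BY te.tagpath
ORDER BY row_count DESC;
```

Tags at the top of that list logging every second are usually the first candidates for switching to on-change logging.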