I made a project in FactoryPMI that monitors online variables (waste and machine operation) for 5 machines, using the Easy Chart component. The machines are connected over an Ethernet network.
The server runs Windows XP with 2 GB of RAM.
I'm trying to access it remotely with 5 concurrent clients, but each client connection significantly increases the server's memory usage, so when the fifth client tries to connect, the server becomes unresponsive and the client cannot connect to the application server.
What database are you using? How did the historical tables get created? Are there indexes on the timestamp of these historical tables? Are there any special conditions in the where clauses of the pens on your easy chart? How frequently do you log data?
I’m using SQL Server Express 2005.
I’m accessing the data through a view, because I need a specific format for the waste variable.
The tables were set up with the option of storing a time/date stamp.
There are no special conditions in the WHERE clauses.
The tables store data every millisecond.
Every millisecond? Did I read that right? That doesn’t really make sense (PLCs typically have a scan rate of 10 ms, computer clock resolution is 10-20 ms, and transactions to a database take upwards of 30-80 ms. FactorySQL has a practical lower bound for logging of about 100 ms).
What exactly are you trying to do? 100ms is the fastest you should ever try to do anything with FactorySQL.
Sorry, my mistake; I’m storing the data every second.
Ok good. How large are the date ranges that you’re pulling up in each client, and more importantly, is there an index on t_stamp on your historical tables? Also, what is the definition of the view?
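If you're not sure whether the index exists, you can check directly against the system catalog views in SQL Server 2005. A sketch (the table name `history_table` is a placeholder; substitute the name of your historical table):

```sql
-- List all indexed columns on a (hypothetical) historical table.
-- If t_stamp does not appear in the results, it is not indexed.
SELECT i.name AS index_name, c.name AS column_name
FROM sys.indexes i
JOIN sys.index_columns ic
  ON i.object_id = ic.object_id AND i.index_id = ic.index_id
JOIN sys.columns c
  ON ic.object_id = c.object_id AND ic.column_id = c.column_id
WHERE i.object_id = OBJECT_ID('history_table');
```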
I checked with the Run Diagnostics option on the Easy Chart component, and it shows the warning: “The time column ‘t_stamp’ is not indexed”.
How can I add the index?
To add an index on the t_stamp column:
Open up SQL Server Management Studio Express and navigate down to your table. Right-click it and choose “Modify”.
Right-click the t_stamp column and choose “Indexes/Keys”.
Add a new index, and set Columns = the t_stamp column.
Save everything, and your queries should run much, much faster and consume very little memory.
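If you prefer T-SQL to clicking through the Management Studio GUI, the same index can be created with a single statement (the table name here is hypothetical; use the name of your actual historical table):

```sql
-- Nonclustered index on the timestamp column of a (hypothetical) history table.
-- This is what the Easy Chart's date-range queries will seek on.
CREATE NONCLUSTERED INDEX IX_history_t_stamp
ON history_table (t_stamp);
```

Repeat this for each historical table that your Easy Chart pens query.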
I carried out what you suggested. Although the data now displays faster on the Easy Chart component, the memory still increases with each client connection (each client connection occupies approximately 80,000 KB).
I should mention that I ran the same test with the DEMO application, connecting several clients at the same time in the same way, and with the fifth client the server’s memory reached 100%. Finally, in less than two hours, all clients were told that the demo period had expired.
Well, the way the demo works is that you get a combined 120 minutes of runtime. That means that if you are running 2 clients, you’ll get 1 hour each; 4 clients, half an hour; and so on.
You’re saying that for each connected client, the server gets an increase of 80mb, or the client itself occupies 80mb of memory? If it is the server, what processes are increasing their memory footprint?
Each time a client connects, the server spawns a process called javaw.exe, which uses about 80 MB.
Oh, you’re running the clients on the server. That javaw.exe is the client.
This isn’t a good test. Yes, a client process will be around ~80 MB, but there is typically one client process per machine; running 5 clients on the server itself doesn’t demonstrate anything about the server’s memory consumption. Your 5 clients should be running on machines separate from the server.