Design principles for complex process control

Jean Vieille
SyntropicFactory, Interaxys, Control Chain Group
j.vieille@syntropicfactory.com

Introduction

I recently had the exciting assignment of surveying the on-site development of the control system for a large CSP (concentrated solar power) station. This “first of a kind” facility offered many control challenges: balancing heat in 600-metre-long vaporizers, setting the dry-out point, adjusting the mirrors’ position with centimetre precision on a target 50 metres away, and so on. The control was initially designed on assumptions that were sometimes defied by the reality of the engineered facility and its integration into the actual environment. The process was developed by a “research oriented” (rather than “engineering oriented”) team that did not always make the most appropriate design choices, including from the control and instrumentation viewpoint. As a result, the control strategy was unable to operate the facility without the help of operators and automation engineers. Some progress was made through successive adjustments to the control strategy, until the control seemed to get worse and worse. Besides technical recommendations to reverse this tendency and let the control system take the best from the mechanics, I found that the control difficulties were due to a crippled organisation for handling concurrent control engineering and commissioning, and to a failure to respect some principles that I tried to work out to help the control team get back on track.

Designing control strategies is an exciting and challenging specialty within control engineering. The automation engineer of course needs a solid and broad technical background that extends from physical measurement to actuation; from feedback control to model-based control; from combinatorial to sequential logic; from chemistry to thermodynamics; and across many computation, networking and data management technologies. When it comes to designing a control strategy, this knowledge is essential, but not sufficient to succeed in delivering a control system that makes the process efficient, safe and reliable. The following recommendations are based on a lifelong experience in process control, confronted with the obvious “mistakes” I found during this survey. This is not a comprehensive review, but I hope it will benefit young control engineers; the more experienced will probably be reminded of their early struggles.

Complex processes

A complex process is not a complicated process with plenty of measurements and actuators. A complex process may have a limited number of process and manipulated variables. It is complex because these variables are highly coupled and correlated. Typical examples are steam generators, distillation columns, fibre extrusion… It is when this systemic complexity combines with non-linearity and long, high-order time constants that control becomes particularly challenging. Though the following principles may seem reasonable in any circumstances, they specifically apply to such a context, where disregarding any of them can compromise the success of control.

1. Robustness – Inner stability preference – process symbiotic attitude

As much as possible, control shall use the natural dynamics of the system rather than trying to tightly drive process values that depend on other variables already under control. Double control is a recipe for instability and unpredictable behaviour. Control engineers must first of all focus on getting a deep understanding of the process behaviour.
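To make that first step concrete, here is a minimal sketch (in Python, with made-up parameters rather than data from any real plant) of the kind of open-loop step-response simulation one can use to characterise a process before adding any controller: a first-order-plus-dead-time model whose gain, time constant and dead time are pure assumptions for illustration.

# Open-loop step response of an assumed first-order-plus-dead-time process.
# All parameters are illustrative; characterise the real process from trend data.
def simulate_step_response(gain=2.0, tau=120.0, dead_time=15.0,
                           step=1.0, dt=1.0, duration=600.0):
    """Return (time, output) samples for a unit step applied at t = 0."""
    n_delay = int(dead_time / dt)
    times, outputs, inputs = [], [], []
    y = 0.0
    t = 0.0
    while t <= duration:
        inputs.append(step)                       # constant step input
        # the process sees the input delayed by the dead time
        u_delayed = inputs[-n_delay - 1] if len(inputs) > n_delay else 0.0
        # first-order dynamics: dy/dt = (gain * u - y) / tau (Euler integration)
        y += dt * (gain * u_delayed - y) / tau
        times.append(t)
        outputs.append(y)
        t += dt
    return times, outputs

if __name__ == "__main__":
    t, y = simulate_step_response()
    for i in range(0, len(t), 60):                # coarse trend, one point per minute
        print(f"t={t[i]:6.0f} s  y={y[i]:6.3f}")

Reading the gain, time constant and dead time off such a trend (or better, off recorded plant data) is what decides whether the principles below, cascade and feed-forward in particular, are even applicable.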
For me, building this understanding includes staring at multi-trend diagrams for hours until I get a sound explanation for an odd spike. It also keeps me from ever being overconfident that I have got it completely. Simple physical laws produce puzzling outcomes when playing together in a bounded spatio-temporal facility.

2. Least use of controllers

Every controller increases the system order and the risk of instability. The fewer controllers, the better.

3. Cascade control

Cascade control shall be used in two circumstances:
- to improve the linearity of the process response (for example, a temperature controller gives a set point to a slave flow controller instead of directly driving the gas control valve);
- to split the time constant of a slow process.
If the time constant of the main loop is not significantly reduced by the cascade, or if the linearity of the process is degraded by the slave loop (it happens!), the cascade slave loop must be avoided. As cascade control increases the number of controllers and thereby decreases process stability, the default option is none.

4. Override control

Override control allows several process variables, each with its associated controller, to drive the same manipulated variable depending on process conditions (for example, controlling the pressure during start-up and switching to flow control in steady state). The same can be done with a single controller by switching the process variable errors instead. Override control may cause instability because of the non-linearity introduced by the switchover, and preventing switchover bumps may be tricky. Unless the process has the same dynamics for each process variable, each variable should rely on a separate controller to allow its specific tuning.

5. Multiple input control

This control drives a single manipulated variable so as to maintain several controlled variables within appropriate ranges. For example, the boiler steam outlet valve position may be computed from the temperatures at the economizer outlet, near the dry-out point and in the first super-heater section; from the actual and target flow; from the pressure setpoint and measurement; and from diverse tuning parameters (do not laugh!). This looks like an appealing approach for mathematically minded control engineers as long as one thinks about each variable independently. Because of the coupling of these variables (specifically in complex processes), the process behaviour may become unpredictable and even unstable. This beauty needs to be considered carefully, and justified.

6. Multiple controlled input

This is the opposite of the previous scheme: the same controlled variable drives several control loops and manipulated variables. For example, the superheat temperature of a once-through boiler might be controlled by adjusting the inlet water flow (for slow adjustment of the water inventory and dry-out point) and the generator’s power (for rapid adjustment of the temperature within the expected range). Make sure that enough distance exists between the respective operating domains, and determine the dynamics so as to prevent resonances.

7. Careful use of model-based / feed-forward control

Process modelling and characterization allow transfer functions to be calculated and implemented that correct a disturbance by applying the right mathematics to elaborate the manipulated variable. This is an open loop (disturbance -> math -> manipulated variable -> PROCESS -> process variable), as opposed to closed (mainly PID) loops (process variable -> math -> manipulated variable -> PROCESS -> process variable).
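As a minimal sketch of these two structures working together (not any particular vendor’s implementation), the Python fragment below adds a static feed-forward term, computed from a measured disturbance and an assumed process model, to a PI feedback trim; all gains, time constants and the disturbance profile are made-up illustration values.

# Feed-forward compensation plus PI feedback trim on an assumed first-order process.
# Process model, controller gains and disturbance profile are illustrative only.
def run_loop(duration=600, dt=1.0):
    setpoint = 10.0
    y = 0.0                                    # process variable
    integral = 0.0
    kp, ki = 0.4, 0.02                         # PI tuning (assumed values)
    k_process, k_dist, tau = 2.0, -1.5, 60.0   # assumed process and disturbance gains
    for step in range(int(duration / dt)):
        t = step * dt
        disturbance = 1.0 if t >= 200 else 0.0      # measurable disturbance
        # open loop: disturbance -> math -> manipulated variable
        u_ff = -(k_dist / k_process) * disturbance
        # closed loop: process variable -> math -> manipulated variable
        error = setpoint - y
        integral += error * dt
        u_fb = kp * error + ki * integral
        u = u_ff + u_fb
        # assumed first-order process with an additive disturbance effect
        y += dt * (k_process * u + k_dist * disturbance - y) / tau
        if step % 60 == 0:
            print(f"t={t:5.0f} s  pv={y:6.2f}  u_ff={u_ff:5.2f}  u_fb={u_fb:5.2f}")

if __name__ == "__main__":
    run_loop()

The compensation is only as good as the k_dist / k_process ratio embedded in it; if the real process drifts away from that model, the open-loop term starts injecting its own disturbance, which is exactly the limitation discussed below.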
Model-based / feed-forward control provides an inherently stable and fast response to disturbances. It anticipates process variable deviations by compensating the disturbance effect before it occurs. This works only if the model is robust and stable enough to have a globally positive impact on control performance. If these conditions are not met, control will suffer.

8. Least use of instrumentation

Instruments are the weakest part of control: they are subject to breakdowns and errors. The fewer instruments, the more robust and reliable the control.

9. Negentropic, Lean development

Entropy is a measure of disorder, the natural tendency of the physical world to degenerate over time; entropy is linked to the direction of time, which points to the future, not backward. Despite this apparently lethal fate, the world keeps progressing thanks to the negentropic power of properly handled information – one of the basic aspects of Life. Control development does not escape the rule: it naturally tends towards disorder through successive fixes that only address the symptoms of issues, not their root causes, and through the failure to clean up useless code. Progress in control development can be quantitatively appreciated through the evolution of the code volume. Once all features have been provided, from the moment commissioning and optimization start, the code volume should decrease: either an existing feature becomes simpler after an update, or a new feature replaces another of equal or greater complexity. Lean thinking has been shaping mentalities in manufacturing operations for the last three decades; control engineers might get the idea too.

To summarize all the above: if your control strategy idea makes things simpler, you are possibly heading in the right direction. When it makes things more complicated, you are more probably wrong.