By Ara Barsamian
In the last decade, fuels blending operators have been under attack: reformulated gasolines, clean burning gasolines, CARB gasolines, MTBE, Tier 2 sulfur, MTBE Phaseout, Ethanol, ULSD, what next? From all the news emanating from Washington (EPA) and Brussels (European Union), things will get worse…hopefully we will not go back to traveling by horse-and-buggy and camels.
Currently, one has to meet approximately 33 product specifications simultaneously, depending on the type of fuel. Clearly, it is both impossible and insane to try to do this by hand without tremendous giveaways and higher-than-needed inventories. The tools to automate this “precision” blending (e.g., on-line analyzers, on-line property optimizers, and off-line multi-blend, multi-time period inventory optimizers) are all available. Most people buy them, but do not use them. Instead of big savings in the form of reduced giveaways, component and product inventories and tankage, the results are reblends, big giveaways, missed shipments, messed up inventories, stress on the blenders and the lab, and unhappy customers and shareholders. In this article, we will examine the “Critical Elements” of blending automation: the on-line analyzers, the blend optimizers, the integration/usability issue, and what to do to get it right the first time.
Refinery gasoline blending operation planning involves determining optimal blending recipes over an interval of 10 days to a month, taking into account client order delivery schedules, quantities, and product grades, constrained by the available component and product inventories on hand, and their respective lab analysis quality data.
The current specifications for fuels (e.g., gasolines) include conventional specs like octane, RVP, and distillation; calculated specs like VLI and DI (driveability index); and environmental specs (e.g., NOx, TOX, VOC, exhaust toxics). The total number of specs to be met simultaneously (e.g., for conventional reformulated 87 octane gasoline in the USA) is about 33. To meet these specs using the properties of blendstocks produced and available at the refinery, one has to use mathematical tools to find the least expensive combination of blending ingredients while respecting availability constraints; the tool for this is an optimizer. The optimizer must have credible data for its calculations; the only credible and reliable source is on-line property analyzers measuring in real time the properties of the blendstock components and of the blended gasoline at the blender header.
The on-line analyzer property data has a much narrower standard deviation than a single ASTM lab test, by a factor of 4 to 10! The on-line optimizer can use this narrower standard deviation to “push” the blend target closer to the “legal” product specification and reduce blend giveaways.
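The arithmetic behind that “push” can be sketched in a few lines. This is a minimal illustration with hypothetical numbers, assuming the blender targets roughly two standard deviations inside a maximum-limit spec to stay legally safe:

```python
# Sketch: how analyzer precision affects blend targets (illustrative numbers).
# To be highly confident of meeting a maximum spec despite measurement
# scatter, the blend target must sit about 2 standard deviations below
# the legal limit.

def blend_target(spec_limit, sigma, z=2.0):
    """Target a blend must aim for so that measurement scatter
    rarely pushes the certified result over the limit."""
    return spec_limit - z * sigma

RVP_LIMIT = 9.0          # psi, hypothetical summer spec
LAB_SIGMA = 0.30         # single ASTM lab test, psi (assumed)
ANALYZER_SIGMA = 0.05    # dedicated on-line RVP analyzer, psi (assumed)

lab_target = blend_target(RVP_LIMIT, LAB_SIGMA)          # 8.40 psi
online_target = blend_target(RVP_LIMIT, ANALYZER_SIGMA)  # 8.90 psi

# The tighter analyzer lets the blender run 0.5 psi closer to spec;
# that recovered margin is reduced giveaway.
print(lab_target, online_target)
```

With these assumed sigmas, the on-line analyzer recovers half a psi of RVP room on every barrel blended.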
Finally, it is the issue of usability, which boils down to ease of use by ordinary mortals, not Nobel-prize winners, and this is achieved by integration and simplification.
There are two optimizers involved in blending: a short term blend planning optimizer, and an on-line blend property optimizer.
The short-term blend planning problem involves continuous variables (e.g., inventories), discrete decisions (e.g., tank in/out of service), and non-linear property correlations (e.g., octane numbers). Basically, it calculates the initial optimal recipes for all the blends over an interval of 10 days to a month. The objective is to come up with the least expensive blend that meets all the specs within the constraints of available inventory and refinery process production, using available blendstock properties, while ensuring that all deliveries are made on time. This optimization ensures that you don’t use all the “good stuff” for your initial blends and then find yourself left with “garbage” for the blends at the end of the month.
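The shape of that problem, for a single blend, can be illustrated with a toy example. A real planner uses an LP/NLP solver over many blends and time periods; here a brute-force search over recipes merely shows the objective (cheapest recipe) and the constraints (specs, availability). All component names, properties, and prices below are invented for illustration:

```python
# Toy blend planning illustration: find the cheapest recipe meeting an
# 87 RON minimum and a 9.0 psi RVP maximum, with butane capped at 10%.
# Properties are assumed to blend linearly here for simplicity.

COMPONENTS = {             # (RON, RVP psi, $/gal) -- hypothetical values
    "reformate":   (98.0,  4.0, 2.30),
    "lsr_naphtha": (70.0, 11.0, 1.60),
    "n_butane":    (93.0, 52.0, 1.10),
}

def blend_props(recipe):
    """Volume-weighted (linear) blend of RON, RVP, and cost."""
    ron = sum(f * COMPONENTS[c][0] for c, f in recipe.items())
    rvp = sum(f * COMPONENTS[c][1] for c, f in recipe.items())
    cost = sum(f * COMPONENTS[c][2] for c, f in recipe.items())
    return ron, rvp, cost

best = None
STEPS = 100  # 1% recipe increments
for i in range(STEPS + 1):
    for j in range(STEPS + 1 - i):
        k = STEPS - i - j
        recipe = {"reformate": i / STEPS,
                  "lsr_naphtha": j / STEPS,
                  "n_butane": k / STEPS}
        ron, rvp, cost = blend_props(recipe)
        if ron >= 87.0 and rvp <= 9.0 and recipe["n_butane"] <= 0.10:
            if best is None or cost < best[1]:
                best = (recipe, cost)

recipe, cost = best
print(recipe, round(cost, 3))
```

The real planning problem adds inventories, multiple blends competing for the same components, delivery dates, and non-linear correlations, which is exactly why an optimizer is needed rather than enumeration.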
The on-line blend property optimizer has a slightly different objective: it works within the bounds or “degrees of freedom” of the blend planner’s recipe (say, a window of ±5% on inventory constraints) to generate optimum blend recipes based on the cost of quality giveaway and on direct feedback of what is actually happening in the blend, the readings from the on-line analyzers, “tweaking” the recipe every couple of minutes. There are other sophistications, such as taking into account flowmeter rangeability and control step size, and handling infeasibility automatically, which are very helpful in providing a credible and usable optimizer.
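One common way such feedback is folded in (a sketch of the general idea, not any particular vendor’s method) is to carry an exponentially filtered bias between the property model’s prediction and the analyzer reading, and reoptimize against the corrected model each cycle. The filter constant and RON numbers below are illustrative:

```python
# Sketch of the on-line optimizer's feedback "tweak": compare the analyzer
# reading at the blend header against the model's prediction, and carry the
# filtered difference forward as a bias on the property model before
# reoptimizing each control cycle (every couple of minutes).

def update_bias(bias, predicted, measured, filter_gain=0.3):
    """Exponentially filtered model bias from analyzer feedback."""
    return bias + filter_gain * ((measured - predicted) - bias)

bias = 0.0
# The model says 87.4 RON; the on-line analyzer keeps reading ~87.1.
for measured in (87.1, 87.0, 87.1, 87.2, 87.1):
    bias = update_bias(bias, 87.4, measured)

# The optimizer now treats the recipe as blending lower than the raw
# correlation predicts, and trims the recipe accordingly.
print(round(bias, 2))
```

The filtering matters: reacting fully to every analyzer reading would chase noise, while the filtered bias tracks the persistent model error.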
Now, what are the problems with optimization tools and techniques?
First, the most frequently used technique, linear programming (LP), has a significant problem associated with it: its answer depends on the starting (or initial) conditions and the magnitude of the constraints. Sometimes it gives no answer at all (what is called an infeasible solution).
By optimum, we mean a "global" optimum, the best answer a model can find. Classical optimizers suffer from getting stuck on a "local" optimum, meaning the best answer found varies with the particular path the model takes and with the problem's initialization, whereas in reality there might be a better answer.
An additional complexity is that blending is mostly non-linear; e.g., blending a 1:1 ratio of 100 RON heavy reformate and 94 RON alkylate does not give you a 97 RON blend. The non-linearity is accounted for using so-called “non-linear property correlations”, the most popular being the Ethyl RT-70 equation set and the DuPont interaction method for octanes and distillation. These non-linear equations can be incorporated into the blending equations, but solving them requires a non-linear optimizer. Non-linear optimizers are variations on the theme of classical linear optimization tools like the Simplex method of Linear Programming: successive LP (SLP), successive quadratic programming (SQP), MINOS, and generalized reduced gradient (GRG) methods, among others, handle such problems with various degrees of success. Of course, one needs to do blending studies to “customize” the coefficients of the generic non-linear property equations so they give meaningful results; this is seldom done because of the cost and hassle of testing.
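The structure of an interaction-type octane correlation can be sketched as follows. This is in the spirit of the methods named above, not their actual published form; the interaction coefficient is invented for illustration, which is exactly why real coefficients must be fitted from blending studies on your own streams:

```python
# Sketch of why octane blending is non-linear: a linear volumetric blend
# plus a pairwise interaction term. The coefficient -4.0 is hypothetical;
# real values come from fitted blending studies.

def blended_ron(ron_a, ron_b, frac_a, interaction=-4.0):
    """Two-component RON blend with an interaction correction."""
    frac_b = 1.0 - frac_a
    linear = frac_a * ron_a + frac_b * ron_b
    return linear + interaction * frac_a * frac_b

# 1:1 heavy reformate (100 RON) and alkylate (94 RON):
linear_guess = 0.5 * 100 + 0.5 * 94      # 97.0 -- the naive linear answer
actual = blended_ron(100.0, 94.0, 0.5)   # 96.0 with this assumed coefficient
print(linear_guess, actual)
```

Note that the interaction term vanishes at the pure-component ends (frac_a of 0 or 1) and is largest near 50/50 blends, which matches the intuition that pure streams measure at their own octane but mixtures deviate.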
To handle common infeasibility issues, an additional “trick” is the concept of fictitious “buying” and “selling” of components, which almost always ensures a “feasible”, though not necessarily truly optimal, solution. This is particularly valuable to blend planners because it provides directional signals for modifying the refinery run plan and process operating conditions.
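The mechanics of the trick amount to adding heavily penalized slack variables to the model. A minimal sketch for one component, with an assumed penalty price far above any real component cost, shows how a nonzero slack both keeps the model feasible and flags the planner:

```python
# Sketch of the fictitious buy/sell trick: if a blend needs more of a
# component than inventory allows, a heavily penalized slack "purchase"
# keeps the model feasible, and a nonzero slack in the solution is the
# planner's signal to revisit the refinery run plan. Numbers hypothetical.

PENALTY = 1_000.0   # $/bbl -- deliberately far above any real cost

def with_slack(required_bbl, inventory_bbl, real_cost_per_bbl):
    """Return (real_bbl, fictitious_bbl, total_cost) for one component."""
    real = min(required_bbl, inventory_bbl)
    fictitious = max(0.0, required_bbl - inventory_bbl)
    cost = real * real_cost_per_bbl + fictitious * PENALTY
    return real, fictitious, cost

real, slack, cost = with_slack(12_000, 10_000, 45.0)
if slack > 0:
    print(f"infeasible as posed: short {slack:.0f} bbl -> revisit run plan")
```

Because the penalty dwarfs real costs, the optimizer only uses the fictitious purchase when there is no genuine feasible recipe, so slack activity is a trustworthy directional signal rather than noise.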
Finally, setting up the LP equations properly requires operations research expertise, in addition to process expertise, as does the troubleshooting of infeasibilities or ridiculous or absurd “solutions”.
In summary, classical LP optimization and its variations suffer from finding local rather than global optima, sometimes finding no solution at all, having convergence problems, and being mathematically very elaborate; knowing operations research and the mathematics of optimization is a necessary condition for getting answers and devising work-arounds, and that expertise is a very scarce commodity in today's workforce. Nevertheless, good optimizer design using the above “tricks” makes the difference between a useless optimizer that wastes your time and energy and one that can reap most (~60%) of the blending optimization benefits.
Is there “light at the end of the tunnel?” Yes. New technologies exploiting low-cost PC power have made practical Genetic Algorithm (GA) optimizers, sometimes combined with neural-network property models, which search by generating many candidate solutions and then refining the search based on the fitness feedback they receive. GAs are simple to use by people who are not operations research mathematicians, inherently handle mixed-integer and non-linear problems, are far less prone to getting trapped in a local optimum, and are fast and inexpensive.
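A bare-bones GA for blend recipe search can be sketched in a few dozen lines; it shows why no derivatives or LP matrix setup are needed: just a way to encode a recipe, score it (with penalties for spec violations), and evolve the population. All blendstock numbers and GA parameters below are illustrative, not a production optimizer:

```python
# Minimal genetic-algorithm sketch for blend recipe search.
# Specs: 87 RON minimum, 9.0 psi RVP maximum; minimize cost.
import random

random.seed(7)  # fixed seed so the illustration is repeatable

STOCKS = [(98.0, 4.0, 2.30),   # (RON, RVP psi, $/gal) -- hypothetical
          (70.0, 11.0, 1.60),
          (93.0, 52.0, 1.10)]

def normalize(genes):
    total = sum(genes)
    return [g / total for g in genes]

def fitness(genes):
    """Cost plus heavy penalties for missing the specs (lower is better)."""
    f = normalize(genes)
    ron = sum(x * s[0] for x, s in zip(f, STOCKS))
    rvp = sum(x * s[1] for x, s in zip(f, STOCKS))
    cost = sum(x * s[2] for x, s in zip(f, STOCKS))
    return cost + 10.0 * max(0.0, 87.0 - ron) + 10.0 * max(0.0, rvp - 9.0)

def evolve(pop_size=60, generations=120):
    pop = [[random.random() + 1e-6 for _ in STOCKS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]                  # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # blend crossover
            i = random.randrange(len(child))              # point mutation
            child[i] = max(1e-6, child[i] * random.uniform(0.5, 1.5))
            children.append(child)
        pop = survivors + children
    return normalize(min(pop, key=fitness))

best = evolve()
print([round(x, 3) for x in best])
```

Infeasibility is handled for free: a recipe that misses a spec is simply a bad score, not a solver failure, and the penalty weights play the same role as the fictitious buy/sell prices in the LP world.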
Conventional vs. Genetic Algorithm Optimizer
Blending, optimized or otherwise, is fundamentally dependent on accurate property values for the blendstock, and the blended product at the in-line blender header.
The overworked refinery labs just can’t grab samples and analyze them fast enough for today’s blending environment. Typically, the best you can expect is to have the lab analyze your blendstock tanks once a shift; unfortunately, it is more often every three days to once a week.
For troubleshooting during a blend in progress, it’s just about a nightmare to grab and store samples correctly, and then do a quick and accurate analysis to be able to calculate mid-course corrections. All this stress causes disputes between the blending people, the lab, planning and economics, and third parties (pipelines, barges, etc.).
The proven solution is the use of on-line analyzers.
For multi-property analysis (i.e., RON, MON, RVP, distillation, density, oxygen, aromatics, benzene, olefins), it is hard to beat a reliable Near-InfraRed (NIR) analyzer or, lately, a Nuclear Magnetic Resonance (NMR) analyzer. A full 70% of NIR analyzers do not work (the models are not good LONG TERM at predicting blend properties) because users try to get going “on the cheap” with do-it-yourself property modeling. The modeling MUST be done by the analyzer vendor’s experienced chemometrician, and it has to come with performance guarantees proven in an actual Site Acceptance Test (SAT) with live blends. Furthermore, a NIR model support contract with the vendor, at modest cost, ensures long-term, headache-free operation. Users who do this are rewarded with an extremely reliable, low-maintenance, and very flexible on-line analyzer system. The same applies to NMR!
Very importantly, these analyzers can be multistream at modest costs, and one ought to take advantage of this capability to get reliable blendstock properties for better blend optimization performance (rather than using week-old lab data); a side benefit is easing the workload on the lab, and better responsiveness from them when you really need them.
Not everything can be measured with NIR/NMR.
RVP is a key property of gasolines; no inferential type of analyzer comes close to a classical RVP analyzer in terms of accuracy and repeatability. You can start an automation project with an NIR (measures RVP to ± 0.15-0.3 psi), and as soon as you can get the money, buy and install a dedicated RVP analyzer (± 0.05 psi); it will pay for itself almost instantly!
Sulfur is another key spec for meeting Tier 2’s 30 ppm requirement. The sulfur analyzer you buy today needs to cover a wide range (from today’s 300-1000 ppm sulfur to tomorrow’s 10-30 ppm sulfur) with excellent repeatability and reproducibility. Current technology does this rather awkwardly, using X-ray or UV fluorescence, lead acetate tape, or gas chromatography, but there is a big incentive for analyzer vendors to develop a better, faster, and more reliable wide-range sulfur analyzer. New models are appearing monthly!
All the elements of a blending SYSTEM (i.e., blend planning optimizer, creation of electronic blending work orders, automatic blend lineups, automatic start-up, control and optimization, blend performance monitoring, end-of-blend reporting with key performance indicators, automatic interfaces to planning and economics, the lab LIMS, tank gauging, etc.) have to be taken into account. You don’t have to buy everything at once, although if you can afford it, you get all the benefits immediately and avoid a future budget request. The data transfer between these modules has to be automatic, without having to re-enter the same information by hand five times, or worry about scaling, engineering units, typing mistakes, and… the TIME to do all this while juggling other tasks, answering phones and beepers, with the marine radio blaring.
The other aspect of usability is “human factors”, or ease of use. For US Navy veterans, the familiar KISS principle is particularly appropriate in the blending software area. Many displays are designed by C++ programmers (without a clue of what a pump looks like) and suffer from contemporary software-geek “featuritis”: confusing, irrational, overloaded cascading windowed displays from which the blending operator has to “fish out”, across a number of different screens, the couple of key pieces of information needed to do the job (which could simply be put on one display). Other very helpful things are “technical tricks” to ensure the optimizer always returns a feasible solution, fat red arrows pointing to problem areas, and pre-programmed defaults (e.g., in blend lineup) based on what the operator does 60 to 70% of the time, rather than being cute, “cool”, and complicated.
Less than 10% of all blending automation systems are integrated. Users bought the “best of breed” philosophy: a refinery LP from vendor A, a blend planning optimizer from vendor B, a blend ratio controller from vendor C, a blend property optimizer from vendor D, tank gauging from vendor E, etc., without spending the extra couple hundred thousand dollars to integrate all these disparate packages. They are all complicated, use different terminology, and have different user interfaces; no wonder people are not using the optimizers, and it shows in the lackluster performance… to get things out the door, just dump in MTBE and alkylate until the blend exceeds all the specs, and move it. Considering that the gasoline part of refinery cash flow is in the $300 to $700 million/year range, that is a poor way to run the business.
Today, fuels blenders face big, difficult, and costly challenges. Technology can help to some extent, particularly PROVEN technology, preferably designed by and for blending operations people. A full 80% of refineries have some form of blending automation, consisting of an in-line blender and a DCS, some with on-line analyzers (knock engines with RVP and distillation analyzers, some with NIR and NMR), and a blend ratio controller software on the DCS. The on-line analyzers are in various stages of use or disuse, with poor sampling systems and dubious calibrations, mostly involved in disputes between the blenders and the lab people. The optimizers, when available, are used mostly for rough monitoring, with obsolete calibration of non-linear property models, if at all. Integration, most frequently, is via e-mailed pumping orders or faxed or phoned-in blending orders, and looking at separate tank gauging screens and LIMS screens.
The benefits for a 100,000 BPD gasoline blender are easily on the order of $6 to $8 million/year. Typical results from a recent project were about $2 million/year in giveaway reduction, plus the elimination of 11 component tanks and 9 finished product tanks and their associated inventories, for another $3 million/year in benefits. To make money in blending, you need optimizers to economically manage your inventory, deliver on time, minimize giveaways, and flexibly blend away whatever “junk” blendstock you have. There are robust blending optimizers (which work all the time), reliable multi-property on-line analyzers, process control vendors experienced in tackling the integration and usability issues of a blending project, and well-established procedures for installing and running things. What is needed is an orderly way to go about it: a quick study to put together a design basis, cost estimates, and a project implementation plan. It’s probably the best money you’ll ever spend on blending.