Pulp & Paper Stock Consistency Transmitters: The True Cost of Ownership



One of the objections I hear regarding passive mechanical consistency transmitters is the high cost of ownership that these systems purportedly have.

The thinking goes something like this.  Mechanical transmitters typically have a sensor that protrudes into the flow in the line.  Sooner or later, that sensor will get hit and damaged and will need to be replaced.  Keeping a mechanical sensor operational therefore means periodically replacing it, which represents an ongoing expense.  The alternative, a rotary transmitter, is typically installed so that its sensor is wholly contained within a stilling chamber and is thus unlikely to be hit and damaged.  Its cost of operation must be lower, right?

While there is some truth to this, it’s not the whole story – not by a long shot.
It’s true that passive mechanical systems do get hit from time to time and their sensors will need to be replaced.  It’s also true that rotary systems don’t often get damaged because their sensors are offset from the flow.   That said, what is not true is the notion that the cost of ownership for a rotary is far less than that for a mechanical. It isn’t.

Let me illustrate this with an example using my company’s C3000 sensor:
The TECO C3000 Consistency Sensor
A rotary system will cost you somewhere in the neighborhood of $30,000. Let’s assume that it will last five years before it needs to be replaced.  A complete TECO C3000 mechanical system, on the other hand, will typically cost you somewhere under $7,000. Let’s say you have to replace the C3000 sensor once per year.  Your annual cost, including the trade-in credit for the original sensor core, is under $2,000 per sensor.

Over five years you’ll pay less than half of what you’d pay for the rotary initially.  Let me say that again – you’d pay less than half of what you’d pay for the rotary.
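To make that concrete, here’s a quick back-of-the-envelope comparison in Python using the round numbers above. One assumption is mine: the sensor that ships with the initial C3000 system covers the first year, so four replacement sensors are needed over a five-year span.

```python
# Five-year cost-of-ownership sketch using the round numbers from this post.
# Assumption (mine): the sensor included with the initial C3000 purchase
# covers year one, so four replacements are needed over five years.

YEARS = 5

rotary_system = 30_000       # rotary purchase price; assume it lasts the full five years
c3000_system = 7_000         # complete C3000 mechanical system ("under $7k")
c3000_replacement = 2_000    # annual sensor replacement, net of trade-in credit ("under $2k")

rotary_total = rotary_system
c3000_total = c3000_system + (YEARS - 1) * c3000_replacement

print(f"Rotary, 5-year cost: ${rotary_total:,}")   # $30,000
print(f"C3000, 5-year cost:  ${c3000_total:,}")    # $15,000
print(f"C3000 vs. rotary:    {c3000_total / rotary_total:.0%}")
```

Take the "under $7k" and "under $2k" figures literally and the five-year total drops below that halfway mark.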

Don’t get me wrong, rotary consistency transmitters are cool devices and they certainly have their positive points, but they ain’t cheap.  Passive mechanicals are way, way less expensive, and you can use them to measure most of the same consistency range you would use a rotary to measure.  Properly applied, the TECO C3000 sensor will give you way more bang for the buck than any other system available today.

Sampling v2.0

I want to take another look at proper sampling because it is so key to a good calibration.  While there are statistical tricks to get the most out of whatever calibration data you produce, if you don't have good sampling, you are, at best, creating big problems for yourself.

We want to collect samples that are representative of the process.  Representative samples have an average that is very close to the average of the whole process at that moment in time.  Samples that are not representative will have averages that bear little resemblance to the process.

Collecting representative samples isn't difficult, but you do have to follow certain rules. 

1) Collect samples from lines where the flow characteristic is known to be stable, i.e., in plug flow.  Stable flow means that you will likely not have any turbulence in the line that might de-water your stock or otherwise introduce non-representative sampling.   The easiest way to ensure this is to find a straight length of pipe that is at least seven pipe diameters long, and without any bends or obstructions in it.  

2) Make sure the pipe is full.  No, really, make sure the pipe is full.  Choose lines that are horizontal, or vertical lines with flow going up.  Choosing a vertical line with flow going down is asking for trouble.  Do not take samples from chests if you can avoid it.

3)  If you are planning to use your data to build a calibration for an instrument, you should make sure that the sample port is close to the instrument in question.  There is no point in running analyses if the instrument is in another line or on the other side of the mill.

4) The sample port should have an internal extension that protrudes roughly to the center of the stock line.  Use proper sampling valves if you can, and avoid ball valves that have been installed on the side of a pipe.  The image below illustrates how variable things can get as they move through your stock line.  As you can see, it can sometimes be a challenge to get that "representative" sample.  That said, your best chance is to take samples from the center of the pipe rather than the sides.
Variability in a stock line


5) Open the valve and let the stock run freely for a few seconds to ensure that all the stock from the last sample is fully discharged from the sampling line.

6) Collect a large quantity of stock (a gallon or two at minimum; five gallons is better).

7)  When back in the lab, agitate your large volume of stock and take at least two small samples.  Analyze each according to your favorite method and average the results.  This will yield you one data point.

8) If you haven't done so before, run a Total Error Variance (TEV) to estimate the quality of your sampling and analytical technique.  TEVs are sort of a poor man's Six Sigma: they provide an estimate of how much of the variability in your analyses is attributable to your sampling and how much is due to your technique.

If you don't have a TEV in hand, send me an email and I'll send you a copy of our spreadsheet that you can use.
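I can't reproduce the spreadsheet here, but to give you a feel for the kind of split a TEV makes, here's a bare-bones sketch.  The layout (a handful of buckets, duplicate lab analyses of each) and the numbers are mine for illustration; they are not taken from our spreadsheet.

```python
# Illustration only: a TEV-style split of variance into an analytical part
# (estimated from duplicate analyses of the same bucket) and a sampling part
# (whatever spread between buckets the analytical error can't explain).
from statistics import mean, variance

# % consistency: two duplicate lab analyses per bucket (made-up values)
buckets = [
    (4.02, 3.96),
    (4.11, 4.19),
    (3.88, 3.94),
    (4.05, 3.99),
]

# analytical variance: pooled variance of the duplicate pairs
analytical_var = mean(variance(pair) for pair in buckets)

# variance of the bucket averages; each average carries half the analytical
# variance because it is the mean of two analyses
bucket_means = [mean(pair) for pair in buckets]
sampling_var = max(variance(bucket_means) - analytical_var / 2, 0.0)

print(f"analytical variance: {analytical_var:.4f}")
print(f"sampling variance:   {sampling_var:.4f}")
```

However your TEV is set up, the principle is the same: if the sampling term dominates, fix your sampling before you blame your technique or your instrument.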

The 64 Dollar Question



So, how accurate can a transmitter be, anyway?


This is a question I frequently hear from both my customers and my prospects.  While I understand why they ask it, the real question they should be asking is, “How repeatable is your transmitter?”

What’s the difference?  Glad you asked.

Repeatability refers to how closely something – an instrument, for example  – will reproduce a measurement given the same test conditions.

Accuracy, on the other hand, refers to how well that same measurement agrees with an independent assessment of the same thing.  When it comes to consistency measurements, accuracy typically refers to how well a particular transmitter agrees with a lab assessment of the same stock.

The lab assessment could be anything, but it is usually some variant of the TAPPI 240 method, and this is where the problem comes from.  The TAPPI 240 method specifies a repeatability of 10% for that test, which means that 95% of the time, the lab test, if executed as specified, will yield results within 10% of each other.  So, for a nominal test of 4.0% consistency, a second, properly executed test of the same stock sample should yield a number between 3.6% and 4.4%.  Of course, the repeatability statement also says that 5% of the time, or once out of twenty tests, you could get a number that’s worse than that 10% limit.

What makes this really scary is that very few laboratories actually execute the TAPPI 240 test as described in the procedure.  Many labs take shortcuts – I once saw one guy try to squeeze a sample dry by stepping on it – which means that the repeatability of the manual lab test may actually be worse than 10%.

That’s the reason why we manufacturers prefer to talk in terms of repeatability rather than accuracy.  While we can never be sure how accurate our transmitters will be relative to the procedures your lab uses, we can be very certain about how our transmitters will respond, given the same stock conditions.  In the case of the TECO StockRite series of consistency transmitters, that repeatability is 0.0025 of the full scale range.  So, if your transmitter is set to read from 2% to 6%, for example, the TMC6000 system will repeat to within 0.01% (4.0 × 0.0025).
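If you want that arithmetic spelled out, here it is for the 2% to 6% example (the range values are just the example from above):

```python
# Repeatability in consistency units for a given calibrated range.
REPEATABILITY_FRACTION = 0.0025      # StockRite spec: 0.0025 of full-scale range

low, high = 2.0, 6.0                 # example calibrated range, % consistency
span = high - low                    # full-scale range = 4.0 percentage points

print(f"Repeats to within {span * REPEATABILITY_FRACTION:.3f}% consistency")  # 0.010%
```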

Which is pretty doggone repeatable, if you ask me. 

So the correct answer to the question “How accurate is your transmitter?” is that we are highly repeatable.

The C5000 Sensor - Our Workhorse System




You may have heard me mention a thing or two about our retractable consistency sensor, the C5000. 

I know, I go on and on about this, but this sensor is really a big deal for our customers, and if you haven’t tried one yet, it could probably be a big deal for you, too.

Why is it such a big deal? 

It’s retractable.  This means you can pull the sensor body out of the line at any time.  You don’t have to wait for a shutdown or isolate the line in order to service the system.

Because we made it to be retractable, we also made it easy to retract.  Here’s a short video I posted on YouTube on how to do it.  As you can see, you can extract a sensor in less than a minute.


Because it is retractable, we also made it hot-swappable.  That means you can swap out a sensor with another one without having to go through a recalibration.  Just auto-zero the new sensor and you can use your existing calibration without further adjustments.  All TECO sensors, by the way, have this “auto-zero” feature.  You can swap any of them at any time without having to recalibrate.  The nice thing about the C5000 is that you can do it without having to wait for a shutdown.

The C5000 also has what I like to call a superior flow characteristic.  What do I mean by that?  Well, you’ve heard me mention before that all consistency sensors – the mechanical ones, anyway – are sensitive to changes in flow rate.  This means that shifts in flow rate will be perceived by the sensor as shifts in consistency.  If you’re not careful with your consistency setup, you might find yourself going in circles chasing what looks like a consistency problem but is really a flow rate problem.

The C5000 avoids this problem altogether because it is immune to shifts in flow rate below 3.0 fps.  Put another way, there is no measurable impact on the consistency measurement for flows below 3.0 fps.  You can go from nothing to 3.0 fps and back again and it won’t have any impact on the consistency signal.  

Of course, once you get above 3.0 fps, it’s a different story.  The C5000 will start to react to changes in flow rate.  Because we know that, we have provided on-board velocity compensation in our transmitter for all of our sensors.  Just land a flow signal on the transmitter and you can automatically compensate for variable flow rates up to 6.0 fps (up to 11 fps with our C9700 sensor body).
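To make the idea of velocity compensation a little more concrete, here’s a purely illustrative sketch.  The linear correction and the gain below are made up for illustration; they are not the compensation actually implemented in our transmitter, which is handled on board once you land the flow signal.

```python
# Conceptual sketch only: below the threshold the raw reading is used as-is;
# above it, a correction that grows with velocity is applied. The linear form
# and the gain are invented for illustration and are NOT the actual on-board
# compensation in the TECO transmitter.

FLOW_THRESHOLD_FPS = 3.0    # below this, the C5000 reading is unaffected by flow
ILLUSTRATIVE_GAIN = 0.05    # % consistency per fps above threshold (made-up value)

def compensated_consistency(raw_pct: float, velocity_fps: float) -> float:
    """Return a flow-compensated consistency reading (illustration only)."""
    if velocity_fps <= FLOW_THRESHOLD_FPS:
        return raw_pct
    return raw_pct - ILLUSTRATIVE_GAIN * (velocity_fps - FLOW_THRESHOLD_FPS)

print(compensated_consistency(3.50, 2.0))   # 3.5 -- no correction below 3.0 fps
print(compensated_consistency(3.50, 5.0))   # 3.4 -- corrected using the flow signal
```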

We’re low cost.  That means we won’t charge you an arm and a leg to buy one of our systems.  Also, when it comes time to swap sensor bodies, you don’t have to buy another whole system.  You can just buy the sensor body part.  We’ll even give you a credit when you send us the old sensor back (we do that with all of our consistency sensors, by the way, not just the C5000).

Finally, we’re local.  We fabricate and assemble everything in our factory in New Orleans.  We keep plenty of spare parts on the shelves, so we can ship you the things you need overnight.

If you haven't tried one of our C5000 sensors, please give me a call.  I'd be happy to talk to you about it.


Sheet Breaks are Expensive



Sheet breaks are a pain in the you-know-what and they are probably costing you more than you know.

Let’s take a close look at a very small example:

Mill A is a small recycle facility that makes roughly 65,000 tons/year of boxboard, or about 200 tons/day.  Let’s assume their cost for recycled paper is $150 per ton and that they experience about two breaks a day.  It takes that mill roughly 20 minutes to get everything back online and back up to speed after a break.

When you run the numbers, this mill is losing – just in terms of the sourcing cost of the paper they utilize – about $300,000 each year.  Their sales loss is significantly higher, probably upwards of $1.2 million each year.  If you assume that about 30% of the breaks are due to variations in freeness, then the cost of variable freeness at this mill is at least $300,000 each year in lost production.

$300,000.   

And that’s for a small mill with only a couple of breaks a day.  How much paper do you think this mill has to sell to make up for that $300,000?
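If you want to check my math, here’s a rough sketch of where those figures come from.  I only gave you the fiber cost above, so the roughly $600/ton selling price below is an assumed figure, chosen to show where a sales loss around $1.2 million comes from.

```python
# Rough reconstruction of the sheet-break math for Mill A.
# The $600/ton selling price is an assumed figure; only the $150/ton fiber
# cost is given above.

production_tpd = 200            # tons/day (roughly 65,000 tons/year)
fiber_cost = 150                # $/ton of recycled furnish
selling_price = 600             # $/ton, assumed
breaks_per_day = 2
minutes_per_break = 20

downtime_hours = breaks_per_day * minutes_per_break / 60       # 0.67 h/day
lost_tons_per_day = production_tpd / 24 * downtime_hours       # ~5.6 tons/day

fiber_loss = lost_tons_per_day * fiber_cost * 365              # ~$300,000/yr
sales_loss = lost_tons_per_day * selling_price * 365           # ~$1.2M/yr
freeness_share = 0.30 * sales_loss                             # ~$360,000/yr

print(f"Fiber cost lost to breaks:      ${fiber_loss:,.0f}/yr")
print(f"Lost sales:                     ${sales_loss:,.0f}/yr")
print(f"Share attributable to freeness: ${freeness_share:,.0f}/yr")
```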

So, here's a question for you:  

How much money is variable freeness costing you? 

It's a real pain. 

Sampling Shmampling



I get a lot of questions on calibration, but today, I want to talk a little bit about sampling, since it is absolutely key to take proper samples when you’re trying to build a calibration.



Why is sampling important?  It’s important because you are trying to get a sense of what an instrument is telling you when it indicates something.  The only way you can do that is to take physical samples of your process and compare the instrument’s responses with your manual evaluations of those samples.



There are two pieces to this.  Well, three, actually.  One, of course, is the method someone uses to analyze a sample.  This is usually reasonably well documented, at least for the “official” test.  TAPPI, for example, publishes an estimate of the repeatability of every lab test it recognizes (the repeatability for the official consistency test, by the way, is 10%, which, to my way of thinking, ain’t so great.  But hey, it is what it is).  Of course, if you’re not following the official procedure exactly, then your repeatability might not be as good as that.  I’ll talk more about this in another post.



The second piece is just how representative the sample being analyzed is of the process it was taken from in the first place.  If the sample you’re extracting from the process isn’t representative, then you are basically analyzing something that doesn’t mean much.  Put another way, if your samples aren’t representative, then you are wasting your time with your calibrations.  You won’t get very far at all.



What do I mean by representative? 



Since it’s impossible to analyze absolutely all of your stock, you have to estimate what’s in your line by analyzing just a tiny bit at a time – this is the sample I’ve been talking about.  If a sample is representative, it means you could have taken any number of samples in the same way and gotten roughly the same result.  Of course, keep in mind that you won’t ever get exactly the same result, because the process isn’t homogeneous and no sampling method is perfect, but you can get reasonably close if you try.  Put another way, your samples will likely be close to the average of the stock in the line, and have a narrow two sigma.



If, however, the sample isn’t representative, then that means that you could get any number of widely different results each time you captured a sample.  You wouldn’t get samples close to the average, plus they would probably be biased one way or another, and your two sigma would be wide.
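As a quick illustration of what a narrow versus wide two sigma looks like, here’s a small sketch.  The consistency values are made up.

```python
# Compare the spread of repeat samples from a good and a bad sampling point.
# A representative point gives a mean near the true line average and a narrow
# two-sigma band; a poor point gives a wide, often biased band.
from statistics import mean, stdev

good_point = [4.01, 3.97, 4.05, 3.99, 4.03]   # % consistency, made-up values
bad_point  = [3.60, 4.45, 3.85, 4.70, 3.40]

for name, samples in (("good sampling point", good_point),
                      ("bad sampling point", bad_point)):
    print(f"{name}: mean = {mean(samples):.2f}%, two sigma = {2 * stdev(samples):.2f}%")
```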

   

A good sampling regime is one in which a proper sampling valve is installed in a straight length of pipe at least seven pipe diameters long, and the valve is allowed to flow for a while before each sample to ensure the sampling line is flushed of any residual stock.



A bad sampling regime would be something like a ball valve that’s just welded onto the side of a pipe somewhere, with no thought given to the nature of the flow in the line at that point.  Is the stock flow stable, or is it turbulent?  Has the stock dewatered?  Was there some leftover stock still in the sample line from yesterday or last week before you captured your sample?



And it’s not enough to ensure that your samples are merely statistically representative of the process.  You also have to ensure that both you and the instrument are actually looking at the same stock.



I really think that most people simply don’t pay enough attention to this last point. 



Why do I say that?  Here’s an example. 



I was once asked by a customer to help calibrate some of their equipment because they were having all sorts of problems and disagreements with their results.  They thought the problem was with the instrumentation.  As it turned out, the problem wasn’t with the equipment at all, but with how they were sampling their stock.

The equipment was installed in a stock line which then dumped into a chest.  The samples, however, weren’t taken from the same stock line as the instrumentation was in.  Instead, the samples were taken from the discharge of that chest.  The chest had a residence time of about 30 minutes, so whatever came out of the discharge was stock that had been mixed for thirty minutes.  There was no way that the lab could ever analyze the same stock that the instrument was exposed to. 



This situation was set up to fail.  It was guaranteed that the lab analysis and the instrument would always disagree because they were measuring two different things at different times.  Any effort expended under these conditions is a waste of time, because as the man from New England said,  “You just can’t get there from here”. 



When you install a sampling valve, you want to take care that it is close to the instrument you are trying to calibrate so that you can be sure that both you and the instrument are analyzing the same stock.



Let me also make the point that you shouldn’t balk at the cost of the sampling valve.  Yes, it’s more expensive than a ball valve, but it makes no sense at all to save a few hundred dollars on a sampling valve when you’re trying to calibrate a $50,000 instrument that will hopefully have a multimillion-dollar impact on your process.  Saving those few hundred dollars may completely invalidate the whole thing.



So, here’s what you should shoot for when sampling your process for an instrument.



1) Select a proper sampling point.
   a. Site the sampling valve close to the instrument for which it is intended.
   b. Ensure that you will be sampling the same stock that the instrument is analyzing.

2) Ensure you are getting representative samples.
   a. Use a proper sampling valve.
   b. Install it in a section of straight line at least seven pipe diameters long.
   c. Install it in the side of the line, or according to the manufacturer’s recommended method.
   d. Open the valve fully when preparing to capture a sample.
   e. Allow the valve to flow to ensure that any residual stock is cleared from the line before capturing a bucketful.

3) Bracket the instrument analysis with samples, capturing stock before, during, and after the instrument completes its analysis:
   i. Capture a bucket of stock.
   ii. Start the instrument analysis.
   iii. Capture a second bucket of stock.
   iv. Let the instrument complete its analysis.
   v. Capture a third bucket of stock.

Yes, it’s a lot of work, but the result is worth the effort.  You'll have captured meaningful samples that you can then analyze to build your calibration.