Streaks and Banding: Measuring macro uniformity in the context of optimization processes for inkjet printing

Dr Danny Hall, Chief Screening Scientist, Global Graphics Software

Global Graphics Software’s chief screening scientist, Dr Danny Hall, discusses the emerging standards designed to objectively characterize directional print variations, with particular reference to the proposed ISO TS 18621-21 standard:

Directional printing artifacts like streaks and banding are commonly encountered problems in digital printing systems. For example, inkjet systems may produce characteristic density variations due to inconsistencies between printheads or intra-printhead variations between nozzles. When these variations have a high spatial frequency they can be characterized as causing ‘streaking’ in the direction of print; where the variations have a low spatial frequency they can cause the appearance of ‘banding’ in the direction of print.

Other directional streaking and banding effects may be due, for example, to variations in printhead or substrate velocity, resulting in density variations across the direction of printing. The ‘wow’ and ‘flutter’ of the digital printing age.

In the décor market there is a visual perceptual test sometimes referred to as a ‘porthole test’. In this test a human subject is presented with a print (e.g. wallpaper or floor covering) rotating slowly behind a round window under controlled viewing conditions. If they can determine the direction of printing then the test is a ‘fail’. One aspect of the porthole test is that it allows for the perceptual response differences between different printed images: for example, the same press and conditions may pass with one job containing a lot of graphical detail, yet fail on another job requiring flat tints.

There are currently emerging standards designed to objectively characterize this type of directional print variation. For example, the proposed ISO TS 18621-21 standard defines a measurement method for the evaluation of distortions in the macroscopic uniformity of printed areas that are oriented in the horizontal and/or vertical direction, like streaks and bands.

Such recognized standards could be very useful for the development and maintenance of printing systems, as well as potentially allowing for the quantitative comparison of directional quality between different printing systems.

Having an objective ISO measurement of directional uniformity would therefore be a very useful step forward and something we at Global Graphics would like to encourage.

As a first step the current ISO TS 18621-21 proposal looks good and usable, and provides a robust, simple metric that can be calculated using standard equipment.

However, in exploring the potential use of this standard we also note a few limitations which may constrain the widest possible utility of a general directional measure in printing. For example, the frequency response of the proposed measurement technique may limit the response of the measure to higher frequency ‘streaking’ artifacts. This may be inevitable given the measurement devices available, but the potential spatial frequency bias needs to be clearly understood and accepted.

Another challenge in standardizing such a metric across different printing platforms is the difficulty of selecting some kind of objective color tint to measure. The ‘goodness’ of the proposed ISO TS 18621-21 metric will depend on the color tint chosen for measurement; making such measurements standard between systems with different color gamuts is therefore a difficult, perhaps impossible, task. Nonetheless we would like to propose a color tint selection strategy which, at least a priori, could have the potential to provide a selection of standardized color tints that could be used meaningfully with ISO TS 18621-21 across a range of different printers.

Frequency response

The frequency response is discussed in the ISO proposal. There is a potential bias in the measuring methodology towards lower frequencies due to the suggested 6mm sampling cut-off. For example, in our experience the main frequency elements of ‘streakiness’ may not be captured by this methodology, potentially resulting in a bias towards lower-frequency ‘banding’ effects. That’s not necessarily a problem; it just needs to be understood that this metric may be biased towards ‘banding’ over ‘streakiness’ determination.

Where any streakiness is random and uncorrelated with lower frequency banding, changes in high frequency streakiness can be expected to show up statistically as variations at lower frequencies (white noise). However, there are currently printing compensation systems available (such as PrintFlat™) which can correct for directional variations so that high and low frequency variations are no longer correlated in a Gaussian way. In such a case the proposed metric could, in the worst case, be blind to any underlying changes in high frequency streakiness variation above the band-pass of the sampling system.
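The low-pass behavior of an averaging aperture can be illustrated with a toy model (our own sketch, not part of the ISO proposal; the scan resolution and signal amplitudes are assumptions chosen for illustration). A synthetic density profile contains equal-amplitude low-frequency ‘banding’ and high-frequency ‘streaking’; averaging over a 6mm window removes almost all of the streak component while leaving the banding largely intact:

```python
# Toy sketch: a 6mm averaging aperture acts as a low-pass filter,
# hiding high-frequency 'streaking' while passing 'banding'.
# (Assumed scan resolution and amplitudes; not the ISO TS 18621-21 method.)
import math

MM_PER_SAMPLE = 0.5                          # assumed: one sample per 0.5 mm
APERTURE_MM = 6.0                            # the suggested 6mm cut-off
WINDOW = int(APERTURE_MM / MM_PER_SAMPLE)    # 12 samples per aperture

def moving_average(profile, window):
    """Slide a `window`-sample averaging aperture along the profile."""
    return [sum(profile[i:i + window]) / window
            for i in range(len(profile) - window + 1)]

def std(values):
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

n = 400
positions_mm = [i * MM_PER_SAMPLE for i in range(n)]
# Low-frequency 'banding' (50 mm period) plus high-frequency 'streaking'
# (2 mm period), both with the same amplitude.
banding = [0.05 * math.sin(2 * math.pi * x / 50.0) for x in positions_mm]
streaks = [0.05 * math.sin(2 * math.pi * x / 2.0) for x in positions_mm]
profile = [1.0 + b + s for b, s in zip(banding, streaks)]

filtered = moving_average(profile, WINDOW)
print("std before 6mm averaging: %.4f" % std(profile))   # both components
print("std after  6mm averaging: %.4f" % std(filtered))  # banding only
```

With these assumed numbers the overall variation roughly halves after filtering, even though the streak component is unchanged on the print: exactly the kind of blindness to high-frequency variation discussed above.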

Color selection

The proposed standard does not specify the printed color to use, which may make objective comparisons between systems based on this metric difficult. The metric itself is also correlated with the underlying contrast of the tint selected: for example, one can expect an apparently better metric to result from printing a 5% tint compared to a 70% tint of the same ink. Therefore, an objective method for selecting color tints could be helpful, and this is something we would like to explore.
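The tint-contrast effect can be sketched with a deliberately simple toy model (our own illustration, not the ISO measurement; the ink density, nozzle-variation figure and linear scaling are all assumptions). If each nozzle has the same relative gain error, the absolute density variation, and hence an apparent non-uniformity score, scales with the tint coverage:

```python
# Toy sketch: the same relative nozzle variation produces a larger
# absolute density variation on a heavier tint, so the heavier tint
# scores 'worse'. (Assumed linear model; not the ISO TS 18621-21 metric.)
import math
import random

def printed_density(tint, nozzle_gain, ink_density=1.6):
    """Assumed model: density scales with tint coverage and nozzle gain."""
    return ink_density * tint * nozzle_gain

def uniformity_std(tint, gains):
    """Standard deviation of printed density across the nozzles."""
    densities = [printed_density(tint, g) for g in gains]
    mean = sum(densities) / len(densities)
    return math.sqrt(sum((d - mean) ** 2 for d in densities) / len(densities))

random.seed(1)
gains = [random.gauss(1.0, 0.03) for _ in range(1000)]  # assumed ±3% variation

print("std at  5%% tint: %.4f" % uniformity_std(0.05, gains))
print("std at 70%% tint: %.4f" % uniformity_std(0.70, gains))
```

In this linear model the 70% tint scores exactly 14 times worse than the 5% tint from identical nozzle behavior, which is why comparing metrics measured on different tints, or across presses with different gamuts, is so fraught.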

This is an abstract from Danny’s forthcoming talk at the TAGA Annual Technical Conference, March 17 – 20, 2019 in Minneapolis, MN.

Register here: https://www.taga.org/register/

Adjusting rendering of outlined text in Harlequin

By Martin Bailey, CTO, Global Graphics Software

In several sectors of the print market it is common practice to convert text to outlines upstream of a RIP, on the grounds that it’s then impossible for the wrong glyph to be printed. This is normal, for instance, in much of the label and packaging industry, especially when there is very robust regulation in place, such as in pharmaceuticals.

Every page description language defines “scan conversion” rules that specify which pixels should be marked when a graphic is painted onto a page; these build on the concept of “pixel touching”, specifying exactly when a vector shape counts as touching a pixel and therefore marking it.

When you’re using PDF (or PostScript, before that) the scan conversion rules are different for text specified using live fonts and for vector shapes. If you started with live text and then converted it to outlines then you have switched from using the text scan conversion rules to using the vector graphic rules. That has always meant that text converted to outlines tends to render slightly heavier than text using live fonts. And the smaller the text is, the more the weight difference becomes apparent.
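The weight difference between the two rule families can be illustrated with two idealized pixel-touching rules (a simplified sketch of our own, not Harlequin’s or the PDF specification’s exact algorithms): a rule that marks a pixel only when its center falls inside the shape, versus one that marks every pixel the shape overlaps at all. On a narrow vertical stem the overlap rule marks noticeably more pixels, i.e. renders heavier:

```python
# Toy sketch of two pixel-touching rules on an axis-aligned rectangle
# given in pixel coordinates. (Idealized rules for illustration only.)

def marked_centers(x0, y0, x1, y1, width, height):
    """Mark a pixel only if its center lies inside the shape."""
    return {(px, py)
            for py in range(height) for px in range(width)
            if x0 <= px + 0.5 < x1 and y0 <= py + 0.5 < y1}

def marked_any_touch(x0, y0, x1, y1, width, height):
    """Mark a pixel if the shape overlaps any part of it."""
    return {(px, py)
            for py in range(height) for px in range(width)
            if x0 < px + 1 and x1 > px and y0 < py + 1 and y1 > py}

# A 1.2-pixel-wide vertical stem, 4 pixels tall, starting at x = 0.9:
stem = (0.9, 0.0, 2.1, 4.0)
print(len(marked_centers(*stem, 8, 4)))    # 4 pixels: one column marked
print(len(marked_any_touch(*stem, 8, 4)))  # 12 pixels: three columns marked
```

The same 1.2-pixel stem comes out one pixel wide under the center rule and three pixels wide under the any-touch rule, which is the heavier rendering described above, exaggerated for clarity.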

FIG 1
FIG 1 – 2pt text in Times Roman showing various scan conversion rules.

In Fig 1 you can see this difference very clearly for very small Western text rendered at 2pt and 600dpi, still a common resolution for digital printers and presses. The top line shows text using live fonts, and the second line shows the PDF scan conversion rule for a vector fill. Note that at 2pt the RIP only has about 12 pixels for the height of an upper-case glyph.
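The pixel budget is easy to check with back-of-envelope arithmetic (our own calculation; the cap-height fraction is an assumed typical value, not taken from any specific font):

```python
# Back-of-envelope: pixels available for a glyph at a given point size
# and device resolution. One point is 1/72 inch.

def em_pixels(point_size, dpi):
    return point_size / 72.0 * dpi

CAP_HEIGHT = 0.72   # assumed typical cap-height fraction of the em

print(em_pixels(2, 600))                       # ~16.7 pixels per em at 2pt
print(round(em_pixels(2, 600) * CAP_HEIGHT))   # ~12 pixels for a capital
```

So at 2pt and 600dpi an upper-case glyph really does have only around a dozen pixels of height to work with, leaving very little room for error in the scan conversion rule.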

In early 2018 we added a new scan conversion rule for vector fills alongside our pre-existing rules in the Harlequin RIP. The intention was to make it possible to emulate the much lighter output that Esko’s FlexRIPs produce. Unfortunately, it also tended to emulate the tendency for very fine structures, especially fine horizontal strokes in small text, to disappear. You can see this in the third row of text in Fig 1.

This is obviously not an optimal solution, so we continued our development, and have now extended the original solution with what is called “dropout control”. This prevents very fine sections of a vector fill from “dropping out” when they fall on the page in such a way that they don’t cross the locations in the pixels that would trigger anything being marked. You can see the effect of this in the bottom line in Fig 1.
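A toy model shows the failure mode and one naive fix (entirely our own sketch; Harlequin’s actual dropout control is more sophisticated than this). A stroke thinner than a pixel can sit entirely between sample points, so a center-sampling rule marks nothing; a dropout-control pass notices the columns the shape crosses and forces the nearest pixel on in each:

```python
# Toy sketch of 'dropout' under a center-sampling rule, plus a naive
# dropout-control fix. (Illustration only, not Harlequin's algorithm.)

def marked_centers(x0, y0, x1, y1, width, height):
    """Mark a pixel only if its center lies inside the shape."""
    return {(px, py)
            for py in range(height) for px in range(width)
            if x0 <= px + 0.5 < x1 and y0 <= py + 0.5 < y1}

def with_dropout_control(x0, y0, x1, y1, width, height):
    """If a column the shape crosses got no pixel, mark the nearest one."""
    pixels = marked_centers(x0, y0, x1, y1, width, height)
    for px in range(int(x0), int(x1) + 1):
        if x0 < px + 1 and x1 > px and not any(p == px for p, _ in pixels):
            pixels.add((px, min(height - 1, max(0, round(y0)))))
    return pixels

# A 0.4-pixel-thick horizontal stroke lying between two rows of centers:
stroke = (0.0, 1.6, 6.0, 2.0)
print(len(marked_centers(*stroke, 6, 4)))        # 0: the stroke drops out
print(len(with_dropout_control(*stroke, 6, 4)))  # 6: the stroke is kept
```

Without the extra pass the horizontal stroke vanishes completely, which is exactly what happens to the fine bars of small glyphs in the third row of Fig 1.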

Light rendering with dropout control was delivered to our OEM partners in late 2018 under the name RenderAccurate.

Even this optimized output won’t exactly match the output of live fonts, because the fonts themselves often include hints to the rendering engine, designed to ensure maximum legibility and conformance to the font designer’s vision. These hints can, for instance, ensure that vertical stems are the same width in all glyphs, or that the curved base of a glyph will extend slightly below the baseline to make it visually balance with glyphs with flat bases that sit on the baseline. Those hints were discarded when the text was converted to an outline, and so can’t be used any more. But the new scan conversion algorithm certainly strikes a good balance between matching the weight of live fonts and maintaining legibility.

The effect is visible in very small text in Latin fonts, as shown in Fig 1, but the impact is often masked by the physical effects of printing. And Latin glyphs tend to be relatively simple, so that the human eye and brain are pretty good at filling in the missing segments without too much impact on legibility or comprehension.

On the other hand, Chinese, Japanese and Korean (CJK) fonts are often more complex, with the result that the effect is visible at larger point sizes. And the meaning can be obscured or altered much more easily if strokes are missing. Fig 2 illustrates the same effects on Japanese text at 3pt, rendered at 600dpi. At this size and resolution, the RIP has about 22 pixels for the height of each glyph.

FIG 2
FIG 2 – 3pt text in MS Mincho, showing, from top to bottom: live fonts; default rendering for outlined text; the new, lighter, outlined text; and lighter text with dropout control.

The glyphs shown in FIG 2 are complex compared to Western scripts, but any solution that will be used with CJK scripts must obviously also be proven with the most complex character shapes, such as the Kanji in FIG 3. Some of these have so many horizontal strokes that they simply cannot be rendered with fewer than 22 device pixels vertically, and require more than that for reliable rendering. The sample in this figure is rendered with around 27 pixels for the height of each glyph.

FIG 3
FIG 3 – More complex Kanji in KozGoPro-Regular, showing, from top to bottom: live fonts; default rendering for outlined text; and the new, lighter text with dropout control.

This article has deliberately used very small text sizes as examples, simply because the effects are easier to see. But the same issues arise at larger sizes as well, albeit more rarely.

On the other hand, it is precisely because the issue appears more rarely, and because the effects are less immediately noticeable, that the risk of dropping strokes is so dangerous. It’s perfectly possible that an occasional missing stroke, perhaps in an unusually light font, may go unnoticed in process control. And that might result in a print that disappoints a brand owner, or even fails a regulatory check, after the label has been applied or the carton converted and filled, or even after the product has been shipped.

So, when a brand demands lighter rendering of pre-outlined fonts, make sure you’re safe by also using dropout control in your RIP!