If you print on an inkjet press you’ll know that non-uniformity, or banding, is a particularly difficult problem to resolve. It’s especially acute in areas of flat tint, with the result that printed output is unacceptable to you and to your customers. This means you either don’t run certain jobs on your inkjet press or, in some sectors of the market, are forced to sell your output at a discount.
The good news is that with PrintFlat you have a solution that is quick to deploy and cost-effective, and it can be applied to any workflow with or without a RIP. With more press vendors adopting this technology, watch our new explainer video to see how you might benefit.
In my last post I gave an introduction to halftone screening. Here, I explain where screening is performed in the workflow:
Halftone screening must always be performed after the page description language (such as PDF or PostScript) has been rendered into a raster by a RIP … at least conceptually.
In many cases it’s appropriate for the screening to be performed by that RIP, which may mean that in highly optimized systems it’s done in parallel with the final rendering of the pages, avoiding the overhead of generating an unscreened contone raster and then screening it. This usually delivers the highest throughput.
Global Graphics Software’s Harlequin RIP® is a world-leading RIP that’s used to drive some of the highest quality and highest speed digital presses today. The Harlequin RIP can apply a variety of different halftone types while rendering jobs, including Advanced Inkjet Screens™.
But an inkjet press vendor may also build their system to apply screening after the RIP, taking in an unscreened raster such as a TIFF file. This may be because:
The vendor may already be using a RIP that doesn’t provide screening of high enough quality, or that doesn’t process fast enough, to drive their devices. In that situation it may be appropriate to use a stand-alone screening engine after that existing RIP.
To apply closed loop calibration to adjust for small variations in the tonality of the prints over time, and to do so while printing multiple copies of the same output, in other words, without the need for re-ripping that output.
When a variable data optimization technology such as Harlequin VariData™ is being used that requires multiple rasters to be recomposited after the RIP. It’s better to apply screening after that recomposition to avoid visible artifacts around some graphics caused by different halftone alignment.
To access sophisticated features that are only available in a stand-alone screening engine such as Global Graphics’ PrintFlat™ technology, which is applied in ScreenPro™.
Global Graphics Software has developed the ScreenPro stand-alone screening engine for these situations. It’s used in production to screen raster output produced using RIPs such as those from Esko, Caldera and ColorGate, as well as after Harlequin RIPs in order to access PrintFlat.
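The closed-loop calibration scenario above (adjusting for tonal drift without re-ripping) amounts to applying a per-channel tone-correction curve to the unscreened contone raster just before screening. A minimal sketch, with hypothetical measured values and simple linear interpolation:

```python
# Sketch of closed-loop tone calibration applied between RIP and
# screening: the measured tone response is inverted into a correction
# curve, then applied to each contone pixel by linear interpolation.
# The control points below are hypothetical measurements.

# (requested tone, tone the press actually printed), both in 0..1
MEASURED = [(0.0, 0.0), (0.25, 0.30), (0.5, 0.58), (0.75, 0.80), (1.0, 1.0)]

def correction(requested):
    """Return the input value that should make the press print `requested`.
    Inverts MEASURED by interpolating on the printed-tone axis."""
    for (x0, y0), (x1, y1) in zip(MEASURED, MEASURED[1:]):
        if y0 <= requested <= y1:
            if y1 == y0:
                return x0
            return x0 + (x1 - x0) * (requested - y0) / (y1 - y0)
    return requested

def calibrate(raster):
    """Apply the correction curve to a 2-D contone raster (floats 0..1)."""
    return [[correction(v) for v in row] for row in raster]

# This hypothetical press prints 0.30 when asked for 0.25, so to get a
# true 0.30 the corrected request is pulled back to 0.25.
corrected = calibrate([[0.30]])
```

Because the correction is applied to the contone raster rather than baked into the screened output, the same ripped pages can be reprinted with updated curves as the press drifts.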
The above is an excerpt from our latest white paper: How to mitigate artifacts in high-speed inkjet printing. Download the white paper here.
Halftone screening, also sometimes called halftoning, screening or dithering, is a technique to reliably produce optical illusions that fool the eye into seeing tones and colors that are not actually present on the printed matter.
Most printing technologies are not capable of printing a significant number of different levels for any single color. Offset and flexo presses and some inkjet presses can only place ink or no ink. Halftone screening is a method to make it look as if many more levels of gray are visible in the print by laying down ink in some areas and not in others, and using such a small pattern of dots that the individual dots cannot be seen at normal viewing distance.
Conventional screening, for offset and flexo presses, breaks a continuous tone black and white image into a series of dots of varying sizes and places those dots in a rigid grid pattern. Smaller dots give lighter tones; the dots are grown within the grid to give progressively darker shades until they merge with adjacent dots to form a solid of maximum density (100%). This approach is used mainly because those presses cannot reliably print single pixels or very small groups of pixels, and it introduces challenges of its own, such as moiré between colorants and a reduction in the amount of detail that can be reproduced.
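As an illustration of the clustered-dot approach described above, here is a minimal sketch in Python. The 4x4 threshold matrix is hypothetical; real press screens use far larger, angle-tuned matrices:

```python
# Clustered-dot (AM) halftoning with a threshold matrix.
# The values grow outward from the centre of the matrix, so darker
# input tones switch on pixels in a growing cluster (a "dot").
# Illustrative sketch only; the matrix below is hypothetical.

THRESHOLDS = [
    [13,  9, 10, 14],
    [ 8,  1,  2, 11],
    [12,  4,  3,  7],
    [16,  6,  5, 15],
]

def am_halftone(image):
    """image: 2-D list of contone values in 0..16 (0 = white, 16 = solid).
    Returns a binary raster: True where ink is placed."""
    h = len(THRESHOLDS)
    w = len(THRESHOLDS[0])
    return [
        [pixel >= THRESHOLDS[y % h][x % w]
         for x, pixel in enumerate(row)]
        for y, row in enumerate(image)
    ]

# A flat 25% tint: a quarter of the cells in each 4x4 tile fire,
# clustered around the matrix centre.
flat = [[4] * 8 for _ in range(8)]
raster = am_halftone(flat)
coverage = sum(sum(row) for row in raster) / 64
```

Note how the pixels that switch on first sit next to each other in the matrix: that clustering is exactly what makes this an AM screen rather than a dispersed one.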
Most inkjet presses can print single dots on their own and produce a fairly uniform tone from them. They can therefore use dispersed screens, sometimes called FM or stochastic halftones.
A dispersed screen uses dots that are all (more or less) the same size, but the distance between them is varied to give lighter or darker tones. There is no regular grid placement, in fact the placement is more or less randomized (which is what the word ‘stochastic’ means), but truly random placement leads to a very ‘noisy’ result with uneven tonality, so the placement algorithms are carefully set to avoid this.
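One classic algorithm that produces a dispersed pattern is Floyd-Steinberg error diffusion. The sketch below is purely illustrative and is not the algorithm used in any particular screening product; production FM screens are engineered quite differently to control noise:

```python
# Floyd-Steinberg error diffusion: each pixel is thresholded and the
# quantization error is pushed to unprocessed neighbours, which keeps
# the local average ink coverage close to the requested tone without
# placing dots on a regular grid.

def error_diffuse(image):
    """image: 2-D list of floats in 0.0..1.0 (requested ink coverage).
    Returns a binary raster of 0/1 ink decisions."""
    h, w = len(image), len(image[0])
    buf = [row[:] for row in image]   # working copy we smear error into
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = buf[y][x]
            new = 1 if old >= 0.5 else 0
            out[y][x] = new
            err = old - new
            # Distribute the error to forward neighbours (FS weights).
            if x + 1 < w:
                buf[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    buf[y + 1][x - 1] += err * 3 / 16
                buf[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    buf[y + 1][x + 1] += err * 1 / 16
    return out

# A flat 30% tint comes out at roughly 30% coverage, with the dots
# scattered rather than clustered.
tint = [[0.3] * 32 for _ in range(32)]
raster = error_diffuse(tint)
coverage = sum(map(sum, raster)) / (32 * 32)
```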
Inkjet is being used more and more in labels, packaging, photo finishing and industrial print, all of which often use more than four inks, so the fact that a dispersed screen avoids moiré problems is also very helpful.
Dispersed screening can retain more detail and tonal subtlety than conventional screening can at the same resolution. This makes such screens particularly relevant to single-pass inkjet presses, which tend to have lower resolutions than the imaging methods used on, say, offset lithography. An AM screen at 600 dots per inch (dpi) would be very visible from a reading distance of less than a meter or so, while an FM screen can use dots that are sufficiently small that they produce the optical illusion that there are no dots at all, just smooth tones. Many inkjet presses are now stepping up to 1200dpi, but that’s still lower resolution than a lot of offset and flexo printing.
This blog post has concentrated on binary screening for simplicity. Many inkjet presses can place different amounts of ink at a single location (often described as using different drop sizes or more than one bit per pixel), and therefore require multi-level screening. And inkjet presses often also benefit from halftone patterns that are more structured than FM screens, but that don’t cluster into discrete dots in the same way as AM screens.
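A minimal sketch of the multi-level idea: the contone value selects a pair of adjacent drop sizes, and an ordered-dither threshold chooses between them, so the average coverage matches the requested tone. The number of drop sizes and the 2x2 matrix here are hypothetical:

```python
# Multi-level halftoning sketch for a hypothetical head with three
# drop sizes (levels 0..3, where 0 means no drop).

BAYER2 = [[0.25, 0.75],
          [1.0,  0.5]]

LEVELS = 3  # largest drop index

def multilevel(image):
    """image: 2-D floats in 0.0..1.0. Returns per-pixel drop levels 0..3."""
    out = []
    for y, row in enumerate(image):
        out_row = []
        for x, v in enumerate(row):
            scaled = v * LEVELS        # e.g. 0.5 -> 1.5: between drops 1 and 2
            base = int(scaled)
            frac = scaled - base
            t = BAYER2[y % 2][x % 2]
            out_row.append(min(base + (1 if frac >= t else 0), LEVELS))
        out.append(out_row)
    return out

# A flat 50% tint dithers between drop sizes 1 and 2, averaging 1.5.
levels = multilevel([[0.5] * 4 for _ in range(4)])
```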
The above is an excerpt from our latest white paper: How to mitigate artifacts in high-speed inkjet printing. Download the white paper here.
The Inkjet Conference Düsseldorf has been and gone for another year and we’re already looking ahead to the 2019 events that will be organised by ESMA.
This year delegates in the audience were able to submit questions via an app for the first time. I’m grateful to the IJC for sending me the questions that we either didn’t have time to cover after my presentation, or that occurred subsequently. So here they are with my responses:
Is it possible to increase the paper diversity with software by e.g. eliminating paper related mottling?
Yes, we have yet to come across a media/ink combination ScreenPro™ will not work well with. The major artefact we correct for is mottle. This may mean you can print satisfactory results with ScreenPro on papers where the mottle was unacceptable previously, so increasing the diversity of papers that can be used.
It sounds like ScreenPro is very good at tuning a single machine. How do you also then match that output quality among several machines?
There are two technologies in ScreenPro, the screening core itself with the Advanced Inkjet Screens (AIS), and PrintFlat™ to correct for cross web banding. ScreenPro generally improves print quality and Mirror and Pearl screens (AIS) work in the majority of screening situations. PrintFlat, however, needs to be tuned to every press and if the press changes significantly over time, if a head is changed for example, it will have to be recalibrated. This calibration actually makes subsequent ink linearization and colour profiling more consistent between machines as you have removed the cross-web density fluctuations (which are machine specific) from the test charts used to generate these profiles.
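To illustrate the general idea of cross-web correction (and only the general idea; this is not PrintFlat’s actual algorithm), here is a sketch that scales each raster column by the inverse of its measured relative density before screening. The measured densities are hypothetical:

```python
# Illustrative cross-web uniformity correction: each column of the
# contone raster is divided by its measured relative density, so a
# flat tint prints flat across the web. NOT PrintFlat's algorithm;
# purely a sketch of the concept with made-up measurements.

# Relative density measured per column on a flat test tint
# (1.0 = nominal; >1.0 prints too dark, <1.0 too light).
COLUMN_DENSITY = [1.00, 1.05, 0.95, 1.00]

def flatten(raster):
    """raster: 2-D contone floats 0..1. Returns a per-column corrected copy."""
    return [
        [min(v / COLUMN_DENSITY[x], 1.0) for x, v in enumerate(row)]
        for row in raster
    ]

tint = [[0.5, 0.5, 0.5, 0.5]]
flat = flatten(tint)   # the too-dark column is pulled down, the too-light one pushed up
```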
“We haven’t found ink or substrate that we couldn’t print with.” Does this include functional materials, such as metals, wood, rubber? or is it limited to cmyk-like processes?
No; so far we have only used ScreenPro with CMYK-like process colours, i.e. print that is designed to be viewed, with colour matching etc. ScreenPro is designed to improve image quality and appearance. I see no reason why ScreenPro would not work with functional materials, but I would like to understand what problems it would be trying to solve.
What is the main innovation of the screening software in terms of how it works as opposed to what it can do?
“How it works” encompasses placing the drops differently on the substrate in order to work around common inkjet artefacts. The innovation is therefore in the algorithms used to generate the screens.
By Tom Mooney, product manager for Global Graphics Software
I’ve just returned from a road trip in the US, visiting inkjet press manufacturers who are all interested in using ScreenPro.
The meetings have gone in a very similar manner with the opening line: “We have a print shop that wants to print this job, but take a look at this area.” They point to an area of the image, usually in the shadows, and it is either a muddy brown mess or crusty and flaky, the typical ‘orange peel’ effect. We all agree the print is unacceptable and cannot be sold, so we discuss what can be done.
Firstly, we look at the ink limitation, but this kills the color saturation in the rest of the print. We look at color management and under color removal, but this only moves the problem to a different area on the image.
Then we see what ScreenPro can do.
We try our Advanced Inkjet Screens™ and use Pearl screen on the muddy mess and Mirror on the orange peel. This does the trick and makes the prints acceptable, so the print shop can sell that print job.
As long as this quality threshold is met the customer is happy. That quality is achieved by a combination of hardware, media, ink and software. Color management is only part of the software story: ScreenPro makes a real impact on those hard-to-solve killer jobs.
They say a problem shared is a problem halved. Well, two weeks on from our launch of our Advanced Inkjet Screens it’s been gratifying to see how much the discussion of inkjet output quality has resonated among the press vendor community.
Just in case you missed it, we’ve introduced a set of screens that mitigate the most common artifacts that occur in inkjet printing, particularly in single-pass inkjet but also in scanning heads. Those of you who’ve attended Martin Bailey’s presentations at the InkJet Conference (The IJC) will know that we’ve been building up to making these screens available for some time. And we’ve worked with a range of industry partners who’ve approached us for help because they’ve struggled to resolve problems with streaking and orange peel effect on their own.
Well, now Advanced Inkjet Screens are available as standard screens that are applied by our ScreenPro screening engine. They can be used in any workflow with any RIP that allows access to unscreened raster data, so that’s any Adobe PDF RIP including Esko. Vendors can replace their existing screening engine with ScreenPro to immediately benefit from improved quality, not to mention the high data rates achievable. We’ve seen huge improvements in labels and packaging workflows. Advanced Inkjet Screens are effective with all the major inkjet printheads and combinations of electronics. They work at any device resolution with any ink technology.
Why does a halftone in software work so well? A halftone creates an optical illusion that depends on exactly where the dots are placed, and software gives precise control over that placement, so streaking and graining on both wettable and non-absorbent substrates can be corrected. It just goes to show that the assumption that everything needs to be fixed in hardware is false. We’ve published a white paper if you’re interested in finding out more.
In the middle of 2017 ISO 32000-2 will be published, defining PDF 2.0. It’s eight years since there’s been a revision to the standard. We’ve already covered the main changes affecting print in previous blog posts and here Martin Bailey, the primary UK expert to the ISO committee developing PDF 2.0, gives a roundup of a few other changes to expect.
The encryption algorithms included in previous versions of PDF have fallen behind current best practices in security, so PDF 2.0 adds AES-256 and states that all passwords used for AES-256 encryption must be encoded in Unicode.
A PDF 1.7 reader will almost certainly error and refuse to process any PDF files using the new AES-256 encryption.
Note that Adobe’s ExtensionLevel 3 to ISO 32000-1 defines a different AES-256 encryption algorithm, as used in Acrobat 9 (R=5). That implementation is now regarded as dangerously insecure and Adobe has deprecated it completely, to the extent that use of it is forbidden in PDF 2.0.

Deprecation and what this means in PDF
PDF 2.0 has deprecated a number of implementation details and features that were defined in previous versions. In this context ‘deprecation’ means that tools writing PDF 2.0 are recommended not to include those features in a file; and that tools reading PDF 2.0 files are recommended to ignore those features if they find them.
Global Graphics has taken the deliberate decision not to ignore relevant deprecated items in PDF files that are submitted and happen to be identified as PDF 2.0. This is because it is quite likely that some files will be created using an older version of PDF and using those features. If those files are then pre-processed in some way before submitting to Harlequin (e.g. to impose or trap the files) the pre-processor may well tag them as now being PDF 2.0. It would not be appropriate in such cases to ignore anything in the PDF file simply because it is now tagged as PDF 2.0.
We expect most other PDF readers to take the same course, at least for the next few years.

And the rest…

PDF 2.0 header

It’s only a small thing, but a PDF reader must be prepared to encounter a value of 2.0 in the file header and as the value of the Version key in the Catalog.
PDF 1.7 readers will probably vary significantly in their handling of files marked as PDF 2.0. Some may error, others may warn that a future version of that product is required, while others may simply ignore the version completely.
Harlequin 11 reports “PDF Warning: Unexpected PDF version – 2.0” and then continues to process the job. Obviously that warning will disappear when we ship a new version that fully supports PDF 2.0.

UTF-8 text strings

Previous versions of PDF allowed certain strings in the file to be encoded in PDFDocEncoding or in 16-bit Unicode. PDF 2.0 adds support for UTF-8. Many PDF 1.7 readers will not recognise a UTF-8 string as UTF-8 and will therefore treat it as using PDFDocEncoding, with the result that those strings appear as what looks like a random sequence of mainly accented characters.

Print scaling

PDF 1.6 added a viewer preferences key that allows a PDF file to specify the preferred scaling for use when printing it, primarily in support of engineering drawings. PDF 2.0 adds the ability to say that the nominated scaling should be enforced.

Document parts

The PDF/VT standard defines a structure of Document parts (commonly called DPart) that can be used to associate hierarchical metadata with ranges of pages within a document. In PDF/VT the purpose is to enable embedding of data to guide the application of different processing to each page range.
PDF 2.0 has added the Document parts structure into baseline PDF, although no associated semantics or required processing for that data have been defined.
It is anticipated that the new ISO standard on workflow control (ISO 21812, expected to be published around the end of 2017) will make use of the DPart structure, as will the next version of PDF/VT. The specification in PDF 2.0 is largely meaningless until such time as products are written to work with those new standards.
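As an aside on the UTF-8 text-string change described earlier: a reader can tell the three text-string encodings apart from the leading bytes, because Unicode text strings in PDF carry a byte-order mark. A sketch (PDFDocEncoding is approximated by Latin-1 here; a real reader uses the full mapping table from the specification):

```python
# Distinguishing PDF text-string encodings by their byte-order marks:
# UTF-16BE starts FE FF, UTF-8 starts EF BB BF (new in PDF 2.0),
# and anything else is PDFDocEncoding. A PDF 1.7 reader that doesn't
# know the UTF-8 BOM falls into the PDFDocEncoding branch, producing
# the garbled accented characters described above.

def decode_text_string(raw: bytes) -> str:
    if raw.startswith(b"\xfe\xff"):
        return raw[2:].decode("utf-16-be")
    if raw.startswith(b"\xef\xbb\xbf"):       # PDF 2.0 only
        return raw[3:].decode("utf-8")
    # PDFDocEncoding approximated by Latin-1 for this sketch.
    return raw.decode("latin-1")

title = decode_text_string(b"\xef\xbb\xbfR\xc3\xa9sum\xc3\xa9")
```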
The last few years have been pretty stable for PDF; PDF 1.7 was published in 2006, and the first ISO PDF standard (ISO 32000-1), published in 2008, was very similar to PDF 1.7. In the same way, PDF/X‑4 and PDF/X‑5, the most recent PDF/X standards, were both published in 2010, six years ago.
In the middle of 2017 ISO 32000-2 will be published, defining PDF 2.0. Much of the new work in this version is related to tagging for content re-use and accessibility, but there are also several areas that affect print production. Among them are some changes to the rendering of PDF transparency, ways to include additional data about spot colors and about how color management should be applied.
You often read news items about a new press having been installed at a beta site but it’s not a topic that gets much of an airing apart from the odd news bulletin, is it?
And that got me thinking.
What is considered to be a successful beta test? And why should we care?
Well, if you do care, you’re not just going through the motions to get your press out of the door. You’re more likely to be focussed on delivering a good product, and you probably view beta testing as an opportunity to make changes for the better and to improve product management. You care what comes back, and it’s important to you to get understandable and useful data.
So what do you want to know? Your beta test should provide you with proof points as to why your printer is going to be successful in the market. “Real” users will use and abuse your press and put it through its paces in a way that your own internal hardware and software engineers will not. Any weaknesses will be exposed. And you’ll get closer to your customer by working together with them in a way that just wouldn’t be open to you if you didn’t run a beta program.
The thing is, how do you extract meaningful data from your test? And how do you rule out problems that have nothing to do with your press, such as humidity, ambient temperature, or the way the site is being operated?
Somehow you need to control the environment that the beta test is conducted in and approach the beta test in quite a formal way to rule out any subjectivity that might creep in.
We’ve got some ideas on how to achieve this which I’ll share in another post. But I’d be interested in hearing how you do it. What are your top tips?
One of the many highlights of our drupa stand will be the new Harlequin RIP. We asked Martin Bailey, CTO at Global Graphics, to tell us more about it. He told us that there are a host of new features to improve inkjet output quality including richer, multi-level screening controls, more controls for variable data printing, and new features for labels and packaging applications. Hear his summary in this video below.
Fancy a test drive? Join us at drupa 2016, Stand 70 B21/C20 in the dip. Simply contact us to book a demonstration.
Stay tuned for more announcements over the next couple of weeks.