What you need to build a press that must handle variable data jobs at high speed

I’ve spoken to a lot of people about variable data printing and about what that means when a vendor builds a press or printing unit that must be able to handle variable data jobs at high speed. Over the years I’ve mentally defined several categories that such people fall into, based on the first question they ask: 

  1. “Variable data; what’s that?” 
  2. “Why should I care about variable data? Nobody uses it in my industry.” 
  3. “I’ve heard of variable data and I think I need it, but what does that actually mean?” 
  4. “How do I turn on variable data optimization in Harlequin?” 

If you’re in the first two categories, I recommend that you read through the introductory chapters of our guide: “Full Speed Ahead: how to make variable data PDF files that won’t slow your digital press”, available on our website. 

And yes, unless you’re in a very specialised industry, people probably are using variable data. As an example, five years ago pundits in the label printing industry were saying that nobody was using variable data on labels. Now it’s a rapidly growing area, as brands realize how useful it can be and as coding and marking converge with primary consumer graphics. If you’re a vendor designing and building a digital press, your users will expect you to support variable data when you bring it to market; don’t get stuck with a DFE (digital front end) that can’t drive your shiny new press at engine speed when they try to print a variable job. 

If you’re in category 3 then you’re in luck: we’ve just published a video explaining how variable data jobs are typically put together, and how the DFE for a digital press deconstructs the pages again to optimize processing speed. It also covers why that matters so much, especially as presses get faster every year. Watch it here:
 

And if you’re in category 4, drop us a line at info@globalgraphics.com, or, if you’re already a Harlequin OEM partner, our support team are ready and waiting for your questions.

Further reading:

  1. What’s the best effective photographic image resolution for your variable data print jobs?
  2. Why does optimization of VDP jobs matter?
  3. There really are two different kinds of variable data submission!

Be the first to receive our blog posts, news updates and product news. Why not subscribe to our monthly newsletter? Subscribe here

Follow us on LinkedIn, Twitter and YouTube

 

Working with spot colors in Harlequin Core

Whenever we start working with a company that’s interested in using Harlequin Core for their Digital Front End (DFE), there are always three technical topics under discussion: speed, quality and capabilities. Speed and quality are often very quick discussions; much of the time they’ve approached us because they’re already convinced that Harlequin can do what they need. In the remaining cases we tend to jointly agree that the best way for them to be convinced is to take a copy of Harlequin Core and run their own tests. There’s nothing quite like trying something on your own systems to give yourself confidence in the results.

So that leaves capabilities.

If the company already sells a DFE using a different core RIP they will almost always want to at least match, and usually to extend, the functionality of their existing solution when they switch to Harlequin. And if they’re building their first DFE they usually have a clear idea of what their target market will need.

At that stage we start by ensuring that we all understand that Harlequin Core can deliver rasters in whatever format is required (color channels, interleaving, resolution, bit depth, halftoning) and then cover color management pretty quickly (yes, Harlequin uses ICC profiles, including v4 and DeviceLink; yes, you can chain multiple profiles in arbitrary sequences, etc).

Then we usually come on to a series of questions that boil down to handling spot colors:

  • Most spot separations in jobs will be emulated on my digital press; can I adjust that emulation?
  • Can I make sure that the emulation works well with ICC profiles for different substrates?
  • Can I include special device colorants, such as White and Silver inks in that emulation?
  • Can I alias one spot separation name to another?
  • Can I make technical separations, like cut and fold lines, completely disappear, without them knocking out the graphics beneath if somebody upstream didn’t set them to overprint?
  • Alternatively, can I extract technical separations as vector graphics to drive a cutter/plotter with?

Since the answer to all of those is ‘yes’ we can then move on to areas where the vendor is looking for a unique capability …
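
To make a couple of those questions concrete, here’s a minimal sketch of spot aliasing and emulation as a DFE might model them. This is illustrative only: the separation names, the CMYK build and the lookup structure are all invented, and Harlequin’s actual configuration mechanism is richer than a simple table.

```python
# Hypothetical sketch of spot aliasing and emulation; the names and values
# below are invented for illustration, not measured or recommended builds.

ALIASES = {
    "PANTONE 485 CV": "PANTONE 485 C",  # alias a variant name to a canonical one
}

EMULATIONS = {
    "PANTONE 485 C": (0.0, 0.95, 1.0, 0.0),  # approximate the spot in CMYK
    "CutContour":    None,                    # technical separation: drop its objects
}

def resolve_spot(name):
    """Return a CMYK build to emulate the spot, or None to discard the separation."""
    canonical = ALIASES.get(name, name)
    if canonical in EMULATIONS:
        # Dropping the objects entirely (rather than painting them with no ink)
        # avoids unwanted knockouts from marks that weren't set to overprint.
        return EMULATIONS[canonical]
    raise KeyError(f"no emulation configured for {canonical!r}")

print(resolve_spot("PANTONE 485 CV"))  # -> (0.0, 0.95, 1.0, 0.0)
print(resolve_spot("CutContour"))      # -> None: the cut line disappears
```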

But I’ve always been slightly disappointed that we don’t get to talk more about some of the interesting corners of spot handling in Harlequin. So I created a video to walk through some examples. Take a look, and I’d welcome your comments and questions!

Further reading:

  1. Channelling how many spot colors?!!
  2. Shade and color variation in textile printing
  3. Harlequin Core – the heart of your digital press
  4. What is a raster image processor?

Be the first to receive our blog posts, news updates and product news. Why not subscribe to our monthly newsletter? Subscribe here

Follow us on LinkedIn, Twitter and YouTube

Compliance, compatibility, and why some tools are more forgiving of bad PDFs

Compliant and compatible PDF documents and the Harlequin RIP

We added support for native processing of PDF files to the Harlequin RIP® way back in 1997. When we started working on that support we somewhat naïvely assumed that we should implement the written specification and that all would be well. But it was obvious from the very first tests that we performed that we would need to do something a bit more intelligent because a large proportion of PDF files that had been supplied as suitable for production printing did not actually comply with the specification.

Launching a product that would reject many PDF files that could be accepted by other RIPs would be commercial suicide. The fact that, at the time, those other RIPs needed the PDF to be transformed into PostScript first didn’t change the business case.

Unfortunately a lot of PDF files are still being made that don’t comply with the standard, so in the almost quarter of a century since first launching PDF support we’ve developed our own rules around what Harlequin should do with non-compliant files, and invested many decades of engineering effort in test and development to accept non-compliant files from major applications.

The first rule that we put in place is that Harlequin is not a validation tool. A Harlequin RIP user will have PDF files to be printed, and Harlequin should render those files as long as we can have a high level of confidence that the pages will be rendered as expected.

In other words, we treat both compliance with the PDF standard and compatibility with major PDF creation tools as equally important … and supporting Harlequin RIP users in running profitable businesses as even more so!

The second rule is that silently rendering something incorrectly can be very bad, increasing costs if a reprint is required and causing a print buyer/brand to lose faith in a print service provider/converter. So Harlequin is written to require a reasonably high level of confidence that it can render the file as expected. If a developer opening up the internals of a PDF file couldn’t be sure how it was intended to be rendered then Harlequin should not be rendering it.

We’d expect most other vendors of PDF readers to apply similar logic in their products, and the evidence we’ve seen supports that expectation. The differences between how each product treats invalid PDF result from differences in the primary goal of each product, and therefore in the cost attached to output that is viewed as incorrect.

Consider a PDF viewer for general office or home use, running on a mobile device or PC. The business case for that viewer implies that the most important thing it has to do is to show as much of the information from a PDF file as possible, preferably without worrying the user with warnings or errors. It’s not usually going to be hugely important or costly if the formatting is slightly wrong. You could think of this as being at the opposite end of the scale from a RIP for production printing. In other words, the required level of confidence in accurately rendering the appearance of the page is much lower for the on-screen viewer.

You may have noticed that my description of a viewer could easily be applied to Adobe Reader or Acrobat Pro. Acrobat is also not written primarily as a validation tool, and it’s definitely not appropriate to assume that a PDF file complies with the standard just because it opens in Acrobat. Remember the Acrobat business case, and imagine what the average office user’s response would be if it would not open a significant proportion of PDF files because it flagged them as invalid!

For further reading about PDF documents and standards:

  1. Full Speed Ahead: How to make variable data PDF files that won’t slow your digital press
  2. PDF Processing Steps – the next evolution in handling technical marks

About the author

Martin Bailey, Distinguished Technologist, Global Graphics Software

Martin Bailey, Distinguished Technologist, Global Graphics Software, is currently the primary UK expert to the ISO committees maintaining and developing PDF and PDF/VT and is the author of Full Speed Ahead: how to make variable data PDF files that won’t slow your digital press, a new guide offering advice to anyone with a stake in variable data printing, including graphic designers, print buyers, composition developers and users.

Be the first to receive our news updates and product news. Why not subscribe to our monthly newsletter? Subscribe here

Follow us on LinkedIn, Twitter, and YouTube

Second edition now available: Full Speed Ahead: How to make variable data PDF files that won’t slow your digital press

At the beginning of 2020, in what we thought was the run-up to drupa, Global Graphics published a new guide called “Full Speed Ahead: How to make variable data PDF files that won’t slow your digital press”. It was designed to complement the existing recommendations on maximizing sales from direct mail campaigns with technical recommendations on how to make sure you don’t build a PDF file for a variable data job that will bring a digital press to its knees. It also carried those lessons into additional print sectors that are rapidly adopting variable data, such as labels, packaging, product decoration and industrial print, with hints on using variable data in unusual ways for premium jobs at premium margins.

Well, as they say, a lot has happened since then.

And some of that has been positive. At the end of 2020 several new International Standards were published, including a “dated revision” (a 2nd edition) of the PDF 2.0 standard, a new standard for submission of PDF files for production printing: PDF/X-6, and a new standard for submission of variable data PDF files for printing: PDF/VT-3.

We’ve therefore updated Full Speed Ahead to cover the new standards. And at the same time we’ve taken the opportunity to extend and clarify some of the rest of the text in response to feedback on the first edition.

So now you can keep up to date, just by downloading the new edition!

DOWNLOAD THE GUIDE

Further reading:

  1. What’s the best effective photographic image resolution for your variable data print jobs?
  2. Why does optimization of VDP jobs matter?

To be the first to receive our blog posts, news updates and product news why not subscribe to our monthly newsletter? Subscribe here

Follow us on LinkedIn and Twitter

There really are two different kinds of variable data submission!

There are two completely different forms of variable data handling in the Harlequin RIP®, and I’m sometimes asked why we’ve duplicated functionality like that. The simple answer is that it’s not duplication; they each address very different use cases.

But those use cases are not, as many people then expect, “white paper workflows” vs imprinting, i.e. printing the whole design, both re-used and single-use elements, together vs adding variable data on top of a pre-printed substrate. Both Harlequin VariData™ and the “Dynamic overlays” that we added in Harlequin version 12 can address both of those requirements.

Incidentally, I put “white paper workflows” in quotes because that’s what it’s called in the transactional and direct mail spaces … but very similar approaches are used for variable data printing in other sectors, which may not be printing on anything even vaguely resembling paper!

The two use cases revolve around who has the data, when they have it, whether a job should start printing before all the data is available, and whether there are any requirements to restrict access to the data.

When most people in the transactional, direct mail or graphic arts print sectors think about variable data it tends to be in the form of a fully resolved document representing all of the many variations of a collection of pages: each page combines one or more static ‘backgrounds’ with single-use variable data elements, and maybe some re-used elements from which one is selected for each recipient. In other words, each page in the PDF file is meant to be printed as-is, and will be suitable for a single copy. That whole, fully resolved file is then sent to the press. It may be sent from one division of the printing company to the press room, or even from some other company entirely. The same approach is used for some VDP jobs in labels, folding carton, corrugated, signage and some industrial sectors.

This is the model for which optimized PostScript, and then optimized PDF, PDF/VT (and AFP) were designed. It’s a robust workflow that allows for significant amounts of proofing and process control at multiple stages. And it also allows very rich graphical variability. It’s the workflow for which Harlequin VariData was designed, to maximize the throughput of variable data files through the Digital Front End (DFE) and onto the press.
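
The principle behind that optimization is straightforward to sketch: if the same element appears many times across the job, rasterize it once and reuse the raster. The following is a toy memoization pattern to show the idea, not Harlequin VariData’s actual implementation:

```python
import hashlib

raster_cache = {}

def expensive_rasterize(content):
    """Placeholder for the interpret/composite/render work a real RIP performs."""
    return f"<raster of {len(content)} bytes>"

def render_element(content):
    """Rasterize a graphic element, reusing a cached raster for repeated content."""
    key = hashlib.sha256(content).hexdigest()   # identical content -> identical key
    if key not in raster_cache:
        raster_cache[key] = expensive_rasterize(content)  # paid once per unique element
    return raster_cache[key]

# The static background costs render time once; only unique elements add work.
for element in [b"background", b"John Smith", b"background", b"Jane Doe"]:
    render_element(element)
print(len(raster_cache))  # -> 3 unique rasters for 4 elements
```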

But in some cases the variable data is not available when the job starts printing. Indeed, the print ‘job’ may run for months in situations such as packaging lines or ID card printing. That can be managed by simply sending a whole series of optimized PDF files, each one representing a few thousand or a couple of million instances of the job to be printed. But in some cases that’s simply not convenient or efficient enough.

In other workflows the data to be printed must be selected based on the item to be printed on, and that’s only known at the very last minute … or second … before the item is printed. A rather extreme example of this is in printing ID cards. In some workflows a chip or magnetic strip is programmed first. When the card is to be printed it’s obviously important that the printed information matches the data on the chip or magnetic strip, so the printing unit reads the data from one of those, uses that to select the data to be printed, and prints it … sometimes all in less than a second. In this case you could use a fully resolved optimized PDF file and select the appropriate page from it based on identifying the next product to be printed on; I know there are companies doing exactly that. But it gets cumbersome when the selection time is very short and the number of items to be printed is very large. And you also need to have all of the data available up-front, so a more dynamic solution is better.
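
For completeness, here’s how simple the fully resolved version of that selection step is, and why it needs all of the data up front: the job is effectively just an index from product IDs to pre-rendered pages. The IDs and structure here are invented for illustration.

```python
# Hypothetical sketch: choose a pre-rendered page for the ID just read from
# the card's chip or magnetic strip. This only works if every instance was
# composed, and indexed, before printing started.
page_index = {"CARD-000123": 41, "CARD-000124": 42}   # card ID -> page number

def page_for(card_id):
    return page_index[card_id]   # a miss means the card isn't in the job at all
```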

Printing magnetic strip on ID cards.

In other cases there is a need to ensure that the data to be printed is held completely securely, which usually leads to a demand that there is never a complete set of that data in a standard file format outside of the DFE for the printer itself. ID cards are an example of this use case as well.

Example ID cards.

Moving away from very quick or secure responses, we’ve been observing an interesting trend in the labels and packaging market as digital presses are used more widely: printing the graphics of the design itself is converging with adding the kind of data that’s historically been applied using coding and marking. Information like serial numbers, batch numbers, competition QR codes, even sell-by and use-by dates is being printed at the same time as the main graphics. Add in the growing demands for traceability, for less warehousing and for more printing on demand of a larger number of different versions, and there can be some real benefits in moving the whole print process close to the bottling/filling/labelling lines. But it doesn’t make sense to make a million-page PDF file just so you can change the batch number every 42 cartons because that’s what fits on a pallet.

These use cases are why we added Dynamic overlays to Harlequin. Locations on the output where marks should be added are specified, along with the type of mark (text, barcodes and images are the most commonly used). For most marks a data source must be specified; by default we support reading from CSV files or automated counters, but an interface to a database can easily be added for specific integrations. And, of course, formatting information such as font, color, barcode symbology etc must be provided.
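
A rough sketch of that model is below. The mark and data-source structure is invented for illustration; it is not Harlequin’s Dynamic overlays configuration format, just the shape of the idea, with a CSV file and an automated counter as the data sources.

```python
import csv
import itertools

# Hypothetical overlay definition: where each mark sits and what feeds it.
marks = [
    {"type": "text",    "x": 20, "y": 30, "font": "Helvetica 10pt", "source": "name"},
    {"type": "barcode", "x": 20, "y": 50, "symbology": "QR",        "source": "serial"},
    {"type": "counter", "x": 90, "y": 10, "counter": itertools.count(1)},
]

def marks_for_record(record):
    """Resolve the data for each mark for one printed instance."""
    resolved = []
    for m in marks:
        if m["type"] == "counter":
            value = next(m["counter"])       # an automated counter needs no data file
        else:
            value = record[m["source"]]      # pulled from the current CSV row
        resolved.append((m["type"], m["x"], m["y"], value))
    return resolved

with open("recipients.csv", newline="") as f:  # hypothetical data source
    for record in csv.DictReader(f):           # expects 'name' and 'serial' columns
        print(marks_for_record(record))
```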

The ‘overlay’ in “Dynamic overlays” gives away one of the limitations of this approach: the variable data added using it must sit on top of all the static data. But we normally recommend that you do that anyway for fully resolved VDP submissions using something like optimized PDF, because it makes processing much more efficient; there aren’t that many situations where the desired visual appearance requires variable graphics behind static ones. It’s also much less of a constraint than you’d face with imprinting, where you can only knock objects like white text out of a colored fill in the static background if you are using a white ink!

For what it’s worth, Dynamic overlays also work well for imprinting or for cases where you need to print graphics of middling complexity at high quality but where there are no static graphics at all (existing coding & marking systems can handle simple graphics at low to medium quality very well). In other words, there’s no need to have a background to print the variable data as a foreground over.

So now you know why we’ve doubled up on variable data functionality!

Further reading:

  1. What’s the best effective photographic image resolution for your variable data print jobs?
  2. Why does optimization of VDP jobs matter?

To be the first to receive our blog posts, news updates and product news why not subscribe to our monthly newsletter? Subscribe here

Follow us on LinkedIn and Twitter

 

What is a Raster Image Processor (RIP)?

Ever wondered what a raster image processor or RIP does? And what does RIPping a file mean? Read on to learn more about the phases of a RIP, the engine at the heart of your Digital Front End (DFE).

The RIP converts text and image data from many file formats including PDF, TIFF™ or JPEG into a format that a printing device such as an inkjet printhead, toner marking engine or laser platesetter can understand. The process of RIPping a job requires several steps to be performed in order, regardless of the page description language (such as PDF) that it’s submitted in. Even image file formats such as TIFF, JPEG or PNG usually need to be RIPped, to convert them into the correct color space, at the right resolution and with the right halftone screening for the press.

Interpreting: The file to be RIPped is read and decoded into an internal database of graphical elements that must be placed on the output. Each may be an image, a character of text (including font, size, color etc), a fill or stroke etc. This database is referred to as a display list.
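
Conceptually, a display list is a flat, device-ordered list of drawing operations. Here’s a minimal sketch; real RIPs use far more compact and specialized structures than this.

```python
from dataclasses import dataclass

@dataclass
class DisplayListEntry:
    kind: str      # "image", "text", "fill", "stroke", ...
    bbox: tuple    # device-space bounding box: (x0, y0, x1, y1)
    color: tuple   # resolved color values, here CMYK
    data: object   # glyphs, image samples, path geometry, ...

# Two entries in painting order: a light page tint, then some black text.
display_list = [
    DisplayListEntry("fill", (0, 0, 600, 800), (0.0, 0.0, 0.0, 0.1), "page background"),
    DisplayListEntry("text", (50, 700, 300, 720), (0.0, 0.0, 0.0, 1.0), "Hello"),
]
```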

Compositing: The display list is pre-processed to apply any live transparency that may be in the job. This phase is only required for graphics in formats that support live transparency, such as PDF; it’s not required for PostScript language jobs or for TIFF and JPEG images because those cannot include live transparency.

Rendering: The display list is processed to convert every graphical element into the appropriate pattern of pixels to form the output raster. The term ‘rendering’ is sometimes used specifically for this part of the overall processing, and sometimes to describe the whole of the RIPping process.
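
In the crudest possible terms, rendering walks that display list in order and touches pixels. This toy renderer just flattens the bounding boxes from the display-list sketch above into a single-channel raster; a real renderer scan-converts arbitrary paths, composites, color-converts and halftones.

```python
import numpy as np

def render(display_list, width, height):
    """Toy renderer: paint each entry's bounding box into a one-channel raster."""
    raster = np.zeros((height, width), dtype=np.float32)
    for entry in display_list:                 # list order is z-order (painter's algorithm)
        x0, y0, x1, y1 = entry.bbox
        raster[y0:y1, x0:x1] = entry.color[3]  # toy: render the K channel only
    return raster

page = render(display_list, 600, 800)          # uses the display list sketched above
print(page.shape, page.max())                  # -> (800, 600) 1.0
```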

Output: The raster produced by the rendering process is sent to the marking engine in the output device, whether it’s exposing a plate, a drum for marking with toner, an inkjet head or any other technology.

Sometimes this step is completely decoupled from the RIP, perhaps because plate images are stored as TIFF files and then sent to a CTP platesetter later, or because a near-line or off-line RIP is used for a digital press. In other environments the output stage is tightly coupled with rendering, and the output raster is kept in memory instead of writing it to disk to increase speed.

RIPping often includes a number of additional processes; in the Harlequin RIP® for example:

  • In-RIP imposition is performed during interpretation
  • Color management (Harlequin ColorPro®) and calibration are applied during interpretation or compositing, depending on configuration and job content
  • Screening can be applied during rendering. Alternatively it can be done after the Harlequin RIP has delivered unscreened raster data; this is valuable if screening is being applied using Global Graphics’ ScreenPro™ and PrintFlat™ technologies, for example.

A DFE for a high-speed press will typically be using multiple RIPs running in parallel to ensure that they can deliver data fast enough. File formats that can hold multiple pages in a single file, such as PDF, are split so that some pages go to each RIP, load-balancing to ensure that all RIPs are kept busy. For very large presses huge single pages or images may also be split into multiple tiles and those tiles sent to different RIPs to maximize throughput.
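
The splitting logic itself can be sketched as a worker pool. Here’s a simplified single-machine stand-in using Python’s multiprocessing; a real DFE distributes pages to separate RIP processes or machines.

```python
from multiprocessing import Pool

def rip_page(page_number):
    # Stand-in for handing one page to a RIP instance and getting a raster back.
    return f"raster for page {page_number}"

if __name__ == "__main__":
    pages = range(1, 101)                 # a 100-page PDF, split page by page
    with Pool(processes=4) as pool:       # four parallel "RIPs"
        rasters = pool.map(rip_page, pages, chunksize=1)  # chunksize=1 load-balances
    print(len(rasters))                   # -> 100
```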

The raster image processor pipeline. The Harlequin RIP includes native interpretation of PostScript, EPS, DCS, TIFF, JPEG, PNG and BMP as well as PDF, PDF/X and PDF/VT, so whatever workflows your target market uses, it gives accurate and predictable image output time after time.

Harlequin Host Renderer brochure

 

To find out more about the Harlequin RIP, download the latest brochure here.

 

This post was first published in June 2019.

Further reading:

  1. Where is screening performed in the workflow?
  2. What is halftone screening?
  3. Unlocking document potential


To be the first to receive our blog posts, news updates and product news why not subscribe to our monthly newsletter? Subscribe here

Follow us on LinkedIn and Twitter

 

What’s the difference between PDF/X-1a and PDF/X-4?


Which PDF/X should I use?

Somebody asked me recently what the difference is between PDF/X-1a (first published in 2001) and PDF/X-4 (published in 2010). I thought this might also be interesting to a wider audience.

Both are ISO standards that deliberately restrict some aspects of what you can put into a PDF file in order to make them more reliable for delivery of jobs for professional print. But the two standards address different needs/desires:

PDF/X-1a content must all have been transformed into CMYK (optionally plus spots) already, so it puts all of the responsibility for correct separation and transparency handling onto the creation side. When it hits Harlequin, all the RIP can do is to lock in the correct overprint settings and (optionally) pre-flight the intended print output condition, as encapsulated in the output intent.

On the other hand, PDF/X-4 supports quite a few things that PDF/X-1a does not, including:

  • Device-independent color spaces
  • Live PDF transparency
  • Optional content (layers)

That moves a lot more of the responsibility downstream into the RIP, because it can carry unseparated colors and transparency.
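
Incidentally, a file’s claimed conformance is machine-readable. Here’s a small sketch using the open-source pikepdf library; note that PDF/X-1a and PDF/X-3 record the version in the Info dictionary’s GTS_PDFXVersion key, while newer PDF/X versions declare it in XMP metadata, which this sketch doesn’t parse. The file name is hypothetical.

```python
import pikepdf  # open-source PDF library; any PDF inspector would do

with pikepdf.open("job.pdf") as pdf:      # hypothetical file name
    info = pdf.trailer.get("/Info")
    version = info.get("/GTS_PDFXVersion") if info is not None else None
    print("claims:", str(version) if version is not None else "no Info-dict PDF/X key")

    intents = pdf.Root.get("/OutputIntents")
    if intents is not None:
        for intent in intents:
            # The output intent names the intended print condition, e.g. a FOGRA dataset.
            print("output intent:", str(intent.get("/OutputConditionIdentifier")))
```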

Back when the earlier PDF/X standards were designed, transparency handling was a bit inconsistent between RIPs, and color management was an inaccessible black art to many print service providers, which is why PDF/X-1a was popular with many printers. That’s not been the case for a decade now, so PDF/X-4 will work just fine.

In other words, the choice is more down to where the participants in the exchange want the responsibility to sit than to anything technical any more.

In addition, PDF/X-4 is much more easily transitioned between different presses, and even between completely different print technologies, such as moving a job from offset or flexo to a digital press. And it can also be used much more easily for digital delivery alongside using it for print. For many people that’s enough to push the balance firmly in favour of PDF/X-4.

For further reading about PDF documents and standards:

  1. Full Speed Ahead: How to make variable data PDF files that won’t slow your digital press
  2. PDF Processing Steps – the next evolution in handling technical marks
  3. Compliance, compatibility, and why some tools are more forgiving of bad PDFs

About the author

Martin Bailey, CTO, Global Graphics Software

Martin Bailey is Global Graphics’ Chief Technology Officer. He’s currently the primary UK expert to the ISO committees maintaining and developing PDF and PDF/VT and is the author of Full Speed Ahead: how to make variable data PDF files that won’t slow your digital press, a new guide offering advice to anyone with a stake in variable data printing, including graphic designers, print buyers, composition developers and users.

Be the first to receive our news updates and product news. Why not subscribe to our monthly newsletter? Subscribe here

Follow us on LinkedIn and Twitter

Harlequin RIP gains Ghent PDF Output Suite 5 compliancy

We’ve just added the Harlequin RIP® to the list of products certified as compliant with the Ghent Workgroup’s Output Suite 5 at https://www.gwg.org/ghent-pdf-output-suite-5-compliancy/

It was an interesting exercise, not because it was difficult, but because we started with a bit of archaeology. Back in February 2003 we published an “Application Data Sheet” of instructions for configuring versions 5.3 and 5.5 of the Harlequin RIP to render PDF/X-1a files. We followed that up in 2004 with another edition for Harlequin 6 (the Eclipse release), addressing PDF/X-3 as well, and then in 2005 with one for Harlequin 7 (Genesis).

After that it seemed that PDF/X was sufficiently well understood and so widely adopted in the marketplace that we didn’t need to continue the series. Added to that, we’d added the ability for Harlequin RIPs to recognize PDF/X files and automatically change the RIP configuration around things like overprinting to, as we phrased it at the time, “Do the Right Thing™”.

So when we started writing up how to configure Harlequin for the GWG Output Suite we simply opened up the 2005 doc and replaced the screen grab of the user interface in Harlequin MultiRIP with one from Harlequin 12.1. In 14 years we’ve added a few options, and, of course, a Windows 10 dialog looks a bit different to one from Windows XP!

We did have to add a couple of extra bullet points to the instructions, especially around perfecting the color management of spots being emulated in process colorants. Some of our color focus over the last decade has been on outputting to a fixed ink set, whether that’s on a digital press or for flexo or offset. So we made the point by delivering our sample output to be reviewed by the GWG as a CMYK raster file … and yes, all of the spot colors in the test suite showed up correctly in their emulations; it all passed 100%.

But that was it.

We thought about adding an indication of which RIP versions the instructions applied to, but ended up simply pointing out when a configuration item had been changed from a check-box to a three-way drop-down menu. The instructions will give you good output from all Harlequin RIPs shipped by Global Graphics in the last decade, and into the future as well.

I love it when stuff just works, and continues to just work, like this. There’s definitely a benefit to aiming to Do the Right Thing™!

Harlequin RIP® gains Ghent PDF Output Suite 5 compliancy.

To be the first to receive our blog posts, news updates and product news why not subscribe to our monthly newsletter? Subscribe here

Where is screening performed in the workflow?

In my last post I gave an introduction to halftone screening. Here, I explain where screening is performed in the workflow:

 

Halftone screening must always be performed after the page description language (such as PDF or PostScript) has been rendered into a raster by a RIP … at least conceptually.

In many cases it’s appropriate for the screening to be performed by that RIP, which may mean that in highly optimized systems it’s done in parallel with the final rendering of the pages, avoiding the overhead of generating an unscreened contone raster and then screening it. This usually delivers the highest throughput.

Global Graphics Software’s Harlequin RIP® is a world-leading RIP that’s used to drive some of the highest quality and highest speed digital presses today. The Harlequin RIP can apply a variety of different halftone types while rendering jobs, including Advanced Inkjet Screens™.

But an inkjet press vendor may also build their system to apply screening after the RIP, taking in an unscreened raster such as a TIFF file. This may be because:

  • The RIP already in use may not provide screening that’s high enough quality, or fast enough, to drive their devices; in that situation it may be appropriate to use a stand-alone screening engine after that existing RIP.
  • Closed loop calibration may be needed to adjust for small variations in the tonality of the prints over time, and to do so while printing multiple copies of the same output, in other words, without the need for re-RIPping that output.
  • A variable data optimization technology such as Harlequin VariData™ may be in use that requires multiple rasters to be recomposited after the RIP; it’s better to apply screening after that recomposition to avoid visible artifacts around some graphics caused by different halftone alignment.
  • Sophisticated features may be needed that are only available in a stand-alone screening engine, such as Global Graphics’ PrintFlat™ technology, which is applied in ScreenPro™.

Global Graphics Software has developed the ScreenPro stand-alone screening engine for these situations. It’s used in production to screen raster output produced using RIPs such as those from Esko, Caldera and ColorGate, as well as after Harlequin RIPs in order to access PrintFlat.

Achieve excellent quality at high speeds on your digital inkjet press: The ScreenPro engine from Global Graphics Software is available as a cross platform development component to integrate seamlessly into your workflow solution.

The above is an excerpt from our latest white paper: How to mitigate artifacts in high-speed inkjet printing. Download the white paper here.

For further reading about the causes of banding and streaking in inkjet output see our related blog posts:

  1. Streaks and Banding: Measuring macro uniformity in the context of optimization processes for inkjet printing
  2. What causes banding in inkjet? (And the smart software solution to fix it.)

Be the first to receive our news updates and product news. Why not subscribe to our monthly newsletter? Subscribe here

Follow us on LinkedIn and Twitter

What is halftone screening?

Halftone screening, also sometimes called halftoning, screening or dithering, is a technique to reliably produce optical illusions that fool the eye into seeing tones and colors that are not actually present on the printed matter.

Most printing technologies are not capable of printing a significant number of different levels for any single color. Offset and flexo presses and some inkjet presses can only place ink or no ink. Halftone screening is a method to make it look as if many more levels of gray are visible in the print by laying down ink in some areas and not in others, and using such a small pattern of dots that the individual dots cannot be seen at normal viewing distance.

Conventional screening (also called AM or amplitude-modulated screening, because the dot size varies), used for offset and flexo presses, breaks a continuous tone black and white image into a series of dots of varying sizes and places those dots in a rigid grid pattern. Smaller dots give lighter tones, and the dot sizes within the grid are increased to give progressively darker shades until the dots grow so large that they tile with adjacent dots to form a solid of maximum density (100%). This approach is used mainly because those presses cannot reliably print single pixels or very small groups of them, and it brings its own challenges: it can cause moiré between colorants, and it reduces the amount of detail that can be reproduced.
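
The dot-growth mechanism is easy to make concrete with threshold screening: each contone pixel is compared against a tiled threshold matrix, and dots grow outward from the cell centres as the tone darkens. Here’s a minimal sketch with a tiny, invented 4×4 clustered-dot matrix; production screens are far larger and carefully tuned.

```python
import numpy as np

# A tiny clustered-dot threshold matrix (values scaled to 0..1). Thresholds
# are lowest near the cell centre, so dots grow outward as tone increases.
# The matrix is illustrative only, not a production screen.
m = np.array([[12,  5,  6, 13],
              [ 4,  0,  1,  7],
              [11,  3,  2,  8],
              [15, 10,  9, 14]]) / 16.0

def am_screen(contone):
    """contone: 2-D array of ink coverage in 0..1 -> binary raster (1 = print a dot)."""
    h, w = contone.shape
    tiled = np.tile(m, (h // 4 + 1, w // 4 + 1))[:h, :w]  # repeat the cell over the page
    return (contone > tiled).astype(np.uint8)

flat = np.full((8, 8), 0.25)        # a flat 25% tint
print(am_screen(flat).mean())       # -> 0.25: a quarter of the pixels are inked
```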

Most inkjet presses can print even single dots on their own and produce a fairly uniform tone from them. They can therefore use dispersed screens, sometimes called FM (frequency-modulated) or stochastic halftones.

A simple halftone screen.

 

A dispersed screen uses dots that are all (more or less) the same size, but the distance between them is varied to give lighter or darker tones. There is no regular grid placement; in fact the placement is more or less randomized (which is what the word ‘stochastic’ means), but truly random placement leads to a very ‘noisy’ result with uneven tonality, so the placement algorithms are carefully designed to avoid this.
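
Error diffusion is the classic teaching example of that kind of controlled ‘almost random’ placement: each pixel is thresholded and the quantization error is pushed onto unprocessed neighbours. A minimal Floyd-Steinberg sketch follows; note that production FM screens for presses are usually precomputed threshold arrays tuned to the device, not computed per pixel like this.

```python
import numpy as np

def floyd_steinberg(contone):
    """Binary error-diffusion halftone of a 2-D array of ink coverage in 0..1."""
    img = contone.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1 if img[y, x] >= 0.5 else 0
            err = img[y, x] - out[y, x]          # quantization error to distribute
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

print(floyd_steinberg(np.full((16, 16), 0.25)).mean())  # ≈ 0.25, with no regular grid
```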

Inkjet is being used more and more in labels, packaging, photo finishing and industrial print, all of which often use more than four inks, so the fact that a dispersed screen avoids moiré problems is also very helpful.

Dispersed screening can retain more detail and tonal subtlety than conventional screening can at the same resolution. This makes such screens particularly relevant to single-pass inkjet presses, which tend to have lower resolutions than the imaging methods used on, say, offset lithography. An AM screen at 600 dots per inch (dpi) would be very visible from a reading distance of less than a meter or so, while an FM screen can use dots that are sufficiently small that they produce the optical illusion that there are no dots at all, just smooth tones. Many inkjet presses are now stepping up to 1200dpi, but that’s still lower resolution than a lot of offset and flexo printing.

This blog post has concentrated on binary screening for simplicity. Many inkjet presses can place different amounts of ink at a single location (often described as using different drop sizes or more than one bit per pixel), and therefore require multi-level screening. And inkjet presses often also benefit from halftone patterns that are more structured than FM screens, but that don’t cluster into discrete dots in the same way as AM screens.
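
The quantization step of multi-level screening is easy to illustrate: instead of a binary dot/no-dot decision, each pixel is mapped to one of several output states, such as drop sizes. This toy sketch shows only that step; a real multi-level screen also dithers carefully between adjacent levels.

```python
import numpy as np

def multilevel(contone, levels=4):
    """Quantize coverage 0..1 to one of `levels` output states (e.g. drop sizes)."""
    return np.clip(np.round(contone * (levels - 1)), 0, levels - 1).astype(np.uint8)

tints = np.array([0.0, 0.2, 0.5, 0.9])
print(multilevel(tints))   # -> [0 1 2 3]: no drop, small, medium, large
```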

 

The above is an excerpt from our latest white paper: How to mitigate artifacts in high-speed inkjet printing. Download the white paper here.