Compliance, compatibility, and why some tools are more forgiving of bad PDFs

Compliant and compatible PDF documents and the Harlequin RIP

We added support for native processing of PDF files to the Harlequin RIP® way back in 1997. When we started working on that support we somewhat naïvely assumed that we should implement the written specification and that all would be well. But it was obvious from the very first tests that we performed that we would need to do something a bit more intelligent because a large proportion of PDF files that had been supplied as suitable for production printing did not actually comply with the specification.

Launching a product that would reject many PDF files that could be accepted by other RIPs would be commercial suicide. The fact that, at the time, those other RIPs needed the PDF to be transformed into PostScript first didn’t change the business case.

Unfortunately a lot of PDF files are still being made that don’t comply with the standard, so over the almost a quarter of a century since first launching PDF support we’ve developed our own rules around what Harlequin should do with non-compliant files, and invested many decades of effort in test and development to accept non-compliant files from major applications.

The first rule that we put in place is that Harlequin is not a validation tool. A Harlequin RIP user will have PDF files to be printed, and Harlequin should render those files as long as we can have a high level of confidence that the pages will be rendered as expected.

In other words, we treat both compliance with the PDF standard and compatibility with major PDF creation tools as equally important … and supporting Harlequin RIP users in running profitable businesses as even more so!

The second rule is that silently rendering something incorrectly can be very bad, increasing costs if a reprint is required and causing a print buyer/brand to lose faith in a print service provider/converter. So Harlequin is written to require a reasonably high level of confidence that it can render the file as expected. If a developer opening up the internals of a PDF file couldn’t be sure how it was intended to be rendered then Harlequin should not be rendering it.

We’d expect most other vendors of PDF readers to apply similar logic in their products, and the evidence we’ve seen supports that expectation. The differences between how each product treats invalid PDF are the result of differences in the primary goal of each product, and therefore in the cost of output that is viewed as incorrect.

Consider a PDF viewer for general office or home use, running on a mobile device or PC. The business case for that viewer implies that the most important thing it has to do is to show as much of the information from a PDF file as possible, preferably without worrying the user with warnings or errors. It’s not usually going to be hugely important or costly if the formatting is slightly wrong. You could think of this as being at the opposite end of the scale from a RIP for production printing. In other words, the required level of confidence in accurately rendering the appearance of the page is much lower for the on-screen viewer.

You may have noticed that my description of a viewer could easily be applied to Adobe Reader or Acrobat Pro. Acrobat is also not written primarily as a validation tool, and it’s definitely not appropriate to assume that a PDF file complies with the standard just because it opens in Acrobat. Remember the Acrobat business case, and imagine what the average office user’s response would be if it would not open a significant proportion of PDF files because it flagged them as invalid!

For further reading about PDF documents and standards:

  1. Full Speed Ahead: How to make variable data PDF files that won’t slow your digital press
  2. PDF Processing Steps – the next evolution in handling technical marks

About the author

Martin Bailey, CTO, Global Graphics Software

Martin Bailey, Distinguished Technologist, Global Graphics Software, is currently the primary UK expert to the ISO committees maintaining and developing PDF and PDF/VT and is the author of Full Speed Ahead: how to make variable data PDF files that won’t slow your digital press, a new guide offering advice to anyone with a stake in variable data printing, including graphic designers, print buyers, composition developers and users.


Second edition now available: Full Speed Ahead: How to make variable data PDF files that won’t slow your digital press

At the beginning of 2020, in what we thought was the run-up to drupa, Global Graphics published a new guide called “Full Speed Ahead: How to make variable data PDF files that won’t slow your digital press”. It was designed to complement the recommendations available for how to maximize sales from direct mail campaigns, with technical recommendations as to how you can make sure that you don’t make a PDF file for a variable data job that will bring a digital press to its knees. It also carried those lessons into additional print sectors that are rapidly adopting variable data, such as labels, packaging, product decoration and industrial print, with hints around using variable data in unusual ways for premium jobs at premium margins.

Well, as they say, a lot has happened since then.

And some of that has been positive. At the end of 2020 several new International Standards were published, including a “dated revision” (a 2nd edition) of the PDF 2.0 standard, a new standard for submission of PDF files for production printing: PDF/X-6, and a new standard for submission of variable data PDF files for printing: PDF/VT-3.

We’ve therefore updated Full Speed Ahead to cover the new standards. And at the same time we’ve taken the opportunity to extend and clarify some of the rest of the text in response to feedback on the first edition.

So now you can keep up to date, just by downloading the new edition!

DOWNLOAD THE GUIDE

Further reading:

  1. What’s the best effective photographic image resolution for your variable data print jobs?
  2. Why does optimization of VDP jobs matter?


There really are two different kinds of variable data submission!

There are two completely different forms of variable data handling in the Harlequin RIP®, and I’m sometimes asked why we’ve duplicated functionality like that. The simple answer is that it’s not duplication; they each address very different use cases.

But those use cases are not, as many people then expect, “white paper workflows” vs imprinting, i.e. whether the whole design including both re-used and single-use elements is printed together vs adding variable data on top of a pre-printed substrate. Both Harlequin VariData™ and the “Dynamic overlays” that we added in Harlequin version 12 can address both of those requirements.

Incidentally, I put “white paper workflows” in quotes because that’s what it’s called in the transactional and direct mail spaces … but very similar approaches are used for variable data printing in other sectors, which may not be printing on anything even vaguely resembling paper!

The two use cases revolve around who has the data, when they have it, whether a job should start printing before all the data is available, and whether there are any requirements to restrict access to the data.

When most people in the transactional, direct mail or graphic arts print sectors think about variable data it tends to be in the form of a fully resolved document representing all of the many variations of one of a collection of pages, combining one or more static ‘backgrounds’ with single-use variable data elements, and maybe some re-used elements from which one is selected for each recipient. In other words, each page in the PDF file is meant to be printed as-is, and will be suitable for a single copy. That whole, fully resolved file is then sent to the press. It may be sent from one division of the printing company to the press room, or even from some other company entirely. The same approach is used for some VDP jobs in labels, folding carton, corrugated, signage and some industrial sectors.

This is the model for which optimized PostScript, and then optimized PDF, PDF/VT (and AFP) were designed. It’s a robust workflow that allows for significant amounts of proofing and process control at multiple stages. And it also allows very rich graphical variability. It’s the workflow for which Harlequin VariData was designed, to maximize the throughput of variable data files through the Digital Front End (DFE) and onto the press.

But in some cases the variable data is not available when the job starts printing. Indeed, the print ‘job’ may run for months in situations such as packaging lines or ID card printing. That can be managed by simply sending a whole series of optimized PDF files, each one representing a few thousand or a couple of million instances of the job to be printed. But in some cases that’s simply not convenient or efficient enough.

In other workflows the data to be printed must be selected based on the item to be printed on, and that’s only known at the very last minute … or second … before the item is printed. A rather extreme example of this is in printing ID cards. In some workflows a chip or magnetic strip is programmed first. When the card is to be printed it’s obviously important that the printed information matches the data on the chip or magnetic strip, so the printing unit reads the data from one of those, uses that to select the data to be printed, and prints it … sometimes all in less than a second. In this case you could use a fully resolved optimized PDF file and select the appropriate page from it based on identifying the next product to be printed on; I know there are companies doing exactly that. But it gets cumbersome when the selection time is very short and the number of items to be printed is very large. And you also need to have all of the data available up-front, so a more dynamic solution is better.
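To give a feel for how small that selection step really is, here’s a minimal Python sketch using the pikepdf library. It assumes a fully resolved PDF in which each record occupies exactly one page, and a purely hypothetical CSV index mapping the ID read from the chip or magnetic strip to a page number; it’s not how any particular press controller is built.

```python
# Minimal sketch of the lookup described above: a fully resolved PDF where page
# N belongs to record N, plus a (hypothetical) CSV index mapping the ID read
# from the chip or magnetic strip to that page. Not how any press controller
# is actually built; it just shows the shape of the selection step.
import csv
import pikepdf

def load_index(path):
    """Map card IDs to 0-based page numbers; the CSV column names are made up."""
    with open(path, newline="") as f:
        return {row["card_id"]: int(row["page_index"]) for row in csv.DictReader(f)}

def extract_page(job_pdf, page_index, out_path):
    """Write a single-page PDF containing just the selected record."""
    with pikepdf.open(job_pdf) as src:
        out = pikepdf.new()
        out.pages.append(src.pages[page_index])
        out.save(out_path)

index = load_index("card_index.csv")
card_id = "A123456"                       # value read from the chip or magstripe
extract_page("job.pdf", index[card_id], "next_card.pdf")
```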

Printing magnetic strip on ID cards.

In other cases there is a need to ensure that the data to be printed is held completely securely, which usually leads to a demand that there is never a complete set of that data in a standard file format outside of the DFE for the printer itself. ID cards are an example of this use case as well.

Printing Example ID cards

Moving away from very quick or secure responses, we’ve been observing an interesting trend in the labels and packaging market as digital presses are used more widely. Printing the graphics of the design itself and adding the kind of data that’s historically been applied using coding and marking are converging. Information like serial numbers, batch numbers, competition QR Codes, even sell-by and use-by dates are being printed at the same time as the main graphics. Add in the growing demands for traceability, for less of a need for warehousing and for more print on demand of a larger number of different versions, and there can be some real benefits in moving all of the print process quite close to the bottling/filling/labelling lines. But it doesn’t make sense to make a million-page PDF file just so you can change the batch number every 42 cartons because that’s what fits on a pallet.

These use cases are why we added Dynamic overlays to Harlequin. Locations on the output where marks should be added are specified, along with the type of mark (text, barcodes and images are the most commonly used). For most marks a data source must be specified; by default we support reading from CSV files or automated counters, but an interface to a database can easily be added for specific integrations. And, of course, formatting information such as font, color, barcode symbology etc must be provided.
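To make that a little more concrete, here’s a small Python sketch of the pieces such an overlay needs: a position, a mark type, a data source and some formatting. To be clear, this is not Harlequin’s Dynamic overlays configuration syntax; the dictionary keys and helper functions are invented purely for illustration.

```python
# Illustrative only: this is NOT Harlequin's Dynamic overlays configuration
# syntax. It just sketches the pieces such an overlay needs: a position, a
# mark type, a data source and some formatting.
import csv
import itertools

def csv_source(path, field):
    """Yield one value per printed item from a column of a CSV file."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield row[field]

def counter_source(start=1, step=1):
    """Yield an ever-increasing serial or batch number."""
    return (str(n) for n in itertools.count(start, step))

overlay = {
    "position_mm": (12.0, 30.0),        # where on the output the mark goes
    "mark_type": "text",                # could also be "barcode" or "image"
    "format": {"font": "Helvetica", "size_pt": 8, "color": "Black"},
    "data": counter_source(start=1),    # or csv_source("batch.csv", "batch_no")
}

for item in range(5):                   # one mark per printed item
    value = next(overlay["data"])
    print(f"item {item}: place {overlay['mark_type']} '{value}' "
          f"at {overlay['position_mm']} mm")
```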

The ‘overlay’ in “Dynamic overlays” gives away one of the limitations of this approach, in that the variable data added using it must be on top of all the static data. But we normally recommend that you do that for fully resolved VDP submissions using something like optimized PDF anyway because it makes processing much more efficient; there aren’t that many situations where the desired visual appearance requires variable graphics behind static ones. It’s also much less of a constraint than you’d have with imprinting, where you can only knock objects like white text out of a colored fill in the static background if you are using a white ink!

For what it’s worth, Dynamic overlays also work well for imprinting or for cases where you need to print graphics of middling complexity at high quality but where there are no static graphics at all (existing coding & marking systems can handle simple graphics at low to medium quality very well). In other words, there’s no need to have a background to print the variable data as a foreground over.

So now you know why we’ve doubled up on variable data functionality!

Further reading:

  1. What’s the best effective photographic image resolution for your variable data print jobs?
  2. Why does optimization of VDP jobs matter?


What is a Raster Image Processor (RIP)?

Ever wondered what a raster image processor or RIP does? And what does RIPping a file mean? Read on to learn more about the phases of a RIP, the engine at the heart of your Digital Front End (DFE).

The RIP converts text and image data from many file formats including PDF, TIFF™ or JPEG into a format that a printing device such as an inkjet printhead, toner marking engine or laser platesetter can understand. The process of RIPping a job requires several steps to be performed in order, regardless of the page description language (such as PDF) that it’s submitted in. Even image file formats such as TIFF, JPEG or PNG usually need to be RIPped, to convert them into the correct color space, at the right resolution and with the right halftone screening for the press.

Interpreting: The file to be RIPped is read and decoded into an internal database of graphical elements that must be placed on the output. Each may be an image, a character of text (including font, size, color etc), a fill or stroke etc. This database is referred to as a display list.

Compositing: The display list is pre-processed to apply any live transparency that may be in the job. This phase is only required for any graphics in formats that support live transparency, such as PDF; it’s not required for PostScript language jobs or for TIFF and JPEG images because those cannot include live transparency.

Rendering: The display list is processed to convert every graphical element into the appropriate pattern of pixels to form the output raster. The term ‘rendering’ is sometimes used specifically for this part of the overall processing, and sometimes to describe the whole of the RIPping process.

Output: The raster produced by the rendering process is sent to the marking engine in the output device, whether it’s exposing a plate, a drum for marking with toner, an inkjet head or any other technology.

Sometimes this step is completely decoupled from the RIP, perhaps because plate images are stored as TIFF files and then sent to a CTP platesetter later, or because a near-line or off-line RIP is used for a digital press. In other environments the output stage is tightly coupled with rendering, and the output raster is kept in memory instead of writing it to disk to increase speed.
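The hand-offs between those phases can be sketched in a few lines of Python. This is a toy skeleton with invented data structures, nothing like the Harlequin internals, but it shows how interpretation builds a display list, compositing flattens transparency, rendering produces the raster and output delivers it.

```python
# Toy skeleton of the four phases: interpret, composite, render and output.
# The data structures are invented for illustration and bear no relation to
# the Harlequin internals.
from dataclasses import dataclass, field

@dataclass
class Element:
    kind: str                 # "image", "text", "fill", "stroke", ...
    params: dict
    has_transparency: bool = False

@dataclass
class DisplayList:
    elements: list = field(default_factory=list)

def interpret(job):
    """Decode the job into a display list of graphical elements."""
    dl = DisplayList()
    for obj in job:
        dl.elements.append(Element(obj["kind"], obj, obj.get("alpha", 1.0) < 1.0))
    return dl

def composite(dl):
    """Flatten live transparency; a no-op when the job contains none."""
    for el in dl.elements:
        if el.has_transparency:
            el.params["alpha"] = 1.0      # pretend we pre-blended it
            el.has_transparency = False
    return dl

def render(dl, width, height):
    """Convert every element into pixels of the output raster."""
    raster = [[0] * width for _ in range(height)]
    # ... rasterize each element of dl here ...
    return raster

def output(raster):
    """Hand the raster to the marking engine, or write it to disk as a TIFF."""
    print(f"raster ready: {len(raster[0])} x {len(raster)} pixels")

job = [{"kind": "fill", "alpha": 0.5}, {"kind": "text", "alpha": 1.0}]
output(render(composite(interpret(job)), width=80, height=60))
```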

RIPping often includes a number of additional processes; in the Harlequin RIP® for example:

  • In-RIP imposition is performed during interpretation
  • Color management (Harlequin ColorPro®) and calibration are applied during interpretation or compositing, depending on configuration and job content
  • Screening can be applied during rendering. Alternatively it can be done after the Harlequin RIP has delivered unscreened raster data; this is valuable if screening is being applied using Global Graphics’ ScreenPro™ and PrintFlat™ technologies, for example.

A DFE for a high-speed press will typically be using multiple RIPs running in parallel to ensure that they can deliver data fast enough. File formats that can hold multiple pages in a single file, such as PDF, are split so that some pages go to each RIP, load-balancing to ensure that all RIPs are kept busy. For very large presses huge single pages or images may also be split into multiple tiles and those tiles sent to different RIPs to maximize throughput.
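Here’s a rough sketch of that load-balancing idea, with Python’s standard process pool standing in for a farm of parallel RIPs; real DFEs distribute pages across separate RIP processes or machines rather than a single pool, so treat it purely as an illustration.

```python
# Sketch of the load-balancing idea, with Python's standard process pool
# standing in for a farm of parallel RIPs: each idle worker is handed the
# next page, so all workers stay busy until the job is done.
from concurrent.futures import ProcessPoolExecutor

def rip_page(page_number):
    """Placeholder for interpreting, compositing and rendering one page."""
    return f"page {page_number} rendered"

def rip_job(page_count, rip_count=4):
    with ProcessPoolExecutor(max_workers=rip_count) as pool:
        for result in pool.map(rip_page, range(1, page_count + 1)):
            print(result)

if __name__ == "__main__":
    rip_job(page_count=12, rip_count=4)
```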

The raster image processor pipeline. The Harlequin RIP includes native interpretation of PostScript, EPS, DCS, TIFF, JPEG, PNG and BMP as well as PDF, PDF/X and PDF/VT, so whatever workflows your target market uses, it gives accurate and predictable image output time after time.

To find out more about the Harlequin RIP, download the latest brochure here.

This post was first published in June 2019.

Further reading:

  1. Where is screening performed in the workflow?
  2. What is halftone screening?
  3. Unlocking document potential



What’s the difference between PDF/X-1a and PDF/X-4?


Which PDF/X should I use?

Somebody asked me recently what the difference is between PDF/X-1a (first published in 2001) and PDF/X-4 (published in 2010). I thought this might also be interesting to a wider audience.

Both are ISO standards that deliberately restrict some aspects of what you can put into a PDF file in order to make them more reliable for delivery of jobs for professional print. But the two standards address different needs/desires:

PDF/X-1a content must all have been transformed into CMYK (optionally plus spots) already, so it puts all of the responsibility for correct separation and transparency handling onto the creation side. When it hits Harlequin, all the RIP can do is to lock in the correct overprint settings and (optionally) pre-flight the intended print output condition, as encapsulated in the output intent.

On the other hand, PDF/X-4 supports quite a few things that PDF/X-1a does not, including:

  • Device-independent color spaces
  • Live PDF transparency
  • Optional content (layers)

That moves a lot more of the responsibility downstream into the RIP, because it can carry unseparated colors and transparency.

Back when the earlier PDF/X standards were designed, transparency handling was a bit inconsistent between RIPs, and color management was an inaccessible black art to many print service providers, which is why PDF/X-1a was popular with many printers. That’s not been the case for a decade now, so PDF/X-4 will work just fine.

In other words, the choice is more down to where the participants in the exchange want the responsibility to sit than to anything technical any more.

In addition, PDF/X-4 is much more easily transitioned between different presses, and even between completely different print technologies, such as moving a job from offset or flexo to a digital press. And it can also be used much more easily for digital delivery alongside using it for print. For many people that’s enough to push the balance firmly in favour of PDF/X-4.
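If you’re handed a file and want to know which PDF/X flavor it claims to conform to, the declaration lives in the GTS_PDFXVersion key. Here’s a minimal sketch using the pikepdf library that reads it from the document Info dictionary; note that it only reports the claim, it does no validation, and some newer PDF/X files carry the key only in XMP metadata, so a miss means “unknown” rather than “not PDF/X”.

```python
# Quick check of which PDF/X conformance a file claims, via the declared
# GTS_PDFXVersion key in the document Info dictionary. This reports the claim
# only; it does not validate the file against the standard.
import pikepdf

def declared_pdfx_version(path):
    """Return the declared PDF/X conformance string, or None if not declared."""
    with pikepdf.open(path) as pdf:
        info = pdf.trailer.get("/Info", {})
        if "/GTS_PDFXVersion" in info:
            return str(info["/GTS_PDFXVersion"])
    return None   # newer files may carry the key only in XMP metadata

print(declared_pdfx_version("job.pdf"))   # e.g. "PDF/X-1a:2003" or "PDF/X-4"
```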

For further reading about PDF documents and standards:

  1. Full Speed Ahead: How to make variable data PDF files that won’t slow your digital press
  2. PDF Processing Steps – the next evolution in handling technical marks
  3. Compliance, compatibility, and why some tools are more forgiving of bad PDFs

About the author

Martin Bailey, CTO, Global Graphics Software

Martin Bailey is Global Graphics’ Chief Technology Officer. He’s currently the primary UK expert to the ISO committees maintaining and developing PDF and PDF/VT and is the author of Full Speed Ahead: how to make variable data PDF files that won’t slow your digital press, a new guide offering advice to anyone with a stake in variable data printing, including graphic designers, print buyers, composition developers and users.


Harlequin RIP gains Ghent PDF Output Suite 5 compliancy

We’ve just added the Harlequin RIP® to the list of products certified as compliant with the Ghent Workgroup’s Output Suite 5 at https://www.gwg.org/ghent-pdf-output-suite-5-compliancy/

It was an interesting exercise, not because it was difficult, but because we started with a bit of archaeology. Back in February 2003 we published an “Application Data Sheet” of instructions for configuring versions 5.3 and 5.5 of the Harlequin RIP to render PDF/X-1a files. We followed that up with another edition for Harlequin 6 (the Eclipse release), addressing PDF/X-3 as well in 2004, and then for Harlequin 7 (Genesis) in 2005.

After that it seemed that PDF/X was sufficiently well understood and so widely adopted in the marketplace that we didn’t need to continue the series. Added to that, we’d added the ability for Harlequin RIPs to recognize PDF/X files and automatically change the RIP configuration around things like overprinting to, as we phrased it at the time, “Do the Right Thing™”.

So when we started writing up how to configure Harlequin for the GWG Output Suite we simply opened up the 2005 doc and replaced the screen grab of the user interface in Harlequin MultiRIP with one from Harlequin 12.1. In 14 years we’ve added a few options, and, of course, a Windows 10 dialog looks a bit different to one from Windows XP!

We did have to add a couple of extra bullet points to the instructions, especially around perfecting the color management of spots being emulated in process colorants. Some of our color focus over the last decade has been on outputting to a fixed ink set, whether that’s on a digital press or for flexo or offset. So we made the point by delivering our sample output to be reviewed by the GWG as a CMYK raster file … and yes, all of the spot colors in the test suite showed up correctly in their emulations; it all passed 100%.

But that was it.

We thought about adding an indication of which RIP versions the instructions applied to, but ended up simply pointing out when a configuration item had been changed from a check-box to a three-way drop-down menu. The instructions will give you good output from all Harlequin RIPs shipped by Global Graphics in the last decade, and into the future as well.

I love it when stuff just works, and continues to just work, like this. There’s definitely a benefit to aiming to Do the Right Thing™!



Where is screening performed in the workflow?

In my last post I gave an introduction to halftone screening. Here, I explain where screening is performed in the workflow:

Halftone screening must always be performed after the page description language (such as PDF or PostScript) has been rendered into a raster by a RIP … at least conceptually.

In many cases it’s appropriate for the screening to be performed by that RIP, which may mean that in highly optimized systems it’s done in parallel with the final rendering of the pages, avoiding the overhead of generating an unscreened contone raster and then screening it. This usually delivers the highest throughput.

Global Graphics Software’s Harlequin RIP® is a world-leading RIP that’s used to drive some of the highest quality and highest speed digital presses today. The Harlequin RIP can apply a variety of different halftone types while rendering jobs, including Advanced Inkjet Screens™.

But an inkjet press vendor may also build their system to apply screening after the RIP, taking in an unscreened raster such as a TIFF file. This may be because:

  • An inkjet press vendor may already be using a RIP whose screening isn’t of high enough quality, or isn’t fast enough, to drive their devices. In that situation it may be appropriate to use a stand-alone screening engine after that existing RIP.
  • To apply closed loop calibration to adjust for small variations in the tonality of the prints over time, and to do so while printing multiple copies of the same output, in other words, without the need for re-ripping that output.
  • When a variable data optimization technology such as Harlequin VariData™ is being used that requires multiple rasters to be recomposited after the RIP, it’s better to apply screening after that recomposition to avoid visible artifacts around some graphics caused by different halftone alignment.
  • To access sophisticated features that are only available in a stand-alone screening engine such as Global Graphics’ PrintFlat™ technology, which is applied in ScreenPro™.

Global Graphics Software has developed the ScreenPro stand-alone screening engine for these situations. It’s used in production to screen raster output produced using RIPs such as those from Esko, Caldera and ColorGate, as well as after Harlequin RIPs in order to access PrintFlat.

Achieve excellent quality at high speeds on your digital inkjet press: The ScreenPro engine from Global Graphics Software is available as a cross platform development component to integrate seamlessly into your workflow solution.

The above is an excerpt from our latest white paper: How to mitigate artifacts in high-speed inkjet printing. Download the white paper here.

For further reading about the causes of banding and streaking in inkjet output see our related blog posts:

  1. Streaks and Banding: Measuring macro uniformity in the context of optimization processes for inkjet printing

  2. What causes banding in inkjet? (And the smart software solution to fix it.)


What is halftone screening?

Halftone screening, also sometimes called halftoning, screening or dithering, is a technique to reliably produce optical illusions that fool the eye into seeing tones and colors that are not actually present on the printed matter.

Most printing technologies are not capable of printing a significant number of different levels for any single color. Offset and flexo presses and some inkjet presses can only place ink or no ink. Halftone screening is a method to make it look as if many more levels of gray are visible in the print by laying down ink in some areas and not in others, and using such a small pattern of dots that the individual dots cannot be seen at normal viewing distance.

Conventional screening, for offset and flexo presses, breaks a continuous tone black and white image into a series of dots of varying sizes and places these dots in a rigid grid pattern. Smaller dots give lighter tones and the dot sizes within the grid are increased to give progressively darker shades until the dots grow so large that they tile with adjacent dots to form a solid of maximum density (100%). This approach is used mainly because those presses cannot print single pixels or very small groups of pixels, and it introduces other challenges, such as moiré between colorants, and it reduces the amount of detail that can be reproduced.
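As a toy illustration of how a clustered-dot screen turns tone into dots, here’s a short Python sketch built around a 4×4 turn-on matrix; the matrix is invented purely for illustration, and real press screens use much larger, carefully designed threshold arrays.

```python
# Toy clustered-dot (AM-style) screen. ORDER is the turn-on sequence for a
# 4x4 screen cell: 1 switches on first and 16 last, and the low numbers sit
# together in the middle of the cell so the dot grows outward from its centre.
# Real press screens use much larger, carefully designed matrices.
ORDER = [
    [13,  9, 10, 14],
    [ 8,  1,  2, 11],
    [12,  4,  3,  7],
    [16,  6,  5, 15],
]

def am_screen(coverage, width, height):
    """coverage[y][x] in 0.0..1.0 (fraction of ink); returns 1 where ink goes."""
    return [[1 if ORDER[y % 4][x % 4] <= coverage[y][x] * 16 else 0
             for x in range(width)]
            for y in range(height)]

# A flat 25% tone becomes a grid of small 2x2 dots, one per screen cell.
flat = [[0.25] * 16 for _ in range(8)]
for row in am_screen(flat, 16, 8):
    print("".join("#" if px else "." for px in row))
```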

Most inkjet presses can print even single dots on their own and produce a fairly uniform tone from them. They can therefore use dispersed screens, sometimes called FM or stochastic halftones.

A simple halftone screen.

A dispersed screen uses dots that are all (more or less) the same size, but the distance between them is varied to give lighter or darker tones. There is no regular grid placement; in fact the placement is more or less randomized (which is what the word ‘stochastic’ means). But truly random placement leads to a very ‘noisy’ result with uneven tonality, so the placement algorithms are carefully designed to avoid this.

Inkjet is being used more and more in labels, packaging, photo finishing and industrial print, all of which often use more than four inks, so the fact that a dispersed screen avoids moiré problems is also very helpful.

Dispersed screening can retain more detail and tonal subtlety than conventional screening can at the same resolution. This makes such screens particularly relevant to single-pass inkjet presses, which tend to have lower resolutions than the imaging methods used on, say, offset lithography. An AM screen at 600 dots per inch (dpi) would be very visible from a reading distance of less than a meter or so, while an FM screen can use dots that are sufficiently small that they produce the optical illusion that there are no dots at all, just smooth tones. Many inkjet presses are now stepping up to 1200dpi, but that’s still lower resolution than a lot of offset and flexo printing.
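For comparison, here’s the same flat 25% tone pushed through Floyd–Steinberg error diffusion, used here only as a simple stand-in for a dispersed screen; production stochastic screens are precomputed threshold arrays tuned far more carefully than this naive loop.

```python
# Floyd-Steinberg error diffusion, used here only as a simple stand-in for a
# dispersed (FM) screen: the dots stay roughly the same size and their spacing
# carries the tone. Production stochastic screens are precomputed threshold
# arrays tuned to avoid the noise this naive loop can produce.
def fm_screen(coverage, width, height):
    """coverage[y][x] in 0.0..1.0; returns 1 where a drop is placed."""
    err = [row[:] for row in coverage]       # working copy to smear error into
    out = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            want = err[y][x]
            out[y][x] = 1 if want >= 0.5 else 0
            e = want - out[y][x]             # tone we still owe the neighbours
            if x + 1 < width:
                err[y][x + 1] += e * 7 / 16
            if y + 1 < height:
                if x > 0:
                    err[y + 1][x - 1] += e * 3 / 16
                err[y + 1][x] += e * 5 / 16
                if x + 1 < width:
                    err[y + 1][x + 1] += e * 1 / 16
    return out

flat = [[0.25] * 16 for _ in range(8)]       # the same flat 25% tone as before
for row in fm_screen(flat, 16, 8):
    print("".join("#" if px else "." for px in row))
```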

This blog post has concentrated on binary screening for simplicity. Many inkjet presses can place different amounts of ink at a single location (often described as using different drop sizes or more than one bit per pixel), and therefore require multi-level screening. And inkjet presses often also benefit from halftone patterns that are more structured than FM screens, but that don’t cluster into discrete dots in the same way as AM screens.
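The multi-level idea can be hinted at in the same toy terms: with three drop sizes plus “no drop”, each pixel is quantized to the nearest available ink level and only the small residual error has to be hidden by the halftone pattern. The levels in this sketch are invented; a real press is characterized to find the coverage each drop size actually delivers.

```python
# Multi-level sketch: with three drop sizes plus "no drop", each pixel is
# quantized to the nearest of four ink levels and only the residual error has
# to be hidden by the halftone. The levels below are invented; a real press
# characterizes the actual coverage delivered by each drop size.
LEVELS = [0.0, 1 / 3, 2 / 3, 1.0]    # no drop, small, medium, large

def quantize(tone):
    """Pick the drop size whose coverage is closest to the wanted tone."""
    return min(range(len(LEVELS)), key=lambda i: abs(LEVELS[i] - tone))

for tone in (0.10, 0.40, 0.75, 0.95):
    level = quantize(tone)
    print(f"tone {tone:.2f} -> level {level} (residual {tone - LEVELS[level]:+.2f})")
```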

 

The above is an excerpt from our latest white paper: How to mitigate artifacts in high-speed inkjet printing. Download the white paper here.

Time for an update on VDP!

Over the last fifteen years variable data in digital printing has grown from “the next big thing” with vast, untapped potential to a commonly used process for delivering all manner of personalized information. VDP is used for everything from credit card bills and bank statements to direct mail postcards and personalized catalogues, from college enrolment packs to Christmas cards and photobooks, from labels to tickets, checks to ID cards.

This huge variety of jobs is created and managed by an equally huge variety of software, from specialist composition tools to general purpose design applications carefully configured for VDP. And they are consumed by workflows involving (or even completely within) the Digital Front End (DFE) for a digital production press, where jobs must be imposed, color managed and rendered.

Time, then, to update our popular “Do PDF/VT Right” guide, which has had thousands of downloads since it was first published in 2014, not to mention the number of printed copies distributed at trade shows and industry events.

Do PDF/VT Right – How to make problem-free PDF files for variable data printing

In addition to a general overhaul there is a new section on the new ISO 21812 standard, which allows workflow controls to be added to PDF files, and notes on Harlequin-specific hints to get even more speed out of your DFE if you are a Harlequin user.

The goal remains the same: to provide a set of actionable recommendations that help you ensure that your jobs don’t slow down the print production workflow … without affecting the visual appearance that you’re trying to achieve. As a side benefit, several of the recommendations set out in the guide will also ensure that your PDF files can be delivered more efficiently on the web and to PDF readers on mobile devices in a cross-media publishing environment.

Some of the recommendations made in this guide are things that a graphic designer can apply quickly and easily, using their current tools. Others are intended more for the software companies building composition tools. If all of us work together we can greatly reduce the chance of that “heart-attack” job: the one that absolutely, positively must be in the post today … but that runs really slowly on the press.

Download your copy here.

PDF Processing Steps – the next evolution in handling technical marks

Best practice in handling jobs containing both real graphic content and ‘technical marks’ has evolved over the last couple of decades. Technical marks include things like cut/die lines, fold lines, dimensions, legends etc in a page description language file (usually PDF these days). Much of the time, especially for pouches, folding carton and corrugated work, they’ll come originally from a CAD file and will have been merged with the graphics.

People will want to interact with the technical marks differently at various stages in the workflow:

  • Your CAD specialists will want to see the technical marks and make sure that they’ve not been changed from the original CAD input.
  • Brand owners giving approval may not want to see the technical marks, but prepress and production manager approvers will definitely want to see both the technical marks and the graphics together on their monitors, with the ability to make layers visible or invisible at will.
  • In some workflows the technical marks from the PDF may be used to make a physical die, or to drive a laser cutter; in others an original CAD file will be used instead.
  • On a digital press you may wish to print a short run of just the technical marks, or a combination of technical marks and graphics to ensure that finishing is properly registered with the prints.
  • The main print run, whether on a conventional press (flexo, offset, etc) or digital, will obviously include the graphics, but won’t include most of the technical marks. You may want to include the legend on the print as fool-proof identification of that job, but you’ll obviously need to disable printing of any marks that overlap with the live area or bleed, such as cut and fold marks.
  • Occasionally you may wish to do another short run with technical marks after the main print run, to ensure that finishing has not drifted out of register.

So there are a lot of places in the entire process where both technical marks and graphics may need to be turned on or off. How do you do that in your RIP?

Historically, the first method used to include technical marks, originally in PostScript, but now also in PDF, was to specify each kind of technical mark in a ‘technical separation’, encoded as a spot color in the job. Most operators tried to use a name for that spot color that indicated its intent, but there weren’t any standards, so you could end up with ‘Cut’ (or ‘CUT’, ‘cut’ etc), ‘cut-line’, ‘cut line’, ‘cutline’, ‘die’ etc etc. And that’s just thinking about naming in English. The names chosen are usually fairly meaningful to a human operator, but couldn’t be used reliably for automated processing because of the amount of variation.

As a result, many jobs arriving at a converter, at least from outside of that company, must be reviewed, and the spot names replaced, or the prepress and RIP configured to use the names from that job. That manual processing takes time and introduces the potential for errors.
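That manual step usually ends up encoded as an ad-hoc lookup table somewhere in prepress. The little Python sketch below shows the idea; every alias in it is made up, which is exactly the problem.

```python
# The sort of ad-hoc lookup a prepress department ends up maintaining: map
# whatever spot name the designer used onto the mark type the workflow
# expects. Every name here is made up, which is exactly the problem.
ALIASES = {
    "cutting": {"cut", "cut-line", "cut line", "cutline", "die"},
    "creasing": {"crease", "fold", "fold line", "fold-line"},
    "legend": {"legend"},
}

def classify_spot(name):
    """Guess the technical-mark type from a spot color name, or None."""
    key = name.strip().lower()
    for mark_type, names in ALIASES.items():
        if key in names:
            return mark_type
    return None    # unknown name: a human still has to look at the job

for spot in ("CUT", "Schnittlinie", "fold line", "PANTONE 300 C"):
    print(spot, "->", classify_spot(spot))
```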

But let’s assume you’ve completed that stage; how do you configure your RIP to achieve what you need with those technical separations?

The most obvious mechanism to turn off some technical marks is to tell the RIP to render the relevant spot colors as their own separations, but then not to image them on the print. It’s a very simple model, which works well as long as the job was constructed correctly, with all of the technical marks set to overprint. When somebody upstream forgot and left a cut or fold line as knockout (which never happens, of course!) you’d get a white line through the real graphics if the technical mark was on top of them.

The next evolution of that would be to configure the RIP to say that the nominated spot separation should never knock out of any other separation. That’s a configuration option in Harlequin RIPs but may not be widely available elsewhere.

Or you could tell the RIP to completely ignore one or more nominated spot colors, so they have no effect at all on any other marks on the page. Again, that’s a configuration option in Harlequin RIPs, and is one of the best ways of managing technical marks that are saved into the PDF file as technical separations.

Alternatively, since technical marks (like many other parts of a label or packaging job) are usually captured in a PDF layer (or optional content group to use the technical term), you can turn those layers on and off. Again, there are rich controls for managing PDF layers in Harlequin RIPs.

But none of these techniques get away from the need to manually check each file and set up prepress and the RIP appropriately for the spot names or layers that have been used for technical marks.

And that’s where the new ISO standard, ISO 19593-1:2018, comes in. It defines “PDF Processing Steps”, a mechanism to uniquely identify technical marks in PDF files, along with their intended function, from cutting to folding and creasing, to bleed areas, white and varnish, braille, dimensions, legends etc. It does this by building on the common practice of saving the technical marks in PDF layers, but adds some identification metadata that is not dependent on the vendor, the language or the normal practice of the originator, prepress or pressroom.

So now you can look at a PDF file and see definitively that a layer called ‘cut’ contains cutting lines. The name ‘cut’ is now just a convenience; the real information is in metadata which is completely and reliably computer-readable. In other words, it doesn’t matter if that layer were named ‘Schnittlinie’ or anything else; the manual step of identifying names that someone, somewhere put in the file upstream and figuring out what each one means, is completely eliminated.
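Reading that metadata back out is straightforward. Here’s a hedged sketch using the pikepdf library that walks the document’s optional content groups and reports any processing-steps identification it finds; the /GTS_ProcStepsGroup and /GTS_ProcStepsType key names reflect my reading of ISO 19593-1, so check the standard for the exact dictionary layout before relying on them.

```python
# Hedged sketch: list the processing-steps identification of each layer (OCG).
# The /GTS_ProcStepsGroup and /GTS_ProcStepsType key names reflect my reading
# of ISO 19593-1; check the standard for the exact dictionary layout.
import pikepdf

def processing_steps(path):
    steps = []
    with pikepdf.open(path) as pdf:
        ocprops = pdf.Root.get("/OCProperties")
        if ocprops is None:
            return steps                          # the file has no layers at all
        for ocg in ocprops.get("/OCGs", []):
            group = ocg.get("/GTS_ProcStepsGroup")
            kind = ocg.get("/GTS_ProcStepsType")
            if group is not None:
                steps.append((str(ocg.get("/Name", "")),
                              str(group),
                              str(kind) if kind is not None else None))
    return steps

# e.g. ('cut', '/Structural', '/Cutting') regardless of what the layer is called
for name, group, kind in processing_steps("label.pdf"):
    print(name, group, kind)
```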

We implemented support for PDF Processing Steps into version 12.1r0 of the Harlequin RIP, and have worked with a number of vendors whose products create files with Processing Steps in them (including Hybrid Software, Esko and Callas) to ensure that everything works seamlessly. We also worked through a wide variety of current and probable use cases to ensure that our implementation can address real-world needs. As an example we added the ability to control all graphics on a PDF page that aren’t in Processing Step layers as if they were just another layer.

In practice this means that Harlequin can be configured to deliver pretty much whatever you need, such as:

  • Export all technical marks identified as Cutting, PartialCutting, CuttingCreasing etc to a vector format to drive a cutting machine.
  • Render and print all technical marks, but none of the real graphics, for checking registration.
  • Render the real graphics, plus dimensions and legend, for the main print run.

PDF Processing Steps promises the ability to control technical marks without needing to analyze each file and create a different setup for each job.

The most important thing that PDF Processing Steps gives us is that you can create a configuration for one of those use cases (or for many other variations) and know that it will work for all jobs that are sent to you using PDF Processing Steps; you won’t need to reconfigure for the next job, just because an operator used different spot names.

Of course, it’ll take a while for everyone to migrate from using spot names to PDF Processing Steps. But I think you’ll agree that the benefits of doing so, in increasing efficiency and reducing the potential for errors, are obvious and significant.

For more information read the press release here.