Working with spot colors in Harlequin Core

Whenever we start working with a company that’s interested in using Harlequin Core™ for their Digital Front End (DFE), there are always three technical topics under discussion: speed, quality and capabilities. Speed and quality are often very quick discussions; much of the time they’ve approached us because they’re already convinced that Harlequin can do what they need. In the remaining cases we tend to jointly agree that the best way for them to be convinced is to take a copy of Harlequin Core and run their own tests. There’s nothing quite like trying something on your own systems to give yourself confidence in the results.

So that leaves capabilities.

If the company already sells a DFE using a different core RIP they will almost always want to at least match, and usually to extend, the functionality of their existing solution when they switch to Harlequin. And if they’re building their first DFE they usually have a clear idea of what their target market will need.

At that stage we start by ensuring that we all understand that Harlequin Core can deliver rasters in whatever format is required (color channels, interleaving, resolution, bit depth, halftoning) and then cover color management pretty quickly (yes, Harlequin uses ICC profiles, including v4 and DeviceLink; yes, you can chain multiple profiles in arbitrary sequences, etc).

Then we usually come on to a series of questions that boil down to handling spot colors:

  • Most spot separations in jobs will be emulated on my digital press; can I adjust that emulation?
  • Can I make sure that the emulation works well with ICC profiles for different substrates?
  • Can I include special device colorants, such as White and Silver inks in that emulation?
  • Can I alias one spot separation name to another?
  • Can I make technical separations, like cut and fold lines, completely disappear, without knocking out if somebody upstream didn’t set them to overprint?
  • Alternatively, can I extract technical separations as vector graphics to drive a cutter/plotter with?

Since the answer to all of those is ‘yes’ we can then move on to areas where the vendor is looking for a unique capability …
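
To make those answers concrete, here’s a schematic sketch of the kind of per-spot policy table a DFE needs to express. To be clear: this is not Harlequin’s configuration syntax, and every name and action keyword here is invented purely to illustrate the decisions involved.

```python
# Illustrative sketch only: NOT Harlequin's configuration syntax. The spot
# names and action keywords are invented to show the kinds of per-spot
# decisions a DFE built on a core RIP typically needs to express.
SPOT_POLICY = {
    "PANTONE 300 C": {"action": "emulate"},                     # emulate via color management
    "White":         {"action": "device", "channel": "White"},  # real device colorant
    "Cut":           {"action": "ignore"},                      # technical mark: vanishes, never knocks out
    "CutContour":    {"action": "extract"},                     # export as vectors for a cutter/plotter
    "Kundenblau":    {"action": "alias", "target": "PANTONE 2935 C"},  # alias one spot name to another
}

def policy_for(spot_name: str) -> dict:
    """Return the configured handling for a spot; the default is plain emulation."""
    return SPOT_POLICY.get(spot_name, {"action": "emulate"})
```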

But I’ve always been slightly disappointed that we don’t get to talk more about some of the interesting corners of spot handling in Harlequin. So I created a video to walk through some examples. Take a look, and I’d welcome your comments and questions!

Further reading:

  1. Channelling how many spot colors?!!
  2. Shade and color variation in textile printing
  3. Harlequin Core – the heart of your digital press
  4. What is a raster image processor 


Where is screening performed in the workflow?

In my last post I gave an introduction to halftone screening. Here, I explain where screening is performed in the workflow.

Halftone screening must always be performed after the page description language (such as PDF or PostScript) has been rendered into a raster by a RIP … at least conceptually.

In many cases it’s appropriate for the screening to be performed by that RIP, which may mean that in highly optimized systems it’s done in parallel with the final rendering of the pages, avoiding the overhead of generating an unscreened contone raster and then screening it. This usually delivers the highest throughput.

Global Graphics Software’s Harlequin RIP® is a world-leading RIP that’s used to drive some of the highest quality and highest speed digital presses today. The Harlequin RIP can apply a variety of different halftone types while rendering jobs, including Advanced Inkjet Screens™.

But an inkjet press vendor may also build their system to apply screening after the RIP, taking in an unscreened raster such as a TIFF file. This may be because:

  • The vendor may already be using a RIP that doesn’t provide screening of high enough quality, or that can’t process fast enough, to drive their devices. In that situation it may be appropriate to use a stand-alone screening engine after that existing RIP.
  • To apply closed loop calibration, adjusting for small variations in the tonality of the prints over time while printing multiple copies of the same output, without the need to re-RIP that output (sketched below).
  • When a variable data optimization technology such as Harlequin VariData™ is being used that requires multiple rasters to be recomposited after the RIP. It’s better to apply screening after that recomposition to avoid visible artifacts around some graphics caused by different halftone alignment.
  • To access sophisticated features that are only available in a stand-alone screening engine such as Global Graphics’ PrintFlat™ technology, which is applied in ScreenPro™.

Global Graphics Software has developed the ScreenPro stand-alone screening engine for these situations. It’s used in production to screen raster output produced using RIPs such as those from Esko, Caldera and ColorGate, as well as after Harlequin RIPs in order to access PrintFlat.
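
One of the cases above, closed-loop calibration, is easy to illustrate: because the raster arriving at the screening engine is still contone, a freshly measured correction curve can be applied between copies without re-RIPping. A minimal sketch, assuming an 8-bit single-channel raster and a 256-entry curve measured from the press (all names invented):

```python
import numpy as np

def apply_closed_loop_curve(contone: np.ndarray, curve: np.ndarray) -> np.ndarray:
    """Apply a measured tone-correction LUT to an unscreened 8-bit raster.

    Because screening happens after this step, `curve` can be re-measured and
    swapped while further copies print, with no need to re-RIP the job.
    """
    assert contone.dtype == np.uint8 and curve.shape == (256,)
    return curve[contone]  # per-pixel lookup: requested tone -> corrected tone
```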

Achieve excellent quality at high speeds on your digital inkjet press: The ScreenPro engine from Global Graphics Software is available as a cross platform development component to integrate seamlessly into your workflow solution.

The above is an excerpt from our latest white paper: How to mitigate artifacts in high-speed inkjet printing. Download the white paper here.

For further reading about the causes of banding and streaking in inkjet output see our related blog posts:

  1. Streaks and Banding: Measuring macro uniformity in the context of optimization processes for inkjet printing
  2. What causes banding in inkjet? (And the smart software solution to fix it.)


What is halftone screening?

Halftone screening, also sometimes called halftoning, screening or dithering, is a technique to reliably produce optical illusions that fool the eye into seeing tones and colors that are not actually present on the printed matter.

Most printing technologies are not capable of printing a significant number of different levels for any single color. Offset and flexo presses and some inkjet presses can only place ink or no ink. Halftone screening is a method to make it look as if many more levels of gray are visible in the print by laying down ink in some areas and not in others, and using such a small pattern of dots that the individual dots cannot be seen at normal viewing distance.

Conventional screening, for offset and flexo presses, breaks a continuous tone black and white image into a series of dots of varying sizes and places those dots in a rigid grid pattern. Smaller dots give lighter tones, and the dots are progressively enlarged to give darker shades until they grow so large that they tile with adjacent dots to form a solid of maximum density (100%). This approach is used mainly because those presses cannot reliably print single pixels or very small groups of pixels, and it introduces challenges of its own: moiré between colorants, and a reduction in the amount of detail that can be reproduced.
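
As a rough illustration of the conventional approach, here is a toy clustered-dot implementation; a real AM screen uses far larger cells, carefully shaped dots and a different screen angle per separation:

```python
import numpy as np

# Thresholds grow outward from the centre of the 4x4 cell, so the pixels that
# turn "on" cluster into one dot per cell whose size tracks the requested tone.
AM_CELL = np.array([[12,  5,  6, 13],
                    [ 4,  0,  1,  7],
                    [11,  3,  2,  8],
                    [15, 10,  9, 14]])

def am_screen(coverage: np.ndarray) -> np.ndarray:
    """Binary clustered-dot screening. `coverage` is ink demand, 0-255 per pixel."""
    ys, xs = np.indices(coverage.shape)
    thresholds = (AM_CELL[ys % 4, xs % 4] + 0.5) * (255 / 16)
    return (coverage > thresholds).astype(np.uint8)  # 1 = place ink
```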

Most inkjet presses can print even single dots on their own and produce a fairly uniform tone from them. They can therefore use dispersed screens, sometimes called FM or stochastic halftones.

A simple halftone screen.

A dispersed screen uses dots that are all (more or less) the same size, but the distance between them is varied to give lighter or darker tones. There is no regular grid placement, in fact the placement is more or less randomized (which is what the word ‘stochastic’ means), but truly random placement leads to a very ‘noisy’ result with uneven tonality, so the placement algorithms are carefully set to avoid this.
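
In code, the only structural difference from the clustered-dot sketch above is that every pixel gets its own threshold instead of one tiled from a small cell. Pure white noise, as in this sketch, gives exactly the ‘noisy’ result just described; a production FM screen substitutes a carefully designed (for example blue-noise) mask:

```python
import numpy as np

def fm_screen(coverage: np.ndarray, seed: int = 0) -> np.ndarray:
    """Dispersed (FM) screening sketch using a per-pixel threshold array.

    White noise is used only for brevity and gives grainy, uneven tones; a
    real FM screen replaces `thresholds` with a designed mask that keeps the
    same-sized dots well dispersed at every tone level.
    """
    rng = np.random.default_rng(seed)
    thresholds = rng.integers(0, 256, size=coverage.shape)
    return (coverage > thresholds).astype(np.uint8)
```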

Inkjet is being used more and more in labels, packaging, photo finishing and industrial print, all of which often use more than four inks, so the fact that a dispersed screen avoids moiré problems is also very helpful.

Dispersed screening can retain more detail and tonal subtlety than conventional screening can at the same resolution. This makes such screens particularly relevant to single-pass inkjet presses, which tend to have lower resolutions than the imaging methods used on, say, offset lithography. An AM screen at 600 dots per inch (dpi) would be very visible from a reading distance of less than a meter or so, while an FM screen can use dots that are sufficiently small that they produce the optical illusion that there are no dots at all, just smooth tones. Many inkjet presses are now stepping up to 1200dpi, but that’s still lower resolution than a lot of offset and flexo printing.

This blog post has concentrated on binary screening for simplicity. Many inkjet presses can place different amounts of ink at a single location (often described as using different drop sizes or more than one bit per pixel), and therefore require multi-level screening. And inkjet presses often also benefit from halftone patterns that are more structured than FM screens, but that don’t cluster into discrete dots in the same way as AM screens.
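
Multi-level screening can be sketched as a small extension of the binary case: each pixel is quantized to the two nearest drop sizes, and the halftone mask only decides which pixels round up. A minimal illustration for a press with four drop sizes per pixel:

```python
import numpy as np

def multilevel_screen(coverage: np.ndarray, mask: np.ndarray, levels: int = 4) -> np.ndarray:
    """Multi-level screening sketch; the output is a drop size 0..levels-1 per pixel.

    `mask` is any per-pixel threshold array in 0-255 (a tiled AM cell, an FM
    mask, ...). Every pixel gets at least the drop size its tone implies; the
    mask decides which pixels step up one size to render the fractional remainder.
    """
    scaled = coverage.astype(np.float64) * (levels - 1) / 255.0
    base = np.floor(scaled)              # drop size guaranteed at this tone
    remainder = (scaled - base) * 255.0  # fractional part, screened like a binary layer
    return (base + (remainder > mask)).astype(np.uint8)
```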


The above is an excerpt from our latest white paper: How to mitigate artifacts in high-speed inkjet printing. Download the white paper here.

Time for an update on VDP!

Over the last fifteen years variable data in digital printing has grown from “the next big thing” with vast, untapped potential to a commonly used process for delivering all manner of personalized information. VDP is used for everything from credit card bills and bank statements to direct mail postcards and personalized catalogues, from college enrolment packs to Christmas cards and photobooks, from labels to tickets, checks to ID cards.

This huge variety of jobs is created and managed by an equally huge variety of software, from specialist composition tools to general purpose design applications carefully configured for VDP. And they are consumed by workflows involving (or even completely within) the Digital Front End (DFE) for a digital production press, where jobs must be imposed and color managed.

Time, then, to update our popular “Do PDF/VT Right” guide, which has had thousands of downloads since it was first published in 2014, not to mention the number of printed copies distributed at trade shows and industry events.

Do PDF/VT Right – How to make problem-free PDF files for variable data printing

In addition to a general overhaul there is a new section on the new ISO 21812 standard that allows workflow controls to be added to PDF files, and notes on Harlequin-specific hints to get even more speed out of your DFE if you are a Harlequin user.

The goal remains the same: to provide a set of actionable recommendations that help you ensure that your jobs don’t slow down the print production workflow … without affecting the visual appearance that you’re trying to achieve. As a side benefit, several of the recommendations set out in the guide will also ensure that your PDF files can be delivered more efficiently on the web and to PDF readers on mobile devices in a cross-media publishing environment.

Some of the recommendations made in this guide are things that a graphic designer can apply quickly and easily, using their current tools. Others are intended more for the software companies building composition tools. If all of us work together we can greatly reduce the chance of that “heart-attack” job; the one that absolutely, positively must be in the post today … but that runs really slowly on the press.

Download your copy here.

PDF Processing Steps – the next evolution in handling technical marks

Best practice in handling jobs containing both real graphic content and ‘technical marks’ has evolved over the last couple of decades. Technical marks include things like cut/die lines, fold lines, dimensions, legends etc in a page description language file (usually PDF these days). Much of the time, especially for pouches, folding carton and corrugated work, they’ll come originally from a CAD file and will have been merged with the graphics.

People will want to interact with the technical marks differently at various stages in the workflow:

  • Your CAD specialists will want to see the technical marks and make sure that they’ve not been changed from the original CAD input.
  • Brand owner approvers may not want to see the technical marks, but prepress and production manager approvers will definitely want to see both the technical marks and the graphics together on their monitors, with the ability to make layers visible or invisible at will.
  • In some workflows the technical marks from the PDF may be used to make a physical die, or to drive a laser cutter; in others an original CAD file will be used instead.
  • On a digital press you may wish to print a short run of just the technical marks, or a combination of technical marks and graphics to ensure that finishing is properly registered with the prints.
  • The main print run, whether on a conventional press (flexo, offset, etc) or digital, will obviously include the graphics, but won’t include most of the technical marks. You may want to include the legend on the print as fool-proof identification of that job, but you’ll obviously need to disable printing of any marks that overlap with the live area or bleed, such as cut and fold marks.
  • Occasionally you may wish to do another short run with technical marks after the main print run, to ensure that finishing has not drifted out of register.

So there are a lot of places in the entire process where both technical marks and graphics may need to be turned on or off. How do you do that in your RIP?

Historically, the first method used to include technical marks, originally in PostScript but now also in PDF, was to specify each kind of technical mark in a ‘technical separation’, encoded as a spot color in the job. Most operators tried to use a name for that spot color that indicated its intent, but there weren’t any standards, so you could end up with ‘Cut’ (or ‘CUT’, ‘cut’ etc), ‘cut-line’, ‘cut line’, ‘cutline’, ‘die’ and so on. And that’s just thinking about naming in English. The names chosen are usually fairly meaningful to a human operator, but can’t be used reliably for automated processing because of the amount of variation.

As a result, many jobs arriving at a converter, at least from outside of that company, must be reviewed, and the spot names replaced, or the prepress and RIP configured to use the names from that job. That manual processing takes time and introduces the potential for errors.

But let’s assume you’ve completed that stage; how do you configure your RIP to achieve what you need with those technical separations?

The most obvious mechanism to turn off some technical marks is to tell the RIP to render the relevant spot colors as their own separations, but then not to image them on the print. It’s a very simple model, which works well as long as the job was constructed correctly, with all of the technical marks set to overprint. When somebody upstream forgot and left a cut or fold line as knockout (which never happens, of course!) you’d get a white line through the real graphics if the technical mark was on top of them.

The next evolution of that would be to configure the RIP to say that the nominated spot separation should never knock out of any other separation. That’s a configuration option in Harlequin RIPs but may not be widely available elsewhere.

Or you could tell the RIP to completely ignore one or more nominated spot colors, so they have no effect at all on any other marks on the page. Again, that’s a configuration option in Harlequin RIPs, and is one of the best ways of managing technical marks that are saved into the PDF file as technical separations.

Alternatively, since technical marks (like many other parts of a label or packaging job) are usually captured in a PDF layer (or optional content group to use the technical term), you can turn those layers on and off. Again, there are rich controls for managing PDF layers in Harlequin RIPs.

But none of these techniques get away from the need to manually check each file and set up prepress and the RIP appropriately for the spot names or layers that have been used for technical marks.

And that’s where the new ISO standard, ISO 19593-1:2018, comes in. It defines “PDF Processing Steps”, a mechanism to uniquely identify technical marks in PDF files, along with their intended function, from cutting to folding and creasing, to bleed areas, white and varnish, braille, dimensions, legends etc. It does this by building on the common practice of saving the technical marks in PDF layers, but adds some identification metadata that is not dependent on the vendor, the language or the normal practice of the originator, prepress or pressroom.

So now you can look at a PDF file and see definitively that a layer called ‘cut’ contains cutting lines. The name ‘cut’ is now just a convenience; the real information is in metadata which is completely and reliably computer-readable. In other words, it doesn’t matter if that layer were named ‘Schnittlinie’ or anything else; the manual step of identifying names that someone, somewhere put in the file upstream and figuring out what each one means, is completely eliminated.

We implemented support for PDF Processing Steps in version 12.1r0 of the Harlequin RIP, and have worked with a number of vendors whose products create files with Processing Steps in them (including Hybrid Software, Esko and Callas) to ensure that everything works seamlessly. We also worked through a wide variety of current and probable use cases to ensure that our implementation can address real-world needs. As an example, we added the ability to control all graphics on a PDF page that aren’t in Processing Steps layers as if they were just another layer.

In practice this means that Harlequin can be configured to deliver pretty much whatever you need, such as:

  • Export all technical marks identified as Cutting, PartialCutting, CuttingCreasing etc to a vector format to drive a cutting machine.
  • Render and print all technical marks, but none of the real graphics, for checking registration.
  • Render the real graphics, plus dimensions and legend, for the main print run.

PDF Processing Steps promises the ability to control technical marks without needing to analyze each file and create a different setup for each job.

The most important benefit of PDF Processing Steps is that you can create a configuration for one of those use cases (or for many other variations) and know that it will work for all jobs sent to you using PDF Processing Steps; you won’t need to reconfigure for the next job just because an operator used different spot names.
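
For developers who want to see this metadata for themselves, here is a minimal sketch using the open-source pikepdf library (not Harlequin) to list a file’s layers; the key names reflect our reading of ISO 19593-1:

```python
import pikepdf

# Walk the optional content groups (layers) in a PDF and report which are
# Processing Steps layers. Per ISO 19593-1, such a layer's OCG dictionary
# carries a /GTS_Metadata dictionary identifying its function. Assumes the
# file actually contains layers (an /OCProperties entry in the catalog).
with pikepdf.open("job.pdf") as pdf:
    for ocg in pdf.Root.OCProperties.OCGs:
        meta = ocg.get("/GTS_Metadata")
        if meta is None:
            print(f"{ocg.Name}: ordinary artwork layer")
        else:
            group = meta.get("/GTS_ProcStepsGroup")  # e.g. /Structural
            step = meta.get("/GTS_ProcStepsType")    # e.g. /Cutting
            print(f"{ocg.Name}: processing step {group} / {step}")
```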

Of course, it’ll take a while for everyone to migrate from using spot names to PDF Processing Steps. But I think you’ll agree that the benefits of doing so, in increasing efficiency and reducing the potential for errors, are obvious and significant.

For more information read the press release here.


Martin Bailey appointed co-chair of PDF Technical Working Group

Congratulations to our very own Martin Bailey and to Peter Wyatt, the general manager of CiSRA, on being appointed co-chairs of the PDF Technical Working Group (TWG) within the PDF Competence Centre of the PDF Association. https://www.pdfa.org/working-group/pdf-competence-center/

Following the publication of the new ISO PDF 2.0 standard – ISO 32000-2 in July 2017, the PDF TWG will be producing PDF 2.0 Application Notes to support the implementation of the standard by developers whose PDF tools create and consume PDF.

ISO 32000-2 is the first PDF specification developed within the ISO working-group structure, involving subject matter experts from many countries, and is the first “post-Adobe” standard since Adobe handed over development of PDF to ISO.

Speaking on news of his appointment Martin said, “The value of a standard can be greatly increased by a wider involvement of the relevant communities in shared education and discussion. The PDF Association has become the obvious group to help foster and guide that wider involvement for PDF itself and for many of the PDF-based standards in use today.”

Duff Johnson, the PDF Association’s executive director, said, “PDF 2.0 is designed to be largely backward compatible, but older processors won’t handle new features. The purpose of the new documents that will be developed by the PDF TWG is to help developers develop a common understanding of the new specification as well as best practices for implementation.” We’re very happy that Martin and Peter have agreed to lead this effort.

Martin Bailey is the primary UK expert to the ISO committees working on PDF, PDF/X and PDF/VT. In 2017 Global Graphics Software hosted two PDF 2.0 interoperability workshops on behalf of the PDF Association to provide a way for PDF tool developers to validate their work against the new ISO 32000-2 (PDF 2.0) standard by working with vendors of other tools.

The healthy buzz of conversation at PDF 2.0 interops

Last week was the first PDF 2.0 interop event in Cambridge, UK, hosted by Global Graphics on behalf of the PDF Association. The interop was an opportunity for developers from various companies working on their support for PDF 2.0 to get together and share sample files, and to process them in their own solutions. If a sample file from one vendor isn’t read correctly by a product from another vendor the developers can then figure out why, and fix either the creation tool or the consumer, or even both, depending on the exact reason for that failure.

When we make our own PDF sample files to test the Harlequin RIP there’s always a risk that the developer making the file and the developer writing the code to consume it will make the same assumptions or misread the specification in the same way. That makes testing files created by another vendor invaluable, because it validates all of those assumptions and possible misinterpretations as well.

It’s pretty early in the PDF 2.0 process (the standard itself will probably be published later this month), which means that some vendors are not yet far enough through their own development cycles to get involved yet. But that actually makes this kind of event even more valuable for those who participate because there are no currently shipping products out there that we could just buy and make sample files with. And the last thing that any of us want to do as vendors is to find out about incompatibilities after our products are shipped and in our customers’ hands.

I can tell you that our testing and discussions at the interop in Cambridge were extremely useful in finding a few issues that our internal testing had not identified. We’re busy correcting those, and will be taking updated software to the next interop, in Boston, MA on June 12th and 13th.

If you’re a Harlequin OEM or member of the Harlequin Partner Network you can also get access to our PDF 2.0 preview code to test against your own or other partners’ products; just drop me a line. If you’re using Harlequin in production I’m afraid you’ll have to wait until we release our next major version!

If you’re a software vendor with products that consume or create PDF and you’re already working on your PDF 2.0 support I’d heartily recommend registering for the June interop. I don’t know of any more efficient way to identify defects in your implementation so you can fix them before your customers even see them. Visit https://www.pdfa.org/event/pdf-interoperability-workshop-north-america/ to get started.

And if you’re a PDF software vendor and you’re not working on PDF 2.0 yet … time to start your planning!

About the author

Martin Bailey, consultant and former CTO, Global Graphics Software

Martin Bailey, consultant at Global Graphics Software, is a former CTO of the company and currently the primary UK expert to the ISO committees maintaining and developing PDF and PDF/VT. He is the author of Full Speed Ahead: how to make variable data PDF files that won’t slow your digital press, a guide offering advice to anyone with a stake in variable data printing including graphic designers, print buyers, composition developers and users.


Channelling how many spot colors?!!

Martin Bailey, CTO, Global Graphics Software

Recently my wife came home from a local sewing shop proudly waving a large piece of material, which turned out to be a “swatch book” for quilting fabrics. She now has it pinned up on the wall of her hobby room.

It made me wonder how many separations or spot colors I’d ever seen in a single job myself … ignoring jobs specifically designed as swatches.

I think my personal experience probably tops out at around 18 colors, which was for a design guide for a fuel company’s forecourts after a major redesign of their branding. It was a bit like a US banknote: lots of colors, but most of them green!

But I do occasionally hear about cases where a print company or converter, especially in packaging, is looking to buy a new digital press. I’m told it’s common for them to impose together all of their most challenging jobs on the grounds that if the new press (or rather, the DFE on the new press) can handle that, then they can be confident that it’ll handle any of the jobs they receive individually. Of course, if you gang together multiple unrelated jobs, each of which uses multiple spot colors, then you can end up with quite a few different ones on the whole sheet.

“Why does this matter?” I hear you ask.

It would be easy to assume that a request for a spot color in the incoming PDF file for a job is very ephemeral; that it’s immediately converted into an appropriate set of process colors to emulate that spot on the press. Several years ago, in the time of PostScript, and for PDF prior to version 1.4, you could do that. But the advent of live transparency in PDF made things a bit harder. If you naïvely transform spots to process builds as soon as you see them, and if the spot colored object is involved in any transparency blending, then you’ll get a result that looks very different to the same job being printed on a press that actually has an ink for that spot color. In other words, prints from your digital press might not match a print from a flexo press, which is definitely not a good place to be!

So in practice, the RIP needs to retain the spot as a spot until all of the transparency blending and composition has been done, and can only merge it into the process separations afterwards. And that goes for all of the spots in the job, however many of them there are.
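
A toy numerical example shows why the order matters. Assume a hypothetical spot ink “DemoBlue” with an invented, deliberately nonlinear tint transform, and a simplified Multiply-style blend applied directly to ink tints (this is not the full PDF compositing model):

```python
def demoblue_to_cmyk(t: float) -> tuple[float, ...]:
    # Invented tint transform for the hypothetical spot "DemoBlue"; the magenta
    # component ramps in faster than linearly, as real ink emulations often do.
    return (0.9 * t, 0.6 * t ** 2, 0.0, 0.05 * t)

def blend(a: float, b: float) -> float:
    # Simplified Multiply-style blend on ink tints (0 = no ink, 1 = solid).
    return a + b - a * b

# Keep the spot live through compositing and convert once, at the end:
late = demoblue_to_cmyk(blend(0.5, 0.5))
# -> (0.675, 0.3375, 0.0, 0.0375)

# Naively convert each object first, then composite channel by channel:
early = tuple(blend(a, b) for a, b in zip(demoblue_to_cmyk(0.5), demoblue_to_cmyk(0.5)))
# -> (0.6975, 0.2775, 0.0, 0.049375)

# The results differ, so a RIP that converts early would not match a press
# that prints DemoBlue with a real ink.
```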

Although I was a bit dismissive of swatches above, those are also important. Who would want to buy a wide format printer, or a printer for textiles, or even one for packaging or labels, if they couldn’t provide swatches to their customers and those customers’ designers?

All of this really came into focus for me because, until recently, the Harlequin RIP could only manage 250 spots per page. That sounds like a lot, but it wasn’t enough for some of our customers. In response to their requests we’ve just delivered a new revision to our OEM partners that can handle a little over 8000 spots per page. I’m hoping that will be enough for a while!

If you decide to take that as a challenge, I’d love to see what you print with it!

Getting to know PDF 2.0: not only but also!

Are you ready for PDF 2.0? Register now for the PDF 2.0 interoperability workshops in the UK and USA.

In the middle of 2017 ISO 32000-2 will be published, defining PDF 2.0. It’s eight years since there’s been a revision to the standard. We’ve already covered the main changes affecting print in previous blog posts, and here Martin Bailey, the primary UK expert to the ISO committee developing PDF 2.0, gives a roundup of a few other changes to expect.

Security
The encryption algorithms included in previous versions of PDF have fallen behind current best practices in security, so PDF 2.0 adds AES-256 and states that all passwords used for AES-256 encryption must be encoded in Unicode.
A PDF 1.7 reader will almost certainly error and refuse to process any PDF files using the new AES-256 encryption.
Note that Adobe’s ExtensionLevel 3 to ISO 32000-1 defines a different AES-256 encryption algorithm, as used in Acrobat 9 (R=5). That implementation is now regarded as dangerously insecure and Adobe has deprecated it completely, to the extent that use of it is forbidden in PDF 2.0.
Deprecation and what this means in PDF!
PDF 2.0 has deprecated a number of implementation details and features that were defined in previous versions. In this context ‘deprecation’ means that tools writing PDF 2.0 are recommended not to include those features in a file; and that tools reading PDF 2.0 files are recommended to ignore those features if they find them.
Global Graphics has taken the deliberate decision not to ignore relevant deprecated items in PDF files that are submitted and happen to be identified as PDF 2.0. This is because it is quite likely that some files will be created using an older version of PDF and using those features. If those files are then pre-processed in some way before submitting to Harlequin (e.g. to impose or trap the files) the pre-processor may well tag them as now being PDF 2.0. It would not be appropriate in such cases to ignore anything in the PDF file simply because it is now tagged as PDF 2.0.
We expect most other PDF readers to take the same course, at least for the next few years.
And the rest…
PDF 2.0 header: It’s only a small thing, but a PDF reader must be prepared to encounter a value of 2.0 in the file header and as the value of the Version key in the Catalog.
PDF 1.7 readers will probably vary significantly in their handling of files marked as PDF 2.0. Some may error, others may warn that a future version of that product is required, while others may simply ignore the version completely.
Harlequin 11 reports “PDF Warning: Unexpected PDF version – 2.0” and then continues to process the job. Obviously that warning will disappear when we ship a new version that fully supports PDF 2.0.
UTF-8 text strings: Previous versions of PDF allowed certain strings in the file to be encoded in PDFDocEncoding or in 16-bit Unicode. PDF 2.0 adds support for UTF-8. Many PDF 1.7 readers will not recognise a UTF-8 string as such and will therefore treat it as using PDFDocEncoding, resulting in those strings being displayed as what looks like a random sequence of mainly accented characters.
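
For implementers, the decode logic a PDF 2.0 reader needs is straightforward. A minimal sketch, glossing over the handful of code points where PDFDocEncoding differs from Latin-1:

```python
def decode_pdf_text_string(raw: bytes) -> str:
    """Decode a PDF text string following the PDF 2.0 rules."""
    if raw.startswith(b"\xfe\xff"):
        return raw[2:].decode("utf-16-be")  # UTF-16BE with BOM (PDF 1.x and 2.0)
    if raw.startswith(b"\xef\xbb\xbf"):
        return raw[3:].decode("utf-8")      # UTF-8 with BOM (new in PDF 2.0)
    # Readers unaware of the UTF-8 case fall through to here and show the
    # "random accented characters" described above.
    return raw.decode("latin-1")            # stand-in for PDFDocEncoding
```
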
Print scaling: PDF 1.6 added a viewer preferences key that allowed a PDF file to specify the preferred scaling for use when printing it. This was primarily in support of engineering drawings. PDF 2.0 adds the ability to say that the nominated scaling should be enforced.
Document parts: The PDF/VT standard defines a structure of Document parts (commonly called DPart) that can be used to associate hierarchical metadata with ranges of pages within the document. In PDF/VT the purpose is to enable embedding of data to guide the application of different processing to each page range.
PDF 2.0 has added the Document parts structure into baseline PDF, although no associated semantics or required processing for that data have been defined.
It is anticipated that the new ISO standard on workflow control (ISO 21812, expected to be published around the end of 2017) will make use of the DPart structure, as will the next version of PDF/VT. The specification in PDF 2.0 is largely meaningless until such time as products are written to work with those new standards.


The background
The last few years have been pretty stable for PDF; PDF 1.7 was published in 2006, and the first ISO PDF standard (ISO 32000-1), published in 2008, was very similar to PDF 1.7. In the same way, PDF/X‑4 and PDF/X‑5, the most recent PDF/X standards, were both published in 2010, six years ago.
In the middle of 2017 ISO 32000-2 will be published, defining PDF 2.0. Much of the new work in this version is related to tagging for content re-use and accessibility, but there are also several areas that affect print production. Among them are some changes to the rendering of PDF transparency, ways to include additional data about spot colors and about how color management should be applied.

Getting to know PDF 2.0: halftones

Are you ready for PDF 2.0? Register now for the PDF 2.0 interoperability workshops in the UK and USA.

Martin Bailey, CTO, Global Graphics Software

In the middle of 2017 ISO 32000-2 will be published, defining PDF 2.0. It’s eight years since there’s been a revision to the standard. In his next blog post about the changes afoot, Martin Bailey, the primary UK expert to the ISO committee developing PDF 2.0, looks at halftones, an area where the new specification will offer significant benefits for flexo jobs.

Lists of spot functions in halftones
PDF allows a PDF file to specify the halftone to be used for screening output in a variety of ways. The simplest is to identify a spot function by name, but that method was constrained in versions of the PDF standard up to PDF 1.7 to use only names that were explicitly listed in the specification itself. This has been a significant limitation in print sectors where custom halftones are common, such as flexography, gravure … and pretty much everywhere apart from offset plate-making!

PDF 2.0 allows the PDF file to specify the halftone dot shape as a list of spot function names, and those names no longer need to be picked from the ones specified in the standard. The renderer should use the first named spot function in the list that it supports. This allows a single file to be created that can be used in a variety of RIPs that support different sets of proprietary halftones and to select the best one available in each RIP for that specific object.

This functionality is expected to be used mainly for high-quality flexo press work, where it’s a key part of the workflow to specify which halftone should be used for each graphical element.
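
In PDF terms the change is simply that the Type 1 halftone dictionary’s SpotFunction entry may now be an array of names rather than a single name. A sketch using the open-source pikepdf library, with “FlexoRound1” standing in for a hypothetical proprietary screen name:

```python
import pikepdf

# Equivalent PDF syntax:
#   << /Type /Halftone /HalftoneType 1 /Frequency 120 /Angle 45
#      /SpotFunction [ /FlexoRound1 /Euclidean ] >>
halftone = pikepdf.Dictionary(
    Type=pikepdf.Name("/Halftone"),
    HalftoneType=1,
    Frequency=120,
    Angle=45,
    SpotFunction=pikepdf.Array([
        pikepdf.Name("/FlexoRound1"),  # hypothetical vendor screen, tried first
        pikepdf.Name("/Euclidean"),    # standard PDF 1.x spot function as fallback
    ]),
)
```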

A PDF 1.7 reader will probably either error or completely ignore the screening information embedded in the PDF if a file using the new list form is encountered. In the flexo space that could easily cause problems on-press, so take care that you’ve upgraded your RIPs before you start to try rendering PDF files using this new capability.

Halftone Origin (HTO)
Very old versions of PDF (up to PDF 1.3) included a partial definition of an entry named HTP, which was intended to allow the location of the origin or phase of a halftone to be specified. That entry was unfortunately useless because it did not specify the coordinate system to apply and it was removed many years ago.

PDF 2.0 adds a new entry called HTO to achieve the same goal, but this time fully specified. The use case is anywhere where precise specification of the halftone phase is valuable. Examples include pre-imposed sheets for VLF plate-setters, where specifying the halftone phase for each imposed page can reduce the misalignment of halftones that can occur over very long distances, or setting the halftone phase of each of a set of step-and-repeat labels to ensure that the halftone dots are placed in exactly the same position relative to the design in each instance.

A PDF 1.7 reader will simply ignore the new key, so there’s no danger of new files causing problems in an older workflow. On the other hand, those older RIPs will render as they always have, which would be a missed opportunity for the target use cases.

Halftone selection in transparent areas
Up to PDF 1.7 there was a requirement to apply the “default halftone” in all areas where transparency compositing had been applied. This was problematic for those print technologies where different halftones must be used for different object types to achieve maximum quality, e.g. for flexo. Transparency is used in these jobs most commonly for drop shadows, so that’s where you’re most likely to have encountered problems.

PDF 2.0 effectively gives complete freedom to renderers to apply the supplied screening parameters in whatever way they see fit, but two example implementations are provided to encourage similarity between implementations. One of those matches the requirements from PDF 1.7, while the other applies the screen defined for the top-most graphical element in areas where transparency was applied. The second one means that the screening selected for the drop shadow will now be used, matching requirements for the flexo market.

The background
The last few years have been pretty stable for PDF; PDF 1.7 was published in 2006, and the first ISO PDF standard (ISO 32000-1), published in 2008, was very similar to PDF 1.7. In the same way, PDF/X‑4 and PDF/X‑5, the most recent PDF/X standards, were both published in 2010, six years ago.

In the middle of 2017 ISO 32000-2 will be published, defining PDF 2.0. Much of the new work in this version is related to tagging for content re-use and accessibility, but there are also several areas that affect print production. Among them are some changes to the rendering of PDF transparency, ways to include additional data about spot colors and about how color management should be applied.