Tag Archives: digital printing

Channelling how many spot colors?!!

Recently my wife came home from a local sewing shop proudly waving a large piece of material, which turned out to be a “swatch book” for quilting fabrics. She now has it pinned up on the wall of her hobby room.

It made me wonder how many separations or spot colors I’d ever seen in a single job myself … ignoring jobs specifically designed as swatches.

I think my personal experience probably tops out at around 18 colors, which was for a design guide for a fuel company’s forecourts after a major redesign of their branding. It was a bit like a US banknote: lots of colors, but most of them green!

But I do occasionally hear about cases where a print company or converter, especially in packaging, is looking to buy a new digital press. I’m told it’s common for them to impose together all of their most challenging jobs on the grounds that if the new press (or rather, the DFE on the new press) can handle that, then they can be confident that it’ll handle any of the jobs they receive individually. Of course, if you gang together multiple unrelated jobs, each of which uses multiple spot colors, then you can end up with quite a few different ones on the whole sheet.

“Why does this matter?” I hear you ask.

It would be easy to assume that a request for a spot color in the incoming PDF file for a job is very ephemeral; that it’s immediately converted into an appropriate set of process colors to emulate that spot on the press. Several years ago, in the time of PostScript, and for PDF prior to version 1.4, you could do that. But the advent of live transparency in PDF made things a bit harder. If you naïvely transform spots to process builds as soon as you see them, and if the spot colored object is involved in any transparency blending, then you’ll get a result that looks very different to the same job being printed on a press that actually has an ink for that spot color. In other words, prints from your digital press might not match a print from a flexo press, which is definitely not a good place to be!

So in practice, the RIP needs to retain the spot as a spot until all of the transparency blending and composition has been done, and can only merge it into the process separations afterwards. And that goes for all of the spots in the job, however many of them there are.
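To make that concrete, here’s a toy calculation in Python (the spot’s CMYK build and the tints are invented numbers, and this is an illustration of the principle rather than anything like Harlequin’s internals). Blending two tints of the same spot ink in the spot channel and converting the composited result gives a different CMYK value from converting each object first and blending the process channels:

```python
# Toy illustration: converting a spot to CMYK before transparency blending
# gives a different result from blending in the spot channel and converting
# afterwards. The spot's CMYK build and the tints are invented values.

def spot_to_cmyk(tint):
    """Emulate the spot ink as a fixed CMYK build, scaled by tint."""
    base = (0.9, 0.0, 0.8, 0.1)  # hypothetical process build for the spot
    return tuple(c * tint for c in base)

def multiply(backdrop, source):
    """PDF 'Multiply' blend on subtractive (ink) values: 1-(1-b)(1-s)."""
    return tuple(1 - (1 - b) * (1 - s) for b, s in zip(backdrop, source))

backdrop_tint, source_tint = 0.5, 0.5  # two overlapping 50% spot objects

# Naive: convert each object to a process build first, then blend per channel.
naive = multiply(spot_to_cmyk(backdrop_tint), spot_to_cmyk(source_tint))

# Deferred: blend in the spot channel, convert only the composited result,
# which is what a press with a real ink for this spot would produce.
spot_result = multiply((backdrop_tint,), (source_tint,))[0]  # 0.75
deferred = spot_to_cmyk(spot_result)

print(tuple(round(v, 4) for v in naive))     # (0.6975, 0.0, 0.64, 0.0975)
print(tuple(round(v, 4) for v in deferred))  # (0.675, 0.0, 0.6, 0.075)
```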

Although I was a bit dismissive of swatches above, those are also important. Who would want to buy a wide format printer, or a printer for textiles, or even for packaging or labels, if they couldn’t provide swatches to their customers and to those customers’ designers?

All of this really came into focus for me because, until recently, the Harlequin RIP could only manage 250 spots per page. That sounds like a lot, but it wasn’t enough for some of our customers. In response to their requests we’ve just delivered a new revision to our OEM partners that can handle a little over 8000 spots per page. I’m hoping that will be enough for a while!

If you decide to take that as a challenge, I’d love to see what you print with it!

Getting to know PDF 2.0: not only but also!

Are you ready for PDF 2.0? Register now for the PDF 2.0 interoperability workshops in the UK and USA.

In the middle of 2017 ISO 32000-2 will be published, defining PDF 2.0.  It’s eight years since there’s been a revision to the standard. We’ve already covered the main changes affecting print in previous blog posts and here Martin Bailey, the primary UK expert to the ISO committee developing PDF 2.0, gives a roundup of a few other changes to expect.

Security
The encryption algorithms included in previous versions of PDF have fallen behind current best practices in security, so PDF 2.0 adds AES-256 and states that all passwords used for AES-256 encryption must be encoded in Unicode.
A PDF 1.7 reader will almost certainly error and refuse to process any PDF files using the new AES-256 encryption.
Note that Adobe’s ExtensionLevel 3 to ISO 32000-1 defines a different AES-256 encryption algorithm, as used in Acrobat 9 (R=5). That implementation is now regarded as dangerously insecure and Adobe has deprecated it completely, to the extent that use of it is forbidden in PDF 2.0.
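If you want to experiment, the open-source pikepdf library already supports the newer scheme. A minimal sketch (filenames and passwords are placeholders):

```python
# A minimal sketch of writing a file with the AES-256 scheme PDF 2.0
# standardises (security handler revision 6), using the open-source pikepdf
# library. Filenames and passwords are placeholders.
import pikepdf

with pikepdf.open("input.pdf") as pdf:
    pdf.save(
        "encrypted.pdf",
        encryption=pikepdf.Encryption(
            owner="owner-password",  # passwords are Unicode strings, as PDF 2.0 requires
            user="user-password",
            R=6,                     # revision 6 = AES-256 as specified in PDF 2.0
        ),
    )
```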
Deprecation and what this means in PDF!
PDF 2.0 has deprecated a number of implementation details and features that were defined in previous versions. In this context ‘deprecation’ means that tools writing PDF 2.0 are recommended not to include those features in a file; and that tools reading PDF 2.0 files are recommended to ignore those features if they find them.
Global Graphics has taken the deliberate decision not to ignore relevant deprecated items in PDF files that are submitted and happen to be identified as PDF 2.0. This is because it is quite likely that some files will be created using an older version of PDF and using those features. If those files are then pre-processed in some way before submitting to Harlequin (e.g. to impose or trap the files) the pre-processor may well tag them as now being PDF 2.0. It would not be appropriate in such cases to ignore anything in the PDF file simply because it is now tagged as PDF 2.0.
We expect most other PDF readers to take the same course, at least for the next few years.
And the rest…
PDF 2.0 header: It’s only a small thing, but a PDF reader must be prepared to encounter a value of 2.0 in the file header and as the value of the Version key in the Catalog.
PDF 1.7 readers will probably vary significantly in their handling of files marked as PDF 2.0. Some may error, others may warn that a future version of that product is required, while others may simply ignore the version completely.
Harlequin 11 reports “PDF Warning: Unexpected PDF version – 2.0” and then continues to process the job. Obviously that warning will disappear when we ship a new version that fully supports PDF 2.0.
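As a sketch of what “being prepared” involves, a reader has to look in two places: the %PDF-x.y header and the optional Version key in the Catalog, which is intended to supersede the header. Something like this, using pikepdf for the catalog lookup (purely illustrative):

```python
# Illustrative only: report a file's PDF version from the %PDF-x.y header,
# letting the optional /Version key in the document catalog supersede it.
import re
import pikepdf

def pdf_version(path):
    with open(path, "rb") as f:
        m = re.match(rb"%PDF-(\d+\.\d+)", f.read(16))
    header = m.group(1).decode() if m else None
    with pikepdf.open(path) as pdf:
        catalog = pdf.Root
        # /Version is a name object such as /2.0; strip the leading slash
        override = str(catalog.Version)[1:] if "/Version" in catalog else None
    return override or header

print(pdf_version("job.pdf"))  # e.g. "2.0"
```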
UTF-8 text strings: Previous versions of PDF allowed certain strings in the file to be encoded in PDFDocEncoding or in 16-bit Unicode. PDF 2.0 adds support for UTF-8. Many PDF 1.7 readers may not recognise a UTF-8 string as UTF-8 and will therefore treat it as using PDFDocEncoding, resulting in those strings being displayed as what looks like a random sequence of mainly accented characters.
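The disambiguation relies on byte-order marks: UTF-16 strings begin with FE FF, and PDF 2.0 UTF-8 strings begin with the three-byte UTF-8 BOM, EF BB BF. Here’s a simplified sketch of a 2.0-aware decoder; note that the PDFDocEncoding fallback is approximated by Latin-1, which differs in a few code points:

```python
# Sketch of a PDF 2.0-aware text-string decoder. UTF-16 strings begin with
# the BOM FE FF; PDF 2.0 UTF-8 strings begin with the BOM EF BB BF; anything
# else is PDFDocEncoding (approximated here by Latin-1, which differs from
# real PDFDocEncoding in a few code points).

def decode_text_string(raw: bytes) -> str:
    if raw.startswith(b"\xfe\xff"):
        return raw[2:].decode("utf-16-be")
    if raw.startswith(b"\xef\xbb\xbf"):  # new in PDF 2.0
        return raw[3:].decode("utf-8")
    return raw.decode("latin-1")         # stand-in for PDFDocEncoding

# A PDF 1.7 reader without the UTF-8 branch falls through to the last line
# and shows the "accented garbage" described above.
print(decode_text_string(b"\xef\xbb\xbfD\xc3\xbcsseldorf"))  # -> "Düsseldorf"
```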
Print scaling: PDF 1.6 added a viewer preferences key that allowed a PDF file to specify the preferred scaling for use when printing it. This was primarily in support of engineering drawings. PDF 2.0 adds the ability to say that the nominated scaling should be enforced.
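In concrete terms, the PDF 1.6 preference is the PrintScaling key in the ViewerPreferences dictionary, and PDF 2.0 adds an Enforce array naming the preferences that must be honoured. A sketch using pikepdf, based on my reading of the key names:

```python
# Sketch (via pikepdf): /PrintScaling /None asks the viewer not to scale on
# printing, and the new PDF 2.0 /Enforce array marks that preference as
# mandatory rather than advisory. Filenames are placeholders.
import pikepdf

with pikepdf.open("drawing.pdf") as pdf:
    pdf.Root.ViewerPreferences = pikepdf.Dictionary(
        PrintScaling=pikepdf.Name("/None"),
        Enforce=pikepdf.Array([pikepdf.Name("/PrintScaling")]),
    )
    pdf.save("drawing-enforced.pdf")
```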
Document parts: The PDF/VT standard defines a structure of Document parts (commonly called DPart) that can be used to associate hierarchical metadata with ranges of pages within the document. In PDF/VT the purpose is to enable embedding of data to guide the application of different processing to each page range.
PDF 2.0 has added the Document parts structure into baseline PDF, although no associated semantics or required processing for that data have been defined.
It is anticipated that the new ISO standard on workflow control (ISO 21812, expected to be published around the end of 2017) will make use of the DPart structure, as will the next version of PDF/VT. The specification in PDF 2.0 is largely meaningless until such time as products are written to work with those new standards.
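For orientation, here’s a purely schematic outline of the hierarchy, with key names as defined in PDF/VT (ISO 16612-2), shown as plain Python dicts rather than real PDF objects:

```python
# Schematic outline of a DPart hierarchy, shown as plain Python dicts rather
# than real PDF objects. /Start and /End stand in for indirect references to
# the first and last page objects of each range; /DPM holds arbitrary
# metadata for that part.
dpart_root = {
    "/DPartRootNode": {          # root node of the document-part tree
        "/DParts": [[
            {
                "/Start": "<ref to page 1>",
                "/End": "<ref to page 4>",
                "/DPM": {"/Recipient": "A. Smith"},
            },
            {
                "/Start": "<ref to page 5>",
                "/End": "<ref to page 6>",
                "/DPM": {"/Recipient": "B. Jones"},
            },
        ]],
    },
}
print(dpart_root)
```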


The background
The last few years have been pretty stable for PDF; PDF 1.7 was published in 2006, and the first ISO PDF standard (ISO 32000-1), published in 2008, was very similar to PDF 1.7. In the same way, PDF/X‑4 and PDF/X‑5, the most recent PDF/X standards, were both published in 2010, six years ago.
In the middle of 2017 ISO 32000-2 will be published, defining PDF 2.0. Much of the new work in this version is related to tagging for content re-use and accessibility, but there are also several areas that affect print production. Among them are some changes to the rendering of PDF transparency, ways to include additional data about spot colors and about how color management should be applied.

Getting to know PDF 2.0 – update from Down Under

Are you ready for PDF 2.0? Register now for the PDF 2.0 interoperability workshops in the UK and USA.

Martin Bailey, CTO, Global Graphics Software

I’ve been in the ISO PDF committee meeting in Sydney, Australia for a couple of days this week to review the comments submitted to the most recent ballot on PDF 2.0. Over 100 comments were received, including some complex issues around digital signatures, structure tagging (especially lists), optional content, document parts and soft masks. In all cases the committee was able to reach a consensus on what should be done for PDF 2.0.

The plan is now for one more ballot, the responses to which will be reviewed in Q2 next year, with an expectation that final text for PDF 2.0 will be delivered to ISO for publication shortly thereafter.

So we’re still on track for publication next year.

All of which means that it’s past time that a couple of PDF’s unsung heroes were acknowledged. The project leaders for PDF 2.0 have invested very substantial amounts of time and mental energy updating text in response to comments and ballots over the last several years. When somebody like me requests a change it’s the project leaders who help to double-check that every last implication of that change is explored to ensure that we don’t have any inconsistency.

So a big thank you to Duff Johnson of the PDF Association and Peter Wyatt of CISRA (Canon)!

It’s also worth noting that one of the significant improvements in PDF 2.0 that probably won’t get highlighted elsewhere is that the text is now much more consistent. When you’re writing a detailed technical document 1000 pages long it’s inevitable that some disconnections between different sections will creep in. PDF 2.0 illustrates the value of a broad group of people from many countries and many industries reviewing text in the ISO process: we’ve managed to stamp out many of those cases in this new version.

Getting to know PDF 2.0: rendering PDF transparency

Are you ready for PDF 2.0? Register now for the PDF 2.0 interoperability workshops in the UK and USA.


In the middle of 2017 ISO 32000-2 will be published, defining PDF 2.0.  It’s eight years since there’s been a revision to the standard. In the second of a series of blog posts Martin Bailey, the primary UK expert to the ISO committee developing PDF 2.0, looks at the changes to rendering PDF transparency for print.
These changes are all driven by what we’ve learned in the last few years about where the previous PDF standards could trip people up in real-world jobs.
Inheritance of transparency color spaces
Under certain circumstances a RIP will now automatically apply a color-managed (CIEBased) color space when a device color space (such as DeviceCMYK) is used in a transparent object. It will do that by inheriting it from a containing Form XObject or the current page.
That sounds very technical, but the bottom line is that it will now be much easier to get the correct color when imposing multiple PDF files from different sources together. That’s especially the case when you’re imposing PDF/X files that use different profiles in their output intents, even though they may all be intended for the same target printing condition. The obvious examples of this kind of use case are placing display advertising for publications, and gang-printing.
We’ve tried hard to minimize the impact on existing workflows in making these improvements, but there will inevitably be some cases where a PDF 2.0 workflow produces different results from at least some existing solutions, and this is one case where that could happen. But we believe that the kinds of construct where PDF 2.0 will produce different output are very uncommon in PDF files, except where it provides a benefit by allowing a much closer color match to the designer’s or advertiser’s goal than could easily be achieved before.
Clarifications on when object colors must be transformed to the blend color space
The ISO PDF 1.7 standard and all previous PDF specifications were somewhat vague about exactly when the color space of a graphical object involved with PDF transparency needed to be transformed into the blending color space. The uncertainty meant that implementations from different vendors could (and sometimes did) produce very different results.
Those statements have been greatly clarified in PDF 2.0.
This is another area where an upgrade to a PDF 2.0 workflow may mean that your jobs render slightly differently … but the up-side is that if you run pre-press systems or digital presses from multiple vendors they should now all be more similar to each other.
As a note to Harlequin RIP users, the new rules are in line with the way that Harlequin has always behaved; in other words, you won’t see any changes in this area when you upgrade.
ColorDodge & ColorBurn
It tends to be taken for granted that the older PDF specifications must match what Adobe® Acrobat® does, but that’s not always correct. As an example, the formulae for the ColorDodge and ColorBurn transparency blending modes in the PDF specification have never matched the implementation in Acrobat. In pursuit of compatibility Harlequin was changed to match Acrobat rather than the specification many years ago. In PDF 2.0 the standard is finally catching up with reality and now both Acrobat and Harlequin will be formally ‘correct’!
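For the curious, here are the two readings of the ColorDodge formula as I understand them: the ISO 32000-1 text versus the edge-case handling that Acrobat (and Harlequin) actually implement, which is what PDF 2.0 now specifies:

```python
# The two readings of ColorDodge. Cb = backdrop, Cs = source, both in [0, 1];
# ColorBurn has analogous special cases at Cb = 1 / Cs = 0.

def color_dodge_pdf17(cb, cs):
    # ISO 32000-1 text: the result is 1 whenever Cs = 1, even over black
    return min(1.0, cb / (1.0 - cs)) if cs < 1.0 else 1.0

def color_dodge_pdf20(cb, cs):
    # PDF 2.0 (matching Acrobat): a black backdrop stays black
    if cb == 0.0:
        return 0.0
    if cs == 1.0:
        return 1.0
    return min(1.0, cb / (1.0 - cs))

# The formulae only disagree at the extremes:
print(color_dodge_pdf17(0.0, 1.0))  # 1.0 per the old text
print(color_dodge_pdf20(0.0, 1.0))  # 0.0, matching Acrobat and Harlequin
```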
The background
The last few years have been pretty stable for PDF; PDF 1.7 was published in 2006, and the first ISO PDF standard (ISO 32000-1), published in 2008, was very similar to PDF 1.7. In the same way, PDF/X‑4 and PDF/X‑5, the most recent PDF/X standards, were both published in 2010, six years ago.
In the middle of 2017 ISO 32000-2 will be published, defining PDF 2.0. Much of the new work in this version is related to tagging for content re-use and accessibility, but there are also several areas that affect print production. Among them are some changes to the rendering of PDF transparency, ways to include additional data about spot colors and about how color management should be applied.

Sign up to the Global Graphics newsletter here for regular updates.

Martin Bailey, CTO, Global Graphics Software

Perceived resolution – the Q Factor!

There’s been a lot of emphasis in the industry recently on perceived resolution. I’m sure you will have come across the phrase in claims from major vendors:

“The Xerox Rialto 900 (…) offers 1,000 dpi perceived resolution for high quality output.”

Océ VarioPrint i300: “The multilevel dot modulation in combination with 600x600dpi resolution boosts the print quality of image elements and shadings to perceived 1200 dpi.”

But what is resolution anyway, and is it the only thing we need to worry about to ensure high quality output?

How we perceive resolution has changed over the years. For conventional print and first-generation digital presses (except for wide format), resolution was two dimensional (across and along the media). More recently, inkjet presses (and some toner presses) can place different amounts of colorant at each location on the substrate, using greyscale heads, multiple passes with the same head, or multiple heads imaging at the same location. This means that resolution has effectively become 3D: not only along and across the media, but also in the amount of colorant applied at any single pixel position.

At Global Graphics we call this “multi-level output”, compared to the “binary” output where each pixel can either be coloured or not, with no intermediate steps.

Resolution? Or addressability and droplet size?
As print geeks know well, press resolution has very little to do with resolving power; using the word ‘resolution’ for ‘addressability’ is really a marketing simplification – e.g. at 600 dpi, each addressable pixel is 1/600” from its neighbours. The detail that can actually be reproduced is a function of droplet size as well as addressability; as droplets get bigger, each one covers more than just a single (square!) pixel on the media, so less fine detail is retained.
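A back-of-envelope example makes the point; the drop volume and spread factor below are illustrative assumptions, not any vendor’s figures. A spherical drop of volume V has diameter (6V/π)^(1/3), and it spreads into a larger dot when it hits the media:

```python
# Back-of-envelope: how big is a drop compared to the addressable grid?
# Drop volume and spread factor are illustrative assumptions only.
import math

addressability_dpi = 600
pixel_pitch_um = 25400 / addressability_dpi  # ~42.3 um between pixel centres

drop_volume_pl = 30                          # assumed drop volume
drop_volume_um3 = drop_volume_pl * 1000      # 1 pl = 1000 cubic microns
sphere_diameter_um = (6 * drop_volume_um3 / math.pi) ** (1 / 3)  # ~38.5 um

spread_factor = 2.0                          # assumed spread on this media
dot_diameter_um = sphere_diameter_um * spread_factor             # ~77 um

print(f"pixel pitch: {pixel_pitch_um:.1f} um")
print(f"printed dot: {dot_diameter_um:.1f} um, "
      f"~{dot_diameter_um / pixel_pitch_um:.1f}x the pixel pitch")
```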

Droplet placement accuracy also comes into play. In a perfect world we would have a regular grid of droplets, but in practice we don’t usually get one. The variation in separation between droplets can lead to coalescing, mottling or streaking on some substrates, especially on UV inkjet presses, but it can occur on aqueous as well.


Addressability and droplet size affect the rendering of small type and other high-contrast fine detail. Droplet placement accuracy affects the texture of the final print. So we still don’t have a clear metric for “perceived resolution” …

What about resolution and bit depth?
Multi-level output can produce smoother rendering of images and other graphics with gradual tone or colour changes than binary output at the same resolution can achieve.


Multi-level output, shown left, can produce smoother rendering of images than binary output, shown right, at the same resolution.
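A toy quantisation example shows where the extra smoothness comes from (deliberately simplistic, and nothing like Global Graphics’ actual screening, which also dithers between adjacent levels):

```python
# Quantise a smooth tone ramp to 2 levels (binary) and to 4 levels (as a
# greyscale head with three drop sizes might). More levels per pixel means
# finer tone steps before halftone dithering is even applied.
ramp = [i / 15 for i in range(16)]  # smooth input tones from 0 to 1

binary     = sorted({round(t * 1) / 1 for t in ramp})  # 2 output levels
multilevel = sorted({round(t * 3) / 3 for t in ramp})  # 4 output levels

print(binary)      # [0.0, 1.0]
print(multilevel)  # [0.0, 0.333…, 0.666…, 1.0]
```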

But nozzle redundancy is also vital: in a single-pass press with a page-wide array, a single blocked nozzle will leave a white line down the substrate unless something is built in to fix that, such as nozzle redundancy. And that redundancy must use up some of the press’s capability to use multiple nozzles in the same location for multi-level output, so 1200 dpi worth of nozzles often doesn’t mean 1200 dpi addressability on the substrate.

And sometimes each nozzle can only deliver one droplet size; sometimes it can deliver a variety of sizes.

So what’s the real quality that these presses are capable of? We need a lot of information to really understand what’s going on: dpi across and along the media, number of nozzles imaging any single pixel, droplet sizes available from each nozzle, proportion of nozzles used for redundancy … I don’t think I’ve ever seen a press vendor’s public specification that gives us all the information we want.

Can we even say, simplistically, that higher resolution and bit depth are good? If everything else is equal then yes, in many cases, except that you can push either too far. On an aqueous inkjet, higher resolutions really need smaller highlight droplets; smaller lone droplets tend to disappear into some media and can lead to loss of extreme highlights on the output. Interestingly you end up with output that looks remarkably close to the way flexo loses those same highlights!

And you also need to remember that higher addressability means higher computational requirements, and more computation means more expensive DFEs, higher running costs, maybe even less green … (a faster RIP can offset this, of course!) It also makes the press more expensive, and harder to run as fast.

And what’s the impact on quality?
There are factors other than bit depth, addressability, droplet size and placement which affect the final result, for example:

  • Items affecting ink spread or movement on the substrate, such as paper smoothness, absorbency, coatings, ink viscosity and surface tension;
  • Movement of the colorant into the substrate, reducing the capability of showing very small detail or saturated colours;
  • Registration;
  • Halftone screening;
  • Colour management, including ink limitation and reduction.

So the ‘virtual’, mathematical discussion of resolution and droplet size is certainly not the only factor in determining the quality of output. Quality arises from a complex mix of heads, electronics, waveforms, inks, media, resolution, registration, bit depth, half-toning and so on. We don’t have a good way to provide a single, understandable quality metric to sum it all up. ISO DTS 15311-1 is defining testing and reporting methodologies in this area, although it still doesn’t provide a simple quality metric.

So what’s the answer?
We just don’t have a single number that sums up the quality capability of a digital press at the moment. But then simply reporting ‘resolution’ has never really fulfilled that role in the past for binary systems, from imagesetters to platesetters to office printers … to digital cameras. So perhaps we shouldn’t be too disappointed.

What should you do when a vendor reports “perceived resolution”? I’d suggest that you take it as an indication of the level in the marketplace that the vendor is intending to address … and then draw your own conclusions based on print samples.

If you’re looking to buy a press, have the vendor:

  • Print samples on the media and at the speed that you expect to use
  • Use a variety of graphical constructs to explore press behaviour:
      • Flat tints at a range of tones and colours
      • Smooth graduations, including some long ones all the way to white
      • Photographic images, including high and low key, soft-focus and sharp detail
      • Fine vector detail such as small serif and sans serif text

If you’re already running a press do the same. Each technology has different strengths and weaknesses; you may even need multiple presses to address all work in your particular target sector. The key thing is to understand what your presses are good at, and what to avoid, and then to work with your customers to achieve the best possible result … and to set expectations appropriately in advance.

If you’re a press vendor, talk to us about how Global Graphics’ multi-level screening technologies can maximise the quality and the value of your hardware.

Read about our latest advances in screening, presented at the Inkjet Conference, October 2015.