Choosing the class of your raster image processor (RIP) – Part II

Part II: Factors influencing your choice of integration

If you’re in the process of building a digital front end for your press, you’ll need to consider how much RIPing power you need for the capabilities of the press and the kinds of jobs that will be run on it. The RIP converts text and image data from many file formats including PDF, TIFF™ or JPEG into a format that a printing device such as an inkjet print head, toner marking engine or laser plate-setter can understand. But how do you know what RIP is best for you and what solution can best deliver maximum throughput on your output device? In this second post, Global Graphics Software’s CTO, Martin Bailey, discusses the factors to consider when choosing a RIP.

In my last post I gave a pointer to a spreadsheet that can be used to calculate the data rate required for a digital press. This single number can be used to make a first approximation of which class of RIP integration you should be considering.

For integrations based on the Harlequin RIP®, reasonable guidelines are:

  • Up to 250MB/s: can be done with a single RIP using multi-threading in that RIP
  • Up to 1GB/s: use multiple RIPs on a single server using the Harlequin Scalable RIP
  • Over 1GB/s: use multiple RIPs spread over multiple servers using the Harlequin Scalable RIP

These numbers indicate the data rate that the RIP needs to provide when every copy of the output is different. The value may need to be adjusted for other scenarios:

  • If you’re printing the same raster many times, the RIP data rate may be reduced in proportion; the RIP has 100 times as long to process a PDF page if you’re going to be printing 100 copies of it, for instance (see the sketch after this list).
  • If you’re printing variable data print jobs with significant re-use of graphical elements between copies, then Harlequin VariData™ can be used to accelerate processing. This effect is already factored into the recommendations above.
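As a rough illustration, those guidelines and the copies adjustment can be captured in a small helper. This is a sketch only: the thresholds are the ones quoted above, while the function name and units are our own for illustration.

```python
# A minimal sketch of the guidelines above; the thresholds are those quoted
# in this post, and `copies` applies the proportional reduction described
# for printing the same raster many times.

def rip_integration_class(data_rate_bytes_s: float, copies: int = 1) -> str:
    effective = data_rate_bytes_s / copies
    if effective <= 250e6:
        return "single multi-threaded RIP"
    if effective <= 1e9:
        return "multiple RIPs on one server (Harlequin Scalable RIP)"
    return "multiple RIPs across multiple servers (Harlequin Scalable RIP)"

# 1.5 GB/s of fully variable output needs multiple servers...
print(rip_integration_class(1.5e9))
# ...but a 100-copy run of the same rasters is far less demanding.
print(rip_integration_class(1.5e9, copies=100))
```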

The complexity of the jobs you’re rendering will also have an impact.

Transactional or industrial labelling jobs, for example, tend to be very simple, with virtually no live PDF transparency and relatively low image coverage. They are therefore typically fast to render. If your data rate calculation puts you just above a threshold in the list above, you may be able to take one step down to a simpler system.

On the other hand, jobs such as complex marketing designs or photobooks are very image-heavy and tend to use a lot of live transparency. If your data rate is just below a threshold on the list above, you will probably need to step up to a higher level of system.

Be careful when making those adjustments, however. If you do so you may have to choose between building and supporting multiple variations of your DFE to suit different classes of print site, or designing a single model of DFE that can cope with the needs of the great majority of your customers. Building a single model certainly reduces development, test and support costs, and may reduce your average bill of materials. But doing that also tends to mean that you will need to base your design on the raw, “every copy different”, data rate requirements, because somebody, somewhere will expect to be able to use your press to do just that.

Our experience has also been that the complexity of jobs in any particular sector is increasing over time, and the run lengths that people will want to print are shortening. Designing for current expectations may give you an under-powered solution in a few years’ time, maybe even by the time you ship your first digital press. Moore’s law, that computers will continue to deliver higher and higher performance at about the same price point, will cancel out some of that effect, but usually not all of it.

And if your next press will print with more inks, at a higher resolution, and at higher speed you may be surprised at how much impact that combination will have on the data rate requirements, and therefore possibly on the whole architecture of the Digital Front End to drive it.

And finally, the recommendations above implicitly assume that a suitable computer configuration is used. You won’t achieve 1GB/s output from multiple RIPs on a computer with a single, four-core CPU, for example. Key aspects of hardware affecting speed are: number of cores, CPU clock speed, disk space available, RAM available, disk read and write speed, bandwidth to memory, L2 and L3 cache sizes on the CPU, and (especially for multi-server configurations) network speed and bandwidth.

Fortunately, the latest version of the Harlequin RIP offers a framework that can help you to meet all these requirements. It offers a complete scale of solutions from a single RIP through multiple RIPs on a single server, up to multiple RIPs across multiple servers.

 

The above is an excerpt from our latest white paper: Scalable performance with the Harlequin RIP. Download the white paper here.

Read Part I – Calculating data rates here.

Choosing the class of your raster image processor (RIP) – Part I

Part I: How to calculate data rates

If you’re in the process of choosing or building a digital front end for your press, you’ll need to consider how much RIPing power you need for the capabilities of the press and the kinds of jobs that will be run on it. The RIP converts text and image data from many file formats including PDF, TIFF™ or JPEG into a format that a printing device such as an inkjet printhead, toner marking engine or laser platesetter can understand. But how do you know what RIP is best for you and what solution can best deliver maximum throughput on your output device? This is the first of two posts by Global Graphics Software’s CTO, Martin Bailey, where he advises how to size a solution for a digital press using the data rate required on the output side.

Over the years at Global Graphics Software, we’ve found that the best guidance we can give to our OEM partners in sizing digital press systems based on our own solution, the Harlequin RIP®, comes from a relatively simple calculation of the data rate required on the output side. And now we’re making a tool to calculate those data rates available to you. All you need to do is to download it from the web and to open it in Excel.

Download it here:  Global_Graphics_Software_Press_data_rates

You will, of course, also need the specifications of the press(es) that you want to calculate data rates for.

You can use the spreadsheet to calculate data rates based on pages per minute, web speed, sheets or square meters per minute or per hour, or on head frequency. Which is most appropriate for you depends on which market sector you’re selling your press into and where your focus is on the technical aspects of the press.

It calculates the data rate for delivering unscreened 8 bits per pixel (contone) rasters. This has proven to be a better metric for estimating RIP requirements than taking the bit depth of halftoned raster delivery into account. In practice Harlequin will run at about the same speed for 8-bit contone and for 1-bit halftone output because the extra work of halftoning is offset by the reduced volume of raster data to move around. Multi-level halftones delivered in 2-bit or 4-bit rasters take a little bit longer, but not enough to need to be considered here.
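To make the arithmetic concrete, here is a minimal sketch of the web-speed form of that calculation for contone output. The press parameters in the example are invented for illustration; in practice you would plug in your own press specifications.

```python
# A minimal sketch of the web-speed data-rate calculation for unscreened
# 8-bit contone output. The example press parameters are illustrative
# assumptions, not values taken from the spreadsheet.

def contone_data_rate(web_width_in, web_speed_m_min, dpi, inks,
                      bytes_per_pixel=1):
    """Return the contone raster data rate in bytes per second."""
    web_speed_in_s = web_speed_m_min / 0.0254 / 60  # metres/min -> inches/sec
    pixels_across = web_width_in * dpi              # pixels per raster line
    lines_per_s = web_speed_in_s * dpi              # raster lines per second
    return pixels_across * lines_per_s * inks * bytes_per_pixel

# A hypothetical 13.6" web, 600 x 600 dpi, CMYK, running at 120 m/min:
rate = contone_data_rate(13.6, 120, 600, 4)
print(f"{rate / 1e9:.2f} GB/s")  # ~1.54 GB/s
```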

You can also use the sheet-fed calculation for conventional print platesetters if you so desire. You might find it eye-opening to compare data rate requirements for an offset or flexo platesetter with those for a typical digital press!

Fortunately, the latest version of the Harlequin RIP offers a framework that can help you to meet all these requirements. It offers a complete scale of solutions from a single RIP through multiple RIPs on a single server, up to multiple RIPs across multiple servers.

In my next post I’ll share how the data rate number can be used to make a first approximation of which class of RIP integration you should be considering.

 

The above is an excerpt from our latest white paper: Scalable performance with the Harlequin RIP®. Download the white paper here

What does a RIP do?

Ever wondered what a raster image processor or RIP does? And what does RIPing a page mean? Read on to learn more about the phases of a RIP, the engine at the heart of your Digital Front End.

The RIP converts text and image data from many file formats including PDF, TIFF™ or JPEG into a format that a printing device such as an inkjet print head, toner marking engine or laser platesetter can understand. The process of RIPing a page requires several steps to be performed in order, regardless of whether that page is submitted as PostScript, PDF or any other page description language.

Interpreting: The page description language to be RIPed is read and decoded into an internal database of graphical elements that must be placed on the page. Each element may be an image, a character of text (including its font, size, color, etc.), a fill or stroke, and so on. This database is referred to as a display list.

Compositing: The display list is pre-processed to apply any live transparency that may be in the job. This phase is only required for pages in PDF and XPS jobs that use live transparency; it’s not required for PostScript language pages because those cannot include live transparency.

Rendering: The display list is processed to convert every graphical element into the appropriate pattern of pixels to form the output raster. The term ‘rendering’ is sometimes used specifically for this part of the overall processing, and sometimes to describe the whole of the RIPing process. It’s only used in the first sense in this document.

Output: The raster produced by the rendering process is sent to the marking engine in the output device, whether that’s exposing a plate, marking a drum with toner, driving an inkjet head or any other technology.

Sometimes this step is completely decoupled from the RIP, perhaps because plate images are stored as TIFF files and then sent to a CTP platesetter later, or because a near-line or off-line RIP is used for a digital press. In other environments the output stage is tightly coupled with rendering.
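The ordering of those phases can be summarised in a few lines of illustrative Python. Everything here is a stand-in; it is not the Harlequin RIP’s actual architecture or API.

```python
# A highly simplified sketch of the phase ordering described above; all
# names and data structures are illustrative, not the Harlequin API.

def interpret(pdl):
    """Interpreting: decode the PDL into a display list of graphical elements."""
    return [("text", "Hello"), ("fill", "rectangle")]  # stand-in display list

def composite(display_list):
    """Compositing: apply live transparency (PDF/XPS pages only)."""
    return display_list  # a no-op in this sketch

def render(display_list):
    """Rendering: convert every element into pixels in the output raster."""
    return bytes(len(display_list))  # stand-in raster

def output(raster):
    """Output: hand the raster to the marking engine, or store it as a file."""
    print(f"{len(raster)} bytes delivered")

def rip_page(pdl, has_live_transparency):
    display_list = interpret(pdl)
    if has_live_transparency:  # never true for PostScript pages
        display_list = composite(display_list)
    output(render(display_list))

rip_page("%PDF-1.7 ...", has_live_transparency=True)
```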

RIPing often includes a number of additional processes; in the Harlequin RIP® for example:

  • In-RIP imposition is performed during interpretation
  • Color management (Harlequin ColorPro®) and calibration are applied during interpretation or compositing, depending on configuration and job content
  • Screening is applied during rendering, or after the Harlequin RIP has delivered unscreened raster data if screening is being applied post-RIP; for example, when Global Graphics’ ScreenPro™ and PrintFlat™ technologies are being used.

These are all important processes in many print workflows.

 

The Harlequin Host Renderer
The Harlequin RIP includes native interpretation of PostScript, EPS, DCS, XPS, JPEG, BMP and TIFF as well as PDF, PDF/X and PDF/VT, so whatever workflows your target market uses, it gives accurate and predictable image output time after time.

 

The above is an excerpt from our latest white paper: Scalability with the Harlequin RIP®. Download the white paper here

Adjusting rendering of outlined text in Harlequin

By Martin Bailey, CTO, Global Graphics Software

In several sectors of the print market it is common practice to convert text to outlines upstream of a RIP, on the grounds that it’s then impossible for the wrong glyph to be printed. This is normal, for instance, in much of the label and packaging industry, especially when there is very robust regulation in place, such as in pharmaceuticals.

Every page description language defines “scan conversion” rules that specify which pixels should be marked when a graphic is painted onto a page; these build on the concept of “pixel touching”, specifying exactly when a vector shape counts as touching a pixel and therefore marking it.

When you’re using PDF (or PostScript, before that) the scan conversion rules are different for text specified using live fonts and for vector shapes. If you started with live text and then converted it to outlines, you have switched from the text scan conversion rules to the vector graphic rules. That has always meant that text converted to outlines tends to render slightly heavier than text using live fonts. And the smaller the text is, the more apparent the weight difference becomes.

FIG 1 – 2pt text in Times Roman showing various scan conversion rules.

In Fig 1 you can see this difference very clearly for very small Western text rendered at 2pt and 600dpi, still a common resolution for digital printers and presses. The top line shows text using live fonts, and the second line shows the PDF scan conversion rule for a vector fill. Note that at 2pt the RIP only has about 12 pixels for the height of an upper-case glyph.

In early 2018 we added a new scan conversion rule for vector fills alongside our pre-existing rules in the Harlequin RIP. The intention was to make it possible to emulate the much lighter output that Esko’s FlexRIPs produce. Unfortunately, it also tended to emulate the tendency for very fine structures, especially fine horizontal strokes in small text, to disappear. You can see this in the third row of text in Fig 1.

This is obviously not an optimal solution, so we continued our development, and have now extended the original solution with what is called “dropout control”. This prevents very fine sections of a vector fill from “dropping out” when they fall on the page in such a way that they don’t cross the sample locations within the pixels that would trigger anything being marked. You can see the effect of this in the bottom line in Fig 1.

Light rendering with dropout control was delivered to our OEM partners in late 2018 under the name RenderAccurate.
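A toy model shows why a lighter scan conversion rule can drop fine strokes, and what dropout control adds. This is purely illustrative; it is not the RenderAccurate algorithm.

```python
# A toy model of stroke dropout under a centre-sampling scan conversion
# rule, with a simple dropout-control fallback. For illustration only;
# not the actual RenderAccurate algorithm.

def marked_rows(y_top, y_bottom, rule):
    """Which pixel rows does a thin horizontal stroke y_top..y_bottom mark?"""
    candidates = range(int(y_top), int(y_bottom) + 2)
    # mark a row only if its centre (row + 0.5) falls inside the stroke
    marked = [r for r in candidates if y_top <= r + 0.5 < y_bottom]
    if rule == "dropout-control" and not marked:
        # guarantee at least one row so the stroke can never vanish entirely
        marked = [int((y_top + y_bottom) / 2)]
    return marked

# A 0.4-pixel-high stroke that straddles no pixel centre:
print(marked_rows(10.6, 11.0, "centre-sample"))    # [] -> stroke disappears
print(marked_rows(10.6, 11.0, "dropout-control"))  # [10] -> still marked
```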

Even this optimized output won’t exactly match the output of live fonts, because the fonts themselves often include hints to the rendering engine, designed to ensure maximum legibility and conformance to the font designer’s vision. These hints can, for instance, ensure that vertical stems are the same width in all glyphs, or that the curved base of a glyph will extend slightly below the baseline to make it visually balance with glyphs with flat bases that sit on the baseline. Those hints were discarded when the text was converted to an outline, and so can’t be used any more. But the new scan conversion algorithm certainly strikes a good balance between matching the weight of live fonts and maintaining legibility.

The effect is visible in very small text in Latin fonts, as shown in Fig 1, but the impact is often masked by the physical effects of printing. And Latin glyphs tend to be relatively simple, so that the human eye and brain are pretty good at filling in the missing segments without too much impact on legibility or comprehension.

On the other hand, Chinese, Japanese and Korean (CJK) fonts are often more complex, with the result that the effect is visible at larger point sizes. And the meaning can be obscured or altered much more easily if strokes are missing. Fig 2 illustrates the same effects on Japanese text at 3pt, rendered at 600dpi. At this size and resolution, the RIP has about 22 pixels for the height of each glyph.

FIG 2 – 3pt text in MS Mincho, showing, from top to bottom: live fonts; default rendering for outlined text; the new, lighter, outlined text; and lighter text with dropout control.

The glyphs shown in FIG 2 are complex compared to Western scripts, but any solution that will be used with CJK scripts must obviously also be proven with the most complex character shapes, such as the Kanji in FIG 3. Some of these have so many horizontal strokes that they simply cannot be rendered with fewer than 22 device pixels vertically, and require more than that for reliable rendering. The sample in this figure is rendered with around 27 pixels for the height of each glyph.

FIG 3 – More complex Kanji in KozGoPro-Regular, showing, from top to bottom: live fonts; default rendering for outlined text; and the new, lighter text with dropout control.

This article has deliberately used very small text sizes as examples, simply because the effects are easier to see. But the same issues arise at larger sizes as well, albeit more rarely.

On the other hand, it is precisely because the issue appears more rarely, and because the effects are less immediately noticeable, that the risk of dropped strokes is so dangerous. It’s perfectly possible that an occasional missing stroke, perhaps in an unusually light font, may go unnoticed in process control. And that might result in a print that disappoints a brand owner, or even fails a regulatory check, after the label has been applied or the carton converted and filled, or even after the product has shipped.

So, when a brand demands lighter rendering of pre-outlined fonts, make sure you’re safe by also using dropout control in your RIP!

New to inkjet? Come and see us at Hunkeler Innovationdays

Martin Bailey, CTO, Global Graphics Software

If you are new to inkjet and are building your first press no doubt you’ll have many questions about the workflow and the Digital Front End.

In fact, you’re probably wondering how to scope out the functionality you need to create a DFE that is customised to exactly what your customers require. Among your concerns will be how you’re going to achieve the throughput you need to keep the press running at rated speed, especially when handling variable data. Or it might be handling special colours or achieving acceptable image quality that is keeping you awake at night.  And how to achieve this without increasing the bill of materials for your press?

At Hunkeler Innovationdays we’ll have a range of resources available to address just such questions with some real case study examples of how our OEM customers have solved the problems that were causing them a headache using our technology and the skills of our Technical Services team.

For instance, how, on a personalised run, when every label or page might be different, can you stop the press from falling idle whilst the RIP catches up? Our ScreenPro™ technology helps Mark Andy cut processing time by 50% on the Mark Andy Digital Series HD, enabling fully variable (every label is different) continuous printing at high speed and high quality.

How can you avoid streaking on the image if your substrate is racing under your printheads at speeds of up to 300m/min for aqueous inks, or maybe 90m/min for UV? Or mottling? The Mirror and Pearl Advanced Inkjet Screens™ available with ScreenPro have been developed specifically to address these problems.

During the lifetime of the press, how can you avoid variations in quality that look like banding because your printheads have worn or been replaced?  Take a look at what Ellerhold AG has achieved by deploying PrintFlat™.

The ScreenPro screening engine is one of the building blocks you’ll need for your inkjet press. Our Fundamentals components provide other functions that are essential to the workflow such as job management, soft proofing, and colour management.

Using a variety of white papers, print samples, video footage and case studies, we’ll be sharing our experience. So come along and meet the team: that’s me, Jeremy Spencer and Justin Bailey, plus our colleague Jonathan Wilson from Meteor Inkjet if you want to chat about their printhead driver electronics, which are endorsed by the world’s leading industrial inkjet printhead manufacturers.

 

Join us at Hunkeler Innovationdays 2019

 

Simple VDP support

VDP is a topic that has the potential to get people very excited. We are no exception. For instance, we were delighted when Mark Andy told us that our technology reduces process and RIP times on the Digital Series HD by 50%, even with full-color, every-page-different jobs.

Confident of the benefits print shops would experience if they could take on higher-premium personalised jobs, we made sure from the early days that our technology would be a) able to handle variable data in “regular” flavours of PDF through intelligent rendering, and b) PDF/VT compliant (since IPEX 2010) and capable of high speeds without sacrificing quality. And now there’s a new development that we’ve introduced this year with the launch of Version 12 of the Harlequin Host Renderer in April 2018.

What about when your VDP workflow doesn’t really benefit from PDF/VT but needs a lighter weight solution for adding text, graphics and barcodes?
Harlequin Host Renderer 12 now supports Dynamic Overlays for these use cases. Some applications, such as packaging, labels and industrial print, require a simple form of VDP support: a single background page combined with overlay graphics that are selected on the basis of a data file supplied in a format like CSV. Serial and batch codes can be added using dynamic counters without writing values to a CSV first. Support has been added to apply overlays on top of a single-page PDF file to add simple serial numbers to labels, or QR codes for personalized URLs, postal barcodes and addresses to envelopes.
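The flavour of this kind of workflow can be sketched in a few lines. The record format and function below are invented for illustration; they are not Harlequin’s Dynamic Overlays configuration syntax.

```python
# A sketch of CSV-driven overlays with a dynamic serial counter. The record
# format is invented for illustration; it is not the Dynamic Overlays syntax.

import csv
import itertools

serial = itertools.count(1000)  # dynamic counter: no CSV column required

def overlay_for(row):
    """Build the overlay graphics for one output copy from one CSV record."""
    return [
        ("text",    {"value": row["name"], "at": (30, 700)}),
        ("qrcode",  {"value": f"https://example.com/{row['purl']}",
                     "at": (400, 650)}),
        ("barcode", {"value": f"SER{next(serial):06d}", "at": (30, 40)}),
    ]

with open("recipients.csv", newline="") as f:
    for record in csv.DictReader(f):  # one output copy per CSV row
        graphics = overlay_for(record)
        # ...composite `graphics` over the static background page here...
```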

Secure tickets

This secure ticket is generated with in-RIP barcode support, where data is read dynamically from a CSV file.

The example shows:

  • A complex guilloche pattern in the background
  • Two lines of micro text identifying the recipient by name
  • A QR Code encoding a personalized URL (PURL)
  • The Global Graphics ‘g’ is painted in the centre of the QR Code
  • A six-character code in which each character is drawn with one of six different colours

Folding cartons

The background for this image comprises three folding cartons using nested imposition.

The overlay includes:

  • The first name of the recipient in large white text with a silver border
  • The full name of the recipient together with their city and state
  • A line of microtext showing their full name repeated to fill the space available
  • A QR Code recording a personalized URL (PURL), with a Global Graphics logo placed over the centre of it
  • The flag for the state of the recipient


Avoiding the orange peel

When you speak frequently at industry events as I do, you can tell what resonates with your audience. So, it was very gratifying to experience the collective nodding of heads at the Inkjet Conference in Neuss, near Düsseldorf, this week.

I gave an update on mitigating texture artifacts on inkjet presses using halftone screens.

You see, it turns out that there is more commonality between inkjet presses than we previously thought. I’m not saying that there is no need for a custom approach, because there will always be presses with specific characteristics that will need addressing through services like our BreakThrough engineering service.

What I am saying is that we’ve discovered that what matters most is the media. And it gives rise to two distinct types of behavior.

On reasonably absorbent and/or wettable media, drops tend to coalesce on the substrate surface in the direction of substrate travel, causing visible streaking, especially in mid-tones and three-quarter tones. These issues are amenable to correction in a halftone.

On non-absorbent, poorly wettable media such as flexible plastics or metal, by contrast, prints are characterized by a mottle effect that looks a bit like orange peel.

This effect seems to be triggered by ink shrinkage during cure. This can be corrected with a halftone with specially designed characteristics. We have one in test on real presses at the moment.
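For readers new to screening, a threshold-array halftone is easy to sketch. The 4x4 Bayer matrix below is a textbook example; the advanced screens discussed in this post use very different, carefully designed threshold structures.

```python
# A minimal ordered-dither sketch showing what a halftone screen does.
# The 4x4 Bayer matrix is a textbook example, nothing like the specially
# designed screens discussed in this post.

BAYER4 = [[ 0,  8,  2, 10],
          [12,  4, 14,  6],
          [ 3, 11,  1,  9],
          [15,  7, 13,  5]]

def halftone(gray_rows):
    """Threshold 8-bit contone pixels against a tiled screen -> 1-bit output."""
    return [[1 if px > (BAYER4[y % 4][x % 4] + 0.5) * 16 else 0
             for x, px in enumerate(row)]
            for y, row in enumerate(gray_rows)]

flat_midtone = [[128] * 8 for _ in range(8)]  # a flat 50% gray patch
for row in halftone(flat_midtone):
    print("".join("#" if bit else "." for bit in row))
```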

So it won’t be long now before we introduce two advanced screens for inkjet that will greatly improve quality on the majority of inkjet presses: one to counteract streaking, the other to counteract the orange peel effect. And the next project is to address non-uniformity across the web. Fixing that in software gives you the granularity to address every nozzle separately on any head/electronics combination.

And for those presses aforementioned with unique properties that need special tuning? Our Chameleon design tools can create unique halftones for these cases.

I do like it when a good plan comes together!

The healthy buzz of conversation at PDF 2.0 interops

Last week was the first PDF 2.0 interop event, hosted by Global Graphics in Cambridge, UK on behalf of the PDF Association. The interop was an opportunity for developers from various companies working on their support for PDF 2.0 to get together and share sample files, and to process them in their own solutions. If a sample file from one vendor isn’t read correctly by a product from another vendor the developers can then figure out why, and fix either the creation tool or the consumer, or even both, depending on the exact reason for that failure.

When we make our own PDF sample files to test the Harlequin RIP there’s always a risk that the developer making the file and the developer writing the code to consume it will make the same assumptions or misread the specification in the same way. That makes testing files created by another vendor invaluable, because it validates all of those assumptions and possible misinterpretations as well.

It’s pretty early in the PDF 2.0 process (the standard itself will probably be published later this month), which means that some vendors are not yet far enough through their own development cycles to get involved yet. But that actually makes this kind of event even more valuable for those who participate because there are no currently shipping products out there that we could just buy and make sample files with. And the last thing that any of us want to do as vendors is to find out about incompatibilities after our products are shipped and in our customers’ hands.

I can tell you that our testing and discussions at the interop in Cambridge were extremely useful in finding a few issues that our internal testing had not identified. We’re busy correcting those, and will be taking updated software to the next interop, in Boston, MA on June 12th and 13th.

If you’re a Harlequin OEM or member of the Harlequin Partner Network you can also get access to our PDF 2.0 preview code to test against your own or other partners’ products; just drop me a line. If you’re using Harlequin in production I’m afraid you’ll have to wait until we release our next major version!

If you’re a software vendor with products that consume or create PDF and you’re already working on your PDF 2.0 support I’d heartily recommend registering for the June interop. I don’t know of any more efficient way to identify defects in your implementation so you can fix them before your customers even see them. Visit https://www.pdfa.org/event/pdf-interoperability-workshop-north-america/ to get started.

And if you’re a PDF software vendor and you’re not working on PDF 2.0 yet … time to start your planning!

Channelling how many spot colors?!!

Martin Bailey, CTO, Global Graphics Software

Recently my wife came home from a local sewing shop proudly waving a large piece of material, which turned out to be a “swatch book” for quilting fabrics. She now has it pinned up on the wall of her hobby room.

It made me wonder how many separations or spot colors I’d ever seen in a single job myself … ignoring jobs specifically designed as swatches.

I think my personal experience probably tops out at around 18 colors, which was for a design guide for a fuel company’s forecourts after a major redesign of their branding. It was a bit like a US banknote: lots of colors, but most of them green!

But I do occasionally hear about cases where a print company or converter, especially in packaging, is looking to buy a new digital press. I’m told it’s common for them to impose together all of their most challenging jobs on the grounds that if the new press (or rather, the DFE on the new press) can handle that, then they can be confident that it’ll handle any of the jobs they receive individually. Of course, if you gang together multiple unrelated jobs, each of which uses multiple spot colors, then you can end up with quite a few different ones on the whole sheet.

“Why does this matter?” I hear you ask.

It would be easy to assume that a request for a spot color in the incoming PDF file for a job is very ephemeral; that it’s immediately converted into an appropriate set of process colors to emulate that spot on the press. Several years ago, in the time of PostScript, and for PDF prior to version 1.4, you could do that. But the advent of live transparency in PDF made things a bit harder. If you naïvely transform spots to process builds as soon as you see them, and if the spot colored object is involved in any transparency blending, then you’ll get a result that looks very different to the same job being printed on a press that actually has an ink for that spot color. In other words, prints from your digital press might not match a print from a flexo press, which is definitely not a good place to be!

So in practice, the RIP needs to retain the spot as a spot until all of the transparency blending and composition has been done, and can only merge it into the process separations afterwards. And that goes for all of the spots in the job, however many of them there are.
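A toy numeric example makes the point. The linear tint-to-CMYK emulation below is invented for illustration, and PDF’s precise blend-space rules are glossed over; the only point is that the two orderings give different numbers.

```python
# A toy model of why spots must stay live through transparency blending.
# The linear tint-to-CMYK emulation is invented, and PDF's exact blend-space
# rules are ignored; only the ordering difference matters here.

def spot_to_cmyk(tint, base=(0.0, 0.6, 1.0, 0.0)):
    """Emulate a spot tint as a simple linear CMYK build."""
    return tuple(round(ch * tint, 3) for ch in base)

def multiply_blend(a, b):
    return tuple(round(x * y, 3) for x, y in zip(a, b))

# Correct: blend the spot tints first, convert to process afterwards.
late = spot_to_cmyk(0.8 * 0.5)
# Naive: convert each object to CMYK on sight, then blend per channel.
early = multiply_blend(spot_to_cmyk(0.8), spot_to_cmyk(0.5))

print(late)   # (0.0, 0.24, 0.4, 0.0)
print(early)  # (0.0, 0.144, 0.4, 0.0) -> a visibly different result
```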

Although I was a bit dismissive of swatches above, those are also important. Who would want to buy a wide format printer, or a printer for textiles, or even for packaging or labels, if they couldn’t provide swatches to their customers and to their designers?

All of this really came into focus for me because, until recently, the Harlequin RIP could only manage 250 spots per page. That sounds a lot, but wasn’t enough for some of our customers. In response to their requests we’ve just delivered a new revision to our OEM partners that can handle a little over 8000 spots per page. I’m hoping that will be enough for a while!

If you decide to take that as a challenge, I’d love to see what you print with it!

Getting to know PDF 2.0: not only but also!

Are you ready for PDF 2.0? Register now for the PDF 2.0 interoperability workshops in the UK and USA.

In the middle of 2017 ISO 32000-2 will be published, defining PDF 2.0.  It’s eight years since there’s been a revision to the standard. We’ve already covered the main changes affecting print in previous blog posts and here Martin Bailey, the primary UK expert to the ISO committee developing PDF 2.0, gives a roundup of a few other changes to expect.

Security
The encryption algorithms included in previous versions of PDF have fallen behind current best practices in security, so PDF 2.0 adds AES-256 encryption and states that all passwords used for AES-256 encryption must be encoded in Unicode.
A PDF 1.7 reader will almost certainly error and refuse to process any PDF files using the new AES-256 encryption.
Note that Adobe’s ExtensionLevel 3 to ISO 32000-1 defines a different AES-256 encryption algorithm, as used in Acrobat 9 (R=5). That implementation is now regarded as dangerously insecure and Adobe has deprecated it completely, to the extent that use of it is forbidden in PDF 2.0.
Deprecation and what this means in PDF!
PDF 2.0 has deprecated a number of implementation details and features that were defined in previous versions. In this context ‘deprecation’ means that tools writing PDF 2.0 are recommended not to include those features in a file; and that tools reading PDF 2.0 files are recommended to ignore those features if they find them.
Global Graphics has taken the deliberate decision not to ignore relevant deprecated items in PDF files that are submitted and happen to be identified as PDF 2.0. This is because it is quite likely that some files will be created using an older version of PDF and using those features. If those files are then pre-processed in some way before submitting to Harlequin (e.g. to impose or trap the files) the pre-processor may well tag them as now being PDF 2.0. It would not be appropriate in such cases to ignore anything in the PDF file simply because it is now tagged as PDF 2.0.
We expect most other PDF readers to take the same course, at least for the next few years.
And the rest…
PDF 2.0 header: It’s only a small thing, but a PDF reader must be prepared to encounter a value of 2.0 in the file header and as the value of the Version key in the Catalog.
PDF 1.7 readers will probably vary significantly in their handling of files marked as PDF 2.0. Some may error, others may warn that a future version of that product is required, while others may simply ignore the version completely.
Harlequin 11 reports “PDF Warning: Unexpected PDF version – 2.0” and then continues to process the job. Obviously that warning will disappear when we ship a new version that fully supports PDF 2.0.
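A reader’s version check might look like the sketch below. The warn-and-continue policy mirrors the Harlequin 11 behaviour described above; the code itself is illustrative and, for brevity, ignores the Version key in the Catalog.

```python
# An illustrative version check for a PDF reader. For brevity this only
# inspects the file header, ignoring a Version key in the Catalog.

import re

def pdf_header_version(path):
    with open(path, "rb") as f:
        m = re.match(rb"%PDF-(\d+)\.(\d+)", f.readline())
    return (int(m.group(1)), int(m.group(2))) if m else None

version = pdf_header_version("job.pdf")
if version and version > (1, 7):
    print(f"PDF Warning: Unexpected PDF version - {version[0]}.{version[1]}")
    # ...then continue processing rather than refusing the job...
```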
UTF-8 text strings: Previous versions of PDF allowed certain strings in the file to be encoded in PDFDocEncoding or in 16-bit Unicode. PDF 2.0 adds support for UTF-8. Many PDF 1.7 readers will not recognise a UTF-8 string as UTF-8 and will therefore treat it as using PDFDocEncoding, resulting in those strings being displayed as what looks like a random sequence of mainly accented characters.
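Detecting the encoding of a PDF text string follows the byte-order-mark conventions in the standard. A sketch, with PDFDocEncoding approximated by Latin-1 for brevity:

```python
# A sketch of PDF text-string decoding. PDF 2.0's UTF-8 strings are flagged
# with a UTF-8 BOM; Latin-1 stands in for PDFDocEncoding here, which is an
# approximation only.

def decode_pdf_text_string(raw: bytes) -> str:
    if raw.startswith(b"\xfe\xff"):      # UTF-16BE byte order mark
        return raw[2:].decode("utf-16-be")
    if raw.startswith(b"\xef\xbb\xbf"):  # UTF-8 BOM, new in PDF 2.0
        return raw[3:].decode("utf-8")
    return raw.decode("latin-1")         # PDFDocEncoding stand-in

print(decode_pdf_text_string(b"\xef\xbb\xbfR\xc3\xa9sum\xc3\xa9"))  # Résumé
```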
Print scaling: PDF 1.6 added a viewer preferences key that allowed a PDF file to specify the preferred scaling for use when printing it. This was primarily in support of engineering drawings. PDF 2.0 adds the ability to say that the nominated scaling should be enforced.
Document parts: The PDF/VT standard defines a structure of Document parts (commonly called DPart) that can be used to associate hierarchical metadata with ranges of pages within the document. In PDF/VT the purpose is to enable embedding of data to guide the application of different processing to each page range.
PDF 2.0 has added the Document parts structure into baseline PDF, although no associated semantics or required processing for that data have been defined.
It is anticipated that the new ISO standard on workflow control (ISO 21812, expected to be published around the end of 2017) will make use of the DPart structure, as will the next version of PDF/VT. The specification in PDF 2.0 is largely meaningless until such time as products are written to work with those new standards.

 

The background
The last few years have been pretty stable for PDF; PDF 1.7 was published in 2006, and the first ISO PDF standard (ISO 32000-1), published in 2008, was very similar to PDF 1.7. In the same way, PDF/X‑4 and PDF/X‑5, the most recent PDF/X standards, were both published in 2010, six years ago.
In the middle of 2017 ISO 32000-2 will be published, defining PDF 2.0. Much of the new work in this version is related to tagging for content re-use and accessibility, but there are also several areas that affect print production. Among them are some changes to the rendering of PDF transparency, ways to include additional data about spot colors and about how color management should be applied.