Just when you’ve all cozied down with PDF 1.7 what happens? Yes, that’s right. A new standard rears its head.
Around the middle of 2017 the ISO committee will publish PDF 2.0 (ISO 32000-2). So by the end of 2017 you’ll probably need to be considering how to ensure that your workflow can handle PDF 2.0 files correctly.
As the primary UK expert to this committee I thought I’d give you a heads up now on what to expect. And over the coming months via this blog and our newsletter I’ll endeavor to keep you posted on what to look out for as far as print is concerned. Because, of course, there are many aspects to the standard that do not concern print at all. For instance there are lots of changes in areas such as structure tagging for accessibility and digital signatures that might be important for business and consumer applications.
As you probably already know, in 2008 Adobe handed over ownership and development of the PDF standard to the International Organization for Standardization (ISO). Since that time I’ve been working alongside other experts to ensure that standards have real-world applicability.
And here’s one example relating to color.
In professional print production jobs, the printing condition for which a job was created can be encapsulated by specifying an “output intent” in the PDF file. The output intent structure was invented for the PDF/X standards, at first in support of pre-flight, and later to enable color management at the print site to match that used in proofing at the design stage.
But the PDF/X standards only allow a single output intent to be specified for all pages in a job.
PDF 2.0 allows separate output intents to be included for every page individually. The goal is to support jobs where different media are used for various pages, e.g. for the first sheet for each recipient of a transactional print job, or for the cover of a saddle-stitched book. The output intents in PDF 2.0 are an extension of those described in PDF/X, and the support for multiple output intents will probably be adopted back into PDF/X-6 and into the next PDF/VT standard.
But of course, like many improvements, this one does demand a little bit of care. A PDF 1.7 or existing PDF/X reader will ignore the new page level output intents and could therefore produce the wrong colors for a job that contains them.
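To make that risk concrete, here’s a minimal sketch in plain Python of how a reader might resolve the effective output intent for a page. The class names and intent names are invented for illustration; this models the behavior rather than any real PDF library’s API:

```python
from dataclasses import dataclass, field

@dataclass
class Page:
    # In PDF 2.0 a page may carry its own output intents (hypothetical model).
    number: int
    output_intents: list = field(default_factory=list)

@dataclass
class Document:
    # Document-level output intents, as in PDF/X and PDF 1.7.
    output_intents: list = field(default_factory=list)
    pages: list = field(default_factory=list)

def effective_intent(doc, page, pdf20_aware=True):
    """Return the output intent a reader would use for this page.

    A PDF 2.0 reader prefers the page-level intent; a PDF 1.7 or
    existing PDF/X reader ignores page-level intents entirely,
    which is how the wrong colors can be produced.
    """
    if pdf20_aware and page.output_intents:
        return page.output_intents[0]
    return doc.output_intents[0] if doc.output_intents else None

doc = Document(output_intents=["FOGRA39 (coated)"])
doc.pages = [Page(1), Page(2, output_intents=["IFRA26 (newsprint)"])]

# A PDF 2.0 reader renders page 2 for newsprint; a legacy reader
# silently falls back to the coated-stock intent for the same page.
```

The point of the sketch is the silent fallback: nothing errors out, the legacy reader simply produces plausible-looking but wrong colors.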
In my next post I’ll be covering changes around live transparency in PDF 2.0. Bet you can’t wait!
You can sign up to the Global Graphics newsletter here.
The last few years have been pretty stable for PDF; PDF 1.7 was published in 2006, and the first ISO PDF standard (ISO 32000-1), published in 2008, was very similar to PDF 1.7. In the same way, PDF/X-4 and PDF/X-5, the most recent PDF/X standards, were both published in 2010, six years ago.
In the middle of 2017 ISO 32000-2 will be published, defining PDF 2.0. Much of the new work in this version is related to tagging for content re-use and accessibility, but there are also several areas that affect print production. Among them are some changes to the rendering of PDF transparency, ways to include additional data about spot colors and about how color management should be applied.
I’m proud to announce that I’m chairing a new task force that has just been created in TC130, the ISO committee focused on standardization for the printing industry. The task force is named “PDF Common Metadata”, and its focus is on constructing a metadata framework that can be embedded within a PDF file to guide production workflow decisions.
We created a precursor to this work in PDF/VT, in cooperation with CIP4. In that case a hierarchical structure of metadata in the PDF file was intended to be used with a templated JDF job ticket (or similar structure) to ensure that complex variable data jobs could be imposed, printed and finished correctly. Unfortunately the model we used set the bar too high and most composition vendors and press manufacturers felt that implementation was too difficult.
But there is a wide range of situations where a simpler model has real value. Indeed, the current work grew out of requests from the transactional print space to be able to include media selections and simplex/duplex controls in a PDF file. That request was initially reviewed by the PDF/VT Competence Center in the PDF Association, which concluded that the benefits of a suitable solution would apply across the printing industry, not just in variable data.
The solution proposed is to build on the concept of ‘intents’ from JDF (although not directly on JDF itself). These describe what the final printed piece is supposed to look like, rather than specifying the details of the processes required to make it. The thought process is that the digital front end (DFE) on a digital press can map from that to the actual steps needed.
As a simple example, a request for a specific substrate should be fairly easy to map to an entry in the media library in a DFE and therefore to tray selections (on a sheet-fed press) and to installing the correct ICC color profile. In closed loop workflows such as web to print the first mapping shouldn’t be necessary at all, because the media selection will be pre-populated from the same data as the media library.
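As an illustration, that mapping can be as simple as a lookup against the media library. The media names, tray numbers and profile filenames below are invented for the example, not taken from any real press or DFE:

```python
# Hypothetical DFE media library: each substrate maps to a tray
# selection and an ICC color profile (all values illustrative).
MEDIA_LIBRARY = {
    "Gloss 200gsm": {"tray": 2, "icc_profile": "gloss200.icc"},
    "Uncoated 80gsm": {"tray": 1, "icc_profile": "uncoated80.icc"},
}

def resolve_media_intent(substrate):
    """Map a media intent from the PDF metadata to concrete DFE settings."""
    try:
        entry = MEDIA_LIBRARY[substrate]
    except KeyError:
        raise ValueError(f"substrate {substrate!r} not in media library")
    return entry["tray"], entry["icc_profile"]

tray, profile = resolve_media_intent("Gloss 200gsm")
```

In a closed-loop web-to-print workflow the intent name and the library key come from the same source, so the lookup cannot miss; in an open workflow the `ValueError` branch is where an operator would be asked to choose.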
The committee met for the first time in San Jose last week, and we’re looking forward to some lively debate. Our first goal is a standard for graphic arts, but there has already been discussion of following on with equivalents targeted more specifically at packaging and at wide format.
If you’re interested in getting involved please contact your national standards body and tell them you want to work in ISO TC130/WG2/TF5. If you don’t know who to contact in your country, drop me a line and I’m happy to make introductions.
You often read news items about a new press having been installed at a beta site but it’s not a topic that gets much of an airing apart from the odd news bulletin, is it?
And that got me thinking.
What is considered to be a successful beta test? And why should we care?
Well, if you do care, you are not just going through the motions to get your press out of the door. You are more likely to be focussed on delivering a good product. You probably view beta testing as an opportunity to make changes for the better and to help improve product management. You care what comes back because you want to develop a good product. It’s important to you to get understandable and useful data.
So what do you want to know? Your beta test should provide you with proof points as to why your printer is going to be successful in the market. “Real” users will use and abuse your press and put it through its paces in a way that your own internal hardware and software engineers will not. Any weaknesses will be exposed. And you’ll get closer to your customer by working together with them in a way that just wouldn’t be open to you if you didn’t run a beta program.
The thing is, how do you extract meaningful data from your test? And how do you rule out those problems that have nothing to do with your press, such as humidity, ambient temperature, or the way the site is being operated?
Somehow you need to control the environment that the beta test is conducted in and approach the beta test in quite a formal way to rule out any subjectivity that might creep in.
We’ve got some ideas on how to achieve this which I’ll share in another post. But I’d be interested in hearing how you do it. What are your top tips?
It’s been a really interesting week chatting to vendors and the press about our new software and services package for inkjet. In case you missed it, we’ve called it Fundamentals because it combines essential software components and engineering expertise that press vendors need to build a Digital Front End.
What’s the big deal, you might say? Well, to quote Bob Dylan, The Times They Are A-Changin’, both in terms of the progression of inkjet technology and the swing towards digital printing in the labels and packaging sector, which is where we have focussed our initial offering of Fundamentals.
Thanks to our lengthy graphic arts experience – we’ve been supplying software to drive digital presses since 2002 – we are regularly approached by inkjet press vendors either to intervene at some point in an existing workflow or because they’re starting from a blank sheet of paper and need to figure out how to build a Digital Front End for a new press.
If they have an existing DFE, a press vendor might be stuck on output quality, or maybe they can’t get the throughput they need. If they’re building a new press they might not know where to quickly source the components they need. Or often they can’t allocate enough engineering resource to the DFE when they need to. Plus it takes a very special skill set to know how to wire it all together.
How do we know all this? Because vendors tell us so. And Fundamentals is our response to this market demand. It offers best of breed software products with an engineering service that allows the press vendor to address their specific applications.
It will grow, of course. We are already looking at a Fundamentals software bundle for industrial inkjet for example. But the good news for press vendors is that we can do all of the above and then some!
One of the many highlights of our drupa stand will be the new Harlequin RIP. We asked Martin Bailey, CTO at Global Graphics, to tell us more about it. He told us that there are a host of new features to improve inkjet output quality including richer, multi-level screening controls, more controls for variable data printing, and new features for labels and packaging applications. Hear his summary in this video below.
Fancy a test drive? Join us at drupa 2016, Stand 70 B21/C20 in the dip. Simply contact us to book a demonstration.
Stay tuned for more announcements over the next couple of weeks.
Remember in olden times how you sent a file to print on a wing and a prayer? OK, it wasn’t that bad! But it was unreliable. Figures showed that print-ready file delivery had failure rates of between 30% and 70%, and this was a real problem for print service providers with high throughput, like magazine houses.
Then PDF/X came along and greatly improved the situation. It was strengthened by additional standardisation efforts from several other bodies, including the Ghent Workgroup, and by test suites such as Altona.
PDF/X worked because it ensured delivery of files ready for high-quality print. And because it dealt with the headache so well, print service providers recommended it.
Fast forward for a moment to today and to the tidal wave that is variable data printing. Most buyers deliver the brief and the dataset to the print service provider (PSP). A full service PSP will offer data mining, graphic design, composition and print. Offering a full service promises higher margins. If you only provide a print service you can expect lower margins, but your model connects better to web-to-print services that are burgeoning. But if you try to “just print” VDP jobs, those that fail will eliminate profit.
I’ve been invited to speak at the Online Print Symposium in Munich (17th – 18th March) about why PDF/VT and Industry 4.0 are set to change online print forever. The truth is that VDP has been hanging around street corners looking for a PDF/X. Well, now it’s found one because that’s what PDF/VT is. It’s been created to deal with every page being different and to give PSPs more control over the workflow.
I would go so far to suggest that print-ready file delivery of graphically rich variable data from outside the print company is unlikely to succeed without it! And on that bombshell…!
Standards for variable data printing (VDP) have come a long way since the first work by CGATS to develop a universal delivery format in the late 1990s. In 2010 the International Organization for Standardization published the PDF/VT standard, marking the first really effective specification for a reliable, vendor-neutral exchange of variable data jobs, both within and between companies.
A special type of the PDF file format, PDF/VT is specifically used for variable data and transactional printing in a variety of environments, from desktop printing to high volume digital production presses. Built on PDF/X, it therefore brings all the advantages of that standard in enforcing best practices for reproducible and predictable color and appearance to the variable data and transactional print worlds.
The industry is gradually realizing its value to improve quality, competitiveness and productivity, and I’ve been working with the PDF/VT Competence Center, especially with Christoph Oeters (Sofha), Paul Jones (Teclyn bv) and Tim Donahue (technical consultant) to produce a new set of Application Notes highlighting the benefits of using PDF/VT and the workflows that it enables.
The Application Notes explain how to make the highest quality and most efficient PDF/VT files to achieve the required visual appearance of a job, so if you develop software to read and write PDF/VT files, for example in composition tools, RIPs, digital front ends and imposition tools, or if you work on print workflow integration, you’ll find the notes really beneficial. They also show how document part metadata can be applied and leveraged for VDP specific production workflows.
Of course, there are wider benefits to using PDF/VT: The adoption of PDF/VT will allow the industry to finally move towards a reliable, vendor-neutral exchange of variable data jobs, simplifying the process of variable data printing significantly.
There’s been a lot of emphasis in the industry recently on perceived resolution. I’m sure you will have come across the phrase from major vendors:
“The Xerox Rialto 900 (…) offers 1,000 dpi perceived resolution for high quality output.”
Océ VarioPrint i300: “The multilevel dot modulation in combination with 600x600dpi resolution boosts the print quality of image elements and shadings to perceived 1200 dpi.”
But what is resolution anyway, and is it the only thing we need to worry about to ensure high quality output?
How we perceive resolution has changed over the years. For conventional print and first generation digital presses (except for wide format), resolution was two dimensional (across and along the media). More recently, inkjet presses (and some toner) can place different amounts of colorant at each location on the substrate, using greyscale heads, multiple passes with the same head, or multiple heads imaging at the same location. This means that resolution has effectively become 3D: not only along and across the media, but also in the amount of colorant applied at any single pixel position.
At Global Graphics we call this “multi-level output”, compared to the “binary” output where each pixel can either be coloured or not, with no intermediate steps.
Resolution? Or addressability and droplet size? As print geeks know well, press resolution has very little to do with resolving power; it is really a marketing simplification to use the word ‘resolution’ for ‘addressability’ – e.g. at 600 dpi, each addressable pixel is 1/600” from its neighbours. The detail that can be displayed is a function of droplet size as well as addressability; as droplets get bigger each one covers more than just a single (square!) pixel on the media, so less fine detail is retained.
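The arithmetic behind that is easy to sketch. The helpers below compute the pixel pitch for a given addressability, and how many pixel pitches a droplet of a given diameter spans along one axis; the 70 micron droplet is an illustrative figure, not a measurement from any particular head:

```python
MICRONS_PER_INCH = 25_400

def pixel_pitch_um(dpi):
    """Centre-to-centre distance between addressable pixels, in microns."""
    return MICRONS_PER_INCH / dpi

def pixels_spanned(drop_diameter_um, dpi):
    """How many pixel pitches a single droplet spreads across (one axis)."""
    return drop_diameter_um / pixel_pitch_um(dpi)

# At 600 dpi each addressable pixel is about 42.3 microns from its
# neighbours; an (assumed) 70 micron dot spans roughly 1.65 pitches,
# so it overlaps its neighbours and fine detail is lost.
pitch = pixel_pitch_um(600)
span = pixels_spanned(70, 600)
```

So quoting dpi alone tells you the grid spacing, not how much of each neighbouring cell every droplet invades.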
Droplet placement accuracy also comes into play. In a perfect world we would have a regular grid of droplets, but in practice we don’t usually get one. The variation in separation between droplets can lead to coalescing, mottling or streaking on some substrates, especially on UV inkjet presses, but it can occur on aqueous as well.
Addressability and droplet size affect the rendering of small type and other high-contrast fine detail. Droplet placement accuracy affects the texture of the final print. So we still don’t have a clear metric for “perceived resolution” …
What about resolution and bit depth? Using multi-level output can produce smoother rendering of images and other graphics with gradual tone or colour changes than binary output at the same resolution can achieve.
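A simplified halftone model shows why. Assuming a classic n-by-n screen cell, binary output yields n² + 1 distinct tones, while a head that can deposit d distinct ink amounts per pixel yields n²(d − 1) + 1. The function below just evaluates that formula; real screening is considerably more subtle, but the scaling is the point:

```python
def halftone_levels(cell_pixels, drop_levels=2):
    """Distinct tones available from an n-by-n halftone cell.

    Binary output (drop_levels=2) gives n^2 + 1 tones; a multi-level
    head with d distinct ink amounts per pixel gives n^2 * (d - 1) + 1.
    This is a textbook simplification, not a model of any real screen.
    """
    return cell_pixels * cell_pixels * (drop_levels - 1) + 1

# A 4x4 cell in binary gives 17 tones; the same cell with four drop
# sizes gives 49 - smoother gradients at the same addressability.
```

That is why a 600 dpi multi-level press can out-render a binary press at the same dpi on smooth tints and vignettes: it buys extra tonal steps without enlarging the screen cell (which would cost spatial detail instead).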
But nozzle redundancy is also vital: In a single pass press, with a page-wide array, a single blocked nozzle will leave a white line down the substrate unless something is built in to fix that, such as nozzle redundancy. And that redundancy must use up some of the press’ capability to use multiple nozzles in the same location for multi-level output, so 1200 dpi nozzles often doesn’t mean 1200 dpi addressability on the substrate.
And sometimes each nozzle can only deliver one droplet size; sometimes it can deliver a variety of sizes.
So what’s the real quality that these presses are capable of? We need a lot of information to really understand what’s going on: dpi across and along the media, number of nozzles imaging any single pixel, droplet sizes available from that nozzle, proportion of nozzles used for redundancy … I don’t think I’ve ever seen a press vendor’s public specification that gives us all the information we want.
Can we even say, simplistically, that higher resolution and bit depth are good? If everything else is equal then yes, in many cases, except that you can push either too far. On an aqueous inkjet, higher resolutions really need smaller highlight droplets; smaller lone droplets tend to disappear into some media and can lead to loss of extreme highlights on the output. Interestingly you end up with output that looks remarkably close to the way flexo loses those same highlights!
And you also need to remember that higher addressability means high computational requirements, and more computations mean more expensive DFEs, higher running costs, maybe even less green … (a faster RIP can offset this, of course!) It also makes the press more expensive, and harder to run as fast.
And what’s the impact on quality? There are factors other than bit depth, addressability, and droplet size and placement that affect the final result, for example:
Items affecting ink spread or movement on the substrate, such as paper smoothness, absorbency, coatings, ink viscosity and surface tension;
Movement of the colorant into the substrate, reducing the capability of showing very small detail or saturated colours;
Colour management, including ink limitation and reduction.
So the ‘virtual’, mathematical discussion of resolution and droplet size is certainly not the only factor in determining the quality of output. Quality arises from a complex mix of heads, electronics, waveforms, inks, media, resolution, registration, bit depth, half-toning and so on. We don’t have a good way to provide a single, understandable quality metric to sum it all up. ISO DTS 15311-1 is defining testing and reporting methodologies in this area, although it still doesn’t provide a simple quality metric.
So what’s the answer? We just don’t have a single number that sums up the quality capability of a digital press at the moment. But then simply reporting ‘resolution’ has never really fulfilled that role in the past for binary systems, from imagesetters to platesetters to office printers … to digital cameras. So perhaps we shouldn’t be too disappointed.
What should you do when a vendor reports “perceived resolution”? I’d suggest that you take it as an indication of the level in the marketplace that the vendor is intending to address … and then draw your own conclusions based on print samples.
If you’re looking to buy a press, have the vendor:
Print samples on the media and at the speed that you expect to use
Use a variety of graphical constructs to explore press behaviour:
Flat tints at a range of tones and colours
Smooth graduations, including some long ones all the way to white
Photographic images, including high and low key, soft-focus and sharp detail
Fine vector detail such as small serif and sans serif text
If you’re already running a press do the same. Each technology has different strengths and weaknesses; you may even need multiple presses to address all work in your particular target sector. The key thing is to understand what your presses are good at, and what to avoid, and then to work with your customers to achieve the best possible result … and to set expectations appropriately in advance.
If you’re a press vendor, talk to us about how Global Graphics’ multi-level screening technologies can maximise the quality and the value of your hardware.
Read about our latest advances in screening, presented at the Inkjet Conference, October 2015.
Making progress in half-tone screening technology – our samples are ready to display!
We’re really looking forward to the Inkjet Conference in Düsseldorf next week. Global Graphics’ CTO, Martin Bailey, will be speaking at the conference and focusing on the problems inkjet vendors have encountered when printing on high-speed inkjets, particularly with regard to optimum image quality and droplet placement.
With this in mind, for the last few months we’ve been working with a number of inkjet press manufacturers to develop entirely new half-tone screening technology for presses that can vary the amount of ink delivered in any one location on the media. We’ve just received our sample prints to show you at the Conference and we’re really pleased with the results – you can see the improvement immediately.
The samples show typical ‘before and after’ scenarios: The ‘before’ samples are quite noisy and show mottle and puddling; the ‘after’ samples, printed with Global Graphics screening technology, show much smoother gradients where we manage the transition of droplet size in multi-level heads.
We have also prepared sample prints showing what the output looks like with no tuning applied: they show noise and steps in gradients for multi-level output. We then demonstrate what happens when we manage the transition points between drop sizes, including with inks such as white, orange and violet in the colour spectrum.
Look out for Martin at the Conference and drop by our table in the IJC Networking Arena to see the prints for yourself.
If you are interested in the benefits of half-tone screening on high-speed inkjets and would like to join our research programme, watch our video here for more information: https://www.youtube.com/watch?v=WNrSbb46efg.
Our Harlequin product team has launched a set of hybrid screens specially developed to give premium quality in flexo work.
The screens address the well-known issues of how to achieve high quality in the highlight areas of images, such as tones close to white or skin tones, and how to print those areas with smooth gradations.
“We used the Harlequin Cross Modulated™ screens as the basis for development and have expanded the number of screens and included a mechanism to auto select calibration that goes with a particular screen,” comments Martin Bailey, CTO, Global Graphics.
“With the new Harlequin Cross Modulated Flexo (HXMFlexo) screens you can produce high-quality graphical objects by selecting from a wide choice of screen resolutions, rulings and dot sizes. Pre-press operators also now have the ability to bump up curves at the highlight end to compensate for flexo not being able to produce tones close to white clearly, so you can achieve smooth gradations even in high-key images.”
The new screens are the result of working with our OEM partners in the flexo market who have used the Harlequin RIP for years and we’ve been able to take input from a variety of vendors to fine tune our plans.
HXMFlexo works with the latest editions of the Harlequin RIP.