HP Site Flow is a workflow and production automation system for HP digital press owners.
When HP Inc. began developing HP Site Flow, they encountered several challenges, including: addressing the growing market for personalized print; the need to ‘normalize’ PDFs, given the wide variation in the quality of files entering the system; and the ability to scale quickly up or down to accommodate varying levels of demand.
Read the case study to see how Mako Core™ SDK proved its capability and adaptability by rising to HP Site Flow’s development challenges, resulting in increased productivity and profitability for its users.
Twenty years ago it was common to find people RIPping jobs for production print with no color management. Indeed, many print service providers (PSPs), magazine publishers and others actively avoided it as being “too complicated” and “unpredictable”. You might read that as an indictment of their vendors for a lack of investment, either in developing good products or in educating their users. Alternatively it might simply show that the printing companies were, quite understandably, risk-averse: a client rejecting the resulting color could be expensive, especially in an environment like display advertising in a major magazine, or packaging for a major brand.
A decade later, more and more people (on both the buying and the printing sides) grasped the value of color management in print and were using it, but there was still a significant minority who had not made the time to understand it. This is borne out by the uproar when Adobe ‘forced’ people to use color management by changing the alternate color space for Pantone spots in Creative Cloud from CMYK to Lab [1], and by the continuing demand for support for PDF/X‑1a, where everything has already been converted to press colorants before the PDF is made.
Now we’re in 2022, and the need for color management is accepted almost universally in print sectors that use an ink set based on CMYK. I phrased it that way because some sectors of the industrial print space (textiles, ceramics, laminate flooring, etc.) have historically used many inks, but usually job-specific ones rather than CMYK. Some of those markets will continue to use job-specific ink sets as they transition to digital, while others would find a switch to digital extremely challenging without a concurrent switch to a color-managed workflow [2].
So, why am I writing this now?
It’s because I still talk to people who tell me that they don’t need to do any color management inside the RIP when processing PDF; they RIP it first and then apply color management.
I’m sorry, but that just won’t work reliably or with maximum quality.
There was a time, back in the days when PDF 1.3 was the latest and greatest (which pretty much means last millennium), when a PSP could get away with this approach, because their customers were happy to define all their colors in CMYK and spots. As soon as they used anything else, including Lab or colors tagged with ICC profiles, the RIP would need some fallback to generate CMYK values from that data. That fallback doesn’t need a full color management module (CMM), but it needs something.
And then along came PDF 1.4, adding transparency. And transparency requires that you can convert colors between color spaces, potentially multiple times. That’s because PDF transparency includes the concept of transparency groups. Each group is one or more graphics that are blended with any graphics that are behind them in the design.
The blending depends on a number of parameters, the most obvious of which are the blend mode (Overlay, Multiply, Hard Light etc), and the blend color space. The result of rendering all graphics that are underneath the transparency group will be transformed from whatever space the RIP holds it in (often the CMYK for the output device) into the blending color space. The result of rendering all the graphics inside the transparency group itself is also transformed into the blending color space. Then the blend mode is applied, to do the actual transparency calculation, and the result is transformed back into whatever color space the RIP needs it to be in for further processing (again, often the CMYK of the output device). The blending color space is quite often sRGB, because that’s the default in a number of popular design applications.
So correct rendering of the transparency will often require color transforms between the color space in which graphics are specified (such as, maybe, an image tagged with an ECI RGB ICC profile), the blend color space (most commonly sRGB) and the output device color space (usually a specific CMYK). That’s just not possible without applying a pretty complete color management process during RIPping. And if you try to take short-cuts you’ll usually get a visually different result, sometimes very different.
Color transformation with transparency requires a full color management capability.
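To make that round trip concrete, here’s a minimal per-pixel sketch in C#. It is not Mako or Harlequin code, and the naive conversion formulas are only stand-ins for the ICC-based CMM transforms a real RIP applies at each step:

```csharp
using System;

// Per-pixel sketch of transparency compositing with a Multiply blend in an
// sRGB blending space. NOT production RIP code: the naive conversions below
// stand in for full ICC-based CMM transforms.
static class TransparencySketch
{
    // Stand-in for a CMM transform: DeviceCMYK -> blend space (sRGB here).
    static double[] CmykToRgb(double[] c) => new[]
    {
        (1 - c[0]) * (1 - c[3]),
        (1 - c[1]) * (1 - c[3]),
        (1 - c[2]) * (1 - c[3])
    };

    // Stand-in for the reverse transform: blend space -> DeviceCMYK.
    static double[] RgbToCmyk(double[] rgb)
    {
        double k = 1 - Math.Max(rgb[0], Math.Max(rgb[1], rgb[2]));
        if (k >= 1) return new[] { 0.0, 0.0, 0.0, 1.0 };
        return new[]
        {
            (1 - rgb[0] - k) / (1 - k),
            (1 - rgb[1] - k) / (1 - k),
            (1 - rgb[2] - k) / (1 - k),
            k
        };
    }

    // Composite one pixel of a transparency group over the backdrop.
    static double[] Composite(double[] backdropCmyk, double[] srcRgb)
    {
        double[] backdrop = CmykToRgb(backdropCmyk); // into the blend space
        var blended = new double[3];
        for (int i = 0; i < 3; i++)                  // Multiply blend mode
            blended[i] = backdrop[i] * srcRgb[i];
        return RgbToCmyk(blended);                   // back to device space
    }

    static void Main()
    {
        double[] outCmyk = Composite(
            new[] { 0.1, 0.5, 0.0, 0.1 },  // backdrop, already in DeviceCMYK
            new[] { 0.9, 0.8, 0.2 });      // sRGB graphic inside the group
        Console.WriteLine(string.Join(", ", outCmyk));
    }
}
```

Replace either conversion with a cruder approximation and the blended result changes, which is exactly why color managing after the RIP can’t reproduce the intended appearance.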
Even so, back in the early 2000s a PSP could avoid the need to upgrade software, process control and operator training by insisting that their customers supplied files in a format such as PDF/X-1a, which prohibited device-independent colors and transparency. But making a PDF/X-1a file from a rich design in a creative application requires a number of compromises affecting graphical elements that were originally specified in device independent colors, or which use transparency. Both risk degrading the quality of the final piece.
These days insisting on PDF/X-1a to avoid the need for color management in the RIP is no longer widely acceptable to customers [3]. And therefore neither is color managing after the RIPping is complete.
Your check-list is therefore:
Don’t use PDF/X-1a. In fact don’t use PDF/X-3 either. Both are two decades old. PDF/X-3 may allow device-independent colors, but both of them force the creation tool to flatten transparency, discard layers and apply a number of other potentially damaging procedures. It’s over ten years since PDF/X-4 was published, and it’s currently the best balance between capability and not getting too far ahead of common usage in print workflows.
If you’re a print service provider, converter, industrial printing manufacturer or digital press vendor, don’t cut corners; use a workflow that applies the color management in or before the RIP [4]. It shouldn’t be hard; all the leading RIP vendors (and therefore leading press vendors, because they license technology from the RIP vendors) supply suitable systems.
About the author
Martin Bailey, consultant at Global Graphics Software, is a former CTO of the company and currently the primary UK expert to the ISO committees maintaining and developing PDF and PDF/VT. He is the author of Full Speed Ahead: how to make variable data PDF files that won’t slow your digital press, a guide offering advice to anyone with a stake in variable data printing including graphic designers, print buyers, composition developers and users.
Notes
1 – If a spot color will be emulated using process inks on press, then using a CMYK alternate gives predictable color numbers in those inks, but is less good at producing a predictable color appearance. Using Lab for the alternate color space often leads to unpredictable color numbers on each separation, but a more predictable color appearance on the print. There is a benefit to both models, but when it comes to paying for printing the color appearance usually wins!
2 – If run-lengths on digital are long enough to justify warehousing a variety of inks, and changing inks on inkjet presses, it can be reasonable to stay with job-specific ink sets, especially if it’s difficult or expensive to make usable inks for all of CMY and K. As an example, the best Magenta ink for inkjet printing on ceramics is made with gold. Any move to using digital presses for short-run printing more or less requires a fixed ink set to allow for quick job changes without excessive waste, and that typically means CMYK+.
3 – and I say that as the chair of the committees that developed PDF/X for many years, first in CGATS and then in ISO.
4 – There are situations where applying color management in a color server before the RIP can be useful, especially when multiple presses will be used in parallel. This approach brings its own challenges around handling spot colors in the job that will be emulated on press, but can produce excellent results when used with care.
I’m excited to announce that Mako 6.6 will support OpenXPS (OXPS) as both an input and output!
But Mako already has lots of inputs and outputs – so why is this one so exciting?
Mako in Printing
Mako, in my opinion, is the premier SDK for meeting challenges where performance, reliability and accuracy are required. This is particularly so for many printing use-cases.
These use-cases can include handling multiple page description languages (PDLs) from upstream workflows, including PDF, PostScript, XPS, PCL, IJPDS and PPML. The only common PDL missing, until now, was OXPS.
The exciting part is that adding this final PDL puts Mako in a unique and enviable position: it now supports all common print PDLs with a single, simple, consistent and clean interface and document object model (DOM).
Formats supported by Mako.
Mako benefits
Consolidates the multiple SDKs you would otherwise need, one for each PDL.
Reduces the developer time lost to learning multiple libraries and interfaces.
Offers a single point of contact and support for all your common PDLs from a trusted company with years of industry experience.
If you’re interested in hearing more, please get in touch with us and see how we can help with your software challenges.
If you fancy taking a look at some code samples to see what Mako can do, feel free to head over to our developer documentation.
Vast amounts of data can slow down your digital press, resulting in wasted product or delayed delivery times.
In this post, Global Graphics Software’s product manager for Mako, David Stevenson, explores the challenge of printing large amounts of raster data and the options available to ensure that data doesn’t slow down your digital press:
The print market is increasingly moving to digital: digital printing offers many advantages over conventional printing, the most valuable of these being mass-produced, personalized output that makes every copy of the print different. At the same time digital presses are getting faster and wider, and printing at higher resolutions, with extended gamut color becoming commonplace.
To drive the new class of digital presses, you need vast amounts of raster data every second. Traditional print software designed for non-digital workflows attempts to handle this vast amount of data by RIPping ahead, storing rasters to physical disks. However, the rate at which data is needed for the digital press causes disk-based workflows to rapidly hit the data rate boundary. This is the point where even state-of-the-art storage devices are simply too small and slow for the huge data rates required to keep the press running at full rated speed.
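To get a feel for the numbers, here’s a back-of-envelope calculation with hypothetical press parameters (illustrative values, not any particular device):

```csharp
using System;

// Back-of-envelope raster data rate for a hypothetical press.
class DataRateSketch
{
    static void Main()
    {
        double widthIn = 13.4, lengthIn = 19.7;           // roughly a B2 sheet
        int dpi = 1200, channels = 4, bitsPerChannel = 1; // 1-bit CMYK halftones
        double pagesPerMinute = 100;

        double pixels = (widthIn * dpi) * (lengthIn * dpi);
        double bytesPerPage = pixels * channels * bitsPerChannel / 8;
        double bytesPerSec = bytesPerPage * pagesPerMinute / 60;

        Console.WriteLine($"Per page:  {bytesPerPage / 1e6:F0} MB");  // ~190 MB
        Console.WriteLine($"Sustained: {bytesPerSec / 1e6:F0} MB/s"); // ~320 MB/s
        // Higher resolutions, wider webs, more colorants or contone data all
        // multiply these figures, and the rasters must be written and then
        // read back, which is what pushes disk-based spooling past its limits.
    }
}
```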
This is leading to a new generation of RIPs that ditch the disk and RIP print jobs on the fly, directly to the press electronics. As well as supporting much higher data rates, this approach has the benefit of no wasted time RIPping ahead.
As you can imagine, RIPping directly to the press electronics presents some engineering challenges. For example, two print jobs may look identical before and after printing, but the way in which they have been made can cause them to RIP at very different rates. Additionally, your RIP of choice may have optimizations that make jobs constructed in certain ways RIP faster or slower. This variability in print job and RIP time is a bit like playing a game of Russian roulette: if you lose, the press is starved of data, causing wasted product or delivery delays.
With a RIP driving your press directly you need to have confidence that all jobs submitted can be printed at full speed. That means you need the performance of each print job and page to be predictable, and you need to know what speed the press can be run at for a given combination of print job, RIP and PC.
Knowing this, you may choose to slow down the press so that your RIP can keep up. Better still, keep the press running at full speed by streamlining the job with knowledge of optimizations that work well with your choice of RIP.
Or you could choose to return the print job to the generator with a report explaining what is causing it to run slowly. Armed with this information, the generator can rebuild the job, optimized for your chosen RIP.
Whatever you choose, you will need predictable print jobs to drive your press at the highest speed to maximize your digital press’s productivity.
We added support for native processing of PDF files to the Harlequin RIP® way back in 1997. When we started working on that support we somewhat naïvely assumed that we should implement the written specification and that all would be well. But it was obvious from the very first tests that we performed that we would need to do something a bit more intelligent because a large proportion of PDF files that had been supplied as suitable for production printing did not actually comply with the specification.
Launching a product that would reject many PDF files that could be accepted by other RIPs would be commercial suicide. The fact that, at the time, those other RIPs needed the PDF to be transformed into PostScript first didn’t change the business case.
Unfortunately a lot of PDF files are still being made that don’t comply with the standard, so over the almost quarter of a century since first launching PDF support we’ve developed our own rules around what Harlequin should do with non-compliant files, and invested many decades of effort in test and development to accept non-compliant files from major applications.
The first rule that we put in place is that Harlequin is not a validation tool. A Harlequin RIP user will have PDF files to be printed, and Harlequin should render those files as long as we can have a high level of confidence that the pages will be rendered as expected.
In other words, we treat both compliance with the PDF standard and compatibility with major PDF creation tools as equally important … and supporting Harlequin RIP users in running profitable businesses as even more so!
The second rule is that silently rendering something incorrectly can be very bad, increasing costs if a reprint is required and causing a print buyer/brand to lose faith in a print service provider/converter. So Harlequin is written to require a reasonably high level of confidence that it can render the file as expected. If a developer opening up the internals of a PDF file couldn’t be sure how it was intended to be rendered then Harlequin should not be rendering it.
We’d expect most other vendors of PDF readers to apply similar logic in their products, and the evidence we’ve seen supports that expectation. The differences between how each product treats invalid PDF are the result of differences in the primary goal of each product, and therefore in the cost of output that is viewed as incorrect.
Consider a PDF viewer for general office or home use, running on a mobile device or PC. The business case for that viewer implies that the most important thing it has to do is to show as much of the information from a PDF file as possible, preferably without worrying the user with warnings or errors. It’s not usually going to be hugely important or costly if the formatting is slightly wrong. You could think of this as being at the opposite end of the scale from a RIP for production printing. In other words, the required level of confidence in accurately rendering the appearance of the page is much lower for the on-screen viewer.
You may have noticed that my description of a viewer could easily be applied to Adobe Reader or Acrobat Pro. Acrobat is also not written primarily as a validation tool, and it’s definitely not appropriate to assume that a PDF file complies with the standard just because it opens in Acrobat. Remember the Acrobat business case, and imagine what the average office user’s response would be if it would not open a significant proportion of PDF files because it flagged them as invalid!
Martin Bailey, Distinguished Technologist, Global Graphics Software, is currently the primary UK expert to the ISO committees maintaining and developing PDF and PDF/VT and is the author of Full Speed Ahead: how to make variable data PDF files that won’t slow your digital press, a new guide offering advice to anyone with a stake in variable data printing including graphic designers, print buyers, composition developers and users.
Martin Bailey, distinguished technologist at Global Graphics Software, joins hosts Deborah Corn and Pat McGrew in this special episode of The Print Report. Together they discuss the innovative methods used at Global Graphics to solve complex and common printing problems using software.
Martin highlights the award-winning PrintFlat™ technology, which gives smooth, uniform tints and accurate tone reproduction via a simple ‘fingerprint’ calibration of the screening process, and the value of creating optimized PDF files so they don’t slow down your digital press and disrupt the production process.
Over the past year, Microsoft has been working hard to bring its new Cloud printing service, Universal Print, to general availability.
As a part of Universal Print, developers get access to a set of Graph APIs that allows analysis and modification of print job payload data. This feature enables a few different scenarios, including adding security (e.g. redactions or watermarks) to a Universal Print-based workflow.
As a curious engineer, I wanted to see how different it would be for an independent software vendor (ISV) to use our Mako™ Core SDK to modify a print job flowing through Universal Print, instead of the more traditional route of a virtual printer driver.
Thinking about the workflow a little more, I came up with the following design:
Using the Mako SDK to modify documents in Universal Print.
In the design above, we can see the end-user’s Word document gets printed to a virtual printer. This allows the ISV to be notified of the job, and modify it accordingly using Mako. Once modified, the ISV then redirects the job on to the physical printer for printing.
There are a couple of nice things about this design:
Firstly, it uses the Graph API to access Universal Print, which is an easy-to-use and well-documented REST API. Secondly, since the functionality is accessed via a REST API, our ISV service can be written in whichever Mako-supported language we like.
I chose C# to make best use of the C# Graph API SDK.
Developing the service
There are five main steps to developing the service:
Handle print job notifications
Download the print job payload
Modify the payload
Upload the payload
Redirect to the target printer
Handle print job notifications
To be notified of print jobs in Universal Print, you can use Graph’s change notifications. These allow you to subscribe to notifications, which are delivered by calling a webhook you provide.
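As a sketch, using the Microsoft Graph .NET SDK’s fluent request builders (the printer ID, webhook URL and client state are placeholders):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Graph;

// Subscribe to job-created events on our virtual printer. Graph validates
// the webhook, then POSTs change notifications to it as jobs arrive.
async Task<Subscription> SubscribeToJobsAsync(
    GraphServiceClient graphClient, string printerId)
{
    var subscription = new Subscription
    {
        ChangeType = "created",
        Resource = $"print/printers/{printerId}/jobs",
        NotificationUrl = "https://contoso.example/api/printJobWebhook",
        ExpirationDateTime = DateTimeOffset.UtcNow.AddDays(2),
        ClientState = "shared-secret-checked-by-the-webhook"
    };
    return await graphClient.Subscriptions.Request().AddAsync(subscription);
}
```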
Download the print job payload
Once we have notification that a print job has been sent to our virtual printer, we can start downloading its payload.
Here we use the appropriate Graph APIs, along with standard Graph authentication to access the print job’s document. We then simply save it to disk.
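In sketch form, with the SDK’s request builders and an illustrative file name:

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.Graph;

// Fetch the job's document payload and save it to a temporary file.
async Task<string> DownloadPayloadAsync(GraphServiceClient graphClient,
    string printerId, string jobId, string documentId)
{
    // GET /print/printers/{id}/jobs/{id}/documents/{id}/$value
    using Stream content = await graphClient.Print.Printers[printerId]
        .Jobs[jobId].Documents[documentId].Content.Request().GetAsync();

    string path = Path.Combine(Path.GetTempPath(), jobId + ".pdf");
    using FileStream file = File.Create(path);
    await content.CopyToAsync(file);
    return path;
}
```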
Modify the payload
Once we have the document on disk (although Mako can work with streams too!), we can open the document and modify it using Mako’s document object model (DOM).
Alternatively, Mako can also convert from one page description language (PDL) to another. This is useful in situations where your destination printer doesn’t support the input PDL.
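The Mako step might look like the sketch below. The shape follows Mako’s published samples, but treat the exact names as illustrative:

```csharp
using JawsMako;   // the Mako .NET bindings

// Open the downloaded payload into Mako's DOM, modify it, write it back.
void ModifyPayload(string inputPath, string outputPath)
{
    IJawsMako mako = IJawsMako.create();
    IJawsMako.enableAllFeatures(mako);

    // Parse the payload into a document assembly.
    IDocumentAssembly assembly = IPDFInput.create(mako).open(inputPath);

    // ... walk the DOM here to redact, watermark, etc. ...

    // Write the result out; swapping in a different output class here is
    // how you would convert to another PDL instead.
    IPDFOutput.create(mako).writeAssembly(assembly, outputPath);
}
```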
Upload the payload
Uploading the modified document is straightforward. This time we use the Graph API to create an upload session, and use the WebClient class to put the document back into the original print job.
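A sketch of that sequence (a single-shot upload is shown; a large document would be uploaded in Content-Range chunks):

```csharp
using System.IO;
using System.Net;
using System.Threading.Tasks;
using Microsoft.Graph;

// Create an upload session, then PUT the modified bytes back into the job.
async Task UploadPayloadAsync(GraphServiceClient graphClient,
    string printerId, string jobId, string documentId, string path)
{
    byte[] bytes = File.ReadAllBytes(path);
    var props = new PrintDocumentUploadProperties
    {
        ContentType = "application/pdf",
        DocumentName = Path.GetFileName(path),
        Size = bytes.Length
    };
    UploadSession session = await graphClient.Print.Printers[printerId]
        .Jobs[jobId].Documents[documentId]
        .CreateUploadSession(props).Request().PostAsync();

    using var client = new WebClient();
    client.Headers.Add("Content-Range",
        $"bytes 0-{bytes.Length - 1}/{bytes.Length}");
    client.UploadData(session.UploadUrl, "PUT", bytes);
}
```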
Redirect to the target printer
And finally, after the print job has been updated, we can redirect it onto another printer. This redirection also automatically completes the print job and task.
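Again in sketch form, with the Graph request builders assumed:

```csharp
using System.Threading.Tasks;
using Microsoft.Graph;

// Redirect the job from the virtual printer to the physical one. This also
// completes the job and its task on the virtual printer.
async Task RedirectJobAsync(GraphServiceClient graphClient,
    string virtualPrinterId, string jobId, string physicalPrinterId)
{
    // POST /print/printers/{id}/jobs/{id}/redirect
    await graphClient.Print.Printers[virtualPrinterId].Jobs[jobId]
        .Redirect(physicalPrinterId, null).Request().PostAsync();
}
```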
Alternatively, if we want to be a little more green, we could always send the document to OneDrive, SharePoint, or another document management system. After doing so, we then complete the print job and its associated task.
See it in action
We actually coded this demo live in our last Mako webinar, showing an implementation where an ISV wants to automatically redact content.
Access the code directly at our GitHub repository or watch the webinar recording below:
Try it out
We’re keen to talk to you about your Universal Print project and see how we can help. Contact us here.
In this week’s post, Global Graphics Software’s principal engineer, Andrew Cardy, explores the structure tagging API in the Mako™ Core SDK. This feature is particularly valuable as it allows developers to create PDFs that can be read by screen readers, such as Jaws®. This helps blind or partially sighted users unlock the content of a PDF. Here, Andy explains how to use the structure tagging API in Mako to tag both text and images:
What can we Structure Tag?
Before I begin, let’s talk about PDF: PDF is a fixed-format document. This means you can create it once, and it should (aside from font embedding or rendering issues) look identical across machines. That’s great for ensuring your document looks right on your users’ devices, but the downside is that some PDF generators can create fixed content that is ordered in a way that is hard for screen readers to understand.
Luckily Mako also has an API for page layout analysis. This API will analyze the structure of the PDF, and using various heuristics and techniques, will group the text on the page together in horizontal runs and vertical columns. It’ll then assign a page reading order.
The structure tagging API makes it easy to take the layout analysis of the page and use it to tag and structure the text. So, while we’re tagging the images, we’ll tag the text too!
Mako’s Structure Tagging API
Mako’s structure tagging API is simple to use. Our architect has done a great job of taking the complicated PDF specification and distilling it down to a number of useful APIs.
Let’s take a look at how we use them to structure a document from start to finish:
Setting the Structure Root
Setting the structure root is straightforward. Firstly, we create an instance of IStructure and set it in the document.
Next we create an instance of a Document level IStructureElement and add that to the structure element we’ve just created.
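Something along these lines, where the type names match those mentioned above but the exact signatures are my assumptions:

```csharp
using JawsMako;

// Hypothetical sketch: create the structure tree, attach it to the
// document, then add the top-level Document-class element.
void SetStructureRoot(IJawsMako mako, IDocument document)
{
    IStructure structure = IStructure.create(mako);
    document.setStructure(structure);

    IStructureElement docElement = IStructureElement.create(mako, "Document");
    structure.appendChild(docElement);
}
```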
One thing that I learnt the hard way is that Acrobat will not allow child structures to be read by a screen reader if their parent has alternative (alt) text set.
Add alternate text only to tags that don’t have child tags. Adding alternate text to a parent tag prevents a screen reader from reading any of that tag’s child tags. (Adobe Acrobat help)
Originally, when I started this research project, I had alt text set at the document level, which caused all sorts of confusion when my text and image alt text wasn’t read!
Using the Layout Analysis API
Now that we’ve structured the document, it’s time to structure the text. Firstly, we want to understand the layout of the page. To do this, we use IPageLayout. We give it a reference to the page we want to analyze, then perform the analysis on it.
Now the page has been analyzed, it’s easy to iterate through the columns and nodes in the page layout data.
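In sketch form, again with assumed signatures:

```csharp
// Hypothetical sketch: run layout analysis, then walk its columns and
// nodes in the derived reading order.
void AnalyzeLayout(IJawsMako mako, IPage page)
{
    IPageLayout layout = IPageLayout.create(mako, page);
    layout.analyze();   // group text into runs and columns

    foreach (var column in layout.getColumns())
        foreach (var node in column.getNodes())
        {
            // Each node maps back to text runs in the Mako DOM.
        }
}
```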
Tagging the text
Once we’ve found our text runs, we can tag our text with a span IStructureElement. We append this new structure element to the parent paragraph created while we were iterating over the columns.
We also tag the original source Mako DOM node against the new span element.
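For example (assumed signatures again):

```csharp
// Hypothetical sketch: tag one text run as a Span under its paragraph
// and associate it with the source Mako DOM node.
void TagTextRun(IJawsMako mako, IStructureElement paragraph, IDOMNode run)
{
    IStructureElement span = IStructureElement.create(mako, "Span");
    paragraph.appendChild(span);
    span.tagNode(run);   // tie the element to the content it describes
}
```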
Tagging the images
Once the text is structured, we can structure the images too.
Earlier, I used Microsoft’s Vision API to take the images in the document and give us a textual description of them. We can now take this textual description and add it to a figure IStructureElement.
Again, we make sure we tag the new figure structure element against the original source Mako DOM image.
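A sketch of that step, with assumed signatures:

```csharp
// Hypothetical sketch: wrap an image in a Figure element carrying the
// Vision-generated caption as its alt text.
void TagImage(IJawsMako mako, IStructureElement parent,
              IDOMNode image, string visionCaption)
{
    IStructureElement figure = IStructureElement.create(mako, "Figure");
    figure.setAlternateText(visionCaption);  // what screen readers speak
    parent.appendChild(figure);
    figure.tagNode(image);
}
```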
Notifying Readers of the Structure Tags
The last thing we need to do is set some metadata in the document’s assembly, which is straightforward enough. Setting this metadata helps viewers to identify that this document is structure tagged.
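In sketch form; the helper name is illustrative rather than a documented Mako call, but the effect corresponds to setting MarkInfo /Marked true in the PDF catalog:

```csharp
// Hypothetical sketch: flag the assembly as structure tagged so that
// viewers and screen readers know to look for the structure tree.
void MarkAsStructureTagged(IDocumentAssembly assembly)
{
    assembly.setTagged(true);   // illustrative helper name
}
```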
Putting it all Together
So, after we’ve automated all of that, we now get a nice structure, which, on the whole, flows well and reads well.
We can see this structure in Acrobat DC:
And if we take a look at one of the images, we can see our figure structure now has some alternative text, generated by Microsoft’s Vision API. The alt text will be read by screen readers.
Figure properties dialogue
It’s not perfect, but a look at how Adobe handles text selection illustrates just how hard this is to get right. In the image below, I’ve attempted to select the whole of the title text in Acrobat.
Layout analysis is hard to get right!
In comparison, our page layout analysis seems to have gotten these particular text runs spot on. But how does it fare with the Jaws screen reader? Let’s see it in action!
So, it does a pretty good job. The images have captions automatically generated, there is a sense of flow and most of the content reads in the correct order. Not bad.
Printing accessible PDFs
You may be aware that the Mako SDK comes with a sample virtual printer driver that can print to PDF. I wanted to take this one step further and add our accessibility structure tagging tool to the printer driver. This way, we can print from any application, and the output will be accessible PDF!
In the video below I’ve found an interesting blog post that I want to save and read offline. If I were partially sighted, that might be somewhat problematic, as the PDF printer in Windows 10 doesn’t provide structure tagging, meaning that the PDF I create may not work so well with my combination of PDF reader and screen reader. However, if I throw in my Mako-based structure and image tagger, we’ll see if it can help!
Of course, your mileage will vary and the quality of the tagging will depend on the quality and complexity of the source document. The thing is, structural analysis is a hard problem, made harder sometimes by poorly behaving generators, but that’s another topic in itself. Until all PDF files are created perfectly, we’ll do the best we can!
Want to give it a go?
Please do get in touch if you’re interested in having a play with the technology, or just want to chat about it.
Andy Cardy, Principal Engineer at Global Graphics Software
Andy Cardy is a Principal Engineer for Global Graphics Software and a Developer Advocate for the Mako SDK.
Find out more about Mako’s features in Andy’s coding demo:
In this session Andy uses coding in C++ and C# to show you three complex tasks that you can easily achieve with Mako:
• PDF rendering – visualizing PDF for screen and print (15 mins)
• Using Mako in Cloud-ready frameworks (15 mins)
• Analyzing and editing with the Mako Document Object Model (15 mins)
Somebody asked me recently what the difference is between PDF/X-1a (first published in 2001) and PDF/X-4 (published in 2010). I thought this might also be interesting to a wider audience.
Both are ISO standards that deliberately restrict some aspects of what you can put into a PDF file in order to make them more reliable for delivery of jobs for professional print. But the two standards address different needs/desires:
PDF/X-1a content must all have been transformed into CMYK (optionally plus spots) already, so it puts all of the responsibility for correct separation and transparency handling onto the creation side. When it hits Harlequin, all the RIP can do is to lock in the correct overprint settings and (optionally) pre-flight against the intended print output condition, as encapsulated in the output intent.
On the other hand, PDF/X-4 supports quite a few things that PDF/X-1a does not, including:
Device-independent color spaces
Live PDF transparency
Optional content (layers)
That moves a lot more of the responsibility downstream into the RIP, because it can carry unseparated colors and transparency.
Back when the earlier PDF/X standards were designed, transparency handling was a bit inconsistent between RIPs, and color management was an inaccessible black art to many print service providers, which is why PDF/X-1a was popular with many printers. That’s not been the case for a decade now, so PDF/X-4 will work just fine.
In other words, the choice is more down to where the participants in the exchange want the responsibility to sit than to anything technical any more.
In addition, PDF/X-4 is much more easily transitioned between different presses, and even between completely different print technologies, such as moving a job from offset or flexo to a digital press. And it can also be used much more easily for digital delivery alongside using it for print. For many people that’s enough to push the balance firmly in favour of PDF/X-4.
Martin Bailey is Global Graphics’ Chief Technology Officer. He’s currently the primary UK expert to the ISO committees maintaining and developing PDF and PDF/VT and is the author of Full Speed Ahead: how to make variable data PDF files that won’t slow your digital press, a new guide offering advice to anyone with a stake in variable data printing including graphic designers, print buyers, composition developers and users.
Would you fill your brand-new Ferrari with cheap and inferior fuel? It’s a question posed by Martin Bailey in his new guide: ‘Full Speed Ahead – how to make variable data PDF files that won’t slow your digital press’. It’s an analogy he uses to explain the importance of putting well-constructed PDF files through your DFE so that they don’t disrupt the printing process and the DFE runs as efficiently as possible.
Here are Martin’s recommendations to help you avoid making jobs that delay the printing process, so you can be assured that you’ll meet your print deadline reliably and achieve your printing goals effectively:
If you’re printing work that doesn’t make use of variable data on a digital press, you’re probably producing short runs. If you weren’t, you’d be more likely to choose an offset or flexo press instead. But “short runs” very rarely means a single copy.
Let’s assume that you’re printing, for example, 50 copies of a series of booklets, or of an imposed form of labels. In this case the DFE on your digital press only needs to RIP each PDF page once.
To continue the example, let’s assume that you’re printing on a press that can produce 100 pages per minute (or the equivalent area for labels etc.). If all your jobs are 50 copies long, you therefore need to RIP jobs at only two pages per minute (100ppm/50 copies). Once a job is fully RIPped and the copies are running on press you have plenty of time to get the next job prepared before the current one clears the press.
But VDP jobs place additional demands on the processing power available in a DFE because most pages are different from every other page and must therefore each be RIPped separately. If you’re printing at 100 pages per minute the DFE must RIP at 100 pages per minute: fifty times faster than it needed to for the fifty-copy static job above.
Each minor inefficiency in a VDP job will often only add between a few milliseconds and a second or two to the processing of each page, but those times need to be multiplied up by the number of pages in the job. An individual delay of half a second on every page of a 10,000-page job adds up to around an hour and a half for the whole job. For a really big job of a million pages it only takes an extra tenth of a second per page to add 24 hours to the total processing time.
If you’re printing at 120ppm the DFE must process each page in an average of half a second or less to keep up with the press. The fastest continuous feed inkjet presses at the time of writing are capable of printing an area equivalent to over 13,000 pages per minute, which means each page must be processed in just over 4ms. It doesn’t take much of a slow-down to start impacting throughput.
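Here’s that arithmetic in runnable form:

```csharp
using System;

// The per-page time budget at a given press speed, and what a small
// per-page slowdown costs on a long job.
class TimeBudget
{
    static void Main()
    {
        Console.WriteLine($"{60 / 120.0:F3} s/page at 120 ppm");          // 0.500 s
        Console.WriteLine($"{60000 / 13000.0:F1} ms/page at 13,000 ppm"); // ~4.6 ms

        // Half a second of extra work per page on a 10,000-page job:
        Console.WriteLine($"{0.5 * 10000 / 3600:F1} hours of delay");     // ~1.4 h
    }
}
```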
If you’re involved in this kind of calculation you may find the digital press data rate calculator useful: Download the data rate calculator
Global Graphics Software’s digital press data rate calculator.
This extra load has led DFE builders to develop a variety of optimizations. Most of these work by reducing the amount of data that must be RIPped. But even with those optimizations a complex VDP job typically requires significantly more processing power than a ‘static’ job where every copy is the same.
The amount of processing required to prepare a PDF file for print in a DFE can vary hugely without affecting the visual appearance of the printed result, depending on how it is constructed.
Poorly constructed PDF files can therefore impact a print service provider in one or both of two ways:
Output is not achieved at engine speed, reducing return on investment (ROI) because fewer jobs can be produced per shift. In extreme cases when printing on a continuous feed (web-fed) press a failure to deliver rasters for printing fast enough can also lead to media wastage and may confuse in-line or near-line finishing.
In order to compensate for jobs that take longer to process in the DFE, press vendors often provide more hardware to expand the processing capability, increasing the bill of materials, and therefore the capital cost of the DFE.
Once the press is installed and running the production manager will usually calculate and tune their understanding of how many jobs of what type can be printed in a shift. Customer services representatives work to ensure that customer expectations are set appropriately, and the company falls into a regular pattern. Most jobs are quoted on an acceptable turn-round time and delivered on schedule.
But a job that RIPs more slowly than expected breaks that pattern. Depending on how many presses the print site has, and how they are connected to one or more DFEs, this may lead to a press sitting idle, waiting for pages to print. It may also delay other jobs in the queue or mean that they must be moved to a different press. Moving jobs at the last minute may not be easy if the presses available are not identical: different presses may require different print streams or imposition, and there may be limitations on stock availability, etc.
Many jobs have tight deadlines on delivery schedules; they may need to be ready for a specific time, with penalties for late delivery, or the potential for reduced return for the marketing department behind a direct mail campaign. Brand owners may be ordering labels or cartons on a just in time (JIT) plan, and there may be consequences for late delivery ranging from an annoyed customer to penalty clauses being invoked.
Those problems for the print service provider percolate upstream to brand owners and other groups commissioning digital print. Producing an inefficiently constructed PDF file will increase the risk that your job will not be delivered by the expected time.
You shouldn’t take these recommendations as suggesting that the DFE on any press is inadequate. Think of it as the equivalent of a suggestion that you should not fill your brand-new Ferrari with cheap and inferior fuel!
The above is an excerpt from Full Speed Ahead: how to make variable data PDF files that won’t slow your digital press. The guide is designed to help you avoid making jobs that disrupt and delay the printing process, increasing the probability that everyone involved in delivering the printed piece hits their deadlines reliably and achieves their goals effectively.
About the author:
Martin Bailey, CTO, Global Graphics Software
Martin Bailey first joined what has now become Global Graphics Software in the early nineties, and has worked in customer support, development and product management for the Harlequin RIP as well as becoming the company’s Chief Technology Officer. During that time he’s also been actively involved in a number of print-related standards activities, including chairing CIP4, CGATS and the ISO PDF/X committee. He’s currently the primary UK expert to the ISO committees maintaining and developing PDF and PDF/VT.