This month WhatTheyThink’s third Technology Outlook takes place. It’s a series of webinars and interviews that highlight new innovations from industry analysts and thought leaders.
As part of the Thought Leadership Video series, David Zwang of WhatTheyThink chatted to Mako™ product manager David Stevenson about how, by drawing on our vast experience in RIPs and rendering, we’ve created a high-performance framework for print inspection systems.
David introduces SmartQI™, a quality inspection system available with SmartDFE™. Designed especially for print, SmartQI is a camera-based, real-time quality inspection system that works on the same real-time raster streams. It is especially valuable as the use of variable data increases and press speeds and resolutions continue to grow, making it essential to inspect the print for defects before it comes off the press and goes into finishing and converting.
Project manager Jason Hook shows how we’ve implemented OPC UA into our solutions in this film: How to transform your inkjet business with Industry 4.0 and OPC UA. Jason demonstrates how we track performance metrics like pressure levels across an entire production line using our PC and Ink Delivery System, all while uploading it securely onto cloud servers using AWS IoT SiteWise and Azure IoT.
It’s my first time at the Industrial Print Integration Conference; I’ve packed my suitcase and my passport is raring to go, glad to be out of the drawer after two years of hibernation. I’m looking forward to meeting new people in the industry and learning about the new developments in technology.
If you’re interested in integrating print into your smart factory, join me for my talk at 12.30pm on Wednesday, 18 May 2022. I’ll be explaining how to integrate inkjet into the Smart Factory with the help of fully automated software that connects to the rest of the production system via Industry 4.0 technologies like OPC UA, the open standard for exchanging information for industrial communication. I’ll also explain how you can build in capability so you can deliver everything from mass production to mass customization at the same cost as current print systems.
And if you want to know more, then come along to our booth A7. We’re going to be showing a demo of our SmartDFE™, which I think is pretty impressive. You can watch a snippet here:
SmartDFE is our smart software that drives an inkjet printing subsystem in a factory setting, including printers running at ultra-high speeds of up to 300m per minute! The demo shows what happens when you combine high-tech SCADA systems (Supervisory Control and Data Acquisition) with OPC UA to monitor and control virtual print subsystems via iPads. You can control them both inside and outside of your plant location, so management always knows what’s happening without ever having to be physically present.
Ian Bolton is the product manager for SmartDFE™ and Direct™. He works with printer OEMs to break down barriers that might be preventing them from reaching their digital printer’s full potential. A software engineer at heart, Ian has a masters in Advanced Computer Science from the University of Manchester, and over 15 years’ experience developing software for both start-ups and large corporations, such as Arm and Sony Ericsson. He draws on this technical background and his passion for problem-solving to define and drive features and requirements for innovative software solutions for digital print.
Be the first to receive our blog posts, news updates and product news. Why not subscribe to our monthly newsletter? Subscribe here
I’ve spoken to a lot of people about variable data printing and about what that means when a vendor builds a press or printing unit that must be able to handle variable data jobs at high speed. Over the years I’ve mentally defined several categories that such people fall into, based on the first question they ask:
1. “Variable data; what’s that?”
2. “Why should I care about variable data? Nobody uses that in my industry.”
3. “I’ve heard of variable data and I think I need it, but what does that actually mean?”
4. “How do I turn on variable data optimization in Harlequin?”
And yes, unless you’re in a very specialised industry, people probably are using variable data. As an example, five years ago pundits in the label printing industry were saying that nobody was using variable data on labels. Now it’s a rapidly growing area as brands realize how useful it can be and as the convergence of coding and marking with primary consumer graphics continues. If you’re a vendor designing and building a digital press your users will expect you to support variable data when you bring it to market; don’t get stuck with a DFE (digital front end) that can’t drive your shiny new press at engine speed when they try to print a variable job.
If you’re in category 3 then you’re in luck, we’ve just published a video to explain how variable data jobs are typically put together, and then how the DFE for a digital press deconstructs the pages again in order to optimize processing speed. It also talks about why that’s so important, especially as presses get faster every year. Watch it here:
And if you’re in category 4, drop us a line at firstname.lastname@example.org, or, if you’re already a Harlequin OEM partner, our support team are ready and waiting for your questions.
Martin Bailey, distinguished technologist at Global Graphics Software, chats to Marcus Timson of FuturePrint in this episode of the FuturePrint podcast. They discuss Martin’s role in making standards work better for print so businesses can compete on the attributes that matter, and software’s role in solving complex problems and reducing manual touchpoints in workflows.
They also discuss the evolution of software in line with hardware developments over the last few years, managing the increasing amounts of data needed to meet the demands of today’s print quality, the role of Global Graphics Software in key market segments and more.
Listen in here:
In this post, Global Graphics Software’s product manager for Mako, David Stevenson, explores the challenge of printing large amounts of raster data and the options available to ensure that data doesn’t slow down your digital press:
The print market is increasingly moving to digital: digital printing offers many advantages over conventional printing, the most valuable of these being mass-produced, personalized output, where every copy of the print can be different. At the same time digital presses are getting faster and wider, and printing at higher resolutions, with extended gamut color becoming commonplace.
To drive the new class of digital presses, you need vast amounts of raster data every second. Traditional print software designed for non-digital workflows attempts to handle this vast amount of data by RIPping ahead, storing rasters to physical disks. However, the rate at which data is needed for the digital press causes disk-based workflows to rapidly hit the data rate boundary. This is the point where even state-of-the-art storage devices are simply too small and slow for the huge data rates required to keep the press running at full rated speed.
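To get a feel for the data rates involved, here is a rough back-of-the-envelope calculation in Python. The press parameters are illustrative assumptions for a hypothetical single-pass press, not figures for any particular machine:

```python
def raster_data_rate_bytes_per_s(width_in, speed_m_per_min, dpi,
                                 colorants, bits_per_pixel):
    """Approximate raster data rate needed to keep a single-pass press fed."""
    inches_per_s = speed_m_per_min / 0.0254 / 60       # web speed in inches/s
    pixels_per_s = (width_in * dpi) * (inches_per_s * dpi)
    return pixels_per_s * colorants * bits_per_pixel / 8

# A hypothetical 13-inch web at 300 m/min, 1200 dpi, CMYK, 1-bit halftone output:
rate = raster_data_rate_bytes_per_s(13, 300, 1200, 4, 1)
print(f"{rate / 1e9:.1f} GB/s")  # → 1.8 GB/s, sustained for as long as the press runs
```

At rates like these, even fast SSDs struggle to keep up once you account for writing the rasters to disk and reading them back, which is why streaming directly to the press electronics becomes attractive.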
This is leading to a new generation of RIPs that ditch the disk and RIP print jobs on the fly, directly to the press electronics. As well as supporting much higher data rates, this approach also eliminates the time wasted RIPping ahead.
As you can imagine, RIPping directly to the press electronics presents some engineering challenges. For example, two print jobs may look identical before and after printing, but the way in which they have been made can cause them to RIP at very different rates. Additionally, your RIP of choice can have optimizations that make jobs constructed in certain ways RIP faster or slower. This variability in print job and RIP time is a bit like playing a game of Russian roulette: if you lose, the press will be starved of data, causing wasted product or delivery delays.
With a RIP driving your press directly you need to have confidence that all jobs submitted can be printed at full speed. That means you need the performance of each print job and page to be predictable, and you need to know what speed the press can be run at for a given combination of print job, RIP and PC.
Knowing this, you may choose to slow down the press so that your RIP can keep up. Better still, keep the press running at full speed by streamlining the job with knowledge of optimizations that work well with your choice of RIP.
Or you could choose to return the print job to the generator with a report explaining what is causing it to run slowly. Armed with this information, the generator can rebuild the job, optimized for your chosen RIP.
Whatever you choose, you will need predictable print jobs to drive your press at the highest speed to maximize your digital press’s productivity.
The impact of poorly constructed PDF files on production schedules has increased as press resolution, colorant count, speed, and width rise, greatly increasing the data rate required to drive them.
This increase in data places additional demands on the processing power of the DFE and risks slowing down the digital press: a delay of half a second on every page of a 10,000-page job adds around an hour and a half to the whole job, while for a job of a million pages an extra tenth of a second per page adds more than 24 hours to the total processing time.
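The arithmetic behind those figures is easy to check; a one-line Python helper makes the scaling explicit:

```python
def total_delay_hours(per_page_delay_s, pages):
    """Total extra processing time, in hours, for a uniform per-page delay."""
    return per_page_delay_s * pages / 3600

print(total_delay_hours(0.5, 10_000))     # ≈ 1.39 hours: about an hour and a half
print(total_delay_hours(0.1, 1_000_000))  # ≈ 27.8 hours: more than a full day
```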
In his guide: Full Speed Ahead – How to make variable data PDF files that won’t slow your digital press, Martin Bailey, distinguished technologist at Global Graphics Software, gives some technical recommendations as to how you can make sure that you don’t make a PDF file for a variable data job that will bring a digital press to its knees. It provides objective information for graphic designers, print buyers, production managers, press operators, owners of PSPs, and developers of digital presses and composition tools.
Martin has just released a second edition of the guide, and in this film he talks about the updates with Digimarc‘s marketing communications manager, Rob Fay. Digimarc provides additional functionality to Global Graphics’ software platforms and is a sponsor of the guide.
Topics in the interview include:
The guide’s purpose and target audiences
Background on updates related to the standards PDF/X-6 and PDF/VT-3
Differences in the various VDP applications: traceability, trackability and personalization
Recent improvements in DFE (digital front end) technology that are enabling more advanced VDP
At the beginning of 2020, in what we thought was the run-up to drupa, Global Graphics published a new guide called “Full Speed Ahead: How to make variable data PDF files that won’t slow your digital press”. It was designed to complement the recommendations available for how to maximize sales from direct mail campaigns, with technical recommendations as to how you can make sure that you don’t make a PDF file for a variable data job that will bring a digital press to its knees. It also carried those lessons into additional print sectors that are rapidly adopting variable data, such as labels, packaging, product decoration and industrial print, with hints around using variable data in unusual ways for premium jobs at premium margins.
Well, as they say, a lot has happened since then.
And some of that has been positive. At the end of 2020 several new International Standards were published, including a “dated revision” (a 2nd edition) of the PDF 2.0 standard, a new standard for submission of PDF files for production printing: PDF/X-6, and a new standard for submission of variable data PDF files for printing: PDF/VT-3.
We’ve therefore updated Full Speed Ahead to cover the new standards. And at the same time we’ve taken the opportunity to extend and clarify some of the rest of the text in response to feedback on the first edition.
So now you can keep up to date, just by downloading the new edition!
There are two completely different forms of variable data handling in the Harlequin RIP®, and I’m sometimes asked why we’ve duplicated functionality like that. The simple answer is that it’s not duplication; they each address very different use cases.
But those use cases are not, as many people then expect, “white paper workflows” vs imprinting, i.e. whether the whole design including both re-used and single-use elements is printed together vs adding variable data on top of a pre-printed substrate. Both Harlequin VariData™ and the “Dynamic overlays” that we added in Harlequin version 12 can address both of those requirements.
Incidentally, I put “white paper workflows” in quotes because that’s what it’s called in the transactional and direct mail spaces … but very similar approaches are used for variable data printing in other sectors, which may not be printing on anything even vaguely resembling paper!
The two use cases revolve around who has the data, when they have it, whether a job should start printing before all the data is available, and whether there are any requirements to restrict access to the data.
When most people in the transactional, direct mail or graphic arts print sectors think about variable data it tends to be in the form of a fully resolved document representing all of the many variations of one of a collection of pages, combining one or more static ‘backgrounds’ with single-use variable data elements, and maybe some re-used elements from which one is selected for each recipient. In other words, each page in the PDF file is meant to be printed as-is, and will be suitable for a single copy. That whole, fully resolved file is then sent to the press. It may be sent from one division of the printing company to the press room, or even from some other company entirely. The same approach is used for some VDP jobs in labels, folding carton, corrugated, signage and some industrial sectors.
This is the model for which optimized PostScript, and then optimized PDF, PDF/VT (and AFP) were designed. It’s a robust workflow that allows for significant amounts of proofing and process control at multiple stages. And it also allows very rich graphical variability. It’s the workflow for which Harlequin VariData was designed, to maximize the throughput of variable data files through the Digital Front End (DFE) and onto the press.
But in some cases the variable data is not available when the job starts printing. Indeed, the print ‘job’ may run for months in situations such as packaging lines or ID card printing. That can be managed by simply sending a whole series of optimized PDF files, each one representing a few thousand or a couple of million instances of the job to be printed. But in some cases that’s simply not convenient or efficient enough.
In other workflows the data to be printed must be selected based on the item to be printed on, and that’s only known at the very last minute … or second … before the item is printed. A rather extreme example of this is in printing ID cards. In some workflows a chip or magnetic strip is programmed first. When the card is to be printed it’s obviously important that the printed information matches the data on the chip or magnetic strip, so the printing unit reads the data from one of those, uses that to select the data to be printed, and prints it … sometimes all in less than a second. In this case you could use a fully resolved optimized PDF file and select the appropriate page from it based on identifying the next product to be printed on; I know there are companies doing exactly that. But it gets cumbersome when the selection time is very short and the number of items to be printed is very large. And you also need to have all of the data available up-front, so a more dynamic solution is better.
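That last-second selection step can be sketched in a few lines of Python. Everything here is hypothetical: a real system would query a secure database from the printing unit rather than hold records in an in-memory dict:

```python
def select_print_data(chip_id, records):
    """Fetch the record to print, keyed on the ID just read from the chip."""
    try:
        return records[chip_id]
    except KeyError:
        # Never print a card whose data can't be matched to its chip.
        raise KeyError(f"no print data for card {chip_id!r}") from None

# Hypothetical records keyed by chip ID:
records = {"4F2A": {"name": "A. Example", "expiry": "2027-01"}}
print(select_print_data("4F2A", records)["name"])  # → A. Example
```

The point of the sketch is the shape of the workflow: the key arrives at the very last moment, so the data has to be fetched on demand rather than pre-resolved into a million-page PDF.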
In other cases there is a need to ensure that the data to be printed is held completely securely, which usually leads to a demand that there is never a complete set of that data in a standard file format outside of the DFE for the printer itself. ID cards are an example of this use case as well.
Moving away from very quick or secure responses, we’ve been observing an interesting trend in the labels and packaging market as digital presses are used more widely. Printing the graphics of the design itself and adding the kind of data that’s historically been applied using coding and marking are converging. Information like serial numbers, batch numbers, competition QR Codes, even sell-by and use-by dates are being printed at the same time as the main graphics. Add in the growing demands for traceability, for less of a need for warehousing and for more print on demand of a larger number of different versions, and there can be some real benefits in moving all of the print process quite close to the bottling/filling/labelling lines. But it doesn’t make sense to make a million page PDF file just so you can change the batch number every 42 cartons because that’s what fits on a pallet.
These use cases are why we added Dynamic overlays to Harlequin. Locations on the output where marks should be added are specified, along with the type of mark (text, barcodes and images are the most commonly used). For most marks a data source must be specified; by default we support reading from CSV files or automated counters, but an interface to a database can easily be added for specific integrations. And, of course, formatting information such as font, color, barcode symbology etc must be provided.
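To illustrate the data-source idea (this is a sketch only, not the actual Harlequin Dynamic overlays API), here is how the two most common sources might combine: a CSV column supplying per-item serial numbers, and an automated counter deriving a batch number that changes every few items, such as every 42 cartons on a pallet:

```python
import csv
import io

def overlay_marks(csv_text, batch_size):
    """Pair each item's CSV-sourced serial with a counter-driven batch number."""
    marks = []
    for i, row in enumerate(csv.DictReader(io.StringIO(csv_text))):
        marks.append({
            "serial": row["serial"],       # from the CSV data source
            "batch": i // batch_size + 1,  # automated counter, bumps per batch
        })
    return marks

data = "serial\nSN001\nSN002\nSN003\n"
print(overlay_marks(data, batch_size=2))
# → [{'serial': 'SN001', 'batch': 1}, {'serial': 'SN002', 'batch': 1},
#    {'serial': 'SN003', 'batch': 2}]
```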
The ‘overlay’ in “Dynamic overlays” gives away one of the limitations of this approach, in that the variable data added using it must be on top of all the static data. But we normally recommend that you do that for fully resolved VDP submissions using something like optimized PDF anyway because it makes processing much more efficient; there aren’t that many situations where the desired visual appearance requires variable graphics behind static ones. It’s also much less of a constraint than you’d have with imprinting, where you can only knock objects like white text out of a colored fill in the static background if you are using a white ink!
For what it’s worth, Dynamic overlays also work well for imprinting or for cases where you need to print graphics of middling complexity at high quality but where there are no static graphics at all (existing coding & marking systems can handle simple graphics at low to medium quality very well). In other words, there’s no need to have a background to print the variable data as a foreground over.
So now you know why we’ve doubled up on variable data functionality!
It goes without saying that the final quality of your printed piece is paramount. But when speed and time constraints are also critical, what can you do to ensure your files fly through the press and still reward you with the quality you expect? Optimizing the images in the piece is a good place to start, but if you’re creating a job with variable data, where there are thousands of pages to print, each with a different image, how do you know what a sensible effective resolution is for those images that will ensure your PDF file doesn’t trip up the print production workflow?
In his latest guide, Full Speed Ahead, how to make variable data PDF files that won’t slow your digital press, Martin Bailey, CTO at Global Graphics Software, advises not to ask the print workflow to do more work than necessary if that doesn’t change the look of the printed result. Images are commonly re-used within a VDP job, so being able to process each image only once and then re-use the result many times can significantly increase the throughput of the digital front end. On the other hand, some images are personal to every recipient and must therefore be processed for every single recipient, slowing the workflow down.
Martin offers the following tips for setting appropriate effective photographic image resolutions:
Aim for 300 ppi; however, the most appropriate image resolution for digital presses varies, depending on the printing heads, media and screening used.
Bear in mind image content; soft and dreamy images can sometimes be placed at a lower resolution.
Don’t use an effective image resolution higher than the output resolution for photographic images, as this is rarely productive. The example in Fig 1 below illustrates how easy it is to use an image at several times the required resolution:
Fig 1: The same 12-megapixel (4000 x 3000 px) image placed on the page at three different sizes. Source: Full Speed Ahead, how to make variable data PDF files that won’t slow your digital press.
When an image is placed onto a page the original resolution of that image is largely irrelevant; what matters is how many pixels there are per inch on the final printed page. As an example, if you have a photograph from a 12 MP compact camera it’ll probably be approximately 3000 pixels by 4000 pixels. If that’s placed on the page as 3 inches by 4 inches (7.5 x 10cm) the effective resolution is about 1000ppi (4000/4). That would usually be about three times as much as you need in each dimension.
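That calculation is simple enough to express in a couple of lines of Python, using the figures from the example above:

```python
def effective_ppi(pixels, printed_inches):
    """Resolution of an image as placed on the page, in pixels per inch."""
    return pixels / printed_inches

print(effective_ppi(4000, 4))   # → 1000.0 ppi, vs. the ~300 ppi typically needed
print(effective_ppi(4000, 12))  # placed three times larger: ~333 ppi, much closer
```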
A variety of tools are available for optimizing image resolution, and some composition tools can also do this automatically. To find out more about the best effective resolution for your images, and to pick up more tips for optimizing your images for variable data printing, download the guide:
Would you fill your brand-new Ferrari with cheap and inferior fuel? It’s a question posed by Martin Bailey in his new guide: ‘Full Speed Ahead – how to make variable data PDF files that won’t slow your digital press’. It’s an analogy he uses to explain the importance of putting well-constructed PDF files through your DFE so that they don’t disrupt the printing process and the DFE runs as efficiently as possible.
Here are Martin’s recommendations to help you avoid making jobs that delay the printing process, so you can be assured that you’ll meet your print deadline reliably and achieve your printing goals effectively:
If you’re printing work that doesn’t make use of variable data on a digital press, you’re probably producing short runs. If you weren’t, you’d be more likely to choose an offset or flexo press instead. But “short runs” very rarely means a single copy.
Let’s assume that you’re printing, for example, 50 copies of a series of booklets, or of an imposed form of labels. In this case the DFE on your digital press only needs to RIP each PDF page once.
To continue the example, let’s assume that you’re printing on a press that can produce 100 pages per minute (or the equivalent area for labels etc.). If all your jobs are 50 copies long, you therefore need to RIP jobs at only two pages per minute (100ppm/50 copies). Once a job is fully RIPped and the copies are running on press you have plenty of time to get the next job prepared before the current one clears the press.
But VDP jobs place additional demands on the processing power available in a DFE because most pages are different to every other page and must therefore each be RIPped separately. If you’re printing at 100 pages per minute the DFE must RIP at 100 pages per minute: fifty times faster than it needed to process fifty copies of a static job.
Each minor inefficiency in a VDP job will often only add between a few milliseconds and a second or two to the processing of each page, but those times need to be multiplied up by the number of pages in the job. An individual delay of half a second on every page of a 10,000-page job adds up to around an hour and a half for the whole job. For a really big job of a million pages it only takes an extra tenth of a second per page to add more than 24 hours to the total processing time.
If you’re printing at 120ppm the DFE must process each page in an average of half a second or less to keep up with the press. The fastest continuous feed inkjet presses at the time of writing are capable of printing an area equivalent to over 13,000 pages per minute, which means each page must be processed in just over 4ms. It doesn’t take much of a slow-down to start impacting throughput.
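The throughput arithmetic in the last few paragraphs comes down to two one-liners:

```python
def required_rip_rate_ppm(press_ppm, copies):
    """Pages per minute the RIP must sustain for a job of identical copies."""
    return press_ppm / copies

def page_budget_ms(pages_per_minute):
    """Time available to process each unique page, in milliseconds."""
    return 60_000 / pages_per_minute

print(required_rip_rate_ppm(100, 50))    # → 2.0 ppm for a 50-copy static job
print(page_budget_ms(120))               # → 500.0 ms per page for VDP at 120 ppm
print(round(page_budget_ms(13_000), 1))  # → 4.6 ms at 13,000 pages per minute
```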
This extra load has led DFE builders to develop a variety of optimizations. Most of these work by reducing the amount of data that must be RIPped. But even with those optimizations a complex VDP job typically requires significantly more processing power than a ‘static’ job where every copy is the same.
The amount of processing required to prepare a PDF file for print in a DFE can vary hugely without affecting the visual appearance of the printed result, depending on how it is constructed.
Poorly constructed PDF files can therefore impact a print service provider in one or both of two ways:
Output is not achieved at engine speed, reducing return on investment (ROI) because fewer jobs can be produced per shift. In extreme cases when printing on a continuous feed (web-fed) press a failure to deliver rasters for printing fast enough can also lead to media wastage and may confuse in-line or near-line finishing.
In order to compensate for jobs that take longer to process in the DFE, press vendors often provide more hardware to expand the processing capability, increasing the bill of materials, and therefore the capital cost of the DFE.
Once the press is installed and running the production manager will usually calculate and tune their understanding of how many jobs of what type can be printed in a shift. Customer services representatives work to ensure that customer expectations are set appropriately, and the company falls into a regular pattern. Most jobs are quoted on an acceptable turn-round time and delivered on schedule.
When a job takes longer to process than expected, this may lead to a press sitting idle, waiting for pages to print, depending on how many presses the print site has and how they are connected to one or more DFEs. It may also delay other jobs in the queue or mean that they must be moved to a different press. Moving jobs at the last minute may not be easy if the presses available are not identical: different presses may require different print streams or imposition, and there may be limitations on stock availability, etc.
Many jobs have tight deadlines on delivery schedules; they may need to be ready for a specific time, with penalties for late delivery, or the potential for reduced return for the marketing department behind a direct mail campaign. Brand owners may be ordering labels or cartons on a just in time (JIT) plan, and there may be consequences for late delivery ranging from an annoyed customer to penalty clauses being invoked.
Those problems for the print service provider percolate upstream to brand owners and other groups commissioning digital print. Producing an inefficiently constructed PDF file will increase the risk that your job will not be delivered by the expected time.
You shouldn’t take these recommendations as suggesting that the DFE on any press is inadequate. Think of it as the equivalent of a suggestion that you should not fill your brand-new Ferrari with cheap and inferior fuel!
The above is an excerpt from Full Speed Ahead: how to make variable data PDF files that won’t slow your digital press. The guide is designed to help you avoid making jobs that disrupt and delay the printing process, increasing the probability that everyone involved in delivering the printed piece hits their deadlines reliably and achieves their goals effectively.
About the author:
Martin Bailey first joined what has now become Global Graphics Software in the early nineties, and has worked in customer support, development and product management for the Harlequin RIP as well as becoming the company’s Chief Technology Officer. During that time he’s also been actively involved in a number of print-related standards activities, including chairing CIP4, CGATS and the ISO PDF/X committee. He’s currently the primary UK expert to the ISO committees maintaining and developing PDF and PDF/VT.