Martin Bailey, distinguished technologist at Global Graphics Software, chats to Marcus Timson of FuturePrint in this episode of the FuturePrint podcast. They discuss Martin’s role in making standards work better for print so businesses can compete on the attributes that matter, and software’s role in solving complex problems and reducing manual touchpoints in workflows.
They also discuss the evolution of software in line with hardware developments over the last few years, managing the increasing amounts of data needed to meet the demands of today’s print quality, the role of Global Graphics Software in key market segments and more.
Listen in here:
Following his post last week about the speed and scalability of your raster image processor (RIP), in this film Martin Bailey, distinguished technologist at Global Graphics Software, explains how to determine how much RIP power you need to drive a digital press by calculating the press data rate. It’s the best way of working out how much RIP power the Digital Front End (DFE) needs to drive the press at engine speed and to ensure profitable printing.
If you’re building a digital press, or a digital front end (DFE) to drive a digital press, you want it to be as efficient and cost-effective as possible. As the trend towards printing short runs and personalization grows, especially in combination with increasing resolutions, more colorants and faster presses, the speed and scalability of the raster image processor (RIP) inside that DFE are key factors in determining profitability.
For your digital press to print at speed you’ll need to understand the amount of data it requires, i.e. its data rate. In the film Martin explains how different stages in data handling need different data rates, and how to integrate the appropriate number of RIP cores to generate that much data without inflating the bill of materials for the DFE hardware.
Martin also explains that your next press may have a much higher data rate requirement than your current one.
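As a rough illustration of the kind of calculation Martin describes, the raw (uncompressed) raster data rate can be estimated from press width, resolution, speed, number of colorants and bits per pixel. The press figures below and the formula’s simplifications (no compression, no protocol overhead) are illustrative assumptions, not Harlequin specifics:

```python
def press_data_rate_gbps(width_in, resolution_dpi, speed_m_per_min,
                         colorants, bits_per_pixel):
    """Estimate the raw raster data rate in gigabits per second."""
    pixels_across = width_in * resolution_dpi
    inches_per_sec = speed_m_per_min / 60 / 0.0254  # metres/min -> inches/sec
    rows_per_sec = inches_per_sec * resolution_dpi
    bits_per_sec = pixels_across * rows_per_sec * colorants * bits_per_pixel
    return bits_per_sec / 1e9

# Hypothetical press: 42-inch web, 1200 x 1200 dpi, 200 m/min, CMYK, 2 bits/pixel
rate = press_data_rate_gbps(42, 1200, 200, 4, 2)
print(f"{rate:.1f} Gbit/s")  # prints "63.5 Gbit/s"
```

Even modest-sounding press specifications quickly reach tens of gigabits per second of raster data, which is why the RIP count and the data path both matter.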
In this latest case study, Tom Bouman, worldwide workflow product marketing manager at HP PageWide Industrial, explains why the Harlequin RIP®, with its track record for high quality and speed and its ability to scale, was the obvious choice to use at the heart of its digital front end when the division was set up to develop presses for the industrial inkjet market back in 2008.
Today, the Harlequin RIP Core is at the heart of all the PageWide T-series presses, driving the HP Production Elite Print Server digital front end. Presses range from 20-inch for commercial printing, through to the large 110-inch (T1100 series) printers for high-volume corrugated pre-print, offering a truly scalable solution that sets the standard in performance and quality.
Product manager Paul Dormer gives an insight into why the Harlequin Core is the leading print OEMs’ first choice to power digital inkjet presses in this new film.
A raster image processor (RIP), Harlequin Core converts text, object and image data from file formats such as PDF, TIFF™ or JPEG, into a raster that a printing device can understand. It’s at the heart of the digital front end that drives the press.
Proven in the field for decades, Harlequin Core is known for its incredible speed and is the fastest RIP engine available. It is used in every print sector, from industrial inkjet such as textiles and flooring, to labels and packaging, commercial, transactional, and newspapers.
As presses become wider, faster, and higher resolution, handling vast amounts of data, the Harlequin Core remains the RIP of choice for many leading brands including HP, Mimaki, Mutoh, Roland, Durst, Agfa and Delphax.
We’ve now been shipping the Harlequin Host Renderer™ (HHR) to OEMs and partners for over a decade, driving digital printers and presses. Back then Harlequin was our only substantial software component for use in digital front ends (DFEs), and we just came up with a name that seemed to describe what it did.
Since then our technology set has grown to include a component that can be used upstream of the RIP to create, modify, analyze and visualize page description languages like PDF: that’s Mako™. And we’ve also added a high-performance halftone screening engine: ScreenPro™.
We’ve positioned these components as a “Core” range and their names reflect this: “Mako Core” and “ScreenPro Core”. We also added higher level components in our Direct™ range, for printer OEMs who don’t want to dig into the complexities of system engineering, or who want to get to market faster.
Harlequin is already part of Harlequin Direct™, and we’re now amending the name of the SDK to bring it into line with our other “Core” component technologies. The diagram below shows how those various offerings fit together for a wide range of digital printer and press vendors.
So, farewell “Harlequin Host Renderer”, hello “Harlequin Core”.
We added support for native processing of PDF files to the Harlequin RIP® way back in 1997. When we started working on that support we somewhat naïvely assumed that we should implement the written specification and that all would be well. But it was obvious from the very first tests that we performed that we would need to do something a bit more intelligent because a large proportion of PDF files that had been supplied as suitable for production printing did not actually comply with the specification.
Launching a product that would reject many PDF files that could be accepted by other RIPs would be commercial suicide. The fact that, at the time, those other RIPs needed the PDF to be transformed into PostScript first didn’t change the business case.
Unfortunately a lot of PDF files are still being made that don’t comply with the standard, so in the almost quarter of a century since we first launched PDF support we’ve developed our own rules around what Harlequin should do with non-compliant files, and invested many decades of test and development effort in accepting non-compliant files from major applications.
The first rule that we put in place is that Harlequin is not a validation tool. A Harlequin RIP user will have PDF files to be printed, and Harlequin should render those files as long as we can have a high level of confidence that the pages will be rendered as expected.
In other words, we treat both compliance with the PDF standard and compatibility with major PDF creation tools as equally important … and supporting Harlequin RIP users in running profitable businesses as even more so!
The second rule is that silently rendering something incorrectly can be very bad, increasing costs if a reprint is required and causing a print buyer/brand to lose faith in a print service provider/converter. So Harlequin is written to require a reasonably high level of confidence that it can render the file as expected. If a developer opening up the internals of a PDF file couldn’t be sure how it was intended to be rendered then Harlequin should not be rendering it.
We’d expect most other vendors of PDF readers to apply similar logic in their products, and the evidence we’ve seen supports that expectation. The differences in how each product treats invalid PDF are the result of differences in the primary goal of each product, and therefore in the cost attached to output that is viewed as incorrect.
Consider a PDF viewer for general office or home use, running on a mobile device or PC. The business case for that viewer implies that the most important thing it has to do is to show as much of the information from a PDF file as possible, preferably without worrying the user with warnings or errors. It’s not usually going to be hugely important or costly if the formatting is slightly wrong. You could think of this as being at the opposite end of the scale from a RIP for production printing. In other words, the required level of confidence in accurately rendering the appearance of the page is much lower for the on-screen viewer.
You may have noticed that my description of a viewer could easily be applied to Adobe Reader or Acrobat Pro. Acrobat is also not written primarily as a validation tool, and it’s definitely not appropriate to assume that a PDF file complies with the standard just because it opens in Acrobat. Remember the Acrobat business case, and imagine what the average office user’s response would be if it would not open a significant proportion of PDF files because it flagged them as invalid!
With fewer design limitations, a faster turnaround, no minimum run length and higher margins (not to mention reduced power and water consumption, and less pollution), it’s not surprising that the digitally printed textile market is growing. Inkjet has certainly made textile design and printing much more flexible than screen printing – and that goes for everybody involved, from the designer through the printing company, to the buyer.
But printing textiles on inkjet doesn’t come without its challenges: as a software provider focusing on print quality issues, we often hear from print service providers who can only digitally print two thirds of the jobs they receive because they would not be paid for the quality they could achieve on the others.
Shade or color variation is a common problem. It’s not new in digital printing (it’s always been an issue for screen-printed and dyed textiles as well) and is usually managed by providing a shade band, which printer operators use to check allowable color variation between pieces.
But, unlike screen-printing or dyeing, the color variation on an inkjet press can be visible over a small distance, just a few centimeters, and this results in visible bands across the output. Banding describes features that tend to be 1 – 10 cm across and they’re often caused by variation of inkjet pressure or voltage differences within the head, which typically results in a frown or smile shape. We also see a certain amount of manufacturing variation between heads so that one may print lighter or darker than the head next to it in a print bar. Some types of heads can also wear in use, which can result in less regular banding that can change over time. This means that large areas which should be flat color may not be.
When such a variation occurs it can greatly complicate a lot of post-print steps, especially if you need to put more than one piece of textile together, either in sewing or use (such as a pair of curtains). If that’s the case, then a significant difference may be unacceptable and your printing rejected by your buyer. Ultimately this leads to print service providers rejecting jobs, because they know their digital press can’t handle printing those tricky flat tints or smooth tones.
What can you do about it? The first thing many companies do to try to overcome this banding is to adjust the voltage to the inkjet head, but this is often time-consuming and expensive because it requires an expert technician. A better alternative is to make the correction in software, which is faster and more cost-effective: it can be automated and acts at a much finer granularity, so printing is more accurate. There’s no need to mess with controls that could damage the press, and printing companies can make corrections themselves without the vendor sending a technician on-site.
Our solution at Global Graphics Software for improving banding is PrintFlat™. It corrects tonality to hide banding based on measurements from the press. It adjusts every nozzle separately and doesn’t need a specialist engineer to make press adjustments. PrintFlat can be integrated into different digital front ends using a variety of RIPs, including Caldera and ColorGATE, not to mention our own Harlequin RIP®.
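The general idea of per-nozzle tonal correction can be sketched in a few lines. This is a simplified, hypothetical illustration, not the PrintFlat implementation: each nozzle gets its own lookup table (LUT), built from its measured tone response, that remaps the requested tone to the drive value that actually prints at the intended density:

```python
def build_lut(response, levels=256):
    """response(v) -> printed density for one nozzle, assumed increasing.
    Returns a LUT such that response(lut[t]) is as close as possible to t."""
    lut = []
    for target in range(levels):
        # Find the input value whose printed density is closest to the target
        best = min(range(levels), key=lambda v: abs(response(v) - target))
        lut.append(best)
    return lut

def correct_row(row, luts):
    """Apply each column's (i.e. nozzle's) LUT to one raster row of tones."""
    return [luts[col % len(luts)][tone] for col, tone in enumerate(row)]

# Example: nozzle 0 prints 10% too dark, nozzle 1 is accurate.
luts = [build_lut(lambda v: 1.1 * v), build_lut(lambda v: float(v))]
print(correct_row([110, 110], luts))  # → [100, 110]: nozzle 0 is driven lighter
```

A production system would of course build the response curves from densitometer or scanner measurements of a printed test chart rather than from analytic functions.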
Over the years of working with many press manufacturers we’ve discovered that many technical issues and solutions are common across different sectors, including transactional, wide-format, commercial, labels and packaging, and industrial, including ceramics, wall coverings, flooring and of course textiles. That means that we already have years of experience in correcting for banding. Using PrintFlat in your press means print service providers can now take on those jobs they would normally reject.
To learn more about how to eliminate shade and color variation when printing on an inkjet press, listen to Global Graphics Software’s CTO Martin Bailey’s talk for FESPA 2020:
There are two completely different forms of variable data handling in the Harlequin RIP®, and I’m sometimes asked why we’ve duplicated functionality like that. The simple answer is that it’s not duplication; they each address very different use cases.
But those use cases are not, as many people then expect, “white paper workflows” vs imprinting, i.e. whether the whole design including both re-used and single-use elements is printed together vs adding variable data on top of a pre-printed substrate. Both Harlequin VariData™ and the “Dynamic overlays” that we added in Harlequin version 12 can address both of those requirements.
Incidentally, I put “white paper workflows” in quotes because that’s what it’s called in the transactional and direct mail spaces … but very similar approaches are used for variable data printing in other sectors, which may not be printing on anything even vaguely resembling paper!
The two use cases revolve around who has the data, when they have it, whether a job should start printing before all the data is available, and whether there are any requirements to restrict access to the data.
When most people in the transactional, direct mail or graphic arts print sectors think about variable data it tends to be in the form of a fully resolved document representing all of the many variations of one of a collection of pages, combining one or more static ‘backgrounds’ with single-use variable data elements, and maybe some re-used elements from which one is selected for each recipient. In other words, each page in the PDF file is meant to be printed as-is, and will be suitable for a single copy. That whole, fully resolved file is then sent to the press. It may be sent from one division of the printing company to the press room, or even from some other company entirely. The same approach is used for some VDP jobs in labels, folding carton, corrugated, signage and some industrial sectors.
This is the model for which optimized PostScript, and then optimized PDF, PDF/VT (and AFP) were designed. It’s a robust workflow that allows for significant amounts of proofing and process control at multiple stages. And it also allows very rich graphical variability. It’s the workflow for which Harlequin VariData was designed, to maximize the throughput of variable data files through the Digital Front End (DFE) and onto the press.
But in some cases the variable data is not available when the job starts printing. Indeed, the print ‘job’ may run for months in situations such as packaging lines or ID card printing. That can be managed by simply sending a whole series of optimized PDF files, each one representing a few thousand or a couple of million instances of the job to be printed. But in some cases that’s simply not convenient or efficient enough.
In other workflows the data to be printed must be selected based on the item to be printed on, and that’s only known at the very last minute … or second … before the item is printed. A rather extreme example of this is in printing ID cards. In some workflows a chip or magnetic strip is programmed first. When the card is to be printed it’s obviously important that the printed information matches the data on the chip or magnetic strip, so the printing unit reads the data from one of those, uses that to select the data to be printed, and prints it … sometimes all in less than a second. In this case you could use a fully resolved optimized PDF file and select the appropriate page from it based on identifying the next product to be printed on; I know there are companies doing exactly that. But it gets cumbersome when the selection time is very short and the number of items to be printed is very large. And you also need to have all of the data available up-front, so a more dynamic solution is better.
In other cases there is a need to ensure that the data to be printed is held completely securely, which usually leads to a demand that there is never a complete set of that data in a standard file format outside of the DFE for the printer itself. ID cards are an example of this use case as well.
Moving away from very quick or secure responses, we’ve been observing an interesting trend in the labels and packaging market as digital presses are used more widely: printing the graphics of the design itself and adding the kind of data that has historically been applied by coding and marking systems are converging. Information like serial numbers, batch numbers, competition QR codes, even sell-by and use-by dates is being printed at the same time as the main graphics. Add in the growing demands for traceability, reduced warehousing and print on demand of a larger number of different versions, and there can be real benefits in moving the whole print process close to the bottling/filling/labelling lines. But it doesn’t make sense to make a million-page PDF file just so you can change the batch number every 42 cartons because that’s what fits on a pallet.
These use cases are why we added Dynamic overlays to Harlequin. Locations on the output where marks should be added are specified, along with the type of mark (text, barcodes and images are the most commonly used). For most marks a data source must be specified; by default we support reading from CSV files or automated counters, but an interface to a database can easily be added for specific integrations. And, of course, formatting information such as font, color, barcode symbology etc must be provided.
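The data-source side of that model can be sketched as follows. The structure below is purely illustrative (it is not the Harlequin configuration interface): each printed item pulls its variable text from a CSV row or from an automated counter, and formatting details would be supplied alongside:

```python
import csv
import io
import itertools

# Hypothetical variable-data sources: a CSV file of recipient records
# and an automated per-item counter.
csv_data = io.StringIO("serial,use_by\nA001,2025-06-01\nA002,2025-06-01\n")
rows = csv.DictReader(csv_data)
counter = itertools.count(1)

marks = []
for row in rows:
    marks.append({
        "item": next(counter),  # counter-driven source, e.g. a batch sequence
        "text": f"SN {row['serial']} / use by {row['use_by']}",
    })
print(marks)
```

Swapping the CSV reader for a database query is the kind of integration-specific extension mentioned above; the per-item loop is unchanged.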
The ‘overlay’ in “Dynamic overlays” gives away one of the limitations of this approach: the variable data added using it must sit on top of all the static data. But we normally recommend that you do that anyway for fully resolved VDP submissions using something like optimized PDF, because it makes processing much more efficient; there aren’t many situations where the desired visual appearance requires variable graphics behind static ones. And it’s much less of a constraint than you’d have with imprinting, where you can only knock objects like white text out of a colored fill in the static background if you are using a white ink!
For what it’s worth, Dynamic overlays also work well for imprinting or for cases where you need to print graphics of middling complexity at high quality but where there are no static graphics at all (existing coding & marking systems can handle simple graphics at low to medium quality very well). In other words, there’s no need to have a background to print the variable data as a foreground over.
So now you know why we’ve doubled up on variable data functionality!
Ever wondered what a raster image processor or RIP does? And what does RIPping a file mean? Read on to learn more about the phases of a RIP, the engine at the heart of your Digital Front End (DFE).
The RIP converts text and image data from many file formats including PDF, TIFF™ or JPEG into a format that a printing device such as an inkjet printhead, toner marking engine or laser platesetter can understand. The process of RIPping a job requires several steps to be performed in order, regardless of the page description language (such as PDF) that it’s submitted in. Even image file formats such as TIFF, JPEG or PNG usually need to be RIPped, to convert them into the correct color space, at the right resolution and with the right halftone screening for the press.
Interpreting: The file to be RIPped is read and decoded into an internal database of graphical elements that must be placed on the output. Each may be an image, a character of text (including font, size, color etc), a fill or stroke etc. This database is referred to as a display list.
Compositing: The display list is pre-processed to apply any live transparency that may be in the job. This phase is only required for any graphics in formats that support live transparency, such as PDF; it’s not required for PostScript language jobs or for TIFF and JPEG images because those cannot include live transparency.
Rendering: The display list is processed to convert every graphical element into the appropriate pattern of pixels to form the output raster. The term ‘rendering’ is sometimes used specifically for this part of the overall processing, and sometimes to describe the whole RIPping process.
Output: The raster produced by the rendering process is sent to the marking engine in the output device, whether it’s exposing a plate, a drum for marking with toner, an inkjet head or any other technology.
Sometimes this step is completely decoupled from the RIP, perhaps because plate images are stored as TIFF files and then sent to a CTP platesetter later, or because a near-line or off-line RIP is used for a digital press. In other environments the output stage is tightly coupled with rendering, and the output raster is kept in memory instead of writing it to disk to increase speed.
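The ordering of the phases above can be sketched in code. This is not Harlequin’s internals, just the sequence of stages with stand-in logic (the function bodies are placeholders):

```python
def interpret(job):
    # Parse the job into a display list of graphical elements
    return [f"element from {job['format']} job"]

def composite(display_list):
    # Flatten any live transparency (possible in PDF, not TIFF/JPEG/PostScript)
    return display_list + ["(transparency flattened)"]

def render(display_list):
    # Convert every element into pixels in the output raster
    return f"raster of {len(display_list)} element(s)"

def rip(job):
    dl = interpret(job)
    if job.get("has_live_transparency"):
        dl = composite(dl)          # skipped for formats without transparency
    return render(dl)               # the output stage then sends this raster on

print(rip({"format": "pdf", "has_live_transparency": True}))
```

Note how compositing is conditional, matching the description above: a TIFF or PostScript job goes straight from interpretation to rendering.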
RIPping often includes a number of additional processes; in the Harlequin RIP® for example:
In-RIP imposition is performed during interpretation
Color management (Harlequin ColorPro®) and calibration are applied during interpretation or compositing, depending on configuration and job content
Screening can be applied during rendering. Alternatively it can be done after the Harlequin RIP has delivered unscreened raster data; this is valuable if screening is being applied using Global Graphics’ ScreenPro™ and PrintFlat™ technologies, for example.
A DFE for a high-speed press will typically be using multiple RIPs running in parallel to ensure that they can deliver data fast enough. File formats that can hold multiple pages in a single file, such as PDF, are split so that some pages go to each RIP, load-balancing to ensure that all RIPs are kept busy. For very large presses huge single pages or images may also be split into multiple tiles and those tiles sent to different RIPs to maximize throughput.
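A naive version of that page splitting is simple round-robin assignment, sketched below. Real load balancing is smarter than this (pages vary enormously in complexity, so a DFE typically feeds the next page to whichever RIP becomes free), but the sketch shows the basic fan-out:

```python
import itertools

def split_pages(num_pages, num_rips):
    """Assign pages 1..num_pages to RIP instances in round-robin order."""
    assignments = {r: [] for r in range(num_rips)}
    rips = itertools.cycle(range(num_rips))
    for page in range(1, num_pages + 1):
        assignments[next(rips)].append(page)
    return assignments

print(split_pages(7, 3))  # → {0: [1, 4, 7], 1: [2, 5], 2: [3, 6]}
```

Tiling a single huge page works the same way, with raster bands or tiles taking the place of pages in the assignment.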
To find out more about the Harlequin RIP, download the latest brochure here.