This week, Mako™ product manager David Stevenson explains vector flattening:
When you print PDF content, or save or export it to a format that does not support transparency, it must undergo a process called flattening. Flattening usually involves rasterizing the areas of the page that are subject to transparency effects, which can mean replacing sharp-edged vector content with a jagged-edged bitmap. Increasing the resolution of the rasterization mitigates that problem, but it takes longer and adds to file size.
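To put rough numbers on that trade-off, here is a quick back-of-envelope sketch. The page size, channel count and bit depth are illustrative assumptions, not Mako specifics: the point is simply that doubling the flattening resolution quadruples the uncompressed raster data.

```python
# Illustrative sketch: uncompressed raster size grows with the square of
# the flattening resolution. All parameters here are hypothetical.

def raster_megabytes(width_in, height_in, dpi, channels=4, bits=8):
    """Uncompressed size of a rasterized region, in megabytes."""
    pixels = (width_in * dpi) * (height_in * dpi)
    return pixels * channels * bits / 8 / 1e6

# A letter-sized transparent region, CMYK, 8 bits per channel:
for dpi in (300, 600, 1200):
    print(dpi, "dpi:", round(raster_megabytes(8.5, 11, dpi), 1), "MB")
```

Going from 300 to 1200 dpi multiplies the data by sixteen, which is why simply cranking up the resolution is an expensive fix.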
The alternative is to retain vector geometry, including text, as vector objects. This requires dividing the artwork into smaller parts that no longer overlap, then tracing the edges of the new shapes with vector paths. In the latest release, Global Graphics Software’s Mako Core SDK (v6.2.0) adds this capability to its raster-based transparency flattening API. Using existing APIs that apply De Casteljau’s algorithm to decompose Bézier curves, and a new method to trace around shapes, flattened content retains its device independence and printing quality.
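For readers unfamiliar with De Casteljau’s algorithm, here is a minimal, self-contained sketch of the core idea (illustrative only; this is not Mako API code). Splitting a cubic Bézier at a parameter t yields two smaller cubics whose control polygons hug the curve more tightly, so repeated splitting lets you decompose a curved edge to any tolerance:

```python
# Minimal sketch of De Casteljau subdivision for a cubic Bezier curve.
# Splitting at parameter t produces two cubics that together trace
# exactly the same curve as the original.

def lerp(a, b, t):
    """Linear interpolation between 2D points a and b."""
    return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

def subdivide(p0, p1, p2, p3, t=0.5):
    """Split a cubic Bezier (control points p0..p3) at t into two cubics."""
    q0, q1, q2 = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    r0, r1 = lerp(q0, q1, t), lerp(q1, q2, t)
    s = lerp(r0, r1, t)  # the point on the curve at parameter t
    return (p0, q0, r0, s), (s, r1, q2, p3)

left, right = subdivide((0, 0), (1, 2), (3, 2), (4, 0))
# left ends where right begins: at the point on the curve at t = 0.5
```

The two halves share the split point, so chaining subdivisions gives a set of small curve segments whose endpoints can be traced as a vector path.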
I’ve included a short demo of the vector-based transparency flattening feature using Mako here:
Following his post last week about the speed and scalability of your raster image processor (RIP), in this film Martin Bailey, distinguished technologist at Global Graphics Software, explains how to determine how much RIP power you need to drive a digital press by calculating the press data rate. It’s the best way of working out how much RIP power the Digital Front End (DFE) needs to drive the press at engine speed and to ensure profitable printing.
If you’re building a digital press, or a digital front end (DFE) to drive a digital press, you want it to be as efficient and cost-effective as possible. As the trend towards printing short runs and personalization grows, especially in combination with increasing resolutions, more colorants and faster presses, the speed and scalability of the raster image processor (RIP) inside that DFE are key factors in determining profitability.
For your digital press to print at speed you’ll need to understand the amount of data that it requires, i.e. its data rate. In this film, Martin Bailey, distinguished technologist at Global Graphics Software, explains how different stages in data handling will need different data rates and how to integrate the appropriate number of RIP cores to generate that much data without inflating the bill of materials and DFE hardware.
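As a rough illustration of the kind of calculation Martin describes, here is a sketch of the press data-rate arithmetic. All press parameters below are hypothetical examples, not figures from the film:

```python
# Back-of-envelope press data-rate calculation (hypothetical press).

def press_data_rate_gb_per_s(width_in, speed_in_per_s, dpi, colorants, bits_per_pixel):
    """Raster data rate the DFE must sustain, in gigabytes per second."""
    pixels_per_line = width_in * dpi
    lines_per_second = speed_in_per_s * dpi
    bits_per_second = pixels_per_line * lines_per_second * colorants * bits_per_pixel
    return bits_per_second / 8 / 1e9

# Example: 42-inch web, 1000 ft/min (200 in/s), 1200 dpi, CMYK,
# 1 bit per pixel after screening:
rate = press_data_rate_gb_per_s(42, 200, 1200, 4, 1)
print(f"{rate:.2f} GB/s")  # on the order of 6 GB/s of screened raster
```

Even this modest example lands at several gigabytes of raster data per second, which is why the number of RIP cores has to be sized against the press data rate rather than guessed.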
Martin also explains that your next press may have a much higher data rate requirement than your current one.
In this latest case study, Tom Bouman, worldwide workflow product marketing manager at HP PageWide Industrial, explains why the Harlequin RIP®, with its track record for high quality and speed and its ability to scale, was the obvious choice for the heart of the division’s digital front end when it was set up in 2008 to develop presses for the industrial inkjet market.
Today, the Harlequin RIP Core is at the heart of all the PageWide T-series presses, driving the HP Production Elite Print Server digital front end. The presses range from 20-inch models for commercial printing through to the large 110-inch T1100 series for high-volume corrugated pre-print, offering a truly scalable solution that sets the standard in performance and quality.
Product manager Paul Dormer gives an insight into why the Harlequin Core is the leading print OEMs’ first choice to power digital inkjet presses in this new film.
A raster image processor (RIP), Harlequin Core converts text, object and image data from file formats such as PDF, TIFF™ or JPEG into a raster that a printing device can understand. It’s at the heart of the digital front end that drives the press.
Proven in the field for decades, Harlequin Core is known for its incredible speed and is the fastest RIP engine available. It is used in every print sector, from industrial inkjet, such as textiles and flooring, to labels and packaging, commercial, transactional and newspaper printing.
As presses become wider, faster, and higher resolution, handling vast amounts of data, the Harlequin Core remains the RIP of choice for many leading brands including HP, Mimaki, Mutoh, Roland, Durst, Agfa and Delphax.
We’ve now been shipping the Harlequin Host Renderer™ (HHR) to OEMs and partners for over a decade, driving digital printers and presses. Back then Harlequin was our only substantial software component for use in digital front ends (DFEs), and we just came up with a name that seemed to describe what it did.
Since then our technology set has grown to include a component that can be used upstream of the RIP for creating, modifying, analyzing and visualizing page description languages like PDF: that’s Mako™. And we’ve also added a high-performance halftone screening engine: ScreenPro™.
We’ve positioned these components as a “Core” range and their names reflect this: “Mako Core” and “ScreenPro Core”. We also added higher level components in our Direct™ range, for printer OEMs who don’t want to dig into the complexities of system engineering, or who want to get to market faster.
Harlequin is already part of Harlequin Direct™, and we’re now amending the name of the SDK to bring it into line with our other “Core” component technologies. The diagram below shows how those various offerings fit together for a wide range of digital printer and press vendors.
So, farewell “Harlequin Host Renderer”, hello “Harlequin Core”.
When drupa opened its virtual doors this year, Eric Worrall, Global Graphics Software’s VP of products and services, joined industry colleagues from the SAS-Institute, Zaikio GmbH and Print Business Media and took part in one of the live sessions: a discussion about how people adapt to a world where machines can make decisions faster and more precisely than humans can, and what print companies need to understand and do to prepare for the upheaval.
Do we need a new way to think about what printers do?
Watch it here:
To find out more about the smart factory and the smart digital front end, visit our website.
In this post, Global Graphics Software’s product manager for Mako, David Stevenson, explores the challenge of printing large amounts of raster data and the options available to ensure that data doesn’t slow down your digital press:
The print market is increasingly moving to digital: digital printing offers many advantages over conventional printing, the most valuable being mass-produced, personalized output that makes every printed copy different. At the same time, digital presses are getting faster and wider, and printing at higher resolutions, with extended-gamut color becoming commonplace.
To drive the new class of digital presses, you need vast amounts of raster data every second. Traditional print software designed for non-digital workflows attempts to handle this vast amount of data by RIPping ahead, storing rasters to physical disks. However, the rate at which data is needed for the digital press causes disk-based workflows to rapidly hit the data rate boundary. This is the point where even state-of-the-art storage devices are simply too small and slow for the huge data rates required to keep the press running at full rated speed.
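Some back-of-envelope arithmetic shows why the disk becomes the bottleneck. The press data rate and SSD write speed below are illustrative assumptions, not measurements of any particular press or drive:

```python
# Back-of-envelope arithmetic (hypothetical figures) on why RIPping
# ahead to disk struggles at digital-press data rates.

press_rate_gb_s = 6.0   # screened raster a fast inkjet press might consume
ssd_write_gb_s = 3.0    # sustained sequential write of a good NVMe SSD
hours_ripped_ahead = 1.0

# Storage needed just to buffer one hour of output:
storage_needed_tb = press_rate_gb_s * 3600 * hours_ripped_ahead / 1000
print(f"Storage for 1 hour of RIP-ahead: {storage_needed_tb:.1f} TB")

# Drives needed in parallel just to keep up with the write rate:
drives_needed = press_rate_gb_s / ssd_write_gb_s
print(f"Drives needed to match the write rate: {drives_needed:.0f}")
```

With numbers like these, the storage subsystem has to be both very large and very fast simultaneously, which is exactly the data-rate boundary described above.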
This is leading to a new generation of RIPs that ditch the disk and RIP print jobs on the fly, directly to the press electronics. As well as supporting much higher data rates, this approach wastes no time RIPping ahead.
As you can imagine, RIPping directly to the press electronics presents some engineering challenges. For example, two print jobs may look identical before and after printing, but the way in which they have been made can cause them to RIP at very different rates. Additionally, your RIP of choice may have optimizations that make jobs constructed in certain ways RIP faster or slower. This variability in print jobs and RIP times is a bit like playing Russian roulette: if you lose, the press will be starved of data, causing wasted product or delivery delays.
With a RIP driving your press directly you need to have confidence that all jobs submitted can be printed at full speed. That means you need the performance of each print job and page to be predictable, and you need to know what speed the press can be run at for a given combination of print job, RIP and PC.
Knowing this, you may choose to slow down the press so that your RIP can keep up. Better still, keep the press running at full speed by streamlining the job with knowledge of optimizations that work well with your choice of RIP.
Or you could choose to return the print job to the generator with a report explaining what is causing it to run slowly. Armed with this information, the generator can rebuild the job, optimized for your chosen RIP.
Whatever you choose, you will need predictable print jobs to drive your press at the highest speed to maximize your digital press’s productivity.
This week WhatTheyThink launched its 2021 Technology Outlook – a resource guide designed for you to quickly learn about new innovations from industry analysts and thought leaders. It includes five technology focus areas: digital printing; labels & packaging; software & workflow; wide format & signage and textiles & apparel; and finishing.
As part of the software & workflow technology focus, David Zwang of WhatTheyThink chatted to our VP of products and services, Eric Worrall, about digital front ends (DFEs), the elements that comprise a DFE, and the recent launch of Global Graphics’ SmartDFE™, a complete single-source software and electronics stack that does everything from job creation through to printhead electronics, and a vital component in the smart factory of the future. Smart factories are designed to run the entire production process autonomously, and that will include the print subsystems.
Watch it here:
To find out more about the smart factory and the smart digital front end, visit our website.
We added support for native processing of PDF files to the Harlequin RIP® way back in 1997. When we started working on that support we somewhat naïvely assumed that we should implement the written specification and that all would be well. But it was obvious from the very first tests that we performed that we would need to do something a bit more intelligent because a large proportion of PDF files that had been supplied as suitable for production printing did not actually comply with the specification.
Launching a product that would reject many PDF files that could be accepted by other RIPs would be commercial suicide. The fact that, at the time, those other RIPs needed the PDF to be transformed into PostScript first didn’t change the business case.
Unfortunately, a lot of PDF files are still being made that don’t comply with the standard. So, in the almost quarter of a century since first launching PDF support, we’ve developed our own rules around what Harlequin should do with non-compliant files, and invested many decades of effort in testing and development to accept non-compliant files from major applications.
The first rule that we put in place is that Harlequin is not a validation tool. A Harlequin RIP user will have PDF files to be printed, and Harlequin should render those files as long as we can have a high level of confidence that the pages will be rendered as expected.
In other words, we treat both compliance with the PDF standard and compatibility with major PDF creation tools as equally important … and supporting Harlequin RIP users in running profitable businesses as even more so!
The second rule is that silently rendering something incorrectly can be very bad, increasing costs if a reprint is required and causing a print buyer/brand to lose faith in a print service provider/converter. So Harlequin is written to require a reasonably high level of confidence that it can render the file as expected. If a developer opening up the internals of a PDF file couldn’t be sure how it was intended to be rendered then Harlequin should not be rendering it.
We’d expect most other vendors of PDF readers to apply similar logic in their products, and the evidence we’ve seen supports that expectation. The differences in how each product treats invalid PDF are the result of differences in each product’s primary goal, and therefore in the cost attached to output that is viewed as incorrect.
Consider a PDF viewer for general office or home use, running on a mobile device or PC. The business case for that viewer implies that the most important thing it has to do is to show as much of the information from a PDF file as possible, preferably without worrying the user with warnings or errors. It’s not usually going to be hugely important or costly if the formatting is slightly wrong. You could think of this as being at the opposite end of the scale from a RIP for production printing. In other words, the required level of confidence in accurately rendering the appearance of the page is much lower for the on-screen viewer.
You may have noticed that my description of a viewer could easily be applied to Adobe Reader or Acrobat Pro. Acrobat is also not written primarily as a validation tool, and it’s definitely not appropriate to assume that a PDF file complies with the standard just because it opens in Acrobat. Remember the Acrobat business case, and imagine what the average office user’s response would be if it would not open a significant proportion of PDF files because it flagged them as invalid!
For further reading about PDF documents and standards: