Choosing the class of your raster image processor (RIP) – Part II

Part II: Factors influencing your choice of integration

If you’re in the process of building a digital front end for your press, you’ll need to consider how much RIPing power you need for the capabilities of the press and the kinds of jobs that will be run on it. The RIP converts text and image data from many file formats including PDF, TIFF™ or JPEG into a format that a printing device such as an inkjet print head, toner marking engine or laser plate-setter can understand. But how do you know what RIP is best for you and what solution can best deliver maximum throughput on your output device? In this second post, Global Graphics Software’s CTO, Martin Bailey, discusses the factors to consider when choosing a RIP.

In my last post I gave a pointer to a spreadsheet that can be used to calculate the data rate required for a digital press. This single number can be used to make a first approximation of which class of RIP integration you should be considering.

For integrations based on the Harlequin RIP® reasonable guidelines are:

  • Up to 250MB/s: can be done with a single RIP using multi-threading in that RIP
  • Up to 1GB/s: use multiple RIPs on a single server using the Harlequin Scalable RIP
  • Over 1GB/s: use multiple RIPs spread over multiple servers using the Harlequin Scalable RIP
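As a quick sanity check, the thresholds above can be expressed as a simple lookup. This is a sketch in Python; the function name and return strings are ours for illustration, not part of any Harlequin API:

```python
def rip_class(data_rate_mb_per_s):
    """Map a press data rate (MB/s) to the integration classes listed above."""
    if data_rate_mb_per_s <= 250:
        return "single multi-threaded RIP"
    if data_rate_mb_per_s <= 1000:
        return "multiple RIPs on a single server"
    return "multiple RIPs over multiple servers"
```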

These numbers indicate the data rate that the RIP needs to provide when every copy of the output is different. The value may need to be adjusted for other scenarios:

  • If you’re printing the same raster many times, the RIP data rate may be reduced in proportion; the RIP has 100 times as long to process a PDF page if you’re going to be printing 100 copies of it, for instance.
  • If you’re printing variable data print jobs with significant re-use of graphical elements between copies, then Harlequin VariData™ can be used to accelerate processing. This effect is already factored into the recommendations above.
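The first adjustment above can be sketched the same way: the required RIP data rate scales down in proportion to the number of identical copies printed from each raster (again an illustrative helper, not a product API):

```python
def effective_data_rate(raw_mb_per_s, copies=1):
    """Scale the 'every copy different' data rate down by the number of
    identical copies printed from each RIPed raster."""
    if copies < 1:
        raise ValueError("copies must be at least 1")
    return raw_mb_per_s / copies

# 100 identical copies give the RIP 100x as long per page, so a press
# needing 500 MB/s of raw raster delivery needs only 5 MB/s of RIPing.
```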

The complexity of the jobs you’re rendering will also have an impact.

Transactional or industrial labelling jobs, for example, tend to be very simple, with virtually no live PDF transparency and relatively low image coverage. They are therefore typically fast to render. If your data rate calculation puts you just above a threshold in the list above, you may be able to take one step down to a simpler system.

On the other hand, jobs such as complex marketing designs or photobooks are very image-heavy and tend to use a lot of live transparency. If your data rate is just below a threshold on the list above, you will probably need to step up to a higher level of system.

Be careful when making those adjustments, however. If you do so you may have to choose either to build and support multiple variations of your DFE, to support different classes of print site, or to design a single model of DFE that can cope with the needs of the great majority of your customers. Building a single model certainly reduces development, test and support costs, and may reduce your average bill of materials. But it also tends to mean that you will need to base your design on the raw, “every copy different”, data rate requirements, because somebody, somewhere will expect to be able to use your press to do just that.

Our experience has also been that the complexity of jobs in any particular sector is increasing over time, and the run lengths that people will want to print are shortening. Designing for current expectations may give you an under-powered solution in a few years’ time, maybe even by the time you ship your first digital press. Moore’s law, that computers will continue to deliver higher and higher performance at about the same price point, will cancel out some of that effect, but usually not all of it.

And if your next press will print with more inks, at a higher resolution, and at higher speed you may be surprised at how much impact that combination will have on the data rate requirements, and therefore possibly on the whole architecture of the Digital Front End to drive it.

And finally, the recommendations above implicitly assume that a suitable computer configuration is used. You won’t achieve 1GB/s output from multiple RIPs on a computer with a single, four-core CPU, for example. Key aspects of hardware affecting speed are: number of cores, CPU clock speed, disk space available, RAM available, disk read and write speed, band-width to memory, L2 and L3 cache sizes on the CPU and (especially for multi-server configurations) network speed and bandwidth.

Fortunately, the latest version of the Harlequin RIP offers a framework that can help you to meet all these requirements. It offers a complete scale of solutions from a single RIP through multiple RIPs on a single server, up to multiple RIPs across multiple servers.

 

The above is an excerpt from our latest white paper: Scalable performance with the Harlequin RIP. Download the white paper here.

Read Part I – Calculating data rates here.

Choosing the class of your raster image processor (RIP) – Part I

Part I: How to calculate data rates

If you’re in the process of choosing or building a digital front end for your press, you’ll need to consider how much RIPing power you need for the capabilities of the press and the kinds of jobs that will be run on it. The RIP converts text and image data from many file formats including PDF, TIFF™ or JPEG into a format that a printing device such as an inkjet printhead, toner marking engine or laser platesetter can understand. But how do you know what RIP is best for you and what solution can best deliver maximum throughput on your output device? This is the first of two posts by Global Graphics Software’s CTO, Martin Bailey, where he advises how to size a solution for a digital press using the data rate required on the output side.

Over the years at Global Graphics Software, we’ve found that the best guidance we can give to our OEM partners in sizing digital press systems based on our own solution, the Harlequin RIP®, comes from a relatively simple calculation of the data rate required on the output side. And now we’re making a tool to calculate those data rates available to you. All you need to do is to download it from the web and to open it in Excel.

Download it here:  Global_Graphics_Software_Press_data_rates

You will, of course, also need the specifications of the press(es) that you want to calculate data rates for.

You can use the spreadsheet to calculate data rates based on pages per minute, web speed, sheets or square meters per minute or per hour, or on head frequency. Which is most appropriate for you depends on which market sector you’re selling your press into and where your focus is on the technical aspects of the press.

It calculates the data rate for delivering unscreened 8 bits per pixel (contone) rasters. This has proven to be a better metric for estimating RIP requirements than taking the bit depth of halftoned raster delivery into account. In practice Harlequin will run at about the same speed for 8-bit contone and for 1-bit halftone output because the extra work of halftoning is offset by the reduced volume of raster data to move around. Multi-level halftones delivered in 2-bit or 4-bit rasters take a little bit longer, but not enough to need to be considered here.
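As a rough illustration of the kind of calculation the spreadsheet performs (simplified to the sheet-fed, pages-per-minute case; the real tool also handles web speed, head frequency and so on), the contone data rate is just pixels per page, times inks, times pages per second, at one byte per pixel:

```python
def contone_data_rate_mb_per_s(pages_per_minute, width_in, height_in, dpi, inks):
    """MB/s of unscreened 8-bit contone raster data: one byte per pixel
    per ink. A simplified, sheet-fed version of the spreadsheet's sums."""
    pixels_per_page = (width_in * dpi) * (height_in * dpi)
    bytes_per_page = pixels_per_page * inks   # 8 bpp = 1 byte per ink
    return pages_per_minute * bytes_per_page / 60 / 1e6

# e.g. A4 (8.27 x 11.69 in), 600 dpi, CMYK, 100 pages/minute -> ~232 MB/s,
# which already sits close to the single-RIP guideline threshold.
```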

You can also use the sheet-fed calculation for conventional print platesetters if you so desire. You might find it eye-opening to compare data rate requirements for an offset or flexo platesetter with those for a typical digital press!

Fortunately, the latest version of the Harlequin RIP offers a framework that can help you to meet all these requirements. It offers a complete scale of solutions from a single RIP through multiple RIPs on a single server, up to multiple RIPs across multiple servers.

In my next post I’ll share how the data rate number can be used to make a first approximation of which class of RIP integration you should be considering.

 

The above is an excerpt from our latest white paper: Scalable performance with the Harlequin RIP®. Download the white paper here

What does a RIP do?

Ever wondered what a raster image processor or RIP does? And what does RIPing a page mean? Read on to learn more about the phases of a RIP, the engine at the heart of your Digital Front End.

The RIP converts text and image data from many file formats including PDF, TIFF™ or JPEG into a format that a printing device such as an inkjet print head, toner marking engine or laser platesetter can understand. The process of RIPing a page requires several steps to be performed in order, regardless of whether that page is submitted as PostScript, PDF or any other page description language.

Interpreting: The page description language to be RIPed is read and decoded into an internal database of graphical elements that must be placed on the page. Each may be an image, a character of text (including font, size, color, etc.), a fill or stroke, etc. This database is referred to as a display list.

Compositing: The display list is pre-processed to apply any live transparency that may be in the job. This phase is only required for pages in PDF and XPS jobs that use live transparency; it’s not required for PostScript language pages because those cannot include live transparency.

Rendering: The display list is processed to convert every graphical element into the appropriate pattern of pixels to form the output raster. The term ‘rendering’ is sometimes used specifically for this part of the overall processing, and sometimes to describe the whole of the RIPing process. It’s used only in the first sense in this document.

Output: The raster produced by the rendering process is sent to the marking engine in the output device, whether it’s exposing a plate, a drum for marking with toner, an inkjet head or any other technology.

Sometimes this step is completely decoupled from the RIP, perhaps because plate images are stored as TIFF files and then sent to a CTP platesetter later, or because a near-line or off-line RIP is used for a digital press. In other environments the output stage is tightly coupled with rendering.
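The ordering of these phases can be summarised in a toy sketch. The helper functions here are stand-ins for illustration only, not the Harlequin API:

```python
def interpret(page):
    # Interpreting: decode the page description into a display list.
    return ["element:" + page]

def composite(display_list):
    # Compositing: pre-apply live transparency (PDF/XPS pages only).
    return display_list + ["flattened transparency"]

def render(display_list):
    # Rendering: convert every element into output pixels.
    return f"raster({len(display_list)} elements)"

def rip_page(page, has_live_transparency):
    display_list = interpret(page)
    if has_live_transparency:          # skipped for PostScript pages
        display_list = composite(display_list)
    return render(display_list)        # Output then ships this raster
```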

RIPing often includes a number of additional processes; in the Harlequin RIP® for example:

  • In-RIP imposition is performed during interpretation
  • Color management (Harlequin ColorPro®) and calibration are applied during interpretation or compositing, depending on configuration and job content
  • Screening is applied during rendering, or after the Harlequin RIP has delivered unscreened raster data if screening is being applied post-RIP (when Global Graphics’ ScreenPro™ and PrintFlat™ technologies are being used, for example)

These are all important processes in many print workflows.

 

The Harlequin Host Renderer
The Harlequin RIP includes native interpretation of PostScript, EPS, DCS, XPS, JPEG, BMP and TIFF as well as PDF, PDF/X and PDF/VT, so whatever workflows your target market uses, it gives accurate and predictable image output time after time.

 

The above is an excerpt from our latest white paper: Scalability with the Harlequin RIP®. Download the white paper here

Unlocking document potential

Using Mako to pre-process PDFs for print workflows follows quite naturally. With its built-in RIP, Mako has exceptional capability to deal with fonts, color, transparency and graphic complexity to suit the most demanding of production requirements.

What is less obvious is Mako’s value to enterprise print management (EPM). Complementing Mako’s support for PDF and XPS is the ability to convert from (and to) PCL5 and PCL/XL. Besides conversion, Mako can also render such documents, for example to create a thumbnail of a PCL job so that a user can more easily identify the correct document to print or move it to the next stage in a managed process. Mako’s document object model (DOM) architecture allows content to be extracted for record-keeping purposes or be added to – a watermark or barcode, for example.

Document Object Model to access the raw building blocks of documents.

The ability to look inside a document, irrespective of the format of the original, has brought Mako to the attention of electronic document and records management system (EDRMS) vendors, seeking to add value to their data extraction, search and categorization processes. Being able to treat different formats of document in the same way simplifies development and improves process efficiency.

Mako’s ability to analyse page layout and extract text in the correct reading order, or to interpret and update document metadata, is a valuable tool to developers of EDRMS solutions. In the face of GDPR (General Data Protection Regulation) and sector-specific regulations, the need for such solutions is clear. And as many of those documents are destined to be printed at some point in their lifecycle, they exist as discrete, paginated digital documents for which Mako is the key to unlocking their business value.

If you would like to discuss this or any aspect of Mako, please email justin.bailey@globalgraphics.com

Break through to drupa 2020

Break through to drupa 2020 with the Technical Services team from Global Graphics Software.

Delivering a new device to the market is a huge undertaking; you must source heads and driver electronics, design mechanical systems and create inks. This is, of course, an oversimplification: sourcing or developing all the components is just the start; you then need to make it all work together.

Once you have all these components working together you then need to source software to drive the device at the speed and to the quality your market requires. Finding yourself at this point just before a show like drupa, a show that only happens every four years, will test any product and development manager’s nerve.

Not only is the software choice key for delivering market leading quality and speed, but it may also need to evolve rapidly as you lead up to drupa 2020 to deal with limitations in your new device’s fixed components.

At Global Graphics Software we understand the pressure you are under to get your device finished leading up to a show like drupa. As well as offering a single source for all your software we also actively work with you to evolve the software rapidly to solve your device’s and market’s unique challenges. We have a team of experts including principal software engineers, color scientists and screening scientists who are on standby to join your engineers at a moment’s notice.

If you have Global Graphics Software technologies somewhere in your device, you can call on our Technical Services team to break through to drupa 2020. They can help you accelerate development and break through any challenges you find as you work towards the show. They can even help in areas where you haven’t chosen to use Global Graphics Software technologies.

As you can imagine, our Technical Services team is in great demand. Let us pencil in some time for your team: reach out to us at technicalservices@globalgraphics.com.

Streaks and Banding: Measuring macro uniformity in the context of optimization processes for inkjet printing

Dr Danny Hall, Chief Screening Scientist, Global Graphics Software

Global Graphics Software’s chief screening scientist, Dr Danny Hall, discusses the emerging standards designed to objectively characterize directional print variations, with particular reference to the ISO TS 18621-21 standard:

Directional printing artifacts like streaks and banding are commonly encountered problems in digital printing systems. For example, inkjet systems may produce characteristic density variations due to inconsistencies between printheads or intra-printhead variations between nozzles. When these variations have a high spatial frequency they can be characterized as causing ‘streaking’ in the direction of print; where the variations have a low spatial frequency they can cause the appearance of ‘banding’ in the direction of print.

Other directional streaking and banding effects may be due, for example, to variations in printhead or substrate velocity, resulting in density variations across the direction of printing: the ‘wow’ and ‘flutter’ of the digital printing age.

In the décor market there is a visual perceptual test sometimes referred to as a ‘porthole test’. In this test a human subject is presented with a print (e.g. wallpaper or floor covering) rotating slowly behind a round window under controlled viewing conditions. If they can determine the direction of printing then the test is a ‘fail’. One aspect of the porthole test is that it allows for perceptual response differences between different printed images: the same press and conditions may be able to print one job containing a lot of graphical detail acceptably, but still fail on another job requiring flat tints.

There are currently emerging standards designed to objectively characterize this type of directional print variation.  For example, the proposed ISO TS 18621-21 standard defines a measurement method for the evaluation of distortions in the macroscopic uniformity of printed areas that are oriented in the horizontal and/or vertical direction, like streaks and bands.

Such recognized standards could be very useful for the development and maintenance of printing systems; as well as potentially allowing for the quantitative comparison of directional quality between different printing systems.

Having an objective ISO measurement of directional uniformity would therefore be a very useful step forward and something we at Global Graphics would like to encourage.

As a first step, the current ISO TS 18621-21 proposal looks good and usable, and provides a robust, simple metric that can be calculated using standard equipment.

However, in exploring the potential use of this standard we also note a few limitations which may constrain its widest possible utility as a general directional measure in printing. For example, the frequency response of the proposed measurement technique may limit the response of the measure to higher-frequency ‘streaking’ artifacts. This may be inevitable with the measurement devices available, but the potential spatial-frequency bias needs to be clearly understood and accepted.

Another challenge in standardizing such a metric across different printing platforms is the difficulty of selecting some kind of objective color tint to measure. The ‘goodness’ of the proposed ISO TS 18621-21 metric will depend on the color tint chosen for measurement, so making such measurements comparable between systems with different color gamuts is a difficult, perhaps impossible, task. Nonetheless, we would like to propose a color tint selection strategy which, at least a priori, could have the potential to provide a selection of standardized color tints that could be used meaningfully with ISO TS 18621-21 across a range of different printers.

Frequency response

The frequency response is discussed in the ISO proposal. There is a potential bias in the measuring methodology towards lower frequencies due to the suggested 6mm sampling cut-off. For example, in our experience the main frequency elements of ‘streakiness’ may not be captured by this methodology, potentially resulting in a bias towards lower-frequency ‘banding’ effects. That’s not necessarily a problem; it just needs to be understood that this metric may be biased towards ‘banding’ over ‘streakiness’ determination.
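A toy numerical illustration of that bias (not the ISO computation): a moving-average low-pass whose window matches a 6 mm cut-off completely averages away a 1 mm streak pattern, leaving the metric unable to tell a streaky print from a flat one.

```python
import numpy as np

def macro_metric(profile, sample_mm, cutoff_mm=6.0):
    # Toy macro-uniformity metric: standard deviation of a density profile
    # after a moving-average low-pass matching the cut-off (illustrative
    # only; this is not the ISO TS 18621-21 formula).
    window = max(1, int(round(cutoff_mm / sample_mm)))
    smoothed = np.convolve(profile, np.ones(window) / window, mode="valid")
    return float(np.std(smoothed))

flat = np.zeros(400)                                           # 0.5 mm sampling
streaky = 0.05 * np.where(np.arange(400) % 2 == 0, 1.0, -1.0)  # 1 mm streaks

# The 12-sample (6 mm) window averages the 1 mm pattern to zero, so the
# metric cannot distinguish the streaky print from the flat one.
print(macro_metric(flat, 0.5), macro_metric(streaky, 0.5))
```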

Where any streakiness is random and uncorrelated with lower-frequency banding, changes in high-frequency streakiness can be expected to show up statistically as variations at lower frequencies (white noise). However, there are now printing compensation systems available (such as PrintFlat™) which can correct for directional variations so that high- and low-frequency variations are no longer correlated in a Gaussian way. In such a case the proposed metric could, in the worst case, be blind to any underlying changes in high-frequency streakiness above the band-pass of the sampling system.

Color selection

The proposed standard does not specify the printed color to use, which may make objective comparisons between systems based on this metric difficult; the metric itself is also correlated with the underlying contrast of the tint selected. For example, one can expect an apparently better metric to result from printing a 5% tint than from a 70% tint of the same ink. Therefore, an objective method for selecting color tints could be helpful, and this is something we would like to explore.

This is an abstract from Danny’s forthcoming talk at the TAGA Annual Technical Conference, March 17 – 20, 2019 in Minneapolis, MN.

Register here: https://www.taga.org/register/

Adjusting rendering of outlined text in Harlequin

By Martin Bailey, CTO, Global Graphics Software

In several sectors of the print market it is common practice to convert text to outlines upstream of a RIP, on the grounds that it’s then impossible for the wrong glyph to be printed. This is normal, for instance, in much of the label and packaging industry, especially when there is very robust regulation in place, such as in pharmaceuticals.

Every page description language defines “scan conversion” rules that specify which pixels should be marked when a graphic is painted onto a page; these build on the concept of “pixel touching”, specifying exactly when a vector shape counts as touching a pixel and therefore marking it.

When you’re using PDF (or PostScript, before that) the scan conversion rules are different for text specified using live fonts and for vector shapes. If you started with live text and then converted it to outlines, you have switched from the text scan conversion rules to the vector graphic rules. That has always meant that text converted to outlines tends to render slightly heavier than text using live fonts. And the smaller the text is, the more apparent the weight difference becomes.
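A one-dimensional caricature of the two rule families (not Harlequin’s actual scan converter) shows why the rules matter: a ‘touching’ rule always marks at least one pixel, so thin shapes render heavier, while a ‘center-sample’ rule can miss every sample point and drop the shape entirely.

```python
import math

def touched(x0, x1):
    # "Touching" rule family: mark every pixel the span overlaps at all.
    return list(range(math.floor(x0), math.ceil(x1)))

def centre_sampled(x0, x1):
    # "Centre-sample" rule family: mark a pixel only if its centre
    # (at i + 0.5) lies inside the span.
    return [i for i in range(math.floor(x0), math.ceil(x1))
            if x0 <= i + 0.5 < x1]

# A 0.4-pixel-wide stroke placed between pixel centres:
print(touched(1.6, 2.0))         # the touching rule still marks pixel 1
print(centre_sampled(1.6, 2.0))  # the centre rule marks nothing: dropout
```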

FIG 1 – 2pt text in Times Roman showing various scan conversion rules.

In Fig 1 you can see this difference very clearly for very small Western text rendered at 2pt and 600dpi, still a common resolution for digital printers and presses. The top line shows text using live fonts, and the second line shows the PDF scan conversion rule for a vector fill. Note that at 2pt the RIP only has about 12 pixels for the height of an upper-case glyph.

In early 2018 we added a new scan conversion rule for vector fills alongside our pre-existing rules in the Harlequin RIP. The intention was to make it possible to emulate the much lighter output that Esko’s FlexRIPs produce. Unfortunately, it also tended to emulate the tendency for very fine structures, especially fine horizontal strokes in small text, to disappear. You can see this in the third row of text in Fig 1.

This is obviously not an optimal solution, so we continued our development, and have now extended the original solution with what is called “dropout control”. This prevents very fine sections of a vector fill “dropping out” when they manage to fall on the page in such a way that they don’t cross the locations in the pixels that would trigger anything being marked. You can see the effect of this in the bottom line in Fig 1.

Light rendering with dropout control was delivered to our OEM partners in late 2018 under the name RenderAccurate.

Even this optimized output won’t exactly match the output of live fonts, because the fonts themselves often include hints to the rendering engine, designed to ensure maximum legibility and conformance to the font designer’s vision. These hints can, for instance, ensure that vertical stems are the same width in all glyphs, or that the curved base of a glyph will extend slightly below the baseline to make it visually balance with glyphs with flat bases that sit on the baseline. Those hints were discarded when the text was converted to an outline, and so can’t be used any more. But the new scan conversion algorithm certainly strikes a good balance between matching the weight of live fonts and maintaining legibility.

The effect is visible in very small text in Latin fonts, as shown in Fig 1, but the impact is often masked by the physical effects of printing. And Latin glyphs tend to be relatively simple, so that the human eye and brain are pretty good at filling in the missing segments without too much impact on legibility or comprehension.

On the other hand, Chinese, Japanese and Korean (CJK) fonts are often more complex, with the result that the effect is visible at larger point sizes. And the meaning can be obscured or altered much more easily if strokes are missing. Fig 2 illustrates the same effects on Japanese text at 3pt, rendered at 600dpi. At this size and resolution, the RIP has about 22 pixels for the height of each glyph.

FIG 2 – 3pt text in MS Mincho, showing, from top to bottom: live fonts; default rendering for outlined text; the new, lighter, outlined text; and lighter text with dropout control.

The glyphs shown in FIG 2 are complex compared to Western scripts, but any solution that will be used with CJK scripts must obviously also be proven with the most complex character shapes, such as the Kanji in FIG 3. Some of these have so many horizontal strokes that they simply cannot be rendered with fewer than 22 device pixels vertically, and require more than that for reliable rendering. The sample in this figure is rendered with around 27 pixels for the height of each glyph.

FIG 3 – More complex Kanji in KozGoPro-Regular, showing, from top to bottom: live fonts; default rendering for outlined text; and the new, lighter text with dropout control.

This article has deliberately used very small text sizes as examples, simply because the effects are easier to see. But the same issues arise at larger sizes as well, albeit more rarely.

On the other hand, it is precisely because the issue appears more rarely, and because the effects are less immediately noticeable, that the risk of dropping strokes is so dangerous. It’s perfectly possible that an occasional missing stroke, perhaps in an unusually light font, may go unnoticed in process control. And that might result in a print that disappoints a brand owner, or even one that fails a regulatory check, after the label has been applied or the carton converted and filled, or even after the product has been shipped.

So, when a brand demands lighter rendering of pre-outlined fonts, make sure you’re safe by also using dropout control in your RIP!

New to inkjet? Come and see us at Hunkeler Innovationdays

Martin Bailey, CTO, Global Graphics Software

If you are new to inkjet and are building your first press no doubt you’ll have many questions about the workflow and the Digital Front End.

In fact, you’re probably wondering how to scope out the functionality you need to create a DFE that is customised to exactly what your customers require. Among your concerns will be how you’re going to achieve the throughput you need to keep the press running at rated speed, especially when handling variable data. Or it might be handling special colours or achieving acceptable image quality that is keeping you awake at night.  And how to achieve this without increasing the bill of materials for your press?

At Hunkeler Innovationdays we’ll have a range of resources available to address just such questions with some real case study examples of how our OEM customers have solved the problems that were causing them a headache using our technology and the skills of our Technical Services team.

For instance, how, on a personalised run, when every label or page might be different, can you stop the press from falling idle whilst the RIP catches up?  Our ScreenPro™ technology helps Mark Andy cut processing time by 50% on the Mark Andy Digital Series HD, enabling fully variable (every label is different) continuous printing at high-speed and at high-quality.

How can you avoid streaking on the image if your substrate is racing under your printheads at speeds of up to 300m/min for aqueous and maybe 90m/min for UV? Or mottling? The Mirror and Pearl Advanced Inkjet Screens™ available with ScreenPro have been developed specifically to address these problems.

During the lifetime of the press, how can you avoid variations in quality that look like banding because your printheads have worn or been replaced?  Take a look at what Ellerhold AG has achieved by deploying PrintFlat™.

The ScreenPro screening engine is one of the building blocks you’ll need for your inkjet press. Our Fundamentals components provide other functions that are essential to the workflow such as job management, soft proofing, and colour management.

Using a variety of white papers, print samples, video footage and case studies, we’ll be sharing our experience. So come along and meet the team: that’s me, Jeremy Spencer, Justin Bailey and our colleague Jonathan Wilson from Meteor Inkjet, if you want to chat about their printhead driver electronics, which are endorsed by the world’s leading industrial inkjet printhead manufacturers.

 

Join us at Hunkeler Innovationdays 2019

 

Convert from PDF & XPS formats with Mako™

Product manager David Stevenson provides an update on the latest release of Mako:

We’ve just released Mako version 4.6 and I’m pleased to let you know that new in this release is support for PCL5 input, adding to the PCL/XL support already available. Aimed primarily at the enterprise print market, this capability makes it possible to convert to and from PDF & XPS formats and to render thumbnails for preview purposes.

This latest release will also be of interest to our prepress customers: we’ve improved overall performance and added a new, fast render-to-buffer capability, in monochrome and color.

Finally, there is also new and improved support for PDF-named destinations, document metadata and more.

Contact me to find out more: david.stevenson@globalgraphics.com

David Stevenson, Product Manager

 

Screening for the next-generation high-definition devices

In days gone by, almost every job was more or less 600 dpi in both directions. Now there is a drive to higher definition, with higher resolutions and smaller drop sizes.

So we’ve introduced a new feature in ScreenPro™ that allows the resolution of a job to be “upscaled”, meaning that a RIP can still render at 600 dpi through an existing workflow and ScreenPro can then upscale the job to the printer resolution. The benefit is that you don’t need to change your existing workflow and can cut down on RIP time by RIPing at 600 dpi, yet print on a 1200 dpi machine for increased definition.

There are various ways of achieving higher resolutions: use the new generation of print heads running at 1200 dpi, use multiple print bars, or use scanning head printers for multiple passes. Sometimes it really is increased resolution that is required and other times it is higher addressability and, for example with textile printing, sometimes you just need to be able to put down more ink in any given location.

Once manufacturers have achieved 1200 x 1200 dpi there are other problems to solve. Four times as much data is generated and must be passed through the workflow pipeline to the press, compared to a 600 dpi data path. There are some applications where the higher addressability isn’t needed and 600 dpi is fine; in those cases you could run the press twice as fast, and get twice the production, by running it at 1200 x 600 dpi, or three times as fast at 1200 x 400 dpi.

To solve the problem of too much data slowing down processing times we have implemented resolution upscaling in the latest release of ScreenPro. The typical example is an existing press, with a workflow to go with it, at 600 dpi. The RIP delivers data at this resolution. We then have a choice: send it to the 600 dpi printer, in which case we screen as normal, or send it to the 1200 dpi machine.

In this simple case we use ScreenPro to double the number of dots it produces in both directions. For non-square resolutions we multiply in one direction only. Also, for non-square resolutions we have to change the shape of the screens: a circular screen would be distorted by the non-square printer resolution, so we correct for that up front.
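Conceptually, the upscaling step is pixel replication. Here is a sketch in Python with NumPy (ScreenPro’s own implementation is internal to the product):

```python
import numpy as np

def upscale(raster, fx, fy):
    # Nearest-neighbour replication: fx copies across, fy copies down.
    # fx = fy = 2 takes a 600 dpi raster to 1200 x 1200 dpi; use fx != fy
    # for non-square targets such as 1200 x 600 dpi.
    return np.repeat(np.repeat(raster, fy, axis=0), fx, axis=1)

page_600 = np.arange(6, dtype=np.uint8).reshape(2, 3)
page_1200 = upscale(page_600, 2, 2)
print(page_1200.shape)   # (4, 6): four times the data of the 600 dpi page
```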

What this means is that you can continue to RIP at 600 dpi and keep the same workflow right up to the last process of screening. You keep the same PC processor requirements and the same network data-transfer speeds. Only at the last stage do you use ScreenPro to upscale to your desired resolution.

This will be a really useful feature for many customers developing the next generation of high definition digital printers.

Hunkeler Innovationdays 2019
Join us at Hunkeler Innovationdays 2019 to learn more about the new features in ScreenPro.