Where is screening performed in the workflow?

In my last post I gave an introduction to halftone screening. Here, I explain where screening is performed in the workflow:

 

Halftone screening must always be performed after the page description language (such as PDF or PostScript) has been rendered into a raster by a RIP … at least conceptually.

In many cases it’s appropriate for the screening to be performed by that RIP, which may mean that in highly optimized systems it’s done in parallel with the final rendering of the pages, avoiding the overhead of generating an unscreened contone raster and then screening it. This usually delivers the highest throughput.

Global Graphics Software’s Harlequin RIP® is a world-leading RIP that’s used to drive some of the highest quality and highest speed digital presses today. The Harlequin RIP can apply a variety of different halftone types while rendering jobs, including Advanced Inkjet Screens™.

But an inkjet press vendor may also build their system to apply screening after the RIP, taking in an unscreened raster such as a TIFF file. This may be because:

  • An inkjet press vendor may already be using a RIP that doesn’t provide screening that’s high enough quality, or process fast enough, to drive their devices. In that situation it may be appropriate to use a stand-alone screening engine after that existing RIP.
  • To apply closed loop calibration, adjusting for small variations in the tonality of the prints over time, while printing multiple copies of the same output – in other words, without the need for re-ripping that output.
  • When a variable data optimization technology such as Harlequin VariData™ is being used that requires multiple rasters to be recomposited after the RIP. It’s better to apply screening after that recomposition to avoid visible artifacts around some graphics caused by different halftone alignment.
  • To access sophisticated features that are only available in a stand-alone screening engine such as Global Graphics’ PrintFlat™ technology, which is applied in ScreenPro™.
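
The closed loop calibration case in the list above can be sketched as a tone-curve correction applied to the unscreened contone raster before screening. This is an illustrative sketch only, not Global Graphics' implementation; the measured curve is invented:

```python
import numpy as np

def apply_calibration(contone, measured, requested):
    """Closed-loop calibration sketch: correct the unscreened contone
    raster (values 0.0-1.0) with an inverse of the measured tone curve
    before screening, so press drift can be compensated between copies
    without re-ripping the job."""
    # np.interp maps each requested tone back to the input value that
    # the press actually prints at that tone.
    return np.interp(contone, measured, requested)

# Hypothetical measurement: a 50% request currently prints as 58%,
# so mid-tones are pulled down before screening.
measured  = [0.0, 0.58, 1.0]   # what the press printed
requested = [0.0, 0.50, 1.0]   # what was asked for
corrected = apply_calibration(np.array([0.0, 0.5, 1.0]), measured, requested)
print(corrected)  # the 0.5 request becomes roughly 0.43
```

Because the correction operates on the raster rather than on the PDF, it can be updated mid-run as new measurements arrive.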

Global Graphics Software has developed the ScreenPro stand-alone screening engine for these situations. It’s used in production to screen raster output produced using RIPs such as those from Esko, Caldera and ColorGate, as well as after Harlequin RIPs in order to access PrintFlat.

Achieve excellent quality at high speeds on your digital inkjet press: The ScreenPro engine from Global Graphics Software is available as a cross platform development component to integrate seamlessly into your workflow solution.

The above is an excerpt from our latest white paper: How to mitigate artifacts in high-speed inkjet printing. Download the white paper here.

What is halftone screening?

Halftone screening, also sometimes called halftoning, screening or dithering, is a technique to reliably produce optical illusions that fool the eye into seeing tones and colors that are not actually present on the printed matter.

Most printing technologies are not capable of printing a significant number of different levels for any single color. Offset and flexo presses and some inkjet presses can only place ink or no ink. Halftone screening is a method to make it look as if many more levels of gray are visible in the print by laying down ink in some areas and not in others, and using such a small pattern of dots that the individual dots cannot be seen at normal viewing distance.

Conventional screening, for offset and flexo presses, breaks a continuous tone black and white image into a series of dots of varying sizes and places these dots in a rigid grid pattern. Smaller dots give lighter tones, and the dot sizes within the grid are increased to give progressively darker shades until the dots grow so large that they tile with adjacent dots to form a solid of maximum density (100%). This approach is used mainly because those presses cannot reliably print single pixels or very small groups of pixels, and it introduces other challenges, such as moiré between colorants, and it reduces the amount of detail that can be reproduced.
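
Conceptually, a clustered-dot (AM) screen can be sketched with a tiled threshold matrix: a pixel prints wherever the requested tone exceeds the local threshold, so dots grow outward from the centre of each grid cell as the tone darkens. The 4×4 matrix below is a hypothetical example, not a production screen:

```python
import numpy as np

# A hypothetical 4x4 clustered-dot threshold matrix: cells near the
# centre of the cluster have the lowest thresholds, so dots grow
# outward from the centre as the tone darkens.
CLUSTERED = np.array([
    [12,  5,  6, 13],
    [ 4,  0,  1,  7],
    [11,  3,  2,  8],
    [15, 10,  9, 14],
]) / 16.0

def am_halftone(contone, matrix=CLUSTERED):
    """Binary-screen a contone image (values 0.0-1.0, 1.0 = full ink).

    The threshold matrix is tiled across the image; a pixel prints
    wherever its requested tone exceeds the local threshold."""
    h, w = contone.shape
    th, tw = matrix.shape
    tiled = np.tile(matrix, (h // th + 1, w // tw + 1))[:h, :w]
    return contone > tiled

# A flat 50% tone prints exactly half of its pixels as ink:
flat = np.full((8, 8), 0.5)
print(am_halftone(flat).mean())  # 0.5
```

Real screening engines add angle, frequency and dot-shape control on top of this basic thresholding idea.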

Most inkjet presses can print even single dots on their own and produce a fairly uniform tone from them. They can therefore use dispersed screens, sometimes called FM or stochastic halftones.

A simple halftone screen.

 

A dispersed screen uses dots that are all (more or less) the same size, but the distance between them is varied to give lighter or darker tones. There is no regular grid placement; in fact, the placement is more or less randomized (which is what the word ‘stochastic’ means). But truly random placement leads to a very ‘noisy’ result with uneven tonality, so the placement algorithms are carefully designed to avoid this.
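
One common way to generate a dispersed screen is error diffusion. The sketch below uses the classic Floyd–Steinberg weights; it is illustrative only, and not the algorithm used by a production screening engine such as Advanced Inkjet Screens:

```python
import numpy as np

def error_diffusion(contone):
    """Floyd-Steinberg error diffusion, one common way to build a
    dispersed (FM) halftone: every output dot is the same size, and the
    quantization error at each pixel is pushed to its unprocessed
    neighbours, which spreads the dots out fairly evenly instead of
    placing them on a rigid grid."""
    img = contone.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            new = img[y, x] >= 0.5          # binary ink/no-ink decision
            out[y, x] = new
            err = img[y, x] - float(new)    # what we got wrong here...
            if x + 1 < w:                   # ...gets spread to neighbours
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

flat = np.full((16, 16), 0.25)
print(error_diffusion(flat).mean())  # prints a value close to 0.25
```

Note how the tone is preserved on average even though every pixel is only ink or no ink; that is the essence of halftoning.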

Inkjet is being used more and more in labels, packaging, photo finishing and industrial print, all of which often use more than four inks, so the fact that a dispersed screen avoids moiré problems is also very helpful.

Dispersed screening can retain more detail and tonal subtlety than conventional screening can at the same resolution. This makes such screens particularly relevant to single-pass inkjet presses, which tend to have lower resolutions than the imaging methods used on, say, offset lithography. An AM screen at 600 dots per inch (dpi) would be very visible from a reading distance of less than a meter or so, while an FM screen can use dots that are sufficiently small that they produce the optical illusion that there are no dots at all, just smooth tones. Many inkjet presses are now stepping up to 1200dpi, but that’s still lower resolution than a lot of offset and flexo printing.

This blog post has concentrated on binary screening for simplicity. Many inkjet presses can place different amounts of ink at a single location (often described as using different drop sizes or more than one bit per pixel), and therefore require multi-level screening. And inkjet presses often also benefit from halftone patterns that are more structured than FM screens, but that don’t cluster into discrete dots in the same way as AM screens.
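
The idea of multi-level screening can be sketched by dithering between adjacent output levels rather than between ink and no ink. This is a minimal illustration with an invented 2×2 matrix, not a production multi-level screen:

```python
import numpy as np

def multilevel_halftone(contone, levels=4, matrix=None):
    """Minimal multi-level screening sketch: instead of a binary
    ink/no-ink decision, each pixel is quantized to one of `levels`
    output states (e.g. four drop sizes on a greyscale inkjet head).
    A threshold matrix dithers the residual between adjacent levels."""
    if matrix is None:
        # A standard 2x2 Bayer matrix, normalized to 0..1.
        matrix = np.array([[0.0, 0.5], [0.75, 0.25]])
    h, w = contone.shape
    th, tw = matrix.shape
    tiled = np.tile(matrix, (h // th + 1, w // tw + 1))[:h, :w]
    scaled = contone * (levels - 1)
    base = np.floor(scaled)          # the lower of the two drop sizes
    frac = scaled - base             # residual tone to be dithered
    out = base + (frac > tiled)      # bump some pixels to the next level
    return np.clip(out, 0, levels - 1).astype(int)

# A flat 50% tone with four levels lands between levels 1 and 2:
flat = np.full((4, 4), 0.5)
print(multilevel_halftone(flat))  # alternating 1s and 2s, averaging 1.5
```

With more output levels per pixel, less spatial dithering is needed for the same tonal smoothness, which is why greyscale heads can look smoother at a given resolution.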

 

The above is an excerpt from our latest white paper: How to mitigate artifacts in high-speed inkjet printing. Download the white paper here.

Time for an update on VDP!

Over the last fifteen years variable data in digital printing has grown from “the next big thing” with vast, untapped potential to a commonly used process for delivering all manner of personalized information. VDP is used for everything from credit card bills and bank statements to direct mail postcards and personalized catalogues, from college enrolment packs to Christmas cards and photobooks, from labels to tickets, checks to ID cards.

This huge variety of jobs is created and managed by an equally huge variety of software, from specialist composition tools to general-purpose design applications carefully configured for VDP. And they are consumed by workflows involving (or even completely within) the Digital Front End (DFE) for a digital production press, where jobs must be imposed and color managed.

Time, then, to update our popular “Do PDF/VT Right” guide, which has had thousands of downloads since it was first published in 2014, not to mention the number of printed copies distributed at trade shows and industry events.

Do PDF/VT Right – How to make problem-free PDF files for variable data printing

In addition to a general overhaul, there is a new section on the ISO 21812 standard, which allows workflow controls to be added to PDF files, and notes on Harlequin-specific hints to get even more speed out of your DFE if you are a Harlequin user.

The goal remains the same: to provide a set of actionable recommendations that help you ensure that your jobs don’t slow down the print production workflow … without affecting the visual appearance that you’re trying to achieve. As a side benefit, several of the recommendations set out below will also ensure that your PDF files can be delivered more efficiently on the web and to PDF readers on mobile devices in a cross-media publishing environment.

Some of the recommendations made in this guide are things that a graphic designer can apply quickly and easily, using their current tools. Others are intended more for the software companies building composition tools. If all of us work together we can greatly reduce the chance of that “heart-attack” job; the one that absolutely, positively must be in the post today … but that runs really slowly on the press.

Download your copy here.

Looking to reduce errors with simple job management, keep control of color, and run at ultra-high speed for jobs with variable data?

With just a few days to Labelexpo Europe 2019, preparation is in full swing. Come along to booth 9A15 where we’ll be previewing a new version of Fundamentals™, our toolkit for building a digital front end.

Fundamentals is a collaboration between Global Graphics Software and HYBRID Software – and its beauty lies in its simplicity: Fundamentals 2.0 makes it easy for the press operator to keep control of the workflow. Easy step and repeat and nesting via STEPZ with award-winning VDP composition from HYBRID Software makes it possible to estimate and plan single or multi-gang jobs and see how the output will appear when printed, helping to reduce errors and wasted media.

Consistent and predictable color for a wide range of design and creation workflows using industry-standard tools is achieved with Harlequin ColorPro™, and there’s support for ICC profiles, including device ink and N-channel profiles too.

ScreenPro™, the award-winning multi-level screening engine, streams data directly to the print electronics at press speed, unlocking maximum productivity on variable data jobs to process ultra-high data rates with the reliability required to maximize press up time.

Find out more about Fundamentals here: www.globalgraphics.com/fundamentals

We’ll be on stand 9A15 – please stop by and say hello. If you’d like to book a time to chat, simply contact us: sales@globalgraphics.com. We look forward to seeing you.

Join us on stand 9A15 at Labelexpo 2019

Looking for a simple way to print label and flexible packaging jobs? See us at Labelexpo Europe 2019

It’s that time of year again … we’re back from our summer vacations and are now preparing for Labelexpo Europe 2019.

And things have certainly moved on at a pace since we were last in Brussels: This year, on booth 9A15, we’ll be previewing a new version of Fundamentals™, our simple toolkit for building a digital front end. Fundamentals is a cooperation between Global Graphics Software and HYBRID Software and we have developed it so you can access the essential software components you need to create a DFE using a simple, modern web-based user interface.

The latest version, Fundamentals 2.0, has a host of new features that make it easy to print label and flexible packaging jobs in only a few steps and it reduces waste by making the best use of media – all this with accurate, consistent color throughout.

Fundamentals product manager, Tom Mooney says: “We are delighted to introduce some major changes to Fundamentals at Labelexpo, such as Hybrid’s STEPZ imposition and award-winning VDP composition engine, as well as significant technology evolutions with multiple parallel Harlequin RIPs for high-speed PDF processing. We have also introduced a new generation of ScreenPro™ that streams image data at the high press speeds demanded by the new models of digital presses.”

Earlier in the year, we visited Mark Andy Inc (find them on booths 4C45 and 8A60) and saw at first hand how Fundamentals has made a real impact on the Mark Andy ProWORX Digital Front End.

Watch our short film to get the full story:

http://bit.ly/Mark-Andy-Fundamentals

We’ll be on stand 9A15 – please stop by and say hello. If you’d like to book a time to chat, simply contact us: sales@globalgraphics.com. We look forward to seeing you.

www.globalgraphics.com/fundamentals

 

Join us on stand 9A15 at Labelexpo 2019

PDF Processing Steps – the next evolution in handling technical marks

Best practice in handling jobs containing both real graphic content and ‘technical marks’ has evolved over the last couple of decades. Technical marks include things like cut/die lines, fold lines, dimensions, legends etc in a page description language file (usually PDF these days). Much of the time, especially for pouches, folding carton and corrugated work, they’ll come originally from a CAD file and will have been merged with the graphics.

People will want to interact with the technical marks differently at various stages in the workflow:

  • Your CAD specialists will want to see the technical marks and make sure that they’ve not been changed from the original CAD input.
  • Those giving brand owner approval may not want to see the technical marks, but prepress and production manager approvers will definitely want to see both the technical marks and the graphics together on their monitors, with the ability to make layers visible or invisible at will.
  • In some workflows the technical marks from the PDF may be used to make a physical die, or to drive a laser cutter; in others an original CAD file will be used instead.
  • On a digital press you may wish to print a short run of just the technical marks, or a combination of technical marks and graphics to ensure that finishing is properly registered with the prints.
  • The main print run, whether on a conventional press (flexo, offset, etc) or digital, will obviously include the graphics, but won’t include most of the technical marks. You may want to include the legend on the print as fool-proof identification of that job, but you’ll obviously need to disable printing of any marks that overlap with the live area or bleed, such as cut and fold marks.
  • Occasionally you may wish to do another short run with technical marks after the main print run, to ensure that finishing has not drifted out of register.

So there are a lot of places in the entire process where both technical marks and graphics may need to be turned on or off. How do you do that in your RIP?

Historically, the first method used to include technical marks, originally in PostScript but now also in PDF, was to specify each kind of technical mark in a ‘technical separation’, encoded as a spot color in the job. Most operators tried to use a name for that spot color that indicated its intent, but there weren’t any standards, so you could end up with ‘Cut’ (or ‘CUT’, ‘cut’ etc), ‘cut-line’, ‘cut line’, ‘cutline’, ‘die’ and so on. And that’s just thinking about naming in English. The names chosen are usually fairly meaningful to a human operator, but they can’t be used reliably for automated processing because of the amount of variation.

As a result, many jobs arriving at a converter, at least from outside of that company, must be reviewed, and the spot names replaced, or the prepress and RIP configured to use the names from that job. That manual processing takes time and introduces the potential for errors.
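
To make the problem concrete, here is a hypothetical sketch of the kind of spot-name lookup a converter might maintain. Every name and variant in it is invented for illustration; in practice the table keeps growing as new customers arrive, which is exactly the maintenance burden being described:

```python
import re

# A hypothetical lookup of spot-colour name variants seen for each kind
# of technical mark; real shops accumulate many more over time.
SPOT_NAME_VARIANTS = {
    "Cutting": {"cut", "cutline", "cut line", "die", "diecut"},
    "Folding": {"fold", "foldline", "fold line"},
    "Varnish": {"varnish", "spot varnish", "uv"},
}

def classify_spot(name):
    """Guess the intent of a technical separation from its spot name.
    Returns the mark type, or None if the name isn't recognized."""
    # Normalize case, and collapse spaces/underscores/hyphens.
    key = re.sub(r"[\s_-]+", " ", name.strip().lower())
    compact = key.replace(" ", "")
    for mark_type, variants in SPOT_NAME_VARIANTS.items():
        if key in variants or compact in variants:
            return mark_type
    return None

print(classify_spot("CUT"))        # Cutting
print(classify_spot("fold_line"))  # Folding
print(classify_spot("Schnitt"))    # None - yet another unknown variant
```

The `None` case is the one that forces manual review, and no amount of fuzzy matching eliminates it entirely.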

But let’s assume you’ve completed that stage; how do you configure your RIP to achieve what you need with those technical separations?

The most obvious mechanism to turn off some technical marks is to tell the RIP to render the relevant spot colors as their own separations, but then not to image them on the print. It’s a very simple model, which works well as long as the job was constructed correctly, with all of the technical marks set to overprint. When somebody upstream forgot and left a cut or fold line as knockout (which never happens, of course!) you’d get a white line through the real graphics if the technical mark was on top of them.

The next evolution of that would be to configure the RIP to say that the nominated spot separation should never knock out of any other separation. That’s a configuration option in Harlequin RIPs but may not be widely available elsewhere.

Or you could tell the RIP to completely ignore one or more nominated spot colors, so they have no effect at all on any other marks on the page. Again, that’s a configuration option in Harlequin RIPs, and is one of the best ways of managing technical marks that are saved into the PDF file as technical separations.

Alternatively, since technical marks (like many other parts of a label or packaging job) are usually captured in a PDF layer (or optional content group to use the technical term), you can turn those layers on and off. Again, there are rich controls for managing PDF layers in Harlequin RIPs.

But none of these techniques get away from the need to manually check each file and set up prepress and the RIP appropriately for the spot names or layers that have been used for technical marks.

And that’s where the new ISO standard, ISO 19593-1:2018, comes in. It defines “PDF Processing Steps”, a mechanism to uniquely identify technical marks in PDF files, along with their intended function, from cutting to folding and creasing, to bleed areas, white and varnish, braille, dimensions, legends etc. It does this by building on the common practice of saving the technical marks in PDF layers, but adds some identification metadata that is not dependent on the vendor, the language or the normal practice of the originator, prepress or pressroom.

So now you can look at a PDF file and see definitively that a layer called ‘cut’ contains cutting lines. The name ‘cut’ is now just a convenience; the real information is in metadata which is completely and reliably computer-readable. In other words, it doesn’t matter if that layer were named ‘Schnittlinie’ or anything else; the manual step of identifying names that someone, somewhere put in the file upstream and figuring out what each one means, is completely eliminated.
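
As an illustrative sketch (using plain Python dicts in place of a real PDF library's objects, and assuming the GTS_ProcStepsGroup/GTS_ProcStepsType layer keys defined by the standard), selection driven by the metadata rather than the layer name looks like this:

```python
# Each dict stands in for a parsed OCG (layer) dictionary from a
# Processing Steps file; ISO 19593-1 adds standardized metadata keys
# alongside whatever human-readable name the designer chose.
layers = [
    {"Name": "Schnittlinie",
     "GTS_ProcStepsGroup": "Structural", "GTS_ProcStepsType": "Cutting"},
    {"Name": "Falz",
     "GTS_ProcStepsGroup": "Structural", "GTS_ProcStepsType": "Folding"},
    {"Name": "Artwork"},  # no Processing Steps metadata: real graphics
]

def select_layers(layers, wanted_types):
    """Pick the layers whose Processing Steps type is in `wanted_types`.
    The human-readable layer name is irrelevant: the decision is driven
    entirely by the standardized metadata."""
    return [l["Name"] for l in layers
            if l.get("GTS_ProcStepsType") in wanted_types]

# Drive a cutting machine: take cutting marks whatever the layer was
# called by the designer, in whatever language.
print(select_layers(layers, {"Cutting", "CuttingCreasing", "PartialCutting"}))
# -> ['Schnittlinie']
```

The key point is that no lookup table of names is needed any more; the same configuration works for every conforming job.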

We implemented support for PDF Processing Steps in version 12.1r0 of the Harlequin RIP, and have worked with a number of vendors whose products create files with Processing Steps in them (including Hybrid Software, Esko and Callas) to ensure that everything works seamlessly. We also worked through a wide variety of current and probable use cases to ensure that our implementation can address real-world needs. As an example we added the ability to control all graphics on a PDF page that aren’t in Processing Step layers as if they were just another layer.

In practice this means that Harlequin can be configured to deliver pretty much whatever you need, such as:

  • Export all technical marks identified as Cutting, PartialCutting, CuttingCreasing etc to a vector format to drive a cutting machine.
  • Render and print all technical marks, but none of the real graphics, for checking registration.
  • Render the real graphics, plus dimensions and legend, for the main print run.

PDF Processing Steps promises the ability to control technical marks without needing to analyze each file and create a different setup for each job.

The most important thing that PDF Processing Steps gives us is that you can create a configuration for one of those use cases (or for many other variations) and know that it will work for all jobs that are sent to you using PDF Processing Steps; you won’t need to reconfigure for the next job, just because an operator used different spot names.

Of course, it’ll take a while for everyone to migrate from using spot names to PDF Processing Steps. But I think you’ll agree that the benefits of doing so, in increasing efficiency and reducing the potential for errors, are obvious and significant.

For more information read the press release here.

Choosing the class of your raster image processor (RIP) – Part II

Part II: Factors influencing your choice of integration

If you’re in the process of building a digital front end for your press, you’ll need to consider how much RIPing power you need for the capabilities of the press and the kinds of jobs that will be run on it. The RIP converts text and image data from many file formats including PDF, TIFF™ or JPEG into a format that a printing device such as an inkjet print head, toner marking engine or laser plate-setter can understand. But how do you know what RIP is best for you and what solution can best deliver maximum throughput on your output device? In this second post, Global Graphics Software’s CTO, Martin Bailey, discusses the factors to consider when choosing a RIP.

In my last post I gave a pointer to a spreadsheet that can be used to calculate the data rate required for a digital press. This single number can be used to make a first approximation of which class of RIP integration you should be considering.

For integrations based on the Harlequin RIP® reasonable guidelines are:

  • Up to 250MB/s: can be done with a single RIP using multi-threading in that RIP
  • Up to 1GB/s: use multiple RIPs on a single server using the Harlequin Scalable RIP
  • Over 1GB/s: use multiple RIPs spread over multiple servers using the Harlequin Scalable RIP

These numbers indicate the data rate that the RIP needs to provide when every copy of the output is different. The value may need to be adjusted for other scenarios:

  • If you’re printing the same raster many times, the RIP data rate may be reduced in proportion; the RIP has 100 times as long to process a PDF page if you’re going to be printing 100 copies of it, for instance.
  • If you’re printing variable data print jobs with significant re-use of graphical elements between copies, then Harlequin VariData™ can be used to accelerate processing. This effect is already factored into the recommendations above.
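
Those adjustments can be sketched as follows. The thresholds are the guideline figures from the list above; the function names themselves are invented for illustration:

```python
def required_rip_rate(raw_rate_mb_s, copies=1):
    """Scale the raw 'every copy different' data rate: with N identical
    copies the RIP has N times as long per page, so the required RIP
    data rate falls in proportion."""
    return raw_rate_mb_s / copies

def rip_class(rate_mb_s):
    """Map a required data rate onto the Harlequin integration classes
    listed above (thresholds in MB/s, from this post's guidelines)."""
    if rate_mb_s <= 250:
        return "single multi-threaded RIP"
    if rate_mb_s <= 1000:
        return "multiple RIPs on one server (Harlequin Scalable RIP)"
    return "multiple RIPs across multiple servers (Harlequin Scalable RIP)"

# A press needing 800 MB/s when every copy is different, but printing
# 100 copies of a static job, needs a much simpler integration:
print(rip_class(required_rip_rate(800, copies=100)))  # single multi-threaded RIP
```

As the post goes on to caution, sizing to the adjusted figure only makes sense if you are sure no customer will run the press flat out with fully variable data.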

The complexity of the jobs you’re rendering will also have an impact.

Transactional or industrial labelling jobs, for example, tend to be very simple, with virtually no live PDF transparency and relatively low image coverage. They are therefore typically fast to render. If your data rate calculation puts you just above a threshold in the list above, you may be able to take one step down to a simpler system.

On the other hand, jobs such as complex marketing designs or photobooks are very image-heavy and tend to use a lot of live transparency. If your data rate is just below a threshold on the list above, you will probably need to step up to a higher level of system.

But be careful when making those adjustments. If you do, you may have to choose either to build and support multiple variations of your DFE to support different classes of print site, or to design a single model of DFE that can cope with the needs of the great majority of your customers. Building a single model certainly reduces development, test and support costs, and may reduce your average bill of materials. But doing that also tends to mean that you will need to base your design on the raw, “every copy different”, data rate requirements, because somebody, somewhere will expect to be able to use your press to do just that.

Our experience has also been that the complexity of jobs in any particular sector is increasing over time, and the run lengths that people will want to print are shortening. Designing for current expectations may give you an under-powered solution in a few years’ time, maybe even by the time you ship your first digital press. Moore’s law, that computers will continue to deliver higher and higher performance at about the same price point, will cancel out some of that effect, but usually not all of it.

And if your next press will print with more inks, at a higher resolution, and at higher speed you may be surprised at how much impact that combination will have on the data rate requirements, and therefore possibly on the whole architecture of the Digital Front End to drive it.

And finally, the recommendations above implicitly assume that a suitable computer configuration is used. You won’t achieve 1GB/s output from multiple RIPs on a computer with a single, four-core CPU, for example. Key aspects of hardware affecting speed are: number of cores, CPU clock speed, disk space available, RAM available, disk read and write speed, bandwidth to memory, L2 and L3 cache sizes on the CPU and (especially for multi-server configurations) network speed and bandwidth.

Fortunately, the latest version of the Harlequin RIP offers a framework that can help you to meet all these requirements. It offers a complete scale of solutions from a single RIP through multiple RIPs on a single server, up to multiple RIPs across multiple servers.

 

The above is an excerpt from our latest white paper: Scalable performance with the Harlequin RIP. Download the white paper here.

Read Part I – Calculating data rates here.

Choosing the class of your raster image processor (RIP) – Part I

Part I: How to calculate data rates

If you’re in the process of choosing or building a digital front end for your press, you’ll need to consider how much RIPing power you need for the capabilities of the press and the kinds of jobs that will be run on it. The RIP converts text and image data from many file formats including PDF, TIFF™ or JPEG into a format that a printing device such as an inkjet printhead, toner marking engine or laser platesetter can understand. But how do you know what RIP is best for you and what solution can best deliver maximum throughput on your output device? This is the first of two posts by Global Graphics Software’s CTO, Martin Bailey, where he advises how to size a solution for a digital press using the data rate required on the output side.

Over the years at Global Graphics Software, we’ve found that the best guidance we can give to our OEM partners in sizing digital press systems based on our own solution, the Harlequin RIP®, comes from a relatively simple calculation of the data rate required on the output side. And now we’re making a tool to calculate those data rates available to you. All you need to do is to download it from the web and to open it in Excel.

Download it here:  Global_Graphics_Software_Press_data_rates

You will, of course, also need the specifications of the press(es) that you want to calculate data rates for.

You can use the spreadsheet to calculate data rates based on pages per minute, web speed, sheets or square meters per minute or per hour, or on head frequency. Which is most appropriate for you depends on which market sector you’re selling your press into and where your focus is on the technical aspects of the press.

It calculates the data rate for delivering unscreened 8 bits per pixel (contone) rasters. This has proven to be a better metric for estimating RIP requirements than taking the bit depth of halftoned raster delivery into account. In practice Harlequin will run at about the same speed for 8-bit contone and for 1-bit halftone output because the extra work of halftoning is offset by the reduced volume of raster data to move around. Multi-level halftones delivered in 2-bit or 4-bit rasters take a little bit longer, but not enough to need to be considered here.
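
For readers who would rather see the arithmetic than open the spreadsheet, a simplified version of the sheet-fed calculation might look like this. The formula is illustrative only, not the spreadsheet itself, and the press dimensions below are invented:

```python
def contone_data_rate_mb_s(width_in, height_in, dpi, colorants,
                           pages_per_minute):
    """Unscreened 8-bit contone data rate for a sheet-fed press:
    one byte per pixel per colorant, converted to MB per second."""
    pixels_per_page = (width_in * dpi) * (height_in * dpi)
    bytes_per_page = pixels_per_page * colorants  # 8 bits = 1 byte/pixel
    return bytes_per_page * pages_per_minute / 60 / 1e6

# A hypothetical 20in x 29in sheet at 1200 dpi, CMYK, 50 pages/minute:
print(round(contone_data_rate_mb_s(20, 29, 1200, 4, 50), 1))  # -> 2784.0
```

Even this modest-sounding hypothetical press needs well over 1GB/s, which illustrates why the spreadsheet numbers tend to surprise people.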

You can also use the sheet-fed calculation for conventional print platesetters if you so desire. You might find it eye-opening to compare data rate requirements for an offset or flexo platesetter with those for a typical digital press!

Fortunately, the latest version of the Harlequin RIP offers a framework that can help you to meet all these requirements. It offers a complete scale of solutions from a single RIP through multiple RIPs on a single server, up to multiple RIPs across multiple servers.

In my next post I’ll share how the data rate number can be used to make a first approximation of which class of RIP integration you should be considering.

 

The above is an excerpt from our latest white paper: Scalable performance with the Harlequin RIP®. Download the white paper here

What does a RIP do?

Ever wondered what a raster image processor or RIP does? And what does RIPing a page mean? Read on to learn more about the phases of a RIP, the engine at the heart of your Digital Front End.

The RIP converts text and image data from many file formats including PDF, TIFF™ or JPEG into a format that a printing device such as an inkjet print head, toner marking engine or laser platesetter can understand. The process of RIPing a page requires several steps to be performed in order, regardless of whether that page is submitted as PostScript, PDF or any other page description language.

Interpreting: the page description language to be RIPed is read and decoded into an internal database of graphical elements that must be placed on the page. Each may be an image, a character of text (including font, size, color etc), a fill or stroke etc. This database is referred to as a display list.

Compositing: The display list is pre-processed to apply any live transparency that may be in the job. This phase is only required for any pages in PDF and XPS jobs that use live transparency; it’s not required for PostScript language pages because those cannot include live transparency.

Rendering: The display list is processed to convert every graphical element into the appropriate pattern of pixels to form the output raster. The term ‘rendering’ is sometimes used specifically for this part of the overall processing, and sometimes to describe the whole of the RIPing process. It’s only used in the first sense in this document.

Output: the raster produced by the rendering process is sent to the marking engine in the output device, whether it’s exposing a plate, a drum for marking with toner, an inkjet head or any other technology.

Sometimes this step is completely decoupled from the RIP, perhaps because plate images are stored as TIFF files and then sent to a CTP platesetter later, or because a near-line or off-line RIP is used for a digital press. In other environments the output stage is tightly coupled with rendering.
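
The four phases described above can be caricatured in a few lines of Python. These are toy stand-ins only; a real RIP's phases are vastly more complex and are heavily interleaved and parallelized:

```python
def interpret(page_description):
    # Interpreting: decode the page description into a display list of
    # graphical elements (here, each "element" is just a tone value).
    return [{"tone": t} for t in page_description]

def composite(display_list):
    # Compositing: pre-process live transparency (toy stand-in: halve
    # each tone, as if blending at 50% opacity against white).
    return [{"tone": e["tone"] * 0.5} for e in display_list]

def render(display_list):
    # Rendering: convert every element into output pixels (toy
    # stand-in: one pixel per element, thresholded to ink/no-ink).
    return [1 if e["tone"] >= 0.5 else 0 for e in display_list]

def output(raster):
    # Output: hand the raster to the marking engine (toy: return it).
    return raster

def rip_page(page_description, has_live_transparency):
    display_list = interpret(page_description)
    if has_live_transparency:  # PostScript pages always skip this phase
        display_list = composite(display_list)
    return output(render(display_list))

print(rip_page([0.2, 0.6, 0.9], has_live_transparency=False))  # [0, 1, 1]
print(rip_page([0.2, 0.6, 0.9], has_live_transparency=True))   # [0, 0, 0]
```

The fixed ordering is the important part: compositing must happen on the display list before rendering, and rendering must finish before output can begin for that band or page.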

RIPing often includes a number of additional processes; in the Harlequin RIP® for example:

  • In-RIP imposition is performed during interpretation
  • Color management (Harlequin ColorPro®) and calibration are applied during interpretation or compositing, depending on configuration and job content
  • Screening is applied during rendering, or after the Harlequin RIP has delivered unscreened raster data if screening is being applied post-RIP – when Global Graphics’ ScreenPro™ and PrintFlat™ technologies are being used, for example.

These are all important processes in many print workflows.

 

The Harlequin Host Renderer
The Harlequin RIP includes native interpretation of PostScript, EPS, DCS, XPS, JPEG, BMP and TIFF as well as PDF, PDF/X and PDF/VT, so whatever workflows your target market uses, it gives accurate and predictable image output time after time.

 

The above is an excerpt from our latest white paper: Scalability with the Harlequin RIP®. Download the white paper here

Unlocking document potential

Using Mako to pre-process PDFs for print workflows follows quite naturally. With its built-in RIP, Mako has exceptional capability to deal with fonts, color, transparency and graphic complexity to suit the most demanding of production requirements.

What is less obvious is Mako’s value to enterprise print management (EPM). Complementing Mako’s support for PDF and XPS is the ability to convert from (and to) PCL5 and PCL/XL. Besides conversion, Mako can also render such documents, for example to create a thumbnail of a PCL job so that a user can more easily identify the correct document to print or move it to the next stage in a managed process. Mako’s document object model (DOM) architecture allows content to be extracted for record-keeping purposes or be added to – a watermark or barcode, for example.

Document Object Model to access the raw building blocks of documents.

The ability to look inside a document, irrespective of the format of the original, has brought Mako to the attention of electronic document and records management system (EDRMS) vendors, seeking to add value to their data extraction, search and categorization processes. Being able to treat different formats of document in the same way simplifies development and improves process efficiency.

Mako’s ability to analyse page layout and extract text in the correct reading order, or to interpret and update document metadata, is a valuable tool to developers of EDRMS solutions. In the face of GDPR (General Data Protection Regulation) and sector-specific regulations, the need for such solutions is clear. And as many of those documents are destined to be printed at some point in their lifecycle, they exist as discrete, paginated digital documents for which Mako is the key to unlocking their business value.

If you would like to discuss this or any aspect of Mako, please email justin.bailey@globalgraphics.com.