A nostalgic look back at the ISO PDF/X standard

In this blog post, Martin Bailey recalls his days as the first chair of the ISO PDF/X task force and how the standard has developed over the last 20 years.

Over the last few years there has been quite an outpouring of nostalgia around PDF. At first that was for PDF itself, but at the end of 2021 we also reached two decades since the first publication of an ISO PDF/X standard.

I’d been involved with PDF/X in its original home of CGATS (the Committee for Graphic Arts Technical Standards, the body accredited by ANSI to develop US national standards for printing) for several years before it moved to ISO. And then I became the first chair of the PDF/X task force in ISO. So I thought I’d add a few words to the pile, and those have now been published on the PDF Association’s web site at https://www.pdfa.org/the-route-to-pdf-x-and-where-we-are-now-a-personal-history/.

I realised while I was writing it that it really was a personal history for me. PDF/X was one of the first standards that I was involved in developing, back when the very idea of software standards was quite novel. Since then, supported and encouraged by Harlequin and Global Graphics Software, I’ve also worked on standards and chaired committees in CIP3, CIP4, Ecma, the Ghent Working Group, ISO and the PDF Association (I apologise if I’ve missed any off that list!).

It would be easy to assume that working on all of those standards meant that I knew a lot about what we were standardising from day one. But the reality is that I’ve learned a huge amount of what I know about print from being involved, and from talking to a lot of people.

Perhaps the most important lesson was that you can’t (or at least shouldn’t) only take into account your own use cases while writing a standard. Most of the time a standard that satisfies only a single company should just be proprietary working practice instead. It’s only valuable as a standard if it enables technologies, products and workflows in many different companies.

That sounds as if it should be obvious, but the second major lesson was something that has been very useful in environments outside of standards as well. An awful lot of people assume that everyone cares a lot about the things that they care about, and that everything else is unimportant. As an example, next time you’re at a trade show (assuming they ever come back in their historical form) take a look and see how many vendors claim to have product for “the whole workflow”. Trust me, for production printing, nobody has product for the whole workflow. Each one just means that they have product for the bits of the workflow that they think are important. The trouble is that you can’t actually print stuff effectively and profitably if all you have is those ‘important’ bits. To write a good standard you have to take off the blinkers and see beyond what your own products and workflows are doing. And in doing that I’ve found that it also teaches you more about what your own ‘important’ parts of the workflow need to do.

Along the way I’ve also met some wonderful people and made some good friends. Our conversations may have a tendency to dip in and out of print geek topics, but sometimes those are best covered over a beer or two!

About the author

Martin Bailey, CTO, Global Graphics Software

Martin Bailey is currently the primary UK expert to the ISO committees maintaining and developing PDF and PDF/VT and is the author of Full Speed Ahead: how to make variable data PDF files that won’t slow your digital press, a new guide offering advice to anyone with a stake in variable data printing including graphic designers, print buyers, composition developers and users.

Further reading

  1. Compliance, compatibility, and why some tools are more forgiving of bad PDFs
  2. What’s the difference between PDF/X-1a and PDF/X-4?

Be the first to receive our blog posts, news updates and product news. Why not subscribe to our monthly newsletter? Subscribe here

Follow us on LinkedIn, Twitter and YouTube

What is a Raster Image Processor (RIP)?

Ever wondered what a raster image processor or RIP does? And what does RIPping a file mean? Read on to learn more about the phases of a RIP, the engine at the heart of your Digital Front End (DFE).

The RIP converts text and image data from many file formats including PDF, TIFF™ or JPEG into a format that a printing device such as an inkjet printhead, toner marking engine or laser platesetter can understand. The process of RIPping a job requires several steps to be performed in order, regardless of the page description language (such as PDF) that it’s submitted in. Even image file formats such as TIFF, JPEG or PNG usually need to be RIPped, to convert them into the correct color space, at the right resolution and with the right halftone screening for the press.

Interpreting: The file to be RIPped is read and decoded into an internal database of graphical elements that must be placed on the output. Each may be an image, a character of text (including font, size, color etc), a fill or stroke etc. This database is referred to as a display list.
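The display list can be pictured as an ordered collection of decoded graphical elements. A minimal Python sketch follows; the class and field names here are illustrative, not any real RIP's internal data structures:

```python
from dataclasses import dataclass, field

@dataclass
class DisplayListElement:
    """One graphical element decoded from the page description."""
    kind: str               # "image", "text", "fill", "stroke", ...
    bbox: tuple             # (x0, y0, x1, y1) placement on the page
    attrs: dict = field(default_factory=dict)  # e.g. font, size, color

@dataclass
class DisplayList:
    """The ordered elements to be placed on one page of output."""
    elements: list = field(default_factory=list)

    def add(self, element: DisplayListElement) -> None:
        self.elements.append(element)

# Interpretation walks the incoming job and populates the display list:
page = DisplayList()
page.add(DisplayListElement("text", (72, 700, 300, 714),
                            {"font": "Helvetica", "size": 12, "color": "K"}))
page.add(DisplayListElement("image", (72, 400, 500, 680), {"dpi": 300}))
print(len(page.elements))  # 2
```

The point of the intermediate structure is that later phases (compositing, rendering) can iterate over it without ever re-reading the source file.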

Compositing: The display list is pre-processed to apply any live transparency that may be in the job. This phase is only required for any graphics in formats that support live transparency, such as PDF; it’s not required for PostScript language jobs or for TIFF and JPEG images because those cannot include live transparency.
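At the heart of flattening live transparency is compositing each transparent element onto its backdrop. A minimal sketch of the standard Porter-Duff "over" blend on a single pixel is shown below; this is illustrative only, since real PDF compositing also has to handle blend modes, transparency groups, knockout and isolation:

```python
def over(src_color, src_alpha, dst_color):
    """Porter-Duff 'over': blend a transparent source pixel onto an
    opaque backdrop pixel. Colors are per-channel floats in 0..1."""
    return tuple(src_alpha * s + (1.0 - src_alpha) * d
                 for s, d in zip(src_color, dst_color))

# A 50%-opaque red object over a white backdrop flattens to pale red:
result = over((1.0, 0.0, 0.0), 0.5, (1.0, 1.0, 1.0))
print(result)  # (1.0, 0.5, 0.5)
```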

Rendering: The display list is processed to convert every graphical element into the appropriate pattern of pixels to form the output raster. The term ‘rendering’ is sometimes used specifically for this part of the overall processing, and sometimes to describe the whole of the RIPping process.
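As a toy illustration of this phase, the sketch below rasterizes rectangular fills from a simplified display list into a 1-bit raster. A real renderer handles arbitrary shapes, clipping, anti-aliasing and halftone screening, none of which appear here:

```python
def render(display_list, width, height):
    """Convert display-list fills into a 1-bit raster
    (0 = unmarked, 1 = marked). Each element is a filled
    rectangle given as integer (x0, y0, x1, y1)."""
    raster = [[0] * width for _ in range(height)]
    for x0, y0, x1, y1 in display_list:
        for y in range(max(0, y0), min(height, y1)):
            for x in range(max(0, x0), min(width, x1)):
                raster[y][x] = 1
    return raster

raster = render([(2, 1, 6, 3)], width=8, height=4)
print(sum(map(sum, raster)))  # 8 marked pixels (4 wide x 2 tall)
```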

Output: The raster produced by the rendering process is sent to the marking engine in the output device, whether it’s exposing a plate, a drum for marking with toner, an inkjet head or any other technology.

Sometimes this step is completely decoupled from the RIP, perhaps because plate images are stored as TIFF files and then sent to a CTP platesetter later, or because a near-line or off-line RIP is used for a digital press. In other environments the output stage is tightly coupled with rendering, and the output raster is kept in memory instead of writing it to disk to increase speed.

RIPping often includes a number of additional processes; in the Harlequin RIP® for example:

  • In-RIP imposition is performed during interpretation
  • Color management (Harlequin ColorPro®) and calibration are applied during interpretation or compositing, depending on configuration and job content
  • Screening can be applied during rendering. Alternatively it can be done after the Harlequin RIP has delivered unscreened raster data; this is valuable if screening is being applied using Global Graphics’ ScreenPro™ and PrintFlat™ technologies, for example.

A DFE for a high-speed press will typically be using multiple RIPs running in parallel to ensure that they can deliver data fast enough. File formats that can hold multiple pages in a single file, such as PDF, are split so that some pages go to each RIP, load-balancing to ensure that all RIPs are kept busy. For very large presses huge single pages or images may also be split into multiple tiles and those tiles sent to different RIPs to maximize throughput.
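The load-balancing idea can be sketched with a worker pool standing in for the parallel RIPs. The names are illustrative and a real DFE schedules work across separate processes or machines, but the scheduling principle is the same: each free worker takes the next page, so no RIP sits idle while others are overloaded:

```python
from concurrent.futures import ThreadPoolExecutor

def rip_page(page_number):
    """Stand-in for one RIP instance rasterizing a single page."""
    return page_number, f"raster-for-page-{page_number}"

# A 16-page job split across four parallel "RIPs"; the executor hands
# each idle worker the next page, keeping all four busy until the job
# is done.
with ThreadPoolExecutor(max_workers=4) as rips:
    rasters = dict(rips.map(rip_page, range(1, 17)))

print(len(rasters))  # 16
```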

The raster image processor pipeline. The Harlequin RIP includes native interpretation of PostScript, EPS, DCS, TIFF, JPEG, PNG and BMP as well as PDF, PDF/X and PDF/VT, so whatever workflows your target market uses, it gives accurate and predictable image output time after time.

Harlequin Host Renderer brochure


To find out more about the Harlequin RIP, download the latest brochure here.


This post was first published in June 2019.

Further reading:

1. Where is screening performed in the workflow?

2. What is halftone screening?

3. Unlocking document potential



Mako™ – the print developer’s Swiss Army knife


Working with a Mako customer recently, I showed him how to code a utility to extract data from a stack of PDF invoices to populate a spreadsheet. I suppose you could describe it as reverse database publishing. This customer had originally licensed Mako to convert XPS to PDF, and later used it to generate CMYK bitmaps of the pages, i.e. using it as a RIP (raster image processor).
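Mako's actual API isn't shown here, but the shape of that utility can be sketched in Python. In this sketch `extract_text` is a hypothetical stand-in for whatever text-extraction call the SDK provides (it is not Mako's real API, and the invoice fields are invented for illustration):

```python
import csv
import io
import re

def extract_text(pdf_path):
    """Hypothetical stand-in for an SDK call returning a page's text."""
    return "Invoice No: 1042\nDate: 2021-03-15\nTotal: 199.00"

def invoice_row(pdf_path):
    """Pull the fields we care about out of one invoice's text."""
    text = extract_text(pdf_path)
    grab = lambda pattern: re.search(pattern, text).group(1)
    return [grab(r"Invoice No:\s*(\S+)"),
            grab(r"Date:\s*(\S+)"),
            grab(r"Total:\s*(\S+)")]

# Walk the stack of invoices and emit spreadsheet-ready CSV rows:
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["invoice", "date", "total"])
for pdf in ["invoice-0001.pdf"]:
    writer.writerow(invoice_row(pdf))
print(out.getvalue().strip())
```

The "reverse database publishing" description fits: instead of merging a database into pages, the utility mines pages back into database rows.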

With this additional application of Mako, the customer observed that Mako was “like a Swiss Army knife” because it offered so many tools in one – converting, rendering, extracting, combining and processing pages and the components that make them up. And it does this not just for PDF but for XPS, PCL and PostScript® too. His description struck a chord with me, as it seemed very appropriate. Mako does indeed offer a wide range of capabilities for processing print job formats. It’s not the fastest or most feature-rich of the RIPs from Global Graphics Software – that would be Harlequin®. Nor the most sophisticated and performant of screening tools – that would be ScreenPro™. But Mako can do both of those things very competently, and much more besides.

For example, we have used Mako to create a Windows desktop app to edit a PDF in ways relevant to production print workflows, such as changing spot colors or converting them to process colors. All the viewing and editing operations are implemented with Mako API calls. That fact alone emphasizes the wide range of applications to which Mako can be put and, I think, fully justifies that “Swiss Army knife” moniker.

For more information visit: www.globalgraphics.com/mako