Pigmento: Pigment-Based Image Analysis and Editing

Jianchao Tan, Stephen DiVerdi, Jingwan (Cynthia) Lu, Yotam Gingold
IEEE Transactions on Visualization and Computer Graphics (TVCG), to appear.

Paper: PDF, 300dpi images (15 MB) | PDF, full size images (45 MB)

Code: GitHub

Presentation: Keynote (200 MB) | PDF (140 MB) | PDF with notes (140 MB) | Video: YouTube or MP4 (500 MB)

Supplementary material (additional results and multi-spectral decomposition data): Zip (300 MB)

Analysis and editing of Monet’s “Impression, soleil levant.” From left to right: input image, extracted palette in RGB, multispectral coefficient curves for palette pigments, mixing weights, recoloring, and cut-copy-paste.

Abstract:

The colorful appearance of a physical painting is determined by the distribution of paint pigments across the canvas, which we model as a per-pixel mixture of a small number of pigments with multispectral absorption and scattering coefficients. We present an algorithm to efficiently recover this structure from an RGB image, yielding a plausible set of pigments and a low RGB reconstruction error. We show that under certain circumstances we are able to recover pigments close to the ground truth, and that in all cases our results are plausible. Using our decomposition, we recast standard digital image editing operations as operations in pigment space rather than RGB space, with naturally different results. We demonstrate tonal adjustments, selection masking, cut-copy-paste, recoloring, palette summarization, and edge enhancement.
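The per-pixel mixing model described above follows the Kubelka-Munk equations, in which each pigment has per-wavelength absorption (K) and scattering (S) coefficients, mixtures blend K and S linearly by weight, and reflectance is a nonlinear function of the ratio K/S. A minimal sketch of this mixing model, using made-up illustrative coefficients (the actual measured pigment data is in the supplementary material), might look like:

```python
import numpy as np

# Hypothetical absorption (K) and scattering (S) coefficients for two
# pigments, sampled at three wavelengths. Values are illustrative only.
K = np.array([[0.05, 0.8, 1.2],    # pigment 1
              [1.5,  0.3, 0.1]])   # pigment 2
S = np.array([[1.0,  0.9, 0.8],
              [0.7,  1.0, 1.1]])

def km_reflectance(K, S):
    """Kubelka-Munk reflectance of an opaque paint layer:
    R = 1 + K/S - sqrt((K/S)^2 + 2*K/S)."""
    q = K / S
    return 1.0 + q - np.sqrt(q * q + 2.0 * q)

def mix_reflectance(weights, K, S):
    """Blend pigment K and S coefficients linearly with convex weights,
    then evaluate the reflectance of the resulting mixture."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()          # normalize so the weights sum to 1
    Km = w @ K               # mixture absorption, per wavelength
    Sm = w @ S               # mixture scattering, per wavelength
    return km_reflectance(Km, Sm)

# Reflectance of a 50/50 mixture at each sampled wavelength; values
# always lie strictly between 0 and 1 for positive K and S.
R = mix_reflectance([0.5, 0.5], K, S)
```

Note that because reflectance is nonlinear in K/S, the mixture's reflectance is not a linear blend of the individual pigments' reflectances, which is why editing in pigment space differs from editing in RGB.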

BibTeX (expected):

@article{Tan:2018:PPB,
 author    = {Tan, Jianchao and DiVerdi, Stephen and Lu, Jingwan and Gingold, Yotam},
 title     = {Pigmento: Pigment-Based Image Analysis and Editing},
 journal   = {IEEE Transactions on Visualization and Computer Graphics (TVCG)},
 volume    = {to appear},
 year      = {2018},
 publisher = {IEEE},
 keywords  = {painting, color, RGB, non-photorealistic editing, NPR, Kubelka-Munk, pigment, paint, mixing, layering, image, editing}
}

Funding: This work was supported in part by the United States National Science Foundation (IIS-1451198 & IIS-1453018), a Google research award, and a gift from Adobe Systems Inc.