Toward Tiny and High-quality Facial Makeup with Data Amplify Learning

Anonymous Author

Our TinyBeauty synthesizes high-quality makeup styles while keeping facial content consistent across frames, enabling seamless application to video.

Abstract

Contemporary makeup approaches primarily hinge on unpaired learning paradigms, yet they grapple with the challenges of inaccurate supervision (e.g., face misalignment) and sophisticated facial prompts (including face parsing and landmark detection). These challenges prohibit low-cost deployment of facial makeup models, especially on mobile devices. To solve the above problems, we propose a brand-new learning paradigm, termed "Data Amplify Learning (DAL)," alongside a compact makeup model named "TinyBeauty." The core idea of DAL lies in employing a Diffusion-based Data Amplifier (DDA) to "amplify" limited images for model training, thereby enabling accurate pixel-to-pixel supervision with merely a handful of annotations. Two pivotal innovations in DDA facilitate this training approach: (1) a Residual Diffusion Model (RDM) is designed to generate high-fidelity detail and circumvent the detail-vanishing problem of vanilla diffusion models; (2) a Fine-Grained Makeup Module (FGMM) is proposed to achieve precise makeup control and combination while retaining face identity. Coupled with DAL, TinyBeauty necessitates merely 80K parameters to achieve state-of-the-art performance without intricate face prompts. Meanwhile, TinyBeauty achieves a remarkable inference speed of up to 460 fps on the iPhone 13. Extensive experiments show that DAL can produce highly competitive makeup models using only 5 image pairs.

The Data Amplify Learning process contains two components: (1) a data amplifier, which uses a pretrained diffusion model to amplify a small set of seed data into a larger synthesized dataset; and (2) a lightweight model, which is trained on the amplified data to accurately learn the makeup styles while retaining the identity features of the original images.
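The two-stage pipeline can be summarized with the following minimal sketch. This is an illustration, not our released code: `DiffusionDataAmplifier`, `pretrained_diffusion`, and `style_id` are hypothetical placeholders for the frozen DDA described in the Method section, and the training loop only demonstrates the direct pixel-wise L1 supervision that the amplified pairs make possible.

```python
import torch
from torch import nn

# Stage 1 (hypothetical wrapper): amplify a handful of seed pairs into a
# large, pixel-aligned paired dataset using a frozen diffusion model.
class DiffusionDataAmplifier:
    def __init__(self, pretrained_diffusion):
        # `pretrained_diffusion` stands in for the DDA with its RDM/FGMM modules.
        self.diffusion = pretrained_diffusion

    @torch.no_grad()
    def amplify(self, faces, style_id):
        # Apply one makeup style to a batch of non-makeup portraits, yielding
        # pixel-aligned (source, makeup) training pairs.
        return self.diffusion(faces, style_id)

# Stage 2: supervise the compact model with a direct pixel-wise L1 loss on
# the amplified pairs; no face parsing or landmark detection is needed.
def train_tiny_model(tiny_model, amplifier, loader, style_id, steps=10_000):
    opt = torch.optim.Adam(tiny_model.parameters(), lr=1e-4)
    l1 = nn.L1Loss()
    for _, faces in zip(range(steps), loader):
        targets = amplifier.amplify(faces, style_id)  # synthesized ground truth
        loss = l1(tiny_model(faces), targets)
        opt.zero_grad()
        loss.backward()
        opt.step()
```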

More Video Results

Experimental Results of DDA

Experimental Results of TinyBeauty

Method

Diffusion-based Data Amplifier

To generate high-quality paired makeup data, our Diffusion-based Data Amplifier (DDA) must preserve each subject's original facial features and skin texture details, such as wrinkles and spots, while applying consistent, precise makeup styles across varied portraits. To meet these requirements, we introduce a Residual Diffusion Model that preserves texture and detail, reducing distortion and mask-like artifacts. Moreover, we propose a Fine-Grained Makeup Module to ensure the precise application of makeup to the appropriate facial areas and to generate visually consistent makeup styles, as illustrated below.

Overview of the Diffusion-based Data Amplifier (DDA).
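To convey the intuition behind the two modules, here is a minimal sketch, assuming a residual reading of the RDM: the diffusion branch predicts only a makeup delta that is added back onto the source face, so skin texture survives untouched, and an FGMM-style region mask gates where that delta is applied. `ResidualMakeupHead`, `backbone`, `style_emb`, and `region_mask` are illustrative names, not the paper's exact formulation.

```python
import torch
from torch import nn

class ResidualMakeupHead(nn.Module):
    """Illustrative residual head: predict a makeup delta over the source face
    instead of regenerating it, so wrinkles and spots are preserved."""

    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone  # any conditional image-to-image network

    def forward(self, source, style_emb, region_mask):
        residual = self.backbone(source, style_emb)  # predicted makeup delta
        # FGMM-style gating (hypothetical): confine the residual to the facial
        # regions the style targets (e.g., lip/eye/skin masks) for precise,
        # combinable makeup control that retains identity.
        return (source + residual * region_mask).clamp(0.0, 1.0)
```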

TinyBeauty Model

Benefiting from DDA-generated paired data, the TinyBeauty model sidesteps the laborious pre-processing required by previous methods: it is trained directly with an L1 loss that closely aligns generated images with their targets. This freedom allows the network itself to be a hardware-friendly design optimized for resource-constrained devices.
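For a sense of scale, the sketch below shows one hypothetical hardware-friendly layout near the 80K-parameter budget: plain 3x3 convolutions with a global residual skip, both of which map well onto mobile NPUs. `TinyMakeupNet` and the channel width `ch` are assumptions for illustration; the actual TinyBeauty architecture is not reproduced here.

```python
import torch
from torch import nn

class TinyMakeupNet(nn.Module):
    """Hypothetical compact network near TinyBeauty's 80K-parameter budget."""

    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x):
        # Predict a makeup residual over the input face; the global skip keeps
        # identity and skin detail, matching the pixel-aligned supervision.
        return (x + self.body(x)).clamp(0.0, 1.0)

net = TinyMakeupNet()
print(sum(p.numel() for p in net.parameters()))  # ~77K, close to the 80K budget
```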

Parameters, FLOPs, and runtime on an iPhone 13 for TinyBeauty and competing methods. (+) indicates that the method requires facial pre-processing such as face parsing; (-) indicates that the method consists of a single model.