MMFace-DiT

A Dual-Stream Diffusion Transformer for
High-Fidelity Multimodal Face Generation

CVPR 2026
University of North Texas

Stay tuned for more updates! πŸ”₯

Video Presentation

MMFace-DiT Synthesis Overview
Figure 1: High-Fidelity Face Synthesis. MMFace-DiT synthesizes photorealistic portraits from multimodal inputs. Left: given a semantic mask and a text prompt, our model generates faces with diverse identity variations across multiple VAE backbones. Right: guided by a sketch, it performs precise attribute-guided generation across numerous hair colors. This demonstrates our model's ability to seamlessly fuse spatial and semantic guidance.

Abstract

Recent multimodal face generation models address the spatial control limitations of text-to-image diffusion models by augmenting text-based conditioning with spatial priors such as segmentation masks, sketches, or edge maps. However, existing approaches typically append auxiliary control modules or stitch together separate uni-modal networks. These ad hoc designs inherit architectural constraints, duplicate parameters, and often fail under conflicting modalities, leading to modal dominance.

We introduce MMFace-DiT, a unified dual-stream diffusion transformer engineered for synergistic multimodal face synthesis. Its core novelty lies in a dual-stream transformer block that processes spatial (mask/sketch) and semantic (text) tokens in parallel, deeply fusing them through a shared Rotary Position-Embedded (RoPE) Attention mechanism. Furthermore, a novel Modality Embedder enables a single cohesive model to dynamically adapt to varying spatial conditions without retraining. MMFace-DiT achieves a 40% improvement in visual fidelity and prompt alignment over five state-of-the-art multimodal face generation models, establishing a flexible new paradigm for end-to-end controllable generative modeling.

Methodology

Generation Pipeline

MMFace-DiT Pipeline

Dual-Stream Architecture

MMFace-DiT Architecture

1. Unified Conditioning & Dynamic Modality Adaptation

Unlike prior works that require a separate model per modality, MMFace-DiT adapts to masks or sketches dynamically in a single forward pass. This is driven by a global conditioning vector \(C_{global}\), built with a novel Modality Embedder \(E_{modality}\) that maps a discrete modality flag to a dense vector:

$$ C_{global} = E_{time}(t) + E_{caption}(c_{pooled}) + E_{modality}(m) $$
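The conditioning sum above can be sketched in PyTorch as follows. Module names and dimensions (a 1024-d conditioning space, a 768-d pooled caption, two modality IDs, 256-d sinusoidal timestep features) are illustrative assumptions, not the released code:

```python
import torch
import torch.nn as nn

class GlobalConditioner(nn.Module):
    """C_global = E_time(t) + E_caption(c_pooled) + E_modality(m) (hedged sketch)."""
    def __init__(self, dim=1024, caption_dim=768, num_modalities=2):
        super().__init__()
        # t_feat is assumed to be a 256-d sinusoidal timestep encoding
        self.time_embed = nn.Sequential(
            nn.Linear(256, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.caption_embed = nn.Linear(caption_dim, dim)
        # One learned vector per spatial modality, e.g. 0 = mask, 1 = sketch
        self.modality_embed = nn.Embedding(num_modalities, dim)

    def forward(self, t_feat, pooled_caption, modality_id):
        return (self.time_embed(t_feat)
                + self.caption_embed(pooled_caption)
                + self.modality_embed(modality_id))

cond = GlobalConditioner()
c_global = cond(torch.randn(4, 256), torch.randn(4, 768),
                torch.tensor([0, 1, 0, 1]))  # mixed mask/sketch batch
```

Because the modality flag enters only through this additive embedding, switching from masks to sketches changes a single integer input rather than the network weights.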

2. Adaptive Layer Normalization (AdaLN)

The unified global conditioning vector orchestrates the behavior of each block independently. It is transformed to generate a comprehensive set of modulation parameters \(\{\gamma, \beta, \alpha\}\) for both the attention and MLP components. This enables text, timestep, and the active modality to exert fine-grained, layer-specific control over the entire network.
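A minimal sketch of this modulation, assuming the common DiT-style convention of six chunks (\(\gamma, \beta, \alpha\) for each of the attention and MLP branches); the projection layout is an assumption, not the released code:

```python
import torch
import torch.nn as nn

class AdaLNModulation(nn.Module):
    """Project C_global to {gamma, beta, alpha} for attention and MLP (sketch)."""
    def __init__(self, dim=1024):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        # 6 parameter sets: (gamma, beta, alpha) x (attention, MLP)
        self.proj = nn.Sequential(nn.SiLU(), nn.Linear(dim, 6 * dim))

    def forward(self, tokens, c_global):
        g_att, b_att, a_att, g_mlp, b_mlp, a_mlp = \
            self.proj(c_global).chunk(6, dim=-1)
        # Scale-and-shift modulation of the normalized tokens (attention branch)
        x = self.norm(tokens) * (1 + g_att.unsqueeze(1)) + b_att.unsqueeze(1)
        return x, a_att, (g_mlp, b_mlp, a_mlp)
```

The `1 + gamma` form keeps the modulation near identity at initialization, so conditioning perturbs rather than replaces the token statistics.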

3. Dual-Stream Shared RoPE Attention for Deep Fusion

Our transformer processes image tokens (\(T_i\)) and text tokens (\(T_t\)) in parallel streams. To prevent modal dominance, they are continuously fused via a central, shared Multi-Head Attention mechanism. We apply 2D axial RoPE for spatial image patches and 1D sequential RoPE for text tokens:

$$ \text{Attention}(Q, K, V) = \text{softmax}\left(\frac{\text{RoPE}(Q)\text{RoPE}(K)^T}{\sqrt{d_k}}\right)V $$

This mechanism allows every image patch to bidirectionally attend to every text token, ensuring precise spatial-semantic alignment.
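The 1-D sequential variant applied to queries and keys before shared attention can be sketched as follows (the paper additionally uses 2-D axial RoPE for image patches; all shapes here are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def rope_1d(x, base=10000.0):
    """Rotate (batch, heads, seq, head_dim) features by position (1-D RoPE)."""
    b, h, n, d = x.shape
    pos = torch.arange(n, dtype=torch.float32)
    freqs = base ** (-torch.arange(0, d, 2, dtype=torch.float32) / d)
    angles = torch.outer(pos, freqs)        # (seq, head_dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]     # pair up feature dimensions
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin    # 2-D rotation of each pair
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def shared_rope_attention(q, k, v):
    # RoPE is applied to queries and keys only; values pass through unchanged,
    # matching the softmax(RoPE(Q) RoPE(K)^T / sqrt(d_k)) V formulation above.
    return F.scaled_dot_product_attention(rope_1d(q), rope_1d(k), v)
```

Since the rotation preserves token norms, RoPE injects relative position into the attention logits without changing the scale of the features.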

4. Dynamic Gated Residual Connections

Following the attention and MLP operations, we employ a gated residual connection to modulate the update applied to the input stream \(T_{in}\). The gating scalar \(\alpha\), derived from \(C_{global}\), acts as a dynamic, learned filter that selectively emphasizes or suppresses modalities. This prevents strong geometric priors (e.g., a dense sketch) from overpowering subtle semantic cues (e.g., text descriptors):

$$ T_{out} = T_{in} + \alpha \odot F(\text{AdaLN}(T_{in}, \gamma, \beta)) $$
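The update rule above can be sketched directly, assuming per-channel modulation parameters that broadcast over the token axis (shapes are hypothetical):

```python
import torch
import torch.nn.functional as F

def gated_residual(t_in, alpha, gamma, beta, branch):
    """T_out = T_in + alpha * branch(AdaLN(T_in, gamma, beta)) (sketch)."""
    x = F.layer_norm(t_in, t_in.shape[-1:])  # parameter-free normalization
    x = x * (1 + gamma) + beta               # AdaLN scale-and-shift
    return t_in + alpha * branch(x)          # alpha -> 0 suppresses the branch
```

Setting \(\alpha \to 0\) reduces the block to the identity, which is what lets the model soft-mute an overly dominant modality on a per-layer basis.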

5. VLM-Powered Data Enrichment & Optimization

To overcome the bottleneck of semantically shallow annotations in existing face datasets, we build a robust annotation pipeline on the InternVL3 vision-language model and the Qwen3 LLM, yielding 1M high-quality, descriptive captions. The entire framework operates in the compressed latent space of the 16-channel FLUX VAE and natively supports optimization with both DDPM (Min-SNR weighting) and Rectified Flow Matching (RFM) objectives.
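A minimal sketch of the RFM objective on VAE latents, following the standard rectified-flow formulation (linear interpolation path, velocity target); the `model` signature is an assumption, not the paper's training code:

```python
import torch

def rfm_loss(model, x0, cond):
    """Rectified Flow Matching on clean latents x0 (e.g. 16-channel FLUX VAE)."""
    b = x0.shape[0]
    t = torch.rand(b, *(1,) * (x0.ndim - 1))  # per-sample timestep in [0, 1]
    noise = torch.randn_like(x0)
    x_t = (1 - t) * x0 + t * noise            # straight-line path x0 -> noise
    target = noise - x0                       # constant velocity along the path
    pred = model(x_t, t.flatten(), cond)      # network predicts the velocity
    return torch.mean((pred - target) ** 2)
```

Under the DDPM variant the same network would instead predict noise with Min-SNR loss weighting; only the target and weighting change, not the architecture.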

Qualitative Results

Comparison of MMFace-DiT against leading spatial conditioning methods.

Text + Semantic Mask Generation

Mask Conditioning Results

Text + Sketch Generation

Sketch Conditioning Results

Quantitative Results

Table 1: Quantitative results for Text + Mask conditioned face generation. Our MMFace-DiT variants include Ours (D), trained with diffusion-based DDPM objectives, and Ours (F), trained using flow-matching objectives. Both substantially outperform all baselines across perceptual quality and text-image alignment metrics. Best results are highlighted.

| Method | FID ↓ | LPIPS ↓ | SSIM ↑ | ACC ↑ | mIoU ↑ | CLIP ↑ | Dist. ↓ | LLM Sc. ↑ |
|---|---|---|---|---|---|---|---|---|
| TediGAN | 62.55 | 0.43 | 0.48 | 79.77 | 39.02 | 25.26 | 0.75 | 0.4061 |
| ControlNet | 49.39 | 0.57 | 0.41 | 82.86 | 43.95 | 25.39 | 0.75 | 0.3103 |
| UAC | 48.88 | 0.46 | 0.48 | 78.27 | 38.82 | 23.75 | 0.76 | 0.3516 |
| CD | 49.00 | 0.56 | 0.46 | 85.69 | 38.85 | 25.07 | 0.75 | 0.3029 |
| DDGI | 50.88 | 0.45 | 0.49 | 86.00 | 36.02 | 24.29 | 0.76 | 0.3851 |
| MM2Latent | 49.78 | 0.59 | 0.45 | 84.57 | 38.19 | 26.78 | 0.73 | 0.3619 |
| Ours (D) | 27.95 | **0.34** | 0.51 | **93.95** | 49.16 | **31.69** | **0.68** | 0.6006 |
| Ours (F) | **16.63** | **0.34** | **0.53** | 93.74 | **50.12** | 31.34 | 0.69 | **0.6372** |

Table 2: Quantitative results for Text + Sketch conditioned face generation. Our MMFace-DiT includes Ours (D) (diffusion-based DDPM) and Ours (F) (flow-matching) variants.

| Method | FID ↓ | LPIPS ↓ | SSIM ↑ | CLIP ↑ | Dist. ↓ | LLM Sc. ↑ |
|---|---|---|---|---|---|---|
| TediGAN | 121.24 | 0.55 | 0.30 | 21.62 | 0.78 | 0.10 |
| ControlNet | 67.13 | 0.54 | 0.56 | 26.17 | 0.74 | 0.44 |
| UAC | 118.52 | 0.61 | 0.41 | 22.92 | 0.77 | 0.27 |
| DDGI | 56.57 | 0.43 | 0.51 | 23.95 | 0.76 | 0.43 |
| MM2Latent | 40.91 | 0.58 | 0.46 | 27.04 | 0.73 | 0.39 |
| Ours (D) | 27.67 | 0.24 | **0.72** | **31.56** | **0.68** | 0.69 |
| Ours (F) | **9.14** | **0.20** | 0.70 | 31.30 | 0.69 | **0.72** |

Ablation Study: Core Components

Table 3: Ablation study on core model components with spatial metrics. We incrementally add our innovations, demonstrating the impact of the Modality Embedder (ME), Dual-Stream (DS) design, Rotary Position Embedding (RoPE) Attention, and the final VAE choice. Spatial metrics (SSIM, ACC, mIoU) show simultaneous improvement with semantic metrics, proving mitigation of modality dominance.

| Model | ME | DS | RoPE | VAE | FID ↓ | LPIPS ↓ | CLIP ↑ | SSIM ↑ | ACC ↑ | mIoU ↑ |
|---|---|---|---|---|---|---|---|---|---|---|
| Model-1 | ✗ | ✗ | ✗ | SD2 | 44.52 | 0.486 | 24.53 | 0.44 | 86.65 | 44.86 |
| Model-2 | ✓ | ✗ | ✗ | SD2 | 40.49 | 0.366 | 24.31 | 0.46 | 87.91 | 46.34 |
| Model-3 | ✓ | ✓ | ✗ | SD2 | 35.61 | 0.367 | 29.69 | 0.49 | 90.79 | 48.91 |
| Model-4 | ✓ | ✓ | ✓ | SD2 | 33.77 | 0.326 | 31.42 | 0.50 | 92.29 | 50.05 |
| Model-5 (Final) | ✓ | ✓ | ✓ | Flux | 27.95 | 0.340 | 31.69 | 0.51 | 93.95 | 49.16 |

VAE Architecture Comparison

VAE Architecture Ablation
Figure 6: Qualitative comparison of VAE backbones integrated into MMFace-DiT. The choice of VAE highlights a clear trade-off between statistical fidelity and perceptual quality. Flux consistently yields the most perceptually faithful outputs with superior color accuracy and natural texture.

BibTeX

@inproceedings{krishnamurthy2026mmfacedit,
  title     = {MMFace-DiT: A Dual-Stream Diffusion Transformer for High-Fidelity Multimodal Face Generation},
  author    = {Krishnamurthy, Bharath and Rattani, Ajita},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2026}
}