This paper presents a learning-based super-resolution system for synthesizing face images. The problem is difficult because two faces that appear similar at low resolution are often highly distinguishable at high resolution. To make efficient use of a limited training set when learning the relationship between low and high resolutions, previous systems have developed different facial-image representations for different goals: processing the whole face image directly to preserve the global facial geometry, operating on local face patches to compensate for an insufficient number of training samples, or designing 1D, 2D, or Glocal image representations to improve the efficiency of data analysis. Notably, even the same core algorithm can produce different synthesis results under different image representations.

This paper exploits the concept of boosting. First, we divide the training samples into positive and negative sets: each positive sample is a pair of face images of the same person at two resolutions (i.e., the low-resolution input and the high-resolution target), whereas each negative sample is a low/high-resolution pair from two different people. The proposed system iteratively selects good feature representations to improve both the synthesis quality and the discriminability between faces. Specifically, each feature corresponds to a regression model learned on that feature, and the best feature is the one that separates the positive and negative sets most widely. Meanwhile, by adjusting the sample weights and the regression models, misclassified samples receive increasing emphasis in later iterations. Through this process, the selected features are highly diverse: some emphasize the global geometric structure, some focus on local facial components, and others stress contour information. As a result, the proposed system applies not only to databases similar to the training samples but also to face synthesis under different poses, expressions, and real-world conditions.

This study develops a face hallucination system based on a novel two-dimensional direct combined model (2DDCM) algorithm that employs a large collection of low-resolution/high-resolution facial pairwise training examples. This approach uses a formulation that directly combines the pairwise examples in a 2D combined matrix while completely preserving the geometry-meaningful facial structures and the detailed facial features. Such a representation can be expected to yield a useful transformation for face reconstruction. Our algorithm achieves this goal by addressing four key issues. First, we establish the 2D combination representation, which defines two structure-meaningful vector spaces that describe the vertical and horizontal facial-geometry properties, respectively. Second, we directly combine the low-resolution and high-resolution pairwise examples to fully model their relationship, thereby preserving their significant features. Third, we develop an optimization framework that finds the optimal transformation to best reconstruct the given low-resolution input; the 2D combination representation makes this transformation more powerful than those of other approaches. Fourth, within this framework, we apply the proposed 2DDCM algorithm to model both the global and the local properties of the facial image.
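The boosting-style feature selection described above can be sketched as an AdaBoost-like loop. This is a minimal illustration only: the toy pair scores, the sign-based weak decision, and all names below are assumptions for demonstration, not the paper's actual features or regression models.

```python
import numpy as np

# Hypothetical toy data: each row scores one LR/HR image pair under 4
# candidate feature representations; higher score = "looks like the
# same person". Positive pairs = same person, negative = different people.
rng = np.random.default_rng(0)
n_pos, n_neg, n_feat = 20, 20, 4
scores = np.vstack([
    rng.normal(+1.0, 0.5, (n_pos, n_feat)),   # positive pairs
    rng.normal(-1.0, 0.5, (n_neg, n_feat)),   # negative pairs
])
labels = np.concatenate([np.ones(n_pos), -np.ones(n_neg)])

def boosted_selection(scores, labels, n_rounds=3):
    """AdaBoost-style loop: each round picks the candidate feature whose
    sign(score) best separates positive from negative pairs under the
    current sample weights, then up-weights the misclassified pairs."""
    n = len(labels)
    w = np.full(n, 1.0 / n)
    chosen = []
    for _ in range(n_rounds):
        errs = [(w * (np.sign(scores[:, j]) != labels)).sum()
                for j in range(scores.shape[1])]
        j = int(np.argmin(errs))                  # best-separating feature
        err = max(errs[j], 1e-12)
        alpha = 0.5 * np.log((1.0 - err) / err)   # confidence of feature j
        miss = np.sign(scores[:, j]) != labels
        w = w * np.exp(alpha * np.where(miss, 1.0, -1.0))
        w = w / w.sum()                           # keep weights a distribution
        chosen.append((j, alpha))
    return chosen

picked = boosted_selection(scores, labels)
```

In the actual system, each candidate "feature" would carry its own learned regression model; here the per-pair score stands in for that model's output.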
Extensive experiments demonstrate that our approach produces high-quality hallucinated faces.
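As a rough illustration of the direct-combined idea, the following sketch expresses a low-resolution input as a linear combination of the LR training examples and transfers the same coefficients to the paired HR examples. It is a simplification under stated assumptions, not the 2DDCM algorithm itself (which operates on 2D combined matrices rather than flattened vectors); the toy downsampling operator and all names are hypothetical.

```python
import numpy as np

# Hypothetical toy setup: m LR/HR training pairs, each image flattened
# to a column vector (the real 2DDCM keeps the 2D matrix structure).
rng = np.random.default_rng(1)
m, d_lo, d_hi = 10, 16, 64
H = rng.normal(size=(d_hi, m))            # columns: high-resolution examples
A = rng.normal(size=(d_lo, d_hi)) / d_hi  # toy linear "downsampling" operator
L = A @ H                                 # matching low-resolution examples

def hallucinate(x_lo, L, H, lam=1e-6):
    """Direct-combined sketch: fit coefficients c so that L @ c matches the
    LR input (ridge-regularized least squares), then apply the same
    coefficients to the paired HR columns to synthesize the HR face."""
    c = np.linalg.solve(L.T @ L + lam * np.eye(L.shape[1]), L.T @ x_lo)
    return H @ c

# Usage: an LR input lying in the span of the LR training columns should
# transfer to a near-exact HR reconstruction.
c_true = rng.normal(size=m)
x_hi, x_lo = H @ c_true, L @ c_true
y = hallucinate(x_lo, L, H)
```

The pairing of columns in `L` and `H` is what "directly combines" the two resolutions: any coefficients found on the LR side carry over unchanged to the HR side.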