This study develops an example-based face hallucination system founded on a novel two-dimensional direct combined model (2DDCM). The 2DDCM concatenates each low-resolution and high-resolution image pair in the training set into a single combined matrix so as to better preserve the correlation between the two resolutions during learning. Notably, the images processed by the 2DDCM retain the form of two-dimensional (2D) matrices rather than being flattened into one-dimensional (1D) vectors, and hence the facial geometry features in both the vertical and horizontal directions can be extracted more reliably. The proposed hallucination system comprises two 2DDCM-based modules: a global module that reconstructs the global facial structure, and a local module that compensates for fine facial texture detail. In implementing the local module, a 2DDCM-based bi-directional transformation method is adopted to recover the detailed facial textures that are lost in the global synthesis process. The experimental results show that the images synthesized by the proposed 2DDCM framework are in good quantitative agreement with the ground-truth images. Moreover, the framework can synthesize high-resolution facial images from only a small number of training pairs, even when the facial features, alignment, and appearance of the test image differ from those of the training set. Finally, the 2DDCM representation ensures that the synthesized results better preserve the subject-specific characteristics of the input facial image, thereby improving the performance of downstream applications such as automatic face recognition.
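The abstract does not give implementation details, but the core "direct combined" idea of learning a joint basis over concatenated low-/high-resolution pairs can be illustrated in a simplified form. The sketch below is a hypothetical one-dimensional analogue only (the actual 2DDCM operates on 2D matrices): it stacks each flattened low-/high-resolution pair into one combined vector, learns a PCA-style basis via SVD, then hallucinates a high-resolution image by fitting coefficients to the low-resolution half of the basis and reconstructing with the high-resolution half. All sizes, variable names, and the random training data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: n pairs of low-res (8x8) and high-res (16x16) faces,
# here replaced by random data purely for demonstration.
n, lr_size, hr_size = 50, 8 * 8, 16 * 16
L = rng.standard_normal((n, lr_size))   # flattened low-res training images
H = rng.standard_normal((n, hr_size))   # flattened high-res training images

# Combined (concatenated) representation: each training sample stacks its
# low-res and high-res parts so the learned basis preserves their correlation.
C = np.hstack([L, H])                   # shape (n, lr_size + hr_size)
mean_c = C.mean(axis=0)
U, s, Vt = np.linalg.svd(C - mean_c, full_matrices=False)
k = 20                                  # number of retained basis vectors
B = Vt[:k]                              # combined basis, shape (k, lr_size + hr_size)
B_l, B_h = B[:, :lr_size], B[:, lr_size:]   # low-res / high-res halves of the basis

# Hallucination: fit coefficients to the low-res half of the basis,
# then reconstruct the high-res half with the same coefficients.
l_test = L[0]                           # stand-in for an unseen low-res input
coeff, *_ = np.linalg.lstsq(B_l.T, l_test - mean_c[:lr_size], rcond=None)
h_hat = mean_c[lr_size:] + coeff @ B_h  # hallucinated high-res image, shape (256,)
print(h_hat.shape)
```

Because the basis is learned from the concatenated pairs rather than from each resolution separately, the coefficients estimated from the low-resolution input carry over directly to the high-resolution reconstruction, which is the correlation-preserving property the abstract attributes to the combined model.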