Humans


These databases contain images of human faces, whole bodies and body parts.

Bodies and body parts

Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments, by Catalin Ionescu, Dragos Papava, Vlad Olaru and Cristian Sminchisescu:

Description: This database was created by filming 11 actors, 5 women and 6 men, acting out 17 scenarios, such as pretending to eat, giving directions, having a discussion, greeting, seated activities, taking a photo, posing, making purchases, smoking, waiting, walking, sitting on a chair, talking on the phone, walking a dog, or walking with someone else. The actors did not appear to use any props except for chairs. They also performed these scenarios with diverse poses, for example walking with one or both hands in their pockets. Each actor was captured by a 3D scanner and filmed with 4 calibrated cameras while performing all of these movements. The database is organized into subsets. One contains the images of the actors acting out all the different scenarios. Another contains 3D models of the same performances recorded for the first subset, but in the 3D models the actors wear different clothes and hair styles and act out the same scenarios in different surroundings. Finally, researchers also get "visualization code, set of implementations for a set of baseline prediction methods, code for data manipulation, feature extraction as well as large scale discriminative learning methods based on Fourier approximations." The creators suggest that this database can be used to train computer vision algorithms and models to function in natural surroundings.

License: http://vision.imar.ro/human3.6m/eula.php

Link: http://vision.imar.ro/human3.6m/description.php

Reference:

  1. Catalin Ionescu, Dragos Papava, Vlad Olaru and Cristian Sminchisescu, Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, No. 7, July 2014 [pdf][bibtex]
  2. Catalin Ionescu, Fuxin Li and Cristian Sminchisescu, Latent Structured Models for Human Pose Estimation, International Conference on Computer Vision, 2011 [pdf][bibtex]

 

Wet and Wrinkled Fingerprint Recognition:

Description: This set contains scanned fingerprints of 300 fingers, 185 of which were visibly wrinkled. Each image is categorised by which finger it is and whether it is dry or wet. The creators tested two different fingerprint recognition algorithms: a commercial algorithm and the publicly available NIST NBIS. They found that fingerprint identification algorithms make more mistakes when the enrolled image is of a dry finger but the finger placed on the scanner is wet. They also discovered that the thumb changed the least after becoming wet, so the algorithms' accuracy was highest for the thumb even when only a dry-thumb scan was in the gallery (Krishnasamy, Belongie, Kriegman. 2011).
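
Since NBIS is publicly available, the dry-versus-wet comparison above is easy to reproduce in outline. A minimal sketch, assuming the NBIS binaries mindtct (minutiae extractor) and bozorth3 (matcher) are installed and on PATH, and that the scans are in a format mindtct accepts; the file names are hypothetical:

```python
import subprocess

def match_score(probe_img, gallery_img):
    """Score two fingerprint images with the NIST NBIS tools.

    Assumes the `mindtct` and `bozorth3` binaries are installed and
    that the images are in a format mindtct accepts (e.g. WSQ or PNG).
    """
    for img, root in ((probe_img, "probe"), (gallery_img, "gallery")):
        # mindtct writes minutiae files such as <root>.xyt
        subprocess.run(["mindtct", img, root], check=True)
    # bozorth3 prints an integer similarity score to stdout; higher
    # means more likely the same finger (~40 is a commonly cited threshold).
    out = subprocess.run(["bozorth3", "probe.xyt", "gallery.xyt"],
                         check=True, capture_output=True, text=True)
    return int(out.stdout.split()[0])

# Hypothetical dry-vs-wet comparison of the same finger:
# print(match_score("thumb_dry.png", "thumb_wet.png"))
```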

License: To get an access to this dataset researchers have to send a request with information about who they are and how they would use it.

Link: http://vision.ucsd.edu/content/wet-and-wrinkled-fingerprint-recognition

Reference: The image above was obtained from the website linked above and is shown with permission from Prasanna Krishnasamy.

Krishnasamy P., Belongie S., Kriegman D., “Wet Fingerprint Recognition: Challenges and Opportunities”, International Joint Conference on Biometrics (IJCB), Washington, DC, October, 2011. [BibTex][pdf]

 

Faces

These sets contain images of human faces; some also contain 3D models of the faces or information about facial features. Some images show people displaying various emotions, in different illumination conditions, from different angles, or with various accessories or occlusions. They can be used to develop algorithms for surveillance cameras, to study our ability to recognise faces, and so on.

 

Analyzing Facial Expressions In Three Dimensional Space – Binghamton University databases:

 

1.   BU-3DFE (Binghamton University 3D Facial Expression) Database

 

Description: Images and 3D models of 100 individuals, 56 women and 44 men, aged 18 to 70 and of different ethnicities. Each individual showed 7 facial expressions: sadness, anger, happiness, surprise, fear, disgust and neutral. Every facial expression except neutral was shown at 4 levels of intensity, giving 25 3D face models per subject. Each facial scan was captured from 2 views, about 45° from the frontal position.

 

Web-link: http://www.cs.binghamton.edu/~lijun/Research/3DFE/3DFE_Analysis.html

 

Contact:

Professor Lijun Yin of State University of New York at Binghamton (lijun@cs.binghamton.edu)

 

Reference: 

Lijun Yin, Xiaozhou Wei, Yi Sun, Jun Wang, Matthew J. Rosato, "A 3D Facial Expression Database For Facial Behavior Research", The 7th International Conference on Automatic Face and Gesture Recognition, 2006, pp. 211-216

2.     BU-4DFE (3D Dynamic Facial Expression Database – Dynamic Data)

 

Description: This set is an extension of BU-3DFE with 606 3D facial expression sequences at a resolution of about 35 thousand vertices. The creators made video sequences of 101 individuals, 58 women and 43 men of different ethnicities, showing 6 facial expressions: anger, disgust, fear, happiness, sadness and surprise. There are about 100 frames in each facial sequence, and the 3D models were built from these videos.

 

Web-link:

http://www.cs.binghamton.edu/~lijun/Research/3DFE/3DFE_Analysis.html

 

Contact:

Professor Lijun Yin of State University of New York at Binghamton (lijun@cs.binghamton.edu)

 

Reference:

Lijun Yin, Xiaochen Chen, Yi Sun, Tony Worm, Michael Reale, "A High-Resolution 3D Dynamic Facial Expression Database", The 8th International Conference on Automatic Face and Gesture Recognition, 2008

3.     BP4D-Spontaneous: Binghamton-Pittsburgh 3D Dynamic Spontaneous Facial Expression Database

 

Description: This set includes 3D unposed facial expressions of 41 individuals, 18 males and 23 females, aged 18-29 and of various ancestral origins. The participants did a series of activities to elicit eight different emotions, because posed emotional expressions are never exactly the same as spontaneous ones. The Facial Action Coding System was used to code the Action Units at the frame level as the ground truth for the facial expressions.

 

Web-Link: http://www.cs.binghamton.edu/~lijun/Research/3DFE/3DFE_Analysis.html

 

Contact:

Professor Lijun Yin of State University of New York at Binghamton (lijun@cs.binghamton.edu)

 

Reference:

Xing Zhang, Lijun Yin, Jeff Cohn, Shaun Canavan, Michael Reale, Andy Horowitz, Peng Liu, and Jeff Girard, "BP4D-Spontaneous: A high resolution spontaneous 3D dynamic facial expression database", Image and Vision Computing, 32 (2014), pp. 692-706 (special issue of the Best of FG13)

AR Face database:

Description: This set contains 4000 colour photos of 126 different faces, 70 male and 56 female. The photos were taken under controlled conditions, but the participants wore whatever clothes, glasses, make-up or hair style they had when they came to the photoshoot. Each participant was photographed under 13 different conditions, and then again 2 weeks later. The conditions are: neutral expression, smile, anger, scream, left light on, right light on, all side lights on, wearing sunglasses, wearing sunglasses and left light on, wearing sunglasses and right light on, wearing a scarf, wearing a scarf and left light on, and wearing a scarf and right light on. Each condition has a specific number value, listed on the website. Researchers can also download a cropped version, for which article 2 below should be cited, and manual annotations, for which article 3 should be cited.
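
Because the conditions are indexed numerically, it is convenient to keep the mapping in code. A small lookup table, assuming the 13 conditions are numbered in the order listed above and that the second session repeats them as 14-26 (verify this against the website before relying on it):

```python
# Condition index -> description, assuming conditions are numbered 1-13
# in the order listed above; the second session repeats them as 14-26.
AR_CONDITIONS = {
    1: "neutral expression",
    2: "smile",
    3: "anger",
    4: "scream",
    5: "left light on",
    6: "right light on",
    7: "all side lights on",
    8: "wearing sunglasses",
    9: "sunglasses and left light on",
    10: "sunglasses and right light on",
    11: "wearing scarf",
    12: "scarf and left light on",
    13: "scarf and right light on",
}

def describe(condition):
    """Map an AR condition number (1-26) to its description."""
    session = 1 if condition <= 13 else 2
    return f"session {session}: {AR_CONDITIONS[(condition - 1) % 13 + 1]}"

print(describe(22))  # session 2: sunglasses and left light on
```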

License: see website.

Link: http://www2.ece.ohio-state.edu/~aleix/ARdatabase.html

References:

  1. A. M. Martinez and R. Benavente. The AR Face Database. CVC Technical Report #24, June 1998
  2. A. M. Martinez and A. C. Kak, "PCA versus LDA", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23, No. 2, pp. 228-233, 2001
  3. "Features versus Context: An approach for precise and detailed detection and delineation of faces and facial features," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32, No. 11, pp. 2022-2038, 2010.
    Manual Annotations

 

Basel Face Model:

 

Description: This set contains 3D scans of 100 men and 100 women. It also provides the mean facial texture and shape, as well as the positions of the eyes, nose and other features. The background is a plain colour, and the individuals' hairlines do not show. There are 3 different illumination conditions and 9 different poses. The faces can also be varied along different axes: from younger to older, from thinner to heavier, from taller to shorter face, and from feminine to masculine.

License:  see http://faces.cs.unibas.ch/bfm/main.php?nav=1-2&id=downloads

Link: http://faces.cs.unibas.ch/bfm/

Reference:

  1. Pascal Paysan, Reinhard Knothe, Brian Amberg, Sami Romdhani, and Thomas Vetter, "A 3D Face Model for Pose and Illumination Invariant Face Recognition", in Proceedings of the 6th IEEE International Conference on Advanced Video and Signal based Surveillance (AVSS) for Security, Safety and Monitoring in Smart Environments, Genova (Italy), September 2-4, 2009

BioID Face Database:

Description: This database contains 1521 greyscale photos of 23 different individuals. The faces are in frontal position. The creators wanted to capture real-life conditions, so the pictures were taken with various backgrounds, face sizes and illumination. There is also an accessible file containing the eye positions for each picture.
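
A minimal sketch for reading those eye-position files, assuming the common distribution layout in which each image has a companion .eye file holding a comment header and one line of four whitespace-separated integers (left-eye x, y, then right-eye x, y):

```python
from pathlib import Path

def read_eye_positions(eye_file):
    """Parse a BioID .eye file into ((lx, ly), (rx, ry)).

    Assumes a comment line such as '#LX LY RX RY' followed by one
    line of four whitespace-separated integers.
    """
    lines = [l for l in Path(eye_file).read_text().splitlines()
             if l.strip() and not l.startswith("#")]
    lx, ly, rx, ry = map(int, lines[0].split())
    return (lx, ly), (rx, ry)

# Hypothetical usage on one of the annotation files:
# left_eye, right_eye = read_eye_positions("BioID_0000.eye")
```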

License: see website.

Link: https://www.bioid.com/About/BioID-Face-Database

References: “BioID Face Database – FaceDB”, copyright BioID GmbH, https://www.bioid.com/About/BioID-Face-Database

  1. O. Jesorsky, K. J. Kirchberg, R. W. Frischholz, "Robust Face Detection Using the Hausdorff Distance".
    In J. Bigun and F. Smeraldi, editors, Audio and Video based Person Authentication – AVBPA 2001, pages 90-95. Springer, 2001.
  2. The image was obtained, 20 December 2016, from "BioID Face Database – FaceDB", copyright BioID GmbH, https://www.bioid.com/About/BioID-Face-Database

BOSPHORUS DATABASE:

 

Description: This set contains 4.666 face scans of 60 male and 45 female Caucasians between 25 and 35 years old. The set is available both in colour and greyscale. Some subjects have facial hair. The faces are shown against a white background. This 3D face set shows multiple emotional expressions, head rotations and types of occlusions (occlusion meaning that something, such as a hand, covers part of the face). The set also contains information about the positions of facial features.

License: see website http://bosphorus.ee.boun.edu.tr/HowtoObtain.aspx

Link: http://bosphorus.ee.boun.edu.tr/Content.aspx

References:

  1. A. Savran, B. Sankur, M. T. Bilge, "Regression-based Intensity Estimation of Facial Action Units", Image and Vision Computing, Vol. 30, Issue 10, pp. 774-784, October 2012.
  2. A. Savran, B. Sankur, M. T. Bilge, "Comparative Evaluation of 3D versus 2D Modality for Automatic Detection of Facial Action Units", Pattern Recognition, Vol. 45, Issue 2, pp. 767-782, February 2012.
  3. A. Savran, B. Sankur, M. T. Bilge, "Estimation of Facial Action Intensities on 2D and 3D Data", European Signal Processing Conference (EUSIPCO), Barcelona, Spain, 2011.
  4. A. Savran, B. Sankur, M. T. Bilge, "Facial action unit detection: 3D versus 2D modality", IEEE CVPR'10 Workshop on Human Communicative Behavior Analysis, San Francisco, California, USA, June 2010.
  5. A. Savran, B. Sankur, "Automatic Detection of Facial Actions from 3D Data", IEEE ICCV'09: Workshop on Human Computer Interaction, Kyoto, Japan, September-October 2009.
  6. O. Çeliktutan, H. Çınar, B. Sankur, "Automatic Facial Feature Extraction Robust Against Facial Expressions and Pose Variations", IEEE Int. Conf. on Automatic Face and Gesture Recognition, Amsterdam, Holland, September 2008.
  7. H. Dibeklioğlu, A. A. Salah, L. Akarun, "3D Facial Landmarking Under Expression, Pose, and Occlusion Variations", IEEE 2nd International Conference on Biometrics: Theory, Applications, and Systems (IEEE BTAS), Washington, DC, USA, September 2008.
  8. N. Alyüz, B. Gökberk, H. Dibeklioğlu, L. Akarun, "Component-based Registration with Curvature Descriptors for Expression Insensitive 3D Face Recognition", 8th IEEE International Conference on Automatic Face and Gesture Recognition, Amsterdam, The Netherlands, September 2008.
  9. N. Alyüz, B. Gökberk, L. Akarun, "3D Face Recognition System for Expression and Occlusion Invariance", IEEE 2nd International Conference on Biometrics: Theory, Applications, and Systems (IEEE BTAS), Washington, DC, USA, September 2008.
  10. A. Savran, B. Sankur, "Non-Rigid Registration of 3D Surfaces by Deformable 2D Triangular Meshes", CVPR'08: Workshop on Non-Rigid Shape Analysis and Deformable Image Alignment (NORDIA'08), Alaska, USA, June 2008.
  11. N. Alyüz, B. Gökberk, H. Dibeklioğlu, A. Savran, A. A. Salah, L. Akarun, B. Sankur, "3D Face Recognition Benchmarks on the Bosphorus Database with Focus on Facial Expressions", The First COST 2101 Workshop on Biometrics and Identity Management (BIOID 2008), Roskilde University, Denmark, May 2008.
  12. A. Savran, N. Alyüz, H. Dibeklioğlu, O. Çeliktutan, B. Gökberk, B. Sankur, and L. Akarun, "Bosphorus Database for 3D Face Analysis", The First COST 2101 Workshop on Biometrics and Identity Management (BIOID 2008), Roskilde University, Denmark, 7-9 May 2008.
  13. A. Savran, O. Çeliktutan, A. Akyol, J. Trojanova, H. Dibeklioğlu, S. Esenlik, N. Bozkurt, C. Demirkır, E. Akagündüz, K. Çalışkan, N. Alyüz, B. Sankur, İ. Ulusoy, L. Akarun, T. M. Sezgin, "3D Face Recognition Performance Under Adversarial Conditions", in Proc. eNTERFACE'07 Workshop on Multimodal Interfaces, Istanbul, Turkey, July 2007.

 

Caltech 10,000 Web Faces:

Description: This set has 10.524 pictures of human faces. The images have different resolutions and show the faces in different settings. The pictures were obtained from the internet by entering common names into Google Image Search. A so-called "ground truth file" is available on the website; it contains the coordinates of the centre of the mouth, the eyes and the nose of each face.
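
A parsing sketch for such a ground-truth file, assuming each record is an image name followed by eight numbers (x, y pairs for the two eyes, the nose and the mouth centre; check the file header for the actual field order):

```python
def load_ground_truth(path):
    """Parse lines of the form: <image-name> x1 y1 x2 y2 x3 y3 x4 y4.

    Assumed field order: left eye, right eye, nose, mouth centre.
    Returns {image_name: [(x, y), ...]}.
    """
    annotations = {}
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if len(parts) != 9:
                continue  # skip headers and blank lines
            name, coords = parts[0], list(map(float, parts[1:]))
            annotations[name] = list(zip(coords[0::2], coords[1::2]))
    return annotations
```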

License: see website.

Link: http://www.vision.caltech.edu/Image_Datasets/Caltech_10K_WebFaces/

Reference: Angelova, A., Abu-Mostafa, Y., & Perona, P. (2005, June). Pruning training sets for learning of object categories. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) (Vol. 1, pp. 494-501). IEEE.

 

CAS-PEAL-R1 Face Database:

Description: The original database, CAS-PEAL, contains 99.594 pictures of 1040 subjects, 595 males and 445 females. Photos of the subjects were taken simultaneously with 9 equally spaced cameras. The subjects show various expressions and different accessories, and are photographed under different illumination. Only 30.900 pictures from this database have been made public, under the name CAS-PEAL-R1. CAS-PEAL-R1 also contains photos of 1040 individuals: 5 photos with different expressions for each of 377 individuals; more than 9 photos under varying illumination for 233 subjects; 6 photos with different accessories for 438 subjects; 2 to 4 photos against 2 to 4 different backgrounds for 297 persons; and 2 photos at different distances for 296 individuals. The creators also photographed 66 subjects twice, with 6 months between the sessions.

License: see http://www.jdl.ac.cn/peal/index.html

Link: http://www.jdl.ac.cn/peal/JDL-PEAL-Release.htm

Reference: Wen Gao, Bo Cao, Shiguang Shan, Xilin Chen, Delong Zhou, Xiaohua Zhang, Debin Zhao. The CAS-PEAL Large-Scale Chinese Face Database and Baseline Evaluations. IEEE Trans. on Systems, Man, and Cybernetics (Part A), vol. 38, no. 1, pp. 149-161, 2008.

CHICAGO FACE DATABASE:

Description: This set contains photos of 158 individuals: 37 black males, 48 black females, 36 white males and 37 white females. The subjects were from 17 to 65 years old. They wore the same grey t-shirt over their clothes during the photoshoot, in front of a white background. Each person was asked to show neutral, happy, fearful and threatening facial expressions while keeping their head still. The pictures are all frontal photos of their faces.

License: see website.

Link: www.chicagofaces.org

Reference:

Ma, Correll and Wittenbrink. (2015). The Chicago Face Database: A Free Stimulus Set of Faces and Norming Data. Behavior Research Methods, 47, 1122-1135.

 

Child Affective Facial Expression Set (CAFE):

Description: This database contains 1192 photos of 154 children, aged 2-8 years old and of different ethnicities; 64 of the children are boys and 90 are girls. The photos are in colour and taken in front of a white background. The children all wear a white garment up to their chin, so only their hair, face and a small part of their neck show. Each child shows 7 different facial expressions: anger, disgust, fear, happy, neutral, sad and surprised. They showed each emotion, except for surprise, with their mouth both open and closed; when pretending to be surprised they always had their mouth open.

License: see https://nyu.databrary.org/volume/30

Link: http://www.childstudycenter-rutgers.com/the-child-affective-facial-expression-se

Reference:

LoBue, V. & Thrasher, C. (2015). The Child Affective Facial Expression (CAFE) Set: Validity and reliability from untrained adults. Frontiers in Emotion Science, 5. PDF

ChokePoint Dataset:

Description: This database includes 48 videos and 64.204 face images of 23 men and 6 women. Only one individual is present at a time in both the videos and the images. Three cameras were placed above several portals, and several face images were captured as each subject walked through a portal. The faces were captured under varying lighting conditions, poses, sharpness and misalignment. The database also contains 2 videos in which a subject walks through a crowded portal.

License: see website.

Link: http://arma.sourceforge.net/chokepoint/#example

Reference:

 

  1. Y. Wong, S. Chen, S. Mau, C. Sanderson, B.C. Lovell,
    Patch-based Probabilistic Image Quality Assessment for Face Selection and Improved Video-based Face Recognition, IEEE Biometrics Workshop, Computer Vision and Pattern Recognition (CVPR) Workshops, pages 81-88. IEEE, June 2011.

 

Cohn-Kanade AU-Coded Expression Database:

Description: There are 2 versions of this database, and a third is being made according to the website. The first version contains 486 sequences from 97 individuals. Each sequence starts with a neutral expression and ends with the expression the subject was asked to show, called the "peak expression". The peak expressions are labelled with the emotion the subjects were asked to show, and the photos are FACS coded. The second version (CK+) has validated emotion labels, 22% more sequences than version 1 and 27% additional posers; its photos are also FACS coded. "Additionally, CK+ provides protocols and baseline results for facial feature tracking and action unit and emotion recognition", according to the website.

License: http://www.consortium.ri.cmu.edu/ckagree/

Link: http://www.pitt.edu/~emotion/ck-spread.htm

References:

  1. Kanade, T., Cohn, J. F., & Tian, Y. (2000). Comprehensive database for facial expression analysis. Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition (FG’00), Grenoble, France, 46-53.
  2. Lucey, P., Cohn, J. F., Kanade, T., Saragih, J., Ambadar, Z., & Matthews, I. (2010). The Extended Cohn-Kanade Dataset (CK+): A complete expression dataset for action unit and emotion-specified expression. Proceedings of the Third International Workshop on CVPR for Human Communicative Behavior Analysis (CVPR4HB 2010), San Francisco, USA, 94-101.

Color FERET Database:

Description: It contains 14.126 pictures of the faces of 1.199 different people. Each individual has their own set, but there are also 365 duplicate sets with images of an individual already in the database, taken on a different day than the rest of that person's pictures. Some people were photographed many times, and some were photographed again 2 years after their first photo. This allows researchers to study changes in these people's appearance.

License: see Release Agreement

Link: https://www.nist.gov/itl/iad/image-group/color-feret-database

References:

  1. Phillips, P.J., Wechsler, H., Huang, J., Rauss, P. (1998). The FERET database and evaluation procedure for face recognition algorithms.Image and Vision Computing J. 16(5), 295-306.
  2. Phillips, P. J., Moon, H., Rizvi, S. A., Rauss, P. J. (2000). The FERET Evaluation Methodology for Face Recognition Algorithms. IEEE Trans. Pattern Analysis and Machine Intelligence. 22, 1090-1104.

 

Dartmouth Database of Children's Faces:

Description: The pictures are of 80 children, 40 girls and 40 boys, from 6 to 16 years old. Only their faces are visible against a black background, and their hair is covered by a black piece of clothing. There are about 40 pictures of each child, taken from 5 different angles from the front to the sides, and each child shows 8 different facial expressions from all 5 angles: neutral, pleased, happy, surprised, angry, sad, disgusted and afraid.

License: Dartmouth College (custom license)

Link: http://www.faceblind.org/social_perception/K_Dalrymple/DDCF.html

Reference:

Dalrymple, K.A., Gomez, J., & Duchaine, B. (2013). The Dartmouth Database of Children's Faces: Acquisition and validation of a new face stimulus set. PLoS ONE, 8(11), e79131. doi:10.1371/journal.pone.0079131

DISFA databases:

 

DISFA (Denver Intensity of Spontaneous Facial Action) Database:

Description: This set contains images of 27 adults, 15 men and 12 women, whose various facial expressions were elicited by a 4-minute video stimulus.

License: see website.

Link: http://mohammadmahoor.com/databases/denver-intensity-of-spontaneous-facial-action/

Reference:

  1. Mavadati, S. M., Mahoor, M. H., Bartlett, K., Trinh, P., Cohn, J.F. “DISFA: A Spontaneous Facial Action Intensity Database,” Affective Computing, IEEE Transactions on , vol.4, no.2, pp.151,160, April-June 2013 , doi: 10.1109/T-AFFC.2013.4
  2. Mavadati, S. M., Mahoor, M. H., Bartlett, K., Trinh, P., “Automatic detection of non-posed facial action units,” Image Processing (ICIP), 2012, 19th IEEE International Conference on , vol., no., pp.1817,1820, Sept. 30 2012-Oct. 3 2012 , doi: 10.1109/ICIP.2012.6467235

 

DISFA+ (Extended Denver Intensity of Spontaneous Facial Action) Database:

Description: This set is an extended version of DISFA. Each individual was recorded once while posing emotional expressions, and again while facial expressions were elicited by video stimuli. The set also includes manually labelled, frame-based descriptions of the 5-level intensity of twelve FACS facial action units, as well as information about the positions of facial features.

License: This set is available for research purposes only. Researchers have to send a request and sign an agreement through e-mail.

Link: http://mohammadmahoor.com/databases/denver-intensity-of-spontaneous-facial-action/

References: Mavadati, M., Sanger, P., & Mahoor, M. H. (2016). Extended DISFA Dataset: Investigating Posed and Spontaneous Facial Expressions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 1-8).

3D Mask Attack Dataset:

 

Description: This database consists of 76.500 frames of 17 individuals. A Kinect was used to record both real-access and spoofing attacks. Every frame contains a depth image (640×480 pixels), a corresponding RGB image, and manually annotated eye positions. Each individual was recorded in 3 sessions under controlled conditions, with neutral expressions and a frontal view; in the third session the 3D mask attacks were captured by a single attacker. In each session, 5 videos of 300 frames were recorded per subject, and the eye positions were manually labelled in each video. The life-size masks were made using ThatsMyFace.com. The image above was obtained from the 3D Mask Attack Database website https://www.idiap.ch/dataset/3dmad. The same website hosts many more databases that are not covered here.

License: Not known.

Link: https://www.idiap.ch/dataset/3dmad

Reference:

Erdogmus, N., & Marcel, S. (2013, September). Spoofing in 2D face recognition with 3D masks and anti-spoofing with kinect. In Biometrics: Theory, Applications and Systems (BTAS), 2013 IEEE Sixth International Conference on (pp. 1-6). IEEE.

 

3D RMA: 3D database:

Description: This set contains 3D photos of 120 individuals. Around 80 of the 120 were students of the same ethnic origin and around the same age; the rest were university staff aged 20 to 60. Only 14 of the subjects were women. Each individual was photographed in two sessions, the first in November 1997 and the second in January 1998. In each session they were photographed looking straight at the camera, oriented to the left of the camera, oriented to the right, looking up, and looking down. Each orientation was photographed 3 times. Some subjects had facial hair.

License: see website.

Link: http://www.sic.rma.ac.be/~beumier/DB/3d_rma.html

References: not known.

 

EURECOM KINECT FACE DATASET:

Description: This database contains multimodal facial images of 52 individuals, 38 men and 14 women. The photos were taken in a lab with the same background, with the subject standing 1 m in front of a Kinect camera. Each photo is 256×256 pixels. The facial images were captured for each individual under different illumination conditions, occlusions and facial expressions: neutral, smile, open mouth, left profile, right profile, wearing sunglasses, with something covering the mouth, and with part of the face covered by paper. For each image there is information about 6 facial positions: left eye, right eye, tip of the nose, left side of the mouth, right side of the mouth, and the chin. There is also information about gender, year of birth and whether the person wears glasses, as well as a depth map containing the depth information.

License: see website.

Link: http://rgb-d.eurecom.fr/

Reference: Rui Min, Neslihan Kose, Jean-Luc Dugelay, "KinectFaceDB: A Kinect Database for Face Recognition," Systems, Man, and Cybernetics: Systems, IEEE Transactions on , vol.44, no.11, pp.1534,1548, Nov. 2014, doi: 10.1109/TSMC.2014.2331215

Extended Yale Face Database B:

Description: This set has 16.128 photos of 28 individuals. Each person was photographed in 9 different poses and under 64 illumination conditions. The images are in greyscale. There are two versions of this set: the original images and a cropped version.
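
The pose and illumination of each photo are commonly encoded in the file names (e.g. yaleB11_P00A+010E+00.pgm: subject 11, pose 0, light azimuth +10°, elevation 0°). A small parser, assuming that naming convention holds for the copy you downloaded:

```python
import re

# Assumed filename convention, e.g. "yaleB11_P00A+010E+00.pgm".
PATTERN = re.compile(
    r"yaleB(?P<subject>\d{2})_P(?P<pose>\d{2})"
    r"A(?P<azimuth>[+-]\d{3})E(?P<elevation>[+-]\d{2})"
)

def parse_yaleb_name(filename):
    """Extract subject, pose and illumination from a filename."""
    m = PATTERN.search(filename)
    if m is None:
        return None
    return {
        "subject": int(m.group("subject")),
        "pose": int(m.group("pose")),
        "azimuth_deg": int(m.group("azimuth")),
        "elevation_deg": int(m.group("elevation")),
    }

print(parse_yaleb_name("yaleB11_P00A+010E+00.pgm"))
# {'subject': 11, 'pose': 0, 'azimuth_deg': 10, 'elevation_deg': 0}
```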

License: see website.

Link: http://vision.ucsd.edu/~iskwak/ExtYaleDatabase/ExtYaleB.html

Reference:

Georghiades, A., Belhumeur, P., Kriegman, D. (2001). From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose. IEEE Trans. Pattern Analysis and Machine Intelligence, 23(6), 643-660.

Face Recognition Database:

Description: This dataset includes 200 pictures of 3D head models of 10 individuals, with various lighting conditions, poses and backgrounds.

License: see website.

Link: http://cbcl.mit.edu/software-datasets/heisele/facerecognition-database.html

References: “Credit is hereby given to the Massachusetts Institute of Technology and to the Centre for Biological and Computational Learning for providing the database of facial images.”

  1. “B. Weyrauch, J. Huang, B. Heisele, and V. Blanz. Component-based Face Recognition with 3D Morphable Models, First IEEE Workshop on Face Processing in Video, Washington, D.C., 2004.”

Face Recognition Data, University of Essex, UK:

Description: This set has 7900 colour pictures of 395 subjects, with 20 pictures of each. The photos are of men and women, mostly between 18 and 20 years old but some older, from different racial origins. Some of them have glasses or beards.

License: see website.

Link: http://cswww.essex.ac.uk/mv/allfaces/index.html

Reference: Not known.

FaceScrub:

Description: This set contains over 100 thousand pictures of 530 different male and female celebrities, around 200 pictures of each individual. The creators used the database to develop a simple way to remove photos that might not be of the named person.

License: https://creativecommons.org/licenses/by-nc-nd/3.0/

Link: http://vintage.winklerbros.net/facescrub.html

Reference:

H.-W. Ng, S. Winkler. A data-driven approach to cleaning large face datasets. Proc. IEEE International Conference on Image Processing (ICIP), Paris, France, Oct. 27-30, 2014.

 

FEI Face database:

 

Description: This website contains 2 datasets. The original dataset contains 2800 photos of 200 subjects, 100 women and 100 men. Each face was photographed 14 times, with profile rotation of up to about 180 degrees. The models are 19 to 40 years old. Each image is 640×480 pixels, all in colour. You can also download a second dataset from the same website, containing only 4 frontal pictures of each model in greyscale: 2 pictures showing 2 different facial expressions, plus one unprocessed frontal picture and one in which the face has been partly normalized.

License: free for research purposes.

Link: http://fei.edu.br/~cet/facedatabase.html

References:

  1. C. E. Thomaz and G. A. Giraldi. (2010). A new ranking method for Principal Components Analysis and its application to face image analysis, Image and Vision Computing, 28(6), 902-913.
  2. Z. Tenorio and C. E. Thomaz. (2011). Analise multilinear discriminante de formas frontais de imagens 2D de face. In proceedings of the X Simposio Brasileiro de Automacao Inteligente SBAI 2011, 266-271, Universidade Federal de Sao Joao del Rei, Sao Joao del Rei, Minas Gerais, Brazil,
  3. Amaral, C. Figaro-Garcia, G. J. F. Gattas, C. E. Thomaz. (2009). “Normalizacao espacial de imagens frontais de face em ambientes controlados e nao-controlados” (in portuguese), Periodico Cientifico Eletronico da FATEC Sao Caetano do Sul (FaSCi-Tech), 1(1).
  4. Amaral and C. E. Thomaz. (2008).”Normalizacao Espacial de Imagens Frontais de Face“. Technical Report 01/2008 (in portuguese), Department of Electrical Engineering, FEI, São Bernardo do Campo, São Paulo, Brazil.
  5. L. de Oliveira Junior and C. E. Thomaz. (2006). “Captura e Alinhamento de Imagens: Um Banco de Faces Brasileiro“. Undergraduate Technical Report (in Portuguese), Department of Electrical Engineering, FEI, São Bernardo do Campo, São Paulo, Brazil.

 

Georgia Tech face database:

Description: In this set there are colour 640×480-pixel pictures of 50 individuals. Each subject was photographed in 2 sessions between June 1999 and November 1999. The faces are frontal or tilted and were taken with various illumination conditions, scales and facial expressions. The set also contains versions of the same pictures with the background removed.

License: Not known.

Link: http://www.anefian.com/research/face_reco.htm

References:

  1. Ling Chen, Hong Man and Ara V. Nefian, “Face recognition based multi-class mapping of Fisher scores”, Pattern Recognition, Special issue on Image Understanding for Digital Photographs, March 2005.[pdf]
  2. Navin Goel, George Bebis and Ara V. Nefian, “Face recognition experiments with random projections”, SPIE Conference on Biometric Technology for Human Identification 2005.
  3. Ara V. Nefian “Embedded Bayesian networks for face recognition”, IEEE International Conference on Multimedia and Expo, August 2002. [pdf]
  4. Ara V. Nefian and Monson H. Hayes, “Maximum likelihood training of the embedded HMM for face detection and recognition”, IEEE International Conference on Image Processing 2000. [pdf]
  5. Ara V. Nefian and Monson H. Hayes III, “An embedded HMM based approach for face detection and recognition”, IEEE International Conference on Acoustic Speech and Signal Processing 1999. [pdf]
  6. Ara V. Nefian and Monson H. Hayes III, “Face recognition using an embedded HMM”, IEEE Conference on Audio and Visual-based Person Authentication 1999. [pdf]
  7. A. V. Nefian and Monson H. Hayes III, "Face detection and recognition using Hidden Markov Models", IEEE International Conference on Image Processing 1998. [pdf]
  8. A. V. Nefian and Monson H. Hayes III, "Hidden Markov Models for face recognition", IEEE International Conference on Acoustic Speech and Signal Processing 1998. [pdf]
  9. Ara V. Nefian, Mehdi Khosravi and Monson H. Hayes III, "Real-time human face detection from uncontrolled environments", SPIE Visual Communications on Image Processing 1997.
  10. Ara V. Nefian, "A hidden Markov model based approach for face detection and recognition", PhD Thesis, 1999. [pdf]
  11. Ara V. Nefian, "Statistical approaches to face recognition", Thesis Proposal 1996. [pdf]

Hong Kong Polytechnic University NIR Face Database (PolyU-NIRFD):

Description: This database contains 3400 photos of 335 individuals. The subjects were photographed 80-120 cm from a JAI camera sensitive to the NIR band, using an LED light source. First a frontal face photo was taken; then the subjects were asked to show various expressions and poses. 15 of the 335 individuals had two photo sessions, more than 2 months apart.

License: see website.

Link: http://www4.comp.polyu.edu.hk/~biometrics/polyudb_face.htm

Reference: Baochang Zhang, Lei Zhang, David Zhang, and Linlin Shen, Directional Binary Code with Application to PolyU Near-Infrared Face Database, Pattern Recognition Letters, vol. 31, issue 14, pp. 2337-2344, Oct. 2010.

Indian Movie Face Database (IMFDB):

 

Description: This set contains 34.512 pictures of 100 Indian actors, obtained from more than 100 videos. The pictures were handpicked and cropped from the video frames. The images vary in lighting conditions, scale and resolution, and the actors vary in age and gender, appear in different poses and show different emotions. The pictures also have different occlusions, meaning that in some of them part of the actor's face is covered by a hand or an object.

License: Not known.

Link: http://cvit.iiit.ac.in/projects/IMFDB/

Reference:

Shankar Setty, Moula Husain, Parisa Beham, Jyothi Gudavalli, Menaka Kandasamy, Radhesyam Vaddi, Vidyagouri Hemadri, J C Karure, Raja Raju, Rajan, Vijay Kumar and C V Jawahar. “Indian Movie Face Database: A Benchmark for Face Recognition Under Wide Variations”. National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG), 2013

JAFFE database:

Description: This set contains 213 greyscale frontal photos of 10 Japanese female models. The models were asked to show 7 distinct emotions: neutral, happiness, disgust, anger, fear, sadness and surprise. Each photo is categorized by the emotion the model was asked to express.
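
The emotion label is carried in each file name. A small parser, assuming the usual JAFFE convention of subject initials, a two-letter expression code plus a repetition digit, and an image number (e.g. KA.HA2.30.tiff):

```python
# Assumed JAFFE filename convention, e.g. "KA.HA2.30.tiff":
# subject "KA", expression "HA" (2nd repetition), image number 30.
EXPRESSIONS = {
    "NE": "neutral", "HA": "happiness", "DI": "disgust", "AN": "anger",
    "FE": "fear", "SA": "sadness", "SU": "surprise",
}

def parse_jaffe_name(filename):
    subject, expr, number, _ext = filename.split(".")
    return {
        "subject": subject,
        "expression": EXPRESSIONS[expr[:2]],
        "repetition": int(expr[2:]),
        "image_number": int(number),
    }

print(parse_jaffe_name("KA.HA2.30.tiff"))
```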

License: see website.

Link: http://www.kasrl.org/jaffe.html

References: Michael J. Lyons, Shigeru Akemastu, Miyuki Kamachi, Jiro Gyoba. Coding Facial Expressions with Gabor Wavelets, 3rd IEEE International Conference on Automatic Face and Gesture Recognition, pp. 200-205 (1998).

 

Karolinska Directed Emotional Faces (KDEF):

Description: This database contains 4900 photos of 70 subjects. Each model was asked to express 7 emotions while being photographed twice from 5 different angles.

License: see website.

Link: http://www.emotionlab.se/resources/kdef

References: Lundqvist, D., Flykt, A., & Öhman, A. (1998). The Karolinska Directed Emotional Faces – KDEF, CD ROM from Department of Clinical Neuroscience, Psychology section, Karolinska Institutet, ISBN 91-630-7164-9.

For AKDEF:

Lundqvist, D., & Litton, J. E. (1998). The Averaged Karolinska Directed Emotional Faces – AKDEF, CD ROM from Department of Clinical Neuroscience, Psychology section, Karolinska Institutet, ISBN 91-630-7164-9.

Labelled Faces in the Wild (LFW):

Description: There are 4 distinct sets: the original LFW and 3 sets of "aligned" images. The original LFW contains 13.233 pictures, all obtained from the internet. There is at least one photo of each of 5.749 individuals, and two or more pictures of 1680 of them. The faces appear as part of a natural background. The pictures are labelled with the name of the person shown. The Viola-Jones face detector was used to detect the faces in the pictures.
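
A Viola-Jones-style detector ships with OpenCV, so the detection step used to build LFW can be approximated in a few lines. This is an illustrative sketch, not the exact detector or parameters the LFW authors used, and the image path is hypothetical:

```python
import cv2

# Haar cascade bundled with OpenCV, trained for frontal faces
# (a Viola-Jones-style detector).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("some_lfw_image.jpg")          # hypothetical path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Returns a list of (x, y, w, h) face rectangles.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces_detected.jpg", img)
```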

License: Not known.

Link: http://vis-www.cs.umass.edu/lfw/#reference

References:

  1. Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments. University of Massachusetts, Amherst, Technical Report 07-49, October, 2007. [pdf]
  2. Gary B. Huang and Erik Learned-Miller. Labeled Faces in the Wild: Updates and New Reporting Procedures. University of Massachusetts, Amherst, Technical Report UM-CS-2014-003, May, 2014. [pdf]
  3. LFW funneled images If you use the LFW images aligned by funneling, please cite: Gary B. Huang, Vidit Jain, and Erik Learned-Miller. Unsupervised joint alignment of complex images.  International Conference on Computer Vision (ICCV), 2007.
  4. LFW deep funneled images If you use the LFW imaged aligned by deep funneling, please cite: Gary B. Huang, Marwan Mattar, Honglak Lee, and Erik Learned-Miller. Learning to Align from Scratch. Advances in Neural Information Processing Systems (NIPS), 2012.
  5. Erik Learned-Miller, Gary B. Huang, Aruni RoyChowdhury, Haoxiang Li, and Gang Hua. Labeled Faces in the Wild: A Survey. In Advances in Face Detection and Facial Image Analysis, edited by Michal Kawulok, M. Emre Celebi, and Bogdan Smolka, Springer, pages 189-248, 2016.
    [Springer Page] [Draft pdf]

Labelled Faces in the Wild-a (LFW-a):

Description: This set contains the same pictures as the original LFW, but in greyscale and aligned using commercial face alignment software. The alignment was shown to improve the accuracy of face recognition algorithms.

License: Not known.

Link: http://www.openu.ac.il/home/hassner/data/lfwa/

References:

  1. Lior Wolf, Tal Hassner, and Yaniv Taigman, Effective Face Recognition by Combining Multiple Descriptors and Learned Background Statistics, IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 33(10), Oct. 2011 (PDF)
  2. Lior Wolf, Tal Hassner and Yaniv Taigman, Similarity Scores based on Background Samples, Asian Conference on Computer Vision (ACCV), Xi’ an, Sept 2009 (PDF)
  3. Yaniv Taigman, Lior Wolf and Tal Hassner, Multiple One-Shots for Utilizing Class Label Information, The British Machine Vision Conference (BMVC), London, Sept 2009 (project, PDF)

LFW crop face database:

Description: This is a cropped version of the LFW set, in which the faces have been cropped out of the background. The dataset was created because researchers were concerned that the backgrounds of the pictures could artificially boost face-matching accuracy. The pictures are available both in greyscale and in colour.

License: Not known.

Link: http://conradsanderson.id.au/lfwcrop/

References:

  1. C. Sanderson, B.C. Lovell. Multi-Region Probabilistic Histograms for Robust and Scalable Identity Inference. ICB 2009, LNCS 5558, pp. 199-208, 2009.
  2. G. B. Huang, M. Ramesh, T. Berg, E. Learned-Miller. Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments. University of Massachusetts, Amherst, Technical Report 07-49, 2007.

Long Distance Heterogeneous Face Database:

Description: This set contains pictures of 100 individuals, 70 men and 30 women. The photos were captured under different illumination conditions: outdoors at distances of 60 m, 100 m and 150 m, and indoors at a distance of 1 m under fluorescent light. The pictures were taken from the front, none of the subjects wore glasses, and all pictures were collected in a single sitting.

License: see website under link.

Link: http://biolab.korea.ac.kr/database/

Reference:

  1. D. Kang, H. Han, A. K. Jain, and S.-W. Lee, "Nighttime Face Recognition at Large Standoff: Cross-Distance and Cross-Spectral Matching", Pattern Recognition, Vol. 47, No. 12, 2014, pp. 3750-3766. [pdf]
  2. H. Maeng, S. Liao, D. Kang, S.-W. Lee, and A. K. Jain, "Nighttime Face Recognition at Long Distance: Cross-distance and Cross-spectral Matching", ACCV, Daejeon, Korea, Nov. 5-9, 2012 [pdf]

McGillFaces Database:

Description: This dataset consists of 18.000 video frames of 60 individuals, 31 women and 29 men. The videos were recorded in various surroundings, both indoors and outdoors, so the frames have varying lighting conditions and backgrounds. The individuals show various facial expressions, poses, motions and occlusions. For each video the creators provide: the original video frames; detected, aligned, scaled and masked face images after the pre-processing step; gender and facial hair labels; and continuous and probabilistic head pose ground truth obtained via their manual labelling method.

License: see website.

Link: https://sites.google.com/site/meltemdemirkus/mcgill-unconstrained-face-video-database/

Reference:

  1. M. Demirkus, D. Precup, J. Clark, T. Arbel, "Hierarchical Temporal Graphical Model for Head Pose Estimation and Subsequent Attribute Classification in Real-World Videos", Computer Vision and Image Understanding (CVIU), Special Issue on Generative Models in Computer Vision, March 2015.
  2. M. Demirkus, J. J. Clark and T. Arbel, "Robust Semi-automatic Head Pose Labeling for Real-World Face Video Sequences", Multimedia Tools and Applications, January 2013.
  3. M. Demirkus, D. Precup, J. Clark, T. Arbel, "Hierarchical Spatio-Temporal Probabilistic Graphical Model with Multiple Feature Fusion for Estimating Binary Facial Attribute Classes in Real-World Face Videos", IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Sept. 2015.
  4. M. Demirkus, D. Precup, J. Clark, T. Arbel, "Probabilistic Temporal Head Pose Estimation Using a Hierarchical Graphical Model", European Conference on Computer Vision (ECCV), 2014.

 

Makeup datasets:

 

Description: This website contains 3 sets. Makeup in the "Wild" consists of face pictures of 125 individuals, with or without makeup, obtained from the internet; only a few individuals have pictures both with and without makeup. The YouTube Makeup database contains images of 151 women with pale skin, with about 1 to 3 pictures of each woman with and without makeup. In both sets the images vary in lighting conditions, the colour and amount of makeup, facial expression and head rotation (Chen, Dantcheva, Ross. 2013). Virtual Makeup includes 204 pictures of 51 women: one picture with no makeup, one with just lipstick, one with just eye makeup and one with a full makeover (Dantcheva, Chen, Ross. 2012).

License: see website

Link: http://www.antitza.com/makeup-datasets.html

Reference:

  1. A. Dantcheva, C. Chen, A. Ross, "Can Facial Cosmetics Affect the Matching Accuracy of Face Recognition Systems?," Proc. of 5th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), (Washington DC, USA), September 2012.
  2. C. Chen, A. Dantcheva, A. Ross, "Automatic Facial Makeup Detection with Application in Face Recognition," Proc. of 6th IAPR International Conference on Biometrics (ICB), (Madrid, Spain), June 2013.
  3. C. Chen, A. Dantcheva, A. Ross, "An Ensemble of Patch-based Subspaces for Makeup-Robust Face Recognition," Information Fusion Journal, Vol. 32, pp. 80-92, November 2016.

MR2 Face Stimuli:

 

Description: This set contains photos of 74 individuals from 18 to 25 years old. The subjects stood in front of a white background and wore a black t-shirt while photographed. They were selected to have facial features prototypical of 3 different ethnicities: African-American, Asian and European. None of the participants had facial piercings, unnatural hair styles, unnatural hair colour or facial hair. All had medium to dark brown eyes, so no race has a distinctive eye colour. They wore no make-up, jewellery or hair accessories, and their hair was pulled back, away from the face.

License: see website.

Link: http://ninastrohminger.com/the-mr2/

Reference:

Strohminger, N., Gray, K., Chituc, V., Heffner, J., Schein, C., and Heagins, T.B. (in press). The MR2: A multi-racial mega-resolution database of facial stimuli. Behavior Research Methods.

 

MUCT Face Database:

Description: Pictures of 3.755 faces. For each face there are 76 manual landmarks giving the positions of facial features. Each subject was photographed in front of a grey background. The subjects are of various ages, genders and ethnicities, and show various poses and expressions.

License: Not known.

Link: http://www.milbo.org/muct/

Reference: Milborrow, S., Morkel, J., Nicolls, F. (2010). The MUCT Landmarked Face Database. Pattern Recognition Association of South Africa.

ORL Database of faces:

Description: In this dataset there are 400 greyscale photos of 40 individuals in front of a plain black background. There are 10 photographs of each person, all taken between April 1992 and April 1994. Each person is upright and in frontal position. The pictures were taken at different times, with varying lighting, facial expressions and other details, such as glasses or no glasses. The photos are 92×112 pixels.
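
These images are widely redistributed as the "Olivetti faces"; scikit-learn ships a copy resized to 64×64 greyscale with pixel values scaled to [0, 1], which gives a quick way to load them (note the originals are 92×112):

```python
from sklearn.datasets import fetch_olivetti_faces

# scikit-learn redistributes the ORL/AT&T faces resized to 64x64;
# pixel values are floats scaled to [0, 1].
faces = fetch_olivetti_faces()

print(faces.images.shape)   # (400, 64, 64): 40 subjects x 10 photos
print(faces.target[:10])    # subject labels 0..39
```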

License: Not known.

Link: http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html

Reference: http://www.cl.cam.ac.uk/research/dtg/attarchive/abstracts.html#55

OUI-Adience Face Image Project:

 

Description: This database contains 26.580 photos of 2.284 individuals, from a couple of months old to over 60 years old. The photos are labelled with age and gender. They were taken under various illumination conditions, and the individuals are photographed in different poses.

License: see website.

Link: http://www.openu.ac.il/home/hassner/Adience/publications.html

Reference: Eran Eidinger, Roee Enbar, and Tal Hassner, Age and Gender Estimation of Unfiltered Faces, Transactions on Information Forensics and Security (IEEE-TIFS), special issue on Facial Biometrics in the Wild, Volume 9, Issue 12, pages 2170 – 2179, Dec. 2014

Psychological Image Collection at Stirling (PICS):

 

This website contains two main databases, called the Stirling/ESRC 3d face database and the 2D face set. The website says that to cite PICS in a paper you should cite the link to the website. The license is not specified on the website itself.

License: Not known.

Link: http://pics.psych.stir.ac.uk/


Stirling/ESRC 3d face database:

Description: This database is still being processed but currently has 45 3D pictures of men and 54 of women. The subjects' necks and hair show in the pictures.

License: see website

Link: http://pics.psych.stir.ac.uk/ESRC/index.htm

Reference: to cite this database, give the URL of the website, as the creators have not yet published a paper describing it.

 

2D face set:

Description: This is actually a set of 9 different face databases, all described on the same website. Some of these sets are in black and white, others in colour. Some contain only a neutral frontal pose, while others show emotions or the same face seen from different angles. Some show the same expression in different poses, while other subsets show many different facial expressions at each angle.

License: Not known

Link: http://pics.psych.stir.ac.uk/2D_face_sets.htm

Reference: Not specified.

 

PubFig: Public Figures Face Database:

 

Description: This set contains 58.797 images of 200 different individuals, gathered from the internet. The images vary in illumination conditions, scene, camera, pose, facial expression and imaging parameters. The creators of this set also defined a face verification benchmark on the database for measuring the quality of algorithms.

License: Not known.

Link: http://www.cs.columbia.edu/CAVE/databases/pubfig/

Reference: Neeraj Kumar, Alexander C. Berg, Peter N. Belhumeur, and Shree K. Nayar, "Attribute and Simile Classifiers for Face Verification," International Conference on Computer Vision (ICCV), 2009.

PUT Face Database:

Description: The creators of this set gathered 9.971 images of 100 people to create this picture set. The photos were taken under controlled conditions; each individual was photographed with a neutral expression with the head turning left to right, the face turned up or down, the raised head turning left to right, and the lowered head turning left to right, and some individuals wore glasses. To facilitate the evaluation and development of face recognition, each photo is annotated with the positions of the eyes, mouth, nose and other features. There are also manually annotated rectangles containing the face, the eyes (left and right separately), the nose and the mouth.

License: see website https://biometrics.cie.put.poznan.pl/index.php?option=com_content&view=article&id=3&Itemid=15&lang=en

Link: https://biometrics.cie.put.poznan.pl/

Reference:

  1. A. Kasiński, A. Florek, A. Schmidt, "The PUT Face Database", Image Processing & Communications, Volume 13, Number 3-4, 59-64, 2008.

 

Radboud Faces Database:

Description: This dataset contains about 8040 colour photos of 67 different male and female models of various ages, mostly Caucasian but including some Moroccan Dutch males. In the photos the models show 8 emotional expressions: anger, contempt, disgust, fear, happiness, neutral, sadness and surprise. Each picture was taken from 5 different camera angles at the same time, and each emotion was photographed with 3 different gaze directions.

License: see website.

Link: http://www.socsci.ru.nl:8180/RaFD2/RaFD?p=main

Reference:

Langner, O., Dotsch, R., Bijlstra, G., Wigboldus, D.H.J., Hawk, S.T., & van Knippenberg, A. (2010). Presentation and validation of the Radboud Faces Database. Cognition & Emotion, 24(8), 1377—1388. DOI: 10.1080/02699930903485076

 

SCface – Surveillance Cameras Face Database:

 

Description: This set contains 4160 photos of 133 human faces as part of an uncontrolled indoor scene, taken by 5 video surveillance cameras at different distances. The cameras were above the person's head, and the models were not looking at a fixed point.

License: see website.

Link: http://www.scface.org/

Reference: Grgic, M., Delac, K., Grgic, S.  (2011). SCface – surveillance cameras face database. Multimedia Tools and Applications Journal, 51(3), 863-879.

Senthilkumar Face Database (Version 1):

Description: This dataset consists of 80 greyscale pictures of 5 men, with about 16 pictures of each. They are mostly frontal views with various occlusions, illumination conditions and facial expressions. The photos were manually cropped to 140×188 pixels and then normalized.

License: Not known.

Link: http://www.geocities.ws/senthilirtt/Senthil%20Face%20Database%20Version1

Reference: Not known.

Senthil IRTT Face Database Version 1.1:

Description: This set contains 317 pictures, 550×780 pixels in size, of the faces of 13 IRTT students. The pictures are in colour and greyscale. The subjects were 23 to 24 years old, 1 female and the rest male. The photos were taken against a white background with a 14.1-megapixel digital camera. Some subjects wear a scarf or a hat, and the subjects show different facial expressions and poses.

License: Not known.

Link: http://www.geocities.ws/senthilirtt/Senthil%20IRTT%20Face%20Database%20Version%201.1

Reference: Not known.

SiblingsDB Database:

Description: This dataset consists of pictures of 97 pairs of siblings, aged 18 to 50. The photos were taken by a professional photographer against a green background. The individuals are students and employees of the Politecnico di Torino and their siblings; 57% of them are men. 56 of them have smiling frontal and profile photos, and there are frontal photos of 92 siblings with a neutral expression. Frontal and profile photos were taken of 158 individuals, and 112 individuals were photographed frontally and in profile with a neutral expression and then again smiling. With each photo there is information about the positions of facial features, sex, birth date, age, and how many raters judged the pair to be siblings or not. The pictures are paired both as sibling and non-sibling pairs. There is another, similar sibling database on the website, containing 98 pairs of famous siblings (where at least one sibling is famous) obtained from the internet.

License: see website.

Link: http://areeweb.polito.it/ricerca/cgvg/siblingsDB.html

Reference: T.F. Vieira, A. Bottino, A. Laurentini, M. De Simone, Detecting Siblings in Image Pairs, The Visual Computer, 2013, vol XX, p. YY, doi: 10.1007/s00371-013-0884-3

 

Texas 3D Face Recognition Database (Texas 3DFRD):

 

Description: Contains 1149 pairs of range and facial colour pictures of 105 individuals, acquired with a stereo imaging system at a high spatial resolution of 0.32 mm along the x, y and z dimensions. The set also contains additional information about facial expression, ethnicity, gender and the locations of anthropometric facial fiducial points.

License: see website.

Link: http://live.ece.utexas.edu/research/texas3dfr/

Reference:

  1. S. Gupta, M. K. Markey, A. C. Bovik, "Anthropometric 3D Face Recognition", International Journal of Computer Vision, 2010, Volume 90, 3:331-349.
  2. S. Gupta, K. R. Castleman, M. K. Markey, A. C. Bovik, "Texas 3D Face Recognition Database", IEEE Southwest Symposium on Image Analysis and Interpretation, May 2010, pp. 97-100, Austin, TX.
  3. S. Gupta, K. R. Castleman, M. K. Markey, A. C. Bovik, "Texas 3D Face Recognition Database", URL: http://live.ece.utexas.edu/research/texas3dfr/index.htm.

UFI – Unconstrained Facial Images:

 

Description: This dataset was originally intended for benchmarking face recognition algorithms. It includes real-life photos taken by reporters of the Czech News Agency (ČTK). The photos are divided into 2 subsets, each further divided into training and testing sets. The first is the Cropped Images Dataset, which includes 4295 photos of 605 different individuals, cropped to 128×128 pixels so that they mostly show only the face; these photos are in greyscale and were taken under various illumination conditions, poses and facial expressions. The second is the Large Images Dataset, which contains 4346 pictures of 530 different individuals. These photos are 384×384 pixels, also in greyscale, and were taken in different environments with various illumination conditions, poses and facial expressions.

License: see website.

Link: http://ufi.kiv.zcu.cz/

Reference: L. Lenc, P. Kral, Unconstrained Facial Images: Database for Face Recognition under Real-world Conditions, in 14th Mexican International Conference on Artificial Intelligence (MICAI 2015), Cuernavaca, Mexico, 25-31 October 2015, Springer, FullText, Bibtex.

UMB-DB 3D Face Database:

 

Description: This set contains 3D models and colour 2D pictures of 143 individuals, 98 men and 45 women. The subjects show 4 different facial expressions: smiling, bored, hungry and neutral. There are also models and pictures in which part of the face is occluded by hair, glasses, hands, hats, scarves or other objects. Each picture or model comes with the positions of 7 facial landmarks, such as the eyes and eyebrows.

License: see website.

Link: http://www.ivl.disco.unimib.it/minisites/umbdb//description.html

Reference: The image above was obtained from the website under link and is shown with permission from Claudio Cusano.

  1. A. Colombo, C. Cusano, and R. Schettini, “UMB-DB: A Database of Partially Occluded 3D Faces,” in Proc. ICCV 2011 Workshops, pp. 2113-2119, 2011.

 

University of Oulu Physics-Based Face Database:

Description: Photos of 125 different faces: 16 pictures of each person, or 32 pictures for subjects who have glasses. Every picture has the same background, the faces are in frontal position, and they were captured under daylight, horizon, fluorescent and incandescent illumination. This database also contains “3 spectral reflectance of skin per person measured from both cheeks and forehead“.

License: see website.

Link: http://www.cse.oulu.fi/CMV/Downloads/Pbfd

References:

  1. Marszalec E, Martinkauppi B, Soriano M & Pietikäinen M (2000). A physics-based face database for color research. Journal of Electronic Imaging, Vol. 9, No. 1, pp. 32-38.
  2. Soriano M, Marszalec E & Pietikäinen M (1999). Color correction of face images under different illuminants by RGB eigenfaces. Proc. 2nd Audio- and Video-Based Biometric Person Authentication Conference (AVBPA99), March 22-23, Washington DC USA pp. 148-153.
  3. Martinkauppi B (1999). Improving results of simple RGB-model for cameras using estimation. SPIE Europto Conf. on Polarization and Color Techniques in Industrial Inspection, June 17-18, Munich, Germany, pp. 295-303.
  4. Soriano M, Martinkauppi B, Marszalec E & Pietikäinen M (1999). Making saturated images useful again. SPIE Europto Conf. on Polarization and Color Techniques in Industrial Inspection, June 17-18, Munich, Germany, pp. 113-121.

10k US Adult Faces Database:

 

Description: This dataset has 10,169 pictures of different faces. The images are in JPEG format and 72×256 pixels. The set also contains various information about each face: “manual ground-truth annotations of 77 various landmark points for 2,222 faces, which is useful for face recognition”, as well as psychology attributes of each participant for studying subject-centric versus item-centric face and memory effects. It also contains software for selecting pictures from the dataset by various properties such as gender, emotion, race, attractiveness and memorability.
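As a rough illustration of attribute-based selection (the CSV file and column names below are assumptions for illustration, not the dataset's actual schema; the bundled software is the authoritative tool):

    import pandas as pd

    # Hypothetical attribute table; the dataset's real schema may differ.
    attrs = pd.read_csv("face_attributes.csv")   # placeholder filename

    # Select memorable, smiling, female faces.
    subset = attrs[(attrs["gender"] == "female")
                   & (attrs["emotion"] == "happy")
                   & (attrs["memorability"] > 0.8)]
    print(subset["filename"].head(10).tolist())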

License: see website.

Link: http://www.wilmabainbridge.com/facememorability2.html

Reference:

The image above was obtained from the website under link and is shown with permission from Wilma Bainbridge.

  1. Khosla, A., Bainbridge, W.A., Torralba, A., & Oliva, A. (2013). Modifying the memorability of face photographs. Proceedings of the International Conference on Computer Vision (ICCV), Sydney, Australia
  2. Main Citation: Bainbridge, W.A., Isola, P., & Oliva, A. (2013). The intrinsic memorability of face images. Journal of Experimental Psychology: General, 142(4), 1323-1334

 

VT-AAST Benchmarking Dataset:

 

Description: This database was created for benchmarking automatic face detection and human skin segmentation techniques. The set is split into 4 subsets. Subset 1 contains 286 unaltered colour pictures with 1027 faces, taken with digital cameras. The images vary in pose, environment, lighting conditions, orientation, race and facial expression. The second subset has the same pictures as the first, but saved in a different file format. Subset 3 is equivalent to the first two but includes human skin-colour regions resulting from an artificial segmentation procedure. The fourth and final subset contains the same photos as subset 1, but with some regions changed into grayscale, as sketched below.
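The partially-greyscale images of subset 4 are easy to mimic. A minimal Pillow sketch that converts one rectangular region of a colour image to greyscale (the path and coordinates are arbitrary placeholders):

    from PIL import Image

    def grey_region(img, box):
        """Return a copy of img with the (left, top, right, bottom) box
        converted to greyscale."""
        out = img.copy()
        patch = out.crop(box).convert("L").convert("RGB")
        out.paste(patch, box)
        return out

    # Usage (placeholder path and coordinates):
    # grey_region(Image.open("photo.jpg"), (50, 50, 200, 200)).save("mixed.png")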

License: see website.

Link: http://abdoaast.wixsite.com/abdallahabdallah/the-vt-mena-benchmarking-datas

Reference:

  1. Abdallah S. Abdallah, M. Abou El-Nasr, and A. Lynn Abbott, “Fusion of 2D-DCT and Edge Features for Face Detection with an SOM Classifier”, International Conference of Applied Electronics, AE 2007.
  2. Abdallah S.A., M. Abou El-Nasr, and A.L. Abbott, “A New Face Detection Technique using 2D-DCT and SOM”, Fourth International Conference on Machine Learning and Data Analysis, MLDA 2007.
  3. Abdallah S. Abdallah, M. Abou El-Nasr, and A. Lynn Abbott, “A New Color Image Database for Benchmarking of Automatic Face Detection and Human Skin Segmentation Techniques”, Fourth International Conference on Machine Learning and Pattern Recognition, MLPR 2007.

 

 

Yale Face Database:

Description: This set has 165 black and white pictures of the faces of 15 different subjects. 11 pictures were taken of each individual, showing different emotions (happy, normal, sad, sleepy, surprised and wink) or postures.

License: Not known.

Link: http://vision.ucsd.edu/content/yale-face-database

Reference: Not known.

 

 

Groups

Urban Tribes:

Description: This database contains various photos of people in groups. The photos were collected from the internet and are categorised into groups such as bikers, hipsters and goths, which the creators call “urban tribes”. “This can be used to improve recommendation services, context sensitive advertising and et cetera since people are mostly able to classify group photos in a socially meaningful manner.”

License: Not known.

Link: http://vision.ucsd.edu/content/urban-tribes

Reference:

  1. Kwak I., Murillo A.C., Belhumeur P., Belongie S., Kriegman D., “From Bikers to Surfers: Visual Recognition of Urban Tribes”, British Machine Vision Conference (BMVC), Bristol, September, 2013.
  2. Murillo A.C., Kwak I., Bourdev L., Kriegman D., Belongie S., “Urban Tribes: Analyzing Group Photos from a Social Perspective”, CVPR Workshop on Socially Intelligent Surveillance and Monitoring (SISM), Providence, RI, June, 2012.