Humans

The original version of the following list of visual stimulus sets was compiled by Johanna Margret Sigurdardottir and will be updated as needed. We neither host nor do we provide copies of the stimuli. Researchers who may wish to use a particular stimulus set should seek further information, including on possible licences, e.g. by following the provided web links, reading the referenced papers, and/or emailing the listed contact person/persons for a particular stimulus set. If you notice an error, know of a stimulus set that should be included, or have any other questions or comments, please contact Heida Maria Sigurdardottir (heidasi(Replace this parenthesis with the @ sign)hi.is). The list is provided as is without any warranty whatsoever.

These databases contain images of human faces, whole bodies, and body parts.

Table of Contents

Bodies and body parts

Human3.6M: Large Scale Datasets for 3D Human Sensing in Natural Environments, by Catalin Ionescu, Dragos Papava, Vlad Olaru and Christian Sminchisescu

Description

This database was created by filming 11 actors, 5 women and 6 men, acting out 17 scenarios such as pretending to eat, giving directions, having a discussion, greeting, performing activities while seated, taking a photo, posing, making purchases, smoking, waiting, walking, sitting on a chair, talking on the phone, walking a dog, or walking with someone else. The actors do not appear to have any props except for chairs. They also performed these scenarios with diverse poses, e.g. walking with one or both hands in their pockets. Each actor was scanned with a 3D scanner and performed all movements while being filmed with four calibrated cameras. The database is organized into four subsets. One contains the pictures of the actors acting out the different scenarios. The second contains 3D models of the same recordings, but in the 3D models the actors wear different clothes, have different hairstyles, and act out the same scenarios in different surroundings. And finally there is "visualization code, set of implementations for a set of baseline prediction methods, code for data manipulation, feature extraction as well as large scale discriminative learning methods based on Fourier approximations." The creators suggest that this database can be used to train computer vision algorithms and models to function in natural surroundings.

Link

http://vision.imar.ro/human3.6m/description.php

License

http://vision.imar.ro/human3.6m/eula.php

Reference(s)

Ionescu, C., Papava, D., Olaru, V., & Sminchisescu, C. (2014). Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(7). https://doi.org/10.1109/TPAMI.2013.248

Ionescu, C., Li, F., & Sminchisescu, C. (2011). Latent structured models for human pose estimation. International Conference on Computer Vision. https://doi.org/10.1109/ICCV.2011.6126500

Wet and Wrinkled Fingerprint Recognition

Description

This data set contains scanned fingerprints of 300 fingers, some of which are obviously wrinkled. Each image is categorised by which finger it is and whether it is dry or wet.

License

To get access to this dataset, researchers have to send a request to pras.bits(Replace this parenthesis with the @ sign)gmail.com with information about who they are and how they would use the data.

Link

http://vision.ucsd.edu/content/wet-and-wrinkled-fingerprint-recognition

Reference(s)

Krishnasamy, P., Belongie, S., & Kriegman, D. (2011). Wet fingerprint recognition: Challenges and opportunities. International Joint Conference on Biometrics (IJCB). https://doi.org/10.1109/IJCB.2011.6117594

Beijbom, O., Edmunds, P. J., Roelfsema, C., Smith, J., Kline, D. I., Neal, B. P., Dunlap, M. J., Moriarty, V., Fan, T. Y., Tan, C. J., Chan, S., Treibitz, T., Gamst, A., Mitchell, B. G., & Kriegman, D. (2015). Towards Automated Annotation of Benthic Survey Images: Variability of Human Experts and Operational Modes of Automation. PLOS ONE, 10(7), e0130312. https://doi.org/10.1371/journal.pone.0130312

Faces

These data sets contain images of human faces. Some contain 3D models of faces and/or information on facial features. Some images show people displaying various emotions, in different illumination conditions, from different angles, or with various accessories or occlusions.

BU-3DFE (Binghamton University 3D Facial Expression) Database

Description

Images and 3D models of 100 individuals, 56 women and 44 men, of differing ethnicity, aged 18-70. Each individual showed 7 facial expressions: sadness, anger, happiness, surprise, fear, disgust and neutral. Every facial expression except neutral has 4 levels of intensity, giving 25 3D face models per subject (6 expressions × 4 intensity levels + 1 neutral). The facial scans were done from 2 views, about 45° from the frontal position.

Link

http://www.cs.binghamton.edu/~lijun/Research/3DFE/3DFE_Analysis.html

License

To gain access to this dataset, contact lijun(Replace this parenthesis with the @ sign)cs.binghamton.edu.

Reference(s)

Yin, L., Wei, X., Sun, Y., Wang, J., & Rosato, M. J. (2006). A 3D facial expression database for facial behavior research. 7th International Conference on Automatic Face and Gesture Recognition (FGR06). https://doi.org/10.1109/FGR.2006.6

BU-4DFE (3D Dynamic Facial Expression Database – Dynamic Data)

Description

This set is an extension of the BU-3DFE dataset above, with 606 3D facial expression sequences at a resolution of approximately 35,000 vertices. It consists of video sequences of 101 individuals, 58 women and 43 men of different ethnicities, showing 6 facial expressions (anger, disgust, fear, happiness, sadness and surprise). There are about 100 frames in each facial sequence. The 3D models were made from the aforementioned videos.

Link

http://www.cs.binghamton.edu/~lijun/Research/3DFE/3DFE_Analysis.html

License

To gain access to this dataset, contact lijun(Replace this parenthesis with the @ sign)cs.binghamton.edu.

Reference(s)

Yin, L., Chen, X., Sun, Y., Worm, T., & Reale, M. (2008). A high-resolution 3D dynamic facial expression database. 8th IEEE International Conference on Automatic Face & Gesture Recognition. https://doi.org/10.1109/AFGR.2008.4813324

BP4D-Spontaneous: Binghamton-Pittsburgh 3D Dynamic Spontaneous Facial Expression Database

Description

This set includes 3D un-posed facial expressions of 41 individuals, 18 men and 23 women of varying ethnicity, aged 18-29 years. The participants performed a series of activities to elicit eight different emotions. A facial action coding system was used to code the action units at the frame level, providing ground truth for the facial expressions.

Link

http://www.cs.binghamton.edu/~lijun/Research/3DFE/3DFE_Analysis.html

License

To gain access to this dataset, contact lijun(Replace this parenthesis with the @ sign)cs.binghamton.edu.

Reference(s)

Zhang, X., Yin, L., Cohn, J. F., Canavan, S., Reale, M., Horowitz, A., Liu, P., & Girard, J. M. (2014). BP4D-Spontaneous: a high-resolution spontaneous 3D dynamic facial expression database. Image and Vision Computing, 32(10), 692–706. https://doi.org/10.1016/j.imavis.2014.06.002

AR Face database

Description

This set contains 4000 colour photos of 126 different faces (70 male and 56 female). The photos were taken under controlled conditions, but participants wore whatever clothes, glasses, make-up or hair style they had when they came for the photoshoot. Each participant was photographed in 13 different conditions and then again 2 weeks later. The conditions are: neutral expression, smile, anger, scream, left light on, right light on, all side lights on, wearing sunglasses, wearing sunglasses and left light on, wearing sunglasses and right light on, wearing a scarf, wearing a scarf and left light on, and wearing a scarf and right light on.

Link

http://www2.ece.ohio-state.edu/~aleix/ARdatabase.html

License 

To gain access to this data set, contact aleix(Replace this parenthesis with the @ sign)ece.osu.edu

Reference(s)

Ding, L., & Martinez, A. M. (2010). Features versus context: An approach for precise and detailed detection and delineation of faces and facial features. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(11), 2022-2038. https://doi.org/10.1109/TPAMI.2010.28

Martinez, A. M., & Benavente, R. (1998). The AR Face Database. CVC Technical Report #24.

Martinez, A. M., & Kak, A. C. (2001). PCA versus LDA. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(2), 228-233. doi: 10.1109/34.908974

Basel Face Model

Description

This set contains 3D scans of 100 men and 100 women. It also provides information about mean facial texture and shape, and about the positions of the eyes, nose, etc. The background is a plain colour and the individuals' hairlines do not show. There are 3 different illumination conditions and 9 different poses. The faces are arranged along different axes: from younger to older, thinner to heavier, taller to shorter, and feminine to masculine.

Link

http://faces.cs.unibas.ch/bfm/

License

The set of stimuli can be requested by emailing mail(Replace this parenthesis with the @ sign)unitectra.ch or by filling out the form at the following link – https://faces.dmi.unibas.ch/bfm/bfm2019.html

Reference(s)

Paysan, P., Knothe, R., Amberg, B., Romdhani, S., & Vetter, T. (2009). A 3D Face Model for Pose and Illumination Invariant Face Recognition. Proceedings of the 6th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) for Security, Safety and Monitoring in Smart Environments, Genova, Italy, September 2-4, 2009.

Gerig, T., et al. (2018), “Morphable Face Models – An Open Framework,” 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition, pp. 75-82, doi: 10.1109/FG.2018.00021.

BioID Face Database

Description

This database contains 1521 greyscale photos of 23 different individuals. The faces are in frontal position. To capture real-life conditions, the pictures were taken with various backgrounds, face sizes and illumination. Information is also available on eye positions for each picture.
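
Per-image eye positions in datasets like this are typically distributed as small text annotation files alongside the images. A minimal sketch of a parser is shown below; the exact file layout (a comment header followed by four integers giving left-eye x/y and right-eye x/y) is an assumption for illustration, so verify it against the dataset's own documentation before use.

```python
def parse_eye_file(text):
    """Parse an eye-annotation file: assumed to consist of a comment
    header line followed by four integers (left-eye x, y, right-eye x, y)."""
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and the header comment
        lx, ly, rx, ry = map(int, line.split())
        return {"left_eye": (lx, ly), "right_eye": (rx, ry)}
    raise ValueError("no coordinate line found")

sample = "#LX LY RX RY\n178 132 110 128\n"
print(parse_eye_file(sample))  # {'left_eye': (178, 132), 'right_eye': (110, 128)}
```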

Link

https://www.bioid.com/About/BioID-Face-Database

License

Stimuli are available for download from the website above (https://www.bioid.com/About/BioID-Face-Database). It is also possible to contact info(Replace this parenthesis with the @ sign)bioid.com for further information.

Reference(s)

Lou, G., & Shi, H. (2020). Face image recognition based on convolutional neural network. China Communications, 17(2), 117-124. doi: 10.23919/JCC.2020.02.010

Bosphorus Database

Description

This set contains 4,666 face scans of 60 male and 45 female Caucasians aged 25 to 35. The set includes both colour and greyscale images, and some subjects had facial hair. The faces are shown against a white background. This 3D face set shows multiple emotional expressions, head rotations and "types of occlusions" (meaning that something, like a hand, covers a part of the face). The set also contains information about the positions of facial features.

Link

http://bosphorus.ee.boun.edu.tr/Content.aspx

License

To obtain the stimuli, follow the instructions at the following – http://bosphorus.ee.boun.edu.tr/HowtoObtain.aspx. If you have further questions, contact arman.savran(Replace this parenthesis with the @ sign)boun.edu.tr

Reference(s)

Savran, A., Sankur, B., & Bilge, M. T. (2011). Estimation of facial action intensities on 2D and 3D data. 19th European Signal Processing Conference, 1969-1973.

Savran, A., Sankur, B., & Bilge, M. T. (2012). Regression-based intensity estimation of Facial Action Units. Image and Vision Computing, 30(10), 774-784. https://doi.org/10.1016/j.imavis.2011.11.008

Caltech 10,000 Web Faces

Description

This set has 10,524 pictures of human faces. The images have different resolutions and settings, and were obtained from the internet. A "ground truth file" is available on the website below, containing the positions of the centre of the mouth, eyes and nose of each face.
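
A ground truth file of this kind is usually a plain-text table with one face per line. The sketch below parses such a file under an assumed column order (image name, then x/y coordinates for left eye, right eye, nose and mouth centre); the actual order is not documented here, so check the dataset's own description before relying on it.

```python
import io

# Hypothetical column order -- an illustrative assumption, not the
# documented format of this dataset's ground truth file.
FIELDS = ("left_eye", "right_eye", "nose", "mouth")

def read_ground_truth(lines):
    faces = {}
    for line in lines:
        parts = line.split()
        if len(parts) != 2 * len(FIELDS) + 1:  # file name + 8 coordinates
            continue
        name, coords = parts[0], list(map(float, parts[1:]))
        faces[name] = {field: (coords[2 * i], coords[2 * i + 1])
                       for i, field in enumerate(FIELDS)}
    return faces

sample = io.StringIO("img_0001.jpg 120.0 80.5 160.2 81.0 140.1 110.0 139.8 135.5\n")
gt = read_ground_truth(sample)
print(gt["img_0001.jpg"]["nose"])  # (140.1, 110.0)
```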

Link

http://www.vision.caltech.edu/Image_Datasets/Caltech_10K_WebFaces/

License

The database is available for download at the following address – http://www.vision.caltech.edu/Image_Datasets/Caltech_10K_WebFaces/#Download

Reference(s)

Angelova, A., Abu-Mostafa, Y., & Perona, P. (2005). Pruning training sets for learning of object categories. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1, 494-501. doi: 10.1109/CVPR.2005.283

Chicago Face Database

Description

The Chicago Face Database was developed at the University of Chicago by Debbie S. Ma, Joshua Correll, and Bernd Wittenbrink. The CFD is intended for use in scientific research. It provides high-resolution, standardized photographs of male and female faces of varying ethnicity, aged 17-65. Extensive norming data are available for each individual model. These data include both physical attributes (e.g., face size) and subjective ratings by independent judges (e.g., attractiveness).

Link

https://www.chicagofaces.org/

License

To gain access to this dataset, fill in the request form on the following page – https://www.chicagofaces.org/download/

Reference(s)

Ma, D. S., Correll, J., & Wittenbrink, B. (2015). The Chicago Face Database: A Free Stimulus Set of Faces and Norming Data. Behavior Research Methods, 47, 1122-1135. doi: 10.3758/s13428-014-0532-5

Main CFD Set

Description

The main CFD set consists of images of 597 unique individuals. They include self-identified Asian, Black, Latino, and White female and male models, recruited in the United States. All models are represented with neutral facial expressions. A subset of the models is also available with happy (open mouth), happy (closed mouth), angry, and fearful expressions. Norming data are available for all neutral expression images. Subjective rating norms are based on a U.S. rater sample.

Chicago Face Database – Multi-Racial (CFD-MR)

Description

The CFD-MR extension set includes images of 88 unique individuals, who self-reported multiracial ancestry. All models were recruited in the United States. The images depict models with neutral facial expressions. Additional facial expression images with happy (open mouth), happy (closed mouth), angry, and fearful expressions are in production and will become available with a future update of the database. Norming data include the standard set of CFD objective and subjective image norms as well as the models’ self-reported ancestry. Subjective norms are based on a U.S. rater sample.

CFD-India

Description

The CFD-INDIA extension set includes images of 142 unique individuals, recruited in Delhi, India. The images depict models with neutral facial expressions. Additional facial expression images with happy (open mouth), happy (closed mouth), angry, and fearful expressions are in production and will become available with a future update of the database. Norming data consist of the standard set of CFD objective and subjective image norms, including ratings of perceived ethnicity. Subjective norms are available for a U.S. rater sample and for a sample of raters recruited in India. In addition, an extended set of self-reported model background data (e.g., ancestry, home state, caste) is available upon request.

Child Affective Facial Expression Set (CAFE)

Description

This database contains 1192 colour photos of 154 children aged 2-8 years, of different ethnicities; 64 of the children are boys and 90 are girls. The photos are taken in front of a white background. All children wear a white garment up to their chin, so only their hair, face and a small part of their neck are showing. Each child shows 7 different facial expressions: anger, disgust, fear, happiness, neutral, sadness and surprise. They show each emotion, except for surprise, with their mouth both open and closed; when pretending to be surprised, they always had their mouth open.

Link

http://www.childstudycenter-rutgers.com/the-child-affective-facial-expression-s

License

To gain access to the stimuli, register as an authorized investigator at the following website – https://nyu.databrary.org/. For further questions, email contact(Replace this parenthesis with the @ sign)datalibrary.org

Reference(s)

LoBue, V., & Thrasher, C. (2015). The Child Affective Facial Expression (CAFE) Set: Validity and reliability from untrained adults. Frontiers in Psychology, 5. https://doi.org/10.3389/fpsyg.2014.01532

ChokePoint Dataset

Description

This database includes 48 videos and 64,204 face images of 23 men and 6 women. Only one individual is present at a time in both the videos and the images. Three cameras were placed above several portals, and several face images were captured as each subject walked through a portal. The faces were captured under different lighting conditions, poses and levels of sharpness, as well as with misalignment. The database also contains 2 videos in which a subject walked through a crowded portal.

Link

http://arma.sourceforge.net/chokepoint/

License

The stimulus set is available to the scientific community at the following address – http://arma.sourceforge.net/chokepoint/#licence

Reference(s)

Wong, Y., Chen, S., Mau, S., Sanderson, C., & Lovell, B.C. (2011), Patch-based Probabilistic Image Quality Assessment for Face Selection and Improved Video-based Face Recognition, IEEE Biometrics Workshop, Computer Vision and Pattern Recognition (CVPR) Workshops, pp. 81-88. doi:10.1109/CVPRW.2011.5981881

Cohn-Kanade AU-Coded Expression Database

Description

There are 2 versions of this database. The first version contains 486 sequences from 97 individuals. Each sequence starts with a neutral expression and ends when the subject displays a requested expression. Peak expressions are labelled by the emotion the subjects were asked to show, and the photos are FACS coded. The second version (CK+) has validated emotion labels, 22% more sequences than version 1, and 27% additional posers; these photos are also FACS coded. "Additionally, CK+ provides protocols and baseline results for facial feature tracking and action unit and emotion recognition" according to the website.

Link

http://www.pitt.edu/~emotion/ck-spread.htm

License

To gain access to the set of stimuli, complete the following form (http://www.jeffcohn.net/wp-content/uploads/2020/04/CK-AgreementForm.pdf) and mail it to MER160(Replace this parenthesis with the @ sign)pitt.edu

Reference(s)

Kanade, T., Cohn, J. F., & Tian, Y. (2000). Comprehensive database for facial expression analysis. Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition (FG'00), Grenoble, France, 46-53.

Lucey, P., Cohn, J. F., Kanade, T., Saragih, J., Ambadar, Z., & Matthews, I. (2010). The Extended Cohn-Kanade Dataset (CK+): A complete expression dataset for action unit and emotion-specified expression. Proceedings of the Third International Workshop on CVPR for Human Communicative Behavior Analysis (CVPR4HB 2010), San Francisco, USA, 94-101.

Color FERET Database

Description

The database contains 14,126 pictures of faces. These photos are of 1,199 different people; each individual has their own set, but there are also 365 sets with images of individuals already in the database, taken on a different day than the rest of that individual's pictures. Some people were photographed many times, and some were photographed 2 years after the first photo was taken. This allows researchers to study changes in these people's appearances.

Link

https://www.nist.gov/itl/products-and-services/color-feret-database

License

To obtain access to the database, see the Release Agreement and contact colorferet(Replace this parenthesis with the @ sign)nist.gov

Reference(s)

Phillips, P. J., Wechsler, H., Huang, J., & Rauss, P. (1998). The FERET database and evaluation procedure for face recognition algorithms. Image and Vision Computing, 16(5), 295-306.

Phillips, P. J., Moon, H., Rizvi, S. A., & Rauss, P. J. (2000). The FERET evaluation methodology for face recognition algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22, 1090-1104.

Dartmouth Database of Children's Faces

Description

The pictures are of 80 children, 40 girls and 40 boys, aged 6-16 years. Only their faces and a black background can be seen; hair is covered by a black piece of clothing. There are about 40 pictures of each child: 8 different facial expressions, each photographed from 5 different angles ranging from the front to the sides. The facial expressions shown by each child are: neutral, pleased, happy, surprised, angry, sad, disgusted and afraid.

Link

http://www.faceblind.org/social_perception/K_Dalrymple/DDCF.html

License

To obtain the dataset, complete the following form (https://lab.faceblind.org/K_Dalrymple/DDCFLicenseAgreement.pdf)  and send it to kad(Replace this parenthesis with the @ sign)umn.edu.


Reference(s)

Dalrymple, K.A., Gomez, J., & Duchaine, B. (2013). The Dartmouth Database of Children’s Faces: Acquisition and validation of a new face stimulus set. PLoS ONE, 8(11), e79131. doi:10.1371/journal.pone.0079131

DISFA Databases

DISFA (Denver Intensity of Spontaneous Facial Action) Database

Description

This set contains pictures of 27 adults, 15 men and 12 women. Various facial expressions were elicited by video stimuli, each 4 minutes long.

Link

DISFA+

License

Complete the following form – http://mohammadmahoor.com/disfa-contact-form/

Reference(s)

Mavadati, S. M., Mahoor, M. H., Bartlett, K., Trinh, P. (2012) “Automatic detection of non-posed facial action units,” Image Processing (ICIP), 2012, 19th IEEE International Conference on , vol., no., pp.1817,1820, Sept. 30 2012-Oct. 3 2012 , doi: 10.1109/ICIP.2012.6467235

Mavadati, S. M., Mahoor, M. H., Bartlett, K., Trinh, P., Cohn, J.F. (2013). “DISFA: A Spontaneous Facial Action Intensity Database,” Affective Computing, IEEE Transactions on , vol.4, no.2, pp.151,160, April-June 2013 , doi: 10.1109/T-AFFC.2013.4

DISFA+ (Extended Denver Intensity of Spontaneous) Database

Description

This set is an extended version of the DISFA. People were recorded while posing emotional expressions and then again while facial expressions were elicited by video stimuli. The set also includes labelled frame-based descriptions of 5-level intensities of twelve FACS facial actions, as well as information on the positions of facial features.

Link

DISFA+

License

Complete the following form – http://mohammadmahoor.com/disfa-contact-form/

Reference(s)

Mavadati, M., Sanger, P., & Mahoor, M. H. (2016). Extended DISFA Dataset: Investigating Posed and Spontaneous Facial Expressions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 1-8).

3D Mask Attack Dataset

Description

This database consists of 76,500 frames of 17 individuals. A Kinect was used to record both real-access and spoofing attacks. Every frame contains a depth image (640×480 pixels), a corresponding RGB image, and manually annotated eye positions. Each individual was recorded in 3 sessions under controlled conditions, with a neutral expression and a frontal view. In the third session, 3D mask attacks were captured by a single operator (the attacker). Real-size masks were made using ThatsMyFace.com.

Link

https://www.idiap.ch/dataset/3dmad

License

The dataset is available for download through PyPI or via its git repository (see https://zenodo.org/record/4068477).

Reference(s)

Erdogmus, N., & Marcel, S. (2013). Spoofing in 2D face recognition with 3D masks and anti-spoofing with kinect. Biometrics: Theory, Applications and Systems (BTAS), 2013 IEEE Sixth International Conference on (pp. 1-6). IEEE. doi:10.1109/BTAS.2013.6712688

3D RMA: 3D database

Description

This set contains 3D images of 120 individuals (of whom 14 are women). Around 80 of the 120 individuals were students of the same ethnic origin and around the same age; the rest were university staff aged 20 to 60. Each individual was photographed twice, first in November 1997 and again in January 1998. In each session they were photographed looking straight at the camera, oriented to the left, oriented to the right, looking up and looking down. Each orientation was photographed 3 times. Some subjects had facial hair.

Link

https://www.idiap.ch/en/dataset/3dmad

License

The data set is available for download from their website.

Reference(s)

Islam, S. M. S., Bennamoun, M., Owens, R. A., & Davies, R. (2012). A review of recent advances in 3D ear- and expression-invariant face biometrics. ACM Computing Surveys, 44(3), 1–34. https://doi.org/10.1145/2187671.2187676

Emotional Face Database

Description

A database of 290 emotional face stimuli for vision research. The images display seven different emotional states (angry, sad, fearful, surprised, disgusted, happy and neutral). The database was validated by more than 400 people: Farsi and English speakers rated the emotion level in the database images.

Link

http://e-face.ir

License

Users need to register with their academic email on the project's website and send a scanned, signed copy of the EFD Statement of Use (http://e-face.ir/EFD%20Statement%20of%20Use%20Info.pdf) to the following email address – info(Replace this parenthesis with the @ sign)e-face.ir.

Reference(s)

Heydari, F., Yoonessi, A. (2019). Emotional Face Database. Retrieved from http://e-face.ir/

EURECOM Kinect Face Dataset

Description

This database contains multi-modal facial pictures of 52 individuals, 38 men and 14 women. The photos were taken in a lab setting with the same background, with the subject standing 1 m in front of a Kinect camera. Each photo is 256×256 pixels. The facial images were taken for each individual under different illumination conditions, with occlusions, and with different facial expressions. The conditions were: neutral, smile, open mouth, left profile, right profile, wearing sunglasses, mouth covered by a hand, and part of the face covered with paper. For each image there is information about 6 positions on the face: left eye, right eye, tip of the nose, left side of the mouth, right side of the mouth and the chin. There is also information on gender, year of birth and whether the person wears glasses. A depth map is also provided for each image.

Link

http://rgb-d.eurecom.fr/

License

To obtain access to this database, the following form (https://docs.google.com/forms/d/e/1FAIpQLSednMTebdgO9BrqbWi40R-5Ck_reOHRmi7lnWfBgHh5Wo5kFg/viewform?formkey=dGpqeXpidENxR1RxZlB2VEhWWndNZWc6MA#gid=0) must be completed. An academic email address must be used.

Reference(s)

Min, R., Kose, N., & Dugelay, J.-L. (2014). KinectFaceDB: A Kinect database for face recognition. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 44(11), 1534-1548. doi: 10.1109/TSMC.2014.2331215

Extended Yale Face Database B

Description

This set has 16,128 greyscale photos of 28 individuals. Each person was photographed in 9 different poses and under 64 illumination conditions. There are two versions of this set: original and cropped.
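
The image count follows directly from the full factorial design (individuals × poses × illumination conditions), which is easy to sanity-check:

```python
# 28 individuals x 9 poses x 64 illumination conditions, as described above.
individuals, poses, illuminations = 28, 9, 64
total = individuals * poses * illuminations
print(total)  # 16128
```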

Link

http://vision.ucsd.edu/~iskwak/ExtYaleDatabase/ExtYaleB.html

License

The images are available for download from the above website.

Reference(s)

Georghiades, A., Belhumeur, P., Kriegman, D. (2001). From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose. IEEE Trans. Pattern Analyses and Machine Intelligence, 23(6), 643-660.

Face Recognition Database

Description

This dataset includes 200 pictures of 3D head models of 10 individuals. The pictures feature various lighting conditions, poses and backgrounds.

Link

http://cbcl.mit.edu/software-datasets/heisele/facerecognition-database.html

License

The stimuli are openly available to download from the above website.

Reference(s)

Weyrauch, B., Huang, J., Heisele, B., & Blanz, V. (2004). Component-based Face Recognition with 3D Morphable Models. First IEEE Workshop on Face Processing in Video, Washington, D.C.

FaceScrub

Description

This set contains over 100,000 pictures of 530 different male and female celebrities, around 200 pictures of each individual. The creators used this database to develop a simple way of removing photos that might not be of the named person.

Link

http://vintage.winklerbros.net/facescrub.html

License

Data is released under a creative commons license. To request the dataset, complete the following form – https://www.cognitoforms.com/ADSC2/FaceScrubDatasetPasswordRequest

Reference(s)

Ng, H.-W., & Winkler, S. (2014). A data-driven approach to cleaning large face datasets. Proc. IEEE International Conference on Image Processing (ICIP), Paris, France, Oct. 27-30. doi: 10.1109/ICIP.2014.7025068

FaReT

Description

FaReT (Face Research Toolkit) is a free and open-source toolkit of three-dimensional face models and software for studying face perception.

Link

https://github.com/fsotoc/FaReT

License

The data is available for download from the above link.

Reference(s)

Hays, J. S., Wong, C., & Soto, F. (2020). FaReT: A free and open-source toolkit of three-dimensional models and software to study face perception. Behavior Research Methods, 52(6), 2604-2622. https://doi.org/10.31234/osf.io/jb53v

FEI Face database

Description

This website contains 2 data sets. The original dataset contains 2800 photos of 200 subjects, 100 women and 100 men: each face was photographed 14 times with profile rotation of up to about 180 degrees. The models' ages range from 19 to 40 years. Each image is 640×480 pixels and in colour. A second dataset, available from the same website, contains only 4 frontal greyscale pictures of each model: 2 pictures showing 2 different facial expressions, one frontal picture of the real face, and one in which the face has been partly normalized.

Link

http://fei.edu.br/~cet/facedatabase.html

License

The dataset is free for use for research purposes. For further inquiries, contact cet(Replace this parenthesis with the @ sign)fei.edu.br

Reference(s)

Amaral, C., Figaro-Garcia, C., Gattas, G. J. F., & Thomaz, C. E. (2009). Normalizacao espacial de imagens frontais de face em ambientes controlados e nao-controlados. Periodico Cientifico Eletronico da FATEC Sao Caetano do Sul (FaSCi-Tech), 1(1).

Tenorio, Z. & Thomaz, C.E. (2011). Analise multilinear discriminante de formas frontais de imagens 2D de face. In proceedings of the X Simposio Brasileiro de Automacao Inteligente SBAI 2011, 266-271

Thomaz, C. E., & Giraldi, G. A. (2010). A new ranking method for Principal Components Analysis and its application to face image analysis. Image and Vision Computing, 28(6), 902-913. https://doi.org/10.1016/j.imavis.2009.11.005

Generated Photos

Description

Synthetic and real-life face datasets for machine learning. All portraits are model-released and were taken with professional lighting, cameras, and makeup by a professional crew, from photographers to ML engineers. The synthetic images are 100% synthetic but based on model-released photos. Backgrounds are customizable (coloured, transparent, or photographic), and the images cover a variety of ethnicities, demographics, facial expressions, and head poses.

Link

https://generated.photos/datasets

License

The synthetic dataset is free for use, and can be requested at the following address – https://icons-8.typeform.com/to/PzMjQbPe?typeform-source=generated.photos . You can also contact work.with(Replace this parenthesis with the @ sign)generated.photos .

Georgia Tech face database

Description

In this set there are 640×480 pixel colour pictures of 50 individuals. Each subject was photographed in 2 sessions between June and November 1999. The faces are frontal or tilted and were photographed under varying illumination conditions, scales, and facial expressions. The set also contains a second version of the same pictures in which the background has been removed.

Link

http://www.anefian.com/research/face_reco.htm

License

The dataset is available for download at the above link.

References:

Chen, L., Man, H., & Nefian, A.V. (2005) Face recognition based multi-class mapping of Fisher scores, Pattern Recognition, Special issue on Image Understanding for Digital Photographs. https://doi.org/10.1016/j.patcog.2004.11.003

Goel, N., Bebis, G. & Nefian, A.V. (2005) Face recognition experiments with random projections, SPIE Conference on Biometric Technology for Human Identification. doi:10.1117/12.605553

Indian Movie Face Database (IMFDB):

Description

This set contains 34,512 pictures of 100 Indian actors, obtained from more than 100 videos. The pictures were handpicked and cropped from the video frames, so the images vary in lighting conditions, scale, and resolution. The actors vary in age and gender and appear in different poses showing different emotions. The images also differ in occlusion, i.e. in some pictures part of an actor's face is covered by a hand or an object.

Link

http://cvit.iiit.ac.in/projects/IMFDB/

License

The dataset is available to download at the following address – http://cvit.iiit.ac.in/projects/IMFDB/pages/downloadDB.html .

Reference:

Setty, S., Husain, M., Beham, P., Gudavalli, J., Kandasamy, M., Vaddi, R., Hemadri, V., Karure, J. C., Raju, R., Kumar, V., & Jawahar, C. V. (2013). Indian Movie Face Database: A benchmark for face recognition under wide variations. National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG). DOI: 10.1109/NCVPRIPG.2013.6776225

JAFFE Database

Description

This set contains 213 grey-scale frontal photos of Japanese female models. The models were asked to show 7 distinct emotions: neutral, happiness, disgust, anger, fear, sadness and surprise. Each photo is categorized by the emotion the women were asked to express.

Link

http://www.kasrl.org/jaffe.html

License

The images may only be used for non-commercial scientific research. To request access, see the following link – https://zenodo.org/record/3451524/accessrequest .

Reference(s)

Lyons, M. J., Kamachi, M., & Gyoba, J. (2020). Coding facial expressions with Gabor wavelets (IVC special issue). https://arxiv.org/pdf/2009.05938.pdf

Karolinska Directed Emotional Faces (KDEF):

Description

This database contains 4900 photos of 70 subjects. Each model was asked to express 7 emotions while being photographed twice from 5 different angles.
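The stated total follows from the design, 70 models × 7 expressions × 5 angles × 2 photography sessions. A quick arithmetic check:

```python
# Photo count implied by the KDEF design described above:
# 70 models, 7 expressions, 5 camera angles, 2 photography sessions.
models, expressions, angles, sessions = 70, 7, 5, 2
total = models * expressions * angles * sessions
print(total)  # 4900, matching the stated size of the database
```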

Link

http://www.emotionlab.se/resources/kdef

License

The stimuli are available to download at the following link – https://kdef.se/download-2/index.html

Reference(s)

Lundqvist, D., Flykt, A., & Öhman, A. (1998). The Karolinska directed emotional faces (KDEF). CD ROM from Department of Clinical Neuroscience, Psychology section, Karolinska Institutet, 91(630), 2-2.

Lundqvist, D., & Litton, J. E. (1998). The averaged Karolinska directed emotional faces-AKDEF. AKDEF CD ROM. Psychology section, Karolinska Institutet, Stockholm.

Labelled Faces in the Wild (LFW)

Description

There are 4 distinct sets: the original LFW and 3 sets of “aligned” images. The original LFW contains 13,233 pictures, all obtained from the internet, covering 5,749 individuals with at least one photo each; 1,680 individuals have two or more pictures. The images retain their backgrounds and are labelled with the name of the person pictured. The Viola-Jones face detector was used to detect the faces in the pictures.

Link

http://vis-www.cs.umass.edu/lfw/index.html

License

Images are available for download at the above link.

References:

Huang, G. B., Mattar, M., Berg, T., & Learned-Miller, E. (2008). Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In Workshop on Faces in ‘Real-Life’ Images: Detection, Alignment, and Recognition.

Labelled Faces in the Wild-a (LFW-a)

Description

This set contains the same pictures that are in the original LFW but they are in grey-scale and have been aligned using commercial face alignment software.

Link

http://www.openu.ac.il/home/hassner/data/lfwa/

License

The dataset is available for download at the following address – https://drive.google.com/file/d/1p1wjaqpTh_5RHfJu4vUh8JJCdKwYMHCp/view

Reference(s)

Huang, G. B., Mattar, M., Berg, T., & Learned-Miller, E. (2008). Labelled faces in the wild: A database for studying face recognition in unconstrained environments. In Workshop on faces in ‘Real-Life’ Images: detection, alignment, and recognition.

Wolf, L., Hassner, T., & Taigman, Y. (2010). Effective unconstrained face recognition by combining multiple descriptors and learned background statistics. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(10), 1978-1990.

LFW Cropped-Face database

Description

This is a cropped version of the LFW set in which the faces have been cropped from the background. The pictures are available both in grey-scale and in colour.

Link

http://conradsanderson.id.au/lfwcrop/

License

The images are free for download at the above link.

Reference(s)

Huang, G. B., Mattar, M., Berg, T., & Learned-Miller, E. (2008). Labelled faces in the wild: A database for studying face recognition in unconstrained environments. In Workshop on faces in ‘Real-Life’ Images: detection, alignment, and recognition.

Sanderson, C., & Lovell, B. C. (2009, June). Multi-region probabilistic histograms for robust and scalable identity inference. In International conference on biometrics (pp. 199-208). Springer, Berlin, Heidelberg.

McGill Faces Database:

Description

This data set consists of 18,000 video frames of 60 individuals, 31 women and 29 men. The videos were recorded in various surroundings, both indoors and outdoors, so the frames vary in lighting conditions and backgrounds. The individuals show various facial expressions, poses, motions, and occlusions. For each video they provide “the original video frames; detected, aligned, scaled and masked face images after applying the pre-processing step; gender and facial hair; continuous and probabilistic head pose ground truth obtained via the manual labelling method.”

Link

https://sites.google.com/site/meltemdemirkus/mcgill-unconstrained-face-video-database/

License

The database is only for non-commercial (academic) use. To apply for access, there is a form on the above webpage. For further inquiries, contact demirkus(Replace this parenthesis with the @ sign)cim.mcgill.ca

Reference(s)

Demirkus, M., Precup, D., Clark, J. J., & Arbel, T. (2015). Hierarchical spatio-temporal probabilistic graphical model with multiple feature fusion for binary facial attribute classification in real-world face videos. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(6), 1185-1203.

Demirkus, M., Precup, D., Clark, J. J., & Arbel, T. (2015). Hierarchical temporal graphical model for head pose estimation and subsequent attribute classification in real-world videos. Computer Vision and Image Understanding, 136, 128-145.

Makeup Datasets

Description

This website contains 3 sets. Makeup in the Wild consists of face pictures of 125 individuals with or without makeup, collected from the internet; only a few individuals have pictures both with and without makeup. The YouTube Makeup Database contains images of 151 women with pale skin, with about 1 to 3 pictures of each woman with and without makeup. In both sets the images vary in lighting conditions and in the colour and amount of makeup, and they show different facial expressions and head rotations. The Virtual Makeup dataset consists of 204 pictures of 51 women: one picture with no makeup, one with only lipstick, one with only eye makeup, and one with a full makeover.

Link

http://www.antitza.com/makeup-datasets.html

License

To obtain access, email swearin3(Replace this parenthesis with the @ sign)cse.msu.edu, CC: rossarun(Replace this parenthesis with the @ sign)msu.edu with the following information – name, affiliation, email address, requested dataset and the reason for the dataset download.

Reference(s)

Chen, C., Dantcheva, A., & Ross, A. (2013). Automatic facial makeup detection with application in face recognition. In 2013 international conference on biometrics (ICB) (pp. 1-8). IEEE.

Chen, C., Dantcheva, A., & Ross, A. (2016). An ensemble of patch-based subspaces for makeup-robust face recognition. Information fusion32, 80-92.

Dantcheva, A., Chen, C., & Ross, A. (2012). Can facial cosmetics affect the matching accuracy of face recognition systems?. In 2012 IEEE Fifth international conference on biometrics: theory, applications and systems (BTAS) (pp. 391-398). IEEE.

MR2 Face Stimuli

Description

This set contains photos of 74 individuals from 18 to 25 years old. While being photographed the subjects stood in front of a white background and wore a black t-shirt. The subjects had to have facial features that are prototypical for 3 different ethnicities, African-American, Asian and European. None of the participants had facial piercings, “unnatural” hair styles, unnatural hair colour or facial hair. They all had medium to dark brown eyes so there was no specific eye colour for any race. They did not have any make-up, jewelry or hair accessories and their hair was pulled back, away from their face.

Link

http://ninastrohminger.com/the-mr2/

License

There is an application form for downloading the database on the above website.

Reference(s)

Strohminger, N., Gray, K., Chituc, V., Heffner, J., Schein, C., & Heagins, T. B. (2016). The MR2: A multi-racial, mega-resolution database of facial stimuli. Behavior Research Methods, 48(3), 1197-1204.

MUCT Face Database

Description

Pictures of the faces of 3,755 individuals. For each face there are 76 manual landmarks giving the positions of facial features. Each subject was photographed in front of a grey background. The subjects are of various ages, genders, and ethnicities and show various poses and expressions.
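When landmark annotations like these are distributed as CSV, reading them reduces to pairing alternating x/y columns. A minimal sketch; the column layout here (an image name followed by x,y pairs) is an illustrative assumption, not the documented MUCT file format:

```python
import csv
import io

# Hypothetical landmark CSV: image name, then x,y pairs for each landmark.
sample = "face001,10.5,20.0,11.2,22.3\nface002,9.8,19.7,10.9,21.5\n"

def read_landmarks(fileobj):
    """Map image name -> list of (x, y) landmark tuples."""
    landmarks = {}
    for row in csv.reader(fileobj):
        name, coords = row[0], [float(v) for v in row[1:]]
        landmarks[name] = list(zip(coords[0::2], coords[1::2]))
    return landmarks

points = read_landmarks(io.StringIO(sample))
print(points["face001"])  # [(10.5, 20.0), (11.2, 22.3)]
```

For the real data the same pairing logic applies once the actual column order is known; a full MUCT face would yield 76 such (x, y) tuples per image.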

Link

http://www.milbo.org/muct/

License

The data is free to download from GitHub (https://github.com/StephenMilborrow/muct).

Reference(s)

Milborrow, S., Morkel, J., & Nicolls, F. (2010). The MUCT landmarked face database. Pattern Recognition Association of South Africa, 201(0).

ORL Database of faces:

Description: This data set contains 400 grey-scale photos of 40 individuals standing in front of a plain black background. There are 10 photographs of each person, all taken between April 1992 and April 1994. Each person is upright and in frontal position. The pictures were taken at different times, with different lighting, facial expressions, and other details such as with or without glasses. The photos are 92×112 pixels.
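The ORL photos are commonly distributed as Netpbm PGM files. A stdlib-only sketch of reading a binary (P5) PGM header to confirm an image's stated size, using a synthetic 92×112 image in place of an actual ORL file:

```python
import io

def read_pgm_size(fileobj):
    """Return (width, height) from a binary P5 PGM header."""
    magic = fileobj.readline().strip()
    assert magic == b"P5", "not a binary PGM file"
    # Skip comment lines, then read the width/height line.
    line = fileobj.readline()
    while line.startswith(b"#"):
        line = fileobj.readline()
    width, height = map(int, line.split())
    fileobj.readline()  # max grey value (e.g. 255)
    return width, height

# Synthetic 92x112 all-black PGM standing in for one ORL image.
data = b"P5\n92 112\n255\n" + bytes(92 * 112)
print(read_pgm_size(io.BytesIO(data)))  # (92, 112)
```

Note this is a simplified parser: the full PGM specification also allows the header tokens to be split across lines, which real files occasionally exploit.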

License: Not known.

Link: http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html

Reference: http://www.cl.cam.ac.uk/research/dtg/attarchive/abstracts.html#55

OUI-Audience Face Image Project:

Description: This database contains 26,580 photos of 2,284 individuals ranging from a couple of months to over 60 years old. The photos are labelled with gender, were taken under various illumination conditions, and show the individuals in different poses.

License: see website.

Link: http://www.openu.ac.il/home/hassner/Adience/publications.html

Reference: Eran Eidinger, Roee Enbar, and Tal Hassner, Age and Gender Estimation of Unfiltered Faces, Transactions on Information Forensics and Security (IEEE-TIFS), special issue on Facial Biometrics in the Wild, Volume 9, Issue 12, pages 2170 – 2179, Dec. 2014

Psychological Image Collection at Stirling (PICS):

This website contains two main databases called Stirling/ESRC 3d face database and 2D face set.

License: Not specified on the website; it does say that papers citing PICS should provide a link to the website.

Link: http://pics.psych.stir.ac.uk/

Stirling/ESRC 3d face database:

Description: This database is still being processed but currently has 3D pictures of 45 men and 54 women. Their necks and hair are visible in the pictures.

License: see website

Link: http://pics.psych.stir.ac.uk/ESRC/index.htm

Reference: http://pics.psych.stir.ac.uk/ESRC/index.htm

2D face set:

Description: This is actually a set of 9 different face databases, all described on the same website. Some of the sets are in black and white, others in colour. Some contain only a neutral frontal pose, while others show emotions or the same face from different angles; some show the same expression in different poses, while other subsets show many different facial expressions at each angle.

License: Not known

Link: http://pics.psych.stir.ac.uk/2D_face_sets.htm

Reference: Not specified.

Profile Image Dataset

The Profile Image Dataset includes 12 social-media style images of each of 102 young adults. It also contains detailed impression ratings for each of the 1224 photos, rated both by the person pictured and by a large online sample of unfamiliar viewers. The dataset is described in detail, with examples, in the reference below, and a corrected version of the associated social impression rating data is also available.

License: The images can be requested by emailing David.white(Replace this parenthesis with the @ sign)unsw.edu.au and signing an agreement for their use

Reference:

White, D., Sutherland, C. A. M., & Burton, A. M. (2017). Choosing face: The curse of self in profile image selection. Cognitive Research: Principles and Implications, 2(1). https://doi.org/10.1186/s41235-017-0058-3

PubFig: Public Figures Face Database:

Description: This set contains 58,797 images of 200 individuals, gathered from the internet. The images vary in illumination conditions, scene, camera, pose, facial expression, and imaging parameters.

License: Not known.

Link: http://www.cs.columbia.edu/CAVE/databases/pubfig/

Reference: Kumar, N., Berg, A. C., Belhumeur, P. N., & Nayar, S. K. (2009). Attribute and simile classifiers for face verification. International Conference on Computer Vision (ICCV).

PUT Face Database:

Description: The creators of this set gathered 9,971 images of 100 people to create this 3D picture set. The photos were taken under controlled conditions. Each individual was photographed with a neutral expression while turning the head from left to right, with the face turned up or down, with the head raised and turning from left to right, and with the head lowered and turning from left to right; some individuals wore glasses. To facilitate the evaluation and development of face recognition, each photo has additional information about the positions of the eyes, mouth, nose, etc. There are also “manually annotated rectangles containing a face, eyes (left and right separately), a nose and a mouth”.

License: see website https://biometrics.cie.put.poznan.pl/index.php?option=com_content&view=article&id=3&Itemid=15&lang=en

Link: https://biometrics.cie.put.poznan.pl/

Reference:

Kasiński, A., Florek, A., & Schmidt, A. (2008). The PUT face database. Image Processing & Communications, 13(3-4), 59-64.

Radboud Faces Database:

Description: This dataset contains about 8040 colour photos of 67 male and female models of various ages, mostly Caucasian but some Moroccan Dutch males. In the photos the models show 8 emotional expressions: anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise. Each picture was taken from 5 different camera angles at the same time, and each emotion was photographed with 3 different gaze directions.

License: see website.

Link: http://www.socsci.ru.nl:8180/RaFD2/RaFD?p=main

Reference:

Langner, O., Dotsch, R., Bijlstra, G., Wigboldus, D.H.J., Hawk, S.T., & van Knippenberg, A. (2010). Presentation and validation of the Radboud Faces Database. Cognition & Emotion, 24(8), 1377—1388. DOI: 10.1080/02699930903485076

SCface – Surveillance Cameras Face Database:

Description: This set contains 4160 photos of 133 human faces as part of an uncontrolled indoor scene taken by 5 video surveillance cameras. The pictures were taken from different distances, with the cameras above the person's head, and the models were not looking at a specific point.

License: see website.

Link: http://www.scface.org/

Reference: Grgic, M., Delac, K., Grgic, S.  (2011). SCface – surveillance cameras face database. Multimedia Tools and Applications Journal, 51(3), 863-879.

Senthilkumar Face Database (Version 1):

Description: This data set consists of 80 grey-scale pictures of 5 men, 16 pictures of each. They are mostly frontal views with various occlusions, illumination conditions, and facial expressions. The photos were manually cropped to 140×188 pixels and then normalized.

License: Not known.

Link: http://www.geocities.ws/senthilirtt/Senthil%20Face%20Database%20Version1

Reference: Not known.

Senthil IRTT Face Database Version 1.1:

Description: This set contains 317 pictures, 550×780 pixels in size, of the faces of 13 IRTT students, in both colour and grey-scale. The subjects were 23 to 24 years old, 1 female and 12 males. The photos were taken against a white background with a 14.1-megapixel digital camera. Some subjects wear a scarf or a hat, and the subjects show different facial expressions and poses.

License: Not known.

Link: http://www.geocities.ws/senthilirtt/Senthil%20IRTT%20Face%20Database%20Version%201.1

Reference: Not known.

SiblingsDB Database:

Description: This dataset consists of pictures of 97 pairs of siblings aged 18 to 50 years, taken by a professional photographer against a green background. The individuals are students and employees of the Politecnico di Torino and their siblings; 57% of them are men. There are frontal photos with a neutral expression of 92 siblings; frontal and profile photos of 158 individuals; 56 of them have smiling frontal and profile photos; and 112 individuals were photographed frontally and in profile, first with a neutral expression and then smiling. Each photo comes with information on the positions of facial features, sex, birth date, age, and how many participants judged the pictured pair to be siblings. The same website hosts another, similar sibling database of 98 pairs of famous siblings (where at least one sibling is famous), with images obtained from the internet.

License: see website.

Link: http://areeweb.polito.it/ricerca/cgvg/siblingsDB.html

Reference: T.F. Vieira, A. Bottino, A. Laurentini, M. De Simone, Detecting Siblings in Image Pairs, The Visual Computer, 2013, vol XX, p. YY, doi: 10.1007/s00371-013-0884-3

Texas 3D Face Recognition Database (Texas 3DFRD):

Description: Contains 1149 pairs of range and facial colour pictures of 105 individuals, acquired with a stereo imaging system with a high spatial resolution of 0.32 mm along the x, y, and z dimensions. The set also contains additional information on facial expression, ethnicity, gender, and the locations of anthropometric facial fiducial points.

License: see website.

Link: http://live.ece.utexas.edu/research/texas3dfr/

Reference:

  1. S. Gupta, M. K. Markey, A. C. Bovik, “Anthropometric 3D Face Recognition”, International Journal of Computer Vision, 2010, Volume 90, 3:331-349.
  2. S. Gupta, K. R. Castleman, M. K. Markey, A. C. Bovik, “Texas 3D Face Recognition Database”, IEEE Southwest Symposium on Image Analysis and Interpretation, May 2010, p 97-100, Austin, TX.
  3. S. Gupta, K. R. Castleman, M. K. Markey, A. C. Bovik, “Texas 3D Face Recognition Database”, URL: http://live.ece.utexas.edu/research/texas3dfr/index.htm.

UFI – Unconstrained Facial Images:

Description: This data set includes real photos from reporters of the Czech News Agency (ČTK), divided into 2 subsets, each further divided into training and testing sets. The first, the Cropped Images Dataset, includes 4295 photos of 605 different individuals, cropped to 128×128 pixels so they mostly show only the face; the photos are in grey-scale and were taken under various illumination conditions, poses, and facial expressions. The second subset, the Large Images Dataset, contains 4346 pictures of 530 different individuals; the photos are 384×384 pixels, in grey-scale, and taken in different environments under various illumination conditions, poses, and facial expressions.

License: see website.

Link: http://ufi.kiv.zcu.cz/

Reference: L. Lenc, P. Kral, Unconstrained Facial Images: Database for Face Recognition under Real-world Conditions, in 14th Mexican International Conference on Artificial Intelligence (MICAI 2015), Cuernavaca, Mexico, 25-31 October 2015, Springer, FullText, Bibtex.

UMB-DB 3D Face Database:

Description: This set contains 3D models and coloured 2D pictures of 143 individuals, 98 men and 45 women. The subjects show 4 different facial expressions: smiling, bored, hungry and neutral. There are also models and pictures in which part of the face is occluded by hair, glasses, hands, hats, scarves or other objects. Each picture or model comes with information about the positions of 7 facial parts, such as the eyes and eyebrows.

License: see website.

Link: http://www.ivl.disco.unimib.it/minisites/umbdb//description.html

Reference:

A. Colombo, C. Cusano, and R. Schettini, “UMB-DB: A Database of Partially Occluded 3D Faces,” in Proc. ICCV 2011 Workshops, pp. 2113-2119, 2011.

University of Oulu Physics-Based Face Database:

Description: Photos of 125 different faces, with 16 pictures of each person (32 of subjects wearing glasses). Each picture has the same background. The faces are in frontal position and captured under Daylight, Horizon, Fluorescent, and Incandescent illumination. The database also contains “3 spectral reflectance of skin per person measured from both cheeks and forehead“.

License: see website.

Link: http://www.cse.oulu.fi/CMV/Downloads/Pbfd

References:

  1. Marszalec E, Martinkauppi B, Soriano M & Pietikäinen M (2000). A physics-based face database for color research Journal of Electronic Imaging Vol. 9 No. 1 pp. 32-38.
  2. Soriano M, Marszalec E & Pietikäinen M (1999). Color correction of face images under different illuminants by RGB eigenfaces. Proc. 2nd Audio- and Video-Based Biometric Person Authentication Conference (AVBPA99), March 22-23, Washington DC USA pp. 148-153.
  3. Martinkauppi B (1999). Improving results of simple RGB-model for cameras using estimation. SPIE Europto Conf. on Polarization and Color Techniques in Industrial Inspection, June 17-18, Munich, Germany, pp. 295-303.
  4. Soriano M, Martinkauppi B, Marszalec E & Pietikäinen M (1999). Making saturated images useful again. SPIE Europto Conf. on Polarization and Color Techniques in Industrial Inspection, June 17-18, Munich, Germany, pp. 113-121.

10k US Adult Faces Database:

Description: This dataset has 10,169 pictures of different faces, in JPEG format at 72×256 pixels. The set also contains various information on each face: “manual ground-truth annotations of 77 various landmark points of 2,222 faces”, useful for face recognition, and psychology attributes of each face for studying subject-centric versus item-centric face memory effects. It also contains software for selecting pictures from the dataset by various properties such as gender, emotion, race, attractiveness, and memorability.
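Attribute-based selection like that described above amounts to filtering a metadata table. A sketch with made-up records and field names (the dataset's actual annotation schema may differ):

```python
# Hypothetical per-image metadata records; field names are illustrative only.
faces = [
    {"file": "f001.jpg", "gender": "female", "memorability": 0.81},
    {"file": "f002.jpg", "gender": "male", "memorability": 0.42},
    {"file": "f003.jpg", "gender": "female", "memorability": 0.35},
]

def select(records, **criteria):
    """Return records matching every keyword criterion exactly."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

memorable_women = [r for r in select(faces, gender="female")
                   if r["memorability"] > 0.5]
print([r["file"] for r in memorable_women])  # ['f001.jpg']
```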

License: see website.

Link: http://www.wilmabainbridge.com/facememorability2.html

Reference:

The image above was obtained from the website under the above link and is shown with permission from Wilma Bainbridge.

  1. Khosla, A., Bainbridge, W.A., Torralba, A., & Oliva, A. (2013). Modifying the memorability of face photographs. Proceedings of the International Conference on Computer Vision (ICCV), Sydney, Australia
  2. Main Citation: Bainbridge, W.A., Isola, P., & Oliva, A. (2013). The intrinsic memorability of face images. Journal of Experimental Psychology: General. Journal of Experimental Psychology: General, 142(4), 1323-1334

VT-AAST Benchmarking Dataset:

Description: This database was created for benchmarking automatic face detection and human skin segmentation techniques. The set is split into 4 subsets. Subset 1 contains 286 unaltered colour pictures of 1027 faces; the images vary in pose, environment, lighting conditions, orientation, race, and facial expression. The second subset contains the same pictures as the first, saved in a different file format. Subset 3 is equivalent to the first two but includes human skin-coloured regions resulting from an artificial segmentation procedure. The fourth and final subset contains the same photos as subset 1, but with some regions changed to grey-scale.

License: see website.

Link: http://abdoaast.wixsite.com/abdallahabdallah/the-vt-mena-benchmarking-datas

Reference:

  1. Abdallah S. Abdallah, M. Abou El-Nasr, and A. Lynn Abbott, “Fusion of 2D-DCT and Edge Features for Face Detection with an SOM Classifier”, International Conference of Applied Electronics, AE 2007.
  2. Abdallah S.A., M. Abou El-Nasr, and A.L. Abbott, “A New Face Detection Technique using 2D-DCT and SOM”, Fourth International Conference on Machine Learning and Data Analysis, MLDA 2007.
  3. Abdallah S. Abdallah, M. Abou El-Nasr, and A. Lynn Abbott, “A New Color Image Database for Benchmarking of Automatic Face Detection and Human Skin Segmentation Techniques”, Fourth International Conference on Machine Learning and Pattern Recognition, MLPR 2007.

Yale Face Database:

Description: This set has 165 black-and-white pictures of the faces of 15 different subjects. 11 pictures were taken of each individual, showing different expressions (happy, normal, sad, sleepy, surprised and wink) or poses.

License: Not known.

Link: http://vision.ucsd.edu/content/yale-face-database

Reference: Not known.

Groups

Urban Tribes:

Description: This database contains various photos of people in groups. The photos were collected from the internet and are categorised into bikers, hipsters, goths, and so on, which the creators call “urban tribes”.

License: Not known.

Link: http://vision.ucsd.edu/content/urban-tribes

Reference:

  1. Kwak I., Murillo A.C., Belhumeur P., Belongie S., Kriegman D., “From Bikers to Surfers: Visual Recognition of Urban Tribes”, British Machine Vision Conference (BMVC), Bristol, September, 2013.
  2. Murillo A.C., Kwak I., Bourdev L., Kriegman D., Belongie S., “Urban Tribes: Analyzing Group Photos from a Social Perspective”, CVPR Workshop on Socially Intelligent Surveillance and Monitoring (SISM), Providence, RI, June, 2012.