Guided Selfies using Models of Portrait Aesthetics

Credit: Prof. Daniel Vogel, University of Waterloo

Let’s be honest: at one time or another, we have all tried to take a selfie on our smartphones. Plenty of people seem to have mastered this not-so-vital skill, but a lot of the pictures really don’t come out well. So it becomes a question of moving around to find just the right light and snapping away until you get a photo you like. Well, brothers and sisters, you’re gonna really like what science has cooked up for you selfie addicts. A team of computer scientists at the University of Waterloo in Canada, led by Professor Dan Vogel, has developed a smartphone app that essentially senses where to position your phone to get the best picture possible. In this writer’s case, probably in the darkest exposure possible.

Not to be confused with other apps that pretty us up after the photo is taken, the University of Waterloo team developed an algorithm that takes into account lighting direction, face position, and face size to guide you to take the optimal selfie. The Waterloo team took a very interesting, modern approach to building and testing their algorithm. They generated hundreds of ‘virtual selfies’, changing the lighting direction, face position, and face size in each photo. Next, they hired an online crowdsourcing service so that thousands of people would vote on which of the hundreds of virtual selfies they thought was best. They used the voting results to mathematically model and develop their algorithm for taking the best selfie. Prof. Vogel feels the team will be able to improve the app by taking new factors into account: “We can expand the variables to include aspects such as hairstyle, types of smile or even the outfit you wear.”
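To make that modelling step a bit more concrete, here is a minimal Python sketch of how thousands of crowdsourced ratings over one compositional feature (say, horizontal lighting angle) could be aggregated into a smoothed aesthetic-score curve with a direction of improvement. The binning, smoothing kernel, and hint strings are illustrative assumptions, not the Waterloo team’s actual code.

```python
import numpy as np

def build_score_model(feature_values, ratings, bins=36, sigma=2.0):
    """Average ratings per feature bin, then Gaussian-smooth so sparse
    bins borrow strength from their neighbours. Feature values are
    assumed here to be angles in degrees in [-180, 180]."""
    edges = np.linspace(-180.0, 180.0, bins + 1)
    idx = np.clip(np.digitize(feature_values, edges) - 1, 0, bins - 1)
    sums = np.bincount(idx, weights=ratings, minlength=bins)
    counts = np.bincount(idx, minlength=bins)
    mean = sums / np.maximum(counts, 1)  # per-bin average rating
    kernel = np.exp(-0.5 * ((np.arange(bins) - bins // 2) / sigma) ** 2)
    smoothed = np.convolve(mean, kernel / kernel.sum(), mode="same")
    centers = (edges[:-1] + edges[1:]) / 2.0
    return centers, smoothed

def suggest_move(centers, scores, current):
    """Hint pointing from the current feature value toward the peak score."""
    here = int(np.argmin(np.abs(centers - current)))
    best = int(np.argmax(scores))
    if best == here:
        return "hold steady"
    return "rotate right" if best > here else "rotate left"

# Example with synthetic data standing in for crowdsourced ratings:
angles = np.random.uniform(-180, 180, 5000)
ratings = np.exp(-((angles - 30) / 60) ** 2) + np.random.normal(0, 0.1, 5000)
centers, scores = build_score_model(angles, ratings)
print(suggest_move(centers, scores, current=-90.0))  # likely "rotate right"
```

Plotting `scores` against `centers` would give a one-dimensional version of the score distributions described in the video transcript below.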

Transcript of Video Below:

We developed a smartphone camera application to guide people to take better portrait photos, commonly called selfies. The live preview has hints showing how to move to improve three compositional features: lighting direction, face position, and face size. This guidance is based on empirical models that estimate aesthetic scores for each feature based on an analysis of the image using computer vision techniques.

The models are built using ratings of highly controlled synthetic selfies generated from 3D meshes of six realistic human models. By manipulating a virtual camera, mesh, and lighting in a 3D modeling package, we could manipulate compositional features in a controlled manner. Using scripts to precisely control settings, we generated sets of synthetic selfies exploring the space of three compositional features: face size, face position, and lighting direction.

These synthetic selfies were used to gather aesthetic ratings in an Amazon Mechanical Turk experiment. Workers navigated the set and rated some good and some bad. By combining thousands of ratings, we generate distributions of aesthetic scores over the space of each feature, visualized here as a heat map. We use these distributions to create models that estimate aesthetic scores with directions of improvement, given the current face size, face position, and lighting direction tracked using computer vision.

To validate our system, we conducted a two-part experiment. In the first part, we evaluated usability in a controlled lab environment as participants took five photos with our guided camera app and five without any guidance. Then they picked the best photo from each set of five. We used the pairs of best photos in the second part of the study, conducted on Amazon Mechanical Turk, where workers rated the aesthetics of each photo in each pair and provided comments. Our results show our system improved selfie photograph aesthetics by 26%. See the paper for more details and more results.
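For readers curious what the live guidance loop described in the transcript might look like, below is a rough Python sketch using OpenCV’s stock face detector to track face size and position in the preview and overlay movement hints. The target values, thresholds, and hint wording are assumptions for illustration; the real app drives its hints from the paper’s empirical models and also estimates lighting direction, which this sketch omits.

```python
import cv2

# Stock OpenCV frontal-face detector (a stand-in for the app's tracking).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

TARGET_SIZE = 0.45  # assumed ideal face width as a fraction of frame width
TARGET_X = 0.50     # assumed ideal horizontal face-centre position

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    hints = []
    if len(faces) > 0:
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
        size = w / frame.shape[1]                 # relative face width
        cx = (x + w / 2) / frame.shape[1]         # relative face centre
        if size < TARGET_SIZE - 0.05:
            hints.append("move closer")
        elif size > TARGET_SIZE + 0.05:
            hints.append("move back")
        if cx < TARGET_X - 0.05:
            hints.append("shift right")
        elif cx > TARGET_X + 0.05:
            hints.append("shift left")
    cv2.putText(frame, " / ".join(hints) or "good composition", (20, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("guided selfie sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```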