Single Take

THE CHALLENGE
-
The Samsung camera has varied modes like Slow-Mo, Bokeh, Hyperlapse, etc., but most captures (99%) happen only in the default photo and video modes.
-
Consumers don't know when they should use a specific mode
SOLUTION
MY ROLE (2019 - 2021)
-
Part of the team that conceptualized the initial idea
-
Rapid Prototyping of UI
-
Design proposals for executives
-
Writing the core logic of the algorithm
-
User Testing
-
Conceptual improvements for V2.0 & V3.0
Note: As this project is commercially released, all the footage and images used are extracted from the commercials and Samsung's official blog channel. I have omitted & obfuscated other confidential information to comply with non-disclosure. Viewpoints expressed are strictly my own.
01
THE BEGINNINGS

Samsung was determined to let its users reap the best that its sensors and on-device AI capabilities could offer. Our Media Research & Innovation division was tasked with this initial brief:
"Create an innovative camera experience that aids users in capturing perfect moments with technical brilliance while having a very simple interface"
To achieve this, we conducted user research to understand pain points around using multiple modes and getting creative outputs, and then used those insights to come up with initial concepts.
02
QUALITATIVE USER RESEARCH
WHY ?
-
To understand why, how & when users want to use different modes
-
To understand what a perfect capture means to them
-
To understand their preferences and desired outputs
WHO ?
-
Gen Z & Millennials - To learn about their recent camera habits
-
Gen X - Dominant buyers of Samsung flagship devices, so understanding their capture and camera behaviour
WHERE ?
-
4 Tier 1 & Tier 2 cities in India
-
Delhi, Pune, Bangalore & Vijayawada
HOW ?
-
Semi Structured Interviews
-
Contextual Storytelling
-
Diary Study (Social Media Influencers)
KEY INSIGHTS




Prototype Video
03
BRAINSTORMING
After arriving at the insights, we organized brainstorming sprints in which designers from the Camera Commercialization, Development, and PM teams also participated. Simple How Might We's, Abstract Laddering, and Quick 8's are some of the ideation methods we used in these sprints, followed by voting.
Post sprint sessions, our team did affinity mapping of similar concepts and then prioritized among those themes using criteria like user impact, frequency, development complexity, and closeness to the existing framework to arrive at some initial conceptual directions.
INITIAL CONCEPT DIRECTIONS
AI captures on the side while the user takes his/her own shots
AI nudging user to use other modes based on the analysis of scene
No user intervention at all; when tapped, the camera does its thing and gives feedback
S20 INNOVATION TASK FORCE WORKSHOP IN SEOUL
We took these ideas and presented them in a 2-week workshop at our Seoul Design HQ, in which design teams from Samsung Korea & SDIC, California also participated. I headed the 3-member team responsible for combining the Zero Camera and Parallel Capture concepts into an end-to-end vision for the concept.
Initial Challenges to address:
-
What should the during-capture experience be?
-
How do we give visual feedback when the camera detects a perfect moment to capture?
-
How do we lead users from the camera interface to the captured outputs?
We generated multiple UX iterations to answer these questions, and after a lot of internal discussion and multiple presentations to leadership, finalized this initial prototype.
SOLUTION PROTOTYPE


Key Screen 1 - Camera Capture
Key Screen 2 - Gallery
This concept
-
Lets users stay in the moment
-
Ensures the right moment is never missed
-
Captures peak moments
-
Skips editing, as each output is pre-edited and instantly shareable
04
USER TESTING
WHY ?
To gauge the level of interest, as this concept requires
-
Intense Development Effort
-
A fundamental rethinking of Camera & Gallery from a UI POV
WHO ?
-
Gen Z, Millennials & Gen X in their own individual groups
WHERE ?
-
6 Tier 1 cities across the world
-
New Delhi, Milan, San Francisco, Moscow, London & Seoul
HOW ?
-
Focus Group Studies
-
Local third-party research agencies were hired in each country; we provided concept storyboards and helped brief the moderators
RESULT
-
This concept topped user interest among the other new concepts
-
Non-tech-savvy age groups in particular showed very high enthusiasm
-
Camera-savvy people were not completely comfortable handing all control to AI, but saw the concept as a good backup for when they want to be in the moment or save time by skipping editing
-
Value was perceived in its friendliness to non-tech-savvy users, its UI simplicity, never missing the right moment, and opening a window to impressive outputs even from other modes
Following this, our team was given the go-ahead to detail the intricacies of the concept and work closely with AI engineers & developers from the Samsung Bangalore & Seoul teams to build a working MVP.
05
CONCEPT DETAILING
Before beginning the detailing, our team ran multiple user research sprints to understand the editing thought process, trending styles, and what a "highlight moment" means to different users in India and Korea. The research ran in 4 phases: literature review, user survey, FGDs, and semi-structured interviews with influencers & professional video editors. (Findings of that research are confidential, as they are being commercialized in a phased manner, so they can't be documented here; reach out personally with questions.) Based on our research, the PM team selected the technology across Samsung's different AI centers that was suitable and ready for commercialization.
We had roughly 3 months to develop an MVP and we divided the detailing of MVP broadly in 3 phases:
1. AI Logic Detailing - Finalizing the algorithms, and their order, used to identify the few best photos and videos
2. UX Detailing - Finalizing the user flow and managing different development complexities with UX
3. Output Detailing - Finalizing what outputs to generate and the logic for filter application and video screenplay
AI LOGIC DETAILING

Source: Samsung Official Blog
Multiple sprints happened with the Engineering, UX & PM teams combined to decide which of the available engines to use in what order, the technicalities of generating each output, how to decide which output should be generated when, and so on.
The discussions were primarily about tradeoffs: choosing HDR for quality shots might cost a few preview frames; more video outputs mean higher post-capture processing time; and so on.
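To illustrate the kind of tradeoff reasoning involved, here is a hypothetical sketch of picking engines under a post-capture processing budget. The engine names, quality scores, and costs are my own illustrative assumptions, not the shipped logic:

```python
# Hypothetical sketch (NOT Samsung's actual logic): choose which analysis
# engines to run for a capture, given a post-capture processing budget.
# Engine names, quality gains, and costs below are illustrative assumptions.

ENGINES = [
    # (name, quality_gain, processing_cost_ms)
    ("best_frame_selector", 9, 300),
    ("hdr_enhance",         7, 900),   # tradeoff: may drop a few preview frames
    ("smart_crop",          5, 200),
    ("highlight_video",     8, 1500),  # more video work -> longer post-capture wait
    ("boomerang_clip",      4, 1200),
]

def pick_engines(budget_ms):
    """Greedily pick engines by quality gain per millisecond until the budget runs out."""
    chosen = []
    remaining = budget_ms
    for name, gain, cost in sorted(ENGINES, key=lambda e: e[1] / e[2], reverse=True):
        if cost <= remaining:
            chosen.append(name)
            remaining -= cost
    return chosen

print(pick_engines(2500))  # ['best_frame_selector', 'smart_crop', 'hdr_enhance']
```

With a tight budget, the sketch drops the expensive video engines first, mirroring the "more outputs means more waiting" tradeoff described above.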
UX DETAILING
These are the primary challenges that we wanted to tackle in detail:
-
Engaging user while capturing
-
Camera to Gallery Transition
-
Tray Experience
-
Gallery Experience
-
Managing Memory
01
Engaging user while capturing
Challenge:
The user has nothing to do for 15 seconds after pressing the capture button
Solution:
Users could immerse themselves in the moment and not watch the preview at all, but if they are watching:
-
Progress bar showing time elapsed
-
Text like "Capturing meaningful moments", "The more you capture, the better it gets", and "Cover more angles for better shots" to engage and guide users

02
Camera to Gallery Transition
Challenge:
This concept, which produces multiple outputs from a single capture, requires a completely different way of handling them in the gallery
Solution:
Users can access the captures from the camera preview the same way they access normal pictures and videos
-
The best AI shot is highlighted by default and placed on the central canvas
-
The rest of the outputs are hidden in a tray, accessed by swiping up on the nudging tray icon
03
Tray Experience
Challenge:
A multifaceted problem: the system doesn't generate all the outputs at once, and some videos can take a long time to stitch, but a user who has already waited 10 to 15 seconds for the capture might not want to wait that long
Solution:
The order in which the multiple outputs are placed in the tray depends on
-
Time taken for the system to generate each output
-
The best AI shot as the first item and the shareable video output as the last item in the tray
-
Balancing photos and videos in the mosaic
-
Giving the user the option to crown the output they like
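The tray ordering rules above can be sketched roughly as follows; the output kinds, field names, and interleaving strategy are illustrative assumptions on my part, not the shipped implementation:

```python
# Hypothetical sketch of the tray ordering described above.
# Output kinds and field names are illustrative assumptions.

def order_tray(outputs):
    """Best AI shot first, shareable video last; the rest ordered by how
    quickly they were generated, alternating photos and videos for balance."""
    best = [o for o in outputs if o["kind"] == "best_shot"]
    final_video = [o for o in outputs if o["kind"] == "shareable_video"]
    rest = [o for o in outputs if o["kind"] not in ("best_shot", "shareable_video")]
    rest.sort(key=lambda o: o["gen_time_ms"])  # fastest-ready first

    # Interleave photos and videos to balance the mosaic
    photos = [o for o in rest if o["media"] == "photo"]
    videos = [o for o in rest if o["media"] == "video"]
    mixed = []
    while photos or videos:
        if photos:
            mixed.append(photos.pop(0))
        if videos:
            mixed.append(videos.pop(0))
    return best + mixed + final_video

tray = order_tray([
    {"kind": "filtered_photo",  "media": "photo", "gen_time_ms": 400},
    {"kind": "shareable_video", "media": "video", "gen_time_ms": 5000},
    {"kind": "best_shot",       "media": "photo", "gen_time_ms": 200},
    {"kind": "boomerang",       "media": "video", "gen_time_ms": 2000},
])
print([o["kind"] for o in tray])
# ['best_shot', 'filtered_photo', 'boomerang', 'shareable_video']
```

Ordering by generation time lets fast outputs appear immediately while slower video stitches fill in later, addressing the waiting problem described above.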

04
Gallery Experience
Challenge:
If a single capture session has multiple outputs, how should they be displayed in the thumbnail view, gallery preview, etc.?
Solution:
Gallery treatment of Single Take captures through
-
Single take icon in the thumbnail and crown moment occupying the central canvas
-
In the detail preview, a GIF of all the generated outputs; dragging reveals all the outputs
-
Single take album to access all single take captures at one place


05
Managing Memory
Challenge:
Every Single Take capture leads to multiple outputs, clogging memory
Solution:
A combination of
-
Letting users adjust which outputs they want before capture
-
Letting users adjust the capture time before capture
-
An option to select multiple outputs to delete
-
An option to select multiple outputs to save to the gallery and delete the rest of the capture


OUTPUT DETAILING
For filters on photo-based outputs, we collaborated with a third-party photo editing application; there is detailed logic for which set of filters to use when.
Video outputs are primarily based on the detection of certain events, like rapid motion or expressions such as laughter. The screenplay logic is based on the insights of our video highlight research, where we talked to professional video editors and social media influencers. The screenplay is modular in terms of event blocks and can be adjusted to fit a maximum duration. The final screenplay logic was decided after multiple iterations and extensive usability testing.
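The idea of modular event blocks adjusted to a maximum duration could be sketched like this; the event names, scores, and durations are my own illustrative assumptions, not the actual screenplay logic:

```python
# Hypothetical sketch of fitting modular event blocks into a maximum video
# duration. Event names, scores, and durations are illustrative assumptions.

def fit_screenplay(blocks, max_duration_s):
    """Keep the highest-scoring event blocks that fit the duration budget,
    then restore chronological order so the video still plays in sequence."""
    chosen, total = [], 0.0
    for block in sorted(blocks, key=lambda b: b["score"], reverse=True):
        if total + block["duration_s"] <= max_duration_s:
            chosen.append(block)
            total += block["duration_s"]
    chosen.sort(key=lambda b: b["start_s"])  # chronological playback order
    return chosen

clip = fit_screenplay([
    {"event": "jump",  "score": 0.9, "duration_s": 3.0, "start_s": 2.0},
    {"event": "laugh", "score": 0.8, "duration_s": 4.0, "start_s": 7.0},
    {"event": "wave",  "score": 0.4, "duration_s": 3.0, "start_s": 12.0},
], max_duration_s=8.0)
print([b["event"] for b in clip])  # ['jump', 'laugh']
```

Because blocks are modular, the same detected events can be recombined for different maximum durations, e.g. a short shareable clip versus a longer highlight video.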
We also worked with other Samsung AI centers to commercialize their AI models for certain outputs.

Video demonstrating different outputs
06
KEY SCREENS

Toast that appears when the user enters Single Take mode for the first time
1. Single Take Mode
2. Types of Shots to Capture
3. Helpful tips during capture

4. Single Take Output in Gallery
5. Single Take Tray
6. Select Outputs
IMPACT
07
Single Take was launched as the Galaxy S20's camera USP and, later that year, marketed as Monster Shot in the Galaxy M series. Both launches saw great acceptance and accolades from users and tech bloggers.
TECH BLOGGERS
USERS
"Our Favorite feature in Galaxy S20 is this thing called Single Take" - The Verge
"Single Take feature looks cool" - MKBHD
"This is the most thoughtful camera feature that Samsung has ever conceived" - Sammobile
"Feature that impressed us the most in this model is Single Take" - Engadget




KEY SUCCESS METRICS
-
Relative Mode Usage - Single Take became the second most used mode after Photo mode, averaging 24% of all capture events on flagship phones and 27% on midrange phones (solving the initial problem of 99% of captures coming only from the default photo and video modes)

-
Device Sales as a USP Feature - Single Take was voted by Samsung sales reps as the feature that helped sell the most devices in 2020. The Galaxy M31s, a midrange phone for which Single Take was the marketing USP in India, became Samsung's highest-selling device ever in India, bringing Samsung's market share back up after 2 years of decline
-
Frequency of Usage - An average user uses Single Take 6 times a month
-
Duration of capture and the number of events per output type are some of the other metrics we actively track
After this version's release, we conducted user testing and used the pain points collected from those sessions, along with aggregated big data, to work on the next version of Single Take, which is more robust and a step closer to the initial vision. It is yet to be released commercially, so I can't reveal it here, but I'm happy to chat in person.
08
REFLECTIONS
01
End to End Experience
I was lucky to be involved in a single project right from when it was on a Post-it to it being used by millions of users. Being part of this whole cycle made me confident handling things end to end at scale.
03
Global Exposure
Personally, this factor made a lot of difference to me. I was lucky enough to travel to different countries and learn new things (thank god it happened pre-COVID :P).
Being an AI project, it involved not just the regular commercialization teams but also collaboration across Samsung's different AI centers - Moscow, Cambridge, Toronto, Seoul, Bangalore, etc. Working with people from diverse cultural backgrounds helped me grow as a person and made me comfortable working with cross-cultural teams.
02
Collaboration is the Key
Being a large-scale research project with an approach new to the company's conventional one, this required seamless collaboration not just between design, dev & PM, but also the sales, marketing & strategy teams. I gained a ton of knowledge and experience from being part of it.
04
Power of incremental implementation
This is an idea that is easy to conceive but takes a lot of effort to execute, and our team never shied away from taking it one step at a time.
So looking back, even though every individual discussion was about a specific feature, we came a long way in achieving our vision. There are many miles to go in making a completely accurate AI camera, but a lot of progress has already been made.