Virtual makeup using machine learning and augmented reality
MakeApp is a mobile application that uses augmented reality and machine learning to place virtual makeup on the user's face.
The user can see how a particular makeup looks on them before applying it for real, which makes the app valuable for hairstylists and makeup aficionados.
The challenge lay in building a mobile application that was accurate enough to detect faces and motion, yet fast enough to run in real time. The technologies available for face and landmark detection were mainly optimized for desktops, and we had to port them to be mobile-friendly in terms of performance, which was not the easiest of tasks.
We divided the project's challenge into the following parts, each with its own complexity:
- Finding the right technology to use and adapt for face and landmark detection
- Making face detection performant enough to be usable on mobile phones
- Mapping the makeup images to a 3D mesh, which in turn was mapped to the user's face.
Divide and conquer: splitting the challenge into separate parts, then bringing them together as a whole
1. Face & Landmark Detection
The first and most important bottleneck of the project was finding a technology that did face detection well. Since this technology is quite new and hard to develop from scratch, there aren't many choices out there.
We had to avoid paid solutions since the budget was not there. Our first, and probably longest, research phase for the project was about this piece of technology. Two open-source solutions fit our requirements, and we chose to go with OpenFace.
OpenFace is a piece of software built on OpenCV that uses neural-network-based algorithms for landmark detection. Face detection is handled by OpenCV (we went with the Haar cascade detector), while landmark detection is done by OpenFace. Being desktop software, OpenFace was not optimized for mobile usage, which brings us to our second point.
2. Face Detection Performance
The first and core optimization we made was reducing the size of the images being fed from the camera into the face and landmark detection algorithms. Depending on the phone's camera resolution, a single frame can weigh several megabytes; multiply that by 24 (24 frames per second) and you get the number of megabytes the algorithms had to process every second. Downscaling the images in real time with OpenCV was a game changer, as image size was the main bottleneck of the solution.
The second optimization we made was halving the rate at which the camera fed the algorithms, from 24 to 12 frames per second. Accuracy barely suffered while speed improved greatly. By this point we had a solution that was solid in both speed and accuracy.
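The frame-skipping idea fits in a few lines. `FrameThrottler` and its `stride` parameter are names invented here for illustration; the point is that the detector runs on every other frame and the previous result is reused in between:

```python
class FrameThrottler:
    """Run an expensive detector on every `stride`-th camera frame
    (stride=2 turns a 24 fps feed into 12 detections per second),
    returning the cached result for the frames in between."""

    def __init__(self, stride=2):
        self.stride = stride
        self.count = 0
        self.last = None  # most recent detection result

    def process(self, frame, detect):
        if self.count % self.stride == 0:
            self.last = detect(frame)  # expensive call, every other frame
        self.count += 1
        return self.last
```

Because faces move little between two consecutive frames at 24 fps, the stale result from ~42 ms ago is usually indistinguishable on screen.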
3. Image Mapping
The third challenge of the project was image mapping. Besides face and landmark detection, there is also the mapping process: the makeup images are mapped, via the detected landmarks, onto a mesh fitted to the face. The mesh was generated using Blender, an open-source application for computer-generated imagery (CGI).
Check your makeup without wearing it.
The solution we developed lets hairdressers and makeup aficionados see how a makeup would look on them without wearing it.
It works not only for anyone who enjoys experimenting with makeup, but also for hairdressers, who could use it as a competitive advantage.
Imagine all the people who are disappointed, or lose time, after trying a makeup they don't like or that simply doesn't suit them. Then imagine how many satisfied customers a hairdresser would have if clients could quickly and easily find out what works for them. MakeApp makes this possible.