Microsoft Azure Machine Learning. Part of Microsoft Project Oxford, the Face APIs provide state-of-the-art algorithms for processing face images, such as face detection with gender and age prediction, face recognition, face alignment, and other application-level features.
Instead of trying to understand what makes someone look a certain age, Face.com simply let their program figure it out on its own. Hirsch explained that most of what Face.com (now owned by Facebook) does is based on machine vision and machine learning. They give their system a large database of faces (culled from Google images, for instance), provide approximate ages for each (which comes from humans originally), and then have the computer develop its own algorithms for age detection.
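The pipeline described above is ordinary supervised learning: face feature vectors paired with human-provided approximate ages, handed to a learner that induces its own predictor. A minimal sketch of that setup, using random stand-in features and a plain least-squares regressor (the real system learned far richer models from real face images):

```python
import numpy as np

rng = np.random.default_rng(0)
n_faces, n_features = 200, 16
X = rng.normal(size=(n_faces, n_features))          # stand-in face features
true_w = rng.normal(size=n_features)
# Noisy "approximate ages" playing the role of the human-supplied labels.
ages = X @ true_w + 35 + rng.normal(scale=2.0, size=n_faces)

# Fit a linear age regressor by least squares; only the labeled-data
# pipeline is the point here, not the model class.
Xb = np.hstack([X, np.ones((n_faces, 1))])          # add bias column
w, *_ = np.linalg.lstsq(Xb, ages, rcond=None)

pred = Xb @ w
print(float(np.mean(np.abs(pred - ages))))          # mean absolute error in years
```

The learner never receives a hand-written rule about what makes a face look old; the mapping from features to age falls out of the labeled examples.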
I was created to showcase some of the new capabilities of Microsoft Cognitive Services. These new capabilities are the result of years of research advancements (some of them summarized here). Specifically, I use Computer Vision and Natural Language to describe the contents of images. I am still learning, so sometimes I get things wrong.
Dartmouth-Hitchcock revolutionizes the U.S. healthcare system. Dartmouth-Hitchcock Health System is piloting a highly coordinated, intensely personalized solution built on Microsoft technologies for machine intelligence and advanced data analytics, including the just-announced Cortana Intelligence.
Researchers at MIT, Microsoft, and Adobe have developed an algorithm that can reconstruct an audio signal by analyzing minute vibrations of objects depicted in video. In one set of experiments, they were able to recover intelligible speech from the vibrations of a potato-chip bag photographed from 15 feet away through soundproof glass. In other experiments, they extracted useful audio signals from videos of aluminum foil, the surface of a glass of water, and even the leaves of a potted plant.
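A toy illustration of the core idea, not the researchers' actual algorithm (which recovers sub-pixel motion across spatial scales and orientations): treat each frame's mean brightness as one sample of a vibration signal and inspect its spectrum. The frame rate and tone frequency below are made-up values for the synthetic example.

```python
import numpy as np

fps = 2400                      # hypothetical high-speed capture rate
t = np.arange(fps) / fps        # one second of frames
tone_hz = 220                   # hypothetical vibration frequency to recover

# Synthetic "video": an 8x8 patch whose intensity trembles at tone_hz,
# plus per-pixel sensor noise.
rng = np.random.default_rng(1)
frames = (128 + 0.5 * np.sin(2 * np.pi * tone_hz * t)[:, None, None]
          + rng.normal(scale=0.1, size=(fps, 8, 8)))

signal = frames.mean(axis=(1, 2))          # crude per-frame motion proxy
signal -= signal.mean()                    # drop the DC component
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fps)
print(freqs[spectrum.argmax()])            # dominant recovered frequency -> 220.0
```

Averaging over the patch suppresses the pixel noise while the coherent vibration survives, which is why the tone dominates the spectrum even though each pixel's fluctuation is tiny.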
The aim of UrbanGems is to identify the visual cues generally associated with concepts that are difficult to define, such as beauty, happiness, quietness, or even deprivation. The difficult task of deciding what makes a building beautiful, or what makes a quiet location sought after, is crowdsourced to the users of this site through pairwise comparisons of pictures.
There's a lot that Facebook isn't showing you. After crunching the numbers, Herrera realized he'd only seen 738 of the 2,593 new updates created by his connections that day—roughly 29% of the day's activity. "Considering the average U.S. user spends around 40 minutes on Facebook per day – or about one-tenth of the time I spent in my News Feed – it’s easy to imagine that percentage dipping far, far below my 29%," he said in his analysis of the News Feed experiment.
Building a deeper understanding of images. Google trained the models using its DistBelief infrastructure, which makes it possible to train neural networks in a distributed manner and to iterate rapidly. At the core of the approach is a radically redesigned convolutional network architecture. Its seemingly complex structure (typical incarnations consist of over 100 layers, with a maximum depth of over 20 parameter layers) is based on two insights: the Hebbian principle and multi-scale processing.
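The multi-scale insight can be sketched as a module that applies filters of several sizes to the same feature map in parallel and concatenates the results along the channel axis. This is a hedged NumPy sketch of that idea only; the published module also includes a pooling branch and dimension-reducing 1x1 convolutions, and all shapes below are illustrative.

```python
import numpy as np

def conv2d(x, w):
    """Naive 'same'-padded 2-D convolution: x is (C_in, H, W), w is (C_out, C_in, k, k)."""
    c_out, c_in, k, _ = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    h, wd = x.shape[1:]
    out = np.zeros((c_out, h, wd))
    for i in range(h):
        for j in range(wd):
            # Contract the filter bank against the local patch.
            out[:, i, j] = np.tensordot(w, xp[:, i:i + k, j:j + k], axes=3)
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8, 8))                    # input feature map

# Parallel branches at multiple filter scales, then channel concatenation.
b1 = conv2d(x, rng.normal(size=(8, 16, 1, 1)))     # 1x1 filters
b3 = conv2d(x, rng.normal(size=(8, 16, 3, 3)))     # 3x3 filters
b5 = conv2d(x, rng.normal(size=(4, 16, 5, 5)))     # 5x5 filters
out = np.concatenate([b1, b3, b5], axis=0)
print(out.shape)                                   # -> (20, 8, 8)
```

Stacking many such modules is how a network with only ~20 parameter layers of depth ends up with over 100 layers when every building block is counted.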
The social network's researchers have built DeepFace, an algorithm that can pick a face out of a crowd with 97.25 per cent accuracy. That means it is almost as good as we are at recognising a face. They created a 3D model of a face from a photo that can be rotated into the best position for the algorithm to start matching. They then used a neural network that had been trained on a massive database of faces to try to match the face with one in a test dataset.