Now, Google has open-sourced DeepLab-v3+, its “latest and best performing semantic image segmentation model,” implemented in TensorFlow. It is not the same technology that produces Portrait mode on Pixel 2 handsets, but it can produce similar results, Google clarified.

DeepLab-v3+ is built with convolutional neural networks, or CNNs – a machine learning approach that is particularly good at analyzing visual data – so that the outlines of foreground objects are marked out accurately. The technology analyzes the objects within a picture and distinguishes the foreground from the background, a separation that can then be used to create ‘bokeh’-style photographs.

In a blog post this week, Google explained that semantic image segmentation assigns a label such as “road,” “sky,” “person,” or “dog” to every pixel in an image, allowing the model to outline a subject and differentiate it from the background. This per-pixel labeling is what lets the Pixel blur the background while keeping the subject in sharp focus, as sketched in the example below.

“Modern semantic image segmentation systems built on top of convolutional neural networks (CNNs) have reached accuracy levels that were hard to imagine even five years ago, thanks to advances in methods, hardware, and datasets. We hope that publicly sharing our system with the community will make it easier for other groups in academia and industry to reproduce and further improve upon state-of-art systems, train models on new datasets, and envision new applications for this technology,” explain Google software engineers Liang-Chieh Chen and Yukun Zhu in the post.

By open-sourcing DeepLab-v3+, Google has allowed developers to use the image segmentation technology freely and implement it in their own apps.

Source: The Verge, Google
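To make the per-pixel labeling concrete, here is a minimal sketch of how a DeepLab segmentation map could drive a Portrait-style background blur. It assumes the frozen inference graph, the `ImageTensor`/`SemanticPredictions` tensor names, the 513-pixel input size, and the PASCAL VOC ‘person’ class index (15) from Google’s public demo code; verify each of these against your own checkout, as they are assumptions rather than guarantees.

```python
# A minimal sketch, assuming the frozen graph and tensor names from
# Google's public DeepLab demo (tensorflow/models); verify against your copy.
import numpy as np
import tensorflow as tf  # TF 1.x API, matching the original release
from PIL import Image, ImageFilter

MODEL_PATH = 'frozen_inference_graph.pb'  # hypothetical export path
INPUT_TENSOR = 'ImageTensor:0'            # assumed input tensor name
OUTPUT_TENSOR = 'SemanticPredictions:0'   # assumed output tensor name
PERSON = 15          # assumed 'person' index in the PASCAL VOC label map
INPUT_SIZE = 513     # long-side input size used by the demo checkpoints

# Load the frozen inference graph once.
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(MODEL_PATH, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

def portrait_blur(image_path, radius=12):
    """Blur every pixel the model does not label as 'person'."""
    image = Image.open(image_path).convert('RGB')
    # Resize so the long side fits the model's expected input size.
    scale = INPUT_SIZE / max(image.size)
    small = image.resize((int(scale * image.size[0]),
                          int(scale * image.size[1])))
    with tf.Session(graph=graph) as sess:
        # The network emits one class label per pixel.
        seg_map = sess.run(OUTPUT_TENSOR,
                           feed_dict={INPUT_TENSOR: [np.asarray(small)]})[0]
    # Foreground mask: white where the pixel is labeled 'person',
    # upsampled back to the original resolution.
    mask = Image.fromarray((seg_map == PERSON).astype(np.uint8) * 255)
    mask = np.asarray(mask.resize(image.size, Image.NEAREST)) > 0
    blurred = image.filter(ImageFilter.GaussianBlur(radius))
    # Composite: sharp subject over a blurred background ('bokeh').
    out = np.where(mask[..., None], np.asarray(image), np.asarray(blurred))
    return Image.fromarray(out.astype(np.uint8))

portrait_blur('photo.jpg').save('photo_bokeh.jpg')
```

Thresholding the segmentation map to a single class is the simplest possible compositing rule; the production Pixel pipeline is more sophisticated, but the core idea – a per-pixel label map deciding which pixels stay sharp – is the same.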