The craze around deep learning has challenged much of the information-processing status quo. For some use cases, its success makes sense and seems inevitable in hindsight. For others, like image processing, its bid to outshine battle-hardened compression and optimization algorithms was harder to predict, raising the question of which feats of computer engineering are safe from its grasp. Today we will look only at the ways machine learning is changing how we store, create and optimize images, but every corner of information science is seeing similar confrontations with deep learning.
Image Compression and Resolution
Last year, Google released RAISR, an algorithm that combines traditional upsampling with deep learning to turn low-resolution images into convincing high-resolution counterparts. Building it meant investigating the strengths and limitations of both old-school image analysis and deep learning, then taking the best of each to produce a chimera algorithm that runs faster than most purely deep-learning methods while delivering superior results.
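RAISR's actual filters and hashing scheme are learned from training pairs, but the control flow can be sketched: upscale cheaply first, then sharpen each pixel with a filter selected by hashing its local patch. Everything below (`cheap_upscale`, `patch_bucket`, the `FILTERS` table) is a toy stand-in of ours, not Google's implementation:

```python
import numpy as np

def cheap_upscale(img, s=2):
    # Nearest-neighbour upsampling stands in for RAISR's cheap interpolator.
    return np.kron(img, np.ones((s, s)))

def patch_bucket(patch):
    # Hash a patch by its dominant gradient direction. The real algorithm
    # buckets on angle, strength and coherence; this toy uses angle only.
    gy, gx = np.gradient(patch)
    angle = np.arctan2(gy.sum(), gx.sum())          # in [-pi, pi]
    return int((angle + np.pi) / (2 * np.pi) * 4) % 4

# Hypothetical pre-learned 3x3 filters, one per bucket. In RAISR these
# are fit by least squares on (low-res, high-res) training pairs.
FILTERS = {b: np.eye(3) / 3.0 for b in range(4)}

def raisr_like(img, s=2):
    up = cheap_upscale(img, s)
    out = up.copy()
    for i in range(1, up.shape[0] - 1):
        for j in range(1, up.shape[1] - 1):
            patch = up[i - 1:i + 2, j - 1:j + 2]
            f = FILTERS[patch_bucket(patch)]
            out[i, j] = (patch * f).sum()           # learned sharpening pass
    return out
```

The speed win comes from doing the expensive learning offline: at inference time the work is one cheap interpolation plus one small filter per pixel.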
In a similar vein, a company called WaveOne claims to have trained a model to compress images to tiny sizes with much greater success than popular codecs like JPEG. It seems likely this will not be the last we hear of machine learning trumping the compression algorithms we’ve come to know and love.
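WaveOne's model is proprietary, but the compress-then-reconstruct structure shared by learned codecs can be sketched with the simplest possible "autoencoder": a linear projection onto principal components. The random patch data and the choice of `k` below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
patches = rng.normal(size=(100, 64))   # stand-in for flattened 8x8 patches

# "Encoder": keep k coefficients per patch instead of 64 pixels
# (an 8x rate reduction in this toy setup).
k = 8
mean = patches.mean(0)
centered = patches - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
codes = centered @ vt[:k].T            # compressed representation
recon = codes @ vt[:k] + mean          # "decoder" reconstructs the patch
```

A learned codec like WaveOne's replaces the linear maps with deep networks trained end-to-end for rate and distortion, but the pattern is the same: squeeze the image through a small bottleneck, then reconstruct from the code.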
Hey, it’s Google again (with help from MIT). Take a look at Deep Bilateral Learning for Real-Time Image Enhancement. Machine learning can now perform human-like photo retouching in real time on your phone. So instead of snapping an image and heading to Instagram to apply a bunch of artistic filters, the model can show you how your photos would look with professional-quality enhancements as you frame up your shot. It works through clever prediction of low-resolution affine transformations that are cheap to upsample before being applied at full resolution, and hey, we even wrote a blog post about it.
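A rough sketch of that trick, assuming nearest-neighbour upsampling of the coefficient grid in place of the paper's learned bilateral-grid slicing (the function `apply_lowres_affine` and its shapes are our invention, not the paper's API):

```python
import numpy as np

def apply_lowres_affine(image, coeffs):
    """Apply per-pixel affine colour transforms predicted at low resolution.

    image:  (h, w, 3) full-resolution RGB image
    coeffs: (ch, cw, 3, 4) coarse grid of 3x4 affine matrices

    The key idea: the affine coefficients vary smoothly, so upsampling
    them is far cheaper than running a network on full-resolution pixels.
    """
    h, w, _ = image.shape
    ch, cw = coeffs.shape[:2]
    # Nearest-neighbour upsampling of the coefficient grid (the paper
    # slices a learned bilateral grid instead; this is a crude stand-in).
    ys = np.arange(h) * ch // h
    xs = np.arange(w) * cw // w
    full = coeffs[ys][:, xs]                              # (h, w, 3, 4)
    homog = np.concatenate([image, np.ones((h, w, 1))], axis=-1)  # (h, w, 4)
    return np.einsum('hwij,hwj->hwi', full, homog)
```

With identity matrices in every grid cell the image passes through unchanged; a trained network would instead fill the grid with transforms that brighten shadows, shift colours, and so on, per region.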
Image Manipulation and Generation
Adversarial networks pit two separate neural networks against each other to duke it out until convergence. Or, if you are a glass-half-full kind of person, one network teaches the other how to generate some data (less exciting). This has been put to some pretty mind-blowing ends, including removing rain from photos, turning night into day, and so forth. The ability of adversarial networks to learn to generate new data is unprecedented and extremely powerful, and as the state of the art improves, expect to hear quite a lot about them. Also, while you are at it, you might want to start distrusting everything you see.
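The duel above can be sketched with a deliberately tiny one-dimensional GAN: a linear generator learns to mimic samples from N(4, 1) against a logistic discriminator, with the gradients written out by hand. All hyperparameters below are arbitrary choices of ours, not from any paper:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

real = lambda n: rng.normal(4.0, 1.0, n)   # real data the generator must mimic

a, b = 1.0, 0.0        # generator  g(z) = a*z + b,  z ~ N(0, 1)
w, c = 0.1, 0.0        # discriminator D(x) = sigmoid(w*x + c)
lr, n = 0.05, 64

for step in range(2000):
    # --- Discriminator step: push D(real) -> 1 and D(fake) -> 0 ---
    x, z = real(n), rng.normal(size=n)
    fake = a * z + b
    ds_r = sigmoid(w * x + c) - 1.0        # grad of -log D(x) wrt logit
    ds_f = sigmoid(w * fake + c)           # grad of -log(1 - D(fake)) wrt logit
    w -= lr * np.mean(ds_r * x + ds_f * fake)
    c -= lr * np.mean(ds_r + ds_f)

    # --- Generator step: push D(fake) -> 1 (non-saturating loss) ---
    z = rng.normal(size=n)
    fake = a * z + b
    ds = sigmoid(w * fake + c) - 1.0       # grad of -log D(fake) wrt logit
    a -= lr * np.mean(ds * w * z)
    b -= lr * np.mean(ds * w)
```

After training, the generator's offset `b` typically drifts toward the real mean of 4, because the only way to fool the discriminator is to produce samples it cannot tell apart from the real ones. Real GANs replace these scalar maps with deep networks, but the alternating two-player loop is the same.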