Published On: January 27th, 2016 | 3 min read

Machine learning is the new black. If “software is eating the world,” as Netscape co-founder and renowned venture capitalist Marc Andreessen famously said, then surely machine learning and smart apps are responsible for putting all that digital goodness to practical use. These smart systems take in raw input data and spit out wisdom. And last year was perhaps the field’s most successful yet at becoming a household name.

But the caveat of putting machine learning to use is that it is deeply mathematical, theoretical, and hard to grasp. It doesn’t fit nicely in the heads of teenagers who could otherwise write software at breakneck speed and challenge the big players. It also doesn’t quite fit in the proverbial “startup garage,” since it typically requires a lot of computing power. But the big players don’t want this to be a bottleneck, and all of them are falling head over heels to offer their more or less fully organized and prepared machine learning setups to the crowd, so they can bring the teenagers and garage people back into play.

Google has opened up its TensorFlow system, which brings the machine learning goodness, in typical Google fashion, to everybody. IBM has been pushing the Watson platform for quite some time now, and Microsoft has also entered the arena with its Azure Machine Learning and Project Oxford APIs (and summarily beat everyone in the famed ImageNet challenge in 2015). Amazon introduced machine learning as an AWS service last year. Facebook, not to be left behind, also open-sourced its FAIR (Facebook AI Research) modules, which use Torch, a widely used machine learning framework. Many of these frameworks use GPUs to accelerate learning, since it is a process that requires many (many many many) computations which are individually simple and can be done on GPUs instead of CPUs. Since GPUs are less expensive per computation, it is easier to build GPU clusters, and most of these frameworks do indeed have solutions optimized for them. That’s also why NVIDIA is pushing hard into this space with many free software offerings that make using NVIDIA GPU clusters super easy.
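To make the GPU point concrete, here is a minimal sketch using TensorFlow’s current Python API (newer than the graph-based API of this article’s era); the matrix shapes are arbitrary, picked just to illustrate the workload:

```python
import tensorflow as tf

# One big matrix multiplication: millions of independent multiply-adds,
# each trivial on its own. This is exactly the workload that maps well
# onto a GPU's thousands of simple cores.
a = tf.random.normal((4096, 4096))
b = tf.random.normal((4096, 4096))

# TensorFlow places the operation on a GPU automatically when one is
# visible; the identical line runs unchanged on a plain CPU.
c = tf.matmul(a, b)
print(c.device)  # e.g. ends in "GPU:0" when a GPU is present
```

The same code running on either processor is the design choice that makes GPU clusters so attractive: the framework, not the programmer, decides where the arithmetic happens.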

To show how powerful this domain has become, and to demonstrate the simplest tools available, a “Deep Learning” elective was offered to CDTM students from all study backgrounds. It revolved around NVIDIA’s “DIGITS” framework, which makes building machine learning apps a walk in the park for beginners. It’s a web-based system with a few machine learning models built in, which can be fine-tuned easily through the web dashboard. No coding is required to set up simple tasks, but of course it becomes much more powerful with programming know-how, and even more effective with technical machine learning knowledge. The important thing to note, however, is that even with very little input and background knowledge, it can be used for more than toy examples. The parameters and the models can be tuned by novices through trial and error to produce meaningful results.
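For a sense of what the dashboard configures behind the scenes: DIGITS ships standard image-classification networks such as LeNet, and a rough Keras rendition of that kind of model (an approximation for illustration, not DIGITS’s exact network definition) looks like this:

```python
import tensorflow as tf

# A LeNet-style classifier for 28x28 grayscale images (e.g. handwritten
# digits): two convolution/pooling stages followed by dense layers.
# Layer sizes here are illustrative.
def lenet_like(num_classes: int = 10) -> tf.keras.Model:
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(20, 5, activation="relu",
                               input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(50, 5, activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(500, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = lenet_like()
model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

In DIGITS, choosing the network, the dataset, and the training parameters is all done through web forms; code like the above is what those form fields amount to.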

The power of such frameworks can be judged by the fact that, after just three short weeks, the participants of the deep learning elective were able to come up with some amazing proof-of-concept apps. The elective was facilitated by CDTM center assistant Patrick Christ and CDTM alumnus Robert Weindl, with lab resources provided by Prof. Dr.-Ing. Klaus Diepold’s Chair for Data Processing at TUM. Some of those projects might even be taken forward as startups in their own right. Good times to be able to make the machines learn, isn’t it? And the field is still wide open, with the opportunity especially enticing for startups, and venture capitalists and big corporates alike eager to invest!
