Weekly Review: 12/03/2017

Missed a post last week due to the Thanksgiving long weekend :-). We went to San Francisco to see the city and try out a couple of hikes. Just FYI – strolling around SF is as much a hike as any of the real trails at Mt Sutro, with all the uphill & downhill roads! As for Robotics, I am currently on Week 3 of the Mobility course, which is more physics than ‘computer science’; it’s a welcome change of pace from all the ML/CS stuff I usually do.

Numenta – Secret to Strong AI

In this article, Numenta’s cofounder discusses what we would need to push current AI systems towards general intelligence. He points out that many industry experts (including Jeff Bezos & Geoffrey Hinton) have opined that it would take far more than scaling up current intelligent systems to achieve the next ‘big leap’.

Numenta’s goal as such is to take inspiration from the human brain (especially the neocortex) to design the next generation of machine intelligence. The article describes how the neocortex uses abstract ‘locations’ to understand sensory input and form mental representations. To read more of Numenta’s research, visit this page.

Transfer Learning

This article, though not presenting any ‘new findings’, is a fun-to-read introduction to Transfer Learning. It focusses on the different ways TL can be applied in the context of Neural Networks.

It provides examples of how pre-trained networks can be ‘retrained’ over new data by freezing/unfreezing certain layers during backpropagation. The blogpost also provides a bunch of useful links, such as this discussion on Stanford CS231.
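
To make the freeze-and-retrain idea concrete, here is a rough PyTorch sketch (the choice of ResNet-18 and a 10-class output layer is purely illustrative, not something from the article):

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Load a network pre-trained on ImageNet (illustrative choice of architecture).
model = models.resnet18(pretrained=True)

# Freeze every layer: no gradients are computed for these weights during
# backpropagation, so they keep their pre-trained values.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully-connected layer with a fresh one sized for the new
# task (say, 10 classes). Newly created layers require gradients by default.
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the unfrozen parameters are handed to the optimizer.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
```

Unfreezing a few of the deeper layers as well (with a smaller learning rate) is the usual next step once the new head has stabilized.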

Structured Deep Learning

This article motivates the need for embedding vectors in Deep Learning. One of the challenges of using SQL-ish data for deep learning is the presence of categorical attributes. The usual ways of dealing with such variables in ML are to use one-hot encodings, or to find an integer representation for each possible value.

However, 1) one-hot encodings increase the memory footprint of a NN, & 2) assigning arbitrary integers to categorical values suggests a spurious ordering and distance to neural networks, which treat their inputs as continuous numbers. For example, Sunday=1 & Saturday=7 for a ‘week’ enum might lead the NN to believe that Sundays and Saturdays are very far apart, which is not usually true.

Hence, learning vector embeddings for categorical attributes is perhaps the right way to go for most applications. While we usually encounter embeddings in the context of words (Word2Vec, LDA, etc.), similar techniques can be applied to other enum-style values as well.
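
As a rough illustration (the 4-dimensional embedding size and the day-of-week setup are just made up for the example), here is how an embedding layer replaces one-hot/integer encodings in PyTorch:

```python
import torch
import torch.nn as nn

# Hypothetical setup: a 'day of week' enum with 7 possible values, each mapped
# to a learned 4-dimensional vector instead of a one-hot or raw integer.
day_embedding = nn.Embedding(num_embeddings=7, embedding_dim=4)

# A batch of integer-encoded days (0=Sunday ... 6=Saturday).
days = torch.tensor([0, 6])           # Sunday, Saturday

# The lookup returns dense vectors whose distances are learned during training,
# so 'Sunday' and 'Saturday' can end up close together if the data says so.
vectors = day_embedding(days)         # shape: (2, 4)

# These vectors are typically concatenated with the numeric features and fed
# into the rest of the network; the embedding weights are trained end-to-end.
numeric_features = torch.randn(2, 3)  # e.g. 3 other continuous columns
inputs = torch.cat([vectors, numeric_features], dim=1)  # shape: (2, 7)
```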

Population-based Training

This blog-post by DeepMind presents a novel approach to choosing the hyperparameters for Neural-Network training. It essentially brings the methodology of Genetic Algorithms into the search for optimal training configurations.

While standard hyperparameter-tuning methods perform some kind of random search, Population-Based Training (PBT) allows each candidate ‘worker’ to take inspiration from the best candidates in the current population (similar to mating in GAs), while also applying random perturbations to the parameters for exploration (à la GA mutations).
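
Here is a toy sketch of that exploit/explore loop; it skips real training entirely and scores workers with a made-up objective, purely to show the mechanics:

```python
import random

# Toy objective: pretend a worker's score only depends on how close its
# learning rate is to an (unknown to the workers) optimum of 0.1.
def evaluate(lr):
    return -abs(lr - 0.1)

# Population of workers, each with its own hyperparameter setting.
population = [{"lr": random.uniform(0.001, 1.0)} for _ in range(12)]
quarter = len(population) // 4

for step in range(20):
    scores = [evaluate(w["lr"]) for w in population]
    ranked = sorted(range(len(population)), key=lambda i: scores[i])
    worst, best = ranked[:quarter], ranked[-quarter:]

    for i in worst:
        # Exploit: copy hyperparameters (and, in real PBT, the model weights)
        # from a randomly chosen top performer.
        population[i]["lr"] = population[random.choice(best)]["lr"]
        # Explore: randomly perturb the copied hyperparameter (the 'mutation').
        population[i]["lr"] *= random.choice([0.8, 1.2])

print("best lr found:", max(population, key=lambda w: evaluate(w["lr"]))["lr"])
```

The real method evaluates partially-trained networks instead of a toy score, and copies the network weights along with the hyperparameters, so no training time is wasted restarting from scratch.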


Weekly Review: 10/21/2017

It’s been a long while since I last posted, but for good reason! I was busy shifting base from Google’s Hyderabad office to their new location in Sunnyvale. This is my first time in the USA, so there is a lot to take in and process!

Anyway, I am now working on Google’s Social-Search and Ranking team. At the same time, I am also doing Coursera’s Robotics Specialization to learn a subject I have never really touched upon. Be warned if you ever decide to give it a try: their very first course, titled Aerial Robotics, involves a lot of linear algebra and physics. Since I last did all this in my freshman year of college, I am just about managing to get through the weeks!

My plate is already full with a lot of ToDos, but I also feel bad about not posting, so I found a middle ground: I will try, to the best of my ability, to post one article each weekend about all the random/new interesting articles I read over the course of the week. This is partly for my own reference later on, since I have found myself going back to my posts quite a few times to revisit a concept I wrote about. So here goes:

Eigenvectors & Eigenvalues

Anything ‘eigen’ has confused me for a while now, mainly because I never understood the intuition behind the concept. The highest-rated answer to this Math-Stackexchange question did the job: every square matrix represents a linear transformation. Its eigenvectors are the special directions that the transformation only stretches or flips without rotating, while the corresponding eigenvalues measure how much stretching (or distortion) happens along those directions.
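
A quick NumPy check of that intuition (the matrix here is arbitrary, picked only for the example):

```python
import numpy as np

# A square matrix viewed as a linear transformation of the plane.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# For each eigenpair, applying the transformation to the eigenvector only
# stretches it by the eigenvalue -- the direction itself is unchanged.
for val, vec in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ vec, val * vec))   # True, True
```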

Transfer Learning

Machine Learning currently specializes in utilizing data from a certain {Task, Domain} combo (e.g., Task: Recognize dogs in photos, Domain: Photos of dogs) to learn a function. However, when this same function/model is used on a different but related task (Recognize foxes in photos) or a different domain (Photos of dogs taken at night), it performs poorly. This article discusses Transfer Learning, a method to apply knowledge learned in one setting to problems in a different one.

Dynamic Filters

The filters used in Convolutional Neural Network layers usually have fixed weights at a certain layer, for a given feature map. This paper from the NIPS conference discusses the idea of layers that change their filter weights depending on the input. The intuition is this: even though a filter is trained to look for a specialized feature within a given image, the orientation/shape/size of the feature might change with the image itself. This is especially true while analysing data such as moving objects within videos. A dynamic filter will then be able to adapt to the incoming data and efficiently recognise the intended features in spite of distortions.
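
For a flavour of the idea (this is not the paper’s architecture, just a minimal sketch with made-up layer sizes), here is a PyTorch layer whose convolution filters are generated from the input itself rather than stored as fixed weights:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicFilterConv(nn.Module):
    """Illustrative sketch: a small network predicts the convolution filter
    from the input itself, instead of applying fixed, learned weights."""

    def __init__(self, in_channels=1, out_channels=4, kernel_size=3):
        super().__init__()
        self.in_channels, self.out_channels, self.k = in_channels, out_channels, kernel_size
        # Filter-generating network: maps a pooled summary of the input
        # to one full set of convolution weights.
        self.generator = nn.Linear(
            in_channels, out_channels * in_channels * kernel_size * kernel_size)

    def forward(self, x):
        outputs = []
        for sample in x:                        # one filter set per sample
            summary = sample.mean(dim=(1, 2))   # (in_channels,) pooled summary
            filters = self.generator(summary).view(
                self.out_channels, self.in_channels, self.k, self.k)
            outputs.append(F.conv2d(sample.unsqueeze(0), filters, padding=1))
        return torch.cat(outputs, dim=0)

# The filters now depend on each image, so the same layer can adapt to
# shifted or distorted versions of the feature it is looking for.
layer = DynamicFilterConv()
out = layer(torch.randn(2, 1, 8, 8))            # shape: (2, 4, 8, 8)
```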