Weekly Review: 12/10/2017

The Mobility Robotics course is finally done, and I have just started Perception. It seems way more concept-heavy than any of the other courses, but I like the Week 1 content so far! I did not enjoy Mobility as much, since it focused exclusively on theory, and the content assumed a fair amount of comfort with kinematics/dynamics (which I no longer have). Anyway, on to the articles for this week:

AI & the Blockchain

This article gives a quick introduction to Blockchain technologies, and then delves into the relationship between Artificial Intelligence and cryptocurrencies.

It discusses the various ways in which AI could transform blockchain tech, such as: (1) improving the energy efficiency of mining centers (much like DeepMind's algorithms do for Google's data centers), (2) increasing scalability using Federated Learning, and (3) predicting which nodes are likely to solve a particular block, so as to free up the others.

Federated Learning

Coming across the mention of Federated Learning made me realise that I did not remember what it was, so I revisited the old(ish) post on Google’s Research blog.

Federated Learning works by decentralizing the training process for ML models (unlike most other on-device technologies, which mainly do inference on end devices). This is useful in cases where continuously shipping data off devices causes bandwidth and latency issues for the user and the training server.

It works like this: every device downloads the latest version of the model from the central server. Then, as it sees more data in deployment, it trains its local copy to compute small, 'focused' updates based on that user's data. All these small updates (and not the raw data that produced them) are then sent to the central server, which aggregates them using the FederatedAveraging algorithm. Privacy is helped primarily by updating the central model only after a certain number of these smaller updates have been received.
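
As a rough sketch of the aggregation step (assuming each client sends back a dict of numpy weight arrays along with the number of local examples it trained on; the names here are illustrative, not the actual API):

```python
import numpy as np

def federated_averaging(global_weights, client_updates):
    """Aggregate client updates into a new global model, FederatedAveraging-style.

    client_updates: list of (num_examples, weights_dict) pairs, where each
    weights_dict maps layer names to numpy arrays trained locally on-device.
    """
    total_examples = sum(n for n, _ in client_updates)
    new_weights = {}
    for layer in global_weights:
        # Weight each client's parameters by how much local data produced them.
        new_weights[layer] = sum(
            (n / total_examples) * w[layer] for n, w in client_updates
        )
    return new_weights
```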

AlphaZero Chess

Some time back, DeepMind unveiled AlphaGo Zero, an algorithm that learned to play Go purely by playing against itself (given only the basic rules of the game). They then went on to try the MCTS-based algorithm on chess, and it seems to work really well! The AlphaZero algorithm apparently defeated Stockfish (the current computer chess champion) 28 wins to none, with the remaining games drawn.

Of course, the superior hardware that AlphaZero uses makes a huge difference, but the very fact that such powerful computers can be used so effectively to 'meta-learn' is itself a game-changer. Do read the original paper to get an idea of their method (especially the section on the input/output representations fed to the deep network).
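
For a taste of the method, here is a hedged sketch of the PUCT rule that AlphaZero-style MCTS uses to decide which move to explore from a node (the node/child attributes below are my own simplification of the paper's notation):

```python
import math

def select_move(node, c_puct=1.0):
    """Pick the child maximizing Q + U, where U favours moves the policy
    network likes (high prior) that have been visited relatively few times."""
    total_visits = sum(child.visit_count for child in node.children.values())
    best_move, best_score = None, -float("inf")
    for move, child in node.children.items():
        q = child.total_value / child.visit_count if child.visit_count else 0.0
        u = c_puct * child.prior * math.sqrt(total_visits) / (1 + child.visit_count)
        if q + u > best_score:
            best_move, best_score = move, q + u
    return best_move
```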

DeepVariant

High-Throughput Sequencing (HTS) is a method used in genome sequencing. HTS produces multiple reads of an individual’s genome, which are then compared to some ‘reference’ to explore variations.

To achieve this, it is necessary to properly align the reads with the reference genome and to account for errors in measurement. Essentially, every nucleotide position that does not match the reference could be either a genuine variant or a measurement error. Deciding which, using data from all the reads produced by the method, is known as the 'Variant Calling Problem'.

DeepVariant, an algorithm co-developed by Google Brain & Verily, converts the variant-calling problem into an image classification problem to achieve state-of-the-art results. It was unveiled at NIPS 2017, and the code has been open-sourced.
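
The core idea is to encode the read pileup around each candidate site as an image-like tensor that a CNN can classify. A toy illustration of such an encoding (not DeepVariant's actual channels or layout) might be:

```python
import numpy as np

BASE_TO_VALUE = {"A": 0.25, "C": 0.5, "G": 0.75, "T": 1.0}

def pileup_to_tensor(reads, window=15, max_reads=20):
    """Encode aligned read strings around a candidate site as a 2-D array.

    reads: list of strings of length `window`, one per read covering the site.
    Each row is one read, each column one reference position; real systems add
    extra channels for base quality, strand, match-to-reference, etc.
    """
    image = np.zeros((max_reads, window), dtype=np.float32)
    for i, read in enumerate(reads[:max_reads]):
        for j, base in enumerate(read[:window]):
            image[i, j] = BASE_TO_VALUE.get(base, 0.0)
    return image
```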

Funny Programming Jargon

This is not really an 'article', but more of comic relief :-). It lists various programming terms, invented by real developers, that mock common software-engineering pitfalls in a typical workplace. Do read it if you appreciate programming humor!

Weekly Review: 12/03/2017

Missed a post last week due to the Thanksgiving long weekend :-). We had gone to San Francisco to see the city and try out a couple of hikes. Just FYI: strolling around SF is as much a hike as any of the real trails at Mt Sutro, with all the uphill & downhill roads! As for Robotics, I am currently on Week 3 of the Mobility course, which is more physics than 'computer science'; it's a welcome change of pace from all the ML/CS stuff I usually do.

Numenta – Secret to Strong AI

In this article, Numenta's cofounder discusses what we would need to push current AI systems towards general intelligence. He points out that many industry experts (including Jeff Bezos & Geoffrey Hinton) have opined that it would take far more than scaling up current intelligent systems to achieve the next 'big leap'.

Numenta’s goal as such is to take inspiration from the human brain (especially the neocortex) to design the next generation of machine intelligence. The article describes how the neocortex uses abstract ‘locations’ to understand sensory input and form mental representations. To read more of Numenta’s research, visit this page.

Transfer Learning

This article, though it does not present any 'new findings', is a fun-to-read introduction to Transfer Learning. It focuses on the different ways TL can be applied in the context of Neural Networks.

It provides examples of how pre-trained networks can be 'retrained' on new data by freezing/unfreezing certain layers during backpropagation. The blog post also provides a bunch of useful links, such as this discussion on Stanford CS231.
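
As a concrete example of the freezing idea (using PyTorch/torchvision here purely for illustration; the article itself is framework-agnostic), retraining only the final layer of a pre-trained network looks roughly like this:

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained network and freeze all of its parameters,
# so backpropagation only updates the layers we explicitly replace/unfreeze.
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Swap in a new final fully-connected layer sized for the new task; freshly
# created layers have requires_grad=True, so only they get trained.
num_classes = 10  # assumed number of classes in the new dataset
model.fc = nn.Linear(model.fc.in_features, num_classes)
```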

Structured Deep Learning

This article motivates the need for embedding vectors in Deep Learning. One of the challenges of using SQL-ish (tabular) data for deep learning is the presence of categorical attributes. The usual ways of dealing with such variables in ML are to use one-hot encodings, or to assign an integer code to each possible value.

However, (1) one-hot encodings increase the memory footprint of an NN, and (2) assigning integers to categorical values imposes a spurious notion of distance on them, since neural networks are inherently continuous/numeric in nature. For example, Sunday=1 & Saturday=7 for a 'week' enum might lead the NN to believe that Sundays and Saturdays are very far apart, which is usually not true.

Hence, learning vector embeddings for categorical attributes is perhaps the right way to go for most applications. While we usually encounter embeddings in the context of words (Word2Vec, LDA, etc.), similar techniques can be applied to other enum-style values as well.
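
A minimal sketch of such an entity embedding for a categorical column, in PyTorch (the dimensions are arbitrary):

```python
import torch
import torch.nn as nn

# Learn a 4-dimensional vector for each of the 7 days of the week, instead of
# a 7-dimensional one-hot vector or a single, misleading integer code.
day_embedding = nn.Embedding(num_embeddings=7, embedding_dim=4)

days = torch.tensor([0, 6])    # e.g. Sunday and Saturday as integer codes
vectors = day_embedding(days)  # shape (2, 4); trained jointly with the rest of the network
```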

Population-based Training

This blog post by DeepMind presents a novel approach to choosing hyperparameters for Neural-Network training. It essentially brings the methodology of Genetic Algorithms to the search for good training configurations.

While standard hyperparameter-tuning methods perform some kind of random search, Population-Based Training (PBT) allows each candidate 'worker' to take inspiration from the best candidates in the current population (similar to mating in GAs), while allowing for random perturbations of the parameters for exploration (à la GA mutations).
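
A toy sketch of the exploit/explore step each worker might perform (the top-20% cutoff and perturbation factors below are illustrative, not DeepMind's exact settings):

```python
import copy
import random

def exploit_and_explore(worker, population):
    """If this worker is lagging, copy weights and hyperparameters from a top
    performer (exploit), then randomly perturb the copied hyperparameters
    (explore), like a mutation in a genetic algorithm."""
    ranked = sorted(population, key=lambda w: w["score"], reverse=True)
    top_k = max(1, len(ranked) // 5)
    if worker["score"] < ranked[top_k - 1]["score"]:
        best = random.choice(ranked[:top_k])
        worker["weights"] = copy.deepcopy(best["weights"])
        worker["hparams"] = {
            name: value * random.choice([0.8, 1.2])  # jitter around the best value
            for name, value in best["hparams"].items()
        }
    return worker
```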


Weekly Review: 10/28/2017

This was a pretty busy week with a lot going on, but I finally seem to be settling into my new role!

The Aerial Robotics course is almost over, with a week to go. There hasn't been much coding in this course, but that was to be expected, since it was more about PID control theory and quadrotor dynamics. I am particularly interested in the Capstone/'final' project for this course, which would involve building an autonomous robot in Pi.

Anyway, on to the interesting tidbits from this week:

AlphaGo Zero

Google's DeepMind recently announced a new version of their AI-based Go player, AlphaGo Zero. What makes this one so special is that it breaks the common notion that intelligent systems require a LOT of data to produce decent results. AlphaGo Zero was provided only the basic rules of Go, and performed the rest of the learning by playing against itself. Oh, and by the way, AlphaGo Zero beats AlphaGo, the previous champion at the game. This is indeed a landmark in demonstrating the power of good old RL.

Read this article for a basic overview, and their paper in Nature for a detailed explanation. Brushing up on Monte Carlo Tree Search would certainly help.

Word Mover’s Distance

Given an excellent word embedding such as Word2Vec, it is not very difficult to compute the semantic distance between individual terms. However, when it comes to big blocks of text, a simple 'average' over term embeddings isn't good enough for computing their relative distances.

In such cases, the Word Mover's Distance, inspired by the Earth Mover's Distance, provides a better solution. It figures out the semantically closest term(s) in one document for each term in the other, and then computes the average effort required to 'rephrase' one text in the words of the other. Click on the article link for a detailed explanation.
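
A crude sketch of that intuition, assuming an `embed` function that maps a word to a numpy vector (the full WMD solves an optimal-transport problem, e.g. via gensim's wmdistance; the relaxation below just matches each word to its single closest counterpart):

```python
import numpy as np

def relaxed_wmd(doc_a, doc_b, embed):
    """For every word in doc_a, find the cheapest word in doc_b to 'move' it to,
    and average those costs. doc_a/doc_b are lists of tokens; embed(word) -> vector."""
    costs = []
    for word_a in doc_a:
        distances = [np.linalg.norm(embed(word_a) - embed(word_b)) for word_b in doc_b]
        costs.append(min(distances))
    return float(np.mean(costs))
```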

Robots generalizing from simulations

OpenAI posted a blog article about how they trained a robot purely in simulation. This means the robot received no data from real-world sensors during the training phase, yet it was able to perform basic tasks in deployment after some calibration.

During the simulations, they used dynamics randomization to alter basic traits of the environment. This data was then fed to an LSTM to understand the settings and goals. A key insight from this work is Hindsight Experience Replay. Quoting the article: "Hindsight Experience Replay (HER) allows agents to learn from a binary reward by pretending that a failure was what they wanted to do all along and learning from it accordingly. (By analogy, imagine looking for a gas station but ending up at a pizza shop. You still don't know where to get gas, but you've now learned where to get pizza.)"
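
A minimal sketch of the HER relabelling trick, assuming each transition in an episode is stored as a dict with 'goal' and 'achieved_goal' fields (the data format is made up for illustration):

```python
def hindsight_relabel(episode):
    """Create extra training transitions by pretending the goal we actually
    reached at the end of the episode was the intended goal all along."""
    relabelled = []
    final_achieved = episode[-1]["achieved_goal"]
    for step in episode:
        new_step = dict(step)
        new_step["goal"] = final_achieved
        # Binary reward: success if this step's outcome matches the substituted goal.
        new_step["reward"] = 1.0 if step["achieved_goal"] == final_achieved else 0.0
        relabelled.append(new_step)
    return relabelled
```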

Concurrency in Go

If you are a Go Programmer, take a look at this old (but good) talk on concurrency patterns and constructs in the language.

Generalization Bounds in Machine Learning

The Generalization Gap of an ML system is defined as the difference between its training error and its generalization error. A Generalization Bound tries to put an upper bound on this gap using probability theory. Read this article for a detailed mathematical explanation.
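
For flavour, here is the standard textbook bound for a finite hypothesis class (not taken from the linked article): with m training samples, and with probability at least 1 - δ over the draw of the training set,

```latex
R(h) \;\le\; \hat{R}(h) + \sqrt{\frac{\ln|\mathcal{H}| + \ln(1/\delta)}{2m}}
\qquad \text{for all } h \in \mathcal{H},
```

where R(h) is the true (generalization) error and R̂(h) the training error; richer hypothesis classes or fewer samples make the bound looser.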