The Motion Planning course is going faster than I expected: I finished two weeks' worth of material in five days. That's good, I guess, since it means I might get to the Capstone project before I take a vacation to India.
Here's the stuff from this week:
Graphcore and the Intelligent Processing Unit (IPU)
Graphcore aims to disrupt the world of ML-focused computing devices. In an interesting blog post, they visualize neuron connections in different CNN architectures and talk about how they compare to the human brain.
If you are curious about how IPUs differ from CPUs and GPUs, this NextPlatform article gives a few hints. Mind you, IPUs are yet to be ‘released’, so there's no concrete information out yet. If you want to brush up on why memory matters so much for neural network training (even more than for inference), this is a good place to start.
Overview of Different CNN Architectures
This article on the CV-Tricks blog gives a high-level overview of the major CNN architectures so far: AlexNet, VGG, Inception, ResNets, etc. It's a good place to go for reference if you ever forget what one of them did differently.
On that note, this blog post by Adit Deshpande goes into the ‘Brief History of Deep Learning’, marking out all the main research papers of importance.
Meta-learning and AutoML
The New York Times posted an article about AI systems that can build other AI systems, leading to what they call ‘meta-learning’ (learning how to learn, or building systems that learn).
Google has been dabbling in meta-learning with a project called AutoML. AutoML pairs a ‘Generator’ network that proposes various NN architectures with a ‘Scorer’ that trains them and computes their accuracy. The gradients with respect to these scores are passed back to the Generator to improve the architectures it outputs. This is their original paper, in case you want to take a look.
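To make the loop concrete, here is a toy policy-gradient sketch of the idea, nothing like Google's actual controller. The ‘architecture’ is just a layer count picked from three options, and the score function is a made-up stand-in for the expensive train-and-evaluate step:

```python
import numpy as np

rng = np.random.default_rng(0)

choices = [2, 4, 8]     # hypothetical architecture options (layer counts)
logits = np.zeros(3)    # the Generator's policy parameters
lr = 0.5

def score(n_layers):
    """Stand-in Scorer: pretend only the 4-layer network trains well."""
    return 1.0 if n_layers == 4 else 0.0

for _ in range(500):
    probs = np.exp(logits) / np.exp(logits).sum()
    i = rng.choice(3, p=probs)        # Generator proposes an architecture
    r = score(choices[i])             # Scorer trains and evaluates it
    # REINFORCE-style update: move logits along r * d(log p_i)/d(logits)
    grad = -probs
    grad[i] += 1.0
    logits += lr * r * grad

probs = np.exp(logits) / np.exp(logits).sum()
print(choices[int(np.argmax(probs))])   # the policy settles on 4
```

The real system searches a vastly larger space and scores each candidate by actually training it, but the feedback loop has this same shape.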
The AutoML team recently wrote another post about large-scale object detection using their algorithms.
Google researchers recently open-sourced Tangent, their library for computing gradients of Python functions. Tangent works directly on your Python source (rather than viewing it as a black box) and generates a new derivative function that computes the gradient. This is useful when you want to debug why some NN architecture isn't training the way it's supposed to. Here's their Github repo.
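The point of source-to-source differentiation is that the derivative is itself readable Python you can inspect. A hand-written sketch of the idea (the derivative function below is written by hand for illustration, not actual Tangent output):

```python
def f(x):
    return x * x + 3.0 * x

# Roughly what a source-to-source differentiator would emit for df/dx,
# written by hand here: d/dx (x^2 + 3x) = 2x + 3
def df(x):
    return 2.0 * x + 3.0

print(f(2.0), df(2.0))   # 10.0 7.0
```

Because `df` is ordinary Python, you can read it, step through it in a debugger, or add print statements, which is exactly what's hard to do with gradients hidden inside a black-box autodiff framework.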
Reconstructing Films with Neural Networks
This blog post talks about using Autoencoders and GANs to reconstruct films with NNs trained on them. The authors also venture into reconstructing films using NNs trained on other visually distinctive films (like A Scanner Darkly). The results are pretty interesting.