This is part 1.5 in a series entitled “A Machine Learning Based Introduction to PALISADE and CKKS”
Problem setup: you want to use MAPE (mean absolute percentage error) as the loss function for training a linear regression model, and you want to use scikit-learn to solve it.
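Scikit-learn's `LinearRegression` always minimizes squared error, so you can't hand it MAPE as a training loss directly. One simple workaround is to minimize MAPE yourself over the linear model's parameters with `scipy.optimize`. A minimal sketch on made-up synthetic data (not necessarily the approach the post lands on):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy data: y is roughly linear in x, with y kept away from zero
# so the percentage error is well defined.
rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(100, 1))
y = 3.0 * X[:, 0] + 5.0 + rng.normal(0, 0.5, size=100)

def mape_loss(params, X, y):
    # params = [intercept, coef_1, ..., coef_d]
    preds = params[0] + X @ params[1:]
    return np.mean(np.abs((y - preds) / y))

# Directly minimize MAPE over the intercept and coefficients.
result = minimize(mape_loss, x0=np.zeros(X.shape[1] + 1),
                  args=(X, y), method="Nelder-Mead")
intercept, coef = result.x[0], result.x[1:]
```

Note that scikit-learn does ship `sklearn.metrics.mean_absolute_percentage_error` (since 0.24), but only as an evaluation metric and scorer, not as a trainable loss for `LinearRegression`.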
It’s been a long time since my last real post, and I thought I would share some things I’ve learned in the meantime - specifically, RNNs in TensorFlow (the core TensorFlow API, not TensorFlow Keras).
Recently I stumbled across Decoupled Neural Interfaces using Synthetic Gradients, which aims to address one of the biggest slowdowns in deep learning: the propagation step. (In terms of training, it’s essentially the only real step; things like moving data between GPUs aren’t often discussed from an algorithmic point of view.) If we can sidestep the need to complete a full forward and backward pass over the data before updating, we can dramatically speed up training. This becomes especially useful when training a single large neural network on multiple GPUs, or on a distributed cluster.
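The core idea can be sketched in a few lines of numpy: a tiny two-layer linear network where the first layer updates immediately from a *synthetic* gradient (a learned linear function of its own activations) instead of waiting for the backward pass, while the synthetic-gradient module is itself trained to match the true gradient once it becomes available. This is a toy illustration under my own simplifications, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: a scalar target that is linear in a 2-d input.
X = rng.normal(size=(20, 2))
y = X @ np.array([1.0, -2.0])

# A two-layer linear network: h = W1 x, y_hat = W2 h.
W1 = rng.normal(scale=0.1, size=(3, 2))
W2 = rng.normal(scale=0.1, size=(1, 3))
# Synthetic-gradient module: predicts dL/dh as a linear function of h.
M = np.zeros((3, 3))

lr = 0.01

def mean_loss():
    preds = (X @ W1.T) @ W2.T
    return float(np.mean(0.5 * (preds[:, 0] - y) ** 2))

initial = mean_loss()
for epoch in range(200):
    for x_i, y_i in zip(X, y):
        h = W1 @ x_i                           # forward through layer 1
        g_hat = M @ h                          # synthetic gradient for h
        W1 -= lr * np.outer(g_hat, x_i)        # layer 1 updates now, decoupled
        y_hat = W2 @ h                         # forward through layer 2
        err = y_hat - y_i                      # dL/dy_hat for squared error
        g_true = W2.T @ err                    # true gradient dL/dh
        W2 -= lr * np.outer(err, h)            # ordinary update for layer 2
        M += lr * np.outer(g_true - g_hat, h)  # regress M toward the true grad
final = mean_loss()
```

The point of the toy is the ordering: `W1` is updated before the rest of the forward and backward pass completes, which is exactly the decoupling that lets layers (or machines) stop waiting on each other.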
I’ve recently been talking to some people about deep reinforcement learning (deep RL), and while looking up papers and innovations (admittedly for blogging material), I realized that, in keeping with the theme of this blog and why I made it, I’d need to start from the fundamentals. So bear with me as I discuss these ideas and hopefully give you an intuitive understanding of these concepts.
I love Pusheen, and I’m also a fan of playing around in my terminal. I was inspired to actually do this after talking to someone the other day; she mentioned how an officemate of hers commented on the Pusheen that popped up when she opened her shell. Frankly, I’m disappointed in myself that I never thought to do the same thing, but I also saw it as a quick project I could undertake that might bring joy to other people.
I believe that any good story begins with a why, so I’ll start there: