# A Machine-Learning-oriented Introduction to PALISADE, CKKS, and pTensor

Note: “we” means “I”

Problem setup: you want to train a linear regression model using MAPE (mean absolute percentage error) as the loss function, and you want to use scikit-learn to do it.
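As a quick sketch of why this is a problem: scikit-learn's `LinearRegression` minimizes squared error and doesn't accept a custom loss, so one workaround is to minimize MAPE directly with `scipy.optimize`. The toy data, coefficients, and names below are my own illustrative assumptions, not part of the original setup:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy data: y ≈ 3*x + 5 with a little noise.
rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(100, 1))
y = 3.0 * X[:, 0] + 5.0 + rng.normal(0, 0.1, size=100)

def mape(params, X, y):
    # Mean absolute percentage error of a linear model y_hat = X @ w + b.
    w, b = params[:-1], params[-1]
    y_hat = X @ w + b
    return np.mean(np.abs((y - y_hat) / y))

# Nelder-Mead is derivative-free, which sidesteps MAPE's kink at y_hat = y.
res = minimize(mape, x0=np.zeros(2), args=(X, y), method="Nelder-Mead")
print(res.x)  # should land near [3, 5] on this toy data
```

This recovers roughly the generating coefficients, but note it is a sketch: for real use you would want to guard against targets near zero, where MAPE blows up.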

# Foreword


# Sequences: Part 1: Terminology and a simple model

It’s been a long time since my last real post, and I thought I would share some things I’ve learned since then, specifically about RNNs in TensorFlow (the core TensorFlow API, not TensorFlow Keras).

Recently I stumbled across Decoupled Neural Interfaces using Synthetic Gradients, which aims to address one of the biggest slowdowns in deep learning: the backpropagation step. (In terms of training, it’s arguably the only real step; things like moving data between GPUs aren’t often discussed from an algorithmic point of view.) If we can sidestep the need to complete a full forward and backward pass before updating a layer, we can dramatically speed up training. This becomes especially useful when training a single large neural network on multiple GPUs, or on a distributed cluster.
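A minimal sketch of the idea, under my own assumptions (a toy two-layer linear network and a toy regression task, neither taken from the paper): a small linear module learns to predict the gradient at a layer from that layer’s activations, so the layer can update immediately instead of waiting for the true backward pass; the module itself is trained to regress onto the true gradient whenever it arrives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (illustrative assumption, not from the paper).
X = rng.normal(size=(64, 4))
y = X @ rng.normal(size=(4, 1))

W1 = rng.normal(size=(4, 8)) * 0.1   # first layer
W2 = rng.normal(size=(8, 1)) * 0.1   # second layer
M = np.zeros((8, 8))                 # synthetic-gradient module for layer 1

lr, n = 0.05, len(X)
losses = []
for step in range(500):
    h = X @ W1
    # Layer 1 updates immediately from the *predicted* gradient,
    # without waiting for the forward/backward pass to finish.
    synth = h @ M
    W1 = W1 - lr * X.T @ synth / n

    # Downstream, the true gradient d(loss)/dh eventually becomes available...
    y_hat = h @ W2
    err = y_hat - y
    losses.append(float(np.mean(err ** 2)))
    true_grad = err @ W2.T
    W2 = W2 - lr * h.T @ err / n

    # ...and the synthetic-gradient module is trained to match it.
    M = M - lr * h.T @ (synth - true_grad) / n

print(losses[0], losses[-1])  # training loss decreases over the run
```

In the real method the synthetic-gradient module sits between network partitions (e.g. across devices), which is exactly what makes the multi-GPU and distributed cases interesting.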

# Fundamentals: Hessians, Jacobians, and Optimization

I’ve recently been talking to some people about deep reinforcement learning (deep RL), and while looking up papers and innovations (admittedly for blogging material), I realized that, in keeping with the theme of this blog and why I made it, I’d need to start from some fundamentals. So bear with me as I discuss these ideas and hopefully give you an intuitive understanding of these concepts.
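To make the section title concrete before diving in, here is a small numerical sketch (my own illustration, using forward finite differences; the function `g` and all names are hypothetical): the Jacobian stacks the partial derivatives of a vector-valued function, and the Hessian of a scalar function is just the Jacobian of its gradient.

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    # Finite-difference Jacobian: J[i, j] = d f_i / d x_j.
    x = np.asarray(x, dtype=float)
    f0 = np.atleast_1d(f(x))
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (np.atleast_1d(f(xp)) - f0) / eps
    return J

def hessian(g, x, eps=1e-5):
    # Hessian of a scalar function g: the Jacobian of its gradient.
    grad = lambda z: jacobian(lambda w: np.array([g(w)]), z, eps)[0]
    return jacobian(grad, x, eps)

# Example: g(x) = x0^2 + 3*x0*x1, whose exact Hessian is [[2, 3], [3, 0]].
g = lambda x: x[0] ** 2 + 3 * x[0] * x[1]
H = hessian(g, np.array([1.0, 2.0]))
print(np.round(H, 3))
```

Finite differences are only a pedagogical stand-in here; in deep RL these quantities are computed (or approximated) via automatic differentiation.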