# Stumbling backwards into np.random.seed through jax.

Alternative title: PRNG for you and me through (j)np.random.seed. This post aims to (briefly) discuss why I like JAX and then compare JAX and NumPy vis-à-vis randomness.
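To make the comparison concrete, here is a minimal sketch of the core difference: NumPy draws from hidden global state that each call mutates, while JAX makes the PRNG state an explicit key you pass (and split) yourself. The function names are real NumPy/JAX APIs; the specific values drawn are illustrative.

```python
import numpy as np
import jax

# NumPy: randomness lives in hidden global state, mutated by every call.
np.random.seed(0)
a = np.random.uniform()
b = np.random.uniform()  # differs from a: the global state advanced

# JAX: randomness is explicit. A key is just a value you pass around,
# and the same key always produces the same draw.
key = jax.random.PRNGKey(0)
x = jax.random.uniform(key)
y = jax.random.uniform(key)          # identical to x: same key, same result
key, subkey = jax.random.split(key)  # split the key for fresh, independent draws
z = jax.random.uniform(subkey)
```

The explicit-key design is what makes JAX randomness reproducible and safe under `jit`/`vmap`: there is no shared mutable state to reason about.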

Preface: this notebook is part 1 in a series of tutorials discussing gradients (manipulation, stopping, etc.) in PyTorch. The series covers the following network architectures:

# Pixels, Privacy, and Protection

Right off the bat, I want to say that I’m okay with specific forms of data collection; if the consumer is informed of what will be collected and what the collector will do with the information, the collection is fine. In the cases where the consumer is well informed, they can make clear decisions about the tradeoffs between what they are offered and what they are giving up. I am strongly opposed to hiding the details of this tradeoff under pages and pages of legalese.

# A Machine Learning oriented introduction to PALISADE, CKKS and pTensor.

Note: “we” means “I”

**Problem setup:** You want to use the Mean Absolute Percentage Error (MAPE) as your loss function for training linear regression on some forecast data. MAPE has found success in forecasting because it has desirable properties:
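For reference, MAPE averages each absolute error relative to the magnitude of the actual value, expressed as a percentage. A minimal NumPy sketch (assuming all actual values are nonzero, since MAPE divides by them):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error: mean of |error| / |actual|, as a percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Each prediction is off by 10% of its actual value:
mape([100.0, 200.0], [110.0, 180.0])  # -> 10.0
```

Because the error is scaled by the actual value, MAPE is unit-free and comparable across series of very different magnitudes, which is part of its appeal in forecasting.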

# Fundamentals Part 2: Hessians and Jacobians

This section builds off the last post, Fundamentals Part 1: An intuitive introduction to Calculus and Linear Algebra; if you're not familiar with calculus or linear algebra, I highly recommend starting there. If this is your first time seeing all of this, know that this section is more involved than the first fundamentals post. Be prepared to feel a little lost, but if you keep at it, I know you'll get there (it took me a while to wrap my head around it).
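As a quick taste of where this is going, JAX can compute both objects directly: `jax.hessian` for the Hessian of a scalar-valued function, and `jax.jacfwd`/`jax.jacrev` for the Jacobian of a vector-valued one. A small sketch with hand-checkable derivatives (the functions `f` and `g` are made up for illustration):

```python
import jax
import jax.numpy as jnp

# f: R^2 -> R, a scalar function with easy derivatives by hand
def f(x):
    return x[0] ** 2 + 3.0 * x[0] * x[1]

x = jnp.array([1.0, 2.0])
grad = jax.grad(f)(x)     # [2*x0 + 3*x1, 3*x0] = [8., 3.]
hess = jax.hessian(f)(x)  # [[2., 3.], [3., 0.]] -- symmetric, as expected

# g: R^2 -> R^2; jacfwd (or jacrev) returns its 2x2 Jacobian
def g(x):
    return jnp.array([x[0] * x[1], jnp.sin(x[0])])

jac = jax.jacfwd(g)(x)    # [[x1, x0], [cos(x0), 0]]
```

Checking the printed matrices against derivatives you work out by hand is a good way to build confidence before the more abstract treatment below.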