Publications by Sigrid Keydana
Towards privacy: Encrypted deep learning with Syft and Keras
Deep learning need not be irreconcilable with privacy protection. Federated learning enables on-device, distributed model training; encryption keeps model and gradient updates private; differential privacy prevents the training data from leaking. As of today, private and secure deep learning is an emerging technology. In this post, we...
Hacking deep learning: model inversion attack by example
Compared to other applications, deep learning models might not seem likely victims of privacy attacks. However, methods exist to determine whether an entity was part of the training set (an adversarial attack called membership inference), and techniques subsumed under “model inversion” allow the reconstruction of raw input data given ...
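The post demonstrates such an attack step by step. As a rough, hypothetical sketch of the core idea, gradient ascent on the input of a classifier can synthesize an input the model considers maximally typical of a target class. Everything below is illustrative, not the post's setup: the model is an untrained stand-in, and target class, learning rate, and step count are arbitrary choices.

```r
library(tensorflow)
library(keras)

# Stand-in classifier (28 x 28 grayscale images, 10 classes) -- untrained, purely
# to make the sketch self-contained; the post attacks a real, trained model.
model <- keras_model_sequential() %>%
  layer_flatten(input_shape = c(28, 28, 1)) %>%
  layer_dense(units = 128, activation = "relu") %>%
  layer_dense(units = 10, activation = "softmax")

target_class <- 3L  # class whose "prototype" we try to reconstruct
x <- tf$Variable(tf$random$uniform(shape(1L, 28L, 28L, 1L)))

for (step in 1:200) {
  with(tf$GradientTape() %as% tape, {
    prob <- model(x)[, target_class]   # probability assigned to the target class
    loss <- -tf$reduce_mean(prob)      # maximize the probability = minimize its negative
  })
  grad <- tape$gradient(loss, x)
  x$assign_sub(0.1 * grad)             # gradient step on the *input*, not the weights
  x$assign(tf$clip_by_value(x, 0, 1))  # keep pixel values in a valid range
}
```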
Easy PixelCNN with tfprobability
PixelCNN is a deep learning architecture – or bundle of architectures – designed to generate highly realistic-looking images. To use it, no reverse-engineering of arXiv papers or search for reference implementations is required: TensorFlow Probability and its R wrapper, tfprobability, now include a PixelCNN distribution that can b...
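As a minimal sketch of what that looks like in R, the distribution is created with tfd_pixel_cnn(), trained by maximizing tfd_log_prob() on real images, and sampled from with tfd_sample(). Image shape and network sizes below are illustrative, not the post's settings.

```r
library(tfprobability)

# A PixelCNN distribution over 28 x 28 grayscale images; the network sizes here
# are kept small for illustration.
dist <- tfd_pixel_cnn(
  image_shape     = c(28L, 28L, 1L),
  num_resnet      = 1,
  num_hierarchies = 2,
  num_filters     = 32,
  high            = 255L
)

# Training would minimize the negative log-likelihood of real images, e.g.:
#   nll <- -(dist %>% tfd_log_prob(image_batch))
# where image_batch is a tensor of shape (batch_size, 28, 28, 1).

# Once trained, images are generated by sampling directly from the distribution.
samples <- dist %>% tfd_sample(4L)
```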
Deep attractors: Where deep learning meets chaos
After two hundred epochs, overall loss is at 2.67, with the MSE component at 1.8 and FNN at 0.09. Obtaining the attractor from the test set: we use the test set to inspect the latent code, a tibble of 6,242 rows by 10 columns (V1 through V10)...
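The variance check on that latent code can be sketched as follows. The encoder below is just an untrained stand-in with a 10-dimensional latent space (in the post, it is an autoencoder trained with the combined MSE-plus-FNN loss), and the input is random data standing in for the real test sequences.

```r
library(tensorflow)
library(keras)

# Untrained stand-in encoder mapping inputs to a 10-dimensional latent code.
encoder <- keras_model_sequential() %>%
  layer_dense(units = 10, input_shape = 4)

# Random data standing in for the actual test set.
x_test  <- tf$random$normal(shape(6242L, 4L))
encoded <- encoder(x_test)

# Per-unit variance of the latent code: with the FNN regularizer at work, only a
# few of the 10 dimensions should exhibit non-negligible variance.
test_var <- tf$math$reduce_variance(encoded, axis = 0L)
test_var %>% as.numeric() %>% round(5)
```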
Time series prediction with FNN-LSTM
On to what we'll use as a baseline for comparison. Vanilla LSTM: here is the vanilla LSTM, stacking two layers, each, again, of size 32. Dropout and recurrent dropout were chosen individua...
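The baseline described above could look roughly like this in Keras; window length, forecast horizon, and dropout rates are assumed values for illustration, not the post's.

```r
library(keras)

n_timesteps <- 120   # assumed length of the input window
n_forecast  <- 120   # assumed forecast horizon

# Two stacked LSTM layers of size 32, each with dropout and recurrent dropout,
# followed by a dense layer emitting the forecast.
model <- keras_model_sequential() %>%
  layer_lstm(
    units = 32, dropout = 0.2, recurrent_dropout = 0.2,
    return_sequences = TRUE, input_shape = c(n_timesteps, 1)
  ) %>%
  layer_lstm(units = 32, dropout = 0.2, recurrent_dropout = 0.2) %>%
  layer_dense(units = n_forecast)

model %>% compile(optimizer = "adam", loss = "mse")
```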
FNN-VAE for noisy time series forecasting
Experimental setup and data: the idea was to add white noise to a deterministic series. This time, the Roessler system was chosen, mainly for the prettiness of its attractor, apparent e...
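For reference, the Roessler system itself is easy to simulate in R. The sketch below uses the deSolve package with the usual textbook parameters (a = 0.2, b = 0.2, c = 5.7) and an arbitrary noise level; neither is necessarily what the post uses.

```r
library(deSolve)

# The Roessler system: dx/dt = -y - z, dy/dt = x + a*y, dz/dt = b + z*(x - c).
roessler <- function(t, state, parms) {
  x <- state[1]; y <- state[2]; z <- state[3]
  dx <- -y - z
  dy <- x + parms["a"] * y
  dz <- parms["b"] + z * (x - parms["c"])
  list(c(dx, dy, dz))
}

out <- ode(
  y     = c(x = 1, y = 1, z = 0),
  times = seq(0, 200, by = 0.05),
  func  = roessler,
  parms = c(a = 0.2, b = 0.2, c = 5.7)
)

# Add white noise to one coordinate to obtain a noisy, deterministic-at-heart series.
x_noisy <- out[, "x"] + rnorm(nrow(out), sd = 0.5)  # sd = 0.5 is an assumed noise level
```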
An introduction to weather forecasting with deep learning
Same with weekly climatology: looking back at how warm it was, at a given location, that same week two years ago does not, in general, sound like a bad strategy. Second, the DL baseline shown is as basic as it can get, architecture- as well as parameter-wise. More sophisticated and powerful architectures have been developed that not just by far su...
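As a sketch, the weekly-climatology baseline alluded to above amounts to a grouped average. The data frame below is invented purely to make the example run, and the column names (location, date, temperature) are assumptions, not the post's data.

```r
library(dplyr)
library(lubridate)

# Invented observations: two locations, two years of daily temperatures.
set.seed(42)
observations <- tibble(
  location    = rep(c("A", "B"), each = 365 * 2),
  date        = rep(seq(as.Date("2018-01-01"), by = "day", length.out = 365 * 2), 2),
  temperature = rnorm(365 * 4, mean = 10, sd = 8)
)

# Weekly climatology: the forecast for a given location and calendar week is simply
# the mean temperature observed there in that week across past years.
climatology <- observations %>%
  mutate(week = isoweek(date)) %>%
  group_by(location, week) %>%
  summarise(t_clim = mean(temperature), .groups = "drop")

head(climatology)
```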
Please allow me to introduce myself: Torch for R
Last January at rstudio::conf, in that distant past when conferences still used to take place at some physical location, my colleague Daniel gave a talk introducing new features and ongoing development in the tensorflow ecosystem. In the Q&A part, he was asked something unexpected: Were we going to build support for PyTorch? He hesitated; that wa...
Getting familiar with torch tensors
Two days ago, I introduced torch, an R package that natively provides the functionality PyTorch brings to Python users. In that post, I assumed basic familiarity with TensorFlow/Keras. Consequently, I portrayed torch in a way I figured would be helpful to someone who “grew up” with the Keras way of training a model: Aiming to focus ...
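A sketch of the kind of tensor basics the post covers, with arbitrary illustrative values:

```r
library(torch)

# Create tensors from R objects or from random draws.
t1 <- torch_tensor(matrix(1:6, nrow = 2))   # 2 x 3 tensor of (long) integers
t2 <- torch_randn(3, 2)                     # 3 x 2 tensor of standard-normal draws

# Convert dtypes before mixing integer and floating-point tensors.
t1 <- t1$to(dtype = torch_float())

# Matrix multiplication, shape inspection, and (optionally) moving to the GPU.
res <- t1$matmul(t2)                        # 2 x 2 result
res$shape

if (cuda_is_available()) res <- res$cuda()
```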
Introducing torch autograd
Last week, we saw how to code a simple network from scratch, using nothing but torch tensors. Predictions, loss, gradients, weight updates – all these things we’ve been computing ourselves. Today, we make a significant change: Namely, we spare ourselves the cumbersome calculation of gradients, and have torch do it for us. Prior to that though...
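In a nutshell (toy values, not taken from the post): create a tensor with requires_grad = TRUE, build up a computation, call backward(), and read the gradient off the leaf tensor.

```r
library(torch)

x <- torch_tensor(2, requires_grad = TRUE)
y <- x^2 + 3 * x      # dy/dx = 2x + 3, i.e. 7 at x = 2

y$backward()          # have torch compute the gradient for us
x$grad                # tensor containing 7
```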