Publications by T. Moudiki
LSBoost: Explainable ‘AI’ using Gradient Boosted randomized networks (with examples in R and Python)
Disclaimer: I have no affiliation with The Next Web (cf. online article). A few weeks ago I read this interesting and accessible article about explainable AI, discussing self-explainable AI issues more specifically. I'm no longer sure there's a mandatory need for AI models that explain themselves, as there are model-agnostic tools...
Explainable ‘AI’ using Gradient Boosted randomized networks Pt2 (the Lasso)
This post is about LSBoost, an explainable 'AI' algorithm that uses Gradient Boosted randomized networks for pattern recognition. As discussed last week, LSBoost is a cousin of GFAGBM's LS_Boost. More specifically, in LSBoost, the so-called weak learners from LS_Boost are based on randomized neural networks' components and var...
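To make the mechanics concrete, here is a minimal, self-contained R sketch of the idea, not mlsauce's actual implementation: at each boosting round, a weak learner is fitted to the current residuals, where the weak learner is a Lasso regression (via glmnet) on randomized hidden features tanh(XW) with a fixed random projection W. The data, activation, and hyperparameter values below are made up for illustration.

# Illustrative LSBoost-style loop: Lasso weak learners fitted to
# residuals on randomized neural network features.
library(glmnet)

set.seed(123)
n <- 100; p <- 5
X <- matrix(rnorm(n * p), n, p)
y <- X[, 1] - 2 * X[, 2] + rnorm(n, sd = 0.1)

n_hidden <- 50    # number of random hidden nodes
n_rounds <- 25    # boosting iterations
nu <- 0.1         # learning rate
lambda <- 0.01    # Lasso penalty

fits <- vector("list", n_rounds)
Ws <- vector("list", n_rounds)
res <- y - mean(y)

for (m in seq_len(n_rounds)) {
  W <- matrix(rnorm(p * n_hidden), p, n_hidden)  # random, never trained
  H <- tanh(X %*% W)                             # randomized features
  fit <- glmnet(H, res, alpha = 1, lambda = lambda)
  res <- res - nu * as.numeric(predict(fit, newx = H))
  fits[[m]] <- fit
  Ws[[m]] <- W
}

# Prediction: base value plus the sum of shrunken weak learners
pred <- mean(y)
for (m in seq_len(n_rounds)) {
  pred <- pred + nu * as.numeric(predict(fits[[m]], newx = tanh(X %*% Ws[[m]])))
}
sqrt(mean((y - pred)^2))  # in-sample RMSE

Because each W is drawn once and never trained, every weak learner is cheap to fit, and its Lasso coefficients remain easy to inspect, which is the "explainable" angle of the construction.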
Forecasting lung disease progression
In the OSIC Pulmonary Fibrosis Progression competition on Kaggle, participants are tasked with determining the likelihood of recovery (prognosis) of several patients affected by a lung disease. For each patient, the maximum volume of air they can exhale after a maximum inhalation (FVC, Forced Vital Capacity) is measured over the weeks, for approximately...
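As a toy illustration of the longitudinal setup only (not the competition pipeline, and with simulated numbers), the simplest baseline for progression is a per-patient linear trend of FVC against week:

# Hypothetical two-patient example: fit one FVC slope per patient
set.seed(42)
df <- data.frame(
  patient = rep(c("A", "B"), each = 8),
  week    = rep(seq(0, 35, by = 5), times = 2)
)
df$fvc <- ifelse(df$patient == "A", 3200, 2800) -
  8 * df$week + rnorm(nrow(df), sd = 40)   # simulated decline, in mL

fit <- lm(fvc ~ week * patient, data = df)
coef(fit)  # per-patient baseline FVC and weekly decline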
Simulation of dependent variables in ESGtoolkit
Version 0.3.0 of ESGtoolkit has been released, on GitHub for now. As in v0.2.0, it contains functions for the simulation of dependent random variables. Here is how you can install the package from the R console:

library(devtools)
devtools::install_github("Techtonique/esgtoolkit")

When I first created ESGtoolkit back in 2014, its name stood for Ec...
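For intuition about what "simulation of dependent random variables" involves, here is a from-scratch R sketch using a Gaussian copula: correlated Gaussians are mapped to uniforms, then to arbitrary margins. It illustrates the concept only; it is not ESGtoolkit's API, and the correlation and margins below are arbitrary choices.

# Gaussian copula by hand: dependence set on the Gaussian scale,
# margins chosen freely afterwards
set.seed(1)
n <- 1000
rho <- 0.7
Sigma <- matrix(c(1, rho, rho, 1), 2, 2)

Z <- matrix(rnorm(2 * n), n, 2) %*% chol(Sigma)  # correlated Gaussians
U <- pnorm(Z)                                    # dependent uniforms
x1 <- qexp(U[, 1], rate = 2)                     # exponential margin
x2 <- qlnorm(U[, 2])                             # lognormal margin
cor(x1, x2, method = "spearman")  # dependence survives the transforms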
Submitting R package to CRAN
Disclaimer: I have no affiliation with Microsoft Corp. or Revolution Analytics. For the n-th time in x years, submitting an R package to CRAN ended up like a comedy. This time it was over one anecdotal note (a kind of warning), whereas the previously accepted version of ESGtoolkit had, for 5 years, 12 warnings and notes combined. No error, nothing's...
Statistical/Machine Learning explainability using Kernel Ridge Regression surrogates
As announced last week, this week's topic is Statistical/Machine Learning (ML) explainability using Kernel Ridge Regression (KRR) surrogates. The core idea underlying this type of ML explainability method is to apply a second learning model to the predictions of the first, so-called black-box, model. How am I envisaging it? Not by utilizing KRR...
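Since the teaser is truncated, here is one hedged reading of the surrogate idea in R: train any black-box model, then fit a KRR, written out directly from its closed-form dual solution, to the black box's predictions so that the more tractable surrogate mimics it. The random forest black box, Gaussian kernel, and penalty value below are my assumptions, not necessarily the post's choices.

# Surrogate sketch: KRR fitted to a black box's predictions
library(randomForest)

set.seed(0)
n <- 200; p <- 4
X <- matrix(runif(n * p), n, p)
y <- sin(2 * pi * X[, 1]) + X[, 2]^2 + rnorm(n, sd = 0.1)

black_box <- randomForest(X, y)
y_hat <- predict(black_box, X)  # surrogate target: the black box's output

# Kernel Ridge Regression with a Gaussian kernel, written out directly
gauss_kernel <- function(A, B, sigma = 1) {
  d2 <- outer(rowSums(A^2), rowSums(B^2), "+") - 2 * A %*% t(B)
  exp(-d2 / (2 * sigma^2))
}
lambda <- 1e-3
K <- gauss_kernel(X, X)
alpha <- solve(K + lambda * diag(n), y_hat)   # dual coefficients
surrogate_pred <- as.numeric(K %*% alpha)

cor(surrogate_pred, y_hat)  # fidelity of the surrogate to the black box

A high correlation between surrogate_pred and y_hat indicates the KRR faithfully reproduces the black box, at which point the surrogate, rather than the black box itself, can be probed for explanations.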