Ygor Serpa
1 min read · Apr 1, 2020


One additional point is “do you need the algorithm to explain why it said X and not Y?”

Model interpretability is an often overlooked topic when deciding on an ML approach. If your problem is simple, approaches such as decision trees might handle it (and explain themselves). If your problem is more complex, it might require SVMs or neural networks, which are pretty much black boxes most of the time.
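As a minimal sketch of what "explaining themselves" means here (assuming scikit-learn and its bundled Iris dataset), a fitted decision tree's learned rules can be dumped as plain if/else text:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
# A shallow tree keeps the explanation small and readable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# Each branch is a human-readable threshold rule on a named feature,
# so you can trace exactly why a sample ended up in a given class.
rules = export_text(tree, feature_names=list(data.feature_names))
print(rules)
```

An SVM with an RBF kernel or a neural network offers no comparable one-line dump of its decision logic, which is exactly the trade-off the point above is about.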



Written by Ygor Serpa

Former game developer turned data scientist after falling in love with AI and all its branches.
