Hello.

I am a Professor of Machine Learning and Deputy Head of the Department of Computer Science.

Contact me:
Office: G11, Kilburn Building.

What do I do?

I work on the theoretical and methodological foundations of Machine Learning. I enjoy finding connections and equivalences between ideas in the jungle of modern ML, using tools from statistics, information theory, and information geometry. Everything in ML is, ultimately, a special case of something else.

I find this strategy can lead to new methods with strong foundations: e.g. quantifying the stability (reproducibility) of feature selection algorithms; methods for hypothesis testing in non-standard scenarios; and a new theory of ensemble diversity. Our work has been applied in areas such as predictive policing, clinical trials, and the design of efficient ML algorithms for plastic electronics.
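
To give a concrete feel for the first of these: below is a minimal sketch of how stability can be quantified, in the spirit of our JMLR work on the topic. It is a simplified illustration - the function and the toy data are mine, and real use needs confidence intervals and care with edge cases.

    import numpy as np

    def stability(Z):
        """Stability of feature selection across repeated runs.

        Z is an (M, d) binary matrix: M runs of a selection algorithm
        over d features, with Z[i, f] = 1 if run i selected feature f.
        Returns a score <= 1; a value of 1 means every run chose the
        same feature subset.
        """
        M, d = Z.shape
        p = Z.mean(axis=0)                    # selection frequency per feature
        var_f = (M / (M - 1)) * p * (1 - p)   # unbiased variance per feature
        k_bar = p.sum()                       # average subset size
        # Normalise by the variance expected if subsets of size k_bar
        # were drawn uniformly at random.
        return 1 - var_f.mean() / ((k_bar / d) * (1 - k_bar / d))

    # Toy example: 4 runs over 5 features, mostly agreeing on features 0-2.
    Z = np.array([[1, 1, 1, 0, 0],
                  [1, 1, 1, 0, 0],
                  [1, 1, 0, 1, 0],
                  [1, 1, 1, 0, 0]])
    print(stability(Z))   # ~0.58: moderately stable selections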

I also enjoy thinking about pedagogy, especially the nature of PhD training. I wrote a book, a step-by-step guide to the intellectual and emotional rollercoaster of the PhD. Written in collaboration with twelve leading academics and industrialists, each giving their unique perspective on the PhD process, How to get Your PhD: A Handbook for the Journey is now available.

News

See my archived news for older work. The main highlights of my recent activities are...
March 2024  The most surprising paper I've ever published... Bias/Variance is not the same as Approximation/Estimation. We figure out the precise connection between two seminal results in ML theory... somehow this has been overlooked for 50+ years?
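
For readers meeting these results for the first time, here are standard textbook statements of the two decompositions, in my own notation rather than the paper's (squared loss, noise-free target for simplicity). Bias/variance decomposes the expected error of a model $f_D$ trained on a random dataset $D$, with $\bar{f} = \mathbb{E}_D[f_D]$:

    \[
    \mathbb{E}_D\big[(f_D(x) - y)^2\big]
      = \underbrace{\big(\bar{f}(x) - y\big)^2}_{\text{bias}^2}
      + \underbrace{\mathbb{E}_D\big[(f_D(x) - \bar{f}(x))^2\big]}_{\text{variance}}
    \]

Approximation/estimation instead decomposes the excess risk of the learned model $\hat{f}$ over the Bayes risk $R^*$, via the best model $f^*$ in the hypothesis class:

    \[
    R(\hat{f}) - R^*
      = \underbrace{R(f^*) - R^*}_{\text{approximation}}
      + \underbrace{R(\hat{f}) - R(f^*)}_{\text{estimation}}
    \]

The two look superficially parallel, but the quantities compared and the sources of randomness differ; the precise bridge between them is the point of the paper.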

December 2023  Very pleased to see our new paper out in JMLR today - A Unified Theory of Diversity in Ensemble Learning. This is the culmination of a very, very long journey - the ideas in this work have been in progress for *20* years, with multiple generations of postdocs and PhD students contributing. Thank you all!

January 2023  New preprint out, A Unified Theory of Diversity in Ensemble Learning. This is the result of a long chain of research, beginning right back with my PhD. It formulates a theory that explains the nature and role of diversity in several supervised learning scenarios. Thank you, EPSRC, for supporting this!
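
To give a flavour of what "diversity" means here: the classic special case is the squared-loss ambiguity decomposition of Krogh & Vedelsby, which the unified theory generalises far beyond (illustration in my notation, not the paper's derivation). For an ensemble average $\bar{f} = \frac{1}{M}\sum_{i} f_i$:

    \[
    \big(\bar{f}(x) - y\big)^2
      = \underbrace{\frac{1}{M}\sum_{i=1}^{M} \big(f_i(x) - y\big)^2}_{\text{average member error}}
      - \underbrace{\frac{1}{M}\sum_{i=1}^{M} \big(f_i(x) - \bar{f}(x)\big)^2}_{\text{diversity (ambiguity)}}
    \]

Diversity enters as a subtracted term: for this loss and this combiner, disagreement among the members can only help the ensemble.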

March 2022  New paper published in AISTATS 2022: Bias-Variance Decompositions for Margin Losses, with Danny Wood and Tingting Mu.

September 2021  I am starting my research sabbatical for one full year... working on something very, very cool. Stay tuned - results expected in late 2022....

June 2020  Very pleased to announce a new paper in ECML 2020... To Ensemble or Not Ensemble: When does End-To-End Training Fail? In collaboration with many colleagues from Manchester, this is a key output from our EPSRC-funded LAMBDA project, investigating issues of modularity and cooperative training in deep neural networks.

June 2019  Very pleased to announce our new paper to be published in ECML. Joint work sponsored by AstraZeneca, it shows how to quantify uncertainty in feature selection algorithms even when features are highly interdependent - On The Stability of Feature Selection in the Presence of Feature Correlations. The acceptance rate was 18% this year.