Hello.

I am Professor of Machine Learning in the Department of Computer Science.
If you need to find me, I’m in office G11, Kilburn Building.

What do I do?

I like to work on methodological aspects of Machine Learning. I find this leads to novel methods with strong foundations: e.g. we have contributed methods for assessing the stability (reproducibility) of variable selection algorithms; methods for hypothesis testing in challenging non-standard scenarios; and a new theory of ensemble diversity. This has led to applications in areas such as predictive policing, clinical trials, and the design of efficient ML algorithms for plastic electronics.
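To give a flavour of the stability work, here is a minimal sketch of the underlying idea: run the selector on bootstrap resamples of the data and measure how much the selected feature subsets agree. The select_features argument and the pairwise Jaccard score here are illustrative placeholders, not our published estimator.

```python
import numpy as np

def selection_stability(X, y, select_features, n_runs=50, seed=0):
    """Estimate the stability of a feature selection algorithm.

    Runs select_features(X, y) -- assumed to return an iterable of
    selected column indices -- on bootstrap resamples of the data,
    then scores the average pairwise Jaccard similarity of the
    selected subsets: 1.0 means perfectly reproducible selections.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    subsets = []
    for _ in range(n_runs):
        idx = rng.integers(0, n, size=n)           # bootstrap resample
        subsets.append(set(select_features(X[idx], y[idx])))
    sims = [len(a & b) / len(a | b)                # Jaccard overlap
            for i, a in enumerate(subsets)
            for b in subsets[i + 1:] if a | b]     # skip pairs of empty sets
    return float(np.mean(sims))

# Example with a trivial selector: top-5 features by absolute
# correlation with the target (purely for illustration).
def top5_by_correlation(X, y):
    scores = np.abs(np.corrcoef(X.T, y)[-1, :-1])
    return np.argsort(scores)[-5:]
```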

I enjoy looking for connections and equivalences between known methods in the jungle of ML, primarily with tools from statistics and information theory, and lately I've been exploring information geometry. Everything in ML is, ultimately, a special case of something else.

I also enjoy thinking deeply about pedagogy, especially the nature of PhD training. I wrote a book - a step-by-step guide to the intellectual and emotional rollercoaster of doing a PhD. Written in collaboration with twelve leading academics and industrialists, who give their unique perspectives on the PhD process, How to get Your PhD: A Handbook for the Journey is now available.

Contact me:
firstname.secondname AT manchester.ac.uk


News

See my archived news for older work, but recent activities have been...
March 2024  One of the most unexpected papers I've ever published... Bias/Variance is not the same as Approximation/Estimation. We outline the precise connection between these two seminal results in ML theory... somehow this connection and its properties have been overlooked for 50+ years?
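For readers who want the two results side by side, here is a textbook statement of each (squared loss, deterministic target y(x); the notation is mine, not the paper's). Bias/variance averages over random training sets S, while approximation/estimation splits the excess risk over a model class F:

```latex
% Bias/variance: error of a learned predictor f_S at a point x,
% averaged over random training sets S.
\mathbb{E}_S\!\left[(f_S(x)-y(x))^2\right]
 = \underbrace{\left(\mathbb{E}_S[f_S(x)]-y(x)\right)^2}_{\text{bias}^2}
 + \underbrace{\mathbb{E}_S\!\left[\left(f_S(x)-\mathbb{E}_S[f_S(x)]\right)^2\right]}_{\text{variance}}

% Approximation/estimation: excess risk of f_S over the Bayes risk R^*,
% split at the best predictor available in the class \mathcal{F}.
R(f_S)-R^*
 = \underbrace{\inf_{f\in\mathcal{F}} R(f)-R^*}_{\text{approximation}}
 + \underbrace{R(f_S)-\inf_{f\in\mathcal{F}} R(f)}_{\text{estimation}}
```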

December 2023  Very pleased to see our new paper out in JMLR today - A Unified Theory of Diversity in Ensemble Learning. This is the culmination of a very, very long journey - the ideas in this work have been in progress for *20* years, with multiple generations of Postdocs and PhDs contributing. Thank you all!

January 2023  New preprint out, A Unified Theory of Diversity in Ensemble Learning. This is the result of a long chain of research, beginning right back with my PhD. It formulates a theory that explains the nature and role of diversity in several supervised learning scenarios. Thank you, EPSRC, for supporting this!
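For context, the classic special case is the ambiguity decomposition of Krogh & Vedelsby (1995): under squared loss, the error of a convex combination of models is the average member error minus a diversity term. The paper extends this picture to the more general scenarios mentioned above; a sketch of the special case:

```latex
% Ambiguity decomposition (Krogh & Vedelsby, 1995): squared loss,
% ensemble \bar{f} = \sum_i w_i f_i with w_i \ge 0 and \sum_i w_i = 1.
(\bar{f}-y)^2
 = \underbrace{\sum_i w_i (f_i-y)^2}_{\text{avg. member error}}
 - \underbrace{\sum_i w_i (f_i-\bar{f})^2}_{\text{diversity (ambiguity)}}
```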

March 2022  New paper published at AISTATS 2022: Bias-Variance Decompositions for Margin Losses, with Danny Wood and Tingting Mu.

September 2021  I am starting my research sabbatical for one full year... working on something very, very cool. Stay tuned - results expected in late 2022...

June 2020  Very pleased to announce a new paper in ECML 2020... To Ensemble or Not Ensemble: When does End-To-End Training Fail? In collaboration with many colleagues from Manchester, this is a key output from our EPSRC-funded LAMBDA project, investigating the issues of modularity and cooperative training in deep neural networks.

June 2019  Very pleased to announce our new paper to be published at ECML. In joint work sponsored by AstraZeneca, we show how to quantify uncertainty in feature selection algorithms even when we have highly interdependent features - On The Stability of Feature Selection in the Presence of Feature Correlations. The acceptance rate was 18% this year.