MLCopilot: Unleashing the Power of Large Language Models in Solving Machine Learning Tasks

Lei Zhang, Yuge Zhang, Kan Ren, Dongsheng Li, Yuqing Yang

Main: Semantics and Applications Oral Paper

Session 9: Semantics and Applications (Oral)
Conference Room: Marie Louise 2
Conference Time: March 20, 09:00-10:30 (CET) (Europe/Malta)
Abstract: The field of machine learning (ML) has seen widespread adoption, creating significant demand for adapting ML to specific scenarios, which remains expensive and non-trivial. The predominant approaches to automating ML tasks (e.g., AutoML) are often time-consuming and hard for human developers to interpret. Human engineers, in contrast, have a remarkable ability to understand tasks and reason about solutions, but their experience and knowledge are often sparse and difficult for quantitative approaches to utilize. In this paper, we aim to bridge the gap between machine intelligence and human knowledge by introducing a novel framework, MLCopilot, which leverages state-of-the-art large language models (LLMs) to develop ML solutions for novel tasks. We showcase the possibility of extending the capability of LLMs to comprehend structured inputs and perform thorough reasoning to solve novel ML tasks. We find that, with some dedicated design, the LLM can (i) learn from existing experiences on ML tasks and (ii) reason effectively to deliver promising results for new tasks. The generated solutions can be used directly and achieve a high level of competitiveness.
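
The core idea described in the abstract (serializing past ML task experiences into a structured prompt so an LLM can reason toward a solution for a new task) can be illustrated with a minimal sketch. This is not the authors' implementation: the `Experience` schema, `build_prompt`, and the `call_llm` placeholder below are hypothetical, and the choice of LLM client is deliberately left open.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Experience:
    """A past ML task paired with a solution that worked well (hypothetical schema)."""
    task_description: str
    solution: str
    metric: float


def build_prompt(new_task: str, experiences: List[Experience]) -> str:
    """Serialize prior experiences into a structured prompt that asks the LLM
    to reason toward a solution for the new task."""
    lines = ["You are an ML engineer. Past tasks and their best-known solutions:"]
    for i, exp in enumerate(experiences, 1):
        lines.append(
            f"{i}. Task: {exp.task_description}\n"
            f"   Solution: {exp.solution}\n"
            f"   Score: {exp.metric:.3f}"
        )
    lines.append(f"New task: {new_task}")
    lines.append("Propose a concrete ML solution (model family and key hyperparameters) "
                 "and briefly justify it.")
    return "\n".join(lines)


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; plug in whichever client you use."""
    raise NotImplementedError("Wire this to your preferred LLM API.")


if __name__ == "__main__":
    # Illustrative, made-up experience records.
    history = [
        Experience("Tabular classification, 10k rows, 30 features",
                   "Gradient-boosted trees, 500 estimators, lr=0.05", 0.91),
        Experience("Tabular classification, 2k rows, 200 features",
                   "Logistic regression with L1 regularization, C=0.1", 0.84),
    ]
    prompt = build_prompt("Tabular classification, 50k rows, 15 features", history)
    print(prompt)  # pass to call_llm(prompt) once an LLM client is wired in
```

The sketch only shows the prompt-construction step; the paper's framework additionally covers how experiences are collected and how the LLM's output is parsed into an executable ML configuration.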