ReFeR: Improving Evaluation and Reasoning through Hierarchy of Models
- Publication Year :
- 2024
Abstract
- Assessing the quality of outputs generated by generative models, such as large language models (LLMs) and vision language models (VLMs), presents notable challenges. Traditional evaluation methods typically rely either on human assessments, which are resource-intensive, or on automatic metrics that often correlate poorly with human judgment. Another common approach is to train dedicated deep learning evaluators, which not only consume substantial compute and time but also require extensive training data. In this study, we introduce a tuning-free framework called ReFeR, designed to evaluate generative outputs, including both text and images, by leveraging a 2-level hierarchy of LLMs and VLMs themselves. We rigorously evaluate ReFeR across four diverse evaluation tasks. The framework not only improves the accuracy of these evaluations, surpassing previous benchmarks, but also generates constructive feedback. Interestingly, the framework is also applicable to reasoning tasks: experiments on four reasoning tasks demonstrate its superior collective reasoning abilities. We present two variants of the framework: ReFeR-Turbo, optimized for accelerated performance, and ReFeR-Lite, offering a more cost-effective solution. ReFeR-Lite is $\sim7.7\times$ more efficient while being comparably accurate to ReFeR-Turbo. We make our code, data, and PIP package publicly available: PIP package at https://pypi.org/project/refer-agents/ and code at https://github.com/yaswanth-iitkgp/ReFeR_Code .
- Comment: Paper Under Review
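To make the 2-level hierarchy described in the abstract concrete, here is a minimal illustrative sketch: peer models independently critique a candidate output (level 1), and a head model aggregates their reviews into a final judgment (level 2). All names, prompts, and the `LLMFn` callable type are hypothetical illustrations; they do not reflect the actual `refer-agents` API.

```python
# Illustrative sketch of a 2-level evaluation hierarchy (not the refer-agents API).
from typing import Callable, List

LLMFn = Callable[[str], str]  # stand-in for any function mapping a prompt to a model response


def peer_review(peers: List[LLMFn], task: str, output: str) -> List[str]:
    """Level 1: each peer model independently critiques the candidate output."""
    prompt = (
        f"Task: {task}\n"
        f"Candidate output: {output}\n"
        "Give a short critique and a score from 1 to 5."
    )
    return [peer(prompt) for peer in peers]


def head_verdict(head: LLMFn, task: str, output: str, reviews: List[str]) -> str:
    """Level 2: the head model aggregates the peer reviews into a final score and feedback."""
    joined = "\n---\n".join(reviews)
    prompt = (
        f"Task: {task}\n"
        f"Candidate output: {output}\n"
        f"Peer reviews:\n{joined}\n"
        "Considering the peer reviews, give a final score (1-5) and constructive feedback."
    )
    return head(prompt)


if __name__ == "__main__":
    # Dummy callables stand in for real LLM/VLM calls.
    peers = [lambda p: "Score: 4. Fluent but slightly verbose." for _ in range(2)]
    head = lambda p: "Final score: 4. Consensus: fluent, with minor verbosity."
    task, output = "Summarize the article", "A short candidate summary ..."
    reviews = peer_review(peers, task, output)
    print(head_verdict(head, task, output, reviews))
```

In practice the peer and head callables would wrap whichever LLM or VLM endpoints are being evaluated with; the sketch only shows the hierarchical flow of critiques into a single aggregated verdict.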
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2407.12877
- Document Type :
- Working Paper