Self-Training Large Language Models for Improved Visual Program Synthesis With Visual Reinforcement

1Northeastern University, 2NEC Laboratories America, 3UC San Diego, 4UNC Chapel Hill

Is it possible to improve the visual program synthesis abilities of an open code LLM without using large-scale human supervision or outputs from a strong commercial model?

Teaser Image

Python-based visual program synthesis asks an LLM to solve compositional computer vision tasks by writing Python code. Existing methods for visual program synthesis primarily use frozen, commercial LLMs (e.g., GPT-4) as program generators. We explore the idea of self-training with synthetic data and feedback from an interpreter to improve the visual program synthesis abilities of an open code LLM.
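To make this concrete, here is a hedged sketch of what a synthesized visual program might look like. The `ImagePatch` class and its `find` method below are toy stand-ins loosely modeled on prior visual-programming work, not this paper's exact API; a real system would back them with detection and VQA models.

```python
class ImagePatch:
    """Toy stand-in for an image region with attached vision tools.
    (Illustrative only; in a real system, `find` would call an
    open-vocabulary object detector.)"""

    def __init__(self, detections):
        # detections: mapping from category name -> list of bounding boxes
        self.detections = detections

    def find(self, category):
        """Return one ImagePatch per detected instance of `category`."""
        return [ImagePatch({category: [box]})
                for box in self.detections.get(category, [])]


def execute_command(image):
    # The kind of program an LLM might synthesize for the query
    # "How many mugs are in the image?"
    mugs = image.find("mug")
    return str(len(mugs))


# Fake image with two detected mugs.
image = ImagePatch({"mug": [(0, 0, 10, 10), (20, 0, 30, 10)]})
print(execute_command(image))  # -> "2"
```

The interpreter executes programs like this one against an image; comparing the returned answer to a ground-truth annotation is what yields the binary reward described below.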


We construct a pseudo-environment for visual program synthesis using annotations from standard vision-language tasks. Although our environment can only provide sparse binary rewards, we show that this is enough for an LLM to self-improve with a simple filtered behavioral cloning approach, which can also be interpreted as a REINFORCE-style policy gradient method without a baseline.
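The core loop can be sketched as follows. This is a hedged, illustrative version, not the paper's training code: `policy_sample` and `reward_fn` are assumed interfaces standing in for LLM generation and interpreter execution. Because rewards are 0/1, cross-entropy fine-tuning on only the kept samples matches a REINFORCE gradient with no baseline: in the sum of r_i * grad log p(program_i), the r_i = 0 terms simply drop out.

```python
def filtered_behavioral_cloning_round(policy_sample, reward_fn, tasks, k=4):
    """One round of reinforced self-training (illustrative sketch).
    Sample k candidate programs per task, execute each to obtain a
    binary reward, and keep only the successes as fine-tuning data."""
    dataset = []
    for task in tasks:
        for _ in range(k):
            program = policy_sample(task)        # LLM proposes a program
            if reward_fn(task, program) == 1:    # interpreter + annotation check
                dataset.append((task, program))
    return dataset  # fine-tune the LLM on this set, then repeat


# Toy demo: a fake "policy" that proposes programs in a fixed order,
# and a reward that checks each program against a known-good answer.
candidates = iter(["return 0", "return 1+2", "return 2", "return 1+2"])
data = filtered_behavioral_cloning_round(
    policy_sample=lambda task: next(candidates),
    reward_fn=lambda task, prog: int(prog == "return 1+2"),
    tasks=["toy counting task"],
    k=4,
)
print(data)  # only the two reward-1 programs survive the filter
```

The filtering step is what makes the sparse binary reward usable: failed programs are discarded rather than penalized, so no reward shaping or baseline is needed.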


Naive self-training saturates after 1-3 iterations. By providing a small number (< 50) of manually written human corrections for persistent error types, we show that self-training can be continued for up to 10 iterations, and maybe more!
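The outer loop with human corrections can be sketched as below. All names here (`finetune`, `collect_filtered`, `corrections`) are illustrative assumptions, not the paper's code: the point is that the same small, fixed set of human-written fixes is mixed back into the filtered data at every round.

```python
def self_train(policy, finetune, collect_filtered, corrections, iterations=10):
    """Outer self-training loop (illustrative sketch). Each round:
    sample and filter programs by binary reward, mix in a small fixed
    set of human corrections for persistent error types, fine-tune."""
    for _ in range(iterations):
        data = collect_filtered(policy)   # keep only reward-1 programs
        data = data + corrections         # < 50 human fixes, reused each round
        policy = finetune(policy, data)   # update the LLM on this mix
    return policy


# Toy demo: the "policy" is just the list of examples it was trained on.
trained = self_train(
    policy=[],
    finetune=lambda pol, data: pol + data,
    collect_filtered=lambda pol: ["self-generated program"],
    corrections=["human-written correction"],
    iterations=3,
)
print(len(trained))  # 3 rounds x 2 examples per round
```

Without the `corrections` term, this reduces to naive self-training, which plateaus once the model's persistent error types dominate the residual failures.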


Visual program synthesis is a promising approach to exploit the reasoning abilities of large language models for compositional computer vision tasks. Previous work has used few-shot prompting with frozen LLMs to synthesize visual programs. Training an LLM to write better visual programs is an attractive prospect, but it is unclear how to accomplish this. No dataset of visual programs for training exists, and acquisition of a visual program dataset cannot be easily crowdsourced due to the need for expert annotators. To get around the lack of direct supervision, we explore improving the program synthesis abilities of an LLM using feedback from interactive experience. We propose a method where we exploit existing annotations for a vision-language task to improvise a coarse reward signal for that task, treat the LLM as a policy, and apply reinforced self-training to improve the visual program synthesis ability of the LLM for that task. We describe a series of experiments on object detection, compositional visual question answering, and image-text retrieval, and show that in each case, the self-trained LLM outperforms or performs on par with few-shot frozen LLMs that are an order of magnitude larger.


@InProceedings{Khan_2024_CVPR,
          author    = {Khan, Zaid and BG, Vijay Kumar and Schulter, Samuel and Fu, Yun and Chandraker, Manmohan},
          title     = {Self-Training Large Language Models for Improved Visual Program Synthesis With Visual Reinforcement},
          booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
          year      = {2024},
}