Particle.news

RAG Research Moves to Graph, Lightweight, and Agentic Designs as New Benchmarks Steer Fixes

New arXiv papers foreground adaptive retrieval, multi‑agent workflows, and deployment‑minded tooling to tackle multi‑hop, multimodal, and long‑context weaknesses.

Overview

  • A new developer guide formalizes a multi‑type RAG taxonomy: GraphRAG for structured multi‑hop reasoning, LightRAG for incremental two‑level retrieval on small models, and AgenticRAG for plan‑execute‑verify workflows.
  • Graph‑centric and multi‑agent methods expand capabilities, with CogGRAG unifying mind‑map decomposition and knowledge‑graph QA, GraphAgent‑Reasoner scaling accuracy via agent collaboration, and Mujica‑MyGo introducing minimalist reinforcement learning for efficient multi‑turn RAG post‑training.
  • Multimodal retrieval advances as AdaVideoRAG routes queries across text, visual, and knowledge‑graph stores, Video‑RAG injects visually aligned auxiliary text without fine‑tuning, and BeMyEyes pairs a compact perceiver with a powerful reasoner for open‑source multimodal gains.
  • A wave of evaluations highlights higher‑level reasoning gaps, with VisReason, MASS‑Bench, CFG‑Bench, InfiniBench, VCU‑Bridge, and LAST diagnosing declines at deeper levels and motivating remedies such as self‑elicited knowledge distillation and continuous visual tokens.
  • Deployment‑oriented tooling matures, as HuggingR4 curbs prompt bloat for model selection, hybrid routing preserves LLM accuracy at lower cost for app‑feedback triage, and new analysis reframes multi‑turn context drift as a controllable equilibrium improved by brief reminders.
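To make the AgenticRAG plan‑execute‑verify pattern from the taxonomy above concrete, here is a minimal, hypothetical sketch. The toy corpus, the heuristic planner, and the term‑overlap verifier are all illustrative stand‑ins, not the implementation from any of the papers; a real system would use an LLM for planning and verification and a vector index for retrieval.

```python
# Hypothetical plan-execute-verify loop for an agentic RAG system.
# All components here are toy stand-ins for illustration only.

CORPUS = {
    "capital_france": "Paris is the capital of France.",
    "seine": "The Seine flows through Paris.",
}

def plan(question: str) -> list[str]:
    """Plan: decompose the question into retrieval sub-queries (toy heuristic)."""
    return [w for w in question.lower().replace("?", "").split() if len(w) > 3]

def execute(sub_queries: list[str]) -> list[str]:
    """Execute: retrieve each passage that mentions any sub-query term."""
    hits = []
    for doc in CORPUS.values():
        if any(q in doc.lower() for q in sub_queries) and doc not in hits:
            hits.append(doc)
    return hits

def verify(question: str, passages: list[str]) -> bool:
    """Verify: accept only if a retrieved passage shares a term with the question."""
    terms = set(plan(question))
    return any(terms & set(p.lower().replace(".", "").split()) for p in passages)

def agentic_rag(question: str, max_rounds: int = 3) -> list[str]:
    """Loop plan -> execute -> verify, relaxing the queries on failure."""
    queries = plan(question)
    for _ in range(max_rounds):
        passages = execute(queries)
        if verify(question, passages):
            return passages
        queries = queries + [q[:4] for q in queries]  # relax to prefix matching
    return []
```

The loop structure, rather than any single component, is the point: retrieval is retried under a revised plan until a verification check passes, which is what distinguishes agentic designs from single‑shot retrieve‑then‑generate pipelines.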