IIT Madras–backed AI4Bharat has introduced the Indic LLM Arena, describing it as more than just a leaderboard: a “public utility” for India’s AI ecosystem. The platform enables crowd-sourced evaluation of global large language models (LLMs) tailored for Indian users, setting benchmarks for how these systems interpret, respond, and perform across the nation’s diverse languages and cultural landscapes.
Most global leaderboards focus primarily on English, overlooking how AI models handle Indian languages or code-mixed inputs like Hinglish and Tanglish. The Indic LLM Arena addresses this gap by evaluating models on three key pillars: language, context, and safety, ensuring more inclusive and accurate assessments for India’s multilingual landscape.
The platform assesses how effectively a model understands India’s multilingual speech patterns and code-switching, how well it responds to region-specific contexts, and whether it aligns with the country’s cultural sensitivities, ethical standards, and fairness guidelines.
Launched alongside India’s growing push for sovereign AI under the IndiaAI Mission, the initiative aims to establish a reliable benchmark for evaluating both domestic and global LLMs. AI4Bharat envisions the Indic LLM Arena as a key tool to measure the quality, capability, and readiness of AI models for real-world Indian applications.
The Indic LLM Arena employs a human-in-the-loop approach, allowing users to type, speak, or transliterate prompts in various Indian languages and compare responses from two anonymous AI models. By selecting which response performs better, users contribute to a pool of thousands of human judgments that power statistically sound rankings, helping determine the most capable LLMs for India’s diverse linguistic landscape.
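The announcement does not spell out how these pairwise votes are aggregated, but arena-style leaderboards built on blind A/B comparisons typically use an Elo- or Bradley-Terry-style rating model. The sketch below is a minimal, hypothetical illustration of that general idea, not AI4Bharat’s actual methodology; the model names and votes are invented.

```python
# Hypothetical sketch: turning blind pairwise votes into a ranking with an
# Elo-style update. This is NOT the Indic LLM Arena's published method; the
# announcement only says rankings are derived from crowd-sourced judgments.
from collections import defaultdict

K = 32          # update step size (standard Elo K-factor)
BASE = 1000.0   # starting rating for every model


def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))


def rank_models(votes):
    """votes: iterable of (winner, loser) pairs from blind A/B comparisons."""
    ratings = defaultdict(lambda: BASE)
    for winner, loser in votes:
        e_win = expected_score(ratings[winner], ratings[loser])
        ratings[winner] += K * (1.0 - e_win)
        ratings[loser] -= K * (1.0 - e_win)
    return sorted(ratings.items(), key=lambda kv: kv[1], reverse=True)


if __name__ == "__main__":
    # Invented example votes: each tuple is (preferred model, other model).
    votes = [
        ("model-A", "model-B"),
        ("model-A", "model-C"),
        ("model-B", "model-C"),
        ("model-A", "model-B"),
    ]
    for model, rating in rank_models(votes):
        print(f"{model}: {rating:.1f}")
```

In practice, such platforms often fit a Bradley-Terry model over all votes at once and report confidence intervals rather than updating ratings sequentially, but the sketch captures the core step of converting human preferences into a ranking.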
As a “public utility,” the platform enables developers to benchmark and improve Indic language models, helps enterprises identify the AI solutions best suited to their requirements, and empowers users to shape the standards for what “good” AI should mean in the Indian context.
The team aims to broaden the Indic LLM Arena to assess multimodal models capable of processing text, images, and audio, along with agentic tasks such as search, document analysis, and tool use. AI4Bharat emphasizes that the platform and its evaluations will remain open source, fostering transparency and collaboration within the AI community.
The project’s initial phase received support from Google Cloud, helping establish the platform’s infrastructure and scalability. The Indic LLM Arena is now live and open for public participation; users can explore and test it at arena.ai4bharat.org.
AI researcher and CognitiveLabs founder Adithya S K lauded the initiative, noting, “The user experience is excellent; I had the best Kannada typing experience so far. Efforts like this are exactly what Indian research labs should be pursuing across different domains.”









