Google recently introduced its latest AI model, Gemini, featuring three tiers: Nano, Pro, and Ultra. The most robust among them, Gemini Ultra, reportedly surpasses OpenAI’s GPT in the majority of industry-standard benchmarks, according to internal sources within the company.
Insiders suggest that Gemini Ultra excels in 30 of the 32 widely recognized academic benchmarks used to test large language models (LLMs). This achievement could position Google favorably in its competition with OpenAI, which has been leading the AI industry with its renowned GPT models.
As reported by Insider, Gemini Ultra achieved a remarkable 90% score in the Massive Multitask Language Understanding (MMLU) benchmark, making it the first model to outperform human experts. MMLU assesses knowledge and problem-solving abilities across 57 subjects, including math, physics, history, law, medicine, and ethics.
While Gemini Nano and Pro have already been integrated into Google’s products, Ultra is scheduled for release in early 2024. Gemini Nano powers AI features in the Pixel 8 Pro smartphone, and Gemini Pro is accessible through the Bard chatbot. Additionally, Gemini Pro will be made available to developers through Google AI Studio and to enterprise customers through Google Cloud’s Vertex AI platform.
Eli Collins, Vice President of Product at DeepMind, the Google division responsible for Gemini’s development, highlighted that Gemini Ultra can comprehend “nuanced” information across various formats, including text, images, audio, and code. Collins mentioned that some of the data used to train the model was derived from publicly available web content, although the company did not disclose the specific sources.