Google Launches Deep Research Agents Powered by Gemini 3.1 Pro for Enterprise Use

Google's Gemini 3.1 Pro powers new enterprise agents that securely mine public and private data in one API call for finance and life sciences.

Apr 23, 2026

Google's new Deep Research agents can mine the open web and your company's private databases in a single API call, turning what was a consumer chatbot feature into enterprise infrastructure aimed at finance, life sciences, and market intelligence.

Built on Gemini 3.1 Pro, the two-tier system launched Monday and replaces the preview agent Google released in December. The base Deep Research agent is optimized for low-latency, interactive use cases such as financial dashboards that answer complex analytical questions in near-real time.

Deep Research Max uses extended test-time compute, spending more cycles on iterative reasoning and searching. It targets asynchronous background workflows in which analysts kick off due-diligence reports and expect exhaustive analyses by morning.

Both agents are available in public preview via paid tiers of the Gemini API through the Interactions API Google first introduced in December 2025. The most consequential addition is Model Context Protocol support. MCP, an emerging open standard for connecting AI models to external data sources, lets the agents securely query private databases, internal document repositories, and third-party data services without sensitive information leaving its source environment. A hedge fund could point Deep Research at an internal deal-flow database and a financial data terminal simultaneously, then ask it to synthesize insights alongside public web data.
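Google has not published the request schema, but a single call that mixes public web search with MCP-backed private sources might look roughly like the sketch below. Every field name here (`agent`, `tools`, `mcp_servers`, the server URLs) is an illustrative assumption, not the documented Interactions API shape.

```python
# Hypothetical sketch of a Deep Research request combining public web
# grounding with private MCP data sources. Field names are illustrative
# guesses, NOT the documented Interactions API schema.

def build_research_request(question: str, mcp_servers: list[str]) -> dict:
    """Assemble a request payload for a hypothetical Deep Research call."""
    return {
        "agent": "deep-research",  # or "deep-research-max" for async jobs
        "input": question,
        "tools": [
            {"type": "web_search"},  # open-web grounding
            # Each MCP server is queried in place, so sensitive data
            # never leaves its source environment.
            *[{"type": "mcp", "server_url": url} for url in mcp_servers],
        ],
    }

request = build_research_request(
    "Synthesize Q1 deal-flow trends against public market data.",
    mcp_servers=[
        "https://mcp.example-fund.internal/deal-flow",   # internal database
        "https://mcp.example-terminal.com/market-data",  # third-party feed
    ],
)
print(len(request["tools"]))  # 3: web search plus two MCP servers
```

The point of the sketch is the hedge-fund scenario from the article: one request fans out to an internal deal-flow database, a financial data terminal, and the open web, and the agent synthesizes across all three.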

Google disclosed active collaborations with FactSet, S&P, and PitchBook on their MCP server designs, a direct play for Wall Street workflows. The partnerships let shared customers "integrate financial data offerings into workflows powered by Deep Research," according to Google DeepMind product managers Lukas Haas and Srinivas Tadepalli.

The second headline feature is native chart and infographic generation. Previous versions produced text-only reports requiring manual export and visualization. The new agents render HTML charts and infographics inline within reports using Google's Nano Banana image generator, which has a built-in knowledge database to accurately interpret visualization requests.

CEO Sundar Pichai announced the launch on X: "We are launching two powerful updates to Deep Research in the Gemini API, now with better quality, MCP support, and native chart/infographics generation." He noted Deep Research Max achieved 93.3% on DeepSearchQA and 54.6% on Humanity's Last Exam.

Google compared Gemini 3.1 Pro against its predecessor using OpenAI's BrowseComp benchmark, which comprises more than 1,000 tasks measuring LLMs' ability to perform online research. Gemini 3.1 Pro scored 85.9, more than 25 points higher than Gemini 3 Pro.

Before generating reports, both agents display an overview of their planned approach. Users can edit the plan to prioritize specific sources such as scientific databases or internal repositories. The system also accepts multimodal inputs including PDFs, CSVs, images, audio, and video as grounding context.
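The review-and-edit loop described above might be modeled as follows. This is a hedged sketch only: the plan structure, step names, and grounding fields are assumptions for illustration, not Google's actual schema.

```python
# Hypothetical sketch of the plan-review step: the agent proposes a
# research plan, the user edits it to prioritize sources, and multimodal
# grounding context is attached before execution. All structures here
# are illustrative assumptions, not Google's schema.

proposed_plan = {
    "steps": [
        {"action": "search", "sources": ["web"]},
        {"action": "search", "sources": ["scientific_databases"]},
        {"action": "synthesize_report"},
    ],
    "grounding": [],  # multimodal context attached below
}

# User edit: prioritize an internal repository ahead of the open web.
proposed_plan["steps"].insert(
    0, {"action": "search", "sources": ["internal_repository"]}
)

# Attach multimodal grounding inputs (the article lists PDFs, CSVs,
# images, audio, and video as accepted context).
proposed_plan["grounding"] = [
    {"mime_type": "application/pdf", "uri": "reports/q1_filing.pdf"},
    {"mime_type": "text/csv", "uri": "data/holdings.csv"},
]

print(proposed_plan["steps"][0]["sources"])  # ['internal_repository']
```

The design intent, per the article, is that the user steers source priority before any compute is spent, rather than correcting a finished report after the fact.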

Deep Research is positioned as cost-efficient: lower latency than its December predecessor at higher quality levels. Deep Research Max trades speed for comprehensiveness by investing more hardware resources in report generation. The blog post from Google DeepMind notes that developers building with these agents tap into "the same autonomous research infrastructure that powers research capabilities within some of Google's most popular products like Gemini App, NotebookLM, Google Search and Google Finance."

Rollout to Google Cloud is planned for a later date.
