This week in AI updates: Claude Sonnet 4.6, Gemini 3.1 Pro, and more (February 20, 2026)


Anthropic releases Claude Sonnet 4.6

Claude Sonnet 4.6 features improved skills in coding, computer use, long-context reasoning, agent planning, knowledge work, and design.

It’s now the default model in claude.ai and Claude Cowork, has a 1M-token context window (beta), and is priced the same as Sonnet 4.5, at $3 per million input tokens and $15 per million output tokens.
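At the listed rates, per-request cost is simple arithmetic. A minimal sketch (the rates come from the announcement; the function and example token counts are illustrative, not an Anthropic API):

```python
# Rough cost estimate for a Sonnet 4.6 call at the listed rates:
# $3 per million input tokens, $15 per million output tokens.

INPUT_RATE = 3.00 / 1_000_000    # USD per input token
OUTPUT_RATE = 15.00 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single API call."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 10k-token prompt with a 2k-token response:
print(round(estimate_cost(10_000, 2_000), 4))  # 0.06
```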

“Performance that would have previously required reaching for an Opus-class model—including on real-world, economically valuable office tasks—is now available with Sonnet 4.6. The model also shows a meaningful improvement in computer use skills compared to prior Sonnet models,” Anthropic wrote in a post.

Gemini 3.1 Pro now available in preview

Gemini 3.1 Pro is now available for developers in the Gemini API in Google AI Studio, Gemini CLI, Google Antigravity, and Android Studio. It can also be accessed in Vertex AI, Gemini Enterprise, the Gemini app, and NotebookLM.

“Building on the Gemini 3 series, 3.1 Pro represents a step forward in core reasoning. 3.1 Pro is a smarter, more capable baseline for complex problem-solving. This is reflected in our progress on rigorous benchmarks. On ARC-AGI-2, a benchmark that evaluates a model’s ability to solve entirely new logic patterns, 3.1 Pro achieved a verified score of 77.1%. That is more than double the reasoning performance of 3 Pro,” Google wrote in a post.

OpenAI adds Lockdown Mode, Elevated Risk labels to ChatGPT

These new features are designed to reduce the risk of prompt injection attacks.

Lockdown Mode restricts how ChatGPT is able to interact with external systems, reducing the chance of data exfiltration from a prompt injection attack. The new Elevated Risk labels will be displayed on certain products to inform users that interacting with a particular feature may introduce additional risk. For example, developers can grant Codex network access so that it can do things like look up documentation online, but this extra access can also be risky. For now, Elevated Risk labels will be displayed in ChatGPT, ChatGPT Atlas, and Codex.
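Conceptually, a lockdown mode like this is an allow/deny gate on an agent’s outbound interactions. A minimal sketch of the idea, not OpenAI’s implementation (the `lockdown_enabled` flag and `ALLOWED_HOSTS` allowlist are hypothetical):

```python
from urllib.parse import urlparse

# Hypothetical illustration of a lockdown-style gate: when lockdown is
# enabled, an agent's outbound requests are refused unless the host is
# on a small allowlist, limiting what injected instructions can reach.
ALLOWED_HOSTS = {"docs.python.org"}  # e.g. documentation lookups only

def outbound_permitted(url: str, lockdown_enabled: bool) -> bool:
    if not lockdown_enabled:
        return True                       # normal mode: no extra gate
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS          # lockdown: allowlist only

print(outbound_permitted("https://docs.python.org/3/", True))   # True
print(outbound_permitted("https://evil.example/exfil", True))   # False
```

The trade-off mirrors the Codex example above: the allowlist is what makes extra network access useful without leaving exfiltration wide open.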

Microsoft creates a set of pre-built agents for Visual Studio

The pre-built agents include Debugger, which uses call stacks, variable state, and diagnostic tools to work through errors; Profiler, which identifies bottlenecks and suggests optimizations; Test, which generates unit tests; and Modernize, which executes framework and dependency upgrades.

“Each preset agent is designed around a specific developer workflow and integrates with Visual Studio’s native tooling in ways that a generic assistant can’t,” Microsoft wrote in a blog post.

Agents can be accessed through the chat panel by using the agent picker or “@”.

GraphRAG enables more context-aware and verifiable responses from LLMs

Graphwise’s new GraphRAG offering acts as a semantic layer on top of knowledge graphs that LLMs can utilize to provide context-rich and verifiable answers.

According to the company, a typical RAG implementation flattens data into chunks. With that approach, it can find similar phrases, but it isn’t able to understand the complex relationships, hierarchies, or logic connecting enterprise data. On top of that, it is often difficult to see how an LLM arrived at its answer and what sources it used.

Graphwise believes that GraphRAG solves these issues by providing a pipeline where every step can be inspected and answers are backed by documents and graph entities.

It leverages several different search approaches, including retrieval from a knowledge graph, vector search in a specified vector store, and full-text search to enable keyword-driven discovery. It uses a knowledge-model-driven input processing approach to understand the user’s intent, allowing it to enrich concepts using the company’s taxonomy or ontology, expand queries using related entities and terms, and build a graph representation of the question.
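The hybrid-retrieval idea described above can be sketched in a few lines: gather candidates from a knowledge graph, a vector store, and a full-text index, then rank by how many sources agree. Everything here is a hypothetical stand-in for illustration, not Graphwise’s API or data:

```python
# Minimal sketch of hybrid retrieval: merge candidates from three
# retrieval sources and rank documents by cross-source agreement.
# All function bodies are hard-coded stand-ins for the real backends.

from collections import Counter

def graph_lookup(query: str) -> list[str]:
    # would traverse a knowledge graph for entities related to the query
    return ["doc-1", "doc-3"]

def vector_search(query: str) -> list[str]:
    # would embed the query and return nearest-neighbor chunks
    return ["doc-2", "doc-3"]

def fulltext_search(query: str) -> list[str]:
    # would run keyword search over the document index
    return ["doc-3", "doc-4"]

def hybrid_retrieve(query: str, top_k: int = 3) -> list[str]:
    votes = Counter()
    for retrieve in (graph_lookup, vector_search, fulltext_search):
        for doc_id in retrieve(query):
            votes[doc_id] += 1            # one vote per retrieval source
    return [doc for doc, _ in votes.most_common(top_k)]

print(hybrid_retrieve("quarterly revenue by region"))
# doc-3 is returned by all three sources, so it ranks first
```

A real pipeline would weight the sources and attach provenance (which graph entities and documents produced each hit), which is what makes the answers inspectable.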

Checkmarx enhances IDE-native agentic application security in Kiro

Agentic AI security provider Checkmarx announced an integration with the AWS Kiro IDE to enable developers working in that platform to identify and deal with security issues as code is written, the company said.

The integration puts Checkmarx Developer Assist directly into Kiro, so developers don’t have to leave the IDE to analyze their code for security issues.

Once developers activate Developer Assist within Kiro and it’s authenticated, Checkmarx said the tool will analyze source code and dependencies in the active workspace. Further, it said the tool will automatically surface security findings in the IDE, along with contextual information that helps developers fix security issues early in the development cycle. That data can be viewed in the Checkmarx One platform, providing stakeholders with a view of project risks.

Quest Trusted Data Management Platform makes it easier for organizations to create reusable data products

The Quest Trusted Data Management Platform unifies data modeling, data cataloging, data governance, data quality, and a data marketplace to enable organizations to deliver AI-ready data throughout their enterprise.

“Building trusted AI-ready data and reusable data products can take up to six months, but your business can’t afford to wait, so teams skip the metadata, bypass governance workflows, and ignore data quality, and every department ends up with its own version of a data product. That results in fragmented, siloed data that isn’t trustworthy,” Quest Software explained in a video.

One of the key capabilities of the platform is the Automated Data Product Factory, which uses generative AI to create data products from natural language prompts, reducing data product design cycles, lowering delivery costs, and enabling business users to create their own data products.
