MacMusic  |  PcMusic  |  440 Software  |  440 Forums  |  440TV  |  Zicos
grok
Recherche

Grok 3 gets an API — but will enterprises trust it?

vendredi 11 avril 2025, 10:51 , par InfoWorld
Once just a chatbot, xAI’s Grok 3 large language model family is now available in a beta version via an API, enabling developers to integrate Grok 3 into custom applications.

The company is offering two new LLMS, one with deep domain knowledge in finance, healthcare, law, and science, and a lightweight model without domain knowledge but with the ability to show how it thinks about its answers. A faster version of each model is available for an additional fee.

Like all AI models, their adoption is tempered by security concerns such as their vulnerability to adversarial inputs.

The API supports multimodal capabilities, including image analysis, and aligns with developer-friendly standards similar to OpenAI’s and Anthropic’s frameworks.

The context window — how much information the model can process at once — for all versions of the Grok 3 API is capped at 131,072 tokens.

Pricing is $3 per million input tokens and $15 per million output tokens for grok-3-beta, the model with deep domain knowledge, or $5 and $25, respectively, for its faster sibling. For grok-3-mini-beta, the model with reasoning capabilities, pricing is $0.30 per million input tokens and $0.50 per million output tokens (or $0.60 and $4, respectively, for the faster version).

In comparison, the charge for the older grok-2 API is $2 per million input tokens and $10 per million output tokens.

Security questions

However, as enterprises consider adopting this technology, cybersecurity leaders are raising urgent questions about its risks and readiness for business use. The Grok 3 API arrives with bold promises like advanced reasoning capabilities, real-time web search through DeepSearch, and multimodal processing. When Musk first launched Grok he wanted to position it as the “anti-woke” alternative to more filtered AI systems, claiming it offered greater transparency and less restrictive responses.

However, this approach is causing concerns among CISOs evaluating the technology.

“Before an AI model like this is approved for use, a rigorous vetting process would be essential,” said Dina Saada, cybersecurity analyst and member of Women in Cybersecurity Middle East (WISCME). “From an intelligence standpoint, this would involve multiple layers of testing such as code reviews for vulnerabilities, penetration testing, behavioral analysis under stress conditions, and compliance checks against security standards.” 

“To earn trust, xAI must show two things: first, transparency and second, resilience,” Saada added.

Musk’s team at xAI faces an important task in the coming months. While the Grok 3 API showcases promising capabilities, it presents an opportunity to assure enterprises that xAI can meet their expectations for model integrity and reliability.

Enterprise adoption will depend on it demonstrating robust model integrity and reliability alongside technical performance. While developers have already tested Grok 3’s capabilities for weeks through existing chat interfaces, the API’s scalability and adherence to enterprise-grade security protocols will determine its viability for large organizations.
https://www.infoworld.com/article/3960216/grok-3-gets-an-api-but-will-enterprises-trust-it.html

Voir aussi

News copyright owned by their original publishers | Copyright © 2004 - 2025 Zicos / 440Network
Date Actuelle
dim. 13 avril - 08:41 CEST