After introducing Nova Forge, a service for training custom Nova AI models, Amazon Web Services (AWS) has unveiled more tools that let enterprise customers build their own cutting-edge AI models.
At its re:Invent conference on Wednesday, AWS announced new capabilities for Amazon Bedrock and Amazon SageMaker AI that aim to make it easier for developers to build and fine-tune their own large language models (LLMs).
The cloud giant is adding serverless model customization to SageMaker, so developers don't have to worry about compute resources or infrastructure when they start building a model, Ankur Mehrotra, AWS's general manager of AI platforms, told TechCrunch in an interview.
Developers can use these serverless model-building features through either a guided, click-based interface or an agentic mode, currently in preview, that lets them prompt SageMaker in natural language.
"If a healthcare customer wants a model that better understands certain medical terms, they can tell SageMaker AI, provided they have labeled data, to pick the right method, and SageMaker will then go ahead and fine-tune the model," Mehrotra said.
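To make the "labeled data" requirement concrete: supervised fine-tuning datasets are commonly structured as prompt/completion pairs, often serialized as JSONL. The sketch below is purely illustrative; the field names and medical examples are assumptions, not an actual SageMaker schema.

```python
import json

# Hypothetical labeled data for fine-tuning a model on medical
# terminology: each record pairs a prompt with the desired completion.
# Field names ("prompt", "completion") are illustrative, not an
# official SageMaker format.
examples = [
    {"prompt": "Define 'myocardial infarction' in plain language.",
     "completion": "A heart attack: blood flow to part of the heart muscle is blocked."},
    {"prompt": "Define 'nephropathy' in plain language.",
     "completion": "Disease or damage affecting the kidneys."},
]

# Serialize one record per line (JSONL), a common interchange format
# for fine-tuning datasets.
jsonl = "\n".join(json.dumps(record) for record in examples)
print(jsonl)
```

A service like the one described would take a file of this shape, choose a customization method, and run the fine-tuning job on the developer's behalf.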
The customization feature supports Amazon's own Nova models as well as select open-weight models (those with publicly available weights), such as DeepSeek and Meta's Llama.
AWS is also bringing Reinforcement Fine-Tuning to Bedrock, which lets developers pick a reward function or a predefined workflow, after which Bedrock automates the rest of the model customization process.
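In reinforcement fine-tuning, the developer supplies a reward function that scores the model's outputs, and the trainer optimizes the model toward higher scores. The toy function below sketches the idea only; it is not Bedrock's actual interface, and the scoring criteria are invented for illustration.

```python
# Toy reward function for reinforcement fine-tuning: score a model
# completion so the trainer can prefer higher-scoring behavior.
# The criteria (brevity, no boilerplate) are illustrative assumptions.
def reward(prompt: str, completion: str) -> float:
    """Return a score in [0, 1] for a completion given its prompt."""
    score = 1.0
    if len(completion.split()) > 50:      # penalize long-winded answers
        score -= 0.5
    if "as an ai" in completion.lower():  # penalize filler boilerplate
        score -= 0.5
    return max(score, 0.0)

print(reward("What is 2+2?", "4"))  # concise, no boilerplate -> 1.0
```

A platform automating this workflow would repeatedly sample completions, score them with a function like this (or a predefined one), and update the model accordingly.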
Building and customizing frontier LLMs, the most advanced AI models, is clearly a major theme for AWS at this year's conference.
During AWS CEO Matt Garman's keynote on Tuesday, the company unveiled Nova Forge, a service that builds custom Nova models for enterprise customers for $100,000 a year.
"A lot of our customers ask, 'If competitors use the same model, how do I differentiate?' and 'How do I build something custom for my brand, my data, and my specific use cases that gives me a competitive edge?'" Mehrotra said. "We've found that the answer lies in the ability to build customized models."
AWS's own AI models have yet to see widespread adoption (a July survey by Menlo Ventures found that enterprises favored Anthropic, OpenAI, and Gemini by a wide margin), but adding customization and fine-tuning options for these LLMs could give AWS a meaningful competitive edge.
Stay up to date with all of TechCrunch's coverage of the annual enterprise tech conference here, and catch up on any announcements you may have missed here.