
Funding & Business

Peters: 20% Tariffs Could Be 'Problematic' for Markets

JPMorgan Private Bank’s Grace Peters says markets are looking at effective “mid-teens” tariff rates, which “major economies can stomach and the markets can digest.” She adds that should tariffs move out of that range and reach 20% or 25% effective rates, it would “prove more problematic.” Peters speaks on Bloomberg Television. (Source: Bloomberg)





Bloomberg Markets 7/10/2025

“Bloomberg Markets” follows the market moves across every global asset class and discusses the biggest issues for Wall Street. Today’s guests: former US Council of Economic Advisers Chair Jared Bernstein; BNP Paribas Markets360’s Chief Economist for Latin America; and Bloomberg’s Danielle Moran, Vanessa Dezem, George Ferguson and Ed Ludlow. (Source: Bloomberg)





Hong Kong Defends FX Peg for Fourth Time in Two Weeks

Hong Kong authorities intervened for the fourth time in two weeks to prevent the city’s currency from weakening beyond its official trading band, after previous efforts failed to drain enough liquidity to push up funding costs.





AWS doubles down on infrastructure as strategy in the AI race with SageMaker upgrades



AWS seeks to extend its market position with updates to SageMaker, its machine learning and AI model training and inference platform, adding new observability capabilities, connected coding environments and GPU cluster performance management. 

However, AWS continues to face competition from Google and Microsoft, which also offer many features that help accelerate AI training and inference.  

SageMaker, which transformed into a unified hub for integrating data sources and accessing machine learning tools in 2024, will add features that provide insight into why model performance slows and offer AWS customers more control over the amount of compute allocated for model development.

Other new features include connecting local integrated development environments (IDEs) to SageMaker, so locally written AI projects can be deployed on the platform. 

SageMaker General Manager Ankur Mehrotra told VentureBeat that many of these new updates originated from customers themselves. 

“One challenge that we’ve seen our customers face while developing Gen AI models is that when something goes wrong or when something is not working as per the expectation, it’s really hard to find what’s going on in that layer of the stack,” Mehrotra said.

SageMaker HyperPod observability enables engineers to examine the various layers of the stack, such as the compute layer or networking layer. If anything goes wrong or models become slower, SageMaker can alert them and publish metrics on a dashboard.
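The alerting pattern described here — watching per-layer metrics and flagging anomalies on a dashboard — can be illustrated with a toy sketch. Note this is a hypothetical illustration of the concept, not SageMaker's actual API; the metric names and thresholds below are invented:

```python
# Hypothetical sketch of threshold-based metric alerting, loosely
# modeled on the observability pattern described above. Not AWS code.

def check_metrics(metrics, thresholds):
    """Return the names of metrics that breach their alert thresholds."""
    alerts = []
    for name, value in metrics.items():
        limit = thresholds.get(name)
        if limit is not None and value > limit:
            alerts.append(name)
    return alerts

# Example: a GPU running hot (as in the temperature-fluctuation
# incident Mehrotra describes below) and a healthy network layer.
sample = {"gpu_temp_c": 92.0, "network_latency_ms": 1.2}
limits = {"gpu_temp_c": 85.0, "network_latency_ms": 5.0}
print(check_metrics(sample, limits))  # ['gpu_temp_c']
```

A real observability stack would stream these metrics continuously and publish breaches to a dashboard rather than returning a list.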

Mehrotra pointed to a real issue his own team faced while training new models, where training code began stressing GPUs, causing temperature fluctuations. He said that without the latest tools, developers would have taken weeks to identify the source of the issue and then fix it. 

Connected IDEs

SageMaker already offered two ways for AI developers to train and run models. It provided fully managed IDEs, such as JupyterLab or Code Editor, that run training code directly on SageMaker. Understanding that other engineers prefer to use their local IDEs, including all the extensions they have installed, AWS allowed them to run their code on their own machines as well.

However, Mehrotra pointed out that locally written code could only run locally, which made scaling a significant challenge for developers.

AWS added a new secure remote execution capability to allow customers to continue working in their preferred IDE — either local or managed — and connect it to SageMaker.

“So this capability now gives them the best of both worlds where if they want, they can develop locally on a local IDE, but then in terms of actual task execution, they can benefit from the scalability of SageMaker,” he said. 
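The "develop locally, execute remotely" control flow can be sketched with a toy decorator. This is a hypothetical illustration of the pattern only — the decorator below mimics the dispatch step locally and is not SageMaker's remote execution API:

```python
# Hypothetical illustration of the "write locally, run remotely"
# pattern described above. Toy code only, not AWS's API.

import functools

def remote(target="local"):
    """Toy stand-in for a remote-execution decorator."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            # A real implementation would serialize fn and its inputs,
            # submit them to a managed cluster, and stream results back.
            print(f"dispatching {fn.__name__} to {target}")
            return fn(*args, **kwargs)
        return inner
    return wrap

@remote(target="managed-cluster")
def train_step(lr):
    # Stand-in for training code written in a local IDE.
    return {"lr": lr, "loss": 0.42}

print(train_step(3e-4)["loss"])  # 0.42
```

The appeal of the pattern is that the function body stays unchanged: only the decorator (and its cluster configuration) decides where the code actually runs.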

More flexibility in compute

AWS launched SageMaker HyperPod in December 2023 as a means to help customers manage clusters of servers for training models. Similar to providers like CoreWeave, HyperPod enables SageMaker customers to direct unused compute power to their preferred location. HyperPod knows when to schedule GPU usage based on demand patterns and allows organizations to balance their resources and costs effectively. 

However, AWS said many customers wanted the same service for inference. Many inference tasks occur during the day when people use models and applications, while training is usually scheduled during off-peak hours. 

Mehrotra noted that even in the world of inference, developers can prioritize the inference tasks that HyperPod should focus on.
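The balancing idea — inference gets priority during peak hours while training fills off-peak capacity — can be sketched as a toy scheduler. The policy below is hypothetical and invented for illustration; it is not HyperPod's actual scheduling logic:

```python
# Hypothetical toy scheduler illustrating the peak/off-peak balancing
# idea described above. Not HyperPod's real policy.

def pick_task(queue, hour):
    """Prefer inference tasks during daytime peak (8:00-20:00);
    off-peak, run whatever is queued first, training included."""
    peak = 8 <= hour < 20
    if peak:
        for task in queue:
            if task["kind"] == "inference":
                return task
    return queue[0] if queue else None

jobs = [{"kind": "training", "id": 1}, {"kind": "inference", "id": 2}]
print(pick_task(jobs, hour=14)["id"])  # 2: inference wins during peak
print(pick_task(jobs, hour=3)["id"])   # 1: off-peak, first in queue runs
```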

Laurent Sifre, co-founder and CTO at AI agent company H AI, said in an AWS blog post that the company used SageMaker HyperPod when building out its agentic platform.

“This seamless transition from training to inference streamlined our workflow, reduced time to production, and delivered consistent performance in live environments,” Sifre said. 

AWS and the competition

Amazon may not offer the splashiest foundation models like its cloud rivals, Google and Microsoft. Still, AWS has focused on providing the infrastructure backbone for enterprises to build AI models, applications, or agents.

In addition to SageMaker, AWS also offers Bedrock, a platform specifically designed for building applications and agents. 

SageMaker has been around for years, initially serving as a means to connect disparate machine learning tools to data lakes. As the generative AI boom began, AI engineers began using SageMaker to help train language models. However, Microsoft is pushing hard for its Fabric ecosystem, with 70% of Fortune 500 companies adopting it, to become a leader in the data and AI acceleration space. Google, through Vertex AI, has quietly made inroads in enterprise AI adoption.

AWS, of course, has the advantage of being the most widely used cloud provider. Any updates that would make its many AI infrastructure platforms easier to use will always be a benefit. 


