A majority of companies plan to increase their investments in technology infrastructure this year to support artificial intelligence (AI) and machine learning (ML), and a hybrid, multi-cloud approach is becoming the default technology strategy for many organizations, according to Verta Insights' latest research study.
The 2023 AI/ML Investment Priorities study analyzed input from 460+ AI/ML practitioners on the technology infrastructure that their organizations are using to support AI and ML, including their spending plans and their cloud service providers (CSPs) of choice.
The study found that a majority of research participants reported an increase in budgets for AI/ML infrastructure in 2023 compared with the prior year.
According to the study, nearly half of the participants (48%) described their organization's approach as "hybrid," meaning a mix of cloud and on-premises technologies. Meanwhile, 32% of participants described their organization as "cloud-only," while only 7% reported using an on-prem-only strategy. The remaining 8% reported being on-premises but moving to the cloud.
When it comes to cloud service providers, Amazon Web Services (AWS) dominated as organizations’ primary cloud service provider, with 50% of participants citing it as their primary CSP. Meanwhile, 25% cited Microsoft Azure, 13% cited Google Cloud Platform (GCP), and 5% cited Oracle as their primary provider.
However, the study also found that half of the organizations take a multi-cloud approach, using more than one CSP.
We see companies increasingly taking a hybrid, multi-cloud approach to their technology infrastructure for a variety of reasons. An organization could be moving its on-premises infrastructure to the cloud; accommodating existing CSP strategies following acquisitions; balancing the cost and flexibility benefits of the cloud against the security that on-premises environments offer for compliance or regulatory reasons; or a mix of the above factors.
A hybrid, multi-cloud approach can offer advantages in terms of cost and flexibility, but it can also complicate the operational side of machine learning. For instance, managing data stored across different on-premises and cloud environments can be challenging, and integrating different CSPs, each with its own APIs and data formats, can be complex. Additionally, managing security and compliance across multiple environments can be more difficult.
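To make the integration challenge concrete, here is a minimal sketch of the kind of storage abstraction an ML platform might use to read and write data across clouds. The class and function names are hypothetical, and the example assumes the boto3 and google-cloud-storage client libraries with credentials already configured in the environment; a real implementation would also need to handle retries, access control, and data-format differences.

```python
# Hypothetical sketch: a thin abstraction over object storage in two clouds.
# Assumes boto3 (AWS) and google-cloud-storage (GCP) are installed and
# credentials are available via the usual environment configuration.
from abc import ABC, abstractmethod

import boto3
from google.cloud import storage as gcs


class ObjectStore(ABC):
    """Common interface so pipeline code doesn't care where the bytes live."""

    @abstractmethod
    def read(self, bucket: str, key: str) -> bytes: ...

    @abstractmethod
    def write(self, bucket: str, key: str, data: bytes) -> None: ...


class S3Store(ObjectStore):
    def __init__(self) -> None:
        self._client = boto3.client("s3")

    def read(self, bucket: str, key: str) -> bytes:
        return self._client.get_object(Bucket=bucket, Key=key)["Body"].read()

    def write(self, bucket: str, key: str, data: bytes) -> None:
        self._client.put_object(Bucket=bucket, Key=key, Body=data)


class GCSStore(ObjectStore):
    def __init__(self) -> None:
        self._client = gcs.Client()

    def read(self, bucket: str, key: str) -> bytes:
        return self._client.bucket(bucket).blob(key).download_as_bytes()

    def write(self, bucket: str, key: str, data: bytes) -> None:
        self._client.bucket(bucket).blob(key).upload_from_string(data)


def get_store(provider: str) -> ObjectStore:
    """Pick an implementation based on where a given dataset lives."""
    return {"aws": S3Store, "gcp": GCSStore}[provider]()
```

The point of the sketch is the seam it creates: pipeline code depends on the ObjectStore interface rather than on any one provider's SDK, so a dataset can move between environments without rewriting training or serving steps.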
To address these challenges, we see many organizations adopting an operational AI platform that can span their on-premises and multi-cloud environments. Such a platform helps them simplify and accelerate their ML pipelines and adapt as their technology infrastructure evolves.