As businesses increasingly integrate artificial intelligence into their workflows and products, there is growing demand for tools and platforms that make it easier to create, test and deploy machine learning models. This category of platforms — popularly known as machine learning operations, or MLOps — is already a little crowded, with startups like InfuseAI, Comet, Arrikto, Arize, Galileo, Tecton and Diveplane, not to mention offerings from incumbents like Google Cloud, Azure and AWS.
Now, one South Korean MLOps platform called VESSL AI is trying to carve out a niche for itself by focusing on optimizing GPU expenses through a hybrid infrastructure that combines on-premises and cloud environments. The startup has raised $12 million in a Series A funding round to speed up development of its infrastructure, aimed at companies that want to build custom large language models (LLMs) and vertical AI agents.
The company already has 50 enterprise customers, including some big names: Hyundai; LIG Nex1, a South Korean aerospace and weapons manufacturer; TMAP Mobility, a mobility-as-a-service joint venture between Uber and Korean telco SK Telecom; and tech startups Yanolja, Upstage, ScatterLab and Wrtn.ai. The company has also formed strategic partnerships with Oracle and Google Cloud in the U.S., and it has more than 2,000 users, co-founder and CEO Jaeman Kuss An told TechCrunch.
An founded the startup in 2020 with Jihwan Jay Chun (CTO), Intae Ryoo (CPO) and Yongseon Sean Lee (tech lead) — the founders previously had stints at Google, mobile game company PUBG and several AI startups — to solve a particular pain point he had faced while developing machine learning models at a previous medical tech startup: the immense amount of work involved in developing and using machine learning tools.
The team discovered that the process could be made more efficient — and, notably, cheaper — by leveraging a hybrid infrastructure model. The company’s MLOps platform uses a multi-cloud strategy and spot instances to cut GPU expenses by as much as 80%, An said, adding that this approach also addresses GPU shortages and streamlines the training, deployment and operation of AI models, including large-scale LLMs.
“VESSL AI’s multi-cloud strategy enables the use of GPUs from a variety of cloud service providers like AWS, Google Cloud and Lambda,” An said. “This system automatically selects the most cost-effective and efficient resources, significantly reducing customer costs.”
VESSL’s platform offers four main features: VESSL Run, which automates AI model training; VESSL Serve, which supports real-time deployment; VESSL Pipelines, which integrates model training and data preprocessing to streamline workflows; and VESSL Cluster, which optimizes GPU resource usage in a cluster environment.
Investors in the Series A round, which brings the company’s total raised to $16.8 million, include A Ventures, Ubiquitous Investment, Mirae Asset Securities, Sirius Investment, SJ Investment Partners, Woori Venture Investment and Shinhan Venture Investment. The startup has 35 staff across South Korea and its San Mateo office in the U.S.
Enterprise companies find MLOps critical for reliability and performance