aVenture is in Alpha: aVenture recently launched early public access to our research product. It's intended to illustrate capabilities and gather feedback from users. While in Alpha, you should expect the research data to be limited and may not yet meet our exacting standards. We've made the decision to temporarily present this information to showcase the product's potential, but you should not yet rely upon it for your investment decisions.
From Startups | TechCrunch
By Paul Sawers
April 24, 2024
A French startup has raised a hefty seed investment to “rearchitect compute infrastructure” for developers wanting to build and train AI applications more efficiently.
FlexAI, as the company is called, has been operating in stealth since October 2023, but the Paris-based company is formally launching Wednesday with €28.5 million ($30 million) in funding, while teasing its first product: an on-demand cloud service for AI training.
This is a chunky bit of change for a seed round, which normally means substantial founder pedigree — and that is the case here. FlexAI co-founder and CEO Brijesh Tripathi was previously a senior design engineer at GPU giant and now AI darling Nvidia, before landing in various senior engineering and architecting roles at Apple; Tesla (working directly under Elon Musk); Zoox (before Amazon acquired the autonomous driving startup); and, most recently, Tripathi was VP of Intel’s AI and super compute platform offshoot, AXG.
FlexAI co-founder and CTO Dali Kilani has an impressive CV, too, serving in various technical roles at companies including Nvidia and Zynga, while most recently filling the CTO role at French startup Lifen, which develops digital infrastructure for the healthcare industry.
The seed round was led by Alpha Intelligence Capital (AIC), Elaia Partners and Heartcore Capital, with participation from Frst Capital, Motier Ventures, Partech and InstaDeep CEO Karim Beguir.
To grasp what Tripathi and Kilani are attempting with FlexAI, it’s first worth understanding what developers and AI practitioners are up against in terms of accessing “compute”; this refers to the processing power, infrastructure and resources needed to carry out computational tasks such as processing data, running algorithms, and executing machine learning models.
“Using any infrastructure in the AI space is complex; it’s not for the faint-of-heart, and it’s not for the inexperienced,” Tripathi told TechCrunch. “It requires you to know too much about how to build infrastructure before you can use it.”
By contrast, the public cloud ecosystem that has evolved these past couple of decades serves as a fine example of how an industry has emerged from developers’ need to build applications without worrying too much about the back end.
“If you are a small developer and want to write an application, you don’t need to know where it’s being run, or what the back end is — you just need to spin up an EC2 (Amazon Elastic Compute cloud) instance and you’re done,” Tripathi said. “You can’t do that with AI compute today.”
In the AI sphere, developers must figure out how many GPUs (graphics processing units) they need to interconnect over what type of network, managed through a software ecosystem that they are entirely responsible for setting up. If a GPU or network fails, or if anything in that chain goes awry, the onus is on the developer to sort it.
“We want to bring AI compute infrastructure to the same level of simplicity that the general purpose cloud has gotten to — after 20 years, yes, but there is no reason why AI compute can’t see the same benefits,” Tripathi said. “We want to get to a point where running AI workloads doesn’t require you to become data center experts.”
With the current iteration of its product going through its paces with a handful of beta customers, FlexAI will launch its first commercial product later this year. It’s basically a cloud service that connects developers to “virtual heterogeneous compute,” meaning that they can run their workloads and deploy AI models across multiple architectures, paying on a usage basis rather than renting GPUs on a dollars-per-hour basis.
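The billing distinction matters more than it might sound. A toy comparison makes the point, using made-up rates that are purely illustrative and not FlexAI's actual pricing:

```python
def hourly_rental_cost(hours_reserved: float, rate_per_hour: float) -> float:
    """Renting a GPU by the hour: you pay for the whole reservation,
    whether or not the GPU is actually busy."""
    return hours_reserved * rate_per_hour

def usage_based_cost(gpu_seconds_used: float, rate_per_gpu_second: float) -> float:
    """Usage-based billing: you pay only for compute actually consumed."""
    return gpu_seconds_used * rate_per_gpu_second

# A job that reserves a GPU for 8 hours but only computes for 90 minutes.
rental = hourly_rental_cost(hours_reserved=8, rate_per_hour=2.50)
usage = usage_based_cost(gpu_seconds_used=90 * 60,
                         rate_per_gpu_second=2.50 / 3600)
print(f"hourly rental: ${rental:.2f}, usage-based: ${usage:.2f}")
# hourly rental: $20.00, usage-based: $3.75
```

For bursty workloads like fine-tuning runs, the idle reservation time is where hourly rental costs balloon, which is the gap usage-based pricing targets.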
GPUs are vital cogs in AI development, serving to train and run large language models (LLMs), for example. Nvidia is one of the preeminent players in the GPU space, and one of the main beneficiaries of the AI revolution sparked by OpenAI and ChatGPT. In the 12 months since OpenAI launched an API for ChatGPT in March 2023, allowing developers to bake ChatGPT functionality into their own apps, Nvidia’s shares ballooned from around $500 billion to more than $2 trillion.
LLMs are now pouring out of the technology industry, with demand for GPUs skyrocketing in tandem. But GPUs are expensive to run, and renting them for smaller jobs or ad-hoc use-cases doesn’t always make sense and can be prohibitively expensive; this is why AWS has been dabbling with time-limited rentals for smaller AI projects. But renting is still renting, which is why FlexAI wants to abstract away the underlying complexities and let customers access AI compute on an as-needed basis.
FlexAI’s starting point is that most developers don’t much care whose GPUs or chips they use, whether it’s Nvidia, AMD, Intel, Graphcore or Cerebras. Their main concern is being able to develop their AI and build applications within their budgetary constraints.
This is where FlexAI’s concept of “universal AI compute” comes in: FlexAI takes the user’s requirements and allocates them to whatever architecture makes sense for that particular job, taking care of all the necessary conversions across the different platforms, whether that’s Intel’s Gaudi infrastructure, AMD’s ROCm or Nvidia’s CUDA.
“What this means is that the developer is only focused on building, training and using models,” Tripathi said. “We take care of everything underneath. The failures, recovery, reliability, are all managed by us, and you pay for what you use.”
In many ways, FlexAI is setting out to fast-track for AI what has already been happening in the cloud, which means more than replicating the pay-per-usage model: It means the ability to go “multicloud” by leaning on the different benefits of different GPU and chip infrastructures.
FlexAI will channel a customer’s specific workload depending on what their priorities are. If a company has limited budget for training and fine-tuning their AI models, they can set that within the FlexAI platform to get the maximum amount of compute bang for their buck. This might mean going through Intel for cheaper (but slower) compute, but if a developer has a small run that requires the fastest possible output, then it can be channeled through Nvidia instead.
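FlexAI has not published how its scheduler works, but the routing idea described above can be sketched in a few lines. Everything here is hypothetical: the backend names, the relative cost and throughput figures, and the two-priority model are illustrative assumptions, not FlexAI's implementation.

```python
from dataclasses import dataclass

@dataclass
class Backend:
    """A compute backend with relative cost and speed figures (made up)."""
    name: str
    cost_per_unit: float  # relative cost of one unit of work
    throughput: float     # relative units of work per hour

# Hypothetical catalogue of available architectures.
BACKENDS = [
    Backend("intel-gaudi", cost_per_unit=1.0, throughput=1.0),
    Backend("amd-rocm",    cost_per_unit=1.3, throughput=1.4),
    Backend("nvidia-cuda", cost_per_unit=2.0, throughput=2.5),
]

def route(priority: str) -> Backend:
    """Pick a backend from the user's stated priority:
    'budget' -> cheapest per unit of work, 'speed' -> highest throughput."""
    if priority == "budget":
        return min(BACKENDS, key=lambda b: b.cost_per_unit)
    if priority == "speed":
        return max(BACKENDS, key=lambda b: b.throughput)
    raise ValueError(f"unknown priority: {priority!r}")

print(route("budget").name)  # intel-gaudi
print(route("speed").name)   # nvidia-cuda
```

A real scheduler would also weigh hardware availability, model-framework compatibility and per-job deadlines, but the budget-versus-speed trade-off is the core of what the article describes.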
Under the hood, FlexAI is basically an “aggregator of demand,” renting the hardware itself through traditional means and, using its “strong connections” with the folks at Intel and AMD, securing preferential prices that it spreads across its own customer base. This doesn’t necessarily mean side-stepping the kingpin Nvidia, but with Intel and AMD fighting for the GPU scraps left in Nvidia’s wake, they have a huge incentive to play ball with aggregators such as FlexAI.
“If I can make it work for customers and bring tens to hundreds of customers onto their infrastructure, they [Intel and AMD] will be very happy,” Tripathi said.
This sits in contrast to similar GPU cloud players in the space such as the well-funded CoreWeave and Lambda Labs, which are focused squarely on Nvidia hardware.
“I want to get AI compute to the point where the current general purpose cloud computing is,” Tripathi noted. “You can’t do multicloud on AI. You have to select specific hardware, number of GPUs, infrastructure, connectivity, and then maintain it yourself. Today, that’s the only way to actually get AI compute.”
When asked who the exact launch partners are, Tripathi said that he was unable to name all of them due to a lack of “formal commitments” from some of them.
“Intel is a strong partner, they are definitely providing infrastructure, and AMD is a partner that’s providing infrastructure,” he said. “But there is a second layer of partnerships that are happening with Nvidia and a couple of other silicon companies that we are not yet ready to share, but they are all in the mix and MOUs [memorandums of understanding] are being signed right now.”
Tripathi is more than equipped to deal with the challenges ahead, having worked in some of the world’s largest tech companies.
“I know enough about GPUs; I used to build GPUs,” Tripathi said of his seven-year stint at Nvidia, ending in 2007 when he jumped ship for Apple as it was launching the first iPhone. “At Apple, I became focused on solving real customer problems. I was there when Apple started building their first SoCs [system on chips] for phones.”
Tripathi also spent two years at Tesla from 2016 to 2018 as hardware engineering lead, where he ended up working directly under Elon Musk for his last six months after two people above him abruptly left the company.
“At Tesla, the thing that I learned and I’m taking into my startup is that there are no constraints other than science and physics,” he said. “How things are done today is not how it should be or needs to be done. You should go after what the right thing to do is from first principles, and to do that, remove every black box.”
Tripathi was involved in Tesla’s transition to making its own chips, a move that has since been emulated by GM and Hyundai, among other automakers.
“One of the first things I did at Tesla was to figure out how many microcontrollers there are in a car, and to do that, we literally had to sort through a bunch of those big black boxes with metal shielding and casing around it, to find these really tiny small microcontrollers in there,” Tripathi said. “And we ended up putting that on a table, laid it out and said, ‘Elon, there are 50 microcontrollers in a car. And we pay sometimes 1,000 times margins on them because they are shielded and protected in a big metal casing.’ And he’s like, ‘let’s go make our own.’ And we did that.”
Looking further into the future, FlexAI has aspirations to build out its own infrastructure, too, including data centers. This, Tripathi said, will be funded by debt financing, building on a recent trend that has seen rivals in the space including CoreWeave and Lambda Labs use Nvidia chips as collateral to secure loans — rather than giving more equity away.
“Bankers now know how to use GPUs as collateral,” Tripathi said. “Why give away equity? Until we become a real compute provider, our company’s value is not enough to get us the hundreds of millions of dollars needed to invest in building data centers. If we did only equity, we disappear when the money is gone. But if we actually bank it on GPUs as collateral, they can take the GPUs away and put them in some other data center.”