aVenture is in Alpha: aVenture recently launched early public access to our research product. It's intended to illustrate capabilities and gather feedback from users. While in Alpha, you should expect the research data to be limited, and it may not yet meet our exacting standards. We've made the decision to temporarily present this information to showcase the product's potential, but you should not yet rely upon it for your investment decisions.
© aVenture Investment Company, 2024. All rights reserved.
44 Tehama St, San Francisco, CA 94105
aVenture Investment Company (“aVenture”) is an independent research platform providing information and analysis about startups.
Certain metrics provided by aVenture may seek to assess the risks and opportunities associated with a company, fund, or its representatives (collectively “research”). aVenture seeks to provide this information with objectivity and fairness, and with diligence about its accuracy. Nonetheless, aVenture cannot provide assurance as to the accuracy of the information provided by our research. We strongly advise those using the research platform to seek multiple, independent sources for your research when making financial decisions.
Any links provided to other websites are offered as a matter of convenience and are not intended to imply that aVenture or its authors endorse, sponsor, promote, and/or are affiliated with the owners of or participants in those sites.
The aVenture platform also provides investment listings offered by independent investment advisers in the United States. aVenture is neither a registered investment adviser nor an exempt reporting adviser under the Investment Advisers Act of 1940, and no statements made by aVenture are intended to imply any financial instruments are under the counsel or advice of aVenture or its representatives.
Funds offered on the platform are generally managed by a private investment adviser that, unless stated otherwise, claims exemption from SEC or state registration. Investment funds presented on the platform are only available to investors who meet the requirements of the offering, and solicitations are not made outside those listed jurisdictions.
Additionally, each investment offered on the platform has qualifications for eligibility, including some offered only to Qualified Clients and/or Accredited Investors. Certain funds may be available to non-Qualified or non-Accredited investors, but only those who become personally known and identifiable to aVenture Investment Company staff, who have had an opportunity to assess their financial capacity and suitability for such an investment and to discuss its risks. Funds, when offered, are only offered following a review of a Private Placement Memorandum (PPM), subscription agreement, and other disclosures.
Investments in startups, venture capital, angel investments, private equity, real estate, stocks, and similar asset classes all involve risks, including: the risk of a decline in the value of your investments, including potentially large declines (sudden and/or lasting for long periods of time), and the potential for illiquidity, where part or all of a withdrawal request may not be honored on the date requested (even when withdrawal is a feature of the fund). These risks are heightened during periods of market duress.
Diversification has the possibility of reducing the magnitude of declines (either caused by market/economic factors, or by factors related to the individual company), but does not guarantee these risks have been fully or partially alleviated. Most importantly, past results are not an assurance of future outcomes. While most of these risks are shared and similarly held by other investment asset classes, we recommend investors only consider venture capital investments as part of a broader, diversified portfolio of stocks, bonds, and immediately accessible cash reserves.
From Startups | TechCrunch
By Kyle Wiggers
February 22, 2024
Massive training data sets are the gateway to powerful AI models — but often, also those models’ downfall.
Biases emerge from prejudicial patterns concealed in large data sets, like pictures of mostly white CEOs in an image classification set. And big data sets can be messy, coming in formats incomprehensible to a model — formats containing a lot of noise and extraneous information.
In a recent Deloitte survey of companies adopting AI, 40% said data-related challenges — including thoroughly preparing and cleaning data — were among the top concerns hampering their AI initiatives. A separate poll of data scientists found that about 45% of scientists’ time is spent on data prep tasks, like “loading” and cleaning data.
Ari Morcos, who’s worked in the AI industry for nearly a decade, wants to abstract away many of the data prep processes around AI model training — and he’s founded a startup to do just that.
Morcos’ company, DatologyAI, builds tooling to automatically curate data sets like those used to train OpenAI’s ChatGPT, Google’s Gemini and other similar GenAI models. The platform can identify which data is most important depending on a model’s application (e.g. writing emails), Morcos claims, in addition to ways the data set can be augmented with additional data and how it should be batched, or divided into more manageable chunks, during model training.
“Models are what they eat — models are a reflection of the data on which they’re trained,” Morcos told TechCrunch in an email interview. “However, not all data are created equal, and some training data are vastly more useful than others. Training models on the right data in the right way can have a dramatic impact on the resulting model.”
Morcos, who has a Ph.D. in neuroscience from Harvard, spent two years at DeepMind applying neuroscience-inspired techniques to understand and improve AI models and five years at Meta’s AI lab uncovering some of the basic mechanisms underlying models’ functions. Along with his co-founders Matthew Leavitt and Bogdan Gaza, a former engineering lead at Amazon and then Twitter, Morcos launched DatologyAI with the goal of streamlining all forms of AI data set curation.
As Morcos points out, the makeup of a training data set impacts nearly every characteristic of a model trained on it — from the model’s performance on tasks to its size and the depth of its domain knowledge. More efficient data sets can cut down on training time and yield a smaller model, saving on compute costs, while data sets that include an especially diverse range of samples can handle esoteric requests more adeptly (generally speaking).
With interest in GenAI — which has a reputation for being expensive — at an all-time high, AI implementation costs are at the forefront of execs’ minds.
Many businesses are opting to fine-tune existing models (including open source models) for their purposes or opt for managed vendor services via APIs. But some — for governance and compliance reasons or otherwise — are building models on custom data from scratch, and spending tens of thousands to millions of dollars in compute in order to train and run them.
“Companies have collected treasure troves of data and want to train efficient, performant, specialized AI models that can maximize the benefit to their business,” Morcos said. “However, making effective use of these massive data sets is incredibly challenging and, if done incorrectly, leads to worse-performing models that take longer to train and [are larger] than necessary.”
DatologyAI can scale up to “petabytes” of data in any format — whether text, images, video, audio, tabular or more “exotic” modalities such as genomic and geospatial — and deploys to a customer’s infrastructure, either on-premises or via a virtual private cloud. This sets it apart from other data prep and curation tools like CleanLab, Lilac, Labelbox, YData and Galileo, Morcos claims, which tend to be more limited in the scope and types of data they can process.
DatologyAI’s also able to determine which “concepts” within a data set — for example, concepts related to U.S. history in an educational chatbot training set — are more complex and therefore require higher-quality samples, as well as which data might cause a model to behave in unintended ways.
“Solving [these problems] requires automatically identifying concepts, their complexity and how much redundancy is actually necessary,” Morcos said. “Data augmentation, often using other models or synthetic data, is incredibly powerful, but must be done in a careful, targeted fashion.”
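DatologyAI hasn’t published how it detects redundancy, but the general idea Morcos describes — finding near-duplicate examples so a data set doesn’t over-represent the same concept — can be sketched with a simple embedding-similarity filter. This is a minimal illustration, not DatologyAI’s method: the function name `dedup_by_similarity` and the 0.95 threshold are assumptions for the example.

```python
import numpy as np

def dedup_by_similarity(embeddings, threshold=0.95):
    """Greedy near-duplicate removal: keep an example only if its
    cosine similarity to every already-kept example stays below the
    threshold. O(n^2) -- fine for a sketch, not for petabytes."""
    # Normalize rows so plain dot products become cosine similarities.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / np.clip(norms, 1e-12, None)
    kept = []
    for i, vec in enumerate(unit):
        if all(vec @ unit[j] < threshold for j in kept):
            kept.append(i)
    return kept

# Toy data: two near-identical vectors and one distinct vector.
emb = np.array([[1.0, 0.0], [0.999, 0.01], [0.0, 1.0]])
print(dedup_by_similarity(emb))  # -> [0, 2]
```

Production systems work on learned embeddings and use approximate nearest-neighbor search rather than the quadratic scan above, but the trade-off is the same: how much redundancy to allow per concept.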
The question is, just how effective is DatologyAI’s technology? There’s reason to be skeptical. History has shown automated data curation doesn’t always work as intended, however sophisticated the method — or diverse the data.
LAION, a German nonprofit spearheading a number of GenAI projects, was forced to take down an algorithmically curated AI training data set after it was discovered that the set contained images of child sexual abuse. Elsewhere, models such as ChatGPT, which are trained on a mix of data sets manually and automatically filtered for toxicity, have been shown to generate toxic content given specific prompts.
There’s no getting away from manual curation, some experts would argue — at least not if one hopes to achieve strong results with an AI model. The largest vendors today, from AWS to Google to OpenAI, rely on teams of human experts and (sometimes underpaid) annotators to shape and refine their training data sets.
Morcos insists DatologyAI’s tooling isn’t meant to replace manual curation altogether but rather to offer suggestions that might not occur to data scientists, in particular suggestions tangential to the problem of trimming training data set sizes. He’s something of an authority on the subject — data set trimming while preserving model performance was the focus of an academic paper Morcos co-authored with researchers from Stanford and the University of Tübingen in 2022, which earned a best paper award at the NeurIPS machine learning conference that year.
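The core move in data pruning of the kind that paper studied is to score each example by how "easy" or "hard" it is and keep only the most informative fraction. A common proxy for difficulty is distance to a cluster prototype in embedding space. The sketch below assumes that proxy; the function name, the mean-as-prototype shortcut, and the keep fractions are illustrative, not the paper's exact procedure.

```python
import numpy as np

def prune_by_prototype_distance(embeddings, keep_frac=0.5, keep_hard=True):
    """Score each example by distance to the mean embedding (a crude
    'prototype'), then keep the hardest (farthest) or easiest (nearest)
    fraction. A stand-in for the margin/cluster scores used in the
    data-pruning literature."""
    prototype = embeddings.mean(axis=0)
    dist = np.linalg.norm(embeddings - prototype, axis=1)
    order = np.argsort(dist)           # nearest (easiest) first
    if keep_hard:
        order = order[::-1]            # farthest (hardest) first
    n_keep = max(1, int(len(embeddings) * keep_frac))
    return np.sort(order[:n_keep])    # indices of retained examples

# Toy data: two clustered points and one outlier; keeping the hardest
# third retains the outlier.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
print(prune_by_prototype_distance(emb, keep_frac=0.34))  # -> [2]
```

One finding of that line of work is that the best policy depends on how much data you have: with abundant data, keeping hard examples wins; with scarce data, keeping easy ones does — which is why a fixed heuristic applied blindly can underperform no pruning at all.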
“Identifying the right data at scale is extremely challenging and a frontier research problem,” Morcos said. “[Our approach] leads to models that train dramatically faster while simultaneously increasing performance on downstream tasks.”
DatologyAI’s tech was evidently promising enough to convince titans in tech and AI to invest in the startup’s seed round, including Google chief scientist Jeff Dean, Meta chief AI scientist Yann LeCun, Quora founder and OpenAI board member Adam D’Angelo and Geoffrey Hinton, who’s credited with developing some of the most important techniques at the heart of modern AI.
Other angel investors in DatologyAI’s $11.65 million seed, which was led by Amplify Partners with participation from Radical Ventures, Conviction Capital, Outset Capital and Quiet Capital, were Cohere co-founders Aidan Gomez and Ivan Zhang, Contextual AI founder Douwe Kiela, ex-Intel AI VP Naveen Rao and Jascha Sohl-Dickstein, one of the inventors of generative diffusion models. It’s an impressive list of AI luminaries to say the least — and suggests that there might just be something to Morcos’ claims.
“Models are only as good as the data on which they’re trained, but identifying the right training data among billions or trillions of examples is an incredibly challenging problem,” LeCun told TechCrunch in an emailed statement. “Ari and his team at DatologyAI are some of the world’s experts on this problem, and I believe the product they’re building to make high-quality data curation available to anyone who wants to train a model is vitally important to helping make AI work for everyone.”
San Francisco-based DatologyAI has ten employees at present, including the co-founders, but plans to expand to around 25 staffers by the end of the year if it reaches certain growth milestones.
I asked Morcos if the milestones were related to customer acquisition, but he declined to say — and, rather mysteriously, wouldn’t reveal the size of DatologyAI’s current client base.
View original article on techcrunch.com
© 2024 TechCrunch. All rights reserved. For personal use only.