My hypothesis for how AI will be won

Differentiable datasets and Knowledge-as-a-Service (KaaS)

Zayn Patel
5 min read · May 29, 2023

Differentiable datasets are the key trait of winners in the “AI revolution”. This trait favors incumbents over startups because incumbents already hold these datasets, which are concentrated in industries like healthcare, finance, and education, largely because privacy and policy restrictions keep them locked up. Part of their importance stems from those restrictions: the data is scarce. Without comprehensive data of their own, startups struggle to compete.

Winner 1: Incumbent-startup partnerships based on differentiable datasets

In healthcare, for example, companies like UnitedHealthcare, Cigna, and Elevance Health own datasets with patients’ year-over-year biometric data, insurance packages, prescription requests, doctor’s notes, appointment transcripts, and more. Any startup hoping to use AI to disrupt traditional healthcare would spend years acquiring enough users to match the richness of the data incumbents already have. In finance, the credit card networks Visa, Mastercard, and American Express own datasets detailing the millions of transactions that occur each day. Any startup wanting to rebuild the economic infrastructure of the US would spend years building the transaction volume needed to observe how consumers spend and how the economy is changing.

Incumbents have leverage because feature-rich, clean data is the unit that keeps AI models alive, just as oxygen is the unit that keeps humans alive. I think startups that figure out how to partner with incumbents to access these datasets will be among the winners of AI.

Incumbents are incentivized to find the scrappy engineers who start startups because money is one thing they value more than maintaining the status quo, and AI has lots of it on the table. Startups have an incentive because the market cap of industries with differentiable datasets (healthcare, education, finance) has historically been larger than that of industries without them (consumer, enterprise, logistics).

The counterargument to incumbents having leverage over startups is “data-first” startups that were founded because of differentiable datasets. 23andMe, a genetic testing company, has genome data from over 12 million customers. Pearl, an oral health company using computer vision to detect anomalies and pathogens in dental X-rays, raised $11 million to analyze dental scans with AI. Prenuvo, a company growing revenue 240% year-over-year, specializes in whole-body diagnostic imaging and has access to hundreds of patient scans. The limit on these companies is the scale of their data. Incumbents have holistic profiles, while startups have atomistic ones. In other words, incumbents have access to more data, giving them more patterns to observe and predict.

To get holistic profiles and scale their data, companies need partnerships. OpenAI sees potential in this area: it partnered with Coca-Cola after releasing GPT-4, and Scale AI partnered with Accenture. I suspect partnerships will increase as models improve and startups find good ideas.

How companies partner remains unclear, but the likely axes they’ll evaluate this decision on include:

  • Value derived: The party that gets the most value, however it’s defined (revenue share, user outcomes, company efficiency), bears the costs and liability.
  • Implementation costs: The party implementing and deploying the models captures more value, because the technical expertise and infrastructure setup required are what create it.
  • Data ownership: The party holding the data retains the company’s and patients’ rights to it. In an ideal scenario, that party asks patients for permission before sharing data with a startup, and a startup working with an incumbent must pay to access it.

Because datasets from incumbents include sensitive information about people, how models are trained will differ from the current method. Combining strategies like homomorphic encryption, federated learning, and differential privacy can solve this problem. However, as with how companies partner, the right strategy depends on data sensitivity and on what data users decide is acceptable to train on.
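To make two of those strategies concrete, here is a minimal sketch, assuming a simple tabular model and synthetic data; the function names and parameters are illustrative, not any incumbent’s actual pipeline. Each party keeps its records on-site and shares only clipped, noised model updates (federated learning plus differential privacy), which a coordinator averages into a shared model.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient-descent step on a single party's private data (least squares)."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def dp_federated_round(global_weights, parties, clip=1.0, noise_std=0.05, rng=None):
    """Each party trains locally, clips its update, and adds Gaussian noise; only the
    noised deltas leave the building, and the server averages them into the model."""
    rng = rng if rng is not None else np.random.default_rng(0)
    deltas = []
    for X, y in parties:
        local_w = local_update(global_weights.copy(), X, y)
        delta = local_w - global_weights
        delta *= min(1.0, clip / (np.linalg.norm(delta) + 1e-12))  # bound each party's influence
        delta += rng.normal(0.0, noise_std, size=delta.shape)       # differential-privacy noise
        deltas.append(delta)
    return global_weights + np.mean(deltas, axis=0)

# Toy run: three "incumbents", each holding private data drawn from the same trend.
rng = np.random.default_rng(42)
true_w = np.array([1.5, -2.0])
parties = []
for _ in range(3):
    X = rng.normal(size=(200, 2))
    y = X @ true_w + rng.normal(0.0, 0.1, size=200)
    parties.append((X, y))

w = np.zeros(2)
for _ in range(50):
    w = dp_federated_round(w, parties, rng=rng)
print("recovered weights:", w)  # approaches true_w without ever pooling raw records
```

The design choice worth noting: the clipping bounds how much any one party’s data can move the shared model, which is what makes the added noise meaningful as a privacy guarantee rather than decoration.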

Winner 2: Humans tutoring AI with their unique expertise (KaaS)

In addition to differentiable datasets, another category of AI winners is companies that integrate expert intelligence into models through AI tutoring. David Sacks has commented that developer output is the critical bottleneck for startup growth. Everyone fights to get their features on the Jira board, giving developers reasons why their feature matters more than others, and eventually this discussion turns into an inefficient system of pitching developers instead of getting work done.

But human bottlenecks aren’t unique to startups. Healthcare faces a worldwide shortage of 4.3 million physicians, nurses, and other professionals. The shortage is so acute that some countries are in a bidding war for healthcare staff. In education, there’s a shortage of 69 million teachers. These constraints lead to unintended consequences like rushing a patient’s diagnosis because four others are in the waiting room, or teaching to the competency of the lowest-scoring student.

While there is nuance in how these tools can be used effectively, attention is such a scarce resource that tools like Med-PaLM 2, Google’s large medical language model, which reaches about 85% accuracy on medical exam questions, can help give initial diagnoses to more patients and flag diseases without a patient entering the office. Khanmigo, Khan Academy’s large language model, teaches students complex topics, creating a 1:1 student-teacher ratio instead of the US average of 15:1.

What’s unique about Khanmigo is that Khan Academy used an expert teacher, encoding how a teacher would think before responding to a student’s question, on top of its unique dataset of content from the site (plus some data pulled from the Internet). This type of training, which integrates human expertise into the loop of large-scale models, is KaaS (knowledge-as-a-service). Fundamentally, it’s about tutoring the computer in how to interpret a question and answer it well, and about giving the model knowledge it can’t find by scraping the web or other users.
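As a rough illustration, encoding “how a teacher would think” could look like an expert-written rubric baked into the prompt plus a short chain of calls; the rubric text and `call_llm` stub below are hypothetical stand-ins, not Khanmigo’s actual implementation.

```python
# Hypothetical sketch: expert teaching knowledge encoded as a rubric in the system
# prompt, with a two-step chain (diagnose the misconception, then reply Socratically).
# call_llm is a placeholder for whatever model API a product like this actually uses.

TEACHER_RUBRIC = """You are a patient tutor. Before answering:
1. Restate what the student is really asking.
2. Identify their most likely misconception.
3. Respond with a guiding question, not the final answer."""

def call_llm(system_prompt: str, user_prompt: str) -> str:
    # Placeholder so the sketch runs end to end; swap in a real model call here.
    return f"<model response to: {user_prompt[:40]}...>"

def tutor_reply(student_question: str) -> str:
    # Step 1: diagnose the misconception (the expert's knowledge shapes this prompt).
    diagnosis = call_llm(TEACHER_RUBRIC,
                         f"Diagnose the misconception in: {student_question}")
    # Step 2: chain the diagnosis into a Socratic reply.
    return call_llm(TEACHER_RUBRIC,
                    f"Student asked: {student_question}\n"
                    f"Diagnosed misconception: {diagnosis}\n"
                    f"Reply with one guiding question.")

print(tutor_reply("Why do we flip the inequality when multiplying by -1?"))
```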

My hypothesis is that this tutoring will happen through LFHF (learning from human feedback) and prompt chaining. Figma is a good example of how this process could work (a sketch of the pipeline follows the list):

  • Collect a dataset from user designs: Figma has hundreds of data samples from user mockups and wireframes. These are the inputs to the model.
  • Add prompts from the designer checklist before training: In most fields, we can reduce the most important questions to a checklist. The questions on that checklist are the prompts for chaining, and the nuance comes from experts.
  • Train the model using feedback from expert designers: Just as designers sit in a feedback session and suggest improvements to a peer, they’ll do the same here, except they’re critiquing an LLM instead of a person. Multiple designers will review each design to prevent any one reviewer’s bias, and the model will learn which points make sense for a user.
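Here is a minimal sketch of that three-step pipeline, assuming a Figma-like product; the checklist text, the `generate_feedback` stub, and the vote counts are hypothetical placeholders, not Figma’s actual tooling.

```python
from dataclasses import dataclass
from collections import Counter

# Step 1: the inputs are user designs (IDs standing in for real mockup files).
designs = ["mockup_001", "mockup_002"]

# Step 2: an expert-written checklist becomes the chained prompts.
DESIGN_CHECKLIST = [
    "Is the visual hierarchy clear?",
    "Are touch targets large enough?",
    "Is the contrast ratio accessible?",
]

def generate_feedback(design: str, question: str) -> str:
    # Stand-in for an LLM call: prompt = checklist question + design context.
    return f"[draft feedback on {design} for: {question}]"

# Step 3: multiple experts label each draft, and majority agreement becomes
# the human-feedback signal used for later fine-tuning or reward modeling.
@dataclass
class Labeled:
    design: str
    question: str
    draft: str
    votes: Counter  # "keep" vs "discard" votes from several designers

dataset = []
for design in designs:
    for question in DESIGN_CHECKLIST:
        draft = generate_feedback(design, question)
        votes = Counter({"keep": 2, "discard": 1})  # placeholder expert votes
        dataset.append(Labeled(design, question, draft, votes))

# Only feedback a majority of experts endorse is kept as training signal,
# mirroring "multiple designers review each design to prevent bias".
training_examples = [ex for ex in dataset if ex.votes["keep"] > ex.votes["discard"]]
print(f"{len(training_examples)} expert-approved feedback examples collected")
```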

Companies with human bottlenecks can replicate the framework used for Figma. As the Figma model gets trained and deployed, it will provide the same quality of feedback as a senior designer without one needing to be present in every review; that’s the goal of KaaS.

Technology exists to augment society, and we’re on the edge of a ladder that will take us five steps forward with one technology, something that rarely happens. LLMs are one piece of AI, but they seem to have the largest number of applications to the activities humans do.

For these models to take us five steps forward, the need for data remains unchanged, but the methods companies use to get it will differ from the past. Those who figure out how to capitalize on partnerships, human expertise, learning from human feedback, and other ways of acquiring and using unique data will win. This sets up one of the most exciting consumer moments ever.

Thanks to Heya Desai, Madhav Malhotra, and Aryan Saha for reading drafts of this. Thanks to Andrés Velarde and Skylor Lane for the conversations that contributed to the thoughts in this essay.

More essays: zaynp.com/essays

Written by Zayn Patel

Working on space technology and policy, improving government with data science, and launching a cubesat mission.
