Insights 4: Yoodli Unstealthed, Large Language Models, Task Centric AI

October 31 · Vu Ha
Incubator Graduates
Happy Halloween! This month, the relentless AI2 incubator hype cycle continues with Yoodli's coming out of stealth and TheSequence's profiling of WhyLabs.ai. We will give you a scoop on these two cool companies (insider secret: they are both hiring for multiple roles!). We will also pick out two tidbits from the annual State of AI report that caught our attention. The first is the rise of AI-first, full-stack drug discovery and development companies. The second is the invasion of extremely large language models, which opens the door to exciting opportunities as well as harmful misuse we should be aware of and actively work to address. We ponder whether the future of AI will continue to be data-centric, per Andrew Ng, or whether there will be scenarios where we rely less on (large amounts of) data given the learning efficiency of these large models.
But first, Semantic Scholar is turning 6. We launched S2 (our internal nickname for Semantic Scholar) on November 2, 2015. Year by year, the product has grown better with expanded coverage and unique features. I rely heavily on the folder-centric personal library feature to organize my to-read lists and was delighted to learn earlier this month that S2 now recommends papers directly based on the papers in a given folder. My favorite S2 feature, however, is hands down Semantic Reader, currently in beta. Reading research papers feels 10x better than with traditional PDF viewers - give it a try! The speed at which AI innovation moves from an arXiv upload to production has never been faster. Startup CTOs/CSOs need to stay on top of what's happening in AI/ML research, and my "unbiased" :) advice is that S2 gives you the best tool to do just that. Happy birthday, Semantic Scholar!

AI2 Incubator Companies

On the cover picture of this month's newsletter is the wonderful founding team of Yoodli. Varun, Esha, and Professor Hoque are building a product that helps everyone practice presenting and get tips on how to improve, with feedback and advice from both AI and real-world speaking coaches. Spoken communication is important in so many settings in one's professional and personal life, and Yoodli will be there to help you every step of the way. Congrats to the team on the launch and the vote of confidence from AI2 and Madrona Venture Group!
It seems that every month the AI2 incubator gets some coverage from The Sequence. This month, their audience got an introduction to WhyLabs, an ML observability platform and a leader in the MLOps startup category. Stay tuned for more exciting updates from WhyLabs in our next newsletter!
Our last community update is from an early alumnus, XNOR, which joined Apple back in 2020. Sachin Mehta and XNOR's former CTO Mohammad Rastegari posted a paper on arXiv titled "MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer". Ever wondered if vision transformers could be run efficiently at the edge, ahem I mean the iPhone 14? Sachin and Mohammad believe so. Their results show that
MobileViT significantly outperforms CNN- and ViT-based networks across different tasks and datasets. On the ImageNet-1k dataset, MobileViT achieves top-1 accuracy of 78.4% with about 6 million parameters, which is 3.2% and 6.2% more accurate than MobileNetv3 (CNN-based) and DeiT (ViT-based) for a similar number of parameters. On the MS-COCO object detection task, MobileViT is 5.7% more accurate than MobileNetv3 for a similar number of parameters.

AI-first, full-stack drug discovery and development

The State of AI report came out this month. This is an annual snapshot of the field across academia, industry, and politics, spiced up with fun predictions. One of the five industry highlights is that "two major AI-first drug discovery and development companies complete IPOs with drugs in the clinic, further validating their potential". They are:

Exscientia

British AI-first drug discovery company Exscientia originated the world’s first 3 AI-designed drugs into Phase 1 human testing and IPO’d on the NASDAQ on 1 October 2021 at a >$3B valuation. Exscientia is now the UK’s largest biotech and the 3rd largest biopharma company in the UK after GSK and AstraZeneca. The company has a further 4 drug candidates currently undergoing advanced profiling for submission of investigational new drug applications, in addition to more than 25 active projects in total.

Recursion Pharmaceuticals

Recursion Pharmaceuticals, a Utah-based AI-first company that makes use of high-throughput screening and computer vision-powered microscopy to discover drugs, raised $436M in its NASDAQ IPO in April 2021. The business has 37 internally-developed drug programs including 4 clinical-stage assets. By conducting targeted exploration of the biological search space with compound and disease cell-type combinations, the company is building a “map” of disease biology. With this map, the company is predicting tens of billions of relationships between disease models and therapeutic candidates. This includes relationships that are predictive of candidate mechanism of action, which expands the discovery funnel beyond hypothesized and human-biased targets.
Last month, we covered two AI2 incubator alumni: Modulus Therapeutics (Cell Therapy by Design) and Ozette (High-Resolution Immune Profiling). I am personally more excited about the potential impact of AI on the life sciences than on self-driving cars and related Bond-style gadgets. Am I looking forward to AI-guided cell therapy to cure cancer, or to a high-resolution map of our immune systems? Yes!

Large Language Models

Let's start with one of the research highlights from the State of AI annual report:
Large language models (LLM) are in the scale-out phase and have become “nationalised” where each country wants their own LLM.
What does large mean? For the purposes of this discussion, let's define large as GPT-3-like size: >100 billion parameters. That's 11 zeros! Let's walk down memory lane to early 2018 - ancient times in deep-learning chronology - when Matt Peters et al. at AI2 introduced ELMo.

The History of LLMs since ELMo

With 93M parameters, ELMo is small (and cute). OpenAI released the first iteration of GPT, with 150M parameters, in June 2018. Google joined the fray with BERT, moving up to medium T-shirt size with 345M parameters. The second iteration of GPT was the first to cross into billion-parameter territory (1.5B). Language models were getting large(r).
ELMo also started the muppetware revolution (source: AI Essentials):
[Image: Muppetware]
When OpenAI dropped the GPT-3 bomb in June 2020, we went from S/M/L to XXL with two orders of magnitude more parameters. If the brilliant folks at OpenAI had followed the muppet theme, they could have nicknamed it Cookie Monster. GPT-3 has an enormous appetite for GPUs and text data of all shapes and forms. So when the State of AI report talks about LLMs being in the scale-out phase, the word "large" maps to our XXL T-shirt size. The nationalisation is captured below.
[Image: the nationalisation of LLMs]
We covered EleutherAI's GPT-J in a previous newsletter. It's currently only L-size, but EleutherAI's intent is to get to the XXL range and beyond at some point. AI21's Jurassic model demonstrated that LLMs are within reach for well-funded startups.

LLM's New-Found Power: Learning Efficiency

What's with the brouhaha around LLMs? Learning efficiency! Below is the famous GPT-3 graph that got everyone's attention:
[Image: GPT-3 learning-efficiency graph]
The task being benchmarked here is the removal of extraneous symbols from a word. Yes, it was cherry-picked. Yes, extraneous symbol removal is not exactly a task with sweeping practical impact. The efficiency curve that GPT-3 yields across many tasks is real, however.
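To make the few-shot setup concrete, here is a minimal sketch of how such a task can be posed to GPT-3: a handful of demonstrations are packed into a single prompt and the model simply completes the next line. The example words and the `davinci` engine name are illustrative, and this assumes the 2021-era `openai` Python client with an API key configured.

```python
import openai  # assumes the 2021-era openai client and OPENAI_API_KEY set in the environment

# A few demonstrations of the "remove extraneous symbols" task, plus one query.
examples = [
    ("s.u!c/c!e.s s i/o/n", "succession"),
    ("i n s t a n c e", "instance"),
    ("c l e a n l i n e s s", "cleanliness"),
]
query = "a p p l e!!"

# Pack the demonstrations into a single few-shot prompt.
prompt = "Remove the extra symbols and spaces from each word.\n\n"
for noisy, clean in examples:
    prompt += f"Input: {noisy}\nOutput: {clean}\n\n"
prompt += f"Input: {query}\nOutput:"

# Ask the model to complete the final line; no fine-tuning or labels beyond the prompt.
response = openai.Completion.create(
    engine="davinci",   # base GPT-3 engine name, used here for illustration
    prompt=prompt,
    max_tokens=5,
    temperature=0,
    stop="\n",
)
print(response.choices[0].text.strip())  # expected: "apple"
```

The point is not the specific task but the workflow: the "training set" is three lines of text inside a prompt, and swapping in a different task means swapping the demonstrations, not retraining a model.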

Task-Centric AI

Andrew Ng has been giving talks at various venues about a new focus for AI that he calls data-centric AI. The TLDR of data-centric AI, for me personally, is GIGO, or garbage in, garbage out: we should focus on minimizing the garbage-ness of the data we feed AI.
How do we square data-centric AI with LLMs' learning efficiency? As LLMs can learn from just a handful of examples, de-garbaging should be easy, right? Instead, we can now focus our attention on the task at hand. We may indeed have many tasks at hand since, guess what, we can now go after a large number of tasks. Below we extend Andrew's data-centric AI visualization with a world that is task-centric instead of data-centric.
[Image: task-centric AI]
Things look a bit different in the task-centric world compared to the data-centric world:
[Image: data-centric vs. task-centric AI]
Instead of building 10 models with 1,000 labels each, we could build 1,000 models with 10 labels each. Not every task is a good fit for task-centric AI, though. Those that require 99.99% accuracy clearly belong to the data-centric world. For us (data-)poor startups, getting to minimum algorithmic performance (MAP) could be possible. LLMs could provide a lifeline for startups struggling with the bootstrap challenge.
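As a rough sketch of what "a model from 10 labels" could look like in practice, the helper below packs a handful of labeled examples into a few-shot prompt and returns a classifier function. `make_few_shot_classifier`, `llm_complete`, and the ticket-routing example are hypothetical names used for illustration, not an existing library.

```python
from typing import Callable, List, Tuple

def make_few_shot_classifier(
    task_description: str,
    examples: List[Tuple[str, str]],     # ~10 (text, label) pairs for this task
    llm_complete: Callable[[str], str],  # hypothetical hook to whichever LLM completion API you use
) -> Callable[[str], str]:
    """Turn a handful of labels into a 'model' by packing them into a prompt."""
    header = task_description + "\n\n"
    shots = "".join(f"Text: {text}\nLabel: {label}\n\n" for text, label in examples)

    def classify(text: str) -> str:
        prompt = header + shots + f"Text: {text}\nLabel:"
        return llm_complete(prompt).strip()

    return classify

# One of the "1,000 models": a support-ticket router built from 10 labeled tickets.
# route_ticket = make_few_shot_classifier(
#     "Classify each support ticket as 'billing', 'bug', or 'other'.",
#     ten_labeled_tickets,      # hypothetical list of (ticket_text, label) pairs
#     llm_complete=my_llm,      # hypothetical wrapper around the LLM of your choice
# )
```

Each new task costs a task description and a few labels rather than a labeling campaign, which is the whole economic argument for the task-centric world.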
In the task-centric world, LLMs could open up the opportunity to help less technical folks build and use AI models without relying on an expensive data science team. No-code AI, powered by the XXL transformers near you? Scale.ai and Snorkel.ai are the poster-child unicorns of the data-centric AI world. Who will emerge as the representatives for the LLM task-centric world? The two key questions a task-centric startup needs to answer are:
  1. What sort of no-code AI problems exist that are a) painful for lots of customers and b) can only be solved with LLMs?
  2. How can LLMs be deployed cost-effectively? GPT-3 is rather spendy if used via OpenAI's API with any meaningful traffic. Ah yes, there's also the small inconvenience of feeding your own Cookie Monster lots of GPUs. AI21 Labs did it, so can the next well-funded startup (bootstrapped startups eschewing the VC path should look elsewhere).
Regarding the second question, the research community has been moving very fast to make LLMs not only bigger but also easier to use, with prompt engineering, instruction tuning, calibration, few-shot learning, etc. There has also been a lot of progress on hitting the sweet spot of the performance-vs-size tradeoff in LLMs. As an example, the recent Hugging Face-led BigScience Workshop demonstrated the T0 model, which outperforms GPT-3 while being 16x smaller.
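For a feel of what the smaller-model route looks like, here is a minimal sketch of running a T0 checkpoint zero-shot with Hugging Face transformers. We use the 3B-parameter `bigscience/T0_3B` checkpoint for illustration (the full T0 is 11B); the prompt is adapted from the model card, and CPU inference will be slow.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# T0 is an encoder-decoder model instruction-tuned on natural-language prompts,
# so a task is phrased as a plain-text question instead of a fine-tuned head.
model_name = "bigscience/T0_3B"  # smaller sibling of the 11B T0 checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = ("Is this review positive or negative? "
          "Review: this is the best cast-iron skillet you will ever buy")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(inputs.input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # e.g. "positive"
```

A checkpoint in this size range can run on a single commodity GPU, which is exactly the deployment-cost relief the second question is asking for.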
The first task-centric unicorn will be the one that figures out the first question and has a strong technical team that can tackle both the research and the engineering aspects of the second question. Simple, right?

AI Startups

  • Mage, developing an artificial intelligence tool for product developers to build and integrate AI into apps, brought in $6.3 million in seed funding led by Gradient Ventures.
  • Copy.ai (powered by GPT-3) raised $11M series A.
  • Weights & Biases raised $135M series C.
  • Domino Data Lab raised $100M series F.
  • Immunai raised $215M series B. Holy guacamole!


