These days, there’s a lot of hype around generative AI and its potential to upend almost every aspect of our lives, from how we work to how we communicate, learn, create, and make decisions. At Northzone, we believe that the burgeoning capabilities of generative models promise to reshape entire industries, and we’ve spent much of the last year looking at vertical AI platforms for markets like healthcare, manufacturing, real estate, wealth management, and more.
Vertical AI is, by definition, specific. Therefore, it has the power to comprehend highly relevant details and perform nuanced, targeted tasks. The best AI “colleague” will have the know-how of a seasoned employee and the execution of a star operator. It’s a gossiper and a go-getter, sharply tailored to its vertical. As it delivers knowledge and logs data itself, employees will look to it, instead of underlying systems of record, as the ultimate source of truth and execution.
But how do you get there? And what do we really look for in the meantime? We have identified a pattern among the companies that have been able not only to capitalize on the exponential growth in budgets for AI tools but also to impress their customers enough to retain them.
We are excited about (i) NLP use cases (ii) built by industry experts that (iii) serve large, tech-laggard industries with intensely manual workflows for rich text data that (iv) often sit across multiple systems of record.
Oof. That’s a mouthful, so let’s unpack it with some more nuance:
NLP use cases
NLP is a broad term, and much of the recent attention has gone to LLM-powered chatbots and automation agents. While we like these platforms and see a lot of potential, the platforms growing fastest in our experience are those focused on internally facing text search, extraction, and summarization use cases. Although buyers are eager to adopt AI tools, we still see plenty of rightful hesitation around unleashing LLMs directly on their customers.
That said, the buying intent is real and is becoming more robust by the day. Over the last twelve months, this intent has evolved from “We know we need to do something in AI” to “We are specifically evaluating XYZ tool and expect to make a decision over the next few months.” In many instances, the vertical AI players are shaping that intent by meeting buyers’ interest in AI with concrete ROI. For example, when speaking with decision-makers in real estate, we consistently heard that adopting AI leasing agents like Zuma and Elise AI is a #1 or #2 priority. Much of the education here was done by the platforms themselves, and we’re now seeing awareness spread across the industry through word of mouth and industry conferences.
In retrospect, this makes a lot of sense. The most visible innovations in AI have centered around LLMs and their superior capabilities for handling unstructured text. This is important because 80-90% of the world’s data is unstructured, predominantly in the form of raw text. Structuring raw text is incredibly valuable in and of itself, but it also amplifies the value of quantitative data by giving it context.
Interestingly enough, however, many of the fastest-growing companies we’ve looked at recently actually use LLMs in a limited capacity! Frequently, they rely on more deterministic ML models tuned for their target industry, with LLMs serving as an augmenting layer on top for now.
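To make that concrete, here is a rough sketch of the pattern in Python: a classical, domain-tuned classifier does the deterministic, load-bearing work, and an LLM sits on top purely as an augmenting layer. The training snippets and the `summarize_with_llm` hook are hypothetical placeholders, not a description of any particular company’s stack.

```python
# A minimal sketch: a deterministic, domain-tuned classifier routes documents with an
# interpretable confidence score; an LLM only adds a draft summary on top.
# The labeled examples and summarize_with_llm() below are illustrative stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Deterministic core: a classical text classifier fit on labeled, in-domain reports.
labeled_reports = [
    ("Tenant reports water leak under kitchen sink in unit 4B.", "maintenance"),
    ("Prospect asked about lease terms and move-in specials.", "leasing"),
    ("Invoice #221 for HVAC repair attached for approval.", "billing"),
]
texts, labels = zip(*labeled_reports)
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(texts, labels)

def summarize_with_llm(text: str) -> str:
    """Placeholder for an LLM call; swap in whichever provider the team uses."""
    return f"Summary (LLM draft): {text[:80]}..."

def process_report(text: str, confidence_threshold: float = 0.6) -> dict:
    """Route with the deterministic model; the LLM output is only an extra layer."""
    probabilities = classifier.predict_proba([text])[0]
    best = probabilities.argmax()
    result = {
        "category": classifier.classes_[best],
        "confidence": round(float(probabilities[best]), 2),
    }
    # The LLM augments the record with a human-readable draft; it never decides routing.
    if result["confidence"] >= confidence_threshold:
        result["summary"] = summarize_with_llm(text)
    return result

print(process_report("Resident emailed about a broken dishwasher in unit 12."))
```

The design choice this illustrates is the one we keep seeing: keep the step that must be auditable and repeatable deterministic, and let the LLM add convenience rather than make the call.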
Platforms built by industry experts bolstered by AI talent
Don’t get us wrong. There is genuine innovation happening in LLMs, and we believe they will power the future of AI applications. However, there’s a reason why there are far more AI-native startups than LLM-native startups. The obvious reality is that no platform is built on top of a monolithic model. Instead, it’s an ensemble of various models, oftentimes feeding into one another. LLMs are just one of many potential inputs, and teams must deeply understand the trade-off between faster setup time and the confidence and interpretability of the output in order to know when they can use LLMs and when they can’t.
We find that teams with deep industry expertise can move faster because they intuitively understand this trade-off and can shortcut some of the product discovery work that others must do. Coupled with their natural GTM advantage, we find that teams that lead with industry knowledge and are supported by AI talent typically find PMF the fastest. However, this could change, especially as the base performance of LLMs continues to improve so quickly. We anticipate that more teams will need to invest heavily in R&D to stay on top of the latest developments and swap out underlying models for new ones when the time is right.
Target large, tech-laggard industries with intensely manual workflows for rich text data
One of the most exciting aspects of generative AI is that it brings about a leapfrog in user experience “that just works,” thus accelerating adoption by orders of magnitude in industries that have traditionally been slow to adopt new technology. Oftentimes, the best beachhead for newcomers is showing day-1 ROI by automating workflows centered around written reports.
A concrete example is insurance, where companies like Sixfold and Kyber are leveraging generative AI to synthesize data such as NAICS codes, written policies, and local permits and regulations, helping automate the workflow between brokers and insurers: answering inbound questions, processing claims, improving underwriting, and more.
Similarly, in healthcare, companies like Abridge, DeepScribe, and Nabla are building AI scribes that are changing how doctors interact with their patients. These platforms automatically transcribe patient conversations, extract medically significant phrases, and summarize them into structured notes for EMR systems. The experience for doctors is magical. As one buyer said, “Usually, we have to drag our physicians kicking and screaming into using new tech. But with AI scribing, our doctors love this tool and actually came to us asking for it. In fact, our oldest doctors are often the ones who love it the most.”
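For illustration only, here is a stripped-down sketch of what such a scribe pipeline looks like end to end: transcribe, extract the clinically relevant phrases, and summarize into a structured note. The stubs and term list are ours, not how Abridge, DeepScribe, or Nabla actually build their products.

```python
# A simplified scribe pipeline sketch (transcribe -> extract -> structured note).
# The ASR and LLM steps are stubbed out and the term list is a toy example.
from dataclasses import dataclass, field

MEDICAL_TERMS = {"chest pain", "shortness of breath", "ibuprofen", "hypertension"}

@dataclass
class StructuredNote:
    """A minimal stand-in for the note a scribe would push into an EMR."""
    transcript: str
    key_phrases: list = field(default_factory=list)
    summary: str = ""

def transcribe(audio_path: str) -> str:
    """Stub for speech-to-text; a real system would call an ASR model here."""
    return ("Patient reports chest pain and shortness of breath for two days. "
            "Currently taking ibuprofen. History of hypertension.")

def extract_key_phrases(transcript: str) -> list:
    """Toy extraction: flag known clinical terms. Real systems use medical NER models."""
    lowered = transcript.lower()
    return [term for term in MEDICAL_TERMS if term in lowered]

def summarize(transcript: str, phrases: list) -> str:
    """Stub for an LLM-generated structured summary (e.g. a SOAP-style note)."""
    return f"Subjective: {transcript} Assessment focus: {', '.join(sorted(phrases))}."

def run_scribe(audio_path: str) -> StructuredNote:
    transcript = transcribe(audio_path)
    phrases = extract_key_phrases(transcript)
    return StructuredNote(transcript=transcript, key_phrases=phrases,
                          summary=summarize(transcript, phrases))

note = run_scribe("visit_2024-05-01.wav")
print(note.summary)
```

The point of the sketch is the shape of the workflow, not the models: each step produces an artifact the next one consumes, and the final output lands in the system of record rather than in the doctor’s lap.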
At Northzone, we’ve talked about the idea that generative AI also enables a leap in the mode of interaction with software, from point and click to keyboard shortcuts and now to natural language itself. For many less tech-savvy users, this means that software is finally starting to act how they imagine high technology should.
Until now, much of the technology traction within these industries has come through the arrival of tech-enabled newcomers rather than deep adoption within existing players. AI applications are flipping this dynamic on its head, and the importance of this cannot be overstated.
Text data that sits across multiple systems of record
An additional lens through which we examine end markets is where the text data lives today. In some industries, the vast majority of text data sits within a handful of systems, or even just one. In these instances, we perceive the incumbent threat to be much higher than in industries where data is disparate and siloed. One example of the latter is manufacturing, where rich text data sits across a multitude of systems like ERPs, quality management systems, and even dedicated third-party reporting systems that interface between suppliers and customers. Companies like Axion Ray work across these silos to help manufacturers synthesize that data and proactively identify quality issues.
All else equal, the more systems of record, the better, as it increases the need for a horizontal layer across data silos. It also reduces the likelihood that any one incumbent, which is often a preexisting system of record, can come in and replace the newcomer. Instead, using their initial use case as a wedge, the newcomer AI platform can then expand to aggregate other data sources. Over time, as the broader space solidifies and components like LLMs, agent architecture, guardrails, and governance improve, the best platforms will be able to extend deeper into more complex workflows and benefit from a virtuous data and workflow flywheel.
So where do we end up? Will Vertical AI shift from software-as-a-service to output-as-a-service?
Investors often talk about “AI + workflow,” and they’re right that it’s important for these bots not just to automate a task but to ingrain themselves deeply in a user’s workflow. Jasper AI, for instance, not only automates the copywriting of an email but also works natively in Google Docs, SEO platforms, and Webflow. However, in the last decade, “workflow” has (unfortunately for many workers) become more about updating systems of record than anything else. Building “AI + workflow” really means that the AI can operate autonomously, pushing and pulling data to and from disparate systems of record in an elegant way, such that humans don’t have to suffer in these platforms at all.
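Here is a hedged sketch of what that looks like in code: an agent that pulls context from several systems of record, generates its output, and writes the result back so no human has to. The connector classes and field names below are made up for illustration; real integrations would sit behind each one.

```python
# "AI + workflow" as a loop: pull from every silo, generate output, push updates back.
# InMemorySystem and draft_reply() are hypothetical stand-ins, not real integrations.
from typing import Protocol

class SystemOfRecord(Protocol):
    name: str
    def pull(self, entity_id: str) -> dict: ...
    def push(self, entity_id: str, update: dict) -> None: ...

class InMemorySystem:
    """Toy connector standing in for a CRM, ERP, or ticketing integration."""
    def __init__(self, name: str, records: dict):
        self.name, self.records = name, records
    def pull(self, entity_id: str) -> dict:
        return self.records.get(entity_id, {})
    def push(self, entity_id: str, update: dict) -> None:
        self.records.setdefault(entity_id, {}).update(update)

def draft_reply(context: dict) -> str:
    """Stub for the model call that turns the aggregated context into output."""
    return f"Drafted follow-up using context from: {', '.join(sorted(context))}."

def handle_task(entity_id: str, systems: list) -> str:
    # Pull from every silo, generate the output, then log the result back everywhere.
    context = {s.name: s.pull(entity_id) for s in systems}
    output = draft_reply(context)
    for s in systems:
        s.push(entity_id, {"last_ai_action": output})
    return output

crm = InMemorySystem("crm", {"lead-42": {"stage": "negotiation"}})
erp = InMemorySystem("erp", {"lead-42": {"open_invoices": 2}})
print(handle_task("lead-42", [crm, erp]))
```

The write-back step is the part that matters: the value isn’t just the generated output, it’s that every underlying system stays current without a person doing the data entry.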
Here’s how we predict things shaking out. Many companies will be able to initially show ROI by automating workflows centered around written reports, and competition will be fierce, particularly as the base performance of LLMs improves. The best vertical AI solutions will be like a functional worker who is also a team player: expert in their core task (summarization, for instance), but also diligent in updating the horizontal systems of record that the broader organization relies on. And some AI agents will become “the person at work who everyone goes to for questions.” Sure, we can all find answers to most professional questions in a Notion doc, or Salesforce, or an ERP. But then there’s that colleague who’s been around forever, who knows exactly where to find the answer or can even give it to you directly. AI solutions may begin as a functional worker, then become a functional worker who updates (and is updated by) all systems perfectly, and might end up as the trusted colleague you go to instead of consulting those systems in the first place. Eventually, just like human analysts who move up the ranks and get more responsibilities, the agent might be given more leeway to make its own decisions and produce “real output.”
To date, people have built large businesses that sell tools to enhance productivity. Eventually, AI agents will flip this on its head. Instead of software-as-a-service, it’ll just be output-as-a-service. A manufacturing company needs Quality Assurance performed on a product? An AI agent gathers all written reports from various systems, analyzes and flags quality issues, and then updates all relevant stakeholders and logs the remediation plan. Becoming this horizontal “operator layer” on top of multiple data silos is the endgame for Vertical AI, and those who achieve it will be richly rewarded.
As many others have said, AI is moving so fast, and much of what we see today will be different tomorrow. Nevertheless, we genuinely believe we are in the midst of a massive transformation that will lead to deeper technology adoption in industries at a level we haven’t seen before. If you’re building a vertical AI application, we’d love to hear from you; reach out to Aaron and Molly.
P.S. Exactly one paragraph in this post was written entirely by ChatGPT. Can you guess which one?