AI Product Manager: The Sexiest Job of the 21st Century?

Christophe Bourguignat, CPTO

In October 2012, the Harvard Business Review published the now-iconic article: “Data Scientist: The Sexiest Job of the 21st Century.” It underscored the transformative value of data scientists, whose rare blend of analytical expertise, technical acumen, and business insight allowed organizations to extract actionable value from big data. The role quickly became one of the most sought-after and influential across industries.

Fast forward 10 years to November 30, 2022, when ChatGPT — and more broadly, Large Language Models (LLMs) — debuted and captured the world’s attention. This marked a groundbreaking moment for enterprise data scientists and machine learning (ML) practitioners. Suddenly, building sophisticated applications required little more than a single line of code, with no specialized ML training needed. Tasks like sentiment analysis, object detection in images, text summarization, translation, speech-to-text, and image classification — once the domain of dedicated ML teams and extensive model training — became accessible to anyone, thanks to pre-trained foundation models. Trained on vast swaths of human knowledge, LLMs have the flexibility to tackle virtually any such task with minimal additional effort.
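To make the "single line of code" point concrete, here is a minimal sketch of sentiment analysis delegated to an LLM. The prompt wording is illustrative, and `call_llm` is a hypothetical placeholder for any chat-completion client (e.g. an OpenAI or Anthropic SDK call); it is injected as a parameter so the sketch stays self-contained.

```python
def classify_sentiment(text: str, call_llm) -> str:
    """Sentiment analysis as a single prompt — no model training required."""
    prompt = (
        "Classify the sentiment of the following text as exactly one of "
        "'positive', 'negative', or 'neutral'.\n\nText: " + text
    )
    answer = call_llm(prompt).strip().lower()
    # Guard against chatty model output by keeping only a known label.
    for label in ("positive", "negative", "neutral"):
        if label in answer:
            return label
    return "neutral"

# Usage with a stubbed client (a real one would call a hosted model):
fake_llm = lambda prompt: "positive"
print(classify_sentiment("I love this product!", fake_llm))  # → positive
```

The whole "model" is the prompt: what used to be a supervised training pipeline collapses into one function call against a pre-trained foundation model.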

This revolution has significantly impacted startups and tech companies. Horizontal, general-purpose solution providers—such as those specializing in image recognition or document processing—have faced challenges maintaining their competitive edge. By contrast, vertical, industry-focused companies like Zelros, which cater specifically to the banking and insurance sectors, have preserved their niche advantage. These companies have leveraged generative AI-powered functionalities to deliver superior outcomes, solidifying their positions in their respective markets.

The LLM era has redefined the way modern software products are designed and built. It has opened up vast new opportunities in the product landscape, creating a “white space” ripe for a new generation of agentic applications (as illustrated in this diagram from Sequoia).

This paradigm shift has also redefined how product and R&D teams are organized. At Zelros, we’ve realigned our Product and R&D structure to embrace this new era of AI-first software:

  • No more traditional data scientists. The role of training legacy supervised models is now obsolete in our context.
  • All software engineers are now AI engineers. They integrate LLMs into their development workflows in various ways, making AI an intrinsic part of every engineer’s toolkit. They have also changed how they test their software, complementing traditional unit tests with statistical tests better suited to AI systems.
  • The infrastructure team is now a core product contributor. Previously a support function, this team now plays a vital role in securely hosting and serving the LLMs that power the core engines of our product.
  • A reinforced product team. The Product function has grown to meet the increasing demand for clear specifications and innovation guidance.

With the reduced need for traditional data scientists and the enhanced productivity of software engineers, thanks to tools like GitHub Copilot, the size of the development team has remained stable while supporting business growth.

On the other hand, we have observed the growing importance of product management. As coding becomes more efficient, the innovation bottleneck shifts to producing clear specifications for valuable things to build. This has led to increased demand for product teams, particularly product owners and UX designers.

The testing methods for AI-first products have also undergone a significant transformation compared to traditional software products. The inherent unpredictability and stochastic nature of AI systems make conventional unit tests insufficient. Instead, these systems require more advanced and scientifically robust testing methodologies, resembling the statistical cross-validation practices commonly used in data science.
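As a minimal sketch of what such a statistical test might look like (the stubbed classifier, trial count, and accuracy threshold are illustrative, not Zelros's actual test suite): instead of asserting one exact output, the test asserts that aggregate accuracy over many trials clears a threshold.

```python
import random

def stochastic_classifier(text: str, rng: random.Random) -> str:
    """Stand-in for an LLM-backed classifier that is right ~90% of the time."""
    truth = "positive" if "good" in text else "negative"
    if rng.random() < 0.9:
        return truth
    return "negative" if truth == "positive" else "positive"

def test_accuracy_above_threshold(n_trials: int = 1000,
                                  threshold: float = 0.85) -> bool:
    """A statistical test: assert aggregate accuracy, not an exact output."""
    rng = random.Random(42)  # seeded so the test is reproducible
    dataset = [("good movie", "positive"),
               ("bad service", "negative")] * (n_trials // 2)
    correct = sum(stochastic_classifier(text, rng) == label
                  for text, label in dataset)
    return correct / len(dataset) >= threshold

print(test_accuracy_above_threshold())
```

With a seeded generator the test stays deterministic while still treating the system as stochastic — the same spirit as cross-validation: judge the model on distributional performance, not on a single prediction.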

Andrew Ng highlighted this trend in a recent essay: “Many companies have an Engineer: PM ratio of, say, 6:1. (The ratio varies widely by company and industry, and anywhere from 4:1 to 10:1 is typical.) As coding becomes more efficient, I think teams will need more product management work (as well as design work) as a fraction of the total workforce.”

At Zelros, we’ve addressed this by creating multidisciplinary client task forces that each include one R&D team member. This approach bridges the gap between development and customers, particularly when product requirements are ill-defined. AI development, being much more iterative than traditional software development, demands frequent adjustments and close collaboration with end users to ensure alignment with their needs.

Andrew Ng also observed this shift and even goes further: “The demand for good AI Product Managers will be huge. In addition to growing AI Product Management as a discipline, perhaps some engineers will also end up doing more product management work”.

This is an exciting time for tech enablers like Zelros to design and build the products of tomorrow. Are you a software developer wanting to integrate LLMs into industry-specific applications? Or a product manager eager to contribute to an AI-first product? We are hiring: join us!