Don’t even think about bringing AI/ML in-house, seriously!


The prospect of cultivating internal employees to churn out patentable AI/ML inventions from your one-of-a-kind data is beyond tempting. After all, AI is the future, and vertical integration is today’s fashion in management school. Bringing AI in-house seems like a no-brainer, but it is one of the stupidest things you could do, and it almost never works.

Consider the familiar ERP rollout (a routine undertaking with zero scientific risk). Despite the prosaic nature of software integration, ERP initiatives carry a 75% enterprise failure rate (according to Gartner). Most enterprises simply cannot muster the execution. With that in mind, imagine tackling industry-defining innovation in a heavily scientific field with a wide risk envelope. AI requires an advanced strategic perspective to set goals; it then demands science-led iteration, software engineering, field qualification testing, packaging, user experience, and, yes, systems integration to succeed.

Bringing proprietary AI in-house is not a workable plan with occasional side effects. It is a straight line to failure, and that’s not hyperbole. Gartner reports that 85% of enterprise AI initiatives fail outright, never advancing beyond the prototype (proof-of-concept) stage. Producing a fieldable product is beyond most enterprise teams. Those that beat the odds and get past the prototype stage still face long odds of success. With proprietary AI, success requires surviving a gauntlet of execution challenges: training at scale, generalization, performance, field qualification testing, software engineering, systems integration, human-centered design, and other workstreams. Mix in a highly technical domain packed with esoteric engineering challenges, and the typical enterprise stands no chance. I’m sorry, but that’s the reality.

Acquiring talent.

Recruiting AI talent is not the problem. Most organizations, big or small, can hire AI/ML talent. While some recruitment processes are more selective than others, the talent is there.

For example, it’s easy to hire someone with a convincing CV and respectable experience as a director of AI/ML. That person will hire two or three subordinates, onshore or offshore, to round out your new in-house team. So far, so good. Except it takes 4-6 months to fill the open positions. Once staffed, a new AI team won’t be ready to tackle a serious innovation opportunity for another 4-6 months, because they must first hone their workflow and tool stack. Best case, you are facing an 8-month window before a newly formed in-house AI team is ready to begin work with significant proprietary impact – then another year before their work is meaningfully monetizable. Whatever your definition of “hit the ground running” is, this is not it.

Moreover, a production AI team needs serious engineering talent. If your team is forced to implement 4-bit quantization or write custom CUDA code to overcome performance problems – who does that work? Who builds training environments and runs field qualification trials? Hint: not data scientists! These tasks require engineers, good ones, with specialized skills. Who builds the graph database you’ll need on top of your (rather pointless) enterprise data warehouse? Who rewrites a data scientist’s poorly structured Python into production software? At the other end of the spectrum, who plots the AI strategy and sleuths out milestones that make sense? Who figures out how to mine ROI to offset the risk envelope? Who determines whether AI should be a value-add or a differentiator? The list goes on.
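To give a flavor of what even the most trivial version of one of those tasks involves, here is a toy sketch of per-tensor symmetric 4-bit quantization in NumPy. This is my illustrative assumption, not a production recipe: real 4-bit schemes add per-channel scales, packed two-codes-per-byte storage, outlier handling, and fused GPU kernels, which is precisely why they need dedicated engineers.

```python
import numpy as np

def quantize_4bit(weights: np.ndarray):
    """Map float weights to signed 4-bit codes in [-8, 7], one scale per tensor.

    Toy sketch: assumes a nonzero tensor; production code would also handle
    all-zero tensors, per-channel scales, and packed int4 storage.
    """
    scale = float(np.abs(weights).max()) / 7.0
    codes = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return codes, scale

def dequantize_4bit(codes: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the 4-bit codes."""
    return codes.astype(np.float32) * scale

# Round-trip a random weight tensor and measure the reconstruction error.
rng = np.random.default_rng(0)
w = rng.normal(size=1024).astype(np.float32)
codes, scale = quantize_4bit(w)
w_hat = dequantize_4bit(codes, scale)
max_err = float(np.abs(w - w_hat).max())  # bounded by scale / 2
```

Even this toy gives up as much as half a quantization step per weight; preserving model accuracy while actually packing two codes per byte and keeping the kernels fast is engineering work, not notebook work.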

It quickly becomes evident that none of these specialties exist in-house, at least not in a useful distribution. Moreover, most enterprises lack the established AI culture, workflow, and tooling needed to harmonize these streams as one. A new in-house AI team will initially be confined to churning out proofs-of-concept and prototypes (which carry an irreversibly negative ROI).

Corporate culture.

I know you have a great corporate culture, but hear me out. Your core business might have a fantastic work environment, but you probably don’t have a tech-company vibe (unless you are a tech company).

When you bring AI in-house, retention becomes a challenge. The talented AI people you spent time recruiting will quickly feel isolated working someplace with conventional values. Tech culture permeates everything from breakroom conversations to the recognition employees expect from a boss after a hard-won but technically nuanced accomplishment. If you don’t have a vibrant tech culture (or AI culture), attrition will become a problem.

This can be even worse in government organizations, where the most talented AI people have the shortest tenure. The culture of government work and federal contracting can be asphyxiating to AI talent, who have plenty of opportunities in the private sector.

Partner or acquire (or both).

At best, most organizations have a slim chance of emerging heroically after bringing AI/ML in-house.

Executives unwisely assume, “AI is easy, it’s just Python with a little math”. In truth, AI is a highly specialized field with an enormous risk envelope. For the same reasons you wouldn’t bring semiconductor design in-house for the enterprise, you shouldn’t bring AI in-house either.

When contemplating in-house AI, it is easy to be fooled into overconfidence by “successful” prototype projects. But compelling prototypes are easy. The first truth about AI prototypes is that they all showcase something astonishingly novel that almost seems too easy. The second truth is that most are pointless excursions that rarely represent material progress. Only about 15% of enterprise AI initiatives exit the prototype stage. Fewer still monetize. If you plan to tackle AI on your own, without partnering with or acquiring an experienced firm, start by answering the question, “what separates us from the 85% that fail?”

The risk envelope for AI contains every systems development risk you can think of, plus the technical and scientific risks inherent to AI. Few enterprises can channel sufficient competence in systems engineering, software development, and technical management at once.

If your organization doesn’t already have a well-oiled machine for AI, the only way to hit the ground running is to partner with, or acquire, an experienced firm that brings an established culture, workflow, and tool stack. If you must vertically integrate AI, it is best to acquire a mature capability and keep it somewhat independent. If you ramp internally, don’t waste time pursuing prototypes and proofs-of-concept; focus on building a strong capability to field (not just demo) product through AI culture, workflow, and tooling – then expect it to take 18 months before you see any R from the ROI.

Thanks for staying to the end, I hope it was interesting.