Corporate Giants Build In-House AI to Shield Data and Boost Efficiency
From McKinsey to Walmart, businesses are swapping public chatbots for custom AI tools, but at what cost to workers and security?
Big businesses are treating public AI like sketchy airport Wi-Fi—avoiding it at all costs to protect sensitive data. Titans of industry are banning employees from using public chatbots like ChatGPT for work, especially on confidential client projects, and instead rolling out custom, in-house AI tools tailored to their needs.
In consulting, the Big Three are all-in on proprietary AI. McKinsey’s internal platform, Lilli, is used monthly by 75% of its 40,000 employees for tasks like research and crafting PowerPoint slides. Kate Smaje, McKinsey’s AI lead, insists this doesn’t “necessarily” mean fewer jobs, but the firm has axed 5,000 workers since Lilli’s 2023 launch. Coincidence?
In banking, Morgan Stanley’s AI has saved coders 280,000 hours this year by converting legacy code to modern standards. At Goldman Sachs, CEO David Solomon bragged in January that AI can whip up 95% of an IPO prospectus—work that once took six people over two weeks—in mere minutes.
Retail’s not slacking either. Walmart’s Trend-to-Product AI slashes clothing design timelines from six months down to six to eight weeks by scanning internet trends to generate mood boards. Target and Amazon have also launched employee-only chatbots to keep ChatGPT at bay.
But it’s not all smooth sailing. UnitedHealth Group’s Optum learned the hard way when its internal AI chatbot, used for claims advice, was found publicly accessible in December, per TechCrunch. Optum yanked it offline, calling it a “demo” that was never meant to scale.
MY MUSINGS:
The shift to in-house AI is a double-edged sword. On one hand, it’s a smart move—custom tools boost efficiency and keep sensitive data under lock and key. McKinsey’s Lilli and Goldman’s prospectus generator show how AI can handle grunt work, freeing humans for higher-value tasks. But the human cost is murky. McKinsey’s layoffs post-Lilli raise red flags: are these tools augmenting workers or quietly replacing them? And Optum’s security slip proves even “secure” AI can be a liability if not airtight. I’m skeptical of claims that AI won’t shrink headcounts long-term—history shows automation often prioritizes profits over people. Still, the speed and scale of these tools are undeniable. Walmart’s six-to-eight-week design cycle is a game-changer in retail.
What do you think—can companies balance AI efficiency with job security? Have you seen in-house AI tools in your workplace, and are they as secure as promised?
I’d love to hear your views in the Comments section below.
Take your expertise to the next level. Whether you're focused on fintech, banking, operational risk, global payments, or blockchain, my CPE-certified Illumeo courses deliver real-world insights grounded in decades of experience as a banker, business analyst, and trainer. If you found this podcast valuable, you'll gain even more from the structured, practical training in these online courses. Click the “My Illumeo Courses” link below to explore.