Welcome to the 1st issue of In Context. This issue covers news from the week of April 27th 2026.
Listen to the “podcast” style version of this issue below!
Generated with Google’s NotebookLM.
Using an AI chatbot like ChatGPT in the workplace used to be a professional advantage.
Now, it's table stakes.
The difference: chatbots answer your questions. Agents can use tools and skills to actually deliver work for you.
Saying “we use ChatGPT” is going to start landing like saying “we have a website” in 2005.
Chatbots aren’t going away, but for a modern enterprise the Agentic Era is here.
Table of Contents
Agents are arriving, as default, in workplace platforms
What Happened:
The largest workplace platforms just introduced AI agents as a default feature.
What it means:
Organizations in the Google or Microsoft ecosystems now have access to AI-powered agents that can take action inside their work.
On one hand, this is a powerful shift: chatbots were the most accessible form of AI for most people, and now those same people have direct access to agents built by the largest enterprise software providers.
On the other hand, business leaders who aren't up to speed on agents have just been pushed to get educated and form policy for their organization before the "can we use this?" questions start coming in.
The bottleneck in realizing ROI on AI usage is not the technology.
It’s the organization’s data, education, policies, and lack of adoption
What Happened:
What it means:
While a large portion of the market is “using AI”, only a small percentage is actually doing it systematically and seeing returns.
Only 25% of organizations are reporting "transformative impact."
Organizations putting a structured plan together for AI are still in the minority, but the ones taking it seriously are the ones who can actually measure success, and therefore the returns.
Deloitte’s AI Discussion KPIs:
- Time-to-production rate
- Reclaimed hours per employee
- AI budget trajectory
Executives in Stanford’s report also cited inaccuracy as their top AI risk.
The models are more than capable at this point; the hurdle is now a familiar one: good data in, good results out.
Even if it feels like it, AI isn’t magic. These are still tools that can be used well or used poorly.
So far, AI pricing has been simplified to make adoption easy, but the economics don’t work the same as a Netflix subscription.
We’re seeing the first signs of a shift.
What Happened:
What it means:
Monthly subscriptions to AI tools or model providers, like OpenAI and Anthropic, keep things easy for the end-user, but AI usage is still a resource intensive process where hard costs for a provider scale directly with usage.
Light users who pay the same as heavy users each month help to subsidize the actual cost of using AI on a day-to-day basis.
As those "light" users continue to convert to heavy usage, margins are shrinking quickly for providers.
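The subsidy math can be sketched with a quick back-of-the-envelope model. Every number below (the flat price, per-query cost, and usage figures) is a hypothetical assumption for illustration, not actual provider data:

```python
# Illustrative only: hypothetical numbers showing how flat-rate pricing
# margins erode as light users convert to heavy usage.
PRICE = 20.00          # flat monthly subscription per user (assumed)
COST_PER_QUERY = 0.01  # provider's hard compute cost per query (assumed)

def monthly_margin(light_users, heavy_users,
                   light_queries=100, heavy_queries=5000):
    """Revenue minus compute cost for one month, under flat-rate pricing."""
    revenue = (light_users + heavy_users) * PRICE
    cost = (light_users * light_queries +
            heavy_users * heavy_queries) * COST_PER_QUERY
    return revenue - cost

# Mostly light users: the flat rate comfortably covers compute.
print(monthly_margin(light_users=900, heavy_users=100))

# Same subscriber count, but half have converted to heavy usage:
# revenue is unchanged while compute costs balloon.
print(monthly_margin(light_users=500, heavy_users=500))
```

With these made-up figures, the same 1,000 subscribers flip the provider from a healthy margin to a loss purely through a shift in usage mix, which is the dynamic a Netflix-style subscription never faces.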
When the name synonymous with AI, ChatGPT, misses its quarterly targets and has executives publicly arguing over funding for future capacity expenses… it's worth paying attention.
Case Studies
GE deployed 800+ AI agents across the organization using Google Cloud Gemini Enterprise.
Shift summaries now take minutes, not hours.
Live monitoring reduced equipment downtime.
Factory workers can now access production data without the need for specialist support.
All embedded in GE’s “Brilliant Factory” platform.
80% of their staff use Google's Gemini Enterprise daily.
Within weeks of launching, "hundreds of solutions" were built by legal, marketing, compliance, and sales.
Solutions built for teams, by teams. Not dictated to them by their technology team.
Their AI enablement director said the key was having their risk team involved in the process from the start.
Instead of stepping in to shut things down once things went wrong, the risk team helped to enable adoption through a structured approval process.