Has anyone reviewed both the MIT success rate and the ChatGPT success rate?
Microsoft has been slow to implement contextual intelligence in Copilot, requiring users to repeatedly provide context. Recent updates have improved this, but the ability to select different LLM models, such as those from Anthropic, may further enhance Copilot’s capabilities.
I’ve reviewed both. The ChatGPT report provides metrics on usage, segmented into categories such as asking, doing, and expressing, with insights into what drives effectiveness. The MIT report analyzes corporate success rates in scaling AI from pilot to enterprise, identifying patterns that separate the successes. I combined both reports in Google’s NotebookLM to create a podcast.
Oh please, mods, please make external links work. And Steve, please do share your NotebookLM podcast! That would be a real benefit to the community.
This is a big issue. I’m involved in an industry survey, prompted by the MIT study, to assess real adoption rates. Many organizations deploy Copilot or ChatGPT Enterprise without tracking actual usage.

Education, training, and enforcement are critical for successful adoption. Firms with top-down mandates and enforcement see dramatically higher adoption rates, while others see uptake of around 20%. Monitoring usage and quality remains challenging; preliminary findings suggest Copilot adoption is low outside specific functions like meeting transcription.

Contextual intelligence is essential for effective AI use, and organizations must layer this capability on top of existing tools.
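Even a rough usage-log analysis beats no tracking at all. Here's a minimal sketch of what an adoption-rate check might look like, assuming a hypothetical CSV export (copilot_usage.csv with user_id and last_activity_date columns) and an assumed seat count; real admin-console exports have different schemas, so treat the column names as illustrative.

```python
# Minimal adoption-rate sketch. Assumes a hypothetical export
# "copilot_usage.csv" with columns: user_id, last_activity_date (ISO dates).
# Real Microsoft 365 / ChatGPT Enterprise exports will differ.
import csv
from datetime import date, timedelta

LICENSED_SEATS = 500         # assumed seat count, for illustration only
ACTIVE_WINDOW_DAYS = 28      # "active" = any recorded use in the last 28 days

cutoff = date.today() - timedelta(days=ACTIVE_WINDOW_DAYS)
active_users = set()

with open("copilot_usage.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Count a user as active if their last recorded activity
        # falls inside the measurement window.
        if date.fromisoformat(row["last_activity_date"]) >= cutoff:
            active_users.add(row["user_id"])

adoption_rate = len(active_users) / LICENSED_SEATS
print(f"Active users: {len(active_users)} / {LICENSED_SEATS} "
      f"({adoption_rate:.0%} adoption)")
```

The 28-day window is just one way to define "active"; tighten or widen it depending on how strictly you want to count engagement, and note that a login-based metric still says nothing about quality of use.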