TL;DR: Jamie Pull just shipped V2, with "Deep Mode" now powered by DeepSeek V4. Multi-angle podcast research, better proper-noun search, same 10-cent pricing.
What's New
With Jamie Pull's V2 release, Deep Research mode powered by DeepSeek V4 is running the show.
We swapped in DeepSeek's open-source V4 model for search and synthesis. It's a 1.6-trillion-parameter Mixture-of-Experts beast that costs a fraction of what closed models charge. V4-Flash runs at $0.14 per million tokens while matching GPT-4-class performance. That's 268x cheaper than Claude Opus.
All those savings? We reinvested them into making Jamie search harder and think deeper.
Multi-angle research, not just "here's the top result"
When you ask Jamie a question now, it explores the topic from multiple angles. Ask about CBDCs and you'll get the Bitcoin maximalist take, the Fed perspective, the privacy angle, the developing-world view. Comprehensive answers, not just first-match-wins.
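A minimal sketch of what that fan-out looks like. The angle list and function names here are illustrative assumptions, not Jamie's actual internals:

```python
# Hypothetical sketch of multi-angle research fan-out.
# The angle list and helper name are assumptions for illustration,
# not Jamie's real implementation.

ANGLES = [
    "Bitcoin maximalist critique",
    "central bank perspective",
    "privacy implications",
    "developing-world adoption",
]

def multi_angle_queries(question: str) -> list[str]:
    """Expand a single question into one sub-query per perspective."""
    return [f"{question} ({angle})" for angle in ANGLES]

queries = multi_angle_queries("What are CBDCs?")
# Each sub-query is searched independently, then synthesized into one answer.
```

Running each sub-query separately is what keeps the answer from collapsing into whichever source happens to rank first.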
Deep vs Fast: Choose Your Best Fit
We give you two ways to ask. Deep mode (the default) throws our most capable models at your question: multi-step reasoning, cross-referenced sources, the works. You'll wait 60-90 seconds for that thoroughness.

Fast mode runs a leaner, single-pass answer in 30-45 seconds. Perfect for quick lookups or questions you mostly know the answer to. The kicker: both cost the same per call. No premium tier, no upcharge for "thinking harder."
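In client code, picking a mode could be as simple as one field in the request body. This is a hedged sketch; the field names and defaults are assumptions, not Jamie's published API:

```python
# Hypothetical request-builder for a Jamie query.
# Field names ("question", "mode") are assumptions for illustration.

def build_request(question: str, mode: str = "deep") -> dict:
    """Build the JSON body for a query.

    mode="deep" (default): multi-step reasoning, ~60-90 s.
    mode="fast": leaner single-pass answer, ~30-45 s.
    Both cost the same per call, so the choice is purely latency vs depth.
    """
    if mode not in ("deep", "fast"):
        raise ValueError(f"unknown mode: {mode!r}")
    return {"question": question, "mode": mode}

body = build_request("Who coined the term 'hyperbitcoinization'?", mode="fast")
```

Since price is identical either way, defaulting to "deep" and reaching for "fast" only on quick lookups is the sensible client-side policy.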
Why This Matters
Open-source models are eating the world. DeepSeek V4 dropped on April 24th with MIT-licensed weights and benchmark scores that rival or beat GPT-5 and Claude Opus on coding tasks, at pennies on the dollar.
That price gap isn't just academic. It's what lets us run deeper, more comprehensive searches without charging you $50/month.
Same 10 cents per call. More angles explored. Better answers.
Try It
- Web App
- Agent Quick Start
Still L402 Lightning-payable. Still zero setup. Just better.
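For the curious, the L402 flow itself is simple: the server answers an unpaid request with HTTP 402 and a `WWW-Authenticate: L402` challenge carrying a macaroon and a Lightning invoice; the client pays the invoice, then retries with the macaroon and payment preimage in the `Authorization` header. A sketch of the client-side header handling (standard library only; the sample values are made up):

```python
import re

def parse_l402_challenge(header: str) -> tuple[str, str]:
    """Extract the macaroon and invoice from a WWW-Authenticate: L402 challenge."""
    mac = re.search(r'macaroon="([^"]+)"', header)
    inv = re.search(r'invoice="([^"]+)"', header)
    if not (mac and inv):
        raise ValueError("not a valid L402 challenge")
    return mac.group(1), inv.group(1)

def l402_authorization(macaroon: str, preimage_hex: str) -> str:
    """Build the Authorization header for the retried, paid request."""
    return f"L402 {macaroon}:{preimage_hex}"

# Example with placeholder values:
challenge = 'L402 macaroon="AGIAJE...", invoice="lnbc100n1p..."'
macaroon, invoice = parse_l402_challenge(challenge)
# ...pay `invoice` over Lightning, obtain the preimage, then retry with:
auth = l402_authorization(macaroon, "d1e2a3...")
```

Paying the invoice yields the preimage, which doubles as the proof of payment, so no accounts or API keys are ever needed.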