Quick summary:

  • 70% of AI initiatives fail because companies shoehorn AI into broken processes instead of redesigning workflows around the technology
  • AI agents are moving beyond chatbots to handle complex tasks like vendor negotiations, with some achieving 90% automation rates in customer support
  • Investment opportunities in AI startups are opening to individual investors with as little as $10,000 through new fund structures
  • Functional roles are blurring as AI amplifies human capabilities across traditional job boundaries
  • Content provenance will become critical as AI-generated material floods training datasets

Francisco Martin discovered something troubling about his manufacturing clients. They hired expensive consultants, bought cutting-edge AI tools, and still got zero return on investment.

Ray Wu watched trillion-dollar IPOs happen while individual investors got locked out of the early growth that created those valuations.

Sami Shalabi saw customer support teams drowning in ticket volume, unable to scale their human workforce fast enough.

Erik Svilich realized that nobody could tell which content was human-created versus machine-generated, threatening the foundation of copyright law.

Each built a solution, and they showed their work off at Ai4, where they joined host Greg Matusky for a special episode of The Disruption Is Now. The conversations show how AI agents are becoming more capable, how individual investors can get in on the ground floor with AI startups, and how we can actually distinguish AI-generated work from human work. The guests include:

Watch now: 

Key takeaways: 

70% of AI initiatives fail because companies don’t change their processes

Martin sees the same pattern across industries. “70% of AI initiatives produce zero ROI, which is absolutely ridiculous,” he explains. “Most of the time companies try to shove AI into an existing process that already doesn’t work very well.”

The solution isn’t better tools. Companies need to measure business outcomes instead of outputs, empower front-line employees with decision-making authority, and redesign processes from scratch around AI capabilities.

Martin’s team at Stride has seen success with manufacturing clients using AI agents for vendor negotiations. The agents find suppliers, negotiate contracts, and optimize supply chains, often performing better than human procurement teams.
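
In the Q&A below, Martin describes these agents as large language models given tools and a task. As a rough, hypothetical sketch only, here is what that pattern can look like in Python with stubbed tools; the names (`find_suppliers`, `request_quote`, `negotiate`, `run_agent`) and the hard-coded numbers are invented for illustration, and a production agent would let the model decide which tool to call next rather than follow a fixed sequence.

```python
# Minimal sketch of the "LLM given tools and a task" pattern.
# The tool functions are hypothetical stubs, not Stride's actual system.
from dataclasses import dataclass

@dataclass
class Quote:
    supplier: str
    unit_price: float

def find_suppliers(part: str) -> list[str]:
    """Tool: look up candidate suppliers for a part (stubbed)."""
    return ["Acme Metals", "Borealis Alloys"]

def request_quote(supplier: str, part: str) -> Quote:
    """Tool: ask a supplier for pricing (stubbed)."""
    return Quote(supplier, 12.40 if supplier == "Acme Metals" else 11.95)

def negotiate(quote: Quote, target_price: float) -> Quote:
    """Tool: counter-offer toward a target price (stubbed)."""
    return Quote(quote.supplier, round(max(target_price, quote.unit_price * 0.93), 2))

def run_agent(part: str, target_price: float) -> Quote:
    """Agent loop: a real system would have the model choose tools dynamically;
    the sequence is hard-coded here just to show the overall shape."""
    quotes = [request_quote(s, part) for s in find_suppliers(part)]
    best = min(quotes, key=lambda q: q.unit_price)
    return negotiate(best, target_price)

print(run_agent("aluminum bracket", target_price=10.50))
```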

Individual investors can now access startup deals with $10,000 minimums

Wu spent years watching venture capital create massive returns while staying locked behind million-dollar minimums and exclusive networks.

Alumni Ventures built a different model. Its funds take $10,000 investments and create diversified portfolios of 15-20 startups per year, co-investing alongside established VCs like Kleiner Perkins and Sequoia.

“You think about where everyone can build a mutual fund portfolio of public stocks. Why can’t you build a portfolio of private stocks?” Wu asks.

The timing matters. Companies that once went public at $100 million valuations now wait until they hit $10 billion. Individual investors miss the exponential growth phase entirely unless they can access pre-IPO rounds.

Wu’s 850,000 community members, drawn largely from university alumni networks, provide startups with capital as well as employee pipelines, advisory board members, and customer connections across 1,600 portfolio companies.

AI agents can handle 90% of customer support so humans can drive revenue

Shalabi’s team at Maven AGI processes 15 million customer support tickets across 50+ enterprise clients. Their AI agents operate in every channel — chat, voice, social media, email — and achieve over 90% automation rates.

The transformation goes beyond efficiency metrics. Customer support agents now focus on high-complexity issues and customer success work, generating revenue instead of just resolving complaints.

“What it does is free people up to actually drive more customer interactions,” Shalabi explains. “Support agents are starting to do customer success work, which is driving more value.”

The shift creates unexpected benefits. Employee productivity increases 35-40%. Training time for new hires drops from months to weeks. Most importantly, job roles blur as AI amplifies human capabilities across traditional boundaries.

As Shalabi notes, “People are more productive not just in their own domain, but in other domains.”

Content provenance will determine AI training quality and legal liability

Svilich built EncypherAI after realizing that nobody could prove whether content was human or machine-generated. The implications stretch from academic integrity to billion-dollar copyright lawsuits.

“Publishing and media organizations are suing AI companies for billions of dollars,” Svilich explains. “Currently there’s no infrastructure layer for how The New York Times embeds proof of origin into text.”

The solution involves embedding invisible markers that track content creation, modification, and usage. When AI systems ingest training data, they can identify sources, respect licensing, and avoid the “garbage in, garbage out” problem that degrades model performance.
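
Svilich doesn’t detail EncypherAI’s implementation on the show, but one common way to hide a provenance tag in plain text is with zero-width Unicode characters. The sketch below is purely illustrative and assumes that scheme; `embed_marker` and `extract_marker` are hypothetical names, not EncypherAI’s API.

```python
# Illustrative only: hide a provenance tag in zero-width Unicode characters.
# An assumption about how such markers *could* work, not EncypherAI's method.
ZERO = "\u200b"  # zero-width space      -> bit 0
ONE = "\u200c"   # zero-width non-joiner -> bit 1

def embed_marker(text: str, tag: str) -> str:
    """Append the tag as an invisible bit sequence after the visible text."""
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    return text + "".join(ONE if b == "1" else ZERO for b in bits)

def extract_marker(text: str) -> str:
    """Recover the hidden tag, ignoring every visible character."""
    bits = "".join("1" if ch == ONE else "0" for ch in text if ch in (ZERO, ONE))
    usable = len(bits) - len(bits) % 8
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8))
    return data.decode("utf-8", errors="ignore")

marked = embed_marker("A paragraph of reporting.", "origin=publisher;human_authored=true")
print(extract_marker(marked))  # -> origin=publisher;human_authored=true
```

A crawler could check for markers like these before adding text to a training set, which is the source-identification and licensing step described above.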

Svilich warns that training AI models on synthetic data creates a deteriorating feedback loop, like making copies of copies until the original quality disappears entirely.
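
The copies-of-copies analogy can be made concrete with a toy simulation. The sketch below is not a real training run: the “model” is just a fitted Gaussian that is repeatedly retrained on its own synthetic samples, which is the feedback loop Svilich warns about.

```python
# Toy illustration of a synthetic-data feedback loop ("model collapse").
# Not an LLM training run: the "model" here is only a fitted Gaussian.
import random
import statistics

def fit(samples):
    """'Train' the model: estimate mean and spread from the data."""
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mean, stdev, n):
    """'Publish' synthetic content by sampling from the fitted model."""
    return [random.gauss(mean, stdev) for _ in range(n)]

random.seed(7)
data = [random.gauss(0.0, 1.0) for _ in range(25)]  # original, human-made data

for gen in range(1, 9):
    mean, stdev = fit(data)
    data = generate(mean, stdev, 25)  # next generation trains only on synthetic output
    print(f"generation {gen}: mean={mean:+.3f}  stdev={stdev:.3f}")

# Across runs the fitted spread tends to drift and narrow relative to the
# original 1.0 -- the numeric analogue of photocopying a photocopy.
```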

Key moments: 

  • How AI agents negotiate vendor contracts better than humans (1:00)
  • Why 70% of AI initiatives fail to generate ROI (3:01)
  • Guardrailing code for performance and accessibility before launch (10:17)
  • Greg’s P3 plan for AI-search era media placement and publishing (12:14)
  • The $10,000 minimum that democratizes VC access (14:07)
  • 850,000 community members creating startup value beyond capital (19:27)
  • Infra to agents — where the next wave of AI value lands (20:31)
  • 15 million support tickets and 90% automation rates (24:01)
  • From complaint resolution to revenue generation (28:58)
  • 35-40% productivity lift when agents live inside workflows (30:37)
  • Why roles blur across support, success, sales, and product (33:04)
  • The UX future isn’t a chatbot — it has yet to be invented (36:29)
  • Copyright lawsuits and the $2.7 billion synthetic data problem (40:55)
  • Why content provenance matters for model training (42:08)
  • Academic integrity and licensing need real authorship signals (45:10)

Q&A

Q: How do AI agents actually perform vendor negotiations?

Martin: These are instances of large language models that are given tools and a task, and they can go ahead and carry it out themselves.

We work with manufacturing clients where agents find sources and negotiate contracts with vendors to optimize supply chains. On average, yes, they perform better than humans. Often you have an agent on the other end too, so it’s agent-to-agent negotiation.

Q: What makes Alumni Ventures different from traditional VC access?

Wu: Most of the time it takes half a million to $2 million to get into established VCs, and they ask what value you bring beyond capital. Ours is about investing with others who create value together. We have 850,000 community members from university alumni networks like Stanford, Harvard, MIT. With a $10,000 investment, they get a diversified portfolio of 15-20 companies yearly, co-investing with well-established VCs.

Q: How does Maven AGI achieve 90% automation in customer support?

Shalabi: We’ve built AI agents that operate in every imaginable modality — chat, text, voice, social media. The platform integrates across all enterprise systems to resolve issues. But the bigger change is human roles. Support agents now focus on high-complexity issues and customer success work. Functions are blurring — support does success work, success does sales work, because AI amplifies capabilities across domains.

Q: Why is content provenance critical for AI development?

Svilich: About $2.7 billion gets spent on AI model training, but there’s a concept called “model collapse” — training on synthetic data that degrades model performance over time. Currently there’s no way to embed proof of origin into text content. When OpenAI ingests training data, they can’t tell if it came from The New York Times or was machine-generated. [Encypher’s] technology embeds invisible markers that track content creation and usage, helping models avoid synthetic data pollution.