
Can AI Steal Your Book? The Alarming Plagiarism Problem! | US Publishing Expert
1/10/2026 | 41 mins.
What if your book could be copied, republished, and sold under someone else’s name, and you’d barely know it happened? In this episode of the An Hour of Innovation podcast, host Vit Lyoshin speaks with Julie Trelstad, a longtime publishing leader and one of the most thoughtful voices on copyright, metadata, and digital trust. Julie brings a rare insider’s view into how books are discovered, distributed, and increasingly misused in an AI-driven world.

They explore a growing fear among writers, creators, and publishers: how AI is quietly reshaping plagiarism, authorship, and trust in the publishing ecosystem. They examine how AI-generated content is blurring the line between original work and imitation, why traditional copyright protections struggle in a machine-readable world, and how fake or derivative books can appear online within days. The episode breaks down the real risks authors face today, not hypothetical futures, and what structural changes may be required to protect creative work. It’s a practical, sober look at AI plagiarism.

Julie Trelstad is a publishing executive and strategist known for her work at the intersection of technology and intellectual property. She has spent decades helping publishers, authors, and platforms navigate content identification, protection, and trust at scale. Her perspective matters because she explains not just that AI plagiarism is happening, but why the system makes it so hard to detect and stop, and what could actually help.

Takeaways
* AI can clone and resell a book in days, and most platforms struggle to reliably prove that the theft occurred.
* AI-generated plagiarism often looks legitimate enough to fool retailers, reviewers, and buyers.
* Authors lose sales and reputation when fake AI versions of their books appear at lower prices.
* Traditional copyright law exists, but it was never designed for machine-scale copying and AI training.
* There has been no machine-readable way for AI systems to recognize who owns content, until now.
* Content fingerprinting can detect similarity across languages and paraphrased AI rewrites.
* Time-stamped content registries can establish legal proof of who published first.
* Most books already inside AI models were scraped without the author’s consent or compensation.
* AI lawsuits focus less on training itself and more on the use of pirated content.
* Authors could earn micro-payments when AI systems use specific paragraphs or ideas from their work.

Timestamps
00:00 Introduction
01:37 Why AI Plagiarism Is So Hard to Detect
03:25 Amlet.ai and the Fight for Content Ownership
05:32 How Copyright Worked Before Generative AI
08:09 The Origin Story Behind Amlet.ai
12:22 Building Machine-Readable Infrastructure for Copyright
14:24 How Publishing Is Changing in the AI Era
17:34 How Authors Can Protect Their Work with Amlet.ai
20:38 Tools Publishers Use to Detect and Enforce Rights
21:38 How Authors Can Monetize Content Through AI
24:27 The Reality of AI Scraping and Plagiarism Today
27:00 Publisher Rights, Digital Security, and Enforcement
29:08 Evolving the Business Model for AI Licensing
35:34 The Future of Digital Ownership and AI Rights
38:37 Innovation Q&A

Support This Podcast
* To support our work, please check out our sponsors and get discounts: https://www.anhourofinnovation.com/sponsors/

Connect with Julie
* Website: https://paperbacksandpixels.com/
* LinkedIn: https://www.linkedin.com/in/julietrelstad/
* Amlet AI: https://amlet.ai/

Connect with Vit
* Substack: https://substack.com/@vitlyoshin
* LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
* X: https://x.com/vitlyoshin
* Website: https://vitlyoshin.com/contact/
* Podcast: https://www.anhourofinnovation.com/
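The episode credits content fingerprinting with catching even paraphrased AI rewrites. The notes don’t describe Amlet.ai’s actual method, but the core idea of a similarity fingerprint can be sketched with word shingles and Jaccard similarity — a simplified, hypothetical stand-in, not the real system:

```python
# Illustrative toy fingerprint: word shingles + Jaccard similarity.
# NOT Amlet.ai's algorithm; a simplified stand-in for the general idea.

def shingles(text: str, k: int = 5) -> set:
    """Set of overlapping k-word windows ("shingles") from the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(len(words) - k + 1, 0))}

def similarity(a: str, b: str, k: int = 5) -> float:
    """Jaccard similarity of the two texts' shingle sets, in [0.0, 1.0]."""
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "The quick brown fox jumps over the lazy dog near the river bank"
rewrite = "The quick brown fox jumps over the lazy dog near the old mill"
print(round(similarity(original, rewrite, k=3), 2))  # high overlap -> likely copy
```

Production systems typically hash the shingles (e.g., MinHash) and use cross-lingual embeddings so fingerprints stay compact and survive translation; the toy version only shows why near-duplicates score high while unrelated texts score near zero.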

Why Self-Driving Trucks Are So Hard: Safety, Data & AI Explained
1/03/2026 | 52 mins.
In this episode of An Hour of Innovation, host Vit Lyoshin and Achyut Boggaram, a Senior Machine Learning Engineer at Torc Robotics, explore what truly goes on behind the scenes of autonomous trucks and why full self-driving has taken far longer than public timelines promised.

They explore why autonomous trucks are not just an AI problem but a safety-critical engineering challenge involving hardware, software, data, and regulation. The conversation covers how machine learning models interpret the real world, why edge cases are hazardous, and how autonomous vehicles generate massive amounts of sensor data in a matter of minutes. Achyut explains why redundancy, certification, and testing are treated more like rocket engineering than traditional software development. They also unpack common misconceptions about AI capability and data scale, and why impressive demos rarely reflect real-world autonomy.

Achyut Boggaram is a senior machine learning engineer focused on applied AI research for autonomous trucking. He has led work on large-scale perception models, sensor fusion systems, and production machine learning pipelines that run directly on self-driving trucks. His expertise spans safety-critical AI, data infrastructure, and real-world deployment, making his insights essential to understanding why autonomy remains so challenging.

Takeaways
* A single missed annotation, like a stop sign or yield sign, can lead to catastrophic outcomes with an 80,000-pound vehicle.
* Self-driving demos work in controlled environments, but real autonomy breaks down once conditions are unpredictable and unstructured.
* Autonomous trucks can generate 600–800 terabytes of data in just 20 minutes due to raw, uncompressed sensor capture.
* Machine learning models struggle to generalize the way humans do, even after billions of miles of training data.
* Safety in autonomous trucking is treated like rocket engineering, with redundancy required at every hardware and software layer.
* Autonomous trucks must run entirely on board without internet access, making real-time decision-making far more constrained.
* When AI is uncertain, the safest response is not intelligence but a minimum risk maneuver, often pulling over or stopping.
* Synthetic and photorealistic simulated data are now essential to train for rare but dangerous scenarios that may never occur in real life.
* Autonomous systems can outperform humans in extreme conditions, detecting pedestrians at long distances in fog or darkness.
* Autonomous trucks are not replacing drivers today, but filling a growing labor gap that could reach hundreds of thousands of unfilled jobs.

Timestamps
00:00 Introduction
02:41 Why Autonomous Vehicles Still Struggle in the Real World
05:40 What It Really Takes to Put Autonomous Trucks on Public Roads
10:05 Safety Certifications That Decide If Autonomous Trucks Are Allowed
15:50 How Self-Driving Trucks Generate Massive Amounts of Data
20:09 How Autonomous Trucks Handle Dangerous and Unexpected Situations
23:20 The Full AI Training Pipeline for Autonomous Vehicles
31:33 The Most Critical Safety Gates in Autonomous Truck Testing
34:21 Breakthrough AI Techniques for Fog, Night, and Extreme Conditions
38:07 The Real Timeline for Autonomous Trucks Becoming Reality
39:52 The Hardest Problems Blocking Full Self-Driving
41:28 Are Autonomous Vehicles Inevitable?
42:34 Electric vs Diesel Autonomous Trucks
43:53 Will Autonomous Trucks Replace Human Drivers?
48:09 Innovation Q&A

Support This Podcast
* To support our work, please check out our sponsors and get discounts: https://www.anhourofinnovation.com/sponsors/

Connect with Achyut
* Website: https://torc.ai/
* LinkedIn: https://www.linkedin.com/in/achyutsarma/

Connect with Vit
* Substack: https://substack.com/@vitlyoshin
* LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
* Podcast: https://www.anhourofinnovation.com/
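The 600–800 terabytes in 20 minutes figure is easier to appreciate as a sustained data rate. A quick back-of-the-envelope check, using only the episode’s own numbers (and assuming decimal terabytes):

```python
# Back-of-the-envelope check: 600-800 TB of raw sensor data in ~20 minutes,
# expressed as a sustained capture rate. Assumes decimal TB = 10**12 bytes.

def sustained_rate_gb_per_s(terabytes: float, minutes: float) -> float:
    """Average capture rate in gigabytes per second."""
    return terabytes * 10**12 / (minutes * 60) / 10**9

low = sustained_rate_gb_per_s(600, 20)
high = sustained_rate_gb_per_s(800, 20)
print(f"~{low:.0f}-{high:.0f} GB/s sustained")  # on the order of hundreds of GB/s
```

At roughly 500–670 GB/s, no network link could stream this off the vehicle in real time — which is consistent with the takeaway above that the trucks must run entirely on board without internet access.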

Functional Precision Medicine: How Cancer Drugs Are Tested Before Treatment | Jim Foote
12/20/2025 | 46 mins.
Cancer care still forces patients and doctors to guess! Learn how functional precision medicine is replacing that uncertainty by testing cancer drugs before treatment even begins.

In this episode of the An Hour of Innovation podcast, host Vit Lyoshin speaks with Jim Foote, co-founder and CEO of First Ascent Biomedical, an innovator who is challenging one of the most uncomfortable truths in modern medicine: many cancer treatments are chosen without knowing if they will actually work. First Ascent Biomedical is a company focused on transforming personalized cancer treatment through functional precision medicine and data-driven decision support.

In this conversation, they explore how functional precision medicine differs from traditional precision medicine and why testing drugs on patients’ live tumor cells changes everything. Jim explains how AI, robotics, and large-scale drug testing help doctors move from trial-and-error to a true test-and-treat approach. The discussion also covers the risks of ineffective or harmful treatments, the economic cost of cancer care, and what must change for this model to become part of standard oncology practice.

Jim Foote is a former technology executive turned healthcare innovator whose work is deeply shaped by personal loss and firsthand experience with cancer care. He is best known for advancing functional precision medicine by combining genomics, live-cell drug testing, and AI-driven analysis to guide treatment decisions. His perspective matters because it connects real clinical outcomes with the technology needed to give doctors and patients clearer, faster, and more humane options.

Takeaways
* Cancer treatment still relies heavily on trial-and-error, even with modern medical technology.
* Two biologically different patients often receive the same cancer treatment based on population averages.
* Precision medicine based on DNA and RNA sequencing still cannot confirm if a drug will work before it’s given.
* Functional precision medicine tests drugs directly on a patient’s live tumor cells before treatment begins.
* Some FDA-approved cancer drugs can be completely ineffective or even make a patient’s cancer worse.
* Testing drugs outside the body can prevent patients from being exposed to harmful or useless treatments.
* AI and robotics enable hundreds of drug tests to be completed in days instead of weeks or months.
* In a published study, 83% of refractory cancer patients did better when treatment was guided by this approach.
* Knowing which drugs won’t work is just as important as knowing which ones will.
* Personalized, test-and-treat cancer care has the potential to improve outcomes while reducing overall healthcare costs.

Timestamps
00:00 Introduction
02:46 The Core Problem in Modern Cancer Care
04:16 Functional Precision Medicine Explained
06:42 How AI, Robotics, and Data Are Changing Cancer Treatment
10:01 How Cancer Drugs Are Tested Before Treatment
13:20 Personalized, Patient-Centric Cancer Care
18:22 Cost, Access, and the Economics of Cancer Treatment
22:19 The Future of Cancer Care and Patient Empowerment
25:21 Real Patient Outcomes and Success Stories
26:50 Why Functional Precision Medicine Is the Future
31:18 Predicting, Detecting, and Preventing Cancer Earlier
34:27 Where to Learn More About Functional Precision Medicine
36:12 Transforming Healthcare Beyond Trial-and-Error
37:27 Regulations, FDA Pathways, and Scaling Innovation
40:09 Why Cancer Is Affecting Younger Patients
41:17 Innovation Q&A

Support This Podcast
* To support our work, please check out our sponsors and get discounts: https://www.anhourofinnovation.com/sponsors/

Connect with Jim
* Website: https://firstascentbiomedical.com/
* LinkedIn: https://www.linkedin.com/in/jim-foote/
* TEDx Talk: https://www.youtube.com/watch?v=CqLCgNxUhVc

Connect with Vit
* LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
* X: https://x.com/vitlyoshin
* Website: https://vitlyoshin.com
* Podcast: https://www.anhourofinnovation.com/

The Future of Music Education: AI Tutors, Human Mentors, and Creativity
12/13/2025 | 45 mins.
Music education is quietly undergoing a massive shift, and most people haven’t noticed yet. AI tutors are no longer just tools; they’re starting to shape how musicians learn, practice, and improve. But here’s the real question: where do human creativity and mentorship still matter in an AI-driven world?

In this episode of the An Hour of Innovation podcast, host Vit Lyoshin sits down with John von Seggern, a longtime musician, educator, and founder of Futureproof Music School, to unpack what’s actually changing, and what isn’t, in the future of music education. John has spent over a decade designing online music education programs and now works at the intersection of AI, creativity, and human mentorship.

In this conversation, they explore how AI is personalizing music education in ways traditional schools struggle to scale. John explains how AI tutors can analyze music, guide students through complex production workflows, and surface the one or two things that matter most at each stage of learning. They also dig into why AI still falls short in mastery, taste, and creative judgment, and why human mentors remain essential. They discuss the hybrid model of AI tutors and human teachers, the future of music production learning, and what this shift means for creators trying to stay relevant in a fast-changing industry.

John von Seggern is a musician, producer, educator, and music technologist who has worked with film composers and contributed sound design to Pixar’s WALL·E. He previously helped lead and design one of the world’s most respected electronic music programs before founding Futureproof Music School, where he’s building AI-powered, personalized music education systems. His work matters because it goes beyond hype, offering a practical, grounded view of how AI can support creativity without replacing the human elements that make music meaningful.

Takeaways
* AI tutors are most effective when they surface only one or two actionable fixes, not long reports that overwhelm learners.
* Music education improves dramatically when AI can analyze your actual work (like mixes), not just answer theoretical questions.
* The biggest limitation of AI in music is that elite, professional knowledge is often undocumented, so models can’t learn it.
* Human mentors remain essential at advanced levels because taste, judgment, and creative intuition can’t be automated.
* Personalized learning paths outperform one-size-fits-all programs, especially in creative and technical fields like music production.
* Generative AI tools are fun, but most professionals prefer AI that assists the process, not tools that generate finished music.
* AI acts best as an intelligence amplifier, helping creators move faster rather than replacing their role.
* The future of music education isn’t AI-only, but a hybrid model where AI accelerates learning, and humans guide mastery.

Timestamps
00:00 Introduction
03:02 How AI Is Transforming Music Education
07:50 Why AI + Human Mentorship Works Better Than Music Schools
11:43 Why Music Education Curricula Must Evolve Faster
15:04 How AI Personalizes Music Learning for Every Student
19:38 Building an AI-Powered Education Business
24:22 What Students Really Say About AI Music Education
26:20 Electronic Music vs Learning Traditional Instruments
27:58 The Future of AI in Music and Creative Industries
30:28 Why Artists Still Matter in AI-Generated Art
32:21 Who Owns Music Created With AI?
36:50 How Creators Can Survive and Thrive Using AI
42:24 Innovation Q&A

Support This Podcast
* To support our work, please check out our sponsors and get discounts: https://www.anhourofinnovation.com/sponsors/

Connect with John
* Website: https://futureproofmusicschool.com/
* LinkedIn: https://www.linkedin.com/in/johnvon/

Connect with Vit
* LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
* X: https://x.com/vitlyoshin
* Website: https://vitlyoshin.com/contact/
* Podcast: https://www.anhourofinnovation.com/

RAG, LLMs & the Hidden Costs of AI: What Companies Must Fix Before It’s Too Late
12/06/2025 | 57 mins.
Most companies have no idea how risky and expensive their AI systems truly are until a single mistake turns into millions in unexpected costs.

In this episode of the An Hour of Innovation podcast, host Vit Lyoshin explores the truth about AI safety, enterprise-scale LLMs, and the unseen risks that organizations must fix before it’s too late. Vit is joined by Dorian Selz, co-founder and CEO of Squirro, an enterprise AI company trusted by global banks, central banks, and highly regulated industries. His experience gives him a rare inside look at the operational, financial, and security challenges that most companies overlook.

They dive into the hidden costs of AI, why RAG has become essential for accuracy and cost-efficiency, and how a single architectural mistake can lead to a $4 million monthly LLM bill. They discuss why enterprises underestimate AI risk, how guardrails and observability protect data, and why regulated environments demand extreme trust and auditability. Dorian explains the gap between perceived and actual AI safety, how insurance companies will shape future AI governance, and why vibe coding creates dangerous long-term technical debt. Whether you’re deploying AI in an enterprise or building products on top of LLMs, this conversation lays out the risks worth fixing first.

Dorian Selz is a veteran entrepreneur known for building secure, compliant, and enterprise-grade AI systems used in finance, healthcare, and other regulated sectors. He specializes in AI safety, RAG architecture, knowledge retrieval, and auditability at scale, capabilities that are increasingly critical as AI enters mission-critical operations. His work sits at the intersection of innovation and regulation, making him one of the most important voices in enterprise AI today.

Takeaways
* Most enterprises dramatically overestimate their AI security readiness.
* A single architectural mistake with LLMs can create a $4M-per-month operational cost.
* RAG is essential because enterprises only need to expose relevant snippets, not entire documents, to an LLM.
* Trust in regulated industries takes years to build and can be lost instantly.
* Real AI safety requires end-to-end observability, not just disclaimers or “verify before use” warnings.
* Insurance companies will soon force AI safety by refusing coverage without documented guardrails.
* AI liability remains unresolved: Should the model provider, the user, or the enterprise be responsible?
* Vibe coding creates massive future technical debt because AI-generated code is often unreadable or unmaintainable.

Timestamps
00:00 Introduction to Enterprise AI Risks
02:23 Why AI Needs Guardrails for Safety
05:26 AI Challenges in Regulated Industries
11:57 AI Safety: Perception vs. Real Security
15:29 Risk Management & Insurance in AI
21:35 AI Liability: Who’s Actually Responsible?
25:08 Should AI Have Its Own Regulatory Agency?
32:44 How RAG (Retrieval-Augmented Generation) Works
40:02 Future Security Threats in AI Systems
42:32 The Hidden Dangers of Vibe Coding
48:34 Startup Strategy for Regulated AI Markets
50:38 Innovation Q&A

Support This Podcast
* To support our work, please check out our sponsors and get discounts: https://www.anhourofinnovation.com/sponsors/

Connect with Dorian
* Website: https://squirro.com/
* LinkedIn: https://www.linkedin.com/in/dselz/
* X: https://x.com/dselz

Connect with Vit
* Substack: https://substack.com/@vitlyoshin
* LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
* X: https://x.com/vitlyoshin
* Website: https://vitlyoshin.com/contact/
* Podcast: https://www.anhourofinnovation.com/
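Dorian’s point that RAG exposes only relevant snippets, not entire documents, comes down to a retrieval step that runs before the prompt is built. A minimal sketch of that step, with a toy word-overlap scorer standing in for the embedding-based vector search a production system like Squirro would use:

```python
import re

# Minimal sketch of the retrieval step in RAG. The word-overlap scorer is a
# toy stand-in for embedding-based vector search; not Squirro's implementation.

def tokens(text: str) -> set:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query: str, snippet: str) -> float:
    """Crude relevance: fraction of query words that appear in the snippet."""
    q, s = tokens(query), tokens(snippet)
    return len(q & s) / len(q) if q else 0.0

def retrieve(query: str, snippets: list[str], top_k: int = 2) -> list[str]:
    """Return only the top_k most relevant snippets to place in the LLM prompt."""
    return sorted(snippets, key=lambda s: score(query, s), reverse=True)[:top_k]

corpus = [
    "Quarterly revenue grew 12% driven by the APAC region.",
    "The cafeteria menu rotates every two weeks.",
    "Revenue guidance for next quarter was revised upward.",
]
for snippet in retrieve("What happened to revenue?", corpus):
    print(snippet)  # only revenue-related snippets reach the prompt
```

Sending a handful of short, relevant snippets instead of whole document sets is what delivers both benefits the episode highlights: lower token costs and less sensitive data exposed to the model.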



An Hour of Innovation with Vit Lyoshin