
The Lindahl Letter

Dr. Nels Lindahl

Available Episodes

5 of 142
  • The great 2025 LLM vibe shift
    Thank you for tuning in to week 217 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, “The great 2025 LLM vibe shift.”

    Vibe shifts came and went. People are certainly adding the word vibe to all sorts of things as the initial meaning has ironically faded. Casey Newton, in the industry-standard-setting Platformer newsletter, wrote about a big Silicon Valley vibe shift in 2022 [1]. It was a big thing; until it wasn’t. The really big, completely surreal LLM shift happened toward the tail end of 2025. We went from extreme AI bubble talk to very clear, rational, and thoughtful perspectives on how LLMs won’t realize the promises that have been made. Keep in mind that market fears of an AI bubble are different from the question of whether LLMs might be the technology that ultimately wins. All of the spending in the marketplace and the academic argument may get reconciled at some point, but we have not seen that happen in 2025.

    The backward linkages of how potential technological progress regressed may not have been felt just yet, but the overall sentiment has shifted. The ship has indeed sailed. Let that sink in for a moment and think about just how big a shift in sentiment that really is and how it just sort of happened. As OpenAI and Anthropic move toward inevitable IPOs, that shift will certainly change things. Maybe the single best written explanation of this is from Benjamin Riley, who wrote a piece for The Verge called, “Large language mistake: Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it” [2]. I owe a hat tip to Nilay Patel for recommending and helping surface that piece of writing. I was skeptical at first, but then realized it was a really interesting and well reasoned read. I’ll admit that at the same time I was also reading a paper from the Google Research team, “Nested Learning: The Illusion of Deep Learning Architectures,” which made for an interesting paired reading assignment [3]. More to come on that paper and what it means in a later post. I’m still digesting its deeper implications.

    Maybe to really sell the shift you could take a moment and listen to some of the recent words from OpenAI cofounder Ilya Sutskever. I’m still a little shocked by the casual way Ilya described how we moved from research and the great AI winter, to the age of scaling, and finally back to the age of research again. The idea that scaling based on compute or the size of the corpus won’t win the LLM race is a very big shift, and Ilya makes it pretty casually during this video. You will notice I have set the video to play about 1882 seconds into the conversation.

    Maybe a video with a really sharp looking classic Linux Red Hat fedora in the background, featuring a conversation between Nilay Patel and IBM CEO Arvind Krishna, can help explain things. Don’t panic when you realize that the CEO of IBM very clearly argues, with some back of the envelope math, that all the data center investment has no real way to pay off in practical terms or deliver an actual return on investment. Try not to flinch when he describes how, within 3-5 years, the same data centers could be built at a fraction of the current cost. Technology does just keep getting better. The argument makes sense. It is no less shocking given the billions being spent. I set the video to start playing 502 seconds into the conversation.

    The argument that I probably prefer in the long run is how quantum computing is going to change the entire scaling and compute landscape [4]. The long-term argument that may end up mattering the most suggests that quantum computing will transform the economics of scale and ultimately reset expectations about what is computationally feasible. Former Intel CEO Pat Gelsinger recently framed quantum as the force likely to deflate the AI bubble by altering the fundamental relationship between compute and capability, a claim that is gaining analytical support across the research community. We may see it become an effective counter to the billions being spent on data centers for a late mover willing to make a prominent investment in the space, or it could just end up being Alphabet, which is highly invested in both TPUs and quantum chips [5].

    What’s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead!

    Footnotes:
    [1] Newton, C. (2022). The vibe shift in Silicon Valley. Platformer. https://www.platformer.news/the-vibe-shift-in-silicon-valley/
    [2] Riley, B. (2025). Large language mistake: Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it. The Verge. https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
    [3] Behrouz, A., Razaviyayn, M., Zhong, P., & Mirrokni, V. (2025). Nested learning: The illusion of deep learning architectures. In The Thirty-ninth Annual Conference on Neural Information Processing Systems. https://abehrouz.github.io/files/NL.pdf
    [4] Shrivastava, H. (2025). Quantum computing will pop the AI bubble, claims ex-Intel CEO Pat Gelsinger. Wccftech. https://wccftech.com/quantum-computing-will-pop-the-ai-bubble-claims-ex-intel-ceo-pat-gelsinger/
    [5] Yahoo Finance. Alphabet CEO just said quantum computing could be close to a breakthrough. https://finance.yahoo.com/news/alphabet-ceo-just-said-quantum-155229893.html

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.nelsx.com
    --------  
    5:56
  • The 5 biggest unsolved problems in quantum computing
    Thank you for tuning in to week 216 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, “The biggest unsolved problems in quantum computing.”

    The field of quantum computing has accelerated rapidly during the last decade, yet its most important breakthroughs remain incomplete. The core research challenges that stand between today’s prototypes and large scale, industrially relevant systems are now visible with unusual clarity. I think we are on the path to seeing this technology realized. These challenges are increasingly framed not as incremental milestones but as structural bottlenecks that shape the entire trajectory of the field. This week’s analysis focuses on the five most critical problems that must be solved for quantum computing to reach fault tolerant, economically meaningful operation. These gaps define where research investment, national strategy, and competitive advantage will be determined in the coming decade.

    1. A fully fault tolerant logical qubit with logical error rates below threshold

    The first and most fundamental problem is the absence of a fully fault tolerant logical qubit. I know, I know, people are getting close, but this technology is not fully realized just yet. Theoretical thresholds for fault tolerance are well studied, and progress has been reported through surface codes, low density parity check codes, and recent advances in magic state distillation. Several groups have demonstrated logical qubits whose performance exceeds their underlying physical qubits, and some trapped-ion experiments now show better than break-even behavior under repeated rounds of error correction. However, no team has yet realized a logical qubit that maintains below-threshold logical error rates in a fully integrated setting that combines encoding, stabilizer measurement, real time decoding, and continuous correction across arbitrarily deep circuits. Experiments such as the University of Osaka’s zero level magic state distillation results and Quantinuum’s recent logical circuit demonstrations illustrate meaningful progress, yet a complete fault tolerant logical qubit rolling off the assembly line has not been achieved [1]. This missing element prevents reliable execution of deep circuits and stands as the central research challenge of the field. I am also tracking a leaderboard of efforts aimed at increasing the number and stability of logical qubits as new systems emerge [2].
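    To make “below threshold” a bit more concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes the commonly quoted surface-code scaling heuristic, where the logical error rate per round falls roughly as A * (p / p_th) ** ((d + 1) / 2) for physical error rate p, threshold p_th, and code distance d. The constants used here (A = 0.1 and a threshold near 1%) are illustrative assumptions, not measurements from any particular hardware platform.

```python
# Back-of-the-envelope sketch of surface-code logical error suppression.
# Assumes the standard scaling heuristic p_L ~ A * (p / p_th) ** ((d + 1) / 2).
# A and P_THRESHOLD are illustrative values, not vendor-reported numbers.

A = 0.1              # fitting constant (assumed)
P_THRESHOLD = 1e-2   # roughly the 1% threshold often quoted for the surface code (assumed)

def logical_error_rate(physical_error_rate: float, distance: int) -> float:
    """Estimated logical error rate per round for a distance-d surface code."""
    return A * (physical_error_rate / P_THRESHOLD) ** ((distance + 1) / 2)

if __name__ == "__main__":
    for p in (5e-3, 1e-3):          # physical error rates below threshold by 2x and 10x
        for d in (3, 7, 11, 15):    # code distances
            print(f"p={p:.0e}  d={d:2d}  estimated p_L={logical_error_rate(p, d):.2e}")
```

    The toy numbers only matter for their shape: once physical error rates sit safely below threshold, each increase in code distance buys roughly exponential suppression of logical errors, which is why a fully integrated below-threshold logical qubit is treated as the central milestone rather than just another incremental result.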
    2. A scalable and manufacturable quantum architecture that supports thousands of high fidelity qubits

    The second unsolved problem is the absence of a scalable, manufacturable quantum architecture capable of supporting thousands of high fidelity qubits. Superconducting platforms continue to face wiring congestion, cross talk, and fabrication variability across large wafers, which limits reproducibility at scale. Trapped-ion systems achieve some of the highest gate fidelities reported, but their physical footprint, control volume, and relatively slow gate speeds constrain system growth. Neutral atom arrays offer large qubit counts, yet they have not demonstrated uniform, high fidelity two qubit gates across arrays large enough to support fault tolerant codes. Photonic and spin qubits continue to advance but remain earlier in their development for universal, gate based architectures. Across all platforms, the transition from laboratory systems to repeatable, wafer scale manufacturing has not occurred. Most resource estimates indicate that tens of thousands of physical qubits will be required for practically useful, error corrected applications, and no architecture is yet positioned to deliver this scale with consistent fidelity. I am tracking universal gate based physical qubit leaders closely, and I expect to see significant shifts in 2026 as fabrication strategies evolve [3].

    3. Integrated cryogenic classical control systems capable of real time decoding at scale

    The third unsolved problem concerns the integration of classical control systems capable of operating efficiently at cryogenic temperatures. Quantum processors rely on classical electronics to generate precise control pulses, read measurement outcomes, and perform real time decoding. As devices grow, these classical requirements become a dominant engineering bottleneck. Current systems depend on extensive room temperature hardware and thousands of coaxial lines, an approach that is not viable for scaling beyond a few hundred qubits. Research into cryogenic CMOS, multiplexed readout architectures, and fast low noise routing has shown meaningful progress, and prototype decoders have demonstrated sub microsecond performance. However, the field still lacks a fully integrated classical to quantum control stack that can operate near the device, support large scale decoding throughput, and eliminate the wiring overhead required for million channel systems. Solving this challenge is as essential as improving qubit fidelity, because fault tolerant computation will require tightly coupled classical and quantum subsystems functioning in real time at cryogenic depths.

    4. A modular, networked quantum architecture with reliable chip to chip entanglement

    The fourth major unsolved problem involves modularity and quantum networking. Large scale quantum computers will not be monolithic systems. They will require distributed architectures in which multiple chips or modules exchange entanglement to support error corrected computation across larger systems. Research groups have demonstrated chip to chip photonic links, heralded entanglement generation, and short range coupling between trapped-ion and superconducting devices, but these demonstrations remain small scale and experimental. No team has yet produced a modular architecture capable of sustaining reliable inter module entanglement rates, routing operations, and error corrected logical circuits across networked components. A practical quantum interconnect, whether photonic or microwave based, would redefine system design by enabling large logical qubit counts without relying on a single monolithic wafer. Developing these networked architectures is now seen as one of the highest value targets for national research programs, because modularity is likely the only viable path to systems with millions of physical qubits.
    5. A verified quantum advantage tied to a real scientific or industrial workload

    The fifth unsolved problem is the absence of a widely accepted, independently verified quantum advantage tied to a real scientific or industrial workload. Quantum supremacy experiments have demonstrated that certain random circuit sampling tasks are exceptionally difficult for classical systems to simulate, but these tasks do not translate into chemistry, materials, optimization, or cryptography workloads. Several vendors have recently reported domain specific quantum advantages, including applications in quantum navigation and narrow optimization tasks, but these demonstrations have not yet achieved broad community validation or independent replication under strict verification and resource accounting. A robust demonstration of advantage requires a computation that is infeasible for classical systems within realistic time and energy constraints, produces an output that can be meaningfully verified, and operates using real hardware error rates rather than idealized gates. Achieving this milestone would mark a decisive shift in the strategic landscape of the field and would accelerate commercial investment into fault tolerant platforms.

    Together, these five problems outline the most important open questions I’m tracking in quantum computing today. This list is based on my research interests, so please feel free to let me know if something else jumps out when you read it. Each topic represents an opportunity for technical leadership, research investment, and industrial strategy. That does not mean my list is complete. It is directionally accurate for late 2025, but things in the quantum computing space are changing rapidly. The elements called out here also define the hurdles that stand between early laboratory demonstrations and the large-scale quantum platforms required for transformative scientific progress.

    What’s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead!

    Links I’m sharing this week! You may not have Linus Torvalds building a computer on your watch list for 2025, but I’m sharing that link anyway. I truly enjoyed watching this video. This video made me chuckle several times and was delightful.

    Footnotes:
    [1] Itogawa, T., Takada, Y., Hirano, Y., & Fujii, K. (2024). Even more efficient magic state distillation by zero-level distillation. arXiv preprint arXiv:2403.03991. http://arxiv.org/pdf/2403.03991
    [2] Top quantum computers by logical qubit
    [3] Updating my top 10 quantum computer leaderboard

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.nelsx.com
    --------  
    9:46
  • Process capture and the future of knowledge management
    Thank you for tuning in to week 215 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, “Process capture and the future of knowledge management.”

    The history of knowledge management has been shaped by repeated attempts to store, retrieve, and reuse organizational insight. So much institutional knowledge gets lost and discarded as organizations change and people shift roles or exit. People within organizations learn through the everyday practice of getting things done. It is only recently that systems have begun augmenting and sometimes automating those processes. Early systems focused on document repositories, and later platforms emphasized collaboration, tagging, and collective intelligence. We now find ourselves in a period where knowledge management converges with automated workflows and computational assistants that can observe, extract, and generalize decision patterns. We are seeing a major change in the ability to observe and capture processes. Systems are able to capture and catalog what is happening. This creates an interesting inflection point where the system may store the knowledge, but the users of that knowledge are dependent on the system. That does not mean the process is understood in terms of the big why question. Scholars have noted that the operational layer of organizational memory is often lost because it resides in informal practices rather than formal documentation. The shift toward embedded and automated capture offers a remedy to that problem.

    The rise of agentic AI and workflow-integrated assistants alters the knowledge landscape by making it possible to synthesize procedural knowledge in real time. Instead of relying on teams to manually update wikis or define operating procedures, modern systems can extract key steps from repeated actions, identify dependencies, and flag anomalies that deviate from observed patterns. This transforms knowledge management from a static library into a dynamic computational environment. What exactly happens to this store of knowledge over time is something to consider going forward. Supervising the repository will require deep knowledge of the systems which are now being maintained systematically. Maintaining and refining it will be the difference between sustained institutional knowledge and temporary model advantages that fade with the next update. Recent studies on digital trace data argue that high fidelity observational streams can significantly improve the accuracy of organizational models. When this data flows into agents capable of modeling tasks, predicting outcomes, and recommending actions, the role of knowledge management shifts from storage to orchestration.
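    As a toy illustration of what extracting steps and flagging deviations from observed patterns can look like, here is a minimal sketch in Python. It learns which step-to-step transitions appear in a handful of recorded workflow traces and flags any transition in a new trace that was never observed before. The trace format, step names, and simple counting logic are hypothetical simplifications for illustration, not a description of any particular process capture product.

```python
from collections import Counter
from itertools import pairwise

# Hypothetical recorded workflow traces (ordered lists of observed steps).
observed_traces = [
    ["open_ticket", "triage", "assign", "resolve", "close_ticket"],
    ["open_ticket", "triage", "escalate", "assign", "resolve", "close_ticket"],
    ["open_ticket", "triage", "assign", "resolve", "close_ticket"],
]

# Count how often each step-to-step transition appears across the traces.
transition_counts = Counter(
    pair for trace in observed_traces for pair in pairwise(trace)
)

def flag_anomalies(trace):
    """Return transitions in a new trace that were never seen in the observed traces."""
    return [pair for pair in pairwise(trace) if pair not in transition_counts]

new_trace = ["open_ticket", "assign", "resolve", "close_ticket"]  # skips triage
print(flag_anomalies(new_trace))  # -> [('open_ticket', 'assign')]
```

    The same counting structure, extended with frequencies and timing, is roughly what would let a capture system surface the canonical path through a process and highlight where real work has drifted away from it.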
    Process capture also introduces new opportunities for long-horizon learning systems. This is the part I’m really interested in understanding. The orchestration layer has to have some background learning and storage that runs periodically. When workflows are automatically translated into structured representations, organizations can run simulations, perform optimization, and enable higher levels of task autonomy. These capabilities begin to resemble continuous improvement environments that merge human judgment with machine-refined operational insight. Researchers have observed that structured process models can improve downstream automation and decision support, particularly in complex enterprise settings where procedures evolve rapidly. This suggests that the next phase of knowledge management will involve systems that not only store information but also refine it through computational analysis and real world feedback. It is in that refinement that the magic might happen in terms of real knowledge management.

    What’s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead!

    Links I’m sharing this week!
    https://www.computerworld.com/article/4094557/the-world-is-split-between-ai-sloppers-and-stoppers.html
    This video is a super interesting look at a number we don’t normally question on a daily basis. The delivery style is a bit bombastic, but the fact check on the argument is interesting. You know I enjoy numbers and was really curious how this was calculated. That video referenced this widely shared analysis from Michael W. Green on Substack.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.nelsx.com
    --------  
    4:19
  • The great manufacturing reset
    Thank you for tuning in to week 214 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, “The great manufacturing reset.”

    Boston Dynamics captured public imagination when they introduced Spot, the dog-like robot, back in 2016. Things have changed. Robots that walk around are beginning to enter the commercial landscape, and new entrants continue to appear. A humanoid robot product from Russia built by the company Idol surfaced last week [1]. Other companies such as Agility Robotics (USA), Figure AI (USA), Boston Dynamics (USA), UBTECH (China), and 1X Technologies (Norway/USA) are all working toward delivering humanoid robots. Optimus, the Tesla bot introduced conceptually in 2021 and now in its third-generation prototype, is also being talked about, although it remains part of an internal program and has not yet reached commercial deployment.

    The stage is now set, and we are at a point where robotics, autonomous fabrication systems, and advanced materials are converging into a new industrial baseline. The last decade brought low-cost filament printers into hobbyist and commercial spaces at massive scale, and the next decade is poised to move far beyond that early wave. Industrial additive manufacturing has already expanded into metals, composites, and high-performance polymers, with global revenue expected to accelerate over the coming years. At the same time, the field is absorbing rapid advancements in AI-enabled calibration, defect detection, and real-time optimization, allowing machinery to tune production parameters autonomously. That capability shifts what it means to operate a modern fabrication workflow. Things are changing rapidly.

    Alongside these developments, humanoid and semi-autonomous industrial robots are transitioning from research demonstrations to contract manufacturing deployments. Several builders are scaling up pilot programs in which general-purpose robots support assembly, materials handling, and repetitive manufacturing tasks. These systems benefit from advances in reinforcement learning, enhanced sensors, and cloud-based model updates. Industrial robotics shipments are increasing rapidly, driven by global demand for flexible production lines and labor-augmentation strategies. The supply side of robotics is not only expanding but also becoming modular and more interoperable across fabrication environments.

    The most significant shift may come from the emergence of machines that build machines. That is a topic I’m focused on understanding. Historically, tooling design required long lead times, significant manual labor, and specialized expertise. Today, automated CAM pipelines, printable tooling, adaptive CNC systems, and robotically tended fabrication cells allow factories to generate and regenerate their own production processes. Some aerospace and automotive facilities already deploy these closed-loop systems to create fixtures, jigs, and replacement components internally. This form of self-manufacturing reduces dependency on external suppliers and removes friction from engineering iteration cycles. We are moving toward a world where design, testing, and tooling are all integrated within an AI-guided, robotics-driven feedback loop. That integration is the foundation of the great manufacturing reset.

    For the United States, these technologies open a realistic path to reshoring custom and small-batch manufacturing in ways that were not economically viable during the offshoring wave of the late twentieth century. Rising labor costs in traditional manufacturing hubs, geopolitical risk, and supply chain disruptions have already encouraged firms to reconsider where they build things. Additive manufacturing and flexible robotics change the cost structure by reducing reliance on large minimum-order quantities, expensive hard tooling, and long logistics chains. A factory that can print tooling on demand, deploy modular robots, and run AI-optimized production scheduling can serve shorter runs and more specialized designs while remaining geographically close to end customers. In effect, the United States can replace scale-driven arbitrage with speed, customization, and resilience. That is why we are at the inflection point for the great manufacturing reset.

    Policy and infrastructure are beginning to support this transition. Federal programs such as Manufacturing USA and its associated network of advanced manufacturing institutes are working to diffuse next-generation production technologies across domestic firms and regions [2]. Investments in semiconductor fabrication, battery plants, and clean-energy hardware have already catalyzed billions of dollars in new onshore manufacturing commitments. The same capabilities that support large facilities can extend to mid-market and smaller manufacturers through shared tooling libraries, regional robotics integrators, and standardized digital design pipelines. Universities and community colleges can align curricula with this reset by emphasizing mechatronics, robotics programming, and design-for-additive principles that translate directly to a modern factory floor.

    If the United States leans into this transition, the great manufacturing reset will not simply re-create legacy industrial capacity. It will establish a distributed network of automated, digitally coordinated micro-factories specializing in custom work, rapid prototyping, and short-run production. The strategic advantage will be the ability to move from concept to physical part in days instead of months, while retaining critical capabilities within domestic borders. The risk is that other regions may scale faster and capture the integrator role that coordinates robots, additive systems, and AI platforms across global supply chains. The next few years will determine whether the United States treats these technologies as incremental enhancements or as foundational infrastructure for a new manufacturing baseline. Ideally, this reset will create conditions for a new wave of startups delivering smaller manufacturing runs, bespoke development cycles, and entirely new product categories.

    Things to consider:
    * The economics of reshoring depend as much on automation and design speed as on wage differentials.
    * Policy support for advanced manufacturing will matter most where it connects directly to tooling, robotics, and workforce upskilling.
    * Custom, short-run production could become a core competitive advantage for regions that adopt additive and robotics early.
    * The integrators that connect robots, printers, and AI software may end up more powerful than any single hardware vendor.
    * Manufacturing resilience will increasingly be measured by how quickly domestic systems can reconfigure to new designs and shocks.

    What’s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead!

    Links I’m sharing this week!

    Footnotes:
    [1] Mesa, J. (2025, November 11). Russia ‘human’ robot falls on stage during debut. Newsweek. https://www.newsweek.com/russia-human-robot-falls-stage-during-debut-11031104
    [2] Manufacturing USA. (n.d.). Home. https://www.manufacturingusa.com/

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.nelsx.com
    --------  
    7:20
  • Why a “combiner model” might someday work
    Thank you for tuning in to week 213 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, “Why a ‘combiner model’ might someday work.”

    Open models abound. Every week, new open-weight large language models appear on Hugging Face, adding to a massive archive of fine-tuned variants and experimental checkpoints. Together, they form a kind of digital wasteland of stranded intelligence. These models aren’t all obsolete; they’re simply sidelined because the community lacks effective open source tools to combine their specialized insights efficiently. The concept of a “combiner model” offers one powerful path to reclaim this lost potential. Millions of hours of training, billions of dollars in compute, and so much electricity have been spent. Sure, you can use distillation to capture outputs from one model into another, but a combiner model would be different because it overlays instead of extracts.

    A combiner model represents a critical shift away from the assumption that AI progress requires ever-larger single systems. Instead of training another trillion-parameter monolith, we can learn to combine many smaller, specialized models into a coherent whole. The central challenge lies in making these models truly interoperable. The challenges stem from questions about how to merge or align their parameters, embeddings, or reasoning traces without degrading performance. The combiner model would act as a meta-learner, adapting, weighting, and reconciling information across independently trained systems, unlocking the latent knowledge already encoded in thousands of open weights. Somebody at some point is going to make an agent that works on this problem and grows stronger by essentially eating other models.

    This vision can be realized through at least three technical routes. The first involves weight-space merging. Techniques such as Model Soups and Mergekit show that when models share a common base, their weights can be effectively averaged or blended. More advanced methods, like TIES-Merging, learn adaptive coefficients that vary across layers, turning model blending into a trainable optimization process rather than a static recipe. In this view, the combiner model becomes a universal optimizer for reuse, synthesizing the gradients of many past experiments into a single, functioning network.
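    To make the weight-space merging idea concrete, here is a minimal sketch in Python of uniform checkpoint averaging in the spirit of Model Soups. It assumes the checkpoints share the same architecture and parameter names, represents each checkpoint as a plain dictionary of NumPy arrays, and ignores everything a real tool like Mergekit handles (tokenizer mismatches, layer-wise coefficients, sign conflicts). The function and variable names are illustrative, not an existing API.

```python
import numpy as np

def average_checkpoints(checkpoints):
    """Uniformly average a list of same-shaped parameter dicts (a 'model soup')."""
    if not checkpoints:
        raise ValueError("need at least one checkpoint")
    keys = checkpoints[0].keys()
    if any(ckpt.keys() != keys for ckpt in checkpoints[1:]):
        raise ValueError("checkpoints must share the same parameter names")
    return {
        name: np.mean([ckpt[name] for ckpt in checkpoints], axis=0)
        for name in keys
    }

# Toy example: three fine-tuned variants of the same tiny two-parameter "model".
ckpt_a = {"w": np.array([1.0, 2.0]), "b": np.array([0.1])}
ckpt_b = {"w": np.array([3.0, 2.0]), "b": np.array([0.3])}
ckpt_c = {"w": np.array([2.0, 5.0]), "b": np.array([0.2])}

merged = average_checkpoints([ckpt_a, ckpt_b, ckpt_c])
print(merged["w"], merged["b"])  # -> [2. 3.] [0.2]
```

    A learned combiner along the lines described above would replace the uniform mean with per-layer or per-task coefficients chosen by an optimization loop, which is the step that turns a static averaging recipe into something closer to a meta-learner.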
    The second approach focuses on latent-space alignment. When models differ in architecture or tokenizer, their internal representations diverge. Even so, a smaller alignment bridge can learn to translate between their embedding spaces, creating a shared semantic layer, or semantic superposition. This allows, for example, a legal-domain model and a biomedical model to exchange information while their original knowledge weights remain frozen. The combiner learns the translation rules, effectively building a common interlingua for neural representations that connects thousands of isolated domain experts.

    The third approach treats the combiner not as a merger but as a controller or orchestrator. In this design, the combiner dynamically decides which expert model to invoke, evaluates their outputs, and fuses the results through its own learned inference layer. This idea already appears in robust multi-agent frameworks. A true combiner model, or maybe a combiner agent, would internalize this orchestration as a core part of its reasoning process. Instead of running one model at a time, it would simultaneously select and synthesize outputs from many experts, producing complex, context-aware intelligence assembled on demand. This approach is the most immediately viable and is already being used in sophisticated production systems today.

    If such systems mature, the economics of AI will fundamentally change. Rather than concentrating resources on a few massive, proprietary models, research will shift toward modular ecosystems built from reusable parts. Each fine-tuned checkpoint on Hugging Face will become a potential building block, not an obsolete artifact. The combiner would turn the open-weight landscape into an evolving lattice of knowledge, where specialization and reuse replace the endless cycle of frontier retraining. This vision is demanding, but the promise remains compelling: a world where intelligence is assembled, not hoarded; where the fragments of past experiments contribute directly to future understanding. The combiner model might not exist yet, but its underlying logic already dictates the future of open source AI.

    What’s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead!

    Links I’m sharing this week!
    This is the episode with Sam Altman that everybody was talking about.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.nelsx.com
    --------  
    5:42


About The Lindahl Letter

Thoughts about technology (AI/ML) in newsletter form every Friday. www.nelsx.com