<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Therna Biosciences]]></title><description><![CDATA[Therna Biosciences]]></description><link>https://blog.therna.com</link><image><url>https://substackcdn.com/image/fetch/$s_!3yaY!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9625529-86f3-4606-8522-cd8ac417328a_1024x1024.png</url><title>Therna Biosciences</title><link>https://blog.therna.com</link></image><generator>Substack</generator><lastBuildDate>Thu, 07 May 2026 08:42:21 GMT</lastBuildDate><atom:link href="https://blog.therna.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Therna Biosciences]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[thernabio@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[thernabio@substack.com]]></itunes:email><itunes:name><![CDATA[Therna Biosciences]]></itunes:name></itunes:owner><itunes:author><![CDATA[Therna Biosciences]]></itunes:author><googleplay:owner><![CDATA[thernabio@substack.com]]></googleplay:owner><googleplay:email><![CDATA[thernabio@substack.com]]></googleplay:email><googleplay:author><![CDATA[Therna Biosciences]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Designing Medicines for One: The Future of Individualized RNA Medicines]]></title><description><![CDATA[Therna Biosciences recently unveiled its collaboration with Charles River to advance individualized medicines for patients with ultra-rare diseases.]]></description><link>https://blog.therna.com/p/designing-medicines-for-one-the-future</link><guid 
isPermaLink="false">https://blog.therna.com/p/designing-medicines-for-one-the-future</guid><dc:creator><![CDATA[Nazli Azimi]]></dc:creator><pubDate>Wed, 29 Apr 2026 17:19:02 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!3yaY!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9625529-86f3-4606-8522-cd8ac417328a_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Therna Biosciences recently unveiled <a href="https://www.prnewswire.com/news-releases/therna-announces-collaboration-with-charles-river-to-advance-single-patient-personalized-rna-therapeutics-302704703.html">its collaboration</a> with Charles River to advance individualized medicines for patients with ultra-rare diseases. Our first program is focused on a patient with a rapidly progressing form of lung fibrosis caused by a unique mutated gene. With no available treatment options, the need for a tailored RNA medicine was extremely urgent.</p><p>Following the announcement, several reporters expressed interest in learning more about Therna&#8217;s n=1 strategy, with <a href="https://endpoints.news/about-endpoints-news/">Ryan Cross</a> at <a href="https://endpoints.news/">Endpoints News</a> asking how the <a href="https://www.fda.gov/regulatory-information/search-fda-guidance-documents/considerations-use-plausible-mechanism-framework-develop-individualized-therapies-target-specific">FDA&#8217;s recent draft guidance</a>, which introduces a framework for accelerating tailored, individualized treatments for ultra-rare diseases, could impact the company. This guidance marks an important moment for the field and an opportunity to define how n=1 approaches can be developed with both speed and scientific rigor. In this post, I outline our vision and approach to advancing n=1 treatments for patients who have no alternative treatment options.
Central to this effort is our AI-enabled RNA platform and its alignment with the scientific and regulatory requirements of individualized medicine.</p><p>Therna&#8217;s approach is built on the premise that RNA can be designed with extreme precision. We have developed an AI-driven platform trained on massive amounts of proprietary experimental data generated in our labs, enabling us to interrogate how RNA behaves in biological systems.</p><p>When creating the model, we asked how RNA interacts with its biological environment and how those interactions influence its behavior: translation efficiency, tissue- and cell-type-specific expression, and longevity. We also examined RNA structure, stability, and interactions with RNA-binding proteins, as well as how DNA, RNA, and proteins work together within the cell. As a result, our AI model can generatively design RNA sequences with the characteristics we want. For example, we can program an mRNA to be expressed exclusively in a tissue or cell type of interest, to be more durable, and to achieve high translation efficiency and protein output.
Similarly, we can find unique target sites within the transcript and design small oligonucleotides, such as ASOs and siRNAs, to increase, decrease, or fine-tune the expression of the mRNA. Rather than producing transient biological signals, these engineered RNA sequences are designed to function as therapeutics.</p><p>By integrating computational design with experimental validation, we can generate better RNA medicines faster and translate those designs into candidates suitable for development. This approach has the potential to improve the cost-efficiency of developing highly individualized therapies, enabling programs that would not have been feasible using conventional drug discovery approaches. In doing so, it opens the door to programmable RNA medicines with previously unattainable properties.</p><p>While single-patient programs are an important application of our platform, Therna is advancing a broader pipeline of RNA medicines across multiple disease areas with significant unmet need. These efforts leverage the same design and validation framework to develop therapeutics at scale.</p><p>Our rationale for advancing individualized medicine is grounded in four core principles:</p><p><strong>First</strong>, these patients are often overlooked and have no viable treatment options. There is limited commercial incentive for industry to pursue such therapies, and academic institutions typically lack the resources to develop them at scale. We believe there is a responsibility to address this gap and bring forward solutions where none exist.</p><p><strong>Second</strong>, our AI-enabled platform allows us to rapidly design and validate effective RNA medicines. For these patients, time is of the essence.
We have demonstrated the ability to design and experimentally validate candidates in under three months and advance them into pre-clinical development &#8211; an unprecedented acceleration of traditional drug development timelines.</p><p><strong>Third</strong>, each program generates highly valuable data that both validates and strengthens our platform. From initial RNA design through experimental testing and clinical validation, these data improve our models and expand their predictive capabilities. Over time, this learning compounds, benefiting not only future n=1 patients but also our broader therapeutic programs.</p><p><strong>Fourth</strong>, advancing these programs helps establish a viable framework for individualized medicine. While these efforts are less resource-intensive than traditional drug development due to the absence of large clinical trials, they still require meaningful investment. Demonstrating what is possible is an important step toward enabling broader adoption of individualized approaches for patients who need them.</p><p>Therna&#8217;s approach to individualized medicine reflects a convergence of scientific innovation, regulatory evolution, and urgent patient need. By combining AI-enabled RNA design with rigorous experimental validation, we are establishing a new model for developing therapies for patients who have historically been left behind by traditional drug development. These early programs not only provide a path forward for individuals with ultra-rare diseases, but also generate the data and experience needed to scale this approach more broadly.
As the field continues to evolve, we believe Therna will be at the forefront of developing individualized RNA medicines, expanding what is possible for patients and redefining how therapies are developed.</p>]]></content:encoded></item><item><title><![CDATA[A Look Back at Our First Year: Building Therna Biosciences in 2025]]></title><description><![CDATA[We started Therna Biosciences in April 2025 with a simple belief: RNA is a language, and it can be learned and engineered.]]></description><link>https://blog.therna.com/p/a-look-back-at-our-first-year-building</link><guid isPermaLink="false">https://blog.therna.com/p/a-look-back-at-our-first-year-building</guid><dc:creator><![CDATA[Nazli Azimi]]></dc:creator><pubDate>Tue, 30 Dec 2025 17:39:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!3yaY!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9625529-86f3-4606-8522-cd8ac417328a_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We started Therna Biosciences in April 2025 with a simple belief: RNA is a language, and it can be learned and engineered.
What has changed since then is not that belief, but the confidence that we are building the tools to make it real.</p><p>RNA medicines have already transformed medicine, but the next phase requires deeper control. Control over tissue specificity. Control over durability. Control over safety and expression. Biology has always held these answers, but only recently have we had the computational power and data scale to uncover them. This is the moment Therna Biosciences was built for.</p><p>From day one, we set out to integrate deep RNA biology with generative AI in a way that respects biological complexity rather than flattening it. At Therna, we have built an AI platform to design novel RNA medicines that are longer-lasting, safer, and more effective. Our approach is grounded in a lab-in-the-loop system, where models are continuously informed by carefully designed experiments and high-quality biological data. Asking the right biological questions has been just as important as building the right models.</p><p>What makes Therna Biosciences different is that we are not merely an AI company applying models to biology. We are deeply rooted in the data we generate. Our experimental systems are designed to produce biologically meaningful, high-resolution datasets that directly shape our models. Better data leads to better models, and better models allow us to ask more precise biological questions. This feedback loop is core to how we operate and why our platform continues to improve with every cycle.</p><p>After two years of stealth operation, we officially launched the company this year, securing seed funding from investors (Pear VC, AIX, and Fusion Fund) who believed in our mission and vision.
We assembled an experienced team that brings together deep expertise in RNA biology, AI, and translational and preclinical science, turning AI-powered discoveries into next-generation RNA medicines.</p><p>We were proud to be accepted into both the NVIDIA Inception program and the Google for Startups Cloud Program. These partnerships reflect the seriousness of our technical ambition and have strengthened the computational foundation behind our work. As part of our collaboration with NVIDIA, we are working with CodonFM, the new RNA foundation model that learns the rules of RNA by reading sequences in biological units called codons to reveal patterns that matter for therapeutic design. Integrating Therna&#8217;s proprietary RNA biology data with this class of models and world-class compute infrastructure helps accelerate the development of programmable RNA medicines in ways that were not possible before.</p><p>Perhaps most importantly, we validated our core premise. RNA is a language, and it can be learned and engineered. With the right data, the right experiments, and the right models, RNA can be designed with intent. Fit-for-purpose mRNA. Smarter targeting strategies. Faster iteration cycles. Work that once took years can now happen in weeks.</p><p>As we look ahead to 2026, the opportunity feels even larger. We are excited to scale our platform, expand the scope of what programmable RNA can do, and translate our advances into medicines that matter. This is still the beginning, but the foundation is strong, and the momentum is real.
Therna Biosciences is here to become The RNA company.</p><p>Thank you to our team, our partners, and everyone who believes that the future of RNA medicine can be written with purpose.</p><p>We are just getting started.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.therna.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.therna.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[The Case for Foundation Models in Biology]]></title><description><![CDATA[Rethinking scale: context, diversity, and design for biological foundation models]]></description><link>https://blog.therna.com/p/the-case-for-foundation-models-in</link><guid isPermaLink="false">https://blog.therna.com/p/the-case-for-foundation-models-in</guid><dc:creator><![CDATA[Amir Momen-Roknabadi]]></dc:creator><pubDate>Fri, 12 Sep 2025 15:31:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!gScn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc24f3e2-0d32-4a93-bd2b-9793f7d3b051.tif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The past few years have seen biology embrace the ideas of large language models. Genomic and transcriptomic data, at some level, are also sequences with their own language and grammar. The cell often translates one to the other seamlessly. Why not then apply the same tricks that worked so well in language? Build massive models, train them with self-supervised objectives, and expect breakthroughs.</p><p>Reality has been more complicated. Across benchmarks, far smaller supervised models often outperform large pretrained ones. For some, this has led to the conclusion that scaling has failed in biology (or at least to a questioning of the underlying premise). 
</p><p>I think that conclusion is premature. Scaling is alive, but it plays out differently in biology than in text or images. Success depends on scaling the right axes &#8212; not just parameters, but context length, data diversity, objectives, tokenization, and architecture. And it depends on evaluating models fairly, with methods that reveal their strengths rather than obscure them.</p><p>In this post, I lay out why scaling still matters for biology, why foundation models are crucial for bringing biology into the next decade, which dimensions we should be scaling, and how to think about evaluation.</p><h1>Supervised Models Excel with Abundant Data, but Biology Is Far More Complex</h1><p>In genomics, supervised models excel where labels are plentiful. Human and mouse genomes have benefited from massive data-generation consortia. Disease genetics, ENCODE-like catalogs of regulatory activity, popular immortalized cell lines: these are rich ecosystems where supervised models can thrive. Feed them thousands of transcriptomic and epigenomic tracks and they deliver excellent performance.</p><p>But biology extends far beyond this tight circle. Step into zebrafish, plants, or microbial consortia, and labeled data quickly thins out. Assays become noisy, annotations sparse, and sample sizes small. Training high-capacity supervised models is simply not feasible in these settings.</p><p>This is where foundation models matter. Pretraining across species and contexts lowers the barrier in data-poor domains. A model like Evo 2, trained on thousands of genomes across the tree of life, carries useful priors even for organisms studied by only a handful of labs. Its broad evolutionary grounding allows transfer: patterns learned from one species help interpret another.
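</p><p>Concretely, zero-shot variant effect prediction with a sequence model usually reduces to a likelihood comparison: score the variant window and the reference window under the model and take the difference. A minimal sketch with a toy k-mer Markov model (the sequence, the k=3 context, and all numbers here are invented for illustration; a foundation model replaces the k-mer statistics with a learned sequence model):</p>

```python
import math
from collections import Counter

# Toy "zero-shot" variant scoring: fit a k-mer Markov model on a reference
# sequence, then score a variant as log P(alt window) - log P(ref window).
K = 3  # context length (hypothetical; real models condition on far more)

def train(seq, k=K):
    kmer, context = Counter(), Counter()
    for i in range(len(seq) - k):
        kmer[seq[i:i + k + 1]] += 1
        context[seq[i:i + k]] += 1
    return kmer, context

def log_likelihood(seq, model, k=K, alpha=1.0):
    kmer, context = model
    ll = 0.0
    for i in range(len(seq) - k):
        # add-alpha smoothed conditional probability of the next base
        ll += math.log((kmer[seq[i:i + k + 1]] + alpha) /
                       (context[seq[i:i + k]] + 4 * alpha))
    return ll

reference = "ACGT" * 50 + "TTTTTTTTTT" + "ACGT" * 50  # simulated genome
model = train(reference)

ref_window = reference[190:220]
alt_window = ref_window[:15] + "G" + ref_window[16:]  # one substitution

# A negative score means the variant looks less "reference-like" to the model.
score = log_likelihood(alt_window, model) - log_likelihood(ref_window, model)
print(round(score, 2))
```

<p>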
And remarkably, Evo 2 achieves state-of-the-art performance on both non-coding and coding variant prediction in humans despite being trained only on reference genomes, not on expensive omics tracks.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!gScn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc24f3e2-0d32-4a93-bd2b-9793f7d3b051.tif" width="1456" height="867" alt=""><figcaption class="image-caption">Evo 2 achieves strong zero-shot performance on predicting human clinical variants across both coding and noncoding regions, outperforming baselines without relying on omics tracks. PhyloP is a conservation-based baseline (adapted from the <a href="https://doi.org/10.1101/2025.02.18.638918">Evo 2 paper</a>).</figcaption></figure></div><p>The lesson: supervised models may dominate rich-data niches for now, but foundation models are indispensable in sparse ones. And much of biology is data-sparse.</p><h1>Benchmarking Pitfalls: Why Evaluation Choices Matter</h1><p>I think there is a tendency in our field to downplay the value of foundation models. In reality, benchmarking these models requires care and attention.
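</p><p>One way evaluation choices move the numbers is the choice of which transformer layer to probe. A toy sweep with simulated embeddings (the signal strengths per layer and every number below are invented for illustration; no real model is queried) shows the same linear probe scoring very differently depending on the layer it reads:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def auroc(scores, labels):
    """Rank-based AUROC (Mann-Whitney U), no sklearn needed."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Simulated per-layer embeddings: middle layers carry more label signal,
# mimicking the pattern reported for models like Evo 2.
n, dim = 200, 32
labels = rng.integers(0, 2, n)
signal_by_layer = [0.1, 0.3, 0.8, 1.2, 1.5, 1.2, 0.6, 0.2]  # hypothetical

results = {}
train = np.arange(n) < n // 2  # first half for fitting the probe
for layer, s in enumerate(signal_by_layer):
    emb = rng.normal(size=(n, dim))
    emb[:, 0] += s * labels  # inject label-correlated signal into one dim
    # Minimal linear probe: project onto the difference of class centroids
    # estimated on the training split, then evaluate on the held-out split.
    w = emb[train & (labels == 1)].mean(0) - emb[train & (labels == 0)].mean(0)
    results[layer] = auroc(emb[~train] @ w, labels[~train])

best = max(results, key=results.get)
print({layer: round(v, 2) for layer, v in results.items()}, "best:", best)
```

<p>A benchmark that probes only layer 0 or only the final layer of this simulated model would report a much weaker number than a systematic sweep finds.</p><p>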
When foundation models are benchmarked, the results often hinge on methodological details that are easy to overlook.</p><h2>1. Layer Choice Is Not a Nitpick</h2><p>Transformers do not distribute information evenly across layers. Early layers capture local features; middle layers often encode rich structural and functional signals; late layers tilt toward autoregressive objectives. Picking the wrong layer to probe can make a model look weak.</p><p>Yet many benchmarks report results from a single layer or from a naive averaging of token embeddings. That choice can flip conclusions. For Evo 2, for example, the authors show that probing layer 20 yields excellent classification accuracy on BRCA1 variants, whereas probing layers 1-3 produces far worse results. Without systematic layer sweeps and thoughtful pooling strategies, comparisons between models become unreliable.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!OhvN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d9417e3-f65b-4663-99a2-6af3c635fb9b.tif" width="1456" height="437" alt=""><figcaption class="image-caption">Careful selection of layers is essential for fair evaluation of transformer models. AUROC performance in BRCA1 variant classification varies widely across layers, with Evo 2 block 20 providing the strongest results. This highlights how layer choice can dramatically affect conclusions (adapted from the <a href="https://doi.org/10.1101/2025.02.18.638918">Evo 2 paper</a>).</figcaption></figure></div><h2>2. Cropping Results Misses the Full Story</h2><p>Benchmarks like TraitGym illustrate another problem. Evo 2 lags behind specialized models on some complex traits but outperforms them on Mendelian ones. This makes biological sense: Mendelian traits are often driven by strong-effect variants in coding or conserved regulatory regions, where evolutionary pretraining is powerful. Complex traits, by contrast, are polygenic and data-rich, favoring supervised models tuned to phenotype-specific signals.</p><p>Yet over and over again, critics share a cropped version of this evaluation. Showing only the right-hand panel of a benchmark, the slice where supervised models win, gives a skewed impression. Reporting full grids reveals the trade-offs: foundation models recognize evolutionary disruption; supervised models capture subtle statistical associations.
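</p><p>The cropping problem is easy to state in code. With a hypothetical two-by-two results grid (every number invented for illustration), slicing out a single column crowns a different &#8220;winner&#8221; than the full grid supports:</p>

```python
# Hypothetical benchmark grid (all AUROC values invented for illustration):
# each model family leads in a different trait category.
grid = {
    "supervised": {"mendelian": 0.78, "complex": 0.86},
    "foundation": {"mendelian": 0.91, "complex": 0.74},
}

def winner(category):
    return max(grid, key=lambda model: grid[model][category])

# Cropping to the "complex" column alone suggests supervised models win...
print("complex only:", winner("complex"))
# ...while the full grid shows complementary strengths.
print("mendelian only:", winner("mendelian"))
mean_scores = {m: sum(v.values()) / len(v) for m, v in grid.items()}
print("macro-average:", mean_scores)
```

<p>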
Even in humans where we have plenty of data, both stories are true, and both matter.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!ph7y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feeb9f48a-e912-4d30-a895-f6e583fe3167_1280x1276.png" width="1280" height="1276" alt="" loading="lazy"><figcaption class="image-caption">In TraitGym benchmarks, Evo 2 performs better on Mendelian traits, while supervised models do better on complex traits. Showing the full result grid is essential: cropping to only the supervised wins obscures the complementary strengths of each approach (adapted from the <a href="https://doi.org/10.1101/2025.02.11.637758">TraitGym paper</a>).</figcaption></figure></div><h1>Noise Today Can Be Structure Tomorrow</h1><p>One common critique is that most of the genome is &#8220;noise.&#8221; Intergenic regions make up ~75% of mammalian genomes, and many positions seem unconserved. If only ~10% of nucleotides are conserved across species, the argument goes, why waste model capacity predicting the rest?</p><p>This view misrepresents regulatory biology. Intergenic regions are not uniform deserts. They are patchworks of functional elements embedded in less constrained sequence. 
Enhancers, silencers, and insulators sit scattered across these expanses, looping over vast distances to control genes. Chromatin domains partition the genome into 3D compartments that matter for transcriptional regulation.</p><p>The ENCODE 2025 encyclopedia catalogs millions of candidate regulatory elements in human and mouse alone. Many of these lie in &#8220;non-conserved&#8221; territory. Sequence-level conservation is not the right proxy: regulatory function often persists through network-level conservation, where different motifs evolve to achieve the same control logic.</p><p>When models struggle to learn from intergenic regions, it does not mean those regions are meaningless. It means our objectives are insufficient. Even with abundant data, standard next-token prediction may miss distal, sparse dependencies. Here, adding supervised tasks such as predicting chromatin contacts, enhancer-promoter loops, or expression changes under perturbation could teach models to extract real structure from apparent noise.</p><h1>Scaling Beyond Parameters</h1><p>In public discourse, scaling is often shorthand for &#8220;adding parameters.&#8221; Bigger is better. But in biology, parameters are only one axis, and sometimes not even the most important one.</p><ul><li><p><strong>Context length.</strong> Many regulatory interactions span hundreds of kilobases. Standard autoregressive training with limited windows will miss them. Long-context models like Evo 2 and AlphaGenome, with million-token windows, show why context scaling can be more impactful than sheer size.</p></li><li><p><strong>Tokenization.</strong> Single-nucleotide tokens are simple but not necessarily optimal. Codons, k-mers, or adaptive schemes can encode biological structure better, letting smaller models outperform larger but poorly tokenized ones.</p></li><li><p><strong>Readouts.</strong> Linear probes on the right intermediate layer can beat more complex architectures on the wrong one. 
Benchmarking must separate engineering choices from fundamental model limits.</p></li><li><p><strong>Data breadth.</strong> A model trained only on human data will struggle with plants. One trained only on coding sequences will miss regulatory grammar. Scaling species diversity, assay types, and experimental contexts often delivers larger gains than adding layers.</p></li></ul><p>Scaling in biology is multi-dimensional. The critical question is not &#8220;how big is the model?&#8221; but &#8220;what axes are we scaling, and are they aligned with biological signals?&#8221;</p><h1>Architectures for Biology, Not Just Borrowed from Language</h1><p>Language models advanced because architecture and data evolved together. Self-attention unlocked long-range dependencies; tokenization improved; datasets scaled; instruction tuning taught models to use their knowledge.</p><p>Biology will follow a similar trajectory, but with its own twists. The signals we care about are inherently bidirectional, multimodal, and structured in 3D.</p><ul><li><p><strong>Bidirectionality.</strong> Enhancers regulate promoters upstream and downstream. RNA folding depends on base-pairing across both directions. Autoregressive models that only look backward miss half the story.</p></li><li><p><strong>Long-range, sparse interactions.</strong> Unlike text, where nearby words matter most, biological regulation often skips over large spans. Sparse attention or hierarchical models may capture this better.</p></li><li><p><strong>Multi-scale patterns.</strong> Chromatin domains, local motifs, 3D genome structure: biology is layered in ways language is not. Standard transformers may not capture these efficiently.</p></li></ul><p>We likely have not yet found the optimal architecture for biological sequences. 
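</p><p>The bidirectionality point suggests one concrete design constraint: DNA carries the same information on both strands, so a model&#8217;s output should not depend on which strand it happens to read. Below is a minimal Python sketch of enforcing reverse-complement invariance around an arbitrary scoring function; the toy <code>gc_score</code> is a placeholder for a real model, not anyone&#8217;s published method.</p>

```python
# Reverse-complement (RC) symmetry: a DNA-specific invariance that
# architectures borrowed from text do not get for free.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of an uppercase DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def rc_symmetric(score):
    """Wrap any sequence-scoring function so strand choice cannot matter."""
    def wrapped(seq: str) -> float:
        return 0.5 * (score(seq) + score(reverse_complement(seq)))
    return wrapped

# Toy stand-in for a model: deliberately strand-asymmetric.
def gc_score(seq: str) -> float:
    return seq.count("G") * 1.0 + seq.count("C") * 0.5

score = rc_symmetric(gc_score)
print(gc_score("GGAT"), gc_score(reverse_complement("GGAT")))  # differ
print(score("GGAT"), score(reverse_complement("GGAT")))        # identical
```

<p>Averaging at inference time is the crudest fix; an architecture can instead bake the symmetry in, for example by tying weights across reverse-complemented inputs.</p><p>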
Borrowing directly from NLP will take us part of the way, but breakthroughs will come from designs tailored to biological invariances.</p><h1>The Data Frontier: Diversity, Perturbations, and Synthetic Biology</h1><p>In our field, there is also a sense that we are approaching the limits of meaningful biological data. This is simply wrong. OpenGenome2 and similar collections are milestones, not endpoints. Sequence diversity is vast; microbial communities, metagenomes, plants, and understudied clades remain under-sampled. Functional diversity is even larger. Perturbation assays, multi-modal single-cell data, and time-series measurements are only beginning to scale.</p><p>But most importantly, we are no longer limited to natural data. Genome-scale generative processes and genome foundries mean that we can design synthetic sequences and test them at scale, effectively creating new training data beyond what evolution provides. Evo itself demonstrates this: models that generate and evaluate synthetic variation expand the dataset in directions natural diversity never explored.</p><p>Data growth in biology is not about chasing the trillion-token thresholds of NLP. It is about expanding along the biologically relevant axes: species, conditions, modalities, perturbations, and designed diversity.</p><h1>Foundation Modeling Is Needed More Than Ever</h1><p>Taken together, these points argue for a reframing. Scaling in biology is not dead. It is conditional.</p><ul><li><p>Foundation modeling helps most where data is sparse.</p></li><li><p>Scale fails if we probe the wrong layers or delude ourselves by cropping benchmarks.</p></li><li><p>Scale needs objectives that respect biological regulation.</p></li><li><p>Scale is multi-dimensional: context length, tokenization, data breadth, architecture, not just parameters.</p></li><li><p>Scale depends on data diversity, including synthetic generation.</p></li></ul><p>When these conditions are met, scaling delivers. 
Evo 2&#8217;s performance on variant interpretation, foundation models&#8217; ability to transfer across species, and the early success of long-context architectures are all evidence. The story is not failure; it is refinement.</p><h1>Conclusion</h1><p>Scaling has always been about more than size. In biology, it is about matching objectives, architectures, and data to the underlying signals. Dismissing scaling because supervised models win in data-rich niches misses the broader picture.</p><p>The right framing is that scaling is alive but must be given the right ingredients: longer contexts, better tokenization, richer objectives, and broader data. Under those conditions, scaling unlocks biological insight in places where no supervised model could even start.</p><p>Scaling works. We just need to scale the things that matter.</p>]]></content:encoded></item></channel></rss>