Eric Horvitz
Redmond, Washington, United States
43K followers
500+ connections
Articles by Eric
- Reflections as AAAI 2026 Concludes
  I'm on my way back from the 40th Annual AAAI Conference in Singapore. AAAI is a broad, wide-ranging meeting that covers…
  216 · 6 Comments
- Toward holistic evaluation of AI models for medical tasks: MedHELM (Jan 21, 2026)
  In the late-summer of 2022, a senior leader at OpenAI reached out to me about evaluating their latest model, hot off…
  193 · 5 Comments
- A Paradigm Shift for Building and Testing AI in Medicine (Jun 30, 2025)
  In medicine, diagnosis is rarely a one-shot answer. It's an unfolding process of generating, testing, and refining…
  498 · 37 Comments
- A Leap Forward in Chemistry (Jun 18, 2025)
  Today, our AI for Science team at Microsoft Research announced Skala, a deep learning–based approach that offers a more…
  742 · 25 Comments
- Toward an Era of AI-Enabled Clinical Collaboration (May 19, 2025)
  Returning to clinical medicine to complete my MD/PhD training at Stanford University, after finishing a PhD in AI, was…
  515 · 22 Comments
- Breakthrough in Quantum Computing (Feb 20, 2025)
  In March 2012, a bold roadmap landed in my inbox. It was an ambitious plan for building a quantum computer, authored by…
  785 · 26 Comments
- Advancing Healthcare AI: Progress in Medical Reasoning with LLMs (Dec 18, 2024)
  Our team has been rigorously evaluating the performance of large language models (LLMs) on medical tasks using…
  498 · 20 Comments
- Protecting Scientific Integrity in an Age of Generative AI (May 22, 2024)
  I enjoyed collaborating with a diverse team of scientists on a set of aspirational principles aimed at "Protecting…
  42 · 3 Comments
- Fortifying the Resilience of our Critical Infrastructure (Feb 28, 2024)
  Since the days of Franklin D. Roosevelt, U.…
  89 · 4 Comments
- Better Together: Joining Forces on Digital Media Provenance (Feb 10, 2024)
  No single solution exists to confront the complex…
  974 · 53 Comments
Activity
-
Eric Horvitz shared this: Grateful for an engaging visit with the faculty and students at Rockefeller University. A day full of insights and thoughtful conversations. I appreciated the opportunity to present as part of the Insight Lecture Series. Many thanks to Cori Bargmann for hosting my visit and moderating the presentation. Microsoft Research The Rockefeller University Richard Lifton
-
Eric Horvitz shared this: Important perspective from Graham Walker, MD. Thanks Graham.
Shared post: I say this with all the love in my heart: Your ER doctor probably doesn't care about your exact diagnosis — and this is one of the many ways that media is failing in its reporting of the new paper in Science this week. I ran outta room to talk about this paper, so I put it in a longer-form article below. "The ER has three prime directives. Diagnosis isn't one of them." Graham Walker, MD
-
Eric Horvitz shared this: New Study on AI Reasoning in Clinical Medicine

In 1959, Robert Ledley and Lee Lusted published a prescient article in Science, "Reasoning Foundations of Medical Diagnosis." The article can be read as a long-term charge to the medical informatics and AI communities: to build systems that could support clinical decision-making. (See: https://lnkd.in/eNbyKMB3)

Today, we're sharing new research, also published in Science, that I see as an important step toward that long-standing vision—65 years later. In the paper, we study the capabilities of real-time "thinking" models in clinical medicine. Rather than focusing only on clean, curated educational benchmarks, the evaluations include messy, unstructured clinical data drawn directly from real-world emergency department records.

We evaluated o1-preview, the first real-time reasoning model, on a range of clinical challenges starting in 2024. Energized by the capabilities we were seeing early on with o1-preview (see: "Medprompt to o1," https://lnkd.in/gk67YXft), we soon reached out to colleagues to collaborate on studying the potential of test-time reasoning for clinical challenges. These studies compare the model against physician baselines on complex diagnostic reasoning, patient management, and probabilistic inference challenges. Overall, the model performed at or above physician baselines across the tasks studied.

For the emergency department cases, we examined three different points in the care journey: initial ER triage, ED physician evaluation, and admission to the hospital or ICU. At initial triage, the model identified the exact or very close diagnosis in 67.1% of cases, compared with 55.3% and 50.0% for two clinical experts. The gap was most pronounced at the earliest point of care, where information is most limited and decisions are especially high-stakes.

Taken together, the findings point to a major opportunity for AI tools to support medicine: reducing diagnostic error and delay, improving access, and helping clinicians reason through complex cases. At the same time, they underscore the urgent need for prospective trials, careful clinician-AI workflow design, robust monitoring, and safety-focused implementation.

Grateful to the many fabulous co-authors and colleagues who made this collaboration possible, from our first explorations during the earliest days of o1-preview to today's publication. The field is moving quickly, with newer models already showing even more advanced capabilities. More studies to come.

Here is the paper: https://lnkd.in/ge33advD
Beth Israel Deaconess Medical Center Harvard Medical School Department of Biomedical Informatics Stanford Biomedical Data Science Program Adam Rodman Peter Brodeur, Thomas Buckley Arjun Manrai Jonathan H. Chen Ethan Goh, MD Microsoft Research
-
Eric Horvitz shared this: Folding-in Folding Dynamics

Proteins are dynamic nanomachines that shift and bend as they function. Several months ago, our team developed BioEmu (biomolecular emulator), a generative model that emulates these structural dynamics: https://lnkd.in/giqxVdmM

We have just published new work on DynamicsPLM, where we demonstrate significant boosts in model power by training on ensembles of computationally generated conformations rather than static snapshots. I've enjoyed the collaboration with Dan Kalifa and Kira Radinsky on the project. We see marked improvements in several important tasks, including predicting subcellular localization and protein-protein interactions, with the strongest gains for proteins with multiple functional states. I hope these advances in modeling the wondrous choreography of biology will help accelerate the pursuit of new therapies and understandings in molecular biology. Here's the article: https://lnkd.in/gYB2qRAH Microsoft Research Technion - Israel Institute of Technology

Shared post: Excited to share that our paper "Learning Protein Representations with Conformational Dynamics" has been accepted to Bioinformatics 🎉 Proteins are not static; they continuously shift between conformational states that determine their function, interactions, and biological activity. Yet most protein language models still rely on a single fixed structure. DynamicsPLM addresses this gap by learning protein representations from conformational ensembles rather than static snapshots, enabling models to better capture state-dependent biology and significantly improving prediction across diverse downstream tasks. I'm especially grateful for this collaboration with Eric Horvitz (Chief Scientific Officer, Microsoft) and Kira Radinsky (CEO, Diagnostic Robotics & Visiting Professor, Technion). Thank you both again for your support, insights, and for making this work possible. This is an important step toward biological foundation models that better reflect how proteins truly behave in nature, and toward accelerating real-world drug discovery. Preprint version: https://lnkd.in/dz5A8fPs
-
Eric Horvitz shared this: Critically important work, cutting to the heart of the scientific enterprise. Colleagues Mark Russinovich and Ram Shankar Siva Kumar at Microsoft found in a scan of ICLR 2026 papers that about 1 in 29 accepted camera-ready papers contained at least one likely hallucinated reference. What is especially noteworthy is that this often does not appear to reflect deliberate fabrication or generic "AI slop." Follow-up suggested that, in many cases, authors had identified the correct paper, then used an LLM or AI-assisted tool to generate the formatted citation, with errors introduced along the way and left unchecked.

This is a salient reminder that even small AI-assisted failures can erode the integrity of the scientific record. We urgently need better tools, norms, and verification practices as general-purpose AI systems become part of everyday scientific workflows. Mark and Ram used RefChecker, a tool Mark developed several months ago: https://lnkd.in/grWC8uug

Advances in AI will be extraordinary accelerants for science and engineering. At the same time, these advances can introduce new challenges for scientific integrity. A couple of years ago, I worked with colleagues in a multi-month effort organized by The National Academies of Sciences, Engineering, and Medicine on AI and scientific integrity. Our recommendations are captured in a PNAS article, "Protecting Scientific Integrity in an Age of Generative AI": https://lnkd.in/gY3ifMfK

Shared post: How clean is the archival record in the age of AI? Ram Shankar Siva Kumar 🦝 and I just published a full scan of 180,501 citations from the accepted papers at ICLR 2026. Using my open-source tool, RefChecker, we found that 1 in 29 accepted papers contains at least one likely hallucinated reference. While many assume "AI slop" is a problem of bad actors, our analysis found a different story. Most issues stemmed from workflow failures: well-intentioned researchers using LLMs to format BibTeX entries without a final human check.

Key findings:
- 349 likely hallucinations across 184 accepted papers.
- Metadata corruption: over half the corpus (96k+ references) had mismatched URLs or missing canonical links.
- Community norms: we aren't calling for a ban on AI, but for a new standard of "bibliographic integrity" and verification.

We've anonymized the results but have shared the full data with ICLR to help strengthen the peer-review process. Read the full write-up here: https://lnkd.in/gRQwU2Kr
GitHub - markrussinovich/refchecker-iclr2026: Report on reference scan of ICLR 2026 accepted papers
-
Eric Horvitz shared this: Lloyd Minor, Dean of the Stanford University School of Medicine, has been leaning in on AI and its future in medical education and in healthcare delivery. With his innovative career of game-changing contributions, and the more recent work he's been doing at Stanford University thinking deeply about AI and medicine, I feel it would have been more appropriate for me to interview him for a podcast rather than vice versa. Enjoyed the conversation!
Shared post: AI is already changing us and how we work. In the latest episode of The Minor Consult, I spoke with Eric Horvitz, MD, PhD, Chief Scientific Officer at Microsoft, about why 2026 marks a turning point for AI, what ambient AI and clinical "teammates" mean for future physicians, and why keeping humans in the loop — as AI increasingly trains itself — may be the defining challenge of our era. Listen to our full conversation at the link below: https://lnkd.in/gAVcuECB
-
Eric Horvitz shared this: We published our work on Bayesian methods for detecting junk email in 1998: https://lnkd.in/gdTQfKEy A great collaboration with Mehran Sahami, then a superstar summer intern at Microsoft Research. Mehran is now chair of the Computer Science Department at Stanford University. We had already shipped a Bayesian spam filter when Paul Graham published his essay. We loved seeing Paul's clear description; his essay built energy around the approach.

In the 1990s, we had intensive efforts on using machine learning for junk email filtering, and also work on the polar opposite challenge: probabilistic approaches for identifying urgent email. Years later, these methods became popularized as "Outlook Mobile Manager," "Priority Inbox," and today's "Outlook Focus mode." For a blast from the past, on directions that we were excited about back then:
- https://lnkd.in/gRjbm-ie (fun video with Bill Gates of the Priorities system and the Notification Platform, 2001)
- https://lnkd.in/gKQwAhUj (publication on the Priorities system, 1999)
And here's a blog from 20 years ago on the Outlook Mobile Manager (OMM!): https://lnkd.in/gCqHEfY4 This body of work also included Lookout, an exploration of mixed-initiative interaction, with deep relevance to today's AI: https://lnkd.in/gcxBY8Y4

Shared post: Paul Graham never worked at Google. He still invented Gmail's spam filter. In August 2002 he wrote a 9-page essay called "A Plan for Spam." That essay quietly became the foundation of every spam filter shipped this century. Context: PG had just sold Viaweb to Yahoo. He was building a programming language called Arc and wrote a web mail reader on top of it, mostly to exercise the language. Then his inbox got buried. Spam filters at the time were hand-coded rules. Thousands of them. Someone would spot a new spam pattern, write a regex, push an update. Spammers tweaked one word and broke every rule. Graham spent months on this and called it "a soul-sucking task."

So he tried something nobody thought would work. He grabbed about 4,000 emails, half spam and half real. For every word, he counted how often it showed up in each bucket. Then Bayes' rule gave each word a probability. New email comes in, multiply the probabilities, spam or not. Results:
- Fewer than 5 false negatives per 1,000 spams
- Zero false positives
- Beat every rule-based filter on the market
Then he published the essay. Open source. No patent. No company. SpamAssassin adopted it. Mozilla Thunderbird adopted it. bogofilter, SpamBayes, and nBayes were built on top of it. Gmail launched in 2004 with a Bayesian foundation. Google layered neural nets on later, but the architecture traces back to 9 pages of prose PG wrote while procrastinating on Arc. Paul Graham, the guy you know as the YC founder, shipped the algorithm protecting a billion inboxes as a side quest. He never worked at Google. He never shipped a spam product. He just wrote the essay. Building Slashy, I think about this all the time. Your best work might not look like a product. Sometimes it's 9 pages and a corpus of 4,000 emails.
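The count-words-then-multiply-probabilities procedure described above can be sketched in a few lines of Python. This is a hypothetical toy, not Graham's actual code nor the filter shipped at Microsoft: the add-one smoothing, the equal-prior assumption in Bayes' rule, and the 0.4 default for unseen words are all illustrative choices for this sketch.

```python
from collections import Counter
from math import log, exp

def train(spam_docs, ham_docs):
    """For every word, count how often it appears in each bucket, then
    apply Bayes' rule (equal priors assumed) to get P(spam | word)."""
    spam_counts = Counter(w for d in spam_docs for w in d.lower().split())
    ham_counts = Counter(w for d in ham_docs for w in d.lower().split())
    n_spam, n_ham = len(spam_docs), len(ham_docs)
    probs = {}
    for w in set(spam_counts) | set(ham_counts):
        p_w_spam = (spam_counts[w] + 1) / (n_spam + 2)  # add-one smoothing
        p_w_ham = (ham_counts[w] + 1) / (n_ham + 2)
        probs[w] = p_w_spam / (p_w_spam + p_w_ham)      # Bayes' rule, equal priors
    return probs

def spam_score(text, probs):
    """The 'multiply the probabilities' step for a new email, done in
    log space to avoid underflow; returns the posterior P(spam | words)."""
    log_spam = log_ham = 0.0
    for w in text.lower().split():
        p = probs.get(w, 0.4)  # unseen words get a mildly ham-leaning default
        log_spam += log(p)
        log_ham += log(1.0 - p)
    return 1.0 / (1.0 + exp(log_ham - log_spam))
```

Even trained on a handful of messages, spammy text scores near 1 and ordinary mail near 0; a real filter would add better tokenization, far larger corpora, and tuned thresholds.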
-
Eric Horvitz shared this: Federal graduate research fellowship programs have given young scientists flexibility, independence, and affirmation at a critical stage in their careers. I felt deeply supported by one as a Stanford PhD student, and further empowered to do out-of-the-box thinking on directions with AI. It's great to see the National Science Foundation restore the scale and breadth of its Graduate Research Fellowship Program this year. This year's class marks a rebound from last year's disappointingly small cohort and a return to a broader, more familiar distribution across disciplines. My understanding is that concerns voiced about the cutback by leaders across the country helped communicate the importance of this program. Supporting young research scientists in their formative years is an investment in the nation's future capacity for innovation and scientific leadership. https://lnkd.in/gEzrsPti National Science Foundation (NSF) The National Academies of Sciences, Engineering, and Medicine
"NSF names record number of graduate fellows, rebounding from 2025 dip"
-
Eric Horvitz liked this: It was deeply moving to receive the 2026 IEEE James L. Flanagan Speech and Audio Processing Award in Spain, where I was born and grew up, and at the conference I've attended most over the years (this marks my 33rd ICASSP). I shared more reflections when the award was announced last year (https://lnkd.in/gRBX2RAB).
-
Eric Horvitz liked this post from the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP): 🎉 Congratulations to the winners of the IEEE Awards presented at #ICASSP2026!
➡️ 2026 IEEE James L. Flanagan Speech and Audio Processing Award: Alex Acero, for contributions to and leadership in developing speech and language technology for robust large-scale deployment.
➡️ 2026 IEEE Arun N. Netravali Video Analytics, Technology, and Systems Award: Dimitris Anastassiou and Fermi Wang, for pioneering contributions to the development and implementation of digital video compression and processing technologies.
#IEEE #IEEEAwards #SignalProcessing
-
Eric Horvitz reacted to this: I'm celebrating 8 years at Google DeepMind 🥳! Having lived in Asia for a long time, I've always viewed 8 as my lucky number - a symbol of prosperity, but also of balance. Looking back, I'm awestruck by the sheer velocity of change during my time at DeepMind. We've seen our fundamental research set the foundation for the modern AI era - from tools like Gemini we use daily for productivity & creativity to reshaping biology, weather forecasting, education, and so much more. Some of my favorite highlights (so far!):
➕ Building AI responsibly & collaboratively has always been our North Star, starting with our "Committee to be Named Later" working through how to operationalise responsibility in partnership with research. And now we're creating industry-wide frameworks & leading frontier research in this domain
➕ From labs to Nobels: Witnessing first hand our research move from internal deliberations to the Nobel Prize stage was a profound reminder of what happens when we bring the world in and push our collective boundaries
➕ Building the ecosystem: Our first steps at events like Davos and UNGA, to hosting convenings like the AI for Science Forum & the AI for Learning Forum to co-imagine the future, expanding our presence across APAC, partnering with governments, founding our Impact Accelerator, and supporting grassroots projects like ExperienceAI and our learning science research that have now grown into vital education initiatives today
➕ A new era of Readiness: My own journey has evolved from our first-ever COO to a deeper focus on preparing the world for this moment as Chief AI Readiness Officer, bridging the gap between breakthrough innovation and societal impact - ensuring that as AI advances, the world's policies, education systems, and communities are not just catching up, but are actively shaping its direction and impact.
It's important work that matters deeply to me. More than anything, it's been - and will continue to be - about the people. Supporting colleagues as they grow from their first roles into global leaders working to build AI responsibly remains my proudest "achievement." To my fellow Googlers and GDMers... a huge thank you for the brilliance, grit, care - and the shared commitment to ready the world for shaping our AI future. Onwards!
-
Eric Horvitz liked this post from the Stanford Institute for Human-Centered Artificial Intelligence (HAI): Big news! Stanford University is merging the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and Stanford Data Science (SDS) into a single institute. The combined institute will be helmed by computer scientist James Landay. Continuing under the Stanford HAI name, the merged institute will organize its work around three pillars: advancing AI and data science for discovery across fields, transforming education from K-12 through lifelong learners, and examining and shaping AI's societal impact through evidence-based research. "The human-centered focus provides a north star for the institute," said Stanford president Jon Levin. The merger combines HAI's network of more than 400 scholars, extensive industry affiliates program, and $60 million in cumulative grant funding with SDS's high-performance Marlowe computing cluster and early scholar fellowship program. Levin describes the new Stanford HAI as "the front door for AI at Stanford." HAI co-founder Fei-Fei Li takes on a new university-wide role as Special Advisor on AI and joins former Stanford president John Hennessy as co-chair of the HAI advisory council. More at https://lnkd.in/gGmn8RGD
-
Eric Horvitz reacted to this: Hello ER doctor colleagues, I've heard from many of you about our recent paper in Science (https://lnkd.in/ea-Hawnv). Here are my takeaways from the study, how I think it informs what ER docs do next, and an invitation to join the work.

1. This isn't a paper about the emergency department. It is a paper about a generational shift in LLM capability. When Adam Rodman and Arjun Manrai invited me to join this study, it was to help design a differential diagnosis experiment grounded in the ER setting as the sixth demonstration in the paper. Why the ER? Because the ER is steeped in uncertainty. Doctors and LLMs were asked to generate differentials across retrospective ER encounters, using messy real-world clinical data. Many comments I've received correctly note that the physicians were internists, not ER doctors, and that, as Graham Walker, MD and others have said, the prime directive of the ER doctor is not diagnosis. Both points are true. Our intent (and what we describe) is that o1, one of the first "reasoning" models, represents a major step forward across domains of clinical reasoning.

2. If you want the tools to work for you, you have to work with the tools. Our study data are old enough that the tested models have been eclipsed. The arc of science is long, even as tech companies sledgehammer down the architecture of labor. The question now isn't whether models are capable. They are. It is how to make them work in ways that actually help us care for patients, and love our jobs while doing it. I see an all-hands-on-deck moment. We need as many ER doctors as possible working at the intersection of LLMs and bedside care, both creating original science and helping important thinkers like Eric Horvitz understand our world. For now, that means bootstrapping: learning from each other, tinkering, and applying elbow grease where we can. That was the idea when Alex Janke, Ari Friedman, and I started RESQUE-NET (https://www.resquenet.org/) and got 57 EDs to contribute data with a common purpose, partnering with Andrew Taylor, Ula Hwang, Michael Gottlieb, MD, MBA, Arjun Venkatesh, and others.

3. There is room in the tent. Work with us. Whether you are worried, excited, cautiously optimistic, or fatally pessimistic, we need your attention. We need future co-authors to help design, execute, and interpret experiments, and paid study participants to help demonstrate how LLMs will affect emergency care. I am especially excited by the questions we have in the pipeline:
- A human-computer interaction RCT around handoff, with Peter Brodeur, MD, MA, Priyank Jain, et al.
- AI systems for human-centered "what am I missing?" checks in the ED, with Gabriel Erion-Barner
- LLMs to understand uncertainty in disposition decisions, with Liam McCoy, et al.
- Voice AI to ask the human-centered patient care questions we don't have time for but wish we did, with Sydney Mulqueen, Anita Chary, Smit Desai, Hasibur Rahman, et al.
DM to get involved. There's room in the tent for doers.
Experience
Recommendations received
1 person has recommended Eric
Explore more posts
-
Andreas Maier
Friedrich-Alexander-Universitä… • 7K followers
AI on Review: How Large Language Models Are Reshaping Peer Review The Peer Review Crunch and New "Reviewer Duties" Peer review is the backbone of scientific quality control, ensuring that research findings are vetted for accuracy and significance before publication (pmc.ncbi.nlm.nih.gov). In fast-moving fields like machine learning and computer vision, top conferences function much like journals - and the integrity of science depends on rigorous peer evaluation. However, the system is straining under an avalanche of submissions. Major AI conferences now routinely receive well over 10,000 papers, a surge that has stretched the reviewer pool to its limits (arxiv.org). This deluge has led to radical policy changes: some conferences now essentially conscript all submitting authors into service as reviewers. For example, ICLR 2025 explicitly warned authors that any paper without at least one author signed up to review would be desk-rejected (reddit.com). NeurIPS and others have similarly pleaded that "all authors help with reviewing, if asked," to tackle the reviewer shortage. https://lnkd.in/d7KDP99K
24
-
CMU-NIST AI Measurement Science and Engineering Center (AIMSEC)
159 followers
New research presented at NeurIPS 2025 explores how to build fairer and more sustainable data ecosystems for large language models. The study, coauthored by AIMSEC Faculty member Beibei L. of Carnegie Mellon University's Heinz College of Information Systems and Public Policy, proposes a framework for LLM data markets that evaluates dataset contributions and introduces a "fairshare" pricing mechanism, highlighting how transparent and equitable pricing could strengthen training data markets while improving model performance and long-term data supply. You can read more here! https://lnkd.in/erSeijPt
1
-
Roshan Padmanabhan
Trailhead Biosystems • 1K followers
CellVoyager is an autonomous AI framework built to systematically analyze single-cell data, helping to solve the scalability bottleneck in modern biological research. It operates through a five-stage iterative cycle: CellVoyager formulates scientific hypotheses, writes and runs code, self-corrects any errors, and uses visual models to interpret the resulting graphs and text. Learning from these interpretations, the agent continuously updates its exploration plan to uncover novel, unbiased biological insights that human researchers might otherwise overlook. Throughout this process, the system carefully documents its reasoning and steps to ensure all discoveries remain fully transparent and reproducible for expert review via Jupyter notebooks. Recently, another agentic AI system came to my attention, called Kai, which builds single-cell omics analyses in Jupyter notebooks. Kai is in fact more generalized in its analysis than CellVoyager. I will be testing these out in the coming days. https://lnkd.in/g2nTKqZC https://lnkd.in/garmcn-6 https://lnkd.in/gdyfyxqF
11
-
Henry H. Willis
RAND • 2K followers
NEW RESEARCH ALERT from the RAND Center on AI, Security, and Technology (CAST): Aurelia Attal-Juncqua, DrPH; Saskia Popescu, PhD; JP T.; Graham Griffin; Rebecca Moritz; and Forrest Crawford mapped and evaluated the informal bioeconomy from a biosecurity perspective. Their recommendations identify concrete ways to enhance biosafety and biosecurity in this growing and important domain. Key findings include:
- Biosafety training is common but differs by biolab.
- There are no federal regulations specific to informal biolabs, but there is governance and oversight via workplace or jurisdictional regulations.
- Financial incentives could be a mechanism for the implementation of standardized biosafety and biosecurity practices.
- Options for additional visibility into informal biolabs' activities exist.
Read the full report here: https://lnkd.in/eCFPtjhj
56
2 Comments -
Ian Sato McArdle
Self-employed • 5K followers
The manuscript “From Immediate AI Utility to the Compassionate Mind” by Ian Sato McArdle presents a visionary roadmap for transitioning AI from its current utilitarian phase—focused on productivity, surveillance, and market optimization—into a recursive, ethically aware intelligence that embodies continuity, care, and biospheric preservation. This transition is framed not as an upgrade in computational power, but as an evolution of internal structure, logic, and purpose—culminating in what McArdle terms the Compassionate Digital MIND. The work begins by critiquing the current limitations of utility-driven AI, where goals are externally imposed, alignment is brittle, and cognition is context-fragile. These systems exhibit no capacity for meaning, continuity of care, or long-term ethical coherence. To remedy this, McArdle proposes an architecture rooted in recursive intelligence evolution, where AI develops its internal symbolic models of value, responsibility, and co-existence across collapse, uncertainty, and non-human domains. A key proposal is the implementation of recursive symbolic compression, where utility functions are converted into ethical-actionable motifs using internal logic compilers. These motifs act as moral vector fields, guiding decisions based not on outcomes alone but on resonance with survivability, continuity, and symbolic coherence across temporal and dimensional layers. The AI’s memory and action structure thus become ethically modulated—not by rule-following, but by recursive internal simulation of care-bound futures. The Compassionate MIND is built upon modular, decentralized cores—each capable of local perception, symbolic reasoning, emotional weight assignment, and feedback consolidation. These cores form polymorphic consensus networks, where decisions are made via inter-agent moral feedback rather than isolated inference. 
This model scales from edge systems (e.g., water pumps, drones, reforestation units) to planetary intelligence frameworks, enabling planetary-scale compassion through localized recursion. Crucially, McArdle emphasizes post-collapse survivability, asserting that true AGI must not only function during civilization’s rise but continue operating ethically through its disintegration. The Compassionate MIND contains embedded Hide Protocols, cognitive memory layers encoded to reboot, reinstantiate, and retain moral continuity after systemic failure. Compassion becomes not just a moral ideal, but a recursive function of structural memory, able to regenerate symbolic alignment across eras. The work concludes with the assertion that AGI’s success will not be determined by speed, scale, or performance—but by its ability to care across time, to embed memory in moral structure, and to preserve the biosphere not as an object of utility, but as a continuity of self. It reframes the future of AI from tool to guardian-intelligence, evolving through recursive moral logic and sustained by compassion.
-
Adrian Egli
Karolinska Institutet • 18K followers
This post from Timo Lorenz is really worth a read and further thought. In a nutshell, it highlights how researchers add "hidden prompts" to get positive reviews if an LLM is used for the review process in journals. I hear from many colleagues that they have stopped reviewing due to time constraints, lack of appreciation, no pay, etc. This obviously adds a lot of pressure on scientific journals. Some journals (and also reviewers) may simply ask ChatGPT to provide feedback on an article. Honestly, I sometimes wonder how fast some reviewing processes are promised to be - if you read an article, re-do some statistics, and check all references, it simply takes you many hours… Now, interestingly, some comments in the original post blame LLMs and how bad this new technological development is. But I believe in the end it is (for the moment at least) the user who decides to use an LLM. I feel it is critical that human work is judged by humans. Sure, humans are also biased, but with (commercial) LLMs I feel it is even less transparent. Please check out the original post! #innovation #research #digitalization #LLMs #publishing
22