Reza Zadeh
Stanford, California, United States
9K followers
500+ connections
About
More details: http://reza.ai
Activity
-
Reza Zadeh posted this:
The desire to improve Google Translate by 1-3%, a tiny percentage, gave birth to the entire modern AI industry and trillions in market cap outside of Google. BLEU score on MT seemed cool when I worked on it, but it punched waaaay above its weight as a guiding star.
-
Reza Zadeh reposted this:
Matroid's CEO, Reza Zadeh, met again with His Majesty King Charles III at the Sustainable Markets Initiative Terra Carta Roundtable in March 2026. The conversations continue to move beyond vision and into execution. Artificial intelligence is now part of how industries operate, how infrastructure is managed, and how sustainability goals are achieved. At Matroid, we are seeing this shift every day across manufacturing, energy, and aerospace. Computer vision is turning visual data into real-time decisions that improve quality, safety, and efficiency at scale. It is meaningful to see this work aligned with global leadership focused on long-term impact. Grateful for the continued dialogue and leadership. #TerraCarta #SMI #AI #Sustainability
-
Reza Zadeh reposted this:
Innovation is a mirror of our priorities. That enduring truth echoed in the halls of Hampton Court Palace in London this week.
This year's Sustainable Markets Initiative Roundtables & Exhibition celebrated the 175th anniversary of another exhibition, the 1851 "Great Exhibition." That landmark event of the 19th century brought together more than 100,000 "Works of Industry of All Nations" inside the famed Crystal Palace. It highlighted major advances in engineering and manufacturing and the industrial capacity of the Victorian era. It also reflected the belief of Prince Albert, the fair's royal champion, that technology could support broader social and economic aims.
Albert's vision still resonates. Moderating the SMI's AI for Transition plenary yesterday gave me the chance to speak with the CEOs of AECOM, ExpectAI, Matroid and Regrow Ag. Their work shows how AI is helping to reduce energy demands, improve design, strengthen decision-making in complex operations and enable more sustainable land use at scale. Their experiences underscored the importance of high-quality data, contextual understanding and responsible governance as AI becomes embedded in industry.
As a "coalition of the willing," the SMI can showcase progress, democratize adoption and accelerate "net-positive" AI impacts. Unlike 1851, our innovation priority today is not to build bigger and more powerful machines to master nature and the physical world. Rather, it is to harness data-driven insights, enriched with AI, to make energy, water, infrastructure and agricultural systems more resilient and efficient.
#TheSMI #IndustrialIntelligence #SustainableTransition
Anand Verma (Dr.) Anastasia Volkova, PhD Reza Zadeh Troy Rudd Sophie Miremadi Jacob Alexander Megan Lehtonen
-
Reza Zadeh reposted this:
At Scaled ML, one idea from Ashok Elluswamy stood out: "AI is the driver." At Tesla, AI is literally the driver. It powers the foundational models that see the world, make decisions, and operate vehicles in real time.
That same idea extends beyond autonomy. AI is becoming a driver of how humans build systems, solve problems, and extend our own capabilities. It is shaping how we move through the world and how progress compounds across generations.
In this clip from Ashok's Scaled ML talk, titled "Building Foundational Models for Robotics," he explains how Tesla approaches foundational models and why that perspective matters for what comes next. Watch the full talk on YouTube here: https://lnkd.in/gmJGSiip
#ScaledML #AI #Autonomy #FoundationalModels #MachineLearning
-
Reza Zadeh reposted this:
Databricks co-founders Matei Zaharia and Ion Stoica will be speaking at #ScaledML 2026! This event brings together the creators behind systems like Apache Spark™, CUDA, TensorFlow, ImageNet, Tesla FSD, and more to explore large-scale learning, distributed systems, and next-generation AI hardware. Join us on January 29th: https://scaledml.org/
-
Reza Zadeh posted this:
The alpha in ScaledML companies is insane: Groq, Cerebras, OpenAI, Tesla, Google, Meta, Databricks, Matroid and many others have all at least 10x'ed in valuation since first presenting at ScaledML. ScaledML.org
-
Reza Zadeh shared this:
ScaledML is back, after a 6-year hiatus! Consistently 2-3 years ahead of where ML will be. Examples of foresight at SML:
- OpenAI announced GPT-2 & RL efforts in 2016, 2017, and 2019
- Turing Award for Deep Learning announced by a Turing Award winner on the morning of the award
- Groq chip (acquired for $20bn) released
- All Google TPU versions detailed
Join us on January 29th at the Computer History Museum! http://scaledml.org
-
Reza Zadeh reposted this:
Excited to present at #ScaledML 2026. I'll share perspectives on what we've been working on at the DAF-Stanford AI Studio to scale AI/ML to solve some of the toughest technical problems for the United States Air Force and United States Space Force.
-
Reza Zadeh reposted this:
ScaledML is back! The world's leading forum for large-scale machine learning, distributed systems, and real-world AI engineering returns with an exceptional lineup of researchers, founders, and industry practitioners. Seats are limited. Secure yours today with discount code 2026SMLEarlyBird https://lnkd.in/gxUjgZuQ #ScaledML #AI #ML #CuttingEdge #Conference
-
Reza Zadeh reposted this:
We disclosed today as part of our Series L that our 4-yr-old data warehousing business is now >$1B revenue run rate. This is, to the best of my knowledge, the fastest-to-$1B DW product in the industry. The conventional wisdom is that it would take 5+ years just to build and release a new database. How did we do it, and what's next?
Four years ago, the linked blog announced that Databricks had won the official TPC-DS 100TB benchmark with DBSQL, which was in preview back then. It had the best perf and the best price/perf, notably beating Snowflake by 12x in price/perf in that benchmark. (Note: we still hold the top place on the official TPC-DS benchmark today.) That blog post launched a contentious "benchmarking war" with a lot of back and forth between vendors, but more importantly it marked the very beginning of our data warehousing business.
To build this business, we assembled the best engineering team and established a new infrastructure product category called Lakehouse, which inherits the flexibility and openness of data lakes and the performance of data warehouses. Lakehouse is now the standard for data infrastructure, and organizations are migrating from legacy data warehouses to the Lakehouse. The result so far is a testament to the team and their execution. We have a lot of ideas on how to take performance and usability to the next level, and the team is working hard to make that happen. Expect some big announcements next year. We want to lay the foundation for growing the data warehousing product to a $10B business.
Databricks has operated largely on the "analytics" side of data in the past, and we believe the "operational" side of data (aka "OLTP") is also ready for a "Lakehouse"-style disruption. A huge chunk of the founding team's time is now focused on "Lakebase", a new category of OLTP databases that separates storage (in the lake) from compute. That architecture enables features that have been virtually impossible for databases in the past: instant provisioning, elastic scaling (down to zero), branching, high-throughput scans directly from Spark, and more. I won't go into too much detail about Lakebase here, but we expect a similar trend over the next few years: Lakebase will transform the industry, and other OLTP systems will re-architect or reposition toward it.
The best data warehouse is a lakehouse, and the best database is a lakebase!
https://lnkd.in/gjhzMfAt
Databricks Sets Official Data Warehousing Performance Record
-
Reza Zadeh liked this:
Huaijin (Patrick) Wang is #hiring. Know anyone who might be interested?
-
Reza Zadeh liked this:
Proud of my friend Amir Sadeghian, PhD and his team at Astrocade for their Series B led by Sequoia Capital. From grinding late nights at the Stanford Vision & Learning Lab to creating the next generation of interactive social media, what a journey! 👏🏾
-
Reza Zadeh liked this:
The next trillion-dollar company probably doesn't look obvious yet. Great evening for zally® with the UBS team and the LA team from Presidio Partners at the stunning Rosewood Sand Hill in Palo Alto.
One of the discussions that stuck with me was how much investor mindshare in the secondary market is increasingly concentrated around a very small number of trillion-dollar-scale technology companies, and what that means for emerging category-defining companies still building toward that scale. It also raised an interesting question: would you rather place a bet on a trillion-dollar company that might double, or on a $1B company that could realistically grow 10x? Yes, the latter carries more risk. But without risk, there's no story. Every trillion-dollar company was once considered too early, too risky, or too ambitious.
Impressive founders, builders, and investors in the room. That's the thing with Silicon Valley. Everyone here is trying to push something forward in their own way. Thank you Jose, Chuck, Warren, and team for the invite. And thank you to my designated driver Charlotte. You are an absolute powerhouse, thanks for getting me home safely. Five-star review incoming.
-
Reza Zadeh liked this:
It's been about a month since we hosted our 2nd Annual AI & Compute Technologies for Aerospace Conference. This year, we also had a ribbon cutting ceremony to celebrate the signing of the Cooperative Agreement between the Department of the Air Force and Stanford. The ribbon cutting was the culmination of 3 years of hard work to formally establish the Studio, Space Accelerator, and Research Center.
ACT4Aero is a premier event that brings together leaders in the DAF, academia, and industry to further collaborations that matter to the operator. The first ACT4Aero led to new ventures, a DARPA program, research efforts, test management programs, and new industry strategies. In just one month, founders are telling us stories about new collaborations, students have lined up internships, and venture has found new investments.
Watch the video for the highlights and visit our freshly designed and published website, www.dafaistudio.com, to learn about what we're building.
-
Reza Zadeh liked this:
One of the more important questions in AI right now is where top researchers choose to spend their time. With today's announcement from Laude Institute, 25 teams are choosing to build real things, in the open, across accelerating scientific discovery, advancing frontline healthcare, strengthening civic discourse, and reskilling the workforce. Led by Dave Patterson, this is exactly the kind of work Laude was built to help get into the world. As my partner Andy Konwinski often says, the goal is species-level impact. These teams will now go on to compete for $10M labs. Looking forward to seeing where they take it next.
-
Reza Zadeh liked this:
Today, we're announcing the winners of Moonshots // ONE. Last year, our founder Andy Konwinski and board chair Dave Patterson put a single question to the most consequential AI researchers in the world: "If you had the resources, how would you use AI to solve humanity's hardest problems?" They didn't know who would answer, or how. What came back was 125 proposals from 600 researchers across 47 institutions - virtually every top computer science department in the U.S. and Canada. Fields Medalists, Nobel laureates, and Turing Award recipients pointing their best thinking at the hardest problems of our time. The quality made selection genuinely difficult. Some of the most decorated scientists alive submitted proposals that did not make the cut.
Eight seed grant winners were selected across four categories. Each team receives $250,000 and six months to develop a full proposal for a $10 million multi-year Moonshot lab. Let's meet the winners, organized by category.
Accelerating Science: a century of scientific progress in one decade
👑 Accelerating the Queen of Sciences – UCLA: Amit Sahai, Kai-Wei Chang, Raghu M., Nanyun (Violet) Peng, Terence Tao, Wei Wang
🌦️ Actionable AI Weather Forecasts for Developing Economies – University of Chicago: Rebecca Willett, Ian Foster, Pedram Hassanzadeh, Michael Kremer
Advancing Healthcare: research that supports medical practitioners in diagnosis, treatment, and patient care
🧬 The Virtual Embryo Moonshot – Stanford University: Xiaojie Qiu, Emily Fox, Marinka Zitnik, James Zou
⚕️ JupyterHealth – UCSF & UC Berkeley Computational Precision Health / University of California, Berkeley / University of California, San Francisco: Ida Sim, Ahmed M. Alaa, Irene Chen, Fernando Pérez, Maya Petersen
Civic Discourse: research that strengthens public reasoning, deliberation, and access to reliable information
📣 Deliberation at National Scale – Harvard University / MIT LIDS: Ariel Procaccia, Michiel Bakker, Bailey Flanigan, Archon Fung, Lawrence Lessig
🗣 Rebuilding Trust in Civic Discourse – Cornell University: Jon Kleinberg, Natalie Bazarova, Cristian D., Robert Kleinberg, Mor Naaman, David Rand
Workforce Reskilling: research that helps people adapt to labor market shifts through retraining, upskilling, and lifelong learning
🤖 Collaborative Intelligence for the Future of Work – Stanford University: Erik Brynjolfsson, Tatsunori Hashimoto, Diyi Yang
🤲 ARISTOS: Reskilling for a Physical Workforce – Carnegie Mellon University: Deva Ramanan, Changliu Liu, Raj Reddy, Katia Sycara, Jun-Yan Zhu
We also designated 4 runners-up and 13 honorable mentions across all four categories, and funded them too. 25 teams and $4m+ in total, all now working on what Andy calls "species-level" problems. The $10 million lab winner will be selected in October. These researchers chose to stay in open science and point their best work at problems that impact us all.
-
Reza Zadeh liked this:
My colleague Anis brought me Biryani to celebrate Eid. Anis is Muslim. I'm Jewish. This is why I love Silicon Valley.
Anis and I have worked together for 20 years across three companies. (And his wife Nikki makes truly amazing Hyderabad Biryani.) Across those 3 companies, our teams have always looked the same: we are Americans, Indians, Chinese, Israelis, Lebanese, Persians, Jews, Muslims, Christians, Hindus, Sikhs, and atheists, and we are all moving together in the same direction.
It's one of the wonderful things about Silicon Valley. All we care about is: Can you write code? Can you build? Can you do the job? Can you finish? Nobody asks what you believe or which god you worship. No one cares where you're from or who you love.
This is how we make peace in the world. We build cool stuff together.
-
Reza Zadeh liked this:
Honored to host Secretary John Kerry and the Galvanize team at NVIDIA today. We had a thoughtful exchange on the intersection of climate innovation and AI computing: how breakthroughs in AI and accelerated computing can help scale practical solutions to some of the world's most complex energy and sustainability challenges. Grateful for the opportunity to engage with leaders who have dedicated their careers to long-term global impact. Conversations like this reinforce the importance of collaboration across sectors to turn ambitious climate goals into measurable progress. Thank you to Secretary Kerry and the Galvanize team for the visit and dialogue.
-
Reza Zadeh liked this:
On Saturday I was asked to present at House of Scale about Artemis II, sharing the most compelling video footage so far. And so I dove into the surprising technical details. Like the brutal material science required to tame the RS-25 engines. Operating at 109% power, these engines generate 10 times the power density of a commercial jet engine while managing thermal gradients from -253°C to 3,300°C.
What surprised the audience most wasn't the rocket itself, but the "machine that enables the machine." The 400,000-gallon water deluge you see at liftoff isn't for fire, it's to prevent acoustic liquefaction. At 170+ decibels, the sound waves alone would tear the SLS structural frame apart if not dampened by that mass of water.
We also looked at the RL10's expander cycle, a thermodynamic masterpiece where the cryogenic fuel cools the nozzle, phase-shifts into a gas, and drives its own turbopumps. It's a closed-loop efficiency that founders should study: engineering your environment so the byproduct of your primary process (heat) solves your secondary power needs (pumping). As an inventor in this space, I love turning these solutions, the ones that enable a viable hardware roadmap, into valuable IP that protects against deep-pocketed incumbents.
A real mental pivot happens in orbit with six-degree-of-freedom (6-DOF) dynamics. Driven by the Clohessy-Wiltshire equations, the crew has to internalize that thrusting "forward" actually raises your orbit and slows you down relative to your target. Counterintuitive control laws, like when we teach students to hover a helicopter :)
We closed on the free-return trajectory, an elegant solution to the restricted three-body problem that uses passive lunar gravity as an abort-friendly fail-safe. Artemis actually won't make any thrust after leaving Earth orbit until it returns.
Thank you for hosting Lilly J 🧢 and for the deeply engaging conversations Rob Lillis Varun Ganapathi Elizabeth Ricker Adam Raudonis Anatoly Corp Michael Chase
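The "thrust forward, fall behind" claim from the post can be checked numerically. Below is a minimal sketch (not from the talk; normalized units with mean motion n = 1, and an idealized impulsive burn) that integrates the planar Clohessy-Wiltshire equations after a small prograde burn:

```python
import math

# Planar Clohessy-Wiltshire relative dynamics in normalized units
# (mean motion n = 1): x is radial (positive away from Earth),
# y is along-track (positive in the direction of motion).
def cw_derivs(state, n=1.0):
    x, y, vx, vy = state
    return (vx, vy, 3 * n * n * x + 2 * n * vy, -2 * n * vx)

def rk4_step(state, dt):
    """One classical 4th-order Runge-Kutta step."""
    def shift(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = cw_derivs(state)
    k2 = cw_derivs(shift(state, k1, dt / 2))
    k3 = cw_derivs(shift(state, k2, dt / 2))
    k4 = cw_derivs(shift(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Start co-located and at rest relative to the target, then apply a
# small prograde ("forward") burn of 0.01 in along-track velocity.
state = (0.0, 0.0, 0.0, 0.01)
dt = 0.001
max_radial = 0.0
for _ in range(int(2 * math.pi / dt)):   # one full revolution
    state = rk4_step(state, dt)
    max_radial = max(max_radial, state[0])

x, y, _, _ = state
print(max_radial > 0.03)  # True: the forward burn raised the orbit
print(y < 0.0)            # True: yet we ended up *behind* the target
```

After one revolution the along-track drift matches the closed-form result y(2π) = -6πv: a higher orbit is a slower orbit, so the burn that "speeds you up" leaves you trailing the target.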
-
Reza Zadeh liked this:
I've been thinking about my kids growing up in an AI-first world. When asked what parents want for their kids, I often hear, "I just want the kids to be happy." Early on, happiness looks like pure enjoyment, or short-lived pleasure-seeking in the moment. But sustained happiness goes deeper than this, where satisfaction becomes more important: a reflective sense that things have gone well, like genuinely improving a skill. In the end, cultivating meaning is the most durable foundation of all, the sense of a life well-lived. What we really want, as parents, is for our kids to develop the internal capacity to find things to enjoy, earn satisfaction through effort, and ultimately discover that meaning comes from contributing something to improve our world, not because they have to, but because they want to. #ThanksAI
Experience
Education
-
Stanford University
Gene Golub Outstanding Thesis Award.
Student Leadership Award.
Best Instructor Award.
-
Licenses & Certifications
-
Commercial Aircraft Pilot. Instrument, Taildragger, Aerobatics, High Performance aircraft.
Federal Aviation Administration
Publications
-
MLlib: Machine Learning in Apache Spark
JMLR
Full list of authors:
Xiangrui Meng, Joseph Bradley, Burak Yavuz, Evan Sparks, Shivaram Venkataraman, Davies Liu, Jeremy Freeman, DB Tsai, Manish Amde, Sean Owen, Doris Xin, Reynold Xin, Michael J. Franklin, Reza Zadeh, Matei Zaharia, Ameet Talwalkar
-
Rapid estimate of ground shaking intensity by combining simple earthquake characteristics with Tweets
In proceedings: 10th National Conference on Earthquake Engineering
Explore more posts
-
LangChain
512K followers
🧠💬 Memory in LLMs A practical guide showing how to implement conversational memory in LLMs using LangGraph, demonstrated through a therapy chatbot. Features code examples for basic retention, trimming, and summarization approaches. Learn to build memory-aware apps 👉 https://lnkd.in/gybcrV5v
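To make the post's trimming and summarization approaches concrete, here is a framework-free sketch in plain Python (no LangGraph dependency; `summarize` is a hypothetical stand-in for an LLM summarization call):

```python
# Two conversational-memory strategies: trimming (drop old turns) and
# summarization (compress old turns into one synthetic message).

def trim(messages, max_turns=4):
    """Trimming: keep only the most recent turns within a fixed budget."""
    return messages[-max_turns:]

def summarize(messages):
    """Hypothetical LLM summarizer, stubbed here as a message count."""
    return {"role": "system",
            "content": f"Summary of {len(messages)} earlier messages."}

def compact(messages, max_turns=4):
    """Summarization: fold older turns into a single summary message."""
    if len(messages) <= max_turns:
        return messages
    return [summarize(messages[:-max_turns])] + messages[-max_turns:]

history = [{"role": "user", "content": f"msg {i}"} for i in range(10)]
print(len(trim(history)))     # 4: only the recent turns survive
print(len(compact(history)))  # 5: one summary plus the 4 recent turns
```

Trimming is cheap but forgets; summarization keeps a lossy long-term record at the cost of an extra model call per compaction.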
967
21 Comments
-
Niall Murphy
6K followers
YellowDog.ai just set a 10x benchmark uplift in scale computing, delivering 40,000 tasks per second (TPS) and managing 100,000 compute nodes in the cloud. What's even more interesting: that's 2x IBM Symphony, and it opens an intriguing pathway for these until-now closed/captive systems.
11
-
The Technologist
16 followers
Ion Stoica, a billionaire co-founder of Databricks and Anyscale, is making a notable impact in the education world by supporting and nurturing students in technology. This semester, students at UC Berkeley and elsewhere have a unique opportunity: they can enroll in computer science courses taught by Stoica himself, a tech mogul deeply invested in upskilling, practical training, and the transformative power of research-driven education. 👉 Read the full story here: https://lnkd.in/gve-K6gB #TheTechnologist #TechPhilanthropy #StudentSuccess #IonStoica #Databricks #Anyscale #TechEducation #Mentorship #OpenSource #STEMSupport #InnovationInEducation
-
Nishantha Ruwan
IWROBOTX Software Inc. • 2K followers
A new RL algorithm that fixes a hidden flaw in PPO
The authors propose CE-GPPO ("Coordinating Entropy via Gradient-Preserving Policy Optimization"), a variant of PPO that restores gradient contributions from clipped actions in the policy update. They argue that traditional clipping discards useful gradient signals from low-probability tokens, which play an important role in controlling the agent's entropy during training. By bounding those gradients in a controlled way, CE-GPPO maintains the exploration-exploitation balance more stably than prior methods. They provide a theoretical analysis showing that CE-GPPO mitigates entropy instability, and empirically test it on mathematical reasoning benchmarks. Their results indicate consistent improvement over strong baselines across different model scales, demonstrating that preserving clipped gradients can lead to better performance in reinforcement learning for reasoning tasks. https://lnkd.in/gYZsJ-8f
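A toy sketch of the flaw the post describes: the derivative of PPO's clipped surrogate with respect to the probability ratio is exactly zero once a token lands on the penalized side of the clip range, and a gradient-preserving variant can keep a bounded signal instead. The `beta`-scaled form below is an illustrative assumption, not the paper's exact update:

```python
# Illustration of PPO clipping vs. a gradient-preserving alternative.

def ppo_grad(ratio, adv, eps=0.2):
    """d/d(ratio) of PPO's min(ratio*A, clip(ratio, 1-eps, 1+eps)*A).
    On the penalized side of the clip range the gradient is exactly 0."""
    if adv >= 0:
        return adv if ratio < 1 + eps else 0.0
    return adv if ratio > 1 - eps else 0.0

def gradient_preserving_grad(ratio, adv, eps=0.2, beta=0.1):
    """Hypothetical CE-GPPO-style update: clipped tokens contribute a
    small bounded gradient (beta * adv) rather than none at all."""
    g = ppo_grad(ratio, adv, eps)
    return g if g != 0.0 else beta * adv

# A low-probability token whose ratio blew past the clip boundary:
print(ppo_grad(1.5, 1.0))                  # 0.0 -- signal discarded
print(gradient_preserving_grad(1.5, 1.0))  # 0.1 -- bounded signal kept
```

Those silently-zeroed tokens are exactly the rare ones that feed policy entropy, which is why restoring a bounded gradient there can stabilize exploration.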
1
-
Kate Stafford
Numerion Labs • 678 followers
Interested in our paper about APEX for ultra-large library search, but find it's sitting in one of a dozen open tabs in your browser, waiting for you to get around to reading it? Take a look at our blog post to get a taste of the method, written by paper coauthor Jordi Silvestre-Ryan: https://lnkd.in/g87dPcG4 https://lnkd.in/gGAH4gME
4
-
Ayush Gupta
Genloop • 6K followers
#TuesdayPaperThoughts Edition 60: The Art of Scaling RL Compute for LLMs
This week's #TuesdayPaperThoughts explores "The Art of Scaling Reinforcement Learning Compute for LLMs" from researchers at Meta, The University of Texas at Austin, University of California, Berkeley, and Harvard University. While pre-training has had its moment with predictable scaling laws, RL training has remained more art than science, until now.
Key Takeaways:
1️⃣ Predictive Scaling Framework: The work introduces sigmoidal compute-performance curves for RL training that separate asymptotic performance (A) from compute efficiency (B). This framework enables extrapolation from smaller-scale runs to predict performance at larger compute budgets. The team validated this with a massive 100,000 GPU-hour run, where predictions from just the first 50k hours closely matched final performance.
2️⃣ Not All Recipes Scale Equally: Methods that look promising at small compute budgets can hit lower performance ceilings at scale. The study reveals that design choices like loss type and FP32 precision shift asymptotic performance, while factors such as loss aggregation and normalization primarily modulate compute efficiency. This explains why some widely adopted methods plateau unexpectedly.
3️⃣ ScaleRL Recipe: Through systematic ablations consuming 400,000+ GPU-hours, the team developed ScaleRL, combining PipelineRL with 8-step off-policyness, truncated importance sampling (CISPO), FP32 logits computation, and adaptive prompt filtering. ScaleRL achieves A=0.61 asymptotic performance, outperforming DeepSeek's GRPO, Qwen's DAPO, and other prevalent methods on both ceiling and efficiency.
The timing couldn't be better. With RL compute budgets exploding (a 10× increase between model generations for o1→o3 and Grok-3→Grok-4), the field desperately needed this systematic approach.
Research Credits: Devvrit K., Lovish Madaan, Rishabh Tiwari, Rachit Bansal, Sai Surya Duvvuri, Manzil Zaheer, Inderjit Dhillon, David Brandfonbrener, Rishabh Agrawal
Paper Link: In comments
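The "fit early, extrapolate late" idea behind takeaway 1️⃣ can be sketched in a few lines. The sigmoid below is one plausible parameterization (the paper's exact functional form may differ), the observations are synthetic, and the fit is a toy grid search standing in for a real optimizer:

```python
# A is the asymptotic ceiling, B governs compute efficiency, and C_mid
# is the compute at which half the ceiling is reached.

def perf(C, A, B, C_mid):
    """Saturating compute-performance curve: perf -> A as C -> infinity."""
    return A / (1.0 + (C_mid / C) ** B)

# Synthetic "early run" observations generated from known parameters,
# standing in for pass rates measured over the first GPU-hours.
true = dict(A=0.61, B=1.2, C_mid=20_000.0)
obs = [(C, perf(C, **true)) for C in (2_000, 5_000, 10_000, 30_000, 50_000)]

def fit(obs):
    """Toy grid-search least squares over a few candidate parameters."""
    best, best_err = None, float("inf")
    for A in (0.50, 0.55, 0.61, 0.70):
        for B in (0.8, 1.0, 1.2, 1.5):
            for C_mid in (10_000.0, 20_000.0, 40_000.0):
                err = sum((perf(C, A, B, C_mid) - y) ** 2 for C, y in obs)
                if err < best_err:
                    best, best_err = (A, B, C_mid), err
    return best

A, B, C_mid = fit(obs)
print(A)  # recovers the 0.61 ceiling from small-budget observations only
print(perf(100_000, A, B, C_mid) < A)  # True: extrapolation stays below A
```

The practical payoff is the one the post highlights: once A and B are pinned down from a cheap partial run, recipes can be compared on their ceilings rather than on whichever looks best at a small budget.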
21
1 Comment
-
Andrei Lopatenko
Govini • 26K followers
Document-centric tasks sit at the core of many enterprise, business, and government workflows. Search is important, but the real challenge is going beyond retrieval, enabling systems to reason across documents, verify facts, and handle multi-step information tasks. Great to see a new model from Databricks moving in this direction. I expect we’ll see many more models, including open-weight ones, designed specifically for document-driven workflows, an area with huge potential across enterprise and government use cases.
23
1 Comment