Di Li
San Francisco, California, United States
8K followers
500+ connections
About
Seasoned engineering leader with experience leading ML and fullstack teams to achieve…
Activity
8K followers
-
Di Li shared this: This is what it looks like when a company means what it says. Anthropic has been clear from the start: we support lawful national security uses of AI, but we won't enable mass surveillance of Americans or fully autonomous weapons. Today's statement makes me even more proud to be part of this team. https://lnkd.in/gA79r9k2
Statement on the comments from Secretary of War Pete Hegseth
-
Di Li shared this: Hiring - hit me or Marisa Jones up if you see a fit here. https://lnkd.in/gZgf3qPz
Staff Backend Engineer, Merchandising Platform - Careers at Airbnb
-
Di Li shared this: Time to explore more new offerings at Airbnb. Now you can Airbnb more than an Airbnb.
Airbnb Services: Book the world's best chefs, trainers, massage therapists, and more
Airbnb Experiences: Completely reimagined, and hosted by the locals who know their city best
An all-new app, with homes, experiences, and services all in one place
Learn more: airbnb.com/release
-
Di Li shared this: Hiring Backend and Machine Learning Engineers https://lnkd.in/gPUt7TBV https://lnkd.in/gUDaaMNg Reach out to Erica Cortes Walker or me if interested. #hiring #backend #MLE
Senior Machine Learning Engineer, Guest & Host - Careers at Airbnb
-
Di Li shared this: Excited to share another team's work around AI-powered use cases boosting computer vision accuracy and performance at Airbnb.
Excited to share our latest blog post, Airbnb's AI-powered photo tour using Vision Transformer, where we show our journey of using Vision Transformers to power our photo tour product by incorporating pretraining, multi-task learning, ensemble learning, and knowledge distillation: https://lnkd.in/gJi_p5qW Please feel free to reach out to me, Xiaoxin (Aaron) Yin, or Di Li if you are interested in our work.
Airbnb's AI-powered photo tour using Vision Transformer
-
Di Li shared this: Hiring post - If you are deeply passionate about building world-class data/ML-driven intelligence platforms that power high-quality listings and help hosts better merchandise them to guests, please apply or reach out to me, Bennett Bontemps, or Suzy Lambert.
Staff Software Engineer, Data Engineering - Careers at Airbnb
-
Di Li shared this:
What is the difference between Lambda and Kappa Architectures?

Lambda and Kappa are both data architectures proposed to solve movement of large amounts of data for reliable online access. The most popular architecture has been, and continues to be, Lambda. However, with stream processing becoming more accessible, you will be hearing a lot more about Kappa in the near future. Let's see how they are different.

Lambda:
➡️ The ingestion layer is responsible for collecting the raw data and duplicating it for separate real-time and batch processing.
➡️ Consists of 3 additional main layers:
👉 Speed (or Stream) - raw data arrives in real time and is processed by a stream processing framework (e.g. Flink), then passed to the serving layer to create real-time views for low-latency, near-real-time data access.
👉 Batch - batch ETL jobs using batch processing frameworks (e.g. Spark) are run against the raw data to create reliable batch views for offline historical data access.
👉 Serving - where the processed data is exposed to the end user. The latest real-time data can be accessed from real-time views, or combined with batch views for the full history. Historical data can be accessed from batch views.
❗️ Processing code is duplicated across different technologies in the Batch and Speed layers, causing logic divergence.
❗️ Compute resources are duplicated.
❗️ You need to manage two infrastructures.
✅ Distributed batch storage is reliable and scalable; even if the system crashes, it is easily recoverable without errors.

Kappa:
➡️ Treats both batch and real-time workloads as a stream processing problem.
➡️ Uses the Speed layer only to prepare data for real-time and batch access.
➡️ Consists of only 2 main layers:
👉 Speed (or Stream) - similar to Lambda, but often (optionally) contains tiered storage, meaning all data coming into the system is stored indefinitely across different storage layers, e.g. S3 or GCS for historical data and an on-disk log for hot data.
👉 Serving - same as Lambda, but transformations performed in the Speed layer are never duplicated in a Batch layer.
❗️ Some transformations are hard to perform in the Speed layer (e.g. complex joins) and are eventually pushed to batch storage for implementation.
❗️ Requires strong skills in stream processing.
✅ Data is processed once, with a single stream processing engine.
✅ You only need to manage a single set of infrastructure.

Have you dealt with Kappa architecture in your day-to-day? What are your thoughts around it? Let me know in the comments 👇

Follow me to upskill in #MLOps, #MachineLearning, #DataEngineering, #DataScience and the overall #Data space. Don't forget to like 👍, share and comment! Join a growing community of data professionals by subscribing to my Newsletter: https://lnkd.in/e5d3GuJe
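To make the post's central trade-off concrete, here is a minimal, purely illustrative Python sketch (not from the post; all names are invented): in Lambda the same aggregation logic is written twice, once for the batch engine and once for the stream engine, while in Kappa a single streaming function serves real-time updates and "batch" recomputation by replaying the retained log through it.

```python
# Illustrative sketch of Lambda vs Kappa. Events are (user_id, amount)
# pairs; both architectures compute per-user running totals.
from collections import defaultdict

events = [("u1", 10), ("u2", 5), ("u1", 7)]  # the full retained log

# --- Lambda: the same business logic exists twice ---

def batch_view(all_events):
    """Batch layer (think Spark): recompute totals from full history."""
    totals = defaultdict(int)
    for user, amount in all_events:
        totals[user] += amount
    return dict(totals)

def speed_update(state, event):
    """Speed layer (think Flink): update totals one event at a time.
    Same logic as batch_view, implemented a second time for a second
    engine -- this duplication is where logic divergence creeps in."""
    user, amount = event
    state[user] = state.get(user, 0) + amount
    return state

# --- Kappa: one streaming function covers both access patterns ---

def kappa_update(state, event):
    """Single stream processor; 'batch' is just a replay of the log."""
    user, amount = event
    state[user] = state.get(user, 0) + amount
    return state

# Real-time path: apply kappa_update as events arrive.
# Historical path: replay the whole log through the same function.
replayed = {}
for e in events:
    kappa_update(replayed, e)

print(batch_view(events), replayed)
```

Both paths agree here, but in Lambda that agreement depends on keeping two codebases in sync by hand, which is exactly the divergence risk the post calls out.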
-
Di Li shared this: Simply the best high-level talk on LLMs so far.
-
Di Li liked this: After an incredible 8+ years of leading Product and Design and helping build this world-class company into what it is today, I have decided to leave DoorDash. This will be the last summer with a kid at home full-time, and I know it's time I want to spend with my daughter before she heads off to college. My last day will be May 22nd, and what comes after that will include downtime, tinkering, and planning for my next play.

My journey at DoorDash was shaped by many moments:

Building a product customers use every day: DoorDash's customer-obsession culture is world-class. From diagnosing "disaster deliveries" to spending time each month working at merchant counters and making deliveries, the product we built learned from every interaction and stayed focused on one goal - solving our customers' problems.

Assembling a team that hustles and delivers: Culture is shaped by the people you hire. Early on, we created a value of never compromising on our bar or the principles we wanted to uphold with the builders we bring on board. I am incredibly proud of everyone I got to build with, learn from, and ship meaningful work alongside.

Shipping impactful things: Keeping DoorDash operating for our users during the pandemic, building one of the largest subscription networks, innovating new ways to work with local businesses, launching the market-leading grocery service, creating a category-defining advertising product, and expanding globally, to name a few.

I want to thank Tony Xu for trusting me to be part of this journey. His optimism and high expectations have shaped me forever. Thank you as well to Ryan Sokol, Prabir Adarkar, Ravi Inukonda, Mariana G., Keith Yandell, Tia Sherringham, Elizabeth Jarvis-Shean, Christopher Payne, and Miki Kuusi for being incredible teammates. A final shoutout to the amazing Product and Design teams we built from scratch into what I can confidently say are among the best in the industry. I am privileged and honored to have worked with you all. While I will remain connected as an advisor at the company, to each and every DoorDasher: I am so proud of what we have accomplished and built, and I will be a proud cheerleader forever.
-
Di Li liked this: Introducing Claude Design by Anthropic Labs: a new way to make designs, prototypes, slides, and one-pagers by talking to Claude. Claude Design is powered by Claude Opus 4.7, our most capable vision model.

Describe what you want, and Claude builds the first version. Refine through conversation, inline comments, direct edits, or custom sliders, then export to Canva, as PDF or PPTX, or hand off to Claude Code. Claude reads your codebase and design files to build your team's design system, then applies it automatically, keeping every project on-brand.

Claude Design is available in research preview on the Pro, Max, Team, and Enterprise plans, rolling out throughout the day.
Try Claude Design: claude.ai/design
Read more: https://lnkd.in/e3YaGiiA
-
Di Li liked this: What does an "Elite Founder" look like? Well, according to a16z speedrun's website, you're looking at him! Jokes and randomly-selected-B-roll-photos aside, it has been an incredible 12 weeks taking Amdahl to the next level alongside a community of exceptionally sharp and motivated founders. I've been especially impressed by the support we've had from the a16z speedrun team - their hands-on help in hiring, sales, fundraising, and navigating the hard decisions that come at this stage has been an unfair advantage for our small (but mighty!) team - huge thanks to Fareed Mosavat, Lejla Johnsen, Jonathan Lai, Andrew Chen, Tom Hammer, Macy Mills, Bella Nazzari, Jordan Carver, Jordan Mazer, Jacqueline Young, Andrew Lee, and everyone else who helped us along the way. It still feels like we're just getting started - excited to see where we're at in another 12 weeks!
-
Di Li liked this: Stoked to share that our run-rate revenue has crossed $30 billion, up from $9 billion at the start of the year! We've also just signed an agreement with Google and Broadcom for multiple gigawatts of compute, coming online starting in 2027. Excited to continue serving the wild growth in demand we're seeing. More about our deal here: https://lnkd.in/ehbz23j6

Separately, on the growth team, we're hiring across the board to help Anthropic stay on the exponential. If you're an engineer, PM, designer, or data scientist with a background working on growth teams, get in touch through our jobs page. For growth PMs, right now I'm particularly interested in folks who have a background in either monetization or growing API products.
Anthropic expands partnership with Google and Broadcom for multiple gigawatts of next-generation compute
-
Di Li liked this: Today, we announced a new agreement with Google and Broadcom for multiple gigawatts of next-generation TPU capacity, coming online starting in 2027. This investment will help us continue to develop frontier models and serve unprecedented growth in customer demand.

Demand from our customers has accelerated in 2026. Our run-rate revenue has surpassed $30 billion, up from $9 billion at the end of 2025. Over 1,000 business customers are each spending more than $1 million with us on an annualized basis - double where we were just two months ago. Meeting this demand requires infrastructure investment on this scale.

We are continuing our disciplined approach to scaling our infrastructure, building the compute capacity needed to serve our global customers while enabling Claude to define the frontier of AI development. We are grateful to our partners at Google and Broadcom, and to the Anthropic team building the foundation needed to support our growth. Read more: https://lnkd.in/gQxndJVE
-
Di Li liked this: Anthropic is on an unprecedented growth run. Just in the past year they grew from $1B to $19B ARR. They added $6B in ARR just in *February*. Companies like Palantir and Atlassian took 15-20 years to reach ~$5B ARR. Anthropic is adding that every month.

Amol Avasare is Head of Growth at Anthropic. He got the job by cold emailing Mike Krieger when no role existed. He spends 70% of his time on what he calls "success disasters" - things breaking because growth is going too well. In his first ever public interview, Amol shares:
🔸 How Anthropic built an internal tool that runs growth experiments autonomously
🔸 Why activation is the single highest-leverage growth problem in AI
🔸 How the company's focus on AI coding created a research flywheel that accelerated their models
🔸 Why Amol is hiring *more* PMs, not fewer
🔸 How he uses Cowork to automatically detect team misalignment in Slack
🔸 The traumatic brain injury that left him unable to walk for 6 months - and why he calls it one of the best things that happened to him

Listen now: https://lnkd.in/gNSReT2d

Thank you to our wonderful sponsors for supporting this season:
🏆 WorkOS - Modern identity platform for B2B SaaS, free up to 1 million MAUs: https://workos.com/lenny
🏆 Vanta - automate compliance, manage risk, and accelerate trust with AI: https://vanta.com/lenny

Also available on:
• Spotify: https://lnkd.in/gVc3GwQU
• Apple: https://lnkd.in/gm6kHGQz
-
Di Li liked this: Had a great chat with Lenny Rachitsky on some of the fun stuff happening at the intersection of AI and growth! Thanks for having me on, Lenny - had a blast :) https://lnkd.in/eEzRTAQD
Head of Growth (Anthropic): "Claude is growing itself at this point" | Amol Avasare
-
Di Li liked this: My biggest takeaways from Claire Vo on all things OpenClaw 🦞:

1. Install OpenClaw on a separate computer, not your main machine. Use an old laptop or buy a Mac Mini ($500-$600). Create a dedicated Gmail account and local admin account for your agent. Think of it like hiring an employee - you wouldn't let them run wild on your personal computer 24/7.

2. The unlock is to stop treating OpenClaw like one general-purpose agent and instead create multiple Claws with very specific roles. Claire says people get frustrated when they throw every task at a single agent and it sucks at it because it loses context. Her fix was to split her work. Sam handles sales, Finn manages family, Howie preps podcasts, Sage runs her course. Think of it like Slack: you wouldn't put your whole company in one channel, so do not put every workflow into one agent.

3. The magic of OpenClaw is soul + heartbeat + jobs. The "soul" is a Markdown file defining identity and personality. The "heartbeat" checks in every 30 minutes to see what needs doing. "Jobs" are scheduled tasks that run automatically. This combination makes agents feel alive.

4. Sam the sales agent saves Claire 10 hours per week and real money. Every morning, Sam sweeps their CRM for new signups, identifies decision-makers at companies, sends personalized emails, and flags international deals to handle autonomously. This replaced a contractor Claire was paying for the same work.

5. The "yappers API" is the highest-bandwidth way to communicate with AI. Don't worry about perfect prompts or structured inputs. Just ramble in voice notes on Telegram about what you need. The agent will make sense of it and ask clarifying questions.

6. Browser use is the biggest limitation - look for APIs first. The web is hostile to bots, and browser automation is unreliable across all AI tools. Always check if there's an API available. If not, try browser use, but be prepared for it to fail. Sometimes the solution is solving the problem behind the problem.

7. Management skills are the secret to AI agent success, not technical skills. Claire's 20-plus years of management experience - role scoping, org design, onboarding, progressive trust - translate directly to making agents effective. If your agent isn't working, it's usually a structural issue, not the agent being "dumb."

8. Screen sharing saves you from buying monitors and keyboards for every Mac Mini. Turn on screen sharing in the Mac Mini's settings, and you can control it from your laptop on the same Wi-Fi. Turn on remote login to SSH into the terminal. This was Claire's life-changing discovery.

9. Security is a real factor but manageable with progressive trust. OpenClaw is hardened against prompt injection, but start cautiously. Only let agents listen to you on specific channels (like Telegram, not email). Build trust progressively, like you would with a human assistant.

Don't miss our full chat: https://lnkd.in/g5s8Fu85
Experience
Education
Languages
-
English
Professional working proficiency
-
Chinese
Native or bilingual proficiency
Explore more posts
-
Adalberto Pinto
26K followers
Most startups die from moving too fast. But plenty die from moving too slow. So how do you actually balance speed vs. technical debt? I sat down with Shuzhi Huang, former Engineering Leader at Google, someone who's seen this play out at massive scale, and got a remarkably honest answer. The insight that hit hardest: technical debt isn't the enemy. Unacknowledged technical debt is. If you're a founder scaling your engineering team right now, or a hiring manager trying to decide when to prioritize velocity over quality — this one's for you. What's your take? The full episode is live. Link in the comments 👇 #Engineering #TechnicalDebt #StartupFounders #EngineeringLeadership #Hiring #BotifySessions #BotifyTech #TechLeadership #Founders
-
Neil Tewari
Conversion • 18K followers
The hottest role in AI startups right now isn't Forward Deployed Engineers. It isn't GTM Engineers. It's Deployment Strategists.

Decagon calls it an "Agent Product Manager." Harvey calls it a "Solutions Architect." Palantir Technologies has had versions of this role for years. And the salaries are climbing fast:
- Decagon: $200k–$285k
- Palantir Technologies: $120k–$200k
- Figma: $150k–$260k
- Ramp: $100k–$180k
- Harvey: $190k–$260k

So who are these people? They are usually pseudo-technical - CS or engineering majors, or folks with technical work experience. Many come from 2 years in consulting, IB, or PE, then jump into startups to get their hands dirty. They are young, hungry, polished, and comfortable being in front of customers.

What do they actually do? They make sure enterprise AI deployments succeed. A $100k+ deal does not survive on a nice pitch or a self-serve onboarding flow. It survives if the customer sees value in the pilot. That means:
- Embedding directly with the customer
- Designing prompt logic for specific workflows
- Working with engineering to align integrations and data flow
- Helping exec teams define their AI roadmap
- Running feedback loops into product and GTM

Why does this role matter so much? Because enterprise AI is messy. Integrations, data transfer, and adoption make or break a deal. Most buyers are using AI for the first time, and each has unique workflows. Deployment Strategists bridge that gap. They own the outcome. They are accountable for making pilots successful, which often means millions in revenue down the line.

At Conversion, Sam Bochner has been leading this work for us. We are now thinking about scaling it into a full team, because a few successful pilots can fund an entire department, and the cost of failed deployments is too high to ignore.

Is this just a rebrand of customer success? Not really. Success is about answering tickets and renewals. Deployment Strategy is about going deep with a few enterprise accounts, extracting maximum value, and ensuring the pilot closes into a multi-year contract. Call it Agent PM, Solutions Architect, or Deployment Strategist. Whatever the title, this is becoming one of the most important roles in AI SaaS.
-
Hashir Abdi
FDA • 323 followers
Open source's real edge isn't code. It's the cost of curiosity. Winners don't just ship models - they lower the unit cost of trying.

Two hidden taxes on curiosity: compute price and license friction. Remove both, and experiments explode. (Simple as that - no, really!)

Each fork is a cheap hypothesis test. Scale the tests, not the press release. Breakthroughs follow a power law: more shots → fatter tail → outsized wins.

Standards aren't set by the "best model." They're set by whoever hosts the most experiments. (This repeated lesson comes from my long and cherished history with open source.)

Cheap power + open weights turns a nation into a Monte Carlo engine for discovery. Closed systems hoard capability; open ecosystems hoard error signals - and improve faster. ("Fail fast, fail often" is what OSS is all about, after all.)

Protocol power outlives product power. Quiet lesson: don't fight to win the leaderboard. Fight to own the gradient of the world's curiosity.
-
Peter Deng
Felicis • 39K followers
When nearly anything can be built with AI, the question shifts from what is possible to who is behind it. The advantage lies with the talent steering the systems. AI is rapidly making many parts of life more efficient, but companies still need top talent, and the hiring process has not followed the same trajectory. The market has grown more saturated, making it difficult to cut through the noise. Paraform is building an AI-powered recruiting marketplace designed to ensure companies can hire the right candidates for specialized roles with ease. This addresses one of the most persistent and unsolved challenges of building strong teams today. Paraform fits into a broader shift we're seeing, away from AI as a tool and toward AI as a service that delivers outcomes. It's been incredible to watch the business take off and to see how strongly this resonates with customers. Congrats to John Kim, Jeffrey Li, and the entire Paraform team on your Series B! cc: Felicis, James Detweiler
-
Chris Balestras
vibescaling • 15K followers
after working with hundreds of candidates at VibeScaling trying to place them at the top AI-native startups in SF/NYC, here are 5 unconventional tactics that we've seen work to stand out 👇🏻:

1️⃣ customer interviews tactic: when referrals fail, go directly to their customers - outbound 30-50 asking about pain points, turn 3-5 responses into a short analysis, and send it to the head of sales. Shows you can create value before you're hired while most people just send resumes and pray

2️⃣ the "not now" drip campaign tactic: when you get rejected for a specific reason, don't walk away - fix the objection and stay in touch. If they say you're "not technical enough," learn their space, use their product, and send updates on your progress. They often come back when timing changes, and you'll be top of mind as someone who actually takes feedback

3️⃣ the brand book tactic: hiring managers have tunnel vision and only see your recent experience. Create a portfolio with career highlights, peer quotes, outbound examples, and deal case studies to tell your full story. Helps you control the narrative and show skills beyond what's on your resume

4️⃣ proactive references tactic: instead of waiting for companies to backchannel you, have a strong reference reach out to hiring managers before your final rounds. Lets someone handle objections for you while most candidates just wait around. Being proactive with references gives you a huge edge

5️⃣ the gratitude loop tactic: send thank you notes to everyone who helped - interviewers, referrers, advisors - regardless of outcome. People remember genuine appreciation, and it's a small tech world where today's "no" often becomes tomorrow's opportunity

invest in relationships and stand out by doing what others won't

wrote the full breakdown with templates in GTMBA 👇🏻
-
Steve Love
Leeds Building Society • 705 followers
I've been discussing this recently with Frances Buontempo in the context of coding assistants. We hear a lot from developers saying that LLM assistants in their fave IDEs are great for generating a lot of the (boring) boilerplate code needed to stand up a non-trivial system. However, most popular IDEs have been able to do exactly this for decades - whether with "wizards", code-block and project templates, or other forms of meta-programming. The difference is that the "old" way is far less resource-hungry, while simultaneously being much more predictable and reliable at automatically generating the "boring" stuff. Coding assistants are also good at other things, of course, but many of the tasks popular in my conversations with developers are ones I would much prefer to do myself, such as writing tests.
-
Anthony Escamilla
Top Funnel Talent • 34K followers
Hot take: Startups want Big Tech candidates... but only AFTER another startup has "de-risked" them first 🙃 I'm seeing this pattern more and more and believe it's short-sighted. Startups LOVE when a candidate starts their career at Google/Meta/Amazon/Netflix - it's a strong signal of excellence. But they don't want to be the ones to bet on them making the jump to startup land. Instead, they want someone who did FAANG → startup for 2-3 years → NOW they're interested. Translation: "We love the credibility, but we want another startup to test them first." Meanwhile, the best startups with strong hiring processes? They're scooping up these FAANG candidates directly, like some of our clients 🙂. Recruiters/founders - are you seeing this too? Has anyone been on the receiving end of this?
-
Matt Lerner
Matt Lerner Executive Coaching • 3K followers
One AI-enabled company I spoke with organizes its engineering team like this:
→ 9 two-person engineering pods
→ 1 product manager
→ 1 designer

Compare this to a legacy feature team:
→ ~6 engineers
→ 1 product manager
→ 1 designer

On well-defined greenfield projects, these two-person AI pods ship a similar amount of software as a six-engineer legacy team. To run 9 legacy feature teams: 72 people. To run 9 AI pods: 20 people. That's ~3.6X more software delivery per headcount.

To make this work you need:
→ Engineers with the judgment to operate independently
→ AI engineering best practices
→ AI-enabled product managers and designers (or else they become the bottleneck)

When a company tells me they've adopted AI but haven't seen meaningful gains, it's usually because their team structure hasn't changed. AI changes the optimal team size. The biggest gains come from smaller, more autonomous teams with less coordination overhead. I'm curious - has AI changed the structure of your team? If so, how?
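The headcount arithmetic in the post checks out; here is a quick sketch of it, assuming (as the post's structure implies) that each legacy team carries its own PM and designer, while the pod organization shares one PM and one designer across all 9 pods:

```python
# Headcount comparison: 9 legacy feature teams vs 9 two-person AI pods.

legacy_per_team = 6 + 1 + 1        # engineers + PM + designer per team
legacy_total = 9 * legacy_per_team # 9 fully staffed teams -> 72 people

pod_engineers = 9 * 2              # 9 two-person pods -> 18 engineers
pod_total = pod_engineers + 1 + 1  # one shared PM, one shared designer -> 20

# Same software output with fewer people, so delivery per head scales by:
ratio = legacy_total / pod_total
print(legacy_total, pod_total, ratio)  # 72 20 3.6
```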