Max He
San Mateo, California, United States
1K followers
500+ connections
Websites
- Personal Website: https://maxhe.org
- Google Scholar citations: https://go.maxhe.org/gs
- ResearchGate profile: https://go.maxhe.org/rg
Articles by Max
- Revolutionize Your Data Science Operations with Jupyter Notebooks on NBForge
  (For my fellow data scientists ❤️) In the dynamic world of data science, we're always striving to make our insights…
  57 · 2 Comments
Activity
1K followers
-
Max He posted this:
This is not an airport... ✈️🚫 They say "this isn't an airport, you don't have to announce your departure." But this is LinkedIn, and we all know the algorithm loves a good life update. 🤷‍♂️📱 After a short but incredibly dense 8-month sprint, I am saying goodbye to the team at Google Search Data Science 🔍📊. While my time here was brief, the learning curve was vertical 📈 and the people were world-class. Thank you Jacey for the opportunity and the guidance. I'm incredibly proud of what the team has achieved. 🙌 I’m excited to share that I’m heading over to Microsoft AI to work on #copilot 🤖🚀. Huge thanks to the colleagues who made my time at Google memorable -- let's keep in touch! 🥂👋
-
Max He shared this, commenting: "Great addition. This could make a lot of analytical pipelines running on BigQuery a lot cheaper."
𝐅𝐚𝐬𝐭, 𝐀𝐩𝐩𝐫𝐨𝐱𝐢𝐦𝐚𝐭𝐞 𝐚𝐧𝐚𝐥𝐲𝐭𝐢𝐜𝐬 𝐚𝐭 𝐬𝐜𝐚𝐥𝐞 𝐢𝐧 𝐁𝐢𝐠𝐐𝐮𝐞𝐫𝐲! Excited to share that we've launched KLL quantile sketch functions in Google BigQuery! If you need fast, approximate percentile insights from large datasets, give these a look. A major advantage is that these sketches are mergeable! You can pre-compute sketches on different data segments (e.g., daily logs, regional sales) and combine them later! This allows for highly efficient aggregate analysis (like weekly/monthly quantiles from daily sketches) without reprocessing the raw data. 📚 Docs: https://lnkd.in/g2F9fepg 💻 Demo Notebook: https://lnkd.in/g_EspgzK #BigQuery #DataAnalytics #GoogleCloud #KLLSketches #DataEngineering #Quantiles #NewFeature #PublicPreview Xueping Weng
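The "pre-compute per segment, merge later" workflow the post describes can be sketched in plain Python. Note this toy summary (a uniform random sample) is not the actual KLL algorithm — real KLL sketches use leveled compaction to give provable error bounds — but it shows why mergeability lets you answer weekly questions without touching the raw data again:

```python
import random

def build_sketch(values, k=64):
    """Toy mergeable summary: a uniform random sample of up to k values.
    (A real KLL sketch uses leveled compaction for provable error bounds.)"""
    return random.sample(values, k) if len(values) > k else list(values)

def merge_sketches(sketches, k=64):
    """Merging = pooling the samples and re-downsampling; order doesn't matter."""
    combined = [v for s in sketches for v in s]
    return build_sketch(combined, k)

def quantile(sketch, q):
    """Approximate q-th quantile read off the summary."""
    s = sorted(sketch)
    return s[min(int(q * len(s)), len(s) - 1)]

# Pre-compute one sketch per daily segment, then answer weekly questions
# later without reprocessing the raw data.
random.seed(0)
daily_logs = [[random.gauss(100, 15) for _ in range(10_000)] for _ in range(7)]
daily_sketches = [build_sketch(day) for day in daily_logs]
weekly = merge_sketches(daily_sketches)
print(quantile(weekly, 0.5))  # approximately 100, the true median
```

The key property mirrored here is that a merged sketch answers quantile queries over the union of segments, so daily sketches can be stored cheaply and rolled up into weekly or monthly views on demand.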
-
Max He shared this:
Data science teams can struggle with operational requests that usually start with something like "could you rerun that analysis you did last week, but for five more markets?" or "we have one more day of A/B data - could you refresh yesterday's analysis? It was great." I created an open-source project you can deploy for your data science team to help with exactly this (and more). Collaboration with YOU: Terry and I have already chatted about how to further improve the platform. Please send us any suggestions, or reach out if you want to contribute directly. If you have a cool notebook you'd like to onboard to the demo (https://demo.nbforge.com), let me know too. (I mentioned this in an earlier post, but I want to explain the motivation and the benefits of adopting NBForge (https://nbforge.com) in more detail.)
Revolutionize Your Data Science Operations with Jupyter Notebooks on NBForge
-
Max He shared this:
(Post too short? Read more about it here: https://lnkd.in/ggN5WVfD) 🔥 I built this Jupyter Notebook based data science platform and it may DOUBLE your analytics productivity - we spend too much time repeating similar analyses. 🔥 🚨 Warning: This is NOT scientifically validated—but who cares if it works, right? You have to try it (https://nbforge.com) and deploy it for your team (or data science partners). I left Snap Inc. on Monday after 8 amazing years [footnote 1] (big thanks to all the incredible folks there! 🙏). By Tuesday, I had an idea that every data scientist secretly wants: Turn Jupyter Notebooks into instant web apps and pipeline tasks—with ZERO hassle. Introducing: NBForge ⚡️ ✅ Web apps from your notebooks—instantly. ✅ Easy integration into #airflow, #dagster and data pipelines. ✅ Fully customizable environments (Python libs, CPU, Memory) per notebook. ✅ Deploy securely to AWS, GCP, or your own Kubernetes cluster. (No privacy nightmares. Your data stays yours.) Built with my favorite stack (and some playful biases 😉): • Kubernetes 🚀 • Google Cloud ☁️ (Snap taught me well) • FastAPI ⚡️ • Vue.js 💚 (React? Sorry, not smart enough /s) • And mostly... Cursor with Anthropic's #claude 3.7 👩‍💻🪄 (Is this the future? Let’s debate!) Check it out: 👉 Project Page: https://nbforge.com 👉 Video demo: https://lnkd.in/gx3CQbkT 👉 GitHub: https://lnkd.in/gT8vgg9u 👉 Demo Web App: https://demo.nbforge.com (login and emails are disabled) Am I exaggerating about doubling productivity? Maybe... maybe not. Think about **how often you rerun similar analyses** 🤔 Let’s discuss below. 👇 --- #dataScience #productivity #openSource #NBForge #Jupyter #Kubernetes #Cursor #controversial #AI #Airflow #Python #FastAPI #VueJS #NoReactAllowed footnote 1: I'm really grateful for everyone I have worked with at Snap - I have learned and grown a lot. Snap has some of the most kind, creative, and smart people.
A special thanks to Lu, who brought me to Snap and has mentored me since as my manager and colleague.
Turn your notebooks into production-ready applications
-
Max He reposted this:
I am hiring for multiple Machine Learning and Software Engineering roles (Junior to Staff levels) for Content Recommendation Systems at Snap. We are making rapid progress in Applied ML and evolving Snap's content ecosystem. Please let me know if you or your friends are interested in joining this exciting journey! ML Manager - https://lnkd.in/dskbDqP9 ML Infra TL - https://lnkd.in/dhJ79fi8 MLE - https://lnkd.in/duhy5b-7 ML Infra SWE - https://lnkd.in/dFU7ARuQ
-
Max He shared this:
I'm hiring a passionate Data Scientist with at least 3 years of experience to empower our Conversion Lift platform. This role is a unique opportunity to measure the success of our advertisers and help them run more effective campaigns. As part of our team, you'll dive deep into data analysis, contribute to experimental design, and leverage causal inference to drive advertiser success on a global scale. If you're someone who loves to turn insights into action and has a proven track record in data science, we want to hear from you! #Snapchat #DataScience #Hiring #TechCareers #AdTech #ConversionLift #Innovate #TeamSnapchat For more details and to apply, please visit this link https://lnkd.in/gufrUswa Let's create something amazing together. 🌟
-
Max He reposted this:
My team is hiring PMs for two really high-impact roles at Snap, Messaging and Trust and Safety. Please let me know if you're interested in learning more! https://lnkd.in/giScV8yR https://lnkd.in/gKe6H_p6
-
Max He liked this:
Today’s enterprises face new security threats and the loss of sensitive data while more work than ever is happening in the browser. Learn how businesses are meeting these challenges with Edge for Business in a new commissioned TEI study by @ForresterConsulting.
New Technology: The Projected Total Economic Impact™ Of Microsoft Edge For Business
-
Max He liked this:
I'm looking for a Machine Learning Engineer to join me on the Growth Intelligence team for Microsoft Copilot. We mine petabytes of raw chat logs to create structured signals about our users and conversations, including embeddings, topics, intents, personas, summaries, vector databases, and more! Experience with NLP modeling and GPU utilization at scale preferred. https://lnkd.in/etqDaBCQ
-
Max He liked this:
Meta launched a Meta Ads MCP. You could think that this would hurt Pipeboard. I thought it would. But nope, our sign-ups and usage just accelerated. The servers are really busy :). I've been heads down working on making everything bigger and better, not doing a great job of talking about it, but I've been learning *so* much. Building fast-growing products in this era is so fun!
-
Max He liked this:
From the ground up, we are seeing incredible progress. There is real, tangible momentum in every part of what we do. And as for Google Search - Search queries are at an all-time high. As AI Overviews make it easier to understand complex topics, users are leaning in and asking the deep questions they used to think were "too hard" for a search engine. Whether they're using their camera, their voice, or just a messy thought, now we're in a space where Search helps people get things done.
-
Max He liked this:
"Excited for this launch. Amazing work by the Whatnot team to make it happen."
Whatnot 🤝 Shopify
Today, we announced our new integration with Shopify, bringing live commerce into the workflows sellers already rely on. Live commerce is the fastest-growing channel in e-commerce because it adds something traditional online shopping can’t: human connection. Sellers can answer questions in real time, build trust instantly, and create the kind of urgency that drives conversion – all in a single live show. For many businesses, the question hasn’t been whether it works, but how to make it fit into the operation they’ve already built. Now, retailers of all sizes, from those just getting started to those with hundreds of millions in sales, can plug Whatnot into the same systems they use to manage products, inventory, and fulfillment. You get the speed and energy of live selling while your business keeps running seamlessly. Download the Whatnot app in the Shopify App Store (https://lnkd.in/gMPnMwUH), and read more in our blog (https://lnkd.in/g5Ay_Ju7) #Shopify #Whatnot #LiveShopping #LiveCommerce https://lnkd.in/gwj7nRP5
Whatnot x Shopify: Grow your Business with Live Selling
-
Max He liked this:
Today marks the closing of my second chapter at Google. After 7 years, I’m moving on to a new chapter and excited to take on new challenges. Looking back, what made this journey meaningful comes down to two things: the people I worked with and the work we got to do together. The people. I’ve had the privilege of working alongside teammates who are not only sharp and high-caliber, but also thoughtful, generous, and kind. What I’ll remember most isn’t just what we accomplished but how we did it: through trust, care, and a shared commitment to doing great work. Thank you for the partnership, the belief, the debates, and the laughter along the way. The work. Watching Search transform itself in the AI era has been remarkable and I’m proud to have contributed meaningfully to that shift. Helping shape how AI comes into the product, and how we think about the future of Search, is something I’ll always carry with me. I grew here in ways that stretched me - not just as a data science leader, but as a builder. Grateful for the past, and excited for what’s ahead. P.S. I’m taking some time to recharge - reading, getting back into piano, exploring new hikes and going deeper on AI skills and tools. If you have recommendations on any of the above or anything that helped you reset, I’d love to learn about them.
-
Max He liked this:
Thank you Prudhvi Vatala (Head of Platform Engineering, Snap) for sharing Snap's journey with Google Cloud and Nvidia at Next '26. We appreciate the partnership and collaboration! #snap #googlecloud #partnership #googlecloudconsulting #next26
-
Max He liked this:
Excited to be featured in The RISE 40, a new report from Huron highlighting the companies shaping the future of the student lifecycle across recruitment, retention, and career readiness. Explore the report here: https://lnkd.in/gd8BMjYK
-
Max He liked this:
Following up on my previous post on how fast the Data Science field is evolving with AI: AI isn’t just boosting productivity and capability — it’s also widening the gap between great data scientists and mediocre ones. This 2x2 infographic captures how I’ve been thinking about that shift. Biggest risk, in my view: the slippery slope zone — over-reliance on AI, weaker thinking, and skill decay. This is especially important for fresh graduates entering the field, and for data scientists onboarding into a new role or domain. Early on, it’s critical to strike the right balance between leveraging AI to move faster and getting your hands dirty enough to build real domain intuition, judgment, and independent thinking. AI can be an incredible amplifier. But only if it strengthens your thinking instead of replacing it.
Experience
Education
Explore more posts
-
Bradley Fay
DraftKings Inc. • 1K followers
Third piece in the AAD series is up. This one is about acceptance criteria (AC) and why treating them as a formal contract changes what you can diagnose when autonomous development breaks. AC serve as the bridge between the spec and the environment in which the spec is compiled into a product. This is implicit in the first article, but after marinating on this for a few weeks, I felt I should make it explicit. The short version: most teams write AC the way you'd write them for a human. "Handle edge cases gracefully." "Performance should be acceptable under normal load." In AAD, those are specification failures. An agent will do something when it reads that. It might even do something reasonable. But you've handed control over what "reasonable" looks like to the LLM, and you'll only find out whether you agree after you've burned the iteration cycle. In my experience, the first pass is OK, but it can quickly diverge, and it takes a lot of time to get back to where you intended to go in the first place. One thing I keep coming back to is the diagnostic value of keeping the spec and the AC separate. My current line of thinking leads me to this framing: the spec describes intent; the AC define done. You need both. The spec is the what and the why. The AC are the falsifiable tests that the intention was met. Keeping them separate creates a framework so each gets the attention it deserves as part of this new world. Building on the notion of falsifiability, there's also a section on the right mindset for writing AC. Just like the best scientist/experimentalist, you should write AC from a lens that lets you confirm intention was not met, not that intention was met. It's a subtle but meaningful nuance. If you assume the agent is going to get it right, you're biased to see the final product and assume it's correct. When you train yourself to focus on the ways the agent might get it wrong, you open up a whole new world of writing great AC.
I think this applies to traditional development as well as AAD. Claude made a fancy doc for this one! But also, the full read is in the comments.
15
1 Comment
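The "AC as falsifiable tests" idea can be made concrete in a few lines. Everything below is illustrative, not from the article: `parse_amount` is a hypothetical function under development, and each acceptance criterion is written as a test designed to catch a specific way an agent could get it wrong, rather than a vague instruction like "handle edge cases gracefully":

```python
def parse_amount(text):
    """Hypothetical function being built against a spec (illustrative only)."""
    cleaned = text.strip().replace(",", "").lstrip("$")
    return round(float(cleaned), 2)

# Vague AC: "handle edge cases gracefully" -> the agent decides what that means.
# Falsifiable AC: each criterion is a concrete test written to catch a specific
# failure mode, so a wrong implementation is caught before the cycle is burned.
assert parse_amount("$1,234.50") == 1234.50   # currency symbol and separators
assert parse_amount("  42 ") == 42.0          # surrounding whitespace
try:
    parse_amount("not a number")              # invalid input must fail loudly,
    raise AssertionError("AC falsified: expected ValueError")
except ValueError:
    pass                                      # not return a silent default
```

Note the last criterion embodies the falsification mindset from the post: it is written to prove the intention was not met (a silent default slipping through), not to confirm the happy path.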
Michael Leznik
Aristocrat • 2K followers
We live in an era where a single import torch can get a model up and running in minutes. But when things go wrong — collapsing loss functions, vanishing gradients — syntactic convenience isn’t enough. Progress still depends on a solid grasp of first principles. This is one of the most comprehensive resources I’ve come across: a textbook, video lectures, and Python implementations for every concept and chapter, with an excellent balance between theory and practical application. #MachineLearning #DeepLearning #AI #PyTorch #DataScience #MLTheory #FirstPrinciples #LearningResources #Engineering #AppliedAI https://lnkd.in/e_gysNCb
18
-
Zifeng L.
ElasticDash • 3K followers
The end of human drivers? Tried Tesla Robotaxi in the Bay Area for the first time: 8.8 miles from Meta SF to Daly City station for $18. Unlike Waymo, Tesla’s Robotaxi can take the freeway. The ride was smooth, and the decision-making felt far closer to human than “bot-like.” What surprised me most: it’s just a regular Model Y. With the built-in rear screen, any Tesla could potentially be activated as a Robotaxi. That’s scalability at an entirely different level. Waymo might have an early lead, but Tesla’s vision-only approach gives it massive coverage and faster rollout potential. If this scales, Uber, Lyft, and even car ownership itself could be disrupted faster than expected. This is exactly why I quit my job to go all-in on ElasticDash. We’re entering a new industrial revolution, and the real question is: will you shape it, or be forced to adapt?
15
2 Comments
Asjad Ali
Devminified • 2K followers
Every year in the U.S. alone, 40,000 lives are lost and 2.4 million people are injured in car accidents. Most of these tragedies come down to human error like distraction, fatigue, poor judgment. 𝐖𝐡𝐚𝐭 𝐢𝐟 𝐭𝐞𝐜𝐡𝐧𝐨𝐥𝐨𝐠𝐲 𝐜𝐨𝐮𝐥𝐝 𝐜𝐡𝐚𝐧𝐠𝐞 𝐭𝐡𝐚𝐭 𝐬𝐭𝐨𝐫𝐲? Waymo just released safety data after 56M+ fully driverless miles. The results are eye-opening: -> 85% fewer serious injuries vs human drivers -> 96% fewer intersection crashes (where many fatalities happen) -> 92% fewer pedestrian crashes -> 82% fewer cyclist/motorcycle crashes 𝐈𝐧 𝐨𝐭𝐡𝐞𝐫 𝐰𝐨𝐫𝐝𝐬: when you take the human out of the driver’s seat, the roads get dramatically safer. And adoption is accelerating fast. Waymo went from 10K weekly rides in 2023 → 250K rides in 2025. At this pace, they’ll likely hit 1M+ weekly rides by early 2026. But here’s the twist: the biggest barrier isn’t technology anymore. It’s policy, regulation, and public trust. Cities like NYC, Singapore, and Nashville are opening up, but many governments are still hesitating. So the real question isn’t “Is the tech ready?” It’s “𝐀𝐫𝐞 𝐰𝐞 𝐫𝐞𝐚𝐝𝐲 𝐭𝐨 𝐥𝐞𝐭 𝐠𝐨 𝐨𝐟 𝐭𝐡𝐞 𝐰𝐡𝐞𝐞𝐥?” (Image credit: Unsplash) #artificialintelligence #selfdriving #generativeai
6
-
Brian Seaman
Wayfair • 2K followers
I’ve been leaning on Claude Code a lot lately (I especially like the new Claude Opus 4.1) and this new short course from DeepLearning.AI crystalized a bunch of “do this, not that” habits for agentic coding. Claude Code: A Highly Agentic Coding Assistant (built with Anthropic) is clear, hands-on, and only 90 minutes or so long. https://lnkd.in/guvsDnBg My big takeaway: Claude Code shines when you give it context + constraints and let it work in small, testable steps. First reading the repo, then proposing a plan, then shipping diffs (not walls of code). The course walks through exactly that across three concrete projects (RAG chatbot, refactoring a notebook into a dashboard, and a Figma-to-web app build), plus GitHub hooks. If you are not already doing it, here were a few tips that can level up your usage: Use CLAUDE.md at the repo root (and even per-package in a monorepo) so Claude automatically pulls project norms, commands, and “gotchas” into context. It’s like a living, machine-readable onboarding doc. Create custom /slash commands by dropping prompt templates into .claude/commands. This is great for repeatable workflows like “fix PR comments and push” or “triage Issue #1234.” Team-shareable via git. Headless mode for CI: run Claude Code non-interactively to label issues, do subjective linting (naming, clarity), or fan out codebase migrations. It’s an automation surface, not just a chat. MCP servers as power-ups: wire in tools (e.g., Puppeteer, Sentry) via .mcp.json so everyone on the repo gets the same agentic capabilities out of the box. Git + GitHub ergonomics: install gh and let Claude draft commit messages, open PRs, or resolve rebases—surprisingly effective when scoped to small diffs. Why I liked the course: it doesn’t just demo “AI writes code.” It teaches the workflow: have Claude summarize the codebase, set success criteria, ship incremental changes, and keep tests close. The GitHub hooks + notebook-to-dashboard refactor were especially practical. 
If you’re experimenting with agentic coding or trying to make it stick across a team, this is a crisp starting point. #AI #ClaudeCode #AgenticCoding #DeveloperExperience #MLOps #Productivity
18
1 Comment
Ovadya Menadeva
NanoScout • 6K followers
Vision models usually stop at understanding. This one keeps going into interaction and physics. That’s a very different kind of perception. The video shows VIGA (Vision as Inverse Graphics Agent), a system that reframes visual understanding as reconstructing the world that produced the image, not just predicting labels or dense maps. Technically, VIGA treats vision as program synthesis. Instead of a one-shot image-to-output pipeline, it runs an analysis-by-synthesis loop: it proposes executable Blender Python code (geometry, camera, materials, lighting, physics), renders the scene, compares the render against the input using photometric signals and VLM-based feedback, and iteratively refines with a persistent scene memory. This design unlocks several important properties:
• Explicit world representation: the output is an editable, inspectable scene program, not a latent tensor
• Error localization: failures are diagnosed at the level of camera pose, scale, materials, or physics
• Compositional reasoning: multi-step edits build on prior state instead of restarting from scratch
• Physical consistency: reconstructed scenes support interactions (collisions, gravity, motion)
• Better generalization: operating in program space enables large gains on multi-step and compositional benchmarks
On BlenderBench and BlenderGym, this approach significantly outperforms one-shot VLM baselines and memory-less agents, not because of bigger models, but because the representation itself is executable. The broader takeaway isn’t “Blender as a backend”. It’s the return of explicit, controllable world models, now combined with agentic reasoning.
This direction feels tightly aligned with: Vision-Language-Action models, embodied AI and robotics, simulation-first learning, and digital twins and dynamic world models. Paper: Vision as Inverse Graphics Agent via Interleaved Multimodal Reasoning. Curious whether inverse-graphics agents will become the default interface between perception and action, especially as we move beyond static scenes. #ComputerVision #InverseGraphics #EmbodiedAI #WorldModels #AgenticAI #Simulation #Robotics #3DVision #AIResearch #VLM #AI
15
1 Comment
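The propose-render-compare-refine loop described above can be illustrated with a toy stand-in: a 1-D Gaussian "renderer" instead of Blender, and random hill-climbing instead of an LLM proposing code. All names and numbers here are illustrative, not from the paper; only the loop structure (explicit scene program, photometric comparison, persistent best-so-far state) mirrors the idea:

```python
import math
import random

def render(scene):
    """Toy stand-in for VIGA's Blender renderer: a 1-D Gaussian blob 'image'
    produced from an explicit, editable scene program (here just a dict)."""
    x, s = scene["x"], scene["sigma"]
    return [math.exp(-((i - x) ** 2) / (2 * s * s)) for i in range(32)]

def photometric_loss(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b))

# The "input image" was produced by an unknown scene we want to reconstruct.
target = render({"x": 10.0, "sigma": 2.0})

# Analysis by synthesis: propose scene parameters, render, compare, refine,
# keeping a persistent best-so-far scene instead of restarting each step.
random.seed(0)
best = {"x": 3.0, "sigma": 4.0}
best_loss = photometric_loss(render(best), target)
for _ in range(500):
    cand = {"x": best["x"] + random.gauss(0, 0.5),
            "sigma": max(0.5, best["sigma"] + random.gauss(0, 0.2))}
    loss = photometric_loss(render(cand), target)
    if loss < best_loss:       # keep proposals that better explain the image
        best, best_loss = cand, loss
print(best)  # converges near the true scene: x ≈ 10, sigma ≈ 2
```

Because the output is a scene program rather than a latent tensor, a failure here is directly diagnosable: you can see whether `x` (position) or `sigma` (scale) is off, which is the error-localization property the post highlights.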
Qihang(Tom) Cheng
Dalian DeepMatrix Technology… • 149 followers
Been trying out a semantic chunking strategy recently. The logic is to check if the max_sim between a new sentence and the current chunk is greater than the min_sim within the chunk. If so, merge; otherwise, create a new one. But in practice, I found that as soon as one irrelevant sentence gets into the chunk, the min_sim plummets, and then it pretty much stops creating new chunks. Also, just looking at max_sim can chain two unrelated topics together with just a few transitional sentences, which is a real headache. Performance-wise, the O(m²) complexity is just unusable. So, I changed my approach: for small chunks, I only use a max_sim threshold. Once the chunk gets larger (e.g., >5 sentences), I then bring in quantile-based constraints. This change really helped mitigate the chain-linking issue. For performance, you still have to use incremental calculations or approximate quantiles, or it's just too slow.
3
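The revised strategy in the post above can be sketched in Python. This assumes sentence embeddings are already computed; the toy 2-D vectors, threshold, chunk-size cutoff, and quantile value are all illustrative choices, not from the post:

```python
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(a) * norm(b))

def chunk_sentences(embeddings, sim_threshold=0.6, big=5, q=0.25):
    """Small chunks: merge on a max_sim threshold alone. Large chunks
    (> `big` sentences): additionally require beating a low quantile of the
    chunk's internal similarities, guarding against topic chain-linking."""
    chunks = [[0]]
    for i in range(1, len(embeddings)):
        chunk = chunks[-1]
        max_sim = max(cosine(embeddings[i], embeddings[j]) for j in chunk)
        ok = max_sim >= sim_threshold
        if ok and len(chunk) > big:
            # NOTE: recomputed from scratch here (O(m^2) per step); the post
            # suggests incremental or approximate quantiles for real workloads.
            internal = sorted(cosine(embeddings[a], embeddings[b])
                              for a in chunk for b in chunk if a < b)
            floor = internal[int(q * (len(internal) - 1))]
            ok = max_sim >= floor
        if ok:
            chunk.append(i)
        else:
            chunks.append([i])
    return chunks

# Toy 2-D "embeddings": three sentences on topic A, then two on topic B.
emb = [[1.0, 0.0], [0.9, 0.1], [0.95, 0.05],
       [0.0, 1.0], [0.1, 0.9]]
print(chunk_sentences(emb))  # [[0, 1, 2], [3, 4]]
```

Using a quantile of the pairwise similarities rather than the minimum is what makes the floor robust: one irrelevant sentence drags the minimum down permanently, but barely moves the 25th percentile.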
-
Sam Mahdavian
AArete • 9K followers
Everyone’s talking about frontier LLMs like GPT‑5, Claude Opus 4.5, and Gemini 3. Most people are missing the point! The real competitive edge isn’t access to a new model. It’s the ability to build the entire production system around it. It’s the engineering that turns a flashy demo into a trusted, efficient, scalable workflow: • 𝗗𝗮𝘁𝗮 & 𝗢𝗿𝗰𝗵𝗲𝘀𝘁𝗿𝗮𝘁𝗶𝗼𝗻: High‑quality data pipelines feeding clear prompts, tools, and policies • 𝗘𝘃𝗮𝗹𝘂𝗮𝘁𝗶𝗼𝗻 & 𝗠𝗼𝗻𝗶𝘁𝗼𝗿𝗶𝗻𝗴: Rigorous testing, continuous monitoring for drift, and human‑in‑the‑loop review • 𝗚𝘂𝗮𝗿𝗱𝗿𝗮𝗶𝗹𝘀 & 𝗙𝗶𝗻𝗢𝗽𝘀: Strong security and compliance, paired with relentless cost optimization The future belongs to a few key groups working together: • 𝗧𝗲𝗰𝗵𝗻𝗶𝗰𝗮𝗹 𝘀𝗽𝗲𝗰𝗶𝗮𝗹𝗶𝘀𝘁𝘀 who can design secure, reliable, and efficient LLM (and agentic) pipelines • 𝗕𝘂𝘀𝗶𝗻𝗲𝘀𝘀 𝘀𝗽𝗲𝗰𝗶𝗮𝗹𝗶𝘀𝘁𝘀 who deeply understand their domain and can “𝘀𝗽𝗲𝗮𝗸 𝗟𝗟𝗠” • 𝗣𝗿𝗼𝗱𝘂𝗰𝘁/𝗼𝗽𝗲𝗿𝗮𝘁𝗶𝗼𝗻𝘀 𝘀𝗽𝗲𝗰𝗶𝗮𝗹𝗶𝘀𝘁𝘀 who translate messy real‑world needs into clear requirements, guardrails, and success metrics Individually they’re powerful, but it’s the hybrid builders and cross‑functional teams at the intersection of these three that turn frontier models into durable, production AI. They don’t just ship another chatbot. They ship real outcomes. What’s the most overlooked component in building production AI? #AI #GenerativeAI #LLM #AILeadership #AIEngineering #AArete
34