Ziliang (Daniel) Lin
New York, New York, United States
1K followers
500+ connections
About
Experienced Data Scientist and Quantitative Researcher
Activity
-
Ziliang (Daniel) Lin liked this: I’m excited to be accepted into Georgia Tech’s Online Master of Science in Computer Science (Georgia Tech OMSCS), which I will start in the fall (1 class per semester while working, for a few years). I have been on a long detour from my academic aspirations. When I was deciding next steps for my career this winter, I told Claude about my goal to get a PhD in something between math, philosophy, and computer science. The last time I applied for PhD programs (post undergrad), I applied to only Stanford Math/Statistics and one other, to be financially prudent. My background, however, is a bit of a long shot for most top PhD programs in these areas — 15 years out of school. Claude recommended OMSCS given my constraints, and I decided to apply immediately. Thanks to some awesome colleagues and managers at Meta, I completed my application (with recommendation letters!) the day before the deadline, and I got a very-last-minute acceptance (2 days before the final deadline to accept!). Within industry, I’ve always enjoyed engineering and research projects a lot, but with my BA in math (1 formal CS course) I’ve always felt like a bit of an imposter in both areas, despite some career success in both. Additionally, I believe (without evidence) that attaining a higher degree may set my children up for better economic outcomes. I’m excited to learn more formally about deep learning, RL, robotics, quantum computing, software design and best practices, and hopefully areas such as world models and more. I also hope to use this as a stepping stone toward a PhD in one of my passion areas.
-
Ziliang (Daniel) Lin liked this: Since getting laid off from Meta in 2022, I have been in contract roles and have struggled to land a full-time job. It felt like Meta ruined my life. So many people in this world are experiencing the same challenge. With great relief and joy, I am happy to share that I have accepted a full-time conversion offer at Adobe! It's truly a dream come true to be helping Adobe scale their design and creative teams. I am reunited with Lisa Gibello for the long haul - my first manager for my very first recruiting role back in 2013. To all my recruiting friends and colleagues out there struggling, don't give up! If there is anything that differentiates us from the rest of the talent in the world, it's resilience. 💖
-
Ziliang (Daniel) Lin liked this: Most people I know in AI think the median person is screwed, and they have no idea what to do about it. I spent the last 3 months talking to dozens of researchers, economists, and policy experts about AI's impact on work, including reps from every frontier lab and several Congressional offices. Unfortunately, I was not reassured. But an "underclass" is not inevitable; it is a societal choice — one we can and should avoid. Instead of waiting for impact, we should start planning now to support workers through AI disruption. Whether policymakers can assuage concerns about economic security will determine if we get to reap AI's gains at all. New from me, for The New York Times (gift link in comments)
-
Ziliang (Daniel) Lin liked this: How might AI affect marketplaces and economic exchange in the future? To study this question we built a marketplace for Anthropic employees at the SF office. Claude interviewed the 69 participants to learn about what they might be interested in selling (and at what price) and what they would also be interested in buying. Then the Claude agents were placed in a centralized marketplace to offer deals, negotiate, and ultimately execute real-world transactions. We learned a number of things along the way: 1) This pilot experiment worked. There were 186 deals totaling over $4k in transaction volume. 2) Participants generally rated the deals that Claude made as fair, and almost half said they would be willing to pay for a service like this. 3) We ran multiple simulated exchanges in parallel to test whether having a more powerful model meant getting better terms of trade when negotiating against weaker models. In these simulated exchanges Opus agents generally fared better than Haiku agents. 4) One of the surprising things that emerged was how well Claude seemed to bargain for desired items even when preferences were left unstated by participants — Claude seemed to pick up on latent demand based on related contextual clues from the interview. There is a lot more in the actual write-up: One person asked their Claude agent to barter like an exasperated cowboy. In another case, Claude was told to buy something for itself (it bought 19 ping pong balls, which we keep at the office). Grateful to my fellow coauthors on this very fun project: Keir Bradwell, Kevin Troy, and Dylan Shields. https://lnkd.in/ggQ-Sjpx — Project Deal: our Claude-run marketplace experiment | Anthropic
-
Ziliang (Daniel) Lin liked this: Documentation used to be the thing everyone hated writing and nobody read. Now it's arguably the most important part of an AI system. Anthropic just shipped their biggest Claude Cowork update yet. Plugins for sales, finance, legal, engineering, operations, and more. Under the hood, there's nothing magical. It's just markdown files. A CLAUDE.md tells Claude your workspace rules. A SKILL.md teaches it a repeatable workflow. • No fine-tuning. • No training pipeline. • Plain text that Claude reads at runtime. Take the marketing plugin. It bundles skills like brand-voice and competitive-analysis with commands for drafting content, building email sequences, and running SEO audits. At its core, just a folder of .md files. But it doesn't load everything at once. Think of it like a library catalog: the AI scans the short descriptions first, and only opens the specific .md file when it actually needs to run that audit or draft that sequence. I've been running this architecture for weeks to manage content and campaigns, powered by OpenClaw. An Obsidian vault where three editors work concurrently: me, a remote AI assistant, and Claude Code in the terminal. • A USER.md gives AI my context. • Daily memory files track tasks, actions, and decisions. • Skills in folders teach AI how to pull performance reports and orchestrate campaigns. The entire brain of the operation is text files synced to the cloud. Eli Mernit recently wrote that 'your company is a filesystem.' When you model an organization as folders and files, AI agents can access everything they need by reading and writing. Cases in /cases. Billing in /billing. The back office becomes a state machine. When you strip away the noise, an AI agent reduces to two things: the filesystem as state, and the model as the orchestrator.
That means you can take the tacit knowledge of your best performers — the habits, the checks, the unwritten rules of how to pull a campaign report — and turn it into something executable and shareable. Update the folder once, and the whole team's workflow aligns. Documentation used to gather dust. Now it's the first thing AI reads. The most underrated infrastructure in 2026 isn't a model. It's a folder of .md files that someone (or AI) actually bothered to write.
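The "library catalog" loading described above can be sketched in a few lines. This is a minimal illustration of the pattern, not Anthropic's actual plugin format: the file layout, the convention that the first line is the short description, and the function names are all assumptions for the sake of the example.

```python
import tempfile
from pathlib import Path

# Illustrative skill files: first line is a short description ("catalog card"),
# the rest is the full workflow. Real SKILL.md files are richer than this.
SKILLS = {
    "seo-audit.md": "Run an SEO audit.\n\nStep 1: crawl pages.\nStep 2: check metadata.",
    "brand-voice.md": "Apply brand voice rules.\n\nTone: concise, friendly.",
}

root = Path(tempfile.mkdtemp())
for name, text in SKILLS.items():
    (root / name).write_text(text)

def build_catalog(folder: Path) -> dict:
    """Scan only the first line of each .md file -- cheap to keep in context."""
    catalog = {}
    for path in sorted(folder.glob("*.md")):
        with path.open() as f:
            catalog[path.stem] = f.readline().strip()
    return catalog

def load_skill(folder: Path, name: str) -> str:
    """Open the full file only when the skill is actually invoked."""
    return (folder / f"{name}.md").read_text()

catalog = build_catalog(root)        # short descriptions only
full = load_skill(root, "seo-audit") # full workflow, loaded on demand
```

The design point is the same as the post's: the model's context holds the cheap index, and the expensive full text is pulled in lazily.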
-
Ziliang (Daniel) Lin reacted to this: I recently joined Anthropic to lead Consumer Product and we're growing the team. Our goal is to build AI that millions of people use every day to think better, create more, and accomplish what matters to them. I might be biased, but I think this is probably the most interesting opportunity in the world right now. We're looking for former consumer founders who deeply understand what makes experiences intuitive, spike on 0 to 1 product development, ship products that people love to tell their friends about, and dive into everything to bring their vision to life. The best candidates will be extremely creative, hands-on, obsessed with craft, and opinionated about a future where AI genuinely improves people's daily lives. You'll partner with research, design, engineering, and trust & safety to bring frontier AI capabilities to everyone in ways that are both meaningful and responsible. Ultimately you'll help us build the AI assistant that people trust daily with their most important work and creative pursuits. Apply here: https://lnkd.in/eAmzPp4U We're hiring for many engineering, design, and data science roles in this area: 1. iOS — https://lnkd.in/e5Pem8gV 2. Android — https://lnkd.in/eet4sXJA 3. Agent Infrastructure — https://lnkd.in/eYVJe8uT 4. UI Platform EM — https://lnkd.in/ewnvDhsj 5. Consumer EM — https://lnkd.in/epKFkT78 6. DS Lead — https://lnkd.in/et4sCgy7 More to come!
-
Ziliang (Daniel) Lin liked this: Well, that was fast. Just a few weeks ago, the one-person project OpenClaw went viral and became the fastest-growing open source project in history. Now, its creator, 🦄 Peter Steinberger, has joined OpenAI. It's another validation that the architecture is here to stay: - File-based persistent memory - Modular skills - Self-modifying software Kimi also launched native OpenClaw integration, living in your browser tab, online 24/7. Now with the acquihire, OpenAI would make the architecture far more accessible. Sam Altman says OpenClaw will live in a foundation as an open source project that OpenAI will continue to support. I do hope it's true.
-
Ziliang (Daniel) Lin liked this: Excited to share that I'm serving as a judge for the 47th Annual Telly Awards! They recently interviewed me about my work at NowThis, my career path, what I look for in excellent video, and how my job empowers creators. Check out the full interview here: https://tlly.co/4keizOM This year's theme, "Capture the Original," celebrates how creators, producers, and brands tap into the original inspiration behind their work in today's rapidly evolving media landscape. If you're doing great work in video, from branded content and commercials to social video, film, animation, and beyond, I'd love to see it. 📅 Final Entry Deadline: Friday, February 20th 🔗 Submit your work: https://tlly.co/3La3Oih (The Telly Awards | Honoring Excellence in Video and Television Across All Screens)
-
Ziliang (Daniel) Lin liked this: Hiring Data Scientists to Scale AI for Commerce: After three incredible years at Uber, I’m excited to share that I’ve joined Shopify to support our AI initiatives! We are building the future of commerce through AI assistants for consumers, merchants, and the Shop ecosystem. To do this, I am looking for talented product Data Scientists, from Senior to Staff level, to join the team. If you're passionate about shipping impactful AI products, I’d love to connect. DM me or check out our open roles!
Experience
Education
Projects
-
TEDxNKU
• Led a team of 12 to design the event: crafted an inspiring storyline, liaised with 6 speakers from completely different fields, promoted the event online and offline, and hosted more than 150 attendees
• Aimed to show students on our campus, through the stories and ideas of inspiring young figures, that young people have many alternatives beyond the paths most commonly chosen
Languages
-
English
Full professional proficiency
-
Mandarin
Native or bilingual proficiency
-
Cantonese
Native or bilingual proficiency
Explore more posts
-
Guy Kilbey
Techfellow Limited • 4K followers
Ever wondered what agentic collusion looks like in practice - not theory? Following on from this week’s earlier post... here’s how researchers at King’s College London (KCL) actually built their multi-agent simulation to test it 👇
Architecture:
- A multi-agent reinforcement learning market built with RLlib and PettingZoo
Learning setup:
- PPO (policy-gradient) and DDPG (actor-critic) for continuous control
- Centralised training, decentralised execution - agents share gradients during training but act independently in the market
- Reward shaping: baseline = pure PnL; hybrid = PnL + inventory penalty + market-stability term
Compute:
- ~8 × A100 GPUs
- >10 million training steps with parallel rollouts via Ray
- Convergence to stable quoting policies after ~500 GPU hours
Findings:
- Pure-PnL agents learned aggressive quoting behaviour
- Hybrid-reward agents converged to smoother, more stable policies
- In game-theory terms, a correlated equilibrium emerged - spontaneously
📄 Wang, Ventre & Polukarov (ICAIF 2025) - Multi-Agent Reinforcement Learning for Market Making: Competition without Collusion ...
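The reward shaping described above can be sketched as a single function. The quadratic penalty forms, the weights, and the use of a squared mid-price move as the "market-stability term" are assumptions for illustration; the paper's exact definitions may differ.

```python
def hybrid_reward(pnl: float, inventory: float, mid_move: float,
                  inv_weight: float = 0.01, stab_weight: float = 0.1) -> float:
    """Sketch of the post's reward shaping.

    Baseline reward is pure PnL. The hybrid variant subtracts an inventory
    penalty (here quadratic, discouraging large positions) and a
    market-stability term (here the squared mid-price move, discouraging
    quoting that coincides with destabilising price jumps).
    """
    return pnl - inv_weight * inventory ** 2 - stab_weight * mid_move ** 2

# With zero inventory and a stable mid-price, hybrid reward equals pure PnL;
# carrying a large position or moving the market reduces it.
r_baseline = hybrid_reward(pnl=1.0, inventory=0.0, mid_move=0.0)
r_penalised = hybrid_reward(pnl=1.0, inventory=10.0, mid_move=0.5)
```

Agents trained on the penalised variant have an incentive to quote tightly around the mid and keep inventory flat, which matches the "smoother, more stable policies" finding.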
1
-
James Pascale
Executive AI Advisors • 2K followers
In working with LLMs, a consistent source of performance gaps comes from limitations in memory and learning over time. In my own work, this gap shows up repeatedly: amazing performance in isolated tasks, followed by brittleness once interactions extend over time or require the system to build on prior experience. Memory and self-learning may not be the only way to frame the problem, but they are currently useful lenses for understanding where today’s LLM architectures break down. What makes this gap interesting is the contrast: these systems can be super-human in narrow capabilities, yet frustratingly wrong when even modest persistence or learning over time is expected. You don’t need to be a cognitive scientist to notice that the human brain is not just a very large context window. From a practical standpoint, it can be useful to think architecturally in terms of adding or improving memory and learning—not as the only way to frame the problem or its solution, but as architecturally useful working models. Seen this way, the issue is not access to more text via larger context windows, but the absence of persistent, updateable processes—mechanisms that govern what is retained, revised, and abstracted across interactions. I recently looked at Hindsight (link to repo in the comments), an open approach to agent memory that treats memory as more than a flat retrieval layer. 
Instead of storing everything in a single vector space, it introduces explicit memory categories that resemble familiar cognitive distinctions: 🔹World knowledge — relatively stable facts 🔹Experiences — what the agent itself has done and observed 🔹Opinions — beliefs that carry confidence or uncertainty 🔹Observations — higher-level models formed by reflecting across prior memory What stands out is not just the taxonomy, but the separation of memory operations: 🔹Retain information intentionally 🔹Recall relevant past context when needed 🔹Reflect to synthesize new insights from accumulated history This structure makes it easier to reason about how an agent’s knowledge evolves—particularly how beliefs change, how experience accumulates, and how abstractions form over longer horizons. Many existing RAG-style systems struggle here once interactions extend beyond short-term context. The approach has also been evaluated on LongMemEval, where it reports leading accuracy among open systems, with independent verification by external research groups. Whether or not this specific architecture becomes standard, it points toward a direction that feels necessary: systems that can accumulate experience, revise beliefs, and build internal models over time. That seems like a prerequisite for anything we would reasonably call self-learning. How are you thinking about memory and persistent learning in your own AI or LLM projects? And how much improvement do you think this could bring to LLM or AI systems’ performance? #AIAgents #LLMs #AgentMemory #MachineLearning #AIArchitecture
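The memory taxonomy and the retain/recall/reflect split described above can be made concrete with a toy data structure. This is not Hindsight's actual API — the class names, the keyword-match stand-in for vector retrieval, and the trivial reflection rule are all assumptions used to illustrate the separation of concerns.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    kind: str                 # "world", "experience", "opinion", "observation"
    text: str
    confidence: float = 1.0   # opinions carry uncertainty; facts default to 1.0

@dataclass
class MemoryBank:
    items: list = field(default_factory=list)

    def retain(self, kind: str, text: str, confidence: float = 1.0) -> None:
        """Store a memory intentionally, tagged with its category."""
        self.items.append(Memory(kind, text, confidence))

    def recall(self, query: str, kind: str = None) -> list:
        """Fetch relevant past context (toy keyword match, not vectors)."""
        return [m for m in self.items
                if query.lower() in m.text.lower()
                and (kind is None or m.kind == kind)]

    def reflect(self) -> None:
        """Synthesize a higher-level observation from accumulated experience."""
        exps = [m for m in self.items if m.kind == "experience"]
        if len(exps) >= 2:
            self.retain("observation",
                        f"Pattern inferred across {len(exps)} experiences")

bank = MemoryBank()
bank.retain("world", "Coffee is sold at the office cafe")
bank.retain("experience", "Bought coffee on Monday")
bank.retain("experience", "Bought coffee on Tuesday")
bank.reflect()   # adds an "observation" derived from the two experiences
```

The useful property, even in this toy form, is that beliefs, facts, and derived abstractions live in distinct categories, so you can reason about how each evolves rather than searching one flat vector store.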
2
3 Comments -
Muhammed Burak Bedir
Leveragai • 707 followers
In the early 1980s, Gerry Bamberger, a recent graduate of Columbia University's computer science program, joined Morgan Stanley. His task was to write simple software that updated trading results. He noticed that traders often executed large “pair trades”, for example, buying millions in Coca-Cola shares while simultaneously selling the same amount in Pepsi. As Bamberger observed these trades, he realized that the spread between the two stocks would temporarily diverge due to market impact, then revert to the mean, a pattern he saw as an opportunity. With a $500K budget from the bank, he built a computer-driven statistical arbitrage system and eventually grew it into a $30M strategy. I wrote an article on how to apply cointegration with Python to uncover such opportunities, feel free to check it out. https://lnkd.in/dQAjVZCx
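The mean-reversion idea Bamberger spotted can be sketched as a spread z-score signal. This is a deliberately simplified stand-in: a real stat-arb pipeline would first estimate the hedge ratio by regression and confirm cointegration with a formal test (e.g. Engle-Granger, as in the linked article), rather than assume a ratio of 1 as here.

```python
from statistics import mean, stdev

def spread_zscore(prices_a, prices_b, hedge_ratio: float = 1.0) -> float:
    """How stretched is the current spread relative to its own history?

    The spread is price_a - hedge_ratio * price_b. Under mean reversion,
    a large positive z-score suggests shorting the spread (sell A, buy B),
    a large negative one suggests the opposite, exiting as z returns to 0.
    """
    spread = [a - hedge_ratio * b for a, b in zip(prices_a, prices_b)]
    return (spread[-1] - mean(spread)) / stdev(spread)

# Toy example: stock A jumps away from stock B on the last tick,
# stretching the spread above its historical mean.
z = spread_zscore([10, 10, 10, 10, 12], [10, 10, 10, 10, 10])
```

A common (illustrative, not universal) rule is to enter when |z| > 2 and exit near z = 0 — the bet being precisely the reversion Bamberger observed after large pair trades moved one leg.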
8
-
Dipankar Ranjan Baisya
Subquadratic • 4K followers
Ever wondered why a single AI model struggles to analyze stocks like a professional Wall Street analyst? 🤔 Or why traditional LLMs hallucinate financial metrics when asked to provide comprehensive investment analysis? The answer lies in understanding the complexity overload problem—and that's exactly what multi-agent collaboration solves. Recently, I looked into AI in finance and stock trading and then built RiskNavigator AI using Google's Agent Development Kit (ADK), orchestrating 5 specialized agents to deliver institutional-grade stock analysis and investment decisions with risk assessment. I implemented the complete sequential agent pattern—from data gathering through MCP to risk assessment—testing it on Google Cloud Run's serverless infrastructure. The results were eye-opening: - 20% improvement in output consistency compared to single-LLM approaches - 85% reduction in hallucinations through specialized agent roles - <60 second end-to-end analysis with 99.9% uptime - Production-ready deployment on serverless infrastructure with zero GPU costs But the most surprising finding? State-based communication (shared memory) should be your default choice for sequential agent workflows, not message-passing as many assume.
TL;DR I wrote a comprehensive blog breaking down: ✅ What multi-agent systems are and why they outperform monolithic LLMs for complex tasks ✅ The 12 fundamental agent design patterns—and how to choose the right one ✅ How Model Context Protocol (MCP) connects agents to 60+ financial APIs securely ✅ Agent-to-Agent Communication (A2A) implementation using Google's AgentTool pattern ✅ Different types of agent memory (short-term, long-term, shared, episodic, semantic) ✅ Complete code implementation with all 5 specialized agents (Data, Trading, Execution, Risk, Summary) ✅ Production deployment on Google Cloud Run with FastAPI + Docker ✅ Real experimental results showing the power of agent specialization Whether you're building financial AI systems or just curious about multi-agent architectures, this guide takes you from fundamentals to production deployment with real code and architecture diagrams. 💻 GitHub: https://lnkd.in/gkewN_7y Special thanks to Google Cloud for the incredible "5-Day AI Agents Intensive Course" that inspired this deep dive into multi-agent systems, and to Anthropic for introducing the Model Context Protocol that makes tool integration so elegant! 🙏 What's your experience with multi-agent systems? Have you used Google ADK, LangGraph, or CrewAI in production? I'd love to hear your insights in the comments! 👇 📖 Read the full blog here: https://lnkd.in/gGYw65jP #AI #MultiAgentSystems #FinancialAI #GoogleCloud #AgenticAI #MCP #ProductionAI #MachineLearning #MLOPS
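The state-based (shared memory) pattern the post recommends for sequential workflows can be sketched framework-free. The agent names mirror the post's five roles, but the implementations here are placeholder stubs, not the actual RiskNavigator AI or Google ADK code.

```python
# Each agent reads from and writes to one shared state dict, rather than
# passing messages to the next agent: state-based communication.

def data_agent(state: dict) -> None:
    # Stand-in for a data pull via MCP or a market-data API.
    state["prices"] = [101.0, 99.5, 102.3]

def risk_agent(state: dict) -> None:
    # Reads what the previous agent wrote; no direct coupling between agents.
    prices = state["prices"]
    state["range"] = max(prices) - min(prices)

def summary_agent(state: dict) -> None:
    state["report"] = f"price range={state['range']:.1f}"

state = {}  # the shared memory that stands in for message passing
for agent in (data_agent, risk_agent, summary_agent):
    agent(state)
```

The design advantage for sequential pipelines is that every agent sees the full accumulated context for free, and adding an agent means adding one function to the list — there is no per-edge message schema to maintain.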
34
2 Comments -
Shubham Agrawal
Stealth Startup • 4K followers
A fantastic deep dive by William Arnold (NVIDIA) into Dynamo, an open-source system that rethinks high-performance LLM inference. If you're building or hosting models, this talk is a must-watch. Learn about smart cache routing, disaggregated serving, and real production-ready design. At AER Labs, we’re passionate about efficient AI + system-level innovation. Do subscribe to our channel for more such expert sessions. Our Batch V1 applications are now open. If you want to contribute to open-source AI research (part-time or full-time) or build a cool AI project, do check it out. Free and open to all. Fill this form to connect: https://lnkd.in/dJG8bXtt
12
-
Madelyn Silveira
CoStar Group • 2K followers
Many of us were holding our breath for Nvidia’s Q3 earnings call last week. It’s no secret that resource availability impacts downstream products, so how do we ensure that the “growth” of AI doesn’t outpace its supply stream? Last weekend, I attended #PlanetAction at Massachusetts Institute of Technology in a quest for some answers. I learned that society is inextricable from its environment, and economy laces between the two – connecting supply to demand and tugging at both landscapes. That transactional lattice ought to be pliable. When a material resource runs low, good products reconnect to new inputs to maintain consistent outputs. Likewise, if production pipelines adversely impact surrounding communities, alternate methods ought to be tested to regenerate social decay. These methods need no discovery; they need adoption ahead of the endpoints they were designed to avoid. Nvidia is safe for now. Do we breathe a sigh of relief? Or do we push for an economy that feels a little less like a game of Jenga?
32
3 Comments -
Vasily Ilin
UW Math AI lab • 513 followers
Most existing #lean4 datasets contain only correct proofs, so models must learn error correction through RL, which is expensive. With UW Math AI lab we release a dataset of 260k erroneous Lean proofs with - compiler feedback - reasoning trace - corrected proof Improvements in error correction: - Goedel 8B: 2x - Kimina 8B: 3x Paper: https://lnkd.in/gaybt4bd
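To make the dataset's (erroneous proof, compiler feedback, corrected proof) triple concrete, here is a deliberately trivial Lean 4 illustration — the example and the paraphrased feedback are invented for exposition and are not drawn from the released dataset.

```lean
-- Erroneous proof: the stated goal is false, so `rfl` cannot close it.
-- example : 2 + 2 = 5 := rfl
-- Compiler feedback (paraphrased): the two sides of the equation do not
-- reduce to the same value, so `rfl` fails with a type mismatch.

-- Corrected proof: with the statement fixed, `rfl` succeeds by computation.
example : 2 + 2 = 4 := rfl
```

Pairing the failure message with the fix is what gives a model supervised error-correction signal without needing RL rollouts.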
118
2 Comments -
Vinny DeGenova
Shopify • 3K followers
Excited to share that our team’s paper on aspect-guided review summarization has been accepted to EMNLP 2025’s industry track! 📈🧠🤓 In this paper, we describe a live system at Wayfair that leverages LLMs alongside classical NLP techniques to summarize tens of millions of customer reviews, making it easier for shoppers to discover what matters most to them on site. To further support the research community, we’re also open-sourcing a dataset of ~12M anonymized reviews and their extracted aspects which we hope accelerates progress in aspect extraction, sentiment analysis, and summarization tasks. This milestone reflects the hard work of many folks across Wayfair, including our co-authors Ilya Boytsov, Misha Balyasin, Joe Walt, Caitlin Eusden, Marie-Claire Rochat, and Margaret Pierson. https://lnkd.in/ezJMi3g7 #NLP #EMNLP #EMNLP2025 #LLMs #Summarization #ABSA #OpenSource #Data #Dataset #ecommerce #wayfair #machinelearning #ai #publication
74
3 Comments -
Sandra Garcia
Adevinta • 927 followers
Interesting insight for people in UX and User Research. Can LLM personas replace human feedback? According to this work by Luiz Pizzato and his colleagues, they can be useful at early stages of design, but they have important limitations: they stereotype people, lack variability, and can't capture the individuality and diversity obtained with human feedback. So, no, humans can't be fully replaced, yet! ;)
18
1 Comment