Rong Y.
Greater Boston
1K followers
500+ connections
About
Data Science leader with years of experience building 0-to-1 products and scaling…
Experience
Explore more posts
Erin Davison Medeiros
Vision Insights • 629 followers
We're facing more and more questions about the use of synthetic data in market research. I'd recommend this blog post, which really resonated with me. So much of the conversation is "how closely can this replicate human data?" But there are so many other considerations researchers should be thinking about. https://lnkd.in/eSQfSTPj
Sudesh Jog
apetito UK • 3K followers
Dr. Ramla Jarrar, an MMM expert, shares an important perspective every marketer using MMM should read.

The customer journey is complex, and while MMM is a powerful tool for understanding how spend impacts that journey, its limitations are often overlooked and nuances brushed over once results are packaged in polished decks.

The tension I often see: marketers need one strong KPI to track large marketing investments, but the complexity of measurement means no single framework provides that holy-grail metric. Measurement methodologies are sophisticated and technically robust, yet each has constraints. Triangulating across multiple frameworks (MMM, incrementality testing, attribution, brand tracking) provides better insights, though interpretation remains as much art as science.

This requires partnership. Marketers need to understand what each methodology can and cannot deliver. Analytics partners, whether internal teams or external consultancies, have the responsibility to explain assumptions, acknowledge limitations, and help interpret results responsibly. When both sides engage with this complexity honestly, we make better decisions.

A question for marketers: where do you find the biggest gaps between marketing performance measurement and the questions you need answered?

#MarketingMixModeling #MMM #MarketingAnalytics #MarketingMeasurement #MarketingROI #DataDrivenMarketing #MarketingStrategy
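One concrete way to triangulate across frameworks is to pool independent lift estimates by inverse-variance weighting. This is a minimal sketch, not any vendor's method; the methods, lift values, and standard errors below are illustrative assumptions.

```python
# Hypothetical illustration: pooling a channel's incremental-lift estimates
# from independent measurement methods via inverse-variance weighting.
def triangulate(estimates):
    """estimates: list of (lift, std_error) pairs from independent methods."""
    weights = [1.0 / (se ** 2) for _, se in estimates]
    total = sum(weights)
    # More precise estimates (smaller standard error) get more weight
    pooled_lift = sum(w * lift for w, (lift, _) in zip(weights, estimates)) / total
    pooled_se = (1.0 / total) ** 0.5
    return pooled_lift, pooled_se

# e.g. MMM says +4% lift (se 2%); a geo incrementality test says +2.5% (se 1%)
lift, se = triangulate([(0.04, 0.02), (0.025, 0.01)])
```

The pooled estimate lands closer to the tighter incrementality result, which is the point: triangulation is not averaging for its own sake but weighting each framework by how much it can actually be trusted.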
Tobias Konitzer, PhD
GrowthLoop • 3K followers
🎯 Causation > Correlation (10 and final): The Optimization Mirage

Everyone is excited about "autonomous marketing." LLMs writing copy. Agents orchestrating journeys. AI deciding what every customer sees. It feels like the future. Most of it is optimizing inside a hallucination. I call it The Optimization Mirage.

The first mistake: confusing description with decision. Churn scores. LTV models. Propensity rankings. These are descriptive models. They contain zero intelligence about what will happen if you intervene. In lifecycle marketing, their role must be narrow:
• As surrogate outcomes for experimentation when true LTV is delayed
• As context features inside a real decisioning engine
They are inputs. They cannot be policies. Using descriptive models as decision engines is like trying to fly a plane by reading yesterday's weather report.

The second mistake: letting LLMs "decide". The natural extension! An LLM can:
• Generate treatments
• Embed context
• Summarize history
It can tell you what usually happens together. But it cannot reason about why a treatment caused which outcome. Correlation does not translate to policy.

The third mistake: The Horizon Trap. Even the most state-of-the-art reinforcement learning systems must initialize somewhere. If you let an LLM initialize treatments based on correlational patterns in your warehouse, you fall into The Horizon Trap. It looks like intelligence because it reflects past patterns. But those patterns are not causal. They are status-quo artifacts. The decisioning will converge. But to what? A local maximum defined by correlational guesses. You can dynamically allocate traffic across five bad ideas and still lose money. The algorithm didn't fail. Your initialization horizon was wrong.

What real decisioning requires. If you want to avoid:
• Boomerang effects
• Slow convergence
• Local maxima traps
You need causal priors, and decisioning belongs in constrained, auditable, outcome-driven systems.
Have you seen LLM-driven “autonomous” systems produce real incremental lift — or just elegant automation?
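The description-vs-decision distinction can be made concrete with a small example: a Beta-Bernoulli Thompson sampler that allocates traffic from observed outcomes of its own interventions, rather than ranking users with a static propensity score. This is a generic sketch of one decisioning approach, not GrowthLoop's system; the arm count, priors, and conversion rates are made-up assumptions (informative priors here would stand in for the causal priors the post argues for).

```python
import random

# Sketch: a decisioning loop (act -> observe -> update), as opposed to a
# descriptive model that only scores users without ever intervening.
class ThompsonSampler:
    def __init__(self, n_arms, prior=(1.0, 1.0)):
        # prior = (alpha, beta); informative priors could encode causal knowledge
        self.alpha = [prior[0]] * n_arms
        self.beta = [prior[1]] * n_arms

    def choose(self):
        # Sample a plausible conversion rate per arm; play the best sample
        samples = [random.betavariate(a, b) for a, b in zip(self.alpha, self.beta)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, arm, converted):
        if converted:
            self.alpha[arm] += 1
        else:
            self.beta[arm] += 1

# Simulate two treatments: arm 0 truly converts at 2%, arm 1 at 10%
random.seed(0)
true_rates = [0.02, 0.10]
ts = ThompsonSampler(n_arms=2)
for _ in range(5000):
    arm = ts.choose()
    ts.update(arm, random.random() < true_rates[arm])

# Traffic each arm actually received (subtracting the uniform prior)
pulls = [ts.alpha[i] + ts.beta[i] - 2 for i in range(2)]
```

The loop converges on the better treatment because it learns from the outcomes of its own actions. Seed it with correlational guesses instead of sound priors and it still converges, just toward the local maximum the post calls the Horizon Trap.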
AnalyticsLiv
3K followers
CRO isn't one tool; it's a stack working together. Analytics with Google Analytics 4, Heap Analytics, and Mixpanel shows what users do. Behavior tools like Hotjar, Smartlook, and FullStory reveal why they do it. Experimentation platforms such as Optimizely, VWO, and AB Tasty test what works. Add landing pages, AI personalization, and revenue tools like HubSpot and Kameleoon, and you turn insights into conversions.

CRO Stack = Data + Behavior + Testing + Personalization → Revenue

#CRO #ConversionRateOptimization #GA4 #GrowthMarketing #ABTesting #DigitalMarketing #MarketingAnalytics
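The "testing" layer of that stack boils down to statistics like the two-proportion z-test below. This is a minimal sketch of the math experimentation platforms run under the hood, not any platform's actual implementation; the visitor and conversion counts are illustrative assumptions.

```python
import math

# Minimal two-proportion z-test for an A/B conversion experiment.
def ab_test_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se  # |z| > 1.96 is roughly significant at 95%

# e.g. control converts 200/10000, variant converts 260/10000
z = ab_test_z(conv_a=200, n_a=10000, conv_b=260, n_b=10000)
```

Here z comes out above 1.96, so a 2.0% → 2.6% lift on 10k visitors per arm would clear a conventional 95% significance bar.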
Palantir Technologies
643K followers
Most legacy optimization tools break down when real-world disruptions hit. Palantir adapts. At AIPCon 7, Palantir Head of Dynamic Scheduling & Optimization Molly Carmody shows how Palantir allows AI agents to simulate, customize, and automate the best response to any disruption — so your operation never misses a beat.
Rakesh Dhanamsetty
Quadrant Technologies • 18K followers
Are Your Data Leaders Thinking… or Just Connecting?

Here's an uncomfortable truth. Many data teams spend most of their time not analyzing, not predicting, not innovating, but connecting systems. Fixing flows. Maintaining pipelines.

That's not a talent problem. That's an architecture problem. When ingestion, governance, and analytics live in different tools, your smartest engineers end up stitching instead of strategizing. That ratio is broken.

Modern platforms like Microsoft Fabric shift the balance:
• Built-in ingestion
• Built-in governance
• Built-in analytics

Less system stitching. More business thinking. Because scaling insight shouldn't require scaling headcount.

📩 Write to us at datafabric@quadranttechnologies.com for a quick walkthrough or POC discussion.

And if you're heading to FabCon Atlanta (March 18–20), let's connect in person; see you there.

Microsoft | Quadrant Technologies | Ram Paluri, MBA | Vamshi Reddy | Bhaskar Gangipamula | James Kass | Prakash Nagarajan | Lavina DSilva | Mithun P N | Dr. Madhavi Gundavajyala | Siva Kanuru | Sivani Pamidi | Varsha Panguluri | Ajay kumar Erukulla

#Microsoft #DataStrategy #MicrosoftFabric #CIO #CDO #ModernData #FabCon
PixlData
662 followers
If your datasets are poorly labeled, you’re inviting a host of issues like noise, bias, and inconsistencies that can seriously hurt your model's accuracy and reliability when it’s out in the wild. That's why data labeling has evolved from being just a preliminary step into a vital part of engineering and operations in the AI lifecycle.
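One practical way to treat labeling as an engineering discipline is to measure inter-annotator agreement before training on the labels. Below is a minimal Cohen's kappa sketch with made-up annotations; real labeling ops would run this per batch and per annotator pair.

```python
from collections import Counter

# Cohen's kappa: agreement between two annotators, corrected for the
# agreement you'd expect by chance given each annotator's label mix.
def cohens_kappa(a, b):
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Illustrative labels from two annotators on the same 8 items
ann1 = ["cat", "cat", "dog", "dog", "cat", "dog", "cat", "dog"]
ann2 = ["cat", "dog", "dog", "dog", "cat", "dog", "cat", "cat"]
kappa = cohens_kappa(ann1, ann2)
```

Raw agreement here is 75%, but kappa is only 0.5 once chance agreement is removed: a concrete reminder that "the labels mostly match" can overstate how consistent a labeled dataset really is.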
Xtract.io
2K followers
Why XDAS's next-gen prompting matters for your AI projects

AI projects often underperform not because of the model, but because prompts aren't precise. Intelligent prompting turns generic AI into a project-specific problem solver, delivering faster results, higher accuracy, better ROI, and scalable solutions. Better prompts = better AI = better business outcomes.

Check out this video to learn more and experience smart prompting with XDAS: https://zurl.co/6RklN

#genai #xdas #promptengineering #AIforBusiness
Venkata Vara P
Goldman Sachs • 428 followers
⚡ Most data platforms fail not because of data, but because they don't connect to real operational workflows. That's where I come in.

With 9+ years in Data Engineering and deep expertise in Palantir Foundry, I design and build end-to-end operational systems, from data ingestion to user-facing workflows, connecting pipelines, ontology, and frontend into a unified decision-making layer.

🔹 Transform raw data into actionable operational workflows (not just dashboards)
🔹 Engineer scalable, high-volume pipelines using PySpark & Foundry Pipeline Builder
🔹 Build React-based UI layers (~30%) to drive real user interaction and adoption
🔹 Enable governed, low-latency data layers for regulatory, risk, and clinical operations

The goal is simple: make data usable in real time, where decisions actually happen.

🛠️ Palantir Foundry | React | Python | PySpark | SQL | AWS | GCP
📩 Open to new opportunities (C2C / C2H / 1099)

#Palantir #PalantirFoundry #React #DataEngineering #OperationalWorkflows #OpenToWork #BigData
Emily Krautsou (née Zong)
Wood Mackenzie • 2K followers
Orthogonal ETL pipelines triangulate to truth.

Most data quality frameworks are outdated. Coverage. Timeliness. Accuracy. Schema compliance. These metrics are artifacts of the 2010s data stack. In the age of agentic ETL, they are increasingly cheap. Our agents can scrape more sources, normalize schemas, and expand coverage at scale. But none of that guarantees truth.

The most reliable datasets emerge from orthogonal pipelines reconstructing the same dataset independently. This principle already exists in modern statistics and AI: ensembles, mixture-of-experts, multimodal models, and orthogonal estimators. The insight is the same. Single pipelines produce data. Orthogonal systems converge on truth.
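One simple mechanical form of that idea is field-level reconciliation: two independently built extractions of the same entity are compared, agreement is kept, and disagreement is flagged for review. This is a generic sketch, not Wood Mackenzie's pipeline; the field names and values are invented for illustration.

```python
# Reconcile two independent reconstructions of the same record.
def reconcile(rec_a, rec_b):
    agreed, conflicts = {}, {}
    for key in rec_a.keys() & rec_b.keys():
        if rec_a[key] == rec_b[key]:
            agreed[key] = rec_a[key]       # both pipelines converge: trust it
        else:
            conflicts[key] = (rec_a[key], rec_b[key])  # send to review
    return agreed, conflicts

# e.g. one pipeline parsed a regulatory filing, the other an API feed
pipeline_a = {"asset_id": "A-17", "capacity_mw": 120, "status": "active"}
pipeline_b = {"asset_id": "A-17", "capacity_mw": 125, "status": "active"}
agreed, conflicts = reconcile(pipeline_a, pipeline_b)
```

The value comes from the pipelines being genuinely orthogonal: if both scrape the same upstream source, agreement just replicates that source's errors rather than converging on truth.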
Capital Current
243 followers
Unlocking AI Potential: Palantir's Strategic Edge 🚀 The race in artificial intelligence is intensifying, and one core challenge organizations face is leveraging AI effectively to solve real-world problems. Palantir has positioned itself as a leader by integrating advanced AI capabilities into its data platforms, enabling companies to drive actionable insights at scale. The result? Businesses gain a competitive advantage by making faster, smarter decisions that directly impact growth. With AI rapidly transforming industries, how do you see its role evolving in your organization's strategy?
Neal Richter
3K followers
Nice job on this Aaron. I agree with it.

First) AdCP is not an execution layer. A good definition of an Agent is "a process that uses tools in a loop to accomplish an objective." Where's the feedback loop in an AdCP agent? (Hint: it ends up looking like an optimizer.)

Second) AdCP seems better framed as a set of callable skills from MCP, where the skills are based on proven standards. We don't need a branded fork of MCP.

Third) You hit this point: an auction is still the most efficient way to allocate resources that we understand. What matters is the quality of the inventory, and a consistent set of rules.
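For readers outside ad tech, the auction mechanism referenced in the third point is typically a sealed-bid second-price (Vickrey) design, where the winner pays the runner-up's bid. A minimal sketch, with invented bidder names and bid values:

```python
# Second-price auction: highest bid wins, winner pays the second-highest bid.
def second_price_auction(bids):
    """bids: dict of bidder -> bid amount. Returns (winner, clearing_price)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    # With a single bidder, the clearing price is their own bid
    clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing_price

winner, price = second_price_auction({"dsp_a": 2.50, "dsp_b": 1.75, "dsp_c": 3.10})
```

The design choice matters: because the winner's payment doesn't depend on their own bid, bidding true value is the dominant strategy, which is part of why auctions allocate inventory efficiently under consistent rules.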