Jay Parikh
San Francisco Bay Area
22K followers
500+ connections
Explore more posts
-
Rubal Sahni
Confluent • 17K followers
While the update below might look like just another release or feature announcement, I will try to elaborate on what it can do for business in the digital era. It's not a regular release; it's a game changer. Here are key areas impacting business where we are working with customers even before the launch of Tableflow for early access:
- Saving AI models from hallucinations
- Hyper-personalisation
- Real-time recommendation engines
- Real-time fraud prevention
- Quality improvement in manufacturing
- Energy optimization
- Proactive maintenance
- Connected cars, real-time action
- Cyber defense
I would really encourage practitioners and think tanks to share their comments expanding on the pointers above. I look forward to some insightful comments. #tableflow #flink #unitycatalog #Kafka #Confluent #AWS #deltalake
12
-
Christian Posta
solo.io • 13K followers
Have you heard of the llm-d project? It improves LLM performance and drives down the cost of inference serving on Kubernetes. It's built around a few key projects, including vLLM and kgateway. If you want a deep dive into how it works, with a detailed step-by-step "packet flow" through llm-d, take a look at this blog I wrote recently. 👉 https://lnkd.in/g2mbEh3m
44
3 Comments -
Jaideep Khanduja
CX Quest • 16K followers
To add to that, Confluent Cloud Network (CCN) routing simplifies private networking for Flink, reusing existing Kafka infrastructure. It helps teams quickly connect secure pipelines across cloud environments. Security is further strengthened with IP Filtering, giving teams greater control over access in hybrid environments and enabling them to confidently deploy AI at scale. With these enhancements, Confluent is not just enabling smarter AI; it's making it enterprise-ready. Organizations can now unlock:
- Seamless integration of historical and streaming data
- Secure, scalable infrastructure for AI agents
- Faster, context-aware decisions powered by trusted data
Agentic AI is here. With Confluent Cloud, it's ready for the enterprise. https://lnkd.in/gh-dA7qs #ConfluentCloud #ApacheFlink #SnapshotQueries #AgenticAI #RealTimeData #StreamProcessing #DataAnalytics #EnterpriseAI #BatchProcessing #DataStreaming #DigitalTransformation #Flink #AI #MachineLearning #DataSecurity #PrivateNetworking #IPFiltering
Follow our LinkedIn page for more CX expert opinions, Thought Leader interviews, CX leadership news, articles, and updates: https://lnkd.in/g9eeFFgh
1
-
Susanta Ghosh
JPMorganChase • 2K followers
Today let's talk about the Parallel Fan-Out / Concurrent Agentic design pattern, and what happens when parallel agents go rogue.

What's the Concurrent Orchestration Pattern? Imagine a scenario where multiple AI agents, each with its own lens or specialty, tackle the same task simultaneously. Instead of a single step-by-step chain, tasks fan out to different agents in parallel, and their outputs are then merged or aggregated for the final answer. It's the AI equivalent of a brainstorming session where everyone chips in together. This pattern thrives when you need diverse insights or speed; think ensemble reasoning or reaching a verdict faster.

This pattern is commonly used in agentic RAG. Here's how you can apply it in a Retrieval-Augmented Generation (RAG) system:
Step 1: Break the user's query into smaller sub-queries (e.g., "Define concept X," "List use cases," "Give examples") and map out their dependencies.
Step 2: Run those sub-queries in parallel; agents fetch context or process each one in isolation (e.g., document retrieval, summarization, external tool usage).
Step 3: Once all agents have results, aggregate the findings into a single, coherent response.
This technique lets you parallelize the independent parts, reducing latency while maintaining clarity; it is especially effective when sub-queries don't depend on each other.

Super important: consider avoiding this pattern and using sequential execution even if it is slower, as it may yield a better result, for the following reason. As Cognition (the company behind Devin) warns in its well-known blog post "Don't Build Multi-Agents," things can go sideways fast if agents don't share context or coordination is weak. When agents operate in isolation, making decisions based solely on their own view, the final result can be fragmented, contradictory, or just plain incoherent. Think of two agents building different puzzle pieces that don't fit.

The core issues:
1. Context fragmentation: each agent works in a silo, leading to mismatched assumptions.
2. Implicit decisions: agents' outputs reflect unspoken choices that may clash when merged.
3. Coordination complexity: without strong orchestration, integration becomes error-prone.

References:
1. Concurrent Execution Pattern: https://lnkd.in/gZybgm2s
2. LLM Compiler whitepaper: https://lnkd.in/gWwGJhbi
3. Cognition's blog post "Don't Build Multi-Agents," which shows that parallel execution can yield agent drift: https://lnkd.in/gkDKC4TP
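The three fan-out steps above can be sketched in a few lines of asyncio. This is a minimal illustration, not a production orchestrator: the sub-query split is hard-coded and `run_agent` is a hypothetical stand-in for a real retrieval or LLM call.

```python
import asyncio

async def run_agent(sub_query: str) -> str:
    # Hypothetical agent: a real system would call an LLM or retriever here.
    await asyncio.sleep(0)  # stands in for retrieval/LLM latency
    return f"answer to: {sub_query}"

async def concurrent_rag(query: str) -> str:
    # Step 1: break the query into independent sub-queries (hard-coded split).
    sub_queries = [
        f"Define {query}",
        f"List use cases of {query}",
        f"Give examples of {query}",
    ]
    # Step 2: fan out, running every sub-query concurrently.
    results = await asyncio.gather(*(run_agent(q) for q in sub_queries))
    # Step 3: aggregate the partial answers into one response.
    return "\n".join(results)

print(asyncio.run(concurrent_rag("concept X")))
```

Because `asyncio.gather` preserves input order, the aggregation step is deterministic here; in a real multi-agent setup this is exactly where the context-fragmentation issues described above would surface, since each agent answers without seeing the others' output.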
14
3 Comments -
John F. Heerdink, Jr.
8K followers
JFrog’s Earnings Leap: Hopping Past Wall Street’s Cloud Rev Forecasts (and Competitors) – ( $FROG $SPY ) https://lnkd.in/gE_hMhKQ JFrog showcased impressive growth in Q3 2025, combining software innovation, cloud momentum, and AI advancements #JFrog #EarningsLeap #FROG #CloudGrowth #DevOps #DevSecOps #AI #SoftwareSecurity #RevenueSurge #MarketLeaders #TechnologyStocks #InvestorConfidence #FinancialResults #QuarterlyEarnings #GrowthStock #SoftwareSupplyChain #Innovation #WallStreet
1
-
Mitch Tompkins
3K followers
LLM serving doesn’t have to be slow or expensive. In this new Domino Blueprints guide, Principal Solution Architect Sameer Wadkar shares how to deploy LLMs as model endpoints—without embedding huge binaries in your model image. Learn how to fine-tune, register, and serve LLMs efficiently using Domino Model Registry and shared datasets. 📘 Stay fast, lean, and compliant: https://gag.gl/Bj70bg #LLMOps #MLOps #ModelDeployment #AI
1