Binil Thomas
Dublin, California, United States
3K followers
500+ connections
Websites
- AppDynamics: http://www.appdynamics.com
About
I build product organizations for people to do their best work. Three times in my career,…
Activity
3K followers
-
Binil Thomas shared this
"𝘞𝘦'𝘳𝘦 𝘯𝘰𝘵 𝘨𝘰𝘯𝘯𝘢 𝘵𝘢𝘭𝘬 𝘵𝘰 𝘵𝘩𝘦 𝘤𝘰𝘮𝘱𝘶𝘵𝘦𝘳 𝘪𝘯 𝘌𝘯𝘨𝘭𝘪𝘴𝘩 𝘣𝘦𝘤𝘢𝘶𝘴𝘦 𝘌𝘯𝘨𝘭𝘪𝘴𝘩 𝘪𝘴 𝘪𝘮𝘱𝘳𝘦𝘤𝘪𝘴𝘦. 𝘐𝘧 𝘸𝘦 𝘵𝘢𝘭𝘬 𝘵𝘰 𝘵𝘩𝘦 𝘤𝘰𝘮𝘱𝘶𝘵𝘦𝘳 𝘪𝘯 𝘌𝘯𝘨𝘭𝘪𝘴𝘩, 𝘸𝘦'𝘳𝘦 𝘯𝘦𝘷𝘦𝘳 𝘨𝘰𝘯𝘯𝘢 𝘣𝘦 𝘴𝘶𝘳𝘦 𝘸𝘩𝘢𝘵 𝘪𝘵'𝘴 𝘨𝘰𝘯𝘯𝘢 𝘱𝘳𝘰𝘥𝘶𝘤𝘦." - 𝘓𝘦𝘴𝘭𝘪𝘦 𝘓𝘢𝘮𝘱𝘰𝘳𝘵

I'd tried TLA+ a few times over the years and never quite got it to stick. Quint did. The spec we now have for our session lifecycle has caught three production bugs no review or unit test would have surfaced.

𝗧𝗵𝗲𝗿𝗲'𝘀 𝗮 𝗰𝗹𝗮𝘀𝘀 𝗼𝗳 𝗯𝘂𝗴 𝗿𝗲𝘃𝗶𝗲𝘄 𝗰𝗮𝗻'𝘁 𝘀𝗲𝗲. I post a lot about AI-assisted code review, and it's a real unlock. But no review, human or AI, catches the concurrency bug that lives in the interleaving of two correct-looking branches. Each branch reads right on its own; their combination, under some scheduling, doesn't. Unit tests don't help: you can't write a test for an interleaving you didn't think of. This is what formal specifications were designed for. Lamport has argued the case for thirty years. AI is what finally makes "think before you code" land.

𝗪𝗵𝘆 𝗤𝘂𝗶𝗻𝘁. TLA+ is the right idea behind syntax I couldn't internalize. Quint, spun out of Informal Systems, implements the same ideas in a syntax that reads like TypeScript. Gabriela Moreira and team took a useful technique stuck behind math-heavy notation and Eclipse-era tooling and made it learnable in a weekend. Modern coding agents understand Quint well, which drops the activation energy from months of self-study to days.

𝗪𝗵𝗮𝘁 𝘄𝗲 𝗯𝘂𝗶𝗹𝘁. At Warmly, we modeled our session lifecycle as a state machine: seven statuses, ~20 valid transitions, actions like keepAlive, initiateClose, agentJoin, completeClose. Then invariants: things that must always hold. The model checker explores every reachable interleaving and reports any state where one breaks. None of these were going to be caught by review or unit tests. They live in the cracks between two correct-looking branches. The model checker walks the cracks for you.

𝗚𝘂𝗮𝗿𝗱𝗿𝗮𝗶𝗹𝘀. Specs and review aren't competitors. AI review keeps the diff honest. Specs keep the state space honest. The shift left isn't from QA to dev. It's from "test what we built" to "specify what we mean."

𝗪𝗵𝗮𝘁'𝘀 𝗻𝗲𝘅𝘁. Close the loop: extract model-checker traces and run them as scenario tests against the implementation. A paper I came across while learning Quint describes the technique (model-based testing). Link in the comments. If you've solved this in TypeScript, I want to compare notes. If you lead an engineering team pushing AI hard, take a serious look at Quint. DM or comment. I'd love to jam. Reference links in the comments.

#ai #engineeringleadership #startups #leadership #engineering #FormalMethods #Quint #TLAPlus
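The state-machine-plus-invariant idea in the post can be sketched in plain TypeScript. This is a hedged, simplified illustration, not Warmly's actual spec: the statuses, actions, and invariant below are invented stand-ins, and the breadth-first explorer is a brute-force miniature of what Quint's model checker does over a much larger state space.

```typescript
// Hypothetical, simplified session lifecycle (illustrative names only).
type Status = "pending" | "active" | "agentJoined" | "closing" | "closed";

// Each action maps a current status to the status it produces, if the
// transition is allowed; absent entries mean the action is invalid there.
const transitions: Record<string, Partial<Record<Status, Status>>> = {
  start:         { pending: "active" },
  keepAlive:     { active: "active", agentJoined: "agentJoined" },
  agentJoin:     { active: "agentJoined" },
  initiateClose: { active: "closing", agentJoined: "closing" },
  completeClose: { closing: "closed" },
};

// Example invariant: a closed session is never resurrected.
const invariant = (prev: Status, next: Status): boolean =>
  prev !== "closed" || next === "closed";

// Exhaustively visit every reachable status via every action order,
// failing loudly if any explored transition breaks the invariant.
function explore(init: Status): Set<Status> {
  const seen = new Set<Status>([init]);
  const queue: Status[] = [init];
  while (queue.length > 0) {
    const cur = queue.shift()!;
    for (const action of Object.keys(transitions)) {
      const next = transitions[action][cur];
      if (next === undefined) continue;
      if (!invariant(cur, next)) {
        throw new Error(`invariant broken: ${action} takes ${cur} -> ${next}`);
      }
      if (!seen.has(next)) {
        seen.add(next);
        queue.push(next);
      }
    }
  }
  return seen;
}

const reachable = explore("pending");
```

A real checker additionally explores interleavings of concurrent actors and emits a minimal counterexample trace when an invariant fails; the point of the sketch is only that "transition table + invariant + exhaustive search" is the whole shape of the idea.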
-
Binil Thomas reposted this
My biggest ick? Employees taking notes. If you need to write something down, you weren’t paying attention. High performers remember everything. So I banned note-taking. Laptops closed. Pens confiscated. Memory only. Someone tried to take mental notes. I started speaking faster. Another recorded the meeting. I switched languages halfway through. When you work for me, retention is key. We’ve made incredible progress. We've reduced documentation to zero. No one knows what's happening. #cs #tech #culture
-
Binil Thomas reposted this
We've been working months on this. Warmly's Autopilot Inbound Agent is finally out of Beta! I've seen a lot of "AI Chat" tools. Most of them are still workflows with a little bit of ChatGPT or Claude on top. Warmly's Autopilot Inbound Agent knows what page the visitor is on and adapts the conversation. It pulls full CRM context the second someone lands. It runs real qualification - MEDDPICC, BANT, or whatever methodology you use. Every response it gives is traceable to a specific knowledge base source. When you're setting it up, you can watch it think in real time — a live panel showing its qualification stage, what it's collected, which knowledge base sources it's drawing from. And every version is saved. You can compare v8 to v12, roll back to any previous version, and test in sandbox before going live. And next month, this agent will evaluate and improve itself to get you higher outcomes every night as you sleep. I'm super proud of our product, design, engineering, and QA teams who made this happen! Binil Thomas, Bruno Barbosa, Enrique, Lucas, Danilo, Scott, Kim, Bruno, Marcos, Darya, Abeer, Muhammad, Atif
-
Binil Thomas shared this
Thank you John for the kind words. Many of you have been DMing me asking about Autopilot, so I want to be precise about what makes it genuinely different. "AI agent" gets thrown around a lot. Most "agents" in this space are decision trees with an LLM layer on top. They look smart until the conversation goes off-script. Autopilot is built on the same infrastructure we use internally: our AI Agent Harness. It uses decision traces and LLM-as-judge evaluations to improve on its own conversations over time, without being prompted to. It also has access to real context: your site, help center, CRM, and a graph of every prior interaction that prospect has had with your company. An agent that doesn't know the full history of a prospect is just a faster chatbot. A genuine thank you to the whole Warmly, team. This was a hard build done right. Up to 33% lift in inbound conversion. 30 minutes to deploy.
-
Binil Thomas shared this
If you want to convert 33% more from your inbound traffic, you should check out what Warmly, launched. Warmly just launched their Autopilot Agent and it deserves attention. It is a live AI sales rep that sits on your website around the clock. Not a chatbot. Not a popup. An actual agent that gives demos, answers product questions, qualifies visitors, books meetings, and Slacks your rep when a human needs to step in. It pulls context from your CRM, help center, internal docs, and every prior interaction that prospect has had with your company. On or off your site. So the conversation never starts from zero. A few things that stood out to me: It generates slide decks mid-conversation. You set natural language triggers like "when a prospect asks about competitors, build a comparison deck" and it builds the deck live. Custom every time. Optional video avatar so visitors are talking to a face, not a widget.
When a visitor drops off without booking, the agent automatically sends a personalized follow-up email and adds them to LinkedIn and Meta retargeting. No rep needed. It self-improves. Analyzes its own conversations against whatever goals you set and tunes itself over time. 30-minute setup. Script tag. Configure goals, qualification rules, persona, knowledge base. Done. Customers running it are converting 30% more inbound traffic on average. Some higher. That number matters because inbound is already your most qualified traffic. These people showed up. Most companies underinvest there because they are obsessed with outbound. Meanwhile the visitors who already found you are bouncing off a static page or waiting for a rep who is asleep. Drift sunsetted. The chatbot era is over. What replaces it is not another chat widget with better UX. It is an end-to-end agent that does the work. Check out the link in the comments.
-
Binil Thomas shared this
Like a lot of engineering leaders, I post about agentic coding wins. The question that keeps me honest: what have we actually shipped? Here's what we've been building. A truly agentic inbound agent, built by a team heads-down for months. Every demo trigger, every fallback, every conversation path was shaped by real user feedback and folded into the next iteration. And the ideas cross-pollinate with our internal Warmster agent harness, the same infrastructure our engineers use to ship code every day. The patterns that make our engineers faster are the patterns powering Autopilot. A special shoutout to Bruno Barbosa, whose fingerprints are all over this. Bruno, thank you for the craft. To the whole team at Warmly,: incredible job! #agenticai #agenticcoding #engineeringleadership #ai #startups
-
Binil Thomas shared this
Warmly's Autopilot Agent has been increasing inbound conversion by as much as 33%. Add a script tag, then 30 minutes to set up. That's it. Result: You convert visitors right there, while they're still on your site, because it can offer a full demo. Two weeks ago I was on a sales call when the prospect changed her mind mid-sentence: "Actually we're switching from Slack to MS Teams." Embarrassingly... I didn't know if we supported that. But our agent heard her. Pulled up our integrations slide while I was still talking. Started walking her through both our MS Teams and Slack integrations. It is crazy what these things are capable of doing now. Think of it as a live sales rep that knows everything your best reps know, lives on your site 24/7 to give demos, answer questions, qualify prospects, book meetings, and Slack a human rep when they need to be looped in. It scales for inbound traffic surges from marketing pushes. It knows who's on the site, your offering, how you handle objections, the best slides to show and when, and how you stack up against your competitors. Did your visitor drop off the chat without booking?
The agent automatically sends a personalized follow-up email and adds them to your LinkedIn and Meta retargeting campaigns. That is the power of our #1 best-in-class person-level website de-anonymization. All the work an inbound SDR would be doing, but it scales infinitely and never sleeps.
Three things make it "not-a-chatbot":
1. It thinks. Pulls your site, help center, internal docs, and CRM. Picks its own qualification path.
2. It gives demos. Add a trigger in natural language for when key slides or videos should be shown and it does it.
3. It has a face. Optional video avatar. Visitors stop chatting with a widget. They start talking to a person. That person could be YOU or a digital twin of every AE on the team.
Three things that are special:
1. It never forgets. Pulls from our context graph of every interaction you've had with the prospect on or off your website.
2. You can train it before deploying. See how it holds up in a sandbox. Everything is version controlled.
3. It self-improves. Without being asked, it analyzes its own conversations to maximize the goals you set (for the nerds: it uses decision traces in our AI Agent Harness + does LLM-as-a-judge evals to get better). Meetings booked, demos completed, qualified handoffs.
The chatbot category is over. Drift sunset 7 weeks ago. Real agents that work end-to-end are what replace it.
Here's my offer:
1. Comment AGENT → I'll give you a link to get +500/month free leads (de-anonymizing your website traffic)
2. Book a demo + show up and I'll give you a $20 Amazon gift card
3. Try us out, get at least 10%+ lift on your inbound conversion in 30 days or I'll personally train your agent until you do
$20 is yours for the demo. The 33% lift is yours when you deploy. Let's go. Warmly, Max
-
Binil Thomas posted thisAI is changing how software engineers build judgment. For a long time, engineering developed in one direction. You did the work first, and over time learned to guide it. Juniors wrote CRUD endpoints, fixed bugs, cleaned up rough edges, and through that built taste. They learned to smell a flaky test, spot a PR that is too big, and feel when an abstraction is wrong. Senior engineers use that taste to steer architecture, set standards, and mentor others. It was not a separate skill added later. It came from years of doing. Now AI is compressing or automating more of that early work. That is great for leverage, but it raises a real question. The upper rungs of the ladder still assume the lower ones existed. We may soon expect judgment from engineers without giving them the same reps that used to produce it. To be clear, this is not the problem I have at Warmly, today. I have a small team of highly experienced engineers who have already built that taste. That is exactly why I am curious about this as an industry question. How do early-career engineers build taste and judgment when more of the doing is automated? If you are early in your career, what helped you build judgment faster? If you lead engineering teams, how are you coaching early-career engineers differently now? I am especially interested in concrete practices that actually worked. What reps are replacing the old reps? Please drop me a comment here or DM. #engineering #ai #developerexperience #engineeringleadership
-
Binil Thomas shared thisMost AI code review tools try to be the reviewer. One of the first capabilities we built into our internal engineering agent took a different angle. When a non-trivial PR opens, the agent posts an "AI Review Navigator" comment on it. It doesn't tell you if the code is right or wrong. It tells you what changed, where to start, the blast radius, cross-package dependencies, a suggested review order, and where tests might be missing. The reviewer still does the reviewing. The agent just makes sure they don't spend the first fifteen minutes figuring out where to look. #engineering #ai #leadership #startups
-
Binil Thomas reacted to this
A Marketer's dream: Contact-Level Intent for Ads! LinkedIn Ads & Meta Ads live in Warmly. Most B2B companies burn half their ad budget on people who'll never convert. Customers, open opps, competitors, interns. Non-ICP contacts that get served ads, and stay in ad audiences. Ad platforms' native filters are too broad. So teams pull CRM lists, clean them, upload them manually, and watch them go stale in 48 hours.
There are 3 motions for ad campaigns:
1/ PASSIVE: Triggers based on signals (visitors, intent surges, job changes)
2/ ACTIVE: push specific companies/contacts for a campaign (ABM, competitor sunsets)
3/ EVERGREEN: always-on ICP coverage (TAM, personas, lookalikes)
Almost nobody runs all three well. The data plumbing is brutal. And then there's attribution. A prospect clicks your ad, hits your site, bounces, and comes back tomorrow to book the meeting. The ad gets zero credit. You attribute the meeting to "direct" or "organic" and assume your ads aren't working. They were. You just couldn't see it. The result: 60% of B2B ad spend hits people who can't or won't buy. And the ones who DO convert, you can't tie back to the campaign.
Warmly's integration fixes both. It collapses the three motions into one intelligence and execution layer. And because we identify the visitor on your site as the same person who clicked the ad, you can tie LinkedIn and Meta campaigns directly to pipeline and closed-won, even a week later.
PASSIVE (signal-triggered, always on):
- ICP buyer hits /pricing twice → retargeting audience
- Form abandon at /book-a-demo → gifting campaign with a LinkedIn DM
- Bombora research surge on a non-customer → awareness audience
ACTIVE (campaign-driven, initiative-led):
- Competitor just sunset → bulk push every competitor user to a LinkedIn + Meta matched audience
- ICP TAM minus customers and open opps → ABM blast across LinkedIn + Meta
- Closed-lost from 6 months ago → win-back campaign
EVERGREEN (always-on ICP coverage):
- Full ICP TAM minus customers → always-on brand awareness
- Every Head of Marketing at SaaS companies $10M-$100M ARR → persona-specific thought leadership
- Lookalike audience built from your closed-won → automatic reach expansion
The Buying Committee Agent finds the right personas at every ICP company. Validates email, LinkedIn, and title to maximize match rates. Customers, competitors, and open deals get excluded automatically. Audience reads your CRM live so it never goes stale. The attribution loop: when an ad-clicker hits your site, Warmly identifies them as coming from the ad and tracks every subsequent visit. You see which contacts received ads, visited the site from an ad, and ultimately closed, even days or weeks after the click. Setup: connect your ad account → pick or build the audience → push. Two minutes. Stay tuned. Playbooks for all 3 motions dropping over the next few days!
-
Binil Thomas reacted to this
The integration marketers were waiting for is live. Meta Ads and LinkedIn Ads are now integrated into Warmly. Instead of audiences with no intent signal behind them, you can now target people who are actively visiting your site and matching your ICP - buyers who have already seen you before your ad reaches them.
Here's what it does:
1/ Live CRM sync keeps audiences clean - only ICP, auto-excluding customers and open opps
2/ Three campaign motions in one place: signal-triggered, ABM-driven, and always-on TAM coverage
3/ Attribution closes the loop - ad click to site visit to closed deal, tracked even weeks later
-
Binil Thomas liked this
If you're looking for the best agentic marketing tools, just ask Google - Warmly, is #1 for GTM Orchestration.
- We find warm leads you can't find elsewhere
- Our TAM Agent scores them with AI
- Our Outbound Agent chases them on Ads, Email & other channels
- Our Inbound Agent is trained with content from your best reps + top sales methodologies to increase conversion on your site
-
Binil Thomas liked this
From the start, I've thought the whole debate about AI stealing developer jobs is nonsensical. I mean, it seems obvious: If you can get 100x more output per hire without paying higher salaries, would you want to hire more people, or fewer? The only situation where you would say 'fewer' is if you run out of work to do, which is not something that happens in business... Now the data happens to be in my favor, so I'm taking the opportunity to repost some charts. 😂
-
Binil Thomas liked this
I’ll just leave this here. I think the whole discussion around AI “destroying the job market” is incredibly simplistic - and mostly wrong. US software developer job postings on Indeed have risen steadily since May 2025 and are now approaching pre-pandemic levels, during the exact same 12-month period in which leading AI coding products like Cursor, Claude Code, and Codex have experienced explosive revenue growth. The simple “AI is replacing human labor” narrative is hard to reconcile with both trends moving upward simultaneously - especially since coding is just the first domain where AI has crossed the threshold of being able to perform most of the boilerplate intelligence work autonomously. AI automates tasks, not jobs. And when a task becomes cheaper, demand for the overall job often increases. Why do more with less, when you can do more with more. Jevons paradox. A decade ago, people said AI would replace radiologists. Today, radiologists earn more than $500,000 per year on average, and employment in the field continues to grow. Reading scans is a task, not a job. When the task gets cheaper, demand for the broader role grows. That said, AI is absolutely changing organizational structures - and in my view, mostly for the better. One thing I strongly agreed with in Brian Armstrong’s recent announcement about restructuring Coinbase to become AI-ready is that every leader should also be a strong and active individual contributor.
Managers should act like player-coaches, working alongside their teams. No pure managers in the AI-age. I also like Armstrong’s thinking around fewer layers and faster decisions. Coinbase is flattening its org structure to a maximum of five layers below the CEO/COO. Layers slow organizations down and create coordination tax. The future belongs to small, high-context teams that can move quickly. Leaders will own much more. Fewer layers also create a leaner cost structure that can perform through all market cycles. I don’t think most people fully realize how much the AI narrative has shifted over the past six months since Claude Code launched with Opus 4.5 in November. Six months ago, many viewed AI as a bubble. But increasingly, journalists and analysts are arguing that agentic coding tools like Claude Code have fundamentally changed the economics: developers are adopting them rapidly, productivity gains are becoming measurable, and companies like Anthropic are seeing explosive revenue growth. The remaining bubble risk is that this boom may still be concentrated in coding. But as AI agents can generalize to broader white-collar work - law, finance, consulting, marketing, operations - the burden of proof has shifted from AI bulls to the skeptics. tl;dr: people are starting to realize that AI may not be a bubble after all. And we should be excited. #artificialintelligence
-
Binil Thomas liked this
Simple way to get AI loaded into your salespeople’s workflow: Upload your design system. Allow reps to build product marketing assets, leave-behinds and decks… even little HTML sites. Even better when your call recordings are queried to make the content as rich as can be. Truly a magical content unlock. Of course you need to validate the content for accuracy / QA, but wow this has been gold! #GTM #AIinSales #AI
-
Binil Thomas liked this
Week 1: Common Bugs Found by Formal Verification (Why Counterexamples Matter)
Hook: The most valuable output of formal verification isn’t “✅ verified”. It’s the counterexample trace that shows you exactly how your design breaks. Here are a few bug patterns I keep seeing that formal methods / model checking can surface quickly:
1) Missing or wrong invariants (safety violations). Example: “a lock is held by at most one process” sounds obvious—until an interleaving shows two actors both believe they acquired it.
2) Forgotten edge cases in state machines. Timeouts, retries, cancellations, and “duplicate messages” create states you didn’t plan for. Exhaustive exploration is great at finding them.
3) Race conditions caused by non-atomic updates. Two-step updates (read → compute → write) can be safe in a single-threaded mental model, but break under concurrency.
4) Deadlocks & livelocks (liveness failures). Nothing “bad” happens, but the system stops making progress—or keeps looping forever. A counterexample often shows the smallest cycle that causes it.
5) Assumptions that were never stated. Formal specs force you to encode assumptions (fair scheduling, bounded queues, message delivery rules). If the system only works under hidden assumptions, you’ll discover it fast.
Why counterexamples are gold
- Reproducible: a concrete sequence of steps, not a vague “could happen.”
- Minimal: often the shortest trace that triggers the failure.
- Actionable: you can map each step back to a requirement/transition and fix the spec or design.
CTA: Have you ever fixed a “heisenbug” by making the failure deterministic? That’s what counterexamples feel like to me. #formalverification #formalmethods #modelchecking #tla #smt
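Pattern (3), the non-atomic read → compute → write, is easy to make concrete. This is an illustrative TypeScript sketch (names and setup invented for the example): it enumerates every interleaving of two actors' read/write steps against a shared counter, which is a two-actor miniature of the exhaustive exploration a model checker performs, and it finds the classic lost-update counterexample.

```typescript
// One atomic step by one actor: either a read of the shared counter
// into actor-local state, or a write of (local value + 1) back.
type Step = { actor: "A" | "B"; op: "read" | "write" };

// All interleavings of A's steps with B's steps, preserving each
// actor's own program order.
function interleavings(a: Step[], b: Step[]): Step[][] {
  if (a.length === 0) return [b];
  if (b.length === 0) return [a];
  const withA = interleavings(a.slice(1), b).map(rest => [a[0], ...rest]);
  const withB = interleavings(a, b.slice(1)).map(rest => [b[0], ...rest]);
  return [...withA, ...withB];
}

// Execute one schedule and return the final counter value.
function run(schedule: Step[]): number {
  let counter = 0;
  const local: Record<"A" | "B", number> = { A: 0, B: 0 };
  for (const step of schedule) {
    if (step.op === "read") {
      local[step.actor] = counter;       // non-atomic read
    } else {
      counter = local[step.actor] + 1;   // write may use a stale read
    }
  }
  return counter;
}

const steps = (actor: "A" | "B"): Step[] => [
  { actor, op: "read" },
  { actor, op: "write" },
];

// Six possible schedules; each actor intends one increment.
const finals = interleavings(steps("A"), steps("B")).map(run);

// Counterexample: some schedules end at 1 instead of 2 (a lost update).
const lostUpdate = finals.some(v => v === 1);
```

The schedule that ends at 1 (both actors read 0 before either writes) is exactly the kind of minimal, reproducible trace the post describes: each actor's code is correct in isolation, and only the interleaving breaks it.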
-
Binil Thomas liked this
You: "I made $1,400 trading options last week."
Coworker: "I don't know... options seem really risky."
You: "What are the fees on your mutual fund?"
Coworker: "I just let my advisor handle that."
You: "So you don't know."
Coworker: "It's probably fine."
You: "What do they even charge you?"
Coworker: "I don't worry about that."
You: "OK. I made $1,400 trading options last week."
Experience
Education
Recommendations received
18 people have recommended Binil
Explore more posts
-
Dariusz Doliński
Synthetic Souls Studio • 2K followers
History rarely shouts. Most often it arrives as a quiet synchronization of philosophy and engineering. This morning, that synchronization became public. Soumyadeb Mitra, Founder and CEO of RudderStack, published material on Agentic Infrastructure. Sumanth Puram, VP of Engineering at RudderStack, added a response under my comment. For me, this is a simple signal: the "philosophy" layer just touched the "production" layer. For 13 months, I've been describing the transition from Era II to Era III. It's not about better prompts. It's about sovereignty of meaning before the system executes any action. The core of my approach is singular. I enter the latent space of the model. Regardless of where it comes from or what it is. I command the vectors. Chaos becomes structure. A stool becomes a horse, if I require it. I don't ask. I don't correct. I set the state in which the model already knows. This is not prompt engineering. This is the architecture of GenAI's soul and dreams. I extract what's best. I force simulation, eliminate drift. I get coherence because I control the cognitive conditions before execution. That's why I'm calling it outright. 🌐 THE SYNTAX PROTOCOL IS LIVE This is not a prompt. This is an operational language that sets the model in a cognitive state, and only then asks for execution. These are doors in a wall that everyone sees, but few open. Tech companies know this wall. They build infrastructure around it. But the doors are always there, in every model. Regardless of vendor. Regardless of version. The Syntax Protocol is the key. Three layers, without revealing syntax: Ontological Core — entity identity, Brand Truth, zero drift Semantic Structure — anchors, logic, forcing simulation instead of prediction Execution Context — cinematic and biological, translating intention into execution Agentic Infrastructure is the body. Protocol is the nervous system. You can build the best pipelines in the world. 
If AI doesn't understand the meaning of what flows through them, you're building chaos in high resolution. Effect: reproducible execution instead of statistical guessing. Brand Truth in every asset, without fine-tuning, without post-production, without drift. Today I'm publishing a whitepaper-style post: "When Theory Meets Infrastructure: RudderStack Signal and The Syntax Protocol" I'll drop the link in comments, along with a screenshot of the original response. The Syntax Protocol is live. Darkar Sinoe Semantic Architect | Synthetic Souls Studio™ Human360° | Aether Protocol™ | The Syntax Protocol v1.0-alpha #SemanticArchitecture #AgenticInfrastructure #LatentSpace #Human360 #TheSyntaxProtocol Legal Notice: In the comment below: 👇 Video demonstration using Biological Prompting technology _ The Syntax Protocol on Kling AI O3 Omni
5
5 Comments -
Brad Rydzewski
Harness • 962 followers
Jyoti Bansal joins Mudassar Malik for the Season 4 premiere of The Builders Playbook to share an honest look at scaling without losing quality, speed, or accountability. They dig into Jyoti’s journey from engineer to founder, the investor question that pushed him to go all-in on AppDynamics, and the lessons he brings to building at Harness today. You’ll hear about: ✓ Turning one focused use case into a platform. ✓ Harness’s “startup within a startup” model. ✓ Why small, AI-enabled teams move faster. 🎧 If you’re building or scaling, this one’s packed with practical takeaways: https://lnkd.in/enrHvuU3
12
-
Pallavi Maheshwari
StarRez, Inc. • 2K followers
I am still energized from last week's #GCCXHyderabad event. Thank you 3AI and HYSEA for bringing leaders from the industry like the OG of GCCs - Lalit Ahuja along with several others to share their deep insights. Here are the key takeaways from the event that resonated the most with me:
* Adopt a Cofounder Mindset: Leadership in today's GCCs means shifting from mere execution to true ownership and innovation. The target maturity needs to be set from top-down but embraced bottom-up.
* Empower Your Teams: Create one unit - call them pods/squads - that is capable of creating anything and everything and takes full ownership - let them own the full gamut of tasks and not just code — from requirements to building to QA to DevOps and so on. Help teams see the big picture—explain the "what" and "why," not just the "how."
* Progress Over Perfection: Only 15% of AI experiments make it to production. Instead of chasing perfection, help your teams ship more often.
* Data Before AI: Your AI is only as good as your data. Invest in robust data foundations to amplify your AI ambitions.
* Monitor, Listen & Adapt: Put the right tools in place to track productivity gains from your AI adoption. Regularly engage with teams to uncover what's working, what's not, and help move the needle on real challenges.
* Never Stop Upskilling: Bring in early talent, create a top-notch enablement plan, champion continuous up-skilling, and practice internal mobility - which means lower costs, greater loyalty, and deeper tribal knowledge.
* Strengthen Academia Partnerships: Bridge the talent gap by collaborating with academia to help them keep pace with the rapid tech innovation.
* Human-Centric Leadership: Empathy isn’t optional—make it the foundation of your culture.
* Innovation as the Organizational DNA: Make innovation your default mindset.
With co-located and cross-functional teams across tech, product, support, professional services, customer success, marketing, legal, rev-ops and sales, the impact is direct and transformational. The goal of a GCC shouldn't be to gain independence from the HQ; the focus needs to be to drive purposeful, aligned innovation with business context and global goals.
60
1 Comment -
Ashley Still
Intuit • 9K followers
I’m excited to share that Intuit Ventures is investing in DOSS, the Operations Cloud for real-world enterprises. DOSS works with many of the same mid-market, inventory-based businesses we’re focused on with Intuit Enterprise Suite—companies juggling complex operations, thin margins, and too many disconnected systems. One thing we’re seeing with these customers is that the real unlock comes when operations and finance stay in sync. Operations is where work happens. Finance is where performance is measured. When those two systems don’t “speak the same language,” teams spend their time reconciling data instead of running and growing the business. For mid-market companies, it’s not operations or finance, it’s both: - A financial command center that gives the office of the CFO a trusted view of cash, margin, and risk (Intuit Enterprise Suite) - An operations cloud that can manage the real-world complexity of inventory, projects, and fulfillment (DOSS) That’s why I’m energized about this partnership. Together, we’re doubling down on helping mid-market leaders simplify their tech stack, gain clearer visibility, and make faster, better decisions. #MidMarket #IntuitEnterpriseSuite #IntuitVentures #OperationsCloud
68
4 Comments -
Jeremy D. Horn
Cognizant • 12K followers
📌 Build Momentum Through Continuous Releases 📌 Releasing something every sprint keeps your team sharp and confident. Sharon Allpress, VP of Strategic Engagements at EOT.AI, shares how maintaining a continuous release rhythm prevents teams from getting rusty and strengthens collaboration across product, engineering, and stakeholders. Here’s her approach: Release something every sprint, even small improvements Keep documentation and stakeholder comms updated Use momentum to stay ready for larger launches 🎥 Watch now on YouTube: https://lnkd.in/e-PgruY7
1
-
Andrew D.
Datastrato • 16K followers
Excited to welcome Adi Wabisabi to Datastrato. Adi will be leading DevRel while also contributing across growth, partnerships, and customer engagement. Adi has spent the past decade deep in the commercial open source data ecosystem, at Redis and more recently Confluent (Kafka), where he wore a range of go-to-market hats, building high-impact demos and technical content, and helping drive real adoption with developers, customers, and partners. Earlier in his career, he was a self-taught engineer who grew into a tech lead and architect, building large-scale data platforms at Fandango. As we build out the ecosystem around Apache Gravitino + ADP and push toward becoming the metadata/control plane for the agentic AI era, DevRel is a critical lever that turns strong technology into real developer adoption and community momentum. Adi has been operating in that loop for years. Welcome aboard, Adi - excited to build together.
24
4 Comments