Diego Saenz
Dallas, Texas, United States
9K followers
500+ connections
About
Business and technology executive focused on growth and innovation at scale. Senior…
Articles by Diego
-
Powerful and easy-to-use Python libraries for deep learning
Keras is one of the most powerful and easy-to-use Python libraries for developing and evaluating deep learning models;…
-
Why the top entrepreneurs are seeking corporate venture money (Feb 17, 2017)
Entrepreneurs are getting funded by large corporate incumbents that recognize that engaging with the startup community…
-
The Real Story Behind Apple's Famous '1984' Super Bowl Ad (Feb 13, 2017)
Still one of the best commercials and still remembered after all these years! https://www.youtube.
Activity
9K followers
-
Diego Saenz reposted this
Anthropic dropped a 33-page guide on Claude Skills. And this one is worth paying attention to. ⬇️

First, what is a Claude Skill? A Skill is a folder that teaches Claude a workflow. You set it up once, then it’s reusable across sessions. At its core, it’s a set of instructions that teaches Claude how to do a specific task your way. Think of it like handing a new employee your playbook on day one, except they never forget it. Everything is written down in a markdown file (SKILL.md). This keeps your workflow maintainable over time, reduces prompt bloat and inconsistency, and makes iteration easier because you can tighten one thing without breaking everything else.

The guide itself is well scoped, covering the following areas:
1 - Fundamentals
2 - Planning and design
3 - Testing and iteration
4 - Distribution and sharing
5 - Patterns and troubleshooting

Some very important takeaways:
✦ Build micro-skills that chain together instead of one monolithic skill
✦ Keep SKILL.md short and decisive; move supporting material into references/ and assets
✦ Always hand-edit Skills generated by Claude; first drafts are usually too verbose to be reliable
✦ The real leverage comes when Skills connect to tools through hooks and MCP servers; that’s when “workflow” turns into “system”

If you are building agents, internal copilots, or repeatable personal workflows in 2026, Skills are worth learning properly. In the next edition of my newsletter, I’ll share a comprehensive guide on Claude Skills: structure, patterns, and a set of micro-skills you can adapt immediately. Subscribe here so you don’t miss it: https://lnkd.in/dbf74Y9E

Some further good resources from Anthropic:
✦ Best practices guide: https://lnkd.in/gGpcN8nQ
✦ Skills documentation: https://lnkd.in/gevQn-5a
✦ Example skills: https://lnkd.in/gd7eZDaH
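The folder-plus-SKILL.md layout described in the post can be sketched in a few lines. The skill name, frontmatter fields, and file contents below are invented for illustration; check Anthropic's Skills documentation (linked above) for the actual format.

```python
from pathlib import Path
import tempfile

# Minimal sketch of the layout the post describes: a SKILL.md with a short
# frontmatter block, plus a references/ directory for supporting material.
# Field names and contents are illustrative, not an official template.
skill_md = """\
---
name: weekly-report
description: Turns raw meeting notes into the weekly status report format.
---

# Weekly report skill

1. Read the notes the user provides.
2. Group items under Wins / Risks / Next steps.
3. Keep each bullet under 20 words; see references/style.md for tone.
"""

root = Path(tempfile.mkdtemp()) / "weekly-report"
(root / "references").mkdir(parents=True)
(root / "SKILL.md").write_text(skill_md)
(root / "references" / "style.md").write_text("Plain, decisive sentences.\n")

print(sorted(p.name for p in root.rglob("*") if p.is_file()))
```

The point of the split is exactly what the post recommends: SKILL.md stays short and decisive, while longer supporting material lives in references/.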
-
Diego Saenz reposted this
Snowflake is at the center of today’s enterprise AI revolution, delivering value across the entire data lifecycle. Our momentum in the market is evident: growth accelerated to 32% YoY as we surpassed $1 billion in quarterly product revenue. Our Remaining Performance Obligations totaled $6.9 billion, up 33% YoY. Given our strong results, we’re raising our growth outlook for the year.

📈 We had a record 50 new customers cross $1M in trailing-12-month revenue, increased net customer adds 21% YoY, and added 15 Global 2000 companies in the quarter, thanks to the ease of use, connectivity, and trust that differentiate our platform.

⚡️ We continue to innovate at lightning speed, launching ~250 product capabilities in H1, with newer capabilities like Snowflake Intelligence and Snowflake Openflow already delivering amazing impact to our clients.

💪 AI is key to customer success. Nearly 50% of new logos in Q2 chose Snowflake in part for our AI capabilities, and 25% of all deployed use cases now leverage AI. We're empowering customers like Thomson Reuters and Tripadvisor to accelerate their business outcomes with world-leading models from OpenAI, Anthropic, and more.

And we’re not slowing down. Our Snowflake World Tours are in session, showcasing our latest innovations to tens of thousands of customers, partners, and developers in cities all around the world. I look forward to seeing our teams in Tokyo, New York, and EMEA soon. Thanks to all our Snowflakes for your relentless dedication and hard work. ❄️❄️❄️ https://lnkd.in/gEZ6mi5a
-
Diego Saenz reposted this
Following up on my last post about how AI is resetting the enterprise software chessboard, I just came across a really timely article. The Information just reported that Salesforce is now blocking AI startups like Glean from accessing Slack data, even though customers clearly want to be able to use their own data with third-party tools. This is a clear defensive move to keep startups from building on top of their systems of record.

Another example is SAP currently being sued by Celonis for blocking Celonis’ tool from extracting customer data from SAP applications. For now, the two companies have struck a temporary agreement whereby SAP will not interfere with Celonis’ data extractor or impose extra fees while the lawsuit proceeds, but the bigger antitrust question is still unresolved.

Are these tactics just smart business, or are incumbents crossing the line into anti-competitive territory? We’ve seen how Facebook's API restrictions landed them in antitrust hot water, and now both SAP and Salesforce are under the microscope for similar behavior. With AI making data portability and interoperability more important than ever, will regulators step in to ensure customers can actually use their own data with the tools they choose?

#AI #EnterpriseSoftware #Antitrust #AgenticAI #SaaS https://lnkd.in/gHQc8s3u
-
Diego Saenz reposted this
TL;DR: MCP is gaining widespread mindshare, and while it has great promise, enterprises should not deploy it without the right safeguards because the specification has significant gaps. Amazon Web Services (AWS) is not only providing solutions today but also contributing to improving the specification.

What is MCP?
Model Context Protocol (MCP) is an emerging standard for enabling AI agents and models to safely access external tools and data sources. It provides structured communication pathways that enhance AI capabilities through controlled tool execution while trying to maintain security boundaries between the model and external systems.

MCP has a big flaw
The MCP authorization specification (https://lnkd.in/eQ6EzpCG) suffers from a fundamental design flaw that hinders enterprise adoption. By requiring MCP servers to function as both resource servers and authorization servers, the specification violates established OAuth best practices and imposes substantial implementation burdens. This dual-role mandate forces MCP servers to become stateful systems responsible for complex token management, mapping, validation, and lifecycle enforcement, even when integrating with third-party authorization providers. The resulting architecture contradicts enterprise security patterns that typically separate these concerns, requiring each MCP server to essentially reinvent authentication infrastructure rather than leveraging existing enterprise identity solutions. Read more here from Christian Posta: https://lnkd.in/ePtMv8V8

Is it being addressed?
Yes. The MCP community is very active, and discussions are underway to resolve these issues. AWS security teams are working with the MCP community. Updated specifications will be published here: https://lnkd.in/e6sv-QdT

What can you do today?
Experiment with MCP, but since MCP is not yet ready for enterprise use as is, you need to implement additional layers of protection. AWS has developed a ready-to-use solution (https://lnkd.in/e-Gq7g8a) with best practices already incorporated. Start with this approach, and the solution will evolve alongside the specification. As always, please consult with your security teams.

We have another exciting MCP update coming up this week! Stay tuned!
-
Diego Saenz shared this
Trust isn’t a checklist. It’s a system.

We like to think trust is built with handshakes, policy binders, and a few well-meaning compliance documents. But that’s not how it works anymore, not in a world where data moves faster than decisions, and where risk doesn’t wait for a quarterly audit. Real trust, the kind that scales, is built into the architecture. It’s enforced by design. It proves what happened, who did it, and whether they had the right to. In real time... Let's build trustworthy AI applications and architectures.

#AIAdvantage #AWS #anthropic #snowflake Raghvender Arni Advait (Addy) Dubhashi Francisco Barroso Nishita Henry John Byron Thomas McGinnis Nithin Gopidi Alejandro Danylyszyn 🇺🇦 Juan Figuereo
-
Diego Saenz shared this
AI at scale isn’t just a technical challenge: it’s becoming a governance challenge. ⬇️

If you're deploying GenAI in the enterprise, compliance isn’t a side note. It’s part of the architecture. And copilots, LLM platforms, assistants all require one thing: a clear understanding of what’s allowed, expected, and soon mandatory.

Even a basic grasp of today’s AI laws and standards helps you:
→ Move faster
→ Avoid risk
→ Build trust-by-design

Below is a fantastic overview of global:
- AI laws & regulations
- Governance frameworks
- Technical standards

References
EU AI Act 🔗 https://lnkd.in/drN-6Jxd
EU AI Liability Directive 🔗 https://lnkd.in/dcrWz-b6
Brazil AI Bill 🔗 https://lnkd.in/d_X-E8Pu
Canada AI and Data Act 🔗 https://lnkd.in/dzUschEA
U.S. Executive Order on Trustworthy AI 🔗 https://lnkd.in/dfzHbsHb
NYC Bias Audit Law 🔗 https://lnkd.in/dpzpvpMb
China Algorithmic Recommendation Law 🔗 https://lnkd.in/eXkkiAw5
China Generative AI Services Law 🔗 https://lnkd.in/dF29AHBV
China Deep Synthesis Law 🔗 https://lnkd.in/emM8_emQ
Peru Law 31814 🔗 https://lnkd.in/dKAEvNiv
South Korea AI Act 🔗 https://lnkd.in/diPK27hb
Indonesia Presidential Regulation on AI 🔗 https://lnkd.in/dcEPJdmA
Mexico Federal AI Regulation 🔗 https://lnkd.in/dyiiNnwP
Chile Draft AI Bill 🔗 https://lnkd.in/dKK5ZFYm
NIST AI RMF 🔗 https://lnkd.in/et3PY6ef
Blueprint for an AI Bill of Rights 🔗 https://lnkd.in/dH8jPUHh
OECD AI Principles 🔗 https://lnkd.in/eEcydZ6j
OECD AI Risk Classification Framework 🔗 https://lnkd.in/dBFj74fm
Council of Europe Framework Convention on AI 🔗 https://lnkd.in/dPqsris8
Singapore AI Verify Framework 🔗 https://lnkd.in/dzKG8ycs
UNESCO AI Ethics Recommendation 🔗 https://lnkd.in/d6RR9fbH
G7 Hiroshima Process AI Guiding Principles 🔗 https://lnkd.in/dEggzCwi
ISO/IEC 42001 🔗 https://lnkd.in/er8mH7cu
ISO/IEC 23894 🔗 https://lnkd.in/enUZZjMg
IEEE P2863 🔗 https://lnkd.in/dCy7VXjc
IEEE P7003 🔗 https://lnkd.in/dayCTyas

This is a reality check for every AI leader, because you’re building within a legal minefield. Kudos to Oliver Patel, AIGP, CIPP/E, a must-follow voice in the AI governance space, for creating this excellent visual.
-
Diego Saenz reposted this
This is hands down one of the BEST visualizations of how LLMs actually work. ⬇️

Let's break it down:

Tokenization & Embeddings:
- Input text is broken into tokens (smaller chunks).
- Each token is mapped to a vector in high-dimensional space, where words with similar meanings cluster together.

The Attention Mechanism (Self-Attention):
- Words influence each other based on context, ensuring "bank" in riverbank isn’t confused with financial bank.
- The Attention Block weighs relationships between words, refining their representations dynamically.

Feed-Forward Layers (Deep Neural Network Processing):
- After attention, tokens pass through multiple feed-forward layers that refine meaning.
- Each layer learns deeper semantic relationships, improving predictions.

Iteration & Deep Learning:
- This process repeats through dozens or even hundreds of layers, adjusting token meanings iteratively.
- This is where the "deep" in deep learning comes in: layers upon layers of matrix multiplications and optimizations.

Prediction & Sampling:
- The final vector representation is used to predict the next word as a probability distribution.
- The model samples from this distribution, generating text word by word.

These mechanics are at the core of all LLMs (e.g. ChatGPT). It is crucial to have a solid understanding of how these mechanics work if you want to build scalable, responsible AI solutions.

Here is the full video from 3Blue1Brown with the explanation. I highly recommend watching and bookmarking this for a further deep dive: https://lnkd.in/dAviqK_6
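The attention step above can be made concrete with a toy, pure-Python version of scaled dot-product self-attention (single head, with queries, keys, and values all equal to the token embeddings; real models learn separate projection matrices and run many heads in parallel):

```python
import math

# Toy self-attention over 3 two-dimensional token vectors. Each token's new
# representation is a weighted average of all tokens, with weights derived
# from query-key dot products: similar tokens attend to each other strongly.
def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(tokens):
    d = len(tokens[0])
    out = []
    for q in tokens:  # each token acts as a query over all keys
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in tokens]
        weights = softmax(scores)  # weights are positive and sum to 1
        out.append([sum(w * v[i] for w, v in zip(weights, tokens)) for i in range(d)])
    return out

tokens = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]  # two similar tokens, one different
refined = attention(tokens)
print(refined)
```

Because the output is a convex combination of the inputs, the first token's refined vector is pulled mostly toward its similar neighbor and only slightly toward the dissimilar third token, which is the "context refines meaning" effect the post describes.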
-
Diego Saenz shared this
I agree. Anthropic’s Sonnet 3.7 is an excellent AI model, excelling in speed, accuracy, and contextual understanding. Its improved reasoning and safety measures enhance user trust. With nuanced responses and strong coherence, it showcases recent advancements in generative AI.
-
Diego Saenz shared this
Anthropic's Sonnet 3.7 is the next big AI agent model, and it's now free for everyone to use...

Today, Anthropic introduced Sonnet 3.7, their hybrid reasoning model. From the many demos we've seen, Sonnet 3.7 is a thinking model. However, unlike o1 and o3, it doesn't always perform extended thinking; its processing is hybrid, meaning that depending on your context the model may think a lot or not at all. Currently, it scores top performance in coding and SWE tasks.

They have also released Claude Code, an agentic command line tool that Anthropic has made available as a research preview. It allows developers to delegate coding tasks directly to Claude from their terminal, which means software developers can use Claude's capabilities right in their workflow without leaving the command line.

This just makes me more excited about what Anthropic has in store for us. Also, in their last "Build Effective AI Agents" YT video, they shared that the team is actively working on agentic developments as well. Very exciting times ahead for AI agents. Below is the demo shared by Cursor AI utilizing Sonnet 3.7.
-
Diego Saenz reposted this
Cursor is now the fastest software company to reach $100M in ARR, surpassing Wiz 📈

A while back, I predicted that by 2025, an AI-native software startup on the application layer would beat Wiz in becoming the fastest B2B software startup to scale from $1M to $100M ARR (Wiz did it in 18 months). Yesterday, Glean also announced that they’ve reached $100M in ARR, doing so in 3 years from $1M ARR.

AI code generation and enterprise search are prime examples of early GenAI use cases that have achieved exceptionally strong initial PMF. Of course, questions remain around stickiness, churn, and defensibility when building on the AI application layer, where product depth and vertical expertise become critical. But we’re still in the early innings of a much broader paradigm shift. I firmly believe that AI code gen could rival historical platform shifts in productivity (e.g. workstation → PC, client-server → cloud, web → mobile).

I will discuss this and more at a live webinar with Anton Osika, sign up here: https://lnkd.in/d9eKY7j3

#startups #artificialintelligence #cursor
-
Diego Saenz reposted this
TL;DR: Reasoning models are here to stay, and knowing all about them will become a key foundation for anyone in the AI space.

"Attention is all you need" changed the game in many ways a few years ago and became the foundation for Transformer-style LLMs. But as you look at what's next in AI, it's clear that test-time compute models, aka reasoning models (understanding reasoning: https://bit.ly/3CH5I6x), will dominate the landscape. While OpenAI opened the doors on reasoning with o1, the internals of these reasoning models were closed to most. Thanks to the DeepSeek AI team we now have a great starting point on the details of these types of models. Their work on Reinforcement Learning based techniques (https://bit.ly/3WLkqjw) may be one of the most important research efforts in recent times.

So how best to learn about these models?
1. Deepest read: R1 paper: https://lnkd.in/esSXazdQ
2. Visual read of the same paper: https://bit.ly/3EmkDU9 by Maarten Grootendorst. Great work by Maarten as always.
3. If you like to learn via a simple analogy (by me): https://lnkd.in/eQJuYJpn
4. Getting hands-on and watching the "thinking" these models do is a great way to learn.
a. You can use the model on Perplexity (maybe the safest way to use the R1 model for most consumers).
b. You can deploy it on Amazon Web Services (AWS) for safe enterprise use as well: https://bit.ly/4gIclU5

I am sure more awesome resources will come (and I will share them as they come about), but the above should be a great start.
-
Diego Saenz liked this
Harvey just raised at $11B. Then killed its own AI model.

Harvey just closed an $11B-valued round, then quietly dismantled the very thing everyone thought was its moat: its proprietary AI model. For 18 months, Harvey’s pitch to every law firm was: “We trained a custom model on legal data.” That was the moat. That’s how you justify $11B.

Then GPT-5, Claude 4.5, and Gemini 3 Pro beat Harvey’s proprietary model on BigLaw Bench, their own benchmark, built to show off their own training. So they scrapped it. Now Harvey is a model selector: a routing layer that picks between OpenAI, Anthropic, and Google based on the query. 400,000 agentic queries a day. 25,000 custom workflows built by lawyers inside the product. Full RAG with per-claim citations. Zero proprietary model.

Every AI startup deck I’ve seen this month still pitches a “custom model” as the moat. Every founder talks about proprietary training data. Here’s what the $11B company just proved:

1. Frontier models eat vertical training for lunch. Six to twelve months after you ship a domain-tuned model, GPT or Claude closes the gap. The cycle keeps repeating, and the same pattern is now playing out in finance, compliance, and other regulated verticals.

2. The moat isn’t the model. It’s the 25,000 workflows. Those are not replicable by the next Claude release. In regulated environments, those workflows are where policy, risk controls, and audit trails actually live.

3. If your pitch starts with “we trained our own model,” you’re fighting a war you can’t win. The layer that matters is orchestration, context, workflow, and governance: the stuff frontier labs won’t build for your vertical or your compliance stack.

Harvey didn’t raise at $11B because of its model. They raised because every BigLaw firm is locked into their workflows. The model was a line item. Replaceable. That’s the play: build the workflow, rent the intelligence.

I write about AI shifts like this, focusing on non-obvious dynamics and real strategic implications. Follow if you want signal, not recycled headlines.
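The "model selector" architecture described above reduces to a thin dispatch layer. The model names and keyword rules below are illustrative assumptions, not Harvey's actual routing logic; the point is only that the routing layer, not the model, is the owned asset.

```python
# Minimal sketch of a routing layer: map query traits to a rented frontier
# model. Real routers use classifiers, cost/latency budgets, and fallbacks;
# these string rules are stand-ins for illustration.
ROUTES = {
    "drafting": "claude",   # long-form generation
    "research": "gemini",   # large-context retrieval
    "analysis": "gpt",      # structured reasoning (default)
}

def route(query: str) -> str:
    q = query.lower()
    if "draft" in q or "write" in q:
        return ROUTES["drafting"]
    if "find" in q or "cases" in q:
        return ROUTES["research"]
    return ROUTES["analysis"]

print(route("Draft a motion to dismiss"))
print(route("Find precedent cases on venue"))
```

Swapping a backend is a one-line change to the table, which is exactly why the routed models become "a line item" while the workflows around the router stay sticky.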
-
Diego Saenz liked this
The most recent agentic AI strategy pack every executive needs to read in 2026: the world's top consultancies plus our own ENDGAME blueprint on agentic delivery.

1/ MCKINSEY – The State of AI in 2025. Your reality check. 🔗 https://lnkd.in/gMHgSVys
2/ ENDGAME – Building Agentic Development Organizations. 🔗 https://end.game/
3/ BCG – AI Radar 2026. The CEO investment view. 🔗 https://lnkd.in/gZXm_PE5
4/ DELOITTE – State of AI in the Enterprise 2026. Ambition vs action. 🔗 https://lnkd.in/gd3AjY6Q
5/ MCKINSEY – Seizing the Agentic AI Advantage. Why 90% of pilots die. 🔗 https://lnkd.in/gUtmr7YB
6/ PwC – 2026 AI Business Predictions. Where agents land first. 🔗 https://lnkd.in/gbsu6sB6
7/ BAIN – Technology Report 2025. Data, platforms, plumbing. 🔗 https://lnkd.in/gbunajNN
8/ KPMG – Global AI Pulse Q1 2026. The orchestration shift. 🔗 https://lnkd.in/gC7h8rZV
9/ EY – AI Pulse Survey Wave 3. The oversight gap. 🔗 https://lnkd.in/gRrB4snm
10/ WEF × CAPGEMINI – AI Agents in Action. The governance playbook. 🔗 https://lnkd.in/gNHK-q-3
11/ DELOITTE – Agentic Enterprise 2028. The autonomy maturity ladder. 🔗 https://lnkd.in/gv5i_8zp
12/ BCG – The $200B Agentic AI Opportunity. Numbers your CFO needs. 🔗 https://lnkd.in/gt8hUE4U

The world's top consultancies will tell you where it's all heading. What they don't tell you is how to actually rewire your engineering org to do agentic work. That's the gap we built ENDGAME to fill. ENDGAME is helping dozens of enterprises transform and become agentic development organizations. Results: ready-to-use agents, playbooks, processes.

Want to transform using proven methods? Apply here 👉 https://lnkd.in/e-ipdV33 Dozens of CTOs have already enrolled. Learn and transform together with your peers.

♻️ Repost to help your network. Follow Alex Barády, founder of ENDGAME.
-
Diego Saenz liked this
🛠️🧭 How to Build AI Agents from Scratch, Even If You’ve Never Done It Before. This is a 10-step roadmap from prompt to evaluation.

Step 1: Define the Agent’s Role and Goal
✸ What will your agent do? Who is it helping? What kind of output will it generate?
→ Example: A medical assistant agent that reads X-rays, summarizes findings, and speaks results.

Step 2: Design Structured Input & Output
✸ Use Pydantic AI or JSON Schemas to define what the agent receives and returns.
✸ Avoid messy text; think like an API.
→ Tools: Pydantic AI, LangChain Output Parsers

Step 3: Prompt, Tune, and Define the Agent Protocol
✸ Start with role-based system prompts.
✸ Use Prompt Tuning or Prefix Tuning for consistent persona and task behavior.
✸ Use MCP to standardize how your agent handles inputs, tools, and outputs across modules.
→ Tools: GPT, Claude, MCP, Prompt Tuning

Step 4: Add Reasoning and Tool Use
✸ Equip the agent with reasoning frameworks: ReAct (Reasoning + Action), Chain-of-Thought.
✸ Allow access to tools like web search, code interpreters, or document retrievers.
→ Tools: LangChain, OpenAI Tools, ReAct Framework

Step 5: Structure Multi-Agent Logic (if needed)
✸ Use orchestration frameworks to define agent roles and coordination.
✸ Create Planner, Researcher, and Reporter agents, each with its own input/output schema.
→ Tools: CrewAI, LangGraph, Google ADK

Step 6: Add Memory and Long-Term Context
✸ Does your agent need to remember what happened earlier?
✸ Use conversational memory, summary memory, or vector-based memory.
→ Tools: Zep, LangChain Memory, Chroma

Step 7: Add Voice or Vision Capabilities (Optional)
✸ Text-to-speech: Coqui or ElevenLabs. Image understanding: GPT or Gemini.
→ Let your agent see and speak.

Step 8: Deliver the Output (in Human or Machine Format)
✸ Format outputs into Markdown → PDF or structured JSON.
✸ Output must be both readable and parsable.
→ Tools: Pydantic AI, Markdown-to-PDF, LangChain Output Parsers

Step 9: Wrap in a UI or API (Optional)
✸ Create a front-end or expose your agent via API, using Gradio, Streamlit, or FastAPI.
→ This is what turns your agent into a product.

Step 10: Evaluate and Monitor Your Agent’s Performance
✸ Run test prompts and toolchains to ensure reliability and consistency.
✸ Use logs, benchmarks, and feedback loops to improve over time.
→ Tools: MCP Logs, OpenAI Evaluation API, Custom Metrics Dashboards

Which real-world AI agents have you built recently?

⫸ My 2,300 students build real-world AI agents. Ready to join them?
AI Agents Mastery (5-in-1):
➠ 12 Real-World Projects. 100% hands-on
➠ 5 Modules: MCP, LangGraph, PydanticAI, CrewAI, OpenAI Swarm
➠ Lifetime Updates. Full code. 60% off ⭒ Limited time ↓ https://lnkd.in/egzjzy8X
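The "think like an API" idea from the structured input/output step can be sketched with the standard library alone; a real stack would use Pydantic models, which add richer validation. The AgentReport fields below are invented for illustration.

```python
from dataclasses import dataclass
import json

# Sketch of structured agent output: parse the model's JSON reply into a
# typed object and reject malformed values, instead of passing raw text
# around. Field names are hypothetical.
@dataclass
class AgentReport:
    task: str
    findings: list[str]
    confidence: float  # expected in [0.0, 1.0]

def parse_agent_output(raw: str) -> AgentReport:
    """Turn a JSON reply into a typed report, validating the confidence range."""
    data = json.loads(raw)
    report = AgentReport(
        task=str(data["task"]),
        findings=[str(f) for f in data["findings"]],
        confidence=float(data["confidence"]),
    )
    if not 0.0 <= report.confidence <= 1.0:
        raise ValueError("confidence out of range")
    return report

raw = '{"task": "summarize x-ray", "findings": ["no fracture"], "confidence": 0.9}'
print(parse_agent_output(raw))
```

The payoff is that downstream steps (delivery, UI, evaluation) consume a known shape rather than free-form text, which is exactly what schema libraries like Pydantic enforce at scale.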
-
Diego Saenz liked this
Carolyn Healey, excellent post: so many pearls of wisdom here. One point that especially resonated: without a clear strategy, organizations risk becoming just another “me too” player in the market. And without disciplined execution, competitive advantage quickly erodes. History has shown us what happens when companies fail to adapt; think Kodak or Remington. Strategy isn’t optional; it’s what differentiates leaders from followers.
-
Diego Saenz liked this
Companies winning with AI aren’t better at choosing tools. They’re better at redesigning how work gets done.

PwC’s 2026 AI predictions put a hard number on it: ~20% of value comes from technology; ~80% comes from how work is redesigned. Most enterprises have it reversed. They’re spending 80% of their energy evaluating tools and running pilots, and 20% asking the only question that matters: “How does this change the way we operate?”

Here’s the 80/20 AI strategy framework:

1/ Start With the Workflow
→ McKinsey data is clear: workflow redesign is the strongest predictor of AI impact, more than model sophistication, data scale, or budget.
→ The starting point isn’t “what tools should we buy?” It’s a process map and one question: where are we losing time, money, or quality?
Reality: If your roadmap doesn’t start with workflows, it’s procurement dressed up as strategy.

2/ Stop Confusing Adoption With Impact
→ BCG reports 74% of companies still see no tangible value from AI.
→ Every team has a tool. Almost none have redefined a KPI because of it.
Reality: Adoption without operational change is just shelfware.

3/ Kill the Pilot, Design Deployment
→ MIT research shows ~95% of generative AI pilots fail to deliver measurable ROI.
→ Pilots work in clean, controlled environments. Then they collapse in real workflows, messy data, and teams that weren’t part of the design.
Reality: Your pilot didn’t fail. Your operating model did.

4/ Make Your CFO Your Co-Architect
→ AI spend is projected to hit $2.52T in 2026, with over half going to infrastructure (Gartner).
→ The companies capturing value tie every investment to a measurable business delta: revenue gained, cost removed, cycle time reduced.
Reality: If your AI strategy can’t survive a CFO conversation, it’s a bet.

5/ Redesign Roles Before You Automate Tasks
→ BCG’s 10-20-70 rule still holds: most value comes from people, process, and culture.
→ Leaders pulling ahead stop asking “what can we automate?” and ask “what should this role become?”
Reality: You can’t bolt intelligence onto a broken org chart and call it transformation.

6/ Treat Governance as a Scaling Lever, Not a Brake
→ Gartner predicts a shift toward AI systems with delegated execution authority.
→ High performers use governance (clear decision rights, human-in-the-loop rules, escalation paths) to increase speed and autonomy.
Reality: Successful companies are using governance to move faster.

7/ Measure Business Outcomes, Not AI Metrics
→ Only ~6% of companies qualify as AI high performers (McKinsey), where AI contributes meaningfully to EBIT.
→ The difference isn’t technical. It’s measurement discipline.
Reality: “We deployed 47 use cases” isn’t a metric. Revenue, margin, and cycle time are.

The truth: the 6% of companies generating real returns from AI didn’t get there by picking better tools. They got there by redesigning how their businesses operate and treating technology as the enabler, not the strategy.
-
Diego Saenz liked this
I curated 15 crucial AI agent design patterns. Most AI engineers don't know half of these exist. After building 400+ AI agents, these are the architecture designs I wish I had on day one, plus links for a deeper dive.

TIER 1: SINGLE-AGENT PATTERNS
1. ReAct: Alternate between reasoning and acting so the agent thinks before every tool call. - https://lnkd.in/eYYAAhZF
2. Plan-and-Execute: Generate the full task plan upfront, then execute each step sequentially. - https://lnkd.in/eCzSqVTn
3. Reflection / Self-Critique: The agent reviews its own output and iterates until satisfied. - https://lnkd.in/eQ5nabJz
4. Tool Use / Function Calling: The agent decides which external tool to invoke and when. - https://lnkd.in/esCSipM9

TIER 2: MULTI-AGENT ORCHESTRATION
5. Orchestrator-Subagent: A coordinator breaks down goals and delegates to specialized workers. - https://lnkd.in/eyr-9mEh
6. Supervisor: A controller routes tasks, monitors outputs, and enforces quality gates. - https://lnkd.in/eRKRDtC4
7. Hierarchical Agents: Orchestrators manage other orchestrators across multiple levels of responsibility. - https://lnkd.in/e7MeJjA7
8. Sequential Pipeline: Each agent completes its task and passes the result to the next in line. - https://lnkd.in/eXKi-Qij
9. Parallel Fan-Out / Fan-In: Tasks split across agents simultaneously, then results merge into one. - https://lnkd.in/emRzawkq
10. MapReduce: Distribute subtasks across many agents, then aggregate into a single output. - https://lnkd.in/eyr-9mEh
11. Debate / Adversarial: Agents argue opposing positions and a judge resolves the final answer. - https://lnkd.in/etv3rJyG

TIER 3: ITERATIVE & FEEDBACK LOOP
12. Evaluator-Optimizer: One agent generates, another scores, and the loop runs until quality is met. - https://lnkd.in/eQmnsGXn
13. Critic-Actor: A critic provides structured feedback and the actor refines until the bar is cleared. - https://lnkd.in/ehSU347F
14. Self-Healing / Retry Loop: On failure, the agent diagnoses the error and retries with a corrected strategy. - https://lnkd.in/eQ5nabJz
15. HITL: A human steps in at defined checkpoints to approve, correct, or redirect the agent. - https://lnkd.in/ekdB7y3v

Three tiers. Fifteen patterns. One decision before every build.
- Single-agent patterns define what one agent can do alone.
- Multi-agent patterns define how agents coordinate under load.
- Feedback loop patterns define how systems recover and improve.

💾 Save this for your AI agent builds.

⫸ My 2,300 students build real-world AI agents. Ready to join them?
AI Agents Mastery (5-in-1):
➠ 12 Real-World Projects. 100% hands-on
➠ 5 Modules: MCP, LangGraph, PydanticAI, CrewAI, OpenAI Swarm
➠ Lifetime Updates. Full code. 60% off ⭒ Limited time ↓ https://lnkd.in/egzjzy8X
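Pattern 12 (Evaluator-Optimizer) reduces to a small control loop. The generate and score functions below are trivial stand-ins for LLM calls, purely to show the loop structure: generate, score, and repeat until the quality bar is met or a round limit is hit.

```python
# Sketch of the Evaluator-Optimizer pattern. In a real system, generate()
# would call a drafting model with the evaluator's feedback and score()
# would call a judge model or rubric; here both are toy stand-ins.
def generate(draft: str, feedback: str) -> str:
    return draft + " [revised]" if feedback else draft

def score(draft: str) -> float:
    # Stand-in metric: reward each revision pass, capped at 1.0.
    return min(1.0, 0.4 + 0.3 * draft.count("[revised]"))

def evaluator_optimizer(seed: str, bar: float = 0.9, max_rounds: int = 5):
    """Loop generate -> score until quality meets the bar or rounds run out."""
    draft, s = seed, score(seed)
    for _ in range(max_rounds):
        if s >= bar:
            break
        draft = generate(draft, feedback="improve")
        s = score(draft)
    return draft, s

final, quality = evaluator_optimizer("first draft")
print(final, round(quality, 2))
```

The max_rounds cap matters in practice: without it, a scorer that never clears the bar turns the pattern into an unbounded spend of model calls, which is where pattern 14 (Self-Healing / Retry) and pattern 15 (HITL) take over.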
-
Diego Saenz liked thisDiego Saenz liked thisI've been in 200+ discovery calls with enterprise teams building GenAI systems. The ones that fail don't fail because the model was wrong. They fail in the same five places. Every time. 1. The data plumbing was underestimated. The LLM was ready at day 30. The connectors to 7 internal systems were still in progress at day 90. 2. Hallucination tolerance was set too high. Teams accepted outputs that were 'usually right.' One wrong legal or financial answer, and the project was killed from the top. 3. There was no guardrails layer. Prompts were passed straight into production. Compliance teams found out six months in. 4. The internal champion left. The person who believed in the project moved teams. With no sponsor, it died quietly. 5. ROI was never defined before day one. Success was 'we'll know it when we see it.' They never saw it. Every single one of these is an architecture problem - not a technology problem. The fix isn't better. It's asking the right questions before the project starts. What's your data layer? What constitutes a wrong answer? Who owns this in 18 months? Which of these five have you seen kill a GenAI project closest to the finish line? #GenAI #EnterpriseAI #LLM
-
Diego Saenz liked this
TL;DR: AI is dismantling the traditional corporate pyramid. We are moving from managing hierarchies to orchestrating intelligence, where AI agents handle the coordination tax.

The Big Shift
The corporate structures we use today were designed to manage information scarcity. Sequoia Capital's Roelof Botha and Jack Dorsey highlight (https://lnkd.in/erjgibWm) that as AI makes intelligence abundant, these rigid models are breaking down. We are entering the era of the intelligence-first organization.

Key insights from the piece:
Coordination tax: AI agents reduce the need for layers of middle management.
Agility over scale: Smaller, flatter teams can now outperform massive departments by leveraging reasoning engines.
Outcome driven: Success is shifting from headcount and activity to the quality of automated workflows.

The goal is no longer just to manage people. It is to manage the flow of intelligence. Companies that fail to flatten their structures will likely be outpaced by leaner, AI-native competitors.

Next Steps
I used NotebookLM to synthesize these ideas into a presentation. You can find the deep dive embedded below. How are you rethinking your organizational structure for the AI era?
-
Diego Saenz liked this
Most AI agent projects never make it past the prototype stage. Hidden failures, tool thrashing, and version drift quietly break production systems.
Our new guide, "5 Critical Lessons for Production-Ready AI Agents," shares field-tested insights from enterprise deployments, from exposing agent reasoning and testing checkpoints to applying infrastructure-level rigor.
Download the guide and book a demo to see how leading organizations build AI agents that scale safely and reliably in production.
Experience
Education
Courses
-
Big Data: Making Complex Things Simpler
2012
Languages
-
Fluent in Spanish
-
Recommendations received
22 people have recommended Diego
Other similar profiles
Explore more posts
-
Jodi Adams
KPMG US • 7K followers
Great insights from the latest research between KPMG US and the Texas McCombs School of Business. The joint study analyzed over 1.4 million AI interactions to decode what separates sophisticated AI collaboration from routine use. Harvard Business Review shares more: https://ow.ly/b7x050YwrNP
-
Todd Lohr
7K followers
There’s a lot of talk about efficiency, but to me what makes #AI different is that it already fundamentally changes how things are built. KPMG US is using Anthropic's Claude Code to accelerate creation and modernize AI applications. This hands-on experience is critical as we help our clients integrate Claude into their most complex work, especially in areas like #healthcare. It's always exciting to see the latest enterprise innovations from Anthropic’s ‘The Briefing: Enterprise Agents’ event in NYC. This event is designed for senior leaders shaping AI strategy at their organizations. That's the ecosystem-driven approach in action—combining powerful platforms with deep industry knowledge to solve enterprise challenges. Click here for more info: https://lnkd.in/d8NnjkHV #KPMGTechnology
-
Jason Gould
KPMG US • 2K followers
Great insights from the latest research between KPMG US and the Texas McCombs School of Business. The joint study analyzed over 1.4 million AI interactions to decode what separates sophisticated AI collaboration from routine use. Harvard Business Review shares more: https://ow.ly/b7x050YwrNP
-
Anita Barksdale JD, CIPP (US), CIPM
KPMG US • 5K followers
How do organizations enable high-impact, sophisticated AI use across the workforce? A new study from KPMG US and the Texas McCombs School of Business details the important shift from driving adoption to shaping the habits that create value. Dive in: https://ow.ly/b7x050YwrNP
-
Wolfe Tone
Deloitte • 6K followers
In today’s fast-changing business environment, managed services deliver right-sized capabilities—helping private companies adapt quickly and efficiently. Deloitte Private’s most recent publication, Deloitte Operate, explores how your organization can leverage external specialists and advanced technology for: • Access to best-of-breed technologies and talent • Immediate scalability and flexibility • Enhanced threat visibility and cybersecurity • Improved customer service and operational efficiency • Cost-effective solutions compared to DIY alternatives Discover more insights on achieving your business goals with minimal investment—while driving results, flexibility, and continuous innovation in your organization. #DeloittePrivate #Operate
-
Phil Wong
KPMG • 3K followers
In the fast-paced environment of Dallas, where scaling quickly and managing risks are critical, AI-driven internal audits are becoming indispensable for achieving sustainable growth. Learn more about our approach in the Dallas Business Journal. #InternalAudit #KPMGDallas #MakeTheDifference https://bit.ly/3RYGCE7
-
Phyllis Thompson MPA
North Carolina Central… • 3K followers
It’s official! KPMG US professionals now have Gemini Enterprise at their fingertips, with nearly 28,000 utilizing the technology in the first week. In the first few days, we’ve already built nearly 700 no-code AI agents. Incredible to see innovation happening at this scale. https://ow.ly/9nXt50X9iLS
-
Lena Cristina Rincones
KPMG US • 971 followers
Great insights from the latest research between KPMG US and the Texas McCombs School of Business. The joint study analyzed over 1.4 million AI interactions to decode what separates sophisticated AI collaboration from routine use. Harvard Business Review shares more: https://ow.ly/b7x050YwrNP
-
Vamsi Andavarapu
Accenture • 3K followers
Heading to Las Vegas next week for HIMSS26 — and the question I keep coming back to is: are we transforming healthcare, or just digitizing the status quo? There's a difference. Real transformation means rethinking how data, AI, and cloud infrastructure work together to actually change outcomes for patients, clinicians, and communities — not just move old workflows onto new platforms. As Oracle North America Health & Public Service Lead at Accenture, I'm seeing firsthand where the genuine breakthroughs are happening — in EHR modernization, in AI-enabled operations, and in building health systems that serve people across their entire care journey. If you're at HIMSS26, stop by Accenture Booth #4060. I'd love to compare notes on what's actually working. #HIMSS26 #HealthcareIT #OracleHealth #Accenture #DigitalHealth #HealthTransformation