Awais Nemat
Santa Clara, California, United States
7K followers
500+ connections
About
Builder at heart, entrepreneur at core, and operator by choice. Love technology and…
Activity
7K followers
-
Awais Nemat shared this: Big news… the Chkk team is joining Temporal. When we started Chkk, we were obsessed with one problem: making complex systems more reliable. That obsession led us to Temporal as users first. We built our entire platform on it, because we knew it was the right foundation. The infrastructure and systems layer for agentic AI is one of the hardest problems in software right now, and Temporal is building exactly that. We’re excited to join at this incredible moment in time, building and leading from the frontlines of AI adoption. Grateful to Samar Abbas, Maxim Fateev, Preeti Somal, and everyone who believed in and built with us. Excited for what comes next. See you at #Replay2026. Read more: https://lnkd.in/dd7yq6cm
-
Awais Nemat reposted this: Maintain agent state through crashes and network timeouts with Temporal and Gemini. Building durable AI agents requires more than just a model. This guide covers the implementation of a ReAct-style loop that persists every step of the process. Temporal handles the durability, ensuring that the agent resumes from the exact point of failure without losing data.
✅ Build a durable agentic loop
✅ Use Gemini for reasoning
✅ Deploy Temporal for persistence
Learn more → https://goo.gle/4rvf0Wr
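The "persist every step" pattern in the post above can be sketched in plain Python. This is a minimal illustration only: `react_loop`, the `save` callback, and the stand-in `reason`/`act` functions are hypothetical names, not the guide's actual code. In a real implementation the loop body would live inside a Temporal `@workflow.run` method, with the Gemini call and each tool invocation wrapped as Temporal activities.

```python
import json

def react_loop(goal, reason, act, save, state=None, max_steps=5):
    """ReAct-style loop that checkpoints after every think/act step,
    so a crashed run can resume from the last completed step.

    reason(goal, trace) -> next thought, or None when the goal is met
                           (in the post's setup, this is the Gemini call)
    act(thought)        -> observation from executing the thought (a tool call)
    save(state)         -> durability boundary; Temporal would persist this
                           automatically via its event history
    """
    state = state or {"step": 0, "trace": []}
    while state["step"] < max_steps:
        thought = reason(goal, state["trace"])
        if thought is None:          # model decided the goal is satisfied
            break
        observation = act(thought)
        state["trace"].append([thought, observation])
        state["step"] += 1
        # Snapshot after *every* step; a restart passes the last snapshot
        # back in as `state` and continues from the exact point of failure.
        save(json.loads(json.dumps(state)))
    return state
```

In an actual Temporal workflow the explicit `save` callback disappears: each completed activity result is recorded in the workflow's event history, so a restarted worker replays that history and resumes from the last completed step without re-running the Gemini calls.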
-
Awais Nemat reposted this: Timing, Closure, and New Beginnings… I took this picture on my recent trip to Hawaii, and as 2025 comes to a close, it feels like the perfect reflection of the year for me, my family, my work, and the moments that reminded us to pause, appreciate, and keep moving forward. On this eve of December 31, 2025, as I see the last sunset, I am reminded that every ending quietly prepares the way for a new beginning. With that, I wish you new beginnings and an amazing, happy, healthy, and prosperous 2026. HAPPY NEW YEAR!
-
Awais Nemat reposted this: Happy Holidays & Merry Christmas from Chkk! 🎄 May this festive season bring warmth and joy to you and your loved ones! ✨
-
Awais Nemat reposted this: Chkk’s Agentic Lifecycle Management platform is built on our Collective Learning technology — and the Knowledge Graph is a critical part of this foundation. The Knowledge Graph models the cloud native ecosystem, capturing how projects evolve, depend on one another, and interact at runtime. It provides the contextual intelligence required for safe, automated lifecycle decisions across hundreds of cloud native projects. 👉 Read the full blog: https://lnkd.in/g2H2UUd4 #platformengineering #upgrades #devops #sre
-
Awais Nemat shared this: Busy Day 2 at KubeCon NA 2025. Come by Booth 1952 and see Agentic Upgrades in action.
-
Awais Nemat liked this: We are thrilled to welcome Moied Wahid to Cloudera as our new SVP of Engineering & Applications! 🎉 A passionate open-source advocate and proven culture builder, his leadership will be key in accelerating our Anywhere Cloud Platform. Welcome to the team, Moied! We’re excited to build the future of cloud data together. 🚀 #ClouderaLife #EngineeringLeadership #CloudNative #DataPlatforms #OpenSource Sergio Gago Leo Brunnick Karthik Krishnamoorthy Katrina Boswell Scott McCurdy Amy Nelson Donna Beasley Stephen Ellis Bruno Unna Yaguang Liu Madhan Neethiraj Rahul B. Sangeeta Doraiswamy David Streever Shubho Sinha Sunitha Velpula Sivakumar Krishnamurthy Béla Ányos Aoife (Ee-fah) O'Connor Rachit Chandra Jane Hendry 🏳️🌈 Sara Link Brian Rosso
-
Awais Nemat liked this: OpenObserve has raised $10M in Series A funding co-led by Nexus Venture Partners and Dell Technologies Capital to scale its AI-native observability platform. The San Francisco company is attacking a very practical enterprise AI problem: telemetry volume is rising faster than the teams responsible for interpreting it. OpenObserve already had the open-source wedge, with more than 6,000 organizations using the platform and 18,000+ GitHub stars, but the more interesting bet is where the product is moving next. OpenObserve is packaging logs, metrics, traces, RUM, pipelines, visualization, incident management, anomaly detection, AI SRE, MCP support, and LLM observability into one platform. That matters because the old observability stack was built around dashboards and manual triage. The new pain point is different: teams need systems that can understand signals, surface root causes, and eventually act before incidents turn into firefights. The company says its S3-native Parquet architecture can cut storage costs by up to 140x and remove database management overhead. If that cost story holds at enterprise scale, OpenObserve has a credible opening against fragmented Prometheus-Grafana and ELK deployments, and against expensive commercial incumbents. Quick facts👇
● Founder: Prabhat Sharma
● Total capital raised: ≈$13.6M
● HQ: San Francisco, California
● Investors: Nexus Venture Partners; Dell Technologies Capital; Secure Octane; Cardinia Ventures
-
Awais Nemat liked this: Incredible week @ Replay 2026 at Moscone, SF. Amazing to see the explosive growth of the Temporal community, with 2,000+ customers and OSS members participating at Replay this year. And hats off to all of the Temporal staff who worked hard behind the scenes to make the magic happen. Replay 2027 planning starts... Monday.
-
Awais Nemat liked this: This week we wrapped up our 5th annual Replay conference, and I’m honestly still processing it. To go from hosting at a wedding venue in 2022 to selling out the iconic Moscone Center in 2026 is nothing short of mind-blowing. I’m beyond proud of the team who worked tirelessly to make this a success. Huge round of applause to all of you! And lastly, thank you to our incredible community of developers and builders. You pour so much love into this event and you are seen. We’re going to keep giving y'all our very best! 👏
-
Awais Nemat liked this: Temporal Technologies #replay2026 was a blast! I was in awe from the second I walked into Moscone Center to the time I left. The venue, the design, the digital badges, and the speakers were all top notch! Yichao Yang and I gave a glimpse of CHASM, a framework that we built to accelerate Temporal server feature development. I will share the video as soon as it's up on YouTube. Thanks to teleportme.co for sending this package of clothes to my hotel to wear at the event. I had everything waiting for me at the hotel, neatly folded; what a great service!
-
Awais Nemat liked this: A year ago, we set out on a journey to transform Altera into a leading FPGA company. Today, we’re operating with sharper focus, stronger execution, and a culture rooted in accountability and outcomes. With the foundation in place, it’s now time to scale and grow together. Thank you to our employees, partners, and customers. What we’re building is just the beginning.
-
Awais Nemat liked this: Jeff Dean and I had the pleasure of hosting Dina Bass at our Mountain View campus recently for an in-depth conversation on Google’s approach to AI infrastructure. We shared the history that led us to engineer entire full-stack systems instead of simply buying off-the-shelf components, including the insight a decade ago that led us to create our own custom silicon: that we wouldn’t be able to provide language translation and voice recognition otherwise. It turned out that there was a lot more that custom silicon would enable us to do as Transformers came along a few years later. AI workloads require massive mathematical calculations, and these early revelations around what it would take to train and deploy AI at scale helped set the standard for AI-optimized hardware. Our teams continue to partner deeply on hardware co-design, essentially predicting the future needs of researchers in line with hardware design and delivery cycles.
A few key themes from our discussion:
The Co-Design Advantage: Our AI & Infrastructure (AI2) and DeepMind teams work as direct partners in the same room. This closed-loop system allows us both to set the design and to use AI to design the physical layouts of the very chips that will train the next generation of models.
Systems, Not Just Chips: There’s a lot more that goes into powering intense AI workloads than chips. By using proprietary networking fabric and optical circuit switching (OCS), we enable thousands of TPUs to act as a single supercomputing "brain."
Bending the CapEx Curve: By controlling the design from silicon to the facility, we eliminate lost efficiencies at the interfaces between hundreds of hardware and software components. This allows us to reinvest savings into large-scale research and lower the marginal cost of training and serving models for our customers.
The research-to-hardware flywheel is an accelerating feedback loop. I am incredibly proud of how our teams are redefining what is possible by building infrastructure that is fundamentally aware of the models it supports. Stay tuned for more exciting news to come at this year’s Google Cloud Next! #AIInfrastructure #GoogleCloudNext #CloudInnovation https://lnkd.in/g3NwDTzm (Google Eyes New Chips to Speed Up AI Results, Challenging Nvidia)
Experience
Education
Publications
Patents
-
Co-inventor on 10 Patents related to Networking, Security, Scheduling, Processing and Encryption
US 010