Vijoy Pandey
San Francisco Bay Area
17K followers
500+ connections
About
Focused on the Internet of Agents (check out AGNTCY), cognition and semantics, and the…
Activity
-
Vijoy Pandey shared this
Another question that kept coming up in our conversations about the quantum networking stack we are building: why can't we just use good old TCP/IP? Some of this confusion comes from companies branding their high-speed classical networks - the ones connecting GPUs to quantum islands - as quantum networking.

The Internet moves bits. Quantum networking teleports qubits. Teleporting quantum state between nodes. Yep, that teleportation!

A bit can be copied, and the entire classical networking stack depends on that principle - store-and-forward. A qubit is a strange beast. It can exist in a superposition of 0 and 1 simultaneously - the proverbial Schrödinger's cat being dead and alive. Unfortunately, an attempt to copy a qubit requires measuring it first, which collapses the superposition.

So, quantum networking moves quantum state through completely different mechanisms: entanglement and teleportation. You distribute an entangled photon pair between two nodes, A and B - one tango partner at each end - then perform a measurement at A. The state is reconstructed at B and destroyed at A. No copying.

And this has implications through every layer of the quantum networking stack: switching, error correction, apps, datacenter design. The design patterns themselves will feel familiar from cloud and distributed computing, but the networking primitive is an entangled photon pair, not a packet.

Longer version on Substack, linked in comments.
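The no-cloning argument in the post is exactly the textbook one-qubit teleportation protocol. As a minimal sketch (a plain numpy state-vector simulation, nothing to do with Cisco's actual stack), the state shows up at node B after A's Bell measurement destroys the original:

```python
import numpy as np

# Single-qubit gates
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
I = np.eye(2)

def apply(gate, qubit, state, n=3):
    """Apply a 1-qubit gate to `qubit` of an n-qubit state vector."""
    full = np.array([[1]])
    for q in range(n):
        full = np.kron(full, gate if q == qubit else I)
    return full @ state

def cnot(control, target, state, n=3):
    """Apply CNOT by permuting basis-state amplitudes."""
    out = np.zeros_like(state)
    for i in range(2 ** n):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = int("".join(map(str, bits)), 2)
        out[j] = state[i]
    return out

def measure(qubit, state, n=3, rng=np.random.default_rng(0)):
    """Projectively measure one qubit; collapse and renormalize."""
    p1 = sum(abs(state[i]) ** 2 for i in range(2 ** n)
             if (i >> (n - 1 - qubit)) & 1)
    outcome = int(rng.random() < p1)
    for i in range(2 ** n):
        if ((i >> (n - 1 - qubit)) & 1) != outcome:
            state[i] = 0
    return outcome, state / np.linalg.norm(state)

# Qubit 0: arbitrary state |psi> at node A - this is what gets teleported
alpha, beta = 0.6, 0.8
state = np.kron([alpha, beta], np.kron([1, 0], [1, 0])).astype(complex)

# Entangle qubits 1 (held at A) and 2 (held at B) into a Bell pair
state = apply(H, 1, state)
state = cnot(1, 2, state)

# Bell measurement at A: CNOT(0->1), H on qubit 0, measure qubits 0 and 1
state = cnot(0, 1, state)
state = apply(H, 0, state)
m0, state = measure(0, state)
m1, state = measure(1, state)

# Two classical bits travel to B, which applies the matching corrections
if m1: state = apply(X, 2, state)
if m0: state = apply(Z, 2, state)

# B's qubit now holds |psi>; A's copy was destroyed by the measurement
b_amplitudes = [state[m0 * 4 + m1 * 2 + 0], state[m0 * 4 + m1 * 2 + 1]]
print(np.round(np.abs(b_amplitudes), 3))  # -> [0.6 0.8]
```

Note that nothing was copied: the two measurement outcomes at A carry no information about alpha and beta, and only classical bits cross the wire.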
-
Vijoy Pandey shared this
While talking to folks last week about our Universal Quantum Switch (https://cs.co/6040BBEWiG), one question that kept cropping up was how the two computational probabilistic engines - AI and quantum - are related (or not). Arvind Krishna summed it up elegantly a couple of weeks ago in a quote that I would further summarize and paraphrase as: AI Predicts, Quantum Computes.

AI operates inside the boundaries of recorded human knowledge. Every protein structure, every legal precedent, every diagnosed disease in the training corpus came from something a human observed and measured and wrote down.

Quantum simulations do not consult a corpus of data. The algorithm that a quantum computer runs evolves a system according to the laws of quantum mechanics, and the answer that emerges is the result of the computation nature itself would have run.

The two paradigms are complementary. The hard problems that matter to society - new therapeutics, room-temperature superconductors, carbon capture approaches, fault-tolerant materials at the atomic scale - all have solutions beyond the record of human experimentation. Getting to those solutions (tractably) will require simulating the future through the laws of physics. Physics solving physics.

Longer version on Substack, linked in comments.
-
Vijoy Pandey shared this
I showed this to my kids last night and told them, "This is history." That's not a phrase I throw around easily. After enough years in this industry you learn the difference between a milestone and a genuine first. This is clearly the latter.

What you're looking at is the Cisco Universal Quantum Switch. It routes entangled photons while preserving their quantum state, converts between encoding modalities so you can connect any type of quantum compute node or sensor and translate between them, and works over standard telecom fiber at room temperature. The physics and engineering behind it are wild, and the team at Cisco Quantum Labs in Santa Monica pulled it off.

For the first time, distributed quantum computing is architecturally possible. We built the foundation for the quantum internet.

Read the blog: https://lnkd.in/g42BNzfk
Chuck Robbins, Jeetu Patel, Ammar Maraqa, Ramana Kompella, Reza Nejabati
-
Vijoy Pandey shared this
There's a new kind of computer media in the enterprise: Write once, Read never. The docs are perpetually stale, constantly diverging from reality, and scattered across Confluence, SharePoint, GitHub, Webex (or Slack) threads, Notion, Obsidian - and in my personal life, add Apple Notes, Goodnotes, web clippings, and multiple Google Drives worth of docs and slides that nobody is ever going back to.

Karpathy tweeted his LLM knowledge base wiki architecture, which went viral last weekend, and I decided to give it a run yesterday. Verdict: you *have* to try this out. Prediction: you won't be able to live without it soon.

There were a few mods and decisions I made to the base Karpathy provided.

First, the vault / folder structure in Obsidian. I already use Obsidian as a human. Instead of creating separate vaults and dealing with the sync nightmare, I just have folders for Human and Agent, and a Raw folder.
(1) The Human/ folder is where I write long-form articles and notes independent of the knowledge base wiki. No LLM or agent touches this folder.
(2) I do have Arnold Layne, my OpenClaw agent, doing background tasks for me. Raw/ is where both Arnold and I dump raw snippets, inclusive of diverse kinds of media.
(3) The Agent/ folder is where the LLM (Claude in my case) synthesizes the wiki. No human touches this folder.

Second, some customizations to CLAUDE.md for enterprise-like usage:
(4) Domain extensions - the agent needs to know that quantum computing and agentic AI have different entity types and different provenance thresholds.
(5) Primary source protection - when I drop in my own original work, secondary sources can extend it or raise questions against it, but they cannot overwrite it. It sounds like a small thing but it's not, especially at enterprise scale where provenance actually matters.

Karpathy is upfront that what he's built is working memory for a single agent, and it's truly remarkable at that. The jump to shared context across teams, reconciling conflicting beliefs at org scale, ontologies that don't collapse under the weight of a hundred contributors - those are much harder problems and what we are exploring with the Internet of Cognition.

PS: The screenshot shows my Obsidian vault after just two runs: one with Karpathy's original tweet and gist file itself (so meta!) and one with our Internet of Cognition paper. Claude (Sonnet) read it, compiled it into structured summaries, entity pages, concept pages, backlinks, merged all the information cohesively, and keeps it all maintained from there. You just read the wiki. It's simply magical.
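The three-folder write policy described above can be made mechanical. A hypothetical sketch (the Human/, Raw/, and Agent/ names come from the post; `VAULT`, `POLICY`, the actor labels, and the file names are illustrative assumptions):

```python
from pathlib import Path

# One Obsidian vault, three top-level folders, each with its own
# write permissions, mirroring the post's setup.
VAULT = Path("vault")
POLICY = {
    "Human": {"human"},           # long-form writing; no LLM or agent touches this
    "Raw":   {"human", "agent"},  # shared dumping ground for raw snippets
    "Agent": {"agent"},           # LLM-synthesized wiki; no human edits
}

def ensure_vault():
    """Create the folder layout if it does not exist yet."""
    for folder in POLICY:
        (VAULT / folder).mkdir(parents=True, exist_ok=True)

def check_write(actor: str, path: Path) -> bool:
    """Return True iff `actor` ('human' or 'agent') may write `path`."""
    top = path.relative_to(VAULT).parts[0]
    return actor in POLICY.get(top, set())

ensure_vault()
assert check_write("agent", VAULT / "Raw" / "clip.md")         # both dump here
assert check_write("agent", VAULT / "Agent" / "wiki.md")       # agent synthesizes
assert not check_write("agent", VAULT / "Human" / "essay.md")  # protected
```

A guard like this would sit in the agent's tool layer, so "no agent touches this folder" is enforced rather than merely promised.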
-
Vijoy Pandey shared this
What is the Internet of Cognition? Glad you asked :) Learn about it in under 60 seconds. I have written a whitepaper on this, talked about it on multiple podcasts, given hour-long presentations. Turns out explaining it in under 60 seconds is harder than all of that. But I beat the clock. ...and I caught that microphone pretty well!
-
Vijoy Pandey shared this
Right on the heels of RSAC from last week, this paper that dropped yesterday should be on everyone's radar. It accelerates the quantum threat timeline in a significant way.

Breaking RSA or ECC encryption with a quantum computer requires running something called Shor's algorithm at scale. The biggest question has always been how many physical qubits you need to run Shor's. For years that number was in the millions, which made the threat feel distant. The 2021 Gidney-Ekera paper, the prior gold standard, put it at ~20 million qubits.

Yesterday, a team from Caltech and Oratomic, including John Preskill, one of the architects of quantum error correction, published a paper bringing that number down to 10,000-14,000 physical qubits on a neutral-atom architecture. For context: some neutral-atom labs have already demonstrated arrays of 6,100 qubits. Which means that the gap between theory and practice just went from roughly 3,000x to about 2x. 2x!!

What happened? This is all due to better error-correcting codes that pack more logical qubits into the same physical hardware, combined with reconfigurable atomic architectures. The result is a 2,000x reduction in qubit requirements over the prior gold standard.

Runtime for this is still ~10 days and not minutes. So this isn't like "oh, RSA is broken today in real-time". But there are 3 bullets that make this urgent:
- The "harvest now, decrypt later" threat is now active and real. Adversaries are collecting encrypted traffic now, to decrypt once hardware catches up - and that horizon just moved much, much closer.
- ECC-256, the crypto that actually protects most live TLS, SSH, and PKI traffic today, is more quantum-vulnerable than RSA-2048 in this analysis. (Because smaller keys mean simpler quantum circuits.)
- Quantum computing and quantum networking hardware is improving fast. The 10-day runtime at 26,000 qubits will get shorter, and investment in this space is accelerating.

All this to say, the timeline for deploying NIST-standardized post-quantum cryptography, PQC (ML-KEM, ML-DSA, SLH-DSA), just moved to now.
-
Vijoy Pandey shared this
How do we codify human glue into software? Right now, humans are doing the intent alignment, the coordination, the context-building between AI agents. We are the connective tissue. The Internet of Cognition is the infrastructure to change that.

Talked through this with Nathan Labenz on The Cognitive Revolution podcast: why the semantic layer is key to agents thinking together, why identity (not capability) is the real blocker to enterprise AI autonomy, and what happens when agents skip language entirely and exchange raw brain states 🤯.

Link to the episode and cliff notes blog in comments.
-
Vijoy Pandey shared this
Just this past week I have come across many conversations where people, yes humans, are *saying*, not writing, things like:
"The biggest unlock ..."
"Here's the problem no one talks about ..."
"This signals ... <snip> refined elegance ..."
Even in social, shooting-the-breeze kinds of conversations! We all desperately need a ritual for LLM-detoxing.
(preaching, while quickly using nano banana to create an image for this)
-
Vijoy Pandey reacted on this
Big news for our AI ecosystem 🚀 We're incredibly excited to team up with Cisco to launch the "Cisco Scale Hub" at STATION F, a new acceleration program for startups building AI systems, multi-agent architectures, and infrastructure technologies. The mission is simple: help founders go from product to real-world deployment faster.

Startups joining the program will get access to Cisco's global ecosystem, technical mentorship, enterprise validation opportunities, and production-level infrastructure. And that's not all: Cisco will also deploy a high-performance network and enhanced security across the STATION F campus, supporting founders building here every day.

We're very happy to welcome Cisco to the campus and excited to see what the next generation of AI founders will build! Applications are now open, more info on our website.
-
Vijoy Pandey liked this
Join us for a first live look at a critical breakthrough in scalable quantum networking: the Cisco Universal Quantum Switch. Our first-of-its-kind prototype preserves delicate quantum data and operates at room temperature over standard telecom fiber. This innovation paves an achievable path toward distributed quantum computing in years, rather than decades.

During our upcoming webinar, Ramana Kompella, Head of Cisco Research, and Reza Nejabati, Head of Quantum Research and Quantum Labs, will share the team's research findings, and you can engage directly with our experts during a live Q&A to explore how this tech will shape the future of secure, scalable, and interconnected systems.

Date: Thursday, May 28th
Time: 10 a.m. PT
Register here and save your spot: https://cs.co/6042BBb8ms
-
Vijoy Pandey reacted on this
- 4 Factors Reveal When AI is Out of Control
- Claude is conscious + Internet of Cognition
- Teen's Fashion Innovation Stops Assaults + More
- An undiscovered Musical Artist
Every Fri 1pm-1.30pm (UK time) or listen later. Our small #AI, #Innovation & #Ethics chat - join us https://lnkd.in/gZ57pH_u
Muzaffar Garakhanli Mustafa Kamel Mariam Samir Sachin Panicker
-
Vijoy Pandey liked this
As qubit requirements grow in quantum computing, a single machine will not be enough. We need many quantum devices capable of working together as one system. Until now, there was no way to network them without destroying the quantum information in transit. We have just solved both of these challenges.

The Cisco Universal Quantum Switch prototype connects quantum devices from different vendors within a single network. At room temperature. Over standard telecom fiber. And the quantum information stays intact.

To learn more, see the explanation from Vijoy Pandey, SVP & GM of Outshift by Cisco ➡️ https://cs.co/6044BBZ3s4
-
Vijoy Pandey liked this
"My team is busy. But are they busy on the right things?" The inspiration for answering that question came from cleaning out my closet. You stand in front of a packed closet and realize you wear maybe 30% of what's in there. So you sort: toss what's worn out, donate what doesn't fit, tailor what's worth saving, keep what you love. Our workloads are the same way - we just never stop long enough to sort through the pile.

That closet logic became a Workload Triage framework I started with my team, and plan to run twice a year:
🗑️ Trash - Low impact or duplicative? Stop doing it.
🤝 Donate - Not yours to own? Hand it back without guilt.
🔧 Re-architect - High value but inefficient? Redesign how it gets done with AI tooling.
🎯 Keep - Strategic and essential? Protect it. Remove the friction.

I used Atlassian #Rovo in #Confluence to turn this concept into a repeatable template - tables, criteria, examples, and all. I brought the idea; in less than 30 minutes, Rovo helped me shape it into something my team could pick up and use immediately.

The magic isn't the sorting. It's the conversation it forces: What do we stop? What do we hand back? What do we redesign? And what do we protect? If your team hasn't triaged its workload recently, you might be surprised how much you're carrying that you don't need to. Especially if you are implementing new AI tools into your team's work, this is the pre-work you need to do to figure out where to focus that investment. Make no mistake: asking your team to adopt new AI tools and workflows requires time-consuming, manual work and deep strategic thinking. AI is not an easy button.

I do have more to say on the topic (how I used Rovo to summarize the results, identify patterns and trends, produce a team memo and action plan)... should I write a blog about it? And tell me: do you do something like this on a regular basis? How are you auditing your work for where to put AI in as a meaningful improvement?
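The four triage buckets above reduce to a small decision rule. A toy sketch: the buckets and their criteria come from the post, while the scoring fields, thresholds, and example workloads are made-up illustrations:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    impact: int       # 1 (low) .. 5 (high) business value - illustrative scale
    ours_to_own: bool
    efficient: bool

def triage(w: Workload) -> str:
    """Classify a workload into one of the four triage buckets."""
    if w.impact <= 2:
        return "Trash"         # low impact or duplicative: stop doing it
    if not w.ours_to_own:
        return "Donate"        # not yours to own: hand it back
    if not w.efficient:
        return "Re-architect"  # high value but inefficient: redesign with AI tooling
    return "Keep"              # strategic and essential: protect it

# Hypothetical backlog to run through the rule
backlog = [
    Workload("weekly status slides", impact=1, ours_to_own=True, efficient=True),
    Workload("partner escalations", impact=4, ours_to_own=False, efficient=True),
    Workload("release-notes drafting", impact=4, ours_to_own=True, efficient=False),
    Workload("roadmap planning", impact=5, ours_to_own=True, efficient=True),
]
for w in backlog:
    print(f"{w.name}: {triage(w)}")
```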
Experience
Education
Explore more posts
-
Tara Neal Ramaprabha
10K followers
Today's read: Totogi Secures First European Customer with AI-Native BSS Deployment 📣 https://lnkd.in/g3_hYc8S Totogi has signed a strategic agreement with a major European service provider publicly traded on the London Stock Exchange to deploy its cloud-native BSS platform as part of a modern… Read the full story by visiting the link above ⬆️ Never miss a beat in telecoms. Catch the latest news on The Fast Mode 🚀 #telecoms #tech #innovations #5G #technology
-
Chandra R. Srikanth
moneycontrol.com • 47K followers
Provocative but not accurate: Cisco's Jeetu Patel reacts to Vinod Khosla jibe about company
Khosla, founder of Khosla Ventures, had said at the India AI Impact Summit that somebody who has worked at Cisco for 15 to 20 years would be considered "unemployable", arguing that people tend to get ossified in large organisations and fail to keep up with change. "Vinod is provocative in ways sometimes that are interesting, but not completely accurate," Jeetu Patel told Moneycontrol, responding to the remark at the India AI Impact Summit in New Delhi. https://lnkd.in/guupKHPu
-
Cloud and Clear UK
85 followers
The Broadcom-VMware acquisition left many IT leaders scrambling for alternatives. But what if you didn't have to choose between familiar operations and cloud innovation? AWS Elastic VMware Service (EVS) is delivering up to 46% cost savings while solving the most painful migration challenge: time. Unlike traditional cloud migrations that take months of planning and refactoring, EVS lets you:
✅ Lift and shift workloads in hours, not weeks
✅ Maintain full administrative control (root ESXi access)
✅ Keep existing VMware expertise and investments
✅ Gradually modernize at your own pace

The sweet spot? Time-critical data centre exits. When lease renewals loom or infrastructure refresh cycles demand immediate attention, EVS provides the operational consistency your teams need with AWS innovation potential. But EVS isn't right for every scenario. In my latest deep-dive, I cover:
🔍 When EVS makes strategic sense (and when it doesn't)
🏗️ Implementation best practices that actually work
📊 Real cost comparisons and migration strategies
⚖️ How it stacks up against alternatives

Are you facing data centre lease renewals or looking to reduce VMware licensing exposure? The migration timeline doesn't have to disrupt your business. Read the full analysis: https://lnkd.in/et5yjfKg #AWS #VMware #CloudMigration #DataCentre #HybridCloud #ITStrategy
-
Frank Contrepois
Coblan Ltd • 6K followers
NetApp's Q3 FY2026 results offer a revealing detail worth noting. The company reported record all-flash array revenue, attributing it to constraints on power and density driven by AI workloads in data centres. This aligns with what many in FinOps have observed: enterprises carry a substantial, often hidden expense even before running their first AI model. The costs for compute resources like GPUs and tokens represent only part of the picture. The greater expense lies in the infrastructure side of AI.

💡 Industry analysis suggests this "infrastructure tax" can increase AI budgets by 75% to over 130%. For example, a model and GPU budget of $1 million could realistically translate to $1.75 million to $2.35 million once storage upgrades, data pipeline rewiring, and power and cooling improvements are included.

🤔 Boards should be asking what portion of AI spend goes to building and maintaining the underlying data platform versus actual model experimentation, and whether underfunding the platform is creating costly bottlenecks.

💰 Context is critical. Well-funded tech firms can weather this additional cost, while organisations led by cautious CFOs, particularly in low-margin sectors like retail, may find cloud-based AI rentals a more practical option than large upfront investments.

This perspective reframes AI budgeting as a strategic, top-down decision rather than just a line-item for individual projects. Without a robust data foundation, isolated AI trials risk leading straight to bill shock. The cloud remains a useful environment for experimentation, enabling teams to validate concepts before investing heavily in infrastructure. More importantly, the conversation must move beyond simply counting GPU hours and token prices to include quantifying data readiness and infrastructure debt. Reflect on how much of your AI budget is actually consumed by those hidden infrastructure costs.

AI first draft, fine tuning by me.
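The $1 million example above is a simple multiplier. A tiny sketch makes the "infrastructure tax" arithmetic explicit; the 0.75 and 1.35 uplift fractions are the figures implied by the post's 75%-to-over-130% range and its $2.35M endpoint:

```python
def total_ai_budget(model_and_gpu: float, infra_tax: float) -> float:
    """Total spend once the hidden infrastructure multiplier is applied.

    infra_tax is the fractional uplift covering storage upgrades,
    data-pipeline rewiring, and power and cooling improvements.
    """
    return model_and_gpu * (1 + infra_tax)

base = 1_000_000  # the $1M model-and-GPU budget from the post
print(round(total_ai_budget(base, 0.75)))  # -> 1750000  ($1.75M)
print(round(total_ai_budget(base, 1.35)))  # -> 2350000  ($2.35M)
```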
-
Pooja Kashyap
AI Advances • 3K followers
What I learned from talking with Sandeep Kaipu about AI and leadership
Every once in a while, you come across a leader whose perspective reframes how you think about tech and leadership. I recently had the chance to speak with Sandeep Kaipu on our Conversive Talk. Sandeep leads platform services for VMware Cloud Foundation (VCF) at Broadcom, shaping the private cloud and AI infrastructure that power some of the world's largest enterprises.

What stood out to me about Sandeep wasn't just his deep technical knowledge, but how clearly he thinks about people. He says it best: "True engineering leadership isn't about building isolated technologies, it's about architecting ecosystems that enable innovation to flourish. That's as true for code as it is for people."

His upcoming book, AI Engineering Leadership (2025), explores that intersection beautifully: balancing logic and empathy, systems and storytelling, technology and trust.

A few takeaways that stayed with me:
• Infrastructure is no longer plumbing, it's the foundation of insight and creativity.
• Leadership is service, helping others see their own potential more clearly.
• Curiosity is the only sustainable strategy.

If you're passionate about AI, engineering, or leading teams through transformation, this conversation is a must-read. https://lnkd.in/guNgwqNQ #ConversiveTalks #Leadership #AI #Engineering #TeamCulture #AIEngineering
-
Dave Kellermanns
Broadcom • 2K followers
Workload automation is evolving fast, and complexity is at an all-time high. How can IT teams achieve true visibility and control across hybrid and multi-cloud environments? In this on-demand webinar, @Rajeev Kumar, Head of Product for AOD Automation at Broadcom, shares the latest trends in workload automation and demonstrates how Automic SaaS acts as a manager of managers to orchestrate complex processes end to end. Watch now: https://lnkd.in/gUFrBTMh
-
David Ramel
1105 Media Inc. • 389 followers
Selecting the right GPU for AI workloads is more nuanced than many realize. While vendors emphasize processing power, the real determining factor is VRAM capacity. Memory serves as the hard limit for which models you can run - you can't execute an AI model that won't fit in GPU memory, regardless of processing speed. Understanding VRAM consumption is critical: model weights, activations, KV cache, and system overhead typically require 1.2-1.5x the stated model size. For inference workloads, this calculation becomes even more complex when considering factors like batch size and conversation length. Training workloads can require 6-8x the model size in memory. Quantization offers a pathway to running larger models on GPUs with limited VRAM by reducing numerical precision, though this comes with potential tradeoffs in accuracy. Our latest article breaks down these considerations to help you make informed GPU selection decisions for your AI infrastructure.
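These rules of thumb translate into a simple estimator. A hedged sketch: the 1.2-1.5x and 6-8x ranges are from the post, while the 1.4x and 7x midpoints, the bytes-per-parameter values, and the example model sizes are my assumptions:

```python
def vram_estimate_gb(params_b: float, bytes_per_param: float = 2,
                     overhead: float = 1.4, training: bool = False) -> float:
    """Rough VRAM needed for a model, per the rules of thumb in the post.

    params_b        model size in billions of parameters
    bytes_per_param 2 for fp16/bf16, 1 for int8, 0.5 for 4-bit quantization
    overhead        inference multiplier for activations, KV cache, and
                    system overhead (post cites 1.2-1.5x; 1.4 is a midpoint)
    training        training can need 6-8x the model size in memory;
                    7x is used here as a midpoint assumption
    """
    weights_gb = params_b * bytes_per_param  # 1B params * 2 bytes ~= 2 GB
    multiplier = 7.0 if training else overhead
    return weights_gb * multiplier

# A 7B-parameter model in fp16:
print(round(vram_estimate_gb(7), 1))                       # -> 19.6 (inference)
print(round(vram_estimate_gb(7, training=True), 1))        # -> 98.0 (training)
# The same model 4-bit quantized fits much smaller cards:
print(round(vram_estimate_gb(7, bytes_per_param=0.5), 1))  # -> 4.9
```

Real memory use also depends on batch size and context length, as the post notes, so treat the output as a lower-bound sanity check, not a sizing guarantee.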
-
AceCloud
2K followers
Scalability isn't just a feature, it's a foundational requirement for AI at scale. In our recent webinar with NetApp, Abhijit Ray broke down how intelligent data infrastructure must start with a clear understanding of what you're building, and then scale accordingly. With NetApp, you can scale up to 24 nodes per cluster and expand further across multiple clusters, giving enterprises the confidence to meet growing AI demands without compromising performance. But it's not just about scaling. It's about aligning architecture with application goals, choosing the right models, and building with purpose. Watch this short clip to hear Abhijit's take on scaling AI infrastructure the right way. Watch the full webinar here: https://lnkd.in/giw6kNdJ
-
TheNextGenTechInsider.com
665 followers
86% of VMware Customers Cut Usage After Broadcom Acquisition, CloudBolt Study Finds 📌 86% of VMware customers are cutting usage after Broadcom’s acquisition, shifting workloads to cloud platforms like AWS and Azure in a strategic, phased exit. While cost fears have cooled, concerns over future price hikes keep migration momentum strong-marking a quiet but significant shift in enterprise IT infrastructure. 🔗 Read more: https://lnkd.in/dB8FdP5d #Vmware #Broadcom #Cloudbolt #Virtualization #Workloadmigration
-
VAST Data
64K followers
Are traditional data architectures slowing your AI initiatives? Bottlenecks in data, compute, and observability can cripple speed to insight when it matters most. Discover how to overcome these limitations. VAST Data is teaming up with Supermicro and Voltage Park to dive into a partnership built to redefine AI model training and deployment at scale. This webinar will dive into full-stack AI data services, optimized observability, and how to achieve readiness for next-gen GPU hardware. This is your chance to learn about building dedicated, flexible, and high-performance AI infrastructure. Save your spot through the link in the comments!