Gregg Carman
Cambridge, Massachusetts, United States
8K followers
500+ connections
About
Are you confident that you’re making the right go-to-market decisions?
Are those…
Articles by Gregg
-
Make Your Go-to-Market a Blockbuster
In the volatile world of B2B technology, I have witnessed the rise and fall of countless companies over my 30-year…
35
4 Comments
Activity
8K followers
-
Gregg Carman shared this: Interesting paper on neutral atoms from Mark Saffman, Robert Kent, Ph.D., and colleagues. #quantumcomputing #machinelearning #neutralatoms #quantumcontrols

I am excited to share that our paper, “Efficient measurement of neutral-atom qubits with matched filters,” has been published in Physical Review Applied. Read it here: https://lnkd.in/ekGFPYK2

In this work, we introduce two matched filter models for processing pixel data from a 3×3 array of neutral atoms to discriminate between qubit states (bright vs. dark). Our approach achieves 43% higher accuracy than traditional methods while requiring four orders of magnitude fewer multiplications than prior machine learning techniques, resulting in evaluation times that are 540× faster. Critically, the complexity of our models scales linearly with the number of qubits, making them compatible with large-scale quantum computers of the future.

The two models are illustrated below. Model (a) evaluates a weighted sum of the pixels corresponding to a specific qubit and then applies a threshold to determine its state, while model (b) incorporates the average pixel value from neighboring qubits to account for crosstalk, making the approach suitable for densely packed atom arrays. The weights are learned through linear regression, making training computationally inexpensive compared to other machine learning approaches. An additional benefit is that the learned weights are physically interpretable, providing insight into experimental imperfections.

Many thanks to my co-authors Linipun Phuttitarn, Chaithanya Naik M., Swamit Tannu, Mark Saffman, Gregory Lafyatis, and Daniel Gauthier. #quantumcomputing #machinelearning #neutralatoms #quantum #qubits #readout
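Model (a) described in the post amounts to a learned linear filter plus a threshold. As a rough illustration only (synthetic data; the shapes, signal levels, and variable names are invented and this is not the paper's code or dataset), the idea can be sketched in a few lines of NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: n_shots noisy images of one qubit's pixel
# region, labelled bright (1) or dark (0). All values here are
# invented for illustration.
n_shots, n_pix = 2000, 16
labels = rng.integers(0, 2, n_shots)
images = rng.normal(0.0, 1.0, (n_shots, n_pix)) + 2.0 * labels[:, None]

# Model (a): learn pixel weights by linear regression on the labels,
# then classify each shot by thresholding the weighted pixel sum.
X = np.hstack([images, np.ones((n_shots, 1))])  # add a bias column
w, *_ = np.linalg.lstsq(X, labels, rcond=None)  # cheap least-squares fit

pred = (X @ w > 0.5).astype(int)                # threshold at 0.5
accuracy = (pred == labels).mean()
print(f"matched-filter accuracy: {accuracy:.3f}")
```

Model (b) would simply append the neighboring qubits' average pixel value as an extra regression feature so the fit can absorb crosstalk.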
-
Gregg Carman shared this: Great to see Chad Rigetti and team at Sygaldry Technologies breaking the boundaries on innovation in quantum-accelerated supercomputing.

Today we're pleased to share that Sygaldry has raised $139M to build quantum-accelerated AI servers, with the Series A led by Breakthrough Energy and the seed led by Initialized Capital. We're building our servers specifically for AI data centers in order to deliver more compute per watt and make AI faster, cheaper, and more energy efficient. We've spent the past year focused on tech and team, and we're continuing to hire brilliant, curious, kind, and collaborative people in AI, quantum, and more. Come build the future with us! https://lnkd.in/g9M2uvkH
-
Gregg Carman reposted this: If a computer had a soul, this would be close. The Novera QPU. It will sit at the core of the first university-owned and operated full-stack quantum computer at the University of Saskatchewan. It’s built by Rigetti Computing, founded by Chad Rigetti—a Moose Jaw guy and U of R alum. Reppin'!

The wild part is this is just one piece. There’s also a cooling system colder than outer space (Zero Point Cryogenics), a “nervous system” delivering incredibly precise pulses (Qblox), automation and calibration software (QuantrolOx), and quality control systems (Testforce). Each is impressive on its own. Together, it’s something else entirely. It will be used by researchers and industry to help answer some of the most perplexing questions known to man.

The USask team brought the vision together, with support from the federal and provincial governments (including us at Innovation Saskatchewan). But projects like this don't happen by committee. Sure, we need community and leaders to push it forward (many of them pictured here), but there's usually someone beating the drum. The President of USask, Vincent Bruni-Bossio, called him “persistent” with a bit of a wink and smirk. I loved that. Dr. Steven Rayan is the kind of person who brings people along with big ideas. You could see it—in his students' excitement, in how he spoke with media and leaders, and in how he patiently answered my probably annoying questions at the end of a long event (I've just read 'What Is Real' by Adam Becker so I'm insufferable).

Great ideas grow here because great *people* grow here. This is the start of a great chapter in Saskatchewan innovation thanks to Dr. Rayan and the champions of quantum at USask like Baljit Singh and beyond. It's pretty incredible being in this room, at this time. I'm proud our agency is supporting the work. Also, I didn't drop the thing... this counts as contribution to Saskatchewan's quantum future, right?
Photo credits: Matt Braden Photo, with extra credit from Dr. Rayan who asked if I wanted a photo with the precious quantum 🍪
-
Gregg Carman shared this: Great to spend time with Anastasia Marchenkova and Elizabeth Goin Furber during NVIDIA GTC in San Jose. Looking forward to more collaboration for years to come. #quantum #quantumcomputing #quantumcontrols #qblox #nvqlink #deeptech Bauke van Rhijn Andrea Delgado

Great 2 weeks of AI and Quantum at Montgomery Summit and NVIDIA GTC!
-
Gregg Carman reposted this: If you’re responsible for technology strategy, this is the shift to watch. The question is no longer “Should we care about quantum?” Rather, it’s “How do we ensure the platforms we choose today can integrate quantum capabilities tomorrow?” Nice article from Nicholas Harrigan: “The integration of QPUs and supercomputers is at an inflection point.”
-
Gregg Carman shared this: Qblox is proud to partner with Riverlane on real-time quantum error correction using the QECi framework as we Build Quantum, Together on a path towards fault-tolerant quantum computing (FTQC). #quantum #quantumcomputing #quantumcontrols #ftqc #qec #qblox #riverlane https://lnkd.in/epdnmDY6 (“Riverlane and Qblox integrate systems to tackle quantum error correction challenge”)
-
Gregg Carman shared this: Check out our latest Quantum Builders with Hewlett Packard Enterprise, Masoud Mohseni, and Namit Anand. #quantum #quantumcomputing #quantumcontrols

🔔 This morning at #APSSummit26, booth 810, we have Masoud Mohseni, Senior Distinguished Technologist, and Namit Anand, Research Scientist, of HPE Labs!

➡️ 10:10 AM | Qblox booth 810

Join us for this very cool & insightful conversation on the architecture and applications of hybrid quantum–classical systems, including:
👉 How quantum processors integrate with classical HPC systems
👉 HPE’s approach to building scalable hybrid architectures
👉 Promising quantum algorithms and near-term scientific applications
👉 The role of real-time control, decoding, and system-level integration

Come by, listen in, and bring your questions! We're serving brownies! 🧁
-
Gregg Carman shared this: Qblox is a proud member of this partnership with Elevate Quantum, QuantWare, Q-CTRL, NVIDIA, Maybell Quantum, Riverlane, and Arrow Electronics. First in the US! #quantum #quantumcomputing #quantumcontrols #deeptech

Elevate Quantum and Partners Launch United States’ First Quantum Open Architecture System in Record Time

Elevate Quantum, in partnership with QuantWare, Q-CTRL, Qblox, Maybell Quantum, and Arrow Electronics, today announced the fully operational launch of the Quantum Platform for the Advancement of Commercialization (Q-PAC) — the first fully open, commercially deployable quantum system in the U.S. and the fastest quantum infrastructure deployment ever completed in the country. Built in only five months, Q-PAC shows that the Quantum Utility Block (QUB) is now the fastest path to deploying operational quantum computers. This open, modular, and pre-validated reference framework allowed the system to reach deployment at a fraction of the cost and time typically associated with closed, full-stack approaches.

“Q-PAC proves that the United States can deploy quantum systems at commercial velocity,” said Elevate Quantum COO and CFO Jessi Olsen. “This is not a research testbed. It is a fully reproducible, commercial-grade quantum system where companies can test-drive their quantum future before they build their own.”

This initial system is powered by QuantWare's 17-qubit Contralto-A QPU and has a clear upgrade path to 100-qubit-class processors heading into 2027, allowing supported Q-PAC use cases to scale without replacing infrastructure. And with #NVQLink on the roadmap through the Arrow Electronics collaboration, the platform is already pointing toward the next step: hybrid quantum/classical systems built natively for HPC environments.

Read more: https://lnkd.in/etbucXkm

#QuantumComputing #QuantumHardware #QuantumOpenArchitecture #QUB #HPC #NVQLink #QuantWare
-
Gregg Carman shared this: Qblox is so excited to be collaborating with NVIDIA Quantum and the entire team with Sam Stanwyck and Krysta Svore as we Build Quantum, Together on the journey towards FTQC (fault-tolerant quantum computing). What an amazing week! #quantum #quantumcomputing #quantumcontrols #qec #nvidia #nvqlink #qblox

Thank you to our amazing ecosystem of partners for creating an unforgettable #NVIDIAGTC26. It’s incredible just how quickly the quantum ecosystem is moving toward fault-tolerant quantum-GPU supercomputing.

Last October at GTC DC, we launched NVIDIA NVQLink, an open system architecture designed to solve two massive bottlenecks: creating a scalable resource for real-time QEC, and providing a single interface to seamlessly integrate quantum processors with supercomputers. A month later at SC25, we saw the first fruits of that labor when Quantinuum used NVQLink to perform real-time decoding on their Helios QPU, beating their 2 ms latency requirement by 32x while improving logical fidelity by 5x.

This week at GTC 2026, we took the next major step. We brought that architecture directly to developers by launching cudaq-realtime in our CUDA-Q 0.14 release. Fault-tolerant quantum computing demands a tight classical-quantum feedback loop. QEC decoding and autocalibration have to happen in real time, right alongside quantum operations, before qubits decohere. cudaq-realtime gives developers the runtime API to execute microsecond-latency callbacks between GPUs and quantum controllers, entirely bypassing the host CPU. In addition to bringing our initially benchmarked sub-4 µs latency to developers through cudaq-realtime, we’ve improved the underlying latency by nearly a full microsecond. Today, on an RTX 6000 Blackwell Pro with a ConnectX-7 NIC, we are measuring FPGA-GPU-FPGA transport-only round trips at just 2.92 µs.

Architecture and APIs only matter if they are put to work by the community. At NVIDIA, our mission is to build these systems in partnership with the entire quantum ecosystem, not as a walled garden. So we were thrilled to see the many partner success stories, from deployments at Elevate Quantum with QuantWare, Qblox, and Q-CTRL and in Korea with Anyon Technologies and SDT Inc., to Dell Technologies validating the first NVQLink-compatible commercial systems.

Check out the full NVQLink and cudaq-realtime breakdown, including our partner news and how to run the latency benchmarks on your own systems, in our latest developer blog: https://lnkd.in/gDyDr8KE

We also expanded our CUDA-Q ecosystem overall, with 40+ partner press releases, including:
- Orchestration and scheduling for quantum-GPU supercomputing, where Pasqal is integrating CUDA-Q with Slurm
- Conductor Quantum, qBraid, Classiq, and SDT Inc. integrating agentic developer tools with CUDA-Q
- Innovation at the QEC layer, where Alice & Bob are innovating on new error-correcting codes with our open-source CUDA-Q QEC library

And many more: https://lnkd.in/gXGcf_SB

Thank you to everyone who joined us in San Jose this week. The present is bright and the future is brighter!
-
Gregg Carman liked this: Massachusetts Executive Office of Economic Development (EOED), 2d

At the SelectUSA Investment Summit, Secretary Eric Paley and MOITI (Massachusetts Office of International Trade and Investment) Executive Director Jeevan Ramapriya met with companies and business leaders from around the world to showcase why Massachusetts continues to be a top destination for global investment, innovation, and growth. And with the Mass Wins Act, Massachusetts is doubling down — including a proposed $50 million GlobalMass Innovation Access Fund to attract global capital into Massachusetts companies and $20 million for site development and infrastructure to help international companies establish and grow here. Because if you want to compete globally, Massachusetts is where you build. Office of Massachusetts Governor Maura Healey
-
Gregg Carman liked this: If we were designing chips for AI from scratch today, we wouldn't build a GPU. Big takeaway from the hardware panels at The Montgomery Summit this year!

The GPU won the AI race despite being originally built for rendering polygons. Parallel matrix math just happened to be what AI needed. Even Jensen Huang said at the GTC keynote that NVIDIA is a platform company that got very good at something at exactly the right moment. But the panels on AI hardware, photonics, and quantum showed there is a lot of room for new types of compute, and open questions about how it will all tie together, both in the physical data center world and in the software world: what does the dev need to know to program across many types of chips?

BTW, NVIDIA is already a photonics company with its eye towards quantum networks. Most people missed it. At GTC last year, Jensen announced Spectrum-X and Quantum-X — silicon photonics switches that move the optical layer inside the chip package. Copper is becoming a problem for the future of heterogeneous compute. Per NVIDIA, this cuts power consumption up to 3.5x and improves resiliency 10x. And the startups building here, like Lightmatter, Ayar Labs, and Enfabrica, got validated instead of killed.

Which leads me to quantum... Quantum isn't competing with GPUs. It needs them. And a lot of them! Nobody is replacing GPUs with quantum computers; I think that's the #1 myth I have to bust in early conversations with investors and people exploring quantum for the first time. The real question is where quantum will win both algorithmically and economically: optimization, molecular simulation, workloads classical hardware structurally can't touch (the "intractable" problem, if you will!).

Thank you Jamie Montgomery and Oliver Marques for being such amazing hosts and putting together an incredible program for Monty 2026! Hope to be back next year 😊
-
Gregg Carman liked this: Hot take on the Wellcome Leap Q4Bio results: we haven't shown clean quantum advantage in chemistry yet, and we're already running quantum biology. That's either deeply premature or secretly the right move.

Caffeine (24 atoms) is past exact classical simulation. Quandela notes that simulating it exactly on classical hardware "would take longer than the age of the universe." Proteins have thousands of atoms. Quantum should win here, and yet we don't have an uncontested, commercially relevant chemistry result where quantum beats classical. A recent review (arXiv, Dec 2025) puts it bluntly: "no quantum-native method has been validated at a commercially relevant scale." Quantinuum says "within reach". IonQ says "significant advancement". John Preskill himself asked the uncomfortable question at his Q2B 2025 keynote: "When it comes to many-particle quantum simulation, a nagging question is: 'Will AI eat quantum's lunch?'"

Here's an XKCD where a chemist tells a biologist "biology is just applied chemistry," and the physicist behind them tells the chemist the same thing about physics. If biology reduces to chemistry, and we are still working on chemistry, then quantum biology is built on an unfinished foundation. So a challenge for quantum biology looks like skipping a step. Except the Q4Bio results make a strong case that it isn't.

Huge congratulations to Algorithmiq, Cleveland Clinic, and IBM for taking the $2M for simulating photodynamic therapy, a light-activated cancer treatment, approaching 100 qubits. Five of six teams used IBM's utility-scale Heron or Nighthawk processors. The spec was genuinely hard: 50+ qubits, 1,000–10,000 gate circuits, and a clear path to scaling. Teams had to show their algorithm works on today's ~50-qubit demonstrations and scales to the 100–200 qubit, 10⁵–10⁷ gate depth machines expected in 3–5 years. Circuit depth and error behavior both have to stay under control as you grow the problem.

Other finalists:
⚛️ University of Oxford + Wellcome Sanger Institute encoded an entire Hepatitis D genome onto a quantum computer using quadratic unconstrained binary optimization (QUBO) formulations.
⚛️ Infleqtion + University of Chicago + Massachusetts Institute of Technology identified novel cancer biomarkers through hybrid quantum-classical optimization.
⚛️ Stanford University + Michigan State University modeled ATP/GTP hydrolysis, the reactions powering most cellular processes.
⚛️ Nottingham + Phasecraft + QuEra Computing Inc. applied quantum methods to covalent inhibitor design for myotonic dystrophy.

The real quantum-for-bio frontier probably isn't drug discovery alone. Drugs are molecules, molecules are chemistry. The more interesting frontier is biology that isn't reducible to single-molecule simulation: gene regulatory networks, cellular decision-making, the dynamics of how thousands of proteins interact. Excited to see what gets built once the next generation of utility-scale processors hits broader research access!
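The QUBO formulation mentioned for the Oxford/Sanger genome encoding minimizes x^T Q x over binary vectors x. A minimal, generic sketch of that formulation (toy matrix, not the actual genome encoding):

```python
import itertools
import numpy as np

# Toy QUBO instance: diagonal terms reward setting a bit, off-diagonal
# terms penalize turning on adjacent pairs. Purely illustrative.
Q = np.array([
    [-1.0,  2.0,  0.0],
    [ 0.0, -1.0,  2.0],
    [ 0.0,  0.0, -1.0],
])

def qubo_energy(x, Q):
    """Energy x^T Q x of a binary assignment x."""
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

# Brute force is fine for a handful of variables; annealers, gate-based
# quantum algorithms, and classical heuristics take over as the
# problem grows.
best = min(itertools.product([0, 1], repeat=3), key=lambda x: qubo_energy(x, Q))
print(best, qubo_energy(best, Q))  # -> (1, 0, 1) -2.0
```

The appeal of the encoding is exactly this shape: once a problem is phrased as a QUBO matrix, the same instance can be handed to classical solvers or quantum hardware unchanged.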
-
Gregg Carman liked this: The quantum ML hype cycle has a graveyard. Quantum recommendation algorithms, quantum linear algebra: dequantized. Quantum speedups for sampling: classical algorithms caught up. And are these all toy problems anyway? The pattern has been consistent enough that serious people started asking whether quantum computers would ever have a genuine advantage on classical data at all.

Last week, a team from Google Quantum AI, Caltech, Massachusetts Institute of Technology, and Oratomic published a proof that claims: yes, and here's exactly where. The paper is "Exponential quantum advantage in processing massive classical data." Claim: a quantum computer of poly(log N) size can perform classification and dimension reduction on massive classical datasets, while any classical machine achieving the same performance requires exponentially larger size. This was demonstrated on two real-world data sets: movie review sentiment analysis and single-cell RNA sequencing.

(A note for the skeptics: the classical comparison is against general-purpose algorithms: streaming, sparse-matrix, QRAM-based. The authors explicitly flag that comparisons with dataset-specific classical heuristics are left to future work. A well-tuned classical algorithm for IMDb reviews or PBMC sequencing might close some of that empirical gap. The practical crossover on real hardware is still an open question.)

⚛️ Why this one is different
Most quantum ML proposals broke on the data loading problem. To get a quantum speedup, you need quantum coherent access to classical data. The standard solution was QRAM — store the whole dataset in quantum memory. The problem: sustaining coherent QRAM requires classical co-processing overhead so large that you could solve the problem classically instead. This paper sidesteps QRAM entirely with quantum oracle sketching. Instead of storing the dataset, you build the oracle incrementally from the stream: each data point is processed once and discarded.

⚛️ The hardness result
Let’s minimize the assumptions. The classical hardness here is unconditional. It doesn't rely on P≠NP. It holds even if classical machines get unlimited time. It holds even if BPP = BQP, meaning even if quantum and classical computers can efficiently solve the same problems in principle, the size advantage persists. It relies only on quantum mechanics being correct. If you falsify this experimentally, you've falsified quantum mechanics. The authors frame it explicitly as a Bell inequality for computational complexity.

⚛️ Finding the right algorithm
The field has been searching for where quantum wins on classical data for a long time. Perhaps a tiny quantum machine, streaming data it never stores, can build accurate models that an exponentially larger classical machine cannot. 60 logical qubits. Given that we are knocking on the doors of logical qubits from companies like Quantinuum and Atom Computing and more, it’s very interesting indeed that we can test it for real, soon. arxiv.org/abs/2604.07639
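The "process once and discard" idea has a familiar classical analogue: keep a small fixed-size summary of the stream instead of the dataset. A hedged sketch of that classical analogue follows (the paper's quantum oracle sketch is a different object entirely; the names, sizes, and synthetic data here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Streaming dimension reduction: each point is folded into a small
# d x d second-moment sketch and discarded. Memory stays O(d^2) no
# matter how long the stream runs.
d, k = 8, 2
sketch = np.zeros((d, d))

def stream(n=5000):
    """Yield points one at a time; the full dataset is never stored."""
    basis = rng.normal(size=(2, d))            # hidden 2-dim structure
    for _ in range(n):
        yield rng.normal(size=2) @ basis + 0.05 * rng.normal(size=d)

for x in stream():
    sketch += np.outer(x, x)                   # one pass, point discarded

# The top-k eigenvectors of the sketch define the reduced representation.
eigvals, _ = np.linalg.eigh(sketch)            # eigenvalues ascending
explained = eigvals[-k:].sum() / eigvals.sum()
print(f"variance captured by top {k} directions: {explained:.3f}")
```

The classical version pays O(d^2) memory; the claimed quantum advantage is that an exponentially smaller quantum summary can support tasks no comparably sized classical summary can.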
-
Gregg Carman liked this: I know I'm the last talk of the day at the International Semiconductor Industry Group Quantum Computing Infra Summit tomorrow and we will be tired, but if you listen to my podcasts, you know we always get into spicy topics 🌶️ Join me and Aaron Lott from Hewlett Packard Enterprise, where we will discuss the quantum application hype and where we see real value. The last question I will open to the group: What quantum application do you think is overrated, and which is underrated? If you don't come to my talk and I see you at the reception later, I will be sad 😁 See you there!
-
Gregg Carman liked this: I fear the AI engineers are going to cook us, quantum. 13 institutions across industry, national labs, and academia, covering superconducting, neutral atom, and electron-on-helium hardware, just released Ising Calibration 1. It's a 35B parameter vision-language model (mixture of experts) fine-tuned specifically to read quantum calibration plots and decide what to do next. Rabi oscillations, Ramsey fringes, spectroscopy scans, readout histograms — the stuff every experimentalist has squinted at on a Tuesday at 2 AM trying to figure out why their qubit is misbehaving. The model gets 74.7 on QCalEval, beating the best general-purpose zero-shot model (Gemini 3.1 Pro at 72.3).

A few thoughts:
🔴 Calibration is hard, a huge PITA, and incremental improvements are great (I have been there, spending weeks on calibration workflows and writing processes by hand in lab notebooks that will need to be redone the next time something drifts).
🔴 The paper is honest about what VLMs can and can't do yet. Base models hit 72.3 average; domain tuning gets you to 74.7. Frontier models suffer from "optimistic bias": the model will tell you your experiment succeeded when it didn't. Calibration automation has to solve this before real autonomy.
🔴 This is NVIDIA's flywheel showing up in quantum, fast. The open release isn't charity. It's the same strategic move NVIDIA has been running in compute for a decade: stabilize a layer (hardware, frameworks, now domain-specific open-weight models) so that everyone above it can build on consistent infrastructure instead of reinventing foundations.

NVIDIA, University of Toronto, IQM Quantum Computers, Berkeley Lab, Conductor Quantum, National Physical Laboratory (NPL), Infleqtion, Harvard University, Fermilab, Northwestern University, EeroQ Corporation, Royal Holloway, University of London, and Vector Institute for Artificial Intelligence.

Model weights: https://lnkd.in/esVKMEcx
Benchmark: https://lnkd.in/erft2HB9
Eval code: https://lnkd.in/e3Tb67Hv
-
Gregg Carman liked this: I've long been skeptical of quantum papers that publish claims without publishing the circuits or data. It's a real problem in the field. Post-selection bias is rampant. You run the experiment many times, cherry-pick the runs that worked, and report the result. Hardware access is so restricted that independent replication is nearly impossible. Companies will straight up ban you for benchmarking (you know who you are… 👀)

Which is why I want to talk about how Google published their new quantum cryptography whitepaper, not just what's in it.

Quick summary: The team dropped updated resource estimates for breaking ECDLP-256 — the elliptic curve cryptography protecting most blockchains and cryptocurrencies today. Fewer than 1,200 logical qubits, 90 million Toffoli gates, fewer than 500,000 physical qubits. Approximately a 20x reduction from previous estimates. Execution time in just minutes. But they didn't publish the circuits (and that's OK), because they published a zero-knowledge proof of their quantum circuits instead of the circuits themselves. A zero-knowledge proof lets you prove that a statement is true without revealing any information about why it's true. Google can prove their resource estimates are correct, that the circuits exist and work as described, without publishing the actual attack blueprint.

Vulnerability disclosure in cryptanalysis is genuinely hard. Two competing failure modes:
🔴 Full disclosure hands bad actors a working attack manual. No disclosure leaves the ecosystem unprepared and creates a vacuum that gets filled by unsubstantiated FUD — which is itself an attack vector on public confidence in a multi-trillion-dollar asset class.
🔴 The existing responsible disclosure framework was designed for classical software vulnerabilities, where a patch can be deployed before the deadline expires. And for blockchain, PQC migration doesn't work like patching a server. It requires ecosystem-wide coordination.

A zero-knowledge proof of the vulnerability lets researchers, governments, and the cryptocurrency community verify that the threat is real and the estimates are credible without a step-by-step attack guide. Google explicitly flags 2029 as the migration deadline, engaged the US government before publishing, and is calling on other quantum cryptanalysis research teams to adopt the same model. That's the standard I want to see across quantum research.

Great job from Ryan Babbush and Hartmut Neven on the blog post: https://lnkd.in/e-WWnxKY
Paper: https://lnkd.in/etFqmS5X

(This is part of what we're building toward at Marqov: reproducible, verifiable quantum-classical workflows where the same workload can actually run twice and produce the same result.)
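To make the "prove it without revealing it" idea concrete, here is a textbook toy: a Schnorr-style proof of knowledge of a discrete logarithm, with deliberately tiny demo parameters (p = 23). This is only the classic classroom construction, not Google's system for proving properties of quantum circuits, which is far more elaborate.

```python
import hashlib
import secrets

# Tiny group for demonstration: g = 2 has prime order q = 11 in Z_23^*.
p, q, g = 23, 11, 2

def fiat_shamir(y, t):
    """Hash-derived challenge (Fiat-Shamir) replacing an interactive verifier."""
    return int.from_bytes(hashlib.sha256(f"{g}|{y}|{t}".encode()).digest(), "big") % q

def prove(x, y):
    """Prove knowledge of x with y = g^x mod p, revealing nothing about x."""
    r = secrets.randbelow(q)       # fresh randomness masks the secret
    t = pow(g, r, p)               # commitment
    c = fiat_shamir(y, t)          # challenge
    s = (r + c * x) % q            # response
    return t, s

def verify(y, t, s):
    c = fiat_shamir(y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p   # check g^s == t * y^c

x = 7                    # secret exponent (never sent)
y = pow(g, x, p)         # public value
t, s = prove(x, y)
print("proof verifies:", verify(y, t, s))
```

The verifier learns that someone knows x with g^x = y, and nothing else; the same separation of "the claim is true" from "here is how" is what lets Google's estimates be checked without publishing the attack circuits.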
Experience
Education
Volunteer Experience
-
Board of Directors Member (2007–2011) & Treasurer (2009–2011)
Massachusetts Service Alliance
- 4 years
Social Services
Treasurer and Chair of Audit Committee
-
Board of Directors Member
Boston Center for Adult Education
- 3 years
Arts and Culture
Projects
-
Harvard Business School Case Study "Anatomy of a Sale," by Professors John A. Deighton and Das Narayandas
-
As the key character of this study, I faced a challenge in the midst of a strategic deal. I used the choreography of The Challenger Sale before it existed: Teaching, Tailoring, and Taking Control. The result was a win-win for all parties and long-term commercial relationships that benefited all involved.
-
Sales Expertise:
-
• Sales Management
• Enterprise Solution Selling
• SaaS Subscription Selling
• Strategic Selling
• SPIN Selling
• Challenger Sales
• Diagnostic Selling
• MEDDIC
• Business Development
• C-Level Negotiations
• Strategic Alliances
• Cloud Computing
• Managed Services
• Vertical Market Growth
• Go-To-Market Strategist
Honors & Awards
-
Global Sales MVP (2001) (highest revenue sales representative), Siebel Systems
-
Rookie of the Year (1998), Scopus Technology
-
Top Producing Account Executive (2011), C3 Energy
-
Top Producing Regional Vice President (2009), Mattersight Corporation
-
Top Producing Sales Representative (2005), Visible Path Corporation
-
Top Producing Vice President (2014), C3 Energy
-
Recommendations received
25 people have recommended Gregg