Linda Fry
Greater Seattle Area
5K followers
500+ connections
About
I don't practice GRC as an academic exercise; I build Risk functions that drive business…
Activity
-
Linda Fry shared this

Is your GRC Team generating AI slop?

We are in the heyday of AI. As a tech leader, I’m inspired by the capabilities we’ve unlocked. But there is a hidden cost. In high-pressure innovation cycles, it’s easy to let AI output outpace your maturity. When that happens, you’re building your GRC strategy on sand. For a CISO or GRC exec, AI slop is more than just annoying; it can result in a systemic failure of an organization’s safeguards.

How do you know if your team has crossed the line? Look for these 5 red flags:

• The "Black Box" Defense: Your team can’t stand up to light probing on the work they produce. If your team can’t explain the logic behind why a specific control was deemed "ineffective" without saying "the model flagged it," you have a liability, not an audit trail. Accountability cannot be delegated to an agent.

• High Noise-to-Signal Ratio: Are your cross-functional partners spending more time "cleaning" your risk analyses than acting on them? If they are wading through mountains of AI-generated reports just to find the three useful calls to action, the AI isn't a force multiplier. It’s a distraction.

• The Nuance Gap: AI often misses the human-in-the-loop nuances that define modern GRC. It might flag a technical non-compliance without weighing it against business priorities, or communicate the finding without taking into account specific relationship dynamics. Logic drift happens when models treat risk as a static math problem rather than a dynamic business reality.

• High Inference Tolerance: This is a term we need to get comfortable with. Is your AI agent asserting a risk remediation timeline because it parsed a real roadmap, or is it hallucinating based on generic training data? Without guardrails on temperature, your Tech Risk Management is just high-speed guesswork.

• Automating Dysfunction: Automating a broken process doesn't improve its value proposition. If you use AI to pump out controls-performance and KRI reports that no one reads into a dashboard no one uses, you’ve just accelerated the rate at which you produce waste.

The bottom line is that the AI bubble won’t pop because the technology fails. It will pop because the execution debt bears fruit. The difference between being a victim of the bubble and a soft landing is ensuring that your legacy is not a mountain of high-speed, automated slop.

What’s the most egregious example of AI slop you’ve seen lately? (Names changed to protect the innocent, of course)

#TechRisk #CybersecurityRisk #AIRisk #GRC #AIGovernance #ResponsibleAI
-
Linda Fry shared this

Most Tech Risk functions are built to be a bottleneck; I spent the last six months proving they should be an engine.

I built this home office to be a sanctuary for high-stakes problem-solving because I refuse to treat GRC as a back-office administrative task. While I’m saying goodbye to my role as Director of Security & Technology Risk Management at Coinbase due to a 14% RIF, I’m grateful to keep this view as I plot my next move. As Brian’s note makes clear (link in comments), the pivot is toward a flatter model to eliminate "coordination tax" by leaders assuming up to 15 direct reports.

During my six months of concentrated velocity, we didn't wait for a more efficient structure; we architected it. We restructured Tech Risk into an AI-native engine grounded in engineering principles to remain nimble and high-impact.

To my team: It was a privilege to lead you. You are the talent density the industry is looking for, and you deserve a leadership hand that supports the ambitious, data-driven foundations we’ve laid together.

We moved the needle through four key pillars:

• Reliable and Actionable Data: Led a focused risk-register data-quality uplift sprint, built a self-service dashboard ecosystem, and launched AI-driven narrative risk analysis. Aligned 1LOD executive leadership on incorporating Tech Risk insights and GRC calls to action into established quarterly planning processes.

• Quantitative Risk Analysis: Built foundations to measure and communicate risk in financial terms grounded in statistical concepts; achieved sophisticated analysis without expensive third-party software.

• Shifting Left to Risk-Informed Velocity: Modernized our "check and challenge" capabilities to identify irresponsible risk without becoming a bottleneck to innovation.

• Operational Excellence: Redesigned our org into a service model that anticipated the shift toward smaller, high-context teams managing fleets of agents.

During this "no meeting week" dedicated to deep work, I was building a GRC AI Agent that cross-references decision memos against our risk, findings, policy exception, and incident databases. The goal is to provide authors with an intellectually honest and balanced analysis of their proposals in real time.

I’m entering this summer with a clear head and a full heart. Between working on AI projects right here in this office, maintaining a neighborhood vegetable garden in my front yard, and running for local office to serve my community, I’m staying grounded in the work that matters.

If your organization is looking for an executive to bridge the gap between modern GRC architecture and AI-driven operational excellence, let’s connect. Onward.

#TechRisk #CyberRisk #GRC #RiskQuant #RiskManagement #Coinbase
-
Linda Fry shared this

Hot take: LinkedIn should officially recognize an organically earned Slack emoji as a legitimate career milestone. 🏆
-
Linda Fry posted this

I have a budding theory that vertical risk teams have to be more pessimistic than Enterprise Risk teams.

In my view, an ERM team exists to aggregate data, identify cross-cutting themes, and tie them to strategic objectives. At that level, the "Decision-Centric Risk Management" crowd is right: risk should be analyzed when it’s time to inform a choice. It makes sense for an ERM conversation to start with, “What are your strategic objectives, and what could get in the way of achieving them?”

However, vertical risk teams like Tech or Cyber Risk operate in a different reality. If we start with a goal-oriented statement like "What are your strategic objectives?" the answer from an engineering team is usually "to deliver [Product X] ASAP." This introduces an unnecessary layer of translation, forcing specialists to sell why technical reality matters to a business dream.

We can avoid this by going straight to the inherent fault lines. This approach serves as a critical diagnostic tool to prevent the common trap of conflating issues with risks. By identifying the fault lines first, we acknowledge what is already broken (the issue) before calculating how those cracks might actually jeopardize a future destination (the risk). We only tie these fault lines to strategic objectives after they've been identified. This allows us to filter out the noise: those existing issues that, while imperfect, don't actually threaten the current journey.

From there, the real work begins by layering in the conflicts in prioritization. This answers the ultimate question: “Are we focusing on the right problems, or just the loudest ones?”

While the "Decision-Centric" approach is vital for choosing a path, vertical risk teams must provide the "Status-Centric" reality check. If we only analyze risk when a decision is on the table, we ignore the slow erosion of the foundation that happens between those decisions.

Ultimately, the two approaches serve different masters: Enterprise Risk is optimistic; it looks at how to safely reach a goal, and it informs the decision of where we are going. Vertical Risk is foundational; it looks at the structural integrity of the ship, whether it’s sailing to a destination or sitting in the harbor. It informs the decision of whether we are even fit to make the trip.

Both risk teams drive data-informed decision-making, but they require fundamentally different lenses. One decides where to steer, and the other ensures the hull can take the pressure.
-
Linda Fry posted this

There is an obsession in our industry with perfect risk quantification. We’ve all seen the debates where purists argue over specific methodologies as if they’re defending a religious text.

Here is a truth that feels like a heresy to say out loud: if your math is so complex that your CISO, CRO, or Board needs a PhD to understand and defend the results, you haven't built a risk model, you’ve built a black box.

Dogmatic adherence to "pure" quantification is often just a distraction from a lack of underlying data quality. When we prioritize a specific tool over pragmatic decision-making, we stop being risk leaders and start being mathematicians in a vacuum. The reality is that most organizations simply aren’t ready for advanced, automated quantification on Day One, and that is perfectly okay. Forcing high-end models onto a low-maturity culture is the fastest way to ensure your reports become compliance noise.

To move from vibe-based risk to actual technical architecture, you need a roadmap that prioritizes judgment over syntax.

* Phase 1: Qualitative Risk: Start with clear, plain-English conversations. If you can't explain the risk without a spreadsheet, you don't understand the risk.

* Phase 2: Qualitative with Quantitative Bands: Stop using "High/Medium/Low" in isolation. Map those labels to broad financial ranges so the business has a consistent anchor for the "vibe."

* Phase 3: Calibrated Estimation & Data Maturation: Train your team to identify where their own biases are masking reality. Use calibrated estimates to fill data holes while you simultaneously identify and mature the specific telemetry you need for the future.

* Phase 4: Calibrated Estimation Quant with Deep-Dive "Slow" Quant: Save the heavy lifting for your strategic bets, where you can afford to take the time. For the daily grind, keep it lean with calibrated estimations, some light math, and maybe Monte Carlo simulations on your calibrated ranges.

* Phase 5: Automated Risk Engineering: This is a living telemetry pipeline where risk scores recalculate in real time based on actual technical gaps in your environment.

Many organizations will find their "sweet spot" at the penultimate phase, and there is no shame in that. You don't need to reinvent the physics of risk to manage the daily grind. As a leader, your job is to transform GRC from a reactive cost center into a strategic engine. Don't let the purists convince you that your program is failing because you aren't using their specific flavor of math. If the goal is sustainable innovation, the best model is the one that actually informs a decision today, not the one that remains "100% compliant" but 0% useful.

Don't get me wrong, high-fidelity risk quantification absolutely works, but if your organization isn't ready for that level of engineering, forcing it isn't best practice or the only way to add value.

Are you building a technical architecture that scales, or are you just defending a methodology?
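The Phase 4 idea of "light math and maybe Monte Carlo simulations on your calibrated ranges" can be sketched in a few lines of standard-library Python. This is a hypothetical illustration, not the author's actual model: the lognormal loss shape, the 90% confidence bounds, the Poisson event frequency, and all parameter values are assumptions chosen for the example.

```python
import math
import random
import statistics

def simulate_annual_loss(low, high, events_per_year, trials=100_000, seed=42):
    """Monte Carlo over a calibrated estimate (hypothetical sketch).

    low/high: calibrated 90% CI bounds for loss per event (5th/95th pct).
    events_per_year: estimated mean annual event frequency (Poisson).
    Returns a list of simulated annual losses.
    """
    rng = random.Random(seed)
    # Fit a lognormal so that [low, high] spans the central 90%:
    # 1.645 is the z-score for the 5th/95th percentiles.
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * 1.645)
    losses = []
    for _ in range(trials):
        n_events = _poisson(rng, events_per_year)
        losses.append(sum(rng.lognormvariate(mu, sigma) for _ in range(n_events)))
    return losses

def _poisson(rng, lam):
    # Knuth's algorithm; fine for the small frequencies typical here.
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

# Example: calibrated estimate of $50K-$2M per event, ~1 event every 2 years.
losses = simulate_annual_loss(low=50_000, high=2_000_000, events_per_year=0.5)
print(f"mean annual loss: ${statistics.mean(losses):,.0f}")
print(f"95th percentile:  ${sorted(losses)[int(0.95 * len(losses))]:,.0f}")
```

The output is the kind of financial range a board can act on (an expected annual loss plus a tail figure) without any third-party quant platform, which is the point of staying lean at this phase.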
-
Linda Fry reposted this

As much as I love AI, I miss reading YOUR voice. Your typos, long-running sentences. Your analogies, that only you can make, are sometimes imperfect, often quite funny. Your hidden context, the historical, cultural, and lateral dots you've connected in your head that the AI doesn't have in its window. Your years of experience, combined with all of the learnings you never wrote into a prompt. Your storytelling skills are unique to you and ONLY you. My day involves a lot of reading, and most of it reads as a singular voice, predictable, uniform, and the same across all subjects and topics. It's made reading for me less enjoyable and less informative. My ask for you: Still Write. Use AI to help if needed, but make it yours. The reader will appreciate the magic only you can bring to written words. -Scott
-
Linda Fry reposted this

Coinbase is hiring! Join Coinbase's Physical Security team as our Senior Strategic Operations Manager. This high-impact role ensures our team's strategy and risk posture are integrated across company-wide decision-making. Check out the role here: https://lnkd.in/gVKmnFBe
-
Linda Fry reposted this

GRC platforms are the most hated tools in security. I've never met anyone who disagrees. GRC vendors, obviously, but everyone else.

I've been guilty of it myself. I've had access to proper GRC platforms and opened a spreadsheet to do the actual risk assessment. Not because I'm cheap or lazy. Because the spreadsheet let me work through the risk without navigating six dropdown menus and a mandatory field for business justification first. The GRC tool wanted 47 fields populated before it'd show me anything useful. The user serves the tool. Somewhere along the way, that became normal.

And I'm not unusual. I've watched entire security teams do exactly the same thing. Six-figure platform sitting there while the real work happens in a tab called "risk_assessment_FINAL_v3_actual.xlsx." Nobody talks about it publicly. Everyone does it. There's a reason Excel is still the biggest competitor to every GRC platform on the market, and it isn't price.

The problem is what happens when you actually look at the security posture underneath. I've seen programmes where the GRC platform is green across the board and the organisation can't answer basic questions about what changed in production last week. The platform tracks whether policies exist, whether reviews were completed on time, whether evidence was uploaded. What it doesn't track is whether any of that made the organisation harder to attack. It's a project management tool for audits that somehow got marketed as a security product, and honestly, most project management tools would do a better job of it.

The whole category ended up here because it was built to serve a static process. Start from the framework, work backwards. Which clause applies. What evidence satisfies it. How quickly can we collect that evidence. Every dashboard, every workflow optimised for one question: can we prove we did this. The question that actually matters, whether doing it helped, nobody asked.
So what did the vendors do when AI turned up? Exactly what you'd expect. Automated the evidence collection. Faster screenshots. Automatic control mapping. AI-generated policy documents. Bolted a language model onto an architecture oriented around auditors, not threats, and called it innovation. The platforms got faster. They didn't get smarter. And when the whole architecture is optimised for producing evidence rather than producing security, it was probably inevitable that someone would work out you can just produce the evidence. Pre-filled report, customer name at the top, auditor's signature at the bottom. I'm sure that's never happened though. The question that keeps coming up isn't which GRC tool to buy. It's whether the whole approach is pointed at the wrong thing. Is our security actually improving, and can we prove it without a 200-field questionnaire? Compliance has always fallen out of good security naturally. The industry just built the tooling backwards. #GRC #CyberSecurity #CISO #AIGovernance #SOC2 #VendorRisk
-
Linda Fry posted this

I'm going to commit what feels like a cardinal sin and talk about the AI bubble.

We're currently in the beta-testing phase of enterprise AI, where many companies remain in the honeymoon period of their initial pilots. While the assets look healthy on paper because the underlying risk is masked by a rising market, I wonder if this is similar to the subprime mortgage crisis in the lead-up to '08.

I'm a firm believer that AI is the future. It's the ultimate force multiplier, and we should all be finding new ways to work alongside these models. BUT, history shows that market corrections follow periods where the excitement of an asset class outpaces the governance of the underlying process. In the 2000 dot-com boom, the market ignored the lack of real revenue. In the 2008 real estate bubble, the market ignored the lack of underlying asset quality. In 2026, the primary risk is execution debt.

We're starting to see the quiet buildup of a systemic liability. Most orgs haven't yet faced a trigger event that forces them to audit their AI decisions, and they're operating on good faith, which is an incredibly dangerous risk posture for any org. We've already seen examples where organizations spend a significant amount of their saved time peer-reviewing and fixing AI hallucinations or errors in automated reports, and where a logic drift leads to a cascade of errors that requires months of manual auditing to fix.

The impending correction is a failure of controls rather than a failure of tech. The difference between a crash and a soft landing is the GRC frameworks we build around these systems. We don't need to reinvent the wheel or create new physics. By the time the execution debt matures, when a major model shift happens or a significant compliance breach forces an audit, the companies that lacked GRC guardrails will be forced to undergo an expensive, painful, and likely public refactoring of their entire AI strategy.

This is the point where the distinction between AI-enabled and AI-governed companies will determine who survives the correction. We must apply foundational GRC principles to this volatile asset. This means prioritizing model integrity OVER hype by moving toward standardized model risk management. It means treating ethical guardrails as a business moat because robust oversight is a prerequisite for long-term market access. It also requires accountable autonomy, where humans govern while AI enables. GRC ensures that for every agentic decision, there is a clear trail back to a human owner.

The goal is to make innovation sustainable. If we treat AI as a plug-and-play miracle without managing the underlying technology risk, we're building on sand. By integrating risk management today, we ensure that when the speculative hype clears, the actual value remains. We can improve quality and scale efficiency without sacrificing our ethical standards. The bubble can exhale safely because GRC is the valve.

...and, yes, this was written with the help of AI :)
-
Linda Fry liked this

Congrats to Tim Breen, Sam Franklin and the entire GlobalFoundries team for an amazing investor day in NYC. The GF story is one to watch reflecting great growth drivers and execution! Check out the event @ https://lnkd.in/gVB4wGQf
-
Linda Fry liked this

Taylor Maze wrote a great article about the general trajectory of risk management models/philosophies, starting with checklists, then on to compliance, and now moving towards more complex quant risk approaches. Checklists still have their place, and compliance and maturity frameworks have their place, but the necessity of moving to a quant risk approach is evident in the fact that the frequency of breaches is still on the rise, and the advent of AI is upending the efficacy of traditional controls. We need to prioritize better, we need to identify strategic control weaknesses better, and quant risk is ideal for evaluating these areas.

Cutting the End of the Roast: What Generations of Risk Managers Inherited — Raven Punk
-
Linda Fry reacted to this

The hype around agentic AI keeps asking "is the AI agent smart?" That is the wrong question. The interesting question is "how many other agents is it agreeing with right this second?" If the answer is "all of them, on the same timestamp," you've been told everything you need to know. Maybe the agent is smart, but the system it is part of isn't. And dumb systems made of smart parts crash for reasons that look, from inside the system, like everyone simultaneously making the right decision. The fix is not making the agents smarter. The fix is paying somebody to keep them different. Right now, almost nobody is. When they all agree, brace for impact.
-
Linda Fry liked this

Everybody wants the success. Few, very few, are willing to do what it takes. 🏁
-
Linda Fry liked this

My Six Rs for Recovering After a Mass Layoff
Reach Out | Rest | Reset | Review | Refocus | Restart

If you were impacted by layoffs at Coinbase, the news lands in a wider pattern, not in a vacuum. Major tech has run repeated waves of large reductions, and Coinbase is only the latest to surface in that cycle. This stretch can still be disorienting. Severance gives different people different runway. This is the first time I have been laid off in nearly three decades. I still believe in frameworks in life as much as in software. Here is a compact one for thinking about what comes next.