Gela Fridman
New York, New York, United States
6K followers
500+ connections
About
I build platforms and teams that scale impact, especially where AI changes how people…
Activity
-
Gela Fridman posted this:
"You can outsource your thinking, but you cannot outsource your understanding." Andrej Karpathy quoted that line in his recent Sequoia conversation. It is the cleanest frame I have seen for what AI is exposing in engineering hiring.

For decades, interviews measured production. Could you write the code? Could you debug under pressure? Could you reason through the algorithm? Agents have compressed the production side. A candidate can now produce something that looks like a working demo quickly. That does not mean they understand the system.

Karpathy's recommendation is practical: stop over-indexing on isolated coding puzzles. Give candidates a time-boxed, realistic system exercise with agents. Have them reason through deployment, security, evaluation, and failure modes. Then pressure-test it. That is a very different hiring signal.

The new interview should test more of the engineering loop. Whether the candidate can frame the problem and decompose it across agents. Whether they know what context to give, what boundaries to set, and when to stop. Whether they can write useful specs and evaluate the output beyond "it runs." Whether they catch the tests that pass for the wrong reason, secure the system, and debug and operate it when production behaves differently from the demo.

The signals that matter are different now. Not how fast they produce. How clearly they frame. How rigorously they evaluate. How willing they are to reject output that works but is wrong.

Curious what others are seeing. How are you redesigning technical interviews for this shift?
-
Gela Fridman posted this:
Jason P. Yoong called 15 minutes with a physical book the new 10,000 steps. I like that frame. And once you get 15 minutes back, it is hard not to want more. Especially when books shape how you think, live, and see the world.

I love Lenny's Podcast book recommendation corner. You learn so much about a person by the books they read. I looked at them two ways:

1. Books guests come back to
   1. High Output Management. Recommended by Nick Turley (09-Aug-25), Sebastian Barrios (08-Jun-25), Nabeel S. Qureshi (11-May-25), Wes Kao (06-Apr-25), Tomer Cohen (08-Sep-24), Jeff Weinstein (11-Jul-24), David Singleton (04-May-23), Nikita Miller (06-Apr-23), Patrick Campbell (19-Feb-23)
   2. The Design of Everyday Things. Recommended by Robby Stein (10-Oct-25), Nick Turley (09-Aug-25), Peter Deng (22-Jun-25), Nan Yu (30-Jan-25), Brian Tolkin (04-Aug-24), Gina Gotthilf (19-Oct-23)
   3. Build. Recommended by Oji and Ezinne Udezue (07-Sep-25), Dmitry Zlokazov (15-May-25), Noam Lovinsky (17-Mar-24), Marty Cagan (10-Mar-24), Scott Belsky (18-May-23), David Singleton (04-May-23)
   4. Inspired. Recommended by Phyl Terry (12-Sep-24), Megan Cook (04-Feb-24), Christian Idiodi (21-Dec-23), Paige Costello (09-Jul-23), Christine Itwaru (16-Feb-23), Marily Nika (05-Feb-23)
   5. The Three-Body Problem. Recommended by Aishwarya Naresh Reganti and Kiriti Badam (11-Jan-26), Howie Liu (31-Aug-25), Camille Hearst (20-Aug-23), Ayo Omojola (14-May-23), Hila Qu (02-Apr-23)
   6. Thinking, Fast and Slow. Recommended by Jason Droege (09-Oct-25), Madhavan Ramanujam (27-Jul-25), Tomer Cohen (08-Sep-24), Vikrama Dhiman (12-May-24), Casey Winters (30-Mar-23)
   7. Shoe Dog. Recommended by Zevi Arnovitz (18-Jan-26), Grant Lee (13-Nov-25), Brian Tolkin (04-Aug-24), Jonathan Becker (07-May-23)
   8. Outlive. Recommended by Tomer Cohen (04-Dec-25), Nicole Forsgren (19-Oct-25), Luc Levesque (15-Jun-23), Laura Modi (13-Apr-23)
   9. The 15 Commitments of Conscious Leadership. Recommended by Rachel Lockett (23-Nov-25), John Mark Nickels (06-Oct-24), Kenneth Berger (19-May-24), Jonny Miller (28-Jan-24)
   10. Designing Your Life. Recommended by Rachel Lockett (23-Nov-25), Aman Khan (14-Nov-24), Nicole Forsgren (30-Jul-23), Ada Chen Rekhi (16-Apr-23)

2. One-offs that feel personal
   1. Insomniac City (Jenny Wen, 01-Mar-26)
   2. Accelerando (Boris Cherny, 19-Feb-26)
   3. There Is No Antimemetics Division (Sherwin Wu, 12-Feb-26)
   4. Aurora (Robby Stein, 10-Oct-25)
   5. Nuclear War: A Scenario (Eoghan McCabe, 21-Aug-25)
   6. The Wisdom of Insecurity (Matt LeMay, 14-Aug-25)
   7. Endurance (Bret Taylor, 31-Jul-25)
   8. War and Peace (Dan Shipper, 17-Jul-25)
   9. The River of Doubt (Sander Schulhoff, 19-Jun-25)
   10. The Odyssey (Sebastian Barrios, 08-Jun-25)

And if Lenny Rachitsky asked me? My three would be:
1. Originals by Adam Grant, about challenging defaults.
2. The Art of Impossible by Steven Kotler, about expanding what feels possible.
3. The Nightingale by Kristin Hannah, about remembering what courage looks like.
-
Gela Fridman posted this:
DORA helped engineering leaders stop measuring activity and start measuring delivery. AI is forcing the next step: measuring whether the delivered work was worth doing.

PR throughput. Prompt count. Token usage. Deploy count. All visible. All easy to instrument. All dangerous if treated as the scorecard.

The vendors have the same problem in reverse. Yesterday Amazon reported that Bedrock processed more tokens in Q1 than in all prior years combined. Bedrock customer spend grew 170% quarter over quarter. Amazon also said it has more than $225 billion in Trainium revenue commitments, as it uses custom silicon to push down the cost curve for AI workloads. That is the vendor version of PR throughput.

The same gap breaks AI productivity dashboards and AI pricing models. Both sides are measuring what is easiest to meter.

Salesforce ran into the same tension. Agentforce launched at $2 per conversation, then added Flex Credits and per-user licensing as enterprise buyers pushed for more predictable budgeting. Cloud went through the same cycle. AWS started with pay-as-you-go, added Reserved Instances, then launched Savings Plans in 2019 to turn usage into something finance teams could commit to and budget around. The market did not abandon usage. It wrapped it in budgetability.

The current AI meter is the token. The unit of cost is the token. The unit of value is the outcome. Those two are not aligned. Builders absorb the misalignment on the inside as productivity claims they cannot fully defend. Customers absorb it on the outside as bills they cannot tie back to results.

We are three years into AI, and every team building agents is hitting the same wall from opposite sides. Engineering leaders are trying to define outcome velocity. Vendors are trying to define outcome pricing.

The pricing model that survives will not be the one that meters the most activity. The productivity metric that survives will not be the one that counts the most output. Both will have to answer the same question: was the work worth doing?
-
Gela Fridman posted this:
"The idea of a PM makes no sense in the future. The skill is being a CEO. Knowing what to build and why." Keith Rabois said this on Lenny Rachitsky's Podcast this month. It lands because build speed has collapsed. When anything can be built in a week, deciding what to build becomes the only real work.

Every PM will need AI skills. That part is obvious. But AI fluency is necessary, not sufficient.

AI collapsed the distance from idea to shipping. In that collapse, we lost a filter no one had named: cost. The high cost of building used to kill weak ideas before they reached the light of day. Now weak ideas ship. Whether anything stops them is a decision someone has to make on purpose.

Once you find what works, the work becomes subtraction. OpenAI shut down Sora as a standalone last month. The model stays inside ChatGPT. Leadership was direct with staff: we cannot miss this moment because we are distracted by side quests.

Now anything can be built. The question is whether it should be.
-
Gela Fridman shared this:
Three hundred garments at ICA Miami, every one made by hand. Sold out in Milan, Paris, and Rome before it opened in Miami this month. "From the Heart to the Hands." That is the title of the exhibition. Thirty years of Alta Moda built on Fatto a Mano. The human hand as the entire point.

The more interesting move is curatorial. Dolce and Gabbana staged the show in dialogue with contemporary artists, including Quayola and Obvious. Obvious is the collective whose AI-generated portrait sold at Christie's in 2018. A fashion house whose entire premise is the hand chose to place algorithmic art inside the same rooms as its couture.

The show is not a defense of the hand against the machine. It is a curation of where each one belongs. That is a more honest argument than most AI debates produce. The question is not whether models replace people. It is which parts of the work you refuse to hand off, and which parts you let the system carry.

Every AI team is answering that question this year. Dolce and Gabbana answered it thirty years ago.
-
Gela Fridman posted this:
If every agent action needs a human approval, you don't have agents. You have a queue.

Anthropic dropped Opus 4.7 today. The internet will argue benchmarks. Finance will argue tokens. Engineering will argue regressions. All three are real. None of them is the story that survives contact with production.

Organizations do not fail at agents because the model is ten points short on a leaderboard. They fail because they scaled autonomy before they built the control plane.

At agent speed, a permission prompt is not a UX annoyance. It is a throttle on parallel work. If every bash line needs a human, you have a chatty intern with admin rights and a queue in front of your calendar.

Observability is the same problem. Long-running work creates long-running state. If you cannot re-enter a session and see what changed, what is still risky, and what is next, you pay the tax in senior attention. Teams run fewer agents because coordination cost ate the win.

Then there is verification. Without a harness, "done" defaults to plausible. Plausible ships. Plausible also compounds technical debt at machine cadence. The teams that pull away are the ones with a tight definition of proof. Server boots. Test passes. Browser path exercised. Make proof explicit enough that a busy human can trust it.

Token burn is not pricing. It is behavior. It is what happens when autonomy runs without stop rules, budgets, or a clear definition of done.

This is not a model problem. It is a control plane problem. Models will keep improving. Without a control plane, more capability just creates more failure.
-
Gela Fridman posted this:
Boris Cherny, head of Claude Code at Anthropic, ended OpenClaw's subscription access with one sentence: "Fundamentally engineering is about tradeoffs."

He is right. Cost is a tradeoff. Most engineering teams are not treating it like one.

135,000 OpenClaw instances were running on flat subscriptions. The compute was real. The cost was invisible. That is not an OpenClaw problem. That is an engineering discipline problem. Cost has to be a first-class engineering metric, alongside uptime and latency, with explicit ownership and observability so efficiency is measurable, not just aspirational.

When Anthropic made cost visible, every team running agents had to answer a question many had never formally asked: what does this workflow return per dollar of compute?

That question is not a threat to agents. It is the argument for them. Agents increase the amount of work completed per model call. That is what improves the unit economics of AI systems. The teams that will build successfully with agents are not the ones with the biggest models. They are the ones who learn how to get more useful work per dollar of compute. That is an engineering discipline.
-
Gela Fridman posted this:
Most people will listen to Lenny Rachitsky's conversation with Claire Vo about OpenClaw and nod. A few will do the only thing that matters: install it and let it break something small.

Claire's story is the point. Her first install deleted her family calendar. Then she pulled the thread. Now she runs specialized agents across multiple machines to manage real life and real work.

Lenny also published Claire's complete guide. It is not content. It is a field manual. First install to multi-agent setups, with the real costs and security gotchas most people skip.

If you are waiting for this to feel safe and polished, you are choosing the wrong learning loop. You do not get leverage from reading about agents. You get it from discovering where they fail, then designing the boundary.

A low-risk way to start this week:
1. Do not install it on your main machine.
2. Create one agent with one job.
3. Pick one repeating workflow you already do weekly. Scheduling, research, follow-ups, status updates.
4. Run it for 30 minutes a day for five days.
5. Write down what broke and what you changed.

That is the shift. Demo to practice. Curiosity to craft. If you try it, what is the first workflow you would hand off?
-
Gela Fridman reposted this:
NYC WOMEN 📣 On April 8th, Ruth AI Inc. and HearstLab are hosting the AI for Social Good Hackathon in NYC, supported by OpenAI for Startups. 25 women will spend the day building AI-powered tools with OpenAI Codex. In the evening, they'll present what they built to a curated room of 100 founders, investors, and creators.

If you build with AI and want to spend a day creating something that matters, apply here: https://lnkd.in/gJKwx5MD

The evening reception is invite-only, so if you want to join, message me. Tag a woman who should be building with us 👇

Supported by OpenAI. Presentation tools by Gamma. #AIforSocialGood #WomenInAI #WomenInTech #NYC #Hackathon
-
Gela Fridman reacted to this:
With all the layoffs happening, I figured I should take something I hacked together for myself a few months ago and put it out there. It started as a Claude Code skill for finding jobs and tailoring resumes. I polished it up, gave it a web UI, and called it Matchbox. It's free, open source, and maybe what we'd call "open slop'ish?" And it might be useful.

TL;DR: It searches for roles across multiple job boards based on your resume and presents a daily de-duped list sorted by relevancy. It gives you a quick sense of how you'd fit, where the gaps are, and lets you generate a custom resume and cover letter for each role.

There's an interesting Darién Gap between "I quickly prototyped this for myself" and "I designed scalable software," and LLM-enabled development is making that gap muddier by the day. I've started calling these liminal projects in my head. Matchbox lives in that gap. It doesn't reflect how I'd build this at scale or on a team. Hell, it's not even how I'd build it if I wrote it by hand. But it works, and it's useful, and maybe that's enough?

I think there's a growing category of software like this: "I put this together and orchestrated it reasonably well, feel free to use it and change it however you need, I don't want to make money off of it." I'm curious whether that's just where LLMs are right now, or if it's how a lot of software is going to get made going forward.

Repo 🚀: https://lnkd.in/g75MemdR
Quick demo 🎥: https://lnkd.in/g2zNGgeF

Thinking out loud: I wonder what the best way is to package these liminal projects.
-
Gela Fridman reacted to this:
Great conversation with Chris Marquez at The Wall Street Journal Future of Everything on human-powered repair in the age of AI: https://lnkd.in/et3aBpzF
-
Gela Fridman reacted to this:
Interested in a sneak peek at (and a chance to help improve) my new book, Product Design Engineering for Designers? It focuses on how designers can use AI tools to move beyond mockups and write production-quality code themselves.

The manuscript is complete, but before we go to print I am looking for a few people to do a technical review. That means reading part or all of the book to see if the ideas resonate, if the concepts land, and if anything important is missing. If you are an engineer, I am also looking for people to sanity-check the code examples. There is a separate copyediting process, so no proofreading needed.

In exchange for your time, you get early access to the unedited manuscript, a free copy of the book when it publishes, and a spot in the acknowledgments.

I want a range of perspectives: engineers (and design engineers), design leaders, and designers. I have a limited number of slots, and the final list requires publisher approval. If you are interested, send me a DM. If we run out of space for this round, there will definitely be other ways to get involved closer to launch.
-
Gela Fridman reacted to this:
How long do you think we have before Claude testifies against someone in court?
-
Gela Fridman reacted to this:
In 2016 at Uber, I had to learn Go on the job. Even while I went to the bathroom. I took this pic above the urinal 😅. Let me tell you why.

TL;DR: stateful services. Most microservices are "stateless", but Uber has many "stateful" services. They operated more like specialized distributed databases than API servers.

In a stateless service, a request fans out to caches and databases, deserializes data into an in-memory model, computes on that model, persists updates, and returns a response. At hundreds of thousands of QPS, that is too slow. So data was kept in application memory, and an in-house library called Ringpop sharded state across service instances, just like a database.

You might be thinking:
1. Why not just cache the model pre-built from different stores?
2. If requests still need network hops to reach the right shard, what's the difference?

The difference is that Uber's data models are highly mutable. Driver positions are updated every few seconds. New trip requests and orders are constantly popping up in random locations, instantly changing the calculation for which driver or courier is best suited for dispatch.

Databases force you into a data model optimized for retrieval, with generic locking and coordination around reads and writes. That trades away speed for generality: modeling almost any type of data through generic primitives (rows, columns, documents, collections, etc.). A custom in-memory data model lets you optimize access patterns and concurrency control for your exact workload.

For example, when a driver location update comes in, you update a live spatial index. You don't need to fan out to multiple stores, rebuild an object graph, deserialize it, recompute the match rankings, and write everything back. Instead, you incrementally update the few in-memory structures affected by that movement: the driver's position in a spatial index, their availability state, ETA-related metadata, and any nearby candidate sets that depend on them. And you have fine-grained control over which entities need to be locked, to avoid unnecessarily blocking other in-flight operations on the same data model.

Dispatch, marketplace, and geospatial systems needed highly concurrent in-memory computation that could handle constant mutations and re-ranking. Many of these systems started in Node.js, which is great for I/O concurrency but not CPU-bound computation. Its single-threaded event loop meant expensive work blocked everything else. Node can scale CPU-bound work across cores by running multiple processes, but each process has isolated memory. For highly mutable in-memory state, that makes coordination harder. Go provides lightweight concurrency inside a shared-memory process, with goroutines scheduled across OS threads, making it a better fit for highly concurrent stateful services.
-
Gela Fridman reacted to this:
This week we celebrate the people who show up every single day when it matters most. Nurses.

My twin sister Dr. Lori Armstrong has spent her career doing exactly that. Showing up. Caring deeply. Never looking for recognition. And my daughter Christian is following in her footsteps. Two generations of nurses in one family!

This Nurses Week, I'm thinking about every nurse who chose this profession not for the glory, but because they genuinely believe in healing people. Thank you to every nurse who shows up. Inspire Nurse Leaders #NursesWeek #Healthcare
Experience
Education
-
New York University
-
-
Activities and Societies: • GPA: 3.9 (Major) / 3.7 (Overall) • Honors: Dean's List, Honors Program • Coursework emphasized Java, data management, operating systems, and data structures and algorithms
Honors & Awards
-
Ad Age's 40 under 40 list (2017)
-
-
Featured Speaker, Createtech 2013
-
Languages
-
Russian
-
-
English
-
Organizations
-
Acquia | Partner Advisory Board Member
-
- Present. Acquia is an open-source digital experience company that empowers the world's most ambitious brands to embrace innovation and create customer moments.
-
Sitecore | Partner Advisory Board Member
-
- Present. Sitecore is a robust digital marketing system that combines a content management system with contextual intelligence and omnichannel automation technologies.
Recommendations received
15 people have recommended Gela