Hunter Leath
San Francisco, California, United States
4K followers
500+ connections
View mutual connections with Hunter
Hunter can introduce you to 8 people at Archil
Activity
-
Hunter Leath shared this:
We benchmarked cloning a monorepo on Archil vs. EFS. Archil finished in 19 seconds. EFS took 2 minutes 14 seconds. Archil came in 7x faster.

If you've ever migrated an app from EBS to EFS and watched the performance fall over, the reason is probably familiar. EFS, AWS's managed file storage, never reached its full potential, and I think there are three reasons: performance, price, and a weird competition with S3. AWS has taken the S3 competition off the table.

EBS, the block storage attached to EC2 instances, is AWS's biggest SSD business. Most database products are *just* wrappers around EBS. But EBS drives have no logic to let multiple machines coordinate on the same data. If you want zero-downtime deploys, you can't use them. That's usually where I'd see people migrate to EFS. And their application would explode. Not literally, but performance would get *really bad*.

On EBS, creating a file is basically free until the app actually needs to save it. On EFS, every metadata operation (creating, opening, renaming files) has to check in with a central server so that two clients don't step on each other. The EFS team has great engineers working diligently on per-operation latencies, but for interactive workloads it'll always be an order of magnitude slower than EBS.

At Archil we replaced that coordination model with one borrowed from the Andrew File System (AFS). Clients check out the parts of the filesystem they want exclusive write access to, and we hit the same speed as local block storage.

Beating EFS was never the goal. Becoming the default way people store data in the cloud was. To do that, we have to unseat EBS.
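The post's benchmark harness isn't shown, but the metadata-latency argument is easy to observe yourself. Below is a minimal sketch (not Archil's actual benchmark) that times create/rename/unlink cycles in a directory; pointing `root` at an EBS-backed path vs. an EFS mount makes the per-operation gap the post describes visible:

```python
import os
import tempfile
import time

def bench_metadata_ops(root: str, n: int = 500) -> float:
    """Average seconds per metadata operation (create, rename, unlink) under root."""
    start = time.perf_counter()
    for i in range(n):
        path = os.path.join(root, f"f{i}")
        with open(path, "w") as f:      # create + open: a server round-trip on a networked FS
            f.write("x")
        os.rename(path, path + ".r")    # rename: another round-trip
        os.unlink(path + ".r")          # unlink: and another
    return (time.perf_counter() - start) / (n * 3)

# Run this once per mount you want to compare; a temp dir stands in here.
with tempfile.TemporaryDirectory() as d:
    per_op = bench_metadata_ops(d)
    print(f"{per_op * 1e6:.1f} us per metadata op")
```

On local block storage each cycle is a handful of in-kernel operations; on a coordinated network filesystem each one is at least a round-trip, which is exactly the order-of-magnitude gap interactive workloads feel.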
-
Hunter Leath reposted this:
As part of the Series A we led for Archil, I sat down with the founder/CEO Hunter Leath. We discuss:
- What exactly Archil is, and why anyone building agents should evaluate it
- Why LLMs understand filesystems better than pretty much any other technology
- Hunter's 10-year career at AWS working on filesystems
- A case study of how Clay is using Archil
- What it was like raising a Series A from Standard Capital
and more! https://lnkd.in/gVgxkkjc
-
Hunter Leath shared this:
Today, I'm excited to share that Archil has raised $11M in new Series A capital led by Standard Capital, with participation from Y Combinator, Felicis, Peak XV Partners, and Wayfinder Ventures. We've also had the opportunity to add wonderful angel investors such as Anurag Goel (Render), Richard Artoul (WarpStream), and several individuals at our partners Antithesis and Clay. This round, less than a year after our seed funding, brings Archil's total capital raised to $18M.

This announcement reflects the rapidly growing need for new infrastructure in the industry. Applications built for the AI era (inference, model training, and agents) are inherently stateful in a way that existing primitives do not solve for. At Archil, we believe that the file system, as the universal interface to data for over 50 years, represents the best path forward for making infrastructure simple for the next decade. In fact, it's because of this deep history that file systems have become the best-performing way for agents to interact with data: their representation in the training data means that most models inherently know how to work with files and folders.

Until now, file systems have never been the "cool" way to build services, and we're excited to have the opportunity to bring their developer experience into the modern era. We started this past week with the launch of Serverless Execution, a way for users to treat their file systems as a service that accepts bash commands (like SQL) and responds with just the results. As a result, Archil isn't just a file system that's infinite, performant, and synchronizes to S3. It's also the file system that your agents run on. We know that there is much more that developers need, and we're building as fast as we can.
If you have experience with low-level systems and you're interested in working at the intersection of AI and data, we're hiring to speed up how quickly we can get all of this into our customers' hands. If you're building in AI, you can try Archil today to make it simple to connect AI to huge data sets. Go to https://console.archil.com or run "npx disk create" anywhere you have node.
-
Hunter Leath shared this:
The past few months have been a whirlwind of activity for Archil. We originally started the company to help users who were locked into file-based applications but wanted to connect those applications to data based in S3. Think genomics, financial analysis, etc. Recently, though, it's become clear that file systems are the native way that AI models want to interact with data. The agents built with these models need something that developers have never had access to: file storage that's really easy to use and very performant. This was incredibly validating for our vision, since it's exactly what we've been building toward since day 1.

However, there were some things about the product that we clearly had to change. Suddenly, all of our users started to ask us how they could attach Archil disks to a new generation of "sandbox providers" in order to give their agents the ability to safely run code and more. We have always supported using the file system from a variety of environments, but we increasingly found that:
- It was complex for users to manage both a compute provider and a storage provider for simple tasks
- Not all sandbox providers expose the full set of Linux APIs and permissions that we need as a file system
- The location of the sandboxes could materially alter the performance that users experienced with our product

This meant that we needed to rethink how we exposed our product to AI users. We came up with "Serverless Execution". Like a database executing SQL statements against the data and returning only the requested rows, Serverless Execution lets users send their Archil disk bash commands (in a full Linux environment) to execute remotely, and returns only the result. This capability unlocks a whole new set of AI workloads which no longer need to integrate with a sandbox provider. Today, we got some exciting news.
Serverless Execution (released yesterday) is the single fastest sandbox provider that delivers full VMs, beating incumbents like Vercel, Cloudflare, and Modal not by a little bit but by 50-90%. This means that workloads not only get to be radically simpler by moving to Archil; they also get to be faster. This is exactly the kind of unlock for customers working at the forefront of AI that we try to deliver every day. Serverless Execution is available today on all Archil file systems in AWS regions. Read more about it here: https://lnkd.in/eVeXvTHJ
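The SQL analogy above implies a simple request/response shape: ship the command to where the data lives, get back only the result. The sketch below is purely illustrative; the field names, the `disk-123` identifier, and the response shape are invented for this example and are not Archil's actual API:

```python
import json

def build_exec_request(disk_id: str, command: str, timeout_s: int = 60) -> dict:
    """Hypothetical shape of a serverless-execution call: a bash command runs
    remotely against the disk, and only the result travels back (like SQL rows)."""
    return {
        "disk": disk_id,
        "command": ["bash", "-lc", command],   # a full Linux environment, remotely
        "timeout_seconds": timeout_s,
    }

# Instead of mounting the disk in a separate sandbox, the client sends the command...
req = build_exec_request("disk-123", "grep -c ERROR logs/app.log")
print(json.dumps(req, indent=2))

# ...and a response would carry only the output, e.g.:
resp = {"exit_code": 0, "stdout": "17\n", "stderr": ""}
```

The point of the shape is that no compute provider sits between the agent and its data: the filesystem itself accepts work and returns results.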
-
Hunter Leath shared this:
When you build infrastructure, comparisons to hyperscalers like AWS are inevitable. We don't see AWS as a competitor; we see them as a partner. Archil runs on top of AWS, GCP, and emerging GPU clouds, acting as a layer that helps customers use those platforms more effectively. By pushing data into systems like S3 and simplifying access across environments, we unlock new use cases and customer segments that hyperscalers may not reach on their own. As the cloud ecosystem evolves, with new GPU-focused providers and shifts in how storage and compute are used, we position ourselves as an enabling layer that grows the overall market rather than competing with it. #Archil #CloudInfrastructure #AWS #GCP #GPUCloud #DataInfrastructure #Cloud
-
Hunter Leath shared this:
If you're building agents, you need to put your data in a file system to get the best performance. We built Archil to make it simple for teams to spin up thousands of scale-to-zero file systems for agents, and now we're making it more accessible to startups. We just listed a deal on StandardDB for startups to use Archil with up to 1 TB of free data. Check it out! https://lnkd.in/eB_KHp_P
-
Hunter Leath shared this:
My co-founders from Standard Capital and I just built a resource for AI builders called StandardDB. If you are building in AI, please check it out; I would really appreciate your feedback! https://lnkd.in/dfF_Bgcu
-
Hunter Leath posted this:
AWS quietly added natural language querying to S3 Storage Lens back in December. You can now ask your S3 observability data questions directly via the S3 Tables MCP Server. "Which buckets grew the most last month?" "Show me storage costs by storage class." No pipeline required, just questions against your metrics. Infra teams have never lacked data. The gap has always been the friction between having it and acting on it. This update removes a layer of that. The rest of the update is great too: prefix analytics scaling to billions of objects, eight new performance metric categories. I dropped the link to the full post in the comments.
-
Hunter Leath shared this:
Modern teams don't run in just one environment anymore. They train models in GPU clouds, deploy applications on CPU clusters, and ship workloads in containers, but storage often fails to keep up. We built Archil to be available wherever our customers are. Traditional storage systems work well for large training datasets, but they break down when developers and researchers try to do everyday tasks like installing packages or running Docker-based workflows. Archil bridges that gap by supporting both sides of the workflow, from large-scale pre-training to day-to-day development and containerized production environments. The result is a storage layer that actually matches how modern teams build and ship software. #Archil #CloudInfrastructure #AIInfrastructure #GPUs #CPUs #DataInfrastructure #DeveloperTools #AWS
-
Hunter Leath posted this:
Storage is killing your AI performance. And if it isn't, it's killing your budget. We keep seeing the same three mistakes with teams running ML workloads:
→ Everything on hot storage when most of the pipeline doesn't need it.
→ Tiering that saves on storage but gets eaten by egress fees.
→ Custom tiering logic that costs more in engineering time than it saves.
The difference between a petabyte on EFS vs. S3 is roughly $277k/month. That's not a rounding error. If your team is hitting this, I'm happy to talk through it.
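The "$277k/month" figure is consistent with back-of-envelope list-price math. The rates below are assumptions (roughly EFS Standard and S3 Standard in a US region; actual prices vary by region, tier, and discounts), so treat this as a sketch of the arithmetic, not a quote:

```python
# Assumed list prices, $/GB-month -- check current AWS pricing before relying on these.
EFS_PER_GB_MONTH = 0.30
S3_PER_GB_MONTH = 0.023
PETABYTE_IN_GB = 1_000_000  # decimal petabyte

efs_monthly = EFS_PER_GB_MONTH * PETABYTE_IN_GB   # ~ $300,000/month
s3_monthly = S3_PER_GB_MONTH * PETABYTE_IN_GB     # ~ $23,000/month
gap = efs_monthly - s3_monthly
print(f"monthly gap per PB: ${gap:,.0f}")         # roughly $277,000
```

Note the post's other two points cut against naive fixes to this gap: tiering everything down to S3 can trade the storage bill for egress and request charges, and hand-rolled tiering logic has its own engineering cost.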
-
Hunter Leath liked this:
just-bash (from Vercel) gives AI agents a clean, minimal Unix execution environment. But it's ephemeral. With agent-shell, we add persistent storage by mounting Tigris storage as a filesystem:
- Keep using standard shell tools (cat, grep, jq)
- Treat buckets like local files
- Persist state across executions
- Writes are staged in-memory and flushed atomically, so agents get durability without partial or inconsistent state.
This keeps the model simple: don't invent new abstractions; extend the ones developers already understand.
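The "staged then flushed atomically" pattern in the last bullet is a standard filesystem technique. Below is a generic local-filesystem sketch of the idea (not agent-shell's implementation): stage the bytes elsewhere, then publish them with a single rename so readers see either the old file or the new one, never a partial write:

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    """Stage the write in a temp file on the same filesystem, then publish it
    with one rename; readers never observe a partially written file."""
    dirname = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname)   # same FS, so replace() stays atomic
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # push bytes to stable storage before publishing
        os.replace(tmp, path)      # atomic on POSIX: all-or-nothing swap
    except BaseException:
        os.unlink(tmp)             # clean up the staged copy on any failure
        raise

with tempfile.TemporaryDirectory() as d:
    state = os.path.join(d, "state.json")
    atomic_write(state, b'{"step": 1}')
    atomic_write(state, b'{"step": 2}')   # overwrites are also all-or-nothing
    with open(state, "rb") as f:
        print(f.read())
```

For an object-store-backed mount the same effect falls out naturally: a PUT only becomes visible once the whole object is uploaded, which is why staging writes in memory and flushing them as one unit gives agents durability without inconsistent intermediate states.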
-
Hunter Leath liked this:
Proactive agent on the cloud that thinks and acts like you. Multiplayer AI Brain for teams. Proper GUI for commanding 50 agents. Sauna.ai is all three. Sauna goes live today. First 2,000 people: use access code LAUNCH for $80 of weekly(!) credits. Let's explain. Multiplayer only works once the personal brain is powerful. So let's start here.

Personal AI Brain: 3,800+ tools connected. State-of-the-art memory. Skills and schedules you teach once that get repeated forever on cheaper models. An AI-first CRM. Lives on the cloud so you can initiate tasks from anywhere: iMessage, Slack, Email. Also no need for a Mac mini 😉

GUI: AI agents have been stuck in their MS-DOS era. A chat box, a scroll buffer, no way to command 50 of them. We built the first GUI: live sessions on one side, work waiting for your sign-off on the other, plus the things Sauna kicked off while you were asleep, waiting for review. Game mode helps clear the queue with actual joy. Okay, so far so good, but how do you give the benefit of what you built to more people or a whole team?

Multiplayer: Once your Sauna actually knows you and you've given her access to your tools, you can use multiplayer. Two modes:
- Brain access. My co-founder Robert plugged my brain as a tool into his Sauna last month. He can ask it about pricing while I'm in other meetings, gets a sourced answer back, and never has to interrupt me. That's read only. Yolo mode gives him access to all my tools too :O
- Communal Saunas extend that to whole companies with proper permissioning. Folder owners decide what's true for the whole company and build skills. Most get their personal brain plus read access to communal files and memories. Works also for group-planning my best friend's bachelor party.

The Why: I've been obsessed with the AI Brain since 2016. Our human brains suck at some things, like memory, and are brilliant at others, like creativity. We are also particularly bad at thinking we are all on the same page and then realising weeks later that we weren't. Most leaders spend their days being the human diff tool, catching contradictions in hallway conversations and Slack threads. Repeating themselves 50 times. Now every company is spinning up hundreds or thousands of agents that drift faster than humans do, feeding each other their drift as context. Compounding rot. After 10 years and one failed company in this space, we are launching the solution.

The Launch: We thought about a celebrity launch. Margot Robbie in a bathtub explaining agents. But then we realised the same money gives the first 2,000 people free daily credits, every day, until we burn through that $1,000,000. Sauna runs Claude, GPT-5, Gemini, GLM-5.1, DeepSeek, and Kimi so you can budget yourself. The labs are racing to lock you in so they can milk you in a year. We picked your side.

Onboarding: It's live at app.sauna.ai. We've preheated saunas based on a niche. Use the access codes in the comments to get a better experience.
-
Hunter Leath liked this ("This is a pretty fun space to work in."):
Hey everyone, I'm hiring a Software Development Manager for the S3 Express storage nodes team. S3 Express One Zone is S3's high-performance storage offering, delivering consistent single-digit millisecond latency, up to 10x faster than S3 Standard. It uses directory buckets to store data within a specific Availability Zone for co-location with compute resources, supporting hundreds of thousands of requests per second. This team owns the storage platform software and the fleet that powers the service. The product is growing fast, and as the leader of this team you would work alongside a strong group of SDEs and Principals on low-level systems software, helping shape the future of the product. If you're interested, apply through the link below or DM me directly!
Software Development Manager, Veyron Storage Service
-
Hunter Leath liked this:
Vercel got hacked this week via a... Roblox cheat and AI? Quick recap. Vercel uses a 3rd-party tool called "Context AI". An employee at Context AI searched for a Roblox cheat. That cheat included hidden malware. This malware ultimately affected Context AI and then accessed a Vercel employee's Google Workspace account. "From there, they were able to pivot into a Vercel environment, and subsequently maneuvered through systems to enumerate and decrypt non-sensitive environment variables." As a precaution, rotate all your environment variables. Check your activity logs.
-
Hunter Leath reacted to this:
Today, I'm thrilled to share that Bolto has raised a $12M Series A led by Standard Capital, with participation from Y Combinator, General Catalyst, Amino Capital, and others. Work is changing. HR leaders today want proactive insights, deeper analytics and reporting, and a platform that helps them make better human capital decisions. Today's HR systems haven't kept up with these needs. They're clunky, don't surface the right data, and cost countless hours (and dollars) to manage. Bolto is changing that. We're building the first AI-native, proactive HR platform that lets teams find talent, run payroll, and manage compliance globally, all in one place. We grew 20x from our seed round just one year ago, serve some of the fastest-growing companies in the world, and are on track for our best year yet. A special thanks to our investors: Dalton Caldwell @ Standard Capital, Pete Koomen @ Y Combinator, the full team @ General Catalyst, Sue Xu & Larry Li @ Amino Capital, Ash Patel @ Morado Ventures, and all angels; to our incredible founding team that got us here: Owen Oktay Mary Landis McKenna R. Lia Mason Sahil Yadav Pratyush Agarwal; and to the best co-founder out there, Jake Johnson. Back to building... the best is yet to come.
Experience
Education
Honors & Awards
-
Rodman Scholar
Explore more posts
-
Mutembei Kariuki
Fastagger • 15K followers
AGI lab bootstrapped to $1,000,000,000 revenue and a $30 billion valuation in 5 years: Surge AI by Edwin Chen. Great lesson at South Park Commons SF yesterday on focusing on building, the problem, and the customer. Only now are they raising their first VC round of $1 billion. #AGI #SF
22
-
James Green
CRV • 10K followers
CRV Security: Request for Startups. I never know if this actually works for our friends over at YC, but figured we'd try. Here's what we want to fund in 2026!
1. Golden Artifacts: Think Chainguard but broader. Artifact attestation exists for open source. Almost nothing exists for internal software, especially the vibe-coded tooling now running in production. We want the company building cryptographic proof of secure software delivered from secure artifacts: who built it, how, and whether it was reviewed. If more things are being yeeted into the world via Claude Code (myself included), this feels like an issue.
2. MCP & Agentic Security: Agents are getting real credentials and taking real actions. The security posture of most orgs around this is basically zero. That changes fast. You'd never give an employee hardcoded API keys or write access to your email without supervision/trust. Why give it to agents?
3. AI Governance: Boards are asking CISOs to account for AI risk. CISOs have no good answer other than "Palo has a module".
4. Next-Gen Endpoint: CrowdStrike was built for a world of static binaries and human operators. AI workloads, cloud-native infra, and AI-assisted attackers need a new architecture. The category is ready to be reinvented.
5. Networking in the AI Era: Zero trust was designed for humans. What does network security look like when the entity requesting access is an agent? Nobody's really solved this.
6. Email Security + Next-Gen Phishing: LLMs have made spear phishing infinitely scalable. I've never truly understood why Abnormal and KnowBe4 aren't one company. Maybe this time it's different.
7. Frontier Security Lab: We'd back a credible, well-staffed lab focused entirely on red-teaming models and setting the evidentiary standard the industry needs as LLM-built apps become the norm.
8. Dependency Security That Actually Remediates: Malicious and vulnerable dependencies are a top attack vector. The tooling is mostly noise, scanners that don't close the loop. The winner here ships fixes, not just alerts.
9. Critical Infrastructure Cyber: Data centers, satellites, power grids, undersea cables. The physical backbone of the internet is increasingly exposed and wildly under-defended. We have data centers in space, for God's sake. Surely we need better cyber for critical infrastructure?
10. PAM for the Modern Era: Legacy PAM was built for static roles, human users, and on-prem directories. CyberArk was founded in 1999. Agents, ephemeral workloads, and cloud-native infra have broken all of those assumptions. Is anyone rebuilding this from scratch?
If you're building in any of these areas, or something we haven't thought of, reach out. james@crv.com
450
49 Comments -
Lakshmi Shankar
Together • 3K followers
Thrilled to announce that Together Fund is investing in Sentra, alongside a16z speedrun! You track results in Jira. Decisions in Notion. Conversations in Slack. But the reasoning (the debates, trade-offs, and context behind why you chose A over B) disappears into what we call "Dark Matter." A decision made in March looks insane by July because no one remembers the constraints that made it smart. I lived this firsthand at Twitter, scaling from 800 to 8,000 employees, and at Google while launching AI Overviews to billions at planet scale. The problem isn't process. Process is compensation for something deeper: organizational amnesia. An organization's "Systems of Record" don't solve this; they encode it. They store what happened, never why. That's why we are investing in Sentra. Sentra is the always-on collective memory that eliminates organizational amnesia by maintaining accurate context for all members and agents, functioning as an operational nervous system. It connects to every channel where work happens (meetings, Slack, email, code commits, docs, calendars) and treats them not as artifacts to search, but as living signals to synthesize. The fleeting and the permanent, unified into a memory that understands. The founding team is built for this:
- Jae Gwan Park (CEO): Product-first founder, memory systems research at UofT and MIT
- Ashwin Gopinath (CSO): Former MIT professor, created "Reflexion" (NeurIPS 2023), agents that learn from mistakes, 2x founder
- Andrey Starenky (CTO): Early Vapi engineer, ex-IBM, built to process an enterprise-scale data firehose
Together is an operator-led fund. We invest in problems we've lived. This is one of them. Many congrats Jae, Ashwin, and Andrey; we are so excited to partner with you! Read the full thesis: https://lnkd.in/gixj9cE4 Book a demo: https://www.sentra.app/ #OrganizationalMemory #AI #Sentra #TogetherFund #a16z #ContextGraphs
71
3 Comments -
Aviel Ginzburg
Founders Co-op • 4K followers
While there has never been a more exciting time to be a founder building dev tooling or next-gen infra, it has also never been less investable at seed/pre-seed. I'm either really missing something or a lot of my peers are lost. As someone who has not just written, but also SHIPPED, about 75k lines of code in the past 6 months, I can tell you that the evolution of how to build products has changed as much in the past year as it did in the entirety of 2007-2017. The complete rise and fall of frameworks, platforms, methodologies, etc., paved over and forgotten... that is, of course, except for the 1 company that gets a 1000x return from a wildly overvalued hyper-scaler or drunken growth-stage investor obsessed with compounding at scale. Imagine a world where every seed investor in trends like OpenStack, Hadoop, PaaS, etc. took a full loss on their investment. That's what we're looking at right now. I personally know of over a dozen well-funded seed-stage companies building in these spaces, with years of runway, scrambling to get acquired for a return of capital plus several million personally while they're still relevant. If you're not seeing this unfold in front of you, you either aren't paying attention or you're satisfied playing the lottery instead of investing.
75
11 Comments -
Anish Acharya
Andreessen Horowitz • 14K followers
The big labs are expansive in their product ambition, especially since foundation models have largely improved in lockstep. To compete with them, you have to do things they won't, which are:
- building a very rich software ecosystem around a primitive
- orchestration across multiple models
- going insanely deep on product and growth for a narrow vertical domain
58
3 Comments -
Yaniv Golan
lool ventures • 5K followers
AI is changing what “scale” actually means. In CTech by Calcalist’s 2026 VC Survey, Haim Bachar and Lee Ben-Gal share how we’re seeing the scaling metric shift - from headcount growth to output per person. This isn’t about doing more with less. It’s about building companies differently from day one - and rethinking how success is measured. See the full article below 👇
13
-
Jianchang (JC) Mao
Fellows Fund • 3K followers
I was honored to host a fireside chat with Sherwin Wu, head of engineering for the OpenAI API, at last week's Fellows Forum 2025, organized by Fellows Fund. Sherwin shared his incredible journey at OpenAI scaling OpenAI's developer platform and building its vibrant ecosystem, and his deep insights into the topics we covered. A few quick notes:
Model Evolution & Platform Development: Outlined 3 major model capability inflection points
📍GPT-3: Limited to copywriting applications
📍GPT-3.5/4: Enabled the post-ChatGPT explosion (Perplexity, Snapchat integrations)
📍Reasoning paradigm (o1): Unlocked long-term planning for coding and autonomous agents
Lessons from the GPT-5 launch
📍Latency concerns: The model's extensive thinking time frustrated users
📍Product showcase mismatch: The ChatGPT interface didn't effectively demonstrate the model's capabilities
📍Learning: Future major launches will pair API releases with first-party products that better showcase model strengths
Unexpected use cases
📍Consumer apps, like enabling toys for kids to chat with
Common patterns in startups that successfully leverage OpenAI models and the API
📍The most successful startups "swim with the current", building products that push models to their limits and position for future capabilities rather than current limitations
Common patterns in enterprises that successfully leverage OpenAI models and the API
📍Common use cases include internal productivity enhancement (e.g., using Codex) and customer interaction transformation (support, sales automation)
Barriers in enterprise adoption
📍Lack of both top-down and bottom-up organizational buy-in (top-down alone is not sufficient)
📍Insufficient AI-ready infrastructure and data platform investment
Responsible use: OpenAI is investing in
📍Pre-launch red-teaming in model development
📍Post-launch monitoring
📍Tools to help developers manage safety risks, build guardrails, and set up evals
Exciting future breakthroughs
📍Excitement centers on advancing the reasoning paradigm (longer thinking periods) and multimodal breakthroughs, particularly in voice interactions that feel more naturally conversational
Full fireside chat video: https://lnkd.in/gJzjzMBm
122
2 Comments -
Ali Rohde
Outset Capital • 20K followers
Super exciting to see reports this morning that Waymo is in advanced discussions to raise $10–15B at a valuation near or above $100B, with Alphabet expected to anchor the round. That would be a major step up from Waymo’s prior ~$45B valuation in 2024 and a clear step change in investor confidence. Notably, Waymo has not raised this year, despite the broader AI funding frenzy. The size and structure of the round also make this feel like a step toward an eventual spinout or IPO. This matters because capital supercharges growth. Waymo is past the “does this work?” phase and firmly in “how fast can we scale?” mode. The company has already logged ~14M rides this year and expects to reach ~1M rides per week by late 2026 as it expands into markets like Miami, Dallas, and Philadelphia, including expanding onto freeways. At the same time, competition and consumer expectations are accelerating. In markets where Waymo operates, fully driverless rides are becoming normal rather than novel. Once people can reliably hail a car with no one in the driver’s seat, tolerance for “pilot programs” and distant timelines drops quickly. The race is no longer about whether robotaxis work. It is about who can scale them first, safely, and everywhere — a win for consumers who get cheaper, safer, and more reliable rides.
25
-
John F. Heerdink, Jr.
8K followers
JFrog’s Earnings Leap: Hopping Past Wall Street’s Cloud Rev Forecasts (and Competitors) – ( $FROG $SPY ) https://lnkd.in/gE_hMhKQ JFrog showcased impressive growth in Q3 2025, combining software innovation, cloud momentum, and AI advancements #JFrog #EarningsLeap #FROG #CloudGrowth #DevOps #DevSecOps #AI #SoftwareSecurity #RevenueSurge #MarketLeaders #TechnologyStocks #InvestorConfidence #FinancialResults #QuarterlyEarnings #GrowthStock #SoftwareSupplyChain #Innovation #WallStreet
1
-
Artem Bredikhin
University of London • 854 followers
Zed just raised $32M Series B from Sequoia to revolutionize how developers collaborate on code. Their bold vision: eliminate the outdated snapshot-based approach that fragments conversations about code across different tools and timeframes. The problem is real - every developer knows the pain of discussing code through stale commits, broken permalinks, and scattered chat messages. When that crucial context disappears, so does the accumulated wisdom of your team's decisions. Zed's solution is DeltaDB, an operation-based version control system that tracks every edit in real-time using CRDTs. Instead of working in isolation between commits, imagine continuous collaboration where every discussion remains permanently linked to the evolving codebase. Picture debugging a production issue where you can instantly see not just what changed, but every conversation, assumption, and decision that led to each line of code. AI agents would have full context of your team's reasoning, making their suggestions exponentially more valuable. This isn't just another IDE feature - it's reimagining software development as a continuous conversation between humans and AI, where knowledge never gets lost in the gaps between commits. The implications are massive. If Zed pulls this off, they could make Git's snapshot model look as outdated as CVS. The question isn't whether operation-level version control is the future - it's whether traditional development workflows can survive without it. What happens when your codebase becomes a living history where every insight is preserved forever? #SoftwareDevelopment #DeveloperTools #AI #Collaboration #VersionControl #IDE #TechInnovation #Programming #Engineering #Productivity #OpenSource #DeveloperExperience #CodeReview #TechFunding #DeepTech
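DeltaDB's internals aren't described in this post beyond "operation-based version control using CRDTs", so here is only a toy illustration of the operation-log idea it contrasts with snapshots: every edit is a first-class record that replicas replay in a deterministic order, so history, and anything attached to it, is never lost between commits. (A real CRDT additionally transforms positions so concurrent edits commute; this sketch omits that.)

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Insert:
    """One recorded edit: insert `text` at `pos`, stamped with a Lamport
    clock and site id so every replica sorts operations the same way."""
    pos: int
    text: str
    lamport: int
    site: str

def replay(ops) -> str:
    """Rebuild the document by replaying the full operation log in order.
    Because the log keeps every edit, a discussion can reference the exact
    operation it concerns instead of a snapshot that later goes stale."""
    doc = ""
    for op in sorted(ops, key=lambda o: (o.lamport, o.site)):
        doc = doc[:op.pos] + op.text + doc[op.pos:]
    return doc

log = [
    Insert(pos=0, text="hello", lamport=1, site="alice"),
    Insert(pos=5, text=" world", lamport=2, site="bob"),
]
print(replay(log))  # hello world
```

The contrast with Git is that a snapshot model stores only the resulting states; an operation log like this keeps the individual edits themselves as addressable, orderable facts.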
-
Liran Grinberg
Team8 • 10K followers
Team8 is excited to back Clover Security 🍀 as it comes out of stealth with $36M across Seed and Series A – with Team8 leading the seed and Notable Capital leading the Series A following Clover's explosive growth 🚀 Clover is tackling one of the biggest gaps in modern software: #product #security that only wakes up after the product is already designed and built. In an AI-native world where products start in Confluence, Jira, GitHub, Cursor and Slack, security has to start there too. That’s exactly what Clover does: AI agents embedded directly into the product workflow, helping teams catch design flaws early and build secure products by design, not by cleanup. It’s a new foundation for product security in the AI era – design-led product security. Behind Clover are two exceptional founders: Alon Kollmann and Or Chen. They’ve spent their careers in product security at 8200, Checkmarx, Dazz (acquired by Wiz), Google and Microsoft, sitting next to engineering teams, feeling the pain of late-stage reviews and “too little, too late” security. Clover is the platform they wished they’d had: security that thinks like a product security engineer, but scales like software. It's great to partner with Assaf Rappaport (Wiz), Shlomo Kramer (Cato Networks, Check Point Software, Imperva), Assaf Hefetz (Snyk), Glenn Solomon and Oren Yunger (Notable Capital), SVCI - Silicon Valley CISO Investments, CCL and many other security luminaries! CC: Snir I., Heelee Kriesler, Hai Maler, Matthew J. Schoenfeld, Ori Yankelev, Asaf Azulay, Ori Barzilay, Sharon Shmueli, Sarit Firon, Uri Shamay, Amir Zilberstein, Marc Gaffan, Aviv Yonas, Nick Aharoni, Paz Menshes, Tal Levi, Eyal Eliakim, Assaf Mischari
-
Chris Lam
Aizzie • 4K followers
There are two types of vibe coding: "full vibe coding" and "semi-vibe coding". I recently had a thought-provoking conversation with Jenny Wong about #VibeCoding. Many engineers are against it, but as a serial CTO at Series A/B startups who has also run my own startups, I am NOT against vibe coding at all.

1️⃣ Full vibe coding, with tools like Lovable/Replit/Bolt, is a game-changer for non-technical founders aiming to build MVPs quickly and validate ideas. While critics highlight issues like AI-generated bugs, security holes, and the risk of sky-high cloud bills, these are manageable risks for small-scale, early deployments. The key is to recognize that vibe-coded prototypes aren't meant to scale; once validation is achieved, expect "big bang" refactoring or a complete revamp.

2️⃣ Semi-vibe coding is already mainstream for experienced teams like ours at Aizzie: we've been leveraging GitHub Copilot since 2021, then Cursor last year, and now Claude Code and Kiro. Here, AI assists us in code generation, but everything remains under expert supervision. This hybrid model enhances productivity while keeping true engineering discipline intact.

Embracing both forms of vibe coding can accelerate time-to-market and empower both engineers and non-technical founders, as long as we remain mindful of the trade-offs and transition at the right time.
-
Carl Fritjofsson
CREANDUM • 8K followers
AI is reshaping every industry — and VC is no exception.

Great catching up with Nathaniel Barling, operating partner at Andreessen Horowitz, who leads a 10-person engineering team building internal AI tools to supercharge their VC workflows. 🚀

At Creandum, we’re right there on the same journey. Over the past year, we’ve rolled out multiple AI-enabled apps (some built with Lovable ofc 😎) that now power everything from sourcing + dealflow analysis to portfolio management. The productivity impact is real — and we’re only just getting started.

In VC (like everywhere else): adapt and embrace… or get run over. ⚡️
-
Alex Shaw
Laude Institute • 2K followers
Mike Merrill and I are excited to announce the next chapter of Terminal-Bench with two releases:

1. Harbor, a new package for running sandboxed agent rollouts at scale
2. Terminal-Bench 2.0, a harder version of Terminal-Bench with increased verification

Harbor is the package we wish we had had while making Terminal-Bench. It’s for agent, model, and benchmark developers and researchers who want to evaluate and improve agents and models. Just a few of the features I love about Harbor:

- Evaluate any agent that can be installed and run autonomously
- Scale up to thousands of concurrent containers using providers like Daytona and Modal
- Generate rollouts for SFT data gen and RL
- Create your own benchmarks or use existing ones like SWE-Bench Verified

Harbor is also the official harness for Terminal-Bench 2.0. We used it to run tens of thousands of experiments in containerized environments while developing the latest version of the benchmark.

So why Terminal-Bench 2.0? We always knew that as model capabilities increased, we’d need to keep Terminal-Bench up to date with frontier capabilities. Terminal-Bench 2.0 consists of 89 hard tasks to test these capabilities. We aim to push the frontier with an increased emphasis on task quality. Each task received several hours of human and LM-assisted verification to ensure that tasks are (1) solvable, (2) realistic, and (3) well-specified. We’ll share more on how we did this in our upcoming preprint.

At present, Codex CLI with GPT-5 sits at the top of our new leaderboard. Astute Terminal-Bench fans may notice that SOTA performance is comparable to TB1.0 despite our claim that TB2.0 is harder. We believe this is because task quality is substantially higher in the new benchmark: we have removed several misspecified or impossible tasks, increasing difficulty while maintaining raw performance.

Interested in using Terminal-Bench 2.0 or submitting to our new leaderboard? Check out the Harbor docs for more information.
It's been a pleasure working alongside our advisors, Ludwig Schmidt, who seeded the idea and guided our development, and Andy Konwinski, who gave us the Laude Institute mandate to "ship your research." Additionally, Terminal-Bench wouldn’t be possible without its community. We’re so thankful to the over 1k members of our Discord who contributed and audited tasks, helped build and beta test Harbor, and made this such a fun project for everyone involved. https://lnkd.in/guZXbPVn https://lnkd.in/g7xqkjK8 https://lnkd.in/gETM-wkx
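The "thousands of concurrent containers" claim above comes down to a familiar pattern: fan out one rollout per task, cap in-flight work with a semaphore, and gather the graded results. This is an illustrative sketch of that pattern only — it is not Harbor's actual API, and `run_rollout`'s body is a stand-in for launching a container via a provider such as Daytona or Modal.

```python
# Illustrative concurrency-limited rollout runner — NOT Harbor's API.
# One coroutine per benchmark task; a semaphore caps how many rollouts
# run at once, the way a real harness would cap live containers.
import asyncio

async def run_rollout(task_id: str, sem: asyncio.Semaphore) -> dict:
    async with sem:                # at most max_concurrency rollouts in flight
        await asyncio.sleep(0)     # stand-in for: start container, run agent, grade
        return {"task": task_id, "resolved": True}

async def run_benchmark(task_ids, max_concurrency: int = 8):
    sem = asyncio.Semaphore(max_concurrency)
    return await asyncio.gather(*(run_rollout(t, sem) for t in task_ids))

# 89 tasks, mirroring the Terminal-Bench 2.0 task count from the post.
results = asyncio.run(run_benchmark([f"task-{i}" for i in range(89)]))
print(sum(r["resolved"] for r in results), "of", len(results), "tasks resolved")
```

Scaling this from 8 to thousands of containers is then a matter of raising the semaphore limit and pointing the rollout body at a remote sandbox provider instead of the local machine.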
-
David George
Andreessen Horowitz • 10K followers
I had a great time chatting with Patrick O'Shaughnessy on Invest Like The Best. I've known Patrick since college, and this is the first time we've talked markets and investing at this much depth.

The fundamentals of company building haven’t changed: people, products, and markets matter. But obviously, private markets have evolved substantially over my career: there are now ~6x more private unicorns than public companies with a $1b+ market cap. And at the end of 2010, just 2 public technology companies were among the top 10 in market cap; today it’s 8 of 10.

AI (alongside software eating everything more generally) is clearly driving a lot of this. But it’s instructive to look at everything from the steam engine, to the early days of Facebook and Google user monetization, to real-time success stories like Databricks, Anduril, OpenAI and Waymo, to get a clear picture of where the opportunities lie.

It was a pleasure to go deep on all this and more!