Kevin Ott
Palo Alto, California, United States
4K followers
500+ connections
About
I'm a software engineer and engineering leader who is passionate about building great…
Articles by Kevin
-
Innovation at the Edge: Navigating Distributed Environments and Beyond at Cisco’s Data and Analytics Conference 2016
I am pleased to share that Cisco’s Connected Streaming Analytics (CSA) is among the 15 leading vendors featured in…
18
-
Sustaining the Smart City (May 3, 2016)
“Cisco Energy Management Suite is a key project in reducing energy costs across our 700-plus buildings, aiding our…
14
-
Cisco Data Virtualization Powers Operational Use Cases (Mar 30, 2016)
In the past 15 years we’ve developed data virtualization products and solutions to keep our customers ahead of the game…
35
1 Comment -
Maximizing Your Return on Investment From Data Virtualization (Jul 8, 2015)
Forrester Consulting recently conducted a Total Economic Impact (TEI) study and examined the potential return on…
1
Activity
4K followers
-
Kevin Ott shared this: In this era of first-party data, you need a better way to make your offers relevant and effective for each consumer. Formation's new Dynamic Offer Optimization Platform automates the creation and optimization of each customer's offer - that's billions of unique offers deployed in minutes at the click of a button! #brandloyalty #customerexperience #digitalexperience
-
Kevin Ott shared this: Only 33% of offers are considered relevant by consumers. That means marketers are wasting ⅔ of the time spent on offers. Do you like wasting time? Because we don’t. That’s why Formation built a better way - Dynamic Offer Optimization. Automate the creation and optimization of each customer's offer - that's billions of unique offers deployed in minutes at the click of a button. #brandloyalty #customerexperience #relevancygap https://lnkd.in/dvajA6Q
-
Kevin Ott shared this: We're very excited to officially launch our Dynamic Offer Optimization Platform! We've taken all of the knowledge and experience from optimizing offers for some of the world’s biggest brands and we've made that breakthrough technology available to all.
-
Kevin Ott shared this: Data Scientists, we are changing the way the world's leading brands interact with their customers. Join us! #datascience https://lnkd.in/giDCHcK
-
Kevin Ott shared this: We are very pleased to announce that our Data Virtualization business is moving to TIBCO Software Inc. We are looking forward to accelerating innovation as part of the leading integration and analytics portfolio. Exciting things to come.
-
Kevin Ott shared this: With the acquisition of Cisco Data Virtualization, TIBCO customers will be able to bring together data from disparate sources into a single orchestrated layer, addressing the growing data needs of companies with maturing analytic ecosystems. Learn more: https://lnkd.in/eYmiVFG
-
Kevin Ott liked this: We rewrote our entire BigQuery dbt project (160+ files) in Google's Pipe query syntax in one day. Our team loves it, and both humans and AI agents have an easier time reasoning about data transformations now.
A normal SQL query looks like this:
select user_id, first_name, last_name, count(*) as event_count
from user_events
where age > 30 and gender = 'M'
group by user_id, first_name, last_name
having event_count > 5
Here you have mixed into one 'sentence' several things at once, and in the wrong order:
1. reading data from a table
2. doing an aggregation and column calculation (event count)
3. applying three filters, two before and one after aggregation
4. picking your final output columns
The pipe SQL way to do this is:
from user_events
|> where age > 30 and gender = 'M'
|> aggregate count(*) as event_count group by user_id, first_name, last_name
|> where event_count > 5
|> select user_id, first_name, last_name, event_count
You start with the table -> filter it -> aggregate it -> filter again -> then say your output columns explicitly for good measure and clarity. The magic is that every step takes the ENTIRE output of the previous step and does a new thing to it. You don't need to remember when to use 'where', 'having', or 'qualify' - there's only one way to filter. You don't mix a where filter with a join and have to figure out why you're losing data. It's just sequential operations on the whole thing. You could do this in normal SQL with a bunch of CTEs, but it would be harder to read and not worth the effort.
It's extremely easy to debug - just select the first 2 steps, run them, and see what happened. Add in a limit to get a smaller set of rows at any point. Reorder your operations by just moving one line up and down.
I made my own language to try to do this one night (a custom DSL that compiles to SQL), but the next morning I learned that Google had already done the important parts natively. It's based on this paper from the Google team, and is a standard feature in all Google SQL products. https://lnkd.in/gF5it25j docs: https://lnkd.in/g8gBs2NN Shoutout to Jeff Shute and co. (Pipe query syntax | BigQuery | Google Cloud Documentation)
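The post's "every step takes the ENTIRE output of the previous step" idea can be sketched outside SQL as well. Below is a minimal Python emulation with toy rows standing in for the `user_events` table (grouping only by `user_id` for brevity; the data is invented for illustration):

```python
from collections import Counter

# Hypothetical rows standing in for the user_events table from the post:
# six events for user 1, three for user 2, two for an under-age user 3.
events = (
    [{"user_id": 1, "age": 35, "gender": "M"}] * 6
    + [{"user_id": 2, "age": 40, "gender": "M"}] * 3
    + [{"user_id": 3, "age": 25, "gender": "M"}] * 2
)

def pipe(rows, *steps):
    # Each step receives the ENTIRE output of the previous step.
    for step in steps:
        rows = step(rows)
    return rows

result = pipe(
    events,
    lambda rows: [r for r in rows if r["age"] > 30 and r["gender"] == "M"],  # |> where
    lambda rows: [{"user_id": u, "event_count": n}                           # |> aggregate
                  for u, n in Counter(r["user_id"] for r in rows).items()],
    lambda rows: [r for r in rows if r["event_count"] > 5],                  # |> where
)
print(result)  # [{'user_id': 1, 'event_count': 6}]
```

As in the pipe syntax, there is only one way to filter, and it can appear before or after the aggregation step without special keywords.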
-
Kevin Ott liked this: What can you do when your data warehouse has GenAI, Vector Search, and Multimodal built in? Hundreds of people answered that question as they participated in BigQuery's AI Hackathon on Kaggle. See interviews with the winning team from each category as well as new features that provide even more possibilities in our new blog post: https://lnkd.in/guHDCtAn. Omid Fatemieh, Thibaud Hottelier, Ivan Santa Maria Filho, Yves-Laurent Kom Samo, PhD, Tomas Talius, Ganesh Kumar Gella, Alicia Williams (BigQuery AI Hackathon: Celebrating Innovation and a Look at What's New | Google Cloud Blog)
-
Kevin Ott liked this: We are excited to announce several new and powerful GenAI capabilities from BigQuery.
1. [GA] AI.generate() function for both text and structured data generation (e.g., entity extraction).
2. [Preview] AI.embed() function for text and multimodal embedding generation.
3. [Preview] AI.similarity() for computing semantic similarity scores between two pieces of text, two images, or across text and images.
4. Gemini 3.0 Pro/Flash support.
5. Simplified permission setup for GenAI functions by using End User Credentials (EUC).
Please check out the blog post below for more details and share any feedback: https://lnkd.in/gujktmhX Kudos to the core engineering and PM/TPM team: Tianxiang Gao, Derrick Li, Yuxiang Li, Jiashang Liu, Vaibhav Sethi, Anuj Rajbhandari, as well as our Vertex AI partners. cc: Omid Fatemieh, Shanmugam Kulandaivel, Sanjay Dey, Jing Jing Long, Vinay Balasubramaniam, Ganesh Kumar Gella, Neeraja Rentachintala, Tomas Talius (New BigQuery gen AI functions for better data analysis | Google Cloud Blog)
-
Kevin Ott liked this: BigQuery now supports fully-managed inference for open embedding or LLM models from Hugging Face and Vertex AI Model Garden. Benefits include simplified deployment, unified inference, and automated infrastructure management, all via simple SQL statements. Please check it out and share any feedback: https://lnkd.in/gFhq8uq2 Kudos to the core development and PM team: Yunmeng Xie, Zehao (Jasper) Xu, Jiashang Liu, Xi Cheng, Vaibhav Sethi, as well as our Vertex AI partners. CC: Shanmugam Kulandaivel, Vinay Balasubramaniam, Neeraja Rentachintala, Jing Jing Long, Ganesh Kumar Gella, Tomas Talius, Wencheng Lu, Sushant Jain, Manoj Gunti, Suda Srinivasan, Geeta Banda, Shawn Ma (Introducing BigQuery managed and SQL-native inference for open models | Google Cloud Blog)
-
Kevin Ott liked this: Today we launched Personal Intelligence in the Gemini App, marking an exciting step toward making Gemini uniquely helpful. It lets you securely connect Gemini to the Google apps you use every day, like Gmail and Photos, resulting in even more tailored responses that are personalized to you. Connecting your apps with Personal Intelligence is launching as a beta in the U.S. for Google AI Pro and AI Ultra subscribers and is opt-in. Learn more: https://lnkd.in/gim5XjMG
-
Kevin Ott liked this: Today, we are excited to announce the open sourcing of one of our most critical infrastructure components, Dicer: Databricks’ auto-sharder, a foundational system designed to build low-latency, scalable, and highly reliable sharded services. It is behind the scenes of every major Databricks product, enabling us to deliver a consistently fast user experience while improving fleet efficiency and reducing cloud costs. Dicer achieves this by dynamically managing sharding assignments to keep services responsive and resilient even in the face of restarts, failures, and shifting workloads.
As detailed in the blog post, Dicer is used for a variety of use cases including high-performance serving, work partitioning, batching pipelines, data aggregation, multi-tenancy, soft leader election, efficient GPU utilization for AI workloads, and more. By making Dicer available to the broader community, we look forward to collaborating with industry and academia to advance the state of the art in building robust, efficient, and high-performance distributed systems.
In our blog post, we discuss the motivation and design philosophy behind Dicer, share success stories from its use at Databricks, and provide a guide on how to install and experiment with the system yourself. Blog post: https://lnkd.in/gTFd7D7r Github repo: https://lnkd.in/g-xvSumW
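The post doesn't detail Dicer's algorithm, but the core property it describes, shard assignments that stay stable through node failures, can be illustrated with rendezvous hashing, a standard technique where a node's failure moves only the shards that node owned. A sketch under that assumption, not Dicer's actual implementation:

```python
import hashlib

def rendezvous_owner(shard: str, nodes: list[str]) -> str:
    """Pick the node with the highest hash score for this shard.
    Scores are independent per (shard, node) pair, so removing one
    node reassigns only the shards that node owned."""
    def score(node: str) -> str:
        return hashlib.sha256(f"{shard}:{node}".encode()).hexdigest()
    return max(nodes, key=score)

nodes = ["node-a", "node-b", "node-c"]
shards = ["shard-0", "shard-1", "shard-2", "shard-3"]

before = {s: rendezvous_owner(s, nodes) for s in shards}
# Simulate node-b failing: recompute with it removed.
after = {s: rendezvous_owner(s, [n for n in nodes if n != "node-b"]) for s in shards}

moved = [s for s in shards if before[s] != after[s]]
# Invariant: every shard that moved was previously owned by node-b.
print(moved)
```

Real auto-sharders layer load-aware rebalancing and health checks on top of a stable assignment scheme like this.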
-
Kevin Ott liked this: Ever since I joined Google back in March, the top thing I’ve heard is, "You guys are Google, why aren’t you using my Gmail, photos, calendar, etc. to make my experience better?" Today we have an answer: Personal Intelligence. Now you can ask questions that actually use the context of your life to get helpful answers. For example, if you’re traveling and want recommendations on where to eat or what to do, you don’t want the same list that everyone else sees. With Personal Intelligence, Gemini will reference your interests and preferences (you like hiking and being outdoors, but you are visiting Chicago in January 🥶) to find hidden gems that are right for you. We’re rolling it out now, starting with Google AI Pro and AI Ultra subscribers in the U.S. Check out the full details here: https://lnkd.in/giNRsP8N
-
Kevin Ott liked this: Build your own enterprise-ready data agent with the BigQuery managed MCP Server and Google Agent Development Kit in a few easy steps. cc: Andi Gutmans, Yasmeen Ahmad, Neeraja Rentachintala, Ganesh Kumar Gella, Prem Ramanathan, Wu Jiaxun https://lnkd.in/gNygPiNV (Using the fully managed remote BigQuery MCP server to build data AI agents | Google Cloud Blog)
Experience
Education
Volunteer Experience
-
Mentor
Girls Power Tech @ Cisco
- 2 years 8 months
Children
Mentor to girls aged 13 to 18 to encourage education and career paths in STEM (science, technology, engineering, and math). See more at: http://goo.gl/zwYNkz
Other similar profiles
Explore more posts
-
Carmella (Surdyk) Weatherill
3K followers
We're excited to announce the release of the Apache Iceberg V3 specification! This new standard brings powerful features like Deletion Vectors, default column values, and enhanced data types (VARIANT and GEOSPATIAL) that are designed to simplify data lake operations and unlock new possibilities for data analysis. Dive into the details and see how these changes are shaping the future of open data lakehouses. #ApacheIceberg #OpenSource #DataEngineering #DataAnalytics #Google https://lnkd.in/gVD9hJfE
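Deletion vectors, one of the V3 features named above, can be illustrated with a toy sketch: deletes are recorded as row positions alongside an immutable data file, and reads filter them out, so a delete never rewrites the file. Iceberg uses compressed bitmaps; a Python set shows the same idea (file contents here are invented):

```python
# Immutable data file contents (positions 0..3) plus a deletion vector.
rows = ["alice", "bob", "carol", "dave"]
deleted: set[int] = set()

def delete_row(pos: int) -> None:
    # Record the position as deleted; the data file itself is untouched.
    deleted.add(pos)

def scan() -> list[str]:
    # Readers merge the deletion vector at scan time.
    return [r for pos, r in enumerate(rows) if pos not in deleted]

delete_row(1)
print(scan())  # ['alice', 'carol', 'dave']
```

The win is that deletes become small metadata writes, with compaction deferred until it is worthwhile.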
-
Sacha Ghiglione
Davos Tech Summit • 25K followers
Dwarkesh just dropped a 2.5-hour deep dive with AI pioneer Andrej Karpathy on the Dwarkesh Podcast, and it’s a must-listen, even if it’s a long one. Karpathy has by far the best and most sober takes on AI progress in the community: super careful with what he says and how he articulates it, and he isn’t too bearish or bullish. Andrej, with his background as former director of AI at Tesla and OpenAI, shared some fascinating insights: 🔹️AGI might be a decade away, but it’s not going to be a dramatic shift—it’ll weave into our lives gradually. 🔹️He dove into the limitations of LLMs, critiqued reinforcement learning, and explained why self-driving cars are taking longer than expected. 🔹️Plus, he thinks AGI could boost our economy by a steady 2% each year. If you’re into AI, tech, or just curious about the future, grab a coffee, settle in, and give this a listen. It’s worth it! Also on all your usual podcast platforms.
2
-
Sasha Kipervarg
Stanford University Graduate… • 5K followers
Cloud cost uncertainty isn’t a fluke; it’s by design. Cloud providers profit from complexity: 💸 Granular billing 💸 Shifting discount programs 💸 Opaque pricing for AI workloads 💸 Exploding SaaS and shadow IT The result? Engineering and FinOps leaders are left scrambling to explain unpredictable cloud bills while innovation can’t slow down. Traditional dashboards and static budgets won’t save you. You need adaptive cloud cost management that embeds cost awareness into engineering workflows, detects anomalies early, and drives shared accountability. We break it all down in our latest post, including 5 steps to start leading through the chaos. Read it here: https://lnkd.in/ge4jadxr #FinOps #CloudCost #EngineeringLeadership #CloudComputing #AI #SaaS #Ternary
35
1 Comment -
Or Lenchner
Bright Data • 12K followers
As models evolve beyond text and code, video and multi‑modal data will become the new frontier. That shift requires infrastructure that can handle high‑volume streams at scale, without latency and without getting blocked. At Bright Data, that’s been our focus for over a decade. The same trusted platform that fuels the world’s leading AI companies with live web data today is already powering video and other multi‑modal formats with the same reliability and precision. It’s something we’re delivering right now. #AI #AIAgents #EnterpriseAI
57
1 Comment -
Manvinder Singh
Google • 12K followers
Deploying LLMs at scale across your organization? LiteLLM (YC W23) and Redis can be a great solution, with caching and rate-limiting in Redis built in! Check out this article by Rini Vasan to learn more! Tyler Hutcherson, Benoit Dion, Blair Pierson, Ishaan Jaffer
44
1 Comment -
Ben Schaechter
Vantage • 5K followers
Amazon Web Services (AWS) just launched general availability of new R8gn instances - though the way that I found out about them wasn't from the official Amazon blog post, it was from the new automated newsletter for EC2instances.info. EC2instances.info is regularly rebuilding itself every few hours and now also checks for two things: 1) new instance types and 2) any pricing changes with existing instance types. You can sign up to receive email alerts about these two changes as often as the site finds them. If you're curious about getting the notifications sent your way to see what Amazon is launching, especially leading up to AWS re:Invent, I included the link for how to set this up in the comments below.
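The two checks the post describes, new instance types and pricing changes, amount to diffing successive pricing snapshots. A minimal sketch with made-up prices (hypothetical values, not actual AWS rates):

```python
# Two hypothetical pricing snapshots keyed by instance type (USD/hour).
old = {"m7g.large": 0.0816, "r7g.large": 0.1071}
new = {"m7g.large": 0.0816, "r7g.large": 0.1064, "r8gn.large": 0.1260}

# Check 1: instance types present now that weren't before.
new_types = sorted(new.keys() - old.keys())

# Check 2: types present in both snapshots whose price changed.
price_changes = {t: (old[t], new[t])
                 for t in old.keys() & new.keys()
                 if old[t] != new[t]}

print(new_types)      # ['r8gn.large']
print(price_changes)  # {'r7g.large': (0.1071, 0.1064)}
```

Run the same diff every few hours against a freshly scraped snapshot and you have the alerting loop the post describes.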
34
3 Comments -
Niall Murphy
6K followers
YellowDog.ai just set a 10x benchmark uplift in scale computing, delivering 40,000 tasks per second (TPS) and managing 100,000 compute nodes in the cloud. What's even more interesting: that's 2x IBM Symphony, and it opens an intriguing pathway for these until-now closed/captive systems.
11
-
Bret Taylor
OpenAI • 145K followers
Building on top of large language models is fun, but getting consistent performance and reliability is extremely challenging. I love this post from Kimberly Patron about how Sierra uses health and performance-based traffic routing and request hedging to get meaningfully better performance and tail latency from high scale LLMs https://lnkd.in/g4qWRCb5
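Request hedging, one of the techniques the post mentions, means firing a backup request when the primary hasn't answered within a deadline and taking whichever response arrives first. A minimal asyncio sketch with simulated replicas (not Sierra's implementation):

```python
import asyncio

async def call_replica(name: str, latency: float) -> str:
    # Stand-in for an LLM backend call with a given response time.
    await asyncio.sleep(latency)
    return name

async def hedged(primary, backup, hedge_after: float) -> str:
    # Start the primary; if it hasn't answered within hedge_after seconds,
    # launch the backup and return whichever finishes first.
    p = asyncio.ensure_future(primary)
    done, _ = await asyncio.wait({p}, timeout=hedge_after)
    if done:
        return p.result()
    b = asyncio.ensure_future(backup)
    done, pending = await asyncio.wait({p, b}, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()  # drop the loser to free the backend slot
    return done.pop().result()

async def main() -> str:
    # Slow primary (0.5 s) gets hedged after 0.05 s; the fast backup wins.
    return await hedged(call_replica("primary", 0.5),
                        call_replica("backup", 0.05),
                        hedge_after=0.05)

winner = asyncio.run(main())
print(winner)  # backup
```

Hedging trades a small amount of extra load (the occasional duplicate request) for a large cut in tail latency, since only the slowest requests ever trigger the backup.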
284
13 Comments -
Vijayakumar A
NeevCloud® • 8K followers
Jensen Huang dropped a number at #GTC 2026 that stopped me mid-scroll. "I believe computing demand has increased by 1 million times over the last two years." (Jensen Huang, GTC 2026.) Not over a decade. Not someday. Two years. Then came the slide that reframed everything for me. The entire AI infrastructure thesis in one frame: ▸ AI Factories are the Industrial Infrastructure of the AI Era ▸ Inference is the Workload ▸ Tokens are the New Commodity ▸ Compute is Revenue And the line Jensen said out loud: "It's now a factory to generate tokens." On the All-In podcast at GTC, Jensen laid out a thought experiment: if a $500K/year developer spent less than $250K on tokens by year end, he'd be "deeply alarmed." My read on that: it's not a pricing benchmark. It's a provocation. It's Jensen saying we are still dramatically underestimating how much value a token can unlock. Here's where I land on all of this. The cost of generating tokens is dropping fast. The value of what those tokens can do is rising exponentially. The gap between those two curves? That's where the real opportunity lives. But (and this is the part most people skip) the gap only gets captured if the infrastructure beneath it is built right. #Tokens don't generate themselves. #Inference doesn't optimize itself. And enterprises won't run mission-critical agentic workloads on infrastructure that can't guarantee latency, throughput, and cost-per-token at scale. Whoever builds that layer reliably, efficiently, and at enterprise grade #owns the #AIstack: the inference delivery platform. At NeevCloud®, this is exactly what we're building toward on our AI SuperCloud platform. Because Jensen is right: the AI factory era is here. The question is who builds the pipes that make the factory run. What's your take: is token cost the most underloved problem in enterprise AI right now? Drop your thoughts below 👇 Note: I didn't attend GTC in person. Insights are from the official NVIDIA GTC keynote on YouTube and public coverage.
Screenshot credit: NVIDIA GTC 2026 official livestream. #NvidiaGTC #AIInference #TokenEconomy #AgenticAI #EnterpriseAI #AISuperCloud #NeevCloud
34