Jack Lu
Austin, Texas, United States
6K followers
500+ connections
Explore more posts
Alex Kiriakides
TrustedStake • 952 followers
"Why it's more interesting to launch on Bittensor than start from scratch (Credit: @Axilo on X) Why TAO? - Companies building on Bittensor are effectively subsidizing expensive compute by lifting on top of the network effects of Bittensor and its token, TAO as the primary incentive that everyone in the ecosystem competes for. - Most AI crypto protocols focus on deterministic outputs, rewarding only one type of result as the single source of truth. This results in one-dimensional incentive structures that are hard to scale and difficult to align with broader community participation. - Bittensor should be viewed not just as an AI protocol, but as an incentive protocol — a system where miners compete to produce subjective, high-value outputs across various subnets. - Because those outputs are subjective (e.g. language, vision, decision-making), the competition itself becomes dynamic, persistent, and difficult to game. - That creates a multi-layered economic game between miners, validators, and subnet owners with coordination, routing, and performance all incentivized natively - Many of these protocols are also hindered by high FDV funding rounds that front load investor returns and weaken the opportunity for meaningful, community driven growth. - Fairer launches like Bitcoin, Ethereum, and even Doge created stronger early ecosystems. Bittensor reintroduces that fairness with embedded incentive mechanisms that attract real contributors and incentives competition at every layer of the network. - As an investor excited about TAO, you're not betting on a single model or commodity. Instead on the collective intelligence / commodities of the network. - Every innovation from miners across every subnet is denominated in TAO, making it a meta-bet on decentralized AI productivity itself. - Instead of exposure to a single output, you're gaining exposure to the creation, distribution and competition for all outputs across the network. 
- As open-source AI models proliferate and centralized moats erode, Bittensor will become inevitable.. Offering open, incentive aligned coordination for AI development and deployment. - It's not just a bet on a network. it's a bet on the economic infrastructure layer of open-source AI. - What makes Bittensor especially compelling is how many existing crypto teams choose to launch subnets which proves that the economic design is working. I can see the same case for web2 companies coming to TAO to subsidize costs through miners. - It's simply more efficient to plug into TAO, align with existing miners and incentives, than to bootstrap a network from scratch. - Imagine if Midjourney, Stable Diffusion, or Hugging Face APIs were powered by miners competing to serve faster, cheaper, better results with the cost offset by TAO incentives. Whilst miners are hyper-competitive and have the incentives to do whatever it takes to outcompete their peers... That's a world I want to live in, and I believe that day will come."
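The incentive structure the post sketches (validators score miners' subjective outputs, and a fixed TAO emission flows to miners in proportion to those scores) can be illustrated with a toy score-proportional split. This is a simplified sketch, not Bittensor's actual Yuma Consensus mechanism; the names and numbers are hypothetical.

```python
# Toy sketch of score-proportional rewards: a fixed emission is split
# across miners in proportion to validator quality scores. This is NOT
# Bittensor's actual reward algorithm, just an illustration of the idea.

def split_emission(scores: dict[str, float], emission: float) -> dict[str, float]:
    """Split `emission` TAO across miners proportionally to their scores."""
    total = sum(scores.values())
    if total == 0:
        # No useful output this round: nothing is paid out.
        return {miner: 0.0 for miner in scores}
    return {miner: emission * s / total for miner, s in scores.items()}

# Example round: three hypothetical miners with subjective quality scores.
rewards = split_emission({"miner_a": 0.9, "miner_b": 0.6, "miner_c": 0.0},
                         emission=1.0)
```

Because rewards depend on relative score rather than matching one canonical answer, a miner that produces nothing earns nothing, while the rest compete continuously for the same fixed emission.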
Ed Y. Li
allinsocial • 26K followers
At Citadel Securities’ Future of Global Markets 2025 in New York City, Jensen Huang, Founder and CEO of NVIDIA, joined Konstantine Buhler of Sequoia Capital to discuss how NVIDIA’s early vision of accelerated computing has evolved into the infrastructure powering the AI revolution. Jensen shared insights on:
• The rise of the Integrated AI Factory Platform: connecting compute, data, and intelligence at planetary scale.
• The enormous future potential of Agentic AI and Physical AI: enabling machines that learn, reason, and act in the real world.
• The growing importance of Sovereign AI: empowering nations to build their own AI capabilities securely and responsibly.
This dialogue between technology and capital markets leaders highlights how AI is redefining productivity, innovation, and national competitiveness, shaping the next frontier of global growth.
#NVIDIA #CitadelSecurities #SequoiaCapital #AI #PhysicalAI #AgenticAI #SovereignAI #FutureOfMarkets #JensenHuang
Sergey O.
Compounding Your Wealth, LLC • 4K followers
Micron: Record 74% Gross Margin, 68% Operating Margin. But why did the stock fall after such a quarter?

Micron delivered one of the strongest quarters across the semiconductor space, driven by a powerful combination of AI-driven demand, tight supply, and strong execution. Yet despite record profitability and beats across key metrics, the stock declined post-earnings.

Revenue reached $23.9B, well above expectations of $19.7B, supported by broad-based strength across segments. What stands out is not just growth but the quality of earnings. Gross margin rose to 75%, up more than 37 percentage points YoY, while operating margin reached 68%. Net margin followed at 58%. Free cash flow reached $6.9B, translating into a 23% margin, with EBITDA margin at 77%. This indicates not only strong pricing power but also disciplined cost control, with combined R&D and SG&A operating expenses contained at roughly 6% of revenue.

AI workloads are significantly increasing memory intensity across all compute layers. Data centers are constrained not by compute but by the availability of DRAM and NAND, particularly high-bandwidth memory. Micron is directly benefiting from this shift, with Cloud Memory and Data Center segments growing 160–210% YoY, while even traditionally slower segments like automotive showed strong expansion.

Product innovation reinforces this positioning. The ramp of HBM4 for next-generation AI platforms such as Nvidia Vera Rubin, alongside developments like 256GB LPCAMM2 modules enabling up to 2TB per CPU, highlights how memory is becoming a critical bottleneck in AI infrastructure. At the same time, Micron continues to expand manufacturing capacity globally, with new fabs in the U.S., Singapore, and India. Micron also signed its first five-year Strategic Customer Agreement, signaling a shift toward longer-term demand visibility in what has historically been a cyclical business.

Combined with a net cash position of $6.5B, Micron enters this phase with both financial strength and operational leverage. So with such strong fundamentals, what is the market pricing in? Part of the answer likely lies in expectations. The guidance for the next quarter implies $33.5B revenue and 81% gross margin, suggesting continued acceleration. Capital intensity is rising sharply, with CapEx expected to exceed $25B in fiscal 2026 and increase further in 2027. Investors may be assessing how sustainable current margins are in the context of future supply expansion. Valuation remains relatively modest at 10x forward P/E and 7.6x EV/Sales, especially considering current profitability levels. The key debate shifts from near-term performance to cycle durability and long-term capital efficiency.

Overall, the quarter reflects a structural shift in memory markets. The challenge now is not growth, but how efficiently that growth converts into sustained returns as capacity scales.

#Micron #Semiconductors #AIInfrastructure #Investing #Earnings
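For readers who want to sanity-check the margin arithmetic, the percentages quoted in the post convert back into implied dollar amounts on the reported $23.9B revenue. The derived dollar figures below are rough back-of-envelope values of my own, not reported numbers.

```python
# Convert the reported margin percentages into implied dollar amounts.
# Only the revenue and margin percentages come from the post; the dollar
# amounts are derived (margin% x revenue) and rounded.

REVENUE_B = 23.9  # reported quarterly revenue, $B

def implied_dollars(margin_pct: float, revenue_b: float = REVENUE_B) -> float:
    """Dollar amount ($B) implied by a margin stated as a percent of revenue."""
    return round(margin_pct / 100.0 * revenue_b, 1)

gross_profit_b = implied_dollars(75.0)      # 75% gross margin  -> about $17.9B
operating_income_b = implied_dollars(68.0)  # 68% op margin     -> about $16.3B
net_income_b = implied_dollars(58.0)        # 58% net margin    -> about $13.9B
```

The gap between each tier (gross to operating to net) is what the post attributes to the unusually low ~6% of revenue spent on combined R&D and SG&A.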
Phee Boon Kang
Beloit College • 1K followers
📖 明快打字機 (MingKwai Typewriter): A Historian's Hunt for the Missing Link in Chinese Computing. A testament to human curiosity.

🔗 [The New York Times](https://lnkd.in/gCF-3wJn) story by Veronique Greenwood

In a fascinating global journey spanning 18 years, a dedicated historian unraveled the remarkable story of the MingKwai typewriter, a pivotal yet forgotten machine that bridged the traditional Chinese language and modern computing.

🔍 Key Highlights:
- 18-year research odyssey
- Pioneering innovation before the digital era
- Technological breakthrough in language representation

Further Exploration:
- 🎙️ [NPR article by Emily Feng](https://lnkd.in/gQbE4K7j)
- 📝 [Made in China Journal essay by Yangyang Cheng](https://lnkd.in/gUQ_9fnR)

#TechHistory #LinYutang
Simon Lancaster 🇺🇸🇨🇦🇵🇹
University of Waterloo • 35K followers
Nvidia and Foxconn reported to be in talks to deploy humanoid robots at Houston AI server making plant: https://lnkd.in/gYvJ-fWd

These types of mid-market humanoids could be Korea/Japan/Taiwan's moment:
• China may be poised to dominate consumer robotics, but...
• Korea, Taiwan, and Japan are ideally positioned to ship secure, reliable, mid-priced robots and humanoids for more stringent applications
• Cheaper than U.S. systems, safer than PRC supply chains: ideal for enterprise and government buyers

Why this matters now:
• Trusted supply chains + infosec baselines → KTJ ecosystems already meet industrial cybersecurity norms
• Industrial-grade mechatronics + serviceability → legacy robotics and component expertise translate into uptime
• Cost discipline without cutting safety → focus on modularity and repairability, not disposable hardware

Recent catalysts:
• Korea launched a national K-Humanoid Alliance to drive global leadership by 2030
• Taiwan's robotics suppliers are emerging as preferred subsystem and sensor partners for U.S. and EU OEMs
• Japanese firms are pivoting humanoid R&D into applied enterprise robotics: logistics, inspection, healthcare

For manufacturing-tech investors: the humanoid shift isn't about billion-dollar prototypes; it's about scalable, serviceable, and secure humanoids built on KTJ's manufacturing DNA. This is the mid-market inflection where enterprise robotics meets trusted industrial supply chains.
Thomas Johannes Look
Closelook Venture GmbH • 7K followers
Every new NVIDIA chip cycle expands TAM/SAM far faster than it expands cost, and accelerated Big Beautiful Bill depreciation now allows companies to invest even quicker, reduce taxes, and reach the profitable end-state sooner. The major systemic risk is a delay or underperformance in Rubin and Feynman, not rapid depreciation or fast iteration cycles. Most investors are looking at the wrong AI risk.

👉 Depreciation is linear; AI demand creation is exponential.
👉 The faster the chip cycle, the better for those who can execute.

What Rubin Creates
Rubin isn't just "faster Blackwell." It enables entire industries that are not economically viable today:
• New Products: always-on AI agents, autonomous multi-agent systems, AI-native systems that perform work, real-time multimodal agents with memory, continuous personal AI companions, autonomous research and engineering tools
• New Services: true RPA 2.0 automation, autonomous ticketing & IT resolution, AI-native customer support, AI-driven knowledge management, enterprise agent orchestration layers
• New Sectors: AgentOps, AI governance & compliance, agent marketplaces, agent-native developer ecosystems
Rubin is the bridge from the Copilot Economy (Blackwell) to the Agent Economy.

The Real Risk: Rubin Delays or Underperformance
If depreciation is not the risk, what is?
→ Rubin arriving late
→ Rubin underdelivering
→ Rubin not enabling the agent economy on time
Rubin is a platform inflection point. A delay would ripple across enterprise adoption, automation roadmaps, startup ecosystems, agent platform maturity, and hyperscaler consumption patterns.

Early Internet Parallel: The Real Risk Then, and Now
In 2000, one narrative was: "Mobile streaming is around the corner." But the reality was:
• smartphones didn't arrive until 2007
• 4G didn't arrive until 2010–2012
• cloud didn't mature until 2014–2016
• mobile video took a decade longer than investors expected
The risk was not speedy investing. The risk was that the promised applications lagged the infrastructure by 8–12 years, and the main monetization channel for the enabled investments in Y2K was something like clunky Yahoo ads.

The Big Beautiful Bill
The final piece of the puzzle is the new U.S. depreciation rules allowing maximum front-loading of AI infrastructure depreciation in the first year. This changes the financial profile of the AI build-out:
1️⃣ Companies can front-load depreciation → lower GAAP earnings in transition years → but free up cash flow → and reduce taxable income
2️⃣ Lower reported profits ≠ weaker economics. It's an accounting artifact, not a business deterioration.
3️⃣ Lower taxes + faster investment cycles = faster innovation. More money stays inside the system, compounding.
4️⃣ Long-term profits will be larger and arrive sooner, because companies reach the agent economy earlier.
5️⃣ The market may misread the transition year(s) (2026?). Reported earnings will look weak when underlying economics improve the most.
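The depreciation timing effect described above can be shown with a minimal worked example comparing straight-line depreciation against full first-year expensing. The lifetime deduction is identical; only its timing, and therefore the timing of the tax shield, changes. All figures here (a $100 asset, 5-year life, 21% tax rate) are hypothetical, not from the post.

```python
# Compare two depreciation schedules for the same asset: equal charges
# over its life (straight-line) versus expensing everything in year 1
# (front-loading). Front-loading cuts taxable income, and hence cash
# taxes, earlier, which is the cash-flow effect the post describes.

def straight_line(cost: float, years: int) -> list[float]:
    """Equal depreciation charge in each year of the asset's life."""
    return [cost / years] * years

def front_loaded(cost: float, years: int) -> list[float]:
    """Entire cost expensed in year 1, nothing thereafter."""
    return [cost] + [0.0] * (years - 1)

COST, LIFE, TAX_RATE = 100.0, 5, 0.21  # hypothetical asset and tax rate

# Year-1 tax shield (tax saved) under each schedule.
shield_sl = TAX_RATE * straight_line(COST, LIFE)[0]  # 0.21 * 20  = 4.2
shield_fl = TAX_RATE * front_loaded(COST, LIFE)[0]   # 0.21 * 100 = 21.0
```

Both schedules deduct the full $100 over five years, so cumulative GAAP profit is unchanged; front-loading simply moves the tax saving into year 1 while making year-1 reported earnings look weaker.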
John James
BOKA Capital • 4K followers
A key discussion between Infleqtion’s CTO Pranav Gokhale and NVIDIA’s Sam Stanwyck on what shifted in 2024, and why hybrid quantum-classical computing is no longer merely theoretical. https://lnkd.in/eVRHxrZZ For BOKA, this matters greatly. Infleqtion (NYSE: INFQ) is helping to solve the integration challenge that has hindered quantum computing for decades: enabling GPUs and quantum processors to share workloads in real time. Quantum won’t replace classical compute, it will augment and increasingly enable it. QPUs won’t compete with GPUs; they will unlock new classes of problems that make GPU supercomputing more powerful. As Jensen Huang has said: “In the near future, every NVIDIA GPU scientific supercomputer will be hybrid, tightly coupled with quantum processors to expand what is possible with computing.” Logical qubits. Hybrid orchestration. Scalable deployment. The next computing cycle is taking shape with Infleqtion at the centre of it. #QuantumComputing #HybridComputing #HPC #DeepTech #AIInfrastructure #Infleqtion #NVIDIA
Amir Sediq
Moonypto • 2K followers
NVIDIA agreed to pay $20 billion to license Groq's AI chip technology and bring its core engineering team onboard, without fully acquiring the company. Groq will continue independently as GroqCloud, but NVIDIA gains access to its LPU patents and leadership, including founder Jonathan Ross.

The deal targets a key market shift: inference now generates more revenue than training. While NVIDIA dominates training, it has lagged in low-latency inference. Groq's technology is built specifically for fast, real-time responses, which fills that gap.

By structuring the deal as licensing and hiring, NVIDIA avoids antitrust issues, spends only about one quarter of its free cash flow, and blocks competitors from accessing top inference talent. The move strengthens NVIDIA's position as the end-to-end platform for AI, from training models to running them in real time.

#NVIDIA #GroqCloud #AI
Shalu Saraf, CFA
TipRanks • 410 followers
Nvidia’s (NVDA) Q1 Earnings Could Be a ‘Positive Clearing Event’ for Investors, Says Top Analyst

AI giant Nvidia (NVDA) is set to report its Q1 FY26 earnings on May 28. Analysts expect earnings of $0.88 per share on revenue of $43.26 billion. Cantor Fitzgerald analyst C.J. Muse, who has an Overweight rating and a $200 price target on the stock, says this earnings call could be a “positive clearing event” for investors. Muse, a five-star analyst, believes the report will ease investor concerns and provide a “strong line-of-sight” for growth in the second half of the year. With improving fundamentals and clear demand drivers, he sees “meaningful upside” and keeps Nvidia as a “Top Pick.”

Blackwell Strength May Soften China Hit
Nvidia is expected to take a $15 billion hit to its Data Center revenue in 2025 due to H20 chip restrictions in China. Despite this, Muse thinks the company’s July-quarter guidance will come in close to expectations. He projects revenue of $46 billion, just slightly below the $46.3 billion Wall Street average. This estimate includes a $5 billion loss from China but is partly balanced by strong early demand for Nvidia’s new Blackwell chips.

Looking beyond the near term, Muse is confident in Nvidia’s ability to grow its Data Center business. He pointed to better visibility around “rack-scale shipment acceleration” for the Blackwell platform. In particular, he sees a meaningful ramp-up in GB300 chip shipments starting in the fourth quarter, with about 25,000 units expected to ship in 2025. Based on this momentum, Muse believes Data Center revenue could climb to $200 billion next year, well ahead of the current $175 billion estimate, even after factoring in the impact from China.

Management May Stay Upbeat on AI Spending
Looking ahead to 2026, Muse expects Nvidia’s management to stay positive about long-term AI spending. He does not expect formal guidance for 2026 just yet, but he thinks the team will strike an optimistic tone. He also expects gross margins to stay in the “mid-70s” in the second half of 2025, matching previous comments. In addition, he believes Nvidia will highlight new areas of demand, such as “AI Factories and Physical AI,” which could drive future growth.

Read more on TipRanks: https://lnkd.in/g-FbMDQ8