About
I bring machine learning to billions of people.
Most recently, I spent a few years…
Activity
-
Exciting times at The University of Texas at Austin! UT Austin is launching the new School of Computing, bringing together Computer Science…
Liked by Jason Gauci
-
Grateful for everyone building with us. The future won’t be centralized. It’s yours. Time to think for yourself.
Liked by Jason Gauci
-
Continual learning was a major part of my research focus during my time in academia, so I naturally agree with many of the points raised in this…
Liked by Jason Gauci
Experience
Education
-
University of Central Florida
-
-
Co-invented HyperNEAT, a novel method for evolving large artificial neural networks. Created the first HyperNEAT implementation, now adopted by research institutions worldwide. Created a machine learning agent capable of mastering most board games without any knowledge of the rules.
Publications
-
Evolving neural networks for geometric game-tree pruning
GECCO 2011
Game-tree search is the engine behind many computer game opponents. Traditional game-tree search algorithms decide which move to make based on simulating actions, evaluating future board states, and then applying the evaluations to estimate optimal play by all players. Yet the limiting factor of such algorithms is that the search space increases exponentially with the number of actions taken (i.e. the depth of the search). More recent research in game-tree search has revealed that even more important than evaluating future board states is effective pruning of the search space. Accordingly, this paper discusses Geometric Game-Tree Pruning (GGTP), a novel evolutionary method that learns to prune game trees based on geometric properties of the game board. The experiment compares Cake, a minimax-based game-tree search algorithm, with HyperNEAT-Cake, the original Cake algorithm combined with an indirectly encoded, evolved GGTP algorithm. The results show that HyperNEAT-Cake wins significantly more games than regular Cake playing against itself.
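The idea can be sketched as plain minimax with a pruning predicate applied to the move list. In the paper that predicate is an evolved, indirectly encoded network; the stand-in below is a hand-written geometric heuristic, and every name here (`should_prune`, the Manhattan-distance cutoff, the `*_fn` callbacks) is an illustrative assumption, not the paper's implementation.

```python
# Minimal sketch of game-tree search with a learned pruning predicate,
# in the spirit of GGTP. Moves are (x, y) board coordinates.

def should_prune(move, board_size):
    # Toy geometric stand-in for the evolved predicate:
    # skip moves whose Manhattan distance from the board center is large.
    x, y = move
    c = (board_size - 1) / 2
    return abs(x - c) + abs(y - c) > board_size // 2 + 1

def minimax(state, depth, maximizing, moves_fn, apply_fn, eval_fn, board_size):
    # Filter the move list through the pruning predicate before branching,
    # shrinking the effective branching factor at every ply.
    moves = [m for m in moves_fn(state) if not should_prune(m, board_size)]
    if depth == 0 or not moves:
        return eval_fn(state)
    values = [minimax(apply_fn(state, m), depth - 1, not maximizing,
                      moves_fn, apply_fn, eval_fn, board_size) for m in moves]
    return max(values) if maximizing else min(values)
```

Replacing the hand-written heuristic with an evolved network queried on move geometry is the step the paper takes.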
Autonomous Evolution of Topographic Regularities in Artificial Neural Networks
Neural Computation
Looking to nature as inspiration, for at least the past 25 years, researchers in the field of neuroevolution (NE) have developed evolutionary algorithms designed specifically to evolve artificial neural networks (ANNs). Yet the ANNs evolved through NE algorithms lack the distinctive characteristics of biological brains, perhaps explaining why NE is not yet a mainstream subject of neural computation. Motivated by this gap, this letter shows that when geometry is introduced to evolved ANNs through the hypercube-based neuroevolution of augmenting topologies algorithm, they begin to acquire characteristics that indeed are reminiscent of biological brains. That is, if the neurons in evolved ANNs are situated at locations in space (i.e., if they are given coordinates), then, as experiments in evolving checkers-playing ANNs in this letter show, topographic maps with symmetries and regularities can evolve spontaneously. The ability to evolve such maps is shown in this letter to provide an important advantage in generalization. In fact, the evolved maps are sufficiently informative that their analysis yields the novel insight that the geometry of the connectivity patterns of more general players is significantly smoother and more contiguous than less general ones. Thus, the results reveal a correlation between generality and smoothness in connectivity patterns. They also hint at the intriguing possibility that as NE matures as a field, its algorithms can evolve ANNs of increasing relevance to those who study neural computation in general.
Indirect Encoding of Neural Networks for Scalable Go
PPSN 2010
The game of Go has attracted much attention from the artificial intelligence community. A key feature of Go is that humans begin to learn on a small board, and then incrementally learn advanced strategies on larger boards. While some machine learning methods can also scale the board, they generally only focus on a subset of the board at one time. Neuroevolution algorithms particularly struggle with scalable Go because they are often directly encoded (i.e. a single gene maps to a single connection in the network). Thus this paper applies an indirect encoding to the problem of scalable Go that can evolve a solution to 5×5 Go and then extrapolate that solution to 7×7 Go and continue evolution. The scalable method is demonstrated to learn faster and ultimately discover better strategies than the same method trained on 7×7 Go directly from the start.
A hypercube-based encoding for evolving large-scale neural networks
MIT Press
Research in neuroevolution—that is, evolving artificial neural networks (ANNs) through evolutionary algorithms—is inspired by the evolution of biological brains, which can contain trillions of connections. Yet while neuroevolution has produced successful results, the scale of natural brains remains far beyond reach. This article presents a method called hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) that aims to narrow this gap. HyperNEAT employs an indirect encoding called connective compositional pattern-producing networks (CPPNs) that can produce connectivity patterns with symmetries and repeating motifs by interpreting spatial patterns generated within a hypercube as connectivity patterns in a lower-dimensional space. This approach can exploit the geometry of the task by mapping its regularities onto the topology of the network, thereby shifting problem difficulty away from dimensionality to the underlying problem structure. Furthermore, connective CPPNs can represent the same connectivity pattern at any resolution, allowing ANNs to scale to new numbers of inputs and outputs without further evolution. HyperNEAT is demonstrated through visual discrimination and food-gathering tasks, including successful visual discrimination networks containing over eight million connections. The main conclusion is that the ability to explore the space of regular connectivity patterns opens up a new class of complex high-dimensional tasks to neuroevolution.
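The core mechanism is easy to sketch: neurons are placed at coordinates on a substrate, and the CPPN is queried with the 4D coordinate of each candidate connection to produce its weight. In real HyperNEAT the CPPN is itself evolved with NEAT; the hand-written `cppn` below, the 0.2 threshold, and the grid layout are all illustrative assumptions.

```python
# Minimal sketch of HyperNEAT's substrate query with a fixed stand-in CPPN.
import math

def cppn(x1, y1, x2, y2):
    # Toy composed pattern: symmetry via abs(), locality via a Gaussian
    # of the distance between the two connection endpoints.
    d = math.hypot(x2 - x1, y2 - y1)
    return math.sin(abs(x1 + x2)) * math.exp(-d * d)

def build_weights(substrate, threshold=0.2):
    """Query the CPPN for every node pair; keep weights above a threshold."""
    weights = {}
    for (x1, y1) in substrate:
        for (x2, y2) in substrate:
            w = cppn(x1, y1, x2, y2)
            if abs(w) > threshold:
                weights[((x1, y1), (x2, y2))] = w
    return weights

# A 3x3 grid of neuron coordinates in [-1, 1]^2. Because the CPPN is a
# function of continuous coordinates, the same CPPN could be queried on a
# denser grid, which is how HyperNEAT scales networks without re-evolving.
grid = [(x - 1.0, y - 1.0) for x in range(3) for y in range(3)]
net = build_weights(grid)
```

The resolution independence in the last comment is what allows the article's ANNs to scale to new numbers of inputs and outputs without further evolution.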
A Case Study on the Critical Role of Geometric Regularity in Machine Learning
AAAI 2008
An important feature of many problem domains in machine learning is their geometry. For example, adjacency relationships, symmetries, and Cartesian coordinates are essential to any complete description of board games, visual recognition, or vehicle control. Yet many approaches to learning ignore such information in their representations, instead inputting flat parameter vectors with no indication of how those parameters are situated geometrically. This paper argues that such geometric information is critical to the ability of any machine learning approach to effectively generalize; even a small shift in the configuration of the task in space from what was experienced in training can go wholly unrecognized unless the algorithm is able to learn the regularities in decision-making across the problem geometry. To demonstrate the importance of learning from geometry, three variants of the same evolutionary learning algorithm (NeuroEvolution of Augmenting Topologies), whose representations vary in their capacity to encode geometry, are compared in checkers. The result is that the variant that can learn geometric regularities produces a significantly more general solution. The conclusion is that it is important to enable machine learning to detect and thereby learn from the geometry of its problems.
Generating large-scale neural networks through discovering geometric regularities
GECCO 2007
Connectivity patterns in biological brains exhibit many repeating motifs. This repetition mirrors inherent geometric regularities in the physical world. For example, stimuli that excite adjacent locations on the retina map to neurons that are similarly adjacent in the visual cortex. That way, neural connectivity can exploit geometric locality in the outside world by employing local connections in the brain. If such regularities could be discovered by methods that evolve artificial neural networks (ANNs), then they could be similarly exploited to solve problems that would otherwise require optimizing too many dimensions to solve. This paper introduces such a method, called Hypercube-based Neuroevolution of Augmenting Topologies (HyperNEAT), which evolves a novel generative encoding called connective Compositional Pattern Producing Networks (connective CPPNs) to discover geometric regularities in the task domain. Connective CPPNs encode connectivity patterns as concepts that are independent of the number of inputs or outputs, allowing functional large-scale neural networks to be evolved. In this paper, this approach is tested in a simple visual task for which it effectively discovers the correct underlying regularity, allowing the solution to both generalize and scale without loss of function to an ANN of over eight million connections.
Patents
-
TEXT TRANSCRIPT GENERATION FROM A COMMUNICATION SESSION
Filed US 61/529,607
Projects
-
Programming Throwdown
- Present
Programming Throwdown attempts to educate Computer Scientists and Software Engineers on a cavalcade of programming and tech topics. Every show will cover a new programming language, so listeners will be able to speak intelligently about any programming language.
-
Trivipedia
-
Trivia game using content extracted from Wikipedia. Over 300,000 questions are generated automatically from Wikipedia text.
Honors & Awards
-
Presidential Doctoral Fellowship
University of Central Florida
Two undergraduate students from each department of the university are selected annually to receive the Presidential Doctoral Fellowship. These awards provide multi-year support to the most qualified PhD students.
-
National Merit Scholar
National Merit Scholarship Corporation
The National Merit® Scholarship Program is an academic competition for recognition and scholarships that began in 1955. High school students enter the National Merit Program by taking the Preliminary SAT/National Merit Scholarship Qualifying Test (PSAT/NMSQT®)–a test which serves as an initial screen of approximately 1.5 million entrants each year–and by meeting published program entry/participation requirements. About 10,000 students go on to become National Merit Scholars.
Languages
-
English
Native or bilingual proficiency
Organizations
-
Association for Computing Machinery
Vice President, UCF Chapter
- Present
More activity by Jason
-
Most document retrieval systems start by converting PDFs into text. OCR first, then search. That works until it doesn't. Charts, tables, scanned…
Liked by Jason Gauci
-
Tomorrow at PyTexas in Austin, I'm giving a talk called "I Built an AI Running Coach." It's easy to hand an LLM your running history. Getting useful…
Liked by Jason Gauci
-
I am joining Resolve AI Labs as a founding member! 🎉 Resolve is building AI agents to solve some of the trickiest reasoning problems: debugging…
Liked by Jason Gauci
-
If I had a nickel for every time I got laid off from Meta I would have two nickels, which isn’t a lot but still pretty crazy that it’s happened…
Liked by Jason Gauci
-
One week down at #Depthfirst! I’m thrilled to share that I’ve joined the team as a Member of Technical Staff to help redefine #cybersecurity through…
Liked by Jason Gauci
-
Starting this week, I’m joining Notion State as a consultant—helping teams build Company OS workspaces in Notion that actually work: designed for how…
Liked by Jason Gauci