- Shanghai/Beijing, China (UTC+08:00)
- wuwei211x@gmail.com
- https://orcid.org/0009-0009-1590-601X

Stars
MindOS is a Human-AI Collaborative Mind System, where humans think and agents act. It globally syncs your mind across all agents: transparent, controllable, and evolving symbiotically.
The code for our paper "Structure-Enhanced Protein Instruction Tuning: Towards General-Purpose Protein Understanding", accepted at KDD 2025
verl/HybridFlow: A Flexible and Efficient RL Post-Training Framework
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
The official repository for "Rongsheng Wang's Arxiv Template"
GENERator: A Long-Context Generative Genomic Foundation Model
🌐 Make websites accessible for AI agents. Automate tasks online with ease.
MLGym: A New Framework and Benchmark for Advancing AI Research Agents
SGLang is a high-performance serving framework for large language models and multimodal models.
Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025]
SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and Interactive Coding Agents, ACL'24 Best Resource Paper.
🤗 smolagents: a barebones library for agents that think in code.
GENERanno: A Genomic Foundation Model for Metagenomic Annotation
AgentTuning: Enabling Generalized Agent Abilities for LLMs
DSPy: The framework for programming—not prompting—language models
FlashInfer: Kernel Library for LLM Serving
A high-throughput and memory-efficient inference and serving engine for LLMs
Survey: A collection of AWESOME papers and resources on the large language model (LLM) related recommender system topics.
📰 Must-read papers and blogs on LLM based Long Context Modeling 🔥
Ring attention implementation with flash attention
Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch
The code for our paper "InfLLM: Unveiling the Intrinsic Capacity of LLMs for Understanding Extremely Long Sequences with Training-Free Memory"
The code for our paper "AFDGCF: Adaptive Feature De-correlation Graph Collaborative Filtering for Recommendations", accepted at SIGIR 2024
SaProt: Protein Language Model with Structural Alphabet (AA+3Di)
Foldseek enables fast and sensitive comparisons of large structure sets.