Starred repositories
Official implementation of the paper "Rethinking Generative Recommender Tokenizer: Recsys-Native Encoding and Semantic Quantization Beyond LLMs"
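The paper's tokenizer isn't reproduced here; for context, semantic-ID schemes for generative recommenders are commonly built on residual quantization of item embeddings. A minimal sketch of that generic idea (all names and sizes are illustrative, not from the paper):

```python
import numpy as np
from sklearn.cluster import KMeans

def residual_quantize(item_emb: np.ndarray, levels: int = 3, codebook_size: int = 64):
    """Assign each item a tuple of semantic IDs via residual k-means.

    At each level, cluster the current residuals and subtract the matched
    centroid, so later levels encode progressively finer detail.
    """
    residual = item_emb.copy()
    codes = []
    for _ in range(levels):
        km = KMeans(n_clusters=codebook_size, n_init=4).fit(residual)
        ids = km.labels_
        codes.append(ids)
        residual = residual - km.cluster_centers_[ids]
    return np.stack(codes, axis=1)  # (num_items, levels) semantic-ID tuples

items = np.random.randn(10_000, 64).astype(np.float32)  # hypothetical item embeddings
semantic_ids = residual_quantize(items)
print(semantic_ids[0])  # e.g. [12 57  3]
```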
Leaked Claude Code source - a locally runnable version, plus a new cross-platform desktop app that rounds out Computer Use support (with an analysis of the core modules)
`mini-R1` is a simplified reinforcement-learning implementation for visual Referring Expression Comprehension (REC), following the ideas of VLM-R1 / Open-R1. The repo currently centers on RefCOCOg-style data and uses Qwen2.5-VL / Qwen3-VL for GRPO training and evaluation.
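For reference, the GRPO step such REC trainers build on is: sample several box predictions per referring expression, reward each by IoU with the ground truth, and normalize rewards within the group. A minimal sketch of that computation (the reward and shapes are assumptions, not this repo's exact code):

```python
import torch

def iou_reward(pred_boxes: torch.Tensor, gt_box: torch.Tensor) -> torch.Tensor:
    """IoU between each sampled prediction and the ground-truth box (xyxy)."""
    x1 = torch.maximum(pred_boxes[:, 0], gt_box[0])
    y1 = torch.maximum(pred_boxes[:, 1], gt_box[1])
    x2 = torch.minimum(pred_boxes[:, 2], gt_box[2])
    y2 = torch.minimum(pred_boxes[:, 3], gt_box[3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_p = (pred_boxes[:, 2] - pred_boxes[:, 0]) * (pred_boxes[:, 3] - pred_boxes[:, 1])
    area_g = (gt_box[2] - gt_box[0]) * (gt_box[3] - gt_box[1])
    return inter / (area_p + area_g - inter + 1e-6)

def grpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """Group-relative advantage: normalize rewards within one prompt's group."""
    return (rewards - rewards.mean()) / (rewards.std() + 1e-6)

# Four sampled completions for one expression, each parsed to a box (hypothetical).
pred = torch.tensor([[18., 22., 79., 83.],
                     [ 0.,  0., 50., 50.],
                     [25., 25., 75., 75.],
                     [60., 60., 99., 99.]])
gt = torch.tensor([20., 20., 80., 80.])
adv = grpo_advantages(iou_reward(pred, gt))
# Each advantage then weights that sample's token log-probs in the policy loss.
```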
Unified measurement reconciler across marketing mix models (MMM), multi-touch attribution (MTA), and geo-experiments
Learning-to-Rank system using LambdaMART on MSLR-WEB10K, reporting an 81% NDCG improvement over a BM25 baseline
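NDCG, the metric these LTR repos report against BM25, is worth pinning down; a minimal sketch of NDCG@k with the standard exponential gain:

```python
import numpy as np

def dcg_at_k(relevances, k: int) -> float:
    """Discounted cumulative gain with the standard 2^rel - 1 gain."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))  # log2(rank + 1)
    return float(((2 ** rel - 1) / discounts).sum())

def ndcg_at_k(relevances, k: int = 10) -> float:
    """DCG of the ranked list divided by the DCG of the ideal ordering."""
    ideal = dcg_at_k(np.sort(relevances)[::-1], k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Graded relevance labels (0-4, MSLR style) in model-ranked order:
print(ndcg_at_k([3, 2, 3, 0, 1, 2], k=10))
```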
Causal impact of SEO interventions via Bayesian Structural Time Series
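The BSTS approach (as in Google's CausalImpact) forecasts a counterfactual for the post-intervention period and reads the effect off the gap to the observed series. A deliberately simplified sketch that swaps the state-space model for a plain pre-period regression, on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
T, t0 = 120, 90                      # daily series; SEO change lands on day 90
control = 100 + rng.normal(0, 3, T).cumsum() * 0.1   # unaffected control traffic
organic = 0.8 * control + rng.normal(0, 1, T)        # target series
organic[t0:] += 5                    # true lift we hope to recover

# Fit the pre-period relationship, then project it forward as the counterfactual.
A = np.vstack([control[:t0], np.ones(t0)]).T
beta, intercept = np.linalg.lstsq(A, organic[:t0], rcond=None)[0]
counterfactual = beta * control + intercept

effect = organic[t0:] - counterfactual[t0:]
print(f"avg daily lift ~ {effect.mean():.1f}, cumulative ~ {effect.sum():.0f}")
```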
Revenue-regularized learning-to-rank for e-commerce search
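One plausible formulation of revenue regularization, not necessarily this repo's: a ListNet-style relevance loss plus a differentiable expected-revenue term, with the trade-off weight `lam` as a free hyperparameter (all names hypothetical):

```python
import torch
import torch.nn.functional as F

def revenue_regularized_loss(scores, labels, revenue, lam: float = 0.3):
    """ListNet top-1 relevance loss minus a soft expected-revenue term.

    scores/labels/revenue: (n_docs,) for one query. The softmax over scores
    is a differentiable proxy for "probability of being ranked first".
    """
    log_p = F.log_softmax(scores, dim=-1)
    target = F.softmax(labels.float(), dim=-1)
    relevance_loss = -(target * log_p).sum()
    expected_revenue = (log_p.exp() * revenue).sum()
    return relevance_loss - lam * expected_revenue

scores = torch.tensor([1.5, 0.2, 0.9], requires_grad=True)  # model outputs
labels = torch.tensor([2.0, 0.0, 1.0])                      # graded relevance
revenue = torch.tensor([4.99, 19.99, 9.99])                 # per-item price signal
revenue_regularized_loss(scores, labels, revenue).backward()
```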
A Learning-to-Rank (LTR) model using LambdaMART to optimize search result relevance. Achieved an 18% improvement in NDCG@10 over BM25 by engineering 20+ query-document features and optimizing for n…
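A LambdaMART setup like the two above is a few lines with LightGBM's lambdarank objective; a minimal sketch on synthetic data (feature counts and group sizes are illustrative):

```python
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 25))      # 25 query-document features
y = rng.integers(0, 5, size=1000)    # graded relevance labels 0-4
group = [50] * 20                    # 20 queries x 50 candidate docs each

ranker = lgb.LGBMRanker(
    objective="lambdarank",          # LambdaMART = LambdaRank gradients + GBDT
    metric="ndcg",
    n_estimators=200,
    learning_rate=0.05,
)
ranker.fit(X, y, group=group, eval_set=[(X, y)], eval_group=[group], eval_at=[10])
scores = ranker.predict(X[:50])      # score one query's candidates for sorting
```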
Source code for the IEEE JSTARS paper "Learning to Rank in 2D: Differentiable Spatial Prioritization for Deforestation Forecasting"
Uses the TMDB dataset to demo the Learning-to-Rank plugin for OpenSearch
A PyTorch implementation of "Query Your Strings and Return Ranking Regions with Only One Look"
Efficient cache for gigabytes of data, written in Go.
Source code for "Diversity-Promoting Recommendation with Dual-Objective Optimization and Dual Consideration"
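The paper's dual-objective method isn't reproduced here; the classic baseline for trading relevance against redundancy is maximal marginal relevance (MMR), sketched below with hypothetical inputs:

```python
import numpy as np

def mmr_rerank(scores: np.ndarray, sim: np.ndarray, k: int, lam: float = 0.7):
    """Greedy maximal marginal relevance: trade relevance against redundancy.

    scores: (n,) relevance scores; sim: (n, n) item-item similarity.
    lam=1.0 is pure relevance ranking; lower values favor diversity.
    """
    selected, candidates = [], list(range(len(scores)))
    while candidates and len(selected) < k:
        def mmr(i):
            redundancy = max(sim[i, j] for j in selected) if selected else 0.0
            return lam * scores[i] - (1 - lam) * redundancy
        best = max(candidates, key=mmr)
        selected.append(best)
        candidates.remove(best)
    return selected

rng = np.random.default_rng(1)
emb = rng.normal(size=(10, 8))                      # hypothetical item embeddings
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
order = mmr_rerank(rng.random(10), emb @ emb.T, k=5)
```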
A PyTorch implementation of a ranking model using DeBERTa-v3 with a custom ranking loss function. The project uses Hydra for configuration management and Accelerate for distributed training support.
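The repo's custom loss isn't specified in the description; a common choice for a cross-encoder ranker like this is a pairwise margin loss over per-query scores. A minimal sketch (the margin and shapes are assumptions):

```python
import torch
import torch.nn.functional as F

def pairwise_margin_loss(scores: torch.Tensor, labels: torch.Tensor,
                         margin: float = 1.0) -> torch.Tensor:
    """Hinge loss over all document pairs whose relevance labels differ.

    scores/labels: (n_docs,) for one query; higher label = more relevant.
    """
    diff_s = scores.unsqueeze(1) - scores.unsqueeze(0)   # [i, j] = s_i - s_j
    diff_l = labels.unsqueeze(1) - labels.unsqueeze(0)   # [i, j] = rel_i - rel_j
    mask = diff_l > 0                                    # pairs where i should outrank j
    if not mask.any():
        return (scores * 0.0).sum()                      # no valid pairs, zero loss
    return F.relu(margin - diff_s[mask]).mean()

scores = torch.tensor([2.1, 0.3, 1.7], requires_grad=True)  # model outputs
labels = torch.tensor([2.0, 0.0, 1.0])
pairwise_margin_loss(scores, labels).backward()
```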
The official PyTorch implementation for WWW 2025 paper "Rankformer: A Graph Transformer for Recommendation based on Ranking Objective".
A PyTorch implementation of "Personalized Re-ranking for Recommendation" (https://arxiv.org/abs/1904.06813).
A reproduction of "ComRank: Ranking Loss for Multi-Label Complementary Label Learning" with a unified PyTorch implementation and experimental analysis.
A PyTorch implementation of a multi-task ranking model (PLE + DCN-v2) on the Tencent TenRec dataset.
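DCN-v2's contribution is its cross layer, x_{l+1} = x0 * (W x_l + b) + x_l, which builds explicit feature crosses; a minimal PyTorch sketch (dimensions and the PLE hand-off are illustrative):

```python
import torch
import torch.nn as nn

class CrossLayerV2(nn.Module):
    """One DCN-v2 cross layer: x_{l+1} = x0 * (W @ x_l + b) + x_l.

    The elementwise product with x0 builds explicit bounded-degree
    feature crosses; stacking L layers yields crosses up to degree L + 1.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x0: torch.Tensor, xl: torch.Tensor) -> torch.Tensor:
        return x0 * self.linear(xl) + xl

dim, n_layers = 16, 3
cross = nn.ModuleList(CrossLayerV2(dim) for _ in range(n_layers))
x0 = torch.randn(32, dim)    # concatenated embeddings + dense features
x = x0
for layer in cross:
    x = layer(x0, x)         # output would feed PLE's shared/task expert towers
```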
TokenSpeed is a speed-of-light LLM inference engine.