rednote
Beijing, China

Stars
An introductory LLM tutorial for developers: the Chinese edition of Andrew Ng's large model course series
A White-Box Guide to Building Large Models: a fully hand-built Tiny-Universe
Chat with Datawhale's existing repositories and learning materials to quickly find what you want to learn and where to contribute!
[ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
Building a large language model that understands social nuance | covers prompt engineering, RAG, Agents, and LLM fine-tuning tutorials
The simplest, fastest repository for training/finetuning medium-sized GPTs.
Llama Chinese community: a real-time roundup of the latest Llama learning resources, building the best open-source Chinese Llama ecosystem, fully open source and commercially usable
The Open-Source LLM Cookbook: tutorials tailored for Chinese beginners on quickly fine-tuning (full-parameter/LoRA) and deploying open-source LLMs and multimodal models (MLLMs), both domestic and international, in a Linux environment
GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well as OpenAI's earlier models on 20+ curated benchmarks under al…
Production-ready platform for agentic workflow development.
The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models".
An LLM application development course for complete beginners
Source code for ACL 2023 paper Decoder Tuning: Efficient Language Understanding as Decoding
Ongoing research training transformer models at scale
A great project for campus recruiting (fall/spring) and internships! Build a high-performance deep learning inference library from scratch, supporting inference for large models such as llama2, Unet, Yolov5, and Resnet. Implement a high-performance deep learning inference library step by step
Chat-甄嬛 (Chat-Zhenhuan) is a chat language model that mimics Zhen Huan's manner of speech, built by LoRA fine-tuning ChatGLM2 on all of Zhen Huan's lines and dialogue from the screenplay of Empresses in the Palace (《甄嬛传》)