- MSC
- Iran, Isfahan
- LinkedIn: in/mohammad-kadkhodaei
- Hugging Face: https://huggingface.co/mohammadkad
Stars
- Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.
- NSGA2, NSGA3, R-NSGA3, MOEAD, Genetic Algorithms (GA), Differential Evolution (DE), CMAES, PSO
- Milvus is a high-performance, cloud-native vector database built for scalable vector ANN search
- Collection of reinforcement learning algorithms
- Grokking Deep Reinforcement Learning
- PyTorch implementation of Soft Actor-Critic (SAC), Twin Delayed DDPG (TD3), Actor-Critic (AC/A2C), Proximal Policy Optimization (PPO), QT-Opt, PointNet...
- A small package to create visualizations of PyTorch execution graphs
- lzbench is an in-memory benchmark of open-source compressors
- An extension of the PyMARL codebase that includes additional algorithms and environment support
- Python Multi-Agent Reinforcement Learning framework
- Author's PyTorch implementation of TD7 for online and offline RL
- Author's PyTorch implementation of TD3 for OpenAI gym tasks
- High-quality single-file implementations of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG)
- Clean, Robust, and Unified PyTorch implementations of popular Deep Reinforcement Learning (DRL) algorithms (Q-learning, Duel DDQN, PER, C51, Noisy DQN, PPO, DDPG, TD3, SAC, ASL)
- Tensors and Dynamic neural networks in Python with strong GPU acceleration
- Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
- Collection of publicly available IPTV channels from all over the world
- Implementations of basic RL algorithms with minimal lines of code (PyTorch based)
- 🌐 The Internet Computer! Free, Open-Source, and Self-Hostable.
- Massively parallel rigid-body physics simulation on accelerator hardware.
- Streamlining reinforcement learning with RLOps: state-of-the-art RL algorithms and tools, with 10x faster training through evolutionary hyperparameter optimization.
- A unified framework for machine learning with time series
- Adaptive task scheduling, task offloading, resource allocation, and application placement in Edge and Fog computing environments
- 🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal domains, for both inference and training.
- Fair-code workflow automation platform with native AI capabilities. Combine visual building with custom code, self-host or use the cloud, 400+ integrations.