#python #deep_learning #pre_trained #model #awesome #nlp #vision #paddlehub #ai_models
https://github.com/PaddlePaddle/PaddleHub
400+ AI Models: Rich, high-quality AI models, including CV, NLP, Speech, Video and Cross-Modal. Easy to Use: 3 lines of code to predict 400+ AI models.
#scala #ai #apache_spark #azure #big_data #cognitive_services #data_science #databricks #deep_learning #http #lightgbm #machine_learning #microsoft #ml #model_deployment #onnx #opencv #pyspark #spark #synapse
https://github.com/microsoft/SynapseML
SynapseML: Simple and Distributed Machine Learning.
#cplusplus #deployment #model_converter #ncnn #onnxruntime #openvino #pplnn #sdk #tensorrt
https://github.com/open-mmlab/mmdeploy
OpenMMLab Model Deployment Framework.
#python #data_parallelism #deep_learning #distributed_training #hpc #large_scale #model_parallelism #pipeline_parallelism
https://github.com/hpcaitech/ColossalAI
Making large AI models cheaper, faster and more accessible.
#python #data_analysis #data_drift #data_science #deep_learning #jupyter_notebook #machine_learning #machinelearning #ml #mlops #model_monitoring #monitoring #performance_monitoring #visualization
https://github.com/NannyML/nannyml
nannyml: post-deployment data science in Python.
#python #caffe #computer_vision #coreml #edgetpu #keras #mediapipe #model #model_zoo #models #onnx #openvino #pretrained_models #pytorch #tensorflow #tensorflow_lite #tensorflowjs #tf_trt #tfjs #tflite #tflite_models
https://github.com/PINTO0309/PINTO_model_zoo
A repository for storing models that have been inter-converted between various frameworks. Supported frameworks are TensorFlow, PyTorch, ONNX, OpenVINO, TFJS, TFTRT, TensorFlowLite (Float32/16/INT8...
#jupyter_notebook #ai #artificial_intelligence #image_generation #img2img #latent_diffusion #machine_learning #model_training #stable_diffusion #txt2img
https://github.com/JoePenna/Dreambooth-Stable-Diffusion
Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752).
#python #data_drift #data_science #hacktoberfest #html_report #jupyter_notebook #machine_learning #machine_learning_operations #mlops #model_monitoring #pandas_dataframe #production_machine_learning
https://github.com/evidentlyai/evidently
Evidently is an open-source ML and LLM observability framework. Evaluate, test, and monitor any AI-powered system or data pipeline. From tabular data to Gen AI. 100+ metrics.
#python #data_drift #data_science #data_validation #deep_learning #html_report #jupyter_notebook #machine_learning #ml #mlops #model_monitoring #model_validation #pandas_dataframe #pytorch
https://github.com/deepchecks/deepchecks
Deepchecks: tests for continuous validation of ML models and data. A holistic open-source solution for AI and ML validation needs, enabling you to thoroughly test models and data.
#python #ai #ai_alignment #ai_safety #ai_test #ai_testing #artificial_intelligence #cicd #explainable_ai #llmops #machine_learning #machine_learning_testing #ml #ml_safety #ml_test #ml_testing #ml_validation #mlops #model_testing #model_validation #quality_assurance
https://github.com/Giskard-AI/giskard
🐢 Open-Source Evaluation & Testing library for LLM Agents.
#python #ai #control #decision_making #distributed_computing #machine_learning #marl #model_based_reinforcement_learning #multi_agent_reinforcement_learning #pytorch #reinforcement_learning #rl #robotics #torch
https://github.com/pytorch/rl
A modular, primitive-first, python-first PyTorch library for Reinforcement Learning.
#python #large_language_models #model_para #transformers
Megatron-LM and Megatron-Core are powerful tools for training large language models (LLMs) on NVIDIA GPUs. Megatron-Core offers GPU-optimized techniques and system-level optimizations, allowing you to train custom transformers efficiently. It supports advanced parallelism strategies, activation checkpointing, and distributed optimization to reduce memory usage and improve training speed. You can use Megatron-Core with other frameworks like NVIDIA NeMo for end-to-end solutions or integrate its components into your preferred training framework. This setup enables scalable training of models with hundreds of billions of parameters, making it beneficial for researchers and developers aiming to advance LLM technology.
https://github.com/NVIDIA/Megatron-LM
Ongoing research training transformer models at scale.
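The core idea behind the tensor (model) parallelism mentioned above can be shown in miniature. The sketch below is plain Python, not Megatron's actual API: a linear layer's weight matrix is split column-wise across "workers", each worker computes its slice of the output, and gathering the slices reproduces the single-device result.

```python
# Tensor model parallelism in miniature: split a weight matrix column-wise,
# compute each shard's output locally, then "all-gather" the slices.
# Framework-free illustration of the idea Megatron-Core implements on GPUs.

def matmul(x, w):
    # x: input feature vector; w: list of weight columns
    return [sum(xi * wi for xi, wi in zip(x, col)) for col in w]

x = [1.0, 2.0, 3.0]                                # one input vector
w = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]]   # 4 output columns

full = matmul(x, w)                                # single-worker reference

# Shard the columns across two "workers" and gather their partial outputs.
shards = [w[:2], w[2:]]
gathered = [y for shard in shards for y in matmul(x, shard)]

assert gathered == full
print(gathered)  # [1.0, 2.0, 3.0, 6.0]
```

Each worker holds only its shard of the weights, which is why this (combined with pipeline and data parallelism) lets models far larger than one GPU's memory be trained.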
#python #billion_parameters #compression #data_parallelism #deep_learning #gpu #inference #machine_learning #mixture_of_experts #model_parallelism #pipeline_parallelism #pytorch #trillion_parameters #zero
DeepSpeed is a powerful tool for training and using large artificial intelligence models quickly and efficiently. It allows you to train models with billions or even trillions of parameters, which is much faster and cheaper than other methods. With DeepSpeed, you can achieve significant speedups, reduce costs, and improve the performance of your models. For example, it can train ChatGPT-like models 15 times faster than current state-of-the-art systems. This makes it easier to work with large language models without needing massive resources, making AI more accessible and efficient for everyone.
https://github.com/microsoft/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
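As a concrete example of the knobs involved, DeepSpeed is driven by a JSON config passed to `deepspeed.initialize`. A minimal sketch enabling mixed precision and ZeRO stage 2 might look like the following; the numeric values are illustrative, not recommendations:

```json
{
  "train_batch_size": 32,
  "gradient_accumulation_steps": 1,
  "fp16": { "enabled": true },
  "zero_optimization": { "stage": 2 },
  "optimizer": {
    "type": "AdamW",
    "params": { "lr": 3e-5 }
  }
}
```

ZeRO stage 2 partitions gradients and optimizer states across data-parallel workers, which is one of the main memory savings behind the scaling claims above.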
#python #bert #deep_learning #flax #hacktoberfest #jax #language_model #language_models #machine_learning #model_hub #natural_language_processing #nlp #nlp_library #pretrained_models #python #pytorch #pytorch_transformers #seq2seq #speech_recognition #tensorflow #transformer
The Hugging Face Transformers library provides thousands of pretrained models for various tasks like text, image, and audio processing. These models can be used for tasks such as text classification, image detection, speech recognition, and more. The library supports popular deep learning frameworks like JAX, PyTorch, and TensorFlow, making it easy to switch between them.
You can quickly download and use these pretrained models with just a few lines of code, saving time and compute. You can also fine-tune these models on your own datasets and share them with the community. Additionally, the library offers a simple `pipeline` API for immediate use on different inputs, making it friendly to both researchers and practitioners, reducing compute costs and carbon footprint while enabling high-performance results across many machine learning tasks.
https://github.com/huggingface/transformers
🤗 Transformers: the model-definition framework for state-of-the-art machine learning in text, vision, audio, and multimodal models, for both inference and training.
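The `pipeline` API mentioned above really is a few lines. A minimal sketch (requires `pip install transformers`; the default task model's weights are downloaded on first use):

```python
# Run a pretrained sentiment classifier via the Transformers pipeline API.
# The first call downloads the default model for the task.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("Switching between JAX, PyTorch and TensorFlow is easy.")
print(result)  # a list like [{"label": ..., "score": ...}]
```

The same `pipeline(...)` entry point covers other tasks (e.g. `"image-classification"`, `"automatic-speech-recognition"`), and a specific checkpoint can be chosen with the `model=` argument.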
#python #amd #cuda #gpt #inference #inferentia #llama #llm #llm_serving #llmops #mlops #model_serving #pytorch #rocm #tpu #trainium #transformer #xpu
vLLM is a library that makes it easy, fast, and cheap to use large language models (LLMs). It is designed to be fast, with features like efficient memory management, continuous batching, and optimized CUDA kernels. vLLM supports many popular models and can run on various hardware, including NVIDIA GPUs, AMD GPUs and CPUs, and more. It also offers seamless integration with Hugging Face models and supports different decoding algorithms. This makes it flexible and easy to use for anyone needing to serve LLMs, whether for research or other applications. You can install vLLM with `pip install vllm` and find detailed documentation on their website.
https://github.com/vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs.
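A minimal offline-inference sketch with vLLM (requires `pip install vllm` and a supported accelerator; the model name below is just a small example checkpoint):

```python
# Offline batched generation with vLLM: prompts are batched automatically
# via continuous batching, and weights come straight from the Hugging Face Hub.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is", "vLLM is"]
params = SamplingParams(temperature=0.8, max_tokens=32)

llm = LLM(model="facebook/opt-125m")  # any model architecture vLLM supports
for out in llm.generate(prompts, params):
    print(out.prompt, "->", out.outputs[0].text)
```

For serving instead of offline use, the same engine backs vLLM's OpenAI-compatible HTTP server.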