#cplusplus #caffe #convolution #deep_learning #deep_neural_networks #diy #graph_algorithms #inference #inference_engine #maxpooling #ncnn #pnnx #pytorch #relu #resnet #sigmoid #yolo #yolov5
This course, "_动手自制大模型推理框架_" (Handcrafting Large Model Inference Framework), is a valuable resource for those interested in deep learning and model inference. It teaches you how to build a modern C++ project from scratch, focusing on designing and implementing a deep learning inference framework. The course supports latest models like LLama3.2 and Qwen2.5, and uses CUDA acceleration and Int8 quantization for better performance.
By taking this course, you will learn how to write efficient C++ code, manage projects with CMake and Git, design computational graphs, implement common operators like convolution and pooling, and optimize them for speed. This knowledge will be highly beneficial for job interviews and advancing your skills in deep learning. The course also includes practical demos on models like Unet and YoloV5, making it a hands-on learning experience.
https://github.com/zjhellofss/KuiperInfer
This course, "_动手自制大模型推理框架_" (Handcrafting Large Model Inference Framework), is a valuable resource for those interested in deep learning and model inference. It teaches you how to build a modern C++ project from scratch, focusing on designing and implementing a deep learning inference framework. The course supports latest models like LLama3.2 and Qwen2.5, and uses CUDA acceleration and Int8 quantization for better performance.
By taking this course, you will learn how to write efficient C++ code, manage projects with CMake and Git, design computational graphs, implement common operators like convolution and pooling, and optimize them for speed. This knowledge will be highly beneficial for job interviews and advancing your skills in deep learning. The course also includes practical demos on models like Unet and YoloV5, making it a hands-on learning experience.
https://github.com/zjhellofss/KuiperInfer
GitHub
GitHub - zjhellofss/KuiperInfer: 校招、秋招、春招、实习好项目!带你从零实现一个高性能的深度学习推理库,支持大模型 llama2 、Unet、Yolov5、Resnet等模型的推理。Implement a high-performance…
校招、秋招、春招、实习好项目!带你从零实现一个高性能的深度学习推理库,支持大模型 llama2 、Unet、Yolov5、Resnet等模型的推理。Implement a high-performance deep learning inference library step by step - zjhellofss/KuiperInfer
#python #baselines #gsde #gym #machine_learning #openai #python #pytorch #reinforcement_learning #reinforcement_learning_algorithms #robotics #sb3 #sde #stable_baselines #toolbox
Stable Baselines3 (SB3) is a tool that makes it easy to use reinforcement learning algorithms with PyTorch. It provides reliable and tested implementations of these algorithms, which helps researchers and developers build projects quickly. SB3 offers many features like custom environments, policies, and integration with other tools like Tensorboard and Hugging Face. It also has detailed documentation and examples to help beginners get started. This tool assumes you have some knowledge of reinforcement learning but provides resources to learn more. Using SB3 can save time and effort by providing a stable base for your projects, allowing you to focus on new ideas and improvements.
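A minimal sketch of the usual SB3 workflow (standard Stable Baselines3 API; the environment and timestep budget are arbitrary choices for illustration):

```python
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("CartPole-v1")
model = PPO("MlpPolicy", env, verbose=1)  # tested PPO implementation out of the box
model.learn(total_timesteps=10_000)       # short run, just to demonstrate the API

obs, _ = env.reset()
for _ in range(100):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```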
https://github.com/DLR-RM/stable-baselines3
#python #deep_learning #plate_recognition #pytorch #yolov5
This tool helps you detect and recognize car license plates from images and videos. It supports 12 different types of Chinese license plates, including blue, yellow, new energy, police, and more. You can use it with Python and PyTorch, and it provides demos for testing with images and videos. The benefit is that it makes it easy to automate the process of identifying car license plates accurately, which can be useful for various applications such as traffic management or security systems.
https://github.com/we0091234/Chinese_license_plate_detection_recognition
#jupyter_notebook #deep_learning #machine_learning #python #pytorch
This course, "深入浅出PyTorch" (Thorough PyTorch), is designed to help you learn PyTorch from basics to advanced levels. It covers everything from installing PyTorch, understanding tensors and automatic differentiation, to building and training models, and even deploying them. The course is divided into several chapters, each focusing on different aspects of PyTorch, such as data loading, model construction, loss functions, optimizers, and visualization.
The benefit to you is that you will gain a comprehensive understanding of PyTorch, which is a powerful tool for deep learning. You will learn through both theoretical explanations and practical exercises, including hands-on projects like fashion classification and fruit classification. This will help you develop your programming skills and ability to solve real-world problems using deep learning algorithms. Additionally, the course includes video tutorials and a community-driven approach to learning, making it easier and more engaging.
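As a taste of the tensor → autograd → training-loop progression the chapters follow, here is a tiny, self-contained PyTorch loop (random data, standard PyTorch API; not taken from the course notebooks):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 10)          # fake feature batch
y = torch.randint(0, 2, (64,))   # fake labels

for step in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)  # forward pass
    loss.backward()              # autograd fills in the gradients
    optimizer.step()             # optimizer updates the weights
    print(step, loss.item())
```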
https://github.com/datawhalechina/thorough-pytorch
This course, "深入浅出PyTorch" (Thorough PyTorch), is designed to help you learn PyTorch from basics to advanced levels. It covers everything from installing PyTorch, understanding tensors and automatic differentiation, to building and training models, and even deploying them. The course is divided into several chapters, each focusing on different aspects of PyTorch, such as data loading, model construction, loss functions, optimizers, and visualization.
The benefit to you is that you will gain a comprehensive understanding of PyTorch, which is a powerful tool for deep learning. You will learn through both theoretical explanations and practical exercises, including hands-on projects like fashion classification and fruit classification. This will help you develop your programming skills and ability to solve real-world problems using deep learning algorithms. Additionally, the course includes video tutorials and a community-driven approach to learning, making it easier and more engaging.
https://github.com/datawhalechina/thorough-pytorch
GitHub
GitHub - datawhalechina/thorough-pytorch: PyTorch入门教程,在线阅读地址:https://datawhalechina.github.io/thorough-pytorch/
PyTorch入门教程,在线阅读地址:https://datawhalechina.github.io/thorough-pytorch/ - datawhalechina/thorough-pytorch
#python #annotation #annotation_tool #annotations #boundingbox #computer_vision #computer_vision_annotation #dataset #deep_learning #image_annotation #image_classification #image_labeling #image_labelling_tool #imagenet #labeling #labeling_tool #object_detection #pytorch #semantic_segmentation #tensorflow #video_annotation
CVAT is a powerful tool for annotating videos and images, especially useful for computer vision projects. It helps developers and companies annotate data quickly and efficiently. You can use CVAT online for free or subscribe for more features like unlimited data and integrations with other tools. It also offers a self-hosted option with enterprise support. CVAT supports many annotation formats and has automatic labeling options to speed up your work. It's widely used by many teams worldwide, making it a reliable choice for your data annotation needs.
https://github.com/cvat-ai/cvat
#python #deep_learning #geometric_deep_learning #graph_convolutional_networks #graph_neural_networks #pytorch
PyG (PyTorch Geometric) is a library that makes it easy to work with Graph Neural Networks (GNNs) using PyTorch. Here’s why it’s beneficial:
- **Ease of use**: You can start training a GNN model with just 10-20 lines of code, especially if you're already familiar with PyTorch.
- **Broad graph support**: The library supports large-scale graphs, dynamic graphs, and heterogeneous graphs, making it versatile for various applications.
- **Documentation**: It provides extensive documentation, tutorials, and examples to help you get started quickly.
Overall, PyG simplifies the process of working with GNNs, making it a powerful tool for machine learning on graph-structured data; a short example of the "10-20 lines" claim is sketched below.
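For instance, a two-layer GCN on the Cora citation dataset fits in roughly the advertised line count (standard PyG API; hyperparameters are illustrative):

```python
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv

dataset = Planetoid(root="data/Cora", name="Cora")  # downloads on first use
data = dataset[0]

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(dataset.num_features, 16)
        self.conv2 = GCNConv(16, dataset.num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

model = GCN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
for epoch in range(200):
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()
```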
https://github.com/pyg-team/pytorch_geometric
#python #deep_learning #glow_tts #hifigan #melgan #multi_speaker_tts #python #pytorch #speaker_encoder #speaker_encodings #speech #speech_synthesis #tacotron #text_to_speech #tts #tts_model #vocoder #voice_cloning #voice_conversion #voice_synthesis
The new version of TTS (Text-to-Speech) from Coqui.ai, called TTSv2, is now available with several improvements. It supports 16 languages and has better performance overall. You can fine-tune the models using the provided code and examples. The TTS system can now stream audio with less than 200ms latency, making it very responsive. Additionally, you can use over 1,100 Fairseq models and new features like voice cloning and voice conversion. This update also includes faster inference with the Tortoise model and support for multiple speakers and languages. These enhancements make it easier and more efficient to generate high-quality speech from text.
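A short voice-cloning sketch using the documented `TTS.api` entry point (the model name and file paths are examples, not the only options; a GPU is optional):

```python
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")  # XTTS v2 checkpoint
tts.tts_to_file(
    text="Hello! This is a cloned voice speaking.",
    speaker_wav="reference_speaker.wav",  # a few seconds of the target voice
    language="en",                        # one of the 16 supported languages
    file_path="output.wav",
)
```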
https://github.com/coqui-ai/TTS
#python #fno #fourier_neural_operator #neural_operator #neural_operators #partial_differential_equations #pde #pytorch #tensor_methods #tensorization #tensorly #uno
The `neuraloperator` library is a powerful tool for learning neural operators in PyTorch. It allows you to learn mappings between function spaces, which is different from regular neural networks. This library is useful because it makes your trained models work with data of any resolution, meaning you don't have to worry about the size of your data. You can easily install it using `pip install neuraloperator` and start training operators right away. The library also offers efficient models like the Tucker Tensorized FNO, which reduces the number of parameters needed, making it faster and more efficient. This helps you train and use complex models more effectively.
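A minimal sketch of building and calling an FNO, based on the library's documented constructor (argument names follow recent `neuralop` releases and may differ in older versions):

```python
import torch
from neuralop.models import FNO

# 2D Fourier Neural Operator: 16 retained Fourier modes per spatial dimension
model = FNO(n_modes=(16, 16), hidden_channels=32, in_channels=1, out_channels=1)

x = torch.randn(8, 1, 64, 64)  # a batch of 2D input fields
y = model(x)                   # output stays at the input resolution
print(y.shape)                 # torch.Size([8, 1, 64, 64])
```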
https://github.com/neuraloperator/neuraloperator
#python #cuda #deepseek #deepseek_llm #deepseek_v3 #inference #llama #llama2 #llama3 #llama3_1 #llava #llm #llm_serving #moe #pytorch #transformer #vlm
SGLang is a tool that makes working with large language models and vision language models much faster and more manageable. It has a fast backend runtime that optimizes model performance with features like prefix caching, continuous batching, and quantization. The frontend language is flexible and easy to use, allowing for complex tasks like chained generation calls and multi-modal inputs. SGLang supports many different models and has an active community behind it. This means you can get your models running quickly and efficiently, saving time and resources. Additionally, the extensive documentation and community support make it easier to get started and resolve any issues.
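A sketch of the frontend language for a chained generation call, in the style of the project's documented examples (it assumes an SGLang server is already running locally; the endpoint and token limits are placeholders):

```python
import sglang as sgl

sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))

@sgl.function
def multi_turn(s, question):
    s += sgl.user(question)
    s += sgl.assistant(sgl.gen("answer", max_tokens=128))
    s += sgl.user("Now summarize that answer in one sentence.")
    s += sgl.assistant(sgl.gen("summary", max_tokens=64))

state = multi_turn.run(question="What does continuous batching do?")
print(state["summary"])
```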
https://github.com/sgl-project/sglang
#python #gpu #llm #pytorch #transformers
The `ipex-llm` library is a powerful tool for accelerating Large Language Models (LLMs) on Intel GPUs, NPUs, and CPUs. It integrates seamlessly with popular frameworks like HuggingFace transformers, LangChain, LlamaIndex, and more. Here are the key benefits:
- **Performance**: `ipex-llm` optimizes LLM performance with advanced quantization techniques (FP8, FP6, FP4, INT4) and self-speculative decoding, leading to significant speedups.
- **Wide hardware support**: It works on various Intel hardware such as Arc GPUs, Core Ultra NPUs, and CPUs, making it versatile for different setups.
- **Documentation**: Detailed quickstart guides, code examples, and tutorials help users get started quickly.
Overall, `ipex-llm` enhances the performance and usability of LLMs on Intel hardware, making it a valuable tool for developers and researchers; a minimal loading sketch follows below.
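A minimal sketch of the drop-in HuggingFace-style loading path described in the ipex-llm quickstarts (the model id and prompt are placeholders; moving the model to an Intel GPU with `.to("xpu")` is the GPU variant):

```python
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM  # drop-in replacement

model_id = "Qwen/Qwen2-1.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("What is INT4 quantization?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```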
https://github.com/intel/ipex-llm
#python #asr #automatic_speech_recognition #conformer #e2e_models #production_ready #pytorch #speech_recognition #transformer #whisper
WeNet is a powerful tool for speech recognition that helps turn spoken words into text. It's designed to be easy to use and works well in real-world situations, making it great for businesses and developers. WeNet provides accurate results on many public datasets and is lightweight, meaning it doesn't require a lot of resources to run. This makes it beneficial for users who need reliable speech-to-text functionality without complex setup or maintenance.
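The project also ships a pip-installable Python wrapper; the snippet below is paraphrased from its usage example, so treat the model name and result keys as assumptions that may differ between versions:

```python
import wenet

model = wenet.load_model("chinese")     # pretrained Mandarin model (assumed name)
result = model.transcribe("audio.wav")  # plain 16 kHz WAV file
print(result["text"])                   # recognized transcript
```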
https://github.com/wenet-e2e/wenet
#cplusplus #cuda #cutlass #gpu #pytorch
Flux is a library that helps speed up machine learning on GPUs by overlapping communication and computation tasks. It supports various parallelisms in model training and inference, making it compatible with PyTorch and different Nvidia GPU architectures. This means you can train models faster because Flux combines the steps of sending data between GPUs (communication) and doing calculations (computation), allowing them to happen at the same time. This overlap reduces overall training time, which is beneficial for users working with large or complex models.
https://github.com/bytedance/flux
#jupyter_notebook #cnn #colab #colab_notebook #computer_vision #deep_learning #deep_neural_networks #fourier #fourier_convolutions #fourier_transform #gan #generative_adversarial_network #generative_adversarial_networks #high_resolution #image_inpainting #inpainting #inpainting_algorithm #inpainting_methods #pytorch
LaMa is a powerful tool for removing objects from images. It uses special techniques called Fourier Convolutions, which help it understand the whole image at once. This makes it very good at filling in large areas that are missing. LaMa can even work well with high-resolution images, even if it was trained on smaller ones. This means you can use it to fix photos where objects are in the way, making them look natural and complete again.
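A much-simplified sketch of the idea behind the spectral branch of a Fast Fourier Convolution: transform to the frequency domain, mix channels there (which gives every output pixel a global receptive field), then transform back. This is a conceptual illustration in plain PyTorch, not the repository's actual module:

```python
import torch
import torch.nn as nn

class SpectralMix(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # 1x1 convolution over stacked (real, imaginary) frequency channels
        self.mix = nn.Conv2d(channels * 2, channels * 2, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        freq = torch.fft.rfft2(x, norm="ortho")       # complex spectrum
        z = torch.cat([freq.real, freq.imag], dim=1)  # to real-valued channels
        z = self.mix(z)                               # channel mixing in frequency space
        real, imag = z.chunk(2, dim=1)
        return torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")

x = torch.randn(1, 8, 64, 64)
print(SpectralMix(8)(x).shape)  # torch.Size([1, 8, 64, 64])
```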
https://github.com/advimman/lama
#python #deep_learning #intel #machine_learning #neural_network #pytorch #quantization
Intel Extension for PyTorch boosts the speed of PyTorch on Intel hardware, including both CPUs and GPUs, by using special features like AVX-512, AMX, and XMX for faster calculations. It supports many popular large language models (LLMs) such as Llama, Qwen, Phi, and DeepSeek, offering optimizations for different data types and easy GPU acceleration. This means you can run advanced AI models much faster and more efficiently on your Intel computer, with simple setup and support for both ready-made and custom models.
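The core usage pattern is a one-line `ipex.optimize()` call around an existing model, as in this sketch based on the documented CPU example (the ResNet-50 stand-in and the bfloat16 dtype are just illustrative choices):

```python
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

model = models.resnet50(weights=None).eval()
model = ipex.optimize(model, dtype=torch.bfloat16)  # apply Intel-specific optimizations

x = torch.randn(1, 3, 224, 224)
with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    y = model(x)
print(y.shape)  # torch.Size([1, 1000])
```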
https://github.com/intel/intel-extension-for-pytorch
#jupyter_notebook #ai #artificial_intelligence #chatgpt #deep_learning #from_scratch #gpt #language_model #large_language_models #llm #machine_learning #python #pytorch #transformer
You can learn how to build your own large language model (LLM) like GPT from scratch with clear, step-by-step guidance, including coding, training, and fine-tuning, all explained with examples and diagrams. This approach mirrors how big models like ChatGPT are made but is designed to run on a regular laptop without special hardware. You also get access to code for loading pretrained models and fine-tuning them for tasks like text classification or instruction following. This helps you deeply understand how LLMs work inside and lets you create your own functional AI assistant, gaining practical skills in AI development.
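One of the core pieces the book builds up is self-attention; the sketch below shows the scaled dot-product version in plain PyTorch (an illustration of the idea, not the book's code):

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.W_q = nn.Linear(d_in, d_out, bias=False)
        self.W_k = nn.Linear(d_in, d_out, bias=False)
        self.W_v = nn.Linear(d_in, d_out, bias=False)

    def forward(self, x):  # x: (batch, tokens, d_in)
        q, k, v = self.W_q(x), self.W_k(x), self.W_v(x)
        scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5  # scaled dot products
        weights = torch.softmax(scores, dim=-1)                # attention weights
        return weights @ v                                     # (batch, tokens, d_out)

x = torch.randn(2, 6, 32)              # two sequences of six token embeddings
print(SelfAttention(32, 64)(x).shape)  # torch.Size([2, 6, 64])
```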
https://github.com/rasbt/LLMs-from-scratch
#other #automl #chatgpt #data_analysis #data_science #data_visualization #data_visualizations #deep_learning #gpt #gpt_3 #jax #keras #machine_learning #ml #nlp #python #pytorch #scikit_learn #tensorflow #transformer
This is a comprehensive, regularly updated list of 920 top open-source Python machine learning libraries, organized into 34 categories like frameworks, data visualization, NLP, image processing, and more. Each project is ranked by quality using GitHub and package manager metrics, helping you find the best tools for your needs. Popular libraries like TensorFlow, PyTorch, scikit-learn, and Hugging Face transformers are included, along with specialized ones for time series, reinforcement learning, and model interpretability. This resource saves you time by guiding you to high-quality, actively maintained libraries for building, optimizing, and deploying machine learning models efficiently.
https://github.com/ml-tooling/best-of-ml-python