#cplusplus #caffe #convolution #deep_learning #deep_neural_networks #diy #graph_algorithms #inference #inference_engine #maxpooling #ncnn #pnnx #pytorch #relu #resnet #sigmoid #yolo #yolov5
This course, "_动手自制大模型推理框架_" (Handcrafting Large Model Inference Framework), is a valuable resource for those interested in deep learning and model inference. It teaches you how to build a modern C++ project from scratch, focusing on designing and implementing a deep learning inference framework. The course supports latest models like LLama3.2 and Qwen2.5, and uses CUDA acceleration and Int8 quantization for better performance.
By taking this course, you will learn how to write efficient C++ code, manage projects with CMake and Git, design computational graphs, implement common operators like convolution and pooling, and optimize them for speed. This knowledge will be highly beneficial for job interviews and advancing your skills in deep learning. The course also includes practical demos on models like Unet and YoloV5, making it a hands-on learning experience.
https://github.com/zjhellofss/KuiperInfer
This course, "_动手自制大模型推理框架_" (Handcrafting Large Model Inference Framework), is a valuable resource for those interested in deep learning and model inference. It teaches you how to build a modern C++ project from scratch, focusing on designing and implementing a deep learning inference framework. The course supports latest models like LLama3.2 and Qwen2.5, and uses CUDA acceleration and Int8 quantization for better performance.
By taking this course, you will learn how to write efficient C++ code, manage projects with CMake and Git, design computational graphs, implement common operators like convolution and pooling, and optimize them for speed. This knowledge will be highly beneficial for job interviews and advancing your skills in deep learning. The course also includes practical demos on models like Unet and YoloV5, making it a hands-on learning experience.
https://github.com/zjhellofss/KuiperInfer
#python #baselines #gsde #gym #machine_learning #openai #pytorch #reinforcement_learning #reinforcement_learning_algorithms #robotics #sb3 #sde #stable_baselines #toolbox
Stable Baselines3 (SB3) is a library that makes it easy to use reinforcement learning algorithms with PyTorch. It provides reliable, tested implementations of these algorithms, which helps researchers and developers build projects quickly. SB3 supports custom environments and policies and integrates with tools such as TensorBoard and Hugging Face. It also has detailed documentation and examples to help beginners get started. The library assumes some knowledge of reinforcement learning but points you to resources to learn more. Using SB3 can save time and effort by providing a stable base for your projects, allowing you to focus on new ideas and improvements.
https://github.com/DLR-RM/stable-baselines3
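As a rough illustration of how little code a typical SB3 run takes, here is a minimal training sketch using its documented PPO interface; the environment name, timestep count, and rollout length are arbitrary placeholders:

```python
import gymnasium as gym
from stable_baselines3 import PPO

# Train a PPO agent on a toy environment ("CartPole-v1" is just a placeholder).
env = gym.make("CartPole-v1")
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)

# Roll out the trained policy for a quick sanity check.
obs, _ = env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```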
#python #deep_learning #plate_recognition #pytorch #yolov5
This tool helps you detect and recognize car license plates from images and videos. It supports 12 different types of Chinese license plates, including blue, yellow, new energy, police, and more. You can use it with Python and PyTorch, and it provides demos for testing with images and videos. The benefit is that it makes it easy to automate the process of identifying car license plates accurately, which can be useful for various applications such as traffic management or security systems.
https://github.com/we0091234/Chinese_license_plate_detection_recognition
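The repo ships its own demo scripts and plate-specific weights; the sketch below is not this project's API, just a generic illustration of the YOLOv5 "detect" half of the detect-then-recognize pipeline, using a stock model from the ultralytics hub and a placeholder image path:

```python
import torch

# A stock YOLOv5 model from the ultralytics hub, NOT this repo's plate-specific
# weights; it only illustrates the detection stage of the pipeline.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

results = model("car.jpg")        # placeholder image path
results.print()                   # human-readable summary of detections
boxes = results.xyxy[0]           # per box: [x1, y1, x2, y2, confidence, class]

# In the real pipeline, each detected plate region would then be cropped and
# passed to a character-recognition model to read the plate text.
```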
#jupyter_notebook #deep_learning #machine_learning #python #pytorch
This course, "深入浅出PyTorch" (Thorough PyTorch), is designed to help you learn PyTorch from basics to advanced levels. It covers everything from installing PyTorch, understanding tensors and automatic differentiation, to building and training models, and even deploying them. The course is divided into several chapters, each focusing on different aspects of PyTorch, such as data loading, model construction, loss functions, optimizers, and visualization.
The benefit to you is that you will gain a comprehensive understanding of PyTorch, which is a powerful tool for deep learning. You will learn through both theoretical explanations and practical exercises, including hands-on projects like fashion classification and fruit classification. This will help you develop your programming skills and ability to solve real-world problems using deep learning algorithms. Additionally, the course includes video tutorials and a community-driven approach to learning, making it easier and more engaging.
https://github.com/datawhalechina/thorough-pytorch
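To give a feel for the kind of code the chapters build toward (tensors, autograd, a model, a loss function, an optimizer), here is a minimal self-contained PyTorch training loop on random data; all shapes and hyperparameters are arbitrary:

```python
import torch
from torch import nn

# Toy regression data: 256 samples with 10 features each (arbitrary shapes).
x = torch.randn(256, 10)
y = torch.randn(256, 1)

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # forward pass and loss
    loss.backward()               # autograd computes the gradients
    optimizer.step()              # the optimizer updates the weights
    if epoch % 5 == 0:
        print(f"epoch {epoch}: loss = {loss.item():.4f}")
```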
This course, "深入浅出PyTorch" (Thorough PyTorch), is designed to help you learn PyTorch from basics to advanced levels. It covers everything from installing PyTorch, understanding tensors and automatic differentiation, to building and training models, and even deploying them. The course is divided into several chapters, each focusing on different aspects of PyTorch, such as data loading, model construction, loss functions, optimizers, and visualization.
The benefit to you is that you will gain a comprehensive understanding of PyTorch, which is a powerful tool for deep learning. You will learn through both theoretical explanations and practical exercises, including hands-on projects like fashion classification and fruit classification. This will help you develop your programming skills and ability to solve real-world problems using deep learning algorithms. Additionally, the course includes video tutorials and a community-driven approach to learning, making it easier and more engaging.
https://github.com/datawhalechina/thorough-pytorch
#python #annotation #annotation_tool #annotations #boundingbox #computer_vision #computer_vision_annotation #dataset #deep_learning #image_annotation #image_classification #image_labeling #image_labelling_tool #imagenet #labeling #labeling_tool #object_detection #pytorch #semantic_segmentation #tensorflow #video_annotation
CVAT is a powerful tool for annotating videos and images, especially useful for computer vision projects. It helps developers and companies annotate data quickly and efficiently. You can use CVAT online for free or subscribe for more features like unlimited data and integrations with other tools. It also offers a self-hosted option with enterprise support. CVAT supports many annotation formats and has automatic labeling options to speed up your work. It's widely used by many teams worldwide, making it a reliable choice for your data annotation needs.
https://github.com/cvat-ai/cvat
#python #deep_learning #geometric_deep_learning #graph_convolutional_networks #graph_neural_networks #pytorch
PyG (PyTorch Geometric) is a library that makes it easy to work with Graph Neural Networks (GNNs) using PyTorch. Here's why it's beneficial:
- **Ease of use**: You can start training a GNN model with just 10-20 lines of code, especially if you're already familiar with PyTorch.
- **Comprehensive models**: It ships ready-to-use implementations of a wide range of published GNN layers and models.
- **Scalability**: The library supports large-scale graphs, dynamic graphs, and heterogeneous graphs, making it versatile for various applications.
- **Documentation**: It provides extensive documentation, tutorials, and examples to help you get started quickly.
Overall, PyG simplifies the process of working with GNNs, making it a powerful tool for machine learning on graph-structured data.
https://github.com/pyg-team/pytorch_geometric
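To make the "10-20 lines of code" claim concrete, here is a minimal sketch of a two-layer GCN trained on the Cora citation dataset via PyG's documented `Planetoid` and `GCNConv` interfaces; the dataset root, hidden size, and epoch count are arbitrary choices:

```python
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv

# Cora: a small citation graph bundled with PyG ("data/" is a placeholder root).
dataset = Planetoid(root="data/", name="Cora")
data = dataset[0]

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(dataset.num_node_features, 16)
        self.conv2 = GCNConv(16, dataset.num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

model = GCN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(200):
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()
```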
#python #deep_learning #glow_tts #hifigan #melgan #multi_speaker_tts #pytorch #speaker_encoder #speaker_encodings #speech #speech_synthesis #tacotron #text_to_speech #tts #tts_model #vocoder #voice_cloning #voice_conversion #voice_synthesis
The latest release of TTS (Text-to-Speech) from Coqui.ai introduces the XTTS v2 model, with several improvements. It supports 16 languages and has better performance overall. You can fine-tune the models using the provided code and examples. XTTS can now stream audio with less than 200ms latency, making it very responsive. Additionally, you can use over 1,100 Fairseq models and new features like voice cloning and voice conversion. This update also includes faster inference with the Tortoise model and support for multiple speakers and languages. These enhancements make it easier and more efficient to generate high-quality speech from text.
https://github.com/coqui-ai/TTS
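A minimal sketch of the Python API for the XTTS v2 model described above, following the usage pattern in Coqui's docs; the audio file paths are placeholders, and `speaker_wav` expects a short reference clip of the voice to clone:

```python
import torch
from TTS.api import TTS

device = "cuda" if torch.cuda.is_available() else "cpu"

# XTTS v2, the multilingual model the release notes describe.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)

# Clone the voice in speaker.wav (placeholder path) and synthesize English speech.
tts.tts_to_file(
    text="Hello! This is a quick synthesis test.",
    speaker_wav="speaker.wav",
    language="en",
    file_path="output.wav",
)
```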
#python #fno #fourier_neural_operator #neural_operator #neural_operators #partial_differential_equations #pde #pytorch #tensor_methods #tensorization #tensorly #uno
The `neuraloperator` library is a powerful tool for learning neural operators in PyTorch. It allows you to learn mappings between function spaces, which is different from regular neural networks. This library is useful because it makes your trained models work with data of any resolution, meaning you don't have to worry about the size of your data. You can easily install it using `pip install neuraloperator` and start training operators right away. The library also offers efficient models like the Tucker Tensorized FNO, which reduces the number of parameters needed, making it faster and more efficient. This helps you train and use complex models more effectively.
https://github.com/neuraloperator/neuraloperator
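A hedged sketch of what basic usage looks like: the library's FNO variants behave as ordinary PyTorch modules. The import path and constructor arguments below (`neuralop.models.FNO`, `n_modes`, `hidden_channels`) are taken from recent documentation and should be treated as assumptions that may differ between releases:

```python
import torch
from neuralop.models import FNO  # pip package is `neuraloperator`, import name is `neuralop`

# A small 2D Fourier Neural Operator. Argument names are assumptions based on
# recent docs; the key point is that the input grid resolution is not fixed at
# construction time, which is the resolution-invariance mentioned above.
model = FNO(n_modes=(16, 16), hidden_channels=32, in_channels=1, out_channels=1)

x = torch.randn(4, 1, 64, 64)    # batch of 4 single-channel 64x64 fields (arbitrary)
y = model(x)                     # output keeps the same spatial resolution
print(y.shape)                   # expected: torch.Size([4, 1, 64, 64])
```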
#python #cuda #deepseek #deepseek_llm #deepseek_v3 #inference #llama #llama2 #llama3 #llama3_1 #llava #llm #llm_serving #moe #pytorch #transformer #vlm
SGLang is a tool that makes working with large language models and vision language models much faster and more manageable. It has a fast backend runtime that optimizes model performance with features like prefix caching, continuous batching, and quantization. The frontend language is flexible and easy to use, allowing for complex tasks like chained generation calls and multi-modal inputs. SGLang supports many different models and has an active community behind it. This means you can get your models running quickly and efficiently, saving time and resources. Additionally, the extensive documentation and community support make it easier to get started and resolve any issues.
https://github.com/sgl-project/sglang
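A hedged sketch of the usual serving workflow: launch the SGLang runtime as an HTTP server, then talk to the OpenAI-compatible endpoint it exposes. The model name, port, and prompt are placeholders, and the launch flags follow the project's documentation at the time of writing:

```python
# 1) Launch the runtime as a server (shell command, shown here as a comment):
#    python -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --port 30000
#
# 2) Query the OpenAI-compatible endpoint it exposes:
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize what SGLang does in one sentence."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```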
#python #gpu #llm #pytorch #transformers
The `ipex-llm` library is a powerful tool for accelerating Large Language Models (LLMs) on Intel GPUs, NPUs, and CPUs. It integrates seamlessly with popular frameworks like HuggingFace transformers, LangChain, LlamaIndex, and more. Here are the key benefits:
- **Strong performance**: `ipex-llm` optimizes LLM performance with advanced quantization techniques (FP8, FP6, FP4, INT4) and self-speculative decoding, leading to significant speedups.
- **Wide model and hardware support**: It runs models such as LLaMA, Mistral, ChatGLM, Qwen, and DeepSeek on Intel hardware ranging from Arc GPUs to Core Ultra NPUs and CPUs, making it versatile for different setups.
- **Easy onboarding**: Detailed quickstart guides, code examples, and tutorials help users get started quickly.
Overall, `ipex-llm` enhances the performance and usability of LLMs on Intel hardware, making it a valuable tool for developers and researchers.
https://github.com/intel/ipex-llm
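A hedged sketch of the HuggingFace-style path its quickstarts describe: swap the `transformers` auto-class for the `ipex_llm` one, load with low-bit weights, and move the model to an Intel XPU device. The model id and prompt are placeholders, and exact argument names may vary between releases:

```python
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM  # drop-in replacement for the transformers class

model_id = "meta-llama/Llama-2-7b-chat-hf"   # placeholder model id

# load_in_4bit applies ipex-llm's INT4 quantization at load time (per its docs).
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True, trust_remote_code=True)
model = model.to("xpu")                      # Intel GPU; omit this line for the CPU path
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("What does ipex-llm do?", return_tensors="pt").to("xpu")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```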