#python #convolutional_neural_networks #cvpr #cvpr2020 #efficient_inference #fbnet #imagenet #mobilenet #mobilenetv3 #model_compression #tensorflow
https://github.com/huawei-noah/ghostnet
GitHub - huawei-noah/Efficient-AI-Backbones: Efficient AI Backbones including GhostNet, TNT and MLP, developed by Huawei Noah's Ark Lab.
#javascript #annotate_images #annotation_tool #audio #classification #computer_vision #dataset #deep_learning #desktop #entity_recognition #image_labeling_tool #image_segmentation #imagenet #named_entity_recognition #semantic_segmentation #text_annotation #text_labeling #udt
https://github.com/UniversalDataTool/universal-data-tool
GitHub - UniversalDataTool/universal-data-tool: Collaborate & label any type of data, images, text, or documents, in an easy web interface or desktop app.
#python #convolutional_neural_networks #deep_learning #imagenet #jax #pytorch #tensorflow2 #transfer_learning
https://github.com/google-research/big_transfer
GitHub - google-research/big_transfer: Official repository for the "Big Transfer (BiT): General Visual Representation Learning" paper.
#python #ade20k #image_classification #imagenet #mask_rcnn #mscoco #object_detection #semantic_segmentation #swin_transformer
https://github.com/microsoft/Swin-Transformer
GitHub - microsoft/Swin-Transformer: This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".
#typescript #annotation #annotation_tool #annotations #boundingbox #computer_vision #computer_vision_annotation #dataset #deep_learning #image_annotation #image_classification #image_labeling #image_labelling_tool #imagenet #labeling #labeling_tool #semantic_segmentation #tensorflow #video_annotation
https://github.com/openvinotoolkit/cvat
GitHub - cvat-ai/cvat: Annotate better with CVAT, the industry-leading data engine for machine learning. Used and trusted by teams at any scale, for data of any scale.
#python #deep_learning #image_classification #imagenet #mobilenet #pytorch #regnet #resnet #resnext #senet #shufflenet #swin_transformer
https://github.com/open-mmlab/mmclassification
GitHub - open-mmlab/mmpretrain: OpenMMLab Pre-training Toolbox and Benchmark.
#python #damo_yolo #deep_learning #imagenet #nas #object_detection #onnx #pytorch #tensorrt #yolo #yolov5
https://github.com/tinyvision/DAMO-YOLO
GitHub - tinyvision/DAMO-YOLO: DAMO-YOLO: a fast and accurate object detection method with some new techs, including NAS backbones, efficient RepGFPN, ZeroHead, AlignedOTA, and distillation enhancement.
#jupyter_notebook #computer_vision #deep_learning #image_classification #imagenet #neural_network #object_detection #pretrained_models #pretrained_weights #pytorch #semantic_segmentation #transfer_learning
https://github.com/Deci-AI/super-gradients
GitHub - Deci-AI/super-gradients: Easily train or fine-tune SOTA computer vision models with one open source training library. The home of Yolo-NAS.
#python #deeplab_v3_plus #deeplabv3 #fpn #hacktoberfest #image_processing #image_segmentation #imagenet #linknet #models #pretrained_backbones #pretrained_models #pretrained_weights #pspnet #pytorch #segmentation #segmentation_models #semantic_segmentation #unet #unet_pytorch #unetplusplus
https://github.com/qubvel/segmentation_models.pytorch
GitHub - qubvel-org/segmentation_models.pytorch: Semantic segmentation models with 500+ pretrained convolutional and transformer-based backbones.
#python #augmix #convnext #distributed_training #dual_path_networks #efficientnet #image_classification #imagenet #maxvit #mixnet #mobile_deep_learning #mobilenet_v2 #mobilenetv3 #nfnets #normalization_free_training #pretrained_models #pretrained_weights #pytorch #randaugment #resnet #vision_transformer_models
PyTorch Image Models (`timm`) is a comprehensive library that includes a wide range of state-of-the-art image models, layers, utilities, optimizers, and training scripts. Its key benefits:
- **Pre-trained Weights**: `timm` offers over 300 pre-trained models from families such as Vision Transformers, ResNets, and EfficientNets, letting you choose the best model for your task.
- **Feature Extraction**: You can easily extract features at different levels of the network using `features_only=True` and `out_indices` (see the sketch below), making it versatile for various applications.
- **Augmentation and Regularization**: It provides augmentation techniques such as AutoAugment and RandAugment, plus regularization methods such as DropPath and DropBlock, to enhance model performance.
- **Reference Training Scripts**: Included are high-performance training, validation, and inference scripts that support multiple GPUs and mixed-precision training.
Overall, `timm` simplifies working with deep learning models for image tasks by providing a unified interface and extensive tools for training and evaluation.
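A minimal sketch of the feature-extraction usage mentioned above, assuming `timm` and PyTorch are installed; `resnet50` is just an example model name here, and any entry from `timm.list_models()` should work the same way:

```python
import torch
import timm

# Build a backbone that returns intermediate feature maps instead of logits.
# out_indices selects which stages to return (for resnet50: strides 4, 8, 16, 32).
model = timm.create_model(
    "resnet50",
    pretrained=True,      # downloads ImageNet weights on first use
    features_only=True,
    out_indices=(1, 2, 3, 4),
)
model.eval()

# feature_info describes the channel count and reduction factor of each returned stage.
print(model.feature_info.channels())   # [256, 512, 1024, 2048] for resnet50
print(model.feature_info.reduction())  # [4, 8, 16, 32]

# The forward pass returns a list of feature maps, one per selected stage.
with torch.no_grad():
    features = model(torch.randn(1, 3, 224, 224))
for f in features:
    print(f.shape)
```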
https://github.com/huggingface/pytorch-image-models
GitHub - huggingface/pytorch-image-models: The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT), and more.