#python #cross_modal_retrieval #image_captioning #pretraining #tden #video_captioning #vision_and_language #visual_question_answering
https://github.com/YehLi/xmodaler
GitHub - YehLi/xmodaler: X-modaler is a versatile and high-performance codebase for cross-modal analytics (e.g., image captioning, video captioning, vision-language pre-training, visual question answering, visual commonsense reasoning).
#python #deep_learning #deep_learning_library #image_captioning #multimodal_datasets #multimodal_deep_learning #salesforce #vision_and_language #vision_framework #vision_language_pretraining #vision_language_transformer #visual_question_answering
https://github.com/salesforce/LAVIS
GitHub - salesforce/LAVIS: LAVIS - A One-stop Library for Language-Vision Intelligence
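As a taste of the library, here is a minimal captioning sketch built on LAVIS's documented `load_model_and_preprocess` entry point; the model name, model type, and image path are illustrative assumptions:

```python
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load a BLIP captioning model plus the image preprocessor it was trained with.
# "blip_caption"/"base_coco" are assumed here; LAVIS ships many model/type pairs.
model, vis_processors, _ = load_model_and_preprocess(
    name="blip_caption", model_type="base_coco", is_eval=True, device=device
)

raw_image = Image.open("photo.jpg").convert("RGB")  # hypothetical local image
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)

# Generate a caption for the image; returns a list of strings.
print(model.generate({"image": image}))
```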
#python #chinese #clip #computer_vision #contrastive_loss #coreml_models #deep_learning #image_text_retrieval #multi_modal #multi_modal_learning #nlp #pretrained_models #pytorch #transformers #vision_and_language_pre_training #vision_language
This project provides a Chinese version of the CLIP (Contrastive Language-Image Pretraining) model, trained on a large dataset of Chinese image-text pairs. Here's what you need to know:
- **Capabilities**: The model lets you quickly compute text and image features, perform cross-modal retrieval (finding images from text, or vice versa), and do zero-shot image classification (classifying images without any labeled examples).
- **Performance**: The model has been tested on various datasets and shows strong results on zero-shot image classification and cross-modal retrieval tasks.
- **Resources**: The project includes pre-trained models, training and testing code, and detailed tutorials on how to use the model for different tasks.
Overall, this project makes it easy to work with Chinese text and images using advanced AI techniques, saving you time and effort.
https://github.com/OFA-Sys/Chinese-CLIP
GitHub - OFA-Sys/Chinese-CLIP: Chinese version of CLIP which achieves Chinese cross-modal retrieval and representation generation.
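A minimal usage sketch based on the project's quick start, assuming the `cn_clip` package is installed (`pip install cn_clip`); the checkpoint name, image path, and Chinese labels are illustrative:

```python
import torch
from PIL import Image
import cn_clip.clip as clip
from cn_clip.clip import load_from_name

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load a pre-trained Chinese-CLIP checkpoint and its matching image preprocessor.
model, preprocess = load_from_name("ViT-B-16", device=device, download_root="./")
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)  # hypothetical image
texts = clip.tokenize(["一只猫", "一只狗", "一辆汽车"]).to(device)  # "a cat", "a dog", "a car"

with torch.no_grad():
    # Compute L2-normalized features, usable for cross-modal retrieval.
    image_features = model.encode_image(image)
    text_features = model.encode_text(texts)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    # Zero-shot classification: softmax over image-text similarity logits.
    logits_per_image, _ = model.get_similarity(image, texts)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print("Label probabilities:", probs)
```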