#python #cv #deep_learning #machine_learning #multi_modal #nlp #science #speech
https://github.com/modelscope/modelscope
GitHub - modelscope/modelscope: ModelScope: bring the notion of Model-as-a-Service to life.
#python #bloom #deep_learning #gpt #inference #nlp #pytorch #transformer
https://github.com/huggingface/text-generation-inference
GitHub - huggingface/text-generation-inference: Large Language Model Text Generation Inference
#python #document_ai #document_image_analysis #document_layout_analysis #document_parser #document_understanding #layoutlm #nlp #ocr #publaynet #pubtabnet #pytorch #table_detection #table_recognition #tensorflow
https://github.com/deepdoctection/deepdoctection
GitHub - deepdoctection/deepdoctection: A Repo For Document AI
#python #active_learning #ai #annotation_tool #developer_tools #gpt_4 #human_in_the_loop #langchain #llm #machine_learning #mlops #natural_language_processing #nlp #rlhf #text_annotation #text_labeling #weak_supervision #weakly_supervised_learning
https://github.com/argilla-io/argilla
GitHub - argilla-io/argilla: Argilla is a collaboration tool for AI engineers and domain experts to build high-quality datasets
#python #embeddings #information_retrieval #language_model #large_language_models #llm #machine_learning #nearest_neighbor_search #neural_search #nlp #search #search_engine #semantic_search #sentence_embeddings #similarity_search #transformers #txtai #vector_database #vector_search #vector_search_engine
https://github.com/neuml/txtai
GitHub - neuml/txtai: 💡 All-in-one AI framework for semantic search, LLM orchestration and language model workflows
#jupyter_notebook #computer_vision #gpt #huggingface_transformers #llm #machinelearning #nlp_machine_learning #rag
https://github.com/katanaml/sparrow
GitHub - katanaml/sparrow: Structured data extraction and instruction calling with ML, LLM and Vision LLM
#python #beit #beit_3 #bitnet #deepnet #document_ai #foundation_models #kosmos #kosmos_1 #layoutlm #layoutxlm #llm #minilm #mllm #multimodal #nlp #pre_trained_model #textdiffuser #trocr #unilm #xlm_e
Microsoft is developing advanced AI models through large-scale self-supervised pre-training across tasks, languages, and modalities. Models such as Foundation Transformers (Magneto) and Kosmos-2.5 are designed to generalize across language understanding, vision, speech, and multimodal interaction, delivering state-of-the-art performance in document AI, speech recognition, machine translation, and more. Companion tools such as TorchScale and Aggressive Decoding improve the stability, efficiency, and speed of model training and deployment.
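As a small illustration of the TorchScale piece: architectures are built from config objects, following the pattern in TorchScale's own README (package name and defaults are assumed here; verify against the current release):

```python
# Assumes: pip install torchscale (per the project's README)
from torchscale.architecture.config import EncoderConfig
from torchscale.architecture.encoder import Encoder

# Build a Transformer encoder from a config; stability-oriented variants
# such as Magneto are enabled via flags on the same config object.
config = EncoderConfig(vocab_size=64000)
model = Encoder(config)
print(model)
```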
https://github.com/microsoft/unilm
GitHub - microsoft/unilm: Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
#python #chinese #clip #computer_vision #contrastive_loss #coreml_models #deep_learning #image_text_retrieval #multi_modal #multi_modal_learning #nlp #pretrained_models #pytorch #transformers #vision_and_language_pre_training #vision_language
This project provides a Chinese version of the CLIP (Contrastive Language-Image Pretraining) model, trained on a large dataset of Chinese image-text pairs. It lets you quickly compute text and image features, perform cross-modal retrieval (finding images from text or vice versa), and run zero-shot image classification (classifying images without any labeled examples).
- **Performance**: The model has been evaluated on various datasets and shows strong results in zero-shot image classification and cross-modal retrieval tasks.
- **Resources**: The project includes pre-trained models, training and testing code, and detailed tutorials on how to use the model for different tasks.
Overall, this project makes it easy to work with Chinese text and images using advanced AI techniques, saving you time and effort; a minimal usage sketch follows.
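Here is a minimal zero-shot classification sketch following the pattern in the project's cn_clip package; the checkpoint name, image path, and candidate labels are illustrative, and the exact API should be verified against the repo's README:

```python
import torch
from PIL import Image

import cn_clip.clip as clip
from cn_clip.clip import load_from_name

device = "cuda" if torch.cuda.is_available() else "cpu"
# "ViT-B-16" is assumed to be one of the published checkpoints (see README).
model, preprocess = load_from_name("ViT-B-16", device=device, download_root="./")
model.eval()

# Illustrative inputs: a local image and candidate Chinese labels.
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
texts = clip.tokenize(["猫", "狗", "鸟"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(texts)
    # L2-normalize both sides, then score image-text similarity.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # probabilities over the candidate labels
```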
https://github.com/OFA-Sys/Chinese-CLIP
GitHub - OFA-Sys/Chinese-CLIP: Chinese version of CLIP which achieves Chinese cross-modal retrieval and representation generation.
#python #bert #deep_learning #flax #hacktoberfest #jax #language_model #language_models #machine_learning #model_hub #natural_language_processing #nlp #nlp_library #pretrained_models #pytorch #pytorch_transformers #seq2seq #speech_recognition #tensorflow #transformer
The Hugging Face Transformers library provides thousands of pretrained models for text, image, and audio processing, covering tasks such as text classification, object detection, speech recognition, and more. It supports the major deep learning frameworks JAX, PyTorch, and TensorFlow, and makes it easy to switch between them.
The benefit to the user is that you can download and use these pretrained models with just a few lines of code, saving time and computational resources. You can also fine-tune them on your own datasets and share them with the community. The library additionally offers a simple `pipeline` API for immediate use on different inputs, which suits both researchers and practitioners. Starting from pretrained models reduces compute costs and carbon footprint while still delivering high performance across many machine learning tasks.
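To make the `pipeline` API concrete, here is a minimal sketch (the task name is a documented one; the default model is picked and downloaded automatically on first use):

```python
from transformers import pipeline

# Build a sentiment-analysis pipeline; a suitable pretrained model is
# selected and downloaded automatically on first use.
classifier = pipeline("sentiment-analysis")

result = classifier("Pretrained models save both time and compute.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```

The same one-liner pattern works for other tasks such as "automatic-speech-recognition" or "image-classification"; only the task string and input type change.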
https://github.com/huggingface/transformers
GitHub - huggingface/transformers: 🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training.
#jupyter_notebook #computer_vision #deep_learning #drug_discovery #forecasting #large_language_models #mxnet #nlp #paddlepaddle #pytorch #recommender_systems #speech_recognition #speech_synthesis #tensorflow #tensorflow2 #translation
This repository provides high-quality deep learning examples that are easy to train and deploy on NVIDIA GPUs. It covers a wide range of models for computer vision, natural language processing, recommender systems, speech-to-text, and more. The examples are updated monthly and ship as Docker containers with the latest NVIDIA software for best performance. The models support multiple GPUs and nodes, and some are optimized for Tensor Cores, which can significantly speed up training, making it easier to reach high accuracy and throughput in deep learning projects.
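The Tensor Core speedups these examples rely on typically come from mixed-precision training; the following generic PyTorch sketch shows that pattern (not code from this repo; the model and data are placeholders):

```python
import torch

# Placeholder model and batch; Tensor Cores engage when matmuls run
# in FP16/BF16 under autocast.
model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()  # loss scaling avoids FP16 underflow

inputs = torch.randn(64, 1024, device="cuda")
targets = torch.randn(64, 1024, device="cuda")

optimizer.zero_grad()
with torch.autocast(device_type="cuda", dtype=torch.float16):
    loss = torch.nn.functional.mse_loss(model(inputs), targets)

scaler.scale(loss).backward()  # backward on the scaled loss
scaler.step(optimizer)         # unscales gradients, then steps
scaler.update()                # adapts the scale factor
```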
https://github.com/NVIDIA/DeepLearningExamples
GitHub - NVIDIA/DeepLearningExamples: State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enterprise-grade infrastructure.