#python #bert #chatgpt #chatgpt_api #chatgpt_python #chatgpt3 #gpt_2 #gpt_3 #gpt_3_prompts #gpt_neo #gpt3_library #large_language_models #openai #prompt_engineering #prompt_toolkit #prompt_tuning #prompting #prompts #transformers
https://github.com/promptslab/Promptify
GitHub - promptslab/Promptify: Prompt Engineering | Prompt Versioning | Use GPT or other prompt-based models to get structured output. Join their Discord for prompt engineering, LLMs and other latest research.
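For a sense of the workflow, here is a minimal sketch of Promptify's prompter/pipeline pattern based on its README; the template name and the `fit` arguments are from memory and may differ across versions.

```python
# Sketch of Promptify's Prompter/Pipeline pattern (names per the README,
# but treat the exact signatures as assumptions).
from promptify import Prompter, OpenAI, Pipeline

model = OpenAI("sk-...")               # your OpenAI API key
prompter = Prompter("ner.jinja")       # prompt template for an NER task
pipe = Pipeline(prompter, model)

result = pipe.fit(
    "The patient is a 93-year-old female with hypertension.",
    domain="medical",
    labels=None,
)
print(result)  # structured (JSON-like) entities instead of free-form text
```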
#python #attention_mechanism #deep_learning #gpt #gpt_2 #gpt_3 #language_model #linear_attention #lstm #pytorch #rnn #rwkv #transformer #transformers
https://github.com/BlinkDL/RWKV-LM
GitHub - BlinkDL/RWKV-LM: RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable). We are at RWKV-7 "Goose".
#python #deepspeed_library #gpt_3 #language_model #transformers
https://github.com/EleutherAI/gpt-neox
GitHub - EleutherAI/gpt-neox: An implementation of model-parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries.
#python #chatgpt #clip #deep_learning #gpt #hacktoberfest #hnsw #information_retrieval #knn #large_language_models #machine_learning #machinelearning #multi_modal #natural_language_processing #search_engine #semantic_search #tensor_search #transformers #vector_search #vision_language #visual_search
https://github.com/marqo-ai/marqo
GitHub - marqo-ai/marqo: Unified embedding generation and search engine. Also available on cloud at cloud.marqo.ai.
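A minimal end-to-end sketch of indexing and tensor search against a locally running Marqo instance; the port, model name, and sample document follow the README pattern but are assumptions here.

```python
# Index a document and run a semantic (tensor) search against local Marqo.
import marqo

mq = marqo.Client(url="http://localhost:8882")  # default local endpoint (assumed)
mq.create_index("my-first-index", model="hf/e5-base-v2")

mq.index("my-first-index").add_documents(
    [{
        "Title": "The Travels of Marco Polo",
        "Description": "A 13th-century travelogue describing Polo's journeys",
    }],
    tensor_fields=["Description"],  # fields to embed for vector search
)

results = mq.index("my-first-index").search(q="what did Marco Polo write about?")
print(results["hits"][0]["Title"])
```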
#python #graphcore #habana #inference #intel #onnx #onnxruntime #optimization #pytorch #quantization #tflite #training #transformers
https://github.com/huggingface/optimum
GitHub - huggingface/optimum: 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy-to-use hardware optimization tools.
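To make "accelerate inference" concrete: with Optimum you can typically swap a `transformers` Auto class for its ORT equivalent and export to ONNX on the fly. A sketch, with an illustrative checkpoint:

```python
# Run a Transformers pipeline on ONNX Runtime via Optimum.
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSequenceClassification

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# export=True converts the PyTorch checkpoint to ONNX at load time
model = ORTModelForSequenceClassification.from_pretrained(checkpoint, export=True)

clf = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(clf("Hardware-optimized inference with almost no code changes."))
```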
#python #embeddings #information_retrieval #language_model #large_language_models #llm #machine_learning #nearest_neighbor_search #neural_search #nlp #search #search_engine #semantic_search #sentence_embeddings #similarity_search #transformers #txtai #vector_database #vector_search #vector_search_engine
https://github.com/neuml/txtai
GitHub - neuml/txtai: 💡 All-in-one AI framework for semantic search, LLM orchestration and language model workflows.
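A minimal semantic-search sketch with txtai; the embedding model path and the plain-string indexing API reflect recent versions and are assumptions here.

```python
# Build an in-memory embeddings index and run a semantic search.
from txtai import Embeddings

embeddings = Embeddings(path="sentence-transformers/all-MiniLM-L6-v2")
embeddings.index([
    "US tops 5 million confirmed virus cases",
    "Canada's last fully intact ice shelf has suddenly collapsed",
    "Beijing mobilises invasion craft along coast as Taiwan tensions escalate",
])

# best match is returned as (id, score); "health crisis" matches the first item
print(embeddings.search("public health crisis", 1))
```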
#python #ai #data #data_structures #database #long_term_memory #machine_learning #ml #mlops #mongodb #pytorch #scikit_learn #sklearn #torch #transformers #vector_search
https://github.com/SuperDuperDB/superduperdb
GitHub - superduper-io/superduper: End-to-end framework for building custom AI applications and agents.
#jupyter_notebook #ai #azure #chatgpt #dall_e #generative_ai #generativeai #gpt #language_model #llms #openai #prompt_engineering #semantic_search #transformers
This course from Microsoft Cloud Advocates teaches you how to build Generative AI applications through 21 comprehensive lessons. You'll learn about Generative AI, Large Language Models (LLMs), and prompt engineering, and build applications such as text generation, chat apps, and image generation in Python and TypeScript. The course includes videos, written lessons, code samples, and additional learning resources; you can start at any lesson and join a Discord server for support and networking with other learners. It offers practical skills for building and deploying Generative AI applications responsibly and effectively.
https://github.com/microsoft/generative-ai-for-beginners
GitHub - microsoft/generative-ai-for-beginners: 21 Lessons, Get Started Building with Generative AI.
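Since the course's Python lessons revolve around calling chat models, here is a minimal chat-completion sketch of the kind the lessons build on; the `openai` v1 client and the model name are assumptions, not taken from the course itself.

```python
# Minimal chat-app building block: one turn against a chat model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain prompt engineering in one sentence."},
    ],
)
print(response.choices[0].message.content)
```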
#python #large_language_models #model_para #transformers
Megatron-LM and Megatron-Core are powerful tools for training large language models (LLMs) on NVIDIA GPUs. Megatron-Core offers GPU-optimized techniques and system-level optimizations, allowing you to train custom transformers efficiently. It supports advanced parallelism strategies, activation checkpointing, and distributed optimization to reduce memory usage and improve training speed. You can use Megatron-Core with other frameworks like NVIDIA NeMo for end-to-end solutions or integrate its components into your preferred training framework. This setup enables scalable training of models with hundreds of billions of parameters, making it beneficial for researchers and developers aiming to advance LLM technology.
https://github.com/NVIDIA/Megatron-LM
GitHub - NVIDIA/Megatron-LM: Ongoing research training transformer models at scale.
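The summary above leans on tensor (model) parallelism; below is a conceptual, single-process PyTorch sketch of the column-parallel linear layer idea behind it. This is NOT Megatron-Core's actual API, just an illustration of the technique.

```python
# Conceptual sketch of Megatron-style column-parallel linear layers:
# each rank holds a slice of the weight's output rows, and the full
# output is recovered by concatenating (all-gathering) per-rank parts.
import torch
import torch.nn as nn

class ColumnParallelLinear(nn.Module):
    def __init__(self, in_features, out_features, world_size):
        super().__init__()
        assert out_features % world_size == 0
        self.shard = nn.Linear(in_features, out_features // world_size)

    def forward(self, x):
        # In real distributed training each rank computes its shard and an
        # all-gather merges them; here ranks are simulated in one process.
        return self.shard(x)

torch.manual_seed(0)
full = nn.Linear(16, 8)     # the "unsharded" reference layer
world_size = 2
step = 8 // world_size
shards = []
for rank in range(world_size):
    part = ColumnParallelLinear(16, 8, world_size)
    rows = slice(rank * step, (rank + 1) * step)
    with torch.no_grad():   # copy this rank's slice of the full weight
        part.shard.weight.copy_(full.weight[rows])
        part.shard.bias.copy_(full.bias[rows])
    shards.append(part)

x = torch.randn(3, 16)
merged = torch.cat([s(x) for s in shards], dim=-1)  # stands in for all-gather
print(torch.allclose(merged, full(x), atol=1e-6))   # True
```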
#python #chinese #clip #computer_vision #contrastive_loss #coreml_models #deep_learning #image_text_retrieval #multi_modal #multi_modal_learning #nlp #pretrained_models #pytorch #transformers #vision_and_language_pre_training #vision_language
This project is a Chinese version of the CLIP (Contrastive Language-Image Pretraining) model, trained on a large dataset of Chinese text-image pairs. It lets you quickly compute text and image features, run cross-modal retrieval (finding images from text, or text from images), and perform zero-shot image classification (classifying images without any labeled examples); a minimal usage sketch follows at the end of this entry.
- **Performance**: The model has been evaluated on a range of datasets and shows strong results on zero-shot image classification and cross-modal retrieval tasks.
- **Resources**: The project includes pre-trained models, training and testing code, and detailed tutorials on how to use the model for different tasks.
Overall, this project makes it easy to work with Chinese text and images using modern vision-language techniques, saving you time and effort.
https://github.com/OFA-Sys/Chinese-CLIP
GitHub - OFA-Sys/Chinese-CLIP: Chinese version of CLIP which achieves Chinese cross-modal retrieval and representation generation.
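The usage sketch promised above, following the pattern in the Chinese-CLIP README; the `cn_clip` import path, model name, and example labels are from memory and may differ by version.

```python
# Zero-shot classification with Chinese-CLIP: score an image against
# Chinese candidate labels via normalized feature similarity.
import torch
from PIL import Image
import cn_clip.clip as clip
from cn_clip.clip import load_from_name

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = load_from_name("ViT-B-16", device=device)
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)  # hypothetical file
text = clip.tokenize(["皮卡丘", "杰尼龟", "小火龙"]).to(device)        # candidate labels

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # normalize, then cosine similarity doubles as zero-shot class scores
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (image_features @ text_features.T).softmax(dim=-1)
print(probs)
```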