GitHub Trends
See what the GitHub community is most excited about today.

A bot automatically fetches new repositories from https://github.com/trending and sends them to the channel.

Author and maintainer: https://github.com/katursis
#jupyter_notebook #ai #azure #chatgpt #dall_e #generative_ai #generativeai #gpt #language_model #llms #openai #prompt_engineering #semantic_search #transformers

This course teaches you how to build Generative AI applications with 21 comprehensive lessons from Microsoft Cloud Advocates. You'll learn about Generative AI, Large Language Models (LLMs), prompt engineering, and how to build various applications like text generation, chat apps, and image generation using Python and TypeScript. The course includes videos, written lessons, code samples, and additional learning resources. You can start anywhere and even join a Discord server for support and networking with other learners. This helps you gain practical skills in building and deploying Generative AI applications responsibly and effectively.
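
One of the topics the course covers is prompt engineering. As a hedged illustration of the idea (this is not code from the course repo; the function and names are hypothetical), a few-shot prompt can be assembled from worked examples before being sent to an LLM:

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: task description, worked examples, then the query."""
    lines = [task, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each sentence as positive or negative.",
    [("I loved this movie!", "positive"), ("The food was awful.", "negative")],
    "The service was fantastic.",
)
print(prompt)
```

The resulting string would then be passed to whatever model API the lesson uses; the examples steer the model toward the desired output format.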

https://github.com/microsoft/generative-ai-for-beginners
#python #large_language_models #model_para #transformers

Megatron-LM and Megatron-Core are powerful tools for training large language models (LLMs) on NVIDIA GPUs. Megatron-Core offers GPU-optimized techniques and system-level optimizations, allowing you to train custom transformers efficiently. It supports advanced parallelism strategies, activation checkpointing, and distributed optimization to reduce memory usage and improve training speed. You can use Megatron-Core with other frameworks like NVIDIA NeMo for end-to-end solutions or integrate its components into your preferred training framework. This setup enables scalable training of models with hundreds of billions of parameters, making it beneficial for researchers and developers aiming to advance LLM technology.
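
The parallelism strategies mentioned above shard a model's weights across GPUs to fit within memory. A rough back-of-the-envelope sketch of that effect (not Megatron code; simplified assumptions: fp16 weights only, ignoring optimizer states and activations):

```python
def params_per_gpu(total_params, tp, pp):
    """Weights are sharded across tensor-parallel (tp) and pipeline-parallel (pp) ranks."""
    return total_params / (tp * pp)

def weight_memory_gb(total_params, tp, pp, bytes_per_param=2):
    """Approximate fp16 weight memory per GPU, in GB (1e9 bytes)."""
    return params_per_gpu(total_params, tp, pp) * bytes_per_param / 1e9

# A 175B-parameter model with 8-way tensor and 16-way pipeline parallelism:
gb = weight_memory_gb(175e9, tp=8, pp=16)
print(f"{gb:.2f} GB of fp16 weights per GPU")  # → 2.73 GB
```

This is why hundreds-of-billions-of-parameter models become trainable at all: without sharding, 175B fp16 weights alone would need 350 GB on a single device.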

https://github.com/NVIDIA/Megatron-LM
#python #chinese #clip #computer_vision #contrastive_loss #coreml_models #deep_learning #image_text_retrieval #multi_modal #multi_modal_learning #nlp #pretrained_models #pytorch #transformers #vision_and_language_pre_training #vision_language

This project provides a Chinese version of the CLIP (Contrastive Language-Image Pretraining) model, trained on a large dataset of Chinese text-image pairs. Here's what you need to know:
- **Ease of Use**: The model lets you quickly compute text and image features, perform cross-modal retrieval (finding images from text or vice versa), and run zero-shot image classification (classifying images without any labeled examples).
- **Strong Performance**: The model has been tested on various datasets and shows strong results in zero-shot image classification and cross-modal retrieval tasks.
- **Resources**: The project includes pre-trained models, training and testing code, and detailed tutorials on how to use the model for different tasks.

Overall, this project makes it easy to work with Chinese text and images using advanced AI techniques, saving you time and effort.
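
At its core, CLIP-style zero-shot classification compares an image embedding against text embeddings of candidate labels and picks the closest one. A minimal sketch of that matching step (toy vectors stand in for real Chinese-CLIP features; the encoder calls that would produce them are assumed):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def zero_shot_classify(image_feature, label_features):
    """Pick the label whose text embedding is most similar to the image embedding."""
    return max(label_features,
               key=lambda label: cosine_similarity(image_feature, label_features[label]))

# Toy embeddings; in practice these come from the model's image and text encoders.
image_feature = [0.9, 0.1, 0.2]
label_features = {
    "cat": [0.8, 0.2, 0.1],
    "dog": [0.1, 0.9, 0.3],
}
print(zero_shot_classify(image_feature, label_features))  # → cat
```

Cross-modal retrieval uses the same similarity: rank a gallery of image embeddings against one text embedding (or vice versa) and return the top matches.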

https://github.com/OFA-Sys/Chinese-CLIP