GitHub Trends
See what the GitHub community is most excited about today.

A bot automatically fetches new repositories from https://github.com/trending and sends them to the channel.

Author and maintainer: https://github.com/katursis
#python #amd #cuda #gpt #inference #inferentia #llama #llm #llm_serving #llmops #mlops #model_serving #pytorch #rocm #tpu #trainium #transformer #xpu

vLLM is a library that makes it easy, fast, and cheap to serve large language models (LLMs). It is designed for high-throughput inference, with features like efficient attention-memory management via PagedAttention, continuous batching, and optimized CUDA kernels. vLLM supports many popular models and can run on various hardware, including NVIDIA GPUs, AMD CPUs and GPUs, and more. It also offers seamless integration with Hugging Face models and supports different decoding algorithms. This makes it flexible and easy to use for anyone needing to serve LLMs, whether for research or other applications. You can install vLLM with `pip install vllm` and find detailed documentation on their website.
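As a quick illustration, here is a minimal offline-inference sketch following vLLM's documented quickstart; the model name and prompt are just examples:

```python
# pip install vllm
from vllm import LLM, SamplingParams

# Load any Hugging Face causal LM that vLLM supports (example model shown).
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)

# generate() batches prompts internally using continuous batching.
outputs = llm.generate(["What is the capital of France?"], params)
for out in outputs:
    print(out.outputs[0].text)
```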

https://github.com/vllm-project/vllm
#go #gemma #gemma2 #golang #llama #llama2 #llama3 #llava #llm #llms #mistral #ollama #phi3

Ollama is a tool that lets you use large language models on your own computer. You can download and install it for macOS, Windows, or Linux. It supports various models like Llama 3.2, Phi 3, and others, which you can run locally using simple commands. For example, to run the Llama 3.2 model, you just need to type `ollama run llama3.2`.

The benefit to you is that you can use powerful language models without relying on cloud services, ensuring your data stays private and secure. You can also customize the models with specific prompts and settings to fit your needs. Additionally, there are many community integrations and libraries available to extend its functionality in various applications.
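Once a model has been pulled, the local server that Ollama runs (on port 11434 by default) can also be queried over HTTP; a minimal sketch, with the prompt as a placeholder:

```python
# pip install requests
# Assumes the Ollama server is running locally with llama3.2 already pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",
        "prompt": "Explain what a goroutine is in one sentence.",
        "stream": False,  # return a single JSON object instead of a stream
    },
)
print(resp.json()["response"])
```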

https://github.com/ollama/ollama
#jupyter_notebook #ai #finetuning #langchain #llama #llama2 #llm #machine_learning #python #pytorch #vllm

The `llama-recipes` repository helps you get started with Meta's Llama models, including Llama 3.2 Text and Vision. It provides example scripts and notebooks for various use cases, such as fine-tuning the models and building applications. You can use these models locally, in the cloud, or on-premises. The repository includes guides for installing the necessary tools, converting models to Hugging Face format, and using features like multimodal inference and responsible AI practices. This makes it easier for you to quickly set up and use the Llama models for your projects, saving time and effort.
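For instance, once a checkpoint is in Hugging Face format, loading it follows the standard `transformers` pattern; a sketch, assuming you have accepted Meta's license for the (example) model ID below:

```python
# pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B"  # example ID; gated behind Meta's license on Hugging Face

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The key idea behind fine-tuning is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```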

https://github.com/meta-llama/llama-recipes
#python #agent #ai #chatglm #fine_tuning #gpt #instruction_tuning #language_model #large_language_models #llama #llama3 #llm #lora #mistral #moe #peft #qlora #quantization #qwen #rlhf #transformers

LLaMA Factory is a tool that makes it easy to fine-tune large language models. It supports many different models like LLaMA, ChatGLM, and Qwen, among others. You can use various training methods such as full-tuning, freeze-tuning, LoRA, and QLoRA, which are efficient and save GPU memory. The tool also includes advanced algorithms and practical tricks to improve performance.

Using LLaMA Factory, you can train models up to 3.7 times faster than ChatGLM's P-Tuning while achieving better Rouge scores, according to its benchmarks. It provides a user-friendly interface through Colab, PAI-DSW, or local machines, and even offers a web UI for easier management. The benefit to you is that it simplifies the process of fine-tuning large language models, making it faster and more efficient, which can be very useful for research and development projects.
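A rough sketch of what a run looks like, driving the `llamafactory-cli` command from Python (the YAML path matches one of the example configs the repo ships; treat it as a placeholder):

```python
# Sketch: launch a LoRA fine-tuning run via LLaMA Factory's CLI.
# The config path is one of the repo's shipped examples; swap in your own YAML.
import subprocess

subprocess.run(
    ["llamafactory-cli", "train", "examples/train_lora/llama3_lora_sft.yaml"],
    check=True,  # raise if the training process exits with an error
)
```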

https://github.com/hiyouga/LLaMA-Factory
#cplusplus #ai #api #audio_generation #distributed #gemma #gpt4all #image_generation #kubernetes #llama #llama3 #llm #mamba #mistral #musicgen #p2p #rerank #rwkv #stable_diffusion #text_generation #tts

LocalAI is a free, open-source alternative to OpenAI that you can run on your own computer or server. It exposes a drop-in, OpenAI-compatible REST API and lets you generate text, images, and audio locally without needing a GPU. You can use it with various models, and it supports multiple functionalities like text-to-audio, audio-to-text, and image generation. LocalAI is easy to set up using an installer script or Docker, and it has a user-friendly web interface. This tool is beneficial because it saves you money by not requiring cloud services and gives you full control over your data privacy. Plus, it's community-driven, so there are many resources and integrations available to help you get started and customize it to your needs.
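Because the API is OpenAI-compatible, a running LocalAI instance (port 8080 by default) can be queried like any OpenAI endpoint; a sketch, with the model name standing in for whatever you have installed:

```python
# pip install requests
# Assumes a LocalAI instance is running on localhost:8080 (its default port).
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "my-local-model",  # placeholder: the name of a model installed in LocalAI
        "messages": [{"role": "user", "content": "Write a haiku about local inference."}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```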

https://github.com/mudler/LocalAI
#python #llama #transformer #tts #valle #vits #vqgan #vqvae

Fish Speech is a powerful tool that converts text into speech in many languages, including English, Japanese, Korean, Chinese, and more. You can provide a short vocal sample and it will generate high-quality speech in that voice. It supports multiple languages without needing phonemes and reports low character and word error rates. The tool is fast, with real-time processing on various devices, and has a user-friendly web and GUI interface. You can try the demo online or set it up locally. It's released under a CC BY-NC-SA 4.0 license, which means you can use and modify it for non-commercial purposes, provided you give credit and share any changes under the same license. This tool helps you create realistic speech quickly and easily, making it useful for applications like voice cloning and multilingual communication.

https://github.com/fishaudio/fish-speech
#typescript #agent_monitoring #analytics #evaluation #gpt #langchain #large_language_models #llama_index #llm #llm_cost #llm_evaluation #llm_observability #llmops #monitoring #open_source #openai #playground #prompt_engineering #prompt_management #ycombinator

Helicone is an all-in-one, open-source observability and development platform for applications built on large language models (LLMs). It integrates with various LLM providers like OpenAI, Anthropic, and more with just one line of code. You can observe and debug your model's behavior, analyze metrics such as cost and latency, and fine-tune your models easily. The platform also offers a playground to test and iterate on prompts and sessions, and it supports prompt management and automatic evaluations. Helicone is enterprise-ready, compliant with SOC 2 and GDPR, and offers a generous free tier of 100k requests per month. This makes it easier to manage and optimize your LLM projects efficiently.
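The one-line integration amounts to routing requests through Helicone's gateway with an auth header; a sketch using the `openai` Python SDK, per Helicone's documented proxy setup (keys are placeholders read from the environment):

```python
# pip install openai
import os
from openai import OpenAI

# Point the OpenAI client at Helicone's gateway and authenticate with your Helicone key.
client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url="https://oai.helicone.ai/v1",  # Helicone's OpenAI proxy endpoint
    default_headers={"Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}"},
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # example model
    messages=[{"role": "user", "content": "Say hello through Helicone."}],
)
print(resp.choices[0].message.content)
```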

https://github.com/Helicone/helicone
#python #agent #ai #chatbot #chatgpt #docker #function_calling #gemini #gpt #llama #llm #ollama #openai #qq #qqbot #qqchannel #telegram

AstrBot is a powerful chatbot and development framework that supports multiple messaging platforms like QQ, WeChat, Telegram, and more. It integrates with large language models (LLMs) such as OpenAI, Google Gemini, and others, allowing for multi-turn conversations, personality settings, and multimodal capabilities like image understanding and speech-to-text. The bot has a user-friendly plugin system, a visual management panel, and high stability thanks to its modular design. This makes it easy to deploy and manage, with various deployment options including Docker, Windows, and Replit. Using AstrBot benefits users by providing a versatile and highly customizable chatbot solution that can be easily extended with new features through plugins.

https://github.com/Soulter/AstrBot
#cplusplus #ggml #llama

The `llama.cpp` project allows you to run large language models (LLMs) like LLaMA and others with high performance on various hardware, including local machines and cloud services. Here are the key benefits:
- **Broad hardware support**: It works on Apple Silicon, x86 architectures, NVIDIA, AMD, and Moore Threads GPUs, as well as CPUs, ensuring you can use it on a wide range of devices.
- **Flexible installation**: You can build and run the project locally, install it via package managers, use Docker images, or download pre-built binaries.
- **Bundled tools**: It includes tools like `llama-cli` for simple text completion, `llama-server` for setting up an HTTP server, and `llama-perplexity` for measuring model quality.

This makes `llama.cpp` a powerful and flexible tool for anyone looking to work with LLMs efficiently.
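For example, a one-shot completion with `llama-cli` can be scripted like this (a sketch; the GGUF path is a placeholder and the flags follow the repo's README):

```python
# Sketch: run a one-shot completion through llama-cli.
import subprocess

result = subprocess.run(
    [
        "llama-cli",
        "-m", "models/llama-3.2-1b.Q4_K_M.gguf",  # placeholder path to a GGUF model
        "-p", "Explain quantization in one sentence.",
        "-n", "64",  # number of tokens to generate
    ],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```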

https://github.com/ggerganov/llama.cpp
#python #deepseek #deepseek_r1 #fine_tuning #finetuning #gemma #gemma2 #llama #llama3 #llm #llms #lora #mistral #phi3 #qlora #unsloth

Using Unsloth.ai, you can finetune AI models like Llama, Mistral, and others up to 2x faster and with 70% less memory. The process is beginner-friendly; you just need to add your dataset, click "Run All" in the provided notebooks, and you'll get a faster, finetuned model that can be exported or uploaded to platforms like Hugging Face. This saves time and resources, making it easier to work with large AI models without needing powerful hardware. Additionally, Unsloth supports various features like 4-bit quantization, long context windows, and integration with tools from Hugging Face, making it a powerful tool for AI model development.
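The notebooks boil down to a pattern like the following sketch, based on Unsloth's documented quickstart (the model name and LoRA rank are example values):

```python
# pip install unsloth
from unsloth import FastLanguageModel

# Load a pre-quantized 4-bit checkpoint to cut memory use.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # example model
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank (example value)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```

From there, training proceeds with a standard Hugging Face trainer loop before exporting or uploading the result.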

https://github.com/unslothai/unsloth