GitHub Trends
10.1K subscribers
15.3K links
See what the GitHub community is most excited about today.

A bot automatically fetches new repositories from https://github.com/trending and sends them to the channel.

Author and maintainer: https://github.com/katursis
#python #agent #ai #chatglm #fine_tuning #gpt #instruction_tuning #language_model #large_language_models #llama #llama3 #llm #lora #mistral #moe #peft #qlora #quantization #qwen #rlhf #transformers

LLaMA Factory is a tool that makes it easy to fine-tune large language models. It supports many different models, including LLaMA, ChatGLM, and Qwen, and offers training methods such as full-parameter tuning, freeze-tuning, LoRA, and QLoRA, the low-rank methods being especially efficient at saving GPU memory. The tool also includes advanced algorithms and practical tricks to improve performance.

Using LLaMA Factory, you can train models up to 3.7 times faster with better results compared to other methods. It provides a user-friendly interface through Colab, PAI-DSW, or local machines, and even offers a web UI for easier management. The benefit to you is that it simplifies the process of fine-tuning large language models, making it faster and more efficient, which can be very useful for research and development projects.
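
The memory savings of LoRA and QLoRA come from a simple idea: instead of updating a full weight matrix, you train two small low-rank matrices and add their product to the frozen weights. Here is a minimal plain-Python sketch of that update (the matrix sizes, alpha value, and helper names are illustrative, not LLaMA Factory's actual API; real tools do this with GPU tensors):

```python
# LoRA idea: for a frozen weight matrix W (d x k), train only
# B (d x r) and A (r x k) with r << min(d, k), then use
# W' = W + (alpha / r) * (B @ A). Trainable parameters drop from
# d*k to r*(d + k).

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][t] * Y[t][j] for t in range(inner))
             for j in range(cols)] for i in range(rows)]

def lora_merge(W, A, B, alpha):
    """Return W + (alpha / r) * B @ A, where r is the LoRA rank."""
    r = len(A)                      # A has shape (r, k)
    BA = matmul(B, A)               # shape (d, k), rank <= r
    scale = alpha / r
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy example: d = 2, k = 2, rank r = 1 -> only 4 trainable numbers
W = [[1.0, 0.0],
     [0.0, 1.0]]
B = [[1.0], [2.0]]                  # (d, r)
A = [[0.5, 0.5]]                    # (r, k)
merged = lora_merge(W, A, B, alpha=1.0)
print(merged)                       # [[1.5, 0.5], [1.0, 2.0]]
```

With a 4096 x 4096 layer and rank 8, this trains about 65K numbers instead of 16.8M, which is why these methods fit on a single consumer GPU.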

https://github.com/hiyouga/LLaMA-Factory
#python #deepseek #deepseek_r1 #fine_tuning #finetuning #gemma #gemma2 #llama #llama3 #llm #llms #lora #mistral #phi3 #qlora #unsloth

Using Unsloth.ai, you can finetune AI models like Llama, Mistral, and others up to 2x faster and with 70% less memory. The process is beginner-friendly: you just add your dataset, click "Run All" in the provided notebooks, and you get a finetuned model that can be exported or uploaded to platforms like Hugging Face. This saves time and resources, making it easier to work with large AI models without needing powerful hardware. Additionally, Unsloth supports features like 4-bit quantization, long context windows, and integration with Hugging Face tools, making it a powerful option for AI model development.
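
The 70% memory saving largely comes from 4-bit quantization: each weight is stored as a 4-bit integer plus a shared scale instead of a 16-bit float. A hedged sketch of the basic version of that idea (symmetric round-to-nearest; Unsloth and bitsandbytes actually use the more elaborate NF4 format, which this does not reproduce):

```python
# Store floats as 4-bit integers in [-8, 7] plus one shared scale,
# cutting memory roughly 4x versus 16-bit floats at the cost of
# a small rounding error.

def quantize_4bit(weights):
    """Map floats to ints in [-8, 7] with a shared scale."""
    scale = max(abs(w) for w in weights) / 7 or 1.0
    return [max(-8, min(7, round(w / scale))) for w in weights], scale

def dequantize_4bit(q, scale):
    """Approximately recover the original floats."""
    return [v * scale for v in q]

weights = [0.7, -0.3, 0.14, -0.7]
q, scale = quantize_4bit(weights)
print(q)                            # [7, -3, 1, -7]
restored = dequantize_4bit(q, scale)
```

During QLoRA-style finetuning, the base weights stay frozen in this compressed form and only the small LoRA matrices are trained in full precision.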

https://github.com/unslothai/unsloth
#cplusplus #esp32 #gps #heltec #hiking #lora #mesh #mesh_networks #meshtastic #nrf52 #off_grid #pico #rp2040 #stm32 #ttgo #ttgo_tbeam

The Meshtastic firmware is the device software for the Meshtastic project. It lets devices communicate with each other over a mesh network. You can find instructions on how to build and flash the firmware on your device, making it easy to set up and use. This benefits you by allowing your devices to connect and share information, even in areas with no internet or cellular coverage.
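
The mesh works by flooding: a node rebroadcasts any packet it has not seen before, decrementing a hop limit, so messages hop radio-to-radio without a central router. A rough Python sketch of that behavior (the topology and function are illustrative; Meshtastic's actual protocol adds duplicate tracking by packet ID, rebroadcast delays, and its own wire format):

```python
# Managed-flooding sketch: each node relays a new packet once,
# and the hop limit bounds how far the packet can travel.

def flood(graph, source, hop_limit):
    """Return the set of nodes a packet reaches via flooding."""
    seen = {source}                 # nodes that already relayed it
    frontier = [source]
    hops = hop_limit
    while frontier and hops > 0:
        next_frontier = []
        for node in frontier:
            for neighbor in graph[node]:
                if neighbor not in seen:    # duplicate suppression
                    seen.add(neighbor)
                    next_frontier.append(neighbor)
        frontier = next_frontier
        hops -= 1
    return seen

# A small chain-plus-branch topology of radios
graph = {
    "A": ["B"], "B": ["A", "C"], "C": ["B", "D", "E"],
    "D": ["C"], "E": ["C"],
}
print(sorted(flood(graph, "A", hop_limit=3)))   # ['A', 'B', 'C', 'D', 'E']
```

Note how a hop limit of 1 would reach only "B": nodes beyond direct radio range depend on intermediate nodes relaying, which is what makes the mesh useful off-grid.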

https://github.com/meshtastic/firmware
#typescript #electron #llama #llms #lora #mlx #rlhf #transformers

Transformer Lab is a free, open-source tool that lets you easily work with large language models on your own computer. It offers one-click downloads for popular models like Llama3 and Mistral, fine-tuning across different hardware (including Apple Silicon and GPUs), and features like chatting, training, and evaluating models through a simple interface, saving you from complex setups like CUDA or Python version issues.

https://github.com/transformerlab/transformerlab-app
#jupyter_notebook #chatglm #chatglm3 #gemma_2b_it #glm_4 #internlm2 #llama3 #llm #lora #minicpm #q_wen #qwen #qwen1_5 #qwen2

This guide helps beginners set up and use open-source large language models (LLMs) on Linux or cloud platforms like AutoDL, with step-by-step instructions for environment setup, model deployment, and fine-tuning for models such as LLaMA, ChatGLM, and InternLM. It covers everything from basic installation to advanced techniques like LoRA and distributed fine-tuning, and supports integration with tools like LangChain and online demo deployment. The main benefit is making powerful AI models accessible and easy to use for students, researchers, and anyone interested in experimenting with or customizing LLMs for their own projects.
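
One step that trips up beginners deploying chat models is that each model expects its conversation history rendered into one specific prompt string. A hedged sketch of the ChatML-style template used by models like Qwen (other models such as LLaMA and ChatGLM each have their own template, and real deployments usually call the tokenizer's built-in apply_chat_template instead of hand-rolling this):

```python
# Render a chat history into a ChatML-style prompt string: each
# message is wrapped in <|im_start|>role ... <|im_end|> markers,
# and a trailing assistant header cues the model to answer.

def build_chatml_prompt(messages):
    """Render [{'role': ..., 'content': ...}] as a ChatML prompt."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

Getting this template wrong usually does not crash anything; the model just produces noticeably worse answers, which is why guides like this one spell it out per model.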

https://github.com/datawhalechina/self-llm
#python #agent #agentic_ai #grpo #kimi_ai #llms #lora #qwen #qwen3 #reinforcement_learning #rl

ART is a tool that helps you train smart agents for real-world tasks using reinforcement learning, especially with the GRPO method. The standout feature is RULER, which lets you skip the hard work of designing reward functions by using a large language model to automatically score how well your agent is doing—just describe your task, and RULER takes care of the rest. This makes building and improving agents much faster and easier, works for any task, and often performs as well as or better than hand-crafted rewards. You can install ART with a simple command and start training agents right away, even on your own computer or with cloud resources.
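
The GRPO method ART builds on has a simple core: sample a group of responses per prompt, score each one (this is where RULER's LLM judge plugs in), and compute each response's advantage relative to its group instead of training a separate value network. A minimal sketch of that group-relative step, with illustrative numbers (not ART's actual code):

```python
# GRPO-style advantages: normalize each sampled response's reward
# against the mean and standard deviation of its own group, so
# "better than the group" is positive and "worse" is negative.

def group_relative_advantages(rewards):
    """Advantage of each sample: (r - group mean) / group std."""
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5 or 1.0          # guard against a zero std
    return [(r - mean) / std for r in rewards]

# Four sampled agent trajectories scored by a judge on [0, 1]
rewards = [0.9, 0.5, 0.5, 0.1]
advantages = group_relative_advantages(rewards)
print([round(a, 2) for a in advantages])   # [1.41, 0.0, 0.0, -1.41]
```

Because only relative quality within the group matters, the judge's scores do not need a calibrated scale, which is what makes an automatic LLM scorer like RULER a practical substitute for hand-crafted reward functions.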

https://github.com/OpenPipe/ART