GitHub Trends
See what the GitHub community is most excited about today.

A bot automatically fetches new repositories from https://github.com/trending and sends them to the channel.

Author and maintainer: https://github.com/katursis
#csharp #agent #ai #avalonia #chat #claude #deepseek #gpt_oss #grok #llm #mcp #ollama #openai #rag #ui_automation

Everywhere is an AI assistant that works directly on your screen, with no screenshots or app switching. Press a shortcut and it instantly picks up the current context to help with tasks like fixing errors, summarizing articles, translating text, or improving your writing tone. It supports many AI models and runs on Windows, with macOS and Linux versions coming soon. It also supports multiple languages and has a modern, easy-to-use interface.

https://github.com/DearVa/Everywhere
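
Not Everywhere's actual code, but the loop it describes (press a hotkey, grab the on-screen context, ask a model) fits in a short Python sketch. Everything here is an illustrative assumption: the hotkey combo, the clipboard-based context capture, the Ollama-style endpoint, and the model name.

```python
# Sketch of a "press a shortcut, get help in context" loop.
# Assumptions, not Everywhere's implementation: the hotkey, the
# clipboard capture trick, the local endpoint, and the model name.
import time

import keyboard   # pip install keyboard (may need admin/root privileges)
import pyperclip  # pip install pyperclip
from openai import OpenAI  # pip install openai

# Any OpenAI-compatible endpoint works, e.g. a local Ollama server.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

def assist() -> None:
    # Capture the current selection by simulating Ctrl+C, then read
    # it back from the clipboard.
    keyboard.send("ctrl+c")
    time.sleep(0.1)  # give the OS a moment to update the clipboard
    context = pyperclip.paste()
    if not context.strip():
        return
    reply = client.chat.completions.create(
        model="llama3",  # assumed model name
        messages=[
            {"role": "system",
             "content": "Help with the selected text: fix errors, "
                        "summarize, or translate as appropriate."},
            {"role": "user", "content": context},
        ],
    )
    print(reply.choices[0].message.content)

keyboard.add_hotkey("ctrl+alt+space", assist)
keyboard.wait()  # listen until the process is killed
```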
#python #ai #faiss #gpt_oss #langchain #llama_index #llm #localstorage #offline_first #ollama #privacy #rag #retrieval_augmented_generation #vector_database #vector_search #vectors

LEANN is a tiny, powerful vector database that turns your laptop into a personal AI assistant able to search millions of documents using 97% less storage than traditional systems, without losing accuracy. Instead of persisting every embedding, it stores a compact graph and computes embeddings only when needed, which saves huge amounts of space and keeps your data private on your device. You can search your files, emails, browser history, chat logs, live data from platforms like Slack and Twitter, and even codebases, all locally and without cloud costs.

https://github.com/yichuan-w/LEANN
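
The storage trick is easy to see in miniature. The toy Python sketch below is not LEANN's API; it only illustrates the idea the post describes: persist raw documents plus a small neighbor graph, then embed nodes on demand during a greedy graph search, so a full vector index never has to be stored. The embed() function and the graph are stand-ins.

```python
# Toy version of "store a compact graph, compute embeddings only when
# needed". Not LEANN's real API; embed() and the graph are stand-ins.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in embedding: hash words into a fixed-size vector.
    # A real system would call a sentence-embedding model here.
    v = np.zeros(64)
    for word in text.lower().split():
        v[hash(word) % 64] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

docs = {
    0: "how to reset a forgotten password",
    1: "quarterly sales report for 2024",
    2: "recipe for sourdough bread",
    3: "password manager setup guide",
}
# The neighbor graph is what gets stored on disk, instead of one dense
# embedding vector per document.
graph = {0: [3, 1], 1: [0, 2], 2: [1, 3], 3: [0, 2]}

def search(query: str, start: int = 0) -> int:
    q = embed(query)
    current, best = start, embed(docs[start]) @ q
    while True:
        # Embed only the current node's neighbors, on demand.
        score, node = max((embed(docs[n]) @ q, n) for n in graph[current])
        if score <= best:
            return current  # no neighbor improves the match; stop
        current, best = node, score

print(docs[search("forgot my login password")])
```

The trade is recompute time for storage: embeddings visited during the walk are computed fresh (a real system would cache hot ones), while the graph itself is tiny compared to millions of dense vectors.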
#go #gemma3 #gpt_oss #granite4 #llama #llama3 #llm #on_device_ai #phi3 #qwen3 #qwen3vl #sdk #stable_diffusion #vlm

NexaSDK runs AI models locally on CPUs, GPUs, and NPUs with a single command. It supports the GGUF, MLX, and .nexa formats, offers NPU-first Android and macOS support for fast multimodal (text, image, audio) inference, and exposes an OpenAI-compatible API for easy integration. The result is low-latency, private on-device AI across laptops, phones, and embedded systems: lower cloud costs, less data exposure, and the ability to deploy and test new models immediately on target hardware.

https://github.com/NexaAI/nexa-sdk
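
Because the server speaks the OpenAI protocol, any stock OpenAI client can call it. A minimal sketch, assuming a NexaSDK server is already running locally; the port and model name below are placeholders, so check the repo's docs for the actual serve command and the models you have pulled.

```python
# Minimal call to a local OpenAI-compatible server such as the one
# NexaSDK exposes. Port and model name are assumptions; see the repo.
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="http://127.0.0.1:18181/v1",  # assumed local endpoint
    api_key="not-needed-locally",          # local servers usually ignore this
)

resp = client.chat.completions.create(
    model="qwen3",  # placeholder: use a model you've actually downloaded
    messages=[{"role": "user",
               "content": "Why does on-device inference cut latency?"}],
)
print(resp.choices[0].message.content)
```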