#swift #ai #aichat #chatbot #chatgpt #deepseek #deepseek_r1 #gemma #gemma3 #gguf #llama #llama3 #llm #macos #qwen #qwen2 #qwq #qwq_32b #rag #swiftui
Sidekick is a local-first AI application for macOS that surfaces information from your files, folders, and websites, with the language model running entirely on your device. Because nothing leaves your Mac, your data stays private. You can ask questions like "Did the Aztecs use captured Spanish weapons?" and get answers with references. Sidekick also supports image generation, LaTeX rendering, and more, making it useful for research and work: your data stays secure and relevant information is a query away.
https://github.com/johnbean393/Sidekick
#go #gemma3 #gpt_oss #granite4 #llama #llama3 #llm #on_device_ai #phi3 #qwen3 #qwen3vl #sdk #stable_diffusion #vlm
NexaSDK runs AI models locally on CPUs, GPUs, and NPUs with a single command, supports GGUF/MLX/.nexa formats, and offers NPU-first Android and macOS support for fast, multimodal (text, image, audio) inference, plus an OpenAI‑compatible API for easy integration. This gives you low-latency, private on-device AI across laptops, phones, and embedded systems, reduces cloud costs and data exposure, and lets you deploy and test new models immediately on target hardware for faster development and better user experience.
https://github.com/NexaAI/nexa-sdk
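Because NexaSDK exposes an OpenAI-compatible API, existing OpenAI client code can simply be pointed at the local server. A minimal sketch, assuming a server listening on localhost:8080 and a model named "qwen3" (the actual host, port, and model names depend on your setup):

```python
import json
import urllib.request

# Assumed local endpoint; adjust to match how your NexaSDK server is configured.
BASE_URL = "http://localhost:8080/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completion request for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("qwen3", "Summarize this file in one sentence.")
# urllib.request.urlopen(req) would send it once the local server is running.
```

Since the request shape matches the OpenAI chat-completions format, the same code works against cloud or local backends by changing only the base URL.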