#typescript #ai #azure_openai_api #chat #chatglm #chatgpt #claude #dalle_3 #function_calling #gemini #gpt #gpt_4 #gpt_4_vision #knowledge_base #nextjs #ollama #openai #qwen2 #rag #tts
LobeChat is an open-source, modern chatbot framework that supports ChatGPT and other Large Language Models (LLMs). Its key features include:
- **Multiple Model Providers**: works with OpenAI, Google AI, and more.
- **Visual Recognition**: can recognize and respond to images using models like GPT-4 Vision.
- **Speech Synthesis and Voice Conversation**
- **Text to Image Generation**
- **Plugin System**: extends functionality with plugins for tasks like web searches and document management.
- **One-Click Deployment**
- **Custom Themes and Mobile Support**: offers customizable themes and an optimized mobile experience.
These features make LobeChat highly flexible and user-friendly, allowing you to create a personalized and powerful chatbot with minimal setup.
https://github.com/lobehub/lobe-chat
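The plugin system noted above builds on LLM function calling: the model emits a structured tool call and the app dispatches it to a local handler. A minimal Python sketch of that round trip, where the `web_search` tool name and schema are illustrative stand-ins, not LobeChat's actual plugin API:

```python
import json

# Tool schema in the OpenAI function-calling format, as a chat app like
# LobeChat might register for a web-search plugin. Name and parameters
# here are illustrative, not LobeChat's real schema.
SEARCH_TOOL = {
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web and return a short summary.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}

def web_search(query: str) -> str:
    # Stub; a real plugin would call a search API here.
    return f"Top result for '{query}'"

# Dispatch table mapping tool names to local handlers.
HANDLERS = {"web_search": web_search}

def dispatch_tool_call(tool_call: dict) -> str:
    """Run the handler the model requested and return its result.

    `tool_call` mirrors the shape of a `tool_calls` entry in an
    OpenAI-compatible chat completion response.
    """
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    return HANDLERS[name](**args)

# Simulated model response asking to invoke the tool.
call = {"function": {"name": "web_search",
                     "arguments": json.dumps({"query": "LobeChat plugins"})}}
print(dispatch_tool_call(call))  # prints: Top result for 'LobeChat plugins'
```

The handler's return value would then be sent back to the model as a `tool` message so it can compose the final answer.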
#swift #ai #aichat #chatbot #chatgpt #deepseek #deepseek_r1 #gemma #gemma3 #gguf #llama #llama3 #llm #macos #qwen #qwen2 #qwq #qwq_32b #rag #swiftui
Sidekick is a local-first AI application for Macs that helps you find information from your files, folders, and websites without needing the internet. It's private, so your data stays secure on your device. You can ask questions like "Did the Aztecs use captured Spanish weapons?" and get answers with references. Sidekick also supports image generation, LaTeX rendering, and more. This makes it useful for research and work because it keeps your data safe and provides quick access to relevant information.
https://github.com/johnbean393/Sidekick
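The file-and-folder question answering described above is a retrieval-augmented generation (RAG) loop: index local snippets on-device, retrieve the most relevant one, and hand it to a local LLM. A toy Python sketch of just the retrieval step, with bag-of-words cosine similarity standing in for real embeddings (this is not Sidekick's code):

```python
import math
from collections import Counter

# Toy local-first retrieval: index text snippets and answer a query by
# cosine similarity over bag-of-words vectors. Real apps use proper
# embeddings and a local LLM; this only illustrates the pattern.

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, snippets: list[str]) -> str:
    """Return the snippet most similar to the query."""
    qv = vectorize(query)
    return max(snippets, key=lambda s: cosine(qv, vectorize(s)))

docs = [
    "The Aztecs sometimes reused captured Spanish swords and crossbows.",
    "LaTeX rendering turns markup into typeset equations.",
]
print(retrieve("Did the Aztecs use captured Spanish weapons?", docs))
```

The retrieved snippet is what lets the model answer "with references" while everything stays on the device.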
#jupyter_notebook #chatglm #chatglm3 #gemma_2b_it #glm_4 #internlm2 #llama3 #llm #lora #minicpm #q_wen #qwen #qwen1_5 #qwen2
This guide helps beginners set up and use open-source large language models (LLMs) on Linux or cloud platforms like AutoDL, with step-by-step instructions for environment setup, model deployment, and fine-tuning for models such as LLaMA, ChatGLM, and InternLM. It covers everything from basic installation to advanced techniques like LoRA and distributed fine-tuning, and supports integration with tools like LangChain as well as online demo deployment. Its main benefit is making powerful AI models accessible to students, researchers, and anyone interested in experimenting with or customizing LLMs for their own projects.
https://github.com/datawhalechina/self-llm
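LoRA, one of the fine-tuning techniques the guide covers, freezes the pretrained weight matrix and learns only a low-rank additive update. A minimal NumPy sketch of the forward pass, with illustrative dimensions and scaling (not the guide's actual code):

```python
import numpy as np

# LoRA in a nutshell: the pretrained weight W stays frozen; training
# learns a low-rank update B @ A (rank r << d), scaled by alpha / r.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 16

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (init 0)

def lora_forward(x: np.ndarray) -> np.ndarray:
    """y = W x + (alpha / r) * B A x; with B = 0 this equals the base model."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# At initialization the LoRA branch contributes nothing:
assert np.allclose(lora_forward(x), W @ x)
print("LoRA trainable params:", A.size + B.size, "vs full:", W.size)
```

Because only `A` and `B` are trained, the optimizer state and checkpoints shrink dramatically, which is why the guide can fine-tune large models on modest hardware.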