#python #chatgpt #llms #pyqt #wechat
This tool helps you manage your WeChat data by letting you export and analyze your chat history. It supports WeChat 4.0 and allows you to restore chat interfaces, export data in various formats like HTML, CSV, and Word, and even create visual reports. This means you can keep track of your conversations and memories easily, making it a useful tool for organizing your digital life.
https://github.com/LC044/WeChatMsg
GitHub
GitHub - LC044/WeChatMsg
Contribute to LC044/WeChatMsg development by creating an account on GitHub.
#python #agents #graph #llms #rag
Graphiti helps AI systems handle constantly changing information by building real-time knowledge graphs that track relationships and historical data, allowing them to integrate user interactions, business data, and external sources seamlessly. Unlike traditional methods, it updates information instantly without needing full recomputations, enabling precise historical queries and efficient hybrid searches. This helps AI applications stay context-aware, automate tasks effectively, and manage complex, evolving data with minimal delay.
https://github.com/getzep/graphiti
GitHub
GitHub - getzep/graphiti: Build Real-Time Knowledge Graphs for AI Agents
Build Real-Time Knowledge Graphs for AI Agents. Contribute to getzep/graphiti development by creating an account on GitHub.
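To make that ingest-then-query flow concrete, here is a minimal sketch using Graphiti's Python client. The class and method names (Graphiti, add_episode, search) and the Neo4j connection details follow the project's published examples and are assumptions to verify against the current README.
```python
# Hypothetical sketch: incremental ingest + hybrid search with graphiti-core.
# Assumes a local Neo4j instance; API names follow the project's examples.
import asyncio
from datetime import datetime, timezone

from graphiti_core import Graphiti
from graphiti_core.nodes import EpisodeType


async def main() -> None:
    graphiti = Graphiti("bolt://localhost:7687", "neo4j", "password")
    try:
        await graphiti.build_indices_and_constraints()

        # Each "episode" (a chat turn, a CRM event, a document chunk) is merged
        # into the temporal knowledge graph incrementally, with no full recompute.
        await graphiti.add_episode(
            name="support_ticket_42",
            episode_body="Alice reported that exports fail on files over 1 GB.",
            source=EpisodeType.text,
            source_description="support ticket",
            reference_time=datetime.now(timezone.utc),
        )

        # Hybrid (semantic + keyword + graph) search over the accumulated facts.
        results = await graphiti.search("What problems has Alice reported?")
        for edge in results:
            print(edge.fact)
    finally:
        await graphiti.close()


asyncio.run(main())
```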
#typescript #electron #llama #llms #lora #mlx #rlhf #transformers
Transformer Lab is a free, open-source tool for working with large language models on your own computer. It offers one-click downloads of popular models like Llama 3 and Mistral, fine-tuning across different hardware (including Apple Silicon and GPUs), and chatting, training, and evaluating models through a simple interface, saving you from complex setups like CUDA or Python version issues.
https://github.com/transformerlab/transformerlab-app
GitHub
GitHub - transformerlab/transformerlab-app: Open Source Machine Learning Research Platform designed for frontier AI/ML workflows.…
Open Source Machine Learning Research Platform designed for frontier AI/ML workflows. Local, on-prem, or in the cloud. Open source. - transformerlab/transformerlab-app
#python #agents #ai #ai_agents #llm #llms #mcp #model_context_protocol #python
The Model Context Protocol (MCP) is a standard way for AI agents to connect with different tools and data sources, making it much easier to build powerful AI applications without writing custom code for each integration. The mcp-agent framework uses MCP to let you quickly create agents that can do things like read files, fetch web pages, or manage emails, and you can combine these agents in flexible ways to handle complex tasks. This means you can focus on what you want your AI to do, while mcp-agent takes care of connecting to the right tools and managing the workflow, saving you time and effort.
https://github.com/lastmile-ai/mcp-agent
GitHub
GitHub - lastmile-ai/mcp-agent: Build effective agents using Model Context Protocol and simple workflow patterns
Build effective agents using Model Context Protocol and simple workflow patterns - lastmile-ai/mcp-agent
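As a rough illustration of that agent-plus-MCP-servers shape, here is a sketch modeled on the project's quickstart. The module paths and the attach_llm / generate_str calls are assumptions based on its documented examples and may differ in the current release.
```python
# Hypothetical sketch of an mcp-agent "finder" agent that uses the fetch and
# filesystem MCP servers (configured in the project's mcp_agent.config.yaml).
import asyncio

from mcp_agent.app import MCPApp
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM

app = MCPApp(name="finder_app")


async def main() -> None:
    async with app.run():
        finder = Agent(
            name="finder",
            instruction="You can read local files and fetch URLs; answer questions using their contents.",
            server_names=["fetch", "filesystem"],
        )
        async with finder:
            llm = await finder.attach_llm(OpenAIAugmentedLLM)
            answer = await llm.generate_str("Summarize the README.md in the current directory.")
            print(answer)


asyncio.run(main())
```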
#python #agents #document_search #evaluation #guardrails #llms #optimization #prompts #rag #vector_stores
Ragbits is a set of building blocks for developing and deploying GenAI applications quickly. It lets you swap between many language models, adds guardrails for safer interactions with those models, and connects to various data storage systems. Ragbits also includes tools for managing data and testing prompts, making it easier to build reliable AI applications that stay grounded in up-to-date data and make fewer errors.
https://github.com/deepsense-ai/ragbits
GitHub
GitHub - deepsense-ai/ragbits: Building blocks for rapid development of GenAI applications
Building blocks for rapid development of GenAI applications - GitHub - deepsense-ai/ragbits: Building blocks for rapid development of GenAI applications
#rust #ai #ai_engineering #anthropic #artificial_intelligence #deep_learning #genai #generative_ai #gpt #large_language_models #llama #llm #llmops #llms #machine_learning #ml #ml_engineering #mlops #openai #python #rust
TensorZero is a free, open-source tool that helps you build and improve large language model (LLM) applications using real-world data and feedback. It gives you one simple API to connect with all major LLM providers, collects data from your app's use, and lets you easily test and improve prompts, models, and strategies. You can see how your LLMs perform, compare different options, and make them smarter, faster, and cheaper over time, all while keeping your data private and under your control. This means you get better results with less effort and cost, and your apps keep improving as you use them.
https://github.com/tensorzero/tensorzero
GitHub
GitHub - tensorzero/tensorzero: TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM gateway…
TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM gateway, observability, optimization, evaluation, and experimentation. - tensorzero/tensorzero
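A minimal sketch of calling a model through a locally running TensorZero gateway with its Python client is shown below; the build_http constructor, the provider-prefixed model name, and the inference() signature follow the project's published examples and should be checked against its docs.
```python
# Hypothetical sketch: one inference call routed through the TensorZero gateway.
# Assumes the gateway runs locally on port 3000 with an OpenAI credential configured.
from tensorzero import TensorZeroGateway

with TensorZeroGateway.build_http(gateway_url="http://localhost:3000") as client:
    response = client.inference(
        model_name="openai::gpt-4o-mini",  # provider::model, resolved by the gateway
        input={
            "messages": [
                {"role": "user", "content": "Name one benefit of logging LLM inferences."}
            ]
        },
    )
    print(response)
```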
#jupyter_notebook #ai #llm #llms #multi_modal #openai #python #rag
Retrieval-Augmented Generation (RAG) is a technique that helps improve the accuracy of large language models by fetching relevant information from databases or documents. This approach ensures that the model's responses are based on up-to-date and accurate data, reducing errors and "hallucinations" where the model might provide false information. For users, RAG offers more reliable and trustworthy responses, allowing them to verify the sources used to generate those responses. This method also saves resources by avoiding the need to retrain models with new data.
https://github.com/FareedKhan-dev/all-rag-techniques
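The core loop behind most RAG techniques is small enough to sketch: embed the documents, retrieve the chunks most similar to the question, and prepend them to the prompt. The example below is a generic illustration (not code from this repository) using sentence-transformers for embeddings, with the final LLM call left as a placeholder.
```python
# Generic RAG sketch: embed, retrieve by cosine similarity, build an augmented prompt.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "The Eiffel Tower was completed in 1889.",
    "Mount Everest is the highest mountain above sea level.",
    "Python 3.0 was first released in 2008.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = model.encode(documents, normalize_embeddings=True)


def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question."""
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q  # cosine similarity, since vectors are normalized
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]


question = "When was the Eiffel Tower finished?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# `prompt` would now be sent to any LLM; grounding the answer in retrieved text
# is what reduces hallucinations and lets users check the sources.
print(prompt)
```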
#typescript #ai_gateway #gateway #generative_ai #hacktoberfest #langchain #llama_index #llmops #llms #openai #prompt_engineering #router
The AI Gateway by Portkey lets you connect to over 1600 AI models quickly and securely through one simple API, making it easy to integrate any language, vision, or audio AI model in under two minutes. It ensures fast responses with less than 1ms latency, automatic retries, load balancing, and fallback options to keep your AI apps reliable and scalable. It also offers strong security with role-based access, guardrails, and compliance with standards like SOC2 and GDPR. You can save costs with smart caching and optimize usage without changing your code. This helps you build powerful, cost-effective, and secure AI applications faster and with less hassle.
https://github.com/Portkey-AI/gateway
GitHub
GitHub - Portkey-AI/gateway: A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with 1…
A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with 1 fast & friendly API. - Portkey-AI/gateway
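One common way to use it is to run the gateway locally and point an existing OpenAI-compatible client at it. In the sketch below, the default port 8787 and the x-portkey-provider header are assumptions taken from the project's examples, so confirm them against the current docs.
```python
# Hypothetical sketch: route OpenAI SDK calls through a local Portkey gateway
# (started separately, e.g. with `npx @portkey-ai/gateway`).
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],               # upstream provider key
    base_url="http://localhost:8787/v1",                 # the local AI gateway
    default_headers={"x-portkey-provider": "openai"},    # which provider to route to
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "In one sentence, why use an AI gateway?"}],
)
print(response.choices[0].message.content)
```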
#typescript #12_factor #12_factor_agents #agents #ai #context_window #framework #llms #memory #orchestration #prompt_engineering #rag
The 12-Factor Agents are a set of proven principles for building reliable, scalable, and maintainable AI applications powered by large language models (LLMs). They help you combine the creativity of AI with the stability of traditional software by managing prompts, context, tool calls, error handling, and human collaboration effectively. Instead of relying solely on complex frameworks, you can apply these modular concepts to improve your existing products quickly and reach high-quality AI performance for real users. This approach makes AI software easier to develop, debug, and scale, ensuring it works well in production environments.
https://github.com/humanlayer/12-factor-agents
GitHub
GitHub - humanlayer/12-factor-agents: What are the principles we can use to build LLM-powered software that is actually good enough…
What are the principles we can use to build LLM-powered software that is actually good enough to put in the hands of production customers? - humanlayer/12-factor-agents
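As a flavor of what these principles look like in code, here is a generic illustration (not taken from the repo) of one of them: treating a tool call as plain structured data that the LLM emits, while your own deterministic code owns parsing, routing, and error handling.
```python
# Generic illustration: the LLM proposes a tool call as JSON; ordinary code
# parses it, validates it, and executes it, so the step is easy to log and test.
import json
from dataclasses import dataclass
from typing import Callable


@dataclass
class ToolCall:
    name: str
    arguments: dict


def parse_tool_call(llm_output: str) -> ToolCall:
    """Assumes the model was prompted to reply with {"name": ..., "arguments": {...}}."""
    data = json.loads(llm_output)
    return ToolCall(name=data["name"], arguments=data["arguments"])


TOOLS: dict[str, Callable[..., str]] = {
    "get_weather": lambda city: f"Sunny in {city}",
}


def run_step(llm_output: str) -> str:
    call = parse_tool_call(llm_output)
    if call.name not in TOOLS:
        return f"Unknown tool: {call.name}"        # explicit, testable error handling
    return TOOLS[call.name](**call.arguments)      # deterministic code owns execution


print(run_step('{"name": "get_weather", "arguments": {"city": "Berlin"}}'))
```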
#jupyter_notebook #artificial_intelligence #book #large_language_models #llm #llms #oreilly #oreilly_books
You can learn how to use Large Language Models (LLMs) effectively through the book *Hands-On Large Language Models* by Jay Alammar and Maarten Grootendorst. This book uses nearly 300 custom illustrations to explain key concepts and practical tools for working with LLMs, including tokenization, transformers, prompt engineering, fine-tuning, and advanced text generation. It also provides runnable code examples in Google Colab, making it easy to practice and apply what you learn. This resource helps you understand and build your own LLM applications confidently, saving you time and effort in mastering complex AI technology. It’s highly recommended for anyone wanting hands-on experience with LLMs.
https://github.com/HandsOnLLM/Hands-On-Large-Language-Models
GitHub
GitHub - HandsOnLLM/Hands-On-Large-Language-Models: Official code repo for the O'Reilly Book - "Hands-On Large Language Models"
Official code repo for the O'Reilly Book - "Hands-On Large Language Models" - HandsOnLLM/Hands-On-Large-Language-Models
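For a taste of the kind of hands-on step the book starts with, here is a small, generic example of tokenization and text generation with the transformers library; it is an illustration in that spirit, not code copied from the book's notebooks.
```python
# Generic illustration: inspect tokenization and generate text with a small model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # tiny model so the example runs on CPU
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "A language model is"
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]))  # how the prompt was split into tokens

output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```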
#go #databases #genai #llms #mcp
The MCP Toolbox for Databases helps developers connect AI agents to databases more easily and securely. It simplifies the process by handling complex tasks like connection pooling and authentication, allowing you to integrate databases with AI agents using minimal code. This toolbox supports the Model Context Protocol (MCP), which standardizes how AI interacts with external tools. By using MCP Toolbox, you can automate database tasks, query databases using natural language, and generate context-aware code, all of which save time and improve development efficiency.
https://github.com/googleapis/genai-toolbox
GitHub
GitHub - googleapis/genai-toolbox: MCP Toolbox for Databases is an open source MCP server for databases.
MCP Toolbox for Databases is an open source MCP server for databases. - googleapis/genai-toolbox
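A rough sketch of the client side is below: the Toolbox server runs separately with your database config, and the agent just loads named tools from it. The toolbox-core package name, ToolboxClient class, and load_toolset method are assumptions based on the project's client docs, so verify them before relying on this.
```python
# Hypothetical sketch: fetch database tools from a running Toolbox server.
import asyncio

from toolbox_core import ToolboxClient


async def main() -> None:
    # The server (started separately with a tools.yaml) handles connection
    # pooling and authentication; the agent only sees named, typed tools.
    async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
        tools = await toolbox.load_toolset("my-toolset")
        for tool in tools:
            print(tool)  # each entry is a callable you can hand to your agent framework


asyncio.run(main())
```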
#python #agent #agentic_ai #grpo #kimi_ai #llms #lora #qwen #qwen3 #reinforcement_learning #rl
ART is a tool that helps you train smart agents for real-world tasks using reinforcement learning, especially with the GRPO method. The standout feature is RULER, which lets you skip the hard work of designing reward functions by using a large language model to automatically score how well your agent is doing—just describe your task, and RULER takes care of the rest. This makes building and improving agents much faster and easier, works for any task, and often performs as well as or better than hand-crafted rewards. You can install ART with a simple command and start training agents right away, even on your own computer or with cloud resources.
https://github.com/OpenPipe/ART
GitHub
GitHub - OpenPipe/ART: Agent Reinforcement Trainer: train multi-step agents for real-world tasks using GRPO. Give your agents on…
Agent Reinforcement Trainer: train multi-step agents for real-world tasks using GRPO. Give your agents on-the-job training. Reinforcement learning for Qwen2.5, Qwen3, Llama, and more! - OpenPipe/ART
#python #agent #agentic #agentic_ai #agents #agents_sdk #ai #ai_agents #aiagentframework #genai #genai_chatbot #llm #llms #multi_agent #multi_agent_systems #multi_agents #multi_agents_collaboration
The Agent Development Kit (ADK) is an open-source Python toolkit that helps you easily build, test, and deploy smart AI agents, from simple helpers to complex multi-agent systems. It lets you write agent logic in Python, use many built-in or custom tools, and organize multiple agents to work together. You can deploy agents anywhere, including Google Cloud, and evaluate their performance with built-in tools. ADK supports flexible workflows and works with various AI models, not just Google’s. This means you get full control and flexibility to create powerful AI applications that fit your needs, speeding up development and making it easier to manage AI projects.
https://github.com/google/adk-python
GitHub
GitHub - google/adk-python: An open-source, code-first Python toolkit for building, evaluating, and deploying sophisticated AI…
An open-source, code-first Python toolkit for building, evaluating, and deploying sophisticated AI agents with flexibility and control. - google/adk-python
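A sketch in the style of the ADK quickstart is shown below; the Agent class, the google_search tool, and the Gemini model name follow the project's README example and may change between releases.
```python
# Sketch of a single ADK agent with one built-in tool.
from google.adk.agents import Agent
from google.adk.tools import google_search

root_agent = Agent(
    name="search_assistant",
    model="gemini-2.0-flash",  # other models can be plugged in via ADK's model wrappers
    instruction="You are a helpful assistant. Use Google Search when you need fresh facts.",
    description="An assistant that can search the web.",
    tools=[google_search],
)

# Running `adk web` (or `adk run`) from the project directory serves this agent
# with the built-in dev UI for interactive testing.
```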
#typescript #agentic_ai #agentic_workflow #agents #ai #approval_process #escalation_policy #function_calling #human_as_tool #human_in_the_loop #humanlayer #llm #llms
HumanLayer helps you safely use AI agents to automate important tasks by ensuring a human always reviews high-risk actions, like sending emails or changing private data. This is crucial because AI can make mistakes or create wrong outputs, and some tasks are too sensitive to trust AI alone. HumanLayer’s tools guarantee human oversight in these cases, so you get the benefits of AI automation without risking errors in critical work. This makes AI more reliable and useful for automating complex workflows while keeping control and safety in your hands.
https://github.com/humanlayer/humanlayer
GitHub
GitHub - humanlayer/humanlayer: The best way to get AI coding agents to solve hard problems in complex codebases.
The best way to get AI coding agents to solve hard problems in complex codebases. - humanlayer/humanlayer
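In practice that oversight is added per function. Here is a hedged sketch using the HumanLayer Python SDK; the HumanLayer class and require_approval decorator follow the project's examples, and the email helper itself is hypothetical.
```python
# Hypothetical sketch: gate a high-risk tool behind human approval.
from humanlayer import HumanLayer

hl = HumanLayer()  # credentials are typically read from the environment


@hl.require_approval()
def send_marketing_email(to: str, subject: str, body: str) -> str:
    """Runs only after a human approves this specific call (e.g. via Slack or email)."""
    # ... actually send the email here ...
    return f"sent to {to}"


# Register send_marketing_email as a tool with your agent framework as usual;
# the decorator pauses execution until the call is approved or rejected.
```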
#python #agent #ai #ai_coding #claude #claude_code #language_server #llms #mcp_server #programming #vibe_coding
Serena is a free, open-source toolkit that turns large language models (LLMs) into powerful coding agents able to work directly on your codebase with IDE-like precision. It uses semantic code analysis to understand code structure and symbols, enabling efficient code search and editing without reading entire files. Serena supports many programming languages and integrates flexibly with various LLMs and development environments via the Model Context Protocol (MCP). This means you can automate complex coding tasks, improve productivity, and reduce costs without subscriptions, making your coding workflow faster and smarter.
https://github.com/oraios/serena
GitHub
GitHub - oraios/serena: A powerful coding agent toolkit providing semantic retrieval and editing capabilities (MCP server & other…
A powerful coding agent toolkit providing semantic retrieval and editing capabilities (MCP server & other integrations) - oraios/serena
#python #agent #llms
AutoAgent lets you create and use powerful AI agents easily by just using natural language—no coding needed. It supports many large language models (LLMs) like OpenAI and Anthropic, and performs as well as top research AI systems on benchmarks. You can build tools, agents, and workflows quickly, manage data efficiently with its built-in vector database, and interact flexibly through different modes. It’s lightweight, customizable, and cost-effective, making it a personal AI assistant that helps automate complex tasks simply and efficiently. This saves you time and technical effort while giving you advanced AI capabilities.
https://github.com/HKUDS/AutoAgent
GitHub
GitHub - HKUDS/AutoAgent: "AutoAgent: Fully-Automated and Zero-Code LLM Agent Framework"
"AutoAgent: Fully-Automated and Zero-Code LLM Agent Framework" - HKUDS/AutoAgent
#python #llms #mlx
MLX LM is a Python tool that helps you run and fine-tune large language models (LLMs) efficiently on Apple Silicon Macs. It connects easily to thousands of models on Hugging Face, supports model quantization to save memory, and allows distributed training. You can generate text or chat with models via simple commands or Python code. It also offers features like prompt caching and memory optimization for handling long texts, making it faster and less resource-heavy. This means you can run powerful AI models locally on your Mac without needing expensive cloud services, saving cost and improving speed.
https://github.com/ml-explore/mlx-lm
GitHub
GitHub - ml-explore/mlx-lm: Run LLMs with MLX
Run LLMs with MLX. Contribute to ml-explore/mlx-lm development by creating an account on GitHub.
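A minimal local-generation sketch with mlx-lm looks like this; the load and generate helpers come from the project's README, and the quantized community model name is just one example you could swap for any compatible model on Hugging Face.
```python
# Minimal sketch: run a 4-bit quantized model locally on Apple Silicon.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

prompt = "Explain in one sentence why quantization reduces memory use."
text = generate(model, tokenizer, prompt=prompt, max_tokens=100, verbose=True)
print(text)
```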
#python #agent_framework #data_analysis #deep_research #deep_search #llms #multi_agent_system #nlp #public_opinion_analysis #python3 #sentiment_analysis
You can use the Weibo Public Opinion Analysis System (微舆) to automatically analyze public opinion across more than 30 major social media platforms and millions of comments. AI agents work together to monitor, search, analyze text and video, and generate detailed reports from real-time data. The system supports easy setup, custom models, and integration with your own databases, helping you understand public sentiment and trends and make better decisions. It offers continuous monitoring, deep multi-angle analysis, and flexible report generation, all accessible by simply asking questions in a chat. This saves you time and gives clear insight into public-opinion dynamics.
https://github.com/666ghj/BettaFish
GitHub
GitHub - 666ghj/BettaFish: 微舆 (BettaFish): a multi-agent public-opinion analysis assistant anyone can use. It breaks information cocoons, reconstructs the full picture of public opinion, predicts future trends, and supports decision-making. Implemented from scratch, with no framework dependencies.
微舆 (BettaFish): a multi-agent public-opinion analysis assistant anyone can use. It breaks information cocoons, reconstructs the full picture of public opinion, predicts future trends, and supports decision-making. Implemented from scratch, with no framework dependencies. - 666ghj/BettaFish
#typescript #agent #agentic #agentic_ai #agents #agents_sdk #ai #ai_agents #aiagentframework #genai #genai_chatbot #llm #llms #multi_agent #multi_agent_systems #multi_agents #multi_agents_collaboration
Agent Development Kit (ADK) for TypeScript is an open-source toolkit to build, test, and deploy advanced AI agents with full control in code. Key features include rich tools like Google Search, custom functions, and multi-agent hierarchies for scalable apps, plus a dev UI for easy debugging. Install it via npm install @google/adk. You benefit by creating flexible, versioned AI agents that integrate tightly with Google Cloud, run anywhere from laptop to cloud, and speed up development like regular software.
https://github.com/google/adk-js
GitHub
GitHub - google/adk-js
Contribute to google/adk-js development by creating an account on GitHub.
#python #docker #fastapi #kbqa #kgqa #llms #neo4j #rag #vue
Yuxi-Know (语析) is a free, open-source platform built with LangGraph, Vue.js, FastAPI, and LightRAG to create smart agents using RAG knowledge bases and knowledge graphs. The latest v0.4.0-beta (Dec 2025) adds file uploads, multimodal image support, mind maps from files, evaluation tools, dark mode, and better graph visuals. It helps you quickly build and deploy custom AI agents for Q&A, analysis, and searches without starting from scratch, saving time and effort on development.
https://github.com/xerrors/Yuxi-Know
GitHub
GitHub - xerrors/Yuxi-Know: A knowledge-graph agent platform built on a LightRAG knowledge base. LangChain v1 + Vue + FastAPI. Integrates mainstream large models, LightRAG, MinerU, PP-Structure, Neo4j, web search, and tool calling.
A knowledge-graph agent platform built on a LightRAG knowledge base. LangChain v1 + Vue + FastAPI. Integrates mainstream large models, LightRAG, MinerU, PP-Structure, Neo4j, web search, and tool calling. - xerrors/Yuxi-Know