GitHub Trends
See what the GitHub community is most excited about today.

A bot automatically fetches new repositories from https://github.com/trending and sends them to the channel.

Author and maintainer: https://github.com/katursis
#typescript #agent #ai #anthropic #backend_as_a_service #chatbot #gemini #genai #gpt #gpt_4 #llama3 #llm #llmops #nextjs #openai #orchestration #python #rag #workflow #workflows

Dify is an open-source platform for developing AI applications, especially those using Large Language Models (LLMs). It offers a user-friendly interface to build and test AI workflows, integrate various LLMs, and manage models. Key features include a visual workflow builder, comprehensive model support (including GPT, Mistral, and more), a prompt IDE for crafting and testing prompts, RAG pipeline capabilities for document ingestion and retrieval, and agent capabilities with pre-built tools like Google Search and DALL·E.

Using Dify, you can quickly move from prototyping to production with features like observability to monitor application performance and backend-as-a-service for easy integration into your business logic. You can deploy Dify via their cloud service or self-host it in your environment. This makes it highly versatile and beneficial for developers looking to leverage AI efficiently in their projects.
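Once an app is built in Dify, it can be called over HTTP. Here is a minimal sketch of a blocking chat request, assuming a chat app created in Dify and its app API key; the endpoint shown is the cloud default (self-hosted instances expose the same routes on their own host), and field names follow Dify's chat-messages API, so check the docs for your version:

```python
import requests

# Placeholder values: replace with your Dify app's API key and base URL.
API_KEY = "app-xxxxxxxx"
BASE_URL = "https://api.dify.ai/v1"

resp = requests.post(
    f"{BASE_URL}/chat-messages",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "inputs": {},                 # variables defined in your app's prompt
        "query": "What can you do?",  # the end-user's message
        "response_mode": "blocking",  # or "streaming" for SSE chunks
        "user": "demo-user",          # any stable end-user identifier
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["answer"])
```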

https://github.com/langgenius/dify
👍1
#python #amd #cuda #gpt #inference #inferentia #llama #llm #llm_serving #llmops #mlops #model_serving #pytorch #rocm #tpu #trainium #transformer #xpu

vLLM is a library that makes it easy, fast, and cheap to serve large language models (LLMs). It is designed for speed, with features like efficient memory management, continuous batching, and optimized CUDA kernels. vLLM supports many popular models and runs on a range of hardware, including NVIDIA GPUs, AMD GPUs (via ROCm), CPUs, TPUs, and AWS Trainium/Inferentia. It also integrates seamlessly with Hugging Face models and supports multiple decoding algorithms. This makes it flexible and easy to use for anyone who needs to serve LLMs, whether for research or production. You can install vLLM with `pip install vllm` and find detailed documentation on the project website.
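A minimal offline-inference sketch using vLLM's Python API; the model ID is just a small demo choice, and any Hugging Face model supported by vLLM works:

```python
# pip install vllm  (requires a supported accelerator, e.g. an NVIDIA GPU)
from vllm import LLM, SamplingParams

# Load any supported Hugging Face model; opt-125m keeps the example small.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() batches prompts internally for high throughput.
outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```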

https://github.com/vllm-project/vllm
#python #ai #aws #developer_tools #gpt_4 #llm #llmops

Phidata is a tool that helps you build smart AI agents with memory, knowledge, tools, and reasoning. You can use it to create agents that can search the web, get financial data, or even write and run Python code. You can install it with a single command: `pip install -U phidata`. Here's how it benefits you:
- **Versatile Agents**: Agents can use reasoning to solve problems step-by-step and access knowledge bases to provide accurate information.
- **User-Friendly Interface**: Built-in monitoring and debugging tools help you track and fix issues with your agents.

Overall, Phidata makes it easy to create and manage intelligent AI agents that can perform complex tasks efficiently.
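As a sketch of what the agent API looks like, here is a minimal web-search agent modeled on phidata's README examples; it assumes `OPENAI_API_KEY` is set in the environment and the `duckduckgo-search` package is installed, and exact module paths may differ across versions:

```python
# pip install -U phidata duckduckgo-search openai
from phi.agent import Agent
from phi.model.openai import OpenAIChat
from phi.tools.duckduckgo import DuckDuckGo

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    tools=[DuckDuckGo()],   # gives the agent web search
    show_tool_calls=True,   # print tool invocations for debugging
    markdown=True,
)

# The agent decides when to call the search tool while answering.
agent.print_response("What's happening in AI this week?", stream=True)
```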

https://github.com/phidatahq/phidata
#jupyter_notebook #agent_based_framework #agent_oriented_programming #agentic #agentic_agi #chat #chat_application #chatbot #chatgpt #gpt #gpt_35_turbo #gpt_4 #llm_agent #llm_framework #llm_inference #llmops

AutoGen is a tool that helps you build AI systems where agents can work together and perform tasks on their own or with human help. It makes it easier to create scalable, distributed, and resilient AI applications. Here are the key benefits:
- **Asynchronous Messaging**: Agents can talk to each other using asynchronous messages.
- **Scalable and Extensible**: You can add your own agents, tools, and models to the system.
- **Multi-Language Support**: Agents can be written in Python or .NET.
- **Observability**: It includes features to track and debug how the agents interact.

Using AutoGen, you can develop and test your AI systems locally and then move them to a cloud environment as needed. This makes it simpler to build and manage advanced AI projects.
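A minimal two-agent sketch in the classic `pyautogen` style (the newer event-driven packages expose a different API); the model name is a placeholder and `OPENAI_API_KEY` is assumed to be set:

```python
# pip install pyautogen
import os
from autogen import AssistantAgent, UserProxyAgent

llm_config = {"config_list": [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}]}

assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",  # fully autonomous; "ALWAYS" keeps a human in the loop
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# The proxy sends the task, then executes any code the assistant writes back.
user_proxy.initiate_chat(
    assistant,
    message="Write and run Python code that prints the first 10 primes.",
)
```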

https://github.com/microsoft/autogen
#python #ai_gateway #anthropic #azure_openai #bedrock #gateway #langchain #llm #llm_gateway #llmops #openai #openai_proxy #vertex_ai

LiteLLM is a tool that helps you use different AI models from various providers like OpenAI, Azure, and Huggingface in a simple way. Here's how it benefits you:
- **Unified Interface**: You can call any AI model using a consistent format, making it easier to switch between different providers.
- **Consistent Output**: Text responses are always available in the same place in the response object, regardless of provider.
- **Retry and Fallback Logic**: Requests can retry and fall back across multiple deployments (e.g., Azure/OpenAI).
- **Budgets and Rate Limits**: You can set budgets and rate limits for your projects, helping you manage costs and usage efficiently.
- **Streaming and Async**: It supports streaming responses and asynchronous calls, which can improve performance.
- **Logging and Observability**: You can easily log data to tools like Lunary, Langfuse, and Slack, helping you monitor and analyze your AI usage.

Overall, LiteLLM simplifies working with multiple AI providers, makes your code cleaner, and helps you manage resources better.
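A minimal sketch of the unified call format; the model names are examples, and each provider's key (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, ...) is assumed to be set in the environment:

```python
# pip install litellm
import litellm

messages = [{"role": "user", "content": "Say hello in one word."}]

# Same call shape for every provider; only the model string changes.
for model in ["gpt-4o-mini", "claude-3-haiku-20240307"]:
    resp = litellm.completion(model=model, messages=messages)
    # Output is normalized to the OpenAI schema regardless of provider.
    print(model, "->", resp.choices[0].message.content)
```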

https://github.com/BerriAI/litellm
#rust #agent #ai #artificial_intelligence #automation #generative_ai #large_language_model #llm #llmops #scalable_ai

Rig is a Rust library that helps you build apps using Large Language Models (LLMs) like OpenAI and Cohere. It makes it easy to integrate these models into your application with minimal code. Rig supports various vector stores like MongoDB and Neo4j, and it provides simple but powerful tools to work with LLMs. To get started, you can add Rig to your project using `cargo add rig-core` and follow the examples provided. This library is constantly improving, so your feedback is valuable. Using Rig can save you time and effort by providing a straightforward way to use LLMs in your projects.
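A minimal Rust sketch following the agent example in Rig's README; the builder names are taken from the project's published examples and may shift between versions, and `OPENAI_API_KEY` is assumed to be set:

```rust
// cargo add rig-core tokio --features tokio/macros,tokio/rt-multi-thread
use rig::{completion::Prompt, providers::openai};

#[tokio::main]
async fn main() {
    // Reads OPENAI_API_KEY from the environment.
    let client = openai::Client::from_env();

    // Build a simple agent around a chat model.
    let agent = client
        .agent("gpt-4")
        .preamble("You are a concise assistant.")
        .build();

    let answer = agent
        .prompt("What is Rig, in one sentence?")
        .await
        .expect("prompt failed");
    println!("{answer}");
}
```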

https://github.com/0xPlaygrounds/rig