#python #ai #automl #data_science #deep_learning #devops_tools #hacktoberfest #llm #llmops #machine_learning #metadata_tracking #ml #mlops #pipelines #production_ready #pytorch #tensorflow #workflow #zenml
https://github.com/zenml-io/zenml
GitHub - zenml-io/zenml: ZenML 🙏: One AI Platform from Pipelines to Agents. https://zenml.io.
#python #ai #ai_alignment #ai_safety #ai_test #ai_testing #artificial_intelligence #cicd #explainable_ai #llmops #machine_learning #machine_learning_testing #ml #ml_safety #ml_test #ml_testing #ml_validation #mlops #model_testing #model_validation #quality_assurance
https://github.com/Giskard-AI/giskard
GitHub - Giskard-AI/giskard-oss: 🐢 Open-Source Evaluation & Testing library for LLM Agents
#jupyter_notebook #ai #aihub #argo #automl #gpt #inference #kubeflow #kubernetes #llmops #mlops #notebook #pipeline #pytorch #spark #vgpu #workflow
https://github.com/tencentmusic/cube-studio
GitHub - tencentmusic/cube-studio: cube studio is an open-source, cloud-native, one-stop machine learning / deep learning / large-model AI platform: full MLOps pipeline coverage, a compute-rental platform, online notebook development, drag-and-drop pipeline orchestration, multi-node multi-GPU distributed training, hyperparameter search, VGPU-virtualized inference serving, edge computing, an annotation platform with automated labeling, SFT fine-tuning / reward-model / reinforcement-learning training for large models such as DeepSeek, multi-node large-model inference with vLLM/Ollama/MindIE, private knowledge bases, an AI model marketplace...
#typescript #agent #ai #anthropic #backend_as_a_service #chatbot #gemini #genai #gpt #gpt_4 #llama3 #llm #llmops #nextjs #openai #orchestration #python #rag #workflow #workflows
Dify is an open-source platform for developing AI applications, especially those using Large Language Models (LLMs). It offers a user-friendly interface to build and test AI workflows, integrate various LLMs, and manage models. Key features include a visual workflow builder, comprehensive model support (including GPT, Mistral, and more), a prompt IDE for crafting and testing prompts, RAG pipeline capabilities for document ingestion and retrieval, and agent capabilities with pre-built tools like Google Search and DALL·E.
Using Dify, you can quickly move from prototyping to production with features like observability to monitor application performance and backend-as-a-service for easy integration into your business logic. You can deploy Dify via their cloud service or self-host it in your environment. This makes it highly versatile and beneficial for developers looking to leverage AI efficiently in their projects.
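Since published Dify apps expose a REST API, you can call one from any HTTP client. A minimal sketch using Dify's documented `chat-messages` endpoint; the base URL and app key are placeholders for a self-hosted instance:

```python
import requests

# Placeholder values -- substitute your own instance URL and app-scoped key.
DIFY_API_BASE = "http://localhost/v1"
DIFY_API_KEY = "app-..."

resp = requests.post(
    f"{DIFY_API_BASE}/chat-messages",
    headers={"Authorization": f"Bearer {DIFY_API_KEY}"},
    json={
        "inputs": {},                  # app variables, if your app defines any
        "query": "What can you do?",   # the end-user message
        "response_mode": "blocking",   # return the full answer in one response
        "user": "user-123",            # stable end-user identifier
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["answer"])
```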
https://github.com/langgenius/dify
GitHub - langgenius/dify: Production-ready platform for agentic workflow development.
#python #amd #cuda #gpt #inference #inferentia #llama #llm #llm_serving #llmops #mlops #model_serving #pytorch #rocm #tpu #trainium #transformer #xpu
vLLM is a library that makes it easy, fast, and cheap to use large language models (LLMs). It is designed to be fast with features like efficient memory management, continuous batching, and optimized CUDA kernels. vLLM supports many popular models and can run on various hardware including NVIDIA GPUs, AMD CPUs and GPUs, and more. It also offers seamless integration with Hugging Face models and supports different decoding algorithms. This makes it flexible and easy to use for anyone needing to serve LLMs, whether for research or other applications. You can install vLLM easily with `pip install vllm` and find detailed documentation on their website.
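A minimal offline-inference sketch using vLLM's documented `LLM`/`SamplingParams` API; the model id is just a small example:

```python
from vllm import LLM, SamplingParams

# Any supported Hugging Face model id works; opt-125m keeps the demo small.
llm = LLM(model="facebook/opt-125m")
sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["The capital of France is"], sampling)
for output in outputs:
    print(output.outputs[0].text)
```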
https://github.com/vllm-project/vllm
GitHub - vllm-project/vllm: A high-throughput and memory-efficient inference and serving engine for LLMs
#python #ai #aws #developer_tools #gpt_4 #llm #llmops #python
Phidata is a tool that helps you build smart AI agents with memory, knowledge, tools, and reasoning. You can use it to create agents that can search the web, get financial data, or even write and run Python code. Here’s how it benefits you:
- **Easy installation**: install Phidata with a single command, `pip install -U phidata` (a minimal agent is sketched below).
- **Versatile agents**: agents can use reasoning to solve problems step-by-step and access knowledge bases to provide accurate information.
- **Built-in monitoring**: monitoring and debugging tools help you track and fix issues with your agents.
Overall, Phidata makes it easy to create and manage intelligent AI agents that can perform complex tasks efficiently.
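A minimal web-search agent sketch, following the Agent pattern from phidata's docs; the module paths are assumptions and may have shifted since the project was renamed to Agno:

```python
from phi.agent import Agent
from phi.model.openai import OpenAIChat
from phi.tools.duckduckgo import DuckDuckGo

# A web-search agent; assumes OPENAI_API_KEY is set in the environment.
agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    tools=[DuckDuckGo()],       # gives the agent a web-search tool
    show_tool_calls=True,       # print tool invocations for debugging
    markdown=True,
)
agent.print_response("What's happening in AI this week?")
```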
https://github.com/phidatahq/phidata
GitHub - agno-agi/agno: The unified stack for multi-agent systems.
#jupyter_notebook #agent_based_framework #agent_oriented_programming #agentic #agentic_agi #chat #chat_application #chatbot #chatgpt #gpt #gpt_35_turbo #gpt_4 #llm_agent #llm_framework #llm_inference #llmops
AutoGen is a tool that helps you build AI systems where agents can work together and perform tasks on their own or with human help. It makes it easier to create scalable, distributed, and resilient AI applications. Here are the key benefits:
- **Asynchronous messaging**: agents talk to each other using asynchronous messages (see the sketch below).
- **Extensible**: you can add your own agents, tools, and models to the system.
- **Observable**: built-in features let you track and debug how the agents interact.
Using AutoGen, you can develop and test your AI systems locally and then move them to a cloud environment as needed. This makes it simpler to build and manage advanced AI projects.
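A minimal sketch of the async API via the `autogen-agentchat` package; the names follow the 0.4-era quickstart and may differ in other releases, so treat them as assumptions:

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    # Model client for the agent; assumes OPENAI_API_KEY is set.
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent = AssistantAgent(name="assistant", model_client=model_client)
    # run() exchanges asynchronous messages until the task completes.
    result = await agent.run(task="List two uses for multi-agent systems.")
    print(result.messages[-1].content)

asyncio.run(main())
```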
https://github.com/microsoft/autogen
GitHub - microsoft/autogen: A programming framework for agentic AI
#python #ai_gateway #anthropic #azure_openai #bedrock #gateway #langchain #llm #llm_gateway #llmops #openai #openai_proxy #vertex_ai
LiteLLM is a tool that helps you use different AI models from various providers like OpenAI, Azure, and Huggingface in a simple way. Here’s how it benefits you:
- **Consistent format**: call any AI model using the same call shape, making it easy to switch between providers (see the sketch below).
- **Budgets and rate limits**: set budgets and rate limits per project, helping you manage costs and usage efficiently.
- **Retry and fallback logic**: along with streaming responses and asynchronous calls, this improves reliability and performance.
- **Logging and observability**: log data to tools like Lunary, Langfuse, and Slack, helping you monitor and analyze your AI usage.
Overall, LiteLLM simplifies working with multiple AI providers, makes your code cleaner, and helps you manage resources better.
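A minimal sketch of the consistent call shape; assumes `OPENAI_API_KEY` and `ANTHROPIC_API_KEY` are set, and the model ids are just examples:

```python
import litellm

messages = [{"role": "user", "content": "Write a haiku about observability."}]

# One call shape for every provider; only the model string changes.
for model in ("gpt-4o-mini", "claude-3-5-sonnet-20240620"):
    response = litellm.completion(model=model, messages=messages)
    # Output lives at the same place regardless of provider.
    print(model, "->", response.choices[0].message.content)
```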
https://github.com/BerriAI/litellm
GitHub - BerriAI/litellm: Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, loadbalancing and logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthr...
#rust #agent #ai #artificial_intelligence #automation #generative_ai #large_language_model #llm #llmops #rust #scalable_ai
Rig is a Rust library that helps you build apps using Large Language Models (LLMs) like OpenAI and Cohere. It makes it easy to integrate these models into your application with minimal code. Rig supports various vector stores like MongoDB and Neo4j, and it provides simple but powerful tools to work with LLMs. To get started, you can add Rig to your project using `cargo add rig-core` and follow the examples provided. This library is constantly improving, so your feedback is valuable. Using Rig can save you time and effort by providing a straightforward way to use LLMs in your projects.
https://github.com/0xPlaygrounds/rig
GitHub - 0xPlaygrounds/rig: ⚙️🦀 Build modular and scalable LLM Applications in Rust
#typescript #agent_monitoring #analytics #evaluation #gpt #langchain #large_language_models #llama_index #llm #llm_cost #llm_evaluation #llm_observability #llmops #monitoring #open_source #openai #playground #prompt_engineering #prompt_management #ycombinator
Helicone is an all-in-one, open-source platform for developing and monitoring applications built on Large Language Models (LLMs). It allows you to integrate with various LLM providers like OpenAI, Anthropic, and more with just one line of code. You can observe and debug your model's performance, analyze metrics such as cost and latency, and fine-tune your models easily. The platform also offers a playground to test and iterate on prompts and sessions, and it supports prompt management and automatic evaluations. Helicone is enterprise-ready, compliant with SOC 2 and GDPR, and offers a generous free tier of 100k requests per month. This makes it easier to manage and optimize your LLM projects efficiently.
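The one-line integration amounts to pointing your OpenAI client at Helicone's proxy. A minimal sketch, assuming a `HELICONE_API_KEY` environment variable:

```python
import os

from openai import OpenAI

# Route OpenAI traffic through Helicone's proxy: swap base_url and
# attach the Helicone auth header; everything else stays the same.
client = OpenAI(
    base_url="https://oai.helicone.ai/v1",
    default_headers={"Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}"},
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello from Helicone"}],
)
print(resp.choices[0].message.content)
```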
https://github.com/Helicone/helicone
GitHub - Helicone/helicone: 🧊 Open source LLM observability platform. One line of code to monitor, evaluate, and experiment. YC W23 🍓
#other #chatbot #hugging_face #llm #llm_local #llm_prompting #llm_security #llmops #machine_learning #open_ai #pathway #rag #real_time #retrieval_augmented_generation #vector_database #vector_index
Pathway's AI Pipelines help you quickly create and deploy AI applications with high accuracy. These pipelines use the latest knowledge from your data sources and offer ready-to-deploy templates for large language models. You can test these apps on your own machine and deploy them on cloud services like GCP, AWS, or Azure, or on-premises. The apps connect to various data sources such as file systems, Google Drive, and databases, and they include built-in data indexing for efficient searches. This makes it easy to extract and organize data from documents in real-time, reducing the need for separate infrastructure setups. This simplifies the process of building and maintaining AI applications, saving you time and effort.
https://github.com/pathwaycom/llm-app
GitHub - pathwaycom/llm-app: Ready-to-run cloud templates for RAG, AI pipelines, and enterprise search with live data. 🐳Docker-friendly. ⚡Always in sync with Sharepoint, Google Drive, S3, Kafka, PostgreSQL, real-time data APIs,...
#python #cloud_native #cncf #deep_learning #docker #fastapi #framework #generative_ai #grpc #jaeger #kubernetes #llmops #machine_learning #microservice #mlops #multimodal #neural_search #opentelemetry #orchestration #pipeline #prometheus
Jina-serve is a tool that helps you build and deploy AI services easily. It supports major machine learning frameworks and allows you to scale your services from local development to production quickly. You can use it to create AI services that communicate via gRPC, HTTP, and WebSockets. It has features like built-in Docker integration, one-click cloud deployment, and support for Kubernetes and Docker Compose, making it easy to manage and scale your AI applications. This makes it simpler for you to focus on the core logic of your AI projects without worrying about the technical details of deployment and scaling.
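A minimal sketch of the Executor/Deployment pattern, assuming a recent jina release with docarray-typed Executors; the `Upper` class and port are illustrative:

```python
from docarray import DocList
from docarray.documents import TextDoc
from jina import Deployment, Executor, requests

class Upper(Executor):
    @requests  # handle every endpoint
    def upper(self, docs: DocList[TextDoc], **kwargs) -> DocList[TextDoc]:
        for doc in docs:
            doc.text = doc.text.upper()
        return docs

# Serve the Executor over gRPC (the default); HTTP and WebSockets also work.
with Deployment(uses=Upper, port=12345) as dep:
    result = dep.post(
        on="/",
        inputs=DocList[TextDoc]([TextDoc(text="hello jina")]),
        return_type=DocList[TextDoc],
    )
    print(result[0].text)
```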
https://github.com/jina-ai/serve
GitHub - jina-ai/serve: ☁️ Build multimodal AI applications with cloud-native stack
#python #agents #ai #ai_agents #aiagents #developer_tools #function_calling #gpt_4 #gpt_4o #hacktoberfest #hacktoberfest2024 #javascript #js #llm #llmops #python #typescript
Composio is a powerful tool that helps AI agents work with many different apps and services. It supports over 250 tools, including popular ones like GitHub, Gmail, and Salesforce. Composio makes it easy to manage authentication across multiple accounts, which means you can securely connect your AI agents to various platforms without worrying about security issues. This integration enhances productivity by automating tasks and streamlining workflows, making it easier for developers and users to get more out of their AI tools.
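A sketch of the OpenAI function-calling integration, based on Composio's quickstart pattern; the SDK has evolved across versions, so treat the exact names (`ComposioToolSet`, `get_tools`, `handle_tool_calls`) as assumptions:

```python
from composio_openai import App, ComposioToolSet
from openai import OpenAI

openai_client = OpenAI()
toolset = ComposioToolSet()  # assumes your Composio API key is already configured

# Fetch function-calling schemas for a connected app and let the model use them.
tools = toolset.get_tools(apps=[App.GITHUB])
response = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Star the composiohq/composio repo."}],
    tools=tools,
)
# Execute whichever tool calls the model decided to make.
toolset.handle_tool_calls(response)
```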
https://github.com/ComposioHQ/composio
GitHub - ComposioHQ/composio: Composio equips your AI agents & LLMs with 100+ high-quality integrations via function calling
#typescript #ai #analytics #datasets #dspy #evaluation #gpt #llm #llmops #low_code #observability #openai #prompt_engineering
LangWatch helps you monitor, test, and improve AI applications by tracking performance, comparing different setups, and optimizing prompts automatically. It works with any AI tool or framework, keeps your data secure, and lets you collaborate with experts to fix issues quickly, making your AI more reliable and efficient.
https://github.com/langwatch/langwatch
GitHub - langwatch/langwatch: The open LLM Ops platform - Traces, Analytics, Evaluations, Datasets and Prompt Optimization ✨
#typescript #ci #ci_cd #cicd #evaluation #evaluation_framework #llm #llm_eval #llm_evaluation #llm_evaluation_framework #llmops #pentesting #prompt_engineering #prompt_testing #prompts #rag #red_teaming #testing #vulnerability_scanners
Promptfoo is a tool that helps developers test and improve AI applications using Large Language Models (LLMs). It allows you to **test prompts and models** automatically, **secure your apps** by finding vulnerabilities, and **compare different models** side-by-side. You can use it on your computer or integrate it into your development workflow. This tool helps you make sure your AI apps work well and are secure before you release them. It saves time and ensures quality by using data instead of guessing.
https://github.com/promptfoo/promptfoo
GitHub - promptfoo/promptfoo: Test your prompts, agents, and RAGs. AI Red teaming, pentesting, and vulnerability scanning for LLMs. Compare performance of GPT, Claude, Gemini, Llama, and more. Simple declarative configs with co...
#rust #ai #ai_engineering #anthropic #artificial_intelligence #deep_learning #genai #generative_ai #gpt #large_language_models #llama #llm #llmops #llms #machine_learning #ml #ml_engineering #mlops #openai #python #rust
TensorZero is a free, open-source tool that helps you build and improve large language model (LLM) applications by using real-world data and feedback. It gives you one simple API to connect with all major LLM providers, collects data from your app’s use, and lets you easily test and improve prompts, models, and strategies. You can see how your LLMs perform, compare different options, and make them smarter, faster, and cheaper over time—all while keeping your data private and under your control. This means you get better results with less effort and cost, and your apps keep improving as you use them.
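A sketch of calling a model through the gateway's Python client, assuming a gateway already running on `localhost:3000`; the client constructor and response shape vary by version, so treat these names as assumptions:

```python
from tensorzero import TensorZeroGateway

# Connect to a running TensorZero gateway over HTTP.
with TensorZeroGateway.build_http(gateway_url="http://localhost:3000") as client:
    response = client.inference(
        # Shorthand model reference; a function defined in tensorzero.toml works too.
        model_name="openai::gpt-4o-mini",
        input={"messages": [{"role": "user", "content": "Tell me a joke."}]},
    )
    print(response.content)
```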
https://github.com/tensorzero/tensorzero
GitHub - tensorzero/tensorzero: TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM gateway, observability, optimization, evaluation, and experimentation.
#typescript #ai_gateway #gateway #generative_ai #hacktoberfest #langchain #llama_index #llmops #llms #openai #prompt_engineering #router
The AI Gateway by Portkey lets you connect to over 1600 AI models quickly and securely through one simple API, making it easy to integrate any language, vision, or audio AI model in under two minutes. It ensures fast responses with less than 1ms latency, automatic retries, load balancing, and fallback options to keep your AI apps reliable and scalable. It also offers strong security with role-based access, guardrails, and compliance with standards like SOC2 and GDPR. You can save costs with smart caching and optimize usage without changing your code. This helps you build powerful, cost-effective, and secure AI applications faster and with less hassle.
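A minimal sketch using the `portkey_ai` Python SDK, which mirrors the OpenAI client shape; the API key and virtual key values are placeholders:

```python
from portkey_ai import Portkey

# A virtual key is a Portkey-side alias for a provider credential
# (placeholder values here -- substitute your own).
portkey = Portkey(api_key="PORTKEY_API_KEY", virtual_key="openai-virtual-key")

resp = portkey.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello through the gateway"}],
)
print(resp.choices[0].message.content)
```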
https://github.com/Portkey-AI/gateway
GitHub - Portkey-AI/gateway: A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with 1 fast & friendly API.
#python #agents #gcp #gemini #genai_agents #generative_ai #llmops #mlops #observability
You can quickly create and deploy AI agents using the Agent Starter Pack, a Python package with ready-made templates and full infrastructure on Google Cloud. It handles everything except your agent’s logic, including deployment, monitoring, security, and CI/CD pipelines. You can start a project in just one minute, customize agents for tasks like document search or real-time chat, and extend them as needed. This saves you time and effort by providing production-ready tools and integration with Google Cloud services, letting you focus on building smart AI agents without worrying about backend setup or deployment details.
https://github.com/GoogleCloudPlatform/agent-starter-pack
GitHub - GoogleCloudPlatform/agent-starter-pack: Ship AI Agents to Google Cloud in minutes, not months. Production-ready templates with built-in CI/CD, evaluation, and observability.