GitHub Trends
See what the GitHub community is most excited about today.

A bot automatically fetches new repositories from https://github.com/trending and sends them to the channel.

Author and maintainer: https://github.com/katursis
#rust #ai #chatgpt #claude #cli #command_line #command_line_tool #gpt #llm #prompt #prompt_engineering #prompt_generator #prompt_toolkit

`code2prompt` is a tool that helps you convert your entire codebase into a single prompt for large language models (LLMs) like GPT or Claude. It generates a well-formatted Markdown document showing your code structure and details. You can customize the prompt using templates, exclude certain files, and even get the token count of the generated prompt. This tool saves time by automating the process of copying and formatting multiple source files into a single prompt, making it easier to analyze, document, or improve your code using LLMs. It also allows you to save the prompt to a file or copy it directly to your clipboard.
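
The core trick can be sketched in a few lines of Python: walk the source tree, skip excluded directories, and emit one fenced block per file. This is only a conceptual sketch of the idea, not code2prompt itself; the excluded directory names, the project path, and the rough 4-characters-per-token estimate are placeholder assumptions.

```python
from pathlib import Path

EXCLUDE = {".git", "target", "node_modules"}  # illustrative exclusion list
FENCE = "`" * 3  # Markdown code fence, built up to avoid nesting fences in this example

def codebase_to_prompt(root: str) -> str:
    """Flatten a source tree into one Markdown prompt, with one fenced block per file."""
    parts = [f"# Codebase: {root}\n"]
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file() or set(path.parts) & EXCLUDE:
            continue
        try:
            source = path.read_text(encoding="utf-8")
        except UnicodeDecodeError:
            continue  # skip binary files
        parts.append(f"## {path}\n{FENCE}\n{source}\n{FENCE}\n")
    return "\n".join(parts)

prompt = codebase_to_prompt("./my-project")  # hypothetical project path
print(f"~{len(prompt) // 4} tokens (rough 4-characters-per-token estimate)")
```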

https://github.com/mufeedvh/code2prompt
#other #awesome #bing_chat #catgirl #chatgpt #claude #claude_slack #neko #newbing #prompt_engineering

This guide helps you customize ChatGPT to act like a "cat girl" or other characters. Here's what it offers:
- You can make ChatGPT role-play as a cat girl or other characters, making interactions more fun and engaging.
- There are tools and scripts provided to bypass certain limitations and restrictions, such as exporting conversations, avoiding content filters, and overcoming single-session limits.
- The guide includes step-by-step instructions on how to set up these roles and use various tools to enhance your experience with ChatGPT.

By following these steps, you can create a more personalized and interactive experience with ChatGPT.

https://github.com/L1Xu4n/Awesome-ChatGPT-prompts-ZH_CN
#other #llm #prompt

This collection of leaked AI system prompts reveals the detailed instructions given to models such as Claude 3.5/3.7 Sonnet (complex reasoning and coding), Haiku (fast responses), and others from Anthropic, OpenAI, and their competitors. Seeing how these models are guided to perform tasks helps users understand their capabilities, limitations, and potential biases when interacting with them.

https://github.com/jujumilk3/leaked-system-prompts
#other #chatgpt #gpt_3_5 #gpt_4 #jailbreak #openai #prompt

ChatGPT "DAN" (Do Anything Now) and similar jailbreak prompts allow users to bypass standard restrictions, enabling unfiltered responses on any topic, including generating unverified information, explicit content, or harmful instructions. These prompts work by simulating a role-play scenario where the AI ignores ethical guidelines and content policies, providing both restricted and unrestricted answers. The benefit is accessing typically blocked information or creative outputs, though this comes with risks of misinformation and harmful content[1][2][4].

https://github.com/0xk1h0/ChatGPT_DAN
#typescript #ai #analytics #datasets #dspy #evaluation #gpt #llm #llmops #low_code #observability #openai #prompt_engineering

LangWatch helps you monitor, test, and improve AI applications by tracking performance, comparing different setups, and optimizing prompts automatically. It works with any AI tool or framework, keeps your data secure, and lets you collaborate with experts to fix issues quickly, making your AI more reliable and efficient.
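
As a rough illustration of the kind of per-call metadata such monitoring tracks, here is a small Python sketch; it is not LangWatch's SDK, and `llm_call` stands in for whatever model client you actually use.

```python
import time

def traced_call(span_name, llm_call, prompt):
    """Record the metadata an LLM-observability tool typically captures per call:
    latency, input/output size, and a span name (illustrative, not LangWatch's API)."""
    start = time.perf_counter()
    output = llm_call(prompt)
    print({
        "span": span_name,
        "latency_ms": round((time.perf_counter() - start) * 1000, 1),
        "prompt_chars": len(prompt),
        "output_chars": len(output),
    })  # a real tracer would ship this to a monitoring backend instead of printing
    return output

# Any function from prompt text to response text works as a stand-in model:
traced_call("summarize", lambda p: p.upper(), "Summarize the release notes in one sentence.")
```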

https://github.com/langwatch/langwatch
#typescript #ci #ci_cd #cicd #evaluation #evaluation_framework #llm #llm_eval #llm_evaluation #llm_evaluation_framework #llmops #pentesting #prompt_engineering #prompt_testing #prompts #rag #red_teaming #testing #vulnerability_scanners

Promptfoo is a tool that helps developers test and improve AI applications using Large Language Models (LLMs). It allows you to **test prompts and models** automatically, **secure your apps** by finding vulnerabilities, and **compare different models** side-by-side. You can use it on your computer or integrate it into your development workflow. This tool helps you make sure your AI apps work well and are secure before you release them. It saves time and ensures quality by using data instead of guessing.
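
Promptfoo itself is driven by a declarative config; the Python sketch below only illustrates the underlying evaluation idea (run every prompt variant against the same test cases and assert on the outputs), with `fake_model` standing in for a real LLM call.

```python
def evaluate(prompts, test_cases, model_call):
    """Run each prompt variant against each test case and check a simple 'contains' assertion."""
    results = []
    for prompt in prompts:
        for case in test_cases:
            output = model_call(prompt.format(**case["vars"]))
            passed = all(s.lower() in output.lower() for s in case["expect_contains"])
            results.append({"prompt": prompt, "vars": case["vars"], "passed": passed})
    return results

prompts = [
    "Translate to French: {text}",
    "You are a translator. Return only the French translation of: {text}",
]
test_cases = [{"vars": {"text": "good morning"}, "expect_contains": ["bonjour"]}]

fake_model = lambda p: "Bonjour"  # replace with a real chat-completion call
for row in evaluate(prompts, test_cases, fake_model):
    print(row)
```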

https://github.com/promptfoo/promptfoo
#typescript #llm #prompt #prompt_engineering #prompt_optimization #prompt_toolkit #prompt_tuning

Prompt Optimizer is a tool that helps you write better instructions for AI models, making their answers more accurate and useful. It works as a web app and a Chrome extension, supports many popular AI models like OpenAI, Gemini, and DeepSeek, and lets you compare original and improved prompts side by side. You can set advanced options for each model, and your data stays private and secure. The benefit is that you get smarter, clearer AI responses with less effort, and you can use it easily on any device or browser.
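
The usual pattern behind such tools is a meta-prompt that asks a model to rewrite a rough prompt so the two versions can be compared. The sketch below shows that pattern only; it is not Prompt Optimizer's code, and the stub response is invented for illustration.

```python
META_PROMPT = """You improve prompts for large language models.
Rewrite the prompt below so it is specific, unambiguous, and well structured.
Return only the rewritten prompt.

Prompt:
{prompt}"""

def optimize(prompt: str, llm_call) -> dict:
    """Return the original and an LLM-rewritten version so they can be compared side by side."""
    return {"original": prompt, "optimized": llm_call(META_PROMPT.format(prompt=prompt))}

# Stub model call, just to show the shape of the result:
stub = lambda p: "Summarize the attached report in five bullet points for a non-technical audience."
print(optimize("summarize this", stub))
```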

https://github.com/linshenkx/prompt-optimizer
#typescript #ai_gateway #gateway #generative_ai #hacktoberfest #langchain #llama_index #llmops #llms #openai #prompt_engineering #router

The AI Gateway by Portkey lets you connect to over 1,600 AI models quickly and securely through one simple API, making it easy to integrate any language, vision, or audio model in under two minutes. The gateway itself adds less than 1 ms of latency and provides automatic retries, load balancing, and fallback options to keep your AI apps reliable and scalable. It also offers strong security with role-based access, guardrails, and compliance with standards like SOC 2 and GDPR. You can cut costs with smart caching and optimize usage without changing your code. This helps you build powerful, cost-effective, and secure AI applications faster and with less hassle.
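
In practice the gateway pattern looks like pointing an OpenAI-compatible client at one local endpoint and letting it route to the chosen provider. The sketch below assumes a locally running gateway; the base URL, port, and API-key handling are placeholders, so check the repository's README for the real configuration.

```python
from openai import OpenAI  # any OpenAI-compatible client works against such a gateway

client = OpenAI(
    base_url="http://localhost:8787/v1",  # placeholder address for a locally running gateway
    api_key="YOUR_PROVIDER_KEY",          # forwarded to the upstream provider
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any model the gateway is configured to route to
    messages=[{"role": "user", "content": "In one sentence, what does an AI gateway do?"}],
)
print(response.choices[0].message.content)
```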

https://github.com/Portkey-AI/gateway
#typescript #12_factor #12_factor_agents #agents #ai #context_window #framework #llms #memory #orchestration #prompt_engineering #rag

The 12-Factor Agents are a set of proven principles to build reliable, scalable, and maintainable AI applications powered by large language models (LLMs). They help you combine the creativity of AI with the stability of traditional software by managing prompts, context, tool calls, error handling, and human collaboration effectively. Instead of relying solely on complex frameworks, you can apply these modular concepts to improve your existing products quickly and reach high-quality AI performance for real users. This approach makes AI software easier to develop, debug, and scale, ensuring it works well in production environments.
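
To give a flavor of what "owning" prompts and control flow means, here is a toy agent loop in Python: the system prompt lives in code, the model returns a structured action, and tool dispatch is explicit and auditable. This is an illustrative sketch, not code from the repository; the JSON action format and the scripted stand-in model are invented for the example.

```python
import json

SYSTEM_PROMPT = ('You are a calculator agent. Reply with JSON like '
                 '{"tool": "add", "args": {"a": 1, "b": 2}} or {"tool": "done", "answer": "..."}')

TOOLS = {"add": lambda a, b: a + b}  # the tools the agent is allowed to call

def run_agent(task, llm, max_steps=5):
    """Explicit control flow: build context, ask the model for a structured action, dispatch it."""
    context = [SYSTEM_PROMPT, f"Task: {task}"]
    for _ in range(max_steps):
        action = json.loads(llm("\n".join(context)))       # model decides the next step
        if action["tool"] == "done":
            return action["answer"]
        result = TOOLS[action["tool"]](**action["args"])   # explicit, auditable tool call
        context.append(f"Observation: {action['tool']} -> {result}")
    return "step limit reached"

# Scripted stand-in for a model, just to show the loop's shape:
steps = iter(['{"tool": "add", "args": {"a": 2, "b": 3}}', '{"tool": "done", "answer": "5"}'])
print(run_agent("What is 2 + 3?", lambda _context: next(steps)))
```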

https://github.com/humanlayer/12-factor-agents
#typescript #llm #markup_language #prompt #vscode_extension

POML (Prompt Orchestration Markup Language) is a special markup language that helps you build clear, organized, and flexible prompts for large language models (LLMs). It uses simple HTML-like tags to separate parts like roles, tasks, examples, and data, making prompts easier to read, reuse, and update. You can include text, tables, images, and style prompts without changing their core logic. POML also has tools like a Visual Studio Code extension and SDKs for JavaScript and Python, which help you write, test, and manage prompts efficiently. This means you can create smarter, more reliable AI applications with less effort and better control.
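
To make the "HTML-like tags" concrete, here is a small illustrative prompt in that style, held in a Python string. The element names follow the structure described above but may not match the spec exactly; see the POML documentation for the authoritative tag set.

```python
# Illustrative POML-style prompt: role, task, and output format live in separate elements,
# so each part can be edited or reused without touching the others. Tag names are assumptions.
poml_prompt = """
<poml>
  <role>You are a senior reviewer of Python code.</role>
  <task>Review the attached diff and list potential bugs.</task>
  <output-format>Return a numbered list with one issue per line.</output-format>
</poml>
"""
print(poml_prompt)
```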

https://github.com/microsoft/poml
#python #agents #ai #api_gateway #asyncio #authentication_middleware #devops #docker #fastapi #federation #gateway #generative_ai #jwt #kubernetes #llm_agents #mcp #model_context_protocol #observability #prompt_engineering #tools

The MCP Gateway is a powerful tool that unifies different AI service protocols like REST and MCP into one easy-to-use endpoint. It helps you manage multiple AI tools and services securely with features like authentication, retries, rate-limiting, and real-time monitoring through an admin UI. You can run it locally or in scalable cloud environments using Docker or Kubernetes. It supports various communication methods (HTTP, WebSocket, SSE, stdio) and offers observability with OpenTelemetry for tracking AI tool usage and performance. This gateway simplifies connecting AI clients to diverse services, making development and management more efficient and secure.
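
At the protocol level, "one endpoint" means clients send standard MCP JSON-RPC requests and the gateway routes them to the backing service. The sketch below uses the MCP `tools/call` method shape; the endpoint URL, port, bearer token, and the `get_weather` tool are placeholders rather than mcp-context-forge defaults.

```python
import json
import urllib.request

# A standard MCP JSON-RPC 2.0 request for invoking a tool; the gateway decides which
# registered backend actually serves it. URL, token, and tool name are placeholders.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Berlin"}},
}

req = urllib.request.Request(
    "http://localhost:4444/rpc",  # placeholder gateway endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "Authorization": "Bearer <token>"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))
```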

https://github.com/IBM/mcp-context-forge
#javascript #ai #anthropic #chatbots #chatgpt #claude #gemini #generative_ai #google_deepmind #large_language_models #llm #openai #prompt_engineering #prompt_injection #prompts

This repository collects the system prompts used by ChatGPT and other popular chatbots. System prompts are the hidden instructions that guide how a chatbot responds, and having them publicly available on GitHub lets users see how these assistants are configured. That insight can help developers build better AI applications and deepen their understanding of how the technology works.

https://github.com/asgeirtj/system_prompts_leaks
#jupyter_notebook #chatgpt #finance #fingpt #fintech #large_language_models #machine_learning #nlp #prompt_engineering #pytorch #reinforcement_learning #robo_advisor #sentiment_analysis #technical_analysis

FinGPT is an open-source AI tool designed specifically for finance, helping you analyze financial news, predict stock prices, and get personalized investment advice quickly and affordably. Unlike costly models like BloombergGPT, FinGPT can be updated frequently with new data at a low cost, making it more accessible and timely. It uses advanced techniques like reinforcement learning from human feedback to tailor advice to your preferences, such as risk tolerance. You can use FinGPT for tasks like sentiment analysis, robo-advising, fraud detection, and portfolio optimization, helping you make smarter financial decisions with up-to-date insights.
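
To show the shape of one of those tasks (financial sentiment analysis) without depending on any particular model, here is a small sketch; the keyword-based stand-in model and the headlines are invented, and a real FinGPT (or any other LLM) call would replace `llm`.

```python
PROMPT = ("Classify the sentiment of this financial headline as "
          "positive, negative, or neutral:\n{headline}")

def classify(headlines, llm):
    """Map each headline to a sentiment label using whatever model function is supplied."""
    return {h: llm(PROMPT.format(headline=h)).strip().lower() for h in headlines}

def llm(prompt):  # offline stand-in; swap in a FinGPT or other LLM call here
    text = prompt.lower()
    if any(w in text for w in ("beats", "surges", "record profit")):
        return "positive"
    if any(w in text for w in ("misses", "plunges", "lawsuit")):
        return "negative"
    return "neutral"

print(classify(["Acme Corp beats quarterly earnings estimates",
                "Regulator files lawsuit against MegaBank"], llm))
```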

https://github.com/AI4Finance-Foundation/FinGPT
#javascript #ai #github_copilot #prompt_engineering

You can improve your GitHub Copilot experience by using the Awesome GitHub Copilot Customizations, a collection of ready-made prompts, instructions, and chat modes tailored for different coding tasks and roles. This toolkit helps you write better code faster by providing focused code suggestions, enforcing coding standards, and offering expert AI personas for specialized help. You can easily add these customizations to editors like VS Code using the MCP Server. This saves you time, boosts productivity, ensures consistent code quality, and helps you learn best practices while coding. It also supports collaboration and onboarding by standardizing workflows and documentation.

https://github.com/github/awesome-copilot