GitHub Trends
10.1K subscribers
15.3K links
See what the GitHub community is most excited about today.

A bot automatically fetches new repositories from https://github.com/trending and sends them to the channel.

Author and maintainer: https://github.com/katursis
#python #cloud_native #cncf #deep_learning #docker #fastapi #framework #generative_ai #grpc #jaeger #kubernetes #llmops #machine_learning #microservice #mlops #multimodal #neural_search #opentelemetry #orchestration #pipeline #prometheus

Jina-serve is a tool that helps you build and deploy AI services easily. It supports major machine learning frameworks and allows you to scale your services from local development to production quickly. You can use it to create AI services that communicate via gRPC, HTTP, and WebSockets. It has features like built-in Docker integration, one-click cloud deployment, and support for Kubernetes and Docker Compose, making it easy to manage and scale your AI applications. This makes it simpler for you to focus on the core logic of your AI projects without worrying about the technical details of deployment and scaling.
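As a rough illustration only, a Jina-style service might look like the sketch below, assuming the current Executor/Deployment API (jina>=3 with docarray); the executor name and its logic are made up for the example.

```python
# Minimal sketch of a Jina-style service; the executor and its logic are
# illustrative, not taken from the project's docs.
from docarray import DocList
from docarray.documents import TextDoc
from jina import Deployment, Executor, requests


class UpperCaseExecutor(Executor):
    @requests
    def process(self, docs: DocList[TextDoc], **kwargs) -> DocList[TextDoc]:
        # Core logic lives here; serving, scaling and protocols are handled by Jina.
        for doc in docs:
            doc.text = doc.text.upper()
        return docs


if __name__ == '__main__':
    # Serve over gRPC; HTTP or WebSocket can be chosen via the protocol argument.
    with Deployment(uses=UpperCaseExecutor, port=54321, protocol='grpc') as dep:
        dep.block()
```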

https://github.com/jina-ai/serve
#python #agents #ai #artificial_intelligence #attention_mechanism #chatgpt #gpt4 #gpt4all #huggingface #langchain #langchain_python #machine_learning #multi_modal_imaging #multi_modality #multimodal #prompt_engineering #prompt_toolkit #prompting #swarms #transformer_models #tree_of_thoughts

Swarms is an advanced multi-agent orchestration framework designed for enterprise-grade production use. It offers production-ready infrastructure with high reliability, modular design, and comprehensive logging, reducing downtime and easing maintenance. Key benefits and features:
- **Agent Orchestration** Multi-model support, custom agent creation, an extensive tool library, and multiple memory systems provide flexibility and extended functionality.
- **Scalability** A simple API, extensive documentation, an active community, and CLI tools make development faster and easier.
- **Security Features** See the documentation at https://docs.swarms.world for more detailed information (a minimal usage sketch follows below).
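As a rough sketch, a single agent might be created and run like this; the parameter names follow the style of the project's README but are assumptions here, and an LLM provider API key is expected in the environment.

```python
# Rough sketch of a single Swarms agent; parameter names are assumptions
# based on the project's README style, not verified against the API.
from swarms import Agent

agent = Agent(
    agent_name="Research-Agent",
    system_prompt="You summarize technical topics concisely.",
    model_name="gpt-4o-mini",  # resolved through the configured LLM provider
    max_loops=1,
)

result = agent.run("Summarize the trade-offs of multi-agent orchestration.")
print(result)
```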

https://github.com/kyegomez/swarms
#python #any_to_any #foundation_models #llm #multimodal #unified_model #vision_language_pretraining

The Janus-Series models, including Janus, Janus-Pro, and JanusFlow, are advanced AI tools that combine multimodal understanding and generation capabilities. These models can process both text and images, allowing for tasks like answering questions based on images and generating images from text descriptions. Janus-Pro is an improved version with better performance due to optimized training strategies and larger model sizes. JanusFlow integrates autoregressive language models with rectified flow for efficient image generation. The benefit to the user is the ability to perform complex multimodal tasks with high accuracy and flexibility, making these models useful for a wide range of applications in research and industry.

https://github.com/deepseek-ai/Janus
#python #llm #multimodal_large_language_models #svg #vlm

StarVector is a powerful tool that converts images into Scalable Vector Graphics (SVG) code. It uses a multimodal vision-language model to understand both images and text. This means it can create SVGs from pictures or text instructions. The benefit is that SVGs are scalable and editable, making them perfect for web design and graphic art. StarVector is especially good at vectorizing icons, logos, and diagrams, producing high-quality results that are easy to edit and resize without losing clarity.

https://github.com/joanrod/star-vector
#python #apple_silicon #audio_processing #mlx #multimodal #speech_recognition #speech_synthesis #speech_to_text #text_to_speech #transformers

MLX-Audio is a powerful tool for converting text into speech and transforming speech into new audio (speech-to-speech). It runs well on Apple Silicon devices, like M-series chips, making it fast and efficient. You can choose from different languages and voices, and even adjust how fast the speech is. It also includes a web interface where you can visualize audio in 3D and play your own files. This tool is helpful for making audiobooks, interactive media, and personal projects because it's easy to use and produces high-quality audio quickly.
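A rough text-to-speech sketch is below; the helper name, model path, voice id, and parameters are assumptions based on the project's README style and may differ between versions.

```python
# Rough TTS sketch for MLX-Audio on Apple Silicon; names and parameters below
# are assumptions and may not match the installed version exactly.
from mlx_audio.tts.generate import generate_audio

generate_audio(
    text="Hello from MLX-Audio running locally on an M-series chip.",
    model_path="prince-canuma/Kokoro-82M",  # assumed TTS model repo
    voice="af_heart",                        # assumed voice id
    speed=1.2,                               # speech speed multiplier
    file_prefix="hello",                     # output file name prefix (assumed)
)
```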

https://github.com/Blaizzy/mlx-audio
#python #asr #deeplearning #generative_ai #large_language_models #machine_translation #multimodal #neural_networks #speaker_diariazation #speaker_recognition #speech_synthesis #speech_translation #tts

NVIDIA NeMo is a powerful, easy-to-use platform for building, customizing, and deploying generative AI models like large language models (LLMs), vision language models, and speech AI. It lets you quickly train and fine-tune models using pre-built code and checkpoints, supports the latest model architectures, and works on cloud, data center, or edge environments. NeMo 2.0 is even more flexible and scalable, with Python-based configuration and modular design, making it simple to experiment and scale up. The main benefit is that you can create advanced AI applications faster, with less effort, and at lower cost, while getting high performance and easy deployment options.
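For example, a pretrained NeMo speech-recognition checkpoint can be pulled and used in a few lines; the checkpoint name below is just one example, and any listed ASR model works the same way.

```python
# Minimal sketch: load a pretrained NeMo ASR checkpoint and transcribe a file.
# The checkpoint name is an example; pick any available ASR model.
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.ASRModel.from_pretrained(
    model_name="stt_en_conformer_ctc_large"
)
transcripts = asr_model.transcribe(["sample.wav"])  # list of audio file paths
print(transcripts[0])
```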

https://github.com/NVIDIA/NeMo
#python #ai #ai_art #art #asset_generator #chatbot #deep_learning #desktop_app #image_generation #mistral #multimodal #privacy #pygame #pyside6 #python #self_hosted #speech_to_text #stable_diffusion #text_to_image #text_to_speech #text_to_speech_app

AI Runner is a tool that lets you use AI on your own computer without needing the internet. It can do many things like **voice chatbots**, **text-to-image** generation, and **image editing**. You can also create AI personalities for more interesting conversations. Because everything runs locally, it stays fast and keeps your data private. To use AI Runner, you need a capable machine with a strong GPU, such as an NVIDIA RTX 3060 or better, so AI tasks run quickly.

https://github.com/Capsize-Games/airunner
#typescript #agents #ai #embedders #genkit #llm #machine_learning #multimodal #rag #vector_database

Genkit is an open-source framework by Google Firebase that helps you easily build AI-powered apps using a single interface to connect many AI models like Google Gemini, OpenAI, and Anthropic. It supports JavaScript/TypeScript (stable), Go (beta), and Python (alpha), letting you create chatbots, automations, and recommendations quickly with simple code. Genkit works well with web and mobile platforms, offers tools for testing and debugging AI features locally, and lets you deploy and monitor your AI apps on Firebase or other cloud services. This saves you time and effort in developing and managing AI applications efficiently.

https://github.com/firebase/genkit
#rust #artificial_intelligence #big_data #data_engineering #distributed_computing #machine_learning #multimodal #python #rust

Daft is a powerful, easy-to-use data engine that lets you process large-scale data using Python or SQL with high speed and efficiency. It supports complex data types like images and tensors, works well interactively for quick data exploration, and can scale to huge cloud clusters using Ray. Daft integrates smoothly with cloud storage and data catalogs, making it ideal for data engineering, analytics, and machine learning workflows. By using Daft, you can handle big, multimodal datasets faster and more flexibly, improving your ability to analyze and prepare data for AI models without complex setup or slowdowns.
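A minimal sketch of a Daft workflow is below: build a small dataframe, derive a column, filter, and materialize the result. The paths and column names are made up for the example; real workloads would typically start from daft.read_parquet or similar readers over cloud storage.

```python
# Minimal Daft sketch: build a dataframe, add a derived column, filter, show.
# Paths and column names are illustrative only.
import daft

df = daft.from_pydict({
    "image_path": ["cats/1.jpg", "dogs/2.jpg", "cats/3.jpg"],
    "label": ["cat", "dog", "cat"],
})

df = df.with_column("is_cat", daft.col("label") == "cat")
df = df.where(daft.col("is_cat"))

df.show()  # the lazily-built plan executes here
```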

https://github.com/Eventual-Inc/Daft
#python #audio_generation #diffusion #image_generation #inference #model_serving #multimodal #pytorch #transformer #video_generation

vLLM-Omni is a free, open-source tool that makes serving AI models for text, images, videos, and audio fast, easy, and cheap. It builds on vLLM for high throughput, using efficient memory management, overlapped execution, and flexible resource sharing across GPUs. The project reports up to 2x higher throughput and 35% lower latency, and setup is simple: Hugging Face models are served behind an OpenAI-compatible API, making it easy to build multimodal apps like chatbots or media generators without high costs.
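Because serving is exposed through an OpenAI-compatible API, a client call might look like the sketch below; the endpoint, port, and model name are assumptions, and a vLLM-Omni server is presumed to be running locally.

```python
# Sketch of querying a locally running OpenAI-compatible vLLM-Omni server.
# Endpoint, port, and model name are assumptions for illustration.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-VL-7B-Instruct",  # whatever multimodal model the server loaded
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```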

https://github.com/vllm-project/vllm-omni