#python #ai #blender #blender_addon #image_generation #stable_diffusion
https://github.com/carson-katri/dream-textures
carson-katri/dream-textures: Stable Diffusion built into Blender as an add-on.
#jupyter_notebook #ai #artificial_intelligence #image_generation #img2img #latent_diffusion #machine_learning #model_training #stable_diffusion #txt2img
https://github.com/JoePenna/Dreambooth-Stable-Diffusion
JoePenna/Dreambooth-Stable-Diffusion: implementation of DreamBooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752).
#jupyter_notebook #ai_art #artificial_intelligence #generative_art #image_generation #img2img #inpainting #latent_diffusion #linux #macos #outpainting #stable_diffusion #txt2img #windows
https://github.com/invoke-ai/InvokeAI
invoke-ai/InvokeAI: Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies.
#python #computer_vision #deep_learning #diffusion_models #image_editing #image_generation #image_manipulation #paint_by_example #pytorch #stable_diffusion
https://github.com/Fantasy-Studio/Paint-by-Example
Fantasy-Studio/Paint-by-Example: exemplar-based image editing with diffusion models.
#cplusplus #ai #api #audio_generation #distributed #gemma #gpt4all #image_generation #kubernetes #llama #llama3 #llm #mamba #mistral #musicgen #p2p #rerank #rwkv #stable_diffusion #text_generation #tts
LocalAI is a free, open-source alternative to OpenAI that you run on your own computer or server. It generates text, images, and audio locally, no GPU required, and supports many models and tasks such as text-to-audio, audio-to-text, and image generation. Setup is easy with an installer script or Docker, and it includes a friendly web interface. Because nothing leaves your machine, you avoid cloud costs and keep full control over your data, and the project is community-driven, with plenty of resources and integrations to help you get started and customize it.
https://github.com/mudler/LocalAI
mudler/LocalAI: the free, open-source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required.
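Since LocalAI is a drop-in replacement for the OpenAI API, the quickest way to see the "no cloud needed" point above is to query a local instance with the standard `openai` Python client. A minimal sketch, assuming a LocalAI server is already running on its default port with a chat model configured; the model name below is a placeholder to be swapped for whatever you have installed:

```python
# Minimal sketch: calling a local LocalAI server through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # assumed default LocalAI address
    api_key="not-needed-locally",         # LocalAI does not require a real API key
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder name that LocalAI maps to a locally installed model
    messages=[{"role": "user", "content": "Summarize what LocalAI does in one sentence."}],
)
print(response.choices[0].message.content)
```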
#python #auto_regressive_model #autoregressive_models #diffusion_models #generative_ai #generative_model #gpt #gpt_2 #image_generation #large_language_models #neurips #transformers #vision_transformer
VAR (Visual Autoregressive Modeling) is a new approach to image generation. Instead of predicting an image token by token, it uses "next-scale prediction": images are generated coarse to fine, with each finer token map predicted from all the coarser ones. With this formulation, GPT-style autoregressive models beat diffusion models at image generation for the first time, and they follow power-law scaling laws, so they scale efficiently. You can try VAR interactively on the demo website, and the repository provides scripts and pretrained models if you want to dig into the technical details.
https://github.com/FoundationVision/VAR
FoundationVision/VAR: [NeurIPS 2024 Best Paper Award] official implementation of "Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction".
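To make the "next-scale prediction" idea concrete, here is a small, self-contained sketch of coarse-to-fine generation. It is not the official VAR code: the toy model, vocabulary size, scale list, and greedy decoding are stand-ins for the real transformer and VQ tokenizer, chosen only to show how each scale is predicted from the upsampled coarser scales:

```python
# Illustrative sketch of next-scale prediction (coarse-to-fine), NOT the official VAR API.
import torch
import torch.nn.functional as F

class ToyNextScaleModel(torch.nn.Module):
    """Stand-in for the VAR transformer: maps an upsampled coarse token map
    to logits over a small token vocabulary at the requested resolution."""
    def __init__(self, vocab_size: int = 16):
        super().__init__()
        self.proj = torch.nn.Conv2d(1, vocab_size, kernel_size=1)

    def forward(self, coarse_tokens: torch.Tensor, target_side: int) -> torch.Tensor:
        cond = F.interpolate(coarse_tokens, size=(target_side, target_side), mode="nearest")
        return self.proj(cond)  # (B, vocab, target_side, target_side)

@torch.no_grad()
def generate_coarse_to_fine(model, scales=(1, 2, 4, 8, 16)):
    tokens = torch.zeros(1, 1, 1, 1)  # empty context for the coarsest scale
    for side in scales:
        logits = model(tokens, side)                         # predict the next, finer token map
        tokens = logits.argmax(dim=1, keepdim=True).float()  # greedy pick of token ids
    return tokens  # finest token map; the real system decodes this with a VQ-VAE decoder

final_tokens = generate_coarse_to_fine(ToyNextScaleModel())
print(final_tokens.shape)  # torch.Size([1, 1, 16, 16])
```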
#python #ai #ai_art #art #asset_generator #chatbot #deep_learning #desktop_app #image_generation #mistral #multimodal #privacy #pygame #pyside6 #self_hosted #speech_to_text #stable_diffusion #text_to_image #text_to_speech #text_to_speech_app
AI Runner is a desktop app that lets you run AI on your own computer, no internet connection required. It covers **voice chatbots**, **text-to-image** generation, and **image editing**, and you can define AI personalities for more interesting conversations. Because everything runs locally, it is fast and your data stays private. You do need a reasonably powerful machine with a strong GPU, such as an NVIDIA RTX 3060 or better.
https://github.com/Capsize-Games/airunner
Capsize-Games/airunner: offline inference engine for art, real-time voice conversations, LLM-powered chatbots, and automated workflows.
#rust #2d_graphics #art #compositor #design #graphic_design #graphics_editor #image_generation #image_manipulation #image_processing #node_editor #node_graph #photo_editing #photo_editor #procedural #procedural_art #procedural_drawing #svg_editor #vector_editor
Graphite is a free, open-source 2D graphics editor that combines vector and raster tools in a hybrid workflow built on both layers and nodes. Editing is nondestructive, so you can change any step of your work at any time without losing quality. The node-based system gives you the power and flexibility of visual programming, while the layer system keeps everyday work simple and familiar, which makes it practical to build complex graphics, animations, and effects in a single tool. Graphite is still evolving, but it aims to be a versatile, all-in-one creative platform that is accessible to everyone.
https://github.com/GraphiteEditor/Graphite
GraphiteEditor/Graphite: open-source comprehensive 2D content creation tool suite for graphic design, digital art, and interactive real-time motion graphics, featuring node-based procedural editing.
#python #deep_learning #diffusion #flax #flux #hacktoberfest #image_generation #image2image #image2video #jax #latent_diffusion_models #pytorch #score_based_generative_modeling #stable_diffusion #stable_diffusion_diffusers #text2image #text2video #video2video
The Hugging Face Diffusers library is a powerful, easy-to-use toolkit for generating images, audio, and even 3D structures of molecules with diffusion models. It offers ready-to-use pretrained pipelines as well as flexible building blocks (pipelines, schedulers, and model components), so you can run state-of-the-art models out of the box or assemble and train your own diffusion-based projects. Installation is a single pip or conda command, and generating high-quality outputs takes just a few lines of code.
https://github.com/huggingface/diffusers
huggingface/diffusers: 🤗 Diffusers, state-of-the-art diffusion models for image, video, and audio generation in PyTorch.
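As an illustration of the "few lines of code" claim above, this is the usual text-to-image workflow with the public diffusers API. The checkpoint name is just one example of a compatible model, and the GPU device is assumed:

```python
# pip install diffusers transformers accelerate torch
import torch
from diffusers import DiffusionPipeline

# Load an example text-to-image checkpoint (any compatible model id works).
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
)
pipe.to("cuda")  # or "cpu" without a GPU, at the cost of speed

# Generate and save a single image from a text prompt.
image = pipe("an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```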
#python #audio_generation #diffusion #image_generation #inference #model_serving #multimodal #pytorch #transformer #video_generation
vLLM-Omni is a free, open-source framework for serving omni-modality models, spanning text, image, video, and audio generation, quickly and cheaply. It builds on vLLM, reusing its efficient memory management, overlapping stages of work, and sharing GPU resources flexibly across models. The project reports up to 2x higher throughput and roughly 35% lower latency, and it serves Hugging Face models behind an OpenAI-compatible API, which makes it practical to build multi-modal apps such as chatbots or media generators without big infrastructure costs.
https://github.com/vllm-project/vllm-omni
vllm-project/vllm-omni: a framework for efficient model inference with omni-modality models.
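Since the summary mentions serving Hugging Face models behind an OpenAI-compatible API, here is a minimal sketch of querying such a server once it is running locally. The host, port, endpoint path, and model id are assumptions based on vLLM's usual defaults rather than documented vLLM-Omni specifics, so adjust them to match your deployment:

```python
# Minimal sketch: sending a chat request to a locally running OpenAI-compatible server.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # assumed vLLM-style default endpoint
    json={
        "model": "Qwen/Qwen2.5-Omni-7B",           # hypothetical multi-modal model id
        "messages": [{"role": "user", "content": "Describe a sunset over the ocean."}],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```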