#python #ai_art #stable_diffusion #stable_diffusion_webui #stable_diffusion_webui_plugin
https://github.com/OedoSoldier/enhanced-img2img
GitHub
GitHub - OedoSoldier/sd-webui-image-sequence-toolkit: Extension for AUTOMATIC1111's WebUI
Extension for AUTOMATIC1111's WebUI. Contribute to OedoSoldier/sd-webui-image-sequence-toolkit development by creating an account on GitHub.
#python #ai_art #huggingface #huggingface_diffusers #machine_learning #stable_diffusion
https://github.com/nateraw/stable-diffusion-videos
GitHub
GitHub - nateraw/stable-diffusion-videos: Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between…
Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts - nateraw/stable-diffusion-videos
#cplusplus #4_bits #attention_sink #chatbot #chatpdf #intel_optimized_llamacpp #large_language_model #llm_cpu #llm_inference #smoothquant #sparsegpt #speculative_decoding #stable_diffusion #streamingllm
https://github.com/intel/intel-extension-for-transformers
GitHub
GitHub - intel/intel-extension-for-transformers: ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression…
⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Platforms⚡ - intel/intel-extension-for-transformers
#python #pytorch #stable_diffusion
ComfyUI is a powerful tool that helps you create and run complex Stable Diffusion workflows without coding. It uses a graph/nodes/flowchart interface, making it easy to design and execute advanced tasks. Here are the key benefits:
- **Visual Workflow Editor**: You can build complex workflows in a node graph, no coding required.
- **Smart Re-execution**: It only re-executes the parts of a workflow that have changed, saving time and resources.
- **Smart Memory Management**: Large models can run even on GPUs with limited VRAM.
- **Works Fully Offline**: The core application never downloads anything on its own.
- **Extensive Features**: Includes an asynchronous queue system, embeddings, hypernetworks, and more.
Overall, ComfyUI makes it easier and more efficient to work with Stable Diffusion models, even for those without extensive coding knowledge.
https://github.com/comfyanonymous/ComfyUI
GitHub
GitHub - comfyanonymous/ComfyUI: The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface. - comfyanonymous/ComfyUI
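Beyond the graphical editor, a running ComfyUI instance also accepts workflows over its local HTTP API: a graph exported via "Save (API Format)" can be POSTed to the `/prompt` endpoint. A minimal sketch, assuming the default server address `127.0.0.1:8188`; the workflow file name and node contents below are placeholders:

```python
import json
import urllib.request
import uuid

# Assumed default address of a locally running ComfyUI server.
COMFY_URL = "http://127.0.0.1:8188/prompt"

def build_prompt_request(workflow, client_id=None):
    """Wrap an API-format workflow dict in the body that /prompt expects."""
    return {
        "prompt": workflow,
        "client_id": client_id or uuid.uuid4().hex,
    }

def queue_workflow(workflow):
    """POST a workflow to the ComfyUI queue and return the server's reply."""
    body = json.dumps(build_prompt_request(workflow)).encode()
    req = urllib.request.Request(
        COMFY_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # reply includes the assigned prompt_id

if __name__ == "__main__":
    # workflow_api.json: a graph exported from the UI in API format.
    with open("workflow_api.json") as f:
        print(queue_workflow(json.load(f)))
```

The server processes queued prompts asynchronously, so the reply returns immediately with an id rather than the finished image.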
#cplusplus #ai #api #audio_generation #distributed #gemma #gpt4all #image_generation #kubernetes #llama #llama3 #llm #mamba #mistral #musicgen #p2p #rerank #rwkv #stable_diffusion #text_generation #tts
LocalAI is a free, open-source alternative to OpenAI that you can run on your own computer or server. It allows you to generate text, images, and audio locally without needing a GPU. You can use it with various models and it supports multiple functionalities like text-to-audio, audio-to-text, and image generation. LocalAI is easy to set up using an installer script or Docker, and it has a user-friendly web interface. This tool is beneficial because it saves you money by not requiring cloud services and gives you full control over your data privacy. Plus, it's community-driven, so there are many resources and integrations available to help you get started and customize it to your needs.
https://github.com/mudler/LocalAI
GitHub
GitHub - mudler/LocalAI: :robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop…
:robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf,...
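Because LocalAI exposes an OpenAI-compatible API, existing OpenAI client code can usually be pointed at it unchanged. A minimal stdlib-only sketch, assuming a LocalAI instance on its default port 8080; the model name is a placeholder for whatever you have installed locally:

```python
import json
import urllib.request

# Assumed default address of a local LocalAI instance.
BASE_URL = "http://localhost:8080/v1"

def build_chat_request(model, messages, temperature=0.7):
    """Assemble the JSON body for an OpenAI-style /chat/completions call."""
    return {
        "model": model,
        "messages": messages,
        "temperature": temperature,
    }

def chat(model, messages):
    """Send a chat completion request and return the assistant's reply text."""
    body = json.dumps(build_chat_request(model, messages)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # "llama-3.2-1b-instruct" is illustrative; use a model from your gallery.
    print(chat("llama-3.2-1b-instruct", [{"role": "user", "content": "Hello!"}]))
```

Since the wire format matches OpenAI's, the official `openai` Python client also works by setting its `base_url` to the LocalAI address.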
#cplusplus #accelerator #llama #llm #low_level_programming #metal #mistral #mixtral #ml #resnet #stable_diffusion #tenstorrent
Tenstorrent's TT-Metal is a powerful tool for developing AI models. It allows users to create custom kernels for their hardware, which can improve performance by reducing memory usage. This is especially useful for large language models (LLMs) like Llama and Mixtral. The TT-Metal system supports efficient data movement and computation, making it beneficial for users who need to run complex AI tasks quickly and effectively. By optimizing how data is stored and processed, TT-Metal helps users achieve better results with less effort.
https://github.com/tenstorrent/tt-metal
GitHub
GitHub - tenstorrent/tt-metal: :metal: TT-NN operator library, and TT-Metalium low level kernel programming model.
:metal: TT-NN operator library, and TT-Metalium low level kernel programming model. - tenstorrent/tt-metal
#python #ai #ai_art #art #asset_generator #chatbot #deep_learning #desktop_app #image_generation #mistral #multimodal #privacy #pygame #pyside6 #self_hosted #speech_to_text #stable_diffusion #text_to_image #text_to_speech #text_to_speech_app
AI Runner is a tool that lets you use AI on your own computer without needing the internet. It can do many things, like **voice chatbots**, **text-to-image** generation, and **image editing**. You can also create AI personalities for more interesting conversations. Because everything runs locally, your data stays private. For good performance you need a capable GPU, such as an NVIDIA RTX 3060 or better.
https://github.com/Capsize-Games/airunner
GitHub
GitHub - Capsize-Games/airunner: Offline inference engine for art, real-time voice conversations, LLM powered chatbots and automated…
Offline inference engine for art, real-time voice conversations, LLM powered chatbots and automated workflows - Capsize-Games/airunner
#python #deep_learning #diffusion #flax #flux #hacktoberfest #image_generation #image2image #image2video #jax #latent_diffusion_models #pytorch #score_based_generative_modeling #stable_diffusion #stable_diffusion_diffusers #text2image #text2video #video2video
The Hugging Face Diffusers library is a powerful and easy-to-use tool for generating images, audio, and 3D molecular structures using advanced diffusion models. It offers ready-to-use pretrained models and flexible components like pipelines, schedulers, and model building blocks, allowing you to quickly create or customize your own diffusion-based projects. Installation is simple via pip or conda, and you can generate high-quality outputs with just a few lines of code. This library benefits you by making cutting-edge AI generation accessible, customizable, and efficient, whether you want to run models or train your own.
https://github.com/huggingface/diffusers
GitHub
GitHub - huggingface/diffusers: 🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch. - huggingface/diffusers
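The "few lines of code" claim can be sketched as the common text-to-image pattern below. The model id, step count, and output filename are illustrative, and the heavy part (a multi-gigabyte weight download plus GPU inference) is kept behind the main guard:

```python
def generate_image(pipe, prompt, steps=30, guidance=7.5):
    """Run a loaded diffusion pipeline and return the first generated image."""
    result = pipe(prompt, num_inference_steps=steps, guidance_scale=guidance)
    return result.images[0]

if __name__ == "__main__":
    # Heavy part: downloads model weights (several GB) on first run.
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "stable-diffusion-v1-5/stable-diffusion-v1-5",  # illustrative model id
        torch_dtype=torch.float16,
    ).to("cuda")
    generate_image(pipe, "a watercolor fox in a forest").save("fox.png")
```

Swapping the checkpoint id is usually all it takes to move between model families, since the pipeline abstraction hides the scheduler and model wiring.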
#typescript #agent #ai #aiagent #aiimage #aiimagegenerator #aitool #aitools #canva #comfyui #flux #stable_diffusion
Jaaz.app is a free, open-source creative tool like Canva that works locally on your computer, keeping your data private. It lets you create images and videos quickly using AI by simply sketching or describing what you want, without needing to write complex prompts. You can plan scenes on an unlimited canvas, collaborate in real time, and use smart AI agents to add objects or styles. It supports offline use and works on Windows and Mac. This means you get powerful, privacy-focused design and video creation without relying on the internet or risking your data security.
https://github.com/11cafe/jaaz
GitHub
GitHub - 11cafe/jaaz: The world's first open-source multimodal creative assistant This is a substitute for Canva and Manus that…
The world's first open-source multimodal creative assistant This is a substitute for Canva and Manus that prioritizes privacy and is usable locally. - 11cafe/jaaz
#go #gemma3 #go #gpt_oss #granite4 #llama #llama3 #llm #on_device_ai #phi3 #qwen3 #qwen3vl #sdk #stable_diffusion #vlm
NexaSDK runs AI models locally on CPUs, GPUs, and NPUs with a single command, supports GGUF/MLX/.nexa formats, and offers NPU-first Android and macOS support for fast, multimodal (text, image, audio) inference, plus an OpenAI‑compatible API for easy integration. This gives you low-latency, private on-device AI across laptops, phones, and embedded systems, reduces cloud costs and data exposure, and lets you deploy and test new models immediately on target hardware for faster development and better user experience.
https://github.com/NexaAI/nexa-sdk
GitHub
GitHub - NexaAI/nexa-sdk: Run the latest LLMs and VLMs across GPU, NPU, and CPU with PC (Python/C++) & mobile (Android & iOS) support…
Run the latest LLMs and VLMs across GPU, NPU, and CPU with PC (Python/C++) & mobile (Android & iOS) support, running quickly with OpenAI gpt-oss, Granite4, Qwen3VL, Gemma 3n and mor...