#cplusplus #cuda #deep_learning #deep_neural_networks #distributed #machine_learning #ml #neural_network
https://github.com/Oneflow-Inc/oneflow
GitHub - Oneflow-Inc/oneflow: OneFlow is a deep learning framework designed to be user-friendly, scalable and efficient.
OneFlow is a deep learning framework designed to be user-friendly, scalable and efficient. - Oneflow-Inc/oneflow
#cplusplus #cuda #deep_learning #gpu #mlp #nerf #neural_network #real_time #rendering
https://github.com/NVlabs/tiny-cuda-nn
GitHub - NVlabs/tiny-cuda-nn: Lightning fast C++/CUDA neural network framework
Lightning fast C++/CUDA neural network framework. - NVlabs/tiny-cuda-nn
#cuda #3d_reconstruction #computer_graphics #computer_vision #function_approximation #machine_learning #nerf #neural_network #real_time #real_time_rendering #realtime #signed_distance_functions
https://github.com/NVlabs/instant-ngp
GitHub - NVlabs/instant-ngp: Instant neural graphics primitives: lightning fast NeRF and more
Instant neural graphics primitives: lightning fast NeRF and more - NVlabs/instant-ngp
#python #anomaly_detection #anomaly_localization #anomaly_segmentation #neural_network_compression #openvino #unsupervised_learning
https://github.com/openvinotoolkit/anomalib
GitHub - open-edge-platform/anomalib: An anomaly detection library comprising state-of-the-art algorithms and features such as…
An anomaly detection library comprising state-of-the-art algorithms and features such as experiment management, hyper-parameter optimization, and edge inference. - open-edge-platform/anomalib
#python #action_recognition #anomaly_detection #audio_processing #background_removal #crowd_counting #deep_learning #face_detection #face_recognition #fashion_ai #gan #hand_detection #image_classification #image_segmentation #machine_learning #neural_network #object_detection #object_recognition #object_tracking #pose_estimation
https://github.com/axinc-ai/ailia-models
GitHub - axinc-ai/ailia-models: The collection of pre-trained, state-of-the-art AI models for ailia SDK
The collection of pre-trained, state-of-the-art AI models for ailia SDK - axinc-ai/ailia-models
#jupyter_notebook #andrew_ng #andrew_ng_course #andrew_ng_machine_learning #andrewng #coursera #coursera_machine_learning #data_science #deep_learning #deep_neural_networks #dl #machine_learning #ml #neural_network #neural_networks #numpy #pandas #python #pytorch #reinforcement_learning
https://github.com/ashishpatel26/Andrew-NG-Notes
GitHub - ashishpatel26/Andrew-NG-Notes: This is Andrew NG Coursera Handwritten Notes.
This is Andrew NG Coursera Handwritten Notes. - ashishpatel26/Andrew-NG-Notes
#other #ai #artificial_intelligence #guide #keywords #midjourney #neural_network #reference #styles
https://github.com/willwulfken/MidJourney-Styles-and-Keywords-Reference
GitHub - willwulfken/MidJourney-Styles-and-Keywords-Reference: A reference containing Styles and Keywords that you can use with…
A reference containing Styles and Keywords that you can use with MidJourney AI. There are also pages showing resolution comparison, image weights, and much more! - willwulfken/MidJourney-Styles-and...
#python #anime4k #machine_learning #ncnn #neural_network #qt5 #realsr #rife #srmd #super_resolution #upscaling #video #video_enlarger #vulkan #waifu2x
https://github.com/k4yt3x/video2x
GitHub - k4yt3x/video2x: A machine learning-based video super resolution and frame interpolation framework. Est. Hack the Valley…
A machine learning-based video super resolution and frame interpolation framework. Est. Hack the Valley II, 2018. - k4yt3x/video2x
#other #2022 #ai #artificial_intelligence #computer_science #computer_vision #deep_learning #innovation #machine_learning #machinelearning #neural_network #paper #papers #python #sota #state_of_art #state_of_the_art #technology
https://github.com/louisfb01/best_AI_papers_2022
GitHub - louisfb01/best_AI_papers_2022: A curated list of the latest breakthroughs in AI (in 2022) by release date with a clear…
A curated list of the latest breakthroughs in AI (in 2022) by release date with a clear video explanation, link to a more in-depth article, and code. - louisfb01/best_AI_papers_2022
#python #cifar10 #david_page #deep_learning #experimentation #language_models_are_next #machine_learning #neural_network #resnet9 #responsive_to_issue_tickets #world_record
https://github.com/tysam-code/hlb-CIFAR10
GitHub - tysam-code/hlb-CIFAR10: Train to 94% on CIFAR-10 in <6.3 seconds on a single A100. Or ~95.79% in ~110 seconds (or less!)
Train to 94% on CIFAR-10 in <6.3 seconds on a single A100. Or ~95.79% in ~110 seconds (or less!) - tysam-code/hlb-CIFAR10
#rust #approximate_nearest_neighbor_search #embeddings_similarity #hnsw #image_search #knn_algorithm #machine_learning #matching #mlops #nearest_neighbor_search #neural_network #neural_search #recommender_system #search #search_engine #search_engines #similarity_search #vector_database #vector_search #vector_search_engine
https://github.com/qdrant/qdrant
GitHub - qdrant/qdrant: Qdrant - High-performance, massive-scale Vector Database and Vector Search Engine for the next generation…
Qdrant - High-performance, massive-scale Vector Database and Vector Search Engine for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/ - qdrant/qdrant
#jupyter_notebook #computer_vision #deep_learning #image_classification #imagenet #neural_network #object_detection #pretrained_models #pretrained_weights #pytorch #semantic_segmentation #transfer_learning
https://github.com/Deci-AI/super-gradients
GitHub - Deci-AI/super-gradients: Easily train or fine-tune SOTA computer vision models with one open source training library.…
Easily train or fine-tune SOTA computer vision models with one open source training library. The home of Yolo-NAS. - Deci-AI/super-gradients
#python #dataset #deep_learning #machine_learning #neural_network #pytorch #streaming
https://github.com/mosaicml/streaming
GitHub - mosaicml/streaming: A Data Streaming Library for Efficient Neural Network Training
A Data Streaming Library for Efficient Neural Network Training - mosaicml/streaming
#rust #deep_learning #machine_learning #neural_network #pytorch
https://github.com/LaurentMazare/tch-rs
GitHub - LaurentMazare/tch-rs: Rust bindings for the C++ api of PyTorch.
Rust bindings for the C++ api of PyTorch. - LaurentMazare/tch-rs
#python #autograd #deep_learning #gpu #machine_learning #neural_network #numpy #tensor
PyTorch is a powerful Python package for tensor computation and deep neural networks, with strong GPU acceleration that makes your computations much faster. Key benefits:
- **GPU-Accelerated Tensors**: PyTorch lets you run tensor computations on GPUs with a NumPy-like interface, but much faster.
- **Flexible Python Integration**: You can seamlessly use other Python packages like NumPy, SciPy, and Cython with PyTorch.
- **Fast and Efficient**: PyTorch has minimal framework overhead and is highly optimized for speed and memory efficiency.
Overall, PyTorch makes deep learning projects easier and faster by providing a flexible and efficient environment.
https://github.com/pytorch/pytorch
GitHub - pytorch/pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Tensors and Dynamic neural networks in Python with strong GPU acceleration - pytorch/pytorch
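A minimal sketch of the GPU-accelerated, NumPy-like tensor and autograd API described above (assumes PyTorch is installed; falls back to CPU if no CUDA device is available):
```python
import torch

# Use the GPU when one is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# NumPy-style tensor math, executed on the selected device.
x = torch.randn(1024, 1024, device=device)
y = x @ x.T + 1.0
print(y.mean().item())

# Autograd: gradients are tracked for tensors created with requires_grad=True.
w = torch.randn(1024, requires_grad=True, device=device)
loss = (x @ w).pow(2).mean()
loss.backward()
print(w.grad.shape)  # torch.Size([1024])
```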
#cplusplus #deep_learning #deep_neural_networks #distributed #machine_learning #ml #neural_network #python #tensorflow
TensorFlow is a powerful tool for machine learning that helps you build and deploy AI applications easily. It was developed by Google and is now open source, meaning anyone can use and contribute to it. TensorFlow provides tools, libraries, and a strong community to support your work. You can install it using Python with a simple command like `pip install tensorflow`, and it supports various devices including GPUs. This makes it versatile for researchers and developers alike, allowing you to push the boundaries of machine learning and create innovative applications.
https://github.com/tensorflow/tensorflow
GitHub - tensorflow/tensorflow: An Open Source Machine Learning Framework for Everyone
An Open Source Machine Learning Framework for Everyone - tensorflow/tensorflow
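A minimal sketch of the workflow described above, after `pip install tensorflow` (the data here is random and purely illustrative):
```python
import numpy as np
import tensorflow as tf

# TensorFlow lists any GPUs it can use; a CPU-only install simply prints [].
print(tf.config.list_physical_devices("GPU"))

# A tiny Keras model to illustrate the high-level API.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Train briefly on random data just to show the build/compile/fit steps.
x = np.random.rand(256, 10).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(x[:3], verbose=0))
```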
#c_lang #convolutional_neural_network #convolutional_neural_networks #cpu #inference #inference_optimization #matrix_multiplication #mobile_inference #multithreading #neural_network #neural_networks #simd
XNNPACK is a powerful library that makes neural networks run faster on a wide range of devices, from smartphones and desktop computers to Raspberry Pi boards, and it supports many processor architectures and operating systems. It is not meant to be used directly by end users; instead, it provides low-level building blocks that machine learning frameworks such as TensorFlow Lite, PyTorch, and ONNX Runtime use to perform better. As a result, apps and programs built on those frameworks can run neural networks more quickly and efficiently, saving time and improving performance.
https://github.com/google/XNNPACK
GitHub - google/XNNPACK: High-efficiency floating-point neural network inference operators for mobile, server, and Web
High-efficiency floating-point neural network inference operators for mobile, server, and Web - google/XNNPACK
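Since XNNPACK is reached through higher-level frameworks rather than called directly, here is a hedged sketch of how it is typically used in practice: running a TensorFlow Lite model through the Python interpreter, which in recent TFLite builds routes supported float operators through the XNNPACK delegate by default ("model.tflite" is a placeholder path):
```python
import numpy as np
import tensorflow as tf

# Recent TensorFlow Lite builds enable the XNNPACK delegate by default for
# supported float ops, so plain interpreter usage usually benefits from it.
interpreter = tf.lite.Interpreter(model_path="model.tflite",  # placeholder
                                  num_threads=4)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed random data of the expected shape and dtype, then run inference.
interpreter.set_tensor(inp["index"],
                       np.random.rand(*inp["shape"]).astype(inp["dtype"]))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]).shape)
```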
#python #ai #artificial_intelligence #cython #data_science #deep_learning #entity_linking #machine_learning #named_entity_recognition #natural_language_processing #neural_network #neural_networks #nlp #nlp_library #spacy #text_classification #tokenization
spaCy is a powerful tool for understanding and processing human language. It helps computers analyze text by breaking it into parts like words, sentences, and entities (like names or places). This makes it useful for tasks such as identifying who is doing what in a sentence or finding specific information from large texts. Using spaCy can save time and improve accuracy compared to manual analysis. It supports many languages and integrates well with advanced models like BERT, making it ideal for real-world applications.
https://github.com/explosion/spaCy
GitHub - explosion/spaCy: 💫 Industrial-strength Natural Language Processing (NLP) in Python
💫 Industrial-strength Natural Language Processing (NLP) in Python - explosion/spaCy
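A minimal sketch of the text analysis described above (tokens, sentence structure, and named entities), assuming the small English pipeline has been downloaded with `python -m spacy download en_core_web_sm`:
```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

# Tokens with part-of-speech tags and their syntactic heads.
for token in doc:
    print(token.text, token.pos_, token.dep_, token.head.text)

# Named entities: the "who, what, where" pieces of the text.
for ent in doc.ents:
    print(ent.text, ent.label_)
```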
#python #deep_learning #intel #machine_learning #neural_network #pytorch #quantization
Intel Extension for PyTorch boosts the speed of PyTorch on Intel hardware, including both CPUs and GPUs, by using special features like AVX-512, AMX, and XMX for faster calculations. It supports many popular large language models (LLMs) such as Llama, Qwen, Phi, and DeepSeek, offering optimizations for different data types and easy GPU acceleration. This means you can run advanced AI models much faster and more efficiently on your Intel computer, with simple setup and support for both ready-made and custom models.
https://github.com/intel/intel-extension-for-pytorch
GitHub - intel/intel-extension-for-pytorch: A Python package for extending the official PyTorch that can easily obtain performance…
A Python package for extending the official PyTorch that can easily obtain performance on Intel platform - intel/intel-extension-for-pytorch
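A minimal sketch of the documented CPU workflow, assuming the extension and torchvision are installed (bfloat16 autocast only pays off on CPUs with the relevant instruction sets):
```python
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

# Take an ordinary PyTorch model and switch it to inference mode.
model = models.resnet50(weights=None).eval()

# ipex.optimize() applies Intel-specific operator and graph optimizations
# (e.g. kernels that use AVX-512/AMX where the CPU supports them).
model = ipex.optimize(model, dtype=torch.bfloat16)

x = torch.randn(1, 3, 224, 224)
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(x)
print(out.shape)
```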
#cplusplus #arm #baidu #deep_learning #embedded #fpga #mali #mdl #mobile #mobile_deep_learning #neural_network
Paddle Lite is a lightweight, high-performance deep learning inference framework designed to run AI models efficiently on mobile, embedded, and edge devices. It supports multiple platforms like Android, iOS, Linux, Windows, and macOS, and languages including C++, Java, and Python. You can easily convert models from other frameworks to PaddlePaddle format, optimize them for faster and smaller deployment, and run them with ready-made examples. This helps you deploy AI applications quickly on various devices with low memory use and fast speed, making it ideal for real-time, resource-limited environments. It also supports many hardware accelerators for better performance.
https://github.com/PaddlePaddle/Paddle-Lite
GitHub - PaddlePaddle/Paddle-Lite: PaddlePaddle High Performance Deep Learning Inference Engine for Mobile and Edge (飞桨高性能深度学习端侧推理引擎)
PaddlePaddle High Performance Deep Learning Inference Engine for Mobile and Edge (飞桨高性能深度学习端侧推理引擎) - PaddlePaddle/Paddle-Lite
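A rough sketch of the light (mobile) inference workflow described above, following the project's Python demos; "model.nb" is a placeholder for a model already converted with Paddle Lite's opt tool, and exact method names may vary between releases:
```python
import numpy as np
from paddlelite.lite import MobileConfig, create_paddle_predictor

# Load a model that was previously optimized into the .nb format.
config = MobileConfig()
config.set_model_from_file("model.nb")  # placeholder path
predictor = create_paddle_predictor(config)

# Fill the first input with dummy image-shaped data and run inference.
input_tensor = predictor.get_input(0)
input_tensor.from_numpy(np.ones((1, 3, 224, 224), dtype="float32"))
predictor.run()

output_tensor = predictor.get_output(0)
print(output_tensor.numpy().shape)
```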