#c_lang #aarch64 #arduino #arm #embedded #esp32 #esp8266 #interpreter #iot #mips #risc_v #runtime #wasi #wasm #webassembly
https://github.com/wasm3/wasm3
GitHub - wasm3/wasm3: 🚀 A fast WebAssembly interpreter and the most universal WASM runtime
#vue #apple_silicon #apple_silicon_launch #arm #arm64 #compatibility #macos #wwdc
https://github.com/ThatGuySam/doesitarm
GitHub - ThatGuySam/doesitarm: 🦾 A list of reported app support for Apple Silicon as well as Apple M4 and M3 Ultra Macs
#cplusplus #arm #arm_assembly #assembly #assembly_arm #assembly_language #assembly_language_programming #assembly_x86_64 #c #c_plus_plus #cyber #cyber_security #cyber_threat_intelligence #cybersecurity #hack #hacking #malware #reverse_engineering #reverse_engineering_tutorial #x64 #x86
https://github.com/mytechnotalent/Reverse-Engineering-Tutorial
A FREE comprehensive reverse engineering tutorial covering x86, x64, 32-bit/64-bit ARM, 8-bit AVR and 32-bit RISC-V architectures. - mytechnotalent/Reverse-Engineering
#c_lang #amd #android #arm #cpus #intel #linux #macos #microarchitecture #windows
https://github.com/Dr-Noob/cpufetch
GitHub - Dr-Noob/cpufetch: Simple yet fancy CPU architecture fetching tool
#cplusplus #3d #3d_perception #arm #computer_graphics #cpp #cuda #gpu #gui #machine_learning #mesh_processing #odometry #opengl #pointcloud #python #pytorch #reconstruction #registration #rendering #tensorflow #visualization
https://github.com/isl-org/Open3D
GitHub - isl-org/Open3D: Open3D: A Modern Library for 3D Data Processing
#bicep #arm #azure #bicep_templates #building_block #deployment_automation #iac #microsoft #modules #platform #publishing #testing
https://github.com/Azure/ResourceModules
This repository includes a CI platform for and collection of mature and curated Bicep modules. The platform supports both ARM and Bicep and can be leveraged using GitHub actions as well as Azure De...
#c_lang #arm #baremetal #cmsis #ethernet #gcc #gpio #irq #make #stm32 #tutorial #uart #webserver
https://github.com/cpq/bare-metal-programming-guide
GitHub - cpq/bare-metal-programming-guide: A bare metal programming guide (ARM microcontrollers)
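The guide's central technique is driving microcontroller peripherals by writing to memory-mapped registers through volatile pointers. Below is a minimal sketch of that idea using STM32-style GPIO register names (MODER, ODR, abridged — real GPIO blocks have more registers); it operates on a local stand-in struct instead of a real hardware address, so it runs anywhere and is only an illustration, not code from the guide.

```cpp
#include <cassert>
#include <cstdint>

// Memory-mapped peripheral access as used in STM32 bare-metal code: the
// peripheral is modeled as a struct of volatile 32-bit registers, and all
// configuration happens through read-modify-write of those registers.
struct gpio {
    volatile uint32_t MODER;  // pin mode: 2 bits per pin, 01 = output
    volatile uint32_t ODR;    // output data register: 1 bit per pin
};

static inline void gpio_set_output(struct gpio* g, int pin) {
    g->MODER &= ~(3u << (pin * 2));  // clear the pin's 2-bit mode field
    g->MODER |= 1u << (pin * 2);     // 01 = general-purpose output mode
}

static inline void gpio_toggle(struct gpio* g, int pin) {
    g->ODR ^= 1u << pin;  // flip the pin's output bit
}
```

On real hardware the struct would be pinned to the peripheral's bus address, e.g. `#define GPIOA ((struct gpio*) 0x40020000)` on an STM32F4, and the same register writes would change actual pin states.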
#rust #aarch64 #arm #bsd #cargo #cross_compilation #cross_testing #linux #mips #powerpc #s390x #sparc #windows #x86
https://github.com/cross-rs/cross
GitHub - cross-rs/cross: “Zero setup” cross compilation and “cross testing” of Rust crates
#cplusplus #arm #c #dos #dosbox #dosbox_staging #emulator #games #linux #macos #meson #opengl #sdl2 #windows #x86
https://github.com/dosbox-staging/dosbox-staging
DOSBox Staging is a modern continuation of DOSBox with advanced features and current development practices. - dosbox-staging/dosbox-staging
#cplusplus #aarch64 #arm #arm64 #avx2 #avx512 #c_plus_plus #clang #clang_cl #cpp11 #gcc_compiler #json #json_parser #json_pointer #loongarch #neon #simd #sse42 #vs2019 #x64
The simdjson library parses JSON files quickly and efficiently. It uses special SIMD (single instruction, multiple data) CPU instructions to parse JSON up to 4 times faster than other popular parsers. Key benefits:
- **Fast**: parses JSON much quicker than other libraries.
- **Thorough**: ensures full JSON and UTF-8 validation without losing any data.
- **Surprise-free**: designed to avoid unexpected errors and surprises.
Using simdjson can significantly speed up your application's performance when dealing with large amounts of JSON data.
https://github.com/simdjson/simdjson
Parsing gigabytes of JSON per second : used by Facebook/Meta Velox, the Node.js runtime, ClickHouse, WatermelonDB, Apache Doris, Milvus, StarRocks - simdjson/simdjson
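The core trick behind that speed can be shown with a toy: simdjson's first stage classifies the input 64 bytes at a time into a bitmask of structural characters, then extracts their positions from the mask. The sketch below reproduces that bitmask flow in portable scalar C++ — no SIMD intrinsics, and `__builtin_ctzll` assumes GCC/Clang — so it illustrates the idea only, not simdjson's actual code or API.

```cpp
#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

// Toy version of simdjson's "stage 1": walk the input in 64-byte blocks,
// build a bitmask marking structural characters ({ } [ ] : ,) and quotes,
// then turn each set bit into the index of a structural character. Real
// simdjson computes the whole 64-bit mask with a few SIMD instructions;
// this scalar loop only mirrors the bitmask-based control flow.
std::vector<size_t> structural_indexes(const std::string& json) {
    std::vector<size_t> out;
    for (size_t block = 0; block < json.size(); block += 64) {
        const size_t len = std::min<size_t>(64, json.size() - block);
        uint64_t mask = 0;
        for (size_t i = 0; i < len; ++i) {
            const char c = json[block + i];
            const bool structural = (c == '{' || c == '}' || c == '[' ||
                                     c == ']' || c == ':' || c == ',' ||
                                     c == '"');
            mask |= static_cast<uint64_t>(structural) << i;
        }
        // Pop set bits lowest-first: each is a structural char's offset.
        while (mask != 0) {
            out.push_back(block + static_cast<size_t>(__builtin_ctzll(mask)));
            mask &= mask - 1;  // clear the lowest set bit
        }
    }
    return out;
}
```

For the input `{"a": [1, 2]}` this yields the positions of `{`, both quotes, `:`, `[`, `,`, `]`, and `}`; a later stage can then parse values between those indexes without re-scanning every byte.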
#cplusplus #arm #convolution #deep_learning #embedded_devices #llm #machine_learning #ml #mnn #transformer #vulkan #winograd_algorithm
MNN is a lightweight, efficient deep learning framework for running AI models on mobile and other resource-constrained devices. It supports many types of AI models and handles tasks like image recognition and language processing quickly and locally on-device, so AI features work without sending data to the cloud, which improves privacy and speed. MNN is used in many apps, including Alibaba's, supports platforms such as Android and iOS, and can also shrink AI models to make them faster and more efficient.
https://github.com/alibaba/MNN
MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba. Full multimodal LLM Android App:[MNN-LLM-Android](./apps/Android/MnnLlmChat/READ...
#cplusplus #arm #baidu #deep_learning #embedded #fpga #mali #mdl #mobile #mobile_deep_learning #neural_network
Paddle Lite is a lightweight, high-performance deep learning inference framework designed to run AI models efficiently on mobile, embedded, and edge devices. It supports multiple platforms like Android, iOS, Linux, Windows, and macOS, and languages including C++, Java, and Python. You can easily convert models from other frameworks to PaddlePaddle format, optimize them for faster and smaller deployment, and run them with ready-made examples. This helps you deploy AI applications quickly on various devices with low memory use and fast speed, making it ideal for real-time, resource-limited environments. It also supports many hardware accelerators for better performance.
https://github.com/PaddlePaddle/Paddle-Lite
GitHub - PaddlePaddle/Paddle-Lite: PaddlePaddle High Performance Deep Learning Inference Engine for Mobile and Edge (飞桨高性能深度学习端侧推理引擎)