This is a multimodal large language model (LLM) Android app.
Multimodal Support: Enables functionality across diverse tasks, including text-to-text, image-to-text, audio-to-text, and text-to-image generation (via diffusion models).
CPU Inference Optimization: MNN-LLM delivers exceptional CPU inference performance in benchmarks on Android.
Broad Model Compatibility: Supports multiple leading model families, such as Qwen, Gemma, Llama (including TinyLlama and MobileLLM), Baichuan, Yi, DeepSeek, InternLM, Phi, ReaderLM, and SmolLM.
Privacy First: Runs entirely on-device, ensuring complete data privacy with no information uploaded to external servers.