The Technology Innovation Institute (TII), the applied research arm of Abu Dhabi’s Advanced Technology Research Council (ATRC), has released Falcon H1R 7B, a new artificial intelligence model designed to deliver strong reasoning ability in a smaller and more efficient format.
Falcon H1R 7B uses 7 billion parameters and is built to balance performance with speed and lower computing demands. Despite its comparatively small size, the model matches or exceeds the performance of much larger open-source models, including Microsoft’s Phi 4 Reasoning Plus 14B, Alibaba’s Qwen3 32B, and NVIDIA’s Nemotron H 47B. The release highlights TII’s focus on efficient model design and supports the UAE’s wider technology ambitions.
His Excellency Faisal Al Bannai, Adviser to the UAE President and Secretary General of the Advanced Technology Research Council, said: “Falcon H1R reflects the UAE’s commitment to building open and responsible AI that delivers real national and global value. By bringing world-class reasoning into a compact, efficient model, we are expanding access to advanced AI in a way that supports economic growth, research leadership, and long-term technological resilience.”

Advances in Reasoning Design
Falcon H1R 7B is built on the Falcon H1 7B base model and uses a hybrid Transformer–Mamba architecture, a design that improves processing speed while maintaining strong reasoning accuracy. The model also uses a targeted training method to strengthen test-time reasoning.
“Falcon H1R 7B marks a leap forward in the reasoning capabilities of compact AI systems,” said Dr Najwa Aaraj, CEO of TII. “It achieves near-perfect scores on elite benchmarks while keeping memory and energy use exceptionally low, critical criteria for real-world deployment and sustainability.”
The model applies what researchers describe as latent intelligence, which allows it to solve complex tasks using fewer resources. The result places Falcon H1R 7B at a point where performance and efficiency improve together rather than trading off against each other.
Benchmark Results
Testing shows Falcon H1R 7B performing strongly across multiple categories:
In mathematics, the model scored 88.1% on AIME-24, exceeding ServiceNow AI’s Apriel 1.5 (15B), which scored 86.2%.
In coding and agent-based tasks, Falcon H1R 7B achieved 68.6% accuracy. It led models under 8B parameters and outperformed several larger systems on benchmarks including LCB v6, SciCode Sub, and TB Hard.
In general reasoning tasks, the model matched or came close to the results of larger systems such as Microsoft’s Phi 4 Reasoning Plus (14B), while using fewer parameters.
In efficiency tests, Falcon H1R 7B reached up to 1,500 tokens per second per GPU at batch size 64. This was almost twice the speed of Qwen3-8B, without a drop in accuracy.
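As a back-of-the-envelope illustration of what the throughput figure implies per request, here is a minimal sketch. It assumes (the article does not say) that the 1,500 tokens per second is aggregate throughput across the whole batch rather than the speed of a single stream:

```python
# Figures reported in the benchmark section above.
aggregate_tokens_per_sec = 1500  # tokens/s per GPU (assumed to be aggregate)
batch_size = 64

# Under the aggregate assumption, each of the 64 concurrent requests
# decodes at roughly this rate:
per_stream_tokens_per_sec = aggregate_tokens_per_sec / batch_size

print(round(per_stream_tokens_per_sec, 1))  # prints 23.4
```

Read the other way, if the figure were per stream, aggregate throughput would be 64 times higher, so the interpretation matters when comparing serving costs across models.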
“This model is the result of world-class research and engineering. It shows how scientific precision and scalable design can go hand in hand,” said Dr Hakim Hacid, Chief Researcher at TII’s Artificial Intelligence and Digital Research Centre. “We are proud to deliver a model that enables the community to build smarter, faster, and more accessible AI systems.”

Open Access for the Research Community
Falcon H1R 7B is released as open source under the Falcon TII License. The model and full technical documentation are available on Hugging Face. The release includes detailed information on training methods and benchmark results.
The launch builds on the wider Falcon programme, which has delivered several high-ranking models since its start. Earlier Falcon releases achieved leading global positions in their size categories. The programme continues to show that smaller models can compete with larger systems when designed with care, reinforcing the UAE’s role in advanced AI research.
