HANA Architecture on Azure
Design HANA deployments on Azure: scale-up single-node for most workloads, scale-out multi-node with shared storage, and the now-retired HANA Large Instances. Learn memory sizing and dynamic tiering concepts.
Designing the HANA database layer
☁️ Mei opens the architecture document. “We have VMs, storage, and networking sorted out. Now the big question: how do we architect the HANA database itself? Do we put everything on one big VM or spread it across multiple nodes?”
🏗️ Raj thinks for a moment. “Our database is 2 TB. I assume that fits on a single VM?”
☁️ Mei nods. “Easily. But not every customer is PrecisionSteel. Some have 8 TB or even 20 TB databases. Let me walk you through all three options so you can handle any exam question.”
Think of it like moving furniture.
Scale-up is hiring one very strong mover with a giant truck — they carry everything alone. Scale-out is hiring a team of movers who split the load across multiple trucks. HANA Large Instances used to be a warehouse-sized crane for the heaviest loads, but it was retired at the end of 2025. Most moves need just the one strong mover (scale-up). You only bring the team when the load is truly enormous.
Scale-up: single-node HANA
Scale-up is the most common HANA deployment on Azure. One VM hosts the entire HANA database. This is simpler to manage, easier to back up, and less complex for HA/DR.
Key characteristics:
- Single VM with M-series, Mv2-series, or newer Msv3/Mdsv3-series
- Up to approximately 12 TB memory on Mv2 (higher with newer Msv3/Mdsv3 generations)
- All HANA storage (/hana/data, /hana/log, /hana/shared) attached directly
- HA achieved via HANA System Replication (HSR) to a second VM
- Suitable for the vast majority of SAP workloads
🏗️ Raj confirms. “So PrecisionSteel is a textbook scale-up case. One M192ms for production, one for the HA replica.”
☁️ Mei agrees. “Exactly. Scale-up is always the first choice unless the database physically does not fit in a single VM.”
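Mei's rule can be sketched as a tiny decision helper. This is an illustrative sketch, not an official sizing tool: the function name and the 12 TB single-VM ceiling are assumptions based on the approximate Mv2-series limit mentioned above (newer Msv3/Mdsv3 generations go higher).

```python
MAX_SINGLE_VM_TB = 12  # assumed ceiling, roughly the Mv2-series maximum

def recommend_architecture(db_memory_tb: float) -> str:
    """Scale-up is always the first choice unless the database
    physically does not fit in the largest single VM."""
    if db_memory_tb <= MAX_SINGLE_VM_TB:
        return "scale-up"
    return "scale-out"

print(recommend_architecture(2))   # PrecisionSteel's 2 TB database
print(recommend_architecture(20))  # a 20 TB database needs multiple nodes
```

With a real customer you would also factor in growth headroom and the SAP-certified VM list, but the decision logic stays the same: fit in one VM if you can.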
Scale-out: multi-node HANA
When a HANA database exceeds the memory of the largest available VM, you distribute it across multiple worker nodes. Each node holds a portion of the data, and HANA coordinates queries across them.
Key characteristics:
- Multiple VMs each running a HANA worker process
- Azure NetApp Files (ANF) is recommended for shared storage between nodes (NFS on Azure Files also supported for certain configurations)
- Typically includes a standby node for automatic failover
- More complex to manage, monitor, and back up
- Used for very large BW/4HANA or S/4HANA databases
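A rough back-of-envelope for how many VMs a scale-out landscape needs: divide the database across worker nodes, then add the standby node mentioned above. The function and the example node size are assumptions for illustration; real node counts come from SAP sizing and certified configurations.

```python
import math

def scale_out_nodes(db_memory_tb: float, node_memory_tb: float,
                    standby: bool = True) -> int:
    """Worker nodes needed to hold the distributed data,
    plus an optional standby node for automatic failover."""
    workers = math.ceil(db_memory_tb / node_memory_tb)
    return workers + (1 if standby else 0)

# e.g. a 20 TB database on hypothetical 4 TB worker nodes:
# 5 workers + 1 standby = 6 VMs
print(scale_out_nodes(20, 4))
```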
ANF is the primary choice for scale-out
The exam tests whether you know that HANA scale-out on Azure requires shared NFS storage between nodes. Azure NetApp Files is the recommended and most commonly tested option. NFS on Azure Files is also supported for certain configurations. You cannot use Azure Managed Disks for this because they cannot be shared across VMs in the way HANA requires.
HANA Large Instances (HLI) — retired December 2025
HANA Large Instances were bare-metal servers colocated in Azure datacenters. They were created when Azure VMs could not support large HANA databases. HLI was fully decommissioned on December 31, 2025. With Mv2-series reaching 12 TB and newer Msv3/Mdsv3 VMs offering even more, HLI is no longer available for new deployments.
Key facts for the exam:
- Were bare-metal hardware — no hypervisor overhead
- Were connected to Azure VNets via dedicated ExpressRoute
- Offered up to 24 TB memory configurations
- Retired because Azure VM families now cover the workloads HLI served
- Existing HLI customers must migrate to Azure VMs (HSR migration pattern)
| Feature | Scale-up (Single Node) | Scale-out (Multi-Node) | HANA Large Instances (RETIRED) |
|---|---|---|---|
| Architecture | One VM, one HANA instance | Multiple VMs, distributed HANA | Was bare-metal server (retired Dec 31, 2025) |
| Max memory | Up to ~12 TB (Mv2), higher with Msv3/Mdsv3 | Aggregate across nodes (practically unlimited) | Was up to 24 TB |
| Shared storage | Not needed | ANF recommended (NFS on Azure Files also supported) | Was direct-attached SAN |
| Management complexity | Low | High | N/A — no longer available |
| HA approach | HSR to second VM | HSR + standby node + shared NFS | Was HSR + storage replication |
| Current recommendation | Preferred for most workloads | When database exceeds single VM | Retired — migrate off HLI to Azure VMs |
| Exam weight | Heavily tested | Know when to recommend | Know it existed and HLI-to-VM migration |
⚠️ Recently changed — exam alert
HANA Large Instances (HLI) were fully retired on December 31, 2025. Microsoft announced the retirement in September 2022 with a 3-year transition period. The exam may still ask about HLI — but the correct answers will focus on migrating AWAY from HLI to Azure VMs, not on deploying new HLI systems. If a question offers ‘deploy HANA Large Instance’ as a solution for a new workload, it is wrong.
Memory sizing for HANA
Getting the memory size right is critical — too small and HANA starts unloading tables and paging to disk, degrading performance; too large and you waste money.
Sizing approaches:
- SAP Quick Sizer — SAP’s official tool for new implementations, estimates memory based on transaction volumes
- HANA memory sizing report — for existing HANA systems, check current peak memory usage and add growth headroom
- Migration sizing — for non-HANA to HANA migrations, use SAP’s conversion factors (database size does not equal HANA memory requirement due to compression)
Rule of thumb: HANA compresses data significantly (2x to 5x depending on data type). A 10 TB Oracle database might only need 3-4 TB of HANA memory. Always verify with SAP tools rather than guessing.
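The rule of thumb above can be turned into a quick estimate. This is a back-of-envelope sketch only — the function name and default factor are assumptions, and real sizing must come from SAP Quick Sizer or the HANA sizing reports, which also account for working memory on top of the data footprint.

```python
def hana_memory_estimate_tb(source_db_tb: float,
                            compression: float = 3.0) -> float:
    """Divide the source database footprint by an assumed
    compression factor (the text's rule of thumb is 2x to 5x)."""
    return source_db_tb / compression

# A 10 TB source database at 2x-5x compression brackets
# the 3-4 TB estimate from the text
print(round(hana_memory_estimate_tb(10, 2.0), 1))  # pessimistic end
print(round(hana_memory_estimate_tb(10, 5.0), 1))  # optimistic end
```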
Dynamic tiering
HANA dynamic tiering and its successor, Native Storage Extension (NSE), allow less-frequently accessed data to be stored on disk rather than in memory. This reduces memory requirements for large databases.
- Hot data stays in memory for fast access
- Warm data is moved to disk-based storage
- HANA identifies warm-data candidates based on access patterns (the NSE advisor can recommend which tables or partitions to page to disk)
- Reduces the VM memory size needed, which can lower costs
- Not all SAP applications support dynamic tiering — check compatibility
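The memory saving is easy to reason about: with warm data paged to disk, only the hot data must fit in memory. The sketch below is a simplification with assumed names — in practice NSE still reserves a buffer cache for paging warm data, so the real requirement is somewhat higher.

```python
def in_memory_requirement_gb(total_gb: float, warm_gb: float) -> float:
    """With warm data tiered to disk, only the hot portion
    needs to stay resident in memory (simplified model)."""
    return total_gb - warm_gb

# PrecisionSteel: 2 TB total with 800 GB of historical data
# queried only during annual audits -> ~1.2 TB hot in memory
print(in_memory_requirement_gb(2000, 800))
```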
Exam tip: Dynamic tiering reduces memory cost
If the exam presents a scenario where the HANA database is large but much of the data is historical and rarely queried, dynamic tiering (or Native Storage Extension) is the answer to reduce memory requirements. The key phrase to look for is “infrequently accessed data” or “historical data.”
Knowledge check
A customer has a 15 TB SAP HANA database that cannot be reduced with data archiving. What HANA architecture should Mei recommend on Azure?
PrecisionSteel's 2 TB HANA database includes 800 GB of historical data that is queried only during annual audits. What feature can reduce their memory requirement?
Yuki is designing a HANA scale-out architecture for a customer with a 20 TB database. What shared storage service is required for HANA scale-out deployments on Azure VMs?
Summary
You now understand the three HANA architecture options: scale-up for most workloads (preferred), scale-out with shared NFS storage (ANF recommended) when a single VM is not enough, and the now-retired HLI. Memory sizing uses SAP tools and accounts for HANA compression, and dynamic tiering can reduce requirements for databases with lots of historical data.
Next, we look at the SAP application tier — the servers that sit above the database and handle business logic, messaging, and user connections.
🎬 Video coming soon