How inBuilding differs from the two conventional models at every layer: physical infrastructure, bandwidth enforcement, economics, and tenant capability.
## Six Ways UDI3 Differs
**1. How internet reaches tenants**

*Conventional:* The carrier controls hardware, protocol, and pricing. The building operator has no active role above the physical conduit.

*UDI3:* inBuilding leases raw dark fiber and installs its own optics on each end. The carrier's role ends at the fiber strand. All active equipment is operator-owned and operator-managed.
**2. Per-tenant bandwidth enforcement**

*Conventional:* No meaningful per-tenant enforcement. One tenant can saturate the building's bandwidth and degrade service for everyone else.

*UDI3:* Each tenant's committed rate is enforced at the MPoE switch using H-QoS child policies, in hardware, at line rate. A 1 Gb tenant cannot consume 10 Gb under any circumstances.
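H-QoS child policies run in the switch ASIC, not in software. Purely as an illustration of what a committed-rate child policy does, the sketch below models it as a token-bucket policer; the rates, burst size, and packet counts are made-up assumptions for the example, not inBuilding's actual configuration.

```python
class TokenBucketPolicer:
    """Illustrative model of a per-tenant committed-rate policer.

    Real H-QoS enforcement happens in switch hardware at line rate;
    this sketch only shows the conform/drop decision a child policy makes.
    """

    def __init__(self, cir_bps: float, burst_bytes: float):
        self.rate = cir_bps / 8.0      # committed rate in bytes/second
        self.capacity = burst_bytes    # maximum burst size in bytes
        self.tokens = burst_bytes
        self.last = 0.0                # timestamp of the last refill, seconds

    def allow(self, packet_bytes: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True                # conforms to the committed rate
        return False                   # out of profile: dropped

# A hypothetical 1 Gb/s tenant offering a 10 Gb/s load of 1500-byte packets:
policer = TokenBucketPolicer(cir_bps=1e9, burst_bytes=1e6)
sent = dropped = 0
t = 0.0
for _ in range(10_000):
    t += 1500 * 8 / 1e10               # packet spacing at 10 Gb/s offered load
    sent += 1
    if not policer.allow(1500, t):
        dropped += 1
print(f"dropped {dropped}/{sent}")
```

After the initial burst allowance drains, the policer passes roughly one packet in ten, which is the software analogue of a 1 Gb tenant being held to 1 Gb no matter what it offers.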
**3. Bandwidth ceiling**

*Conventional:* Bandwidth tiers are enforced by the ISP upstream, subject to congestion, and adjusting a tier requires a service order.

*UDI3:* The dark fiber transceiver speed is the physical bandwidth ceiling. It cannot be misconfigured or exceeded by any software setting.
**4. Building interconnection**

*Conventional:* Each building connects to the internet independently. No campus-level fabric, no shared infrastructure, and no wholesale leverage across buildings.

*UDI3:* A single high-capacity backbone feeds all buildings from the data center. The fabric carves that capacity into dedicated per-building port allocations at precisely defined speeds.
**5. Mixed speeds in one building**

*Conventional:* Not possible without separate ISP circuits for each tier: multiple service orders, multiple circuit installations.

*UDI3:* Fully supported from a single dark fiber run. One building simultaneously serves 1 Gb, 10 Gb, 50 Gb, and 100 Gb tenants, with H-QoS enforcing each allocation independently.
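The independence of those allocations can be shown with a toy model. The tier values below come from the text, but the tenant names and offered loads are hypothetical, and real enforcement is per-packet in hardware rather than a per-second calculation.

```python
# Committed information rates per tenant, in Gb/s.
# Tiers are from the text; tenant names are hypothetical.
CIR = {"tenant_a": 1, "tenant_b": 10, "tenant_c": 50, "tenant_d": 100}

def enforce(offered_gbps: dict) -> dict:
    """Each H-QoS child caps its own tenant at the committed rate.
    One tenant's demand never changes another tenant's allocation."""
    return {t: min(load, CIR[t]) for t, load in offered_gbps.items()}

# tenant_d tries to pull 400 Gb/s; every other tenant still gets its full rate.
allocation = enforce({"tenant_a": 0.5, "tenant_b": 10,
                      "tenant_c": 80, "tenant_d": 400})
print(allocation)
```

Because each child policy is evaluated against its own committed rate only, a saturating tenant is clipped to its tier while the others are untouched.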
**6. AI/HPC tenant support**

*Conventional:* Not supported. Standard commercial networks lack lossless fabric, PFC/ECN, and the port density required for GPU cluster workloads.

*UDI3:* AI/HPC-designated buildings are equipped with a lossless spine-leaf GPU fabric running PFC, ECN, and DCQCN. GPU clusters can run where engineers sit, a first for commercial office buildings.
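PFC, ECN, and DCQCN are implemented in NIC and switch silicon, but the shape of DCQCN's congestion control can be sketched. The model below covers only the sender-side reaction to Congestion Notification Packets; the gain `g`, the additive-increase step, and the traffic pattern are invented for illustration, and real DCQCN adds target-rate recovery stages omitted here.

```python
def dcqcn_step(rate_gbps: float, alpha: float, cnp_received: bool,
               g: float = 1 / 16) -> tuple:
    """One simplified DCQCN reaction-point update.

    On a CNP, the sender cuts its rate multiplicatively by alpha/2 and
    raises its congestion estimate alpha; with no CNP, alpha decays and
    the rate creeps back up (increase step is a made-up constant).
    """
    if cnp_received:
        alpha = (1 - g) * alpha + g          # congestion estimate rises toward 1
        rate = rate_gbps * (1 - alpha / 2)   # multiplicative decrease
    else:
        alpha = (1 - g) * alpha              # estimate decays between CNPs
        rate = rate_gbps + 0.1               # simplified additive increase
    return rate, alpha

rate, alpha = 100.0, 0.0
for congested in [True, True, True, False, False, False]:
    rate, alpha = dcqcn_step(rate, alpha, congested)
print(round(rate, 2), round(alpha, 3))
```

Three marked intervals pull a 100 Gb/s flow down sharply; once CNPs stop, the estimate decays and the rate recovers, which is how the fabric stays lossless without parking flows at a permanently reduced rate.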
| Dimension | Conventional | UDI3 |
|---|---|---|
| **Connectivity Architecture** | | |
| How internet reaches tenants | Carrier-managed circuits per tenant, or shared bulk internet with no enforcement. | Operator leases dark fiber per building. Carrier's role ends at the fiber strand. |
| Building interconnection | None. Each building connects independently. | All buildings feed from a single data center backbone with per-building allocations. |
| **Bandwidth Enforcement** | | |
| Per-tenant enforcement | Absent or rudimentary. One tenant can saturate bandwidth and degrade all others. | Hardware-enforced at line rate. H-QoS child policy per tenant on MPoE switch. |
| Enforcement location | At the ISP upstream — miles from the tenant, subject to ISP congestion. | At the MPoE switch inside the building, in hardware, at the point of ingress. |
| Bandwidth ceiling | Software rate limiting. Inconsistent and difficult to audit. | Dark fiber transceiver speed is the physical ceiling. Cannot be exceeded regardless of configuration. |
| **Tenant Capability** | | |
| Bandwidth tiers | Flat-rate shared packages. No guaranteed committed rate. | 1 Gb, 10 Gb, 50 Gb, 100 Gb — hardware-enforced committed information rates. |
| Mixed speeds in one building | Not possible without separate ISP circuits per tier. | Fully supported from one dark fiber run. H-QoS enforces each allocation independently. |
| AI/HPC tenant support | Not supported. Standard networks lack lossless fabric and GPU cluster capability. | Lossless spine-leaf GPU fabric (PFC/ECN/DCQCN) enables GPU training and RDMA workloads. |
> "The carrier's role ends at the fiber strand. Everything above the physical layer — the switching, the enforcement, the SLAs, the per-tenant guarantees — is owned, managed, and backed by inBuilding."
This is not a resale model. It is facilities-based internet infrastructure — operating at the building level, with the economics of a regional carrier and the precision of enterprise networking.
Talk to inBuilding About Your Building →