Aurora 615
  • Marvell Teralynx 5
  • 8 ✕ 100G + 48 ✕ 25G
  • Intel Pentium D-1508 CPU
  • BMC enabled
  • SONiC ready

With the emergence of autonomous cars, 5G cloud-native architectures, smart cities, and other edge applications, a growing share of data is generated and processed in edge data centers. Beyond raw performance, critical data center requirements include real-time, actionable analytics, programmability for a future-proof infrastructure, low latency, high power efficiency, and flexibility.

The Aurora 615, built on 2.4Tbps Marvell Teralynx switching silicon, is designed to answer these challenges. Up to 2x lower latency, a highly power-efficient design that reduces carbon footprint, and extensive telemetry and analytics are packed into a 1U box at a breakthrough cost for its class.

Aurora 615 provides breakthrough capabilities for ToR, Enterprise, Edge, and 5G deployment scenarios with:

  • The largest on-chip buffers in a switch of its class, delivering best-in-class network quality
  • Robust RoCE with the low latency and rich QoS needed for distributed storage and AI applications
  • Leading table and ACL sizes needed at the access layer and edge
  • FLASHLIGHT actionable, real-time telemetry correlated to applications, essential for monitoring, troubleshooting, and simplifying network operations
  • Breakthrough cost, delivering 2x+ performance per dollar vs. alternatives
  • Software programmability to support new protocols, achieved without impact on throughput or latency
FLASHLIGHT Telemetry
  • Flexible statistics counters monitor forwarding decisions, security actions, and traffic distribution. Flex Stats provide visibility far beyond MIB counters by enabling fine-grained counting of data-plane metrics such as table lookups, prefix hits, routes, tunnel terminations, and tunnel initiations.
  • Buffer and queue forensics encompass a suite of telemetry tools that monitor and capture anomalies triggered by customer-focused visibility attributes, providing policy-based alerts rather than a bombardment of false positives.
  • Flow Tracker tracks flows in the network and provides flow-centric insights such as packets per flow, flow rates, and flow peers.
  • TERALYNX can add per-packet ingress and egress timestamps to live traffic at line rate. Timestamps can also be included as metadata in telemetry packets, enabling an accurate view of per-hop network latencies (see the sketch after this list).
  • Marvell Path Telemetry (IPT) auto-generates probe packets, triggered by flexible scope attributes that identify only packets of interest in live traffic, and appends extensive telemetry information to them.
  • TERALYNX offers extensive sFlow and flexible mirroring capabilities. Users may forward copies to a wide collector set to efficiently load-balance high-bandwidth monitored traffic.
  • TERALYNX ensures there are no silent packet drops in the device: packet drops at all stages of the packet-processing pipeline are counted.
  • In-Band Network Telemetry (P4-INT) helps construct end-to-end flow profiles, providing real-time telemetry on application behavior in the network.
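
As an illustration of how such per-hop timestamps might be consumed off-box, the minimal Python sketch below derives per-switch residence times, inter-hop link delays, and end-to-end latency from a list of (device, ingress, egress) timestamp records. The record layout, field names, and values are assumptions for illustration only, not the actual FLASHLIGHT or INT report encoding, and synchronized clocks across hops are assumed.

```python
# Minimal sketch: derive per-hop and end-to-end latency from telemetry timestamps.
# The record layout (device, ingress_ns, egress_ns) is an assumed, simplified format
# for illustration only -- not the actual FLASHLIGHT/INT report encoding.

from typing import List, Tuple

HopRecord = Tuple[str, int, int]  # (device name, ingress timestamp ns, egress timestamp ns)

def per_hop_latency(path: List[HopRecord]) -> dict:
    """Return per-device residence time, inter-hop link delay, and end-to-end latency in ns."""
    report = {}
    for device, ingress_ns, egress_ns in path:
        report[device] = egress_ns - ingress_ns          # time spent inside the switch
    # Link delay between consecutive hops: next ingress minus previous egress
    for (dev_a, _, egress_a), (dev_b, ingress_b, _) in zip(path, path[1:]):
        report[f"{dev_a}->{dev_b}"] = ingress_b - egress_a
    report["end_to_end"] = path[-1][2] - path[0][1]
    return report

if __name__ == "__main__":
    # Hypothetical three-hop path with nanosecond timestamps from synchronized clocks
    example = [("leaf1", 1_000, 1_380), ("spine1", 2_100, 2_460), ("leaf2", 3_200, 3_590)]
    for key, value in per_hop_latency(example).items():
        print(f"{key}: {value} ns")
```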
Performance
  • 48x 25/10GbE SFP28 + 8x 100/50/40GbE QSFP28 ports in 1 RU
    Up to 80x 25/10G SFP28 ports via break-out cables
  • 2.4Tbps Marvell Teralynx IVM 55200 (see the aggregate-bandwidth check after this list)
  • Intel® Pentium® D-1508 dual-core processor for application deployment
  • 16GB of DDR4 memory
  • Configurable pipeline latency enabling sub-400 ns operation
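
As a quick sanity check of these figures (an illustrative calculation only, using the port counts above), the aggregate front-panel bandwidth can be compared with the 2.4Tbps switching capacity:

```python
# Aggregate front-panel bandwidth vs. switching capacity, per direction, in Gbps.
sfp28_ports, sfp28_speed_gbps = 48, 25     # 48x 25GbE SFP28
qsfp28_ports, qsfp28_speed_gbps = 8, 100   # 8x 100GbE QSFP28

front_panel_gbps = sfp28_ports * sfp28_speed_gbps + qsfp28_ports * qsfp28_speed_gbps
switching_capacity_gbps = 2400             # 2.4Tbps Teralynx silicon

print(front_panel_gbps)                             # 2000
print(front_panel_gbps <= switching_capacity_gbps)  # True: all ports can run at line rate
```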
Reliable hardware platform
  • Redundant 600W 1+1 power
  • Redundant N+1 cooling
  • A BMC (Baseboard Management Controller) enables remote switch power control and provides health monitoring of temperature, power status, and cooling fans (see the ipmitool sketch after this list).
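
For illustration, standard IPMI tooling can manage such a BMC over the network. The Python sketch below wraps the stock ipmitool client; the BMC address and credentials are placeholders, and it assumes ipmitool is installed and IPMI-over-LAN is enabled on the management port.

```python
# Minimal sketch: query switch health and control power through the BMC with ipmitool.
# Assumes ipmitool is installed and IPMI-over-LAN is enabled; host/credentials are placeholders.

import subprocess

BMC_HOST = "192.0.2.10"      # placeholder BMC address
BMC_USER = "admin"           # placeholder credentials
BMC_PASS = "admin"

def ipmi(*args: str) -> str:
    """Run an ipmitool command against the BMC over the lanplus interface."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

if __name__ == "__main__":
    print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is on"
    print(ipmi("sdr", "elist"))                 # temperature, fan, and PSU sensor readings
    # ipmi("chassis", "power", "cycle")         # uncomment to remotely power-cycle the switch
```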
Network OS (NOS) options
  • Open Network Linux (ONL) is a Linux distribution for "bare metal" switches, that is, network forwarding devices built from commodity components. ONL uses ONIE to install onto on-board flash memory. Open Network Linux is part of the Open Compute Project and is a component in a growing collection of open-source and commercial projects.
  • Microsoft SONiC is a collection of networking software components required for a fully functional L3 device. It is designed to meet the requirements of a cloud data center and is fully open-sourced at OCP.
Specification
Ports: 48x 10/25GbE SFP28 + 8x 100/40GbE QSFP28 ports in 1 RU
  Up to 80x 25/10G SFP28 ports via break-out cables
  1x RJ-45 out-of-band (10/100/1000) management port
  1x RJ-45 console port (RS-232)
  1x USB port
Front I/O: Power LED, System status LED, PSU1 status LED, PSU2 status LED, Fan status LED, Airflow direction LED, Location LED, Console LED
Performance: Switching silicon: 2.4Tbps Marvell Teralynx IVM 55200
  Latency: <400 ns (PHY-less)
  Packet buffer: 45MB
  Intel® Pentium® D-1508 CPU
  16GB DDR4 ECC memory
  128GB M.2 SSD
Management: ASPEED AST2520 BMC, IPMI 2.0, shared Ethernet port
Power: 600W 1+1 redundant PSUs, 80 PLUS Platinum, 100~240V AC / 50~60Hz
  800W 1+1 -40V~-60V DC redundant PSUs (option)
  Typical power: 300W / maximum power: 480W (with optics)
Cooling: 4 N+1 redundant fans, front-to-back airflow
Dimensions (D x W x H): 1U, 470 x 440 x 44 mm; rackmount kit included
Environment: Operating temperature: 0~40°C; operating humidity: 20~90% relative humidity (non-condensing)
Warranty: 3 years
EMC and safety: FCC, CE Declaration of Conformity, Reduction of Hazardous Substances (RoHS) 6
Compatible NOS: ONIE bootloader, Open Network Linux, SONiC (Software for Open Networking in the Cloud)
Netberg SONiC: SONiC.202012 release with the following feature set (a minimal ConfigDB sketch follows the list):

* BGP
* ECMP
* LAG
* LLDP
* QoS - ECN
* QoS - RDMA
* Priority Flow Control
* WRED
* COS
* SNMP
* Syslog
* Sysdump
* NTP
* COPP
* DHCP Relay Agent
* SONiC to SONiC upgrade
* One Image
* VLAN
* ACL permit/deny
* IPv6
* Tunnel Decap
* Mirroring
* Port Speed Setting
* BGP Graceful restart helper
* BGP MP
* Fast Reload
* PFC WD
* TACACS+
* MAC Aging
* LACP
* MTU Setting
* VLAN Trunk
* IPv6 ACL
* BGP/Neighbor-down fib-accelerate
* Port breakout
* Dynamic ACL Upgrade
* SWSS Unit Test Framework (best effort)
* ConfigDB Framework
* Critical Resource Monitoring
* gRPC
* Sensor transceiver monitoring
* LLDP extended MIB: lldpRemTable, lldpLocPortTable, lldpRemManAddrTable, lldpLocManAddrTable, lldpLocalSystemData
* Warm Reboot
* Incremental Config (IP, LAG, Port shut/unshut)
* Asymmetric PFC
* PFC Watermark
* Routing Stack Graceful Restart
* L3 VXLAN
* FRR as default routing stack
* Everflow V2 - IPv4/IPv6 Portion 2.0
* L3 RIF counter support
* BGP-EVPN support (type 5)
* Mgmt VRF
* sFlow
* VRF
* Sub-port support
* Egress mirroring and ACL action support
* Configurable drop counters
* HW resource monitor
* NAT
* Egress shaping (port, queue)
* Port Mirroring
* Proxy ARP
* Consistent ECMP (fine grain ECMP)
* EVPN/VXLAN
* CoPP Config/Management
* Dynamic headroom calculation (RoCEv2)
* FRR BGP NBI
* Streaming telemetry
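
As an illustration of the ConfigDB-driven configuration model listed above, the minimal Python sketch below emits a config_db.json fragment that creates a VLAN with one untagged member and an SVI address. The table and key names follow the commonly documented SONiC ConfigDB schema; port names and addresses are placeholders, and the exact schema accepted by a given release should be checked against its documentation.

```python
# Minimal sketch: generate a config_db.json fragment for SONiC (VLAN + untagged member + SVI IP).
# Table/key layout follows the commonly documented ConfigDB schema; values are placeholders.

import json

fragment = {
    "VLAN": {
        "Vlan100": {"vlanid": "100"}
    },
    "VLAN_MEMBER": {
        "Vlan100|Ethernet0": {"tagging_mode": "untagged"}
    },
    "VLAN_INTERFACE": {
        "Vlan100": {},
        "Vlan100|192.0.2.1/24": {}
    },
}

if __name__ == "__main__":
    # Merge this fragment into /etc/sonic/config_db.json (or apply it with `config load`)
    print(json.dumps(fragment, indent=4))
```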