Ericsson Lab


The Ericsson SoftFIRE testbed is part of the Ericsson RMED CloudLab, located in Rome. The Lab's scope is to provide hands-on experience and competence build-up, show specific and concrete proof points related to cloud benefits, run customer demos on specific products, and demonstrate how issues and concerns can be managed while mitigating the risks.

The main activities performed are:

  • Standard Customer Demos
  • Deep Dives on Customer-specific requests
  • PoCs on Customer premises
  • Fully Customized PoCs on Customer premises
  • Validation and Certification on Customer-specific stacks / solutions

The Ericsson Cloud Lab is a flexible environment where different hardware configurations can be combined to support different delivery policies. The Lab can also be adapted to meet the requirements of Ericsson platforms and commercial products.


Ericsson Testbed Architecture for SoftFIRE

Ericsson is sharing an experimental infrastructure (the Ericsson SoftFIRE testbed) to be interconnected within the SoftFIRE project.

The scope of the Ericsson testbed in the SoftFIRE project is to provide an OpenStack Liberty deployment that delivers Infrastructure as a Service (IaaS) for creating and managing large groups of virtual private servers in a data center.
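
As an illustration of how an experimenter could consume this IaaS, the sketch below boots a virtual machine with the Python openstacksdk client; the cloud entry, image, flavor and network names are placeholders and would have to match the actual testbed catalogue:

    import openstack

    # Credentials are assumed to come from a clouds.yaml entry named "softfire" (hypothetical)
    conn = openstack.connect(cloud='softfire')

    # Placeholder image, flavor and network names
    image = conn.compute.find_image('ubuntu-16.04')
    flavor = conn.compute.find_flavor('m1.small')
    network = conn.network.find_network('private')

    # Boot a VM through Nova and wait until it becomes ACTIVE
    server = conn.compute.create_server(
        name='demo-vm',
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.status)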

The architecture is based on Dell hardware, as shown in the table below:

Description (single node configuration) | Qty
Dell PowerEdge R620 | 1
Intel Xeon E5-2660v2 2.2GHz, 25M Cache, 8.0GT/s QPI, Turbo, HT, 10C, 95W, Max Mem 1866MHz | 1
Intel Xeon E5-2660v2 2.2GHz, 25M Cache, 8.0GT/s QPI, Turbo, HT, 10C, 95W, Max Mem 1866MHz, 2nd Proc | 1
PowerEdge R620 Shipping – 4/8 Drive Chassis, EMEA1 (English/French/German/Spanish/Russian/Hebrew) | 1
Chassis with up to 8 Hard Drives and 3 PCIe Slots, Low Profile PCI Cards Only | 1
Bezel – 4/8 Drive Chassis | 1
Performance Optimized | 1
1866MT/s RDIMMs | 1
16GB RDIMM, 1866MT/s, Standard Volt, Dual Rank, x4 Data Width | 8
Heat Sink for PowerEdge R620 | 2
DIMM Blanks for Systems with 2 Processors | 1
400GB, SSD SAS Value SLC 6Gbps, 2.5in Hard Drive (Hot-plug) | 2
PERC H310 Integrated RAID Controller | 1
Active Power Controller BIOS Setting | 1
DVD ROM, SATA, Internal | 1
Dual, Hot-plug, Redundant Power Supply (1+1), 750W | 1
2M Rack Power Cord C13/C14 12A | 2
Cable for Mini PERC Cards for Chassis with up to 8 Hard Drives | 1
Intel X520 DP 10Gb DA/SFP+ Server Adapter, Low Profile | 2
Intel Ethernet i350 QP 1Gb Network Daughter Card | 1
PowerEdge R620 Motherboard, TPM | 1
2/4-Post Static Rails | 1
RAID 1 for H710p, H710, H310 Controllers | 1
iDRAC7 Enterprise | 1
10Gb LR SFP+ modules | 2


Three servers are reserved for the project: one controller and two compute nodes.

From a storage point of view, all servers are equipped with two 400 GB disks in mirroring for the OS and two additional 2 TB disks, also in mirroring.

In the controller, 1 TB is used for Cinder and 1 TB for Glance; in each compute node, 2 TB are reserved for Nova.
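
As a sketch of how the Cinder pool on the controller is consumed, the snippet below requests a persistent volume through openstacksdk; the cloud entry, volume name and size are illustrative only:

    import openstack

    conn = openstack.connect(cloud='softfire')  # assumed clouds.yaml entry (hypothetical)

    # Request a 100 GB persistent volume from the controller's 1 TB Cinder pool
    volume = conn.block_storage.create_volume(name='demo-data', size=100)
    volume = conn.block_storage.wait_for_status(volume, status='available')
    print(volume.id, volume.size)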

ERTestbed1

The installed OpenStack has a classical modular architecture whose main components are:

  • Nova – provides virtual machines (VMs) upon demand.
  • Cinder – provides persistent block storage to guest VMs.
  • Glance – provides a catalog and repository for virtual disk images.
  • Keystone – provides authentication and authorization for all the OpenStack services.
  • Horizon – provides a modular web-based user interface (UI) for OpenStack services.
  • Neutron – provides network connectivity-as-a-service between interface devices managed by OpenStack services.
  • Ceilometer – provides a single point of contact for billing systems.

Ceilometer, however, is not integrated in the overall picture, since Zabbix takes over its role.
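
As a minimal sketch of how these components are reached in practice, the following snippet (again using openstacksdk with an assumed clouds.yaml entry named "softfire") authenticates through Keystone and prints the service catalogue, where Nova, Cinder, Glance, Neutron and the other registered services appear:

    import openstack

    # Credentials are assumed to come from a clouds.yaml entry named "softfire" (hypothetical)
    conn = openstack.connect(cloud='softfire')

    # Keystone authenticates the session; listing the registered services
    # (typically requires admin credentials) shows the modular components above
    for service in conn.identity.services():
        print(service.name, service.type)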

ERTestbed2


Network Design

The network design of the proposed architecture is composed of four networks:

  • IDRAC network: the console network;
  • Tunnel network: used for internal communication between the servers;
  • Management network: used for management and the APIs;
  • Floating network: used to expose VMs externally (see the floating-IP sketch after this list).
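
A minimal sketch of the last point, assuming a VM named demo-vm already exists and that the external network is registered as "floating" (both names are placeholders), allocates a floating IP and attaches it to the VM:

    import openstack

    conn = openstack.connect(cloud='softfire')  # assumed clouds.yaml entry (hypothetical)

    server = conn.compute.find_server('demo-vm')      # VM booted earlier (placeholder name)
    ext_net = conn.network.find_network('floating')   # assumed name of the external network

    # Allocate a floating IP on the external network and attach it to the VM
    fip = conn.network.create_ip(floating_network_id=ext_net.id)
    conn.compute.add_floating_ip_to_server(server, fip.floating_ip_address)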

ERTestbed3